title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Galerkin Methods for Complementarity Problems and Variational
Inequalities | cs.LG cs.AI math.OC | Complementarity problems and variational inequalities arise in a wide variety
of areas, including machine learning, planning, game theory, and physical
simulation. In all of these areas, to handle large-scale problem instances, we
need fast approximate solution methods. One promising idea is Galerkin
approximation, in which we search for the best answer within the span of a
given set of basis functions. Bertsekas proposed one possible Galerkin method
for variational inequalities. However, this method can exhibit two problems in
practice: its approximation error is worse than might be expected based on the
ability of the basis to represent the desired solution, and each iteration
requires a projection step that is not always easy to implement efficiently.
So, in this paper, we present a new Galerkin method with improved behavior: our
new error bounds depend directly on the distance from the true solution to the
subspace spanned by our basis, and the only projections we require are onto the
feasible region or onto the span of our basis.
| Geoffrey J. Gordon | null | 1306.4753 | null | null |
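The projection primitives this abstract refers to are easy to sketch. Below is a minimal, illustrative example (not Gordon's algorithm): it solves a synthetic affine variational inequality over a box with the basic projected iteration and shows the two projections the abstract mentions, onto the feasible region and onto the span of a basis; all data and the basis are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # positive definite => F(x) = Ax + b is strongly monotone
b = rng.standard_normal(n)

def proj_box(x, lo=-1.0, hi=1.0):      # projection onto the feasible region C = [lo, hi]^n
    return np.clip(x, lo, hi)

Phi, _ = np.linalg.qr(rng.standard_normal((n, 5)))   # orthonormal basis of a 5-dim subspace

def proj_span(x):                      # projection onto span(Phi), the other primitive
    return Phi @ (Phi.T @ x)

# Basic projection method for the VI: x <- Pi_C(x - gamma * F(x)).
x = np.zeros(n)
gamma = 1.0 / np.linalg.norm(A, 2)     # step size below 1/L for the Lipschitz constant L
for _ in range(2000):
    x = proj_box(x - gamma * (A @ x + b))

print("fixed-point residual:", np.linalg.norm(x - proj_box(x - gamma * (A @ x + b))))
print("distance from solution to span(Phi):", np.linalg.norm(x - proj_span(x)))
```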
From-Below Approximations in Boolean Matrix Factorization: Geometry and
New Algorithm | cs.NA cs.LG | We present new results on Boolean matrix factorization and a new algorithm
based on these results. The results emphasize the significance of
factorizations that provide from-below approximations of the input matrix.
While the previously proposed algorithms do not consider the possibly different
significance of different matrix entries, our results help measure such
significance and suggest where to focus when computing factors. An experimental
evaluation of the new algorithm on both synthetic and real data demonstrates
its good performance in terms of coverage by the first k factors as well
as a small number of factors needed for exact decomposition and indicates that
the algorithm outperforms the available ones in these terms. We also propose
future research topics.
| Radim Belohlavek, Martin Trnecka | 10.1016/j.jcss.2015.06.002 | 1306.4905 | null | null |
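The notion of a from-below approximation is simple to state in code. The following sketch (with made-up Boolean factors, not the paper's algorithm) checks the from-below property and the coverage achieved by the first k factors:

```python
import numpy as np

def bool_product(A, B):
    """Boolean matrix product: (A o B)_ij = OR_k (A_ik AND B_kj)."""
    return ((A.astype(int) @ B.astype(int)) > 0).astype(int)

def is_from_below(I, A, B):
    """A o B approximates I from below iff it never turns a 0 of I into a 1."""
    return bool(np.all(bool_product(A, B) <= I))

def coverage(I, A, B, k=None):
    """Fraction of the 1s of I covered by the first k factors."""
    if k is not None:
        A, B = A[:, :k], B[:k, :]
    return bool_product(A, B)[I == 1].mean()

I = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])   # toy input matrix
A = np.array([[1, 0], [1, 1], [0, 1]])            # object-factor matrix
B = np.array([[1, 1, 0], [0, 1, 1]])              # factor-attribute matrix
print(is_from_below(I, A, B), coverage(I, A, B, k=1), coverage(I, A, B))
```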
Machine Teaching for Bayesian Learners in the Exponential Family | cs.LG | What if there is a teacher who knows the learning goal and wants to design
good training data for a machine learner? We propose an optimal teaching
framework aimed at learners who employ Bayesian models. Our framework is
expressed as an optimization problem over teaching examples that balance the
future loss of the learner and the effort of the teacher. This optimization
problem is in general hard. In the case where the learner employs conjugate
exponential family models, we present an approximate algorithm for finding the
optimal teaching set. Our algorithm optimizes the aggregate sufficient
statistics, then unpacks them into actual teaching examples. We give several
examples to illustrate our framework.
| Xiaojin Zhu | null | 1306.4947 | null | null |
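For a concrete feel of "optimize the aggregate sufficient statistics, then unpack them into examples", here is a toy Beta-Bernoulli instance (illustrative only; the loss/effort trade-off is simplified away by fixing the number of examples, and all names are made up):

```python
# Learner: Beta(alpha, beta) prior on a coin's bias. The teacher wants the
# learner's posterior mean to land on theta_star using n examples.
alpha, beta_ = 2.0, 2.0
theta_star = 0.8
n = 10                                    # teaching effort, fixed here for simplicity

# Step 1: optimize the aggregate sufficient statistic (number of heads s) so that
# the posterior mean (alpha + s) / (alpha + beta + n) lands as close to theta_star as possible.
s = round(theta_star * (alpha + beta_ + n) - alpha)
s = max(0, min(n, s))

# Step 2: unpack the statistic into actual teaching examples.
examples = [1] * s + [0] * (n - s)
posterior_mean = (alpha + s) / (alpha + beta_ + n)
print(examples, posterior_mean)           # s = 9 -> posterior mean 11/14 ~ 0.786
```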
Class Proportion Estimation with Application to Multiclass Anomaly
Rejection | stat.ML cs.LG | This work addresses two classification problems that fall under the heading
of domain adaptation, wherein the distributions of training and testing
examples differ. The first problem studied is that of class proportion
estimation, which is the problem of estimating the class proportions in an
unlabeled testing data set given labeled examples of each class. Compared to
previous work on this problem, our approach has the novel feature that it does
not require labeled training data from one of the classes. This property allows
us to address the second domain adaptation problem, namely, multiclass anomaly
rejection. Here, the goal is to design a classifier that has the option of
assigning a "reject" label, indicating that the instance did not arise from a
class present in the training data. We establish consistent learning strategies
for both of these domain adaptation problems, which to our knowledge are the
first of their kind. We also implement the class proportion estimation
technique and demonstrate its performance on several benchmark data sets.
| Tyler Sanderson and Clayton Scott | null | 1306.5056 | null | null |
Song-based Classification techniques for Endangered Bird Conservation | cs.LG | The work presented in this paper is part of a global framework whose long-term
goal is to design a wireless sensor network able to support the observation of
a population of endangered birds. We present the first stage, in which we have
conducted a knowledge discovery approach on a sample of acoustic data. We use
MFCC features extracted from bird songs and exploit two knowledge discovery
techniques: one that relies on clustering-based approaches and highlights the
homogeneity in the songs of the species, and another, based on predictive
modeling, that demonstrates the good performance of various machine learning
techniques for the identification process. The knowledge elicited provides
promising results that motivate a widespread study and guidelines for designing
a first version of the automatic approach to data collection based on acoustic
sensors.
| Erick Stattner and Wilfried Segretier and Martine Collard and Philippe
Hunel and Nicolas Vidot | null | 1306.5349 | null | null |
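A pipeline of the kind described, MFCC features feeding a standard classifier, can be sketched as follows (a hypothetical setup, not the authors' code: the file names are placeholders and a random forest stands in for the "various machine learning techniques"):

```python
import numpy as np
import librosa                                        # used here for MFCC extraction
from sklearn.ensemble import RandomForestClassifier

def song_features(path):
    """Summarise a recording by the mean and std of its MFCC trajectory."""
    y, sr = librosa.load(path)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholders for a labelled collection of bird-song clips.
paths = ["call_001.wav", "call_002.wav", "call_003.wav"]
labels = ["species_a", "species_b", "species_a"]

X = np.array([song_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
# With a realistic number of clips, performance would be assessed by cross-validation.
```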
A Statistical Perspective on Algorithmic Leveraging | stat.ME cs.LG stat.ML | One popular method for dealing with large-scale data sets is sampling. For
example, by using the empirical statistical leverage scores as an importance
sampling distribution, the method of algorithmic leveraging samples and
rescales rows/columns of data matrices to reduce the data size before
performing computations on the subproblem. This method has been successful in
improving computational efficiency of algorithms for matrix problems such as
least-squares approximation, least absolute deviations approximation, and
low-rank matrix approximation. Existing work has focused on algorithmic issues
such as worst-case running times and numerical issues associated with providing
high-quality implementations, but none of it addresses statistical aspects of
this method.
In this paper, we provide a simple yet effective framework to evaluate the
statistical properties of algorithmic leveraging in the context of estimating
parameters in a linear regression model with a fixed number of predictors. We
show that from the statistical perspective of bias and variance, neither
leverage-based sampling nor uniform sampling dominates the other. This result
is particularly striking, given the well-known result that, from the
algorithmic perspective of worst-case analysis, leverage-based sampling
provides uniformly superior worst-case algorithmic results, when compared with
uniform sampling. Based on these theoretical results, we propose and analyze
two new leveraging algorithms. A detailed empirical evaluation of existing
leverage-based methods as well as these two new methods is carried out on both
synthetic and real data sets. The empirical results indicate that our theory is
a good predictor of practical performance of existing and new leverage-based
algorithms and that the new algorithms achieve improved performance.
| Ping Ma and Michael W. Mahoney and Bin Yu | null | 1306.5362 | null | null |
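The core sampling step of algorithmic leveraging is compact. A minimal sketch on synthetic data (sampling with replacement and importance-weight rescaling; not the paper's new algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 10000, 10, 500
X = rng.standard_normal((n, p)) * rng.exponential(1.0, size=(n, 1))  # rows of unequal influence
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

Q, _ = np.linalg.qr(X)                    # thin QR: leverage score h_i = ||Q_i,.||^2
lev = np.sum(Q**2, axis=1)
pi = lev / lev.sum()                      # leverage-based importance sampling distribution

idx = rng.choice(n, size=r, p=pi)         # sample r rows with replacement...
w = 1.0 / np.sqrt(r * pi[idx])            # ...and rescale them by 1/sqrt(r * pi_i)
beta_lev = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)[0]

beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
print("||beta_lev - beta_full|| =", np.linalg.norm(beta_lev - beta_full))
```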
Model Reframing by Feature Context Change | cs.LG | The feature space (including both input and output variables) characterises a
data mining problem. In predictive (supervised) problems, the quality and
availability of features determines the predictability of the dependent
variable, and the performance of data mining models in terms of
misclassification or regression error. Good features, however, are usually
difficult to obtain. It is common for instances to come with missing values,
either because the actual value for a given attribute was not available or
because it was too expensive to obtain. This is usually interpreted as a utility or
cost-sensitive learning dilemma, in this case between misclassification (or
regression error) costs and attribute tests costs. Both misclassification cost
(MC) and test cost (TC) can be integrated into a single measure, known as joint
cost (JC). We introduce methods and plots (such as the so-called JROC plots)
that can work with any off-the-shelf predictive technique, including ensembles,
such that we re-frame the model to use the appropriate subset of attributes
(the feature configuration) during deployment time. In other words, models are
trained with the available attributes (once and for all) and then deployed by
setting missing values on the attributes that are deemed ineffective for
reducing the joint cost. As the number of feature configuration combinations
grows exponentially with the number of features we introduce quadratic methods
that are able to approximate the optimal configuration and model choices, as
shown by the experimental results.
| Celestine-Periale Maguedong-Djoumessi | null | 1306.5487 | null | null |
Deep Learning by Scattering | cs.LG stat.ML | We introduce general scattering transforms as mathematical models of deep
neural networks with l2 pooling. Scattering networks iteratively apply complex
valued unitary operators, and the pooling is performed by a complex modulus. An
expected scattering defines a contractive representation of a high-dimensional
probability distribution, which preserves its mean-square norm. We show that
unsupervised learning can be cast as an optimization of the space contraction
to preserve the volume occupied by unlabeled examples, at each layer of the
network. Supervised learning and classification are performed with an averaged
scattering, which provides scattering estimations for multiple classes.
| St\'ephane Mallat and Ir\`ene Waldspurger | null | 1306.5532 | null | null |
Correlated random features for fast semi-supervised learning | stat.ML cs.LG | This paper presents Correlated Nystrom Views (XNV), a fast semi-supervised
algorithm for regression and classification. The algorithm draws on two main
ideas. First, it generates two views consisting of computationally inexpensive
random features. Second, XNV applies multiview regression using Canonical
Correlation Analysis (CCA) on unlabeled data to bias the regression towards
useful features. It has been shown that, if the views contain accurate
estimators, CCA regression can substantially reduce variance with a minimal
increase in bias. Random views are justified by recent theoretical and
empirical work showing that regression with random features closely
approximates kernel regression, implying that random views can be expected to
contain accurate estimators. We show that XNV consistently outperforms a
state-of-the-art algorithm for semi-supervised learning: substantially
improving predictive performance and reducing the variability of performance on
a wide variety of real-world datasets, whilst also reducing runtime by orders
of magnitude.
| Brian McWilliams, David Balduzzi and Joachim M. Buhmann | null | 1306.5554 | null | null |
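A simplified version of the two-view idea can be assembled from standard components. The sketch below substitutes random Fourier features for the Nystrom views and plain ridge regression for XNV's CCA-penalized regression, so it illustrates the structure rather than reproducing the algorithm:

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler    # random Fourier features
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(1000)
labeled = rng.choice(1000, size=50, replace=False)     # few labels, much unlabeled data

# Two "views" made of computationally inexpensive random features.
view1 = RBFSampler(gamma=0.5, n_components=200, random_state=1).fit_transform(X)
view2 = RBFSampler(gamma=0.5, n_components=200, random_state=2).fit_transform(X)

# CCA on all (unlabeled) data biases the representation toward correlated directions.
cca = CCA(n_components=20).fit(view1, view2)
Z = cca.transform(view1)

model = Ridge(alpha=1.0).fit(Z[labeled], y[labeled])   # regression on the few labels
print("R^2 on all points:", model.score(Z, y))
```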
Synthesizing Manipulation Sequences for Under-Specified Tasks using
Unrolled Markov Random Fields | cs.RO cs.AI cs.LG | Many tasks in human environments require performing a sequence of navigation
and manipulation steps involving objects. In unstructured human environments,
the location and configuration of the objects involved often change in
unpredictable ways. This requires a high-level planning strategy that is robust
and flexible in an uncertain environment. We propose a novel dynamic planning
strategy, which can be trained from a set of example sequences. High level
tasks are expressed as a sequence of primitive actions or controllers (with
appropriate parameters). Our score function, based on Markov Random Field
(MRF), captures the relations between environment, controllers, and their
arguments. By expressing the environment using sets of attributes, the approach
generalizes well to unseen scenarios. We train the parameters of our MRF using
a maximum margin learning method. We provide a detailed empirical validation of
our overall framework demonstrating successful plan strategies for a variety of
tasks.
| Jaeyong Sung, Bart Selman, Ashutosh Saxena | 10.1109/IROS.2014.6942972 | 1306.5707 | null | null |
Fourier PCA and Robust Tensor Decomposition | cs.LG cs.DS stat.ML | Fourier PCA is Principal Component Analysis of a matrix obtained from higher
order derivatives of the logarithm of the Fourier transform of a distribution.
We make this method algorithmic by developing a tensor
decomposition method for a pair of tensors sharing the same vectors in rank-$1$
decompositions. Our main application is the first provably polynomial-time
algorithm for underdetermined ICA, i.e., learning an $n \times m$ matrix $A$
from observations $y=Ax$ where $x$ is drawn from an unknown product
distribution with arbitrary non-Gaussian components. The number of component
distributions $m$ can be arbitrarily higher than the dimension $n$ and the
columns of $A$ only need to satisfy a natural and efficiently verifiable
nondegeneracy condition. As a second application, we give an alternative
algorithm for learning mixtures of spherical Gaussians with linearly
independent means. These results also hold in the presence of Gaussian noise.
| Navin Goyal, Santosh Vempala and Ying Xiao | null | 1306.5825 | null | null |
Design of an Agent for Answering Back in Smart Phones | cs.AI cs.HC cs.LG | The objective of the paper is to design an agent which provides efficient
response to the caller when a call goes unanswered in smartphones. The agent
provides responses through text messages, email etc stating the most likely
reason as to why the callee is unable to answer a call. Responses are composed
taking into consideration the importance of the present call and the situation
the callee is in at the moment, like driving, sleeping, or being at work. The
agent makes decisions in the composition of response messages based on the
patterns it has come across in the learning environment. Initially the user
helps the agent to compose response messages. The agent associates each message
with the percept it receives with respect to the environment the callee is in.
The user may thereafter either choose to make the response system automatic or
choose to receive suggestions from the agent for response messages and confirm
what is to be sent to the caller.
| Sandeep Venkatesh, Meera V Patil, Nanditha Swamy | null | 1306.5884 | null | null |
A Randomized Nonmonotone Block Proximal Gradient Method for a Class of
Structured Nonlinear Programming | math.OC cs.LG cs.NA math.NA stat.ML | We propose a randomized nonmonotone block proximal gradient (RNBPG) method
for minimizing the sum of a smooth (possibly nonconvex) function and a
block-separable (possibly nonconvex nonsmooth) function. At each iteration,
this method randomly picks a block according to any prescribed probability
distribution and typically solves several associated proximal subproblems that
usually have a closed-form solution, until a certain progress on objective
value is achieved. In contrast to the usual randomized block coordinate descent
method [23,20], our method has a nonmonotone flavor and uses variable stepsizes
that can partially utilize the local curvature information of the smooth
component of objective function. We show that any accumulation point of the
solution sequence of the method is a stationary point of the problem {\it
almost surely} and the method is capable of finding an approximate stationary
point with high probability. We also establish a sublinear rate of convergence
for the method in terms of the minimal expected squared norm of certain
proximal gradients over the iterations. When the problem under consideration is
convex, we show that the expected objective values generated by RNBPG converge
to the optimal value of the problem. Under some assumptions, we further
establish a sublinear and linear rate of convergence on the expected objective
values generated by a monotone version of RNBPG. Finally, we conduct some
preliminary experiments to test the performance of RNBPG on the
$\ell_1$-regularized least-squares problem and a dual SVM problem in machine
learning. The computational results demonstrate that our method substantially
outperforms the randomized block coordinate {\it descent} method with fixed or
variable stepsizes.
| Zhaosong Lu and Lin Xiao | null | 1306.5918 | null | null |
Understanding the Predictive Power of Computational Mechanics and Echo
State Networks in Social Media | cs.SI cs.LG physics.soc-ph stat.AP stat.ML | There is a large amount of interest in understanding users of social media in
order to predict their behavior in this space. Despite this interest, user
predictability in social media is not well-understood. To examine this
question, we consider a network of fifteen thousand users on Twitter over a
seven week period. We apply two contrasting modeling paradigms: computational
mechanics and echo state networks. Both methods attempt to model the behavior
of users on the basis of their past behavior. We demonstrate that the behavior
of users on Twitter can be well-modeled as processes with self-feedback. We
find that the two modeling approaches perform very similarly for most users,
but that they differ in performance on a small subset of the users. By
exploring the properties of these performance-differentiated users, we
highlight the challenges faced in applying predictive models to dynamic social
data.
| David Darmon, Jared Sylvester, Michelle Girvan, William Rand | null | 1306.6111 | null | null |
Scaling Up Robust MDPs by Reinforcement Learning | cs.LG stat.ML | We consider large-scale Markov decision processes (MDPs) with parameter
uncertainty, under the robust MDP paradigm. Previous studies showed that robust
MDPs, based on a minimax approach to handle uncertainty, can be solved using
dynamic programming for small to medium sized problems. However, due to the
"curse of dimensionality", MDPs that model real-life problems are typically
prohibitively large for such approaches. In this work we employ a reinforcement
learning approach to tackle this planning problem: we develop a robust
approximate dynamic programming method based on a projected fixed point
equation to approximately solve large scale robust MDPs. We show that the
proposed method provably succeeds under certain technical conditions, and
demonstrate its effectiveness through simulation of an option pricing problem.
To the best of our knowledge, this is the first attempt to scale up the robust
MDPs paradigm.
| Aviv Tamar, Huan Xu, Shie Mannor | null | 1306.6189 | null | null |
Solving Relational MDPs with Exogenous Events and Additive Rewards | cs.AI cs.LG | We formalize a simple but natural subclass of service domains for relational
planning problems with object-centered, independent exogenous events and
additive rewards, capturing, for example, problems in inventory control.
Focusing on this subclass, we present a new symbolic planning algorithm which
is the first algorithm that has explicit performance guarantees for relational
MDPs with exogenous events. In particular, under some technical conditions, our
planning algorithm provides a monotonic lower bound on the optimal value
function. To support this algorithm we present novel evaluation and reduction
techniques for generalized first order decision diagrams, a knowledge
representation for real-valued functions over relational world states. Our
planning algorithm uses a set of focus states, which serves as a training set,
to simplify and approximate the symbolic solution, and can thus be seen to
perform learning for planning. A preliminary experimental evaluation
demonstrates the validity of our approach.
| S. Joshi, R. Khardon, P. Tadepalli, A. Raghavan, A. Fern | null | 1306.6302 | null | null |
Traffic data reconstruction based on Markov random field modeling | stat.ML cond-mat.dis-nn cs.LG | We consider the traffic data reconstruction problem. Suppose we have the
traffic data of an entire city that are incomplete because some road data are
unobserved. The problem is to reconstruct the unobserved parts of the data. In
this paper, we propose a new method to reconstruct incomplete traffic data
collected from various traffic sensors. Our approach is based on Markov random
field modeling of road traffic. The reconstruction is achieved by using
a mean-field method and a machine learning method. We numerically verify the
performance of our method using realistic simulated traffic data for the real
road network of Sendai, Japan.
| Shun Kataoka, Muneki Yasuda, Cyril Furtlehner and Kazuyuki Tanaka | 10.1088/0266-5611/30/2/025003 | 1306.6482 | null | null |
A Survey on Metric Learning for Feature Vectors and Structured Data | cs.LG cs.AI stat.ML | The need for appropriate ways to measure the distance or similarity between
data is ubiquitous in machine learning, pattern recognition and data mining,
but handcrafting such good metrics for specific problems is generally
difficult. This has led to the emergence of metric learning, which aims at
automatically learning a metric from data and has attracted a lot of interest
in machine learning and related fields for the past ten years. This survey
paper proposes a systematic review of the metric learning literature,
highlighting the pros and cons of each approach. We pay particular attention to
Mahalanobis distance metric learning, a well-studied and successful framework,
but additionally present a wide range of methods that have recently emerged as
powerful alternatives, including nonlinear metric learning, similarity learning
and local metric learning. Recent trends and extensions, such as
semi-supervised metric learning, metric learning for histogram data and the
derivation of generalization guarantees, are also covered. Finally, this survey
addresses metric learning for structured data, in particular edit distance
learning, and attempts to give an overview of the remaining challenges in
metric learning for the years to come.
| Aur\'elien Bellet, Amaury Habrard and Marc Sebban | null | 1306.6709 | null | null |
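The central object of the survey, a Mahalanobis metric, reduces to a few lines: learning a PSD matrix M = L^T L is equivalent to learning a linear embedding L and using Euclidean distance afterwards. A minimal illustration (L is random here; a metric learner would fit it from data):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((2, 5))           # linear map to a 2-dim embedding space
M = L.T @ L                               # PSD by construction

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

x, y = rng.standard_normal(5), rng.standard_normal(5)
# The two quantities below agree: d_M(x, y) == ||L x - L y||_2.
print(mahalanobis(x, y, M), np.linalg.norm(L @ x - L @ y))
```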
Evaluation Measures for Hierarchical Classification: a unified view and
novel approaches | cs.AI cs.LG | Hierarchical classification addresses the problem of classifying items into a
hierarchy of classes. An important issue in hierarchical classification is the
evaluation of different classification algorithms, which is complicated by the
hierarchical relations among the classes. Several evaluation measures have been
proposed for hierarchical classification using the hierarchy in different ways.
This paper studies the problem of evaluation in hierarchical classification by
analyzing and abstracting the key components of the existing performance
measures. It also proposes two alternative generic views of hierarchical
evaluation and introduces two corresponding novel measures. The proposed
measures, along with the state-of-the art ones, are empirically tested on three
large datasets from the domain of text classification. The empirical results
illustrate the undesirable behavior of existing approaches and how the proposed
measures overcome most of these issues across a range of cases.
| Aris Kosmopoulos, Ioannis Partalas, Eric Gaussier, Georgios Paliouras,
Ion Androutsopoulos | 10.1007/s10618-014-0382-x | 1306.6802 | null | null |
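One common family of measures covered by such comparisons, ancestor-augmented hierarchical precision/recall/F1, fits in a few lines (a generic sketch, not the paper's novel measures; the hierarchy is a toy example):

```python
# "hierarchy" maps each class to its parent (None at the root).
hierarchy = {"dog": "mammal", "cat": "mammal", "mammal": "animal", "animal": None}

def ancestors(label):
    out = set()
    while label is not None:
        out.add(label)
        label = hierarchy[label]
    return out

def hierarchical_f1(true, pred):
    T, P = ancestors(true), ancestors(pred)
    precision = len(T & P) / len(P)
    recall = len(T & P) / len(T)
    return 2 * precision * recall / (precision + recall)

# Predicting "cat" for a true "dog" gets partial credit via the shared ancestors.
print(hierarchical_f1("dog", "cat"))      # 2/3: {mammal, animal} are shared
```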
Memory Limited, Streaming PCA | stat.ML cs.IT cs.LG math.IT | We consider streaming, one-pass principal component analysis (PCA), in the
high-dimensional regime, with limited memory. Here, $p$-dimensional samples are
presented sequentially, and the goal is to produce the $k$-dimensional subspace
that best approximates these points. Standard algorithms require $O(p^2)$
memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is
what the output itself requires. Memory (or storage) complexity is most
meaningful when understood in the context of computational and sample
complexity. Sample complexity for high-dimensional PCA is typically studied in
the setting of the {\em spiked covariance model}, where $p$-dimensional points
are generated from a population covariance equal to the identity (white noise)
plus a low-dimensional perturbation (the spike) which is the signal to be
recovered. It is now well-understood that the spike can be recovered when the
number of samples, $n$, scales proportionally with the dimension, $p$. Yet, all
algorithms that provably achieve this, have memory complexity $O(p^2)$.
Meanwhile, algorithms with memory-complexity $O(kp)$ do not have provable
bounds on sample complexity comparable to $p$. We present an algorithm that
achieves both: it uses $O(kp)$ memory (meaning storage of any kind) and is able
to compute the $k$-dimensional spike with $O(p \log p)$ sample-complexity --
the first algorithm of its kind. While our theoretical analysis focuses on the
spiked covariance model, our simulations show that our algorithm is successful
on much more general models for the data.
| Ioannis Mitliagkas, Constantine Caramanis, Prateek Jain | null | 1307.0032 | null | null |
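The flavour of an O(kp)-memory streaming algorithm can be conveyed with a block power method on spiked-covariance data (a generic sketch in the spirit of the abstract, not the paper's exact algorithm or constants):

```python
import numpy as np

rng = np.random.default_rng(0)
p, k, n_blocks, block = 200, 3, 50, 100
U = np.linalg.qr(rng.standard_normal((p, k)))[0]     # planted spike directions

Q = np.linalg.qr(rng.standard_normal((p, k)))[0]     # running estimate: only p*k numbers
for _ in range(n_blocks):
    Z = rng.standard_normal((block, k)) * 3.0        # signal along the spike
    Xb = Z @ U.T + rng.standard_normal((block, p))   # one block of streaming samples
    SQ = Xb.T @ (Xb @ Q) / block                     # (1/b) Xb^T Xb Q without forming a p x p matrix
    Q, _ = np.linalg.qr(SQ)                          # re-orthonormalize; state stays O(kp)

# Cosines of principal angles between recovered and planted subspaces (1.0 = aligned).
print(np.linalg.svd(U.T @ Q, compute_uv=False))
```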
Simple one-pass algorithm for penalized linear regression with
cross-validation on MapReduce | stat.ML cs.DC cs.LG | In this paper, we propose a one-pass algorithm on MapReduce for penalized
linear regression
\[f_\lambda(\alpha, \beta) = \|Y - \alpha\mathbf{1} - X\beta\|_2^2 +
p_{\lambda}(\beta)\] where $\alpha$ is the intercept, which can be omitted
depending on the application; $\beta$ is the coefficient vector and
$p_{\lambda}$ is the penalty function with penalization parameter $\lambda$. $f_\lambda(\alpha,
\beta)$ includes interesting classes such as Lasso, Ridge regression and
Elastic-net. Compared to latest iterative distributed algorithms requiring
multiple MapReduce jobs, our algorithm achieves huge performance improvement;
moreover, our algorithm is exact compared to the approximate algorithms such as
parallel stochastic gradient descent. Furthermore, what distinguishes our
algorithm from others is that it trains the model with cross-validation to
choose the optimal $\lambda$ instead of a user-specified one.
Key words: penalized linear regression, lasso, elastic-net, ridge, MapReduce
| Kun Yang | null | 1307.0048 | null | null |
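For the ridge case the one-pass idea is especially transparent: each mapper accumulates the sufficient statistics $X^TX$ and $X^Ty$ over its chunk, a reducer sums them, and the whole $\lambda$ path (hence cross-validation) needs no further passes over the data. A local sketch with lists standing in for MapReduce splits (lasso and elastic-net would instead run coordinate descent on the same statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
chunks = [rng.standard_normal((1000, 20)) for _ in range(5)]   # stand-ins for input splits
beta_true = rng.standard_normal(20)
ys = [X @ beta_true + rng.standard_normal(len(X)) for X in chunks]

XtX = sum(X.T @ X for X in chunks)        # "map" per chunk, "reduce" by summation: one pass
Xty = sum(X.T @ y for X, y in zip(chunks, ys))

for lam in (0.01, 1.0, 100.0):            # entire regularization path from the same statistics
    beta_hat = np.linalg.solve(XtX + lam * np.eye(20), Xty)
    print(lam, np.linalg.norm(beta_hat - beta_true))
```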
Concentration and Confidence for Discrete Bayesian Sequence Predictors | cs.LG stat.ML | Bayesian sequence prediction is a simple technique for predicting future
symbols sampled from an unknown measure on infinite sequences over a countable
alphabet. While strong bounds on the expected cumulative error are known, there
are only limited results on the distribution of this error. We prove tight
high-probability bounds on the cumulative error, which is measured in terms of
the Kullback-Leibler (KL) divergence. We also consider the problem of
constructing upper confidence bounds on the KL and Hellinger errors similar to
those constructed from Hoeffding-like bounds in the i.i.d. case. The new
results are applied to show that Bayesian sequence prediction can be used in
the Knows What It Knows (KWIK) framework with bounds that match the
state-of-the-art.
| Tor Lattimore and Marcus Hutter and Peter Sunehag | null | 1307.0127 | null | null |
Semi-supervised clustering methods | stat.ME cs.LG stat.ML | Cluster analysis methods seek to partition a data set into homogeneous
subgroups. It is useful in a wide variety of applications, including document
processing and modern genetics. Conventional clustering methods are
unsupervised, meaning that there is no outcome variable nor is anything known
about the relationship between the observations in the data set. In many
situations, however, information about the clusters is available in addition to
the values of the features. For example, the cluster labels of some
observations may be known, or certain observations may be known to belong to
the same cluster. In other cases, one may wish to identify clusters that are
associated with a particular outcome variable. This review describes several
clustering algorithms (known as "semi-supervised clustering" methods) that can
be applied in these situations. The majority of these methods are modifications
of the popular k-means clustering method, and several of them will be described
in detail. A brief description of some other semi-supervised clustering
algorithms is also provided.
| Eric Bair | 10.1002/wics.1270 | 1307.0252 | null | null |
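One of the simplest schemes in this family, seeded k-means, uses the known labels only to initialize the centroids. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in (0.0, 3.0, 6.0)])
seeds = {0: X[:5], 1: X[100:105], 2: X[200:205]}   # a few labelled points per cluster

centroids = np.array([s.mean(axis=0) for s in seeds.values()])   # seed-based init
for _ in range(20):                                              # standard k-means updates
    assign = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([X[assign == k].mean(axis=0) for k in range(3)])
print(centroids)
```

Constrained variants would additionally keep the seed points fixed to their known clusters during the assignment step.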
Exploratory Learning | cs.LG | In multiclass semi-supervised learning (SSL), it is sometimes the case that
the number of classes present in the data is not known, and hence no labeled
examples are provided for some classes. In this paper we present variants of
well-known semi-supervised multiclass learning methods that are robust when the
data contains an unknown number of classes. In particular, we present an
"exploratory" extension of expectation-maximization (EM) that explores
different numbers of classes while learning. "Exploratory" SSL greatly improves
performance on three datasets in terms of F1 on the classes with seed examples,
i.e., the classes which are expected to be in the data. Our Exploratory EM
algorithm also outperforms an SSL method based on non-parametric Bayesian
clustering.
| Bhavana Dalvi, William W. Cohen, Jamie Callan | null | 1307.0253 | null | null |
WebSets: Extracting Sets of Entities from the Web Using Unsupervised
Information Extraction | cs.LG cs.CL cs.IR | We describe an open-domain information extraction method for extracting
concept-instance pairs from an HTML corpus. Most earlier approaches to this
problem rely on combining clusters of distributionally similar terms and
concept-instance pairs obtained with Hearst patterns. In contrast, our method
relies on a novel approach for clustering terms found in HTML tables, and then
assigning concept names to these clusters using Hearst patterns. The method can
be efficiently applied to a large corpus, and experimental results on several
datasets show that our method can accurately extract large numbers of
concept-instance pairs.
| Bhavana Dalvi, William W. Cohen, and Jamie Callan | null | 1307.0261 | null | null |
Algorithms of the LDA model [REPORT] | cs.LG cs.IR stat.ML | We review three algorithms for Latent Dirichlet Allocation (LDA). Two of them
are variational inference algorithms: Variational Bayesian inference and Online
Variational Bayesian inference and one is Markov Chain Monte Carlo (MCMC)
algorithm -- Collapsed Gibbs sampling. We compare their time complexity and
performance. We find that online variational Bayesian inference is the fastest
algorithm and still returns reasonably good results.
| Jaka \v{S}peh, Andrej Muhi\v{c}, Jan Rupnik | null | 1307.0317 | null | null |
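Of the three, collapsed Gibbs sampling is the easiest to write down from scratch. A compact sketch on a toy corpus (no convergence diagnostics; hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
docs = [[0, 1, 1, 2], [2, 3, 3, 4], [0, 1, 4, 4]]    # documents as lists of word ids
K, V, alpha, beta = 2, 5, 0.1, 0.01                   # topics, vocab size, Dirichlet priors

z = [[int(rng.integers(K)) for _ in d] for d in docs] # random topic assignments
n_dk = np.zeros((len(docs), K)); n_kw = np.zeros((K, V)); n_k = np.zeros(K)
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        n_dk[d, z[d][i]] += 1; n_kw[z[d][i], w] += 1; n_k[z[d][i]] += 1

for _ in range(200):                                  # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]                               # remove the current assignment
            n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
            p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
            k = int(rng.choice(K, p=p / p.sum()))     # resample the topic
            z[d][i] = k
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

print((n_kw + beta) / (n_kw + beta).sum(axis=1, keepdims=True))  # topic-word estimates
```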
Learning directed acyclic graphs based on sparsest permutations | math.ST cs.LG stat.TH | We consider the problem of learning a Bayesian network or directed acyclic
graph (DAG) model from observational data. A number of constraint-based,
score-based and hybrid algorithms have been developed for this purpose. For
constraint-based methods, statistical consistency guarantees typically rely on
the faithfulness assumption, which has been shown to be restrictive, especially
for graphs with cycles in the skeleton. However, there is only limited work on
consistency guarantees for score-based and hybrid algorithms and it has been
unclear whether consistency guarantees can be proven under weaker conditions
than the faithfulness assumption. In this paper, we propose the sparsest
permutation (SP) algorithm. This algorithm is based on finding the causal
ordering of the variables that yields the sparsest DAG. We prove that this new
score-based method is consistent under strictly weaker conditions than the
faithfulness assumption. We also demonstrate through simulations on small DAGs
that the SP algorithm compares favorably to the constraint-based PC and SGS
algorithms as well as the score-based Greedy Equivalence Search and hybrid
Max-Min Hill-Climbing method. In the Gaussian setting, we prove that our
algorithm boils down to finding the permutation of the variables with sparsest
Cholesky decomposition for the inverse covariance matrix. Using this
connection, we show that in the oracle setting, where the true covariance
matrix is known, the SP algorithm is in fact equivalent to $\ell_0$-penalized
maximum likelihood estimation.
| Garvesh Raskutti and Caroline Uhler | null | 1307.0366 | null | null |
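In the Gaussian oracle setting the abstract's last characterization yields a direct, if brute-force, implementation for very small graphs: try every permutation and keep the one whose Cholesky factor of the permuted inverse covariance is sparsest. A toy sketch (exhaustive search, so feasible only for a handful of variables):

```python
import numpy as np
from itertools import permutations

def sparsest_permutation(Sigma, tol=1e-8):
    K = np.linalg.inv(Sigma)                      # inverse covariance (oracle setting)
    p = K.shape[0]
    best, best_nnz = None, np.inf
    for perm in permutations(range(p)):
        L = np.linalg.cholesky(K[np.ix_(perm, perm)])
        nnz = int(np.sum(np.abs(L) > tol))        # sparsity of the Cholesky factor
        if nnz < best_nnz:
            best, best_nnz = perm, nnz
    return best, best_nnz

# Covariance of the chain DAG X0 -> X1 -> X2 with unit noises.
Sigma = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 2.0], [1.0, 2.0, 3.0]])
print(sparsest_permutation(Sigma))                # a causal ordering of the chain wins
```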
Challenges in Representation Learning: A report on three machine
learning contests | stat.ML cs.LG | The ICML 2013 Workshop on Challenges in Representation Learning focused on
three challenges: the black box learning challenge, the facial expression
recognition challenge, and the multimodal learning challenge. We describe the
datasets created for these challenges and summarize the results of the
competitions. We provide suggestions for organizers of future challenges and
some comments on what kind of knowledge can be gained from machine learning
competitions.
| Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville,
Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler,
Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li,
Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John
Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing
Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio | null | 1307.0414 | null | null |
An Empirical Study into Annotator Agreement, Ground Truth Estimation,
and Algorithm Evaluation | cs.CV cs.AI cs.LG | Although agreement between annotators has been studied in the past from a
statistical viewpoint, little work has attempted to quantify the extent to
which this phenomenon affects the evaluation of computer vision (CV) object
detection algorithms. Many researchers utilise ground truth (GT) in experiments
and more often than not this GT is derived from one annotator's opinion. How
does the difference in opinion affect an algorithm's evaluation? Four examples
of typical CV problems are chosen, and a methodology is applied to each to
quantify the inter-annotator variance and to offer insight into the mechanisms
behind agreement and the use of GT. It is found that when detecting linear
objects annotator agreement is very low. The agreement in object position,
linear or otherwise, can be partially explained through basic image properties.
Automatic object detectors are compared to annotator agreement and it is found
that a clear relationship exists. Several methods for calculating GTs from a
number of annotations are applied and the resulting differences in the
performance of the object detectors are quantified. It is found that the rank
of a detector is highly dependent upon the method used to form the GT. It is
also found that although the STAPLE and LSML GT estimation methods appear to
represent the mean of the performance measured using the individual
annotations, when there are few annotations, or there is a large variance in
them, these estimates tend to degrade. Furthermore, one of the most commonly
adopted annotation combination methods--consensus voting--accentuates more
obvious features, which results in an overestimation of the algorithm's
performance. Finally, it is concluded that in some datasets it may not be
possible to state with any confidence that one algorithm outperforms another
when evaluating upon one GT and a method for calculating confidence bounds is
discussed.
| Thomas A. Lampert, Andr\'e Stumpf, Pierre Gan\c{c}arski | 10.1109/TIP.2016.2544703 | 1307.0426 | null | null |
Quantum support vector machine for big data classification | quant-ph cs.LG | Supervised machine learning is the classification of new data based on
already classified training examples. In this work, we show that the support
vector machine, an optimized binary classifier, can be implemented on a quantum
computer, with complexity logarithmic in the size of the vectors and the number
of training examples. In cases when classical sampling algorithms require
polynomial time, an exponential speed-up is obtained. At the core of this
quantum big data algorithm is a non-sparse matrix exponentiation technique for
efficiently performing a matrix inversion of the training data inner-product
(kernel) matrix.
| Patrick Rebentrost, Masoud Mohseni, Seth Lloyd | 10.1103/PhysRevLett.113.130503 | 1307.0471 | null | null |
Online discrete optimization in social networks in the presence of
Knightian uncertainty | math.OC cs.DC cs.LG | We study a model of collective real-time decision-making (or learning) in a
social network operating in an uncertain environment, for which no a priori
probabilistic model is available. Instead, the environment's impact on the
agents in the network is seen through a sequence of cost functions, revealed to
the agents in a causal manner only after all the relevant actions are taken.
There are two kinds of costs: individual costs incurred by each agent and
local-interaction costs incurred by each agent and its neighbors in the social
network. Moreover, agents have inertia: each agent has a default mixed strategy
that stays fixed regardless of the state of the environment, and must expend
effort to deviate from this strategy in order to respond to cost signals coming
from the environment. We construct a decentralized strategy, wherein each agent
selects its action based only on the costs directly affecting it and on the
decisions made by its neighbors in the network. In this setting, we quantify
social learning in terms of regret, which is given by the difference between
the realized network performance over a given time horizon and the best
performance that could have been achieved in hindsight by a fictitious
centralized entity with full knowledge of the environment's evolution. We show
that our strategy achieves regret that scales polylogarithmically with the
time horizon and polynomially with the number of agents and the maximum number
of neighbors of any agent in the social network.
| Maxim Raginsky and Angelia Nedi\'c | null | 1307.0473 | null | null |
A non-parametric conditional factor regression model for
high-dimensional input and response | stat.ML cs.LG | In this paper, we propose a non-parametric conditional factor regression
(NCFR) model for domains with high-dimensional input and response. NCFR enhances
linear regression in two ways: a) introducing low-dimensional latent factors
leading to dimensionality reduction and b) integrating an Indian Buffet Process
as a prior for the latent factors to derive unlimited sparse dimensions.
Experimental results comparing NCFR to several alternatives give evidence of
remarkable prediction performance.
| Ava Bargi, Richard Yi Da Xu, Massimo Piccardi | null | 1307.0578 | null | null |
The Orchive: Data mining a massive bioacoustic archive | cs.LG cs.DB cs.SD | The Orchive is a large collection of over 20,000 hours of audio recordings
from the OrcaLab research facility located off the northern tip of Vancouver
Island. It contains recorded orca vocalizations from 1980 to the present and is
one of the largest resources of bioacoustic data in the world. We
have developed a web-based interface that allows researchers to listen to these
recordings, view waveform and spectral representations of the audio, label
clips with annotations, and view the results of machine learning classifiers
based on automatic audio features extraction. In this paper we describe such
classifiers that discriminate between background noise, orca calls, and the
voice notes that are present in most of the tapes. Furthermore we show
classification results for individual calls based on a previously existing orca
call catalog. We have also experimentally investigated the scalability of
classifiers over the entire Orchive.
| Steven Ness, Helena Symonds, Paul Spong, George Tzanetakis | null | 1307.0589 | null | null |
Discovering the Markov network structure | cs.IT cs.LG math.IT | In this paper a new proof is given for the supermodularity of information
content. Using the decomposability of the information content an algorithm is
given for discovering the Markov network graph structure endowed by the
pairwise Markov property of a given probability distribution. A discrete
probability distribution is given for which the conclusion of the
Hammersley-Clifford theorem holds even though some of the possible vector
realizations occur with zero probability. Our algorithm for discovering
the pairwise Markov network is illustrated on this example, too.
| Edith Kov\'acs and Tam\'as Sz\'antai | null | 1307.0643 | null | null |
Distributed Online Big Data Classification Using Context Information | cs.LG stat.ML | Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner to
process incurs additional costs such as delays, and hence this will be only
beneficial if the benefits obtained from a better classification will exceed
the costs. We model the problem of joint classification by the distributed and
heterogeneous learners from multiple data sources as a distributed contextual
bandit problem where each data is characterized by a specific context. We
develop a distributed online learning algorithm for which we can prove
sublinear regret. Compared to prior work in distributed online data mining, our
work is the first to provide analytic regret results characterizing the
performance of the proposed algorithm.
| Cem Tekin, Mihaela van der Schaar | null | 1307.0781 | null | null |
Data Fusion by Matrix Factorization | cs.LG cs.AI cs.DB stat.ML | For most problems in science and engineering we can obtain data sets that
describe the observed system from various perspectives and record the behavior
of its individual components. Heterogeneous data sets can be collectively mined
by data fusion. Fusion can focus on a specific target relation and exploit
directly associated data together with contextual data and data about the system's
constraints. In the paper we describe a data fusion approach with penalized
matrix tri-factorization (DFMF) that simultaneously factorizes data matrices to
reveal hidden associations. The approach can directly consider any data that
can be expressed in a matrix, including those from feature-based
representations, ontologies, associations and networks. We demonstrate the
utility of DFMF for gene function prediction task with eleven different data
sources and for prediction of pharmacologic actions by fusing six data sources.
Our data fusion algorithm compares favorably to alternative data integration
approaches and achieves higher accuracy than can be obtained from any single
data source alone.
| Marinka \v{Z}itnik and Bla\v{z} Zupan | 10.1109/TPAMI.2014.2343973 | 1307.0803 | null | null |
Multi-Task Policy Search | stat.ML cs.AI cs.LG cs.RO | Learning policies that generalize across multiple tasks is an important and
challenging research topic in reinforcement learning and robotics. Training
individual policies for every single potential task is often impractical,
especially for continuous task variations, requiring more principled approaches
to share and transfer knowledge among similar tasks. We present a novel
approach for learning a nonlinear feedback policy that generalizes across
multiple tasks. The key idea is to define a parametrized policy as a function
of both the state and the task, which allows learning a single policy that
generalizes across multiple known and unknown tasks. Applications of our novel
approach to reinforcement and imitation learning in real-robot experiments are
shown.
| Marc Peter Deisenroth, Peter Englert, Jan Peters and Dieter Fox | null | 1307.0813 | null | null |
Semi-supervised Ranking Pursuit | stat.ML cs.IR cs.LG | We propose a novel sparse preference learning/ranking algorithm. Our
algorithm approximates the true utility function by a weighted sum of basis
functions using the squared loss on pairs of data points, and is a
generalization of the kernel matching pursuit method. It can operate both in a
supervised and a semi-supervised setting and allows efficient search for
multiple, near-optimal solutions. Furthermore, we describe the extension of the
algorithm suitable for combined ranking and regression tasks. In our
experiments we demonstrate that the proposed algorithm outperforms several
state-of-the-art learning methods when taking into account unlabeled data and
performs comparably in a supervised learning scenario, while providing sparser
solutions.
| Evgeni Tsivtsivadze and Tom Heskes | null | 1307.0846 | null | null |
On the minimal teaching sets of two-dimensional threshold functions | math.CO cs.LG math.NT | It is known that a minimal teaching set of any threshold function on the
two-dimensional rectangular grid consists of 3 or 4 points. We derive exact
formulae for the numbers of functions corresponding to these values and further
refine them in the case of a minimal teaching set of size 3. We also prove that
the average cardinality of the minimal teaching sets of threshold functions is
asymptotically 7/2.
We further present corollaries of these results concerning some special
arrangements of lines in the plane.
| Max A. Alekseyev, Marina G. Basova, Nikolai Yu. Zolotykh | 10.1137/140978090 | 1307.1058 | null | null |
Investigating the Detection of Adverse Drug Events in a UK General
Practice Electronic Health-Care Database | cs.CE cs.LG | Data-mining techniques have frequently been developed for Spontaneous
reporting databases. These techniques aim to find adverse drug events
accurately and efficiently. Spontaneous reporting databases are prone to
missing information, under-reporting and incorrect entries. This often results
in a detection lag or prevents the detection of some adverse drug events. These
limitations do not occur in electronic health-care databases. In this paper,
existing methods developed for spontaneous reporting databases are implemented
on both a spontaneous reporting database and a general practice electronic
health-care database and compared. The results suggest that the application of
existing methods to the general practice database may help find signals that
have gone undetected when using the spontaneous reporting system database. In
addition the general practice database provides far more supplementary
information, that if incorporated in analysis could provide a wealth of
information for identifying adverse events more accurately.
| Jenna Reps, Jan Feyereisl, Jonathan M. Garibaldi, Uwe Aickelin, Jack
E. Gibson, Richard B. Hubbard | null | 1307.1078 | null | null |
Application of a clustering framework to UK domestic electricity data | cs.CE cs.LG | This paper takes an approach to clustering domestic electricity load profiles
that has been successfully used with data from Portugal and applies it to UK
data. Clustering techniques are applied and it is found that the preferred
technique in the Portuguese work (a two-stage process combining Self-Organising
Maps and k-means) is not appropriate for the UK data. The work shows that up to
nine clusters of households can be identified with the differences in usage
profiles being visually striking. This demonstrates the appropriateness of
breaking the electricity usage patterns down to more detail than the two load
profiles currently published by the electricity industry. The paper details
initial results using data collected in Milton Keynes around 1990. Further work
is described and will concentrate on building accurate and meaningful clusters
of similar electricity users in order to better direct demand side management
initiatives to the most relevant target customers.
| Ian Dent, Uwe Aickelin, Tom Rodden | null | 1307.1079 | null | null |
AdaBoost and Forward Stagewise Regression are First-Order Convex
Optimization Methods | stat.ML cs.LG math.OC | Boosting methods are highly popular and effective supervised learning methods
which combine weak learners into a single accurate model with good statistical
performance. In this paper, we analyze two well-known boosting methods,
AdaBoost and Incremental Forward Stagewise Regression (FS$_\varepsilon$), by
establishing their precise connections to the Mirror Descent algorithm, which
is a first-order method in convex optimization. As a consequence of these
connections we obtain novel computational guarantees for these boosting
methods. In particular, we characterize convergence bounds of AdaBoost, related
to both the margin and log-exponential loss function, for any step-size
sequence. Furthermore, this paper presents, for the first time, precise
computational complexity results for FS$_\varepsilon$.
| Robert M. Freund, Paul Grigas, Rahul Mazumder | null | 1307.1192 | null | null |
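The step size that the Mirror Descent view makes explicit is exposed in standard AdaBoost implementations as a shrinkage parameter. A quick illustration with scikit-learn (generic usage, not the paper's experiments; the parameter is called base_estimator in scikit-learn < 1.2):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
for lr in (1.0, 0.1):                              # the step-size (epsilon) analogue
    clf = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),   # weak learner: a decision stump
        n_estimators=200, learning_rate=lr, random_state=0,
    ).fit(X, y)
    print(lr, clf.score(X, y))
```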
Constructing Hierarchical Image-tags Bimodal Representations for Word
Tags Alternative Choice | cs.LG cs.NE | This paper describes our solution to the multi-modal learning challenge of
ICML. This solution comprises constructing three-level representations in three
consecutive stages and choosing correct tag words with a data-specific
strategy. Firstly, we use typical methods to obtain level-1 representations.
Each image is represented using MPEG-7 and gist descriptors with additional
features released by the contest organizers. And the corresponding word tags
are represented by bag-of-words model with a dictionary of 4000 words.
Secondly, we learn the level-2 representations using two stacked RBMs for each
modality. Thirdly, we propose a bimodal auto-encoder to learn the
similarities/dissimilarities between the pairwise image-tags as level-3
representations. Finally, during the test phase, based on one observation of
the dataset, we come up with a data-specific strategy to choose the correct tag
words leading to a leap of an improved overall performance. Our final average
accuracy on the private test set is 100%, which ranked first in this
challenge.
| Fangxiang Feng and Ruifan Li and Xiaojie Wang | null | 1307.1275 | null | null |
The Application of a Data Mining Framework to Energy Usage Profiling in
Domestic Residences using UK data | cs.CE cs.LG stat.AP | This paper describes a method for defining representative load profiles for
domestic electricity users in the UK. It considers bottom-up and clustering
methods and then details the research plans for implementing and improving
existing framework approaches based on the overall usage profile. The work
focuses on adapting and applying analysis framework approaches to UK energy
data in order to determine the effectiveness of creating a small number (in
single figures) of archetypal users with the intention of improving on the current methods of
determining usage profiles. The work is currently in progress and the paper
details initial results using data collected in Milton Keynes around 1990.
Various possible enhancements to the work are considered including a split
based on temperature to reflect the varying UK weather conditions.
| Ian Dent, Uwe Aickelin, Tom Rodden | null | 1307.1380 | null | null |
Creating Personalised Energy Plans. From Groups to Individuals using
Fuzzy C Means Clustering | cs.CE cs.LG | Changes in the UK electricity market mean that domestic users will be
required to modify their usage behaviour in order that supplies can be
maintained. Clustering allows usage profiles collected at the household level
to be clustered into groups and assigned a stereotypical profile which can be
used to target marketing campaigns. Fuzzy C Means clustering extends this by
allowing each household to be a member of many groups and hence provides the
opportunity to make personalised offers to the household dependent on their
degree of membership of each group. In addition, feedback can be provided on
how users' changing behaviour is moving them towards more "green" or cost
effective stereotypical usage.
| Ian Dent, Christian Wagner, Uwe Aickelin, Tom Rodden | null | 1307.1385 | null | null |
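The mechanics are straightforward: fuzzy c-means replaces hard assignments with a membership matrix whose rows sum to one, which is exactly what enables blended, per-household offers. A minimal self-contained implementation (synthetic 24-point "load profiles"; m is the usual fuzzifier):

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c)); U /= U.sum(axis=1, keepdims=True)  # memberships
    for _ in range(iters):
        Um = U ** m
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted means
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2) + 1e-12
        w = d ** (-2.0 / (m - 1.0))                                 # standard FCM update
        U = w / w.sum(axis=1, keepdims=True)
    return U, centroids

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 24)) for c in (0.2, 0.5, 0.9)])
U, centroids = fuzzy_c_means(X)
print(U[:2].round(2))       # each household's degrees of membership in the 3 groups
```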
Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm | cs.LG cs.CE | Gene expression data sets are used to classify and predict patient diagnostic
categories. It is extremely difficult and expensive to obtain labelled gene
expression examples. Moreover, conventional supervised approaches
cannot function properly when labelled data (training examples) are
insufficient using Support Vector Machines (SVM) algorithms. Therefore, in this
paper, we suggest Transductive Support Vector Machines (TSVMs) as
semi-supervised learning algorithms, learning with both labelled samples data
and unlabelled samples to perform the classification of microarray data. To
prune the superfluous genes and samples we used a feature selection method
called Recursive Feature Elimination (RFE), which is supposed to enhance the
output of classification and avoid the local optimization problem. We examined
the classification prediction accuracy of the TSVM-RFE algorithm in comparison
with the Genetic Learning Across Datasets (GLAD) algorithm, as both are
semi-supervised learning methods. Comparing these two methods, we found that
the TSVM-RFE surpassed both an SVM using RFE and GLAD.
| Hala Helmi, Jon M. Garibaldi and Uwe Aickelin | null | 1307.1387 | null | null |
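The RFE half of the method is available off the shelf; the transductive half is not in scikit-learn, so the sketch below shows only supervised SVM-RFE on microarray-like data (many more features than samples):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=500, n_informative=10,
                           random_state=0)       # microarray-like: n << p
svm = SVC(kernel="linear")                       # linear weights let RFE rank the genes
selector = RFE(svm, n_features_to_select=20, step=0.1).fit(X, y)
print(selector.support_.sum(), selector.score(X, y))
```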
Quiet in Class: Classification, Noise and the Dendritic Cell Algorithm | cs.LG cs.CR | Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded
several criticisms about its underlying structure and operation. As a result,
several alterations and fixes have been suggested in the literature to correct
for these findings. A contribution of this work is to investigate the effects
of replacing the classification stage of the DCA (which is known to be flawed)
with a traditional machine learning technique. This work goes on to question
the merits of those unique properties of the DCA that are yet to be thoroughly
analysed. If none of these properties can be found to have a benefit over
traditional approaches, then "fixing" the DCA is arguably less efficient than
simply creating a new algorithm. This work examines the dynamic filtering
property of the DCA and questions the utility of this unique feature for the
anomaly detection problem. It is found that this feature, while advantageous
for noisy, time-ordered classification, is not as useful as a traditional
static filter for processing a synthetic dataset. It is concluded that there
are still unique features of the DCA left to investigate. Areas that may be of
benefit to the Artificial Immune Systems community are suggested.
| Feng Gu, Jan Feyereisl, Robert Oates, Jenna Reps, Julie Greensmith,
Uwe Aickelin | null | 1307.1391 | null | null |
Detect adverse drug reactions for drug Alendronate | cs.CE cs.LG | Adverse drug reactions (ADRs) are a widely recognised public health issue. In
this study we propose an original approach to detect the ADRs using feature
matrix and feature selection. The experiments are carried out on the drug
Simvastatin. Major side effects for the drug are detected and better
performance is achieved compared to other computerized methods. The detected
ADRs are based on the computerized method; further investigation is needed.
| Yihui Liu, Uwe Aickelin | null | 1307.1394 | null | null |
Discovering Sequential Patterns in a UK General Practice Database | cs.LG cs.CE stat.AP | The wealth of computerised medical information becoming readily available
presents the opportunity to examine patterns of illnesses, therapies and
responses. These patterns may be able to predict illnesses that a patient is
likely to develop, allowing the implementation of preventative actions. In this
paper sequential rule mining is applied to a General Practice database to find
rules involving a patient's age, gender and medical history. By incorporating
these rules into current health-care a patient can be highlighted as
susceptible to a future illness based on past or current illnesses, gender and
year of birth. This knowledge has the ability to greatly improve health-care
and reduce health-care costs.
| Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack
E. Gibson, Richard B. Hubbard | null | 1307.1411 | null | null |
Dropout Training as Adaptive Regularization | stat.ML cs.LG stat.ME | Dropout and other feature noising schemes control overfitting by artificially
corrupting the training data. For generalized linear models, dropout performs a
form of adaptive regularization. Using this viewpoint, we show that the dropout
regularizer is first-order equivalent to an L2 regularizer applied after
scaling the features by an estimate of the inverse diagonal Fisher information
matrix. We also establish a connection to AdaGrad, an online learning
algorithm, and find that a close relative of AdaGrad operates by repeatedly
solving linear dropout-regularized problems. By casting dropout as
regularization, we develop a natural semi-supervised algorithm that uses
unlabeled data to create a better adaptive regularizer. We apply this idea to
document classification tasks, and show that it consistently boosts the
performance of dropout training, improving on state-of-the-art results on the
IMDB reviews dataset.
| Stefan Wager, Sida Wang, and Percy Liang | null | 1307.1493 | null | null |
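As a concrete illustration of the result above, the quadratic approximation of
the dropout regularizer for logistic regression can be computed directly; the
following Python sketch is illustrative only (the function name and interface
are our own, not from the paper):

import numpy as np

def dropout_l2_penalty(X, beta, rate=0.5):
    # Quadratic (adaptive-L2) approximation of the dropout regularizer for
    # logistic regression: 0.5 * rate/(1-rate) * sum_j F_jj * beta_j^2,
    # where F_jj is an estimate of the diagonal Fisher information.
    p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
    fisher_diag = (X ** 2 * (p * (1 - p))[:, None]).sum(axis=0)
    return 0.5 * rate / (1.0 - rate) * float(fisher_diag @ beta ** 2)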
Comparing Data-mining Algorithms Developed for Longitudinal
Observational Databases | cs.LG cs.CE cs.DB | Longitudinal observational databases have become a recent interest in the
post-marketing drug surveillance community due to their ability to present a
new perspective for detecting negative side effects. Algorithms mining
longitudinal observation databases are not restricted by many of the
limitations associated with the more conventional methods that have been
developed for spontaneous reporting system databases. In this paper we
investigate the robustness of four recently developed algorithms that mine
longitudinal observational databases by applying them to The Health Improvement
Network (THIN) for six drugs with well documented negative side effects.
Our results show that none of the existing algorithms was able to consistently
identify known adverse drug reactions above events related to the cause of the
drug, and no algorithm was superior.
| Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack
E. Gibson, Richard B. Hubbard | null | 1307.1584 | null | null |
Supervised Learning and Anti-learning of Colorectal Cancer Classes and
Survival Rates from Cellular Biology Parameters | cs.LG cs.CE stat.ML | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to learn relationships between attributes (physical and immunological) and the
resulting tumour stage and survival. Results for conventional machine learning
approaches can be considered poor, especially for predicting tumour stages for
the most important types of cancer. This poor performance is further
investigated and compared with a synthetic dataset based on the logical
exclusive-OR function, and it is shown that there is a significant level of
'anti-learning' present in all supervised methods used; this can be
explained by the high-dimensional, complex and sparsely representative
dataset. For predicting the stage of cancer from the immunological attributes,
anti-learning approaches outperform a range of popular algorithms.
| Chris Roadknight, Uwe Aickelin, Guoping Qiu, John Scholefield, Lindy
Durrant | 10.1109/ICSMC.2012.6377825 | 1307.1599 | null | null |
Biomarker Clustering of Colorectal Cancer Data to Complement Clinical
Classification | cs.LG cs.CE | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to cluster this dataset and important subsets of it in an effort to
characterize the data and validate existing standards for tumour
classification. It is apparent from optimal clustering that existing tumour
classification is largely unrelated to immunological factors within a patient
and that there may be scope for re-evaluating treatment options and survival
estimates based on a combination of tumour physiology and patient
histochemistry.
| Chris Roadknight, Uwe Aickelin, Alex Ladas, Daniele Soria, John
Scholefield and Lindy Durrant | null | 1307.1601 | null | null |
Polyglot: Distributed Word Representations for Multilingual NLP | cs.CL cs.LG | Distributed word representations (word embeddings) have recently contributed
to competitive performance in language modeling and several NLP tasks. In this
work, we train word embeddings for more than 100 languages using their
corresponding Wikipedias. We quantitatively demonstrate the utility of our word
embeddings by using them as the sole features for training a part of speech
tagger for a subset of these languages. We find their performance to be
competitive with near state-of-the-art methods in English, Danish and Swedish.
Moreover, we investigate the semantic features captured by these embeddings
through the proximity of word groupings. We will release these embeddings
publicly to help researchers in the development and enhancement of multilingual
applications.
| Rami Al-Rfou, Bryan Perozzi, Steven Skiena | null | 1307.1662 | null | null |
Stochastic Optimization of PCA with Capped MSG | stat.ML cs.LG | We study PCA as a stochastic optimization problem and propose a novel
stochastic approximation algorithm which we refer to as "Matrix Stochastic
Gradient" (MSG), as well as a practical variant, Capped MSG. We study the
method both theoretically and empirically.
| Raman Arora, Andrew Cotter, and Nathan Srebro | null | 1307.1674 | null | null |
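A minimal sketch of one MSG-style update, assuming the projection onto the
constraint set {0 <= M <= I, trace(M) = k} is computed by shifting and
clipping eigenvalues (the interface below is our own simplification, not the
authors' code):

import numpy as np

def msg_update(M, x, eta, k):
    # Gradient step on E[x^T M x], then Euclidean projection onto
    # {0 <= M <= I, trace(M) = k}: shift all eigenvalues by a common s,
    # clip to [0, 1], with s chosen by bisection so the trace equals k.
    M = M + eta * np.outer(x, x)
    vals, vecs = np.linalg.eigh(M)
    lo, hi = -vals.max(), 1.0 - vals.min()
    for _ in range(50):
        s = 0.5 * (lo + hi)
        if np.clip(vals + s, 0.0, 1.0).sum() > k:
            hi = s
        else:
            lo = s
    vals = np.clip(vals + 0.5 * (lo + hi), 0.0, 1.0)
    return (vecs * vals) @ vecs.T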
Approximate dynamic programming using fluid and diffusion approximations
with applications to power management | cs.LG math.OC | Neuro-dynamic programming is a class of powerful techniques for approximating
the solution to dynamic programming equations. In their most computationally
attractive formulations, these techniques provide the approximate solution only
within a prescribed finite-dimensional function class. Thus, the question that
always arises is how should the function class be chosen? The goal of this
paper is to propose an approach using the solutions to associated fluid and
diffusion approximations. In order to illustrate this approach, the paper
focuses on an application to dynamic speed scaling for power management in
computer processors.
| Wei Chen, Dayu Huang, Ankur A. Kulkarni, Jayakrishnan Unnikrishnan,
Quanyan Zhu, Prashant Mehta, Sean Meyn, Adam Wierman | 10.1109/CDC.2009.5399685 | 1307.1759 | null | null |
Ensemble Methods for Multi-label Classification | stat.ML cs.LG | Ensemble methods have been shown to be an effective tool for solving
multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm,
each member of the ensemble is associated with a small randomly-selected subset
of k labels. Then, a single label classifier is trained according to each
combination of elements in the subset. In this paper we adopt a similar
approach; however, instead of randomly choosing subsets, we select the minimum
required subsets of k labels that cover all labels and meet additional
constraints such as coverage of inter-label correlations. Construction of the
cover is achieved by formulating the subset selection as a minimum set covering
problem (SCP) and solving it by using approximation algorithms. Every cover
needs only to be prepared once by offline algorithms. Once prepared, a cover
may be applied to the classification of any given multi-label dataset whose
properties conform with those of the cover. The contribution of this paper is
two-fold. First, we introduce SCP as a general framework for constructing label
covers while allowing the user to incorporate cover construction constraints.
We demonstrate the effectiveness of this framework by proposing two
construction constraints whose enforcement produces covers that improve the
prediction performance of random selection. Second, we provide theoretical
bounds that quantify the probabilities of random selection to produce covers
that meet the proposed construction criteria. The experimental results indicate
that the proposed methods improve multi-label classification accuracy and
stability compared with the RAKEL algorithm and with other state-of-the-art
algorithms.
| Lior Rokach, Alon Schclar, Ehud Itach | null | 1307.1769 | null | null |
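The set-cover formulation above can be approximated greedily; the sketch
below covers all labels with k-labelsets but omits the paper's additional
constraints such as inter-label correlation coverage (names and interface are
our own):

from itertools import combinations

def greedy_label_cover(n_labels, k):
    # Greedy approximation to minimum set cover: repeatedly pick the
    # k-labelset covering the most labels not yet covered. Enumerating all
    # k-subsets is only feasible for small label spaces.
    uncovered = set(range(n_labels))
    cover = []
    while uncovered:
        best = max(combinations(range(n_labels), k),
                   key=lambda s: len(uncovered & set(s)))
        cover.append(best)
        uncovered -= set(best)
    return cover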
Loss minimization and parameter estimation with heavy tails | cs.LG stat.ML | This work studies applications and generalizations of a simple estimation
technique that provides exponential concentration under heavy-tailed
distributions, assuming only bounded low-order moments. We show that the
technique can be used for approximate minimization of smooth and strongly
convex losses, and specifically for least squares linear regression. For
instance, our $d$-dimensional estimator requires just
$\tilde{O}(d\log(1/\delta))$ random samples to obtain a constant factor
approximation to the optimal least squares loss with probability $1-\delta$,
without requiring the covariates or noise to be bounded or subgaussian. We
provide further applications to sparse linear regression and low-rank
covariance matrix estimation with similar allowances on the noise and covariate
distributions. The core technique is a generalization of the median-of-means
estimator to arbitrary metric spaces.
| Daniel Hsu and Sivan Sabato | null | 1307.1827 | null | null |
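A minimal sketch of the core median-of-means idea in the scalar case (the
paper's estimator generalizes this to arbitrary metric spaces and to losses):

import numpy as np

def median_of_means(samples, k):
    # Split the data into k groups, average within each group, and return
    # the median of the group means. This yields exponential concentration
    # around the true mean assuming only bounded second moments.
    samples = np.random.permutation(np.asarray(samples))
    groups = np.array_split(samples, k)
    return float(np.median([g.mean() for g in groups]))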
B-tests: Low Variance Kernel Two-Sample Tests | cs.LG stat.ML | A family of maximum mean discrepancy (MMD) kernel two-sample tests is
introduced. Members of the test family are called Block-tests or B-tests, since
the test statistic is an average over MMDs computed on subsets of the samples.
The choice of block size allows control over the tradeoff between test power
and computation time. In this respect, the $B$-test family combines favorable
properties of previously proposed MMD two-sample tests: B-tests are more
powerful than a linear time test where blocks are just pairs of samples, yet
they are more computationally efficient than a quadratic time test where a
single large block incorporating all the samples is used to compute a
U-statistic. A further important advantage of the B-tests is their
asymptotically Normal null distribution: this is by contrast with the
U-statistic, which is degenerate under the null hypothesis, and for which
estimates of the null distribution are computationally demanding. Recent
results on kernel selection for hypothesis testing transfer seamlessly to the
B-tests, yielding a means to optimize test power via kernel choice.
| Wojciech Zaremba (INRIA Saclay - Ile de France, CVN), Arthur Gretton,
Matthew Blaschko (INRIA Saclay - Ile de France, CVN) | null | 1307.1954 | null | null |
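A simple sketch of the block statistic described above, assuming `kernel`
maps two sample arrays to their Gram matrix (e.g. a Gaussian kernel); the
actual tests also exploit the asymptotically Normal null distribution to set
thresholds:

import numpy as np

def b_test_statistic(X, Y, block_size, kernel):
    # Average of unbiased MMD^2 estimates computed on disjoint blocks;
    # block_size trades off test power against computation time.
    n = min(len(X), len(Y))
    m = block_size
    stats = []
    for start in range(0, n - m + 1, m):
        x, y = X[start:start + m], Y[start:start + m]
        Kxx, Kyy, Kxy = kernel(x, x), kernel(y, y), kernel(x, y)
        mmd2 = ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
                + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
                - 2.0 * Kxy.mean())
        stats.append(mmd2)
    return float(np.mean(stats))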
Using Clustering to extract Personality Information from socio economic
data | cs.LG cs.CE | It has become apparent that models that have been applied widely in
economics, including Machine Learning techniques and Data Mining methods,
should take into consideration principles that derive from the theories of
Personality Psychology in order to discover more comprehensive knowledge
regarding complicated economic behaviours. In this work, we present a method to
extract Behavioural Groups by using simple clustering techniques that can
potentially reveal aspects of the Personalities of their members. We believe
that this is very important because the psychological information regarding the
Personalities of individuals is limited in real world applications and because
it can become a useful tool in improving the traditional models of Knowledge
Economy.
| Alexandros Ladas, Uwe Aickelin, Jon Garibaldi, Eamonn Ferguson | null | 1307.1998 | null | null |
Finding the creatures of habit; Clustering households based on their
flexibility in using electricity | cs.LG cs.CE | Changes in the UK electricity market, particularly with the roll out of smart
meters, will provide greatly increased opportunities for initiatives intended
to change households' electricity usage patterns for the benefit of the overall
system. Users show differences in their regular behaviours and clustering
households into similar groupings based on this variability provides for
efficient targeting of initiatives. Those people who are stuck in a regular
pattern of activity may be the least receptive to an initiative to change
behaviour. A sample of 180 households from the UK is clustered into four
groups as an initial test of the concept and useful, actionable groupings are
found.
| Ian Dent, Tony Craig, Uwe Aickelin, Tom Rodden | null | 1307.2111 | null | null |
A PAC-Bayesian Tutorial with A Dropout Bound | cs.LG | This tutorial gives a concise overview of existing PAC-Bayesian theory
focusing on three generalization bounds. The first is an Occam bound which
handles rules with finite precision parameters and which states that
generalization loss is near training loss when the number of bits needed to
write the rule is small compared to the sample size. The second is a
PAC-Bayesian bound providing a generalization guarantee for posterior
distributions rather than for individual rules. The PAC-Bayesian bound
naturally handles infinite precision rule parameters, $L_2$ regularization,
{\em provides a bound for dropout training}, and defines a natural notion of a
single distinguished PAC-Bayesian posterior distribution. The third bound is a
training-variance bound --- a kind of bias-variance analysis but with bias
replaced by expected training loss. The training-variance bound dominates the
other bounds but is more difficult to interpret. It seems to suggest variance
reduction methods such as bagging and may ultimately provide a more meaningful
analysis of dropouts.
| David McAllester | null | 1307.2118 | null | null |
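For reference, one standard form of the PAC-Bayesian bound of the kind
surveyed above (not necessarily the exact variant stated in the tutorial):
for any prior $P$ fixed before seeing the data and any posterior $Q$, with
probability at least $1-\delta$ over an i.i.d. sample of size $n$,

$$ \mathbb{E}_{h \sim Q}\, L(h) \;\le\; \mathbb{E}_{h \sim Q}\, \hat{L}(h)
\;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{n}{\delta}}{2(n-1)}} . $$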
Transmodal Analysis of Neural Signals | q-bio.NC cs.LG q-bio.QM | Localizing neuronal activity in the brain, both in time and in space, is a
central challenge to advance the understanding of brain function. Because of
the inability of any single neuroimaging technique to cover all aspects at
once, there is a growing interest in combining signals from multiple modalities
in order to benefit from the advantages of each acquisition method. Due to the
complexity and unknown parameterization of any suggested complete model of BOLD
response in functional magnetic resonance imaging (fMRI), the development of a
reliable ultimate fusion approach remains difficult. But besides the primary
goal of superior temporal and spatial resolution, conjoint analysis of data
from multiple imaging modalities can alternatively be used to segregate neural
information from physiological and acquisition noise. In this paper we suggest
a novel methodology which relies on constructing a quantifiable mapping of data
from one modality (electroencephalography; EEG) into another (fMRI), called
transmodal analysis of neural signals (TRANSfusion). TRANSfusion attempts to
map neural data embedded within the EEG signal into its reflection in fMRI
data. Assessing the mapping performance on unseen data makes it possible to
localize brain areas where a significant portion of the signal can be reliably
reconstructed, hence the areas whose neural activity is reflected in both
EEG and fMRI data. Subsequent analysis of the learnt model makes it possible to localize
areas associated with specific frequency bands of EEG, or areas functionally
related (connected or coherent) to any given EEG sensor. We demonstrate the
performance of TRANSfusion on artificial and real data from an auditory
experiment. We further speculate on possible alternative uses: cross-modal data
filtering and EEG-driven interpolation of fMRI signals to obtain arbitrarily
high temporal sampling of BOLD.
| Yaroslav O. Halchenko, Michael Hanke, James V. Haxby, Stephen Jose
Hanson, Christoph S. Herrmann | null | 1307.2150 | null | null |
Bridging Information Criteria and Parameter Shrinkage for Model
Selection | stat.ML cs.LG | Model selection based on classical information criteria, such as BIC, is
generally computationally demanding, but its properties are well studied. On
the other hand, model selection based on parameter shrinkage by $\ell_1$-type
penalties is computationally efficient. In this paper we make an attempt to
combine their strengths, and propose a simple approach that penalizes the
likelihood with data-dependent $\ell_1$ penalties as in adaptive Lasso and
exploits a fixed penalization parameter. Even for finite samples, its model
selection results approximately coincide with those based on information
criteria; in particular, we show that in some special cases, this approach and
the corresponding information criterion produce exactly the same model. One can
also consider this approach as a way to directly determine the penalization
parameter in adaptive Lasso to achieve information criteria-like model
selection. As extensions, we apply this idea to complex models including
Gaussian mixture model and mixture of factor analyzers, for which model
selection is traditionally difficult; by adopting suitable penalties, we provide
continuous approximators to the corresponding information criteria, which are
easy to optimize and enable efficient model selection.
| Kun Zhang, Heng Peng, Laiwan Chan, Aapo Hyvarinen | null | 1307.2307 | null | null |
Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning | stat.ML cs.LG | Bayesian network structure learning algorithms with limited data are being
used in domains such as systems biology and neuroscience to gain insight into
the underlying processes that produce observed data. Learning reliable networks
from limited data is difficult, therefore transfer learning can improve the
robustness of learned networks by leveraging data from related tasks. Existing
transfer learning algorithms for Bayesian network structure learning give a
single maximum a posteriori estimate of network models. Yet, many other models
may be equally likely, and so a more informative result is provided by Bayesian
structure discovery. Bayesian structure discovery algorithms estimate posterior
probabilities of structural features, such as edges. We present transfer
learning for Bayesian structure discovery which allows us to explore the shared
and unique structural features among related tasks. Efficient computation
requires that our transfer learning objective factors into local calculations,
which we prove is given by a broad class of transfer biases. Theoretically, we
show the efficiency of our approach. Empirically, we show that compared to
single task learning, transfer learning is better able to positively identify
true edges. We apply the method to whole-brain neuroimaging data.
| Diane Oyen and Terran Lane | null | 1307.2312 | null | null |
Tuned Models of Peer Assessment in MOOCs | cs.LG cs.AI cs.HC stat.AP stat.ML | In massive open online courses (MOOCs), peer grading serves as a critical
tool for scaling the grading of complex, open-ended assignments to courses with
tens or hundreds of thousands of students. But despite promising initial
trials, it does not always deliver accurate results compared to human experts.
In this paper, we develop algorithms for estimating and correcting for grader
biases and reliabilities, showing significant improvement in peer grading
accuracy on real data with 63,199 peer grades from Coursera's HCI course
offerings --- the largest peer grading networks analysed to date. We relate
grader biases and reliabilities to other student factors such as engagement,
performance, and commenting style. We also show that our
model can lead to more intelligent assignment of graders to gradees.
| Chris Piech, Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng,
Daphne Koller | null | 1307.2579 | null | null |
Controlling the Precision-Recall Tradeoff in Differential Dependency
Network Analysis | stat.ML cs.LG | Graphical models have gained a lot of attention recently as a tool for
learning and representing dependencies among variables in multivariate data.
Often, domain scientists are looking specifically for differences among the
dependency networks of different conditions or populations (e.g. differences
between regulatory networks of different species, or differences between
dependency networks of diseased versus healthy populations). The standard
method for finding these differences is to learn the dependency networks for
each condition independently and compare them. We show that this approach is
prone to high false discovery rates (low precision) that can render the
analysis useless. We then show that by imposing a bias towards learning similar
dependency networks for each condition the false discovery rates can be reduced
to acceptable levels, at the cost of finding a reduced number of differences.
Algorithms developed in the transfer learning literature can be used to vary
the strength of the imposed similarity bias and provide a natural mechanism to
smoothly adjust this differential precision-recall tradeoff to cater to the
requirements of the analysis conducted. We present real case studies
(oncological and neurological) where domain experts use the proposed technique
to extract useful differential networks that shed light on the biological
processes involved in cancer and brain function.
| Diane Oyen, Alexandru Niculescu-Mizil, Rachel Ostroff, Alex Stewart,
Vincent P. Clark | null | 1307.2611 | null | null |
Error Rate Bounds in Crowdsourcing Models | stat.ML cs.LG stat.AP | Crowdsourcing is an effective tool for human-powered computation on many
tasks challenging for computers. In this paper, we provide finite-sample
exponential bounds on the error rate (in probability and in expectation) of
hyperplane binary labeling rules under the Dawid-Skene crowdsourcing model. The
bounds can be applied to analyze many common prediction methods, including the
majority voting and weighted majority voting. These bounds could be
useful for controlling the error rate and designing better algorithms. We show
that the oracle Maximum A Posteriori (MAP) rule approximately optimizes our
upper bound on the mean error rate for any hyperplane binary labeling rule, and
propose a simple data-driven weighted majority voting (WMV) rule (called
one-step WMV) that attempts to approximate the oracle MAP and has a provable
theoretical guarantee on the error rate. Moreover, we use simulated and real
data to demonstrate that the data-driven EM-MAP rule is a good approximation to
the oracle MAP rule, and to demonstrate that the mean error rate of the
data-driven EM-MAP rule is also bounded by the mean error rate bound of the
oracle MAP rule with estimated parameters plugged into the bound.
| Hongwei Li, Bin Yu and Dengyong Zhou | null | 1307.2674 | null | null |
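A minimal sketch of the one-step WMV idea above for binary labels in
{-1, +1} (the interface and details such as accuracy clipping are our own
simplifications):

import numpy as np

def one_step_wmv(Z):
    # Z: (workers x items) matrix of labels in {-1, +1}.
    # 1) majority vote per item; 2) estimate each worker's accuracy by
    # agreement with the majority vote; 3) weighted majority vote with
    # log-odds weights, approximating the oracle MAP rule.
    mv = np.sign(Z.sum(axis=0))
    mv[mv == 0] = 1                                  # break ties arbitrarily
    acc = np.clip((Z == mv).mean(axis=1), 1e-3, 1 - 1e-3)
    w = np.log(acc / (1.0 - acc))
    return np.sign(w @ Z)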
Flow-Based Algorithms for Local Graph Clustering | cs.DS cs.LG stat.ML | Given a subset A of vertices of an undirected graph G, the cut-improvement
problem asks us to find a subset S that is similar to A but has smaller
conductance. A very elegant algorithm for this problem has been given by
Andersen and Lang [AL08] and requires solving a small number of
single-commodity maximum flow computations over the whole graph G. In this
paper, we introduce LocalImprove, the first cut-improvement algorithm that is
local, i.e. that runs in time dependent on the size of the input set A rather
than on the size of the entire graph. Moreover, LocalImprove achieves this
local behaviour while essentially matching the same theoretical guarantee as
the global algorithm of Andersen and Lang.
The main application of LocalImprove is to the design of better
local-graph-partitioning algorithms. All previously known local algorithms for
graph partitioning are random-walk based and can only guarantee an output
conductance of O(\sqrt{OPT}) when the target set has conductance OPT \in [0,1].
Very recently, Zhu, Lattanzi and Mirrokni [ZLM13] improved this to O(OPT /
\sqrt{CONN}) where the internal connectivity parameter CONN \in [0,1] is
defined as the reciprocal of the mixing time of the random walk over the graph
induced by the target set. In this work, we show how to use LocalImprove to
obtain a constant approximation O(OPT) as long as CONN/OPT = Omega(1). This
yields the first flow-based algorithm for this problem. Moreover, its performance strictly
outperforms the ones based on random walks and surprisingly matches that of the
best known global algorithm, which is SDP-based, in this parameter regime
[MMV12].
Finally, our results show that spectral methods are not the only viable
approach to the construction of local graph partitioning algorithms and open
the door to the study of algorithms with even better approximation and locality
guarantees.
| Lorenzo Orecchia, Zeyuan Allen Zhu | 10.1137/1.9781611973402.94 | 1307.2855 | null | null |
Semantic Context Forests for Learning-Based Knee Cartilage Segmentation
in 3D MR Images | cs.CV cs.LG q-bio.TO stat.ML | The automatic segmentation of human knee cartilage from 3D MR images is a
useful yet challenging task due to the thin sheet structure of the cartilage
with diffuse boundaries and inhomogeneous intensities. In this paper, we
present an iterative multi-class learning method to segment the femoral, tibial
and patellar cartilage simultaneously, which effectively exploits the spatial
contextual constraints between bone and cartilage, and also between different
cartilages. First, based on the fact that the cartilage grows in only certain
area of the corresponding bone surface, we extract the distance features of not
only to the surface of the bone, but more informatively, to the densely
registered anatomical landmarks on the bone surface. Second, we introduce a set
of iterative discriminative classifiers that at each iteration, probability
comparison features are constructed from the class confidence maps derived by
previously learned classifiers. These features automatically embed the semantic
context information between different cartilages of interest. Validated on a
total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the
proposed approach demonstrates high robustness and accuracy of segmentation in
comparison with existing state-of-the-art MR cartilage segmentation methods.
| Quan Wang, Dijia Wu, Le Lu, Meizhu Liu, Kim L. Boyer and Shaohua Kevin
Zhou | 10.1007/978-3-319-05530-5_11 | 1307.2965 | null | null |
Accuracy of MAP segmentation with hidden Potts and Markov mesh prior
models via Path Constrained Viterbi Training, Iterated Conditional Modes and
Graph Cut based algorithms | cs.LG cs.CV stat.ML | In this paper, we study statistical classification accuracy of two different
Markov field environments for pixelwise image segmentation, considering the
labels of the image as hidden states and solving the estimation of such labels
as a solution of the MAP equation. The emission distribution is assumed the
same in all models, and the difference lies in the Markovian prior hypothesis
made over the labeling random field. The a priori labeling knowledge will be
modeled with a) a second order anisotropic Markov Mesh and b) a classical
isotropic Potts model. Under such models, we will consider three different
segmentation procedures, 2D Path Constrained Viterbi training for the Hidden
Markov Mesh, a Graph Cut based segmentation for the first order isotropic Potts
model, and ICM (Iterated Conditional Modes) for the second order isotropic
Potts model.
We provide a unified view of all three methods, and investigate goodness of
fit for classification, studying the influence of parameter estimation,
computational gain, and extent of automation in the statistical measures
Overall Accuracy, Relative Improvement and Kappa coefficient, allowing robust
and accurate statistical analysis on synthetic and real-life experimental data
coming from the field of Dental Diagnostic Radiography. All algorithms, using
the learned parameters, generate good segmentations with little interaction
when the images have a clear multimodal histogram. Suboptimal learning proves
to be frail in the case of non-distinctive modes, which limits the complexity
of usable models, and hence the achievable error rate as well.
All Matlab code written is provided in a toolbox available for download from
our website, following the Reproducible Research Paradigm.
| Ana Georgina Flesia, Josef Baumgartner, Javier Gimenez, Jorge Martinez | null | 1307.2971 | null | null |
Statistical Active Learning Algorithms for Noise Tolerance and
Differential Privacy | cs.LG cs.DS stat.ML | We describe a framework for designing efficient active learning algorithms
that are tolerant to random classification noise and are
differentially-private. The framework is based on active learning algorithms
that are statistical in the sense that they rely on estimates of expectations
of functions of filtered random examples. It builds on the powerful statistical
query framework of Kearns (1993).
We show that any efficient active statistical learning algorithm can be
automatically converted to an efficient active learning algorithm which is
tolerant to random classification noise as well as other forms of
"uncorrelated" noise. The complexity of the resulting algorithms has
information-theoretically optimal quadratic dependence on $1/(1-2\eta)$, where
$\eta$ is the noise rate.
We show that commonly studied concept classes including thresholds,
rectangles, and linear separators can be efficiently actively learned in our
framework. These results combined with our generic conversion lead to the first
computationally-efficient algorithms for actively learning some of these
concept classes in the presence of random classification noise that provide
exponential improvement in the dependence on the error $\epsilon$ over their
passive counterparts. In addition, we show that our algorithms can be
automatically converted to efficient active differentially-private algorithms.
This leads to the first differentially-private active learning algorithms with
exponential label savings over the passive case.
| Maria Florina Balcan, Vitaly Feldman | null | 1307.3102 | null | null |
Fast gradient descent for drifting least squares regression, with
application to bandits | cs.LG stat.ML | Online learning algorithms often require recomputing least squares
regression estimates of parameters. We study improving the computational
complexity of such algorithms by using stochastic gradient descent (SGD) type
schemes in place of classic regression solvers. We show that SGD schemes
efficiently track the true solutions of the regression problems, even in the
presence of a drift. This finding, coupled with an $O(d)$ improvement in
complexity, where $d$ is the dimension of the data, makes these schemes
attractive for implementation in big data settings. In the case when strong convexity in
the regression problem is guaranteed, we provide bounds on the error both in
expectation and high probability (the latter is often needed to provide
theoretical guarantees for higher level algorithms), despite the drifting least
squares solution. As an example of this case we prove that the regret
performance of an SGD version of the PEGE linear bandit algorithm
[Rusmevichientong and Tsitsiklis 2010] is worse than that of PEGE itself only
by a factor of $O(\log^4 n)$. When strong convexity of the regression problem
cannot be guaranteed, we investigate using an adaptive regularisation. We make
an empirical study of an adaptively regularised, SGD version of LinUCB [Li et
al. 2010] in a news article recommendation application, which uses the large
scale news recommendation dataset from Yahoo! front page. These experiments
show a large gain in computational complexity, with a consistently low tracking
error and click-through-rate (CTR) performance that is $75\%$ close.
| Nathaniel Korda, Prashanth L.A. and R\'emi Munos | null | 1307.3176 | null | null |
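A schematic of the tracking idea above: one SGD step per observation at O(d)
cost instead of re-solving the normal equations (the paper's step-size
handling is more careful than this fixed-step sketch):

import numpy as np

def drifting_sgd(stream, dim, step=0.01):
    # `stream` yields (x, y) pairs from a (possibly drifting) regression
    # problem; the iterate tracks the drifting least-squares minimizer.
    theta = np.zeros(dim)
    for x, y in stream:
        grad = (x @ theta - y) * x     # gradient of 0.5 * (x^T theta - y)^2
        theta -= step * grad
        yield theta.copy()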
Optimal Bounds on Approximation of Submodular and XOS Functions by
Juntas | cs.DS cs.CC cs.LG | We investigate the approximability of several classes of real-valued
functions by functions of a small number of variables ({\em juntas}). Our main
results are tight bounds on the number of variables required to approximate a
function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over
the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close
to a function of $O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})$ variables.
This is an exponential improvement over previously known results. We note that
$\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear
functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close
to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all
functions with low total $\ell_1$-influence and is a real-valued analogue of
Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$
variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the
uniform distribution. For XOS functions, we give a PAC learning algorithm that
runs in time $2^{poly(1/\epsilon)} poly(n)$. For submodular functions we give
an algorithm in the more demanding PMAC learning model (Balcan and Harvey,
2011) which requires a multiplicative $1+\gamma$ factor approximation with
probability at least $1-\epsilon$ over the target distribution. Our uniform
distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} poly(n)$.
This is the first algorithm in the PMAC model that over the uniform
distribution can achieve a constant approximation factor arbitrarily close to 1
for all submodular functions. As follows from the lower bounds in (Feldman et
al., 2013) both of these algorithms are close to optimal. We also give
applications for proper learning, testing and agnostic learning with value
queries of these classes.
| Vitaly Feldman and Jan Vondrak | null | 1307.3301 | null | null |
Unsupervised Gene Expression Data using Enhanced Clustering Method | cs.CE cs.LG | Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions.
Identification of co-expressed genes and coherent patterns is the central goal
in microarray or gene expression data analysis and is an important task in
bioinformatics research. Feature selection is a process to select features
which are more informative. It is one of the important steps in knowledge
discovery. The problem is that not all features are important. Some of the
features may be redundant, and others may be irrelevant and noisy. In this work
an unsupervised gene selection method and the Enhanced Center Initialization
Algorithm (ECIA) with K-Means have been applied for clustering of
gene expression data. The proposed clustering algorithm overcomes the
drawbacks in terms of specifying the optimal number of clusters and
initializing good cluster centroids. Experiments on gene expression data show
that the method identifies compact clusters and performs well in terms of the
Silhouette Coefficient cluster measure.
| T.Chandrasekhar, K.Thangavel, E.Elayaraja, E.N.Sathishkumar | 10.1109/ICE-CCN.2013.6528554 | 1307.3337 | null | null |
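For reference, the Silhouette Coefficient used above as a cluster-quality
measure is the standard one: with $a(i)$ the mean distance from point $i$ to
the other points of its cluster and $b(i)$ the mean distance to the points of
the nearest other cluster,

$$ s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}} \in [-1, 1], $$

and the overall score averages $s(i)$ over all points.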
Energy-aware adaptive bi-Lipschitz embeddings | cs.LG cs.IT math.IT | We propose a dimensionality reducing matrix design based on training data
with constraints on its Frobenius norm and number of rows. Our design criterion
is aimed at preserving the distances between the data points in the
dimensionality reduced space as much as possible relative to their distances in
original data space. This approach can be considered as a deterministic
Bi-Lipschitz embedding of the data points. We introduce a scalable learning
algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by
leveraging game theoretic tools. We also provide a generalization
characterization of our matrix based on our sample data. We use compressive
sensing problems as an example application of our problem, where the Frobenius
norm design constraint translates into the sensing energy.
| Bubacarr Bah, Ali Sadeghian and Volkan Cevher | null | 1307.3457 | null | null |
Performance Analysis of Clustering Algorithms for Gene Expression Data | cs.CE cs.LG | Microarray technology allows thousands of genes to be monitored
simultaneously under various experimental conditions. It is used to
identify co-expressed genes in specific cells or tissues that are actively
used to make proteins. This method is used to analyse gene expression, an
important task in bioinformatics research. Cluster analysis of gene expression
data has proved to be a useful tool for identifying co-expressed genes,
biologically relevant groupings of genes and samples. In this paper we analysed
K-Means with Automatic Generation of Merge Factor for ISODATA (AGMFI) to group
the microarray data sets on the basis of ISODATA. AGMFI generates initial
values for the merge and split factors and the maximum number of merge times,
instead of selecting
efficient values as in ISODATA. The initial seeds for each cluster were
normally chosen either sequentially or randomly. The quality of the final
clusters was found to be influenced by these initial seeds. For real-life
problems, the suitable number of clusters cannot be predicted. To overcome the
above drawback the current research focused on developing the clustering
algorithms without giving the initial number of clusters.
| T.Chandrasekhar, K.Thangavel, E.Elayaraja | null | 1307.3549 | null | null |
MCMC Learning | cs.LG stat.ML | The theory of learning under the uniform distribution is rich and deep, with
connections to cryptography, computational complexity, and the analysis of
boolean functions to name a few areas. This theory however is very limited due
to the fact that the uniform distribution and the corresponding Fourier basis
are rarely encountered as a statistical model.
A family of distributions that vastly generalizes the uniform distribution on
the Boolean cube is that of distributions represented by Markov Random Fields
(MRF). Markov Random Fields are one of the main tools for modeling high
dimensional data in many areas of statistics and machine learning.
In this paper we initiate the investigation of extending central ideas,
methods and algorithms from the theory of learning under the uniform
distribution to the setup of learning concepts given examples from MRF
distributions. In particular, our results establish a novel connection between
properties of MCMC sampling of MRFs and learning under the MRF distribution.
| Varun Kanade, Elchanan Mossel | null | 1307.3617 | null | null |
A Data Management Approach for Dataset Selection Using Human Computation | cs.LG cs.IR | As the number of applications that use machine learning algorithms increases,
the need for labeled data useful for training such algorithms intensifies.
Getting labels typically involves employing humans to do the annotation,
which directly translates to training and working costs. Crowdsourcing
platforms have made labeling cheaper and faster, but they still involve
significant costs, especially for the cases where the potential set of
candidate data to be labeled is large. In this paper we describe a methodology
and a prototype system aiming at addressing this challenge for Web-scale
problems in an industrial setting. We discuss ideas on how to efficiently
select the data to use for training of machine learning algorithms in an
attempt to reduce cost. We show results achieving good performance with reduced
cost by carefully selecting which instances to label. Our proposed algorithm is
presented as part of a framework for managing and generating training datasets,
which includes, among other components, a human computation element.
| Alexandros Ntoulas, Omar Alonso, Vasilis Kandylas | null | 1307.3673 | null | null |
Minimum Error Rate Training and the Convex Hull Semiring | cs.LG | We describe the line search used in the minimum error rate training algorithm
MERT as the "inside score" of a weighted proof forest under a semiring defined
in terms of well-understood operations from computational geometry. This
conception leads to a straightforward complexity analysis of the dynamic
programming MERT algorithms of Macherey et al. (2008) and Kumar et al. (2009)
and practical approaches to implementation.
| Chris Dyer | null | 1307.3675 | null | null |
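A sketch of the semiring in question, representing each value as a set of
lines (slope, intercept) pruned to its upper envelope; a real MERT
implementation would additionally carry back-pointers to recover error
statistics along the line:

def envelope(lines):
    # Keep only lines attaining the maximum somewhere (the upper envelope);
    # for equal slopes keep the larger intercept.
    best = {}
    for m, b in lines:
        if m not in best or b > best[m]:
            best[m] = b
    hull = []
    for m, b in sorted(best.items()):
        while len(hull) >= 2:
            (m1, b1), (m2, b2) = hull[-2], hull[-1]
            # hull[-1] is dominated if its intersections with its
            # neighbours occur out of order
            if (b2 - b1) * (m - m2) <= (b - b2) * (m2 - m1):
                hull.pop()
            else:
                break
        hull.append((m, b))
    return hull

def semiring_plus(a, b):   # union of alternatives, pruned to the envelope
    return envelope(a + b)

def semiring_times(a, b):  # combining scores along a derivation: Minkowski sum
    return envelope([(m1 + m2, b1 + b2) for m1, b1 in a for m2, b2 in b])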
On Analyzing Estimation Errors due to Constrained Connections in Online
Review Systems | cs.SI cs.LG | Constrained connection is the phenomenon that a reviewer can only review a
subset of products/services due to a narrow range of interests or limited
attention capacity. In this work, we study how constrained connections can
affect estimation performance in online review systems (ORS). We find that
reviewers' constrained connections will cause poor estimation performance, both
from the measurements of estimation accuracy and Bayesian Cramer Rao lower
bound.
| Junzhou Zhao | null | 1307.3687 | null | null |
Probabilistic inverse reinforcement learning in unknown environments | stat.ML cs.LG | We consider the problem of learning by demonstration from agents acting in
unknown stochastic Markov environments or games. Our aim is to estimate agent
preferences in order to construct improved policies for the same task that the
agents are trying to solve. To do so, we extend previous probabilistic
approaches for inverse reinforcement learning in known MDPs to the case of
unknown dynamics or opponents. We do this by deriving two simplified
probabilistic models of the demonstrator's policy and utility. For
tractability, we use maximum a posteriori estimation rather than full Bayesian
inference. Under a flat prior, this results in a convex optimisation problem.
We find that the resulting algorithms are highly competitive against a variety
of other methods for inverse reinforcement learning that do have knowledge of
the dynamics.
| Aristide C. Y. Tossou and Christos Dimitrakakis | null | 1307.3785 | null | null |
The Fundamental Learning Problem that Genetic Algorithms with Uniform
Crossover Solve Efficiently and Repeatedly As Evolution Proceeds | cs.NE cs.AI cs.CC cs.DM cs.LG | This paper establishes theoretical bonafides for implicit concurrent
multivariate effect evaluation--implicit concurrency for short---a broad and
versatile computational learning efficiency thought to underlie
general-purpose, non-local, noise-tolerant optimization in genetic algorithms
with uniform crossover (UGAs). We demonstrate that implicit concurrency is
indeed a form of efficient learning by showing that it can be used to obtain
close-to-optimal bounds on the time and queries required to approximately
correctly solve a constrained version (k=7, \eta=1/5) of a recognizable
computational learning problem: learning parities with noisy membership
queries. We argue that a UGA that treats the noisy membership query oracle as a
fitness function can be straightforwardly used to approximately correctly learn
the essential attributes in O(log^1.585 n) queries and O(n log^1.585 n) time,
where n is the total number of attributes. Our proof relies on an accessible
symmetry argument and the use of statistical hypothesis testing to reject a
global null hypothesis at the 10^-100 level of significance. It is, to the best
of our knowledge, the first relatively rigorous identification of efficient
computational learning in an evolutionary algorithm on a non-trivial learning
problem.
| Keki M. Burjorjee | null | 1307.3824 | null | null |
Bayesian Structured Prediction Using Gaussian Processes | stat.ML cs.LG | We introduce a conceptually novel structured prediction model, GPstruct,
which is kernelized, non-parametric and Bayesian, by design. We motivate the
model with respect to existing approaches, among others, conditional random
fields (CRFs), maximum margin Markov networks (M3N), and structured support
vector machines (SVMstruct), which embody only a subset of its properties. We
present an inference procedure based on Markov Chain Monte Carlo. The framework
can be instantiated for a wide range of structured objects such as linear
chains, trees, grids, and other general graphs. As a proof of concept, the
model is benchmarked on several natural language processing tasks and a video
gesture segmentation task involving a linear chain structure. We show
prediction accuracies for GPstruct which are comparable to or exceeding those
of CRFs and SVMstruct.
| Sebastien Bratieres, Novi Quadrianto, Zoubin Ghahramani | null | 1307.3846 | null | null |
On Soft Power Diagrams | cs.LG math.OC stat.ML | Many applications in data analysis begin with a set of points in a Euclidean
space that is partitioned into clusters. Common tasks then are to devise a
classifier deciding which of the clusters a new point is associated with, to
find outliers with respect to the clusters, or to identify the type of
clustering used for the partition.
One of the common kinds of clusterings is the (balanced) least-squares
assignment with respect to a given set of sites. For these, there is a
'separating power diagram' for which each cluster lies in its own cell.
In the present paper, we aim for efficient algorithms for outlier detection
and the computation of thresholds that measure how similar a clustering is to a
least-squares assignment for fixed sites. For this purpose, we devise a new
model for the computation of a 'soft power diagram', which allows a soft
separation of the clusters with 'point counting properties'; e.g. we are able
to prescribe how many points we want to classify as outliers.
As our results hold for a more general non-convex model of free sites, we
describe it and our proofs in this more general way. Its locally optimal
solutions satisfy the aforementioned point counting properties. For our target
applications that use fixed sites, our algorithms are efficiently solvable to
global optimality by linear programming.
| Steffen Borgwardt | null | 1307.3949 | null | null |
Learning Markov networks with context-specific independences | cs.AI cs.LG stat.ML | Learning the Markov network structure from data is a problem that has
received considerable attention in machine learning, and in many other
application fields. This work focuses on a particular approach for this purpose
called independence-based learning. Such approach guarantees the learning of
the correct structure efficiently, whenever data is sufficient for representing
the underlying distribution. However, an important issue with this approach is
that the learned structures are encoded in an undirected graph. The problem
with graphs is that they cannot encode some types of independence relations,
such as the context-specific independences (CSIs). These are a particular case
of conditional independences that hold only for certain assignments of the
conditioning set, in contrast to conditional independences, which must hold for
all assignments. In this work we present CSPC, an independence-based
algorithm for learning structures that encode context-specific independences,
and encoding them in a log-linear model, instead of a graph. The central idea
of CSPC is combining the theoretical guarantees provided by the
independence-based approach with the benefits of representing complex
structures by using features in a log-linear model. We present experiments in a
synthetic case, showing that CSPC is more accurate than the state-of-the-art IB
algorithms when the underlying distribution contains CSIs.
| Alejandro Edera, Federico Schl\"uter, Facundo Bromberg | null | 1307.3964 | null | null |
Modified SPLICE and its Extension to Non-Stereo Data for Noise Robust
Speech Recognition | cs.LG cs.CV stat.ML | In this paper, a modification to the training process of the popular SPLICE
algorithm has been proposed for noise robust speech recognition. The
modification is based on feature correlations, and enables this stereo-based
algorithm to improve the performance in all noise conditions, especially in
unseen cases. Further, the modified framework is extended to work for
non-stereo datasets where clean and noisy training utterances, but not stereo
counterparts, are required. Finally, an MLLR-based computationally efficient
run-time noise adaptation method in SPLICE framework has been proposed. The
modified SPLICE shows 8.6% absolute improvement over SPLICE in Test C of
Aurora-2 database, and 2.93% overall. Non-stereo method shows 10.37% and 6.93%
absolute improvements over Aurora-2 and Aurora-4 baseline models respectively.
Run-time adaptation shows 9.89% absolute improvement in modified framework as
compared to SPLICE for Test C, and 4.96% overall w.r.t. standard MLLR
adaptation on HMMs.
| D. S. Pavan Kumar, N. Vishnu Prasad, Vikas Joshi, S. Umesh | 10.1109/ASRU.2013.6707725 | 1307.4048 | null | null |
A Safe Screening Rule for Sparse Logistic Regression | cs.LG stat.ML | The $\ell_1$-regularized logistic regression (or sparse logistic regression) is a
widely used method for simultaneous classification and feature selection.
Although many recent efforts have been devoted to its efficient implementation,
its application to high dimensional data still poses significant challenges. In
this paper, we present a fast and effective sparse logistic regression
screening rule (Slores) to identify the zero components in the solution vector,
which may lead to a substantial reduction in the number of features to be
entered to the optimization. An appealing feature of Slores is that the data
set needs to be scanned only once to run the screening and its computational
cost is negligible compared to that of solving the sparse logistic regression
problem. Moreover, Slores is independent of solvers for sparse logistic
regression, thus Slores can be integrated with any existing solver to improve
the efficiency. We have evaluated Slores using high-dimensional data sets from
different applications. Extensive experimental results demonstrate that Slores
outperforms the existing state-of-the-art screening rules and the efficiency of
solving sparse logistic regression is improved by one order of magnitude in general.
| Jie Wang, Jiayu Zhou, Jun Liu, Peter Wonka, Jieping Ye | null | 1307.4145 | null | null |
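The geometric idea behind such screening rules can be sketched generically:
if the dual optimum is known to lie in a ball, any feature whose worst-case
correlation with points of that ball stays below the regularization threshold
can be safely discarded. The sketch below shows this generic test, not the
specific ball estimate used by Slores:

import numpy as np

def safe_screen(X, center, radius, lam):
    # Suppose the dual optimum lies in the ball B(center, radius). Then
    # max_{theta in ball} |x_j^T theta| = |x_j^T center| + radius * ||x_j||,
    # and feature j is guaranteed inactive whenever this stays below lam.
    scores = np.abs(X.T @ center) + radius * np.linalg.norm(X, axis=0)
    return scores < lam      # boolean mask of features that can be dropped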
Efficient Mixed-Norm Regularization: Algorithms and Safe Screening
Methods | cs.LG stat.ML | Sparse learning has recently received increasing attention in many areas
including machine learning, statistics, and applied mathematics. The mixed-norm
regularization based on the $\ell_{1,q}$ norm with $q > 1$ is attractive in many
applications of regression and classification in that it facilitates group
sparsity in the model. The resulting optimization problem is, however,
challenging to solve due to the inherent structure of the mixed-norm
regularization. Existing work deals with special cases with $q = 1, 2, \infty$,
and they cannot be easily extended to the general case. In this paper, we
propose an efficient algorithm based on the accelerated gradient method for
solving the general $\ell_{1,q}$-regularized problem. One key building block of the
proposed algorithm is the $\ell_{1,q}$-regularized Euclidean projection ($EP_{1,q}$). Our
theoretical analysis reveals the key properties of $EP_{1,q}$ and illustrates why
$EP_{1,q}$ for the general $q$ is significantly more challenging to solve than the
special cases. Based on our theoretical analysis, we develop an efficient
algorithm for $EP_{1,q}$ by solving two zero finding problems. To further improve
the efficiency of solving large dimensional mixed-norm regularized problems, we
propose a screening method which is able to quickly identify the inactive
groups, i.e., groups that have zero components in the solution. This may lead to
substantial reduction in the number of groups to be entered to the
optimization. An appealing feature of our screening method is that the data set
needs to be scanned only once to run the screening. Compared to that of solving
the mixed-norm regularized problems, the computational cost of our screening
test is negligible. The key of the proposed screening method is an accurate
sensitivity analysis of the dual optimal solution when the regularization
parameter varies. Experimental results demonstrate the efficiency of the
proposed algorithm.
| Jie Wang, Jun Liu, Jieping Ye | null | 1307.4156 | null | null |
Supervised Metric Learning with Generalization Guarantees | cs.LG cs.AI stat.ML | The crucial importance of metrics in machine learning algorithms has led to
an increasing interest in optimizing distance and similarity functions, an area
of research known as metric learning. When data consist of feature vectors, a
large body of work has focused on learning a Mahalanobis distance. Less work
has been devoted to metric learning from structured objects (such as strings or
trees), most of it focusing on optimizing a notion of edit distance. We
identify two important limitations of current metric learning approaches.
First, they make it possible to improve the performance of local algorithms
such as k-nearest neighbors, but metric learning for global algorithms (such as
linear
classifiers) has not been studied so far. Second, the question of the
generalization ability of metric learning methods has been largely ignored. In
this thesis, we propose theoretical and algorithmic contributions that address
these limitations. Our first contribution is the derivation of a new kernel
function built from learned edit probabilities. Our second contribution is a
novel framework for learning string and tree edit similarities inspired by the
recent theory of $(\epsilon,\gamma,\tau)$-good similarity functions. Using uniform stability
arguments, we establish theoretical guarantees for the learned similarity that
give a bound on the generalization error of a linear classifier built from that
similarity. In our third contribution, we extend these ideas to metric learning
from feature vectors by proposing a bilinear similarity learning method that
efficiently optimizes the $(\epsilon,\gamma,\tau)$-goodness. Generalization guarantees are
derived for our approach, highlighting that our method minimizes a tighter
bound on the generalization error of the classifier. Our last contribution is a
framework for establishing generalization bounds for a large class of existing
metric learning algorithms based on a notion of algorithmic robustness.
| Aur\'elien Bellet | null | 1307.4514 | null | null |
From Bandits to Experts: A Tale of Domination and Independence | cs.LG stat.ML | We consider the partial observability model for multi-armed bandits,
introduced by Mannor and Shamir. Our main result is a characterization of
regret in the directed observability model in terms of the dominating and
independence numbers of the observability graph. We also show that in the
undirected case, the learner can achieve optimal regret without even accessing
the observability graph before selecting an action. Both results are shown
using variants of the Exp3 algorithm operating on the observability graph in a
time-efficient manner.
| Noga Alon, Nicol\`o Cesa-Bianchi, Claudio Gentile, Yishay Mansour | null | 1307.4564 | null | null |
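A schematic of an Exp3-style update with graph feedback, where losses are
importance-weighted by the probability that an arm is observed rather than
played (details such as explicit exploration mixing are omitted; losses are
assumed in [0, 1]):

import numpy as np

def exp3_graph(n_arms, neighbors, loss_fn, T, eta=0.1):
    # neighbors[i]: set of arms whose losses are revealed when arm i is
    # played (out-neighbourhood in the observability graph, including i).
    w = np.ones(n_arms)
    for t in range(T):
        p = w / w.sum()
        arm = np.random.choice(n_arms, p=p)
        for j in neighbors[arm]:
            obs_prob = sum(p[i] for i in range(n_arms) if j in neighbors[i])
            w[j] *= np.exp(-eta * loss_fn(t, j) / obs_prob)
    return w / w.sum()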
A New Convex Relaxation for Tensor Completion | cs.LG math.OC stat.ML | We study the problem of learning a tensor from a set of linear measurements.
A prominent methodology for this problem is based on a generalization of trace
norm regularization, which has been used extensively for learning low rank
matrices, to the tensor setting. In this paper, we highlight some limitations
of this approach and propose an alternative convex relaxation on the Euclidean
ball. We then describe a technique to solve the associated regularization
problem, which builds upon the alternating direction method of multipliers.
Experiments on one synthetic dataset and two real datasets indicate that the
proposed method improves significantly over tensor trace norm regularization in
terms of estimation error, while remaining computationally tractable.
| Bernardino Romera-Paredes and Massimiliano Pontil | null | 1307.4653 | null | null |
Efficient Reinforcement Learning in Deterministic Systems with Value
Function Generalization | cs.LG cs.AI cs.SY stat.ML | We consider the problem of reinforcement learning over episodes of a
finite-horizon deterministic system and as a solution propose optimistic
constraint propagation (OCP), an algorithm designed to synthesize efficient
exploration and value function generalization. We establish that when the true
value function lies within a given hypothesis class, OCP selects optimal
actions over all but at most K episodes, where K is the eluder dimension of the
given hypothesis class. We establish further efficiency and asymptotic
performance guarantees that apply even if the true value function does not lie
in the given hypothesis class, for the special case where the hypothesis class
is the span of pre-specified indicator functions over disjoint sets. We also
discuss the computational complexity of OCP and present computational results
involving two illustrative examples.
| Zheng Wen and Benjamin Van Roy | null | 1307.4847 | null | null |
Robust Subspace Clustering via Thresholding | stat.ML cs.IT cs.LG math.IT | The problem of clustering noisy and incompletely observed high-dimensional
data points into a union of low-dimensional subspaces and a set of outliers is
considered. The number of subspaces, their dimensions, and their orientations
are assumed unknown. We propose a simple low-complexity subspace clustering
algorithm, which applies spectral clustering to an adjacency matrix obtained by
thresholding the correlations between data points. In other words, the
adjacency matrix is constructed from the nearest neighbors of each data point
in spherical distance. A statistical performance analysis shows that the
algorithm exhibits robustness to additive noise and succeeds even when the
subspaces intersect. Specifically, our results reveal an explicit tradeoff
between the affinity of the subspaces and the tolerable noise level. We
furthermore prove that the algorithm succeeds even when the data points are
incompletely observed with the number of missing entries allowed to be (up to a
log-factor) linear in the ambient dimension. We also propose a simple scheme
that provably detects outliers, and we present numerical results on real and
synthetic data.
| Reinhard Heckel and Helmut B\"olcskei | null | 1307.4891 | null | null |
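The pipeline is simple enough to sketch in a few lines of Python. The neighbor count q and the exp(-2 arccos(.)) weighting on kept correlations are assumptions filled in for illustration rather than a transcription of the paper.

import numpy as np
from sklearn.cluster import SpectralClustering

def threshold_subspace_clustering(X, n_clusters, q):
    # X: (ambient_dim, n_points); normalize each point to the unit sphere.
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
    C = np.abs(Xn.T @ Xn)           # absolute correlations between points
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(C.shape[0]):     # keep only the q nearest neighbors of each point
        nn = np.argsort(C[i])[-q:]
        A[i, nn] = np.exp(-2.0 * np.arccos(np.clip(C[i, nn], 0.0, 1.0)))
    A = A + A.T                     # symmetrize before spectral clustering
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(A)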
Large-scale Multi-label Learning with Missing Labels | cs.LG | The multi-label classification problem has generated significant interest in
recent years. However, existing approaches do not adequately address two key
challenges: (a) the ability to tackle problems with a large number (say
millions) of labels, and (b) the ability to handle data with missing labels. In
this paper, we directly address both these problems by studying the multi-label
problem in a generic empirical risk minimization (ERM) framework. Our
framework, despite being simple, is surprisingly able to encompass several
recent label-compression based methods which can be derived as special cases of
our method. To optimize the ERM problem, we develop techniques that exploit the
structure of specific loss functions - such as the squared loss function - to
offer efficient algorithms. We further show that our learning framework admits
formal excess risk bounds even in the presence of missing labels. Our risk
bounds are tight and demonstrate better generalization performance for low-rank
promoting trace-norm regularization when compared to (rank insensitive)
Frobenius norm regularization. Finally, we present extensive empirical results
on a variety of benchmark datasets and show that our methods perform
significantly better than existing label compression based methods and can
scale up to very large datasets such as the Wikipedia dataset.
| Hsiang-Fu Yu and Prateek Jain and Purushottam Kar and Inderjit S.
Dhillon | null | 1307.5101 | null | null |
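To make the ERM framing concrete, here is a toy sketch of squared-loss ERM over only the observed label entries, with low rank imposed through an explicit factorization Z = X U V^T (a common surrogate for trace-norm regularization). This is an illustrative sketch, not the paper's actual solver.

import numpy as np

def lowrank_multilabel_erm(X, Y, mask, k, lam=1e-2, lr=1e-2, iters=500, seed=0):
    # X: (n, d) features; Y: (n, L) labels; mask: boolean (n, L), True where
    # the label is observed. Learn rank-k factors U (d, k) and V (L, k).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.01 * rng.standard_normal((d, k))
    V = 0.01 * rng.standard_normal((Y.shape[1], k))
    for _ in range(iters):
        R = (X @ U @ V.T - Y) * mask        # residual on observed entries only
        XU = X @ U
        gU = X.T @ R @ V / n + lam * U      # gradients of the regularized loss
        gV = R.T @ XU / n + lam * V
        U -= lr * gU
        V -= lr * gV
    return U, V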
Model-Based Policy Gradients with Parameter-Based Exploration by
Least-Squares Conditional Density Estimation | stat.ML cs.LG | The goal of reinforcement learning (RL) is to let an agent learn an optimal
control policy in an unknown environment so that future expected rewards are
maximized. The model-free RL approach directly learns the policy based on data
samples. Although using many samples tends to improve the accuracy of policy
learning, collecting a large number of samples is often expensive in practice.
On the other hand, the model-based RL approach first estimates the transition
model of the environment and then learns the policy based on the estimated
transition model. Thus, if the transition model is accurately learned from a
small amount of data, the model-based approach can perform better than the
model-free approach. In this paper, we propose a novel model-based RL method by
combining a recently proposed model-free policy search method called policy
gradients with parameter-based exploration and the state-of-the-art transition
model estimator called least-squares conditional density estimation. Through
experiments, we demonstrate the practical usefulness of the proposed method.
| Syogo Mori, Voot Tangkaratt, Tingting Zhao, Jun Morimoto, and Masashi
Sugiyama | null | 1307.5118 | null | null |
Random Binary Mappings for Kernel Learning and Efficient SVM | cs.CV cs.LG stat.ML | Support Vector Machines (SVMs) are powerful learners that have led to
state-of-the-art results in various computer vision problems. SVMs suffer,
however, from drawbacks in selecting the right kernel, which depends on the
image descriptors, as well as in computational and memory efficiency. This
paper introduces a novel kernel that addresses these issues well. The kernel is
learned by exploiting a large number of low-complexity, randomized binary
mappings of the input features. This leads to an efficient SVM, while also alleviating the task
of kernel selection. We demonstrate the capabilities of our kernel on 6
standard vision benchmarks, in which we combine several common image
descriptors, namely histograms (Flowers17 and Daimler), attribute-like
descriptors (UCI, OSR, and a-VOC08), and Sparse Quantization (ImageNet).
Results show that our kernel learning adapts well to the different descriptor
types, achieving the performance of the kernels specifically tuned for each
image descriptor, and with similar evaluation cost as efficient SVM methods.
| Gemma Roig, Xavier Boix, Luc Van Gool | null | 1307.5161 | null | null |
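One plausible reading of the construction is sketched below: threshold many cheap random projections of the descriptor to obtain binary features, then train a linear SVM on the resulting map. The median thresholding rule and the bit count are illustrative assumptions, not the paper's exact recipe.

import numpy as np
from sklearn.svm import LinearSVC

def random_binary_map(X, n_bits, seed=0):
    # X: (n_samples, d). Each bit thresholds one random projection at its median.
    # (At test time the same projections and thresholds must be reused.)
    rng = np.random.default_rng(seed)
    P = X @ rng.standard_normal((X.shape[1], n_bits))
    return (P > np.median(P, axis=0)).astype(np.float64)

# Usage: the binary features feed a plain linear SVM, sidestepping kernel selection.
X = np.random.randn(200, 32); y = (X[:, 0] > 0).astype(int)
clf = LinearSVC(C=1.0).fit(random_binary_map(X, n_bits=1024), y)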
Kernel Adaptive Metropolis-Hastings | stat.ML cs.LG | A Kernel Adaptive Metropolis-Hastings algorithm is introduced, for the
purpose of sampling from a target distribution with strongly nonlinear support.
The algorithm embeds the trajectory of the Markov chain into a reproducing
kernel Hilbert space (RKHS), such that the feature space covariance of the
samples informs the choice of proposal. The procedure is computationally
efficient and straightforward to implement, since the RKHS moves can be
integrated out analytically: our proposal distribution in the original space is
a normal distribution whose mean and covariance depend on where the current
sample lies in the support of the target distribution, and adapts to its local
covariance structure. Furthermore, the procedure requires neither gradients nor
any other higher order information about the target, making it particularly
attractive for contexts such as Pseudo-Marginal MCMC. Kernel Adaptive
Metropolis-Hastings outperforms competing fixed and adaptive samplers on
multivariate, highly nonlinear target distributions, arising in both real-world
and synthetic examples. Code may be downloaded at
https://github.com/karlnapf/kameleon-mcmc.
| Dino Sejdinovic, Heiko Strathmann, Maria Lomeli Garcia, Christophe
Andrieu, Arthur Gretton | null | 1307.5302 | null | null |
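Since the paper links its code, the sketch below is just a condensed illustration of the described proposal: a Gaussian centered at the current state whose covariance combines an isotropic term with a term built from Gaussian-kernel gradients at the current point over a subsample of the chain history. Constants and the exact gradient scaling are approximations from the abstract, not a transcription.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def kamh_step(x, log_target, Z, gamma2=0.1, nu2=1.0, sigma=1.0, rng=None):
    # x: current state (d,); Z: (m, d) subsample of past chain states.
    rng = rng or np.random.default_rng()
    def cov_at(y):
        k = np.exp(-np.sum((Z - y) ** 2, axis=1) / (2.0 * sigma ** 2))
        M = (Z - y) * k[:, None] / sigma ** 2          # kernel gradients at y
        H = np.eye(len(Z)) - 1.0 / len(Z)              # centering matrix
        return gamma2 * np.eye(len(y)) + nu2 * M.T @ H @ M
    C_fwd = cov_at(x)
    x_new = rng.multivariate_normal(x, C_fwd)
    # The proposal is location-dependent, so both directions enter the MH ratio.
    log_a = (log_target(x_new) - log_target(x)
             + mvn.logpdf(x, x_new, cov_at(x_new))
             - mvn.logpdf(x_new, x, C_fwd))
    return x_new if np.log(rng.uniform()) < log_a else x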
Towards Distribution-Free Multi-Armed Bandits with Combinatorial
Strategies | cs.LG | In this paper we study a generalized version of the classical multi-armed
bandit (MAB) problem by allowing for arbitrary constraints on constituent bandits at
each decision point. The motivation of this study comes from many situations
that involve repeatedly making choices subject to arbitrary constraints in an
uncertain environment: for instance, regularly deciding which advertisements to
display online in order to gain high click-through-rate without knowing user
preferences, or what route to drive home each day under uncertain weather and
traffic conditions. Assume that there are $K$ unknown random variables (RVs),
i.e., arms, each evolving as an \emph{i.i.d.} stochastic process over time. At
each decision epoch, we select a strategy, i.e., a subset of RVs, subject to
arbitrary constraints on constituent RVs.
We then gain a reward that is a linear combination of observations on
selected RVs.
The performance of prior results for this problem depends heavily on the
distribution of strategies generated by the corresponding learning policy. For
example, if the reward difference between the best and second-best strategies
approaches zero, prior results may incur arbitrarily large regret.
Meanwhile, when there is an exponential number of possible strategies at each
decision point, a naive extension of a prior distribution-free policy would
perform poorly in terms of regret, computation, and space complexity.
To this end, we propose an efficient Distribution-Free Learning (DFL) policy
that achieves zero regret, regardless of the probability distribution of the
resultant strategies.
Our learning policy has both $O(K)$ time complexity and $O(K)$ space
complexity. We further show that even if finding the optimal strategy at each
decision point is NP-hard, our policy still accommodates approximate solutions
while retaining near-zero regret.
| Xiang-yang Li, Shaojie Tang and Yaqin Zhou | null | 1307.5438 | null | null |
Non-stationary Stochastic Optimization | math.PR cs.LG stat.ML | We consider a non-stationary variant of a sequential stochastic optimization
problem, in which the underlying cost functions may change along the horizon.
We propose a measure, termed variation budget, that controls the extent of said
change, and study how restrictions on this budget impact achievable
performance. We identify sharp conditions under which it is possible to achieve
long-run-average optimality and more refined performance measures such as rate
optimality that fully characterize the complexity of such problems. In doing
so, we also establish a strong connection between two rather disparate strands
of literature: adversarial online convex optimization; and the more traditional
stochastic approximation paradigm (couched in a non-stationary setting). This
connection is the key to deriving well-performing policies in the latter, by
leveraging structure of optimal policies in the former. Finally, tight bounds
on the minimax regret allow us to quantify the "price of non-stationarity,"
which mathematically captures the added complexity embedded in a temporally
changing environment versus a stationary one.
| O. Besbes, Y. Gur, and A. Zeevi | 10.1287/opre.2015.1408 | 1307.5449 | null | null |