title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Infinite Mixed Membership Matrix Factorization | cs.LG cs.IR | Rating and recommendation systems have become a popular application area for
applying a suite of machine learning techniques. Current approaches rely
primarily on probabilistic interpretations and extensions of matrix
factorization, which factorizes a user-item ratings matrix into latent user and
item vectors. Most of these methods fail to model significant variations in
item ratings from otherwise similar users, a phenomenon known as the "Napoleon
Dynamite" effect. Recent efforts have addressed this problem by adding a
contextual bias term to the rating, which captures the mood under which a user
rates an item or the context in which an item is rated by a user. In this work,
we extend this model in a nonparametric sense by learning the optimal number of
moods or contexts from the data, and derive Gibbs sampling inference procedures
for our model. We evaluate our approach on the MovieLens 1M dataset, and show
significant improvements over the optimal parametric baseline, more than twice
the improvements previously encountered for this task. We also extract and
evaluate a DBLP dataset, wherein we predict the number of papers co-authored by
two authors, and present improvements over the parametric baseline on this
alternative domain as well.
| Avneesh Saluja, Mahdi Pakdaman, Dongzhen Piao, Ankur P. Parikh | null | 1401.3413 | null | null |
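
To make the factorization-plus-contextual-bias idea above concrete, here is a minimal, hypothetical sketch (the variable names, dimensions, and random initialisation are ours; the paper learns these quantities and infers the number of moods nonparametrically via Gibbs sampling):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, n_moods = 100, 50, 8, 3

U = rng.normal(size=(n_users, k))     # latent user vectors
V = rng.normal(size=(n_items, k))     # latent item vectors
mood_bias = rng.normal(size=n_moods)  # contextual bias per mood/context

def predict_rating(u, i, mood):
    """Standard dot-product factorization plus a contextual bias term."""
    return U[u] @ V[i] + mood_bias[mood]

print(predict_rating(u=0, i=1, mood=2))
```
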
Analogical Dissimilarity: Definition, Algorithms and Two Experiments in
Machine Learning | cs.LG cs.AI | This paper defines the notion of analogical dissimilarity between four
objects, with a special focus on objects structured as sequences. Firstly, it
studies the case where the four objects have a null analogical dissimilarity,
i.e. are in analogical proportion. Secondly, when one of these objects is
unknown, it gives algorithms to compute it. Thirdly, it tackles the problem of
defining analogical dissimilarity, which is a measure of how far four objects
are from being in analogical proportion. In particular, when objects are
sequences, it gives a definition and an algorithm based on an optimal alignment
of the four sequences. It also gives learning algorithms, i.e. methods to find
the triple of objects in a learning sample which has the least analogical
dissimilarity with a given object. Two practical experiments are described: the
first is a classification problem on benchmarks of binary and nominal data, the
second shows how the generation of sequences by solving analogical equations
enables a handwritten character recognition system to rapidly be adapted to a
new writer.
| Laurent Miclet, Sabri Bayoudh, Arnaud Delhay | 10.1613/jair.2519 | 1401.3427 | null | null |
Latent Tree Models and Approximate Inference in Bayesian Networks | cs.LG | We propose a novel method for approximate inference in Bayesian networks
(BNs). The idea is to sample data from a BN, learn a latent tree model (LTM)
from the data offline, and when online, make inference with the LTM instead of
the original BN. Because LTMs are tree-structured, inference takes linear time.
In the meantime, they can represent complex relationships among leaf nodes and
hence the approximation accuracy is often good. Empirical evidence shows that
our method can achieve good approximation accuracy at low online computational
cost.
| Yi Wang, Nevin L. Zhang, Tao Chen | 10.1613/jair.2530 | 1401.3429 | null | null |
A Rigorously Bayesian Beam Model and an Adaptive Full Scan Model for
Range Finders in Dynamic Environments | cs.AI cs.LG | This paper proposes and experimentally validates a Bayesian network model of
a range finder adapted to dynamic environments. All modeling assumptions are
rigorously explained, and all model parameters have a physical interpretation.
This approach results in a transparent and intuitive model. With respect to the
state of the art beam model this paper: (i) proposes a different functional
form for the probability of range measurements caused by unmodeled objects,
(ii) intuitively explains the discontinuity encountered in the state of the art
beam model, and (iii) reduces the number of model parameters, while maintaining
the same representational power for experimental data. The proposed beam model
is called RBBM, short for Rigorously Bayesian Beam Model. A maximum likelihood
and a variational Bayesian estimator (both based on expectation-maximization)
are proposed to learn the model parameters.
Furthermore, the RBBM is extended to a full scan model in two steps: first,
to a full scan model for static environments and next, to a full scan model for
general, dynamic environments. The full scan model accounts for the dependency
between beams and adapts to the local sample density when using a particle
filter. In contrast to Gaussian-based state of the art models, the proposed
full scan model uses a sample-based approximation. This sample-based
approximation enables handling dynamic environments and capturing
multi-modality, which occurs even in simple static environments.
| Tinne De Laet, Joris De Schutter, Herman Bruyninckx | 10.1613/jair.2540 | 1401.3432 | null | null |
Adaptive Stochastic Resource Control: A Machine Learning Approach | cs.LG | The paper investigates stochastic resource allocation problems with scarce,
reusable resources and non-preemptive, time-dependent, interconnected tasks.
This approach is a natural generalization of several standard resource
management problems, such as scheduling and transportation problems. First,
reactive solutions are considered and defined as control policies of suitably
reformulated Markov decision processes (MDPs). We argue that this reformulation
has several favorable properties: it has finite state and action
spaces, it is aperiodic (hence all policies are proper), and the space of control
policies can be safely restricted. Next, approximate dynamic programming (ADP)
methods, such as fitted Q-learning, are suggested for computing an efficient
control policy. In order to compactly maintain the cost-to-go function, two
representations are studied: hash tables and support vector regression (SVR),
particularly, nu-SVRs. Several additional improvements, such as the application
of limited-lookahead rollout algorithms in the initial phases, action space
decomposition, task clustering and distributed sampling are investigated, too.
Finally, experimental results on both benchmark and industry-related data are
presented.
| Bal\'azs Csan\'ad Cs\'aji, L\'aszl\'o Monostori | 10.1613/jair.2548 | 1401.3434 | null | null |
Transductive Rademacher Complexity and its Applications | cs.LG cs.AI stat.ML | We develop a technique for deriving data-dependent error bounds for
transductive learning algorithms based on transductive Rademacher complexity.
Our technique is based on a novel general error bound for transduction in terms
of transductive Rademacher complexity, together with a novel bounding technique
for Rademacher averages for particular algorithms, in terms of their
"unlabeled-labeled" representation. This technique is relevant to many advanced
graph-based transductive algorithms and we demonstrate its effectiveness by
deriving error bounds to three well known algorithms. Finally, we present a new
PAC-Bayesian bound for mixtures of transductive algorithms based on our
Rademacher bounds.
| Ran El-Yaniv, Dmitry Pechyony | 10.1613/jair.2587 | 1401.3441 | null | null |
Anytime Induction of Low-cost, Low-error Classifiers: a Sampling-based
Approach | cs.LG | Machine learning techniques are gaining prevalence in the production of a
wide range of classifiers for complex real-world applications with nonuniform
testing and misclassification costs. The increasing complexity of these
applications poses a real challenge to resource management during learning and
classification. In this work we introduce ACT (anytime cost-sensitive tree
learner), a novel framework for operating in such complex environments. ACT is
an anytime algorithm that allows learning time to be increased in return for
lower classification costs. It builds a tree top-down and exploits additional
time resources to obtain better estimations for the utility of the different
candidate splits. Using sampling techniques, ACT approximates the cost of the
subtree under each candidate split and favors the one with a minimal cost. As a
stochastic algorithm, ACT is expected to be able to escape local minima, in
which greedy methods may be trapped. Experiments with a variety of datasets
were conducted to compare ACT to the state-of-the-art cost-sensitive tree
learners. The results show that for the majority of domains ACT produces
significantly less costly trees. ACT also exhibits good anytime behavior with
diminishing returns.
| Saher Esmeir, Shaul Markovitch | 10.1613/jair.2602 | 1401.3447 | null | null |
A Multiagent Reinforcement Learning Algorithm with Non-linear Dynamics | cs.LG cs.MA | Several multiagent reinforcement learning (MARL) algorithms have been
proposed to optimize agents' decisions. Due to the complexity of the problem,
the majority of the previously developed MARL algorithms assumed agents either
had some knowledge of the underlying game (such as Nash equilibria) and/or
observed other agents' actions and the rewards they received.
We introduce a new MARL algorithm called the Weighted Policy Learner (WPL),
which allows agents to reach a Nash Equilibrium (NE) in benchmark
2-player-2-action games with minimum knowledge. Using WPL, the only feedback an
agent needs is its own local reward (the agent does not observe other agents'
actions or rewards). Furthermore, WPL does not assume that agents know the
underlying game or the corresponding Nash Equilibrium a priori. We
experimentally show that our algorithm converges in benchmark
two-player-two-action games. We also show that our algorithm converges in the
challenging Shapley's game, where previous MARL algorithms failed to converge
without knowing the underlying game or the NE. Furthermore, we show that WPL
outperforms the state-of-the-art algorithms in a more realistic setting of 100
agents interacting and learning concurrently.
An important aspect of understanding the behavior of a MARL algorithm is
analyzing the dynamics of the algorithm: how the policies of multiple learning
agents evolve over time as agents interact with one another. Such an analysis
not only verifies whether agents using a given MARL algorithm will eventually
converge, but also reveals the behavior of the MARL algorithm prior to
convergence. We analyze our algorithm in two-player-two-action games and show
that symbolically proving WPL's convergence is difficult because of the
non-linear nature of WPL's dynamics, unlike previous MARL algorithms that had
either linear or piece-wise-linear dynamics. Instead, we numerically solve the
differential equations of WPL's dynamics and compare the solution to the dynamics of
previous MARL algorithms.
| Sherief Abdallah, Victor Lesser | 10.1613/jair.2628 | 1401.3454 | null | null |
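
As a rough illustration of the non-linear dynamics mentioned above, here is our paraphrase of a WPL-style update (a simplified reading of the rule, not the authors' exact algorithm): positive gradient estimates are weighted by 1 - pi(a) and negative ones by pi(a), so updates slow down near the simplex boundary.

```python
import numpy as np

def wpl_step(pi, grad, eta=0.01):
    """One Weighted-Policy-Learner-style update (our simplification).
    pi: current policy (probability vector); grad: per-action gradient
    estimate built from local rewards only."""
    w = np.where(grad > 0, 1.0 - pi, pi)  # weighting makes the dynamics non-linear
    pi = np.clip(pi + eta * grad * w, 1e-6, None)
    return pi / pi.sum()                  # project back onto the simplex

print(wpl_step(np.array([0.5, 0.5]), np.array([0.2, -0.2])))
```
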
Learning Bayesian Network Equivalence Classes with Ant Colony
Optimization | cs.NE cs.AI cs.LG | Bayesian networks are a useful tool in the representation of uncertain
knowledge. This paper proposes a new algorithm called ACO-E, to learn the
structure of a Bayesian network. It does this by conducting a search through
the space of equivalence classes of Bayesian networks using Ant Colony
Optimization (ACO). To this end, two novel extensions of traditional ACO
techniques are proposed and implemented. Firstly, multiple types of moves are
allowed. Secondly, moves can be given in terms of indices that are not based on
construction graph nodes. The results of testing show that ACO-E performs
better than a greedy search and other state-of-the-art and metaheuristic
algorithms whilst searching in the space of equivalence classes.
| R\'on\'an Daly, Qiang Shen | 10.1613/jair.2681 | 1401.3464 | null | null |
Efficient Markov Network Structure Discovery Using Independence Tests | cs.LG cs.AI stat.ML | We present two algorithms for learning the structure of a Markov network from
data: GSMN* and GSIMN. Both algorithms use statistical independence tests to
infer the structure by successively constraining the set of structures
consistent with the results of these tests. Until very recently, algorithms for
structure learning were based on maximum likelihood estimation, which has been
proved to be NP-hard for Markov networks due to the difficulty of estimating
the parameters of the network, needed for the computation of the data
likelihood. The independence-based approach does not require the computation of
the likelihood, and thus both GSMN* and GSIMN can compute the structure
efficiently (as shown in our experiments). GSMN* is an adaptation of the
Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of
Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl's
well-known properties of the conditional independence relation to infer novel
independences from known ones, thus avoiding the performance of statistical
tests to estimate them. To accomplish this efficiently GSIMN uses the Triangle
theorem, also introduced in this work, which is a simplified version of the set
of Markov axioms. Experimental comparisons on artificial and real-world data
sets show GSIMN can yield significant savings with respect to GSMN*, while
generating a Markov network with comparable or in some cases improved quality.
We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH,
that produces all possible conditional independences resulting from repeatedly
applying Pearl's theorems on the known conditional independence tests. The
results of this comparison show that GSIMN, by the sole use of the Triangle
theorem, is nearly optimal in terms of the set of independence tests that it
infers.
| Facundo Bromberg, Dimitris Margaritis, Vasant Honavar | 10.1613/jair.2773 | 1401.3478 | null | null |
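
The independence-test-driven approach can be illustrated with a deliberately degenerate sketch: a marginal chi-square screen over pairs of binary variables. The actual GSMN*/GSIMN algorithms use conditional tests inside a grow-shrink search and infer extra independences via the Triangle theorem; none of that is reproduced here.

```python
import numpy as np
from scipy.stats import chi2_contingency

def dependent(x, y, alpha=0.05):
    """Marginal chi-square independence test for two binary variables."""
    table = np.array([[np.sum((x == a) & (y == b)) for b in (0, 1)]
                      for a in (0, 1)])
    _, p, _, _ = chi2_contingency(table)
    return p < alpha

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2000)
b = a ^ (rng.random(2000) < 0.1)  # noisy copy of a -> dependent
c = rng.integers(0, 2, 2000)      # unrelated -> independent
print(dependent(a, b), dependent(a, c))  # expected: True False
```
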
Complex Question Answering: Unsupervised Learning Approaches and
Experiments | cs.CL cs.IR cs.LG | Complex questions that require inferencing and synthesizing information from
multiple documents can be seen as a kind of topic-oriented, informative
multi-document summarization where the goal is to produce a single text as a
compressed version of a set of documents with a minimum loss of relevant
information. In this paper, we experiment with one empirical method and two
unsupervised statistical machine learning techniques: K-means and Expectation
Maximization (EM), for computing relative importance of the sentences. We
compare the results of these approaches. Our experiments show that the
empirical approach outperforms the other two techniques and EM performs better
than K-means. However, the performance of these approaches depends entirely on
the feature set used and the weighting of these features. In order to measure
the importance and relevance to the user query we extract different kinds of
features (i.e. lexical, lexical semantic, cosine similarity, basic element,
tree kernel based syntactic and shallow-semantic) for each of the document
sentences. We use a local search technique to learn the weights of the
features. To the best of our knowledge, no study has used tree kernel functions
to encode syntactic/semantic information for more complex tasks such as
computing the relatedness between the query sentences and the document
sentences in order to generate query-focused summaries (or answers to complex
questions). For each of our methods of generating summaries (i.e. empirical,
K-means and EM) we show the effects of syntactic and shallow-semantic features
over the bag-of-words (BOW) features.
| Yllias Chali, Shafiq Rayhan Joty, Sadid A. Hasan | 10.1613/jair.2784 | 1401.3479 | null | null |
Content Modeling Using Latent Permutations | cs.IR cs.CL cs.LG | We present a novel Bayesian topic model for learning discourse-level document
structure. Our model leverages insights from discourse theory to constrain
latent topic assignments in a way that reflects the underlying organization of
document topics. We propose a global model in which both topic selection and
ordering are biased to be similar across a collection of related documents. We
show that this space of orderings can be effectively represented using a
distribution over permutations called the Generalized Mallows Model. We apply
our method to three complementary discourse-level tasks: cross-document
alignment, document segmentation, and information ordering. Our experiments
show that incorporating our permutation-based model in these applications
yields substantial improvements in performance over previously proposed
methods.
| Harr Chen, S.R.K. Branavan, Regina Barzilay, David R. Karger | 10.1613/jair.2830 | 1401.3488 | null | null |
Highly comparative feature-based time-series classification | cs.LG cs.AI cs.DB physics.data-an q-bio.QM | A highly comparative, feature-based approach to time series classification is
introduced that uses an extensive database of algorithms to extract thousands
of interpretable features from time series. These features are derived from
across the scientific time-series analysis literature, and include summaries of
time series in terms of their correlation structure, distribution, entropy,
stationarity, scaling properties, and fits to a range of time-series models.
After computing thousands of features for each time series in a training set,
those that are most informative of the class structure are selected using
greedy forward feature selection with a linear classifier. The resulting
feature-based classifiers automatically learn the differences between classes
using a reduced number of time-series properties, and circumvent the need to
calculate distances between time series. Representing time series in this way
results in orders of magnitude of dimensionality reduction, allowing the method
to perform well on very large datasets containing long time series or time
series of different lengths. For many of the datasets studied, classification
performance exceeded that of conventional instance-based classifiers, including
one-nearest-neighbor classifiers using Euclidean distances and dynamic time
warping and, most importantly, the features selected provide an understanding
of the properties of the dataset, insight that can guide further scientific
investigation.
| Ben D. Fulcher and Nick S. Jones | 10.1109/TKDE.2014.2316504 | 1401.3531 | null | null |
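
A toy version of this pipeline, with four hand-picked global features standing in for the paper's thousands, and a greedy forward selector wrapped around a linear classifier (scikit-learn assumed; everything here is our simplified illustration, not the authors' feature database):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def features(ts):
    """A tiny stand-in for the paper's thousands of time-series features."""
    return np.array([ts.mean(), ts.std(),
                     np.corrcoef(ts[:-1], ts[1:])[0, 1],  # lag-1 autocorrelation
                     np.abs(np.diff(ts)).mean()])

def greedy_forward_select(X, y, n_keep=2):
    chosen = []
    for _ in range(n_keep):
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, chosen + [j]], y, cv=3).mean()
            if score > best_score:
                best, best_score = j, score
        chosen.append(best)
    return chosen

# Synthetic example: noisy sine class vs. random-walk class.
rng = np.random.default_rng(1)
series = [np.sin(np.linspace(0, 8, 200)) + rng.normal(0, .3, 200) for _ in range(20)] + \
         [np.cumsum(rng.normal(0, .3, 200)) for _ in range(20)]
X = np.array([features(s) for s in series])
y = np.array([0] * 20 + [1] * 20)
print(greedy_forward_select(X, y))
```
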
A Supervised Goal Directed Algorithm in Economical Choice Behaviour: An
Actor-Critic Approach | cs.GT cs.AI cs.LG | This paper aims to find an algorithmic structure that can predict and
explain economical choice behaviour, particularly under uncertainty (random
policies), by adapting the prevalent Actor-Critic learning method to the
requirements that have emerged since the field of neuroeconomics was
established. Whilst reviewing some basics of neuroeconomics that seem relevant
to our discussion, we outline some of the important works that have so far
been done to simulate choice-making processes. Motivated by neurological
findings that suggest the existence of two specific functions, 'rewards' and
'beliefs', executed through the basal ganglia up to subcortical areas, we
offer a modified version of the actor-critic algorithm to shed light on the
relation between these functions and, most importantly, to resolve what is
referred to as a challenge for actor-critic algorithms: the lack of
inheritance or hierarchy, which prevents the system from evolving in
continuous-time tasks where convergence may not emerge.
| Keyvan Yahya | null | 1401.3579 | null | null |
A Brief History of Learning Classifier Systems: From CS-1 to XCS | cs.NE cs.LG | Modern Learning Classifier Systems can be characterized by their use of rule
accuracy as the utility metric for the search algorithm(s) discovering useful
rules. Such searching typically takes place within the restricted space of
co-active rules for efficiency. This paper gives an historical overview of the
evolution of such systems up to XCS, and then reviews some of the subsequent
developments of XCS to different types of learning.
| Larry Bull | null | 1401.3607 | null | null |
Bayesian Conditional Density Filtering | stat.ML cs.LG stat.CO | We propose a Conditional Density Filtering (C-DF) algorithm for efficient
online Bayesian inference. C-DF adapts MCMC sampling to the online setting,
sampling from approximations to conditional posterior distributions obtained by
propagating surrogate conditional sufficient statistics (a function of data and
parameter estimates) as new data arrive. These quantities eliminate the need to
store or process the entire dataset simultaneously and offer a number of
desirable features. Often, these include a reduction in memory requirements and
runtime and improved mixing, along with state-of-the-art parameter inference
and prediction. These improvements are demonstrated through several
illustrative examples including an application to high dimensional compressed
regression. Finally, we show that C-DF samples converge to the target posterior
distribution asymptotically as sampling proceeds and more data arrives.
| Shaan Qamar, Rajarshi Guhaniyogi, David B. Dunson | null | 1401.3632 | null | null |
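
The paper's C-DF propagates surrogate conditional sufficient statistics as data arrive. A much simpler conjugate analogue, online inference of a Gaussian mean with known variance, illustrates the bookkeeping (this is a toy of ours, not the C-DF sampler itself):

```python
import numpy as np

# Online update of a Gaussian-mean posterior via sufficient statistics
# (running sum and count), never storing the raw stream. This shows why
# propagating sufficient statistics removes the need to keep the full dataset.
prior_mean, prior_prec, noise_prec = 0.0, 1.0, 4.0
s, n = 0.0, 0  # sufficient statistics

rng = np.random.default_rng(0)
for x in rng.normal(2.0, 0.5, size=1000):  # data arriving online
    s += x
    n += 1

post_prec = prior_prec + n * noise_prec
post_mean = (prior_prec * prior_mean + noise_prec * s) / post_prec
print(post_mean)  # close to the true mean 2.0
```
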
Coordinate Descent with Online Adaptation of Coordinate Frequencies | stat.ML cs.LG | Coordinate descent (CD) algorithms have become the method of choice for
solving a number of optimization problems in machine learning. They are
particularly popular for training linear models, including linear support
vector machine classification, LASSO regression, and logistic regression.
We consider general CD with non-uniform selection of coordinates. Instead of
fixing selection frequencies beforehand we propose an online adaptation
mechanism for this important parameter, called the adaptive coordinate
frequencies (ACF) method. This mechanism removes the need to estimate optimal
coordinate frequencies beforehand, and it automatically reacts to changing
requirements during an optimization run.
We demonstrate the usefulness of our ACF-CD approach for a variety of
optimization problems arising in machine learning contexts. Our algorithm
offers significant speed-ups over state-of-the-art training methods.
| Tobias Glasmachers and \"Ur\"un Dogan | null | 1401.3737 | null | null |
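
The flavor of coordinate descent with online-adapted, non-uniform coordinate frequencies can be sketched on least squares. The multiplicative preference update below is our own stand-in for illustration, not the authors' ACF rule:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))
b = A @ rng.normal(size=10) + rng.normal(0, 0.1, size=200)

x = np.zeros(10)
r = -b.copy()                 # residual A @ x - b, maintained incrementally
col_sq = (A ** 2).sum(axis=0)
pref = np.ones(10)            # per-coordinate selection preferences

for _ in range(2000):
    j = rng.choice(10, p=pref / pref.sum())  # non-uniform coordinate pick
    step = -(A[:, j] @ r) / col_sq[j]        # exact 1-D minimiser for coordinate j
    x[j] += step
    r += step * A[:, j]
    # Coordinates that recently made progress get picked more often.
    pref[j] = np.clip(pref[j] * (1.1 if abs(step) > 1e-4 else 0.95), 0.1, 10.0)

print(np.linalg.norm(r))  # should be near the noise floor
```
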
Structured Priors for Sparse-Representation-Based Hyperspectral Image
Classification | cs.CV cs.LG stat.ML | Pixel-wise classification, where each pixel is assigned to a predefined
class, is one of the most important procedures in hyperspectral image (HSI)
analysis. By representing a test pixel as a linear combination of a small
subset of labeled pixels, a sparse representation classifier (SRC) gives rather
plausible results compared with that of traditional classifiers such as the
support vector machine (SVM). Recently, by incorporating additional structured
sparsity priors, the second generation SRCs have appeared in the literature and
are reported to further improve the performance of HSI. These priors are based
on exploiting the spatial dependencies between the neighboring pixels, the
inherent structure of the dictionary, or both. In this paper, we review and
compare several structured priors for sparse-representation-based HSI
classification. We also propose a new structured prior called the low rank
group prior, which can be considered as a modification of the low rank prior.
Furthermore, we will investigate how different structured priors improve the
result for the HSI classification.
| Xiaoxia Sun, Qing Qu, Nasser M. Nasrabadi, Trac D. Tran | 10.1109/LGRS.2013.2290531 | 1401.3818 | null | null |
RoxyBot-06: Stochastic Prediction and Optimization in TAC Travel | cs.GT cs.LG | In this paper, we describe our autonomous bidding agent, RoxyBot, who emerged
victorious in the travel division of the 2006 Trading Agent Competition in a
photo finish. At a high level, the design of many successful trading agents can
be summarized as follows: (i) price prediction: build a model of market prices;
and (ii) optimization: solve for an approximately optimal set of bids, given
this model. To predict, RoxyBot builds a stochastic model of market prices by
simulating simultaneous ascending auctions. To optimize, RoxyBot relies on the
sample average approximation method, a stochastic optimization technique.
| Amy Greenwald, Seong Jae Lee, Victor Naroditskiy | 10.1613/jair.2904 | 1401.3829 | null | null |
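
The two-step predict-then-optimize recipe boils down to sample average approximation. An entirely toy instance of ours for a single sealed auction: sample closing-price scenarios, then pick the bid maximizing average surplus across them.

```python
import numpy as np

rng = np.random.default_rng(0)
value = 100.0                              # our valuation of the good
scenarios = rng.normal(70, 15, size=1000)  # sampled closing-price scenarios

def avg_surplus(bid):
    """Average utility over scenarios: we win (and pay the closing price)
    whenever our bid meets or exceeds it."""
    return np.mean(np.where(scenarios <= bid, value - scenarios, 0.0))

bids = np.linspace(0, 120, 121)
print(bids[np.argmax([avg_surplus(b) for b in bids])])  # close to the valuation
```
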
An Active Learning Approach for Jointly Estimating Worker Performance
and Annotation Reliability with Crowdsourced Data | cs.LG cs.HC | Crowdsourcing platforms offer a practical solution to the problem of
affordably annotating large datasets for training supervised classifiers.
Unfortunately, poor worker performance frequently threatens to compromise
annotation reliability, and requesting multiple labels for every instance can
lead to large cost increases without guaranteeing good results. Minimizing the
required training samples using an active learning selection procedure reduces
the labeling requirement but can jeopardize classifier training by focusing on
erroneous annotations. This paper presents an active learning approach in which
worker performance, task difficulty, and annotation reliability are jointly
estimated and used to compute the risk function guiding the sample selection
procedure. We demonstrate that the proposed approach, which employs active
learning with Bayesian networks, significantly improves training accuracy and
correctly ranks the expertise of unknown labelers in the presence of annotation
noise.
| Liyue Zhao, Yu Zhang and Gita Sukthankar | null | 1401.3836 | null | null |
Learning to Make Predictions In Partially Observable Environments
Without a Generative Model | cs.LG cs.AI stat.ML | When faced with the problem of learning a model of a high-dimensional
environment, a common approach is to limit the model to make only a restricted
set of predictions, thereby simplifying the learning problem. These partial
models may be directly useful for making decisions or may be combined together
to form a more complete, structured model. However, in partially observable
(non-Markov) environments, standard model-learning methods learn generative
models, i.e. models that provide a probability distribution over all possible
futures (such as POMDPs). It is not straightforward to restrict such models to
make only certain predictions, and doing so does not always simplify the
learning problem. In this paper we present prediction profile models:
non-generative partial models for partially observable systems that make only a
given set of predictions, and are therefore far simpler than generative models
in some cases. We formalize the problem of learning a prediction profile model
as a transformation of the original model-learning problem, and show
empirically that one can learn prediction profile models that make a small set
of important predictions even in systems that are too complex for standard
generative models.
| Erik Talvitie, Satinder Singh | 10.1613/jair.3396 | 1401.3870 | null | null |
Non-Deterministic Policies in Markovian Decision Processes | cs.AI cs.LG | Markovian processes have long been used to model stochastic environments.
Reinforcement learning has emerged as a framework to solve sequential planning
and decision-making problems in such environments. In recent years, attempts
were made to apply methods from reinforcement learning to construct decision
support systems for action selection in Markovian environments. Although
conventional methods in reinforcement learning have proved to be useful in
problems concerning sequential decision-making, they cannot be applied in their
current form to decision support systems, such as those in medical domains, as
they suggest policies that are often highly prescriptive and leave little room
for the user's input. Without the ability to provide flexible guidelines, it is
unlikely that these methods can gain ground with users of such systems. This
paper introduces the new concept of non-deterministic policies to allow more
flexibility in the user's decision-making process, while constraining decisions
to remain near optimal solutions. We provide two algorithms to compute
non-deterministic policies in discrete domains. We study the output and running
time of these methods on a set of synthetic and real-world problems. In an
experiment with human subjects, we show that humans assisted by hints based on
non-deterministic policies outperform both human-only and computer-only agents
in a web navigation task.
| Mahdi Milani Fard, Joelle Pineau | 10.1613/jair.3175 | 1401.3871 | null | null |
Properties of Bethe Free Energies and Message Passing in Gaussian Models | cs.LG cs.AI stat.ML | We address the problem of computing approximate marginals in Gaussian
probabilistic models by using mean field and fractional Bethe approximations.
We define the Gaussian fractional Bethe free energy in terms of the moment
parameters of the approximate marginals, derive a lower and an upper bound on
the fractional Bethe free energy and establish a necessary condition for the
lower bound to be bounded from below. It turns out that the condition is
identical to the pairwise normalizability condition, which is known to be a
sufficient condition for the convergence of the message passing algorithm. We
show that stable fixed points of the Gaussian message passing algorithm are
local minima of the Gaussian Bethe free energy. By a counterexample, we
disprove the conjecture stating that the unboundedness of the free energy
implies the divergence of the message passing algorithm.
| Botond Cseke, Tom Heskes | 10.1613/jair.3195 | 1401.3877 | null | null |
Regression Conformal Prediction with Nearest Neighbours | cs.LG | In this paper we apply Conformal Prediction (CP) to the k-Nearest Neighbours
Regression (k-NNR) algorithm and propose ways of extending the typical
nonconformity measure used for regression so far. Unlike traditional regression
methods which produce point predictions, Conformal Predictors output predictive
regions that satisfy a given confidence level. The regions produced by any
Conformal Predictor are automatically valid, however their tightness and
therefore usefulness depends on the nonconformity measure used by each CP. In
effect, a nonconformity measure evaluates how strange a given example is
compared to a set of other examples based on some traditional machine learning
algorithm. We define six novel nonconformity measures based on the k-Nearest
Neighbours Regression algorithm and develop the corresponding CPs following
both the original (transductive) and the inductive CP approaches. A comparison
of the predictive regions produced by our measures with those of the typical
regression measure suggests that a major improvement in terms of predictive
region tightness is achieved by the new measures.
| Harris Papadopoulos, Vladimir Vovk, Alex Gammerman | 10.1613/jair.3198 | 1401.3880 | null | null |
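
The inductive variant with the typical absolute-residual nonconformity measure takes only a few lines. This sketch uses the plain measure the abstract contrasts against, not the six new k-NN-based measures; scikit-learn is assumed:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 300)

# Inductive CP: split into a proper training set and a calibration set.
Xtr, ytr, Xcal, ycal = X[:200], y[:200], X[200:], y[200:]
knn = KNeighborsRegressor(n_neighbors=5).fit(Xtr, ytr)

# Typical nonconformity measure: absolute residual on the calibration set.
alphas = np.abs(ycal - knn.predict(Xcal))
eps = 0.1  # miscoverage level -> 90% predictive regions
q = np.quantile(alphas, np.ceil((1 - eps) * (len(alphas) + 1)) / len(alphas))

x_new = np.array([[0.5]])
pred = knn.predict(x_new)[0]
print(f"90% predictive region: [{pred - q:.3f}, {pred + q:.3f}]")
```
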
Efficient Multi-Start Strategies for Local Search Algorithms | cs.LG cs.AI stat.ML | Local search algorithms applied to optimization problems often suffer from
getting trapped in a local optimum. The common solution for this deficiency is
to restart the algorithm when no progress is observed. Alternatively, one can
start multiple instances of a local search algorithm, and allocate
computational resources (in particular, processing time) to the instances
depending on their behavior. Hence, a multi-start strategy has to decide
(dynamically) when to allocate additional resources to a particular instance
and when to start new instances. In this paper we propose multi-start
strategies motivated by works on multi-armed bandit problems and Lipschitz
optimization with an unknown constant. The strategies continuously estimate the
potential performance of each algorithm instance by supposing a convergence
rate of the local search algorithm up to an unknown constant, and in every
phase allocate resources to those instances that could converge to the optimum
for a particular range of the constant. Asymptotic bounds are given on the
performance of the strategies. In particular, we prove that at most a quadratic
increase in the number of times the target function is evaluated is needed to
achieve the performance of a local search algorithm started from the attraction
region of the optimum. Experiments are provided using SPSA (Simultaneous
Perturbation Stochastic Approximation) and k-means as local search algorithms,
and the results indicate that the proposed strategies work well in practice,
and, in all cases studied, need only logarithmically more evaluations of the
target function as opposed to the theoretically suggested quadratic increase.
| Andr\'as Gy\"orgy, Levente Kocsis | 10.1613/jair.3313 | 1401.3894 | null | null |
Policy Invariance under Reward Transformations for General-Sum
Stochastic Games | cs.GT cs.LG | We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remains unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game.
| Xiaosong Lu, Howard M. Schwartz, Sidney N. Givigi Jr | 10.1613/jair.3384 | 1401.3907 | null | null |
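
The single-agent potential-based shaping term that this paper lifts to general-sum stochastic games is F(s, s') = gamma * Phi(s') - Phi(s); adding it to the reward leaves the optimal solutions (here, Nash equilibria) unchanged for any potential Phi. A minimal sketch:

```python
GAMMA = 0.95

def shaped_reward(r, s, s_next, potential):
    """Original reward plus the potential-based shaping term.
    Policy/equilibrium invariance holds for any potential function."""
    return r + GAMMA * potential(s_next) - potential(s)

# Example: encourage progress toward a goal at position 10 on a line,
# without changing the optimal policy.
potential = lambda s: -abs(10 - s)
print(shaped_reward(0.0, s=3, s_next=4, potential=potential))
```
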
An Empirical Evaluation of Similarity Measures for Time Series
Classification | cs.LG cs.CV stat.ML | Time series are ubiquitous, and a measure to assess their similarity is a
core part of many computational systems. In particular, the similarity measure
is the most essential ingredient of time series clustering and classification
systems. Because of this importance, countless approaches to estimate time
series similarity have been proposed. However, there is a lack of comparative
studies using empirical, rigorous, quantitative, and large-scale assessment
strategies. In this article, we provide an extensive evaluation of similarity
measures for time series classification following the aforementioned
principles. We consider 7 different measures coming from alternative measure
`families', and 45 publicly-available time series data sets coming from a wide
variety of scientific domains. We focus on out-of-sample classification
accuracy, but in-sample accuracies and parameter choices are also discussed.
Our work is based on rigorous evaluation methodologies and includes the use of
powerful statistical significance tests to derive meaningful conclusions. The
obtained results show the equivalence, in terms of accuracy, of a number of
measures, but with one single candidate outperforming the rest. Such findings,
together with the followed methodology, invite researchers in the field to
adopt more consistent evaluation criteria and make a more informed decision
regarding the baseline measures to which new developments should be compared.
| Joan Serr\`a and Josep Lluis Arcos | 10.1016/j.knosys.2014.04.035 | 1401.3973 | null | null |
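
One member of the elastic-measure family evaluated in studies like this is dynamic time warping; its textbook O(nm) dynamic program is short enough to sketch (the standard formulation, not code from the paper):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# 0.0: the extra leading sample in b is absorbed by the warping path.
print(dtw([0, 1, 2, 1], [0, 0, 1, 2, 1]))
```
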
Stochastic Backpropagation and Approximate Inference in Deep Generative
Models | stat.ML cs.AI cs.LG stat.CO stat.ME | We marry ideas from deep neural networks and approximate Bayesian inference
to derive a generalised class of deep, directed generative models, endowed with
a new algorithm for scalable inference and learning. Our algorithm introduces a
recognition model to represent approximate posterior distributions, and that
acts as a stochastic encoder of the data. We develop stochastic
back-propagation -- rules for back-propagation through stochastic variables --
and use this to develop an algorithm that allows for joint optimisation of the
parameters of both the generative and recognition model. We demonstrate on
several real-world data sets that the model generates realistic samples,
provides accurate imputations of missing data and is a useful tool for
high-dimensional data visualisation.
| Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra | null | 1401.4082 | null | null |
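
The enabling trick for back-propagation through a Gaussian latent variable is to reparameterize z = mu + sigma * eps with eps ~ N(0, I), so all randomness sits in eps and gradients flow through mu and sigma. A hand-derived numpy sketch (a real implementation would rely on an autodiff framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_z(mu, log_sigma):
    """Reparameterized Gaussian sample: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(log_sigma) * eps, eps

mu, log_sigma = np.zeros(4), np.log(0.5) * np.ones(4)
z, eps = sample_z(mu, log_sigma)

# Gradient of a toy loss L = 0.5 * ||z||^2 w.r.t. mu and log_sigma,
# obtained by the chain rule through the reparameterization:
dL_dz = z
grad_mu = dL_dz                                    # dz/dmu = 1
grad_log_sigma = dL_dz * eps * np.exp(log_sigma)   # dz/dlog_sigma = sigma * eps
print(grad_mu, grad_log_sigma)
```
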
Towards the selection of patients requiring ICD implantation by
automatic classification from Holter monitoring indices | cs.LG stat.AP | The purpose of this study is to optimize the selection of prophylactic
cardioverter defibrillator implantation candidates. Currently, the main
criterion for implantation is a low Left Ventricular Ejection Fraction (LVEF)
whose specificity is relatively poor. We designed two classifiers aimed to
predict, from long term ECG recordings (Holter), whether a low-LVEF patient is
likely or not to undergo ventricular arrhythmia in the next six months. One
classifier is a single hidden layer neural network whose variables are the most
relevant features extracted from Holter recordings, and the other classifier
has a structure that capitalizes on the physiological decomposition of the
arrhythmogenic factors into three disjoint groups: the myocardial substrate,
the triggers and the autonomic nervous system (ANS). In this ad hoc network,
the features were assigned to each group; one neural network classifier per
group was designed and its complexity was optimized. The outputs of the
classifiers were fed to a single neuron that provided the required probability
estimate. The latter was thresholded for final discrimination. A dataset
composed of 186 pre-implantation 30-mn Holter recordings of patients equipped
with an implantable cardioverter defibrillator (ICD) in primary prevention was
used in order to design and test this classifier. 44 out of 186 patients
underwent at least one treated ventricular arrhythmia during the six-month
follow-up period. Performances of the designed classifier were evaluated using
a cross-test strategy that consists in splitting the database into several
combinations of a training set and a test set. The average arrhythmia
prediction performances of the ad-hoc classifier are NPV = 77% $\pm$ 13% and
PPV = 31% $\pm$ 19% (Negative Predictive Value $\pm$ std, Positive Predictive
Value $\pm$ std). According to our study, improving prophylactic
ICD-implantation candidate selection by automatic classification from ECG
features may be possible, but the availability of a sizable dataset appears to
be essential to decrease the number of False Negatives.
| Charles-Henri Cappelaere, R. Dubois, P. Roussel, G. Dreyfus | null | 1401.4128 | null | null |
Convex Optimization for Binary Classifier Aggregation in Multiclass
Problems | cs.LG | Multiclass problems are often decomposed into multiple binary problems that
are solved by individual binary classifiers whose results are integrated into a
final answer. Various methods, including all-pairs (APs), one-versus-all (OVA),
and error correcting output code (ECOC), have been studied, to decompose
multiclass problems into binary problems. However, little study has been made
of how to optimally aggregate the binary classifiers' outputs into a final answer to the
multiclass problem. In this paper we present a convex optimization method for
an optimal aggregation of binary classifiers to estimate class membership
probabilities in multiclass problems. We model the class membership probability
as a softmax function which takes a conic combination of discrepancies induced
by individual binary classifiers, as an input. With this model, we formulate
the regularized maximum likelihood estimation as a convex optimization problem,
which is solved by the primal-dual interior point method. Connections of our
method to large margin classifiers are presented, showing that the large margin
formulation can be considered as a limiting case of our convex formulation.
Numerical experiments on synthetic and real-world data sets demonstrate that
our method outperforms existing aggregation methods as well as direct methods,
in terms of the classification accuracy and the quality of class membership
probability estimates.
| Sunho Park, TaeHyun Hwang, Seungjin Choi | null | 1401.4143 | null | null |
Cause Identification from Aviation Safety Incident Reports via Weakly
Supervised Semantic Lexicon Construction | cs.CL cs.LG | The Aviation Safety Reporting System collects voluntarily submitted reports
on aviation safety incidents to facilitate research work aiming to reduce such
incidents. To effectively reduce these incidents, it is vital to accurately
identify why these incidents occurred. More precisely, given a set of possible
causes, or shaping factors, this task of cause identification involves
identifying all and only those shaping factors that are responsible for the
incidents described in a report. We investigate two approaches to cause
identification. Both approaches exploit information provided by a semantic
lexicon, which is automatically constructed via Thelen and Riloff's Basilisk
framework augmented with our linguistic and algorithmic modifications. The
first approach labels a report using a simple heuristic, which looks for the
words and phrases acquired during the semantic lexicon learning process in the
report. The second approach recasts cause identification as a text
classification problem, employing supervised and transductive text
classification algorithms to learn models from incident reports labeled with
shaping factors and using the models to label unseen reports. Our experiments
show that both the heuristic-based approach and the learning-based approach
(when given sufficient training data) outperform the baseline system
significantly.
| Muhammad Arshad Ul Abedin, Vincent Ng, Latifur Khan | 10.1613/jair.2986 | 1401.4436 | null | null |
An Analysis of Random Projections in Cancelable Biometrics | cs.CV cs.LG stat.ML | With increasing concerns about security, the need for highly secure physical
biometrics-based authentication systems utilizing \emph{cancelable biometric}
technologies is on the rise. Because the problem of cancelable template
generation deals with the trade-off between template security and matching
performance, many state-of-the-art algorithms successful in generating high
quality cancelable biometrics all have random projection as one of their early
processing steps. This paper therefore presents a formal analysis of why random
projection is an essential step in cancelable biometrics. By formally defining
the notion of an \textit{Independent Subspace Structure} for datasets, it can
be shown that random projection preserves the subspace structure of data
vectors generated from a union of independent linear subspaces. The bound on
the minimum number of random vectors required for this to hold is also derived
and is shown to depend logarithmically on the number of data samples, not only
in independent subspaces but in disjoint subspace settings as well. The
theoretical analysis presented is supported in detail with empirical results on
real-world face recognition datasets.
| Devansh Arpit, Ifeoma Nwogu, Gaurav Srivastava, Venu Govindaraju | null | 1401.4489 | null | null |
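
A quick numerical check of the underlying phenomenon: project high-dimensional data through a Gaussian random matrix with k ~ O(log n) rows and observe that pairwise geometry is roughly preserved. The paper's contribution is the analogous statement for independent-subspace structure, with the logarithmic bound on k; this snippet only shows the familiar distance-preservation effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 1024, 64  # samples, ambient dim, projected dim (k ~ O(log n))

X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)  # random projection matrix
Y = X @ R

# Pairwise distances are approximately preserved under the projection.
i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]), np.linalg.norm(Y[i] - Y[j]))
```
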
General factorization framework for context-aware recommendations | cs.IR cs.LG | Context-aware recommendation algorithms focus on refining recommendations by
considering additional information, available to the system. This topic has
gained a lot of attention recently. Among others, several factorization methods
were proposed to solve the problem, although most of them assume explicit
feedback which strongly limits their real-world applicability. While these
algorithms apply various loss functions and optimization strategies, the
preference modeling under context is less explored due to the lack of tools
allowing for easy experimentation with various models. As context dimensions
are introduced beyond users and items, the space of possible preference models
and the importance of proper modeling largely increases.
In this paper we propose a General Factorization Framework (GFF), a single
flexible algorithm that takes the preference model as an input and computes
latent feature matrices for the input dimensions. GFF allows us to easily
experiment with various linear models on any context-aware recommendation task,
be it explicit or implicit feedback based. Its scaling properties make it
usable under real-life circumstances as well.
We demonstrate the framework's potential by exploring various preference
models on a 4-dimensional context-aware problem with contexts that are
available for almost any real life datasets. We show in our experiments --
performed on five real life, implicit feedback datasets -- that proper
preference modelling significantly increases recommendation accuracy, and
previously unused models outperform the traditional ones. Novel models in GFF
also outperform state-of-the-art factorization algorithms.
We also extend the method to be fully compliant to the Multidimensional
Dataspace Model, one of the most extensive data models of context-enriched
data. Extended GFF allows the seamless incorporation of information into the
fac[truncated]
| Bal\'azs Hidasi, Domonkos Tikk | 10.1007/s10618-015-0417-y | 1401.4529 | null | null |
Excess Risk Bounds for Exponentially Concave Losses | cs.LG stat.ML | The overarching goal of this paper is to derive excess risk bounds for
learning from exp-concave loss functions in passive and sequential learning
settings. Exp-concave loss functions encompass several fundamental problems in
machine learning such as squared loss in linear regression, logistic loss in
classification, and negative logarithm loss in portfolio management. In batch
setting, we obtain sharp bounds on the performance of empirical risk
minimization performed in a linear hypothesis space and with respect to the
exp-concave loss functions. We also extend the results to the online setting
where the learner receives the training examples in a sequential manner. We
propose an online learning algorithm that is a properly modified version of
online Newton method to obtain sharp risk bounds. Under an additional mild
assumption on the loss function, we show that in both settings we are able to
achieve an excess risk bound of $O(d\log n/n)$ that holds with a high
probability.
| Mehrdad Mahdavi and Rong Jin | null | 1401.4566 | null | null |
miRNA and Gene Expression based Cancer Classification using
Self-Learning and Co-Training Approaches | cs.CE cs.LG | miRNA and gene expression profiles have been proved useful for classifying
cancer samples. Efficient classifiers have been recently sought and developed.
A number of attempts to classify cancer samples using miRNA/gene expression
profiles are known in the literature. However, semi-supervised learning
models have recently been used in bioinformatics to exploit the huge corpora
of publicly available sets. Using both labeled and unlabeled sets to train
sample classifiers has not previously been considered when gene and miRNA
expression sets are used. Moreover, there is a motivation to integrate both
miRNA and gene expression for a semi-supervised cancer classification as that
provides more information on the characteristics of cancer samples. In this
paper, two semi-supervised machine learning approaches, namely self-learning
and co-training, are adapted to enhance the quality of cancer sample
classification. These approaches exploit the huge public corpora to enrich the
training data. In self-learning, miRNA and gene based classifiers are enhanced
independently. While in co-training, both miRNA and gene expression profiles
are used simultaneously to provide different views of cancer samples. To our
knowledge, it is the first attempt to apply these learning approaches to cancer
classification. The approaches were evaluated using breast cancer,
hepatocellular carcinoma (HCC) and lung cancer expression sets. Results show up
to 20% improvement in F1-measure over Random Forests and SVM classifiers.
Co-Training also outperforms Low Density Separation (LDS) approach by around
25% improvement in F1-measure in breast cancer.
| Rania Ibrahim, Noha A. Yousri, Mohamed A. Ismail, Nagwa M. El-Makky | null | 1401.4589 | null | null |
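
A generic co-training loop of the kind adapted here, with two feature views (standing in for miRNA and gene expression) that alternately pseudo-label their most confident unlabeled samples. This is a bare-bones Blum-Mitchell-style sketch of ours with assumed scikit-learn models, not the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X_mirna, X_gene, y, labeled, rounds=5, k=10):
    """Each view trains its own classifier, then pseudo-labels its k most
    confident unlabeled samples; the enlarged labeled set is shared."""
    labeled, y = labeled.copy(), y.copy()
    for _ in range(rounds):
        for X in (X_mirna, X_gene):
            clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
            unl = np.flatnonzero(~labeled)
            if unl.size == 0:
                return y, labeled
            proba = clf.predict_proba(X[unl])
            top = unl[np.argsort(-proba.max(axis=1))[:k]]
            y[top] = clf.predict(X[top])
            labeled[top] = True
    return y, labeled

# Tiny synthetic demo; y is only read where `labeled` is True.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(200, 5)), rng.normal(size=(200, 5))
y = (X1[:, 0] + X2[:, 0] > 0).astype(int)
mask = np.zeros(200, dtype=bool)
mask[:20] = True
y_out, mask_out = co_train(X1, X2, y, mask)
print(mask_out.sum(), "samples labeled after co-training")
```
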
Combining Evaluation Metrics via the Unanimous Improvement Ratio and its
Application to Clustering Tasks | cs.AI cs.LG | Many Artificial Intelligence tasks cannot be evaluated with a single quality
criterion and some sort of weighted combination is needed to provide system
rankings. A problem of weighted combination measures is that slight changes in
the relative weights may produce substantial changes in the system rankings.
This paper introduces the Unanimous Improvement Ratio (UIR), a measure that
complements standard metric combination criteria (such as van Rijsbergen's
F-measure) and indicates how robust the measured differences are to changes in
the relative weights of the individual metrics. UIR is meant to elucidate
whether a perceived difference between two systems is an artifact of how
individual metrics are weighted.
Besides discussing the theoretical foundations of UIR, this paper presents
empirical results that confirm the validity and usefulness of the metric for
the Text Clustering problem, where there is a tradeoff between precision and
recall based metrics and results are particularly sensitive to the weighting
scheme used to combine them. Remarkably, our experiments show that UIR can be
used as a predictor of how well differences between systems measured on a given
test bed will also hold in a different test bed.
| Enrique Amig\'o, Julio Gonzalo, Javier Artiles, Felisa Verdejo | 10.1613/jair.3401 | 1401.4590 | null | null |
Classification of IDS Alerts with Data Mining Techniques | cs.CR cs.DB cs.LG | A data mining technique to reduce the amount of false alerts within an IDS
system is proposed. The new technique achieves an accuracy of 99% compared to
97% by the current systems.
| Hany Nashat Gabra, Ayman Mohammad Bahaa-Eldin, Huda Korashy | null | 1401.4872 | null | null |
A Unifying Framework for Typical Multi-Task Multiple Kernel Learning
Problems | cs.LG | Over the past few years, Multi-Kernel Learning (MKL) has received significant
attention among data-driven feature selection techniques in the context of
kernel-based learning. MKL formulations have been devised and solved for a
broad spectrum of machine learning problems, including Multi-Task Learning
(MTL). Solving different MKL formulations usually involves designing algorithms
that are tailored to the problem at hand, which is, typically, a non-trivial
accomplishment.
In this paper we present a general Multi-Task Multi-Kernel Learning
(Multi-Task MKL) framework that subsumes well-known Multi-Task MKL
formulations, as well as several important MKL approaches on single-task
problems. We then derive a simple algorithm that can solve the unifying
framework. To demonstrate the flexibility of the proposed framework, we
formulate a new learning problem, namely Partially-Shared Common Space (PSCS)
Multi-Task MKL, and demonstrate its merits through experimentation.
| Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos | null | 1401.5136 | null | null |
The Why and How of Nonnegative Matrix Factorization | stat.ML cs.IR cs.LG math.OC | Nonnegative matrix factorization (NMF) has become a widely used tool for the
analysis of high-dimensional data as it automatically extracts sparse and
meaningful features from a set of nonnegative data vectors. We first illustrate
this property of NMF on three applications, in image processing, text mining
and hyperspectral imaging --this is the why. Then we address the problem of
solving NMF, which is NP-hard in general. We review some standard NMF
algorithms, and also present a recent subclass of NMF problems, referred to as
near-separable NMF, that can be solved efficiently (that is, in polynomial
time), even in the presence of noise --this is the how. Finally, we briefly
describe some problems in mathematics and computer science closely related to
NMF via the nonnegative rank.
| Nicolas Gillis | null | 1401.5226 | null | null |
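
For the "how", the classical baseline is Lee and Seung's multiplicative updates for the Frobenius objective, which keep both factors nonnegative by construction (sketched below; the near-separable algorithms the survey highlights are different and are not shown):

```python
import numpy as np

def nmf(V, r, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||V - W H||_F^2, W, H >= 0."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 30)))
W, H = nmf(V, r=5)
print(np.linalg.norm(V - W @ H))  # reconstruction error decreases with iters
```
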
HMACA: Towards Proposing a Cellular Automata Based Tool for Protein
Coding, Promoter Region Identification and Protein Structure Prediction | cs.CE cs.LG | The human body consists of a great many cells, and each cell contains
Deoxyribonucleic Acid (DNA). Identifying the genes from the DNA sequences is a
very difficult task, but identifying the coding regions is even more complex.
Identifying the proteins, which occupy only a small portion of the genes, is a
really challenging issue. Coding region analysis plays an important role in
understanding the genes. Proteins are molecules with a macro structure that
are responsible for a wide range of vital biochemical functions, including
oxygen transport, cell signaling, antibody production, nutrient transport and
building up muscle fibers. Promoter region identification and protein
structure prediction have gained remarkable attention in recent years. Even
though there are some identification techniques addressing this problem, their
approximate accuracy in identifying the promoter region is roughly 68% to 72%.
We have developed a Cellular Automata based tool built with a hybrid multiple
attractor cellular automata (HMACA) classifier for protein coding region and
promoter region identification and protein structure prediction, which
predicts the protein and promoter regions with an accuracy of 76%. This tool
also predicts the structure of proteins with an accuracy of 80%.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu, SSSN Usha Devi N | null | 1401.5364 | null | null |
Which Clustering Do You Want? Inducing Your Ideal Clustering with
Minimal Feedback | cs.IR cs.CL cs.LG | While traditional research on text clustering has largely focused on grouping
documents by topic, it is conceivable that a user may want to cluster documents
along other dimensions, such as the authors mood, gender, age, or sentiment.
Without knowing the users intention, a clustering algorithm will only group
documents along the most prominent dimension, which may not be the one the user
desires. To address the problem of clustering documents along the user-desired
dimension, previous work has focused on learning a similarity metric from data
manually annotated with the user's intention or having a human construct a
feature space in an interactive manner during the clustering process. With the
goal of reducing reliance on human knowledge for fine-tuning the similarity
function or selecting the relevant features required by these approaches, we
propose a novel active clustering algorithm, which allows a user to easily
select the dimension along which she wants to cluster the documents by
inspecting only a small number of words. We demonstrate the viability of our
algorithm on a variety of commonly-used sentiment datasets.
| Sajib Dasgupta, Vincent Ng | 10.1613/jair.3003 | 1401.5389 | null | null |
Learning to Win by Reading Manuals in a Monte-Carlo Framework | cs.CL cs.AI cs.LG | Domain knowledge is crucial for effective performance in autonomous control
systems. Typically, human effort is required to encode this knowledge into a
control algorithm. In this paper, we present an approach to language grounding
which automatically interprets text in the context of a complex control
application, such as a game, and uses domain knowledge extracted from the text
to improve control performance. Both text analysis and control strategies are
learned jointly using only a feedback signal inherent to the application. To
effectively leverage textual information, our method automatically extracts the
text segment most relevant to the current game state, and labels it with a
task-centric predicate structure. This labeled text is then used to bias an
action selection policy for the game, guiding it towards promising regions of
the action space. We encode our model for text analysis and game playing in a
multi-layer neural network, representing linguistic decisions via latent
variables in the hidden layers, and game action quality via the output layer.
Operating within the Monte-Carlo Search framework, we estimate model parameters
using feedback from simulated games. We apply our approach to the complex
strategy game Civilization II using the official game manual as the text guide.
Our results show that a linguistically-informed game-playing agent
significantly outperforms its language-unaware counterpart, yielding a 34%
absolute improvement and winning over 65% of games when playing against the
built-in AI of Civilization.
| S.R.K. Branavan, David Silver, Regina Barzilay | 10.1613/jair.3484 | 1401.5390 | null | null |
Learning Mid-Level Features and Modeling Neuron Selectivity for Image
Classification | cs.CV cs.LG cs.NE cs.RO | We now know that mid-level features can greatly enhance the performance of
image learning, but how to automatically learn the image features efficiently
and in an unsupervised manner is still an open question. In this paper, we
present a very efficient mid-level feature learning approach (MidFea), which
only involves simple operations such as $k$-means clustering, convolution,
pooling, vector quantization and random projection. We explain why this simple
method generates the desired features, and argue that there is no need to spend
much time in learning low-level feature extractors. Furthermore, to boost the
performance, we propose to model the neuron selectivity (NS) principle by
building an additional layer over the mid-level features before feeding the
features into the classifier. We show that the NS-layer learns
category-specific neurons with both bottom-up inference and top-down analysis,
and thus supports fast inference for a query image. We run extensive
experiments on several public databases to demonstrate that our approach can
achieve state-of-the-art performances for face recognition, gender
classification, age estimation and object categorization. In particular, we
demonstrate that our approach is more than an order of magnitude faster than
some recently proposed sparse coding based methods.
| Shu Kong, Zhuolin Jiang, Qiang Yang | null | 1401.5535 | null | null |
Causal Discovery in a Binary Exclusive-or Skew Acyclic Model: BExSAM | stat.ML cs.LG | Discovering causal relations among observed variables in a given data set is
a major objective in studies of statistics and artificial intelligence.
Recently, some techniques to discover a unique causal model have been explored
based on non-Gaussianity of the observed data distribution. However, most of
these are limited to continuous data. In this paper, we present a novel causal
model for binary data and propose an efficient new approach to deriving the
unique causal model governing a given binary data set under skew distributions
of external binary noises. Experimental evaluation shows excellent performance
for both artificial and real world data sets.
| Takanori Inazumi, Takashi Washio, Shohei Shimizu, Joe Suzuki, Akihiro
Yamamoto, Yoshinobu Kawahara | null | 1401.5636 | null | null |
Efficiently Detecting Overlapping Communities through Seeding and
Semi-Supervised Learning | cs.SI cs.LG physics.soc-ph | Seeding then expanding is a commonly used scheme to discover overlapping
communities in a network. Most seeding methods are either too complex to scale
to large networks or too simple to select high-quality seeds, and the
non-principled functions used by most expanding methods lead to poor
performance when applied to diverse networks. This paper proposes a new method
that transforms a network into a corpus where each edge is treated as a
document, and all nodes of the network are treated as terms of the corpus. An
effective seeding method is also proposed that selects seeds as a training set,
then a principled expanding method based on semi-supervised learning is applied
to classify edges. We compare our new algorithm with four other community
detection algorithms on a wide range of synthetic and empirical networks.
Experimental results show that the new algorithm can significantly improve
clustering performance in most cases. Furthermore, the time complexity of the
new algorithm is linear in the number of edges, and this low complexity makes
the new algorithm scalable to large networks.
| Changxing Shang and Shengzhong Feng and Zhongying Zhao and Jianping
Fan | 10.1007/s13042-015-0338-5 | 1401.5888 | null | null |
Kernel Least Mean Square with Adaptive Kernel Size | stat.ML cs.LG | Kernel adaptive filters (KAF) are a class of powerful nonlinear filters
developed in Reproducing Kernel Hilbert Space (RKHS). The Gaussian kernel is
usually the default kernel in KAF algorithms, but selecting the proper kernel
size (bandwidth) is still an open important issue especially for learning with
small sample sizes. In previous research, the kernel size was set manually or
estimated in advance by Silvermans rule based on the sample distribution. This
study aims to develop an online technique for optimizing the kernel size of the
kernel least mean square (KLMS) algorithm. A sequential optimization strategy
is proposed, and a new algorithm is developed, in which the filter weights and
the kernel size are both sequentially updated by stochastic gradient algorithms
that minimize the mean square error (MSE). Theoretical results on convergence
are also presented. The excellent performance of the new algorithm is confirmed
by simulations on static function estimation and short term chaotic time series
prediction.
| Badong Chen, Junli Liang, Nanning Zheng, Jose C. Principe | 10.1016/j.neucom.2016.01.004 | 1401.5899 | null | null |
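To make the sequential optimization strategy above concrete, the following minimal NumPy sketch jointly updates the KLMS coefficients and the Gaussian kernel size by stochastic gradient descent on the instantaneous squared error. This is an illustrative reconstruction, not the paper's exact algorithm; the learning rates, the bandwidth floor and the toy data are assumptions.

import numpy as np

def klms_adaptive_kernel(X, d, eta_w=0.5, eta_s=0.05, sigma=1.0):
    # KLMS with an online, gradient-based kernel-size update (sketch)
    centers, alphas = [], []
    for x, dn in zip(X, d):
        if centers:
            C, a = np.array(centers), np.array(alphas)
            sq = np.sum((C - x) ** 2, axis=1)
            k = np.exp(-sq / (2.0 * sigma ** 2))
            e = dn - a @ k                      # a priori error
            dk = k * sq / sigma ** 3            # d k / d sigma
            sigma += eta_s * e * (a @ dk)       # descend the instantaneous MSE
            sigma = max(sigma, 1e-3)            # keep the bandwidth positive
        else:
            e = dn
        centers.append(x)                       # grow the dictionary
        alphas.append(eta_w * e)                # standard KLMS coefficient
    return centers, alphas, sigma

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
d = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(300)
print("adapted kernel size:", klms_adaptive_kernel(X, d)[2])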
Gaussian-binary Restricted Boltzmann Machines on Modeling Natural Image
Statistics | cs.NE cs.LG stat.ML | We present a theoretical analysis of Gaussian-binary restricted Boltzmann
machines (GRBMs) from the perspective of density models. The key aspect of this
analysis is to show that GRBMs can be formulated as a constrained mixture of
Gaussians, which gives a much better insight into the model's capabilities and
limitations. We show that GRBMs are capable of learning meaningful features
both in a two-dimensional blind source separation task and in modeling natural
images. Further, we show that reported difficulties in training GRBMs are due
to the failure of the training algorithm rather than the model itself. Based on
our analysis we are able to propose several training recipes, which allowed
successful and fast training in our experiments. Finally, we discuss the
relationship of GRBMs to several modifications that have been proposed to
improve the model.
| Nan Wang and Jan Melchior and Laurenz Wiskott | 10.1371/journal.pone.0171015 | 1401.5900 | null | null |
Numerical weather prediction or stochastic modeling: an objective
criterion of choice for the global radiation forecasting | stat.AP cs.LG | Numerous methods exist and were developed for global radiation forecasting.
The two most popular types are the numerical weather predictions (NWP) and the
predictions using stochastic approaches. We propose to compute a parameter
constructed in part from the mutual information, a quantity that measures the
mutual dependence of two variables. Both of these are calculated with the
objective of establishing which of the two approaches, NWP or stochastic
modeling, is more relevant for the problem at hand.
| Cyril Voyant (SPE), Gilles Notton (SPE), Christophe Paoli (SPE), Marie
Laure Nivet (SPE), Marc Muselli (SPE), Kahina Dahmani (LRIA) | null | 1401.6002 | null | null |
Matrix factorization with Binary Components | stat.ML cs.LG | Motivated by an application in computational biology, we consider low-rank
matrix factorization with $\{0,1\}$-constraints on one of the factors and
optionally convex constraints on the second one. In addition to the
non-convexity shared with other matrix factorization schemes, our problem is
further complicated by a combinatorial constraint set of size $2^{m \cdot r}$,
where $m$ is the dimension of the data points and $r$ the rank of the
factorization. Despite apparent intractability, we provide - in the line of
recent work on non-negative matrix factorization by Arora et al. (2012) - an
algorithm that provably recovers the underlying factorization in the exact case
with $O(m r 2^r + mnr + r^2 n)$ operations for $n$ datapoints. To obtain this
result, we use theory around the Littlewood-Offord lemma from combinatorics.
| Martin Slawski, Matthias Hein, Pavlo Lutsik | null | 1401.6024 | null | null |
Comparative study of Authorship Identification Techniques for Cyber
Forensics Analysis | cs.CY cs.CR cs.IR cs.LG | Authorship Identification techniques are used to identify the most
appropriate author from a group of potential suspects of online messages and
find evidence to support the conclusion. Cybercriminals misuse online
communication to send blackmail or spam emails and then attempt to hide their
true identities to avoid detection. Authorship identification of online
messages is a contemporary research issue for identity tracing in cyber
forensics. This is a highly interdisciplinary area, as it takes advantage of
machine learning, information retrieval, and natural language processing. In
this paper, a study of recent techniques and automated approaches to
attributing authorship of online messages is presented. The focus of this
review study is to summarize all existing authorship identification techniques
used in the literature to identify authors of online messages. It also
discusses evaluation criteria and parameters for authorship attribution
studies, and lists open questions that will attract future work in this area.
| Smita Nirkhi, R.V. Dharaskar | null | 1401.6118 | null | null |
Controlling Complexity in Part-of-Speech Induction | cs.CL cs.LG | We consider the problem of fully unsupervised learning of grammatical
(part-of-speech) categories from unlabeled text. The standard
maximum-likelihood hidden Markov model for this task performs poorly, because
of its weak inductive bias and large model capacity. We address this problem by
refining the model and modifying the learning objective to control its capacity
via parametric and non-parametric constraints. Our approach enforces
word-category association sparsity, adds morphological and orthographic
features, and eliminates hard-to-estimate parameters for rare words. We develop
an efficient learning algorithm that is not much more computationally intensive
than standard training. We also provide an open-source implementation of the
algorithm. Our experiments on five diverse languages (Bulgarian, Danish,
English, Portuguese, Spanish) achieve significant improvements compared with
previous methods for the same task.
| Jo\~ao V. Gra\c{c}a, Kuzman Ganchev, Luisa Coheur, Fernando Pereira,
Ben Taskar | 10.1613/jair.3348 | 1401.6131 | null | null |
Parsimonious Topic Models with Salient Word Discovery | cs.LG cs.CL cs.IR stat.ML | We propose a parsimonious topic model for text corpora. In related models
such as Latent Dirichlet Allocation (LDA), all words are modeled
topic-specifically, even though many words occur with similar frequencies
across different topics. Our modeling determines salient words for each topic,
which have topic-specific probabilities, with the rest explained by a universal
shared model. Further, in LDA all topics are in principle present in every
document. By contrast our model gives sparse topic representation, determining
the (small) subset of relevant topics for each document. We derive a Bayesian
Information Criterion (BIC), balancing model complexity and goodness of fit.
Here, interestingly, we identify an effective sample size and corresponding
penalty specific to each parameter type in our model. We minimize BIC to
jointly determine our entire model -- the topic-specific words,
document-specific topics, all model parameter values, {\it and} the total
number of topics -- in a wholly unsupervised fashion. Results on three text
corpora and an image dataset show that our model achieves higher test set
likelihood and better agreement with ground-truth class labels, compared to LDA
and to a model designed to incorporate sparsity.
| Hossein Soleimani, David J. Miller | 10.1109/TKDE.2014.2345378 | 1401.6169 | null | null |
Is Extreme Learning Machine Feasible? A Theoretical Assessment (Part II) | cs.LG | An extreme learning machine (ELM) can be regarded as a two-stage feed-forward
neural network (FNN) learning system which randomly assigns the connections
with and within hidden neurons in the first stage and tunes the connections
with output neurons in the second stage. Therefore, ELM training is essentially
a linear learning problem, which significantly reduces the computational
burden. Numerous applications show that such a computation burden reduction
does not degrade the generalization capability. It has, however, remained an
open question whether this is true in theory. The aim of our work is to study the theoretical
feasibility of ELM by analyzing the pros and cons of ELM. In the previous part
on this topic, we pointed out that via appropriate selection of the activation
function, ELM does not degrade the generalization capability in the expectation
sense. In this paper, we launch the study in a different direction and show
that the randomness of ELM also leads to certain negative consequences. On one
hand, we find that the randomness causes an additional uncertainty problem of
ELM, both in approximation and learning. On the other hand, we theoretically
justify that there also exists an activation function such that the
corresponding ELM degrades the generalization capability. In particular, we
prove that the generalization capability of ELM with Gaussian kernel is
essentially worse than that of FNN with Gaussian kernel. To facilitate the use
of ELM, we also provide a remedy to such a degradation. We find that the
well-developed coefficient regularization technique can essentially improve the
generalization capability. The obtained results reveal the essential
characteristic of ELM and give theoretical guidance concerning how to use ELM.
| Shaobo Lin, Xia Liu, Jian Fang and Zongben Xu | null | 1401.6240 | null | null |
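For reference, a minimal ELM sketch with the coefficient (ridge) regularization mentioned above as a remedy might look as follows; the sigmoid activation, hidden-layer size and regularization strength are illustrative assumptions, not prescriptions from the paper.

import numpy as np

def elm_fit(X, y, n_hidden=100, reg=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    # stage 1: random, never-tuned connections into the hidden layer
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid activations
    # stage 2: regularized linear least squares for the output weights
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (1.0 / (1.0 + np.exp(-(X @ W + b)))) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
W, b, beta = elm_fit(X, y)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - y) ** 2))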
The Sampling-and-Learning Framework: A Statistical View of Evolutionary
Algorithms | cs.NE cs.LG | Evolutionary algorithms (EAs), a large class of general purpose optimization
algorithms inspired by natural phenomena, are widely used in various
industrial optimizations and often show excellent performance. This paper
presents an attempt towards revealing their general power from a statistical
view of EAs. By summarizing a large range of EAs into the sampling-and-learning
framework, we show that the framework directly admits a general analysis on the
probable-absolute-approximate (PAA) query complexity. We particularly focus on
the framework with the learning subroutine being restricted as a binary
classification, which results in the sampling-and-classification (SAC)
algorithms. With the help of the learning theory, we obtain a general upper
bound on the PAA query complexity of SAC algorithms. We further compare SAC
algorithms with the uniform search in different situations. Under the
error-target independence condition, we show that SAC algorithms can achieve
polynomial speedup to the uniform search, but not super-polynomial speedup.
Under the one-side-error condition, we show that super-polynomial speedup can
be achieved. This work only touches the surface of the framework. Its power
under other conditions is still open.
| Yang Yu and Hong Qian | null | 1401.6333 | null | null |
Steady-state performance of non-negative least-mean-square algorithm and
its variants | cs.LG | Non-negative least-mean-square (NNLMS) algorithm and its variants have been
proposed for online estimation under non-negativity constraints. The transient
behavior of the NNLMS, Normalized NNLMS, Exponential NNLMS and Sign-Sign NNLMS
algorithms have been studied in our previous work. In this technical report, we
derive closed-form expressions for the steady-state excess mean-square error
(EMSE) for the four algorithms. Simulation results illustrate the accuracy of
the theoretical results. This is a complementary material to our previous work.
| Jie Chen and Jos\'e Carlos M. Bermudez and C\'edric Richard | 10.1109/LSP.2014.2320944 | 1401.6376 | null | null |
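A sketch of what we understand to be the baseline NNLMS recursion, the LMS correction scaled componentwise by the current weights so that non-negative weights stay non-negative, is given below for context; the step size, initialization and toy system are assumptions for illustration.

import numpy as np

def nnlms(X, d, eta=0.05):
    # non-negative LMS sketch: w <- w + eta * e * diag(w) @ x
    w = np.full(X.shape[1], 0.5)        # non-negative initialization
    for x, dn in zip(X, d):
        e = dn - w @ x                  # a priori error
        w = w + eta * e * (w * x)       # correction scaled by w itself
    return w

rng = np.random.default_rng(0)
w_true = np.array([0.8, 0.0, 0.3, 0.5])
X = rng.standard_normal((5000, 4))
d = X @ w_true + 0.01 * rng.standard_normal(5000)
print("estimate:", np.round(nnlms(X, d), 3))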
Predicting Nearly As Well As the Optimal Twice Differentiable Regressor | cs.LG stat.ML | We study nonlinear regression of real valued data in an individual sequence
manner, where we provide results that are guaranteed to hold without any
statistical assumptions. We address the convergence and undertraining issues of
conventional nonlinear regression methods and introduce an algorithm that
elegantly mitigates these issues via an incremental hierarchical structure,
(i.e., via an incremental decision tree). Particularly, we present a piecewise
linear (or nonlinear) regression algorithm that partitions the regressor space
in a data driven manner and learns a linear model at each region. Unlike the
conventional approaches, our algorithm gradually increases the number of
disjoint partitions on the regressor space in a sequential manner according to
the observed data. Through this data driven approach, our algorithm
sequentially and asymptotically achieves the performance of the optimal twice
differentiable regression function for any data sequence with an unknown and
arbitrary length. The computational complexity of the introduced algorithm is
only logarithmic in the data length under certain regularity conditions. We
provide the explicit description of the algorithm and demonstrate the
significant gains for the well-known benchmark real data sets and chaotic
signals.
| N. Denizcan Vanli, Muhammed O. Sayin, Suleyman S. Kozat | null | 1401.6413 | null | null |
Riffled Independence for Efficient Inference with Partial Rankings | cs.LG | Distributions over rankings are used to model data in a multitude of real
world settings such as preference analysis and political elections. Modeling
such distributions presents several computational challenges, however, due to
the factorial size of the set of rankings over an item set. Some of these
challenges are quite familiar to the artificial intelligence community, such as
how to compactly represent a distribution over a combinatorially large space,
and how to efficiently perform probabilistic inference with these
representations. With respect to ranking, however, there is the additional
challenge of what we refer to as human task complexity: users are rarely willing
to provide a full ranking over a long list of candidates, instead often
preferring to provide partial ranking information. Simultaneously addressing
all of these challenges (i.e., designing a compactly representable model which
is amenable to efficient inference and can be learned using partial ranking
data) is a difficult task, but is necessary if we would like to scale to
problems with nontrivial size. In this paper, we show that the recently
proposed riffled independence assumptions cleanly and efficiently address each
of the above challenges. In particular, we establish a tight mathematical
connection between the concepts of riffled independence and of partial
rankings. This correspondence not only allows us to then develop efficient and
exact algorithms for performing inference tasks using riffled independence
based representations with partial rankings, but somewhat surprisingly, also
shows that efficient inference is not possible for riffle independent models
(in a certain sense) with observations which do not take the form of partial
rankings. Finally, using our inference algorithm, we introduce the first method
for learning riffled independence based models from partially ranked data.
| Jonathan Huang, Ashish Kapoor, Carlos Guestrin | 10.1613/jair.3543 | 1401.6421 | null | null |
Toward Supervised Anomaly Detection | cs.LG | Anomaly detection is being regarded as an unsupervised learning task as
anomalies stem from adversarial or unlikely events with unknown distributions.
However, the predictive performance of purely unsupervised anomaly detection
often fails to match the required detection rates in many tasks and there
exists a need for labeled data to guide the model generation. Our first
contribution shows that classical semi-supervised approaches, originating from
a supervised classifier, are inappropriate and hardly detect new and unknown
anomalies. We argue that semi-supervised anomaly detection needs to ground on
the unsupervised learning paradigm and devise a novel algorithm that meets this
requirement. Although being intrinsically non-convex, we further show that the
optimization problem has a convex equivalent under relatively mild assumptions.
Additionally, we propose an active learning strategy to automatically filter
candidates for labeling. In an empirical study on network intrusion detection
data, we observe that the proposed learning methodology requires much less
labeled data than the state-of-the-art, while achieving higher detection
accuracies.
| Nico Goernitz, Marius Micha Kloft, Konrad Rieck, Ulf Brefeld | 10.1613/jair.3623 | 1401.6424 | null | null |
Towards Unsupervised Learning of Temporal Relations between Events | cs.LG cs.CL | Automatic extraction of temporal relations between event pairs is an
important task for several natural language processing applications such as
Question Answering, Information Extraction, and Summarization. Since most
existing methods are supervised and require large corpora, which for many
languages do not exist, we have concentrated our efforts to reduce the need for
annotated data as much as possible. This paper presents two different
algorithms towards this goal. The first algorithm is a weakly supervised
machine learning approach for classification of temporal relations between
events. In the first stage, the algorithm learns a general classifier from an
annotated corpus. Then, inspired by the hypothesis of "one type of temporal
relation per discourse", it extracts useful information from a cluster of
topically related documents. We show that by combining the global information
of such a cluster with local decisions of a general classifier, a bootstrapping
cross-document classifier can be built to extract temporal relations between
events. Our experiments show that without any additional annotated data, the
accuracy of the proposed algorithm is higher than that of several previous
successful systems. The second proposed method for temporal relation extraction
is based on the expectation maximization (EM) algorithm. Within EM, we used
different techniques such as a greedy best-first search and integer linear
programming for temporal inconsistency removal. We think that the experimental
results of our EM-based algorithm, as a first step toward a fully unsupervised
temporal relation extraction method, are encouraging.
| Seyed Abolghasem Mirroshandel, Gholamreza Ghassem-Sani | 10.1613/jair.3693 | 1401.6427 | null | null |
Identification of Protein Coding Regions in Genomic DNA Using
Unsupervised FMACA Based Pattern Classifier | cs.CE cs.LG | Genes carry the instructions for making proteins that are found in a cell as
a specific sequence of nucleotides that are found in DNA molecules. But, the
regions of these genes that code for proteins may occupy only a small region of
the sequence. Identifying the coding regions plays a vital role in
understanding these genes. In this paper we propose an unsupervised Fuzzy
Multiple Attractor Cellular Automata (FMACA) based pattern classifier to
identify the coding regions of a DNA sequence. We propose a distinct K-Means
algorithm for designing the FMACA classifier, which is simple, efficient and
produces a more accurate classifier than those previously obtained for a range
of different sequence lengths. Experimental results confirm the scalability of
the proposed unsupervised FMACA-based classifier to handle large volumes of
data irrespective of the number of classes, tuples and attributes. Good
classification accuracy has been established.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu | null | 1401.6484 | null | null |
Bayesian CP Factorization of Incomplete Tensors with Automatic Rank
Determination | cs.LG cs.CV stat.ML | CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful
technique for tensor completion through explicitly capturing the multilinear
latent factors. The existing CP algorithms require the tensor rank to be
manually specified, however, the determination of tensor rank remains a
challenging problem especially for CP rank. In addition, existing approaches do
not take into account uncertainty information of latent factors, as well as
missing entries. To address these issues, we formulate CP factorization using a
hierarchical probabilistic model and employ a fully Bayesian treatment by
incorporating a sparsity-inducing prior over multiple latent factors and the
appropriate hyperpriors over all hyperparameters, resulting in automatic rank
determination. To learn the model, we develop an efficient deterministic
Bayesian inference algorithm, which scales linearly with data size. Our method
is characterized as a tuning parameter-free approach, which can effectively
infer underlying multilinear factors with a low-rank constraint, while also
providing predictive distributions over missing entries. Extensive simulations
on synthetic data illustrate the intrinsic capability of our method to recover
the ground-truth of CP rank and prevent the overfitting problem, even when a
large amount of entries are missing. Moreover, the results from real-world
applications, including image inpainting and facial image synthesis,
demonstrate that our method outperforms state-of-the-art approaches for both
tensor factorization and tensor completion in terms of predictive performance.
| Qibin Zhao, Liqing Zhang, and Andrzej Cichocki | 10.1109/TPAMI.2015.2392756 | 1401.6497 | null | null |
A Machine Learning Approach for the Identification of Bengali Noun-Noun
Compound Multiword Expressions | cs.CL cs.LG | This paper presents a machine learning approach for identification of Bengali
multiword expressions (MWE) which are bigram nominal compounds. Our proposed
approach has two steps: (1) candidate extraction using chunk information and
various heuristic rules and (2) training the machine learning algorithm called
Random Forest to classify the candidates into two groups: bigram nominal
compound MWE or not bigram nominal compound MWE. A variety of association
measures, syntactic and linguistic clues and a set of WordNet-based similarity
features have been used for our MWE identification task. The approach presented
in this paper can be used to identify bigram nominal compound MWE in Bengali
running text.
| Vivekananda Gayen, Kamal Sarkar | null | 1401.6567 | null | null |
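A hedged sklearn-style sketch of step (2), training a Random Forest on candidate feature vectors, is shown below; the feature names (association score, syntactic-clue flag, WordNet similarity) and the toy values are hypothetical placeholders, not the paper's actual features or data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical feature vectors for bigram candidates:
# [association score (e.g. PMI), syntactic-clue flag, WordNet similarity]
X = np.array([[6.2, 1, 0.81],
              [0.9, 0, 0.10],
              [5.5, 1, 0.67],
              [1.4, 0, 0.22]])
y = np.array([1, 0, 1, 0])            # 1 = bigram nominal-compound MWE

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[4.8, 1, 0.70]]))  # classify a new candidate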
Ensembled Correlation Between Liver Analysis Outputs | stat.ML cs.CE cs.LG | Data mining techniques on the biological analysis are spreading for most of
the areas including the health care and medical information. We have applied
the data mining techniques, such as KNN, SVM, MLP or decision trees over a
unique dataset, which is collected from 16,380 analysis results for a year.
Furthermore we have also used meta-classifiers to question the increased
correlation rate between the liver disorder and the liver analysis outputs. The
results show that there is a correlation among ALT, AST, Billirubin Direct and
Billirubin Total down to 15% of error rate. Also the correlation coefficient is
up to 94%. This makes possible to predict the analysis results from each other
or disease patterns can be applied over the linear correlation of the
parameters.
| Sadi Evren Seker, Y. Unal, Z. Erdem, and H. Erdinc Kocer | null | 1401.6597 | null | null |
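A minimal sketch of the kind of analysis described, computing pairwise correlations among the analysis outputs and predicting one output from the others, could look like this; the synthetic values below merely stand in for the (non-public) 16,380-result dataset.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
alt = rng.gamma(2.0, 15.0, 1000)                  # synthetic ALT values
df = pd.DataFrame({
    "ALT": alt,
    "AST": 0.9 * alt + rng.normal(0, 5, 1000),
    "BIL_D": 0.02 * alt + rng.normal(0, 0.3, 1000),
})
df["BIL_T"] = 1.8 * df["BIL_D"] + rng.normal(0, 0.2, 1000)

print(df.corr())                                  # correlation matrix
model = LinearRegression().fit(df[["AST", "BIL_D", "BIL_T"]], df["ALT"])
print("R^2 predicting ALT from the others:",
      model.score(df[["AST", "BIL_D", "BIL_T"]], df["ALT"]))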
Painting Analysis Using Wavelets and Probabilistic Topic Models | cs.CV cs.LG stat.ML | In this paper, computer-based techniques for stylistic analysis of paintings
are applied to the five panels of the 14th century Peruzzi Altarpiece by Giotto
di Bondone. Features are extracted by combining a dual-tree complex wavelet
transform with a hidden Markov tree (HMT) model. Hierarchical clustering is
used to identify stylistic keywords in image patches, and keyword frequencies
are calculated for sub-images that each contains many patches. A generative
hierarchical Bayesian model learns stylistic patterns of keywords; these
patterns are then used to characterize the styles of the sub-images; this in
turn, permits to discriminate between paintings. Results suggest that such
unsupervised probabilistic topic models can be useful to distill characteristic
elements of style.
| Tong Wu, Gungor Polatkan, David Steel, William Brown, Ingrid
Daubechies and Robert Calderbank | null | 1401.6638 | null | null |
A continuous-time approach to online optimization | math.OC cs.LG stat.ML | We consider a family of learning strategies for online optimization problems
that evolve in continuous time and we show that they lead to no regret. From a
more traditional, discrete-time viewpoint, this continuous-time approach allows
us to derive the no-regret properties of a large class of discrete-time
algorithms including as special cases the exponential weight algorithm, online
mirror descent, smooth fictitious play and vanishingly smooth fictitious play.
In so doing, we obtain a unified view of many classical regret bounds, and we
show that they can be decomposed into a term stemming from continuous-time
considerations and a term which measures the disparity between discrete and
continuous time. As a result, we obtain a general class of infinite horizon
learning strategies that guarantee an $\mathcal{O}(n^{-1/2})$ regret bound
without having to resort to a doubling trick.
| Joon Kwon and Panayotis Mertikopoulos | null | 1401.6956 | null | null |
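The exponential weight algorithm mentioned above as a special case can be sketched in a few lines; with a step size of order sqrt(log d / n), it attains the O(n^{-1/2}) average regret discussed in the abstract. The uniform loss sequence here is an arbitrary illustrative choice.

import numpy as np

def exp_weights(losses, eta):
    # exponential weights over d actions; losses has shape (n, d)
    n, d = losses.shape
    L = np.zeros(d)                       # cumulative losses
    total = 0.0
    for l in losses:
        w = np.exp(-eta * (L - L.min()))  # shift for numerical stability
        p = w / w.sum()
        total += p @ l                    # expected loss this round
        L += l
    return (total - L.min()) / n          # average regret vs. best fixed action

rng = np.random.default_rng(0)
n, d = 10000, 5
losses = rng.uniform(0, 1, (n, d))
print("average regret:", exp_weights(losses, eta=np.sqrt(np.log(d) / n)))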
Kaldi+PDNN: Building DNN-based ASR Systems with Kaldi and PDNN | cs.LG cs.CL | The Kaldi toolkit is becoming popular for constructing automated speech
recognition (ASR) systems. Meanwhile, in recent years, deep neural networks
(DNNs) have shown state-of-the-art performance on various ASR tasks. This
document describes our open-source recipes to implement fully-fledged DNN
acoustic modeling using Kaldi and PDNN. PDNN is a lightweight deep learning
toolkit developed under the Theano environment. Using these recipes, we can
build up multiple systems including DNN hybrid systems, convolutional neural
network (CNN) systems and bottleneck feature systems. These recipes are
directly based on the Kaldi Switchboard 110-hour setup. However, adapting them
to new datasets is easy to achieve.
| Yajie Miao | null | 1401.6984 | null | null |
A Stochastic Quasi-Newton Method for Large-Scale Optimization | math.OC cs.LG stat.ML | The question of how to incorporate curvature information in stochastic
approximation methods is challenging. The direct application of classical
quasi-Newton updating techniques for deterministic optimization leads to noisy
curvature estimates that have harmful effects on the robustness of the
iteration. In this paper, we propose a stochastic quasi-Newton method that is
efficient, robust and scalable. It employs the classical BFGS update formula in
its limited memory form, and is based on the observation that it is beneficial
to collect curvature information pointwise, and at regular intervals, through
(sub-sampled) Hessian-vector products. This technique differs from the
classical approach that would compute differences of gradients, and where
controlling the quality of the curvature estimates can be difficult. We present
numerical results on problems arising in machine learning that suggest that the
proposed method shows much promise.
| R.H. Byrd, S.L. Hansen, J. Nocedal, Y. Singer | null | 1401.7020 | null | null |
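A hedged NumPy sketch of the decoupling idea described above, limited-memory BFGS directions whose curvature pairs come from sub-sampled Hessian-vector products collected at regular intervals rather than from gradient differences, is given below; the step-size schedule, batch sizes and toy problem are assumptions, not the paper's exact configuration.

import numpy as np

def two_loop(g, S, Y):
    # standard L-BFGS two-loop recursion over stored curvature pairs
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])    # initial Hessian scaling
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return q

def sqn(grad, hess_vec, w, n_iter=500, L=10, M=5, eta=0.2, seed=0):
    rng = np.random.default_rng(seed)
    S, Y = [], []
    w_avg, w_prev = np.zeros_like(w), None
    for t in range(1, n_iter + 1):
        w = w - eta / np.sqrt(t) * two_loop(grad(w, rng), S, Y)
        w_avg += w / L
        if t % L == 0:                            # curvature update point
            if w_prev is not None:
                s = w_avg - w_prev
                y = hess_vec(w_avg, s, rng)       # sub-sampled H @ s
                if s @ y > 1e-10:                 # keep positive curvature
                    S.append(s); Y.append(y)
                    S, Y = S[-M:], Y[-M:]         # limited memory
            w_prev, w_avg = w_avg, np.zeros_like(w)
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((1000, 20))
x_true = rng.standard_normal(20)
b = A @ x_true

def grad(w, r):
    i = r.integers(0, 1000, 64)                   # stochastic gradient
    return A[i].T @ (A[i] @ w - b[i]) / 64

def hess_vec(w, s, r):
    i = r.integers(0, 1000, 128)                  # sub-sampled Hessian-vector
    return A[i].T @ (A[i] @ s) / 128

print("error:", np.linalg.norm(sqn(grad, hess_vec, np.zeros(20)) - x_true))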
Bayesian Properties of Normalized Maximum Likelihood and its Fast
Computation | cs.IT cs.LG math.IT stat.ML | The normalized maximum likelihood (NML) provides the minimax regret
solution in universal data compression, gambling, and prediction, and it plays
an essential role in the minimum description length (MDL) method of statistical
modeling and estimation. Here we show that the normalized maximum likelihood
has a Bayes-like representation as a mixture of the component models, even in
finite samples, though the weights of linear combination may be both positive
and negative. This representation addresses in part the relationship between
MDL and Bayes modeling. This representation has the advantage of speeding the
calculation of marginals and conditionals required for coding and prediction
applications.
| Andrew Barron, Teemu Roos and Kazuho Watanabe | null | 1401.7116 | null | null |
Bounding Embeddings of VC Classes into Maximum Classes | cs.LG math.CO stat.ML | One of the earliest conjectures in computational learning theory, the Sample
Compression conjecture, asserts that concept classes (equivalently set systems)
admit compression schemes of size linear in their VC dimension. To date, this
statement is known to be true for maximum classes---those that possess maximum
cardinality for their VC dimension. The most promising approach to positively
resolving the conjecture is by embedding general VC classes into maximum
classes without super-linear increase to their VC dimensions, as such
embeddings would extend the known compression schemes to all VC classes. We
show that maximum classes can be characterised by a local-connectivity property
of the graph obtained by viewing the class as a cubical complex. This geometric
characterisation of maximum VC classes is applied to prove a negative embedding
result which demonstrates VC-d classes that cannot be embedded in any maximum
class of VC dimension lower than 2d. On the other hand, we show that every VC-d
class C embeds in a VC-(d+D) maximum class where D is the deficiency of C,
i.e., the difference between the cardinalities of a maximum VC-d class and of
C. For VC-2 classes in binary n-cubes for 4 <= n <= 6, we give best possible
results on embedding into maximum classes. For some special classes of Boolean
functions, relationships with maximum classes are investigated. Finally we give
a general recursive procedure for embedding VC-d classes into VC-(d+k) maximum
classes for smallest k.
| J. Hyam Rubinstein and Benjamin I. P. Rubinstein and Peter L. Bartlett | null | 1401.7388 | null | null |
Smoothed Low Rank and Sparse Matrix Recovery by Iteratively Reweighted
Least Squares Minimization | cs.LG cs.CV stat.ML | This work presents a general framework for solving the low rank and/or sparse
matrix minimization problems, which may involve multiple non-smooth terms. The
Iteratively Reweighted Least Squares (IRLS) method is a fast solver, which
smooths the objective function and minimizes it by alternately updating the
variables and their weights. However, the traditional IRLS can only solve a
sparse only or low rank only minimization problem with squared loss or an
affine constraint. This work generalizes IRLS to solve joint/mixed low rank and
sparse minimization problems, which are essential formulations for many tasks.
As a concrete example, we solve the Schatten-$p$ norm and $\ell_{2,q}$-norm
regularized Low-Rank Representation (LRR) problem by IRLS, and theoretically
prove that the derived solution is a stationary point (globally optimal if
$p,q\geq1$). Our convergence proof of IRLS is more general than previous ones,
which depend on the special properties of the Schatten-$p$ norm and
$\ell_{2,q}$-norm. Extensive experiments on both synthetic and real data sets
demonstrate that our IRLS is much more efficient.
| Canyi Lu, Zhouchen Lin, Shuicheng Yan | 10.1109/TIP.2014.2380155 | 1401.7413 | null | null |
Bayesian nonparametric comorbidity analysis of psychiatric disorders | stat.ML cs.LG | The analysis of comorbidity is an open and complex research field in the
branch of psychiatry, where clinical experience and several studies suggest
that the relation among the psychiatric disorders may have etiological and
treatment implications. In this paper, we are interested in applying latent
feature modeling to find the latent structure behind the psychiatric disorders
that can help to examine and explain the relationships among them. To this end,
we use the large amount of information collected in the National Epidemiologic
Survey on Alcohol and Related Conditions (NESARC) database and propose to model
these data using a nonparametric latent model based on the Indian Buffet
Process (IBP). Due to the discrete nature of the data, we first need to adapt
the observation model for discrete random variables. We propose a generative
model in which the observations are drawn from a multinomial-logit distribution
given the IBP matrix. The implementation of an efficient Gibbs sampler is
accomplished using the Laplace approximation, which allows integrating out the
weighting factors of the multinomial-logit likelihood model. We also provide a
variational inference algorithm for this model, which provides a complementary
(and less expensive in terms of computational complexity) alternative to the
Gibbs sampler, allowing us to deal with larger amounts of data. Finally, we use
the model to analyze comorbidity among the psychiatric disorders diagnosed by
experts from the NESARC database.
| Francisco J. R. Ruiz, Isabel Valera, Carlos Blanco, Fernando
Perez-Cruz | null | 1401.7620 | null | null |
RES: Regularized Stochastic BFGS Algorithm | cs.LG math.OC stat.ML | RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno
(BFGS) quasi-Newton method, is proposed to solve convex optimization problems
with stochastic objectives. The use of stochastic gradient descent algorithms
is widespread, but the number of iterations required to approximate optimal
arguments can be prohibitive in high dimensional problems. Application of
second order methods, on the other hand, is impracticable because computation
of objective function Hessian inverses incurs excessive computational cost.
BFGS modifies gradient descent by introducing a Hessian approximation matrix
computed from finite gradient differences. RES utilizes stochastic gradients in
lieu of deterministic gradients for both the determination of descent
directions and the approximation of the objective function's curvature. Since
stochastic gradients can be computed at manageable computational cost RES is
realizable and retains the convergence rate advantages of its deterministic
counterparts. Convergence results show that lower and upper bounds on the
Hessian eigenvalues of the sample functions are sufficient to guarantee
convergence to optimal arguments. Numerical experiments showcase reductions in
convergence time relative to stochastic gradient descent algorithms and
non-regularized stochastic versions of BFGS. An application of RES to the
implementation of support vector machines is developed.
| Aryan Mokhtari and Alejandro Ribeiro | 10.1109/TSP.2014.2357775 | 1401.7625 | null | null |
Joint Inference of Multiple Label Types in Large Networks | cs.LG cs.SI stat.ML | We tackle the problem of inferring node labels in a partially labeled graph
where each node in the graph has multiple label types and each label type has a
large number of possible labels. Our primary example, and the focus of this
paper, is the joint inference of label types such as hometown, current city,
and employers, for users connected by a social network. Standard label
propagation fails to consider the properties of the label types and the
interactions between them. Our proposed method, called EdgeExplain, explicitly
models these, while still enabling scalable inference under a distributed
message-passing architecture. On a billion-node subset of the Facebook social
network, EdgeExplain significantly outperforms label propagation for several
label types, with lifts of up to 120% for recall@1 and 60% for recall@3.
| Deepayan Chakrabarti, Stanislav Funiak, Jonathan Chang, Sofus A.
Macskassy | null | 1401.7709 | null | null |
Security Evaluation of Support Vector Machines in Adversarial
Environments | cs.LG cs.CR | Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
| Battista Biggio and Igino Corona and Blaine Nelson and Benjamin I. P.
Rubinstein and Davide Maiorca and Giorgio Fumera and Giorgio Giacinto and and
Fabio Roli | null | 1401.7727 | null | null |
Maximum Margin Multiclass Nearest Neighbors | cs.LG math.ST stat.TH | We develop a general framework for margin-based multicategory classification
in metric spaces. The basic work-horse is a margin-regularized version of the
nearest-neighbor classifier. We prove generalization bounds that match the
state of the art in sample size $n$ and significantly improve the dependence on
the number of classes $k$. Our point of departure is a nearly Bayes-optimal
finite-sample risk bound independent of $k$. Although $k$-free, this bound is
unregularized and non-adaptive, which motivates our main result: Rademacher and
scale-sensitive margin bounds with a logarithmic dependence on $k$. As the best
previous risk estimates in this setting were of order $\sqrt k$, our bound is
exponentially sharper. From the algorithmic standpoint, in doubling metric
spaces our classifier may be trained on $n$ examples in $O(n^2\log n)$ time and
evaluated on new points in $O(\log n)$ time.
| Aryeh Kontorovich and Roi Weiss | null | 1401.7898 | null | null |
Support vector comparison machines | stat.ML cs.LG | In ranking problems, the goal is to learn a ranking function from labeled
pairs of input points. In this paper, we consider the related comparison
problem, where the label indicates which element of the pair is better, or if
there is no significant difference. We cast the learning problem as a margin
maximization, and show that it can be solved by converting it to a standard
SVM. We use simulated nonlinear patterns, a real learning to rank sushi data
set, and a chess data set to show that our proposed SVMcompare algorithm
outperforms SVMrank when there are equality pairs.
| David Venuto, Toby Dylan Hocking, Lakjaree Sphanurattana, Masashi
Sugiyama | null | 1401.8008 | null | null |
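The reduction to a standard SVM can be sketched as follows: a comparison pair (x, x') with label y in {-1, +1} becomes the difference vector x' - x with label y, so a linear SVM on differences learns a ranking function whose ordering of the pair agrees with y. Handling of the "no significant difference" pairs via the margin is omitted here for brevity, and the latent ranking function and data are illustrative assumptions.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
r = lambda X: X @ np.array([1.0, -2.0])      # hypothetical latent ranking
Xa = rng.standard_normal((300, 2))
Xb = rng.standard_normal((300, 2))
y = np.sign(r(Xb) - r(Xa)).astype(int)       # which element is better

svm = LinearSVC(C=1.0).fit(Xb - Xa, y)       # standard SVM on differences
w = svm.coef_.ravel()
print("learned direction:", w / np.linalg.norm(w))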
Empirically Evaluating Multiagent Learning Algorithms | cs.GT cs.LG | There exist many algorithms for learning how to play repeated bimatrix games.
Most of these algorithms are justified in terms of some sort of theoretical
guarantee. On the other hand, little is known about the empirical performance
of these algorithms. Most such claims in the literature are based on small
experiments, which has hampered understanding as well as the development of new
multiagent learning (MAL) algorithms. We have developed a new suite of tools
for running multiagent experiments: the MultiAgent Learning Testbed (MALT).
These tools are designed to facilitate larger and more comprehensive
experiments by removing the need to build one-off experimental code. MALT also
provides baseline implementations of many MAL algorithms, hopefully eliminating
or reducing differences between algorithm implementations and increasing the
reproducibility of results. Using this test suite, we ran an experiment
unprecedented in size. We analyzed the results according to a variety of
performance metrics including reward, maxmin distance, regret, and several
notions of equilibrium convergence. We confirmed several pieces of conventional
wisdom, but also discovered some surprising results. For example, we found that
single-agent $Q$-learning outperformed many more complicated and more modern
MAL algorithms.
| Erik Zawadzki, Asher Lipson, Kevin Leyton-Brown | null | 1401.8074 | null | null |
Extrinsic Methods for Coding and Dictionary Learning on Grassmann
Manifolds | cs.LG cs.CV stat.ML | Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
| Mehrtash Harandi, Richard Hartley, Chunhua Shen, Brian Lovell, Conrad
Sanderson | null | 1401.8126 | null | null |
Human Activity Recognition using Smartphone | cs.CY cs.HC cs.LG | Human activity recognition has wide applications in medical research and
human survey system. In this project, we design a robust activity recognition
system based on a smartphone. The system uses a 3-dimensional smartphone
accelerometer as the only sensor to collect time series signals, from which 31
features are generated in both time and frequency domain. Activities are
classified using 4 different passive learning methods, i.e., quadratic
classifier, k-nearest neighbor algorithm, support vector machine, and
artificial neural networks. Dimensionality reduction is performed through both
feature extraction and subset selection. Besides passive learning, we also
apply active learning algorithms to reduce data labeling expense. Experiment
results show that the classification rate of passive learning reaches 84.4% and
it is robust to common positions and poses of the cellphone. The results of active
learning on real data demonstrate a reduction of labeling labor to achieve
comparable performance with passive learning.
| Amin Rasekh, Chien-An Chen, Yan Lu | null | 1401.8212 | null | null |
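A compact sketch of the passive-learning pipeline, windowed time- and frequency-domain features from a 3-axis accelerometer followed by a k-nearest-neighbor classifier, is shown below; the window length, the particular features and the synthetic signals are illustrative assumptions rather than the paper's 31-feature design.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(sig, win=128):
    # per-window time- and frequency-domain features for a (n, 3) signal
    feats = []
    for i in range(0, len(sig) - win + 1, win):
        w = sig[i:i + win]
        fft_mag = np.abs(np.fft.rfft(w, axis=0))
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),   # time domain
            fft_mag.argmax(axis=0),          # dominant frequency bin
            fft_mag.mean(axis=0),            # spectral energy
        ]))
    return np.array(feats)

rng = np.random.default_rng(0)
t = np.arange(128 * 40) / 50.0               # 50 Hz sampling
walk = np.stack([np.sin(2 * np.pi * 2.0 * t)] * 3, axis=1)
walk += 0.3 * rng.standard_normal(walk.shape)
rest = 0.05 * rng.standard_normal(walk.shape)
X = np.vstack([window_features(walk), window_features(rest)])
y = np.array([1] * 40 + [0] * 40)            # 1 = walking, 0 = resting

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("training accuracy:", knn.score(X, y))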
Online Clustering of Bandits | cs.LG stat.ML | We introduce a novel algorithmic approach to content recommendation based on
adaptive clustering of exploration-exploitation ("bandit") strategies. We
provide a sharp regret analysis of this algorithm in a standard stochastic
noise setting, demonstrate its scalability properties, and prove its
effectiveness on a number of artificial and real-world datasets. Our
experiments show a significant increase in prediction performance over
state-of-the-art methods for bandit problems.
| Claudio Gentile, Shuai Li, Giovanni Zappella | null | 1401.8257 | null | null |
Experiments with Three Approaches to Recognizing Lexical Entailment | cs.CL cs.AI cs.LG | Inference in natural language often involves recognizing lexical entailment
(RLE); that is, identifying whether one word entails another. For example,
"buy" entails "own". Two general strategies for RLE have been proposed: One
strategy is to manually construct an asymmetric similarity measure for context
vectors (directional similarity) and another is to treat RLE as a problem of
learning to recognize semantic relations using supervised machine learning
techniques (relation classification). In this paper, we experiment with two
recent state-of-the-art representatives of the two general strategies. The
first approach is an asymmetric similarity measure (an instance of the
directional similarity strategy), designed to capture the degree to which the
contexts of a word, a, form a subset of the contexts of another word, b. The
second approach (an instance of the relation classification strategy)
represents a word pair, a:b, with a feature vector that is the concatenation of
the context vectors of a and b, and then applies supervised learning to a
training set of labeled feature vectors. Additionally, we introduce a third
approach that is a new instance of the relation classification strategy. The
third approach represents a word pair, a:b, with a feature vector in which the
features are the differences in the similarities of a and b to a set of
reference words. All three approaches use vector space models (VSMs) of
semantics, based on word-context matrices. We perform an extensive evaluation
of the three approaches using three different datasets. The proposed new
approach (similarity differences) performs significantly better than the other
two approaches on some datasets and there is no dataset for which it is
significantly worse. Our results suggest it is beneficial to make connections
between the research in lexical entailment and the research in semantic
relation classification.
| Peter D. Turney and Saif M. Mohammad | 10.1017/S1351324913000387 | 1401.8269 | null | null |
Neural Variational Inference and Learning in Belief Networks | cs.LG stat.ML | Highly expressive directed latent variable models, such as sigmoid belief
networks, are difficult to train on large datasets because exact inference in
them is intractable and none of the approximate inference methods that have
been applied to them scale well. We propose a fast non-iterative approximate
inference method that uses a feedforward network to implement efficient exact
sampling from the variational posterior. The model and this inference network
are trained jointly by maximizing a variational lower bound on the
log-likelihood. Although the naive estimator of the inference model gradient is
too high-variance to be useful, we make it practical by applying several
straightforward model-independent variance reduction techniques. Applying our
approach to training sigmoid belief networks and deep autoregressive networks,
we show that it outperforms the wake-sleep algorithm on MNIST and achieves
state-of-the-art results on the Reuters RCV1 document dataset.
| Andriy Mnih, Karol Gregor | null | 1402.0030 | null | null |
Dual-to-kernel learning with ideals | stat.ML cs.LG math.AC math.AG math.ST stat.TH | In this paper, we propose a theory which unifies kernel learning and symbolic
algebraic methods. We show that both worlds are inherently dual to each other,
and we use this duality to combine the structure-awareness of algebraic methods
with the efficiency and generality of kernels. The main idea lies in relating
polynomial rings to feature space, and ideals to manifolds, then exploiting
this generative-discriminative duality on kernel matrices. We illustrate this
by proposing two algorithms, IPCA and AVICA, for simultaneous manifold and
feature learning, and test their accuracy on synthetic and real world data.
| Franz J. Kir\'aly, Martin Kreuzer, and Louis Theran | null | 1402.0099 | null | null |
Markov Blanket Ranking using Kernel-based Conditional Dependence
Measures | stat.ML cs.LG | Developing feature selection algorithms that move beyond a pure correlational
to a more causal analysis of observational data is an important problem in the
sciences. Several algorithms attempt to do so by discovering the Markov blanket
of a target, but they all contain a forward selection step which variables must
pass in order to be included in the conditioning set. As a result, these
algorithms may not consider all possible conditional multivariate combinations.
We improve on this limitation by proposing a backward elimination method that
uses a kernel-based conditional dependence measure to identify the Markov
blanket in a fully multivariate fashion. The algorithm is easy to implement and
compares favorably to other methods on synthetic and real datasets.
| Eric V. Strobl, Shyam Visweswaran | null | 1402.0108 | null | null |
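A hedged sketch of the backward-elimination idea: starting from all candidate features, repeatedly drop the feature whose removal least reduces a kernel dependence measure between the remaining features and the target. For brevity this sketch scores feature sets with a simple (unconditional) Gaussian-kernel HSIC statistic rather than the conditional dependence measure used in the paper, and the kernel width and toy data are assumptions.

import numpy as np

def _gram(Z, s=1.0):
    sq = np.sum(Z**2, 1)[:, None] + np.sum(Z**2, 1)[None, :] - 2 * Z @ Z.T
    return np.exp(-sq / (2 * s**2))

def hsic(X, y):
    # biased HSIC estimate between feature matrix X and target y
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n
    K, L = _gram(X), _gram(y.reshape(-1, 1))
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def backward_eliminate(X, y, keep=2):
    active = list(range(X.shape[1]))
    while len(active) > keep:
        scores = [hsic(X[:, [j for j in active if j != i]], y)
                  for i in active]
        # drop the feature whose removal hurts dependence the least
        active.remove(active[int(np.argmax(scores))])
    return active

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(300)
print("retained features:", backward_eliminate(X, y))  # expect [0, 1]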
Randomized Nonlinear Component Analysis | stat.ML cs.LG | Classical methods such as Principal Component Analysis (PCA) and Canonical
Correlation Analysis (CCA) are ubiquitous in statistics. However, these
techniques are only able to reveal linear relationships in data. Although
nonlinear variants of PCA and CCA have been proposed, these are computationally
prohibitive in the large scale.
In a separate strand of recent research, randomized methods have been
proposed to construct features that help reveal nonlinear patterns in data. For
basic tasks such as regression or classification, random features exhibit
little or no loss in performance, while achieving drastic savings in
computational requirements.
In this paper we leverage randomness to design scalable new variants of
nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such
as spectral clustering or LDA. We demonstrate our algorithms through
experiments on real-world data, on which we compare against the
state-of-the-art. A simple R implementation of the presented algorithms is
provided.
| David Lopez-Paz, Suvrit Sra, Alex Smola, Zoubin Ghahramani, Bernhard
Sch\"olkopf | null | 1402.0119 | null | null |
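A minimal sketch of the randomized nonlinear PCA idea, approximating a Gaussian kernel with random Fourier features and then running ordinary linear PCA on those features, might look like this; the kernel width, the number of random features and the toy data are illustrative assumptions.

import numpy as np

def random_fourier_features(X, D=500, gamma=1.0, seed=0):
    # Rahimi-Recht features approximating the kernel exp(-gamma ||x - x'||^2)
    rng = np.random.default_rng(seed)
    W = rng.normal(0, np.sqrt(2 * gamma), (X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def randomized_nonlinear_pca(X, n_components=2, D=500):
    Z = random_fourier_features(X, D)
    Zc = Z - Z.mean(axis=0)                  # center in feature space
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:n_components].T          # nonlinear principal scores

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 400)       # data on a noisy circle
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((400, 2))
print("score matrix shape:", randomized_nonlinear_pca(X).shape)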
Collaborative Receptive Field Learning | cs.CV cs.LG cs.MM stat.ML | The challenge of object categorization in images is largely due to arbitrary
translations and scales of the foreground objects. To attack this difficulty,
we propose a new approach called collaborative receptive field learning to
extract specific receptive fields (RF's) or regions from multiple images, and
the selected RF's are supposed to focus on the foreground objects of a common
category. To this end, we solve the problem by maximizing a submodular function
over a similarity graph constructed by a pool of RF candidates. However,
measuring pairwise distance of RF's for building the similarity graph is a
nontrivial problem. Hence, we introduce a similarity metric called
pyramid-error distance (PED) to measure their pairwise distances through
summing up pyramid-like matching errors over a set of low-level features.
Besides, in consistent with the proposed PED, we construct a simple
nonparametric classifier for classification. Experimental results show that our
method effectively discovers the foreground objects in images, and improves
classification performance.
| Shu Kong, Zhuolin Jiang, Qiang Yang | null | 1402.0170 | null | null |
Principled Graph Matching Algorithms for Integrating Multiple Data
Sources | cs.DB cs.LG stat.ML | This paper explores combinatorial optimization for problems of max-weight
graph matching on multi-partite graphs, which arise in integrating multiple
data sources. Entity resolution, the data integration problem of performing
noisy joins on structured data, typically proceeds by first hashing each record
into zero or more blocks, scoring pairs of records that are co-blocked for
similarity, and then matching pairs of sufficient similarity. In the most
common case of matching two sources, it is often desirable for the final
matching to be one-to-one (a record may be matched with at most one other);
members of the database and statistical record linkage communities accomplish
such matchings in the final stage by weighted bipartite graph matching on
similarity scores. Such matchings are intuitively appealing: they leverage a
natural global property of many real-world entity stores (that of being nearly
deduped) and are known to provide significant improvements to precision and
recall. Unfortunately unlike the bipartite case, exact max-weight matching on
multi-partite graphs is known to be NP-hard. Our two-fold algorithmic
contributions approximate multi-partite max-weight matching: our first
algorithm borrows optimization techniques common to Bayesian probabilistic
inference; our second is a greedy approximation algorithm. In addition to a
theoretical guarantee on the latter, we present comparisons on a real-world ER
problem from Bing significantly larger than typically found in the literature,
publication data, and on a series of synthetic problems. Our results quantify
significant improvements due to exploiting multiple sources, which are made
possible by global one-to-one constraints linking otherwise independent
matching sub-problems. We also discover that our algorithms are complementary:
one being much more robust under noise, and the other being simple to implement
and very fast to run.
| Duo Zhang and Benjamin I. P. Rubinstein and Jim Gemmell | null | 1402.0282 | null | null |
Transductive Learning with Multi-class Volume Approximation | cs.LG stat.ML | Given a hypothesis space, the large volume principle by Vladimir Vapnik
prioritizes equivalence classes according to their volume in the hypothesis
space. The volume approximation has hitherto been successfully applied to
binary learning problems. In this paper, we extend it naturally to a more
general definition which can be applied to several transductive problem
settings, such as multi-class, multi-label and serendipitous learning. Even
though the resultant learning method involves a non-convex optimization
problem, the globally optimal solution is almost surely unique and can be
obtained in O(n^3) time. We theoretically provide stability and error analyses
for the proposed method, and then experimentally show that it is promising.
| Gang Niu, Bo Dai, Marthinus Christoffel du Plessis, and Masashi
Sugiyama | null | 1402.0288 | null | null |
A high-reproducibility and high-accuracy method for automated topic
classification | stat.ML cs.IR cs.LG physics.soc-ph | Much of human knowledge sits in large databases of unstructured text.
Leveraging this knowledge requires algorithms that extract and record metadata
on unstructured text documents. Assigning topics to documents will enable
intelligent search, statistical characterization, and meaningful
classification. Latent Dirichlet allocation (LDA) is the state-of-the-art in
topic classification. Here, we perform a systematic theoretical and numerical
analysis that demonstrates that current optimization techniques for LDA often
yield results which are not accurate in inferring the most suitable model
parameters. Adapting approaches for community detection in networks, we propose
a new algorithm which displays high reproducibility and high accuracy, and also
has high computational efficiency. We apply it to a large set of documents in
the English Wikipedia and reveal its hierarchical structure. Our algorithm
promises to make "big data" text analysis systems more reliable.
| Andrea Lancichinetti, M. Irmak Sirer, Jane X. Wang, Daniel Acuna,
Konrad K\"ording, Lu\'is A. Nunes Amaral | null | 1402.0422 | null | null |
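The reproducibility concern above can be probed with off-the-shelf tooling; a hedged sketch using scikit-learn's variational LDA on synthetic counts (not the authors' community-detection-based algorithm), comparing fits across random restarts:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Synthetic document-term counts as a stand-in corpus.
rng = np.random.default_rng(0)
X = rng.poisson(0.5, size=(200, 300))

# Fit LDA from several random restarts and compare log-likelihood
# bounds: a crude probe of the optimization variability analyzed above.
scores = []
for seed in range(3):
    lda = LatentDirichletAllocation(n_components=10, random_state=seed)
    lda.fit(X)
    scores.append(lda.score(X))  # approximate log-likelihood bound
print("spread across restarts:", max(scores) - min(scores))
```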
A Lower Bound for the Variance of Estimators for Nakagami m Distribution | cs.LG | Recently, we have proposed a maximum likelihood iterative algorithm for
estimation of the parameters of the Nakagami-m distribution. This technique
performs better than state-of-the-art estimation techniques for this distribution.
This could be of particular use in low-data or block-based estimation problems.
In these scenarios, the estimator should be able to give accurate estimates in
the mean-square sense with smaller amounts of data.
improve with the increase in number of blocks received. In this paper, we see
through our simulations that our proposal is well designed for such
requirements. Further, it is well known in the literature that an efficient
estimator does not exist for Nakagami-m distribution. In this paper, we derive
a theoretical expression for the variance of our proposed estimator. We find
that this expression clearly fits the experimental curve for the variance of
the proposed estimator. This expression is close to the Cramer-Rao lower
bound (CRLB).
| Rangeet Mitra, Amit Kumar Mishra and Tarun Choubisa | null | 1402.0452 | null | null |
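For reference, the standard Nakagami-m density and the generic Cramer-Rao bound mentioned above (the paper's specific variance expression is not reproduced here):

```latex
% Standard Nakagami-m density (shape m >= 1/2, spread Omega > 0):
f(x; m, \Omega) = \frac{2 m^m}{\Gamma(m)\,\Omega^m}\, x^{2m-1}
                  \exp\!\left(-\frac{m}{\Omega}\, x^2\right), \quad x > 0.

% Generic Cramer-Rao lower bound for an unbiased estimator \hat{\theta}
% from n iid samples, with Fisher information I(\theta):
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{n\, I(\theta)},
\qquad
I(\theta) = -\,\mathbb{E}\!\left[\frac{\partial^2}{\partial \theta^2}
            \log f(X; \theta)\right].
```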
Fine-Grained Visual Categorization via Multi-stage Metric Learning | cs.CV cs.LG stat.ML | Fine-grained visual categorization (FGVC) is to categorize objects into
subordinate classes instead of basic classes. One major challenge in FGVC is
the co-occurrence of two issues: 1) many subordinate classes are highly
correlated and are difficult to distinguish, and 2) there exists large
intra-class variation (e.g., due to object pose). This paper proposes to
explicitly address the above two issues via distance metric learning (DML). DML
addresses the first issue by learning an embedding so that data points from the
same class will be pulled together while those from different classes should be
pushed apart from each other; and it addresses the second issue by allowing the
flexibility that only a portion of the neighbors (not all data points) from the
same class need to be pulled together. However, feature representation of an
image is often high-dimensional, and DML is known to have difficulty in dealing
with high-dimensional feature vectors since it would require $\mathcal{O}(d^2)$
for storage and $\mathcal{O}(d^3)$ for optimization. To this end, we propose a
multi-stage metric learning framework that divides the large-scale
high-dimensional learning problem into a series of simple subproblems, achieving
$\mathcal{O}(d)$ computational complexity. The empirical study with FGVC
benchmark datasets verifies that our method is both effective and efficient
compared to the state-of-the-art FGVC approaches.
| Qi Qian, Rong Jin, Shenghuo Zhu and Yuanqing Lin | null | 1402.0453 | null | null |
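A hedged sketch of the pull/push intuition behind DML as described above; the triplet hinge loss, low-rank embedding, and gradient step are illustrative assumptions, not the paper's multi-stage solver:

```python
import numpy as np

def triplet_update(L, anchor, positive, negative, lr=0.01, margin=1.0):
    """One stochastic update illustrating the pull/push idea in DML:
    pull a same-class neighbor closer, push a different-class point
    away. L is a low-rank linear embedding (r x d); the triplet hinge
    loss below is an assumed surrogate, not the paper's objective."""
    d_pos = L @ (anchor - positive)
    d_neg = L @ (anchor - negative)
    loss = margin + d_pos @ d_pos - d_neg @ d_neg
    if loss > 0:  # margin violated: take a gradient step on L
        grad = 2 * (np.outer(d_pos, anchor - positive)
                    - np.outer(d_neg, anchor - negative))
        L -= lr * grad
    return L

# Toy usage with a 2 x 5 embedding on random 5-d features.
rng = np.random.default_rng(0)
L = rng.normal(size=(2, 5))
L = triplet_update(L, rng.normal(size=5), rng.normal(size=5),
                   rng.normal(size=5))
```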
Applying Supervised Learning Algorithms and a New Feature Selection
Method to Predict Coronary Artery Disease | cs.LG stat.ML | From a fresh data science perspective, this thesis discusses the prediction
of coronary artery disease based on genetic variations at the DNA base pair
level, called Single-Nucleotide Polymorphisms (SNPs), collected from the
Ontario Heart Genomics Study (OHGS).
First, the thesis explains two commonly used supervised learning algorithms,
the k-Nearest Neighbour (k-NN) and Random Forest classifiers, and includes a
complete proof that the k-NN classifier is universally consistent in any finite
dimensional normed vector space. Second, the thesis introduces two
dimensionality reduction steps, Random Projections, a known feature extraction
technique based on the Johnson-Lindenstrauss lemma, and a new method termed
Mass Transportation Distance (MTD) Feature Selection for discrete domains.
Then, this thesis compares the performance of Random Projections with the k-NN
classifier against MTD Feature Selection and Random Forest, for predicting
coronary artery disease, based on accuracy, the F-Measure, and area under the Receiver
Operating Characteristic (ROC) curve.
The comparative results demonstrate that MTD Feature Selection with Random
Forest is vastly superior to Random Projections and k-NN. The Random Forest
classifier is able to obtain an accuracy of 0.6660 and an area under the ROC
curve of 0.8562 on the OHGS genetic dataset, when 3335 SNPs are selected by MTD
Feature Selection for classification. This area is considerably better than the
previous high score of 0.608 obtained by Davies et al. in 2010 on the same
dataset.
| Hubert Haoyang Duan | null | 1402.0459 | null | null |
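The Random Projections plus k-NN pipeline compared in the thesis maps directly onto scikit-learn; a minimal sketch with a synthetic stand-in for the OHGS SNP matrix (MTD Feature Selection is not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

# Synthetic stand-in for a high-dimensional SNP matrix.
X, y = make_classification(n_samples=500, n_features=2000,
                           n_informative=50, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Johnson-Lindenstrauss-style projection to a low dimension, then k-NN.
clf = make_pipeline(
    GaussianRandomProjection(n_components=100, random_state=0),
    KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```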
Efficient Gradient-Based Inference through Transformations between Bayes
Nets and Neural Nets | cs.LG stat.ML | Hierarchical Bayesian networks and neural networks with stochastic hidden
units are commonly perceived as two separate types of models. We show that
either of these types of models can often be transformed into an instance of
the other, by switching between centered and differentiable non-centered
parameterizations of the latent variables. The choice of parameterization
greatly influences the efficiency of gradient-based posterior inference; we
show that they are often complementary to each other, clarify when each
parameterization is preferred, and show how inference can be made robust. In the
non-centered form, a simple Monte Carlo estimator of the marginal likelihood
can be used for learning the parameters. Theoretical results are supported by
experiments.
| Diederik P. Kingma, Max Welling | null | 1402.0480 | null | null |
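In the simplest Gaussian case, the centered/non-centered switch above is the familiar reparameterization below; a minimal sketch assuming a scalar Gaussian latent:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8

# Centered parameterization: sample the latent variable directly.
z_centered = rng.normal(loc=mu, scale=sigma)

# Differentiable non-centered parameterization: the latent is a
# deterministic, differentiable function of parameter-free noise, so
# gradients with respect to (mu, sigma) pass through the sample.
eps = rng.normal()
z_noncentered = mu + sigma * eps
```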
Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits | cs.LG stat.ML | We present a new algorithm for the contextual bandit learning problem, where
the learner repeatedly takes one of $K$ actions in response to the observed
context, and observes the reward only for that chosen action. Our method
assumes access to an oracle for solving fully supervised cost-sensitive
classification problems and achieves the statistically optimal regret guarantee
with only $\tilde{O}(\sqrt{KT/\log N})$ oracle calls across all $T$ rounds,
where $N$ is the number of policies in the policy class we compete against. By
doing so, we obtain the most practical contextual bandit learning algorithm
amongst approaches that work for general policy classes. We further conduct a
proof-of-concept experiment which demonstrates the excellent computational and
prediction performance of (an online variant of) our algorithm relative to
several baselines.
| Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and
Robert E. Schapire | null | 1402.0555 | null | null |
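Not the paper's oracle-efficient algorithm, but a hedged sketch of the interaction protocol it addresses: observe a context, choose one of K actions, see the reward for that action only; the epsilon-greedy learner and linear reward models are stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, T, eps, lr = 4, 5, 1000, 0.1, 0.05
weights = np.zeros((K, d))         # learned per-action score models
true_w = rng.normal(size=(K, d))   # synthetic environment

for t in range(T):
    x = rng.normal(size=d)                       # observed context
    if rng.random() < eps:
        a = int(rng.integers(K))                 # explore uniformly
    else:
        a = int(np.argmax(weights @ x))          # exploit current scores
    r = true_w[a] @ x + 0.1 * rng.normal()       # reward for chosen arm only
    weights[a] += lr * (r - weights[a] @ x) * x  # SGD on squared error
```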
Parameterized Complexity Results for Exact Bayesian Network Structure
Learning | cs.AI cs.LG | Bayesian network structure learning is the notoriously difficult problem of
discovering a Bayesian network that optimally represents a given set of
training data. In this paper we study the computational worst-case complexity
of exact Bayesian network structure learning under graph theoretic restrictions
on the (directed) super-structure. The super-structure is an undirected graph
that contains as subgraphs the skeletons of solution networks. We introduce the
directed super-structure as a natural generalization of its undirected
counterpart. Our results apply to several variants of score-based Bayesian
network structure learning where the score of a network decomposes into local
scores of its nodes. Results: We show that exact Bayesian network structure
learning can be carried out in non-uniform polynomial time if the
super-structure has bounded treewidth, and in linear time if in addition the
super-structure has bounded maximum degree. Furthermore, we show that if the
directed super-structure is acyclic, then exact Bayesian network structure
learning can be carried out in quadratic time. We complement these positive
results with a number of hardness results. We show that both restrictions
(treewidth and degree) are essential and cannot be dropped without losing
uniform polynomial-time tractability (subject to a complexity-theoretic
assumption). Similarly, exact Bayesian network structure learning remains
NP-hard for "almost acyclic" directed super-structures. Furthermore, we show
that the restrictions remain essential if we do not search for a globally
optimal network but aim to improve a given network by means of at most k arc
additions, arc deletions, or arc reversals (k-neighborhood local search).
| Sebastian Ordyniak, Stefan Szeider | 10.1613/jair.3744 | 1402.0558 | null | null |
Safe Exploration of State and Action Spaces in Reinforcement Learning | cs.LG cs.AI | In this paper, we consider the important problem of safe exploration in
reinforcement learning. While reinforcement learning is well-suited to domains
with complex transition dynamics and high-dimensional state-action spaces, an
additional challenge is posed by the need for safe and efficient exploration.
Traditional exploration techniques are not particularly useful for solving
dangerous tasks, where the trial and error process may lead to the selection of
actions whose execution in some states may result in damage to the learning
system (or any other system). Consequently, when an agent begins an interaction
with a dangerous and high-dimensional state-action space, an important question
arises, namely how to avoid (or at least minimize) damage caused by
the exploration of the state-action space. We introduce the PI-SRL algorithm
which safely improves suboptimal albeit robust behaviors for continuous state
and action control tasks and which efficiently learns from the experience
gained from the environment. We evaluate the proposed method in four complex
tasks: automatic car parking, pole-balancing, helicopter hovering, and business
management.
| Javier Garcia, Fernando Fernandez | 10.1613/jair.3761 | 1402.0560 | null | null |
Online Stochastic Optimization under Correlated Bandit Feedback | stat.ML cs.LG cs.SY | In this paper we consider the problem of online stochastic optimization of a
locally smooth function under bandit feedback. We introduce the high-confidence
tree (HCT) algorithm, a novel any-time $\mathcal{X}$-armed bandit algorithm,
and derive regret bounds matching the performance of the existing state of the art
in terms of dependency on the number of steps and the smoothness factor. The main
advantage of HCT is that it handles the challenging case of correlated rewards,
whereas existing methods require that the reward-generating process of each arm
is an independent and identically distributed (iid) random process. HCT also
improves on the state-of-the-art in terms of its memory requirement as well as
requiring a weaker smoothness assumption on the mean-reward function compared
to previous anytime algorithms. Finally, we discuss how HCT can be applied
to the problem of policy search in reinforcement learning and we report
preliminary empirical results.
| Mohammad Gheshlaghi Azar, Alessandro Lazaric and Emma Brunskill | null | 1402.0562 | null | null |
A Feature Subset Selection Algorithm Automatic Recommendation Method | cs.LG | Many feature subset selection (FSS) algorithms have been proposed, but not
all of them are appropriate for a given feature selection problem. At the same
time, there is as yet no reliable way to choose appropriate FSS algorithms
for the problem at hand. Thus, FSS algorithm automatic recommendation is very
important and practically useful. In this paper, a meta learning based FSS
algorithm automatic recommendation method is presented. The proposed method
first identifies the data sets that are most similar to the one at hand by the
k-nearest neighbor classification algorithm, and the distances among these data
sets are calculated based on the commonly-used data set characteristics. Then,
it ranks all the candidate FSS algorithms according to their performance on
these similar data sets, and chooses the algorithms with best performance as
the appropriate ones. The performance of the candidate FSS algorithms is
evaluated by a multi-criteria metric that takes into account not only the
classification accuracy over the selected features, but also the runtime of
feature selection and the number of selected features. The proposed
recommendation method is extensively tested on 115 real-world data sets with 22
well-known and frequently-used different FSS algorithms for five representative
classifiers. The results show the effectiveness of our proposed FSS algorithm
recommendation method.
| Guangtao Wang, Qinbao Song, Heli Sun, Xueying Zhang, Baowen Xu, Yuming
Zhou | 10.1613/jair.3831 | 1402.0570 | null | null |
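The recommendation loop above reduces to a few lines; a hedged sketch assuming precomputed meta-features and a historical performance table, with the multi-criteria metric collapsed into a single score per (algorithm, dataset) pair:

```python
import numpy as np

def recommend_fss(meta_new, meta_hist, perf_hist, k=3):
    """meta_new: meta-feature vector of the dataset at hand.
    meta_hist: (n_datasets x n_meta) meta-features of known datasets.
    perf_hist: dict algo -> length-n_datasets array of multi-criteria
    scores (assumed precomputed from accuracy, runtime, #features).
    Returns candidate FSS algorithms ranked on the k nearest datasets."""
    dists = np.linalg.norm(meta_hist - meta_new, axis=1)
    nearest = np.argsort(dists)[:k]          # k most similar datasets
    return sorted(perf_hist,
                  key=lambda algo: -perf_hist[algo][nearest].mean())

# Toy usage with random meta-features and hypothetical algorithm names.
meta_hist = np.random.rand(10, 6)
perf_hist = {a: np.random.rand(10) for a in ["ReliefF", "CFS", "InfoGain"]}
print(recommend_fss(np.random.rand(6), meta_hist, perf_hist))
```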
A Survey on Latent Tree Models and Applications | cs.LG | In data analysis, latent variables play a central role because they help
provide powerful insights into a wide variety of phenomena, ranging from
biological to human sciences. The latent tree model, a particular type of
probabilistic graphical models, deserves attention. Its simple structure - a
tree - allows simple and efficient inference, while its latent variables
capture complex relationships. In the past decade, the latent tree model has
been subject to significant theoretical and methodological developments. In
this review, we propose a comprehensive study of this model. First we summarize
key ideas underlying the model. Second we explain how it can be efficiently
learned from data. Third we illustrate its use within three types of
applications: latent structure discovery, multidimensional clustering, and
probabilistic inference. Finally, we conclude and give promising directions for
future research in this field.
| Rapha\"el Mourad, Christine Sinoquet, Nevin L. Zhang, Tengfei Liu,
Philippe Leray | 10.1613/jair.3879 | 1402.0577 | null | null |
Generalization and Exploration via Randomized Value Functions | stat.ML cs.AI cs.LG cs.SY | We propose randomized least-squares value iteration (RLSVI) -- a new
reinforcement learning algorithm designed to explore and generalize efficiently
via linearly parameterized value functions. We explain why versions of
least-squares value iteration that use Boltzmann or epsilon-greedy exploration
can be highly inefficient, and we present computational results that
demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish
an upper bound on the expected regret of RLSVI that demonstrates
near-optimality in a tabula rasa learning context. More broadly, our results
suggest that randomized value functions offer a promising approach to tackling
a critical challenge in reinforcement learning: synthesizing efficient
exploration and effective generalization.
| Ian Osband, Benjamin Van Roy, Zheng Wen | null | 1402.0635 | null | null |
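A hedged sketch of the randomization at the heart of RLSVI: fit a linearly parameterized value function to noise-perturbed regression targets and act greedily with respect to the sample; dimensions, noise scale, and regularization below are illustrative assumptions:

```python
import numpy as np

def rlsvi_sample_weights(Phi, targets, sigma=1.0, lam=1.0, rng=None):
    """One randomized least-squares step: perturb the regression targets
    with Gaussian noise, regularize toward a random prior draw, and
    solve the resulting ridge regression. Acting greedily w.r.t. the
    sampled weights yields the exploratory behavior described above
    (a sketch, not the full episodic algorithm)."""
    rng = rng or np.random.default_rng()
    noisy = targets + sigma * rng.normal(size=targets.shape)
    w_prior = rng.normal(size=Phi.shape[1]) / np.sqrt(lam)
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ noisy + lam * w_prior)

# Toy usage: 50 transitions with 8 state-action features each.
rng = np.random.default_rng(0)
Phi = rng.normal(size=(50, 8))
targets = Phi @ rng.normal(size=8) + 0.1 * rng.normal(size=50)
w_sample = rlsvi_sample_weights(Phi, targets, rng=rng)
```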