categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---
cs.MA cs.AI cs.DC cs.LG | null | 1312.7606 | null | null | http://arxiv.org/pdf/1312.7606v2 | 2014-11-05T19:50:03Z | 2013-12-30T00:16:34Z | Distributed Policy Evaluation Under Multiple Behavior Strategies | We apply diffusion strategies to develop a fully-distributed cooperative
reinforcement learning algorithm in which agents in a network communicate only
with their immediate neighbors to improve predictions about their environment.
The algorithm can also be applied to off-policy learning, meaning that the
agents can predict the response to a behavior different from the actual
policies they are following. The proposed distributed strategy is efficient,
with linear complexity in both computation time and memory footprint. We
provide a mean-square-error performance analysis and establish convergence
under constant step-size updates, which endow the network with continuous
learning capabilities. The results show a clear gain from cooperation: when the
individual agents can estimate the solution, cooperation increases stability
and reduces bias and variance of the prediction error; but, more importantly,
the network is able to approach the optimal solution even when none of the
individual agents can (e.g., when the individual behavior policies restrict
each agent to sample a small portion of the state space).
| [
"Sergio Valcarcel Macua, Jianshu Chen, Santiago Zazo, Ali H. Sayed",
"['Sergio Valcarcel Macua' 'Jianshu Chen' 'Santiago Zazo' 'Ali H. Sayed']"
] |
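The diffusion strategy above pairs a local temporal-difference update with a neighborhood averaging (combine) step. Below is a minimal sketch of that adapt-then-combine pattern for TD(0) with linear function approximation; the toy chain MDP, the uniform combination matrix, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Sketch: diffusion TD(0), adapt-then-combine. Illustrative only.
n_agents, n_states, dim = 4, 10, 5
rng = np.random.default_rng(0)
phi = rng.standard_normal((n_states, dim))        # fixed state features
C = np.full((n_agents, n_agents), 1.0 / n_agents) # doubly stochastic combiner
w = np.zeros((n_agents, dim))                     # one weight vector per agent
alpha, gamma = 0.05, 0.9

for t in range(2000):
    psi = np.empty_like(w)
    for k in range(n_agents):                     # 1) local adaptation step
        s = rng.integers(n_states)
        s_next = (s + 1) % n_states               # deterministic toy dynamics
        r = 1.0 if s_next == 0 else 0.0
        delta = r + gamma * phi[s_next] @ w[k] - phi[s] @ w[k]
        psi[k] = w[k] + alpha * delta * phi[s]
    w = C @ psi                                   # 2) combine with neighbors
```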
stat.ML cs.LG cs.SY | null | 1312.7651 | null | null | http://arxiv.org/pdf/1312.7651v2 | 2015-05-14T21:44:39Z | 2013-12-30T08:46:01Z | Petuum: A New Platform for Distributed Machine Learning on Big Data | What is a systematic way to efficiently apply a wide spectrum of advanced ML
programs to industrial scale problems, using Big Models (up to 100s of billions
of parameters) on Big Data (up to terabytes or petabytes)? Modern
parallelization strategies employ fine-grained operations and scheduling beyond
the classic bulk-synchronous processing paradigm popularized by MapReduce, or
even specialized graph-based execution that relies on graph representations of
ML programs. The variety of approaches tends to pull systems and algorithms
design in different directions, and it remains difficult to find a universal
platform applicable to a wide range of ML programs at scale. We propose a
general-purpose framework that systematically addresses data- and
model-parallel challenges in large-scale ML, by observing that many ML programs
are fundamentally optimization-centric and admit error-tolerant,
iterative-convergent algorithmic solutions. This presents unique opportunities
for an integrative system design, such as bounded-error network synchronization
and dynamic scheduling based on ML program structure. We demonstrate the
efficacy of these system designs versus well-known implementations of modern ML
algorithms, allowing ML programs to run in much less time and at considerably
larger model sizes, even on modestly-sized compute clusters.
| [
"Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak\n Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yaoliang Yu",
"['Eric P. Xing' 'Qirong Ho' 'Wei Dai' 'Jin Kyu Kim' 'Jinliang Wei'\n 'Seunghak Lee' 'Xun Zheng' 'Pengtao Xie' 'Abhimanu Kumar' 'Yaoliang Yu']"
] |
cs.LG cs.GT | null | 1312.7658 | null | null | http://arxiv.org/pdf/1312.7658v1 | 2013-12-30T09:15:03Z | 2013-12-30T09:15:03Z | Response-Based Approachability and its Application to Generalized
No-Regret Algorithms | Approachability theory, introduced by Blackwell (1956), provides fundamental
results on repeated games with vector-valued payoffs, and has been usefully
applied since in the theory of learning in games and to learning algorithms in
the online adversarial setup. Given a repeated game with vector payoffs, a
target set $S$ is approachable by a certain player (the agent) if he can ensure
that the average payoff vector converges to that set no matter what his
adversary opponent does. Blackwell provided two equivalent sets of conditions
for a convex set to be approachable. The first (primal) condition is a
geometric separation condition, while the second (dual) condition requires that
the set be {\em non-excludable}, namely that for every mixed action of the
opponent there exists a mixed action of the agent (a {\em response}) such that
the resulting payoff vector belongs to $S$. Existing approachability algorithms
rely on the primal condition and essentially require computing at each stage a
projection direction from a given point to $S$. In this paper, we introduce an
approachability algorithm that relies on Blackwell's {\em dual} condition.
Thus, rather than projection, the algorithm relies on computation of the
response to a certain action of the opponent at each stage. The utility of the
proposed algorithm is demonstrated by applying it to certain generalizations of
the classical regret minimization problem, which include regret minimization
with side constraints and regret minimization for global cost functions. In
these problems, computation of the required projections is generally complex
but a response is readily obtainable.
| [
"Andrey Bernstein and Nahum Shimkin",
"['Andrey Bernstein' 'Nahum Shimkin']"
] |
cs.LG math.OC stat.ML | null | 1312.7853 | null | null | http://arxiv.org/pdf/1312.7853v4 | 2014-05-13T20:24:28Z | 2013-12-30T20:23:38Z | Communication Efficient Distributed Optimization using an Approximate
Newton-type Method | We present a novel Newton-type method for distributed optimization, which is
particularly well suited for stochastic optimization and learning problems. For
quadratic objectives, the method enjoys a linear rate of convergence which
provably \emph{improves} with the data size, requiring an essentially constant
number of iterations under reasonable assumptions. We provide theoretical and
empirical evidence of the advantages of our method compared to other
approaches, such as one-shot parameter averaging and ADMM.
| [
"Ohad Shamir, Nathan Srebro, Tong Zhang",
"['Ohad Shamir' 'Nathan Srebro' 'Tong Zhang']"
] |
stat.ML cs.DC cs.LG | null | 1312.7869 | null | null | http://arxiv.org/pdf/1312.7869v2 | 2013-12-31T22:07:17Z | 2013-12-30T20:53:09Z | Consistent Bounded-Asynchronous Parameter Servers for Distributed ML | In distributed ML applications, shared parameters are usually replicated
among computing nodes to minimize network overhead. Therefore, a proper
consistency model must be carefully chosen to ensure the algorithm's correctness
and provide high throughput. Existing consistency models used in
general-purpose databases and modern distributed ML systems are either too
loose to guarantee correctness of the ML algorithms or too strict and thus fail
to fully exploit the computing power of the underlying distributed system.
Many ML algorithms fall into the category of \emph{iterative convergent
algorithms} which start from a randomly chosen initial point and converge to
optima by iteratively repeating a set of procedures. We have found that many such
algorithms are robust to a bounded amount of inconsistency and still converge
correctly. This property allows distributed ML to relax strict consistency
models to improve system performance while theoretically guaranteeing
algorithmic correctness. In this paper, we present several relaxed consistency models for
asynchronous parallel computation and theoretically prove their algorithmic
correctness. The proposed consistency models are implemented in a distributed
parameter server and evaluated in the context of a popular ML application:
topic modeling.
| [
"['Jinliang Wei' 'Wei Dai' 'Abhimanu Kumar' 'Xun Zheng' 'Qirong Ho'\n 'Eric P. Xing']",
"Jinliang Wei, Wei Dai, Abhimanu Kumar, Xun Zheng, Qirong Ho and Eric\n P. Xing"
] |
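The relaxed consistency models above bound how stale each worker's view of the shared parameters may become. A minimal sketch of one such rule, a bounded-staleness (stale-synchronous) clock, follows; the class and method names are hypothetical, not the paper's API.

```python
# Sketch of a bounded-staleness rule: a worker may proceed only while its
# clock is at most s ticks ahead of the slowest worker. Hypothetical API.
class BoundedStalenessClock:
    def __init__(self, n_workers, staleness):
        self.clocks = [0] * n_workers
        self.s = staleness

    def can_proceed(self, worker):
        return self.clocks[worker] - min(self.clocks) <= self.s

    def tick(self, worker):
        # called after the worker finishes one iteration of updates
        self.clocks[worker] += 1
```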
cs.LG | null | 1401.0044 | null | null | http://arxiv.org/pdf/1401.0044v1 | 2013-12-30T22:40:50Z | 2013-12-30T22:40:50Z | Approximating the Bethe partition function | When belief propagation (BP) converges, it does so to a stationary point of
the Bethe free energy $F$, and is often strikingly accurate. However, it may
converge only to a local optimum or may not converge at all. An algorithm was
recently introduced for attractive binary pairwise MRFs which is guaranteed to
return an $\epsilon$-approximation to the global minimum of $F$ in polynomial
time provided the maximum degree $\Delta=O(\log n)$, where $n$ is the number of
variables. Here we significantly improve this algorithm and derive several
results including a new approach based on analyzing first derivatives of $F$,
which leads to performance that is typically far superior and yields a fully
polynomial-time approximation scheme (FPTAS) for attractive models without any
degree restriction. Further, the method applies to general (non-attractive)
models, though with no polynomial time guarantee in this case, leading to the
important result that approximating $\log$ of the Bethe partition function,
$\log Z_B=-\min F$, for a general model to additive $\epsilon$-accuracy may be
reduced to a discrete MAP inference problem. We explore an application to
predicting equipment failure on an urban power network and demonstrate that the
Bethe approximation can perform well even when BP fails to converge.
| [
"['Adrian Weller' 'Tony Jebara']",
"Adrian Weller, Tony Jebara"
] |
cs.AI cs.LG cs.NE stat.ML | 10.1109/TCYB.2013.2265084 | 1401.0104 | null | null | http://arxiv.org/abs/1401.0104v1 | 2013-12-31T07:09:02Z | 2013-12-31T07:09:02Z | PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction | Multi-step-ahead time series prediction is one of the most challenging
research topics in the field of time series modeling and prediction, and is
continually under research. Recently, the multiple-input several
multiple-outputs (MISMO) modeling strategy has been proposed as a promising
alternative for multi-step-ahead time series prediction, exhibiting advantages
compared with the two currently dominating strategies, the iterated and the
direct strategies. Built on the established MISMO strategy, this study proposes
a particle swarm optimization (PSO)-based MISMO modeling strategy, which is
capable of determining the number of sub-models in a self-adaptive mode, with
varying prediction horizons. Rather than deriving crisp divides with equal-sized
prediction horizons from the established MISMO, the proposed PSO-MISMO
strategy, implemented with neural networks, employs a heuristic to create
flexible divides with varying sizes of prediction horizons and to generate
corresponding sub-models, providing considerable flexibility in model
construction, which has been validated with simulated and real datasets.
| [
"['Yukun Bao' 'Tao Xiong' 'Zhongyi Hu']",
"Yukun Bao, Tao Xiong, Zhongyi Hu"
] |
cs.LG | null | 1401.0116 | null | null | http://arxiv.org/pdf/1401.0116v1 | 2013-12-31T09:13:09Z | 2013-12-31T09:13:09Z | Controlled Sparsity Kernel Learning | Multiple Kernel Learning (MKL) on Support Vector Machines (SVMs) has been a
popular front of research in recent times due to its success in application
problems like Object Categorization. This success is due to the fact that MKL
has the ability to choose from a variety of feature kernels to identify the
optimal kernel combination. However, the initial formulation of MKL was only able
to select the best of the features, missing out on many other informative kernels
present. To overcome this, the Lp-norm-based formulation was proposed by
Kloft et al. This formulation is capable of choosing a non-sparse set of
kernels through a control parameter p. Unfortunately, the parameter p does not
have a direct meaning to the number of kernels selected. We have observed that
stricter control over the number of kernels selected gives us an edge over
these techniques in terms of accuracy of classification and also helps us to
fine-tune the algorithms to the time requirements at hand. In this work, we
propose a Controlled Sparsity Kernel Learning (CSKL) formulation that can
strictly control the number of kernels which we wish to select. The CSKL
formulation introduces a parameter t which directly corresponds to the number
of kernels selected. It is important to note that a search in t space is finite
and fast as compared to p. We have also provided an efficient Reduced Gradient
Descent based algorithm to solve the CSKL formulation, which is proven to
converge. Through our experiments on the Caltech101 Object Categorization
dataset, we have also shown that one can achieve better accuracies than the
previous formulations through the right choice of t.
| [
"['Dinesh Govindaraj' 'Raman Sankaran' 'Sreedal Menon'\n 'Chiranjib Bhattacharyya']",
"Dinesh Govindaraj, Raman Sankaran, Sreedal Menon, Chiranjib\n Bhattacharyya"
] |
stat.ML cs.LG stat.CO stat.ME | null | 1401.0118 | null | null | http://arxiv.org/pdf/1401.0118v1 | 2013-12-31T09:32:43Z | 2013-12-31T09:32:43Z | Black Box Variational Inference | Variational inference has become a widely used method to approximate
posteriors in complex latent variable models. However, deriving a variational
inference algorithm generally requires significant model-specific analysis, and
these efforts can hinder and deter us from quickly developing and exploring a
variety of models for a problem at hand. In this paper, we present a "black
box" variational inference algorithm, one that can be quickly applied to many
models with little additional derivation. Our method is based on a stochastic
optimization of the variational objective where the noisy gradient is computed
from Monte Carlo samples from the variational distribution. We develop a number
of methods to reduce the variance of the gradient, always maintaining the
criterion that we want to avoid difficult model-based derivations. We evaluate
our method against the corresponding black box sampling based methods. We find
that our method reaches better predictive likelihoods much faster than sampling
methods. Finally, we demonstrate that Black Box Variational Inference lets us
easily explore a wide space of models by quickly constructing and evaluating
several models of longitudinal healthcare data.
| [
"Rajesh Ranganath and Sean Gerrish and David M. Blei",
"['Rajesh Ranganath' 'Sean Gerrish' 'David M. Blei']"
] |
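The noisy gradient at the heart of the method above is the score-function estimator, $\nabla_\lambda \mathcal{L} = \mathbb{E}_q[\nabla_\lambda \log q(z|\lambda)\,(\log p(x,z) - \log q(z|\lambda))]$, approximated with Monte Carlo samples from $q$. Below is a self-contained sketch on a conjugate toy model where the posterior is known in closed form; the model, step size, and sample count are illustrative choices.

```python
import numpy as np

# Toy model: z ~ N(0,1), x|z ~ N(z,1); variational family q(z) = N(mu, sigma^2).
rng = np.random.default_rng(0)
x, mu, log_sigma = 2.0, 0.0, 0.0

def log_joint(z):
    return -0.5 * z**2 - 0.5 * (x - z) ** 2      # log p(z) + log p(x|z) + const

for step in range(3000):
    sigma = np.exp(log_sigma)
    z = mu + sigma * rng.standard_normal(64)     # Monte Carlo samples from q
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - log_sigma
    score_mu = (z - mu) / sigma**2               # d log q / d mu
    score_ls = ((z - mu) / sigma) ** 2 - 1.0     # d log q / d log_sigma
    f = log_joint(z) - log_q
    mu += 0.01 * np.mean(score_mu * f)           # noisy ELBO gradient steps
    log_sigma += 0.01 * np.mean(score_ls * f)
# True posterior is N(x/2, 1/2), so mu should approach 1.0.
```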
cs.NA cs.LG | null | 1401.0159 | null | null | http://arxiv.org/pdf/1401.0159v1 | 2013-12-31T15:25:50Z | 2013-12-31T15:25:50Z | Speeding-Up Convergence via Sequential Subspace Optimization: Current
State and Future Directions | This is an overview paper written in the style of a research proposal. In recent
years we introduced a general framework for large-scale unconstrained
optimization -- Sequential Subspace Optimization (SESOP) and demonstrated its
usefulness for sparsity-based signal/image denoising, deconvolution,
compressive sensing, computed tomography, diffraction imaging, support vector
machines. We explored its combination with Parallel Coordinate Descent and
Separable Surrogate Function methods, obtaining state of the art results in
above-mentioned areas. There are several methods that are faster than plain
SESOP under specific conditions: Trust region Newton method - for problems with
easily invertible Hessian matrix; Truncated Newton method - when fast
multiplication by Hessian is available; Stochastic optimization methods - for
problems with large stochastic-type data; Multigrid methods - for problems with
nested multilevel structure. Each of these methods can be further improved by
merging with SESOP. One can also accelerate the Augmented Lagrangian method for
constrained optimization problems and the Alternating Direction Method of
Multipliers for problems with separable objective function and non-separable
constraints.
| [
"['Michael Zibulevsky']",
"Michael Zibulevsky"
] |
stat.ME cs.DS cs.IT cs.LG math.IT | null | 1401.0201 | null | null | http://arxiv.org/pdf/1401.0201v1 | 2013-12-31T18:17:09Z | 2013-12-31T18:17:09Z | Sparse Recovery with Very Sparse Compressed Counting | Compressed sensing (sparse signal recovery) often encounters nonnegative data
(e.g., images). Recently we developed the methodology of using (dense)
Compressed Counting for recovering nonnegative K-sparse signals. In this paper,
we adopt very sparse Compressed Counting for nonnegative signal recovery. Our
design matrix is sampled from a maximally-skewed p-stable distribution (0<p<1),
and we sparsify the design matrix so that on average (1-g)-fraction of the
entries become zero. The idea is related to very sparse stable random
projections (Li et al 2006 and Li 2007), the prior work for estimating summary
statistics of the data.
In our theoretical analysis, we show that, when p->0, it suffices to use M =
K/(1-exp(-gK)) log N measurements, so that all coordinates can be recovered in
one scan of the coordinates. If g = 1 (i.e., dense design), then M = K log N.
If g= 1/K or 2/K (i.e., very sparse design), then M = 1.58K log N or M = 1.16K
log N. This means the design matrix can be indeed very sparse at only a minor
inflation of the sample complexity.
Interestingly, as p->1, the required number of measurements is essentially M
= 2.7K log N, provided g= 1/K. It turns out that this result is a general
worst-case bound.
| [
"Ping Li, Cun-Hui Zhang, Tong Zhang",
"['Ping Li' 'Cun-Hui Zhang' 'Tong Zhang']"
] |
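The constants quoted above follow from reading the measurement bound as $M = K/(1-\exp(-gK))\,\log N$, whose prefactor depends only on the product $gK$. A quick check under that assumption:

```python
import numpy as np

# Prefactor 1/(1 - exp(-gK)) for the sparsity levels quoted in the abstract.
for gK, label in [(1.0, "g = 1/K"), (2.0, "g = 2/K")]:
    print(label, "->", round(1.0 / (1.0 - np.exp(-gK)), 3))
# g = 1/K -> 1.582  (the abstract's 1.58 K log N)
# g = 2/K -> 1.157  (the abstract's 1.16 K log N)
```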
cs.LG cs.DS | null | 1401.0247 | null | null | http://arxiv.org/pdf/1401.0247v2 | 2014-07-13T01:51:05Z | 2014-01-01T04:16:21Z | Robust Hierarchical Clustering | One of the most widely used techniques for data clustering is agglomerative
clustering. Such algorithms have been long used across many different fields
ranging from computational biology to social sciences to computer vision in
part because their output is easy to interpret. Unfortunately, it is well
known that many of the classic agglomerative clustering algorithms
are not robust to noise. In this paper we propose and analyze a new robust
algorithm for bottom-up agglomerative clustering. We show that our algorithm
can be used to cluster accurately in cases where the data satisfies a number of
natural properties and where the traditional agglomerative algorithms fail. We
also show how to adapt our algorithm to the inductive setting where our given
data is only a small random sample of the entire data set. Experimental
evaluations on synthetic and real world data sets show that our algorithm
achieves better performance than other hierarchical algorithms in the presence
of noise.
| [
"['Maria-Florina Balcan' 'Yingyu Liang' 'Pramod Gupta']",
"Maria-Florina Balcan, Yingyu Liang, Pramod Gupta"
] |
cs.IR cs.LG | null | 1401.0255 | null | null | http://arxiv.org/pdf/1401.0255v1 | 2014-01-01T06:45:58Z | 2014-01-01T06:45:58Z | Modeling Attractiveness and Multiple Clicks in Sponsored Search Results | Click models are an important tool for leveraging user feedback, and are used
by commercial search engines for surfacing relevant search results. However,
existing click models are lacking in two aspects. First, they do not share
information across search results when computing attractiveness. Second, they
assume that users interact with the search results sequentially. Based on our
analysis of the click logs of a commercial search engine, we observe that the
sequential scan assumption does not always hold, especially for sponsored
search results. To overcome the above two limitations, we propose a new click
model. Our key insight is that sharing information across search results helps
in identifying important words or key-phrases which can then be used to
accurately compute attractiveness of a search result. Furthermore, we argue
that the click probability of a position as well as its attractiveness changes
during a user session and depends on the user's past click experience. Our
model seamlessly incorporates the effect of externalities (quality of other
search results displayed in response to a user query), user fatigue, as well as
pre and post-click relevance of a sponsored search result. We propose an
efficient one-pass inference scheme and empirically evaluate the performance of
our model via extensive experiments using the click logs of a large commercial
search engine.
| [
"['Dinesh Govindaraj' 'Tao Wang' 'S. V. N. Vishwanathan']",
"Dinesh Govindaraj, Tao Wang, S.V.N. Vishwanathan"
] |
cs.LG stat.ML | null | 1401.0304 | null | null | http://arxiv.org/pdf/1401.0304v2 | 2014-10-22T17:59:50Z | 2014-01-01T16:28:19Z | Learning without Concentration | We obtain sharp bounds on the performance of Empirical Risk Minimization
performed in a convex class and with respect to the squared loss, without
assuming that class members and the target are bounded functions or have
rapidly decaying tails.
Rather than resorting to a concentration-based argument, the method used here
relies on a `small-ball' assumption and thus holds for classes consisting of
heavy-tailed functions and for heavy-tailed targets.
The resulting estimates scale correctly with the `noise level' of the
problem, and when applied to the classical, bounded scenario, always improve
the known bounds.
| [
"Shahar Mendelson",
"['Shahar Mendelson']"
] |
cs.LG | null | 1401.0362 | null | null | http://arxiv.org/pdf/1401.0362v3 | 2015-07-13T06:11:18Z | 2014-01-02T03:12:28Z | EigenGP: Gaussian Process Models with Adaptive Eigenfunctions | Gaussian processes (GPs) provide a nonparametric representation of functions.
However, classical GP inference suffers from high computational cost for big
data. In this paper, we propose a new Bayesian approach, EigenGP, that learns
both basis dictionary elements--eigenfunctions of a GP prior--and prior
precisions in a sparse finite model. It is well known that, among all
orthogonal basis functions, eigenfunctions can provide the most compact
representation. Unlike other sparse Bayesian finite models where the basis
function has a fixed form, our eigenfunctions live in a reproducing kernel
Hilbert space as a finite linear combination of kernel functions. We learn the
dictionary elements--eigenfunctions--and the prior precisions over these
elements as well as all the other hyperparameters from data by maximizing the
model marginal likelihood. We explore computational linear algebra to simplify
the gradient computation significantly. Our experimental results demonstrate
improved predictive performance of EigenGP over alternative sparse GP methods
as well as the relevance vector machine.
| [
"['Hao Peng' 'Yuan Qi']",
"Hao Peng and Yuan Qi"
] |
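For context, the snippet below shows the standard Nyström construction, which builds approximate kernel eigenfunctions from a small set of inducing points. It is a related finite eigenfeature model, not EigenGP itself, which additionally learns the eigenfunctions and prior precisions by maximizing the marginal likelihood; the kernel, lengthscale, and sizes here are arbitrary.

```python
import numpy as np

def rbf(X, Z, ell=1.0):
    # squared-exponential kernel between two point sets
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 1))
Z = X[rng.choice(len(X), 20, replace=False)]        # inducing points
Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
vals, vecs = np.linalg.eigh(Kzz)
Phi = rbf(X, Z) @ vecs / np.sqrt(np.maximum(vals, 1e-12))  # eigenfeatures
K_approx = Phi @ Phi.T    # low-rank surrogate for the full kernel matrix
```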
cs.LG stat.ML | null | 1401.0376 | null | null | http://arxiv.org/pdf/1401.0376v1 | 2014-01-02T07:32:01Z | 2014-01-02T07:32:01Z | Generalization Bounds for Representative Domain Adaptation | In this paper, we propose a novel framework to analyze the theoretical
properties of the learning process for a representative type of domain
adaptation, which combines data from multiple sources and one target (or
briefly called representative domain adaptation). In particular, we use the
integral probability metric to measure the difference between the distributions
of two domains and meanwhile compare it with the H-divergence and the
discrepancy distance. We develop the Hoeffding-type, the Bennett-type and the
McDiarmid-type deviation inequalities for multiple domains respectively, and
then present the symmetrization inequality for representative domain
adaptation. Next, we use the derived inequalities to obtain the Hoeffding-type
and the Bennett-type generalization bounds respectively, both of which are
based on the uniform entropy number. Moreover, we present the generalization
bounds based on the Rademacher complexity. Finally, we analyze the asymptotic
convergence and the rate of convergence of the learning process for
representative domain adaptation. We discuss the factors that affect the
asymptotic behavior of the learning process, and the numerical experiments
support our theoretical findings as well. Meanwhile, we give a comparison with
the existing results of domain adaptation and the classical results under the
same-distribution assumption.
| [
"['Chao Zhang' 'Lei Zhang' 'Wei Fan' 'Jieping Ye']",
"Chao Zhang, Lei Zhang, Wei Fan, Jieping Ye"
] |
cs.CL cs.LG | null | 1401.0509 | null | null | http://arxiv.org/pdf/1401.0509v3 | 2014-03-07T23:31:02Z | 2013-12-20T17:08:26Z | Zero-Shot Learning for Semantic Utterance Classification | We propose a novel zero-shot learning method for semantic utterance
classification (SUC). It learns a classifier $f: X \to Y$ for problems where
none of the semantic categories $Y$ are present in the training set. The
framework uncovers the link between categories and utterances using a semantic
space. We show that this semantic space can be learned by deep neural networks
trained on large amounts of search engine query log data. More precisely, we
propose a novel method that can learn discriminative semantic features without
supervision. It uses the zero-shot learning framework to guide the learning of
the semantic features. We demonstrate the effectiveness of the zero-shot
semantic learning algorithm on the SUC dataset collected by (Tur, 2012).
Furthermore, we achieve state-of-the-art results by combining the semantic
features with a supervised method.
| [
"Yann N. Dauphin, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck",
"['Yann N. Dauphin' 'Gokhan Tur' 'Dilek Hakkani-Tur' 'Larry Heck']"
] |
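Stripped to its core, the zero-shot framework above classifies an utterance by proximity to category embeddings in a shared semantic space. The sketch below shows only that decision rule; `embed` is a hypothetical stand-in for the deep network the paper trains on query-log data.

```python
import numpy as np

def embed(text, dim=64):
    # Placeholder embedding; a real system would use the learned network.
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

categories = ["restaurants", "weather", "flights"]   # unseen at training time
C = np.stack([embed(c) for c in categories])

def classify(utterance):
    u = embed(utterance)
    return categories[int(np.argmax(C @ u))]  # cosine similarity (unit norms)
```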
cs.PL cs.LG stat.ML | null | 1401.0514 | null | null | http://arxiv.org/pdf/1401.0514v2 | 2014-06-20T08:12:20Z | 2014-01-02T19:35:31Z | Structured Generative Models of Natural Source Code | We study the problem of building generative models of natural source code
(NSC); that is, source code written and understood by humans. Our primary
contribution is to describe a family of generative models for NSC that have
three key properties: First, they incorporate both sequential and hierarchical
structure. Second, we learn a distributed representation of source code
elements. Finally, they integrate closely with a compiler, which allows
leveraging compiler logic and abstractions when building structure into the
model. We also develop an extension that includes more complex structure,
refining how the model generates identifier tokens based on what variables are
currently in scope. Our models can be learned efficiently, and we show
empirically that including appropriate structure greatly improves the models,
measured by the probability of generating test programs.
| [
"['Chris J. Maddison' 'Daniel Tarlow']",
"Chris J. Maddison and Daniel Tarlow"
] |
cs.DS cs.LG stat.ML | null | 1401.0579 | null | null | http://arxiv.org/pdf/1401.0579v1 | 2014-01-03T02:52:17Z | 2014-01-03T02:52:17Z | More Algorithms for Provable Dictionary Learning | In dictionary learning, also known as sparse coding, the algorithm is given
samples of the form $y = Ax$ where $x\in \mathbb{R}^m$ is an unknown random
sparse vector and $A$ is an unknown dictionary matrix in $\mathbb{R}^{n\times
m}$ (usually $m > n$, which is the overcomplete case). The goal is to learn $A$
and $x$. This problem has been studied in neuroscience, machine learning,
vision, and image processing. In practice it is solved by heuristic algorithms,
and provable algorithms seemed hard to find. Recently, provable algorithms were
found that work if the unknown feature vector $x$ is $\sqrt{n}$-sparse or even
sparser. Spielman et al. \cite{DBLP:journals/jmlr/SpielmanWW12} did this for
dictionaries where $m=n$; Arora et al. \cite{AGM} gave an algorithm for
overcomplete ($m >n$) and incoherent matrices $A$; and Agarwal et al.
\cite{DBLP:journals/corr/AgarwalAN13} handled a similar case but with weaker
guarantees.
This raised the problem of designing provable algorithms that allow sparsity
$\gg \sqrt{n}$ in the hidden vector $x$. The current paper designs algorithms
that allow sparsity up to $n/\mathrm{poly}(\log n)$. They work for a class of matrices
where features are individually recoverable, a new notion identified in this
paper that may motivate further work.
The algorithms run in quasipolynomial time because they use limited
enumeration.
| [
"['Sanjeev Arora' 'Aditya Bhaskara' 'Rong Ge' 'Tengyu Ma']",
"Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma"
] |
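For a concrete picture of the setup $y = Ax$, the snippet below generates data from a random overcomplete dictionary and fits scikit-learn's `DictionaryLearning`, i.e. the kind of heuristic baseline the abstract contrasts with provable algorithms; all sizes are arbitrary.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n, m, k, samples = 20, 30, 3, 500              # overcomplete: m > n
A = rng.standard_normal((n, m))                # ground-truth dictionary
X = np.zeros((m, samples))
for j in range(samples):                       # k-sparse coefficient vectors
    idx = rng.choice(m, size=k, replace=False)
    X[idx, j] = rng.standard_normal(k)
Y = (A @ X).T                                  # sklearn expects samples as rows

learner = DictionaryLearning(n_components=m, transform_algorithm="omp",
                             transform_n_nonzero_coefs=k, random_state=0)
codes = learner.fit_transform(Y)               # estimated sparse x's
A_hat = learner.components_.T                  # estimated dictionary
```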
cs.IT cs.LG math.IT math.PR stat.CO stat.ML | null | 1401.0711 | null | null | http://arxiv.org/pdf/1401.0711v2 | 2014-03-21T07:29:34Z | 2014-01-03T20:30:01Z | Computing Entropy Rate Of Symbol Sources & A Distribution-free Limit
Theorem | Entropy rate of sequential data-streams naturally quantifies the complexity
of the generative process. Thus entropy rate fluctuations could be used as a
tool to recognize dynamical perturbations in signal sources, and could
potentially be carried out without explicit background noise characterization.
However, state-of-the-art algorithms to estimate the entropy rate have markedly
slow convergence, making such entropic approaches non-viable in practice. We
present here a fundamentally new approach to estimate entropy rates, which is
demonstrated to converge significantly faster in terms of input data lengths,
and is shown to be effective in diverse applications ranging from the
estimation of the entropy rate of English texts to the estimation of complexity
of chaotic dynamical systems. Additionally, the convergence rate of entropy
estimates does not follow from any standard limit theorem, and reported
algorithms fail to provide any confidence bounds on the computed values.
Exploiting a connection to the theory of probabilistic automata, we establish a
convergence rate of $O(\log \vert s \vert/\sqrt[3]{\vert s \vert})$ as a
function of the input length $\vert s \vert$, which then yields explicit
uncertainty estimates, as well as required data lengths to satisfy
pre-specified confidence bounds.
| [
"Ishanu Chattopadhyay and Hod Lipson",
"['Ishanu Chattopadhyay' 'Hod Lipson']"
] |
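As a point of reference, the classical plug-in estimator whose slow convergence the abstract criticizes takes differences of empirical block entropies, $\hat h = H_k - H_{k-1}$. A minimal sketch, with the block length chosen arbitrarily:

```python
from collections import Counter
from math import log2

def block_entropy(s, k):
    # empirical Shannon entropy of k-grams, in bits
    counts = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def entropy_rate_estimate(s, k=6):
    return block_entropy(s, k) - block_entropy(s, k - 1)

print(entropy_rate_estimate("01" * 5000))   # periodic source -> near 0 bits
```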
cs.LG cs.AI cs.CE cs.IT math.IT stat.ML | null | 1401.0742 | null | null | http://arxiv.org/pdf/1401.0742v1 | 2014-01-03T22:15:17Z | 2014-01-03T22:15:17Z | Data Smashing | Investigation of the underlying physics or biology from empirical data
requires a quantifiable notion of similarity - when do two observed data sets
indicate nearly identical generating processes, and when do they not? The
discriminating characteristics to look for in data are often determined by
heuristics designed by experts, $e.g.$, distinct shapes of "folded" lightcurves
may be used as "features" to classify variable stars, while determination of
pathological brain states might require a Fourier analysis of brainwave
activity. Finding good features is non-trivial. Here, we propose a universal
solution to this problem: we delineate a principle for quantifying similarity
between sources of arbitrary data streams, without a priori knowledge, features
or training. We uncover an algebraic structure on a space of symbolic models
for quantized data, and show that such stochastic generators may be added and
uniquely inverted; and that a model and its inverse always sum to the generator
of flat white noise. Therefore, every data stream has an anti-stream: data
generated by the inverse model. Similarity between two streams, then, is the
degree to which one, when summed to the other's anti-stream, mutually
annihilates all statistical structure to noise. We call this data smashing. We
present diverse applications, including disambiguation of brainwaves pertaining
to epileptic seizures, detection of anomalous cardiac rhythms, and
classification of astronomical objects from raw photometry. In our examples,
the data smashing principle, without access to any domain knowledge, meets or
exceeds the performance of specialized algorithms tuned by domain experts.
| [
"Ishanu Chattopadhyay and Hod Lipson",
"['Ishanu Chattopadhyay' 'Hod Lipson']"
] |
cs.CV cs.LG | 10.1109/TKDE.2013.126 | 1401.0764 | null | null | http://arxiv.org/abs/1401.0764v1 | 2014-01-04T02:05:35Z | 2014-01-04T02:05:35Z | Context-Aware Hypergraph Construction for Robust Spectral Clustering | Spectral clustering is a powerful tool for unsupervised data analysis. In
this paper, we propose a context-aware hypergraph similarity measure (CAHSM),
which leads to robust spectral clustering in the case of noisy data. We
construct three types of hypergraph---the pairwise hypergraph, the
k-nearest-neighbor (kNN) hypergraph, and the high-order over-clustering
hypergraph. The pairwise hypergraph captures the pairwise similarity of data
points; the kNN hypergraph captures the neighborhood of each point; and the
clustering hypergraph encodes high-order contexts within the dataset. By
combining the affinity information from these three hypergraphs, the CAHSM
algorithm is able to explore the intrinsic topological information of the
dataset. Therefore, data clustering using CAHSM tends to be more robust.
Considering the intra-cluster compactness and the inter-cluster separability of
vertices, we further design a discriminative hypergraph partitioning criterion
(DHPC). Using both CAHSM and DHPC, a robust spectral clustering algorithm is
developed. Theoretical analysis and experimental evaluation demonstrate the
effectiveness and robustness of the proposed algorithm.
| [
"Xi Li, Weiming Hu, Chunhua Shen, Anthony Dick, Zhongfei Zhang",
"['Xi Li' 'Weiming Hu' 'Chunhua Shen' 'Anthony Dick' 'Zhongfei Zhang']"
] |
cs.LG cs.CV | null | 1401.0767 | null | null | http://arxiv.org/pdf/1401.0767v1 | 2014-01-04T02:28:48Z | 2014-01-04T02:28:48Z | From Kernel Machines to Ensemble Learning | Ensemble methods such as boosting combine multiple learners to obtain better
prediction than could be obtained from any individual learner. Here we propose
a principled framework for directly constructing ensemble learning methods from
kernel methods. Unlike previous studies showing the equivalence between
boosting and support vector machines (SVMs), which needs a translation
procedure, we show that it is possible to design boosting-like procedure to
solve the SVM optimization problems.
In other words, it is possible to design ensemble methods directly from SVM
without any middle procedure.
This finding not only enables us to design new ensemble learning methods
directly from kernel methods, but also makes it possible to take advantage of
those highly-optimized fast linear SVM solvers for ensemble learning.
We exemplify this framework for designing binary ensemble learning as well as
new multi-class ensemble learning methods.
Experimental results demonstrate the flexibility and usefulness of the
proposed framework.
| [
"['Chunhua Shen' 'Fayao Liu']",
"Chunhua Shen, Fayao Liu"
] |
math.OC cs.LG | null | 1401.0843 | null | null | http://arxiv.org/pdf/1401.0843v1 | 2014-01-04T19:57:26Z | 2014-01-04T19:57:26Z | Least Squares Policy Iteration with Instrumental Variables vs. Direct
Policy Search: Comparison Against Optimal Benchmarks Using Energy Storage | This paper studies approximate policy iteration (API) methods which use
least-squares Bellman error minimization for policy evaluation. We address
several of its enhancements, namely, Bellman error minimization using
instrumental variables, least-squares projected Bellman error minimization, and
projected Bellman error minimization using instrumental variables. We prove
that for a general discrete-time stochastic control problem, Bellman error
minimization using instrumental variables is equivalent to both variants of
projected Bellman error minimization. An alternative to these API methods is
direct policy search based on knowledge gradient. The practical performance of
these three approximate dynamic programming methods is then investigated in
the context of an application in energy storage, integrated with an
intermittent wind energy supply to fully serve a stochastic time-varying
electricity demand. We create a library of test problems using real-world data
and apply value iteration to find their optimal policies. These benchmarks are
then used to compare the developed policies. Our analysis indicates that API
with instrumental variables Bellman error minimization prominently outperforms
API with least-squares Bellman error minimization. However, these approaches
underperform our direct policy search implementation.
| [
"Warren R. Scott, Warren B. Powell, Somayeh Moazehi",
"['Warren R. Scott' 'Warren B. Powell' 'Somayeh Moazehi']"
] |
stat.ME cs.LG stat.ML | null | 1401.0852 | null | null | http://arxiv.org/pdf/1401.0852v2 | 2015-01-04T23:34:01Z | 2014-01-04T23:27:48Z | Concave Penalized Estimation of Sparse Gaussian Bayesian Networks | We develop a penalized likelihood estimation framework to estimate the
structure of Gaussian Bayesian networks from observational data. In contrast to
recent methods which accelerate the learning problem by restricting the search
space, our main contribution is a fast algorithm for score-based structure
learning which does not restrict the search space in any way and works on
high-dimensional datasets with thousands of variables. Our use of concave
regularization, as opposed to the more popular $\ell_0$ (e.g. BIC) penalty, is
new. Moreover, we provide theoretical guarantees which generalize existing
asymptotic results when the underlying distribution is Gaussian. Most notably,
our framework does not require the existence of a so-called faithful DAG
representation, and as a result the theory must handle the inherent
nonidentifiability of the estimation problem in a novel way. Finally, as a
matter of independent interest, we provide a comprehensive comparison of our
approach to several standard structure learning methods using open-source
packages developed for the R language. Based on these experiments, we show that
our algorithm is significantly faster than other competing methods while
obtaining higher sensitivity with comparable false discovery rates for
high-dimensional data. In particular, the total runtime for our method to
generate a solution path of 20 estimates for DAGs with 8000 nodes is around one
hour.
| [
"['Bryon Aragam' 'Qing Zhou']",
"Bryon Aragam and Qing Zhou"
] |
math.OC cs.LG math.NA stat.CO stat.ML | null | 1401.0869 | null | null | http://arxiv.org/pdf/1401.0869v3 | 2016-10-31T17:30:58Z | 2014-01-05T06:37:50Z | Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative
Reweighted Singular Value Minimization | In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized $vector$
minimization problem are equivalent to those of an SPQN regularized $matrix$
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized $vector$
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular, we show that any accumulation point of the sequence
generated by the IRSVM methods is a first-order stationary point of the
problems. Our computational results demonstrate that the IRSVM methods
generally outperform some recently developed state-of-the-art methods in terms
of solution quality and/or speed.
| [
"['Zhaosong Lu' 'Yong Zhang']",
"Zhaosong Lu and Yong Zhang"
] |
cs.LG cs.SI stat.ML | 10.1109/TSP.2014.2332441 | 1401.0887 | null | null | http://arxiv.org/abs/1401.0887v1 | 2014-01-05T12:17:51Z | 2014-01-05T12:17:51Z | Learning parametric dictionaries for graph signals | In sparse signal representation, the choice of a dictionary often involves a
tradeoff between two desirable properties -- the ability to adapt to specific
signal data and a fast implementation of the dictionary. To sparsely represent
signals residing on weighted graphs, an additional design challenge is to
incorporate the intrinsic geometric structure of the irregular data domain into
the atoms of the dictionary. In this work, we propose a parametric dictionary
learning algorithm to design data-adapted, structured dictionaries that
sparsely represent graph signals. In particular, we model graph signals as
combinations of overlapping local patterns. We impose the constraint that each
dictionary is a concatenation of subdictionaries, with each subdictionary being
a polynomial of the graph Laplacian matrix, representing a single pattern
translated to different areas of the graph. The learning algorithm adapts the
patterns to a training set of graph signals. Experimental results on both
synthetic and real datasets demonstrate that the dictionaries learned by the
proposed algorithm are competitive with and often better than unstructured
dictionaries learned by state-of-the-art numerical learning algorithms in terms
of sparse approximation of graph signals. In contrast to the unstructured
dictionaries, however, the dictionaries learned by the proposed algorithm
feature localized atoms and can be implemented in a computationally efficient
manner in signal processing tasks such as compression, denoising, and
classification.
| [
"Dorina Thanou, David I Shuman, Pascal Frossard",
"['Dorina Thanou' 'David I Shuman' 'Pascal Frossard']"
] |
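The structural constraint described above, each subdictionary a polynomial of the graph Laplacian whose column $j$ is the same pattern translated to vertex $j$, can be written in a few lines. In this sketch the graph, the polynomial degree, and the coefficients are fixed placeholders rather than learned values.

```python
import numpy as np

rng = np.random.default_rng(0)
W = (rng.random((8, 8)) < 0.4).astype(float)   # random undirected graph
W = np.triu(W, 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W                 # combinatorial Laplacian

coeffs = [1.0, -0.5, 0.1]                      # alpha_0 + alpha_1 L + alpha_2 L^2
D = sum(a * np.linalg.matrix_power(L, p) for p, a in enumerate(coeffs))
atom_at_vertex_3 = D[:, 3]                     # localized atom centered at node 3
```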
cs.CV cs.LG stat.ML | null | 1401.0898 | null | null | http://arxiv.org/pdf/1401.0898v1 | 2014-01-05T14:52:27Z | 2014-01-05T14:52:27Z | Feature Selection Using Classifier in High Dimensional Data | Feature selection is frequently used as a pre-processing step to machine
learning. It is a process of choosing a subset of original features so that the
feature space is optimally reduced according to a certain evaluation criterion.
The central objective of this paper is to reduce the dimension of the data by
finding a small set of important features which can give good classification
performance. We have applied the filter and wrapper approaches with the
classifiers QDA and LDA, respectively. A widely used filter method for
bioinformatics data applies a univariate criterion separately to each feature,
assuming that there is no interaction between features; we then applied the
Sequential Feature Selection method. Experimental results show that the filter
approach gives better performance with respect to the misclassification error rate.
| [
"Vijendra Singh and Shivani Pathak",
"['Vijendra Singh' 'Shivani Pathak']"
] |
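The two-stage pipeline described above, a univariate filter followed by a wrapper around a classifier, maps onto standard scikit-learn components as sketched below; the synthetic dataset, the use of LDA inside the wrapper, and the selection sizes are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                       f_classif)

X, y = make_classification(n_samples=200, n_features=100, n_informative=5,
                           random_state=0)
X_filt = SelectKBest(f_classif, k=20).fit_transform(X, y)   # filter stage
sfs = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                n_features_to_select=5)     # wrapper stage
X_sel = sfs.fit_transform(X_filt, y)
```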
cs.LG | null | 1401.1123 | null | null | http://arxiv.org/pdf/1401.1123v1 | 2014-01-06T15:53:25Z | 2014-01-06T15:53:25Z | Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits | Motivated by applications in energy management, this paper presents the
Multi-Armed Risk-Aware Bandit (MARAB) algorithm. With the goal of limiting the
exploration of risky arms, MARAB takes as arm quality its conditional value at
risk. When the user-supplied risk level goes to 0, the arm quality tends toward
the essential infimum of the arm distribution density, and MARAB tends toward
the MIN multi-armed bandit algorithm, aimed at the arm with maximal minimal
value. As a first contribution, this paper presents a theoretical analysis of
the MIN algorithm under mild assumptions, establishing its robustness
relative to UCB. The analysis is supported by extensive experimental
validation of MIN and MARAB compared to UCB and state-of-art risk-aware MAB
algorithms on artificial and real-world problems.
| [
"Nicolas Galichet (LRI, INRIA Saclay - Ile de France), Mich\\`ele Sebag\n (LRI, INRIA Saclay - Ile de France), Olivier Teytaud (LRI, INRIA Saclay - Ile\n de France)",
"['Nicolas Galichet' 'Michèle Sebag' 'Olivier Teytaud']"
] |
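MARAB's arm quality is the conditional value at risk of the arm's rewards, i.e. the mean of the worst $\alpha$-fraction of observations. A minimal empirical version, omitting MARAB's exploration term, might look like this:

```python
import numpy as np

def cvar(rewards, alpha=0.1):
    # mean of the worst alpha-fraction of observed rewards
    r = np.sort(np.asarray(rewards))
    k = max(1, int(np.ceil(alpha * len(r))))
    return r[:k].mean()

safe_arm = np.random.default_rng(0).normal(1.0, 0.1, 500)
risky_arm = np.random.default_rng(1).normal(1.2, 2.0, 500)
print(cvar(safe_arm), cvar(risky_arm))   # the risky arm scores far worse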
cs.AI cs.GT cs.LG cs.MA q-bio.NC | null | 1401.1465 | null | null | http://arxiv.org/pdf/1401.1465v1 | 2014-01-07T18:28:20Z | 2014-01-07T18:28:20Z | Cortical prediction markets | We investigate cortical learning from the perspective of mechanism design.
First, we show that discretizing standard models of neurons and synaptic
plasticity leads to rational agents maximizing simple scoring rules. Second,
our main result is that the scoring rules are proper, implying that neurons
faithfully encode expected utilities in their synaptic weights and encode
high-scoring outcomes in their spikes. Third, with this foundation in hand, we
propose a biologically plausible mechanism whereby neurons backpropagate
incentives which allows them to optimize their usefulness to the rest of
cortex. Finally, experiments show that networks that backpropagate incentives
can learn simple tasks.
| [
"['David Balduzzi']",
"David Balduzzi"
] |
stat.ML cs.CV cs.LG physics.data-an stat.AP | null | 1401.1489 | null | null | http://arxiv.org/pdf/1401.1489v1 | 2014-01-07T20:16:05Z | 2014-01-07T20:16:05Z | Key point selection and clustering of swimmer coordination through
Sparse Fisher-EM | To investigate whether optimal swimmer learning/teaching strategies exist, this
work introduces a two-level clustering in order to analyze temporal dynamics of
motor learning in breaststroke swimming. Each level has been performed through
Sparse Fisher-EM, an unsupervised framework which can be applied efficiently on
large and correlated datasets. The induced sparsity selects key points of the
coordination phase without any prior knowledge.
| [
"['John Komar' 'Romain Hérault' 'Ludovic Seifert']",
"John Komar and Romain H\\'erault and Ludovic Seifert"
] |
cs.LG cs.AI cs.SY | null | 1401.1549 | null | null | http://arxiv.org/pdf/1401.1549v2 | 2014-06-28T04:24:47Z | 2014-01-08T00:49:01Z | Optimal Demand Response Using Device Based Reinforcement Learning | Demand response (DR) for residential and small commercial buildings is
estimated to account for as much as 65% of the total energy savings potential
of DR, and previous work shows that a fully automated Energy Management System
(EMS) is a necessary prerequisite to DR in these areas. In this paper, we
propose a novel EMS formulation for DR problems in these sectors. Specifically,
we formulate a fully automated EMS's rescheduling problem as a reinforcement
learning (RL) problem, and argue that this RL problem can be approximately
solved by decomposing it over device clusters. Compared with existing
formulations, our new formulation (1) does not require explicitly modeling the
user's dissatisfaction with job rescheduling, (2) enables the EMS to
self-initiate jobs, (3) allows the user to initiate more flexible requests and
(4) has a computational complexity linear in the number of devices. We also
demonstrate the simulation results of applying Q-learning, one of the most
popular and classical RL algorithms, to a representative example.
| [
"Zheng Wen, Daniel O'Neill and Hamid Reza Maei",
"['Zheng Wen' \"Daniel O'Neill\" 'Hamid Reza Maei']"
] |
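The Q-learning the paper applies per device cluster reduces to the classical tabular update $Q(s,a) \leftarrow Q(s,a) + \alpha\,\big(r + \gamma \max_{a'} Q(s',a') - Q(s,a)\big)$. A minimal sketch, with placeholder state and action sizes:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # one tabular Q-learning step
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

Q = np.zeros((24, 3))   # e.g. hour-of-day states, 3 scheduling actions
q_update(Q, s=8, a=1, r=-0.5, s_next=9)
```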
cs.LG cs.AI | 10.1016/j.eneco.2013.07.028 | 1401.1560 | null | null | http://arxiv.org/abs/1401.1560v1 | 2014-01-08T01:59:53Z | 2014-01-08T01:59:53Z | Beyond One-Step-Ahead Forecasting: Evaluation of Alternative
Multi-Step-Ahead Forecasting Models for Crude Oil Prices | An accurate prediction of crude oil prices over long future horizons is
challenging and of great interest to governments, enterprises, and investors.
This paper proposes a revised hybrid model built upon empirical mode
decomposition (EMD) based on the feed-forward neural network (FNN) modeling
framework incorporating the slope-based method (SBM), which is capable of
capturing the complex dynamic of crude oil prices. Three commonly used
multi-step-ahead prediction strategies proposed in the literature, including
the iterated strategy, the direct strategy, and the MIMO (multiple-input multiple-output)
strategy, are examined and compared, and practical considerations for the
selection of a prediction strategy for multi-step-ahead forecasting relating to
crude oil prices are identified. The weekly data from the WTI (West Texas
Intermediate) crude oil spot price are used to compare the performance of the
alternative models under the EMD-SBM-FNN modeling framework with selected
counterparts. The quantitative and comprehensive assessments are performed on
the basis of prediction accuracy and computational cost. The results obtained
in this study indicate that the proposed EMD-SBM-FNN model using the MIMO
strategy is the best in terms of prediction accuracy with accredited
computational load.
| [
"Tao Xiong, Yukun Bao, Zhongyi Hu",
"['Tao Xiong' 'Yukun Bao' 'Zhongyi Hu']"
] |
cs.LG cs.CV stat.ML | null | 1401.1605 | null | null | http://arxiv.org/pdf/1401.1605v2 | 2014-04-14T08:04:46Z | 2014-01-08T08:47:44Z | Fast nonparametric clustering of structured time-series | In this publication, we combine two Bayesian non-parametric models: the
Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP
model is to introduce a variation on the GP prior which enables us to model
structured time-series data, i.e. data containing groups where we wish to model
inter- and intra-group variability. Our innovation in the DP model is an
implementation of a new fast collapsed variational inference procedure which
enables us to optimize our variational approximation significantly faster than
standard VB approaches. In a biological time series application we show how our
model better captures salient features of the data, leading to better
consistency with existing biological classifications, while the associated
inference algorithm provides a twofold speed-up over EM-based variational
inference.
| [
"['James Hensman' 'Magnus Rattray' 'Neil D. Lawrence']",
"James Hensman and Magnus Rattray and Neil D. Lawrence"
] |
cs.CL cs.LG stat.ML | null | 1401.1803 | null | null | http://arxiv.org/pdf/1401.1803v1 | 2014-01-08T20:36:57Z | 2014-01-08T20:36:57Z | Learning Multilingual Word Representations using a Bag-of-Words
Autoencoder | Recent work on learning multilingual word representations usually relies on
the use of word-level alignments (e.g. inferred with the help of GIZA++)
between translated sentences, in order to align the word embeddings in
different languages. In this workshop paper, we investigate an autoencoder
model for learning multilingual word representations that does without such
word-level alignments. The autoencoder is trained to reconstruct the
bag-of-words representation of a given sentence from an encoded representation
extracted from its translation. We evaluate our approach on a multilingual
document classification task, where labeled data is available only for one
language (e.g. English) while classification must be performed in a different
language (e.g. French). In our experiments, we observe that our method compares
favorably with a previously proposed method that exploits word-level alignments
to learn word representations.
| [
"['Stanislas Lauly' 'Alex Boulanger' 'Hugo Larochelle']",
"Stanislas Lauly, Alex Boulanger, Hugo Larochelle"
] |
stat.ML cs.IT cs.LG cs.NA math.IT | null | 1401.1842 | null | null | http://arxiv.org/pdf/1401.1842v1 | 2014-01-08T21:39:03Z | 2014-01-08T21:39:03Z | Robust Large Scale Non-negative Matrix Factorization using Proximal
Point Algorithm | A robust algorithm for non-negative matrix factorization (NMF) is presented
in this paper with the purpose of dealing with large-scale data, where the
separability assumption is satisfied. In particular, we modify the Linear
Programming (LP) algorithm of [9] by introducing a reduced set of constraints
for exact NMF. In contrast to the previous approaches, the proposed algorithm
does not require knowledge of the factorization rank (extreme rays [3] or
topics [7]). Furthermore, motivated by a similar problem arising in the context
of metabolic network analysis [13], we consider an entirely different regime
where the number of extreme rays or topics can be much larger than the
dimension of the data vectors. The performance of the algorithm on different
synthetic data sets is provided.
| [
"['Jason Gejie Liu' 'Shuchin Aeron']",
"Jason Gejie Liu and Shuchin Aeron"
] |
cs.LG | null | 1401.1880 | null | null | http://arxiv.org/pdf/1401.1880v2 | 2015-03-25T18:40:46Z | 2014-01-09T01:50:09Z | DJ-MC: A Reinforcement-Learning Agent for Music Playlist Recommendation | In recent years, there has been growing focus on the study of automated
recommender systems. Music recommendation systems serve as a prominent domain
for such works, both from an academic and a commercial perspective. A
fundamental aspect of music perception is that music is experienced in temporal
context and in sequence. In this work we present DJ-MC, a novel
reinforcement-learning framework for music recommendation that does not
recommend songs individually but rather song sequences, or playlists, based on
a model of preferences for both songs and song transitions. The model is
learned online and is uniquely adapted for each listener. To reduce exploration
time, DJ-MC exploits user feedback to initialize a model, which it subsequently
updates by reinforcement. We evaluate our framework with human participants
using both real song and playlist data. Our results indicate that DJ-MC's
ability to recommend sequences of songs provides a significant improvement over
more straightforward approaches, which do not take transitions into account.
| [
"Elad Liebman, Maytal Saar-Tsechansky and Peter Stone",
"['Elad Liebman' 'Maytal Saar-Tsechansky' 'Peter Stone']"
] |
cs.LG stat.ML | null | 1401.1895 | null | null | http://arxiv.org/pdf/1401.1895v1 | 2014-01-09T05:16:35Z | 2014-01-09T05:16:35Z | Efficient unimodality test in clustering by signature testing | This paper provides a new unimodality test with application in hierarchical
clustering methods. The proposed method, denoted the signature test (Sigtest),
transforms the data based on its statistics. The transformed data has much
smaller variation compared to the original data and can be evaluated in a
simple proposed unimodality test. Compared with the existing unimodality tests,
Sigtest is more accurate in detecting overlapped clusters and has much
lower computational complexity. Simulation results demonstrate the efficiency of
this statistical test on both real and synthetic data sets.
| [
"Mahdi Shahbaba and Soosan Beheshti",
"['Mahdi Shahbaba' 'Soosan Beheshti']"
] |
cs.CE cs.LG q-fin.ST | 10.1016/j.knosys.2013.10.012 | 1401.1916 | null | null | http://arxiv.org/abs/1401.1916v1 | 2014-01-09T07:58:06Z | 2014-01-09T07:58:06Z | Multiple-output support vector regression with a firefly algorithm for
interval-valued stock price index forecasting | Highly accurate interval forecasting of a stock price index is fundamental to
successfully making a profit when making investment decisions, by providing a
range of values rather than a point estimate. In this study, we investigate the
possibility of forecasting an interval-valued stock price index series over
short and long horizons using multi-output support vector regression (MSVR).
Furthermore, this study proposes a firefly algorithm (FA)-based approach, built
on the established MSVR, for determining the parameters of MSVR (abbreviated as
FA-MSVR). Three globally traded broad market indices are used to compare the
performance of the proposed FA-MSVR method with selected counterparts. The
quantitative and comprehensive assessments are performed on the basis of
statistical criteria, economic criteria, and computational cost. In terms of
statistical criteria, we compare the out-of-sample forecasting using
goodness-of-forecast measures and testing approaches. In terms of economic
criteria, we assess the relative forecast performance with a simple trading
strategy. The results obtained in this study indicate that the proposed FA-MSVR
method is a promising alternative for forecasting interval-valued financial
time series.
| [
"Tao Xiong, Yukun Bao, Zhongyi Hu",
"['Tao Xiong' 'Yukun Bao' 'Zhongyi Hu']"
] |
cs.LG cs.AI cs.NE stat.ML | 10.1016/j.neucom.2013.01.027 | 1401.1926 | null | null | http://arxiv.org/abs/1401.1926v1 | 2014-01-09T08:41:55Z | 2014-01-09T08:41:55Z | A PSO and Pattern Search based Memetic Algorithm for SVMs Parameters
Optimization | Addressing the issue of SVM parameter optimization, this study proposes an
efficient memetic algorithm based on the Particle Swarm Optimization algorithm
(PSO) and Pattern Search (PS). In the proposed memetic algorithm, PSO is
responsible for exploring the search space and detecting the potential regions
with optimal solutions, while Pattern Search (PS) is used to perform effective
exploitation of the potential regions obtained by PSO. Moreover, a novel
probabilistic selection strategy is proposed to select appropriate individuals
from the current population to undergo local refinement, maintaining a good
balance between exploration and exploitation. Experimental results confirm that
local refinement with PS and the proposed selection strategy are effective, and
demonstrate the effectiveness and robustness of the proposed PSO-PS-based MA
for SVM parameter optimization.
| [
"Yukun Bao, Zhongyi Hu, Tao Xiong",
"['Yukun Bao' 'Zhongyi Hu' 'Tao Xiong']"
] |
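As a rough illustration of the exploration phase described in the abstract above, the sketch below runs a plain PSO over (log2 C, log2 gamma) for scikit-learn's SVC, with cross-validation accuracy as the fitness. The pattern-search refinement and the probabilistic selection strategy of the paper are omitted, and the swarm settings and search ranges are assumptions:
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(p):
    # p = (log2 C, log2 gamma); fitness = mean 5-fold CV accuracy.
    clf = SVC(C=2.0 ** p[0], gamma=2.0 ** p[1])
    return cross_val_score(clf, X, y, cv=5).mean()

n, iters, dim = 12, 20, 2
pos = rng.uniform(-5, 5, size=(n, dim))            # particle positions
vel = np.zeros_like(pos)
pbest = pos.copy()                                 # per-particle best positions
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()         # swarm-wide best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5, 5)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("best (log2 C, log2 gamma):", gbest, "CV accuracy:", pbest_fit.max())
```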
cs.LG stat.ML | null | 1401.1974 | null | null | http://arxiv.org/pdf/1401.1974v4 | 2014-01-29T01:54:57Z | 2014-01-09T12:08:07Z | Bayesian Nonparametric Multilevel Clustering with Group-Level Contexts | We present a Bayesian nonparametric framework for multilevel clustering which
utilizes group-level context information to simultaneously discover
low-dimensional structures of the group contents and partitions groups into
clusters. Using the Dirichlet process as the building block, our model
constructs a product base-measure with a nested structure to accommodate
content and context observations at multiple levels. The proposed model
possesses properties that link the nested Dirichlet processes (nDP) and the
Dirichlet process mixture models (DPM) in an interesting way: integrating out
all contents results in the DPM over contexts, whereas integrating out
group-specific contexts results in the nDP mixture over content variables. We
provide a Polya-urn view of the model and an efficient collapsed Gibbs
inference procedure. Extensive experiments on real-world datasets demonstrate
the advantage of utilizing context information via our model in both text and
image domains.
| [
"Vu Nguyen, Dinh Phung, XuanLong Nguyen, Svetha Venkatesh, Hung Hai Bui",
"['Vu Nguyen' 'Dinh Phung' 'XuanLong Nguyen' 'Svetha Venkatesh'\n 'Hung Hai Bui']"
] |
cs.GT cs.LG stat.ML | null | 1401.2086 | null | null | http://arxiv.org/pdf/1401.2086v2 | 2015-07-02T20:09:17Z | 2014-01-08T12:47:15Z | Actor-Critic Algorithms for Learning Nash Equilibria in N-player
General-Sum Games | We consider the problem of finding stationary Nash equilibria (NE) in a
finite discounted general-sum stochastic game. We first generalize a non-linear
optimization problem from Filar and Vrieze [2004] to an $N$-player setting and
break down this problem into simpler sub-problems that ensure there is no
Bellman error for a given state and an agent. We then provide a
characterization of solution points of these sub-problems that correspond to
Nash equilibria of the underlying game and for this purpose, we derive a set of
necessary and sufficient SG-SP (Stochastic Game - Sub-Problem) conditions.
Using these conditions, we develop two actor-critic algorithms: OFF-SGSP
(model-based) and ON-SGSP (model-free). Both algorithms use a critic that
estimates the value function for a fixed policy and an actor that performs
descent in the policy space using a descent direction that avoids local minima.
We establish that both algorithms converge, in self-play, to the equilibria of
a certain ordinary differential equation (ODE), whose stable limit points
coincide with stationary NE of the underlying general-sum stochastic game. On a
single state non-generic game (see Hart and Mas-Colell [2005]) as well as on a
synthetic two-player game setup with $810,000$ states, we establish that
ON-SGSP consistently outperforms the NashQ [Hu and Wellman, 2003] and FFQ
[Littman, 2001] algorithms.
| [
"['H. L Prasad' 'L. A. Prashanth' 'Shalabh Bhatnagar']",
"H.L Prasad, L.A.Prashanth and Shalabh Bhatnagar"
] |
cs.NE cs.LG | null | 1401.2224 | null | null | http://arxiv.org/pdf/1401.2224v1 | 2014-01-10T03:39:28Z | 2014-01-10T03:39:28Z | A Comparative Study of Reservoir Computing for Temporal Signal
Processing | Reservoir computing (RC) is a novel approach to time series prediction using
recurrent neural networks. In RC, an input signal perturbs the intrinsic
dynamics of a medium called a reservoir. A readout layer is then trained to
reconstruct a target output from the reservoir's state. The multitude of RC
architectures and evaluation metrics poses a challenge to both practitioners
and theorists who study the task-solving performance and computational power of
RC. In addition, in contrast to traditional computation models, the reservoir
is a dynamical system in which computation and memory are inseparable, and
therefore hard to analyze. Here, we compare echo state networks (ESN), a
popular RC architecture, with tapped-delay lines (DL) and nonlinear
autoregressive exogenous (NARX) networks, which we use to model systems with
limited computation and limited memory respectively. We compare the performance
of the three systems while computing three common benchmark time series:
Hénon map, NARMA10, and NARMA20. We find that the role of the reservoir in
the reservoir computing paradigm goes beyond providing a memory of the past
inputs. The DL and the NARX network have higher memorization capability, but
fall short of the generalization power of the ESN.
| [
"Alireza Goudarzi, Peter Banda, Matthew R. Lakin, Christof Teuscher,\n Darko Stefanovic",
"['Alireza Goudarzi' 'Peter Banda' 'Matthew R. Lakin' 'Christof Teuscher'\n 'Darko Stefanovic']"
] |
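A minimal echo state network along the lines compared above can be written in a few lines of NumPy: a fixed random reservoir plus a ridge-regression readout. This sketch predicts a toy sine series one step ahead rather than the Hénon/NARMA benchmarks, and the reservoir size, spectral radius, and ridge strength are illustrative assumptions:
```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step-ahead prediction task: a sine wave.
u = np.sin(np.arange(2000) * 0.2)[:, None]
target = np.roll(u, -1, axis=0)                   # next value of the series

n_res, rho = 200, 0.9                             # reservoir size, spectral radius
Win = rng.uniform(-0.5, 0.5, (n_res, 1))          # fixed input weights
W = rng.normal(size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius rho

def run_reservoir(inputs):
    states, x = np.zeros((len(inputs), n_res)), np.zeros(n_res)
    for t, ut in enumerate(inputs):
        x = np.tanh(Win @ ut + W @ x)             # reservoir state update
        states[t] = x
    return states

S = run_reservoir(u)
train, test = slice(100, 1500), slice(1500, 1999) # discard a washout period
# Ridge-regression readout: the only trained part of an ESN.
Wout = np.linalg.solve(S[train].T @ S[train] + 1e-6 * np.eye(n_res),
                       S[train].T @ target[train])
pred = S[test] @ Wout
print("test MSE:", np.mean((pred - target[test]) ** 2))
```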
cs.NA cs.LG stat.ML | null | 1401.2288 | null | null | http://arxiv.org/pdf/1401.2288v3 | 2014-02-02T08:13:58Z | 2014-01-10T11:24:35Z | Extension of Sparse Randomized Kaczmarz Algorithm for Multiple
Measurement Vectors | The Kaczmarz algorithm is popular for iteratively solving an overdetermined
system of linear equations. The traditional Kaczmarz algorithm can approximate
the solution in a few sweeps through the equations, but a randomized version of
the Kaczmarz algorithm was shown to converge exponentially, at a rate
independent of the number of equations. Recently, an algorithm for finding a
sparse solution to a linear system of equations has been proposed based on the
weighted randomized Kaczmarz algorithm. These algorithms solve the single
measurement vector problem; however, there are applications where multiple
measurements are available. In this work, the objective is to solve the
multiple measurement vector problem with common sparse support by modifying the
randomized Kaczmarz algorithm. We have also modeled the problem of face
recognition from video as a multiple measurement vector problem and solved it
using our proposed technique. We have compared the proposed algorithm with the
state-of-the-art spectral projected gradient algorithm for multiple measurement
vectors on both real and synthetic datasets. The Monte Carlo simulations
confirm that our proposed algorithm has better recovery and convergence rates
than the MMV version of the spectral projected gradient algorithm under
fairness constraints.
| [
"Hemant Kumar Aggarwal and Angshul Majumdar",
"['Hemant Kumar Aggarwal' 'Angshul Majumdar']"
] |
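For reference, here is a sketch of the single-measurement-vector randomized Kaczmarz iteration (Strohmer-Vershynin row sampling) that the abstract above builds on; the MMV extension with common sparse support is the paper's contribution and is not reproduced here:
```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Randomized Kaczmarz: at each step project the iterate onto the
    hyperplane of one equation, picking row i with probability
    proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto row i
    return x

# Consistent overdetermined system.
rng = np.random.default_rng(1)
A = rng.normal(size=(300, 50))
x_true = rng.normal(size=50)
x_hat = randomized_kaczmarz(A, A @ x_true)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```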
stat.ML cs.LG | null | 1401.2304 | null | null | http://arxiv.org/pdf/1401.2304v1 | 2014-01-10T12:23:47Z | 2014-01-10T12:23:47Z | Lasso and equivalent quadratic penalized models | The least absolute shrinkage and selection operator (lasso) and ridge
regression usually produce different estimates, although the input, loss
function and parameterization of the penalty are identical. In this paper we
look for ridge and lasso models with identical solution set.
It turns out that the lasso model with shrink vector $\lambda$ and a
quadratic penalized model with shrink matrix given by the outer product of
$\lambda$ with itself are equivalent, in the sense that they have equal
solutions. To achieve this, we have to restrict the estimates to be positive.
This doesn't limit the area of application since we can easily decompose every
estimate into a positive and a negative part. The resulting problem can be
solved with a non-negative least squares algorithm.
Besides this quadratic penalized model, an augmented regression model with
positively bounded estimates is developed which is also equivalent to the lasso
model, but is probably faster to solve.
| [
"['Stefan Hummelsheim']",
"Stefan Hummelsheim"
] |
cs.LG | null | 1401.2411 | null | null | http://arxiv.org/abs/1401.2411v2 | 2018-05-16T13:16:05Z | 2014-01-10T17:36:23Z | Clustering, Coding, and the Concept of Similarity | This paper develops a theory of clustering and coding which combines a
geometric model with a probabilistic model in a principled way. The geometric
model is a Riemannian manifold with a Riemannian metric, ${g}_{ij}({\bf x})$,
which we interpret as a measure of dissimilarity. The probabilistic model
consists of a stochastic process with an invariant probability measure which
matches the density of the sample input data. The link between the two models
is a potential function, $U({\bf x})$, and its gradient, $\nabla U({\bf x})$.
We use the gradient to define the dissimilarity metric, which guarantees that
our measure of dissimilarity will depend on the probability measure. Finally,
we use the dissimilarity metric to define a coordinate system on the embedded
Riemannian manifold, which gives us a low-dimensional encoding of our original
data.
| [
"['L. Thorne McCarty']",
"L. Thorne McCarty"
] |
cs.LG stat.CO stat.ML | 10.3182/20120711-3-BE-2027.00312 | 1401.2490 | null | null | http://arxiv.org/abs/1401.2490v1 | 2014-01-11T00:54:27Z | 2014-01-11T00:54:27Z | An Online Expectation-Maximisation Algorithm for Nonnegative Matrix
Factorisation Models | In this paper we formulate the nonnegative matrix factorisation (NMF) problem
as a maximum likelihood estimation problem for hidden Markov models and propose
online expectation-maximisation (EM) algorithms to estimate the NMF and the
other unknown static parameters. We also propose a sequential Monte Carlo
approximation of our online EM algorithm. We show the performance of the
proposed method with two numerical examples.
| [
"Sinan Yildirim, A. Taylan Cemgil, Sumeetpal S. Singh",
"['Sinan Yildirim' 'A. Taylan Cemgil' 'Sumeetpal S. Singh']"
] |
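For context, a standard batch NMF baseline with Lee-Seung multiplicative updates looks as follows; this is not the paper's online EM algorithm, just the conventional batch alternative that online methods are designed to avoid:
```python
import numpy as np

def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
    """Classic batch NMF via Lee-Seung multiplicative updates for the
    Frobenius-norm objective ||V - WH||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, W held fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, H held fixed
    return W, H

V = np.abs(np.random.default_rng(2).normal(size=(40, 60)))
W, H = nmf_multiplicative(V, rank=5)
print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```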
cs.LG stat.ML | 10.1016/j.neucom.2013.09.010 | 1401.2504 | null | null | http://arxiv.org/abs/1401.2504v1 | 2014-01-11T06:14:53Z | 2014-01-11T06:14:53Z | Multi-Step-Ahead Time Series Prediction using Multiple-Output Support
Vector Regression | Accurate time series prediction over long future horizons is challenging and
of great interest to both practitioners and academics. As a well-known
intelligent algorithm, the standard formulation of Support Vector Regression
(SVR) can be applied to multi-step-ahead time series prediction, relying only
on either the iterated strategy or the direct strategy. This study proposes a
novel multiple-step-ahead time series prediction approach which employs
multiple-output support vector regression (M-SVR) with a multiple-input
multiple-output (MIMO) prediction strategy. In addition, the ranking of three
leading prediction strategies with SVR is comparatively examined, providing
practical implications for the selection of the prediction strategy for
multi-step-ahead forecasting when taking SVR as the modeling technique. The
proposed approach is validated with simulated and real datasets. The
quantitative and comprehensive assessments are performed on the basis of
prediction accuracy and computational cost. The results indicate that: 1) the
M-SVR using the MIMO strategy achieves the most accurate forecasts with an
acceptable computational load, 2) the standard SVR using the direct strategy
achieves the second most accurate forecasts, but with the most expensive
computational cost, and 3) the standard SVR using the iterated strategy is the
worst in terms of prediction accuracy, but with the least computational cost.
| [
"['Yukun Bao' 'Tao Xiong' 'Zhongyi Hu']",
"Yukun Bao, Tao Xiong, Zhongyi Hu"
] |
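The iterated and direct strategies contrasted in the abstract above can be sketched with scikit-learn's single-output SVR as follows; the MIMO strategy itself requires a genuinely multi-output regressor (M-SVR), which scikit-learn does not provide, so it is not shown. The toy series, lag order, and horizon are assumptions:
```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
series = np.sin(np.arange(400) * 0.1) + 0.05 * rng.normal(size=400)
lags, horizon = 8, 5

# Lag-embedding: predict the next `horizon` values from the last `lags` values.
X = np.array([series[t - lags:t] for t in range(lags, len(series) - horizon)])
Y = np.array([series[t:t + horizon] for t in range(lags, len(series) - horizon)])

# Direct strategy: train one SVR per forecast step h.
direct = [SVR().fit(X, Y[:, h]) for h in range(horizon)]

# Iterated strategy: one one-step SVR, fed back its own predictions.
one_step = SVR().fit(X, Y[:, 0])
def iterate(last_window):
    window, preds = list(last_window), []
    for _ in range(horizon):
        p = one_step.predict(np.array(window[-lags:])[None, :])[0]
        preds.append(p)
        window.append(p)                     # feed the prediction back in
    return preds

x_new = series[-lags:]
print("direct  :", [m.predict(x_new[None, :])[0] for m in direct])
print("iterated:", iterate(x_new))
```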
q-bio.QM cs.CE cs.LG | 10.1371/journal.pcbi.1003500 | 1401.2668 | null | null | http://arxiv.org/abs/1401.2668v2 | 2014-01-15T01:55:17Z | 2014-01-12T20:41:08Z | MRFalign: Protein Homology Detection through Alignment of Markov Random
Fields | Sequence-based protein homology detection has been extensively studied and so
far the most sensitive method is based upon comparison of protein sequence
profiles, which are derived from multiple sequence alignment (MSA) of sequence
homologs in a protein family. A sequence profile is usually represented as a
position-specific scoring matrix (PSSM) or an HMM (Hidden Markov Model) and
accordingly PSSM-PSSM or HMM-HMM comparison is used for homolog detection. This
paper presents a new homology detection method MRFalign, consisting of three
key components: 1) a Markov Random Fields (MRF) representation of a protein
family; 2) a scoring function measuring similarity of two MRFs; and 3) an
efficient ADMM (Alternating Direction Method of Multipliers) algorithm aligning
two MRFs. Compared to an HMM, which can only model very short-range residue
correlations, MRFs can model long-range residue interaction patterns and thus
encode information about the global 3D structure of a protein family.
Consequently, MRF-MRF comparison for remote homology detection should be much
more sensitive than HMM-HMM or PSSM-PSSM comparison. Experiments confirm that
MRFalign outperforms several popular HMM or PSSM-based methods in terms of both
alignment accuracy and remote homology detection and that MRFalign works
particularly well for mainly beta proteins. For example, tested on the
benchmark SCOP40 (8353 proteins) for homology detection, PSSM-PSSM and HMM-HMM
succeed on 48% and 52% of proteins, respectively, at superfamily level, and on
15% and 27% of proteins, respectively, at fold level. In contrast, MRFalign
succeeds on 57.3% and 42.5% of proteins at superfamily and fold level,
respectively. This study implies that long-range residue interaction patterns
are very helpful for sequence-based homology detection. The software is
available for download at http://raptorx.uchicago.edu/download/.
| [
"['Jianzhu Ma' 'Sheng Wang' 'Zhiyong Wang' 'Jinbo Xu']",
"Jianzhu Ma, Sheng Wang, Zhiyong Wang and Jinbo Xu"
] |
cs.CE cs.LG | 10.1166/jbic.2013.1052 | 1401.2688 | null | null | http://arxiv.org/abs/1401.2688v1 | 2014-01-13T00:38:52Z | 2014-01-13T00:38:52Z | PSMACA: An Automated Protein Structure Prediction Using MACA (Multiple
Attractor Cellular Automata) | Protein structure prediction from amino acid sequences has gained
remarkable attention in recent years. Even though there are some prediction
techniques addressing this problem, the approximate accuracy in predicting the
protein structure is close to 75%. An automated procedure based on MACA
(Multiple Attractor Cellular Automata) was developed for predicting the
structure of the protein. Most of the existing approaches are sequential,
classify the input into four major classes, and are designed for similar
sequences. PSMACA is designed to identify ten classes from sequences that share
twilight-zone similarity and identity with the training sequences. This method
also predicts three states (helix, strand, and coil) for the structure. Our
comprehensive design considers 10 feature selection methods and 4 classifiers
to develop MACA (Multiple Attractor Cellular Automata) based classifiers that
are built for each of the ten classes. Testing the proposed classifier on
twilight-zone and 1-high-similarity benchmark datasets against over three
dozen modern competing predictors shows that PSMACA provides the best overall
accuracy, which ranges between 77% and 88.7% depending on the dataset.
| [
"['Pokkuluri Kiran Sree' 'Inamupudi Ramesh Babu' 'SSSN Usha Devi N']",
"Pokkuluri Kiran Sree, Inamupudi Ramesh Babu, SSSN Usha Devi N"
] |
stat.ML cs.LG | null | 1401.2753 | null | null | http://arxiv.org/pdf/1401.2753v2 | 2015-01-02T09:17:48Z | 2014-01-13T08:47:44Z | Stochastic Optimization with Importance Sampling | Uniform sampling of training data has been commonly used in traditional
stochastic optimization algorithms such as Proximal Stochastic Gradient Descent
(prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although
uniform sampling can guarantee that the sampled stochastic quantity is an
unbiased estimate of the corresponding true quantity, the resulting estimator
may have a rather high variance, which negatively affects the convergence of
the underlying optimization procedure. In this paper we study stochastic
optimization with importance sampling, which improves the convergence rate by
reducing the stochastic variance. Specifically, we study prox-SGD (actually,
stochastic mirror descent) with importance sampling and prox-SDCA with
importance sampling. For prox-SGD, instead of adopting uniform sampling
throughout the training process, the proposed algorithm employs importance
sampling to minimize the variance of the stochastic gradient. For prox-SDCA,
the proposed importance sampling scheme aims to achieve higher expected dual
value at each dual coordinate ascent step. We provide extensive theoretical
analysis to show that the convergence rates with the proposed importance
sampling methods can be significantly improved under suitable conditions both
for prox-SGD and for prox-SDCA. Experiments are provided to verify the
theoretical analysis.
| [
"['Peilin Zhao' 'Tong Zhang']",
"Peilin Zhao, Tong Zhang"
] |
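The variance-reduction argument above can be checked numerically: for the unbiased estimator $g_i/(n p_i)$ of the full gradient, sampling example $i$ with probability proportional to its gradient norm minimizes the estimator's variance. A sketch evaluated at a fixed iterate (not the paper's prox-SGD/prox-SDCA algorithms), with a synthetic least-squares problem as an assumption:
```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.normal(size=(n, d)) * rng.uniform(0.1, 10, size=(n, 1))  # uneven rows
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)                                # evaluation point
grads = (X @ w - y)[:, None] * X               # per-example gradients g_i
full = grads.mean(axis=0)                      # the full gradient

def variance(probs):
    # E||g_i/(n p_i)||^2 - ||full||^2 for the estimator g_i/(n p_i), i ~ probs.
    est_sq = np.sum(grads ** 2, axis=1) / (n * probs) ** 2
    return np.sum(probs * est_sq) - np.sum(full ** 2)

uniform = np.full(n, 1.0 / n)
importance = np.linalg.norm(grads, axis=1)     # optimal choice: p_i ∝ ||g_i||
importance /= importance.sum()
print("variance, uniform sampling   :", variance(uniform))
print("variance, importance sampling:", variance(importance))
```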
cs.LG q-bio.QM stat.ML | null | 1401.2838 | null | null | http://arxiv.org/pdf/1401.2838v1 | 2014-01-13T14:02:37Z | 2014-01-13T14:02:37Z | GPS-ABC: Gaussian Process Surrogate Approximate Bayesian Computation | Scientists often express their understanding of the world through a
computationally demanding simulation program. Analyzing the posterior
distribution of the parameters given observations (the inverse problem) can be
extremely challenging. The Approximate Bayesian Computation (ABC) framework is
the standard statistical tool to handle these likelihood-free problems, but it
requires a very large number of simulations. In this work we develop two
new ABC sampling algorithms that significantly reduce the number of simulations
necessary for posterior inference. Both algorithms use confidence estimates for
the accept probability in the Metropolis Hastings step to adaptively choose the
number of necessary simulations. Our GPS-ABC algorithm stores the information
obtained from every simulation in a Gaussian process which acts as a surrogate
function for the simulated statistics. Experiments on a challenging realistic
biological problem illustrate the potential of these algorithms.
| [
"['Edward Meeds' 'Max Welling']",
"Edward Meeds and Max Welling"
] |
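For orientation, a vanilla ABC rejection sampler for a toy Gaussian-mean problem is sketched below; GPS-ABC's contribution is to replace most of these simulator calls with a Gaussian-process surrogate and adaptive Metropolis-Hastings decisions, which are not reproduced here. The prior, summary statistic, and tolerance are assumptions:
```python
import numpy as np

rng = np.random.default_rng(0)

# "Simulator": draws a dataset given parameter theta (Gaussian, unknown mean).
def simulate(theta, n=50):
    return rng.normal(theta, 1.0, size=n)

observed = simulate(2.0)
obs_stat = observed.mean()                 # summary statistic of the data

def abc_rejection(n_samples=20000, eps=0.1):
    """Vanilla ABC: keep prior draws whose simulated summary statistic
    lands within eps of the observed one."""
    thetas = rng.uniform(-10, 10, size=n_samples)   # prior draws
    kept = [th for th in thetas
            if abs(simulate(th).mean() - obs_stat) < eps]
    return np.array(kept)

posterior = abc_rejection()
print(len(posterior), "accepted; posterior mean:", posterior.mean())
```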
cs.NE cs.LG | null | 1401.2949 | null | null | http://arxiv.org/pdf/1401.2949v1 | 2014-01-10T12:46:56Z | 2014-01-10T12:46:56Z | Exploiting generalisation symmetries in accuracy-based learning
classifier systems: An initial study | Modern learning classifier systems typically exploit a niched genetic
algorithm to facilitate rule discovery. When used for reinforcement learning,
such rules represent generalisations over the state-action-reward space. Whilst
encouraging maximal generality, the niching can potentially hinder the
formation of generalisations in the state space which are symmetrical, or very
similar, over different actions. This paper introduces the use of rules which
contain multiple actions, maintaining accuracy and reward metrics for each
action. It is shown that problem symmetries can be exploited, improving
performance, whilst not degrading performance when symmetries are reduced.
| [
"['Larry Bull']",
"Larry Bull"
] |
stat.ML cs.LG | null | 1401.2955 | null | null | http://arxiv.org/pdf/1401.2955v1 | 2014-01-13T19:04:13Z | 2014-01-13T19:04:13Z | Binary Classifier Calibration: Bayesian Non-Parametric Approach | A set of probabilistic predictions is well calibrated if the events that are
predicted to occur with probability p do in fact occur about p fraction of the
time. Well calibrated predictions are particularly important when machine
learning models are used in decision analysis. This paper presents two new
non-parametric methods for calibrating outputs of binary classification models:
a method based on the Bayes optimal selection and a method based on the
Bayesian model averaging. The advantage of these methods is that they are
independent of the algorithm used to learn a predictive model, and they can be
applied in a post-processing step, after the model is learned. This makes them
applicable to a wide variety of machine learning models and methods. These
calibration methods, as well as other methods, are tested on a variety of
datasets in terms of both discrimination and calibration performance. The
results show the methods either outperform or are comparable in performance to
the state-of-the-art calibration methods.
| [
"['Mahdi Pakdaman Naeini' 'Gregory F. Cooper' 'Milos Hauskrecht']",
"Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht"
] |
cs.SE cs.LG | null | 1401.3069 | null | null | http://arxiv.org/pdf/1401.3069v2 | 2014-01-15T18:02:47Z | 2014-01-14T05:01:58Z | Use Case Point Approach Based Software Effort Estimation using Various
Support Vector Regression Kernel Methods | The job of software effort estimation is a critical one in the early stages
of the software development life cycle when the details of requirements are
usually not clearly identified. Various optimization techniques help in
improving the accuracy of effort estimation. The Support Vector Regression
(SVR) is one of several different soft-computing techniques that help in
getting optimal estimated values. The idea of SVR is based upon the computation
of a linear regression function in a high dimensional feature space where the
input data are mapped via a nonlinear function. Further, SVR kernel methods
can be applied to transform the input data and then, based on these
transformations, an optimal boundary between the possible outputs can be
obtained. The main objective of the research work carried out in this paper is
to estimate the software effort using use case point approach. The use case
point approach relies on the use case diagram to estimate the size and effort
of software projects. Then, an attempt has been made to optimize the results
obtained from use case point analysis using various SVR kernel methods to
achieve better prediction accuracy.
| [
"Shashank Mouli Satapathy, Santanu Kumar Rath",
"['Shashank Mouli Satapathy' 'Santanu Kumar Rath']"
] |
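A rough sketch of the kernel comparison described above, using scikit-learn's SVR on synthetic stand-in data; the paper uses real project data and use case point analysis, neither of which is reproduced here, and the data-generating relation and C value are assumptions:
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical stand-in: use case points -> effort (person-hours).
rng = np.random.default_rng(0)
ucp = rng.uniform(50, 400, size=(120, 1))                  # use case points
effort = 20 * ucp[:, 0] ** 0.9 + rng.normal(0, 150, 120)   # person-hours

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=1000.0))
    mae = -cross_val_score(model, ucp, effort, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{kernel:8s} MAE: {mae:.1f} person-hours")
```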
cs.IT cs.LG math.IT | null | 1401.3148 | null | null | http://arxiv.org/pdf/1401.3148v1 | 2014-01-14T11:35:19Z | 2014-01-14T11:35:19Z | Dynamic Topology Adaptation and Distributed Estimation for Smart Grids | This paper presents new dynamic topology adaptation strategies for
distributed estimation in smart grid systems. We propose a dynamic exhaustive
search-based topology adaptation algorithm and a dynamic sparsity-inspired
topology adaptation algorithm, which can exploit the topology of smart grids
with poor-quality links and obtain performance gains. We incorporate an
optimized combining rule, named the Hastings rule, into our proposed dynamic
topology adaptation algorithms. Compared with existing works in the
literature on distributed estimation, the proposed algorithms have a better
convergence rate and significantly improve the system performance. The
performance of the proposed algorithms is compared with that of existing
algorithms in the IEEE 14-bus system.
| [
"S. Xu, R. C. de Lamare and H. V. Poor",
"['S. Xu' 'R. C. de Lamare' 'H. V. Poor']"
] |
math.OC cs.LG cs.SY | null | 1401.3198 | null | null | http://arxiv.org/pdf/1401.3198v1 | 2014-01-14T14:40:29Z | 2014-01-14T14:40:29Z | Online Markov decision processes with Kullback-Leibler control cost | This paper considers an online (real-time) control problem that involves an
agent performing a discrete-time random walk over a finite state space. The
agent's action at each time step is to specify the probability distribution for
the next state given the current state. Following the set-up of Todorov, the
state-action cost at each time step is a sum of a state cost and a control cost
given by the Kullback-Leibler (KL) divergence between the agent's next-state
distribution and that determined by some fixed passive dynamics. The online
aspect of the problem is due to the fact that the state cost functions are
generated by a dynamic environment, and the agent learns the current state cost
only after selecting an action. An explicit construction of a computationally
efficient strategy with small regret (i.e., expected difference between its
actual total cost and the smallest cost attainable using noncausal knowledge of
the state costs) under mild regularity conditions is presented, along with a
demonstration of the performance of the proposed strategy on a simulated target
tracking problem. A number of new results on Markov decision processes with KL
control cost are also obtained.
| [
"['Peng Guan' 'Maxim Raginsky' 'Rebecca Willett']",
"Peng Guan and Maxim Raginsky and Rebecca Willett"
] |
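The Todorov-style state-action cost above has a convenient closed form for a single step: minimizing $\mathbb{E}_u[v(x')] + \mathrm{KL}(u \| \bar{p})$ over next-state distributions $u$ gives $u^*(x') \propto \bar{p}(x')\,e^{-v(x')}$, with optimal value $-\log \sum_{x'} \bar{p}(x') e^{-v(x')}$. A sketch verifying this identity numerically (not the paper's online algorithm):
```python
import numpy as np

def kl_optimal_policy(pbar_row, v):
    """Closed-form minimizer of E_u[v(x')] + KL(u || pbar): u* ∝ pbar * exp(-v)."""
    w = pbar_row * np.exp(-v)
    return w / w.sum()

def kl(u, p):
    mask = u > 0
    return float(np.sum(u[mask] * np.log(u[mask] / p[mask])))

rng = np.random.default_rng(0)
pbar = rng.dirichlet(np.ones(5))   # passive next-state distribution, one state
v = rng.uniform(0.0, 3.0, size=5)  # cost-to-go of the candidate next states

u_star = kl_optimal_policy(pbar, v)
# Sanity check: the attained objective equals -log sum(pbar * exp(-v)).
print(u_star @ v + kl(u_star, pbar), -np.log(np.sum(pbar * np.exp(-v))))
```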
cs.LG cs.SI stat.ML | null | 1401.3258 | null | null | http://arxiv.org/pdf/1401.3258v1 | 2014-01-14T17:07:01Z | 2014-01-14T17:07:01Z | A Boosting Approach to Learning Graph Representations | Learning the right graph representation from noisy, multisource data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. We explore the extent to which different
quality measurements yield graph representations that are suitable for
community detection. We then present empirical results on both synthetic and
real datasets demonstrating the utility of this framework. Our framework leads
to suitable global graph representations from quality measurements local to
each edge. Finally, we discuss future extensions and theoretical considerations
of learning useful graph representations from weak feedback in general
application settings.
| [
"['Rajmonda Caceres' 'Kevin Carter' 'Jeremy Kun']",
"Rajmonda Caceres, Kevin Carter, Jeremy Kun"
] |
cs.CL cs.LG cs.SD | null | 1401.3322 | null | null | http://arxiv.org/pdf/1401.3322v1 | 2013-12-24T08:45:07Z | 2013-12-24T08:45:07Z | A Subband-Based SVM Front-End for Robust ASR | This work proposes a novel support vector machine (SVM) based robust
automatic speech recognition (ASR) front-end that operates on an ensemble of
the subband components of high-dimensional acoustic waveforms. The key issues
of selecting the appropriate SVM kernels for classification in frequency
subbands and the combination of individual subband classifiers using ensemble
methods are addressed. The proposed front-end is compared with state-of-the-art
ASR front-ends in terms of robustness to additive noise and linear filtering.
Experiments performed on the TIMIT phoneme classification task demonstrate the
benefits of the proposed subband based SVM front-end: it outperforms the
standard cepstral front-end in the presence of noise and linear filtering for
signal-to-noise ratio (SNR) below 12 dB. A combination of the proposed
front-end with a conventional front-end such as MFCC yields further
improvements over the individual front ends across the full range of noise
levels.
| [
"['Jibran Yousafzai' 'Zoran Cvetkovic' 'Peter Sollich' 'Matthew Ager']",
"Jibran Yousafzai and Zoran Cvetkovic and Peter Sollich and Matthew\n Ager"
] |
cs.CL cs.LG | null | 1401.3372 | null | null | http://arxiv.org/pdf/1401.3372v1 | 2014-01-14T22:10:30Z | 2014-01-14T22:10:30Z | Learning Language from a Large (Unannotated) Corpus | A novel approach to the fully automated, unsupervised extraction of
dependency grammars and associated syntax-to-semantic-relationship mappings
from large text corpora is described. The suggested approach builds on the
authors' prior work with the Link Grammar, RelEx and OpenCog systems, as well
as on a number of prior papers and approaches from the statistical language
learning literature. If successful, this approach would enable the mining of
all the information needed to power a natural language comprehension and
generation system, directly from a large, unannotated corpus.
| [
"Linas Vepstas and Ben Goertzel",
"['Linas Vepstas' 'Ben Goertzel']"
] |
stat.ML cs.LG | null | 1401.3390 | null | null | http://arxiv.org/pdf/1401.3390v1 | 2014-01-14T23:52:16Z | 2014-01-14T23:52:16Z | Binary Classifier Calibration: Non-parametric approach | Accurate calibration of learned probabilistic predictive models is critical
for many practical prediction and decision-making tasks. There are two main
categories of methods for building calibrated classifiers. One approach is to
develop methods for learning probabilistic models that are well-calibrated, ab
initio. The other approach is to use some post-processing methods for
transforming the output of a classifier to be well calibrated, as for example
histogram binning, Platt scaling, and isotonic regression. One advantage of the
post-processing approach is that it can be applied to any existing
probabilistic classification model that was constructed using any
machine-learning method.
In this paper, we first introduce two measures for evaluating how well a
classifier is calibrated. We prove three theorems showing that using a simple
histogram binning post-processing method, it is possible to make a classifier
well calibrated while retaining its discrimination capability. Also, by
casting the histogram binning method as a density-based non-parametric binary
classifier, we can extend it using two simple non-parametric density estimation
methods. We demonstrate the performance of the proposed calibration methods on
synthetic and real datasets. Experimental results show that the proposed
methods either outperform or are comparable to existing calibration methods.
| [
"['Mahdi Pakdaman Naeini' 'Gregory F. Cooper' 'Milos Hauskrecht']",
"Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht"
] |
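A minimal sketch of histogram binning, the post-processing method the theorems above analyze; the bin count and the fallback for empty bins are assumptions, and the paper's density-based extensions are not shown:
```python
import numpy as np

def fit_histogram_binning(scores, labels, n_bins=10):
    """Calibrated probability of a bin = empirical positive rate inside it."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    bin_prob = np.empty(n_bins)
    for b in range(n_bins):
        in_bin = idx == b
        # Fallback for empty bins (an assumption): use the bin midpoint.
        bin_prob[b] = (labels[in_bin].mean() if in_bin.any()
                       else (edges[b] + edges[b + 1]) / 2)
    return edges, bin_prob

def calibrate(scores, edges, bin_prob):
    idx = np.clip(np.digitize(scores, edges) - 1, 0, len(bin_prob) - 1)
    return bin_prob[idx]

# Toy data where the raw score is overconfident: the true positive
# probability is 0.5 + 0.3*(raw - 0.5), flatter than the raw score itself.
rng = np.random.default_rng(0)
raw = rng.random(5000)
labels = (rng.random(5000) < 0.5 + 0.3 * (raw - 0.5)).astype(float)
edges, bin_prob = fit_histogram_binning(raw, labels)
print(calibrate(np.array([0.05, 0.5, 0.95]), edges, bin_prob))
```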
cs.CV cs.LG stat.ML | null | 1401.3409 | null | null | http://arxiv.org/pdf/1401.3409v3 | 2014-10-23T02:05:18Z | 2014-01-15T02:17:33Z | Low-Rank Modeling and Its Applications in Image Analysis | Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have brought more and more
attention to this topic. In this paper, we review the recent advances in
low-rank modeling, the state-of-the-art algorithms, and related applications in
image analysis. We first give an overview of the concept of low-rank modeling
and challenging problems in this area. Then, we summarize the models and
algorithms for low-rank matrix recovery and illustrate their advantages and
limitations with numerical experiments. Next, we introduce a few applications
of low-rank modeling in the context of image analysis. Finally, we conclude
this paper with some discussions.
| [
"['Xiaowei Zhou' 'Can Yang' 'Hongyu Zhao' 'Weichuan Yu']",
"Xiaowei Zhou, Can Yang, Hongyu Zhao, Weichuan Yu"
] |
cs.LG cs.IR | null | 1401.3413 | null | null | http://arxiv.org/pdf/1401.3413v1 | 2014-01-15T02:39:15Z | 2014-01-15T02:39:15Z | Infinite Mixed Membership Matrix Factorization | Rating and recommendation systems have become a popular application area for
applying a suite of machine learning techniques. Current approaches rely
primarily on probabilistic interpretations and extensions of matrix
factorization, which factorizes a user-item ratings matrix into latent user and
item vectors. Most of these methods fail to model significant variations in
item ratings from otherwise similar users, a phenomenon known as the "Napoleon
Dynamite" effect. Recent efforts have addressed this problem by adding a
contextual bias term to the rating, which captures the mood under which a user
rates an item or the context in which an item is rated by a user. In this work,
we extend this model in a nonparametric sense by learning the optimal number of
moods or contexts from the data, and derive Gibbs sampling inference procedures
for our model. We evaluate our approach on the MovieLens 1M dataset, and show
significant improvements over the optimal parametric baseline, more than twice
the improvements previously encountered for this task. We also extract and
evaluate a DBLP dataset, wherein we predict the number of papers co-authored by
two authors, and present improvements over the parametric baseline on this
alternative domain as well.
| [
"['Avneesh Saluja' 'Mahdi Pakdaman' 'Dongzhen Piao' 'Ankur P. Parikh']",
"Avneesh Saluja, Mahdi Pakdaman, Dongzhen Piao, Ankur P. Parikh"
] |
cs.LG cs.AI | 10.1613/jair.2519 | 1401.3427 | null | null | http://arxiv.org/abs/1401.3427v1 | 2014-01-15T04:42:13Z | 2014-01-15T04:42:13Z | Analogical Dissimilarity: Definition, Algorithms and Two Experiments in
Machine Learning | This paper defines the notion of analogical dissimilarity between four
objects, with a special focus on objects structured as sequences. Firstly, it
studies the case where the four objects have a null analogical dissimilarity,
i.e. are in analogical proportion. Secondly, when one of these objects is
unknown, it gives algorithms to compute it. Thirdly, it tackles the problem of
defining analogical dissimilarity, which is a measure of how far four objects
are from being in analogical proportion. In particular, when objects are
sequences, it gives a definition and an algorithm based on an optimal alignment
of the four sequences. It also gives learning algorithms, i.e. methods to find
the triple of objects in a learning sample which has the least analogical
dissimilarity with a given object. Two practical experiments are described: the
first is a classification problem on benchmarks of binary and nominal data, the
second shows how the generation of sequences by solving analogical equations
enables a handwritten character recognition system to rapidly be adapted to a
new writer.
| [
"['Laurent Miclet' 'Sabri Bayoudh' 'Arnaud Delhay']",
"Laurent Miclet, Sabri Bayoudh, Arnaud Delhay"
] |
cs.LG | 10.1613/jair.2530 | 1401.3429 | null | null | http://arxiv.org/abs/1401.3429v1 | 2014-01-15T04:46:37Z | 2014-01-15T04:46:37Z | Latent Tree Models and Approximate Inference in Bayesian Networks | We propose a novel method for approximate inference in Bayesian networks
(BNs). The idea is to sample data from a BN, learn a latent tree model (LTM)
from the data offline, and when online, make inference with the LTM instead of
the original BN. Because LTMs are tree-structured, inference takes linear time.
At the same time, they can represent complex relationships among leaf nodes and
hence the approximation accuracy is often good. Empirical evidence shows that
our method can achieve good approximation accuracy at low online computational
cost.
| [
"Yi Wang, Nevin L. Zhang, Tao Chen",
"['Yi Wang' 'Nevin L. Zhang' 'Tao Chen']"
] |
cs.AI cs.LG | 10.1613/jair.2540 | 1401.3432 | null | null | http://arxiv.org/abs/1401.3432v1 | 2014-01-15T04:49:23Z | 2014-01-15T04:49:23Z | A Rigorously Bayesian Beam Model and an Adaptive Full Scan Model for
Range Finders in Dynamic Environments | This paper proposes and experimentally validates a Bayesian network model of
a range finder adapted to dynamic environments. All modeling assumptions are
rigorously explained, and all model parameters have a physical interpretation.
This approach results in a transparent and intuitive model. With respect to the
state-of-the-art beam model, this paper: (i) proposes a different functional
form for the probability of range measurements caused by unmodeled objects,
(ii) intuitively explains the discontinuity encountered in the state-of-the-art
beam model, and (iii) reduces the number of model parameters, while maintaining
the same representational power for experimental data. The proposed beam model
is called RBBM, short for Rigorously Bayesian Beam Model. A maximum likelihood
and a variational Bayesian estimator (both based on expectation-maximization)
are proposed to learn the model parameters.
Furthermore, the RBBM is extended to a full scan model in two steps: first,
to a full scan model for static environments and next, to a full scan model for
general, dynamic environments. The full scan model accounts for the dependency
between beams and adapts to the local sample density when using a particle
filter. In contrast to Gaussian-based state-of-the-art models, the proposed
full scan model uses a sample-based approximation. This sample-based
approximation enables handling dynamic environments and capturing
multi-modality, which occurs even in simple static environments.
| [
"['Tinne De Laet' 'Joris De Schutter' 'Herman Bruyninckx']",
"Tinne De Laet, Joris De Schutter, Herman Bruyninckx"
] |
cs.LG | 10.1613/jair.2548 | 1401.3434 | null | null | http://arxiv.org/abs/1401.3434v1 | 2014-01-15T04:50:50Z | 2014-01-15T04:50:50Z | Adaptive Stochastic Resource Control: A Machine Learning Approach | The paper investigates stochastic resource allocation problems with scarce,
reusable resources and non-preemptive, time-dependent, interconnected tasks.
This approach is a natural generalization of several standard resource
management problems, such as scheduling and transportation problems. First,
reactive solutions are considered and defined as control policies of suitably
reformulated Markov decision processes (MDPs). We argue that this reformulation
has several favorable properties: it has finite state and action spaces, it is
aperiodic (hence all policies are proper), and the space of control policies
can be safely restricted. Next, approximate dynamic programming (ADP)
methods, such as fitted Q-learning, are suggested for computing an efficient
control policy. In order to compactly maintain the cost-to-go function, two
representations are studied: hash tables and support vector regression (SVR),
particularly, nu-SVRs. Several additional improvements, such as the application
of limited-lookahead rollout algorithms in the initial phases, action space
decomposition, task clustering and distributed sampling are investigated, too.
Finally, experimental results on both benchmark and industry-related data are
presented.
| [
"['Balázs Csanád Csáji' 'László Monostori']",
"Bal\\'azs Csan\\'ad Cs\\'aji, L\\'aszl\\'o Monostori"
] |
cs.LG cs.AI stat.ML | 10.1613/jair.2587 | 1401.3441 | null | null | http://arxiv.org/abs/1401.3441v1 | 2014-01-15T04:54:14Z | 2014-01-15T04:54:14Z | Transductive Rademacher Complexity and its Applications | We develop a technique for deriving data-dependent error bounds for
transductive learning algorithms based on transductive Rademacher complexity.
Our technique is based on a novel general error bound for transduction in terms
of transductive Rademacher complexity, together with a novel bounding technique
for Rademacher averages for particular algorithms, in terms of their
"unlabeled-labeled" representation. This technique is relevant to many advanced
graph-based transductive algorithms and we demonstrate its effectiveness by
deriving error bounds to three well known algorithms. Finally, we present a new
PAC-Bayesian bound for mixtures of transductive algorithms based on our
Rademacher bounds.
| [
"Ran El-Yaniv, Dmitry Pechyony",
"['Ran El-Yaniv' 'Dmitry Pechyony']"
] |
cs.LG | 10.1613/jair.2602 | 1401.3447 | null | null | http://arxiv.org/abs/1401.3447v1 | 2014-01-15T05:09:07Z | 2014-01-15T05:09:07Z | Anytime Induction of Low-cost, Low-error Classifiers: a Sampling-based
Approach | Machine learning techniques are gaining prevalence in the production of a
wide range of classifiers for complex real-world applications with nonuniform
testing and misclassification costs. The increasing complexity of these
applications poses a real challenge to resource management during learning and
classification. In this work we introduce ACT (anytime cost-sensitive tree
learner), a novel framework for operating in such complex environments. ACT is
an anytime algorithm that allows learning time to be increased in return for
lower classification costs. It builds a tree top-down and exploits additional
time resources to obtain better estimations for the utility of the different
candidate splits. Using sampling techniques, ACT approximates the cost of the
subtree under each candidate split and favors the one with a minimal cost. As a
stochastic algorithm, ACT is expected to be able to escape local minima, into
which greedy methods may be trapped. Experiments with a variety of datasets
were conducted to compare ACT to the state-of-the-art cost-sensitive tree
learners. The results show that for the majority of domains ACT produces
significantly less costly trees. ACT also exhibits good anytime behavior with
diminishing returns.
| [
"Saher Esmeir, Shaul Markovitch",
"['Saher Esmeir' 'Shaul Markovitch']"
] |
cs.LG cs.MA | 10.1613/jair.2628 | 1401.3454 | null | null | http://arxiv.org/abs/1401.3454v1 | 2014-01-15T05:13:47Z | 2014-01-15T05:13:47Z | A Multiagent Reinforcement Learning Algorithm with Non-linear Dynamics | Several multiagent reinforcement learning (MARL) algorithms have been
proposed to optimize agents' decisions. Due to the complexity of the problem,
the majority of the previously developed MARL algorithms assumed agents either
had some knowledge of the underlying game (such as Nash equilibria) and/or
observed other agents' actions and the rewards they received.
We introduce a new MARL algorithm called the Weighted Policy Learner (WPL),
which allows agents to reach a Nash Equilibrium (NE) in benchmark
2-player-2-action games with minimum knowledge. Using WPL, the only feedback an
agent needs is its own local reward (the agent does not observe other agents'
actions or rewards). Furthermore, WPL does not assume that agents know the
underlying game or the corresponding Nash Equilibrium a priori. We
experimentally show that our algorithm converges in benchmark
two-player-two-action games. We also show that our algorithm converges in the
challenging Shapley's game, where previous MARL algorithms failed to converge
without knowing the underlying game or the NE. Furthermore, we show that WPL
outperforms the state-of-the-art algorithms in a more realistic setting of 100
agents interacting and learning concurrently.
An important aspect of understanding the behavior of a MARL algorithm is
analyzing the dynamics of the algorithm: how the policies of multiple learning
agents evolve over time as agents interact with one another. Such an analysis
not only verifies whether agents using a given MARL algorithm will eventually
converge, but also reveals the behavior of the MARL algorithm prior to
convergence. We analyze our algorithm in two-player-two-action games and show
that symbolically proving WPL's convergence is difficult, because of the
non-linear nature of WPL's dynamics, unlike previous MARL algorithms that had
either linear or piecewise-linear dynamics. Instead, we numerically solve WPL's
dynamics differential equations and compare the solution to the dynamics of
previous MARL algorithms.
| [
"Sherief Abdallah, Victor Lesser",
"['Sherief Abdallah' 'Victor Lesser']"
] |
cs.NE cs.AI cs.LG | 10.1613/jair.2681 | 1401.3464 | null | null | http://arxiv.org/abs/1401.3464v1 | 2014-01-15T05:22:48Z | 2014-01-15T05:22:48Z | Learning Bayesian Network Equivalence Classes with Ant Colony
Optimization | Bayesian networks are a useful tool in the representation of uncertain
knowledge. This paper proposes a new algorithm called ACO-E, to learn the
structure of a Bayesian network. It does this by conducting a search through
the space of equivalence classes of Bayesian networks using Ant Colony
Optimization (ACO). To this end, two novel extensions of traditional ACO
techniques are proposed and implemented. Firstly, multiple types of moves are
allowed. Secondly, moves can be given in terms of indices that are not based on
construction graph nodes. The results of testing show that ACO-E performs
better than a greedy search and other state-of-the-art and metaheuristic
algorithms whilst searching in the space of equivalence classes.
| [
"R\\'on\\'an Daly, Qiang Shen",
"['Rónán Daly' 'Qiang Shen']"
] |
cs.LG cs.AI stat.ML | 10.1613/jair.2773 | 1401.3478 | null | null | http://arxiv.org/abs/1401.3478v1 | 2014-01-15T05:33:29Z | 2014-01-15T05:33:29Z | Efficient Markov Network Structure Discovery Using Independence Tests | We present two algorithms for learning the structure of a Markov network from
data: GSMN* and GSIMN. Both algorithms use statistical independence tests to
infer the structure by successively constraining the set of structures
consistent with the results of these tests. Until very recently, algorithms for
structure learning were based on maximum likelihood estimation, which has been
proved to be NP-hard for Markov networks due to the difficulty of estimating
the parameters of the network, needed for the computation of the data
likelihood. The independence-based approach does not require the computation of
the likelihood, and thus both GSMN* and GSIMN can compute the structure
efficiently (as shown in our experiments). GSMN* is an adaptation of the
Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of
Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearls
well-known properties of the conditional independence relation to infer novel
independences from known ones, thus avoiding the performance of statistical
tests to estimate them. To accomplish this efficiently GSIMN uses the Triangle
theorem, also introduced in this work, which is a simplified version of the set
of Markov axioms. Experimental comparisons on artificial and real-world data
sets show GSIMN can yield significant savings with respect to GSMN*, while
generating a Markov network with comparable or in some cases improved quality.
We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH,
that produces all possible conditional independences resulting from repeatedly
applying Pearl's theorems on the known conditional independence tests. The
results of this comparison show that GSIMN, by the sole use of the Triangle
theorem, is nearly optimal in terms of the set of independences tests that it
infers.
| [
"Facundo Bromberg, Dimitris Margaritis, Vasant Honavar",
"['Facundo Bromberg' 'Dimitris Margaritis' 'Vasant Honavar']"
] |
cs.CL cs.IR cs.LG | 10.1613/jair.2784 | 1401.3479 | null | null | http://arxiv.org/abs/1401.3479v1 | 2014-01-15T05:33:57Z | 2014-01-15T05:33:57Z | Complex Question Answering: Unsupervised Learning Approaches and
Experiments | Complex questions that require inferencing and synthesizing information from
multiple documents can be seen as a kind of topic-oriented, informative
multi-document summarization where the goal is to produce a single text as a
compressed version of a set of documents with a minimum loss of relevant
information. In this paper, we experiment with one empirical method and two
unsupervised statistical machine learning techniques: K-means and Expectation
Maximization (EM), for computing the relative importance of the sentences. We
compare the results of these approaches. Our experiments show that the
empirical approach outperforms the other two techniques and EM performs better
than K-means. However, the performance of these approaches depends entirely on
the feature set used and the weighting of these features. In order to measure
the importance and relevance to the user query we extract different kinds of
features (i.e. lexical, lexical semantic, cosine similarity, basic element,
tree kernel based syntactic and shallow-semantic) for each of the document
sentences. We use a local search technique to learn the weights of the
features. To the best of our knowledge, no study has used tree kernel functions
to encode syntactic/semantic information for more complex tasks such as
computing the relatedness between the query sentences and the document
sentences in order to generate query-focused summaries (or answers to complex
questions). For each of our methods of generating summaries (i.e. empirical,
K-means and EM) we show the effects of syntactic and shallow-semantic features
over the bag-of-words (BOW) features.
| [
"['Yllias Chali' 'Shafiq Rayhan Joty' 'Sadid A. Hasan']",
"Yllias Chali, Shafiq Rayhan Joty, Sadid A. Hasan"
] |
cs.IR cs.CL cs.LG | 10.1613/jair.2830 | 1401.3488 | null | null | http://arxiv.org/abs/1401.3488v1 | 2014-01-15T05:38:17Z | 2014-01-15T05:38:17Z | Content Modeling Using Latent Permutations | We present a novel Bayesian topic model for learning discourse-level document
structure. Our model leverages insights from discourse theory to constrain
latent topic assignments in a way that reflects the underlying organization of
document topics. We propose a global model in which both topic selection and
ordering are biased to be similar across a collection of related documents. We
show that this space of orderings can be effectively represented using a
distribution over permutations called the Generalized Mallows Model. We apply
our method to three complementary discourse-level tasks: cross-document
alignment, document segmentation, and information ordering. Our experiments
show that incorporating our permutation-based model in these applications
yields substantial improvements in performance over previously proposed
methods.
| [
"Harr Chen, S.R.K. Branavan, Regina Barzilay, David R. Karger",
"['Harr Chen' 'S. R. K. Branavan' 'Regina Barzilay' 'David R. Karger']"
] |
cs.LG cs.AI cs.DB physics.data-an q-bio.QM | 10.1109/TKDE.2014.2316504 | 1401.3531 | null | null | http://arxiv.org/abs/1401.3531v2 | 2014-05-09T00:05:57Z | 2014-01-15T09:41:50Z | Highly comparative feature-based time-series classification | A highly comparative, feature-based approach to time series classification is
introduced that uses an extensive database of algorithms to extract thousands
of interpretable features from time series. These features are derived from
across the scientific time-series analysis literature, and include summaries of
time series in terms of their correlation structure, distribution, entropy,
stationarity, scaling properties, and fits to a range of time-series models.
After computing thousands of features for each time series in a training set,
those that are most informative of the class structure are selected using
greedy forward feature selection with a linear classifier. The resulting
feature-based classifiers automatically learn the differences between classes
using a reduced number of time-series properties, and circumvent the need to
calculate distances between time series. Representing time series in this way
results in orders of magnitude of dimensionality reduction, allowing the method
to perform well on very large datasets containing long time series or time
series of different lengths. For many of the datasets studied, classification
performance exceeded that of conventional instance-based classifiers, including
one-nearest-neighbor classifiers using Euclidean distance and dynamic time
warping and, most importantly, the features selected provide an understanding
of the properties of the dataset, insight that can guide further scientific
investigation.
| [
"Ben D. Fulcher and Nick S. Jones",
"['Ben D. Fulcher' 'Nick S. Jones']"
] |
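A toy version of the pipeline described above: compute a small set of interpretable time-series features (the paper uses thousands) and rank them by greedy forward selection with a linear classifier. The feature set and synthetic two-class data are stand-ins, and unlike the paper this sketch ranks all features rather than stopping when accuracy plateaus:
```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# A tiny stand-in feature set of interpretable time-series properties.
def features(ts):
    d = np.diff(ts)
    return np.array([ts.mean(), ts.std(),
                     np.corrcoef(ts[:-1], ts[1:])[0, 1],  # lag-1 autocorrelation
                     np.mean(np.abs(d))])                 # mean abs derivative

rng = np.random.default_rng(0)
# Two classes: noisy sine-like series vs. white-noise-like series.
ts_data = [np.sin(np.arange(100) * 0.3 + rng.random())
           + 0.3 * rng.normal(size=100) for _ in range(50)]
ts_data += [rng.normal(size=100) for _ in range(50)]
X = np.array([features(t) for t in ts_data])
y = np.array([0] * 50 + [1] * 50)

# Greedy forward selection with a linear classifier, as in the paper.
selected, remaining = [], list(range(X.shape[1]))
while remaining:
    scores = [cross_val_score(LinearDiscriminantAnalysis(),
                              X[:, selected + [f]], y, cv=5).mean()
              for f in remaining]
    best = remaining[int(np.argmax(scores))]
    selected.append(best)
    remaining.remove(best)
print("feature order chosen:", selected)
```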
cs.GT cs.AI cs.LG | null | 1401.3579 | null | null | http://arxiv.org/pdf/1401.3579v3 | 2020-05-04T01:47:30Z | 2013-12-20T05:54:58Z | A Supervised Goal Directed Algorithm in Economical Choice Behaviour: An
Actor-Critic Approach | This paper aims to find an algorithmic structure that can predict and
explain economical choice behaviour, particularly under uncertainty (random
policies), by adapting the prevalent Actor-Critic learning method to comply
with the requirements that have emerged since the field of neuroeconomics
dawned on us. While skimming some basics of neuroeconomics that seem relevant
to our discussion, we will try to outline some of the important works which
have so far been done to simulate choice-making processes. Concerning
neurological findings that suggest the existence of two specific functions,
namely 'rewards' and 'beliefs', that are executed through the basal ganglia
all the way up to sub-cortical areas, we will offer a modified version of the
actor-critic algorithm to shed light on the relation between these functions
and, most importantly, to resolve what is referred to as a challenge for
actor-critic algorithms: the lack of inheritance or hierarchy, which prevents
the system from evolving in continuous-time tasks where convergence might not
emerge.
| [
"Keyvan Yahya",
"['Keyvan Yahya']"
] |
cs.NE cs.LG | null | 1401.3607 | null | null | http://arxiv.org/pdf/1401.3607v2 | 2014-02-07T11:55:12Z | 2014-01-15T14:37:48Z | A Brief History of Learning Classifier Systems: From CS-1 to XCS | Modern Learning Classifier Systems can be characterized by their use of rule
accuracy as the utility metric for the search algorithm(s) discovering useful
rules. Such searching typically takes place within the restricted space of
co-active rules for efficiency. This paper gives an historical overview of the
evolution of such systems up to XCS, and then some of the subsequent
developments of XCS to different types of learning.
| [
"['Larry Bull']",
"Larry Bull"
] |
stat.ML cs.LG stat.CO | null | 1401.3632 | null | null | http://arxiv.org/pdf/1401.3632v3 | 2015-09-22T07:41:00Z | 2014-01-15T15:40:40Z | Bayesian Conditional Density Filtering | We propose a Conditional Density Filtering (C-DF) algorithm for efficient
online Bayesian inference. C-DF adapts MCMC sampling to the online setting,
sampling from approximations to conditional posterior distributions obtained by
propagating surrogate conditional sufficient statistics (a function of data and
parameter estimates) as new data arrive. These quantities eliminate the need to
store or process the entire dataset simultaneously and offer a number of
desirable features. Often, these include a reduction in memory requirements and
runtime and improved mixing, along with state-of-the-art parameter inference
and prediction. These improvements are demonstrated through several
illustrative examples including an application to high dimensional compressed
regression. Finally, we show that C-DF samples converge to the target posterior
distribution asymptotically as sampling proceeds and more data arrives.
| [
"['Shaan Qamar' 'Rajarshi Guhaniyogi' 'David B. Dunson']",
"Shaan Qamar, Rajarshi Guhaniyogi, David B. Dunson"
] |
stat.ML cs.LG | null | 1401.3737 | null | null | http://arxiv.org/pdf/1401.3737v1 | 2014-01-15T20:50:00Z | 2014-01-15T20:50:00Z | Coordinate Descent with Online Adaptation of Coordinate Frequencies | Coordinate descent (CD) algorithms have become the method of choice for
solving a number of optimization problems in machine learning. They are
particularly popular for training linear models, including linear support
vector machine classification, LASSO regression, and logistic regression.
We consider general CD with non-uniform selection of coordinates. Instead of
fixing selection frequencies beforehand, we propose an online adaptation
mechanism for this important parameter, called the adaptive coordinate
frequencies (ACF) method. This mechanism removes the need to estimate optimal
coordinate frequencies beforehand, and it automatically reacts to changing
requirements during an optimization run.
We demonstrate the usefulness of our ACF-CD approach for a variety of
optimization problems arising in machine learning contexts. Our algorithm
offers significant speed-ups over state-of-the-art training methods.
| [
"Tobias Glasmachers and \\\"Ur\\\"un Dogan",
"['Tobias Glasmachers' 'Ürün Dogan']"
] |
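A sketch of coordinate descent for the lasso with non-uniform coordinate selection, in the spirit of the abstract above; the preference-update heuristic below is a crude stand-in, not the paper's ACF rule, and the problem sizes and constants are assumptions:
```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 50, 0.1
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)                 # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=n)

col_sq = np.sum(X ** 2, axis=0)
w = np.zeros(d)
resid = y.copy()                                # resid = y - X w, kept updated
pref = np.ones(d)                               # adaptive coordinate preferences

for t in range(20000):
    p = pref / pref.sum()
    j = rng.choice(d, p=p)
    # Exact coordinate minimizer of 0.5*||y - Xw||^2 + lam*n*||w||_1
    # (soft-thresholding update).
    rho = X[:, j] @ resid + col_sq[j] * w[j]
    w_new = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    delta = w_new - w[j]
    resid -= delta * X[:, j]
    w[j] = w_new
    # Crude frequency adaptation (an assumption): coordinates whose updates
    # still move get sampled more often.
    pref[j] = np.clip(0.9 * pref[j] + 10.0 * abs(delta), 0.05, 20.0)

print("support found:", np.nonzero(np.abs(w) > 1e-3)[0])
```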
cs.CV cs.LG stat.ML | 10.1109/LGRS.2013.2290531 | 1401.3818 | null | null | http://arxiv.org/abs/1401.3818v1 | 2014-01-16T03:21:26Z | 2014-01-16T03:21:26Z | Structured Priors for Sparse-Representation-Based Hyperspectral Image
Classification | Pixel-wise classification, where each pixel is assigned to a predefined
class, is one of the most important procedures in hyperspectral image (HSI)
analysis. By representing a test pixel as a linear combination of a small
subset of labeled pixels, a sparse representation classifier (SRC) gives rather
plausible results compared with those of traditional classifiers such as the
support vector machine (SVM). Recently, by incorporating additional structured
sparsity priors, second-generation SRCs have appeared in the literature and are
reported to further improve the performance of HSI classification. These priors
are based
on exploiting the spatial dependencies between the neighboring pixels, the
inherent structure of the dictionary, or both. In this paper, we review and
compare several structured priors for sparse-representation-based HSI
classification. We also propose a new structured prior called the low rank
group prior, which can be considered as a modification of the low rank prior.
Furthermore, we will investigate how different structured priors improve the
result for the HSI classification.
| [
"Xiaoxia Sun, Qing Qu, Nasser M. Nasrabadi, Trac D. Tran",
"['Xiaoxia Sun' 'Qing Qu' 'Nasser M. Nasrabadi' 'Trac D. Tran']"
] |
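For readers unfamiliar with sparse-representation classification, the following sketch shows the residual rule underlying SRC: a test pixel is coded over each class's labeled pixels and assigned to the class with the smallest reconstruction error. The l1/greedy sparse coding step (and the structured priors the paper studies) is replaced here by per-class least squares for brevity; the data are synthetic.

```python
import numpy as np

def src_like_predict(D, labels, y):
    """D: (bands, samples) dictionary of labeled pixels; y: test pixel."""
    residuals = {}
    for c in np.unique(labels):
        Dc = D[:, labels == c]
        coef, *_ = np.linalg.lstsq(Dc, y, rcond=None)
        residuals[c] = np.linalg.norm(y - Dc @ coef)
    return min(residuals, key=residuals.get)   # smallest residual wins

rng = np.random.default_rng(2)
D = rng.normal(size=(50, 40))             # 40 labeled pixels, 50 bands
labels = np.repeat([0, 1], 20)
y = D[:, 3] + 0.01 * rng.normal(size=50)  # noisy copy of a class-0 pixel
print("predicted class:", src_like_predict(D, labels, y))
```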
cs.GT cs.LG | 10.1613/jair.2904 | 1401.3829 | null | null | http://arxiv.org/abs/1401.3829v1 | 2014-01-16T04:47:45Z | 2014-01-16T04:47:45Z | RoxyBot-06: Stochastic Prediction and Optimization in TAC Travel | In this paper, we describe our autonomous bidding agent, RoxyBot, who emerged
victorious in the travel division of the 2006 Trading Agent Competition in a
photo finish. At a high level, the design of many successful trading agents can
be summarized as follows: (i) price prediction: build a model of market prices;
and (ii) optimization: solve for an approximately optimal set of bids, given
this model. To predict, RoxyBot builds a stochastic model of market prices by
simulating simultaneous ascending auctions. To optimize, RoxyBot relies on the
sample average approximation method, a stochastic optimization technique.
| [
"Amy Greenwald, Seong Jae Lee, Victor Naroditskiy",
"['Amy Greenwald' 'Seong Jae Lee' 'Victor Naroditskiy']"
] |
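The prediction-plus-optimization pattern described above can be illustrated with a toy sample average approximation: draw price scenarios from a stochastic model, then choose the bid with the highest average profit across scenarios. The single-good, uniform-price market below is an assumed toy setting, not the TAC Travel game.

```python
import numpy as np

rng = np.random.default_rng(3)
value = 10.0                               # agent's value for the good
scenarios = rng.uniform(2.0, 12.0, 1000)   # sampled clearing prices

def avg_profit(bid):
    wins = scenarios <= bid                # win whenever bid meets the price
    return np.mean(np.where(wins, value - scenarios, 0.0))

bids = np.linspace(0.0, 12.0, 121)
best = max(bids, key=avg_profit)
print(f"SAA-optimal bid: {best:.1f}, est. profit {avg_profit(best):.2f}")
```

As expected in this pay-the-clearing-price model, the SAA-optimal bid lands at the agent's own value.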
cs.LG cs.HC | null | 1401.3836 | null | null | http://arxiv.org/pdf/1401.3836v1 | 2014-01-16T04:51:19Z | 2014-01-16T04:51:19Z | An Active Learning Approach for Jointly Estimating Worker Performance
and Annotation Reliability with Crowdsourced Data | Crowdsourcing platforms offer a practical solution to the problem of
affordably annotating large datasets for training supervised classifiers.
Unfortunately, poor worker performance frequently threatens to compromise
annotation reliability, and requesting multiple labels for every instance can
lead to large cost increases without guaranteeing good results. Minimizing the
required training samples using an active learning selection procedure reduces
the labeling requirement but can jeopardize classifier training by focusing on
erroneous annotations. This paper presents an active learning approach in which
worker performance, task difficulty, and annotation reliability are jointly
estimated and used to compute the risk function guiding the sample selection
procedure. We demonstrate that the proposed approach, which employs active
learning with Bayesian networks, significantly improves training accuracy and
correctly ranks the expertise of unknown labelers in the presence of annotation
noise.
| [
"['Liyue Zhao' 'Yu Zhang' 'Gita Sukthankar']",
"Liyue Zhao, Yu Zhang and Gita Sukthankar"
] |
cs.LG cs.AI stat.ML | 10.1613/jair.3396 | 1401.3870 | null | null | http://arxiv.org/abs/1401.3870v1 | 2014-01-16T05:08:29Z | 2014-01-16T05:08:29Z | Learning to Make Predictions In Partially Observable Environments
Without a Generative Model | When faced with the problem of learning a model of a high-dimensional
environment, a common approach is to limit the model to make only a restricted
set of predictions, thereby simplifying the learning problem. These partial
models may be directly useful for making decisions or may be combined together
to form a more complete, structured model. However, in partially observable
(non-Markov) environments, standard model-learning methods learn generative
models, i.e. models that provide a probability distribution over all possible
futures (such as POMDPs). It is not straightforward to restrict such models to
make only certain predictions, and doing so does not always simplify the
learning problem. In this paper we present prediction profile models:
non-generative partial models for partially observable systems that make only a
given set of predictions, and are therefore far simpler than generative models
in some cases. We formalize the problem of learning a prediction profile model
as a transformation of the original model-learning problem, and show
empirically that one can learn prediction profile models that make a small set
of important predictions even in systems that are too complex for standard
generative models.
| [
"['Erik Talvitie' 'Satinder Singh']",
"Erik Talvitie, Satinder Singh"
] |
cs.AI cs.LG | 10.1613/jair.3175 | 1401.3871 | null | null | http://arxiv.org/abs/1401.3871v1 | 2014-01-16T05:09:10Z | 2014-01-16T05:09:10Z | Non-Deterministic Policies in Markovian Decision Processes | Markovian processes have long been used to model stochastic environments.
Reinforcement learning has emerged as a framework to solve sequential planning
and decision-making problems in such environments. In recent years, attempts
were made to apply methods from reinforcement learning to construct decision
support systems for action selection in Markovian environments. Although
conventional methods in reinforcement learning have proved to be useful in
problems concerning sequential decision-making, they cannot be applied in their
current form to decision support systems, such as those in medical domains, as
they suggest policies that are often highly prescriptive and leave little room
for the user's input. Without the ability to provide flexible guidelines, it is
unlikely that these methods can gain ground with users of such systems. This
paper introduces the new concept of non-deterministic policies to allow more
flexibility in the user's decision-making process, while constraining decisions
to remain near optimal solutions. We provide two algorithms to compute
non-deterministic policies in discrete domains. We study the output and running
time of these methods on a set of synthetic and real-world problems. In an
experiment with human subjects, we show that humans assisted by hints based on
non-deterministic policies outperform both human-only and computer-only agents
in a web navigation task.
| [
"Mahdi Milani Fard, Joelle Pineau",
"['Mahdi Milani Fard' 'Joelle Pineau']"
] |
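One simple way to realize a non-deterministic policy from a solved MDP is to return, in each state, every action whose Q-value lies within a tolerance of the best one, leaving the final pick to the user. This per-state filter is only a hedged illustration: the paper's algorithms enforce the stronger guarantee that the value of the resulting policy stays near optimal.

```python
import numpy as np

Q = np.array([[1.0, 0.95, 0.2],           # rows: states, cols: actions
              [0.5, 0.52, 0.1]])
eps = 0.1                                  # acceptable optimality gap

def nondeterministic_policy(Q, eps):
    best = Q.max(axis=1, keepdims=True)
    return [np.flatnonzero(Q[s] >= best[s] - eps) for s in range(len(Q))]

for s, acts in enumerate(nondeterministic_policy(Q, eps)):
    print(f"state {s}: acceptable actions {acts.tolist()}")
```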
cs.LG cs.AI stat.ML | 10.1613/jair.3195 | 1401.3877 | null | null | http://arxiv.org/abs/1401.3877v1 | 2014-01-16T05:11:12Z | 2014-01-16T05:11:12Z | Properties of Bethe Free Energies and Message Passing in Gaussian Models | We address the problem of computing approximate marginals in Gaussian
probabilistic models by using mean field and fractional Bethe approximations.
We define the Gaussian fractional Bethe free energy in terms of the moment
parameters of the approximate marginals, derive a lower and an upper bound on
the fractional Bethe free energy and establish a necessary condition for the
lower bound to be bounded from below. It turns out that the condition is
identical to the pairwise normalizability condition, which is known to be a
sufficient condition for the convergence of the message passing algorithm. We
show that stable fixed points of the Gaussian message passing algorithm are
local minima of the Gaussian Bethe free energy. By a counterexample, we
disprove the conjecture stating that the unboundedness of the free energy
implies the divergence of the message passing algorithm.
| [
"['Botond Cseke' 'Tom Heskes']",
"Botond Cseke, Tom Heskes"
] |
cs.LG | 10.1613/jair.3198 | 1401.3880 | null | null | http://arxiv.org/abs/1401.3880v1 | 2014-01-16T05:12:21Z | 2014-01-16T05:12:21Z | Regression Conformal Prediction with Nearest Neighbours | In this paper we apply Conformal Prediction (CP) to the k-Nearest Neighbours
Regression (k-NNR) algorithm and propose ways of extending the typical
nonconformity measure used for regression so far. Unlike traditional regression
methods which produce point predictions, Conformal Predictors output predictive
regions that satisfy a given confidence level. The regions produced by any
Conformal Predictor are automatically valid, however their tightness and
therefore usefulness depends on the nonconformity measure used by each CP. In
effect a nonconformity measure evaluates how strange a given example is
compared to a set of other examples based on some traditional machine learning
algorithm. We define six novel nonconformity measures based on the k-Nearest
Neighbours Regression algorithm and develop the corresponding CPs following
both the original (transductive) and the inductive CP approaches. A comparison
of the predictive regions produced by our measures with those of the typical
regression measure suggests that a major improvement in terms of predictive
region tightness is achieved by the new measures.
| [
"Harris Papadopoulos, Vladimir Vovk, Alex Gammerman",
"['Harris Papadopoulos' 'Vladimir Vovk' 'Alex Gammerman']"
] |
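A compact sketch of the inductive variant on synthetic data, using the basic nonconformity measure |y − ŷ| with a k-NN regressor (the paper's six refined measures, e.g. ones normalized by neighbour distances, are omitted, and the exact finite-sample quantile index is glossed over):

```python
import numpy as np

def knn_predict(Xtr, ytr, X, k=5):
    d = np.linalg.norm(Xtr[None, :, :] - X[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
Xtr, ytr, Xcal, ycal = X[:200], y[:200], X[200:], y[200:]

alpha = np.abs(ycal - knn_predict(Xtr, ytr, Xcal))   # calibration scores
q = np.quantile(alpha, 0.95)                         # 95% confidence level

x_new = np.array([[0.5]])
yhat = knn_predict(Xtr, ytr, x_new)[0]
print(f"95% predictive region: [{yhat - q:.3f}, {yhat + q:.3f}]")
```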
cs.LG cs.AI stat.ML | 10.1613/jair.3313 | 1401.3894 | null | null | http://arxiv.org/abs/1401.3894v1 | 2014-01-16T05:17:32Z | 2014-01-16T05:17:32Z | Efficient Multi-Start Strategies for Local Search Algorithms | Local search algorithms applied to optimization problems often suffer from
getting trapped in a local optimum. The common solution for this deficiency is
to restart the algorithm when no progress is observed. Alternatively, one can
start multiple instances of a local search algorithm, and allocate
computational resources (in particular, processing time) to the instances
depending on their behavior. Hence, a multi-start strategy has to decide
(dynamically) when to allocate additional resources to a particular instance
and when to start new instances. In this paper we propose multi-start
strategies motivated by works on multi-armed bandit problems and Lipschitz
optimization with an unknown constant. The strategies continuously estimate the
potential performance of each algorithm instance by supposing a convergence
rate of the local search algorithm up to an unknown constant, and in every
phase allocate resources to those instances that could converge to the optimum
for a particular range of the constant. Asymptotic bounds are given on the
performance of the strategies. In particular, we prove that at most a quadratic
increase in the number of times the target function is evaluated is needed to
achieve the performance of a local search algorithm started from the attraction
region of the optimum. Experiments are provided using SPSA (Simultaneous
Perturbation Stochastic Approximation) and k-means as local search algorithms,
and the results indicate that the proposed strategies work well in practice,
and, in all cases studied, need only logarithmically more evaluations of the
target function as opposed to the theoretically suggested quadratic increase.
| [
"Andr\\'as Gy\\\"orgy, Levente Kocsis",
"['András György' 'Levente Kocsis']"
] |
cs.GT cs.LG | 10.1613/jair.3384 | 1401.3907 | null | null | http://arxiv.org/abs/1401.3907v1 | 2014-01-16T05:22:56Z | 2014-01-16T05:22:56Z | Policy Invariance under Reward Transformations for General-Sum
Stochastic Games | We extend the potential-based shaping method from Markov decision processes
to multi-player general-sum stochastic games. We prove that the Nash equilibria
in a stochastic game remain unchanged after potential-based shaping is applied
to the environment. The property of policy invariance provides a possible way
of speeding convergence when learning to play a stochastic game.
| [
"Xiaosong Lu, Howard M. Schwartz, Sidney N. Givigi Jr",
"['Xiaosong Lu' 'Howard M. Schwartz' 'Sidney N. Givigi Jr']"
] |
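The potential-based shaping transformation at the heart of this result is one line: the reward is augmented by the discounted change of a state potential, r' = r + γΦ(s') − Φ(s). A minimal illustration with assumed potential values:

```python
gamma = 0.9
phi = {"s0": 0.0, "s1": 5.0, "goal": 10.0}   # assumed potential function

def shaped_reward(r, s, s_next):
    # r' = r + gamma * phi(s') - phi(s); equilibria are provably unchanged
    return r + gamma * phi[s_next] - phi[s]

print(shaped_reward(0.0, "s0", "s1"))    # 4.5: shaping adds guidance
print(shaped_reward(1.0, "s1", "goal"))  # 5.0
```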
cs.LG cs.CV stat.ML | 10.1016/j.knosys.2014.04.035 | 1401.3973 | null | null | http://arxiv.org/abs/1401.3973v1 | 2014-01-16T10:21:44Z | 2014-01-16T10:21:44Z | An Empirical Evaluation of Similarity Measures for Time Series
Classification | Time series are ubiquitous, and a measure to assess their similarity is a
core part of many computational systems. In particular, the similarity measure
is the most essential ingredient of time series clustering and classification
systems. Because of this importance, countless approaches to estimate time
series similarity have been proposed. However, there is a lack of comparative
studies using empirical, rigorous, quantitative, and large-scale assessment
strategies. In this article, we provide an extensive evaluation of similarity
measures for time series classification following the aforementioned
principles. We consider 7 different measures coming from alternative measure
'families', and 45 publicly-available time series data sets coming from a wide
variety of scientific domains. We focus on out-of-sample classification
accuracy, but in-sample accuracies and parameter choices are also discussed.
Our work is based on rigorous evaluation methodologies and includes the use of
powerful statistical significance tests to derive meaningful conclusions. The
obtained results show the equivalence, in terms of accuracy, of a number of
measures, but with one single candidate outperforming the rest. Such findings,
together with the methodology followed, invite researchers in the field to
adopt more consistent evaluation criteria and make more informed decisions
regarding the baseline measures to which new developments should be compared.
| [
"Joan Serr\\`a and Josep Lluis Arcos",
"['Joan Serrà' 'Josep Lluis Arcos']"
] |
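As an example of the kind of elastic similarity measure such comparisons include, here is a plain dynamic-time-warping distance (shown purely for illustration; the paper evaluates seven measure families and does not reduce to this implementation):

```python
import numpy as np

def dtw(a, b):
    """Classic O(nm) dynamic time warping with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0: warping absorbs the lag
```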
stat.ML cs.AI cs.LG stat.CO stat.ME | null | 1401.4082 | null | null | http://arxiv.org/pdf/1401.4082v3 | 2014-05-30T10:00:36Z | 2014-01-16T16:33:23Z | Stochastic Backpropagation and Approximate Inference in Deep Generative
Models | We marry ideas from deep neural networks and approximate Bayesian inference
to derive a generalised class of deep, directed generative models, endowed with
a new algorithm for scalable inference and learning. Our algorithm introduces a
recognition model to represent approximate posterior distributions, which
acts as a stochastic encoder of the data. We develop stochastic
back-propagation -- rules for back-propagation through stochastic variables --
and use this to develop an algorithm that allows for joint optimisation of the
parameters of both the generative and recognition model. We demonstrate on
several real-world data sets that the model generates realistic samples,
provides accurate imputations of missing data and is a useful tool for
high-dimensional data visualisation.
| [
"Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra",
"['Danilo Jimenez Rezende' 'Shakir Mohamed' 'Daan Wierstra']"
] |
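The enabling trick behind stochastic back-propagation for Gaussian latent variables is the pathwise reparameterization z = μ + σε with ε ~ N(0, 1), which lets gradients of an expected loss flow through μ and σ. A numpy sketch checking the estimator on E[z²], whose exact gradients are 2μ and 2σ:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma = 1.5, 0.7
eps = rng.normal(size=100_000)
z = mu + sigma * eps          # pathwise (reparameterized) samples
# loss f(z) = z^2, df/dz = 2z; chain rule: dz/dmu = 1, dz/dsigma = eps
grad_mu = np.mean(2 * z * 1.0)
grad_sigma = np.mean(2 * z * eps)
print(grad_mu, 2 * mu)        # both ~ 3.0
print(grad_sigma, 2 * sigma)  # both ~ 1.4
```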
cs.LG stat.AP | null | 1401.4128 | null | null | http://arxiv.org/pdf/1401.4128v1 | 2014-01-16T18:54:43Z | 2014-01-16T18:54:43Z | Towards the selection of patients requiring ICD implantation by
automatic classification from Holter monitoring indices | The purpose of this study is to optimize the selection of prophylactic
cardioverter defibrillator implantation candidates. Currently, the main
criterion for implantation is a low Left Ventricular Ejection Fraction (LVEF)
whose specificity is relatively poor. We designed two classifiers aimed to
predict, from long term ECG recordings (Holter), whether a low-LVEF patient is
likely or not to undergo ventricular arrhythmia in the next six months. One
classifier is a single hidden layer neural network whose variables are the most
relevant features extracted from Holter recordings, and the other classifier
has a structure that capitalizes on the physiological decomposition of the
arrhythmogenic factors into three disjoint groups: the myocardial substrate,
the triggers and the autonomic nervous system (ANS). In this ad hoc network,
the features were assigned to each group; one neural network classifier per
group was designed and its complexity was optimized. The outputs of the
classifiers were fed to a single neuron that provided the required probability
estimate. The latter was thresholded for final discrimination. A dataset
composed of 186 pre-implantation 30-min Holter recordings of patients equipped
with an implantable cardioverter defibrillator (ICD) in primary prevention was
used in order to design and test this classifier. 44 out of 186 patients
underwent at least one treated ventricular arrhythmia during the six-month
follow-up period. Performances of the designed classifier were evaluated using
a cross-test strategy that consists in splitting the database into several
combinations of a training set and a test set. The average arrhythmia
prediction performances of the ad-hoc classifier are NPV = 77% $\pm$ 13% and
PPV = 31% $\pm$ 19% (Negative Predictive Value $\pm$ std, Positive Predictive
Value $\pm$ std). According to our study, improving prophylactic
ICD-implantation candidate selection by automatic classification from ECG
features may be possible, but the availability of a sizable dataset appears to
be essential to decrease the number of False Negatives.
| [
"['Charles-Henri Cappelaere' 'R. Dubois' 'P. Roussel' 'G. Dreyfus']",
"Charles-Henri Cappelaere, R. Dubois, P. Roussel, G. Dreyfus"
] |
cs.LG | null | 1401.4143 | null | null | http://arxiv.org/pdf/1401.4143v1 | 2014-01-16T19:49:02Z | 2014-01-16T19:49:02Z | Convex Optimization for Binary Classifier Aggregation in Multiclass
Problems | Multiclass problems are often decomposed into multiple binary problems that
are solved by individual binary classifiers whose results are integrated into a
final answer. Various methods, including all-pairs (APs), one-versus-all (OVA),
and error correcting output code (ECOC), have been studied, to decompose
multiclass problems into binary problems. However, little study has been made
to optimally aggregate binary problems to determine a final answer to the
multiclass problem. In this paper we present a convex optimization method for
an optimal aggregation of binary classifiers to estimate class membership
probabilities in multiclass problems. We model the class membership probability
as a softmax function which takes a conic combination of discrepancies induced
by individual binary classifiers as input. With this model, we formulate
the regularized maximum likelihood estimation as a convex optimization problem,
which is solved by the primal-dual interior point method. Connections of our
method to large margin classifiers are presented, showing that the large margin
formulation can be considered as a limiting case of our convex formulation.
Numerical experiments on synthetic and real-world data sets demonstrate that
our method outperforms existing aggregation methods as well as direct methods,
in terms of the classification accuracy and the quality of class membership
probability estimates.
| [
"Sunho Park, TaeHyun Hwang, Seungjin Choi",
"['Sunho Park' 'TaeHyun Hwang' 'Seungjin Choi']"
] |
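A sketch of the model described above: with an ECOC-style code matrix M (classes × binary problems) and binary outputs f(x), class scores are a conic (nonnegative) combination s_c = Σ_j w_j M[c,j] f_j(x) passed through a softmax. The fit below uses projected gradient ascent on the log-likelihood as a simple stand-in for the paper's primal-dual interior point method, on synthetic binary outputs.

```python
import numpy as np

rng = np.random.default_rng(6)
C, J, N = 3, 3, 500
M = np.array([[1, 1, 0], [-1, 0, 1], [0, -1, -1]])   # all-pairs coding
ytrue = rng.integers(0, C, N)
F = M[ytrue] + 0.5 * rng.normal(size=(N, J))          # noisy binary outputs

w = np.ones(J)
for _ in range(300):
    S = F @ (M * w).T                                 # (N, C) class scores
    P = np.exp(S - S.max(1, keepdims=True)); P /= P.sum(1, keepdims=True)
    R = np.eye(C)[ytrue] - P                          # likelihood residuals
    grad = np.einsum('nc,cj,nj->j', R, M, F) / N
    w = np.maximum(w + 0.5 * grad, 0.0)               # conic constraint
print("learned weights:", np.round(w, 2),
      "accuracy:", np.mean(P.argmax(1) == ytrue))
```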
cs.CL cs.LG | 10.1613/jair.2986 | 1401.4436 | null | null | http://arxiv.org/abs/1401.4436v1 | 2014-01-16T04:53:44Z | 2014-01-16T04:53:44Z | Cause Identification from Aviation Safety Incident Reports via Weakly
Supervised Semantic Lexicon Construction | The Aviation Safety Reporting System collects voluntarily submitted reports
on aviation safety incidents to facilitate research work aiming to reduce such
incidents. To effectively reduce these incidents, it is vital to accurately
identify why these incidents occurred. More precisely, given a set of possible
causes, or shaping factors, this task of cause identification involves
identifying all and only those shaping factors that are responsible for the
incidents described in a report. We investigate two approaches to cause
identification. Both approaches exploit information provided by a semantic
lexicon, which is automatically constructed via Thelen and Riloff's Basilisk
framework augmented with our linguistic and algorithmic modifications. The
first approach labels a report using a simple heuristic, which looks for the
words and phrases acquired during the semantic lexicon learning process in the
report. The second approach recasts cause identification as a text
classification problem, employing supervised and transductive text
classification algorithms to learn models from incident reports labeled with
shaping factors and using the models to label unseen reports. Our experiments
show that both the heuristic-based approach and the learning-based approach
(when given sufficient training data) outperform the baseline system
significantly.
| [
"['Muhammad Arshad Ul Abedin' 'Vincent Ng' 'Latifur Khan']",
"Muhammad Arshad Ul Abedin, Vincent Ng, Latifur Khan"
] |
cs.CV cs.LG stat.ML | null | 1401.4489 | null | null | http://arxiv.org/pdf/1401.4489v3 | 2014-11-14T02:38:09Z | 2014-01-17T23:21:56Z | An Analysis of Random Projections in Cancelable Biometrics | With increasing concerns about security, the need for highly secure physical
biometrics-based authentication systems utilizing \emph{cancelable biometric}
technologies is on the rise. Because the problem of cancelable template
generation deals with the trade-off between template security and matching
performance, many state-of-the-art algorithms successful in generating high
quality cancelable biometrics all have random projection as one of their early
processing steps. This paper therefore presents a formal analysis of why random
projection is an essential step in cancelable biometrics. By formally defining
the notion of an \textit{Independent Subspace Structure} for datasets, it can
be shown that random projection preserves the subspace structure of data
vectors generated from a union of independent linear subspaces. The bound on
the minimum number of random vectors required for this to hold is also derived
and is shown to depend logarithmically on the number of data samples, not only
in independent subspaces but in disjoint subspace settings as well. The
theoretical analysis presented is supported in detail with empirical results on
real-world face recognition datasets.
| [
"['Devansh Arpit' 'Ifeoma Nwogu' 'Gaurav Srivastava' 'Venu Govindaraju']",
"Devansh Arpit, Ifeoma Nwogu, Gaurav Srivastava, Venu Govindaraju"
] |
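The subspace-preservation claim is easy to probe numerically: sample data from a union of two independent low-dimensional subspaces, randomly project, and check that the rank structure survives. A quick illustration (all dimensions chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(7)
d, r, n = 200, 3, 50
B1, B2 = rng.normal(size=(d, r)), rng.normal(size=(d, r))   # two bases
X = np.hstack([B1 @ rng.normal(size=(r, n)), B2 @ rng.normal(size=(r, n))])

P = rng.normal(size=(30, d)) / np.sqrt(30)   # random projection matrix
Y = P @ X

# Ranks of each projected block and of their union: 3, 3 and 6, i.e.
# the independent-subspace structure survives the projection.
print(np.linalg.matrix_rank(Y[:, :n]), np.linalg.matrix_rank(Y[:, n:]),
      np.linalg.matrix_rank(Y))
```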
cs.IR cs.LG | 10.1007/s10618-015-0417-y | 1401.4529 | null | null | http://arxiv.org/abs/1401.4529v2 | 2015-05-19T11:50:22Z | 2014-01-18T11:13:26Z | General factorization framework for context-aware recommendations | Context-aware recommendation algorithms focus on refining recommendations by
considering additional information, available to the system. This topic has
gained a lot of attention recently. Among others, several factorization methods
were proposed to solve the problem, although most of them assume explicit
feedback which strongly limits their real-world applicability. While these
algorithms apply various loss functions and optimization strategies, the
preference modeling under context is less explored due to the lack of tools
allowing for easy experimentation with various models. As context dimensions
are introduced beyond users and items, the space of possible preference models
and the importance of proper modeling largely increases.
In this paper we propose a General Factorization Framework (GFF), a single
flexible algorithm that takes the preference model as an input and computes
latent feature matrices for the input dimensions. GFF allows us to easily
experiment with various linear models on any context-aware recommendation task,
be it explicit or implicit feedback based. Its scaling properties make it
usable under real-life circumstances as well.
We demonstrate the framework's potential by exploring various preference
models on a 4-dimensional context-aware problem with contexts that are
available for almost any real-life dataset. We show in our experiments --
performed on five real life, implicit feedback datasets -- that proper
preference modelling significantly increases recommendation accuracy, and
previously unused models outperform the traditional ones. Novel models in GFF
also outperform state-of-the-art factorization algorithms.
We also extend the method to be fully compliant to the Multidimensional
Dataspace Model, one of the most extensive data models of context-enriched
data. Extended GFF allows the seamless incorporation of information into the
fac[truncated]
| [
"['Balázs Hidasi' 'Domonkos Tikk']",
"Bal\\'azs Hidasi, Domonkos Tikk"
] |
cs.LG stat.ML | null | 1401.4566 | null | null | http://arxiv.org/pdf/1401.4566v2 | 2014-02-08T05:02:49Z | 2014-01-18T17:07:38Z | Excess Risk Bounds for Exponentially Concave Losses | The overarching goal of this paper is to derive excess risk bounds for
learning from exp-concave loss functions in passive and sequential learning
settings. Exp-concave loss functions encompass several fundamental problems in
machine learning such as squared loss in linear regression, logistic loss in
classification, and negative logarithm loss in portfolio management. In the batch
setting, we obtain sharp bounds on the performance of empirical risk
minimization performed in a linear hypothesis space and with respect to the
exp-concave loss functions. We also extend the results to the online setting
where the learner receives the training examples in a sequential manner. We
propose an online learning algorithm that is a properly modified version of
the online Newton method to obtain sharp risk bounds. Under an additional mild
assumption on the loss function, we show that in both settings we are able to
achieve an excess risk bound of $O(d\log n/n)$ that holds with a high
probability.
| [
"['Mehrdad Mahdavi' 'Rong Jin']",
"Mehrdad Mahdavi and Rong Jin"
] |
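For concreteness, here is a minimal online Newton step of the kind the paper modifies, run on online linear regression with squared loss (exp-concave on bounded domains). The parameter γ is illustrative, and the projection onto the feasible set required by the full algorithm is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
d, T, gamma = 5, 2000, 0.5
wstar = rng.normal(size=d)
w = np.zeros(d)
A = np.eye(d)                       # running second-moment matrix

for t in range(T):
    x = rng.normal(size=d)
    y = wstar @ x + 0.1 * rng.normal()
    g = (w @ x - y) * x             # gradient of the squared loss
    A += np.outer(g, g)             # rank-one curvature update
    w -= (1.0 / gamma) * np.linalg.solve(A, g)

print("estimation error:", np.linalg.norm(w - wstar))
```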
cs.CE cs.LG | null | 1401.4589 | null | null | http://arxiv.org/pdf/1401.4589v1 | 2014-01-18T21:02:32Z | 2014-01-18T21:02:32Z | miRNA and Gene Expression based Cancer Classification using Self-
Learning and Co-Training Approaches | miRNA and gene expression profiles have been proved useful for classifying
cancer samples. Efficient classifiers have been recently sought and developed.
A number of attempts to classify cancer samples using miRNA/gene expression
profiles are known in the literature. However, semi-supervised learning models
have recently been used in bioinformatics to exploit the huge corpora of
publicly available data. Using both labeled and unlabeled sets to train sample
classifiers has not been previously considered when gene and miRNA expression
sets are used. Moreover, there is a motivation to integrate both
miRNA and gene expression for a semi-supervised cancer classification as that
provides more information on the characteristics of cancer samples. In this
paper, two semi-supervised machine learning approaches, namely self-learning
and co-training, are adapted to enhance the quality of cancer sample
classification. These approaches exploit the huge public corpora to enrich the
training data. In self-learning, miRNA and gene based classifiers are enhanced
independently. While in co-training, both miRNA and gene expression profiles
are used simultaneously to provide different views of cancer samples. To our
knowledge, it is the first attempt to apply these learning approaches to cancer
classification. The approaches were evaluated using breast cancer,
hepatocellular carcinoma (HCC) and lung cancer expression sets. Results show up
to 20% improvement in F1-measure over Random Forests and SVM classifiers.
Co-training also outperforms the Low Density Separation (LDS) approach by around
25% improvement in F1-measure in breast cancer.
| [
"Rania Ibrahim, Noha A. Yousri, Mohamed A. Ismail, Nagwa M. El-Makky",
"['Rania Ibrahim' 'Noha A. Yousri' 'Mohamed A. Ismail' 'Nagwa M. El-Makky']"
] |
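A generic self-learning loop of the kind adapted in the paper, with scikit-learn's LogisticRegression as an assumed stand-in base classifier and synthetic data (the paper applies the scheme to miRNA/gene-expression profiles and pairs two such views in co-training):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(Xl, yl, Xu, rounds=5, thresh=0.95):
    """Repeatedly pseudo-label the unlabeled pool and absorb confident hits."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(Xl, yl)
        if len(Xu) == 0:
            break
        proba = clf.predict_proba(Xu)
        keep = proba.max(axis=1) >= thresh        # confident predictions
        if not keep.any():
            break
        Xl = np.vstack([Xl, Xu[keep]])
        yl = np.concatenate([yl, proba[keep].argmax(axis=1)])
        Xu = Xu[~keep]                            # shrink unlabeled pool
    return clf

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(-2, 1, (60, 5)), rng.normal(2, 1, (60, 5))])
y = np.repeat([0, 1], 60)
lab, unl = np.r_[0:10, 60:70], np.r_[10:60, 70:120]
model = self_train(X[lab], y[lab], X[unl])
print("accuracy on all samples:", model.score(X, y))
```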
cs.AI cs.LG | 10.1613/jair.3401 | 1401.4590 | null | null | http://arxiv.org/abs/1401.4590v1 | 2014-01-18T21:03:23Z | 2014-01-18T21:03:23Z | Combining Evaluation Metrics via the Unanimous Improvement Ratio and its
Application to Clustering Tasks | Many Artificial Intelligence tasks cannot be evaluated with a single quality
criterion and some sort of weighted combination is needed to provide system
rankings. A problem of weighted combination measures is that slight changes in
the relative weights may produce substantial changes in the system rankings.
This paper introduces the Unanimous Improvement Ratio (UIR), a measure that
complements standard metric combination criteria (such as van Rijsbergen's
F-measure) and indicates how robust the measured differences are to changes in
the relative weights of the individual metrics. UIR is meant to elucidate
whether a perceived difference between two systems is an artifact of how
individual metrics are weighted.
Besides discussing the theoretical foundations of UIR, this paper presents
empirical results that confirm the validity and usefulness of the metric for
the Text Clustering problem, where there is a tradeoff between precision and
recall based metrics and results are particularly sensitive to the weighting
scheme used to combine them. Remarkably, our experiments show that UIR can be
used as a predictor of how well differences between systems measured on a given
test bed will also hold in a different test bed.
| [
"Enrique Amig\\'o, Julio Gonzalo, Javier Artiles, Felisa Verdejo",
"['Enrique Amigó' 'Julio Gonzalo' 'Javier Artiles' 'Felisa Verdejo']"
] |
cs.CR cs.DB cs.LG | null | 1401.4872 | null | null | http://arxiv.org/pdf/1401.4872v1 | 2014-01-20T11:58:23Z | 2014-01-20T11:58:23Z | Classification of IDS Alerts with Data Mining Techniques | A data mining technique to reduce the amount of false alerts within an IDS
system is proposed. The new technique achieves an accuracy of 99%, compared to
97% for current systems.
| [
"['Hany Nashat Gabra' 'Ayman Mohammad Bahaa-Eldin' 'Huda Korashy']",
"Hany Nashat Gabra, Ayman Mohammad Bahaa-Eldin, Huda Korashy"
] |
cs.LG | null | 1401.5136 | null | null | http://arxiv.org/pdf/1401.5136v1 | 2014-01-21T01:16:44Z | 2014-01-21T01:16:44Z | A Unifying Framework for Typical Multi-Task Multiple Kernel Learning
Problems | Over the past few years, Multi-Kernel Learning (MKL) has received significant
attention among data-driven feature selection techniques in the context of
kernel-based learning. MKL formulations have been devised and solved for a
broad spectrum of machine learning problems, including Multi-Task Learning
(MTL). Solving different MKL formulations usually involves designing algorithms
that are tailored to the problem at hand, which is, typically, a non-trivial
accomplishment.
In this paper we present a general Multi-Task Multi-Kernel Learning
(Multi-Task MKL) framework that subsumes well-known Multi-Task MKL
formulations, as well as several important MKL approaches on single-task
problems. We then derive a simple algorithm that can solve the unifying
framework. To demonstrate the flexibility of the proposed framework, we
formulate a new learning problem, namely Partially-Shared Common Space (PSCS)
Multi-Task MKL, and demonstrate its merits through experimentation.
| [
"Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos",
"['Cong Li' 'Michael Georgiopoulos' 'Georgios C. Anagnostopoulos']"
] |