categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.SI stat.ML | null | 1310.1545 | null | null | http://arxiv.org/pdf/1310.1545v1 | 2013-10-06T05:47:50Z | 2013-10-06T05:47:50Z | Learning Hidden Structures with Relational Models by Adequately
Involving Rich Information in A Network | Effectively modelling hidden structures in a network is very practical but
theoretically challenging. Existing relational models only involve very limited
information, namely the binary directional link data, embedded in a network to
learn hidden networking structures. They miss other rich and meaningful
information (e.g., various attributes of entities and more granular information
than binary elements such as "like" or "dislike") that plays a critical
role in forming and understanding relations in a network. In this work, we
propose an informative relational model (InfRM) framework to adequately involve
rich information and its granularity in a network, including metadata
information about each entity and various forms of link data. Firstly, an
effective metadata information incorporation method is employed on the prior
information from relational models MMSB and LFRM. This is to encourage the
entities with similar metadata information to have similar hidden structures.
Secondly, we propose various solutions to cater for alternative forms of link
data. Substantial efforts have been made towards modelling appropriateness and
efficiency, for example, using conjugate priors. We evaluate our framework and
its inference algorithms in different datasets, which shows the generality and
effectiveness of our models in capturing implicit structures in networks.
| [
"['Xuhui Fan' 'Richard Yi Da Xu' 'Longbing Cao' 'Yin Song']",
"Xuhui Fan, Richard Yi Da Xu, Longbing Cao, Yin Song"
] |
cs.LG cs.CE | null | 1310.1659 | null | null | http://arxiv.org/pdf/1310.1659v1 | 2013-10-07T02:26:45Z | 2013-10-07T02:26:45Z | MINT: Mutual Information based Transductive Feature Selection for
Genetic Trait Prediction | Whole genome prediction of complex phenotypic traits using high-density
genotyping arrays has attracted a great deal of attention, as it is relevant to
the fields of plant and animal breeding and genetic epidemiology. As the number
of genotypes is generally much bigger than the number of samples, predictive
models suffer from the curse-of-dimensionality. The curse-of-dimensionality
problem not only affects the computational efficiency of a particular genomic
selection method, but can also lead to poor performance, mainly due to
correlation among markers. In this work we propose the first transductive
feature selection method based on the MRMR (Max-Relevance and Min-Redundancy)
criterion, which we call MINT. We apply MINT to genetic trait prediction
problems and show that in general MINT is a better feature selection method
than the state-of-the-art inductive method mRMR.
| [
"['Dan He' 'Irina Rish' 'David Haws' 'Simon Teyssedre' 'Zivan Karaman'\n 'Laxmi Parida']",
"Dan He, Irina Rish, David Haws, Simon Teyssedre, Zivan Karaman, Laxmi\n Parida"
] |
stat.ML cs.LG | null | 1310.1757 | null | null | http://arxiv.org/pdf/1310.1757v2 | 2014-01-11T17:13:56Z | 2013-10-07T12:42:41Z | A Deep and Tractable Density Estimator | The Neural Autoregressive Distribution Estimator (NADE) and its real-valued
version RNADE are competitive density models of multidimensional data across a
variety of domains. These models use a fixed, arbitrary ordering of the data
dimensions. One can easily condition on variables at the beginning of the
ordering, and marginalize out variables at the end of the ordering; however,
other inference tasks require approximate inference. In this work we introduce
an efficient procedure to simultaneously train a NADE model for each possible
ordering of the variables, by sharing parameters across all these models. We
can thus use the most convenient model for each inference task at hand, and
ensembles of such models with different orderings are immediately available.
Moreover, unlike the original NADE, our training procedure scales to deep
models. Empirically, ensembles of Deep NADE models obtain state-of-the-art
density estimation performance.
| [
"Benigno Uria, Iain Murray, Hugo Larochelle",
"['Benigno Uria' 'Iain Murray' 'Hugo Larochelle']"
] |
cs.LG math.OC stat.ML | 10.1109/ICMLA.2013.72 | 1310.1840 | null | null | http://arxiv.org/abs/1310.1840v1 | 2013-10-07T16:04:28Z | 2013-10-07T16:04:28Z | Parallel coordinate descent for the Adaboost problem | We design a randomised parallel version of Adaboost based on previous studies
on parallel coordinate descent. The algorithm uses the fact that the logarithm
of the exponential loss is a function with coordinate-wise Lipschitz continuous
gradient, in order to define the step lengths. We provide the proof of
convergence for this randomised Adaboost algorithm and a theoretical
parallelisation speedup factor. We finally provide numerical examples on
learning problems of various sizes that show that the algorithm is competitive
with concurrent approaches, especially for large scale problems.
| [
"Olivier Fercoq",
"['Olivier Fercoq']"
] |
cs.LG stat.ML | null | 1310.1934 | null | null | http://arxiv.org/pdf/1310.1934v1 | 2013-10-07T20:05:52Z | 2013-10-07T20:05:52Z | Discriminative Features via Generalized Eigenvectors | Representing examples in a way that is compatible with the underlying
classifier can greatly enhance the performance of a learning system. In this
paper we investigate scalable techniques for inducing discriminative features
by taking advantage of simple second order structure in the data. We focus on
multiclass classification and show that features extracted from the generalized
eigenvectors of the class conditional second moments lead to classifiers with
excellent empirical performance. Moreover, these features have attractive
theoretical properties, such as inducing representations that are invariant to
linear transformations of the input. We evaluate classifiers built from these
features on three different tasks, obtaining state of the art results.
| [
"['Nikos Karampatziakis' 'Paul Mineiro']",
"Nikos Karampatziakis, Paul Mineiro"
] |
cs.AI cs.LG stat.ML | null | 1310.1947 | null | null | http://arxiv.org/pdf/1310.1947v1 | 2013-10-07T20:43:16Z | 2013-10-07T20:43:16Z | Bayesian Optimization With Censored Response Data | Bayesian optimization (BO) aims to minimize a given blackbox function using a
model that is updated whenever new evidence about the function becomes
available. Here, we address the problem of BO under partially right-censored
response data, where in some evaluations we only obtain a lower bound on the
function value. The ability to handle such response data allows us to
adaptively censor costly function evaluations in minimization problems where
the cost of a function evaluation corresponds to the function value. One
important application giving rise to such censored data is the
runtime-minimizing variant of the algorithm configuration problem: finding
settings of a given parametric algorithm that minimize the runtime required for
solving problem instances from a given distribution. We demonstrate that
terminating slow algorithm runs prematurely and handling the resulting
right-censored observations can substantially improve the state of the art in
model-based algorithm configuration.
| [
"Frank Hutter and Holger Hoos and Kevin Leyton-Brown",
"['Frank Hutter' 'Holger Hoos' 'Kevin Leyton-Brown']"
] |
cs.LG stat.ML | null | 1310.1949 | null | null | http://arxiv.org/pdf/1310.1949v2 | 2013-10-21T15:18:37Z | 2013-10-07T20:48:58Z | Least Squares Revisited: Scalable Approaches for Multi-class Prediction | This work provides simple algorithms for multi-class (and multi-label)
prediction in settings where both the number of examples n and the data
dimension d are relatively large. These robust and parameter-free algorithms
are essentially iterative least-squares updates and very versatile both in
theory and in practice. On the theoretical front, we present several variants
with convergence guarantees. Owing to their effective use of second-order
structure, these algorithms are substantially better than first-order methods
in many practical scenarios. On the empirical side, we present a scalable
stagewise variant of our approach, which achieves dramatic computational
speedups over popular optimization packages such as Liblinear and Vowpal Wabbit
on standard datasets (MNIST and CIFAR-10), while attaining state-of-the-art
accuracies.
| [
"['Alekh Agarwal' 'Sham M. Kakade' 'Nikos Karampatziakis' 'Le Song'\n 'Gregory Valiant']",
"Alekh Agarwal, Sham M. Kakade, Nikos Karampatziakis, Le Song, Gregory\n Valiant"
] |
cs.LG | null | 1310.2049 | null | null | http://arxiv.org/pdf/1310.2049v1 | 2013-10-08T09:03:28Z | 2013-10-08T09:03:28Z | Fast Multi-Instance Multi-Label Learning | In many real-world tasks, particularly those involving data objects with
complicated semantics such as images and texts, one object can be represented
by multiple instances and simultaneously be associated with multiple labels.
Such tasks can be formulated as multi-instance multi-label learning (MIML)
problems, and have been extensively studied during the past few years. Existing
MIML approaches have been found useful in many applications; however, most of
them can only handle moderate-sized data. To efficiently handle large data
sets, in this paper we propose the MIMLfast approach, which first constructs a
low-dimensional subspace shared by all labels, and then trains label specific
linear models to optimize approximated ranking loss via stochastic gradient
descent. Although the MIML problem is complicated, MIMLfast is able to achieve
excellent performance by exploiting label relations with shared space and
discovering sub-concepts for complicated labels. Experiments show that the
performance of MIMLfast is highly competitive with state-of-the-art techniques,
whereas its time cost is much lower; in particular, on a data set with 20K bags
and 180K instances, MIMLfast is more than 100 times faster than existing MIML
approaches. On a larger data set where none of the existing approaches can return
results in 24 hours, MIMLfast takes only 12 minutes. Moreover, our approach is
able to identify the most representative instance for each label, thus
providing a chance to understand the relation between input patterns and output
label semantics.
| [
"['Sheng-Jun Huang' 'Zhi-Hua Zhou']",
"Sheng-Jun Huang and Zhi-Hua Zhou"
] |
stat.ML cs.DC cs.LG math.OC | null | 1310.2059 | null | null | http://arxiv.org/pdf/1310.2059v1 | 2013-10-08T09:31:27Z | 2013-10-08T09:31:27Z | Distributed Coordinate Descent Method for Learning with Big Data | In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method
for solving loss minimization problems with big data. We initially partition
the coordinates (features) and assign each partition to a different node of a
cluster. At every iteration, each node picks a random subset of the coordinates
from those it owns, independently from the other computers, and in parallel
computes and applies updates to the selected coordinates based on a simple
closed-form formula. We give bounds on the number of iterations sufficient to
approximately solve the problem with high probability, and show how it depends
on the data and on the partitioning. We perform numerical experiments with a
LASSO instance described by a 3TB matrix.
| [
"Peter Richt\\'arik and Martin Tak\\'a\\v{c}",
"['Peter Richtárik' 'Martin Takáč']"
] |
cs.CY cs.LG | 10.5121/ijdkp.2013.3504 | 1310.2071 | null | null | http://arxiv.org/abs/1310.2071v1 | 2013-10-08T10:12:15Z | 2013-10-08T10:12:15Z | Predicting Students' Performance Using ID3 And C4.5 Classification
Algorithms | An educational institution needs to have an approximate prior knowledge of
enrolled students to predict their performance in future academics. This helps
them to identify promising students and also provides them an opportunity to
pay attention to and improve those who would probably get lower grades. As a
solution, we have developed a system which can predict the performance of
students from their previous performances using data mining
techniques for classification. We have analyzed the data set containing
information about students, such as gender, marks scored in the board
examinations of classes X and XII, marks and rank in entrance examinations and
results in first year of the previous batch of students. By applying the ID3
(Iterative Dichotomiser 3) and C4.5 classification algorithms on this data, we
have predicted the general and individual performance of freshly admitted
students in future examinations.
| [
"['Kalpesh Adhatrao' 'Aditya Gaykar' 'Amiraj Dhawan' 'Rohit Jha'\n 'Vipul Honrao']",
"Kalpesh Adhatrao, Aditya Gaykar, Amiraj Dhawan, Rohit Jha and Vipul\n Honrao"
] |
stat.ML cs.LG math.OC | 10.1137/130940670 | 1310.2273 | null | null | http://arxiv.org/abs/1310.2273v2 | 2014-09-16T09:11:30Z | 2013-10-08T20:30:38Z | Semidefinite Programming Based Preconditioning for More Robust
Near-Separable Nonnegative Matrix Factorization | Nonnegative matrix factorization (NMF) under the separability assumption can
provably be solved efficiently, even in the presence of noise, and has been
shown to be a powerful technique in document classification and hyperspectral
unmixing. This problem is referred to as near-separable NMF and requires that
there exists a cone spanned by a small subset of the columns of the input
nonnegative matrix approximately containing all columns. In this paper, we
propose a preconditioning based on semidefinite programming making the input
matrix well-conditioned. This in turn can significantly improve the performance
of near-separable NMF algorithms, which is illustrated on the popular successive
projection algorithm (SPA). The new preconditioned SPA is provably more robust
to noise, and outperforms SPA on several synthetic data sets. We also show how
an active-set method allows us to apply the preconditioning to large-scale
real-world hyperspectral images.
| [
"Nicolas Gillis and Stephen A. Vavasis",
"['Nicolas Gillis' 'Stephen A. Vavasis']"
] |
cs.LG cs.CL stat.AP stat.ML | null | 1310.2408 | null | null | http://arxiv.org/pdf/1310.2408v1 | 2013-10-09T09:23:10Z | 2013-10-09T09:23:10Z | Improved Bayesian Logistic Supervised Topic Models with Data
Augmentation | Supervised topic models with a logistic likelihood have two issues that
potentially limit their practical use: 1) response variables are usually
over-weighted by document word counts; and 2) existing variational inference
methods make strict mean-field assumptions. We address these issues by: 1)
introducing a regularization constant to better balance the two parts based on
an optimization formulation of Bayesian inference; and 2) developing a simple
Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and
collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm
has analytical forms of each conditional distribution without making any
restricting assumptions and can be easily parallelized. Empirical results
demonstrate significant improvements on prediction performance and time
efficiency.
| [
"['Jun Zhu' 'Xun Zheng' 'Bo Zhang']",
"Jun Zhu, Xun Zheng, Bo Zhang"
] |
cs.LG cs.IR stat.ML | null | 1310.2409 | null | null | http://arxiv.org/pdf/1310.2409v1 | 2013-10-09T09:32:56Z | 2013-10-09T09:32:56Z | Discriminative Relational Topic Models | Many scientific and engineering fields involve analyzing network data. For
document networks, relational topic models (RTMs) provide a probabilistic
generative process to describe both the link structure and document contents,
and they have shown promise on predicting network structures and discovering
latent topic representations. However, existing RTMs are limited by both
restricted model expressiveness and an inability to deal with imbalanced
network data. To expand the scope and improve the inference accuracy of RTMs,
this paper presents three extensions: 1) unlike the common link likelihood with
a diagonal weight matrix that allows same-topic interactions only, we
generalize it to use a full weight matrix that captures all pairwise topic
interactions and is applicable to asymmetric networks; 2) instead of doing
standard Bayesian inference, we perform regularized Bayesian inference
(RegBayes) with a regularization parameter to deal with the imbalanced link
structure issue in common real networks and improve the discriminative ability
of learned latent representations; and 3) instead of doing variational
approximation with strict mean-field assumptions, we present collapsed Gibbs
sampling algorithms for the generalized relational topic models by exploring
data augmentation without making restricting assumptions. Under the generic
RegBayes framework, we carefully investigate two popular discriminative loss
functions, namely, the logistic log-loss and the max-margin hinge loss.
Experimental results on several real network datasets demonstrate the
significance of these extensions on improving the prediction performance, and
the time efficiency can be dramatically improved with a simple fast
approximation method.
| [
"Ning Chen, Jun Zhu, Fei Xia, Bo Zhang",
"['Ning Chen' 'Jun Zhu' 'Fei Xia' 'Bo Zhang']"
] |
stat.ML cs.LG math.PR | null | 1310.2451 | null | null | http://arxiv.org/pdf/1310.2451v2 | 2016-12-14T13:45:18Z | 2013-10-09T12:18:29Z | M-Power Regularized Least Squares Regression | Regularization is used to find a solution that both fits the data and is
sufficiently smooth, and thereby is very effective for designing and refining
learning algorithms. But the influence of its exponent remains poorly
understood. In particular, it is unclear how the exponent of the reproducing
kernel Hilbert space~(RKHS) regularization term affects the accuracy and the
efficiency of kernel-based learning algorithms. Here we consider regularized
least squares regression (RLSR) with an RKHS regularization raised to the power
of m, where m is a variable real exponent. We design an efficient algorithm for
solving the associated minimization problem, we provide a theoretical analysis
of its stability, and we compare it, with respect to computational
complexity, speed of convergence and prediction accuracy, to the classical
kernel ridge regression algorithm where the regularization exponent m is fixed
at 2. Our results show that the m-power RLSR problem can be solved efficiently,
and support the suggestion that one can use a regularization term that grows
significantly slower than the standard quadratic growth in the RKHS norm.
| [
"Julien Audiffren (LIF), Hachem Kadri (LIF)",
"['Julien Audiffren' 'Hachem Kadri']"
] |
stat.ML cs.AI cs.LG | null | 1310.2627 | null | null | http://arxiv.org/pdf/1310.2627v2 | 2015-11-07T05:11:48Z | 2013-10-09T20:39:08Z | A Sparse and Adaptive Prior for Time-Dependent Model Parameters | We consider the scenario where the parameters of a probabilistic model are
expected to vary over time. We construct a novel prior distribution that
promotes sparsity and adapts the strength of correlation between parameters at
successive timesteps, based on the data. We derive approximate variational
inference procedures for learning and prediction with this prior. We test the
approach on two tasks: forecasting financial quantities from relevant text, and
modeling language contingent on time-varying financial measurements.
| [
"Dani Yogatama and Bryan R. Routledge and Noah A. Smith",
"['Dani Yogatama' 'Bryan R. Routledge' 'Noah A. Smith']"
] |
cs.LG | null | 1310.2646 | null | null | http://arxiv.org/pdf/1310.2646v1 | 2013-10-09T22:24:28Z | 2013-10-09T22:24:28Z | Localized Iterative Methods for Interpolation in Graph Structured Data | In this paper, we present two localized graph filtering based methods for
interpolating graph signals defined on the vertices of arbitrary graphs from
only a partial set of samples. The first method is an extension of previous
work on reconstructing bandlimited graph signals from partially observed
samples. The iterative graph filtering approach very closely approximates the
solution proposed in that work, while being computationally more efficient.
As an alternative, we propose a regularization based framework in which we
define the cost of reconstruction to be a combination of smoothness of the
graph signal and the reconstruction error with respect to the known samples,
and find solutions that minimize this cost. We provide both a closed form
solution and a computationally efficient iterative solution of the optimization
problem. The experimental results on the recommendation system datasets
demonstrate the effectiveness of the proposed methods.
| [
"['Sunil K. Narang' 'Akshay Gadde' 'Eduard Sanou' 'Antonio Ortega']",
"Sunil K. Narang, Akshay Gadde, Eduard Sanou and Antonio Ortega"
] |
physics.data-an cs.LG physics.comp-ph | null | 1310.2700 | null | null | http://arxiv.org/pdf/1310.2700v2 | 2013-10-17T21:06:22Z | 2013-10-10T04:00:03Z | Analyzing Big Data with Dynamic Quantum Clustering | How does one search for a needle in a multi-dimensional haystack without
knowing what a needle is and without knowing if there is one in the haystack?
This kind of problem requires a paradigm shift - away from hypothesis driven
searches of the data - towards a methodology that lets the data speak for
itself. Dynamic Quantum Clustering (DQC) is such a methodology. DQC is a
powerful visual method that works with big, high-dimensional data. It exploits
variations of the density of the data (in feature space) and unearths subsets
of the data that exhibit correlations among all the measured variables. The
outcome of a DQC analysis is a movie that shows how and why sets of data-points
are eventually classified as members of simple clusters or as members of - what
we call - extended structures. This allows DQC to be successfully used in a
non-conventional exploratory mode where one searches data for unexpected
information without the need to model the data. We show how this works for big,
complex, real-world datasets that come from five distinct fields: i.e., x-ray
nano-chemistry, condensed matter, biology, seismology and finance. These
studies show how DQC excels at uncovering unexpected, small - but meaningful -
subsets of the data that contain important information. We also establish an
important new result: namely, that big, complex datasets often contain
interesting structures that will be missed by many conventional clustering
techniques. Experience shows that these structures appear frequently enough
that it is crucial to know they can exist, and that when they do, they encode
important hidden information. In short, we not only demonstrate that DQC can be
flexibly applied to datasets that present significantly different challenges,
we also show how a simple analysis can be used to look for the needle in the
haystack, determine what it is, and find what this means.
| [
"['M. Weinstein' 'F. Meirer' 'A. Hume' 'Ph. Sciau' 'G. Shaked'\n 'R. Hofstetter' 'E. Persi' 'A. Mehta' 'D. Horn']",
"M. Weinstein, F. Meirer, A. Hume, Ph. Sciau, G. Shaked, R. Hofstetter,\n E. Persi, A. Mehta, and D. Horn"
] |
cs.AI cs.DL cs.LG cs.LO | null | 1310.2797 | null | null | http://arxiv.org/pdf/1310.2797v1 | 2013-10-10T12:53:04Z | 2013-10-10T12:53:04Z | Lemma Mining over HOL Light | Large formal mathematical libraries consist of millions of atomic inference
steps that give rise to a corresponding number of proved statements (lemmas).
Analogously to the informal mathematical practice, only a tiny fraction of such
statements is named and re-used in later proofs by formal mathematicians. In
this work, we suggest and implement criteria defining the estimated usefulness
of the HOL Light lemmas for proving further theorems. We use these criteria to
mine the large inference graph of all lemmas in the core HOL Light library,
adding thousands of the best lemmas to the pool of named statements that can be
re-used in later proofs. The usefulness of the new lemmas is then evaluated by
comparing the performance of automated proving of the core HOL Light theorems
with and without such added lemmas.
| [
"['Cezary Kaliszyk' 'Josef Urban']",
"Cezary Kaliszyk and Josef Urban"
] |
cs.AI cs.DL cs.LG cs.LO cs.MS | 10.1007/s10817-015-9330-8 | 1310.2805 | null | null | http://arxiv.org/abs/1310.2805v1 | 2013-10-10T13:24:07Z | 2013-10-10T13:24:07Z | MizAR 40 for Mizar 40 | As a present to Mizar on its 40th anniversary, we develop an AI/ATP system
that in 30 seconds of real time on a 14-CPU machine automatically proves 40% of
the theorems in the latest official version of the Mizar Mathematical Library
(MML). This is a considerable improvement over previous performance of
large-theory AI/ATP methods measured on the whole MML. To achieve that, a large suite
of AI/ATP methods is employed and further developed. We implement the most
useful methods efficiently, to scale them to the 150000 formulas in MML. This
reduces the training times over the corpus to 1-3 seconds, allowing a simple
practical deployment of the methods in the online automated reasoning service
for the Mizar users (MizAR).
| [
"['Cezary Kaliszyk' 'Josef Urban']",
"Cezary Kaliszyk and Josef Urban"
] |
stat.ML cs.LG stat.CO stat.ME | null | 1310.2816 | null | null | http://arxiv.org/pdf/1310.2816v1 | 2013-10-10T13:47:40Z | 2013-10-10T13:47:40Z | Gibbs Max-margin Topic Models with Data Augmentation | Max-margin learning is a powerful approach to building classifiers and
structured output predictors. Recent work on max-margin supervised topic models
has successfully integrated it with Bayesian topic models to discover
discriminative latent semantic structures and make accurate predictions for
unseen testing data. However, the resulting learning problems are usually hard
to solve because of the non-smoothness of the margin loss. Existing approaches
to building max-margin supervised topic models rely on an iterative procedure
to solve multiple latent SVM subproblems with additional mean-field assumptions
on the desired posterior distributions. This paper presents an alternative
approach by defining a new max-margin loss. Namely, we present Gibbs max-margin
supervised topic models, a latent variable Gibbs classifier to discover hidden
topic representations for various tasks, including classification, regression
and multi-task learning. Gibbs max-margin supervised topic models minimize an
expected margin loss, which is an upper bound of the existing margin loss
derived from an expected prediction rule. By introducing augmented variables
and integrating out the Dirichlet variables analytically by conjugacy, we
develop simple Gibbs sampling algorithms with no restricting assumptions and no
need to solve SVM subproblems. Furthermore, each step of the
"augment-and-collapse" Gibbs sampling algorithms has an analytical conditional
distribution, from which samples can be easily drawn. Experimental results
demonstrate significant improvements on time efficiency. The classification
performance is also significantly improved over competitors on binary,
multi-class and multi-label classification tasks.
| [
"['Jun Zhu' 'Ning Chen' 'Hugh Perkins' 'Bo Zhang']",
"Jun Zhu, Ning Chen, Hugh Perkins, Bo Zhang"
] |
stat.ML cs.CV cs.LG math.ST stat.TH | 10.1109/TPAMI.2016.2544315 | 1310.2880 | null | null | http://arxiv.org/abs/1310.2880v7 | 2016-03-17T14:55:09Z | 2013-10-10T16:47:22Z | Feature Selection with Annealing for Computer Vision and Big Data
Learning | Many computer vision and medical imaging problems are faced with learning
from large-scale datasets, with millions of observations and features. In this
paper we propose a novel efficient learning scheme that tightens a sparsity
constraint by gradually removing variables based on a criterion and a schedule.
The attractive fact that the problem size keeps dropping throughout the
iterations makes it particularly suitable for big data learning. Our approach
applies generically to the optimization of any differentiable loss function,
and finds applications in regression, classification and ranking. The resultant
algorithms build variable screening into estimation and are extremely simple to
implement. We provide theoretical guarantees of convergence and selection
consistency. In addition, one dimensional piecewise linear response functions
are used to account for nonlinearity and a second order prior is imposed on
these functions to avoid overfitting. Experiments on real and synthetic data
show that the proposed method compares very well with other state of the art
methods in regression, classification and ranking while being computationally
very efficient and scalable.
| [
"['Adrian Barbu' 'Yiyuan She' 'Liangjing Ding' 'Gary Gramajo']",
"Adrian Barbu, Yiyuan She, Liangjing Ding, Gary Gramajo"
] |
stat.ME cs.LG stat.ML | null | 1310.2931 | null | null | http://arxiv.org/pdf/1310.2931v2 | 2014-11-01T01:48:35Z | 2013-10-10T19:57:45Z | Feedback Detection for Live Predictors | A predictor that is deployed in a live production system may perturb the
features it uses to make predictions. Such a feedback loop can occur, for
example, when a model that predicts a certain type of behavior ends up causing
the behavior it predicts, thus creating a self-fulfilling prophecy. In this
paper we analyze predictor feedback detection as a causal inference problem,
and introduce a local randomization scheme that can be used to detect
non-linear feedback in real-world problems. We conduct a pilot study for our
proposed methodology using a predictive system currently deployed as a part of
a search engine.
| [
"Stefan Wager, Nick Chamandy, Omkar Muralidharan, and Amir Najmi",
"['Stefan Wager' 'Nick Chamandy' 'Omkar Muralidharan' 'Amir Najmi']"
] |
cs.AI cs.LG | null | 1310.2955 | null | null | http://arxiv.org/pdf/1310.2955v1 | 2013-10-10T20:22:33Z | 2013-10-10T20:22:33Z | Spontaneous Analogy by Piggybacking on a Perceptual System | Most computational models of analogy assume they are given a delineated
source domain and often a specified target domain. These systems do not address
how analogs can be isolated from large domains and spontaneously retrieved from
long-term memory, a process we call spontaneous analogy. We present a system
that represents relational structures as feature bags. Using this
representation, our system leverages perceptual algorithms to automatically
create an ontology of relational structures and to efficiently retrieve analogs
for new relational structures from long-term memory. We provide a demonstration
of our approach that takes a set of unsegmented stories, constructs an ontology
of analogical schemas (corresponding to plot devices), and uses this ontology
to efficiently find analogs within new stories, yielding significant
time-savings over linear analog retrieval at a small accuracy cost.
| [
"['Marc Pickett' 'David W. Aha']",
"Marc Pickett and David W. Aha"
] |
cs.LG | null | 1310.2959 | null | null | http://arxiv.org/pdf/1310.2959v2 | 2014-02-27T21:19:41Z | 2013-10-10T20:30:06Z | Scaling Graph-based Semi Supervised Learning to Large Number of Labels
Using Count-Min Sketch | Graph-based Semi-supervised learning (SSL) algorithms have been successfully
used in a large number of applications. These methods classify initially
unlabeled nodes by propagating label information over the structure of the graph
starting from seed nodes. Graph-based SSL algorithms usually scale linearly
with the number of distinct labels (m), and require O(m) space on each node.
Unfortunately, there exist many applications of practical significance with
very large m over large graphs, demanding better space and time complexity. In
this paper, we propose MAD-SKETCH, a novel graph-based SSL algorithm which
compactly stores label distribution on each node using Count-min Sketch, a
randomized data structure. We present theoretical analysis showing that under
mild conditions, MAD-SKETCH can reduce space complexity at each node from O(m)
to O(log m), and achieve similar savings in time complexity as well. We support
our analysis through experiments on multiple real world datasets. We observe
that MAD-SKETCH achieves similar performance to existing state-of-the-art
graph-based SSL algorithms, while requiring a smaller memory footprint and at
the same time achieving up to 10x speedup. We find that MAD-SKETCH is able to
scale to datasets with one million labels, which is beyond the scope of
existing graph-based SSL algorithms.
| [
"['Partha Pratim Talukdar' 'William Cohen']",
"Partha Pratim Talukdar, William Cohen"
] |
cs.LG math.PR | null | 1310.2997 | null | null | http://arxiv.org/pdf/1310.2997v2 | 2013-11-19T07:13:05Z | 2013-10-11T02:01:53Z | Bandits with Switching Costs: T^{2/3} Regret | We study the adversarial multi-armed bandit problem in a setting where the
player incurs a unit cost each time he switches actions. We prove that the
player's $T$-round minimax regret in this setting is
$\widetilde{\Theta}(T^{2/3})$, thereby closing a fundamental gap in our
understanding of learning with bandit feedback. In the corresponding
full-information version of the problem, the minimax regret is known to grow at
a much slower rate of $\Theta(\sqrt{T})$. The difference between these two
rates provides the \emph{first} indication that learning with bandit feedback
can be significantly harder than learning with full-information feedback
(previous results only showed a different dependence on the number of actions,
but not on $T$.)
In addition to characterizing the inherent difficulty of the multi-armed
bandit problem with switching costs, our results also resolve several other
open problems in online learning. One direct implication is that learning with
bandit feedback against bounded-memory adaptive adversaries has a minimax
regret of $\widetilde{\Theta}(T^{2/3})$. Another implication is that the
minimax regret of online learning in adversarial Markov decision processes
(MDPs) is $\widetilde{\Theta}(T^{2/3})$. The key to all of our results is a new
randomized construction of a multi-scale random walk, which is of independent
interest and likely to prove useful in additional settings.
| [
"['Ofer Dekel' 'Jian Ding' 'Tomer Koren' 'Yuval Peres']",
"Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres"
] |
cs.LG cs.CL stat.ML | null | 1310.3099 | null | null | http://arxiv.org/pdf/1310.3099v2 | 2014-09-22T13:52:44Z | 2013-10-11T12:07:57Z | A Bayesian Network View on Acoustic Model-Based Techniques for Robust
Speech Recognition | This article provides a unifying Bayesian network view on various approaches
for acoustic model adaptation, missing feature, and uncertainty decoding that
are well-known in the literature of robust automatic speech recognition. The
representatives of these classes can often be deduced from a Bayesian network
that extends the conventional hidden Markov models used in speech recognition.
These extensions, in turn, can in many cases be motivated from an underlying
observation model that relates clean and distorted feature vectors. By
converting the observation models into a Bayesian network representation, we
formulate the corresponding compensation rules leading to a unified view on
known derivations as well as to new formulations for certain approaches. The
generic Bayesian perspective provided in this contribution thus highlights
structural differences and similarities between the analyzed approaches.
| [
"['Roland Maas' 'Christian Huemmer' 'Armin Sehr' 'Walter Kellermann']",
"Roland Maas, Christian Huemmer, Armin Sehr, Walter Kellermann"
] |
stat.ML cs.LG | 10.1109/ICMLA.2013.84 | 1310.3101 | null | null | http://arxiv.org/abs/1310.3101v1 | 2013-10-11T12:14:00Z | 2013-10-11T12:14:00Z | Deep Multiple Kernel Learning | Deep learning methods have predominantly been applied to large artificial
neural networks. Despite their state-of-the-art performance, these large
networks typically do not generalize well to datasets with limited sample
sizes. In this paper, we take a different approach by learning multiple layers
of kernels. We combine kernels at each layer and then optimize over an estimate
of the support vector machine leave-one-out error rather than the dual
objective function. Our experiments on a variety of datasets show that each
layer successively increases performance with only a few base kernels.
| [
"['Eric Strobl' 'Shyam Visweswaran']",
"Eric Strobl, Shyam Visweswaran"
] |
cs.IR cs.CL cs.LG | null | 1310.3333 | null | null | http://arxiv.org/pdf/1310.3333v1 | 2013-10-12T03:48:38Z | 2013-10-12T03:48:38Z | Visualizing Bags of Vectors | The motivation of this work is two-fold: a) to compare two different
modes of visualizing data that exist in a bag-of-vectors format, and b) to propose
a theoretical model that supports a new mode of visualizing data. Visualizing
high dimensional data can be achieved using Minimum Volume Embedding, but the
data has to exist in a format suitable for computing similarities while
preserving local distances. This paper compares the visualizations produced by two
methods of representing data and also proposes a new method, providing sample
visualizations for that method.
| [
"['Sriramkumar Balasubramanian' 'Raghuram Reddy Nagireddy']",
"Sriramkumar Balasubramanian and Raghuram Reddy Nagireddy"
] |
cs.NI cs.LG | null | 1310.3407 | null | null | http://arxiv.org/pdf/1310.3407v1 | 2013-10-12T17:20:41Z | 2013-10-12T17:20:41Z | Joint Indoor Localization and Radio Map Construction with Limited
Deployment Load | One major bottleneck in the practical implementation of received signal
strength (RSS) based indoor localization systems is the extensive deployment
efforts required to construct the radio maps through fingerprinting. In this
paper, we aim to design an indoor localization scheme that can be directly
employed without building a full fingerprinted radio map of the indoor
environment. By accumulating the information of localized RSSs, this scheme can
also simultaneously construct the radio map with limited calibration. To design
this scheme, we employ a source data set that possesses the same spatial
correlation as the RSSs in the indoor environment under study. The knowledge of
this data set is then transferred to a limited number of calibration
fingerprints and one or several RSS observations with unknown locations, in
order to perform direct localization of these observations using manifold
alignment. We test two different source data sets, namely a simulated radio
propagation map and the environment's plan coordinates. For moving users, we
exploit the correlation of their observations to improve the localization
accuracy. The online testing in two indoor environments shows that the plan
coordinates achieve better results than the simulated radio maps, and a
negligible degradation with 70-85% reduction in calibration load.
| [
"['Sameh Sorour' 'Yves Lostanlen' 'Shahrokh Valaee']",
"Sameh Sorour, Yves Lostanlen, Shahrokh Valaee"
] |
cs.SI cs.LG physics.soc-ph | null | 1310.3492 | null | null | http://arxiv.org/pdf/1310.3492v1 | 2013-10-13T16:35:00Z | 2013-10-13T16:35:00Z | Predicting Social Links for New Users across Aligned Heterogeneous
Social Networks | Online social networks have gained great success in recent years and many of
them involve multiple kinds of nodes and complex relationships. Among these
relationships, social links among users are of great importance. Many existing
link prediction methods focus on predicting social links that will appear in
the future among all users based upon a snapshot of the social network. In
real-world social networks, many new users are joining the service every
day. Predicting links for new users is more important. Different from
conventional link prediction problems, link prediction for new users is more
challenging due to the following reasons: (1) differences in information
distributions between new users and the existing active users (i.e., old
users); (2) lack of information from the new users in the network. We propose a
link prediction method called SCAN-PS (Supervised Cross Aligned Networks link
prediction with Personalized Sampling), to solve the link prediction problem
for new users with information transferred from both the existing active users
in the target network and other source networks through aligned accounts. We
propose a within-target-network personalized sampling method to process the
existing active users' information in order to accommodate the differences in
information distributions before the intra-network knowledge transfer. SCAN-PS
can also exploit information in other source networks, where the user accounts
are aligned with the target network. In this way, SCAN-PS could solve the cold
start problem when information about these new users is totally absent in the target
network.
| [
"Jiawei Zhang, Xiangnan Kong, Philip S. Yu",
"['Jiawei Zhang' 'Xiangnan Kong' 'Philip S. Yu']"
] |
cs.NA cs.LG stat.ML | null | 1310.3556 | null | null | http://arxiv.org/pdf/1310.3556v2 | 2013-12-14T12:13:32Z | 2013-10-14T03:49:02Z | Identifying Influential Entries in a Matrix | For any matrix A in R^(m x n) of rank \rho, we present a probability
distribution over the entries of A (the element-wise leverage scores of
equation (2)) that reveals the most influential entries in the matrix. From a
theoretical perspective, we prove that sampling at most s = O ((m + n) \rho^2
ln (m + n)) entries of the matrix (see eqn. (3) for the precise value of s)
with respect to these scores and solving the nuclear norm minimization problem
on the sampled entries, reconstructs A exactly. To the best of our knowledge,
these are the strongest theoretical guarantees on matrix completion without any
incoherence assumptions on the matrix A. From an experimental perspective, we
show that entries corresponding to high element-wise leverage scores reveal
structural properties of the data matrix that are of interest to domain
scientists.
| [
"['Abhisek Kundu' 'Srinivas Nambirajan' 'Petros Drineas']",
"Abhisek Kundu, Srinivas Nambirajan, Petros Drineas"
] |
cs.LG cs.CE | null | 1310.3567 | null | null | http://arxiv.org/pdf/1310.3567v3 | 2015-05-05T20:23:49Z | 2013-10-14T06:00:31Z | An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI
Combustion Phasing in Real-Time | Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine
combustion timing predictions must contend with non-linear chemistry,
non-linear physics, period doubling bifurcation(s), turbulent mixing, model
parameters that can drift day-to-day, and air-fuel mixture state information
that cannot typically be resolved on a cycle-to-cycle basis, especially during
transients. In previous work, an abstract cycle-to-cycle mapping function
coupled with $\epsilon$-Support Vector Regression was shown to predict
experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. The main
limitation of the previous approach was that a partially acausal, randomly
sampled training dataset was used to train proof of concept offline
predictions. The objective of this paper is to address this limitation by
proposing a new online adaptive Extreme Learning Machine (ELM) extension named
Weighted Ring-ELM. This extension enables fully causal combustion timing
predictions at randomly chosen engine set points, and is shown to achieve
results that are as good as or better than the previous offline method. The
broader objective of this approach is to enable a new class of real-time model
predictive control strategies for high variability HCCI and, ultimately, to
bring HCCI's low engine-out NOx and reduced CO2 emissions to production
engines.
| [
"['Adam Vaughan' 'Stanislav V. Bohac']",
"Adam Vaughan and Stanislav V. Bohac"
] |
cs.LG stat.AP | null | 1310.3607 | null | null | http://arxiv.org/pdf/1310.3607v1 | 2013-10-14T09:42:54Z | 2013-10-14T09:42:54Z | Predicting college basketball match outcomes using machine learning
techniques: some results and lessons learned | Most existing work on predicting NCAAB matches has been developed in a
statistical context. Trusting the capabilities of ML techniques, particularly
classification learners, to uncover the importance of features and learn their
relationships, we evaluated a number of different paradigms on this task. In
this paper, we summarize our work, pointing out that attributes seem to be more
important than models, and that there seems to be an upper limit to predictive
quality.
| [
"Albrecht Zimmermann, Sruthi Moorthy and Zifan Shi",
"['Albrecht Zimmermann' 'Sruthi Moorthy' 'Zifan Shi']"
] |
cs.DS cs.DC cs.LG cs.LO | null | 1310.3609 | null | null | http://arxiv.org/pdf/1310.3609v4 | 2014-09-17T11:07:09Z | 2013-10-14T09:50:49Z | Scalable Verification of Markov Decision Processes | Markov decision processes (MDP) are useful to model concurrent process
optimisation problems, but verifying them with numerical methods is often
intractable. Existing approximative approaches do not scale well and are
limited to memoryless schedulers. Here we present the basis of scalable
verification for MDPs, using an O(1) memory representation of
history-dependent schedulers. We thus facilitate scalable learning techniques
and the use of massively parallel verification.
| [
"Axel Legay, Sean Sedwards and Louis-Marie Traonouez",
"['Axel Legay' 'Sean Sedwards' 'Louis-Marie Traonouez']"
] |
stat.ML cs.LG cs.SY | null | 1310.3697 | null | null | http://arxiv.org/pdf/1310.3697v1 | 2013-10-14T14:36:22Z | 2013-10-14T14:36:22Z | Variance Adjusted Actor Critic Algorithms | We present an actor-critic framework for MDPs where the objective is the
variance-adjusted expected return. Our critic uses linear function
approximation, and we extend the concept of compatible features to the
variance-adjusted setting. We present an episodic actor-critic algorithm and
show that it converges almost surely to a locally optimal point of the
objective function.
| [
"Aviv Tamar, Shie Mannor",
"['Aviv Tamar' 'Shie Mannor']"
] |
stat.ML cs.LG stat.CO | null | 1310.3892 | null | null | http://arxiv.org/pdf/1310.3892v3 | 2014-05-05T13:10:03Z | 2013-10-15T01:27:14Z | Ridge Fusion in Statistical Learning | We propose a penalized likelihood method to jointly estimate multiple
precision matrices for use in quadratic discriminant analysis and model based
clustering. A ridge penalty and a ridge fusion penalty are used to introduce
shrinkage and promote similarity between precision matrix estimates. Block-wise
coordinate descent is used for optimization, and validation likelihood is used
for tuning parameter selection. Our method is applied in quadratic discriminant
analysis and semi-supervised model based clustering.
| [
"['Bradley S. Price' 'Charles J. Geyer' 'Adam J. Rothman']",
"Bradley S. Price, Charles J. Geyer, and Adam J. Rothman"
] |
cs.LG cs.IT math.IT physics.data-an stat.ML | null | 1310.4210 | null | null | http://arxiv.org/pdf/1310.4210v2 | 2014-02-05T22:21:06Z | 2013-10-15T21:19:22Z | Demystifying Information-Theoretic Clustering | We propose a novel method for clustering data which is grounded in
information-theoretic principles and requires no parametric assumptions.
Previous attempts to use information theory to define clusters in an
assumption-free way are based on maximizing mutual information between data and
cluster labels. We demonstrate that this intuition suffers from a fundamental
conceptual flaw that causes clustering performance to deteriorate as the amount
of data increases. Instead, we return to the axiomatic foundations of
information theory to define a meaningful clustering measure based on the
notion of consistency under coarse-graining for finite data.
| [
"Greg Ver Steeg, Aram Galstyan, Fei Sha, Simon DeDeo",
"['Greg Ver Steeg' 'Aram Galstyan' 'Fei Sha' 'Simon DeDeo']"
] |
q-bio.BM cs.LG | null | 1310.4223 | null | null | http://arxiv.org/pdf/1310.4223v1 | 2013-10-15T23:04:00Z | 2013-10-15T23:04:00Z | Exact Learning of RNA Energy Parameters From Structure | We consider the problem of exact learning of parameters of a linear RNA
energy model from secondary structure data. A necessary and sufficient
condition for learnability of parameters is derived, which is based on
computing the convex hull of union of translated Newton polytopes of input
sequences. The set of learned energy parameters is characterized as the convex
cone generated by the normal vectors to those facets of the resulting polytope
that are incident to the origin. In practice, the sufficient condition may not
be satisfied by the entire training data set; hence, computing a maximal subset
of training data for which the sufficient condition is satisfied is often
desired. We show that problem is NP-hard in general for an arbitrary
dimensional feature space. Using a randomized greedy algorithm, we select a
subset of RNA STRAND v2.0 database that satisfies the sufficient condition for
separate A-U, C-G, G-U base pair counting model. The set of learned energy
parameters includes experimentally measured energies of A-U, C-G, and G-U
pairs; hence, our parameter set is in agreement with the Turner parameters.
| [
"Hamidreza Chitsaz, Mohammad Aminisharifabad",
"['Hamidreza Chitsaz' 'Mohammad Aminisharifabad']"
] |
cs.LG math.PR | null | 1310.4227 | null | null | http://arxiv.org/pdf/1310.4227v1 | 2013-10-15T23:30:52Z | 2013-10-15T23:30:52Z | On Measure Concentration of Random Maximum A-Posteriori Perturbations | The maximum a-posteriori (MAP) perturbation framework has emerged as a useful
approach for inference and learning in high dimensional complex models. By
maximizing a randomly perturbed potential function, MAP perturbations generate
unbiased samples from the Gibbs distribution. Unfortunately, the computational
cost of generating so many high-dimensional random variables can be
prohibitive. More efficient algorithms use sequential sampling strategies based
on the expected value of low dimensional MAP perturbations. This paper develops
new measure concentration inequalities that bound the number of samples needed
to estimate such expected values. Applying the general result to MAP
perturbations can yield a more efficient algorithm to approximate sampling from
the Gibbs distribution. The measure concentration result is of general interest
and may be applicable to other areas involving expected estimations.
| [
"Francesco Orabona, Tamir Hazan, Anand D. Sarwate, Tommi Jaakkola",
"['Francesco Orabona' 'Tamir Hazan' 'Anand D. Sarwate' 'Tommi Jaakkola']"
] |
stat.ML cs.LG | null | 1310.4252 | null | null | http://arxiv.org/pdf/1310.4252v1 | 2013-10-16T03:04:47Z | 2013-10-16T03:04:47Z | Multilabel Consensus Classification | In the era of big data, a large amount of noisy and incomplete data can be
collected from multiple sources for prediction tasks. Combining multiple models
or data sources helps to counteract the effects of low data quality and the
bias of any single model or data source, and thus can improve the robustness
and the performance of predictive models. Out of privacy, storage and bandwidth
considerations, in certain circumstances one has to combine the predictions
from multiple models or data sources to obtain the final predictions without
accessing the raw data. Consensus-based prediction combination algorithms are
effective for such situations. However, current research on prediction
combination focuses on the single label setting, where an instance can have one
and only one label. Nonetheless, data nowadays are usually multilabeled, such
that more than one label has to be predicted at the same time. Direct
applications of existing prediction combination methods to multilabel settings
can lead to degenerated performance. In this paper, we address the challenges
of combining predictions from multiple multilabel classifiers and propose two
novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and
MLCM-a (MLCM for microAUC). These algorithms can capture label correlations
that are common in multilabel classifications, and optimize corresponding
performance metrics. Experimental results on popular multilabel classification
tasks verify the theoretical analysis and effectiveness of the proposed
methods.
| [
"['Sihong Xie' 'Xiangnan Kong' 'Jing Gao' 'Wei Fan' 'Philip S. Yu']",
"Sihong Xie and Xiangnan Kong and Jing Gao and Wei Fan and Philip S.Yu"
] |
stat.ML cs.LG | null | 1310.4362 | null | null | http://arxiv.org/pdf/1310.4362v1 | 2013-10-16T13:13:45Z | 2013-10-16T13:13:45Z | Bayesian Information Sharing Between Noise And Regression Models
Improves Prediction of Weak Effects | We consider the prediction of weak effects in a multiple-output regression
setup, when covariates are expected to explain a small amount, less than
$\approx 1\%$, of the variance of the target variables. To facilitate the
prediction of the weak effects, we constrain our model structure by introducing
a novel Bayesian approach of sharing information between the regression model
and the noise model. Further reduction of the effective number of parameters is
achieved by introducing an infinite shrinkage prior and group sparsity in the
context of the Bayesian reduced rank regression, and using the Bayesian
infinite factor model as a flexible low-rank noise model. In our experiments
the model incorporating the novelties outperformed alternatives in genomic
prediction of rich phenotype data. In particular, the information sharing
between the noise and regression models led to significant improvement in
prediction accuracy.
| [
"Jussi Gillberg, Pekka Marttinen, Matti Pirinen, Antti J Kangas, Pasi\n Soininen, Marjo-Riitta J\\\"arvelin, Mika Ala-Korpela, Samuel Kaski",
"['Jussi Gillberg' 'Pekka Marttinen' 'Matti Pirinen' 'Antti J Kangas'\n 'Pasi Soininen' 'Marjo-Riitta Järvelin' 'Mika Ala-Korpela' 'Samuel Kaski']"
] |
stat.ML cs.LG | null | 1310.4456 | null | null | http://arxiv.org/pdf/1310.4456v1 | 2013-10-16T17:33:34Z | 2013-10-16T17:33:34Z | Inference, Sampling, and Learning in Copula Cumulative Distribution
Networks | The cumulative distribution network (CDN) is a recently developed class of
probabilistic graphical models (PGMs) permitting a copula factorization, in
which the CDF, rather than the density, is factored. Despite there being much
recent interest within the machine learning community about copula
representations, there has been scarce research into the CDN, its amalgamation
with copula theory, and no evaluation of its performance. Algorithms for
inference, sampling, and learning in these models are underdeveloped compared with
those of other PGMs, hindering widespread use.
One advantage of the CDN is that it allows the factors to be parameterized as
copulae, combining the benefits of graphical models with those of copula
theory. In brief, the use of a copula parameterization enables greater
modelling flexibility by separating representation of the marginals from the
dependence structure, permitting more efficient and robust learning. Another
advantage is that the CDN permits the representation of implicit latent
variables, whose parameterization and connectivity are not required to be
specified. Unfortunately, the fact that the model can encode only latent relationships
between variables severely limits its utility.
In this thesis, we present inference, learning, and sampling for CDNs, and
further the state-of-the-art. First, we explain the basics of copula theory and
the representation of copula CDNs. Then, we discuss inference in the models,
and develop the first sampling algorithm. We explain standard learning methods,
propose an algorithm for learning from data missing completely at random
(MCAR), and develop a novel algorithm for learning models of arbitrary
treewidth and size. Properties of the models and algorithms are investigated
through Monte Carlo simulations. We conclude with further discussion of the
advantages and limitations of CDNs, and suggest future work.
| [
"['Stefan Douglas Webb']",
"Stefan Douglas Webb"
] |
cs.CR cs.LG | null | 1310.4485 | null | null | http://arxiv.org/pdf/1310.4485v1 | 2013-10-15T12:12:44Z | 2013-10-15T12:12:44Z | The BeiHang Keystroke Dynamics Authentication System | Keystroke Dynamics is an important biometric solution for person
authentication. Based upon keystroke dynamics, this paper designs an embedded
password protection device, develops an online system, collects two public
databases for promoting the research on keystroke authentication, exploits the
Gabor filter bank to characterize keystroke dynamics, and provides benchmark
results of three popular classification algorithms, one-class support vector
machine, Gaussian classifier, and nearest neighbour classifier.
| [
"Juan Liu, Baochang Zhang, Linlin Shen, Jianzhuang Liu, Jason Zhao",
"['Juan Liu' 'Baochang Zhang' 'Linlin Shen' 'Jianzhuang Liu' 'Jason Zhao']"
] |
cs.CE cs.LG | null | 1310.4495 | null | null | http://arxiv.org/pdf/1310.4495v1 | 2013-10-16T15:01:19Z | 2013-10-16T15:01:19Z | Multiple Attractor Cellular Automata (MACA) for Addressing Major
Problems in Bioinformatics | CA has grown into a potential classifier for addressing major problems in
bioinformatics. Many bioinformatics problems, such as predicting protein coding
regions, finding promoter regions, and predicting protein structure, can be
addressed through Cellular Automata. Even though there are some prediction
techniques addressing these problems, their accuracy remains low. An automated
procedure was proposed with MACA (Multiple Attractor Cellular Automata) which can address
all these problems. The genetic algorithm is also used to find rules with good
fitness values. Extensive experiments are conducted for reporting the accuracy
of the proposed tool. The average accuracy of MACA when tested with ENCODE,
BG570, HMR195, Fickett and Tongue, ASP67 datasets is 78%.
| [
"['Pokkuluri Kiran Sree' 'Inampudi Ramesh Babu' 'SSSN Usha Devi Nedunuri']",
"Pokkuluri Kiran Sree, Inampudi Ramesh Babu and SSSN Usha Devi Nedunuri"
] |
cs.CL cs.LG stat.ML | null | 1310.4546 | null | null | http://arxiv.org/pdf/1310.4546v1 | 2013-10-16T23:28:53Z | 2013-10-16T23:28:53Z | Distributed Representations of Words and Phrases and their
Compositionality | The recently introduced continuous Skip-gram model is an efficient method for
learning high-quality distributed vector representations that capture a large
number of precise syntactic and semantic word relationships. In this paper we
present several extensions that improve both the quality of the vectors and the
training speed. By subsampling of the frequent words we obtain significant
speedup and also learn more regular word representations. We also describe a
simple alternative to the hierarchical softmax called negative sampling. An
inherent limitation of word representations is their indifference to word order
and their inability to represent idiomatic phrases. For example, the meanings
of "Canada" and "Air" cannot be easily combined to obtain "Air Canada".
Motivated by this example, we present a simple method for finding phrases in
text, and show that learning good vector representations for millions of
phrases is possible.
| [
"Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean",
"['Tomas Mikolov' 'Ilya Sutskever' 'Kai Chen' 'Greg Corrado' 'Jeffrey Dean']"
] |
cs.LG cs.SI physics.soc-ph | null | 1310.4579 | null | null | http://arxiv.org/pdf/1310.4579v1 | 2013-10-17T04:21:37Z | 2013-10-17T04:21:37Z | Discriminative Link Prediction using Local Links, Node Features and
Community Structure | A link prediction (LP) algorithm is given a graph, and has to rank, for each
node, other nodes that are candidates for new linkage. LP is strongly motivated
by social search and recommendation applications. LP techniques often focus on
global properties (graph conductance, hitting or commute times, Katz score) or
local properties (Adamic-Adar and many variations, or node feature vectors),
but rarely combine these signals. Furthermore, neither of these extremes
exploits link densities at the intermediate level of communities. In this paper
we describe a discriminative LP algorithm that exploits two new signals. First,
a co-clustering algorithm provides community level link density estimates,
which are used to qualify observed links with a surprise value. Second, links
in the immediate neighborhood of the link to be predicted are not interpreted
at face value, but through a local model of node feature similarities. These
signals are combined into a discriminative link predictor. We evaluate the new
predictor using five diverse data sets that are standard in the literature. We
report on significant accuracy boosts compared to standard LP methods
(including Adamic-Adar and random walk). Apart from the new predictor, another
contribution is a rigorous protocol for benchmarking and reporting LP
algorithms, which reveals the regions of strengths and weaknesses of all the
predictors studied here, and establishes the new proposal as the most robust.
| [
"['Abir De' 'Niloy Ganguly' 'Soumen Chakrabarti']",
"Abir De, Niloy Ganguly, Soumen Chakrabarti"
] |
math.ST cs.LG stat.TH | null | 1310.4661 | null | null | http://arxiv.org/pdf/1310.4661v2 | 2015-02-02T20:11:21Z | 2013-10-17T11:42:07Z | Minimax rates in permutation estimation for feature matching | The problem of matching two sets of features appears in various tasks of
computer vision and can often be formalized as a problem of permutation
estimation. We address this problem from a statistical point of view and
provide a theoretical analysis of the accuracy of several natural estimators.
To this end, the minimax rate of separation is investigated and its expression
is obtained as a function of the sample size, noise level and dimension. We
consider the cases of homoscedastic and heteroscedastic noise and establish, in
each case, tight upper bounds on the separation distance of several estimators.
These upper bounds are shown to be unimprovable both in the homoscedastic and
heteroscedastic settings. Interestingly, these bounds demonstrate that a phase
transition occurs when the dimension $d$ of the features is of the order of the
logarithm of the number of features $n$. For $d=O(\log n)$, the rate is
dimension free and equals $\sigma (\log n)^{1/2}$, where $\sigma$ is the noise
level. In contrast, when $d$ is larger than $c\log n$ for some constant $c>0$,
the minimax rate increases with $d$ and is of the order $\sigma(d\log
n)^{1/4}$. We also discuss the computational aspects of the estimators and
provide empirical evidence of their consistency on synthetic data. Finally, we
show that our results extend to more general matching criteria.
| [
"Olivier Collier and Arnak S. Dalalyan",
"['Olivier Collier' 'Arnak S. Dalalyan']"
] |
stat.ML cs.LG | null | 1310.4849 | null | null | http://arxiv.org/pdf/1310.4849v3 | 2015-03-06T15:58:09Z | 2013-10-17T20:34:04Z | On the Bayes-optimality of F-measure maximizers | The F-measure, which has originally been introduced in information retrieval,
is nowadays routinely used as a performance metric for problems such as binary
classification, multi-label classification, and structured output prediction.
Optimizing this measure is a statistically and computationally challenging
problem, since no closed-form solution exists. Adopting a decision-theoretic
perspective, this article provides a formal and experimental analysis of
different approaches for maximizing the F-measure. We start with a Bayes-risk
analysis of related loss functions, such as Hamming loss and subset zero-one
loss, showing that optimizing such losses as a surrogate of the F-measure leads
to a high worst-case regret. Subsequently, we perform a similar type of
analysis for F-measure maximizing algorithms, showing that such algorithms are
approximate, while relying on additional assumptions regarding the statistical
distribution of the binary response variables. Furthermore, we present a new
algorithm which is not only computationally efficient but also Bayes-optimal,
regardless of the underlying distribution. To this end, the algorithm requires
only a quadratic (with respect to the number of binary responses) number of
parameters of the joint distribution. We illustrate the practical performance
of all analyzed methods by means of experiments with multi-label classification
problems.
| [
"['Willem Waegeman' 'Krzysztof Dembczynski' 'Arkadiusz Jachnik'\n 'Weiwei Cheng' 'Eyke Hullermeier']",
"Willem Waegeman, Krzysztof Dembczynski, Arkadiusz Jachnik, Weiwei\n Cheng, Eyke Hullermeier"
] |
cs.DL cs.CL cs.LG | 10.5121/acij.2013.4501 | 1310.4909 | null | null | http://arxiv.org/abs/1310.4909v1 | 2013-10-18T04:18:09Z | 2013-10-18T04:18:09Z | Text Classification For Authorship Attribution Analysis | Authorship attribution mainly deals with undecided authorship of literary
texts. Authorship attribution is useful for resolving issues such as uncertain
authorship, recognizing the authorship of unknown texts, spotting plagiarism, and so on.
Statistical methods can be used to characterize an author's style
numerically. The basic measures used in computational
stylometry are word length, sentence length, vocabulary richness, frequencies,
etc. Each author has an inborn style of writing that is particular to
himself, and statistical quantitative techniques can be used to differentiate
an author's style numerically. The problem can be broken down into
three sub-problems: author identification, author characterization and
similarity detection. The steps involved are pre-processing, feature
extraction, classification and author identification. Different
classifiers can be used for this; here, a fuzzy learning classifier and an SVM are used. After
author identification, the SVM was found to be more accurate than the fuzzy
classifier. The classifiers were later combined to obtain better accuracy
compared to the individual SVM and fuzzy classifiers.
| [
"['M. Sudheep Elayidom' 'Chinchu Jose' 'Anitta Puthussery' 'Neenu K Sasi']",
"M. Sudheep Elayidom, Chinchu Jose, Anitta Puthussery, Neenu K Sasi"
] |
cs.LG cs.CV stat.ML | null | 1310.4945 | null | null | http://arxiv.org/pdf/1310.4945v2 | 2014-02-20T16:41:40Z | 2013-10-18T08:31:54Z | A novel sparsity and clustering regularization | We propose a novel SPARsity and Clustering (SPARC) regularizer, which is a
modified version of the previous octagonal shrinkage and clustering algorithm
for regression (OSCAR), where the proposed regularizer consists of a
$K$-sparse constraint and a pair-wise $\ell_{\infty}$ norm restricted on the
$K$ largest components in magnitude. The proposed regularizer is able to
separably enforce $K$-sparsity and encourage the non-zeros to be equal in
magnitude. Moreover, it can accurately group the features without shrinking
their magnitude. In fact, SPARC is closely related to OSCAR, so that the
proximity operator of the former can be efficiently computed based on that of
the latter, allowing using proximal splitting algorithms to solve problems with
SPARC regularization. Experiments on synthetic data and with benchmark breast
cancer data show that SPARC is a competitive group-sparsity inducing
regularizer for regression and classification.
| [
"Xiangrong Zeng and M\\'ario A. T. Figueiredo",
"['Xiangrong Zeng' 'Mário A. T. Figueiredo']"
] |
cs.LG | null | 1310.4977 | null | null | http://arxiv.org/pdf/1310.4977v1 | 2013-10-18T11:37:33Z | 2013-10-18T11:37:33Z | Learning Tensors in Reproducing Kernel Hilbert Spaces with Multilinear
Spectral Penalties | We present a general framework to learn functions in tensor product
reproducing kernel Hilbert spaces (TP-RKHSs). The methodology is based on a
novel representer theorem suitable for existing as well as new spectral
penalties for tensors. When the functions in the TP-RKHS are defined on the
Cartesian product of finite discrete sets, in particular, our main problem
formulation admits as a special case existing tensor completion problems. Other
special cases include transfer learning with multimodal side information and
multilinear multitask learning. For the latter case, our kernel-based view is
instrumental to derive nonlinear extensions of existing model classes. We give
a novel algorithm and show in experiments the usefulness of the proposed
extensions.
| [
"Marco Signoretto and Lieven De Lathauwer and Johan A.K. Suykens",
"['Marco Signoretto' 'Lieven De Lathauwer' 'Johan A. K. Suykens']"
] |
cs.LG stat.ML | null | 1310.5007 | null | null | http://arxiv.org/pdf/1310.5007v1 | 2013-10-17T04:01:25Z | 2013-10-17T04:01:25Z | Online Classification Using a Voted RDA Method | We propose a voted dual averaging method for online classification problems
with explicit regularization. This method employs the update rule of the
regularized dual averaging (RDA) method, but only on the subsequence of
training examples where a classification error is made. We derive a bound on
the number of mistakes made by this method on the training set, as well as its
generalization error rate. We also introduce the concept of relative strength
of regularization, and show how it affects the mistake bound and generalization
performance. We experimented with the method using $\ell_1$ regularization on a
large-scale natural language processing task, and obtained state-of-the-art
classification performance with fairly sparse models.
| [
"['Tianbing Xu' 'Jianfeng Gao' 'Lin Xiao' 'Amelia Regan']",
"Tianbing Xu, Jianfeng Gao, Lin Xiao, Amelia Regan"
] |
cs.LG | null | 1310.5008 | null | null | http://arxiv.org/pdf/1310.5008v1 | 2013-10-17T04:17:20Z | 2013-10-17T04:17:20Z | Thompson Sampling in Dynamic Systems for Contextual Bandit Problems | We consider multi-armed bandit problems in a time-varying dynamic system
with rich structural features. For the nonlinear dynamic model, we propose
approximate inference for the posterior distributions based on Laplace
approximation. For contextual bandit problems, Thompson Sampling is adopted
based on the underlying posterior distributions of the parameters. More
specifically, we introduce discounted decay of the impact of previous samples
and analyze different decay rates with respect to the underlying sample dynamics.
Consequently, exploration and exploitation are adaptively traded off according
to the dynamics of the system.
| [
"Tianbing Xu, Yaming Yu, John Turner, Amelia Regan",
"['Tianbing Xu' 'Yaming Yu' 'John Turner' 'Amelia Regan']"
] |
cs.LG stat.ML | null | 1310.5034 | null | null | http://arxiv.org/pdf/1310.5034v2 | 2014-07-02T14:05:49Z | 2013-10-18T14:31:02Z | A Theoretical and Experimental Comparison of the EM and SEM Algorithm | In this paper we provide a new analysis of the SEM algorithm. Unlike previous
work, we focus on the analysis of a single run of the algorithm. First, we
discuss the algorithm for general mixture distributions. Second, we consider
Gaussian mixture models and show that with high probability the update
equations of the EM algorithm and its stochastic variant are almost the same,
given that the input set is sufficiently large. Our experiments confirm that
this still holds for a large number of successive update steps. In particular,
for Gaussian mixture models, we show that the stochastic variant runs nearly
twice as fast.
| [
"['Johannes Blömer' 'Kathrin Bujna' 'Daniel Kuntze']",
"Johannes Bl\\\"omer, Kathrin Bujna, and Daniel Kuntze"
] |
cs.NA cs.LG math.OC stat.ML | null | 1310.5035 | null | null | http://arxiv.org/pdf/1310.5035v2 | 2014-05-29T02:14:13Z | 2013-10-18T14:31:08Z | Linearized Alternating Direction Method with Parallel Splitting and
Adaptive Penalty for Separable Convex Programs in Machine Learning | Many problems in machine learning and other fields can be (re)formulated as
linearly constrained separable convex programs. In most of the cases, there are
multiple blocks of variables. However, the traditional alternating direction
method (ADM) and its linearized version (LADM, obtained by linearizing the
quadratic penalty term) are for the two-block case and cannot be naively
generalized to solve the multi-block case. So there is great demand on
extending the ADM based methods for the multi-block case. In this paper, we
propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve
multi-block separable convex programs efficiently. When all the component
objective functions have bounded subgradients, we obtain convergence results
that are stronger than those of ADM and LADM, e.g., allowing the penalty
parameter to be unbounded and proving the sufficient and necessary conditions
for global convergence. We further propose a simple optimality measure and
reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with
extra convex set constraints, with refined parameter estimation we devise a
practical version of LADMPSAP for faster convergence. Finally, we generalize
LADMPSAP to handle programs with more difficult objective functions by
linearizing part of the objective function as well. LADMPSAP is particularly
suitable for sparse representation and low-rank recovery problems because its
subproblems have closed form solutions and the sparsity and low-rankness of the
iterates can be preserved during the iteration. It is also highly
parallelizable and hence fits for parallel or distributed computing. Numerical
experiments testify to the advantages of LADMPSAP in speed and numerical
accuracy.
| [
"Zhouchen Lin, Risheng Liu, Huan Li",
"['Zhouchen Lin' 'Risheng Liu' 'Huan Li']"
] |
cs.LG cs.AI cs.CL cs.IR | null | 1310.5042 | null | null | http://arxiv.org/pdf/1310.5042v1 | 2013-10-18T14:50:39Z | 2013-10-18T14:50:39Z | Distributional semantics beyond words: Supervised learning of analogy
and paraphrase | There have been several efforts to extend distributional semantics beyond
individual words, to measure the similarity of word pairs, phrases, and
sentences (briefly, tuples; ordered sets of words, contiguous or
noncontiguous). One way to extend beyond words is to compare two tuples using a
function that combines pairwise similarities between the component words in the
tuples. A strength of this approach is that it works with both relational
similarity (analogy) and compositional similarity (paraphrase). However, past
work required hand-coding the combination function for different tasks. The
main contribution of this paper is that combination functions are generated by
supervised learning. We achieve state-of-the-art results in measuring
relational similarity between word pairs (SAT analogies and SemEval~2012 Task
2) and measuring compositional similarity between noun-modifier phrases and
unigrams (multiple-choice paraphrase questions).
| [
"Peter D. Turney",
"['Peter D. Turney']"
] |
cs.CV cs.LG stat.ML | null | 1310.5082 | null | null | http://arxiv.org/pdf/1310.5082v1 | 2013-10-18T16:34:04Z | 2013-10-18T16:34:04Z | On the Suitable Domain for SVM Training in Image Coding | Conventional SVM-based image coding methods are founded on independently
restricting the distortion in every image coefficient at some particular image
representation. Geometrically, this implies allowing arbitrary signal
distortions in an $n$-dimensional rectangle defined by the
$\varepsilon$-insensitivity zone in each dimension of the selected image
representation domain. Unfortunately, not every image representation domain is
well-suited for such a simple, scalar-wise, approach because statistical and/or
perceptual interactions between the coefficients may exist. These interactions
imply that scalar approaches may induce distortions that do not follow the
image statistics and/or are perceptually annoying. Taking into account these
relations would imply using non-rectangular $\varepsilon$-insensitivity regions
(allowing coupled distortions in different coefficients), which is beyond the
conventional SVM formulation.
In this paper, we report a condition on the suitable domain for developing
efficient SVM image coding schemes. We analytically demonstrate that no linear
domain fulfills this condition because of the statistical and perceptual
inter-coefficient relations that exist in these domains. This theoretical
result is experimentally confirmed by comparing SVM learning in previously
reported linear domains and in a recently proposed non-linear perceptual domain
that simultaneously reduces the statistical and perceptual relations (so it is
closer to fulfilling the proposed condition). These results highlight the
relevance of an appropriate choice of the image representation before SVM
learning.
| [
"['Gustavo Camps-Valls' 'Juan Gutiérrez' 'Gabriel Gómez-Pérez' 'Jesús Malo']",
"Gustavo Camps-Valls, Juan Guti\\'errez, Gabriel G\\'omez-P\\'erez,\n Jes\\'us Malo"
] |
stat.ML cs.LG | 10.1109/MSP.2013.2250591 | 1310.5089 | null | null | http://arxiv.org/abs/1310.5089v1 | 2013-10-18T16:44:05Z | 2013-10-18T16:44:05Z | Kernel Multivariate Analysis Framework for Supervised Subspace Learning:
A Tutorial on Linear and Kernel Multivariate Methods | Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and low-sized problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
| [
"Jer\\'onimo Arenas-Garc\\'ia, Kaare Brandt Petersen, Gustavo\n Camps-Valls, Lars Kai Hansen",
"['Jerónimo Arenas-García' 'Kaare Brandt Petersen' 'Gustavo Camps-Valls'\n 'Lars Kai Hansen']"
] |
stat.ML cs.LG | null | 1310.5095 | null | null | http://arxiv.org/pdf/1310.5095v1 | 2013-10-18T17:00:34Z | 2013-10-18T17:00:34Z | Regularization in Relevance Learning Vector Quantization Using l one
Norms | We propose in this contribution a method for $l_1$ regularization in
prototype-based relevance learning vector quantization (LVQ) for sparse
relevance profiles. Sparse relevance profiles in hyperspectral data analysis
fade down those spectral bands which are not necessary for classification. In
particular, we consider the sparsity in the relevance profile enforced by LASSO
optimization. The latter one is obtained by a gradient learning scheme using a
differentiable parametrized approximation of the $l_{1}$-norm, which has an
upper error bound. We extend this regularization idea also to the matrix
learning variant of LVQ as the natural generalization of relevance learning.
| [
"Martin Riedel, Marika K\\\"astner, Fabrice Rossi (SAMM), Thomas Villmann",
"['Martin Riedel' 'Marika Kästner' 'Fabrice Rossi' 'Thomas Villmann']"
] |
cond-mat.dis-nn cs.LG physics.soc-ph q-fin.GN | 10.1103/PhysRevLett.112.050602 | 1310.5114 | null | null | http://arxiv.org/abs/1310.5114v3 | 2013-12-10T15:07:34Z | 2013-10-18T18:10:01Z | Explore or exploit? A generic model and an exactly solvable case | Finding a good compromise between the exploitation of known resources and the
exploration of unknown, but potentially more profitable choices, is a general
problem, which arises in many different scientific disciplines. We propose a
stylized model for these exploration-exploitation situations, including
population or economic growth, portfolio optimisation, evolutionary dynamics,
or the problem of optimal pinning of vortices or dislocations in disordered
materials. We find the exact growth rate of this model for tree-like geometries
and prove the existence of an optimal migration rate in this case. Numerical
simulations in the one-dimensional case confirm the generic existence of an
optimum.
| [
"['Thomas Gueudré' 'Alexander Dobrinevski' 'Jean-Philippe Bouchaud']",
"Thomas Gueudr\\'e and Alexander Dobrinevski and Jean-Philippe Bouchaud"
] |
null | null | 1310.5249 | null | null | http://arxiv.org/abs/1310.5249v1 | 2013-10-19T17:24:39Z | 2013-10-19T17:24:39Z | Graph-Based Approaches to Clustering Network-Constrained Trajectory Data | Clustering trajectory data has attracted considerable attention in the last few years. Most prior work assumed that moving objects can move freely in a Euclidean space and did not consider the possible presence of an underlying road network and its influence on evaluating the similarity between trajectories. In this paper, we present an approach to clustering such network-constrained trajectory data. More precisely, we aim at discovering groups of road segments that are often travelled by the same trajectories. To this end, we model the interactions between segments w.r.t. their similarity as a weighted graph to which we apply a community detection algorithm to discover meaningful clusters. We showcase our proposition through experimental results obtained on synthetic datasets. | [
"['Mohamed Khalil El Mahrsi' 'Fabrice Rossi']"
] |
stat.ML cs.AI cs.LG stat.ME | null | 1310.5288 | null | null | http://arxiv.org/pdf/1310.5288v3 | 2013-12-31T14:10:34Z | 2013-10-20T01:26:45Z | GPatt: Fast Multidimensional Pattern Extrapolation with Gaussian
Processes | Gaussian processes are typically used for smoothing and interpolation on
small datasets. We introduce a new Bayesian nonparametric framework -- GPatt --
enabling automatic pattern extrapolation with Gaussian processes on large
multidimensional datasets. GPatt unifies and extends highly expressive kernels
and fast exact inference techniques. Without human intervention -- no hand
crafting of kernel features, and no sophisticated initialisation procedures --
we show that GPatt can solve large scale pattern extrapolation, inpainting, and
kernel discovery problems, including a problem with 383400 training points. We
find that GPatt significantly outperforms popular alternative scalable Gaussian
process methods in speed and accuracy. Moreover, we discover profound
differences between each of these methods, suggesting expressive kernels,
nonparametric representations, and exact inference are useful for modelling
large scale multidimensional patterns.
| [
"['Andrew Gordon Wilson' 'Elad Gilboa' 'Arye Nehorai' 'John P. Cunningham']",
"Andrew Gordon Wilson, Elad Gilboa, Arye Nehorai, John P. Cunningham"
] |
stat.ML cs.LG | null | 1310.5347 | null | null | http://arxiv.org/pdf/1310.5347v1 | 2013-10-20T16:58:57Z | 2013-10-20T16:58:57Z | Bayesian Extensions of Kernel Least Mean Squares | The kernel least mean squares (KLMS) algorithm is a computationally efficient
nonlinear adaptive filtering method that "kernelizes" the celebrated (linear)
least mean squares algorithm. We demonstrate that the least mean squares
algorithm is closely related to Kalman filtering, and thus the KLMS can be
interpreted as an approximate Bayesian filtering method. This allows us to
systematically develop extensions of the KLMS by modifying the underlying
state-space and observation models. The resulting extensions introduce many
desirable properties such as "forgetting", and the ability to learn from
discrete data, while retaining the computational simplicity and time complexity
of the original algorithm.
| [
"['Il Memming Park' 'Sohan Seth' 'Steven Van Vaerenbergh']",
"Il Memming Park, Sohan Seth, Steven Van Vaerenbergh"
] |
cs.LG | null | 1310.5393 | null | null | http://arxiv.org/pdf/1310.5393v1 | 2013-10-21T01:06:56Z | 2013-10-21T01:06:56Z | Multi-Task Regularization with Covariance Dictionary for Linear
Classifiers | In this paper we propose a multi-task linear classifier learning problem
called D-SVM (Dictionary SVM). D-SVM uses a dictionary of parameter covariance
shared by all tasks to do multi-task knowledge transfer among different tasks.
We formally define the learning problem of D-SVM and show two interpretations
of this problem, from both the probabilistic and kernel perspectives. From the
probabilistic perspective, we show that our learning formulation is actually a
MAP estimation on all optimization variables. We also show its equivalence to a
multiple kernel learning problem in which one is trying to find a re-weighting
kernel for features from a dictionary of basis (despite the fact that only
linear classifiers are learned). Finally, we describe an alternative
optimization scheme to minimize the objective function and present empirical
studies to validate our algorithm.
| [
"['Fanyi Xiao' 'Ruikun Luo' 'Zhiding Yu']",
"Fanyi Xiao, Ruikun Luo, Zhiding Yu"
] |
cs.LG cs.DC stat.ML | null | 1310.5426 | null | null | http://arxiv.org/pdf/1310.5426v2 | 2013-10-25T22:08:12Z | 2013-10-21T04:58:11Z | MLI: An API for Distributed Machine Learning | MLI is an Application Programming Interface designed to address the
challenges of building Machine Learning algorithms in a distributed setting
based on data-centric computing. Its primary goal is to simplify the
development of high-performance, scalable, distributed algorithms. Our initial
results show that, relative to existing systems, this interface can be used to
build distributed implementations of a wide variety of common Machine Learning
algorithms with minimal complexity and highly competitive performance and
scalability.
| [
"Evan R. Sparks, Ameet Talwalkar, Virginia Smith, Jey Kottalam, Xinghao\n Pan, Joseph Gonzalez, Michael J. Franklin, Michael I. Jordan, Tim Kraska",
"['Evan R. Sparks' 'Ameet Talwalkar' 'Virginia Smith' 'Jey Kottalam'\n 'Xinghao Pan' 'Joseph Gonzalez' 'Michael J. Franklin' 'Michael I. Jordan'\n 'Tim Kraska']"
] |
cs.LG | null | 1310.5665 | null | null | http://arxiv.org/pdf/1310.5665v3 | 2014-12-02T20:42:17Z | 2013-10-21T18:27:25Z | Learning Theory and Algorithms for Revenue Optimization in Second-Price
Auctions with Reserve | Second-price auctions with reserve play a critical role for modern search
engines and popular online sites, since the revenue of these companies often
directly depends on the outcome of such auctions. The choice of the reserve
price is the main mechanism through which the auction revenue can be influenced
in these electronic markets. We cast the problem of selecting the reserve price
to optimize revenue as a learning problem and present a full theoretical
analysis dealing with the complex properties of the corresponding loss
function. We further give novel algorithms for solving this problem and report
the results of several experiments in both synthetic and real data
demonstrating their effectiveness.
| [
"Mehryar Mohri and Andres Mu\\~noz Medina",
"['Mehryar Mohri' 'Andres Muñoz Medina']"
] |
math.NA cs.CV cs.LG math.OC stat.ML | null | 1310.5715 | null | null | http://arxiv.org/pdf/1310.5715v5 | 2015-01-16T17:11:24Z | 2013-10-21T20:15:44Z | Stochastic Gradient Descent, Weighted Sampling, and the Randomized
Kaczmarz algorithm | We obtain an improved finite-sample guarantee on the linear convergence of
stochastic gradient descent for smooth and strongly convex objectives,
improving from a quadratic dependence on the conditioning $(L/\mu)^2$ (where
$L$ is a bound on the smoothness and $\mu$ on the strong convexity) to a linear
dependence on $L/\mu$. Furthermore, we show how reweighting the sampling
distribution (i.e. importance sampling) is necessary in order to further
improve convergence, and obtain a linear dependence in the average smoothness,
dominating previous results. We also discuss importance sampling for SGD more
broadly and show how it can improve convergence also in other scenarios. Our
results are based on a connection we make between SGD and the randomized
Kaczmarz algorithm, which allows us to transfer ideas between the separate
bodies of literature studying each of the two methods. In particular, we recast
the randomized Kaczmarz algorithm as an instance of SGD, and apply our results
to prove its exponential convergence, but to the solution of a weighted least
squares problem rather than the original least squares problem. We then present
a modified Kaczmarz algorithm with partially biased sampling which does
converge to the original least squares solution with the same exponential
convergence rate.
| [
"Deanna Needell, Nathan Srebro, Rachel Ward",
"['Deanna Needell' 'Nathan Srebro' 'Rachel Ward']"
] |
stat.ML cs.LG | null | 1310.5738 | null | null | http://arxiv.org/pdf/1310.5738v1 | 2013-10-21T22:02:17Z | 2013-10-21T22:02:17Z | A Kernel for Hierarchical Parameter Spaces | We define a family of kernels for mixed continuous/discrete hierarchical
parameter spaces and show that they are positive definite.
| [
"Frank Hutter and Michael A. Osborne",
"['Frank Hutter' 'Michael A. Osborne']"
] |
cs.LG | null | 1310.5796 | null | null | http://arxiv.org/pdf/1310.5796v4 | 2016-04-04T23:35:45Z | 2013-10-22T04:28:12Z | Relative Deviation Learning Bounds and Generalization with Unbounded
Loss Functions | We present an extensive analysis of relative deviation bounds, including
detailed proofs of two-sided inequalities and their implications. We also give
detailed proofs of two-sided generalization bounds that hold in the general
case of unbounded loss functions, under the assumption that a moment of the
loss is bounded. These bounds are useful in the analysis of importance
weighting and other learning tasks such as unbounded regression.
| [
"['Corinna Cortes' 'Spencer Greenberg' 'Mehryar Mohri']",
"Corinna Cortes, Spencer Greenberg, Mehryar Mohri"
] |
cs.LG | null | 1310.6007 | null | null | http://arxiv.org/pdf/1310.6007v3 | 2013-11-11T08:21:58Z | 2013-10-22T18:44:29Z | Efficient Optimization for Sparse Gaussian Process Regression | We propose an efficient optimization algorithm for selecting a subset of
training data to induce sparsity for Gaussian process regression. The algorithm
estimates an inducing set and the hyperparameters using a single objective,
either the marginal likelihood or a variational free energy. The space and time
complexity are linear in training set size, and the algorithm can be applied to
large regression problems on discrete or continuous domains. Empirical
evaluation shows state-of-the-art performance in discrete cases and competitive
results in the continuous case.
| [
"['Yanshuai Cao' 'Marcus A. Brubaker' 'David J. Fleet' 'Aaron Hertzmann']",
"Yanshuai Cao, Marcus A. Brubaker, David J. Fleet, Aaron Hertzmann"
] |
stat.ML cs.AI cs.LG | 10.3233/978-1-61499-419-0-537 | 1310.6288 | null | null | http://arxiv.org/abs/1310.6288v1 | 2013-10-23T16:43:59Z | 2013-10-23T16:43:59Z | Spatial-Spectral Boosting Analysis for Stroke Patients' Motor Imagery
EEG in Rehabilitation Training | Current studies about motor imagery based rehabilitation training systems for
stroke subjects lack an appropriate analytic method that can achieve
considerable classification accuracy while at the same time detecting gradual changes
of imagery patterns during the rehabilitation process and uncovering potential
mechanisms of motor function recovery. In this study, we propose an adaptive
boosting algorithm based on the cortex plasticity and spectral band shifts.
This approach models the usually predetermined spatial-spectral configurations
in EEG study into variable preconditions, and introduces a new heuristic of
stochastic gradient boost for training base learners under these preconditions.
We compare our proposed algorithm with commonly used methods on datasets
collected from 2 months' clinical experiments. The simulation results
demonstrate the effectiveness of the method in detecting the variations of
stroke patients' EEG patterns. By chronologically reorganizing the weight
parameters of the learned additive model, we verify the spatial compensatory
mechanism on impaired cortex and detect the changes of accentuation bands in
spectral domain, which may contribute important prior knowledge for
rehabilitation practice.
| [
"Hao Zhang and Liqing Zhang",
"['Hao Zhang' 'Liqing Zhang']"
] |
cs.LG | null | 1310.6304 | null | null | http://arxiv.org/pdf/1310.6304v2 | 2013-10-24T17:36:27Z | 2013-10-23T17:33:26Z | Combining Structured and Unstructured Randomness in Large Scale PCA | Principal Component Analysis (PCA) is a ubiquitous tool with many
applications in machine learning including feature construction, subspace
embedding, and outlier detection. In this paper, we present an algorithm for
computing the top principal components of a dataset with a large number of rows
(examples) and columns (features). Our algorithm leverages both structured and
unstructured random projections to retain good accuracy while being
computationally efficient. We demonstrate the technique on the winning
submission to the KDD 2010 Cup.
| [
"['Nikos Karampatziakis' 'Paul Mineiro']",
"Nikos Karampatziakis, Paul Mineiro"
] |
cs.LG cs.AI stat.ML | null | 1310.6343 | null | null | http://arxiv.org/pdf/1310.6343v1 | 2013-10-23T19:49:32Z | 2013-10-23T19:49:32Z | Provable Bounds for Learning Some Deep Representations | We give algorithms with provable guarantees that learn a class of deep nets
in the generative model view popularized by Hinton and others. Our generative
model is an $n$ node multilayer neural net that has degree at most $n^{\gamma}$
for some $\gamma <1$ and each edge has a random edge weight in $[-1,1]$. Our
algorithm learns {\em almost all} networks in this class with polynomial
running time. The sample complexity is quadratic or cubic depending upon the
details of the model.
The algorithm uses layerwise learning. It is based upon a novel idea of
observing correlations among features and using these to infer the underlying
edge structure via a global graph recovery procedure. The analysis of the
algorithm reveals interesting structure of neural networks with random edge
weights.
| [
"['Sanjeev Arora' 'Aditya Bhaskara' 'Rong Ge' 'Tengyu Ma']",
"Sanjeev Arora and Aditya Bhaskara and Rong Ge and Tengyu Ma"
] |
cs.LG q-bio.NC stat.ML | null | 1310.6536 | null | null | http://arxiv.org/pdf/1310.6536v1 | 2013-10-24T09:33:17Z | 2013-10-24T09:33:17Z | Randomized co-training: from cortical neurons to machine learning and
back again | Despite its size and complexity, the human cortex exhibits striking
anatomical regularities, suggesting there may simple meta-algorithms underlying
cortical learning and computation. We expect such meta-algorithms to be of
interest since they need to operate quickly, scalably and effectively with
little-to-no specialized assumptions.
This note focuses on a specific question: How can neurons use vast quantities
of unlabeled data to speed up learning from the comparatively rare labels
provided by reward systems? As a partial answer, we propose randomized
co-training as a biologically plausible meta-algorithm satisfying the above
requirements. As evidence, we describe a biologically-inspired algorithm,
Correlated Nystrom Views (XNV) that achieves state-of-the-art performance in
semi-supervised learning, and sketch work in progress on a neuronal
implementation.
| [
"['David Balduzzi']",
"David Balduzzi"
] |
stat.ML cs.LG | null | 1310.6740 | null | null | http://arxiv.org/pdf/1310.6740v1 | 2013-10-24T14:15:39Z | 2013-10-24T14:15:39Z | Active Learning of Linear Embeddings for Gaussian Processes | We propose an active learning method for discovering low-dimensional
structure in high-dimensional Gaussian process (GP) tasks. Such problems are
increasingly frequent and important, but have hitherto presented severe
practical difficulties. We further introduce a novel technique for
approximately marginalizing GP hyperparameters, yielding marginal predictions
robust to hyperparameter mis-specification. Our method offers an efficient
means of performing GP regression, quadrature, or Bayesian optimization in
high-dimensional spaces.
| [
"['Roman Garnett' 'Michael A. Osborne' 'Philipp Hennig']",
"Roman Garnett and Michael A. Osborne and Philipp Hennig"
] |
cs.AI cs.CL cs.LG | 10.1371/journal.pone.0085733.s001 | 1310.6775 | null | null | http://arxiv.org/abs/1310.6775v1 | 2013-10-24T21:10:53Z | 2013-10-24T21:10:53Z | Durkheim Project Data Analysis Report | This report describes the suicidality prediction models created under the
DARPA DCAPS program in association with the Durkheim Project
[http://durkheimproject.org/]. The models were built primarily from
unstructured text (free-format clinician notes) for several hundred patient
records obtained from the Veterans Health Administration (VHA). The models were
constructed using a genetic programming algorithm applied to bag-of-words and
bag-of-phrases datasets. The influence of additional structured data was
explored but was found to be minor. Given the small dataset size,
classification between cohorts was high fidelity (98%). Cross-validation
suggests these models are reasonably predictive, with an accuracy of 50% to 69%
on five rotating folds, with ensemble averages of 58% to 67%. One particularly
noteworthy result is that word-pairs can dramatically improve classification
accuracy; but this is the case only when one of the words in the pair is
already known to have a high predictive value. By contrast, the set of all
possible word-pairs does not improve on a simple bag-of-words model.
| [
"Linas Vepstas",
"['Linas Vepstas']"
] |
cs.SI cs.LG physics.soc-ph stat.ML | null | 1310.6998 | null | null | http://arxiv.org/pdf/1310.6998v1 | 2013-10-25T18:35:22Z | 2013-10-25T18:35:22Z | Predicting the NFL using Twitter | We study the relationship between social media output and National Football
League (NFL) games, using a dataset containing messages from Twitter and NFL
game statistics. Specifically, we consider tweets pertaining to specific teams
and games in the NFL season and use them alongside statistical game data to
build predictive models for future game outcomes (which team will win?) and
sports betting outcomes (which team will win with the point spread? will the
total points be over/under the line?). We experiment with several feature sets
and find that simple features using large volumes of tweets can match or exceed
the performance of more traditional features that use game statistics.
| [
"Shiladitya Sinha, Chris Dyer, Kevin Gimpel, and Noah A. Smith",
"['Shiladitya Sinha' 'Chris Dyer' 'Kevin Gimpel' 'Noah A. Smith']"
] |
cs.LG stat.ML | null | 1310.7048 | null | null | http://arxiv.org/pdf/1310.7048v1 | 2013-10-25T23:01:52Z | 2013-10-25T23:01:52Z | Scaling SVM and Least Absolute Deviations via Exact Data Reduction | The support vector machine (SVM) is a widely used method for classification.
Although many efforts have been devoted to developing efficient solvers, it
remains challenging to apply SVM to large-scale problems. A nice property of
SVM is that the non-support vectors have no effect on the resulting classifier.
Motivated by this observation, we present fast and efficient screening rules to
discard non-support vectors by analyzing the dual problem of SVM via
variational inequalities (DVI). As a result, the number of data instances to be
entered into the optimization can be substantially reduced. Some appealing
features of our screening method are: (1) DVI is safe in the sense that the
vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data
set needs to be scanned only once to run the screening, whose computational
cost is negligible compared to that of solving the SVM problem; (3) DVI is
independent of the solvers and can be integrated with any existing efficient
solvers. We also show that the DVI technique can be extended to detect
non-support vectors in the least absolute deviations regression (LAD). To the
best of our knowledge, there are currently no screening methods for LAD. We
have evaluated DVI on both synthetic and real data sets. Experiments indicate
that DVI significantly outperforms the existing state-of-the-art screening
rules for SVM, and is very effective in discarding non-support vectors for LAD.
The speedup gained by DVI rules can be up to two orders of magnitude.
| [
"['Jie Wang' 'Peter Wonka' 'Jieping Ye']",
"Jie Wang and Peter Wonka and Jieping Ye"
] |
cs.LG cs.AI stat.ML stat.OT | null | 1310.7163 | null | null | http://arxiv.org/pdf/1310.7163v1 | 2013-10-27T06:29:55Z | 2013-10-27T06:29:55Z | Generalized Thompson Sampling for Contextual Bandits | Thompson Sampling, one of the oldest heuristics for solving multi-armed
bandits, has recently been shown to demonstrate state-of-the-art performance.
The empirical success has led to great interests in theoretical understanding
of this heuristic. In this paper, we approach this problem in a way very
different from existing efforts. In particular, motivated by the connection
between Thompson Sampling and exponentiated updates, we propose a new family of
algorithms called Generalized Thompson Sampling in the expert-learning
framework, which includes Thompson Sampling as a special case. Similar to most
expert-learning algorithms, Generalized Thompson Sampling uses a loss function
to adjust the experts' weights. General regret bounds are derived, which are
also instantiated to two important loss functions: square loss and logarithmic
loss. In contrast to existing bounds, our results apply to quite general
contextual bandits. More importantly, they quantify the effect of the "prior"
distribution on the regret bounds.
| [
"Lihong Li",
"['Lihong Li']"
] |
cs.LG math.OC stat.ML | null | 1310.7300 | null | null | http://arxiv.org/pdf/1310.7300v2 | 2015-08-31T18:14:36Z | 2013-10-28T03:08:48Z | Relax but stay in control: from value to algorithms for online Markov
decision processes | Online learning algorithms are designed to perform in non-stationary
environments, but generally there is no notion of a dynamic state to model
constraints on current and future actions as a function of past actions.
State-based models are common in stochastic control settings, but commonly used
frameworks such as Markov Decision Processes (MDPs) assume a known stationary
environment. In recent years, there has been a growing interest in combining
the above two frameworks and considering an MDP setting in which the cost
function is allowed to change arbitrarily after each time step. However, most
of the work in this area has been algorithmic: given a problem, one would
develop an algorithm almost from scratch. Moreover, the presence of the state
and the assumption of an arbitrarily varying environment complicate both the
theoretical analysis and the development of computationally efficient methods.
This paper describes a broad extension of the ideas proposed by Rakhlin et al.
to give a general framework for deriving algorithms in an MDP setting with
arbitrarily changing costs. This framework leads to a unifying view of existing
methods and provides a general procedure for constructing new ones. Several new
methods are presented, and one of them is shown to have important advantages
over a similar method developed from scratch via an online version of
approximate dynamic programming.
| [
"Peng Guan, Maxim Raginsky, Rebecca Willett",
"['Peng Guan' 'Maxim Raginsky' 'Rebecca Willett']"
] |
stat.ML cs.LG math.NA math.OC | 10.1137/130946782 | 1310.7529 | null | null | http://arxiv.org/abs/1310.7529v3 | 2014-04-07T08:47:07Z | 2013-10-28T18:41:48Z | Successive Nonnegative Projection Algorithm for Robust Nonnegative Blind
Source Separation | In this paper, we propose a new fast and robust recursive algorithm for
near-separable nonnegative matrix factorization, a particular nonnegative blind
source separation problem. This algorithm, which we refer to as the successive
nonnegative projection algorithm (SNPA), is closely related to the popular
successive projection algorithm (SPA), but takes advantage of the nonnegativity
constraint in the decomposition. We prove that SNPA is more robust than SPA and
can be applied to a broader class of nonnegative matrices. This is illustrated
on some synthetic data sets, and on a real-world hyperspectral image.
| [
"['Nicolas Gillis']",
"Nicolas Gillis"
] |
stat.ML cs.LG | null | 1310.7780 | null | null | http://arxiv.org/pdf/1310.7780v2 | 2014-04-29T20:40:42Z | 2013-10-29T12:21:12Z | The Information Geometry of Mirror Descent | Information geometry applies concepts in differential geometry to probability
and statistics and is especially useful for parameter estimation in exponential
families where parameters are known to lie on a Riemannian manifold.
Connections between the geometric properties of the induced manifold and
statistical properties of the estimation problem are well-established. However
developing first-order methods that scale to larger problems has been less of a
focus in the information geometry community. The best known algorithm that
incorporates manifold structure is the second-order natural gradient descent
algorithm introduced by Amari. On the other hand, stochastic approximation
methods have led to the development of first-order methods for optimizing noisy
objective functions. A recent generalization of the Robbins-Monro algorithm
known as mirror descent, developed by Nemirovski and Yudin, is a first-order
method that induces non-Euclidean geometries. However current analysis of
mirror descent does not precisely characterize the induced non-Euclidean
geometry nor does it consider performance in terms of statistical relative
efficiency. In this paper, we prove that mirror descent induced by Bregman
divergences is equivalent to the natural gradient descent algorithm on the dual
Riemannian manifold. Using this equivalence, it follows that (1) mirror descent
is the steepest descent direction along the Riemannian manifold of the
exponential family; (2) mirror descent with log-likelihood loss applied to
parameter estimation in exponential families asymptotically achieves the
classical Cram\'er-Rao lower bound and (3) natural gradient descent for
manifolds corresponding to exponential families can be implemented as a
first-order method through mirror descent.
| [
"Garvesh Raskutti and Sayan Mukherjee",
"['Garvesh Raskutti' 'Sayan Mukherjee']"
] |
cs.LG | 10.1109/ITSC.2012.6338621 | 1310.7795 | null | null | http://arxiv.org/abs/1310.7795v1 | 2013-10-29T13:18:41Z | 2013-10-29T13:18:41Z | An Unsupervised Feature Learning Approach to Improve Automatic Incident
Detection | Sophisticated automatic incident detection (AID) technology plays a key role
in contemporary transportation systems. Though many papers were devoted to
study incident classification algorithms, few study investigated how to enhance
feature representation of incidents to improve AID performance. In this paper,
we propose to use an unsupervised feature learning algorithm to generate higher
level features to represent incidents. We used real incident data in the
experiments and found that effective feature mapping function can be learnt
from the data crosses the test sites. With the enhanced features, detection
rate (DR), false alarm rate (FAR) and mean time to detect (MTTD) are
significantly improved in all of the three representative cases. This approach
also provides an alternative way to reduce the amount of labeled data, which is
expensive to obtain, required in training better incident classifiers since the
feature learning is unsupervised.
| [
"Jimmy SJ. Ren, Wei Wang, Jiawei Wang, Stephen Liao",
"['Jimmy SJ. Ren' 'Wei Wang' 'Jiawei Wang' 'Stephen Liao']"
] |
astro-ph.IM cs.LG stat.ML | 10.1088/0004-637X/777/2/83 | 1310.7868 | null | null | http://arxiv.org/abs/1310.7868v1 | 2013-10-29T16:37:13Z | 2013-10-29T16:37:13Z | Automatic Classification of Variable Stars in Catalogs with missing data | We present an automatic classification method for astronomical catalogs with
missing data. We use Bayesian networks, a probabilistic graphical model, that
allows us to perform inference to predict missing values given observed data
and dependency relationships between variables. To learn a Bayesian network
from incomplete data, we use an iterative algorithm that utilises sampling
methods and expectation maximization to estimate the distributions and
probabilistic dependencies of variables from data with missing values. To test
our model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and
one complete catalog (MACHO). We examine how classification accuracy changes
when information from missing data catalogs is included, how our method
compares to traditional missing data approaches and at what computational cost.
Integrating these catalogs with missing data, we find that classification of
variable objects improves by a few percent, and by 15% for quasar detection, while
keeping the computational cost the same.
| [
"Karim Pichara and Pavlos Protopapas",
"['Karim Pichara' 'Pavlos Protopapas']"
] |
cs.LG math.OC stat.ML | null | 1310.7991 | null | null | http://arxiv.org/pdf/1310.7991v2 | 2014-07-28T22:55:12Z | 2013-10-30T01:12:03Z | Learning Sparsely Used Overcomplete Dictionaries via Alternating
Minimization | We consider the problem of sparse coding, where each sample consists of a
sparse linear combination of a set of dictionary atoms, and the task is to
learn both the dictionary elements and the mixing coefficients. Alternating
minimization is a popular heuristic for sparse coding, where the dictionary and
the coefficients are estimated in alternate steps, keeping the other fixed.
Typically, the coefficients are estimated via $\ell_1$ minimization, keeping
the dictionary fixed, and the dictionary is estimated through least squares,
keeping the coefficients fixed. In this paper, we establish local linear
convergence for this variant of alternating minimization and establish that the
basin of attraction for the global optimum (corresponding to the true
dictionary and the coefficients) is $O(1/s^2)$, where $s$ is the sparsity
level in each sample and the dictionary satisfies RIP. Combined with the recent
results of approximate dictionary estimation, this yields provable guarantees
for exact recovery of both the dictionary elements and the coefficients, when
the dictionary elements are incoherent.
| [
"Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth\n Netrapalli",
"['Alekh Agarwal' 'Animashree Anandkumar' 'Prateek Jain'\n 'Praneeth Netrapalli']"
] |
cs.LG cs.IR stat.ML | null | 1310.7994 | null | null | http://arxiv.org/pdf/1310.7994v1 | 2013-10-30T01:19:26Z | 2013-10-30T01:19:26Z | Necessary and Sufficient Conditions for Novel Word Detection in
Separable Topic Models | The simplicial condition and other stronger conditions that imply it have
recently played a central role in developing polynomial time algorithms with
provable asymptotic consistency and sample complexity guarantees for topic
estimation in separable topic models. Of these algorithms, those that rely
solely on the simplicial condition are impractical while the practical ones
need stronger conditions. In this paper, we demonstrate, for the first time,
that the simplicial condition is a fundamental, algorithm-independent,
information-theoretic necessary condition for consistent separable topic
estimation. Furthermore, under solely the simplicial condition, we present a
practical quadratic-complexity algorithm based on random projections which
consistently detects all novel words of all topics using only up to
second-order empirical word moments. This algorithm is amenable to distributed
implementation making it attractive for 'big-data' scenarios involving a
network of large distributed databases.
| [
"Weicong Ding, Prakash Ishwar, Mohammad H. Rohban, Venkatesh Saligrama",
"['Weicong Ding' 'Prakash Ishwar' 'Mohammad H. Rohban'\n 'Venkatesh Saligrama']"
] |
cs.LG stat.ML | null | 1310.8004 | null | null | http://arxiv.org/pdf/1310.8004v1 | 2013-10-30T02:11:48Z | 2013-10-30T02:11:48Z | Online Ensemble Learning for Imbalanced Data Streams | While both cost-sensitive learning and online learning have been studied
extensively, the effort in simultaneously dealing with these two issues is
limited. Aiming at this challenging task, a novel learning framework is proposed
in this paper. The key idea is based on the fusion of online ensemble
algorithms and state-of-the-art batch-mode cost-sensitive bagging/boosting
algorithms. Within this framework, two separately developed research areas are
bridged together, and a batch of theoretically sound online cost-sensitive
bagging and online cost-sensitive boosting algorithms are first proposed.
Unlike other online cost-sensitive learning algorithms lacking theoretical
analysis of asymptotic properties, the convergence of the proposed algorithms
is guaranteed under certain conditions, and the experimental evidence with
benchmark data sets also validates the effectiveness and efficiency of the
proposed methods.
| [
"['Boyu Wang' 'Joelle Pineau']",
"Boyu Wang, Joelle Pineau"
] |
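The abstract does not spell out its update rules; the sketch below shows one standard way online bagging can be made cost-sensitive — scaling the Poisson sampling rate of Oza-style online bagging by a per-class cost — purely as an assumed illustration of the kind of fusion described, not the authors' exact construction. Base learners can be, for example, scikit-learn `SGDClassifier` instances.

```python
import numpy as np

class OnlineCostSensitiveBagging:
    """Online bagging with cost-weighted Poisson sampling (illustrative sketch).

    Each incoming example is presented to every base learner k ~ Poisson(cost[y])
    times, so costly classes (e.g. the minority class) are effectively oversampled.
    """
    def __init__(self, base_learners, class_costs, rng=None):
        self.learners = base_learners          # objects exposing partial_fit / predict
        self.costs = class_costs               # e.g. {0: 1.0, 1: 5.0}
        self.rng = rng or np.random.default_rng(0)

    def partial_fit(self, x, y, classes):
        lam = self.costs[y]
        for learner in self.learners:
            k = self.rng.poisson(lam)
            for _ in range(k):
                learner.partial_fit(x.reshape(1, -1), [y], classes=classes)

    def predict(self, x):
        votes = [learner.predict(x.reshape(1, -1))[0] for learner in self.learners]
        return max(set(votes), key=votes.count)   # simple majority vote
```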
cs.LG stat.ML | null | 1310.8243 | null | null | http://arxiv.org/pdf/1310.8243v1 | 2013-10-30T17:49:11Z | 2013-10-30T17:49:11Z | Para-active learning | Training examples are not all equally informative. Active learning strategies
leverage this observation in order to massively reduce the number of examples
that need to be labeled. We leverage the same observation to build a generic
strategy for parallelizing learning algorithms. This strategy is effective
because the search for informative examples is highly parallelizable and
because we show that its performance does not deteriorate when the sifting
process relies on a slightly outdated model. Parallel active learning is
particularly attractive to train nonlinear models with non-linear
representations because there are few practical parallel learning algorithms
for such models. We report preliminary experiments using both kernel SVMs and
SGD-trained neural networks.
| [
"Alekh Agarwal, Leon Bottou, Miroslav Dudik, John Langford",
"['Alekh Agarwal' 'Leon Bottou' 'Miroslav Dudik' 'John Langford']"
] |
cs.LG stat.ML | null | 1310.8320 | null | null | http://arxiv.org/pdf/1310.8320v1 | 2013-10-30T20:56:50Z | 2013-10-30T20:56:50Z | Safe and Efficient Screening For Sparse Support Vector Machine | Screening is an effective technique for speeding up the training process of a
sparse learning model by removing the features that are guaranteed to be
inactive in the process. In this paper, we present an efficient screening
technique for the sparse support vector machine based on a variational
inequality. The technique is both efficient and safe.
| [
"['Zheng Zhao' 'Jun Liu']",
"Zheng Zhao, Jun Liu"
] |
cs.LG | null | 1310.8418 | null | null | http://arxiv.org/pdf/1310.8418v4 | 2015-03-16T21:06:08Z | 2013-10-31T08:00:21Z | An efficient distributed learning algorithm based on effective local
functional approximations | Scalable machine learning over big data is an important problem that is
receiving a lot of attention in recent years. On popular distributed
environments such as Hadoop running on a cluster of commodity machines,
communication costs are substantial and algorithms need to be designed suitably
considering those costs. In this paper we give a novel approach to the
distributed training of linear classifiers (involving smooth losses and L2
regularization) that is designed to reduce the total communication costs. At
each iteration, the nodes minimize locally formed approximate objective
functions; then the resulting minimizers are combined to form a descent
direction to move. Our approach gives a lot of freedom in the formation of the
approximate objective function as well as in the choice of methods to solve
them. The method is shown to have $O(\log(1/\epsilon))$ time convergence. The
method can be viewed as an iterative parameter mixing method. A special
instantiation yields a parallel stochastic gradient descent method with strong
convergence. When communication times between nodes are large, our method is
much faster than the Terascale method (Agarwal et al., 2011), which is a
state-of-the-art distributed solver based on the statistical query model
(Chu et al., 2006) that computes function and gradient values in a distributed fashion. We
also evaluate against other recent distributed methods and demonstrate superior
performance of our method.
| [
"['Dhruv Mahajan' 'Nikunj Agrawal' 'S. Sathiya Keerthi' 'S. Sundararajan'\n 'Leon Bottou']",
"Dhruv Mahajan, Nikunj Agrawal, S. Sathiya Keerthi, S. Sundararajan,\n Leon Bottou"
] |
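To make the iteration structure concrete, here is a toy single-process simulation of the outer loop described in the abstract: each "node" approximately minimizes a locally formed objective, the local minimizers are combined into a descent direction, and a line search is applied. The specific local objective (the node's own regularized logistic loss) and the simple averaging combine are just one instantiation the framework allows — assumptions for illustration, not the paper's recommended choices.

```python
import numpy as np

def logistic_loss_grad(w, X, y, reg):
    """L2-regularized logistic loss and gradient; labels y are in {-1, +1}."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-y * z))
    loss = np.mean(np.log1p(np.exp(-y * z))) + 0.5 * reg * w @ w
    grad = -(X.T @ (y * (1 - p))) / len(y) + reg * w
    return loss, grad

def local_minimize(w, X, y, reg, steps=50, lr=0.1):
    """Approximately minimize the node-local objective by plain gradient descent."""
    v = w.copy()
    for _ in range(steps):
        _, g = logistic_loss_grad(v, X, y, reg)
        v -= lr * g
    return v

def distributed_step(w, partitions, reg):
    """One outer iteration: local minimizations, then a combined descent direction."""
    local_minimizers = [local_minimize(w, X, y, reg) for X, y in partitions]
    direction = np.mean(local_minimizers, axis=0) - w   # parameter-mixing style combine
    def full_loss(u):                                   # objective over all partitions
        return sum(logistic_loss_grad(u, X, y, reg)[0] for X, y in partitions)
    t = 1.0                                             # crude backtracking line search
    while full_loss(w + t * direction) > full_loss(w) and t > 1e-4:
        t *= 0.5
    return w + t * direction
```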
cs.LG | null | 1310.8428 | null | null | http://arxiv.org/pdf/1310.8428v2 | 2013-11-17T04:04:49Z | 2013-10-31T09:00:39Z | Multilabel Classification through Random Graph Ensembles | We present new methods for multilabel classification, relying on ensemble
learning on a collection of random output graphs imposed on the multilabel and
a kernel-based structured output learner as the base classifier. For ensemble
learning, differences among the output graphs provide the required base
classifier diversity and lead to improved performance as the size of the
ensemble increases. We study different methods of forming the ensemble prediction,
including majority voting and two methods that perform inferences over the
graph structures before or after combining the base models into the ensemble.
We compare the methods against the state-of-the-art machine learning approaches
on a set of heterogeneous multilabel benchmark problems, including multilabel
AdaBoost, convex multitask feature learning, as well as single target learning
approaches represented by Bagging and SVM. In our experiments, the random graph
ensembles are very competitive and robust, ranking first or second on most of
the datasets. Overall, our results show that random graph ensembles are viable
alternatives to flat multilabel and multitask learners.
| [
"['Hongyu Su' 'Juho Rousu']",
"Hongyu Su, Juho Rousu"
] |
cs.NI cs.LG | null | 1310.8467 | null | null | http://arxiv.org/pdf/1310.8467v1 | 2013-10-31T11:57:06Z | 2013-10-31T11:57:06Z | Reinforcement Learning Framework for Opportunistic Routing in WSNs | Routing packets opportunistically is an essential part of multihop ad hoc
wireless sensor networks. Existing routing techniques are not adaptively
opportunistic. In this paper we propose an adaptive opportunistic routing
scheme that routes packets opportunistically in order to avoid packet loss.
Learning and routing are combined in a framework that explores the optimal
routing possibilities. We implemented this reinforcement learning framework
using a custom simulator. The experimental results revealed that the scheme is
able to exploit opportunism to optimize the routing of packets even though the
network structure is unknown.
| [
"G.Srinivas Rao, A.V.Ramana",
"['G. Srinivas Rao' 'A. V. Ramana']"
] |
cs.LG stat.ML | null | 1310.8499 | null | null | http://arxiv.org/pdf/1310.8499v2 | 2014-05-20T16:22:43Z | 2013-10-31T13:47:30Z | Deep AutoRegressive Networks | We introduce a deep, generative autoencoder capable of learning hierarchies
of distributed representations from data. Successive deep stochastic hidden
layers are equipped with autoregressive connections, which enable the model to
be sampled from quickly and exactly via ancestral sampling. We derive an
efficient approximate parameter estimation method based on the minimum
description length (MDL) principle, which can be seen as maximising a
variational lower bound on the log-likelihood, with a feedforward neural
network implementing approximate inference. We demonstrate state-of-the-art
generative performance on a number of classic data sets: several UCI data sets,
MNIST and Atari 2600 games.
| [
"['Karol Gregor' 'Ivo Danihelka' 'Andriy Mnih' 'Charles Blundell'\n 'Daan Wierstra']",
"Karol Gregor, Ivo Danihelka, Andriy Mnih, Charles Blundell, Daan\n Wierstra"
] |
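As a rough illustration of why autoregressive connections give fast, exact sampling, the sketch below performs ancestral sampling through a single binary layer with lower-triangular weights; the actual parameterization and layer stacking in the paper differ, so treat this only as a schematic of the sampling pattern, not the authors' model.

```python
import numpy as np

def sample_autoregressive_layer(W, b, rng):
    """Ancestral sampling of a binary layer with autoregressive (lower-triangular) weights.

    Unit j depends only on previously sampled units i < j, so one left-to-right pass
    yields an exact sample -- the property that makes this style of model fast to sample.
    """
    n = len(b)
    h = np.zeros(n)
    for j in range(n):
        logit = b[j] + W[j, :j] @ h[:j]
        p = 1.0 / (1.0 + np.exp(-logit))
        h[j] = float(rng.random() < p)
    return h
```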
cs.LG | 10.1371/journal.pone.0094137 | 1311.0202 | null | null | http://arxiv.org/abs/1311.0202v1 | 2013-10-17T03:44:18Z | 2013-10-17T03:44:18Z | A systematic comparison of supervised classifiers | Pattern recognition techniques have been employed in a myriad of industrial,
medical, commercial and academic applications. To tackle such a diversity of
data, many techniques have been devised. However, despite the long tradition of
pattern recognition research, there is no technique that yields the best
classification in all scenarios. Therefore, considering as many techniques as
possible presents itself as a fundamental practice in applications aiming at
high accuracy. Typical works comparing methods either emphasize the
performance of a given algorithm in validation tests or systematically compare
various algorithms, assuming that the practical use of these methods is done by
experts. In many occasions, however, researchers have to deal with their
practical classification tasks without an in-depth knowledge about the
underlying mechanisms behind parameters. Actually, the adequate choice of
classifiers and parameters alike in such practical circumstances constitutes a
long-standing problem and is the subject of the current paper. We carried out a
study on the performance of nine well-known classifiers implemented by the Weka
framework and compared how their accuracy depends on the parameter
configurations. The analysis of performance with default parameters
revealed that the k-nearest neighbors method exceeds by a large margin the
other methods when high-dimensional datasets are considered. When other
configurations of parameters were allowed, we found that it is possible to
improve the quality of SVM by more than 20% even if parameters are set
randomly. Taken together, the investigation conducted in this paper suggests
that, apart from the SVM implementation, Weka's default configuration of
parameters provides performance close to the one achieved with the optimal
configuration.
| [
"['D. R. Amancio' 'C. H. Comin' 'D. Casanova' 'G. Travieso' 'O. M. Bruno'\n 'F. A. Rodrigues' 'L. da F. Costa']",
"D. R. Amancio, C. H. Comin, D. Casanova, G. Travieso, O. M. Bruno, F.\n A. Rodrigues and L. da F. Costa"
] |
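The study uses Weka; as a loose analogue, the snippet below contrasts a default SVM with a small random hyperparameter search in scikit-learn, which is the flavor of experiment the abstract describes. The dataset, search ranges, and budget here are arbitrary choices of ours, and the size of the improvement will of course depend on the data.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.svm import SVC
from scipy.stats import loguniform

X_tr, X_te, y_tr, y_te = train_test_split(*load_digits(return_X_y=True), random_state=0)

# Default configuration.
default_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)

# A handful of randomly sampled configurations, in the spirit of the finding that
# even randomly chosen parameters can substantially improve over SVM defaults.
search = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-5, 1e0)},
    n_iter=10, random_state=0,
)
tuned_acc = search.fit(X_tr, y_tr).score(X_te, y_te)
print(f"default: {default_acc:.3f}  randomly searched: {tuned_acc:.3f}")
```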
cs.LG stat.ML | null | 1311.0222 | null | null | http://arxiv.org/pdf/1311.0222v2 | 2013-11-05T17:53:10Z | 2013-11-01T16:51:02Z | Online Learning with Multiple Operator-valued Kernels | We consider the problem of learning a vector-valued function f in an online
learning setting. The function f is assumed to lie in a reproducing Hilbert
space of operator-valued kernels. We describe two online algorithms for
learning f while taking into account the output structure. A first contribution
is an algorithm, ONORMA, that extends the standard kernel-based online learning
algorithm NORMA from the scalar-valued to the operator-valued setting. We report a
cumulative error bound that holds both for classification and regression. We
then define a second algorithm, MONORMA, which addresses the limitation of
pre-defining the output structure in ONORMA by learning sequentially a linear
combination of operator-valued kernels. Our experiments show that the proposed
algorithms achieve good performance results with low computational cost.
| [
"Julien Audiffren (LIF), Hachem Kadri (LIF)",
"['Julien Audiffren' 'Hachem Kadri']"
] |
math.ST cs.IT cs.LG math.IT stat.ME stat.TH | null | 1311.0274 | null | null | http://arxiv.org/pdf/1311.0274v1 | 2013-11-01T19:41:42Z | 2013-11-01T19:41:42Z | Nearly Optimal Sample Size in Hypothesis Testing for High-Dimensional
Regression | We consider the problem of fitting the parameters of a high-dimensional
linear regression model. In the regime where the number of parameters $p$ is
comparable to or exceeds the sample size $n$, a successful approach uses an
$\ell_1$-penalized least squares estimator, known as Lasso. Unfortunately,
unlike for linear estimators (e.g., ordinary least squares), no
well-established method exists to compute confidence intervals or p-values on
the basis of the Lasso estimator. Very recently, a line of work
\cite{javanmard2013hypothesis, confidenceJM, GBR-hypothesis} has addressed this
problem by constructing a debiased version of the Lasso estimator. In this
paper, we study this approach for the random design model, under the assumption
that a good estimator exists for the precision matrix of the design. Our
analysis improves over the state of the art in that it establishes nearly
optimal \emph{average} testing power if the sample size $n$ asymptotically
dominates $s_0 (\log p)^2$, with $s_0$ being the sparsity level (number of
non-zero coefficients). Earlier work obtains provable guarantees only for much
larger sample size, namely it requires $n$ to asymptotically dominate $(s_0
\log p)^2$.
In particular, for random designs with a sparse precision matrix we show that
an estimator thereof having the required properties can be computed
efficiently. Finally, we evaluate this approach on synthetic data and compare
it with earlier proposals.
| [
"['Adel Javanmard' 'Andrea Montanari']",
"Adel Javanmard and Andrea Montanari"
] |
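For orientation, the debiasing construction used in this line of work takes the following form (in our notation, with $M$ an estimate of the precision matrix of the design and $\hat{\Sigma} = X^{\top}X/n$); the exact conditions and the way $M$ is constructed vary across the cited papers, so this display is heuristic:

$$\hat{\theta}^{\mathrm{d}} \;=\; \hat{\theta}^{\mathrm{Lasso}} \;+\; \frac{1}{n}\, M X^{\top}\bigl(y - X\hat{\theta}^{\mathrm{Lasso}}\bigr), \qquad \sqrt{n}\,\bigl(\hat{\theta}^{\mathrm{d}}_{i} - \theta_{0,i}\bigr) \;\approx\; \mathcal{N}\!\bigl(0,\; \sigma^{2}\,[M\hat{\Sigma}M^{\top}]_{ii}\bigr),$$

from which coordinate-wise confidence intervals and p-values follow in the usual way.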
stat.ML cs.LG | null | 1311.0466 | null | null | http://arxiv.org/pdf/1311.0466v1 | 2013-11-03T13:51:55Z | 2013-11-03T13:51:55Z | Thompson Sampling for Complex Bandit Problems | We consider stochastic multi-armed bandit problems with complex actions over
a set of basic arms, where the decision maker plays a complex action rather
than a basic arm in each round. The reward of the complex action is some
function of the basic arms' rewards, and the feedback observed may not
necessarily be the reward per-arm. For instance, when the complex actions are
subsets of the arms, we may only observe the maximum reward over the chosen
subset. Thus, feedback across complex actions may be coupled due to the nature
of the reward function. We prove a frequentist regret bound for Thompson
sampling in a very general setting involving parameter, action and observation
spaces and a likelihood function over them. The bound holds for
discretely-supported priors over the parameter space and without additional
structural properties such as closed-form posteriors, conjugate prior structure
or independence across arms. The regret bound scales logarithmically with time
but, more importantly, with an improved constant that non-trivially captures
the coupling across complex actions due to the structure of the rewards. As
applications, we derive improved regret bounds for classes of complex bandit
problems involving selecting subsets of arms, including the first nontrivial
regret bounds for nonlinear MAX reward feedback from subsets.
| [
"['Aditya Gopalan' 'Shie Mannor' 'Yishay Mansour']",
"Aditya Gopalan, Shie Mannor and Yishay Mansour"
] |
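The setting is easiest to see in a small concrete instance. The sketch below assumes Bernoulli basic arms, subset actions, and MAX-of-subset feedback (one of the cases mentioned in the abstract), with an exact Bayes update over a discretely-supported prior; all names and modelling choices here are ours, not the paper's.

```python
import numpy as np

def thompson_max_of_subset(theta_grid, prior, subsets, pull, T, rng=None):
    """Thompson sampling with a discretely-supported prior and MAX-of-subset feedback.

    theta_grid : (m, K) candidate mean-reward vectors for K Bernoulli basic arms
    prior      : (m,) prior probabilities over the candidate parameter vectors
    subsets    : list of tuples of basic-arm indices (the complex actions)
    pull       : callback(subset) -> 0/1 observed maximum reward of the chosen arms
    """
    rng = rng or np.random.default_rng(0)
    post = prior.astype(float).copy()
    for _ in range(T):
        theta = theta_grid[rng.choice(len(post), p=post)]      # sample a parameter vector
        # Under Bernoulli arms, E[max over S] = P(at least one chosen arm fires).
        action = max(subsets, key=lambda S: 1 - np.prod(1 - theta[list(S)]))
        obs = pull(action)                                      # only the max is observed
        # Exact Bayes update over the discrete support (no conjugacy needed).
        p1 = 1 - np.prod(1 - theta_grid[:, list(action)], axis=1)
        post *= p1 if obs else (1 - p1)
        post /= post.sum()
    return post
```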
stat.ML cs.LG | null | 1311.0468 | null | null | http://arxiv.org/pdf/1311.0468v1 | 2013-11-03T14:18:56Z | 2013-11-03T14:18:56Z | Thompson Sampling for Online Learning with Linear Experts | In this note, we present a version of the Thompson sampling algorithm for the
problem of online linear generalization with full information (i.e., the
experts setting), studied by Kalai and Vempala, 2005. The algorithm uses a
Gaussian prior and time-varying Gaussian likelihoods, and we show that it
essentially reduces to Kalai and Vempala's Follow-the-Perturbed-Leader
strategy, with exponentially distributed noise replaced by Gaussian noise. This
implies sqrt(T) regret bounds for Thompson sampling (with time-varying
likelihood) for online learning with full information.
| [
"['Aditya Gopalan']",
"Aditya Gopalan"
] |
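A compact way to state the reduction: with a Gaussian prior and time-varying Gaussian likelihoods, the Thompson-sampling play at each round is the leader under Gaussian perturbations of the cumulative losses. The sketch below implements that perturbed-leader rule; the particular noise scaling is our assumption, not a claim about the note's exact constants.

```python
import numpy as np

def gaussian_ftpl(loss_matrix, scale=1.0, rng=None):
    """Follow-the-Perturbed-Leader with Gaussian perturbations (full information).

    loss_matrix[t, i] is the loss of expert i at round t; returns the total loss incurred.
    """
    rng = rng or np.random.default_rng(0)
    T, n = loss_matrix.shape
    cum = np.zeros(n)
    total = 0.0
    for t in range(T):
        noise = rng.normal(scale=scale * np.sqrt(t + 1), size=n)  # assumed time-varying scaling
        choice = np.argmin(cum + noise)        # follow the perturbed leader
        total += loss_matrix[t, choice]
        cum += loss_matrix[t]
    return total
```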
cs.LG cs.DC | null | 1311.0636 | null | null | http://arxiv.org/pdf/1311.0636v1 | 2013-11-04T10:31:11Z | 2013-11-04T10:31:11Z | A Parallel SGD method with Strong Convergence | This paper proposes a novel parallel stochastic gradient descent (SGD) method
that is obtained by applying parallel sets of SGD iterations (each set
operating on one node using the data residing in it) for finding the direction
in each iteration of a batch descent method. The method has strong convergence
properties. Experiments on datasets with high dimensional feature spaces show
the value of this method.
| [
"['Dhruv Mahajan' 'S. Sathiya Keerthi' 'S. Sundararajan' 'Leon Bottou']",
"Dhruv Mahajan, S. Sathiya Keerthi, S. Sundararajan, Leon Bottou"
] |
stat.ML cs.LG cs.NE | null | 1311.0701 | null | null | http://arxiv.org/pdf/1311.0701v7 | 2014-03-05T19:32:29Z | 2013-11-04T13:56:23Z | On Fast Dropout and its Applicability to Recurrent Networks | Recurrent Neural Networks (RNNs) are rich models for the processing of
sequential data. Recent work on advancing the state of the art has been focused
on the optimization or modelling of RNNs, mostly motivated by addressing the
problems of the vanishing and exploding gradients. The control of overfitting
has seen considerably less attention. This paper contributes to that by
analyzing fast dropout, a recent regularization method for generalized linear
models and neural networks from a back-propagation inspired perspective. We
show that fast dropout implements a quadratic form of an adaptive,
per-parameter regularizer, which rewards large weights in the light of
underfitting, penalizes them for overconfident predictions and vanishes at
minima of an unregularized training loss. The derivatives of that regularizer
are exclusively based on the training error signal. One consequence of this is
the absence of a global weight attractor, which is particularly appealing for
RNNs, since the dynamics are not biased towards a certain regime. We positively
test the hypothesis that this improves the performance of RNNs on four musical
data sets.
| [
"Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen,\n Sebastian Urban, Patrick van der Smagt",
"['Justin Bayer' 'Christian Osendorfer' 'Daniela Korhammer' 'Nutan Chen'\n 'Sebastian Urban' 'Patrick van der Smagt']"
] |