title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Efficient Blind Compressed Sensing Using Sparsifying Transforms with
Convergence Guarantees and Application to MRI | cs.LG stat.ML | Natural signals and images are well-known to be approximately sparse in
transform domains such as Wavelets and DCT. This property has been heavily
exploited in various applications in image processing and medical imaging.
Compressed sensing exploits the sparsity of images or image patches in a
transform domain or synthesis dictionary to reconstruct images from
undersampled measurements. In this work, we focus on blind compressed sensing,
where the underlying sparsifying transform is a priori unknown, and propose a
framework to simultaneously reconstruct the underlying image as well as the
sparsifying transform from highly undersampled measurements. The proposed block
coordinate descent type algorithms involve highly efficient optimal updates.
Importantly, we prove that although the proposed blind compressed sensing
formulations are highly nonconvex, our algorithms are globally convergent
(i.e., they converge from any initialization) to the set of critical points of
the objectives defining the formulations. These critical points are guaranteed
to be at least partial global and partial local minimizers. The exact point(s)
of convergence may depend on initialization. We illustrate the usefulness of
the proposed framework for magnetic resonance image reconstruction from highly
undersampled k-space measurements. As compared to previous methods involving
the synthesis dictionary model, our approach is much faster, while also
providing promising reconstruction quality.
| Saiprasad Ravishankar and Yoram Bresler | null | 1501.02923 | null | null |
Random Bits Regression: a Strong General Predictor for Big Data | stat.ML cs.LG | To improve accuracy and speed of regressions and classifications, we present
a data-based prediction method, Random Bits Regression (RBR). This method first
generates a large number of random binary intermediate/derived features based
on the original input matrix, and then performs regularized linear/logistic
regression on those intermediate/derived features to predict the outcome.
Benchmark analyses on a simulated dataset, UCI machine learning repository
datasets and a GWAS dataset showed that RBR outperforms other popular methods
in accuracy and robustness. RBR (available on
https://sourceforge.net/projects/rbr/) is very fast and requires only a
reasonable amount of memory, and therefore provides a strong, robust and fast
predictor in the big data era.
| Yi Wang, Yi Li, Momiao Xiong, Li Jin | 10.1186/s41044-016-0010-4 | 1501.02990 | null | null |
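The two-stage recipe described above (random binary intermediate features, then regularized logistic regression) is easy to prototype. The following is a minimal sketch of that idea, not the released RBR package linked above; the number of bits, the random thresholding scheme, and the regularization strength are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def random_bits(X, n_bits=1000, seed=0):
    """Map X (n_samples, n_features) to n_bits random binary features.
    Each bit thresholds a random linear projection at a cut point drawn
    from the projected data. In real use, W and t would be kept and
    reused to featurize test points."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_bits))       # random projection directions
    Z = X @ W
    idx = rng.integers(0, Z.shape[0], size=n_bits)  # one random cut point per bit
    t = Z[idx, np.arange(n_bits)]
    return (Z > t).astype(np.float64)

# toy usage: two Gaussian blobs, scored in-sample for brevity
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(1.0, 1.0, (200, 5))])
y = np.repeat([0, 1], 200)
B = random_bits(X)                                  # binary intermediate features
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(B, y)
print("training accuracy:", clf.score(B, y))
```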
On Generalizing the C-Bound to the Multiclass and Multi-label Settings | stat.ML cs.LG | The C-bound, introduced in Lacasse et al., gives a tight upper bound on the
risk of a binary majority vote classifier. In this work, we present a first
step towards extending this work to more complex outputs, by providing
generalizations of the C-bound to the multiclass and multi-label settings.
| Francois Laviolette, Emilie Morvant (LHC), Liva Ralaivola,
Jean-Francis Roy | null | 1501.03001 | null | null |
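As background for the abstract above, the binary C-bound of Lacasse et al. that this work generalizes can be stated as follows (standard margin notation assumed; this is the known binary result, not the multiclass or multi-label extensions proposed in the paper).

```latex
% Binary C-bound (Lacasse et al.): for a Q-weighted majority vote B_Q with
% margin M_Q(x,y) = E_{h ~ Q}[ y\, h(x) ], y in {-1,+1}, and E[M_Q] > 0,
R(B_Q) \;\le\; 1 \;-\;
  \frac{\big(\mathbb{E}_{(x,y)\sim D}[M_Q(x,y)]\big)^{2}}
       {\mathbb{E}_{(x,y)\sim D}[M_Q(x,y)^{2}]}
```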
An Improvement to the Domain Adaptation Bound in a PAC-Bayesian context | stat.ML cs.LG | This paper provides a theoretical analysis of domain adaptation based on the
PAC-Bayesian theory. We propose an improvement of the previous domain
adaptation bound obtained by Germain et al. in two ways. We first give another
generalization bound that is tighter and easier to interpret. Moreover, we provide a
new analysis of the constant term appearing in the bound that can be of high
interest for developing new algorithmic solutions.
| Pascal Germain, Amaury Habrard (LHC), Francois Laviolette, Emilie
Morvant (LHC) | null | 1501.03002 | null | null |
Exploring the efficacy of molecular fragments of different complexity in
computational SAR modeling | cs.CE cs.LG | An important first step in computational SAR modeling is to transform the
compounds into a representation that can be processed by predictive modeling
techniques. This is typically a feature vector where each feature indicates the
presence or absence of a molecular fragment. While the traditional approach to
SAR modeling employed size restricted fingerprints derived from path fragments,
much research in recent years focussed on mining more complex graph based
fragments. Today, there seems to be a growing consensus in the data mining
community that these more expressive fragments should be more useful. We
question this consensus and show experimentally that fragments of low
complexity, i.e. sequences, perform better than equally large sets of more
complex ones, an effect we explain by pairwise correlation among fragments and
the ability of a fragment set to encode compounds from different classes
distinctly. The size restriction on these sets is based on ordering the
fragments by class-correlation scores. In addition, we also evaluate the
effects of using a significance value instead of a length restriction for path
fragments and find a significant reduction in the number of features with
little loss in performance.
| Albrecht Zimmermann, Bj\"orn Bringmann, Luc De Raedt | null | 1501.03015 | null | null |
Deep Learning with Nonparametric Clustering | cs.LG | Clustering is an essential problem in machine learning and data mining. One
vital factor that impacts clustering performance is how to learn or design the
data representation (or features). Fortunately, recent advances in deep
learning can learn unsupervised features effectively, and have yielded state of
the art performance in many classification problems, such as character
recognition, object recognition and document categorization. However, little
attention has been paid to the potential of deep learning for unsupervised
clustering problems. In this paper, we propose a deep belief network with
nonparametric clustering. As an unsupervised method, our model first leverages
the advantages of deep learning for feature representation and dimension
reduction. Then, it performs nonparametric clustering under a maximum margin
framework -- a discriminative clustering model that can be trained online
efficiently in the code space. Lastly, the model parameters are refined in the deep
belief network. Thus, this model can learn features for clustering and infer
model complexity in a unified framework. The experimental results show the
advantage of our approach over competitive baselines.
| Gang Chen | null | 1501.03084 | null | null |
Using Riemannian geometry for SSVEP-based Brain Computer Interface | cs.LG stat.ML | Riemannian geometry has been applied to Brain Computer Interface (BCI) for
brain signals classification yielding promising results. Studying
electroencephalographic (EEG) signals from their associated covariance matrices
allows a mitigation of common sources of variability (electronic, electrical,
biological) by constructing a representation which is invariant to these
perturbations. While working in Euclidean space with covariance matrices is
known to be error-prone, one might take advantage of algorithmic advances in
information geometry and matrix manifold to implement methods for Symmetric
Positive-Definite (SPD) matrices. This paper proposes a comprehensive review of
the current tools of information geometry and how they can be applied to
covariance matrices of EEG. In practice, covariance matrices must be
estimated, so a thorough study of the estimators is conducted on a real EEG
dataset. As a main contribution, this paper proposes an online implementation
of a classifier in the Riemannian space and its subsequent assessment in
Steady-State Visually Evoked Potential (SSVEP) experiments.
| Emmanuel K. Kalunga, Sylvain Chevallier, Quentin Barthelemy | 10.1016/j.neucom.2016.01.007 | 1501.03227 | null | null |
Classification with Low Rank and Missing Data | cs.LG | We consider classification and regression tasks where we have missing data
and assume that the (clean) data resides in a low rank subspace. Finding a
hidden subspace is known to be computationally hard. Nevertheless, using a
non-proper formulation we give an efficient agnostic algorithm that classifies
as well as the best linear classifier coupled with the best low-dimensional
subspace in which the data resides. A direct implication is that our algorithm
can linearly (and non-linearly through kernels) classify provably as well as
the best classifier that has access to the full data.
| Elad Hazan and Roi Livni and Yishay Mansour | null | 1501.03273 | null | null |
Hard to Cheat: A Turing Test based on Answering Questions about Images | cs.AI cs.CL cs.CV cs.LG | Progress in language and image understanding by machines has sparked the
interest of the research community in more open-ended, holistic tasks, and
refueled an old AI dream of building intelligent machines. We discuss a few
prominent challenges that characterize such holistic tasks and argue for
"question answering about images" as a particular appealing instance of such a
holistic task. In particular, we point out that it is a version of a Turing
Test that is likely to be more robust to over-interpretations and contrast it
with tasks like grounding and generation of descriptions. Finally, we discuss
tools to measure progress in this field.
| Mateusz Malinowski and Mario Fritz | null | 1501.03302 | null | null |
Unbiased Bayes for Big Data: Paths of Partial Posteriors | stat.ML cs.LG stat.ME | Key quantities of interest in Bayesian inference are expectations of
functions with respect to a posterior distribution. Markov Chain Monte Carlo is
a fundamental tool to consistently compute these expectations via averaging
samples drawn from an approximate posterior. However, its feasibility is being
challenged in the era of so called Big Data as all data needs to be processed
in every iteration. Realising that such simulation is an unnecessarily hard
problem if the goal is estimation, we construct a computationally scalable
methodology that allows unbiased estimation of the required expectations --
without explicit simulation from the full posterior. The scheme's variance is
finite by construction and straightforward to control, leading to algorithms
that are provably unbiased and naturally arrive at a desired error tolerance.
This is achieved at an average computational complexity that is sub-linear in
the size of the dataset and its free parameters are easy to tune. We
demonstrate the utility and generality of the methodology on a range of common
statistical models applied to large-scale benchmark and real-world datasets.
| Heiko Strathmann, Dino Sejdinovic, Mark Girolami | null | 1501.03326 | null | null |
Dirichlet Process Parsimonious Mixtures for clustering | stat.ML cs.LG stat.ME | The parsimonious Gaussian mixture models, which exploit an eigenvalue
decomposition of the group covariance matrices of the Gaussian mixture, have
shown their success in particular in cluster analysis. Their estimation is in
general performed by maximum likelihood estimation and has also been considered
from a parametric Bayesian perspective. We propose new Dirichlet Process
Parsimonious mixtures (DPPM) which represent a Bayesian nonparametric
formulation of these parsimonious Gaussian mixture models. The proposed DPPM
models are Bayesian nonparametric parsimonious mixture models that allow one to
simultaneously infer the model parameters, the optimal number of mixture
components and the optimal parsimonious mixture structure from the data. We
develop a Gibbs sampling technique for maximum a posteriori (MAP) estimation of
the developed DPPM models and provide a Bayesian model selection framework by
using Bayes factors. We apply them to cluster simulated data and real data
sets, and compare them to the standard parsimonious mixture models. The
obtained results highlight the effectiveness of the proposed nonparametric
parsimonious mixture models as a good nonparametric alternative for the
parametric parsimonious models.
| Faicel Chamroukhi, Marius Bartcus, Herv\'e Glotin | null | 1501.03347 | null | null |
A Proximal Approach for Sparse Multiclass SVM | cs.LG | Sparsity-inducing penalties are useful tools to design multiclass support
vector machines (SVMs). In this paper, we propose a convex optimization
approach for efficiently and exactly solving the multiclass SVM learning
problem involving a sparse regularization and the multiclass hinge loss
formulated by Crammer and Singer. We provide two algorithms: the first one
dealing with the hinge loss as a penalty term, and the other one addressing the
case when the hinge loss is enforced through a constraint. The related convex
optimization problems can be efficiently solved thanks to the flexibility
offered by recent primal-dual proximal algorithms and epigraphical splitting
techniques. Experiments carried out on several datasets demonstrate the
benefit of considering the exact expression of the hinge loss rather than a
smooth approximation. The efficiency of the proposed algorithms w.r.t. several
state-of-the-art methods is also assessed through comparisons of execution
times.
| G. Chierchia, Nelly Pustelnik, Jean-Christophe Pesquet, B.
Pesquet-Popescu | null | 1501.03669 | null | null |
Multi-view learning for multivariate performance measures optimization | cs.LG | In this paper, we propose the problem of optimizing multivariate performance
measures from multi-view data, and an effective method to solve it. This
problem has two features: the data points are presented by multiple views, and
the target of learning is to optimize complex multivariate performance
measures. We propose to learn a linear discriminant function for each view,
and combine them to construct an overall multivariate mapping function for
multi-view data. To learn the parameters of the linear discriminant functions
of different views to optimize multivariate performance measures, we formulate
an optimization problem. In this problem, we propose to minimize the complexity
of the linear discriminant function of each view, encourage the consistency
of the responses of different views over the same data points, and minimize the
upper bound of a given multivariate performance measure. To solve this
problem, we employ the cutting-plane method in an iterative algorithm. In each
iteration, we update a set of constraints, and optimize the mapping function
parameters of the views one by one.
| Jim Jing-Yan Wang | null | 1501.03786 | null | null |
The Fast Convergence of Incremental PCA | cs.LG stat.ML | We consider a situation in which we see samples in $\mathbb{R}^d$ drawn
i.i.d. from some distribution with mean zero and unknown covariance A. We wish
to compute the top eigenvector of A in an incremental fashion - with an
algorithm that maintains an estimate of the top eigenvector in O(d) space, and
incrementally adjusts the estimate with each new data point that arrives. Two
classical such schemes are due to Krasulina (1969) and Oja (1983). We give
finite-sample convergence rates for both.
| Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund | null | 1501.03796 | null | null |
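The incremental scheme analyzed above is simple to state; a minimal numpy sketch of Oja's update for the top eigenvector is given below (the step-size schedule and synthetic covariance are assumptions, and Krasulina's variant differs slightly).

```python
import numpy as np

def oja_top_eigenvector(sample_stream, d, c=1.0):
    """Incrementally estimate the top eigenvector of the covariance of a
    zero-mean stream, keeping only O(d) state (Oja, 1983)."""
    rng = np.random.default_rng(0)
    v = rng.normal(size=d)
    v /= np.linalg.norm(v)
    for n, x in enumerate(sample_stream, start=1):
        gamma = c / n                     # decaying step size (an assumption)
        v = v + gamma * x * (x @ v)       # Oja update
        v /= np.linalg.norm(v)            # keep the estimate on the unit sphere
    return v

# toy check: covariance with a dominant first direction
d = 10
A = np.diag([5.0] + [1.0] * (d - 1))      # top eigenvector is e_1
rng = np.random.default_rng(1)
stream = (rng.multivariate_normal(np.zeros(d), A) for _ in range(20000))
v = oja_top_eigenvector(stream, d)
print("alignment with e_1:", abs(v[0]))   # should be close to 1
```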
PAC-Bayes with Minimax for Confidence-Rated Transduction | cs.LG stat.ML | We consider using an ensemble of binary classifiers for transductive
prediction, when unlabeled test data are known in advance. We derive minimax
optimal rules for confidence-rated prediction in this setting. By using
PAC-Bayes analysis on these rules, we obtain data-dependent performance
guarantees without distributional assumptions on the data. Our analysis
techniques are readily extended to a setting in which the predictor is allowed
to abstain.
| Akshay Balsubramani, Yoav Freund | null | 1501.03838 | null | null |
Understanding Kernel Ridge Regression: Common behaviors from simple
functions to density functionals | physics.comp-ph cs.LG stat.ML | Accurate approximations to density functionals have recently been obtained
via machine learning (ML). By applying ML to a simple function of one variable
without any random sampling, we extract the qualitative dependence of errors on
hyperparameters. We find universal features of the behavior in extreme limits,
including both very small and very large length scales, and the noise-free
limit. We show how such features arise in ML models of density functionals.
| Kevin Vu, John Snyder, Li Li, Matthias Rupp, Brandon F. Chen, Tarek
Khelif, Klaus-Robert M\"uller, Kieron Burke | null | 1501.03854 | null | null |
Feature Selection based on Machine Learning in MRIs for Hippocampal
Segmentation | physics.med-ph cs.CV cs.LG | Neurodegenerative diseases are frequently associated with structural changes
in the brain. Magnetic Resonance Imaging (MRI) scans can show these variations
and therefore be used as a supportive feature for a number of neurodegenerative
diseases. The hippocampus has been known to be a biomarker for Alzheimer
disease and other neurological and psychiatric diseases. However, it requires
accurate, robust and reproducible delineation of hippocampal structures. Fully
automatic methods usually take a voxel-based approach, in which a number of
local features are calculated for each voxel. In this paper we compared four different
techniques for feature selection from a set of 315 features extracted for each
voxel: (i) filter method based on the Kolmogorov-Smirnov test; two wrapper
methods, respectively, (ii) Sequential Forward Selection and (iii) Sequential
Backward Elimination; and (iv) embedded method based on the Random Forest
Classifier on a set of 10 T1-weighted brain MRIs and tested on an independent
set of 25 subjects. The resulting segmentations were compared with manual
reference labelling. By using only 23 features for each voxel (sequential
backward elimination) we obtained performance comparable to that of the
standard state-of-the-art tool FreeSurfer.
| Sabina Tangaro, Nicola Amoroso, Massimo Brescia, Stefano Cavuoti,
Andrea Chincarini, Rosangela Errico, Paolo Inglese, Giuseppe Longo, Rosalia
Maglietta, Andrea Tateo, Giuseppe Riccio, Roberto Bellotti | 10.1155/2015/814104 | 1501.03915 | null | null |
Value Iteration with Options and State Aggregation | cs.AI cs.LG stat.ML | This paper presents a way of solving Markov Decision Processes that combines
state abstraction and temporal abstraction. Specifically, we combine state
aggregation with the options framework and demonstrate that they work well
together and indeed it is only after one combines the two that the full benefit
of each is realized. We introduce a hierarchical value iteration algorithm
where we first coarsely solve subgoals and then use these approximate solutions
to exactly solve the MDP. This algorithm solved several problems faster than
vanilla value iteration.
| Kamil Ciosek and David Silver | null | 1501.03959 | null | null |
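For context, plain tabular value iteration (the "vanilla" baseline mentioned above) looks like the sketch below; the paper's hierarchical variant, which first solves subgoal options coarsely, is not reproduced here, and the toy MDP is an assumption.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Vanilla value iteration on a tabular MDP.
    P: (A, S, S) transition probabilities, R: (S, A) expected rewards."""
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * np.einsum("asn,n->as", P, V).T   # (S, A) action values
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)               # values and greedy policy
        V = V_new

# toy 2-state, 2-action MDP
P = np.array([[[0.9, 0.1], [0.2, 0.8]],     # action 0
              [[0.1, 0.9], [0.8, 0.2]]])    # action 1
R = np.array([[1.0, 0.0],                   # rewards in state 0
              [0.0, 2.0]])                  # rewards in state 1
V, pi = value_iteration(P, R)
print("V =", V, "policy =", pi)
```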
Stochastic Gradient Based Extreme Learning Machines For Online Learning
of Advanced Combustion Engines | cs.NE cs.LG cs.SY | In this article, a stochastic gradient based online learning algorithm for
Extreme Learning Machines (ELM) is developed (SG-ELM). A stability criterion
based on Lyapunov approach is used to prove both asymptotic stability of
estimation error and stability in the estimated parameters suitable for
identification of nonlinear dynamic systems. The developed algorithm not only
guarantees stability, but also reduces the computational demand compared to the
OS-ELM approach based on recursive least squares. In order to demonstrate the
effectiveness of the algorithm on a real-world scenario, an advanced combustion
engine identification problem is considered. The algorithm is applied to two
case studies: An online regression learning for system identification of a
Homogeneous Charge Compression Ignition (HCCI) Engine and an online
classification learning (with class imbalance) for identifying the dynamic
operating envelope of the HCCI Engine. The results indicate that the accuracy
of the proposed SG-ELM is comparable to that of the state-of-the-art but adds
stability and a reduction in computational effort.
| Vijay Manikandan Janakiraman and XuanLong Nguyen and Dennis Assanis | null | 1501.03975 | null | null |
Stochastic Local Interaction (SLI) Model: Interfacing Machine Learning
and Geostatistics | cs.LG stat.ML | Machine learning and geostatistics are powerful mathematical frameworks for
modeling spatial data. Both approaches, however, suffer from poor scaling of
the required computational resources for large data applications. We present
the Stochastic Local Interaction (SLI) model, which employs a local
representation to improve computational efficiency. SLI combines geostatistics
and machine learning with ideas from statistical physics and computational
geometry. It is based on a joint probability density function defined by an
energy functional which involves local interactions implemented by means of
kernel functions with adaptive local kernel bandwidths. SLI is expressed in
terms of an explicit, typically sparse, precision (inverse covariance) matrix.
This representation leads to a semi-analytical expression for interpolation
(prediction), which is valid in any number of dimensions and avoids the
computationally costly covariance matrix inversion.
| Dionissios T. Hristopulos | 10.1016/j.cageo.2015.05.018 | 1501.04053 | null | null |
Generalised Random Forest Space Overview | cs.LG | Assuming a view of the Random Forest as a special case of a nested ensemble
of interchangeable modules, we construct a generalisation space allowing one to
easily develop novel methods based on this algorithm. We discuss the role and
required properties of modules at each level, especially in context of some
already proposed RF generalisations.
| Miron B. Kursa | null | 1501.04244 | null | null |
Comment on "Clustering by fast search and find of density peaks" | cs.LG | In [1], a clustering algorithm was given to find the centers of clusters
quickly. However, the accuracy of this algorithm depends heavily on the
threshold value d_c. Furthermore, [1] does not provide any efficient way to
select this threshold; instead, one has to estimate the value
of d_c from one's subjective experience. In this paper, based on the data
field [2], we propose a new way to automatically extract the threshold value of
d_c from the original data set by using the potential entropy of the data field.
For any data set to be clustered, the most reasonable value of d_c can be
objectively calculated from the data set by using our proposed method. The
experiments in [1] are redone with our proposed method on the same experimental
data set used in [1], and the results show that the problem of calculating
the threshold value of d_c in [1] has been solved by our method.
| Shuliang Wang, Dakui Wang, Caoyuan Li, Yan Li | null | 1501.04267 | null | null |
Regularized maximum correntropy machine | cs.LG | In this paper we investigate the usage of regularized correntropy framework
for learning of classifiers from noisy labels. The class label predictors
learned by minimizing traditional loss functions are sensitive to the noisy
and outlying labels of training samples, because the traditional loss
functions are applied equally to all the samples. To solve this problem, we
propose to learn the class label predictors by maximizing the correntropy
between the predicted labels and the true labels of the training samples, under
the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we
regularize the predictor parameter to control the complexity of the predictor.
The learning problem is formulated by an objective function considering the
parameter regularization and MCC simultaneously. By optimizing the objective
function alternately, we develop a novel predictor learning algorithm. The
experiments on two challenging pattern classification tasks show that it
significantly outperforms the machines with traditional loss functions.
| Jim Jing-Yan Wang, Yunji Wang, Bing-Yi Jing, Xin Gao | null | 1501.04282 | null | null |
Pairwise Constraint Propagation on Multi-View Data | cs.CV cs.LG | This paper presents a graph-based learning approach to pairwise constraint
propagation on multi-view data. Although pairwise constraint propagation has
been studied extensively, pairwise constraints are usually defined over pairs
of data points from a single view, i.e., only intra-view constraint propagation
is considered for multi-view tasks. In fact, very little attention has been
paid to inter-view constraint propagation, which is more challenging since
pairwise constraints are now defined over pairs of data points from different
views. In this paper, we propose to decompose the challenging inter-view
constraint propagation problem into semi-supervised learning subproblems so
that they can be efficiently solved based on graph-based label propagation. To
the best of our knowledge, this is the first attempt to give an efficient
solution to inter-view constraint propagation from a semi-supervised learning
viewpoint. Moreover, since graph-based label propagation has been adopted for
basic optimization, we develop two constrained graph construction methods for
inter-view constraint propagation, which only differ in how the intra-view
pairwise constraints are exploited. The experimental results in cross-view
retrieval have shown the promising performance of our inter-view constraint
propagation.
| Zhiwu Lu and Liwei Wang | null | 1501.04284 | null | null |
Information Theory and its Relation to Machine Learning | cs.IT cs.LG math.IT | In this position paper, I first describe a new perspective on machine
learning (ML) by four basic problems (or levels), namely, "What to learn?",
"How to learn?", "What to evaluate?", and "What to adjust?". The paper stresses
more on the first level of "What to learn?", or "Learning Target Selection".
Towards this primary problem within the four levels, I briefly review the
existing studies about the connection between information theoretical learning
(ITL [1]) and machine learning. A theorem is given on the relation between the
empirically-defined similarity measure and information measures. Finally, a
conjecture is proposed for pursuing a unified mathematical interpretation to
learning target selection.
| Bao-Gang Hu | null | 1501.04309 | null | null |
Clustering based on the In-tree Graph Structure and Affinity Propagation | cs.LG cs.CV stat.ML | A recently proposed clustering method, called the Nearest Descent (ND), can
organize the whole dataset into a sparsely connected graph, called the In-tree.
This ND-based In-tree structure proves able to reveal the clustering structure
underlying the dataset, except for one imperfection: there are some
undesired edges in this In-tree which need to be removed. Here, we propose
an effective way to automatically remove the undesired edges in In-tree via an
effective combination of the In-tree structure with affinity propagation (AP).
The key for the combination is to add edges between the reachable nodes in
In-tree before using AP to remove the undesired edges. The experiments on both
synthetic and real datasets demonstrate the effectiveness of the proposed
method.
| Teng Qiu, Yongjie Li | null | 1501.04318 | null | null |
Deep Belief Nets for Topic Modeling | cs.CL cs.LG stat.ML | Applying traditional collaborative filtering to digital publishing is
challenging because user data is very sparse due to the high volume of
documents relative to the number of users. Content based approaches, on the
other hand, are attractive because textual content is often very informative. In
this paper we describe large-scale content based collaborative filtering for
digital publishing. To solve the digital publishing recommender problem we
compare two approaches: latent Dirichlet allocation (LDA) and deep belief nets
(DBN) that both find low-dimensional latent representations for documents.
Efficient retrieval can be carried out in the latent representation. We work
both on public benchmarks and digital media content provided by Issuu, an
online publishing platform. This article also comes with a newly developed deep
belief nets toolbox for topic modeling tailored towards performance evaluation
of the DBN model and comparisons to the LDA model.
| Lars Maaloe and Morten Arngren and Ole Winther | null | 1501.04325 | null | null |
Mathematical Language Processing: Automatic Grading and Feedback for
Open Response Mathematical Questions | stat.ML cs.AI cs.CL cs.LG | While computer and communication technologies have provided effective means
to scale up many aspects of education, the submission and grading of
assessments such as homework assignments and tests remains a weak link. In this
paper, we study the problem of automatically grading the kinds of open response
mathematical questions that figure prominently in STEM (science, technology,
engineering, and mathematics) courses. Our data-driven framework for
mathematical language processing (MLP) leverages solution data from a large
number of learners to evaluate the correctness of their solutions, assign
partial-credit scores, and provide feedback to each learner on the likely
locations of any errors. MLP takes inspiration from the success of natural
language processing for text data and comprises three main steps. First, we
convert each solution to an open response mathematical question into a series
of numerical features. Second, we cluster the features from several solutions
to uncover the structures of correct, partially correct, and incorrect
solutions. We develop two different clustering approaches, one that leverages
generic clustering algorithms and one based on Bayesian nonparametrics. Third,
we automatically grade the remaining (potentially large number of) solutions
based on their assigned cluster and one instructor-provided grade per cluster.
As a bonus, we can track the cluster assignment of each step of a multistep
solution and determine when it departs from a cluster of correct solutions,
which enables us to indicate the likely locations of errors to learners. We
test and validate MLP on real-world MOOC data to demonstrate how it can
substantially reduce the human effort required in large-scale educational
platforms.
| Andrew S. Lan and Divyanshu Vats and Andrew E. Waters and Richard G.
Baraniuk | null | 1501.04346 | null | null |
Structure Learning in Bayesian Networks of Moderate Size by Efficient
Sampling | cs.AI cs.LG stat.ML | We study the Bayesian model averaging approach to learning Bayesian network
structures (DAGs) from data. We develop new algorithms including the first
algorithm that is able to efficiently sample DAGs according to the exact
structure posterior. The DAG samples can then be used to construct estimators
for the posterior of any feature. We theoretically prove good properties of our
estimators and empirically show that our estimators considerably outperform the
estimators from the previous state-of-the-art methods.
| Ru He, Jin Tian, Huaiqing Wu | null | 1501.04370 | null | null |
Statistical-mechanical analysis of pre-training and fine tuning in deep
learning | stat.ML cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.LG | In this paper, we present a statistical-mechanical analysis of deep learning.
We elucidate some of the essential components of deep learning---pre-training
by unsupervised learning and fine tuning by supervised learning. We formulate
the extraction of features from the training data as a margin criterion in a
high-dimensional feature-vector space. The self-organized classifier is then
supplied with small amounts of labelled data, as in deep learning. Although we
employ a simple single-layer perceptron model, rather than directly analyzing a
multi-layer neural network, we find a nontrivial phase transition that is
dependent on the number of unlabelled data in the generalization error of the
resultant classifier. In this sense, we evaluate the efficacy of the
unsupervised learning component of deep learning. The analysis is performed by
the replica method, which is a sophisticated tool in statistical mechanics. We
validate our result in the manner of deep learning, using a simple iterative
algorithm to learn the weight vector on the basis of belief propagation.
| Masayuki Ohzeki | 10.7566/JPSJ.84.034003 | 1501.04413 | null | null |
Sparse Bayesian Learning for EEG Source Localization | q-bio.QM cs.LG q-bio.NC | Purpose: Localizing the sources of electrical activity from
electroencephalographic (EEG) data has gained considerable attention over the
last few years. In this paper, we propose an innovative source localization
method for EEG, based on Sparse Bayesian Learning (SBL). Methods: To better
specify the sparsity profile and to ensure efficient source localization, the
proposed approach considers grouping of the electrical current dipoles inside
the human brain. SBL is used to solve the localization problem under the
additional constraint that the electric current dipoles associated with the brain
activity are isotropic. Results: Numerical experiments are conducted on a
realistic head model that is obtained by segmentation of MRI images of the head
and includes four major components, namely the scalp, the skull, the
cerebrospinal fluid (CSF) and the brain, with appropriate relative conductivity
values. The results demonstrate that the isotropy constraint significantly
improves the performance of SBL. In a noiseless environment, the proposed
method was found to accurately (with accuracy of >75%) locate up to 6
simultaneously active sources, whereas for SBL without the isotropy constraint,
the accuracy of finding just 3 simultaneously active sources was <75%.
Conclusions: Compared to the state-of-the-art algorithms, the proposed method
is potentially more consistent in specifying the sparsity profile of human
brain activity and is able to produce better source localization for EEG.
| Sajib Saha, Frank de Hoog, Ya.I. Nesterets, Rajib Rana, M. Tahtali and
T.E. Gureyev | null | 1501.04621 | null | null |
Microscopic Advances with Large-Scale Learning: Stochastic Optimization
for Cryo-EM | stat.ML cs.CV cs.LG q-bio.QM | Determining the 3D structures of biological molecules is a key problem for
both biology and medicine. Electron Cryomicroscopy (Cryo-EM) is a promising
technique for structure estimation which relies heavily on computational
methods to reconstruct 3D structures from 2D images. This paper introduces the
challenging Cryo-EM density estimation problem as a novel application for
stochastic optimization techniques. Structure discovery is formulated as MAP
estimation in a probabilistic latent-variable model, resulting in an
optimization problem to which an array of seven stochastic optimization methods
are applied. The methods are tested on both real and synthetic data, with some
methods recovering reasonable structures in less than one epoch from a random
initialization. Complex quasi-Newton methods are found to converge more slowly
than simple gradient-based methods, but all stochastic methods are found to
converge to similar optima. This method represents a major improvement over
existing methods as it is significantly faster and is able to converge from a
random initialization.
| Ali Punjani and Marcus A. Brubaker | null | 1501.04656 | null | null |
Robust Face Recognition by Constrained Part-based Alignment | cs.CV cs.LG | Developing a reliable and practical face recognition system is a
long-standing goal in computer vision research. Existing literature suggests
that pixel-wise face alignment is the key to achieve high-accuracy face
recognition. By assuming a human face as piece-wise planar surfaces, where each
surface corresponds to a facial part, we develop in this paper a Constrained
Part-based Alignment (CPA) algorithm for face recognition across pose and/or
expression. Our proposed algorithm is based on a trainable CPA model, which
learns appearance evidence of individual parts and a tree-structured shape
configuration among different parts. Given a probe face, CPA simultaneously
aligns all its parts by fitting them to the appearance evidence with
consideration of the constraint from the tree-structured shape configuration.
This objective is formulated as a norm minimization problem regularized by
graph likelihoods. CPA can be easily integrated with many existing classifiers
to perform part-based face recognition. Extensive experiments on benchmark face
datasets show that CPA outperforms or is on par with existing methods for
robust face recognition across pose, expression, and/or illumination changes.
| Yuting Zhang, Kui Jia, Yueming Wang, Gang Pan, Tsung-Han Chan, Yi Ma | null | 1501.04717 | null | null |
Learning Invariants using Decision Trees | cs.PL cs.LG | The problem of inferring an inductive invariant for verifying program safety
can be formulated in terms of binary classification. This is a standard problem
in machine learning: given a sample of good and bad points, one is asked to
find a classifier that generalizes from the sample and separates the two sets.
Here, the good points are the reachable states of the program, and the bad
points are those that reach a safety property violation. Thus, a learned
classifier is a candidate invariant. In this paper, we propose a new algorithm
that uses decision trees to learn candidate invariants in the form of arbitrary
Boolean combinations of numerical inequalities. We have used our algorithm to
verify C programs taken from the literature. The algorithm is able to infer
safe invariants for a range of challenging benchmarks and compares favorably to
other ML-based invariant inference techniques. In particular, it scales well to
large sample sets.
| Siddharth Krishna, Christian Puhrsch, Thomas Wies | null | 1501.04725 | null | null |
Relative Entailment Among Probabilistic Implications | cs.LO cs.DB cs.LG | We study a natural variant of the implicational fragment of propositional
logic. Its formulas are pairs of conjunctions of positive literals, related
together by an implicational-like connective; the semantics of this sort of
implication is defined in terms of a threshold on a conditional probability of
the consequent, given the antecedent: we are dealing with what the data
analysis community calls confidence of partial implications or association
rules. Existing studies of redundancy among these partial implications have
characterized so far only entailment from one premise and entailment from two
premises, both in the stand-alone case and in the case of presence of
additional classical implications (this is what we call "relative entailment").
By exploiting a previously noted alternative view of the entailment in terms of
linear programming duality, we characterize exactly the cases of entailment
from arbitrary numbers of premises, again both in the stand-alone case and in
the case of presence of additional classical implications. As a result, we
obtain decision algorithms of better complexity; additionally, for each
potential case of entailment, we identify a critical confidence threshold and
show that it is, actually, intrinsic to each set of premises and antecedent of
the conclusion.
| Albert Atserias and Jos\'e L. Balc\'azar and Marie Ely Piceno | 10.23638/LMCS-15(1:10)2019 | 1501.04826 | null | null |
Scalable Multi-Output Label Prediction: From Classifier Chains to
Classifier Trellises | stat.ML cs.CV cs.DS cs.LG stat.CO | Multi-output inference tasks, such as multi-label classification, have become
increasingly important in recent years. A popular method for multi-label
classification is classifier chains, in which the predictions of individual
classifiers are cascaded along a chain, thus taking into account inter-label
dependencies and improving the overall performance. Several varieties of
classifier chain methods have been introduced, and many of them perform very
competitively across a wide range of benchmark datasets. However, scalability
limitations become apparent on larger datasets when modeling a fully-cascaded
chain. In particular, the methods' strategies for discovering and modeling a
good chain structure constitute a major computational bottleneck. In this
paper, we present the classifier trellis (CT) method for scalable multi-label
classification. We compare CT with several recently proposed classifier chain
methods to show that it occupies an important niche: it is highly competitive
on standard multi-label problems, yet it can also scale up to thousands or even
tens of thousands of labels.
| J. Read, L. Martino, P. Olmos, D. Luengo | 10.1016/j.patcog.2015.01.004 | 1501.04870 | null | null |
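The fully-cascaded classifier chain that the abstract above builds on can be written in a few lines; the sketch below is a minimal hand-rolled chain, not the proposed classifier trellis, and the base learner, chain order, and toy labels are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class SimpleClassifierChain:
    """Cascade one binary classifier per label; each classifier also sees
    the labels earlier in the chain (true labels at train time, predicted
    labels at test time)."""
    def __init__(self, order):
        self.order = list(order)
        self.models = []

    def fit(self, X, Y):
        Xi, self.models = X, []
        for j in self.order:
            m = LogisticRegression(max_iter=1000).fit(Xi, Y[:, j])
            self.models.append(m)
            Xi = np.hstack([Xi, Y[:, [j]]])       # augment with the true label
        return self

    def predict(self, X):
        Xi = X
        preds = np.zeros((X.shape[0], len(self.order)), dtype=int)
        for m, j in zip(self.models, self.order):
            p = m.predict(Xi)
            preds[:, j] = p
            Xi = np.hstack([Xi, p[:, None]])      # cascade the predicted label
        return preds

# toy usage with 3 correlated labels
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
Y = np.stack([(X[:, 0] > 0), (X[:, 0] + X[:, 1] > 0), (X[:, 1] > 0)], axis=1).astype(int)
cc = SimpleClassifierChain(order=[0, 1, 2]).fit(X, Y)
print("subset accuracy:", (cc.predict(X) == Y).all(axis=1).mean())
```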
An Algebra to Merge Heterogeneous Classifiers | cs.DM cs.LG | In distributed classification, each learner observes its environment and
deduces a classifier. As a learner has only a local view of its environment,
classifiers can be exchanged among the learners and integrated, or merged, to
improve accuracy. However, the operation of merging is not defined for most
classifiers. Furthermore, the classifiers that have to be merged may be of
different types in settings such as ad-hoc networks in which several
generations of sensors may be creating classifiers. We introduce decision
spaces as a framework for merging possibly different classifiers. We formally
study the merging operation as an algebra, and prove that it satisfies a
desirable set of properties. The impact of time is discussed for the two main
data mining settings. Firstly, decision spaces can naturally be used with
non-stationary distributions, such as the data collected by sensor networks, as
the impact of a model decays over time. Secondly, we introduce an approach for
stationary distributions, such as homogeneous databases partitioned over
different learners, which ensures that all models have the same impact. We also
present a method that uses storage flexibly to achieve different types of decay
for non-stationary distributions. Finally, we show that the algebraic approach
developed for merging can also be used to analyze the behaviour of other
operators.
| Philippe J. Giabbanelli and Joseph G. Peters | null | 1501.05141 | null | null |
A Bayesian alternative to mutual information for the hierarchical
clustering of dependent random variables | stat.ML cs.LG q-bio.QM | The use of mutual information as a similarity measure in agglomerative
hierarchical clustering (AHC) raises an important issue: some correction needs
to be applied for the dimensionality of variables. In this work, we formulate
the decision of merging dependent multivariate normal variables in an AHC
procedure as a Bayesian model comparison. We found that the Bayesian
formulation naturally shrinks the empirical covariance matrix towards a matrix
set a priori (e.g., the identity), provides an automated stopping rule, and
corrects for dimensionality using a term that scales up the measure as a
function of the dimensionality of the variables. Also, the resulting log Bayes
factor is asymptotically proportional to the plug-in estimate of mutual
information, with an additive correction for dimensionality in agreement with
the Bayesian information criterion. We investigated the behavior of these
Bayesian alternatives (in exact and asymptotic forms) to mutual information on
simulated and real data. An encouraging result was first derived on
simulations: the hierarchical clustering based on the log Bayes factor
outperformed off-the-shelf clustering techniques as well as raw and normalized
mutual information in terms of classification accuracy. On a toy example, we
found that the Bayesian approaches led to results that were similar to those of
mutual information clustering techniques, with the advantage of an automated
thresholding. On real functional magnetic resonance imaging (fMRI) datasets
measuring brain activity, it identified clusters consistent with the
established outcome of standard procedures. On this application, normalized
mutual information had a highly atypical behavior, in the sense that it
systematically favored very large clusters. These initial experiments suggest
that the proposed Bayesian alternatives to mutual information are a useful new
tool for hierarchical clustering.
| Guillaume Marrelec, Arnaud Mess\'e, Pierre Bellec | 10.1371/journal.pone.0137278 | 1501.05194 | null | null |
Plug-and-play dual-tree algorithm runtime analysis | cs.DS cs.LG | Numerous machine learning algorithms contain pairwise statistical problems at
their core---that is, tasks that require computations over all pairs of input
points if implemented naively. Often, tree structures are used to solve these
problems efficiently. Dual-tree algorithms can efficiently solve or approximate
many of these problems. Using cover trees, rigorous worst-case runtime
guarantees have been proven for some of these algorithms. In this paper, we
present a problem-independent runtime guarantee for any dual-tree algorithm
using the cover tree, separating out the problem-dependent and the
problem-independent elements. This allows us to just plug in bounds for the
problem-dependent elements to get runtime guarantees for dual-tree algorithms
for any pairwise statistical problem without re-deriving the entire proof. We
demonstrate this plug-and-play procedure for nearest-neighbor search and
approximate kernel density estimation to get improved runtime guarantees. Under
mild assumptions, we also present the first linear runtime guarantee for
dual-tree based range search.
| Ryan R. Curtin, Dongryeol Lee, William B. March, Parikshit Ram | null | 1501.05222 | null | null |
Extreme Entropy Machines: Robust information theoretic classification | cs.LG | Most of the existing classification methods are aimed at minimization of
empirical risk (through some simple point-based error measured with loss
function) with added regularization. We propose to approach this problem in a
more information theoretic way by investigating applicability of entropy
measures as a classification model objective function. We focus on quadratic
Renyi's entropy and connected Cauchy-Schwarz Divergence which leads to the
construction of Extreme Entropy Machines (EEM).
The main contribution of this paper is proposing a model based on the
information theoretic concepts which on the one hand shows new, entropic
perspective on known linear classifiers and on the other leads to a
construction of a very robust method competitive with the state-of-the-art
non-information theoretic ones (including Support Vector Machines and Extreme
Learning Machines).
Evaluation on numerous problems, spanning from small, simple ones from the UCI
repository to large (hundreds of thousands of samples), extremely
unbalanced (up to 100:1 class ratios) datasets, shows wide applicability of
the EEM in real life problems and that it scales well.
| Wojciech Marian Czarnecki, Jacek Tabor | null | 1501.05279 | null | null |
Optimizing affinity-based binary hashing using auxiliary coordinates | cs.LG cs.CV math.OC stat.ML | In supervised binary hashing, one wants to learn a function that maps a
high-dimensional feature vector to a vector of binary codes, for application to
fast image retrieval. This typically results in a difficult optimization
problem, nonconvex and nonsmooth, because of the discrete variables involved.
Much work has simply relaxed the problem during training, solving a continuous
optimization, and truncating the codes a posteriori. This gives reasonable
results but is quite suboptimal. Recent work has tried to optimize the
objective directly over the binary codes and achieved better results, but the
hash function was still learned a posteriori, which remains suboptimal. We
propose a general framework for learning hash functions using affinity-based
loss functions that uses auxiliary coordinates. This closes the loop and
optimizes jointly over the hash functions and the binary codes so that they
gradually match each other. The resulting algorithm can be seen as a corrected,
iterated version of the procedure of optimizing first over the codes and then
learning the hash function. Compared to this, our optimization is guaranteed to
obtain better hash functions while being not much slower, as demonstrated
experimentally in various supervised datasets. In addition, our framework
facilitates the design of optimization algorithms for arbitrary types of loss
and hash functions.
| Ramin Raziperchikolaei and Miguel \'A. Carreira-Perpi\~n\'an | null | 1501.05352 | null | null |
Deep Multimodal Learning for Audio-Visual Speech Recognition | cs.CL cs.LG | In this paper, we present methods in deep multimodal learning for fusing
speech and visual modalities for Audio-Visual Automatic Speech Recognition
(AV-ASR). First, we study an approach where uni-modal deep networks are trained
separately and their final hidden layers fused to obtain a joint feature space
in which another deep network is built. While the audio network alone achieves
a phone error rate (PER) of $41\%$ under clean condition on the IBM large
vocabulary audio-visual studio dataset, this fusion model achieves a PER of
$35.83\%$ demonstrating the tremendous value of the visual channel in phone
classification even in audio with high signal to noise ratio. Second, we
present a new deep network architecture that uses a bilinear softmax layer to
account for class specific correlations between modalities. We show that
combining the posteriors from the bilinear networks with those from the fused
model mentioned above results in a further significant phone error rate
reduction, yielding a final PER of $34.03\%$.
| Youssef Mroueh, Etienne Marcheret, Vaibhava Goel | null | 1501.05396 | null | null |
Sketch and Validate for Big Data Clustering | stat.ML cs.LG | In response to the need for learning tools tuned to big data analytics, the
present paper introduces a framework for efficient clustering of huge sets of
(possibly high-dimensional) data. Building on random sampling and consensus
(RANSAC) ideas pursued earlier in a different (computer vision) context for
robust regression, a suite of novel dimensionality and set-reduction algorithms
is developed. The advocated sketch-and-validate (SkeVa) family includes two
algorithms that rely on K-means clustering per iteration on a reduced number of
dimensions and/or feature vectors: The first operates in a batch fashion, while
the second sequential one offers computational efficiency and suitability with
streaming modes of operation. For clustering even nonlinearly separable
vectors, the SkeVa family offers also a member based on user-selected kernel
functions. Further trading off performance for reduced complexity, a fourth
member of the SkeVa family is based on a divergence criterion for selecting
proper minimal subsets of feature variables and vectors, thus bypassing the
need for K-means clustering per iteration. Extensive numerical tests on
synthetic and real data sets highlight the potential of the proposed
algorithms, and demonstrate their competitive performance relative to
state-of-the-art random projection alternatives.
| Panagiotis A. Traganitis, Konstantinos Slavakis, Georgios B. Giannakis | 10.1109/JSTSP.2015.2396477 | 1501.05590 | null | null |
A Collaborative Kalman Filter for Time-Evolving Dyadic Processes | stat.ML cs.LG | We present the collaborative Kalman filter (CKF), a dynamic model for
collaborative filtering and related factorization models. Using the matrix
factorization approach to collaborative filtering, the CKF accounts for time
evolution by modeling each low-dimensional latent embedding as a
multidimensional Brownian motion. Each observation is a random variable whose
distribution is parameterized by the dot product of the relevant Brownian
motions at that moment in time. This is naturally interpreted as a Kalman
filter with multiple interacting state space vectors. We also present a method
for learning a dynamically evolving drift parameter for each location by
modeling it as a geometric Brownian motion. We handle posterior intractability
via a mean-field variational approximation, which also preserves tractability
for downstream calculations in a manner similar to the Kalman filter. We
evaluate the model on several large datasets, providing quantitative evaluation
on the 10 million Movielens and 100 million Netflix datasets and qualitative
evaluation on a set of 39 million stock returns divided across roughly 6,500
companies from the years 1962-2014.
| San Gultekin and John Paisley | null | 1501.05624 | null | null |
Bi-Objective Nonnegative Matrix Factorization: Linear Versus
Kernel-Based Models | stat.ML cs.CV cs.LG math.OC | Nonnegative matrix factorization (NMF) is a powerful class of feature
extraction techniques that has been successfully applied in many fields, namely
in signal and image processing. Current NMF techniques have been limited to a
single-objective problem in either its linear or nonlinear kernel-based
formulation. In this paper, we propose to revisit the NMF as a multi-objective
problem, in particular a bi-objective one, where the objective functions
defined in both input and feature spaces are taken into account. By taking
advantage of the weighted-sum method from the multi-objective optimization
literature, the proposed bi-objective NMF determines a set of nondominated,
Pareto optimal, solutions instead of a single optimal decomposition. Moreover,
the corresponding Pareto front is studied and approximated. Experimental
results on unmixing real hyperspectral images confirm the efficiency of the
proposed bi-objective NMF compared with the state-of-the-art methods.
| Paul Honeine, Fei Zhu | null | 1501.05684 | null | null |
Bayesian Learning for Low-Rank matrix reconstruction | stat.ML cs.LG cs.NA | We develop latent variable models for Bayesian learning based low-rank matrix
completion and reconstruction from linear measurements. For under-determined
systems, the developed methods are shown to reconstruct low-rank matrices when
neither the rank nor the noise power is known a-priori. We derive relations
between the latent variable models and several low-rank promoting penalty
functions. The relations justify the use of Kronecker structured covariance
matrices in a Gaussian based prior. In the methods, we use evidence
approximation and expectation-maximization to learn the model parameters. The
performance of the methods is evaluated through extensive numerical
simulations.
| Martin Sundin, Cristian R. Rojas, Magnus Jansson and Saikat Chatterjee | null | 1501.05740 | null | null |
Consistency Analysis of Nearest Subspace Classifier | stat.ML cs.LG | The Nearest subspace classifier (NSS) finds an estimation of the underlying
subspace within each class and assigns data points to the class that
corresponds to its nearest subspace. This paper mainly studies how well NSS can
be generalized to new samples. It is proved that NSS is strongly consistent
under certain assumptions. For completeness, NSS is evaluated through
experiments on various simulated and real data sets, in comparison with some
other linear model based classifiers. It is also shown that NSS can obtain
effective classification results and is very efficient, especially for large
scale data sets.
| Yi Wang | null | 1501.06060 | null | null |
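The decision rule analyzed above is straightforward to implement: fit a low-dimensional subspace per class and assign each point to the class with the smallest reconstruction residual. The sketch below is a minimal illustration; the per-class mean-centering, subspace dimension, and toy data are assumptions, not the paper's exact setup.

```python
import numpy as np

class NearestSubspaceClassifier:
    def __init__(self, dim=2):
        self.dim = dim
        self.bases = {}    # class -> (mean, orthonormal basis of the subspace)

    def fit(self, X, y):
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # principal directions via SVD of the centered class data
            _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
            self.bases[c] = (mu, Vt[: self.dim].T)          # (d, dim)
        return self

    def predict(self, X):
        classes = sorted(self.bases)
        resid = []
        for c in classes:
            mu, B = self.bases[c]
            Z = X - mu
            proj = Z @ B @ B.T                              # projection onto the class subspace
            resid.append(np.linalg.norm(Z - proj, axis=1))  # distance to the subspace
        return np.array(classes)[np.argmin(np.stack(resid, axis=1), axis=1)]

# toy usage: two classes living near different 2-D planes in R^5
rng = np.random.default_rng(0)
def make_class(basis, n=200):
    return rng.normal(size=(n, 2)) @ basis + 0.05 * rng.normal(size=(n, 5))
B1, B2 = rng.normal(size=(2, 5)), rng.normal(size=(2, 5))
X = np.vstack([make_class(B1), make_class(B2)])
y = np.repeat([0, 1], 200)
nss = NearestSubspaceClassifier(dim=2).fit(X, y)
print("training accuracy:", (nss.predict(X) == y).mean())
```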
Between Pure and Approximate Differential Privacy | cs.DS cs.CR cs.LG | We show a new lower bound on the sample complexity of $(\varepsilon,
\delta)$-differentially private algorithms that accurately answer statistical
queries on high-dimensional databases. The novelty of our bound is that it
depends optimally on the parameter $\delta$, which loosely corresponds to the
probability that the algorithm fails to be private, and is the first to
smoothly interpolate between approximate differential privacy ($\delta > 0$)
and pure differential privacy ($\delta = 0$).
Specifically, we consider a database $D \in \{\pm1\}^{n \times d}$ and its
\emph{one-way marginals}, which are the $d$ queries of the form "What fraction
of individual records have the $i$-th bit set to $+1$?" We show that in order
to answer all of these queries to within error $\pm \alpha$ (on average) while
satisfying $(\varepsilon, \delta)$-differential privacy, it is necessary that
$$ n \geq \Omega\left( \frac{\sqrt{d \log(1/\delta)}}{\alpha \varepsilon}
\right), $$ which is optimal up to constant factors. To prove our lower bound,
we build on the connection between \emph{fingerprinting codes} and lower bounds
in differential privacy (Bun, Ullman, and Vadhan, STOC'14).
In addition to our lower bound, we give new purely and approximately
differentially private algorithms for answering arbitrary statistical queries
that improve on the sample complexity of the standard Laplace and Gaussian
mechanisms for achieving worst-case accuracy guarantees by a logarithmic
factor.
| Thomas Steinke and Jonathan Ullman | null | 1501.06095 | null | null |
Constrained Extreme Learning Machines: A Study on Classification Cases | cs.LG cs.CV cs.NE | Extreme learning machine (ELM) is an extremely fast learning method with
powerful performance on pattern recognition tasks, as demonstrated by numerous
researchers and engineers. However, its good generalization ability is built on
large numbers of hidden neurons, which is not beneficial to real-time response
in the test process. In this paper, we propose new ways, named "constrained
extreme learning machines" (CELMs), to randomly select hidden neurons based on
sample distribution. Compared to completely random selection of hidden nodes in
ELM, the CELMs randomly select hidden nodes from the constrained vector space
containing some basic combinations of original sample vectors. The experimental
results show that the CELMs have better generalization ability than traditional
ELM, SVM and some other related methods. Additionally, the CELMs have a similar
fast learning speed as ELM.
| Wentao Zhu, Jun Miao, Laiyun Qing | null | 1501.06115 | null | null |
Randomized sketches for kernels: Fast and optimal non-parametric
regression | stat.ML cs.DS cs.LG stat.CO | Kernel ridge regression (KRR) is a standard method for performing
non-parametric regression over reproducing kernel Hilbert spaces. Given $n$
samples, the time and space complexity of computing the KRR estimate scale as
$\mathcal{O}(n^3)$ and $\mathcal{O}(n^2)$ respectively, and so is prohibitive
in many cases. We propose approximations of KRR based on $m$-dimensional
randomized sketches of the kernel matrix, and study how small the projection
dimension $m$ can be chosen while still preserving minimax optimality of the
approximate KRR estimate. For various classes of randomized sketches, including
those based on Gaussian and randomized Hadamard matrices, we prove that it
suffices to choose the sketch dimension $m$ proportional to the statistical
dimension (modulo logarithmic factors). Thus, we obtain fast and minimax
optimal approximations to the KRR estimate for non-parametric regression.
| Yun Yang, Mert Pilanci, Martin J. Wainwright | null | 1501.06195 | null | null |
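A minimal NumPy sketch of the sketching idea described above, assuming an RBF kernel, a Gaussian sketch matrix, and arbitrarily chosen bandwidth, regularization, and toy data (this is not the authors' implementation): the m-dimensional sketched problem is solved instead of the full n-dimensional KRR system.

import numpy as np

def gaussian_kernel(X, Z, bandwidth=1.0):
    # pairwise RBF kernel matrix between the rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def sketched_krr(X, y, m, lam=1e-2, bandwidth=1.0, seed=None):
    # approximate KRR: solve min_a ||y - K S^T a||^2 / n + lam * a^T S K S^T a
    # with an m x n Gaussian sketch S instead of the full n-dimensional problem
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = gaussian_kernel(X, X, bandwidth)
    S = rng.standard_normal((m, n)) / np.sqrt(m)
    KSt = K @ S.T                                  # n x m
    A = KSt.T @ KSt / n + lam * (S @ KSt)          # m x m system
    b = KSt.T @ y / n
    a = np.linalg.solve(A, b)
    weights = S.T @ a                              # equivalent n-dim dual coefficients
    return lambda Xnew: gaussian_kernel(Xnew, X, bandwidth) @ weights

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
predict = sketched_krr(X, y, m=60, seed=1)
print(predict(np.array([[0.0], [1.5]])))           # predictions at x = 0.0 and 1.5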
Robust Subjective Visual Property Prediction from Crowdsourced Pairwise
Labels | cs.CV cs.LG cs.MM cs.SI math.ST stat.TH | The problem of estimating subjective visual properties from image and video
has attracted increasing interest. A subjective visual property is useful
either on its own (e.g. image and video interestingness) or as an intermediate
representation for visual recognition (e.g. a relative attribute). Due to its
ambiguous nature, annotating the value of a subjective visual property for
learning a prediction model is challenging. To make the annotation more
reliable, recent studies employ crowdsourcing tools to collect pairwise
comparison labels because human annotators are much better at ranking two
images/videos (e.g. which one is more interesting) than giving an absolute
value to each of them separately. However, using crowdsourced data also
introduces outliers. Existing methods rely on majority voting to prune the
annotation outliers/errors. They thus require a large amount of pairwise labels
to be collected. More importantly, as a local outlier detection method, majority
voting is ineffective in identifying outliers that can cause global ranking
inconsistencies. In this paper, we propose a more principled way to identify
annotation outliers by formulating the subjective visual property prediction
task as a unified robust learning to rank problem, tackling both the outlier
detection and learning to rank jointly. Differing from existing methods, the
proposed method integrates local pairwise comparison labels together to
minimise a cost that corresponds to global inconsistency of ranking order. This
not only leads to better detection of annotation outliers but also enables
learning with extremely sparse annotations. Extensive experiments on various
benchmark datasets demonstrate that our new approach significantly outperforms
state-of-the-art alternatives.
| Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Jiechao Xiong, Shaogang
Gong, Yizhou Wang, and Yuan Yao | 10.1109/TPAMI.2015.2456887 | 1501.06202 | null | null |
Online Optimization : Competing with Dynamic Comparators | cs.LG math.OC stat.ML | Recent literature on online learning has focused on developing adaptive
algorithms that take advantage of a regularity of the sequence of observations,
yet retain worst-case performance guarantees. A complementary direction is to
develop prediction methods that perform well against complex benchmarks. In
this paper, we address these two directions together. We present a fully
adaptive method that competes with dynamic benchmarks in which regret guarantee
scales with regularity of the sequence of cost functions and comparators.
Notably, the regret bound adapts to the smaller complexity measure in the
problem environment. Finally, we apply our results to drifting zero-sum,
two-player games where both players achieve no regret guarantees against best
sequences of actions in hindsight.
| Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour and Karthik
Sridharan | null | 1501.06225 | null | null |
Deep Transductive Semi-supervised Maximum Margin Clustering | cs.LG | Semi-supervised clustering is a very important topic in machine learning and
computer vision. The key challenge of this problem is how to learn a metric,
such that the instances sharing the same label are more likely close to each
other on the embedded space. However, little attention has been paid to learn
better representations when the data lie on non-linear manifold. Fortunately,
deep learning has led to great success on feature learning recently. Inspired
by the advances of deep learning, we propose a deep transductive
semi-supervised maximum margin clustering approach. More specifically, given
pairwise constraints, we exploit both labeled and unlabeled data to learn a
non-linear mapping under maximum margin framework for clustering analysis.
Thus, our model unifies transductive learning, feature learning and maximum
margin techniques in the semi-supervised clustering framework. We pretrain the
deep network structure with restricted Boltzmann machines (RBMs) layer by layer
greedily, and optimize our objective function with gradient descent. By
checking the most violated constraints, our approach updates the model
parameters through error backpropagation, in which deep features are learned
automatically. The experimental results show that our model is significantly
better than the state of the art on semi-supervised clustering.
| Gang Chen | null | 1501.06237 | null | null |
Sequential Sensing with Model Mismatch | stat.ML cs.IT cs.LG math.IT math.ST stat.TH | We characterize the performance of sequential information guided sensing,
Info-Greedy Sensing, when there is a mismatch between the true signal model and
the assumed model, which may be a sample estimate. In particular, we consider a
setup where the signal is low-rank Gaussian and the measurements are taken in
the directions of eigenvectors of the covariance matrix in a decreasing order
of eigenvalues. We establish a set of performance bounds when a mismatched
covariance matrix is used, in terms of the gap of signal posterior entropy, as
well as the additional amount of power required to achieve the same signal
recovery precision. Based on this, we further study how to choose an
initialization for Info-Greedy Sensing using the sample covariance matrix, or
using an efficient covariance sketching scheme.
| Ruiyang Song, Yao Xie, Sebastian Pokutta | 10.1109/ISIT.2015.7282736 | 1501.06241 | null | null |
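The following hedged sketch illustrates the setup described above, measuring a Gaussian signal along the leading eigenvector of the current (possibly mismatched) posterior covariance and applying a Kalman-style update; the covariance construction, noise level, and number of measurements are assumptions made only for this example and do not reproduce the paper's analysis.

import numpy as np

def info_greedy_gaussian(x_true, Sigma_assumed, k, noise_std=0.05, seed=None):
    # sequentially measure along the leading eigenvector of the current
    # posterior covariance and apply the Gaussian (Kalman-style) update
    rng = np.random.default_rng(seed)
    n = x_true.size
    mu, Sigma = np.zeros(n), Sigma_assumed.copy()
    for _ in range(k):
        _, eigvecs = np.linalg.eigh(Sigma)
        a = eigvecs[:, -1]                         # direction of largest uncertainty
        y = a @ x_true + noise_std * rng.standard_normal()
        s = a @ Sigma @ a + noise_std ** 2         # innovation variance
        gain = Sigma @ a / s
        mu = mu + gain * (y - a @ mu)
        Sigma = Sigma - np.outer(gain, a @ Sigma)
    return mu

rng = np.random.default_rng(0)
n = 20
U = rng.standard_normal((n, 3))
Sigma_true = U @ U.T + 0.01 * np.eye(n)            # approximately low-rank signal covariance
x_true = rng.multivariate_normal(np.zeros(n), Sigma_true)
samples = rng.multivariate_normal(np.zeros(n), Sigma_true, size=30)
Sigma_assumed = samples.T @ samples / 30 + 0.1 * np.eye(n)   # mismatched sample estimate
mu_hat = info_greedy_gaussian(x_true, Sigma_assumed, k=5, seed=1)
print(np.linalg.norm(mu_hat - x_true) / np.linalg.norm(x_true))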
Poisson Matrix Completion | stat.ML cs.LG | We extend the theory of matrix completion to the case where we make Poisson
observations for a subset of entries of a low-rank matrix. We consider the
(now) usual matrix recovery formulation through maximum likelihood with proper
constraints on the matrix $M$, and establish theoretical upper and lower bounds
on the recovery error. Our bounds are nearly optimal up to a factor on the
order of $\mathcal{O}(\log(d_1 d_2))$. These bounds are obtained by adapting
the arguments used for one-bit matrix completion \cite{davenport20121}
(although these two problems are different in nature) and the adaptation
requires new techniques exploiting properties of the Poisson likelihood
function and tackling the difficulties posed by the locally sub-Gaussian
characteristic of the Poisson distribution. Our results highlight a few
important distinctions of Poisson matrix completion compared to the prior work
in matrix completion including having to impose a minimum signal-to-noise
requirement on each observed entry. We also develop an efficient iterative
algorithm and demonstrate its good performance in recovering solar flare
images.
| Yang Cao and Yao Xie | null | 1501.06243 | null | null |
Deep Semantic Ranking Based Hashing for Multi-Label Image Retrieval | cs.CV cs.LG | With the rapid growth of web images, hashing has received increasing
interest in large-scale image retrieval. Research efforts have been devoted to
learning compact binary codes that preserve semantic similarity based on
labels. However, most of these hashing methods are designed to handle simple
binary similarity. The complex multilevel semantic structure of images
associated with multiple labels has not yet been well explored. Here we
propose a deep semantic ranking based method for learning hash functions that
preserve multilevel semantic similarity between multi-label images. In our
approach, deep convolutional neural network is incorporated into hash functions
to jointly learn feature representations and mappings from them to hash codes,
which avoids the limitation of semantic representation power of hand-crafted
features. Meanwhile, a ranking list that encodes the multilevel similarity
information is employed to guide the learning of such deep hash functions. An
effective scheme based on surrogate loss is used to solve the intractable
optimization problem of nonsmooth and multivariate ranking measures involved in
the learning procedure. Experimental results show the superiority of our
proposed approach over several state-of-the-art hashing methods in terms of
ranking evaluation metrics when tested on multi-label image datasets.
| Fang Zhao, Yongzhen Huang, Liang Wang, Tieniu Tan | null | 1501.06272 | null | null |
On a Family of Decomposable Kernels on Sequences | cs.LG | In many applications data is naturally presented in terms of orderings of
some basic elements or symbols. Reasoning about such data requires a notion of
similarity capable of handling sequences of different lengths. In this paper we
describe a family of Mercer kernel functions for such sequentially structured
data. The family is characterized by a decomposable structure in terms of
symbol-level and structure-level similarities, representing a specific
combination of kernels which allows for efficient computation. We provide an
experimental evaluation on sequential classification tasks comparing kernels
from our family of kernels to a state of the art sequence kernel called the
Global Alignment kernel, which has been shown to outperform Dynamic Time Warping.
| Andrea Baisero, Florian T. Pokorny, Carl Henrik Ek | null | 1501.06284 | null | null |
IT-map: an Effective Nonlinear Dimensionality Reduction Method for
Interactive Clustering | stat.ML cs.CV cs.LG | Scientists in many fields have the common and basic need of dimensionality
reduction: visualizing the underlying structure of the massive multivariate
data in a low-dimensional space. However, many dimensionality reduction methods
confront the so-called "crowding problem", in which clusters tend to overlap with
each other in the embedding. Previously, researchers have sought to avoid this
problem by making clusters maximally separated in the embedding.
However, the proposed in-tree (IT) based method, called IT-map, allows clusters
in the embedding to be locally overlapped, while seeking to make them
distinguishable by some small yet key parts. IT-map provides a simple,
effective and novel solution to cluster-preserving mapping, which makes it
possible to cluster the original data points interactively and thus should be
of general meaning in science and engineering.
| Teng Qiu, Yongjie Li | null | 1501.06450 | null | null |
Compressed Support Vector Machines | cs.LG | Support vector machines (SVM) can classify data sets along highly non-linear
decision boundaries because of the kernel-trick. This expressiveness comes at a
price: During test-time, the SVM classifier needs to compute the kernel
inner-product between a test sample and all support vectors. With large
training data sets, the time required for this computation can be substantial.
In this paper, we introduce a post-processing algorithm, which compresses the
learned SVM model by reducing and optimizing support vectors. We evaluate our
algorithm on several medium-scaled real-world data sets, demonstrating that it
maintains high test accuracy while reducing the test-time evaluation cost by
several orders of magnitude---in some cases from hours to seconds. It is fair
to say that most of the work in this paper had previously been invented by
Burges and Sch\"olkopf almost 20 years ago. For most of the time during which
we conducted this research, we were unaware of this prior work. However, in the
past two decades, computing power has increased drastically, and we can
therefore provide empirical insights that were not possible in their original
paper.
| Zhixiang Xu, Jacob R. Gardner, Stephen Tyree, Kilian Q. Weinberger | null | 1501.06478 | null | null |
Noisy Tensor Completion via the Sum-of-Squares Hierarchy | cs.LG cs.DS stat.ML | In the noisy tensor completion problem we observe $m$ entries (whose location
is chosen uniformly at random) from an unknown $n_1 \times n_2 \times n_3$
tensor $T$. We assume that $T$ is entry-wise close to being rank $r$. Our goal
is to fill in its missing entries using as few observations as possible. Let $n
= \max(n_1, n_2, n_3)$. We show that if $m = n^{3/2} r$ then there is a
polynomial time algorithm based on the sixth level of the sum-of-squares
hierarchy for completing it. Our estimate agrees with almost all of $T$'s
entries almost exactly and works even when our observations are corrupted by
noise. This is also the first algorithm for tensor completion that works in the
overcomplete case when $r > n$, and in fact it works all the way up to $r =
n^{3/2-\epsilon}$.
Our proofs are short and simple and are based on establishing a new
connection between noisy tensor completion (through the language of Rademacher
complexity) and the task of refuting random constraint satisfaction problems.
This connection seems to have gone unnoticed even in the context of matrix
completion. Furthermore, we use this connection to show matching lower bounds.
Our main technical result is in characterizing the Rademacher complexity of the
sequence of norms that arise in the sum-of-squares relaxations to the tensor
nuclear norm. These results point to an interesting new direction: Can we
explore computational vs. sample complexity tradeoffs through the
sum-of-squares hierarchy?
| Boaz Barak and Ankur Moitra | null | 1501.06521 | null | null |
Measuring academic influence: Not all citations are equal | cs.DL cs.CL cs.LG | The importance of a research article is routinely measured by counting how
many times it has been cited. However, treating all citations with equal weight
ignores the wide variety of functions that citations perform. We want to
automatically identify the subset of references in a bibliography that have a
central academic influence on the citing paper. For this purpose, we examine
the effectiveness of a variety of features for determining the academic
influence of a citation. By asking authors to identify the key references in
their own work, we created a data set in which citations were labeled according
to their academic influence. Using automatic feature selection with supervised
machine learning, we found a model for predicting academic influence that
achieves good performance on this data set using only four features. The best
features, among those we evaluated, were those based on the number of times a
reference is mentioned in the body of a citing paper. The performance of these
features inspired us to design an influence-primed h-index (the hip-index).
Unlike the conventional h-index, it weights citations by how many times a
reference is mentioned. According to our experiments, the hip-index is a better
indicator of researcher performance than the conventional h-index.
| Xiaodan Zhu, Peter Turney, Daniel Lemire, Andr\'e Vellino | 10.1002/asi.23179 | 1501.06587 | null | null |
Online Nonparametric Regression with General Loss Functions | stat.ML cs.IT cs.LG math.IT | This paper establishes minimax rates for online regression with arbitrary
classes of functions and general losses. We show that below a certain threshold
for the complexity of the function class, the minimax rates depend on both the
curvature of the loss function and the sequential complexities of the class.
Above this threshold, the curvature of the loss does not affect the rates.
Furthermore, for the case of square loss, our results point to the interesting
phenomenon: whenever sequential and i.i.d. empirical entropies match, the rates
for statistical and online learning are the same.
In addition to the study of minimax regret, we derive a generic forecaster
that enjoys the established optimal rates. We also provide a recipe for
designing online prediction algorithms that can be computationally efficient
for certain problems. We illustrate the techniques by deriving existing and new
forecasters for the case of finite experts and for online linear regression.
| Alexander Rakhlin and Karthik Sridharan | null | 1501.06598 | null | null |
maxDNN: An Efficient Convolution Kernel for Deep Learning with Maxwell
GPUs | cs.NE cs.DC cs.LG | This paper describes maxDNN, a computationally efficient convolution kernel
for deep learning with the NVIDIA Maxwell GPU. maxDNN reaches 96.3%
computational efficiency on typical deep learning network architectures. The
design combines ideas from cuda-convnet2 with the Maxas SGEMM assembly code. We
only address the forward propagation (FPROP) operation of the network, but we
believe that the same techniques used here will be effective for backward
propagation (BPROP) as well.
| Andrew Lavin | null | 1501.06633 | null | null |
Computing Functions of Random Variables via Reproducing Kernel Hilbert
Space Representations | stat.ML cs.DS cs.LG | We describe a method to perform functional operations on probability
distributions of random variables. The method uses reproducing kernel Hilbert
space representations of probability distributions, and it is applicable to all
operations which can be applied to points drawn from the respective
distributions. We refer to our approach as {\em kernel probabilistic
programming}. We illustrate it on synthetic data, and show how it can be used
for nonparametric structural equation models, with an application to causal
inference.
| Bernhard Sch\"olkopf, Krikamol Muandet, Kenji Fukumizu, Jonas Peters | null | 1501.06794 | null | null |
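As a toy illustration of operating on distributions through samples and kernel mean embeddings (a simplified sketch, not the authors' kernel probabilistic programming framework), the snippet applies a function to samples of two random variables and compares the empirical embedding of the result against a reference distribution via the squared MMD; the RBF kernel, bandwidth, and Gaussian example are assumptions.

import numpy as np

def rbf(u, v, bw=1.0):
    # RBF kernel between two one-dimensional sample arrays
    return np.exp(-(u[:, None] - v[None, :]) ** 2 / (2 * bw ** 2))

def squared_mmd(samples_p, samples_q, bw=1.0):
    # squared distance between the empirical kernel mean embeddings
    kpp = rbf(samples_p, samples_p, bw).mean()
    kqq = rbf(samples_q, samples_q, bw).mean()
    kpq = rbf(samples_p, samples_q, bw).mean()
    return kpp + kqq - 2 * kpq

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)                     # samples of X
y = rng.normal(0.0, 1.0, 2000)                     # samples of Y, independent of X
z = x + y                                          # pointwise application of f(X, Y) = X + Y
reference = rng.normal(0.0, np.sqrt(2.0), 2000)    # the true law of X + Y is N(0, 2)
print(squared_mmd(z, reference))                   # close to zero
print(squared_mmd(z, rng.normal(0.0, 1.0, 2000)))  # noticeably larger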
Novel Approaches for Predicting Risk Factors of Atherosclerosis | cs.LG | Coronary heart disease (CHD) caused by hardening of artery walls due to
cholesterol known as atherosclerosis is responsible for large number of deaths
world-wide. The disease progression is slow, asymptomatic and may lead to
sudden cardiac arrest, stroke or myocardial infarction. Presently, imaging
techniques are being employed to understand the molecular and metabolic
activity of atherosclerotic plaques to estimate the risk. Though imaging
methods are able to provide some information on plaque metabolism, they lack the
required resolution and sensitivity for detection. In this paper we consider
the clinical observations and habits of individuals for predicting the risk
factors of CHD. The identification of risk factors helps in stratifying
patients for further intensive tests such as nuclear imaging or coronary
angiography. We present a novel approach for predicting the risk factors of
atherosclerosis with an in-built imputation algorithm and particle swarm
optimization (PSO). We compare the performance of our methodology with other
machine learning techniques on STULONG dataset which is based on longitudinal
study of middle aged individuals lasting for twenty years. Our methodology
powered by PSO search has identified physical inactivity as one of the risk
factor for the onset of atherosclerosis in addition to other already known
factors. The decision rules extracted by our methodology are able to predict
the risk factors with an accuracy of $99.73\%$, which is higher than the
accuracies obtained by application of the state-of-the-art machine learning
techniques presently being employed in the identification of atherosclerosis
risk studies.
| V. Sree Hari Rao and M. Naresh Kumar | 10.1109/TITB.2012.2227271 | 1501.07093 | null | null |
A Neural Network Anomaly Detector Using the Random Cluster Model | cs.LG cs.NE stat.ML | The random cluster model is used to define an upper bound on a distance
measure as a function of the number of data points to be classified and the
expected value of the number of classes to form in a hybrid K-means and
regression classification methodology, with the intent of detecting anomalies.
Conditions are given for the identification of classes which contain anomalies
and individual anomalies within identified classes. A neural network model
describes the decision region-separating surface for offline storage and recall
in any new anomaly detection.
| Robert A. Murphy | null | 1501.07227 | null | null |
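A generic sketch of the hybrid K-means anomaly-detection idea (it omits the random cluster model bound and the neural network stage): cluster the data, then flag points whose distance to their assigned centroid is unusually large; the synthetic data, number of clusters, and three-standard-deviation cutoff are assumptions for the example.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# two well-separated clusters plus a handful of scattered anomalies
normal = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2))])
anomalies = rng.uniform(-10, 18, (10, 2))
X = np.vstack([normal, anomalies])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# flag a point if it is unusually far from the centroid of its own cluster
flags = np.zeros(len(X), dtype=bool)
for c in range(km.n_clusters):
    member = km.labels_ == c
    cutoff = dist[member].mean() + 3 * dist[member].std()
    flags[member] = dist[member] > cutoff

print("flagged indices:", np.where(flags)[0])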
Escaping the Local Minima via Simulated Annealing: Optimization of
Approximately Convex Functions | cs.NA cs.LG math.OC | We consider the problem of optimizing an approximately convex function over a
bounded convex set in $\mathbb{R}^n$ using only function evaluations. The
problem is reduced to sampling from an \emph{approximately} log-concave
distribution using the Hit-and-Run method, which is shown to have the same
$\mathcal{O}^*$ complexity as sampling from log-concave distributions. In
addition to extending the analysis for log-concave distributions to approximate
log-concave distributions, the implementation of the 1-dimensional sampler of
the Hit-and-Run walk requires new methods and analysis. The algorithm is then
based on simulated annealing, which does not rely on first-order conditions and
is therefore essentially immune to local minima.
We then apply the method to different motivating problems. In the context of
zeroth order stochastic convex optimization, the proposed method produces an
$\epsilon$-minimizer after $\mathcal{O}^*(n^{7.5}\epsilon^{-2})$ noisy function
evaluations by inducing a $\mathcal{O}(\epsilon/n)$-approximately log concave
distribution. We also consider in detail the case when the "amount of
non-convexity" decays towards the optimum of the function. Other applications
of the method discussed in this work include private computation of empirical
risk minimizers, two-stage stochastic programming, and approximate dynamic
programming for online learning.
| Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, Alexander
Rakhlin | null | 1501.07242 | null | null |
Per-Block-Convex Data Modeling by Accelerated Stochastic Approximation | cs.LG | Applications involving dictionary learning, non-negative matrix
factorization, subspace clustering, and parallel factor tensor decomposition
tasks motivate the development of algorithms for per-block-convex and non-smooth optimization
problems. By leveraging the stochastic approximation paradigm and first-order
acceleration schemes, this paper develops an online and modular learning
algorithm for a large class of non-convex data models, where convexity is
manifested only per-block of variables whenever the rest of them are held
fixed. The advocated algorithm incurs computational complexity that scales
linearly with the number of unknowns. Under minimal assumptions on the cost
functions of the composite optimization task, without bounding constraints on
the optimization variables, or any explicit information on bounds of Lipschitz
coefficients, the expected cost evaluated online at the resultant iterates is
provably convergent with quadratic rate to an accumulation point of the
(per-block) minima, while subgradients of the expected cost asymptotically
vanish in the mean-squared sense. The merits of the general approach are
demonstrated in two online learning setups: (i) Robust linear regression using
a sparsity-cognizant total least-squares criterion; and (ii) semi-supervised
dictionary learning for network-wide link load tracking and imputation with
missing entries. Numerical tests on synthetic and real data highlight the
potential of the proposed framework for streaming data analytics by
demonstrating superior performance over block coordinate descent, and reduced
complexity relative to the popular alternating-direction method of multipliers.
| Konstantinos Slavakis and Georgios B. Giannakis | null | 1501.07315 | null | null |
Tensor Factorization via Matrix Factorization | cs.LG stat.ML | Tensor factorization arises in many machine learning applications, such as
knowledge base modeling and parameter estimation in latent variable models.
However, numerical methods for tensor factorization have not reached the level
of maturity of matrix factorization methods. In this paper, we propose a new
method for CP tensor factorization that uses random projections to reduce the
problem to simultaneous matrix diagonalization. Our method is conceptually
simple and also applies to non-orthogonal and asymmetric tensors of arbitrary
order. We prove that a small number of random projections essentially preserves
the spectral information in the tensor, allowing us to remove the dependence on
the eigengap that plagued earlier tensor-to-matrix reductions. Experimentally,
our method outperforms existing tensor factorization methods on both simulated
data and two real datasets.
| Volodymyr Kuleshov and Arun Tejasvi Chaganty and Percy Liang | null | 1501.07320 | null | null |
Sequential Probability Assignment with Binary Alphabets and Large
Classes of Experts | cs.IT cs.LG math.IT stat.ML | We analyze the problem of sequential probability assignment for binary
outcomes with side information and logarithmic loss, where regret---or,
redundancy---is measured with respect to a (possibly infinite) class of
experts. We provide upper and lower bounds for minimax regret in terms of
sequential complexities of the class. These complexities were recently shown to
give matching (up to logarithmic factors) upper and lower bounds for sequential
prediction with general convex Lipschitz loss functions (Rakhlin and Sridharan,
2015). To deal with unbounded gradients of the logarithmic loss, we present a
new analysis that employs a sequential chaining technique with a Bernstein-type
bound. The introduced complexities are intrinsic to the problem of sequential
probability assignment, as illustrated by our lower bound.
We also consider an example of a large class of experts parametrized by
vectors in a high-dimensional Euclidean ball (or a Hilbert ball). The typical
discretization approach fails, while our techniques give a non-trivial bound.
For this problem we also present an algorithm based on regularization with a
self-concordant barrier. This algorithm is of an independent interest, as it
requires a bound on the function values rather than gradients.
| Alexander Rakhlin and Karthik Sridharan | null | 1501.07340 | null | null |
Particle swarm optimization for time series motif discovery | cs.LG cs.NE | Efficiently finding similar segments or motifs in time series data is a
fundamental task that, due to the ubiquity of these data, is present in a wide
range of domains and situations. Because of this, countless solutions have been
devised but, to date, none of them seems to be fully satisfactory and flexible.
In this article, we propose an innovative standpoint and present a solution
coming from it: an anytime multimodal optimization algorithm for time series
motif discovery based on particle swarms. By considering data from a variety of
domains, we show that this solution is extremely competitive when compared to
the state-of-the-art, obtaining comparable motifs in considerably less time
using minimal memory. In addition, we show that it is robust to different
implementation choices and see that it offers an unprecedented degree of
flexibility with regard to the task. All these qualities make the presented
solution stand out as one of the most prominent candidates for motif discovery
in long time series streams. Besides, we believe the proposed standpoint can be
exploited in further time series analysis and mining tasks, widening the scope
of research and potentially yielding novel effective solutions.
| Joan Serr\`a and Josep Lluis Arcos | 10.1016/j.knosys.2015.10.021 | 1501.07399 | null | null |
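For orientation, a plain global-best particle swarm optimizer is sketched below (not the authors' anytime multimodal algorithm); in motif discovery the particle positions would encode candidate segment locations and the objective a segment dissimilarity, whereas here a toy sphere function is minimized. All parameter values are illustrative assumptions.

import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=None):
    # plain global-best particle swarm optimization over a box
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # positions
    v = np.zeros_like(x)                                    # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso_minimize(lambda p: float((p ** 2).sum()), dim=3, seed=0)
print(best, val)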
Bayesian Hierarchical Clustering with Exponential Family: Small-Variance
Asymptotics and Reducibility | stat.ML cs.LG | Bayesian hierarchical clustering (BHC) is an agglomerative clustering method,
where a probabilistic model is defined and its marginal likelihoods are
evaluated to decide which clusters to merge. While BHC provides a few
advantages over traditional distance-based agglomerative clustering algorithms,
successive evaluation of marginal likelihoods and careful hyperparameter tuning
are cumbersome and limit the scalability. In this paper we relax BHC into a
non-probabilistic formulation, exploring small-variance asymptotics in
conjugate-exponential models. We develop a novel clustering algorithm, referred
to as relaxed BHC (RBHC), from the asymptotic limit of the BHC model that
exhibits the scalability of distance-based agglomerative clustering algorithms
as well as the flexibility of Bayesian nonparametric models. We also
investigate the reducibility of the dissimilarity measure emerged from the
asymptotic limit of the BHC model, allowing us to use scalable algorithms such
as the nearest neighbor chain algorithm. Numerical experiments on both
synthetic and real-world datasets demonstrate the validity and high performance
of our method.
| Juho Lee and Seungjin Choi | null | 1501.07430 | null | null |
Regression and Learning to Rank Aggregation for User Engagement
Evaluation | cs.IR cs.LG | User engagement refers to the amount of interaction an instance (e.g., tweet,
news, and forum post) achieves. Ranking the items in social media websites
based on the amount of user participation in them, can be used in different
applications, such as recommender systems. In this paper, we consider a tweet
containing a rating for a movie as an instance and focus on ranking the
instances of each user based on their engagement, i.e., the total number of
retweets and favorites it will gain.
For this task, we define several features which can be extracted from the
meta-data of each tweet. The features are partitioned into three categories:
user-based, movie-based, and tweet-based. We show that in order to obtain good
results, features from all categories should be considered. We exploit
regression and learning to rank methods to rank the tweets and propose to
aggregate the results of regression and learning to rank methods to achieve
better performance. We have run our experiments on an extended version of the
MovieTweeting dataset provided by ACM RecSys Challenge 2014. The results show
that learning to rank approach outperforms most of the regression models and
the combination can improve the performance significantly.
| Hamed Zamani, Azadeh Shakery, Pooya Moradi | 10.1145/2668067.2668077 | 1501.07467 | null | null |
Efficient Divide-And-Conquer Classification Based on Feature-Space
Decomposition | cs.LG | This study presents a divide-and-conquer (DC) approach based on feature space
decomposition for classification. When large-scale datasets are present,
typical approaches usually employ truncated kernel methods on the feature
space or DC approaches on the sample space. However, this does not guarantee
separability between classes, owing to overfitting. To overcome such problems,
this work proposes a novel DC approach on feature spaces consisting of three
steps. Firstly, we divide the feature space into several subspaces using the
decomposition method proposed in this paper. Subsequently, these feature
subspaces are sent into individual local classifiers for training. Finally, the
outcomes of local classifiers are fused together to generate the final
classification results. Experiments on large-scale datasets are carried out for
performance evaluation. The results show that the error rates of the proposed
DC method decreased comparing with the state-of-the-art fast SVM solvers, e.g.,
reducing error rates by 10.53% and 7.53% on RCV1 and covtype datasets
respectively.
| Qi Guo, Bo-Wei Chen, Feng Jiang, Xiangyang Ji, and Sun-Yuan Kung | 10.1109/JSYST.2015.2478800 | 1501.07584 | null | null |
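A small scikit-learn sketch of the feature-space divide-and-conquer idea, assuming a synthetic dataset, contiguous feature blocks, logistic-regression local classifiers, and probability averaging as the fusion rule (the paper's decomposition and fusion schemes differ; this only illustrates the overall pipeline).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# split the feature space into contiguous blocks, train one local classifier
# per block, and fuse the blocks by averaging predicted class probabilities
X, y = make_classification(n_samples=2000, n_features=40, n_informative=20,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
blocks = np.array_split(np.arange(X.shape[1]), 4)
models = [LogisticRegression(max_iter=1000).fit(Xtr[:, b], ytr) for b in blocks]
proba = np.mean([m.predict_proba(Xte[:, b])[:, 1] for m, b in zip(models, blocks)],
                axis=0)
print("fused accuracy:", ((proba > 0.5) == yte).mean())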
Representing Objects, Relations, and Sequences | cs.LG | Vector Symbolic Architectures (VSAs) are high-dimensional vector
representations of objects (e.g., words, image parts), relations (e.g., sentence
structures), and sequences for use with machine learning algorithms. They
consist of a vector addition operator for representing a collection of
unordered objects, a Binding operator for associating groups of objects, and a
methodology for encoding complex structures.
We first develop Constraints that machine learning imposes upon VSAs: for
example, similar structures must be represented by similar vectors. The
constraints suggest that current VSAs should represent phrases ("The smart
Brazilian girl") by binding sums of terms, in addition to simply binding the
terms directly.
We show that matrix multiplication can be used as the binding operator for a
VSA, and that matrix elements can be chosen at random. A consequence for living
systems is that binding is mathematically possible without the need to specify,
in advance, precise neuron-to-neuron connection properties for large numbers of
synapses.
A VSA that incorporates these ideas, MBAT (Matrix Binding of Additive Terms),
is described that satisfies all Constraints.
With respect to machine learning, for some types of problems appropriate VSA
representations permit us to prove learnability, rather than relying on
simulations. We also propose dividing machine (and neural) learning and
representation into three Stages, with differing roles for learning in each
stage.
For neural modeling, we give "representational reasons" for nervous systems
to have many recurrent connections, as well as for the importance of phrases in
language processing.
Sizing simulations and analyses suggest that VSAs in general, and MBAT in
particular, are ready for real-world applications.
| Stephen I. Gallant and T. Wendy Okaywe | null | 1501.07627 | null | null |
Hyper-parameter optimization of Deep Convolutional Networks for object
recognition | cs.CV cs.LG | Recently sequential model based optimization (SMBO) has emerged as a
promising hyper-parameter optimization strategy in machine learning. In this
work, we investigate SMBO to identify architecture hyper-parameters of deep
convolutional networks (DCNs) for object recognition. We propose a simple SMBO
strategy that starts from a set of random initial DCN architectures to generate
new architectures, which on training perform well on a given dataset. Using the
proposed SMBO strategy we are able to identify a number of DCN architectures
that produce results that are comparable to state-of-the-art results on object
recognition benchmarks.
| Sachin S. Talathi | null | 1501.07645 | null | null |
A Random Matrix Theoretical Approach to Early Event Detection in Smart
Grid | stat.ME cs.LG | Power systems are developing very fast nowadays, both in size and in
complexity; this situation is a challenge for Early Event Detection (EED). This
paper proposes a data-driven unsupervised learning method to handle this
challenge. Specifically, the random matrix theories (RMTs) are introduced as
the statistical foundations for random matrix models (RMMs); based on the RMMs,
linear eigenvalue statistics (LESs) are defined via the test functions as the
system indicators. By comparing the values of the LES between the experimental
and the theoretical ones, the anomaly detection is conducted. Furthermore, we
develop a 3D power-map to visualize the LES; it provides a robust auxiliary
decision-making mechanism to the operators. In this sense, the proposed method
conducts EED with a pure statistical procedure, requiring no knowledge of
system topologies, unit operation/control models, etc. The LES, as a key
ingredient during this procedure, is a high dimensional indicator derived
directly from raw data. As an unsupervised learning indicator, the LES is much
more sensitive than the low dimensional indicators obtained from supervised
learning. With the statistical procedure, the proposed method is universal and
fast; moreover, it is robust against traditional EED challenges (such as error
accumulations, spurious correlations, and even bad data in core area). Case
studies, with both simulated data and real ones, validate the proposed method.
To manage large-scale distributed systems, data fusion is mentioned as another
data processing ingredient.
| Xing He, Robert Caiming Qiu, Qian Ai, Yinshuang Cao, Jie Gu, Zhijian
Jin | null | 1502.00060 | null | null |
A New Intelligence Based Approach for Computer-Aided Diagnosis of Dengue
Fever | stat.ML cs.AI cs.LG | Identification of the influential clinical symptoms and laboratory features
that help in the diagnosis of dengue fever in early phase of the illness would
aid in designing effective public health management and virological
surveillance strategies. Keeping this as our main objective we develop in this
paper, a new computational intelligence based methodology that predicts the
diagnosis in real time, minimizing the number of false positives and false
negatives. Our methodology consists of three major components (i) a novel
missing value imputation procedure that can be applied on any data set
consisting of categorical (nominal) and/or numeric (real or integer) attributes, (ii) a
wrapper-based feature selection method with genetic search for extracting a
subset of most influential symptoms that can diagnose the illness and (iii) an
alternating decision tree method that employs boosting for generating highly
accurate decision rules. The predictive models developed using our methodology
are found to be more accurate than the state-of-the-art methodologies used in
the diagnosis of the dengue fever.
| Vadrevu Sree Hari Rao and Mallenahalli Naresh Kumar | 10.1109/TITB.2011.2171978 | 1502.00062 | null | null |
A Batchwise Monotone Algorithm for Dictionary Learning | cs.LG | We propose a batchwise monotone algorithm for dictionary learning. Unlike the
state-of-the-art dictionary learning algorithms which impose sparsity
constraints on a sample-by-sample basis, we instead treat the samples as a
batch, and impose the sparsity constraint on the whole. The benefit of
batchwise optimization is that the non-zeros can be better allocated across the
samples, leading to a better approximation of the whole. To accomplish this, we
propose procedures to switch non-zeros in both rows and columns in the support
of the coefficient matrix to reduce the reconstruction error. We prove that in
the proposed support switching procedure the objective of the algorithm, i.e., the
reconstruction error, decreases monotonically and converges. Furthermore, we
introduce a block orthogonal matching pursuit algorithm that also operates on
sample batches to provide a warm start. Experiments on both natural image
patches and UCI data sets show that the proposed algorithm produces a better
approximation with the same sparsity levels compared to the state-of-the-art
algorithms.
| Huan Wang, John Wright, Daniel Spielman | null | 1502.00064 | null | null |
TuPAQ: An Efficient Planner for Large-scale Predictive Analytic Queries | cs.DB cs.DC cs.LG | The proliferation of massive datasets combined with the development of
sophisticated analytical techniques have enabled a wide variety of novel
applications such as improved product recommendations, automatic image tagging,
and improved speech-driven interfaces. These and many other applications can be
supported by Predictive Analytic Queries (PAQs). A major obstacle to supporting
PAQs is the challenging and expensive process of identifying and training an
appropriate predictive model. Recent efforts aiming to automate this process
have focused on single node implementations and have assumed that model
training itself is a black box, thus limiting the effectiveness of such
approaches on large-scale problems. In this work, we build upon these recent
efforts and propose an integrated PAQ planning architecture that combines
advanced model search techniques, bandit resource allocation via runtime
algorithm introspection, and physical optimization via batching. The result is
TuPAQ, a component of the MLbase system, which solves the PAQ planning problem
with comparable quality to exhaustive strategies but an order of magnitude more
efficiently than the standard baseline approach, and can scale to models
trained on terabytes of data across hundreds of machines.
| Evan R. Sparks, Ameet Talwalkar, Michael J. Franklin, Michael I.
Jordan, Tim Kraska | null | 1502.00068 | null | null |
Deep learning of fMRI big data: a novel approach to subject-transfer
decoding | stat.ML cs.LG q-bio.NC | As a technology to read brain states from measurable brain activities, brain
decoding is widely applied in industries and medical sciences. In spite of
high demands in these applications for a universal decoder that can be applied
to all individuals simultaneously, large variation in brain activities across
individuals has limited the scope of many studies to the development of
individual-specific decoders. In this study, we used deep neural network (DNN),
a nonlinear hierarchical model, to construct a subject-transfer decoder. Our
decoder is the first successful DNN-based subject-transfer decoder. When
applied to a large-scale functional magnetic resonance imaging (fMRI) database,
our DNN-based decoder achieved higher decoding accuracy than other baseline
methods, including support vector machine (SVM). In order to analyze the
knowledge acquired by this decoder, we applied principal sensitivity analysis
(PSA) to the decoder and visualized the discriminative features that are common
to all subjects in the dataset. Our PSA successfully visualized the
subject-independent features contributing to the subject-transferability of the
trained decoder.
| Sotetsu Koyamada and Yumi Shikauchi and Ken Nakae and Masanori Koyama
and Shin Ishii | null | 1502.00093 | null | null |
Twitter Hash Tag Recommendation | cs.IR cs.LG | The rise in popularity of microblogging services like Twitter has led to
increased use of content annotation strategies like the hashtag. Hashtags
provide users with a tagging mechanism to help organize, group, and create
visibility for their posts. This is a simple idea but can be challenging for
the user in practice which leads to infrequent usage. In this paper, we will
investigate various methods of recommending hashtags as new posts are created
to encourage more widespread adoption and usage. Hashtag recommendation comes
with numerous challenges including processing huge volumes of streaming data
and content which is small and noisy. We will investigate preprocessing methods
to reduce noise in the data and determine an effective method of hashtag
recommendation based on the popular classification algorithms.
| Roman Dovgopol, Matt Nohelty | null | 1502.00094 | null | null |
Sparse Dueling Bandits | stat.ML cs.LG | The dueling bandit problem is a variation of the classical multi-armed bandit
in which the allowable actions are noisy comparisons between pairs of arms.
This paper focuses on a new approach for finding the "best" arm according to
the Borda criterion using noisy comparisons. We prove that in the absence of
structural assumptions, the sample complexity of this problem is proportional
to the sum of the inverse squared gaps between the Borda scores of each
suboptimal arm and the best arm. We explore this dependence further and
consider structural constraints on the pairwise comparison matrix (a particular
form of sparsity natural to this problem) that can significantly reduce the
sample complexity. This motivates a new algorithm called Successive Elimination
with Comparison Sparsity (SECS) that exploits sparsity to find the Borda winner
using fewer samples than standard algorithms. We also evaluate the new
algorithm experimentally with synthetic and real data. The results show that
the sparsity model and the new algorithm can provide significant improvements
over standard approaches.
| Kevin Jamieson, Sumeet Katariya, Atul Deshpande and Robert Nowak | null | 1502.00133 | null | null |
Spectral Detection in the Censored Block Model | cs.SI cond-mat.dis-nn cs.LG math.PR | We consider the problem of partially recovering hidden binary variables from
the observation of (few) censored edge weights, a problem with applications in
community detection, correlation clustering and synchronization. We describe
two spectral algorithms for this task based on the non-backtracking and the
Bethe Hessian operators. These algorithms are shown to be asymptotically
optimal for the partial recovery problem, in that they detect the hidden
assignment as soon as it is information theoretically possible to do so.
| Alaa Saade, Florent Krzakala, Marc Lelarge and Lenka Zdeborov\'a | 10.1109/ISIT.2015.7282642 | 1502.00163 | null | null |
High Dimensional Low Rank plus Sparse Matrix Decomposition | cs.NA cs.DS cs.LG math.NA stat.ML | This paper is concerned with the problem of low rank plus sparse matrix
decomposition for big data. Conventional algorithms for matrix decomposition
use the entire data to extract the low-rank and sparse components, and are
based on optimization problems with complexity that scales with the dimension
of the data, which limits their scalability. Furthermore, existing randomized
approaches mostly rely on uniform random sampling, which is quite inefficient
for many real world data matrices that exhibit additional structures (e.g.
clustering). In this paper, a scalable subspace-pursuit approach that
transforms the decomposition problem to a subspace learning problem is
proposed. The decomposition is carried out using a small data sketch formed
from sampled columns/rows. Even when the data is sampled uniformly at random,
it is shown that the sufficient number of sampled columns/rows is roughly
O(r\mu), where \mu is the coherency parameter and r the rank of the low rank
component. In addition, adaptive sampling algorithms are proposed to address
the problem of column/row sampling from structured data. We provide an analysis
of the proposed method with adaptive sampling and show that adaptive sampling
makes the required number of sampled columns/rows invariant to the distribution
of the data. The proposed approach is amenable to online implementation and an
online scheme is proposed.
| Mostafa Rahmani, George Atia | 10.1109/TSP.2017.2649482 | 1502.00182 | null | null |
Advanced Mean Field Theory of Restricted Boltzmann Machine | cond-mat.stat-mech cs.LG q-bio.NC stat.ML | Learning in restricted Boltzmann machine is typically hard due to the
computation of gradients of log-likelihood function. To describe the network
state statistics of the restricted Boltzmann machine, we develop an advanced
mean field theory based on the Bethe approximation. Our theory provides an
efficient message passing based method that evaluates not only the partition
function (free energy) but also its gradients without requiring statistical
sampling. The results are compared with those obtained by the computationally
expensive sampling based method.
| Haiping Huang and Taro Toyoizumi | 10.1103/PhysRevE.91.050101 | 1502.00186 | null | null |
Feature Selection with Redundancy-complementariness Dispersion | cs.LG stat.ML | Feature selection has attracted significant attention in data mining and
machine learning in the past decades. Many existing feature selection methods
eliminate redundancy by measuring pairwise inter-correlation of features,
whereas the complementariness of features and higher inter-correlation among
more than two features are ignored. In this study, a modification item
concerning the complementariness of features is introduced in the evaluation
criterion of features. Additionally, in order to identify the interference
effect of already-selected False Positives (FPs), the
redundancy-complementariness dispersion is also taken into account to adjust
the measurement of pairwise inter-correlation of features. To illustrate the
effectiveness of the proposed method, classification experiments are conducted with
four frequently used classifiers on ten datasets. Classification results verify
the superiority of the proposed method compared with five representative feature
selection methods.
| Zhijun Chen, Chaozhong Wu, Yishi Zhang, Zhen Huang, Bin Ran, Ming
Zhong, Nengchao Lyu | null | 1502.00231 | null | null |
Injury risk prediction for traffic accidents in Porto Alegre/RS, Brazil | cs.LG cs.AI | This study describes the experimental application of Machine Learning
techniques to build prediction models that can assess the injury risk
associated with traffic accidents. This work uses a freely available data set
of traffic accident records that took place in the city of Porto Alegre/RS
(Brazil) during the year of 2013. This study also provides an analysis of the
most important attributes of a traffic accident that could produce an outcome
of injury to the people involved in the accident.
| Christian S. Perone | null | 1502.00245 | null | null |
Iterated Support Vector Machines for Distance Metric Learning | cs.LG cs.CV | Distance metric learning aims to learn from the given training data a valid
distance metric, with which the similarity between data samples can be more
effectively evaluated for classification. Metric learning is often formulated
as a convex or nonconvex optimization problem, while many existing metric
learning algorithms become inefficient for large scale problems. In this paper,
we formulate metric learning as a kernel classification problem, and solve it
by iterated training of support vector machines (SVM). The new formulation is
easy to implement, efficient in training, and tractable for large-scale
problems. Two novel metric learning models, namely Positive-semidefinite
Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained
Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the
global optimality of their solutions. Experimental results on UCI dataset
classification, handwritten digit recognition, face verification and person
re-identification demonstrate that the proposed metric learning methods achieve
higher classification accuracy than state-of-the-art methods and they are
significantly more efficient in training.
| Wangmeng Zuo, Faqiang Wang, David Zhang, Liang Lin, Yuchi Huang, Deyu
Meng, Lei Zhang | null | 1502.00363 | null | null |
Scaling Recurrent Neural Network Language Models | cs.CL cs.LG | This paper investigates the scaling properties of Recurrent Neural Network
Language Models (RNNLMs). We discuss how to train very large RNNs on GPUs and
address the questions of how RNNLMs scale with respect to model size,
training-set size, computational costs and memory. Our analysis shows that
despite being more costly to train, RNNLMs obtain much lower perplexities on
standard benchmarks than n-gram models. We train the largest known RNNs and
present relative word error rates gains of 18% on an ASR task. We also present
the new lowest perplexities on the recently released billion word language
modelling benchmark, 1 BLEU point gain on machine translation and a 17%
relative hit rate gain in word prediction.
| Will Williams, Niranjani Prasad, David Mrva, Tom Ash, Tony Robinson | null | 1502.00512 | null | null |
Unsupervised Incremental Learning and Prediction of Music Signals | cs.SD cs.IR cs.LG stat.ML | A system is presented that segments, clusters and predicts musical audio in
an unsupervised manner, adjusting the number of (timbre) clusters
instantaneously to the audio input. A sequence learning algorithm adapts its
structure to a dynamically changing clustering tree. The flow of the system is
as follows: 1) segmentation by onset detection, 2) timbre representation of
each segment by Mel frequency cepstrum coefficients, 3) discretization by
incremental clustering, yielding a tree of different sound classes (e.g.
instruments) that can grow or shrink on the fly driven by the instantaneous
sound events, resulting in a discrete symbol sequence, 4) extraction of
statistical regularities of the symbol sequence, using hierarchical N-grams and
the newly introduced conceptual Boltzmann machine, and 5) prediction of the
next sound event in the sequence. The system's robustness is assessed with
respect to complexity and noisiness of the signal. Clustering in isolation
yields an adjusted Rand index (ARI) of 82.7% / 85.7% for data sets of singing
voice and drums. Onset detection jointly with clustering achieve an ARI of
81.3% / 76.3% and the prediction of the entire system yields an ARI of 27.2% /
39.2%.
| Ricard Marxer and Hendrik Purwins | 10.1109/TASLP.2016.2530409 | 1502.00524 | null | null |
Lock in Feedback in Sequential Experiments | cs.LG | We often encounter situations in which an experimenter wants to find, by
sequential experimentation, $x_{max} = \arg\max_{x} f(x)$, where $f(x)$ is a
(possibly unknown) function of a well controllable variable $x$. Taking
inspiration from physics and engineering, we have designed a new method to
address this problem. In this paper, we first introduce the method in
continuous time, and then present two algorithms for use in sequential
experiments. Through a series of simulation studies, we show that the method is
effective for finding maxima of unknown functions by experimentation, even when
the maximum of the functions drifts or when the signal to noise ratio is low.
| Maurits Kaptein and Davide Iannuzzi | null | 1502.00598 | null | null |
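A minimal sketch of the lock-in idea in discrete time, assuming a quadratic toy function, Gaussian observation noise, and hand-picked oscillation and gain parameters (not the authors' algorithms): the controlled variable is oscillated around the current estimate, the noisy responses are demodulated at the oscillation frequency to estimate the local slope, and the estimate is moved uphill.

import numpy as np

def lock_in_feedback(f, x0=0.0, amp=0.5, omega=1.0, gain=0.3,
                     block=200, n_blocks=150, noise=0.5, dt=0.1, seed=None):
    # oscillate x around x0, demodulate the noisy responses at the oscillation
    # frequency to estimate the local slope, and move x0 uphill
    rng = np.random.default_rng(seed)
    t = 0.0
    for _ in range(n_blocks):
        ts = t + dt * np.arange(block)
        xs = x0 + amp * np.cos(omega * ts)
        ys = np.array([f(xi) for xi in xs]) + noise * rng.standard_normal(block)
        ref = np.cos(omega * ts)
        # subtract the block mean before demodulating, as a lock-in amplifier would
        slope = 2.0 * np.dot(ys - ys.mean(), ref) / (block * amp)   # roughly f'(x0)
        x0 += gain * slope                                          # climb the slope
        t += dt * block
    return x0

# toy usage: the maximum of f sits at x = 2
print(lock_in_feedback(lambda x: -(x - 2.0) ** 2, x0=0.0, seed=0))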
Hybrid Orthogonal Projection and Estimation (HOPE): A New Framework to
Probe and Learn Neural Networks | cs.LG cs.NE | In this paper, we propose a novel model for high-dimensional data, called the
Hybrid Orthogonal Projection and Estimation (HOPE) model, which combines a
linear orthogonal projection and a finite mixture model under a unified
generative modeling framework. The HOPE model itself can be learned
unsupervised from unlabelled data based on the maximum likelihood estimation as
well as discriminatively from labelled data. More interestingly, we have shown
the proposed HOPE models are closely related to neural networks (NNs) in the
sense that each hidden layer can be reformulated as a HOPE model. As a result,
the HOPE framework can be used as a novel tool to probe why and how NNs work,
more importantly, to learn NNs in either supervised or unsupervised ways. In
this work, we have investigated the HOPE framework to learn NNs for several
standard tasks, including image recognition on MNIST and speech recognition on
TIMIT. Experimental results have shown that the HOPE framework yields
significant performance gains over the current state-of-the-art methods in
various types of NN learning problems, including unsupervised feature learning,
supervised or semi-supervised learning.
| Shiliang Zhang and Hui Jiang | null | 1502.00702 | null | null |
Cheaper and Better: Selecting Good Workers for Crowdsourcing | stat.ML cs.AI cs.LG stat.AP | Crowdsourcing provides a popular paradigm for data collection at scale. We
study the problem of selecting subsets of workers from a given worker pool to
maximize the accuracy under a budget constraint. One natural question is
whether we should hire as many workers as the budget allows, or restrict on a
small number of top-quality workers. By theoretically analyzing the error rate
of a typical setting in crowdsourcing, we frame the worker selection problem
into a combinatorial optimization problem and propose an algorithm to solve it
efficiently. Empirical results on both simulated and real-world datasets show
that our algorithm is able to select a small number of high-quality workers,
and performs as well as, and sometimes even better than, the much larger
crowds that the budget allows.
| Hongwei Li and Qiang Liu | null | 1502.00725 | null | null |
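A toy simulation of the trade-off studied above, under the strong assumption that worker accuracies are already known (the paper has to estimate them and selects the subset by combinatorial optimization): majority voting with every affordable worker versus a small top-quality subset.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, n_workers = 500, 30
truth = rng.integers(0, 2, size=n_items)
accuracies = rng.uniform(0.55, 0.90, size=n_workers)   # heterogeneous quality

def majority_vote(worker_idx):
    votes = np.zeros(n_items)
    for w in worker_idx:
        correct = rng.random(n_items) < accuracies[w]
        labels = np.where(correct, truth, 1 - truth)
        votes += 2 * labels - 1                        # +1 / -1 votes
    return (votes > 0).astype(int)                     # ties fall back to label 0

whole_crowd = np.arange(n_workers)                     # everyone the budget allows
top_quality = np.argsort(accuracies)[::-1][:9]         # small elite subset

for name, idx in [("all 30 workers", whole_crowd), ("top 9 workers", top_quality)]:
    acc = np.mean(majority_vote(idx) == truth)
    print(f"{name}: majority-vote accuracy = {acc:.3f}")
```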
Incremental Knowledge Base Construction Using DeepDive | cs.DB cs.CL cs.LG | Populating a database with unstructured information is a long-standing
problem in industry and research that encompasses problems of extraction,
cleaning, and integration. Recent names used for this problem include dealing
with dark data and knowledge base construction (KBC). In this work, we describe
DeepDive, a system that combines database and machine learning ideas to help
develop KBC systems, and we present techniques to make the KBC process more
efficient. We observe that the KBC process is iterative, and we develop
techniques to incrementally produce inference results for KBC systems. We
propose two methods for incremental inference, based respectively on sampling
and variational techniques. We also study the tradeoff space of these methods
and develop a simple rule-based optimizer. DeepDive includes all of these
contributions, and we evaluate DeepDive on five KBC systems, showing that it
can speed up KBC inference tasks by up to two orders of magnitude with
negligible impact on quality.
| Jaeho Shin, Sen Wu, Feiran Wang, Christopher De Sa, Ce Zhang,
Christopher R\'e | null | 1502.00731 | null | null |
Personalized Web Search | cs.IR cs.LG | Personalization is important for search engines to improve user experience.
Most existing work does pure feature engineering, extracting a large number of
session-style features and then training a ranking model. Here we propose a
novel way to model both long-term and short-term user behavior using a
multi-armed bandit algorithm. Our algorithm generalizes session information
across users well and, as an explore-exploit style algorithm, it also
generalizes well to new URLs and new users. Experiments show that our algorithm
improves performance over the default ranking and outperforms several popular
multi-armed bandit algorithms.
| Li Zhou | null | 1502.01057 | null | null |
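A bare-bones UCB1 bandit over candidate URLs, as a hedged illustration of the explore/exploit re-ranking idea only; the paper's treatment of long- and short-term behavior and cross-user generalization is more elaborate. The URLs and click probabilities below are simulated.

```python
import math
import random

random.seed(0)
urls = ["url_a", "url_b", "url_c", "url_d"]
true_ctr = {"url_a": 0.10, "url_b": 0.25, "url_c": 0.05, "url_d": 0.18}

counts = {u: 0 for u in urls}   # times each URL was shown
clicks = {u: 0 for u in urls}   # clicks received

def ucb_choice(t):
    for u in urls:              # show every URL once before using the bound
        if counts[u] == 0:
            return u
    return max(urls, key=lambda u: clicks[u] / counts[u]
               + math.sqrt(2 * math.log(t) / counts[u]))

for t in range(1, 5001):
    u = ucb_choice(t)
    counts[u] += 1
    clicks[u] += 1 if random.random() < true_ctr[u] else 0

print("most-served URL:", max(counts, key=counts.get))
print("empirical CTRs:", {u: round(clicks[u] / counts[u], 3) for u in urls})
```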
Multimodal Task-Driven Dictionary Learning for Image Classification | stat.ML cs.CV cs.LG | Dictionary learning algorithms have been successfully used for both
reconstructive and discriminative tasks, where an input signal is represented
with a sparse linear combination of dictionary atoms. While these methods are
mostly developed for single-modality scenarios, recent studies have
demonstrated the advantages of feature-level fusion based on the joint sparse
representation of the multimodal inputs. In this paper, we propose a multimodal
task-driven dictionary learning algorithm under the joint sparsity constraint
(prior) to enforce collaborations among multiple homogeneous/heterogeneous
sources of information. In this task-driven formulation, the multimodal
dictionaries are learned simultaneously with their corresponding classifiers.
The resulting multimodal dictionaries can generate discriminative latent
features (sparse codes) from the data that are optimized for a given task such
as binary or multiclass classification. Moreover, we present an extension of
the proposed formulation using a mixed joint and independent sparsity prior
which facilitates more flexible fusion of the modalities at feature level. The
efficacy of the proposed algorithms for multimodal classification is
illustrated on four different applications -- multimodal face recognition,
multi-view face recognition, multi-view action recognition, and multimodal
biometric recognition. It is also shown that, compared to the counterpart
reconstructive-based dictionary learning algorithms, the task-driven
formulations are more computationally efficient in the sense that they can be
equipped with more compact dictionaries and still achieve superior performance.
| Soheil Bahrampour, Nasser M. Nasrabadi, Asok Ray, W. Kenneth Jenkins | 10.1109/TIP.2015.2496275 | 1502.01094 | null | null |
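A much-simplified, two-stage approximation of the fusion idea, for illustration only: learn a dictionary per modality, concatenate the resulting sparse codes, and train a linear classifier on top. The paper instead learns the dictionaries jointly with the classifier under a joint-sparsity prior; the "modalities" below are just the two halves of each digit image, and the dictionaries are fit on all samples purely for brevity.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
# Two stand-in "modalities": left and right halves of each 8x8 digit image.
modalities = [X[:, :32], X[:, 32:]]

codes = []
for Xm in modalities:
    dico = MiniBatchDictionaryLearning(n_components=40, alpha=1.0, random_state=0)
    codes.append(dico.fit_transform(Xm))          # sparse codes for this modality
Z = np.hstack(codes)                              # feature-level fusion

Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Ztr, ytr)
print("accuracy on fused sparse codes:", clf.score(Zte, yte))
```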
Learning Local Invariant Mahalanobis Distances | cs.LG stat.ML | For many tasks and data types, there are natural transformations to which the
data should be invariant or insensitive. For instance, in visual recognition,
natural images should be insensitive to rotation and translation. This
requirement and its implications have been important in many machine learning
applications, and tolerance for image transformations was primarily achieved by
using robust feature vectors. In this paper we propose a novel and
computationally efficient way to learn a local Mahalanobis metric per datum,
and show how we can learn a local invariant metric to any transformation in
order to improve performance.
| Ethan Fetaya and Shimon Ullman | null | 1502.01176 | null | null |
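A small sketch of the invariance mechanism under a deliberate simplification: given tangent directions of a nuisance transformation at a point, build a Mahalanobis matrix that projects those directions out, so the distance ignores displacements along them. The paper learns such a local metric per datum rather than hard-coding it; the helper below is hypothetical.

```python
import numpy as np

def invariant_mahalanobis(tangents):
    """M = I - P, where P projects onto the span of the transformation tangents."""
    T = np.atleast_2d(tangents).T                   # shape (d, k)
    P = T @ np.linalg.pinv(T)                       # projector onto span(T)
    return np.eye(T.shape[0]) - P

def mdist(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

# 2-D toy example: treat horizontal translation as the nuisance direction.
M = invariant_mahalanobis(np.array([1.0, 0.0]))
x = np.array([0.0, 0.0])
print(mdist(x, np.array([5.0, 0.0]), M))   # 0.0  (pure nuisance shift is ignored)
print(mdist(x, np.array([0.0, 2.0]), M))   # 2.0  (orthogonal change still counts)
```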
RELEAF: An Algorithm for Learning and Exploiting Relevance | cs.LG stat.ML | Recommender systems, medical diagnosis, network security, etc., require
on-going learning and decision-making in real time. These -- and many others --
represent perfect examples of the opportunities and difficulties presented by
Big Data: the available information often arrives from a variety of sources and
has diverse features so that learning from all the sources may be valuable but
integrating what is learned is subject to the curse of dimensionality. This
paper develops and analyzes algorithms that allow efficient learning and
decision-making while avoiding the curse of dimensionality. We formalize the
information available to the learner/decision-maker at a particular time as a
context vector which the learner should consider when taking actions. In
general the context vector is very high dimensional, but in many settings, the
most relevant information is embedded into only a few relevant dimensions. If
these relevant dimensions were known in advance, the problem would be simple --
but they are not. Moreover, the relevant dimensions may be different for
different actions. Our algorithm learns the relevant dimensions for each
action, and makes decisions based on what it has learned. Formally, we build on
the structure of a contextual multi-armed bandit by adding and exploiting a
relevance relation. We prove a general regret bound for our algorithm whose
time order depends only on the maximum number of relevant dimensions among all
the actions, which in the special case where the relevance relation is
single-valued (a function), reduces to $\tilde{O}(T^{2(\sqrt{2}-1)})$; in the
absence of a relevance relation, the best known contextual bandit algorithms
achieve regret $\tilde{O}(T^{(D+1)/(D+2)})$, where $D$ is the full dimension of
the context vector.
| Cem Tekin and Mihaela van der Schaar | 10.1109/JSTSP.2015.2402646 | 1502.01418 | null | null |
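A toy epsilon-greedy sketch of the "learn which context dimension matters for each arm" idea, heavily simplified relative to RELEAF (two bins per dimension, no confidence intervals, no regret guarantee): each arm's reward truly depends on one hidden dimension of a 20-dimensional context, and relevance is scored by how far apart the binned reward means are. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_arms, T = 20, 3, 20000
relevant_dim = [0, 7, 13]            # hidden: the one dimension each arm depends on

def true_reward(arm, ctx):
    return float(ctx[relevant_dim[arm]] > 0.5)      # noiseless for simplicity

# sums[a, d, b] / counts[a, d, b]: mean observed reward for arm a when context
# dimension d falls in bin b (0: x < 0.5, 1: x >= 0.5).
sums = np.zeros((n_arms, D, 2))
counts = np.ones((n_arms, D, 2)) * 1e-6

def predict(arm, ctx):
    means = sums[arm] / counts[arm]                 # (D, 2) binned means
    relevance = np.abs(means[:, 1] - means[:, 0])   # spread across bins
    d = int(np.argmax(relevance))                   # estimated relevant dimension
    return means[d, int(ctx[d] >= 0.5)]

total = 0.0
for t in range(T):
    ctx = rng.random(D)
    if rng.random() < 0.05:                         # occasional exploration
        arm = rng.integers(n_arms)
    else:
        arm = int(np.argmax([predict(a, ctx) for a in range(n_arms)]))
    r = true_reward(arm, ctx)
    total += r
    bins = (ctx >= 0.5).astype(int)
    sums[arm, np.arange(D), bins] += r
    counts[arm, np.arange(D), bins] += 1

print("average reward:", total / T, "(the oracle policy earns about 0.875)")
```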
A mixture Cox-Logistic model for feature selection from survival and
classification data | stat.ML cs.LG stat.ME | This paper presents an original approach for jointly fitting survival times
and classifying samples into subgroups. The Coxlogit model is a generalized
linear model with a common set of selected features for both tasks. Survival
times and class labels are here assumed to be conditioned by a common risk
score which depends on those features. Learning is then naturally expressed as
maximizing the joint probability of subgroup labels and the ordering of
survival events, conditioned to a common weight vector. The model is estimated
by minimizing a regularized log-likelihood through a coordinate descent
algorithm.
Validation on synthetic and breast cancer data shows that the proposed
approach outperforms a standard Cox model or logistic regression when both
predicting the survival times and classifying new samples into subgroups. It is
also better at selecting informative features for both tasks.
| Samuel Branders, Roberto D'Ambrosio and Pierre Dupont | null | 1502.01493 | null | null |
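A compact sketch of a joint survival-plus-classification objective with a shared weight vector, in the spirit of the model above but under my own simplifications: synthetic data, an L2 penalty in place of the paper's regularizer, generic BFGS instead of coordinate descent, and no special handling of tied event times.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -1.0, 0.0, 0.0, 0.5])
risk = X @ w_true
time = rng.exponential(scale=np.exp(-risk))        # higher risk -> earlier event
event = (rng.random(n) < 0.8).astype(float)        # ~20% censoring
label = (risk > 0).astype(float)                   # subgroup driven by the same risk

def neg_cox_partial_loglik(w):
    eta = X @ w
    order = np.argsort(-time)                      # descending event times
    eta_o, ev_o = eta[order], event[order]
    log_risk_set = np.logaddexp.accumulate(eta_o)  # log sum_{t_j >= t_i} exp(eta_j)
    return -np.sum(ev_o * (eta_o - log_risk_set))

def logistic_loss(w):
    eta = X @ w
    return np.sum(np.logaddexp(0.0, -(2 * label - 1) * eta))

def joint_objective(w, alpha=0.5, lam=0.1):
    return (alpha * neg_cox_partial_loglik(w)
            + (1 - alpha) * logistic_loss(w)
            + lam * np.dot(w, w))

w_hat = minimize(joint_objective, np.zeros(d), method="BFGS").x
print("estimated shared weights:", np.round(w_hat, 2))
```

The single weight vector enters both losses through the same linear risk score, which is what lets the fitted model rank survival times and separate the subgroups with one set of selected features.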