title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Using Learned Predictions as Feedback to Improve Control and
Communication with an Artificial Limb: Preliminary Findings | cs.AI cs.HC cs.LG cs.RO | Many people suffer from the loss of a limb. Learning to get by without an arm
or hand can be very challenging, and existing prostheses do not yet fulfil the
needs of individuals with amputations. One promising solution is to provide
greater communication between a prosthesis and its user. Towards this end, we
present a simple machine learning interface to supplement the control of a
robotic limb with feedback to the user about what the limb will be experiencing
in the near future. A real-time prediction learner was implemented to predict
impact-related electrical load experienced by a robot limb; the learning
system's predictions were then communicated to the device's user to aid in
their interactions with a workspace. We tested this system with five
able-bodied subjects. Each subject manipulated the robot arm while receiving
different forms of vibrotactile feedback regarding the arm's contact with its
workspace. Our trials showed that communicable predictions could be learned
quickly during human control of the robot arm. Using these predictions as a
basis for feedback led to a statistically significant improvement in task
performance when compared to purely reactive feedback from the device. Our
study therefore contributes initial evidence that prediction learning and
machine intelligence can benefit not just control, but also feedback from an
artificial limb. We expect that a greater level of acceptance and ownership can
be achieved if the prosthesis itself takes an active role in transmitting
learned knowledge about its state and its situation of use.
| Adam S. R. Parker, Ann L. Edwards, and Patrick M. Pilarski | null | 1408.1913 | null | null |
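The abstract above does not include an implementation; as a hedged illustration of the kind of real-time prediction learner it describes (a temporal-difference forecaster of an upcoming sensor signal), here is a minimal Python sketch. The class name, feature encoding, and parameter values are our own assumptions, not taken from the paper.

```python
import numpy as np

# Minimal TD(lambda) prediction learner, sketching the kind of real-time
# forecaster the abstract describes: an online prediction of a discounted
# sum of a sensed signal (e.g., impact-related motor load).
class TDPredictor:
    def __init__(self, n_features, alpha=0.1, gamma=0.97, lam=0.9):
        self.w = np.zeros(n_features)   # learned weights
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def update(self, x, signal, x_next):
        """One online TD(lambda) step; returns the prediction for x_next,
        which could drive vibrotactile feedback to the user."""
        delta = signal + self.gamma * self.w @ x_next - self.w @ x
        self.z = self.gamma * self.lam * self.z + x
        self.w += self.alpha * delta * self.z
        return self.w @ x_next

# Toy usage: predict a noisy periodic "load" from a one-hot phase feature.
rng = np.random.default_rng(0)
predictor = TDPredictor(n_features=10)
phase, x = 0, np.eye(10)[0]
for t in range(5000):
    nxt = (phase + 1) % 10
    load = float(phase == 7) + 0.05 * rng.standard_normal()  # "impact" at phase 7
    x_next = np.eye(10)[nxt]
    predictor.update(x, load, x_next)
    phase, x = nxt, x_next
print(predictor.w @ np.eye(10)[6])  # elevated prediction just before the impact
```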
LARSEN-ELM: Selective Ensemble of Extreme Learning Machines using LARS
for Blended Data | cs.LG stat.ML | The extreme learning machine (ELM), a neural network algorithm, offers fast
training and a simple structure, but weak robustness on blended data is an
unavoidable defect of the original ELM. We present a new machine learning
framework, LARSEN-ELM, to overcome this problem. LARSEN-ELM has two key steps.
In the preprocessing step, we select the input variables most strongly related
to the output using least angle regression (LARS). In the training step, we
combine a genetic algorithm (GA) based selective ensemble with the original
ELM. In the experiments, we use a sum of two sines and four datasets from the
UCI repository to verify the robustness of our approach. The results show
that, compared with the original ELM and other methods such as OP-ELM,
GASEN-ELM, and LSBoost, LARSEN-ELM significantly improves robustness while
maintaining a relatively high speed.
| Bo Han, Bo He, Rui Nian, Mengmeng Ma, Shujing Zhang, Minghui Li and
Amaury Lendasse | 10.1016/j.neucom.2014.01.069 | 1408.2003 | null | null |
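The paper's code is not shown here; as a hedged sketch of the two LARSEN-ELM ingredients named in the abstract — LARS for input-variable selection and a plain ELM as the base learner — the following Python fragment may help. The GA-based selective ensemble step is omitted, and all function names and settings are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lars

def train_elm(X, y, n_hidden=50, rng=None):
    """Plain ELM: random hidden layer, closed-form output weights."""
    rng = rng or np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                     # least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Step 1 (preprocessing): LARS-based variable selection.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
y = np.sin(X[:, 0]) + np.sin(X[:, 1])        # only two inputs are relevant
keep = np.flatnonzero(Lars(n_nonzero_coefs=4).fit(X, y).coef_)

# Step 2 (training): fit the ELM on the selected variables only.
W, b, beta = train_elm(X[:, keep], y)
y_hat = elm_predict(X[:, keep], W, b, beta)
```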
RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning
Machines for Robustness Improvement | cs.LG cs.NE | The extreme learning machine (ELM), an emerging branch of shallow networks,
has shown excellent generalization and fast learning speed. However, on
blended data the robustness of ELM is weak because the weights and biases of
its hidden nodes are set randomly, and noisy data exert a further negative
effect. To solve this problem, a new framework called RMSE-ELM is proposed in
this paper. It is a two-layer recursive model. In the first layer, the
framework trains many ELMs in different groups concurrently, then employs
selective ensemble to pick out an optimal set of ELMs in each group, which are
merged into a large group of ELMs called the candidate pool. In the second
layer, selective ensemble is applied recursively to the candidate pool to
obtain the final ensemble. In the experiments, we use blended UCI datasets to
confirm the robustness of our new approach in two key respects (mean square
error and standard deviation). The space complexity of our method increases to
some degree, but the results show that RMSE-ELM significantly improves
robustness at only a slight cost in computation time compared with
representative methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It is
therefore a promising framework for addressing the robustness of ELM on
high-dimensional blended data.
| Bo Han, Bo He, Mengmeng Ma, Tingting Sun, Tianhong Yan, Amaury
Lendasse | null | 1408.2004 | null | null |
Blind Construction of Optimal Nonlinear Recursive Predictors for
Discrete Sequences | cs.LG stat.ML | We present a new method for nonlinear prediction of discrete random sequences
under minimal structural assumptions. We give a mathematical construction for
optimal predictors of such processes, in the form of hidden Markov models. We
then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which
approximates the ideal predictor from data. We discuss the reliability of CSSR,
its data requirements, and its performance in simulations. Finally, we compare
our approach to existing methods using variable-length Markov models and
cross-validated hidden Markov models, and show theoretically and experimentally
that our method delivers results superior to the former and at least comparable
to the latter.
| Cosma Shalizi, Kristina Lisa Klinkner | null | 1408.2025 | null | null |
Robust Graphical Modeling with t-Distributions | cs.LG stat.ML | Graphical Gaussian models have proven to be useful tools for exploring
network structures based on multivariate data. Applications to studies of gene
expression have generated substantial interest in these models, and resulting
recent progress includes the development of fitting methodology involving
penalization of the likelihood function. In this paper we advocate the use of
the multivariate t and related distributions for more robust inference of
graphs. In particular, we demonstrate that penalized likelihood inference
combined with an application of the EM algorithm provides a simple and
computationally efficient approach to model selection in the t-distribution
case.
| Michael A. Finegold, Mathias Drton | null | 1408.2033 | null | null |
Characterizing predictable classes of processes | cs.LG stat.ML | The problem is sequence prediction in the following setting. A sequence
$x_1, \dots, x_n, \dots$ of discrete-valued observations is generated according
to some unknown probabilistic law (measure) $\mu$. After observing each
outcome, it is required to give the conditional probabilities of the next
observation. The measure $\mu$ belongs to an arbitrary class $C$ of stochastic
processes. We are interested in predictors $\rho$ whose conditional
probabilities converge to the 'true' $\mu$-conditional probabilities if any
$\mu \in C$ is chosen to generate the data. We show that if such a predictor
exists, then a predictor can also be obtained as a convex combination of
countably many elements of $C$. In other words, it can be obtained as a
Bayesian predictor whose prior is concentrated on a countable set. This result
is established for two very different measures of performance of prediction,
one of which is very strong, namely, total variation, and the other is very
weak, namely, prediction in expected average Kullback-Leibler divergence.
| Daniil Ryabko | null | 1408.2036 | null | null |
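For concreteness, the Bayesian predictor mentioned at the end of the preceding abstract can be written out as a countable mixture (notation ours, for illustration only):

```latex
% Prior weights w_k > 0 with \sum_k w_k = 1 over a countable
% subset {\mu_1, \mu_2, \dots} of C:
\nu(x_{1..n}) = \sum_{k=1}^{\infty} w_k\, \mu_k(x_{1..n}),
\qquad
\nu(x_{n+1} \mid x_{1..n})
  = \frac{\sum_{k} w_k\, \mu_k(x_{1..n+1})}{\sum_{k} w_k\, \mu_k(x_{1..n})}.
```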
A direct method for estimating a causal ordering in a linear
non-Gaussian acyclic model | cs.LG stat.ML | Structural equation models and Bayesian networks have been widely used to
analyze causal relations between continuous variables. In such frameworks,
linear acyclic models are typically used to model the data-generating process of
variables. Recently, it was shown that use of non-Gaussianity identifies a
causal ordering of variables in a linear acyclic model without using any prior
knowledge on the network structure, which is not the case with conventional
methods. However, existing estimation methods are based on iterative search
algorithms and may not converge to a correct solution in a finite number of
steps. In this paper, we propose a new direct method to estimate a causal
ordering based on non-Gaussianity. In contrast to the previous methods, our
algorithm requires no algorithmic parameters and is guaranteed to converge to
the right solution within a small fixed number of steps if the data strictly
follows the model.
| Shohei Shimizu, Aapo Hyvarinen, Yoshinobu Kawahara | null | 1408.2038 | null | null |
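As a toy illustration of the direct estimation loop the abstract describes — repeatedly pick the most "exogenous" variable, append it to the ordering, and regress it out of the rest — here is a hedged Python sketch. The independence proxy below (correlation between a nonlinear transform of a regressor and its residuals) is a crude stand-in chosen for brevity; the paper's actual measure differs.

```python
import numpy as np

def causal_ordering(X):
    X = (X - X.mean(0)) / X.std(0)            # work on a standardized copy
    remaining, order = list(range(X.shape[1])), []
    while len(remaining) > 1:
        scores = []
        for j in remaining:
            dep = 0.0
            for i in remaining:
                if i == j:
                    continue
                b = X[:, i] @ X[:, j] / (X[:, j] @ X[:, j])
                r = X[:, i] - b * X[:, j]      # residual of i regressed on j
                dep += abs(np.corrcoef(np.tanh(X[:, j]), r)[0, 1])
            scores.append(dep)
        j = remaining[int(np.argmin(scores))]  # most independent of its residuals
        order.append(j)
        for i in remaining:                    # regress the chosen variable out
            if i != j:
                b = X[:, i] @ X[:, j] / (X[:, j] @ X[:, j])
                X[:, i] = X[:, i] - b * X[:, j]
        remaining.remove(j)
    return order + remaining

rng = np.random.default_rng(0)
e = rng.uniform(-1, 1, size=(5000, 3))         # non-Gaussian disturbances
x0 = e[:, 0]; x1 = 0.8 * x0 + e[:, 1]; x2 = 0.5 * x1 + e[:, 2]
print(causal_ordering(np.column_stack([x2, x0, x1])))  # ideally [1, 2, 0]
```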
GraphLab: A New Framework For Parallel Machine Learning | cs.LG cs.DC | Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large scale real-world problems.
| Yucheng Low, Joseph E. Gonzalez, Aapo Kyrola, Danny Bickson, Carlos E.
Guestrin, Joseph Hellerstein | null | 1408.2041 | null | null |
Matrix Coherence and the Nystrom Method | cs.LG stat.ML | The Nystrom method is an efficient technique used to speed up large-scale
learning applications by generating low-rank approximations. Crucial to the
performance of this technique is the assumption that a matrix can be well
approximated by working exclusively with a subset of its columns. In this work
we relate this assumption to the concept of matrix coherence, connecting
coherence to the performance of the Nystrom method. Making use of related work
in the compressed sensing and the matrix completion literature, we derive novel
coherence-based bounds for the Nystrom method in the low-rank setting. We then
present empirical results that corroborate these theoretical bounds. Finally,
we present more general empirical results for the full-rank setting that
convincingly demonstrate the ability of matrix coherence to measure the degree
to which information can be extracted from a subset of columns.
| Ameet Talwalkar, Afshin Rostamizadeh | null | 1408.2044 | null | null |
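The column-sampling approximation at the heart of the abstract is compact enough to sketch. Below is a generic textbook form of the Nystrom method for an RBF kernel (not the authors' experimental code): sample m columns of the kernel matrix K and approximate K by C W^+ C^T.

```python
import numpy as np

def nystrom_rbf(X, m, gamma=1.0, rng=None):
    """Rank-m Nystrom approximation of the RBF kernel matrix of X."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)          # landmark points
    sq = ((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1)
    C = np.exp(-gamma * sq)                             # n x m kernel block
    W = C[idx]                                          # m x m landmark block
    return C @ np.linalg.pinv(W) @ C.T                  # approximates K

X = np.random.default_rng(1).standard_normal((500, 5))
K_approx = nystrom_rbf(X, m=50)
```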
Efficient Clustering with Limited Distance Information | cs.LG cs.AI | Given a point set S and an unknown metric d on S, we study the problem of
efficiently partitioning S into k clusters while querying few distances between
the points. In our model we assume that we have access to one-versus-all
queries that, given a point $s \in S$, return the distances between $s$ and all other
points. We show that given a natural assumption about the structure of the
instance, we can efficiently find an accurate clustering using only O(k)
distance queries. We use our algorithm to cluster proteins by sequence
similarity. This setting nicely fits our model because we can use a fast
sequence database search program to query a sequence against an entire dataset.
We conduct an empirical study that shows that even though we query a small
fraction of the distances between the points, we produce clusterings that are
close to a desired clustering given by manual classification.
| Konstantin Voevodski, Maria-Florina Balcan, Heiko Roglin, Shang-Hua
Teng, Yu Xia | null | 1408.2045 | null | null |
Optimally-Weighted Herding is Bayesian Quadrature | cs.LG stat.ML | Herding and kernel herding are deterministic methods of choosing samples
which summarise a probability distribution. A related task is choosing samples
for estimating integrals using Bayesian quadrature. We show that the criterion
minimised when selecting samples in kernel herding is equivalent to the
posterior variance in Bayesian quadrature. We then show that sequential
Bayesian quadrature can be viewed as a weighted version of kernel herding which
achieves performance superior to any other weighted herding method. We
demonstrate empirically a rate of convergence faster than O(1/N). Our results
also imply an upper bound on the empirical error of the Bayesian quadrature
estimate.
| Ferenc Huszar, David Duvenaud | null | 1408.2049 | null | null |
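The Bayesian-quadrature side of the equivalence has a closed form worth recording. For samples x_1..x_n, BQ estimates an integral as a weighted sum with weights w = K^{-1} z, where K_ij = k(x_i, x_j) and z_i = E_p[k(x_i, X)]. The sketch below uses an RBF kernel and a standard normal prior in 1-D, where z is available analytically (a generic illustration, not the authors' code).

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.5                                            # kernel bandwidth
x = rng.standard_normal(15)                        # sample locations
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * s**2))
# z_i = E[k(x_i, X)] for X ~ N(0, 1), by the Gaussian convolution identity:
z = s / np.sqrt(s**2 + 1.0) * np.exp(-x**2 / (2 * (s**2 + 1.0)))
w = np.linalg.solve(K + 1e-8 * np.eye(len(x)), z)  # BQ weights (jitter for stability)

f = lambda t: np.sin(t) ** 2
print(w @ f(x))   # estimates E[f(X)] = (1 - exp(-2)) / 2 ~ 0.432
```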
Non-Convex Rank Minimization via an Empirical Bayesian Approach | cs.LG cs.NA stat.ML | In many applications that require matrix solutions of minimal rank, the
underlying cost function is non-convex leading to an intractable, NP-hard
optimization problem. Consequently, the convex nuclear norm is frequently used
as a surrogate penalty term for matrix rank. The problem is that in many
practical scenarios there is no longer any guarantee that we can correctly
estimate generative low-rank matrices of interest, theoretical special cases
notwithstanding. Consequently, this paper proposes an alternative empirical
Bayesian procedure built upon a variational approximation that, unlike the
nuclear norm, retains the same globally minimizing point estimate as the rank
function under many useful constraints. However, locally minimizing solutions
are largely smoothed away via marginalization, allowing the algorithm to
succeed when standard convex relaxations completely fail. While the proposed
methodology is generally applicable to a wide range of low-rank applications,
we focus our attention on the robust principal component analysis problem
(RPCA), which involves estimating an unknown low-rank matrix with unknown
sparse corruptions. Theoretical and empirical evidence are presented to show
that our method is potentially superior to related MAP-based approaches, for
which the convex principal component pursuit (PCP) algorithm (Candes et al.,
2011) can be viewed as a special case.
| David Wipf | null | 1408.2054 | null | null |
Warped Mixtures for Nonparametric Cluster Shapes | cs.LG stat.ML | A mixture of Gaussians fit to a single curved or heavy-tailed cluster will
report that the data contains many clusters. To produce more appropriate
clusterings, we introduce a model which warps a latent mixture of Gaussians to
produce nonparametric cluster shapes. The possibly low-dimensional latent
mixture model allows us to summarize the properties of the high-dimensional
clusters (or density manifolds) describing the data. The number of manifolds,
and the shape and dimension of each manifold, are automatically inferred.
We derive a simple inference scheme for this model which analytically
integrates out both the mixture parameters and the warping function. We show
that our model is effective for density estimation, performs better than
infinite Gaussian mixture models at recovering the true number of clusters, and
produces interpretable summaries of high-dimensional datasets.
| Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani | null | 1408.2061 | null | null |
Statistical guarantees for the EM algorithm: From population to
sample-based analysis | math.ST cs.LG stat.ML stat.TH | We develop a general framework for proving rigorous guarantees on the
performance of the EM algorithm and a variant known as gradient EM. Our
analysis is divided into two parts: a treatment of these algorithms at the
population level (in the limit of infinite data), followed by results that
apply to updates based on a finite set of samples. First, we characterize the
domain of attraction of any global maximizer of the population likelihood. This
characterization is based on a novel view of the EM updates as a perturbed form
of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed
form of standard gradient ascent. Leveraging this characterization, we then
provide non-asymptotic guarantees on the EM and gradient EM algorithms when
applied to a finite set of samples. We develop consequences of our general
theory for three canonical examples of incomplete-data problems: mixture of
Gaussians, mixture of regressions, and linear regression with covariates
missing completely at random. In each case, our theory guarantees that with a
suitable initialization, a relatively small number of EM (or gradient EM) steps
will yield (with high probability) an estimate that is within statistical error
of the MLE. We provide simulations to confirm this theoretically predicted
behavior.
| Sivaraman Balakrishnan, Martin J. Wainwright, Bin Yu | null | 1408.2156 | null | null |
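The first canonical example in the abstract (a balanced two-component Gaussian mixture) admits a very short EM loop, which may make the population-vs-sample distinction concrete. The sketch below is the standard symmetric-mixture EM in 1-D, with an initialization inside the basin of attraction; it is a generic illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.0, 2000
signs = rng.choice([-1.0, 1.0], size=n)
x = signs * theta_true + rng.standard_normal(n)    # mixture of N(+-theta, 1)

theta = 0.5                                        # "suitable initialization"
for _ in range(20):
    # E-step: posterior that each point came from the +theta component
    w = 1.0 / (1.0 + np.exp(-2.0 * theta * x))
    # M-step: weighted mean, using the +/- symmetry of the two components
    theta = np.mean((2.0 * w - 1.0) * x)
print(theta)                                       # converges near theta_true
```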
R-UCB: a Contextual Bandit Algorithm for Risk-Aware Recommender Systems | cs.IR cs.LG | Mobile context-aware recommender systems can be naturally modelled as an
exploration/exploitation (exr/exp) trade-off problem, where the system has to
choose between maximizing its expected reward using its current knowledge
(exploitation) and learning more about the unknown user's preferences to
improve that knowledge (exploration). This problem has been addressed by the
reinforcement learning community, but existing approaches do not consider the
risk level of the user's current situation, in which it may be dangerous to
recommend items the user does not desire. We introduce in this paper an
algorithm named R-UCB that considers the risk level of the user's situation to
adaptively balance exr and exp. A detailed analysis of the experimental
results reveals several important discoveries about the exr/exp behaviour.
| Djallel Bouneffouf | null | 1408.2195 | null | null |
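The paper's exact R-UCB update is not reproduced here; as a hedged sketch of the exr/exp idea in the abstract, the fragment below scales a standard UCB exploration bonus down as the situation's risk level rises. The scaling rule and all parameter values are our assumptions.

```python
import numpy as np

def select_arm(counts, means, t, risk):
    """risk in [0, 1]: near 1, exploration is suppressed (exploit almost purely)."""
    bonus = (1.0 - risk) * np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    ucb = means + np.where(counts == 0, np.inf, bonus)  # try each arm once
    return int(np.argmax(ucb))

rng = np.random.default_rng(0)
true_ctr = np.array([0.2, 0.5, 0.4])               # hidden click-through rates
counts, means = np.zeros(3), np.zeros(3)
for t in range(1, 2001):
    risk = 0.9 if t % 10 == 0 else 0.1             # toy context-dependent risk
    a = select_arm(counts, means, t, risk)
    r = float(rng.random() < true_ctr[a])          # simulated user feedback
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]
```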
Exponentiated Gradient Exploration for Active Learning | cs.LG cs.AI | Active learning strategies address the costly labelling task in supervised
classification by selecting the unlabelled examples most useful for training a
predictive model. Many conventional active learning algorithms focus on
refining the decision boundary rather than exploring new regions that can be
more informative. In this setting, we propose a sequential algorithm named
EG-Active that can improve any active learning algorithm by adding an optimal
random exploration. Experimental results show a statistically significant and
appreciable improvement in the performance of our new approach over existing
active feedback methods.
| Djallel Bouneffouf | null | 1408.2196 | null | null |
On the Consistency of Ordinal Regression Methods | cs.LG | Many of the ordinal regression models that have been proposed in the
literature can be seen as methods that minimize a convex surrogate of the
zero-one, absolute, or squared loss functions. A key property that allows one
to study the statistical implications of such approximations is that of Fisher
consistency. Fisher consistency is a desirable property for surrogate loss
functions and implies that in the population setting, i.e., if the probability
distribution that generates the data were available, then optimization of the
surrogate would yield the best possible model. In this paper we will
characterize the Fisher consistency of a rich family of surrogate loss
functions used in the context of ordinal regression, including support vector
ordinal regression, ORBoosting and least absolute deviation. We will see that,
for a family of surrogate loss functions that subsumes support vector ordinal
regression and ORBoosting, consistency can be fully characterized by the
derivative of a real-valued function at zero, as happens for convex
margin-based surrogates in binary classification. We also derive excess risk
bounds for a surrogate of the absolute error that generalize existing risk
bounds for binary classification. Finally, our analysis suggests a novel
surrogate of the squared error loss. We compare this novel surrogate with
competing approaches on 9 different datasets. Our method shows to be highly
competitive in practice, outperforming the least squares loss on 7 out of 9
datasets.
| Fabian Pedregosa, Francis Bach, Alexandre Gramfort | null | 1408.2327 | null | null |
On the Complexity of Bandit Linear Optimization | cs.LG | We study the attainable regret for online linear optimization problems with
bandit feedback, where unlike the full-information setting, the player can only
observe its own loss rather than the full loss vector. We show that the price
of bandit information in this setting can be as large as $d$, disproving the
well-known conjecture that the regret for bandit linear optimization is at most
$\sqrt{d}$ times the full-information regret. Surprisingly, this is shown using
"trivial" modifications of standard domains, which have no effect in the
full-information setting. This and other results we present highlight some
interesting differences between full-information and bandit learning, which
were not considered in previous literature.
| Ohad Shamir | null | 1408.2368 | null | null |
Compressed Sensing with Very Sparse Gaussian Random Projections | stat.ME cs.DS cs.IT cs.LG math.IT | We study the use of very sparse random projections for compressed sensing
(sparse signal recovery) when the signal entries can be either positive or
negative. In our setting, the entries of a Gaussian design matrix are randomly
sparsified so that only a very small fraction of the entries are nonzero. Our
proposed decoding algorithm is simple and efficient in that the major cost is
one linear scan of the coordinates. We have developed two estimators: (i) the
{\em tie estimator}, and (ii) the {\em absolute minimum estimator}. Using only
the tie estimator, we are able to recover a $K$-sparse signal of length $N$
using $1.551 eK \log K/\delta$ measurements (where $\delta\leq 0.05$ is the
confidence). Using only the absolute minimum estimator, we can detect the
support of the signal using $eK\log N/\delta$ measurements. For a particular
coordinate, the absolute minimum estimator requires fewer measurements (i.e.,
with a constant $e$ instead of $1.551e$). Thus, the two estimators can be
combined to form an even more practical decoding framework.
Prior studies have shown that existing one-scan (or roughly one-scan)
recovery algorithms using sparse matrices would require substantially more
(e.g., one order of magnitude) measurements than L1 decoding by linear
programming, when the nonzero entries of signals can be either negative or
positive. In this paper, following a known experimental setup, we show that, at
the same number of measurements, the recovery accuracies of our proposed method
are (at least) similar to the standard L1 decoding.
| Ping Li and Cun-Hui Zhang | null | 1408.2504 | null | null |
Optimum Statistical Estimation with Strategic Data Sources | stat.ML cs.GT cs.LG | We propose an optimum mechanism for providing monetary incentives to the data
sources of a statistical estimator such as linear regression, so that high
quality data is provided at low cost, in the sense that the sum of payments and
estimation error is minimized. The mechanism applies to a broad range of
estimators, including linear and polynomial regression, kernel regression, and,
under some additional assumptions, ridge regression. It also generalizes to
several objectives, including minimizing estimation error subject to budget
constraints. Besides our concrete results for regression problems, we
contribute a mechanism design framework through which to design and analyze
statistical estimators whose examples are supplied by workers with cost for
labeling said examples.
| Yang Cai, Constantinos Daskalakis, Christos H. Papadimitriou | null | 1408.2539 | null | null |
Comparing Nonparametric Bayesian Tree Priors for Clonal Reconstruction
of Tumors | q-bio.PE cs.LG stat.ML | Statistical machine learning methods, especially nonparametric Bayesian
methods, have become increasingly popular to infer clonal population structure
of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant
process (CRP), a popular construction used in nonparametric mixture models, to
infer the phylogeny and genotype of major subclonal lineages represented in the
population of cancer cells. We also propose new split-merge updates tailored to
the subclonal reconstruction problem that improve the mixing time of Markov
chains. In comparisons with the tree-structured stick breaking (TSSB) prior used in
PhyloSub, we demonstrate superior mixing and running time using the treeCRP
with our new split-merge procedures. We also show that given the same number of
samples, TSSB and treeCRP have similar ability to recover the subclonal
structure of a tumor.
| Amit G. Deshwar, Shankar Vembu, Quaid Morris | null | 1408.2552 | null | null |
Block stochastic gradient iteration for convex and nonconvex
optimization | math.OC cs.LG cs.NA math.NA stat.ML | The stochastic gradient (SG) method can minimize an objective function
composed of a large number of differentiable functions, or solve a stochastic
optimization problem, to a moderate accuracy. The block coordinate
descent/update (BCD) method, on the other hand, handles problems with multiple
blocks of variables by updating them one at a time; when the blocks of
variables are easier to update individually than together, BCD has a lower
per-iteration cost. This paper introduces a method that combines the features
of SG and BCD for problems with many components in the objective and with
multiple (blocks of) variables.
Specifically, a block stochastic gradient (BSG) method is proposed for
solving both convex and nonconvex programs. At each iteration, BSG approximates
the gradient of the differentiable part of the objective by randomly sampling a
small set of data or sampling a few functions from the sum term in the
objective, and then, using those samples, it updates all the blocks of
variables in either a deterministic or a randomly shuffled order. Its
convergence for both convex and nonconvex cases is established in different
senses. In the convex case, the proposed method has the same order of
convergence rate as the SG method. In the nonconvex case, its convergence is
established in terms of the expected violation of a first-order optimality
condition. The proposed method was numerically tested on problems including
stochastic least squares and logistic regression, which are convex, as well as
low-rank tensor recovery and bilinear logistic regression, which are nonconvex.
| Yangyang Xu and Wotao Yin | null | 1408.2597 | null | null |
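A minimal instance of the scheme the abstract describes — sample a few component functions, then update the blocks one at a time with that sample — can be written in a few lines. The sketch below runs BSG on a convex stochastic least-squares problem with two blocks; step sizes and batch sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 5000, 20, 30
A1 = rng.standard_normal((n, d1))
A2 = rng.standard_normal((n, d2))
x1_true, x2_true = rng.standard_normal(d1), rng.standard_normal(d2)
b = A1 @ x1_true + A2 @ x2_true + 0.01 * rng.standard_normal(n)

x1, x2 = np.zeros(d1), np.zeros(d2)
for k in range(1, 3001):
    idx = rng.choice(n, size=32, replace=False)    # sampled component functions
    step = 1.0 / (100.0 + k)                       # diminishing step size
    r = A1[idx] @ x1 + A2[idx] @ x2 - b[idx]
    x1 -= step * A1[idx].T @ r / len(idx)          # update block 1 ...
    r = A1[idx] @ x1 + A2[idx] @ x2 - b[idx]       # ... refresh the residual ...
    x2 -= step * A2[idx].T @ r / len(idx)          # ... then update block 2
```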
Convex Calibration Dimension for Multiclass Loss Matrices | cs.LG stat.ML | We study consistency properties of surrogate loss functions for general
multiclass learning problems, defined by a general multiclass loss matrix. We
extend the notion of classification calibration, which has been studied for
binary and multiclass 0-1 classification problems (and for certain other
specific learning problems), to the general multiclass setting, and derive
necessary and sufficient conditions for a surrogate loss to be calibrated with
respect to a loss matrix in this setting. We then introduce the notion of
convex calibration dimension of a multiclass loss matrix, which measures the
smallest `size' of a prediction space in which it is possible to design a
convex surrogate that is calibrated with respect to the loss matrix. We derive
both upper and lower bounds on this quantity, and use these results to analyze
various loss matrices. In particular, we apply our framework to study various
subset ranking losses, and use the convex calibration dimension as a tool to
show both the existence and non-existence of various types of convex calibrated
surrogates for these losses. Our results strengthen recent results of Duchi et
al. (2010) and Calauzenes et al. (2012) on the non-existence of certain types
of convex calibrated surrogates in subset ranking. We anticipate the convex
calibration dimension may prove to be a useful tool in the study and design of
surrogate losses for general multiclass learning problems.
| Harish G. Ramaswamy and Shivani Agarwal | null | 1408.2764 | null | null |
Learning a hyperplane classifier by minimizing an exact bound on the VC
dimension | cs.LG | The VC dimension measures the capacity of a learning machine, and a low VC
dimension leads to good generalization. While SVMs produce state-of-the-art
learning performance, it is well known that the VC dimension of a SVM can be
unbounded; despite good results in practice, there is no guarantee of good
generalization. In this paper, we show how to learn a hyperplane classifier by
minimizing an exact, or $\Theta$, bound on its VC dimension. The
proposed approach, termed as the Minimal Complexity Machine (MCM), involves
solving a simple linear programming problem. Experimental results show that, on
a number of benchmark datasets, the proposed approach learns classifiers with
error rates much less than conventional SVMs, while often using fewer support
vectors. On many benchmark datasets, the number of support vectors is less than
one-tenth the number used by SVMs, indicating that the MCM does indeed learn
simpler representations.
| Jayadeva | 10.1016/j.neucom.2014.07.062 | 1408.2803 | null | null |
Cluster based RBF Kernel for Support Vector Machines | cs.LG stat.ML | In the classical Gaussian SVM classification we use the feature space
projection transforming points to normal distributions with fixed covariance
matrices (identity in the standard RBF and the covariance of the whole dataset
in Mahalanobis RBF). In this paper we add additional information to Gaussian
SVM by considering local geometry-dependent feature space projection. We
emphasize that our approach is in fact an algorithm for a construction of the
new Gaussian-type kernel.
We show that better classification results (compared to standard RBF and
Mahalanobis RBF) are obtained in the simple case when the space is first
divided by k-means into two sets and points are represented as normal
distributions with covariances calculated according to the dataset
partitioning.
We call the constructed method C$_k$RBF, where $k$ stands for the number of
clusters used in k-means. We show empirically on nine datasets from the UCI
repository that C$_2$RBF increases the stability of the grid search (measured
as the probability of finding good parameters).
| Wojciech Marian Czarnecki, Jacek Tabor | null | 1408.2869 | null | null |
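A hedged reconstruction of the C_2 RBF idea may be useful: split the data with k-means, attach to each point the covariance of its cluster, and compare two points as Gaussians via the standard Gaussian-convolution kernel exp(-0.5 (x-y)^T (S_x + S_y)^{-1} (x-y)). The paper's exact kernel construction may differ; the function name and ridge term below are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_rbf_kernel(X, k=2, ridge=1e-6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    d = X.shape[1]
    covs = [np.cov(X[labels == c].T) + ridge * np.eye(d) for c in range(k)]
    n = X.shape[0]
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            S = covs[labels[i]] + covs[labels[j]]   # sum of the two covariances
            diff = X[i] - X[j]
            K[i, j] = np.exp(-0.5 * diff @ np.linalg.solve(S, diff))
    return K

X = np.random.default_rng(0).standard_normal((100, 3))
K = cluster_rbf_kernel(X)   # usable with an SVM via kernel='precomputed'
```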
First-Pass Large Vocabulary Continuous Speech Recognition using
Bi-Directional Recurrent DNNs | cs.CL cs.LG cs.NE | We present a method to perform first-pass large vocabulary continuous speech
recognition using only a neural network and language model. Deep neural network
acoustic models are now commonplace in HMM-based speech recognition systems,
but building such systems is a complex, domain-specific task. Recent work
demonstrated the feasibility of discarding the HMM sequence modeling framework
by directly predicting transcript text from audio. This paper extends this
approach in two ways. First, we demonstrate that a straightforward recurrent
neural network architecture can achieve a high level of accuracy. Second, we
propose and evaluate a modified prefix-search decoding algorithm. This approach
to decoding enables first-pass speech recognition with a language model,
completely unaided by the cumbersome infrastructure of HMM-based systems.
Experiments on the Wall Street Journal corpus demonstrate fairly competitive
word error rates, and the importance of bi-directional network recurrence.
| Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, Andrew Y. Ng | null | 1408.2873 | null | null |
A Classifier-free Ensemble Selection Method based on Data Diversity in
Random Subspaces | cs.LG cs.NE | The Ensemble of Classifiers (EoC) has been shown to be effective in improving
the performance of single classifiers by combining their outputs, and one of
the most important properties involved in the selection of the best EoC from a
pool of classifiers is considered to be classifier diversity. In general,
classifier diversity does not occur randomly, but is generated systematically
by various ensemble creation methods. By using diverse data subsets to train
classifiers, these methods can create diverse classifiers for the EoC. In this
work, we propose a scheme to measure data diversity directly from random
subspaces, and explore the possibility of using it to select the best data
subsets for the construction of the EoC. Our scheme is the first ensemble
selection method to be presented in the literature based on the concept of data
diversity. Its main advantage over the traditional framework (ensemble creation
then selection) is that it obviates the need for classifier training prior to
ensemble selection. A single Genetic Algorithm (GA) and a Multi-Objective
Genetic Algorithm (MOGA) were evaluated to search for the best solutions for
the classifier-free ensemble selection. In both cases, objective functions
based on different clustering diversity measures were implemented and tested.
All the results obtained with the proposed classifier-free ensemble selection
method were compared with the traditional classifier-based ensemble selection
using Mean Classifier Error (ME) and Majority Voting Error (MVE). The
applicability of the method is tested on UCI machine learning problems and NIST
SD19 handwritten numerals.
| Albert H. R. Ko, Robert Sabourin, Alceu S. Britto Jr, Luiz E. S.
Oliveira | null | 1408.2889 | null | null |
Robust OS-ELM with a novel selective ensemble based on particle swarm
optimization | cs.LG | In this paper, a robust online sequential extreme learning machine (ROS-ELM)
is proposed. It is based on the original OS-ELM with an adaptive selective
ensemble framework. Two novel insights are proposed in this paper. First, a
novel selective ensemble algorithm referred to as particle swarm optimization
selective ensemble (PSOSEN) is proposed. Note that PSOSEN is a general
selective ensemble method applicable to any learning algorithm, including
both batch learning and online learning. Second, an adaptive selective
ensemble framework for online learning is designed to balance the robustness
and complexity of the algorithm. Experiments for both regression and
classification problems with UCI data sets are carried out. Comparisons between
OS-ELM, simple ensemble OS-ELM (EOS-ELM) and the proposed ROS-ELM empirically
show that ROS-ELM significantly improves the robustness and stability.
| Yang Liu, Bo He, Diya Dong, Yue Shen, Tianhong Yan, Rui Nian, Amaury
Lendasse | null | 1408.2890 | null | null |
Learning Multi-Scale Representations for Material Classification | cs.CV cs.LG cs.NE | The recent progress in sparse coding and deep learning has made unsupervised
feature learning methods a strong competitor to hand-crafted descriptors. In
computer vision, success stories of learned features have been predominantly
reported for object recognition tasks. In this paper, we investigate if and how
feature learning can be used for material recognition. We propose two
strategies to incorporate scale information into the learning procedure
resulting in a novel multi-scale coding procedure. Our results show that our
learned features for material recognition outperform hand-crafted descriptors
on the FMD and the KTH-TIPS2 material classification benchmarks.
| Wenbin Li, Mario Fritz | null | 1408.2938 | null | null |
Fastfood: Approximate Kernel Expansions in Loglinear Time | cs.LG stat.ML | Despite their successes, what makes kernel methods difficult to use in many
large scale problems is the fact that storing and computing the decision
function is typically expensive, especially at prediction time. In this paper,
we overcome this difficulty by proposing Fastfood, an approximation that
accelerates such computation significantly. Key to Fastfood is the observation
that Hadamard matrices, when combined with diagonal Gaussian matrices, exhibit
properties similar to dense Gaussian random matrices. Yet unlike the latter,
Hadamard and diagonal matrices are inexpensive to multiply and store. These two
matrices can be used in lieu of Gaussian matrices in Random Kitchen Sinks
proposed by Rahimi and Recht (2009), thereby speeding up the computation for
a large range of kernel functions. Specifically, Fastfood requires O(n log d)
time and O(n) storage to compute n non-linear basis functions in d dimensions,
a significant improvement from O(nd) computation and storage, without
sacrificing accuracy.
Our method applies to any translation invariant and any dot-product kernel,
such as the popular RBF kernels and polynomial kernels. We prove that the
approximation is unbiased and has low variance. Experiments show that we
achieve similar accuracy to full kernel expansions and Random Kitchen Sinks
while being 100x faster and using 1000x less memory. These improvements,
especially in terms of memory usage, make kernel methods more practical for
applications that have large training sets and/or require real-time prediction.
| Quoc Viet Le, Tamas Sarlos, Alexander Johannes Smola | null | 1408.3060 | null | null |
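The construction in the abstract is concrete enough to sketch one Fastfood block V = S H G P H B (sign diagonal B, Walsh-Hadamard H, permutation P, Gaussian diagonal G, scaling diagonal S). For clarity the sketch multiplies by an explicit Hadamard matrix; a real implementation would use the O(d log d) fast Walsh-Hadamard transform instead. The normalization constants below are one common choice, not necessarily the paper's.

```python
import numpy as np
from scipy.linalg import hadamard

def fastfood_features(X, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    n, d = X.shape                        # d must be a power of two here
    H = hadamard(d).astype(float)
    B = rng.choice([-1.0, 1.0], size=d)   # random signs
    P = rng.permutation(d)                # random permutation
    G = rng.standard_normal(d)            # Gaussian diagonal
    S = np.sqrt(rng.chisquare(d, size=d)) / np.linalg.norm(G)  # length correction
    V = (S[:, None] * H) @ np.diag(G) @ (H[P] * B) / (sigma * d)
    Z = X @ V.T
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(d)      # RBF-type features

X = np.random.default_rng(1).standard_normal((10, 16))
Phi = fastfood_features(X)                # Phi @ Phi.T approximates an RBF kernel
```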
Human Activity Learning and Segmentation using Partially Hidden
Discriminative Models | cs.LG cs.CV stat.ML | Learning and understanding the typical patterns in the daily activities and
routines of people from low-level sensory data is an important problem in many
application domains such as building smart environments, or providing
intelligent assistance. Traditional approaches to this problem typically rely
on supervised learning and generative models such as the hidden Markov models
and its extensions. While activity data can be readily acquired from pervasive
sensors, e.g. in smart environments, providing manual labels to support
supervised training is often extremely expensive. In this paper, we propose a
new approach based on semi-supervised training of partially hidden
discriminative models such as the conditional random field (CRF) and the
maximum entropy Markov model (MEMM). We show that these models allow us to
incorporate both labeled and unlabeled data for learning, and at the same time,
provide us with the flexibility and accuracy of the discriminative framework.
Our experimental results in the video surveillance domain illustrate that these
models can perform better than their generative counterpart, the partially
hidden Markov model, even when a substantial number of labels are unavailable.
| Truyen Tran, Hung Bui, Svetha Venkatesh | null | 1408.3081 | null | null |
Convergence rate of Bayesian tensor estimator: Optimal rate without
restricted strong convexity | stat.ML cs.LG | In this paper, we investigate the statistical convergence rate of a Bayesian
low-rank tensor estimator. Our problem setting is the regression problem where
a tensor structure underlying the data is estimated. This problem setting
occurs in many practical applications, such as collaborative filtering,
multi-task learning, and spatio-temporal data analysis. The convergence rate is
analyzed in terms of both in-sample and out-of-sample predictive accuracies. It
is shown that a near optimal rate is achieved without any strong convexity
assumption on the observations. Moreover, we show that the method is adaptive to the
unknown rank of the true tensor, that is, the near optimal rate depending on
the true rank is achieved even if it is not known a priori.
| Taiji Suzuki | null | 1408.3092 | null | null |
On Data Preconditioning for Regularized Loss Minimization | cs.NA cs.LG stat.ML | In this work, we study data preconditioning, a well-known and long-existing
technique, for boosting the convergence of first-order methods for regularized
loss minimization. It is well understood that the condition number of the
problem, i.e., the ratio of the Lipschitz constant to the strong convexity
modulus, has a harsh effect on the convergence of the first-order optimization
methods. Therefore, minimizing a loss with small regularization to achieve good
generalization performance yields an ill-conditioned problem and becomes the
bottleneck for big data problems. We provide a theory on data preconditioning
for regularized loss minimization. In particular, our analysis exhibits an
appropriate data preconditioner and characterizes the conditions on the loss
function and on the data under which data preconditioning can reduce the
condition number and therefore boost the convergence for minimizing the
regularized loss. To make the data preconditioning practically useful, we
endeavor to employ and analyze a random sampling approach to efficiently
compute the preconditioned data. The preliminary experiments validate our
theory.
| Tianbao Yang, Rong Jin, Shenghuo Zhu, Qihang Lin | null | 1408.3115 | null | null |
Indefinitely Oscillating Martingales | cs.LG math.PR math.ST stat.TH | We construct a class of nonnegative martingale processes that oscillate
indefinitely with high probability. For these processes, we state a uniform
rate of the number of oscillations and show that this rate is asymptotically
close to the theoretical upper bound. These bounds on probability and
expectation of the number of upcrossings are compared to classical bounds from
the martingale literature. We discuss two applications. First, our results
imply that the limit of the minimum description length operator may not exist.
Second, we give bounds on how often one can change one's belief in a given
hypothesis when observing a stream of data.
| Jan Leike and Marcus Hutter | null | 1408.3169 | null | null |
Toward Automated Discovery of Artistic Influence | cs.CV cs.LG | Considering the huge amount of art pieces that exist, there is valuable
information to be discovered. Examining a painting, an expert can determine its
style, genre, and the time period to which the painting belongs. One important task
for art historians is to find influences and connections between artists. Is
influence a task that a computer can measure? The contribution of this paper is
in exploring the problem of computer-automated suggestion of influences between
artists, a problem that was not addressed before in a general setting. We first
present a comparative study of different classification methodologies for the
task of fine-art style classification. A two-level comparative study is
performed for this classification problem. The first level reviews the
performance of discriminative vs. generative models, while the second level
touches the features aspect of the paintings and compares semantic-level
features vs. low-level and intermediate-level features present in the painting.
Then, we investigate the question "Who influenced this artist?" by looking at
his masterpieces and comparing them to others. We pose this interesting
question as a knowledge discovery problem. For this purpose, we investigated
several painting-similarity and artist-similarity measures. As a result, we
provide a visualization of artists (Map of Artists) based on the similarity
between their works.
| Babak Saleh, Kanako Abe, Ravneet Singh Arora, Ahmed Elgammal | null | 1408.3218 | null | null |
A brief survey on deep belief networks and introducing a new object
oriented toolbox (DeeBNet) | cs.CV cs.LG cs.MS cs.NE | Nowadays it is very popular to use deep architectures in machine learning.
Deep belief networks (DBNs) are deep architectures that use a stack of
restricted Boltzmann machines (RBMs) to create a powerful generative model
from training data. DBNs support capabilities such as feature extraction and
classification that are used in many applications, including image processing
and speech processing. This paper introduces a new object-oriented MATLAB
toolbox with most of the features needed to implement DBNs. In the new
version, the toolbox can also be used in Octave. According to the results of
experiments conducted on the MNIST (image), ISOLET (speech), and 20 Newsgroups
(text) datasets, the toolbox can automatically learn a good representation of
the input from unlabeled data, with better discrimination between different
classes. On all datasets, the obtained classification errors are comparable to
those of state-of-the-art classifiers. In addition, the toolbox supports
different sampling methods (e.g. Gibbs, CD, PCD and our new FEPCD method),
different sparsity methods (quadratic, rate distortion and our new normal
method), different RBM types (generative and discriminative), GPU computation,
etc. The toolbox is a user-friendly open source software package and is freely
available at http://ceit.aut.ac.ir/~keyvanrad/DeeBNet%20Toolbox.html .
| Mohammad Ali Keyvanrad, Mohammad Mehdi Homayounpour | null | 1408.3264 | null | null |
Exact and empirical estimation of misclassification probability | stat.ML cs.LG | We discuss the problem of risk estimation in the classification problem, with
specific focus on finding distributions that maximize the confidence intervals
of risk estimation. We derive simple analytic approximations for the maximum
bias of the empirical risk for the histogram classifier, and carry out a
detailed study on using these analytic estimates for empirical estimation of risk.
| Victor Nedelko | null | 1408.3332 | null | null |
2D View Aggregation for Lymph Node Detection Using a Shallow Hierarchy
of Linear Classifiers | cs.CV cs.LG | Enlarged lymph nodes (LNs) can provide important information for cancer
diagnosis, staging, and measuring treatment reactions, making automated
detection a highly sought goal. In this paper, we propose a new representation
that decomposes the LN detection problem into a set of 2D object detection
subtasks on sampled CT slices, largely alleviating the curse of
dimensionality. Our 2D detection can be effectively formulated as linear
classification on a single image feature type of Histogram of Oriented
Gradients (HOG), covering a moderate field-of-view of 45 by 45 voxels. We
exploit both simple pooling and sparse linear fusion schemes to aggregate these
2D detection scores for the final 3D LN detection. In this manner, detection is
more tractable and does not need to perform perfectly at instance level (as
weak hypotheses) since our aggregation process will robustly harness collective
information for LN detection. Two datasets (90 patients with 389 mediastinal
LNs and 86 patients with 595 abdominal LNs) are used for validation.
Cross-validation demonstrates 78.0% sensitivity at 6 false positives/volume
(FP/vol.) (86.1% at 10 FP/vol.) and 73.1% sensitivity at 6 FP/vol. (87.2% at 10
FP/vol.), for the mediastinal and abdominal datasets respectively. Our results
compare favorably to previous state-of-the-art methods.
| Ari Seff, Le Lu, Kevin M. Cherry, Holger Roth, Jiamin Liu, Shijun
Wang, Joanne Hoffman, Evrim B. Turkbey, and Ronald M. Summers | null | 1408.3337 | null | null |
Linear Contour Learning: A Method for Supervised Dimension Reduction | cs.LG | We propose a novel approach to sufficient dimension reduction in regression,
based on estimating contour directions of negligible variation for the response
surface. These directions span the orthogonal complement of the minimal space
relevant for the regression, and can be extracted according to a measure of the
variation in the response, leading to General Contour Regression (GCR). In
comparison to existing sufficient dimension reduction techniques, this
contour-based methodology guarantees exhaustive estimation of the central
space under ellipticity of the predictor distribution and very mild additional
assumptions, while maintaining $\sqrt{n}$-consistency and computational ease.
Moreover, it proves to be robust to departures from ellipticity. We also
establish some useful population properties for GCR. Simulations to compare
performance with that of standard techniques such as ordinary least squares,
sliced inverse regression, principal Hessian directions, and sliced average
variance estimation confirm the advantages anticipated by theoretical analyses.
We also demonstrate the use of contour-based methods on a data set concerning
grades of students from Massachusetts colleges.
| Bing Li, Hongyuan Zha, Francesca Chiaromonte | null | 1408.3359 | null | null |
Likely to stop? Predicting Stopout in Massive Open Online Courses | cs.CY cs.LG | Understanding why students stop out will help in understanding how students
learn in MOOCs. In this report, part of a 3 unit compendium, we describe how we
build accurate predictive models of MOOC student stopout. We document a
scalable, stopout prediction methodology, end to end, from raw source data to
model analysis. We attempted to predict stopout for the Fall 2012 offering of
6.002x. This involved the meticulous and crowd-sourced engineering of over 25
predictive features extracted for thousands of students, the creation of
temporal and non-temporal data representations for use in predictive modeling,
the derivation of over 10 thousand models with a variety of state-of-the-art
machine learning techniques and the analysis of feature importance by examining
over 70000 models. We found that stopout prediction is a tractable problem.
Our models achieved an AUC (receiver operating characteristic
area-under-the-curve) as high as 0.95 (and generally 0.88) when predicting one
week in advance. Even with more difficult prediction problems, such as
predicting stopout at the end of the course with only one week's data, the
models attained AUCs of 0.7.
| Colin Taylor, Kalyan Veeramachaneni, Una-May O'Reilly | null | 1408.3382 | null | null |
Evaluating Visual Properties via Robust HodgeRank | stat.ME cs.LG stat.ML | Nowadays, how to effectively evaluate visual properties has become a popular
topic for fine-grained visual comprehension. In this paper we study the problem
of how to estimate such visual properties from a ranking perspective with the
help of the annotators from online crowdsourcing platforms. The main challenges
of our task are two-fold. On one hand, the annotations often contain
contaminated information, where a small fraction of label flips might ruin the
global ranking of the whole dataset. On the other hand, considering the large
data capacity, the annotations are often far from being complete. What is
worse, there might even exist imbalanced annotations where a small subset of
samples are frequently annotated. Facing such challenges, we propose a robust
ranking framework based on the principle of Hodge decomposition of imbalanced
and incomplete ranking data. According to the HodgeRank theory, we find that
the major source of the contamination comes from the cyclic ranking component
of the Hodge decomposition. This leads us to an outlier detection formulation
as sparse approximations of the cyclic ranking projection. Taking a step
further, it facilitates a novel outlier detection model as Huber's LASSO in
robust statistics. Moreover, simple yet scalable algorithms are developed based
on Linearized Bregman Iteration to achieve an even less biased estimator.
Statistical consistency of outlier detection is established in both cases under
nearly the same conditions. Our studies are supported by experiments with both
simulated examples and real-world data. The proposed framework provides a
promising tool for robust ranking with large scale crowdsourcing data arising
from computer vision.
| Qianqian Xu and Jiechao Xiong and Xiaochun Cao and Qingming Huang and
Yuan Yao | null | 1408.3467 | null | null |
Stability and Performance Limits of Adaptive Primal-Dual Networks | math.OC cs.DC cs.LG cs.MA | This work studies distributed primal-dual strategies for adaptation and
learning over networks from streaming data. Two first-order methods are
considered based on the Arrow-Hurwicz (AH) and augmented Lagrangian (AL)
techniques. Several revealing results are discovered in relation to the
performance and stability of these strategies when employed over adaptive
networks. The conclusions establish that the advantages that these methods have
for deterministic optimization problems do not necessarily carry over to
stochastic optimization problems. It is found that they have narrower stability
ranges and worse steady-state mean-square-error performance than primal methods
of the consensus and diffusion type. It is also found that the AH technique can
become unstable under a partial observation model, while the other techniques
are able to recover the unknown under this scenario. A method to enhance the
performance of AL strategies is proposed by tying the selection of the
step-size to their regularization parameter. It is shown that this method
allows the AL algorithm to approach the performance of consensus and diffusion
strategies but that it remains less stable than these other strategies.
| Zaid J. Towfic and Ali H. Sayed | 10.1109/TSP.2015.2415759 | 1408.3693 | null | null |
Inverse Reinforcement Learning with Multi-Relational Chains for
Robot-Centered Smart Home | cs.RO cs.LG | In a robot-centered smart home, the robot observes the home states with its
own sensors, and then it can change certain object states according to an
operator's commands for remote operations, or imitate the operator's behaviors
in the house for autonomous operations. To model the robot's imitation of the
operator's behaviors in a dynamic indoor environment, we use multi-relational
chains to describe the changes of environment states, and apply inverse
reinforcement learning to encoding the operator's behaviors with a learned
reward function. We implement this approach with a mobile robot, and conduct
five experiments with increasing numbers of training days, objects, and action
types. In addition, a baseline method that directly records the operator's
behaviors is also implemented, and comparison is made on the accuracy of home
state evaluation and the accuracy of robot action selection. The results show
that the proposed approach handles dynamic environment well, and guides the
robot's actions in the house more accurately.
| Kun Li, Max Q.-H. Meng | null | 1408.3727 | null | null |
Multi-Sensor Event Detection using Shape Histograms | cs.LG | Vehicular sensor data consists of multiple time-series arising from a number
of sensors. Using such multi-sensor data we would like to detect occurrences of
specific events that vehicles encounter, e.g., corresponding to particular
maneuvers that a vehicle makes or conditions that it encounters. Events are
characterized by similar waveform patterns re-appearing within one or more
sensors. Further such patterns can be of variable duration. In this work, we
propose a method for detecting such events in time-series data using a novel
feature descriptor motivated by similar ideas in image processing. We define
the shape histogram: a constant dimension descriptor that nevertheless captures
patterns of variable duration. We demonstrate the efficacy of using shape
histograms as features to detect events in an SVM-based, multi-sensor,
supervised learning scenario, i.e., multiple time-series are used to detect an
event. We present results on real-life vehicular sensor data and show that our
technique performs better than available pattern detection implementations on
our data, and that it can also be used to combine features from multiple
sensors resulting in better accuracy than using any single sensor. Since
previous work on pattern detection in time-series has been in the single series
context, we also present results using our technique on multiple standard
time-series datasets and show that it is the most versatile in terms of how it
ranks compared to other published results.
| Ehtesham Hassan and Gautam Shroff and Puneet Agarwal | null | 1408.3733 | null | null |
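The shape-histogram feature itself is not specified in code above; as a hedged reconstruction of the idea (a constant-size descriptor for variable-duration patterns), the sketch below histograms the slope angles of a window's successive increments. This is our illustrative stand-in, not the paper's exact feature definition.

```python
import numpy as np

def shape_histogram(window, n_bins=16):
    """Map a 1-D window of any length to a fixed-dimension descriptor."""
    window = np.asarray(window, dtype=float)
    angles = np.arctan2(np.diff(window), 1.0)       # slope angle of each step
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi / 2, np.pi / 2))
    return hist / max(hist.sum(), 1)                # normalized: length-invariant

# Windows of different lengths map to descriptors of identical dimension,
# so they can feed a standard SVM.
short = np.sin(np.linspace(0, 2 * np.pi, 30))
long_ = np.sin(np.linspace(0, 2 * np.pi, 300))
print(shape_histogram(short).shape, shape_histogram(long_).shape)  # (16,) (16,)
```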
Real-time emotion recognition for gaming using deep convolutional
network features | cs.CV cs.LG cs.NE | The goal of the present study is to explore the application of deep
convolutional network features to emotion recognition. Results indicate that
they perform similarly to other published models at a best recognition rate of
94.4%, and do so with a single still image rather than a video stream. An
implementation of an affective feedback game is also described, where a
classifier using these features tracks the facial expressions of a player in
real-time.
| S\'ebastien Ouellet | null | 1408.3750 | null | null |
Down-Sampling coupled to Elastic Kernel Machines for Efficient
Recognition of Isolated Gestures | cs.LG cs.HC | In the field of gestural action recognition, many studies have focused on
dimensionality reduction along the spatial axis, to reduce both the variability
of gestural sequences expressed in the reduced space, and the computational
complexity of their processing. It is noticeable that very few of these methods
have explicitly addressed the dimensionality reduction along the time axis.
This is however a major issue with regard to the use of elastic distances
characterized by a quadratic complexity. To partially fill this apparent gap,
we present in this paper an approach that couples temporal down-sampling
with elastic kernel machine learning. We experimentally show, on two data sets
that are widely referenced in the domain of human gesture recognition, and very
different in terms of quality of motion capture, that it is possible to
significantly reduce the number of skeleton frames while maintaining a good
recognition rate. The method proves to give satisfactory results at a level
currently reached by state-of-the-art methods on these data sets. The
computational complexity reduction makes this approach eligible for real-time
applications.
| Pierre-Fran\c{c}ois Marteau (IRISA), Sylvie Gibet (IRISA), Clement
Reverdy (IRISA) | null | 1408.3944 | null | null |
Learning Deep Representation for Face Alignment with Auxiliary
Attributes | cs.CV cs.LG | In this study, we show that the landmark detection (face alignment) task is
not a single and independent problem. Instead, its robustness can be greatly
improved with auxiliary information. Specifically, we jointly optimize landmark
detection together with the recognition of heterogeneous but subtly correlated
facial attributes, such as gender, expression, and appearance attributes. This
is non-trivial since different attribute inference tasks have different
learning difficulties and convergence rates. To address this problem, we
formulate a novel tasks-constrained deep model, which not only learns the
inter-task correlation but also employs dynamic task coefficients to facilitate
the optimization convergence when learning multiple complex tasks. Extensive
evaluations show that the proposed task-constrained learning (i) outperforms
existing face alignment methods, especially in dealing with faces with severe
occlusion and pose variation, and (ii) reduces model complexity drastically
compared to the state-of-the-art methods based on cascaded deep model.
| Zhanpeng Zhang, Ping Luo, Chen Change Loy, Xiaoou Tang | 10.1109/TPAMI.2015.2469286 | 1408.3967 | null | null |
Relax, no need to round: integrality of clustering formulations | stat.ML cs.DS cs.LG math.ST stat.TH | We study exact recovery conditions for convex relaxations of point cloud
clustering problems, focusing on two of the most common optimization problems
for unsupervised clustering: $k$-means and $k$-median clustering. Motivations
for focusing on convex relaxations are: (a) they come with a certificate of
optimality, and (b) they are generic tools which are relatively parameter-free,
not tailored to specific assumptions over the input. More precisely, we
consider the distributional setting where there are $k$ clusters in
$\mathbb{R}^m$ and data from each cluster consists of $n$ points sampled from a
symmetric distribution within a ball of unit radius. We ask: what is the
minimal separation distance between cluster centers needed for convex
relaxations to exactly recover these $k$ clusters as the optimal integral
solution? For the $k$-median linear programming relaxation we show a tight
bound: exact recovery is obtained given arbitrarily small pairwise separation
$\epsilon > 0$ between the balls. In other words, the pairwise center
separation is $\Delta > 2+\epsilon$. Under the same distributional model, the
$k$-means LP relaxation fails to recover such clusters at separation as large
as $\Delta = 4$. Yet, if we enforce PSD constraints on the $k$-means LP, we get
exact cluster recovery at center separation $\Delta > 2\sqrt2(1+\sqrt{1/m})$.
In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the $k$-means
algorithm) can fail to recover clusters in this setting; even with arbitrarily
large cluster separation, k-means++ with overseeding by any constant factor
fails with high probability at exact cluster recovery. To complement the
theoretical analysis, we provide an experimental study of the recovery
guarantees for these various methods, and discuss several open problems which
these experiments suggest.
| Pranjal Awasthi, Afonso S. Bandeira, Moses Charikar, Ravishankar
Krishnaswamy, Soledad Villar, Rachel Ward | null | 1408.4045 | null | null |
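For readers who want to experiment with the k-median linear programming relaxation studied here, a minimal sketch follows, using cvxpy (an assumption; any LP solver would do). Exact recovery in the paper's sense means the optimal fractional solution is already integral, so no rounding is needed.

```python
import numpy as np
import cvxpy as cp

def k_median_lp(X, k):
    """Standard LP relaxation of k-median on points X (n x m).

    z[p, q] fractionally assigns point p to 'center' q; y[q] opens q.
    """
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    z = cp.Variable((n, n), nonneg=True)
    y = cp.Variable(n, nonneg=True)
    cons = [cp.sum(y) == k, y <= 1, cp.sum(z, axis=1) == 1]
    cons += [z[p, :] <= y for p in range(n)]   # assign only to open centers
    cp.Problem(cp.Minimize(cp.sum(cp.multiply(D, z))), cons).solve()
    return z.value, y.value

# Two well-separated unit-scale clusters: the LP should recover them exactly,
# i.e., y comes out (near-)integral.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
z, y = k_median_lp(X, k=2)
print(np.round(y, 2))
```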
Indexing Cost Sensitive Prediction | cs.LG cs.DB cs.DS | Predictive models are often used for real-time decision making. However,
typical machine learning techniques ignore feature evaluation cost, and focus
solely on the accuracy of the machine learning models obtained utilizing all
the features available. We develop algorithms and indexes to support
cost-sensitive prediction, i.e., making decisions using machine learning models
taking feature evaluation cost into account. Given an item and an online
computation cost (i.e., time) budget, we present two approaches to return an
appropriately chosen machine learning model that will run within the specified
time on the given item. The first approach returns the optimal machine learning
model, i.e., one with the highest accuracy, that runs within the specified
time, but requires significant up-front precomputation time. The second
approach returns a possibly sub- optimal machine learning model, but requires
little up-front precomputation time. We study these two algorithms in detail
and characterize the scenarios (using real and synthetic data) in which each
performs well. Unlike prior work that focuses on a narrow domain or a specific
algorithm, our techniques are very general: they apply to any cost-sensitive
prediction scenario on any machine learning algorithm.
| Leilani Battle, Edward Benson, Aditya Parameswaran, Eugene Wu | null | 1408.4072 | null | null |
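As a drastically simplified illustration of the indexing idea, the sketch below precomputes, over hypothetical (cost, accuracy, model) triples, the most accurate model at each cost level, so that a budget query is a single binary search. The paper's actual index additionally reasons about per-feature evaluation costs.

```python
from bisect import bisect_right

def build_index(models):
    """Precompute, for each cost level, the most accurate model that runs
    within that cost: one pass over cost-sorted models with a running
    accuracy maximum (the heavy-precomputation variant)."""
    models = sorted(models, key=lambda t: t[0])      # sort by cost
    index, best = [], None
    for cost, acc, m in models:
        if best is None or acc > best[1]:
            best = (cost, acc, m)
        index.append((cost, best[2]))
    return index

def lookup(index, budget):
    """Binary-search the largest cost <= budget; O(log n) per query."""
    i = bisect_right([c for c, _ in index], budget) - 1
    return index[i][1] if i >= 0 else None

# Hypothetical (cost, accuracy, model) triples; models here are just names.
index = build_index([(1.0, 0.70, 'm1'), (2.5, 0.82, 'm2'), (5.0, 0.80, 'm3')])
print(lookup(index, budget=3.0))                     # -> 'm2'
```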
Dimensionality Reduction of Affine Variational Inequalities Using Random
Projections | math.OC cs.LG cs.SY | We present a method for dimensionality reduction of an affine variational
inequality (AVI) defined over a compact feasible region. Centered around the
Johnson-Lindenstrauss lemma, our method is a randomized algorithm that produces
with high probability an approximate solution for the given AVI by solving a
lower-dimensional AVI. The algorithm allows the lower dimension to be chosen
based on the quality of approximation desired. The algorithm can also be used
as a subroutine in an exact algorithm for generating an initial point close to
the solution. The lower-dimensional AVI is obtained by appropriately projecting
the original AVI on a randomly chosen subspace. The lower-dimensional AVI is
solved using standard solvers and from this solution an approximate solution to
the original AVI is recovered through an inexpensive process. Our numerical
experiments corroborate the theoretical results and validate that the algorithm
provides a good approximation at low dimensions and substantial savings in time
for an exact solution.
| Bharat Prabhakar, Ankur A. Kulkarni | null | 1408.4551 | null | null |
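One plausible instantiation of the projection step (the exact construction and the handling of the feasible region are in the paper) draws a Gaussian Johnson-Lindenstrauss map S and forms lower-dimensional AVI data; solving the small AVI and lifting via S^T then recovers an approximate solution. The sketch below shows only this assumed projection step.

```python
import numpy as np

def jl_project_avi(M, q, d, rng=None):
    """Project AVI data (M, q) in R^m down to dimension d << m.

    Illustrative sketch: a random Gaussian JL map S gives lower-dimensional
    data (S M S^T, S q); a solution x_low of the small AVI would be lifted
    back as S.T @ x_low. Error guarantees are as derived in the paper.
    """
    rng = rng or np.random.default_rng(0)
    m = M.shape[0]
    S = rng.normal(size=(d, m)) / np.sqrt(d)   # JL random projection
    return S @ M @ S.T, S @ q, S

M = np.eye(100) + 0.01 * np.random.default_rng(1).normal(size=(100, 100))
M_low, q_low, S = jl_project_avi(M, np.ones(100), d=20)
print(M_low.shape, q_low.shape)
```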
Introduction to Clustering Algorithms and Applications | cs.LG cs.CV | Data clustering is the process of identifying natural groupings or clusters
within multidimensional data based on some similarity measure. Clustering is a
fundamental process in many different disciplines. Hence, researchers from
different fields are actively working on the clustering problem. This paper
provides an overview of the different representative clustering methods. In
addition, applications of clustering in different fields are briefly introduced.
| Sibei Yang and Liangde Tao and Bingchen Gong | null | 1408.4576 | null | null |
A new integral loss function for Bayesian optimization | stat.CO cs.LG math.OC stat.ML | We consider the problem of maximizing a real-valued continuous function $f$
using a Bayesian approach. Since the early work of Jonas Mockus and Antanas
\v{Z}ilinskas in the 70's, the problem of optimization is usually formulated by
considering the loss function $\max f - M_n$ (where $M_n$ denotes the best
function value observed after $n$ evaluations of $f$). This loss function puts
emphasis on the value of the maximum, at the expense of the location of the
maximizer. In the special case of a one-step Bayes-optimal strategy, it leads
to the classical Expected Improvement (EI) sampling criterion. This is a
special case of a Stepwise Uncertainty Reduction (SUR) strategy, where the risk
associated to a certain uncertainty measure (here, the expected loss) on the
quantity of interest is minimized at each step of the algorithm. In this
article, assuming that $f$ is defined over a measure space $(\mathbb{X},
\lambda)$, we propose to consider instead the integral loss function
$\int_{\mathbb{X}} (f - M_n)_{+}\, d\lambda$, and we show that this leads, in
the case of a Gaussian process prior, to a new numerically tractable sampling
criterion that we call $\rm EI^2$ (for Expected Integrated Expected
Improvement). A numerical experiment illustrates that a SUR strategy based on
this new sampling criterion reduces the error on both the value and the
location of the maximizer faster than the EI-based strategy.
| Emmanuel Vazquez and Julien Bect | null | 1408.4622 | null | null |
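A short sketch may help fix ideas: the classical EI criterion has a well-known closed form under a Gaussian posterior, and the integral loss can be estimated by Monte Carlo over a discretization of $(\mathbb{X}, \lambda)$. The grid-based estimator below is an illustrative assumption, not the paper's exact $\rm EI^2$ computation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, m_n):
    """Closed-form EI under a Gaussian posterior (maximization setting)."""
    u = (mu - m_n) / np.maximum(sigma, 1e-12)
    return sigma * (u * norm.cdf(u) + norm.pdf(u))

def integral_loss_mc(posterior_samples, m_n, weights):
    """Monte Carlo estimate of E[ int (f - M_n)_+ dlambda ].

    posterior_samples: (n_samples, n_grid) GP posterior draws of f on a
    grid discretizing (X, lambda); weights: quadrature weights. A SUR
    strategy such as EI^2 would pick the next evaluation to minimize the
    expected value of this loss after the candidate observation.
    """
    excess = np.maximum(posterior_samples - m_n, 0.0)
    return (excess * weights).sum(axis=1).mean()

print(expected_improvement(np.array([0.2, 0.6]), np.array([0.3, 0.1]), 0.5))
samples = np.random.default_rng(0).normal(size=(1000, 50))
print(integral_loss_mc(samples, m_n=1.0, weights=np.full(50, 1 / 50)))
```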
AFP Algorithm and a Canonical Normal Form for Horn Formulas | cs.LG | The AFP algorithm is a learning algorithm for Horn formulas. We show that
performing more than one refinement after each negative counterexample does
not improve the complexity of the AFP algorithm. Moreover, a canonical normal
form for Horn formulas is presented, and it is proved that the output formula
of the AFP algorithm is in this normal form.
| Ruhollah Majdoddin | null | 1408.4673 | null | null |
Conic Multi-Task Classification | cs.LG | Traditionally, Multi-task Learning (MTL) models optimize the average of
task-related objective functions, which is an intuitive approach and which we
will be referring to as Average MTL. However, a more general framework,
referred to as Conic MTL, can be formulated by considering conic combinations
of the objective functions instead; in this framework, Average MTL arises as a
special case, when all combination coefficients equal 1. Although the advantage
of Conic MTL over Average MTL has been shown experimentally in previous works,
no theoretical justification has been provided to date. In this paper, we
derive a generalization bound for the Conic MTL method, and demonstrate that
the tightest bound is not necessarily achieved when all combination
coefficients equal 1; hence, Average MTL may not always be the optimal choice,
and it is important to consider Conic MTL. As a byproduct of the generalization
bound, it also theoretically explains the good experimental results of previous
relevant works. Finally, we propose a new Conic MTL model, whose conic
combination coefficients minimize the generalization bound, instead of choosing
them heuristically as has been done in previous methods. The rationale and
advantage of our model is demonstrated and verified via a series of experiments
by comparing with several other methods.
| Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos | null | 1408.4714 | null | null |
Diffusion Fingerprints | stat.ML cs.IR cs.LG | We introduce, test and discuss a method for classifying and clustering data
modeled as directed graphs. The idea is to start diffusion processes from any
subset of a data collection, generating corresponding distributions for
reaching points in the network. These distributions take the form of
high-dimensional numerical vectors and capture essential topological properties
of the original dataset. We show how these diffusion vectors can be
successfully applied to obtain state-of-the-art accuracies in the problem of
extracting pathways from metabolic networks. We also provide a guideline to
illustrate how to use our method for classification problems, and discuss
important details of its implementation. In particular, we present a simple
dimensionality reduction technique that lowers the computational cost of
classifying diffusion vectors, while leaving the predictive power of the
classification process substantially unaltered. Although the method has very
few parameters, the results we obtain show its flexibility and power. This
should make it helpful in many other contexts.
| Jimmy Dubuisson, Jean-Pierre Eckmann and Andrea Agazzi | null | 1408.4966 | null | null |
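The abstract leaves the diffusion process unspecified; one common instantiation is a random walk with restart from the seed subset, sketched below in plain NumPy. The restart probability and iteration count are illustrative assumptions, and mass from dangling nodes is simply dropped here, which the restart term keeps bounded.

```python
import numpy as np

def diffusion_vector(A, seed, restart=0.15, n_iter=100):
    """Distribution over nodes reached by a random walk with restart.

    A: (n, n) adjacency matrix of a directed graph (A[i, j] = edge i->j).
    seed: indices of the subset from which diffusion starts. The resulting
    vector is a high-dimensional 'fingerprint' of the seed set's position
    in the network topology, usable for classification or clustering.
    """
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.divide(A, out, out=np.zeros_like(A, dtype=float), where=out > 0)
    s = np.zeros(n)
    s[list(seed)] = 1.0 / len(seed)
    v = s.copy()
    for _ in range(n_iter):
        v = restart * s + (1 - restart) * (P.T @ v)
    return v

A = np.array([[0, 1, 1], [0, 0, 1], [1, 0, 0]])
print(diffusion_vector(A, seed=[0]).round(3))
```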
Caffe: Convolutional Architecture for Fast Feature Embedding | cs.CV cs.LG cs.NE | Caffe provides multimedia scientists and practitioners with a clean and
modifiable framework for state-of-the-art deep learning algorithms and a
collection of reference models. The framework is a BSD-licensed C++ library
with Python and MATLAB bindings for training and deploying general-purpose
convolutional neural networks and other deep models efficiently on commodity
architectures. Caffe fits industry and internet-scale media needs by CUDA GPU
computation, processing over 40 million images a day on a single K40 or Titan
GPU ($\approx$ 2.5 ms per image). By separating model representation from
actual implementation, Caffe allows experimentation and seamless switching
among platforms for ease of development and deployment from prototyping
machines to cloud environments. Caffe is maintained and developed by the
Berkeley Vision and Learning Center (BVLC) with the help of an active community
of contributors on GitHub. It powers ongoing research projects, large-scale
industrial applications, and startup prototypes in vision, speech, and
multimedia.
| Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan
Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell | null | 1408.5093 | null | null |
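A minimal pycaffe usage sketch, assuming a trained model is at hand; the file names and the blob names ('data', 'prob') are placeholders that depend on the particular deploy prototxt.

```python
import numpy as np
import caffe

caffe.set_mode_gpu()                      # or caffe.set_mode_cpu()

# Hypothetical file names; any deploy prototxt / trained weights pair works.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

image = np.random.rand(3, 227, 227).astype(np.float32)  # stand-in input
net.blobs['data'].reshape(1, 3, 227, 227)
net.blobs['data'].data[...] = image
out = net.forward()
print(out['prob'].argmax())               # top predicted class index
```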
Uniform Sampling for Matrix Approximation | cs.DS cs.LG stat.ML | Random sampling has become a critical tool in solving massive matrix
problems. For linear regression, a small, manageable set of data rows can be
randomly selected to approximate a tall, skinny data matrix, improving
processing time significantly. For theoretical performance guarantees, each row
must be sampled with probability proportional to its statistical leverage
score. Unfortunately, leverage scores are difficult to compute.
A simple alternative is to sample rows uniformly at random. While this often
works, uniform sampling will eliminate critical row information for many
natural instances. We take a fresh look at uniform sampling by examining what
information it does preserve. Specifically, we show that uniform sampling
yields a matrix that, in some sense, well approximates a large fraction of the
original. While this weak form of approximation is not enough for solving
linear regression directly, it is enough to compute a better approximation.
This observation leads to simple iterative row sampling algorithms for matrix
approximation that run in input-sparsity time and preserve row structure and
sparsity at all intermediate steps. In addition to an improved understanding of
uniform sampling, our main proof introduces a structural result of independent
interest: we show that every matrix can be made to have low coherence by
reweighting a small subset of its rows.
| Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco,
Richard Peng, Aaron Sidford | null | 1408.5099 | null | null |
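To make the contrast concrete: leverage scores of a tall matrix can be read off a thin QR factorization, and rows can then be sampled proportionally to them. The snippet below is a minimal sketch of this standard recipe, not the paper's iterative input-sparsity-time algorithm.

```python
import numpy as np

def leverage_scores(A):
    """Row leverage scores of a tall matrix A via a thin QR factorization."""
    Q, _ = np.linalg.qr(A)           # A = QR, Q has orthonormal columns
    return (Q ** 2).sum(axis=1)      # tau_i = ||Q[i, :]||^2

def row_sample(A, probs, s, rng=None):
    """Sample s rows with the given probabilities, rescaled so the sketch
    is an unbiased estimator of A^T A."""
    rng = rng or np.random.default_rng(0)
    p = probs / probs.sum()
    idx = rng.choice(len(A), size=s, p=p)
    return A[idx] / np.sqrt(s * p[idx])[:, None]

A = np.random.default_rng(1).normal(size=(1000, 20))
A[0] *= 100                          # one high-leverage row
tau = leverage_scores(A)
print(tau[0], tau[1:].mean())        # uniform sampling would likely miss row 0
```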
A two-stage architecture for stock price forecasting by combining SOM
and fuzzy-SVM | cs.AI cs.LG | This paper proposes a model to predict stock prices by combining
Self-Organizing Maps (SOM) and fuzzy Support Vector Machines (f-SVM). The
proposed approach is based on extracting fuzzy rules from raw data by
combining statistical machine learning models. In the proposed model, SOM is
used as a clustering algorithm to partition the whole input space into
several disjoint regions. For each partition, a set of fuzzy rules is
extracted based on an f-SVM combining model. The fuzzy rule sets are then
used to predict the test data using fuzzy inference algorithms. The
performance of the proposed approach is compared with other models using four
data sets.
| Duc-Hien Nguyen, Manh-Thanh Le | null | 1408.5241 | null | null |
Improving the Interpretability of Support Vector Machines-based Fuzzy
Rules | cs.LG cs.AI | Support vector machines (SVMs) and fuzzy rule systems are functionally
equivalent under some conditions. Therefore, the learning algorithms developed
in the field of support vector machines can be used to adapt the parameters of
fuzzy systems. Extracting fuzzy models from support vector machines has the
inherent advantage that the model does not need to determine the number of
rules in advance. However, after the support vector machine learning, the
complexity is usually high, and interpretability is also impaired. This paper
not only proposes a complete framework for extracting interpretable SVM-based
fuzzy models, but also addresses the optimization issues of the models.
Simulation examples are given to illustrate the idea of the paper.
| Duc-Hien Nguyen, Manh-Thanh Le | null | 1408.5246 | null | null |
Nonconvex Statistical Optimization: Minimax-Optimal Sparse PCA in
Polynomial Time | stat.ML cs.LG | Sparse principal component analysis (PCA) involves nonconvex optimization for
which the global solution is hard to obtain. To address this issue, one popular
approach is convex relaxation. However, such an approach may produce suboptimal
estimators due to the relaxation effect. To optimally estimate sparse principal
subspaces, we propose a two-stage computational framework named "tighten after
relax": Within the 'relax' stage, we approximately solve a convex relaxation of
sparse PCA with early stopping to obtain a desired initial estimator; For the
'tighten' stage, we propose a novel algorithm called sparse orthogonal
iteration pursuit (SOAP), which iteratively refines the initial estimator by
directly solving the underlying nonconvex problem. A key concept of this
two-stage framework is the basin of attraction. It represents a local region
within which the 'tighten' stage has the desired computational and statistical
guarantees. We prove that the initial estimator obtained from the 'relax'
stage falls into such a region, and hence SOAP geometrically converges to a
principal subspace estimator which is minimax-optimal within a certain model
class. Unlike most existing sparse PCA estimators, our approach applies to the
non-spiked covariance models, and adapts to non-Gaussianity as well as
dependent data settings. Moreover, through analyzing the computational
complexity of the two stages, we illustrate an interesting phenomenon that
larger sample size can reduce the total iteration complexity. Our framework
motivates a general paradigm for solving many complex statistical problems
which involve nonconvex optimization with provable guarantees.
| Zhaoran Wang, Huanran Lu, Han Liu | null | 1408.5352 | null | null |
Computing Multi-Relational Sufficient Statistics for Large Databases | cs.LG cs.DB | Databases contain information about which relationships do and do not hold
among entities. To make this information accessible for statistical analysis
requires computing sufficient statistics that combine information from
different database tables. Such statistics may involve any number of {\em
positive and negative} relationships. With a naive enumeration approach,
computing sufficient statistics for negative relationships is feasible only for
small databases. We solve this problem with a new dynamic programming algorithm
that performs a virtual join, where the requisite counts are computed without
materializing join tables. Contingency table algebra is a new extension of
relational algebra that facilitates the efficient implementation of this
M\"obius virtual join operation. The M\"obius Join scales to large datasets
(over 1M tuples) with complex schemas. Empirical evaluation with seven
benchmark datasets showed that information about the presence and absence of
links can be exploited in feature selection, association rule mining, and
Bayesian network learning.
| Zhensong Qian, Oliver Schulte and Yan Sun | 10.1145/2661829.2662010 | 1408.5389 | null | null |
Hierarchical Adaptive Structural SVM for Domain Adaptation | cs.CV cs.LG | A key topic in classification is the accuracy loss produced when the data
distribution in the training (source) domain differs from that in the testing
(target) domain. This is being recognized as a very relevant problem for many
computer vision tasks such as image classification, object detection, and
object category recognition. In this paper, we present a novel domain
adaptation method that leverages multiple target domains (or sub-domains) in a
hierarchical adaptation tree. The core idea is to exploit the commonalities and
differences of the jointly considered target domains.
Given the relevance of structural SVM (SSVM) classifiers, we apply our idea
to the adaptive SSVM (A-SSVM), which only requires the target domain samples
together with the existing source-domain classifier for performing the desired
adaptation. Altogether, we term our proposal as hierarchical A-SSVM (HA-SSVM).
As proof of concept we use HA-SSVM for pedestrian detection and object
category recognition. In the former we apply HA-SSVM to the deformable
part-based model (DPM) while in the latter HA-SSVM is applied to multi-category
classifiers. In both cases, we show how HA-SSVM is effective in increasing the
detection/recognition accuracy with respect to adaptation strategies that
ignore the structure of the target data. Since the sub-domains of the target
data are not always known a priori, we show how HA-SSVM can incorporate
sub-domain structure discovery for object category recognition.
| Jiaolong Xu, Sebastian Ramos, David Vazquez, Antonio M. Lopez | null | 1408.5400 | null | null |
A Case Study in Text Mining: Interpreting Twitter Data From World Cup
Tweets | stat.ML cs.CL cs.IR cs.LG | Cluster analysis is a field of data analysis that extracts underlying
patterns in data. One application of cluster analysis is in text-mining, the
analysis of large collections of text to find similarities between documents.
We used a collection of about 30,000 tweets extracted from Twitter just before
the World Cup started. A common problem with real world text data is the
presence of linguistic noise. In our case it would be extraneous tweets that
are unrelated to dominant themes. To combat this problem, we created an
algorithm that combined the DBSCAN algorithm and a consensus matrix. This way
we are left with the tweets that are related to those dominant themes. We then
used cluster analysis to find those topics that the tweets describe. We
clustered the tweets using k-means, a commonly used clustering algorithm, and
Non-Negative Matrix Factorization (NMF) and compared the results. The two
algorithms gave similar results, but NMF proved to be faster and provided more
easily interpreted results. We explored our results using two visualization
tools, Gephi and Wordle.
| Daniel Godfrey, Caley Johns, Carl Meyer, Shaina Race, Carol Sadek | null | 1408.5427 | null | null |
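A minimal sketch of the NMF topic-extraction step with scikit-learn follows; the toy tweets and parameter choices are illustrative, and the paper's full pipeline also includes the DBSCAN/consensus-matrix noise removal and a k-means comparison.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["brazil wins the opening match", "goal goal brazil",
          "referee decision controversy", "controversy over the referee call"]

vec = TfidfVectorizer(stop_words='english')
X = vec.fit_transform(tweets)                # tweet-term tf-idf matrix
nmf = NMF(n_components=2, random_state=0)
W = nmf.fit_transform(X)                     # tweet-topic weights
terms = vec.get_feature_names_out()
for k, row in enumerate(nmf.components_):    # topic-term weights
    top = row.argsort()[::-1][:3]
    print(f"topic {k}:", [terms[i] for i in top])
```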
Stretchy Polynomial Regression | cs.LG stat.ML | This article proposes a novel solution for stretchy polynomial regression
learning. The solution comes in primal and dual closed forms similar to those
of ridge regression. Essentially, the proposed solution stretches the
covariance computation via a power term, thereby compressing or amplifying the
estimation. Our experiments on both synthetic and real-world data show the
effectiveness of the proposed method for compressive learning.
| Kar-Ann Toh | null | 1408.5449 | null | null |
Interpreting Tree Ensembles with inTrees | cs.LG stat.ML | Tree ensembles such as random forests and boosted trees are accurate but
difficult to understand, debug and deploy. In this work, we provide the inTrees
(interpretable trees) framework that extracts, measures, prunes and selects
rules from a tree ensemble, and calculates frequent variable interactions. A
rule-based learner, referred to as the simplified tree ensemble learner (STEL),
can also be formed and used for future prediction. The inTrees framework can be
applied to both classification and regression problems, and is applicable to
many types of tree ensembles, e.g., random forests, regularized random forests,
and boosted trees. We implemented the inTrees algorithms in the "inTrees" R
package.
| Houtao Deng | null | 1408.5456 | null | null |
To lie or not to lie in a subspace | stat.ML cs.LG | We give deterministic necessary and sufficient conditions to guarantee that
if a subspace fits certain partially observed data from a union of subspaces,
it is because such data really lie in a subspace. Furthermore, we give
deterministic necessary and sufficient conditions to guarantee that if a
subspace fits certain partially observed data, such a subspace is unique. We
do this by characterizing when, and only when, a set of incomplete vectors
behaves as a single but complete one.
| Daniel L. Pimentel-Alarc\'on | null | 1408.5544 | null | null |
Supervised Hashing Using Graph Cuts and Boosted Decision Trees | cs.LG cs.CV | Embedding image features into a binary Hamming space can improve both the
speed and accuracy of large-scale query-by-example image retrieval systems.
Supervised hashing aims to map the original features to compact binary codes in
a manner which preserves the label-based similarities of the original data.
Most existing approaches apply a single form of hash function, and an
optimization process which is typically deeply coupled to this specific form.
This tight coupling restricts the flexibility of those methods, and can result
in complex optimization problems that are difficult to solve. In this work we
proffer a flexible yet simple framework that is able to accommodate different
types of loss functions and hash functions. The proposed framework allows a
number of existing approaches to hashing to be placed in context, and
simplifies the development of new problem-specific hashing methods. Our
framework decomposes the hashing learning problem into two steps: binary code
(hash bits) learning, and
hash function learning. The first step can typically be formulated as a binary
quadratic problem, and the second step can be accomplished by training standard
binary classifiers. For solving large-scale binary code inference, we show how
to ensure that the binary quadratic problems are submodular such that an
efficient graph cut approach can be used. To achieve efficiency as well as
efficacy on large-scale high-dimensional data, we propose to use boosted
decision trees as the hash functions, which are nonlinear, highly descriptive,
and very fast to train and evaluate. Experiments demonstrate that our proposed
method significantly outperforms most state-of-the-art methods, especially on
high-dimensional data.
| Guosheng Lin, Chunhua Shen, Anton van den Hengel | 10.1109/TPAMI.2015.2404776 | 1408.5574 | null | null |
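A minimal sketch of the second step of the decomposition, assuming binary codes B have already been inferred in step 1 (here faked with random hyperplanes instead of the paper's graph-cut inference): one boosted-tree classifier is trained per bit as that bit's hash function.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_hash_functions(X, B, **gb_kwargs):
    """Step 2: given features X (n x d) and inferred binary codes B (n x r)
    from step 1, train one boosted-tree classifier per bit as the
    (nonlinear, fast-to-evaluate) hash function for that bit."""
    return [GradientBoostingClassifier(**gb_kwargs).fit(X, B[:, j])
            for j in range(B.shape[1])]

def hash_codes(models, X):
    return np.stack([m.predict(X) for m in models], axis=1)

# Toy stand-in for step 1: codes from random hyperplanes (a real system
# would infer B by solving the submodular binary quadratic problems).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
B = (X @ rng.normal(size=(10, 8)) > 0).astype(int)
models = fit_hash_functions(X, B, n_estimators=50, max_depth=2)
print(hash_codes(models, X[:3]))
```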
An application of topological graph clustering to protein function
prediction | cs.CE cs.LG q-bio.QM stat.ML | We use a semisupervised learning algorithm based on a topological data
analysis approach to assign functional categories to yeast proteins using
similarity graphs. This new approach to analyzing biological networks yields
results that are as good as or better than existing state-of-the-art
approaches.
| R. Sean Bowman and Douglas Heisterkamp and Jesse Johnson and Danielle
O'Donnol | null | 1408.5634 | null | null |
Asymptotic Accuracy of Bayesian Estimation for a Single Latent Variable | stat.ML cs.LG | In data science and machine learning, hierarchical parametric models, such as
mixture models, are often used. They contain two kinds of variables: observable
variables, which represent the parts of the data that can be directly measured,
and latent variables, which represent the underlying processes that generate
the data. Although there has been an increase in research on the estimation
accuracy for observable variables, the theoretical analysis of estimating
latent variables has not been thoroughly investigated. In a previous study, we
determined the accuracy of a Bayes estimation for the joint probability of the
latent variables in a dataset, and we proved that the Bayes method is
asymptotically more accurate than the maximum-likelihood method. However, the
accuracy of the Bayes estimation for a single latent variable remains unknown.
In the present paper, we derive the asymptotic expansions of the error
functions, which are defined by the Kullback-Leibler divergence, for two types
of single-variable estimations when the statistical regularity is satisfied.
Our results indicate that the accuracies of the Bayes and maximum-likelihood
methods are asymptotically equivalent and clarify that the Bayes method is only
advantageous for multivariable estimations.
| Keisuke Yamazaki | null | 1408.5661 | null | null |
Improved Distributed Principal Component Analysis | cs.LG | We study the distributed computing setting in which there are multiple
servers, each holding a set of points, who wish to compute functions on the
union of their point sets. A key task in this setting is Principal Component
Analysis (PCA), in which the servers would like to compute a low dimensional
subspace capturing as much of the variance of the union of their point sets as
possible. Given a procedure for approximate PCA, one can use it to
approximately solve $\ell_2$-error fitting problems such as $k$-means
clustering and subspace clustering. The essential properties of an approximate
distributed PCA algorithm are its communication cost and computational
efficiency for a given desired accuracy in downstream applications. We give new
algorithms and analyses for distributed PCA which lead to improved
communication and computational costs for $k$-means clustering and related
problems. Our empirical study on real world data shows a speedup of orders of
magnitude, preserving communication with only a negligible degradation in
solution quality. Some of these techniques we develop, such as a general
transformation from a constant success probability subspace embedding to a high
success probability subspace embedding with a dimension and sparsity
independent of the success probability, may be of independent interest.
| Maria-Florina Balcan, Vandana Kanchanapally, Yingyu Liang, David
Woodruff | null | 1408.5823 | null | null |
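A minimal sketch of the baseline distributed PCA protocol analyzed in this line of work: each server sends a small SVD summary of its points and the coordinator runs PCA on the stacked summaries. The summary size t is the accuracy/communication knob; the paper's improvements (e.g., sparsity-independent subspace embeddings) go beyond this sketch.

```python
import numpy as np

def local_summary(X, t):
    """Each server sends only its top-t right singular directions,
    scaled by the singular values: a t x d summary of an n_i x d set."""
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    return np.diag(S[:t]) @ Vt[:t]

def distributed_pca(parts, t, k):
    """Coordinator stacks the summaries and extracts the top-k subspace."""
    stacked = np.vstack([local_summary(X, t) for X in parts])
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:k]                    # approximate global top-k subspace

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 30))         # shared low-rank structure
parts = [rng.normal(size=(500, 5)) @ B + 0.01 * rng.normal(size=(500, 30))
         for _ in range(4)]
V = distributed_pca(parts, t=10, k=5)
print(V.shape)                       # (5, 30)
```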
Analysis of a Reduced-Communication Diffusion LMS Algorithm | cs.DC cs.LG cs.SY math.OC | In diffusion-based algorithms for adaptive distributed estimation, each node
of an adaptive network estimates a target parameter vector by creating an
intermediate estimate and then combining the intermediate estimates available
within its closed neighborhood. We analyze the performance of a
reduced-communication diffusion least mean-square (RC-DLMS) algorithm, which
allows each node to receive the intermediate estimates of only a subset of its
neighbors at each iteration. This algorithm eases the usage of network
communication resources and delivers a trade-off between estimation performance
and communication cost. We show analytically that the RC-DLMS algorithm is
stable and convergent in both mean and mean-square senses. We also calculate
its theoretical steady-state mean-square deviation. Simulation results
demonstrate a good match between theory and experiment.
| Reza Arablouei, Stefan Werner, Kutluy{\i}l Do\u{g}an\c{c}ay, and
Yih-Fang Huang | null | 1408.5845 | null | null |
Label Distribution Learning | cs.LG | Although multi-label learning can deal with many problems with label
ambiguity, it does not fit some real applications well where the overall
distribution of the importance of the labels matters. This paper proposes a
novel learning paradigm named \emph{label distribution learning} (LDL) for such
kind of applications. The label distribution covers a certain number of labels,
representing the degree to which each label describes the instance. LDL is a
more general learning framework which includes both single-label and
multi-label learning as its special cases. This paper proposes six working LDL
algorithms in three ways: problem transformation, algorithm adaptation, and
specialized algorithm design. In order to compare the performance of the LDL
algorithms, six representative and diverse evaluation measures are selected via
a clustering analysis, and the first batch of label distribution datasets is
collected and made publicly available. Experimental results on one artificial
and fifteen real-world datasets show clear advantages of the specialized
algorithms, which indicates the importance of special design for the
characteristics of the LDL problem.
| Xin Geng | null | 1408.6027 | null | null |
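As an illustration of the problem-transformation strategy, the sketch below expands each instance into weighted single-label examples and fits a probabilistic classifier with those weights, so that its predicted class probabilities act as label distributions. The choice of logistic regression is an assumption for simplicity, not one of the paper's six algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pt_fit(X, D, **kwargs):
    """Problem-transformation baseline for label distribution learning.

    Each instance x with label distribution d is expanded into one weighted
    single-label example per label (weight d[y]); a probabilistic classifier
    is fit with those sample weights, and predict_proba plays the role of
    the predicted label distribution.
    """
    n, L = D.shape
    Xr = np.repeat(X, L, axis=0)                 # each row repeated L times
    yr = np.tile(np.arange(L), n)                # labels 0..L-1 per instance
    return LogisticRegression(max_iter=1000, **kwargs).fit(
        Xr, yr, sample_weight=D.ravel())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
D = rng.dirichlet(np.ones(3), size=100)          # toy label distributions
model = pt_fit(X, D)
print(model.predict_proba(X[:2]))                # predicted distributions
```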
PMCE: efficient inference of expressive models of cancer evolution with
high prognostic power | stat.ML cs.LG q-bio.QM | Motivation: Driver (epi)genomic alterations underlie the positive selection
of cancer subpopulations, which promotes drug resistance and relapse. Even
though substantial heterogeneity is witnessed in most cancer types, mutation
accumulation patterns can be regularly found and can be exploited to
reconstruct predictive models of cancer evolution. Yet, available methods
cannot infer logical formulas connecting events to represent alternative
evolutionary routes or convergent evolution. Results: We introduce PMCE, an
expressive framework that leverages mutational profiles from cross-sectional
sequencing data to infer probabilistic graphical models of cancer evolution
including arbitrary logical formulas, and which outperforms the
state-of-the-art in terms of accuracy and robustness to noise, on simulations.
The application of PMCE to 7866 samples from the TCGA database allows us to
identify a highly significant correlation between the predicted evolutionary
paths and the overall survival in 7 tumor types, proving that our approach can
effectively stratify cancer patients in reliable risk groups. Availability:
PMCE is freely available at https://github.com/BIMIB-DISCo/PMCE, in addition to
the code to replicate all the analyses presented in the manuscript. Contacts:
[email protected], [email protected].
| Fabrizio Angaroni, Kevin Chen, Chiara Damiani, Giulio Caravagna, Alex
Graudenzi, Daniele Ramazzotti | null | 1408.6032 | null | null |
Recursive Total Least-Squares Algorithm Based on Inverse Power Method
and Dichotomous Coordinate-Descent Iterations | cs.SY cs.LG | We develop a recursive total least-squares (RTLS) algorithm for
errors-in-variables system identification utilizing the inverse power method
and the dichotomous coordinate-descent (DCD) iterations. The proposed
algorithm, called DCD-RTLS, outperforms the previously-proposed RTLS
algorithms, which are based on the line-search method, with reduced
computational complexity. We perform a comprehensive analysis of the DCD-RTLS
algorithm and show that it is asymptotically unbiased as well as being stable
in the mean. We also find a lower bound for the forgetting factor that ensures
mean-square stability of the algorithm and calculate the theoretical
steady-state mean-square deviation (MSD). We verify the effectiveness of the
proposed algorithm and the accuracy of the predicted steady-state MSD via
simulations.
| Reza Arablouei, Kutluy{\i}l Do\u{g}an\c{c}ay, and Stefan Werner | 10.1109/TSP.2015.2405492 | 1408.6141 | null | null |
A Methodology for the Diagnostic of Aircraft Engine Based on Indicators
Aggregation | stat.ML cs.LG | Aircraft engine manufacturers collect large amount of engine related data
during flights. These data are used to detect anomalies in the engines in order
to help companies optimize their maintenance costs. This article introduces and
studies a generic methodology that allows one to build automatic detection of
early signs of anomalies in a way that is understandable by the human operators
who make
the final maintenance decision. The main idea of the method is to generate a
very large number of binary indicators based on parametric anomaly scores
designed by experts, complemented by simple aggregations of those scores. The
best indicators are selected via a classical forward scheme, leading to a much
reduced number of indicators that are tuned to a data set. We illustrate the
interest of the method on simulated data which contain realistic early signs of
anomalies.
| Tsirizo Rabenoro (SAMM), J\'er\^ome Lacaille, Marie Cottrell (SAMM),
Fabrice Rossi (SAMM) | 10.1007/978-3-319-08976-8_11 | 1408.6214 | null | null |
Large Scale Purchase Prediction with Historical User Actions on B2C
Online Retail Platform | cs.LG | This paper describes the solution of Bazinga Team for Tmall Recommendation
Prize 2014. With real-world user action data provided by Tmall, one of the
largest B2C online retail platforms in China, this competition requires to
predict future user purchases on Tmall website. Predictions are judged on
F1Score, which considers both precision and recall for fair evaluation. The
data set provided by Tmall contains more than half billion action records from
over ten million distinct users. Such massive data volume poses a big
challenge, and drives competitors to write every single program in MapReduce
fashion and run it on distributed cluster. We model the purchase prediction
problem as standard machine learning problem, and mainly employ regression and
classification methods as single models. Individual models are then aggregated
in a two-stage approach, using linear regression for blending, and finally a
linear ensemble of blended models. The competition was approaching its end but
was still running at the time of writing. In the end, our team achieves an
F1Score of 6.11 and ranks 7th (out of 7,276 teams in total).
| Yuyu Zhang, Liang Pang, Lei Shi and Bin Wang | null | 1408.6515 | null | null |
Task-group Relatedness and Generalization Bounds for Regularized
Multi-task Learning | cs.LG | In this paper, we study the generalization performance of regularized
multi-task learning (RMTL) in a vector-valued framework, where MTL is
considered as a learning process for vector-valued functions. We are mainly
concerned with two theoretical questions: 1) under what conditions does RMTL
perform better with a smaller task sample size than single-task learning
(STL)? 2) under what
conditions is RMTL generalizable and can guarantee the consistency of each task
during simultaneous learning?
In particular, we investigate two types of task-group relatedness: the
observed discrepancy-dependence measure (ODDM) and the empirical
discrepancy-dependence measure (EDDM), both of which detect the dependence
between two groups of multiple related tasks (MRTs). We then introduce the
Cartesian product-based uniform entropy number (CPUEN) to measure the
complexities of vector-valued function classes. By applying the specific
deviation and the symmetrization inequalities to the vector-valued framework,
we obtain the generalization bound for RMTL, which is the upper bound of the
joint probability of the event that there is at least one task with a large
empirical discrepancy between the expected and empirical risks. Finally, we
present a sufficient condition to guarantee the consistency of each task in the
simultaneous learning process, and we discuss how task relatedness affects the
generalization performance of RMTL. Our theoretical findings answer the
aforementioned two questions.
| Chao Zhang, Dacheng Tao, Tao Hu, Xiang Li | null | 1408.6617 | null | null |
Falsifiable implies Learnable | cs.LG math.ST stat.ML stat.TH | The paper demonstrates that falsifiability is fundamental to learning. We
prove the following theorem for statistical learning and sequential prediction:
If a theory is falsifiable then it is learnable -- i.e. admits a strategy that
predicts optimally. An analogous result is shown for universal induction.
| David Balduzzi | null | 1408.6618 | null | null |
Non-Standard Words as Features for Text Categorization | cs.CL cs.LG | This paper presents categorization of Croatian texts using Non-Standard Words
(NSW) as features. Non-Standard Words are: numbers, dates, acronyms,
abbreviations, currency, etc. NSWs in Croatian language are determined
according to Croatian NSW taxonomy. For the purpose of this research, 390 text
documents were collected and formed the SKIPEZ collection with 6 classes:
official, literary, informative, popular, educational and scientific. Text
categorization experiment was conducted on three different representations of
the SKIPEZ collection: in the first representation, the frequencies of NSWs are
used as features; in the second representation, the statistic measures of NSWs
(variance, coefficient of variation, standard deviation, etc.) are used as
features; while the third representation combines the first two feature sets.
Naive Bayes, CN2, C4.5, kNN, Classification Trees and Random Forest algorithms
were used in text categorization experiments. The best categorization results
are achieved using the first feature set (NSW frequencies) with the
categorization accuracy of 87%. This suggests that the NSWs should be
considered as features in highly inflectional languages, such as Croatian.
NSW-based features reduce the dimensionality of the feature space without standard
lemmatization procedures, and therefore the bag-of-NSWs should be considered
for further Croatian texts categorization experiments.
| Slobodan Beliga, Sanda Martin\v{c}i\'c-Ip\v{s}i\'c | 10.1109/MIPRO.2014.6859744 | 1408.6746 | null | null |
A Multi-Plane Block-Coordinate Frank-Wolfe Algorithm for Training
Structural SVMs with a Costly max-Oracle | cs.LG | Structural support vector machines (SSVMs) are amongst the best performing
models for structured computer vision tasks, such as semantic image
segmentation or human pose estimation. Training SSVMs, however, is
computationally costly, because it requires repeated calls to a structured
prediction subroutine (called \emph{max-oracle}), which has to solve an
optimization problem itself, e.g. a graph cut.
In this work, we introduce a new algorithm for SSVM training that is more
efficient than earlier techniques when the max-oracle is computationally
expensive, as is frequently the case in computer vision tasks. The main idea
is to (i) combine the recent stochastic Block-Coordinate Frank-Wolfe algorithm
with efficient hyperplane caching, and (ii) use an automatic selection rule for
deciding whether to call the exact max-oracle or to rely on an approximate one
based on the cached hyperplanes.
We show experimentally that this strategy leads to faster convergence to the
optimum with respect to the number of required oracle calls, and that this
translates into faster convergence with respect to the total runtime when the
max-oracle is slow compared to the other steps of the algorithm.
A publicly available C++ implementation is provided at
http://pub.ist.ac.at/~vnk/papers/SVM.html .
| Neel Shah, Vladimir Kolmogorov, Christoph H. Lampert | null | 1408.6804 | null | null |
A Plug&Play P300 BCI Using Information Geometry | cs.LG cs.HC stat.ML | This paper presents a new classification method for Event Related Potentials
(ERP) based on an information geometry framework. Through a new estimation of
covariance matrices, this work extends the use of Riemannian geometry, which
was previously limited to SMR-based BCI, to the problem of classification of
ERPs. As compared to the state-of-the-art, this new method increases
performance, reduces the amount of data needed for calibration, and features
good generalisation across sessions and subjects. This method is illustrated
on data recorded with the P300-based game Brain Invaders. Finally, an online
and adaptive implementation is described, where the BCI is initialized with
generic parameters derived from a database and continuously adapts to the
individual, allowing the user to play the game without any calibration while
keeping a high accuracy.
| Alexandre Barachant and Marco Congedo | null | 1409.0107 | null | null |
Ad Hoc Microphone Array Calibration: Euclidean Distance Matrix
Completion Algorithm and Theoretical Guarantees | cs.SD cs.LG | This paper addresses the problem of ad hoc microphone array calibration where
only partial information about the distances between microphones is available.
We construct a matrix consisting of the pairwise distances and propose to
estimate the missing entries based on a novel Euclidean distance matrix (EDM)
completion algorithm that alternates low-rank matrix completion with projection
onto the Euclidean distance space. This approach confines the recovered matrix
to the EDM cone at each iteration of the matrix completion algorithm. The
theoretical guarantees of the calibration performance are obtained considering
the random and locally structured missing entries as well as the measurement
noise on the known distances. This study elucidates the links between the
calibration error and the number of microphones along with the noise level and
the ratio of missing distances. Thorough experiments on real data recordings
and simulated setups are conducted to demonstrate these theoretical insights. A
significant improvement is achieved by the proposed Euclidean distance matrix
completion algorithm over the state-of-the-art techniques for ad hoc microphone
array calibration.
| Mohammad J. Taghizadeh, Reza Parhizkar, Philip N. Garner, Herve
Bourlard, Afsaneh Asaei | null | 1409.0203 | null | null |
Multi-task Sparse Structure Learning | cs.LG stat.ML | Multi-task learning (MTL) aims to improve generalization performance by
learning multiple related tasks simultaneously. While sometimes the underlying
task relationship structure is known, often the structure needs to be estimated
from data at hand. In this paper, we present a novel family of models for MTL,
applicable to regression and classification problems, capable of learning the
structure of task relationships. In particular, we consider a joint estimation
problem of the task relationship structure and the individual task parameters,
which is solved using alternating minimization. The task relationship structure
learning component builds on recent advances in structure learning of Gaussian
graphical models based on sparse estimators of the precision (inverse
covariance) matrix. We illustrate the effectiveness of the proposed model on a
variety of synthetic and benchmark datasets for regression and classification.
We also consider the problem of combining climate model outputs for better
projections of future climate, with focus on temperature in South America, and
show that the proposed model outperforms several existing methods for the
problem.
| Andre R. Goncalves, Puja Das, Soumyadeep Chatterjee, Vidyashankar
Sivakumar, Fernando J. Von Zuben, Arindam Banerjee | 10.1145/2661829.2662091 | 1409.0272 | null | null |
Neural Machine Translation by Jointly Learning to Align and Translate | cs.CL cs.LG cs.NE stat.ML | Neural machine translation is a recently proposed approach to machine
translation. Unlike the traditional statistical machine translation, the neural
machine translation aims at building a single neural network that can be
jointly tuned to maximize the translation performance. The models proposed
recently for neural machine translation often belong to a family of
encoder-decoders and consist of an encoder that encodes a source sentence into
a fixed-length vector from which a decoder generates a translation. In this
paper, we conjecture that the use of a fixed-length vector is a bottleneck in
improving the performance of this basic encoder-decoder architecture, and
propose to extend this by allowing a model to automatically (soft-)search for
parts of a source sentence that are relevant to predicting a target word,
without having to form these parts as a hard segment explicitly. With this new
approach, we achieve a translation performance comparable to the existing
state-of-the-art phrase-based system on the task of English-to-French
translation. Furthermore, qualitative analysis reveals that the
(soft-)alignments found by the model agree well with our intuition.
| Dzmitry Bahdanau and Kyunghyun Cho and Yoshua Bengio | null | 1409.0473 | null | null |
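The (soft-)search mechanism can be written down compactly; the sketch below implements the additive alignment model in NumPy, with randomly initialized weights standing in for trained parameters.

```python
import numpy as np

def soft_alignment(s_prev, H, W, U, v):
    """Additive attention for an encoder-decoder with soft-search.

    s_prev: previous decoder state (d,); H: encoder annotations (T, d).
    Scores e_j = v^T tanh(W s_prev + U h_j); alpha = softmax(e);
    context = sum_j alpha_j h_j replaces the single fixed-length vector.
    """
    e = np.tanh(s_prev @ W.T + H @ U.T) @ v        # (T,) alignment scores
    a = np.exp(e - e.max())
    a /= a.sum()                                   # softmax weights
    return a @ H, a                                # context vector, weights

rng = np.random.default_rng(0)
d, T = 8, 5
H = rng.normal(size=(T, d)); s = rng.normal(size=d)
W = rng.normal(size=(d, d)); U = rng.normal(size=(d, d)); v = rng.normal(size=d)
ctx, alpha = soft_alignment(s, H, W, U, v)
print(alpha.round(2), ctx.shape)
```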
Sampling-based Approximations with Quantitative Performance for the
Probabilistic Reach-Avoid Problem over General Markov Processes | cs.SY cs.LG | This article deals with stochastic processes endowed with the Markov
(memoryless) property and evolving over general (uncountable) state spaces. The
models further depend on a non-deterministic quantity in the form of a control
input, which can be selected to affect the probabilistic dynamics. We address
the computation of maximal reach-avoid specifications, together with the
synthesis of the corresponding optimal controllers. The reach-avoid
specification deals with assessing the likelihood that any finite-horizon
trajectory of the model enters a given goal set, while avoiding a given set of
undesired states. This article newly provides an approximate computational
scheme for the reach-avoid specification based on the Fitted Value Iteration
algorithm, which hinges on random sample extractions, and gives a-priori
computable formal probabilistic bounds on the error made by the approximation
algorithm: as such, the output of the numerical scheme is quantitatively
assessed and thus meaningful for safety-critical applications. Furthermore, we
provide tighter probabilistic error bounds that are sample-based. The overall
computational scheme is put in relationship with alternative approximation
algorithms in the literature, and finally its performance is practically
assessed over a benchmark case study.
| Sofie Haesaert and Robert Babuska and Alessandro Abate | null | 1409.0553 | null | null |
On the Equivalence Between Deep NADE and Generative Stochastic Networks | stat.ML cs.LG | Neural Autoregressive Distribution Estimators (NADEs) have recently been
shown as successful alternatives for modeling high dimensional multimodal
distributions. One issue associated with NADEs is that they rely on a
particular order of factorization for $P(\mathbf{x})$. This issue has been
recently addressed by a variant of NADE called Orderless NADEs and its deeper
version, Deep Orderless NADE. Orderless NADEs are trained based on a criterion
that stochastically maximizes $P(\mathbf{x})$ with all possible orders of
factorizations. Unfortunately, ancestral sampling from deep NADE is very
expensive, corresponding to running through a neural net separately predicting
each of the visible variables given some others. This work makes a connection
between this criterion and the training criterion for Generative Stochastic
Networks (GSNs). It shows that training NADEs in this way also trains a GSN,
which defines a Markov chain associated with the NADE model. Based on this
connection, we show an alternative way to sample from a trained Orderless NADE
that allows to trade-off computing time and quality of the samples: a 3 to
10-fold speedup (taking into account the waste due to correlations between
consecutive samples of the chain) can be obtained without noticeably reducing
the quality of the samples. This is achieved using a novel sampling procedure
for GSNs called annealed GSN sampling, similar to tempering methods that
combines fast mixing (obtained thanks to steps at high noise levels) with
accurate samples (obtained thanks to steps at low noise levels).
| Li Yao and Sherjil Ozair and Kyunghyun Cho and Yoshua Bengio | null | 1409.0585 | null | null |
Multi-rank Sparse Hierarchical Clustering | stat.ML cs.LG | There has been a surge in the number of large and flat data sets - data sets
containing a large number of features and a relatively small number of
observations - due to the growing ability to collect and store information in
medical research and other fields. Hierarchical clustering is a widely used
clustering tool. In hierarchical clustering, large and flat data sets may allow
for a better coverage of clustering features (features that help explain the
true underlying clusters), but such data sets usually include a large fraction
of noise features (non-clustering features) that may hide the underlying
clusters. Witten and Tibshirani (2010) proposed a sparse hierarchical
clustering framework to cluster the observations using an adaptively chosen
subset of the features, however, we show that this framework has some
limitations when the data sets contain clustering features with complex
structure. In this paper, we propose the Multi-rank sparse hierarchical
clustering (MrSHC). Using simulation studies and real data examples, we show
that MrSHC produces superior feature selection and clustering performance
compared to classical (off-the-shelf) hierarchical clustering and the
existing sparse hierarchical clustering framework.
| Hongyang Zhang and Ruben H. Zamar | null | 1409.0745 | null | null |
Comparison of algorithms that detect drug side effects using electronic
healthcare databases | cs.LG cs.CE | The electronic healthcare databases are starting to become more readily
available and are thought to have excellent potential for generating adverse
drug reaction signals. The Health Improvement Network (THIN) database is an
electronic healthcare database containing medical information on over 11
million patients that has excellent potential for detecting ADRs. In this paper
we apply four existing electronic healthcare database signal detecting
algorithms (MUTARA, HUNT, Temporal Pattern Discovery and modified ROR) on the
THIN database for a selection of drugs from six chosen drug families. This is
the first comparison of ADR signalling algorithms that includes MUTARA and
HUNT, and it enabled us to set a benchmark for the adverse drug reaction
signalling
ability of the THIN database. The drugs were selectively chosen to enable a
comparison with previous work and for variety. It was found that no algorithm
was generally superior and the algorithms' natural thresholds act at variable
stringencies. Furthermore, none of the algorithms perform well at detecting
rare ADRs.
| Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack
Gibson, Richard Hubbard | null | 1409.0748 | null | null |
Data classification using the Dempster-Shafer method | cs.LG | In this paper, the Dempster-Shafer method is employed as the theoretical
basis for creating data classification systems. Testing is carried out using
three popular (multiple attribute) benchmark datasets that have two, three and
four classes. In each case, a subset of the available data is used for training
to establish thresholds, limits or likelihoods of class membership for each
attribute, and hence create mass functions that establish probability of class
membership for each attribute of the test data. Classification of each data
item is achieved by combination of these probabilities via Dempster's Rule of
Combination. Results for the first two datasets show extremely high
classification accuracy that is competitive with other popular methods. The
third dataset is non-numerical and difficult to classify, but good results can
be achieved provided the system and mass functions are designed carefully and
the right attributes are chosen for combination. In all cases the
Dempster-Shafer method provides comparable performance to other more popular
algorithms, but the overhead of generating accurate mass functions increases
the complexity with the addition of new attributes. Overall, the results
suggest that the D-S approach provides a suitable framework for the design of
classification systems and that automating the mass function design and
calculation would increase the viability of the algorithm for complex
classification problems.
| Qi Chen, Amanda Whitbrook, Uwe Aickelin and Chris Roadknight | 10.1080/0952813X.2014.886301 | 1409.0763 | null | null |
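Dempster's Rule of Combination itself is compact; here is a minimal sketch over frozenset focal elements (the toy mass functions are illustrative, not drawn from the paper's datasets).

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's Rule of Combination for two mass functions.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Returns the combined mass function, renormalized by 1 - K, where K is
    the total conflicting mass assigned to empty intersections.
    """
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        if A:
            combined[A] = combined.get(A, 0.0) + mB * mC
        else:
            conflict += mB * mC
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

# Two attributes giving evidence about class membership in {a, b}:
m1 = {frozenset('a'): 0.7, frozenset('ab'): 0.3}
m2 = {frozenset('a'): 0.4, frozenset('b'): 0.4, frozenset('ab'): 0.2}
print(dempster_combine(m1, m2))   # mass concentrates on class 'a'
```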
A Novel Semi-Supervised Algorithm for Rare Prescription Side Effect
Discovery | cs.LG cs.CE | Drugs are frequently prescribed to patients with the aim of improving each
patient's medical state, but an unfortunate consequence of most prescription
drugs is the occurrence of undesirable side effects. Side effects that occur in
more than one in a thousand patients are likely to be signalled efficiently by
current drug surveillance methods, however, these same methods may take decades
before generating signals for rarer side effects, risking medical morbidity or
mortality in patients prescribed the drug while the rare side effect is
undiscovered. In this paper we propose a novel computational meta-analysis
framework for signalling rare side effects that integrates existing methods,
knowledge from the web, metric learning and semi-supervised clustering. The
novel framework was able to signal many known rare and serious side effects for
the selection of drugs investigated, such as tendon rupture when prescribed
Ciprofloxacin or Levofloxacin, renal failure with Naproxen and depression
associated with Rimonabant. Furthermore, for the majority of the drug
investigated it generated signals for rare side effects at a more stringent
signalling threshold than existing methods and shows the potential to become a
fundamental part of post marketing surveillance to detect rare side effects.
| Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack
E. Gibson, Richard B. Hubbard | 10.2139/ssrn.2823251 | 1409.0768 | null | null |
Signalling Paediatric Side Effects using an Ensemble of Simple Study
Designs | cs.LG cs.CE | Background: Children are frequently prescribed medication off-label, meaning
there has not been sufficient testing of the medication to determine its safety
or effectiveness. The main reason this safety knowledge is lacking is due to
ethical restrictions that prevent children from being included in the majority
of clinical trials. Objective: The objective of this paper is to investigate
whether an ensemble of simple study designs can be implemented to signal
acutely occurring side effects effectively within the paediatric population by
using historical longitudinal data. The majority of pharmacovigilance
techniques are unsupervised, but this research presents a supervised framework.
Methods: Multiple measures of association are calculated for each drug and
medical event pair and these are used as features that are fed into a
classifier to determine the likelihood of the drug and medical event pair
corresponding to an adverse drug reaction. The classifier is trained using
known adverse drug reactions or known non-adverse drug reaction relationships.
Results: The novel ensemble framework obtained a false positive rate of 0.149,
a sensitivity of 0.547 and a specificity of 0.851 when implemented on a
reference set of drug and medical event pairs. The novel framework consistently
outperformed each individual simple study design. Conclusion: This research
shows that it is possible to exploit the mechanism of causality and presents a
framework for signalling adverse drug reactions effectively.
| Jenna M. Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria,
Jack E. Gibson, Richard B. Hubbard | null | 1409.0772 | null | null |
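
A hedged sketch of the supervised ensemble idea described above: several measures of association are computed for each drug and medical event pair and used as features for a classifier trained on known ADR / non-ADR pairs. The feature names and the choice of a random forest are illustrative assumptions, not the paper's exact design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Each row holds assumed association measures for one drug-event pair,
# e.g. [ROR, relative risk, temporal-pattern score].
X_train = rng.random((200, 3))
y_train = rng.integers(0, 2, 200)   # 1 = known ADR, 0 = known non-ADR

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
# Probability that an unseen drug-event pair is an adverse drug reaction.
print(clf.predict_proba(rng.random((1, 3)))[:, 1])
```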
Feature selection in detection of adverse drug reactions from the Health
Improvement Network (THIN) database | cs.LG cs.CE | Adverse drug reactions (ADRs) are a widely recognized public health
concern and are among the most common reasons for drugs being withdrawn from
the market. Prescription event monitoring (PEM) is an important approach for
detecting adverse drug reactions. The main challenge with this method is how to
automatically extract the relevant medical events or side effects from the
high-throughput event data collected in day-to-day clinical practice. In this study we propose
a novel concept of a feature matrix for detecting ADRs. The feature matrix,
extracted from the large-scale medical data in The Health Improvement Network
(THIN) database, is created to characterize the medical events for patients who
take drugs, and provides a foundation for handling this irregular, large-scale
data. Feature selection methods are then applied to the feature matrix to
detect the significant features. Finally the ADRs can be located based on the
significant features. The experiments are carried out on three drugs:
Atorvastatin, Alendronate, and Metoclopramide. Major side effects for each drug
are detected and better performance is achieved compared to other computerized
methods. As the detected ADRs are based on computerized methods alone, further
investigation is needed.
| Yihui Liu and Uwe Aickelin | null | 1409.0775 | null | null |
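
A minimal sketch of the feature-matrix idea: rows are patients, columns are medical events recorded after prescription, and a feature-selection test ranks the events most associated with the drug cohort versus a comparison cohort. The chi-squared filter is one plausible choice; the paper evaluates its own selection methods, so treat this as an assumption.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
n_patients, n_events = 500, 40
X = rng.integers(0, 2, (n_patients, n_events))  # event occurred or not
y = rng.integers(0, 2, n_patients)              # 1 = took drug, 0 = control

# Rank event columns by association with drug exposure; the top-ranked
# events are candidate ADR signals for further investigation.
selector = SelectKBest(chi2, k=5).fit(X, y)
print("Candidate ADR event columns:", selector.get_support(indices=True))
```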
Ensemble Learning of Colorectal Cancer Survival Rates | cs.LG cs.CE | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. We build on
existing research on clustering and machine learning facets of this data to
demonstrate a role for an ensemble approach to highlighting patients with
clearer prognosis parameters. Results for survival prediction using 3 different
approaches are shown for a subset of the data which is most difficult to model.
The performance of each model individually is compared with subsets of the data
where some agreement is reached for multiple models. Significant improvements
in model accuracy on an unseen test set can be achieved for patients where
agreement between models is achieved.
| Chris Roadknight, Uwe Aickelin, John Scholefield, Lindy Durrant | 10.1109/CIVEMSA.2013.6617400 | 1409.0788 | null | null |
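
A sketch of the agreement idea reported above: train several different models and only trust predictions for patients on whom all models agree. The specific models and data here are placeholders; the paper uses three approaches tuned to the colorectal survival data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_tr, y_tr = rng.random((300, 10)), rng.integers(0, 2, 300)
X_te = rng.random((50, 10))

models = [RandomForestClassifier(random_state=0),
          LogisticRegression(max_iter=1000),
          KNeighborsClassifier()]
preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in models])

# Keep only test patients where all three models agree; accuracy on this
# subset is expected to be higher than on the full, harder test set.
agree = (preds == preds[0]).all(axis=0)
print(f"{agree.sum()} of {len(X_te)} patients have unanimous predictions")
```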
Feature Selection in Conditional Random Fields for Map Matching of GPS
Trajectories | stat.ML cs.AI cs.LG | Map matching of the GPS trajectory serves the purpose of recovering the
original route on a road network from a sequence of noisy GPS observations. It
is a fundamental technique for many Location Based Services. However, map
matching at a low sampling rate on an urban road network is still a challenging
task. In this paper, the characteristics of Conditional Random Fields with
regard to inducing many contextual features and feature selection are explored
for the map matching of the GPS trajectories at a low sampling rate.
Experiments on a taxi trajectory dataset show that our method achieves
competitive results while also reducing model complexity for
computation-limited applications.
| Jian Yang, Liqiu Meng | null | 1409.0791 | null | null |
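
A hedged sketch of L1-regularized CRF training for map matching: each GPS point is a sequence element whose label is a candidate road segment, and the L1 penalty (c1) drives many feature weights to zero, performing the kind of feature selection studied above. The feature names are illustrative, and sklearn-crfsuite is one convenient CRF implementation rather than the authors' tooling.

```python
import sklearn_crfsuite

# One toy trajectory: per-point features and the matched road segment.
X_train = [[{'dist_to_road': 12.3, 'heading_diff': 0.4, 'speed': 8.1},
            {'dist_to_road': 5.7,  'heading_diff': 0.1, 'speed': 7.9}]]
y_train = [['segment_17', 'segment_17']]

# c1 is the L1 coefficient; larger values prune more features.
crf = sklearn_crfsuite.CRF(algorithm='lbfgs', c1=1.0, c2=0.0,
                           max_iterations=100)
crf.fit(X_train, y_train)
# Nonzero weights after training indicate the selected features.
print(len(crf.state_features_), "state features survived the L1 penalty")
```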
Feature Engineering for Map Matching of Low-Sampling-Rate GPS
Trajectories in Road Network | stat.ML cs.LG | Map matching of GPS trajectories from a sequence of noisy observations serves
the purpose of recovering the original routes in a road network. In this work
in progress, we share our experience of feature construction in a spatial
database by reporting an ongoing experiment on feature extraction in
Conditional Random Fields (CRFs) for map matching. Our preliminary results are
obtained from real-world taxi GPS trajectories.
| Jian Yang and Liqiu Meng | null | 1409.0797 | null | null |
Solving the Problem of the K Parameter in the KNN Classifier Using an
Ensemble Learning Approach | cs.LG | This paper presents a new solution for choosing the K parameter in the
k-nearest neighbor (KNN) algorithm. The solution is based on ensemble learning:
a weak KNN classifier is used for each value of K, from one up to the square
root of the training set size, and the results of the weak classifiers are
combined using the weighted sum rule. The proposed solution was tested and
compared to other solutions using a
group of experiments in real life problems. The experimental results show that
the proposed classifier outperforms the traditional KNN classifier that uses a
different number of neighbors, is competitive with other classifiers, and is a
promising classifier with strong potential for a wide range of applications.
| Ahmad Basheer Hassanat, Mohammad Ali Abbadi, Ghada Awad Altarawneh,
Ahmad Ali Alhasanat | null | 1409.0919 | null | null |
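
A minimal sketch of the ensemble just described: one weak KNN per K from 1 up to sqrt(n_train), with votes combined by a weighted sum. The paper's exact weighting scheme is not reproduced here; inverse-K weights are an illustrative assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def ensemble_knn_predict(X_tr, y_tr, X_te):
    classes = np.unique(y_tr)  # sklearn orders predict_proba columns this way
    scores = np.zeros((len(X_te), len(classes)))
    for k in range(1, int(np.sqrt(len(X_tr))) + 1):
        clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
        scores += clf.predict_proba(X_te) / k  # assumed weight: small K counts more
    return classes[scores.argmax(axis=1)]      # weighted-sum decision

rng = np.random.default_rng(0)
X_tr, y_tr = rng.random((100, 4)), rng.integers(0, 3, 100)
print(ensemble_knn_predict(X_tr, y_tr, rng.random((5, 4))))
```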
Dimensionality Invariant Similarity Measure | cs.LG | This paper presents a new similarity measure to be used for general tasks
including supervised learning, here represented by the K-nearest neighbor
(KNN) classifier. The proposed similarity measure is invariant to large
differences in some dimensions in the feature space. The proposed metric is
proved mathematically to be a metric. To test its viability for different
applications, the KNN used the proposed metric for classifying test examples
chosen from a number of real datasets. Compared to some other well known
metrics, the experimental results show that the proposed metric is a promising
distance measure for the KNN classifier with strong potential for a wide range
of applications.
| Ahmad Basheer Hassanat | null | 1409.0923 | null | null |
Breakdown Point of Robust Support Vector Machine | stat.ML cs.LG | The support vector machine (SVM) is one of the most successful learning
methods for solving classification problems. Despite its popularity, SVM has a
serious drawback: sensitivity to outliers in the training samples. The
penalty on misclassification is defined by a convex loss called the hinge loss,
and the unboundedness of this convex loss causes the sensitivity to outliers. To
deal with outliers, robust variants of SVM have been proposed, such as the
robust outlier detection algorithm and an SVM with a bounded loss called the
ramp loss. In this paper, we propose a robust variant of SVM and investigate
its robustness in terms of the breakdown point. The breakdown point is a
robustness measure that is the largest amount of contamination such that the
estimated classifier still gives information about the non-contaminated data.
The main contribution of this paper is to show an exact evaluation of the
breakdown point for the robust SVM. For learning parameters such as the
regularization parameter in our algorithm, we derive a simple formula that
guarantees the robustness of the classifier. When the learning parameters are
determined with a grid search using cross validation, our formula works to
reduce the number of candidate search points. The robustness of the proposed
method is confirmed in numerical experiments. We show that the statistical
properties of the robust SVM are well explained by a theoretical analysis of
the breakdown point.
| Takafumi Kanamori, Shuhei Fujiwara, Akiko Takeda | null | 1409.0934 | null | null |
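
A short sketch contrasting the unbounded hinge loss with the bounded ramp loss mentioned in this abstract. The parameterization ramp_s(z) = min(1 - s, max(0, 1 - z)) with s < 1 is a common convention, not necessarily the paper's; boundedness is what limits an outlier's influence.

```python
import numpy as np

def hinge(z):
    return np.maximum(0.0, 1.0 - z)

def ramp(z, s=-1.0):
    # Bounded above by 1 - s, so a single outlier cannot dominate the loss.
    return np.minimum(1.0 - s, hinge(z))

margins = np.array([2.0, 0.5, -1.0, -10.0])  # y * f(x) for four samples
print("hinge:", hinge(margins))   # grows without bound for the outlier
print("ramp :", ramp(margins))    # capped at 1 - s = 2
```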
High-performance Kernel Machines with Implicit Distributed Optimization
and Randomization | stat.ML cs.DC cs.LG | In order to fully utilize "big data", it is often required to use "big
models". Such models tend to grow with the complexity and size of the training
data, and do not make strong parametric assumptions upfront on the nature of
the underlying statistical dependencies. Kernel methods fit this need well, as
they constitute a versatile and principled statistical methodology for solving
a wide range of non-parametric modelling problems. However, their high
computational costs (in storage and time) pose a significant barrier to their
widespread adoption in big data applications.
We propose an algorithmic framework and high-performance implementation for
massive-scale training of kernel-based statistical models, based on combining
two key technical ingredients: (i) distributed general purpose convex
optimization, and (ii) the use of randomization to improve the scalability of
kernel methods. Our approach is based on a block-splitting variant of the
Alternating Directions Method of Multipliers, carefully reconfigured to handle
very large random feature matrices, while exploiting hybrid parallelism
typically found in modern clusters of multicore machines. Our implementation
supports a variety of statistical learning tasks by enabling several loss
functions, regularization schemes, kernels, and layers of randomized
approximations for both dense and sparse datasets, in a highly extensible
framework. We evaluate the ability of our framework to learn models on data
from applications, and provide a comparison against existing sequential and
parallel libraries.
| Vikas Sindhwani and Haim Avron | null | 1409.0940 | null | null |
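
A minimal single-machine sketch of ingredient (ii) above: random Fourier features approximating a Gaussian kernel (the Rahimi-Recht map), followed by a linear solve. The paper distributes this with an ADMM block-splitting scheme; here a plain closed-form ridge regression stands in for the distributed optimizer.

```python
import numpy as np

def random_fourier_features(X, D=500, sigma=1.0, seed=0):
    """Map X into a D-dimensional space whose inner products approximate
    the Gaussian kernel with bandwidth sigma."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0 / sigma, size=(X.shape[1], D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X, y = rng.random((1000, 10)), rng.random(1000)
Z = random_fourier_features(X)
lam = 1e-3
# Ridge regression in the randomized feature space (closed form).
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
print("train MSE:", np.mean((Z @ w - y) ** 2))
```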
Constructing a Non-Negative Low Rank and Sparse Graph with Data-Adaptive
Features | cs.CV cs.LG | This paper aims at constructing a good graph for discovering intrinsic data
structures in a semi-supervised learning setting. Firstly, we propose to build
a non-negative low-rank and sparse (referred to as NNLRS) graph for the given
data representation. Specifically, the weights of edges in the graph are
obtained by seeking a nonnegative low-rank and sparse matrix that represents
each data sample as a linear combination of others. The so-obtained NNLRS-graph
can capture both the global mixture of subspaces structure (by the low
rankness) and the locally linear structure (by the sparseness) of the data,
hence is both generative and discriminative. Secondly, as good features are
extremely important for constructing a good graph, we propose to learn the data
embedding matrix and construct the graph jointly within one framework, which is
termed as NNLRS with embedded features (referred to as NNLRS-EF). Extensive
experiments on three publicly available datasets demonstrate that the proposed
method outperforms the state-of-the-art graph construction method by a large
margin for both semi-supervised classification and discriminative analysis,
which verifies the effectiveness of our proposed method.
| Liansheng Zhuang, Shenghua Gao, Jinhui Tang, Jingjing Wang, Zhouchen
Lin, Yi Ma | 10.1109/TIP.2015.2441632 | 1409.0964 | null | null |
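
A hedged sketch of the NNLRS objective in its simplest noiseless form: find a non-negative coefficient matrix Z that is simultaneously low rank (nuclear norm) and sparse (elementwise l1) while satisfying X = XZ. The paper solves a noise-tolerant variant with a dedicated solver; cvxpy is used here purely for illustration on a toy problem.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 20))   # 20 samples in 5 dimensions
n = X.shape[1]
beta = 0.5                # assumed trade-off between low rank and sparsity

Z = cp.Variable((n, n), nonneg=True)
objective = cp.Minimize(cp.normNuc(Z) + beta * cp.sum(cp.abs(Z)))
constraints = [X @ Z == X, cp.diag(Z) == 0]  # no self-representation
cp.Problem(objective, constraints).solve()

# Edge weights of the NNLRS graph, symmetrized as is typical for
# downstream semi-supervised learning.
W = (Z.value + Z.value.T) / 2
print("graph density:", (W > 1e-6).mean())
```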