title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Rapid and deterministic estimation of probability densities using
scale-free field theories | physics.data-an cs.LG math.ST q-bio.QM stat.ML stat.TH | The question of how best to estimate a continuous probability density from
finite data is an intriguing open problem at the interface of statistics and
physics. Previous work has argued that this problem can be addressed in a
natural way using methods from statistical field theory. Here I describe new
results that allow this field-theoretic approach to be rapidly and
deterministically computed in low dimensions, making it practical for use in
day-to-day data analysis. Importantly, this approach does not impose a
privileged length scale for smoothness of the inferred probability density, but
rather learns a natural length scale from the data due to the tradeoff between
goodness-of-fit and an Occam factor. Open source software implementing this
method in one and two dimensions is provided.
| Justin B. Kinney | 10.1103/PhysRevE.90.011301 | 1312.6661 | null | null |
Invariant Factorization Of Time-Series | cs.LG | Time-series classification is an important domain of machine learning and a
plethora of methods have been developed for the task. In comparison to existing
approaches, this study presents a novel method which decomposes a time-series
dataset into latent patterns and membership weights of local segments to those
patterns. The process is formalized as a constrained objective function and a
tailored stochastic coordinate descent optimization is applied. The time-series
are projected to a new feature representation consisting of the sums of the
membership weights, which captures frequencies of local patterns. Features from
various sliding window sizes are concatenated in order to encapsulate the
interaction of patterns of different sizes. Finally, a large-scale
experimental comparison against 6 state-of-the-art baselines on 43 real-life
datasets is conducted. The proposed method outperforms all the baselines with
statistically significant margins in terms of prediction accuracy.
| Josif Grabocka, Lars Schmidt-Thieme | 10.1007/s10618-014-0364-z | 1312.6712 | null | null |
Local algorithms for interactive clustering | cs.DS cs.LG | We study the design of interactive clustering algorithms for data sets
satisfying natural stability assumptions. Our algorithms start with any initial
clustering and only make local changes in each step; both are desirable
features in many applications. We show that in this constrained setting one can
still design provably efficient algorithms that produce accurate clusterings.
We also show that our algorithms perform well on real-world data.
| Pranjal Awasthi and Maria-Florina Balcan and Konstantin Voevodski | null | 1312.6724 | null | null |
Iterative Nearest Neighborhood Oversampling in Semisupervised Learning
from Imbalanced Data | cs.LG | Transductive graph-based semi-supervised learning methods usually build an
undirected graph utilizing both labeled and unlabeled samples as vertices.
Those methods propagate label information of labeled samples to neighbors
through their edges in order to get the predicted labels of unlabeled samples.
Most popular semi-supervised learning approaches are sensitive to the initial
label distribution of imbalanced labeled datasets. The class boundary will
be severely skewed by the majority classes in an imbalanced classification. In
this paper, we propose a simple and effective approach that alleviates the
unfavorable influence of the imbalance problem by iteratively selecting a few
unlabeled samples and adding them to the minority classes to form a balanced
labeled dataset for the subsequent learning methods. Experiments on UCI
datasets and the MNIST handwritten digits dataset show that the proposed
approach outperforms existing state-of-the-art methods.
| Fengqi Li, Chuang Yu, Nanhai Yang, Feng Xia, Guangming Li, Fatemeh
Kaveh-Yazdy | null | 1312.6807 | null | null |
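The iterative selection loop this abstract describes lends itself to a compact sketch. Below is a minimal, hypothetical Python/NumPy version, assuming a plain Euclidean nearest-neighbor rule and a simple balance-based stopping criterion; the authors' exact graph construction and selection rule may differ.

```python
import numpy as np

def iterative_nn_oversampling(X_lab, y_lab, X_unlab, max_rounds=100):
    """Iteratively move the unlabeled sample nearest to the current
    minority class into the labeled set until the classes are balanced.
    A simplified sketch, not the authors' exact selection rule."""
    X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        classes, counts = np.unique(y_lab, return_counts=True)
        if counts.min() == counts.max() or len(X_unlab) == 0:
            break  # balanced labeled set, or no candidates left
        minority = classes[np.argmin(counts)]
        M = X_lab[y_lab == minority]
        # distance from each unlabeled sample to its nearest minority sample
        d = np.linalg.norm(X_unlab[:, None, :] - M[None, :, :], axis=2).min(axis=1)
        pick = int(np.argmin(d))
        X_lab = np.vstack([X_lab, X_unlab[pick:pick + 1]])
        y_lab = np.append(y_lab, minority)
        X_unlab = np.delete(X_unlab, pick, axis=0)
    return X_lab, y_lab, X_unlab
```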
A Fast Greedy Algorithm for Generalized Column Subset Selection | cs.DS cs.LG stat.ML | This paper defines a generalized column subset selection problem which is
concerned with the selection of a few columns from a source matrix A that best
approximate the span of a target matrix B. The paper then proposes a fast
greedy algorithm for solving this problem and draws connections to different
problems that can be efficiently solved using the proposed algorithm.
| Ahmed K. Farahat, Ali Ghodsi, Mohamed S. Kamel | null | 1312.6820 | null | null |
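As a rough illustration of the generalized CSS objective, here is a naive greedy selector in Python/NumPy that re-solves a least-squares problem per candidate column; the paper's contribution is precisely to avoid this cost with a recursive update of the reconstruction error, so treat this only as a reference implementation of the objective.

```python
import numpy as np

def greedy_generalized_css(A, B, k):
    """Greedily select k columns of A whose span best approximates the
    columns of B in Frobenius norm. Naive O(k n) re-fitting version; the
    paper's algorithm avoids re-solving via a recursive error update."""
    selected = []
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in selected:
                continue
            S = A[:, selected + [j]]
            coef = np.linalg.lstsq(S, B, rcond=None)[0]  # project B on span(S)
            err = np.linalg.norm(B - S @ coef) ** 2
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
    return selected

rng = np.random.default_rng(0)
A, B = rng.normal(size=(50, 20)), rng.normal(size=(50, 5))
print(greedy_generalized_css(A, B, k=3))
```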
Greedy Column Subset Selection for Large-scale Data Sets | cs.DS cs.LG | In today's information systems, the availability of massive amounts of data
necessitates the development of fast and accurate algorithms to summarize these
data and represent them in a succinct format. One crucial problem in big data
analytics is the selection of representative instances from large and
massively-distributed data, which is formally known as the Column Subset
Selection (CSS) problem. The solution to this problem enables data analysts to
gain insights into the data and explore its hidden structure. The
selected instances can also be used for data preprocessing tasks such as
learning a low-dimensional embedding of the data points or computing a low-rank
approximation of the corresponding matrix. This paper presents a fast and
accurate greedy algorithm for large-scale column subset selection. The
algorithm minimizes an objective function which measures the reconstruction
error of the data matrix based on the subset of selected columns. The paper
first presents a centralized greedy algorithm for column subset selection which
depends on a novel recursive formula for calculating the reconstruction error
of the data matrix. The paper then presents a MapReduce algorithm which selects
a few representative columns from a matrix whose columns are massively
distributed across several commodity machines. The algorithm first learns a
concise representation of all columns using random projection, and it then
solves a generalized column subset selection problem at each machine, in which a
subset of columns is selected from the sub-matrix on that machine such that
the reconstruction error of the concise representation is minimized. The paper
demonstrates the effectiveness and efficiency of the proposed algorithm through
an empirical evaluation on benchmark data sets.
| Ahmed K. Farahat, Ahmed Elgohary, Ali Ghodsi, Mohamed S. Kamel | null | 1312.6838 | null | null |
Speech Recognition Front End Without Information Loss | cs.CL cs.CV cs.LG | Speech representation and modelling in high-dimensional spaces of acoustic
waveforms, or a linear transformation thereof, is investigated with the aim of
improving the robustness of automatic speech recognition to additive noise. The
motivation behind this approach is twofold: (i) the information in acoustic
waveforms that is usually removed in the process of extracting low-dimensional
features might aid robust recognition by virtue of structured redundancy
analogous to channel coding, (ii) linear feature domains allow for exact noise
adaptation, as opposed to representations that involve non-linear processing
which makes noise adaptation challenging. Thus, we develop a generative
framework for phoneme modelling in high-dimensional linear feature domains, and
use it in phoneme classification and recognition tasks. Results show that
classification and recognition in this framework perform better than analogous
PLP and MFCC classifiers below 18 dB SNR. A combination of the high-dimensional
and MFCC features at the likelihood level performs uniformly better than either
of the individual representations across all noise levels.
| Matthew Ager and Zoran Cvetkovic and Peter Sollich | null | 1312.6849 | null | null |
Matrix recovery using Split Bregman | cs.NA cs.LG | In this paper we address the problem of recovering a matrix, with inherent
low rank structure, from its lower dimensional projections. This problem is
frequently encountered in a wide range of areas including pattern recognition,
wireless sensor networks, control systems, recommender systems, image/video
reconstruction, etc. Both in theory and in practice, the standard way to solve
the low rank matrix recovery problem is via nuclear norm minimization. In this
paper, we propose a Split Bregman algorithm for nuclear norm minimization. The
use of the Bregman technique improves the convergence speed of our algorithm and
gives a higher success rate. Also, the accuracy of reconstruction is much
better even when only a small number of linear measurements is available.
Our claim is supported by empirical results obtained using our algorithm and
its comparison to other existing methods for matrix recovery. The algorithms
are compared on the basis of NMSE, execution time and success rate for varying
ranks and sampling ratios.
| Anupriya Gogna, Ankita Shukla and Angshul Majumdar | null | 1312.6872 | null | null |
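The proximal step at the heart of nuclear norm minimization is singular value thresholding; the Python/NumPy sketch below shows that step and a minimal soft-impute-style completion loop. This illustrates the shrinkage operator that Split Bregman builds on, not the paper's exact iteration.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def complete_matrix(Y, mask, tau=1.0, n_iter=200):
    """Iterative shrinkage for matrix completion: enforce the observed
    entries (boolean `mask`), then shrink the singular values.
    Illustrative only; not the paper's exact Split Bregman update."""
    X = np.zeros_like(Y)
    for _ in range(n_iter):
        X = svt(np.where(mask, Y, X), tau)  # data consistency + shrinkage
    return X
```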
Deep learning for class-generic object detection | cs.CV cs.LG cs.NE | We investigate the use of deep neural networks for the novel task of
class-generic object detection. We show that neural networks originally designed for
image recognition can be trained to detect objects within images, regardless of
their class, including objects for which no bounding box labels have been
provided. In addition, we show that bounding box labels yield a 1% performance
increase on the ImageNet recognition challenge.
| Brody Huval, Adam Coates, Andrew Ng | null | 1312.6885 | null | null |
Joint segmentation of multivariate time series with hidden process
regression for human activity recognition | stat.ML cs.LG | The problem of human activity recognition is central for understanding and
predicting human behavior, in particular in the perspective of assistive
services to humans, such as health monitoring, well-being, security, etc. There
is therefore a growing need to build accurate models which can take into
account the variability of the human activities over time (dynamic models)
rather than static ones which can have some limitations in such a dynamic
context. In this paper, the problem of activity recognition is analyzed through
the segmentation of the multidimensional time series of the acceleration data
measured in the 3-d space using body-worn accelerometers. The proposed model
for automatic temporal segmentation is a specific statistical latent process
model which assumes that the observed acceleration sequence is governed by a
sequence of hidden (unobserved) activities. More specifically, the proposed
approach is based on a specific multiple regression model incorporating a
hidden discrete logistic process which governs the switching from one activity
to another over time. The model is learned in an unsupervised context by
maximizing the observed-data log-likelihood via a dedicated
expectation-maximization (EM) algorithm. We applied it on a real-world
automatic human activity recognition problem and its performance was assessed
by performing comparisons with alternative approaches, including well-known
supervised static classifiers and the standard hidden Markov model (HMM). The
obtained results are very encouraging and show that the proposed approach is
quite competitive even though it works in an entirely unsupervised way and does
not require a feature extraction preprocessing step.
| Faicel Chamroukhi, Samer Mohammed, Dorra Trabelsi, Latifa Oukhellou,
Yacine Amirat | 10.1016/j.neucom.2013.04.003 | 1312.6956 | null | null |
Subjectivity Classification using Machine Learning Techniques for Mining
Feature-Opinion Pairs from Web Opinion Sources | cs.IR cs.CL cs.LG | Due to the flourishing of Web 2.0, web opinion sources are rapidly emerging,
containing precious information useful for both customers and manufacturers.
Recently, feature-based opinion mining techniques have been gaining momentum,
in which customer reviews are processed automatically to mine product features
and the user opinions expressed over them. However, customer reviews may contain
both opinionated and factual sentences. Distilling out factual content improves
mining performance by preventing noisy and irrelevant extraction. In this
paper, a combination of supervised machine learning and rule-based
approaches is proposed for mining feasible feature-opinion pairs from
subjective review sentences. In the first phase of the proposed approach, a
supervised machine learning technique is applied for classifying subjective and
objective sentences from customer reviews. In the next phase, a rule based
method is implemented which applies linguistic and semantic analysis of texts
to mine feasible feature-opinion pairs from subjective sentences retained after
the first phase. The effectiveness of the proposed methods is established
through experimentation over customer reviews on different electronic products.
| Ahmad Kamal | null | 1312.6962 | null | null |
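The first phase (subjective vs. objective sentence classification) can be prototyped in a few lines with scikit-learn. The tiny training set, the TF-IDF features, and the naive Bayes classifier below are all illustrative assumptions; the paper does not commit to this particular feature set or learner.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy sentences: 1 = subjective (opinionated), 0 = objective (factual).
sentences = [
    "The battery life is amazing and the screen is gorgeous.",
    "The phone has a 6.1 inch display and 128 GB of storage.",
    "I hate how slow the camera app feels.",
    "The device was released in March and ships with Android.",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(sentences, labels)
print(clf.predict(["The speaker sounds tinny but loud."]))  # subjectivity label
```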
An Unsupervised Approach for Automatic Activity Recognition based on
Hidden Markov Model Regression | stat.ML cs.CV cs.LG | Using supervised machine learning approaches to recognize human activities
from on-body wearable accelerometers generally requires a large amount of
labelled data. When ground truth information is not available, too expensive,
time consuming or difficult to collect, one has to rely on unsupervised
approaches. This paper presents a new unsupervised approach for human activity
recognition from raw acceleration data measured using inertial wearable
sensors. The proposed method is based upon joint segmentation of
multidimensional time series using a Hidden Markov Model (HMM) in a multiple
regression context. The model is learned in an unsupervised framework using the
Expectation-Maximization (EM) algorithm where no activity labels are needed.
The proposed method takes into account the sequential appearance of the data.
It is therefore adapted for the temporal acceleration data to accurately detect
the activities. It allows both segmentation and classification of the human
activities. Experimental results are provided to demonstrate the efficiency of
the proposed approach with respect to standard supervised and unsupervised
classification approaches.
| Dorra Trabelsi, Samer Mohammed, Faicel Chamroukhi, Latifa Oukhellou,
Yacine Amirat | 10.1109/TASE.2013.2256349 | 1312.6965 | null | null |
Model-based functional mixture discriminant analysis with hidden process
regression for curve classification | stat.ME cs.LG math.ST stat.ML stat.TH | In this paper, we study the modeling and the classification of functional
data presenting regime changes over time. We propose a new model-based
functional mixture discriminant analysis approach based on a specific hidden
process regression model that governs the regime changes over time. Our
approach is particularly adapted to handle the problem of complex-shaped
classes of curves, where each class is potentially composed of several
sub-classes, and to deal with the regime changes within each homogeneous
sub-class. The proposed model explicitly integrates the heterogeneity of each
class of curves via a mixture model formulation, and the regime changes within
each sub-class through a hidden logistic process. Each class of complex-shaped
curves is modeled by a finite number of homogeneous clusters, each of them
being decomposed into several regimes. The model parameters of each class are
learned by maximizing the observed-data log-likelihood by using a dedicated
expectation-maximization (EM) algorithm. Comparisons are performed with
alternative curve classification approaches, including functional linear
discriminant analysis and functional mixture discriminant analysis with
polynomial regression mixtures and spline regression mixtures. Results obtained
on simulated data and real data show that the proposed approach outperforms the
alternative approaches in terms of discrimination, and significantly improves
the curves approximation.
| Faicel Chamroukhi, Herv\'e Glotin, Allou Sam\'e | 10.1016/j.neucom.2012.10.030 | 1312.6966 | null | null |
Model-based clustering and segmentation of time series with changes in
regime | stat.ME cs.LG math.ST stat.ML stat.TH | Mixture model-based clustering, usually applied to multidimensional data, has
become a popular approach in many data analysis problems, both for its good
statistical properties and for the simplicity of implementation of the
Expectation-Maximization (EM) algorithm. Within the context of a railway
application, this paper introduces a novel mixture model for dealing with time
series that are subject to changes in regime. The proposed approach consists in
modeling each cluster by a regression model in which the polynomial
coefficients vary according to a discrete hidden process. In particular, this
approach makes use of logistic functions to model the (smooth or abrupt)
transitions between regimes. The model parameters are estimated by the maximum
likelihood method solved by an Expectation-Maximization algorithm. The proposed
approach can also be regarded as a clustering approach which operates by
finding groups of time series having common changes in regime. In addition to
providing a time series partition, it therefore provides a time series
segmentation. The problem of selecting the optimal numbers of clusters and
segments is solved by means of the Bayesian Information Criterion (BIC). The
proposed approach is shown to be efficient using a variety of simulated time
series and real-world time series of electrical power consumption from rail
switching operations.
| Allou Sam\'e, Faicel Chamroukhi, G\'erard Govaert, Patrice Aknin | 10.1007/s11634-011-0096-5 | 1312.6967 | null | null |
A hidden process regression model for functional data description.
Application to curve discrimination | stat.ME cs.LG stat.ML | A new approach for functional data description is proposed in this paper. It
consists of a regression model with a discrete hidden logistic process which is
adapted for modeling curves with abrupt or smooth regime changes. The model
parameters are estimated in a maximum likelihood framework through a dedicated
Expectation Maximization (EM) algorithm. From the proposed generative model, a
curve discrimination rule is derived using the Maximum A Posteriori rule. The
proposed model is evaluated using simulated curves and real world curves
acquired during railway switch operations, by performing comparisons with the
piecewise regression approach in terms of curve modeling and classification.
| Faicel Chamroukhi, Allou Sam\'e, G\'erard Govaert, Patrice Aknin | 10.1016/j.neucom.2009.12.023 | 1312.6968 | null | null |
Time series modeling by a regression approach based on a latent process | stat.ME cs.LG math.ST stat.ML stat.TH | Time series are used in many domains including finance, engineering,
economics and bioinformatics generally to represent the change of a measurement
over time. Modeling techniques may then be used to give a synthetic
representation of such data. A new approach for time series modeling is
proposed in this paper. It consists of a regression model incorporating a
discrete hidden logistic process allowing for activating smoothly or abruptly
different polynomial regression models. The model parameters are estimated by
the maximum likelihood method performed by a dedicated Expectation Maximization
(EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative
Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process
parameters. To evaluate the proposed approach, an experimental study on
simulated data and real world data was performed using two alternative
approaches: a heteroskedastic piecewise regression model using a global
optimization algorithm based on dynamic programming, and a Hidden Markov
Regression Model whose parameters are estimated by the Baum-Welch algorithm.
Finally, in the context of the remote monitoring of components of the French
railway infrastructure, and more particularly the switch mechanism, the
proposed approach has been applied to modeling and classifying time series
representing the condition measurements acquired during switch operations.
| Faicel Chamroukhi, Allou Sam\'e, G\'erard Govaert, Patrice Aknin | 10.1016/j.neunet.2009.06.040 | 1312.6969 | null | null |
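Several of the abstracts above share the same model class: polynomial regression regimes activated smoothly or abruptly by a hidden logistic process. A minimal generative sketch in Python/NumPy follows, assuming time-linear logistic weights; it samples from the model rather than fitting it with the dedicated EM/IRLS procedure.

```python
import numpy as np

def sample_hidden_logistic_regression(t, betas, w, sigma=0.1, seed=0):
    """Generative sketch: at each time t, regime k is active with
    probability pi_k(t) = softmax_k(w[k,0] + w[k,1]*t), and the output is
    a polynomial in t under the active regime plus Gaussian noise."""
    rng = np.random.default_rng(seed)
    logits = w[:, 0][None, :] + np.outer(t, w[:, 1])       # shape (T, K)
    pi = np.exp(logits - logits.max(axis=1, keepdims=True))
    pi /= pi.sum(axis=1, keepdims=True)                    # logistic weights
    y = np.empty(len(t))
    for i in range(len(t)):
        k = rng.choice(len(betas), p=pi[i])                # hidden regime
        y[i] = np.polyval(betas[k], t[i]) + sigma * rng.normal()
    return y

t = np.linspace(0.0, 1.0, 200)
betas = [np.array([1.0, 0.0]), np.array([-1.0, 2.0])]  # two linear regimes
w = np.array([[4.0, -10.0], [-4.0, 10.0]])             # switch near t = 0.4
y = sample_hidden_logistic_regression(t, betas, w)
```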
Piecewise regression mixture for simultaneous functional data clustering
and optimal segmentation | stat.ME cs.LG math.ST stat.ML stat.TH | This paper introduces a novel mixture model-based approach for simultaneous
clustering and optimal segmentation of functional data which are curves
presenting regime changes. The proposed model consists in a finite mixture of
piecewise polynomial regression models. Each piecewise polynomial regression
model is associated with a cluster, and within each cluster, each piecewise
polynomial component is associated with a regime (i.e., a segment). We derive
two approaches for learning the model parameters. The former is an estimation
approach and consists in maximizing the observed-data likelihood via a
dedicated expectation-maximization (EM) algorithm. A fuzzy partition of the
curves in K clusters is then obtained at convergence by maximizing the
posterior cluster probabilities. The latter however is a classification
approach and optimizes a specific classification likelihood criterion through a
dedicated classification expectation-maximization (CEM) algorithm. The optimal
curve segmentation is performed by using dynamic programming. In the
classification approach, both the curve clustering and the optimal segmentation
are performed simultaneously as the CEM learning proceeds. We show that the
classification approach is the probabilistic version that generalizes the
deterministic K-means-like algorithm proposed in H\'ebrail et al. (2010). The
proposed approach is evaluated using simulated curves and real-world curves.
Comparisons with alternatives including regression mixture models and the
K-means like algorithm for piecewise regression demonstrate the effectiveness
of the proposed approach.
| Faicel Chamroukhi | null | 1312.6974 | null | null |
Mod\`ele \`a processus latent et algorithme EM pour la r\'egression non
lin\'eaire | math.ST cs.LG stat.ME stat.ML stat.TH | A non-linear regression approach which consists of a specific regression
model incorporating a latent process, allowing various polynomial regression
models to be activated preferentially and smoothly, is introduced in this
paper. The model parameters are estimated by maximum likelihood performed via a
dedicated expecation-maximization (EM) algorithm. An experimental study using
simulated and real data sets reveals good performances of the proposed
approach.
| Faicel Chamroukhi, Allou Sam\'e, G\'erard Govaert, Patrice Aknin | null | 1312.6978 | null | null |
A regression model with a hidden logistic process for signal
parametrization | stat.ME cs.LG stat.ML | A new approach for signal parametrization, which consists of a specific
regression model incorporating a discrete hidden logistic process, is proposed.
The model parameters are estimated by the maximum likelihood method performed
by a dedicated Expectation Maximization (EM) algorithm. The parameters of the
hidden logistic process, in the inner loop of the EM algorithm, are estimated
using a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm. An
experimental study using simulated and real data reveals good performances of
the proposed approach.
| Faicel Chamroukhi, Allou Sam\'e, G\'erard Govaert, Patrice Aknin | null | 1312.6994 | null | null |
Towards Using Unlabeled Data in a Sparse-coding Framework for Human
Activity Recognition | cs.LG cs.AI stat.ML | We propose a sparse-coding framework for activity recognition in ubiquitous
and mobile computing that alleviates two fundamental problems of current
supervised learning approaches. (i) It automatically derives a compact, sparse
and meaningful feature representation of sensor data that does not rely on
prior expert knowledge and generalizes extremely well across domain boundaries.
(ii) It exploits unlabeled sample data for bootstrapping effective activity
recognizers, i.e., substantially reduces the amount of ground truth annotation
required for model estimation. Such unlabeled data is trivial to obtain, e.g.,
through contemporary smartphones carried by users as they go about their
everyday activities.
Based on the self-taught learning paradigm we automatically derive an
over-complete set of basis vectors from unlabeled data that captures inherent
patterns present within activity data. Through projecting raw sensor data onto
the feature space defined by such over-complete sets of basis vectors effective
feature extraction is pursued. Given these learned feature representations,
classification backends are then trained using small amounts of labeled
training data.
We study the new approach in detail using two datasets which differ in terms
of the recognition tasks and sensor modalities. Primarily, we focus on the
transportation mode analysis task, a popular task in mobile-phone-based
sensing. The sparse-coding framework significantly outperforms
state-of-the-art supervised learning approaches. Furthermore, we demonstrate
the great practical potential of the new approach by successfully evaluating
its generalization capabilities across both domain and sensor modalities by
considering the popular Opportunity dataset. Our feature learning approach
outperforms state-of-the-art approaches to analyzing activities in daily
living.
| Sourav Bhattacharya and Petteri Nurmi and Nils Hammerla and Thomas
Pl\"otz | 10.1016/j.pmcj.2014.05.006 | 1312.6995 | null | null |
A regression model with a hidden logistic process for feature extraction
from time series | stat.ME cs.LG math.ST stat.ML stat.TH | A new approach for feature extraction from time series is proposed in this
paper. This approach consists of a specific regression model incorporating a
discrete hidden logistic process. The model parameters are estimated by the
maximum likelihood method performed by a dedicated Expectation Maximization
(EM) algorithm. The parameters of the hidden logistic process, in the inner
loop of the EM algorithm, are estimated using a multi-class Iterative
Reweighted Least-Squares (IRLS) algorithm. A piecewise regression algorithm and
its iterative variant have also been considered for comparisons. An
experimental study using simulated and real data reveals good performances of
the proposed approach.
| Faicel Chamroukhi, Allou Sam\'e, G\'erard Govaert and Patrice Aknin | null | 1312.7001 | null | null |
Supervised learning of a regression model based on latent process.
Application to the estimation of fuel cell life time | stat.ML cs.LG stat.AP | This paper describes a pattern recognition approach aiming to estimate fuel
cell duration time from electrochemical impedance spectroscopy measurements. It
consists in first extracting features from both real and imaginary parts of the
impedance spectrum. A parametric model is considered in the case of the real
part, whereas a regression model with latent variables is used for the imaginary
part. Then, a linear regression model using different subsets of extracted
features is used for the estimation of fuel cell life time. The
performance of the proposed approach is evaluated on an experimental data set to
show its feasibility. This could lead to interesting perspectives for
predictive maintenance policies of fuel cells.
| Ra\"issa Onanena, Faicel Chamroukhi, Latifa Oukhellou, Denis Candusso,
Patrice Aknin, Daniel Hissel | null | 1312.7003 | null | null |
A Convex Formulation for Mixed Regression with Two Components: Minimax
Optimal Rates | stat.ML cs.IT cs.LG math.IT | We consider the mixed regression problem with two components, under
adversarial and stochastic noise. We give a convex optimization formulation
that provably recovers the true solution, and provide upper bounds on the
recovery errors for both arbitrary noise and stochastic noise settings. We also
give matching minimax lower bounds (up to log factors), showing that under
certain assumptions, our algorithm is information-theoretically optimal. Our
results represent the first tractable algorithm guaranteeing successful
recovery with tight bounds on recovery errors and sample complexity.
| Yudong Chen, Xinyang Yi, Constantine Caramanis | null | 1312.7006 | null | null |
Functional Mixture Discriminant Analysis with hidden process regression
for curve classification | stat.ME cs.LG stat.ML | We present a new mixture model-based discriminant analysis approach for
functional data using a specific hidden process regression model. The approach
allows for fitting flexible curve-models to each class of complex-shaped curves
presenting regime changes. The model parameters are learned by maximizing the
observed-data log-likelihood for each class by using a dedicated
expectation-maximization (EM) algorithm. Comparisons on simulated data with
alternative approaches show that the proposed approach provides better results.
| Faicel Chamroukhi, Herv\'e Glotin, C\'eline Rabouy | null | 1312.7007 | null | null |
Mixture model-based functional discriminant analysis for curve
classification | stat.ME cs.LG stat.ML | Statistical approaches for Functional Data Analysis concern the paradigm for
which the individuals are functions or curves rather than finite dimensional
vectors. In this paper, we particularly focus on the modeling and the
classification of functional data which are temporal curves presenting regime
changes over time. More specifically, we propose a new mixture model-based
discriminant analysis approach for functional data using a specific hidden
process regression model. Our approach is particularly adapted to both handle
the problem of complex-shaped classes of curves, where each class is composed
of several sub-classes, and to deal with the regime changes within each
homogeneous sub-class. The model explicitly integrates the heterogeneity of
each class of curves via a mixture model formulation, and the regime changes
within each sub-class through a hidden logistic process. The approach therefore
allows for fitting flexible curve-models to each class of complex-shaped
curves presenting regime changes through an unsupervised learning scheme, to
automatically summarize it into a finite number of homogeneous clusters, each
of which is decomposed into several regimes. The model parameters are learned by
maximizing the observed-data log-likelihood for each class by using a dedicated
expectation-maximization (EM) algorithm. Comparisons on simulated data and real
data with alternative approaches, including functional linear discriminant
analysis and functional mixture discriminant analysis with polynomial
regression mixtures and spline regression mixtures, show that the proposed
approach provides better results regarding the discrimination results and
significantly improves the curves approximation.
| Faicel Chamroukhi, Herv\'e Glotin | 10.1109/IJCNN.2012.6252818 | 1312.7018 | null | null |
Robust EM algorithm for model-based curve clustering | stat.ME cs.LG stat.ML | Model-based clustering approaches concern the paradigm of exploratory data
analysis relying on the finite mixture model to automatically find a latent
structure governing observed data. They are one of the most popular and
successful approaches in cluster analysis. The mixture density estimation is
generally performed by maximizing the observed-data log-likelihood by using the
expectation-maximization (EM) algorithm. However, it is well-known that the EM
algorithm initialization is crucial. In addition, the standard EM algorithm
requires the number of clusters to be known a priori. Some solutions have been
provided in [31, 12] for model-based clustering with Gaussian mixture models
for multivariate data. In this paper we focus on model-based curve clustering
approaches, when the data are curves rather than vectorial data, based on
regression mixtures. We propose a new robust EM algorithm for clustering
curves. We extend the model-based clustering approach presented in [31] for
Gaussian mixture models, to the case of curve clustering by regression
mixtures, including polynomial regression mixtures as well as spline or
B-spline regression mixtures. Our approach handles both the problem of
initialization and that of choosing the optimal number of clusters as the EM
learning proceeds, rather than in a two-fold scheme. This is achieved by
optimizing a penalized log-likelihood criterion. A simulation study confirms
the potential benefit of the proposed algorithm in terms of robustness
regarding initialization and finding the actual number of clusters.
| Faicel Chamroukhi | null | 1312.7022 | null | null |
Model-based clustering with Hidden Markov Model regression for time
series with regime changes | stat.ML cs.LG stat.ME | This paper introduces a novel model-based clustering approach for clustering
time series which present changes in regime. It consists of a mixture of
polynomial regressions governed by hidden Markov chains. The underlying hidden
process for each cluster activates successively several polynomial regimes
during time. The parameter estimation is performed by the maximum likelihood
method through a dedicated Expectation-Maximization (EM) algorithm. The
proposed approach is evaluated using simulated time series and real-world time
series issued from a railway diagnosis application. Comparisons with existing
approaches for time series clustering, including the standard EM for Gaussian
mixtures, $K$-means clustering, the standard mixture of regression models and
mixture of Hidden Markov Models, demonstrate the effectiveness of the proposed
approach.
| Faicel Chamroukhi, Allou Sam\'e, Patrice Aknin, G\'erard Govaert | 10.1109/IJCNN.2011.6033590 | 1312.7024 | null | null |
Language Modeling with Power Low Rank Ensembles | cs.CL cs.LG stat.ML | We present power low rank ensembles (PLRE), a flexible framework for n-gram
language modeling where ensembles of low rank matrices and tensors are used to
obtain smoothed probability estimates of words in context. Our method can be
understood as a generalization of n-gram modeling to non-integer n, and
includes standard techniques such as absolute discounting and Kneser-Ney
smoothing as special cases. PLRE training is efficient and our approach
outperforms state-of-the-art modified Kneser-Ney baselines in terms of
perplexity on large corpora as well as on BLEU score in a downstream machine
translation task.
| Ankur P. Parikh, Avneesh Saluja, Chris Dyer, Eric P. Xing | null | 1312.7077 | null | null |
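For context on the special cases PLRE generalizes, here is plain absolute discounting for a bigram model in Python, with a simple unigram backoff (Kneser-Ney would replace the backoff with continuation counts). The helper structure is illustrative, not the PLRE algorithm itself.

```python
from collections import Counter

def absolute_discount_bigram(corpus, d=0.75):
    """Absolute-discounted bigram model with a unigram backoff.
    Returns a function p(w, h) estimating P(w | h)."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    def p(w, h):
        c_h = unigrams[h]
        n_cont = sum(1 for (a, _b) in bigrams if a == h)  # distinct continuations of h
        lam = d * n_cont / c_h                            # mass freed by discounting
        backoff = unigrams[w] / len(corpus)               # simple unigram fallback
        return max(bigrams[(h, w)] - d, 0) / c_h + lam * backoff

    return p

corpus = "the cat sat on the mat the cat ran".split()
p = absolute_discount_bigram(corpus)
print(p("cat", "the"))  # discounted bigram mass plus smoothed backoff
```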
Near-separable Non-negative Matrix Factorization with $\ell_1$- and
Bregman Loss Functions | stat.ML cs.CV cs.LG | Recently, a family of tractable NMF algorithms have been proposed under the
assumption that the data matrix satisfies a separability condition Donoho &
Stodden (2003); Arora et al. (2012). Geometrically, this condition reformulates
the NMF problem as that of finding the extreme rays of the conical hull of a
finite set of vectors. In this paper, we develop several extensions of the
conical hull procedures of Kumar et al. (2013) for robust ($\ell_1$)
approximations and Bregman divergences. Our methods inherit all the advantages
of Kumar et al. (2013) including scalability and noise-tolerance. We show that
on foreground-background separation problems in computer vision, robust
near-separable NMFs match the performance of Robust PCA, considered state of
the art on these problems, with an order of magnitude faster training time. We
also demonstrate applications in exemplar selection settings.
| Abhishek Kumar, Vikas Sindhwani | null | 1312.7167 | null | null |
Sub-Classifier Construction for Error Correcting Output Code Using
Minimum Weight Perfect Matching | cs.LG cs.IT math.IT | Multi-class classification is mandatory for real world problems, and one of
the promising techniques for multi-class classification is the Error Correcting Output
Code. We propose a method for constructing the Error Correcting Output Code to
obtain the suitable combination of positive and negative classes encoded to
represent binary classifiers. The minimum weight perfect matching algorithm is
applied to find the optimal pairs of subset of classes by using the
generalization performance as a weighting criterion. Based on our method, each
subset of classes with positive and negative labels is appropriately combined
for learning the binary classifiers. Experimental results show that our
technique gives significantly higher performance compared to traditional
methods including the dense random code and the sparse random code both in
terms of accuracy and classification times. Moreover, our method requires a
significantly smaller number of binary classifiers while maintaining accuracy
compared to the One-Versus-One.
| Patoomsiri Songsiri, Thimaporn Phetkaew, Ryutaro Ichise and Boonserm
Kijsirikul | null | 1312.7179 | null | null |
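The pairing step can be sketched with an off-the-shelf matching routine: a minimum-weight perfect matching over classes is obtained from NetworkX's max_weight_matching by flipping the weights. The cost matrix below is a made-up stand-in for the generalization-performance criterion the paper uses, and the sketch covers only the pairing, not the full code construction.

```python
import networkx as nx

def pair_classes(cost):
    """Pair up an even number of classes by minimum-weight perfect
    matching; cost[i][j] is an assumed pairing cost (e.g. a validation
    error proxy standing in for generalization performance)."""
    n = len(cost)
    big = 1 + max(cost[i][j] for i in range(n) for j in range(i + 1, n))
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            # maximizing (big - cost) over perfect matchings is the same
            # as minimizing total cost
            G.add_edge(i, j, weight=big - cost[i][j])
    return nx.max_weight_matching(G, maxcardinality=True)

cost = [[0, 2, 9, 1],
        [2, 0, 3, 8],
        [9, 3, 0, 4],
        [1, 8, 4, 0]]
print(pair_classes(cost))  # e.g. {(0, 3), (1, 2)}: total cost 1 + 3 = 4
```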
Active Discovery of Network Roles for Predicting the Classes of Network
Nodes | cs.LG cs.SI stat.ML | Nodes in real world networks often have class labels, or underlying
attributes, that are related to the way in which they connect to other nodes.
Sometimes this relationship is simple: for instance, nodes of the same class
may be more likely to be connected. In other cases, however, this is not true,
and the way that nodes link in a network exhibits a different, more complex
relationship to their attributes. Here, we consider networks in which we know
how the nodes are connected, but we do not know the class labels of the nodes
or how class labels relate to the network links. We wish to identify the best
subset of nodes to label in order to learn this relationship between node
attributes and network links. We can then use this discovered relationship to
accurately predict the class labels of the rest of the network nodes.
We present a model that identifies groups of nodes with similar link
patterns, which we call network roles, using a generative blockmodel. The model
then predicts labels by learning the mapping from network roles to class labels
using a maximum margin classifier. We choose a subset of nodes to label
according to an iterative margin-based active learning strategy. By integrating
the discovery of network roles with the classifier optimisation, the active
learning process can adapt the network roles to better represent the network
for node classification. We demonstrate the model by exploring a selection of
real world networks, including a marine food web and a network of English
words. We show that, in contrast to other network classifiers, this model
achieves good classification accuracy for a range of networks with different
relationships between class labels and network links.
| Leto Peel | null | 1312.7258 | null | null |
Two Timescale Convergent Q-learning for Sleep--Scheduling in Wireless
Sensor Networks | cs.SY cs.LG | In this paper, we consider an intrusion detection application for Wireless
Sensor Networks (WSNs). We study the problem of scheduling the sleep times of
the individual sensors to maximize the network lifetime while keeping the
tracking error to a minimum. We formulate this problem as a
partially-observable Markov decision process (POMDP) with continuous
state-action spaces, in a manner similar to (Fuemmeler and Veeravalli [2008]).
However, unlike their formulation, we consider infinite horizon discounted and
average cost objectives as performance criteria. For each criterion, we propose
a convergent on-policy Q-learning algorithm that operates on two timescales,
while employing function approximation to handle the curse of dimensionality
associated with the underlying POMDP. Our proposed algorithm incorporates a
policy gradient update using a one-simulation simultaneous perturbation
stochastic approximation (SPSA) estimate on the faster timescale, while the
Q-value parameter (arising from a linear function approximation for the
Q-values) is updated in an on-policy temporal difference (TD) algorithm-like
fashion on the slower timescale. The feature selection scheme employed in each
of our algorithms manages the energy and tracking components in a manner that
assists the search for the optimal sleep-scheduling policy. For the sake of
comparison, in both discounted and average settings, we also develop a function
approximation analogue of the Q-learning algorithm. This algorithm, unlike the
two-timescale variant, does not possess theoretical convergence guarantees.
Finally, we also adapt our algorithms to include a stochastic iterative
estimation scheme for the intruder's mobility model. Our simulation results on
a 2-dimensional network setting suggest that our algorithms result in better
tracking accuracy at the cost of only a few additional sensors, in comparison
to a recent prior work.
| Prashanth L.A., Abhranil Chatterjee and Shalabh Bhatnagar | null | 1312.7292 | null | null |
Learning Human Pose Estimation Features with Convolutional Networks | cs.CV cs.LG cs.NE | This paper introduces a new architecture for human pose estimation using a
multi-layer convolutional network architecture and a modified learning
technique that learns low-level features and higher-level weak spatial models.
Unconstrained human pose estimation is one of the hardest problems in computer
vision, and our new architecture and learning schema shows significant
improvement over the current state-of-the-art results. The main contribution of
this paper is showing, for the first time, that a specific variation of deep
learning is able to outperform all existing traditional architectures on this
task. The paper also discusses several lessons learned while researching
alternatives, most notably, that it is possible to learn strong low-level
feature detectors on features that might even just cover a few pixels in the
image. Higher-level spatial models improve the overall result somewhat, but to
a much lesser extent than expected. Many researchers previously argued that
kinematic structure and top-down information are crucial for this domain, but
with our purely bottom-up and weak spatial model, we could improve on other more
complicated architectures that currently produce the best results. This mirrors
what many other researchers, like those in speech recognition, object
recognition, and other domains, have experienced.
| Arjun Jain, Jonathan Tompson, Mykhaylo Andriluka, Graham W. Taylor,
Christoph Bregler | null | 1312.7302 | null | null |
lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits | stat.ML cs.LG | The paper proposes a novel upper confidence bound (UCB) procedure for
identifying the arm with the largest mean in a multi-armed bandit game in the
fixed confidence setting using a small number of total samples. The procedure
cannot be improved in the sense that the number of samples required to identify
the best arm is within a constant factor of a lower bound based on the law of
the iterated logarithm (LIL). Inspired by the LIL, we construct our confidence
bounds to explicitly account for the infinite time horizon of the algorithm. In
addition, by using a novel stopping time for the algorithm we avoid a union
bound over the arms that has been observed in other UCB-type algorithms. We
prove that the algorithm is optimal up to constants and also show through
simulations that it provides superior performance with respect to the
state-of-the-art.
| Kevin Jamieson, Matthew Malloy, Robert Nowak, S\'ebastien Bubeck | null | 1312.7308 | null | null |
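A compact sketch of the lil' UCB loop in Python/NumPy is given below; the confidence-radius constants and the stopping threshold are simplified stand-ins for the paper's exact choices, so this illustrates the LIL-style bound and stopping rule rather than reproducing the algorithm verbatim.

```python
import numpy as np

def lil_ucb(pull, n_arms, delta=0.1, eps=0.01, max_pulls=20000):
    """Best-arm identification with LIL-flavored confidence bounds.
    Constants and stopping threshold are simplified stand-ins for the
    paper's exact choices."""
    counts = np.ones(n_arms)
    sums = np.array([pull(i) for i in range(n_arms)], dtype=float)
    for _ in range(max_pulls):
        # iterated-logarithm confidence radius (simplified constants)
        rad = np.sqrt(2 * (1 + eps)
                      * np.log(np.log((1 + eps) * counts + 2) / delta) / counts)
        i = int(np.argmax(sums / counts + rad))
        sums[i] += pull(i)
        counts[i] += 1
        # stop once one arm has been pulled far more than the rest combined
        if counts[i] > 1 + 2 * (counts.sum() - counts[i]):
            return i
    return int(np.argmax(sums / counts))

rng = np.random.default_rng(0)
means = [0.3, 0.5, 0.7]
print("best arm:", lil_ucb(lambda i: rng.normal(means[i], 1.0), n_arms=3))
```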
Correlation-based construction of neighborhood and edge features | cs.CV cs.LG stat.ML | Motivated by an abstract notion of low-level edge detector filters, we
propose a simple method of unsupervised feature construction based on pairwise
statistics of features. In the first step, we construct neighborhoods of
features by regrouping features that correlate. Then we use these subsets as
filters to produce new neighborhood features. Next, we connect neighborhood
features that correlate, and construct edge features by subtracting the
correlated neighborhood features from each other. To validate the usefulness of
the constructed features, we ran AdaBoost.MH on four multi-class classification
problems. Our most significant result is a test error of 0.94% on MNIST with an
algorithm which is essentially free of any image-specific priors. On CIFAR-10
our method is suboptimal compared to today's best deep learning techniques,
nevertheless, we show that the proposed method outperforms not only boosting on
the raw pixels, but also boosting on Haar filters.
| Bal\'azs K\'egl | null | 1312.7335 | null | null |
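The pipeline (correlate, group into neighborhoods, sum, then subtract correlated neighborhood features) is simple enough to sketch directly; the correlation threshold and the summation rule below are illustrative choices, not the exact construction used in the paper.

```python
import numpy as np

def correlation_features(X, thr=0.5):
    """Neighborhood features: sums over groups of correlated raw features.
    Edge features: differences of correlated neighborhood features."""
    C = np.corrcoef(X, rowvar=False)
    hoods = [np.flatnonzero(C[j] > thr) for j in range(X.shape[1])]
    N = np.stack([X[:, h].sum(axis=1) for h in hoods], axis=1)
    Cn = np.corrcoef(N, rowvar=False)
    edges = [N[:, a] - N[:, b]
             for a in range(N.shape[1]) for b in range(a + 1, N.shape[1])
             if Cn[a, b] > thr]
    E = np.stack(edges, axis=1) if edges else np.empty((len(X), 0))
    return N, E
```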
Rate-Distortion Auto-Encoders | cs.LG | A rekindled interest in auto-encoder algorithms has been spurred by
recent work on deep learning. Current efforts have been directed towards
effective training of auto-encoder architectures with a large number of coding
units. Here, we propose a learning algorithm for auto-encoders based on a
rate-distortion objective that minimizes the mutual information between the
inputs and the outputs of the auto-encoder subject to a fidelity constraint.
The goal is to learn a representation that is minimally committed to the input
data, but that is rich enough to reconstruct the inputs up to certain level of
distortion. Minimizing the mutual information acts as a regularization term
whereas the fidelity constraint can be understood as a risk functional in the
conventional statistical learning setting. The proposed algorithm uses a
recently introduced measure of entropy based on infinitely divisible matrices
that avoids the plug-in estimation of densities. Experiments using
over-complete bases show that the rate-distortion auto-encoders can learn a
regularized input-output mapping in an implicit manner.
| Luis G. Sanchez Giraldo and Jose C. Principe | null | 1312.7381 | null | null |
Generalized Ambiguity Decomposition for Understanding Ensemble Diversity | stat.ML cs.CV cs.LG | Diversity or complementarity of experts in ensemble pattern recognition and
information processing systems is widely-observed by researchers to be crucial
for achieving performance improvement upon fusion. Understanding this link
between ensemble diversity and fusion performance is thus an important research
question. However, prior works have theoretically characterized ensemble
diversity and have linked it with ensemble performance in very restricted
settings. We present a generalized ambiguity decomposition (GAD) theorem as a
broad framework for answering these questions. The GAD theorem applies to a
generic convex ensemble of experts for any arbitrary twice-differentiable loss
function. It shows that the ensemble performance approximately decomposes into
a difference of the average expert performance and the diversity of the
ensemble. It thus provides a theoretical explanation for the
empirically-observed benefit of fusing outputs from diverse classifiers and
regressors. It also provides a loss function-dependent, ensemble-dependent, and
data-dependent definition of diversity. We present extensions of this
decomposition to common regression and classification loss functions, and
report a simulation-based analysis of the diversity term and the accuracy of
the decomposition. We finally present experiments on standard pattern
recognition data sets which indicate the accuracy of the decomposition for
real-world classification and regression problems.
| Kartik Audhkhasi, Abhinav Sethy, Bhuvana Ramabhadran and Shrikanth S.
Narayanan | null | 1312.7463 | null | null |
Nonparametric Inference For Density Modes | stat.ME cs.LG | We derive nonparametric confidence intervals for the eigenvalues of the
Hessian at modes of a density estimate. This provides information about the
strength and shape of modes and can also be used as a significance test. We use
a data-splitting approach in which potential modes are identified using the
first half of the data and inference is done with the second half of the data.
To get valid confidence sets for the eigenvalues, we use a bootstrap based on
an elementary-symmetric-polynomial (ESP) transformation. This leads to valid
bootstrap confidence sets regardless of any multiplicities in the eigenvalues.
We also suggest a new method for bandwidth selection, namely, choosing the
bandwidth to maximize the number of significant modes. We show by example that
this method works well. Even when the true distribution is singular, and hence
does not have a density (in which case cross-validation chooses a zero
bandwidth), our method chooses a reasonable bandwidth.
| Christopher Genovese, Marco Perone-Pacifico, Isabella Verdinelli and
Larry Wasserman | null | 1312.7567 | null | null |
Distributed Policy Evaluation Under Multiple Behavior Strategies | cs.MA cs.AI cs.DC cs.LG | We apply diffusion strategies to develop a fully-distributed cooperative
reinforcement learning algorithm in which agents in a network communicate only
with their immediate neighbors to improve predictions about their environment.
The algorithm can also be applied to off-policy learning, meaning that the
agents can predict the response to a behavior different from the actual
policies they are following. The proposed distributed strategy is efficient,
with linear complexity in both computation time and memory footprint. We
provide a mean-square-error performance analysis and establish convergence
under constant step-size updates, which endow the network with continuous
learning capabilities. The results show a clear gain from cooperation: when the
individual agents can estimate the solution, cooperation increases stability
and reduces bias and variance of the prediction error; but, more importantly,
the network is able to approach the optimal solution even when none of the
individual agents can (e.g., when the individual behavior policies restrict
each agent to sample a small portion of the state space).
| Sergio Valcarcel Macua, Jianshu Chen, Santiago Zazo, Ali H. Sayed | null | 1312.7606 | null | null |
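A minimal diffusion-style TD(0) sketch in Python/NumPy is shown below: each agent adapts with a local temporal-difference step, then combines its weights with neighbors through a doubly-stochastic matrix A. The paper's algorithm additionally handles off-policy corrections and comes with a mean-square-error analysis; this is only the adapt-then-combine skeleton under assumed input shapes.

```python
import numpy as np

def diffusion_td0(feats, rewards, next_feats, A, alpha=0.05, gamma=0.9,
                  n_sweeps=50):
    """Adapt-then-combine TD(0): each agent k takes a local temporal-
    difference step on its own transitions, then averages its weight
    vector with neighbors via the doubly-stochastic combination matrix A.
    feats, next_feats: (n_agents, T, d); rewards: (n_agents, T)."""
    n_agents, T, d = feats.shape
    W = np.zeros((n_agents, d))
    for _ in range(n_sweeps):
        for t in range(T):
            for k in range(n_agents):          # adaptation step
                x, xn = feats[k, t], next_feats[k, t]
                delta = rewards[k, t] + gamma * W[k] @ xn - W[k] @ x
                W[k] = W[k] + alpha * delta * x
            W = A @ W                          # combination step
    return W
```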
Petuum: A New Platform for Distributed Machine Learning on Big Data | stat.ML cs.LG cs.SY | What is a systematic way to efficiently apply a wide spectrum of advanced ML
programs to industrial scale problems, using Big Models (up to 100s of billions
of parameters) on Big Data (up to terabytes or petabytes)? Modern
parallelization strategies employ fine-grained operations and scheduling beyond
the classic bulk-synchronous processing paradigm popularized by MapReduce, or
even specialized graph-based execution that relies on graph representations of
ML programs. The variety of approaches tends to pull systems and algorithms
design in different directions, and it remains difficult to find a universal
platform applicable to a wide range of ML programs at scale. We propose a
general-purpose framework that systematically addresses data- and
model-parallel challenges in large-scale ML, by observing that many ML programs
are fundamentally optimization-centric and admit error-tolerant,
iterative-convergent algorithmic solutions. This presents unique opportunities
for an integrative system design, such as bounded-error network synchronization
and dynamic scheduling based on ML program structure. We demonstrate the
efficacy of these system designs versus well-known implementations of modern ML
algorithms, allowing ML programs to run in much less time and at considerably
larger model sizes, even on modestly-sized compute clusters.
| Eric P. Xing, Qirong Ho, Wei Dai, Jin Kyu Kim, Jinliang Wei, Seunghak
Lee, Xun Zheng, Pengtao Xie, Abhimanu Kumar, Yaoliang Yu | null | 1312.7651 | null | null |
Response-Based Approachability and its Application to Generalized
No-Regret Algorithms | cs.LG cs.GT | Approachability theory, introduced by Blackwell (1956), provides fundamental
results on repeated games with vector-valued payoffs, and has been usefully
applied since in the theory of learning in games and to learning algorithms in
the online adversarial setup. Given a repeated game with vector payoffs, a
target set $S$ is approachable by a certain player (the agent) if he can ensure
that the average payoff vector converges to that set no matter what his
adversary opponent does. Blackwell provided two equivalent sets of conditions
for a convex set to be approachable. The first (primal) condition is a
geometric separation condition, while the second (dual) condition requires that
the set be {\em non-excludable}, namely that for every mixed action of the
opponent there exists a mixed action of the agent (a {\em response}) such that
the resulting payoff vector belongs to $S$. Existing approachability algorithms
rely on the primal condition and essentially require to compute at each stage a
projection direction from a given point to $S$. In this paper, we introduce an
approachability algorithm that relies on Blackwell's {\em dual} condition.
Thus, rather than projection, the algorithm relies on computation of the
response to a certain action of the opponent at each stage. The utility of the
proposed algorithm is demonstrated by applying it to certain generalizations of
the classical regret minimization problem, which include regret minimization
with side constraints and regret minimization for global cost functions. In
these problems, computation of the required projections is generally complex
but a response is readily obtainable.
| Andrey Bernstein and Nahum Shimkin | null | 1312.7658 | null | null |
Communication Efficient Distributed Optimization using an Approximate
Newton-type Method | cs.LG math.OC stat.ML | We present a novel Newton-type method for distributed optimization, which is
particularly well suited for stochastic optimization and learning problems. For
quadratic objectives, the method enjoys a linear rate of convergence which
provably \emph{improves} with the data size, requiring an essentially constant
number of iterations under reasonable assumptions. We provide theoretical and
empirical evidence of the advantages of our method compared to other
approaches, such as one-shot parameter averaging and ADMM.
| Ohad Shamir, Nathan Srebro, Tong Zhang | null | 1312.7853 | null | null |
Consistent Bounded-Asynchronous Parameter Servers for Distributed ML | stat.ML cs.DC cs.LG | In distributed ML applications, shared parameters are usually replicated
among computing nodes to minimize network overhead. Therefore, a proper
consistency model must be carefully chosen to ensure the algorithm's correctness
and provide high throughput. Existing consistency models used in
general-purpose databases and modern distributed ML systems are either too
loose to guarantee correctness of the ML algorithms or too strict and thus fail
to fully exploit the computing power of the underlying distributed system.
Many ML algorithms fall into the category of \emph{iterative convergent
algorithms} which start from a randomly chosen initial point and converge to
optima by repeating iteratively a set of procedures. We've found that many such
algorithms are robust to a bounded amount of inconsistency and still converge
correctly. This property allows distributed ML to relax strict consistency
models to improve system performance while theoretically guaranteeing algorithmic
correctness. In this paper, we present several relaxed consistency models for
asynchronous parallel computation and theoretically prove their algorithmic
correctness. The proposed consistency models are implemented in a distributed
parameter server and evaluated in the context of a popular ML application:
topic modeling.
| Jinliang Wei, Wei Dai, Abhimanu Kumar, Xun Zheng, Qirong Ho and Eric
P. Xing | null | 1312.7869 | null | null |
Approximating the Bethe partition function | cs.LG | When belief propagation (BP) converges, it does so to a stationary point of
the Bethe free energy $F$, and is often strikingly accurate. However, it may
converge only to a local optimum or may not converge at all. An algorithm was
recently introduced for attractive binary pairwise MRFs which is guaranteed to
return an $\epsilon$-approximation to the global minimum of $F$ in polynomial
time provided the maximum degree $\Delta=O(\log n)$, where $n$ is the number of
variables. Here we significantly improve this algorithm and derive several
results including a new approach based on analyzing first derivatives of $F$,
which leads to performance that is typically far superior and yields a fully
polynomial-time approximation scheme (FPTAS) for attractive models without any
degree restriction. Further, the method applies to general (non-attractive)
models, though with no polynomial time guarantee in this case, leading to the
important result that approximating $\log$ of the Bethe partition function,
$\log Z_B=-\min F$, for a general model to additive $\epsilon$-accuracy may be
reduced to a discrete MAP inference problem. We explore an application to
predicting equipment failure on an urban power network and demonstrate that the
Bethe approximation can perform well even when BP fails to converge.
| Adrian Weller, Tony Jebara | null | 1401.0044 | null | null |
PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction | cs.AI cs.LG cs.NE stat.ML | Multi-step-ahead time series prediction is one of the most challenging
research topics in the field of time series modeling and prediction, and is
continually under research. Recently, the multiple-input several
multiple-outputs (MISMO) modeling strategy has been proposed as a promising
alternative for multi-step-ahead time series prediction, exhibiting advantages
compared with the two currently dominating strategies, the iterated and the
direct strategies. Built on the established MISMO strategy, this study proposes
a particle swarm optimization (PSO)-based MISMO modeling strategy, which is
capable of determining the number of sub-models in a self-adaptive mode, with
varying prediction horizons. Rather than deriving crisp divides with equal-sized
prediction horizons from the established MISMO, the proposed PSO-MISMO
strategy, implemented with neural networks, employs a heuristic to create
flexible divides with varying sizes of prediction horizons and to generate
corresponding sub-models, providing considerable flexibility in model
construction, which has been validated with simulated and real datasets.
| Yukun Bao, Tao Xiong, Zhongyi Hu | 10.1109/TCYB.2013.2265084 | 1401.0104 | null | null |
Controlled Sparsity Kernel Learning | cs.LG | Multiple Kernel Learning(MKL) on Support Vector Machines(SVMs) has been a
popular front of research in recent times due to its success in application
problems like Object Categorization. This success is due to the fact that MKL
has the ability to choose from a variety of feature kernels to identify the
optimal kernel combination. But the initial formulation of MKL was only able to
select the best of the features and missed out on many other informative kernels
presented. To overcome this, the Lp norm based formulation was proposed by
Kloft et al. This formulation is capable of choosing a non-sparse set of
kernels through a control parameter p. Unfortunately, the parameter p does not
correspond directly to the number of kernels selected. We have observed that
stricter control over the number of kernels selected gives us an edge over
these techniques in terms of accuracy of classification and also helps us to
fine tune the algorithms to the time requirements at hand. In this work, we
propose a Controlled Sparsity Kernel Learning (CSKL) formulation that can
strictly control the number of kernels which we wish to select. The CSKL
formulation introduces a parameter t which directly corresponds to the number
of kernels selected. It is important to note that a search in the t space is finite
and fast compared to a search in the p space. We have also provided an efficient Reduced Gradient
Descent based algorithm to solve the CSKL formulation, which is proven to
converge. Through our experiments on the Caltech101 Object Categorization
dataset, we have also shown that one can achieve better accuracies than the
previous formulations through the right choice of t.
| Dinesh Govindaraj, Raman Sankaran, Sreedal Menon, Chiranjib
Bhattacharyya | null | 1401.0116 | null | null |
Black Box Variational Inference | stat.ML cs.LG stat.CO stat.ME | Variational inference has become a widely used method to approximate
posteriors in complex latent variable models. However, deriving a variational
inference algorithm generally requires significant model-specific analysis, and
these efforts can hinder and deter us from quickly developing and exploring a
variety of models for a problem at hand. In this paper, we present a "black
box" variational inference algorithm, one that can be quickly applied to many
models with little additional derivation. Our method is based on a stochastic
optimization of the variational objective where the noisy gradient is computed
from Monte Carlo samples from the variational distribution. We develop a number
of methods to reduce the variance of the gradient, always maintaining the
criterion that we want to avoid difficult model-based derivations. We evaluate
our method against the corresponding black box sampling based methods. We find
that our method reaches better predictive likelihoods much faster than sampling
methods. Finally, we demonstrate that Black Box Variational Inference lets us
easily explore a wide space of models by quickly constructing and evaluating
several models of longitudinal healthcare data.
| Rajesh Ranganath and Sean Gerrish and David M. Blei | null | 1401.0118 | null | null |
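The core of the black-box approach in the entry above is a Monte Carlo score-function estimate of the ELBO gradient. Below is a minimal sketch for a one-dimensional Gaussian variational family; log_joint is an assumed model-supplied function, and the paper's variance-reduction methods are not included.

```python
import numpy as np

# grad ELBO ≈ mean over samples of grad_lambda log q(z|lambda) * (log p(x,z) - log q(z|lambda))
def bbvi_gradient(log_joint, mu, log_sigma, num_samples=100):
    sigma = np.exp(log_sigma)
    z = mu + sigma * np.random.randn(num_samples)        # samples from q = N(mu, sigma^2)
    log_q = -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((z - mu) / sigma) ** 2
    score_mu = (z - mu) / sigma ** 2                     # d/dmu log q
    score_ls = ((z - mu) / sigma) ** 2 - 1.0             # d/dlog_sigma log q
    w = log_joint(z) - log_q                             # noisy ELBO integrand
    return np.mean(score_mu * w), np.mean(score_ls * w)
```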
Speeding-Up Convergence via Sequential Subspace Optimization: Current
State and Future Directions | cs.NA cs.LG | This is an overview paper written in the style of a research proposal. In recent
years we introduced a general framework for large-scale unconstrained
optimization -- Sequential Subspace Optimization (SESOP) and demonstrated its
usefulness for sparsity-based signal/image denoising, deconvolution,
compressive sensing, computed tomography, diffraction imaging, support vector
machines. We explored its combination with Parallel Coordinate Descent and
Separable Surrogate Function methods, obtaining state of the art results in
above-mentioned areas. There are several methods that are faster than plain
SESOP under specific conditions: Trust region Newton method - for problems with
easily invertible Hessian matrix; Truncated Newton method - when fast
multiplication by Hessian is available; Stochastic optimization methods - for
problems with large stochastic-type data; Multigrid methods - for problems with
nested multilevel structure. Each of these methods can be further improved by
merging it with SESOP. One can also accelerate the Augmented Lagrangian method for
constrained optimization problems and the Alternating Direction Method of
Multipliers for problems with a separable objective function and non-separable
constraints.
| Michael Zibulevsky | null | 1401.0159 | null | null |
Sparse Recovery with Very Sparse Compressed Counting | stat.ME cs.DS cs.IT cs.LG math.IT | Compressed sensing (sparse signal recovery) often encounters nonnegative data
(e.g., images). Recently we developed the methodology of using (dense)
Compressed Counting for recovering nonnegative K-sparse signals. In this paper,
we adopt very sparse Compressed Counting for nonnegative signal recovery. Our
design matrix is sampled from a maximally-skewed p-stable distribution (0<p<1),
and we sparsify the design matrix so that on average (1-g)-fraction of the
entries become zero. The idea is related to very sparse stable random
projections (Li et al 2006 and Li 2007), the prior work for estimating summary
statistics of the data.
In our theoretical analysis, we show that, when p->0, it suffices to use
M = K/(1-exp(-gK)) log N measurements, so that all coordinates can be recovered in
one scan of the coordinates. If g = 1 (i.e., dense design), then M = K log N.
If g= 1/K or 2/K (i.e., very sparse design), then M = 1.58K log N or M = 1.16K
log N. This means the design matrix can be indeed very sparse at only a minor
inflation of the sample complexity.
Interestingly, as p->1, the required number of measurements is essentially M
= 2.7K log N, provided g= 1/K. It turns out that this result is a general
worst-case bound.
| Ping Li, Cun-Hui Zhang, Tong Zhang | null | 1401.0201 | null | null |
Robust Hierarchical Clustering | cs.LG cs.DS | One of the most widely used techniques for data clustering is agglomerative
clustering. Such algorithms have long been used across many different fields
ranging from computational biology to social sciences to computer vision, in
part because their output is easy to interpret. It is well
known, however, that many of the classic agglomerative clustering algorithms
are not robust to noise. In this paper we propose and analyze a new robust
algorithm for bottom-up agglomerative clustering. We show that our algorithm
can be used to cluster accurately in cases where the data satisfies a number of
natural properties and where the traditional agglomerative algorithms fail. We
also show how to adapt our algorithm to the inductive setting where our given
data is only a small random sample of the entire data set. Experimental
evaluations on synthetic and real world data sets show that our algorithm
achieves better performance than other hierarchical algorithms in the presence
of noise.
| Maria-Florina Balcan, Yingyu Liang, Pramod Gupta | null | 1401.0247 | null | null |
Modeling Attractiveness and Multiple Clicks in Sponsored Search Results | cs.IR cs.LG | Click models are an important tool for leveraging user feedback, and are used
by commercial search engines for surfacing relevant search results. However,
existing click models are lacking in two aspects. First, they do not share
information across search results when computing attractiveness. Second, they
assume that users interact with the search results sequentially. Based on our
analysis of the click logs of a commercial search engine, we observe that the
sequential scan assumption does not always hold, especially for sponsored
search results. To overcome the above two limitations, we propose a new click
model. Our key insight is that sharing information across search results helps
in identifying important words or key-phrases which can then be used to
accurately compute attractiveness of a search result. Furthermore, we argue
that the click probability of a position as well as its attractiveness changes
during a user session and depends on the user's past click experience. Our
model seamlessly incorporates the effect of externalities (quality of other
search results displayed in response to a user query), user fatigue, as well as
pre- and post-click relevance of a sponsored search result. We propose an
efficient one-pass inference scheme and empirically evaluate the performance of
our model via extensive experiments using the click logs of a large commercial
search engine.
| Dinesh Govindaraj, Tao Wang, S.V.N. Vishwanathan | null | 1401.0255 | null | null |
Learning without Concentration | cs.LG stat.ML | We obtain sharp bounds on the performance of Empirical Risk Minimization
performed in a convex class and with respect to the squared loss, without
assuming that class members and the target are bounded functions or have
rapidly decaying tails.
Rather than resorting to a concentration-based argument, the method used here
relies on a `small-ball' assumption and thus holds for classes consisting of
heavy-tailed functions and for heavy-tailed targets.
The resulting estimates scale correctly with the `noise level' of the
problem, and when applied to the classical, bounded scenario, always improve
the known bounds.
| Shahar Mendelson | null | 1401.0304 | null | null |
EigenGP: Gaussian Process Models with Adaptive Eigenfunctions | cs.LG | Gaussian processes (GPs) provide a nonparametric representation of functions.
However, classical GP inference suffers from high computational cost for big
data. In this paper, we propose a new Bayesian approach, EigenGP, that learns
both basis dictionary elements--eigenfunctions of a GP prior--and prior
precisions in a sparse finite model. It is well known that, among all
orthogonal basis functions, eigenfunctions can provide the most compact
representation. Unlike other sparse Bayesian finite models where the basis
function has a fixed form, our eigenfunctions live in a reproducing kernel
Hilbert space as a finite linear combination of kernel functions. We learn the
dictionary elements--eigenfunctions--and the prior precisions over these
elements as well as all the other hyperparameters from data by maximizing the
model marginal likelihood. We explore computational linear algebra to simplify
the gradient computation significantly. Our experimental results demonstrate
improved predictive performance of EigenGP over alternative sparse GP methods
as well as relevance vector machines.
| Hao Peng and Yuan Qi | null | 1401.0362 | null | null |
Generalization Bounds for Representative Domain Adaptation | cs.LG stat.ML | In this paper, we propose a novel framework to analyze the theoretical
properties of the learning process for a representative type of domain
adaptation, which combines data from multiple sources and one target (or
briefly called representative domain adaptation). In particular, we use the
integral probability metric to measure the difference between the distributions
of two domains and meanwhile compare it with the H-divergence and the
discrepancy distance. We develop the Hoeffding-type, the Bennett-type and the
McDiarmid-type deviation inequalities for multiple domains respectively, and
then present the symmetrization inequality for representative domain
adaptation. Next, we use the derived inequalities to obtain the Hoeffding-type
and the Bennett-type generalization bounds respectively, both of which are
based on the uniform entropy number. Moreover, we present the generalization
bounds based on the Rademacher complexity. Finally, we analyze the asymptotic
convergence and the rate of convergence of the learning process for
representative domain adaptation. We discuss the factors that affect the
asymptotic behavior of the learning process, and the numerical experiments
support our theoretical findings as well. Meanwhile, we give a comparison with
the existing results of domain adaptation and the classical results under the
same-distribution assumption.
| Chao Zhang, Lei Zhang, Wei Fan, Jieping Ye | null | 1401.0376 | null | null |
Zero-Shot Learning for Semantic Utterance Classification | cs.CL cs.LG | We propose a novel zero-shot learning method for semantic utterance
classification (SUC). It learns a classifier $f: X \to Y$ for problems where
none of the semantic categories $Y$ are present in the training set. The
framework uncovers the link between categories and utterances using a semantic
space. We show that this semantic space can be learned by deep neural networks
trained on large amounts of search engine query log data. More precisely, we
propose a novel method that can learn discriminative semantic features without
supervision. It uses the zero-shot learning framework to guide the learning of
the semantic features. We demonstrate the effectiveness of the zero-shot
semantic learning algorithm on the SUC dataset collected by (Tur, 2012).
Furthermore, we achieve state-of-the-art results by combining the semantic
features with a supervised method.
| Yann N. Dauphin, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck | null | 1401.0509 | null | null |
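A minimal sketch of zero-shot classification through a shared semantic space, as described in the entry above: embed the utterance and every (unseen) category label, then pick the nearest category by cosine similarity. The embed() encoder is an assumed stand-in for the deep network trained on query-log data.

```python
import numpy as np

def zero_shot_classify(utterance, categories, embed):
    u = embed(utterance)
    u = u / np.linalg.norm(u)                       # unit-normalize the utterance embedding
    scores = {}
    for c in categories:
        v = embed(c)
        scores[c] = float(u @ (v / np.linalg.norm(v)))  # cosine similarity
    return max(scores, key=scores.get)              # nearest semantic category wins
```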
Structured Generative Models of Natural Source Code | cs.PL cs.LG stat.ML | We study the problem of building generative models of natural source code
(NSC); that is, source code written and understood by humans. Our primary
contribution is to describe a family of generative models for NSC that have
three key properties: First, they incorporate both sequential and hierarchical
structure. Second, they learn a distributed representation of source code
elements. Finally, they integrate closely with a compiler, which allows
leveraging compiler logic and abstractions when building structure into the
model. We also develop an extension that includes more complex structure,
refining how the model generates identifier tokens based on what variables are
currently in scope. Our models can be learned efficiently, and we show
empirically that including appropriate structure greatly improves the models,
measured by the probability of generating test programs.
| Chris J. Maddison and Daniel Tarlow | null | 1401.0514 | null | null |
More Algorithms for Provable Dictionary Learning | cs.DS cs.LG stat.ML | In dictionary learning, also known as sparse coding, the algorithm is given
samples of the form $y = Ax$ where $x\in \mathbb{R}^m$ is an unknown random
sparse vector and $A$ is an unknown dictionary matrix in $\mathbb{R}^{n\times
m}$ (usually $m > n$, which is the overcomplete case). The goal is to learn $A$
and $x$. This problem has been studied in neuroscience, machine learning,
vision, and image processing. In practice it is solved by heuristic algorithms
and provable algorithms seemed hard to find. Recently, provable algorithms were
found that work if the unknown feature vector $x$ is $\sqrt{n}$-sparse or even
sparser. Spielman et al. \cite{DBLP:journals/jmlr/SpielmanWW12} did this for
dictionaries where $m=n$; Arora et al. \cite{AGM} gave an algorithm for
overcomplete ($m >n$) and incoherent matrices $A$; and Agarwal et al.
\cite{DBLP:journals/corr/AgarwalAN13} handled a similar case but with weaker
guarantees.
This raised the problem of designing provable algorithms that allow sparsity
$\gg \sqrt{n}$ in the hidden vector $x$. The current paper designs algorithms
that allow sparsity up to $n/poly(\log n)$. It works for a class of matrices
where features are individually recoverable, a new notion identified in this
paper that may motivate further work.
The algorithms run in quasipolynomial time because they use limited
enumeration.
| Sanjeev Arora, Aditya Bhaskara, Rong Ge, Tengyu Ma | null | 1401.0579 | null | null |
Computing Entropy Rate Of Symbol Sources & A Distribution-free Limit
Theorem | cs.IT cs.LG math.IT math.PR stat.CO stat.ML | Entropy rate of sequential data-streams naturally quantifies the complexity
of the generative process. Thus entropy rate fluctuations could be used as a
tool to recognize dynamical perturbations in signal sources, and could
potentially be carried out without explicit background noise characterization.
However, state of the art algorithms to estimate the entropy rate have markedly
slow convergence, making such entropic approaches non-viable in practice. We
present here a fundamentally new approach to estimate entropy rates, which is
demonstrated to converge significantly faster in terms of input data lengths,
and is shown to be effective in diverse applications ranging from the
estimation of the entropy rate of English texts to the estimation of complexity
of chaotic dynamical systems. Additionally, the convergence rate of entropy
estimates does not follow from any standard limit theorem, and reported
algorithms fail to provide any confidence bounds on the computed values.
Exploiting a connection to the theory of probabilistic automata, we establish a
convergence rate of $O(\log \vert s \vert/\sqrt[3]{\vert s \vert})$ as a
function of the input length $\vert s \vert$, which then yields explicit
uncertainty estimates, as well as required data lengths to satisfy
pre-specified confidence bounds.
| Ishanu Chattopadhyay and Hod Lipson | null | 1401.0711 | null | null |
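For context, the naive plug-in baseline whose slow convergence motivates the entry above can be written in a few lines: estimate the entropy of length-L blocks of a symbol string and divide by L. This is explicitly not the paper's probabilistic-automata estimator.

```python
import math
from collections import Counter

def block_entropy_rate(s, L):
    # Plug-in estimate h ≈ H(blocks of length L) / L for a symbol string s.
    blocks = [s[i:i + L] for i in range(len(s) - L + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return H / L
```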
Data Smashing | cs.LG cs.AI cs.CE cs.IT math.IT stat.ML | Investigation of the underlying physics or biology from empirical data
requires a quantifiable notion of similarity - when do two observed data sets
indicate nearly identical generating processes, and when do they not. The
discriminating characteristics to look for in data are often determined by
heuristics designed by experts, $e.g.$, distinct shapes of "folded" lightcurves
may be used as "features" to classify variable stars, while determination of
pathological brain states might require a Fourier analysis of brainwave
activity. Finding good features is non-trivial. Here, we propose a universal
solution to this problem: we delineate a principle for quantifying similarity
between sources of arbitrary data streams, without a priori knowledge, features
or training. We uncover an algebraic structure on a space of symbolic models
for quantized data, and show that such stochastic generators may be added and
uniquely inverted; and that a model and its inverse always sum to the generator
of flat white noise. Therefore, every data stream has an anti-stream: data
generated by the inverse model. Similarity between two streams, then, is the
degree to which one, when summed to the other's anti-stream, mutually
annihilates all statistical structure to noise. We call this data smashing. We
present diverse applications, including disambiguation of brainwaves pertaining
to epileptic seizures, detection of anomalous cardiac rhythms, and
classification of astronomical objects from raw photometry. In our examples,
the data smashing principle, without access to any domain knowledge, meets or
exceeds the performance of specialized algorithms tuned by domain experts.
| Ishanu Chattopadhyay and Hod Lipson | null | 1401.0742 | null | null |
Context-Aware Hypergraph Construction for Robust Spectral Clustering | cs.CV cs.LG | Spectral clustering is a powerful tool for unsupervised data analysis. In
this paper, we propose a context-aware hypergraph similarity measure (CAHSM),
which leads to robust spectral clustering in the case of noisy data. We
construct three types of hypergraph---the pairwise hypergraph, the
k-nearest-neighbor (kNN) hypergraph, and the high-order over-clustering
hypergraph. The pairwise hypergraph captures the pairwise similarity of data
points; the kNN hypergraph captures the neighborhood of each point; and the
clustering hypergraph encodes high-order contexts within the dataset. By
combining the affinity information from these three hypergraphs, the CAHSM
algorithm is able to explore the intrinsic topological information of the
dataset. Therefore, data clustering using CAHSM tends to be more robust.
Considering the intra-cluster compactness and the inter-cluster separability of
vertices, we further design a discriminative hypergraph partitioning criterion
(DHPC). Using both CAHSM and DHPC, a robust spectral clustering algorithm is
developed. Theoretical analysis and experimental evaluation demonstrate the
effectiveness and robustness of the proposed algorithm.
| Xi Li, Weiming Hu, Chunhua Shen, Anthony Dick, Zhongfei Zhang | 10.1109/TKDE.2013.126 | 1401.0764 | null | null |
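A minimal sketch of the final step described in the entry above: fuse several affinity matrices and run spectral clustering on the result. The uniform averaging of the affinities is an assumption for illustration; the paper's CAHSM combines the three hypergraph affinities in its own way.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_cluster(affinities, k):
    W = sum(affinities) / len(affinities)        # fuse the affinity matrices
    d = W.sum(axis=1)
    L = np.diag(d) - W                            # unnormalized graph Laplacian
    _, vecs = eigh(L, subset_by_index=[0, k - 1])  # k smallest eigenvectors
    return KMeans(n_clusters=k, n_init=10).fit_predict(vecs)
```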
From Kernel Machines to Ensemble Learning | cs.LG cs.CV | Ensemble methods such as boosting combine multiple learners to obtain better
prediction than could be obtained from any individual learner. Here we propose
a principled framework for directly constructing ensemble learning methods from
kernel methods. Unlike previous studies showing the equivalence between
boosting and support vector machines (SVMs), which needs a translation
procedure, we show that it is possible to design a boosting-like procedure to
solve the SVM optimization problems.
In other words, it is possible to design ensemble methods directly from SVM
without any middle procedure.
This finding not only enables us to design new ensemble learning methods
directly from kernel methods, but also makes it possible to take advantage of
those highly-optimized fast linear SVM solvers for ensemble learning.
We exemplify this framework for designing binary ensemble learning as well as
new multi-class ensemble learning methods.
Experimental results demonstrate the flexibility and usefulness of the
proposed framework.
| Chunhua Shen, Fayao Liu | null | 1401.0767 | null | null |
Least Squares Policy Iteration with Instrumental Variables vs. Direct
Policy Search: Comparison Against Optimal Benchmarks Using Energy Storage | math.OC cs.LG | This paper studies approximate policy iteration (API) methods which use
least-squares Bellman error minimization for policy evaluation. We address
several of its enhancements, namely, Bellman error minimization using
instrumental variables, least-squares projected Bellman error minimization, and
projected Bellman error minimization using instrumental variables. We prove
that for a general discrete-time stochastic control problem, Bellman error
minimization using instrumental variables is equivalent to both variants of
projected Bellman error minimization. An alternative to these API methods is
direct policy search based on knowledge gradient. The practical performance of
these three approximate dynamic programming methods is then investigated in
the context of an application in energy storage, integrated with an
intermittent wind energy supply to fully serve a stochastic time-varying
electricity demand. We create a library of test problems using real-world data
and apply value iteration to find their optimal policies. These benchmarks are
then used to compare the developed policies. Our analysis indicates that API
with instrumental variables Bellman error minimization prominently outperforms
API with least-squares Bellman error minimization. However, these approaches
underperform our direct policy search implementation.
| Warren R. Scott, Warren B. Powell, Somayeh Moazehi | null | 1401.0843 | null | null |
Concave Penalized Estimation of Sparse Gaussian Bayesian Networks | stat.ME cs.LG stat.ML | We develop a penalized likelihood estimation framework to estimate the
structure of Gaussian Bayesian networks from observational data. In contrast to
recent methods which accelerate the learning problem by restricting the search
space, our main contribution is a fast algorithm for score-based structure
learning which does not restrict the search space in any way and works on
high-dimensional datasets with thousands of variables. Our use of concave
regularization, as opposed to the more popular $\ell_0$ (e.g. BIC) penalty, is
new. Moreover, we provide theoretical guarantees which generalize existing
asymptotic results when the underlying distribution is Gaussian. Most notably,
our framework does not require the existence of a so-called faithful DAG
representation, and as a result the theory must handle the inherent
nonidentifiability of the estimation problem in a novel way. Finally, as a
matter of independent interest, we provide a comprehensive comparison of our
approach to several standard structure learning methods using open-source
packages developed for the R language. Based on these experiments, we show that
our algorithm is significantly faster than other competing methods while
obtaining higher sensitivity with comparable false discovery rates for
high-dimensional data. In particular, the total runtime for our method to
generate a solution path of 20 estimates for DAGs with 8000 nodes is around one
hour.
| Bryon Aragam and Qing Zhou | null | 1401.0852 | null | null |
Schatten-$p$ Quasi-Norm Regularized Matrix Optimization via Iterative
Reweighted Singular Value Minimization | math.OC cs.LG math.NA stat.CO stat.ML | In this paper we study general Schatten-$p$ quasi-norm (SPQN) regularized
matrix minimization problems. In particular, we first introduce a class of
first-order stationary points for them, and show that the first-order
stationary points introduced in [11] for an SPQN regularized $vector$
minimization problem are equivalent to those of an SPQN regularized $matrix$
minimization reformulation. We also show that any local minimizer of the SPQN
regularized matrix minimization problems must be a first-order stationary
point. Moreover, we derive lower bounds for nonzero singular values of the
first-order stationary points and hence also of the local minimizers of the
SPQN regularized matrix minimization problems. The iterative reweighted
singular value minimization (IRSVM) methods are then proposed to solve these
problems, whose subproblems are shown to have a closed-form solution. In
contrast to the analogous methods for the SPQN regularized $vector$
minimization problems, the convergence analysis of these methods is
significantly more challenging. We develop a novel approach to establishing the
convergence of these methods, which makes use of the expression of a specific
solution of their subproblems and avoids the intricate issue of finding the
explicit expression for the Clarke subdifferential of the objective of their
subproblems. In particular, we show that any accumulation point of the sequence
generated by the IRSVM methods is a first-order stationary point of the
problems. Our computational results demonstrate that the IRSVM methods
generally outperform some recently developed state-of-the-art methods in terms
of solution quality and/or speed.
| Zhaosong Lu and Yong Zhang | null | 1401.0869 | null | null |
Learning parametric dictionaries for graph signals | cs.LG cs.SI stat.ML | In sparse signal representation, the choice of a dictionary often involves a
tradeoff between two desirable properties -- the ability to adapt to specific
signal data and a fast implementation of the dictionary. To sparsely represent
signals residing on weighted graphs, an additional design challenge is to
incorporate the intrinsic geometric structure of the irregular data domain into
the atoms of the dictionary. In this work, we propose a parametric dictionary
learning algorithm to design data-adapted, structured dictionaries that
sparsely represent graph signals. In particular, we model graph signals as
combinations of overlapping local patterns. We impose the constraint that each
dictionary is a concatenation of subdictionaries, with each subdictionary being
a polynomial of the graph Laplacian matrix, representing a single pattern
translated to different areas of the graph. The learning algorithm adapts the
patterns to a training set of graph signals. Experimental results on both
synthetic and real datasets demonstrate that the dictionaries learned by the
proposed algorithm are competitive with and often better than unstructured
dictionaries learned by state-of-the-art numerical learning algorithms in terms
of sparse approximation of graph signals. In contrast to the unstructured
dictionaries, however, the dictionaries learned by the proposed algorithm
feature localized atoms and can be implemented in a computationally efficient
manner in signal processing tasks such as compression, denoising, and
classification.
| Dorina Thanou, David I Shuman, Pascal Frossard | 10.1109/TSP.2014.2332441 | 1401.0887 | null | null |
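The structural constraint described in the entry above is easy to state in code: each subdictionary is a polynomial of the graph Laplacian, so a single pattern is translated across the graph. A minimal sketch, with the polynomial coefficients left as free parameters that the learning algorithm would fit.

```python
import numpy as np

def polynomial_subdictionary(L, coeffs):
    # D = sum_k coeffs[k] * L^k, one "pattern" localized around every vertex.
    n = L.shape[0]
    D, P = np.zeros((n, n)), np.eye(n)
    for a in coeffs:
        D += a * P
        P = P @ L
    return D

# A full dictionary concatenates S such subdictionaries:
# D = [D_1 ... D_S], each D_s a polynomial of the Laplacian.
```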
Feature Selection Using Classifier in High Dimensional Data | cs.CV cs.LG stat.ML | Feature selection is frequently used as a pre-processing step to machine
learning. It is a process of choosing a subset of original features so that the
feature space is optimally reduced according to a certain evaluation criterion.
The central objective of this paper is to reduce the dimension of the data by
finding a small set of important features which can give good classification
performance. We have applied the filter and wrapper approaches with different
classifiers, QDA and LDA respectively. A widely used filter method for
bioinformatics data applies a univariate criterion separately to each feature,
assuming that there is no interaction between features; we then applied the
Sequential Feature Selection method. Experimental results show that the filter
approach gives better performance with respect to the misclassification error rate.
| Vijendra Singh and Shivani Pathak | null | 1401.0898 | null | null |
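A minimal sketch of the filter-then-wrapper pipeline described in the entry above, using scikit-learn stand-ins: a univariate F-test filter followed by sequential feature selection with an LDA classifier. The specific functions and parameter values are illustrative assumptions, not the paper's exact setup.

```python
from sklearn.feature_selection import SelectKBest, f_classif, SequentialFeatureSelector
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_features(X, y, n_filter=100, n_final=10):
    # Filter step: keep the n_filter features with the highest univariate F-score.
    X_f = SelectKBest(f_classif, k=n_filter).fit_transform(X, y)
    # Wrapper step: greedily add features that help an LDA classifier.
    sfs = SequentialFeatureSelector(LinearDiscriminantAnalysis(),
                                    n_features_to_select=n_final)
    return sfs.fit_transform(X_f, y)
```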
Exploration vs Exploitation vs Safety: Risk-averse Multi-Armed Bandits | cs.LG | Motivated by applications in energy management, this paper presents the
Multi-Armed Risk-Aware Bandit (MARAB) algorithm. With the goal of limiting the
exploration of risky arms, MARAB takes as arm quality its conditional value at
risk. When the user-supplied risk level goes to 0, the arm quality tends toward
the essential infimum of the arm distribution density, and MARAB tends toward
the MIN multi-armed bandit algorithm, aimed at the arm with maximal minimal
value. As a first contribution, this paper presents a theoretical analysis of
the MIN algorithm under mild assumptions, establishing its robustness
comparatively to UCB. The analysis is supported by extensive experimental
validation of MIN and MARAB compared to UCB and state-of-the-art risk-aware MAB
algorithms on artificial and real-world problems.
| Nicolas Galichet (LRI, INRIA Saclay - Ile de France), Mich\`ele Sebag
(LRI, INRIA Saclay - Ile de France), Olivier Teytaud (LRI, INRIA Saclay - Ile
de France) | null | 1401.1123 | null | null |
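A minimal sketch of the risk-averse selection rule underlying MARAB-style algorithms from the entry above: score each arm by its empirical conditional value at risk (the mean of its worst alpha-fraction of observed rewards) and pick the best. The exploration bonus of the actual MARAB algorithm is omitted.

```python
import numpy as np

def cvar_arm(rewards_per_arm, alpha=0.1):
    scores = []
    for r in rewards_per_arm:
        r = np.sort(np.asarray(r))
        k = max(1, int(np.ceil(alpha * len(r))))
        scores.append(r[:k].mean())   # average of the worst alpha-quantile
    return int(np.argmax(scores))     # greedy pick of the least risky arm
```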
Cortical prediction markets | cs.AI cs.GT cs.LG cs.MA q-bio.NC | We investigate cortical learning from the perspective of mechanism design.
First, we show that discretizing standard models of neurons and synaptic
plasticity leads to rational agents maximizing simple scoring rules. Second,
our main result is that the scoring rules are proper, implying that neurons
faithfully encode expected utilities in their synaptic weights and encode
high-scoring outcomes in their spikes. Third, with this foundation in hand, we
propose a biologically plausible mechanism whereby neurons backpropagate
incentives which allows them to optimize their usefulness to the rest of
cortex. Finally, experiments show that networks that backpropagate incentives
can learn simple tasks.
| David Balduzzi | null | 1401.1465 | null | null |
Key point selection and clustering of swimmer coordination through
Sparse Fisher-EM | stat.ML cs.CV cs.LG physics.data-an stat.AP | To answer the existence of optimal swimmer learning/teaching strategies, this
work introduces a two-level clustering in order to analyze temporal dynamics of
motor learning in breaststroke swimming. Each level has been performed through
Sparse Fisher-EM, an unsupervised framework which can be applied efficiently on
large and correlated datasets. The induced sparsity selects key points of the
coordination phase without any prior knowledge.
| John Komar and Romain H\'erault and Ludovic Seifert | null | 1401.1489 | null | null |
Optimal Demand Response Using Device Based Reinforcement Learning | cs.LG cs.AI cs.SY | Demand response (DR) for residential and small commercial buildings is
estimated to account for as much as 65% of the total energy savings potential
of DR, and previous work shows that a fully automated Energy Management System
(EMS) is a necessary prerequisite to DR in these areas. In this paper, we
propose a novel EMS formulation for DR problems in these sectors. Specifically,
we formulate a fully automated EMS's rescheduling problem as a reinforcement
learning (RL) problem, and argue that this RL problem can be approximately
solved by decomposing it over device clusters. Compared with existing
formulations, our new formulation (1) does not require explicitly modeling the
user's dissatisfaction with job rescheduling, (2) enables the EMS to
self-initiate jobs, (3) allows the user to initiate more flexible requests and
(4) has a computational complexity linear in the number of devices. We also
demonstrate the simulation results of applying Q-learning, one of the most
popular and classical RL algorithms, to a representative example.
| Zheng Wen, Daniel O'Neill and Hamid Reza Maei | null | 1401.1549 | null | null |
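The entry above applies Q-learning per device cluster; for reference, the classical tabular Q-learning update it builds on is a one-liner. States and actions are assumed discretized, and the hyperparameters are illustrative.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # Standard temporal-difference target: reward plus discounted best next value.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```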
Beyond One-Step-Ahead Forecasting: Evaluation of Alternative
Multi-Step-Ahead Forecasting Models for Crude Oil Prices | cs.LG cs.AI | An accurate prediction of crude oil prices over long future horizons is
challenging and of great interest to governments, enterprises, and investors.
This paper proposes a revised hybrid model built upon empirical mode
decomposition (EMD) based on the feed-forward neural network (FNN) modeling
framework incorporating the slope-based method (SBM), which is capable of
capturing the complex dynamics of crude oil prices. Three commonly used
multi-step-ahead prediction strategies proposed in the literature, including
iterated strategy, direct strategy, and MIMO (multiple-input multiple-output)
strategy, are examined and compared, and practical considerations for the
selection of a prediction strategy for multi-step-ahead forecasting relating to
crude oil prices are identified. The weekly data from the WTI (West Texas
Intermediate) crude oil spot price are used to compare the performance of the
alternative models under the EMD-SBM-FNN modeling framework with selected
counterparts. The quantitative and comprehensive assessments are performed on
the basis of prediction accuracy and computational cost. The results obtained
in this study indicate that the proposed EMD-SBM-FNN model using the MIMO
strategy is the best in terms of prediction accuracy with accredited
computational load.
| Tao Xiong, Yukun Bao, Zhongyi Hu | 10.1016/j.eneco.2013.07.028 | 1401.1560 | null | null |
Fast nonparametric clustering of structured time-series | cs.LG cs.CV stat.ML | In this publication, we combine two Bayesian non-parametric models: the
Gaussian Process (GP) and the Dirichlet Process (DP). Our innovation in the GP
model is to introduce a variation on the GP prior which enables us to model
structured time-series data, i.e. data containing groups where we wish to model
inter- and intra-group variability. Our innovation in the DP model is an
implementation of a new fast collapsed variational inference procedure which
enables us to optimize our variational approximation significantly faster than
standard VB approaches. In a biological time series application we show how our
model better captures salient features of the data, leading to better
consistency with existing biological classifications, while the associated
inference algorithm provides a twofold speed-up over EM-based variational
inference.
| James Hensman and Magnus Rattray and Neil D. Lawrence | null | 1401.1605 | null | null |
Learning Multilingual Word Representations using a Bag-of-Words
Autoencoder | cs.CL cs.LG stat.ML | Recent work on learning multilingual word representations usually relies on
the use of word-level alignments (e.g. inferred with the help of GIZA++)
between translated sentences, in order to align the word embeddings in
different languages. In this workshop paper, we investigate an autoencoder
model for learning multilingual word representations that does without such
word-level alignments. The autoencoder is trained to reconstruct the
bag-of-words representation of a given sentence from an encoded representation
extracted from its translation. We evaluate our approach on a multilingual
document classification task, where labeled data is available only for one
language (e.g. English) while classification must be performed in a different
language (e.g. French). In our experiments, we observe that our method compares
favorably with a previously proposed method that exploits word-level alignments
to learn word representations.
| Stanislas Lauly, Alex Boulanger, Hugo Larochelle | null | 1401.1803 | null | null |
Robust Large Scale Non-negative Matrix Factorization using Proximal
Point Algorithm | stat.ML cs.IT cs.LG cs.NA math.IT | A robust algorithm for non-negative matrix factorization (NMF) is presented
in this paper with the purpose of dealing with large-scale data, where the
separability assumption is satisfied. In particular, we modify the Linear
Programming (LP) algorithm of [9] by introducing a reduced set of constraints
for exact NMF. In contrast to the previous approaches, the proposed algorithm
does not require knowledge of the factorization rank (extreme rays [3] or
topics [7]). Furthermore, motivated by a similar problem arising in the context
of metabolic network analysis [13], we consider an entirely different regime
where the number of extreme rays or topics can be much larger than the
dimension of the data vectors. The performance of the algorithm for different
synthetic data sets is provided.
| Jason Gejie Liu and Shuchin Aeron | null | 1401.1842 | null | null |
DJ-MC: A Reinforcement-Learning Agent for Music Playlist Recommendation | cs.LG | In recent years, there has been growing focus on the study of automated
recommender systems. Music recommendation systems serve as a prominent domain
for such works, both from an academic and a commercial perspective. A
fundamental aspect of music perception is that music is experienced in temporal
context and in sequence. In this work we present DJ-MC, a novel
reinforcement-learning framework for music recommendation that does not
recommend songs individually but rather song sequences, or playlists, based on
a model of preferences for both songs and song transitions. The model is
learned online and is uniquely adapted for each listener. To reduce exploration
time, DJ-MC exploits user feedback to initialize a model, which it subsequently
updates by reinforcement. We evaluate our framework with human participants
using both real song and playlist data. Our results indicate that DJ-MC's
ability to recommend sequences of songs provides a significant improvement over
more straightforward approaches, which do not take transitions into account.
| Elad Liebman, Maytal Saar-Tsechansky and Peter Stone | null | 1401.1880 | null | null |
Efficient unimodality test in clustering by signature testing | cs.LG stat.ML | This paper provides a new unimodality test with application in hierarchical
clustering methods. The proposed method, denoted signature test (Sigtest),
transforms the data based on its statistics. The transformed data has much
smaller variation compared to the original data and can be evaluated in a
simple proposed unimodality test. Compared with the existing unimodality tests,
Sigtest is more accurate in detecting overlapped clusters and has much
lower computational complexity. Simulation results demonstrate the efficiency of
this statistical test for both real and synthetic data sets.
| Mahdi Shahbaba and Soosan Beheshti | null | 1401.1895 | null | null |
Multiple-output support vector regression with a firefly algorithm for
interval-valued stock price index forecasting | cs.CE cs.LG q-fin.ST | Highly accurate interval forecasting of a stock price index is fundamental to
successfully making a profit when making investment decisions, by providing a
range of values rather than a point estimate. In this study, we investigate the
possibility of forecasting an interval-valued stock price index series over
short and long horizons using multi-output support vector regression (MSVR).
Furthermore, this study proposes a firefly algorithm (FA)-based approach, built
on the established MSVR, for determining the parameters of MSVR (abbreviated as
FA-MSVR). Three globally traded broad market indices are used to compare the
performance of the proposed FA-MSVR method with selected counterparts. The
quantitative and comprehensive assessments are performed on the basis of
statistical criteria, economic criteria, and computational cost. In terms of
statistical criteria, we compare the out-of-sample forecasting using
goodness-of-forecast measures and testing approaches. In terms of economic
criteria, we assess the relative forecast performance with a simple trading
strategy. The results obtained in this study indicate that the proposed FA-MSVR
method is a promising alternative for forecasting interval-valued financial
time series.
| Tao Xiong, Yukun Bao, Zhongyi Hu | 10.1016/j.knosys.2013.10.012 | 1401.1916 | null | null |
A PSO and Pattern Search based Memetic Algorithm for SVMs Parameters
Optimization | cs.LG cs.AI cs.NE stat.ML | Addressing the issue of SVMs parameters optimization, this study proposes an
efficient memetic algorithm based on Particle Swarm Optimization algorithm
(PSO) and Pattern Search (PS). In the proposed memetic algorithm, PSO is
responsible for exploration of the search space and the detection of the
potential regions with optimum solutions, while pattern search (PS) is used to
produce an effective exploitation on the potential regions obtained by PSO.
Moreover, a novel probabilistic selection strategy is proposed to select the
appropriate individuals among the current population to undergo local
refinement, keeping a good balance between exploration and exploitation.
Experimental results confirm that the local refinement with PS and our proposed
selection strategy are effective, and finally demonstrate effectiveness and
robustness of the proposed PSO-PS based MA for SVMs parameters optimization.
| Yukun Bao, Zhongyi Hu, Tao Xiong | 10.1016/j.neucom.2013.01.027 | 1401.1926 | null | null |
Bayesian Nonparametric Multilevel Clustering with Group-Level Contexts | cs.LG stat.ML | We present a Bayesian nonparametric framework for multilevel clustering which
utilizes group-level context information to simultaneously discover
low-dimensional structures of the group contents and partitions groups into
clusters. Using the Dirichlet process as the building block, our model
constructs a product base-measure with a nested structure to accommodate
content and context observations at multiple levels. The proposed model
possesses properties that link the nested Dirichlet processes (nDP) and the
Dirichlet process mixture models (DPM) in an interesting way: integrating out
all contents results in the DPM over contexts, whereas integrating out
group-specific contexts results in the nDP mixture over content variables. We
provide a Polya-urn view of the model and an efficient collapsed Gibbs
inference procedure. Extensive experiments on real-world datasets demonstrate
the advantage of utilizing context information via our model in both text and
image domains.
| Vu Nguyen, Dinh Phung, XuanLong Nguyen, Svetha Venkatesh, Hung Hai Bui | null | 1401.1974 | null | null |
Actor-Critic Algorithms for Learning Nash Equilibria in N-player
General-Sum Games | cs.GT cs.LG stat.ML | We consider the problem of finding stationary Nash equilibria (NE) in a
finite discounted general-sum stochastic game. We first generalize a non-linear
optimization problem from Filar and Vrieze [2004] to a $N$-player setting and
break down this problem into simpler sub-problems that ensure there is no
Bellman error for a given state and an agent. We then provide a
characterization of solution points of these sub-problems that correspond to
Nash equilibria of the underlying game and for this purpose, we derive a set of
necessary and sufficient SG-SP (Stochastic Game - Sub-Problem) conditions.
Using these conditions, we develop two actor-critic algorithms: OFF-SGSP
(model-based) and ON-SGSP (model-free). Both algorithms use a critic that
estimates the value function for a fixed policy and an actor that performs
descent in the policy space using a descent direction that avoids local minima.
We establish that both algorithms converge, in self-play, to the equilibria of
a certain ordinary differential equation (ODE), whose stable limit points
coincide with stationary NE of the underlying general-sum stochastic game. On a
single state non-generic game (see Hart and Mas-Colell [2005]) as well as on a
synthetic two-player game setup with $810,000$ states, we establish that
ON-SGSP consistently outperforms the NashQ [Hu and Wellman, 2003] and FFQ
[Littman, 2001] algorithms.
| H.L Prasad, L.A.Prashanth and Shalabh Bhatnagar | null | 1401.2086 | null | null |
A Comparative Study of Reservoir Computing for Temporal Signal
Processing | cs.NE cs.LG | Reservoir computing (RC) is a novel approach to time series prediction using
recurrent neural networks. In RC, an input signal perturbs the intrinsic
dynamics of a medium called a reservoir. A readout layer is then trained to
reconstruct a target output from the reservoir's state. The multitude of RC
architectures and evaluation metrics poses a challenge to both practitioners
and theorists who study the task-solving performance and computational power of
RC. In addition, in contrast to traditional computation models, the reservoir
is a dynamical system in which computation and memory are inseparable, and
therefore hard to analyze. Here, we compare echo state networks (ESN), a
popular RC architecture, with tapped-delay lines (DL) and nonlinear
autoregressive exogenous (NARX) networks, which we use to model systems with
limited computation and limited memory respectively. We compare the performance
of the three systems while computing three common benchmark time series:
H{\'e}non Map, NARMA10, and NARMA20. We find that the role of the reservoir in
the reservoir computing paradigm goes beyond providing a memory of the past
inputs. The DL and the NARX network have higher memorization capability, but
fall short of the generalization power of the ESN.
| Alireza Goudarzi, Peter Banda, Matthew R. Lakin, Christof Teuscher,
Darko Stefanovic | null | 1401.2224 | null | null |
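A minimal echo state network sketch in the spirit of the ESN architecture compared in the entry above: a fixed random reservoir driven by the input, with only a linear readout trained by ridge regression. Reservoir size, spectral radius, and the ridge parameter are illustrative choices.

```python
import numpy as np

def esn_fit(u, y, n_res=200, rho=0.9, ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= rho / max(abs(np.linalg.eigvals(W)))      # rescale to spectral radius rho
    x, X = np.zeros(n_res), []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in[:, 0] * u[t])     # reservoir state update
        X.append(x.copy())
    X = np.array(X)
    # Ridge-regression readout: y ≈ X @ w; only this part is trained.
    w = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    return W_in, W, w
```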
Extension of Sparse Randomized Kaczmarz Algorithm for Multiple
Measurement Vectors | cs.NA cs.LG stat.ML | The Kaczmarz algorithm is popular for iteratively solving an overdetermined
system of linear equations. The traditional Kaczmarz algorithm can approximate
the solution in a few sweeps through the equations, but a randomized version of
the Kaczmarz algorithm was shown to converge exponentially at a rate independent
of the number of equations. Recently an algorithm for finding a sparse solution
to a linear system of equations has been proposed based on the weighted
randomized Kaczmarz algorithm. These algorithms solve the single measurement
vector problem; however, there are applications where multiple measurements are available. In
this work, the objective is to solve a multiple measurement vector problem with
common sparse support by modifying the randomized Kaczmarz algorithm. We have
also modeled the problem of face recognition from video as the multiple
measurement vector problem and solved using our proposed technique. We have
compared the proposed algorithm with state-of-art spectral projected gradient
algorithm for multiple measurement vectors on both real and synthetic datasets.
The Monte Carlo simulations confirm that our proposed algorithm has better
recovery and convergence rates than the MMV version of the spectral projected
gradient algorithm under fairness constraints.
| Hemant Kumar Aggarwal and Angshul Majumdar | null | 1401.2288 | null | null |
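For reference, the randomized Kaczmarz iteration that the entry above extends can be sketched in a few lines: project the iterate onto a randomly chosen row, sampled with probability proportional to its squared norm (the Strohmer-Vershynin scheme). The sparsity weighting and the multiple-measurement-vector extension are omitted.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()                    # row-sampling probabilities ∝ ||a_i||^2
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=p)
        x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]  # project onto row i
    return x
```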
Lasso and equivalent quadratic penalized models | stat.ML cs.LG | The least absolute shrinkage and selection operator (lasso) and ridge
regression usually produce different estimates although the input, loss function,
and parameterization of the penalty are identical. In this paper we look for
ridge and lasso models with identical solution set.
It turns out that the lasso model with shrink vector $\lambda$ and a
quadratic penalized model with shrink matrix as outer product of $\lambda$ with
itself are equivalent, in the sense that they have equal solutions. To achieve
this, we have to restrict the estimates to be positive. This doesn't limit the
area of application since we can easily decompose every estimate into a positive
and a negative part. The resulting problem can be solved with a non-negative
least square algorithm.
Beside this quadratic penalized model, an augmented regression model with
positive bounded estimates is developed which is also equivalent to the lasso
model, but is probably faster to solve.
| Stefan Hummelsheim | null | 1401.2304 | null | null |
Clustering, Coding, and the Concept of Similarity | cs.LG | This paper develops a theory of clustering and coding which combines a
geometric model with a probabilistic model in a principled way. The geometric
model is a Riemannian manifold with a Riemannian metric, ${g}_{ij}({\bf x})$,
which we interpret as a measure of dissimilarity. The probabilistic model
consists of a stochastic process with an invariant probability measure which
matches the density of the sample input data. The link between the two models
is a potential function, $U({\bf x})$, and its gradient, $\nabla U({\bf x})$.
We use the gradient to define the dissimilarity metric, which guarantees that
our measure of dissimilarity will depend on the probability measure. Finally,
we use the dissimilarity metric to define a coordinate system on the embedded
Riemannian manifold, which gives us a low-dimensional encoding of our original
data.
| L. Thorne McCarty | null | 1401.2411 | null | null |
An Online Expectation-Maximisation Algorithm for Nonnegative Matrix
Factorisation Models | cs.LG stat.CO stat.ML | In this paper we formulate the nonnegative matrix factorisation (NMF) problem
as a maximum likelihood estimation problem for hidden Markov models and propose
online expectation-maximisation (EM) algorithms to estimate the NMF and the
other unknown static parameters. We also propose a sequential Monte Carlo
approximation of our online EM algorithm. We show the performance of the
proposed method with two numerical examples.
| Sinan Yildirim, A. Taylan Cemgil, Sumeetpal S. Singh | 10.3182/20120711-3-BE-2027.00312 | 1401.2490 | null | null |
Multi-Step-Ahead Time Series Prediction using Multiple-Output Support
Vector Regression | cs.LG stat.ML | Accurate time series prediction over long future horizons is challenging and
of great interest to both practitioners and academics. As a well-known
intelligent algorithm, the standard formulation of Support Vector Regression
(SVR) could be taken for multi-step-ahead time series prediction, only relying
either on the iterated strategy or the direct strategy. This study proposes a novel
multiple-step-ahead time series prediction approach which employs
multiple-output support vector regression (M-SVR) with multiple-input
multiple-output (MIMO) prediction strategy. In addition, the rank of three
leading prediction strategies with SVR is comparatively examined, providing
practical implications on the selection of the prediction strategy for
multi-step-ahead forecasting while taking SVR as modeling technique. The
proposed approach is validated with the simulated and real datasets. The
quantitative and comprehensive assessments are performed on the basis of the
prediction accuracy and computational cost. The results indicate that: 1) the
M-SVR using MIMO strategy achieves the best accurate forecasts with accredited
computational load, 2) the standard SVR using direct strategy achieves the
second best accurate forecasts, but with the most expensive computational cost,
and 3) the standard SVR using iterated strategy is the worst in terms of
prediction accuracy, but with the least computational cost.
| Yukun Bao, Tao Xiong, Zhongyi Hu | 10.1016/j.neucom.2013.09.010 | 1401.2504 | null | null |
MRFalign: Protein Homology Detection through Alignment of Markov Random
Fields | q-bio.QM cs.CE cs.LG | Sequence-based protein homology detection has been extensively studied and so
far the most sensitive method is based upon comparison of protein sequence
profiles, which are derived from multiple sequence alignment (MSA) of sequence
homologs in a protein family. A sequence profile is usually represented as a
position-specific scoring matrix (PSSM) or an HMM (Hidden Markov Model) and
accordingly PSSM-PSSM or HMM-HMM comparison is used for homolog detection. This
paper presents a new homology detection method MRFalign, consisting of three
key components: 1) a Markov Random Fields (MRF) representation of a protein
family; 2) a scoring function measuring similarity of two MRFs; and 3) an
efficient ADMM (Alternating Direction Method of Multipliers) algorithm aligning
two MRFs. Compared to HMM that can only model very short-range residue
correlation, MRFs can model long-range residue interaction patterns and thus,
encode information for the global 3D structure of a protein family.
Consequently, MRF-MRF comparison for remote homology detection shall be much
more sensitive than HMM-HMM or PSSM-PSSM comparison. Experiments confirm that
MRFalign outperforms several popular HMM or PSSM-based methods in terms of both
alignment accuracy and remote homology detection and that MRFalign works
particularly well for mainly beta proteins. For example, tested on the
benchmark SCOP40 (8353 proteins) for homology detection, PSSM-PSSM and HMM-HMM
succeed on 48% and 52% of proteins, respectively, at superfamily level, and on
15% and 27% of proteins, respectively, at fold level. In contrast, MRFalign
succeeds on 57.3% and 42.5% of proteins at superfamily and fold level,
respectively. This study implies that long-range residue interaction patterns
are very helpful for sequence-based homology detection. The software is
available for download at http://raptorx.uchicago.edu/download/.
| Jianzhu Ma, Sheng Wang, Zhiyong Wang and Jinbo Xu | 10.1371/journal.pcbi.1003500 | 1401.2668 | null | null |
PSMACA: An Automated Protein Structure Prediction Using MACA (Multiple
Attractor Cellular Automata) | cs.CE cs.LG | Protein structure prediction from sequences of amino acids has gained
remarkable attention in recent years. Even though there are some prediction
techniques addressing this problem, the approximate accuracy in predicting the
protein structure is close to 75%. An automated procedure was evolved with MACA
(Multiple Attractor Cellular Automata) for predicting the structure of the
protein. Most of the existing approaches are sequential which will classify the
input into four major classes and these are designed for similar sequences.
PSMACA is designed to identify ten classes from the sequences that share
twilight zone similarity and identity with the training sequences. This method
also predicts three states (helix, strand, and coil) for the structure. Our
comprehensive design considers 10 feature selection methods and 4 classifiers
to develop MACA (Multiple Attractor Cellular Automata) based classifiers that
are build for each of the ten classes. We have tested the proposed classifier
with twilight-zone and 1-high-similarity benchmark datasets with over three
dozens of modern competing predictors shows that PSMACA provides the best
overall accuracy that ranges between 77% and 88.7% depending on the dataset.
| Pokkuluri Kiran Sree, Inamupudi Ramesh Babu, SSSN Usha Devi N | 10.1166/jbic.2013.1052 | 1401.2688 | null | null |
Stochastic Optimization with Importance Sampling | stat.ML cs.LG | Uniform sampling of training data has been commonly used in traditional
stochastic optimization algorithms such as Proximal Stochastic Gradient Descent
(prox-SGD) and Proximal Stochastic Dual Coordinate Ascent (prox-SDCA). Although
uniform sampling can guarantee that the sampled stochastic quantity is an
unbiased estimate of the corresponding true quantity, the resulting estimator
may have a rather high variance, which negatively affects the convergence of
the underlying optimization procedure. In this paper we study stochastic
optimization with importance sampling, which improves the convergence rate by
reducing the stochastic variance. Specifically, we study prox-SGD (actually,
stochastic mirror descent) with importance sampling and prox-SDCA with
importance sampling. For prox-SGD, instead of adopting uniform sampling
throughout the training process, the proposed algorithm employs importance
sampling to minimize the variance of the stochastic gradient. For prox-SDCA,
the proposed importance sampling scheme aims to achieve higher expected dual
value at each dual coordinate ascent step. We provide extensive theoretical
analysis to show that the convergence rates with the proposed importance
sampling methods can be significantly improved under suitable conditions both
for prox-SGD and for prox-SDCA. Experiments are provided to verify the
theoretical analysis.
| Peilin Zhao, Tong Zhang | null | 1401.2753 | null | null |
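To illustrate the core mechanism, here is a minimal sketch of SGD with importance sampling on a least-squares objective: each example is sampled with probability proportional to its feature norm (a common proxy for its per-example smoothness constant), and the sampled gradient is reweighted by 1/(n p_i) to keep the estimator unbiased. The sampling distribution, step size, and toy data are illustrative assumptions, not the paper's prox-SGD or prox-SDCA algorithms.

```python
# Sketch: SGD with importance sampling on least squares.
# p_i ~ ||x_i|| (proxy for per-example smoothness); dividing the sampled
# gradient by n*p_i keeps it an unbiased estimate of the full gradient.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.normal(size=(n, d)) * rng.uniform(0.1, 10.0, size=(n, 1))  # uneven scales
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

p = np.linalg.norm(X, axis=1)
p /= p.sum()                            # importance-sampling distribution

w, step = np.zeros(d), 1e-3
for t in range(20000):
    i = rng.choice(n, p=p)
    grad_i = (X[i] @ w - y[i]) * X[i]   # gradient of (1/2)(x_i . w - y_i)^2
    w -= step * grad_i / (n * p[i])     # reweight: E[grad_i/(n p_i)] = full grad
print("parameter error:", round(float(np.linalg.norm(w - w_true)), 4))
```

Under uniform sampling the rare large-norm examples dominate the variance of the stochastic gradient; sampling them more often and down-weighting them flattens the per-sample gradient magnitudes, which is the variance-reduction effect analyzed in the paper.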
GPS-ABC: Gaussian Process Surrogate Approximate Bayesian Computation | cs.LG q-bio.QM stat.ML | Scientists often express their understanding of the world through a
computationally demanding simulation program. Analyzing the posterior
distribution of the parameters given observations (the inverse problem) can be
extremely challenging. The Approximate Bayesian Computation (ABC) framework is
the standard statistical tool for handling these likelihood-free problems, but
it typically requires a very large number of simulations. In this work we develop two
new ABC sampling algorithms that significantly reduce the number of simulations
necessary for posterior inference. Both algorithms use confidence estimates for
the accept probability in the Metropolis Hastings step to adaptively choose the
number of necessary simulations. Our GPS-ABC algorithm stores the information
obtained from every simulation in a Gaussian process which acts as a surrogate
function for the simulated statistics. Experiments on a challenging realistic
biological problem illustrate the potential of these algorithms.
| Edward Meeds and Max Welling | null | 1401.2838 | null | null |
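For orientation, here is a minimal sketch of plain rejection ABC, the likelihood-free baseline these samplers improve upon; the simulator, prior, summary statistic, and tolerance are illustrative assumptions, and GPS-ABC would additionally fit a Gaussian process surrogate to the simulated statistics so that most of these simulator calls can be skipped.

```python
# Sketch of rejection ABC: accept a parameter draw when the summary statistic
# of its simulated data lands within a tolerance of the observed summary.
# Simulator, prior, summary, and tolerance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=50):
    # Stand-in for an expensive black-box simulation program.
    return rng.normal(loc=theta, scale=1.0, size=n)

s_obs = simulator(2.0).mean()            # summary statistic of "observed" data

accepted, epsilon = [], 0.1              # ABC tolerance
for _ in range(20000):
    theta = rng.uniform(-5, 5)           # draw from the prior
    if abs(simulator(theta).mean() - s_obs) < epsilon:
        accepted.append(theta)

print(len(accepted), "accepted; posterior mean ~", round(float(np.mean(accepted)), 2))
```

Every proposal costs one full simulation here, which is exactly the expense that the surrogate-based acceptance estimates in GPS-ABC are designed to avoid.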
Exploiting generalisation symmetries in accuracy-based learning
classifier systems: An initial study | cs.NE cs.LG | Modern learning classifier systems typically exploit a niched genetic
algorithm to facilitate rule discovery. When used for reinforcement learning,
such rules represent generalisations over the state-action-reward space. Whilst
encouraging maximal generality, the niching can potentially hinder the
formation of generalisations in the state space which are symmetrical, or very
similar, over different actions. This paper introduces the use of rules which
contain multiple actions, maintaining accuracy and reward metrics for each
action. It is shown that problem symmetries can be exploited, improving
performance, whilst not degrading performance when symmetries are reduced.
| Larry Bull | null | 1401.2949 | null | null |
Binary Classifier Calibration: Bayesian Non-Parametric Approach | stat.ML cs.LG | A set of probabilistic predictions is well calibrated if the events that are
predicted to occur with probability p do in fact occur about a fraction p of the
time. Well-calibrated predictions are particularly important when machine
learning models are used in decision analysis. This paper presents two new
non-parametric methods for calibrating outputs of binary classification models:
a method based on the Bayes optimal selection and a method based on the
Bayesian model averaging. The advantage of these methods is that they are
independent of the algorithm used to learn a predictive model, and they can be
applied in a post-processing step, after the model is learned. This makes them
applicable to a wide variety of machine learning models and methods. These
calibration methods, as well as other methods, are tested on a variety of
datasets in terms of both discrimination and calibration performance. The
results show the methods either outperform or are comparable in performance to
the state-of-the-art calibration methods.
| Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht | null | 1401.2955 | null | null |
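To make the notion of calibration above concrete, here is a minimal sketch of a standard binned calibration check (expected calibration error): predictions are grouped into probability bins, and each bin's mean predicted probability is compared with its observed event frequency. The bin count and toy predictions are illustrative assumptions, not the paper's Bayesian calibration methods or its specific evaluation measures.

```python
# Sketch of a binned calibration check (expected calibration error, ECE).
# Bin count and the deliberately miscalibrated toy predictions are assumed.
import numpy as np

def ece(probs, labels, n_bins=10):
    probs, labels = np.asarray(probs), np.asarray(labels)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            total += mask.mean() * gap   # weight gap by bin occupancy
    return total

rng = np.random.default_rng(0)
p = rng.uniform(size=5000)                         # predicted probabilities
y = (rng.uniform(size=5000) < p ** 2).astype(int)  # true event rate is p^2
print("ECE of miscalibrated predictor:", round(ece(p, y), 3))
```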
Use Case Point Approach Based Software Effort Estimation using Various
Support Vector Regression Kernel Methods | cs.SE cs.LG | The job of software effort estimation is a critical one in the early stages
of the software development life cycle when the details of requirements are
usually not clearly identified. Various optimization techniques help in
improving the accuracy of effort estimation. Support Vector Regression
(SVR) is one of several soft-computing techniques that help in
obtaining optimal estimates. The idea of SVR is based upon the computation
of a linear regression function in a high-dimensional feature space to which
the input data are mapped via a nonlinear function. Further, SVR kernel
methods can be applied to transform the input data, and based on these
transformations an optimal boundary between the possible outputs can be
obtained. The main objective of the research work carried out in this paper is
to estimate the software effort using use case point approach. The use case
point approach relies on the use case diagram to estimate the size and effort
of software projects. Then, an attempt has been made to optimize the results
obtained from use case point analysis using various SVR kernel methods to
achieve better prediction accuracy.
| Shashank Mouli Satapathy, Santanu Kumar Rath | null | 1401.3069 | null | null |
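As a concrete illustration of comparing SVR kernels on effort data, here is a minimal scikit-learn sketch; the single toy feature (use case points), the synthetic effort values, and the hyperparameters are illustrative assumptions, not the paper's datasets or tuned models.

```python
# Sketch: comparing SVR kernel methods for software effort estimation.
# The toy "use case points" feature and effort values are assumptions.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
ucp = rng.uniform(50, 500, size=80).reshape(-1, 1)           # use case points
effort = 20 * ucp.ravel() ** 0.9 + rng.normal(0, 200, 80)    # person-hours

for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    model = SVR(kernel=kernel, C=100.0)
    mae = -cross_val_score(model, ucp, effort, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{kernel:8s} mean absolute error: {mae:.1f}")
```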
Dynamic Topology Adaptation and Distributed Estimation for Smart Grids | cs.IT cs.LG math.IT | This paper presents new dynamic topology adaptation strategies for
distributed estimation in smart grid systems. We propose a dynamic
exhaustive-search-based topology adaptation algorithm and a dynamic
sparsity-inspired topology adaptation algorithm, which can exploit the
topology of smart grids with poor-quality links and obtain performance gains.
We incorporate an optimized combining rule, known as the Hastings rule, into
our proposed dynamic topology adaptation algorithms. Compared with existing
work in the literature on distributed estimation, the proposed algorithms have
a better convergence rate and significantly improve system performance. The
performance of the proposed algorithms is compared with that of existing
algorithms in the IEEE 14-bus system.
| S. Xu, R. C. de Lamare and H. V. Poor | null | 1401.3148 | null | null |
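To sketch the flavour of the underlying distributed estimation, below is a minimal adapt-then-combine diffusion LMS loop over a small fixed network. Metropolis combining weights are used here for simplicity, as a stand-in for the paper's Hastings rule (which additionally accounts for per-node noise statistics); the network, step size, and data model are illustrative assumptions, and no topology adaptation is performed.

```python
# Sketch: adapt-then-combine diffusion LMS over a fixed 5-node network.
# Metropolis weights stand in for the Hastings rule; topology is static here.
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 4
w_true = rng.normal(size=d)
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3, 4], 4: [3, 4]}

# Metropolis combining weights (each column sums to one).
A = np.zeros((N, N))
deg = {k: len(neighbors[k]) for k in range(N)}
for k in range(N):
    for l in neighbors[k]:
        if l != k:
            A[l, k] = 1.0 / max(deg[k], deg[l])
    A[k, k] = 1.0 - A[:, k].sum()

W, mu = np.zeros((N, d)), 0.05
for t in range(2000):
    psi = np.zeros((N, d))
    for k in range(N):                       # adapt: local LMS step
        x = rng.normal(size=d)
        y = x @ w_true + 0.1 * rng.normal()  # noisy local measurement
        psi[k] = W[k] + mu * (y - x @ W[k]) * x
    for k in range(N):                       # combine: weigh neighbours
        W[k] = sum(A[l, k] * psi[l] for l in neighbors[k])

print("mean estimation error:", round(float(np.linalg.norm(W - w_true, axis=1).mean()), 4))
```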
Online Markov decision processes with Kullback-Leibler control cost | math.OC cs.LG cs.SY | This paper considers an online (real-time) control problem that involves an
agent performing a discrete-time random walk over a finite state space. The
agent's action at each time step is to specify the probability distribution for
the next state given the current state. Following the set-up of Todorov, the
state-action cost at each time step is a sum of a state cost and a control cost
given by the Kullback-Leibler (KL) divergence between the agent's next-state
distribution and that determined by some fixed passive dynamics. The online
aspect of the problem is due to the fact that the state cost functions are
generated by a dynamic environment, and the agent learns the current state cost
only after selecting an action. An explicit construction of a computationally
efficient strategy with small regret (i.e., expected difference between its
actual total cost and the smallest cost attainable using noncausal knowledge of
the state costs) under mild regularity conditions is presented, along with a
demonstration of the performance of the proposed strategy on a simulated target
tracking problem. A number of new results on Markov decision processes with KL
control cost are also obtained.
| Peng Guan and Maxim Raginsky and Rebecca Willett | null | 1401.3198 | null | null |
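To fix notation, a minimal LaTeX rendering of the per-step cost in this KL-control setting is given below; the symbols are illustrative ($s_t$ the environment-generated state cost at time $t$, $u(\cdot\mid x)$ the agent's chosen next-state distribution, $p(\cdot\mid x)$ the fixed passive dynamics), and the exact regret benchmark in the paper may differ in detail.

```latex
% Per-step cost: state cost plus KL control cost relative to the
% passive dynamics (symbols illustrative, following the abstract).
\[
  c_t(x, u) \;=\; s_t(x) \;+\;
  D_{\mathrm{KL}}\!\bigl( u(\cdot \mid x) \,\big\|\, p(\cdot \mid x) \bigr)
  \;=\; s_t(x) \;+\; \sum_{y} u(y \mid x)\,\log \frac{u(y \mid x)}{p(y \mid x)}.
\]
% Regret: expected total cost of the online strategy minus the smallest
% total cost attainable with noncausal knowledge of the state costs.
\[
  R_T \;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} c_t(x_t, u_t)\right]
  \;-\; \min_{\pi}\, \mathbb{E}\!\left[\sum_{t=1}^{T} c_t\bigl(x_t^{\pi}, \pi(x_t^{\pi})\bigr)\right].
\]
```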
A Boosting Approach to Learning Graph Representations | cs.LG cs.SI stat.ML | Learning the right graph representation from noisy, multisource data has
garnered significant interest in recent years. A central aspect of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. We explore the extent to which different
quality measurements yield graph representations that are suitable for
community detection. We then present empirical results on both synthetic and
real datasets demonstrating the utility of this framework. Our framework leads
to suitable global graph representations from quality measurements local to
each edge. Finally, we discuss future extensions and theoretical considerations
of learning useful graph representations from weak feedback in general
application settings.
| Rajmonda Caceres, Kevin Carter, Jeremy Kun | null | 1401.3258 | null | null |
A Subband-Based SVM Front-End for Robust ASR | cs.CL cs.LG cs.SD | This work proposes a novel support vector machine (SVM) based robust
automatic speech recognition (ASR) front-end that operates on an ensemble of
the subband components of high-dimensional acoustic waveforms. The key issues
of selecting the appropriate SVM kernels for classification in frequency
subbands and the combination of individual subband classifiers using ensemble
methods are addressed. The proposed front-end is compared with state-of-the-art
ASR front-ends in terms of robustness to additive noise and linear filtering.
Experiments performed on the TIMIT phoneme classification task demonstrate the
benefits of the proposed subband based SVM front-end: it outperforms the
standard cepstral front-end in the presence of noise and linear filtering for
signal-to-noise ratios (SNRs) below 12 dB. A combination of the proposed
front-end with a conventional front-end such as MFCC yields further
improvements over the individual front-ends across the full range of noise
levels.
| Jibran Yousafzai and Zoran Cvetkovic and Peter Sollich and Matthew
Ager | null | 1401.3322 | null | null |
Learning Language from a Large (Unannotated) Corpus | cs.CL cs.LG | A novel approach to the fully automated, unsupervised extraction of
dependency grammars and associated syntax-to-semantic-relationship mappings
from large text corpora is described. The suggested approach builds on the
authors' prior work with the Link Grammar, RelEx and OpenCog systems, as well
as on a number of prior papers and approaches from the statistical language
learning literature. If successful, this approach would enable the mining of
all the information needed to power a natural language comprehension and
generation system, directly from a large, unannotated corpus.
| Linas Vepstas and Ben Goertzel | null | 1401.3372 | null | null |
Binary Classifier Calibration: Non-parametric approach | stat.ML cs.LG | Accurate calibration of learned probabilistic predictive models is critical
for many practical prediction and decision-making tasks. There are two main
categories of methods for building calibrated classifiers. One approach is to
develop methods for learning probabilistic models that are well calibrated ab
initio. The other approach is to use post-processing methods that transform
the output of a classifier so that it is well calibrated, such as
histogram binning, Platt scaling, and isotonic regression. One advantage of the
post-processing approach is that it can be applied to any existing
probabilistic classification model that was constructed using any
machine-learning method.
In this paper, we first introduce two measures for evaluating how well a
classifier is calibrated. We prove three theorems showing that, using a simple
histogram binning post-processing method, it is possible to make a classifier
well calibrated while retaining its discrimination capability. Also, by
casting the histogram binning method as a density-based non-parametric binary
classifier, we can extend it using two simple non-parametric density estimation
methods. We demonstrate the performance of the proposed calibration methods on
synthetic and real datasets. Experimental results show that the proposed
methods either outperform or are comparable to existing calibration methods.
| Mahdi Pakdaman Naeini, Gregory F. Cooper, Milos Hauskrecht | null | 1401.3390 | null | null |
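Since histogram binning is the post-processing method the theorems above build on, here is a minimal sketch of it: classifier scores from a calibration set are bucketed into equal-width bins, and each bin's calibrated output is the empirical positive rate inside that bin. The bin count and toy scores are illustrative assumptions, and the paper's density-based non-parametric extensions are not shown.

```python
# Sketch of histogram binning calibration: each bin maps raw scores to the
# empirical positive rate of calibration examples falling in that bin.
# Bin count and the toy miscalibrated scores are illustrative assumptions.
import numpy as np

def fit_histogram_binning(scores, labels, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    bin_prob = np.array([labels[idx == b].mean() if (idx == b).any() else 0.5
                         for b in range(n_bins)])
    def calibrate(s):
        j = np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)
        return bin_prob[j]
    return calibrate

rng = np.random.default_rng(0)
raw = rng.uniform(size=2000)                        # uncalibrated scores
y = (rng.uniform(size=2000) < raw ** 2).astype(int) # true event rate is raw^2
calibrate = fit_histogram_binning(raw, y)
print("raw 0.85 -> calibrated", round(float(calibrate(np.array([0.85]))[0]), 2))
```

A raw score of 0.85 lands in the [0.8, 0.9) bin, whose empirical positive rate is roughly the average of raw^2 over that bin (about 0.72), so the mapping corrects the systematic overconfidence of the toy scores.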
Low-Rank Modeling and Its Applications in Image Analysis | cs.CV cs.LG stat.ML | Low-rank modeling generally refers to a class of methods that solve problems
by representing variables of interest as low-rank matrices. It has achieved
great success in various fields including computer vision, data mining, signal
processing and bioinformatics. Recently, much progress has been made in
theories, algorithms and applications of low-rank modeling, such as exact
low-rank matrix recovery via convex programming and matrix completion applied
to collaborative filtering. These advances have drawn more and more
attention to this topic. In this paper, we review recent advances in
low-rank modeling, the state-of-the-art algorithms, and related applications in
image analysis. We first give an overview of the concept of low-rank modeling
and the challenging problems in this area. Then, we summarize the models and
algorithms for low-rank matrix recovery and illustrate their advantages and
limitations with numerical experiments. Next, we introduce a few applications
of low-rank modeling in the context of image analysis. Finally, we conclude
this paper with some discussions.
| Xiaowei Zhou, Can Yang, Hongyu Zhao, Weichuan Yu | null | 1401.3409 | null | null |
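As one concrete instance of the recovery algorithms surveyed here, below is a minimal sketch of matrix completion by iterative singular value soft-thresholding with the observed entries re-imposed at each step (a soft-impute-style iteration); the threshold, iteration count, and toy low-rank matrix are illustrative assumptions rather than any specific algorithm from the survey.

```python
# Sketch: matrix completion via iterative singular value soft-thresholding.
# Shrinks singular values (promoting low rank), then restores observations.
# Threshold, iterations, and toy rank-3 data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 50, 3
M = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # rank-3 ground truth
mask = rng.uniform(size=(m, n)) < 0.5                  # observe ~50% of entries

X, tau = np.where(mask, M, 0.0), 5.0                   # tau: shrinkage threshold
for _ in range(200):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X = (U * np.maximum(s - tau, 0.0)) @ Vt            # soft-threshold spectrum
    X[mask] = M[mask]                                  # keep observed entries

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(f"relative recovery error: {rel_err:.3f}")
```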