title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Predictive Interval Models for Non-parametric Regression | cs.LG stat.ML | Having a regression model, we are interested in finding two-sided intervals
that are guaranteed to contain at least a desired proportion of the conditional
distribution of the response variable given a specific combination of
predictors. We name such intervals predictive intervals. This work presents a
new method to find two-sided predictive intervals for non-parametric least
squares regression without the homoscedasticity assumption. Our predictive
intervals are built by using tolerance intervals on prediction errors in the
query point's neighborhood. We propose a predictive interval model test and
use it as a constraint in our hyper-parameter tuning algorithm. This yields an
algorithm that finds the smallest reliable predictive intervals for a given
dataset. We also introduce a measure for comparing interval prediction methods
that yield intervals of different size and coverage. Our experiments show that
our methods are more reliable, effective and precise than other interval
prediction methods.
| Mohammad Ghasemi Hamed, Mathieu Serrurier, Nicolas Durand | null | 1402.5874 | null | null |
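A minimal sketch of the general recipe described in the abstract above: a two-sided interval built from a tolerance interval on regression residuals in the query point's neighborhood. It assumes approximately Gaussian local residuals and uses Howe's approximation for the two-sided normal tolerance factor; the defaults and function names are illustrative, not the authors' implementation.

```python
# Sketch: two-sided predictive intervals from tolerance intervals on
# local (k-nearest-neighbour) regression residuals. Assumes residuals in
# the neighbourhood are approximately Gaussian; uses Howe's (1969)
# approximation for the two-sided normal tolerance factor.
import numpy as np
from scipy import stats

def tolerance_factor(n, coverage=0.9, confidence=0.95):
    """Approximate two-sided normal tolerance factor (Howe's method)."""
    z = stats.norm.ppf((1.0 + coverage) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df=n - 1)
    return z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)

def predictive_interval(x_query, X_train, residuals, predict, k=50,
                        coverage=0.9, confidence=0.95):
    """Interval around predict(x_query) built from the k nearest residuals."""
    d = np.linalg.norm(X_train - x_query, axis=1)
    local = residuals[np.argsort(d)[:k]]
    m, s = local.mean(), local.std(ddof=1)
    half = tolerance_factor(k, coverage, confidence) * s
    y_hat = predict(x_query)
    return y_hat + m - half, y_hat + m + half
```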
Manifold Gaussian Processes for Regression | stat.ML cs.LG | Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness
assumptions on the structure of the function to be modeled. To model complex
and non-differentiable functions, these smoothness assumptions are often too
restrictive. One way to alleviate this limitation is to find a different
representation of the data by introducing a feature space. This feature space
is often learned in an unsupervised way, which might lead to data
representations that are not useful for the overall regression task. In this
paper, we propose Manifold Gaussian Processes, a novel supervised method that
jointly learns a transformation of the data into a feature space and a GP
regression from the feature space to the observed space. The Manifold GP is a
full GP and allows learning data representations that are useful for the overall
regression task. As a proof-of-concept, we evaluate our approach on complex
non-smooth functions where standard GPs perform poorly, such as step functions
and robotics tasks with contacts.
| Roberto Calandra and Jan Peters and Carl Edward Rasmussen and Marc
Peter Deisenroth | null | 1402.5876 | null | null |
Near Optimal Bayesian Active Learning for Decision Making | cs.LG cs.AI | How should we gather information to make effective decisions? We address
Bayesian active learning and experimental design problems, where we
sequentially select tests to reduce uncertainty about a set of hypotheses.
Instead of minimizing uncertainty per se, we consider a set of overlapping
decision regions of these hypotheses. Our goal is to drive uncertainty into a
single decision region as quickly as possible.
We identify necessary and sufficient conditions for correctly identifying a
decision region that contains all hypotheses consistent with observations. We
develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove
that it is competitive with the intractable optimal policy. Our efficient
implementation of the algorithm relies on computing subsets of the complete
homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on
two practical applications: approximate comparison-based learning and active
localization using a robot manipulator.
| Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, J. Andrew
Bagnell, Siddhartha Srinivasa | null | 1402.5886 | null | null |
On Learning from Label Proportions | stat.ML cs.LG | Learning from Label Proportions (LLP) is a learning setting, where the
training data is provided in groups, or "bags", and only the proportion of each
class in each bag is known. The task is to learn a model to predict the class
labels of the individual instances. LLP has broad applications in political
science, marketing, healthcare, and computer vision. This work answers the
fundamental question of when and why LLP is possible by introducing a general
framework, Empirical Proportion Risk Minimization (EPRM). EPRM learns an
instance label classifier to match the given label proportions on the training
data. Our result is based on a two-step analysis. First, we provide a VC bound
on the generalization error of the bag proportions. We show that the bag sample
complexity is only mildly sensitive to the bag size. Second, we show that under
some mild assumptions, good bag proportion prediction guarantees good instance
label prediction. The results together provide a formal guarantee that the
individual labels can indeed be learned in the LLP setting. We discuss
applications of the analysis, including justification of LLP algorithms,
learning with population proportions, and a paradigm for learning algorithms
with privacy guarantees. We also demonstrate the feasibility of LLP based on a
case study in a real-world setting: predicting income based on census data.
| Felix X. Yu, Krzysztof Choromanski, Sanjiv Kumar, Tony Jebara, Shih-Fu
Chang | null | 1402.5902 | null | null |
Incremental Learning of Event Definitions with Inductive Logic
Programming | cs.LG cs.AI | Event recognition systems rely on properly engineered knowledge bases of
event definitions to infer occurrences of events in time. The manual
development of such knowledge is a tedious and error-prone task, thus
event-based applications may benefit from automated knowledge construction
techniques, such as Inductive Logic Programming (ILP), which combines machine
learning with the declarative and formal semantics of First-Order Logic.
However, learning the temporal logical formalisms typically utilized by
logic-based Event Recognition systems is a challenging task that most ILP
systems cannot fully undertake. In addition, event-based data is usually
massive and collected at different times and under various circumstances.
Ideally, systems that learn from temporal data should be able to operate in an
incremental mode, that is, revise prior constructed knowledge in the face of
new evidence. Most ILP systems are batch learners, in the sense that in order
to account for new evidence they have no alternative but to forget past
knowledge and learn from scratch. Given the inherent complexity of ILP and the
volumes of real-life temporal data, this results in algorithms that
scale poorly. In this work we present an incremental method for learning and
revising event-based knowledge, in the form of Event Calculus programs. The
proposed algorithm relies on abductive-inductive learning and comprises a
scalable clause refinement methodology, based on a compressive summarization of
clause coverage in a stream of examples. We present an empirical evaluation of
our approach on real and synthetic data from activity recognition and city
transport applications.
| Nikos Katzouris, Alexander Artikis, George Paliouras | null | 1402.5988 | null | null |
Open science in machine learning | cs.LG cs.DL | We present OpenML and mldata, open science platforms that provide easy
access to machine learning data, software and results to encourage further
study and application. They go beyond the more traditional repositories for
data sets and software packages in that they allow researchers to also easily
share the results they obtained in experiments and to compare their solutions
with those of others.
| Joaquin Vanschoren and Mikio L. Braun and Cheng Soon Ong | null | 1402.6013 | null | null |
Algorithms for multi-armed bandit problems | cs.AI cs.LG | Although many algorithms for the multi-armed bandit problem are
well-understood theoretically, empirical confirmation of their effectiveness is
generally scarce. This paper presents a thorough empirical study of the most
popular multi-armed bandit algorithms. Three important observations can be made
from our results. Firstly, simple heuristics such as epsilon-greedy and
Boltzmann exploration outperform theoretically sound algorithms on most
settings by a significant margin. Secondly, the performance of most algorithms
varies dramatically with the parameters of the bandit problem. Our study
identifies for each algorithm the settings where it performs well, and the
settings where it performs poorly. Thirdly, the algorithms' performance
relative to each other is affected only by the number of bandit arms and the
variance of the rewards. This finding may guide the design of subsequent
empirical evaluations. In the second part of the paper, we turn our attention
to an important area of application of bandit algorithms: clinical trials.
Although the design of clinical trials has been one of the principal practical
problems motivating research on multi-armed bandits, bandit algorithms have
never been evaluated as potential treatment allocation strategies. Using data
from a real study, we simulate the outcome that a 2001-2002 clinical trial
would have had if bandit algorithms had been used to allocate patients to
treatments. We find that an adaptive trial would have successfully treated at
least 50% more patients, while significantly reducing the number of adverse
effects and increasing patient retention. At the end of the trial, the best
treatment could have still been identified with a high level of statistical
confidence. Our findings demonstrate that bandit algorithms are attractive
alternatives to current adaptive treatment allocation strategies.
| Volodymyr Kuleshov and Doina Precup | null | 1402.6028 | null | null |
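Since the abstract above singles out epsilon-greedy and Boltzmann exploration as strong simple baselines, here is a minimal, self-contained sketch of both heuristics on a Bernoulli bandit; the arm means, epsilon and temperature are illustrative values, not the paper's experimental settings.

```python
# Sketch: epsilon-greedy and Boltzmann (softmax) exploration on a
# Bernoulli multi-armed bandit. Arm means, epsilon and temperature are
# illustrative defaults, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.2, 0.5, 0.7])          # true (unknown) arm means

def run(policy, steps=10_000, eps=0.1, tau=0.1):
    counts = np.zeros(len(means))
    values = np.zeros(len(means))          # empirical mean reward per arm
    total = 0.0
    for t in range(steps):
        if policy == "epsilon-greedy":
            if rng.random() < eps:
                arm = int(rng.integers(len(means)))
            else:
                arm = int(np.argmax(values))
        else:                              # Boltzmann exploration
            logits = values / tau
            p = np.exp(logits - logits.max())
            arm = int(rng.choice(len(means), p=p / p.sum()))
        r = float(rng.random() < means[arm])
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # running mean
        total += r
    return total / steps

for policy in ("epsilon-greedy", "boltzmann"):
    print(policy, run(policy))
```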
Machine Learning at Scale | cs.LG cs.MS stat.ML | It takes skill to build a meaningful predictive model even with the abundance
of implementations of modern machine learning algorithms and readily available
computing resources. Building a model becomes challenging if hundreds of
terabytes of data need to be processed to produce the training data set. In a
digital advertising technology setting, we are faced with the need to build
thousands of such models that predict user behavior and power advertising
campaigns in a 24/7 chaotic real-time production environment. As data
scientists, we also have to convince other internal departments critical to
implementation success, our management, and our customers that our machine
learning system works. In this paper, we present the details of the design and
implementation of an automated, robust machine learning platform that impacts
billions of advertising impressions monthly. This platform enables us to
continuously optimize thousands of campaigns over hundreds of millions of
users, on multiple continents, against varying performance objectives.
| Sergei Izrailev and Jeremy M. Stanley | null | 1402.6076 | null | null |
Inductive Logic Boosting | cs.LG cs.AI | Recent years have seen a surge of interest in Probabilistic Logic Programming
(PLP) and Statistical Relational Learning (SRL) models that combine logic with
probabilities. Structure learning of these systems is an intersection area of
Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP
cannot deal with probabilities, and SL cannot model relational hypotheses. The
biggest challenge in integrating these two machine learning frameworks is how
to estimate the probability of a logic clause only from the observation of
grounded logic atoms. Many current methods model a joint probability by
representing clauses as graphical models with literals as vertices. Such models
are still too complicated and can only be approximated by pseudo-likelihood.
We propose the Inductive Logic Boosting framework, which transforms the
relational dataset into a feature-based dataset, induces logic rules by
boosting Problog Rule Trees, and relaxes the independence constraint of
pseudo-likelihood. Experimental evaluation on benchmark datasets demonstrates
that the AUC-PR and AUC-ROC values of the learned rules are higher than those
of current state-of-the-art SRL methods.
| Wang-Zhou Dai and Zhi-Hua Zhou | null | 1402.6077 | null | null |
Bayesian Sample Size Determination of Vibration Signals in Machine
Learning Approach to Fault Diagnosis of Roller Bearings | stat.ML cs.LG | Sample size determination for a data set is an important statistical process
for analyzing the data to an optimum level of accuracy and using minimum
computational work. This process is applicable in every domain that deals with
large data sets and heavy computational work. This study
uses Bayesian analysis for determination of minimum sample size of vibration
signals to be considered for fault diagnosis of a bearing using pre-defined
parameters such as the inverse standard probability and the acceptable margin
of error. Thus an analytical formula for sample size determination is
introduced. The fault diagnosis of the bearing is done using a machine learning
approach using an entropy-based J48 algorithm. This method will help
researchers involved in fault diagnosis determine the minimum sample size
needed for good statistical stability and precision.
| Siddhant Sahu, V. Sugumaran | null | 1402.6133 | null | null |
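The abstract above builds its analytical sample-size formula from an inverse standard probability and an acceptable margin of error. The paper's exact Bayesian expression is not reproduced here; for reference, the classical formula of that shape is:

```latex
% Classical margin-of-error sample-size bound (reference form only; the
% paper's Bayesian variant may differ): n samples suffice for the
% estimate to lie within margin E at confidence level 1 - \alpha.
n \;\ge\; \left(\frac{z_{1-\alpha/2}\,\sigma}{E}\right)^{2},
\qquad z_{1-\alpha/2} = \Phi^{-1}\!\left(1-\tfrac{\alpha}{2}\right)
```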
Improving Collaborative Filtering based Recommenders using Topic
Modelling | cs.IR cs.CL cs.LG | Standard Collaborative Filtering (CF) algorithms make use of interactions
between users and items in the form of implicit or explicit ratings alone for
generating recommendations. Similarity among users or items is calculated
purely based on rating overlap, without considering explicit
properties of users or items involved, limiting their applicability in domains
with very sparse rating spaces. In many domains such as movies, news or
electronic commerce recommenders, considerable contextual data in text form
describing item properties is available along with the rating data, which could
be utilized to improve recommendation quality. In this paper, we propose a novel
approach to improve standard CF based recommenders by utilizing latent
Dirichlet allocation (LDA) to learn latent properties of items, expressed in
terms of topic proportions, derived from their textual description. We infer
each user's topic preferences or persona in the same latent space, based on her
historical ratings. While computing similarity between users, we make use of a
combined similarity measure involving rating overlap as well as similarity in
the latent topic space. This approach alleviates sparsity problem as it allows
calculation of similarity between users even if they have not rated any items
in common. Our experiments on multiple public datasets indicate that the
proposed hybrid approach significantly outperforms standard user-based and
item-based CF recommenders in terms of classification accuracy metrics such as
precision, recall and f-measure.
| Jobin Wilson, Santanu Chaudhury, Brejesh Lall, Prateek Kapadia | null | 1402.6238 | null | null |
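A minimal sketch of the blended similarity the abstract above describes: rating-overlap similarity combined with similarity in a latent topic space. It assumes per-user topic proportions have already been inferred (e.g. with LDA over the text of rated items); the blending weight alpha is an illustrative assumption.

```python
# Sketch: blended user-user similarity for hybrid CF. Assumes
# `ratings[u]` maps item -> rating and `topics[u]` is the user's
# (already inferred) topic-proportion vector. alpha is illustrative.
import numpy as np

def rating_similarity(ra, rb):
    """Cosine similarity over co-rated items; 0 if no overlap."""
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    a = np.array([ra[i] for i in common], dtype=float)
    b = np.array([rb[i] for i in common], dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def topic_similarity(ta, tb):
    """Cosine similarity between topic-proportion vectors."""
    return float(ta @ tb / (np.linalg.norm(ta) * np.linalg.norm(tb)))

def combined_similarity(ra, rb, ta, tb, alpha=0.5):
    # Even with zero rating overlap, topic similarity is still defined,
    # which is how this blend alleviates the sparsity problem.
    return alpha * rating_similarity(ra, rb) + (1 - alpha) * topic_similarity(ta, tb)
```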
Sample Complexity Bounds on Differentially Private Learning via
Communication Complexity | cs.DS cs.CC cs.LG | In this work we analyze the sample complexity of classification by
differentially private algorithms. Differential privacy is a strong and
well-studied notion of privacy introduced by Dwork et al. (2006) that ensures
that the output of an algorithm leaks little information about the data point
provided by any of the participating individuals. Sample complexity of private
PAC and agnostic learning was studied in a number of prior works starting with
(Kasiviswanathan et al., 2008) but a number of basic questions still remain
open, most notably whether learning with privacy requires more samples than
learning without privacy.
We show that the sample complexity of learning with (pure) differential
privacy can be arbitrarily higher than the sample complexity of learning
without the privacy constraint or the sample complexity of learning with
approximate differential privacy. Our second contribution and the main tool is
an equivalence between the sample complexity of (pure) differentially private
learning of a concept class $C$ (or $SCDP(C)$) and the randomized one-way
communication complexity of the evaluation problem for concepts from $C$. Using
this equivalence we prove the following bounds:
1. $SCDP(C) = \Omega(LDim(C))$, where $LDim(C)$ is the Littlestone's (1987)
dimension characterizing the number of mistakes in the online-mistake-bound
learning model. Known bounds on $LDim(C)$ then imply that $SCDP(C)$ can be much
higher than the VC-dimension of $C$.
2. For any $t$, there exists a class $C$ such that $LDim(C)=2$ but $SCDP(C)
\geq t$.
3. For any $t$, there exists a class $C$ such that the sample complexity of
(pure) $\alpha$-differentially private PAC learning is $\Omega(t/\alpha)$ but
the sample complexity of the relaxed $(\alpha,\beta)$-differentially private
PAC learning is $O(\log(1/\beta)/\alpha)$. This resolves an open problem of
Beimel et al. (2013b).
| Vitaly Feldman and David Xiao | null | 1402.6278 | null | null |
Oracle-Based Robust Optimization via Online Learning | math.OC cs.LG | Robust optimization is a common framework in optimization under uncertainty
when the problem parameters are not known, but it is rather known that the
parameters belong to some given uncertainty set. In the robust optimization
framework the problem solved is a min-max problem where a solution is judged
according to its performance on the worst possible realization of the
parameters. In many cases, a straightforward solution of the robust
optimization problem of a certain type requires solving an optimization problem
of a more complicated type, which in some cases is even NP-hard. For example,
solving a robust conic quadratic program under ellipsoidal uncertainty, such as
those arising in robust SVM, leads in general to a semidefinite program. In this
paper we develop a method for approximately solving a robust optimization
problem using tools from online convex optimization, where in every stage a
standard (non-robust) optimization program is solved. Our algorithms find an
approximate robust solution using a number of calls to an oracle that solves
the original (non-robust) problem that is inversely proportional to the square
of the target accuracy.
| Aharon Ben-Tal, Elad Hazan, Tomer Koren, Shie Mannor | null | 1402.6361 | null | null |
Renewable Energy Prediction using Weather Forecasts for Optimal
Scheduling in HPC Systems | cs.LG | The objective of the GreenPAD project is to use green energy (wind, solar and
biomass) for powering data-centers that are used to run HPC jobs. As part of
this, it is important to predict renewable (wind) energy for efficient
scheduling (executing jobs that require more energy when more green energy is
available, and vice versa). To predict the wind energy, we first analyze the
historical data to find a statistical model that relates wind energy to weather
attributes. Then we apply this model to the
weather forecast data to predict the green energy availability in the future.
Using the green energy prediction obtained from the statistical model we are
able to precompute job schedules for maximizing the green energy utilization in
the future. We propose a model which uses live weather data in addition to
machine learning techniques (which can predict future deviations in weather
conditions based on current deviations from the forecast) to make on-the-fly
changes to the precomputed schedule (based on green energy prediction).
For this we first analyze the data using histograms and simple statistical
tools such as correlation. In addition we build (correlation) regression model
for finding the relation between wind energy availability and weather
attributes (temperature, cloud cover, air pressure, wind speed / direction,
precipitation and sunshine). We also analyze different algorithms and machine
learning techniques for optimizing the job schedules for maximizing the green
energy utilization.
| Ankur Sahai | null | 1402.6552 | null | null |
Resourceful Contextual Bandits | cs.LG cs.DS cs.GT | We study contextual bandits with ancillary constraints on resources, which
are common in real-world applications such as choosing ads or dynamic pricing
of items. We design the first algorithm for solving these problems that handles
constrained resources other than time, and improves over a trivial reduction to
the non-contextual case. We consider very general settings for both contextual
bandits (arbitrary policy sets, e.g. Dudik et al. (UAI'11)) and bandits with
resource constraints (bandits with knapsacks, Badanidiyuru et al. (FOCS'13)),
and prove a regret guarantee with near-optimal statistical properties.
| Ashwinkumar Badanidiyuru and John Langford and Aleksandrs Slivkins | null | 1402.6779 | null | null |
Outlier Detection using Improved Genetic K-means | cs.LG cs.DB | The outlier detection problem in some cases is similar to the classification
problem. For example, the main concern of clustering-based outlier detection
algorithms is to find clusters and outliers, which are often regarded as noise
that should be removed in order to make more reliable clustering. In this
article, we present an algorithm that provides outlier detection and data
clustering simultaneously. The algorithm improves the estimation of the
centroids of the generative distribution during the process of clustering and
outlier discovery. The proposed algorithm consists of two stages. The first
stage runs the improved genetic k-means (IGK) algorithm, while the second
stage iteratively removes the vectors which are far from their cluster
centroids.
| M. H. Marghny, Ahmed I. Taloba | null | 1402.6859 | null | null |
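A minimal sketch of the second stage described above, iteratively removing the vectors farthest from their cluster centroids. Any k-means-style clustering can stand in for the paper's genetic k-means first stage; `n_outliers` and `per_iter` are illustrative parameters.

```python
# Sketch: iterative removal of points far from their centroids (second
# stage of the approach above). The clustering step is a stand-in for
# the paper's genetic k-means; `n_outliers` and `per_iter` are
# illustrative assumptions.
import numpy as np

def remove_outliers(X, centroids, n_outliers, per_iter=1):
    """Return (inlier indices, outlier indices)."""
    idx = np.arange(len(X))
    outliers = []
    while len(outliers) < n_outliers and len(idx) > 0:
        # Assign each remaining point to its nearest centroid.
        d = np.linalg.norm(X[idx][:, None] - centroids[None], axis=2)
        labels, dists = d.argmin(axis=1), d.min(axis=1)
        # Remove the points farthest from their centroids.
        worst = np.argsort(dists)[-per_iter:]
        outliers.extend(idx[worst].tolist())
        keep = np.ones(len(idx), dtype=bool)
        keep[worst] = False
        idx = idx[keep]
        # Recompute centroids from the surviving members.
        for k in range(len(centroids)):
            members = X[idx][labels[keep] == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    return idx, np.array(outliers)
```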
Sequential Complexity as a Descriptor for Musical Similarity | cs.IR cs.LG cs.SD | We propose string compressibility as a descriptor of temporal structure in
audio, for the purpose of determining musical similarity. Our descriptors are
based on computing track-wise compression rates of quantised audio features,
using multiple temporal resolutions and quantisation granularities. To verify
that our descriptors capture musically relevant information, we incorporate our
descriptors into similarity rating prediction and song year prediction tasks.
We base our evaluation on a dataset of 15500 track excerpts of Western popular
music, for which we obtain 7800 web-sourced pairwise similarity ratings. To
assess the agreement among similarity ratings, we perform an evaluation under
controlled conditions, obtaining a rank correlation of 0.33 between intersected
sets of ratings. Combined with bag-of-features descriptors, we obtain
performance gains of 31.1% and 10.9% for similarity rating prediction and song
year prediction. For both tasks, analysis of selected descriptors reveals that
representing features at multiple time scales benefits prediction accuracy.
| Peter Foster, Matthias Mauch and Simon Dixon | 10.1109/TASLP.2014.2357676 | 1402.6926 | null | null |
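A minimal sketch of the core descriptor in the abstract above: a track-wise compression rate over a quantized feature sequence. Here zlib stands in for whichever compressor is actually used, and the quantization granularity is an illustrative assumption.

```python
# Sketch: string-compressibility descriptor for a feature sequence.
# zlib stands in for the compressor; the number of quantization bins
# is an illustrative assumption.
import zlib
import numpy as np

def compression_rate(features, n_bins=16):
    """Compressed size / raw size of a quantized 1-D feature sequence."""
    lo, hi = features.min(), features.max()
    q = np.clip((features - lo) / (hi - lo + 1e-12) * n_bins, 0, n_bins - 1)
    raw = q.astype(np.uint8).tobytes()
    return len(zlib.compress(raw)) / len(raw)

# Highly repetitive sequences compress well (low rate); noisy ones do not.
t = np.linspace(0, 20 * np.pi, 4000)
print(compression_rate(np.sin(t)))                                    # structured
print(compression_rate(np.random.default_rng(0).normal(size=4000)))   # noisy
```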
Scalable methods for nonnegative matrix factorizations of near-separable
tall-and-skinny matrices | cs.LG cs.DC cs.NA stat.ML | Numerous algorithms are used for nonnegative matrix factorization under the
assumption that the matrix is nearly separable. In this paper, we show how to
make these algorithms efficient for data matrices that have many more rows than
columns, so-called "tall-and-skinny matrices". One key component to these
improved methods is an orthogonal matrix transformation that preserves the
separability of the NMF problem. Our final methods need a single pass over the
data matrix and are suitable for streaming, multi-core, and MapReduce
architectures. We demonstrate the efficacy of these algorithms on
terabyte-sized synthetic matrices and real-world matrices from scientific
computing and bioinformatics.
| Austin R. Benson, Jason D. Lee, Bartek Rajwa, David F. Gleich | null | 1402.6964 | null | null |
Marginalizing Corrupted Features | cs.LG | The goal of machine learning is to develop predictors that generalize well to
test data. Ideally, this is achieved by training on an almost infinitely large
training data set that captures all variations in the data distribution. In
practical learning settings, however, we do not have infinite data and our
predictors may overfit. Overfitting may be combatted, for example, by adding a
regularizer to the training objective or by defining a prior over the model
parameters and performing Bayesian inference. In this paper, we propose a
third, alternative approach to combat overfitting: we extend the training set
with infinitely many artificial training examples that are obtained by
corrupting the original training data. We show that this approach is practical
and efficient for a range of predictors and corruption models. Our approach,
called marginalized corrupted features (MCF), trains robust predictors by
minimizing the expected value of the loss function under the corruption model.
We show empirically on a variety of data sets that MCF classifiers can be
trained efficiently, may generalize substantially better to test data, and are
also more robust to feature deletion at test time.
| Laurens van der Maaten, Minmin Chen, Stephen Tyree and Kilian
Weinberger | null | 1402.7001 | null | null |
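For the quadratic loss, the expectation over corruptions that the abstract above minimizes has a closed form. Here is a minimal sketch under blankout (feature-zeroing) corruption; the learning rate, corruption level and epoch count are illustrative assumptions, not the paper's settings.

```python
# Sketch: marginalized corrupted features (MCF) for the quadratic loss
# under blankout noise (each feature zeroed independently with
# probability q). The expectation over corruptions is computed in
# closed form, so no corrupted copies are ever materialized:
#   E[(w.x~ - y)^2] = ((1-q) w.x - y)^2 + q(1-q) sum_d w_d^2 x_d^2
import numpy as np

def mcf_quadratic_fit(X, y, q=0.3, lr=1e-3, epochs=500):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        m = (1 - q) * (X @ w)                      # E[w . x~] per example
        # Gradient of sum_i E[(w . x~_i - y_i)^2]:
        grad = (2 * (1 - q) * X.T @ (m - y)
                + 2 * q * (1 - q) * (X ** 2).sum(axis=0) * w)
        w -= lr * grad / n
    return w
```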
Bayesian Multi-Scale Optimistic Optimization | stat.ML cs.LG | Bayesian optimization is a powerful global optimization technique for
expensive black-box functions. One of its shortcomings is that it requires
auxiliary optimization of an acquisition function at each iteration. This
auxiliary optimization can be costly and very hard to carry out in practice.
Moreover, it creates serious theoretical concerns, as most of the convergence
results assume that the exact optimum of the acquisition function can be found.
In this paper, we introduce a new technique for efficient global optimization
that combines Gaussian process confidence bounds and treed simultaneous
optimistic optimization to eliminate the need for auxiliary optimization of
acquisition functions. The experiments with global optimization benchmarks and
a novel application to automatic information extraction demonstrate that the
resulting technique is more efficient than the two approaches from which it
draws inspiration. Unlike most theoretical analyses of Bayesian optimization
with Gaussian processes, our finite-time convergence rate proofs do not require
exact optimization of an acquisition function. That is, our approach eliminates
the unsatisfactory assumption that a difficult, potentially NP-hard, problem
has to be solved in order to obtain vanishing regret rates.
| Ziyu Wang, Babak Shakibi, Lin Jin, Nando de Freitas | null | 1402.7005 | null | null |
Data-driven HRF estimation for encoding and decoding models | cs.CE cs.LG | Despite the common usage of a canonical, data-independent, hemodynamic
response function (HRF), it is known that the shape of the HRF varies across
brain regions and subjects. This suggests that a data-driven estimation of this
function could lead to more statistical power when modeling BOLD fMRI data.
However, unconstrained estimation of the HRF can yield highly unstable results
when the number of free parameters is large. We develop a method for the joint
estimation of activation and HRF using a rank constraint causing the estimated
HRF to be equal across events/conditions, yet permitting it to be different
across voxels. Model estimation leads to an optimization problem that we
propose to solve with an efficient quasi-Newton method exploiting fast gradient
computations. This model, called GLM with Rank-1 constraint (R1-GLM), can be
extended to the setting of GLM with separate designs which has been shown to
improve decoding accuracy in brain activity decoding experiments. We compare 10
different HRF modeling methods in terms of encoding and decoding score in two
different datasets. Our results show that the R1-GLM model significantly
outperforms competing methods in both encoding and decoding settings,
positioning it as an attractive method both from the points of view of accuracy
and computational efficiency.
| Fabian Pedregosa (INRIA Saclay - Ile de France, INRIA Paris -
Rocquencourt), Michael Eickenberg (INRIA Saclay - Ile de France, LNAO),
Philippe Ciuciu (INRIA Saclay - Ile de France, NEUROSPIN), Bertrand Thirion
(INRIA Saclay - Ile de France, NEUROSPIN), Alexandre Gramfort (LTCI) | 10.1016/j.neuroimage.2014.09.060 | 1402.7015 | null | null |
Exploiting the Statistics of Learning and Inference | cs.LG | When dealing with datasets containing a billion instances or with simulations
that require a supercomputer to execute, computational resources become part of
the equation. We can improve the efficiency of learning and inference by
exploiting their inherent statistical nature. We propose algorithms that
exploit the redundancy of data relative to a model by subsampling data-cases
for every update and reasoning about the uncertainty created in this process.
In the context of learning we propose to test for the probability that a
stochastically estimated gradient points more than 180 degrees in the wrong
direction. In the context of MCMC sampling we use stochastic gradients to
improve the efficiency of MCMC updates, and hypothesis tests based on adaptive
mini-batches to decide whether to accept or reject a proposed parameter update.
Finally, we argue that in the context of likelihood free MCMC one needs to
store all the information revealed by all simulations, for instance in a
Gaussian process. We conclude that Bayesian methods will continue to play a
crucial role in the era of big data and big simulations, but only if we
overcome a number of computational challenges.
| Max Welling | null | 1402.7025 | null | null |
An Incidence Geometry approach to Dictionary Learning | cs.LG stat.ML | We study the Dictionary Learning (aka Sparse Coding) problem of obtaining a
sparse representation of data points, by learning \emph{dictionary vectors}
upon which the data points can be written as sparse linear combinations. We
view this problem from a geometry perspective as the spanning set of a subspace
arrangement, and focus on understanding the case when the underlying hypergraph
of the subspace arrangement is specified. For this Fitted Dictionary Learning
problem, we completely characterize the combinatorics of the associated
subspace arrangements (i.e.\ their underlying hypergraphs). Specifically, a
combinatorial rigidity-type theorem is proven for a type of geometric incidence
system. The theorem characterizes the hypergraphs of subspace arrangements that
generically yield (a) at least one dictionary (b) a locally unique dictionary
(i.e.\ at most a finite number of isolated dictionaries) of the specified size.
We are unaware of prior application of combinatorial rigidity techniques in the
setting of Dictionary Learning, or even in machine learning. We also provide a
systematic classification of problems related to Dictionary Learning together
with various algorithms, their assumptions and performance.
| Meera Sitharam, Mohamad Tarifi, Menghan Wang | null | 1402.7344 | null | null |
Real-time Topic-aware Influence Maximization Using Preprocessing | cs.SI cs.LG | Influence maximization is the task of finding a set of seed nodes in a social
network such that the influence spread of these seed nodes based on certain
influence diffusion model is maximized. Topic-aware influence diffusion models
have recently been proposed to address the issue that influence between a pair
of users is often topic-dependent, and that the information, ideas, innovations,
etc. propagated in networks (referred to collectively as items in this paper) are
typically mixtures of topics. In this paper, we focus on the topic-aware
influence maximization task. In particular, we study preprocessing methods for
these topics to avoid redoing influence maximization for each item from
scratch. We explore two preprocessing algorithms with theoretical
justifications. Our empirical results on data obtained in a couple of existing
studies demonstrate that one of our algorithms stands out as a strong candidate
providing microsecond online response time and competitive influence spread,
with reasonable preprocessing effort.
| Wei Chen, Tian Lin, Cheng Yang | null | 1403.0057 | null | null |
Sleep Analytics and Online Selective Anomaly Detection | cs.LG | We introduce a new problem, the Online Selective Anomaly Detection (OSAD), to
model a specific scenario emerging from research in sleep science. Scientists
have segmented sleep into several stages and stage two is characterized by two
patterns (or anomalies) in the EEG time series recorded on sleep subjects.
These two patterns are sleep spindle (SS) and K-complex. The OSAD problem was
introduced to design a residual system, where all anomalies (known and unknown)
are detected but the system only triggers an alarm when non-SS anomalies
appear. The solution of the OSAD problem required us to combine techniques from
both machine learning and control theory. Experiments on data from real
subjects attest to the effectiveness of our approach.
| Tahereh Babaie, Sanjay Chawla, Romesh Abeysuriya | null | 1403.0156 | null | null |
Network Traffic Decomposition for Anomaly Detection | cs.LG cs.NI | In this paper we focus on the detection of network anomalies like Denial of
Service (DoS) attacks and port scans in a unified manner. While there has been
an extensive amount of research in network anomaly detection, current state of
the art methods are only able to detect one class of anomalies at the cost of
others. The key tool we will use is based on the spectral decomposition of a
trajectory/Hankel matrix, which is able to detect deviations from both the
between- and within-correlation structure present in the observed network
traffic data. Detailed experiments on synthetic and real network traces show a
significant
improvement in detection capability over competing approaches. In the process
we also address the issue of robustness of anomaly detection systems in a
principled fashion.
| Tahereh Babaie, Sanjay Chawla, Sebastien Ardon | null | 1403.0157 | null | null |
Cascading Randomized Weighted Majority: A New Online Ensemble Learning
Algorithm | stat.ML cs.LG | With the increasing volume of data in the world, the best approach for
learning from this data is to exploit an online learning algorithm. Online
ensemble methods are online algorithms which take advantage of an ensemble of
classifiers to predict labels of data. Prediction with expert advice is a
well-studied problem in the online ensemble learning literature. The Weighted
Majority algorithm and the randomized weighted majority (RWM) are the most
well-known solutions to this problem, aiming to converge to the best expert.
Since the best expert overall does not necessarily have the minimum error in
all regions of the data space, defining specific regions and converging to the
best expert in each of these regions will lead to a better result. In this
paper, we aim to resolve this defect of RWM algorithms by proposing a novel
online ensemble algorithm to the problem of prediction with expert advice. We
propose a cascading version of RWM to achieve not only better experimental
results but also a better error bound for sufficiently large datasets.
| Mohammadzaman Zamani, Hamid Beigy, and Amirreza Shaban | null | 1403.0388 | null | null |
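A minimal sketch of the randomized weighted majority (RWM) baseline the abstract above builds on: wrong experts are down-weighted multiplicatively, and the learner follows a randomly drawn expert; the penalty beta is an illustrative assumption.

```python
# Sketch: randomized weighted majority (RWM) over a pool of experts.
# Each expert that errs has its weight multiplied by beta; the learner
# follows an expert drawn with probability proportional to its weight.
import numpy as np

def rwm(expert_preds, labels, beta=0.9, seed=0):
    """expert_preds: (T, n_experts) array of 0/1 predictions."""
    rng = np.random.default_rng(seed)
    T, n = expert_preds.shape
    w = np.ones(n)
    mistakes = 0
    for t in range(T):
        chosen = rng.choice(n, p=w / w.sum())
        mistakes += int(expert_preds[t, chosen] != labels[t])
        w[expert_preds[t] != labels[t]] *= beta   # penalize wrong experts
    return mistakes
```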
Support Vector Machine Model for Currency Crisis Discrimination | cs.LG stat.ML | The Support Vector Machine (SVM) is a powerful classification technique based
on the idea of structural risk minimization. The use of a kernel function
enables the curse of dimensionality to be addressed. However, the proper kernel
function for a given problem depends on the specific dataset, and there is no
good general method for choosing it. In this paper, SVM is used to build
empirical models
of currency crisis in Argentina. An estimation technique is developed by
training model on real life data set which provides reasonably accurate model
outputs and helps policy makers to identify situations in which currency crisis
may happen. Third- and fourth-order polynomial kernels are generally the best
choice for achieving high generalization of classifier performance. SVM is a
mature technique, with algorithms that are simple, easy to implement, tolerant
of the curse of dimensionality, and empirically strong. The satisfactory
results show that the currency crisis situation is properly emulated using only
a small fraction of the database, and the model could serve as an evaluation
tool as well as an early warning system. To the best of our knowledge, this is
the first work on an SVM approach to currency crisis evaluation for Argentina.
| Arindam Chaudhuri | null | 1403.0481 | null | null |
The Structurally Smoothed Graphlet Kernel | cs.LG | A commonly used paradigm for representing graphs is to use a vector that
contains normalized frequencies of occurrence of certain motifs or sub-graphs.
This vector representation can be used in a variety of applications, such as,
for computing similarity between graphs. The graphlet kernel of Shervashidze et
al. [32] uses induced sub-graphs of k nodes (christened as graphlets by Przulj
[28]) as motifs in the vector representation, and computes the kernel via a dot
product between these vectors. One can easily show that this is a valid kernel
between graphs. However, such a vector representation suffers from a few
drawbacks. As k becomes larger we encounter the sparsity problem; most higher
order graphlets will not occur in a given graph. This leads to diagonal
dominance, that is, a given graph is similar to itself but not to any other
graph in the dataset. On the other hand, since lower order graphlets tend to be
more numerous, using lower values of k does not provide enough discrimination
ability. We propose a smoothing technique to tackle the above problems. Our
method is based on a novel extension of Kneser-Ney and Pitman-Yor smoothing
techniques from natural language processing to graphs. We use the relationships
between lower order and higher order graphlets in order to derive our method.
Consequently, our smoothing algorithm not only respects the dependency between
sub-graphs but also tackles the diagonal dominance problem by distributing the
probability mass across graphlets. In our experiments, the smoothed graphlet
kernel outperforms graph kernels based on raw frequency counts.
| Pinar Yanardag, S.V.N. Vishwanathan | null | 1403.0598 | null | null |
Unconstrained Online Linear Learning in Hilbert Spaces: Minimax
Algorithms and Normal Approximations | cs.LG | We study algorithms for online linear optimization in Hilbert spaces,
focusing on the case where the player is unconstrained. We develop a novel
characterization of a large class of minimax algorithms, recovering, and even
improving, several previous results as immediate corollaries. Moreover, using
our tools, we develop an algorithm that provides a regret bound of
$\mathcal{O}\Big(U \sqrt{T \log(U \sqrt{T} \log^2 T +1)}\Big)$, where $U$ is
the $L_2$ norm of an arbitrary comparator and both $T$ and $U$ are unknown to
the player. This bound is optimal up to $\sqrt{\log \log T}$ terms. When $T$ is
known, we derive an algorithm with an optimal regret bound (up to constant
factors). For both the known and unknown $T$ case, a Normal approximation to
the conditional value of the game proves to be the key analysis tool.
| H. Brendan McMahan and Francesco Orabona | null | 1403.0628 | null | null |
Multi-period Trading Prediction Markets with Connections to Machine
Learning | cs.GT cs.LG q-fin.TR stat.ML | We present a new model for prediction markets, in which we use risk measures
to model agents and introduce a market maker to describe the trading process.
This specific choice on modelling tools brings us mathematical convenience. The
analysis shows that the whole market effectively approaches a global objective,
even though the market is designed such that each agent only cares about its
own goal. Additionally, the market dynamics provide a sensible algorithm for
optimising the global objective. An intimate connection between machine
learning and our markets is thus established, such that we could 1) analyse a
market by applying machine learning methods to the global objective, and 2)
solve machine learning problems by setting up and running certain markets.
| Jinli Hu and Amos Storkey | null | 1403.0648 | null | null |
The Hidden Convexity of Spectral Clustering | cs.LG stat.ML | In recent years, spectral clustering has become a standard method for data
analysis used in a broad range of applications. In this paper we propose a new
class of algorithms for multiway spectral clustering based on optimization of a
certain "contrast function" over the unit sphere. These algorithms, partly
inspired by certain Independent Component Analysis techniques, are simple, easy
to implement and efficient.
Geometrically, the proposed algorithms can be interpreted as hidden basis
recovery by means of function optimization. We give a complete characterization
of the contrast functions admissible for provable basis recovery. We show how
these conditions can be interpreted as a "hidden convexity" of our optimization
problem on the sphere; interestingly, we use efficient convex maximization
rather than the more common convex minimization. We also show encouraging
experimental results on real and simulated data.
| James Voss, Mikhail Belkin, Luis Rademacher | null | 1403.0667 | null | null |
Fast Prediction with SVM Models Containing RBF Kernels | stat.ML cs.LG | We present an approximation scheme for support vector machine models that use
an RBF kernel. A second-order Maclaurin series approximation is used for
exponentials of inner products between support vectors and test instances. The
approximation is applicable to all kernel methods featuring sums of kernel
evaluations and makes no assumptions regarding data normalization. The
prediction speed of approximated models no longer relates to the amount of
support vectors but is quadratic in terms of the number of input dimensions. If
the number of input dimensions is small compared to the amount of support
vectors, the approximated model is significantly faster in prediction and has a
smaller memory footprint. An optimized C++ implementation was made to assess
the gain in prediction speed in a set of practical tests. We additionally
provide a method to verify the approximation accuracy, prior to training models
or during run-time, to ensure the loss in accuracy remains acceptable and
within known bounds.
| Marc Claesen, Frank De Smet, Johan A.K. Suykens, Bart De Moor | null | 1403.0736 | null | null |
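A minimal sketch of the approximation described above: writing exp(-g||x-s||^2) = exp(-g||x||^2) exp(-g||s||^2) exp(2g<x,s>) and expanding the last factor to second order collapses the support-vector sum into a precomputed scalar, vector and d-by-d matrix, so prediction cost becomes quadratic in the input dimension. Variable names and the sanity check are illustrative, not the paper's API.

```python
# Sketch: second-order Maclaurin approximation of an RBF-kernel SVM
# decision function, exp(z) ~ 1 + z + z^2/2 with z = 2*gamma*<x, s>.
# Prediction is O(d^2) regardless of the number of support vectors.
import numpy as np

def precompute(sv, alpha, gamma, b):
    c = alpha * np.exp(-gamma * (sv ** 2).sum(axis=1))   # per-SV weight
    c0 = c.sum()                                         # order-0 term
    v = c @ sv                                           # order-1 term
    M = (sv * c[:, None]).T @ sv                         # order-2 term
    return c0, v, M, gamma, b

def decision(x, model):
    c0, v, M, gamma, b = model
    z0 = np.exp(-gamma * (x @ x))
    return z0 * (c0 + 2 * gamma * (v @ x) + 2 * gamma ** 2 * (x @ M @ x)) + b

# Sanity check against the exact kernel expansion:
rng = np.random.default_rng(0)
sv, alpha = rng.normal(size=(200, 5)), rng.normal(size=200)
gamma, b, x = 0.05, 0.1, rng.normal(size=5)
exact = alpha @ np.exp(-gamma * ((sv - x) ** 2).sum(axis=1)) + b
print(exact, decision(x, precompute(sv, alpha, gamma, b)))
```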
EnsembleSVM: A Library for Ensemble Learning Using Support Vector
Machines | stat.ML cs.LG | EnsembleSVM is a free software package containing efficient routines to
perform ensemble learning with support vector machine (SVM) base models. It
currently offers ensemble methods based on binary SVM models. Our
implementation avoids duplicate storage and evaluation of support vectors which
are shared between constituent models. Experimental results show that using
ensemble approaches can drastically reduce training complexity while
maintaining high predictive accuracy. The EnsembleSVM software package is
freely available online at http://esat.kuleuven.be/stadius/ensemblesvm.
| Marc Claesen, Frank De Smet, Johan Suykens, Bart De Moor | null | 1403.0745 | null | null |
Multiview Hessian regularized logistic regression for action recognition | cs.CV cs.LG stat.ML | With the rapid development of social media sharing, people often need to
manage a growing volume of multimedia data through tasks such as large-scale
video classification and annotation, especially to organize videos containing
human activities. Recently, manifold regularized semi-supervised learning
(SSL), which explores the intrinsic data probability distribution and then
improves the generalization ability with only a small number of labeled data,
has emerged as a promising paradigm for semiautomatic video classification. In
addition, human action videos often have multi-modal content and different
representations. To tackle the above problems, in this paper we propose
multiview Hessian regularized logistic regression (mHLR) for human action
recognition. Compared with existing work, the advantages of mHLR are threefold:
(1) mHLR combines multiple Hessian regularizers, each obtained from a
particular representation of the instances, to better exploit local geometry;
(2) mHLR naturally handles multi-view instances with multiple representations;
(3) mHLR employs a smooth loss function and can therefore be effectively
optimized. We conduct extensive experiments on the
unstructured social activity attribute (USAA) dataset and the experimental
results demonstrate the effectiveness of the proposed multiview Hessian
regularized logistic regression for human action recognition.
| W. Liu, H. Liu, D. Tao, Y. Wang, Ke Lu | null | 1403.0829 | null | null |
Matroid Regression | math.ST cs.DM cs.LG stat.ME stat.ML stat.TH | We propose an algebraic combinatorial method for solving large sparse linear
systems of equations locally - that is, a method which can compute single
evaluations of the signal without computing the whole signal. The method scales
only with the sparsity of the system and not with its size, and provides
error estimates for any solution method. At the heart of our approach is the
so-called regression matroid, a combinatorial object associated with sparsity
patterns, which allows replacing inversion of the large matrix with the
inversion of a kernel matrix of constant size. We show that our method
provides the best linear unbiased estimator (BLUE) for this setting and the
minimum variance unbiased estimator (MVUE) under Gaussian noise assumptions,
and furthermore we show that the size of the kernel matrix which is to be
inverted can be traded off with accuracy.
| Franz J Kir\'aly and Louis Theran | null | 1403.0873 | null | null |
Dynamic stochastic blockmodels for time-evolving social networks | cs.SI cs.LG physics.soc-ph stat.ME | Significant efforts have gone into the development of statistical models for
analyzing data in the form of networks, such as social networks. Most existing
work has focused on modeling static networks, which represent either a single
time snapshot or an aggregate view over time. There has been recent interest in
statistical modeling of dynamic networks, which are observed at multiple points
in time and offer a richer representation of many complex phenomena. In this
paper, we present a state-space model for dynamic networks that extends the
well-known stochastic blockmodel for static networks to the dynamic setting. We
fit the model in a near-optimal manner using an extended Kalman filter (EKF)
augmented with a local search. We demonstrate that the EKF-based algorithm
performs competitively with a state-of-the-art algorithm based on Markov chain
Monte Carlo sampling but is significantly less computationally demanding.
| Kevin S. Xu and Alfred O. Hero III | 10.1109/JSTSP.2014.2310294 | 1403.0921 | null | null |
On learning to localize objects with minimal supervision | cs.CV cs.LG | Learning to localize objects with minimal supervision is an important problem
in computer vision, since large fully annotated datasets are extremely costly
to obtain. In this paper, we propose a new method that achieves this goal with
only image-level labels of whether the objects are present or not. Our approach
combines a discriminative submodular cover problem for automatically
discovering a set of positive object windows with a smoothed latent SVM
formulation. The latter allows us to leverage efficient quasi-Newton
optimization techniques. Our experiments demonstrate that the proposed approach
provides a 50% relative improvement in mean average precision over the current
state-of-the-art on PASCAL VOC 2007 detection.
| Hyun Oh Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid
Harchaoui, Trevor Darrell | null | 1403.1024 | null | null |
Estimating complex causal effects from incomplete observational data | stat.ME cs.LG math.ST stat.ML stat.TH | Despite the major advances taken in causal modeling, causality is still an
unfamiliar topic for many statisticians. In this paper, it is demonstrated from
the beginning to the end how causal effects can be estimated from observational
data assuming that the causal structure is known. To make the problem more
challenging, the causal effects are highly nonlinear and the data are missing
at random. The tools used in the estimation include causal models with design,
causal calculus, multiple imputation and generalized additive models. The main
message is that a trained statistician can estimate causal effects by
judiciously combining existing tools.
| Juha Karvanen | null | 1403.1124 | null | null |
Inducing Language Networks from Continuous Space Word Representations | cs.LG cs.CL cs.SI | Recent advancements in unsupervised feature learning have developed powerful
latent representations of words. However, it is still not clear what makes one
representation better than another and how we can learn the ideal
representation. Understanding the structure of latent spaces attained is key to
any future advancement in unsupervised learning. In this work, we introduce a
new view of continuous space word representations as language networks. We
explore two techniques to create language networks from learned features by
inducing them for two popular word representation methods and examining the
properties of their resulting networks. We find that the induced networks
differ from other methods of creating language networks, and that they contain
meaningful community structure.
| Bryan Perozzi, Rami Al-Rfou, Vivek Kulkarni, Steven Skiena | null | 1403.1252 | null | null |
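A minimal sketch of inducing a language network from continuous word representations as described above: link each word to its k nearest neighbours under cosine similarity; k and the toy embedding matrix are illustrative assumptions.

```python
# Sketch: induce a language network from word embeddings by linking
# each word to its k nearest neighbours under cosine similarity.
import numpy as np

def induce_network(words, vectors, k=3):
    """Return a set of undirected edges (word_a, word_b)."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = V @ V.T
    np.fill_diagonal(sims, -np.inf)        # no self-loops
    edges = set()
    for i, w in enumerate(words):
        for j in np.argsort(sims[i])[-k:]: # k most similar words
            edges.add(tuple(sorted((w, words[j]))))
    return edges

words = ["king", "queen", "man", "woman", "apple"]
vectors = np.random.default_rng(0).normal(size=(len(words), 50))
print(induce_network(words, vectors))
```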
Integer Programming Relaxations for Integrated Clustering and Outlier
Detection | cs.LG | In this paper we present methods for exemplar based clustering with outlier
selection based on the facility location formulation. Given a distance function
and the number of outliers to be found, the methods automatically determine the
number of clusters and outliers. We formulate the problem as an integer program
to which we present relaxations that allow for solutions that scale to large
data sets. The advantages of combining clustering and outlier selection
include: (i) the resulting clusters tend to be compact and semantically
coherent; (ii) the clusters are more robust against data perturbations; and
(iii) the outliers are contextualised by the clusters and more interpretable,
i.e. it
is easier to distinguish between outliers which are the result of data errors
from those that may be indicative of a new pattern emergent in the data. We
present and contrast three relaxations to the integer program formulation: (i)
a linear programming formulation (LP) (ii) an extension of affinity propagation
to outlier detection (APOC) and (iii) a Lagrangian duality based formulation
(LD). Evaluation on synthetic as well as real data shows the quality and
scalability of these different methods.
| Lionel Ott, Linsey Pang, Fabio Ramos, David Howe, Sanjay Chawla | null | 1403.1329 | null | null |
An Extensive Report on the Efficiency of AIS-INMACA (A Novel Integrated
MACA based Clonal Classifier for Protein Coding and Promoter Region
Prediction) | cs.CE cs.LG | This paper reports the efficiency of AIS-INMACA. AIS-INMACA has had a strong
impact on major problems in bioinformatics, such as protein region
identification and promoter region prediction, solving them in less time
(Pokkuluri Kiran Sree, 2014). AIS-INMACA now comes with several variations
(Pokkuluri Kiran Sree, 2014) aimed at establishing it as a bioinformatics tool
for solving many problems. This paper should therefore be useful to researchers
working at the intersection of bioinformatics and cellular automata.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu | null | 1403.1336 | null | null |
Deep Supervised and Convolutional Generative Stochastic Network for
Protein Secondary Structure Prediction | q-bio.QM cs.CE cs.LG | Predicting protein secondary structure is a fundamental problem in protein
structure prediction. Here we present a new supervised generative stochastic
network (GSN) based method to predict local secondary structure with deep
hierarchical representations. GSN is a recently proposed deep learning
technique (Bengio & Thibodeau-Laufer, 2013) to globally train deep generative
models. We present the supervised extension of GSN, which learns a Markov chain
to sample from a conditional distribution, and applied it to protein structure
prediction. To scale the model to full-sized, high-dimensional data, like
protein sequences with hundreds of amino acids, we introduce a convolutional
architecture, which allows efficient learning across multiple layers of
hierarchical representations. Our architecture uniquely focuses on predicting
structured low-level labels informed with both low and high-level
representations learned by the model. In our application this corresponds to
labeling the secondary structure state of each amino-acid residue. We trained
and tested the model on separate sets of non-homologous proteins sharing less
than 30% sequence identity. Our model achieves 66.4% Q8 accuracy on the CB513
dataset, better than the previously reported best performance 64.9% (Wang et
al., 2011) for this challenging secondary structure prediction problem.
| Jian Zhou and Olga G. Troyanskaya | null | 1403.1347 | null | null |
Collaborative Representation for Classification, Sparse or Non-sparse? | cs.CV cs.AI cs.LG | Sparse representation based classification (SRC) has been proved to be a
simple, effective and robust solution to face recognition. As it has grown
popular, doubts about the necessity of enforcing sparsity have arisen, and
preliminary experimental results showed that simply changing the $l_1$-norm
based regularization to the computationally much more efficient $l_2$-norm
based non-sparse version would lead to similar or even better performance.
However, that is not always the case. Given a new classification task, it is
still unclear which regularization strategy (i.e., making the coefficients
sparse or non-sparse) is the better choice without trying both for comparison.
In this paper, we present, to our knowledge, the first study addressing this issue,
based on plenty of diverse classification experiments. We propose a scoring
function for pre-selecting the regularization strategy using only the dataset
size, the feature dimensionality and a discrimination score derived from a
given feature representation. Moreover, we show that when dictionary learning
is taken into account, non-sparse representation is significantly superior to
sparse representation. This work is expected to enrich our
understanding of sparse/non-sparse collaborative representation for
classification and motivate further research activities.
| Yang Wu, Vansteenberge Jarich, Masayuki Mukunoki, and Michihiko Minoh | null | 1403.1353 | null | null |
Rate Prediction and Selection in LTE systems using Modified Source
Encoding Techniques | stat.AP cs.IT cs.LG math.IT | In current wireless systems, the base station (eNodeB) tries to serve its
user equipment (UE) at the highest possible rate that the UE can reliably
decode. The eNodeB obtains this rate information as quantized feedback from
the UE at time n and uses it for rate selection until the next feedback is
received at time n + {\delta}. The feedback received at n can become outdated
before n + {\delta}, because of a) Doppler fading, and b) Change in the set of
active interferers for a UE. Therefore rate prediction becomes essential.
Since the rates belong to a discrete set, we propose a discrete sequence
prediction approach wherein frequency trees for the discrete sequences are
built using source encoding algorithms like Prediction by Partial Match (PPM).
Finding the optimal depth of the frequency tree used for prediction is cast as
a model order selection problem. The rate sequence complexity is analysed to
provide an upper bound on model order. Information-theoretic criteria are then
used to solve the model order problem. Finally, two prediction algorithms are
proposed, using the PPM with optimal model order and system level simulations
demonstrate the improvement in packet loss and throughput due to these
algorithms.
| K.P. Saishankar, Sheetal Kalyani, K. Narendran | null | 1403.1412 | null | null |
Sparse Principal Component Analysis via Rotation and Truncation | cs.LG cs.CV stat.ML | Sparse principal component analysis (sparse PCA) aims at finding a sparse
basis to improve the interpretability over the dense basis of PCA, while the
sparse basis should still cover the data subspace as much as possible. In
contrast to most existing work, which deals with the problem by adding sparsity
penalties to various PCA objectives, in this paper we propose a new method,
SPCArt, whose motivation is to find a rotation matrix and a sparse
basis such that the sparse basis approximates the basis of PCA after the
rotation. The algorithm of SPCArt consists of three alternating steps: rotate
PCA basis, truncate small entries, and update the rotation matrix. Its
performance bounds are also given. SPCArt is efficient, with each iteration
scaling linearly with the data dimension. It is easy to choose parameters in
SPCArt, due to its explicit physical explanations. Besides, we give a unified
view to several existing sparse PCA methods and discuss the connection with
SPCArt. Some ideas in SPCArt are extended to GPower, a popular sparse PCA
algorithm, to overcome its drawback. Experimental results demonstrate that
SPCArt achieves the state-of-the-art performance. It also achieves a good
tradeoff among various criteria, including sparsity, explained variance,
orthogonality, balance of sparsity among loadings, and computational speed.
| Zhenfang Hu, Gang Pan, Yueming Wang, and Zhaohui Wu | null | 1403.1430 | null | null |
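Since the three alternating steps are stated abstractly, here is a minimal sketch of them in Python/NumPy under simple assumptions: hard thresholding with a fixed level `lam`, column renormalization, and an orthogonal Procrustes update for the rotation. The threshold value and toy data are illustrative, not the authors' exact configuration.

```python
import numpy as np

def spcart(V, lam=0.3, n_iter=50):
    """Sketch of SPCArt: find a rotation R and sparse basis X with X ~ V R.

    V   : (d, k) matrix whose columns are the leading PCA loadings.
    lam : hard-thresholding level (an illustrative choice).
    """
    d, k = V.shape
    R = np.eye(k)                        # start from the identity rotation
    for _ in range(n_iter):
        # Steps 1-2: rotate the PCA basis, then truncate small entries.
        Z = V @ R
        X = np.where(np.abs(Z) > lam, Z, 0.0)
        norms = np.linalg.norm(X, axis=0)
        X[:, norms > 0] /= norms[norms > 0]   # keep loadings unit-norm
        # Step 3: update the rotation by solving the orthogonal Procrustes
        # problem min_R ||X - V R||_F via an SVD of V^T X.
        U, _, Wt = np.linalg.svd(V.T @ X)
        R = U @ Wt
    return X, R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 10))
    _, _, Vt = np.linalg.svd(A - A.mean(0), full_matrices=False)
    X, R = spcart(Vt[:3].T)              # sparsify the top-3 loadings
    print("nonzeros per loading:", (X != 0).sum(axis=0))
```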
Collaborative Filtering with Information-Rich and Information-Sparse
Entities | stat.ML cs.IT cs.LG math.IT | In this paper, we consider a popular model for collaborative filtering in
recommender systems where some users of a website rate some items, such as
movies, and the goal is to recover the ratings of some or all of the unrated
items of each user. In particular, we consider both the clustering model, where
only users (or items) are clustered, and the co-clustering model, where both
users and items are clustered, and further, we assume that some users rate many
items (information-rich users) and some users rate only a few items
(information-sparse users). When users (or items) are clustered, our algorithm
can recover the rating matrix with $\omega(MK \log M)$ noisy entries while $MK$
entries are necessary, where $K$ is the number of clusters and $M$ is the
number of items. In the case of co-clustering, we prove that $K^2$ entries are
necessary for recovering the rating matrix, and our algorithm achieves this
lower bound within a logarithmic factor when $K$ is sufficiently large. We
compare our algorithms with a well-known algorithm called alternating
minimization (AM) and a similarity score-based algorithm known as the
popularity-among-friends (PAF) algorithm by applying all three to the MovieLens
and Netflix data sets. Our co-clustering algorithm and AM have similar overall
error rates when recovering the rating matrix, both of which are lower than the
error rate under PAF. But more importantly, the error rate of our co-clustering
algorithm is significantly lower than AM and PAF in the scenarios of interest
in recommender systems: when recommending a few items to each user or when
recommending items to users who only rated a few items (these users are the
majority of the total user population). The performance difference increases
even more when noise is added to the datasets.
| Kai Zhu, Rui Wu, Lei Ying, R. Srikant | null | 1403.1600 | null | null |
Statistical Structure Learning, Towards a Robust Smart Grid | cs.LG cs.SY | Robust control and maintenance of the grid relies on accurate data. Both PMUs
and state estimators are prone to false data injection attacks. Thus, it is
crucial to have a mechanism for fast and accurate detection of an agent
maliciously tampering with the data---for both preventing attacks that may lead
to blackouts, and for routine monitoring and control tasks of current and
future grids. We propose a decentralized false data injection detection scheme
based on Markov graph of the bus phase angles. We utilize the Conditional
Covariance Test (CCT) to learn the structure of the grid. Using the DC power
flow model, we show that under normal circumstances, and because of
walk-summability of the grid graph, the Markov graph of the voltage angles can
be determined by the power grid graph. Therefore, a discrepancy between
calculated Markov graph and learned structure should trigger the alarm. Local
grid topology is available online from the protection system and we exploit it
to check for mismatch. Should a mismatch be detected, we use correlation
anomaly score to detect the set of attacked nodes. Our method can detect the
most recent stealthy deception attack on the power grid that assumes knowledge
of bus-branch model of the system and is capable of deceiving the state
estimator, damaging power network observability, control, monitoring, demand
response and pricing schemes. Specifically, under the stealthy deception
attack, the Markov graph of phase angles changes. In addition to detecting a
state of attack, our method can identify the set of attacked nodes. To the best of our
knowledge, our remedy is the first to comprehensively detect this sophisticated
attack, and it does not need additional hardware. Moreover, our detection
scheme succeeds regardless of the size of the attacked subset. Simulations on
various power networks confirm our claims.
| Hanie Sedghi and Edmond Jonckheere | null | 1403.1863 | null | null |
Counterfactual Estimation and Optimization of Click Metrics for Search
Engines | cs.LG cs.AI stat.AP stat.ML | Optimizing an interactive system against a predefined online metric is
particularly challenging, when the metric is computed from user feedback such
as clicks and payments. The key challenge is the counterfactual nature: in the
case of Web search, any change to a component of the search engine may result
in a different search result page for the same query, but we normally cannot
infer reliably from search logs how users would react to the new result page.
Consequently, it appears impossible to accurately estimate online metrics that
depend on user feedback, unless the new engine is run to serve users and
compared with a baseline in an A/B test. This approach, while valid and
successful, is unfortunately expensive and time-consuming. In this paper, we
propose to address this problem using causal inference techniques, under the
contextual-bandit framework. This approach effectively allows one to run
(potentially infinitely) many A/B tests offline from search logs, making it
possible to estimate and optimize online metrics quickly and inexpensively.
Focusing on an important component in a commercial search engine, we show how
these ideas can be instantiated and applied, and obtain very promising results
that suggest the wide applicability of these techniques.
| Lihong Li and Shunbao Chen and Jim Kleban and Ankur Gupta | null | 1403.1891 | null | null |
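As a concrete illustration of the counterfactual machinery the abstract refers to, the sketch below computes a standard inverse-propensity-score (IPS) estimate of a new policy's click metric from logged data. The field layout, the uniformly random logging policy, and the simulated log are illustrative assumptions, not the paper's production setup.

```python
import numpy as np

def ips_estimate(logs, new_policy):
    """Offline (counterfactual) estimate of a new policy's expected reward.

    logs : list of (context, action, reward, logging_prob) tuples, where
           logging_prob = P(action | context) under the logging policy.
    new_policy : function context -> action (deterministic, for simplicity).
    """
    total = 0.0
    for context, action, reward, logging_prob in logs:
        if new_policy(context) == action:
            # Reweight matched records by the inverse propensity so the
            # average is unbiased for the new policy's online metric.
            total += reward / logging_prob
    return total / len(logs)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_actions = 5
    # Simulated search log: uniformly random logging policy over 5 actions.
    logs = []
    for _ in range(10000):
        context = int(rng.integers(0, 10))
        action = int(rng.integers(0, n_actions))
        click = float(rng.random() < 0.1 * (1 + (action == context % n_actions)))
        logs.append((context, action, click, 1.0 / n_actions))
    greedy = lambda c: c % n_actions   # a hypothetical new ranking component
    print("estimated click rate:", ips_estimate(logs, greedy))
```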
Becoming More Robust to Label Noise with Classifier Diversity | stat.ML cs.AI cs.LG | It is widely known in the machine learning community that class noise can be
(and often is) detrimental to inducing a model of the data. Many current
approaches use a single, often biased, measurement to determine if an instance
is noisy. A biased measure may work well on certain data sets, but it can also
be less effective on a broader set of data sets. In this paper, we present
noise identification using classifier diversity (NICD) -- a method for deriving
a less biased noise measurement and integrating it into the learning process.
To lessen the bias of the noise measure, NICD selects a diverse set of
classifiers (based on their predictions of novel instances) to determine which
instances are noisy. We examine NICD as a technique for filtering, instance
weighting, and selecting the base classifiers of a voting ensemble. We compare
NICD with several other noise handling techniques that do not consider
classifier diversity on a set of 54 data sets and 5 learning algorithms. NICD
significantly increases the classification accuracy over the other considered
approaches and is effective across a broad set of data sets and learning
algorithms.
| Michael R. Smith and Tony Martinez | null | 1403.1893 | null | null |
Predictive Overlapping Co-Clustering | cs.LG | In the past few years co-clustering has emerged as an important data mining
tool for two-way data analysis. Co-clustering has several advantages over
traditional one-dimensional clustering, such as the ability to find highly
correlated sub-groups of rows and columns. However, an overlooked benefit of
co-clustering is that it can also be used to extract meaningful knowledge for
other analysis tasks. For
example, building predictive models with high dimensional data and
heterogeneous populations is a non-trivial task. Co-clusters extracted from
such data, which show similar patterns along both dimensions, can be used to
build more accurate predictive models. Several applications, such as finding
patient-disease cohorts in health care analysis, finding user-genre groups in
recommendation systems and community detection problems can benefit from
co-clustering technique that utilizes the predictive power of the data to
generate co-clusters for improved data analysis.
In this paper, we present the novel idea of Predictive Overlapping
Co-Clustering (POCC) as an optimization problem for a more effective and
improved predictive analysis. Our algorithm generates optimal co-clusters by
maximizing predictive power of the co-clusters subject to the constraints on
the number of row and column clusters. In this paper precision, recall and
f-measure have been used as evaluation measures of the resulting co-clusters.
Results of our algorithm have been compared with two other well-known
techniques, K-means and spectral co-clustering, on four real data sets, namely
the Leukemia, Internet-Ads, Ovarian cancer and MovieLens data sets. The results demonstrate
the effectiveness and utility of our algorithm POCC in practice.
| Chandrima Sarkar, Jaideep Srivastava | null | 1403.1942 | null | null |
Multi-label ensemble based on variable pairwise constraint projection | cs.LG cs.CV stat.ML | Multi-label classification has attracted an increasing amount of attention in
recent years. To this end, many algorithms have been developed to classify
multi-label data in an effective manner. However, they usually do not consider
the pairwise relations indicated by sample labels, which actually play
important roles in multi-label classification. Inspired by this, we naturally
extend the traditional pairwise constraints to the multi-label scenario via a
flexible thresholding scheme. Moreover, to improve the generalization ability
of the classifier, we adopt a boosting-like strategy to construct a multi-label
ensemble from a group of base classifiers. To achieve these goals, this paper
presents a novel multi-label classification framework named Variable Pairwise
Constraint projection for Multi-label Ensemble (VPCME). Specifically, we take
advantage of the variable pairwise constraint projection to learn a
lower-dimensional data representation, which preserves the correlations between
samples and labels. Thereafter, the base classifiers are trained in the new
data space. For the boosting-like strategy, we employ both the variable
pairwise constraints and the bootstrap steps to diversify the base classifiers.
Empirical studies have shown the superiority of the proposed method in
comparison with other approaches.
| Ping Li and Hong Li and Min Wu | 10.1016/j.ins.2012.07.066 | 1403.1944 | null | null |
Improving Performance of a Group of Classification Algorithms Using
Resampling and Feature Selection | cs.LG | In recent years the importance of finding a meaningful pattern from huge
datasets has become more challenging. Data miners try to adopt innovative
methods to face this problem by applying feature selection methods. In this
paper we propose a new hybrid method in which we use a combination of
resampling, filtering the sample domain and wrapper subset evaluation method
with genetic search to reduce dimensions of Lung-Cancer dataset that we
received from the UCI Repository of Machine Learning databases. Finally, we
apply some well-known classification algorithms (Na\"ive Bayes, Logistic, Multilayer
Perceptron, Best First Decision Tree and JRIP) to the resulting dataset and
compare the results and prediction rates before and after the application of
our feature selection method on that dataset. The results show a substantial
progress in the average performance of five classification algorithms
simultaneously and the classification error for these classifiers decreases
considerably. The experiments also show that this method outperforms other
feature selection methods with a lower cost.
| Mehdi Naseriparsa, Amir-masoud Bidgoli, Touraj Varaee | null | 1403.1946 | null | null |
Combination of PCA with SMOTE Resampling to Boost the Prediction Rate in
Lung Cancer Dataset | cs.LG cs.CE | Classification algorithms are unable to build reliable models on very large
datasets. These datasets contain many irrelevant and redundant features that
mislead classifiers. Furthermore, many huge datasets have imbalanced class
distributions, which bias the classification process toward the majority
class. In this paper, a combination of unsupervised
dimensionality reduction methods with resampling is proposed and the results
are tested on Lung-Cancer dataset. In the first step PCA is applied on
Lung-Cancer dataset to compact the dataset and eliminate irrelevant features
and in the second step SMOTE resampling is carried out to balance the class
distribution and increase the variety of the sample domain. Finally, a Naive
Bayes classifier is applied to the resulting dataset, the results are compared,
and evaluation metrics are calculated. The experiments show the effectiveness of
the proposed method across four evaluation metrics: Overall accuracy, False
Positive Rate, Precision, Recall.
| Mehdi Naseriparsa, Mohammad Mansour Riahi Kashani | 10.5120/13376-0987 | 1403.1949 | null | null |
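A minimal sketch of the two-step pipeline described above, assuming scikit-learn and imbalanced-learn are available; the number of components, SMOTE settings, and synthetic data are illustrative stand-ins for the Lung-Cancer experiments.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for an imbalanced, high-dimensional dataset.
X, y = make_classification(n_samples=600, n_features=50, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: PCA compacts the feature space and drops redundant directions.
pca = PCA(n_components=10).fit(X_tr)
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

# Step 2: SMOTE balances the class distribution on the training split only.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr_p, y_tr)

# Final classifier: Naive Bayes on the reduced, balanced data.
clf = GaussianNB().fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te_p)))
```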
Categorization Axioms for Clustering Results | cs.LG | Cluster analysis has attracted more and more attention in the field of
machine learning and data mining. Numerous clustering algorithms have been
proposed and are being developed due to diverse theories and various
requirements of emerging applications. It is therefore well worth establishing
a unified axiomatic framework for data clustering. In the literature this is an
open problem that has proved very challenging. In this paper, clustering
results are axiomatized by assuming that a proper clustering result should
satisfy categorization axioms. The proposed axioms not only introduce
classification of clustering results and inequalities of clustering results,
but also are consistent with prototype theory and exemplar theory of
categorization models in cognitive science. Moreover, the proposed axioms lead
to three principles for designing clustering algorithms and cluster validity
indices, which many popular clustering algorithms and cluster validity indices
follow.
| Jian Yu, Zongben Xu | null | 1403.2065 | null | null |
Sublinear Models for Graphs | cs.LG cs.CV | This contribution extends linear models for feature vectors to sublinear
models for graphs and analyzes their properties. The results are (i) a
geometric interpretation of sublinear classifiers, (ii) a generic learning rule
based on the principle of empirical risk minimization, (iii) a convergence
theorem for the margin perceptron in the sublinearly separable case, and (iv)
the VC-dimension of sublinear functions. Empirical results on graph data show
that sublinear models on graphs have similar properties as linear models for
feature vectors.
| Brijnesh J. Jain | null | 1403.2295 | null | null |
A Hybrid Feature Selection Method to Improve Performance of a Group of
Classification Algorithms | cs.LG | In this paper a hybrid feature selection method is proposed which takes
advantage of wrapper subset evaluation at a lower cost and improves the
performance of a group of classifiers. The method uses combination of sample
domain filtering and resampling to refine the sample domain and two feature
subset evaluation methods to select reliable features. This method utilizes
both feature space and sample domain in two phases. The first phase filters and
resamples the sample domain and the second phase adopts a hybrid procedure by
information gain, wrapper subset evaluation and genetic search to find the
optimal feature space. Experiments were carried out on different types of
datasets from the UCI Repository of Machine Learning databases, and the results show a rise
in the average performance of five classifiers (Naive Bayes, Logistic,
Multilayer Perceptron, Best First Decision Tree and JRIP) simultaneously and
the classification error for these classifiers decreases considerably. The
experiments also show that this method outperforms other feature selection
methods with a lower cost.
| Mehdi Naseriparsa, Amir-Masoud Bidgoli, Touraj Varaee | 10.5120/12065-8172 | 1403.2372 | null | null |
Generalised Mixability, Constant Regret, and Bayesian Updating | cs.LG stat.ML | Mixability of a loss is known to characterise when constant regret bounds are
achievable in games of prediction with expert advice through the use of Vovk's
aggregating algorithm. We provide a new interpretation of mixability via convex
analysis that highlights the role of the Kullback-Leibler divergence in its
definition. This naturally generalises to what we call $\Phi$-mixability where
the Bregman divergence $D_\Phi$ replaces the KL divergence. We prove that
losses that are $\Phi$-mixable also enjoy constant regret bounds via a
generalised aggregating algorithm that is similar to mirror descent.
| Mark D. Reid and Rafael M. Frongillo and Robert C. Williamson | null | 1403.2433 | null | null |
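Since mixability can feel abstract, here is a minimal sketch of Vovk's aggregating algorithm for the binary log loss, which is 1-mixable and therefore enjoys a constant (log K) regret bound. This is the classical KL-mixability case rather than the paper's generalized $\Phi$-mixability, and the expert streams and eta = 1 are illustrative assumptions.

```python
import numpy as np

def aggregating_algorithm(expert_preds, outcomes, eta=1.0):
    """Vovk's aggregating algorithm for binary log loss (1-mixable).

    expert_preds : (T, K) array of expert probabilities for outcome 1.
    outcomes     : (T,) array of observed outcomes in {0, 1}.
    Returns the algorithm's total loss and the best expert's total loss.
    """
    T, K = expert_preds.shape
    log_w = np.zeros(K)                  # log-weights over experts
    total_loss = 0.0
    for t in range(T):
        p = expert_preds[t]
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # For log loss with eta = 1 the aggregated prediction is the
        # weighted mean, and its loss equals the mix loss exactly.
        p_hat = float(w @ p)
        y = outcomes[t]
        total_loss += -np.log(p_hat if y == 1 else 1.0 - p_hat)
        # Exponential-weights update with the experts' log losses.
        losses = -np.log(np.where(y == 1, p, 1.0 - p))
        log_w -= eta * losses
    expert_losses = -np.log(np.where(outcomes[:, None] == 1,
                                     expert_preds, 1.0 - expert_preds)).sum(0)
    return total_loss, expert_losses.min()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, K = 1000, 5
    preds = rng.uniform(0.05, 0.95, size=(T, K))
    ys = (rng.random(T) < preds[:, 0]).astype(int)   # expert 0 is reliable
    alg, best = aggregating_algorithm(preds, ys)
    print(f"algorithm {alg:.1f}, best expert {best:.1f}, log K = {np.log(K):.2f}")
```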
Transfer Learning across Networks for Collective Classification | cs.LG cs.SI | This paper addresses the problem of transferring useful knowledge from a
source network to predict node labels in a newly formed target network. While
existing transfer learning research has primarily focused on vector-based data,
in which the instances are assumed to be independent and identically
distributed, how to effectively transfer knowledge across different information
networks has not been well studied, mainly because networks may have their
distinct node features and link relationships between nodes. In this paper, we
propose a new transfer learning algorithm that attempts to transfer common
latent structure features across the source and target networks. The proposed
algorithm discovers these latent features by constructing label propagation
matrices in the source and target networks, and mapping them into a shared
latent feature space. The latent features capture common structure patterns
shared by two networks, and serve as domain-independent features to be
transferred between networks. Together with domain-dependent node features, we
thereafter propose an iterative classification algorithm that leverages label
correlations to predict node labels in the target network. Experiments on
real-world networks demonstrate that our proposed algorithm can successfully
achieve knowledge transfer between networks to help improve the accuracy of
classifying nodes in the target network.
| Meng Fang, Jie Yin, Xingquan Zhu | 10.1109/ICDM.2013.116 | 1403.2484 | null | null |
Optimal interval clustering: Application to Bregman clustering and
statistical mixture learning | cs.IT cs.LG math.IT | We present a generic dynamic programming method to compute the optimal
clustering of $n$ scalar elements into $k$ pairwise disjoint intervals. This
case includes 1D Euclidean $k$-means, $k$-medoids, $k$-medians, $k$-centers,
etc. We extend the method to incorporate cluster size constraints and show how
to choose the appropriate $k$ by model selection. Finally, we illustrate and
refine the method on two case studies: Bregman clustering and statistical
mixture learning maximizing the complete likelihood.
| Frank Nielsen and Richard Nock | null | 1403.2485 | null | null |
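As a concrete instance of the generic dynamic program described above, the sketch below solves 1D Euclidean k-means exactly over contiguous intervals; prefix sums make each interval cost O(1), and this plain O(k n^2) version omits the paper's size constraints and model selection (the toy data are illustrative).

```python
import numpy as np

def interval_kmeans(x, k):
    """Optimal clustering of scalars into k contiguous intervals.

    Minimizes the 1D k-means cost exactly by dynamic programming.
    Returns (optimal cost, list of (start, end) index intervals).
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])       # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])   # prefix sums of squares

    def cost(i, j):  # sum of squared deviations of x[i:j] from its mean
        m, sq = s1[j] - s1[i], s2[j] - s2[i]
        return sq - m * m / (j - i)

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)   # dp[c][j]: best cost of x[:j], c parts
    dp[0][0] = 0.0
    cut = np.zeros((k + 1, n + 1), dtype=int)
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                v = dp[c - 1][i] + cost(i, j)
                if v < dp[c][j]:
                    dp[c][j], cut[c][j] = v, i
    intervals, j = [], n
    for c in range(k, 0, -1):           # backtrack the optimal cut points
        i = cut[c][j]
        intervals.append((i, j))
        j = i
    return dp[k][n], intervals[::-1]

if __name__ == "__main__":
    data = [1.0, 1.1, 1.2, 5.0, 5.1, 9.7, 10.0, 10.2]
    print(interval_kmeans(data, 3))
```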
Flying Insect Classification with Inexpensive Sensors | cs.LG cs.CE | The ability to use inexpensive, noninvasive sensors to accurately classify
flying insects would have significant implications for entomological research,
and allow for the development of many useful applications in vector control for
both medical and agricultural entomology. Given this, the last sixty years have
seen many research efforts on this task. To date, however, none of this
research has had a lasting impact. In this work, we explain this lack of
progress. We attribute the stagnation on this problem to several factors,
including the use of acoustic sensing devices, the over-reliance on the single
feature of wingbeat frequency, and the attempts to learn complex models with
relatively little data. In contrast, we show that pseudo-acoustic optical
sensors can produce vastly superior data, that we can exploit additional
features, both intrinsic and extrinsic to the insect's flight behavior, and
that a Bayesian classification approach allows us to efficiently learn
classification models that are very robust to over-fitting. We demonstrate our
findings with large scale experiments that dwarf all previous works combined,
as measured by the number of insects and the number of species considered.
| Yanping Chen, Adena Why, Gustavo Batista, Agenor Mafra-Neto, Eamonn
Keogh | null | 1403.2654 | null | null |
Robust and Scalable Bayes via a Median of Subset Posterior Measures | math.ST cs.DC cs.LG stat.TH | We propose a novel approach to Bayesian analysis that is provably robust to
outliers in the data and often has computational advantages over standard
methods. Our technique is based on splitting the data into non-overlapping
subgroups, evaluating the posterior distribution given each independent
subgroup, and then combining the resulting measures. The main novelty of our
approach is the proposed aggregation step, which is based on the evaluation of
a median in the space of probability measures equipped with a suitable
collection of distances that can be quickly and efficiently evaluated in
practice. We present both theoretical and numerical evidence illustrating the
improvements achieved by our method.
| Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin and David B. Dunson | null | 1403.2660 | null | null |
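The aggregation step can be sketched in a few lines: run inference on each disjoint subset, then combine via a median. The simplification below takes the geometric median of subset posterior mean vectors with Weiszfeld's fixed-point iteration, whereas the paper's median lives in a space of probability measures; the Gaussian subset posteriors and contaminated data are illustrative assumptions.

```python
import numpy as np

def geometric_median(points, n_iter=100, tol=1e-9):
    """Weiszfeld's algorithm for the geometric median of row vectors."""
    y = points.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - y, axis=1), tol)
        w = 1.0 / d                      # inverse-distance weights
        y_new = (w[:, None] * points).sum(0) / w.sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_theta = np.array([2.0, -1.0])
    data = true_theta + rng.normal(size=(10000, 2))
    # Split into disjoint subsets; one subset is grossly contaminated.
    subsets = np.array_split(data, 10)
    subsets[0] = subsets[0] + 50.0
    # Subset "posterior" summaries (here: posterior means under a flat prior).
    means = np.array([s.mean(axis=0) for s in subsets])
    print("mean of means   :", means.mean(axis=0))       # dragged by outliers
    print("geometric median:", geometric_median(means))  # robust
```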
Learning Deep Face Representation | cs.CV cs.LG | Face representation is a crucial step of face recognition systems. An optimal
face representation should be discriminative, robust, compact, and very
easy-to-implement. While numerous hand-crafted and learning-based
representations have been proposed, considerable room for improvement is still
present. In this paper, we present a very easy-to-implement deep learning
framework for face representation. Our method is based on a new deep network
structure called Pyramid CNN. The proposed Pyramid CNN adopts a
greedy-filter-and-down-sample operation, which enables the training procedure
to be very fast and computation-efficient. In addition, the structure of
Pyramid CNN can naturally incorporate feature sharing across multi-scale face
representations, increasing the discriminative ability of resulting
representation. Our basic network is capable of achieving high recognition
accuracy ($85.8\%$ on the LFW benchmark) with only an 8-dimensional representation. When
extended to feature-sharing Pyramid CNN, our system achieves the
state-of-the-art performance ($97.3\%$) on the LFW benchmark. We also introduce
a new benchmark of realistic face images from social networks and validate that
our proposed representation generalizes well.
| Haoqiang Fan, Zhimin Cao, Yuning Jiang, Qi Yin, Chinchilla Doudou | null | 1403.2802 | null | null |
A survey of dimensionality reduction techniques | stat.ML cs.LG q-bio.QM | Experimental life sciences like biology or chemistry have seen in the recent
decades an explosion of the data available from experiments. Laboratory
instruments have become more and more complex, reporting hundreds or thousands
of measurements for a single experiment, and therefore statistical methods face
challenging tasks when dealing with such high-dimensional data. However, much
of the data is highly redundant and can be efficiently brought down to a much
smaller number of variables without a significant loss of information. The
mathematical procedures making this reduction possible are called
dimensionality reduction techniques; they have been widely developed in fields
like statistics and machine learning, and are currently a hot research topic. In
this review we categorize the plethora of dimension reduction techniques
available and give the mathematical insight behind them.
| C.O.S. Sorzano, J. Vargas, A. Pascual Montano | null | 1403.2877 | null | null |
Cancer Prognosis Prediction Using Balanced Stratified Sampling | cs.LG | High accuracy in cancer prediction is important to improve the quality of the
treatment and to improve the rate of patient survival. As data volumes increase
rapidly in healthcare research, the analytical challenge grows twofold. The use
of effective sampling techniques in classification algorithms consistently
yields good prediction accuracy. The SEER
public use cancer database provides various prominent class labels for
prognosis prediction. The main objective of this paper is to find the effect of
sampling techniques in classifying the prognosis variable and propose an ideal
sampling method based on the outcome of the experimentation. In the first phase
of this work the traditional random sampling and stratified sampling techniques
have been used. At the next level the balanced stratified sampling with
variations as per the choice of the prognosis class labels have been tested.
Much of the initial effort was devoted to preprocessing the SEER data set. The
classification model for experimentation has been built
using the breast cancer, respiratory cancer and mixed cancer data sets with
three traditional classifiers namely Decision Tree, Naive Bayes and K-Nearest
Neighbor. The three prognosis factors survival, stage and metastasis have been
used as class labels for experimental comparisons. The results show a steady
increase in the prediction accuracy of the balanced stratified model as the
sample size increases, whereas the traditional approaches fluctuate before
reaching their optimum results.
| J S Saleema, N Bhagawathi, S Monica, P Deepa Shenoy, K R Venugopal and
L M Patnaik | 10.5121/ijscai.2014.3102 | 1403.2950 | null | null |
Statistical Decision Making for Optimal Budget Allocation in Crowd
Labeling | cs.LG math.OC stat.ML | In crowd labeling, a large amount of unlabeled data instances are outsourced
to a crowd of workers. Workers will be paid for each label they provide, but
the labeling requester usually has only a limited amount of the budget. Since
data instances have different levels of labeling difficulty and workers have
different reliability, it is desirable to have an optimal policy to allocate
the budget among all instance-worker pairs such that the overall labeling
accuracy is maximized. We consider categorical labeling tasks and formulate the
budget allocation problem as a Bayesian Markov decision process (MDP), which
simultaneously conducts learning and decision making. Using the dynamic
programming (DP) recurrence, one can obtain the optimal allocation policy.
However, DP quickly becomes computationally intractable when the size of the
problem increases. To solve this challenge, we propose a computationally
efficient approximate policy, called optimistic knowledge gradient policy. Our
MDP is a quite general framework, which applies to both pull crowdsourcing
marketplaces with homogeneous workers and push marketplaces with heterogeneous
workers. It can also incorporate the contextual information of instances when
they are available. The experiments on both simulated and real data show that
the proposed policy achieves a higher labeling accuracy than other existing
policies at the same budget level.
| Xi Chen, Qihang Lin, Dengyong Zhou | null | 1403.3080 | null | null |
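A minimal sketch of the optimistic knowledge gradient idea for binary labels: each instance keeps a Beta posterior, the stage reward is the posterior probability that its majority label is correct, and the next unit of budget goes to the instance whose best-case one-step reward gain is largest. The worker reliability, priors, and simulated pool are illustrative assumptions rather than the paper's exact model.

```python
import numpy as np

def h(a, b):
    """Expected accuracy of the majority label under a Beta(a, b) posterior."""
    return max(a, b) / (a + b)

def optimistic_kg(true_probs, budget, worker_acc=0.8, seed=0):
    """Allocate a labeling budget over instances with Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)
    n = len(true_probs)
    a, b = np.ones(n), np.ones(n)       # positive / negative pseudo-counts
    for _ in range(budget):
        # Optimistic knowledge gradient: best-case one-step improvement.
        scores = [max(h(a[i] + 1, b[i]) - h(a[i], b[i]),
                      h(a[i], b[i] + 1) - h(a[i], b[i])) for i in range(n)]
        i = int(np.argmax(scores))
        # Simulate a noisy worker label for the selected instance.
        truth = rng.random() < true_probs[i]
        label = truth if rng.random() < worker_acc else not truth
        if label:
            a[i] += 1
        else:
            b[i] += 1
    return np.where(a >= b, 1, 0)       # final majority labels

if __name__ == "__main__":
    probs = np.array([0.9, 0.8, 0.55, 0.45, 0.1])   # easy and hard instances
    print("predicted labels:", optimistic_kg(probs, budget=200))
```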
Sparse Recovery with Linear and Nonlinear Observations: Dependent and
Noisy Data | cs.IT cs.LG math.IT math.ST stat.TH | We formulate sparse support recovery as a salient set identification problem
and use information-theoretic analyses to characterize the recovery performance
and sample complexity. We consider a very general model where we are not
restricted to linear models or specific distributions. We state non-asymptotic
bounds on recovery probability and a tight mutual information formula for
sample complexity. We evaluate our bounds for applications such as sparse
linear regression and explicitly characterize effects of correlation or noisy
features on recovery performance. We show improvements upon previous work and
identify gaps between the performance of recovery algorithms and fundamental
information.
| Cem Aksoylar and Venkatesh Saligrama | null | 1403.3109 | null | null |
The Potential Benefits of Filtering Versus Hyper-Parameter Optimization | stat.ML cs.LG | The quality of an induced model by a learning algorithm is dependent on the
quality of the training data and the hyper-parameters supplied to the learning
algorithm. Prior work has shown that improving the quality of the training data
(i.e., by removing low quality instances) or tuning the learning algorithm
hyper-parameters can significantly improve the quality of an induced model. A
comparison of the two methods is lacking though. In this paper, we estimate and
compare the potential benefits of filtering and hyper-parameter optimization.
Estimating the potential benefit yields an overly optimistic estimate, but it
also empirically approximates the maximum potential benefit of each method. We
find that, while both significantly improve the induced model,
improving the quality of the training set has a greater potential effect than
hyper-parameter optimization.
| Michael R. Smith and Tony Martinez and Christophe Giraud-Carrier | null | 1403.3342 | null | null |
Spectral Correlation Hub Screening of Multivariate Time Series | stat.OT cs.LG stat.AP | This chapter discusses correlation analysis of stationary multivariate
Gaussian time series in the spectral or Fourier domain. The goal is to identify
the hub time series, i.e., those that are highly correlated with a specified
number of other time series. We show that Fourier components of the time series
at different frequencies are asymptotically statistically independent. This
property permits independent correlation analysis at each frequency,
alleviating the computational and statistical challenges of high-dimensional
time series. To detect correlation hubs at each frequency, an existing
correlation screening method is extended to the complex numbers to accommodate
complex-valued Fourier components. We characterize the number of hub
discoveries at specified correlation and degree thresholds in the regime of
increasing dimension and fixed sample size. The theory specifies appropriate
thresholds to apply to sample correlation matrices to detect hubs and also
allows statistical significance to be attributed to hub discoveries. Numerical
results illustrate the accuracy of the theory and the usefulness of the
proposed spectral framework.
| Hamed Firouzi, Dennis Wei, Alfred O. Hero III | null | 1403.3371 | null | null |
Box Drawings for Learning with Imbalanced Data | stat.ML cs.LG | The vast majority of real world classification problems are imbalanced,
meaning there are far fewer data from the class of interest (the positive
class) than from other classes. We propose two machine learning algorithms to
handle highly imbalanced classification problems. The classifiers constructed
by both methods are created as unions of axis-parallel rectangles around the
positive examples, and thus have the benefit of being interpretable. The first
algorithm uses mixed integer programming to optimize a weighted balance between
positive and negative class accuracies. Regularization is introduced to improve
generalization performance. The second method uses an approximation in order to
assist with scalability. Specifically, it follows a \textit{characterize then
discriminate} approach, where the positive class is characterized first by
boxes, and then each box boundary becomes a separate discriminative classifier.
This method has the computational advantages that it can be easily
parallelized, and considers only the relevant regions of feature space.
| Siong Thye Goh, Cynthia Rudin | null | 1403.3378 | null | null |
Scalable and Robust Construction of Topical Hierarchies | cs.LG cs.CL cs.DB cs.IR | Automated generation of high-quality topical hierarchies for a text
collection is a dream problem in knowledge engineering with many valuable
applications. In this paper a scalable and robust algorithm is proposed for
constructing a hierarchy of topics from a text collection. We divide and
conquer the problem using a top-down recursive framework, based on a tensor
orthogonal decomposition technique. We solve a critical challenge to perform
scalable inference for our newly designed hierarchical topic model. Experiments
with various real-world datasets illustrate its ability to generate robust,
high-quality hierarchies efficiently. Our method reduces the time of
construction by several orders of magnitude, and its robust feature renders it
possible for users to interactively revise the hierarchy.
| Chi Wang, Xueqing Liu, Yanglei Song, Jiawei Han | null | 1403.3460 | null | null |
A Survey of Algorithms and Analysis for Adaptive Online Learning | cs.LG | We present tools for the analysis of Follow-The-Regularized-Leader (FTRL),
Dual Averaging, and Mirror Descent algorithms when the regularizer
(equivalently, prox-function or learning rate schedule) is chosen adaptively
based on the data. Adaptivity can be used to prove regret bounds that hold on
every round, and also allows for data-dependent regret bounds as in
AdaGrad-style algorithms (e.g., Online Gradient Descent with adaptive
per-coordinate learning rates). We present results from a large number of prior
works in a unified manner, using a modular and tight analysis that isolates the
key arguments in easily re-usable lemmas. This approach strengthens previously
known FTRL analysis techniques to produce bounds as tight as those achieved by
potential functions or primal-dual analysis. Further, we prove a general and
exact equivalence between an arbitrary adaptive Mirror Descent algorithm and a
corresponding FTRL update, which allows us to analyze any Mirror Descent
algorithm in the same framework. The key to bridging the gap between Dual
Averaging and Mirror Descent algorithms lies in an analysis of the
FTRL-Proximal algorithm family. Our regret bounds are proved in the most
general form, holding for arbitrary norms and non-smooth regularizers with
time-varying weight.
| H. Brendan McMahan | null | 1403.3465 | null | null |
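As a simple instance of the adaptively regularized algorithms the survey covers, here is a sketch of online gradient descent with AdaGrad-style per-coordinate learning rates on a stream of losses; the base rate and toy regression stream are illustrative assumptions.

```python
import numpy as np

def adagrad_step(x, g, g_sq, base_lr=1.0, eps=1e-8):
    """One step of OGD with AdaGrad-style per-coordinate learning rates.

    Each coordinate's step size shrinks with the root of its accumulated
    squared gradients, a data-dependent (adaptive) regularizer choice.
    """
    g_sq += g * g                        # accumulate squared gradients
    x -= base_lr * g / (np.sqrt(g_sq) + eps)
    return x, g_sq

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_star = np.array([1.0, -2.0, 0.5])  # target of a toy regression stream
    x, g_sq = np.zeros(3), np.zeros(3)
    for _ in range(5000):
        a = rng.normal(size=3)
        # Gradient of the instantaneous loss 0.5 * (a.x - a.w_star)^2.
        g = a * (a @ x - a @ w_star)
        x, g_sq = adagrad_step(x, g, g_sq)
    print("final iterate:", np.round(x, 2))
```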
Making Risk Minimization Tolerant to Label Noise | cs.LG | In many applications, the training data, from which one needs to learn a
classifier, is corrupted with label noise. Many standard algorithms such as SVM
perform poorly in presence of label noise. In this paper we investigate the
robustness of risk minimization to label noise. We prove a sufficient condition
on a loss function for the risk minimization under that loss to be tolerant to
uniform label noise. We show that the $0-1$ loss, sigmoid loss, ramp loss and
probit loss satisfy this condition though none of the standard convex loss
functions satisfy it. We also prove that, by choosing a sufficiently large
value of a parameter in the loss function, the sigmoid loss, ramp loss and
probit loss can be made tolerant to non-uniform label noise also if we can
assume the classes to be separable under noise-free data distribution. Through
extensive empirical studies, we show that risk minimization under the $0-1$
loss, the sigmoid loss and the ramp loss has much better robustness to label
noise when compared to the SVM algorithm.
| Aritra Ghosh, Naresh Manwani and P. S. Sastry | 10.1016/j.neucom.2014.09.081 | 1403.3610 | null | null |
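To make the losses discussed above concrete, the sketch below defines sigmoid and ramp losses (with the steepness/width parameter that the noise-tolerance argument relies on) and evaluates an empirical risk under uniform label noise. The exact parameterizations and data are illustrative assumptions, not necessarily the authors' definitions.

```python
import numpy as np

def sigmoid_loss(margin, beta=2.0):
    """Sigmoid loss: a smooth, bounded surrogate for the 0-1 loss.
    Larger beta brings it closer to 0-1 and improves noise tolerance."""
    return 1.0 / (1.0 + np.exp(beta * margin))

def ramp_loss(margin, mu=1.0):
    """Ramp loss: the hinge (1 - m/mu)_+ clipped above at 1, hence bounded."""
    return np.minimum(1.0, np.maximum(0.0, 1.0 - margin / mu))

def empirical_risk(w, X, y, loss):
    """Average loss of the linear classifier x -> sign(w.x), labels in {-1,+1}."""
    return float(np.mean(loss(y * (X @ w))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = np.sign(X @ np.array([1.0, -1.0]))
    flip = rng.random(500) < 0.2               # 20% uniform label noise
    y_noisy = np.where(flip, -y, y)
    w = np.array([1.0, -1.0])                  # the true separator
    for name, loss in [("sigmoid", sigmoid_loss), ("ramp", ramp_loss)]:
        print(name, round(empirical_risk(w, X, y_noisy, loss), 3))
```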
Mixed-norm Regularization for Brain Decoding | cs.LG | This work investigates the use of mixed-norm regularization for sensor
selection in Event-Related Potential (ERP) based Brain-Computer Interfaces
(BCI). The classification problem is cast as a discriminative optimization
framework where sensor selection is induced through the use of mixed-norms.
This framework is extended to the multi-task learning situation where several
similar classification tasks related to different subjects are learned
simultaneously. In this case, multi-task learning helps in leveraging data
scarcity issue yielding to more robust classifiers. For this purpose, we have
introduced a regularizer that induces both sensor selection and classifier
similarities. The different regularization approaches are compared on three ERP
datasets showing the interest of mixed-norm regularization in terms of sensor
selection. The multi-task approaches are evaluated when a small number of
learning examples are available yielding to significant performance
improvements especially for subjects performing poorly.
| R\'emi Flamary (LAGRANGE), Nisrine Jrad (GIPSA-lab), Ronald Phlypo
(GIPSA-lab), Marco Congedo (GIPSA-lab), Alain Rakotomamonjy (LITIS) | null | 1403.3628 | null | null |
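The sensor selection described above is typically induced by an $\ell_1/\ell_2$ mixed norm that groups all weights attached to one sensor; its proximal operator is block soft-thresholding, sketched below. The group layout, regularization level, and data are illustrative assumptions.

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of lam * sum_g ||W[g]||_2 (rows = groups/sensors).

    Shrinks each row toward zero and removes it entirely when its norm
    falls below lam: this is what induces sensor selection.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 8 sensors x 5 temporal features per sensor; 3 truly useful sensors.
    W = rng.normal(scale=0.1, size=(8, 5))
    W[[1, 4, 6]] += rng.normal(scale=1.0, size=(3, 5))
    W_sparse = prox_l21(W, lam=0.5)
    kept = np.flatnonzero(np.linalg.norm(W_sparse, axis=1) > 0)
    print("selected sensors:", kept)
```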
Learning the Latent State Space of Time-Varying Graphs | cs.SI cs.LG physics.soc-ph stat.ML | From social networks to Internet applications, a wide variety of electronic
communication tools are producing streams of graph data; where the nodes
represent users and the edges represent the contacts between them over time.
This has led to an increased interest in mechanisms to model the dynamic
structure of time-varying graphs. In this work, we develop a framework for
learning the latent state space of a time-varying email graph. We show how the
framework can be used to find subsequences that correspond to global real-time
events in the Email graph (e.g., vacations, breaks, etc.). These events
impact the underlying graph process to make its characteristics non-stationary.
Within the framework, we compare two different representations of the temporal
relationships; discrete vs. probabilistic. We use the two representations as
inputs to a mixture model to learn the latent state transitions that correspond
to important changes in the Email graph structure over time.
| Nesreen K. Ahmed, Christopher Cole, Jennifer Neville | null | 1403.3707 | null | null |
Near-optimal Reinforcement Learning in Factored MDPs | stat.ML cs.LG | Any reinforcement learning algorithm that applies to all Markov decision
processes (MDPs) will suffer $\Omega(\sqrt{SAT})$ regret on some MDP, where $T$
is the elapsed time and $S$ and $A$ are the cardinalities of the state and
action spaces. This implies $T = \Omega(SA)$ time to guarantee a near-optimal
policy. In many settings of practical interest, due to the curse of
dimensionality, $S$ and $A$ can be so enormous that this learning time is
unacceptable. We establish that, if the system is known to be a \emph{factored}
MDP, it is possible to achieve regret that scales polynomially in the number of
\emph{parameters} encoding the factored MDP, which may be exponentially smaller
than $S$ or $A$. We provide two algorithms that satisfy near-optimal regret
bounds in this context: posterior sampling reinforcement learning (PSRL) and an
upper confidence bound algorithm (UCRL-Factored).
| Ian Osband, Benjamin Van Roy | null | 1403.3741 | null | null |
Multi-task Feature Selection based Anomaly Detection | stat.ML cs.LG | Network anomaly detection is still a vibrant research area. With the fast
growth of network bandwidth and the tremendous traffic on the network, an
extremely challenging question arises: how can anomalies be detected
efficiently and accurately across multiple traffic streams? In multi-task
learning, the traffic consisting of flows at different time periods is
considered a task, and multiple tasks at different time periods are performed
simultaneously to detect anomalies.
In this paper, we apply multi-task feature selection to network anomaly
detection, which provides a powerful method to gather information from multiple
traffic streams and detect anomalies on them simultaneously. In particular, the
multi-task feature selection includes the well-known l1-norm based feature
selection as a special case given only one task. Moreover, we show that the
multi-task feature selection is more accurate by utilizing more information
simultaneously than the l1-norm based method. At the evaluation stage, we
preprocess the raw data trace from trans-Pacific backbone link between Japan
and the United States, label with anomaly communities, and generate a
248-feature dataset. We show empirically that the multi-task feature selection
outperforms independent l1-norm based feature selection on real traffic
dataset.
| Longqi Yang, Yibing Wang, Zhisong Pan and Guyu Hu | null | 1403.4017 | null | null |
Learning Negative Mixture Models by Tensor Decompositions | cs.LG | This work considers the problem of estimating the parameters of negative
mixture models, i.e. mixture models that possibly involve negative weights. The
contributions of this paper are as follows. (i) We show that every rational
probability distribution on strings, a representation which occurs naturally
in spectral learning, can be computed by a negative mixture of at most two
probabilistic automata (or HMMs). (ii) We propose a method to estimate the
parameters of negative mixture models having a specific tensor structure in
their low order observable moments. Building upon a recent paper on tensor
decompositions for learning latent variable models, we extend this work to the
broader setting of tensors having a symmetric decomposition with positive and
negative weights. We introduce a generalization of the tensor power method for
complex valued tensors, and establish theoretical convergence guarantees. (iii)
We show how our approach applies to negative Gaussian mixture models, for which
we provide some experiments.
| Guillaume Rabusseau and Fran\c{c}ois Denis | null | 1403.4224 | null | null |
Balancing Sparsity and Rank Constraints in Quadratic Basis Pursuit | cs.NA cs.LG | We investigate methods that simultaneously enforce sparsity and low-rank
structure in a matrix, as often employed for sparse phase retrieval or phase
calibration problems in compressive sensing. We propose a new approach
for analyzing the trade-off between the sparsity and low-rank constraints in
these approaches which not only helps to provide guidelines to adjust the
weights between the aforementioned constraints, but also enables new simulation
strategies for evaluating performance. We then provide simulation results for
phase retrieval and phase calibration cases both to demonstrate the consistency
of the proposed method with other approaches and to evaluate the change of
performance with different weights for the sparsity and low rank structure
constraints.
| Cagdas Bilen (INRIA - IRISA), Gilles Puy, R\'emi Gribonval (INRIA -
IRISA), Laurent Daudet | null | 1403.4267 | null | null |
Spectral Clustering with Jensen-type kernels and their multi-point
extensions | cs.LG | Motivated by multi-distribution divergences, which originate in information
theory, we propose a notion of `multi-point' kernels, and study their
applications. We study a class of kernels based on Jensen type divergences and
show that these can be extended to measure similarity among multiple points. We
study tensor flattening methods and develop a multi-point (kernel) spectral
clustering (MSC) method. We further emphasize a special case of the proposed
kernels, which is a multi-point extension of the linear (dot-product) kernel,
and show the existence of a cubic-time tensor flattening algorithm in this case.
Finally, we illustrate the usefulness of our contributions using standard data
sets and image segmentation tasks.
| Debarghya Ghoshdastidar, Ambedkar Dukkipati, Ajay P. Adsul, Aparna S.
Vijayan | 10.1109/CVPR.2014.191 | 1403.4378 | null | null |
Simultaneous Perturbation Algorithms for Batch Off-Policy Search | math.OC cs.LG | We propose novel policy search algorithms in the context of off-policy, batch
mode reinforcement learning (RL) with continuous state and action spaces. Given
a batch collection of trajectories, we perform off-line policy evaluation using
an algorithm similar to that by [Fonteneau et al., 2010]. Using this
Monte-Carlo like policy evaluator, we perform policy search in a class of
parameterized policies. We propose both first order policy gradient and second
order policy Newton algorithms. All our algorithms incorporate simultaneous
perturbation estimates for the gradient as well as the Hessian of the
cost-to-go vector, since the latter is unknown and only biased estimates are
available. We demonstrate their practicality on a simple 1-dimensional
continuous state space problem.
| Raphael Fonteneau and L.A. Prashanth | null | 1403.4514 | null | null |
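The simultaneous perturbation idea at the core of these algorithms can be sketched compactly: perturb all policy parameters at once with random signs and form a full gradient estimate from just two noisy evaluations. The toy objective and gain schedules below are illustrative assumptions standing in for a Monte-Carlo policy evaluator.

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=2000, a=0.1, c=0.1, seed=0):
    """SPSA: gradient-free minimization via simultaneous perturbation.

    Each iteration estimates the whole gradient from two noisy evaluations
    of f, regardless of the parameter dimension.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** 0.602              # standard SPSA gain schedules
        ck = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher signs
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) \
                / (2.0 * ck) * (1.0 / delta)
        theta -= ak * g_hat
    return theta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = np.array([1.0, -2.0, 0.5])
    # Noisy cost, standing in for an off-policy Monte-Carlo evaluation.
    noisy_cost = lambda th: np.sum((th - target) ** 2) + 0.01 * rng.normal()
    print(np.round(spsa_minimize(noisy_cost, np.zeros(3)), 2))
```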
Similarity networks for classification: a case study in the Horse Colic
problem | cs.LG cs.NE | This paper develops a two-layer neural network in which the neuron model
computes a user-defined similarity function between inputs and weights. The
neuron transfer function is formed by composition of an adapted logistic
function with the mean of the partial input-weight similarities. The resulting
neuron model is capable of dealing directly with variables of potentially
different nature (continuous, fuzzy, ordinal, categorical). There is also
provision for missing values. The network is trained using a two-stage
procedure very similar to that used to train a radial basis function (RBF)
neural network. The network is compared to two types of RBF networks in a
non-trivial dataset: the Horse Colic problem, taken as a case study and
analyzed in detail.
| Llu\'is Belanche and Jer\'onimo Hern\'andez | null | 1403.4540 | null | null |
A Split-and-Merge Dictionary Learning Algorithm for Sparse
Representation | cs.LG stat.ML | In big data image/video analytics, we encounter the problem of learning an
overcomplete dictionary for sparse representation from a large training
dataset, which cannot be processed at once because of storage and
computational constraints. To tackle the problem of dictionary learning in such
scenarios, we propose an algorithm for parallel dictionary learning. The
fundamental idea behind the algorithm is to learn a sparse representation in
two phases. In the first phase, the whole training dataset is partitioned into
small non-overlapping subsets, and a dictionary is trained independently on
each small database. In the second phase, the dictionaries are merged to form a
global dictionary. We show that the proposed algorithm is efficient in its
usage of memory and computational complexity, and performs on par with the
standard learning strategy operating on the entire data at a time. As an
application, we consider the problem of image denoising. We present a
comparative analysis of our algorithm with the standard learning techniques,
that use the entire database at a time, in terms of training and denoising
performance. We observe that the split-and-merge algorithm results in a
remarkable reduction of training time, without significantly affecting the
denoising performance.
| Subhadip Mukherjee and Chandra Sekhar Seelamantula | null | 1403.4781 | null | null |
Universal and Distinct Properties of Communication Dynamics: How to
Generate Realistic Inter-event Times | cs.SI cs.LG physics.soc-ph | With the advancement of information systems, means of communications are
becoming cheaper, faster and more available. Today, millions of people carrying
smart-phones or tablets are able to communicate at practically any time and
anywhere they want. Among others, they can access their e-mails, comment on
weblogs, watch and post comments on videos, make phone calls or text messages
almost ubiquitously. Given this scenario, in this paper we tackle a fundamental
aspect of this new era of communication: how do the time intervals between
communication events behave for different technologies and means of
communication? Are there universal patterns in the inter-event time
distribution (IED)? In which ways do inter-event times behave differently among
particular technologies? To answer these questions, we analyze eight different
datasets of real, modern communication data and find four well-defined
patterns that are seen in all eight datasets. Moreover, we propose the use
of the Self-Feeding Process (SFP) to generate inter-event times between
communications. The SFP is an extremely parsimonious point process that requires
at most two parameters and is able to generate inter-event times with all the
universal properties we observed in the data. We show the potential application
of SFP by proposing a framework to generate a synthetic dataset containing
realistic communication events of any one of the analyzed means of
communications (e.g. phone calls, e-mails, comments on blogs) and an algorithm
to detect anomalies.
| Pedro O.S. Vaz de Melo, Christos Faloutsos, Renato Assun\c{c}\~ao,
Rodrigo Alves and Antonio A.F. Loureiro | 10.1145/2700399 | 1403.4997 | null | null |
Network-based Isoform Quantification with RNA-Seq Data for Cancer
Transcriptome Analysis | cs.CE cs.AI cs.LG | High-throughput mRNA sequencing (RNA-Seq) is widely used for transcript
quantification of gene isoforms. Since RNA-Seq data alone is often not
sufficient to accurately identify the read origins from the isoforms for
quantification, we propose to explore protein domain-domain interactions as
prior knowledge for integrative analysis with RNA-seq data. We introduce a
Network-based method for RNA-Seq-based Transcript Quantification (Net-RSTQ) to
integrate protein domain-domain interaction network with short read alignments
for transcript abundance estimation. Based on our observation that the
abundances of the neighboring isoforms by domain-domain interactions in the
network are positively correlated, Net-RSTQ models the expression of the
neighboring transcripts as Dirichlet priors on the likelihood of the observed
read alignments against the transcripts in one gene. The transcript abundances
of all the genes are then jointly estimated with alternating optimization of
multiple EM problems. In simulation Net-RSTQ effectively improved isoform
transcript quantifications when isoform co-expressions correlate with their
interactions. qRT-PCR results on 25 multi-isoform genes in a stem cell line, an
ovarian cancer cell line, and a breast cancer cell line also showed that
Net-RSTQ estimated more consistent isoform proportions with RNA-Seq data. In
the experiments on the RNA-Seq data in The Cancer Genome Atlas (TCGA), the
transcript abundances estimated by Net-RSTQ are more informative for patient
sample classification of ovarian cancer, breast cancer and lung cancer. All
experimental results collectively support that Net-RSTQ is a promising approach
for isoform quantification.
| Wei Zhang, Jae-Woong Chang, Lilong Lin, Kay Minn, Baolin Wu, Jeremy
Chien, Jeongsik Yong, Hui Zheng, Rui Kuang | 10.1371/journal.pcbi.1004465 | 1403.5029 | null | null |
Matroid Bandits: Fast Combinatorial Optimization with Learning | cs.LG cs.AI cs.SY stat.ML | A matroid is a notion of independence in combinatorial optimization which is
closely related to computational efficiency. In particular, it is well known
that the maximum of a constrained modular function can be found greedily if and
only if the constraints are associated with a matroid. In this paper, we bring
together the ideas of bandits and matroids, and propose a new class of
combinatorial bandits, matroid bandits. The objective in these problems is to
learn how to maximize a modular function on a matroid. This function is
stochastic and initially unknown. We propose a practical algorithm for solving
our problem, Optimistic Matroid Maximization (OMM); and prove two upper bounds,
gap-dependent and gap-free, on its regret. Both bounds are sublinear in time
and at most linear in all other quantities of interest. The gap-dependent upper
bound is tight and we prove a matching lower bound on a partition matroid
bandit. Finally, we evaluate our method on three real-world problems and show
that it is practical.
| Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, Brian Eriksson | null | 1403.5045 | null | null |
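A minimal sketch of the Optimistic Matroid Maximization idea on the simplest matroid, a partition matroid (at most one item per part): build UCB indices, then greedily add items in decreasing index order subject to the independence constraint. The confidence-term constant, initialization, and simulated weights are illustrative assumptions.

```python
import numpy as np

def omm_partition(parts, means, T, seed=0):
    """Optimistic greedy maximization on a partition matroid bandit.

    parts : array assigning each item to a part (one item allowed per part).
    means : true mean weights of the items (unknown to the learner).
    """
    rng = np.random.default_rng(seed)
    n = len(means)
    counts, sums = np.ones(n), rng.normal(means, 1.0)   # one initial pull each
    for t in range(2, T + 1):
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)
        # Greedy step: scan items by decreasing UCB, keep those that stay
        # independent in the matroid (here: at most one item per part).
        chosen, used = [], set()
        for i in np.argsort(-ucb):
            if parts[i] not in used:
                chosen.append(int(i))
                used.add(parts[i])
        # Observe stochastic weights of the chosen items (semi-bandit feedback).
        obs = rng.normal(means[chosen], 1.0)
        for i, w in zip(chosen, obs):
            counts[i] += 1
            sums[i] += w
    return sorted(chosen)

if __name__ == "__main__":
    parts = np.array([0, 0, 1, 1, 2, 2])
    means = np.array([0.9, 0.2, 0.5, 0.7, 0.1, 0.8])
    # The optimal basis picks the best item in each part: {0, 3, 5}.
    print("final basis:", omm_partition(parts, means, T=3000))
```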
Unconfused Ultraconservative Multiclass Algorithms | cs.LG | We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago by, e.g. Bylander (1994) and Blum et al. (1996): in these
contributions, the proposed approaches to fight the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
Keywords: Multiclass classification, Perceptron, Noisy labels, Confusion Matrix
| Ugo Louche (LIF), Liva Ralaivola (LIF) | null | 1403.5115 | null | null |
Online Local Learning via Semidefinite Programming | cs.LG | In many online learning problems we are interested in predicting local
information about some universe of items. For example, we may want to know
whether two items are in the same cluster rather than computing an assignment
of items to clusters; we may want to know which of two teams will win a game
rather than computing a ranking of teams. Although finding the optimal
clustering or ranking is typically intractable, it may be possible to predict
the relationships between items as well as if you could solve the global
optimization problem exactly.
Formally, we consider an online learning problem in which a learner
repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial
payoff depending on those labels. The learner's goal is to receive a payoff
nearly as good as the best fixed labeling of the items. We show that a simple
algorithm based on semidefinite programming can obtain asymptotically optimal
regret in the case where the number of possible labels is O(1), resolving an
open problem posed by Hazan, Kale, and Shalev-Schwartz. Our main technical
contribution is a novel use and analysis of the log determinant regularizer,
exploiting the observation that log det(A + I) upper bounds the entropy of any
distribution with covariance matrix A.
| Paul Christiano | null | 1403.5287 | null | null |
An Information-Theoretic Analysis of Thompson Sampling | cs.LG | We provide an information-theoretic analysis of Thompson sampling that
applies across a broad range of online optimization problems in which a
decision-maker must learn from partial feedback. This analysis inherits the
simplicity and elegance of information theory and leads to regret bounds that
scale with the entropy of the optimal-action distribution. This strengthens
preexisting results and yields new insight into how information improves
performance.
| Daniel Russo, Benjamin Van Roy | null | 1403.5341 | null | null |
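For reference, a minimal sketch of Thompson sampling for the Bernoulli bandit, the setting in which such regret bounds are typically instantiated; the arm means and horizon are illustrative.

```python
import numpy as np

def thompson_bernoulli(true_means, T, seed=0):
    """Thompson sampling with independent Beta(1, 1) priors per arm."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha, beta = np.ones(k), np.ones(k)    # posterior pseudo-counts
    regret, best = 0.0, max(true_means)
    for _ in range(T):
        # Sample a mean for each arm from its posterior; play the argmax.
        theta = rng.beta(alpha, beta)
        a = int(np.argmax(theta))
        reward = float(rng.random() < true_means[a])
        alpha[a] += reward
        beta[a] += 1.0 - reward
        regret += best - true_means[a]
    return regret

if __name__ == "__main__":
    print("cumulative regret:",
          round(thompson_bernoulli([0.3, 0.5, 0.7], 10000), 1))
```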
Using n-grams models for visual semantic place recognition | stat.ML cs.CV cs.LG | The aim of this paper is to present a new method for visual place
recognition. Our system combines global image characterization and visual
words, which allows the use of efficient Bayesian filtering methods to
integrate several images. More precisely, we extend the classical HMM model with
techniques inspired by the field of Natural Language Processing. This paper
presents our system and the Bayesian filtering algorithm. The performance of
our system and the influence of the main parameters are evaluated on a standard
database. The discussion highlights the interest of using such models and
proposes improvements.
| Mathieu Dubois (LIMSI), Frenoux Emmanuelle (LIMSI), Philippe Tarroux
(LIMSI) | null | 1403.5370 | null | null |
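A minimal sketch of the Bayesian filtering step used to integrate several images into a place belief (the transition matrix and per-image observation likelihoods below are toy stand-ins; the paper's n-gram-inspired extensions are not shown):

```python
import numpy as np

n_places = 4
T = np.full((n_places, n_places), 0.1 / (n_places - 1))
np.fill_diagonal(T, 0.9)                # places change slowly over time

def filter_step(belief, obs_likelihood):
    predicted = T.T @ belief            # prediction step through the transition model
    posterior = predicted * obs_likelihood
    return posterior / posterior.sum()  # normalize to a proper belief

belief = np.full(n_places, 1.0 / n_places)
for obs_likelihood in ([0.6, 0.2, 0.1, 0.1], [0.7, 0.1, 0.1, 0.1]):
    belief = filter_step(belief, np.array(obs_likelihood))
print(belief)                           # mass concentrates on place 0
```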
Missing Data Prediction and Classification: The Use of Auto-Associative
Neural Networks and Optimization Algorithms | cs.NE cs.LG | This paper presents methods for approximating missing data in a
dataset, using optimization algorithms to optimize the network
parameters, after which prediction and classification tasks can be
performed. The optimization methods that are considered are genetic algorithm
(GA), simulated annealing (SA), particle swarm optimization (PSO), random
forest (RF) and negative selection (NS) and these methods are individually used
in combination with auto-associative neural networks (AANN) for missing data
estimation and the results obtained are compared. The methods suggested use the
optimization algorithms to minimize an error function derived from training the
auto-associative neural network during which the interrelationships between the
inputs and the outputs are obtained and stored in the weights connecting the
different layers of the network. The error function is expressed as the square
of the difference between the actual observations and predicted values from an
auto-associative neural network. In the event of missing data, not all
values of the actual observations are known; hence, the error function is
decomposed into terms that depend on the known and unknown variable values.
A multi-layer perceptron (MLP) neural network is employed, trained using
the scaled conjugate gradient (SCG) method. Prediction accuracy is measured
by mean squared error (MSE), root mean squared error (RMSE), mean absolute
error (MAE), and correlation coefficient (r). Classification accuracy is
assessed by plotting ROC curves and calculating the areas under them.
Analysis of the results shows that the approach using RF with AANN produces
the most accurate predictions and classifications, while at the other end
of the scale is the approach using NS with AANN.
| Collins Leke, Bhekisipho Twala, and T. Marwala | null | 1403.5488 | null | null |
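A sketch of the estimation step described above: with a trained auto-associative map f (here a toy linear map standing in for the trained AANN), the unknown entries are chosen to minimize the squared reconstruction error; scipy's general-purpose minimizer stands in for the GA/SA/PSO/RF/NS optimizers compared in the paper:

```python
import numpy as np
from scipy.optimize import minimize

W = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])
f = lambda x: W @ x                     # toy "auto-associative" map

x_obs = np.array([1.0, np.nan, 2.0])    # second entry is missing
missing = np.isnan(x_obs)

def error(u):
    x = x_obs.copy()
    x[missing] = u                      # fill unknowns with candidate values
    return np.sum((x - f(x)) ** 2)      # squared reconstruction error

res = minimize(error, x0=np.zeros(missing.sum()))
print(res.x)                            # estimate of the missing entry
```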
Learning to Optimize via Information-Directed Sampling | cs.LG | We propose information-directed sampling -- a new approach to online
optimization problems in which a decision-maker must balance between
exploration and exploitation while learning from partial feedback. Each action
is sampled in a manner that minimizes the ratio between squared expected
single-period regret and a measure of information gain: the mutual information
between the optimal action and the next observation. We establish an expected
regret bound for information-directed sampling that applies across a very
general class of models and scales with the entropy of the optimal action
distribution. We illustrate through simple analytic examples how
information-directed sampling accounts for kinds of information that
alternative approaches do not adequately address and that this can lead to
dramatic performance gains. For the widely studied Bernoulli, Gaussian, and
linear bandit problems, we demonstrate state-of-the-art simulation performance.
| Daniel Russo and Benjamin Van Roy | null | 1403.5556 | null | null |
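The action-selection rule in its simplest (deterministic) form, assuming per-action estimates of expected regret and information gain are already available; the full algorithm optimizes the same ratio over randomized actions:

```python
import numpy as np

delta = np.array([0.10, 0.05, 0.20])   # estimated expected single-period regret
gain = np.array([0.01, 0.001, 0.08])   # estimated information gain per action
ratio = delta ** 2 / gain              # the information ratio, per action
action = int(np.argmin(ratio))
print(action, ratio)
```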
Forecasting Popularity of Videos using Social Media | cs.LG cs.SI | This paper presents a systematic online prediction method (Social-Forecast)
that is capable of accurately forecasting the popularity of videos promoted by
social media. Social-Forecast explicitly considers the dynamically changing and
evolving propagation patterns of videos in social media when making popularity
forecasts, thereby being situation and context aware. Social-Forecast aims to
maximize the forecast reward, which is defined as a tradeoff between the
popularity prediction accuracy and the timeliness with which a prediction is
issued. The forecasting is performed online and requires no training phase or a
priori knowledge. We analytically bound the prediction performance loss of
Social-Forecast as compared to that obtained by an omniscient oracle and prove
that the bound is sublinear in the number of video arrivals, thereby
guaranteeing its short-term performance as well as its asymptotic convergence
to the optimal performance. In addition, we conduct extensive experiments using
real-world data traces collected from the videos shared in RenRen, one of the
largest online social networks in China. These experiments show that our
proposed method outperforms existing view-based approaches for popularity
prediction (which are not context-aware) by more than 30% in terms of
prediction rewards.
| Jie Xu, Mihaela van der Schaar, Jiangchuan Liu and Haitao Li | 10.1109/JSTSP.2014.2370942 | 1403.5603 | null | null |
Bayesian Optimization with Unknown Constraints | stat.ML cs.LG | Recent work on Bayesian optimization has shown its effectiveness in global
optimization of difficult black-box objective functions. Many real-world
optimization problems of interest also have constraints which are unknown a
priori. In this paper, we study Bayesian optimization for constrained problems
in the general case that noise may be present in the constraint functions, and
the objective and constraints may be evaluated independently. We provide
motivating practical examples, and present a general framework to solve such
problems. We demonstrate the effectiveness of our approach on optimizing the
performance of online latent Dirichlet allocation subject to topic sparsity
constraints, tuning a neural network given test-time memory constraints, and
optimizing Hamiltonian Monte Carlo to achieve maximal effectiveness in a fixed
time, subject to passing standard convergence diagnostics.
| Michael A. Gelbart, Jasper Snoek, Ryan P. Adams | null | 1403.5607 | null | null |
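A sketch of the kind of constrained acquisition function such a framework can use: expected improvement weighted by the posterior probability that the independently modeled, noisy constraint is satisfied. The GP posterior means and standard deviations at the candidate points are assumed given (the numbers below are toy values):

```python
import numpy as np
from scipy.stats import norm

def constrained_ei(mu_f, sigma_f, best_f, mu_c, sigma_c):
    # Expected improvement for minimization of the objective f
    z = (best_f - mu_f) / sigma_f
    ei = (best_f - mu_f) * norm.cdf(z) + sigma_f * norm.pdf(z)
    # Posterior probability that the constraint c(x) <= 0 holds
    p_feasible = norm.cdf(-mu_c / sigma_c)
    return ei * p_feasible

mu_f = np.array([0.2, 0.0, -0.1]); sigma_f = np.array([0.3, 0.2, 0.4])
mu_c = np.array([-1.0, 0.5, -0.2]); sigma_c = np.array([0.2, 0.2, 0.2])
scores = constrained_ei(mu_f, sigma_f, 0.1, mu_c, sigma_c)
print(np.argmax(scores))               # index of the next point to evaluate
```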
CUR Algorithm with Incomplete Matrix Observation | cs.LG stat.ML | CUR matrix decomposition is a randomized algorithm that can efficiently
compute a low rank approximation of a given rectangular matrix. One limitation
of the existing CUR algorithms is that they require access to the full
matrix A for computing U. In this work, we aim to alleviate this limitation. In
particular, we assume that besides access to d randomly sampled rows
and d randomly sampled columns of A, we only observe a subset of randomly sampled entries of
A. Our goal is to develop a low rank approximation algorithm, similar to CUR,
based on (i) randomly sampled rows and columns of A, and (ii) randomly
sampled entries of A. The proposed algorithm is able to perfectly recover the
target matrix A with only O(rn log n) observed entries. In addition,
instead of having to solve an optimization problem involving trace norm
regularization, the proposed algorithm only needs to solve a standard
regression problem. Finally, unlike most matrix completion theories that hold
only when the target matrix is of low rank, we show a strong guarantee for the
proposed algorithm even when the target matrix is not low rank.
| Rong Jin, Shenghuo Zhu | null | 1403.5647 | null | null |
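A toy instantiation of the idea: fit the middle factor U by ordinary least squares using only randomly observed entries of A, so no trace-norm machinery is needed (the sampling sizes and distributions below are simplistic stand-ins for the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, d = 50, 3, 8
A = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-r target matrix

cols = rng.choice(n, d, replace=False)
rows = rng.choice(n, d, replace=False)
C, R = A[:, cols], A[rows, :]            # sampled columns and rows

obs = rng.random((n, n)) < 0.2           # randomly observed entries of A
I, J = np.nonzero(obs)

# Least squares for U: for each observed (i, j), C[i, :] @ U @ R[:, j] ~ A[i, j]
design = np.einsum('ki,kj->kij', C[I, :], R[:, J].T).reshape(len(I), -1)
u, *_ = np.linalg.lstsq(design, A[I, J], rcond=None)
U = u.reshape(d, d)

print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # small residual
```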
Firefly Monte Carlo: Exact MCMC with Subsets of Data | stat.ML cs.LG stat.CO | Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose
tool for Bayesian inference. However, MCMC cannot be practically applied to
large data sets because of the prohibitive cost of evaluating every likelihood
term at every iteration. Here we present Firefly Monte Carlo (FlyMC), an
auxiliary variable MCMC algorithm that only queries the likelihoods of a
potentially small subset of the data at each iteration yet simulates from the
exact posterior distribution, in contrast to recent proposals that are
approximate even in the asymptotic limit. FlyMC is compatible with a wide
variety of modern MCMC algorithms, and only requires a lower bound on the
per-datum likelihood factors. In experiments, we find that FlyMC generates
samples from the posterior more than an order of magnitude faster than regular
MCMC, opening up MCMC methods to larger datasets than were previously
considered feasible.
| Dougal Maclaurin and Ryan P. Adams | null | 1403.5693 | null | null |
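The auxiliary-variable construction at the heart of this scheme, reconstructed from the description above (the notation $L_n(\theta)$ for per-datum likelihoods and $B_n(\theta)$ for their lower bounds is ours):

```latex
p(\theta, z) \;\propto\; p(\theta)\prod_{n=1}^{N}
  B_n(\theta)^{\,1-z_n}\,\bigl(L_n(\theta) - B_n(\theta)\bigr)^{z_n},
\qquad z_n \in \{0, 1\}.
```

Summing each $z_n$ out recovers $B_n(\theta) + (L_n(\theta) - B_n(\theta)) = L_n(\theta)$, so the $\theta$-marginal is the exact posterior; only the "bright" points ($z_n = 1$) require evaluating their likelihoods, provided the product of the bounds $B_n$ over the "dim" points can be computed cheaply.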
Non-uniform Feature Sampling for Decision Tree Ensembles | stat.ML cs.IT cs.LG math.IT stat.AP | We study the effectiveness of non-uniform randomized feature selection in
decision tree classification. We experimentally evaluate two feature selection
methodologies, based on information extracted from the provided dataset: $(i)$
\emph{leverage scores-based} and $(ii)$ \emph{norm-based} feature selection.
Experimental evaluation of the proposed feature selection techniques indicates
that such approaches might be more effective than naive uniform feature
selection, while having performance comparable to the random forest
algorithm [3].
| Anastasios Kyrillidis and Anastasios Zouzias | null | 1403.5877 | null | null |
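A sketch of the norm-based variant (Python/scikit-learn as an illustration; the leverage-score variant would replace the squared column norms with statistical leverage scores, and the paper's exact protocol may differ):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)

norms = (X ** 2).sum(axis=0)
p = norms / norms.sum()                  # norm-based sampling distribution
feats = rng.choice(X.shape[1], size=10, replace=False, p=p)

tree = DecisionTreeClassifier(random_state=0).fit(X[:, feats], y)
print(tree.score(X[:, feats], y))
```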
AIS-INMACA: A Novel Integrated MACA Based Clonal Classifier for Protein
Coding and Promoter Region Prediction | cs.CE cs.LG | Many of the problems in bioinformatics are now challenges in computing.
This paper aims at building a classifier based on Multiple Attractor Cellular
Automata (MACA) which uses fuzzy logic. It is strengthened with an Artificial
Immune System (AIS) technique, the clonal selection algorithm, for identifying
protein coding and promoter regions in a given DNA sequence. The proposed
classifier, named AIS-INMACA, introduces a novel concept of combining CA with
an artificial immune system to produce a better classifier which can address
major problems in bioinformatics. This is the first integrated algorithm which
can predict both promoter and protein coding regions. To obtain good fitness
rules, the basic concept of the clonal selection algorithm was used. The
proposed classifier can handle DNA sequences of lengths 54, 108, 162, 252, and
354. The classifier gives the exact boundaries of both protein and promoter
regions with an average accuracy of 89.6%. It was tested with 97,000 data
components which were taken from Fickett & Tung, MPromDb, and other sequences
from a renowned medical university. This proposed classifier can handle huge
data sets and can find protein and promoter regions even in mixed and
overlapped DNA sequences. This work also aims at identifying the logical
connections between the major problems in bioinformatics and tries to obtain a
common framework for addressing them, including protein structure prediction,
RNA structure prediction, prediction of the splicing pattern of any primary
transcript, and analysis of information content in DNA, RNA, and protein
sequences and structures. This work should attract more researchers towards
the application of CA as a potential pattern classifier for many important
problems in bioinformatics.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu | null | 1403.5933 | null | null |
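A generic clonal-selection loop of the kind used to evolve good fitness rules (a heavily simplified sketch: the fitness function is a placeholder, not the MACA rule evaluation, and mutation here is not affinity-proportional as in full clonal selection):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(rule):
    return -np.abs(rule - 0.7).sum()          # placeholder objective

pop = rng.random((20, 8))                     # candidate rule encodings
for _ in range(50):
    scores = np.array([fitness(r) for r in pop])
    order = np.argsort(scores)[::-1]
    elites = pop[order[:5]]                   # select the highest-affinity rules
    clones = np.repeat(elites, 4, axis=0)     # clone the selected rules
    noise = rng.normal(scale=0.05, size=clones.shape)
    clones = np.clip(clones + noise, 0, 1)    # hypermutate the clones
    pop = np.vstack([elites, clones[:15]])    # keep the population at 20
print(fitness(pop[0]))
```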
Bayesian calibration for forensic evidence reporting | stat.ML cs.LG stat.AP | We introduce a Bayesian solution for the problem in forensic speaker
recognition, where there may be very little background material for estimating
score calibration parameters. We work within the Bayesian paradigm of evidence
reporting and develop a principled probabilistic treatment of the problem,
which results in a Bayesian likelihood-ratio as the vehicle for reporting
weight of evidence. We show, in contrast, that reporting a likelihood-ratio
distribution does not solve this problem. Our solution is experimentally
exercised on a simulated forensic scenario, using NIST SRE'12 scores, which
demonstrates a clear advantage for the proposed method compared to the
traditional plugin calibration recipe.
| Niko Br\"ummer and Albert Swart | null | 1403.5997 | null | null |
Ensemble Detection of Single & Multiple Events at Sentence-Level | cs.CL cs.LG | Event classification at sentence level is an important Information Extraction
task with applications in several NLP, IR, and personalization systems.
Multi-label binary relevance (BR) methods are the state of the art. In this
work, we explored new multi-label methods known for capturing relations between
event types. These new methods, such as the ensemble Chain of Classifiers,
improve the average F1 across the 6 labels by 2.8% over Binary Relevance. The
low occurrence of multi-label sentences motivated reducing the hard imbalanced
multi-label classification problem, with its low number of occurrences of
multiple labels per instance, to a more tractable imbalanced multiclass
problem, with better results (+4.6%). We report the results of adding new features,
such as sentiment strength, rhetorical signals, domain-id (source-id and date),
and key-phrases in both single-label and multi-label event classification
scenarios.
| Lu\'is Marujo, Anatole Gershman, Jaime Carbonell, Jo\~ao P. Neto,
David Martins de Matos | null | 1403.6023 | null | null |
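A minimal classifier-chain example in scikit-learn, the family of multi-label methods compared against binary relevance above (an ensemble would average several chains with different random label orders; the synthetic data stands in for sentence-level event features):

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain
from sklearn.metrics import f1_score

X, Y = make_multilabel_classification(n_samples=400, n_classes=6,
                                      random_state=0)
chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0)
chain.fit(X[:300], Y[:300])                   # each label sees earlier predictions
pred = chain.predict(X[300:])
print(f1_score(Y[300:], pred, average='macro'))
```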