title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
Finite-Time Analysis of Kernelised Contextual Bandits | cs.LG stat.ML | We tackle the problem of online reward maximisation over a large finite set
of actions described by their contexts. We focus on the case when the number of
actions is too big to sample all of them even once. However, we assume that we
have access to the similarities between actions' contexts and that the expected
reward is an arbitrary linear function of the contexts' images in the related
reproducing kernel Hilbert space (RKHS). We propose KernelUCB, a kernelised UCB
algorithm, and give a cumulative regret bound through a frequentist analysis.
For contextual bandits, the related algorithm GP-UCB turns out to be a special
case of our algorithm, and our finite-time analysis improves the regret bound
of GP-UCB for the agnostic case, both in terms of the kernel-dependent
quantity and the RKHS norm of the reward function. Moreover, for the linear
kernel, our regret bound matches the lower bound for contextual linear bandits.
| Michal Valko, Nathaniel Korda, Remi Munos, Ilias Flaounas, Nelo
Cristianini | null | 1309.6869 | null | null |
Integrating Document Clustering and Topic Modeling | cs.LG cs.CL cs.IR stat.ML | Document clustering and topic modeling are two closely related tasks which
can mutually benefit each other. Topic modeling can project documents into a
topic space which facilitates effective document clustering. Cluster labels
discovered by document clustering can be incorporated into topic models to
extract local topics specific to each cluster and global topics shared by all
clusters. In this paper, we propose a multi-grain clustering topic model
(MGCTM) which integrates document clustering and topic modeling into a unified
framework and jointly performs the two tasks to achieve the overall best
performance. Our model tightly couples two components: a mixture component used
for discovering latent groups in document collection and a topic model
component used for mining multi-grain topics including local topics specific to
each cluster and global topics shared across clusters. We employ variational
inference to approximate the posterior of hidden variables and learn model
parameters. Experiments on two datasets demonstrate the effectiveness of our
model.
| Pengtao Xie, Eric P. Xing | null | 1309.6874 | null | null |
Active Learning with Expert Advice | cs.LG stat.ML | Conventional learning-with-expert-advice methods assume a learner is always
receiving the outcome (e.g., class labels) of every incoming training instance
at the end of each trial. In real applications, acquiring the outcome from
oracle can be costly or time consuming. In this paper, we address a new problem
of active learning with expert advice, where the outcome of an instance is
disclosed only when it is requested by the online learner. Our goal is to learn
an accurate prediction model while asking the oracle as few questions as
possible. To address this challenge, we propose a framework of active
forecasters for online active learning with expert advice, which attempts to
extend two regular forecasters, i.e., Exponentially Weighted Average Forecaster
and Greedy Forecaster, to tackle the task of active learning with expert
advice. We prove that the proposed algorithms satisfy the Hannan consistency
under some proper assumptions, and validate the efficacy of our technique by an
extensive set of experiments.
| Peilin Zhao, Steven Hoi, Jinfeng Zhuang | null | 1309.6875 | null | null |
Bennett-type Generalization Bounds: Large-deviation Case and Faster Rate
of Convergence | stat.ML cs.LG | In this paper, we present the Bennett-type generalization bounds of the
learning process for i.i.d. samples, and then show that the generalization
bounds have a faster rate of convergence than the traditional results. In
particular, we first develop two types of Bennett-type deviation inequalities for
the i.i.d. learning process: one provides the generalization bounds based on
the uniform entropy number; the other leads to the bounds based on the
Rademacher complexity. We then adopt a new method to obtain the alternative
expressions of the Bennett-type generalization bounds, which imply that the
bounds have a faster rate o(N^{-1/2}) of convergence than the traditional
results O(N^{-1/2}). Additionally, we find that the rate of the bounds will
become faster in the large-deviation case, which refers to a situation where
the empirical risk is far away from (at least not close to) the expected risk.
Finally, we analyze the asymptotic convergence of the learning process and
compare our analysis with the existing results.
| Chao Zhang | null | 1309.6876 | null | null |
Estimating Undirected Graphs Under Weak Assumptions | math.ST cs.LG stat.ML stat.TH | We consider the problem of providing nonparametric confidence guarantees for
undirected graphs under weak assumptions. In particular, we do not assume
sparsity, incoherence or Normality. We allow the dimension $D$ to increase with
the sample size $n$. First, we prove lower bounds that show that if we want
accurate inferences with low assumptions then there are limitations on the
dimension as a function of sample size. When the dimension increases slowly
with sample size, we show that methods based on Normal approximations and on
the bootstrap lead to valid inferences and we provide Berry-Esseen bounds on
the accuracy of the Normal approximation. When the dimension is large relative
to sample size, accurate inferences for graphs under low assumptions are not
possible. Instead we propose to estimate something less demanding than the
entire partial correlation graph. In particular, we consider: cluster graphs,
restricted partial correlation graphs and correlation graphs.
| Larry Wasserman, Mladen Kolar and Alessandro Rinaldo | null | 1309.6933 | null | null |
Stock price direction prediction by directly using prices data: an
empirical study on the KOSPI and HSI | cs.CE cs.LG q-fin.ST | The prediction of a stock market direction may serve as an early
recommendation system for short-term investors and as an early financial
distress warning system for long-term shareholders. Many stock prediction
studies focus on using macroeconomic indicators, such as CPI and GDP, to train
the prediction model. However, daily data of the macroeconomic indicators are
almost impossible to obtain. Thus, those methods are difficult to employ
in practice. In this paper, we propose a method that directly uses prices data
to predict market index direction and stock price direction. An extensive
empirical study of the proposed method is presented on the Korean Composite
Stock Price Index (KOSPI) and Hang Seng Index (HSI), as well as the individual
constituents included in the indices. The experimental results show notably
high hit ratios in predicting the movements of the individual constituents in
the KOSPI and HSI.
| Yanshan Wang | 10.1504/IJBIDM.2014.065091 | 1309.7119 | null | null |
Detecting Fake Escrow Websites using Rich Fraud Cues and Kernel Based
Methods | cs.CY cs.LG | The ability to automatically detect fraudulent escrow websites is important
in order to alleviate online auction fraud. Despite research on related topics,
fake escrow website categorization has received little attention. In this study
we evaluated the effectiveness of various features and techniques for detecting
fake escrow websites. Our analysis included a rich set of features extracted
from web page text, image, and link information. We also proposed a composite
kernel tailored to represent the properties of fake websites, including content
duplication and structural attributes. Experiments were conducted to assess the
proposed features, techniques, and kernels on a test bed encompassing nearly
90,000 web pages derived from 410 legitimate and fake escrow sites. The
combination of an extended feature set and the composite kernel attained over
98% accuracy when differentiating fake sites from real ones, using the support
vector machines algorithm. The results suggest that automated web-based
information systems for detecting fake escrow sites could be feasible and may
be utilized as authentication mechanisms.
| Ahmed Abbasi and Hsinchun Chen | null | 1309.7261 | null | null |
Evaluating Link-Based Techniques for Detecting Fake Pharmacy Websites | cs.CY cs.LG | Fake online pharmacies have become increasingly pervasive, constituting over
90% of online pharmacy websites. There is a need for fake website detection
techniques capable of identifying fake online pharmacy websites with a high
degree of accuracy. In this study, we compared several well-known link-based
detection techniques on a large-scale test bed with the hyperlink graph
encompassing over 80 million links between 15.5 million web pages, including
1.2 million known legitimate and fake pharmacy pages. We found that the QoC and
QoL class propagation algorithms achieved an accuracy of over 90% on our
dataset. The results revealed that algorithms that incorporate dual class
propagation as well as inlink and outlink information, on page-level or
site-level graphs, are better suited for detecting fake pharmacy websites. In
addition, site-level analysis yielded significantly better results than
page-level analysis for most algorithms evaluated.
| Ahmed Abbasi, Siddharth Kaza and F. Mariam Zahedi | null | 1309.7266 | null | null |
Bayesian Inference in Sparse Gaussian Graphical Models | stat.ML cs.LG | One of the fundamental tasks of science is to find explainable relationships
between observed phenomena. One approach to this task that has received
attention in recent years is based on probabilistic graphical modelling with
sparsity constraints on model structures. In this paper, we describe two new
approaches to Bayesian inference of sparse structures of Gaussian graphical
models (GGMs). One is based on a simple modification of the cutting-edge block
Gibbs sampler for sparse GGMs, which results in significant computational gains
in high dimensions. The other method is based on a specific construction of the
Hamiltonian Monte Carlo sampler, which results in further significant
improvements. We compare our fully Bayesian approaches with the popular
regularisation-based graphical LASSO, and demonstrate significant advantages of
the Bayesian treatment under the same computing costs. We apply the methods to
a broad range of simulated data sets, and a real-life financial data set.
| Peter Orchard, Felix Agakov, Amos Storkey | 10.1017/S0956796814000057 | 1309.7311 | null | null |
Stochastic Online Shortest Path Routing: The Value of Feedback | cs.NI cs.LG math.OC | This paper studies online shortest path routing over multi-hop networks. Link
costs or delays are time-varying and modeled by independent and identically
distributed random processes, whose parameters are initially unknown. The
parameters, and hence the optimal path, can only be estimated by routing
packets through the network and observing the realized delays. Our aim is to
find a routing policy that minimizes the regret (the cumulative difference of
expected delay) between the path chosen by the policy and the unknown optimal
path. We formulate the problem as a combinatorial bandit optimization problem
and consider several scenarios that differ in where routing decisions are made
and in the information available when making the decisions. For each scenario,
we derive a tight asymptotic lower bound on the regret that has to be satisfied
by any online routing policy. These bounds help us to understand the
performance improvements we can expect when (i) taking routing decisions at
each hop rather than at the source only, and (ii) observing per-link delays
rather than end-to-end path delays. In particular, we show that (i) is of no
use while (ii) can have a spectacular impact. Three algorithms, with a
trade-off between computational complexity and performance, are proposed. The
regret upper bounds of these algorithms improve over those of the existing
algorithms, and they significantly outperform state-of-the-art algorithms in
numerical experiments.
| M. Sadegh Talebi, Zhenhua Zou, Richard Combes, Alexandre Proutiere,
Mikael Johansson | null | 1309.7367 | null | null |
Optimal Hybrid Channel Allocation: Based on Machine Learning Algorithms | cs.NI cs.LG | Recent advances in cellular communication systems have resulted in a huge increase
in spectrum demand. To meet the requirements of the ever-growing need for
spectrum, efficient utilization of the existing resources is of utmost
importance. Channel allocation has thus become an essential research topic in
wireless communications. In this paper, we propose an optimal channel
allocation scheme, Optimal Hybrid Channel Allocation (OHCA), for effective
channel allocation. We improve upon the existing Fixed Channel Allocation
(FCA) technique by imparting intelligence to the system through the
multilayer perceptron technique.
| K Viswanadh and Dr.G Rama Murthy | null | 1309.7439 | null | null |
Structured learning of sum-of-submodular higher order energy functions | cs.CV cs.LG stat.ML | Submodular functions can be exactly minimized in polynomial time, and the
special case that graph cuts solve with max flow \cite{KZ:PAMI04} has had
significant impact in computer vision
\cite{BVZ:PAMI01,Kwatra:SIGGRAPH03,Rother:GrabCut04}. In this paper we address
the important class of sum-of-submodular (SoS) functions
\cite{Arora:ECCV12,Kolmogorov:DAM12}, which can be efficiently minimized via a
variant of max flow called submodular flow \cite{Edmonds:ADM77}. SoS functions
can naturally express higher order priors involving, e.g., local image patches;
however, it is difficult to fully exploit their expressive power because they
have so many parameters. Rather than trying to formulate existing higher order
priors as an SoS function, we take a discriminative learning approach,
effectively searching the space of SoS functions for a higher order prior that
performs well on our training set. We adopt a structural SVM approach
\cite{Joachims/etal/09a,Tsochantaridis/etal/04} and formulate the training
problem in terms of quadratic programming; as a result we can efficiently
search the space of SoS priors via an extended cutting-plane algorithm. We also
show how the state-of-the-art max flow method for vision problems
\cite{Goldberg:ESA11} can be modified to efficiently solve the submodular flow
problem. Experimental comparisons are made against the OpenCV implementation of
the GrabCut interactive segmentation technique \cite{Rother:GrabCut04}, which
uses hand-tuned parameters instead of machine learning. On a standard dataset
\cite{Gulshan:CVPR10} our method learns higher order priors with hundreds of
parameter values, and produces significantly better segmentations. While our
focus is on binary labeling problems, we show that our techniques can be
naturally generalized to handle more than two labels.
| Alexander Fix and Thorsten Joachims and Sam Park and Ramin Zabih | null | 1309.7512 | null | null |
On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori
Perturbations | cs.LG | In this paper we describe how MAP inference can be used to sample efficiently
from Gibbs distributions. Specifically, we provide means for drawing either
approximate or unbiased samples from Gibbs distributions by introducing low
dimensional perturbations and solving the corresponding MAP assignments. Our
approach also leads to new ways to derive lower bounds on partition functions.
We demonstrate empirically that our method excels in the typical "high signal -
high coupling" regime. The setting results in ragged energy landscapes that are
challenging for alternative approaches to sampling and/or lower bounds.
| Tamir Hazan, Subhransu Maji and Tommi Jaakkola | null | 1309.7598 | null | null |
Context-aware recommendations from implicit data via scalable tensor
factorization | cs.LG cs.IR | Although the implicit feedback based recommendation problem - when only the
user history is available but there are no ratings - is the most typical
setting in real-world applications, it is much less researched than the
explicit feedback case. State-of-the-art algorithms that are efficient on the
explicit case cannot be automatically transformed to the implicit case if
scalability should be maintained. There are few implicit feedback benchmark
data sets; therefore, new ideas are usually evaluated on explicit benchmarks.
In this paper, we propose a generic context-aware implicit feedback recommender
algorithm, coined iTALS. iTALS applies a fast, ALS-based tensor factorization
learning method that scales linearly with the number of non-zero elements in
the tensor. We also present two approximate and faster variants of iTALS using
coordinate descent and conjugate gradient methods during learning. The method also
allows us to incorporate various contextual information into the model while
maintaining its computational efficiency. We present two context-aware variants
of iTALS incorporating seasonality and item purchase sequentiality into the
model to distinguish user behavior at different time intervals, and product
types with different repetitiveness. Experiments run on six data sets show
that iTALS clearly outperforms context-unaware models and context-aware
baselines, while it is on par with factorization machines (winning 7 out of
12 cases) in terms of both recall and MAP.
| Balázs Hidasi, Domonkos Tikk | null | 1309.7611 | null | null |
An upper bound on prototype set size for condensed nearest neighbor | cs.LG stat.ML | The condensed nearest neighbor (CNN) algorithm is a heuristic for reducing
the number of prototypical points stored by a nearest neighbor classifier,
while keeping the classification rule given by the reduced prototypical set
consistent with the full set. I present an upper bound on the number of
prototypical points accumulated by CNN. The bound originates in a bound on the
number of times the decision rule is updated during training in the multiclass
perceptron algorithm, and thus is independent of training set size.
| Eric Christiansen | null | 1309.7676 | null | null |
An Extensive Experimental Study on the Cluster-based Reference Set
Reduction for speeding-up the k-NN Classifier | cs.LG | The k-Nearest Neighbor (k-NN) classification algorithm is one of the most
widely-used lazy classifiers because of its simplicity and ease of
implementation. It is considered to be an effective classifier and has many
applications. However, its major drawback is that when sequential search is
used to find the neighbors, it involves high computational cost. Speeding-up
k-NN search is still an active research field. Hwang and Cho have recently
proposed an adaptive cluster-based method for fast Nearest Neighbor searching.
The effectiveness of this method is based on the adjustment of three
parameters. However, the authors evaluated their method by setting specific
parameter values and using only one dataset. In this paper, an extensive
experimental study of this method is presented. The results, which are based on
five real life datasets, illustrate that if the parameters of the method are
carefully defined, one can achieve even better classification performance.
| Stefanos Ougiaroglou, Georgios Evangelidis, Dimitris A. Dervos | null | 1309.7750 | null | null |
On statistics, computation and scalability | stat.ML cs.LG math.ST stat.TH | How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
| Michael I. Jordan | 10.3150/12-BEJSP17 | 1309.7804 | null | null |
Linear Regression from Strategic Data Sources | cs.GT cs.LG math.ST stat.TH | Linear regression is a fundamental building block of statistical data
analysis. It amounts to estimating the parameters of a linear model that maps
input features to corresponding outputs. In the classical setting where the
precision of each data point is fixed, the famous Aitken/Gauss-Markov theorem
in statistics states that generalized least squares (GLS) is a so-called "Best
Linear Unbiased Estimator" (BLUE). In modern data science, however, one often
faces strategic data sources, namely, individuals who incur a cost for
providing high-precision data.
In this paper, we study a setting in which features are public but
individuals choose the precision of the outputs they reveal to an analyst. We
assume that the analyst performs linear regression on this dataset, and
individuals benefit from the outcome of this estimation. We model this scenario
as a game where individuals minimize a cost comprising two components: (a) an
(agent-specific) disclosure cost for providing high-precision data; and (b) a
(global) estimation cost representing the inaccuracy in the linear model
estimate. In this game, the linear model estimate is a public good that
benefits all individuals. We establish that this game has a unique non-trivial
Nash equilibrium. We study the efficiency of this equilibrium and we prove
tight bounds on the price of stability for a large class of disclosure and
estimation costs. Finally, we study the estimator accuracy achieved at
equilibrium. We show that, in general, Aitken's theorem does not hold under
strategic data sources, though it does hold if individuals have identical
disclosure costs (up to a multiplicative factor). When individuals have
non-identical costs, we derive a bound on the improvement of the equilibrium
estimation cost that can be achieved by deviating from GLS, under mild
assumptions on the disclosure cost functions.
| Nicolas Gast, Stratis Ioannidis, Patrick Loiseau, and Benjamin
Roussillon | null | 1309.7824 | null | null |
A Statistical Learning Based System for Fake Website Detection | cs.CY cs.LG | Existing fake website detection systems are unable to effectively detect fake
websites. In this study, we advocate the development of fake website detection
systems that employ classification methods grounded in statistical learning
theory (SLT). Experimental results reveal that a prototype system developed
using SLT-based methods outperforms seven existing fake website detection
systems on a test bed encompassing 900 real and fake websites.
| Ahmed Abbasi, Zhu Zhang and Hsinchun Chen | null | 1309.7958 | null | null |
Exploration and Exploitation in Visuomotor Prediction of Autonomous
Agents | cs.LG cs.CV math.DS | This paper discusses various techniques that let an agent autonomously learn
to predict the effects of its own actions on its sensor data, and evaluates
their usefulness for visual sensors. An Extreme Learning Machine is used
for visuomotor prediction, while various autonomous control techniques that can
aid the prediction process by balancing exploration and exploitation are
discussed and tested in a simple system: a camera moving over a 2D greyscale
image.
| Laurens Bliek | null | 1309.7959 | null | null |
On the Feature Discovery for App Usage Prediction in Smartphones | cs.LG | With the increasing number of mobile Apps developed, they are now closely
integrated into daily life. In this paper, we develop a framework to predict
mobile Apps that are most likely to be used given the current device status
of a smartphone. Such an Apps usage prediction framework is a crucial
prerequisite for fast App launching, intelligent user experience, and power
management of smartphones. By analyzing real App usage log data, we discover
two kinds of features: The Explicit Feature (EF) from sensing readings of
built-in sensors, and the Implicit Feature (IF) from App usage relations. The
IF feature is derived by constructing the proposed App Usage Graph (abbreviated
as AUG) that models App usage transitions. In light of AUG, we are able to
discover usage relations among Apps. Since users may have different usage
behaviors on their smartphones, we further propose one personalized feature
selection algorithm. We explore minimum description length (MDL) from the
training data and select those features which need less length to describe the
training data. The personalized feature selection can successfully reduce the
log size and the prediction time. Finally, we adopt the kNN classification
model to predict Apps usage. Note that through the features selected by the
proposed personalized feature selection algorithm, we only need to keep these
features, which in turn reduces the prediction time and avoids the curse of
dimensionality when using the kNN classifier. We conduct a comprehensive
experimental study based on a real mobile App usage dataset. The results
demonstrate the effectiveness of the proposed framework and show the predictive
capability for App usage prediction.
| Zhung-Xun Liao, Shou-Chung Li, Wen-Chih Peng, Philip S Yu | null | 1309.7982 | null | null |
An information measure for comparing top $k$ lists | cs.IT cs.LG math.IT | Comparing the top $k$ elements between two or more ranked results is a common
task in many contexts and settings. A few measures have been proposed to
compare top $k$ lists with attractive mathematical properties, but they face a
number of pitfalls and shortcomings in practice. This work introduces a new
measure to compare any two top $k$ lists based on the information these
lists convey. Our method investigates the compressibility of the lists, and the
length of the message to losslessly encode them gives a natural and robust
measure of their variability. This information-theoretic measure objectively
reconciles all the main considerations that arise when measuring
(dis-)similarity between lists: the extent of their non-overlapping elements in
each of the lists; the amount of disarray among overlapping elements between
the lists; the measurement of displacement of actual ranks of their overlapping
elements.
| Arun Konagurthu and James Collier | null | 1310.0110 | null | null |
Incoherence-Optimal Matrix Completion | cs.IT cs.LG math.IT stat.ML | This paper considers the matrix completion problem. We show that it is not
necessary to assume joint incoherence, which is a standard but unintuitive and
restrictive condition that is imposed by previous studies. This leads to a
sample complexity bound that is order-wise optimal with respect to the
incoherence parameter (as well as to the rank $r$ and the matrix dimension $n$
up to a log factor). As a consequence, we improve the sample complexity of
recovering a semidefinite matrix from $O(nr^{2}\log^{2}n)$ to $O(nr\log^{2}n)$,
and the highest allowable rank from $\Theta(\sqrt{n}/\log n)$ to
$\Theta(n/\log^{2}n)$. The key step in the proof is to obtain new bounds on the
$\ell_{\infty,2}$-norm, defined as the maximum of the row and column norms of a
matrix. To illustrate the applicability of our techniques, we discuss
extensions to SVD projection, structured matrix completion and semi-supervised
clustering, for which we provide order-wise improvements over existing results.
Finally, we turn to the closely-related problem of low-rank-plus-sparse matrix
decomposition. We show that the joint incoherence condition is unavoidable here
for polynomial-time algorithms conditioned on the Planted Clique conjecture.
This means it is intractable in general to separate a rank-$\omega(\sqrt{n})$
positive semidefinite matrix and a sparse matrix. Interestingly, our results
show that the standard and joint incoherence conditions are associated
respectively with the information (statistical) and computational aspects of
the matrix decomposition problem.
| Yudong Chen | 10.1109/TIT.2015.2415195 | 1310.0154 | null | null |
Deep and Wide Multiscale Recursive Networks for Robust Image Labeling | cs.CV cs.LG | Feedforward multilayer networks trained by supervised learning have recently
demonstrated state of the art performance on image labeling problems such as
boundary prediction and scene parsing. As even very low error rates can limit
practical usage of such systems, methods that perform closer to human accuracy
remain desirable. In this work, we propose a new type of network with the
following properties that address what we hypothesize to be limiting aspects of
existing methods: (1) a `wide' structure with thousands of features, (2) a
large field of view, (3) recursive iterations that exploit statistical
dependencies in label space, and (4) a parallelizable architecture that can be
trained in a fraction of the time compared to benchmark multilayer
convolutional networks. For the specific image labeling problem of boundary
prediction, we also introduce a novel example weighting algorithm that improves
segmentation accuracy. Experiments in the challenging domain of connectomic
reconstruction of neural circuity from 3d electron microscopy data show that
these "Deep And Wide Multiscale Recursive" (DAWMR) networks lead to new levels
of image labeling performance. The highest performing architecture has twelve
layers, interwoven supervised and unsupervised stages, and uses an input field
of view of 157,464 voxels ($54^3$) to make a prediction at each image location.
We present an associated open source software package that enables the simple
and flexible creation of DAWMR networks.
| Gary B. Huang and Viren Jain | null | 1310.0354 | null | null |
Online Learning of Dynamic Parameters in Social Networks | math.OC cs.LG cs.SI stat.ML | This paper addresses the problem of online learning in a dynamic setting. We
consider a social network in which each individual observes a private signal
about the underlying state of the world and communicates with her neighbors at
each time period. Unlike many existing approaches, the underlying state is
dynamic, and evolves according to a geometric random walk. We view the scenario
as an optimization problem where agents aim to learn the true state while
suffering the smallest possible loss. Based on the decomposition of the global
loss function, we introduce two update mechanisms, each of which generates an
estimate of the true state. We establish a tight bound on the rate of change of
the underlying state, under which individuals can track the parameter with a
bounded variance. Then, we characterize explicit expressions for the
steady-state mean-square deviation (MSD) of the estimates from the truth, per
individual. We observe that only one of the estimators recovers the optimal
MSD, which underscores the impact of the objective function decomposition on
the learning quality. Finally, we provide an upper bound on the regret of the
proposed methods, measured as an average of errors in estimating the parameter
in a finite time.
| Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie | null | 1310.0432 | null | null |
Summary Statistics for Partitionings and Feature Allocations | cs.LG stat.ML | Infinite mixture models are commonly used for clustering. One can sample from
the posterior of mixture assignments by Monte Carlo methods or find its maximum
a posteriori solution by optimization. However, in some problems the posterior
is diffuse and it is hard to interpret the sampled partitionings. In this
paper, we introduce novel statistics based on block sizes for representing
sample sets of partitionings and feature allocations. We develop an
element-based definition of entropy to quantify segmentation among their
elements. Then we propose a simple algorithm called entropy agglomeration (EA)
to summarize and visualize this information. Experiments on various infinite
mixture posteriors as well as a feature allocation dataset demonstrate that the
proposed statistics are useful in practice.
| I\c{s}{\i}k Bar{\i}\c{s} Fidaner and Ali Taylan Cemgil | null | 1310.0509 | null | null |
Learning Lambek grammars from proof frames | cs.LG cs.AI cs.LO math.LO | In addition to their limpid interface with semantics, categorial grammars
enjoy another important property: learnability. This was first noticed by
Buszkowski and Penn and further studied by Kanazawa, for Bar-Hillel categorial
grammars.
What about Lambek categorial grammars? In a previous paper we showed that
product free Lambek grammars were learnable from structured sentences, the
structures being incomplete natural deductions. These grammars were shown to be
unlearnable from strings by Foret and Le Nir. In the present paper we show that
Lambek grammars, possibly with product, are learnable from proof frames that
are incomplete proof nets.
After a short reminder on grammatical inference \`a la Gold, we provide an
algorithm that learns Lambek grammars with product from proof frames and we
prove its convergence. We do so for 1-valued, also known as rigid, Lambek
grammars with product, since standard techniques can extend our result to
$k$-valued grammars. Because of the correspondence between cut-free proof nets
and normal natural deductions, our initial result on product free Lambek
grammars can be recovered.
We are sad to dedicate the present paper to Philippe Darondeau, with whom we
started to study such questions in Rennes at the beginning of the millennium,
and who passed away prematurely.
We are glad to dedicate the present paper to Jim Lambek for his 90th birthday:
he is the living proof that research is an eternal learning process.
| Roberto Bonato and Christian Retor\'e | null | 1310.0576 | null | null |
Pseudo-Marginal Bayesian Inference for Gaussian Processes | stat.ML cs.LG stat.ME | The main challenges that arise when adopting Gaussian Process priors in
probabilistic modeling are how to carry out exact Bayesian inference and how to
account for uncertainty on model parameters when making model-based predictions
on out-of-sample data. Using probit regression as an illustrative working
example, this paper presents a general and effective methodology based on the
pseudo-marginal approach to Markov chain Monte Carlo that efficiently addresses
both of these issues. The results presented in this paper show improvements
over existing sampling methods to simulate from the posterior distribution over
the parameters defining the covariance function of the Gaussian Process prior.
This is particularly important as it offers a powerful tool to carry out full
Bayesian inference of Gaussian Process based hierarchic statistical models in
general. The results also demonstrate that Monte Carlo based integration of all
model parameters is actually feasible in this class of models providing a
superior quantification of uncertainty in predictions. Extensive comparisons
with respect to state-of-the-art probabilistic classifiers confirm this
assertion.
| Maurizio Filippone and Mark Girolami | null | 1310.0740 | null | null |
Exact and Stable Covariance Estimation from Quadratic Sampling via
Convex Programming | cs.IT cs.LG math.IT math.NA math.ST stat.ML stat.TH | Statistical inference and information processing of high-dimensional data
often require efficient and accurate estimation of their second-order
statistics. With rapidly changing data, limited processing power and storage at
the acquisition devices, it is desirable to extract the covariance structure
from a single pass over the data and a small number of stored measurements. In
this paper, we explore a quadratic (or rank-one) measurement model which
imposes minimal memory requirements and low computational complexity during the
sampling process, and is shown to be optimal in preserving various
low-dimensional covariance structures. Specifically, four popular structural
assumptions of covariance matrices, namely low rank, Toeplitz low rank,
sparsity, jointly rank-one and sparse structure, are investigated, while
recovery is achieved via convex relaxation paradigms for the respective
structure.
The proposed quadratic sampling framework has a variety of potential
applications including streaming data processing, high-frequency wireless
communication, phase space tomography and phase retrieval in optics, and
non-coherent subspace detection. Our method admits universally accurate
covariance estimation in the absence of noise, as soon as the number of
measurements exceeds the information theoretic limits. We also demonstrate the
robustness of this approach against noise and imperfect structural assumptions.
Our analysis is established upon a novel notion called the mixed-norm
restricted isometry property (RIP-$\ell_{2}/\ell_{1}$), as well as the
conventional RIP-$\ell_{2}/\ell_{2}$ for near-isotropic and bounded
measurements. In addition, our results improve upon the best-known phase
retrieval (including both dense and sparse signals) guarantees using PhaseLift
with a significantly simpler approach.
| Yuxin Chen and Yuejie Chi and Andrea Goldsmith | null | 1310.0807 | null | null |
Electricity Market Forecasting via Low-Rank Multi-Kernel Learning | stat.ML cs.LG cs.SY | The smart grid vision entails advanced information technology and data
analytics to enhance the efficiency, sustainability, and economics of the power
grid infrastructure. Aligned to this end, modern statistical learning tools are
leveraged here for electricity market inference. Day-ahead price forecasting is
cast as a low-rank kernel learning problem. Uniquely exploiting the market
clearing process, congestion patterns are modeled as rank-one components in the
matrix of spatio-temporally varying prices. Through a novel nuclear norm-based
regularization, kernels across pricing nodes and hours can be systematically
selected. Even though market-wide forecasting is beneficial from a learning
perspective, it involves processing high-dimensional market data. The latter
becomes possible after devising a block-coordinate descent algorithm for
solving the non-convex optimization problem involved. The algorithm utilizes
results from block-sparse vector recovery and is guaranteed to converge to a
stationary point. Numerical tests on real data from the Midwest ISO (MISO)
market corroborate the prediction accuracy, computational efficiency, and the
interpretative merits of the developed approach over existing alternatives.
| Vassilis Kekatos and Yu Zhang and Georgios B. Giannakis | 10.1109/JSTSP.2014.2336611 | 1310.0865 | null | null |
Multiple Kernel Learning in the Primal for Multi-modal Alzheimer's
Disease Classification | cs.LG cs.CE | To achieve effective and efficient detection of Alzheimer's disease (AD),
many machine learning methods have been introduced into this realm. However,
the general case of limited training samples, as well as different feature
representations typically makes this problem challenging. In this work, we
propose a novel multiple kernel learning framework to combine multi-modal
features for AD classification, which is scalable and easy to implement.
Contrary to the usual way of solving the problem in the dual space, we look at
the optimization from a new perspective. By conducting Fourier transform on the
Gaussian kernel, we explicitly compute the mapping function, which leads to a
more straightforward solution of the problem in the primal space. Furthermore,
we impose the mixed $L_{21}$ norm constraint on the kernel weights, known as
the group lasso regularization, to enforce group sparsity among different
feature modalities. This actually acts as a role of feature modality selection,
while at the same time exploiting complementary information among different
kernels. Therefore it is able to extract the most discriminative features for
classification. Experiments on the ADNI data set demonstrate the effectiveness
of the proposed method.
| Fayao Liu, Luping Zhou, Chunhua Shen, Jianping Yin | null | 1310.0890 | null | null |
Efficient pedestrian detection by directly optimizing the partial area
under the ROC curve | cs.CV cs.LG | Many typical applications of object detection operate within a prescribed
false-positive range. In this situation the performance of a detector should be
assessed on the basis of the area under the ROC curve over that range, rather
than over the full curve, as the performance outside the range is irrelevant.
This measure is labelled as the partial area under the ROC curve (pAUC).
Effective cascade-based classification, for example, depends on training node
classifiers that achieve the maximal detection rate at a moderate false
positive rate, e.g., around 40% to 50%. We propose a novel ensemble learning
method which achieves a maximal detection rate at a user-defined range of false
positive rates by directly optimizing the partial AUC using structured
learning. By optimizing for different ranges of false positive rates, the
proposed method can be used to train either a single strong classifier or a
node classifier forming part of a cascade classifier. Experimental results on
both synthetic and real-world data sets demonstrate the effectiveness of our
approach, and we show that it is possible to train state-of-the-art pedestrian
detectors using the proposed structured ensemble learning method.
| Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel | null | 1310.0900 | null | null |
Compressed Counting Meets Compressed Sensing | stat.ME cs.DS cs.IT cs.LG math.IT | Compressed sensing (sparse signal recovery) has been a popular and important
research topic in recent years. By observing that natural signals are often
nonnegative, we propose a new framework for nonnegative signal recovery using
Compressed Counting (CC). CC is a technique built on maximally-skewed p-stable
random projections originally developed for data stream computations. Our
recovery procedure is computationally very efficient in that it requires only
one linear scan of the coordinates. Our analysis demonstrates that, when
0<p<=0.5, it suffices to use M = O(C/eps^p log N) measurements so that all
coordinates will be recovered within eps additive precision, in one scan of the
coordinates. The constant C=1 when p->0 and C=pi/2 when p=0.5. In particular,
when p->0 the required number of measurements is essentially M=K\log N, where K
is the number of nonzero coordinates of the signal.
| Ping Li, Cun-Hui Zhang, Tong Zhang | null | 1310.1076 | null | null |
Clustering on Multiple Incomplete Datasets via Collective Kernel
Learning | cs.LG | Multiple datasets containing different types of features may be available for
a given task. For instance, users' profiles can be used to group users for
recommendation systems. In addition, a model can also use users' historical
behaviors and credit history to group users. Each dataset contains different
information and suffices for learning. A number of clustering algorithms on
multiple datasets were proposed during the past few years. These algorithms
assume that at least one dataset is complete. To the best of our knowledge,
none of the previous methods is applicable if there is no complete dataset
available. However, in reality, there are many situations where no dataset is
complete. For example, when building a recommendation system, some new users
may not have a profile or historical behaviors, while others may not have a
credit history. Hence, no available dataset is complete. In order to solve this problem, we
propose an approach called Collective Kernel Learning to infer hidden sample
similarity from multiple incomplete datasets. The idea is to collectively
complete the kernel matrices of the incomplete datasets by optimizing the
alignment of the shared instances of the datasets. Furthermore, a clustering
algorithm is proposed based on the kernel matrix. The experiments on both
synthetic and real datasets demonstrate the effectiveness of the proposed
approach. The proposed clustering algorithm outperforms the comparison
algorithms by as much as two times in normalized mutual information.
| Weixiang Shao (1), Xiaoxiao Shi (1) and Philip S. Yu (1) ((1)
University of Illinois at Chicago) | null | 1310.1177 | null | null |
Labeled Directed Acyclic Graphs: a generalization of context-specific
independence in directed graphical models | stat.ML cs.AI cs.LG | We introduce a novel class of labeled directed acyclic graph (LDAG) models
for finite sets of discrete variables. LDAGs generalize earlier proposals for
allowing local structures in the conditional probability distribution of a
node, such that unrestricted label sets determine which edges can be deleted
from the underlying directed acyclic graph (DAG) for a given context. Several
properties of these models are derived, including a generalization of the
concept of Markov equivalence classes. Efficient Bayesian learning of LDAGs is
enabled by introducing an LDAG-based factorization of the Dirichlet prior for
the model parameters, such that the marginal likelihood can be calculated
analytically. In addition, we develop a novel prior distribution for the model
structures that can appropriately penalize a model for its labeling complexity.
A non-reversible Markov chain Monte Carlo algorithm combined with a greedy hill
climbing approach is used for illustrating the useful properties of LDAG models
for both real and synthetic data sets.
| Johan Pensar, Henrik Nyman, Timo Koski and Jukka Corander | 10.1007/s10618-014-0355-0 | 1310.1187 | null | null |
Learning ambiguous functions by neural networks | cs.NE cs.LG physics.data-an | It is not, in general, possible to have access to all variables that
determine the behavior of a system. Having identified a number of variables
whose values can be accessed, there may still be hidden variables which
influence the dynamics of the system. The result is model ambiguity in the
sense that, for the same (or very similar) input values, different objective
outputs should have been obtained. In addition, the degree of ambiguity may
vary widely across the whole range of input values. Thus, to evaluate the
accuracy of a model it is of utmost importance to create a method to obtain the
degree of reliability of each output result. In this paper we present such a
scheme composed of two coupled artificial neural networks: the first one being
responsible for outputting the predicted value, whereas the other evaluates the
reliability of the output, which is learned from the error values of the first
one. As an illustration, the scheme is applied to a model for tracking slopes
in a straw chamber and to a credit scoring model.
| Rui Ligeiro and R. Vilela Mendes | 10.1007/s00500-017-2525-7 | 1310.1250 | null | null |
Weakly supervised clustering: Learning fine-grained signals from coarse
labels | stat.ML cs.LG | Consider a classification problem where we do not have access to labels for
individual training examples, but only have average labels over subpopulations.
We give practical examples of this setup and show how such a classification
task can usefully be analyzed as a weakly supervised clustering problem. We
propose three approaches to solving the weakly supervised clustering problem,
including a latent variables model that performs well in our experiments. We
illustrate our methods on an analysis of aggregated elections data and an
industry data set that was the original motivation for this research.
| Stefan Wager, Alexander Blocker, Niall Cardin | 10.1214/15-AOAS812 | 1310.1363 | null | null |
Sequential Monte Carlo Bandits | stat.ML cs.LG stat.ME | In this paper we propose a flexible and efficient framework for handling
multi-armed bandits, combining sequential Monte Carlo algorithms with
hierarchical Bayesian modeling techniques. The framework naturally encompasses
restless bandits, contextual bandits, and other bandit variants under a single
inferential model. Despite the model's generality, we propose efficient Monte
Carlo algorithms to make inference scalable, based on recent developments in
sequential Monte Carlo methods. Through two simulation studies, the framework
is shown to outperform other empirical methods, while also naturally scaling to
more complex problems with which existing approaches cannot cope. Additionally,
we successfully apply our framework to online video-based advertising
recommendation, and show its increased efficacy as compared to current state of
the art bandit algorithms.
| Michael Cherkassky and Luke Bornn | null | 1310.1404 | null | null |
Narrowing the Gap: Random Forests In Theory and In Practice | stat.ML cs.LG | Despite widespread interest and practical use, the theoretical properties of
random forests are still not well understood. In this paper we contribute to
this understanding in two ways. We present a new theoretically tractable
variant of random regression forests and prove that our algorithm is
consistent. We also provide an empirical evaluation, comparing our algorithm
and other theoretically tractable random forest models to the random forest
algorithm used in practice. Our experiments provide insight into the relative
importance of different simplifications that theoreticians have made to obtain
tractable models for analysis.
| Misha Denil, David Matheson, Nando de Freitas | null | 1310.1415 | null | null |
Randomized Approximation of the Gram Matrix: Exact Computation and
Probabilistic Bounds | math.NA cs.LG stat.ML | Given a real matrix A with n columns, the problem is to approximate the Gram
product AA^T by c << n weighted outer products of columns of A. Necessary and
sufficient conditions for the exact computation of AA^T (in exact arithmetic)
from c >= rank(A) columns depend on the right singular vector matrix of A. For
a Monte-Carlo matrix multiplication algorithm by Drineas et al. that samples
outer products, we present probabilistic bounds for the 2-norm relative error
due to randomization. The bounds depend on the stable rank or the rank of A,
but not on the matrix dimensions. Numerical experiments illustrate that the
bounds are informative, even for stringent success probabilities and matrices
of small dimension. We also derive bounds for the smallest singular value and
the condition number of matrices obtained by sampling rows from orthonormal
matrices.
| John T. Holodnak, Ilse C. F. Ipsen | null | 1310.1502 | null | null |
Contraction Principle based Robust Iterative Algorithms for Machine
Learning | cs.LG stat.ML | Iterative algorithms are ubiquitous in the field of data mining. Widely known
examples of such algorithms are the least mean square algorithm and the
backpropagation algorithm for neural networks. Our contribution in this paper is
an improvement upon these iterative algorithms in terms of their respective
performance metrics and robustness. This improvement is achieved by a new
scaling factor that multiplies the error term. Our analysis shows that
in essence, we are minimizing the corresponding LASSO cost function, which is
the reason of its increased robustness. We also give closed form expressions
for the number of iterations for convergence and the MSE floor of the original
cost function for a minimum targeted value of the L1 norm. As a concluding
theme based on the stochastic subgradient algorithm, we give a comparison
between the well known Dantzig selector and our algorithm based on contraction
principle. By these simulations we attempt to show the optimality of our
approach for any widely used parent iterative optimization problem.
| Rangeet Mitra, Amit Kumar Mishra | null | 1310.1518 | null | null |
CAM: Causal additive models, high-dimensional order search and penalized
regression | stat.ME cs.LG stat.ML | We develop estimation for potentially high-dimensional additive structural
equation models. A key component of our approach is to decouple order search
among the variables from feature or edge selection in a directed acyclic graph
encoding the causal structure. We show that the former can be done with
nonregularized (restricted) maximum likelihood estimation while the latter can
be efficiently addressed using sparse regression techniques. Thus, we
substantially simplify the problem of structure search and estimation for an
important class of causal models. We establish consistency of the (restricted)
maximum likelihood estimator for low- and high-dimensional scenarios, and we
also allow for misspecification of the error distribution. Furthermore, we
develop an efficient computational algorithm which can deal with many
variables, and the new method's accuracy and performance is illustrated on
simulated and real data.
| Peter B\"uhlmann, Jonas Peters, Jan Ernest | 10.1214/14-AOS1260 | 1310.1533 | null | null |
Learning Hidden Structures with Relational Models by Adequately
Involving Rich Information in A Network | cs.LG cs.SI stat.ML | Effectively modelling hidden structures in a network is very practical but
theoretically challenging. Existing relational models only involve very limited
information, namely the binary directional link data, embedded in a network to
learn hidden networking structures. There is other rich and meaningful
information (e.g., various attributes of entities and more granular information
than binary elements such as "like" or "dislike") missed, which play a critical
role in forming and understanding relations in a network. In this work, we
propose an informative relational model (InfRM) framework to adequately involve
rich information and its granularity in a network, including metadata
information about each entity and various forms of link data. Firstly, an
effective metadata information incorporation method is employed on the prior
information from relational models MMSB and LFRM. This is to encourage the
entities with similar metadata information to have similar hidden structures.
Secondly, we propose various solutions to cater for alternative forms of link
data. Substantial effort has been made to ensure modelling appropriateness and
efficiency, for example by using conjugate priors. We evaluate our framework and
its inference algorithms in different datasets, which shows the generality and
effectiveness of our models in capturing implicit structures in networks.
| Xuhui Fan, Richard Yi Da Xu, Longbing Cao, Yin Song | null | 1310.1545 | null | null |
MINT: Mutual Information based Transductive Feature Selection for
Genetic Trait Prediction | cs.LG cs.CE | Whole genome prediction of complex phenotypic traits using high-density
genotyping arrays has attracted a great deal of attention, as it is relevant to
the fields of plant and animal breeding and genetic epidemiology. As the number
of genotypes is generally much bigger than the number of samples, predictive
models suffer from the curse-of-dimensionality. The curse-of-dimensionality
problem not only affects the computational efficiency of a particular genomic
selection method, but can also lead to poor performance, mainly due to
correlation among markers. In this work we proposed the first transductive
feature selection method based on the mRMR (Max-Relevance and Min-Redundancy)
criterion which we call MINT. We applied MINT on genetic trait prediction
problems and showed that in general MINT is a better feature selection method
than the state-of-the-art inductive method mRMR.
| Dan He, Irina Rish, David Haws, Simon Teyssedre, Zivan Karaman, Laxmi
Parida | null | 1310.1659 | null | null |
A Deep and Tractable Density Estimator | stat.ML cs.LG | The Neural Autoregressive Distribution Estimator (NADE) and its real-valued
version RNADE are competitive density models of multidimensional data across a
variety of domains. These models use a fixed, arbitrary ordering of the data
dimensions. One can easily condition on variables at the beginning of the
ordering, and marginalize out variables at the end of the ordering; however,
other inference tasks require approximate inference. In this work we introduce
an efficient procedure to simultaneously train a NADE model for each possible
ordering of the variables, by sharing parameters across all these models. We
can thus use the most convenient model for each inference task at hand, and
ensembles of such models with different orderings are immediately available.
Moreover, unlike the original NADE, our training procedure scales to deep
models. Empirically, ensembles of Deep NADE models obtain state of the art
density estimation performance.
| Benigno Uria, Iain Murray, Hugo Larochelle | null | 1310.1757 | null | null |
Parallel coordinate descent for the Adaboost problem | cs.LG math.OC stat.ML | We design a randomised parallel version of Adaboost based on previous studies
on parallel coordinate descent. The algorithm uses the fact that the logarithm
of the exponential loss is a function with coordinate-wise Lipschitz continuous
gradient, in order to define the step lengths. We provide the proof of
convergence for this randomised Adaboost algorithm and a theoretical
parallelisation speedup factor. We finally provide numerical examples on
learning problems of various sizes that show that the algorithm is competitive
with concurrent approaches, especially for large scale problems.
| Olivier Fercoq | 10.1109/ICMLA.2013.72 | 1310.1840 | null | null |
Discriminative Features via Generalized Eigenvectors | cs.LG stat.ML | Representing examples in a way that is compatible with the underlying
classifier can greatly enhance the performance of a learning system. In this
paper we investigate scalable techniques for inducing discriminative features
by taking advantage of simple second order structure in the data. We focus on
multiclass classification and show that features extracted from the generalized
eigenvectors of the class conditional second moments lead to classifiers with
excellent empirical performance. Moreover, these features have attractive
theoretical properties, such as inducing representations that are invariant to
linear transformations of the input. We evaluate classifiers built from these
features on three different tasks, obtaining state of the art results.
| Nikos Karampatziakis, Paul Mineiro | null | 1310.1934 | null | null |
Bayesian Optimization With Censored Response Data | cs.AI cs.LG stat.ML | Bayesian optimization (BO) aims to minimize a given blackbox function using a
model that is updated whenever new evidence about the function becomes
available. Here, we address the problem of BO under partially right-censored
response data, where in some evaluations we only obtain a lower bound on the
function value. The ability to handle such response data allows us to
adaptively censor costly function evaluations in minimization problems where
the cost of a function evaluation corresponds to the function value. One
important application giving rise to such censored data is the
runtime-minimizing variant of the algorithm configuration problem: finding
settings of a given parametric algorithm that minimize the runtime required for
solving problem instances from a given distribution. We demonstrate that
terminating slow algorithm runs prematurely and handling the resulting
right-censored observations can substantially improve the state of the art in
model-based algorithm configuration.
| Frank Hutter and Holger Hoos and Kevin Leyton-Brown | null | 1310.1947 | null | null |
Least Squares Revisited: Scalable Approaches for Multi-class Prediction | cs.LG stat.ML | This work provides simple algorithms for multi-class (and multi-label)
prediction in settings where both the number of examples n and the data
dimension d are relatively large. These robust and parameter free algorithms
are essentially iterative least-squares updates and very versatile both in
theory and in practice. On the theoretical front, we present several variants
with convergence guarantees. Owing to their effective use of second-order
structure, these algorithms are substantially better than first-order methods
in many practical scenarios. On the empirical side, we present a scalable
stagewise variant of our approach, which achieves dramatic computational
speedups over popular optimization packages such as Liblinear and Vowpal Wabbit
on standard datasets (MNIST and CIFAR-10), while attaining state-of-the-art
accuracies.
| Alekh Agarwal, Sham M. Kakade, Nikos Karampatziakis, Le Song, Gregory
Valiant | null | 1310.1949 | null | null |
Fast Multi-Instance Multi-Label Learning | cs.LG | In many real-world tasks, particularly those involving data objects with
complicated semantics such as images and texts, one object can be represented
by multiple instances and simultaneously be associated with multiple labels.
Such tasks can be formulated as multi-instance multi-label learning (MIML)
problems, and have been extensively studied during the past few years. Existing
MIML approaches have been found useful in many applications; however, most of
them can only handle moderate-sized data. To efficiently handle large data
sets, in this paper we propose the MIMLfast approach, which first constructs a
low-dimensional subspace shared by all labels, and then trains label specific
linear models to optimize approximated ranking loss via stochastic gradient
descent. Although the MIML problem is complicated, MIMLfast is able to achieve
excellent performance by exploiting label relations with shared space and
discovering sub-concepts for complicated labels. Experiments show that the
performance of MIMLfast is highly competitive to state-of-the-art techniques,
whereas its time cost is much less; particularly, on a data set with 20K bags
and 180K instances, MIMLfast is more than 100 times faster than existing MIML
approaches. On a larger data set where none of the existing approaches can
return results in 24 hours, MIMLfast takes only 12 minutes. Moreover, our
approach is able to identify the most representative instance for each label,
thus providing a chance to understand the relation between input patterns and output
label semantics.
| Sheng-Jun Huang and Zhi-Hua Zhou | null | 1310.2049 | null | null |
Distributed Coordinate Descent Method for Learning with Big Data | stat.ML cs.DC cs.LG math.OC | In this paper we develop and analyze Hydra: HYbriD cooRdinAte descent method
for solving loss minimization problems with big data. We initially partition
the coordinates (features) and assign each partition to a different node of a
cluster. At every iteration, each node picks a random subset of the coordinates
from those it owns, independently from the other computers, and in parallel
computes and applies updates to the selected coordinates based on a simple
closed-form formula. We give bounds on the number of iterations sufficient to
approximately solve the problem with high probability, and show how it depends
on the data and on the partitioning. We perform numerical experiments with a
LASSO instance described by a 3TB matrix.
| Peter Richt\'arik and Martin Tak\'a\v{c} | null | 1310.2059 | null | null |
Predicting Students' Performance Using ID3 And C4.5 Classification
Algorithms | cs.CY cs.LG | An educational institution needs approximate prior knowledge of
enrolled students to predict their performance in future academics. This helps
them to identify promising students and also provides them an opportunity to
pay attention to and improve those who would probably get lower grades. As a
solution, we have developed a system which can predict the performance of
students from their previous performances using concepts of data mining
techniques under Classification. We have analyzed the data set containing
information about students, such as gender, marks scored in the board
examinations of classes X and XII, marks and rank in entrance examinations and
results in the first year of the previous batch of students. By applying the ID3
(Iterative Dichotomiser 3) and C4.5 classification algorithms to this data, we
have predicted the general and individual performance of freshly admitted
students in future examinations.
| Kalpesh Adhatrao, Aditya Gaykar, Amiraj Dhawan, Rohit Jha and Vipul
Honrao | 10.5121/ijdkp.2013.3504 | 1310.2071 | null | null |
Semidefinite Programming Based Preconditioning for More Robust
Near-Separable Nonnegative Matrix Factorization | stat.ML cs.LG math.OC | Nonnegative matrix factorization (NMF) under the separability assumption can
provably be solved efficiently, even in the presence of noise, and has been
shown to be a powerful technique in document classification and hyperspectral
unmixing. This problem is referred to as near-separable NMF and requires that
there exists a cone spanned by a small subset of the columns of the input
nonnegative matrix approximately containing all columns. In this paper, we
propose a preconditioning based on semidefinite programming making the input
matrix well-conditioned. This in turn can improve significantly the performance
of near-separable NMF algorithms which is illustrated on the popular successive
projection algorithm (SPA). The new preconditioned SPA is provably more robust
to noise, and outperforms SPA on several synthetic data sets. We also show how
an active-set method allows us to apply the preconditioning on large-scale
real-world hyperspectral images.
| Nicolas Gillis and Stephen A. Vavasis | 10.1137/130940670 | 1310.2273 | null | null |
Improved Bayesian Logistic Supervised Topic Models with Data
Augmentation | cs.LG cs.CL stat.AP stat.ML | Supervised topic models with a logistic likelihood have two issues that
potentially limit their practical use: 1) response variables are usually
over-weighted by document word counts; and 2) existing variational inference
methods make strict mean-field assumptions. We address these issues by: 1)
introducing a regularization constant to better balance the two parts based on
an optimization formulation of Bayesian inference; and 2) developing a simple
Gibbs sampling algorithm by introducing auxiliary Polya-Gamma variables and
collapsing out Dirichlet variables. Our augment-and-collapse sampling algorithm
has analytical forms of each conditional distribution without making any
restricting assumptions and can be easily parallelized. Empirical results
demonstrate significant improvements on prediction performance and time
efficiency.
| Jun Zhu, Xun Zheng, Bo Zhang | null | 1310.2408 | null | null |
Discriminative Relational Topic Models | cs.LG cs.IR stat.ML | Many scientific and engineering fields involve analyzing network data. For
document networks, relational topic models (RTMs) provide a probabilistic
generative process to describe both the link structure and document contents,
and they have shown promise on predicting network structures and discovering
latent topic representations. However, existing RTMs have limitations in both
the restricted model expressiveness and incapability of dealing with imbalanced
network data. To expand the scope and improve the inference accuracy of RTMs,
this paper presents three extensions: 1) unlike the common link likelihood with
a diagonal weight matrix that allows the-same-topic interactions only, we
generalize it to use a full weight matrix that captures all pairwise topic
interactions and is applicable to asymmetric networks; 2) instead of doing
standard Bayesian inference, we perform regularized Bayesian inference
(RegBayes) with a regularization parameter to deal with the imbalanced link
structure issue in common real networks and improve the discriminative ability
of learned latent representations; and 3) instead of doing variational
approximation with strict mean-field assumptions, we present collapsed Gibbs
sampling algorithms for the generalized relational topic models by exploring
data augmentation without making restricting assumptions. Under the generic
RegBayes framework, we carefully investigate two popular discriminative loss
functions, namely, the logistic log-loss and the max-margin hinge loss.
Experimental results on several real network datasets demonstrate the
significance of these extensions on improving the prediction performance, and
the time efficiency can be dramatically improved with a simple fast
approximation method.
| Ning Chen, Jun Zhu, Fei Xia, Bo Zhang | null | 1310.2409 | null | null |
M-Power Regularized Least Squares Regression | stat.ML cs.LG math.PR | Regularization is used to find a solution that both fits the data and is
sufficiently smooth, and thereby is very effective for designing and refining
learning algorithms. But the influence of its exponent remains poorly
understood. In particular, it is unclear how the exponent of the reproducing
kernel Hilbert space~(RKHS) regularization term affects the accuracy and the
efficiency of kernel-based learning algorithms. Here we consider regularized
least squares regression (RLSR) with an RKHS regularization raised to the power
of m, where m is a variable real exponent. We design an efficient algorithm for
solving the associated minimization problem, we provide a theoretical analysis
of its stability, and we compare its advantage with respect to computational
complexity, speed of convergence and prediction accuracy to the classical
kernel ridge regression algorithm where the regularization exponent m is fixed
at 2. Our results show that the m-power RLSR problem can be solved efficiently,
and support the suggestion that one can use a regularization term that grows
significantly slower than the standard quadratic growth in the RKHS norm.
| Julien Audiffren (LIF), Hachem Kadri (LIF) | null | 1310.2451 | null | null |
A Sparse and Adaptive Prior for Time-Dependent Model Parameters | stat.ML cs.AI cs.LG | We consider the scenario where the parameters of a probabilistic model are
expected to vary over time. We construct a novel prior distribution that
promotes sparsity and adapts the strength of correlation between parameters at
successive timesteps, based on the data. We derive approximate variational
inference procedures for learning and prediction with this prior. We test the
approach on two tasks: forecasting financial quantities from relevant text, and
modeling language contingent on time-varying financial measurements.
| Dani Yogatama and Bryan R. Routledge and Noah A. Smith | null | 1310.2627 | null | null |
Localized Iterative Methods for Interpolation in Graph Structured Data | cs.LG | In this paper, we present two localized graph filtering based methods for
interpolating graph signals defined on the vertices of arbitrary graphs from
only a partial set of samples. The first method is an extension of previous
work on reconstructing bandlimited graph signals from partially observed
samples. The iterative graph filtering approach very closely approximates the
solution proposed in that work, while being computationally more efficient.
As an alternative, we propose a regularization based framework in which we
define the cost of reconstruction to be a combination of smoothness of the
graph signal and the reconstruction error with respect to the known samples,
and find solutions that minimize this cost. We provide both a closed form
solution and a computationally efficient iterative solution of the optimization
problem. The experimental results on the recommendation system datasets
demonstrate effectiveness of the proposed methods.
| Sunil K. Narang, Akshay Gadde, Eduard Sanou and Antonio Ortega | null | 1310.2646 | null | null |
Analyzing Big Data with Dynamic Quantum Clustering | physics.data-an cs.LG physics.comp-ph | How does one search for a needle in a multi-dimensional haystack without
knowing what a needle is and without knowing if there is one in the haystack?
This kind of problem requires a paradigm shift - away from hypothesis driven
searches of the data - towards a methodology that lets the data speak for
itself. Dynamic Quantum Clustering (DQC) is such a methodology. DQC is a
powerful visual method that works with big, high-dimensional data. It exploits
variations of the density of the data (in feature space) and unearths subsets
of the data that exhibit correlations among all the measured variables. The
outcome of a DQC analysis is a movie that shows how and why sets of data-points
are eventually classified as members of simple clusters or as members of - what
we call - extended structures. This allows DQC to be successfully used in a
non-conventional exploratory mode where one searches data for unexpected
information without the need to model the data. We show how this works for big,
complex, real-world datasets that come from five distinct fields: i.e., x-ray
nano-chemistry, condensed matter, biology, seismology and finance. These
studies show how DQC excels at uncovering unexpected, small - but meaningful -
subsets of the data that contain important information. We also establish an
important new result: namely, that big, complex datasets often contain
interesting structures that will be missed by many conventional clustering
techniques. Experience shows that these structures appear frequently enough
that it is crucial to know they can exist, and that when they do, they encode
important hidden information. In short, we not only demonstrate that DQC can be
flexibly applied to datasets that present significantly different challenges,
we also show how a simple analysis can be used to look for the needle in the
haystack, determine what it is, and find what this means.
| M. Weinstein, F. Meirer, A. Hume, Ph. Sciau, G. Shaked, R. Hofstetter,
E. Persi, A. Mehta, and D. Horn | null | 1310.2700 | null | null |
Lemma Mining over HOL Light | cs.AI cs.DL cs.LG cs.LO | Large formal mathematical libraries consist of millions of atomic inference
steps that give rise to a corresponding number of proved statements (lemmas).
Analogously to the informal mathematical practice, only a tiny fraction of such
statements is named and re-used in later proofs by formal mathematicians. In
this work, we suggest and implement criteria defining the estimated usefulness
of the HOL Light lemmas for proving further theorems. We use these criteria to
mine the large inference graph of all lemmas in the core HOL Light library,
adding thousands of the best lemmas to the pool of named statements that can be
re-used in later proofs. The usefulness of the new lemmas is then evaluated by
comparing the performance of automated proving of the core HOL Light theorems
with and without such added lemmas.
| Cezary Kaliszyk and Josef Urban | null | 1310.2797 | null | null |
MizAR 40 for Mizar 40 | cs.AI cs.DL cs.LG cs.LO cs.MS | As a present to Mizar on its 40th anniversary, we develop an AI/ATP system
that in 30 seconds of real time on a 14-CPU machine automatically proves 40% of
the theorems in the latest official version of the Mizar Mathematical Library
(MML). This is a considerable improvement over previous performance of large-
theory AI/ATP methods measured on the whole MML. To achieve that, a large suite
of AI/ATP methods is employed and further developed. We implement the most
useful methods efficiently, to scale them to the 150000 formulas in MML. This
reduces the training times over the corpus to 1-3 seconds, allowing a simple
practical deployment of the methods in the online automated reasoning service
for the Mizar users (MizAR).
| Cezary Kaliszyk and Josef Urban | 10.1007/s10817-015-9330-8 | 1310.2805 | null | null |
Gibbs Max-margin Topic Models with Data Augmentation | stat.ML cs.LG stat.CO stat.ME | Max-margin learning is a powerful approach to building classifiers and
structured output predictors. Recent work on max-margin supervised topic models
has successfully integrated it with Bayesian topic models to discover
discriminative latent semantic structures and make accurate predictions for
unseen testing data. However, the resulting learning problems are usually hard
to solve because of the non-smoothness of the margin loss. Existing approaches
to building max-margin supervised topic models rely on an iterative procedure
to solve multiple latent SVM subproblems with additional mean-field assumptions
on the desired posterior distributions. This paper presents an alternative
approach by defining a new max-margin loss. Namely, we present Gibbs max-margin
supervised topic models, a latent variable Gibbs classifier to discover hidden
topic representations for various tasks, including classification, regression
and multi-task learning. Gibbs max-margin supervised topic models minimize an
expected margin loss, which is an upper bound of the existing margin loss
derived from an expected prediction rule. By introducing augmented variables
and integrating out the Dirichlet variables analytically by conjugacy, we
develop simple Gibbs sampling algorithms with no restricting assumptions and no
need to solve SVM subproblems. Furthermore, each step of the
"augment-and-collapse" Gibbs sampling algorithms has an analytical conditional
distribution, from which samples can be easily drawn. Experimental results
demonstrate significant improvements on time efficiency. The classification
performance is also significantly improved over competitors on binary,
multi-class and multi-label classification tasks.
| Jun Zhu, Ning Chen, Hugh Perkins, Bo Zhang | null | 1310.2816 | null | null |
Feature Selection with Annealing for Computer Vision and Big Data
Learning | stat.ML cs.CV cs.LG math.ST stat.TH | Many computer vision and medical imaging problems are faced with learning
from large-scale datasets, with millions of observations and features. In this
paper we propose a novel efficient learning scheme that tightens a sparsity
constraint by gradually removing variables based on a criterion and a schedule.
The attractive fact that the problem size keeps dropping throughout the
iterations makes it particularly suitable for big data learning. Our approach
applies generically to the optimization of any differentiable loss function,
and finds applications in regression, classification and ranking. The resultant
algorithms build variable screening into estimation and are extremely simple to
implement. We provide theoretical guarantees of convergence and selection
consistency. In addition, one dimensional piecewise linear response functions
are used to account for nonlinearity and a second order prior is imposed on
these functions to avoid overfitting. Experiments on real and synthetic data
show that the proposed method compares very well with other state of the art
methods in regression, classification and ranking while being computationally
very efficient and scalable.
| Adrian Barbu, Yiyuan She, Liangjing Ding, Gary Gramajo | 10.1109/TPAMI.2016.2544315 | 1310.2880 | null | null |
Feedback Detection for Live Predictors | stat.ME cs.LG stat.ML | A predictor that is deployed in a live production system may perturb the
features it uses to make predictions. Such a feedback loop can occur, for
example, when a model that predicts a certain type of behavior ends up causing
the behavior it predicts, thus creating a self-fulfilling prophecy. In this
paper we analyze predictor feedback detection as a causal inference problem,
and introduce a local randomization scheme that can be used to detect
non-linear feedback in real-world problems. We conduct a pilot study for our
proposed methodology using a predictive system currently deployed as a part of
a search engine.
| Stefan Wager, Nick Chamandy, Omkar Muralidharan, and Amir Najmi | null | 1310.2931 | null | null |
Spontaneous Analogy by Piggybacking on a Perceptual System | cs.AI cs.LG | Most computational models of analogy assume they are given a delineated
source domain and often a specified target domain. These systems do not address
how analogs can be isolated from large domains and spontaneously retrieved from
long-term memory, a process we call spontaneous analogy. We present a system
that represents relational structures as feature bags. Using this
representation, our system leverages perceptual algorithms to automatically
create an ontology of relational structures and to efficiently retrieve analogs
for new relational structures from long-term memory. We provide a demonstration
of our approach that takes a set of unsegmented stories, constructs an ontology
of analogical schemas (corresponding to plot devices), and uses this ontology
to efficiently find analogs within new stories, yielding significant
time-savings over linear analog retrieval at a small accuracy cost.
| Marc Pickett and David W. Aha | null | 1310.2955 | null | null |
Scaling Graph-based Semi Supervised Learning to Large Number of Labels
Using Count-Min Sketch | cs.LG | Graph-based Semi-supervised learning (SSL) algorithms have been successfully
used in a large number of applications. These methods classify initially
unlabeled nodes by propagating label information over the structure of graph
starting from seed nodes. Graph-based SSL algorithms usually scale linearly
with the number of distinct labels (m), and require O(m) space on each node.
Unfortunately, there exist many applications of practical significance with
very large m over large graphs, demanding better space and time complexity. In
this paper, we propose MAD-SKETCH, a novel graph-based SSL algorithm which
compactly stores label distribution on each node using Count-min Sketch, a
randomized data structure. We present theoretical analysis showing that under
mild conditions, MAD-SKETCH can reduce space complexity at each node from O(m)
to O(log m), and achieve similar savings in time complexity as well. We support
our analysis through experiments on multiple real world datasets. We observe
that MAD-SKETCH achieves similar performance as existing state-of-the-art
graph-based SSL algorithms, while requiring a smaller memory footprint and at
the same time achieving up to 10x speedup. We find that MAD-SKETCH is able to
scale to datasets with one million labels, which is beyond the scope of
existing graph-based SSL algorithms.
| Partha Pratim Talukdar, William Cohen | null | 1310.2959 | null | null |
Bandits with Switching Costs: T^{2/3} Regret | cs.LG math.PR | We study the adversarial multi-armed bandit problem in a setting where the
player incurs a unit cost each time he switches actions. We prove that the
player's $T$-round minimax regret in this setting is
$\widetilde{\Theta}(T^{2/3})$, thereby closing a fundamental gap in our
understanding of learning with bandit feedback. In the corresponding
full-information version of the problem, the minimax regret is known to grow at
a much slower rate of $\Theta(\sqrt{T})$. The difference between these two
rates provides the \emph{first} indication that learning with bandit feedback
can be significantly harder than learning with full-information feedback
(previous results only showed a different dependence on the number of actions,
but not on $T$).
In addition to characterizing the inherent difficulty of the multi-armed
bandit problem with switching costs, our results also resolve several other
open problems in online learning. One direct implication is that learning with
bandit feedback against bounded-memory adaptive adversaries has a minimax
regret of $\widetilde{\Theta}(T^{2/3})$. Another implication is that the
minimax regret of online learning in adversarial Markov decision processes
(MDPs) is $\widetilde{\Theta}(T^{2/3})$. The key to all of our results is a new
randomized construction of a multi-scale random walk, which is of independent
interest and likely to prove useful in additional settings.
| Ofer Dekel, Jian Ding, Tomer Koren, Yuval Peres | null | 1310.2997 | null | null |
A Bayesian Network View on Acoustic Model-Based Techniques for Robust
Speech Recognition | cs.LG cs.CL stat.ML | This article provides a unifying Bayesian network view on various approaches
for acoustic model adaptation, missing feature, and uncertainty decoding that
are well-known in the literature of robust automatic speech recognition. The
representatives of these classes can often be deduced from a Bayesian network
that extends the conventional hidden Markov models used in speech recognition.
These extensions, in turn, can in many cases be motivated from an underlying
observation model that relates clean and distorted feature vectors. By
converting the observation models into a Bayesian network representation, we
formulate the corresponding compensation rules leading to a unified view on
known derivations as well as to new formulations for certain approaches. The
generic Bayesian perspective provided in this contribution thus highlights
structural differences and similarities between the analyzed approaches.
| Roland Maas, Christian Huemmer, Armin Sehr, Walter Kellermann | null | 1310.3099 | null | null |
Deep Multiple Kernel Learning | stat.ML cs.LG | Deep learning methods have predominantly been applied to large artificial
neural networks. Despite their state-of-the-art performance, these large
networks typically do not generalize well to datasets with limited sample
sizes. In this paper, we take a different approach by learning multiple layers
of kernels. We combine kernels at each layer and then optimize over an estimate
of the support vector machine leave-one-out error rather than the dual
objective function. Our experiments on a variety of datasets show that each
layer successively increases performance with only a few base kernels.
| Eric Strobl, Shyam Visweswaran | 10.1109/ICMLA.2013.84 | 1310.3101 | null | null |
Visualizing Bags of Vectors | cs.IR cs.CL cs.LG | The motivation of this work is two-fold - a) to compare between two different
modes of visualizing data that exists in a bag of vectors format b) to propose
a theoretical model that supports a new mode of visualizing data. Visualizing
high dimensional data can be achieved using Minimum Volume Embedding, but the
data has to exist in a format suitable for computing similarities while
preserving local distances. This paper compares the visualization between two
methods of representing data and also proposes a new method providing sample
visualizations for that method.
| Sriramkumar Balasubramanian and Raghuram Reddy Nagireddy | null | 1310.3333 | null | null |
Joint Indoor Localization and Radio Map Construction with Limited
Deployment Load | cs.NI cs.LG | One major bottleneck in the practical implementation of received signal
strength (RSS) based indoor localization systems is the extensive deployment
efforts required to construct the radio maps through fingerprinting. In this
paper, we aim to design an indoor localization scheme that can be directly
employed without building a full fingerprinted radio map of the indoor
environment. By accumulating the information of localized RSSs, this scheme can
also simultaneously construct the radio map with limited calibration. To design
this scheme, we employ a source data set that possesses the same spatial
correlation of the RSSs in the indoor environment under study. The knowledge of
this data set is then transferred to a limited number of calibration
fingerprints and one or several RSS observations with unknown locations, in
order to perform direct localization of these observations using manifold
alignment. We test two different source data sets, namely a simulated radio
propagation map and the environment's plan coordinates. For moving users, we
exploit the correlation of their observations to improve the localization
accuracy. The online testing in two indoor environments shows that the plan
coordinates achieve better results than the simulated radio maps, and a
negligible degradation with 70-85% reduction in calibration load.
| Sameh Sorour, Yves Lostanlen, Shahrokh Valaee | null | 1310.3407 | null | null |
Predicting Social Links for New Users across Aligned Heterogeneous
Social Networks | cs.SI cs.LG physics.soc-ph | Online social networks have gained great success in recent years and many of
them involve multiple kinds of nodes and complex relationships. Among these
relationships, social links among users are of great importance. Many existing
link prediction methods focus on predicting social links that will appear in
the future among all users based upon a snapshot of the social network. In
real-world social networks, many new users are joining in the service every
day. Predicting links for new users is more important. Different from
conventional link prediction problems, link prediction for new users is more
challenging due to the following reasons: (1) differences in information
distributions between new users and the existing active users (i.e., old
users); (2) lack of information from the new users in the network. We propose a
link prediction method called SCAN-PS (Supervised Cross Aligned Networks link
prediction with Personalized Sampling) to solve the link prediction problem
for new users with information transferred from both the existing active users
in the target network and other source networks through aligned accounts. We
proposed a within-target-network personalized sampling method to process the
existing active users' information in order to accommodate the differences in
information distributions before the intra-network knowledge transfer. SCAN-PS
can also exploit information in other source networks, where the user accounts
are aligned with the target network. In this way, SCAN-PS could solve the cold
start problem when information of these new users is totally absent in the target
network.
| Jiawei Zhang, Xiangnan Kong, Philip S. Yu | null | 1310.3492 | null | null |
Identifying Influential Entries in a Matrix | cs.NA cs.LG stat.ML | For any matrix A in R^(m x n) of rank \rho, we present a probability
distribution over the entries of A (the element-wise leverage scores of
equation (2)) that reveals the most influential entries in the matrix. From a
theoretical perspective, we prove that sampling at most s = O ((m + n) \rho^2
ln (m + n)) entries of the matrix (see eqn. (3) for the precise value of s)
with respect to these scores and solving the nuclear norm minimization problem
on the sampled entries, reconstructs A exactly. To the best of our knowledge,
these are the strongest theoretical guarantees on matrix completion without any
incoherence assumptions on the matrix A. From an experimental perspective, we
show that entries corresponding to high element-wise leverage scores reveal
structural properties of the data matrix that are of interest to domain
scientists.
| Abhisek Kundu, Srinivas Nambirajan, Petros Drineas | null | 1310.3556 | null | null |
An Extreme Learning Machine Approach to Predicting Near Chaotic HCCI
Combustion Phasing in Real-Time | cs.LG cs.CE | Fuel efficient Homogeneous Charge Compression Ignition (HCCI) engine
combustion timing predictions must contend with non-linear chemistry,
non-linear physics, period doubling bifurcation(s), turbulent mixing, model
parameters that can drift day-to-day, and air-fuel mixture state information
that cannot typically be resolved on a cycle-to-cycle basis, especially during
transients. In previous work, an abstract cycle-to-cycle mapping function
coupled with $\epsilon$-Support Vector Regression was shown to predict
experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. The main
limitation of the previous approach was that a partially acausal, randomly
sampled training dataset was used to train proof of concept offline
predictions. The objective of this paper is to address this limitation by
proposing a new online adaptive Extreme Learning Machine (ELM) extension named
Weighted Ring-ELM. This extension enables fully causal combustion timing
predictions at randomly chosen engine set points, and is shown to achieve
results that are as good as or better than the previous offline method. The
broader objective of this approach is to enable a new class of real-time model
predictive control strategies for high variability HCCI and, ultimately, to
bring HCCI's low engine-out NOx and reduced CO2 emissions to production
engines.
| Adam Vaughan and Stanislav V. Bohac | null | 1310.3567 | null | null |
Predicting college basketball match outcomes using machine learning
techniques: some results and lessons learned | cs.LG stat.AP | Most existing work on predicting NCAAB matches has been developed in a
statistical context. Trusting the capabilities of ML techniques, particularly
classification learners, to uncover the importance of features and learn their
relationships, we evaluated a number of different paradigms on this task. In
this paper, we summarize our work, pointing out that attributes seem to be more
important than models, and that there seems to be an upper limit to predictive
quality.
| Albrecht Zimmermann, Sruthi Moorthy and Zifan Shi | null | 1310.3607 | null | null |
Scalable Verification of Markov Decision Processes | cs.DS cs.DC cs.LG cs.LO | Markov decision processes (MDP) are useful to model concurrent process
optimisation problems, but verifying them with numerical methods is often
intractable. Existing approximative approaches do not scale well and are
limited to memoryless schedulers. Here we present the basis of scalable
verification for MDPs, using an O(1) memory representation of
history-dependent schedulers. We thus facilitate scalable learning techniques
and the use of massively parallel verification.
| Axel Legay, Sean Sedwards and Louis-Marie Traonouez | null | 1310.3609 | null | null |
Variance Adjusted Actor Critic Algorithms | stat.ML cs.LG cs.SY | We present an actor-critic framework for MDPs where the objective is the
variance-adjusted expected return. Our critic uses linear function
approximation, and we extend the concept of compatible features to the
variance-adjusted setting. We present an episodic actor-critic algorithm and
show that it converges almost surely to a locally optimal point of the
objective function.
| Aviv Tamar, Shie Mannor | null | 1310.3697 | null | null |
Ridge Fusion in Statistical Learning | stat.ML cs.LG stat.CO | We propose a penalized likelihood method to jointly estimate multiple
precision matrices for use in quadratic discriminant analysis and model based
clustering. A ridge penalty and a ridge fusion penalty are used to introduce
shrinkage and promote similarity between precision matrix estimates. Block-wise
coordinate descent is used for optimization, and validation likelihood is used
for tuning parameter selection. Our method is applied in quadratic discriminant
analysis and semi-supervised model based clustering.
| Bradley S. Price, Charles J. Geyer, and Adam J. Rothman | null | 1310.3892 | null | null |
Demystifying Information-Theoretic Clustering | cs.LG cs.IT math.IT physics.data-an stat.ML | We propose a novel method for clustering data which is grounded in
information-theoretic principles and requires no parametric assumptions.
Previous attempts to use information theory to define clusters in an
assumption-free way are based on maximizing mutual information between data and
cluster labels. We demonstrate that this intuition suffers from a fundamental
conceptual flaw that causes clustering performance to deteriorate as the amount
of data increases. Instead, we return to the axiomatic foundations of
information theory to define a meaningful clustering measure based on the
notion of consistency under coarse-graining for finite data.
| Greg Ver Steeg, Aram Galstyan, Fei Sha, Simon DeDeo | null | 1310.4210 | null | null |
Exact Learning of RNA Energy Parameters From Structure | q-bio.BM cs.LG | We consider the problem of exact learning of parameters of a linear RNA
energy model from secondary structure data. A necessary and sufficient
condition for learnability of parameters is derived, which is based on
computing the convex hull of union of translated Newton polytopes of input
sequences. The set of learned energy parameters is characterized as the convex
cone generated by the normal vectors to those facets of the resulting polytope
that are incident to the origin. In practice, the sufficient condition may not
be satisfied by the entire training data set; hence, computing a maximal subset
of training data for which the sufficient condition is satisfied is often
desired. We show that this problem is NP-hard in general for an arbitrary
dimensional feature space. Using a randomized greedy algorithm, we select a
subset of the RNA STRAND v2.0 database that satisfies the sufficient condition
for the separate A-U, C-G, G-U base-pair counting model. The set of learned energy
parameters includes experimentally measured energies of A-U, C-G, and G-U
pairs; hence, our parameter set is in agreement with the Turner parameters.
| Hamidreza Chitsaz, Mohammad Aminisharifabad | null | 1310.4223 | null | null |
On Measure Concentration of Random Maximum A-Posteriori Perturbations | cs.LG math.PR | The maximum a-posteriori (MAP) perturbation framework has emerged as a useful
approach for inference and learning in high dimensional complex models. By
maximizing a randomly perturbed potential function, MAP perturbations generate
unbiased samples from the Gibbs distribution. Unfortunately, the computational
cost of generating so many high-dimensional random variables can be
prohibitive. More efficient algorithms use sequential sampling strategies based
on the expected value of low dimensional MAP perturbations. This paper develops
new measure concentration inequalities that bound the number of samples needed
to estimate such expected values. Applying the general result to MAP
perturbations can yield a more efficient algorithm to approximate sampling from
the Gibbs distribution. The measure concentration result is of general interest
and may be applicable to other areas involving expected estimations.
| Francesco Orabona, Tamir Hazan, Anand D. Sarwate, Tommi Jaakkola | null | 1310.4227 | null | null |
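The unbiased-sampling claim rests on the Gumbel-max trick: maximizing a potential function perturbed by i.i.d. Gumbel noise yields an exact sample from the Gibbs distribution. A minimal sketch for a single discrete variable (the full framework perturbs high-dimensional models; the three potentials below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([1.0, 0.0, -1.0])       # potentials of 3 discrete states

def map_perturbation_sample(theta, rng):
    # Gumbel-max trick: argmax of (potential + i.i.d. Gumbel noise)
    # is an exact sample from exp(theta)/Z, the Gibbs distribution.
    g = rng.gumbel(size=theta.shape)
    return int(np.argmax(theta + g))

samples = [map_perturbation_sample(theta, rng) for _ in range(20000)]
freq = np.bincount(samples, minlength=3) / len(samples)
gibbs = np.exp(theta) / np.exp(theta).sum()
```

The empirical frequencies match the Gibbs probabilities; the measure-concentration results in the paper bound how many such perturbation samples are needed when estimating expectations of low-dimensional MAP perturbations.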
Multilabel Consensus Classification | stat.ML cs.LG | In the era of big data, a large amount of noisy and incomplete data can be
collected from multiple sources for prediction tasks. Combining multiple models
or data sources helps to counteract the effects of low data quality and the
bias of any single model or data source, and thus can improve the robustness
and the performance of predictive models. Owing to privacy, storage, and bandwidth
considerations, in certain circumstances one has to combine the predictions
from multiple models or data sources to obtain the final predictions without
accessing the raw data. Consensus-based prediction combination algorithms are
effective for such situations. However, current research on prediction
combination focuses on the single label setting, where an instance can have one
and only one label. Nonetheless, data nowadays are usually multilabeled, such
that more than one label has to be predicted at the same time. Direct
applications of existing prediction combination methods to multilabel settings
can lead to degenerated performance. In this paper, we address the challenges
of combining predictions from multiple multilabel classifiers and propose two
novel algorithms, MLCM-r (MultiLabel Consensus Maximization for ranking) and
MLCM-a (MLCM for microAUC). These algorithms can capture label correlations
that are common in multilabel classifications, and optimize corresponding
performance metrics. Experimental results on popular multilabel classification
tasks verify the theoretical analysis and effectiveness of the proposed
methods.
| Sihong Xie and Xiangnan Kong and Jing Gao and Wei Fan and Philip S.Yu | null | 1310.4252 | null | null |
Bayesian Information Sharing Between Noise And Regression Models
Improves Prediction of Weak Effects | stat.ML cs.LG | We consider the prediction of weak effects in a multiple-output regression
setup, when covariates are expected to explain a small amount, less than
$\approx 1\%$, of the variance of the target variables. To facilitate the
prediction of the weak effects, we constrain our model structure by introducing
a novel Bayesian approach of sharing information between the regression model
and the noise model. Further reduction of the effective number of parameters is
achieved by introducing an infinite shrinkage prior and group sparsity in the
context of the Bayesian reduced rank regression, and using the Bayesian
infinite factor model as a flexible low-rank noise model. In our experiments
the model incorporating the novelties outperformed alternatives in genomic
prediction of rich phenotype data. In particular, the information sharing
between the noise and regression models led to significant improvement in
prediction accuracy.
| Jussi Gillberg, Pekka Marttinen, Matti Pirinen, Antti J Kangas, Pasi
Soininen, Marjo-Riitta J\"arvelin, Mika Ala-Korpela, Samuel Kaski | null | 1310.4362 | null | null |
Inference, Sampling, and Learning in Copula Cumulative Distribution
Networks | stat.ML cs.LG | The cumulative distribution network (CDN) is a recently developed class of
probabilistic graphical models (PGMs) permitting a copula factorization, in
which the CDF, rather than the density, is factored. Despite there being much
recent interest within the machine learning community about copula
representations, there has been scarce research into the CDN, its amalgamation
with copula theory, and no evaluation of its performance. Algorithms for
inference, sampling, and learning in these models are underdeveloped compared
to those of other PGMs, hindering widespread use.
One advantage of the CDN is that it allows the factors to be parameterized as
copulae, combining the benefits of graphical models with those of copula
theory. In brief, the use of a copula parameterization enables greater
modelling flexibility by separating representation of the marginals from the
dependence structure, permitting more efficient and robust learning. Another
advantage is that the CDN permits the representation of implicit latent
variables, whose parameterization and connectivity are not required to be
specified. Unfortunately, the fact that the model can encode only latent
relationships between variables severely limits its utility.
In this thesis, we present inference, learning, and sampling for CDNs, and
further the state-of-the-art. First, we explain the basics of copula theory and
the representation of copula CDNs. Then, we discuss inference in the models,
and develop the first sampling algorithm. We explain standard learning methods,
propose an algorithm for learning from data missing completely at random
(MCAR), and develop a novel algorithm for learning models of arbitrary
treewidth and size. Properties of the models and algorithms are investigated
through Monte Carlo simulations. We conclude with further discussion of the
advantages and limitations of CDNs, and suggest future work.
| Stefan Douglas Webb | null | 1310.4456 | null | null |
The BeiHang Keystroke Dynamics Authentication System | cs.CR cs.LG | Keystroke Dynamics is an important biometric solution for person
authentication. Based upon keystroke dynamics, this paper designs an embedded
password protection device, develops an online system, collects two public
databases for promoting the research on keystroke authentication, exploits the
Gabor filter bank to characterize keystroke dynamics, and provides benchmark
results of three popular classification algorithms, one-class support vector
machine, Gaussian classifier, and nearest neighbour classifier.
| Juan Liu, Baochang Zhang, Linlin Shen, Jianzhuang Liu, Jason Zhao | null | 1310.4485 | null | null |
Multiple Attractor Cellular Automata (MACA) for Addressing Major
Problems in Bioinformatics | cs.CE cs.LG | Cellular Automata (CA) have grown into a potential classifier for
addressing major problems in bioinformatics. Many bioinformatics problems,
such as predicting protein coding regions, finding promoter regions, and
predicting protein structure, can be addressed through Cellular Automata.
Although some prediction techniques address these problems, their accuracy
remains low. An automated procedure based on MACA (Multiple Attractor
Cellular Automata) is proposed that can address all of these problems. A
genetic algorithm is used to find rules with good fitness values. Extensive
experiments are conducted to report the accuracy of the proposed tool. The
average accuracy of MACA when tested on the ENCODE, BG570, HMR195, Fickett
and Tongue, and ASP67 datasets is 78%.
| Pokkuluri Kiran Sree, Inampudi Ramesh Babu and SSSN Usha Devi Nedunuri | null | 1310.4495 | null | null |
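As a toy illustration of the attractor-basin idea behind MACA classifiers (not the GA-evolved MACA rules themselves), an elementary cellular automaton can be iterated until its finite state space forces it onto a cycle, the attractor; patterns falling into the same attractor basin form one class:

```python
def ca_attractor(state, rule=90, steps=64):
    """Evolve a 1-D elementary CA with periodic boundaries until the
    state repeats, and return the cycle (attractor) it settles into.

    A toy sketch only: MACA uses multiple attractors of evolved rules
    as class labels; rule 90 here is just a familiar example.
    """
    seen = {}
    trajectory = []
    t = tuple(state)
    while t not in seen and len(trajectory) < steps:
        seen[t] = len(trajectory)
        trajectory.append(t)
        n = len(t)
        # each cell's next value is read off the rule's lookup bits,
        # indexed by the (left, center, right) neighbourhood
        t = tuple(((rule >> ((t[(i - 1) % n] << 2) | (t[i] << 1) | t[(i + 1) % n])) & 1)
                  for i in range(n))
    start = seen.get(t, len(trajectory))
    return trajectory[start:] if t in seen else trajectory

cycle = ca_attractor([0, 1, 0, 0, 1, 0, 0, 0], rule=90)
```

For rule 90 on 8 cells with periodic boundaries the update map is nilpotent, so every configuration falls into the all-zero fixed point, a single attractor.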
Distributed Representations of Words and Phrases and their
Compositionality | cs.CL cs.LG stat.ML | The recently introduced continuous Skip-gram model is an efficient method for
learning high-quality distributed vector representations that capture a large
number of precise syntactic and semantic word relationships. In this paper we
present several extensions that improve both the quality of the vectors and the
training speed. By subsampling of the frequent words we obtain significant
speedup and also learn more regular word representations. We also describe a
simple alternative to the hierarchical softmax called negative sampling. An
inherent limitation of word representations is their indifference to word order
and their inability to represent idiomatic phrases. For example, the meanings
of "Canada" and "Air" cannot be easily combined to obtain "Air Canada".
Motivated by this example, we present a simple method for finding phrases in
text, and show that learning good vector representations for millions of
phrases is possible.
| Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean | null | 1310.4546 | null | null |
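The negative-sampling objective replaces the hierarchical softmax with a handful of binary discriminations: push the true (center, context) pair together and a few sampled "negative" words apart. A sketch of the per-pair loss, assuming given embedding vectors (the vectors below are random, not trained):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(v_center, v_context, v_negatives):
    """Skip-gram negative-sampling objective for one training pair:
    -log sigma(u.v) - sum_k log sigma(-u.v_k)   (lower is better)."""
    pos = np.log(sigmoid(v_center @ v_context))
    neg = np.sum(np.log(sigmoid(-v_negatives @ v_center)))
    return -(pos + neg)

rng = np.random.default_rng(0)
d = 8
center = rng.standard_normal(d)
# an aligned context with a dissimilar negative scores well ...
good_loss = negative_sampling_loss(center, center, -np.stack([center]))
# ... while a dissimilar context with an aligned negative scores badly
bad_loss = negative_sampling_loss(center, -center, np.stack([center]))
```

Training minimizes this loss over (center, context) pairs, with negatives drawn from a unigram distribution; the subsampling of frequent words described in the abstract changes which pairs are presented, not the loss itself.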
Discriminative Link Prediction using Local Links, Node Features and
Community Structure | cs.LG cs.SI physics.soc-ph | A link prediction (LP) algorithm is given a graph, and has to rank, for each
node, other nodes that are candidates for new linkage. LP is strongly motivated
by social search and recommendation applications. LP techniques often focus on
global properties (graph conductance, hitting or commute times, Katz score) or
local properties (Adamic-Adar and many variations, or node feature vectors),
but rarely combine these signals. Furthermore, neither of these extremes
exploit link densities at the intermediate level of communities. In this paper
we describe a discriminative LP algorithm that exploits two new signals. First,
a co-clustering algorithm provides community level link density estimates,
which are used to qualify observed links with a surprise value. Second, links
in the immediate neighborhood of the link to be predicted are not interpreted
at face value, but through a local model of node feature similarities. These
signals are combined into a discriminative link predictor. We evaluate the new
predictor using five diverse data sets that are standard in the literature. We
report on significant accuracy boosts compared to standard LP methods
(including Adamic-Adar and random walk). Apart from the new predictor, another
contribution is a rigorous protocol for benchmarking and reporting LP
algorithms, which reveals the regions of strengths and weaknesses of all the
predictors studied here, and establishes the new proposal as the most robust.
| Abir De, Niloy Ganguly, Soumen Chakrabarti | null | 1310.4579 | null | null |
Minimax rates in permutation estimation for feature matching | math.ST cs.LG stat.TH | The problem of matching two sets of features appears in various tasks of
computer vision and can be often formalized as a problem of permutation
estimation. We address this problem from a statistical point of view and
provide a theoretical analysis of the accuracy of several natural estimators.
To this end, the minimax rate of separation is investigated and its expression
is obtained as a function of the sample size, noise level and dimension. We
consider the cases of homoscedastic and heteroscedastic noise and establish, in
each case, tight upper bounds on the separation distance of several estimators.
These upper bounds are shown to be unimprovable both in the homoscedastic and
heteroscedastic settings. Interestingly, these bounds demonstrate that a phase
transition occurs when the dimension $d$ of the features is of the order of the
logarithm of the number of features $n$. For $d=O(\log n)$, the rate is
dimension free and equals $\sigma (\log n)^{1/2}$, where $\sigma$ is the noise
level. In contrast, when $d$ is larger than $c\log n$ for some constant $c>0$,
the minimax rate increases with $d$ and is of the order $\sigma(d\log
n)^{1/4}$. We also discuss the computational aspects of the estimators and
provide empirical evidence of their consistency on synthetic data. Finally, we
show that our results extend to more general matching criteria.
| Olivier Collier and Arnak S. Dalalyan | null | 1310.4661 | null | null |
On the Bayes-optimality of F-measure maximizers | stat.ML cs.LG | The F-measure, which has originally been introduced in information retrieval,
is nowadays routinely used as a performance metric for problems such as binary
classification, multi-label classification, and structured output prediction.
Optimizing this measure is a statistically and computationally challenging
problem, since no closed-form solution exists. Adopting a decision-theoretic
perspective, this article provides a formal and experimental analysis of
different approaches for maximizing the F-measure. We start with a Bayes-risk
analysis of related loss functions, such as Hamming loss and subset zero-one
loss, showing that optimizing such losses as a surrogate of the F-measure leads
to a high worst-case regret. Subsequently, we perform a similar type of
analysis for F-measure maximizing algorithms, showing that such algorithms are
approximate, while relying on additional assumptions regarding the statistical
distribution of the binary response variables. Furthermore, we present a new
algorithm which is not only computationally efficient but also Bayes-optimal,
regardless of the underlying distribution. To this end, the algorithm requires
only a quadratic (with respect to the number of binary responses) number of
parameters of the joint distribution. We illustrate the practical performance
of all analyzed methods by means of experiments with multi-label classification
problems.
| Willem Waegeman, Krzysztof Dembczynski, Arkadiusz Jachnik, Weiwei
Cheng, Eyke Hullermeier | null | 1310.4849 | null | null |
Text Classification For Authorship Attribution Analysis | cs.DL cs.CL cs.LG | Authorship attribution mainly deals with literary texts of undecided
authorship. It is useful for resolving issues such as uncertain authorship,
recognizing the authorship of unknown texts, and spotting plagiarism. Each
author has an inborn style of writing that is particular to himself, and
statistical quantitative techniques can be used to differentiate an author's
style numerically. The basic measures used in computational stylometry are
word length, sentence length, vocabulary richness, frequencies, and so on.
The problem can be broken down into three sub-problems: author
identification, author characterization, and similarity detection. The steps
involved are pre-processing, feature extraction, classification, and author
identification, for which different classifiers can be used. Here a fuzzy
learning classifier and an SVM are used. The SVM was found to be more
accurate than the fuzzy classifier, and combining the two classifiers yields
better accuracy than either the SVM or the fuzzy classifier alone.
| M. Sudheep Elayidom, Chinchu Jose, Anitta Puthussery, Neenu K Sasi | 10.5121/acij.2013.4501 | 1310.4909 | null | null |
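The basic stylometric measures the abstract lists (word length, sentence length, vocabulary richness) are straightforward to compute. A hedged sketch using naive regex tokenization (real stylometry systems use proper tokenizers and many more features):

```python
import re

def stylometric_features(text):
    """Toy computational-stylometry features: average word length,
    average sentence length (in words), and vocabulary richness
    measured as the type-token ratio."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    avg_word_len = sum(len(w) for w in words) / len(words)
    avg_sent_len = len(words) / len(sentences)
    ttr = len(set(words)) / len(words)      # distinct words / total words
    return {"avg_word_len": avg_word_len,
            "avg_sent_len": avg_sent_len,
            "ttr": ttr}

feats = stylometric_features("The cat sat. The cat ran fast!")
```

Such per-document feature vectors are what the SVM and fuzzy classifiers in the paper would be trained on.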
A novel sparsity and clustering regularization | cs.LG cs.CV stat.ML | We propose a novel SPARsity and Clustering (SPARC) regularizer, which is a
modified version of the previous octagonal shrinkage and clustering algorithm
for regression (OSCAR), where, the proposed regularizer consists of a
$K$-sparse constraint and a pair-wise $\ell_{\infty}$ norm restricted on the
$K$ largest components in magnitude. The proposed regularizer is able to
separably enforce $K$-sparsity and encourage the non-zeros to be equal in
magnitude. Moreover, it can accurately group the features without shrinking
their magnitude. In fact, SPARC is closely related to OSCAR, so that the
proximity operator of the former can be efficiently computed based on that of
the latter, allowing using proximal splitting algorithms to solve problems with
SPARC regularization. Experiments on synthetic data and with benchmark breast
cancer data show that SPARC is a competitive group-sparsity inducing
regularizer for regression and classification.
| Xiangrong Zeng and M\'ario A. T. Figueiredo | null | 1310.4945 | null | null |
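Only the K-sparsity half of SPARC is easy to sketch in isolation: hard-thresholding a vector to its K largest-magnitude components. The pairwise l-infinity penalty on those retained components, which encourages them to be equal in magnitude (the clustering half), is omitted here:

```python
import numpy as np

def k_sparse_project(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest.

    A sketch of the K-sparse constraint only; SPARC additionally
    penalizes pairwise l_inf differences among the kept components.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest |x_i|
    out[idx] = x[idx]
    return out

z = k_sparse_project([0.1, -3.0, 0.5, 2.0, -0.2], k=2)
```

Note that, unlike soft-thresholding, this projection does not shrink the surviving magnitudes, matching the abstract's claim that SPARC groups features without shrinking them.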
Learning Tensors in Reproducing Kernel Hilbert Spaces with Multilinear
Spectral Penalties | cs.LG | We present a general framework to learn functions in tensor product
reproducing kernel Hilbert spaces (TP-RKHSs). The methodology is based on a
novel representer theorem suitable for existing as well as new spectral
penalties for tensors. When the functions in the TP-RKHS are defined on the
Cartesian product of finite discrete sets, in particular, our main problem
formulation admits as a special case existing tensor completion problems. Other
special cases include transfer learning with multimodal side information and
multilinear multitask learning. For the latter case, our kernel-based view is
instrumental to derive nonlinear extensions of existing model classes. We give
a novel algorithm and show in experiments the usefulness of the proposed
extensions.
| Marco Signoretto and Lieven De Lathauwer and Johan A.K. Suykens | null | 1310.4977 | null | null |
Online Classification Using a Voted RDA Method | cs.LG stat.ML | We propose a voted dual averaging method for online classification problems
with explicit regularization. This method employs the update rule of the
regularized dual averaging (RDA) method, but only on the subsequence of
training examples where a classification error is made. We derive a bound on
the number of mistakes made by this method on the training set, as well as its
generalization error rate. We also introduce the concept of relative strength
of regularization, and show how it affects the mistake bound and generalization
performance. We experimented with the method using $\ell_1$ regularization on a
large-scale natural language processing task, and obtained state-of-the-art
classification performance with fairly sparse models.
| Tianbing Xu, Jianfeng Gao, Lin Xiao, Amelia Regan | null | 1310.5007 | null | null |
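The RDA update with l1 regularization has a closed form: soft-threshold the running average of the gradients and rescale. A sketch of that single update, which the voted method applies only on rounds where a classification error is made (the step-size convention `sqrt(t)/gamma` is one common choice for RDA, not necessarily the paper's exact parameterization):

```python
import numpy as np

def l1_rda_update(g_avg, t, lam=0.1, gamma=1.0):
    """Closed-form l1-regularized dual averaging update:
    soft-threshold the average gradient g_avg at level lam,
    then scale by sqrt(t)/gamma."""
    return -(np.sqrt(t) / gamma) * np.sign(g_avg) * np.maximum(np.abs(g_avg) - lam, 0.0)

# average gradient after t = 4 error rounds (values are illustrative)
w = l1_rda_update(np.array([0.5, -0.05, -0.3]), t=4, lam=0.1, gamma=1.0)
```

Coordinates whose average gradient stays below `lam` in magnitude are clipped to exactly zero, which is the mechanism behind the sparse models reported in the experiments.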
Thompson Sampling in Dynamic Systems for Contextual Bandit Problems | cs.LG | We consider multi-armed bandit problems in time-varying dynamic systems
with rich structural features. For the nonlinear dynamic model, we propose
approximate inference for the posterior distributions based on the Laplace
approximation. For contextual bandit problems, Thompson sampling is adopted
based on the underlying posterior distributions of the parameters. More
specifically, we introduce discount decays on the impact of previous samples
and analyze different decay rates with respect to the underlying sample
dynamics. Consequently, exploration and exploitation are adaptively traded
off according to the dynamics of the system.
| Tianbing Xu, Yaming Yu, John Turner, Amelia Regan | null | 1310.5008 | null | null |
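One simple way to realize the discount-decay idea is to geometrically down-weight the Beta posterior counts of every arm each round, so old observations fade and the sampler keeps re-exploring in a changing environment. This is an illustrative sketch for Bernoulli arms, not the paper's Laplace-approximated dynamic model; the arm means 0.7 and 0.3 and the decay `gamma` are made up:

```python
import numpy as np

def discounted_thompson_step(alpha, beta, rng, gamma=0.95):
    """One round of Thompson sampling with discounted Beta posteriors.

    gamma < 1 decays previous samples' impact, a simple stand-in for
    the discount decay analyzed in the paper.
    """
    draws = rng.beta(alpha, beta)       # posterior sample per arm
    arm = int(np.argmax(draws))         # play the most promising arm
    # simulated Bernoulli rewards (means 0.7 and 0.3 are illustrative)
    reward = int(rng.random() < (0.7 if arm == 0 else 0.3))
    alpha = gamma * alpha               # decay all arms' pseudo-counts
    beta = gamma * beta
    alpha[arm] += reward                # fold in the new observation
    beta[arm] += 1 - reward
    return arm, alpha, beta

rng = np.random.default_rng(0)
alpha, beta = np.ones(2), np.ones(2)
plays = []
for _ in range(500):
    arm, alpha, beta = discounted_thompson_step(alpha, beta, rng)
    plays.append(arm)
```

With `gamma = 0.95` the effective sample size saturates around 20, so the posterior never fully concentrates and the better arm dominates without the worse one ever being abandoned.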
A Theoretical and Experimental Comparison of the EM and SEM Algorithm | cs.LG stat.ML | In this paper we provide a new analysis of the SEM algorithm. Unlike previous
work, we focus on the analysis of a single run of the algorithm. First, we
discuss the algorithm for general mixture distributions. Second, we consider
Gaussian mixture models and show that with high probability the update
equations of the EM algorithm and its stochastic variant are almost the same,
given that the input set is sufficiently large. Our experiments confirm that
this still holds for a large number of successive update steps. In particular,
for Gaussian mixture models, we show that the stochastic variant runs nearly
twice as fast.
| Johannes Bl\"omer, Kathrin Bujna, and Daniel Kuntze | null | 1310.5034 | null | null |
Linearized Alternating Direction Method with Parallel Splitting and
Adaptive Penalty for Separable Convex Programs in Machine Learning | cs.NA cs.LG math.OC stat.ML | Many problems in machine learning and other fields can be (re)formulated as
linearly constrained separable convex programs. In most of the cases, there are
multiple blocks of variables. However, the traditional alternating direction
method (ADM) and its linearized version (LADM, obtained by linearizing the
quadratic penalty term) are for the two-block case and cannot be naively
generalized to solve the multi-block case. So there is great demand on
extending the ADM based methods for the multi-block case. In this paper, we
propose LADM with parallel splitting and adaptive penalty (LADMPSAP) to solve
multi-block separable convex programs efficiently. When all the component
objective functions have bounded subgradients, we obtain convergence results
that are stronger than those of ADM and LADM, e.g., allowing the penalty
parameter to be unbounded and proving the sufficient and necessary conditions
for global convergence. We further propose a simple optimality measure and
reveal the convergence rate of LADMPSAP in an ergodic sense. For programs with
extra convex set constraints, with refined parameter estimation we devise a
practical version of LADMPSAP for faster convergence. Finally, we generalize
LADMPSAP to handle programs with more difficult objective functions by
linearizing part of the objective function as well. LADMPSAP is particularly
suitable for sparse representation and low-rank recovery problems because its
subproblems have closed form solutions and the sparsity and low-rankness of the
iterates can be preserved during the iteration. It is also highly
parallelizable and hence fits for parallel or distributed computing. Numerical
experiments testify to the advantages of LADMPSAP in speed and numerical
accuracy.
| Zhouchen Lin, Risheng Liu, Huan Li | null | 1310.5035 | null | null |
Distributional semantics beyond words: Supervised learning of analogy
and paraphrase | cs.LG cs.AI cs.CL cs.IR | There have been several efforts to extend distributional semantics beyond
individual words, to measure the similarity of word pairs, phrases, and
sentences (briefly, tuples; ordered sets of words, contiguous or
noncontiguous). One way to extend beyond words is to compare two tuples using a
function that combines pairwise similarities between the component words in the
tuples. A strength of this approach is that it works with both relational
similarity (analogy) and compositional similarity (paraphrase). However, past
work required hand-coding the combination function for different tasks. The
main contribution of this paper is that combination functions are generated by
supervised learning. We achieve state-of-the-art results in measuring
relational similarity between word pairs (SAT analogies and SemEval~2012 Task
2) and measuring compositional similarity between noun-modifier phrases and
unigrams (multiple-choice paraphrase questions).
| Peter D. Turney | null | 1310.5042 | null | null |
On the Suitable Domain for SVM Training in Image Coding | cs.CV cs.LG stat.ML | Conventional SVM-based image coding methods are founded on independently
restricting the distortion in every image coefficient at some particular image
representation. Geometrically, this implies allowing arbitrary signal
distortions in an $n$-dimensional rectangle defined by the
$\varepsilon$-insensitivity zone in each dimension of the selected image
representation domain. Unfortunately, not every image representation domain is
well-suited for such a simple, scalar-wise, approach because statistical and/or
perceptual interactions between the coefficients may exist. These interactions
imply that scalar approaches may induce distortions that do not follow the
image statistics and/or are perceptually annoying. Taking into account these
relations would imply using non-rectangular $\varepsilon$-insensitivity regions
(allowing coupled distortions in different coefficients), which is beyond the
conventional SVM formulation.
In this paper, we report a condition on the suitable domain for developing
efficient SVM image coding schemes. We analytically demonstrate that no linear
domain fulfills this condition because of the statistical and perceptual
inter-coefficient relations that exist in these domains. This theoretical
result is experimentally confirmed by comparing SVM learning in previously
reported linear domains and in a recently proposed non-linear perceptual domain
that simultaneously reduces the statistical and perceptual relations (so it is
closer to fulfilling the proposed condition). These results highlight the
relevance of an appropriate choice of the image representation before SVM
learning.
| Gustavo Camps-Valls, Juan Guti\'errez, Gabriel G\'omez-P\'erez,
Jes\'us Malo | null | 1310.5082 | null | null |
Kernel Multivariate Analysis Framework for Supervised Subspace Learning:
A Tutorial on Linear and Kernel Multivariate Methods | stat.ML cs.LG | Feature extraction and dimensionality reduction are important tasks in many
fields of science dealing with signal processing and analysis. The relevance of
these techniques is increasing as current sensory devices are developed with
ever higher resolution, and problems involving multimodal data sources become
more common. A plethora of feature extraction methods are available in the
literature collectively grouped under the field of Multivariate Analysis (MVA).
This paper provides a uniform treatment of several methods: Principal Component
Analysis (PCA), Partial Least Squares (PLS), Canonical Correlation Analysis
(CCA) and Orthonormalized PLS (OPLS), as well as their non-linear extensions
derived by means of the theory of reproducing kernel Hilbert spaces. We also
review their connections to other methods for classification and statistical
dependence estimation, and introduce some recent developments to deal with the
extreme cases of large-scale and low-sized problems. To illustrate the wide
applicability of these methods in both classification and regression problems,
we analyze their performance in a benchmark of publicly available data sets,
and pay special attention to specific real applications involving audio
processing for music genre prediction and hyperspectral satellite images for
Earth and climate monitoring.
| Jer\'onimo Arenas-Garc\'ia, Kaare Brandt Petersen, Gustavo
Camps-Valls, Lars Kai Hansen | 10.1109/MSP.2013.2250591 | 1310.5089 | null | null |
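The kernelized MVA extensions surveyed in the tutorial all start from the same ingredient: a Gram matrix centered in feature space and its eigendecomposition. A minimal kernel-PCA sketch with an RBF kernel, the simplest member of that family (the bandwidth `gamma` and data are arbitrary here):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Minimal kernel PCA: build an RBF Gram matrix, center it in
    feature space, and project onto its leading eigenvectors."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    Kc = J @ K @ J                          # center in feature space
    w, V = np.linalg.eigh(Kc)               # ascending eigenvalues
    order = np.argsort(w)[::-1][:n_components]
    # projections of the training points onto the leading components
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))
Z = kernel_pca(X, n_components=2)
```

Supervised variants such as kernel PLS, CCA, and OPLS replace this single eigenproblem with (generalized) eigenproblems coupling the input Gram matrix to the targets, but the centering and spectral machinery is the same.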