title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Coherence and sufficient sampling densities for reconstruction in
compressed sensing | cs.LG cs.IT math.AG math.IT stat.ML | We give a new, very general, formulation of the compressed sensing problem in
terms of coordinate projections of an analytic variety, and derive sufficient
sampling rates for signal reconstruction. Our bounds are linear in the
coherence of the signal space, a geometric parameter independent of the
specific signal and measurement, and logarithmic in the ambient dimension where
the signal is presented. We exemplify our approach by deriving sufficient
sampling densities for low-rank matrix completion and distance matrix
completion which are independent of the true matrix.
| Franz J. Kir\'aly and Louis Theran | null | 1302.2767 | null | null |
An Efficient Dual Approach to Distance Metric Learning | cs.LG | Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of $O(D^{6.5})$ (with
$D$ the dimension of the input data), and can thus practically solve only
problems with fewer than a few thousand variables. Since the number of
variables is $D (D+1) / 2$, this limits the problems that can practically be
solved to around a few hundred dimensions. The complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
$O (D ^ 3) $, which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
| Chunhua Shen, Junae Kim, Fayao Liu, Lei Wang, Anton van den Hengel | null | 1302.3219 | null | null |
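As a quick illustration of the dimension counts in the abstract above, here is a minimal sketch (not the authors' solver) of the quadratic Mahalanobis distance and the $D(D+1)/2$ variable count, assuming a PSD matrix $M$:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y); M must be PSD."""
    d = x - y
    return float(d @ M @ d)

D = 300
# A symmetric D x D matrix has D * (D + 1) / 2 free variables, which is why
# O(D^6.5) interior-point SDP solvers stall at a few hundred dimensions.
num_vars = D * (D + 1) // 2  # 45150 variables for D = 300

rng = np.random.default_rng(0)
A = rng.standard_normal((D, D))
M = A @ A.T  # PSD by construction
x, y = rng.standard_normal(D), rng.standard_normal(D)
print(num_vars, mahalanobis_sq(x, y, M))
```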
Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem | cs.LG | Very recently, crowdsourcing has become the de facto platform for distributing
and collecting human computation for a wide range of tasks and applications
such as information retrieval, natural language processing and machine
learning. Current crowdsourcing platforms have some limitations in the area of
quality control: most of the effort to ensure good quality falls on the
experimenter, who must manage the number of workers needed to reach good
results.
We propose a simple model for adaptive quality control in crowdsourced
multiple-choice tasks which we call the \emph{bandit survey problem}. This
model is related to, but technically different from the well-known multi-armed
bandit problem. We present several algorithms for this problem, and support
them with analysis and simulations. Our approach is based on our experience
conducting relevance evaluation for a large commercial search engine.
| Ittai Abraham, Omar Alonso, Vasilis Kandylas and Aleksandrs Slivkins | null | 1302.3268 | null | null |
StructBoost: Boosting Methods for Predicting Structured Output Variables | cs.LG | Boosting is a method for learning a single accurate predictor by linearly
combining a set of less accurate weak learners. Recently, structured learning
has found many applications in computer vision. Inspired by structured support
vector machines (SSVM), here we propose a new boosting algorithm for structured
output prediction, which we refer to as StructBoost. StructBoost supports
nonlinear structured learning by combining a set of weak structured learners.
As SSVM generalizes SVM, our StructBoost generalizes standard boosting
approaches such as AdaBoost and LPBoost to structured learning. The resulting
optimization problem of StructBoost is more challenging than SSVM in the sense
that it may involve exponentially many variables and constraints. In contrast,
for SSVM one usually has an exponential number of constraints and a
cutting-plane method is used. In order to efficiently solve StructBoost, we
formulate an equivalent $1$-slack formulation and solve it using a
combination of cutting planes and column generation. We show the versatility
and usefulness of StructBoost on a range of problems such as optimizing the
tree loss for hierarchical multi-class classification, optimizing the Pascal
overlap criterion for robust visual tracking and learning conditional random
field parameters for image segmentation.
| Chunhua Shen, Guosheng Lin, Anton van den Hengel | 10.1109/TPAMI.2014.2315792 | 1302.3283 | null | null |
A consistent clustering-based approach to estimating the number of
change-points in highly dependent time-series | stat.ML cs.IT cs.LG math.IT math.ST stat.TH | The problem of change-point estimation is considered under a general
framework where the data are generated by unknown stationary ergodic process
distributions. In this context, the consistent estimation of the number of
change-points is provably impossible. However, it is shown that a consistent
clustering method may be used to estimate the number of change points, under
the additional constraint that the correct number of process distributions that
generate the data is provided. This additional parameter has a natural
interpretation in many real-world applications. An algorithm is proposed that
estimates the number of change-points and locates the changes. The proposed
algorithm is shown to be asymptotically consistent; its empirical evaluations
are provided.
 | Azadeh Khaleghi and Daniil Ryabko | null | 1302.3407 | null | null |
Exact Methods for Multistage Estimation of a Binomial Proportion | math.ST cs.LG cs.NA math.PR stat.TH | We first review existing sequential methods for estimating a binomial
proportion. Afterward, we propose a new family of group sequential sampling
schemes for estimating a binomial proportion with prescribed margin of error
and confidence level. In particular, we establish the uniform controllability
of coverage probability and the asymptotic optimality for such a family of
sampling schemes. Our theoretical results establish the possibility that the
parameters of this family of sampling schemes can be determined so that the
prescribed level of confidence is guaranteed with little waste of samples.
Analytic bounds for the cumulative distribution functions and expectations of
sample numbers are derived. Moreover, we discuss the inherent connection of
various sampling schemes. Numerical issues are addressed for improving the
accuracy and efficiency of computation. Computational experiments are conducted
for comparing sampling schemes. Illustrative examples are given for
applications in clinical trials.
| Zhengjia Chen and Xinjia Chen | null | 1302.3447 | null | null |
Learning Equivalence Classes of Bayesian Networks Structures | cs.AI cs.LG stat.ML | Approaches to learning Bayesian networks from data typically combine a
scoring function with a heuristic search procedure. Given a Bayesian network
structure, many of the scoring functions derived in the literature return a
score for the entire equivalence class to which the structure belongs. When
using such a scoring function, it is appropriate for the heuristic search
algorithm to search over equivalence classes of Bayesian networks as opposed to
individual structures. We present the general formulation of a search space for
which the states of the search correspond to equivalence classes of structures.
Using this space, any one of a number of heuristic search algorithms can easily
be applied. We compare greedy search performance in the proposed search space
to greedy search performance in a search space for which the states correspond
to individual Bayesian network structures.
| David Maxwell Chickering | null | 1302.3566 | null | null |
Efficient Approximations for the Marginal Likelihood of Incomplete Data
Given a Bayesian Network | cs.LG cs.AI stat.ML | We discuss Bayesian methods for learning Bayesian networks when data sets are
incomplete. In particular, we examine asymptotic approximations for the
marginal likelihood of incomplete data given a Bayesian network. We consider
the Laplace approximation and the less accurate but more efficient BIC/MDL
approximation. We also consider approximations proposed by Draper (1993) and
Cheeseman and Stutz (1995). These approximations are as efficient as BIC/MDL,
but their accuracy has not been studied in any depth. We compare the accuracy
of these approximations under the assumption that the Laplace approximation is
the most accurate. In experiments using synthetic data generated from discrete
naive-Bayes models having a hidden root node, we find that the CS measure is
the most accurate.
| David Maxwell Chickering, David Heckerman | null | 1302.3567 | null | null |
Learning Bayesian Networks with Local Structure | cs.AI cs.LG stat.ML | In this paper we examine a novel addition to the known methods for learning
Bayesian networks from data that improves the quality of the learned networks.
Our approach explicitly represents and learns the local structure in the
conditional probability tables (CPTs) that quantify these networks. This
increases the space of possible models, enabling the representation of CPTs
with a variable number of parameters that depends on the learned local
structures. The resulting learning procedure is capable of inducing models that
better emulate the real complexity of the interactions present in the data. We
describe the theoretical foundations and practical aspects of learning local
structures, as well as an empirical evaluation of the proposed method. This
evaluation indicates that learning curves characterizing the procedure that
exploits the local structure converge faster than those of the standard
procedure. Our results also show that networks learned with local structure
tend to be more complex (in terms of arcs), yet require fewer parameters.
| Nir Friedman, Moises Goldszmidt | null | 1302.3577 | null | null |
On the Sample Complexity of Learning Bayesian Networks | cs.LG stat.ML | In recent years there has been an increasing interest in learning Bayesian
networks from data. One of the most effective methods for learning such
networks is based on the minimum description length (MDL) principle. Previous
work has shown that this learning procedure is asymptotically successful: with
probability one, it will converge to the target distribution, given a
sufficient number of samples. However, the rate of this convergence has been
hitherto unknown. In this work we examine the sample complexity of MDL based
learning procedures for Bayesian networks. We show that the number of samples
needed to learn an $\epsilon$-close approximation (in terms of entropy
distance) with confidence $\delta$ is
$O\big((1/\epsilon)^{4/3}\log(1/\epsilon)\log(1/\delta)\log\log(1/\delta)\big)$.
This means that the sample complexity is a low-order polynomial in
the error threshold and sub-linear in the confidence bound. We also discuss how
the constants in this term depend on the complexity of the target distribution.
Finally, we address questions of asymptotic minimality and propose a method for
using the sample complexity results to speed up the learning process.
| Nir Friedman, Zohar Yakhini | null | 1302.3579 | null | null |
Asymptotic Model Selection for Directed Networks with Hidden Variables | cs.LG cs.AI stat.ML | We extend the Bayesian Information Criterion (BIC), an asymptotic
approximation for the marginal likelihood, to Bayesian networks with hidden
variables. This approximation can be used to select models given large samples
of data. The standard BIC as well as our extension punishes the complexity of a
model according to the dimension of its parameters. We argue that the dimension
of a Bayesian network with hidden variables is the rank of the Jacobian matrix
of the transformation between the parameters of the network and the parameters
of the observable variables. We compute the dimensions of several networks
including the naive Bayes model with a hidden root node.
| Dan Geiger, David Heckerman, Christopher Meek | null | 1302.3580 | null | null |
Bayesian Learning of Loglinear Models for Neural Connectivity | cs.LG q-bio.NC stat.AP stat.ML | This paper presents a Bayesian approach to learning the connectivity
structure of a group of neurons from data on configuration frequencies. A major
objective of the research is to provide statistical tools for detecting changes
in firing patterns with changing stimuli. Our framework is not restricted to
the well-understood case of pair interactions, but generalizes the Boltzmann
machine model to allow for higher order interactions. The paper applies a
Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over
connectivity structures and uses Laplace's method to approximate posterior
probabilities of structures. Performance of the methods was tested on synthetic
data. The models were also applied to data obtained by Vaadia on multi-unit
recordings of several neurons in the visual cortex of a rhesus monkey in two
different attentional states. Results confirmed the experimenters' conjecture
that different attentional states were associated with different interaction
structures.
| Kathryn Blackmond Laskey, Laura Martignon | null | 1302.3590 | null | null |
A Latent Source Model for Nonparametric Time Series Classification | stat.ML cs.LG cs.SI | For classifying time series, a nearest-neighbor approach is widely used in
practice with performance often competitive with or better than more elaborate
methods such as neural networks, decision trees, and support vector machines.
We develop theoretical justification for the effectiveness of
nearest-neighbor-like classification of time series. Our guiding hypothesis is
that in many applications, such as forecasting which topics will become trends
on Twitter, there aren't actually that many prototypical time series to begin
with, relative to the number of time series we have access to, e.g., topics
become trends on Twitter only in a few distinct manners whereas we can collect
massive amounts of Twitter data. To operationalize this hypothesis, we propose
a latent source model for time series, which naturally leads to a "weighted
majority voting" classification rule that can be approximated by a
nearest-neighbor classifier. We establish nonasymptotic performance guarantees
of both weighted majority voting and nearest-neighbor classification under our
model accounting for how much of the time series we observe and the model
complexity. Experimental results on synthetic data show weighted majority
voting achieving the same misclassification rate as nearest-neighbor
classification while observing less of the time series. We then use weighted
majority voting to forecast which news topics on Twitter become trends, where we are
able to detect such "trending topics" in advance of Twitter 79% of the time,
with a mean early advantage of 1 hour and 26 minutes, a true positive rate of
95%, and a false positive rate of 4%.
| George H. Chen, Stanislav Nikolov, Devavrat Shah | null | 1302.3639 | null | null |
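A minimal sketch of a weighted majority voting rule of the kind described above, where each labeled training series votes with weight decaying in its distance to the query; the Gaussian weights and Euclidean distance here are illustrative assumptions, not necessarily the paper's exact choices:

```python
import numpy as np

def weighted_majority_vote(x, train_series, train_labels, gamma=1.0):
    """Classify time series x: each training series votes for its label with
    weight exp(-gamma * squared Euclidean distance). As gamma grows, the
    rule approaches plain 1-nearest-neighbor classification."""
    votes = {}
    for s, label in zip(train_series, train_labels):
        w = float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(s)) ** 2)))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```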
Bio-inspired data mining: Treating malware signatures as biosequences | cs.LG q-bio.QM stat.ML | The application of machine learning to bioinformatics problems is well
established. Less well understood is the application of bioinformatics
techniques to machine learning and, in particular, the representation of
non-biological data as biosequences. The aim of this paper is to explore the
effects of giving amino acid representation to problematic machine learning
data and to evaluate the benefits of supplementing traditional machine learning
with bioinformatics tools and techniques. The signatures of 60 computer viruses
and 60 computer worms were converted into amino acid representations and first
multiply aligned separately to identify conserved regions across different
families within each class (virus and worm). This was followed by a second
alignment of all 120 aligned signatures together so that non-conserved regions
were identified prior to input to a number of machine learning techniques.
Differences in length between virus and worm signatures after the first
alignment were resolved by the second alignment. Our first set of experiments
indicates that representing computer malware signatures as amino acid sequences
followed by alignment leads to greater classification and prediction accuracy.
Our second set of experiments indicates that checking the results of data
mining from artificial virus and worm data against known proteins can lead to
generalizations being made from the domain of naturally occurring proteins to
malware signatures. However, further work is needed to determine the advantages
and disadvantages of different representations and sequence alignment methods
for handling problematic machine learning data.
| Ajit Narayanan and Yi Chen | null | 1302.3668 | null | null |
Density Ratio Hidden Markov Models | stat.ML cs.LG | Hidden Markov models and their variants are the predominant sequential
classification method in such domains as speech recognition, bioinformatics and
natural language processing. Because they are generative rather than
discriminative models, however, their classification performance is a drawback. In this paper
we apply ideas from the field of density ratio estimation to bypass the
difficult step of learning likelihood functions in HMMs. By reformulating
inference and model fitting in terms of density ratios and applying a fast
kernel-based estimation method, we show that it is possible to obtain a
striking increase in discriminative performance while retaining the
probabilistic qualities of the HMM. We demonstrate experimentally that this
formulation makes more efficient use of training data than alternative
approaches.
| John A. Quinn, Masashi Sugiyama | null | 1302.3700 | null | null |
Thompson Sampling in Switching Environments with Bayesian Online Change
Point Detection | cs.LG | Thompson Sampling has recently been shown to be optimal in the Bernoulli
Multi-Armed Bandit setting [Kaufmann et al., 2012]. This bandit problem assumes
stationary distributions for the rewards. It is often unrealistic to model the
real world as a stationary distribution. In this paper we derive and evaluate
algorithms using Thompson Sampling for a Switching Multi-Armed Bandit Problem.
We propose a Thompson Sampling strategy equipped with a Bayesian change point
mechanism to tackle this problem. We develop algorithms for a variety of cases
with a constant switching rate: switching may affect all arms at once (Global
Switching) or each arm independently (Per-Arm Switching), and the switching
rate may be known or must be inferred from data. This
leads to a family of algorithms we collectively term Change-Point Thompson
Sampling (CTS). We show empirical results of the algorithms in 4 artificial
environments and 2 derived from real-world data: news click-through [Yahoo!,
2011] and foreign exchange data [Dukascopy, 2012], comparing them to some other
bandit algorithms. On the real-world data, CTS is the most effective.
| Joseph Mellor, Jonathan Shapiro | null | 1302.3721 | null | null |
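A toy sketch in the spirit of the above: Bernoulli Thompson Sampling with a crude global reset at a known constant switching rate. The paper's CTS uses full Bayesian online change-point detection; this simplification is only meant to convey the idea.

```python
import numpy as np

def toy_global_switching_ts(rewards, switch_rate=0.01, seed=1):
    """Toy Bernoulli Thompson Sampling with a crude global reset: with
    probability `switch_rate` per step the Beta posteriors are reset to
    uniform, standing in (much more simply) for the Bayesian change-point
    mechanism. rewards[t, k] is the 0/1 payoff arm k would give at time t."""
    rng = np.random.default_rng(seed)
    T, K = rewards.shape
    alpha, beta = np.ones(K), np.ones(K)
    total = 0.0
    for t in range(T):
        if rng.random() < switch_rate:               # crude change-point reset
            alpha[:], beta[:] = 1.0, 1.0
        arm = int(np.argmax(rng.beta(alpha, beta)))  # posterior sampling
        r = rewards[t, arm]
        alpha[arm] += r
        beta[arm] += 1.0 - r
        total += r
    return total
```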
Understanding Boltzmann Machine and Deep Learning via A Confident
Information First Principle | cs.NE cs.LG stat.ML | Typical dimensionality reduction methods focus on directly reducing the
number of random variables while retaining maximal variations in the data. In
this paper, we consider the dimensionality reduction in parameter spaces of
binary multivariate distributions. We propose a general
Confident-Information-First (CIF) principle to maximally preserve parameters
with confident estimates and rule out unreliable or noisy parameters. Formally,
the confidence of a parameter can be assessed by its Fisher information, which
establishes a connection with the inverse variance of any unbiased estimate for
the parameter via the Cram\'{e}r-Rao bound. We then revisit Boltzmann machines
(BM) and theoretically show that both single-layer BM without hidden units
(SBM) and restricted BM (RBM) can be solidly derived using the CIF principle.
This can not only help us uncover and formalize the essential parts of the
target density that SBM and RBM capture, but also suggest that the deep neural
network consisting of several layers of RBM can be seen as the layer-wise
application of CIF. Guided by the theoretical analysis, we develop a
sample-specific CIF-based contrastive divergence (CD-CIF) algorithm for SBM and
a CIF-based iterative projection procedure (IP) for RBM. Both CD-CIF and IP are
studied in a series of density estimation experiments.
| Xiaozhao Zhao and Yuexian Hou and Qian Yu and Dawei Song and Wenjie Li | null | 1302.3931 | null | null |
Clustering validity based on the most similarity | cs.LG stat.ML | A basic requirement of many studies is the need to classify data.
Clustering is a proposed method for summarizing networks. Clustering methods
can be divided into two categories: model-based approaches and algorithmic
approaches. Since most clustering methods depend on their input parameters, it
is important to evaluate the result of a clustering algorithm under different
input parameters in order to choose the most appropriate one. There are
several clustering validity techniques, based on the inner and outer density
of clusters, that provide metrics for choosing the most appropriate clustering
independently of the input parameters. Because previous methods depend on the
input parameters, one challenge when facing large systems is that data arrive
incrementally, which affects the final choice of the most appropriate
clustering. Those methods take high density within a cluster and low density
between different clusters as the measure for choosing the optimal clustering.
This measure has a serious problem: not all of the data are available at the
first stage. In this paper, we introduce an efficient measure that selects the
clustering which recurs the maximum number of times across various initial
values.
| Raheleh Namayandeh, Farzad Didehvar, Zahra Shojaei | null | 1302.3956 | null | null |
Canonical dual solutions to nonconvex radial basis neural network
optimization problem | cs.NE cs.LG stat.ML | Radial Basis Functions Neural Networks (RBFNNs) are tools widely used in
regression problems. One of their principal drawbacks is that the formulation
corresponding to training with supervision of both the centers and the
weights is a highly non-convex optimization problem, which leads to some
fundamental difficulties for traditional optimization theory and methods.
This paper presents a generalized canonical duality theory for solving this
challenging problem. We demonstrate that by sequential canonical dual
transformations, the nonconvex optimization problem of the RBFNN can be
reformulated as a canonical dual problem (without duality gap). Both global
optimal solution and local extrema can be classified. Several applications to
one of the most used Radial Basis Functions, the Gaussian function, are
illustrated. Our results show that even for one-dimensional case, the global
minimizer of the nonconvex problem may not be the best solution to the RBFNNs,
and the canonical dual theory is a promising tool for solving general neural
networks training problems.
| Vittorio Latorre and David Yang Gao | null | 1302.4141 | null | null |
Metrics for Multivariate Dictionaries | cs.LG stat.ML | Overcomplete representations and dictionary learning algorithms have kept
attracting growing interest in the machine learning community. This paper
addresses the emerging problem of comparing multivariate overcomplete
representations. Despite a recurrent need to rely on a distance for learning or
assessing multivariate overcomplete representations, no metrics in their
underlying spaces have yet been proposed. Hence, we propose to study
overcomplete representations from the perspective of frame theory and matrix
manifolds. We consider distances between multivariate dictionaries as distances
between their spans, which turn out to be elements of a Grassmannian manifold. We
introduce Wasserstein-like set-metrics defined on Grassmannian spaces and study
their properties both theoretically and numerically. Indeed, a deep experimental
study based on tailored synthetic datasets and real EEG signals for
Brain-Computer Interfaces (BCI) has been conducted. In particular, the
introduced metrics have been embedded in a clustering algorithm and applied to
BCI Competition IV-2a for dataset quality assessment. Besides, a principled
connection is made between three close but still disjoint research fields,
namely, Grassmannian packing, dictionary learning and compressed sensing.
| Sylvain Chevallier and Quentin Barth\'elemy and Jamal Atif | 10.1109/ICASSP.2014.6854993 | 1302.4242 | null | null |
Feature Multi-Selection among Subjective Features | cs.LG stat.ML | When dealing with subjective, noisy, or otherwise nebulous features, the
"wisdom of crowds" suggests that one may benefit from multiple judgments of the
same feature on the same object. We give theoretically-motivated `feature
multi-selection' algorithms that choose, among a large set of candidate
features, not only which features to judge but how many times to judge each
one. We demonstrate the effectiveness of this approach for linear regression on
a crowdsourced learning task of predicting people's height and weight from
photos, using features such as 'gender' and 'estimated weight' as well as
culturally fraught ones such as 'attractive'.
| Sivan Sabato and Adam Kalai | null | 1302.4297 | null | null |
On Translation Invariant Kernels and Screw Functions | math.FA cs.LG stat.ML | We explore the connection between Hilbertian metrics and positive definite
kernels on the real line. In particular, we look at a well-known
characterization of translation invariant Hilbertian metrics on the real line
by von Neumann and Schoenberg (1941). Using this result we are able to give an
alternate proof of Bochner's theorem for translation invariant positive
definite kernels on the real line (Rudin, 1962).
| Purushottam Kar and Harish Karnick | null | 1302.4343 | null | null |
Online Learning with Switching Costs and Other Adaptive Adversaries | cs.LG stat.ML | We study the power of different types of adaptive (nonoblivious) adversaries
in the setting of prediction with expert advice, under both full-information
and bandit feedback. We measure the player's performance using a new notion of
regret, also known as policy regret, which better captures the adversary's
adaptiveness to the player's behavior. In a setting where losses are allowed to
drift, we characterize, in a nearly complete manner, the power of adaptive
adversaries with bounded memories and switching costs. In particular, we show
that with switching costs, the attainable rate with bandit feedback is
$\widetilde{\Theta}(T^{2/3})$. Interestingly, this rate is significantly worse
than the $\Theta(\sqrt{T})$ rate attainable with switching costs in the
full-information case. Via a novel reduction from experts to bandits, we also
show that a bounded memory adversary can force $\widetilde{\Theta}(T^{2/3})$
regret even in the full information case, proving that switching costs are
easier to control than bounded memory adversaries. Our lower bounds rely on a
new stochastic adversary strategy that generates loss processes with strong
dependencies.
| Nicolo Cesa-Bianchi, Ofer Dekel and Ohad Shamir | null | 1302.4387 | null | null |
Maxout Networks | stat.ML cs.LG | We consider the problem of designing models to leverage a recently introduced
approximate model averaging technique called dropout. We define a simple new
model called maxout (so named because its output is the max of a set of inputs,
and because it is a natural companion to dropout) designed to both facilitate
optimization by dropout and improve the accuracy of dropout's fast approximate
model averaging technique. We empirically verify that the model successfully
accomplishes both of these tasks. We use maxout and dropout to demonstrate
state-of-the-art classification performance on four benchmark datasets: MNIST,
CIFAR-10, CIFAR-100, and SVHN.
| Ian J. Goodfellow and David Warde-Farley and Mehdi Mirza and Aaron
Courville and Yoshua Bengio | null | 1302.4389 | null | null |
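A minimal NumPy sketch of a maxout unit as described above (shapes are illustrative): each output is the maximum over $k$ affine "pieces" of the input.

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: k affine 'pieces' followed by a max over the pieces.
    x: (D,) input, W: (k, D, M) weights, b: (k, M) biases -> (M,) output."""
    z = np.einsum('kdm,d->km', W, x) + b  # (k, M) piece pre-activations
    return z.max(axis=0)                  # elementwise max over the k pieces
```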
Breaking the Small Cluster Barrier of Graph Clustering | cs.LG stat.ML | This paper investigates graph clustering in the planted cluster model in the
presence of {\em small clusters}. Traditional results dictate that for an
algorithm to provably correctly recover the clusters, {\em all} clusters must
be sufficiently large (in particular, $\tilde{\Omega}(\sqrt{n})$ where $n$ is
the number of nodes of the graph). We show that this is not really a
restriction: by a more refined analysis of the trace-norm based recovery
approach proposed in Jalali et al. (2011) and Chen et al. (2012), we prove that
small clusters, under certain mild assumptions, do not hinder recovery of large
ones.
Based on this result, we further devise an iterative algorithm to recover
{\em almost all clusters} via a "peeling strategy", i.e., recover large
clusters first, leading to a reduced problem, and repeat this procedure. These
results are extended to the {\em partial observation} setting, in which only a
(chosen) part of the graph is observed. The peeling strategy gives rise to an
active learning algorithm, in which edges adjacent to smaller clusters are
queried more often as large clusters are learned (and removed).
From a high level, this paper sheds novel insights on high-dimensional
statistics and learning structured data, by presenting a structured matrix
learning problem for which a one-shot convex relaxation approach necessarily
fails, but a carefully constructed sequence of convex relaxations does the job.
| Nir Ailon and Yudong Chen and Xu Huan | null | 1302.4549 | null | null |
Optimal Discriminant Functions Based On Sampled Distribution Distance
for Modulation Classification | stat.ML cs.LG cs.PF | In this letter, we derive the optimal discriminant functions for modulation
classification based on the sampled distribution distance. The proposed method
classifies various candidate constellations using a low complexity approach
based on the distribution distance at specific testpoints along the cumulative
distribution function. This method, based on the Bayesian decision criteria,
asymptotically provides the minimum classification error possible given a set
of testpoints. Testpoint locations are also optimized to improve classification
performance. The method provides significant gains over existing approaches
that also use the distribution of the signal features.
| Paulo Urriza, Eric Rebeiz, Danijela Cabric | 10.1109/LCOMM.2013.082113.131131 | 1302.4773 | null | null |
A Labeled Graph Kernel for Relationship Extraction | cs.CL cs.LG | In this paper, we propose an approach for Relationship Extraction (RE) based
on labeled graph kernels. The kernel we propose is a particularization of a
random walk kernel that exploits two properties previously studied in the RE
literature: (i) the words between the candidate entities or connecting them in
a syntactic representation are particularly likely to carry information
regarding the relationship; and (ii) combining information from distinct
sources in a kernel may help the RE system make better decisions. We performed
experiments on a dataset of protein-protein interactions and the results show
that our approach obtains effectiveness values that are comparable with the
state-of-the-art kernel methods. Moreover, our approach is able to outperform
the state-of-the-art kernels when combined with other kernel methods.
| Gon\c{c}alo Sim\~oes, Helena Galhardas, David Matos | null | 1302.4874 | null | null |
Fast methods for denoising matrix completion formulations, with
applications to robust seismic data interpolation | stat.ML cs.LG | Recent SVD-free matrix factorization formulations have enabled rank
minimization for systems with millions of rows and columns, paving the way for
matrix completion in extremely large-scale applications, such as seismic data
interpolation.
In this paper, we consider matrix completion formulations designed to hit a
target data-fitting error level provided by the user, and propose an algorithm
called LR-BPDN that is able to exploit factorized formulations to solve the
corresponding optimization problem. Since practitioners typically have strong
prior knowledge about target error level, this innovation makes it easy to
apply the algorithm in practice, leaving only the factor rank to be determined.
Within the established framework, we propose two extensions that are highly
relevant to solving practical challenges of data interpolation. First, we
propose a weighted extension that allows known subspace information to improve
the results of matrix completion formulations. We show how this weighting can
be used in the context of frequency continuation, an essential aspect to
seismic data interpolation. Second, we propose matrix completion formulations
that are robust to large measurement errors in the available data.
We illustrate the advantages of LR-BPDN on the collaborative filtering
problem using the MovieLens 1M, 10M, and Netflix 100M datasets. Then, we use
the new method, along with its robust and subspace re-weighted extensions, to
obtain high-quality reconstructions for large scale seismic interpolation
problems with real data, even in the presence of data contamination.
| Aleksandr Y. Aravkin and Rajiv Kumar and Hassan Mansour and Ben Recht
and Felix J. Herrmann | null | 1302.4886 | null | null |
Structure Discovery in Nonparametric Regression through Compositional
Kernel Search | stat.ML cs.LG stat.ME | Despite its importance, choosing the structural form of the kernel in
nonparametric regression remains a black art. We define a space of kernel
structures which are built compositionally by adding and multiplying a small
number of base kernels. We present a method for searching over this space of
structures which mirrors the scientific discovery process. The learned
structures can often decompose functions into interpretable components and
enable long-range extrapolation on time-series datasets. Our structure search
method outperforms many widely used kernels and kernel combination methods on a
variety of prediction tasks.
| David Duvenaud, James Robert Lloyd, Roger Grosse, Joshua B. Tenenbaum,
Zoubin Ghahramani | null | 1302.4922 | null | null |
A Characterization of the Dirichlet Distribution with Application to
Learning Bayesian Networks | cs.AI cs.LG | We provide a new characterization of the Dirichlet distribution. This
characterization implies that under assumptions made by several previous
authors for learning belief networks, a Dirichlet prior on the parameters is
inevitable.
| Dan Geiger, David Heckerman | null | 1302.4949 | null | null |
Estimating Continuous Distributions in Bayesian Classifiers | cs.LG cs.AI stat.ML | When modeling a probability distribution with a Bayesian network, we are
faced with the problem of how to handle continuous variables. Most previous
work has either solved the problem by discretizing, or assumed that the data
are generated by a single Gaussian. In this paper we abandon the normality
assumption and instead use statistical methods for nonparametric density
estimation. For a naive Bayesian classifier, we present experimental results on
a variety of natural and artificial domains, comparing two methods of density
estimation: assuming normality and modeling each conditional distribution with
a single Gaussian; and using nonparametric kernel density estimation. We
observe large reductions in error on several natural and artificial data sets,
which suggests that kernel estimation is a useful tool for learning Bayesian
models.
| George H. John, Pat Langley | null | 1302.4964 | null | null |
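A minimal sketch of the nonparametric variant discussed above: a naive Bayes classifier whose class-conditional densities are Gaussian kernel density estimates rather than single Gaussians (assumes continuous features with several samples and nonzero variance per class):

```python
import numpy as np
from scipy.stats import gaussian_kde

class KDENaiveBayes:
    """Naive Bayes with one Gaussian kernel density estimate per class and
    per feature, replacing the single-Gaussian (normality) assumption."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        return self

    def predict(self, X):
        def log_post(x, c):
            return np.log(self.priors_[c]) + sum(
                np.log(kde(np.atleast_1d(xj))[0] + 1e-300)
                for kde, xj in zip(self.kdes_[c], x))
        return np.array([max(self.classes_, key=lambda c: log_post(x, c))
                         for x in X])
```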
Matching Pursuit LASSO Part II: Applications and Sparse Recovery over
Batch Signals | cs.CV cs.LG stat.ML | In Part I \cite{TanPMLPart1}, a Matching Pursuit LASSO
({MPL}) algorithm has been presented for solving large-scale sparse recovery
(SR) problems. In this paper, we present a subspace search to further improve
the performance of MPL, and then continue to address another major challenge of
SR -- batch SR with many signals, a consideration which is absent from most
previous $\ell_1$-norm methods. As a result, a batch-mode {MPL} is developed to
vastly speed up sparse recovery of many signals simultaneously. Comprehensive
numerical experiments on compressive sensing and face recognition tasks
demonstrate the superior performance of MPL and BMPL over other methods
considered in this paper, in terms of sparse recovery ability and efficiency.
In particular, BMPL is up to 400 times faster than existing $\ell_1$-norm
methods considered to be state-of-the-art.
| Mingkui Tan and Ivor W. Tsang and Li Wang | null | 1302.5010 | null | null |
Pooling-Invariant Image Feature Learning | cs.CV cs.LG | Unsupervised dictionary learning has been a key component in state-of-the-art
computer vision recognition architectures. While highly effective methods exist
for patch-based dictionary learning, these methods may learn redundant features
after the pooling stage in a given early vision architecture. In this paper, we
offer a novel dictionary learning scheme to efficiently take into account the
invariance of learned features after the spatial pooling stage. The algorithm
is built on simple clustering, and thus enjoys efficiency and scalability. We
discuss the underlying mechanism that justifies the use of clustering
algorithms, and empirically show that the algorithm finds better dictionaries
than patch-based methods with the same dictionary size.
| Yangqing Jia, Oriol Vinyals, Trevor Darrell | null | 1302.5056 | null | null |
High-Dimensional Probability Estimation with Deep Density Models | stat.ML cs.LG | One of the fundamental problems in machine learning is the estimation of a
probability distribution from data. Many techniques have been proposed to study
the structure of data, most often building around the assumption that
observations lie on a lower-dimensional manifold of high probability. It has
been more difficult, however, to exploit this insight to build explicit,
tractable density models for high-dimensional data. In this paper, we introduce
the deep density model (DDM), a new approach to density estimation. We exploit
insights from deep learning to construct a bijective map to a representation
space, under which the transformation of the distribution of the data is
approximately factorized and has identical and known marginal densities. The
simplicity of the latent distribution under the model allows us to feasibly
explore it, and the invertibility of the map to characterize contraction of
measure across it. This enables us to compute normalized densities for
out-of-sample data. This combination of tractability and flexibility allows us
to tackle a variety of probabilistic tasks on high-dimensional datasets,
including: rapid computation of normalized densities at test-time without
evaluating a partition function; generation of samples without MCMC; and
characterization of the joint entropy of the data.
| Oren Rippel, Ryan Prescott Adams | null | 1302.5125 | null | null |
Prediction and Clustering in Signed Networks: A Local to Global
Perspective | cs.SI cs.LG | The study of social networks is a burgeoning research area. However, most
existing work deals with networks that simply encode whether relationships
exist or not. In contrast, relationships in signed networks can be positive
("like", "trust") or negative ("dislike", "distrust"). The theory of social
balance shows that signed networks tend to conform to some local patterns that,
in turn, induce certain global characteristics. In this paper, we exploit both
local as well as global aspects of social balance theory for two fundamental
problems in the analysis of signed networks: sign prediction and clustering.
Motivated by local patterns of social balance, we first propose two families of
sign prediction methods: measures of social imbalance (MOIs), and supervised
learning using high order cycles (HOCs). These methods predict signs of edges
based on triangles and $\ell$-cycles for relatively small values of $\ell$.
Interestingly, by examining measures of social imbalance, we show that the
classic Katz measure, which is used widely in unsigned link prediction,
actually has a balance theoretic interpretation when applied to signed
networks. Furthermore, motivated by the global structure of balanced networks,
we propose an effective low rank modeling approach for both sign prediction and
clustering. For the low rank modeling approach, we provide theoretical
performance guarantees via convex relaxations, scale it up to large problem
sizes using a matrix factorization based algorithm, and provide extensive
experimental validation including comparisons with local approaches. Our
experimental results indicate that, by adopting a more global viewpoint of
balance structure, we get significant performance and computational gains in
prediction and clustering tasks on signed networks. Our work therefore
highlights the usefulness of the global aspect of balance theory for the
analysis of signed networks.
| Kai-Yang Chiang, Cho-Jui Hsieh, Nagarajan Natarajan, Ambuj Tewari and
Inderjit S. Dhillon | null | 1302.5145 | null | null |
Graph-based Generalization Bounds for Learning Binary Relations | cs.LG | We investigate the generalizability of learned binary relations: functions
that map pairs of instances to a logical indicator. This problem has
application in numerous areas of machine learning, such as ranking, entity
resolution and link prediction. Our learning framework incorporates an example
labeler that, given a sequence $X$ of $n$ instances and a desired training size
$m$, subsamples $m$ pairs from $X \times X$ without replacement. The challenge
in analyzing this learning scenario is that pairwise combinations of random
variables are inherently dependent, which prevents us from using traditional
learning-theoretic arguments. We present a unified, graph-based analysis, which
allows us to analyze this dependence using well-known graph identities. We are
then able to bound the generalization error of learned binary relations using
Rademacher complexity and algorithmic stability. The rate of uniform
convergence is partially determined by the labeler's subsampling process. We
thus examine how various assumptions about subsampling affect generalization;
under a natural random subsampling process, our bounds guarantee
$\tilde{O}(1/\sqrt{n})$ uniform convergence.
| Ben London and Bert Huang and Lise Getoor | null | 1302.5348 | null | null |
Nonparametric Basis Pursuit via Sparse Kernel-based Learning | cs.LG cs.CV cs.IT math.IT stat.ML | Signal processing tasks as fundamental as sampling, reconstruction, minimum
mean-square error interpolation and prediction can be viewed under the prism of
reproducing kernel Hilbert spaces. Endowing this vantage point with
contemporary advances in sparsity-aware modeling and processing promotes the
nonparametric basis pursuit advocated in this paper as the overarching
framework for the confluence of kernel-based learning (KBL) approaches
leveraging sparse linear regression, nuclear-norm regularization, and
dictionary learning. The novel sparse KBL toolbox goes beyond translating
sparse parametric approaches to their nonparametric counterparts, to
incorporate new possibilities such as multi-kernel selection and matrix
smoothing. The impact of sparse KBL to signal processing applications is
illustrated through test cases from cognitive radio sensing, microarray data
imputation, and network traffic prediction.
| Juan Andres Bazerque and Georgios B. Giannakis | null | 1302.5449 | null | null |
The Importance of Clipping in Neurocontrol by Direct Gradient Descent on
the Cost-to-Go Function and in Adaptive Dynamic Programming | cs.LG | In adaptive dynamic programming, neurocontrol and reinforcement learning, the
objective is for an agent to learn to choose actions so as to minimise a total
cost function. In this paper we show that when discretized time is used to
model the motion of the agent, it can be very important to do "clipping" on the
motion of the agent in the final time step of the trajectory. By clipping we
mean that the final time step of the trajectory is to be truncated such that
the agent stops exactly at the first terminal state reached, and no distance
further. We demonstrate that when clipping is omitted, learning performance can
fail to reach the optimum; and when clipping is done properly, learning
performance can improve significantly.
The clipping problem we describe affects algorithms which use explicit
derivatives of the model functions of the environment to calculate a learning
gradient. These include Backpropagation Through Time for Control, and methods
based on Dual Heuristic Dynamic Programming. However the clipping problem does
not significantly affect methods based on Heuristic Dynamic Programming,
Temporal Differences or Policy Gradient Learning algorithms. Similarly, the
clipping problem does not affect fixed-length finite-horizon problems.
| Michael Fairbank | null | 1302.5565 | null | null |
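A minimal sketch of the clipping idea under discretized time, assuming Euler integration of the agent's motion; the bisection search for the boundary crossing is an illustrative choice, not the paper's exact procedure:

```python
import numpy as np

def step_with_clipping(x, action, dt, is_terminal, iters=30):
    """One discretized time step with clipping: if the full Euler step
    crosses into a terminal state, bisect for the crossing time so the
    agent stops exactly at the boundary. Returns (next state, step length)."""
    x_next = x + dt * action              # full Euler step
    if not is_terminal(x_next):
        return x_next, dt
    lo, hi = 0.0, dt                      # bisect for the crossing time
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if is_terminal(x + mid * action):
            hi = mid
        else:
            lo = mid
    return x + hi * action, hi            # clipped step and its duration
```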
Accelerated Linear SVM Training with Adaptive Variable Selection
Frequencies | stat.ML cs.LG | Support vector machine (SVM) training is an active research area since the
dawn of the method. In recent years there has been increasing interest in
specialized solvers for the important case of linear models. The algorithm
presented by Hsieh et al., probably best known under the name of the
"liblinear" implementation, marks a major breakthrough. The method is analog to
established dual decomposition algorithms for training of non-linear SVMs, but
with greatly reduced computational complexity per update step. This comes at
the cost of not keeping track of the gradient of the objective any more, which
excludes the application of highly developed working set selection algorithms.
We present an algorithmic improvement to this method. We replace uniform
working set selection with an online adaptation of selection frequencies. The
adaptation criterion is inspired by modern second order working set selection
methods. The same mechanism replaces the shrinking heuristic. This novel
technique speeds up training in some cases by more than an order of magnitude.
| Tobias Glasmachers and \"Ur\"un Dogan | null | 1302.5608 | null | null |
Sparse Signal Estimation by Maximally Sparse Convex Optimization | cs.LG stat.ML | This paper addresses the problem of sparsity penalized least squares for
applications in sparse signal processing, e.g. sparse deconvolution. This paper
aims to induce sparsity more strongly than L1 norm regularization, while
avoiding non-convex optimization. For this purpose, this paper describes the
design and use of non-convex penalty functions (regularizers) constrained so as
to ensure the convexity of the total cost function, F, to be minimized. The
method is based on parametric penalty functions, the parameters of which are
constrained to ensure convexity of F. It is shown that optimal parameters can
be obtained by semidefinite programming (SDP). This maximally sparse convex
(MSC) approach yields maximally non-convex sparsity-inducing penalty functions
constrained such that the total cost function, F, is convex. It is demonstrated
that iterative MSC (IMSC) can yield solutions substantially more sparse than
the standard convex sparsity-inducing approach, i.e., L1 norm minimization.
| Ivan W. Selesnick and Ilker Bayram | 10.1109/TSP.2014.2298839 | 1302.5729 | null | null |
Prediction by Random-Walk Perturbation | cs.LG | We propose a version of the follow-the-perturbed-leader online prediction
algorithm in which the cumulative losses are perturbed by independent symmetric
random walks. The forecaster is shown to achieve an expected regret of the
optimal order O(sqrt(n log N)) where n is the time horizon and N is the number
of experts. More importantly, it is shown that the forecaster changes its
prediction at most O(sqrt(n log N)) times, in expectation. We also extend the
analysis to online combinatorial optimization and show that even in this more
general setting, the forecaster rarely switches between experts while having a
regret of near-optimal order.
| Luc Devroye, G\'abor Lugosi, Gergely Neu | null | 1302.5797 | null | null |
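A minimal sketch of the forecaster described above, assuming unit $\pm 1$ random-walk steps (the scale is a free choice here): each expert's cumulative loss is perturbed by its own symmetric random walk, and the perturbed leader is followed.

```python
import numpy as np

def random_walk_fpl(loss_matrix, scale=1.0, seed=0):
    """Follow the perturbed leader where each expert's cumulative loss is
    perturbed by its own symmetric (+-1) random walk, so perturbations grow
    over time instead of being redrawn i.i.d. each round.
    loss_matrix: (n rounds, N experts); returns the expert chosen per round."""
    rng = np.random.default_rng(seed)
    n, N = loss_matrix.shape
    cum_loss = np.zeros(N)
    walk = np.zeros(N)
    picks = []
    for t in range(n):
        walk += scale * rng.choice([-1.0, 1.0], size=N)  # random-walk step
        picks.append(int(np.argmin(cum_loss + walk)))    # perturbed leader
        cum_loss += loss_matrix[t]
    return picks
```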
On learning parametric-output HMMs | cs.LG math.ST stat.ML stat.TH | We present a novel approach for learning an HMM whose outputs are distributed
according to a parametric family. This is done by {\em decoupling} the learning
task into two steps: first estimating the output parameters, and then
estimating the hidden states transition probabilities. The first step is
accomplished by fitting a mixture model to the output stationary distribution.
Given the parameters of this mixture model, the second step is formulated as
the solution of an easily solvable convex quadratic program. We provide an
error analysis for the estimated transition probabilities and show they are
robust to small perturbations in the estimates of the mixture parameters.
Finally, we support our analysis with some encouraging empirical results.
| Aryeh Kontorovich, Boaz Nadler, Roi Weiss | null | 1302.6009 | null | null |
Phoneme discrimination using $KS$-algebra II | cs.SD cs.LG stat.ML | $KS$-algebra consists of expressions constructed with four kinds of operations:
the minimum, maximum, difference and additively homogeneous generalized means.
Five families of $Z$-classifiers are investigated on binary classification
tasks between English phonemes. It is shown that the classifiers are able to
reflect well-known formant characteristics of vowels, while having very small
Kolmogorov complexity.
| Ondrej Such and Lenka Mackovicova | null | 1302.6194 | null | null |
A Homogeneous Ensemble of Artificial Neural Networks for Time Series
Forecasting | cs.NE cs.LG | Enhancing the robustness and accuracy of time series forecasting models is an
active area of research. Recently, Artificial Neural Networks (ANNs) have found
extensive applications in many practical forecasting problems. However, the
standard backpropagation ANN training algorithm has some critical issues,
e.g., a slow convergence rate, frequent convergence to local minima, complex
patterns of error surfaces, and a lack of proper methods for selecting
training parameters. To overcome these drawbacks, various improved training
methods have been developed in the literature, but none of them can be
guaranteed to be the best for all problems. In this paper, we propose a novel weighted ensemble
scheme which intelligently combines multiple training algorithms to increase
the ANN forecast accuracies. The weight for each training algorithm is
determined from the performance of the corresponding ANN model on the
validation dataset. Experimental results on four important time series show
that our proposed technique reduces the mentioned shortcomings of individual
ANN training algorithms to a great extent. It also achieves significantly
better forecast accuracies than two other popular statistical models.
| Ratnadip Adhikari, R. K. Agrawal | 10.5120/3913-5505 | 1302.6210 | null | null |
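A minimal sketch of the weighted combination step, assuming weights inversely proportional to validation error (one plausible choice; the paper's exact weighting formula may differ):

```python
import numpy as np

def ensemble_weights(val_errors):
    """Weights inversely proportional to each model's validation error,
    normalized to sum to one (an illustrative choice)."""
    inv = 1.0 / np.asarray(val_errors, dtype=float)
    return inv / inv.sum()

def ensemble_forecast(forecasts, weights):
    """Weighted combination of per-model forecasts, shape (models, horizon)."""
    return np.average(np.asarray(forecasts, dtype=float), axis=0, weights=weights)

# e.g. three ANNs trained with different algorithms (validation MSEs made up):
w = ensemble_weights([0.12, 0.08, 0.20])
print(ensemble_forecast([[1.0, 1.1], [0.9, 1.0], [1.2, 1.3]], w))
```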
Rate-Distortion Bounds for an Epsilon-Insensitive Distortion Measure | cs.IT cs.LG math.IT | Direct evaluation of the rate-distortion function has rarely been achieved
when it is strictly greater than its Shannon lower bound. In this paper, we
consider the rate-distortion function for the distortion measure defined by an
epsilon-insensitive loss function. We first present the Shannon lower bound
applicable to any source distribution with finite differential entropy. Then,
focusing on the Laplacian and Gaussian sources, we prove that the
rate-distortion functions of these sources are strictly greater than their
Shannon lower bounds and obtain analytically evaluable upper bounds for the
rate-distortion functions. Small distortion limit and numerical evaluation of
the bounds suggest that the Shannon lower bound provides a good approximation
to the rate-distortion function for the epsilon-insensitive distortion measure.
| Kazuho Watanabe | null | 1302.6315 | null | null |
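For reference, the standard $\epsilon$-insensitive loss familiar from support vector regression, which is presumably the distortion measure meant above:

$$ d_\epsilon(x, \hat{x}) = \max\{\, |x - \hat{x}| - \epsilon,\; 0 \,\}, $$

so the distortion is zero whenever the reproduction $\hat{x}$ falls within $\epsilon$ of the source value $x$.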
The adaptive Gril estimator with a diverging number of parameters | stat.ME cs.LG | We consider the problem of variable selection and estimation in linear
regression model in situations where the number of parameters diverges with the
sample size. We propose the adaptive Generalized Ridge-Lasso (\mbox{AdaGril})
which is an extension of the adaptive Elastic Net. AdaGril incorporates
information redundancy among correlated variables for model selection and
estimation. It combines the strengths of the quadratic regularization and the
adaptively weighted Lasso shrinkage. In this paper, we highlight the grouped
selection property for the AdaCnet method (one type of AdaGril) in the equal
correlation case. Under weak conditions, we establish the oracle property of
AdaGril, which ensures optimal performance when the dimension is high.
Consequently, it achieves both goals of handling the problem of collinearity in
high dimension and enjoys the oracle property. Moreover, we show that AdaGril
estimator achieves a Sparsity Inequality, i.e., a bound in terms of the number
of non-zero components of the 'true' regression coefficient. This bound is
obtained under a similar weak Restricted Eigenvalue (RE) condition used for
Lasso. Simulation studies show that some particular cases of AdaGril
outperform its competitors.
| Mohammed El Anbari and Abdallah Mkhadri | null | 1302.6390 | null | null |
ML4PG in Computer Algebra verification | cs.LO cs.LG | ML4PG is a machine-learning extension that provides statistical proof hints
during the process of Coq/SSReflect proof development. In this paper, we use
ML4PG to find proof patterns in the CoqEAL library -- a library that was
devised to verify the correctness of Computer Algebra algorithms. In
particular, we use ML4PG to help us in the formalisation of an efficient
algorithm to compute the inverse of triangular matrices.
| J\'onathan Heras and Ekaterina Komendantskaya | null | 1302.6421 | null | null |
A Conformal Prediction Approach to Explore Functional Data | stat.ML cs.LG | This paper applies conformal prediction techniques to compute simultaneous
prediction bands and clustering trees for functional data. These tools can be
used to detect outliers and clusters. Both our prediction bands and clustering
trees provide prediction sets for the underlying stochastic process with a
guaranteed finite sample behavior, under no distributional assumptions. The
prediction sets are also informative in that they correspond to the high
density region of the underlying process. While ordinary conformal prediction
has high computational cost for functional data, we use the inductive conformal
predictor, together with several novel choices of conformity scores, to
simplify the computation. Our methods are illustrated on some real data
examples.
| Jing Lei, Alessandro Rinaldo, Larry Wasserman | null | 1302.6452 | null | null |
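The paper treats functional data with several novel conformity scores; the following is only a minimal scalar-regression sketch of the inductive (split) conformal predictor it builds on, using absolute residuals as conformity scores:

```python
import numpy as np

def split_conformal_band(fit, X, y, X_test, alpha=0.1, seed=0):
    """Split (inductive) conformal prediction for scalar regression: train on
    one half, use absolute residuals on the held-out half as conformity
    scores, and return intervals with finite-sample coverage >= 1 - alpha
    under exchangeability. fit(X, y) must return a prediction function."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    tr, cal = idx[: len(y) // 2], idx[len(y) // 2:]
    predict = fit(X[tr], y[tr])
    scores = np.abs(y[cal] - predict(X[cal]))       # conformity scores
    k = int(np.ceil((1 - alpha) * (len(cal) + 1)))  # conformal rank
    q = np.sort(scores)[min(k, len(cal)) - 1]       # clamped if k > n_cal
    mu = predict(X_test)
    return mu - q, mu + q
```

Here `fit(X, y)` is any training routine returning a prediction function; the finite-sample guarantee holds for any such choice under exchangeability of the data.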
Sparse Frequency Analysis with Sparse-Derivative Instantaneous Amplitude
and Phase Functions | cs.LG | This paper addresses the problem of expressing a signal as a sum of frequency
components (sinusoids) wherein each sinusoid may exhibit abrupt changes in its
amplitude and/or phase. The Fourier transform of a narrow-band signal, with a
discontinuous amplitude and/or phase function, exhibits spectral and temporal
spreading. The proposed method aims to avoid such spreading by explicitly
modeling the signal of interest as a sum of sinusoids with time-varying
amplitudes. So as to accommodate abrupt changes, it is further assumed that the
amplitude/phase functions are approximately piecewise constant (i.e., their
time-derivatives are sparse). The proposed method is based on a convex
variational (optimization) approach wherein the total variation (TV) of the
amplitude functions are regularized subject to a perfect (or approximate)
reconstruction constraint. A computationally efficient algorithm is derived
based on convex optimization techniques. The proposed technique can be used to
perform band-pass filtering that is relatively insensitive to narrow-band
amplitude/phase jumps present in data, which normally pose a challenge (due to
transients, leakage, etc.). The method is illustrated using both synthetic
signals and human EEG data for the purpose of band-pass filtering and the
estimation of phase synchrony indexes.
| Yin Ding and Ivan W. Selesnick | null | 1302.6523 | null | null |
An Introductory Study on Time Series Modeling and Forecasting | cs.LG stat.ML | Time series modeling and forecasting has fundamental importance to various
practical domains, and consequently it has been the subject of active research
for several years. Many important models have been proposed in the
literature for improving the accuracy and effectiveness of time series
forecasting. The aim of this dissertation work is to present a concise
description of some popular time series forecasting models used in practice,
with their salient features. In this thesis, we have described three important
classes of time series models, viz. the stochastic, neural networks and SVM
based models, together with their inherent forecasting strengths and
weaknesses. We have also discussed the basic issues related to time
series modeling, such as stationarity, parsimony, and overfitting. Our
discussion about different time series models is supported by giving the
experimental forecast results, performed on six real time series datasets.
While fitting a model to a dataset, special care is taken to select the most
parsimonious one. To evaluate forecast accuracy as well as to compare among
different models fitted to a time series, we have used the five performance
measures, viz. MSE, MAD, RMSE, MAPE and Theil's U-statistics. For each of the
six datasets, we have shown the obtained forecast diagram which graphically
depicts the closeness between the original and forecasted observations. To
ensure both authenticity and clarity in our discussion of time series modeling
and forecasting, we have drawn upon various published research works from
reputed journals as well as several standard books.
| Ratnadip Adhikari, R. K. Agrawal | null | 1302.6613 | null | null |
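The stochastic-model experiments described in the dissertation can be reproduced in outline with statsmodels: fit a parsimonious ARIMA model and report the same style of error measures. A minimal sketch, assuming a univariate series split into y_train/y_test as NumPy arrays with nonzero test values; the order (1, 1, 1) is a placeholder, not the thesis's fitted order:

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def fit_and_score(y_train, y_test):
        """Fit ARIMA and compute several of the forecast error measures."""
        model = ARIMA(y_train, order=(1, 1, 1)).fit()
        fcast = model.forecast(steps=len(y_test))
        err = y_test - fcast
        return {
            "MSE": np.mean(err ** 2),
            "MAD": np.mean(np.abs(err)),
            "RMSE": np.sqrt(np.mean(err ** 2)),
            "MAPE": 100 * np.mean(np.abs(err / y_test)),
        }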
Arriving on time: estimating travel time distributions on large-scale
road networks | cs.LG cs.AI | Most optimal routing problems focus on minimizing travel time or distance
traveled. Oftentimes, a more useful objective is to maximize the probability of
on-time arrival, which requires statistical distributions of travel times,
rather than just mean values. We propose a method to estimate travel time
distributions on large-scale road networks, using probe vehicle data collected
from GPS. We present a framework that handles large volumes of input data and
scales linearly with the size of the network. Leveraging the planar topology of
the graph, the method efficiently computes the time correlations between
neighboring streets. First, raw probe vehicle traces are compressed into pairs
of travel times and number of stops for each traversed road segment using a
`stop-and-go' algorithm developed for this work. The compressed data is then
used as input for training a path travel time model, which couples a Markov
model along with a Gaussian Markov random field. Finally, scalable inference
algorithms are developed for obtaining path travel time distributions from the
composite MM-GMRF model. We illustrate the accuracy and scalability of our
model on a 505,000 road link network spanning the San Francisco Bay Area.
| Timothy Hunter, Aude Hofleitner, Jack Reilly, Walid Krichene, Jerome
Thai, Anastasios Kouvelas, Pieter Abbeel, Alexandre Bayen | null | 1302.6617 | null | null |
Taming the Curse of Dimensionality: Discrete Integration by Hashing and
Optimization | cs.LG cs.AI stat.ML | Integration is affected by the curse of dimensionality and quickly becomes
intractable as the dimensionality of the problem grows. We propose a randomized
algorithm that, with high probability, gives a constant-factor approximation of
a general discrete integral defined over an exponentially large set. This
algorithm relies on solving only a small number of instances of a discrete
combinatorial optimization problem subject to randomly generated parity
constraints used as a hash function. As an application, we demonstrate that
with a small number of MAP queries we can efficiently approximate the partition
function of discrete graphical models, which can in turn be used, for instance,
for marginal computation or model selection.
| Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, Bart Selman | null | 1302.6677 | null | null |
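The core loop of the hashing-and-optimization idea is simple to sketch: add i random parity (XOR) constraints as a hash, solve the resulting constrained optimization, and combine the optima across levels. A toy sketch for estimating Z = sum_x w(x) over x in {0,1}^n, assuming brute-force search over a small domain in place of the combinatorial MAP solver:

    import numpy as np

    def constrained_max(w, n, A, b):
        """Max of w(x) over x in {0,1}^n s.t. A x = b (mod 2), by brute force."""
        best = 0.0
        for z in range(2 ** n):
            x = np.array([(z >> j) & 1 for j in range(n)])
            if A.shape[0] == 0 or np.all(A.dot(x) % 2 == b):
                best = max(best, w(x))
        return best

    def wish_estimate(w, n, reps=5):
        """Estimate sum_x w(x) from medians of optima under random parities."""
        est = constrained_max(w, n, np.zeros((0, n), dtype=int),
                              np.zeros(0, dtype=int))
        for i in range(1, n + 1):
            optima = [constrained_max(w, n,
                                      np.random.randint(0, 2, (i, n)),
                                      np.random.randint(0, 2, i))
                      for _ in range(reps)]
            est += np.median(optima) * 2 ** (i - 1)  # ~2^(n-i) states survive
        return est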
Categorizing Bugs with Social Networks: A Case Study on Four Open Source
Software Communities | cs.SE cs.LG cs.SI nlin.AO physics.soc-ph | Efficient bug triaging procedures are an important precondition for
successful collaborative software engineering projects. Triaging bugs can
become a laborious task particularly in open source software (OSS) projects
with a large base of comparatively inexperienced part-time contributors. In this
paper, we propose an efficient and practical method to identify valid bug
reports which a) refer to an actual software bug, b) are not duplicates and c)
contain enough information to be processed right away. Our classification is
based on nine measures to quantify the social embeddedness of bug reporters in
the collaboration network. We demonstrate its applicability in a case study,
using a comprehensive data set of more than 700,000 bug reports obtained from
the Bugzilla installation of four major OSS communities, for a period of more
than ten years. For those projects that exhibit the lowest fraction of valid
bug reports, we find that the bug reporters' position in the collaboration
network is a strong indicator for the quality of bug reports. Based on this
finding, we develop an automated classification scheme that can easily be
integrated into bug tracking platforms and analyze its performance in the
considered OSS communities. A support vector machine (SVM) to identify valid
bug reports based on the nine measures yields a precision of up to 90.3% with
an associated recall of 38.9%. With this, we significantly improve the results
obtained in previous case studies for an automated early identification of bugs
that are eventually fixed. Furthermore, our study highlights the potential of
using quantitative measures of social organization in collaborative software
engineering. It also opens a broad perspective for the integration of social
awareness in the design of support infrastructures.
| Marcelo Serrano Zanetti, Ingo Scholtes, Claudio Juan Tessone and Frank
Schweitzer | null | 1302.6764 | null | null |
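The classification pipeline described above — an SVM over nine reporter-centrality measures, evaluated by precision and recall — can be sketched generically with scikit-learn. A minimal sketch, assuming features is an (n_reports x 9) array of the social-embeddedness measures and labels marks valid reports; the paper's exact kernel and tuning are not reproduced:

    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.metrics import precision_score, recall_score

    def train_bug_report_classifier(features, labels):
        """Train an SVM on reporter centrality measures; report precision/recall."""
        X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                                  test_size=0.25)
        clf = make_pipeline(StandardScaler(),
                            SVC(kernel="rbf", class_weight="balanced"))
        clf.fit(X_tr, y_tr)
        y_hat = clf.predict(X_te)
        return clf, precision_score(y_te, y_hat), recall_score(y_te, y_hat)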
Missing Entries Matrix Approximation and Completion | math.NA cs.LG stat.ML | We describe several algorithms for matrix completion and matrix approximation
when only some of its entries are known. The approximation constraint can be
any whose approximated solution is known for the full matrix. For low rank
approximations, similar algorithms have appeared recently in the literature under
different names. In this work, we introduce new theorems for matrix
approximation and show that these algorithms can be extended to handle
constraints beyond low-rank approximation, such as nuclear norm, spectral norm,
and orthogonality constraints. As the
algorithms can be viewed from an optimization point of view, we discuss their
convergence to global solution for the convex case. We also discuss the optimal
step size and show that it is fixed in each iteration. In addition, the derived
matrix completion flow is robust and does not require any parameters. This
matrix completion flow is applicable to different spectral minimizations and
can be applied to physics, mathematics and electrical engineering problems such
as data reconstruction of images and data coming from PDEs such as Helmholtz
equation used for electromagnetic waves.
| Gil Shabat, Yaniv Shmueli and Amir Averbuch | null | 1302.6768 | null | null |
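The parameter-free completion flow described here alternates between enforcing the approximation constraint on the full matrix and restoring the known entries. A minimal sketch for the low-rank (truncated-SVD) instance of the constraint, assuming mask is a boolean array marking observed entries:

    import numpy as np

    def complete_matrix(M, mask, rank, n_iter=200):
        """Alternate: project to rank-r (constraint on full matrix), reimpose data."""
        X = np.where(mask, M, 0.0)          # initial fill of missing entries
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s[rank:] = 0.0                  # best rank-r approximation
            X = (U * s) @ Vt
            X[mask] = M[mask]               # restore the known entries
        return X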
Learning Gaussian Networks | cs.AI cs.LG stat.ML | We describe algorithms for learning Bayesian networks from a combination of
user knowledge and statistical data. The algorithms have two components: a
scoring metric and a search procedure. The scoring metric takes a network
structure, statistical data, and a user's prior knowledge, and returns a score
proportional to the posterior probability of the network structure given the
data. The search procedure generates networks for evaluation by the scoring
metric. Previous work has concentrated on metrics for domains containing only
discrete variables, under the assumption that data represents a multinomial
sample. In this paper, we extend this work, developing scoring metrics for
domains containing all continuous variables or a mixture of discrete and
continuous variables, under the assumption that continuous data is sampled from
a multivariate normal distribution. Our work extends traditional statistical
approaches for identifying vanishing regression coefficients in that we
identify two important assumptions, called event equivalence and parameter
modularity, that when combined allow the construction of prior distributions
for multivariate normal parameters from a single prior Bayesian network
specified by a user.
| Dan Geiger and David Heckerman | null | 1302.6808 | null | null |
Induction of Selective Bayesian Classifiers | cs.LG stat.ML | In this paper, we examine previous work on the naive Bayesian classifier and
review its limitations, which include a sensitivity to correlated features. We
respond to this problem by embedding the naive Bayesian induction scheme within
an algorithm that carries out a greedy search through the space of features.
We hypothesize that this approach will improve asymptotic accuracy in domains
that involve correlated features without reducing the rate of learning in ones
that do not. We report experimental results on six natural domains, including
comparisons with decision-tree induction, that support these hypotheses. In
closing, we discuss other approaches to extending naive Bayesian classifiers
and outline some directions for future research.
| Pat Langley, Stephanie Sage | null | 1302.6828 | null | null |
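The embedding of naive Bayes inside a greedy feature search can be sketched directly: starting from the empty set, repeatedly add the feature whose inclusion most improves cross-validated accuracy. A minimal sketch with scikit-learn's GaussianNB standing in for the naive Bayesian induction scheme (the paper's exact search and evaluation details differ):

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    def selective_bayes(X, y):
        """Greedy forward feature selection wrapped around naive Bayes."""
        selected, best_acc = [], 0.0
        remaining = list(range(X.shape[1]))
        while remaining:
            scores = [(cross_val_score(GaussianNB(), X[:, selected + [j]], y,
                                       cv=5).mean(), j) for j in remaining]
            acc, j = max(scores)
            if acc <= best_acc:             # stop when no feature helps
                break
            selected.append(j)
            remaining.remove(j)
            best_acc = acc
        return selected, best_acc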
Online Learning for Time Series Prediction | cs.LG | In this paper we address the problem of predicting a time series using the
ARMA (autoregressive moving average) model, under minimal assumptions on the
noise terms. Using regret minimization techniques, we develop effective online
learning algorithms for the prediction problem, without assuming that the noise
terms are Gaussian, identically distributed or even independent. Furthermore,
we show that our algorithm's performance asymptotically approaches the
performance of the best ARMA model in hindsight.
| Oren Anava, Elad Hazan, Shie Mannor, Ohad Shamir | null | 1302.6927 | null | null |
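The regret-minimization idea behind this line of work — treat each prediction as an online convex problem and update the coefficients by a gradient step on the observed loss — can be sketched for the autoregressive part. A minimal sketch, assuming squared loss and plain online gradient descent (the paper's improper-learning treatment of the moving-average terms is omitted):

    import numpy as np

    def online_ar_predict(y, p=3, eta=0.01):
        """Online gradient descent on AR(p) coefficients under squared loss."""
        w = np.zeros(p)
        preds = []
        for t in range(p, len(y)):
            x = y[t - p:t][::-1]            # last p observations, newest first
            y_hat = w @ x
            preds.append(y_hat)
            grad = 2 * (y_hat - y[t]) * x   # gradient of (y_hat - y_t)^2
            w -= eta * grad
        return np.array(preds), w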
Online Convex Optimization Against Adversaries with Memory and
Application to Statistical Arbitrage | cs.LG | The framework of online learning with memory naturally captures learning
problems with temporal constraints, and was previously studied for the experts
setting. In this work we extend the notion of learning with memory to the
general Online Convex Optimization (OCO) framework, and present two algorithms
that attain low regret. The first algorithm applies to Lipschitz continuous
loss functions, obtaining optimal regret bounds for both convex and strongly
convex losses. The second algorithm attains the optimal regret bounds and
applies more broadly to convex losses without requiring Lipschitz continuity,
yet is more complicated to implement. We complement our theoretical results with
an application to statistical arbitrage in finance: we devise algorithms for
constructing mean-reverting portfolios.
| Oren Anava, Elad Hazan, Shie Mannor | null | 1302.6937 | null | null |
Spectrum Bandit Optimization | cs.LG cs.NI math.OC | We consider the problem of allocating radio channels to links in a wireless
network. Links interact through interference, modelled as a conflict graph
(i.e., two interfering links cannot be simultaneously active on the same
channel). We aim at identifying the channel allocation maximizing the total
network throughput over a finite time horizon. Should we know the average radio
conditions on each channel and on each link, an optimal allocation would be
obtained by solving an Integer Linear Program (ILP). When radio conditions are
unknown a priori, we look for a sequential channel allocation policy that
converges to the optimal allocation while minimizing on the way the throughput
loss or {\it regret} due to the need for exploring sub-optimal allocations. We
formulate this problem as a generic linear bandit problem, and analyze it first
in a stochastic setting where radio conditions are driven by a stationary
stochastic process, and then in an adversarial setting where radio conditions
can evolve arbitrarily. We provide new algorithms in both settings and derive
upper bounds on their regrets.
| Marc Lelarge and Alexandre Proutiere and M. Sadegh Talebi | null | 1302.6974 | null | null |
Scoup-SMT: Scalable Coupled Sparse Matrix-Tensor Factorization | stat.ML cs.LG | How can we correlate neural activity in the human brain as it responds to
words, with behavioral data expressed as answers to questions about these same
words? In short, we want to find latent variables, that explain both the brain
activity, as well as the behavioral responses. We show that this is an instance
of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose
Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem
and produces a sparse latent low-rank subspace of the data. In our experiments,
we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm
for CMTF, along with a 5-fold increase in sparsity. Moreover, we extend
Scoup-SMT to handle missing data without degradation of performance. We apply
Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human
subjects) tensor and a (nouns, properties) matrix, with coupling along the
nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well
as to predict brain activity with competitive accuracy. Finally, we demonstrate
the generality of Scoup-SMT, by applying it on a Facebook dataset (users,
friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
| Evangelos E. Papalexakis, Tom M. Mitchell, Nicholas D. Sidiropoulos,
Christos Faloutsos, Partha Pratim Talukdar, Brian Murphy | null | 1302.7043 | null | null |
Learning Theory in the Arithmetic Hierarchy | math.LO cs.LG cs.LO | We consider the arithmetic complexity of index sets of uniformly computably
enumerable families learnable under different learning criteria. We determine
the exact complexity of these sets for the standard notions of finite learning,
learning in the limit, behaviorally correct learning and anomalous learning in
the limit. In proving the $\Sigma_5^0$-completeness result for behaviorally
correct learning we prove a result of independent interest; if a uniformly
computably enumerable family is not learnable, then for any computable learner
there is a $\Delta_2^0$ enumeration witnessing failure.
| Achilles Beros | null | 1302.7069 | null | null |
Estimating the Maximum Expected Value: An Analysis of (Nested) Cross
Validation and the Maximum Sample Average | stat.ML cs.AI cs.LG stat.ME | We investigate the accuracy of the two most common estimators for the maximum
expected value of a general set of random variables: a generalization of the
maximum sample average, and cross validation. No unbiased estimator exists and
we show that it is non-trivial to select a good estimator without knowledge
about the distributions of the random variables. We investigate and bound the
bias and variance of the aforementioned estimators and prove consistency. The
variance of cross validation can be significantly reduced, but not without
risking a large bias. The bias and variance of different variants of cross
validation are shown to be very problem-dependent, and a wrong choice can lead
to very inaccurate estimates.
| Hado van Hasselt | null | 1302.7175 | null | null |
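The two estimators under study are easy to state in code: the maximum of the per-variable sample averages, and a cross-validation variant that picks the argmax on one half of the data and evaluates it on the other. A minimal sketch, assuming samples is a list of 1-D arrays, one per random variable:

    import numpy as np

    def max_sample_average(samples):
        """Maximum of the per-variable sample means (positively biased)."""
        return max(s.mean() for s in samples)

    def cross_validation_estimate(samples):
        """Pick the argmax on one half, estimate its mean on the other half
        (negatively biased); symmetrize over the two folds."""
        halves = [(s[: len(s) // 2], s[len(s) // 2 :]) for s in samples]
        est = 0.0
        for a, b in [(0, 1), (1, 0)]:
            best = max(range(len(samples)), key=lambda i: halves[i][a].mean())
            est += halves[best][b].mean() / 2
        return est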
Online Similarity Prediction of Networked Data from Known and Unknown
Graphs | cs.LG | We consider online similarity prediction problems over networked data. We
begin by relating this task to the more standard class prediction problem,
showing that, given an arbitrary algorithm for class prediction, we can
construct an algorithm for similarity prediction with "nearly" the same mistake
bound, and vice versa. After noticing that this general construction is
computationally infeasible, we target our study to {\em feasible} similarity
prediction algorithms on networked data. We initially assume that the network
structure is {\em known} to the learner. Here we observe that Matrix Winnow
\cite{w07} has a near-optimal mistake guarantee, at the price of cubic
prediction time per round. This motivates our effort for an efficient
implementation of a Perceptron algorithm with a weaker mistake guarantee but
with only poly-logarithmic prediction time. Our focus then turns to the
challenging case of networks whose structure is initially {\em unknown} to the
learner. In this novel setting, where the network structure is only
incrementally revealed, we obtain a mistake-bounded algorithm with a quadratic
prediction time per round.
| Claudio Gentile, Mark Herbster, Stephen Pasteris | null | 1302.7263 | null | null |
Bayesian Consensus Clustering | stat.ML cs.LG | The task of clustering a set of objects based on multiple sources of data
arises in several modern applications. We propose an integrative statistical
model that permits a separate clustering of the objects for each data source.
These separate clusterings adhere loosely to an overall consensus clustering,
and hence they are not independent. We describe a computationally scalable
Bayesian framework for simultaneous estimation of both the consensus clustering
and the source-specific clusterings. We demonstrate that this flexible approach
is more robust than joint clustering of all data sources, and is more powerful
than clustering each data source separately. This work is motivated by the
integrated analysis of heterogeneous biomedical data, and we present an
application to subtype identification of breast cancer tumor samples using
publicly available data from The Cancer Genome Atlas. Software is available at
http://people.duke.edu/~el113/software.html.
| Eric F. Lock and David B. Dunson | 10.1093/bioinformatics/btt425 | 1302.7280 | null | null |
Source Separation using Regularized NMF with MMSE Estimates under GMM
Priors with Online Learning for The Uncertainties | cs.LG cs.NA | We propose a new method to enforce priors on the solution of the nonnegative
matrix factorization (NMF). The proposed algorithm can be used for denoising or
single-channel source separation (SCSS) applications. The NMF solution is
guided to follow the Minimum Mean Square Error (MMSE) estimates under Gaussian
mixture prior models (GMM) for the source signal. In SCSS applications, the
spectra of the observed mixed signal are decomposed as a weighted linear
combination of trained basis vectors for each source using NMF. In this work,
the NMF decomposition weight matrices are treated as a distorted image by a
distortion operator, which is learned directly from the observed signals. The
MMSE estimate of the weights matrix under GMM prior and log-normal distribution
for the distortion is then found to improve the NMF decomposition results. The
MMSE estimate is embedded within the optimization objective to form a novel
regularized NMF cost function. The corresponding update rules for the new
objectives are derived in this paper. Experimental results show that, the
proposed regularized NMF algorithm improves the source separation performance
compared with using NMF without prior or with other prior models.
| Emad M. Grais, Hakan Erdogan | null | 1302.7283 | null | null |
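The regularization idea — pulling the NMF weights toward an MMSE estimate computed under the prior — can be sketched by adding a quadratic attraction term to the standard multiplicative updates. A minimal sketch for the Euclidean objective, assuming W_mmse is a nonnegative stand-in for the GMM-based MMSE estimate of the weights (its computation under the log-normal distortion model is not reproduced here):

    import numpy as np

    def regularized_nmf_weights(V, B, W_mmse, lam=0.1, n_iter=200, eps=1e-9):
        """Solve V ~ B W with W >= 0, penalized by lam * ||W - W_mmse||_F^2."""
        W = np.abs(np.random.rand(B.shape[1], V.shape[1]))
        for _ in range(n_iter):
            num = B.T @ V + lam * W_mmse        # nonnegative part of -gradient
            den = B.T @ B @ W + lam * W + eps   # nonnegative part of +gradient
            W *= num / den                      # multiplicative update
        return W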
A Method for Comparing Hedge Funds | q-fin.ST cs.IR cs.LG stat.ML | The paper presents new machine learning methods: signal composition, which
classifies time-series regardless of length, type, and quantity; and
self-labeling, a supervised-learning enhancement. The paper describes further
the implementation of the methods on a financial search engine system to
identify behavioral similarities among time-series representing monthly returns
of 11,312 hedge funds operated during approximately one decade (2000 - 2010).
The presented approach of cross-category and cross-location classification
assists the investor to identify alternative investments.
| Uri Kartoun | null | 1303.0073 | null | null |
Bio-Signals-based Situation Comparison Approach to Predict Pain | stat.AP cs.LG stat.ML | This paper describes a time-series-based classification approach to identify
similarities between bio-medical-based situations. The proposed approach allows
classifying collections of time-series representing bio-medical measurements,
i.e., situations, regardless of the type, the length, and the quantity of the
time-series a situation is comprised of.
| Uri Kartoun | null | 1303.0076 | null | null |
Label-dependent Feature Extraction in Social Networks for Node
Classification | cs.SI cs.LG | A new method of feature extraction in the social network for within-network
classification is proposed in the paper. The method provides new features
calculated by combination of both: network structure information and class
labels assigned to nodes. The influence of various features on classification
performance has also been studied. The experiments on real-world data have
shown that features created owing to the proposed method can lead to
significant improvement of classification accuracy.
| Tomasz Kajdanowicz, Przemyslaw Kazienko, Piotr Doskocz | 10.1007/978-3-642-16567-2_7 | 1303.0095 | null | null |
Second-Order Non-Stationary Online Learning for Regression | cs.LG stat.ML | The goal of a learner, in standard online learning, is to have the cumulative
loss not much larger compared with the best-performing function from some fixed
class. Numerous algorithms were shown to have this gap arbitrarily close to
zero, compared with the best function that is chosen off-line. Nevertheless,
many real-world applications, such as adaptive filtering, are non-stationary in
nature, and the best prediction function may drift over time. We introduce two
novel algorithms for online regression, designed to work well in non-stationary
environment. Our first algorithm performs adaptive resets to forget the
history, while the second is last-step min-max optimal in context of a drift.
We analyze both algorithms in the worst-case regret framework and show that
they maintain an average loss close to that of the best slowly changing
sequence of linear functions, as long as the cumulative drift is sublinear. In
addition, in the stationary case, when no drift occurs, our algorithms suffer
logarithmic regret, as for previous algorithms. Our bounds improve over the
existing ones, and simulations demonstrate the usefulness of these algorithms
compared with other state-of-the-art approaches.
| Nina Vaits, Edward Moroshko, Koby Crammer | null | 1303.0140 | null | null |
Exploiting the Accumulated Evidence for Gene Selection in Microarray
Gene Expression Data | cs.CE cs.LG q-bio.QM | Machine Learning methods have of late made significant efforts to solving
multidisciplinary problems in the field of cancer classification using
microarray gene expression data. Feature subset selection methods can play an
important role in the modeling process, since these tasks are characterized by
a large number of features and few observations, making the modeling a
non-trivial undertaking. In this particular scenario, it is extremely important
to select genes by taking into account the possible interactions with other
gene subsets. This paper shows that, by accumulating the evidence in favour (or
against) each gene along the search process, the obtained gene subsets may
constitute better solutions, either in terms of predictive accuracy or gene
size, or in both. The proposed technique is extremely simple and applicable at
a negligible overhead in cost.
| G. Prat and Ll. Belanche | null | 1303.0156 | null | null |
Inverse Signal Classification for Financial Instruments | cs.LG cs.IR q-fin.ST stat.ML | The paper presents new machine learning methods: signal composition, which
classifies time-series regardless of length, type, and quantity; and
self-labeling, a supervised-learning enhancement. The paper describes further
the implementation of the methods on a financial search engine system using a
collection of 7,881 financial instruments traded during 2011 to identify
inverse behavior among the time-series.
| Uri Kartoun | null | 1303.0283 | null | null |
One-Class Support Measure Machines for Group Anomaly Detection | stat.ML cs.LG | We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on Sloan Digital Sky
Survey dataset and High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
| Krikamol Muandet and Bernhard Sch\"olkopf | null | 1303.0309 | null | null |
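The key construction — treating each group of points as a probability measure and applying a one-class SVM in the space of kernel mean embeddings — can be sketched with a precomputed kernel between groups. A minimal sketch, assuming groups is a list of (n_i x d) arrays and an RBF base kernel; scoring new groups requires the analogous cross-kernel against the training groups:

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import OneClassSVM

    def mean_embedding_kernel(groups, gamma=1.0):
        """K[i, j] = mean base-kernel value between groups i and j, i.e. the
        inner product of the empirical kernel mean embeddings."""
        m = len(groups)
        K = np.zeros((m, m))
        for i in range(m):
            for j in range(i, m):
                K[i, j] = K[j, i] = rbf_kernel(groups[i], groups[j],
                                               gamma=gamma).mean()
        return K

    def fit_ocsmm(groups, nu=0.1, gamma=1.0):
        K = mean_embedding_kernel(groups, gamma)
        return OneClassSVM(kernel="precomputed", nu=nu).fit(K)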
Learning Hash Functions Using Column Generation | cs.LG | Fast nearest neighbor searching is becoming an increasingly important tool in
solving many large-scale problems. Recently a number of approaches to learning
data-dependent hash functions have been developed. In this work, we propose a
column generation based method for learning data-dependent hash functions on
the basis of proximity comparison information. Given a set of triplets that
encode the pairwise proximity comparison information, our method learns hash
functions that preserve the relative comparison relationships in the data as
well as possible within the large-margin learning framework. The learning
procedure is implemented using column generation and hence is named CGHash. At
each iteration of the column generation procedure, the best hash function is
selected. Unlike most other hashing methods, our method generalizes to new data
points naturally, and it has a training objective which is convex, thus ensuring
that the global optimum can be identified. Experiments demonstrate that the
proposed method learns compact binary codes and that its retrieval performance
compares favorably with state-of-the-art methods when tested on a few benchmark
datasets.
| Xi Li and Guosheng Lin and Chunhua Shen and Anton van den Hengel and
Anthony Dick | null | 1303.0339 | null | null |
Matrix Completion via Max-Norm Constrained Optimization | cs.LG cs.IT math.IT stat.ML | Matrix completion has been well studied under the uniform sampling model and
the trace-norm regularized methods perform well both theoretically and
numerically in such a setting. However, the uniform sampling model is
unrealistic for a range of applications and the standard trace-norm relaxation
can behave very poorly when the underlying sampling scheme is non-uniform.
In this paper we propose and analyze a max-norm constrained empirical risk
minimization method for noisy matrix completion under a general sampling model.
The optimal rate of convergence is established under the Frobenius norm loss in
the context of approximately low-rank matrix reconstruction. It is shown that
the max-norm constrained method is minimax rate-optimal and yields a unified
and robust approximate recovery guarantee, with respect to the sampling
distributions. The computational effectiveness of this method is also
discussed, based on first-order algorithms for solving convex optimizations
involving max-norm regularization.
| T. Tony Cai, Wen-Xin Zhou | 10.1214/16-EJS1147 | 1303.0341 | null | null |
Sparse PCA through Low-rank Approximations | stat.ML cs.IT cs.LG math.IT | We introduce a novel algorithm that computes the $k$-sparse principal
component of a positive semidefinite matrix $A$. Our algorithm is combinatorial
and operates by examining a discrete set of special vectors lying in a
low-dimensional eigen-subspace of $A$. We obtain provable approximation
guarantees that depend on the spectral decay profile of the matrix: the faster
the eigenvalue decay, the better the quality of our approximation. For example,
if the eigenvalues of $A$ follow a power-law decay, we obtain a polynomial-time
approximation algorithm for any desired accuracy.
A key algorithmic component of our scheme is a combinatorial feature
elimination step that is provably safe and in practice significantly reduces
the running complexity of our algorithm. We implement our algorithm and test it
on multiple artificial and real data sets. Due to the feature elimination step,
it is possible to perform sparse PCA on data sets consisting of millions of
entries in a few minutes. Our experimental evaluation shows that our scheme is
nearly optimal while finding very sparse vectors. We compare to the prior state
of the art and show that our scheme matches or outperforms previous algorithms
in all tested data sets.
| Dimitris S. Papailiopoulos, Alexandros G. Dimakis, and Stavros
Korokythakis | null | 1303.0551 | null | null |
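The spirit of the low-rank approach shows already in the rank-1 special case: take the leading eigenvector of A, keep its k largest-magnitude coordinates as the candidate support, and re-solve the small k x k problem exactly. A minimal sketch (the paper's algorithm examines a richer set of candidate vectors from a rank-d eigen-subspace and adds the safe feature-elimination step):

    import numpy as np

    def sparse_pc_rank1(A, k):
        """k-sparse principal component from the rank-1 approximation of A."""
        eigvals, eigvecs = np.linalg.eigh(A)
        v = eigvecs[:, -1]                      # leading eigenvector
        support = np.argsort(np.abs(v))[-k:]    # k largest coordinates
        sub = A[np.ix_(support, support)]       # exact solve on the support
        w = np.linalg.eigh(sub)[1][:, -1]
        x = np.zeros(A.shape[0])
        x[support] = w
        return x / np.linalg.norm(x)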
Top-down particle filtering for Bayesian decision trees | stat.ML cs.LG | Decision tree learning is a popular approach for classification and
regression in machine learning and statistics, and Bayesian
formulations---which introduce a prior distribution over decision trees, and
formulate learning as posterior inference given data---have been shown to
produce competitive performance. Unlike classic decision tree learning
algorithms like ID3, C4.5 and CART, which work in a top-down manner, existing
Bayesian algorithms produce an approximation to the posterior distribution by
evolving a complete tree (or collection thereof) iteratively via local Monte
Carlo modifications to the structure of the tree, e.g., using Markov chain
Monte Carlo (MCMC). We present a sequential Monte Carlo (SMC) algorithm that
instead works in a top-down manner, mimicking the behavior and speed of classic
algorithms. We demonstrate empirically that our approach delivers accuracy
comparable to the most popular MCMC method, but operates more than an order of
magnitude faster, and thus represents a better computation-accuracy tradeoff.
| Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh | null | 1303.0561 | null | null |
Bayesian Compressed Regression | stat.ML cs.LG | As an alternative to variable selection or shrinkage in high dimensional
regression, we propose to randomly compress the predictors prior to analysis.
This dramatically reduces storage and computational bottlenecks, performing
well when the predictors can be projected to a low dimensional linear subspace
with minimal loss of information about the response. As opposed to existing
Bayesian dimensionality reduction approaches, the exact posterior distribution
conditional on the compressed data is available analytically, speeding up
computation by many orders of magnitude while also bypassing robustness issues
due to convergence and mixing problems with MCMC. Model averaging is used to
reduce sensitivity to the random projection matrix, while accommodating
uncertainty in the subspace dimension. Strong theoretical support is provided
for the approach by showing near parametric convergence rates for the
predictive density in the large p small n asymptotic paradigm. Practical
performance relative to competitors is illustrated in simulations and real data
applications.
| Rajarshi Guhaniyogi and David B. Dunson | null | 1303.0642 | null | null |
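The computational core of the approach — compress the predictors with a random matrix, then exploit the conjugate posterior in the low-dimensional space — can be sketched directly. A minimal sketch showing only the exact Gaussian posterior under a Gaussian prior with known noise variance; the paper's model averaging over projections and subspace dimensions is omitted:

    import numpy as np

    def compressed_regression_posterior(X, y, m, tau2=1.0, sigma2=1.0, seed=0):
        """Random projection to m dims, then exact Gaussian posterior."""
        rng = np.random.default_rng(seed)
        Phi = rng.normal(size=(X.shape[1], m)) / np.sqrt(m)  # compression matrix
        Z = X @ Phi                                          # compressed predictors
        A = Z.T @ Z / sigma2 + np.eye(m) / tau2              # posterior precision
        mean = np.linalg.solve(A, Z.T @ y / sigma2)          # posterior mean
        cov = np.linalg.inv(A)
        return Phi, mean, cov    # predict at X_new via (X_new @ Phi) @ mean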
Denoising Deep Neural Networks Based Voice Activity Detection | cs.LG cs.SD stat.ML | Recently, the deep-belief-networks (DBN) based voice activity detection (VAD)
has been proposed. It is powerful in fusing the advantages of multiple
features, and achieves the state-of-the-art performance. However, the deep
layers of the DBN-based VAD do not show an apparent superiority to the
shallower layers. In this paper, we propose a denoising-deep-neural-network
(DDNN) based VAD to address the aforementioned problem. Specifically, we
pre-train a deep neural network in a special unsupervised denoising greedy
layer-wise mode, and then fine-tune the whole network in a supervised way by
the common back-propagation algorithm. In the pre-training phase, we take the
noisy speech signals as the visible layer and try to extract a new feature that
minimizes the reconstruction cross-entropy loss between the noisy speech
signals and its corresponding clean speech signals. Experimental results show
that the proposed DDNN-based VAD not only outperforms the DBN-based VAD but
also shows an apparent performance improvement of the deep layers over
shallower layers.
| Xiao-Lei Zhang and Ji Wu | 10.1109/ICASSP.2013.6637769 | 1303.0663 | null | null |
Personalized News Recommendation with Context Trees | cs.IR cs.LG stat.ML | The profusion of online news articles makes it difficult to find interesting
articles, a problem that can be assuaged by using a recommender system to bring
the most relevant news stories to readers. However, news recommendation is
challenging because the most relevant articles are often new content seen by
few users. In addition, they are subject to trends and preference changes over
time, and in many cases we do not have sufficient information to profile the
reader.
In this paper, we introduce a class of news recommendation systems based on
context trees. They can provide high-quality news recommendation to anonymous
visitors based on present browsing behaviour. We show that context-tree
recommender systems provide good prediction accuracy and recommendation
novelty, and they are sufficiently flexible to capture the unique properties of
news articles.
| Florent Garcin, Christos Dimitrakakis and Boi Faltings | 10.1145/2507157.2507166 | 1303.0665 | null | null |
Learning AMP Chain Graphs and some Marginal Models Thereof under
Faithfulness: Extended Version | stat.ML cs.AI cs.LG | This paper deals with chain graphs under the Andersson-Madigan-Perlman (AMP)
interpretation. In particular, we present a constraint based algorithm for
learning an AMP chain graph a given probability distribution is faithful to.
Moreover, we show that the extension of Meek's conjecture to AMP chain graphs
does not hold, which compromises the development of efficient and correct
score+search learning algorithms under assumptions weaker than faithfulness.
We also introduce a new family of graphical models that consists of
undirected and bidirected edges. We name this new family maximal
covariance-concentration graphs (MCCGs) because it includes both covariance and
concentration graphs as subfamilies. However, every MCCG can be seen as the
result of marginalizing out some nodes in an AMP CG. We describe global, local
and pairwise Markov properties for MCCGs and prove their equivalence. We
characterize when two MCCGs are Markov equivalent, and show that every Markov
equivalence class of MCCGs has a distinguished member. We present a constraint
based algorithm for learning a MCCG a given probability distribution is
faithful to.
Finally, we present a graphical criterion for reading dependencies from a
MCCG of a probability distribution that satisfies the graphoid properties, weak
transitivity and composition. We prove that the criterion is sound and complete
in a certain sense.
| Jose M. Pe\~na | null | 1303.0691 | null | null |
Multivariate Temporal Dictionary Learning for EEG | cs.LG q-bio.NC stat.ML | This article addresses the issue of representing electroencephalographic
(EEG) signals in an efficient way. While classical approaches use a fixed Gabor
dictionary to analyze EEG signals, this article proposes a data-driven method
to obtain an adapted dictionary. To reach an efficient dictionary learning,
appropriate spatial and temporal modeling is required. Inter-channels links are
taken into account in the spatial multivariate model, and shift-invariance is
used for the temporal model. Multivariate learned kernels are informative (a
few atoms code plentiful energy) and interpretable (the atoms can have a
physiological meaning). Using real EEG data, the proposed method is shown to
outperform the classical multichannel matching pursuit used with a Gabor
dictionary, as measured by the representative power of the learned dictionary
and its spatial flexibility. Moreover, dictionary learning can capture
interpretable patterns: this ability is illustrated on real data, learning a
P300 evoked potential.
| Quentin Barth\'elemy, C\'edric Gouy-Pailler, Yoann Isaac, Antoine
Souloumiac, Anthony Larue, J\'er\^ome I. Mars | null | 1303.0742 | null | null |
Riemannian metrics for neural networks I: feedforward networks | cs.NE cs.IT cs.LG math.DG math.IT | We describe four algorithms for neural network training, each adapted to
different scalability constraints. These algorithms are mathematically
principled and invariant under a number of transformations in data and network
representation, from which performance is thus independent. These algorithms
are obtained from the setting of differential geometry, and are based on either
the natural gradient using the Fisher information matrix, or on Hessian
methods, scaled down in a specific way to allow for scalability while keeping
some of their key mathematical properties.
| Yann Ollivier | null | 1303.0818 | null | null |
GURLS: a Least Squares Library for Supervised Learning | cs.LG cs.AI cs.MS | We present GURLS, a least squares, modular, easy-to-extend software library
for efficient supervised learning. GURLS is targeted to machine learning
practitioners, as well as non-specialists. It offers a number of state-of-the-art
training strategies for medium and large-scale learning, and routines for
efficient model selection. The library is particularly well suited for
multi-output problems (multi-category/multi-label). GURLS is currently
available in two independent implementations: Matlab and C++. It takes
advantage of the favorable properties of the regularized least squares algorithm to
exploit advanced tools in linear algebra. Routines to handle computations with
very large matrices by means of memory-mapped storage and distributed task
execution are available. The package is distributed under the BSD licence and
is available for download at https://github.com/CBCL/GURLS.
| Andrea Tacchetti, Pavan K Mallapragada, Matteo Santoro, Lorenzo
Rosasco | null | 1303.0934 | null | null |
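The algorithmic core that GURLS builds on — regularized least squares with a closed-form solution and cheap model selection over the regularization path via a single eigendecomposition — can be sketched in a few lines. A minimal sketch of the linear primal case with a 1-D response vector y; GURLS's actual routines, memory-mapped storage, and multi-output handling are more elaborate:

    import numpy as np

    def rls_path(X, y, lambdas):
        """Closed-form RLS solutions for many lambdas, one eigendecomposition."""
        C = X.T @ X
        b = X.T @ y
        evals, V = np.linalg.eigh(C)     # computed once, reused for all lambdas
        Vb = V.T @ b
        return {lam: V @ (Vb / (evals + lam)) for lam in lambdas}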
An Equivalence between the Lasso and Support Vector Machines | cs.LG stat.ML | We investigate the relation of two fundamental tools in machine learning and
signal processing, that is the support vector machine (SVM) for classification,
and the Lasso technique used in regression. We show that the resulting
optimization problems are equivalent, in the following sense. Given any
instance of an $\ell_2$-loss soft-margin (or hard-margin) SVM, we construct a
Lasso instance having the same optimal solutions, and vice versa.
As a consequence, many existing optimization algorithms for both SVMs and
Lasso can also be applied to the respective other problem instances. Also, the
equivalence allows for many known theoretical insights for SVM and Lasso to be
translated between the two settings. One such implication gives a simple
kernelized version of the Lasso, analogous to the kernels used in the SVM
setting. Another consequence is that the sparsity of a Lasso solution is equal
to the number of support vectors for the corresponding SVM instance, and that
one can use screening rules to prune the set of support vectors. Furthermore,
we can relate sublinear time algorithms for the two problems, and give a new
such algorithm variant for the Lasso. We also study the regularization paths
for both methods.
| Martin Jaggi | null | 1303.1152 | null | null |
Classification with Asymmetric Label Noise: Consistency and Maximal
Denoising | stat.ML cs.LG | In many real-world classification problems, the labels of training examples
are randomly corrupted. Most previous theoretical work on classification with
label noise assumes that the two classes are separable, that the label noise is
independent of the true class label, or that the noise proportions for each
class are known. In this work, we give conditions that are necessary and
sufficient for the true class-conditional distributions to be identifiable.
These conditions are weaker than those analyzed previously, and allow for the
classes to be nonseparable and the noise levels to be asymmetric and unknown.
The conditions essentially state that a majority of the observed labels are
correct and that the true class-conditional distributions are "mutually
irreducible," a concept we introduce that limits the similarity of the two
distributions. For any label noise problem, there is a unique pair of true
class-conditional distributions satisfying the proposed conditions, and we
argue that this pair corresponds in a certain sense to maximal denoising of the
observed distributions.
Our results are facilitated by a connection to "mixture proportion
estimation," which is the problem of estimating the maximal proportion of one
distribution that is present in another. We establish a novel rate of
convergence result for mixture proportion estimation, and apply this to obtain
consistency of a discrimination rule based on surrogate loss minimization.
Experimental results on benchmark data and a nuclear particle classification
problem demonstrate the efficacy of our approach.
| Gilles Blanchard, Marek Flaska, Gregory Handy, Sara Pozzi, Clayton
Scott | null | 1303.1208 | null | null |
Discovery of factors in matrices with grades | cs.LG cs.NA | We present an approach to decomposition and factor analysis of matrices with
ordinal data. The matrix entries are grades to which objects represented by
rows satisfy attributes represented by columns, e.g. grades to which an image
is red, a product has a given feature, or a person performs well in a test. We
assume that the grades form a bounded scale equipped with certain aggregation
operators, conforming to the structure of a complete residuated lattice. We
present a greedy approximation algorithm for the problem of decomposition of
such matrix in a product of two matrices with grades under the restriction that
the number of factors be small. Our algorithm is based on a geometric insight
provided by a theorem identifying particular rectangular-shaped submatrices as
optimal factors for the decompositions. These factors correspond to formal
concepts of the input data and allow an easy interpretation of the
decomposition. We present illustrative examples and experimental evaluation.
| Radim Belohlavek and Vilem Vychodil | null | 1303.1264 | null | null |
Convex and Scalable Weakly Labeled SVMs | cs.LG | In this paper, we study the problem of learning from weakly labeled data,
where labels of the training examples are incomplete. This includes, for
example, (i) semi-supervised learning where labels are partially known; (ii)
multi-instance learning where labels are implicitly known; and (iii) clustering
where labels are completely unknown. Unlike supervised learning, learning with
weak labels involves a difficult Mixed-Integer Programming (MIP) problem.
Therefore, it can suffer from poor scalability and may also get stuck in a local
minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel
label generation strategy. This leads to a convex relaxation of the original
MIP, which is at least as tight as existing convex Semi-Definite Programming
(SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM
subproblems that are much more scalable than previous convex SDP relaxations.
Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised
learning; (ii) multi-instance learning for locating regions of interest in
content-based information retrieval; and (iii) clustering, clearly demonstrate
improved performance, and WellSVM is also readily applicable on large data
sets.
| Yu-Feng Li, Ivor W. Tsang, James T. Kwok and Zhi-Hua Zhou | null | 1303.1271 | null | null |
Large-Margin Metric Learning for Partitioning Problems | cs.LG stat.ML | In this paper, we consider unsupervised partitioning problems, such as
clustering, image segmentation, video segmentation and other change-point
detection problems. We focus on partitioning problems based explicitly or
implicitly on the minimization of Euclidean distortions, which include
mean-based change-point detection, K-means, spectral clustering and normalized
cuts. Our main goal is to learn a Mahalanobis metric for these unsupervised
problems, leading to feature weighting and/or selection. This is done in a
supervised way by assuming the availability of several potentially partially
labelled datasets that share the same metric. We cast the metric learning
problem as a large-margin structured prediction problem, with proper definition
of regularizers and losses, leading to a convex optimization problem which can
be solved efficiently with iterative techniques. We provide experiments where
we show how learning the metric may significantly improve the partitioning
performance in synthetic examples, bioinformatics, video segmentation and image
segmentation problems.
| R\'emi Lajugie (LIENS), Sylvain Arlot (LIENS), Francis Bach (LIENS) | null | 1303.1280 | null | null |
Multi-relational Learning Using Weighted Tensor Decomposition with
Modular Loss | cs.LG | We propose a modular framework for multi-relational learning via tensor
decomposition. In our learning setting, the training data contains multiple
types of relationships among a set of objects, which we represent by a sparse
three-mode tensor. The goal is to predict the values of the missing entries. To
do so, we model each relationship as a function of a linear combination of
latent factors. We learn this latent representation by computing a low-rank
tensor decomposition, using quasi-Newton optimization of a weighted objective
function. Sparsity in the observed data is captured by the weighted objective,
leading to improved accuracy when training data is limited. Exploiting sparsity
also improves efficiency, potentially up to an order of magnitude over
unweighted approaches. In addition, our framework accommodates arbitrary
combinations of smooth, task-specific loss functions, making it better suited
for learning different types of relations. For the typical cases of real-valued
functions and binary relations, we propose several loss functions and derive
the associated parameter gradients. We evaluate our method on synthetic and
real data, showing significant improvements in both accuracy and scalability
over related factorization techniques.
| Ben London, Theodoros Rekatsinas, Bert Huang, and Lise Getoor | null | 1303.1733 | null | null |
Revisiting the Nystrom Method for Improved Large-Scale Machine Learning | cs.LG cs.DS cs.NA | We reconsider randomized algorithms for the low-rank approximation of
symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel
matrices that arise in data analysis and machine learning applications. Our
main results consist of an empirical evaluation of the performance quality and
running time of sampling and projection methods on a diverse suite of SPSD
matrices. Our results highlight complementary aspects of sampling versus
projection methods; they characterize the effects of common data preprocessing
steps on the performance of these algorithms; and they point to important
differences between uniform sampling and nonuniform sampling methods based on
leverage scores. In addition, our empirical results illustrate that existing
theory is so weak that it does not provide even a qualitative guide to
practice. Thus, we complement our empirical results with a suite of worst-case
theoretical bounds for both random sampling and random projection methods.
These bounds are qualitatively superior to existing bounds---e.g. improved
additive-error bounds for spectral and Frobenius norm error and relative-error
bounds for trace norm error---and they point to future directions to make these
algorithms useful in even larger-scale machine learning applications.
| Alex Gittens and Michael W. Mahoney | null | 1303.1849 | null | null |
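The sampling-based Nystrom approximation evaluated in the paper can be sketched in a few lines: sample l columns of the SPSD matrix and reconstruct via the pseudoinverse of the sampled block. A minimal sketch with uniform sampling; the leverage-score variants studied in the paper replace the uniform choice of indices:

    import numpy as np

    def nystrom(K, l, seed=0):
        """Rank-l Nystrom approximation of an n x n SPSD matrix K."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(K.shape[0], size=l, replace=False)  # uniform sampling
        C = K[:, idx]                                        # n x l column sample
        W = K[np.ix_(idx, idx)]                              # l x l intersection
        return C @ np.linalg.pinv(W) @ C.T                   # K ~ C W^+ C^T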
Mining Representative Unsubstituted Graph Patterns Using Prior
Similarity Matrix | cs.CE cs.LG | One of the most powerful techniques to study protein structures is to look
for recurrent fragments (also called substructures or spatial motifs), then use
them as patterns to characterize the proteins under study. An emergent trend
consists of parsing proteins' three-dimensional (3D) structures into graphs of
amino acids. Hence, the search of recurrent spatial motifs is formulated as a
process of frequent subgraph discovery where each subgraph represents a spatial
motif. In this scope, several efficient approaches for frequent subgraph
discovery have been proposed in the literature. However, the set of discovered
frequent subgraphs is too large to be efficiently analyzed and explored in any
further process. In this paper, we propose a novel pattern selection approach
that shrinks the large number of discovered frequent subgraphs by selecting the
representative ones. Existing pattern selection approaches do not exploit the
domain knowledge. Yet, in our approach we incorporate the evolutionary
information of amino acids defined in the substitution matrices in order to
select the representative subgraphs. We show the effectiveness of our approach
on a number of real datasets. The results issued from our experiments show that
our approach is able to considerably decrease the number of motifs while
enhancing their interestingness.
| Wajdi Dhifli, Rabie Saidi, Engelbert Mephu Nguifo | 10.1016/j.is.2017.05.006 | 1303.2054 | null | null |
Transfer Learning for Voice Activity Detection: A Denoising Deep Neural
Network Perspective | cs.LG | Mismatching problem between the source and target noisy corpora severely
hinder the practical use of the machine-learning-based voice activity detection
(VAD). In this paper, we try to address this problem in the transfer learning
prospective. Transfer learning tries to find a common learning machine or a
common feature subspace that is shared by both the source corpus and the target
corpus. The denoising deep neural network is used as the learning machine.
Three transfer techniques, which aim to learn common feature representations,
are used for analysis. Experimental results demonstrate the effectiveness of
the transfer learning schemes on the mismatch problem.
| Xiao-Lei Zhang, Ji Wu | null | 1303.2104 | null | null |
Convex Discriminative Multitask Clustering | cs.LG | Multitask clustering tries to improve the clustering performance of multiple
tasks simultaneously by taking their relationship into account. Most existing
multitask clustering algorithms fall into the type of generative clustering,
and none are formulated as convex optimization problems. In this paper, we
propose two convex Discriminative Multitask Clustering (DMTC) algorithms to
address the problems. Specifically, we first propose a Bayesian DMTC framework.
Then, we propose two convex DMTC objectives within the framework. The first
one, which can be seen as a technical combination of the convex multitask
feature learning and the convex Multiclass Maximum Margin Clustering (M3C),
aims to learn a shared feature representation. The second one, which can be
seen as a combination of the convex multitask relationship learning and M3C,
aims to learn the task relationship. The two objectives are solved in a uniform
procedure by the efficient cutting-plane algorithm. Experimental results on a
toy problem and two benchmark datasets demonstrate the effectiveness of the
proposed algorithms.
| Xiao-Lei Zhang | null | 1303.2130 | null | null |
Heuristic Ternary Error-Correcting Output Codes Via Weight Optimization
and Layered Clustering-Based Approach | cs.LG | One important classifier ensemble for multiclass classification problems is
Error-Correcting Output Codes (ECOCs). It bridges multiclass problems and
binary-class classifiers by decomposing multiclass problems to a serial
binary-class problems. In this paper, we present a heuristic ternary code,
named Weight Optimization and Layered Clustering-based ECOC (WOLC-ECOC). It
starts with an arbitrary valid ECOC and iterates the following two steps until
the training risk converges. The first step, named Layered Clustering based
ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing
binary-class problem. The second step adds the new classifiers to ECOC by a
novel Optimized Weighted (OW) decoding algorithm, where the optimization
problem of the decoding is solved by the cutting plane algorithm. Technically,
LC-ECOC makes the heuristic training process not blocked by some difficult
binary-class problem. OW decoding guarantees the non-increase of the training
risk for ensuring a small code length. Results on 14 UCI datasets and a music
genre classification problem demonstrate the effectiveness of WOLC-ECOC.
| Xiao-Lei Zhang | null | 1303.2132 | null | null |
Complex Support Vector Machines for Regression and Quaternary
Classification | cs.LG stat.ML | The paper presents a new framework for complex Support Vector Regression as
well as Support Vector Machines for quaternary classification. The method
exploits the notion of widely linear estimation to model the input-out relation
for complex-valued data and considers two cases: a) the complex data are split
into their real and imaginary parts and a typical real kernel is employed to
map the complex data to a complexified feature space and b) a pure complex
kernel is used to directly map the data to the induced complex feature space.
The recently developed Wirtinger's calculus on complex reproducing kernel
Hilbert spaces (RKHS) is employed in order to compute the Lagrangian and derive
the dual optimization problem. As one of our major results, we prove that any
complex SVM/SVR task is equivalent with solving two real SVM/SVR tasks
exploiting a specific real kernel which is generated by the chosen complex
kernel. In particular, the case of pure complex kernels leads to the generation
of new kernels, which have not been considered before. In the classification
case, the proposed framework inherently splits the complex space into four
parts. This leads naturally in solving the four class-task (quaternary
classification), instead of the typical two classes of the real SVM. In turn,
this rationale can be used in a multiclass problem as a split-class scenario
based on four classes, as opposed to the one-versus-all method; this can lead
to significant computational savings. Experiments demonstrate the effectiveness
of the proposed framework for regression and classification tasks that involve
complex data.
| Pantelis Bouboulis, Sergios Theodoridis, Charalampos Mavroforakis,
Leoni Dalla | 10.1109/TNNLS.2014.2336679 | 1303.2184 | null | null |
Clustering on Multi-Layer Graphs via Subspace Analysis on Grassmann
Manifolds | cs.LG cs.CV cs.SI stat.ML | Relationships between entities in datasets are often of multiple nature, like
geographical distance, social relationships, or common interests among people
in a social network, for example. This information can naturally be modeled by
a set of weighted and undirected graphs that form a global multilayer graph,
where the common vertex set represents the entities and the edges on different
layers capture the similarities of the entities in terms of the different
modalities. In this paper, we address the problem of analyzing multi-layer
graphs and propose methods for clustering the vertices by efficiently merging
the information provided by the multiple modalities. To this end, we propose to
combine the characteristics of individual graph layers using tools from
subspace analysis on a Grassmann manifold. The resulting combination can then
be viewed as a low dimensional representation of the original data which
preserves the most important information from diverse relationships between
entities. We use this information in new clustering methods and test our
algorithm on several synthetic and real world datasets where we demonstrate
superior or competitive performance compared to baseline and state-of-the-art
techniques. Our generic framework further extends to numerous analysis and
learning problems that involve different types of information on graphs.
| Xiaowen Dong, Pascal Frossard, Pierre Vandergheynst, Nikolai Nefedov | 10.1109/TSP.2013.2295553 | 1303.2221 | null | null |
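A hedged sketch of the overall pipeline: compute a spectral subspace per graph layer, merge the subspaces, and cluster in the merged representation. The merging step below (dominant eigenvectors of the averaged projectors, i.e. a simple chordal mean on the Grassmannian) is an assumption standing in for the paper's joint objective.

```python
# Multi-layer spectral clustering sketch; the subspace merging rule here is a
# simplification, not the paper's exact optimization on the Grassmann manifold.
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def multilayer_spectral_clustering(adjacencies, k):
    """adjacencies: list of (n, n) weighted adjacency matrices; k: #clusters."""
    n = adjacencies[0].shape[0]
    P = np.zeros((n, n))
    for W in adjacencies:
        L = laplacian(W, normed=True)
        _, U = np.linalg.eigh(L)
        Uk = U[:, :k]                  # k smallest eigenvectors span the layer subspace
        P += Uk @ Uk.T                 # accumulate the subspace projectors
    _, V = np.linalg.eigh(P)
    joint = V[:, -k:]                  # dominant eigenvectors of the mean projector
    joint /= np.linalg.norm(joint, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10).fit_predict(joint)
```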
Penalty-regulated dynamics and robust learning procedures in games | math.OC cs.GT cs.LG | Starting from a heuristic learning scheme for N-person games, we derive a new
class of continuous-time learning dynamics consisting of a replicator-like
drift adjusted by a penalty term that renders the boundary of the game's
strategy space repelling. These penalty-regulated dynamics are equivalent to
players keeping an exponentially discounted aggregate of their on-going payoffs
and then using a smooth best response to pick an action based on these
performance scores. Owing to this inherent duality, the proposed dynamics
satisfy a variant of the folk theorem of evolutionary game theory and they
converge to (arbitrarily precise) approximations of Nash equilibria in
potential games. Motivated by applications to traffic engineering, we exploit
this duality further to design a discrete-time, payoff-based learning algorithm
which retains these convergence properties and only requires players to observe
their in-game payoffs: moreover, the algorithm remains robust in the presence
of stochastic perturbations and observation errors, and it does not require any
synchronization between players.
| Pierre Coucheney, Bruno Gaujal, Panayotis Mertikopoulos | null | 1303.2270 | null | null |
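A minimal sketch of the discrete-time, payoff-based idea: keep exponentially discounted scores per action and sample from a smooth (logit) best response, with an importance-weighted estimate so only the realized payoff is needed. The `payoff_fn` interface, step size, and temperature are illustrative assumptions, not the paper's exact algorithm or penalty term.

```python
# Payoff-based logit learner sketch (single player facing a fixed environment).
import numpy as np

rng = np.random.default_rng(0)

def logit_choice(scores, temperature):
    z = scores / temperature
    p = np.exp(z - z.max())            # smooth best response (softmax)
    return p / p.sum()

def run_learner(payoff_fn, n_actions, T=10_000, discount=0.01, temperature=0.1):
    scores = np.zeros(n_actions)
    for _ in range(T):
        p = logit_choice(scores, temperature)
        a = rng.choice(n_actions, p=p)
        u = payoff_fn(a)                        # only the in-game payoff is observed
        est = np.zeros(n_actions)
        est[a] = u / p[a]                       # importance-weighted payoff estimate
        scores += discount * (est - scores)     # exponentially discounted aggregate
    return logit_choice(scores, temperature)
```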
Mini-Batch Primal and Dual Methods for SVMs | cs.LG math.OC | We address the issue of using mini-batches in stochastic optimization of
SVMs. We show that the same quantity, the spectral norm of the data, controls
the parallelization speedup obtained for both primal stochastic subgradient
descent (SGD) and stochastic dual coordinate ascent (SDCA) methods and use it
to derive novel variants of mini-batched SDCA. Our guarantees for both methods
are expressed in terms of the original nonsmooth primal problem based on the
hinge-loss.
| Martin Tak\'a\v{c} and Avleen Bijral and Peter Richt\'arik and Nathan
Srebro | null | 1303.2314 | null | null |
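For context, here is a minimal mini-batch primal SGD on the regularized hinge loss, Pegasos-style. It illustrates the setting the guarantees are stated in; the paper's spectral-norm-dependent step sizes and mini-batched SDCA variants are not reproduced.

```python
# Mini-batch primal subgradient descent for the hinge-loss SVM (Pegasos-style).
import numpy as np

def minibatch_svm_sgd(X, y, lam=0.01, batch_size=32, epochs=10, seed=0):
    """X: (n, d) array; y: labels in {-1, +1}. Returns the primal weights w."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for _ in range(n // batch_size):
            t += 1
            idx = rng.choice(n, batch_size, replace=False)
            Xb, yb = X[idx], y[idx]
            active = yb * (Xb @ w) < 1                # samples violating the margin
            grad = lam * w - (yb[active, None] * Xb[active]).sum(axis=0) / batch_size
            w -= grad / (lam * t)                     # step size 1/(lam * t)
    return w
```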
State estimation under non-Gaussian Levy noise: A modified Kalman
filtering method | math.DS cs.IT cs.LG math.IT math.PR stat.ML | The Kalman filter is extensively used for state estimation for linear systems
under Gaussian noise. When non-Gaussian L\'evy noise is present, the
conventional Kalman filter may fail to be effective because L\'evy noise may
have infinite variance. A modified Kalman filter
for linear systems with non-Gaussian L\'evy noise is devised. It works
effectively with reasonable computational cost. Simulation results are
presented to illustrate this non-Gaussian filtering method.
| Xu Sun, Jinqiao Duan, Xiaofan Li, Xiangjun Wang | null | 1303.2395 | null | null |
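Since the abstract does not spell out the modification, the sketch below shows only the standard Kalman recursion the paper starts from, plus an optional innovation gate as one common heuristic against heavy-tailed shocks; the gate is an assumption, not necessarily the paper's method.

```python
# One step of the standard Kalman filter for x' = A x + w, y = H x + v,
# with an optional chi-square-style gate on the innovation.
import numpy as np

def kalman_step(x, P, y, A, H, Q, R, gate=None):
    x_pred = A @ x                         # predict the state
    P_pred = A @ P @ A.T + Q               # predict its covariance
    innov = y - H @ x_pred                 # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    if gate is not None and float(innov @ np.linalg.solve(S, innov)) > gate:
        return x_pred, P_pred              # skip updates from extreme shocks
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```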
Linear NDCG and Pair-wise Loss | cs.LG stat.ML | Linear NDCG is used to measure the performance of Web content quality
assessment in the ECML/PKDD Discovery Challenge 2010. In this paper, we prove
that the DCG error equals a new pair-wise loss.
| Xiao-Bo Jin and Guang-Gang Geng | null | 1303.2417 | null | null |
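For illustration, one plausible form of a linearly discounted DCG and its normalized version; the exact definition used in the challenge may differ, so treat this only as a sketch of the quantity being analyzed.

```python
# Linearly discounted DCG sketch (assumed discount n, n-1, ..., 1).
import numpy as np

def linear_dcg(relevances, ranking):
    """relevances: true gains; ranking: item indices in predicted order."""
    n = len(ranking)
    discounts = np.arange(n, 0, -1)          # linear discount
    return float(discounts @ np.asarray(relevances)[ranking])

def linear_ndcg(relevances, ranking):
    ideal = np.argsort(relevances)[::-1]     # best possible ordering
    return linear_dcg(relevances, ranking) / linear_dcg(relevances, ideal)
```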