title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Local Gaussian Regression | cs.LG cs.RO | Locally weighted regression was created as a nonparametric learning method
that is computationally efficient, can learn from very large amounts of data
and add data incrementally. An interesting feature of locally weighted
regression is that it can work with spatially varying length scales, a
beneficial property, for instance, in control problems. However, it does not
provide a generative model for function values and requires training and test
data to be generated independently and identically. Gaussian (process) regression,
on the other hand, provides a fully generative model without significant formal
requirements on the distribution of training data, but has much higher
computational cost and usually works with one global scale per input dimension.
Using a localising function basis and approximate inference techniques, we take
Gaussian (process) regression to increasingly localised properties and toward
the same computational complexity class as locally weighted regression.
| Franziska Meier and Philipp Hennig and Stefan Schaal | null | 1402.0645 | null | null |
UNLocBoX: A MATLAB convex optimization toolbox for proximal-splitting
methods | cs.LG stat.ML | Convex optimization is an essential tool for machine learning, as many of its
problems can be formulated as minimization problems of specific objective
functions. While there is a large variety of algorithms available to solve
convex problems, we can argue that it becomes more and more important to focus
on efficient, scalable methods that can deal with big data. When the objective
function can be written as a sum of "simple" terms, proximal splitting methods
are a good choice. UNLocBoX is a MATLAB library that implements many of these
methods, designed to solve convex optimization problems of the form $\min_{x
\in \mathbb{R}^N} \sum_{n=1}^K f_n(x).$ It contains the most recent solvers
such as FISTA, Douglas-Rachford, SDMM, as well as primal-dual techniques such as
Chambolle-Pock and forward-backward-forward. It also includes an extensive list
of common proximal operators that can be combined, allowing for a quick
implementation of a large variety of convex problems.
| Nathanael Perraudin, Vassilis Kalofolias, David Shuman, Pierre
Vandergheynst | null | 1402.0779 | null | null |
Sequential Model-Based Ensemble Optimization | cs.LG stat.ML | One of the most tedious tasks in the application of machine learning is model
selection, i.e. hyperparameter selection. Fortunately, recent progress has been
made in the automation of this process, through the use of sequential
model-based optimization (SMBO) methods. This can be used to optimize a
cross-validation performance of a learning algorithm over the value of its
hyperparameters. However, it is well known that ensembles of learned models
almost consistently outperform a single model, even if properly selected. In
this paper, we thus propose an extension of SMBO methods that automatically
constructs such ensembles. This method builds on a recently proposed ensemble
construction paradigm known as agnostic Bayesian learning. In experiments on 22
regression and 39 classification data sets, we confirm the success of this
proposed approach, which is able to outperform model selection with SMBO.
| Alexandre Lacoste, Hugo Larochelle, Fran\c{c}ois Laviolette, Mario
Marchand | null | 1402.0796 | null | null |
The Informed Sampler: A Discriminative Approach to Bayesian Inference in
Generative Computer Vision Models | cs.CV cs.LG stat.ML | Computer vision is hard because of a large variability in lighting, shape,
and texture; in addition the image signal is non-additive due to occlusion.
Generative models promised to account for this variability by accurately
modelling the image formation process as a function of latent variables with
prior beliefs. Bayesian posterior inference could then, in principle, explain
the observation. While intuitively appealing, generative models for computer
vision have largely failed to deliver on that promise due to the difficulty of
posterior inference. As a result the community has favoured efficient
discriminative approaches. We still believe in the usefulness of generative
models in computer vision, but argue that we need to leverage existing
discriminative or even heuristic computer vision methods. We implement this
idea in a principled way with an "informed sampler" and in careful experiments
demonstrate it on challenging generative models which contain renderer programs
as their components. We concentrate on the problem of inverting an existing
graphics rendering engine, an approach that can be understood as "Inverse
Graphics". The informed sampler, using simple discriminative proposals based on
existing computer vision technology, achieves significant improvements in
inference.
| Varun Jampani and Sebastian Nowozin and Matthew Loper and Peter V.
Gehler | 10.1016/j.cviu.2015.03.002 | 1402.0859 | null | null |
Discovering Latent Network Structure in Point Process Data | stat.ML cs.LG | Networks play a central role in modern data analysis, enabling us to reason
about systems by studying the relationships between their parts. Most often in
network analysis, the edges are given. However, in many systems it is difficult
or impossible to measure the network directly. Examples of latent networks
include economic interactions linking financial instruments and patterns of
reciprocity in gang violence. In these cases, we are limited to noisy
observations of events associated with each node. To enable analysis of these
implicit networks, we develop a probabilistic model that combines
mutually-exciting point processes with random graph models. We show how the
Poisson superposition principle enables an elegant auxiliary variable
formulation and a fully-Bayesian, parallel inference algorithm. We evaluate
this new model empirically on several datasets.
| Scott W. Linderman and Ryan P. Adams | null | 1402.0914 | null | null |
Learning Ordered Representations with Nested Dropout | stat.ML cs.LG | In this paper, we study ordered representations of data in which different
dimensions have different degrees of importance. To learn these representations
we introduce nested dropout, a procedure for stochastically removing coherent
nested sets of hidden units in a neural network. We first present a sequence of
theoretical results in the simple case of a semi-linear autoencoder. We
rigorously show that the application of nested dropout enforces identifiability
of the units, which leads to an exact equivalence with PCA. We then extend the
algorithm to deep models and demonstrate the relevance of ordered
representations to a number of applications. Specifically, we use the ordered
property of the learned codes to construct hash-based data structures that
permit very fast retrieval, achieving retrieval in time logarithmic in the
database size and independent of the dimensionality of the representation. This
allows codes that are hundreds of times longer than currently feasible for
retrieval. We therefore avoid the diminished quality associated with short
codes, while still performing retrieval that is competitive in speed with
existing methods. We also show that ordered representations are a promising way
to learn adaptive compression for efficient online data reconstruction.
| Oren Rippel, Michael A. Gelbart, Ryan P. Adams | null | 1402.0915 | null | null |
Input Warping for Bayesian Optimization of Non-stationary Functions | stat.ML cs.LG | Bayesian optimization has proven to be a highly effective methodology for the
global optimization of unknown, expensive and multimodal functions. The ability
to accurately model distributions over functions is critical to the
effectiveness of Bayesian optimization. Although Gaussian processes provide a
flexible prior over functions which can be queried efficiently, there are
various classes of functions that remain difficult to model. One of the most
frequently occurring of these is the class of non-stationary functions. The
optimization of the hyperparameters of machine learning algorithms is a problem
domain in which parameters are often manually transformed a priori, for example
by optimizing in "log-space," to mitigate the effects of spatially-varying
length scale. We develop a methodology for automatically learning a wide family
of bijective transformations or warpings of the input space using the Beta
cumulative distribution function. We further extend the warping framework to
multi-task Bayesian optimization so that multiple tasks can be warped into a
jointly stationary space. On a set of challenging benchmark optimization tasks,
we observe that the inclusion of warping greatly improves on the
state-of-the-art, producing better results faster and more reliably.
| Jasper Snoek, Kevin Swersky, Richard S. Zemel and Ryan P. Adams | null | 1402.0929 | null | null |
Long Short-Term Memory Based Recurrent Neural Network Architectures for
Large Vocabulary Speech Recognition | cs.NE cs.CL cs.LG stat.ML | Long Short-Term Memory (LSTM) is a recurrent neural network (RNN)
architecture that has been designed to address the vanishing and exploding
gradient problems of conventional RNNs. Unlike feedforward neural networks,
RNNs have cyclic connections making them powerful for modeling sequences. They
have been successfully used for sequence labeling and sequence prediction
tasks, such as handwriting recognition, language modeling, phonetic labeling of
acoustic frames. However, in contrast to the deep neural networks, the use of
RNNs in speech recognition has been limited to phone recognition in small scale
tasks. In this paper, we present novel LSTM based RNN architectures which make
more effective use of model parameters to train acoustic models for large
vocabulary speech recognition. We train and compare LSTM, RNN and DNN models at
various numbers of parameters and configurations. We show that LSTM models
converge quickly and give state of the art speech recognition performance for
relatively small sized models.
| Ha\c{s}im Sak, Andrew Senior, Fran\c{c}oise Beaufays | null | 1402.1128 | null | null |
Localized epidemic detection in networks with overwhelming noise | cs.SI cs.LG | We consider the problem of detecting an epidemic in a population where
individual diagnoses are extremely noisy. The motivation for this problem is
the plethora of examples (influenza strains in humans, or computer viruses in
smartphones, etc.) where reliable diagnoses are scarce, but noisy data
plentiful. In flu/phone-viruses, exceedingly few infected people/phones are
professionally diagnosed (only a small fraction go to a doctor) but less
reliable secondary signatures (e.g., people staying home, or
greater-than-typical upload activity) are more readily available. These
secondary data are often plagued by unreliability: many people with the flu do
not stay home, and many people that stay home do not have the flu. This paper
identifies the precise regime where knowledge of the contact network enables
finding the needle in the haystack: we provide a distributed, efficient and
robust algorithm that can correctly identify the existence of a spreading
epidemic from highly unreliable local data. Our algorithm requires only
local-neighbor knowledge of this graph, and in a broad array of settings that
we describe, succeeds even when false negatives and false positives make up an
overwhelming fraction of the data available. Our results show it succeeds in
the presence of partial information about the contact network, and also when
there is not a single "patient zero", but rather many initial patient-zeroes
(hundreds, in our examples), spread across the graph.
| Eli A. Meirom, Chris Milling, Constantine Caramanis, Shie Mannor,
Ariel Orda, Sanjay Shakkottai | null | 1402.1263 | null | null |
Phase transitions and sample complexity in Bayes-optimal matrix
factorization | cs.NA cond-mat.stat-mech cs.IT cs.LG math.IT stat.ML | We analyse the matrix factorization problem. Given a noisy measurement of a
product of two matrices, the problem is to estimate back the original matrices.
It arises in many applications such as dictionary learning, blind matrix
calibration, sparse principal component analysis, blind source separation, low
rank matrix completion, robust principal component analysis or factor analysis.
It is also important in machine learning: unsupervised representation learning
can often be studied through matrix factorization. We use the tools of
statistical mechanics - the cavity and replica methods - to analyze the
achievability and computational tractability of the inference problems in the
setting of Bayes-optimal inference, which amounts to assuming that the two
matrices have random independent elements generated from some known
distribution, and this information is available to the inference algorithm. In
this setting, we compute the minimal mean-squared-error achievable in principle
in any computational time, and the error that can be achieved by an efficient
approximate message passing algorithm. The computation is based on the
asymptotic state-evolution analysis of the algorithm. The performance that our
analysis predicts, both in terms of the achieved mean-squared-error, and in
terms of sample complexity, is extremely promising and motivating for a further
development of the algorithm.
| Yoshiyuki Kabashima, Florent Krzakala, Marc M\'ezard, Ayaka Sakata,
and Lenka Zdeborov\'a | 10.1109/TIT.2016.2556702 | 1402.1298 | null | null |
Dissimilarity-based Ensembles for Multiple Instance Learning | stat.ML cs.LG | In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performances on a range of multiple
instance learning problems.
| Veronika Cheplygina, David M. J. Tax, Marco Loog | 10.1109/TNNLS.2015.2424254 | 1402.1349 | null | null |
Distributed Variational Inference in Sparse Gaussian Process Regression
and Latent Variable Models | stat.ML cs.LG | Gaussian processes (GPs) are a powerful tool for probabilistic inference over
functions. They have been applied to both regression and non-linear
dimensionality reduction, and offer desirable properties such as uncertainty
estimates, robustness to over-fitting, and principled ways for tuning
hyper-parameters. However the scalability of these models to big datasets
remains an active topic of research. We introduce a novel re-parametrisation of
variational inference for sparse GP regression and latent variable models that
allows for an efficient distributed algorithm. This is done by exploiting the
decoupling of the data given the inducing points to re-formulate the evidence
lower bound in a Map-Reduce setting. We show that the inference scales well
with data and computational resources, while preserving a balanced distribution
of the load among the nodes. We further demonstrate the utility in scaling
Gaussian processes to big data. We show that GP performance improves with
increasing amounts of data in regression (on flight data with 2 million
records) and latent variable modelling (on MNIST). The results show that GPs
perform better than many common models often used for big data.
| Yarin Gal, Mark van der Wilk, Carl E. Rasmussen | null | 1402.1389 | null | null |
An Autoencoder Approach to Learning Bilingual Word Representations | cs.CL cs.LG stat.ML | Cross-language learning allows us to use training data from one language to
build models for a different language. Many approaches to bilingual learning
require that we have word-level alignment of sentences from parallel corpora.
In this work we explore the use of autoencoder-based methods for cross-language
learning of vectorial word representations that are aligned between two
languages, while not relying on word-level alignments. We show that by simply
learning to reconstruct the bag-of-words representations of aligned sentences,
within and between languages, we can in fact learn high-quality representations
and do without word alignments. Since training autoencoders on word
observations presents certain computational issues, we propose and compare
different variations adapted to this setting. We also propose an explicit
correlation maximizing regularizer that leads to significant improvement in the
performance. We empirically investigate the success of our approach on the
problem of cross-language text classification, where a classifier trained on a
given language (e.g., English) must learn to generalize to a different language
(e.g., German). These experiments demonstrate that our approaches are
competitive with the state-of-the-art, achieving up to 10-14 percentage point
improvements over the best reported results on this task.
| Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh M.
Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha | null | 1402.1454 | null | null |
Near-Optimal Joint Object Matching via Convex Relaxation | cs.LG cs.CV cs.IT math.IT math.OC stat.ML | Joint matching over a collection of objects aims at aggregating information
from a large collection of similar instances (e.g. images, graphs, shapes) to
improve maps between pairs of them. Given multiple matches computed between a
few object pairs in isolation, the goal is to recover an entire collection of
maps that are (1) globally consistent, and (2) close to the provided maps ---
and under certain conditions provably the ground-truth maps. Despite recent
advances on this problem, the best-known recovery guarantees are limited to a
small constant barrier --- none of the existing methods find theoretical
support when more than $50\%$ of input correspondences are corrupted. Moreover,
prior approaches focus mostly on fully similar objects, while it is practically
more demanding to match instances that are only partially similar to each
other.
In this paper, we develop an algorithm to jointly match multiple objects that
exhibit only partial similarities, given a few pairwise matches that are
densely corrupted. Specifically, we propose to recover the ground-truth maps
via a parameter-free convex program called MatchLift, following a spectral
method that pre-estimates the total number of distinct elements to be matched.
Encouragingly, MatchLift exhibits near-optimal error-correction ability, i.e.
in the asymptotic regime it is guaranteed to work even when a dominant fraction
$1-\Theta\left(\frac{\log^{2}n}{\sqrt{n}}\right)$ of the input maps behave like
random outliers. Furthermore, MatchLift succeeds with minimal input complexity,
namely, perfect matching can be achieved as soon as the provided maps form a
connected map graph. We evaluate the proposed algorithm on various benchmark
data sets including synthetic examples and real-world examples, all of which
confirm the practical applicability of MatchLift.
| Yuxin Chen and Leonidas J. Guibas and Qi-Xing Huang | null | 1402.1473 | null | null |
Dictionary Learning over Distributed Models | cs.LG cs.DC | In this paper, we consider learning dictionary models over a network of
agents, where each agent is only in charge of a portion of the dictionary
elements. This formulation is relevant in Big Data scenarios where large
dictionary models may be spread over different spatial locations and it is not
feasible to aggregate all dictionaries in one location due to communication and
privacy considerations. We first show that the dual function of the inference
problem is an aggregation of individual cost functions associated with
different agents, which can then be minimized efficiently by means of diffusion
strategies. The collaborative inference step generates dual variables that are
used by the agents to update their dictionaries without the need to share these
dictionaries or even the coefficient models for the training data. This is a
powerful property that leads to an effective distributed procedure for learning
dictionaries over large networks (e.g., hundreds of agents in our experiments).
Furthermore, the proposed learning strategy operates in an online manner and is
able to respond to streaming data, where each data sample is presented to the
network once.
| Jianshu Chen, Zaid J. Towfic, Ali H. Sayed | 10.1109/TSP.2014.2385045 | 1402.1515 | null | null |
Dual Query: Practical Private Query Release for High Dimensional Data | cs.DS cs.CR cs.DB cs.LG | We present a practical, differentially private algorithm for answering a
large number of queries on high dimensional datasets. Like all algorithms for
this task, ours necessarily has worst-case complexity exponential in the
dimension of the data. However, our algorithm packages the computationally hard
step into a concisely defined integer program, which can be solved
non-privately using standard solvers. We prove accuracy and privacy theorems
for our algorithm, and then demonstrate experimentally that our algorithm
performs well in practice. For example, our algorithm can efficiently and
accurately answer millions of queries on the Netflix dataset, which has over
17,000 attributes; this is an improvement on the state of the art by multiple
orders of magnitude.
| Marco Gaboardi, Emilio Jes\'us Gallego Arias, Justin Hsu, Aaron Roth,
Zhiwei Steven Wu | null | 1402.1526 | null | null |
Two-stage Sampled Learning Theory on Distributions | math.ST cs.LG math.FA stat.ML stat.TH | We focus on the distribution regression problem: regressing to a real-valued
response from a probability distribution. Although there exist a large number
of similarity measures between distributions, very little is known about their
generalization performance in specific learning tasks. Learning problems
formulated on distributions have an inherent two-stage sampled difficulty: in
practice only samples from sampled distributions are observable, and one has to
build an estimate on similarities computed between sets of points. To the best
of our knowledge, the only existing method with consistency guarantees for
distribution regression requires kernel density estimation as an intermediate
step (which suffers from slow convergence issues in high dimensions), and the
domain of the distributions to be compact Euclidean. In this paper, we provide
theoretical guarantees for a remarkably simple algorithmic alternative to solve
the distribution regression problem: embed the distributions to a reproducing
kernel Hilbert space, and learn a ridge regressor from the embeddings to the
outputs. Our main contribution is to prove the consistency of this technique in
the two-stage sampled setting under mild conditions (on separable, topological
domains endowed with kernels). For a given total number of observations, we
derive convergence rates as an explicit function of the problem difficulty. As
a special case, we answer a 15-year-old open question: we establish the
consistency of the classical set kernel [Haussler, 1999; Gartner et. al, 2002]
in regression, and cover more recent kernels on distributions, including those
due to [Christmann and Steinwart, 2010].
| Zoltan Szabo, Arthur Gretton, Barnabas Poczos, Bharath Sriperumbudur | null | 1402.1754 | null | null |
Active Clustering with Model-Based Uncertainty Reduction | cs.LG cs.CV stat.ML | Semi-supervised clustering seeks to augment traditional clustering methods by
incorporating side information provided via human expertise in order to
increase the semantic meaningfulness of the resulting clusters. However, most
current methods are \emph{passive} in the sense that the side information is
provided beforehand and selected randomly. This may require a large number of
constraints, some of which could be redundant, unnecessary, or even detrimental
to the clustering results. Thus in order to scale such semi-supervised
algorithms to larger problems it is desirable to pursue an \emph{active}
clustering method---i.e. an algorithm that maximizes the effectiveness of the
available human labor by only requesting human input where it will have the
greatest impact. Here, we propose a novel online framework for active
semi-supervised spectral clustering that selects pairwise constraints as
clustering proceeds, based on the principle of uncertainty reduction. Using a
first-order Taylor expansion, we decompose the expected uncertainty reduction
problem into a gradient and a step-scale, computed via an application of matrix
perturbation theory and cluster-assignment entropy, respectively. The resulting
model is used to estimate the uncertainty reduction potential of each sample in
the dataset. We then present the human user with pairwise queries with respect
to only the best candidate sample. We evaluate our method using three different
image datasets (faces, leaves and dogs), a set of common UCI machine learning
datasets and a gene dataset. The results validate our decomposition formulation
and show that our method is consistently superior to existing state-of-the-art
techniques, as well as being robust to noise and to unknown numbers of
clusters.
| Caiming Xiong, David Johnson, Jason J. Corso | null | 1402.1783 | null | null |
Binary Excess Risk for Smooth Convex Surrogates | cs.LG stat.ML | In statistical learning theory, convex surrogates of the 0-1 loss are highly
preferred because of the computational and theoretical virtues that convexity
brings in. This is of more importance if we consider smooth surrogates as
witnessed by the fact that the smoothness is further beneficial both
computationally, by attaining an {\it optimal} convergence rate for
optimization, and statistically, by providing an improved {\it
optimistic} rate for the generalization bound. In this paper we investigate the
smoothness property from the viewpoint of statistical consistency and show how
it affects the binary excess risk. We show that in contrast to optimization and
generalization errors that favor the choice of smooth surrogate loss, the
smoothness of the loss function may degrade the binary excess risk. Motivated by
this negative result, we provide a unified analysis that integrates
optimization error, generalization bound, and the error in translating convex
excess risk into a binary excess risk when examining the impact of smoothness
on the binary excess risk. We show that under favorable conditions appropriate
choice of smooth convex loss will result in a binary excess risk that is better
than $O(1/\sqrt{n})$.
| Mehrdad Mahdavi, Lijun Zhang, and Rong Jin | null | 1402.1792 | null | null |
An Inequality with Applications to Structured Sparsity and Multitask
Dictionary Learning | cs.LG stat.ML | From concentration inequalities for the suprema of Gaussian or Rademacher
processes an inequality is derived. It is applied to sharpen existing and to
derive novel bounds on the empirical Rademacher complexities of unit balls in
various norms appearing in the context of structured sparsity and multitask
dictionary learning or matrix factorization. A key role is played by the
largest eigenvalue of the data covariance matrix.
| Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes | null | 1402.1864 | null | null |
On the Number of Linear Regions of Deep Neural Networks | stat.ML cs.LG cs.NE | We study the complexity of functions computable by deep feedforward neural
networks with piecewise linear activations in terms of the symmetries and the
number of linear regions that they have. Deep networks are able to sequentially
map portions of each layer's input-space to the same output. In this way, deep
models compute functions that react equally to complicated patterns of
different inputs. The compositional structure of these functions enables them
to re-use pieces of computation exponentially often in terms of the network's
depth. This paper investigates the complexity of such compositional maps and
contributes new theoretical results regarding the advantage of depth for neural
networks with piecewise linear activation functions. In particular, our
analysis is not specific to a single family of models, and as an example, we
employ it for rectifier and maxout networks. We improve complexity bounds from
pre-existing work and investigate the behavior of units in higher layers.
| Guido Mont\'ufar, Razvan Pascanu, Kyunghyun Cho and Yoshua Bengio | null | 1402.1869 | null | null |
Thresholding Classifiers to Maximize F1 Score | stat.ML cs.IR cs.LG | This paper provides new insight into maximizing F1 scores in the context of
binary classification and also in the context of multilabel classification. The
harmonic mean of precision and recall, the F1 score is widely used to measure the
success of a binary classifier when one class is rare. Micro average, macro
average, and per instance average F1 scores are used in multilabel
classification. For any classifier that produces a real-valued output, we
derive the relationship between the best achievable F1 score and the
decision-making threshold that achieves this optimum. As a special case, if the
classifier outputs are well-calibrated conditional probabilities, then the
optimal threshold is half the optimal F1 score. As another special case, if the
classifier is completely uninformative, then the optimal behavior is to
classify all examples as positive. Since the actual prevalence of positive
examples typically is low, this behavior can be considered undesirable. As a
case study, we discuss the results, which can be surprising, of applying this
procedure when predicting 26,853 labels for Medline documents.
| Zachary Chase Lipton, Charles Elkan, Balakrishnan Narayanaswamy | null | 1402.1892 | null | null |
A Hybrid Loss for Multiclass and Structured Prediction | cs.LG cs.AI cs.CV | We propose a novel hybrid loss for multiclass and structured prediction
problems that is a convex combination of a log loss for Conditional Random
Fields (CRFs) and a multiclass hinge loss for Support Vector Machines (SVMs).
We provide a sufficient condition for when the hybrid loss is Fisher consistent
for classification. This condition depends on a measure of dominance between
labels--specifically, the gap between the probabilities of the best label and
the second best label. We also prove Fisher consistency is necessary for
parametric consistency when learning models such as CRFs. We demonstrate
empirically that the hybrid loss typically performs at least as well as--and often
better than--both of its constituent losses on a variety of tasks, such as
human action recognition. In doing so we also provide an empirical comparison
of the efficacy of probabilistic and margin based approaches to multiclass and
structured prediction.
| Qinfeng Shi, Mark Reid, Tiberio Caetano, Anton van den Hengel and
Zhenhua Wang | null | 1402.1921 | null | null |
Classification Tree Diagrams in Health Informatics Applications | cs.IR cs.CV cs.LG | Health informatics deals with the methods used to optimize the acquisition,
storage and retrieval of medical data, and classify information in healthcare
applications. Healthcare analysts are particularly interested in various
computer informatics areas such as knowledge representation from data, anomaly
detection, outbreak detection methods and syndromic surveillance applications.
Although various parametric and non-parametric approaches are being proposed to
classify information from data, classification tree diagrams provide an
interactive visualization to analysts as compared to other methods. In this
work we discuss application of classification tree diagrams to classify
information from medical data in healthcare applications.
| Farrukh Arslan | null | 1402.1947 | null | null |
Better Optimism By Bayes: Adaptive Planning with Rich Models | cs.AI cs.LG stat.ML | The computational costs of inference and planning have confined Bayesian
model-based reinforcement learning to one of two dismal fates: powerful
Bayes-adaptive planning but only for simplistic models, or powerful, Bayesian
non-parametric models but using simple, myopic planning strategies such as
Thompson sampling. We ask whether it is feasible and truly beneficial to
combine rich probabilistic models with a closer approximation to fully Bayesian
planning. First, we use a collection of counterexamples to show formal problems
with the over-optimism inherent in Thompson sampling. Then we leverage
state-of-the-art techniques in efficient Bayes-adaptive planning and
non-parametric Bayesian methods to perform qualitatively better than both
existing conventional algorithms and Thompson sampling on two contextual
bandit-like problems.
| Arthur Guez, David Silver, Peter Dayan | null | 1402.1958 | null | null |
Dictionary learning for fast classification based on soft-thresholding | cs.CV cs.LG stat.ML | Classifiers based on sparse representations have recently been shown to
provide excellent results in many visual recognition and classification tasks.
However, the high cost of computing sparse representations at test time is a
major obstacle that limits the applicability of these methods in large-scale
problems, or in scenarios where computational power is restricted. We consider
in this paper a simple yet efficient alternative to sparse coding for feature
extraction. We study a classification scheme that applies the soft-thresholding
nonlinear mapping in a dictionary, followed by a linear classifier. A novel
supervised dictionary learning algorithm tailored for this low complexity
classification architecture is proposed. The dictionary learning problem, which
jointly learns the dictionary and linear classifier, is cast as a difference of
convex (DC) program and solved efficiently with an iterative DC solver. We
conduct experiments on several datasets, and show that our learning algorithm
that leverages the structure of the classification problem outperforms generic
learning procedures. Our simple classifier based on soft-thresholding also
competes with the recent sparse coding classifiers, when the dictionary is
learned appropriately. The adopted classification scheme further requires less
computational time at the testing stage, compared to other classifiers. The
proposed scheme shows the potential of the adequately trained soft-thresholding
mapping for classification and paves the way towards the development of very
efficient classification methods for vision problems.
| Alhussein Fawzi, Mike Davies, Pascal Frossard | null | 1402.1973 | null | null |
Deeply Coupled Auto-encoder Networks for Cross-view Classification | cs.CV cs.LG cs.NE | The comparison of heterogeneous samples arises extensively in many
applications, especially in the task of image classification. In this paper, we
propose a simple but effective coupled neural network, called Deeply Coupled
Autoencoder Networks (DCAN), which seeks to build two deep neural networks,
coupled with each other in every corresponding layers. In DCAN, each deep
structure is developed via stacking multiple discriminative coupled
auto-encoders, a denoising auto-encoder trained with maximum margin criterion
consisting of intra-class compactness and inter-class penalty. This single
layer component makes our model simultaneously preserve the local consistency
and enhance its discriminative capability. With increasing number of layers,
the coupled networks can gradually narrow the gap between the two views.
Extensive experiments on cross-view image classification tasks demonstrate the
superiority of our method over state-of-the-art methods.
| Wen Wang, Zhen Cui, Hong Chang, Shiguang Shan, Xilin Chen | null | 1402.2031 | null | null |
Approachability in unknown games: Online learning meets multi-objective
optimization | stat.ML cs.LG math.ST stat.TH | In the standard setting of approachability there are two players and a target
set. The players play repeatedly a known vector-valued game where the first
player wants to have the average vector-valued payoff converge to the target
set, while the other player tries to exclude it from this set. We revisit this
setting in the spirit of online learning and do not assume that the first
player knows the game structure: she receives an arbitrary vector-valued reward
at every round. She wishes to approach the smallest ("best") possible
set given the observed average payoffs in hindsight. This extension of the
standard setting has implications even when the original target set is not
approachable and when it is not obvious which expansion of it should be
approached instead. We show that it is impossible, in general, to approach the
best target set in hindsight and propose achievable though ambitious
alternative goals. We further propose a concrete strategy to approach these
goals. Our method does not require projection onto a target set and amounts to
switching between scalar regret minimization algorithms that are performed in
episodes. Applications to global cost minimization and to approachability under
sample path constraints are considered.
| Shie Mannor (EE-Technion), Vianney Perchet, Gilles Stoltz (GREGH) | null | 1402.2043 | null | null |
A Second-order Bound with Excess Losses | stat.ML cs.LG math.ST stat.TH | We study online aggregation of the predictions of experts, and first show new
second-order regret bounds in the standard setting, which are obtained via a
version of the Prod algorithm (and also a version of the polynomially weighted
average algorithm) with multiple learning rates. These bounds are in terms of
excess losses, the differences between the instantaneous losses suffered by the
algorithm and the ones of a given expert. We then demonstrate the interest of
these bounds in the context of experts that report their confidences as a
number in the interval [0,1] using a generic reduction to the standard setting.
We conclude by two other applications in the standard setting, which improve
the known bounds in case of small excess losses and show a bounded regret
against i.i.d. sequences of losses.
| Pierre Gaillard (GREGH), Gilles Stoltz (GREGH), Tim Van Erven (INRIA
Saclay - Ile de France) | null | 1402.2044 | null | null |
Probabilistic Interpretation of Linear Solvers | math.OC cs.LG cs.NA math.NA math.PR stat.ML | This manuscript proposes a probabilistic framework for algorithms that
iteratively solve unconstrained linear problems $Bx = b$ with positive definite
$B$ for $x$. The goal is to replace the point estimates returned by existing
methods with a Gaussian posterior belief over the elements of the inverse of
$B$, which can be used to estimate errors. Recent probabilistic interpretations
of the secant family of quasi-Newton optimization algorithms are extended.
Combined with properties of the conjugate gradient algorithm, this leads to
uncertainty-calibrated methods with very limited cost overhead over conjugate
gradients, a self-contained novel interpretation of the quasi-Newton and
conjugate gradient algorithms, and a foundation for new nonlinear optimization
methods.
| Philipp Hennig | null | 1402.2058 | null | null |
Near-Optimally Teaching the Crowd to Classify | cs.LG | How should we present training examples to learners to teach them
classification rules? This is a natural problem when training workers for
crowdsourcing labeling tasks, and is also motivated by challenges in
data-driven online education. We propose a natural stochastic model of the
learners, modeling them as randomly switching among hypotheses based on
observed feedback. We then develop STRICT, an efficient algorithm for selecting
examples to teach to workers. Our solution greedily maximizes a submodular
surrogate objective function in order to select examples to show to the
learners. We prove that our strategy is competitive with the optimal teaching
policy. Moreover, for the special case of linear separators, we prove that an
exponential reduction in error probability can be achieved. Our experiments on
simulated workers as well as three real image annotation tasks on Amazon
Mechanical Turk show the effectiveness of our teaching algorithm.
| Adish Singla, Ilija Bogunovic, G\'abor Bart\'ok, Amin Karbasi, and
Andreas Krause | null | 1402.2092 | null | null |
Characterizing the Sample Complexity of Private Learners | cs.CR cs.LG | In 2008, Kasiviswanathan et al. defined private learning as a combination of
PAC learning and differential privacy. Informally, a private learner is applied
to a collection of labeled individual information and outputs a hypothesis
while preserving the privacy of each individual. Kasiviswanathan et al. gave a
generic construction of private learners for (finite) concept classes, with
sample complexity logarithmic in the size of the concept class. This sample
complexity is higher than what is needed for non-private learners, hence
leaving open the possibility that the sample complexity of private learning may
be sometimes significantly higher than that of non-private learning.
We give a combinatorial characterization of the sample size sufficient and
necessary to privately learn a class of concepts. This characterization is
analogous to the well known characterization of the sample complexity of
non-private learning in terms of the VC dimension of the concept class. We
introduce the notion of probabilistic representation of a concept class, and
our new complexity measure RepDim corresponds to the size of the smallest
probabilistic representation of the concept class.
We show that any private learning algorithm for a concept class C with sample
complexity m implies RepDim(C)=O(m), and that there exists a private learning
algorithm with sample complexity m=O(RepDim(C)). We further demonstrate that a
similar characterization holds for the database size needed for privately
computing a large class of optimization problems and also for the well studied
problem of private data release.
| Amos Beimel, Kobbi Nissim, Uri Stemmer | null | 1402.2224 | null | null |
Feature and Variable Selection in Classification | cs.LG cs.AI stat.ML | The amount of information in the form of features and variables available
to machine learning algorithms is ever increasing. This can lead to classifiers
that are prone to overfitting in high dimensions, high-dimensional models do
not lend themselves to interpretable results, and the CPU and memory resources
necessary to run on high-dimensional datasets severely limit the applications of
the approaches. Variable and feature selection aim to remedy this by finding a
subset of features that in some way captures the information provided best. In
this paper we present the general methodology and highlight some specific
approaches.
| Aaron Karper | null | 1402.2300 | null | null |
Universal Matrix Completion | stat.ML cs.IT cs.LG math.IT | The problem of low-rank matrix completion has recently generated a lot of
interest leading to several results that offer exact solutions to the problem.
However, in order to do so, these methods make assumptions that can be quite
restrictive in practice. More specifically, the methods assume that: a) the
observed indices are sampled uniformly at random, and b) for every new matrix,
the observed indices are sampled afresh. In this work, we address these issues
by providing a universal recovery guarantee for matrix completion that works
for a variety of sampling schemes. In particular, we show that if the set of
sampled indices come from the edges of a bipartite graph with large spectral
gap (i.e. gap between the first and the second singular value), then the
nuclear norm minimization based method exactly recovers all low-rank matrices
that satisfy certain incoherence properties. Moreover, we also show that under
certain stricter incoherence conditions, $O(nr^2)$ uniformly sampled entries
are enough to recover any rank-$r$ $n\times n$ matrix, in contrast to the
$O(nr\log n)$ sample complexity required by other matrix completion algorithms
as well as existing analyses of the nuclear norm method.
| Srinadh Bhojanapalli, Prateek Jain | null | 1402.2324 | null | null |
Computational Limits for Matrix Completion | cs.CC cs.LG | Matrix Completion is the problem of recovering an unknown real-valued
low-rank matrix from a subsample of its entries. Important recent results show
that the problem can be solved efficiently under the assumption that the
unknown matrix is incoherent and the subsample is drawn uniformly at random.
Are these assumptions necessary?
It is well known that Matrix Completion in its full generality is NP-hard.
However, little is known if we make additional assumptions such as incoherence and
permit the algorithm to output a matrix of slightly higher rank. In this paper
we prove that Matrix Completion remains computationally intractable even if the
unknown matrix has rank $4$ but we are allowed to output any constant rank
matrix, and even if additionally we assume that the unknown matrix is
incoherent and are shown $90\%$ of the entries. This result relies on the
conjectured hardness of the $4$-Coloring problem. We also consider the positive
semidefinite Matrix Completion problem. Here we show a similar hardness result
under the standard assumption that $\mathrm{P}\ne \mathrm{NP}.$
Our results greatly narrow the gap between existing feasibility results and
computational lower bounds. In particular, we believe that our results give the
first complexity-theoretic justification for why distributional assumptions are
needed beyond the incoherence assumption in order to obtain positive results.
On the technical side, we contribute several new ideas on how to encode hard
combinatorial problems in low-rank optimization problems. We hope that these
techniques will be helpful in further understanding the computational limits of
Matrix Completion and related problems.
| Moritz Hardt, Raghu Meka, Prasad Raghavendra, and Benjamin Weitz | null | 1402.2331 | null | null |
Modeling sequential data using higher-order relational features and
predictive training | cs.LG cs.CV stat.ML | Bi-linear feature learning models, like the gated autoencoder, were proposed
as a way to model relationships between frames in a video. By minimizing
reconstruction error of one frame, given the previous frame, these models learn
"mapping units" that encode the transformations inherent in a sequence, and
thereby learn to encode motion. In this work we extend bi-linear models by
introducing "higher-order mapping units" that allow us to encode
transformations between frames and transformations between transformations.
We show that this makes it possible to encode temporal structure that is more
complex and longer-range than the structure captured within standard bi-linear
models. We also show that a natural way to train the model is by replacing the
commonly used reconstruction objective with a prediction objective which forces
the model to correctly predict the evolution of the input multiple steps into
the future. Learning can be achieved by back-propagating the multi-step
prediction through time. We test the model on various temporal prediction
tasks, and show that higher-order mappings and predictive training both yield a
significant improvement over bi-linear models in terms of prediction accuracy.
| Vincent Michalski, Roland Memisevic, Kishore Konda | null | 1402.2333 | null | null |
Machine Learner for Automated Reasoning 0.4 and 0.5 | cs.LG cs.AI cs.LO | Machine Learner for Automated Reasoning (MaLARea) is a learning and reasoning
system for proving in large formal libraries where thousands of theorems are
available when attacking a new conjecture, and a large number of related
problems and proofs can be used to learn specific theorem-proving knowledge.
The last version of the system has by a large margin won the 2013 CASC LTB
competition. This paper describes the motivation behind the methods used in
MaLARea, discusses the general approach and the issues arising in evaluation of
such a system, and describes the Mizar@Turing100 and CASC'24 versions of MaLARea.
| Cezary Kaliszyk, Josef Urban, Ji\v{r}\'i Vysko\v{c}il | null | 1402.2359 | null | null |
A comparison of linear and non-linear calibrations for speaker
recognition | stat.ML cs.LG | In recent work on both generative and discriminative score to
log-likelihood-ratio calibration, it was shown that linear transforms give good
accuracy only for a limited range of operating points. Moreover, these methods
required tailoring of the calibration training objective functions in order to
target the desired region of best accuracy. Here, we generalize the linear
recipes to non-linear ones. We experiment with a non-linear, non-parametric,
discriminative PAV solution, as well as parametric, generative,
maximum-likelihood solutions that use Gaussian, Student's T and
normal-inverse-Gaussian score distributions. Experiments on NIST SRE'12 scores
suggest that the non-linear methods provide wider ranges of optimal accuracy
and can be trained without having to resort to objective function tailoring.
| Niko Br\"ummer, Albert Swart and David van Leeuwen | null | 1402.2447 | null | null |
Online Nonparametric Regression | stat.ML cs.LG math.ST stat.TH | We establish optimal rates for online regression for arbitrary classes of
regression functions in terms of the sequential entropy introduced in (Rakhlin,
Sridharan, Tewari, 2010). The optimal rates are shown to exhibit a phase
transition analogous to the i.i.d./statistical learning case, studied in
(Rakhlin, Sridharan, Tsybakov 2013). In the frequently encountered situation
when sequential entropy and i.i.d. empirical entropy match, our results point
to the interesting phenomenon that the rates for statistical learning with
squared loss and online nonparametric regression are the same.
In addition to a non-algorithmic study of minimax regret, we exhibit a
generic forecaster that enjoys the established optimal rates. We also provide a
recipe for designing online regression algorithms that can be computationally
efficient. We illustrate the techniques by deriving existing and new
forecasters for the case of finite experts and for online linear regression.
| Alexander Rakhlin, Karthik Sridharan | null | 1402.2594 | null | null |
On Zeroth-Order Stochastic Convex Optimization via Random Walks | cs.LG stat.ML | We propose a method for zeroth order stochastic convex optimization that
attains the suboptimality rate of $\tilde{\mathcal{O}}(n^{7}T^{-1/2})$ after
$T$ queries for a convex bounded function $f:{\mathbb R}^n\to{\mathbb R}$. The
method is based on a random walk (the \emph{Ball Walk}) on the epigraph of the
function. The randomized approach circumvents the problem of gradient
estimation, and appears to be less sensitive to noisy function evaluations
compared to noiseless zeroth order methods.
| Tengyuan Liang, Hariharan Narayanan and Alexander Rakhlin | null | 1402.2667 | null | null |
Ranking via Robust Binary Classification and Parallel Parameter
Estimation in Large-Scale Data | stat.ML cs.DC cs.LG stat.CO | We propose RoBiRank, a ranking algorithm that is motivated by observing a
close connection between evaluation metrics for learning to rank and loss
functions for robust classification. The algorithm shows a very competitive
performance on standard benchmark datasets against other representative
algorithms in the literature. On the other hand, in large scale problems where
explicit feature vectors and scores are not given, our algorithm can be
efficiently parallelized across a large number of machines; for a task that
requires 386,133 x 49,824,519 pairwise interactions between items to be ranked,
our algorithm finds solutions that are of dramatically higher quality than that
can be found by a state-of-the-art competitor algorithm, given the same amount
of wall-clock time for computation.
| Hyokun Yun, Parameswaran Raman, S.V.N. Vishwanathan | null | 1402.2676 | null | null |
Regularization for Multiple Kernel Learning via Sum-Product Networks | stat.ML cs.LG | In this paper, we are interested in constructing general graph-based
regularizers for multiple kernel learning (MKL) given a structure which is used
to describe the way of combining basis kernels. Such structures are represented
by sum-product networks (SPNs) in our method. Accordingly we propose a new
convex regularization method for MKL based on a path-dependent kernel weighting
function which encodes the entire SPN structure in our method. Under certain
conditions and from the view of probability, this function can be considered to
follow multinomial distributions over the weights associated with product nodes
in SPNs. We also analyze the convexity of our regularizer and the complexity of
our induced classifiers, and further propose an efficient wrapper algorithm to
optimize our formulation. In our experiments, we apply our method to ......
| Ziming Zhang | null | 1402.3032 | null | null |
Squeezing bottlenecks: exploring the limits of autoencoder semantic
representation capabilities | cs.IR cs.LG stat.ML | We present a comprehensive study on the use of autoencoders for modelling
text data, in which (differently from previous studies) we focus our attention
on the following issues: i) we explore the suitability of two different models
bDA and rsDA for constructing deep autoencoders for text data at the sentence
level; ii) we propose and evaluate two novel metrics for better assessing the
text-reconstruction capabilities of autoencoders; and iii) we propose an
automatic method to find the critical bottleneck dimensionality for text
language representations (below which structural information is lost).
| Parth Gupta, Rafael E. Banchs and Paolo Rosso | null | 1402.3070 | null | null |
A Robust Ensemble Approach to Learn From Positive and Unlabeled Data
Using SVM Base Models | stat.ML cs.LG | We present a novel approach to learn binary classifiers when only positive
and unlabeled instances are available (PU learning). This problem is routinely
cast as a supervised task with label noise in the negative set. We use an
ensemble of SVM models trained on bootstrap resamples of the training data for
increased robustness against label noise. The approach can be considered in a
bagging framework which provides an intuitive explanation for its mechanics in
a semi-supervised setting. We compared our method to state-of-the-art
approaches in simulations using multiple public benchmark data sets. The
included benchmark comprises three settings with increasing label noise: (i)
fully supervised, (ii) PU learning and (iii) PU learning with false positives.
Our approach shows a marginal improvement over existing methods in the second
setting and a significant improvement in the third.
| Marc Claesen, Frank De Smet, Johan A. K. Suykens, Bart De Moor | 10.1016/j.neucom.2014.10.081 | 1402.3144 | null | null |
Zero-bias autoencoders and the benefits of co-adapting features | stat.ML cs.CV cs.LG cs.NE | Regularized training of an autoencoder typically results in hidden unit
biases that take on large negative values. We show that negative biases are a
natural result of using a hidden layer whose responsibility is to both
represent the input data and act as a selection mechanism that ensures sparsity
of the representation. We then show that negative biases impede the learning of
data distributions whose intrinsic dimensionality is high. We also propose a
new activation function that decouples the two roles of the hidden layer and
that allows us to learn representations on data with very high intrinsic
dimensionality, where standard autoencoders typically fail. Since the decoupled
activation function acts like an implicit regularizer, the model can be trained
by minimizing the reconstruction error of training data, without requiring any
additional regularization.
| Kishore Konda, Roland Memisevic, David Krueger | null | 1402.3337 | null | null |
Geometry and Expressive Power of Conditional Restricted Boltzmann
Machines | cs.NE cs.LG stat.ML | Conditional restricted Boltzmann machines are undirected stochastic neural
networks with a layer of input and output units connected bipartitely to a
layer of hidden units. These networks define models of conditional probability
distributions on the states of the output units given the states of the input
units, parametrized by interaction weights and biases. We address the
representational power of these models, proving results on their ability to
represent conditional Markov random fields and conditional distributions with
restricted supports, on the minimal size of universal approximators, on the maximal
model approximation errors, and on the dimension of the set of representable
conditional distributions. We contribute new tools for investigating
conditional probability models, which allow us to improve the results that can
be derived from existing work on restricted Boltzmann machine probability
models.
| Guido Montufar, Nihat Ay, Keyan Ghazi-Zahedi | null | 1402.3346 | null | null |
Indian Buffet Process Deep Generative Models for Semi-Supervised
Classification | cs.LG | Deep generative models (DGMs) have brought about a major breakthrough, as
well as renewed interest, in generative latent variable models. However, DGMs
do not allow for performing data-driven inference of the number of latent
features needed to represent the observed data. Traditional linear formulations
address this issue by resorting to tools from the field of nonparametric
statistics. Indeed, linear latent variable models with an Indian Buffet
Process (IBP) prior imposed have been extensively studied by the machine learning
community; inference for such models can be performed either via exact
sampling or via approximate variational techniques. Based on this inspiration,
in this paper we examine whether similar ideas from the field of Bayesian
nonparametrics can be utilized in the context of modern DGMs in order to
address the latent variable dimensionality inference problem. To this end, we
propose a novel DGM formulation, based on the imposition of an IBP prior. We
devise an efficient Black-Box Variational inference algorithm for our model,
and exhibit its efficacy in a number of semi-supervised classification
experiments. In all cases, we use popular benchmark datasets, and compare to
state-of-the-art DGMs.
| Sotirios P. Chatzis | null | 1402.3427 | null | null |
A Clockwork RNN | cs.NE cs.LG | Sequence prediction and classification are ubiquitous and challenging
problems in machine learning that can require identifying complex dependencies
between temporally distant inputs. Recurrent Neural Networks (RNNs) have the
ability, in theory, to cope with these temporal dependencies by virtue of the
short-term memory implemented by their recurrent (feedback) connections.
However, in practice they are difficult to train successfully when
long-term memory is required. This paper introduces a simple, yet powerful
modification to the standard RNN architecture, the Clockwork RNN (CW-RNN), in
which the hidden layer is partitioned into separate modules, each processing
inputs at its own temporal granularity, making computations only at its
prescribed clock rate. Rather than making the standard RNN models more complex,
CW-RNN reduces the number of RNN parameters, improves the performance
significantly in the tasks tested, and speeds up the network evaluation. The
network is demonstrated in preliminary experiments involving two tasks: audio
signal generation and TIMIT spoken word classification, where it outperforms
both RNN and LSTM networks.
| Jan Koutn\'ik, Klaus Greff, Faustino Gomez, J\"urgen Schmidhuber | null | 1402.3511 | null | null |
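A minimal numpy sketch of the clocked update described in the CW-RNN abstract above, assuming exponentially increasing module periods (1, 2, 4, ...); the dense recurrent matrix (the paper restricts connections so that slower modules feed faster ones), the tanh activation, and the weight scales are simplifying assumptions.

```python
import numpy as np

def cw_rnn_step(h, x, W_h, W_x, periods, t):
    """One Clockwork-RNN step: only modules whose clock period divides t
    are updated; the remaining units keep their previous hidden state."""
    module_size = len(h) // len(periods)
    # which hidden units are "active" at time step t
    active = np.concatenate([np.full(module_size, t % p == 0) for p in periods])
    h_new = np.tanh(W_h @ h + W_x @ x)
    return np.where(active, h_new, h)

# usage (illustrative shapes): 4 modules of 8 units, 10-dimensional inputs
rng = np.random.default_rng(0)
periods = [1, 2, 4, 8]
n_h, n_x = 4 * 8, 10
W_h = rng.normal(scale=0.1, size=(n_h, n_h))
W_x = rng.normal(scale=0.1, size=(n_h, n_x))
h = np.zeros(n_h)
for t in range(1, 20):
    h = cw_rnn_step(h, rng.normal(size=n_x), W_h, W_x, periods, t)
```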
Learning-assisted Theorem Proving with Millions of Lemmas | cs.AI cs.DL cs.LG cs.LO | Large formal mathematical libraries consist of millions of atomic inference
steps that give rise to a corresponding number of proved statements (lemmas).
Analogously to the informal mathematical practice, only a tiny fraction of such
statements is named and re-used in later proofs by formal mathematicians. In
this work, we suggest and implement criteria defining the estimated usefulness
of the HOL Light lemmas for proving further theorems. We use these criteria to
mine the large inference graph of the lemmas in the HOL Light and Flyspeck
libraries, adding up to millions of the best lemmas to the pool of statements
that can be re-used in later proofs. We show that in combination with
learning-based relevance filtering, such methods significantly strengthen
automated theorem proving of new conjectures over large formal mathematical
libraries such as Flyspeck.
| Cezary Kaliszyk and Josef Urban | null | 1402.3578 | null | null |
Privately Solving Linear Programs | cs.DS cs.CR cs.LG | In this paper, we initiate the systematic study of solving linear programs
under differential privacy. The first step is simply to define the problem: to
this end, we introduce several natural classes of private linear programs that
capture different ways sensitive data can be incorporated into a linear
program. For each class of linear programs we give an efficient, differentially
private solver based on the multiplicative weights framework, or we give an
impossibility result.
| Justin Hsu and Aaron Roth and Tim Roughgarden and Jonathan Ullman | 10.1007/978-3-662-43948-7_51 | 1402.3631 | null | null |
word2vec Explained: deriving Mikolov et al.'s negative-sampling
word-embedding method | cs.CL cs.LG stat.ML | The word2vec software of Tomas Mikolov and colleagues
(https://code.google.com/p/word2vec/ ) has gained a lot of traction lately, and
provides state-of-the-art word embeddings. The learning models behind the
software are described in two research papers. We found the description of the
models in these papers to be somewhat cryptic and hard to follow. While the
motivations and presentation may be obvious to the neural-networks
language-modeling crowd, we had to struggle quite a bit to figure out the
rationale behind the equations.
This note is an attempt to explain equation (4) (negative sampling) in
"Distributed Representations of Words and Phrases and their Compositionality"
by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado and Jeffrey Dean.
| Yoav Goldberg and Omer Levy | null | 1402.3722 | null | null |
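For reference, the negative-sampling objective that this note sets out to explain (equation (4) in Mikolov et al.) can, for a single (input word, context word) pair $(w_I, w_O)$, be written roughly as

$$\log \sigma\left({v'_{w_O}}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\left[\log \sigma\left(-{v'_{w_i}}^{\top} v_{w_I}\right)\right],$$

where $\sigma$ is the logistic function, $k$ is the number of negative samples, $P_n(w)$ is the noise distribution, and $v$, $v'$ are the input and output embeddings; the notation here follows common usage and may differ slightly from the papers'.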
Scalable Kernel Clustering: Approximate Kernel k-means | cs.CV cs.DS cs.LG | Kernel-based clustering algorithms have the ability to capture the non-linear
structure in real world data. Among various kernel-based clustering algorithms,
kernel k-means has gained popularity due to its simple iterative nature and
ease of implementation. However, its run-time complexity and memory footprint
increase quadratically in terms of the size of the data set, and hence, large
data sets cannot be clustered efficiently. In this paper, we propose an
approximation scheme based on randomization, called the Approximate Kernel
k-means. We approximate the cluster centers using the kernel similarity between
a few sampled points and all the points in the data set. We show that the
proposed method achieves better clustering performance than the traditional low
rank kernel approximation based clustering schemes. We also demonstrate that
its running time and memory requirements are significantly lower than those of
kernel k-means, with only a small reduction in the clustering quality on
several public domain large data sets. We then employ ensemble clustering
techniques to further enhance the performance of our algorithm.
| Radha Chitta, Rong Jin, Timothy C. Havens, Anil K. Jain | null | 1402.3849 | null | null |
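A simplified sketch of the sampling idea from the approximate kernel k-means abstract above, assuming scikit-learn is available: a Nyström-style approximation built from kernel similarities to a few randomly sampled points feeds a standard k-means. This conveys the spirit of restricting cluster centers to the span of the sampled points' kernel mappings, but is not the authors' exact iterative update; kernel, bandwidth, and sample size are illustrative.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.cluster import KMeans

def approx_kernel_kmeans(X, n_clusters=10, n_sampled=200, gamma=0.1, seed=0):
    """Approximate kernel k-means: kernel similarities to a small random
    sample of points stand in for the full n-by-n kernel matrix."""
    feats = Nystroem(kernel="rbf", gamma=gamma,
                     n_components=n_sampled, random_state=seed)
    Z = feats.fit_transform(X)          # n x n_sampled approximate features
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(Z)

# usage
# labels = approx_kernel_kmeans(X, n_clusters=5, n_sampled=100)
```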
Performance Evaluation of Machine Learning Classifiers in Sentiment
Mining | cs.LG cs.CL cs.IR | In recent years, the use of machine learning classifiers is of great value in
solving a variety of problems in text classification. Sentiment mining is a
kind of text classification in which, messages are classified according to
sentiment orientation such as positive or negative. This paper extends the idea
of evaluating the performance of various classifiers to show their
effectiveness in sentiment mining of online product reviews. The product
reviews are collected from Amazon reviews. To evaluate the performance of
classifiers various evaluation methods like random sampling, linear sampling
and bootstrap sampling are used. Our results show that the support vector machine
with the bootstrap sampling method outperforms the other classifiers and sampling
methods in terms of misclassification rate.
| Vinodhini G Chandrasekaran RM | null | 1402.3891 | null | null |
Sparse Polynomial Learning and Graph Sketching | cs.LG | Let $f:\{-1,1\}^n \to \mathbb{R}$ be a polynomial with at most $s$ non-zero real
coefficients. We give an algorithm for exactly reconstructing f given random
examples from the uniform distribution on $\{-1,1\}^n$ that runs in time
polynomial in $n$ and $2s$ and succeeds if the function satisfies the unique
sign property: there is one output value which corresponds to a unique set of
values of the participating parities. This sufficient condition is satisfied
when every coefficient of f is perturbed by a small random noise, or satisfied
with high probability when s parity functions are chosen randomly or when all
the coefficients are positive. Learning sparse polynomials over the Boolean
domain in time polynomial in $n$ and $2s$ is considered notoriously hard in the
worst-case. Our result shows that the problem is tractable for almost all
sparse polynomials. Then, we show an application of this result to hypergraph
sketching which is the problem of learning a sparse (both in the number of
hyperedges and the size of the hyperedges) hypergraph from uniformly drawn
random cuts. We also provide experimental results on a real world dataset.
| Murat Kocaoglu, Karthikeyan Shanmugam, Alexandros G. Dimakis and Adam
Klivans | null | 1402.3902 | null | null |
Selective Sampling with Drift | cs.LG | Recently there has been much work on selective sampling, an online active
learning setting, in which algorithms work in rounds. On each round an
algorithm receives an input and makes a prediction. Then, it can decide whether
to query a label, and if so to update its model, otherwise the input is
discarded. Most of this work is focused on the stationary case, where it is
assumed that there is a fixed target model, and the performance of the
algorithm is compared to a fixed model. However, in many real-world
applications, such as spam prediction, the best target function may drift over
time, or have shifts from time to time. We develop a novel selective sampling
algorithm for the drifting setting, analyze it under no assumptions on the
mechanism generating the sequence of instances, and derive new mistake bounds
that depend on the amount of drift in the problem. Simulations on synthetic and
real-world datasets demonstrate the superiority of our algorithms as a
selective sampling algorithm in the drifting setting.
| Edward Moroshko, Koby Crammer | null | 1402.4084 | null | null |
Stochastic Gradient Hamiltonian Monte Carlo | stat.ME cs.LG stat.ML | Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for
defining distant proposals with high acceptance probabilities in a
Metropolis-Hastings framework, enabling more efficient exploration of the state
space than standard random-walk proposals. The popularity of such methods has
grown significantly in recent years. However, a limitation of HMC methods is
the required gradient computation for simulation of the Hamiltonian dynamical
system; such computation is infeasible in problems involving a large sample size
or streaming data. Instead, we must rely on a noisy gradient estimate computed
from a subset of the data. In this paper, we explore the properties of such a
stochastic gradient HMC approach. Surprisingly, the natural implementation of
the stochastic approximation can be arbitrarily bad. To address this problem we
introduce a variant that uses second-order Langevin dynamics with a friction
term that counteracts the effects of the noisy gradient, maintaining the
desired target distribution as the invariant distribution. Results on simulated
data validate our theory. We also provide an application of our methods to a
classification task using neural networks and to online Bayesian matrix
factorization.
| Tianqi Chen, Emily B. Fox, Carlos Guestrin | null | 1402.4102 | null | null |
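A minimal sketch of the friction-corrected update described in the SGHMC abstract above, for a single parameter vector with identity mass matrix; the stochastic gradient oracle, step size, friction constant, and the omission of the gradient-noise estimate term are simplifying assumptions.

```python
import numpy as np

def sghmc(grad_U_noisy, theta0, eps=1e-3, C=1.0, n_steps=10000, rng=None):
    """Stochastic-gradient HMC with friction: second-order Langevin dynamics
    where the friction term C*v counteracts the noise in the gradient."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(scale=np.sqrt(2 * C * eps), size=theta.shape)
        v = v - eps * grad_U_noisy(theta) - eps * C * v + noise
        theta = theta + v
        samples.append(theta.copy())
    return np.array(samples)

# usage: sample roughly from N(0, 1), with a noisy gradient of U(theta) = theta^2/2
# draws = sghmc(lambda th: th + np.random.normal(scale=0.1, size=th.shape),
#               theta0=[0.0])
```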
A Bayesian Model of node interaction in networks | cs.LG stat.ME stat.ML | We are concerned with modeling the strength of links in networks by taking
into account how often those links are used. Link usage is a strong indicator
of how closely two nodes are related, but existing network models in Bayesian
Statistics and Machine Learning are able to predict only whether a link exists
at all. As priors for latent attributes of network nodes we explore the Chinese
Restaurant Process (CRP) and a multivariate Gaussian with fixed dimensionality.
The model is applied to a social network dataset and a word coocurrence
dataset.
| Ingmar Schuster | null | 1402.4279 | null | null |
Discretization of Temporal Data: A Survey | cs.DB cs.LG | In the real world, huge amounts of temporal data must be processed in many
application areas such as science, finance, network monitoring, and sensor
data analysis. Data mining techniques are primarily oriented towards handling
discrete features. In the case of temporal data, time plays an important role
in the characteristics of the data. To account for this effect, discretization
techniques have to take time into consideration while processing, resolving the
issue by finding intervals of data that are more concise and precise with
respect to time. This survey reviews different data discretization techniques
used in temporal data applications according to their inclusion or exclusion of
the class label, the temporal order of the data, and the handling of streaming
data, in order to open research directions in temporal data discretization for
improving the performance of data mining techniques.
| P. Chaudhari, D. P. Rana, R. G. Mehta, N. J. Mistry, M. M. Raghuwanshi | null | 1402.4283 | null | null |
The Random Forest Kernel and other kernels for big data from random
partitions | stat.ML cs.LG | We present Random Partition Kernels, a new class of kernels derived by
demonstrating a natural connection between random partitions of objects and
kernels between those objects. We show how the construction can be used to
create kernels from methods that would not normally be viewed as random
partitions, such as Random Forest. To demonstrate the potential of this method,
we propose two new kernels, the Random Forest Kernel and the Fast Cluster
Kernel, and show that these kernels consistently outperform standard kernels on
problems involving real-world datasets. Finally, we show how the form of these
kernels lends itself to a natural approximation that is appropriate for
certain big data problems, allowing $O(N)$ inference in methods such as
Gaussian Processes, Support Vector Machines and Kernel PCA.
| Alex Davies, Zoubin Ghahramani | null | 1402.4293 | null | null |
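A minimal sketch of how a kernel can be read off a forest's partitions, in the spirit of the Random Forest Kernel above, assuming scikit-learn: two points are similar in proportion to how often they fall in the same leaf. Using a supervised forest (with labels y) and these particular settings is an illustrative choice; the paper's construction covers random partitions more generally.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def random_forest_kernel(X, y, n_trees=100, seed=0):
    """Kernel from random partitions: K[i, j] is the fraction of trees in
    which points i and j land in the same leaf."""
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    forest.fit(X, y)
    leaves = forest.apply(X)                  # shape (n_samples, n_trees)
    K = np.zeros((len(X), len(X)))
    for t in range(n_trees):
        K += (leaves[:, t, None] == leaves[None, :, t])
    return K / n_trees

# usage: plug K into any kernel method, e.g. an SVM with kernel="precomputed"
```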
Automatic Construction and Natural-Language Description of Nonparametric
Regression Models | stat.ML cs.LG | This paper presents the beginnings of an automatic statistician, focusing on
regression problems. Our system explores an open-ended space of statistical
models to discover a good explanation of a data set, and then produces a
detailed report with figures and natural-language text. Our approach treats
unknown regression functions nonparametrically using Gaussian processes, which
has two important consequences. First, Gaussian processes can model functions
in terms of high-level properties (e.g. smoothness, trends, periodicity,
changepoints). Taken together with the compositional structure of our language
of models this allows us to automatically describe functions in simple terms.
Second, the use of flexible nonparametric models and a rich language for
composing them in an open-ended manner also results in state-of-the-art
extrapolation performance evaluated over 13 real time series data sets from
various domains.
| James Robert Lloyd, David Duvenaud, Roger Grosse, Joshua B. Tenenbaum,
Zoubin Ghahramani | null | 1402.4304 | null | null |
Student-t Processes as Alternatives to Gaussian Processes | stat.ML cs.AI cs.LG stat.ME | We investigate the Student-t process as an alternative to the Gaussian
process as a nonparametric prior over functions. We derive closed form
expressions for the marginal likelihood and predictive distribution of a
Student-t process, by integrating away an inverse Wishart process prior over
the covariance kernel of a Gaussian process model. We show surprising
equivalences between different hierarchical Gaussian process models leading to
Student-t processes, and derive a new sampling scheme for the inverse Wishart
process, which helps elucidate these equivalences. Overall, we show that a
Student-t process can retain the attractive properties of a Gaussian process --
a nonparametric representation, analytic marginal and predictive distributions,
and easy model selection through covariance kernels -- but has enhanced
flexibility, and predictive covariances that, unlike a Gaussian process,
explicitly depend on the values of training observations. We verify empirically
that a Student-t process is especially useful in situations where there are
changes in covariance structure, or in applications like Bayesian optimization,
where accurate predictive covariances are critical for good performance. These
advantages come at no additional computational cost over Gaussian processes.
| Amar Shah, Andrew Gordon Wilson and Zoubin Ghahramani | null | 1402.4306 | null | null |
On the properties of $\alpha$-unchaining single linkage hierarchical
clustering | cs.LG | In the selection of a hierarchical clustering method, theoretical properties may
give some insight into which method is the most suitable for treating a
clustering problem. Herein, we study some basic properties of two hierarchical
clustering methods: $\alpha$-unchaining single linkage or $SL(\alpha)$ and a
modified version of this one, $SL^*(\alpha)$. We compare the results with the
properties satisfied by the classical linkage-based hierarchical clustering
methods.
| A. Mart\'inez-P\'erez | null | 1402.4322 | null | null |
Hybrid SRL with Optimization Modulo Theories | cs.LG stat.ML | Generally speaking, the goal of constructive learning could be seen as, given
an example set of structured objects, to generate novel objects with similar
properties. From a statistical-relational learning (SRL) viewpoint, the task
can be interpreted as a constraint satisfaction problem, i.e. the generated
objects must obey a set of soft constraints, whose weights are estimated from
the data. Traditional SRL approaches rely on (finite) First-Order Logic (FOL)
as a description language, and on MAX-SAT solvers to perform inference. Alas,
FOL is unsuited for constructive problems where the objects contain a mixture
of Boolean and numerical variables. It is in fact difficult to implement, e.g.,
linear arithmetic constraints within the language of FOL. In this paper we
propose a novel class of hybrid SRL methods that rely on Satisfiability Modulo
Theories, an alternative class of formal languages that allow one to describe,
and reason over, mixed Boolean-numerical objects and constraints. The resulting
methods, which we call Learning Modulo Theories, are formulated within the
structured output SVM framework, and employ a weighted SMT solver as an
optimization oracle to perform efficient inference and discriminative
max-margin weight learning. We also present a few examples of constructive learning
applications enabled by our method.
| Stefano Teso and Roberto Sebastiani and Andrea Passerini | null | 1402.4354 | null | null |
A convergence proof of the split Bregman method for regularized
least-squares problems | math.OC cs.LG stat.ML | The split Bregman (SB) method [T. Goldstein and S. Osher, SIAM J. Imaging
Sci., 2 (2009), pp. 323-43] is a fast splitting-based algorithm that solves
image reconstruction problems with general l1, e.g., total-variation (TV) and
compressed sensing (CS), regularizations by introducing a single variable split
to decouple the data-fitting term and the regularization term, yielding simple
subproblems that are separable (or partially separable) and easy to minimize.
Several convergence proofs have been proposed, and these proofs either impose a
"full column rank" assumption to the split or assume exact updates in all
subproblems. However, these assumptions are impractical in many applications
such as the X-ray computed tomography (CT) image reconstructions, where the
inner least-squares problem usually cannot be solved efficiently due to the
highly shift-variant Hessian. In this paper, we show that when the data-fitting
term is quadratic, the SB method is a convergent alternating direction method
of multipliers (ADMM), and a straightforward convergence proof with inexact
updates is given using [J. Eckstein and D. P. Bertsekas, Mathematical
Programming, 55 (1992), pp. 293-318, Theorem 8]. Furthermore, since the SB
method is just a special case of an ADMM algorithm, it seems likely that the
ADMM algorithm will be faster than the SB method if the augmented Lagrangian
(AL) penalty parameters are selected appropriately. To have a concrete example,
we conduct a convergence rate analysis of the ADMM algorithm using two splits
for image restoration problems with quadratic data-fitting term and
regularization term. According to our analysis, we can show that the two-split
ADMM algorithm can be faster than the SB method if the AL penalty parameter of
the SB method is suboptimal. Numerical experiments were conducted to verify our
analysis.
| Hung Nien and Jeffrey A. Fessler | null | 1402.4371 | null | null |
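For reference, a generic scaled-form ADMM for $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$ (of which the split Bregman method with a quadratic data-fitting term can be viewed as an instance) iterates, with penalty parameter $\mu > 0$ and scaled dual variable $u$,

$$x^{k+1} = \arg\min_x\, f(x) + \tfrac{\mu}{2}\|Ax + Bz^k - c + u^k\|_2^2,$$
$$z^{k+1} = \arg\min_z\, g(z) + \tfrac{\mu}{2}\|Ax^{k+1} + Bz - c + u^k\|_2^2,$$
$$u^{k+1} = u^k + Ax^{k+1} + Bz^{k+1} - c;$$

this notation is generic and not tied to the paper's specific splits or penalty parameter choices.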
Fast X-ray CT image reconstruction using the linearized augmented
Lagrangian method with ordered subsets | math.OC cs.LG stat.ML | The augmented Lagrangian (AL) method that solves convex optimization problems
with linear constraints has drawn more attention recently in imaging
applications due to its decomposable structure for composite cost functions and
empirical fast convergence rate under weak conditions. However, for problems
such as X-ray computed tomography (CT) image reconstruction and large-scale
sparse regression with "big data", where there is no efficient way to solve the
inner least-squares problem, the AL method can be slow due to the inevitable
iterative inner updates. In this paper, we focus on solving regularized
(weighted) least-squares problems using a linearized variant of the AL method
that replaces the quadratic AL penalty term in the scaled augmented Lagrangian
with its separable quadratic surrogate (SQS) function, thus leading to a much
simpler ordered-subsets (OS) accelerable splitting-based algorithm, OS-LALM,
for X-ray CT image reconstruction. To further accelerate the proposed
algorithm, we use a second-order recursive system analysis to design a
deterministic downward continuation approach that avoids tedious parameter
tuning and provides fast convergence. Experimental results show that the
proposed algorithm significantly accelerates the "convergence" of X-ray CT
image reconstruction with negligible overhead and greatly reduces the OS
artifacts in the reconstructed image when using many subsets for OS
acceleration.
| Hung Nien and Jeffrey A. Fessler | 10.1109/TMI.2014.2358499 | 1402.4381 | null | null |
Incremental Majorization-Minimization Optimization with Application to
Large-Scale Machine Learning | math.OC cs.LG stat.ML | Majorization-minimization algorithms consist of successively minimizing a
sequence of upper bounds of the objective function. These upper bounds are
tight at the current estimate, and each iteration monotonically drives the
objective function downhill. Such a simple principle is widely applicable and
has been very popular in various scientific fields, especially in signal
processing and statistics. In this paper, we propose an incremental
majorization-minimization scheme for minimizing a large sum of continuous
functions, a problem of utmost importance in machine learning. We present
convergence guarantees for non-convex and convex optimization when the upper
bounds approximate the objective up to a smooth error; we call such upper
bounds "first-order surrogate functions". More precisely, we study asymptotic
stationary point guarantees for non-convex problems, and for convex ones, we
provide convergence rates for the expected objective function value. We apply
our scheme to composite optimization and obtain a new incremental proximal
gradient algorithm with linear convergence rate for strongly convex functions.
In our experiments, we show that our method is competitive with the state of
the art for solving machine learning problems such as logistic regression when
the number of training samples is large enough, and we demonstrate its
usefulness for sparse estimation with non-convex penalties.
| Julien Mairal (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire Jean
Kuntzmann) | null | 1402.4419 | null | null |
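For reference, the majorization-minimization principle underlying the work above: given surrogates $g(\cdot\,;\theta_k)$ satisfying

$$g(\theta; \theta_k) \ge f(\theta)\ \ \forall \theta, \qquad g(\theta_k; \theta_k) = f(\theta_k),$$

the update $\theta_{k+1} \in \arg\min_\theta g(\theta; \theta_k)$ guarantees monotone descent, since $f(\theta_{k+1}) \le g(\theta_{k+1}; \theta_k) \le g(\theta_k; \theta_k) = f(\theta_k)$. The incremental scheme of the paper applies this idea to one (or a few) terms of the large sum at a time rather than to the full objective.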
Learning the Irreducible Representations of Commutative Lie Groups | cs.LG | We present a new probabilistic model of compact commutative Lie groups that
produces invariant-equivariant and disentangled representations of data. To
define the notion of disentangling, we borrow a fundamental principle from
physics that is used to derive the elementary particles of a system from its
symmetries. Our model employs a newfound Bayesian conjugacy relation that
enables fully tractable probabilistic inference over compact commutative Lie
groups -- a class that includes the groups that describe the rotation and
cyclic translation of images. We train the model on pairs of transformed image
patches, and show that the learned invariant representation is highly effective
for classification.
| Taco Cohen, Max Welling | null | 1402.4437 | null | null |
Classification with Sparse Overlapping Groups | cs.LG stat.ML | Classification with a sparsity constraint on the solution plays a central
role in many high dimensional machine learning applications. In some cases, the
features can be grouped together so that entire subsets of features can be
selected or not selected. In many applications, however, this can be too
restrictive. In this paper, we are interested in a less restrictive form of
structured sparse feature selection: we assume that while features can be
grouped according to some notion of similarity, not all features in a group
need be selected for the task at hand. When the groups are comprised of
disjoint sets of features, this is sometimes referred to as the "sparse group"
lasso, and it allows for working with a richer class of models than traditional
group lasso methods. Our framework generalizes conventional sparse group lasso
further by allowing for overlapping groups, an additional flexibility needed in
many applications and one that presents further challenges. The main
contribution of this paper is a new procedure called Sparse Overlapping Group
(SOG) lasso, a convex optimization program that automatically selects similar
features for classification in high dimensions. We establish model selection
error bounds for SOGlasso classification problems under a fairly general
setting. In particular, the error bounds are the first such results for
classification using the sparse group lasso. Furthermore, the general SOGlasso
bound specializes to results for the lasso and the group lasso, some known and
some new. The SOGlasso is motivated by multi-subject fMRI studies in which
functional activity is classified using brain voxels as features, source
localization problems in Magnetoencephalography (MEG), and analyzing gene
activation patterns in microarray data analysis. Experiments with real and
synthetic data demonstrate the advantages of SOGlasso compared to the lasso and
group lasso.
| Nikhil Rao, Robert Nowak, Christopher Cox and Timothy Rogers | null | 1402.4512 | null | null |
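As a rough point of reference (a simplified rendering, not necessarily the paper's exact formulation, which handles overlap more carefully), a sparse-group-lasso-style objective over a collection of possibly overlapping groups $\mathcal{G}$ takes the form

$$\min_{w}\ \mathcal{L}(w) + \lambda_1 \|w\|_1 + \lambda_2 \sum_{g \in \mathcal{G}} \|w_g\|_2,$$

where $\mathcal{L}$ is the classification loss and $w_g$ denotes the coefficients restricted to group $g$: the $\ell_1$ term lets individual features within a selected group be dropped, while the group norms encourage whole groups to be switched off.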
Unsupervised Ranking of Multi-Attribute Objects Based on Principal
Curves | cs.LG cs.AI stat.ML | Unsupervised ranking faces one critical challenge in evaluation applications,
that is, no ground truth is available. While PageRank and its variants provide a
good solution in related subjects, they are applicable only for ranking from
link-structure data. In this work, we focus on unsupervised ranking from
multi-attribute data which is also common in evaluation tasks. To overcome the
challenge, we propose five essential meta-rules for the design and assessment
of unsupervised ranking approaches: scale and translation invariance, strict
monotonicity, linear/nonlinear capacities, smoothness, and explicitness of
parameter size. These meta-rules are regarded as high level knowledge for
unsupervised ranking tasks. Inspired by the works in [8] and [14], we propose a
ranking principal curve (RPC) model, which learns a one-dimensional manifold
function to perform unsupervised ranking tasks on multi-attribute observations.
Furthermore, the RPC is modeled to be a cubic B\'ezier curve with control
points restricted in the interior of a hypercube, thereby complying with all
the five meta-rules to infer a reasonable ranking list. With control points as
the model parameters, one is able to understand the learned manifold and to
interpret the ranking list semantically. Numerical experiments of the presented
RPC model are conducted on two open datasets of different ranking applications.
In comparison with the state-of-the-art approaches, the new model is able to
show more reasonable ranking lists.
| Chun-Guo Li, Xing Mei, Bao-Gang Hu | null | 1402.4542 | null | null |
Transduction on Directed Graphs via Absorbing Random Walks | cs.CV cs.LG stat.ML | In this paper we consider the problem of graph-based transductive
classification, and we are particularly interested in the directed graph
scenario which is a natural form for many real world applications. Different
from existing research efforts that either only deal with undirected graphs or
circumvent directionality by means of symmetrization, we propose a novel random
walk approach on directed graphs using absorbing Markov chains, which can be
regarded as maximizing the accumulated expected number of visits from the
unlabeled transient states. Our algorithm is simple, easy to implement, and
works with large-scale graphs. In particular, it is capable of preserving the
graph structure even when the input graph is sparse and changes over time, as
well as retaining weak signals present in the directed edges. We present its
intimate connections to a number of existing methods, including graph kernels,
graph Laplacian based methods, and interestingly, spanning forest of graphs.
Its computational complexity and the generalization error are also studied.
Empirically our algorithm is systematically evaluated on a wide range of
applications, where it has been shown to perform competitively compared to a suite
of state-of-the-art methods.
| Jaydeep De and Xiaowei Zhang and Li Cheng | null | 1402.4566 | null | null |
A Survey on Semi-Supervised Learning Techniques | cs.LG | Semisupervised learning is a learning paradigm concerned with the study of
how computers and natural systems such as human beings acquire knowledge in the
presence of both labeled and unlabeled data. Semisupervised learning methods are
preferred over supervised and unsupervised learning because of the improved
performance they show in the presence of large volumes of data. Labels are very
hard to obtain while unlabeled data are abundant, so semisupervised learning is
an appealing way to reduce human labeling effort and improve accuracy. There has been a large
spectrum of ideas on semisupervised learning. In this paper we bring out some
of the key approaches for semisupervised learning.
| V. Jothi Prakash, Dr. L.M. Nithya | 10.14445/22312803/IJCTT-V8P105 | 1402.4645 | null | null |
Retrieval of Experiments by Efficient Estimation of Marginal Likelihood | stat.ML cs.IR cs.LG | We study the task of retrieving relevant experiments given a query
experiment. By experiment, we mean a collection of measurements from a set of
`covariates' and the associated `outcomes'. While similar experiments can be
retrieved by comparing available `annotations', this approach ignores the
valuable information available in the measurements themselves. To incorporate
this information in the retrieval task, we suggest employing a retrieval metric
that utilizes probabilistic models learned from the measurements. We argue that
such a metric is a sensible measure of similarity between two experiments since
it permits inclusion of experiment-specific prior knowledge. However, accurate
models are often not analytical, and one must resort to storing posterior
samples which demands considerable resources. Therefore, we study strategies to
select informative posterior samples to reduce the computational load while
maintaining the retrieval performance. We demonstrate the efficacy of our
approach on simulated data with simple linear regression as the models, and
real world datasets.
| Sohan Seth, John Shawe-Taylor, Samuel Kaski | null | 1402.4653 | null | null |
Efficient Inference of Gaussian Process Modulated Renewal Processes with
Application to Medical Event Data | stat.ML cs.LG stat.AP | The episodic, irregular and asynchronous nature of medical data render them
difficult substrates for standard machine learning algorithms. We would like to
abstract away this difficulty for the class of time-stamped categorical
variables (or events) by modeling them as a renewal process and inferring a
probability density over continuous, longitudinal, nonparametric intensity
functions modulating that process. Several methods exist for inferring such a
density over intensity functions, but either their constraints and assumptions
prevent their use with our potentially bursty event streams, or their time
complexity renders their use intractable on our long-duration observations of
high-resolution events, or both. In this paper we present a new and efficient
method for inferring a distribution over intensity functions that uses direct
numeric integration and smooth interpolation over Gaussian processes. We
demonstrate that our direct method is up to twice as accurate and two orders of
magnitude more efficient than the best existing method (thinning). Importantly,
the direct method can infer intensity functions over the full range of bursty
to memoryless to regular events, which thinning and many other methods cannot.
Finally, we apply the method to clinical event data and demonstrate the
face-validity of the abstraction, which is now amenable to standard learning
algorithms.
| Thomas A. Lasko | null | 1402.4732 | null | null |
Near-optimal-sample estimators for spherical Gaussian mixtures | cs.LG cs.DS cs.IT math.IT stat.ML | Statistical and machine-learning algorithms are frequently applied to
high-dimensional data. In many of these applications data is scarce, and often
much more costly than computation time. We provide the first sample-efficient
polynomial-time estimator for high-dimensional spherical Gaussian mixtures.
For mixtures of any $k$ $d$-dimensional spherical Gaussians, we derive an
intuitive spectral-estimator that uses
$\mathcal{O}_k\bigl(\frac{d\log^2d}{\epsilon^4}\bigr)$ samples and runs in time
$\mathcal{O}_{k,\epsilon}(d^3\log^5 d)$, both significantly lower than
previously known. The constant factor $\mathcal{O}_k$ is polynomial for sample
complexity and is exponential for the time complexity, again much smaller than
what was previously known. We also show that
$\Omega_k\bigl(\frac{d}{\epsilon^2}\bigr)$ samples are needed for any
algorithm. Hence the sample complexity is near-optimal in the number of
dimensions.
We also derive a simple estimator for one-dimensional mixtures that uses
$\mathcal{O}\bigl(\frac{k \log \frac{k}{\epsilon} }{\epsilon^2} \bigr)$ samples
and runs in time
$\widetilde{\mathcal{O}}\left(\bigl(\frac{k}{\epsilon}\bigr)^{3k+1}\right)$.
Our other technical contributions include a faster algorithm for choosing a
density estimate from a set of distributions, that minimizes the $\ell_1$
distance to an unknown underlying distribution.
| Jayadev Acharya, Ashkan Jafarpour, Alon Orlitsky, Ananda Theertha
Suresh | null | 1402.4746 | null | null |
Subspace Learning with Partial Information | cs.LG stat.ML | The goal of subspace learning is to find a $k$-dimensional subspace of
$\mathbb{R}^d$, such that the expected squared distance between instance
vectors and the subspace is as small as possible. In this paper we study
subspace learning in a partial information setting, in which the learner can
only observe $r \le d$ attributes from each instance vector. We propose several
efficient algorithms for this task, and analyze their sample complexity.
| Alon Gonen, Dan Rosenbaum, Yonina Eldar, Shai Shalev-Shwartz | null | 1402.4844 | null | null |
Diffusion Least Mean Square: Simulations | cs.LG cs.MA | In this technical report we analyse the performance of diffusion strategies
applied to the Least-Mean-Square adaptive filter. We configure a network of
cooperative agents running adaptive filters and discuss their behaviour when
compared with a non-cooperative agent which represents the average of the
network. The analysis provides conditions under which diversity in the filter
parameters is beneficial in terms of convergence and stability. Simulations
drive and support the analysis.
| Jonathan Gelati and Sithan Kanna | null | 1402.4845 | null | null |
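A minimal sketch of one standard adapt-then-combine diffusion LMS recursion of the kind analysed above, assuming each of the N agents observes a regressor and a desired signal at every time step; the combination matrix, step size, and data shapes are illustrative assumptions.

```python
import numpy as np

def diffusion_lms(U, D, A, mu=0.01):
    """Adapt-then-combine diffusion LMS over a network of N agents.
    U: (T, N, M) regressors, D: (T, N) desired signals,
    A: (N, N) left-stochastic combination matrix (columns sum to 1)."""
    T, N, M = U.shape
    W = np.zeros((N, M))                        # one weight vector per agent
    for t in range(T):
        # adapt: each agent takes a local LMS step on its own data
        err = D[t] - np.einsum("nm,nm->n", U[t], W)
        Psi = W + mu * err[:, None] * U[t]
        # combine: each agent averages its neighbours' intermediate estimates
        W = A.T @ Psi
    return W
```

Setting A to the identity recovers the non-cooperative agents, which is the baseline the report compares against.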
A Quasi-Newton Method for Large Scale Support Vector Machines | cs.LG | This paper adapts a recently developed regularized stochastic version of the
Broyden, Fletcher, Goldfarb, and Shanno (BFGS) quasi-Newton method for the
solution of support vector machine classification problems. The proposed method
is shown to converge almost surely to the optimal classifier at a rate that is
linear in expectation. Numerical results show that the proposed method exhibits
a convergence rate that degrades smoothly with the dimensionality of the
feature vectors.
| Aryan Mokhtari and Alejandro Ribeiro | null | 1402.4861 | null | null |
Learning the Parameters of Determinantal Point Process Kernels | stat.ML cs.LG | Determinantal point processes (DPPs) are well-suited for modeling repulsion
and have proven useful in many applications where diversity is desired. While
DPPs have many appealing properties, such as efficient sampling, learning the
parameters of a DPP is still considered a difficult problem due to the
non-convex nature of the likelihood function. In this paper, we propose using
Bayesian methods to learn the DPP kernel parameters. These methods are
applicable in large-scale and continuous DPP settings even when the exact form
of the eigendecomposition is unknown. We demonstrate the utility of our DPP
learning methods in studying the progression of diabetic neuropathy based on
spatial distribution of nerve fibers, and in studying human perception of
diversity in images.
| Raja Hafiz Affandi, Emily B. Fox, Ryan P. Adams and Ben Taskar | null | 1402.4862 | null | null |
Survey on Sparse Coded Features for Content Based Face Image Retrieval | cs.IR cs.CV cs.LG stat.ML | Content-based image retrieval is a technique which uses the visual contents of
an image to search for images in large-scale image databases according to users'
interests. This paper provides a comprehensive survey of recent technology used
in the area of content-based face image retrieval. Nowadays, as digital devices and
photo-sharing sites grow in popularity, large numbers of human face photos are
available in databases. Multiple types of facial features are used to represent
discriminability on large-scale human facial image databases. Searching and mining
of facial images are challenging problems and important research issues. Sparse
representation of features provides significant improvement in indexing images
related to a query image.
| D. Johnvictor, G. Selvavinayagam | 10.14445/22312803/IJCTT-V8P106 | 1402.4888 | null | null |
Group-sparse Matrix Recovery | cs.LG cs.CV stat.ML | We apply the OSCAR (octagonal selection and clustering algorithms for
regression) in recovering group-sparse matrices (two-dimensional---2D---arrays)
from compressive measurements. We propose a 2D version of OSCAR (2OSCAR)
consisting of the $\ell_1$ norm and the pair-wise $\ell_{\infty}$ norm, which
is convex but non-differentiable. We show that the proximity operator of 2OSCAR
can be computed based on that of OSCAR. The 2OSCAR problem can thus be
efficiently solved by state-of-the-art proximal splitting algorithms.
Experiments on group-sparse 2D array recovery show that 2OSCAR regularization
solved by the SpaRSA algorithm is the fastest choice, while the PADMM algorithm
(with debiasing) yields the most accurate results.
| Xiangrong Zeng and M\'ario A. T. Figueiredo | null | 1402.5077 | null | null |
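In symbols, an OSCAR-style regularizer combining the $\ell_1$ norm with pair-wise $\ell_\infty$ norms over the entries $x_i$ of the array $X$, as the abstract describes, can be written roughly as

$$\lambda_1 \|X\|_1 + \lambda_2 \sum_{i < j} \max\{|x_i|, |x_j|\};$$

the exact pairing of entries for the 2D (2OSCAR) variant and the weighting used in the paper may differ from this generic form.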
Multi-Step Stochastic ADMM in High Dimensions: Applications to Sparse
Optimization and Noisy Matrix Decomposition | cs.LG math.OC stat.ML | We propose an efficient ADMM method with guarantees for high-dimensional
problems. We provide explicit bounds for the sparse optimization problem and
the noisy matrix decomposition problem. For sparse optimization, we establish
that the modified ADMM method has an optimal convergence rate of
$\mathcal{O}(s\log d/T)$, where $s$ is the sparsity level, $d$ is the data
dimension and $T$ is the number of steps. This matches with the minimax lower
bounds for sparse estimation. For matrix decomposition into sparse and low rank
components, we provide the first guarantees for any online method, and prove a
convergence rate of $\tilde{\mathcal{O}}((s+r)\beta^2(p) /T) +
\mathcal{O}(1/p)$ for a $p\times p$ matrix, where $s$ is the sparsity level,
$r$ is the rank and $\Theta(\sqrt{p})\leq \beta(p)\leq \Theta(p)$. Our
guarantees match the minimax lower bound with respect to $s,r$ and $T$. In
addition, we match the minimax lower bound with respect to the matrix dimension
$p$, i.e. $\beta(p)=\Theta(\sqrt{p})$, for many important statistical models
including the independent noise model, the linear Bayesian network and the
latent Gaussian graphical model under some conditions. Our ADMM method is based
on epoch-based annealing and consists of inexpensive steps which involve
projections on to simple norm balls. Experiments show that for both sparse
optimization and matrix decomposition problems, our algorithm outperforms the
state-of-the-art methods. In particular, we reach higher accuracy with same
time complexity.
| Hanie Sedghi and Anima Anandkumar and Edmond Jonckheere | null | 1402.5131 | null | null |
Distribution-Independent Reliable Learning | cs.LG cs.CC cs.DS | We study several questions in the reliable agnostic learning framework of
Kalai et al. (2009), which captures learning tasks in which one type of error
is costlier than others. A positive reliable classifier is one that makes no
false positive errors. The goal in the positive reliable agnostic framework is
to output a hypothesis with the following properties: (i) its false positive
error rate is at most $\epsilon$, (ii) its false negative error rate is at most
$\epsilon$ more than that of the best positive reliable classifier from the
class. A closely related notion is fully reliable agnostic learning, which
considers partial classifiers that are allowed to predict "unknown" on some
inputs. The best fully reliable partial classifier is one that makes no errors
and minimizes the probability of predicting "unknown", and the goal in fully
reliable learning is to output a hypothesis that is almost as good as the best
fully reliable partial classifier from a class.
For distribution-independent learning, the best known algorithms for PAC
learning typically utilize polynomial threshold representations, while the
state of the art agnostic learning algorithms use point-wise polynomial
approximations. We show that one-sided polynomial approximations, an
intermediate notion between polynomial threshold representations and point-wise
polynomial approximations, suffice for learning in the reliable agnostic
settings. We then show that majorities can be fully reliably learned and
disjunctions of majorities can be positive reliably learned, through
constructions of appropriate one-sided polynomial approximations. Our fully
reliable algorithm for majorities provides the first evidence that fully
reliable learning may be strictly easier than agnostic learning. Our algorithms
also satisfy strong attribute-efficiency properties, and provide smooth
tradeoffs between sample complexity and running time.
| Varun Kanade and Justin Thaler | null | 1402.5164 | null | null |
Pareto-depth for Multiple-query Image Retrieval | cs.IR cs.LG stat.ML | Most content-based image retrieval systems consider either one single query,
or multiple queries that include the same object or represent the same semantic
information. In this paper we consider the content-based image retrieval
problem for multiple query images corresponding to different image semantics.
We propose a novel multiple-query information retrieval algorithm that combines
the Pareto front method (PFM) with efficient manifold ranking (EMR). We show
that our proposed algorithm outperforms state of the art multiple-query
retrieval algorithms on real-world image databases. We attribute this
performance improvement to concavity properties of the Pareto fronts, and prove
a theoretical result that characterizes the asymptotic concavity of the fronts.
| Ko-Jen Hsiao, Jeff Calder, Alfred O. Hero III | 10.1109/TIP.2014.2378057 | 1402.5176 | null | null |
Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-$1$
Updates | cs.LG math.NA stat.ML | In this paper, we provide local and global convergence guarantees for
recovering CP (Candecomp/Parafac) tensor decomposition. The main step of the
proposed algorithm is a simple alternating rank-$1$ update which is the
alternating version of the tensor power iteration adapted for asymmetric
tensors. Local convergence guarantees are established for third order tensors
of rank $k$ in $d$ dimensions, when $k=o \bigl( d^{1.5} \bigr)$ and the tensor
components are incoherent. Thus, we can recover overcomplete tensor
decomposition. We also strengthen the results to global convergence guarantees
under stricter rank condition $k \le \beta d$ (for arbitrary constant $\beta >
1$) through a simple initialization procedure where the algorithm is
initialized by top singular vectors of random tensor slices. Furthermore, the
approximate local convergence guarantees for $p$-th order tensors are also
provided under rank condition $k=o \bigl( d^{p/2} \bigr)$. The guarantees also
include tight perturbation analysis given noisy tensor.
| Animashree Anandkumar and Rong Ge and Majid Janzamin | null | 1402.5180 | null | null |
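A minimal numpy sketch of a single alternating rank-1 update of the kind the abstract above describes, written for a rank-1 fit of a third-order tensor; the deflation/joint treatment of rank-k components and the SVD-based initialization from random tensor slices are omitted here.

```python
import numpy as np

def alternating_rank1(T, n_iters=100, rng=None):
    """Fit a rank-1 CP approximation lambda * (a x b x c) to a third-order
    tensor T by asymmetric power iterations (alternating rank-1 updates)."""
    rng = np.random.default_rng(rng)
    a, b, c = (rng.normal(size=d) for d in T.shape)
    for _ in range(n_iters):
        # update each factor by contracting T with the other two, then normalize
        a = np.einsum("ijk,j,k->i", T, b, c); a /= np.linalg.norm(a)
        b = np.einsum("ijk,i,k->j", T, a, c); b /= np.linalg.norm(b)
        c = np.einsum("ijk,i,j->k", T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum("ijk,i,j,k->", T, a, b, c)
    return lam, a, b, c
```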
Convergence results for projected line-search methods on varieties of
low-rank matrices via \L{}ojasiewicz inequality | math.OC cs.LG math.NA | The aim of this paper is to derive convergence results for projected
line-search methods on the real-algebraic variety $\mathcal{M}_{\le k}$ of real
$m \times n$ matrices of rank at most $k$. Such methods extend Riemannian
optimization methods, which are successfully used on the smooth manifold
$\mathcal{M}_k$ of rank-$k$ matrices, to its closure by taking steps along
gradient-related directions in the tangent cone, and afterwards projecting back
to $\mathcal{M}_{\le k}$. Considering such a method circumvents the
difficulties which arise from the nonclosedness and the unbounded curvature of
$\mathcal{M}_k$. The pointwise convergence is obtained for real-analytic
functions on the basis of a \L{}ojasiewicz inequality for the projection of the
antigradient to the tangent cone. If the derived limit point lies on the smooth
part of $\mathcal{M}_{\le k}$, i.e. in $\mathcal{M}_k$, this boils down to more
or less known results, but with the benefit that asymptotic convergence rate
estimates (for specific step-sizes) can be obtained without an a priori
curvature bound, simply from the fact that the limit lies on a smooth manifold.
At the same time, one can give a convincing justification for assuming critical
points to lie in $\mathcal{M}_k$: if $X$ is a critical point of $f$ on
$\mathcal{M}_{\le k}$, then either $X$ has rank $k$, or $\nabla f(X) = 0$.
| Reinhold Schneider and Andr\'e Uschmajew | null | 1402.5284 | null | null |
Important Molecular Descriptors Selection Using Self Tuned Reweighted
Sampling Method for Prediction of Antituberculosis Activity | cs.LG stat.AP stat.ML | In this paper, a new descriptor selection method for selecting an optimal
combination of important descriptors of sulfonamide derivatives data, named
self tuned reweighted sampling (STRS), is developed. Important descriptors are defined as
the descriptors with large absolute coefficients in a multivariate linear
regression model such as partial least squares (PLS). In this study, the
absolute values of the regression coefficients of a PLS model are used as an index
for evaluating the importance of each descriptor. Then, based on the importance
level of each descriptor, STRS sequentially selects N subsets of descriptors
from N Monte Carlo (MC) sampling runs in an iterative and competitive manner.
In each sampling run, a fixed ratio (e.g. 80%) of samples is first randomly
selected to establish a regression model. Next, based on the regression
coefficients, a two-step procedure including rapidly decreasing function (RDF)
based enforced descriptor selection and self tuned sampling (STS) based
competitive descriptor selection is adopted to select the important
descriptors. After running the loops, a number of subsets of descriptors are
obtained and the root mean squared error of cross validation (RMSECV) of PLS models
established with these subsets of descriptors is computed. The subset of descriptors
with the lowest RMSECV is considered the optimal descriptor subset. The
performance of the proposed algorithm is evaluated on a sulfonamide derivative
dataset. The results reveal a good characteristic of STRS: it can usually
locate an optimal combination of important descriptors which are
interpretable with respect to the biological activity of interest. Additionally, our study shows
that better prediction is obtained by STRS when compared to PLS modeling with the
full descriptor set and Monte Carlo uninformative variable elimination (MC-UVE).
| Doreswamy, Chanabasayya M. Vastrad | null | 1402.5360 | null | null |
From Predictive to Prescriptive Analytics | stat.ML cs.LG math.OC | In this paper, we combine ideas from machine learning (ML) and operations
research and management science (OR/MS) in developing a framework, along with
specific methods, for using data to prescribe optimal decisions in OR/MS
problems. In a departure from other work on data-driven optimization and
reflecting our practical experience with the data available in applications of
OR/MS, we consider data consisting, not only of observations of quantities with
direct effect on costs/revenues, such as demand or returns, but predominantly
of observations of associated auxiliary quantities. The main problem of
interest is a conditional stochastic optimization problem, given imperfect
observations, where the joint probability distributions that specify the
problem are unknown. We demonstrate that our proposed solution methods, which
are inspired by ML methods such as local regression, CART, and random forests,
are generally applicable to a wide range of decision problems. We prove that
they are tractable and asymptotically optimal even when data is not iid and may
be censored. We extend this to the case where decision variables may directly
affect uncertainty in unknown ways, such as pricing's effect on demand. As an
analogue to R^2, we develop a metric P termed the coefficient of
prescriptiveness to measure the prescriptive content of data and the efficacy
of a policy from an operations perspective. To demonstrate the power of our
approach in a real-world setting we study an inventory management problem faced
by the distribution arm of an international media conglomerate, which ships an
average of 1 billion units per year. We leverage internal data and public online
data harvested from IMDb, Rotten Tomatoes, and Google to prescribe operational
decisions that outperform baseline measures. Specifically, the data we collect,
leveraged by our methods, accounts for an 88\% improvement as measured by our
P.
| Dimitris Bertsimas, Nathan Kallus | null | 1402.5481 | null | null |
Efficient Semidefinite Spectral Clustering via Lagrange Duality | cs.LG cs.CV | We propose an efficient approach to semidefinite spectral clustering (SSC),
which addresses the Frobenius normalization with the positive semidefinite
(p.s.d.) constraint for spectral clustering. Compared with the original
Frobenius norm approximation based algorithm, the proposed algorithm can more
accurately find the closest doubly stochastic approximation to the affinity
matrix by considering the p.s.d. constraint. In this paper, SSC is formulated
as a semidefinite programming (SDP) problem. In order to solve the high
computational complexity of SDP, we present a dual algorithm based on the
Lagrange dual formalization. Two versions of the proposed algorithm are
proffered: one with less memory usage and the other with faster convergence
rate. The proposed algorithm has much lower time complexity than that of the
standard interior-point based SDP solvers. Experimental results on both UCI
data sets and real-world image data sets demonstrate that 1) compared with the
state-of-the-art spectral clustering methods, the proposed algorithm achieves
better clustering performance; and 2) our algorithm is much more efficient and
can solve larger-scale SSC problems than those standard interior-point SDP
solvers.
| Yan Yan, Chunhua Shen, Hanzi Wang | null | 1402.5497 | null | null |
Semi-Supervised Nonlinear Distance Metric Learning via Forests of
Max-Margin Cluster Hierarchies | stat.ML cs.IR cs.LG | Metric learning is a key problem for many data mining and machine learning
applications, and has long been dominated by Mahalanobis methods. Recent
advances in nonlinear metric learning have demonstrated the potential power of
non-Mahalanobis distance functions, particularly tree-based functions. We
propose a novel nonlinear metric learning method that uses an iterative,
hierarchical variant of semi-supervised max-margin clustering to construct a
forest of cluster hierarchies, where each individual hierarchy can be
interpreted as a weak metric over the data. By introducing randomness during
hierarchy training and combining the output of many of the resulting
semi-random weak hierarchy metrics, we can obtain a powerful and robust
nonlinear metric model. This method has two primary contributions: first, it is
semi-supervised, incorporating information from both constrained and
unconstrained points. Second, we take a relaxed approach to constraint
satisfaction, allowing the method to satisfy different subsets of the
constraints at different levels of the hierarchy rather than attempting to
simultaneously satisfy all of them. This leads to a more robust learning
algorithm. We compare our method to a number of state-of-the-art benchmarks on
$k$-nearest neighbor classification, large-scale image retrieval and
semi-supervised clustering problems, and find that our algorithm yields results
comparable or superior to the state-of-the-art, and is significantly more
robust to noise.
| David M. Johnson, Caiming Xiong and Jason J. Corso | null | 1402.5565 | null | null |
Exact Post Model Selection Inference for Marginal Screening | stat.ME cs.LG math.ST stat.ML stat.TH | We develop a framework for post model selection inference, via marginal
screening, in linear regression. At the core of this framework is a result that
characterizes the exact distribution of linear functions of the response $y$,
conditional on the model being selected (the ``condition on selection'' framework).
This allows us to construct valid confidence intervals and hypothesis tests for
regression coefficients that account for the selection procedure. In contrast
to recent work in high-dimensional statistics, our results are exact
(non-asymptotic) and require no eigenvalue-like assumptions on the design
matrix $X$. Furthermore, the computational cost of marginal regression,
constructing confidence intervals and hypothesis testing is negligible compared
to the cost of linear regression, thus making our methods particularly suitable
for extremely large datasets. Although we focus on marginal screening to
illustrate the applicability of the condition on selection framework, this
framework is much more broadly applicable. We show how to apply the proposed
framework to several other selection procedures including orthogonal matching
pursuit, non-negative least squares, and marginal screening+Lasso.
| Jason D Lee and Jonathan E Taylor | null | 1402.5596 | null | null |
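For concreteness, here is a minimal sketch of the marginal screening selection step itself (keep the k features most correlated with the response, then refit least squares on them); the paper's actual contribution, exact post-selection confidence intervals conditional on this selection event, is not reproduced here, and the data below are synthetic.

```python
# Marginal screening followed by a least-squares refit (selection step only).
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 1000, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:5] = 2.0
y = X @ beta + rng.standard_normal(n)

# Marginal screening: keep the k features most correlated with the response.
scores = np.abs(X.T @ y)
selected = np.argsort(scores)[-k:]

# Refit ordinary least squares on the selected submodel.
beta_hat, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
print("large selected coefficients at features:",
      sorted(selected[np.abs(beta_hat) > 0.5]))
```

Naive confidence intervals from this refit ignore the fact that the same data chose the submodel, which is precisely the bias the condition-on-selection framework corrects.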
To go deep or wide in learning? | cs.LG | To achieve acceptable performance for AI tasks, one can either use
sophisticated feature extraction methods as the first layer in a two-layered
supervised learning model, or learn the features directly using a deep
(multi-layered) model. While the first approach is very problem-specific, the
second approach has computational overheads in learning multiple layers and
fine-tuning of the model. In this paper, we propose an approach called wide
learning, based on arc-cosine kernels, that learns a single layer of infinite
width. We propose exact and inexact learning strategies for wide learning and
show that wide learning with a single layer outperforms both single-layer and
deep architectures of finite width on some benchmark datasets.
| Gaurav Pandey and Ambedkar Dukkipati | null | 1402.5634 | null | null |
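As an illustration of the kernel family involved, the sketch below implements the order-1 arc-cosine kernel of Cho and Saul (2009), which corresponds to an infinitely wide single hidden layer of threshold-linear units, and plugs it into a kernel SVM; the dataset and classifier are illustrative choices, not the paper's experimental setup.

```python
# Order-1 arc-cosine kernel used inside a kernel SVM (illustrative setup).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def arc_cosine_kernel(A, B, degree=1):
    """Arc-cosine kernel of order `degree` between rows of A and rows of B."""
    na = np.linalg.norm(A, axis=1, keepdims=True)
    nb = np.linalg.norm(B, axis=1, keepdims=True)
    cos = np.clip((A @ B.T) / (na * nb.T + 1e-12), -1.0, 1.0)
    theta = np.arccos(cos)
    if degree == 0:
        return (np.pi - theta) / np.pi
    # degree == 1: J_1(theta) = sin(theta) + (pi - theta) * cos(theta)
    J = np.sin(theta) + (np.pi - theta) * np.cos(theta)
    return (na * nb.T) * J / np.pi

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = SVC(kernel=lambda A, B: arc_cosine_kernel(A, B, degree=1)).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```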
Dynamic Rate and Channel Selection in Cognitive Radio Systems | cs.IT cs.LG math.IT | In this paper, we investigate dynamic channel and rate selection in cognitive
radio systems which exploit a large number of channels free from primary users.
In such systems, transmitters may rapidly change the selected (channel, rate)
pair to opportunistically learn and track the pair offering the highest
throughput. We formulate the problem of sequential channel and rate selection
as an online optimization problem, and show its equivalence to a {\it
structured} Multi-Armed Bandit problem. The structure stems from inherent
properties of the achieved throughput as a function of the selected channel and
rate. We derive fundamental performance limits satisfied by {\it any} channel
and rate adaptation algorithm, and propose algorithms that achieve (or
approach) these limits. In turn, the proposed algorithms optimally exploit the
inherent structure of the throughput. We illustrate the efficiency of our
algorithms using both test-bed and simulation experiments, in both stationary
and non-stationary radio environments. In stationary environments, the packet
transmission success probabilities at the various channel and rate pairs do
not evolve over time, whereas in non-stationary environments, they may evolve.
In practical scenarios, the proposed algorithms are able to track the best
channel and rate quite accurately without the need for any explicit measurement
and feedback of the quality of the various channels.
| Richard Combes, Alexandre Proutiere | null | 1402.5666 | null | null |
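The following is a minimal UCB-style sketch for joint channel/rate selection: each arm is a (channel, rate) pair, throughput is rate times success probability, and the index adds a confidence bonus to the empirical success rate. The rates and success probabilities are made up for simulation, and the sketch ignores the structure across rates that the paper's algorithms exploit.

```python
# Unstructured UCB over (channel, rate) pairs (simplified illustration).
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([6.0, 12.0, 24.0, 48.0])           # Mbit/s, illustrative values
p_success = np.array([[0.95, 0.90, 0.50, 0.10],      # channel 0 (made up)
                      [0.99, 0.95, 0.85, 0.30]])     # channel 1 (made up)
n_channels, n_rates = p_success.shape

counts = np.ones((n_channels, n_rates))              # pretend each arm was tried once
successes = np.ones((n_channels, n_rates))           # optimistic initialisation

T = 20000
for t in range(2, T + 2):
    p_hat = successes / counts
    bonus = np.sqrt(2.0 * np.log(t) / counts)
    index = rates * np.minimum(p_hat + bonus, 1.0)   # optimistic throughput index
    c, r = np.unravel_index(np.argmax(index), index.shape)
    ack = rng.random() < p_success[c, r]             # simulated (N)ACK feedback
    counts[c, r] += 1
    successes[c, r] += ack

print("most played (channel, rate):",
      np.unravel_index(np.argmax(counts), counts.shape),
      "| true optimum:",
      np.unravel_index(np.argmax(rates * p_success), p_success.shape))
```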
Discriminative Functional Connectivity Measures for Brain Decoding | cs.AI cs.CE cs.CV cs.LG | We propose a statistical learning model for classifying cognitive processes
based on distributed patterns of neural activation in the brain, acquired via
functional magnetic resonance imaging (fMRI). In the proposed learning method,
local meshes are formed around each voxel. The distance between voxels in the
mesh is determined by using a functional neighbourhood concept. In order to
define the functional neighbourhood, the similarities between the time series
recorded for voxels are measured and functional connectivity matrices are
constructed. Then, the local mesh for each voxel is formed by including the
functionally closest neighbouring voxels in the mesh. The relationship between
the voxels within a mesh is estimated by using a linear regression model. These
relationship vectors, called Functional Connectivity aware Local Relational
Features (FC-LRF), are then used to train a statistical learning machine. The
proposed method was tested on a recognition memory experiment, including data
pertaining to encoding and retrieval of words belonging to ten different
semantic categories. Two popular classifiers, namely k-nearest neighbour (k-nn)
and Support Vector Machine (SVM), are trained in order to predict the semantic
category of the item being retrieved, based on activation patterns during
encoding. The classification performance of the Functional Mesh Learning model,
which ranges from 62% to 71%, is superior to that of classical multi-voxel pattern
analysis (MVPA) methods, which ranges from 40% to 48%, for the ten semantic categories.
| Orhan Firat and Mete Ozay and Ilke Oztekin and Fatos T. Yarman Vural | null | 1402.5684 | null | null |
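A sketch of the local-mesh construction follows: for each voxel, take the p voxels whose time series are most correlated with it, regress the voxel's time series on those neighbours, and use the regression coefficients as features. Synthetic data stands in for fMRI, and the mesh size is an illustrative assumption.

```python
# Functional-connectivity local meshes with linear-regression features (sketch).
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels, p = 100, 50, 5
data = rng.standard_normal((n_timepoints, n_voxels))    # time x voxels

corr = np.corrcoef(data.T)                               # voxel-voxel functional connectivity
np.fill_diagonal(corr, -np.inf)                          # exclude self from the mesh

features = []
for v in range(n_voxels):
    neighbours = np.argsort(corr[v])[-p:]                # functionally closest voxels
    A, b = data[:, neighbours], data[:, v]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)       # local linear regression
    features.append(coeffs)

fc_lrf = np.concatenate(features)                        # one feature vector per sample
print("FC-LRF feature vector shape:", fc_lrf.shape)      # (n_voxels * p,)
```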
Variational Particle Approximations | stat.ML cs.LG | Approximate inference in high-dimensional, discrete probabilistic models is a
central problem in computational statistics and machine learning. This paper
describes discrete particle variational inference (DPVI), a new approach that
combines key strengths of Monte Carlo, variational and search-based techniques.
DPVI is based on a novel family of particle-based variational approximations
that can be fit using simple, fast, deterministic search techniques. Like Monte
Carlo, DPVI can handle multiple modes, and yields exact results in a
well-defined limit. Like unstructured mean-field, DPVI is based on optimizing a
lower bound on the partition function; even when this quantity is not of intrinsic
interest, the bound facilitates convergence assessment and debugging. Like both Monte
Carlo and combinatorial search, DPVI can take advantage of factorization,
sequential structure, and custom search operators. This paper defines the DPVI
particle-based approximation family and its partition function lower bounds, along
with the sequential DPVI and local DPVI algorithm templates for optimizing
them. DPVI is illustrated and evaluated via experiments on lattice Markov
Random Fields, nonparametric Bayesian mixtures and block-models, and parametric
as well as nonparametric hidden Markov models. Results include applications to
real-world spike-sorting and relational modeling problems, and show that DPVI
can offer appealing time/accuracy trade-offs as compared to multiple
alternatives.
| Ardavan Saeedi, Tejas D Kulkarni, Vikash Mansinghka, Samuel Gershman | null | 1402.5715 | null | null |
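A toy sketch of the core idea follows: with K distinct particles weighted proportionally to their unnormalized probabilities, the variational lower bound reduces to the log of the sum of those probabilities, so fitting the approximation becomes a deterministic search for K distinct high-probability configurations. The tiny binary chain model and the greedy bit-flip search are illustrative assumptions; the sequential and local DPVI templates are not reproduced.

```python
# Particle-based variational lower bound optimized by greedy search (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
n_vars, K = 8, 4
unary = rng.normal(0.0, 1.0, n_vars)        # per-variable potentials
pair = rng.normal(0.0, 1.0, n_vars - 1)     # chain couplings

def log_p_tilde(x):
    x = np.asarray(x, dtype=float)
    return unary @ x + (pair * x[:-1] * x[1:]).sum()

def bound(particles):
    # With optimal weights w_k proportional to p_tilde(x_k) over distinct
    # particles, the lower bound equals log sum_k p_tilde(x_k).
    return np.logaddexp.reduce([log_p_tilde(p) for p in particles])

# Start from K distinct random configurations.
idx = rng.choice(2 ** n_vars, size=K, replace=False)
particles = [tuple(int(b) for b in ((i >> np.arange(n_vars)) & 1)) for i in idx]

# Deterministic greedy search: accept any single bit flip that raises the bound
# while keeping the particles distinct.
improved = True
while improved:
    improved = False
    for k in range(K):
        for i in range(n_vars):
            cand = list(particles[k]); cand[i] = 1 - cand[i]
            trial = particles[:k] + [tuple(cand)] + particles[k + 1:]
            if len(set(trial)) == K and bound(trial) > bound(particles):
                particles, improved = trial, True

weights = np.exp([log_p_tilde(p) - bound(particles) for p in particles])
print("particles:", particles)
print("weights:", np.round(weights, 3), "| lower bound:", round(float(bound(particles)), 3))
```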
Machine Learning Methods in the Computational Biology of Cancer | q-bio.QM cs.LG stat.ML | The objectives of this "perspective" paper are to review some recent advances
in sparse feature selection for regression and classification, as well as
compressed sensing, and to discuss how these might be used to develop tools to
advance personalized cancer therapy. As an illustration of the possibilities, a
new algorithm for sparse regression is presented and applied to predict the
time to tumor recurrence in ovarian cancer. A new algorithm for sparse feature
selection in classification problems is presented, and its validation in
endometrial cancer is briefly discussed. Some open problems are also presented.
| Mathukumalli Vidyasagar | 10.1098/rspa.2014.0081 | 1402.5728 | null | null |
Information-Theoretic Bounds for Adaptive Sparse Recovery | cs.IT cs.LG math.IT math.ST stat.TH | We derive an information-theoretic lower bound for sample complexity in
sparse recovery problems where inputs can be chosen sequentially and
adaptively. This lower bound is in terms of a simple mutual information
expression and unifies many different linear and nonlinear observation models.
Using this formula, we derive bounds for adaptive compressive sensing (CS),
group testing and 1-bit CS problems. We show that adaptivity cannot decrease
sample complexity in group testing, 1-bit CS and CS with linear sparsity. In
contrast, we show that there may be mild performance gains for CS in the sublinear
regime. Our unified analysis also allows characterization of gains due to
adaptivity from a wider perspective on sparse problems.
| Cem Aksoylar and Venkatesh Saligrama | null | 1402.5731 | null | null |
Bandits with concave rewards and convex knapsacks | cs.LG | In this paper, we consider a very general model for exploration-exploitation
tradeoff which allows arbitrary concave rewards and convex constraints on the
decisions across time, in addition to the customary limitation on the time
horizon. This model subsumes the classic multi-armed bandit (MAB) model, and
the Bandits with Knapsacks (BwK) model of Badanidiyuru et al.[2013]. We also
consider an extension of this model to allow linear contexts, similar to the
linear contextual extension of the MAB model. We demonstrate that a natural and
simple extension of the UCB family of algorithms for MAB provides a polynomial
time algorithm that has near-optimal regret guarantees for this substantially
more general model, and matches the bounds provided by Badanidiyuru et
al. [2013] for the special case of BwK, which is quite surprising. We also
provide computationally more efficient algorithms by establishing interesting
connections between this problem and other well-studied problems and algorithms
such as the Blackwell approachability problem, online convex optimization, and
the Frank-Wolfe technique for convex optimization. We give examples of several
concrete applications, where this more general model of bandits allows for
richer and/or more efficient formulations of the problem.
| Shipra Agrawal and Nikhil R. Devanur | null | 1402.5758 | null | null |
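To make the reward-versus-resource tradeoff concrete, here is a heavily simplified single-resource sketch: a UCB-style heuristic that plays the arm with the best optimistic reward-to-cost ratio until the budget runs out. This is an illustrative stand-in, not the paper's algorithm for general concave rewards and convex constraints; all means, costs, and the budget are made up.

```python
# Single-knapsack bandit heuristic: optimistic reward per pessimistic cost.
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.2, 0.5, 0.8])     # Bernoulli reward means (made up)
true_cost = np.array([0.3, 0.4, 0.9])       # deterministic resource use per pull
budget = 500.0

counts = np.ones(3)
reward_sum = np.ones(3)                      # optimistic initialisation
cost_sum = np.full(3, 1e-3)

t, spent, total_reward = 1, 0.0, 0.0
while spent < budget:
    t += 1
    ucb_reward = reward_sum / counts + np.sqrt(2.0 * np.log(t) / counts)
    lcb_cost = np.maximum(cost_sum / counts - np.sqrt(2.0 * np.log(t) / counts), 1e-3)
    arm = int(np.argmax(ucb_reward / lcb_cost))   # optimistic reward per unit resource
    r = float(rng.random() < true_reward[arm])
    counts[arm] += 1
    reward_sum[arm] += r
    cost_sum[arm] += true_cost[arm]
    spent += true_cost[arm]
    total_reward += r

print("pulls per arm:", counts.astype(int), "| total reward:", total_reward)
```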
No more meta-parameter tuning in unsupervised sparse feature learning | cs.LG cs.CV | We propose a meta-parameter-free, off-the-shelf, simple and fast unsupervised
feature learning algorithm, which exploits a new way of optimizing for
sparsity. Experiments on STL-10 show that the method presents state-of-the-art
performance and provides discriminative features that generalize well.
| Adriana Romero, Petia Radeva and Carlo Gatta | null | 1402.5766 | null | null |
Sparse phase retrieval via group-sparse optimization | cs.IT cs.LG math.IT | This paper deals with sparse phase retrieval, i.e., the problem of estimating
a vector from quadratic measurements under the assumption that few components
are nonzero. In particular, we consider the problem of finding the sparsest
vector consistent with the measurements and reformulate it as a group-sparse
optimization problem with linear constraints. Then, we analyze the convex
relaxation of the latter based on the minimization of a block l1-norm and show
various exact recovery and stability results in the real and complex cases.
Invariance to circular shifts and reflections is also discussed for real
vectors measured via complex matrices.
| Fabien Lauer (LORIA), Henrik Ohlsson | null | 1402.5803 | null | null |
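As one hedged reading of the group-sparse reformulation, the cvxpy sketch below lifts the problem to a matrix variable X resembling x xᵀ, in which the quadratic measurements become linear constraints and a zero entry of x zeroes a whole row of X, so a block l1-norm (sum of row l2-norms) promotes sparsity. The lifting, the PSD constraint, and the support thresholding are illustrative assumptions rather than the paper's exact formulation.

```python
# Lifted block-l1 relaxation for sparse phase retrieval (illustrative sketch).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 10, 30
x_true = np.zeros(n); x_true[[2, 7]] = [1.0, -2.0]       # 2-sparse signal
A = rng.standard_normal((m, n))
b = (A @ x_true) ** 2                                     # phaseless (squared) measurements

X = cp.Variable((n, n), PSD=True)                         # lifted variable, X ~ x x^T
constraints = [cp.sum(cp.multiply(np.outer(A[i], A[i]), X)) == b[i] for i in range(m)]
cp.Problem(cp.Minimize(cp.mixed_norm(X, 2, 1)), constraints).solve()

row_norms = np.linalg.norm(X.value, axis=1)
support = np.where(row_norms > 1e-3 * row_norms.max())[0]
print("estimated support:", support, "| true support:", np.flatnonzero(x_true))
```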
Avoiding pathologies in very deep networks | stat.ML cs.LG | Choosing appropriate architectures and regularization strategies for deep
networks is crucial to good predictive performance. To shed light on this
problem, we analyze the analogous problem of constructing useful priors on
compositions of functions. Specifically, we study the deep Gaussian process, a
type of infinitely-wide, deep neural network. We show that in standard
architectures, the representational capacity of the network tends to capture
fewer degrees of freedom as the number of layers increases, retaining only a
single degree of freedom in the limit. We propose an alternate network
architecture which does not suffer from this pathology. We also examine deep
covariance functions, obtained by composing infinitely many feature transforms.
Lastly, we characterize the class of models obtained by performing dropout on
Gaussian processes.
| David Duvenaud, Oren Rippel, Ryan P. Adams, Zoubin Ghahramani | null | 1402.5836 | null | null |
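A small numerical illustration of the pathology described above, under assumed kernel and depth choices: repeatedly composing unit-variance RBF Gaussian process draws tends to concentrate the composed function's variation into a few large jumps as depth grows, one crude way of seeing the loss of effective degrees of freedom.

```python
# Composing 1D GP draws and tracking how variation concentrates with depth.
import numpy as np

rng = np.random.default_rng(0)

def rbf_gram(z, lengthscale=1.0, jitter=1e-6):
    d2 = (z[:, None] - z[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2) + jitter * np.eye(len(z))

x = np.linspace(-3.0, 3.0, 200)
h = x.copy()
for layer in range(1, 6):
    # Compose: draw a fresh GP sample and evaluate it at the current hidden values.
    h = rng.multivariate_normal(np.zeros(len(h)), rbf_gram(h))
    increments = np.abs(np.diff(h))
    top_share = np.sort(increments)[-10:].sum() / increments.sum()
    print(f"layer {layer}: share of total variation in largest 5% of increments = {top_share:.2f}")
```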