title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Convex Optimization Learning of Faithful Euclidean Distance
Representations in Nonlinear Dimensionality Reduction | stat.ML cs.LG math.OC | Classical multidimensional scaling only works well when the noisy distances
observed in a high dimensional space can be faithfully represented by Euclidean
distances in a low dimensional space. Advanced models such as Maximum Variance
Unfolding (MVU) and Minimum Volume Embedding (MVE) use Semi-Definite
Programming (SDP) to reconstruct such faithful representations. While these SDP
models can produce high-quality configurations numerically, they suffer from
two major drawbacks. One is that there are no theoretically guaranteed bounds
on the quality of the configuration. The other is that they become
computationally slow once the number of data points grows beyond a moderate
size. In this paper, we
propose a convex optimization model of Euclidean distance matrices. We
establish a non-asymptotic error bound for the random graph model with
sub-Gaussian noise, and prove that our model produces a matrix estimator of
high accuracy when the order of the uniform sample size is roughly the number
of degrees of freedom of a low-rank matrix, up to a logarithmic factor. Our results
partially explain why MVU and MVE often work well. Moreover, we develop a fast
inexact accelerated proximal gradient method. Numerical experiments show that
the model can produce high-quality configurations on large datasets that the
SDP approach would struggle to handle.
| Chao Ding and Hou-Duo Qi | null | 1406.5736 | null | null |
Divide-and-Conquer Learning by Anchoring a Conical Hull | stat.ML cs.LG | We reduce a broad class of machine learning problems, usually addressed by EM
or sampling, to the problem of finding the $k$ extremal rays spanning the
conical hull of a data point set. These $k$ "anchors" lead to a global solution
and a more interpretable model that can even outperform EM and sampling on
generalization error. To find the $k$ anchors, we propose a novel
divide-and-conquer learning scheme "DCA" that distributes the problem to
$\mathcal O(k\log k)$ same-type sub-problems on different low-D random
hyperplanes, each of which can be solved by any solver. For the 2D sub-problem, we
present a non-iterative solver that only needs to compute an array of cosine
values and its max/min entries. DCA also provides a faster subroutine for other
methods to check whether a point is covered in a conical hull, which improves
algorithm design in multiple dimensions and brings significant speedup to
learning. We apply our method to GMM, HMM, LDA, NMF and subspace clustering,
and show its competitive performance and scalability relative to other methods
on rich datasets.
| Tianyi Zhou and Jeff Bilmes and Carlos Guestrin | null | 1406.5752 | null | null |
Multi-utility Learning: Structured-output Learning with Multiple
Annotation-specific Loss Functions | cs.CV cs.LG | Structured-output learning is a challenging problem; particularly so because
of the difficulty in obtaining large datasets of fully labelled instances for
training. In this paper we try to overcome this difficulty by presenting a
multi-utility learning framework for structured prediction that can learn from
training instances with different forms of supervision. We propose a unified
technique for inferring the loss functions most suitable for quantifying the
consistency of solutions with the given weak annotation. We demonstrate the
effectiveness of our framework on the challenging semantic image segmentation
problem for which a wide variety of annotations can be used. For instance, the
popular training datasets for semantic segmentation are composed of images with
hard-to-generate full pixel labellings, as well as images with easy-to-obtain
weak annotations, such as bounding boxes around objects, or image-level labels
that specify which object categories are present in an image. Experimental
evaluation shows that the use of annotation-specific loss functions
dramatically improves segmentation accuracy compared to the baseline system
where only one type of weak annotation is used.
| Roman Shapovalov, Dmitry Vetrov, Anton Osokin, Pushmeet Kohli | null | 1406.5910 | null | null |
Reinforcement and Imitation Learning via Interactive No-Regret Learning | cs.LG stat.ML | Recent work has demonstrated that problems in which a learner's predictions
influence the input distribution it is tested on, notably imitation learning
and structured prediction, can be naturally addressed by an interactive
approach and analyzed using no-regret online learning. These approaches to
imitation learning, however, neither require nor benefit from information about
the cost of actions. We extend existing results in two directions: first, we
develop an interactive imitation learning approach that leverages cost
information; second, we extend the technique to address reinforcement learning.
The results provide theoretical support to the commonly observed successes of
online approximate policy iteration. Our approach suggests a broad new family
of algorithms and provides a unifying view of existing techniques for imitation
and reinforcement learning.
| Stephane Ross, J. Andrew Bagnell | null | 1406.5979 | null | null |
Stationary Mixing Bandits | cs.LG | We study the bandit problem where arms are associated with stationary
phi-mixing processes and where rewards are therefore dependent: the question
that arises from this setting is that of recovering some independence by
ignoring the value of some rewards. As we shall see, the bandit problem we
tackle requires us to address the exploration/exploitation/independence
trade-off. To do so, we provide a UCB strategy together with a general regret
analysis for the case where the size of the independence blocks (the ignored
rewards) is fixed. We then go a step further by providing an algorithm that is
able to compute the size of the independence blocks from the data. Finally, we
give an analysis of our bandit problem in the restless case, i.e., in the
situation where the time counters for all mixing processes evolve
simultaneously.
| Julien Audiffren (CMLA), Liva Ralaivola (LIF) | null | 1406.6020 | null | null |
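To make the independence-block idea concrete, the following Python sketch runs plain UCB1 but lets only the first reward of each block of pulls per arm enter the statistics. The function name, the keep-first-of-block rule, and the UCB1 index are illustrative assumptions, not the paper's exact strategy or analysis.

```python
import math
import random

def ucb_with_blocks(arms, T, block_size):
    """UCB1 where, per arm, only the first reward of each block of
    `block_size` pulls enters the statistics; the remaining rewards are
    ignored to weaken temporal dependence between the kept samples.
    `arms` is a list of callables returning a reward in [0, 1]."""
    k = len(arms)
    kept_sum = [0.0] * k   # sum of kept (block-initial) rewards per arm
    kept_n = [0] * k       # number of kept rewards per arm
    pulls = [0] * k        # total pulls per arm
    for t in range(1, T + 1):
        if t <= k:
            i = t - 1      # initialization: pull each arm once
        else:
            i = max(range(k), key=lambda a: kept_sum[a] / kept_n[a]
                    + math.sqrt(2.0 * math.log(t) / kept_n[a]))
        r = arms[i]()
        if pulls[i] % block_size == 0:   # first pull of a new block: keep it
            kept_sum[i] += r
            kept_n[i] += 1
        pulls[i] += 1
    return kept_sum, kept_n, pulls

# toy usage with i.i.d. Bernoulli arms standing in for mixing processes
arms = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
print(ucb_with_blocks(arms, T=10_000, block_size=5)[2])
```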
From Black-Scholes to Online Learning: Dynamic Hedging under Adversarial
Environments | cs.DS cs.LG q-fin.PR | We consider a non-stochastic online learning approach to pricing financial
options by modeling the market dynamics as a repeated game between nature (the
adversary) and the investor. We demonstrate that such a framework yields a
structure analogous to that of the Black-Scholes model, the widely popular
option pricing model in stochastic finance, for both European and American options
with convex payoffs. In the case of non-convex options, we construct
approximate pricing algorithms, and demonstrate that their efficiency can be
analyzed through the introduction of an artificial probability measure, in
parallel to the so-called risk-neutral measure in the finance literature, even
though our framework is completely adversarial. Continuous-time convergence
results and extensions to incorporate price jumps are also presented.
| Henry Lam and Zhenming Liu | null | 1406.6084 | null | null |
Improved Frame Level Features and SVM Supervectors Approach for the
Recognition of Emotional States from Speech: Application to categorical and
dimensional states | cs.CL cs.LG | The purpose of a speech emotion recognition system is to classify a speaker's
utterances into different emotional states such as disgust, boredom, sadness,
neutral and happiness. Speech features that are commonly used in speech emotion
recognition rely on global utterance-level prosodic features. In our work, we
evaluate the impact of frame-level feature extraction. The speech samples are
from the Berlin emotional database, and the features extracted from these
utterances are energy, different variants of mel-frequency cepstral
coefficients, and velocity and acceleration features.
| Imen Trabelsi, Dorra Ben Ayed, Noureddine Ellouze | 10.5815/ijigsp.2013.09.02 | 1406.6101 | null | null |
Mining Recurrent Concepts in Data Streams using the Discrete Fourier
Transform | cs.LG | In this research we address the problem of capturing recurring concepts in a
data stream environment. Recurrence capture enables the re-use of previously
learned classifiers without the need for re-learning, while providing better
accuracy during the concept recurrence interval. We capture concepts by
applying the Discrete Fourier Transform (DFT) to Decision Tree classifiers to
obtain highly compressed versions of the trees at concept drift points in the
stream and store such trees in a repository for future use. Our empirical
results on real world and synthetic data exhibiting varying degrees of
recurrence show that the Fourier compressed trees are more robust to noise and
are able to capture recurring concepts with higher precision than a meta
learning approach that chooses to re-use classifiers in their originally
occurring form.
| Sakthithasan Sripirakas and Russel Pears | null | 1406.6114 | null | null |
Generalized Mixability via Entropic Duality | cs.LG | Mixability is a property of a loss which characterizes when fast convergence
is possible in the game of prediction with expert advice. We show that a key
property of mixability generalizes, and the exp and log operations present in
the usual theory are not as special as one might have thought. In doing this we
introduce a more general notion of $\Phi$-mixability where $\Phi$ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural algorithm (the
minimizer of a regret bound) which, analogous to the classical aggregating
algorithm, is guaranteed a constant regret when used with $\Phi$-mixable
losses. We characterize precisely which $\Phi$ have $\Phi$-mixable losses and
put forward a number of conjectures about the optimality and relationships
between different choices of entropy.
| Mark D. Reid and Rafael M. Frongillo and Robert C. Williamson and
Nishant Mehta | null | 1406.6130 | null | null |
Fast, Robust and Non-convex Subspace Recovery | cs.LG cs.CV stat.AP stat.ML | This work presents a fast and non-convex algorithm for robust subspace
recovery. The data sets considered include inliers drawn around a
low-dimensional subspace of a higher-dimensional ambient space, and a possibly
large portion of outliers that do not lie near this subspace. The proposed
algorithm, which we refer to as Fast Median Subspace (FMS), is designed to
robustly determine the underlying subspace of such data sets, while having
lower computational complexity than existing methods. We prove convergence of
the FMS iterates to a stationary point. Further, under a special model of data,
FMS converges to a point near the global minimum with overwhelming
probability. Under this model, we show that the iteration complexity is
globally bounded and locally $r$-linear. The latter theorem holds for any fixed
fraction of outliers (less than 1) and any fixed positive distance between the
limit point and the global minimum. Numerical experiments on synthetic and real
data demonstrate its competitive speed and accuracy.
| Gilad Lerman and Tyler Maunu | 10.1093/imaiai/iax012 | 1406.6145 | null | null |
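The reweighting idea behind such robust recovery schemes can be sketched as iteratively reweighted PCA: points far from the current subspace are downweighted before the subspace is re-estimated. The sketch below is a simplified stand-in under that reading, not the authors' exact FMS procedure.

```python
import numpy as np

def reweighted_pca(X, d, n_iter=50, eps=1e-10):
    """Iteratively reweighted PCA in the spirit of median-based subspace
    recovery: points far from the current subspace are downweighted
    (weight ~ 1/distance) before the subspace is re-estimated.
    X: (n, D) data; returns an orthonormal (D, d) basis."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[:d].T                              # initialize with plain PCA
    for _ in range(n_iter):
        resid = X - X @ V @ V.T               # residual w.r.t. current subspace
        dist = np.linalg.norm(resid, axis=1)
        w = 1.0 / np.maximum(dist, eps)       # downweight far-away outliers
        C = (X * w[:, None]).T @ X            # weighted second-moment matrix
        _, eigvecs = np.linalg.eigh(C)
        V = eigvecs[:, -d:]                   # top-d eigenvectors
    return V
```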
Composite Likelihood Estimation for Restricted Boltzmann machines | cs.LG | Learning the parameters of graphical models via maximum likelihood
estimation is generally hard and requires approximation. Maximum composite
likelihood estimation is a statistical approximation of maximum likelihood
estimation, and a higher-order generalization of maximum pseudo-likelihood
estimation. In this paper, we propose a composite likelihood method and
investigate its properties. Furthermore, we apply our composite likelihood
method to restricted Boltzmann machines.
| Muneki Yasuda, Shun Kataoka, Yuji Waizumi, Kazuyuki Tanaka | null | 1406.6176 | null | null |
Combining predictions from linear models when training and test inputs
differ | stat.ME cs.LG stat.ML | Methods for combining predictions from different models in a supervised
learning setting must somehow estimate/predict the quality of a model's
predictions at unknown future inputs. Many of these methods (often implicitly)
make the assumption that the test inputs are identical to the training inputs,
which is seldom reasonable. By failing to take into account that prediction
will generally be harder for test inputs that did not occur in the training
set, these methods tend to select overly complex models. Based on a novel,
unbiased expression for KL divergence, we propose XAIC and its special case
FAIC as versions of AIC intended for prediction that use different degrees of
knowledge of the test inputs. Both methods substantially differ from and may
outperform all the known versions of AIC even when the training and test inputs
are iid, and are especially useful for deterministic inputs and under covariate
shift. Our experiments on linear models suggest that if the test and training
inputs differ substantially, then XAIC and FAIC predictively outperform AIC,
BIC and several other methods including Bayesian model averaging.
| Thijs van Ommen | null | 1406.6200 | null | null |
Recurrent Models of Visual Attention | cs.LG cs.CV stat.ML | Applying convolutional neural networks to large images is computationally
expensive because the amount of computation scales linearly with the number of
image pixels. We present a novel recurrent neural network model that is capable
of extracting information from an image or video by adaptively selecting a
sequence of regions or locations and only processing the selected regions at
high resolution. Like convolutional neural networks, the proposed model has a
degree of translation invariance built-in, but the amount of computation it
performs can be controlled independently of the input image size. While the
model is non-differentiable, it can be trained using reinforcement learning
methods to learn task-specific policies. We evaluate our model on several image
classification tasks, where it significantly outperforms a convolutional neural
network baseline on cluttered images, and on a dynamic visual control problem,
where it learns to track a simple object without an explicit training signal
for doing so.
| Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu | null | 1406.6247 | null | null |
Scalable Topical Phrase Mining from Text Corpora | cs.CL cs.IR cs.LG | While most topic modeling algorithms model text corpora with unigrams, human
interpretation often relies on inherent grouping of terms into phrases. As
such, we consider the problem of discovering topical phrases of mixed lengths.
Existing work either applies post-processing to the inference results of
unigram-based topic models, or utilizes complex n-gram-discovery topic models.
These methods generally produce low-quality topical phrases or suffer from poor
scalability on even moderately-sized datasets. We propose a different approach
that is both computationally efficient and effective. Our solution combines a
novel phrase mining framework to segment a document into single and multi-word
phrases, and a new topic model that operates on the induced document partition.
Our approach discovers high quality topical phrases with negligible extra cost
to the bag-of-words topic model in a variety of datasets including research
publication titles, abstracts, reviews, and news articles.
| Ahmed El-Kishky, Yanglei Song, Chi Wang, Clare Voss, Jiawei Han | null | 1406.6312 | null | null |
Further heuristics for $k$-means: The merge-and-split heuristic and the
$(k,l)$-means | cs.LG cs.CV cs.IR stat.ML | Finding the optimal $k$-means clustering is NP-hard in general and many
heuristics have been designed for minimizing monotonically the $k$-means
objective. We first show how to extend Lloyd's batched relocation heuristic and
Hartigan's single-point relocation heuristic to take into account empty-cluster
and single-point cluster events, respectively. These events tend to occur
increasingly often as $k$ or $d$ increases, or when performing several
restarts. First, we show that those special events are a blessing because they
allow us to partially re-seed some cluster centers while further minimizing the
$k$-means objective function. Second, we describe a novel heuristic,
merge-and-split $k$-means, that consists in merging two clusters and splitting
this merged cluster again with two new centers provided it improves the
$k$-means objective. This novel heuristic can improve Hartigan's $k$-means when
it has converged to a local minimum. We show empirically that this
merge-and-split $k$-means improves over Hartigan's heuristic, which is the
de facto method of choice. Finally, we propose the $(k,l)$-means
objective that generalizes the $k$-means objective by associating the data
points to their $l$ closest cluster centers, and show how to either directly
convert or iteratively relax the $(k,l)$-means into a $k$-means in order to
reach better local minima.
| Frank Nielsen and Richard Nock | null | 1406.6314 | null | null |
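A single merge-and-split move on top of the standard k-means objective can be sketched as follows; the choice of which clusters to merge and split, and the accept-if-improved test, are plausible simplifications rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_objective(X, centers):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

def merge_and_split_step(X, centers):
    """One merge-and-split move: merge the two closest centers into their
    midpoint, split the cluster with the largest inertia via a local
    2-means, and keep the new centers only if the objective improves."""
    k = len(centers)
    labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    i, j = min(((a, b) for a in range(k) for b in range(a + 1, k)),
               key=lambda p: np.sum((centers[p[0]] - centers[p[1]]) ** 2))
    inertia = [((X[labels == c] - centers[c]) ** 2).sum() for c in range(k)]
    big = int(np.argmax(inertia))
    if big in (i, j) or (labels == big).sum() < 2:
        return centers                        # degenerate move; skip it
    cand = [centers[c] for c in range(k) if c not in (i, j, big)]
    cand.append((centers[i] + centers[j]) / 2)            # merged center
    sub = KMeans(n_clusters=2, n_init=5).fit(X[labels == big])
    cand.extend(sub.cluster_centers_)                      # split centers
    cand = np.asarray(cand)
    return cand if kmeans_objective(X, cand) < kmeans_objective(X, centers) else centers
```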
Incremental Clustering: The Case for Extra Clusters | cs.LG | The explosion in the amount of data available for analysis often necessitates
a transition from batch to incremental clustering methods, which process one
element at a time and typically store only a small subset of the data. In this
paper, we initiate the formal analysis of incremental clustering methods
focusing on the types of cluster structure that they are able to detect. We
find that the incremental setting is strictly weaker than the batch model,
proving that a fundamental class of cluster structures that can readily be
detected in the batch setting is impossible to identify using any incremental
method. Furthermore, we show how the limitations of incremental clustering can
be overcome by allowing additional clusters.
| Margareta Ackerman and Sanjoy Dasgupta | null | 1406.6398 | null | null |
On the Convergence Rate of Decomposable Submodular Function Minimization | math.OC cs.DM cs.DS cs.LG cs.NA | Submodular functions describe a variety of discrete problems in machine
learning, signal processing, and computer vision. However, minimizing
submodular functions poses a number of algorithmic challenges. Recent work
introduced an easy-to-use, parallelizable algorithm for minimizing submodular
functions that decompose as the sum of "simple" submodular functions.
Empirically, this algorithm performs extremely well, but no theoretical
analysis was given. In this paper, we show that the algorithm converges
linearly, and we provide upper and lower bounds on the rate of convergence. Our
proof relies on the geometry of submodular polyhedra and draws on results from
spectral graph theory.
| Robert Nishihara, Stefanie Jegelka, Michael I. Jordan | null | 1406.6474 | null | null |
Weakly-supervised Discovery of Visual Pattern Configurations | cs.CV cs.LG | The increasing prominence of weakly labeled data nurtures a growing demand
for object detection methods that can cope with minimal supervision. We propose
an approach that automatically identifies discriminative configurations of
visual patterns that are characteristic of a given object class. We formulate
the problem as a constrained submodular optimization problem and demonstrate
the benefits of the discovered configurations in remedying mislocalizations and
finding informative positive and negative training examples. Together, these
lead to state-of-the-art weakly-supervised detection results on the challenging
PASCAL VOC dataset.
| Hyun Oh Song, Yong Jae Lee, Stefanie Jegelka, Trevor Darrell | null | 1406.6507 | null | null |
Support vector machine classification of dimensionally reduced
structural MRI images for dementia | cs.CV cs.LG physics.med-ph | We classify very-mild to moderate dementia in patients (CDR ranging from 0 to
2) using a support vector machine classifier acting on a dimensionally reduced
feature set derived from MRI brain scans of the 416 subjects available in the
OASIS-Brains dataset. We use image segmentation and principal component
analysis to reduce the dimensionality of the data. Our resulting feature set
contains 11 features for each subject. Performance of the classifiers is
evaluated using 10-fold cross-validation. Using linear and (Gaussian) kernels,
we obtain a training classification accuracy of 86.4% (90.1%), test accuracy of
85.0% (85.7%), test precision of 68.7% (68.5%), test recall of 68.0% (74.0%),
and test Matthews correlation coefficient of 0.594 (0.616).
| V. A. Miller, S. Erlien, J. Piersol | null | 1406.6568 | null | null |
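Segmentation aside, the dimensionality-reduction-plus-SVM pipeline can be sketched with scikit-learn; the random arrays standing in for the MRI-derived features and labels are placeholders, not the paper's data or exact feature set.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical placeholders: X holds one flattened feature vector per
# subject and y the dementia labels; the paper instead derives 11
# features per subject from segmented OASIS MRI scans.
rng = np.random.default_rng(0)
X = rng.normal(size=(416, 500))
y = rng.integers(0, 2, size=416)

for kernel in ("linear", "rbf"):   # "rbf" plays the role of the Gaussian kernel
    clf = make_pipeline(StandardScaler(), PCA(n_components=11), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV as in the paper
    print(kernel, round(scores.mean(), 3))
```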
A scaled gradient projection method for Bayesian learning in dynamical
systems | math.NA cs.LG stat.ML | A crucial task in system identification problems is the selection of the most
appropriate model class; it is classically addressed by resorting to
cross-validation or by using asymptotic arguments. As recently suggested in the
literature, this can be addressed in a Bayesian framework, where model
complexity is regulated by few hyperparameters, which can be estimated via
marginal likelihood maximization. It is thus of primary importance to design
effective optimization methods to solve the corresponding optimization problem.
If the unknown impulse response is modeled as a Gaussian process with a
suitable kernel, the maximization of the marginal likelihood leads to a
challenging nonconvex optimization problem, which requires a stable and
effective solution strategy. In this paper we address this problem by means of
a scaled gradient projection algorithm, in which the scaling matrix and the
steplength parameter play a crucial role in providing a meaningful solution in a
computational time comparable with second order methods. In particular, we
propose both a generalization of the split gradient approach to design the
scaling matrix in the presence of box constraints, and an effective
implementation of the gradient and objective function. The extensive numerical
experiments carried out on several test problems show that our method is very
effective, providing within a few tenths of a second solutions whose accuracy
is comparable with state-of-the-art approaches. Moreover, the flexibility
of the proposed strategy makes it easily adaptable to a wider range of problems
arising in different areas of machine learning, signal processing and system
identification.
| Silvia Bonettini and Alessandro Chiuso and Marco Prato | 10.1137/140973529 | 1406.6603 | null | null |
When is it Better to Compare than to Score? | stat.ML cs.LG | When eliciting judgements from humans for an unknown quantity, one often has
the choice of making direct-scoring (cardinal) or comparative (ordinal)
measurements. In this paper we study the relative merits of either choice,
providing empirical and theoretical guidelines for the selection of a
measurement scheme. We provide empirical evidence based on experiments on
Amazon Mechanical Turk that in a variety of tasks, (pairwise-comparative)
ordinal measurements have lower per-sample noise and are typically faster to
elicit than cardinal ones. Ordinal measurements however typically provide less
information. We then consider the popular Thurstone and Bradley-Terry-Luce
(BTL) models for ordinal measurements and characterize the minimax error rates
for estimating the unknown quantity. We compare these minimax error rates to
those under cardinal measurement models and quantify for what noise levels
ordinal measurements are better. Finally, we revisit the data collected from
our experiments and show that fitting these models confirms this prediction:
for tasks where the noise in ordinal measurements is sufficiently low, the
ordinal approach results in smaller estimation errors.
| Nihar B. Shah, Sivaraman Balakrishnan, Joseph Bradley, Abhay Parekh,
Kannan Ramchandran, Martin Wainwright | null | 1406.6618 | null | null |
Active Learning and Best-Response Dynamics | cs.LG cs.GT | We examine an important setting for engineered systems in which low-power
distributed sensors are each making highly noisy measurements of some unknown
target function. A center wants to accurately learn this function by querying a
small number of sensors, which ordinarily would be impossible due to the high
noise rate. The question we address is whether local communication among
sensors, together with natural best-response dynamics in an
appropriately-defined game, can denoise the system without destroying the true
signal and allow the center to succeed from only a small number of active
queries. By using techniques from game theory and empirical processes, we prove
positive (and negative) results on the denoising power of several natural
dynamics. We then show experimentally that when combined with recent agnostic
active learning algorithms, this process can achieve low error from very few
queries, performing substantially better than active or passive learning
without these denoising dynamics as well as passive learning with denoising.
| Maria-Florina Balcan, Chris Berlind, Avrim Blum, Emma Cohen, Kaushik
Patnaik, and Le Song | null | 1406.6633 | null | null |
Causality Networks | cs.LG cs.IT math.IT q-fin.ST stat.ML | While correlation measures are used to discern statistical relationships
between observed variables in almost all branches of data-driven scientific
inquiry, what we are really interested in is the existence of causal
dependence. Designing an efficient causality test, that may be carried out in
the absence of restrictive pre-suppositions on the underlying dynamical
structure of the data at hand, is non-trivial. Nevertheless, the ability to
computationally infer statistical prima facie evidence of causal dependence may
yield a far more discriminative tool for data analysis compared to the
calculation of simple correlations. In the present work, we present a new
non-parametric test of Granger causality for quantized or symbolic data streams
generated by ergodic stationary sources. In contrast to state-of-the-art binary
tests, our approach makes precise, and computes, the degree of causal dependence
between data streams, without making any restrictive assumptions, linearity or
otherwise. Additionally, without any a priori imposition of specific dynamical
structure, we infer explicit generative models of causal cross-dependence,
which may be then used for prediction. These explicit models are represented as
generalized probabilistic automata, referred to as crossed automata, and are shown
to be sufficient to capture a fairly general class of causal dependence. The
proposed algorithms are computationally efficient in the PAC sense; i.e., we
find good models of cross-dependence with high probability, with polynomial
run-times and sample complexities. The theoretical results are applied to
weekly search-frequency data from Google Trends API for a chosen set of
socially "charged" keywords. The causality network inferred from this dataset
reveals, quite expectedly, the causal importance of certain keywords. It is
also illustrated that correlation analysis fails to gather such insight.
| Ishanu Chattopadhyay | null | 1406.6651 | null | null |
Mass-Univariate Hypothesis Testing on MEEG Data using Cross-Validation | stat.ML cs.LG math.ST stat.TH | Recent advances in statistical theory, together with advances in the
computational power of computers, provide alternative methods for
mass-univariate hypothesis testing, in which a large number of univariate tests
can be properly used to compare MEEG data at a large number of time-frequency
points and scalp locations. One of the major problematic aspects of this kind
of mass-univariate analysis is the large number of hypothesis tests performed.
Hence, procedures that remove or alleviate the increased probability of false
discoveries are crucial for this type of analysis. Here, I propose a new method
for mass-univariate analysis of MEEG data based on a cross-validation scheme.
In this method, I suggest a hierarchical classification procedure under k-fold
cross-validation to detect which sensors, at which time-bins and
frequency-bins, contribute to discriminating between two different stimuli or
tasks. To achieve this goal, a new feature extraction method based on the
discrete cosine transform (DCT) is employed to take maximum advantage of all
three data dimensions. Employing cross-validation and a hierarchical
architecture alongside the DCT feature space makes this method more reliable
and, at the same time, sensitive enough to detect narrow effects in brain
activity.
| Seyed Mostafa Kia | null | 1406.6720 | null | null |
Online learning in MDPs with side information | cs.LG stat.ML | We study online learning of finite Markov decision process (MDP) problems
when a side information vector is available. The problem is motivated by
applications such as clinical trials, recommendation systems, etc. Such
applications have an episodic structure, where each episode corresponds to a
patient/customer. Our objective is to compete with the optimal dynamic policy
that can take side information into account.
We propose a computationally efficient algorithm and show that its regret is
at most $O(\sqrt{T})$, where $T$ is the number of rounds. To the best of our
knowledge, this is the first regret bound for this setting.
| Yasin Abbasi-Yadkori and Gergely Neu | null | 1406.6812 | null | null |
Discriminative Unsupervised Feature Learning with Exemplar Convolutional
Neural Networks | cs.LG cs.CV cs.NE | Deep convolutional networks have proven to be very successful in learning
task specific features that allow for unprecedented performance on various
computer vision tasks. Training of such networks follows mostly the supervised
learning paradigm, where sufficiently many input-output pairs are required for
training. Acquisition of large training sets is one of the key challenges when
approaching a new task. In this paper, we aim for generic feature learning and
present an approach for training a convolutional network using only unlabeled
data. To this end, we train the network to discriminate between a set of
surrogate classes. Each surrogate class is formed by applying a variety of
transformations to a randomly sampled 'seed' image patch. In contrast to
supervised network training, the resulting feature representation is not class
specific. It rather provides robustness to the transformations that have been
applied during training. This generic feature representation allows for
classification results that outperform the state of the art for unsupervised
learning on several popular datasets (STL-10, CIFAR-10, Caltech-101,
Caltech-256). While such generic features cannot compete with class specific
features from supervised training on a classification task, we show that they
are advantageous on geometric matching problems, where they also outperform the
SIFT descriptor.
| Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin
Riedmiller and Thomas Brox | null | 1406.6909 | null | null |
A Concise Information-Theoretic Derivation of the Baum-Welch algorithm | cs.IT cs.LG math.IT | We derive the Baum-Welch algorithm for hidden Markov models (HMMs) through an
information-theoretic approach using cross-entropy instead of the Lagrange
multiplier approach that is universal in the machine learning literature. The
proposed approach provides a more concise derivation of the Baum-Welch method
and naturally generalizes to multiple observations.
| Alireza Nejati, Charles Unsworth | null | 1406.7002 | null | null |
An Incentive Compatible Multi-Armed-Bandit Crowdsourcing Mechanism with
Quality Assurance | cs.GT cs.LG | Consider a requester who wishes to crowdsource a series of identical binary
labeling tasks to a pool of workers so as to achieve an assured accuracy for
each task, in a cost optimal way. The workers are heterogeneous with unknown
but fixed qualities and their costs are private. The problem is to select for
each task an optimal subset of workers so that the outcome obtained from the
selected workers guarantees a target accuracy level. The problem is a
challenging one even in a non-strategic setting, since the accuracy of the
aggregated label depends on the unknown qualities. We develop a novel multi-armed
bandit (MAB) mechanism for solving this problem. First, we propose a framework,
Assured Accuracy Bandit (AAB), which leads to an MAB algorithm, Constrained
Confidence Bound for a Non Strategic setting (CCB-NS). We derive an upper bound
on the number of time steps the algorithm chooses a sub-optimal set that
depends on the target accuracy level and true qualities. A more challenging
situation arises when the requester not only has to learn the qualities of the
workers but also elicit their true costs. We modify the CCB-NS algorithm to
obtain an adaptive exploration-separated algorithm, which we call Constrained
Confidence Bound for a Strategic setting (CCB-S). The CCB-S algorithm
produces an ex-post monotone allocation rule and thus can be transformed into
an ex-post incentive compatible and ex-post individually rational mechanism
that learns the qualities of the workers and guarantees a given target accuracy
level in a cost optimal way. We provide a lower bound on the number of times
any algorithm should select a sub-optimal set and we see that the lower bound
matches our upper bound up to a constant factor. We provide insights on the
practical implementation of this framework through an illustrative example and
we show the efficacy of our algorithms through simulations.
| Shweta Jain, Sujit Gujar, Satyanath Bhat, Onno Zoeter, Y. Narahari | null | 1406.7157 | null | null |
Reconstructing subclonal composition and evolution from whole genome
sequencing of tumors | q-bio.PE cs.LG stat.ML | Tumors often contain multiple subpopulations of cancerous cells defined by
distinct somatic mutations. We describe a new method, PhyloWGS, that can be
applied to WGS data from one or more tumor samples to reconstruct complete
genotypes of these subpopulations based on variant allele frequencies (VAFs) of
point mutations and population frequencies of structural variations. We
introduce a principled phylogenetic correction for VAFs in loci affected by copy
number alterations and we show that this correction greatly improves subclonal
reconstruction compared to existing methods.
| Amit G. Deshwar, Shankar Vembu, Christina K. Yung, Gun Ho Jang,
Lincoln Stein, Quaid Morris | null | 1406.7250 | null | null |
On the Use of Different Feature Extraction Methods for Linear and Non
Linear kernels | cs.CL cs.LG | The speech feature extraction has been a key focus in robust speech
recognition research; it significantly affects the recognition performance. In
this paper, we first study a set of different features extraction methods such
as linear predictive coding (LPC), mel frequency cepstral coefficient (MFCC)
and perceptual linear prediction (PLP) with several features normalization
techniques like rasta filtering and cepstral mean subtraction (CMS). Based on
this, a comparative evaluation of these features is performed on the task of
text independent speaker identification using a combination between gaussian
mixture models (GMM) and linear and non-linear kernels based on support vector
machine (SVM).
| Imen Trabelsi and Dorra Ben Ayed | null | 1406.7314 | null | null |
Stock Market Prediction from WSJ: Text Mining via Sparse Matrix
Factorization | cs.LG q-fin.ST | We revisit the problem of predicting directional movements of stock prices
based on news articles: here our algorithm uses daily articles from The Wall
Street Journal to predict the closing stock prices on the same day. We propose
a unified latent space model to characterize the "co-movements" between stock
prices and news articles. Unlike many existing approaches, our new model is
able to simultaneously leverage the correlations: (a) among stock prices, (b)
among news articles, and (c) between stock prices and news articles. Thus, our
model is able to make daily predictions on more than 500 stocks (most of which
are not even mentioned in any news article) while having low complexity. We
carry out extensive backtesting on trading strategies based on our algorithm.
The results show that our model has a substantially better accuracy rate
(55.7%) than many widely used algorithms. The return (56%) and Sharpe ratio of
a trading strategy based on our model are also much higher than those of
baseline indices.
| Felix Ming Fai Wong, Zhenming Liu, Mung Chiang | null | 1406.7330 | null | null |
Exponentially Increasing the Capacity-to-Computation Ratio for
Conditional Computation in Deep Learning | stat.ML cs.LG cs.NE | Many state-of-the-art results obtained with deep networks are achieved with
the largest models that could be trained, and if more computational power were
available, we might be able to exploit much larger datasets in order to improve
generalization ability. Whereas in learning algorithms such as decision trees
the ratio of capacity (e.g., the number of parameters) to computation is very
favorable (up to exponentially more parameters than computation), the ratio is
essentially 1 for deep neural networks. Conditional computation has been
proposed as a way to increase the capacity of a deep neural network without
increasing the amount of computation required, by activating some parameters
and computation "on-demand", on a per-example basis. In this note, we propose a
novel parametrization of weight matrices in neural networks which has the
potential to increase the ratio of the number of parameters to computation by a
factor that is up to exponential. The proposed approach is based on turning on some parameters
(weight matrices) when specific bit patterns of hidden unit activations are
obtained. In order to better control for the overfitting that might result, we
propose a parametrization that is tree-structured, where each node of the tree
corresponds to a prefix of a sequence of sign bits, or gating units, associated
with hidden units.
| Kyunghyun Cho and Yoshua Bengio | null | 1406.7362 | null | null |
Complexity Measures and Concept Learning | cs.IT cs.LG math.IT | The nature of concept learning is a core question in cognitive science.
Theories must account for the relative difficulty of acquiring different
concepts by supervised learners. For a canonical set of six category types, two
distinct orderings of classification difficulty have been found. One ordering,
which we call paradigm-specific, occurs when adult human learners classify
objects with easily distinguishable characteristics such as size, shape, and
shading. The general order occurs in all other known cases: when adult humans
classify objects with characteristics that are not readily distinguished (e.g.,
brightness, saturation, hue); for children and monkeys; and when categorization
difficulty is extrapolated from errors in identification learning. The
paradigm-specific order was found to be predictable mathematically by measuring
the logical complexity of tasks, i.e., how concisely the solution can be
represented by logical rules.
However, logical complexity explains the paradigm-specific order but not
the general order. Here we propose a new difficulty measurement, information
complexity, that calculates the amount of uncertainty remaining when a subset
of the dimensions are specified. This measurement is based on Shannon entropy.
We show that, when the metric extracts minimal uncertainties, this new
measurement predicts the paradigm-specific order for the canonical six category
types, and when the metric extracts average uncertainties, this new measurement
predicts the general order. Moreover, for learning category types beyond the
canonical six, we find that the minimal-uncertainty formulation correctly
predicts the paradigm-specific order as well or better than existing metrics
(Boolean complexity and GIST) in most cases.
| Andreas D. Pape, Kenneth J. Kurtz, Hiroki Sayama | 10.1016/j.jmp.2015.01.001 | 1406.7424 | null | null |
Comparison of SVM Optimization Techniques in the Primal | cs.LG | This paper examines the efficacy of different optimization techniques in a
primal formulation of a support vector machine (SVM). Three main techniques are
compared. The dataset used to compare all three techniques was the Sentiment
Analysis on Movie Reviews dataset, from kaggle.com.
| Jonathan Katzman and Diane Duros | null | 1406.7429 | null | null |
Efficient Learning in Large-Scale Combinatorial Semi-Bandits | cs.LG cs.AI stat.ML | A stochastic combinatorial semi-bandit is an online learning problem where at
each step a learning agent chooses a subset of ground items subject to
combinatorial constraints, and then observes stochastic weights of these items
and receives their sum as a payoff. In this paper, we consider efficient
learning in large-scale combinatorial semi-bandits with linear generalization,
and as a solution, propose two learning algorithms called Combinatorial Linear
Thompson Sampling (CombLinTS) and Combinatorial Linear UCB (CombLinUCB). Both
algorithms are computationally efficient as long as the offline version of the
combinatorial problem can be solved efficiently. We establish that CombLinTS
and CombLinUCB are also provably statistically efficient under reasonable
assumptions, by developing regret bounds that are independent of the problem
scale (number of items) and sublinear in time. We also evaluate CombLinTS on a
variety of problems with thousands of items. Our experiment results demonstrate
that CombLinTS is scalable, robust to the choice of algorithm parameters, and
significantly outperforms the best of our baselines.
| Zheng Wen, Branislav Kveton, and Azin Ashkan | null | 1406.7443 | null | null |
Learning to Deblur | cs.CV cs.LG | We describe a learning-based approach to blind image deconvolution. It uses a
deep layered architecture, parts of which are borrowed from recent work on
neural network learning, and parts of which incorporate computations that are
specific to image deconvolution. The system is trained end-to-end on a set of
artificially generated training examples, enabling competitive performance in
blind deconvolution, both with respect to quality and runtime.
| Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard
Sch\"olkopf | null | 1406.7444 | null | null |
Contrastive Feature Induction for Efficient Structure Learning of
Conditional Random Fields | cs.LG | Structure learning of Conditional Random Fields (CRFs) can be cast into an
L1-regularized optimization problem. To avoid optimizing over a fully linked
model, gain-based or gradient-based feature selection methods start from an
empty model and incrementally add top ranked features to it. However, for
high-dimensional problems like statistical relational learning, training time
of these incremental methods can be dominated by the cost of evaluating the
gain or gradient of a large collection of candidate features. In this study we
propose a fast feature evaluation algorithm called Contrastive Feature
Induction (CFI), which only evaluates a subset of features that involve both
variables with high signals (deviation from mean) and variables with high
errors (residue). We prove that the gradient of candidate features can be
represented solely as a function of signals and errors, and that CFI is an
efficient approximation of gradient-based evaluation methods. Experiments on
synthetic and real data sets show competitive learning speed and accuracy of
CFI on pairwise CRFs, compared to state-of-the-art structure learning methods
such as full optimization over all features, and Grafting.
| Ni Lao, Jun Zhu | null | 1406.7445 | null | null |
Unimodal Bandits without Smoothness | cs.LG | We consider stochastic bandit problems with a continuous set of arms and
where the expected reward is a continuous and unimodal function of the arm. No
further assumption is made regarding the smoothness and the structure of the
expected reward function. For these problems, we propose the Stochastic
Pentachotomy (SP) algorithm, and derive finite-time upper bounds on its regret
and optimization error. In particular, we show that, for any expected reward
function $\mu$ that behaves as $\mu(x)=\mu(x^\star)-C|x-x^\star|^\xi$ locally
around its maximizer $x^\star$ for some $\xi, C>0$, the SP algorithm is
order-optimal. Namely its regret and optimization error scale as
$O(\sqrt{T\log(T)})$ and $O(\sqrt{\log(T)/T})$, respectively, when the time
horizon $T$ grows large. These scalings are achieved without the knowledge of
$\xi$ and $C$. Our algorithm is based on asymptotically optimal sequential
statistical tests used to successively trim an interval that contains the best
arm with high probability. To our knowledge, the SP algorithm constitutes the
first sequential arm selection rule that achieves a regret and optimization
error scaling as $O(\sqrt{T})$ and $O(1/\sqrt{T})$, respectively, up to a
logarithmic factor for non-smooth expected reward functions, as well as for
smooth functions with unknown smoothness.
| Richard Combes and Alexandre Proutiere | null | 1406.7447 | null | null |
Thompson Sampling for Learning Parameterized Markov Decision Processes | stat.ML cs.LG | We consider reinforcement learning in parameterized Markov Decision Processes
(MDPs), where the parameterization may induce correlation across transition
probabilities or rewards. Consequently, observing a particular state transition
might yield useful information about other, unobserved, parts of the MDP. We
present a version of Thompson sampling for parameterized reinforcement learning
problems, and derive a frequentist regret bound for priors over general
parameter spaces. The result shows that the number of instants where suboptimal
actions are chosen scales logarithmically with time, with high probability. It
holds for prior distributions that put significant probability near the true
model, without any additional, specific closed-form structure such as conjugate
or product-form priors. The constant factor in the logarithmic scaling encodes
the information complexity of learning the MDP in terms of the Kullback-Leibler
geometry of the parameter space.
| Aditya Gopalan, Shie Mannor | null | 1406.7498 | null | null |
Theoretical Analysis of Bayesian Optimisation with Unknown Gaussian
Process Hyper-Parameters | stat.ML cs.LG | Bayesian optimisation has gained great popularity as a tool for optimising
the parameters of machine learning algorithms and models. Somewhat ironically,
setting up the hyper-parameters of Bayesian optimisation methods is notoriously
hard. While reasonable practical solutions have been advanced, they can often
fail to find the best optima. Surprisingly, there is little theoretical
analysis of this crucial problem in the literature. To address this, we derive
a cumulative regret bound for Bayesian optimisation with Gaussian processes and
unknown kernel hyper-parameters in the stochastic setting. The bound, which
applies to the expected improvement acquisition function and sub-Gaussian
observation noise, provides us with guidelines on how to design hyper-parameter
estimation methods. A simple simulation demonstrates the importance of
following these guidelines.
| Ziyu Wang, Nando de Freitas | null | 1406.7758 | null | null |
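For reference, the expected-improvement acquisition that the bound addresses can be sketched in a short loop; the toy objective, the candidate grid, and the per-round maximum-likelihood refit of the RBF kernel are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                               # hypothetical 1-D objective to maximize
    return -np.sin(3 * x) - x ** 2 + 0.7 * x

X = np.array([[-0.9], [1.1]])           # initial design
y = f(X).ravel()
grid = np.linspace(-2.0, 2.0, 400).reshape(-1, 1)

for _ in range(15):
    # refitting the RBF length scale by marginal likelihood each round is a
    # practical stand-in for handling the unknown hyper-parameters
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-12)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("incumbent:", X[np.argmax(y)].item(), "value:", y.max())
```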
Building DNN Acoustic Models for Large Vocabulary Speech Recognition | cs.CL cs.LG cs.NE stat.ML | Deep neural networks (DNNs) are now a central component of nearly all
state-of-the-art speech recognition systems. Building neural network acoustic
models requires several design decisions including network architecture, size,
and training loss function. This paper offers an empirical investigation on
which aspects of DNN acoustic model design are most important for speech
recognition system performance. We report DNN classifier performance and final
speech recognizer word error rates, and compare DNNs using several metrics to
quantify factors influencing differences in task performance. Our first set of
experiments use the standard Switchboard benchmark corpus, which contains
approximately 300 hours of conversational telephone speech. We compare standard
DNNs to convolutional networks, and present the first experiments using
locally-connected, untied neural networks for acoustic modeling. We
additionally build systems on a corpus of 2,100 hours of training data by
combining the Switchboard and Fisher corpora. This larger corpus allows us to
more thoroughly examine performance of large DNN models -- with up to ten times
more parameters than those typically used in speech recognition systems. Our
results suggest that a relatively simple DNN architecture and optimization
technique produces strong results. These findings, along with previous work,
help establish a set of best practices for building DNN hybrid speech
recognition systems with maximum likelihood training. Our experiments in DNN
optimization additionally serve as a case study for training DNNs with
discriminative loss functions for speech tasks, as well as DNN classifiers more
generally.
| Andrew L. Maas, Peng Qi, Ziang Xie, Awni Y. Hannun, Christopher T.
Lengerich, Daniel Jurafsky and Andrew Y. Ng | null | 1406.7806 | null | null |
Learning Laplacian Matrix in Smooth Graph Signal Representations | cs.LG cs.SI stat.ML | The construction of a meaningful graph plays a crucial role in the success of
many graph-based representations and algorithms for handling structured data,
especially in the emerging field of graph signal processing. However, a
meaningful graph is not always readily available from the data, nor easy to
define depending on the application domain. In particular, it is often
desirable in graph signal processing applications that a graph is chosen such
that the data admit certain regularity or smoothness on the graph. In this
paper, we address the problem of learning graph Laplacians, which is equivalent
to learning graph topologies, such that the input data form graph signals with
smooth variations on the resulting topology. To this end, we adopt a factor
analysis model for the graph signals and impose a Gaussian probabilistic prior
on the latent variables that control these signals. We show that the Gaussian
prior leads to an efficient representation that favors the smoothness property
of the graph signals. We then propose an algorithm for learning graphs that
enforces such property and is based on minimizing the variations of the signals
on the learned graph. Experiments on both synthetic and real world data
demonstrate that the proposed graph learning framework can efficiently infer
meaningful graph topologies from signal observations under the smoothness
prior.
| Xiaowen Dong, Dorina Thanou, Pascal Frossard, Pierre Vandergheynst | null | 1406.7842 | null | null |
Simple connectome inference from partial correlation statistics in
calcium imaging | stat.ML cs.CE cs.LG | In this work, we propose a simple yet effective solution to the problem of
connectome inference in calcium imaging data. The proposed algorithm consists
of two steps. First, the raw signals are processed to detect neural peak
activities. Second, the degree of association between neurons is inferred from
partial correlation statistics. This paper summarises the methodology that led
us to win the Connectomics Challenge, proposes a simplified version of our
method, and finally compares our results with those of other inference
methods.
| Antonio Sutera, Arnaud Joly, Vincent Fran\c{c}ois-Lavet, Zixiao Aaron
Qiu, Gilles Louppe, Damien Ernst and Pierre Geurts | 10.1007/978-3-319-53070-3_2 | 1406.7865 | null | null |
Relevance Singular Vector Machine for low-rank matrix sensing | cs.NA cs.LG math.ST stat.TH | In this paper we develop a new Bayesian inference method for low-rank matrix
reconstruction. We call the new method the Relevance Singular Vector Machine
(RSVM) where appropriate priors are defined on the singular vectors of the
underlying matrix to promote low rank. To accelerate computations, a
numerically efficient approximation is developed. The proposed algorithms are
applied to matrix completion and matrix reconstruction problems and their
performance is studied numerically.
| Martin Sundin, Saikat Chatterjee, Magnus Jansson and Cristian R. Rojas | null | 1407.0013 | null | null |
Rates of Convergence for Nearest Neighbor Classification | cs.LG math.ST stat.ML stat.TH | Nearest neighbor methods are a popular class of nonparametric estimators with
several desirable properties, such as adaptivity to different distance scales
in different regions of space. Prior work on convergence rates for nearest
neighbor classification has not fully reflected these subtle properties. We
analyze the behavior of these estimators in metric spaces and provide
finite-sample, distribution-dependent rates of convergence under minimal
assumptions. As a by-product, we are able to establish the universal
consistency of nearest neighbor in a broader range of data spaces than was
previously known. We illustrate our upper and lower bounds by introducing
smoothness classes that are customized for nearest neighbor classification.
| Kamalika Chaudhuri and Sanjoy Dasgupta | null | 1407.0067 | null | null |
Randomized Block Coordinate Descent for Online and Stochastic
Optimization | cs.LG | Two types of low cost-per-iteration gradient descent methods have been
extensively studied in parallel. One is online or stochastic gradient descent
(OGD/SGD), and the other is randomized block coordinate descent (RBCD). In this
paper, we combine the two types of methods and propose online randomized block
coordinate descent (ORBCD). At each iteration, ORBCD only computes the partial
gradient of one coordinate block on one mini-batch of samples. ORBCD is well
suited for the composite minimization problem where one function is the average
of the losses over a large number of samples and the other is a simple
regularizer defined on high-dimensional variables. We show
that the iteration complexity of ORBCD has the same order as OGD or SGD. For
strongly convex functions, by reducing the variance of stochastic gradients, we
show that ORBCD can converge at a geometric rate in expectation, matching the
convergence rate of SGD with variance reduction and RBCD.
| Huahua Wang and Arindam Banerjee | null | 1407.0107 | null | null |
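A minimal sketch of the ORBCD-style update for a lasso objective is below; the block partition, step size, and sampling scheme are illustrative choices, not the authors' exact algorithm or analysis.

```python
import numpy as np

def orbcd_lasso(A, b, lam, n_blocks=10, batch=32, lr=0.01, iters=5000, seed=0):
    """Online randomized block coordinate descent for the lasso objective
    (1/2n)||Ax - b||^2 + lam * ||x||_1: each step takes the partial
    gradient of one random coordinate block on one random mini-batch,
    then applies a soft-threshold (proximal) update on that block.
    Assumes batch <= number of rows of A."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    blocks = np.array_split(np.arange(d), n_blocks)
    for _ in range(iters):
        S = rng.choice(n, size=batch, replace=False)   # random mini-batch
        B = blocks[rng.integers(n_blocks)]             # random coordinate block
        resid = A[S] @ x - b[S]
        g = A[S][:, B].T @ resid / batch               # partial gradient
        z = x[B] - lr * g
        x[B] = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)  # prox step
    return x
```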
Mind the Nuisance: Gaussian Process Classification using Privileged
Noise | stat.ML cs.LG | The learning with privileged information setting has recently attracted a lot
of attention within the machine learning community, as it allows the
integration of additional knowledge into the training process of a classifier,
even when this comes in the form of a data modality that is not available at
test time. Here, we show that privileged information can naturally be treated
as noise in the latent function of a Gaussian Process classifier (GPC). That
is, in contrast to the standard GPC setting, the latent function is not just a
nuisance but a feature: it becomes a natural measure of confidence about the
training data by modulating the slope of the GPC sigmoid likelihood function.
Extensive experiments on public datasets show that the proposed GPC method
using privileged noise, called GPC+, improves over a standard GPC without
privileged knowledge, and also over the current state-of-the-art SVM-based
method, SVM+. Moreover, we show that advanced neural networks and deep learning
methods can be compressed as privileged information.
| Daniel Hern\'andez-Lobato, Viktoriia Sharmanska, Kristian Kersting,
Christoph H. Lampert, Novi Quadrianto | null | 1407.0179 | null | null |
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly
Convex Composite Objectives | cs.LG math.OC stat.ML | In this work we introduce a new optimisation method called SAGA in the spirit
of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient
algorithms with fast linear convergence rates. SAGA improves on the theory
behind SAG and SVRG, with better theoretical convergence rates, and has support
for composite objectives where a proximal operator is used on the regulariser.
Unlike SDCA, SAGA supports non-strongly convex problems directly, and is
adaptive to any inherent strong convexity of the problem. We give experimental
results showing the effectiveness of our method.
| Aaron Defazio, Francis Bach (INRIA Paris - Rocquencourt, LIENS, MSR -
INRIA), Simon Lacoste-Julien (INRIA Paris - Rocquencourt, LIENS, MSR - INRIA) | null | 1407.0202 | null | null |
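The SAGA update itself is compact; here is a minimal sketch for a smooth least-squares objective. The proximal/composite and strongly convex variants the paper analyzes are omitted, and the step size is a crude Lipschitz-based guess.

```python
import numpy as np

def saga_least_squares(A, b, step=None, epochs=20, seed=0):
    """SAGA on the smooth objective f(x) = (1/2n) sum_i (a_i^T x - b_i)^2.
    A table stores the last gradient seen for each sample; the update
    direction is g_j(new) - g_j(stored) + mean(table), which is an
    unbiased, variance-reduced gradient estimate."""
    n, d = A.shape
    if step is None:
        step = 1.0 / (3.0 * np.max(np.sum(A ** 2, axis=1)))  # crude 1/(3L) guess
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    table = A * (A @ x - b)[:, None]        # stored per-sample gradients
    avg = table.mean(axis=0)
    for _ in range(epochs * n):
        j = rng.integers(n)
        g_new = A[j] * (A[j] @ x - b[j])    # fresh gradient at sample j
        x -= step * (g_new - table[j] + avg)
        avg += (g_new - table[j]) / n       # keep the running mean current
        table[j] = g_new
    return x
```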
A Bayes consistent 1-NN classifier | cs.LG stat.ML | We show that a simple modification of the 1-nearest neighbor classifier
yields a strongly Bayes consistent learner. Prior to this work, the only
strongly Bayes consistent proximity-based method was the k-nearest neighbor
classifier, for k growing appropriately with sample size. We will argue that a
margin-regularized 1-NN enjoys considerable statistical and algorithmic
advantages over the k-NN classifier. These include user-friendly finite-sample
error bounds, as well as time- and memory-efficient learning and test-point
evaluation algorithms with a principled speed-accuracy tradeoff. Encouraging
empirical results are reported.
| Aryeh Kontorovich and Roi Weiss | null | 1407.0208 | null | null |
DC approximation approaches for sparse optimization | cs.NA cs.LG stat.ML | Sparse optimization refers to an optimization problem involving the zero-norm
in objective or constraints. In this paper, nonconvex approximation approaches
for sparse optimization have been studied with a unifying point of view in the DC
(Difference of Convex functions) programming framework. Considering a common DC
approximation of the zero-norm including all standard sparse inducing penalty
functions, we studied the consistency between global minima (resp. local
minima) of approximate and original problems. We showed that, in several
cases, some global minimizers (resp. local minimizers) of the approximate
problem are also those of the original problem. Using exact penalty techniques
in DC programming, we proved stronger results for some particular
approximations, namely, the approximate problem, with suitable parameters, is
equivalent to the original problem. The efficiency of several sparse inducing
penalty functions has been fully analyzed. Four DCA (DC Algorithm) schemes
were developed that cover all standard algorithms in nonconvex sparse
approximation approaches as special versions. They can be viewed as an
$\ell_{1}$-perturbed algorithm / reweighted-$\ell_{1}$ algorithm. We offer a
unifying nonconvex approximation approach, with
solid theoretical tools as well as efficient algorithms based on DC programming
and DCA, to tackle the zero-norm and sparse optimization. As an application, we
implemented our methods for the feature selection in SVM (Support Vector
Machine) problem and performed empirical comparative numerical experiments on
the proposed algorithms with various approximation functions.
| Hoai An Le Thi, Tao Pham Dinh, Hoai Minh Le, Xuan Thanh Vo | null | 1407.0286 | null | null |
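One of the reweighted-$\ell_1$ schemes of the kind covered by the DCA framework above, sketched in Python for sparse least squares. The weight rule 1/(|x| + eps) corresponds to a log-sum surrogate of the zero-norm (a standard choice; the paper analyzes a whole family of approximations), and the inner weighted-lasso solve uses plain ISTA.

import numpy as np

def reweighted_l1(X, y, lam=0.1, eps=1e-3, outer=5, inner=300):
    n, d = X.shape
    x = np.zeros(d)
    step = 1.0 / np.linalg.norm(X, 2) ** 2        # 1/L for the smooth part
    for _ in range(outer):
        w = 1.0 / (np.abs(x) + eps)               # convex majorizer weights
        for _ in range(inner):                    # ISTA on the weighted lasso
            z = x - step * X.T @ (X @ x - y)
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x

Each outer pass is one DC iteration: linearize the concave part of the penalty at the current point, then solve the resulting convex weighted-$\ell_1$ problem.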
Identifying Outliers in Large Matrices via Randomized Adaptive
Compressive Sampling | cs.IT cs.LG math.IT stat.ML | This paper examines the problem of locating outlier columns in a large,
otherwise low-rank, matrix. We propose a simple two-step adaptive sensing and
inference approach and establish theoretical guarantees for its performance;
our results show that accurate outlier identification is achievable using very
few linear summaries of the original data matrix -- as few as the squared rank
of the low-rank component plus the number of outliers, times constant and
logarithmic factors. We demonstrate the performance of our approach
experimentally in two stylized applications, one motivated by robust
collaborative filtering tasks, and the other by saliency map estimation tasks
arising in computer vision and automated surveillance, and also investigate
extensions to settings where the data are noisy, or possibly incomplete.
| Xingguo Li and Jarvis Haupt | 10.1109/TSP.2015.2401536 | 1407.0312 | null | null |
Significant Subgraph Mining with Multiple Testing Correction | stat.ME cs.LG stat.ML | The problem of finding itemsets that are statistically significantly enriched
in a class of transactions is complicated by the need to correct for multiple
hypothesis testing. Pruning untestable hypotheses was recently proposed as a
strategy for this task of significant itemset mining. It was shown to lead to
greater statistical power, i.e. the discovery of more truly significant itemsets,
than the standard Bonferroni correction on real-world datasets. An open
question, however, is whether this strategy of excluding untestable hypotheses
also leads to greater statistical power in subgraph mining, in which the number
of hypotheses is much larger than in itemset mining. Here we answer this
question by an empirical investigation on eight popular graph benchmark
datasets. We propose a new efficient search strategy, which always returns the
same solution as the state-of-the-art approach and is approximately two orders
of magnitude faster. Moreover, we exploit the dependence between subgraphs by
considering the effective number of tests and thereby further increase the
statistical power.
| Mahito Sugiyama, Felipe Llinares L\'opez, Niklas Kasenburg, Karsten M.
Borgwardt | null | 1407.0316 | null | null |
A Multi Level Data Fusion Approach for Speaker Identification on
Telephone Speech | cs.SD cs.LG | Several speaker identification systems are giving good performance with clean
speech but are affected by the degradations introduced by noisy audio
conditions. To deal with this problem, we investigate the use of complementary
information at different levels for computing a combined match score for the
unknown speaker. In this work, we observe the effect of two supervised machine
learning approaches including support vector machines (SVM) and na\"ive Bayes
(NB). We define two feature vector sets based on mel frequency cepstral
coefficients (MFCC) and relative spectral perceptual linear predictive
coefficients (RASTA-PLP). Each feature is modeled using the Gaussian Mixture
Model (GMM). Several ways of combining these information sources give
significant improvements in a text-independent speaker identification task
using a very large telephone-degraded NTIMIT database.
| Imen Trabelsi and Dorra Ben Ayed | null | 1407.0380 | null | null |
Geometric Tight Frame based Stylometry for Art Authentication of van
Gogh Paintings | cs.LG cs.CV | This paper is about authenticating genuine van Gogh paintings from forgeries.
The authentication process depends on two key steps: feature extraction and
outlier detection. In this paper, a geometric tight frame and some simple
statistics of the tight frame coefficients are used to extract features from
the paintings. Then a forward stage-wise rank boosting is used to select a
small set of features for more accurate classification so that van Gogh
paintings are highly concentrated towards some center point while forgeries are
spread out as outliers. Numerical results show that our method can achieve
86.08% classification accuracy under the leave-one-out cross-validation
procedure. Our method also identifies five features that are much more
predominant than other features. Using just these five features for
classification, our method can give 88.61% classification accuracy which is the
highest reported in the literature so far. Evaluation of the five features is also
performed on two hundred datasets generated by bootstrap sampling with
replacement. The median and the mean are 88.61% and 87.77% respectively. Our
results show that a small set of statistics of the tight frame coefficients
along certain orientations can serve as discriminative features for van Gogh
paintings. It is more important to look at the tail distributions of such
directional coefficients than mean values and standard deviations. It reflects
a highly consistent style in van Gogh's brushstroke movements, whereas many
forgeries demonstrate a more diverse spread in these features.
| Haixia Liu, Raymond H. Chan, and Yuan Yao | null | 1407.0439 | null | null |
Classification-based Approximate Policy Iteration: Experiments and
Extended Discussions | cs.LG cs.SY math.OC stat.ML | Tackling large approximate dynamic programming or reinforcement learning
problems requires methods that can exploit regularities, or intrinsic
structure, of the problem in hand. Most current methods are geared towards
exploiting the regularities of either the value function or the policy. We
introduce a general classification-based approximate policy iteration (CAPI)
framework, which encompasses a large class of algorithms that can exploit
regularities of both the value function and the policy space, depending on what
is advantageous. This framework has two main components: a generic value
function estimator and a classifier that learns a policy based on the estimated
value function. We establish theoretical guarantees for the sample complexity
of CAPI-style algorithms, which allow the policy evaluation step to be
performed by a wide variety of algorithms (including temporal-difference-style
methods), and can handle nonparametric representations of policies. Our bounds
on the estimation error of the performance loss are tighter than existing
results. We also illustrate this approach empirically on several problems,
including a large HIV control task.
| Amir-massoud Farahmand, Doina Precup, Andr\'e M.S. Barreto, Mohammad
Ghavamzadeh | null | 1407.0449 | null | null |
How Many Dissimilarity/Kernel Self Organizing Map Variants Do We Need? | stat.ML cs.LG cs.NE | In numerous applicative contexts, data are too rich and too complex to be
represented by numerical vectors. A general approach to extend machine learning
and data mining techniques to such data is to rely on a dissimilarity or on a
kernel that measures how different or similar two objects are. This approach
has been used to define several variants of the Self Organizing Map (SOM). This
paper reviews those variants in using a common set of notations in order to
outline differences and similarities between them. It discusses the advantages
and drawbacks of the variants, as well as the actual relevance of the
dissimilarity/kernel SOM for practical applications.
| Fabrice Rossi (SAMM) | 10.1007/978-3-319-07695-9_1 | 1407.0611 | null | null |
Nonparametric Hierarchical Clustering of Functional Data | stat.ML cs.LG | In this paper, we deal with the problem of curves clustering. We propose a
nonparametric method which partitions the curves into clusters and discretizes
the dimensions of the curve points into intervals. The cross-product of these
partitions forms a data-grid which is obtained using a Bayesian model selection
approach while making no assumptions regarding the curves. Finally, a
post-processing technique, aiming at reducing the number of clusters in order
to improve the interpretability of the clustering, is proposed. It consists in
optimally merging the clusters step by step, which corresponds to an
agglomerative hierarchical classification whose dissimilarity measure is the
variation of the criterion. Interestingly this measure is none other than the
sum of the Kullback-Leibler divergences between clusters distributions before
and after the merges. The practical interest of the approach for functional
data exploratory analysis is presented and compared with an alternative
approach on an artificial and a real world data set.
| Marc Boull\'e, Romain Guigour\`es (SAMM), Fabrice Rossi (SAMM) | 10.1007/978-3-319-02999-3_2 | 1407.0612 | null | null |
Fast Algorithm for Low-rank matrix recovery in Poisson noise | stat.ML cs.LG math.ST stat.TH | This paper describes a fast algorithm for recovering low-rank matrices from
their linear measurements contaminated with Poisson noise: the Poisson noise
Maximum Likelihood Singular Value thresholding (PMLSV) algorithm. We propose a
convex optimization formulation with a cost function consisting of the sum of a
likelihood function and a regularization function which is the nuclear norm of the
matrix. Instead of solving the optimization problem directly by semi-definite
program (SDP), we derive an iterative singular value thresholding algorithm by
expanding the likelihood function. We demonstrate the good performance of the
proposed algorithm on recovery of solar flare images with Poisson noise: the
algorithm is more efficient than solving SDP using the interior-point algorithm
and it generates a good approximate solution compared to that solved from SDP.
| Yang Cao and Yao Xie | null | 1407.0726 | null | null |
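A sketch of the likelihood-gradient-plus-singular-value-thresholding idea in Python, simplified to entrywise observations Y ~ Poisson(M) (the paper handles general linear measurements; the step size and regularization weight here are hypothetical).

import numpy as np

def poisson_svt(Y, lam=1.0, step=0.5, iters=100, floor=1e-6):
    M = np.maximum(Y.astype(float), 1.0)          # feasible positive start
    for _ in range(iters):
        grad = 1.0 - Y / np.maximum(M, floor)     # gradient of the Poisson NLL
        Z = M - step * grad
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s = np.maximum(s - step * lam, 0.0)       # nuclear-norm prox (SVT)
        M = np.maximum(U @ (s[:, None] * Vt), floor)   # keep rates positive
    return M

Each iteration costs one SVD instead of a full semi-definite program, which is the source of the speedup over interior-point SDP solvers reported in the abstract.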
Projecting Ising Model Parameters for Fast Mixing | cs.LG stat.ML | Inference in general Ising models is difficult, due to high treewidth making
tree-based algorithms intractable. Moreover, when interactions are strong,
Gibbs sampling may take exponential time to converge to the stationary
distribution. We present an algorithm to project Ising model parameters onto a
parameter set that is guaranteed to be fast mixing, under several divergences.
We find that Gibbs sampling using the projected parameters is more accurate
than with the original parameters when interaction strengths are strong and
when limited time is available for sampling.
| Justin Domke and Xianghang Liu | null | 1407.0749 | null | null |
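For reference, a plain Gibbs sampler for an Ising model in Python/NumPy; the paper's contribution is to project the parameters (J, h) onto a provably fast-mixing set before running exactly this kind of sampler, so the code below is the unchanged downstream component, not the projection itself.

import numpy as np

def gibbs_ising(J, h, sweeps=200, seed=0):
    # Spins in {-1, +1}; J is a symmetric coupling matrix, h a field vector.
    rng = np.random.default_rng(seed)
    d = len(h)
    x = rng.choice([-1, 1], size=d)
    for _ in range(sweeps):
        for i in range(d):
            field = J[i] @ x - J[i, i] * x[i] + h[i]   # exclude self-coupling
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < p_plus else -1
    return x

d = 8
rng = np.random.default_rng(1)
A = rng.normal(scale=0.2, size=(d, d))
print(gibbs_ising((A + A.T) / 2, np.zeros(d)))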
Global convergence of splitting methods for nonconvex composite
optimization | math.OC cs.LG math.NA stat.ML | We consider the problem of minimizing the sum of a smooth function $h$ with a
bounded Hessian, and a nonsmooth function. We assume that the latter function
is a composition of a proper closed function $P$ and a surjective linear map
$\cal M$, with the proximal mappings of $\tau P$, $\tau > 0$, simple to
compute. This problem is nonconvex in general and encompasses many important
applications in engineering and machine learning. In this paper, we examined
two types of splitting methods for solving this nonconvex optimization problem:
alternating direction method of multipliers and proximal gradient algorithm.
For the direct adaptation of the alternating direction method of multipliers,
we show that, if the penalty parameter is chosen sufficiently large and the
sequence generated has a cluster point, then it gives a stationary point of the
nonconvex problem. We also establish convergence of the whole sequence under an
additional assumption that the functions $h$ and $P$ are semi-algebraic.
Furthermore, we give simple sufficient conditions to guarantee boundedness of
the sequence generated. These conditions can be satisfied for a wide range of
applications including the least squares problem with the $\ell_{1/2}$
regularization. Finally, when $\cal M$ is the identity so that the proximal
gradient algorithm can be efficiently applied, we show that any cluster point
is stationary under a slightly more flexible constant step-size rule than what
is known in the literature for a nonconvex $h$.
| Guoyin Li, Ting Kei Pong | 10.1137/140998135 | 1407.0753 | null | null |
Structured Learning via Logistic Regression | cs.LG stat.ML | A successful approach to structured learning is to write the learning
objective as a joint function of linear parameters and inference messages, and
iterate between updates to each. This paper observes that if the inference
problem is "smoothed" through the addition of entropy terms, for fixed
messages, the learning objective reduces to a traditional (non-structured)
logistic regression problem with respect to parameters. In these logistic
regression problems, each training example has a bias term determined by the
current set of messages. Based on this insight, the structured energy function
can be extended from linear factors to any function class where an "oracle"
exists to minimize a logistic loss.
| Justin Domke | null | 1407.0754 | null | null |
Reducing Offline Evaluation Bias in Recommendation Systems | cs.IR cs.LG stat.ML | Recommendation systems have been integrated into the majority of large online
systems. They tailor those systems to individual users by filtering and ranking
information according to user profiles. This adaptation process influences the
way users interact with the system and, as a consequence, increases the
difficulty of evaluating a recommendation algorithm with historical data (via
offline evaluation). This paper analyses this evaluation bias and proposes a
simple item weighting solution that reduces its impact. The efficiency of the
proposed solution is evaluated on real-world data extracted from the Viadeo
professional social network.
| Arnaud De Myttenaere (SAMM), B\'en\'edicte Le Grand (CRI), Boris
Golden (Viadeo), Fabrice Rossi (SAMM) | null | 1407.0822 | null | null |
Anomaly Detection Based on Aggregation of Indicators | stat.ML cs.LG | Automatic anomaly detection is a major issue in various areas. Beyond mere
detection, the identification of the origin of the problem that produced the
anomaly is also essential. This paper introduces a general methodology that can
assist human operators who aim at classifying monitoring signals. The main idea
is to leverage expert knowledge by generating a very large number of
indicators. A feature selection method is used to keep only the most
discriminant indicators which are used as inputs of a Naive Bayes classifier.
The parameters of the classifier have been optimized indirectly by the
selection process. The approach is demonstrated on simulated data designed to
reproduce some of the anomaly types observed in real-world engines.
| Tsirizo Rabenoro (SAMM), J\'er\^ome Lacaille, Marie Cottrell (SAMM),
Fabrice Rossi (SAMM) | null | 1407.0880 | null | null |
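The select-then-classify pattern described above maps directly onto a standard pipeline; the sketch below uses scikit-learn with stand-in data and a generic univariate selection score, whereas the paper relies on expert-designed indicators and its own selection procedure.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))        # hypothetical: 200 generated indicators
y = rng.integers(0, 2, size=500)       # hypothetical anomaly labels

clf = Pipeline([
    ("select", SelectKBest(f_classif, k=10)),   # keep discriminant indicators
    ("nb", GaussianNB()),                       # Naive Bayes on the survivors
])
clf.fit(X, y)
print(clf.score(X, y))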
Online Submodular Maximization under a Matroid Constraint with
Application to Learning Assignments | cs.LG | Which ads should we display in sponsored search in order to maximize our
revenue? How should we dynamically rank information sources to maximize the
value of the ranking? These applications exhibit strong diminishing returns:
Redundancy decreases the marginal utility of each ad or information source. We
show that these and other problems can be formalized as repeatedly selecting an
assignment of items to positions to maximize a sequence of monotone submodular
functions that arrive one by one. We present an efficient algorithm for this
general problem and analyze it in the no-regret model. Our algorithm possesses
strong theoretical guarantees, such as a performance ratio that converges to
the optimal constant of 1 - 1/e. We empirically evaluate our algorithm on two
real-world online optimization problems on the web: ad allocation with
submodular utilities, and dynamically ranking blogs to detect information
cascades. Finally, we present a second algorithm that handles the more general
case in which the feasible sets are given by a matroid constraint, while still
maintaining a 1 - 1/e asymptotic performance ratio.
| Daniel Golovin, Andreas Krause, Matthew Streeter | null | 1407.1082 | null | null |
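The offline counterpart of the problem above is the classical greedy algorithm for monotone submodular maximization, which attains the same 1 - 1/e constant the paper's online algorithm converges to; a minimal Python sketch on a toy coverage instance (the dictionary of covered users is made up for illustration):

def greedy_submodular(items, f, k):
    chosen = []
    for _ in range(k):
        # Pick the item with the largest marginal gain.
        best = max((x for x in items if x not in chosen),
                   key=lambda x: f(chosen + [x]) - f(chosen))
        chosen.append(best)
    return chosen

cover = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 5, 6}}
f = lambda S: len(set().union(*[cover[x] for x in S]))   # users covered
print(greedy_submodular(list(cover), f, 2))              # ['a', 'd']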
Robust Optimization using Machine Learning for Uncertainty Sets | math.OC cs.LG stat.ML | Our goal is to build robust optimization problems for making decisions based
on complex data from the past. In robust optimization (RO) generally, the goal
is to create a policy for decision-making that is robust to our uncertainty
about the future. In particular, we want our policy to best handle the
worst possible situation that could arise, out of an uncertainty set of
possible situations. Classically, the uncertainty set is simply chosen by the
user, or it might be estimated in overly simplistic ways with strong
assumptions; whereas in this work, we learn the uncertainty set from data
collected in the past. The past data are drawn randomly from an (unknown)
possibly complicated high-dimensional distribution. We propose a new
uncertainty set design and show how tools from statistical learning theory can
be employed to provide probabilistic guarantees on the robustness of the
policy.
| Theja Tulabandhula, Cynthia Rudin | null | 1407.1097 | null | null |
Expanding the Family of Grassmannian Kernels: An Embedding Perspective | cs.CV cs.LG stat.ML | Modeling videos and image-sets as linear subspaces has proven beneficial for
many visual recognition tasks. However, it also incurs challenges arising from
the fact that linear subspaces do not obey Euclidean geometry, but lie on a
special type of Riemannian manifolds known as Grassmannian. To leverage the
techniques developed for Euclidean spaces (e.g, support vector machines) with
subspaces, several recent studies have proposed to embed the Grassmannian into
a Hilbert space by making use of a positive definite kernel. Unfortunately,
only two Grassmannian kernels are known, none of which, as we will show, is
universal, which limits their ability to approximate a target function
arbitrarily well. Here, we introduce several positive definite Grassmannian
kernels, including universal ones, and demonstrate their superiority over
previously-known kernels in various tasks, such as classification, clustering,
sparse coding and hashing.
| Mehrtash T. Harandi and Mathieu Salzmann and Sadeep Jayasumana and
Richard Hartley and Hongdong Li | null | 1407.1123 | null | null |
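For concreteness, one of the two previously known kernels mentioned above, the projection kernel, sketched in Python; subspaces are represented by orthonormal bases, and the kernel value is the squared Frobenius norm of their product (the paper's new kernels extend this family):

import numpy as np

def orth_basis(A):
    Q, _ = np.linalg.qr(A)      # orthonormal basis: a point on the Grassmannian
    return Q

def projection_kernel(A, B):
    X, Y = orth_basis(A), orth_basis(B)
    return np.linalg.norm(X.T @ Y, "fro") ** 2    # sum of squared cosines

rng = np.random.default_rng(0)
A, B = rng.normal(size=(10, 3)), rng.normal(size=(10, 3))
print(projection_kernel(A, B))   # between 0 and 3 (the subspace dimension)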
Optimizing Ranking Measures for Compact Binary Code Learning | cs.LG cs.CV | Hashing has proven a valuable tool for large-scale information retrieval.
Despite much success, existing hashing methods optimize over simple objectives
such as the reconstruction error or graph Laplacian related loss functions,
instead of the performance evaluation criteria of interest---multivariate
performance measures such as the AUC and NDCG. Here we present a general
framework (termed StructHash) that allows one to directly optimize multivariate
performance measures. The resulting optimization problem can involve
exponentially or infinitely many variables and constraints, which is more
challenging than standard structured output learning. To solve the StructHash
optimization problem, we use a combination of column generation and
cutting-plane techniques. We demonstrate the generality of StructHash by
applying it to ranking prediction and image retrieval, and show that it
outperforms a few state-of-the-art hashing methods.
| Guosheng Lin, Chunhua Shen, Jianxin Wu | null | 1407.1151 | null | null |
Identifying Higher-order Combinations of Binary Features | stat.ML cs.LG | Finding statistically significant interactions between binary variables is
computationally and statistically challenging in high-dimensional settings, due
to the combinatorial explosion in the number of hypotheses. Terada et al.
recently showed how to elegantly address this multiple testing problem by
excluding non-testable hypotheses. Still, it remains unclear how their approach
scales to large datasets.
Here we propose strategies to speed up the approach of Terada et al. and
evaluate them thoroughly on 11 real-world benchmark datasets. We observe that
one approach, incremental search with early stopping, is orders of magnitude
faster than the current state-of-the-art approach.
| Felipe Llinares, Mahito Sugiyama, Karsten M. Borgwardt | null | 1407.1176 | null | null |
Improving Performance of Self-Organising Maps with Distance Metric
Learning Method | cs.LG cs.NE | Self-Organising Maps (SOM) are Artificial Neural Networks used in Pattern
Recognition tasks. Their major advantage over other architectures is the human
readability of the model. However, they often attain poorer accuracy. The most
commonly used metric in SOM is the Euclidean distance, which is not the best
choice for some problems. In this paper, we study the impact of the metric
change on the SOM's
performance in classification problems. In order to change the metric of the
SOM we applied a distance metric learning method, so-called 'Large Margin
Nearest Neighbour'. It computes the Mahalanobis matrix, which assures small
distance between nearest neighbour points from the same class and separation of
points belonging to different classes by a large margin. Results are presented on
several real data sets, containing for example recognition of written digits,
spoken letters or faces.
| Piotr P{\l}o\'nski, Krzysztof Zaremba | 10.1007/978-3-642-29347-4_20 | 1407.1201 | null | null |
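A sketch of the only part of the SOM that the metric change touches, the best-matching-unit search, in Python/NumPy. Here M is a placeholder identity matrix standing in for the Mahalanobis matrix that LMNN would learn; with M = I the code reduces to the standard Euclidean SOM.

import numpy as np

def bmu(x, weights, M):
    # d(x, w)^2 = (x - w)^T M (x - w) for every map unit w.
    diffs = weights - x
    d2 = np.einsum("nd,de,ne->n", diffs, M, diffs)
    return np.argmin(d2)

rng = np.random.default_rng(0)
weights = rng.normal(size=(25, 4))    # hypothetical 5x5 map, flattened
M = np.eye(4)                         # stand-in for the learned LMNN metric
print(bmu(rng.normal(size=4), weights, M))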
Weakly Supervised Action Labeling in Videos Under Ordering Constraints | cs.CV cs.LG | We are given a set of video clips, each one annotated with an {\em ordered}
list of actions, such as "walk" then "sit" then "answer phone" extracted from,
for example, the associated text script. We seek to temporally localize the
individual actions in each clip as well as to learn a discriminative classifier
for each action. We formulate the problem as a weakly supervised temporal
assignment with ordering constraints. Each video clip is divided into small
time intervals and each time interval of each video clip is assigned one action
label, while respecting the order in which the action labels appear in the
given annotations. We show that the action label assignment can be determined
together with learning a classifier for each action in a discriminative manner.
We evaluate the proposed model on a new and challenging dataset of 937 video
clips with a total of 787720 frames containing sequences of 16 different
actions from 69 Hollywood movies.
| Piotr Bojanowski, R\'emi Lajugie, Francis Bach, Ivan Laptev, Jean
Ponce, Cordelia Schmid, Josef Sivic | null | 1407.1208 | null | null |
Reinforcement Learning Based Algorithm for the Maximization of EV
Charging Station Revenue | cs.CE cs.LG math.OC stat.AP | This paper presents an online reinforcement learning based application which
increases the revenue of one particular electric vehicle (EV) charging station,
connected to a renewable source of energy. Moreover, the proposed application
adapts to changes in the trends of the station's average number of customers
and their types. Most of the parameters in the model are simulated
stochastically and the algorithm used is a Q-learning algorithm. A computer
simulation was implemented which demonstrates and confirms the utility of the
model.
| Stoyan Dimitrov, Redouane Lguensat | null | 1407.1291 | null | null |
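The update at the core of such a controller is the standard tabular Q-learning step; a minimal Python sketch with made-up state/action indices (e.g. discretized battery, demand and price levels versus charging decisions; the paper's exact state space is simulated stochastically):

import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    # One Q-learning step: move Q[s, a] toward the bootstrapped target.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

Q = np.zeros((10, 3))                 # hypothetical: 10 states, 3 actions
q_update(Q, s=2, a=1, r=5.0, s_next=3)
print(Q[2, 1])                        # 0.5 after a single step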
Generalized Higher-Order Tensor Decomposition via Parallel ADMM | cs.NA cs.LG | Higher-order tensors are becoming prevalent in many scientific areas such as
computer vision, social network analysis, data mining and neuroscience.
Traditional tensor decomposition approaches face three major challenges: model
selection, gross corruptions and computational efficiency. To address these
problems, we first propose a parallel trace norm regularized tensor
decomposition method, and formulate it as a convex optimization problem. This
method does not require the rank of each mode to be specified beforehand, and
can automatically determine the number of factors in each mode through our
optimization scheme. By considering the low-rank structure of the observed
tensor, we analyze the equivalence relationship of the trace norm between a
low-rank tensor and its core tensor. Then, we cast a non-convex tensor
decomposition model into a weighted combination of multiple much smaller-scale
matrix trace norm minimization. Finally, we develop two parallel alternating
direction methods of multipliers (ADMM) to solve our problems. Experimental
results verify that our regularized formulation is effective, and our methods
are robust to noise or outliers.
| Fanhua Shang and Yuanyuan Liu and James Cheng | null | 1407.1399 | null | null |
Linear Coupling: An Ultimate Unification of Gradient and Mirror Descent | cs.DS cs.LG cs.NA math.OC stat.ML | First-order methods play a central role in large-scale machine learning. Even
though many variations exist, each suited to a particular problem, almost all
such methods fundamentally rely on two types of algorithmic steps: gradient
descent, which yields primal progress, and mirror descent, which yields dual
progress.
We observe that the performances of gradient and mirror descent are
complementary, so that faster algorithms can be designed by LINEARLY COUPLING
the two. We show how to reconstruct Nesterov's accelerated gradient methods
using linear coupling, which gives a cleaner interpretation than Nesterov's
original proofs. We also discuss the power of linear coupling by extending it
to many other settings that Nesterov's methods cannot apply to.
| Zeyuan Allen-Zhu, Lorenzo Orecchia | null | 1407.1537 | null | null |
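A sketch of the coupling pattern in the Euclidean case, where the mirror step is itself a gradient step with a growing step size; the schedule below is one standard accelerated choice, not necessarily the exact one in the paper.

import numpy as np

def linearly_coupled(grad, x0, L, T=100):
    y = z = x0.copy()
    for k in range(1, T + 1):
        tau = 2.0 / (k + 1)
        alpha = k / (2.0 * L)              # growing mirror step size
        x = tau * z + (1 - tau) * y        # the linear coupling
        g = grad(x)
        y = x - g / L                      # gradient-descent (primal) step
        z = z - alpha * g                  # mirror-descent (dual) step
    return y

# Toy quadratic f(x) = 0.5 * ||x||^2, so grad(x) = x and L = 1.
print(linearly_coupled(lambda x: x, np.ones(3), L=1.0))

Gradient descent alone guarantees progress when gradients are large, mirror descent when they are small; the convex combination x = tau*z + (1-tau)*y is what lets the two analyses be added, recovering the accelerated rate.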
Large-Scale Multi-Label Learning with Incomplete Label Assignments | cs.LG | Multi-label learning deals with the classification problems where each
instance can be assigned with multiple labels simultaneously. Conventional
multi-label learning approaches mainly focus on exploiting label correlations.
It is usually assumed, explicitly or implicitly, that the label sets for
training instances are fully labeled without any missing labels. However, in
many real-world multi-label datasets, the label assignments for training
instances can be incomplete. Some ground-truth labels can be missed by the
labeler from the label set. This problem is especially typical when the number
of instances is very large, and the labeling cost is very high, which makes it
almost impossible to get a fully labeled training set. In this paper, we study
the problem of large-scale multi-label learning with incomplete label
assignments. We propose an approach, called MPU, based upon positive and
unlabeled stochastic gradient descent and stacked models. Unlike prior works,
our method can effectively and efficiently consider missing labels and label
correlations simultaneously, and is very scalable, with linear time
complexity in the size of the data. Extensive experiments on two real-world
multi-label datasets show that our MPU model consistently outperforms other
commonly-used baselines.
| Xiangnan Kong and Zhaoming Wu and Li-Jia Li and Ruofei Zhang and
Philip S. Yu and Hang Wu and Wei Fan | null | 1407.1538 | null | null |
Dictionary Learning and Tensor Decomposition via the Sum-of-Squares
Method | cs.DS cs.LG stat.ML | We give a new approach to the dictionary learning (also known as "sparse
coding") problem of recovering an unknown $n\times m$ matrix $A$ (for $m \geq
n$) from examples of the form \[ y = Ax + e, \] where $x$ is a random vector in
$\mathbb R^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random
noise vector in $\mathbb R^n$ with bounded magnitude. For the case $m=O(n)$,
our algorithm recovers every column of $A$ within arbitrarily good constant
accuracy in time $m^{O(\log m/\log(\tau^{-1}))}$, in particular achieving
polynomial time if $\tau = m^{-\delta}$ for any $\delta>0$, and time $m^{O(\log
m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with
comparable assumptions on the distribution required the vector $x$ to be much
sparser---at most $\sqrt{n}$ nonzero coordinates---and there were intrinsic
barriers preventing these algorithms from applying to denser $x$.
We achieve this by designing an algorithm for noisy tensor decomposition that
can recover, under quite general conditions, an approximate rank-one
decomposition of a tensor $T$, given access to a tensor $T'$ that is
$\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our
knowledge, this is the first algorithm for tensor decomposition that works in
the constant spectral-norm noise regime, where there is no guarantee that the
local optima of $T$ and $T'$ have similar structures.
Our algorithm is based on a novel approach to using and analyzing the Sum of
Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and
it can be viewed as an indication of the utility of this very general and
powerful tool for unsupervised learning problems.
| Boaz Barak, Jonathan A. Kelner, David Steurer | null | 1407.1543 | null | null |
WordRep: A Benchmark for Research on Learning Word Representations | cs.CL cs.LG | WordRep is a benchmark collection for the research on learning distributed
word representations (or word embeddings), released by Microsoft Research. In
this paper, we describe the details of the WordRep collection and show how to
use it in different types of machine learning research related to word
embedding. Specifically, we describe how the evaluation tasks in WordRep are
selected, how the data are sampled, and how the evaluation tool is built. We
then compare several state-of-the-art word representations on WordRep, report
their evaluation performance, and discuss the results. After that,
we discuss new potential research topics that can be supported by WordRep, in
addition to algorithm comparison. We hope that this paper can help people gain
deeper understanding of WordRep, and enable more interesting research on
learning distributed word representations and related topics.
| Bin Gao, Jiang Bian, and Tie-Yan Liu | null | 1407.1640 | null | null |
KNET: A General Framework for Learning Word Embedding using
Morphological Knowledge | cs.CL cs.LG | Neural network techniques are widely applied to obtain high-quality
distributed representations of words, i.e., word embeddings, to address text
mining, information retrieval, and natural language processing tasks. Recently,
efficient methods have been proposed to learn word embeddings from context that
captures both semantic and syntactic relationships between words. However, it
is challenging to handle unseen words or rare words with insufficient context.
In this paper, inspired by the study on word recognition process in cognitive
psychology, we propose to take advantage of seemingly less obvious but
essentially important morphological knowledge to address these challenges. In
particular, we introduce a novel neural network architecture called KNET that
leverages both contextual information and morphological word similarity built
based on morphological knowledge to learn word embeddings. Meanwhile, the
learning architecture is also able to refine the pre-defined morphological
knowledge and obtain more accurate word similarity. Experiments on an
analogical reasoning task and a word similarity task both demonstrate that the
proposed KNET framework can greatly enhance the effectiveness of word
embeddings.
| Qing Cui, Bin Gao, Jiang Bian, Siyu Qiu, and Tie-Yan Liu | null | 1407.1687 | null | null |
Recommending Learning Algorithms and Their Associated Hyperparameters | cs.LG stat.ML | The success of machine learning on a given task dependson, among other
things, which learning algorithm is selected and its associated
hyperparameters. Selecting an appropriate learning algorithm and setting its
hyperparameters for a given data set can be a challenging task, especially for
users who are not experts in machine learning. Previous work has examined using
meta-features to predict which learning algorithm and hyperparameters should be
used. However, choosing a set of meta-features that are predictive of algorithm
performance is difficult. Here, we propose to apply collaborative filtering
techniques to learning algorithm and hyperparameter selection, and find that
doing so avoids determining which meta-features to use and outperforms
traditional meta-learning approaches in many cases.
| Michael R. Smith, Logan Mitchell, Christophe Giraud-Carrier, Tony
Martinez | null | 1407.1890 | null | null |
Identifying Cover Songs Using Information-Theoretic Measures of
Similarity | cs.IR cs.LG stat.ML | This paper investigates methods for quantifying similarity between audio
signals, specifically for the task of cover song detection. We consider an
information-theoretic approach, where we compute pairwise measures of
predictability between time series. We compare discrete-valued approaches
operating on quantised audio features, to continuous-valued approaches. In the
discrete case, we propose a method for computing the normalised compression
distance, where we account for correlation between time series. In the
continuous case, we propose to compute information-based measures of similarity
as statistics of the prediction error between time series. We evaluate our
methods on two cover song identification tasks using a data set comprised of
300 Jazz standards and using the Million Song Dataset. For both datasets, we
observe that continuous-valued approaches outperform discrete-valued
approaches. We consider approaches to estimating the normalised compression
distance (NCD) based on string compression and prediction, where we observe
that our proposed normalised compression distance with alignment (NCDA)
improves average performance over NCD, for sequential compression algorithms.
Finally, we demonstrate that continuous-valued distances may be combined to
improve performance with respect to baseline approaches. Using a large-scale
filter-and-refine approach, we demonstrate state-of-the-art performance for
cover song identification using the Million Song Dataset.
| Peter Foster, Simon Dixon, Anssi Klapuri | 10.1109/TASLP.2015.2416655 | 1407.2433 | null | null |
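The discrete-valued baseline in this line of work, the normalised compression distance, is a few lines with an off-the-shelf compressor; here zlib stands in for whichever sequential compressor is used (the paper's NCDA variant additionally aligns the sequences before compression).

import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"abcabcabc" * 20, b"abcabcabc" * 20))      # near 0: very similar
print(ncd(b"abcabcabc" * 20, bytes(range(256)) * 2))  # closer to 1: dissimilar

For cover song detection, x and y would be quantised audio-feature sequences serialized to bytes rather than raw text.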
Counting Markov Blanket Structures | stat.ML cs.AI cs.LG | Learning Markov blanket (MB) structures has proven useful in performing
feature selection, learning Bayesian networks (BNs), and discovering causal
relationships. We present a formula for efficiently determining the number of
MB structures given a target variable and a set of other variables. As
expected, the number of MB structures grows exponentially. However, we show
quantitatively that there are many fewer MB structures that contain the target
variable than there are BN structures that contain it. In particular, the ratio
of BN structures to MB structures appears to increase exponentially in the
number of variables.
| Shyam Visweswaran and Gregory F. Cooper | null | 1407.2483 | null | null |
RankMerging: A supervised learning-to-rank framework to predict links in
large social network | cs.SI cs.IR cs.LG physics.soc-ph | Uncovering unknown or missing links in social networks is a difficult task
because of their sparsity and because links may represent different types of
relationships, characterized by different structural patterns. In this paper,
we define a simple yet efficient supervised learning-to-rank framework, called
RankMerging, which aims at combining information provided by various
unsupervised rankings. We illustrate our method on three different kinds of
social networks and show that it substantially improves the performance of
unsupervised metrics of ranking. We also compare it to other combination
strategies based on standard methods. Finally, we explore various aspects of
RankMerging, such as feature selection and parameter estimation and discuss its
area of relevance: the prediction of an adjustable number of links on large
networks.
| Lionel Tabourier, Daniel Faria Bernardes, Anne-Sophie Libert, Renaud
Lambiotte | null | 1407.2515 | null | null |
Learning Deep Structured Models | cs.LG | Many problems in real-world applications involve predicting several random
variables which are statistically related. Markov random fields (MRFs) are a
great mathematical tool to encode such relationships. The goal of this paper is
to combine MRFs with deep learning algorithms to estimate complex
representations while taking into account the dependencies between the output
random variables. Towards this goal, we propose a training algorithm that is
able to learn structured models jointly with deep features that form the MRF
potentials. Our approach is efficient as it blends learning and inference and
makes use of GPU acceleration. We demonstrate the effectiveness of our
algorithm in the tasks of predicting words from noisy images, as well as
multi-class classification of Flickr photographs. We show that joint learning
of the deep features and the MRF parameters results in significant performance
gains.
| Liang-Chieh Chen and Alexander G. Schwing and Alan L. Yuille and
Raquel Urtasun | null | 1407.2538 | null | null |
Learning Probabilistic Programs | cs.AI cs.LG stat.ML | We develop a technique for generalising from data in which models are
samplers represented as program text. We establish encouraging empirical
results that suggest that Markov chain Monte Carlo probabilistic programming
inference techniques coupled with higher-order probabilistic programming
languages are now sufficiently powerful to enable successful inference of this
kind in nontrivial domains. We also introduce a new notion of probabilistic
program compilation and show how the same machinery might be used in the future
to compile probabilistic programs for efficient reusable predictive inference.
| Yura N. Perov, Frank D. Wood | null | 1407.2646 | null | null |
Beyond Disagreement-based Agnostic Active Learning | cs.LG stat.ML | We study agnostic active learning, where the goal is to learn a classifier in
a pre-specified hypothesis class interactively with as few label queries as
possible, while making no assumptions on the true function generating the
labels. The main algorithms for this problem are {\em{disagreement-based active
learning}}, which has a high label requirement, and {\em{margin-based active
learning}}, which only applies to fairly restricted settings. A major challenge
is to find an algorithm which achieves better label complexity, is consistent
in an agnostic setting, and applies to general classification problems.
In this paper, we provide such an algorithm. Our solution is based on two
novel contributions -- a reduction from consistent active learning to
confidence-rated prediction with guaranteed error, and a novel confidence-rated
predictor.
| Chicheng Zhang and Kamalika Chaudhuri | null | 1407.2657 | null | null |
Learning Privately with Labeled and Unlabeled Examples | cs.LG cs.CR | A private learner is an algorithm that given a sample of labeled individual
examples outputs a generalizing hypothesis while preserving the privacy of each
individual. In 2008, Kasiviswanathan et al. (FOCS 2008) gave a generic
construction of private learners, in which the sample complexity is (generally)
higher than what is needed for non-private learners. This gap in the sample
complexity was then further studied in several followup papers, showing that
(at least in some cases) this gap is unavoidable. Moreover, those papers
considered ways to overcome the gap, by relaxing either the privacy or the
learning guarantees of the learner.
We suggest an alternative approach, inspired by the (non-private) models of
semi-supervised learning and active learning, where the focus is on the sample
complexity of labeled examples whereas unlabeled examples are of a
significantly lower cost. We consider private semi-supervised learners that
operate on a random sample, where only a (hopefully small) portion of this
sample is labeled. The learners have no control over which of the sample
elements are labeled. Our main result is that the labeled sample complexity of
private learners is characterized by the VC dimension.
We present two generic constructions of private semi-supervised learners. The
first construction is of learners where the labeled sample complexity is
proportional to the VC dimension of the concept class, however, the unlabeled
sample complexity of the algorithm is as big as the representation length of
domain elements. Our second construction presents a new technique for
decreasing the labeled sample complexity of a given private learner, while
roughly maintaining its unlabeled sample complexity. In addition, we show that
in some settings the labeled sample complexity does not depend on the privacy
parameters of the learner.
| Amos Beimel, Kobbi Nissim, Uri Stemmer | null | 1407.2662 | null | null |
Private Learning and Sanitization: Pure vs. Approximate Differential
Privacy | cs.LG cs.CR stat.ML | We compare the sample complexity of private learning [Kasiviswanathan et al.
2008] and sanitization~[Blum et al. 2008] under pure $\epsilon$-differential
privacy [Dwork et al. TCC 2006] and approximate
$(\epsilon,\delta)$-differential privacy [Dwork et al. Eurocrypt 2006]. We show
that the sample complexity of these tasks under approximate differential
privacy can be significantly lower than that under pure differential privacy.
We define a family of optimization problems, which we call Quasi-Concave
Promise Problems, that generalizes some of our considered tasks. We observe
that a quasi-concave promise problem can be privately approximated using a
solution to a smaller instance of a quasi-concave promise problem. This allows
us to construct an efficient recursive algorithm solving such problems
privately. Specifically, we construct private learners for point functions,
threshold functions, and axis-aligned rectangles in high dimension. Similarly,
we construct sanitizers for point functions and threshold functions.
We also examine the sample complexity of label-private learners, a relaxation
of private learning where the learner is required to only protect the privacy
of the labels in the sample. We show that the VC dimension completely
characterizes the sample complexity of such learners, that is, the sample
complexity of learning with label privacy is equal (up to constants) to
learning without privacy.
| Amos Beimel, Kobbi Nissim, Uri Stemmer | null | 1407.2674 | null | null |
A New Optimal Stepsize For Approximate Dynamic Programming | math.OC cs.AI cs.LG cs.SY stat.ML | Approximate dynamic programming (ADP) has proven itself in a wide range of
applications spanning large-scale transportation problems, health care, revenue
management, and energy systems. The design of effective ADP algorithms has many
dimensions, but one crucial factor is the stepsize rule used to update a value
function approximation. Many operations research applications are
computationally intensive, and it is important to obtain good results quickly.
Furthermore, the most popular stepsize formulas use tunable parameters and can
produce very poor results if tuned improperly. We derive a new stepsize rule
that optimizes the prediction error in order to improve the short-term
performance of an ADP algorithm. With only one, relatively insensitive tunable
parameter, the new rule adapts to the level of noise in the problem and
produces faster convergence in numerical experiments.
| Ilya O. Ryzhov and Peter I. Frazier and Warren B. Powell | null | 1407.2676 | null | null |
A Convex Formulation for Learning Scale-Free Networks via Submodular
Relaxation | cs.LG stat.ML | A key problem in statistics and machine learning is the determination of
network structure from data. We consider the case where the structure of the
graph to be reconstructed is known to be scale-free. We show that in such cases
it is natural to formulate structured sparsity inducing priors using submodular
functions, and we use their Lov\'asz extension to obtain a convex relaxation.
For tractable classes such as Gaussian graphical models, this leads to a convex
optimization problem that can be efficiently solved. We show that our method
results in an improvement in the accuracy of reconstructed networks for
synthetic data. We also show how our prior encourages scale-free
reconstructions on a bioinfomatics dataset.
| Aaron J. Defazio and Tiberio S. Caetano | null | 1407.2697 | null | null |
Finito: A Faster, Permutable Incremental Gradient Method for Big Data
Problems | cs.LG stat.ML | Recent advances in optimization theory have shown that smooth strongly convex
finite sums can be minimized faster than by treating them as a black box
"batch" problem. In this work we introduce a new method in this class with a
theoretical convergence rate four times faster than existing methods, for sums
with sufficiently many terms. This method is also amenable to a sampling
without replacement scheme that in practice gives further speed-ups. We give
empirical results showing state of the art performance.
| Aaron J. Defazio and Tib\'erio S. Caetano and Justin Domke | null | 1407.2710 | null | null |
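A heavily hedged sketch in the spirit of Finito for a strongly convex least-squares sum: per-example iterates phi_i and their gradients are stored, and each step combines their averages. The step size alpha below is a heuristic stand-in for the theoretically derived constant, and the uniform sampling can be replaced by the without-replacement (permuted) order the abstract mentions.

import numpy as np

def finito_like(X, y, mu=1.0, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    phi = np.zeros((n, d))               # per-example iterates
    # Gradients of 0.5*(x_i.w - y_i)^2 + (mu/2)*||w||^2 evaluated at phi_i = 0.
    grads = -y[:, None] * X
    alpha = 0.5 / (np.linalg.norm(X, 2) ** 2 / n + mu)   # heuristic step size
    for _ in range(iters):
        w = phi.mean(axis=0) - alpha * grads.mean(axis=0)
        j = rng.integers(n)              # could instead sweep a random permutation
        phi[j] = w
        grads[j] = X[j] * (X[j] @ w - y[j]) + mu * w     # refresh one gradient
    return phi.mean(axis=0) - alpha * grads.mean(axis=0)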
A multi-instance learning algorithm based on a stacked ensemble of lazy
learners | cs.LG | This document describes a novel learning algorithm that classifies "bags" of
instances rather than individual instances. A bag is labeled positive if it
contains at least one positive instance (which may or may not be specifically
identified), and negative otherwise. This class of problems is known as
multi-instance learning problems, and is useful in situations where the class
label at an instance level may be unavailable or imprecise or difficult to
obtain, or in situations where the problem is naturally posed as one of
classifying instance groups. The algorithm described here is an ensemble-based
method, wherein the members of the ensemble are lazy learning classifiers
learnt using the Citation Nearest Neighbour method. Diversity among the
ensemble members is achieved by optimizing their parameters using a
multi-objective optimization method, with the objectives being to maximize
Class 1 accuracy and minimize false positive rate. The method has been found to
be effective on the Musk1 benchmark dataset.
| Ramasubramanian Sundararajan, Hima Patel, Manisha Srivastava | null | 1407.2736 | null | null |
What you need to know about the state-of-the-art computational models of
object-vision: A tour through the models | cs.CV cs.AI cs.LG q-bio.NC | Models of object vision have been of great interest in computer vision and
visual neuroscience. During the last decades, several models have been
developed to extract visual features from images for object recognition tasks.
Some of these were inspired by the hierarchical structure of primate visual
system, and some others were engineered models. The models are varied in
several aspects: models that are trained by supervision, models trained without
supervision, and models (e.g. feature extractors) that are fully hard-wired and
do not need training. Some of the models come with a deep hierarchical
structure consisting of several layers, and some others are shallow and come
with only one or two layers of processing. More recently, new models have been
developed that are not hand-tuned but trained using millions of images, through
which they learn how to extract informative task-related features. Here I will
survey all these different models and provide the reader with an intuitive, as
well as a more detailed, understanding of the underlying computations in each
of the models.
| Seyed-Mahdi Khaligh-Razavi | null | 1407.2776 | null | null |
Bandits Warm-up Cold Recommender Systems | cs.LG cs.IR stat.ML | We address the cold start problem in recommendation systems assuming no
contextual information is available neither about users, nor items. We consider
the case in which we only have access to a set of ratings of items by users.
Most of the existing works consider a batch setting, and use cross-validation
to tune parameters. The classical method consists in minimizing the root mean
square error over a training subset of the ratings which provides a
factorization of the matrix of ratings, interpreted as a latent representation
of items and users. Our contribution in this paper is 5-fold. First, we
make explicit the issues raised by this kind of batch setting for users or items
with very few ratings. Then, we propose an online setting closer to the actual
use of recommender systems; this setting is inspired by the bandit framework.
The proposed methodology can be used to turn any recommender system dataset
(such as Netflix, MovieLens,...) into a sequential dataset. Then, we exhibit a
strong and insightful link between contextual bandit algorithms and matrix
factorization; this leads us to a new algorithm that tackles the
exploration/exploitation dilemma associated to the cold start problem in a
strikingly new perspective. Finally, experimental evidence confirms that our
algorithm is effective in dealing with the cold start problem on publicly
available datasets. Overall, the goal of this paper is to bridge the gap
between recommender systems based on matrix factorizations and those based on
contextual bandits.
| J\'er\'emie Mary (INRIA Lille - Nord Europe, LIFL), Romaric Gaudel
(INRIA Lille - Nord Europe, LIFL), Preux Philippe (INRIA Lille - Nord Europe,
LIFL) | null | 1407.2806 | null | null |
XML Matchers: approaches and challenges | cs.DB cs.AI cs.IR cs.LG | Schema Matching, i.e. the process of discovering semantic correspondences
between concepts adopted in different data source schemas, has been a key topic
in Database and Artificial Intelligence research areas for many years. In the
past, it was largely investigated especially for classical database models
(e.g., E/R schemas, relational databases, etc.). However, in the latest years,
the widespread adoption of XML in the most disparate application fields pushed
a growing number of researchers to design XML-specific Schema Matching
approaches, called XML Matchers, aiming at finding semantic matchings between
concepts defined in DTDs and XSDs. XML Matchers do not just take well-known
techniques originally designed for other data models and apply them on
DTDs/XSDs, but they exploit specific XML features (e.g., the hierarchical
structure of a DTD/XSD) to improve the performance of the Schema Matching
process. The design of XML Matchers is currently a well-established research
area. The main goal of this paper is to provide a detailed description and
classification of XML Matchers. We first describe to what extent the
specificities of DTDs/XSDs impact on the Schema Matching task. Then we
introduce a template, called XML Matcher Template, that describes the main
components of an XML Matcher, their role and behavior. We illustrate how each
of these components has been implemented in some popular XML Matchers. We
consider our XML Matcher Template as the baseline for objectively comparing
approaches that, at first glance, might appear as unrelated. The introduction
of this template can be useful in the design of future XML Matchers. Finally,
we analyze commercial tools implementing XML Matchers and introduce two
challenging issues strictly related to this topic, namely XML source clustering
and uncertainty management in XML Matchers.
| Santa Agreste, Pasquale De Meo, Emilio Ferrara, Domenico Ursino | 10.1016/j.knosys.2014.04.044 | 1407.2845 | null | null |
An eigenanalysis of data centering in machine learning | stat.ML cs.CV cs.LG math.SP math.ST stat.TH | Many pattern recognition methods rely on statistical information from
centered data, with the eigenanalysis of an empirical central moment, such as
the covariance matrix in principal component analysis (PCA), as well as partial
least squares regression, canonical-correlation analysis and Fisher
discriminant analysis. Recently, many researchers have advocated working on
non-centered data. This is the case for instance with the singular value
decomposition approach, with the (kernel) entropy component analysis, with the
information-theoretic learning framework, and even with nonnegative matrix
factorization. Moreover, one can also consider a non-centered PCA by using the
second-order non-central moment.
The main purpose of this paper is to bridge the gap between these two
viewpoints in designing machine learning methods. To provide a study at the
cornerstone of kernel-based machines, we conduct an eigenanalysis of the inner
product matrices from centered and non-centered data. We derive several results
connecting their eigenvalues and their eigenvectors. Furthermore, we explore
the outer product matrices, by providing several results connecting the largest
eigenvectors of the covariance matrix and its non-centered counterpart. These
results lay the groundwork to several extensions beyond conventional centering,
with the weighted mean shift, the rank-one update, and the multidimensional
scaling. Experiments conducted on simulated and real data illustrate the
relevance of this work.
| Paul Honeine | null | 1407.2904 | null | null |
Collaborative Recommendation with Auxiliary Data: A Transfer Learning
View | cs.IR cs.LG | Intelligent recommendation technology has been playing an increasingly
important role in various industry applications such as e-commerce product
promotion and Internet advertisement display. Besides users' feedbacks (e.g.,
numerical ratings) on items as usually exploited by some typical recommendation
algorithms, there are often some additional data such as users' social circles
and other behaviors. Such auxiliary data are usually related to users'
preferences on items behind the numerical ratings. Collaborative recommendation
with auxiliary data (CRAD) aims to leverage such additional information so as
to improve the personalization services, which have received much attention
from both researchers and practitioners.
Transfer learning (TL) is proposed to extract and transfer knowledge from
some auxiliary data in order to assist the learning task on some target data.
In this paper, we consider the CRAD problem from a transfer learning view,
especially on how to achieve knowledge transfer from some auxiliary data.
First, we give a formal definition of transfer learning for CRAD (TL-CRAD).
Second, we extend the existing categorization of TL techniques (i.e., adaptive,
collective and integrative knowledge transfer algorithm styles) with three
knowledge transfer strategies (i.e., prediction rule, regularization and
constraint). Third, we propose a novel generic knowledge transfer framework for
TL-CRAD. Fourth, we describe in detail representative works for each
combination of knowledge transfer strategy and algorithm style, which we
expect to inspire further work. Finally, we conclude the paper with some
summary discussions and several future directions.
| Weike Pan | null | 1407.2919 | null | null |
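To make the regularization-style transfer strategy concrete, here is a minimal hypothetical sketch (not code from the paper): matrix factorization on the target ratings in which user factors are penalized for drifting away from factors `U_aux` learned beforehand from auxiliary data. All names and hyperparameters below are illustrative assumptions:

```python
import numpy as np

def mf_with_transfer(R, U_aux, k=8, lam=0.1, tau=0.5, lr=0.01, epochs=100, seed=0):
    """Regularization-style transfer (illustrative): target user factors U are
    pulled toward U_aux, factors obtained from some auxiliary data."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, k))
    V = rng.normal(scale=0.1, size=(n_items, k))
    users, items = np.nonzero(R)  # indices of observed ratings
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - U[u] @ V[i]
            U[u] += lr * (err * V[i] - lam * U[u] - tau * (U[u] - U_aux[u]))
            V[i] += lr * (err * U[u] - lam * V[i])
    return U, V

# Tiny synthetic example: 4 users, 5 items, a handful of observed ratings.
R = np.zeros((4, 5))
R[0, 1], R[1, 2], R[2, 0], R[3, 4] = 5.0, 3.0, 4.0, 2.0
U_aux = np.random.default_rng(1).normal(scale=0.1, size=(4, 8))
U, V = mf_with_transfer(R, U_aux)
print(np.round(U @ V.T, 2))
```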
FAME: Face Association through Model Evolution | cs.CV cs.AI cs.IR cs.LG | We attack the problem of learning face models for public faces from
weakly-labelled images collected from the web by querying a name. The data is
very noisy even after face detection, with several irrelevant faces
corresponding to other people. We propose a novel method, Face Association
through Model Evolution (FAME), that iteratively prunes the data so that the
face models associated with a name can evolve. The idea is based on capturing
the discriminativeness and representativeness of each instance and
eliminating the outliers. The final models are used to classify faces on novel
datasets with possibly different characteristics. On benchmark datasets, our
results are comparable to or better than state-of-the-art studies for the task
of face identification.
| Eren Golge and Pinar Duygulu | null | 1407.2987 | null | null |
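A hedged sketch of what such an iterative pruning loop could look like (our reading of the idea, not the authors' implementation; the synthetic features and the SVM-based scoring are assumptions):

```python
import numpy as np
from sklearn.svm import LinearSVC

def evolve_face_model(X_name, X_neg, rounds=5, drop_frac=0.1):
    """Toy FAME-like loop: fit a discriminative model against negatives,
    then prune the lowest-scoring (outlier-like) candidates and refit."""
    keep = np.arange(len(X_name))
    clf = None
    for _ in range(rounds):
        X = np.vstack([X_name[keep], X_neg])
        y = np.r_[np.ones(len(keep)), np.zeros(len(X_neg))]
        clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        scores = clf.decision_function(X_name[keep])
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = keep[np.argsort(scores)[n_drop:]]  # keep the most confident faces
    return keep, clf

# Synthetic stand-ins: 40 true faces of the queried name plus 10 impostors
# that resemble the negative (other-people) distribution.
rng = np.random.default_rng(2)
X_name = np.vstack([rng.normal(size=(40, 16)), rng.normal(loc=2.0, size=(10, 16))])
X_neg = rng.normal(loc=2.0, size=(50, 16))
kept, model = evolve_face_model(X_name, X_neg)
print(len(kept), "of", len(X_name), "candidates kept")
```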
An SVM Based Approach for Cardiac View Planning | cs.LG cs.CV | We consider the problem of automatically prescribing oblique planes (short
axis, 4 chamber and 2 chamber views) in Cardiac Magnetic Resonance Imaging
(MRI). A concern with technologist-driven acquisitions of these planes is the
quality of the prescribed planes and the time taken for the total examination.
We propose an automated
solution incorporating anatomical features external to the cardiac region. The
solution uses support vector machine regression models wherein complexity and
feature selection are optimized using multi-objective genetic algorithms.
Additionally, we examine the robustness of our approach by training our models
on images with additive Rician-Gaussian noise mixtures at varying
signal-to-noise ratio (SNR) levels. Our approach has shown promising results,
with an angular
deviation of less than 15 degrees in 90% of cases across oblique planes, measured
in terms of average 6-fold cross validation performance -- this is generally
within acceptable bounds of variation as specified by clinicians.
| Ramasubramanian Sundararajan, Hima Patel, Dattesh Shanbhag, Vivek
Vaidya | null | 1407.3026 | null | null |
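For flavor, a minimal sketch of the regression setup with 6-fold cross-validation, leaving out the multi-objective genetic algorithm for complexity and feature selection; the synthetic features, target and hyperparameters below are placeholders, not the paper's:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Hypothetical setup: X holds anatomical-landmark features per scout scan,
# y holds one orientation angle (degrees) of the target oblique plane.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))
y = X[:, 0] * 10 + rng.normal(scale=2.0, size=120)

# One SVR per plane parameter; C, epsilon and the feature subset would be
# tuned by the multi-objective GA, which we omit here.
model = SVR(kernel="rbf", C=10.0, epsilon=0.5)
scores = cross_val_score(model, X, y, cv=6, scoring="neg_mean_absolute_error")
print("6-fold MAE (degrees):", -scores.mean())
```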
Deep Networks with Internal Selective Attention through Feedback
Connections | cs.CV cs.LG cs.NE | Traditional convolutional neural networks (CNNs) are stationary and
feedforward. They neither change their parameters during evaluation nor use
feedback from higher to lower layers. Real brains, however, do. So does our
Deep Attention Selective Network (dasNet) architecture. DasNet's feedback
structure can dynamically alter its convolutional filter sensitivities during
classification. It harnesses the power of sequential processing to improve
classification performance by allowing the network to iteratively focus its
internal attention on some of its convolutional filters. Feedback is trained
through direct policy search in a huge million-dimensional parameter space,
through scalable natural evolution strategies (SNES). On the CIFAR-10 and
CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model.
| Marijn Stollenga, Jonathan Masci, Faustino Gomez, Juergen Schmidhuber | null | 1407.3068 | null | null |
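A toy stand-in (not dasNet itself) for the kind of evolution-strategy policy search described: a separable NES loop over a small vector of per-filter gains, with a made-up quadratic `reward` in place of classification accuracy after re-running the gated network:

```python
import numpy as np

# Made-up reward standing in for classification accuracy under gains g.
def reward(g):
    target = np.array([1.0, 0.2, 0.8, 0.5])
    return -np.sum((g - target) ** 2)

rng = np.random.default_rng(0)
mu, sigma = np.zeros(4), np.ones(4)   # search distribution over the gains
pop = 16
for _ in range(300):
    eps = rng.normal(size=(pop, 4))                     # sampled perturbations
    fit = np.array([reward(mu + sigma * e) for e in eps])
    u = fit.argsort().argsort() / (pop - 1) - 0.5       # simplified rank utilities
    mu += 0.5 * sigma * (u @ eps) / pop                 # update of the mean
    sigma *= np.exp(0.25 * (u @ (eps ** 2 - 1)) / pop)  # update of the scale
print(np.round(mu, 2))  # should drift toward the target gains
```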
Density Adaptive Parallel Clustering | cs.DS cs.LG stat.ML | In this paper we introduce a new nearest-neighbours-based approach to
clustering and compare it with previous solutions. The resulting algorithm,
which takes inspiration from both DBSCAN and minimum-spanning-tree approaches,
is deterministic yet simpler and faster, and does not require setting the
number of clusters, k, in advance.
| Marcello La Rocca | null | 1407.3242 | null | null |
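To illustrate the minimum-spanning-tree side of this family of methods (not the paper's exact algorithm), a short scipy sketch that cuts unusually long MST edges and reads clusters off the remaining connected components:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_clusters(X, factor=2.0):
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    edges = mst[mst > 0]
    mst[mst > factor * np.median(edges)] = 0.0  # cut unusually long edges
    _, labels = connected_components(mst, directed=False)
    return labels

# Two well-separated Gaussian blobs should come back as two clusters.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(size=(30, 2)), rng.normal(size=(30, 2)) + 6.0])
print(mst_clusters(X))
```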
Multiple chaotic central pattern generators with learning for legged
locomotion and malfunction compensation | cs.AI cs.LG cs.NE cs.RO | An originally chaotic system can be controlled into various periodic
dynamics. When it is implemented into a legged robot's locomotion control as a
central pattern generator (CPG), sophisticated gait patterns arise so that the
robot can perform various walking behaviors. However, such a single chaotic CPG
controller has difficulties dealing with leg malfunction. Specifically, in the
scenarios presented here, its movement permanently deviates from the desired
trajectory. To address this problem, we extend the single chaotic CPG to
multiple CPGs with learning. The learning mechanism is based on a simulated
annealing algorithm. In a normal situation, the CPGs synchronize and their
dynamics are identical. With leg malfunction or disability, the CPGs lose
synchronization, leading to independent dynamics. In this case, the learning
mechanism is applied to automatically adjust the remaining legs' oscillation
frequencies so that the robot adapts its locomotion to deal with the
malfunction. As a consequence, the trajectory produced by the multiple chaotic
CPGs resembles the original trajectory far better than the one produced by only
a single CPG. The performance of the system is evaluated first in a physical
simulation of a quadruped as well as a hexapod robot and finally in a real
six-legged walking machine called AMOSII. The experimental results presented
here reveal that using multiple CPGs with learning is an effective approach for
adaptive locomotion generation where, for instance, different body parts have
to perform independent movements for malfunction compensation.
| Guanjiao Ren, Weihai Chen, Sakyasingha Dasgupta, Christoph
Kolodziejski, Florentin W\"org\"otter, Poramate Manoonpong | 10.1016/j.ins.2014.05.001 | 1407.3269 | null | null |
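The learning mechanism here is a simulated-annealing search over the remaining legs' oscillation frequencies. A generic, illustrative loop of that kind follows (our sketch; the cost function is a stand-in for the deviation from the desired trajectory):

```python
import numpy as np

def anneal(cost, f0, steps=500, T0=1.0, seed=0):
    """Generic simulated annealing over a vector of leg frequencies f."""
    rng = np.random.default_rng(seed)
    f, c = f0.copy(), cost(f0)
    for t in range(steps):
        T = T0 * 0.99 ** t                   # geometric cooling schedule
        cand = f + rng.normal(scale=0.05, size=f.shape)
        dc = cost(cand) - c
        if dc < 0 or rng.random() < np.exp(-dc / T):
            f, c = cand, c + dc              # always downhill, sometimes uphill
    return f

# Stand-in cost: pretend trajectory deviation is minimized at these frequencies.
best = np.array([1.2, 0.9, 1.1, 1.0, 0.8])
cost = lambda f: float(np.sum((f - best) ** 2))
print(np.round(anneal(cost, f0=np.ones(5)), 2))
```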