title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Local Region Sparse Learning for Image-on-Scalar Regression | stat.ML cs.LG | Identification of regions of interest (ROI) associated with certain diseases
has a great impact on public health. Imposing sparsity of pixel values and
extracting active regions simultaneously greatly complicate the image analysis.
We address these challenges by introducing a novel region-selection penalty in
the framework of image-on-scalar regression. Our penalty combines the Smoothly
Clipped Absolute Deviation (SCAD) regularization, enforcing sparsity, and the
SCAD of total variation (TV) regularization, enforcing spatial contiguity, into
one group, which segments contiguous spatial regions against zero-valued
background. An efficient algorithm is based on the alternating direction method of
multipliers (ADMM) which decomposes the non-convex problem into two iterative
optimization problems with explicit solutions. Another virtue of the proposed
method is that a divide and conquer learning algorithm is developed, thereby
allowing scaling to large images. Several examples are presented and the
experimental results are compared with other state-of-the-art approaches.
| Yao Chen, Xiao Wang, Linglong Kong and Hongtu Zhu | null | 1605.08501 | null | null |
SNN: Stacked Neural Networks | cs.LG cs.CV cs.NE | It has been proven that transfer learning provides an easy way to achieve
state-of-the-art accuracies on several vision tasks by training a simple
classifier on top of features obtained from pre-trained neural networks. The
goal of this work is to generate better features for transfer learning from
multiple publicly available pre-trained neural networks. To this end, we
propose a novel architecture called Stacked Neural Networks which leverages the
fast training time of transfer learning while simultaneously being much more
accurate. We show that using a stacked NN architecture can result in up to 8%
improvements in accuracy over state-of-the-art techniques using only one
pre-trained network for transfer learning. A second aim of this work is to make
network fine-tuning retain the generalizability of the base network to unseen
tasks. To this end, we propose a new technique called "joint fine-tuning" that
is able to give accuracies comparable to finetuning the same network
individually over two datasets. We also show that a jointly finetuned network
generalizes better to unseen tasks when compared to a network finetuned over a
single task.
| Milad Mohammadi, Subhasis Das | null | 1605.08512 | null | null |
Stochastic Optimization for Large-scale Optimal Transport | math.OC cs.LG math.NA | Optimal transport (OT) defines a powerful framework to compare probability
distributions in a geometrically faithful way. However, the practical impact of
OT is still limited because of its computational burden. We propose a new class
of stochastic optimization algorithms to cope with large-scale problems
routinely encountered in machine learning applications. These methods are able
to manipulate arbitrary distributions (either discrete or continuous) by simply
requiring the ability to draw samples from them, which is the typical setup in
high-dimensional learning problems. This alleviates the need to discretize
these densities, while giving access to provably convergent methods that output
the correct distance without discretization error. These algorithms rely on two
main ideas: (a) the dual OT problem can be re-cast as the maximization of an
expectation; (b) entropic regularization of the primal OT problem results in a
smooth dual optimization problem which can be addressed with algorithms
that have a provably faster convergence. We instantiate these ideas in three
different setups: (i) when comparing a discrete distribution to another, we
show that incremental stochastic optimization schemes can beat Sinkhorn's
algorithm, the current state-of-the-art finite dimensional OT solver; (ii) when
comparing a discrete distribution to a continuous density, a semi-discrete
reformulation of the dual program is amenable to averaged stochastic gradient
descent, leading to better performance than approximately solving the problem
by discretization; (iii) when dealing with two continuous densities, we
propose a stochastic gradient descent over a reproducing kernel Hilbert space
(RKHS). This is currently the only known method to solve this problem, apart
from computing OT on finite samples. We back up these claims on a set of
discrete, semi-discrete and continuous benchmark problems.
| Genevay Aude (MOKAPLAN, CEREMADE), Marco Cuturi, Gabriel Peyr\'e
(MOKAPLAN, CEREMADE), Francis Bach (SIERRA, LIENS) | null | 1605.08527 | null | null |
Deep API Learning | cs.SE cs.CL cs.LG cs.NE | Developers often wonder how to implement a certain functionality (e.g., how
to parse XML files) using APIs. Obtaining an API usage sequence based on an
API-related natural language query is very helpful in this regard. Given a
query, existing approaches utilize information retrieval models to search for
matching API sequences. These approaches treat queries and APIs as bag-of-words
(i.e., keyword matching or word-to-word alignment) and lack a deep
understanding of the semantics of the query.
We propose DeepAPI, a deep learning based approach to generate API usage
sequences for a given natural language query. Instead of a bag-of-words
assumption, it learns the sequence of words in a query and the sequence of
associated APIs. DeepAPI adapts a neural language model named RNN
Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length
context vector, and generates an API sequence based on the context vector. We
also augment the RNN Encoder-Decoder by considering the importance of
individual APIs. We empirically evaluate our approach with more than 7 million
annotated code snippets collected from GitHub. The results show that our
approach generates largely accurate API sequences and outperforms the related
approaches.
| Xiaodong Gu, Hongyu Zhang, Dongmei Zhang, Sunghun Kim | null | 1605.08535 | null | null |
Variational Bayesian Inference for Hidden Markov Models With
Multivariate Gaussian Output Distributions | cs.LG stat.ML | Hidden Markov Models (HMM) have been used for several years in many time
series analysis or pattern recognition tasks. HMM are often trained by means
of the Baum-Welch algorithm which can be seen as a special variant of an
expectation maximization (EM) algorithm. Second-order training techniques such
as Variational Bayesian Inference (VI) for probabilistic models regard the
parameters of the probabilistic models as random variables and define
distributions over these distribution parameters, hence the name of this
technique. VI can also be regarded as a special case of an EM algorithm. In
this article, we bring both together and train HMM with multivariate Gaussian
output distributions with VI. The article defines the new training technique
for HMM. An evaluation based on some case studies and a comparison to related
approaches is part of our ongoing work.
| Christian Gruhl, Bernhard Sick | null | 1605.08618 | null | null |
PAC-Bayesian Theory Meets Bayesian Inference | stat.ML cs.LG | We exhibit a strong link between frequentist PAC-Bayesian risk bounds and the
Bayesian marginal likelihood. That is, for the negative log-likelihood loss
function, we show that the minimization of PAC-Bayesian generalization risk
bounds maximizes the Bayesian marginal likelihood. This provides an alternative
explanation to the Bayesian Occam's razor criteria, under the assumption that
the data is generated by an i.i.d. distribution. Moreover, as the negative
log-likelihood is an unbounded loss function, we motivate and propose a
PAC-Bayesian theorem tailored for the sub-gamma loss family, and we show that
our approach is sound on classical Bayesian linear regression tasks.
| Pascal Germain (INRIA Paris), Francis Bach (INRIA Paris), Alexandre
Lacoste (Google), Simon Lacoste-Julien (INRIA Paris) | null | 1605.08636 | null | null |
An optimal algorithm for the Thresholding Bandit Problem | stat.ML cs.LG | We study a specific \textit{combinatorial pure exploration stochastic bandit
problem} where the learner aims at finding the set of arms whose means are
above a given threshold, up to a given precision, and \textit{for a fixed time
horizon}. We propose a parameter-free algorithm based on an original heuristic,
and prove that it is optimal for this problem by deriving matching upper and
lower bounds. To the best of our knowledge, this is the first non-trivial pure
exploration setting with \textit{fixed budget} for which optimal strategies are
constructed.
| Andrea Locatelli, Maurilio Gutzeit, and Alexandra Carpentier | null | 1605.08671 | null | null |
An algorithm with nearly optimal pseudo-regret for both stochastic and
adversarial bandits | cs.LG | We present an algorithm that achieves almost optimal pseudo-regret bounds
against adversarial and stochastic bandits. Against adversarial bandits the
pseudo-regret is $O(K\sqrt{n \log n})$ and against stochastic bandits the
pseudo-regret is $O(\sum_i (\log n)/\Delta_i)$. We also show that no algorithm
with $O(\log n)$ pseudo-regret against stochastic bandits can achieve
$\tilde{O}(\sqrt{n})$ expected regret against adaptive adversarial bandits.
This complements previous results of Bubeck and Slivkins (2012) that show
$\tilde{O}(\sqrt{n})$ expected adversarial regret with $O((\log n)^2)$
stochastic pseudo-regret.
| Peter Auer and Chao-Kai Chiang | null | 1605.08722 | null | null |
Faster Eigenvector Computation via Shift-and-Invert Preconditioning | cs.DS cs.LG math.NA math.OC | We give faster algorithms and improved sample complexities for estimating the
top eigenvector of a matrix $\Sigma$ -- i.e. computing a unit vector $x$ such
that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$:
Offline Eigenvector Estimation: Given an explicit $A \in \mathbb{R}^{n \times
d}$ with $\Sigma = A^TA$, we show how to compute an $\epsilon$ approximate top
eigenvector in time $\tilde O([nnz(A) + \frac{d*sr(A)}{gap^2} ]* \log
1/\epsilon )$ and $\tilde O([\frac{nnz(A)^{3/4} (d*sr(A))^{1/4}}{\sqrt{gap}} ]
* \log 1/\epsilon )$. Here $nnz(A)$ is the number of nonzeros in $A$, $sr(A)$
is the stable rank, and $gap$ is the relative eigengap. By separating the $gap$
dependence from the $nnz(A)$ term, our first runtime improves upon the
classical power and Lanczos methods. It also improves prior work using fast
subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c], giving
significantly better dependencies on $sr(A)$ and $\epsilon$. Our second running
time improves these further when $nnz(A) \le \frac{d*sr(A)}{gap^2}$.
Online Eigenvector Estimation: Given a distribution $D$ with covariance
matrix $\Sigma$ and a vector $x_0$ which is an $O(gap)$ approximate top
eigenvector for $\Sigma$, we show how to refine to an $\epsilon$ approximation
using $ O(\frac{var(D)}{gap*\epsilon})$ samples from $D$. Here $var(D)$ is a
natural notion of variance. Combining our algorithm with previous work to
initialize $x_0$, we obtain improved sample complexity and runtime results
under a variety of assumptions on $D$.
We achieve our results using a general framework that we believe is of
independent interest. We give a robust analysis of the classic method of
shift-and-invert preconditioning to reduce eigenvector computation to
approximately solving a sequence of linear systems. We then apply fast
stochastic variance reduced gradient (SVRG) based system solvers to achieve our
claims.
| Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco,
Praneeth Netrapalli, Aaron Sidford | null | 1605.08754 | null | null |
Stacking With Auxiliary Features | cs.CL cs.CV cs.LG | Ensembling methods are well known for improving prediction accuracy. However,
they are limited in the sense that they cannot discriminate among component
models effectively. In this paper, we propose stacking with auxiliary features
that learns to fuse relevant information from multiple systems to improve
performance. Auxiliary features enable the stacker to rely on systems that not
just agree on an output but also the provenance of the output. We demonstrate
our approach on three very different and difficult problems -- the Cold Start
Slot Filling, the Tri-lingual Entity Discovery and Linking and the ImageNet
object detection tasks. We obtain new state-of-the-art results on the first two
tasks and substantial improvements on the detection task, thus verifying the
power and generality of our approach.
| Nazneen Fatema Rajani and Raymond J. Mooney | null | 1605.08764 | null | null |
Asymptotic Analysis of Objectives based on Fisher Information in Active
Learning | stat.ML cs.LG | Obtaining labels can be costly and time-consuming. Active learning allows a
learning algorithm to intelligently query samples to be labeled for efficient
learning. Fisher information ratio (FIR) has been used as an objective for
selecting queries in active learning. However, little is known about the theory
behind the use of FIR for active learning. There is a gap between the
underlying theory and the motivation of its usage in practice. In this paper,
we attempt to fill this gap and provide a rigorous framework for analyzing
existing FIR-based active learning methods. In particular, we show that FIR can
be asymptotically viewed as an upper bound of the expected variance of the
log-likelihood ratio. Additionally, our analysis suggests a unifying framework
that not only enables us to make theoretical comparisons among the existing
querying methods based on FIR, but also allows us to give insight into the
development of new active learning approaches based on this objective.
| Jamshid Sourati, Murat Akcakaya, Todd K. Leen, Deniz Erdogmus,
Jennifer G. Dy | null | 1605.08798 | null | null |
Density estimation using Real NVP | cs.LG cs.AI cs.NE stat.ML | Unsupervised learning of probabilistic models is a central yet challenging
problem in machine learning. Specifically, designing models with tractable
learning, sampling, inference and evaluation is crucial in solving this task.
We extend the space of such models using real-valued non-volume preserving
(real NVP) transformations, a set of powerful invertible and learnable
transformations, resulting in an unsupervised learning algorithm with exact
log-likelihood computation, exact sampling, exact inference of latent
variables, and an interpretable latent space. We demonstrate its ability to
model natural images on four datasets through sampling, log-likelihood
evaluation and latent variable manipulations.
| Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio | null | 1605.08803 | null | null |
Muffled Semi-Supervised Learning | cs.LG stat.ML | We explore a novel approach to semi-supervised learning. This approach is
contrary to the common approach in that the unlabeled examples serve to
"muffle," rather than enhance, the guidance provided by the labeled examples.
We provide several variants of the basic algorithm and show experimentally that
they can achieve significantly higher AUC than boosted trees, random forests
and logistic regression when unlabeled examples are available.
| Akshay Balsubramani, Yoav Freund | null | 1605.08833 | null | null |
Dueling Bandits with Dependent Arms | cs.LG | We study dueling bandits with weak utility-based regret when preferences over
arms have a total order and carry observable feature vectors. The order is
assumed to be determined by these feature vectors, an unknown preference
vector, and a known utility function. This structure introduces dependence
between preferences for pairs of arms, and allows learning about the preference
over one pair of arms from the preference over another pair of arms. We propose
an algorithm for this setting called Comparing The Best (CTB), which we show
has constant expected cumulative weak utility-based regret. We provide a
Bayesian interpretation for CTB, an implementation appropriate for a small
number of arms, and an alternate implementation for many arms that can be used
when the input parameters satisfy a decomposability condition. We demonstrate
through numerical experiments that CTB with appropriate input parameters
outperforms all benchmarks considered.
| Bangrui Chen, Peter I. Frazier | null | 1605.08838 | null | null |
Online Bayesian Collaborative Topic Regression | cs.LG cs.IR | Collaborative Topic Regression (CTR) combines ideas of probabilistic matrix
factorization (PMF) and topic modeling (e.g., LDA) for recommender systems,
which has gained increasing success in many applications. Despite enjoying
many advantages, the existing CTR algorithms have some critical limitations.
First of all, they are often designed to work in a batch learning manner,
making them unsuitable to deal with streaming data or big data in real-world
recommender systems. Second, the document-specific topic proportions of LDA are
fed to the downstream PMF, but not the reverse, which is sub-optimal as the rating
information is not exploited in discovering the low-dimensional representation
of documents and thus can result in a sub-optimal representation for
prediction. In this paper, we propose a novel scheme of Online Bayesian
Collaborative Topic Regression (OBCTR) which is efficient and scalable for
learning from data streams. Particularly, we {\it jointly} optimize the
combined objective function of both PMF and LDA in an online learning fashion,
in which both PMF and LDA tasks can reinforce each other during the online
learning process. Our encouraging experimental results on real-world data
validate the effectiveness of the proposed method.
| Chenghao Liu, Tao Jin, Steven C.H. Hoi, Peilin Zhao, Jianling Sun | null | 1605.08872 | null | null |
Optimal Rates for Multi-pass Stochastic Gradient Methods | cs.LG math.OC stat.ML | We analyze the learning properties of the stochastic gradient method when
multiple passes over the data and mini-batches are allowed. We study how
regularization properties are controlled by the step-size, the number of passes
and the mini-batch size. In particular, we consider the square loss and show
that for a universal step-size choice, the number of passes acts as a
regularization parameter, and optimal finite sample bounds can be achieved by
early-stopping. Moreover, we show that larger step-sizes are allowed when
considering mini-batches. Our analysis is based on a unifying approach,
encompassing both batch and stochastic gradient methods as special cases. As a
byproduct, we derive optimal convergence results for batch gradient methods
(even in the non-attainable cases).
| Junhong Lin, Lorenzo Rosasco | null | 1605.08882 | null | null |
On Explore-Then-Commit Strategies | math.ST cs.LG stat.TH | We study the problem of minimising regret in two-armed bandit problems with
Gaussian rewards. Our objective is to use this simple setting to illustrate
that strategies based on an exploration phase (up to a stopping time) followed
by exploitation are necessarily suboptimal. The results hold regardless of
whether or not the difference in means between the two arms is known. Besides
the main message, we also refine existing deviation inequalities, which allow
us to design fully sequential strategies with finite-time regret guarantees
that are (a) asymptotically optimal as the horizon grows and (b) order-optimal
in the minimax sense. Furthermore we provide empirical evidence that the theory
also holds in practice and discuss extensions to non-Gaussian and
multiple-armed cases.
| Aur\'elien Garivier (IMT), Emilie Kaufmann (SEQUEL, CRIStAL, CNRS),
Tor Lattimore | null | 1605.08988 | null | null |
Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit
Problem | stat.ML cs.LG | We consider the problem of \textit{best arm identification} with a
\textit{fixed budget $T$}, in the $K$-armed stochastic bandit setting, with
arms distribution defined on $[0,1]$. We prove that any bandit strategy, for at
least one bandit problem characterized by a complexity $H$, will misidentify
the best arm with probability lower bounded by
$$\exp\Big(-\frac{T}{\log(K)H}\Big),$$ where $H$ is the sum for all sub-optimal
arms of the inverse of the squared gaps. Our result disproves formally the
general belief - coming from results in the fixed confidence setting - that
there must exist an algorithm for this problem whose probability of error is
upper bounded by $\exp(-T/H)$. This also proves that some existing strategies
based on the Successive Rejection of the arms are optimal - closing therefore
the current gap between upper and lower bounds for the fixed budget best arm
identification problem.
| Alexandra Carpentier and Andrea Locatelli | null | 1605.09004 | null | null |
TripleSpin - a generic compact paradigm for fast machine learning
computations | cs.LG stat.ML | We present a generic compact computational framework relying on structured
random matrices that can be applied to speed up several machine learning
algorithms with almost no loss of accuracy. The applications include new fast
LSH-based algorithms, efficient kernel computations via random feature maps,
convex optimization algorithms, quantization techniques and many more. Certain
models of the presented paradigm are even more compressible since they use
only bit matrices. This makes them suitable for deploying on mobile devices.
All our findings come with strong theoretical guarantees. In particular, as a
byproduct of the presented techniques and by using relatively new
Berry-Esseen-type CLT for random vectors, we give the first theoretical
guarantees for one of the most efficient existing LSH algorithms based on the
$\textbf{HD}_{3}\textbf{HD}_{2}\textbf{HD}_{1}$ structured matrix ("Practical
and Optimal LSH for Angular Distance"). These guarantees as well as theoretical
results for other aforementioned applications follow from the same general
theoretical principle that we present in the paper. Our structured family
contains as special cases all previously considered structured schemes,
including the recently introduced $P$-model. Experimental evaluation confirms
the accuracy and efficiency of TripleSpin matrices.
| Krzysztof Choromanski, Francois Fagan, Cedric Gouy-Pailler, Anne
Morvan, Tamas Sarlos, Jamal Atif | null | 1605.09046 | null | null |
Recycling Randomness with Structure for Sublinear time Kernel Expansions | cs.LG cs.NA stat.ML | We propose a scheme for recycling Gaussian random vectors into structured
matrices to approximate various kernel functions in sublinear time via random
embeddings. Our framework includes the Fastfood construction as a special case,
but also extends to Circulant, Toeplitz and Hankel matrices, and the broader
family of structured matrices that are characterized by the concept of
low-displacement rank. We introduce notions of coherence and graph-theoretic
structural constants that control the approximation quality, and prove
unbiasedness and low-variance properties of random feature maps that arise
within our framework. For the case of low-displacement matrices, we show how
the degree of structure and randomness can be controlled to reduce statistical
variance at the cost of increased computation and storage requirements.
Empirical results strongly support our theory and justify the use of a broader
family of structured matrices for scaling up kernel methods using random
features.
| Krzysztof Choromanski, Vikas Sindhwani | null | 1605.09049 | null | null |
Distributed Asynchronous Dual Free Stochastic Dual Coordinate Ascent | cs.LG | Primal-dual distributed optimization methods have broad applications in
large-scale machine learning. Previous primal-dual distributed methods are not
applicable when the dual formulation is not available, e.g. the
sum-of-non-convex objectives. Moreover, these algorithms and theoretical
analysis are based on the fundamental assumption that the computing speeds of
multiple machines in a cluster are similar. However, the straggler problem is
an unavoidable practical issue in the distributed system because of the
existence of slow machines. Therefore, the total computational time of the
distributed optimization methods is highly dependent on the slowest machine. In
this paper, we address these two issues by proposing a distributed asynchronous
dual free stochastic dual coordinate ascent algorithm for distributed
optimization. Our method does not need the dual formulation of the target
problem in the optimization. We tackle the straggler problem through
asynchronous communication and the negative effect of slow machines is
significantly alleviated. We also analyze the convergence rate of our method
and prove the linear convergence rate even if the individual functions in
objective are non-convex. Experiments on both convex and non-convex loss
functions are used to validate our statements.
| Zhouyuan Huo and Heng Huang | null | 1605.09066 | null | null |
A budget-constrained inverse classification framework for smooth
classifiers | cs.LG stat.ML | Inverse classification is the process of manipulating an instance such that
it is more likely to conform to a specific class. Past methods that address
such a problem have shortcomings. Greedy methods make changes that are overly
radical, often relying on data that is strictly discrete. Other methods rely on
certain data points, the presence of which cannot be guaranteed. In this paper
we propose a general framework and method that overcomes these and other
limitations. The formulation of our method can use any differentiable
classification function. We demonstrate the method by using logistic regression
and Gaussian kernel SVMs. We constrain the inverse classification to occur on
features that can actually be changed, each of which incurs an individual cost.
We further subject such changes to fall within a certain level of cumulative
change (budget). Our framework can also accommodate the estimation of
(indirectly changeable) features whose values change as a consequence of
actions taken. Furthermore, we propose two methods for specifying feature-value
ranges that result in different algorithmic behavior. We apply our method, and
a proposed sensitivity analysis-based benchmark method, to two freely available
datasets: Student Performance from the UCI Machine Learning Repository and a
real world cardiovascular disease dataset. The results obtained demonstrate the
validity and benefits of our framework and method.
| Michael T. Lash, Qihang Lin, W. Nick Street and Jennifer G. Robinson | null | 1605.09068 | null | null |
Spectral Methods for Correlated Topic Models | cs.LG stat.ML | In this paper, we propose guaranteed spectral methods for learning a broad
range of topic models, which generalize the popular Latent Dirichlet Allocation
(LDA). We overcome the limitation of LDA to incorporate arbitrary topic
correlations, by assuming that the hidden topic proportions are drawn from a
flexible class of Normalized Infinitely Divisible (NID) distributions. NID
distributions are generated through the process of normalizing a family of
independent Infinitely Divisible (ID) random variables. The Dirichlet
distribution is a special case obtained by normalizing a set of Gamma random
variables. We prove that this flexible topic model class can be learned via
spectral methods using only moments up to the third order, with (low order)
polynomial sample and computational complexity. The proof is based on a key new
technique derived here that allows us to diagonalize the moments of the NID
distribution through an efficient procedure that requires evaluating only
univariate integrals, despite the fact that we are handling high dimensional
multivariate moments. In order to assess the performance of our proposed Latent
NID topic model, we use two real datasets of articles collected from the New
York Times and PubMed. Our experiments yield improved perplexity on both datasets
compared with the baseline.
| Forough Arabshahi, Animashree Anandkumar | null | 1605.09080 | null | null |
One-Pass Learning with Incremental and Decremental Features | cs.LG | In many real tasks the features are evolving, with some features vanishing
and some other features being augmented. For example, in environment
monitoring some sensors expired whereas some new ones deployed; in mobile game
recommendation some games dropped whereas some new ones added. Learning with
such incremental and decremental features is crucial but rarely studied,
particularly when the data arrive as a stream and it is thus infeasible to
keep the whole dataset for optimization. In this paper, we study this challenging
problem and present the OPID approach. Our approach attempts to compress
important information of vanished features into functions of survived features,
and then expand to include the augmented features. It is a one-pass learning
approach, which only needs to scan each instance once and does not need to
store the whole dataset, and thus suits the evolving streaming-data setting. The
effectiveness of our approach is validated theoretically and empirically.
| Chenping Hou and Zhi-Hua Zhou | null | 1605.09082 | null | null |
Stochastic Function Norm Regularization of Deep Networks | cs.LG cs.CV stat.ML | Deep neural networks have had an enormous impact on image analysis.
State-of-the-art training methods, based on weight decay and DropOut, result in
impressive performance when a very large training set is available. However,
they tend to overfit severely on small data sets. Indeed, the
available regularization methods deal with the complexity of the network
function only indirectly. In this paper, we study the feasibility of directly
using the $L_2$ function norm for regularization. Two methods to integrate this
new regularization in the stochastic backpropagation are proposed. Moreover,
the convergence of these new algorithms is studied. We finally show that they
outperform the state-of-the-art methods in the low sample regime on benchmark
datasets (MNIST and CIFAR10). The obtained results demonstrate very clear
improvement, especially in the context of small sample regimes with data lying
on a low-dimensional manifold. Source code of the method can be found at
\url{https://github.com/AmalRT/DNN_Reg}.
| Amal Rannen Triki and Matthew B. Blaschko | null | 1605.09085 | null | null |
The Bayesian Linear Information Filtering Problem | cs.LG | We present a Bayesian sequential decision-making formulation of the
information filtering problem, in which an algorithm presents items (news
articles, scientific papers, tweets) arriving in a stream, and learns relevance
from user feedback on presented items. We model user preferences using a
Bayesian linear model, similar in spirit to a Bayesian linear bandit. We
compute a computational upper bound on the value of the optimal policy, which
allows computing an optimality gap for implementable policies. We then use this
analysis as motivation in introducing a pair of new Decompose-Then-Decide (DTD)
heuristic policies, DTD-Dynamic-Programming (DTD-DP) and
DTD-Upper-Confidence-Bound (DTD-UCB). We compare DTD-DP and DTD-UCB against
several benchmarks on real and simulated data, demonstrating significant
improvement, and show that the achieved performance is close to the upper
bound.
| Bangrui Chen, Peter I. Frazier | null | 1605.09088 | null | null |
ParMAC: distributed optimisation of nested functions, with application
to learning binary autoencoders | cs.LG cs.DC cs.NE math.OC stat.ML | Many powerful machine learning models are based on the composition of
multiple processing layers, such as deep nets, which gives rise to nonconvex
objective functions. A general, recent approach to optimise such "nested"
functions is the method of auxiliary coordinates (MAC). MAC introduces an
auxiliary coordinate for each data point in order to decouple the nested model
into independent submodels. This decomposes the optimisation into steps that
alternate between training single layers and updating the coordinates. It has
the advantage that it reuses existing single-layer algorithms, introduces
parallelism, and does not need to use chain-rule gradients, so it works with
nondifferentiable layers. With large-scale problems, or when distributing the
computation is necessary for faster training, the dataset may not fit in a
single machine. It is then essential to limit the amount of communication
between machines so it does not obliterate the benefit of parallelism. We
describe a general way to achieve this, ParMAC. ParMAC works on a cluster of
processing machines with a circular topology and alternates two steps until
convergence: one step trains the submodels in parallel using stochastic
updates, and the other trains the coordinates in parallel. Only submodel
parameters, no data or coordinates, are ever communicated between machines.
ParMAC exhibits high parallelism, low communication overhead, and facilitates
data shuffling, load balancing, fault tolerance and streaming data processing.
We study the convergence of ParMAC and propose a theoretical model of its
runtime and parallel speedup. We develop ParMAC to learn binary autoencoders
for fast, approximate image retrieval. We implement it in MPI in a distributed
system and demonstrate nearly perfect speedups in a 128-processor cluster with
a training set of 100 million high-dimensional points.
| Miguel \'A. Carreira-Perpi\~n\'an and Mehdi Alizadeh | null | 1605.09114 | null | null |
Control of Memory, Active Perception, and Action in Minecraft | cs.AI cs.CV cs.LG | In this paper, we introduce a new set of reinforcement learning (RL) tasks in
Minecraft (a flexible 3D world). We then use these tasks to systematically
compare and contrast existing deep reinforcement learning (DRL) architectures
with our new memory-based DRL architectures. These tasks are designed to
emphasize, in a controllable manner, issues that pose challenges for RL methods
including partial observability (due to first-person visual observations),
delayed rewards, high-dimensional visual observations, and the need to use
active perception in a correct manner so as to perform well in the tasks. While
these tasks are conceptually simple to describe, by virtue of having all of
these challenges simultaneously they are difficult for current DRL
architectures. Additionally, we evaluate the generalization performance of the
architectures on environments not used during training. The experimental
results show that our new architectures generalize to unseen environments
better than existing DRL architectures.
| Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, Honglak Lee | null | 1605.09128 | null | null |
Classification under Streaming Emerging New Classes: A Solution using
Completely Random Trees | cs.LG | This paper investigates an important problem in stream mining, i.e.,
classification under streaming emerging new classes or SENC. The common
approach is to treat it as a classification problem and solve it using either a
supervised learner or a semi-supervised learner. We propose an alternative
approach by using unsupervised learning as the basis to solve this problem. The
SENC problem can be decomposed into three subproblems: detecting emerging new
classes, classifying for known classes, and updating models to enable
classification of instances of the new class and detection of more emerging new
classes. The proposed method employs completely random trees which have been
shown to work well in unsupervised learning and supervised learning
independently in the literature. This is the first time, as far as we know,
that completely random trees are used as a single common core to solve all
three subproblems: unsupervised learning, supervised learning and model update
in data streams. We show that the proposed unsupervised-learning-focused method
often achieves significantly better outcomes than existing
classification-focused methods.
| Xin Mu and Kai Ming Ting and Zhi-Hua Zhou | null | 1605.09131 | null | null |
Hyperspectral Image Classification with Support Vector Machines on
Kernel Distribution Embeddings | cs.CV cs.LG stat.ML | We propose a novel approach for pixel classification in hyperspectral images,
leveraging both the spatial and spectral information in the data. The
introduced method relies on a recently proposed framework for learning on
distributions -- by representing them with mean elements in reproducing kernel
Hilbert spaces (RKHS) and formulating a classification algorithm therein. In
particular, we associate each pixel with an empirical distribution of its
neighbouring pixels, a judicious representation of which in an RKHS, in
conjunction with the spectral information contained in the pixel itself, gives a
new explicit set of features that can be fed into a suite of standard
classification techniques -- we opt for a well-established framework of support
vector machines (SVM). Furthermore, the computational complexity is reduced via
random Fourier features formalism. We study the consistency and the convergence
rates of the proposed method and the experiments demonstrate strong performance
on hyperspectral data with gains in comparison to the state-of-the-art results.
| Gianni Franchi, Jesus Angulo, and Dino Sejdinovic | null | 1605.09136 | null | null |
Does Multimodality Help Human and Machine for Translation and Image
Captioning? | cs.CL cs.LG cs.NE | This paper presents the systems developed by LIUM and CVC for the WMT16
Multimodal Machine Translation challenge. We explored various comparative
methods, namely phrase-based systems and attentional recurrent neural networks
models trained using monomodal or multimodal data. We also performed a human
evaluation in order to estimate the usefulness of multimodal data for human
machine translation and image description generation. Our systems obtained the
best results for both tasks according to the automatic evaluation metrics BLEU
and METEOR.
| Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes
Garc\'ia-Mart\'inez, Fethi Bougares, Lo\"ic Barrault, Joost van de Weijer | 10.18653/v1/W16-2358 | 1605.09186 | null | null |
Forest Floor Visualizations of Random Forests | stat.ML cs.LG | We propose a novel methodology, forest floor, to visualize and interpret
random forest (RF) models. RF is a popular and useful tool for non-linear
multi-variate classification and regression, which yields a good trade-off
between robustness (low variance) and adaptiveness (low bias). Direct
interpretation of an RF model is difficult, as the explicit ensemble model of
hundreds of deep trees is complex. Nonetheless, it is possible to visualize an
RF model fit by its mapping from feature space to prediction space. Hereby the
user is first presented with the overall geometrical shape of the model
structure, and when needed one can zoom in on local details. Dimensional
reduction by projection is used to visualize high dimensional shapes. The
traditional method to visualize RF model structure, partial dependence plots,
achieves this by averaging multiple parallel projections. We suggest to first
use feature contributions, a method to decompose trees by splitting features,
and subsequently perform projections. The advantage of forest floor over
partial dependence plots is that interactions are not masked by averaging. As a
consequence, it is possible to locate interactions, which are not visualized in
a given projection. Furthermore, we introduce: a goodness-of-visualization
measure, use of colour gradients to identify interactions and an out-of-bag
cross validated variant of feature contributions.
| Soeren H. Welling, Hanne H.F. Refsgaard, Per B. Brockhoff, Line H.
Clemmensen | null | 1605.09196 | null | null |
Deep Reinforcement Learning Radio Control and Signal Detection with
KeRLym, a Gym RL Agent | cs.LG | This paper presents research in progress investigating the viability and
adaptation of reinforcement learning using deep neural network based function
approximation for the task of radio control and signal detection in the
wireless domain. We demonstrate a successful initial method for radio control
which allows naive learning of search without the need for expert features,
heuristics, or search strategies. We also introduce Kerlym, an open Keras based
reinforcement learning agent collection for OpenAI's Gym.
| Timothy J. O'Shea, T. Charles Clancy | null | 1605.09221 | null | null |
Learning Combinatorial Functions from Pairwise Comparisons | cs.LG cs.DS | A large body of work in machine learning has focused on the problem of
learning a close approximation to an underlying combinatorial function, given a
small set of labeled examples. However, for real-valued functions, cardinal
labels might not be accessible, or it may be difficult for an expert to
consistently assign real-valued labels over the entire set of examples. For
instance, it is notoriously hard for consumers to reliably assign values to
bundles of merchandise. Instead, it might be much easier for a consumer to
report which of two bundles she likes better. With this motivation in mind, we
consider an alternative learning model, wherein the algorithm must learn the
underlying function up to pairwise comparisons, from pairwise comparisons. In
this model, we present a series of novel algorithms that learn over a wide
variety of combinatorial function classes. These range from graph functions to
broad classes of valuation functions that are fundamentally important in
microeconomic theory, the analysis of social networks, and machine learning,
such as coverage, submodular, XOS, and subadditive functions, as well as
functions with sparse Fourier support.
| Maria-Florina Balcan, Ellen Vitercik, Colin White | null | 1605.09227 | null | null |
Tradeoffs between Convergence Speed and Reconstruction Accuracy in
Inverse Problems | cs.NA cs.LG cs.NE math.OC stat.ML | Solving inverse problems with iterative algorithms is popular, especially for
large data. Due to time constraints, the number of possible iterations is
usually limited, potentially affecting the achievable accuracy. Given an error
one is willing to tolerate, an important question is whether it is possible to
modify the original iterations to obtain faster convergence to a minimizer
achieving the allowed error without increasing the computational cost of each
iteration considerably. Relying on recent recovery techniques developed for
settings in which the desired signal belongs to some low-dimensional set, we
show that using a coarse estimate of this set may lead to faster convergence at
the cost of an additional reconstruction error related to the accuracy of the
set approximation. Our theory ties to recent advances in sparse recovery,
compressed sensing, and deep learning. Particularly, it may provide a possible
explanation to the successful approximation of the l1-minimization solution by
neural networks with layers representing iterations, as practiced in the
learned iterative shrinkage-thresholding algorithm (LISTA).
| Raja Giryes and Yonina C. Eldar and Alex M. Bronstein and Guillermo
Sapiro | null | 1605.09232 | null | null |
Internal Guidance for Satallax | cs.LO cs.AI cs.LG | We propose a new internal guidance method for automated theorem provers based
on the given-clause algorithm. Our method influences the choice of unprocessed
clauses using positive and negative examples from previous proofs. To this end,
we present an efficient scheme for Naive Bayesian classification by
generalising label occurrences to types with monoid structure. This makes it
possible to extend existing fast classifiers, which consider only positive
examples, with negative ones. We implement the method in the higher-order logic
prover Satallax, where we modify the delay with which propositions are
processed. We evaluated our method on a simply-typed higher-order logic version
of the Flyspeck project, where it solves 26% more problems than Satallax
without internal guidance.
| Michael F\"arber and Chad Brown | null | 1605.09293 | null | null |
k2-means for fast and accurate large scale clustering | cs.LG cs.CV | We propose k^2-means, a new clustering method which efficiently copes with
large numbers of clusters and achieves low energy solutions. k^2-means builds
upon the standard k-means (Lloyd's algorithm) and combines a new strategy to
accelerate the convergence with a new low time complexity divisive
initialization. The accelerated convergence is achieved through only looking at
k_n nearest clusters and using triangle inequality bounds in the assignment
step while the divisive initialization employs an optimal 2-clustering along a
direction. The worst-case time complexity per iteration of our k^2-means is
O(nk_nd+k^2d), where d is the dimension of the n data points and k is the
number of clusters and usually k_n << k << n. Compared to k-means' O(nkd)
complexity, our k^2-means complexity is significantly lower, at the expense of
slightly increasing the memory complexity by O(nk_n+k^2). In our extensive
experiments k^2-means is order(s) of magnitude faster than standard methods in
computing accurate clusterings on several standard datasets and settings with
hundreds of clusters and high dimensional data. Moreover, the proposed divisive
initialization generally leads to clustering energies comparable to those
achieved with the standard k-means++ initialization, while being significantly
faster.
| Eirikur Agustsson, Radu Timofte and Luc Van Gool | null | 1605.09299 | null | null |
Synthesizing the preferred inputs for neurons in neural networks via
deep generator networks | cs.NE cs.AI cs.CV cs.LG | Deep neural networks (DNNs) have demonstrated state-of-the-art results on
many pattern recognition tasks, especially vision classification problems.
Understanding the inner workings of such computational brains is both
fascinating basic science that is interesting in its own right - similar to why
we study the human brain - and will enable researchers to further improve DNNs.
One path to understanding how a neural network functions internally is to study
what each of its neurons has learned to detect. One such method is called
activation maximization (AM), which synthesizes an input (e.g. an image) that
highly activates a neuron. Here we dramatically improve the qualitative state
of the art of activation maximization by harnessing a powerful, learned prior:
a deep generator network (DGN). The algorithm (1) generates qualitatively
state-of-the-art synthetic images that look almost real, (2) reveals the
features learned by each neuron in an interpretable way, (3) generalizes well
to new datasets and somewhat well to different network architectures without
requiring the prior to be relearned, and (4) can be considered as a
high-quality generative method (in this case, by generating novel, creative,
interesting, recognizable images).
| Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, Jeff
Clune | null | 1605.09304 | null | null |
Parametric Exponential Linear Unit for Deep Convolutional Neural
Networks | cs.LG cs.CV cs.NE | Object recognition is an important task for improving the ability of visual
systems to perform complex scene understanding. Recently, the Exponential
Linear Unit (ELU) has been proposed as a key component for managing bias shift
in Convolutional Neural Networks (CNNs), but defines a parameter that must be
set by hand. In this paper, we propose learning a parameterization of ELU in
order to learn the proper activation shape at each layer in the CNNs. Our
results on the MNIST, CIFAR-10/100 and ImageNet datasets using the NiN,
Overfeat, All-CNN and ResNet networks indicate that our proposed Parametric ELU
(PELU) has better performance than the non-parametric ELU. We have observed as
much as a 7.28% relative error improvement on ImageNet with the NiN network,
with only 0.0003% parameter increase. Our visual examination of the non-linear
behaviors adopted by Vgg using PELU shows that the network took advantage of
the added flexibility by learning different activations at different layers.
| Ludovic Trottier, Philippe Gigu\`ere, Brahim Chaib-draa | null | 1605.09332 | null | null |
Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs | cs.LG math.OC stat.ML | In this paper, we propose several improvements on the block-coordinate
Frank-Wolfe (BCFW) algorithm from Lacoste-Julien et al. (2013) recently used to
optimize the structured support vector machine (SSVM) objective in the context
of structured prediction, though it has wider applications. The key intuition
behind our improvements is that the estimates of block gaps maintained by BCFW
reveal the block suboptimality that can be used as an adaptive criterion.
First, we sample objects at each iteration of BCFW in an adaptive non-uniform
way via gap-based sampling. Second, we incorporate pairwise and away-step
variants of Frank-Wolfe into the block-coordinate setting. Third, we cache
oracle calls with a cache-hit criterion based on the block gaps. Fourth, we
provide the first method to compute an approximate regularization path for
SSVM. Finally, we provide an exhaustive empirical evaluation of all our methods
on four structured prediction datasets.
| Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K.
Dokania, Simon Lacoste-Julien | null | 1605.09346 | null | null |
Review of Fall Detection Techniques: A Data Availability Perspective | cs.LG | A fall is an abnormal activity that occurs rarely; however, failing to
identify falls can have serious health and safety implications for an
individual. Due to the rarity of occurrence of falls, there may be insufficient
or no training data available for them. Therefore, standard supervised machine
learning methods may not be directly applied to handle this problem. In this
paper, we present a taxonomy for the study of fall detection from the
perspective of availability of fall data. The proposed taxonomy is independent
of the type of sensors used and specific feature extraction/selection methods.
The taxonomy identifies different categories of classification methods for the
study of fall detection based on the availability of fall data when training
the classifiers. Then, we present a comprehensive literature review within
those categories and identify the approach of treating a fall as an abnormal
activity to be a plausible research direction. We conclude our paper by
discussing several open research problems in the field and pointers for future
research.
| Shehroz S. Khan, Jesse Hoey | 10.1016/j.medengphy.2016.10.014 | 1605.09351 | null | null |
Unsupervised Discovery of El Nino Using Causal Feature Learning on
Microlevel Climate Data | stat.ML cs.AI cs.LG physics.ao-ph | We show that the climate phenomena of El Nino and La Nina arise naturally as
states of macro-variables when our recent causal feature learning framework
(Chalupka 2015, Chalupka 2016) is applied to micro-level measures of zonal wind
(ZW) and sea surface temperatures (SST) taken over the equatorial band of the
Pacific Ocean. The method identifies these unusual climate states on the basis
of the relation between ZW and SST patterns without any input about past
occurrences of El Nino or La Nina. The simpler alternatives of (i) clustering
the SST fields while disregarding their relationship with ZW patterns, or (ii)
clustering the joint ZW-SST patterns, do not discover El Nino. We discuss the
degree to which our method supports a causal interpretation and use a
low-dimensional toy example to explain its success over other clustering
approaches. Finally, we propose a new robust and scalable alternative to our
original algorithm (Chalupka 2016), which circumvents the need for
high-dimensional density learning.
| Krzysztof Chalupka, Tobias Bischoff, Pietro Perona, Frederick
Eberhardt | null | 1605.09370 | null | null |
End-to-End Instance Segmentation with Recurrent Attention | cs.LG cs.CV | While convolutional neural networks have gained impressive success recently
in solving structured prediction problems such as semantic segmentation, it
remains a challenge to differentiate individual object instances in the scene.
Instance segmentation is very important in a variety of applications, such as
autonomous driving, image captioning, and visual question answering. Techniques
that combine large graphical models with low-level vision have been proposed to
address this problem; however, we propose an end-to-end recurrent neural
network (RNN) architecture with an attention mechanism to model a human-like
counting process, and produce detailed instance segmentations. The network is
jointly trained to sequentially produce regions of interest as well as a
dominant object segmentation within each region. The proposed model achieves
competitive results on the CVPPP, KITTI, and Cityscapes datasets.
| Mengye Ren, Richard S. Zemel | null | 1605.09410 | null | null |
Evaluating Crowdsourcing Participants in the Absence of Ground-Truth | cs.HC cs.LG | Given a supervised/semi-supervised learning scenario where multiple
annotators are available, we consider the problem of identification of
adversarial or unreliable annotators.
| Ramanathan Subramanian, Romer Rosales, Glenn Fung, Jennifer Dy | null | 1605.09432 | null | null |
A Novel Fault Classification Scheme Based on Least Square SVM | cs.SY cs.LG | This paper presents a novel approach for fault classification and section
identification in a series compensated transmission line based on least square
support vector machine. The current signal corresponding to one-fourth of the
post fault cycle is used as input to proposed modular LS-SVM classifier. The
proposed scheme uses four binary classifier; three for selection of three
phases and fourth for ground detection. The proposed classification scheme is
found to be accurate and reliable in presence of noise as well. The simulation
results validate the efficacy of proposed scheme for accurate classification of
fault in a series compensated transmission line.
| Harishchandra Dubey, A.K. Tiwari, Nandita, P.K. Ray, S.R. Mohanty and
Nand Kishor | 10.1109/SCES.2012.6199047 | 1605.09444 | null | null |
Training Auto-encoders Effectively via Eliminating Task-irrelevant Input
Variables | cs.LG | Auto-encoders are often used as building blocks of deep network classifiers to
learn feature extractors, but task-irrelevant information in the input data may
lead to bad extractors and result in poor generalization performance of the
network. In this paper,via dropping the task-irrelevant input variables the
performance of auto-encoders can be obviously improved .Specifically, an
importance-based variable selection method is proposed to aim at finding the
task-irrelevant input variables and dropping them.It firstly estimates
importance of each variable,and then drops the variables with importance value
lower than a threshold. In order to obtain better performance, the method can
be employed for each layer of stacked auto-encoders. Experimental results show
that when combined with our method the stacked denoising auto-encoders achieves
significantly improved performance on three challenging datasets.
| Hui Shen, Dehua Li, Hong Wu, Zhaoxiang Zang | null | 1605.09458 | null | null |
A Neural Autoregressive Approach to Collaborative Filtering | cs.IR cs.LG stat.ML | This paper proposes CF-NADE, a neural autoregressive architecture for
collaborative filtering (CF) tasks, which is inspired by the Restricted
Boltzmann Machine (RBM) based CF model and the Neural Autoregressive
Distribution Estimator (NADE). We first describe the basic CF-NADE model for CF
tasks. Then we propose to improve the model by sharing parameters between
different ratings. A factored version of CF-NADE is also proposed for better
scalability. Furthermore, we take the ordinal nature of the preferences into
consideration and propose an ordinal cost to optimize CF-NADE, which shows
superior performance. Finally, CF-NADE can be extended to a deep model, with
only moderately increased computational complexity. Experimental results show
that CF-NADE with a single hidden layer beats all previous state-of-the-art
methods on MovieLens 1M, MovieLens 10M, and Netflix datasets, and adding more
hidden layers can further improve the performance.
| Yin Zheng, Bangsheng Tang, Wenkui Ding, Hanning Zhou | null | 1605.09477 | null | null |
Deep convolutional neural networks for predominant instrument
recognition in polyphonic music | cs.SD cs.CV cs.LG cs.NE | Identifying musical instruments in polyphonic music recordings is a
challenging but important problem in the field of music information retrieval.
It enables music search by instrument, helps recognize musical genres, or can
make music transcription easier and more accurate. In this paper, we present a
convolutional neural network framework for predominant instrument recognition
in real-world polyphonic music. We train our network from fixed-length music
excerpts with a single-labeled predominant instrument and estimate an arbitrary
number of predominant instruments from an audio signal with a variable length.
To obtain the audio-excerpt-wise result, we aggregate multiple outputs from
sliding windows over the test audio. In doing so, we investigated two different
aggregation methods: one takes the average for each instrument and the other
takes the instrument-wise sum followed by normalization. In addition, we
conducted extensive experiments on several important factors that affect the
performance, including analysis window size, identification threshold, and
activation functions for neural networks to find the optimal set of parameters.
Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we
found that convolutional neural networks are more robust than conventional
methods that exploit spectral features and source separation with support
vector machines. Experimental results showed that the proposed convolutional
network architecture obtained F1 measures of 0.602 (micro) and 0.503 (macro),
achieving 19.6% and 16.4% performance improvements, respectively,
compared with other state-of-the-art algorithms.
| Yoonchang Han, Jaehun Kim, Kyogu Lee | 10.1109/TASLP.2016.2632307 | 1605.09507 | null | null |
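The two window-aggregation strategies mentioned above can be made concrete with a small sketch; `window_probs`, the threshold, and the normalisation used in the second strategy are assumptions for illustration rather than the paper's exact choices.

```python
import numpy as np

def aggregate_average(window_probs):
    # Average each instrument's score over all sliding windows.
    return window_probs.mean(axis=0)

def aggregate_sum_normalize(window_probs):
    # Instrument-wise sum followed by a normalisation (max-normalisation here;
    # the exact normalisation in the paper may differ).
    s = window_probs.sum(axis=0)
    return s / s.max()

rng = np.random.default_rng(0)
window_probs = rng.random((12, 11))    # 12 windows, 11 instruments (hypothetical outputs)
threshold = 0.5                        # placeholder identification threshold
for agg in (aggregate_average, aggregate_sum_normalize):
    scores = agg(window_probs)
    print(agg.__name__, np.where(scores >= threshold)[0])
```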
Kernel Mean Embedding of Distributions: A Review and Beyond | stat.ML cs.LG | A Hilbert space embedding of a distribution---in short, a kernel mean
embedding---has recently emerged as a powerful tool for machine learning and
inference. The basic idea behind this framework is to map distributions into a
reproducing kernel Hilbert space (RKHS) in which the whole arsenal of kernel
methods can be extended to probability measures. It can be viewed as a
generalization of the original "feature map" common to support vector machines
(SVMs) and other kernel methods. While initially closely associated with the
latter, it has meanwhile found application in fields ranging from kernel
machines and probabilistic modeling to statistical inference, causal discovery,
and deep learning. The goal of this survey is to give a comprehensive review of
existing work and recent advances in this research area, and to discuss the
most challenging issues and open problems that could lead to new research
directions. The survey begins with a brief introduction to the RKHS and
positive definite kernels which forms the backbone of this survey, followed by
a thorough discussion of the Hilbert space embedding of marginal distributions,
theoretical guarantees, and a review of its applications. The embedding of
distributions enables us to apply RKHS methods to probability measures which
prompts a wide range of applications such as kernel two-sample testing,
independence testing, and learning on distributional data. Next, we discuss the
Hilbert space embedding for conditional distributions, give theoretical
insights, and review some applications. The conditional mean embedding enables
us to perform sum, product, and Bayes' rules---which are ubiquitous in
graphical models, probabilistic inference, and reinforcement learning---in a
non-parametric way. We then discuss relationships between this framework and
other related areas. Lastly, we give some suggestions on future research
directions.
| Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Bernhard
Sch\"olkopf | 10.1561/2200000060 | 1605.09522 | null | null |
Robust Deep-Learning-Based Road-Prediction for Augmented Reality
Navigation Systems | cs.CV cs.LG cs.RO | This paper proposes an approach that predicts the road course from camera
sensors leveraging deep learning techniques. Road pixels are identified by
training a multi-scale convolutional neural network on a large number of
full-scene-labeled night-time road images including adverse weather conditions.
A framework is presented that applies the proposed approach to longer distance
road course estimation, which is the basis for an augmented reality navigation
application. In this framework long range sensor data (radar) and data from a
map database are fused with short range sensor data (camera) to produce a
precise longitudinal and lateral localization and road course estimation. The
proposed approach reliably detects roads with and without lane markings and
thus increases the robustness and availability of road course estimations and
augmented reality navigation. Evaluations on an extensive set of high precision
ground truth data taken from a differential GPS and an inertial measurement
unit show that the proposed approach reaches state-of-the-art performance
without the limitation of requiring existing lane markings.
| Matthias Limmer, Julian Forster, Dennis Baudach, Florian Sch\"ule,
Roland Schweiger, Hendrik P.A. Lensch | null | 1605.09533 | null | null |
Attention Correctness in Neural Image Captioning | cs.CV cs.CL cs.LG | Attention mechanisms have recently been introduced in deep learning for
various tasks in natural language processing and computer vision. But despite
their popularity, the "correctness" of the implicitly-learned attention maps
has only been assessed qualitatively by visualization of several examples. In
this paper we focus on evaluating and improving the correctness of attention in
neural image captioning models. Specifically, we propose a quantitative
evaluation metric for the consistency between the generated attention maps and
human annotations, using recently released datasets with alignment between
regions in images and entities in captions. We then propose novel models with
different levels of explicit supervision for learning attention maps during
training. The supervision can be strong when alignment between regions and
caption entities is available, or weak when only object segments and
categories are provided. We show on the popular Flickr30k and COCO datasets
that introducing supervision of attention maps during training solidly improves
both attention correctness and caption quality, showing the promise of making
machine perception more human-like.
| Chenxi Liu, Junhua Mao, Fei Sha, Alan Yuille | null | 1605.09553 | null | null |
Adaptive Learning Rate via Covariance Matrix Based Preconditioning for
Deep Neural Networks | cs.LG cs.AI stat.ML | Adaptive learning rate algorithms such as RMSProp are widely used for
training deep neural networks. RMSProp offers efficient training since it uses
first order gradients to approximate Hessian-based preconditioning. However,
since the first order gradients include noise caused by stochastic
optimization, the approximation may be inaccurate. In this paper, we propose a
novel adaptive learning rate algorithm called SDProp. Its key idea is effective
handling of the noise by preconditioning based on covariance matrix. For
various neural networks, our approach is more efficient and effective than
RMSProp and its variant.
| Yasutoshi Ida, Yasuhiro Fujiwara, Sotetsu Iwamura | 10.24963/ijcai.2017/267 | 1605.09593 | null | null |
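For context, the following is a minimal sketch of the baseline RMSProp update referenced above; the proposed SDProp would replace the squared-gradient accumulator with covariance-based statistics of the gradient noise, whose exact form is not reproduced here. Hyperparameter values are placeholders.

```python
import numpy as np

def rmsprop_step(theta, grad, state, lr=1e-3, rho=0.9, eps=1e-8):
    # `state` holds a running average of squared gradients (the diagonal
    # preconditioner that SDProp replaces with covariance-based statistics).
    state = rho * state + (1 - rho) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(state) + eps)
    return theta, state

theta = np.zeros(3)
state = np.zeros(3)
for _ in range(100):
    grad = 2 * (theta - np.array([1.0, -2.0, 0.5]))   # toy quadratic objective
    theta, state = rmsprop_step(theta, grad, state, lr=0.05)
print(theta)
```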
Horizontally Scalable Submodular Maximization | stat.ML cs.DC cs.DM cs.LG | A variety of large-scale machine learning problems can be cast as instances
of constrained submodular maximization. Existing approaches for distributed
submodular maximization have a critical drawback: The capacity - number of
instances that can fit in memory - must grow with the data set size. In
practice, while one can provision many machines, the capacity of each machine
is limited by physical constraints. We propose a truly scalable approach for
distributed submodular maximization under fixed capacity. The proposed
framework applies to a broad class of algorithms and constraints and provides
theoretical guarantees on the approximation factor for any available capacity.
We empirically evaluate the proposed algorithm on a variety of data sets and
demonstrate that it achieves performance competitive with the centralized
greedy solution.
| Mario Lucic and Olivier Bachem and Morteza Zadimoghaddam and Andreas
Krause | null | 1605.09619 | null | null |
Average-case Hardness of RIP Certification | cs.LG cs.CC math.ST stat.ML stat.TH | The restricted isometry property (RIP) for design matrices gives guarantees
for optimal recovery in sparse linear models. It is of high interest in
compressed sensing and statistical learning. This property is particularly
important for computationally efficient recovery methods. As a consequence,
even though it is in general NP-hard to check that RIP holds, there have been
substantial efforts to find tractable proxies for it. These would allow the
construction of RIP matrices and the polynomial-time verification of RIP given
an arbitrary matrix. We consider the framework of average-case certifiers, that
never wrongly declare that a matrix is RIP, while being often correct for
random instances. While there are such functions which are tractable in a
suboptimal parameter regime, we show that this is a computationally hard task
in any better regime. Our results are based on a new, weaker assumption on the
problem of detecting dense subgraphs.
| Tengyao Wang, Quentin Berthet, Yaniv Plan | null | 1605.09646 | null | null |
Dynamic Filter Networks | cs.LG cs.CV | In a traditional convolutional layer, the learned filters stay fixed after
training. In contrast, we introduce a new framework, the Dynamic Filter
Network, where filters are generated dynamically conditioned on an input. We
show that this architecture is a powerful one, with increased flexibility
thanks to its adaptive nature, yet without an excessive increase in the number
of model parameters. A wide variety of filtering operations can be learned this
way, including local spatial transformations, but also others like selective
(de)blurring or adaptive feature extraction. Moreover, multiple such layers can
be combined, e.g. in a recurrent architecture. We demonstrate the effectiveness
of the dynamic filter network on the tasks of video and stereo prediction, and
reach state-of-the-art performance on the moving MNIST dataset with a much
smaller model. By visualizing the learned filters, we illustrate that the
network has picked up flow information by only looking at unlabelled training
data. This suggests that the network can be used to pretrain networks for
various supervised tasks in an unsupervised way, like optical flow and depth
estimation.
| Bert De Brabandere, Xu Jia, Tinne Tuytelaars, Luc Van Gool | null | 1605.09673 | null | null |
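A bare-bones sketch of the dynamic (locally connected) filtering operation described above: the per-position filters are assumed to be produced per sample by a separate filter-generating network, which is replaced here by random placeholders. This illustrates the operator only, not the authors' architecture.

```python
import numpy as np

def dynamic_local_filtering(x, filters):
    """Apply a different k x k filter at every spatial position of one image.

    x:       (H, W) input image
    filters: (H, W, k, k) sample-specific filters, assumed to come from a
             filter-generating network (random placeholders here)
    """
    H, W = x.shape
    k = filters.shape[-1]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]
            out[i, j] = (patch * filters[i, j]).sum()
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))
filters = rng.random((8, 8, 3, 3))      # in the real model these are generated per input
print(dynamic_local_filtering(image, filters).shape)
```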
VIME: Variational Information Maximizing Exploration | cs.LG cs.AI cs.RO stat.ML | Scalable and effective exploration remains a key challenge in reinforcement
learning (RL). While there are methods with optimality guarantees in the
setting of discrete state and action spaces, these methods cannot be applied in
high-dimensional deep RL scenarios. As such, most contemporary RL relies on
simple heuristics such as epsilon-greedy exploration or adding Gaussian noise
to the controls. This paper introduces Variational Information Maximizing
Exploration (VIME), an exploration strategy based on maximization of
information gain about the agent's belief of environment dynamics. We propose a
practical implementation, using variational inference in Bayesian neural
networks which efficiently handles continuous state and action spaces. VIME
modifies the MDP reward function, and can be applied with several different
underlying RL algorithms. We demonstrate that VIME achieves significantly
better performance compared to heuristic exploration methods across a variety
of continuous control tasks and algorithms, including tasks with very sparse
rewards.
| Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck,
Pieter Abbeel | null | 1605.09674 | null | null |
Generalized Multi-view Embedding for Visual Recognition and Cross-modal
Retrieval | cs.CV cs.LG | In this paper, the problem of multi-view embedding from different visual cues
and modalities is considered. We propose a unified solution for subspace
learning methods using the Rayleigh quotient, which is extensible for multiple
views, supervised learning, and non-linear embeddings. Numerous methods
including Canonical Correlation Analysis, Partial Least Squares regression and
Linear Discriminant Analysis are studied using specific intrinsic and penalty
graphs within the same framework. Non-linear extensions based on kernels and
(deep) neural networks are derived, achieving better performance than the
linear ones. Moreover, a novel Multi-view Modular Discriminant Analysis (MvMDA)
is proposed by taking the view difference into consideration. We demonstrate
the effectiveness of the proposed multi-view embedding methods on visual object
recognition and cross-modal image retrieval, and obtain superior results in
both applications compared to related methods.
| Guanqun Cao, Alexandros Iosifidis, Ke Chen, Moncef Gabbouj | 10.1109/TCYB.2017.2742705 | 1605.09696 | null | null |
CYCLADES: Conflict-free Asynchronous Machine Learning | stat.ML cs.DC cs.DS cs.LG math.OC | We present CYCLADES, a general framework for parallelizing stochastic
optimization algorithms in a shared memory setting. CYCLADES is asynchronous
during shared model updates, and requires no memory locking mechanisms, similar
to HOGWILD!-type algorithms. Unlike HOGWILD!, CYCLADES introduces no conflicts
during the parallel execution, and offers a black-box analysis for provable
speedups across a large family of algorithms. Due to its inherent conflict-free
nature and cache locality, our multi-core implementation of CYCLADES
consistently outperforms HOGWILD!-type algorithms on sufficiently sparse
datasets, leading to up to 40% speedup gains compared to the HOGWILD!
implementation of SGD, and up to 5x gains over asynchronous implementations of
variance reduction algorithms.
| Xinghao Pan, Maximilian Lam, Stephen Tu, Dimitris Papailiopoulos, Ce
Zhang, Michael I. Jordan, Kannan Ramchandran, Chris Re, Benjamin Recht | null | 1605.09721 | null | null |
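A sketch of the conflict-free grouping idea at the heart of the approach above: within a sampled batch, updates that touch a common model variable are merged into the same group (a connected component of the conflict graph), so distinct groups can run in parallel without locks. This illustrates only the partitioning step, not the full framework.

```python
from collections import defaultdict

def conflict_free_groups(batch):
    """batch: list of sets of variable indices touched by each update."""
    parent = list(range(len(batch)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    var_to_update = {}
    for u, variables in enumerate(batch):
        for v in variables:
            if v in var_to_update:
                union(u, var_to_update[v])   # shared variable => same component
            var_to_update[v] = u

    groups = defaultdict(list)
    for u in range(len(batch)):
        groups[find(u)].append(u)
    return list(groups.values())

batch = [{0, 1}, {1, 2}, {5}, {6, 7}, {7}]
print(conflict_free_groups(batch))   # -> [[0, 1], [2], [3, 4]]
```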
Asynchrony begets Momentum, with an Application to Deep Learning | stat.ML cs.DC cs.LG math.OC | Asynchronous methods are widely used in deep learning, but have limited
theoretical justification when applied to non-convex problems. We show that
running stochastic gradient descent (SGD) in an asynchronous manner can be
viewed as adding a momentum-like term to the SGD iteration. Our result does not
assume convexity of the objective function, so it is applicable to deep
learning systems. We observe that a standard queuing model of asynchrony
results in a form of momentum that is commonly used by deep learning
practitioners. This forges a link between queuing theory and asynchrony in deep
learning systems, which could be useful for systems builders. For convolutional
neural networks, we experimentally validate that the degree of asynchrony
directly correlates with the momentum, confirming our main result. An important
implication is that tuning the momentum parameter is important when considering
different levels of asynchrony. We assert that properly tuned momentum reduces
the number of steps required for convergence. Finally, our theory suggests new
ways of counteracting the adverse effects of asynchrony: a simple mechanism
like using negative algorithmic momentum can improve performance under high
asynchrony. Since asynchronous methods have better hardware efficiency, this
result may shed light on when asynchronous execution is more efficient for deep
learning systems.
| Ioannis Mitliagkas, Ce Zhang, Stefan Hadjis, Christopher R\'e | null | 1605.09774 | null | null |
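As a point of reference for the result above, the following sketch shows the plain momentum SGD iteration that asynchronous SGD is argued to implicitly resemble; the fixed momentum value standing in for the degree of asynchrony is purely illustrative.

```python
import numpy as np

def momentum_sgd_step(theta, velocity, grad, lr=0.1, mu=0.6):
    # mu plays the role that the degree of asynchrony induces implicitly.
    velocity = mu * velocity - lr * grad
    return theta + velocity, velocity

theta, velocity = np.zeros(2), np.zeros(2)
target = np.array([3.0, -1.0])
for _ in range(50):
    grad = theta - target                 # gradient of 0.5 * ||theta - target||^2
    theta, velocity = momentum_sgd_step(theta, velocity, grad)
print(theta)
```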
Adversarial Feature Learning | cs.LG cs.AI cs.CV cs.NE stat.ML | The ability of the Generative Adversarial Networks (GANs) framework to learn
generative models mapping from simple latent distributions to arbitrarily
complex data distributions has been demonstrated empirically, with compelling
results showing that the latent space of such generators captures semantic
variation in the data distribution. Intuitively, models trained to predict
these semantic latent representations given data may serve as useful feature
representations for auxiliary problems where semantics are relevant. However,
in their existing form, GANs have no means of learning the inverse mapping --
projecting data back into the latent space. We propose Bidirectional Generative
Adversarial Networks (BiGANs) as a means of learning this inverse mapping, and
demonstrate that the resulting learned feature representation is useful for
auxiliary supervised discrimination tasks, competitive with contemporary
approaches to unsupervised and self-supervised feature learning.
| Jeff Donahue, Philipp Kr\"ahenb\"uhl, Trevor Darrell | null | 1605.09782 | null | null |
Quantifying the probable approximation error of probabilistic inference
programs | cs.AI cs.LG stat.ML | This paper introduces a new technique for quantifying the approximation error
of a broad class of probabilistic inference programs, including ones based on
both variational and Monte Carlo approaches. The key idea is to derive a
subjective bound on the symmetrized KL divergence between the distribution
achieved by an approximate inference program and its true target distribution.
The bound's validity (and subjectivity) rests on the accuracy of two auxiliary
probabilistic programs: (i) a "reference" inference program that defines a gold
standard of accuracy and (ii) a "meta-inference" program that answers the
question "what internal random choices did the original approximate inference
program probably make given that it produced a particular result?" The paper
includes empirical results on inference problems drawn from linear regression,
Dirichlet process mixture modeling, HMMs, and Bayesian networks. The
experiments show that the technique is robust to the quality of the reference
inference program and that it can detect implementation bugs that are not
apparent from predictive performance.
| Marco F Cusumano-Towner, Vikash K Mansinghka | null | 1606.00068 | null | null |
Contextual Bandits with Latent Confounders: An NMF Approach | cs.LG cs.SY stat.ML | Motivated by online recommendation and advertising systems, we consider a
causal model for stochastic contextual bandits with a latent low-dimensional
confounder. In our model, there are $L$ observed contexts and $K$ arms of the
bandit. The observed context influences the reward obtained through a latent
confounder variable with cardinality $m$ ($m \ll L,K$). The arm choice and the
latent confounder causally determine the reward, while the observed context is
correlated with the confounder. Under this model, the $L \times K$ mean reward
matrix $\mathbf{U}$ (for each context in $[L]$ and each arm in $[K]$)
factorizes into non-negative factors $\mathbf{A}$ ($L \times m$) and
$\mathbf{W}$ ($m \times K$). This insight enables us to propose an
$\epsilon$-greedy NMF-Bandit algorithm that designs a sequence of interventions
(selecting specific arms), that achieves a balance between learning this
low-dimensional structure and selecting the best arm to minimize regret. Our
algorithm achieves a regret of $\mathcal{O}\left(L\mathrm{poly}(m, \log K) \log
T \right)$ at time $T$, as compared to $\mathcal{O}(LK\log T)$ for conventional
contextual bandits, assuming a constant gap between the best arm and the rest
for each context. These guarantees are obtained under mild sufficiency
conditions on the factors that are weaker versions of the well-known
Statistical RIP condition. We further propose a class of generative models that
satisfy our sufficient conditions, and derive a lower bound of
$\mathcal{O}\left(Km\log T\right)$. These are the first regret guarantees for
online matrix completion with bandit feedback, when the rank is greater than
one. We further compare the performance of our algorithm with the state of the
art, on synthetic and real world data-sets.
| Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G.
Dimakis, and Sanjay Shakkottai | null | 1606.00119 | null | null |
Self-Paced Learning: an Implicit Regularization Perspective | cs.LG cs.CV | Self-paced learning (SPL) mimics the cognitive mechanism of humans and
animals that gradually learns from easy to hard samples. One key issue in SPL
is to obtain a better weighting strategy, which is determined by the minimizer
function. Existing methods usually pursue this by artificially designing the
explicit form of the SPL regularizer. In this paper, we focus on the minimizer
function, and study a group of new regularizers, named self-paced implicit
regularizers, which are deduced from robust loss functions. Based on the convex
conjugacy theory, the minimizer function for a self-paced implicit regularizer
can be directly learned from the latent loss function, while the analytic form
of the regularizer may even remain unknown. A general framework (named SPL-IR) for
SPL is developed accordingly. We demonstrate that the learning procedure of
SPL-IR is associated with latent robust loss functions, thus can provide some
theoretical inspirations for its working mechanism. We further analyze the
relation between SPL-IR and half-quadratic optimization. Finally, we apply
SPL-IR to both supervised and unsupervised tasks, and experimental results
corroborate our ideas and demonstrate the correctness and effectiveness of
implicit regularizers.
| Yanbo Fan, Ran He, Jian Liang, Bao-Gang Hu | null | 1606.00128 | null | null |
Efficiently Bounding Optimal Solutions after Small Data Modification in
Large-Scale Empirical Risk Minimization | stat.ML cs.LG | We study large-scale classification problems in changing environments where a
small part of the dataset is modified, and the effect of the data modification
must be quickly incorporated into the classifier. When the entire dataset is
large, even if the amount of the data modification is fairly small, the
computational cost of re-training the classifier would be prohibitively large.
In this paper, we propose a novel method for efficiently incorporating such a
data modification effect into the classifier without actually re-training it.
The proposed method provides bounds on the unknown optimal classifier with the
cost only proportional to the size of the data modification. We demonstrate
through numerical experiments that the proposed method provides sufficiently
tight bounds with negligible computational costs, especially when a small part
of the dataset is modified in a large-scale classification problem.
| Hiroyuki Hanada, Atsushi Shibagaki, Jun Sakuma, Ichiro Takeuchi | null | 1606.00136 | null | null |
On the Troll-Trust Model for Edge Sign Prediction in Social Networks | cs.LG cs.SI | In the problem of edge sign prediction, we are given a directed graph
(representing a social network), and our task is to predict the binary labels
of the edges (i.e., the positive or negative nature of the social
relationships). Many successful heuristics for this problem are based on the
troll-trust features, estimating at each node the fraction of outgoing and
incoming positive/negative edges. We show that these heuristics can be
understood, and rigorously analyzed, as approximators to the Bayes optimal
classifier for a simple probabilistic model of the edge labels. We then show
that the maximum likelihood estimator for this model approximately corresponds
to the predictions of a Label Propagation algorithm run on a transformed
version of the original social graph. Extensive experiments on a number of
real-world datasets show that this algorithm is competitive against
state-of-the-art classifiers in terms of both accuracy and scalability.
Finally, we show that troll-trust features can also be used to derive online
learning algorithms which have theoretical guarantees even when edges are
adversarially labeled.
| G\'eraud Le Falher, Nicol\`o Cesa-Bianchi, Claudio Gentile, Fabio
Vitale | null | 1606.00182 | null | null |
A Minimax Optimal Algorithm for Crowdsourcing | stat.ML cs.HC cs.LG cs.SI | We consider the problem of accurately estimating the reliability of workers
based on noisy labels they provide, which is a fundamental question in
crowdsourcing. We propose a novel lower bound on the minimax estimation error
which applies to any estimation procedure. We further propose Triangular
Estimation (TE), an algorithm for estimating the reliability of workers. TE has
low complexity, may be implemented in a streaming setting when labels are
provided by workers in real time, and does not rely on an iterative procedure.
We further prove that TE is minimax optimal and matches our lower bound. We
conclude by assessing the performance of TE and other state-of-the-art
algorithms on both synthetic and real-world data sets.
| Thomas Bonald and Richard Combes | null | 1606.00226 | null | null |
On a Topic Model for Sentences | cs.CL cs.IR cs.LG | Probabilistic topic models are generative models that describe the content of
documents by discovering the latent topics underlying them. However, the
structure of the textual input, and for instance the grouping of words in
coherent text spans such as sentences, contains much information which is
generally lost with these models. In this paper, we propose sentenceLDA, an
extension of LDA whose goal is to overcome this limitation by incorporating the
structure of the text in the generative and inference processes. We illustrate
the advantages of sentenceLDA by comparing it with LDA using both intrinsic
(perplexity) and extrinsic (text classification) evaluation tasks on different
text collections.
| Georgios Balikas, Massih-Reza Amini, Marianne Clausel | null | 1606.00253 | null | null |
Multi-Label Zero-Shot Learning via Concept Embedding | cs.LG | Zero Shot Learning (ZSL) enables a learning model to classify instances of
classes that are unseen during training. While most research in ZSL focuses on
single-label classification, few studies have been done in multi-label ZSL,
where an instance is associated with a set of labels simultaneously, due to the
difficulty in modeling complex semantics conveyed by a set of labels. In this
paper, we propose a novel approach to multi-label ZSL via concept embedding
learned from collections of public users' annotations of multimedia. Thanks to
concept embedding, multi-label ZSL can be done by efficiently mapping an
instance's input features onto the concept embedding space in a similar manner
used in single-label ZSL. Moreover, our semantic learning model is capable of
embedding an out-of-vocabulary label by inferring its meaning from its
co-occurring labels. Thus, our approach allows both seen and unseen labels
during the concept embedding learning to be used in the aforementioned instance
mapping, which makes multi-label ZSL more flexible and suitable for real
applications. Experimental results of multi-label ZSL on images and music
tracks suggest that our approach outperforms a state-of-the-art multi-label ZSL
model and can deal with a scenario involving out-of-vocabulary labels without
re-training the semantics learning model.
| Ubai Sandouk and Ke Chen | null | 1606.00282 | null | null |
Automatic tagging using deep convolutional neural networks | cs.SD cs.LG | We present a content-based automatic music tagging algorithm using fully
convolutional neural networks (FCNs). We evaluate different architectures
consisting of 2D convolutional layers and subsampling layers only. In the
experiments, we measure the AUC-ROC scores of the architectures with different
complexities and input types using the MagnaTagATune dataset, where a 4-layer
architecture shows state-of-the-art performance with mel-spectrogram input.
Furthermore, we evaluated the performances of the architectures with varying
the number of layers on a larger dataset (Million Song Dataset), and found that
deeper models outperformed the 4-layer architecture. The experiments show that
mel-spectrogram is an effective time-frequency representation for automatic
tagging and that more complex models benefit from more training data.
| Keunwoo Choi, George Fazekas, Mark Sandler | null | 1606.00298 | null | null |
Decoding Emotional Experience through Physiological Signal Processing | cs.HC cs.LG | There is an increasing consensus among researchers that making a computer
emotionally intelligent with the ability to decode human affective states would
allow a more meaningful and natural way of human-computer interactions (HCIs).
One unobtrusive and non-invasive way of recognizing human affective states
entails the exploration of how physiological signals vary under different
emotional experiences. In particular, this paper explores the correlation
between autonomically-mediated changes in multimodal body signals and discrete
emotional states. In order to fully exploit the information in each modality,
we have provided an innovative classification approach for three specific
physiological signals including Electromyogram (EMG), Blood Volume Pressure
(BVP) and Galvanic Skin Response (GSR). These signals are analyzed as inputs to
an emotion recognition paradigm based on fusion of a series of weak learners.
Our proposed classification approach showed 88.1% recognition accuracy, which
outperformed the conventional Support Vector Machine (SVM) classifier with 17%
accuracy improvement. Furthermore, in order to avoid information redundancy and
the resultant over-fitting, a feature reduction method is proposed based on a
correlation analysis to optimize the number of features required for training
and validating each weak learner. Results showed that despite the feature space
dimensionality reduction from 27 to 18 features, our methodology preserved the
recognition accuracy of about 85.0%. This reduction in complexity will get us
one step closer towards embedding this human emotion encoder in the wireless
and wearable HCI platforms.
| Maria S. Perez-Rosero, Behnaz Rezaei, Murat Akcakaya, and Sarah
Ostadabbas | null | 1606.00370 | null | null |
Conversational Contextual Cues: The Case of Personalization and History
for Response Ranking | cs.CL cs.LG | We investigate the task of modeling open-domain, multi-turn, unstructured,
multi-participant, conversational dialogue. We specifically study the effect of
incorporating different elements of the conversation. Unlike previous efforts,
which focused on modeling messages and responses, we extend the modeling to
long context and participant's history. Our system does not rely on handwritten
rules or engineered features; instead, we train deep neural networks on a large
conversational dataset. In particular, we exploit the structure of Reddit
comments and posts to extract 2.1 billion messages and 133 million
conversations. We evaluate our models on the task of predicting the next
response in a conversation, and we find that modeling both context and
participants improves prediction accuracy.
| Rami Al-Rfou and Marc Pickett and Javier Snaider and Yun-hsuan Sung
and Brian Strope and Ray Kurzweil | null | 1606.00372 | null | null |
Stream Clipper: Scalable Submodular Maximization on Stream | stat.ML cs.LG math.CO | We propose a streaming submodular maximization algorithm "stream clipper"
that performs as well as the offline greedy algorithm on document/video
summarization in practice. It adds elements from a stream either to a solution
set $S$ or to an extra buffer $B$ based on two adaptive thresholds, and
improves $S$ by a final greedy step that starts from $S$ adding elements from
$B$. During this process, swapping elements out of $S$ can occur if doing so
yields improvements. The thresholds adapt based on whether current memory
utilization exceeds a budget: if it does, for example, the algorithm increases the lower
threshold and removes from the buffer $B$ elements below the new lower threshold. We show
that, while our approximation factor in the worst case is $1/2$ (as in
previous work, corresponding to the tight bound), there are
data-dependent conditions where our bound falls within the range $[1/2,
1-1/e]$. In news and video summarization experiments, the algorithm
consistently outperforms other streaming methods, and, while using
significantly less computation and memory, performs similarly to the offline
greedy algorithm.
| Tianyi Zhou and Jeff Bilmes | null | 1606.00389 | null | null |
Short Communication on QUIST: A Quick Clustering Algorithm | cs.LG stat.ML | In this short communication we introduce the quick clustering algorithm
(QUIST), an efficient hierarchical clustering algorithm based on sorting. QUIST
is a poly-logarithmic divisive clustering algorithm that does not assume the
number of clusters, and/or the cluster size to be known ahead of time. It is
also insensitive to the original ordering of the input.
| Sherenaz W. Al-Haj Baddar | null | 1606.00398 | null | null |
Scaling Submodular Maximization via Pruned Submodularity Graphs | cs.LG math.CO stat.ML | We propose a new random pruning method (called "submodular sparsification
(SS)") to reduce the cost of submodular maximization. The pruning is applied
via a "submodularity graph" over the $n$ ground elements, where each directed
edge is associated with a pairwise dependency defined by the submodular
function. In each step, SS prunes a $1-1/\sqrt{c}$ (for $c>1$) fraction of the
nodes using weights on edges computed based on only a small number ($O(\log
n)$) of randomly sampled nodes. The algorithm requires $\log_{\sqrt{c}}n$ steps
with a small and highly parallelizable per-step computation. An accuracy-speed
tradeoff parameter $c$, set as $c = 8$, leads to a fast shrink rate
$\sqrt{2}/4$ and small iteration complexity $\log_{2\sqrt{2}}n$. Analysis shows
that w.h.p., the greedy algorithm on the pruned set of size $O(\log^2 n)$ can
achieve a guarantee similar to that of processing the original dataset. In news
and video summarization tasks, SS is able to substantially reduce both
computational costs and memory usage, while maintaining (or even slightly
exceeding) the quality of the original (and much more costly) greedy algorithm.
| Tianyi Zhou, Hua Ouyang, Yi Chang, Jeff Bilmes, Carlos Guestrin | null | 1606.00399 | null | null |
Distributed Hessian-Free Optimization for Deep Neural Network | cs.LG cs.DC math.OC | Training a deep neural network is a high-dimensional and highly non-convex
optimization problem. The stochastic gradient descent (SGD) algorithm and its
variations are the current state-of-the-art solvers for this task. However, due
to the non-convex nature of the problem, it has been observed that SGD slows
down near saddle points. Recent empirical work claims that by detecting and
escaping saddle points efficiently, training performance is more likely to improve. With this
objective, we revisit the Hessian-free optimization method for deep networks. We
also develop its distributed variant and demonstrate superior scaling potential
to SGD, which allows more efficiently utilizing larger computing resources thus
enabling large models and faster time to obtain desired solution. Furthermore,
unlike the truncated Newton method (Martens' HF), which ignores negative curvature
information by using the na\"ive conjugate gradient method and Gauss-Newton Hessian
approximation information - we propose a novel algorithm to explore negative
curvature direction by solving the sub-problem with stabilized bi-conjugate
method involving possible indefinite stochastic Hessian information. We show
that these techniques accelerate the training process for both the standard
MNIST dataset and also the TIMIT speech recognition problem, demonstrating
robust performance with up to an order of magnitude larger batch sizes. This
increased scaling potential is illustrated with near-linear speed-up on up to 16
CPU nodes for a simple 4-layer network.
| Xi He and Dheevatsa Mudigere and Mikhail Smelyanskiy and Martin
Tak\'a\v{c} | null | 1606.00511 | null | null |
Multi-pretrained Deep Neural Network | cs.NE cs.LG | Pretraining is widely used in deep neural networks, and one of the most famous
pretraining models is the Deep Belief Network (DBN). The optimization formulas are
different during the pretraining process for different pretraining models. In
this paper, we pretrained deep neural networks with different pretraining models
and hence investigated the difference between DBN and the Stacked Denoising
Autoencoder (SDA) when used as the pretraining model. The experimental results show
that DBN gets a better initial model. However, the model converges to a
relatively worse model after the fine-tuning process. Yet when pretrained by
SDA for the second time, the model converges to a better model if fine-tuned.
| Zhen Hu, Zhuyin Xue, Tong Cui, Shiqiang Zong, Chenglong He | null | 1606.00540 | null | null |
Ensemble-Compression: A New Method for Parallel Training of Deep Neural
Networks | cs.DC cs.LG cs.NE | Parallelization frameworks have recently become a necessity to speed up the training of
deep neural networks (DNN). Such a framework typically employs the Model
Average approach, denoted as MA-DNN, in which parallel workers conduct
respective training based on their own local data while the parameters of local
models are periodically communicated and averaged to obtain a global model
which serves as the new start of local models. However, since DNN is a highly
non-convex model, averaging parameters cannot ensure that such global model can
perform better than those local models. To tackle this problem, we introduce a
new parallel training framework called Ensemble-Compression, denoted as EC-DNN.
In this framework, we propose to aggregate the local models by ensemble, i.e.,
averaging the outputs of local models instead of the parameters. As most
prevalent loss functions are convex with respect to the output of a DNN, the performance of
the ensemble-based global model is guaranteed to be at least as good as the average
performance of the local models. However, a big challenge lies in the explosion of
model size, since each round of ensemble can increase the model size severalfold.
Thus, we carry out model compression after each ensemble,
specialized by a distillation based method in this paper, to reduce the size of
the global model to be the same as the local ones. Our experimental results
demonstrate the prominent advantage of EC-DNN over MA-DNN in terms of both
accuracy and speedup.
| Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu | null | 1606.00575 | null | null |
Source-LDA: Enhancing probabilistic topic models using prior knowledge
sources | cs.CL cs.IR cs.LG | A popular approach to topic modeling involves extracting co-occurring n-grams
of a corpus into semantic themes. The set of n-grams in a theme represents an
underlying topic, but most topic modeling approaches are not able to label
these sets of words with a single n-gram. Such labels are useful for topic
identification in summarization systems. This paper introduces a novel approach
to labeling a group of n-grams comprising an individual topic. The approach
taken is to complement the existing topic distributions over words with a known
distribution based on a predefined set of topics. This is done by integrating
existing labeled knowledge sources representing known potential topics into the
probabilistic topic model. These knowledge sources are translated into a
distribution and used to set the hyperparameters of the Dirichlet generated
distribution over words. In the inference these modified distributions guide
the convergence of the latent topics to conform with the complementary
distributions. This approach ensures that the topic inference process is
consistent with existing knowledge. The label assignments from the complementary
knowledge sources are then transferred to the latent topics of the corpus. The
results show both accurate label assignment to topics as well as improved topic
generation compared with that obtained using various labeling approaches based on
Latent Dirichlet allocation (LDA).
| Justin Wood, Patrick Tan, Wei Wang, Corey Arnold | null | 1606.00577 | null | null |
Variance-Reduced Proximal Stochastic Gradient Descent for Non-convex
Composite optimization | stat.ML cs.LG cs.NA | Here we study non-convex composite optimization: first, a finite-sum of
smooth but non-convex functions, and second, a general function that admits a
simple proximal mapping. Most research on stochastic methods for composite
optimization assumes convexity or strong convexity of each function. In this
paper, we extend this problem into the non-convex setting using variance
reduction techniques, such as prox-SVRG and prox-SAGA. We prove that, with a
constant step size, both prox-SVRG and prox-SAGA are suitable for non-convex
composite optimization, and help the problem converge to a stationary point
within $O(1/\epsilon)$ iterations. That is similar to the convergence rate seen
with the state-of-the-art RSAG method and faster than stochastic gradient
descent. Our analysis is also extended to the mini-batch setting, which
linearly accelerates the convergence. To the best of our knowledge, this is the
first analysis of convergence rate of variance-reduced proximal stochastic
gradient for non-convex composite optimization.
| Xiyu Yu, Dacheng Tao | null | 1606.00602 | null | null |
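To make the setting concrete, here is a sketch of a single proximal stochastic gradient step for an L1-regularised composite objective, the kind of update that prox-SVRG and prox-SAGA build on; the variance-reduction correction itself is omitted, and the step size and regularisation strength are placeholders.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_step(theta, grad, step=0.1, lam=0.05):
    # Gradient step on the smooth part, then proximal step on the L1 part.
    return soft_threshold(theta - step * grad, step * lam)

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)
theta = np.zeros(10)
for _ in range(200):
    i = rng.integers(100)                       # sample one data point
    grad = (A[i] @ theta - b[i]) * A[i]         # stochastic gradient of the smooth part
    theta = prox_sgd_step(theta, grad, step=0.01)
print(np.round(theta, 3))
```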
Recursive Autoconvolution for Unsupervised Learning of Convolutional
Neural Networks | cs.CV cs.LG cs.NE | In visual recognition tasks, such as image classification, unsupervised
learning exploits cheap unlabeled data and can help to solve these tasks more
efficiently. We show that the recursive autoconvolution operator, adopted from
physics, boosts existing unsupervised methods by learning more discriminative
filters. We take well established convolutional neural networks and train their
filters layer-wise. In addition, based on previous works we design a network
which extracts more than 600k features per sample, but with the total number of
trainable parameters greatly reduced by introducing shared filters in higher
layers. We evaluate our networks on the MNIST, CIFAR-10, CIFAR-100 and STL-10
image classification benchmarks and report several state-of-the-art results
among other unsupervised methods.
| Boris Knyazev, Erhardt Barth, Thomas Martinetz | null | 1606.00611 | null | null |
Adversarially Learned Inference | stat.ML cs.LG | We introduce the adversarially learned inference (ALI) model, which jointly
learns a generation network and an inference network using an adversarial
process. The generation network maps samples from stochastic latent variables
to the data space while the inference network maps training examples in data
space to the space of latent variables. An adversarial game is cast between
these two networks and a discriminative network is trained to distinguish
between joint latent/data-space samples from the generative network and joint
samples from the inference network. We illustrate the ability of the model to
learn mutually coherent inference and generation networks through the
inspections of model samples and reconstructions and confirm the usefulness of
the learned representations by obtaining a performance competitive with
state-of-the-art on the semi-supervised SVHN and CIFAR10 tasks.
| Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro,
Alex Lamb, Martin Arjovsky, Aaron Courville | null | 1606.00704 | null | null |
f-GAN: Training Generative Neural Samplers using Variational Divergence
Minimization | stat.ML cs.LG stat.ME | Generative neural samplers are probabilistic models that implement sampling
using feedforward neural networks: they take a random input vector and produce
a sample from a probability distribution defined by the network weights. These
models are expressive and allow efficient computation of samples and
derivatives, but cannot be used for computing likelihoods or for
marginalization. The generative-adversarial training method allows to train
such models through the use of an auxiliary discriminative neural network. We
show that the generative-adversarial approach is a special case of an existing
more general variational divergence estimation approach. We show that any
f-divergence can be used for training generative neural samplers. We discuss
the benefits of various choices of divergence functions on training complexity
and the quality of the obtained generative models.
| Sebastian Nowozin, Botond Cseke, Ryota Tomioka | null | 1606.00709 | null | null |
Differentially Private Gaussian Processes | stat.ML cs.LG | A major challenge for machine learning is increasing the availability of data
while respecting the privacy of individuals. Here we combine the provable
privacy guarantees of the differential privacy framework with the flexibility
of Gaussian processes (GPs). We propose a method using GPs to provide
differentially private (DP) regression. We then improve this method by crafting
the DP noise covariance structure to efficiently protect the training data,
while minimising the scale of the added noise. We find that this cloaking
method achieves the greatest accuracy, while still providing privacy
guarantees, and offers practical DP for regression over multi-dimensional
inputs. Together these methods provide a starter toolkit for combining
differential privacy and GPs.
| Michael Thomas Smith, Max Zwiessele, Neil D. Lawrence | null | 1606.00720 | null | null |
Stochastic Structured Prediction under Bandit Feedback | cs.CL cs.LG stat.ML | Stochastic structured prediction under bandit feedback follows a learning
protocol where on each of a sequence of iterations, the learner receives an
input, predicts an output structure, and receives partial feedback in form of a
task loss evaluation of the predicted structure. We present applications of
this learning scenario to convex and non-convex objectives for structured
prediction and analyze them as stochastic first-order methods. We present an
experimental evaluation on problems of natural language processing over
exponential output spaces, and compare convergence speed across different
objectives under the practical criterion of optimal task performance on
development data and the optimization-theoretic criterion of minimal squared
gradient norm. Best results under both criteria are obtained for a non-convex
objective for pairwise preference learning under bandit feedback.
| Artem Sokolov and Julia Kreutzer and Christopher Lo and Stefan Riezler | null | 1606.00739 | null | null |
Multiresolution Recurrent Neural Networks: An Application to Dialogue
Response Generation | cs.CL cs.AI cs.LG cs.NE stat.ML | We introduce the multiresolution recurrent neural network, which extends the
sequence-to-sequence framework to model natural language generation as two
parallel discrete stochastic processes: a sequence of high-level coarse tokens,
and a sequence of natural language tokens. There are many ways to estimate or
learn the high-level coarse tokens, but we argue that a simple extraction
procedure is sufficient to capture a wealth of high-level discourse semantics.
Such procedure allows training the multiresolution recurrent neural network by
maximizing the exact joint log-likelihood over both sequences. In contrast to
the standard log- likelihood objective w.r.t. natural language tokens (word
perplexity), optimizing the joint log-likelihood biases the model towards
modeling high-level abstractions. We apply the proposed model to the task of
dialogue response generation in two challenging domains: the Ubuntu technical
support domain, and Twitter conversations. On Ubuntu, the model outperforms
competing approaches by a substantial margin, achieving state-of-the-art
results according to both automatic evaluation metrics and a human evaluation
study. On Twitter, the model appears to generate more relevant and on-topic
responses according to automatic evaluation metrics. Finally, our experiments
demonstrate that the proposed model is more adept at overcoming the sparsity of
natural language and is better able to capture long-term structure.
| Iulian Vlad Serban, Tim Klinger, Gerald Tesauro, Kartik Talamadupula,
Bowen Zhou, Yoshua Bengio, Aaron Courville | null | 1606.00776 | null | null |
Post-Inference Prior Swapping | stat.ML cs.AI cs.LG stat.CO stat.ME | While Bayesian methods are praised for their ability to incorporate useful
prior knowledge, in practice, convenient priors that allow for computationally
cheap or tractable inference are commonly used. In this paper, we investigate
the following question: for a given model, is it possible to compute an
inference result with any convenient false prior, and afterwards, given any
target prior of interest, quickly transform this result into the target
posterior? A potential solution is to use importance sampling (IS). However, we
demonstrate that IS will fail for many choices of the target prior, depending
on its parametric form and similarity to the false prior. Instead, we propose
prior swapping, a method that leverages the pre-inferred false posterior to
efficiently generate accurate posterior samples under arbitrary target priors.
Prior swapping lets us apply less-costly inference algorithms to certain
models, and incorporate new or updated prior information "post-inference". We
give theoretical guarantees about our method, and demonstrate it empirically on
a number of models and priors.
| Willie Neiswanger, Eric Xing | null | 1606.00787 | null | null |
Sequential Principal Curves Analysis | stat.ML cs.LG | This work includes all the technical details of the Sequential Principal
Curves Analysis (SPCA) in a single document. SPCA is an unsupervised nonlinear
and invertible feature extraction technique. The identified curvilinear
features can be interpreted as a set of nonlinear sensors: the response of each
sensor is the projection onto the corresponding feature. Moreover, it can be
easily tuned for different optimization criteria; e.g. infomax, error
minimization, decorrelation; by choosing the right way to measure distances
along each curvilinear feature. Even though proposed in [Laparra et al. Neural
Comp. 12] and shown to work in multiple modalities in [Laparra and Malo
Frontiers Hum. Neuro. 15], the SPCA framework has its original roots in the
nonlinear ICA algorithm in [Malo and Gutierrez Network 06]. Later on, the SPCA
philosophy for nonlinear generalization of PCA gave rise to substantially faster
alternatives at the cost of introducing different constraints in the model.
Namely, the Principal Polynomial Analysis (PPA) [Laparra et al. IJNS 14], and
the Dimensionality Reduction via Regression (DRR) [Laparra et al. IEEE TGRS
15]. This report illustrates the reasons why we developed such a family and is
the appropriate technical companion for the missing details in [Laparra et al.,
NeCo 12, Laparra and Malo, Front.Hum.Neuro. 15]. See also the data, code and
examples in the dedicated sites http://isp.uv.es/spca.html and
http://isp.uv.es/after effects.html
| Valero Laparra and Jesus Malo | null | 1606.00856 | null | null |
Unified Framework for Quantification | cs.LG | Quantification is the machine learning task of estimating test-data class
proportions that are not necessarily similar to those in training. Apart from
its intrinsic value as an aggregate statistic, quantification output can also
be used to optimize classifier probabilities, thereby increasing classification
accuracy. We unify major quantification approaches under a constrained
multi-variate regression framework, and use mathematical programming to
estimate class proportions for different loss functions. With this modeling
approach, we extend existing binary-only quantification approaches to
multi-class settings as well. We empirically verify our unified framework by
experimenting with several multi-class datasets including the Stanford
Sentiment Treebank and CIFAR-10.
| Aykut Firat | null | 1606.00868 | null | null |
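For a concrete baseline, the sketch below implements the classical adjusted classify-and-count correction for binary quantification; the tpr/fpr values are made up, and this is only a simple special case rather than the constrained-regression framework described above.

```python
import numpy as np

def adjusted_classify_and_count(test_preds, tpr, fpr):
    cc = test_preds.mean()                       # raw fraction predicted positive
    acc = (cc - fpr) / max(tpr - fpr, 1e-12)     # correct for classifier bias
    return float(np.clip(acc, 0.0, 1.0))

# tpr/fpr would normally be estimated on held-out training data; values here are illustrative.
test_preds = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
print(adjusted_classify_and_count(test_preds, tpr=0.9, fpr=0.2))
```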
Multi-Organ Cancer Classification and Survival Analysis | q-bio.QM cs.LG q-bio.TO stat.ML | Accurate and robust cell nuclei classification is the cornerstone for a wider
range of tasks in digital and Computational Pathology. However, most machine
learning systems require extensive labeling from expert pathologists for each
individual problem at hand, with no or limited abilities for knowledge transfer
between datasets and organ sites. In this paper we implement and evaluate a
variety of deep neural network models and model ensembles for nuclei
classification in renal cell cancer (RCC) and prostate cancer (PCa). We propose
a convolutional neural network system based on residual learning which
significantly improves over the state-of-the-art in cell nuclei classification.
Finally, we show that the combination of tissue types during training increases
not only classification accuracy but also overall survival analysis.
| Stefan Bauer and Nicolas Carion and Peter Sch\"uffler and Thomas Fuchs
and Peter Wild and Joachim M. Buhmann | null | 1606.00897 | null | null |
Distributed Cooperative Decision-Making in Multiarmed Bandits:
Frequentist and Bayesian Algorithms | cs.SY cs.LG math.OC | We study distributed cooperative decision-making under the explore-exploit
tradeoff in the multiarmed bandit (MAB) problem. We extend the state-of-the-art
frequentist and Bayesian algorithms for single-agent MAB problems to
cooperative distributed algorithms for multi-agent MAB problems in which agents
communicate according to a fixed network graph. We rely on a running consensus
algorithm for each agent's estimation of mean rewards from its own rewards and
the estimated rewards of its neighbors. We prove the performance of these
algorithms and show that they asymptotically recover the performance of a
centralized agent. Further, we rigorously characterize the influence of the
communication graph structure on the decision-making performance of the group.
| Peter Landgren, Vaibhav Srivastava, and Naomi Ehrich Leonard | null | 1606.00911 | null | null |
Towards a Job Title Classification System | cs.LG cs.AI | Document classification for text, images and other applicable entities has
long been a focus of research in academia and also finds application in many
industrial settings. Amidst a plethora of approaches to solve such problems,
machine-learning techniques have found success in a variety of scenarios. In
this paper we discuss the design of a machine learning-based semi-supervised
job title classification system for the online job recruitment domain currently
in production at CareerBuilder.com and propose enhancements to it. The system
leverages a varied collection of classification as well as clustering algorithms.
These algorithms are encompassed in an architecture that facilitates leveraging
existing off-the-shelf machine learning tools and techniques while taking into
consideration the challenges of constructing a scalable classification system
for a large taxonomy of categories. As a continuously evolving system that is
still under development, we first discuss the existing semi-supervised
classification system, which is composed of both clustering and classification
components in a proximity-based classifier setup and whose results are
already used across numerous products at CareerBuilder. We then elucidate our
long-term goals for job title classification and propose enhancements to the
existing system in the form of a two-stage coarse and fine level classifier
augmentation to construct a cascade of hierarchical vertical classifiers.
Preliminary results are presented using experimental evaluation on real world
industrial data.
| Faizan Javed, Matt McNair, Ferosh Jacob, Meng Zhao | null | 1606.00917 | null | null |
Convolutional Imputation of Matrix Networks | cs.LG stat.ML | A matrix network is a family of matrices, with relatedness modeled by a
weighted graph. We consider the task of completing a partially observed matrix
network. We assume a novel sampling scheme where a fraction of matrices might
be completely unobserved. How can we recover the entire matrix network from
incomplete observations? This mathematical problem arises in many applications
including medical imaging and social networks.
To recover the matrix network, we propose a structural assumption that the
matrices have a graph Fourier transform which is low-rank. We formulate a
convex optimization problem and prove an exact recovery guarantee for the
optimization problem. Furthermore, we numerically characterize the exact
recovery regime for varying rank and sampling rate and discover a new phase
transition phenomenon. Then we give an iterative imputation algorithm to
efficiently solve the optimization problem and complete large scale matrix
networks. We demonstrate the algorithm with a variety of applications such as
MRI and Facebook user network.
 | Qingyun Sun, Mengyuan Yan, David Donoho and Stephen Boyd | null | 1606.00925 | null | null
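The structural assumption above is that the graph Fourier transform of the matrix network is low-rank. The snippet below is a minimal sketch of that transform using the eigenbasis of the unnormalized graph Laplacian; the choice of Laplacian and the omission of the convex nuclear-norm recovery step are simplifications for illustration, not the authors' full imputation algorithm.

```python
import numpy as np

def graph_fourier_transform(matrices, adjacency):
    """Graph Fourier transform of a matrix network (illustrative sketch).

    matrices  : (n_nodes, p, q) array, one p-by-q matrix per node of the graph
    adjacency : (n_nodes, n_nodes) weighted adjacency of the relatedness graph
    Returns one transformed p-by-q matrix per graph frequency.
    """
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    # Eigenvectors of the Laplacian form the graph Fourier basis.
    _, U = np.linalg.eigh(laplacian)
    # hat(X)_k = sum_i U[i, k] * X_i, i.e. transform along the node axis.
    return np.einsum('ik,ipq->kpq', U, matrices)
```

Under the low-rank assumption, each transformed matrix has small rank, so missing entries could in principle be filled by minimizing a sum of nuclear norms in this spectral domain, which is the kind of convex program the abstract refers to.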
Comparison of 14 different families of classification algorithms on 115
binary datasets | cs.LG cs.CV | We tested 14 very different classification algorithms (random forest,
gradient boosting machines, SVM with linear, polynomial, and RBF kernels, 1-hidden-layer
neural nets, extreme learning machines, k-nearest neighbors and a bagging of
knn, naive Bayes, learning vector quantization, elastic net logistic
regression, sparse linear discriminant analysis, and a boosting of linear
classifiers) on 115 real-life binary datasets. We followed the Demsar analysis
and found that the three best classifiers (random forest, gbm and RBF SVM) are
not significantly different from each other. We also discuss that a change of
less than 0.0112 in the error rate should be considered an irrelevant
change, and used a Bayesian ANOVA analysis to conclude that with high
probability the differences between these three classifiers are not of practical
consequence. We also verified the execution time of "standard implementations"
of these algorithms and concluded that RBF SVM is the fastest (significantly
so) both in training time and in training plus testing time.
| Jacques Wainer | null | 1606.00930 | null | null |
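The Demsar-style analysis mentioned above compares classifiers by their per-dataset ranks and a Friedman test. The following sketch shows that computation with SciPy on placeholder random error rates; the data values are stand-ins, not the study's actual results, and only the 115-datasets-by-14-classifiers shape is taken from the abstract.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# errors[i, j] = error rate of classifier j on dataset i (placeholder random values).
rng = np.random.default_rng(0)
errors = rng.uniform(0.10, 0.30, size=(115, 14))

# Friedman test over the 14 classifiers, blocking by dataset.
stat, p_value = friedmanchisquare(*[errors[:, j] for j in range(errors.shape[1])])

# Mean rank of each classifier across datasets (rank 1 = lowest error on a dataset).
mean_ranks = np.argsort(np.argsort(errors, axis=1), axis=1).mean(axis=0) + 1
print(f"Friedman chi2 = {stat:.2f}, p = {p_value:.3g}")
print("mean ranks:", np.round(mean_ranks, 2))
```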
Smooth Imitation Learning for Online Sequence Prediction | cs.LG | We study the problem of smooth imitation learning for online sequence
prediction, where the goal is to train a policy that can smoothly imitate
demonstrated behavior in a dynamic and continuous environment in response to
online, sequential context input. Since the mapping from context to behavior is
often complex, we take a learning reduction approach to reduce smooth imitation
learning to a regression problem using complex function classes that are
regularized to ensure smoothness. We present a learning meta-algorithm that
achieves fast and stable convergence to a good policy. Our approach enjoys
several attractive properties, including being fully deterministic, employing
an adaptive learning rate that can provably yield larger policy improvements
compared to previous approaches, and the ability to ensure stable convergence.
Our empirical results demonstrate significant performance gains over previous
approaches.
| Hoang M. Le, Andrew Kang, Yisong Yue and Peter Carr | null | 1606.00968 | null | null |
Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet | stat.ML cs.CV cs.LG cs.NE | Video sequences contain rich dynamic patterns, such as dynamic texture
patterns that exhibit stationarity in the temporal domain, and action patterns
that are non-stationary in either spatial or temporal domain. We show that a
spatial-temporal generative ConvNet can be used to model and synthesize dynamic
patterns. The model defines a probability distribution on the video sequence,
and the log probability is defined by a spatial-temporal ConvNet that consists
of multiple layers of spatial-temporal filters to capture spatial-temporal
patterns of different scales. The model can be learned from the training video
sequences by an "analysis by synthesis" learning algorithm that iterates the
following two steps. Step 1 synthesizes video sequences from the currently
learned model. Step 2 then updates the model parameters based on the difference
between the synthesized video sequences and the observed training sequences. We
show that the learning algorithm can synthesize realistic dynamic patterns.
| Jianwen Xie, Song-Chun Zhu, Ying Nian Wu | null | 1606.00972 | null | null |
A Graph-Based Semi-Supervised k Nearest-Neighbor Method for Nonlinear
Manifold Distributed Data Classification | cs.LG stat.ML | $k$ Nearest Neighbors ($k$NN) is one of the most widely used supervised
learning algorithms to classify Gaussian distributed data, but it does not
achieve good results when it is applied to nonlinear manifold distributed data,
especially when a very limited number of labeled samples are available. In this
paper, we propose a new graph-based $k$NN algorithm which can effectively
handle both Gaussian distributed data and nonlinear manifold distributed data.
To achieve this goal, we first propose a constrained Tired Random Walk (TRW) by
constructing an $R$-level nearest-neighbor strengthened tree over the graph,
and then compute a TRW matrix for similarity measurement purposes. After this,
the nearest neighbors are identified according to the TRW matrix and the class
label of a query point is determined by the sum of all the TRW weights of its
nearest neighbors. To deal with online situations, we also propose a new
algorithm to handle sequential samples based on a local neighborhood
reconstruction. Comparison experiments are conducted on both synthetic data
sets and real-world data sets to demonstrate the validity of the proposed new
$k$NN algorithm and its improvements over other versions of $k$NN algorithms.
Given the widespread appearance of manifold structures in real-world problems
and the popularity of the traditional $k$NN algorithm, the proposed manifold
version of $k$NN shows promising potential for classifying manifold-distributed
data.
| Enmei Tu, Yaqian Zhang, Lin Zhu, Jie Yang and Nikola Kasabov | null | 1606.00985 | null | null |
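The tired-random-walk similarity and the weighted-vote decision rule described above can be sketched as follows. The closed form used here sums all walks damped by a factor alpha, and the affinity matrix, the value of alpha, and the label encoding (-1 for unlabeled points) are assumptions of this illustration; the paper's exact construction, including the R-level strengthened tree, is not reproduced.

```python
import numpy as np

def tired_random_walk_matrix(W, alpha=0.9):
    """Accumulated similarity of a damped ('tired') random walk (illustrative sketch).

    W     : (n, n) symmetric affinity matrix over the data graph
    alpha : damping factor in (0, 1); longer walks contribute less
    """
    P = W / W.sum(axis=1, keepdims=True)              # row-stochastic transitions
    # Sum of all damped walks: sum_t (alpha * P)**t = (I - alpha * P)^{-1}.
    return np.linalg.inv(np.eye(W.shape[0]) - alpha * P)

def trw_knn_predict(S, labels, query_idx, k=5):
    """Label a query by summing TRW weights of its k most similar labeled neighbors."""
    labeled = np.where(labels >= 0)[0]                 # -1 marks unlabeled points
    nn = labeled[np.argsort(-S[query_idx, labeled])[:k]]
    classes = np.unique(labels[labeled])
    scores = [S[query_idx, nn[labels[nn] == c]].sum() for c in classes]
    return classes[int(np.argmax(scores))]
```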
Machine Learning for E-mail Spam Filtering: Review,Techniques and Trends | cs.LG cs.CR | We present a comprehensive review of the most effective content-based e-mail
spam filtering techniques. We focus primarily on Machine Learning-based spam
filters and their variants, and report on a broad review covering the relevant
ideas, efforts, effectiveness, and current progress. The
initial exposition of the background examines the basics of e-mail spam
filtering, the evolving nature of spam, spammers playing cat-and-mouse with
e-mail service providers (ESPs), and the Machine Learning front in fighting
spam. We conclude by measuring the impact of Machine Learning-based filters and
explore the promising offshoots of latest developments.
| Alexy Bhowmick, Shyamanta M. Hazarika | null | 1606.01042 | null | null |
Difference of Convex Functions Programming Applied to Control with
Expert Data | math.OC cs.LG stat.ML | This paper reports applications of Difference of Convex functions (DC)
programming to Learning from Demonstrations (LfD) and Reinforcement Learning
(RL) with expert data. This is made possible because the norm of the Optimal
Bellman Residual (OBR), which is at the heart of many RL and LfD algorithms, is
DC. Improvement in performance is demonstrated on two specific algorithms,
namely Reward-regularized Classification for Apprenticeship Learning (RCAL) and
Reinforcement Learning with Expert Demonstrations (RLED), through experiments
on generic Markov Decision Processes (MDP), called Garnets.
| Bilal Piot, Matthieu Geist, Olivier Pietquin | null | 1606.01128 | null | null |
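The abstract above rests on the fact that the relevant objective is a difference of convex (DC) functions, which can be minimized by the DCA scheme: linearize the concave part at the current iterate and solve the remaining convex subproblem. The toy example below applies one-dimensional DCA to f(x) = x^4 - x^2 with g(x) = x^4 and h(x) = x^2; it only illustrates the DCA mechanics, not the paper's Optimal Bellman Residual formulation or the RCAL and RLED algorithms.

```python
import numpy as np

def dca_step(x_k):
    """One DCA iteration for f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = x**2.

    The concave part -h is linearized at x_k (gradient 2 * x_k), and the convex
    subproblem argmin_x x**4 - 2*x_k*x has the closed-form solution cbrt(x_k / 2).
    """
    return np.cbrt(x_k / 2.0)

x = 2.0
for _ in range(60):
    x = dca_step(x)
print(x, 1.0 / np.sqrt(2.0))  # converges to the stationary point x = 1/sqrt(2)
```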
On Valid Optimal Assignment Kernels and Applications to Graph
Classification | cs.LG stat.ML | The success of kernel methods has initiated the design of novel positive
semidefinite functions, in particular for structured data. A leading design
paradigm for this is the convolution kernel, which decomposes structured
objects into their parts and sums over all pairs of parts. Assignment kernels,
in contrast, are obtained from an optimal bijection between parts, which can
provide a more valid notion of similarity. In general however, optimal
assignments yield indefinite functions, which complicates their use in kernel
methods. We characterize a class of base kernels used to compare parts that
guarantees positive semidefinite optimal assignment kernels. These base kernels
give rise to hierarchies from which the optimal assignment kernels are computed
in linear time by histogram intersection. We apply these results by developing
the Weisfeiler-Lehman optimal assignment kernel for graphs. It provides high
classification accuracy on widely-used benchmark data sets improving over the
original Weisfeiler-Lehman kernel.
| Nils M. Kriege, Pierre-Louis Giscard, Richard C. Wilson | null | 1606.01141 | null | null |
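The linear-time computation mentioned above reduces the optimal assignment kernel to a histogram intersection over features derived from a hierarchy. The fragment below shows only the histogram intersection step on toy part-label histograms; constructing the hierarchy and the Weisfeiler-Lehman features, where the method's substance lies, is omitted.

```python
import numpy as np

def histogram_intersection_kernel(h_x, h_y):
    """Histogram intersection: sum of element-wise minima of two feature histograms."""
    return float(np.minimum(h_x, h_y).sum())

# Toy example: counts of (hierarchy-weighted) part labels for two graphs.
h_g1 = np.array([3.0, 0.0, 2.0, 1.0])
h_g2 = np.array([1.0, 2.0, 2.0, 0.0])
print(histogram_intersection_kernel(h_g1, h_g2))  # -> 3.0
```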
Robust Ensemble Clustering Using Probability Trajectories | stat.ML cs.LG | Although many successful ensemble clustering approaches have been developed
in recent years, there are still two limitations to most of the existing
approaches. First, they mostly overlook the issue of uncertain links, which may
mislead the overall consensus process. Second, they generally lack the ability
to incorporate global information to refine the local links. To address these
two limitations, in this paper, we propose a novel ensemble clustering approach
based on sparse graph representation and probability trajectory analysis. In
particular, we present the elite neighbor selection strategy to identify the
uncertain links by locally adaptive thresholds and build a sparse graph with a
small number of probably reliable links. We argue that a small number of
probably reliable links can lead to significantly better consensus results than
using all graph links regardless of their reliability. The random walk process
driven by a new transition probability matrix is utilized to explore the global
information in the graph. We derive a novel and dense similarity measure from
the sparse graph by analyzing the probability trajectories of the random
walkers, based on which two consensus functions are further proposed.
Experimental results on multiple real-world datasets demonstrate the
effectiveness and efficiency of our approach.
| Dong Huang and Jian-Huang Lai and Chang-Dong Wang | 10.1109/TKDE.2015.2503753 | 1606.01160 | null | null |
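The sparse-graph construction and the random-walk transition matrix described above can be approximated as in the sketch below, which keeps only a fixed fraction of the strongest co-association links per object. The fixed `keep_frac` stands in for the paper's locally adaptive elite-neighbor thresholds, and the probability-trajectory similarity and consensus functions are not shown.

```python
import numpy as np

def sparse_transition_matrix(co_assoc, keep_frac=0.1):
    """Sparse graph and random-walk transition matrix from a co-association matrix
    (an illustrative approximation of the elite-neighbor selection strategy)."""
    n = co_assoc.shape[0]
    k = max(1, int(keep_frac * n))
    sparse = np.zeros_like(co_assoc)
    for i in range(n):
        # Keep only the k most reliable links of object i.
        top = np.argsort(-co_assoc[i])[:k]
        sparse[i, top] = co_assoc[i, top]
    sparse = np.maximum(sparse, sparse.T)              # keep the graph symmetric
    row_sums = sparse.sum(axis=1, keepdims=True)
    # Row-normalize to obtain the transition probabilities driving the random walk.
    return np.divide(sparse, row_sums, out=np.zeros_like(sparse), where=row_sums > 0)
```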