title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Empirical Studies on Symbolic Aggregation Approximation Under
Statistical Perspectives for Knowledge Discovery in Time Series | cs.LG cs.IT math.IT | Symbolic Aggregation approXimation (SAX) has been the de facto standard
representation method for knowledge discovery in time series on a number of
tasks and applications. So far, very little work has been done in empirically
investigating the intrinsic properties and statistical mechanics in SAX words.
In this paper, we applied several statistical measures and proposed a new
statistical measure, the information embedding cost (IEC), to analyze the
statistical behaviors of the symbolic dynamics. Our experiments on the
benchmark datasets and the clinical signals demonstrate that SAX can always
reduce the complexity while preserving the core information embedded in the
original time series with significant embedding efficiency. Our proposed IEC
score provides an a priori criterion to determine whether SAX is adequate for a
specific dataset, and it can be generalized to evaluate other symbolic representations. Our work
provides an analytical framework with several statistical tools to analyze,
evaluate and further improve the symbolic dynamics for knowledge discovery in
time series.
| Wei Song, Zhiguang Wang, Yangdong Ye, Ming Fan | null | 1506.02732 | null | null |
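The SAX transform analyzed in the abstract above is straightforward to reproduce. The following is a minimal, hypothetical NumPy/SciPy sketch of the standard pipeline (z-normalization, piecewise aggregate approximation, quantization against Gaussian breakpoints); segment and alphabet sizes are illustrative, and this is not the authors' code or their IEC measure.

```python
import numpy as np
from scipy.stats import norm

def sax_transform(series, n_segments=8, alphabet_size=4):
    """Convert a 1-D time series into a SAX word (a minimal sketch, not the authors' code)."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)                # z-normalize
    # Piecewise Aggregate Approximation: mean of each of n_segments chunks.
    paa = np.array([chunk.mean() for chunk in np.array_split(x, n_segments)])
    # Breakpoints splitting the standard normal into equiprobable regions.
    breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
    symbols = np.searchsorted(breakpoints, paa)           # integers 0..alphabet_size-1
    return "".join(chr(ord("a") + s) for s in symbols)

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 256)
    print(sax_transform(np.sin(t)))                       # prints an 8-symbol SAX word
```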
Inverting Visual Representations with Convolutional Networks | cs.NE cs.CV cs.LG | Feature representations, both hand-designed and learned ones, are often hard
to analyze and interpret, even when they are extracted from visual data. We
propose a new approach to study image representations by inverting them with an
up-convolutional neural network. We apply the method to shallow representations
(HOG, SIFT, LBP), as well as to deep networks. For shallow representations our
approach provides significantly better reconstructions than existing methods,
revealing that there is surprisingly rich information contained in these
features. Inverting a deep network trained on ImageNet provides several
insights into the properties of the feature representation learned by the
network. Most strikingly, the colors and the rough contours of an image can be
reconstructed from activations in higher network layers and even from the
predicted class probabilities.
| Alexey Dosovitskiy and Thomas Brox | null | 1506.02753 | null | null |
WordRank: Learning Word Embeddings via Robust Ranking | cs.CL cs.LG stat.ML | Embedding words in a vector space has gained a lot of attention in recent
years. While state-of-the-art methods provide efficient computation of word
similarities via a low-dimensional matrix embedding, their motivation is often
left unclear. In this paper, we argue that word embedding can be naturally
viewed as a ranking problem due to the ranking nature of the evaluation
metrics. Then, based on this insight, we propose a novel framework WordRank
that efficiently estimates word representations via robust ranking, in which
the attention mechanism and robustness to noise are readily achieved via the
DCG-like ranking losses. The performance of WordRank is measured in word
similarity and word analogy benchmarks, and the results are compared to the
state-of-the-art word embedding techniques. Our algorithm is very competitive
with the state of the art on large corpora, while outperforming it by a
significant margin when the training set is limited (i.e., sparse and noisy).
With 17 million tokens, WordRank performs almost as well as existing methods
using 7.2 billion tokens on a popular word similarity benchmark. Our multi-node
distributed implementation of WordRank is publicly available for general usage.
| Shihao Ji, Hyokun Yun, Pinar Yanardag, Shin Matsushima, and S. V. N.
Vishwanathan | null | 1506.02761 | null | null |
Estimating Posterior Ratio for Classification: Transfer Learning from
Probabilistic Perspective | stat.ML cs.LG | Transfer learning assumes classifiers of similar tasks share certain
parameter structures. Unfortunately, modern classifiers use sophisticated
feature representations with huge parameter spaces which lead to costly
transfer. Under the impression that changes from one classifier to another
should be ``simple'', an efficient transfer learning criterion that learns only
the ``differences'' is proposed in this paper. We train a \emph{posterior
ratio}, which turns out to minimize the upper bound of the target learning
risk. The model of posterior ratio does not have to share the same parameter
space with the source classifier at all so it can be easily modelled and
efficiently trained. The resulting classifier therefore is obtained by simply
multiplying the existing probabilistic-classifier with the learned posterior
ratio.
| Song Liu, Kenji Fukumizu | null | 1506.02784 | null | null |
On the Error of Random Fourier Features | cs.LG stat.ML | Kernel methods give powerful, flexible, and theoretically grounded approaches
to solving many problems in machine learning. The standard approach, however,
requires pairwise evaluations of a kernel function, which can lead to
scalability issues for very large datasets. Rahimi and Recht (2007) suggested a
popular approach to handling this problem, known as random Fourier features.
The quality of this approximation, however, is not well understood. We improve
the uniform error bound of that paper, as well as giving novel understandings
of the embedding's variance, approximation error, and use in some machine
learning methods. We also point out that, surprisingly, of the two main variants
of those features, the more widely used one is strictly higher-variance for the
Gaussian kernel and has worse bounds.
| Danica J. Sutherland and Jeff Schneider | null | 1506.02785 | null | null |
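The two feature variants that the abstract above compares can be written down in a few lines. This is a minimal sketch of the Rahimi-Recht construction for the Gaussian kernel, not the paper's analysis code; the function name, feature counts, and seed are illustrative.

```python
import numpy as np

def rff_features(X, n_features=256, gamma=1.0, variant="cos_sin", seed=0):
    """Random Fourier features approximating k(x, y) = exp(-gamma * ||x - y||^2).

    A minimal sketch of the two standard variants compared in the abstract above.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    if variant == "cos_sin":                    # [cos(Wx), sin(Wx)] embedding
        Z = np.hstack([np.cos(X @ W), np.sin(X @ W)]) / np.sqrt(n_features)
    else:                                       # cos(Wx + b) embedding with random phase
        b = rng.uniform(0, 2 * np.pi, size=n_features)
        Z = np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
    return Z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(5, 3))
    Z = rff_features(X, n_features=2000, gamma=0.5)
    exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    print(np.abs(Z @ Z.T - exact).max())        # small approximation error
```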
Mixing Time Estimation in Reversible Markov Chains from a Single Sample
Path | cs.LG stat.ML | This article provides the first procedure for computing a fully
data-dependent interval that traps the mixing time $t_{\text{mix}}$ of a finite
reversible ergodic Markov chain at a prescribed confidence level. The interval
is computed from a single finite-length sample path from the Markov chain, and
does not require the knowledge of any parameters of the chain. This stands in
contrast to previous approaches, which either only provide point estimates, or
require a reset mechanism, or additional prior knowledge. The interval is
constructed around the relaxation time $t_{\text{relax}}$, which is strongly
related to the mixing time, and the width of the interval converges to zero
roughly at a $\sqrt{n}$ rate, where $n$ is the length of the sample path. Upper
and lower bounds are given on the number of samples required to achieve
constant-factor multiplicative accuracy. The lower bounds indicate that, unless
further restrictions are placed on the chain, no procedure can achieve this
accuracy level before seeing each state at least $\Omega(t_{\text{relax}})$
times on the average. Finally, future directions of research are identified.
| Daniel Hsu, Aryeh Kontorovich, Csaba Szepesv\'ari | null | 1506.02903 | null | null |
Training Restricted Boltzmann Machines via the Thouless-Anderson-Palmer
Free Energy | cond-mat.dis-nn cs.LG cs.NE stat.ML | Restricted Boltzmann machines are undirected neural networks which have been
shown to be effective in many applications, including serving as
initializations for training deep multi-layer neural networks. One of the main
reasons for their success is the existence of efficient and practical
stochastic algorithms, such as contrastive divergence, for unsupervised
training. We propose an alternative deterministic iterative procedure based on
an improved mean field method from statistical physics known as the
Thouless-Anderson-Palmer approach. We demonstrate that our algorithm provides
performance equal to, and sometimes superior to, persistent contrastive
divergence, while also providing a clear and easy to evaluate objective
function. We believe that this strategy can be easily generalized to other
models as well as to more accurate higher-order approximations, paving the way
for systematic improvements in training Boltzmann machines with hidden units.
| Marylou Gabri\'e and Eric W. Tramel and Florent Krzakala | null | 1506.02914 | null | null |
Stagewise Learning for Sparse Clustering of Discretely-Valued Data | stat.ML cs.LG q-bio.QM | The performance of EM in learning mixtures of product distributions often
depends on the initialization. This can be problematic in crowdsourcing and
other applications, e.g. when a small number of 'experts' are diluted by a
large number of noisy, unreliable participants. We develop a new EM algorithm
that is driven by these experts. In a manner that differs from other
approaches, we start from a single mixture class. The algorithm then develops
the set of 'experts' in a stagewise fashion based on a mutual information
criterion. At each stage EM operates on this subset of the players, effectively
regularizing the E rather than the M step. Experiments show that stagewise EM
outperforms other initialization techniques for crowdsourcing and neuroscience
applications, and can guide a full EM to results comparable to those obtained
knowing the exact distribution.
| Vincent Zhao, Steven W. Zucker | null | 1506.02975 | null | null |
Accelerated Stochastic Gradient Descent for Minimizing Finite Sums | stat.ML cs.LG | We propose an optimization method for minimizing the finite sums of smooth
convex functions. Our method incorporates an accelerated gradient descent (AGD)
and a stochastic variance reduction gradient (SVRG) in a mini-batch setting.
Unlike SVRG, our method can be directly applied to non-strongly and strongly
convex problems. We show that our method achieves a lower overall complexity
than the recently proposed methods that support non-strongly convex problems.
Moreover, this method has a fast rate of convergence for strongly convex
problems. Our experiments show the effectiveness of our method.
| Atsushi Nitanda | null | 1506.03016 | null | null |
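For context on what the method above builds on, here is a plain mini-batch SVRG loop on a least-squares objective; the accelerated-gradient (AGD) component that the paper adds is intentionally omitted, and the step size and problem sizes are illustrative.

```python
import numpy as np

def svrg_least_squares(A, b, lr=0.1, n_outer=20, n_inner=100, batch=8, seed=0):
    """Mini-batch SVRG on f(w) = (1/2n) ||Aw - b||^2 (non-accelerated sketch only)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(n_outer):
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n            # full gradient at the snapshot
        for _ in range(n_inner):
            idx = rng.integers(0, n, size=batch)
            Ai, bi = A[idx], b[idx]
            g_w = Ai.T @ (Ai @ w - bi) / batch            # stochastic gradient at w
            g_snap = Ai.T @ (Ai @ w_snap - bi) / batch    # same batch, at the snapshot
            w -= lr * (g_w - g_snap + full_grad)          # variance-reduced step
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(500, 10))
    w_true = rng.normal(size=10)
    b = A @ w_true
    print(np.linalg.norm(svrg_least_squares(A, b) - w_true))   # should be near 0
```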
On the Interpretability of Conditional Probability Estimates in the
Agnostic Setting | cs.LG | We study the interpretability of conditional probability estimates for binary
classification under the agnostic setting. Under the agnostic
setting, conditional probability estimates do not necessarily reflect the true
conditional probabilities. Instead, they have a certain calibration property:
among all data points that the classifier has predicted P(Y = 1|X) = p, p
portion of them actually have label Y = 1. For cost-sensitive decision
problems, this calibration property provides adequate support for us to use
Bayes Decision Theory. In this paper, we define a novel measure for the
calibration property together with its empirical counterpart, and prove a
uniform convergence result between them. This new measure enables us to
formally justify the calibration property of conditional probability
estimations, and provides new insights on the problem of estimating and
calibrating conditional probabilities.
| Yihan Gao, Aditya Parameswaran, Jian Peng | null | 1506.03018 | null | null |
Measuring Sample Quality with Stein's Method | stat.ML cs.LG math.PR stat.ME | To improve the efficiency of Monte Carlo estimation, practitioners are
turning to biased Markov chain Monte Carlo procedures that trade off asymptotic
exactness for computational speed. The reasoning is sound: a reduction in
variance due to more rapid sampling can outweigh the bias introduced. However,
the inexactness creates new challenges for sampler and parameter selection,
since standard measures of sample quality like effective sample size do not
account for asymptotic bias. To address these challenges, we introduce a new
computable quality measure based on Stein's method that quantifies the maximum
discrepancy between sample and target expectations over a large class of test
functions. We use our tool to compare exact, biased, and deterministic sample
sequences and illustrate applications to hyperparameter selection, convergence
rate assessment, and quantifying bias-variance tradeoffs in posterior
inference.
| Jackson Gorham and Lester Mackey | null | 1506.03039 | null | null |
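The abstract above describes a Stein-discrepancy quality measure. As a rough illustration of the idea only (the paper optimizes over a different, non-kernel class of test functions, so this is not its exact measure), the later kernelized variant can be computed for a standard-normal target as follows; the key point it shares with the paper is that the target enters only through its score function, never through a normalizing constant.

```python
import numpy as np

def ksd_gaussian_target(samples, bandwidth=1.0):
    """Kernelized Stein discrepancy of a sample against N(0, I) (illustrative sketch)."""
    X = np.atleast_2d(samples).astype(float)
    n, d = X.shape
    score = -X                                    # score of N(0, I): grad log p(x) = -x
    diff = X[:, None, :] - X[None, :, :]          # pairwise differences x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)
    K = np.exp(-sq / (2 * bandwidth ** 2))        # RBF kernel matrix
    gradK_x = -diff / bandwidth ** 2 * K[..., None]     # d/dx k(x, y)
    gradK_y = diff / bandwidth ** 2 * K[..., None]      # d/dy k(x, y)
    trace_term = K * (d / bandwidth ** 2 - sq / bandwidth ** 4)
    term = (score @ score.T) * K
    term += np.einsum("id,ijd->ij", score, gradK_y)
    term += np.einsum("jd,ijd->ij", score, gradK_x)
    term += trace_term
    return term.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(ksd_gaussian_target(rng.normal(size=(500, 2))))            # near 0
    print(ksd_gaussian_target(rng.normal(loc=1.0, size=(500, 2))))   # clearly larger
```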
Deep SimNets | cs.NE cs.LG | We present a deep layered architecture that generalizes convolutional neural
networks (ConvNets). The architecture, called SimNets, is driven by two
operators: (i) a similarity function that generalizes inner-product, and (ii) a
log-mean-exp function called MEX that generalizes maximum and average. The two
operators applied in succession give rise to a standard neuron but in "feature
space". The feature spaces realized by SimNets depend on the choice of the
similarity operator. The simplest setting, which corresponds to a convolution,
realizes the feature space of the Exponential kernel, while other settings
realize feature spaces of more powerful kernels (Generalized Gaussian, which
includes as special cases RBF and Laplacian), or even dynamically learned
feature spaces (Generalized Multiple Kernel Learning). As a result, the SimNet
contains a higher abstraction level compared to a traditional ConvNet. We argue
that enhanced expressiveness is important when the networks are small due to
run-time constraints (such as those imposed by mobile applications). Empirical
evaluation validates the superior expressiveness of SimNets, showing a
significant gain in accuracy over ConvNets when computational resources at
run-time are limited. We also show that in large-scale settings, where
computational complexity is less of a concern, the additional capacity of
SimNets can be controlled with proper regularization, yielding accuracies
comparable to state of the art ConvNets.
| Nadav Cohen, Or Sharir and Amnon Shashua | null | 1506.03059 | null | null |
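The MEX operator mentioned in the SimNets abstract is a one-liner. A minimal sketch, assuming the usual log-mean-exp form with inverse temperature beta: beta -> +inf recovers max, beta -> 0 recovers the arithmetic mean, and beta -> -inf recovers min.

```python
import numpy as np

def mex(x, beta, axis=-1):
    """MEX operator: (1/beta) * log(mean(exp(beta * x))), a smooth max/mean/min."""
    x = np.asarray(x, dtype=float)
    if beta == 0:
        return x.mean(axis=axis)                  # limiting case, avoids 0/0
    y = beta * x
    m = y.max(axis=axis, keepdims=True)           # log-sum-exp shift for stability
    return (m.squeeze(axis) + np.log(np.mean(np.exp(y - m), axis=axis))) / beta

if __name__ == "__main__":
    v = np.array([0.1, 0.5, 2.0, -1.0])
    print(mex(v, beta=50.0))    # close to max(v) = 2.0
    print(mex(v, beta=1e-9))    # close to mean(v)
    print(mex(v, beta=-50.0))   # close to min(v) = -1.0
```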
Clustering by transitive propagation | cs.LG cond-mat.stat-mech stat.ML | We present a global optimization algorithm for clustering data given the
ratio of likelihoods that each pair of data points is in the same cluster or in
different clusters. To define a clustering solution in terms of pairwise
relationships, a necessary and sufficient condition is that belonging to the
same cluster satisfies transitivity. We define a global objective function
based on pairwise likelihood ratios and a transitivity constraint over all
triples, assigning an equal prior probability to all clustering solutions. We
maximize the objective function by implementing max-sum message passing on the
corresponding factor graph to arrive at an O(N^3) algorithm. Lastly, we
demonstrate an application inspired by mutational sequencing for decoding
random binary words transmitted through a noisy channel.
| Vijay Kumar and Dan Levy | null | 1506.03072 | null | null |
Scheduled Sampling for Sequence Prediction with Recurrent Neural
Networks | cs.LG cs.CL cs.CV | Recurrent Neural Networks can be trained to produce sequences of tokens given
some input, as exemplified by recent results in machine translation and image
captioning. The current approach to training them consists of maximizing the
likelihood of each token in the sequence given the current (recurrent) state
and the previous token. At inference, the unknown previous token is then
replaced by a token generated by the model itself. This discrepancy between
training and inference can yield errors that can accumulate quickly along the
generated sequence. We propose a curriculum learning strategy to gently change
the training process from a fully guided scheme using the true previous token,
towards a less guided scheme which mostly uses the generated token instead.
Experiments on several sequence prediction tasks show that this approach yields
significant improvements. Moreover, it was used successfully in our winning
entry to the MSCOCO image captioning challenge, 2015.
| Samy Bengio, Oriol Vinyals, Navdeep Jaitly, Noam Shazeer | null | 1506.03099 | null | null |
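The curriculum described in the abstract above amounts to a coin flip at each decoder step whose bias decays over training. A minimal sketch of the decaying schedules and the token-selection rule, detached from any particular RNN; the schedule names and constants are illustrative, not the paper's settings.

```python
import numpy as np

def sampling_probability(step, k=2000.0, schedule="inverse_sigmoid"):
    """Probability of feeding the ground-truth previous token at a training step."""
    if schedule == "linear":
        return max(0.0, 1.0 - step / k)
    if schedule == "exponential":
        return 0.9 ** (step / k)
    return k / (k + np.exp(step / k))             # inverse-sigmoid decay

def choose_prev_token(true_token, model_token, step, rng):
    """Flip a coin: use the true previous token or the model's own sampled token."""
    return true_token if rng.random() < sampling_probability(step) else model_token

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for step in (0, 2000, 10000, 40000):
        print(step, round(sampling_probability(step), 3))   # decays toward 0
```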
Provable Bayesian Inference via Particle Mirror Descent | cs.LG stat.CO stat.ML | Bayesian methods are appealing in their flexibility in modeling complex data
and their ability to capture uncertainty in parameters. However, when Bayes' rule
does not yield a tractable closed form, most approximate inference algorithms
lack either scalability or rigorous guarantees. To tackle this challenge, we
propose a simple yet provable algorithm, \emph{Particle Mirror Descent} (PMD),
to iteratively approximate the posterior density. PMD is inspired by stochastic
functional mirror descent where one descends in the density space using a small
batch of data points at each iteration, and by particle filtering where one
uses samples to approximate a function. We prove a result of the first kind:
with $m$ particles, PMD provides a posterior density estimator that converges
in terms of $KL$-divergence to the true posterior at a rate of $O(1/\sqrt{m})$. We
demonstrate the competitive empirical performance of PMD compared to several
approximate inference algorithms in mixture models, logistic regression, sparse
Gaussian processes and latent Dirichlet allocation on large scale datasets.
| Bo Dai, Niao He, Hanjun Dai, Le Song | null | 1506.03101 | null | null |
Pointer Networks | stat.ML cs.CG cs.LG cs.NE | We introduce a new neural architecture to learn the conditional probability
of an output sequence with elements that are discrete tokens corresponding to
positions in an input sequence. Such problems cannot be trivially addressed by
existing approaches such as sequence-to-sequence and Neural Turing Machines,
because the number of target classes in each step of the output depends on the
length of the input, which is variable. Problems such as sorting variable sized
sequences, and various combinatorial optimization problems belong to this
class. Our model solves the problem of variable size output dictionaries using
a recently proposed mechanism of neural attention. It differs from the previous
attention attempts in that, instead of using attention to blend hidden units of
an encoder to a context vector at each decoder step, it uses attention as a
pointer to select a member of the input sequence as the output. We call this
architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn
approximate solutions to three challenging geometric problems -- finding planar
convex hulls, computing Delaunay triangulations, and the planar Travelling
Salesman Problem -- using training examples alone. Ptr-Nets not only improve
over sequence-to-sequence with input attention, but also allow us to generalize
to variable size output dictionaries. We show that the learnt models generalize
beyond the maximum lengths they were trained on. We hope our results on these
tasks will encourage a broader exploration of neural learning for discrete
problems.
| Oriol Vinyals, Meire Fortunato, Navdeep Jaitly | null | 1506.03134 | null | null |
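The pointing mechanism the abstract describes reuses additive attention scores directly as the output distribution over input positions. A minimal NumPy sketch of one decoding step, with random placeholder weights rather than trained parameters:

```python
import numpy as np

def pointer_attention(encoder_states, decoder_state, W1, W2, v):
    """One Ptr-Net decoding step: the attention softmax IS the output distribution."""
    scores = np.tanh(encoder_states @ W1 + decoder_state @ W2) @ v   # one score per input
    scores -= scores.max()                                           # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs                       # probability of pointing at each input element

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_inputs, d_enc, d_dec, d_att = 6, 8, 8, 16
    enc = rng.normal(size=(n_inputs, d_enc))
    dec = rng.normal(size=(d_dec,))
    W1 = rng.normal(size=(d_enc, d_att))
    W2 = rng.normal(size=(d_dec, d_att))
    v = rng.normal(size=(d_att,))
    p = pointer_attention(enc, dec, W1, W2, v)
    print(p, p.argmax())               # distribution over the 6 input positions
```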
Symmetric Tensor Completion from Multilinear Entries and Learning
Product Mixtures over the Hypercube | cs.DS cs.LG stat.ML | We give an algorithm for completing an order-$m$ symmetric low-rank tensor
from its multilinear entries in time roughly proportional to the number of
tensor entries. We apply our tensor completion algorithm to the problem of
learning mixtures of product distributions over the hypercube, obtaining new
algorithmic results. If the centers of the product distribution are linearly
independent, then we recover distributions with as many as $\Omega(n)$ centers
in polynomial time and sample complexity. In the general case, we recover
distributions with as many as $\tilde\Omega(n)$ centers in quasi-polynomial
time, answering an open problem of Feldman et al. (SIAM J. Comp.) for the
special case of distributions with incoherent bias vectors.
Our main algorithmic tool is the iterated application of a low-rank matrix
completion algorithm for matrices with adversarially missing entries.
| Tselil Schramm and Benjamin Weitz | null | 1506.03137 | null | null |
Copula variational inference | stat.ML cs.LG stat.CO stat.ME | We develop a general variational inference method that preserves dependency
among the latent variables. Our method uses copulas to augment the families of
distributions used in mean-field and structured approximations. Copulas model
the dependency that is not captured by the original variational distribution,
and thus the augmented variational family guarantees better approximations to
the posterior. With stochastic optimization, inference on the augmented
distribution is scalable. Furthermore, our strategy is generic: it can be
applied to any inference procedure that currently uses the mean-field or
structured approach. Copula variational inference has many advantages: it
reduces bias; it is less sensitive to local optima; it is less sensitive to
hyperparameters; and it helps characterize and interpret the dependency among
the latent variables.
| Dustin Tran, David M. Blei, Edoardo M. Airoldi | null | 1506.03159 | null | null |
Permutation Search Methods are Efficient, Yet Faster Search is Possible | cs.LG cs.DB cs.DS | We survey permutation-based methods for approximate k-nearest neighbor
search. In these methods, every data point is represented by a ranked list of
pivots sorted by the distance to this point. Such ranked lists are called
permutations. The underpinning assumption is that, for both metric and
non-metric spaces, the distance between permutations is a good proxy for the
distance between original points. Thus, it should be possible to efficiently
retrieve most true nearest neighbors by examining only a tiny subset of data
points whose permutations are similar to the permutation of a query. We further
test this assumption by carrying out an extensive experimental evaluation where
permutation methods are pitted against state-of-the-art benchmarks (the
multi-probe LSH, the VP-tree, and proximity-graph based retrieval) on a variety
of realistically large data sets from the image and textual domains. The focus is
on the high-accuracy retrieval methods for generic spaces. Additionally, we
assume that both data and indices are stored in main memory. We find
permutation methods to be reasonably efficient and describe a setup where these
methods are most useful. To ease reproducibility, we make our software and data
sets publicly available.
| Bilegsaikhan Naidan, Leonid Boytsov, Eric Nyberg | null | 1506.03163 | null | null |
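A toy version of the permutation-filtering idea surveyed above: rank a fixed set of pivots by distance from each point, shortlist candidates by permutation (Spearman footrule) distance, then re-rank the shortlist exactly. This sketch is purely illustrative and is unrelated to the benchmarked implementations.

```python
import numpy as np

def pivot_permutations(X, pivots):
    """For each point, rank the pivots by distance; the rank vector is its permutation."""
    d = np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=-1)   # (n, n_pivots)
    return np.argsort(np.argsort(d, axis=1), axis=1)

def permutation_knn(X, query, pivots, k=5, candidates=50):
    """Approximate k-NN via permutation filtering plus exact re-ranking (a sketch)."""
    perms = pivot_permutations(X, pivots)
    q_perm = pivot_permutations(query[None, :], pivots)[0]
    proxy = np.abs(perms - q_perm).sum(axis=1)         # Spearman footrule distance
    shortlist = np.argsort(proxy)[:candidates]         # cheap filtering step
    true_d = np.linalg.norm(X[shortlist] - query, axis=1)
    return shortlist[np.argsort(true_d)[:k]]           # exact re-ranking

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 16))
    pivots = X[rng.choice(len(X), 32, replace=False)]
    q = rng.normal(size=16)
    approx = permutation_knn(X, q, pivots, k=5)
    exact = np.argsort(np.linalg.norm(X - q, axis=1))[:5]
    print(len(set(approx) & set(exact)), "of 5 true neighbours recovered")
```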
Explore no more: Improved high-probability regret bounds for
non-stochastic bandits | cs.LG stat.ML | This work addresses the problem of regret minimization in non-stochastic
multi-armed bandit problems, focusing on performance guarantees that hold with
high probability. Such results are rather scarce in the literature since
proving them requires a great deal of technical effort and significant
modifications to the standard, more intuitive algorithms that come only with
guarantees that hold on expectation. One of these modifications is forcing the
learner to sample arms from the uniform distribution at least
$\Omega(\sqrt{T})$ times over $T$ rounds, which can adversely affect
performance if many of the arms are suboptimal. While it is widely conjectured
that this property is essential for proving high-probability regret bounds, we
show in this paper that it is possible to achieve such strong results without
this undesirable exploration component. Our result relies on a simple and
intuitive loss-estimation strategy called Implicit eXploration (IX) that allows
a remarkably clean analysis. To demonstrate the flexibility of our technique,
we derive several improved high-probability bounds for various extensions of
the standard multi-armed bandit framework. Finally, we conduct a simple
experiment that illustrates the robustness of our implicit exploration
technique.
| Gergely Neu | null | 1506.03271 | null | null |
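The IX estimator described above changes a single line of EXP3: the importance weight gets an extra gamma in the denominator, removing the need for forced uniform exploration. A minimal sketch with illustrative step sizes (not the tuned constants from the paper's analysis):

```python
import numpy as np

def exp3_ix(loss_matrix, eta=0.05, gamma=0.025, seed=0):
    """EXP3 with the Implicit eXploration (IX) loss estimator (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    cum_est_loss = np.zeros(K)
    total_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * (cum_est_loss - cum_est_loss.min()))   # stable softmax weights
        p = w / w.sum()
        arm = rng.choice(K, p=p)
        loss = loss_matrix[t, arm]
        total_loss += loss
        cum_est_loss[arm] += loss / (p[arm] + gamma)   # IX estimator: +gamma biases it low
        # Note: no forced uniform exploration is added, which is the point of IX.
    return total_loss

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    T, K = 5000, 10
    losses = rng.uniform(size=(T, K))
    losses[:, 3] *= 0.5                                # arm 3 is the best arm
    print(exp3_ix(losses), "vs best arm", losses[:, 3].sum())
```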
Neural Adaptive Sequential Monte Carlo | cs.LG stat.ML | Sequential Monte Carlo (SMC), or particle filtering, is a popular class of
methods for sampling from an intractable target distribution using a sequence
of simpler intermediate distributions. Like other importance sampling-based
methods, performance is critically dependent on the proposal distribution: a
bad proposal can lead to arbitrarily inaccurate estimates of the target
distribution. This paper presents a new method for automatically adapting the
proposal using an approximation of the Kullback-Leibler divergence between the
true posterior and the proposal distribution. The method is very flexible,
applicable to any parameterized proposal distribution and it supports online
and batch variants. We use the new framework to adapt powerful proposal
distributions with rich parameterizations based upon neural networks leading to
Neural Adaptive Sequential Monte Carlo (NASMC). Experiments indicate that NASMC
significantly improves inference in a non-linear state space model
outperforming adaptive proposal methods including the Extended Kalman and
Unscented Particle Filters. Experiments also indicate that improved inference
translates into improved parameter learning when NASMC is used as a subroutine
of Particle Marginal Metropolis Hastings. Finally we show that NASMC is able to
train a latent variable recurrent neural network (LV-RNN) achieving results
that compete with the state-of-the-art for polyphonic music modelling. NASMC
can be seen as bridging the gap between adaptive SMC methods and the recent
work in scalable, black-box variational inference.
| Shixiang Gu and Zoubin Ghahramani and Richard E. Turner | null | 1506.03338 | null | null |
An efficient algorithm for contextual bandits with knapsacks, and an
extension to concave objectives | cs.LG cs.AI stat.ML | We consider a contextual version of multi-armed bandit problem with global
knapsack constraints. In each round, the outcome of pulling an arm is a scalar
reward and a resource consumption vector, both dependent on the context, and
the global knapsack constraints require the total consumption for each resource
to be below some pre-fixed budget. The learning agent competes with an
arbitrary set of context-dependent policies. This problem was introduced by
Badanidiyuru et al. (2014), who gave a computationally inefficient algorithm
with near-optimal regret bounds for it. We give a computationally efficient
algorithm for this problem with slightly better regret bounds, by generalizing
the approach of Agarwal et al. (2014) for the non-constrained version of the
problem. The computational time of our algorithm scales logarithmically in the
size of the policy space. This answers the main open question of Badanidiyuru
et al. (2014). We also extend our results to a variant where there are no
knapsack constraints but the objective is an arbitrary Lipschitz concave
function of the sum of outcome vectors.
| Shipra Agrawal and Nikhil R. Devanur and Lihong Li | null | 1506.03374 | null | null |
On the Prior Sensitivity of Thompson Sampling | cs.LG cs.AI stat.ML | The empirically successful Thompson Sampling algorithm for stochastic bandits
has drawn much interest in understanding its theoretical properties. One
important benefit of the algorithm is that it allows domain knowledge to be
conveniently encoded as a prior distribution to balance exploration and
exploitation more effectively. While it is generally believed that the
algorithm's regret is low (high) when the prior is good (bad), little is known
about the exact dependence. In this paper, we fully characterize the
algorithm's worst-case dependence of regret on the choice of prior, focusing on
a special yet representative case. These results also provide insights into the
general sensitivity of the algorithm to the choice of priors. In particular,
with $p$ being the prior probability mass of the true reward-generating model,
we prove $O(\sqrt{T/p})$ and $O(\sqrt{(1-p)T})$ regret upper bounds for the
bad- and good-prior cases, respectively, as well as \emph{matching} lower
bounds. Our proofs rely on the discovery of a fundamental property of Thompson
Sampling and make heavy use of martingale theory, both of which appear novel in
the literature, to the best of our knowledge.
| Che-Yu Liu and Lihong Li | null | 1506.03378 | null | null |
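For readers who want the baseline algorithm in hand, here is generic Beta-Bernoulli Thompson Sampling with a configurable prior; the paper's analysis concerns a more specific setting, and the priors and arm means below are purely illustrative.

```python
import numpy as np

def thompson_bernoulli(true_means, T=3000, prior_a=None, prior_b=None, seed=0):
    """Beta-Bernoulli Thompson Sampling; the prior encodes the domain knowledge
    whose effect on regret the paper above quantifies (a generic sketch)."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    a = np.ones(K) if prior_a is None else np.array(prior_a, dtype=float)
    b = np.ones(K) if prior_b is None else np.array(prior_b, dtype=float)
    best, regret = max(true_means), 0.0
    for _ in range(T):
        samples = rng.beta(a, b)                     # one posterior sample per arm
        arm = int(np.argmax(samples))
        reward = float(rng.random() < true_means[arm])
        a[arm] += reward
        b[arm] += 1.0 - reward                       # conjugate posterior update
        regret += best - true_means[arm]
    return regret

if __name__ == "__main__":
    means = [0.3, 0.5, 0.7]
    print("uniform prior:", thompson_bernoulli(means))
    # A prior placing most of its mass on the wrong arm typically inflates regret.
    print("bad prior    :", thompson_bernoulli(means, prior_a=[20, 1, 1], prior_b=[1, 1, 1]))
```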
The Online Coupon-Collector Problem and Its Application to Lifelong
Reinforcement Learning | cs.LG cs.AI | Transferring knowledge across a sequence of related tasks is an important
challenge in reinforcement learning (RL). Despite much encouraging empirical
evidence, there has been little theoretical analysis. In this paper, we study a
class of lifelong RL problems: the agent solves a sequence of tasks modeled as
finite Markov decision processes (MDPs), each of which is from a finite set of
MDPs with the same state/action sets and different transition/reward functions.
Motivated by the need for cross-task exploration in lifelong learning, we
formulate a novel online coupon-collector problem and give an optimal
algorithm. This allows us to develop a new lifelong RL algorithm, whose overall
sample complexity in a sequence of tasks is much smaller than single-task
learning, even if the sequence of tasks is generated by an adversary. Benefits
of the algorithm are demonstrated in simulated problems, including a recently
introduced human-robot interaction problem.
| Emma Brunskill and Lihong Li | null | 1506.03379 | null | null |
Sparse Projection Oblique Randomer Forests | stat.ML cs.LG | Decision forests, including Random Forests and Gradient Boosting Trees, have
recently demonstrated state-of-the-art performance in a variety of machine
learning settings. Decision forests are typically ensembles of axis-aligned
decision trees; that is, trees that split only along feature dimensions. In
contrast, many recent extensions to decision forests are based on axis-oblique
splits. Unfortunately, these extensions forfeit one or more of the favorable
properties of decision forests based on axis-aligned splits, such as robustness
to many noise dimensions, interpretability, or computational efficiency. We
introduce yet another decision forest, called "Sparse Projection Oblique
Randomer Forests" (SPORF). SPORF uses very sparse random projections, i.e.,
linear combinations of a small subset of features. SPORF significantly improves
accuracy over existing state-of-the-art algorithms on a standard benchmark
suite for classification with >100 problems of varying dimension, sample size,
and number of classes. To illustrate how SPORF addresses the limitations of
both axis-aligned and existing oblique decision forest methods, we conduct
extensive simulated experiments. SPORF typically yields improved performance
over existing decision forests, while maintaining computational efficiency,
scalability, and interpretability. SPORF can easily be incorporated
into other ensemble methods such as boosting to obtain potentially similar
gains.
| Tyler M. Tomita, James Browne, Cencheng Shen, Jaewon Chung, Jesse L.
Patsolic, Benjamin Falk, Jason Yim, Carey E. Priebe, Randal Burns, Mauro
Maggioni, Joshua T. Vogelstein | null | 1506.03410 | null | null |
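The core ingredient of SPORF, as described above, is sampling very sparse signed linear combinations of features as candidate split directions. A minimal sketch of that sampling step plus a Gini-based evaluation of one oblique split; the density, candidate counts, and thresholds are illustrative, not the package defaults.

```python
import numpy as np

def sparse_random_projections(n_features, n_proj, density=0.1, seed=0):
    """Sample sparse, signed oblique split directions (one per row)."""
    rng = np.random.default_rng(seed)
    mask = rng.random((n_proj, n_features)) < density
    signs = rng.choice([-1.0, 1.0], size=(n_proj, n_features))
    return mask * signs

def best_oblique_split(X, y, projections):
    """Evaluate candidate oblique splits by weighted Gini impurity; return the best."""
    best = (None, None, np.inf)
    for w in projections:
        z = X @ w
        for thr in np.quantile(z, [0.25, 0.5, 0.75]):
            left, right = y[z <= thr], y[z > thr]
            if len(left) == 0 or len(right) == 0:
                continue
            gini = sum(len(part) / len(y) * (1 - ((np.bincount(part) / len(part)) ** 2).sum())
                       for part in (left, right))
            if gini < best[2]:
                best = (w, thr, gini)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))
    y = (X[:, 2] - X[:, 7] > 0).astype(int)       # label depends on an oblique direction
    P = sparse_random_projections(20, n_proj=50, density=0.15)
    w, thr, g = best_oblique_split(X, y, P)
    print("best Gini:", round(g, 3))              # lower than the ~0.5 of an uninformative split
```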
Convergence rates for pretraining and dropout: Guiding learning
parameters using network structure | cs.LG cs.CV cs.NE math.OC stat.ML | Unsupervised pretraining and dropout have been well studied, especially with
respect to regularization and output consistency. However, our understanding
of the explicit convergence rates of the parameter estimates, and their
dependence on the learning (like denoising and dropout rate) and structural
(like depth and layer lengths) aspects of the network is less mature. An
interesting question in this context is to ask if the network structure could
"guide" the choices of such learning parameters. In this work, we explore these
gaps between network structure, the learning mechanisms and their interaction
with parameter convergence rates. We present a way to address these issues
based on the backpropagation convergence rates for general nonconvex objectives
using first-order information. We then incorporate two learning mechanisms into
this general framework -- denoising autoencoder and dropout, and subsequently
derive the convergence rates of deep networks. Building upon these bounds, we
provide insights into the choices of learning parameters and network sizes that
achieve certain levels of convergence accuracy. The results derived here
support existing empirical observations, and we also conduct a set of
experiments to evaluate them.
| Vamsi K. Ithapu, Sathya Ravi, Vikas Singh | null | 1506.03412 | null | null |
Fast Online Clustering with Randomized Skeleton Sets | cs.AI cs.LG | We present a new fast online clustering algorithm that reliably recovers
arbitrarily shaped data clusters in high-throughput data streams. Unlike the
existing state-of-the-art online clustering methods based on k-means or
k-medoid, it does not make any restrictive generative assumptions. In addition,
in contrast to existing nonparametric clustering techniques such as DBScan or
DenStream, it gives provable theoretical guarantees. To achieve fast
clustering, we propose to represent each cluster by a skeleton set which is
updated continuously as new data is seen. A skeleton set consists of weighted
samples from the data where weights encode local densities. The size of each
skeleton set is adapted according to the cluster geometry. The proposed
technique automatically detects the number of clusters and is robust to
outliers. The algorithm works for the infinite data stream where more than one
pass over the data is not feasible. We provide theoretical guarantees on the
quality of the clustering and also demonstrate its advantage over the existing
state-of-the-art on several datasets.
| Krzysztof Choromanski and Sanjiv Kumar and Xiaofeng Liu | null | 1506.03425 | null | null |
Generative Image Modeling Using Spatial LSTMs | stat.ML cs.CV cs.LG | Modeling the distribution of natural images is challenging, partly because of
strong statistical dependencies which can extend over hundreds of pixels.
Recurrent neural networks have been successful in capturing long-range
dependencies in a number of problems but only recently have found their way
into generative image models. We here introduce a recurrent image model based
on multi-dimensional long short-term memory units which are particularly suited
for image modeling due to their spatial structure. Our model scales to images
of arbitrary size and its likelihood is computationally tractable. We find that
it outperforms the state of the art in quantitative comparisons on several
image datasets and produces promising results when used for texture synthesis
and inpainting.
| Lucas Theis and Matthias Bethge | null | 1506.03478 | null | null |
Sequential Nonparametric Testing with the Law of the Iterated Logarithm | stat.ML cs.LG math.ST stat.ME stat.TH | We propose a new algorithmic framework for sequential hypothesis testing with
i.i.d. data, which includes A/B testing, nonparametric two-sample testing, and
independence testing as special cases. It is novel in several ways: (a) it
takes linear time and constant space to compute on the fly, (b) it has the same
power guarantee as a non-sequential version of the test with the same
computational constraints up to a small factor, and (c) it accesses only as
many samples as are required - its stopping time adapts to the unknown
difficulty of the problem. All our test statistics are constructed to be
zero-mean martingales under the null hypothesis, and the rejection threshold is
governed by a uniform non-asymptotic law of the iterated logarithm (LIL). For
the case of nonparametric two-sample mean testing, we also provide a finite
sample power analysis, and the first non-asymptotic stopping time calculations
for this class of problems. We verify our predictions for type I and II errors
and stopping times using simulations.
| Akshay Balsubramani, Aaditya Ramdas | null | 1506.03486 | null | null |
Bayesian Poisson Tensor Factorization for Inferring Multilateral
Relations from Sparse Dyadic Event Counts | stat.ML cs.AI cs.LG cs.SI stat.AP | We present a Bayesian tensor factorization model for inferring latent group
structures from dynamic pairwise interaction patterns. For decades, political
scientists have collected and analyzed records of the form "country $i$ took
action $a$ toward country $j$ at time $t$"---known as dyadic events---in order
to form and test theories of international relations. We represent these event
data as a tensor of counts and develop Bayesian Poisson tensor factorization to
infer a low-dimensional, interpretable representation of their salient
patterns. We demonstrate that our model's predictive performance is better than
that of standard non-negative tensor factorization methods. We also provide a
comparison of our variational updates to their maximum likelihood counterparts.
In doing so, we identify a better way to form point estimates of the latent
factors than that typically used in Bayesian Poisson matrix factorization.
Finally, we showcase our model as an exploratory analysis tool for political
scientists. We show that the inferred latent factor matrices capture
interpretable multilateral relations that both conform to and inform our
knowledge of international affairs.
| Aaron Schein, John Paisley, David M. Blei, Hanna Wallach | null | 1506.03493 | null | null |
Matrix Completion from Fewer Entries: Spectral Detectability and Rank
Estimation | cond-mat.dis-nn cs.LG stat.ML | The completion of low rank matrices from few entries is a task with many
practical applications. We consider here two aspects of this problem:
detectability, i.e. the ability to estimate the rank $r$ reliably from the
fewest possible random entries, and performance in achieving small
reconstruction error. We propose a spectral algorithm for these two tasks
called MaCBetH (for Matrix Completion with the Bethe Hessian). The rank is
estimated as the number of negative eigenvalues of the Bethe Hessian matrix,
and the corresponding eigenvectors are used as initial condition for the
minimization of the discrepancy between the estimated matrix and the revealed
entries. We analyze the performance in a random matrix setting using results
from the statistical mechanics of the Hopfield neural network, and show in
particular that MaCBetH efficiently detects the rank $r$ of a large $n\times m$
matrix from $C(r)r\sqrt{nm}$ entries, where $C(r)$ is a constant close to $1$.
We also evaluate the corresponding root-mean-square error empirically and show
that MaCBetH compares favorably to other existing approaches.
| Alaa Saade, Florent Krzakala and Lenka Zdeborov\'a | null | 1506.03498 | null | null |
Data Generation as Sequential Decision Making | cs.LG stat.ML | We connect a broad class of generative models through their shared reliance
on sequential decision making. Motivated by this view, we develop extensions to
an existing model, and then explore the idea further in the context of data
imputation -- perhaps the simplest setting in which to investigate the relation
between unconditional and conditional generative modelling. We formulate data
imputation as an MDP and develop models capable of representing effective
policies for it. We construct the models using neural networks and train them
using a form of guided policy search. Our models generate predictions through
an iterative process of feedback and refinement. We show that this approach can
learn effective policies for imputation problems of varying difficulty and
across multiple datasets.
| Philip Bachman and Doina Precup | null | 1506.03504 | null | null |
Convolutional Dictionary Learning through Tensor Factorization | cs.LG stat.ML | Tensor methods have emerged as a powerful paradigm for consistent learning of
many latent variable models such as topic models, independent component
analysis and dictionary learning. Model parameters are estimated via CP
decomposition of the observed higher order input moments. However, in many
domains, additional invariances such as shift invariances exist, enforced via
models such as convolutional dictionary learning. In this paper, we develop
novel tensor decomposition algorithms for parameter estimation of convolutional
models. Our algorithm is based on the popular alternating least squares method,
but with efficient projections onto the space of stacked circulant matrices.
Our method is embarrassingly parallel and consists of simple operations such as
fast Fourier transforms and matrix multiplications. Our algorithm converges to
the dictionary much faster and more accurately compared to the alternating
minimization over filters and activation maps.
| Furong Huang, Animashree Anandkumar | null | 1506.03509 | null | null |
Max-Entropy Feed-Forward Clustering Neural Network | cs.LG | The outputs of a non-linear feed-forward neural network are positive and can be
treated as probabilities once normalized to sum to one. Under the entropy-based
principle, the outputs for each sample can then be interpreted as a distribution
of that sample over the different clusters. The entropy-based principle is the
principle by which an unknown distribution can be estimated under a set of
constraints. As this paper defines two processes in the feed-forward neural
network, our constraints are the abstracted features of the samples computed in
the abstraction process, and the final outputs are the probability distribution
over clusters produced in the clustering process. Incorporating the
entropy-based principle into the feed-forward neural network thus yields a
clustering method. We have conducted experiments on six open UCI datasets,
comparing against several baselines and using purity as the evaluation measure.
The results show that our method outperforms all the other baselines, which are
popular clustering methods.
| Han Xiao, Xiaoyan Zhu | null | 1506.03623 | null | null |
Margin-Based Feed-Forward Neural Network Classifiers | cs.LG | The margin-based principle was proposed long ago, and it has been shown that
this principle can reduce structural risk and improve performance in both
theoretical and practical respects. Meanwhile, the feed-forward neural network
is a traditional classifier that is currently very popular in its deeper
architectures. However, the training algorithm of the feed-forward neural
network derives from the Widrow-Hoff principle, which minimizes the squared
error. In this paper, we propose a new training algorithm for feed-forward
neural networks based on the margin-based principle, which can effectively
improve the accuracy and generalization ability of neural network classifiers
with fewer labelled samples and a flexible network. We have conducted
experiments on four UCI open datasets and achieved good results as expected. In
conclusion, our model can handle sparser labels and higher-dimensional datasets
with high accuracy, while the modification from the old ANN method to our
method is easy and requires almost no extra work.
| Han Xiao, Xiaoyan Zhu | null | 1506.03626 | null | null |
Constrained Convolutional Neural Networks for Weakly Supervised
Segmentation | cs.CV cs.LG | We present an approach to learn a dense pixel-wise labeling from image-level
tags. Each image-level tag imposes constraints on the output labeling of a
Convolutional Neural Network (CNN) classifier. We propose Constrained CNN
(CCNN), a method which uses a novel loss function to optimize for any set of
linear constraints on the output space (i.e. predicted label distribution) of a
CNN. Our loss formulation is easy to optimize and can be incorporated directly
into standard stochastic gradient descent optimization. The key idea is to
phrase the training objective as a biconvex optimization for linear models,
which we then relax to nonlinear deep networks. Extensive experiments
demonstrate the generality of our new learning framework. The constrained loss
yields state-of-the-art results on weakly supervised semantic image
segmentation. We further demonstrate that adding slightly more supervision can
greatly improve the performance of the learning algorithm.
| Deepak Pathak, Philipp Kr\"ahenb\"uhl and Trevor Darrell | null | 1506.03648 | null | null |
Variance Reduced Stochastic Gradient Descent with Neighbors | cs.LG math.OC stat.ML | Stochastic Gradient Descent (SGD) is a workhorse in machine learning, yet its
slow convergence can be a computational bottleneck. Variance reduction
techniques such as SAG, SVRG and SAGA have been proposed to overcome this
weakness, achieving linear convergence. However, these methods are either based
on computations of full gradients at pivot points, or on keeping per data point
corrections in memory. Therefore speed-ups relative to SGD may need a minimal
number of epochs in order to materialize. This paper investigates algorithms
that can exploit neighborhood structure in the training data to share and
re-use information about past stochastic gradients across data points, which
offers advantages in the transient optimization phase. As a side-product we
provide a unified convergence analysis for a family of variance reduction
algorithms, which we call memorization algorithms. We provide experimental
results supporting our theory.
| Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, Brian
McWilliams | null | 1506.03662 | null | null |
Optimization Monte Carlo: Efficient and Embarrassingly Parallel
Likelihood-Free Inference | cs.LG stat.ML | We describe an embarrassingly parallel, anytime Monte Carlo method for
likelihood-free models. The algorithm starts with the view that the
stochasticity of the pseudo-samples generated by the simulator can be
controlled externally by a vector of random numbers u, in such a way that the
outcome, knowing u, is deterministic. For each instantiation of u we run an
optimization procedure to minimize the distance between summary statistics of
the simulator and the data. After reweighting these samples using the prior and
the Jacobian (accounting for the change of volume in transforming from the
space of summary statistics to the space of parameters) we show that this
weighted ensemble represents a Monte Carlo estimate of the posterior
distribution. The procedure can be run embarrassingly parallel (each node
handling one sample) and anytime (by allocating resources to the worst
performing sample). The procedure is validated on six experiments.
| Edward Meeds and Max Welling | null | 1506.03693 | null | null |
Random Maxout Features | cs.LG stat.ML | In this paper, we propose and study random maxout features, which are
constructed by first projecting the input data onto sets of randomly generated
vectors with Gaussian elements, and then outputting the maximum projection value
for each set. We show that the resulting random feature map, when used in
conjunction with linear models, allows for the locally linear estimation of the
function of interest in classification tasks, and for the locally linear
embedding of points when used for dimensionality reduction or data
visualization. We derive generalization bounds for learning that assess the
error in approximating locally linear functions by linear functions in the
maxout feature space, and empirically evaluate the efficacy of the approach on
the MNIST and TIMIT classification tasks.
| Youssef Mroueh, Steven Rennie, Vaibhava Goel | null | 1506.03705 | null | null |
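The random maxout feature map described above takes the maximum over each group of random Gaussian projections. A minimal sketch (the number of sets and the set size are illustrative, not the paper's settings):

```python
import numpy as np

def random_maxout_features(X, n_sets=128, set_size=4, seed=0):
    """Project onto n_sets groups of set_size random Gaussian directions and
    keep the maximum projection within each group (one output feature per set)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_sets, set_size))
    proj = np.einsum("nd,dks->nks", X, W)         # (n_samples, n_sets, set_size)
    return proj.max(axis=-1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(10, 5))
    Z = random_maxout_features(X)
    print(Z.shape)                                # (10, 128)
```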
Recovering communities in the general stochastic block model without
knowing the parameters | math.PR cs.IT cs.LG cs.SI math.IT | Most recent developments on the stochastic block model (SBM) rely on the
knowledge of the model parameters, or at least on the number of communities.
This paper introduces efficient algorithms that do not require such knowledge
and yet achieve the optimal information-theoretic tradeoffs identified in
[AS15] for linear size communities. The results are three-fold: (i) in the
constant degree regime, an algorithm is developed that requires only a
lower-bound on the relative sizes of the communities and detects communities
with an optimal accuracy scaling for large degrees; (ii) in the regime where
degrees are scaled by $\omega(1)$ (diverging degrees), this is enhanced into a
fully agnostic algorithm that only takes the graph in question and
simultaneously learns the model parameters (including the number of
communities) and detects communities with accuracy $1-o(1)$, with an overall
quasi-linear complexity; (iii) in the logarithmic degree regime, an agnostic
algorithm is developed that learns the parameters and achieves the optimal
CH-limit for exact recovery, in quasi-linear time. These provide the first
algorithms affording efficiency, universality and information-theoretic
optimality for strong and weak consistency in the general SBM with linear size
communities.
| Emmanuel Abbe and Colin Sandon | null | 1506.03729 | null | null |
GAP Safe screening rules for sparse multi-task and multi-class models | stat.ML cs.LG math.OC stat.CO | High dimensional regression benefits from sparsity promoting regularizations.
Screening rules leverage the known sparsity of the solution by ignoring some
variables in the optimization, hence speeding up solvers. When the procedure is
proven not to discard features wrongly, the rules are said to be \emph{safe}. In
this paper we derive new safe rules for generalized linear models regularized
with $\ell_1$ and $\ell_1/\ell_2$ norms. The rules are based on duality gap
computations and spherical safe regions whose diameters converge to zero. This
allows more variables to be discarded safely, in particular for low regularization
parameters. The GAP Safe rule can cope with any iterative solver and we
illustrate its performance on coordinate descent for multi-task Lasso, binary
and multinomial logistic regression, demonstrating significant speed ups on all
tested datasets with respect to previous safe rules.
| Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon | null | 1506.03736 | null | null |
Spectral Representations for Convolutional Neural Networks | stat.ML cs.LG | Discrete Fourier transforms provide a significant speedup in the computation
of convolutions in deep learning. In this work, we demonstrate that, beyond its
advantages for efficient computation, the spectral domain also provides a
powerful representation in which to model and train convolutional neural
networks (CNNs).
We employ spectral representations to introduce a number of innovations to
CNN design. First, we propose spectral pooling, which performs dimensionality
reduction by truncating the representation in the frequency domain. This
approach preserves considerably more information per parameter than other
pooling strategies and enables flexibility in the choice of pooling output
dimensionality. This representation also enables a new form of stochastic
regularization by randomized modification of resolution. We show that these
methods achieve competitive results on classification and approximation tasks,
without using any dropout or max-pooling.
Finally, we demonstrate the effectiveness of complex-coefficient spectral
parameterization of convolutional filters. While this leaves the underlying
model unchanged, it results in a representation that greatly facilitates
optimization. We observe on a variety of popular CNN configurations that this
leads to significantly faster convergence during training.
| Oren Rippel, Jasper Snoek and Ryan P. Adams | null | 1506.03767 | null | null |
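Spectral pooling, as introduced in the abstract above, is a crop in the frequency domain. A minimal single-channel NumPy sketch (the paper applies it to CNN feature maps inside the training graph, which this does not attempt):

```python
import numpy as np

def spectral_pool(x, out_h, out_w):
    """Pool a 2-D map by truncating its centered Fourier representation."""
    F = np.fft.fftshift(np.fft.fft2(x))             # center the low frequencies
    h, w = x.shape
    top, left = (h - out_h) // 2, (w - out_w) // 2
    F_crop = F[top:top + out_h, left:left + out_w]  # keep only the low-frequency block
    pooled = np.fft.ifft2(np.fft.ifftshift(F_crop)).real
    return pooled * (out_h * out_w) / (h * w)       # rescale so mean intensity is preserved

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(size=(32, 32))
    print(spectral_pool(img, 16, 16).shape)         # (16, 16)
```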
Mondrian Forests for Large-Scale Regression when Uncertainty Matters | stat.ML cs.LG | Many real-world regression problems demand a measure of the uncertainty
associated with each prediction. Standard decision forests deliver efficient
state-of-the-art predictive performance, but high-quality uncertainty estimates
are lacking. Gaussian processes (GPs) deliver uncertainty estimates, but
scaling GPs to large-scale data sets comes at the cost of approximating the
uncertainty estimates. We extend Mondrian forests, first proposed by
Lakshminarayanan et al. (2014) for classification problems, to the large-scale
non-parametric regression setting. Using a novel hierarchical Gaussian prior
that dovetails with the Mondrian forest framework, we obtain principled
uncertainty estimates, while still retaining the computational advantages of
decision forests. Through a combination of illustrative examples, real-world
large-scale datasets, and Bayesian optimization benchmarks, we demonstrate that
Mondrian forests outperform approximate GPs on large-scale regression tasks and
deliver better-calibrated uncertainty assessments than decision-forest-based
methods.
| Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh | null | 1506.03805 | null | null |
Bidirectional Helmholtz Machines | cs.LG stat.ML | Efficient unsupervised training and inference in deep generative models
remains a challenging problem. One basic approach, called Helmholtz machine,
involves training a top-down directed generative model together with a
bottom-up auxiliary model used for approximate inference. Recent results
indicate that better generative models can be obtained with better approximate
inference procedures. Instead of improving the inference procedure, we here
propose a new model which guarantees that the top-down and bottom-up
distributions can efficiently invert each other. We achieve this by
interpreting both the top-down and the bottom-up directed models as approximate
inference distributions and by defining the model distribution to be the
geometric mean of these two. We present a lower-bound for the likelihood of
this model and we show that optimizing this bound regularizes the model so that
the Bhattacharyya distance between the bottom-up and top-down approximate
distributions is minimized. This approach results in state of the art
generative models which prefer significantly deeper architectures while it
allows for orders of magnitude more efficient approximate inference.
| Jorg Bornschein and Samira Shabanian and Asja Fischer and Yoshua
Bengio | null | 1506.03877 | null | null |
Place classification with a graph regularized deep neural network model | cs.RO cs.CV cs.LG cs.NE | Place classification is a fundamental ability that a robot should possess to
carry out effective human-robot interactions. It is a nontrivial classification
problem which has attracted much research. In recent years, there has been extensive
use of Artificial Intelligence algorithms in robotics applications.
Inspired by the recent successes of deep learning methods, we propose an
end-to-end learning approach for the place classification problem. With the
deep architectures, this methodology automatically discovers features and
contributes in general to higher classification accuracies. The pipeline of our
approach is composed of three parts. Firstly, we construct multiple layers of
laser range data to represent the environment information in different levels
of granularity. Secondly, each layer of data is fed into a deep neural network
model for classification, where a graph regularization is imposed to the deep
architecture for keeping local consistency between adjacent samples. Finally,
the predicted labels obtained from all the layers are fused based on confidence
trees to maximize the overall confidence. Experimental results validate the
effectiveness of our end-to-end place classification framework in which both
the multi-layer structure and the graph regularization promote the
classification performance. Furthermore, results show that the features
automatically learned from the raw input range data can achieve competitive
results to the features constructed based on statistical and geometrical
information.
| Yiyi Liao, Sarath Kodagoda, Yue Wang, Lei Shi, Yong Liu | null | 1506.03899 | null | null |
Optimal $\gamma$ and $C$ for $\epsilon$-Support Vector Regression with
RBF Kernels | cs.LG stat.ML | The objective of this study is to investigate the efficient determination of
$C$ and $\gamma$ for Support Vector Regression with an RBF or Mahalanobis kernel
based on numerical and statistical considerations, which indicates the
connection between $C$ and kernels and demonstrates that the deviation of the
geometric distance of neighbouring observations in the mapped space affects the
prediction accuracy of $\epsilon$-SVR. We determine the range of $\gamma$ and $C$ and
propose a method to choose their best values.
| Longfei Lu | null | 1506.03942 | null | null |
Knowledge Representation in Learning Classifier Systems: A Review | cs.NE cs.LG | Knowledge representation is a key component to the success of all rule based
systems including learning classifier systems (LCSs). This component brings
insight into how to partition the problem space, which in turn plays a
prominent role in the generalization capacity of the system as a whole.
Recently, the knowledge representation component has received a great deal of
attention within data mining communities due to its impact on rule based
systems in terms of efficiency and efficacy. The current work is an attempt to
provide a comprehensive and yet elaborate view of the existing knowledge
representation techniques in the LCS domain in general and XCS in particular.
To achieve the objectives, knowledge
representation techniques are grouped into different categories based on the
classification approach in which they are incorporated. In each category, the
underlying rule representation schema and the format of classifier condition to
support the corresponding representation are presented. Furthermore, a precise
explanation on the way that each technique partitions the problem space along
with extensive experimental results is provided. To give an elaborate view
of the functionality of each technique, a comparative analysis of existing
techniques on some conventional problems is provided. We expect this survey to
be of interest to the LCS researchers and practitioners since it provides a
guideline for choosing a proper knowledge representation technique for a given
problem and also opens up new streams of research on this topic.
| Farzaneh Shoeleh, Mahshid Majd, Ali Hamzeh, Sattar Hashemi | null | 1506.04002 | null | null |
Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to
Action Sequences | cs.CL cs.AI cs.LG cs.NE cs.RO | We propose a neural sequence-to-sequence model for direction following, a
task that is essential to realizing effective autonomous agents. Our
alignment-based encoder-decoder model with long short-term memory recurrent
neural networks (LSTM-RNN) translates natural language instructions to action
sequences based upon a representation of the observable world state. We
introduce a multi-level aligner that empowers our model to focus on sentence
"regions" salient to the current world state by using multiple abstractions of
the input sentence. In contrast to existing methods, our model uses no
specialized linguistic resources (e.g., parsers) or task-specific annotations
(e.g., seed lexicons). It is therefore generalizable, yet still achieves the
best results reported to-date on a benchmark single-sentence dataset and
competitive results for the limited-training multi-sentence setting. We analyze
our model through a series of ablations that elucidate the contributions of the
primary components of our model.
| Hongyuan Mei, Mohit Bansal, Matthew R. Walter | null | 1506.04089 | null | null |
Adaptive Stochastic Primal-Dual Coordinate Descent for Separable Saddle
Point Problems | stat.ML cs.LG | We consider a generic convex-concave saddle point problem with separable
structure, a form that covers a wide range of machine learning applications.
Under this problem structure, we follow the framework of primal-dual updates
for saddle point problems, and incorporate stochastic block coordinate descent
with adaptive stepsize into this framework. We theoretically show that our
proposal of adaptive stepsize potentially achieves a sharper linear convergence
rate compared with the existing methods. Additionally, since we can select
"mini-batch" of block coordinates to update, our method is also amenable to
parallel processing for large-scale data. We apply the proposed method to
regularized empirical risk minimization and show that it performs comparably
or, more often, better than state-of-the-art methods on both synthetic and
real-world data sets.
| Zhanxing Zhu and Amos J. Storkey | null | 1506.04093 | null | null |
Stochastic Expectation Propagation | stat.ML cs.LG | Expectation propagation (EP) is a deterministic approximation algorithm that
is often used to perform approximate Bayesian parameter learning. EP
approximates the full intractable posterior distribution through a set of local
approximations that are iteratively refined for each datapoint. EP can offer
analytic and computational advantages over other approximations, such as
Variational Inference (VI), and is the method of choice for a number of models.
The local nature of EP appears to make it an ideal candidate for performing
Bayesian learning on large models in large-scale dataset settings. However, EP
has a crucial limitation in this context: the number of approximating factors
needs to increase with the number of data-points, N, which often entails a
prohibitively large memory overhead. This paper presents an extension to EP,
called stochastic expectation propagation (SEP), that maintains a global
posterior approximation (like VI) but updates it in a local way (like EP).
Experiments on a number of canonical learning problems using synthetic and
real-world datasets indicate that SEP performs almost as well as full EP, but
reduces the memory consumption by a factor of $N$. SEP is therefore ideally
suited to performing approximate Bayesian learning in the large model, large
dataset setting.
| Yingzhen Li, Jose Miguel Hernandez-Lobato, Richard E. Turner | null | 1506.04132 | null | null |
Reducing offline evaluation bias of collaborative filtering algorithms | cs.IR cs.LG stat.ML | Recommendation systems have been integrated into the majority of large online
systems to filter and rank information according to user profiles. They thus
influence the way users interact with the system and, as a consequence, bias
the evaluation of the performance of a recommendation algorithm computed using
historical data (via offline evaluation). This paper presents a new application
of a weighted offline evaluation to reduce this bias for collaborative
filtering algorithms.
| Arnaud De Myttenaere (SAMM, Viadeo), Boris Golden (Viadeo),
B\'en\'edicte Le Grand (CRI), Fabrice Rossi (SAMM) | null | 1506.04135 | null | null |
On the accuracy of self-normalized log-linear models | stat.ML cs.CL cs.LG stat.ME | Calculation of the log-normalizer is a major computational obstacle in
applications of log-linear models with large output spaces. The problem of fast
normalizer computation has therefore attracted significant attention in the
theoretical and applied machine learning literature. In this paper, we analyze
a recently proposed technique known as "self-normalization", which introduces a
regularization term in training to penalize log normalizers for deviating from
zero. This makes it possible to use unnormalized model scores as approximate
probabilities. Empirical evidence suggests that self-normalization is extremely
effective, but a theoretical understanding of why it should work, and how
generally it can be applied, is largely lacking. We prove generalization bounds
on the estimated variance of normalizers and upper bounds on the loss in
accuracy due to self-normalization, describe classes of input distributions
that self-normalize easily, and construct explicit examples of high-variance
input distributions. Our theoretical results make predictions about the
difficulty of fitting self-normalized models to several classes of
distributions, and we conclude with empirical validation of these predictions.
| Jacob Andreas, Maxim Rabinovich, Dan Klein, Michael I. Jordan | null | 1506.04147 | null | null |
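For readers of the abstract above, one common instantiation of the self-normalization idea (an assumption here, not necessarily the exact objective analyzed in the paper) adds a squared penalty on the log-normalizers to the usual log-linear training loss:

$$\hat{\theta} \;=\; \arg\min_{\theta}\; \sum_{i=1}^{n} \Big[ -\theta^{\top} f(x_i, y_i) + \log Z_{\theta}(x_i) \Big] \;+\; \alpha \sum_{i=1}^{n} \big( \log Z_{\theta}(x_i) \big)^{2},$$

so that at test time the unnormalized score $\theta^{\top} f(x, y)$ can be used directly as an approximate log-probability.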
Using the Mean Absolute Percentage Error for Regression Models | stat.ML cs.LG | We study in this paper the consequences of using the Mean Absolute Percentage
Error (MAPE) as a measure of quality for regression models. We show that
finding the best model under the MAPE is equivalent to doing weighted Mean
Absolute Error (MAE) regression. We show that universal consistency of
Empirical Risk Minimization remains possible using the MAPE instead of the MAE.
| Arnaud De Myttenaere (SAMM), Boris Golden (Viadeo), B\'en\'edicte Le
Grand (CRI), Fabrice Rossi (SAMM) | null | 1506.04176 | null | null |
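The equivalence stated in the abstract above can be seen directly from the definition of the empirical MAPE, written here as a weighted MAE (a simple identity, assuming all $y_i \neq 0$):

$$\mathrm{MAPE}(g) \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{|g(x_i) - y_i|}{|y_i|} \;=\; \frac{1}{n} \sum_{i=1}^{n} w_i \, |g(x_i) - y_i|, \qquad w_i = \frac{1}{|y_i|},$$

so minimizing the MAPE over a model class is the same as MAE regression with per-example weights $1/|y_i|$.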
Search Strategies for Binary Feature Selection for a Naive Bayes
Classifier | stat.ML cs.LG | We compare in this paper several feature selection methods for the Naive
Bayes Classifier (NBC) when the data under study are described by a large
number of redundant binary indicators. Wrapper approaches guided by the NBC
estimation of the classification error probability outperform filter
approaches while retaining a reasonable computational cost.
| Tsirizo Rabenoro (SAMM), J\'er\^ome Lacaille, Marie Cottrell (SAMM),
Fabrice Rossi (SAMM) | null | 1506.04177 | null | null |
A Flexible and Efficient Algorithmic Framework for Constrained Matrix
and Tensor Factorization | stat.ML cs.LG math.OC stat.CO | We propose a general algorithmic framework for constrained matrix and tensor
factorization, which is widely used in signal processing and machine learning.
The new framework is a hybrid between alternating optimization (AO) and the
alternating direction method of multipliers (ADMM): each matrix factor is
updated in turn, using ADMM, hence the name AO-ADMM. This combination can
naturally accommodate a great variety of constraints on the factor matrices,
and almost all possible loss measures for the fitting. Computation caching and
warm start strategies are used to ensure that each update is evaluated
efficiently, while the outer AO framework exploits recent developments in block
coordinate descent (BCD)-type methods which help ensure that every limit point
is a stationary point, as well as faster and more robust convergence in
practice. Three special cases are studied in detail: non-negative matrix/tensor
factorization, constrained matrix/tensor completion, and dictionary learning.
Extensive simulations and experiments with real data are used to showcase the
effectiveness and broad applicability of the proposed framework.
| Kejun Huang, Nicholas D. Sidiropoulos, Athanasios P. Liavas | 10.1109/TSP.2016.2576427 | 1506.04209 | null | null |
On the Equivalence of CoCoA+ and DisDCA | cs.LG | In this document, we show that the algorithm CoCoA+ (Ma et al., ICML, 2015)
under the setting used in their experiments, which is also the best setting
suggested by the authors that proposed this algorithm, is equivalent to the
practical variant of DisDCA (Yang, NIPS, 2013).
| Ching-pei Lee | null | 1506.04217 | null | null |
Contamination Estimation via Convex Relaxations | cs.IT cs.LG math.IT math.OC | Identifying anomalies and contamination in datasets is important in a wide
variety of settings. In this paper, we describe a new technique for estimating
contamination in large, discrete valued datasets. Our approach considers the
normal condition of the data to be specified by a model consisting of a set of
distributions. Our key contribution is in our approach to contamination
estimation. Specifically, we develop a technique that identifies the minimum
number of data points that must be discarded (i.e., the level of contamination)
from an empirical data set in order to match the model to within a specified
goodness-of-fit, controlled by a p-value. Appealing to results from large
deviations theory, we show a lower bound on the level of contamination is
obtained by solving a series of convex programs. Theoretical results guarantee
the bound converges at a rate of $O(\sqrt{\log(p)/p})$, where p is the size of
the empirical data set.
| Matthew L. Malloy, Scott Alfeld, Paul Barford | null | 1506.04257 | null | null |
Generating and Exploring S-Box Multivariate Quadratic Equation Systems
with SageMath | cs.CR cs.AI cs.LG | A new method to derive Multivariate Quadratic equation systems (MQ) for the
input and output bit variables of a cryptographic S-box from its algebraic
expressions with the aid of the computer mathematics software system SageMath
is presented. We also point out the deficiency of previously presented MQ
metrics, which are supposed to quantify the resistance of S-boxes against
algebraic attacks.
| A.-M. Leventi-Peetz and J.-V. Peetz | 10.1109/DESEC.2017.8073822 | 1506.04319 | null | null |
Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to
Novel Algorithms | cs.LG | This paper studies the generalization performance of multi-class
classification algorithms, for which we obtain, for the first time, a
data-dependent generalization error bound with a logarithmic dependence on the
class size, substantially improving the state-of-the-art linear dependence in
the existing data-dependent generalization analysis. The theoretical analysis
motivates us to introduce a new multi-class classification machine based on
$\ell_p$-norm regularization, where the parameter $p$ controls the complexity
of the corresponding bounds. We derive an efficient optimization algorithm
based on Fenchel duality theory. Benchmarks on several real-world datasets show
that the proposed algorithm can achieve significant accuracy gains over the
state of the art.
| Yunwen Lei and \"Ur\"un Dogan and Alexander Binder and Marius Kloft | null | 1506.04359 | null | null |
Localized Multiple Kernel Learning---A Convex Approach | cs.LG | We propose a localized approach to multiple kernel learning that can be
formulated as a convex optimization problem over a given cluster structure, for
which we obtain generalization error guarantees and derive an optimization
algorithm based on the Fenchel dual representation. Experiments on real-world
datasets from the application domains of computational biology and computer
vision show that convex localized multiple kernel learning can achieve higher
prediction accuracies than its global and non-convex local counterparts.
| Yunwen Lei and Alexander Binder and \"Ur\"un Dogan and Marius Kloft | null | 1506.04364 | null | null |
Bayesian Dark Knowledge | cs.LG stat.ML | We consider the problem of Bayesian parameter estimation for deep neural
networks, which is important in problem settings where we may have little data,
and/ or where we need accurate posterior predictive densities, e.g., for
applications involving bandits or active learning. One simple approach to this
is to use online Monte Carlo methods, such as SGLD (stochastic gradient
Langevin dynamics). Unfortunately, such a method needs to store many copies of
the parameters (which wastes memory), and needs to make predictions using many
versions of the model (which wastes time).
We describe a method for "distilling" a Monte Carlo approximation to the
posterior predictive density into a more compact form, namely a single deep
neural network. We compare to two very recent approaches to Bayesian neural
networks, namely an approach based on expectation propagation [Hernandez-Lobato
and Adams, 2015] and an approach based on variational Bayes [Blundell et al.,
2015]. Our method performs better than both of these, is much simpler to
implement, and uses less computation at test time.
| Anoop Korattikara, Vivek Rathod, Kevin Murphy, Max Welling | null | 1506.04416 | null | null |
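As background for the abstract above, the standard SGLD update (Welling and Teh) that produces the posterior samples being distilled is, for a minibatch $\mathcal{S}_t$ of size $m$ drawn from $N$ examples,

$$\theta_{t+1} \;=\; \theta_t + \frac{\epsilon_t}{2} \Big( \nabla_{\theta} \log p(\theta_t) + \frac{N}{m} \sum_{i \in \mathcal{S}_t} \nabla_{\theta} \log p(x_i \mid \theta_t) \Big) + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \epsilon_t I).$$

The paper's contribution is then to distill the Monte Carlo predictive built from such samples into a single "student" network; the distillation objective itself is not reproduced here.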
A Fast Incremental Gaussian Mixture Model | cs.LG | This work builds upon previous efforts in online incremental learning, namely
the Incremental Gaussian Mixture Network (IGMN). The IGMN is capable of
learning from data streams in a single-pass by improving its model after
analyzing each data point and discarding it thereafter. Nevertheless, it
suffers in terms of scalability, due to its asymptotic time
complexity of $\operatorname{O}\bigl(NKD^3\bigr)$ for $N$ data points, $K$
Gaussian components and $D$ dimensions, rendering it inadequate for
high-dimensional data. In this paper, we manage to reduce this complexity to
$\operatorname{O}\bigl(NKD^2\bigr)$ by deriving formulas for working directly
with precision matrices instead of covariance matrices. The final result is a
much faster and scalable algorithm which can be applied to high dimensional
tasks. This is confirmed by applying the modified algorithm to high-dimensional
classification datasets.
| Rafael Pinto and Paulo Engel | 10.1371/journal.pone.0139931 | 1506.04422 | null | null |
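The abstract above works directly with precision matrices to avoid cubic-cost inversions. A minimal sketch of the underlying mechanism is the Sherman-Morrison rank-one update shown below; this is a generic identity for illustration, not the paper's exact IGMN update equations.

```python
# Rank-one Sherman-Morrison update of a precision matrix in O(D^2),
# avoiding the O(D^3) re-inversion of the updated covariance.
import numpy as np

rng = np.random.RandomState(0)
D = 5
A = rng.randn(D, D)
cov = A @ A.T + np.eye(D)              # a valid covariance matrix
prec = np.linalg.inv(cov)              # precision, computed once
u = rng.randn(D)                       # rank-one direction

# Covariance bump: cov' = cov + u u^T  =>  precision update below.
pu = prec @ u
prec_new = prec - np.outer(pu, pu) / (1.0 + u @ pu)

assert np.allclose(prec_new, np.linalg.inv(cov + np.outer(u, u)))
print("updated precision matches direct inversion")
```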
Fast and Guaranteed Tensor Decomposition via Sketching | stat.ML cs.LG | Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in
statistical learning of latent variable models and in data mining. In this
paper, we propose fast and randomized tensor CP decomposition algorithms based
on sketching. We build on the idea of count sketches, but introduce many novel
ideas which are unique to tensors. We develop novel methods for randomized
computation of tensor contractions via FFTs, without explicitly forming the
tensors. Such tensor contractions are encountered in decomposition methods such
as tensor power iterations and alternating least squares. We also design novel
colliding hashes for symmetric tensors to further save time in computing the
sketches. We then combine these sketching ideas with existing whitening and
tensor power iterative techniques to obtain the fastest algorithm on both
sparse and dense tensors. The quality of approximation under our method does
not depend on properties such as sparsity, uniformity of elements, etc. We
apply the method for topic modeling and obtain competitive results.
| Yining Wang, Hsiao-Yu Tung, Alexander Smola and Animashree Anandkumar | null | 1506.04448 | null | null |
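For orientation on the abstract above, the count sketch of a vector (the primitive the paper extends to tensor contractions via FFTs and colliding hashes) looks as follows; the sketch dimension and recovery rule are generic illustrations, not the paper's full pipeline.

```python
# Count sketch of a dense vector: hash each coordinate to a bucket with a random
# sign and accumulate; a single coordinate can be estimated unbiasedly afterwards.
import numpy as np

def count_sketch(x, sketch_dim, seed=0):
    rng = np.random.RandomState(seed)
    h = rng.randint(0, sketch_dim, size=x.shape[0])   # bucket per coordinate
    s = rng.choice([-1.0, 1.0], size=x.shape[0])      # random sign per coordinate
    sketch = np.zeros(sketch_dim)
    np.add.at(sketch, h, s * x)                       # signed accumulation
    return sketch, h, s

x = np.random.RandomState(1).randn(10000)
sketch, h, s = count_sketch(x, sketch_dim=256)
i = 123
print(x[i], s[i] * sketch[h[i]])   # rough unbiased estimate of x[i]
```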
Compressing Convolutional Neural Networks | cs.LG cs.CV cs.NE | Convolutional neural networks (CNN) are increasingly used in many areas of
computer vision. They are particularly attractive because of their ability to
"absorb" great quantities of labeled data through millions of parameters.
However, as model sizes increase, so do the storage and memory requirements of
the classifiers. We present a novel network architecture, Frequency-Sensitive
Hashed Nets (FreshNets), which exploits inherent redundancy in both
convolutional layers and fully-connected layers of a deep learning model,
leading to dramatic savings in memory and storage consumption. Based on the key
observation that the weights of learned convolutional filters are typically
smooth and low-frequency, we first convert filter weights to the frequency
domain with a discrete cosine transform (DCT) and use a low-cost hash function
to randomly group frequency parameters into hash buckets. All parameters
assigned the same hash bucket share a single value learned with standard
back-propagation. To further reduce model size we allocate fewer hash buckets
to high-frequency components, which are generally less important. We evaluate
FreshNets on eight data sets, and show that it leads to drastically better
compressed performance than several relevant baselines.
| Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger,
Yixin Chen | null | 1506.04449 | null | null |
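A rough, non-trainable sketch of the frequency-domain weight sharing described in the abstract above: DCT a filter, hash its frequency coefficients into a few buckets, and tie each coefficient to its bucket's value. In the actual method the bucket values are learned with back-propagation and fewer buckets are allocated to high frequencies; here bucket means are used purely for illustration.

```python
# Hash-based sharing of DCT coefficients of a convolutional filter.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.RandomState(0)
W = rng.randn(5, 5)                        # one 5x5 convolutional filter
F = dctn(W, norm="ortho")                  # frequency-domain coefficients

n_buckets = 6
flat = F.ravel()
bucket = rng.randint(0, n_buckets, size=flat.size)           # hash each frequency
shared = np.array([flat[bucket == b].mean() if np.any(bucket == b) else 0.0
                   for b in range(n_buckets)])                # one value per bucket
F_tied = shared[bucket].reshape(F.shape)

W_compressed = idctn(F_tied, norm="ortho")
print("reconstruction error:", np.linalg.norm(W - W_compressed))
```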
Dual Memory Architectures for Fast Deep Learning of Stream Data via an
Online-Incremental-Transfer Strategy | cs.LG | The online learning of deep neural networks is an interesting problem of
machine learning because, for example, major IT companies want to manage the
information of the massive data uploaded on the web daily, and this technology
can contribute to the next generation of lifelong learning. We aim to train
deep models from new data that consists of new classes, distributions, and
tasks at minimal computational cost, which we call online deep learning.
Unfortunately, deep neural network learning through classical online and
incremental methods does not work well in both theory and practice. In this
paper, we introduce dual memory architectures for online incremental deep
learning. The proposed architecture consists of deep representation learners
and fast learnable shallow kernel networks, both of which synergize to track
the information of new data. During the training phase, we use various online,
incremental ensemble, and transfer learning techniques in order to achieve
lower error of the architecture. On the MNIST, CIFAR-10, and ImageNet image
recognition tasks, the proposed dual memory architectures perform much better
than the classical online and incremental ensemble algorithm, and their
accuracies are similar to that of the batch learner.
| Sang-Woo Lee, Min-Oh Heo, Jiwon Kim, Jeonghee Kim, Byoung-Tak Zhang | null | 1506.04477 | null | null |
Distilling Word Embeddings: An Encoding Approach | cs.CL cs.LG | Distilling knowledge from a well-trained cumbersome network to a small one
has recently become a new research topic, as lightweight neural networks with
high performance are particularly in need in various resource-restricted
systems. This paper addresses the problem of distilling word embeddings for NLP
tasks. We propose an encoding approach to distill task-specific knowledge from
a set of high-dimensional embeddings, which can reduce model complexity by a
large margin as well as retain high accuracy, showing a good compromise between
efficiency and performance. Experiments in two tasks reveal the phenomenon that
distilling knowledge from cumbersome embeddings is better than directly
training neural networks with small embeddings.
| Lili Mou, Ran Jia, Yan Xu, Ge Li, Lu Zhang, Zhi Jin | null | 1506.04488 | null | null |
Convex Risk Minimization and Conditional Probability Estimation | cs.LG stat.ML | This paper proves, in very general settings, that convex risk minimization is
a procedure to select a unique conditional probability model determined by the
classification problem. Unlike most previous work, we give results that are
general enough to include cases in which no minimum exists, as occurs
typically, for instance, with standard boosting algorithms. Concretely, we
first show that any sequence of predictors minimizing convex risk over the
source distribution will converge to this unique model when the class of
predictors is linear (but potentially of infinite dimension). Secondly, we show
the same result holds for \emph{empirical} risk minimization whenever this
class of predictors is finite dimensional, where the essential technical
contribution is a norm-free generalization bound.
| Matus Telgarsky and Miroslav Dud\'ik and Robert Schapire | null | 1506.04513 | null | null |
Learning Deep Generative Models with Doubly Stochastic MCMC | cs.LG | We present doubly stochastic gradient MCMC, a simple and generic method for
(approximate) Bayesian inference of deep generative models (DGMs) in a
collapsed continuous parameter space. At each MCMC sampling step, the algorithm
randomly draws a mini-batch of data samples to estimate the gradient of
log-posterior and further estimates the intractable expectation over hidden
variables via a neural adaptive importance sampler, where the proposal
distribution is parameterized by a deep neural network and learnt jointly. We
demonstrate the effectiveness on learning various DGMs in a wide range of
tasks, including density estimation, data generation and missing data
imputation. Our method outperforms many state-of-the-art competitors.
| Chao Du, Jun Zhu and Bo Zhang | null | 1506.04557 | null | null |
A New PAC-Bayesian Perspective on Domain Adaptation | stat.ML cs.LG | We study the issue of PAC-Bayesian domain adaptation: We want to learn, from
a source domain, a majority vote model dedicated to a target one. Our
theoretical contribution brings a new perspective by deriving an upper-bound on
the target risk where the distributions' divergence---expressed as a
ratio---controls the trade-off between a source error measure and the target
voters' disagreement. Our bound suggests that one has to focus on regions where
the source data is informative. From this result, we derive a PAC-Bayesian
generalization bound, and specialize it to linear classifiers. Then, we infer a
learning algorithm and perform experiments on real data.
| Pascal Germain (SIERRA), Amaury Habrard (LaHC), Fran\c{c}ois
Laviolette, Emilie Morvant (LaHC) | null | 1506.04573 | null | null |
Re-scale AdaBoost for Attack Detection in Collaborative Filtering
Recommender Systems | cs.IR cs.CR cs.LG | Collaborative filtering recommender systems (CFRSs) are the key components of
successful e-commerce systems. However, CFRSs are highly vulnerable to attacks
due to their openness. Since the number of attack profiles is far smaller than
that of genuine users, conventional supervised learning based detection methods
can be too "dull" to handle such imbalanced classification. In this paper, we
improve detection performance in the following two respects. First, we extract
well-designed features from user profiles based on the statistical properties
of the diverse attack models, making the hard classification task easier to
perform. Then, following the general idea of re-scale Boosting (RBoosting) and
AdaBoost, we apply a variant of AdaBoost, called the re-scale AdaBoost
(RAdaBoost) as our detection method based on extracted features. RAdaBoost is
comparable to the optimal Boosting-type algorithm and can effectively improve
the performance in some hard scenarios. Finally, a series of experiments on the
MovieLens-100K data set are conducted to demonstrate that RAdaBoost outperforms
some classical techniques such as SVM, kNN and AdaBoost.
| Zhihai Yang, Lin Xu, Zhongmin Cai | null | 1506.04584 | null | null |
Latent Regression Bayesian Network for Data Representation | cs.LG | Deep directed generative models have attracted much attention recently due to
their expressive representation power and the ability of ancestral sampling.
One major difficulty of learning directed models with many latent variables is
the intractable inference. To address this problem, most existing algorithms
make assumptions to render the latent variables independent of each other,
either by designing specific priors, or by approximating the true posterior
using a factorized distribution. We believe the correlations among latent
variables are crucial for faithful data representation. Driven by this idea, we
propose an inference method based on the conditional pseudo-likelihood that
preserves the dependencies among the latent variables. For learning, we propose
to employ the hard Expectation Maximization (EM) algorithm, which avoids the
intractability of traditional EM by maxing out instead of summing out the
latent variables when computing the data likelihood. Qualitative and
quantitative evaluations of our model
against state of the art deep models on benchmark datasets demonstrate the
effectiveness of the proposed algorithm in data representation and
reconstruction.
| Siqi Nie, Qiang Ji | null | 1506.04720 | null | null |
Encog: Library of Interchangeable Machine Learning Models for Java and
C# | cs.MS cs.LG | This paper introduces the Encog library for Java and C#, a scalable,
adaptable, multiplatform machine learning framework that was first released in
2008. Encog allows a variety of machine learning models to be applied to
datasets using regression, classification, and clustering. Various supported
machine learning models can be used interchangeably with minimal recoding.
Encog uses efficient multithreaded code to reduce training time by exploiting
modern multicore processors. The current version of Encog can be downloaded
from http://www.encog.org.
| Jeff Heaton | null | 1506.04776 | null | null |
Cheap Bandits | cs.LG | We consider stochastic sequential learning problems where the learner can
observe the \textit{average reward of several actions}. Such a setting is
interesting in many applications involving monitoring and surveillance, where
the set of actions to observe represents some (geographical) area. The
importance of this setting is that in these applications, it is actually
\textit{cheaper} to observe average reward of a group of actions rather than
the reward of a single action. We show that when the reward is \textit{smooth}
over a given graph representing the neighboring actions, we can maximize the
cumulative reward of learning while \textit{minimizing the sensing cost}. In
this paper we propose CheapUCB, an algorithm that matches the regret guarantees
of the known algorithms for this setting and at the same time guarantees a
linear cost gain over them. As a by-product of our analysis, we establish a
$\Omega(\sqrt{dT})$ lower bound on the cumulative regret of spectral bandits
for a class of graphs with effective dimension $d$.
| Manjesh Kumar Hanawal and Venkatesh Saligrama and Michal Valko and R\'emi
Munos | null | 1506.04782 | null | null
Online Gradient Boosting | cs.LG | We extend the theory of boosting for regression problems to the online
learning setting. Generalizing from the batch setting for boosting, the notion
of a weak learning algorithm is modeled as an online learning algorithm with
linear loss functions that competes with a base class of regression functions,
while a strong learning algorithm is an online learning algorithm with convex
loss functions that competes with a larger class of regression functions. Our
main result is an online gradient boosting algorithm which converts a weak
online learning algorithm into a strong one where the larger class of functions
is the linear span of the base class. We also give a simpler boosting algorithm
that converts a weak online learning algorithm into a strong one where the
larger class of functions is the convex hull of the base class, and prove its
optimality.
| Alina Beygelzimer, Elad Hazan, Satyen Kale and Haipeng Luo | null | 1506.04820 | null | null |
Tree-structured composition in neural networks without tree-structured
architectures | cs.CL cs.LG | Tree-structured neural networks encode a particular tree geometry for a
sentence in the network design. However, these models have at best only
slightly outperformed simpler sequence-based models. We hypothesize that neural
sequence models like LSTMs are in fact able to discover and implicitly use
recursive compositional structure, at least for tasks with clear cues to that
structure in the data. We demonstrate this possibility using an artificial data
task for which recursive compositional structure is crucial, and find an
LSTM-based sequence model can indeed learn to exploit the underlying tree
structure. However, its performance consistently lags behind that of tree
models, even on large training sets, suggesting that tree-structured models are
more effective at exploiting recursive structure.
| Samuel R. Bowman, Christopher D. Manning, and Christopher Potts | null | 1506.04834 | null | null |
Spectral Sparsification and Regret Minimization Beyond Matrix
Multiplicative Updates | cs.LG cs.DS math.OC stat.ML | In this paper, we provide a novel construction of the linear-sized spectral
sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous
constructions required $\Omega(n^4)$ running time [BSS14, Zou12], our
sparsification routine can be implemented in almost-quadratic running time
$O(n^{2+\varepsilon})$.
The fundamental conceptual novelty of our work is the leveraging of a strong
connection between sparsification and a regret minimization problem over
density matrices. This connection was known to provide an interpretation of the
randomized sparsifiers of Spielman and Srivastava [SS11] via the application of
matrix multiplicative weight updates (MWU) [CHS11, Vis14]. In this paper, we
explain how matrix MWU naturally arises as an instance of the
Follow-the-Regularized-Leader framework and generalize this approach to yield a
larger class of updates. This new class allows us to accelerate the
construction of linear-sized spectral sparsifiers, and give novel insights on
the motivation behind Batson, Spielman and Srivastava [BSS14].
| Zeyuan Allen-Zhu and Zhenyu Liao and Lorenzo Orecchia | null | 1506.04838 | null | null |
PCA with Gaussian perturbations | cs.LG stat.ML | Most of machine learning deals with vector parameters. Ideally we would like
to take higher order information into account and make use of matrix or even
tensor parameters. However the resulting algorithms are usually inefficient.
Here we address on-line learning with matrix parameters. It is often easy to
obtain an online algorithm with good generalization performance if you
eigendecompose the current parameter matrix in each trial (at a cost of
$O(n^3)$ per trial). Ideally we want to avoid the decompositions and spend
$O(n^2)$ per trial, i.e. linear time in the size of the matrix data. There is a
core trade-off between the running time and the generalization performance,
here measured by the regret of the on-line algorithm (total gain of the best
off-line predictor minus the total gain of the on-line algorithm). We focus on
the key matrix problem of rank $k$ Principal Component Analysis in
$\mathbb{R}^n$ where $k \ll n$. There are $O(n^3)$ algorithms that achieve the
optimum regret but require eigendecompositions. We develop a simple algorithm
that needs $O(kn^2)$ per trial whose regret is off by a small factor of
$O(n^{1/4})$. The algorithm is based on the Follow the Perturbed Leader
paradigm. It replaces full eigendecompositions at each trial by the problem of
finding $k$ principal components of the current covariance matrix that is
perturbed by Gaussian noise.
| Wojciech Kot{\l}owski, Manfred K. Warmuth | null | 1506.04855 | null | null |
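A schematic of the per-trial step described in the abstract above: perturb the current covariance with symmetric Gaussian noise and extract its top-$k$ components without a full eigendecomposition. The noise scale and the use of Lanczos (eigsh) are illustrative assumptions, and this is not the regret-analyzed algorithm itself.

```python
# One perturbed top-k PCA step: symmetric Gaussian perturbation + Lanczos.
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.RandomState(0)
n, k = 200, 5
X = rng.randn(1000, n)
C = X.T @ X / X.shape[0]                      # current covariance estimate

G = rng.randn(n, n)
noise = (G + G.T) / np.sqrt(2.0)              # symmetric Gaussian perturbation
vals, V = eigsh(C + 0.1 * noise, k=k, which="LA")   # k largest components
print(V.shape)                                # (n, k) projection for this trial
```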
Author Identification using Multi-headed Recurrent Neural Networks | cs.CL cs.LG cs.NE | Recurrent neural networks (RNNs) are very good at modelling the flow of text,
but typically need to be trained on a far larger corpus than is available for
the PAN 2015 Author Identification task. This paper describes a novel approach
where the output layer of a character-level RNN language model is split into
several independent predictive sub-models, each representing an author, while
the recurrent layer is shared by all. This allows the recurrent layer to model
the language as a whole without over-fitting, while the outputs select aspects
of the underlying model that reflect their author's style. The method proves
competitive, ranking first in two of the four languages.
| Douglas Bagnall | null | 1506.04891 | null | null |
Learning with Clustering Structure | cs.LG | We study supervised learning problems using clustering constraints to impose
structure on either features or samples, seeking to help both prediction and
interpretation. The problem of clustering features arises naturally in text
classification for instance, to reduce dimensionality by grouping words
together and identify synonyms. The sample clustering problem on the other
hand, applies to multiclass problems where we are allowed to make multiple
predictions and the performance of the best answer is recorded. We derive a
unified optimization formulation highlighting the common structure of these
problems and produce algorithms whose core iteration complexity amounts to a
k-means clustering step, which can be approximated efficiently. We extend these
results to combine sparsity and clustering constraints, and develop a new
projection algorithm on the set of clustered sparse vectors. We prove
convergence of our algorithms on random instances, based on a union of
subspaces interpretation of the clustering structure. Finally, we test the
robustness of our methods on artificial data sets as well as real data
extracted from movie reviews.
| Vincent Roulet, Fajwel Fogel, Alexandre d'Aspremont, Francis Bach | null | 1506.04908 | null | null |
Bayesian representation learning with oracle constraints | stat.ML cs.CV cs.LG | Representation learning systems typically rely on massive amounts of labeled
data in order to be trained to high accuracy. Recently, high-dimensional
parametric models like neural networks have succeeded in building rich
representations using either compressive, reconstructive or supervised
criteria. However, the semantic structure inherent in observations is
oftentimes lost in the process. Human perception excels at understanding
semantics but cannot always be expressed in terms of labels. Thus,
\emph{oracles} or \emph{human-in-the-loop systems}, for example crowdsourcing,
are often employed to generate similarity constraints using an implicit
similarity function encoded in human perception. In this work we propose to
combine \emph{generative unsupervised feature learning} with a
\emph{probabilistic treatment of oracle information like triplets} in order to
transfer implicit privileged oracle knowledge into explicit nonlinear Bayesian
latent factor models of the observations. We use a fast variational algorithm
to learn the joint model and demonstrate applicability to a well-known image
dataset. We show how implicit triplet information can provide rich information
to learn representations that outperform previous metric learning approaches as
well as generative models without this side-information in a variety of
predictive tasks. In addition, we illustrate that the proposed approach
compartmentalizes the latent spaces semantically which allows interpretation of
the latent variables.
| Theofanis Karaletsos, Serge Belongie, Gunnar R\"atsch | null | 1506.05011 | null | null |
Numeric Input Relations for Relational Learning with Applications to
Community Structure Analysis | cs.LG | Most work in the area of statistical relational learning (SRL) is focussed on
discrete data, even though a few approaches for hybrid SRL models have been
proposed that combine numerical and discrete variables. In this paper we
distinguish numerical random variables for which a probability distribution is
defined by the model from numerical input variables that are only used for
conditioning the distribution of discrete response variables. We show how
numerical input relations can very easily be used in the Relational Bayesian
Network framework, and that existing inference and learning methods need only
minor adjustments to be applied in this generalized setting. The resulting
framework provides natural relational extensions of classical probabilistic
models for categorical data. We demonstrate the usefulness of RBN models with
numeric input relations by several examples.
In particular, we use the augmented RBN framework to define probabilistic
models for multi-relational (social) networks in which the probability of a
link between two nodes depends on numeric latent feature vectors associated
with the nodes. A generic learning procedure can be used to obtain a
maximum-likelihood fit of model parameters and latent feature values for a
variety of models that can be expressed in the high-level RBN representation.
Specifically, we propose a model that allows us to interpret learned latent
feature values as community centrality degrees by which we can identify nodes
that are central for one community, that are hubs between communities, or that
are isolated nodes. In a multi-relational setting, the model also provides a
characterization of how different relations are associated with each community.
| Jiuchuan Jiang and Manfred Jaeger | null | 1506.05055 | null | null |
Reservoir Characterization: A Machine Learning Approach | cs.CE cs.LG | Reservoir Characterization (RC) can be defined as the act of building a
reservoir model that incorporates all the characteristics of the reservoir that
are pertinent to its ability to store hydrocarbons and also to produce them. It
is a difficult problem due to non-linear and heterogeneous subsurface
properties, and it is associated with a number of complex tasks such as data
fusion, data mining, formulation of the knowledge base, and handling of
uncertainty. The present work describes the development of algorithms to obtain
the functional relationships between predictor seismic attributes and target
lithological properties. Seismic attributes are available over a study area
with lower vertical resolution. Conversely, well logs and lithological
properties are available only at specific well locations in a study area with
high vertical resolution. Sand fraction, which represents per unit sand volume
within the rock, has a balanced distribution between zero and unity. The thesis
addresses the issues of handling the information content mismatch between
predictor and target variables and proposes regularization of the target
property prior to building a prediction model. In this thesis, two Artificial Neural
Network (ANN) based frameworks are proposed to model sand fraction from
multiple seismic attributes without and with well tops information
respectively. The performances of the frameworks are quantified in terms of
Correlation Coefficient, Root Mean Square Error, Absolute Error Mean, etc.
| Soumi Chaki | null | 1506.05070 | null | null |
Time Series Classification using the Hidden-Unit Logistic Model | cs.LG cs.CV | We present a new model for time series classification, called the hidden-unit
logistic model, that uses binary stochastic hidden units to model latent
structure in the data. The hidden units are connected in a chain structure that
models temporal dependencies in the data. Compared to the prior models for time
series classification such as the hidden conditional random field, our model
can model very complex decision boundaries because the number of latent states
grows exponentially with the number of hidden units. We demonstrate the strong
performance of our model in experiments on a variety of (computer vision)
tasks, including handwritten character recognition, speech recognition, facial
expression, and action recognition. We also present a state-of-the-art system
for facial action unit detection based on the hidden-unit logistic model.
| Wenjie Pei, Hamdi Dibeklio\u{g}lu, David M.J. Tax, Laurens van der
Maaten | null | 1506.05085 | null | null |
Big Data Analytics in Bioinformatics: A Machine Learning Perspective | cs.CE cs.LG | Bioinformatics research is characterized by voluminous and incremental
datasets and complex data analytics methods. The machine learning methods used
in bioinformatics are iterative and parallel. These methods can be scaled to
handle big data using the distributed and parallel computing technologies.
Usually big data tools perform computation in batch-mode and are not
optimized for iterative processing and high data dependency among operations.
In the recent years, parallel, incremental, and multi-view machine learning
algorithms have been proposed. Similarly, graph-based architectures and
in-memory big data tools have been developed to minimize I/O cost and optimize
iterative processing.
However, standard big data architectures and tools are still lacking for many
important bioinformatics problems, such as fast construction of co-expression
and regulatory networks and salient module identification, detection of
complexes over growing protein-protein interaction data, fast analysis of
massive DNA, RNA, and protein sequence data, and fast querying on incremental
and heterogeneous disease networks. This paper addresses the issues and
challenges posed by several big data problems in bioinformatics, and gives an
overview of the state of the art and the future research opportunities.
| Hirak Kashyap, Hasin Afzal Ahmed, Nazrul Hoque, Swarup Roy and Dhruba
Kumar Bhattacharyya | null | 1506.05101 | null | null |
Deep Convolutional Networks on Graph-Structured Data | cs.LG cs.CV cs.NE | Deep Learning's recent successes have mostly relied on Convolutional
Networks, which exploit fundamental statistical properties of images, sounds
and video data: the local stationarity and multi-scale compositional structure,
that allows expressing long range interactions in terms of shorter, localized
interactions. However, there exist other important examples, such as text
documents or bioinformatic data, that may lack some or all of these strong
statistical regularities.
In this paper we consider the general question of how to construct deep
architectures with small learning complexity on general non-Euclidean domains,
which are typically unknown and need to be estimated from the data. In
particular, we develop an extension of Spectral Networks which incorporates a
Graph Estimation procedure, that we test on large-scale classification
problems, matching or improving over Dropout Networks with far less parameters
to estimate.
| Mikael Henaff, Joan Bruna, Yann LeCun | null | 1506.05163 | null | null |
Feature Selection for Ridge Regression with Provable Guarantees | stat.ML cs.IT cs.LG math.IT | We introduce single-set spectral sparsification as a deterministic sampling
based feature selection technique for regularized least squares classification,
which is the classification analogue to ridge regression. The method is
unsupervised and gives worst-case guarantees of the generalization power of the
classification function after feature selection with respect to the
classification function obtained using all features. We also introduce
leverage-score sampling as an unsupervised randomized feature selection method
for ridge regression. We provide risk bounds for both single-set spectral
sparsification and leverage-score sampling on ridge regression in the fixed
design setting and show that the risk in the sampled space is comparable to the
risk in the full-feature space. We perform experiments on synthetic and
real-world datasets, namely a subset of TechTC-300 datasets, to support our
theory. Experimental results indicate that the proposed methods perform better
than the existing feature selection methods.
| Saurabh Paul, Petros Drineas | null | 1506.05173 | null | null |
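A small sketch in the spirit of the leverage-score sampling described in the abstract above: score each feature (column) by its leverage with respect to the top-$k$ right singular subspace and sample features accordingly. The subspace dimension, number of kept features, and sampling without replacement are illustrative assumptions, not the paper's exact procedure or guarantees.

```python
# Column leverage scores from the top-k right singular subspace, then sampling.
import numpy as np

rng = np.random.RandomState(0)
n, d, k, r = 500, 100, 10, 30                  # samples, features, rank, kept features
X = rng.randn(n, d) @ rng.randn(d, d)          # correlated features

_, _, Vt = np.linalg.svd(X, full_matrices=False)
lev = np.sum(Vt[:k, :] ** 2, axis=0)           # leverage score of each column
p = lev / lev.sum()

cols = rng.choice(d, size=r, replace=False, p=p)
X_reduced = X[:, cols]
print(X_reduced.shape)
```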
On the Depth of Deep Neural Networks: A Theoretical View | cs.LG | People believe that depth plays an important role in the success of deep neural
networks (DNN). However, this belief lacks solid theoretical justifications as
far as we know. We investigate the role of depth from the perspective of the
margin bound. In the margin bound, the expected error is upper bounded by the
empirical margin error plus a Rademacher Average (RA) based capacity term.
First, we derive an upper bound for the RA of DNN and show that it increases
with increasing depth. This indicates a negative impact of depth on test
performance. Second, we show that deeper networks tend to have larger
representation power (measured by Betti numbers based complexity) than
shallower networks in the multi-class setting, and thus can lead to smaller
empirical margin error. This implies a positive impact of depth. The
combination of these two results shows that for DNN with a restricted number of
hidden units, increasing depth is not always good since there is a tradeoff
between the positive and negative impacts. These results inspire us to seek
alternative ways to achieve the positive impact of depth, e.g., imposing
margin-based penalty terms on the cross entropy loss so as to reduce the empirical
margin error without increasing depth. Our experiments show that in this way,
we achieve significantly better test performance.
| Shizhao Sun, Wei Chen, Liwei Wang, Xiaoguang Liu, Tie-Yan Liu | null | 1506.05232 | null | null |
Gradient Estimation Using Stochastic Computation Graphs | cs.LG | In a variety of problems originating in supervised, unsupervised, and
reinforcement learning, the loss function is defined by an expectation over a
collection of random variables, which might be part of a probabilistic model or
the external world. Estimating the gradient of this loss function, using
samples, lies at the core of gradient-based learning algorithms for these
problems. We introduce the formalism of stochastic computation
graphs---directed acyclic graphs that include both deterministic functions and
conditional probability distributions---and describe how to easily and
automatically derive an unbiased estimator of the loss function's gradient. The
resulting algorithm for computing the gradient estimator is a simple
modification of the standard backpropagation algorithm. The generic scheme we
propose unifies estimators derived in variety of prior work, along with
variance-reduction techniques therein. It could assist researchers in
developing intricate models involving a combination of stochastic and
deterministic operations, enabling, for example, attention, memory, and control
actions.
| John Schulman, Nicolas Heess, Theophane Weber, Pieter Abbeel | null | 1506.05254 | null | null |
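As background for the abstract above, the two standard single-node gradient estimators that such a formalism unifies are the score-function (likelihood-ratio) estimator and, when $x = g_{\theta}(\varepsilon)$ is reparameterizable, the pathwise estimator:

$$\nabla_{\theta}\, \mathbb{E}_{x \sim p_{\theta}}\big[f(x)\big] \;=\; \mathbb{E}_{x \sim p_{\theta}}\big[f(x)\, \nabla_{\theta} \log p_{\theta}(x)\big], \qquad \nabla_{\theta}\, \mathbb{E}_{\varepsilon}\big[f(g_{\theta}(\varepsilon))\big] \;=\; \mathbb{E}_{\varepsilon}\big[\nabla_{\theta} f(g_{\theta}(\varepsilon))\big].$$

These are standard identities given here for orientation; the paper's surrogate-loss construction over general graphs is not reproduced.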
Deep Denoising Auto-encoder for Statistical Speech Synthesis | cs.SD cs.LG | This paper proposes a deep denoising auto-encoder technique to extract better
acoustic features for speech synthesis. The technique allows us to
automatically extract low-dimensional features from high dimensional spectral
features in a non-linear, data-driven, unsupervised way. We compared the new
stochastic feature extractor with conventional mel-cepstral analysis in
analysis-by-synthesis and text-to-speech experiments. Our results confirm that
the proposed method increases the quality of synthetic speech in both
experiments.
| Zhenzhou Wu, Shinji Takaki, Junichi Yamagishi | null | 1506.05268 | null | null |
Learning with a Wasserstein Loss | cs.LG cs.CV stat.ML | Learning to predict multi-label outputs is challenging, but in many problems
there is a natural metric on the outputs that can be used to improve
predictions. In this paper we develop a loss function for multi-label learning,
based on the Wasserstein distance. The Wasserstein distance provides a natural
notion of dissimilarity for probability measures. Although optimizing with
respect to the exact Wasserstein distance is costly, recent work has described
a regularized approximation that is efficiently computed. We describe an
efficient learning algorithm based on this regularization, as well as a novel
extension of the Wasserstein distance from probability measures to unnormalized
measures. We also describe a statistical learning bound for the loss. The
Wasserstein loss can encourage smoothness of the predictions with respect to a
chosen metric on the output space. We demonstrate this property on a real-data
tag prediction problem, using the Yahoo Flickr Creative Commons dataset,
outperforming a baseline that doesn't use the metric.
| Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya-Polo,
Tomaso Poggio | null | 1506.05439 | null | null |
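For context on the abstract above, a generic Sinkhorn iteration for entropy-regularized optimal transport between two histograms is sketched below; this is the kind of efficient regularized approximation referred to, but not the paper's loss layer, its gradient, or its extension to unnormalized measures. The regularization strength and iteration count are arbitrary assumptions.

```python
# Entropy-regularized optimal transport between two histograms via Sinkhorn.
import numpy as np

def sinkhorn_plan(a, b, M, reg=0.1, n_iter=200):
    """a, b: histograms summing to 1; M: ground cost matrix."""
    K = np.exp(-M / reg)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]           # transport plan

a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.2, 0.6])
M = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))  # ground metric
P = sinkhorn_plan(a, b, M)
print("approximate transport cost:", float(np.sum(P * M)))
```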
Learning Contextualized Semantics from Co-occurring Terms via a Siamese
Architecture | cs.IR cs.CL cs.LG | One of the biggest challenges in Multimedia information retrieval and
understanding is to bridge the semantic gap by properly modeling concept
semantics in context. The presence of out of vocabulary (OOV) concepts
exacerbates this difficulty. To address the semantic gap issues, we formulate a
problem on learning contextualized semantics from descriptive terms and propose
a novel Siamese architecture to model the contextualized semantics from
descriptive terms. By means of pattern aggregation and probabilistic topic
models, our Siamese architecture captures contextualized semantics from the
co-occurring descriptive terms via unsupervised learning, which leads to a
concept embedding space of the terms in context. Furthermore, the co-occurring
OOV concepts can be easily represented in the learnt concept embedding space.
The main properties of the concept embedding space are demonstrated via
visualization. Using various settings in semantic priming, we have carried out
a thorough evaluation by comparing our approach to a number of state-of-the-art
methods on six annotation corpora in different domains, i.e., MagTag5K, CAL500
and Million Song Dataset in the music domain as well as Corel5K, LabelMe and
SUNDatabase in the image domain. Experimental results on semantic priming
suggest that our approach outperforms those state-of-the-art methods
considerably in various aspects.
| Ubai Sandouk, Ke Chen | null | 1506.05514 | null | null |
Causality on Cross-Sectional Data: Stable Specification Search in
Constrained Structural Equation Modeling | stat.ML cs.LG | Causal modeling has long been an attractive topic for many researchers and in
recent decades there has seen a surge in theoretical development and discovery
algorithms. Generally discovery algorithms can be divided into two approaches:
constraint-based and score-based. The constraint-based approach is able to
detect common causes of the observed variables but the use of independence
tests makes it less reliable. The score-based approach produces a result that
is easier to interpret as it also measures the reliability of the inferred
causal relationships, but it is unable to detect common confounders of the
observed variables. A drawback of both score-based and constrained-based
approaches is the inherent instability in structure estimation. With finite
samples small changes in the data can lead to completely different optimal
structures. The present work introduces a new hypothesis-free score-based
causal discovery algorithm, called stable specification search, that is robust
for finite samples based on recent advances in stability selection using
subsampling and selection algorithms. Structure search is performed over
Structural Equation Models. Our approach uses exploratory search but allows
incorporation of prior background knowledge. We validated our approach on one
simulated data set, which we compare to the known ground truth, and two
real-world data sets for Chronic Fatigue Syndrome and Attention Deficit
Hyperactivity Disorder, which we compare to earlier medical studies. The
results on the simulated data set show significant improvement over alternative
approaches and the results on the real-world data sets show consistency with the
hypothesis driven models constructed by medical experts.
| Ridho Rahmadi, Perry Groot, Marianne Heins, Hans Knoop, Tom Heskes
(The OPTIMISTIC consortium) | 10.1016/j.asoc.2016.10.003 | 1506.05600 | null | null |
A hybrid algorithm for Bayesian network structure learning with
application to multi-label learning | stat.ML cs.AI cs.LG | We present a novel hybrid algorithm for Bayesian network structure learning,
called H2PC. It first reconstructs the skeleton of a Bayesian network and then
performs a Bayesian-scoring greedy hill-climbing search to orient the edges.
The algorithm is based on divide-and-conquer constraint-based subroutines to
learn the local structure around a target variable. We conduct two series of
experimental comparisons of H2PC against Max-Min Hill-Climbing (MMHC), which is
currently the most powerful state-of-the-art algorithm for Bayesian network
structure learning. First, we use eight well-known Bayesian network benchmarks
with various data sizes to assess the quality of the learned structure returned
by the algorithms. Our extensive experiments show that H2PC outperforms MMHC in
terms of goodness of fit to new data and quality of the network structure with
respect to the true dependence structure of the data. Second, we investigate
H2PC's ability to solve the multi-label learning problem. We provide
theoretical results to characterize and identify graphically the so-called
minimal label powersets that appear as irreducible factors in the joint
distribution under the faithfulness condition. The multi-label learning problem
is then decomposed into a series of multi-class classification problems, where
each multi-class variable encodes a label powerset. H2PC is shown to compare
favorably to MMHC in terms of global classification accuracy over ten
multi-label data sets covering different application domains. Overall, our
experiments support the conclusions that local structural learning with H2PC in
the form of local neighborhood induction is a theoretically well-motivated and
empirically effective learning framework that is well suited to multi-label
learning. The source code (in R) of H2PC as well as all data sets used for the
empirical tests are publicly available.
| Maxime Gasse (DM2L), Alex Aussem (DM2L), Haytham Elghazel (DM2L) | 10.1016/j.eswa.2014.04.032 | 1506.05692 | null | null |
Scalable Semi-Supervised Aggregation of Classifiers | cs.LG | We present and empirically evaluate an efficient algorithm that learns to
aggregate the predictions of an ensemble of binary classifiers. The algorithm
uses the structure of the ensemble predictions on unlabeled data to yield
significant performance improvements. It does this without making assumptions
on the structure or origin of the ensemble, without parameters, and as scalably
as linear learning. We empirically demonstrate these performance gains with
random forests.
| Akshay Balsubramani, Yoav Freund | null | 1506.05790 | null | null |
An Iterative Convolutional Neural Network Algorithm Improves Electron
Microscopy Image Segmentation | cs.NE cs.LG | To build the connectomics map of the brain, we developed a new algorithm that
automatically refines the Membrane Detection Probability Maps (MDPMs) used for
automatic segmentation of electron microscopy (EM) images. To achieve this, we
trained a convolutional neural network, in a supervised fashion, to recover the
removed center-pixel label of patches sampled from an MDPM. An MDPM can be
generated by other machine-learning algorithms that recognize whether a pixel
in an image corresponds to the cell membrane. By iteratively applying this
network to the MDPM for multiple rounds, we were able
to significantly improve membrane segmentation results.
| Xundong Wu | null | 1506.05849 | null | null |
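A minimal sketch of the iterative refinement loop described above, with a small fully convolutional network standing in for the trained center-pixel predictor; the architecture, the choice of PyTorch, and the number of rounds are illustrative assumptions rather than the authors' setup, and no training loop is shown.

```python
import torch
import torch.nn as nn

class Refiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=5, padding=2),
        )

    def forward(self, mdpm):
        # Input and output are membrane probability maps with values in [0, 1].
        return torch.sigmoid(self.net(mdpm))

def iterative_refine(mdpm, model, rounds=5):
    """Apply the (already trained) refiner repeatedly to the probability map."""
    with torch.no_grad():
        for _ in range(rounds):
            mdpm = model(mdpm)
    return mdpm

# Toy usage on a random 1x1x128x128 probability map (the model is untrained here).
model = Refiner()
refined = iterative_refine(torch.rand(1, 1, 128, 128), model)
print(refined.shape)
```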
Information-based inference for singular models and finite sample sizes:
A frequentist information criterion | stat.ML cs.LG physics.data-an | In the information-based paradigm of inference, model selection is performed
by selecting the candidate model with the best estimated predictive
performance. The success of this approach depends on the accuracy of the
estimate of the predictive complexity. In the large-sample-size limit of a
regular model, the predictive performance is well estimated by the Akaike
Information Criterion (AIC). However, this approximation can significantly
under- or over-estimate the complexity in a wide range of important
applications where models are non-regular or where finite-sample-size
corrections are significant. We introduce an improved
approximation for the complexity that is used to define a new information
criterion: the Frequentist Information Criterion (QIC). QIC extends the
applicability of information-based inference to the finite-sample-size regime
of regular models and to singular models. We demonstrate the power and the
comparative advantage of QIC in a number of example analyses.
| Colin H. LaMont and Paul A. Wiggins | null | 1506.05855 | null | null |
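For concreteness, the following is a minimal sketch of the baseline AIC computation the abstract refers to, here for a simple Gaussian model fit by maximum likelihood; the paper's QIC complexity correction itself is not reproduced, and the model choice is an assumption made for illustration.

```python
import numpy as np

def gaussian_aic(x):
    """AIC = 2k - 2 log L for a Gaussian with ML-estimated mean and variance."""
    mu, sigma = x.mean(), x.std(ddof=0)             # maximum-likelihood estimates
    loglik = np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                    - (x - mu)**2 / (2 * sigma**2))
    k = 2                                            # free parameters: mean and variance
    return 2 * k - 2 * loglik

rng = np.random.default_rng(0)
print(gaussian_aic(rng.normal(loc=1.0, scale=2.0, size=500)))
```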
Variational Gaussian Copula Inference | stat.ML cs.LG stat.CO | We utilize copulas to constitute a unified framework for constructing and
optimizing variational proposals in hierarchical Bayesian models. For models
with continuous and non-Gaussian hidden variables, we propose a semiparametric
and automated variational Gaussian copula approach, in which the parametric
Gaussian copula family is able to preserve multivariate posterior dependence,
and the nonparametric transformations based on Bernstein polynomials provide
ample flexibility in characterizing the univariate marginal posteriors.
| Shaobo Han, Xuejun Liao, David B. Dunson, Lawrence Carin | null | 1506.05860 | null | null |
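The sketch below illustrates only the Gaussian-copula building block the abstract relies on: a correlation matrix fixes the multivariate dependence, while probability transforms give arbitrary univariate marginals. The marginals used here are illustrative assumptions, not the paper's Bernstein-polynomial construction, and this is sampling rather than variational inference.

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(corr, marginal_ppfs, n, seed=0):
    """Draw n samples whose dependence is Gaussian with the given correlation."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(mean=np.zeros(len(corr)), cov=corr, size=n)
    u = stats.norm.cdf(z)                       # uniform marginals, Gaussian dependence
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

corr = np.array([[1.0, 0.7], [0.7, 1.0]])
samples = sample_gaussian_copula(
    corr,
    [lambda u: stats.gamma.ppf(u, a=2.0),       # a skewed marginal
     lambda u: stats.beta.ppf(u, a=2.0, b=5.0)],
    n=5000,
)
print(np.corrcoef(samples, rowvar=False))       # dependence close to the target 0.7
```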
LCSTS: A Large Scale Chinese Short Text Summarization Dataset | cs.CL cs.IR cs.LG | Automatic text summarization is widely regarded as a highly difficult
problem, partly because of the lack of large text summarization datasets.
Given the great challenge of constructing large-scale summaries for full
texts, in this paper we introduce a large corpus for Chinese short text
summarization constructed from the Chinese microblogging website Sina
Weibo, which is released to the public
{http://icrc.hitsz.edu.cn/Article/show/139.html}. This corpus consists of over
2 million real Chinese short texts with short summaries given by the author of
each text. We also manually tagged the relevance of 10,666 short summaries with
their corresponding short texts. Based on the corpus, we introduce recurrent
neural networks for summary generation and achieve promising results, which
not only show the usefulness of the proposed corpus for short text
summarization research but also provide a baseline for further research on
this topic.
| Baotian Hu, Qingcai Chen, Fangze Zhu | null | 1506.05865 | null | null |
Representation Learning for Clustering: A Statistical Framework | stat.ML cs.LG | We address the problem of communicating domain knowledge from a user to the
designer of a clustering algorithm. We propose a protocol in which the user
provides a clustering of a relatively small random sample of a data set. The
algorithm designer then uses that sample to come up with a data representation
under which $k$-means clustering results in a clustering (of the full data set)
that is aligned with the user's clustering. We provide a formal statistical
model for analyzing the sample complexity of learning a clustering
representation with this paradigm. We then introduce a notion of capacity of a
class of possible representations, in the spirit of the VC-dimension, showing
that classes of representations that have finite such dimension can be
successfully learned with sample size error bounds, and end our discussion with
an analysis of that dimension for classes of representations induced by linear
embeddings.
| Hassan Ashtiani, Shai Ben-David | null | 1506.05900 | null | null |
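A hedged sketch of one possible instantiation of the protocol described above: the user clusters a small random sample, a linear embedding is fit to that sample, and k-means is then run on the full data set in the embedded space. Using LDA as the embedding learner and the toy data are assumptions made for illustration, not the paper's construction.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Full data set: two elongated blobs that vanilla k-means tends to split badly.
full = np.vstack([
    rng.normal([0, 0], [0.3, 5.0], size=(1000, 2)),
    rng.normal([3, 0], [0.3, 5.0], size=(1000, 2)),
])

# The user clusters a small random sample (here simulated by a simple rule).
idx = rng.choice(len(full), size=60, replace=False)
sample, sample_labels = full[idx], (full[idx, 0] > 1.5).astype(int)

# Learn a linear embedding from the user's clustering, then cluster everything.
embed = LinearDiscriminantAnalysis(n_components=1).fit(sample, sample_labels)
assignments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    embed.transform(full))
print(np.bincount(assignments))                 # roughly balanced clusters
```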
Deep Knowledge Tracing | cs.AI cs.CY cs.LG | Knowledge tracing---where a machine models the knowledge of a student as they
interact with coursework---is a well established problem in computer supported
education. Though effectively modeling student knowledge would have high
educational impact, the task has many inherent challenges. In this paper we
explore the utility of using Recurrent Neural Networks (RNNs) to model student
learning. The RNN family of models has important advantages over previous
methods in that these models do not require the explicit encoding of human domain
knowledge, and can capture more complex representations of student knowledge.
Using neural networks results in substantial improvements in prediction
performance on a range of knowledge tracing datasets. Moreover the learned
model can be used for intelligent curriculum design and allows straightforward
interpretation and discovery of structure in student tasks. These results
suggest a promising new line of research for knowledge tracing and an exemplary
application task for RNNs.
| Chris Piech, Jonathan Spencer, Jonathan Huang, Surya Ganguli, Mehran
Sahami, Leonidas Guibas, Jascha Sohl-Dickstein | null | 1506.05908 | null | null |
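A minimal DKT-style model in the spirit of the abstract, assuming PyTorch: each input step one-hot encodes an (exercise id, correctness) pair, and an LSTM outputs per-exercise probabilities that the student's next attempt will be correct. The layer sizes and the absence of a training loop are illustrative simplifications, not the authors' configuration.

```python
import torch
import torch.nn as nn

class DKT(nn.Module):
    def __init__(self, n_exercises, hidden=64):
        super().__init__()
        self.n = n_exercises
        self.rnn = nn.LSTM(input_size=2 * n_exercises, hidden_size=hidden,
                           batch_first=True)
        self.head = nn.Linear(hidden, n_exercises)

    def forward(self, exercise_ids, correct):
        # One-hot encode (exercise, correctness): index = exercise + n * correct.
        x = torch.nn.functional.one_hot(exercise_ids + self.n * correct,
                                        2 * self.n).float()
        h, _ = self.rnn(x)
        return torch.sigmoid(self.head(h))       # (batch, time, n_exercises)

# Toy usage: 4 students, 20 interactions each, 10 distinct exercises.
model = DKT(n_exercises=10)
ex = torch.randint(0, 10, (4, 20))
ok = torch.randint(0, 2, (4, 20))
pred = model(ex, ok)
print(pred.shape)                                 # torch.Size([4, 20, 10])
```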