title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
On the exact learnability of graph parameters: The case of partition
functions | cs.LG math.CO | We study the exact learnability of real-valued graph parameters $f$ which are
known to be representable as partition functions which count the number of
weighted homomorphisms into a graph $H$ with vertex weights $\alpha$ and edge
weights $\beta$. M. Freedman, L. Lov\'asz and A. Schrijver have given a
characterization of these graph parameters in terms of the $k$-connection
matrices $C(f,k)$ of $f$. Our model of learnability is based on D. Angluin's
model of exact learning using membership and equivalence queries. Given such a
graph parameter $f$, the learner can ask for the values of $f$ for graphs of
their choice, and they can formulate hypotheses in terms of the connection
matrices $C(f,k)$ of $f$. The teacher can accept the hypothesis as correct, or
provide a counterexample consisting of a graph. Our main result shows that in
this scenario, a very large class of partition functions, the rigid partition
functions, can be learned in time polynomial in the size of $H$ and the size of
the largest counterexample in the Blum-Shub-Smale model of computation over the
reals with unit cost.
| Nadia Labai and Johann A. Makowsky | null | 1606.04056 | null | null |
Matching Networks for One Shot Learning | cs.LG stat.ML | Learning from a few examples remains a key challenge in machine learning.
Despite recent advances in important domains such as vision and language, the
standard supervised deep learning paradigm does not offer a satisfactory
solution for learning new concepts rapidly from little data. In this work, we
employ ideas from metric learning based on deep neural features and from recent
advances that augment neural networks with external memories. Our framework
learns a network that maps a small labelled support set and an unlabelled
example to its label, obviating the need for fine-tuning to adapt to new class
types. We then define one-shot learning problems on vision (using Omniglot,
ImageNet) and language tasks. Our algorithm improves one-shot accuracy on
ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to
competing approaches. We also demonstrate the usefulness of the same model on
language modeling by introducing a one-shot task on the Penn Treebank.
| Oriol Vinyals and Charles Blundell and Timothy Lillicrap and Koray
Kavukcuoglu and Daan Wierstra | null | 1606.04080 | null | null |
Modeling Missing Data in Clinical Time Series with RNNs | cs.LG cs.IR cs.NE stat.ML | We demonstrate a simple strategy to cope with missing data in sequential
inputs, addressing the task of multilabel classification of diagnoses given
clinical time series. Collected from the pediatric intensive care unit (PICU)
at Children's Hospital Los Angeles, our data consists of multivariate time
series of observations. The measurements are irregularly spaced, leading to
missingness patterns in temporally discretized sequences. While these artifacts
are typically handled by imputation, we achieve superior predictive performance
by treating the artifacts as features. Unlike linear models, recurrent neural
networks can realize this improvement using only simple binary indicators of
missingness. For linear models, we show an alternative strategy to capture this
signal. Training models on missingness patterns only, we show that for some
diseases, what tests are run can be as predictive as the results themselves.
| Zachary C. Lipton, David C. Kale, Randall Wetzel | null | 1606.04130 | null | null |
Mutual information for symmetric rank-one matrix estimation: A proof of
the replica formula | cs.IT cond-mat.dis-nn cs.LG math-ph math.IT math.MP | Factorizing low-rank matrices has many applications in machine learning and
statistics. For probabilistic models in the Bayes optimal setting, a general
expression for the mutual information has been proposed using heuristic
statistical physics computations, and proven in a few specific cases. Here, we
show how to rigorously prove the conjectured formula for the symmetric rank-one
case. This allows us to express the minimal mean-square error and to characterize
the detectability phase transitions in a large set of estimation problems
ranging from community detection to sparse PCA. We also show that for a large
set of parameters, an iterative algorithm called approximate message-passing is
Bayes optimal. There remains, however, a gap between what currently known
polynomial algorithms can do and what is expected to be achievable
information-theoretically.
Additionally, the proof technique has an interest of its own and exploits three
essential ingredients: the interpolation method introduced in statistical
physics by Guerra, the analysis of the approximate message-passing algorithm
and the theory of spatial coupling and threshold saturation in coding. Our
approach is generic and applicable to other open problems in statistical
estimation where heuristic statistical physics predictions are available.
| Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault
Lesieur, Lenka Zdeborova | null | 1606.04142 | null | null |
Sample Complexity of Automated Mechanism Design | cs.LG cs.GT | The design of revenue-maximizing combinatorial auctions, i.e. multi-item
auctions over bundles of goods, is one of the most fundamental problems in
computational economics, unsolved even for two bidders and two items for sale.
In the traditional economic models, it is assumed that the bidders' valuations
are drawn from an underlying distribution and that the auction designer has
perfect knowledge of this distribution. Despite this strong and oftentimes
unrealistic assumption, it is remarkable that the revenue-maximizing
combinatorial auction remains unknown. In recent years, automated mechanism
design has emerged as one of the most practical and promising approaches to
designing high-revenue combinatorial auctions. The most scalable automated
mechanism design algorithms take as input samples from the bidders' valuation
distribution and then search for a high-revenue auction in a rich auction
class. In this work, we provide the first sample complexity analysis for the
standard hierarchy of deterministic combinatorial auction classes used in
automated mechanism design. In particular, we provide tight sample complexity
bounds on the number of samples needed to guarantee that the empirical revenue
of the designed mechanism on the samples is close to its expected revenue on
the underlying, unknown distribution over bidder valuations, for each of the
auction classes in the hierarchy. In addition to helping set automated
mechanism design on firm foundations, our results also push the boundaries of
learning theory. In particular, the hypothesis functions used in our contexts
are defined through multi-stage combinatorial optimization procedures, rather
than simple decision boundaries, as are common in machine learning.
| Maria-Florina Balcan, Tuomas Sandholm, Ellen Vitercik | null | 1606.04145 | null | null |
The Crossover Process: Learnability and Data Protection from Inference
Attacks | cs.LG stat.ML | It is usual to consider data protection and learnability as conflicting
objectives. This is not always the case: we show how to jointly control
inference --- seen as the attack --- and learnability by a noise-free process
that mixes training examples, the Crossover Process (cp). One key point is that
the cp is typically able to alter joint distributions without touching on
marginals, nor altering the sufficient statistic for the class. In other words,
it saves (and sometimes improves) generalization for supervised learning, but
can alter the relationship between covariates --- and therefore fool measures
of nonlinear independence and causal inference into misleading ad-hoc
conclusions. For example, a cp can increase or decrease odds ratios, bring
fairness or break fairness, tamper with disparate impact, strengthen, weaken or
reverse causal directions, and change observed statistical measures of dependence.
For each of these, we quantify changes brought by a cp, as well as its
statistical impact on generalization abilities via a new complexity measure
that we call the Rademacher cp complexity. Experiments on a dozen readily
available domains validate the theory.
| Richard Nock, Giorgio Patrini, Finnian Lattimore, Tiberio Caetano | null | 1606.04160 | null | null |
Modal-set estimation with an application to clustering | stat.ML cs.LG | We present a first procedure that can estimate -- with statistical
consistency guarantees -- any local maxima of a density, under benign
distributional conditions. The procedure estimates all such local maxima, or
$\textit{modal-sets}$, of any bounded shape or dimension, including usual
point-modes. In practice, modal-sets can arise as dense low-dimensional
structures in noisy data, and more generally serve to better model the rich
variety of locally-high-density structures in data.
The procedure is then shown to be competitive on clustering applications, and
moreover is quite stable to a wide range of settings of its tuning parameter.
| Heinrich Jiang, Samory Kpotufe | null | 1606.04166 | null | null |
Inverting face embeddings with convolutional neural networks | cs.CV cs.LG cs.NE | Deep neural networks have dramatically advanced the state of the art for many
areas of machine learning. Recently they have been shown to have a remarkable
ability to generate highly complex visual artifacts such as images and text
rather than simply recognize them.
In this work we use neural networks to effectively invert low-dimensional
face embeddings while producing realistic-looking, consistent images. Our
contribution is twofold: first, we show that a gradient ascent style approach
can be used to reproduce consistent images with the help of a guiding image.
Second, we demonstrate that we can train a separate neural network to
effectively solve the minimization problem in one pass, and generate images in
real-time. We then evaluate the loss imposed by using a neural network instead
of gradient descent by comparing the final values of the minimized loss
function.
| Andrey Zhmoginov and Mark Sandler | null | 1606.04189 | null | null |
Deep Recurrent Models with Fast-Forward Connections for Neural Machine
Translation | cs.CL cs.LG | Neural machine translation (NMT) aims at solving machine translation (MT)
problems using neural networks and has exhibited promising results in recent
years. However, most of the existing NMT models are shallow and there is still
a performance gap between a single NMT model and the best conventional MT
system. In this work, we introduce a new type of linear connections, named
fast-forward connections, based on deep Long Short-Term Memory (LSTM) networks,
and an interleaved bi-directional architecture for stacking the LSTM layers.
Fast-forward connections play an essential role in propagating the gradients
and building a deep topology of depth 16. On the WMT'14 English-to-French task,
we achieve BLEU=37.7 with a single attention model, which outperforms the
corresponding single shallow model by 6.2 BLEU points. This is the first time
that a single NMT model achieves state-of-the-art performance and outperforms
the best conventional model by 0.7 BLEU points. We can still achieve BLEU=36.3
even without using an attention mechanism. After special handling of unknown
words and model ensembling, we obtain the best score reported to date on this
task with BLEU=40.4. Our models are also validated on the more difficult WMT'14
English-to-German task.
| Jie Zhou and Ying Cao and Xuguang Wang and Peng Li and Wei Xu | null | 1606.04199 | null | null |
Conditional Generative Moment-Matching Networks | cs.LG | Maximum mean discrepancy (MMD) has been successfully applied to learn deep
generative models for characterizing a joint distribution of variables via
kernel mean embedding. In this paper, we present conditional generative
moment-matching networks (CGMMN), which learn a conditional distribution given some
input variables based on a conditional maximum mean discrepancy (CMMD)
criterion. The learning is performed by stochastic gradient descent with the
gradient calculated by back-propagation. We evaluate CGMMN on a wide range of
tasks, including predictive modeling, contextual generation, and Bayesian dark
knowledge, which distills knowledge from a Bayesian model by learning a
relatively small CGMMN student network. Our results demonstrate competitive
performance in all the tasks.
| Yong Ren, Jialian Li, Yucen Luo, Jun Zhu | null | 1606.04218 | null | null |
DCNNs on a Diet: Sampling Strategies for Reducing the Training Set Size | cs.CV cs.LG | Large-scale supervised classification algorithms, especially those based on
deep convolutional neural networks (DCNNs), require vast amounts of training
data to achieve state-of-the-art performance. Decreasing this data requirement
would significantly speed up the training process and possibly improve
generalization. Motivated by this objective, we consider the task of adaptively
finding concise training subsets which will be iteratively presented to the
learner. We use convex optimization methods, based on an objective criterion
and feedback from the current performance of the classifier, to efficiently
identify informative samples to train on. We propose an algorithm to decompose
the optimization problem into smaller per-class problems, which can be solved
in parallel. We test our approach on standard classification tasks and
demonstrate its effectiveness in decreasing the training set size without
compromising performance. We also show that our approach can make the
classifier more robust in the presence of label noise and class imbalance.
| Maya Kabkab, Azadeh Alavi, Rama Chellappa | null | 1606.04232 | null | null |
Context-Aware Proactive Content Caching with Service Differentiation in
Wireless Networks | cs.NI cs.LG | Content caching in small base stations or wireless infostations is considered
to be a suitable approach to improve the efficiency in wireless content
delivery. Placing the optimal content into local caches is crucial due to
storage limitations, but it requires knowledge about the content popularity
distribution, which is often not available in advance. Moreover, local content
popularity is subject to fluctuations since mobile users with different
interests connect to the caching entity over time. Which content a user prefers
may depend on the user's context. In this paper, we propose a novel algorithm
for context-aware proactive caching. The algorithm learns context-specific
content popularity online by regularly observing context information of
connected users, updating the cache content and observing cache hits
subsequently. We derive a sublinear regret bound, which characterizes the
learning speed and proves that our algorithm converges to the optimal cache
content placement strategy in terms of maximizing the number of cache hits.
Furthermore, our algorithm supports service differentiation by allowing
operators of caching entities to prioritize customer groups. Our numerical
results confirm that our algorithm outperforms state-of-the-art algorithms in a
real world data set, with an increase in the number of cache hits of at least
14%.
| Sabrina M\"uller, Onur Atan, Mihaela van der Schaar, Anja Klein | 10.1109/TWC.2016.2636139 | 1606.04236 | null | null |
Local Canonical Correlation Analysis for Nonlinear Common Variables
Discovery | cs.LG stat.ML | In this paper, we address the problem of hidden common variables discovery
from multimodal data sets of nonlinear high-dimensional observations. We
present a metric based on local applications of canonical correlation analysis
(CCA) and incorporate it in a kernel-based manifold learning technique. We show
that this metric discovers the hidden common variables underlying the
multimodal observations by estimating the Euclidean distance between them. Our
approach can be viewed both as an extension of CCA to a nonlinear setting as
well as an extension of manifold learning to multiple data sets. Experimental
results show that our method indeed discovers the common variables underlying
high-dimensional nonlinear observations without imposing rigid prior model
assumptions.
| Or Yair, Ronen Talmon | 10.1109/TSP.2016.2628348 | 1606.04268 | null | null |
Context Trees: Augmenting Geospatial Trajectories with Context | cs.DS cs.LG | Exposing latent knowledge in geospatial trajectories has the potential to
provide a better understanding of the movements of individuals and groups.
Motivated by such a desire, this work presents the context tree, a new
hierarchical data structure that summarises the context behind user actions in
a single model. We propose a method for context tree construction that augments
geospatial trajectories with land usage data to identify such contexts. Through
evaluation of the construction method and analysis of the properties of
generated context trees, we demonstrate the foundation afforded for
understanding and modelling behaviour. Summarising user contexts into a single data
structure gives easy access to information that would otherwise remain latent,
providing the basis for better understanding and predicting the actions and
behaviours of individuals and groups. Finally, we also present a method for
pruning context trees, for use in applications where it is desirable to reduce
the size of the tree while retaining useful information.
| Alasdair Thomason, Nathan Griffiths, Victor Sanchez | 10.1145/2978578 | 1606.04269 | null | null |
Efficient Pairwise Learning Using Kernel Ridge Regression: an Exact
Two-Step Method | cs.LG | Pairwise learning or dyadic prediction concerns the prediction of properties
for pairs of objects. It can be seen as an umbrella covering various machine
learning problems such as matrix completion, collaborative filtering,
multi-task learning, transfer learning, network prediction and zero-shot
learning. In this work we analyze kernel-based methods for pairwise learning,
with a particular focus on a recently-suggested two-step method. We show that
this method offers an appealing alternative for commonly-applied
Kronecker-based methods that model dyads by means of pairwise feature
representations and pairwise kernels. In a series of theoretical results, we
establish correspondences between the two types of methods in terms of linear
algebra and spectral filtering, and we analyze their statistical consistency.
In addition, the two-step method allows us to establish novel algorithmic
shortcuts for efficient training and validation on very large datasets. Putting
those properties together, we believe that this simple, yet powerful method can
become a standard tool for many problems. Extensive experimental results for a
range of practical settings are reported.
| Michiel Stock and Tapio Pahikkala and Antti Airola and Bernard De
Baets and Willem Waegeman | null | 1606.04275 | null | null |
Exact and efficient top-K inference for multi-target prediction by
querying separable linear relational models | cs.IR cs.LG | Many complex multi-target prediction problems that concern large target
spaces are characterised by a need for efficient prediction strategies that
avoid the computation of predictions for all targets explicitly. Examples of
such problems emerge in several subfields of machine learning, such as
collaborative filtering, multi-label classification, dyadic prediction and
biological network inference. In this article we analyse efficient and exact
algorithms for computing the top-$K$ predictions in the above problem settings,
using a general class of models that we refer to as separable linear relational
models. We show how to use those inference algorithms, which are modifications
of well-known information retrieval methods, in a variety of machine learning
settings. Furthermore, we study the possibility of scoring items incompletely,
while still retaining an exact top-K retrieval. Experimental results in several
application domains reveal that the so-called threshold algorithm is very
scalable, performing often many orders of magnitude more efficiently than the
naive approach.
| Michiel Stock and Krzysztof Dembczynski and Bernard De Baets and
Willem Waegeman | 10.1007/s10618-016-0456-z | 1606.04278 | null | null |
Automatic Text Scoring Using Neural Networks | cs.CL cs.LG cs.NE | Automated Text Scoring (ATS) provides a cost-effective and consistent
alternative to human marking. However, in order to achieve good performance,
the predictive features of the system need to be manually engineered by human
experts. We introduce a model that forms word representations by learning the
extent to which specific words contribute to the text's score. Using Long
Short-Term Memory networks to represent the meaning of texts, we demonstrate that a
fully automated framework is able to achieve excellent results over similar
approaches. In an attempt to make our results more interpretable, and inspired
by recent advances in visualizing neural networks, we introduce a novel method
for identifying the regions of the text that the model has found more
discriminative.
| Dimitrios Alikaniotis and Helen Yannakoudakis and Marek Rei | 10.18653/v1/P16-1068 | 1606.04289 | null | null |
Time for a change: a tutorial for comparing multiple classifiers through
Bayesian analysis | stat.ML cs.LG | The machine learning community adopted the use of null hypothesis
significance testing (NHST) in order to ensure the statistical validity of
results. Many scientific fields, however, have realized the shortcomings of
frequentist reasoning and, in the most radical cases, have even banned its use in
publications. We should do the same: just as we have embraced the Bayesian
paradigm in the development of new machine learning methods, so we should also
use it in the analysis of our own results. We argue for abandonment of NHST by
exposing its fallacies and, more importantly, offer better - more sound and
useful - alternatives to it.
| Alessio Benavoli, Giorgio Corani, Janez Demsar, Marco Zaffalon | null | 1606.04316 | null | null |
Calibration of Phone Likelihoods in Automatic Speech Recognition | stat.ML cs.LG cs.SD | In this paper we study the probabilistic properties of the posteriors in a
speech recognition system that uses a deep neural network (DNN) for acoustic
modeling. We do this by reducing Kaldi's DNN shared pdf-id posteriors to phone
likelihoods, and using test set forced alignments to evaluate these using a
calibration sensitive metric. Individual frame posteriors are in principle
well-calibrated, because the DNN is trained using cross entropy as the
objective function, which is a proper scoring rule. When entire phones are
assessed, we observe that it is best to average the log likelihoods over the
duration of the phone. Further scaling of the average log likelihoods by the
logarithm of the duration slightly improves the calibration, and this
improvement is retained when tested on independent test data.
| David A. van Leeuwen and Joost van Doremalen | null | 1606.04317 | null | null |
TwiSE at SemEval-2016 Task 4: Twitter Sentiment Classification | cs.CL cs.IR cs.LG | This paper describes the participation of the team "TwiSE" in the SemEval
2016 challenge. Specifically, we participated in Task 4, namely "Sentiment
Analysis in Twitter" for which we implemented sentiment classification systems
for subtasks A, B, C and D. Our approach consists of two steps. In the first
step, we generate and validate diverse feature sets for Twitter sentiment
evaluation, inspired by the work of participants of previous editions of such
challenges. In the second step, we focus on the optimization of the evaluation
measures of the different subtasks. To this end, we examine different learning
strategies by validating them on the data provided by the task organisers. For
our final submissions we used an ensemble learning approach (stacked
generalization) for Subtask A and single linear models for the rest of the
subtasks. In the official leaderboard we were ranked 9/35, 8/19, 1/11 and 2/14
for subtasks A, B, C and D respectively.\footnote{We make the code available
for research purposes at
\url{https://github.com/balikasg/SemEval2016-Twitter\_Sentiment\_Evaluation}.}
| Georgios Balikas, Massih-Reza Amini | null | 1606.04351 | null | null |
Deep Learning with Darwin: Evolutionary Synthesis of Deep Neural
Networks | cs.CV cs.LG cs.NE stat.ML | Taking inspiration from biological evolution, we explore the idea of "Can
deep neural networks evolve naturally over successive generations into highly
efficient deep neural networks?" by introducing the notion of synthesizing new
highly efficient, yet powerful deep neural networks over successive generations
via an evolutionary process from ancestor deep neural networks. The
architectural traits of ancestor deep neural networks are encoded using
synaptic probability models, which can be viewed as the `DNA' of these
networks. New descendant networks with differing network architectures are
synthesized based on these synaptic probability models from the ancestor
networks and computational environmental factor models, in a random manner to
mimic heredity, natural selection, and random mutation. These offspring
networks are then trained into fully functional networks, like one would train
a newborn, and have more efficient, more diverse network architectures than
their ancestor networks, while achieving powerful modeling capabilities.
Experimental results for the task of visual saliency demonstrated that the
synthesized `evolved' offspring networks can achieve state-of-the-art
performance while having network architectures that are significantly more
efficient (with a staggering $\sim$48-fold decrease in synapses by the fourth
generation) compared to the original ancestor network.
| Mohammad Javad Shafiee, Akshaya Mishra, and Alexander Wong | null | 1606.04393 | null | null |
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization | stat.ML cs.AI cs.LG | In many applications of black-box optimization, one can evaluate multiple
points simultaneously, e.g. when evaluating the performances of several
different neural network architectures in a parallel computing environment. In
this paper, we develop a novel batch Bayesian optimization algorithm --- the
parallel knowledge gradient method. By construction, this method provides the
one-step Bayes-optimal batch of points to sample. We provide an efficient
strategy for computing this Bayes-optimal batch of points, and we demonstrate
that the parallel knowledge gradient method finds global optima significantly
faster than previous batch Bayesian optimization algorithms on both synthetic
test functions and when tuning hyperparameters of practical machine learning
algorithms, especially when function evaluations are noisy.
| Jian Wu, Peter I. Frazier | null | 1606.04414 | null | null |
Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and
Knowledge | cs.AI cs.LG cs.LO cs.NE | We propose Logic Tensor Networks: a uniform framework for integrating
automatic learning and reasoning. A logic formalism called Real Logic is
defined on a first-order language whereby formulas have truth-value in the
interval [0,1] and semantics defined concretely on the domain of real numbers.
Logical constants are interpreted as feature vectors of real numbers. Real
Logic promotes a well-founded integration of deductive reasoning on a
knowledge-base and efficient data-driven relational machine learning. We show
how Real Logic can be implemented in deep Tensor Neural Networks with the use
of Google's tensorflow primitives. The paper concludes with experiments
applying Logic Tensor Networks on a simple but representative example of
knowledge completion.
| Luciano Serafini and Artur d'Avila Garcez | null | 1606.04422 | null | null |
Adversarial Perturbations Against Deep Neural Networks for Malware
Classification | cs.CR cs.LG cs.NE | Deep neural networks, like many other machine learning models, have recently
been shown to lack robustness against adversarially crafted inputs. These
inputs are derived from regular inputs by minor yet carefully selected
perturbations that deceive machine learning models into desired
misclassifications. Existing work in this emerging field was largely specific
to the domain of image classification, since the high entropy of images can be
conveniently manipulated without changing the images' overall visual
appearance. Yet, it remains unclear how such attacks translate to more
security-sensitive applications such as malware detection - which may pose
significant challenges in sample generation and arguably grave consequences for
failure.
In this paper, we show how to construct highly-effective adversarial sample
crafting attacks for neural networks used as malware classifiers. The
application domain of malware classification introduces additional constraints
in the adversarial sample crafting problem when compared to the computer vision
domain: (i) continuous, differentiable input domains are replaced by discrete,
often binary inputs; and (ii) the loose condition of leaving visual appearance
unchanged is replaced by requiring equivalent functional behavior. We
demonstrate the feasibility of these attacks on many different instances of
malware classifiers that we trained using the DREBIN Android malware data set.
We furthermore evaluate to which extent potential defensive mechanisms against
adversarial crafting can be leveraged to the setting of malware classification.
While feature reduction did not prove to have a positive impact, distillation
and re-training on adversarially crafted samples show promising results.
| Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes,
Patrick McDaniel | null | 1606.04435 | null | null |
DeepMath - Deep Sequence Models for Premise Selection | cs.AI cs.LG cs.LO | We study the effectiveness of neural sequence models for premise selection in
automated theorem proving, one of the main bottlenecks in the formalization of
mathematics. We propose a two stage approach for this task that yields good
results for the premise selection task on the Mizar corpus while avoiding the
hand-engineered features of existing state-of-the-art models. To our knowledge,
this is the first time deep learning has been applied to theorem proving on a
large scale.
| Alex A. Alemi, Francois Chollet, Niklas Een, Geoffrey Irving,
Christian Szegedy and Josef Urban | null | 1606.04442 | null | null |
A scalable end-to-end Gaussian process adapter for irregularly sampled
time series classification | stat.ML cs.LG | We present a general framework for classification of sparse and
irregularly-sampled time series. The properties of such time series can result
in substantial uncertainty about the values of the underlying temporal
processes, while making the data difficult to deal with using standard
classification methods that assume fixed-dimensional feature spaces. To address
these challenges, we propose an uncertainty-aware classification framework
based on a special computational layer we refer to as the Gaussian process
adapter that can connect irregularly sampled time series data to any black-box
classifier learnable using gradient descent. We show how to scale up the
required computations based on combining the structured kernel interpolation
framework and the Lanczos approximation method, and how to discriminatively
train the Gaussian process adapter in combination with a number of classifiers
end-to-end using backpropagation.
| Steven Cheng-Xian Li, Benjamin Marlin | null | 1606.04443 | null | null |
Recurrent neural network training with preconditioned stochastic
gradient descent | stat.ML cs.LG | This paper studies the performance of a recently proposed preconditioned
stochastic gradient descent (PSGD) algorithm on recurrent neural network (RNN)
training. PSGD adaptively estimates a preconditioner to accelerate gradient
descent, and is designed to be as simple, general and easy to use as stochastic
gradient descent (SGD). RNNs, especially those requiring extremely long-term
memories, are difficult to train. We have tested PSGD on a set of synthetic
pathological RNN learning problems and the real world MNIST handwritten digit
recognition task. Experimental results suggest that PSGD is able to achieve
highly competitive performance without using tricks such as preprocessing,
pretraining or parameter tweaking.
| Xi-Lin Li | null | 1606.04449 | null | null |
Model-Free Episodic Control | stat.ML cs.LG q-bio.NC | State of the art deep reinforcement learning algorithms take many millions of
interactions to attain human-level performance. Humans, on the other hand, can
very quickly exploit highly rewarding nuances of an environment upon first
discovery. In the brain, such rapid learning is thought to depend on the
hippocampus and its capacity for episodic memory. Here we investigate whether a
simple model of hippocampal episodic control can learn to solve difficult
sequential decision-making tasks. We demonstrate that it not only attains a
highly rewarding strategy significantly faster than state-of-the-art deep
reinforcement learning algorithms, but also achieves a higher overall reward on
some of the more challenging domains.
| Charles Blundell and Benigno Uria and Alexander Pritzel and Yazhe Li
and Avraham Ruderman and Joel Z Leibo and Jack Rae and Daan Wierstra and
Demis Hassabis | null | 1606.04460 | null | null |
Learning to learn by gradient descent by gradient descent | cs.NE cs.LG | The move from hand-designed features to learned features in machine learning
has been wildly successful. In spite of this, optimization algorithms are still
designed by hand. In this paper we show how the design of an optimization
algorithm can be cast as a learning problem, allowing the algorithm to learn to
exploit structure in the problems of interest in an automatic way. Our learned
algorithms, implemented by LSTMs, outperform generic, hand-designed competitors
on the tasks for which they are trained, and also generalize well to new tasks
with similar structure. We demonstrate this on a number of tasks, including
simple convex problems, training neural networks, and styling images with
neural art.
| Marcin Andrychowicz and Misha Denil and Sergio Gomez and Matthew W.
Hoffman and David Pfau and Tom Schaul and Brendan Shillingford and Nando de
Freitas | null | 1606.04474 | null | null |
Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs | cs.DC cs.LG | We study the factors affecting training time in multi-device deep learning
systems. Given a specification of a convolutional neural network, our goal is
to minimize the time to train this model on a cluster of commodity CPUs and
GPUs. We first focus on the single-node setting and show that by using standard
batching and data-parallel techniques, throughput can be improved by at least
5.5x over state-of-the-art systems on CPUs. This ensures an end-to-end training
speed directly proportional to the throughput of a device regardless of its
underlying hardware, allowing each node in the cluster to be treated as a black
box. Our second contribution is a theoretical and empirical study of the
tradeoffs affecting end-to-end training time in a multiple-device setting. We
identify the degree of asynchronous parallelization as a key factor affecting
both hardware and statistical efficiency. We see that asynchrony can be viewed
as introducing a momentum term. Our results imply that tuning momentum is
critical in asynchronous parallel configurations, and suggest that published
results that have not been fully tuned might report suboptimal performance for
some configurations. For our third contribution, we use our novel understanding
of the interaction between system and optimization dynamics to provide an
efficient hyperparameter optimizer. Our optimizer involves a predictive model
for the total time to convergence and selects an allocation of resources to
minimize that time. We demonstrate that the most popular distributed deep
learning systems fall within our tradeoff space, but do not optimize within the
space. By doing this optimization, our prototype runs 1.9x to 12x faster than
the fastest state-of-the-art systems.
| Stefan Hadjis, Ce Zhang, Ioannis Mitliagkas, Dan Iter, Christopher
R\'e | null | 1606.04487 | null | null |
Max-Margin Feature Selection | cs.LG cs.CV | Many machine learning applications such as in vision, biology and social
networking deal with data in high dimensions. Feature selection is typically
employed to select a subset of features which improves generalization
accuracy as well as reduces the computational cost of learning the model. One
of the criteria used for feature selection is to jointly minimize the
redundancy and maximize the relevance of the selected features. In this
paper, we formulate the task of feature selection as a one-class SVM problem in
a space where features correspond to the data points and instances correspond
to the dimensions. The goal is to look for a representative subset of the
features (support vectors) which describes the boundary for the region where
the set of the features (data points) exists. This leads to a joint
optimization of relevance and redundancy in a principled max-margin framework.
Additionally, our formulation enables us to leverage existing techniques for
optimizing the SVM objective resulting in highly computationally efficient
solutions for the task of feature selection. Specifically, we employ the dual
coordinate descent algorithm (Hsieh et al., 2008), originally proposed for
SVMs, for our formulation. We use a sparse representation to deal with data in
very high dimensions. Experiments on seven publicly available benchmark
datasets from a variety of domains show that our approach results in orders of
magnitude faster solutions even while retaining the same level of accuracy
compared to state-of-the-art feature selection techniques.
| Yamuna Prasad, Dinesh Khandelwal, K. K. Biswas | null | 1606.04506 | null | null |
Sparsely Connected and Disjointly Trained Deep Neural Networks for Low
Resource Behavioral Annotation: Acoustic Classification in Couples' Therapy | cs.LG cs.NE | Observational studies are based on accurate assessment of human state. A
behavior recognition system that models interlocutors' state in real-time can
significantly aid the mental health domain. However, behavior recognition from
speech remains a challenging task since it is difficult to find generalizable
and representative features because of noisy and high-dimensional data,
especially when data is limited and annotated coarsely and subjectively. Deep
Neural Networks (DNN) have shown promise in a wide range of machine learning
tasks, but for Behavioral Signal Processing (BSP) tasks their application has
been constrained due to limited quantity of data. We propose a
Sparsely-Connected and Disjointly-Trained DNN (SD-DNN) framework to deal with
limited data. First, we break the acoustic feature set into subsets and train
multiple distinct classifiers. Then, the hidden layers of these classifiers
become parts of a deeper network that integrates all feature streams. The
overall system allows for full connectivity while limiting the number of
parameters trained at any time, and makes convergence possible even with
limited data. We present results on multiple behavior codes in the couples'
therapy domain and demonstrate the benefits in behavior classification
accuracy. We also show the viability of this system towards live behavior
annotations.
| Haoqi Li, Brian Baucom, Panayiotis Georgiou | null | 1606.04518 | null | null |
Training variance and performance evaluation of neural networks in
speech | cs.LG | In this work we study variance in the results of neural network training on a
wide variety of configurations in automatic speech recognition. Although this
variance itself is well known, this is, to the best of our knowledge, the first
paper that performs an extensive empirical study on its effects in speech
recognition. We view training as sampling from a distribution and show that
these distributions can have a substantial variance. These results show the
urgent need to rethink the way in which results in the literature are reported
and interpreted.
| Ewout van den Berg, Bhuvana Ramabhadran, Michael Picheny | null | 1606.04521 | null | null |
A New Approach to Dimensionality Reduction for Anomaly Detection in Data
Traffic | cs.LG cs.CR cs.NI | The monitoring and management of high-volume feature-rich traffic in large
networks offers significant challenges in storage, transmission and
computational costs. The predominant approach to reducing these costs is based
on performing a linear mapping of the data to a low-dimensional subspace such
that a certain large percentage of the variance in the data is preserved in the
low-dimensional representation. This variance-based subspace approach to
dimensionality reduction forces a fixed choice of the number of dimensions, is
not responsive to real-time shifts in observed traffic patterns, and is
vulnerable to normal traffic spoofing. Based on theoretical insights proved in
this paper, we propose a new distance-based approach to dimensionality
reduction motivated by the fact that the real-time structural differences
between the covariance matrices of the observed and the normal traffic are more
relevant to anomaly detection than the structure of the training data alone.
Our approach, called the distance-based subspace method, allows a different
number of reduced dimensions in different time windows and arrives at only the
number of dimensions necessary for effective anomaly detection. We present
centralized and distributed versions of our algorithm and, using simulation on
real traffic traces, demonstrate the qualitative and quantitative advantages of
the distance-based subspace approach.
| Tingshan Huang, Harish Sethu and Nagarajan Kandasamy | null | 1606.04552 | null | null |
A two-stage learning method for protein-protein interaction prediction | cs.LG cs.CE | In this paper, a new method for PPI (protein-protein interaction) prediction
is proposed. In PPI prediction, a reliable and sufficient number of training
samples is not available, but a large number of unlabeled samples is at hand.
In the proposed method, denoising autoencoders are employed for learning
robust features. The obtained robust features are then used to train a
classifier with better performance. The experimental results demonstrate the
capabilities of the proposed method.
Keywords: Protein-protein interaction; denoising autoencoder; robust features;
unlabelled data.
| Amir Ahooye Atashin, Parsa Bagherzadeh, Kamaledin Ghiasi-Shirazi | null | 1606.04561 | null | null |
Deep Reinforcement Learning With Macro-Actions | cs.LG cs.AI cs.NE | Deep reinforcement learning has been shown to be a powerful framework for
learning policies from complex high-dimensional sensory inputs to actions in
complex tasks, such as the Atari domain. In this paper, we explore output
representation modeling in the form of temporal abstraction to improve
convergence and reliability of deep reinforcement learning approaches. We
concentrate on macro-actions, and evaluate these on different Atari 2600 games,
where we show that they yield significant improvements in learning speed.
Additionally, we show that they can even achieve better scores than DQN. We
offer analysis and explanation for both convergence and final results,
revealing a problem deep RL approaches have with sparse reward signals.
| Ishan P. Durugkar, Clemens Rosenbaum, Stefan Dernbach, Sridhar
Mahadevan | null | 1606.04615 | null | null |
Masking Strategies for Image Manifolds | stat.ML cs.LG | We consider the problem of selecting an optimal mask for an image manifold,
i.e., choosing a subset of the pixels of the image that preserves the
manifold's geometric structure present in the original data. Such masking
implements a form of compressive sensing through emerging imaging sensor
platforms for which the power expense grows with the number of pixels acquired.
Our goal is for the manifold learned from masked images to resemble its full
image counterpart as closely as possible. More precisely, we show that one can
indeed accurately learn an image manifold without having to consider a large
majority of the image pixels. In doing so, we consider two masking methods that
preserve the local and global geometric structure of the manifold,
respectively. In each case, the process of finding the optimal masking pattern
can be cast as a binary integer program, which is computationally expensive but
can be approximated by a fast greedy algorithm. Numerical experiments show that
the relevant manifold structure is preserved through the data-dependent masking
process, even for modest mask sizes.
| Hamid Dadkhahi and Marco F. Duarte | null | 1606.04618 | null | null |
Finite-time Analysis for the Knowledge-Gradient Policy | cs.LG | We consider sequential decision problems in which we adaptively choose one of
finitely many alternatives and observe a stochastic reward. We offer a new
perspective of interpreting Bayesian ranking and selection problems as adaptive
stochastic multi-set maximization problems and derive the first finite-time
bound of the knowledge-gradient policy for adaptive submodular objective
functions. In addition, we introduce the concept of prior-optimality and
provide another insight into the performance of the knowledge gradient policy
based on the submodular assumption on the value of information. We demonstrate
submodularity for the two-alternative case and provide other conditions for
more general problems, bringing out the issue and importance of submodularity
in learning problems. Empirical experiments are conducted to further illustrate
the finite time behavior of the knowledge gradient policy.
| Yingfei Wang and Warren Powell | null | 1606.04624 | null | null |
Unsupervised Learning of Predictors from Unpaired Input-Output Samples | cs.LG | Unsupervised learning is the most challenging problem in machine learning and
especially in deep learning. Among many scenarios, we study an unsupervised
learning problem of high economic value --- learning to predict without costly
pairing of input data and corresponding labels. Part of the difficulty in this
problem is a lack of solid evaluation measures. In this paper, we take a
practical approach to grounding unsupervised learning by using the same success
criterion as for supervised learning in prediction tasks but we do not require
the presence of paired input-output training data. In particular, we propose an
objective function that aims to make the predicted outputs fit the structure
of the output well, while preserving the correlation between the input and
the predicted output. We experiment with a synthetic structural prediction
problem and show that even with simple linear classifiers, the objective
function is already highly non-convex. We further demonstrate the nature of
this non-convex optimization problem as well as potential solutions. In
particular, we show that with regularization via a generative model, learning
with the proposed unsupervised objective function converges to an optimal
solution.
| Jianshu Chen, Po-Sen Huang, Xiaodong He, Jianfeng Gao and Li Deng | null | 1606.04646 | null | null |
Progressive Neural Networks | cs.LG | Learning to solve complex sequences of tasks--while both leveraging transfer
and avoiding catastrophic forgetting--remains a key obstacle to achieving
human-level intelligence. The progressive networks approach represents a step
forward in this direction: they are immune to forgetting and can leverage prior
knowledge via lateral connections to previously learned features. We evaluate
this architecture extensively on a wide variety of reinforcement learning tasks
(Atari and 3D maze games), and show that it outperforms common baselines based
on pretraining and finetuning. Using a novel sensitivity measure, we
demonstrate that transfer occurs at both low-level sensory and high-level
control layers of the learned policy.
| Andrei A. Rusu, Neil C. Rabinowitz, Guillaume Desjardins, Hubert
Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, Raia Hadsell | null | 1606.04671 | null | null |
Strategic Attentive Writer for Learning Macro-Actions | cs.AI cs.LG | We present a novel deep recurrent neural network architecture that learns to
build implicit plans in an end-to-end manner by purely interacting with an
environment in reinforcement learning setting. The network builds an internal
plan, which is continuously updated upon observation of the next input from the
environment. It can also partition this internal representation into contiguous
sub- sequences by learning for how long the plan can be committed to - i.e.
followed without re-planing. Combining these properties, the proposed model,
dubbed STRategic Attentive Writer (STRAW) can learn high-level, temporally
abstracted macro- actions of varying lengths that are solely learnt from data
without any prior information. These macro-actions enable both structured
exploration and economic computation. We experimentally demonstrate that STRAW
delivers strong improvements on several ATARI games by employing temporally
extended planning strategies (e.g. Ms. Pacman and Frostbite). It is at the same
time a general algorithm that can be applied on any sequence data. To that end,
we also show that when trained on text prediction task, STRAW naturally
predicts frequent n-grams (instead of macro-actions), demonstrating the
generality of the approach.
| Alexander (Sasha) Vezhnevets, Volodymyr Mnih, John Agapiou, Simon
Osindero, Alex Graves, Oriol Vinyals, Koray Kavukcuoglu | null | 1606.04695 | null | null |
Bolt-on Differential Privacy for Scalable Stochastic Gradient
Descent-based Analytics | cs.LG cs.CR cs.DB stat.ML | While significant progress has been made separately on analytics systems for
scalable stochastic gradient descent (SGD) and private SGD, none of the major
scalable analytics frameworks have incorporated differentially private SGD.
There are two inter-related issues for this disconnect between research and
practice: (1) low model accuracy due to added noise to guarantee privacy, and
(2) high development and runtime overhead of the private algorithms. This paper
takes a first step to remedy this disconnect and proposes a private SGD
algorithm to address \emph{both} issues in an integrated manner. In contrast to
the white-box approach adopted by previous work, we revisit and use the
classical technique of {\em output perturbation} to devise a novel "bolt-on"
approach to private SGD. While our approach trivially addresses (2), it makes
(1) even more challenging. We address this challenge by providing a novel
analysis of the $L_2$-sensitivity of SGD, which allows, under the same privacy
guarantees, better convergence of SGD when only a constant number of passes can
be made over the data. We integrate our algorithm, as well as other
state-of-the-art differentially private SGD, into Bismarck, a popular scalable
SGD-based analytics system on top of an RDBMS. Extensive experiments show that
our algorithm can be easily integrated, incurs virtually no overhead, scales
well, and most importantly, yields substantially better (up to 4X) test
accuracy than the state-of-the-art algorithms on many real datasets.
| Xi Wu, Fengan Li, Arun Kumar, Kamalika Chaudhuri, Somesh Jha, Jeffrey
F. Naughton | null | 1606.04722 | null | null |
Multi-Modal Hybrid Deep Neural Network for Speech Enhancement | cs.LG cs.NE cs.SD | Deep Neural Networks (DNN) have been successful in enhancing noisy speech
signals. Enhancement is achieved by learning a nonlinear mapping function from
the features of the corrupted speech signal to that of the reference clean
speech signal. The quality of predicted features can be improved by providing
additional side channel information that is robust to noise, such as visual
cues. In this paper we propose a novel deep learning model inspired by insights
from human audio visual perception. In the proposed unified hybrid
architecture, features from a Convolutional Neural Network (CNN) that processes
the visual cues and features from a fully connected DNN that processes the
audio signal are integrated using a Bidirectional Long Short-Term Memory
(BiLSTM) network. The parameters of the hybrid model are jointly learned using
backpropagation. We compare the quality of enhanced speech from the hybrid
models with those from traditional DNN and BiLSTM models.
| Zhenzhou Wu, Sunil Sivadas, Yong Kiam Tan, Ma Bin, Rick Siow Mong Goh | null | 1606.04750 | null | null |
Safe Exploration in Finite Markov Decision Processes with Gaussian
Processes | cs.LG cs.AI cs.RO stat.ML | In classical reinforcement learning, when exploring an environment, agents
accept arbitrary short term loss for long term gain. This is infeasible for
safety critical applications, such as robotics, where even a single unsafe
action may cause system failure. In this paper, we address the problem of
safely exploring finite Markov decision processes (MDP). We define safety in
terms of an a priori unknown safety constraint that depends on states and
actions. We aim to explore the MDP under this constraint, assuming that the
unknown function satisfies regularity conditions expressed via a Gaussian
process prior. We develop a novel algorithm for this task and prove that it is
able to completely explore the safely reachable part of the MDP without
violating the safety constraint. To achieve this, it cautiously explores safe
states and actions in order to gain statistical confidence about the safety of
unvisited state-action pairs from noisy observations collected while navigating
the environment. Moreover, the algorithm explicitly considers reachability when
exploring the MDP, ensuring that it does not get stuck in any state with no
safe way out. We demonstrate our method on digital terrain models for the task
of exploring an unknown map with a rover.
| Matteo Turchetta, Felix Berkenkamp, Andreas Krause | null | 1606.04753 | null | null |
The Learning and Prediction of Application-level Traffic Data in
Cellular Networks | cs.NI cs.LG | Traffic learning and prediction is at the heart of the evaluation of the
performance of telecommunications networks and attracts a lot of attention in
wired broadband networks. Now, benefiting from the big data in cellular
networks, it becomes possible to make the analyses one step further into the
application level. In this paper, we first collect a significant amount of
application-level traffic data from cellular network operators. Afterwards,
with the aid of this traffic "big data", we make a comprehensive study of the
modeling and prediction framework for cellular network traffic. Our results
solidly demonstrate that some traffic statistical modeling characteristics hold
universally, including an alpha-stable modeled property in the temporal domain
and sparsity in the spatial domain. Meanwhile, the results
also demonstrate the distinctions originating from the uniqueness of different
application service types. Furthermore, we propose a new traffic prediction
framework to encompass and explore these aforementioned characteristics and
then develop a dictionary learning-based alternating direction method to solve
it. Besides, we validate the prediction accuracy improvement and the robustness
of the proposed framework through extensive simulation results.
| Rongpeng Li, Zhifeng Zhao, Jianchao Zheng, Chengli Mei, Yueming Cai,
and Honggang Zhang | null | 1606.04778 | null | null |
A Powerful Generative Model Using Random Weights for the Deep Image
Representation | cs.CV cs.LG cs.NE | To what extent is the success of deep visualization due to the training?
Could we do deep visualization using untrained, random weight networks? To
address this issue, we explore new and powerful generative models for three
popular deep visualization tasks using untrained, random weight convolutional
neural networks. First we invert representations in feature spaces and
reconstruct images from white noise inputs. The reconstruction quality is
statistically higher than that of the same method applied on well trained
networks with the same architecture. Next we synthesize textures using scaled
correlations of representations in multiple layers, and our results are almost
indistinguishable from the original natural texture and the synthesized
textures based on the trained network. Third, by recasting the content of an
image in the style of various artworks, we create artistic images with high
perceptual quality, highly competitive to the prior work of Gatys et al. on
pretrained networks. To our knowledge this is the first demonstration of image
representations using untrained deep neural networks. Our work provides a new
and fascinating tool to study the representation of deep network architecture
and sheds light on new understandings on deep visualization.
| Kun He and Yan Wang and John Hopcroft | null | 1606.04801 | null | null |
ASAGA: Asynchronous Parallel SAGA | math.OC cs.LG stat.ML | We describe ASAGA, an asynchronous parallel version of the incremental
gradient algorithm SAGA that enjoys fast linear convergence rates. Through a
novel perspective, we revisit and clarify a subtle but important technical
issue present in a large fraction of the recent convergence rate proofs for
asynchronous parallel optimization algorithms, and propose a simplification of
the recently introduced "perturbed iterate" framework that resolves it. We
thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core
systems even without sparsity assumptions. We present results of an
implementation on a 40-core architecture illustrating the practical speedup as
well as the hardware overhead.
| R\'emi Leblond, Fabian Pedregosa and Simon Lacoste-Julien | null | 1606.04809 | null | null |
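[Editor's illustration] A minimal sketch of the sequential SAGA update that ASAGA parallelizes asynchronously; this shows only the base incremental-gradient algorithm on an invented least-squares problem, not the lock-free multi-core implementation analyzed in the paper. Step size, data sizes and epoch count are arbitrary choices for the example.

```python
# Sequential SAGA on a synthetic least-squares problem (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def saga(X, y, step=0.01, epochs=50):
    n, d = X.shape
    w = np.zeros(d)
    grad_table = np.zeros((n, d))          # last gradient seen for each sample
    grad_avg = grad_table.mean(axis=0)     # running average of the table
    for _ in range(epochs * n):
        i = rng.integers(n)
        g_new = X[i] * (X[i] @ w - y[i])   # gradient of 0.5*(x_i^T w - y_i)^2
        # SAGA direction: new gradient minus old table entry plus table average
        w -= step * (g_new - grad_table[i] + grad_avg)
        # update the table and its average incrementally
        grad_avg += (g_new - grad_table[i]) / n
        grad_table[i] = g_new
    return w

w_hat = saga(X, y)
print("parameter error:", np.linalg.norm(w_hat - w_true))
```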
Optimization Methods for Large-Scale Machine Learning | stat.ML cs.LG math.OC | This paper provides a review and commentary on the past, present, and future
of numerical optimization algorithms in the context of machine learning
applications. Through case studies on text classification and the training of
deep neural networks, we discuss how optimization problems arise in machine
learning and what makes them challenging. A major theme of our study is that
large-scale machine learning represents a distinctive setting in which the
stochastic gradient (SG) method has traditionally played a central role while
conventional gradient-based nonlinear optimization techniques typically falter.
Based on this viewpoint, we present a comprehensive theory of a
straightforward, yet versatile SG algorithm, discuss its practical behavior,
and highlight opportunities for designing algorithms with improved performance.
This leads to a discussion about the next generation of optimization methods
for large-scale machine learning, including an investigation of two main
streams of research on techniques that diminish noise in the stochastic
directions and methods that make use of second-order derivative approximations.
| L\'eon Bottou, Frank E. Curtis, Jorge Nocedal | null | 1606.04838 | null | null |
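[Editor's illustration] A minimal sketch of the stochastic gradient (SG) method the paper reviews, applied to binary logistic regression; the synthetic data, minibatch size and diminishing step-size schedule are arbitrary illustrative choices, not recommendations from the paper.

```python
# Minibatch SGD for logistic regression on synthetic data (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def sgd_logistic(X, y, batch=10, step0=0.1, epochs=20):
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.integers(0, n, size=batch)        # sample a minibatch
            p = 1.0 / (1.0 + np.exp(-X[idx] @ w))       # predicted probabilities
            grad = X[idx].T @ (p - y[idx]) / batch      # minibatch gradient
            t += 1
            w -= (step0 / (1.0 + 0.01 * t)) * grad      # diminishing step size
    return w

w_hat = sgd_logistic(X, y)
print("cosine similarity to true weights:",
      w_hat @ w_true / (np.linalg.norm(w_hat) * np.linalg.norm(w_true)))
```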
Deep Learning for Music | cs.LG cs.SD | Our goal is to be able to build a generative model from a deep neural network
architecture to try to create music that has both harmony and melody and is
passable as music composed by humans. Previous work in music generation has
mainly been focused on creating a single melody. More recent work on polyphonic
music modeling, centered around time series probability density estimation, has
met with partial success. In particular, there has been a lot of work based
on Recurrent Neural Networks combined with Restricted Boltzmann Machines
(RNN-RBM) and other similar recurrent energy based models. Our approach,
however, is to perform end-to-end learning and generation with deep neural nets
alone.
| Allen Huang, Raymond Wu | null | 1606.04930 | null | null |
Improving Variational Inference with Inverse Autoregressive Flow | cs.LG stat.ML | The framework of normalizing flows provides a general strategy for flexible
variational inference of posteriors over latent variables. We propose a new
type of normalizing flow, inverse autoregressive flow (IAF), that, in contrast
to earlier published flows, scales well to high-dimensional latent spaces. The
proposed flow consists of a chain of invertible transformations, where each
transformation is based on an autoregressive neural network. In experiments, we
show that IAF significantly improves upon diagonal Gaussian approximate
posteriors. In addition, we demonstrate that a novel type of variational
autoencoder, coupled with IAF, is competitive with neural autoregressive models
in terms of attained log-likelihood on natural images, while allowing
significantly faster synthesis.
| Diederik P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya
Sutskever and Max Welling | null | 1606.04934 | null | null |
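[Editor's illustration] A minimal numpy sketch of one inverse autoregressive flow (IAF) step as described above: an autoregressive network produces shift and gate parameters from the current latent, and the triangular Jacobian makes the log-determinant a sum of log-gates. Here the autoregressive network is only a strictly lower-triangular linear map and all sizes are illustrative; the paper uses deeper MADE-style networks and a conditioning context, which are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 5
# strictly lower-triangular weights enforce the autoregressive structure
W_m = np.tril(rng.normal(size=(D, D)), k=-1)
W_s = np.tril(rng.normal(size=(D, D)), k=-1)
b_m, b_s = rng.normal(size=D), rng.normal(size=D)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def iaf_step(z):
    """Apply one IAF transform; return the new latent and its log-det term."""
    m = W_m @ z + b_m
    s = W_s @ z + b_s
    gate = sigmoid(s)                    # "forget-gate"-style parameterization
    z_new = gate * z + (1.0 - gate) * m  # invertible elementwise update
    log_det = np.sum(np.log(gate))       # triangular Jacobian: product of gates
    return z_new, log_det

z0 = rng.normal(size=D)                  # sample from the base diagonal Gaussian
z1, ld = iaf_step(z0)
print(z1, ld)
```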
Combining multiscale features for classification of hyperspectral
images: a sequence based kernel approach | cs.CV cs.LG stat.ML | Nowadays, hyperspectral image classification widely copes with spatial
information to improve accuracy. One of the most popular ways to integrate such
information is to extract hierarchical features from a multiscale segmentation.
In the classification context, the extracted features are commonly concatenated
into a long vector (also called stacked vector), on which is applied a
conventional vector-based machine learning technique (e.g. SVM with Gaussian
kernel). In this paper, we rather propose to use a sequence structured kernel:
the spectrum kernel. We show that the conventional stacked vector-based kernel
is actually a special case of this kernel. Experiments conducted on various
publicly available hyperspectral datasets illustrate the improvement of the
proposed kernel w.r.t. conventional ones using the same hierarchical spatial
features.
| Yanwei Cui, Laetitia Chapel, S\'ebastien Lef\`evre | null | 1606.04985 | null | null |
Logarithmic Time One-Against-Some | stat.ML cs.LG | We create a new online reduction of multiclass classification to binary
classification for which training and prediction time scale logarithmically
with the number of classes. Compared to previous approaches, we obtain
substantially better statistical performance for two reasons: First, we prove a
tighter and more complete boosting theorem, and second we translate the results
more directly into an algorithm. We show that several simple techniques give
rise to an algorithm that can compete with one-against-all in both space and
predictive power while offering exponential improvements in speed when the
number of classes is large.
| Hal Daume III, Nikos Karampatziakis, John Langford, Paul Mineiro | null | 1606.04988 | null | null |
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale
Learning | cs.LG math.OC stat.ML | We consider learning problems over training sets in which both, the number of
training examples and the dimension of the feature vectors, are large. To solve
these problems we propose the random parallel stochastic algorithm (RAPSA). We
call the algorithm random parallel because it utilizes multiple parallel
processors to operate on a randomly chosen subset of blocks of the feature
vector. We call the algorithm stochastic because processors choose training
subsets uniformly at random. Algorithms that are parallel in either of these
dimensions exist, but RAPSA is the first attempt at a methodology that is
parallel in both the selection of blocks and the selection of elements of the
training set. In RAPSA, processors utilize the randomly chosen functions to
compute the stochastic gradient component associated with a randomly chosen
block. The technical contribution of this paper is to show that this minimally
coordinated algorithm converges to the optimal classifier when the training
objective is convex. Moreover, we present an accelerated version of RAPSA
(ARAPSA) that incorporates the objective function curvature information by
premultiplying the descent direction by a Hessian approximation matrix. We
further extend the results for asynchronous settings and show that if the
processors perform their updates without any coordination the algorithms are
still convergent to the optimal argument. RAPSA and its extensions are then
numerically evaluated on a linear estimation problem and a binary image
classification task using the MNIST handwritten digit dataset.
| Aryan Mokhtari and Alec Koppel and Alejandro Ribeiro | null | 1606.04991 | null | null |
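[Editor's illustration] A minimal single-processor sketch of the "doubly stochastic" idea behind RAPSA: each step picks a random block of coordinates and a random subset of training examples and updates only that block. The least-squares objective, block size, batch size and step size are invented for the example, and the parallel/asynchronous execution studied in the paper is not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, block, batch = 500, 40, 10, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
for t in range(20000):
    cols = rng.choice(d, size=block, replace=False)   # random coordinate block
    rows = rng.choice(n, size=batch, replace=False)   # random training subset
    resid = X[rows] @ w - y[rows]
    grad_block = X[np.ix_(rows, cols)].T @ resid / batch
    w[cols] -= 0.01 * grad_block                      # update only the chosen block

print("parameter error:", np.linalg.norm(w - w_true))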
Automatic Pronunciation Generation by Utilizing a Semi-supervised Deep
Neural Networks | cs.CL cs.LG cs.SD | Phonemic or phonetic sub-word units are the most commonly used atomic
elements used to represent speech signals in modern ASRs. However, they are not the
optimal choice for several reasons: the large amount of effort required
to handcraft a pronunciation dictionary, pronunciation variations, human
mistakes, and under-resourced dialects and languages. Here, we propose a
data-driven pronunciation estimation and acoustic modeling method which only
takes the orthographic transcription to jointly estimate a set of sub-word
units and a reliable dictionary. Experimental results show that the proposed
method, which is based on semi-supervised training of a deep neural network,
largely outperforms phoneme-based continuous speech recognition on the TIMIT
dataset.
| Naoya Takahashi, Tofigh Naghibi, Beat Pfister | null | 1606.05007 | null | null |
Improving Power Generation Efficiency using Deep Neural Networks | stat.ML cs.LG cs.NE | Recently there has been significant research on power generation,
distribution and transmission efficiency especially in the case of renewable
resources. The main objective is the reduction of energy losses, and this requires
improvements in data acquisition and analysis. In this paper we address these
concerns by using consumers' electrical smart meter readings to estimate
network loading; this information can then be used for better capacity
planning. We compare Deep Neural Network (DNN) methods with traditional methods
for load forecasting. Our results indicate that DNN methods outperform most
traditional methods. This comes at the cost of additional computational
complexity but this can be addressed with the use of cloud resources. We also
illustrate how these results can be used to better support dynamic pricing.
| Stefan Hosein and Patrick Hosein | null | 1606.05018 | null | null |
Learning Optimal Interventions | stat.ML cs.LG | Our goal is to identify beneficial interventions from observational data. We
consider interventions that are narrowly focused (impacting few covariates) and
may be tailored to each individual or globally enacted over a population. For
applications where harmful intervention is drastically worse than proposing no
change, we propose a conservative definition of the optimal intervention.
Assuming the underlying relationship remains invariant under intervention, we
develop efficient algorithms to identify the optimal intervention policy from
limited data and provide theoretical guarantees for our approach in a Gaussian
Process setting. Although our methods assume covariates can be precisely
adjusted, they remain capable of improving outcomes in misspecified settings
where interventions incur unintentional downstream effects. Empirically, our
approach identifies good interventions in two practical applications: gene
perturbation and writing improvement.
| Jonas Mueller, David N. Reshef, George Du, Tommi Jaakkola | null | 1606.05027 | null | null |
Pruning Random Forests for Prediction on a Budget | stat.ML cs.LG | We propose to prune a random forest (RF) for resource-constrained prediction.
We first construct a RF and then prune it to optimize expected feature cost &
accuracy. We pose pruning RFs as a novel 0-1 integer program with linear
constraints that encourages feature re-use. We establish total unimodularity of
the constraint set to prove that the corresponding LP relaxation solves the
original integer program. We then exploit connections to combinatorial
optimization and develop an efficient primal-dual algorithm, scalable to large
datasets. In contrast to our bottom-up approach, which benefits from good RF
initialization, conventional methods are top-down, acquiring features based on
their utility value, and are generally intractable, requiring heuristics.
Empirically, our pruning algorithm outperforms existing state-of-the-art
resource-constrained algorithms.
| Feng Nan, Joseph Wang, Venkatesh Saligrama | null | 1606.05060 | null | null |
How many faces can be recognized? Performance extrapolation for
multi-class classification | stat.ML cs.CV cs.IT cs.LG math.IT | The difficulty of multi-class classification generally increases with the
number of classes. Using data from a subset of the classes, can we predict how
well a classifier will scale with an increased number of classes? Under the
assumption that the classes are sampled exchangeably, and under the assumption
that the classifier is generative (e.g. QDA or Naive Bayes), we show that the
expected accuracy when the classifier is trained on $k$ classes is the $(k-1)$st
moment of a \emph{conditional accuracy distribution}, which can be estimated
from data. This provides the theoretical foundation for performance
extrapolation based on pseudolikelihood, unbiased estimation, and
high-dimensional asymptotics. We investigate the robustness of our methods to
non-generative classifiers in simulations and one optical character recognition
example.
| Charles Y. Zheng, Rakesh Achanta, and Yuval Benjamini | null | 1606.05228 | null | null |
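[Editor's illustration] A hedged restatement of the moment identity claimed in the abstract above, in notation of my own choosing (the symbols $A$, $F$ and $\mathrm{Acc}_k$ are not taken from the paper).

```latex
% If A in [0,1] denotes the conditional accuracy of the generative classifier on a
% randomly drawn class, with distribution function F, then under the exchangeable
% class-sampling assumption the abstract's claim reads
\mathbb{E}\left[\mathrm{Acc}_k\right]
  \;=\; \mathbb{E}\!\left[A^{\,k-1}\right]
  \;=\; \int_0^1 a^{\,k-1}\, \mathrm{d}F(a),
% i.e. the (k-1)st moment of the conditional accuracy distribution, which can be
% estimated from a subset of the classes and then extrapolated to larger k.
```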
Learning feed-forward one-shot learners | cs.CV cs.LG | One-shot learning is usually tackled by using generative models or
discriminative embeddings. Discriminative methods based on deep learning, which
are very effective in other learning scenarios, are ill-suited for one-shot
learning as they need large amounts of training data. In this paper, we propose
a method to learn the parameters of a deep model in one shot. We construct the
learner as a second deep network, called a learnet, which predicts the
parameters of a pupil network from a single exemplar. In this manner we obtain
an efficient feed-forward one-shot learner, trained end-to-end by minimizing a
one-shot classification objective in a learning to learn formulation. In order
to make the construction feasible, we propose a number of factorizations of the
parameters of the pupil network. We demonstrate encouraging results by learning
characters from single exemplars in Omniglot, and by tracking visual objects
from a single initial exemplar in the Visual Object Tracking benchmark.
| Luca Bertinetto, Jo\~ao F. Henriques, Jack Valmadre, Philip H. S.
Torr, Andrea Vedaldi | null | 1606.05233 | null | null |
Generalized Direct Change Estimation in Ising Model Structure | math.ST cs.LG stat.TH | We consider the problem of estimating change in the dependency structure
between two $p$-dimensional Ising models, based on respectively $n_1$ and $n_2$
samples drawn from the models. The change is assumed to be structured, e.g.,
sparse, block sparse, node-perturbed sparse, etc., such that it can be
characterized by a suitable (atomic) norm. We present and analyze a
norm-regularized estimator for directly estimating the change in structure,
without having to estimate the structures of the individual Ising models. The
estimator can work with any norm, and can be generalized to other graphical
models under mild assumptions. We show that only one set of samples, say $n_2$,
needs to satisfy the sample complexity requirement for the estimator to work,
and the estimation error decreases as $\frac{c}{\sqrt{\min(n_1,n_2)}}$, where
$c$ depends on the Gaussian width of the unit norm ball. For example, for
$\ell_1$ norm applied to $s$-sparse change, the change can be accurately
estimated with $\min(n_1,n_2)=O(s \log p)$ which is sharper than an existing
result $n_1= O(s^2 \log p)$ and $n_2 = O(n_1^2)$. Experimental results
illustrating the effectiveness of the proposed estimator are presented.
| Farideh Fazayeli and Arindam Banerjee | null | 1606.05302 | null | null |
Unsupervised Risk Estimation Using Only Conditional Independence
Structure | cs.LG cs.AI stat.ML | We show how to estimate a model's test error from unlabeled data, on
distributions very different from the training distribution, while assuming
only that certain conditional independencies are preserved between train and
test. We do not need to assume that the optimal predictor is the same between
train and test, or that the true distribution lies in any parametric family. We
can also efficiently differentiate the error estimate to perform unsupervised
discriminative learning. Our technical tool is the method of moments, which
allows us to exploit conditional independencies in the absence of a
fully-specified model. Our framework encompasses a large family of losses
including the log and exponential loss, and extends to structured output
settings such as hidden Markov models.
| Jacob Steinhardt and Percy Liang | null | 1606.05313 | null | null |
Learning Infinite-Layer Networks: Without the Kernel Trick | cs.LG | Infinite-Layer Networks (ILN) have recently been proposed as an architecture
that mimics neural networks while enjoying some of the advantages of kernel
methods. ILN are networks that integrate over infinitely many nodes within a
single hidden layer. It has been demonstrated by several authors that the
problem of learning ILN can be reduced to the kernel trick, implying that
whenever a certain integral can be computed analytically they are efficiently
learnable.
In this work we give an online algorithm for ILN, which avoids the kernel
trick assumption. More generally and of independent interest, we show that
kernel methods in general can be exploited even when the kernel cannot be
efficiently computed but can only be estimated via sampling.
We provide a regret analysis for our algorithm, showing that it matches the
sample complexity of methods which have access to kernel values. Thus, our
method is the first to demonstrate that the kernel trick is not necessary as
such, and random features suffice to obtain comparable performance.
| Roi Livni and Daniel Carmon and Amir Globerson | null | 1606.05316 | null | null |
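[Editor's illustration] A minimal sketch of the general point made in the abstract: exact kernel values can be replaced by sampled estimates. Here random Fourier features approximate an RBF kernel; this illustrates "kernels via sampling" only and is not the paper's online algorithm for infinite-layer networks. Dimensions and the kernel bandwidth are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def random_features(x, W, b):
    """Random Fourier features: z(x)^T z(y) is an unbiased estimate of the RBF kernel."""
    m = W.shape[0]
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

d, m, gamma = 8, 2000, 0.5
W = rng.normal(scale=np.sqrt(2 * gamma), size=(m, d))  # frequencies from the kernel's spectrum
b = rng.uniform(0, 2 * np.pi, size=m)

x, y = rng.normal(size=d), rng.normal(size=d)
print("exact kernel:    ", rbf_kernel(x, y, gamma))
print("sampled estimate:", random_features(x, W, b) @ random_features(y, W, b))
```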
Increasing the Interpretability of Recurrent Neural Networks Using
Hidden Markov Models | stat.ML cs.CL cs.LG | As deep neural networks continue to revolutionize various application
domains, there is increasing interest in making these powerful models more
understandable and interpretable, and narrowing down the causes of good and bad
predictions. We focus on recurrent neural networks (RNNs), state-of-the-art
models in speech recognition and translation. Our approach to increasing
interpretability is by combining an RNN with a hidden Markov model (HMM), a
simpler and more transparent model. We explore various combinations of RNNs and
HMMs: an HMM trained on LSTM states; a hybrid model where an HMM is trained
first, then a small LSTM is given HMM state distributions and trained to fill
in gaps in the HMM's performance; and a jointly trained hybrid model. We find
that the LSTM and HMM learn complementary information about the features in the
text.
| Viktoriya Krakovna, Finale Doshi-Velez | null | 1606.05320 | null | null |
ACDC: $\alpha$-Carving Decision Chain for Risk Stratification | stat.ML cs.LG | In many healthcare settings, intuitive decision rules for risk stratification
can help effective hospital resource allocation. This paper introduces a novel
variant of decision tree algorithms that produces a chain of decisions, not a
general tree. Our algorithm, $\alpha$-Carving Decision Chain (ACDC),
sequentially carves out "pure" subsets of the majority class examples. The
resulting chain of decision rules yields a pure subset of the minority class
examples. Our approach is particularly effective in exploring large and
class-imbalanced health datasets. Moreover, ACDC provides an interactive
interpretation in conjunction with visual performance metrics such as the Receiver
Operating Characteristic curve and the Lift chart.
| Yubin Park and Joyce Ho and Joydeep Ghosh | null | 1606.05325 | null | null |
Conditional Image Generation with PixelCNN Decoders | cs.CV cs.LG | This work explores conditional image generation with a new image density
model based on the PixelCNN architecture. The model can be conditioned on any
vector, including descriptive labels or tags, or latent embeddings created by
other networks. When conditioned on class labels from the ImageNet database,
the model is able to generate diverse, realistic scenes representing distinct
animals, objects, landscapes and structures. When conditioned on an embedding
produced by a convolutional network given a single image of an unseen face, it
generates a variety of new portraits of the same person with different facial
expressions, poses and lighting conditions. We also show that conditional
PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally,
the gated convolutional layers in the proposed model improve the log-likelihood
of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet,
with greatly reduced computational cost.
| Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt,
Alex Graves, Koray Kavukcuoglu | null | 1606.05328 | null | null |
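[Editor's illustration] A minimal PyTorch sketch of a gated, conditional convolutional layer in the spirit of the abstract: feature and gate streams are combined as tanh(conv_f(x) + V_f h) * sigmoid(conv_g(x) + V_g h), where h is a conditioning vector such as a class embedding. The single-stack causal mask and all layer sizes are simplifications of my own; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConditionalConv(nn.Module):
    def __init__(self, channels, cond_dim, kernel_size=3):
        super().__init__()
        self.pad = kernel_size // 2
        self.conv_f = nn.Conv2d(channels, channels, kernel_size, padding=self.pad)
        self.conv_g = nn.Conv2d(channels, channels, kernel_size, padding=self.pad)
        self.cond_f = nn.Linear(cond_dim, channels)
        self.cond_g = nn.Linear(cond_dim, channels)
        # simple causal mask: keep only weights at or before the centre pixel in raster order
        mask = torch.ones(kernel_size, kernel_size)
        mask[kernel_size // 2, kernel_size // 2 + 1:] = 0   # right of centre, same row
        mask[kernel_size // 2 + 1:, :] = 0                  # rows below centre
        self.register_buffer("mask", mask)

    def forward(self, x, h):
        f = F.conv2d(x, self.conv_f.weight * self.mask, self.conv_f.bias, padding=self.pad)
        g = F.conv2d(x, self.conv_g.weight * self.mask, self.conv_g.bias, padding=self.pad)
        hf = self.cond_f(h)[:, :, None, None]               # broadcast conditioning over space
        hg = self.cond_g(h)[:, :, None, None]
        return torch.tanh(f + hf) * torch.sigmoid(g + hg)   # gated activation unit

layer = GatedConditionalConv(channels=16, cond_dim=10)
x = torch.randn(2, 16, 8, 8)        # input feature maps
h = torch.randn(2, 10)              # conditioning vector, e.g. a class embedding
print(layer(x, h).shape)            # torch.Size([2, 16, 8, 8])
```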
On the Expressive Power of Deep Neural Networks | stat.ML cs.AI cs.LG | We propose a new approach to the problem of neural network expressivity,
which seeks to characterize how structural properties of a neural network
family affect the functions it is able to compute. Our approach is based on an
interrelated set of measures of expressivity, unified by the novel notion of
trajectory length, which measures how the output of a network changes as the
input sweeps along a one-dimensional path. Our findings can be summarized as
follows:
(1) The complexity of the computed function grows exponentially with depth.
(2) All weights are not equal: trained networks are more sensitive to their
lower (initial) layer weights.
(3) Regularizing on trajectory length (trajectory regularization) is a
simpler alternative to batch normalization, with the same performance.
| Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha
Sohl-Dickstein | null | 1606.05336 | null | null |
Exponential expressivity in deep neural networks through transient chaos | stat.ML cond-mat.dis-nn cs.LG | We combine Riemannian geometry with the mean field theory of high dimensional
chaos to study the nature of signal propagation in generic, deep neural
networks with random weights. Our results reveal an order-to-chaos expressivity
phase transition, with networks in the chaotic phase computing nonlinear
functions whose global curvature grows exponentially with depth but not width.
We prove this generic class of deep random functions cannot be efficiently
computed by any shallow network, going beyond prior work restricted to the
analysis of single functions. Moreover, we formalize and quantitatively
demonstrate the long conjectured idea that deep networks can disentangle highly
curved manifolds in input space into flat manifolds in hidden space. Our
theoretical analysis of the expressive power of deep networks broadly applies
to arbitrary nonlinearities, and provides a quantitative underpinning for
previously abstract notions about the geometry of deep functions.
| Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein,
Surya Ganguli | null | 1606.05340 | null | null |
Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer
Prediction | cs.HC cs.CR cs.DS cs.GT cs.LG | We consider a crowdsourcing model in which $n$ workers are asked to rate the
quality of $n$ items previously generated by other workers. An unknown set of
$\alpha n$ workers generate reliable ratings, while the remaining workers may
behave arbitrarily and possibly adversarially. The manager of the experiment
can also manually evaluate the quality of a small number of items, and wishes
to curate together almost all of the high-quality items with at most an
$\epsilon$ fraction of low-quality items. Perhaps surprisingly, we show that
this is possible with an amount of work required of the manager, and each
worker, that does not scale with $n$: the dataset can be curated with
$\tilde{O}\Big(\frac{1}{\beta\alpha^3\epsilon^4}\Big)$ ratings per worker, and
$\tilde{O}\Big(\frac{1}{\beta\epsilon^2}\Big)$ ratings by the manager, where
$\beta$ is the fraction of high-quality items. Our results extend to the more
general setting of peer prediction, including peer grading in online
classrooms.
| Jacob Steinhardt and Gregory Valiant and Moses Charikar | null | 1606.05374 | null | null |
Sampling Method for Fast Training of Support Vector Data Description | cs.LG stat.AP stat.ML | Support Vector Data Description (SVDD) is a popular outlier detection
technique which constructs a flexible description of the input data. SVDD
computation time is high for large training datasets, which limits its use in
big-data process-monitoring applications. We propose a new iterative
sampling-based method for SVDD training. The method incrementally learns the
training data description at each iteration by computing SVDD on an independent
random sample selected with replacement from the training data set. The
experimental results indicate that the proposed method is extremely fast and
provides a good data description.
| Arin Chaudhuri, Deovrat Kakde, Maria Jahja, Wei Xiao, Hansi Jiang,
Seunghyun Kong, Sergiy Peredriy | 10.1109/RAM.2018.8463127 | 1606.05382 | null | null |
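[Editor's illustration] A hedged sketch of iterative, sampling-based data-description training in the spirit of the abstract: each round fits a description on a small random sample drawn with replacement, plus the support vectors retained so far, instead of the full data set. sklearn's OneClassSVM is used as a stand-in for SVDD, and the merging rule, sample size and stopping criterion are my assumptions, not the paper's method.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
X = rng.normal(size=(20000, 3))                           # large "training" data set

def sampled_svdd(X, sample_size=500, rounds=10, nu=0.05):
    retained = np.empty((0, X.shape[1]))
    model = None
    for _ in range(rounds):
        idx = rng.integers(0, len(X), size=sample_size)   # sample with replacement
        batch = np.vstack([retained, X[idx]])
        model = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(batch)
        retained = model.support_vectors_                 # carry the description forward
    return model

model = sampled_svdd(X)
scores = model.decision_function(X[:1000])
print("fraction flagged as outliers:", np.mean(scores < 0))
```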
Model-Agnostic Interpretability of Machine Learning | stat.ML cs.LG | Understanding why machine learning models behave the way they do empowers
both system designers and end-users in many ways: in model selection, feature
engineering, in order to trust and act upon the predictions, and in more
intuitive user interfaces. Thus, interpretability has become a vital concern in
machine learning, and work in the area of interpretable models has found
renewed interest. In some applications, such models are as accurate as
non-interpretable ones, and thus are preferred for their transparency. Even
when they are not accurate, they may still be preferred when interpretability
is of paramount importance. However, restricting machine learning to
interpretable models is often a severe limitation. In this paper we argue for
explaining machine learning predictions using model-agnostic approaches. By
treating the machine learning models as black-box functions, these approaches
provide crucial flexibility in the choice of models, explanations, and
representations, improving debugging, comparison, and interfaces for a variety
of users and models. We also outline the main challenges for such methods, and
review a recently-introduced model-agnostic explanation approach (LIME) that
addresses these challenges.
| Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | null | 1606.05386 | null | null |
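[Editor's illustration] A minimal sketch of the model-agnostic, LIME-style explanation idea the paper reviews: perturb the instance, query the black-box model, weight the perturbations by proximity, and fit a sparse linear surrogate whose coefficients serve as the explanation. The stand-in black-box model, kernel width and sparsity level are illustrative choices, not the reviewed system's settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)

def black_box(X):
    """Stand-in for any opaque model returning a probability-like score."""
    return 1.0 / (1.0 + np.exp(-(2 * X[:, 0] - 3 * X[:, 2] + 0.5 * X[:, 4])))

def explain_locally(x, n_samples=2000, kernel_width=1.0, alpha=0.01):
    d = x.shape[0]
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))    # perturb around the instance
    y = black_box(Z)                                      # query the black box
    weights = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)
    surrogate = Lasso(alpha=alpha)
    surrogate.fit(Z - x, y, sample_weight=weights)        # sparse local linear model
    return surrogate.coef_                                # feature attributions

x0 = rng.normal(size=6)
print("local attributions:", np.round(explain_locally(x0), 3))
```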
Proceedings First International Workshop on Hammers for Type Theories | cs.LO cs.AI cs.LG | This volume of EPTCS contains the proceedings of the First Workshop on
Hammers for Type Theories (HaTT 2016), held on 1 July 2016 as part of the
International Joint Conference on Automated Reasoning (IJCAR 2016) in Coimbra,
Portugal. The proceedings contain four regular papers, as well as abstracts of
the two invited talks by Pierre Corbineau (Verimag, France) and Aleksy Schubert
(University of Warsaw, Poland).
| Jasmin Christian Blanchette, Cezary Kaliszyk | 10.4204/EPTCS.210 | 1606.05427 | null | null |
Stance Detection with Bidirectional Conditional Encoding | cs.CL cs.LG cs.NE | Stance detection is the task of classifying the attitude expressed in a text
towards a target such as Hillary Clinton to be "positive", "negative" or
"neutral". Previous work has assumed that either the target is mentioned in the
text or that training data for every target is given. This paper considers the
more challenging version of this task, where targets are not always mentioned
and no training data is available for the test targets. We experiment with
conditional LSTM encoding, which builds a representation of the tweet that is
dependent on the target, and demonstrate that it outperforms encoding the tweet
and the target independently. Performance is improved further when the
conditional model is augmented with bidirectional encoding. We evaluate our
approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus achieving
performance second best only to a system trained on semi-automatically labelled
tweets for the test target. When such weak supervision is added, our approach
achieves state-of-the-art results.
| Isabelle Augenstein and Tim Rockt\"aschel and Andreas Vlachos and
Kalina Bontcheva | null | 1606.05464 | null | null |
SMS Spam Filtering using Probabilistic Topic Modelling and Stacked
Denoising Autoencoder | cs.CL cs.LG cs.NE | In this paper we present a novel approach to spam filtering and demonstrate
its applicability with respect to SMS messages. Our approach requires minimal
feature engineering and a small set of labelled data samples. Features are
extracted using topic modelling based on latent Dirichlet allocation, and then
a comprehensive data model is created using a Stacked Denoising Autoencoder
(SDA). Topic modelling summarises the data, providing ease of use and high
interpretability by visualising the topics using word clouds. Given that the
SMS messages can be regarded as either spam (unwanted) or ham (wanted), the SDA
is able to model the messages and accurately discriminate between the two
classes without the need for a pre-labelled training set. The results are
compared against state-of-the-art spam detection algorithms, with our
proposed approach achieving over 97% accuracy, which compares favourably to the
best reported algorithms in the literature.
| Noura Al Moubayed, Toby Breckon, Peter Matthews, and A. Stephen
McGough | null | 1606.05554 | null | null |
Learning Interpretable Musical Compositional Rules and Traces | stat.ML cs.LG | Throughout music history, theorists have identified and documented
interpretable rules that capture the decisions of composers. This paper asks,
"Can a machine behave like a music theorist?" It presents MUS-ROVER, a
self-learning system for automatically discovering rules from symbolic music.
MUS-ROVER performs feature learning via $n$-gram models to extract
compositional rules --- statistical patterns over the resulting features. We
evaluate MUS-ROVER on Bach's (SATB) chorales, demonstrating that it can recover
known rules, as well as identify new, characteristic patterns for further
study. We discuss how the extracted rules can be used in both machine and human
composition.
| Haizi Yu, Lav R. Varshney, Guy E. Garnett, Ranjitha Kumar | null | 1606.05572 | null | null |
Early Visual Concept Learning with Unsupervised Deep Learning | stat.ML cs.LG q-bio.NC | Automated discovery of early visual concepts from raw image data is a major
open challenge in AI research. Addressing this problem, we propose an
unsupervised approach for learning disentangled representations of the
underlying factors of variation. We draw inspiration from neuroscience, and
show how this can be achieved in an unsupervised generative model by applying
the same learning pressures as have been suggested to act in the ventral visual
stream in the brain. By enforcing redundancy reduction, encouraging statistical
independence, and exposure to data with transform continuities analogous to
those to which human infants are exposed, we obtain a variational autoencoder
(VAE) framework capable of learning disentangled factors. Our approach makes
few assumptions and works well across a wide variety of datasets. Furthermore,
our solution has useful emergent properties, such as zero-shot inference and an
intuitive understanding of "objectness".
| Irina Higgins, Loic Matthey, Xavier Glorot, Arka Pal, Benigno Uria,
Charles Blundell, Shakir Mohamed, Alexander Lerchner | null | 1606.05579 | null | null |
Ground Truth Bias in External Cluster Validity Indices | stat.ML cs.LG | It has been noticed that some external cluster validity indices (CVIs) exhibit a preferential bias
towards a larger or smaller number of clusters which is monotonic (directly or
inversely) in the number of clusters in candidate partitions. This type of bias
is caused by the functional form of the CVI model. For example, the popular
Rand index (RI) exhibits a monotone increasing (NCinc) bias, while the Jaccard
index (JI) suffers from a monotone decreasing (NCdec) bias. This type of
bias has been previously recognized in the literature. In this work, we
identify a new type of bias arising from the distribution of the ground truth
(reference) partition against which candidate partitions are compared. We call
this new type of bias ground truth (GT) bias. This type of bias occurs if a
change in the reference partition causes a change in the bias status (e.g.,
NCinc, NCdec) of a CVI. For example, NCinc bias in the RI can be changed to
NCdec bias by skewing the distribution of clusters in the ground truth
partition. It is important for users to be aware of this new type of biased
behaviour, since it may affect the interpretations of CVI results. The
objective of this article is to study the empirical and theoretical
implications of GT bias. To the best of our knowledge, this is the first
extensive study of such a property for external cluster validity indices.
| Yang Lei, James C. Bezdek, Simone Romano, Nguyen Xuan Vinh, Jeffrey
Chan and James Bailey | null | 1606.05596 | null | null |
Guaranteed Non-convex Optimization: Submodular Maximization over
Continuous Domains | cs.LG cs.DS | Submodular continuous functions are a category of (generally)
non-convex/non-concave functions with a wide spectrum of applications. We
characterize these functions and demonstrate that they can be maximized
efficiently with approximation guarantees. Specifically, i) We introduce the
weak DR property that gives a unified characterization of submodularity for all
set, integer-lattice and continuous functions; ii) for maximizing monotone
DR-submodular continuous functions under general down-closed convex
constraints, we propose a Frank-Wolfe variant with $(1-1/e)$ approximation
guarantee, and sub-linear convergence rate; iii) for maximizing general
non-monotone submodular continuous functions subject to box constraints, we
propose a DoubleGreedy algorithm with $1/3$ approximation guarantee. Submodular
continuous functions naturally find applications in various real-world
settings, including influence and revenue maximization with continuous
assignments, sensor energy management, multi-resolution data summarization,
facility location, etc. Experimental results show that the proposed algorithms
efficiently generate superior solutions compared to baseline algorithms.
| Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, Andreas
Krause | null | 1606.05615 | null | null |
Balancing New Against Old Information: The Role of Surprise in Learning | stat.ML cs.LG q-bio.NC | Surprise describes a range of phenomena from unexpected events to behavioral
responses. We propose a measure of surprise and use it for surprise-driven
learning. Our surprise measure takes into account data likelihood as well as
the degree of commitment to a belief via the entropy of the belief
distribution. We find that surprise-minimizing learning dynamically adjusts the
balance between new and old information without the need for knowledge of the
temporal statistics of the environment. We apply our framework to a dynamic
decision-making task and a maze exploration task. Our surprise minimizing
framework is suitable for learning in complex environments, even if the
environment undergoes gradual or sudden changes and could eventually provide a
framework to study the behavior of humans and animals encountering surprising
events.
| Mohammadjavad Faraji, Kerstin Preuschoff, Wulfram Gerstner | null | 1606.05642 | null | null |
Linear Classification of data with Support Vector Machines and
Generalized Support Vector Machines | cs.LG | In this paper, we study the support vector machine and introduce the notion
of a generalized support vector machine for the classification of data. We show that
the problem of generalized support vector machine is equivalent to the problem
of generalized variational inequality and establish various results for the
existence of solutions. Moreover, we provide various examples to support our
results.
| Xiaomin Qi, Sergei Silvestrov and Talat Nazir | 10.1063/1.4972718 | 1606.05664 | null | null |
Using Visual Analytics to Interpret Predictive Machine Learning Models | stat.ML cs.LG | It is commonly believed that increasing the interpretability of a machine
learning model may decrease its predictive power. However, inspecting
input-output relationships of those models using visual analytics, while
treating them as black-box, can help to understand the reasoning behind
outcomes without sacrificing predictive quality. We identify a space of
possible solutions and provide two examples of where such techniques have been
successfully used in practice.
| Josua Krause, Adam Perer, Enrico Bertini | null | 1606.05685 | null | null |
ZNNi - Maximizing the Inference Throughput of 3D Convolutional Networks
on Multi-Core CPUs and GPUs | cs.DC cs.LG | Sliding window convolutional networks (ConvNets) have become a popular
approach to computer vision problems such as image segmentation, and object
detection and localization. Here we consider the problem of inference, the
application of a previously trained ConvNet, with emphasis on 3D images. Our
goal is to maximize throughput, defined as average number of output voxels
computed per unit time. Other things being equal, processing a larger image
tends to increase throughput, because fractionally less computation is wasted
on the borders of the image. It follows that an apparently slower algorithm may
end up having higher throughput if it can process a larger image within the
constraint of the available RAM. We introduce novel CPU and GPU primitives for
convolutional and pooling layers, which are designed to minimize memory
overhead. The primitives include convolution based on highly efficient pruned
FFTs. Our theoretical analyses and empirical tests reveal a number of
interesting findings. For some ConvNet architectures, cuDNN is outperformed by
our FFT-based GPU primitives, and these in turn can be outperformed by our CPU
primitives. The CPU manages to achieve higher throughput because of its fast
access to more RAM. A novel primitive in which the GPU accesses host RAM can
significantly increase GPU throughput. Finally, a CPU-GPU algorithm achieves
the greatest throughput of all, 10x or more than other publicly available
implementations of sliding window 3D ConvNets. All of our code has been made
available as an open source project.
| Aleksandar Zlateski, Kisuk Lee and H. Sebastian Seung | null | 1606.05688 | null | null |
Structured Stochastic Linear Bandits | stat.ML cs.LG | The stochastic linear bandit problem proceeds in rounds where at each round
the algorithm selects a vector from a decision set after which it receives a
noisy linear loss parameterized by an unknown vector. The goal in such a
problem is to minimize the (pseudo) regret which is the difference between the
total expected loss of the algorithm and the total expected loss of the best
fixed vector in hindsight. In this paper, we consider settings where the
unknown parameter has structure, e.g., sparse, group sparse, low-rank, which
can be captured by a norm, e.g., $L_1$, $L_{(1,2)}$, nuclear norm. We focus on
constructing confidence ellipsoids which contain the unknown parameter across
all rounds with high probability. We show that the radius of such ellipsoids depends
on the Gaussian width of sets associated with the norm capturing the structure.
Such characterization leads to tighter confidence ellipsoids and, therefore,
sharper regret bounds compared to bounds in the existing literature which are
based on the ambient dimensionality.
| Nicholas Johnson, Vidyashankar Sivakumar, Arindam Banerjee | null | 1606.05693 | null | null |
An Efficient Large-scale Semi-supervised Multi-label Classifier Capable
of Handling Missing labels | cs.LG cs.AI stat.ML | Multi-label classification has received considerable interest in recent
years. Multi-label classifiers have to address many problems including:
handling large-scale datasets with many instances and a large set of labels,
compensating missing label assignments in the training set, considering
correlations between labels, as well as exploiting unlabeled data to improve
prediction performance. To tackle datasets with a large set of labels,
embedding-based methods have been proposed which seek to represent the label
assignments in a low-dimensional space. Many state-of-the-art embedding-based
methods use a linear dimensionality reduction to represent the label
assignments in a low-dimensional space. However, by doing so, these methods
actually neglect the tail labels - labels that are infrequently assigned to
instances. We propose an embedding-based method that non-linearly embeds the
label vectors using a stochastic approach, thereby predicting the tail labels
more accurately. Moreover, the proposed method has excellent mechanisms for
handling missing labels, dealing with large-scale datasets, as well as
exploiting unlabeled data. To the best of our knowledge, our proposed method
is the first multi-label classifier that simultaneously addresses all of the
mentioned challenges. Experiments on real-world datasets show that our method
outperforms state-of-the-art multi-label classifiers by a large margin, in terms
of prediction performance, as well as training time.
| Amirhossein Akbarnejad, Mahdieh Soleymani Baghshah | null | 1606.05725 | null | null |
A Comparative Analysis of classification data mining techniques :
Deriving key factors useful for predicting students performance | cs.LG cs.AI cs.CY | The number of students opting for engineering as their discipline is increasing rapidly.
However, due to various factors and inadequate primary education in India,
failure rates are high. Students are unable to excel in core engineering
because of complex and mathematics-heavy subjects, and hence they fail in such
subjects. With the help of data mining techniques, we can predict the
performance of students in terms of grades and failure in subjects. This paper
performs a comparative analysis of various classification techniques, such as
Na\"ive Bayes, LibSVM, J48, Random Forest, and JRip, and tries to choose the best
among these. Based on the results obtained, we found that Na\"ive Bayes is the
most accurate method for students' failure prediction and JRip is the most
accurate for students' grade prediction. We also found that JRip
marginally differs from Na\"ive Bayes in accuracy for students' failure
prediction and gives us a set of rules from which we derive the key factors
influencing students' performance. Finally, we suggest various ways to mitigate
these factors. This study is limited to scenarios in the Indian education system.
| Muhammed Salman Shamsi, Jhansi Lakshmi | null | 1606.05735 | null | null |
Building an Interpretable Recommender via Loss-Preserving Transformation | stat.ML cs.LG | We propose a method for building an interpretable recommender system for
personalizing online content and promotions. Historical data available for the
system consists of customer features, provided content (promotions), and user
responses. Unlike in a standard multi-class classification setting,
misclassification costs depend on both recommended actions and customers. Our
method transforms such a data set to a new set which can be used with standard
interpretable multi-class classification algorithms. The transformation has the
desirable property that minimizing the standard misclassification penalty in
this new space is equivalent to minimizing the custom cost function.
| Amit Dhurandhar, Sechan Oh, Marek Petrik | null | 1606.05819 | null | null |
Statistical Parametric Speech Synthesis Using Bottleneck Representation
From Sequence Auto-encoder | cs.SD cs.LG | In this paper, we describe a statistical parametric speech synthesis approach
with unit-level acoustic representation. In conventional deep neural network
based speech synthesis, the input text features are repeated for the entire
duration of phoneme for mapping text and speech parameters. This mapping is
learnt at the frame-level which is the de-facto acoustic representation.
However much of this computational requirement can be drastically reduced if
every unit can be represented with a fixed-dimensional representation. Using
recurrent neural network based auto-encoder, we show that it is indeed possible
to map units of varying duration to a single vector. We then use this acoustic
representation at unit-level to synthesize speech using deep neural network
based statistical parametric speech synthesis technique. Results show that the
proposed approach is able to synthesize at the same quality as the conventional
frame based approach at a highly reduced computational cost.
| Sivanand Achanta, KNRK Raju Alluri, Suryakanth V Gangashetty | null | 1606.05844 | null | null |
Guaranteed bounds on the Kullback-Leibler divergence of univariate
mixtures using piecewise log-sum-exp inequalities | cs.LG cs.IT math.IT stat.ML | Information-theoretic measures such as the entropy, cross-entropy and the
Kullback-Leibler divergence between two mixture models are core primitives in
many signal processing tasks. Since the Kullback-Leibler divergence of mixtures
provably does not admit a closed-form formula, it is in practice either
estimated using costly Monte-Carlo stochastic integration, approximated, or
bounded using various techniques. We present a fast and generic method that
builds algorithmically closed-form lower and upper bounds on the entropy, the
cross-entropy and the Kullback-Leibler divergence of mixtures. We illustrate
the versatile method by reporting on our experiments for approximating the
Kullback-Leibler divergence between univariate exponential mixtures, Gaussian
mixtures, Rayleigh mixtures, and Gamma mixtures.
| Frank Nielsen and Ke Sun | 10.3390/e18120442 | 1606.05850 | null | null |
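[Editor's illustration] A minimal sketch of the costly Monte-Carlo baseline mentioned in the abstract: estimating KL(p || q) between two univariate Gaussian mixtures by sampling from p, since no closed form exists. The mixture parameters and sample size are illustrative; the paper's contribution (closed-form log-sum-exp bounds) is not reproduced here.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(7)

def mixture_logpdf(x, weights, means, stds):
    comp = np.log(weights)[None, :] + norm.logpdf(x[:, None], means[None, :], stds[None, :])
    return logsumexp(comp, axis=1)

def mixture_sample(n, weights, means, stds):
    ks = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(means[ks], stds[ks])

p = (np.array([0.4, 0.6]), np.array([-1.0, 2.0]), np.array([0.5, 1.0]))
q = (np.array([0.5, 0.5]), np.array([0.0, 2.5]), np.array([1.0, 0.8]))

x = mixture_sample(200000, *p)
kl_mc = np.mean(mixture_logpdf(x, *p) - mixture_logpdf(x, *q))
print("Monte-Carlo KL(p || q) estimate:", kl_mc)
```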
Tutorial on Variational Autoencoders | stat.ML cs.LG | In just three years, Variational Autoencoders (VAEs) have emerged as one of
the most popular approaches to unsupervised learning of complicated
distributions. VAEs are appealing because they are built on top of standard
function approximators (neural networks), and can be trained with stochastic
gradient descent. VAEs have already shown promise in generating many kinds of
complicated data, including handwritten digits, faces, house numbers, CIFAR
images, physical models of scenes, segmentation, and predicting the future from
static images. This tutorial introduces the intuitions behind VAEs, explains
the mathematics behind them, and describes some empirical behavior. No prior
knowledge of variational Bayesian methods is assumed.
| Carl Doersch | null | 1606.05908 | null | null |
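[Editor's illustration] A compact sketch of the standard VAE machinery the tutorial explains: an encoder producing a diagonal Gaussian q(z|x), the reparameterization trick, and the ELBO (reconstruction term plus KL to a standard normal prior). Layer sizes and the toy data are arbitrary; this is a generic illustration, not a tuned model from the tutorial.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=128, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(x, logits, mu, logvar):
    recon = nn.functional.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return recon + kl

model = TinyVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784).round()                                   # toy binary "images"
for step in range(100):
    logits, mu, logvar = model(x)
    loss = neg_elbo(x, logits, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
print("final negative ELBO per example:", loss.item() / x.shape[0])
```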
Slack and Margin Rescaling as Convex Extensions of Supermodular
Functions | cs.LG cs.DM | Slack and margin rescaling are variants of the structured output SVM, which
is frequently applied to problems in computer vision such as image
segmentation, object localization, and learning parts based object models. They
define convex surrogates to task specific loss functions, which, when
specialized to non-additive loss functions for multi-label problems, yield
extensions to increasing set functions. We demonstrate in this paper that we
may use these concepts to define polynomial time convex extensions of arbitrary
supermodular functions, providing an analysis framework for the tightness of
these surrogates. This analysis framework shows that, while neither margin nor
slack rescaling dominate the other, known bounds on supermodular functions can
be used to derive extensions that dominate both of these, indicating possible
directions for defining novel structured output prediction surrogates. In
addition to the analysis of structured prediction loss functions, these results
imply an approach to supermodular minimization in which margin rescaling is
combined with non-polynomial time convex extensions to compute a sequence of LP
relaxations reminiscent of a cutting plane method. This approach is applied to
the problem of selecting representative exemplars from a set of images,
validating our theoretical contributions.
| Matthew B. Blaschko | null | 1606.05918 | null | null |
Graph based manifold regularized deep neural networks for automatic
speech recognition | stat.ML cs.CL cs.LG | Deep neural networks (DNNs) have been successfully applied to a wide variety
of acoustic modeling tasks in recent years. These include the applications of
DNNs either in a discriminative feature extraction or in a hybrid acoustic
modeling scenario. Despite the rapid progress in this area, a number of
challenges remain in training DNNs. This paper presents an effective way of
training DNNs using a manifold learning based regularization framework. In this
framework, the parameters of the network are optimized to preserve underlying
manifold based relationships between speech feature vectors while minimizing a
measure of loss between network outputs and targets. This is achieved by
incorporating manifold based locality constraints in the objective criterion of
DNNs. Empirical evidence is provided to demonstrate that training a network
with manifold constraints preserves structural compactness in the hidden layers
of the network. Manifold regularization is applied to train bottleneck DNNs for
feature extraction in hidden Markov model (HMM) based speech recognition. The
experiments in this work are conducted on the Aurora-2 spoken digits and the
Aurora-4 read news large vocabulary continuous speech recognition tasks. The
performance is measured in terms of word error rate (WER) on these tasks. It is
shown that the manifold regularized DNNs result in up to 37% reduction in WER
relative to standard DNNs.
| Vikrant Singh Tomar and Richard C. Rose | null | 1606.05925 | null | null |
Adapting ELM to Time Series Classification: A Novel Diversified Top-k
Shapelets Extraction Method | cs.LG | ELM (Extreme Learning Machine) is a single hidden layer feed-forward network,
where the weights between the input and hidden layer are initialized randomly. ELM
is efficient due to its use of an analytical approach to compute the
weights between the hidden and output layer. However, ELM still fails to produce
semantically interpretable classification outcomes. To address this limitation, in this paper we
propose a diversified top-k shapelets transform framework, where shapelets
are subsequences, i.e., the most representative and interpretable features
of each class. As we identified, the most challenging problems are how to extract
the best k shapelets from the original candidate set and how to automatically
determine the value of k. Specifically, we first define similar shapelets and
diversified top-k shapelets to construct a diversity shapelets graph. Then, a
novel diversity graph based top-k shapelets extraction algorithm named
\textbf{DivTopkshapelets}\ is proposed to search for the top-k diversified shapelets.
Finally, we propose a shapelet-transformed ELM algorithm named
\textbf{DivShapELM} that automatically determines the value of k and is further
utilized for time series classification. Experimental results on public
data sets demonstrate that the proposed approach significantly outperforms the
traditional ELM algorithm in terms of effectiveness and efficiency.
| Qiuyan Yan and Qifa Sun and Xinming Yan | null | 1606.05934 | null | null |
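[Editor's illustration] A minimal sketch of the basic shapelet transform that shapelet-based pipelines like the one above build on: the distance from a time series to a shapelet is the minimum (length-normalized) Euclidean distance over all sliding windows, and each series is re-represented by its distances to a set of shapelets. The diversified top-k selection and the ELM integration proposed in the paper are not implemented here; the toy data and candidate shapelets are illustrative.

```python
import numpy as np

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance over sliding windows, normalized by shapelet length."""
    L = len(shapelet)
    windows = np.lib.stride_tricks.sliding_window_view(series, L)
    return np.min(np.linalg.norm(windows - shapelet, axis=1)) / np.sqrt(L)

def shapelet_transform(series_list, shapelets):
    """Map each series to its vector of distances to the given shapelets."""
    return np.array([[shapelet_distance(s, sh) for sh in shapelets] for s in series_list])

rng = np.random.default_rng(8)
series = [np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.normal(size=100) for _ in range(5)]
shapelets = [series[0][10:30], series[1][40:70]]       # candidate subsequences
print(shapelet_transform(series, shapelets).round(3))
```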
Continuum directions for supervised dimension reduction | stat.ME cs.LG stat.ML | Dimension reduction of multivariate data supervised by auxiliary information
is considered. A series of basis for dimension reduction is obtained as
minimizers of a novel criterion. The proposed method is akin to continuum
regression, and the resulting basis is called continuum directions. In the
presence of binary supervision data, these directions continuously bridge the
principal component, mean difference and linear discriminant directions, thus
ranging from unsupervised to fully supervised dimension reduction.
High-dimensional asymptotic studies of continuum directions for binary
supervision reveal several interesting facts. The conditions under which the
sample continuum directions are inconsistent, but their classification
performance is good, are specified. While the proposed method can be directly
used for binary and multi-category classification, its generalizations to
incorporate any form of auxiliary data are also presented. The proposed method
enjoys fast computation, and its performance is better than or on par with more
computer-intensive alternatives.
| Sungkyu Jung | 10.1016/j.csda.2018.03.015 | 1606.05988 | null | null |
A New Training Method for Feedforward Neural Networks Based on Geometric
Contraction Property of Activation Functions | cs.NE cs.LG | We propose a new training method for feedforward neural networks whose
activation functions have the geometric contraction property. The method
consists of constructing a new functional that is less nonlinear in comparison
with the classical functional, by removing the nonlinearity of the activation
function from the output layer. We validate this new method with a series of
experiments that show improved learning speed and lower classification
error.
| Petre Birtea, Cosmin Cernazanu-Glavan, Alexandru Sisu | null | 1606.05990 | null | null |
The LAMBADA dataset: Word prediction requiring a broad discourse context | cs.CL cs.AI cs.LG | We introduce LAMBADA, a dataset to evaluate the capabilities of computational
models for text understanding by means of a word prediction task. LAMBADA is a
collection of narrative passages sharing the characteristic that human subjects
are able to guess their last word if they are exposed to the whole passage, but
not if they only see the last sentence preceding the target word. To succeed on
LAMBADA, computational models cannot simply rely on local context, but must be
able to keep track of information in the broader discourse. We show that
LAMBADA exemplifies a wide range of linguistic phenomena, and that none of
several state-of-the-art language models reaches accuracy above 1% on this
novel benchmark. We thus propose LAMBADA as a challenging test set, meant to
encourage the development of new models capable of genuine understanding of
broad context in natural language text.
| Denis Paperno (1), Germ\'an Kruszewski (1), Angeliki Lazaridou (1),
Quan Ngoc Pham (1), Raffaella Bernardi (1), Sandro Pezzelle (1), Marco Baroni
(1), Gemma Boleda (1), Raquel Fern\'andez (2) ((1) CIMeC - Center for
Mind/Brain Sciences, University of Trento, (2) Institute for Logic, Language
& Computation, University of Amsterdam) | null | 1606.06031 | null | null |
Mining Local Process Models | cs.DB cs.LG | In this paper we describe a method to discover frequent behavioral patterns
in event logs. We express these patterns as \emph{local process models}. Local
process model mining can be positioned in-between process discovery and episode
/ sequential pattern mining. The technique presented in this paper is able to
learn behavioral patterns involving sequential composition, concurrency, choice
and loop, like in process mining. However, we do not look at start-to-end
models, which distinguishes our approach from process discovery and creates a
link to episode / sequential pattern mining. We propose an incremental
procedure for building local process models capturing frequent patterns based
on so-called process trees. We propose five quality dimensions and
corresponding metrics for local process models, given an event log. We show
monotonicity properties for some quality dimensions, enabling a speedup of
local process model discovery through pruning. We demonstrate through a real
life case study that mining local patterns allows us to get insights in
processes where regular start-to-end process discovery techniques are only able
to learn unstructured, flower-like, models.
| Niek Tax, Natalia Sidorova, Reinder Haakma, Wil M. P. van der Aalst | 10.1016/j.jides.2016.11.001 | 1606.06066 | null | null |
Relative Natural Gradient for Learning Large Complex Models | cs.LG | Fisher information and the natural gradient have provided deep insights and powerful
tools for artificial neural networks. However, the related analysis becomes more and
more difficult as the learner's structure grows large and complex. This paper
makes a preliminary step towards a new direction. We extract a local component
of a large neuron system, and define its relative Fisher information metric,
which accurately describes this small component and is invariant to the other
parts of the system. This concept is important because the geometric structure
is much simplified and can be easily applied to guide the learning of neural
networks. We provide an analysis of a list of commonly used components, and
demonstrate how to use this concept to further improve optimization.
| Ke Sun and Frank Nielsen | null | 1606.06069 | null | null |
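For context, a plain natural-gradient update with an empirical Fisher estimate is sketched below; the paper's relative Fisher information metric for a local component refines this idea and is not reproduced here. The function name, damping constant, and stand-in gradients are illustrative assumptions.

```python
import numpy as np

def natural_gradient_step(theta, grad_samples, lr=0.1, damping=1e-3):
    """One plain natural-gradient update: theta <- theta - lr * F^{-1} g,
    with the Fisher matrix F estimated from per-sample gradients."""
    G = np.asarray(grad_samples)           # shape: (n_samples, n_params)
    g = G.mean(axis=0)
    F = G.T @ G / G.shape[0]               # empirical Fisher estimate
    F += damping * np.eye(F.shape[0])      # damping keeps F invertible
    return theta - lr * np.linalg.solve(F, g)

rng = np.random.default_rng(1)
theta = np.zeros(3)
per_sample_grads = rng.normal(size=(32, 3))   # stand-in per-sample gradients
print(natural_gradient_step(theta, per_sample_grads))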
Quantifying and Reducing Stereotypes in Word Embeddings | cs.CL cs.LG stat.ML | Machine learning algorithms are optimized to model statistical properties of
the training data. If the input data reflects stereotypes and biases of the
broader society, then the output of the learning algorithm also captures these
stereotypes. In this paper, we initiate the study of gender stereotypes in
\emph{word embeddings}, a popular framework to represent text data. As their use
becomes increasingly common, applications can inadvertently amplify unwanted
stereotypes. We show across multiple datasets that the embeddings contain
significant gender stereotypes, especially with regard to professions. We
created a novel gender analogy task and combined it with crowdsourcing to
systematically quantify the gender bias in a given embedding. We developed an
efficient algorithm that reduces gender stereotype using just a handful of
training examples while preserving the useful geometric properties of the
embedding. We evaluated our algorithm on several metrics. While we focus on
male/female stereotypes, our framework may be applicable to other types of
embedding biases.
| Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam
Kalai | null | 1606.06121 | null | null |
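A simplified sketch of the projection step behind this kind of debiasing: remove a word vector's component along a gender direction (here, the difference between toy "he" and "she" vectors) and renormalize. The 4-dimensional toy embedding is fabricated for illustration; the paper's full algorithm (e.g., its treatment of gender-definitional words) is more involved.

```python
import numpy as np

def debias(embedding, word, gender_direction):
    """Remove the component of a word vector along the gender direction,
    then renormalize (a simplified form of 'hard' debiasing)."""
    v = embedding[word]
    b = gender_direction / np.linalg.norm(gender_direction)
    v_debiased = v - np.dot(v, b) * b
    return v_debiased / np.linalg.norm(v_debiased)

# Hypothetical 4-d toy embedding; real use would load pretrained vectors.
emb = {"he": np.array([1.0, 0.0, 0.0, 0.0]),
       "she": np.array([-1.0, 0.0, 0.0, 0.0]),
       "nurse": np.array([-0.6, 0.5, 0.3, 0.2])}
gender_dir = emb["he"] - emb["she"]
print(debias(emb, "nurse", gender_dir))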
Bootstrapping with Models: Confidence Intervals for Off-Policy
Evaluation | cs.AI cs.LG stat.ML | For an autonomous agent, executing a poor policy may be costly or even
dangerous. For such agents, it is desirable to determine confidence interval
lower bounds on the performance of any given policy without executing said
policy. Current methods for exact high confidence off-policy evaluation that
use importance sampling require a substantial amount of data to achieve a tight
lower bound. Existing model-based methods only address the problem in discrete
state spaces. Since exact bounds are intractable for many domains, we trade off
strict guarantees of safety for more data-efficient approximate bounds. In this
context, we propose two bootstrapping off-policy evaluation methods which use
learned MDP transition models in order to estimate lower confidence bounds on
policy performance with limited data in both continuous and discrete state
spaces. Since direct use of a model may introduce bias, we derive a theoretical
upper bound on model bias for when the model transition function is estimated
with i.i.d. trajectories. This bound broadens our understanding of the
conditions under which model-based methods have high bias. Finally, we
empirically evaluate our proposed methods and analyze the settings in which
different bootstrapping off-policy confidence interval methods succeed and
fail.
| Josiah P. Hanna, Peter Stone, Scott Niekum | null | 1606.06126 | null | null |
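As a sketch of the bootstrapping idea in the abstract: given per-rollout returns of the evaluation policy (for instance, generated by rolling the policy out in a learned transition model), a percentile bootstrap yields an approximate lower confidence bound on the expected return. The function below is a generic illustration, not the authors' exact estimator.

```python
import numpy as np

def bootstrap_lower_bound(returns, delta=0.05, n_boot=2000, seed=0):
    """Percentile-bootstrap lower bound on the expected return, given
    per-rollout returns (e.g., from rollouts in a learned MDP model)."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    means = np.empty(n_boot)
    for i in range(n_boot):
        sample = rng.choice(returns, size=returns.size, replace=True)
        means[i] = sample.mean()
    return np.quantile(means, delta)   # approximate (1 - delta) lower bound

print(bootstrap_lower_bound([1.0, 0.5, 0.8, 1.2, 0.9]))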
DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low
Bitwidth Gradients | cs.NE cs.LG | We propose DoReFa-Net, a method to train convolutional neural networks that
have low bitwidth weights and activations using low bitwidth parameter
gradients. In particular, during backward pass, parameter gradients are
stochastically quantized to low bitwidth numbers before being propagated to
convolutional layers. As convolutions during forward/backward passes can now
operate on low bitwidth weights and activations/gradients respectively,
DoReFa-Net can use bit convolution kernels to accelerate both training and
inference. Moreover, as bit convolutions can be efficiently implemented on CPU,
FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerating the training of
low-bitwidth neural networks on such hardware. Our experiments on the SVHN and
ImageNet datasets show that DoReFa-Net can achieve prediction accuracy
comparable to 32-bit counterparts. For example, a DoReFa-Net derived from
AlexNet that has 1-bit weights and 2-bit activations can be trained from
scratch using 6-bit gradients to reach 46.1\% top-1 accuracy on the ImageNet
validation set. The
| Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou | null | 1606.06160 | null | null |
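A numpy sketch of the k-bit quantization operator used in this line of work, applied to weights in the manner described for DoReFa-Net. This is the deterministic forward pass only; the stochastic quantization of gradients and the straight-through estimator needed for backpropagation are omitted, and the 1-bit weight case uses a separate scaled-sign scheme in the paper.

```python
import numpy as np

def quantize_k(r, k):
    """Quantize r in [0, 1] to k bits: round to the nearest of 2^k - 1 levels."""
    n = float(2 ** k - 1)
    return np.round(r * n) / n

def quantize_weights(w, k):
    """k-bit weight quantization in the DoReFa style (forward pass only)."""
    t = np.tanh(w)
    r = t / (2.0 * np.max(np.abs(t))) + 0.5     # map into [0, 1]
    return 2.0 * quantize_k(r, k) - 1.0         # map back into [-1, 1]

w = np.random.randn(4, 4)
print(quantize_weights(w, k=2))   # 2-bit weights take one of four values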
CNNLab: a Novel Parallel Framework for Neural Networks using GPU and
FPGA-a Practical Study with Trade-off Analysis | cs.LG cs.DC | Designing and implementing efficient, provably correct parallel neural
network processing is challenging. Existing high-level parallel abstractions
like MapReduce are insufficiently expressive while low-level tools like MPI and
Pthreads leave ML experts repeatedly solving the same design challenges.
However, the diversity of applications and the large scale of the data pose a
significant challenge to constructing a flexible and high-performance
implementation of deep learning neural networks. To improve the performance and maintain the
scalability, we present CNNLab, a novel deep learning framework using GPU and
FPGA-based accelerators. CNNLab provides a uniform programming model to users
so that the hardware implementation and the scheduling are invisible to the
programmers. At runtime, CNNLab leverages the trade-offs between GPU and FPGA
before offloading the tasks to the accelerators. Experimental results on the
state-of-the-art Nvidia K40 GPU and Altera DE5 FPGA board demonstrate that the
CNNLab can provide a universal framework with efficient support for diverse
applications without increasing the burden of the programmers. Moreover, we
analyze the detailed quantitative performance, throughput, power, energy, and
performance density for both approaches. The experimental results highlight the
trade-offs between GPU and FPGA and provide useful practical experience for
the deep learning research community.
| Maohua Zhu, Liu Liu, Chao Wang, Yuan Xie | null | 1606.06234 | null | null |
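CNNLab's scheduler is not spelled out in the abstract; the toy dispatcher below only illustrates the kind of per-layer GPU-versus-FPGA trade-off (e.g., performance per watt) such a runtime might weigh. All names and numbers are made up for illustration.

```python
def choose_accelerator(layer, profiles):
    """Pick the device with the best profiled performance per watt for a layer;
    a toy stand-in for the runtime trade-off described in the abstract."""
    best, best_score = None, float("-inf")
    for device, stats in profiles[layer].items():
        score = stats["throughput"] / stats["power"]   # performance per watt
        if score > best_score:
            best, best_score = device, score
    return best

profiles = {"conv1": {"gpu": {"throughput": 900.0, "power": 235.0},
                      "fpga": {"throughput": 160.0, "power": 25.0}}}
print(choose_accelerator("conv1", profiles))   # -> "fpga" in this toy profile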