title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Dense Associative Memory for Pattern Recognition | cs.NE cond-mat.dis-nn cs.LG q-bio.NC stat.ML | A model of associative memory is studied, which stores and reliably retrieves
many more patterns than the number of neurons in the network. We propose a
simple duality between this dense associative memory and neural networks
commonly used in deep learning. On the associative memory side of this duality,
a family of models that smoothly interpolates between two limiting cases can be
constructed. One limit is referred to as the feature-matching mode of pattern
recognition, and the other one as the prototype regime. On the deep learning
side of the duality, this family corresponds to feedforward neural networks
with one hidden layer and various activation functions, which transmit the
activities of the visible neurons to the hidden layer. This family of
activation functions includes logistics, rectified linear units, and rectified
polynomials of higher degrees. The proposed duality makes it possible to apply
energy-based intuition from associative memory to analyze computational
properties of neural networks with unusual activation functions - the higher
rectified polynomials which until now have not been used in deep learning. The
utility of the dense memories is illustrated for two test cases: the logical
gate XOR and the recognition of handwritten digits from the MNIST data set.
| Dmitry Krotov, John J Hopfield | null | 1606.01164 | null | null |
Generalizing the Convolution Operator to extend CNNs to Irregular
Domains | cs.LG cs.CV cs.NE | Convolutional Neural Networks (CNNs) have become the state-of-the-art in
supervised learning vision tasks. Their convolutional filters are of paramount
importance because they allow the network to learn patterns while disregarding
their locations in input images. When facing highly irregular domains, generalized
convolutional operators based on an underlying graph structure have been
proposed. However, these operators do not exactly match standard ones on grid
graphs, and introduce unwanted additional invariance (e.g. with regard to
rotations). We propose a novel approach to generalize CNNs to irregular domains
using weight sharing and graph-based operators. Using experiments, we show that
these models resemble CNNs on regular domains and offer better performance than
multilayer perceptrons on distorted ones.
| Jean-Charles Vialatte, Vincent Gripon, Gr\'egoire Mercier | null | 1606.01166 | null | null |
Infant directed speech is consistent with teaching | cs.LG | Infant-directed speech (IDS) has distinctive properties that differ from
adult-directed speech (ADS). Why it has these properties -- and whether they
are intended to facilitate language learning -- is a matter of contention. We
argue that much of this disagreement stems from the lack of a formal, guiding
theory of how phonetic categories should best be taught to infant-like
learners. In the absence of such a theory, researchers have relied on
intuitions about learning to guide the argument. We use a formal theory of
teaching, validated through experiments in other domains, as the basis for a
detailed analysis of whether IDS is well-designed for teaching phonetic
categories. Using the theory, we generate ideal data for teaching phonetic
categories in English. We qualitatively compare the simulated teaching data
with human IDS, finding that the teaching data exhibit many features of IDS,
including some that have been taken as evidence that IDS is not for teaching. The
simulated data reveal potential pitfalls for experimentalists exploring the
role of IDS in language learning. Focusing on different formants and phoneme
sets leads to different conclusions, and the benefit of the teaching data to
learners is not apparent until a sufficient number of examples have been
provided. Finally, we investigate transfer of IDS to learning ADS. The teaching
data improves classification of ADS data, but only for the learner they were
generated to teach, not universally across all classes of learner. This
research offers a theoretically-grounded framework that empowers
experimentalists to systematically evaluate whether IDS is for teaching.
| Baxter S. Eaves Jr., Naomi H. Feldman, Thomas L. Griffiths, Patrick
Shafto | null | 1606.01175 | null | null |
Distributed stochastic optimization via matrix exponential learning | cs.IT cs.LG math.IT math.OC | In this paper, we investigate a distributed learning scheme for a broad class
of stochastic optimization problems and games that arise in signal processing
and wireless communications. The proposed algorithm relies on the method of
matrix exponential learning (MXL) and only requires locally computable gradient
observations that are possibly imperfect and/or obsolete. To analyze it, we
introduce the notion of a stable Nash equilibrium and we show that the
algorithm is globally convergent to such equilibria - or locally convergent
when an equilibrium is only locally stable. We also derive an explicit linear
bound for the algorithm's convergence speed, which remains valid under
measurement errors and uncertainty of arbitrarily high variance. To validate
our theoretical analysis, we test the algorithm in realistic
multi-carrier/multiple-antenna wireless scenarios where several users seek to
maximize their energy efficiency. Our results show that learning allows users
to attain a net increase between 100% and 500% in energy efficiency, even under
very high uncertainty.
| Panayotis Mertikopoulos and E. Veronica Belmega and Romain Negrel and
Luca Sanguinetti | 10.1109/TSP.2017.2656847 | 1606.01190 | null | null |
Minimizing Regret on Reflexive Banach Spaces and Learning Nash
Equilibria in Continuous Zero-Sum Games | cs.LG | We study a general version of the adversarial online learning problem. We are
given a decision set $\mathcal{X}$ in a reflexive Banach space $X$ and a
sequence of reward vectors in the dual space of $X$. At each iteration, we
choose an action from $\mathcal{X}$, based on the observed sequence of previous
rewards. Our goal is to minimize regret, defined as the gap between the
realized reward and the reward of the best fixed action in hindsight. Using
results from infinite dimensional convex analysis, we generalize the method of
Dual Averaging (or Follow the Regularized Leader) to our setting and obtain
general upper bounds on the worst-case regret that subsume a wide range of
results from the literature. Under the assumption of uniformly continuous
rewards, we obtain explicit anytime regret bounds in a setting where the
decision set is the set of probability distributions on a compact metric space
$S$ whose Radon-Nikodym derivatives are elements of $L^p(S)$ for some $p > 1$.
Importantly, we make no convexity assumptions on either the set $S$ or the
reward functions. We also prove a general lower bound on the worst-case regret
for any online algorithm. We then apply these results to the problem of
learning in repeated continuous two-player zero-sum games, in which players'
strategy sets are compact metric spaces. In doing so, we first prove that if
both players play a Hannan-consistent strategy, then with probability 1 the
empirical distributions of play weakly converge to the set of Nash equilibria
of the game. We then show that, under mild assumptions, Dual Averaging on the
(infinite-dimensional) space of probability distributions indeed achieves
Hannan-consistency. Finally, we illustrate our results through numerical
examples.
| Maximilian Balandat, Walid Krichene, Claire Tomlin and Alexandre Bayen | null | 1606.01261 | null | null |
End-to-end LSTM-based dialog control optimized with supervised and
reinforcement learning | cs.CL cs.AI cs.LG | This paper presents a model for end-to-end learning of task-oriented dialog
systems. The main component of the model is a recurrent neural network (an
LSTM), which maps from raw dialog history directly to a distribution over
system actions. The LSTM automatically infers a representation of dialog
history, which relieves the system developer of much of the manual feature
engineering of dialog state. In addition, the developer can provide software
that expresses business rules and provides access to programmatic APIs,
enabling the LSTM to take actions in the real world on behalf of the user. The
LSTM can be optimized using supervised learning (SL), where a domain expert
provides example dialogs which the LSTM should imitate; or using reinforcement
learning (RL), where the system improves by interacting directly with end
users. Experiments show that SL and RL are complementary: SL alone can derive a
reasonable initial policy from a small number of training dialogs; and starting
RL optimization with a policy trained with SL substantially accelerates the
learning rate of RL.
| Jason D. Williams and Geoffrey Zweig | null | 1606.01269 | null | null |
Predicting with Distributions | cs.DS cs.LG | We consider a new learning model in which a joint distribution over vector
pairs $(x,y)$ is determined by an unknown function $c(x)$ that maps input
vectors $x$ not to individual outputs, but to entire {\em distributions\/} over
output vectors $y$. Our main results take the form of rather general reductions
from our model to algorithms for PAC learning the function class and the
distribution class separately, and show that virtually every such combination
yields an efficient algorithm in our model. Our methods include a randomized
reduction to classification noise and an application of Le Cam's method to
obtain robust learning algorithms.
| Michael Kearns, Zhiwei Steven Wu | null | 1606.01275 | null | null |
Dependency Parsing as Head Selection | cs.CL cs.LG | Conventional graph-based dependency parsers guarantee a tree structure both
during training and inference. Instead, we formalize dependency parsing as the
problem of independently selecting the head of each word in a sentence. Our
model, which we call \textsc{DeNSe} (as shorthand for {\bf De}pendency {\bf
N}eural {\bf Se}lection), produces a distribution over possible heads for each
word using features obtained from a bidirectional recurrent neural network.
Without enforcing structural constraints during training, \textsc{DeNSe}
generates (at inference time) trees for the overwhelming majority of sentences,
while non-tree outputs can be adjusted with a maximum spanning tree algorithm.
We evaluate \textsc{DeNSe} on four languages (English, Chinese, Czech, and
German) with varying degrees of non-projectivity. Despite the simplicity of the
approach, our parsers are on par with the state of the art.
| Xingxing Zhang, Jianpeng Cheng and Mirella Lapata | null | 1606.01280 | null | null |
Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations | cs.NE cs.CL cs.LG | We propose zoneout, a novel method for regularizing RNNs. At each timestep,
zoneout stochastically forces some hidden units to maintain their previous
values. Like dropout, zoneout uses random noise to train a pseudo-ensemble,
improving generalization. But by preserving instead of dropping hidden units,
gradient information and state information are more readily propagated through
time, as in feedforward stochastic depth networks. We perform an empirical
investigation of various RNN regularizers, and find that zoneout gives
significant performance improvements across tasks. We achieve competitive
results with relatively simple models in character- and word-level language
modelling on the Penn Treebank and Text8 datasets, and combining with recurrent
batch normalization yields state-of-the-art results on permuted sequential
MNIST.
| David Krueger, Tegan Maharaj, J\'anos Kram\'ar, Mohammad Pezeshki,
Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Aaron
Courville, Chris Pal | null | 1606.01305 | null | null |
Distance Metric Ensemble Learning and the Andrews-Curtis Conjecture | cs.AI cs.DM cs.LG | Motivated by the search for a counterexample to the Poincar\'e conjecture in
three and four dimensions, the Andrews-Curtis conjecture was proposed in 1965.
It is now generally suspected that the Andrews-Curtis conjecture is false, but
small potential counterexamples are not so numerous, and previous work has
attempted to eliminate some via combinatorial search. Progress has however been
limited, with the most successful approach (breadth-first-search using
secondary storage) being neither scalable nor heuristically-informed. A
previous empirical analysis of problem structure examined several heuristic
measures of search progress and determined that none of them provided any
useful guidance for search. In this article, we induce new quality measures
directly from the problem structure and combine them to produce a more
effective search driver via ensemble machine learning. By this means, we
eliminate 19 potential counterexamples, the status of which had been unknown
for some years.
| Krzysztof Krawiec and Jerry Swan | null | 1606.01412 | null | null |
Deep Q-Networks for Accelerating the Training of Deep Neural Networks | cs.LG cs.NE | In this paper, we propose a principled deep reinforcement learning (RL)
approach that is able to accelerate the convergence rate of general deep neural
networks (DNNs). With our approach, a deep RL agent (synonym for optimizer in
this work) is used to automatically learn policies about how to schedule
learning rates during the optimization of a DNN. The state features of the
agent are learned from the weight statistics of the optimizee during training.
The reward function of this agent is designed to learn policies that minimize
the optimizee's training time given a certain performance goal. The actions of
the agent correspond to changing the learning rate for the optimizee during
training. As far as we know, this is the first attempt to use deep RL to learn
how to optimize a large-sized DNN. We perform extensive experiments on a
standard benchmark dataset and demonstrate the effectiveness of the policies
learned by our approach.
| Jie Fu | null | 1606.01467 | null | null |
Bounds for Vector-Valued Function Estimation | stat.ML cs.LG | We present a framework to derive risk bounds for vector-valued learning with
a broad class of feature maps and loss functions. Multi-task learning and
one-vs-all multi-category learning are treated as examples. We discuss in
detail vector-valued functions with one hidden layer, and demonstrate that the
conditions under which shared representations are beneficial for multi-task
learning are equally applicable to multi-category learning.
| Andreas Maurer and Massimiliano Pontil | null | 1606.01487 | null | null |
Adaptive Submodular Ranking and Routing | cs.DS cs.LG | We study a general stochastic ranking problem where an algorithm needs to
adaptively select a sequence of elements so as to "cover" a random scenario
(drawn from a known distribution) at minimum expected cost. The coverage of
each scenario is captured by an individual submodular function, where the
scenario is said to be covered when its function value goes above a given
threshold. We obtain a logarithmic factor approximation algorithm for this
adaptive ranking problem, which is the best possible (unless P=NP). This
problem unifies and generalizes many previously studied problems with
applications in search ranking and active learning. The approximation ratio of
our algorithm either matches or improves the best result known in each of these
special cases. Furthermore, we extend our results to an adaptive vehicle
routing problem, where costs are determined by an underlying metric. This
routing problem is a significant generalization of the previously-studied
adaptive traveling salesman and traveling repairman problems. Our approximation
ratio nearly matches the best bound known for these special cases. Finally, we
present experimental results for some applications of adaptive ranking.
| Fatemeh Navidi, Prabhanjan Kambadur, Viswanath Nagarajan | null | 1606.01530 | null | null |
OpenAI Gym | cs.LG cs.AI | OpenAI Gym is a toolkit for reinforcement learning research. It includes a
growing collection of benchmark problems that expose a common interface, and a
website where people can share their results and compare the performance of
algorithms. This whitepaper discusses the components of OpenAI Gym and the
design decisions that went into the software.
| Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John
Schulman, Jie Tang, Wojciech Zaremba | null | 1606.01540 | null | null |
Gated-Attention Readers for Text Comprehension | cs.CL cs.LG | In this paper we study the problem of answering cloze-style questions over
documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop
architecture with a novel attention mechanism, which is based on multiplicative
interactions between the query embedding and the intermediate states of a
recurrent neural network document reader. This enables the reader to build
query-specific representations of tokens in the document for accurate answer
selection. The GA Reader obtains state-of-the-art results on three benchmarks
for this task--the CNN \& Daily Mail news stories and the Who Did What dataset.
The effectiveness of multiplicative interaction is demonstrated by an ablation
study, and by comparing to alternative compositional operators for implementing
the gated-attention. The code is available at
https://github.com/bdhingra/ga-reader.
| Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, Ruslan
Salakhutdinov | null | 1606.01549 | null | null |
Active Regression with Adaptive Huber Loss | cs.LG cs.CV | This paper addresses the scalar regression problem through a novel solution
to exactly optimize the Huber loss in a general semi-supervised setting, which
combines multi-view learning and manifold regularization. We propose a
principled algorithm to 1) avoid computationally expensive iterative schemes
while 2) adapting the Huber loss threshold in a data-driven fashion and 3)
actively balancing the use of labelled data to remove noisy or inconsistent
annotations at the training stage. In a wide experimental evaluation, dealing
with diverse applications, we assess the superiority of our paradigm which is
able to combine robustness towards noise with both strong performance and low
computational cost.
| Jacopo Cavazza and Vittorio Murino | null | 1606.01568 | null | null |
Semi-Supervised Learning with Generative Adversarial Networks | stat.ML cs.LG | We extend Generative Adversarial Networks (GANs) to the semi-supervised
context by forcing the discriminator network to output class labels. We train a
generative model G and a discriminator D on a dataset with inputs belonging to
one of N classes. At training time, D is made to predict which of N+1 classes
the input belongs to, where an extra class is added to correspond to the
outputs of G. We show that this method can be used to create a more
data-efficient classifier and that it allows for generating higher quality
samples than a regular GAN.
| Augustus Odena | null | 1606.01583 | null | null |
Curie: A method for protecting SVM Classifier from Poisoning Attack | cs.CR cs.LG | Machine learning is used in a number of security-related applications such as
biometric user authentication, speaker identification, etc. A type of causative
integrity attack against machine learning, called a poisoning attack, works by
injecting specially crafted data points into the training data so as to increase
the false positive rate of the classifier. In the context of biometric
authentication, this means that more intruders will be classified as valid
users, and in the case of a speaker identification system, user A will be
classified as user B. In this paper, we examine poisoning attacks against SVMs
and introduce Curie, a method to protect the SVM classifier from such attacks.
The basic idea of our method is to identify the poisoned data points injected
by the adversary and filter them out. Our method is lightweight and can be
easily integrated into existing systems. Experimental results show that it works very
well in filtering out the poisoned data.
| Ricky Laishram, Vir Virander Phoha | null | 1606.01584 | null | null |
A Deep-Learning Approach for Operation of an Automated Realtime Flare
Forecast | astro-ph.SR cs.LG | Automated forecasts serve an important role in space weather science, by
providing statistical insights into flare-trigger mechanisms, and by enabling
tailor-made and high-frequency forecasts. Only by realtime forecasting can we
experimentally measure the performance of flare-forecasting methods while
confidently avoiding overlearning.
We have been operating an unmanned flare forecast service since August 2015
that provides 24-hour-ahead forecasts of solar flares every 12 minutes. We
report the method and prediction results of the system.
| Yuko Hada-Muranushi, Takayuki Muranushi, Ayumi Asai, Daisuke
Okanohara, Rudy Raymond, Gentaro Watanabe, Shigeru Nemoto, Kazunari Shibata | null | 1606.01587 | null | null |
Feedforward Initialization for Fast Inference of Deep Generative
Networks is biologically plausible | cs.LG cs.NE q-bio.NC | We consider deep multi-layered generative models such as Boltzmann machines
or Hopfield nets in which computation (which implements inference) is both
recurrent and stochastic, but where the recurrence is not to model sequential
structure, only to perform computation. We find conditions under which a simple
feedforward computation is a very good initialization for inference, after the
input units are clamped to observed values. It means that after the feedforward
initialization, the recurrent network is very close to a fixed point of the
network dynamics, where the energy gradient is 0. The main condition is that
consecutive layers form a good auto-encoder, or more generally that different
groups of inputs into the unit (in particular, bottom-up inputs on one hand,
top-down inputs on the other hand) are consistent with each other, producing
the same contribution into the total weighted sum of inputs. In biological
terms, this would correspond to having each dendritic branch correctly
predicting the aggregate input from all the dendritic branches, i.e., the soma
potential. This is consistent with the prediction that the synaptic weights
into dendritic branches such as those of the apical and basal dendrites of
pyramidal cells are trained to minimize the prediction error made by the
dendritic branch when the target is the somatic activity. Whereas previous work
has shown how to achieve fast negative phase inference (when the model is
unclamped) in a predictive recurrent model, this contribution helps to achieve
fast positive phase inference (when the target output is clamped) in such
recurrent neural models.
| Yoshua Bengio, Benjamin Scellier, Olexa Bilaniuk, Joao Sacramento and
Walter Senn | null | 1606.01651 | null | null |
Integrated perception with recurrent multi-task neural networks | stat.ML cs.CV cs.LG | Modern discriminative predictors have been shown to match natural
intelligences in specific perceptual tasks in image classification, object and
part detection, boundary extraction, etc. However, a major advantage that
natural intelligences still have is that they work well for "all" perceptual
problems together, solving them efficiently and coherently in an "integrated
manner". In order to capture some of these advantages in machine perception, we
ask two questions: whether deep neural networks can learn universal image
representations, useful not only for a single task but for all of them, and how
the solutions to the different tasks can be integrated in this framework. We
answer by proposing a new architecture, which we call "MultiNet", in which not
only deep image features are shared between tasks, but where tasks can interact
in a recurrent manner by encoding the results of their analysis in a common
shared representation of the data. In this manner, we show that the performance
of individual tasks in standard benchmarks can be improved first by sharing
features between them and then, more significantly, by integrating their
solutions in the common representation.
| Hakan Bilen and Andrea Vedaldi | null | 1606.01735 | null | null |
Very Deep Convolutional Networks for Text Classification | cs.CL cs.LG cs.NE | The dominant approaches for many NLP tasks are recurrent neural networks, in
particular LSTMs, and convolutional neural networks. However, these
architectures are rather shallow in comparison to the deep convolutional
networks which have pushed the state-of-the-art in computer vision. We present
a new architecture (VDCNN) for text processing which operates directly at the
character level and uses only small convolutions and pooling operations. We are
able to show that the performance of this model increases with depth: using up
to 29 convolutional layers, we report improvements over the state-of-the-art on
several public text classification tasks. To the best of our knowledge, this is
the first time that very deep convolutional nets have been applied to text
processing.
| Alexis Conneau, Holger Schwenk, Lo\"ic Barrault, Yann Lecun | null | 1606.01781 | null | null |
Low-rank Optimization with Convex Constraints | math.OC cs.LG stat.ML | The problem of low-rank approximation with convex constraints, which appears
in data analysis, system identification, model order reduction, low-order
controller design and low-complexity modelling, is considered. Given a matrix,
the objective is to find a low-rank approximation that meets rank and convex
constraints, while minimizing the distance to the matrix in the squared
Frobenius norm. In many situations, this non-convex problem is convexified by
nuclear norm regularization. However, we will see that the approximations
obtained by this method may be far from optimal. In this paper, we propose an
alternative convex relaxation that uses the convex envelope of the squared
Frobenius norm and the rank constraint. With this approach, easily verifiable
conditions are obtained under which the solutions to the convex relaxation and
the original non-convex problem coincide. An SDP representation of the convex
envelope is derived, which allows us to apply this approach to several known
problems. Our example on optimal low-rank Hankel approximation/model reduction
illustrates that the proposed convex relaxation performs consistently better
than nuclear norm regularization and may outperform balanced truncation.
| Christian Grussler, Anders Rantzer and Pontus Giselsson | 10.1109/TAC.2018.2813009 | 1606.01793 | null | null |
Bayesian Poisson Tucker Decomposition for Learning the Structure of
International Relations | stat.ML cs.AI cs.LG cs.SI stat.AP | We introduce Bayesian Poisson Tucker decomposition (BPTD) for modeling
country--country interaction event data. These data consist of interaction
events of the form "country $i$ took action $a$ toward country $j$ at time
$t$." BPTD discovers overlapping country--community memberships, including the
number of latent communities. In addition, it discovers directed
community--community interaction networks that are specific to "topics" of
action types and temporal "regimes." We show that BPTD yields an efficient MCMC
inference algorithm and achieves better predictive performance than related
models. We also demonstrate that it discovers interpretable latent structure
that agrees with our knowledge of international relations.
| Aaron Schein, Mingyuan Zhou, David M. Blei, Hanna Wallach | null | 1606.01855 | null | null |
Recurrent Neural Networks for Multivariate Time Series with Missing
Values | cs.LG cs.NE stat.ML | Multivariate time series data in practical applications, such as health care,
geoscience, and biology, are characterized by a variety of missing values. In
time series prediction and other related tasks, it has been noted that missing
values and their missing patterns are often correlated with the target labels,
a.k.a., informative missingness. There is very limited work on exploiting the
missing patterns for effective imputation and improving prediction performance.
In this paper, we develop novel deep learning models, namely GRU-D, as one of
the early attempts. GRU-D is based on Gated Recurrent Unit (GRU), a
state-of-the-art recurrent neural network. It takes two representations of
missing patterns, i.e., masking and time interval, and effectively incorporates
them into a deep model architecture so that it not only captures the long-term
temporal dependencies in time series, but also utilizes the missing patterns to
achieve better prediction results. Experiments of time series classification
tasks on real-world clinical datasets (MIMIC-III, PhysioNet) and synthetic
datasets demonstrate that our models achieve state-of-the-art performance and
provide useful insights for better understanding and utilization of missing
values in time series analysis.
| Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, Yan
Liu | null | 1606.01865 | null | null |
Unifying Count-Based Exploration and Intrinsic Motivation | cs.AI cs.LG stat.ML | We consider an agent's uncertainty about its environment and the problem of
generalizing this uncertainty across observations. Specifically, we focus on
the problem of exploration in non-tabular reinforcement learning. Drawing
inspiration from the intrinsic motivation literature, we use density models to
measure uncertainty, and propose a novel algorithm for deriving a pseudo-count
from an arbitrary density model. This technique enables us to generalize
count-based exploration algorithms to the non-tabular case. We apply our ideas
to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We
transform these pseudo-counts into intrinsic rewards and obtain significantly
improved exploration in a number of hard games, including the infamously
difficult Montezuma's Revenge.
| Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul,
David Saxton, Remi Munos | null | 1606.01868 | null | null |
Learning to Optimize | cs.LG cs.AI math.OC stat.ML | Algorithm design is a laborious process and often requires many iterations of
ideation and validation. In this paper, we explore automating algorithm design
and present a method to learn an optimization algorithm, which we believe to be
the first method that can automatically discover a better algorithm. We
approach this problem from a reinforcement learning perspective and represent
any particular optimization algorithm as a policy. We learn an optimization
algorithm using guided policy search and demonstrate that the resulting
algorithm outperforms existing hand-engineered algorithms in terms of
convergence speed and/or the final objective value.
| Ke Li and Jitendra Malik | null | 1606.01885 | null | null |
Deep neural networks are robust to weight binarization and other
non-linear distortions | cs.NE cs.CV cs.LG | Recent results show that deep neural networks achieve excellent performance
even when, during training, weights are quantized and projected to a binary
representation. Here, we show that this is just the tip of the iceberg: these
same networks, during testing, also exhibit a remarkable robustness to
distortions beyond quantization, including additive and multiplicative noise,
and a class of non-linear projections where binarization is just a special
case. To quantify this robustness, we show that one such network achieves 11%
test error on CIFAR-10 even with 0.68 effective bits per weight. Furthermore,
we find that a common training heuristic--namely, projecting quantized weights
during backpropagation--can be altered (or even removed) and networks still
achieve a base level of robustness during testing. Specifically, training with
weight projections other than quantization also works, as does simply clipping
the weights, neither of which has been reported before. We confirm our
results for CIFAR-10 and ImageNet datasets. Finally, drawing from these ideas,
we propose a stochastic projection rule that leads to a new state of the art
network with 7.64% test error on CIFAR-10 using no data augmentation.
| Paul Merolla, Rathinakumar Appuswamy, John Arthur, Steve K. Esser,
Dharmendra Modha | null | 1606.01981 | null | null |
Regret Bounds for Non-decomposable Metrics with Missing Labels | cs.LG stat.ML | We consider the problem of recommending relevant labels (items) for a given
data point (user). In particular, we are interested in the practically
important setting where the evaluation is with respect to non-decomposable
(over labels) performance metrics like the $F_1$ measure, and the training data
has missing labels. To this end, we propose a generic framework that given a
performance metric $\Psi$, can devise a regularized objective function and a
threshold such that all the values in the predicted score vector above and only
above the threshold are selected to be positive. We show that the regret or
generalization error in the given metric $\Psi$ is bounded ultimately by
estimation error of certain underlying parameters. In particular, we derive
regret bounds under three popular settings: a) collaborative filtering, b)
multilabel classification, and c) PU (positive-unlabeled) learning. For each of
the above problems, we can obtain a precise non-asymptotic regret bound which is
small even when a large fraction of labels is missing. Our empirical results on
synthetic and benchmark datasets demonstrate that by explicitly modeling for
missing labels and optimizing the desired performance metric, our algorithm
indeed achieves significantly better performance (like $F_1$ score) when
compared to methods that do not model missing label information carefully.
| Prateek Jain and Nagarajan Natarajan | null | 1606.02077 | null | null |
Efficient differentially private learning improves drug sensitivity
prediction | stat.ML cs.CR cs.LG stat.ME | Users of a personalised recommendation system face a dilemma: recommendations
can be improved by learning from data, but only if the other users are willing
to share their private information. Good personalised predictions are vitally
important in precision medicine, but genomic information on which the
predictions are based is also particularly sensitive, as it directly identifies
the patients and hence cannot easily be anonymised. Differential privacy has
emerged as a potentially promising solution: privacy is considered sufficient
if presence of individual patients cannot be distinguished. However,
differentially private learning with current methods does not improve
predictions with feasible data sizes and dimensionalities. Here we show that
useful predictors can be learned under powerful differential privacy
guarantees, and even from moderately-sized data sets, by demonstrating
significant improvements with a new robust private regression method in the
accuracy of private drug sensitivity prediction. The method combines two key
properties not present even in recent proposals, which can be generalised to
other predictors: we prove it is asymptotically consistently and efficiently
private, and demonstrate that it performs well on finite data. Good finite data
performance is achieved by limiting the sharing of private information by
decreasing the dimensionality and by projecting outliers to fit tighter bounds,
therefore needing to add less noise for equal privacy. As already the
simple-to-implement method shows promise on the challenging genomic data, we
anticipate rapid progress towards practical applications in many fields, such
as mobile sensing and social media, in addition to the badly needed precision
medicine solutions.
| Antti Honkela, Mrinal Das, Arttu Nieminen, Onur Dikmen and Samuel
Kaski | 10.1186/s13062-017-0203-4 | 1606.02109 | null | null |
Towards a Neural Statistician | stat.ML cs.LG | An efficient learner is one who reuses what they already know to tackle a new
problem. For a machine learner, this means understanding the similarities
amongst datasets. In order to do this, one must take seriously the idea of
working with datasets, rather than datapoints, as the key objects to model.
Towards this goal, we demonstrate an extension of a variational autoencoder
that can learn a method for computing representations, or statistics, of
datasets in an unsupervised fashion. The network is trained to produce
statistics that encapsulate a generative model for each dataset. Hence the
network enables efficient learning from new datasets for both unsupervised and
supervised tasks. We show that we are able to learn statistics that can be used
for: clustering datasets, transferring generative models to new datasets,
selecting representative samples of datasets and classifying previously unseen
classes. We refer to our model as a neural statistician, and by this we mean a
neural network that can learn to compute summary statistics of datasets without
supervision.
| Harrison Edwards, Amos Storkey | null | 1606.02185 | null | null |
Adapting Sampling Interval of Sensor Networks Using On-Line
Reinforcement Learning | cs.NI cs.LG cs.SY | Monitoring Wireless Sensor Networks (WSNs) are composed of sensor nodes that
report temperature, relative humidity, and other environmental parameters. The
time between two successive measurements is a critical parameter to set during
the WSN configuration because it can impact the WSN's lifetime, the wireless
medium contention and the quality of the reported data. As trends in monitored
parameters can significantly vary between scenarios and within time,
identifying a sampling interval suitable for several cases is also challenging.
In this work, we propose a dynamic sampling rate adaptation scheme based on
reinforcement learning, able to tune sensors' sampling interval on-the-fly,
according to environmental conditions and application requirements. The primary
goal is to set the sampling interval to the best value possible so as to avoid
oversampling and save energy, while not missing environmental changes that can
be relevant for the application. In simulations, our mechanism could reduce up
to 73% the total number of transmissions compared to a fixed strategy and,
simultaneously, keep the average quality of information provided by the WSN.
The inherent flexibility of the reinforcement learning algorithm facilitates
its use in several scenarios, so as to exploit the broad scope of the Internet
of Things.
| Gabriel Martins Dias, Maddalena Nurchis and Boris Bellalta | null | 1606.02193 | null | null |
A Minimax Approach to Supervised Learning | stat.ML cs.IT cs.LG math.IT | Given a task of predicting $Y$ from $X$, a loss function $L$, and a set of
probability distributions $\Gamma$ on $(X,Y)$, what is the optimal decision
rule minimizing the worst-case expected loss over $\Gamma$? In this paper, we
address this question by introducing a generalization of the principle of
maximum entropy. Applying this principle to sets of distributions with marginal
on $X$ constrained to be the empirical marginal from the data, we develop a
general minimax approach for supervised learning problems. While for some loss
functions such as squared-error and log loss, the minimax approach rederives
well-knwon regression models, for the 0-1 loss it results in a new linear
classifier which we call the maximum entropy machine. The maximum entropy
machine minimizes the worst-case 0-1 loss over the structured set of
distribution, and by our numerical experiments can outperform other well-known
linear classifiers such as SVM. We also prove a bound on the generalization
worst-case error in the minimax approach.
| Farzan Farnia, David Tse | null | 1606.02206 | null | null |
Systematic evaluation of CNN advances on the ImageNet | cs.NE cs.CV cs.LG | The paper systematically studies the impact of a range of recent advances in
CNN architectures and learning methods on the object categorization (ILSVRC)
problem. The evaluation tests the influence of the following choices of the
architecture: non-linearity (ReLU, ELU, maxout, compatibility with batch
normalization), pooling variants (stochastic, max, average, mixed), network
width, classifier design (convolutional, fully-connected, SPP), image
pre-processing, and of learning parameters: learning rate, batch size,
cleanliness of the data, etc.
The performance gains of the proposed modifications are first tested
individually and then in combination. The sum of individual gains is bigger
than the observed improvement when all modifications are introduced, but the
"deficit" is small suggesting independence of their benefits. We show that the
use of 128x128 pixel images is sufficient to make qualitative conclusions about
optimal network structure that hold for the full size Caffe and VGG nets. The
results are obtained an order of magnitude faster than with the standard 224
pixel images.
| Dmytro Mishkin and Nikolay Sergievskiy and Jiri Matas | 10.1016/j.cviu.2017.05.007 | 1606.02228 | null | null |
Measuring the reliability of MCMC inference with bidirectional Monte
Carlo | cs.LG stat.CO stat.ML | Markov chain Monte Carlo (MCMC) is one of the main workhorses of
probabilistic inference, but it is notoriously hard to measure the quality of
approximate posterior samples. This challenge is particularly salient in black
box inference methods, which can hide details and obscure inference failures.
In this work, we extend the recently introduced bidirectional Monte Carlo
technique to evaluate MCMC-based posterior inference algorithms. By running
annealed importance sampling (AIS) chains both from prior to posterior and vice
versa on simulated data, we upper bound in expectation the symmetrized KL
divergence between the true posterior distribution and the distribution of
approximate samples. We present Bounding Divergences with REverse Annealing
(BREAD), a protocol for validating the relevance of simulated data experiments
to real datasets, and integrate it into two probabilistic programming
languages: WebPPL and Stan. As an example of how BREAD can be used to guide the
design of inference algorithms, we apply it to study the effectiveness of
different model representations in both WebPPL and Stan.
| Roger B. Grosse and Siddharth Ancha and Daniel M. Roy | null | 1606.02275 | null | null |
Semi-supervised structured output prediction by local linear regression
and sub-gradient descent | cs.LG | We propose a novel semi-supervised structured output prediction method based
on local linear regression in this paper. Existing semi-supervised
structured output prediction methods learn a global predictor for all the data
points in a data set, which ignores differences in the local distributions of
the data set and their effects on the structured output prediction. To solve
this problem, we propose to learn the missing structured outputs and local
predictors for neighborhoods of different data points jointly. Using the local
linear regression strategy, in the neighborhood of each data point, we propose
to learn a local linear predictor by minimizing both the complexity of the
predictor and the upper bound of the structured prediction loss. The
minimization problem is solved by sub-gradient descent algorithms. We conduct
experiments over two benchmark data sets, and the results show the advantages
of the proposed method.
| Ru-Ze Liang, Wei Xie, Weizhi Li, Xin Du, Jim Jing-Yan Wang, Jingbin
Wang | null | 1606.02279 | null | null |
How is a data-driven approach better than random choice in label space
division for multi-label classification? | cs.LG cs.PF cs.SI stat.ML | We propose using five data-driven community detection approaches from social
networks to partition the label space for the task of multi-label
classification as an alternative to random partitioning into equal subsets as
performed by RAkELd: modularity-maximizing fastgreedy and leading eigenvector,
infomap, walktrap and label propagation algorithms. We construct a label
co-occurrence graph (both weighted and unweighted versions) based on training
data and perform community detection to partition the label set. We include
Binary Relevance and Label Powerset classification methods for comparison. We
use gini-index based Decision Trees as the base classifier. We compare educated
approaches to label space divisions against random baselines on 12 benchmark
data sets over five evaluation measures. We show that in almost all cases seven
educated guess approaches are more likely to outperform RAkELd than otherwise
in all measures but Hamming Loss. We show that fastgreedy and walktrap
community detection methods on weighted label co-occurrence graphs are 85-92%
more likely to yield better F1 scores than random partitioning. Infomap on the
unweighted label co-occurrence graphs is on average 90% of the time better than
random partitioning in terms of Subset Accuracy and 89% when it comes to Jaccard
similarity. Weighted fastgreedy is better on average than RAkELd when it comes
to Hamming Loss.
| Piotr Szyma\'nski, Tomasz Kajdanowicz, Kristian Kersting | 10.3390/e18080282 | 1606.02346 | null | null |
Active Long Term Memory Networks | cs.LG cs.AI stat.ML | Continual Learning in artificial neural networks suffers from interference
and forgetting when different tasks are learned sequentially. This paper
introduces the Active Long Term Memory Networks (A-LTM), a model of sequential
multi-task deep learning that is able to maintain previously learned
associations between sensory input and behavioral output while acquiring new
knowledge. A-LTM exploits the non-convex nature of deep neural networks and
actively maintains knowledge of previously learned, inactive tasks using a
distillation loss. Distortions of the learned input-output map are penalized
but hidden layers are free to traverse towards new local optima that are more
favorable for the multi-task objective. We re-frame McClelland's seminal
hippocampal theory with respect to the Catastrophic Interference (CI) behavior
exhibited by modern deep architectures trained with back-propagation and
inhomogeneous sampling of latent factors across epochs. We present empirical
results of non-trivial CI during continual learning in Deep Linear Networks
trained on the same task, in Convolutional Neural Networks when the task shifts
from predicting semantic to graphical factors and during domain adaptation from
simple to complex environments. We present results of the A-LTM model's ability
to maintain viewpoint recognition learned in the highly controlled iLab-20M
dataset with 10 object categories and 88 camera viewpoints, while adapting to
the unstructured domain of Imagenet with 1,000 object categories.
| Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, Bosco
S. Tjan | null | 1606.02355 | null | null |
SE3-Nets: Learning Rigid Body Motion using Deep Neural Networks | cs.LG cs.AI cs.CV cs.RO | We introduce SE3-Nets, which are deep neural networks designed to model and
learn rigid body motion from raw point cloud data. Based only on sequences of
depth images along with action vectors and point-wise data associations,
SE3-Nets learn to segment affected object parts and predict their motion
resulting from the applied force. Rather than learning point-wise flow vectors,
SE3-Nets predict SE3 transformations for different parts of the scene. Using
simulated depth data of a table top scene and a robot manipulator, we show that
the structure underlying SE3-Nets enables them to generate a far more
consistent prediction of object motion than traditional flow based networks.
Additional experiments with a depth camera observing a Baxter robot pushing
objects on a table show that SE3-Nets also work well on real data.
| Arunkumar Byravan and Dieter Fox | null | 1606.02378 | null | null |
Deep Successor Reinforcement Learning | stat.ML cs.AI cs.LG cs.NE | Learning robust value functions given raw observations and rewards is now
possible with model-free and model-based deep reinforcement learning
algorithms. There is a third alternative, called Successor Representations
(SR), which decomposes the value function into two components -- a reward
predictor and a successor map. The successor map represents the expected future
state occupancy from any given state and the reward predictor maps states to
scalar rewards. The value function of a state can be computed as the inner
product between the successor map and the reward weights. In this paper, we
present DSR, which generalizes SR within an end-to-end deep reinforcement
learning framework. DSR has several appealing properties including: increased
sensitivity to distal reward changes due to factorization of reward and world
dynamics, and the ability to extract bottleneck states (subgoals) given
successor maps trained under a random policy. We show the efficacy of our
approach on two diverse environments given raw pixel observations -- simple
grid-world domains (MazeBase) and the Doom game engine.
| Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, Samuel J. Gershman | null | 1606.02396 | null | null |
Clustering with Same-Cluster Queries | cs.LG stat.ML | We propose a framework for Semi-Supervised Active Clustering
(SSAC), where the learner is allowed to interact with a domain expert, asking
whether two given instances belong to the same cluster or not. We study the
query and computational complexity of clustering in this framework. We consider
a setting where the expert conforms to a center-based clustering with a notion
of margin. We show that there is a trade-off between computational complexity
and query complexity; we prove that for the case of $k$-means clustering (i.e.,
when the expert conforms to a solution of $k$-means), having access to
relatively few such queries allows efficient solutions to otherwise NP hard
problems.
In particular, we provide a probabilistic polynomial-time (BPP) algorithm for
clustering in this setting that asks $O(k^2\log k + k\log n)$ same-cluster
queries and runs with time complexity $O(kn\log n)$ (where $k$ is the
number of clusters and $n$ is the number of instances). The algorithm succeeds
with high probability for data satisfying margin conditions under which,
without queries, we show that the problem is NP hard. We also prove a lower
bound on the number of queries needed to have a computationally efficient
clustering algorithm in this setting.
| Hassan Ashtiani, Shrinu Kushagra and Shai Ben-David | null | 1606.02404 | null | null |
Structured Convolution Matrices for Energy-efficient Deep learning | cs.NE cs.AI cs.CV cs.LG | We derive a relationship between network representation in energy-efficient
neuromorphic architectures and block Toeplitz convolutional matrices. Inspired
by this connection, we develop deep convolutional networks using a family of
structured convolutional matrices and achieve state-of-the-art trade-off
between energy efficiency and classification accuracy for well-known image
recognition tasks. We also put forward a novel method to train binary
convolutional networks by utilising an existing connection between
noisy-rectified linear units and binary activations.
| Rathinakumar Appuswamy, Tapan Nayak, John Arthur, Steven Esser, Paul
Merolla, Jeffrey Mckinstry, Timothy Melano, Myron Flickner, Dharmendra Modha | null | 1606.02407 | null | null |
Gossip Dual Averaging for Decentralized Optimization of Pairwise
Functions | stat.ML cs.AI cs.DC cs.LG cs.SY | In decentralized networks (of sensors, connected objects, etc.), there is an
important need for efficient algorithms to optimize a global cost function, for
instance to learn a global model from the local data collected by each
computing unit. In this paper, we address the problem of decentralized
minimization of pairwise functions of the data points, where these points are
distributed over the nodes of a graph defining the communication topology of
the network. This general problem finds applications in ranking, distance
metric learning and graph inference, among others. We propose new gossip
algorithms based on dual averaging which aim at solving such problems both in
synchronous and asynchronous settings. The proposed framework is flexible
enough to deal with constrained and regularized variants of the optimization
problem. Our theoretical analysis reveals that the proposed algorithms preserve
the convergence rate of centralized dual averaging up to an additive bias term.
We present numerical simulations on Area Under the ROC Curve (AUC) maximization
and metric learning problems which illustrate the practical interest of our
approach.
| Igor Colin, Aur\'elien Bellet, Joseph Salmon, St\'ephan
Cl\'emen\c{c}on | null | 1606.02421 | null | null |
Multiple-Play Bandits in the Position-Based Model | cs.LG math.ST stat.TH | Sequentially learning to place items in multi-position displays or lists is a
task that can be cast into the multiple-play semi-bandit setting. However, a
major concern in this context is when the system cannot decide whether the user
feedback for each item is actually exploitable. Indeed, much of the content may
have been simply ignored by the user. The present work proposes to exploit
available information regarding the display position bias under the so-called
Position-based click model (PBM). We first discuss how this model differs from
the Cascade model and its variants considered in several recent works on
multiple-play bandits. We then provide a novel regret lower bound for this
model as well as computationally efficient algorithms that display good
empirical and theoretical performance.
| Paul Lagr\'ee (UP11, LRI), Claire Vernade (LTCI), Olivier Capp\'e
(LTCI) | null | 1606.02448 | null | null |
Convolutional Neural Fabrics | cs.CV cs.LG cs.NE | Despite the success of CNNs, selecting the optimal architecture for a given
task remains an open problem. Instead of aiming to select a single optimal
architecture, we propose a "fabric" that embeds an exponentially large number
of architectures. The fabric consists of a 3D trellis that connects response
maps at different layers, scales, and channels with a sparse homogeneous local
connectivity pattern. The only hyper-parameters of a fabric are the number of
channels and layers. While individual architectures can be recovered as paths,
the fabric can in addition ensemble all embedded architectures together,
sharing their weights where their paths overlap. Parameters can be learned
using standard methods based on back-propagation, at a cost that scales
linearly in the fabric size. We present benchmark results competitive with the
state of the art for image classification on MNIST and CIFAR10, and for
semantic segmentation on the Part Labels dataset.
| Shreyas Saxena and Jakob Verbeek | null | 1606.02492 | null | null |
Improving Recurrent Neural Networks For Sequence Labelling | cs.CL cs.LG cs.NE | In this paper we study different types of Recurrent Neural Networks (RNN) for
sequence labeling tasks. We propose two new variants of RNNs integrating
improvements for sequence labeling, and we compare them to the more traditional
Elman and Jordan RNNs. We compare all models, either traditional or new, on
four distinct tasks of sequence labeling: two on Spoken Language Understanding
(ATIS and MEDIA); and two on POS tagging for the French Treebank (FTB) and the
Penn Treebank (PTB) corpora. The results show that our new variants of RNNs are
always more effective than the others.
| Marco Dinarelli and Isabelle Tellier | null | 1606.02555 | null | null |
Towards End-to-End Learning for Dialog State Tracking and Management
using Deep Reinforcement Learning | cs.AI cs.CL cs.LG | This paper presents an end-to-end framework for task-oriented dialog systems
using a variant of Deep Recurrent Q-Networks (DRQN). The model is able to
interface with a relational database and jointly learn policies for both
language understanding and dialog strategy. Moreover, we propose a hybrid
algorithm that combines the strength of reinforcement learning and supervised
learning to achieve faster learning speed. We evaluated the proposed model on a
20 Question Game conversational game simulator. Results show that the proposed
method outperforms the modular-based baseline and learns a distributed
representation of the latent dialog state.
| Tiancheng Zhao and Maxine Eskenazi | null | 1606.02560 | null | null |
Convolution by Evolution: Differentiable Pattern Producing Networks | cs.NE cs.CV cs.LG | In this work we introduce a differentiable version of the Compositional
Pattern Producing Network, called the DPPN. Unlike a standard CPPN, the
topology of a DPPN is evolved but the weights are learned. A Lamarckian
algorithm, that combines evolution and learning, produces DPPNs to reconstruct
an image. Our main result is that DPPNs can be evolved/trained to compress the
weights of a denoising autoencoder from 157684 to roughly 200 parameters, while
achieving a reconstruction accuracy comparable to a fully connected network
with more than two orders of magnitude more parameters. The regularization
ability of the DPPN allows it to rediscover (approximate) convolutional network
architectures embedded within a fully connected architecture. Such
convolutional architectures are the current state of the art for many computer
vision applications, so it is satisfying that DPPNs are capable of discovering
this structure rather than having to build it in by design. DPPNs exhibit
better generalization when tested on the Omniglot dataset after being trained
on MNIST, than directly encoded fully connected autoencoders. DPPNs are
therefore a new framework for integrating learning and evolution.
| Chrisantha Fernando, Dylan Banarse, Malcolm Reynolds, Frederic Besse,
David Pfau, Max Jaderberg, Marc Lanctot, Daan Wierstra | null | 1606.02580 | null | null |
Fast and Extensible Online Multivariate Kernel Density Estimation | cs.LG cs.CV | We present xokde++, a state-of-the-art online kernel density estimation
approach that maintains Gaussian mixture models of input data streams. The
approach follows state-of-the-art work on online density estimation, but was
redesigned with computational efficiency, numerical robustness, and
extensibility in mind. Our approach produces comparable or better results than
the current state-of-the-art, while achieving significant computational
performance gains and improved numerical stability. The use of diagonal
covariance Gaussian kernels, which further improve performance and stability,
at a small loss of modelling quality, is also explored. Our approach is up to
40 times faster, while requiring 90\% less memory than the closest
state-of-the-art counterpart.
| Jaime Ferreira and David Martins de Matos and Ricardo Ribeiro | null | 1606.02608 | null | null |
Specific Differential Entropy Rate Estimation for Continuous-Valued Time
Series | cs.LG stat.ME | We introduce a method for quantifying the inherent unpredictability of a
continuous-valued time series via an extension of the differential Shannon
entropy rate. Our extension, the specific entropy rate, quantifies the amount
of predictive uncertainty associated with a specific state, rather than
averaged over all states. We relate the specific entropy rate to popular
`complexity' measures such as Approximate and Sample Entropies. We provide a
data-driven approach for estimating the specific entropy rate of an observed
time series. Finally, we consider three case studies of estimating specific
entropy rate from synthetic and physiological data relevant to the analysis of
heart rate variability.
| David Darmon | null | 1606.02615 | null | null |
Efficient Estimation of k for the Nearest Neighbors Class of Methods | cs.LG | The k Nearest Neighbors (kNN) method has received much attention in the past
decades, where some theoretical bounds on its performance were identified and
where practical optimizations were proposed for making it work fairly well in
high dimensional spaces and on large datasets. From countless experiments of
the past it became widely accepted that the value of k has a significant impact
on the performance of this method. However, the efficient optimization of this
parameter has not received much attention in the literature. Today, the most
common approach is to cross-validate or bootstrap this value for all values in
question. This approach forces distances to be recomputed many times, even if
efficient methods are used. Hence, estimating the optimal k can become
expensive even on modern systems. Frequently, this circumstance leads to a
sparse manual search of k. In this paper we want to point out that a systematic
and thorough estimation of the parameter k can be performed efficiently. The
discussed approach relies on large matrices, but we argue that in
practice a higher space complexity is often much less of a problem than
repetitive distance computations.
| Aleksander Lodwich, Faisal Shafait and Thomas Breuel | 10.13140/RG.2.1.5045.4649 | 1606.02617 | null | null |
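The core idea in the abstract above can be illustrated with a small sketch: compute the distance matrix and neighbor ordering once, then score every candidate k by leave-one-out accuracy. The snippet below is an illustrative toy, not the authors' implementation, and the function name estimate_best_k is hypothetical.

```python
# Illustrative sketch (not the authors' implementation): evaluate every k from a
# single distance-matrix computation using leave-one-out accuracy.
import numpy as np

def estimate_best_k(X, y, max_k=25):
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # all pairwise squared distances, once
    np.fill_diagonal(d, np.inf)                          # a point never votes for itself
    order = np.argsort(d, axis=1)                        # neighbours sorted once, reused for all k
    accs = []
    for k in range(1, max_k + 1):
        votes = y[order[:, :k]]                          # labels of the k nearest neighbours
        pred = np.array([np.bincount(v).argmax() for v in votes])
        accs.append((pred == y).mean())
    return int(np.argmax(accs)) + 1, accs

X = np.random.randn(200, 5)
y = (X[:, 0] > 0).astype(int)
best_k, _ = estimate_best_k(X, y)
print("estimated k:", best_k)
```

Note that this keeps an n-by-n matrix in memory, matching the abstract's remark that higher space complexity is traded against repeated distance computations.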
Safe and Efficient Off-Policy Reinforcement Learning | cs.LG cs.AI stat.ML | In this work, we take a fresh look at some old and new algorithms for
off-policy, return-based reinforcement learning. Expressing these in a common
form, we derive a novel algorithm, Retrace($\lambda$), with three desired
properties: (1) it has low variance; (2) it safely uses samples collected from
any behaviour policy, whatever its degree of "off-policyness"; and (3) it is
efficient as it makes the best use of samples collected from near on-policy
behaviour policies. We analyze the contractive nature of the related operator
under both off-policy policy evaluation and control settings and derive online
sample-based algorithms. We believe this is the first return-based off-policy
control algorithm converging a.s. to $Q^*$ without the GLIE assumption (Greedy
in the Limit with Infinite Exploration). As a corollary, we prove the
convergence of Watkins' Q($\lambda$), which was an open problem since 1989. We
illustrate the benefits of Retrace($\lambda$) on a standard suite of Atari 2600
games.
| R\'emi Munos, Tom Stepleton, Anna Harutyunyan, Marc G. Bellemare | null | 1606.02647 | null | null |
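As a rough sketch of the operator behind Retrace($\lambda$), the snippet below computes return-based correction targets for one off-policy trajectory using the truncated importance weights $c_s = \lambda \min(1, \pi(a_s|x_s)/\mu(a_s|x_s))$. It assumes array-valued inputs and uses a quadratic loop for clarity, so treat it as an illustration rather than the paper's algorithm.

```python
# Sketch of Retrace(lambda) targets on one off-policy trajectory (illustrative only).
import numpy as np

def retrace_targets(q_sa, v_pi_next, rewards, pi_probs, mu_probs, gamma=0.99, lam=1.0):
    """q_sa[t]      : current estimate of Q(x_t, a_t)
       v_pi_next[t] : expected Q(x_{t+1}, .) under the target policy pi
       rewards, pi_probs, mu_probs : data gathered under the behaviour policy mu"""
    T = len(rewards)
    c = lam * np.minimum(1.0, pi_probs / mu_probs)       # truncated importance weights
    targets = np.copy(q_sa)
    for s in range(T):
        trace, discount, corr = 1.0, 1.0, 0.0
        for t in range(s, T):
            if t > s:
                trace *= c[t]                            # product c_{s+1} ... c_t
            td = rewards[t] + gamma * v_pi_next[t] - q_sa[t]
            corr += discount * trace * td
            discount *= gamma
        targets[s] += corr
    return targets

T = 8
print(retrace_targets(np.zeros(T), np.zeros(T), np.ones(T),
                      np.full(T, 0.5), np.full(T, 0.25)))
```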
Learning Power Spectrum Maps from Quantized Power Measurements | cs.IT cs.LG math.FA math.IT stat.ML | Power spectral density (PSD) maps providing the distribution of RF power
across space and frequency are constructed using power measurements collected
by a network of low-cost sensors. By introducing linear compression and
quantization to a small number of bits, sensor measurements can be communicated
to the fusion center with minimal bandwidth requirements. Strengths of data-
and model-driven approaches are combined to develop estimators capable of
incorporating multiple forms of spectral and propagation prior information
while fitting the rapid variations of shadow fading across space. To this end,
novel nonparametric and semiparametric formulations are investigated. It is
shown that PSD maps can be obtained using support vector machine-type solvers.
In addition to batch approaches, an online algorithm attuned to real-time
operation is developed. Numerical tests assess the performance of the novel
algorithms.
| Daniel Romero, Seung-Jun Kim, Georgios B. Giannakis, Roberto
Lopez-Valcarce | 10.1109/TSP.2017.2666775 | 1606.02679 | null | null |
Continuously Learning Neural Dialogue Management | cs.CL cs.LG | We describe a two-step approach for dialogue management in task-oriented
spoken dialogue systems. A unified neural network framework is proposed to
enable the system to first learn by supervision from a set of dialogue data and
then continuously improve its behaviour via reinforcement learning, all using
gradient-based algorithms on one single model. The experiments demonstrate the
supervised model's effectiveness in the corpus-based evaluation, with user
simulation, and with paid human subjects. The use of reinforcement learning
further improves the model's performance in both interactive settings,
especially under higher-noise conditions.
| Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan
Ultes, David Vandyke, Tsung-Hsien Wen, Steve Young | null | 1606.02689 | null | null |
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional
Regression | stat.ML cs.LG math.OC | In high dimensional settings, sparse structures are crucial for efficiency,
both in terms of memory, computation and performance. It is customary to
consider $\ell_1$ penalty to enforce sparsity in such scenarios. Sparsity
enforcing methods, the Lasso being a canonical example, are popular candidates
to address high dimension. For efficiency, they rely on tuning a parameter
trading data fitting versus sparsity. For the Lasso theory to hold this tuning
parameter should be proportional to the noise level, yet the latter is often
unknown in practice. A possible remedy is to jointly optimize over the
regression parameter as well as over the noise level. This has been considered
under several names in the literature: Scaled-Lasso, Square-root Lasso,
Concomitant Lasso estimation for instance, and could be of interest for
confidence sets or uncertainty quantification. In this work, after illustrating
numerical difficulties for the Concomitant Lasso formulation, we
propose a modification we coined Smoothed Concomitant Lasso, aimed at
increasing numerical stability. We propose an efficient and accurate solver
leading to a computational cost no more expensive than that of the Lasso.
We leverage on standard ingredients behind the success of fast Lasso solvers: a
coordinate descent algorithm, combined with safe screening rules to achieve
speed efficiency by eliminating irrelevant features early.
| Eugene Ndiaye and Olivier Fercoq and Alexandre Gramfort and Vincent
Lecl\`ere and Joseph Salmon | 10.1088/1742-6596/904/1/012006 | 1606.02702 | null | null |
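For orientation, the joint estimation referred to above can be written, up to constants that may differ from the paper's exact parameterization, as a jointly convex problem over the regression vector and the noise level, where the lower bound $\sigma \ge \sigma_0$ is the smoothing that stabilizes the formulation:

\[
\min_{\beta \in \mathbb{R}^p,\; \sigma \ge \sigma_0} \;\; \frac{\lVert y - X\beta \rVert_2^2}{2 n \sigma} \;+\; \frac{\sigma}{2} \;+\; \lambda \lVert \beta \rVert_1 .
\]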
Learning Thermodynamics with Boltzmann Machines | cond-mat.stat-mech cond-mat.dis-nn cs.LG | A Boltzmann machine is a stochastic neural network that has been extensively
used in the layers of deep architectures for modern machine learning
applications. In this paper, we develop a Boltzmann machine that is capable of
modelling thermodynamic observables for physical systems in thermal
equilibrium. Through unsupervised learning, we train the Boltzmann machine on
data sets constructed with spin configurations importance-sampled from the
partition function of an Ising Hamiltonian at different temperatures using
Monte Carlo (MC) methods. The trained Boltzmann machine is then used to
generate spin states, for which we compare thermodynamic observables to those
computed by direct MC sampling. We demonstrate that the Boltzmann machine can
faithfully reproduce the observables of the physical system. Further, we
observe that the number of neurons required to obtain accurate results
increases as the system is brought close to criticality.
| Giacomo Torlai and Roger G. Melko | 10.1103/PhysRevB.94.165134 | 1606.02718 | null | null |
Variational Information Maximization for Feature Selection | stat.ML cs.LG | Feature selection is one of the most fundamental problems in machine
learning. An extensive body of work on information-theoretic feature selection
exists which is based on maximizing mutual information between subsets of
features and class labels. Practical methods are forced to rely on
approximations due to the difficulty of estimating mutual information. We
demonstrate that approximations made by existing methods are based on
unrealistic assumptions. We formulate a more flexible and general class of
assumptions based on variational distributions and use them to tractably
generate lower bounds for mutual information. These bounds define a novel
information-theoretic framework for feature selection, which we prove to be
optimal under tree graphical models with proper choice of variational
distributions. Our experiments demonstrate that the proposed method strongly
outperforms existing information-theoretic feature selection approaches.
| Shuyang Gao, Greg Ver Steeg, Aram Galstyan | null | 1606.02827 | null | null |
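The variational device mentioned above rests on the standard lower bound obtained by replacing the conditional $p(y \mid x)$ with a tractable distribution $q(y \mid x)$; the paper's specific (tree-structured) choices of $q$ are not reproduced here, but the generic bound reads

\[
I(X;Y) \;=\; H(Y) - H(Y \mid X) \;\ge\; H(Y) + \mathbb{E}_{p(x,y)}\big[\log q(y \mid x)\big],
\]

with equality when $q(y \mid x) = p(y \mid x)$.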
Sketching for Large-Scale Learning of Mixture Models | cs.LG stat.ML | Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over $10^8$ training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
| Nicolas Keriven (UR1, PANAMA), Anthony Bourrier (GIPSA-lab), R\'emi
Gribonval (PANAMA), Patrick P\'erez | null | 1606.02838 | null | null |
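To make the notion of a sketch concrete, the toy snippet below compresses a dataset into empirical generalized moments $\mathbb{E}[\exp(i\, w^\top x)]$ at random frequencies, computable as a running mean over a single pass or a stream. The Gaussian frequency sampling used here is a simplifying assumption, not the heuristic proposed in the paper.

```python
# Toy one-pass sketch: empirical characteristic-function samples at random frequencies.
# The Gaussian choice of frequencies below is an assumption for illustration only.
import numpy as np

def compute_sketch(X, n_freqs=500, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=scale, size=(n_freqs, X.shape[1]))   # random frequencies
    return np.exp(1j * X @ W.T).mean(axis=0), W               # streamable running mean

X = np.vstack([np.random.randn(5000, 2) + 3, np.random.randn(5000, 2) - 3])
sketch, W = compute_sketch(X)
print(sketch.shape)   # (500,) complex moments summarizing 10000 samples
```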
e-Commerce product classification: our participation at cDiscount 2015
challenge | cs.LG cs.AI | This report describes our participation in the cDiscount 2015 challenge where
the goal was to classify product items in a predefined taxonomy of products.
Our best submission yielded an accuracy score of 64.20\% in the private part of
the leaderboard and we were ranked 10th out of 175 participating teams. We
followed a text classification approach employing mainly linear models. The
final solution was a weighted voting system which combined a variety of trained
models.
| Ioannis Partalas, Georgios Balikas | null | 1606.02854 | null | null |
Sequence-to-Sequence Learning as Beam-Search Optimization | cs.CL cs.LG cs.NE stat.ML | Sequence-to-Sequence (seq2seq) modeling has rapidly become an important
general-purpose NLP tool that has proven effective for many text-generation and
sequence-labeling tasks. Seq2seq builds on deep neural language modeling and
inherits its remarkable accuracy in estimating local, next-word distributions.
In this work, we introduce a model and beam-search training scheme, based on
the work of Daume III and Marcu (2005), that extends seq2seq to learn global
sequence scores. This structured approach avoids classical biases associated
with local training and unifies the training loss with the test-time usage,
while preserving the proven model architecture of seq2seq and its efficient
training approach. We show that our system outperforms a highly-optimized
attention-based seq2seq system and other baselines on three different sequence
to sequence tasks: word ordering, parsing, and machine translation.
| Sam Wiseman and Alexander M. Rush | null | 1606.02960 | null | null |
Generative Topic Embedding: a Continuous Representation of Documents
(Extended Version with Proofs) | cs.CL cs.AI cs.IR cs.LG stat.ML | Word embedding maps words into a low-dimensional continuous embedding space
by exploiting the local word collocation patterns in a small context window. On
the other hand, topic modeling maps documents onto a low-dimensional topic
space, by utilizing the global word collocation patterns in the same document.
These two types of patterns are complementary. In this paper, we propose a
generative topic embedding model to combine the two types of patterns. In our
model, topics are represented by embedding vectors, and are shared across
documents. The probability of each word is influenced by both its local context
and its topic. A variational inference method yields the topic embeddings as
well as the topic mixing proportions for each document. Jointly they represent
the document in a low-dimensional continuous space. In two document
classification tasks, our method performs better than eight existing methods,
with fewer features. In addition, we illustrate with an example that our method
can generate coherent topics even based on only one document.
| Shaohua Li, Tat-Seng Chua, Jun Zhu, Chunyan Miao | null | 1606.02979 | null | null |
On Projected Stochastic Gradient Descent Algorithm with Weighted
Averaging for Least Squares Regression | cs.IT cs.LG math.IT | The problem of least squares regression of a $d$-dimensional unknown
parameter is considered. A stochastic gradient descent based algorithm with
weighted iterate-averaging that uses a single pass over the data is studied and
its convergence rate is analyzed. We first consider a bounded constraint set of
the unknown parameter. Under some standard regularity assumptions, we provide
an explicit $O(1/k)$ upper bound on the convergence rate, depending on the
variance (due to the additive noise in the measurements) and the size of the
constraint set. We show that the variance term dominates the error and
decreases with rate $1/k$, while the term which is related to the size of the
constraint set decreases with rate $\log k/k^2$. We then compare the asymptotic
ratio $\rho$ between the convergence rate of the proposed scheme and the
empirical risk minimizer (ERM) as the number of iterations approaches infinity.
We show that $\rho\leq 4$ under some mild conditions for all $d\geq 1$. We
further improve the upper bound by showing that $\rho\leq 4/3$ for the case of
$d=1$ and unbounded parameter set. Simulation results demonstrate strong
performance of the algorithm as compared to existing methods, and coincide with
$\rho\leq 4/3$ even for large $d$ in practice.
| Kobi Cohen and Angelia Nedic and R. Srikant | null | 1606.03000 | null | null |
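For concreteness, one common form of projected SGD with weighted iterate-averaging is shown below; whether these index-proportional weights match the paper's exact scheme is an assumption.

\[
\theta_{t+1} \;=\; \Pi_{\mathcal{C}}\big(\theta_t - \eta_t\, \hat g_t\big), \qquad
\bar{\theta}_k \;=\; \frac{2}{k(k+1)} \sum_{t=1}^{k} t\, \theta_t ,
\]

where $\Pi_{\mathcal{C}}$ denotes the projection onto the constraint set and $\hat g_t$ is a stochastic gradient.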
Efficient Robust Proper Learning of Log-concave Distributions | cs.DS cs.LG math.ST stat.TH | We study the {\em robust proper learning} of univariate log-concave
distributions (over continuous and discrete domains). Given a set of samples
drawn from an unknown target distribution, we want to compute a log-concave
hypothesis distribution that is as close as possible to the target, in total
variation distance. In this work, we give the first computationally efficient
algorithm for this learning problem. Our algorithm achieves the
information-theoretically optimal sample size (up to a constant factor), runs
in polynomial time, and is robust to model misspecification with nearly-optimal
error guarantees.
Specifically, we give an algorithm that, on input $n=O(1/\epsilon^{5/2})$ samples
from an unknown distribution $f$, runs in time $\widetilde{O}(n^{8/5})$, and
outputs a log-concave hypothesis $h$ that (with high probability) satisfies
$d_{\mathrm{TV}}(h, f) = O(\mathrm{OPT})+\epsilon$, where $\mathrm{OPT}$ is the minimum total variation
distance between $f$ and the class of log-concave distributions. Our approach
to the robust proper learning problem is quite flexible and may be applicable
to many other univariate distribution families.
| Ilias Diakonikolas and Daniel M. Kane and Alistair Stewart | null | 1606.03077 | null | null |
Mutual Exclusivity Loss for Semi-Supervised Deep Learning | cs.CV cs.LG stat.ML | In this paper we consider the problem of semi-supervised learning with deep
Convolutional Neural Networks (ConvNets). Semi-supervised learning is motivated
on the observation that unlabeled data is cheap and can be used to improve the
accuracy of classifiers. In this paper we propose an unsupervised
regularization term that explicitly forces the classifier's prediction for
multiple classes to be mutually-exclusive and effectively guides the decision
boundary to lie on the low density space between the manifolds corresponding to
different classes of data. Our proposed approach is general and can be used
with any backpropagation-based learning method. We show through different
experiments that our method can improve the object recognition performance of
ConvNets using unlabeled data.
| Mehdi Sajjadi, Mehran Javanmardi, Tolga Tasdizen | null | 1606.03141 | null | null |
Sentence Similarity Measures for Fine-Grained Estimation of Topical
Relevance in Learner Essays | cs.CL cs.LG cs.NE | We investigate the task of assessing sentence-level prompt relevance in
learner essays. Various systems using word overlap, neural embeddings and
neural compositional models are evaluated on two datasets of learner writing.
We propose a new method for sentence-level similarity calculation, which learns
to adjust the weights of pre-trained word embeddings for a specific task,
achieving substantially higher accuracy compared to other relevant baselines.
| Marek Rei and Ronan Cummins | 10.18653/v1/W16-0533 | 1606.03144 | null | null |
Unsupervised Learning of Word-Sequence Representations from Scratch via
Convolutional Tensor Decomposition | cs.CL cs.LG | Unsupervised text embeddings extraction is crucial for text understanding in
machine learning. Word2Vec and its variants have received substantial success
in mapping words with similar syntactic or semantic meaning to vectors close to
each other. However, extracting context-aware word-sequence embedding remains a
challenging task. Training over a large corpus is difficult, as labels are
hard to obtain. More importantly, it is challenging for pre-trained models to
obtain word-sequence embeddings that are universally good for all downstream
tasks or for any new datasets. We propose a two-phased ConvDic+DeconvDec
framework to solve the problem by combining a word-sequence dictionary learning
model with a word-sequence embedding decode model. We propose a convolutional
tensor decomposition mechanism to learn good word-sequence phrase dictionary in
the learning phase. It is proved to be more accurate and much more efficient
than the popular alternating minimization method. In the decode phase, we
introduce a deconvolution framework that is immune to the problem of varying
sentence lengths. The word-sequence embeddings we extracted using
ConvDic+DeconvDec are universally good for a few downstream tasks we test on.
The framework requires neither pre-training nor prior/outside information.
| Furong Huang, Animashree Anandkumar | null | 1606.03153 | null | null |
Finding Low-Rank Solutions via Non-Convex Matrix Factorization,
Efficiently and Provably | math.OC cs.DS cs.IT cs.LG cs.NA math.IT | A rank-$r$ matrix $X \in \mathbb{R}^{m \times n}$ can be written as a product
$U V^\top$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n
\times r}$. One could exploit this observation in optimization: e.g., consider
the minimization of a convex function $f(X)$ over rank-$r$ matrices, where the
set of rank-$r$ matrices is modeled via the factorization $UV^\top$. Though
such parameterization reduces the number of variables, and is more
computationally efficient (of particular interest is the case $r \ll \min\{m,
n\}$), it comes at a cost: $f(UV^\top)$ becomes a non-convex function w.r.t.
$U$ and $V$.
We study such parameterization for optimization of generic convex objectives
$f$, and focus on first-order, gradient descent algorithmic solutions. We
propose the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient
first-order method that operates on the $U, V$ factors. We show that when $f$
is (restricted) smooth, BFGD has local sublinear convergence, and linear
convergence when $f$ is both (restricted) smooth and (restricted) strongly
convex. For several key applications, we provide simple and efficient
initialization schemes that provide approximate solutions good enough for the
above convergence results to hold.
| Dohyung Park, Anastasios Kyrillidis, Constantine Caramanis, Sujay
Sanghavi | null | 1606.03168 | null | null |
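A generic instance of the factored update is sketched below for a smooth $f$; the step size, initialization, and least-squares example are illustrative choices, not the paper's BFGD step-size rule or initialization schemes.

```python
# Generic gradient descent on the factors of X = U V^T (illustrative; the paper's
# BFGD uses its own step-size rule and initialization).
import numpy as np

def factored_gd(grad_f, m, n, r, steps=1000, eta=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(m, r)) / np.sqrt(m)
    V = rng.normal(size=(n, r)) / np.sqrt(n)
    for _ in range(steps):
        G = grad_f(U @ V.T)                         # gradient of f at the rank-r iterate
        U, V = U - eta * G @ V, V - eta * G.T @ U   # simultaneous factor updates
    return U, V

# Toy example: f(X) = 0.5 * ||X - M||_F^2 for a rank-2 target M.
M = np.random.randn(30, 2) @ np.random.randn(2, 20)
U, V = factored_gd(lambda X: X - M, 30, 20, 2)
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```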
Phase Retrieval via Incremental Truncated Wirtinger Flow | cs.IT cs.LG math.IT | In the phase retrieval problem, an unknown vector is to be recovered given
quadratic measurements. This problem has received considerable attention in
recent times. In this paper, we present an algorithm to solve a nonconvex
formulation of the phase retrieval problem, that we call $\textit{Incremental
Truncated Wirtinger Flow}$. Given random Gaussian sensing vectors, we prove
that it converges linearly to the solution, with an optimal sample complexity.
We also provide stability guarantees of the algorithm under noisy measurements.
Performance and comparisons with existing algorithms are illustrated via
numerical experiments on simulated and real data, with both random and
structured sensing vectors.
| Ritesh Kolte and Ayfer \"Ozg\"ur | null | 1606.03196 | null | null |
Causal Bandits: Learning Good Interventions via Causal Inference | stat.ML cs.LG | We study the problem of using causal models to improve the rate at which good
interventions can be learned online in a stochastic environment. Our formalism
combines multi-arm bandits and causal inference to model a novel type of bandit
feedback that is not exploited by existing approaches. We propose a new
algorithm that exploits the causal feedback and prove a bound on its simple
regret that is strictly better (in all quantities) than algorithms that do not
use the additional causal information.
| Finnian Lattimore and Tor Lattimore and Mark D. Reid | null | 1606.03203 | null | null |
Deep CNNs along the Time Axis with Intermap Pooling for Robustness to
Spectral Variations | cs.CL cs.LG cs.NE | Convolutional neural networks (CNNs) with convolutional and pooling
operations along the frequency axis have been proposed to attain invariance to
frequency shifts of features. However, this is inappropriate with regard to the
fact that acoustic features vary in frequency. In this paper, we contend that
convolution along the time axis is more effective. We also propose the addition
of an intermap pooling (IMP) layer to deep CNNs. In this layer, filters in each
group extract common but spectrally variant features, then the layer pools the
feature maps of each group. As a result, the proposed IMP CNN can achieve
insensitivity to spectral variations characteristic of different speakers and
utterances. The effectiveness of the IMP CNN architecture is demonstrated on
several LVCSR tasks. Even without speaker adaptation techniques, the
architecture achieved a WER of 12.7% on the SWB part of the Hub5'2000
evaluation test set, which is competitive with other state-of-the-art methods.
| Hwaran Lee, Geonmin Kim, Ho-Gyeong Kim, Sang-Hoon Oh, and Soo-Young
Lee | 10.1109/LSP.2016.2589962 | 1606.03207 | null | null |
Discovery of Latent Factors in High-dimensional Data Using Tensor
Methods | cs.LG | Unsupervised learning aims at the discovery of hidden structure that drives
the observations in the real world. It is essential for success in modern
machine learning. Latent variable models are versatile in unsupervised learning
and have applications in almost every domain. Training latent variable models
is challenging due to the non-convexity of the likelihood objective. An
alternative method is based on the spectral decomposition of low order moment
tensors. This versatile framework is guaranteed to estimate the correct model
consistently. My thesis spans both theoretical analysis of tensor decomposition
framework and practical implementation of various applications. This thesis
presents theoretical results on convergence to the globally optimal solution of
tensor decomposition using stochastic gradient descent, despite the
non-convexity of the objective. This is the first work that gives global
convergence guarantees for stochastic gradient descent on non-convex
functions with exponentially many local minima and saddle points. This thesis
also presents large-scale deployment of spectral methods carried out on various
platforms. Dimensionality reduction techniques such as random projection are
incorporated for a highly parallel and scalable tensor decomposition algorithm.
We obtain gains of several orders of magnitude in both accuracy and running
time compared to the state-of-the-art variational methods. To solve real-world
problems, more advanced models and learning algorithms are proposed. This
thesis discusses generalization of LDA model to mixed membership stochastic
block model for learning user communities in social network, convolutional
dictionary model for learning word-sequence embeddings, hierarchical tensor
decomposition and latent tree structure model for learning disease hierarchy,
and spatial point process mixture model for detecting cell types in
neuroscience.
| Furong Huang | null | 1606.03212 | null | null |
IDNet: Smartphone-based Gait Recognition with Convolutional Neural
Networks | cs.CV cs.LG | Here, we present IDNet, a user authentication framework from
smartphone-acquired motion signals. Its goal is to recognize a target user from
their way of walking, using the accelerometer and gyroscope (inertial) signals
provided by a commercial smartphone worn in the front pocket of the user's
trousers. IDNet features several innovations including: i) a robust and
smartphone-orientation-independent walking cycle extraction block, ii) a novel
feature extractor based on convolutional neural networks, iii) a one-class
support vector machine to classify walking cycles, and the coherent integration
of these into iv) a multi-stage authentication technique. IDNet is the first
system that exploits a deep learning approach as a universal feature extractor
for gait recognition, and that combines classification results from subsequent
walking cycles into a multi-stage decision making framework. Experimental
results show the superiority of our approach against state-of-the-art
techniques, leading to misclassification rates (either false negatives or
positives) smaller than 0.15% with fewer than five walking cycles. Design
choices are discussed and motivated throughout, assessing their impact on the
user authentication performance.
| Matteo Gadaleta and Michele Rossi | 10.1016/j.patcog.2017.09.005 | 1606.03238 | null | null |
An Application of Network Lasso Optimization For Ride Sharing Prediction | cs.CY cs.LG stat.ML | Ride sharing has important implications in terms of environmental, social and
individual goals by reducing carbon footprints, fostering social interactions
and economizing commuter costs. The ride sharing systems that are commonly
available lack adaptive and scalable techniques that can simultaneously learn
from the large scale data and predict in real-time dynamic fashion. In this
paper, we study such a problem towards a smart city initiative, where a generic
ride sharing system is conceived capable of making predictions about ride share
opportunities based on the historically recorded data while satisfying
real-time ride requests. Underpinning the system is an application of a
powerful machine learning convex optimization framework called Network Lasso
that uses the Alternate Direction Method of Multipliers (ADMM) optimization for
learning and dynamic prediction. We propose an application of a robust and
scalable unified optimization framework within the ride sharing case-study. The
application of Network Lasso framework is capable of jointly optimizing and
clustering different rides based on their spatial and model similarity. The
prediction from the framework clusters new ride requests, making accurate price
prediction based on the clusters, detecting hidden correlations in the data and
allowing fast convergence due to the network topology. We provide an empirical
evaluation of the application of ADMM Network Lasso on real trip records and
simulated data, demonstrating its effectiveness: the mean squared error of the
algorithm's prediction is minimized on the test rides.
| Shaona Ghosh, Kevin Page and David De Roure | null | 1606.03276 | null | null |
Memory-Efficient Backpropagation Through Time | cs.NE cs.LG | We propose a novel approach to reduce memory consumption of the
backpropagation through time (BPTT) algorithm when training recurrent neural
networks (RNNs). Our approach uses dynamic programming to balance a trade-off
between caching of intermediate results and recomputation. The algorithm is
capable of tightly fitting within almost any user-set memory budget while
finding an optimal execution policy minimizing the computational cost.
Computational devices have limited memory capacity, and maximizing
computational performance within a fixed memory budget is a practical use case.
We provide asymptotic computational upper bounds for various regimes. The
algorithm is particularly effective for long sequences. For sequences of length
1000, our algorithm saves 95\% of memory usage while using only one third more
time per iteration than the standard BPTT.
| Audr\=unas Gruslys, Remi Munos, Ivo Danihelka, Marc Lanctot, Alex
Graves | null | 1606.03401 | null | null |
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on
How Much | cs.LG cs.AI stat.ML | Gibbs sampling is a Markov Chain Monte Carlo sampling technique that
iteratively samples variables from their conditional distributions. There are
two common scan orders for the variables: random scan and systematic scan. Due
to the benefits of locality in hardware, systematic scan is commonly used, even
though most statistical guarantees are only for random scan. While it has been
conjectured that the mixing times of random scan and systematic scan do not
differ by more than a logarithmic factor, we show by counterexample that this
is not the case, and we prove that the mixing times do not differ by more
than a polynomial factor under mild conditions. To prove these relative bounds,
we introduce a method of augmenting the state space to study systematic scan
using conductance.
| Bryan He, Christopher De Sa, Ioannis Mitliagkas, Christopher R\'e | null | 1606.03432 | null | null |
Deep Directed Generative Models with Energy-Based Probability Estimation | cs.LG stat.ML | Training energy-based probabilistic models is confronted with apparently
intractable sums, whose Monte Carlo estimation requires sampling from the
estimated probability distribution in the inner loop of training. This can be
approximately achieved by Markov chain Monte Carlo methods, but may still face
a formidable obstacle that is the difficulty of mixing between modes with sharp
concentrations of probability. Whereas an MCMC process is usually derived from
a given energy function based on mathematical considerations and requires an
arbitrarily long time to obtain good and varied samples, we propose to train a
deep directed generative model (not a Markov chain) so that its sampling
distribution approximately matches the energy function that is being trained.
Inspired by generative adversarial networks, the proposed framework involves
training of two models that represent dual views of the estimated probability
distribution: the energy function (mapping an input configuration to a scalar
energy value) and the generator (mapping a noise vector to a generated
configuration), both represented by deep neural networks.
| Taesup Kim, Yoshua Bengio | null | 1606.03439 | null | null |
Learning overcomplete, low coherence dictionaries with linear inference | cs.LG | Finding overcomplete latent representations of data has applications in data
analysis, signal processing, machine learning, theoretical neuroscience and
many other fields. In an overcomplete representation, the number of latent
features exceeds the data dimensionality, which is useful when the data is
undersampled by the measurements (compressed sensing, information bottlenecks
in neural systems) or composed from multiple complete sets of linear features,
each spanning the data space. Independent Components Analysis (ICA) is a linear
technique for learning sparse latent representations, which typically has a
lower computational cost than sparse coding, its nonlinear, recurrent
counterpart. While well suited for finding complete representations, we show
that overcompleteness poses a challenge to existing ICA algorithms.
Specifically, the coherence control in existing ICA algorithms, necessary to
prevent the formation of duplicate dictionary features, is ill-suited in the
overcomplete case. We show that in this case several existing ICA algorithms
have undesirable global minima that maximize coherence. Further, by comparing
ICA algorithms on synthetic data and natural images to the computationally more
expensive sparse coding solution, we show that the coherence control biases the
exploration of the data manifold, sometimes yielding suboptimal solutions. We
provide a theoretical explanation of these failures and, based on the theory,
propose improved overcomplete ICA algorithms. All told, this study contributes
new insights into and methods for coherence control for linear ICA, some of
which are applicable to many other, potentially nonlinear, unsupervised
learning methods.
| Jesse A. Livezey and Alejandro F. Bujan and Friedrich T. Sommer | null | 1606.03474 | null | null |
Generative Adversarial Imitation Learning | cs.LG cs.AI | Consider learning a policy from example expert behavior, without interaction
with the expert or access to reinforcement signal. One approach is to recover
the expert's cost function with inverse reinforcement learning, then extract a
policy from that cost function with reinforcement learning. This approach is
indirect and can be slow. We propose a new general framework for directly
extracting a policy from data, as if it were obtained by reinforcement learning
following inverse reinforcement learning. We show that a certain instantiation
of our framework draws an analogy between imitation learning and generative
adversarial networks, from which we derive a model-free imitation learning
algorithm that obtains significant performance gains over existing model-free
methods in imitating complex behaviors in large, high-dimensional environments.
| Jonathan Ho, Stefano Ermon | null | 1606.03476 | null | null |
The Mythos of Model Interpretability | cs.LG cs.AI cs.CV cs.NE stat.ML | Supervised machine learning models boast remarkable predictive capabilities.
But can you trust your model? Will it work in deployment? What else can it tell
you about the world? We want models to be not only good, but interpretable. And
yet the task of interpretation appears underspecified. Papers provide diverse
and sometimes non-overlapping motivations for interpretability, and offer
myriad notions of what attributes render models interpretable. Despite this
ambiguity, many papers proclaim interpretability axiomatically, absent further
explanation. In this paper, we seek to refine the discourse on
interpretability. First, we examine the motivations underlying interest in
interpretability, finding them to be diverse and occasionally discordant. Then,
we address model properties and techniques thought to confer interpretability,
identifying transparency to humans and post-hoc explanations as competing
notions. Throughout, we discuss the feasibility and desirability of different
notions, and question the oft-made assertions that linear models are
interpretable and that deep neural networks are not.
| Zachary C. Lipton | null | 1606.03490 | null | null |
Improved Techniques for Training GANs | cs.LG cs.CV cs.NE | We present a variety of new architectural features and training procedures
that we apply to the generative adversarial networks (GANs) framework. We focus
on two applications of GANs: semi-supervised learning, and the generation of
images that humans find visually realistic. Unlike most work on generative
models, our primary goal is not to train a model that assigns high likelihood
to test data, nor do we require the model to be able to learn well without
using any labels. Using our new techniques, we achieve state-of-the-art results
in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated
images are of high quality as confirmed by a visual Turing test: our model
generates MNIST samples that humans cannot distinguish from real data, and
CIFAR-10 samples that yield a human error rate of 21.3%. We also present
ImageNet samples with unprecedented resolution and show that our methods enable
the model to learn recognizable features of ImageNet classes.
| Tim Salimans and Ian Goodfellow and Wojciech Zaremba and Vicki Cheung
and Alec Radford and Xi Chen | null | 1606.03498 | null | null |
Distributed Machine Learning in Materials that Couple Sensing,
Actuation, Computation and Communication | cs.LG cs.RO | This paper reviews machine learning applications and approaches to detection,
classification and control of intelligent materials and structures with
embedded distributed computation elements. The purpose of this survey is to
identify desired tasks to be performed in each type of material or structure
(e.g., damage detection in composites), identify and compare common approaches
to learning such tasks, and investigate models and training paradigms used.
Machine learning approaches and common temporal features used in the domains of
structural health monitoring, morphable aircraft, wearable computing and
robotic skins are explored. As the ultimate goal of this research is to
incorporate the approaches described in this survey into a robotic material
paradigm, the potential for adapting the computational models used in these
applications, and corresponding training algorithms, to an amorphous network of
computing nodes is considered. Distributed versions of support vector machines,
graphical models and mixture models developed in the field of wireless sensor
networks are reviewed. Potential areas of investigation, including possible
architectures for incorporating machine learning into robotic nodes, training
approaches, and the possibility of using deep learning approaches for automatic
feature extraction, are discussed.
| Dana Hughes and Nikolaus Correll | null | 1606.03508 | null | null |
TRex: A Tomography Reconstruction Proximal Framework for Robust Sparse
View X-Ray Applications | math.OC cs.CV cs.LG stat.ML | We present TRex, a flexible and robust Tomographic Reconstruction framework
using proximal algorithms. We provide an overview and perform an experimental
comparison between the famous iterative reconstruction methods in terms of
reconstruction quality in sparse view situations. We then derive the proximal
operators for the four best methods. We show the flexibility of our framework
by deriving solvers for two noise models: Gaussian and Poisson; and by plugging
in three powerful regularizers. We compare our framework to state of the art
methods, and show superior quality on both synthetic and real datasets.
| Mohamed Aly, Guangming Zang, Wolfgang Heidrich, Peter Wonka | null | 1606.03601 | null | null |
Drug response prediction by inferring pathway-response associations with
Kernelized Bayesian Matrix Factorization | stat.ML cs.LG q-bio.QM | A key goal of computational personalized medicine is to systematically
utilize genomic and other molecular features of samples to predict drug
responses for a previously unseen sample. Such predictions are valuable for
developing hypotheses for selecting therapies tailored for individual patients.
This is especially valuable in oncology, where molecular and genetic
heterogeneity of the cells has a major impact on the response. However, the
prediction task is extremely challenging, raising the need for methods that can
effectively model and predict drug responses. In this study, we propose a novel
formulation of multi-task matrix factorization that allows selective data
integration for predicting drug responses. To solve the modeling task, we
extend the state-of-the-art kernelized Bayesian matrix factorization (KBMF)
method with component-wise multiple kernel learning. In addition, our approach
exploits the known pathway information in a novel and biologically meaningful
fashion to learn the drug response associations. Our method quantitatively
outperforms the state of the art on predicting drug responses in two publicly
available cancer data sets as well as on a synthetic data set. In addition, we
validated our model predictions with lab experiments using an in-house cancer
cell line panel. We finally show the practical applicability of the proposed
method by utilizing prior knowledge to infer pathway-drug response
associations, opening up the opportunity for elucidating drug action
mechanisms. We demonstrate that pathway-response associations can be learned by
the proposed model for the well known EGFR and MEK inhibitors.
| Muhammad Ammad-ud-din, Suleiman A.Khan, Disha Malani, Astrid
Murum\"agi, Olli Kallioniemi, Tero Aittokallio and Samuel Kaski | 10.1093/bioinformatics/btw433. | 1606.03623 | null | null |
metricDTW: local distance metric learning in Dynamic Time Warping | cs.LG | We propose to learn multiple local Mahalanobis distance metrics to perform
k-nearest neighbor (kNN) classification of temporal sequences. Temporal
sequences are first aligned by dynamic time warping (DTW); given the alignment
path, similarity between two sequences is measured by the DTW distance, which
is computed as the accumulated distance between matched temporal point pairs
along the alignment path. Traditionally, Euclidean metric is used for distance
computation between matched pairs, which ignores the data regularities and
might not be optimal for applications at hand. Here we propose to learn
multiple Mahalanobis metrics, such that DTW distance becomes the sum of
Mahalanobis distances. We adapt the large margin nearest neighbor (LMNN)
framework to our case, and formulate multiple metric learning as a linear
programming problem. Extensive sequence classification results show that our
proposed multiple metrics learning approach is effective, insensitive to the
preceding alignment qualities, and reaches the state-of-the-art performances on
UCR time series datasets.
| Jiaping Zhao, Zerong Xi and Laurent Itti | null | 1606.03628 | null | null |
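The role of the local metric in the DTW distance can be seen in the sketch below, where the per-frame cost is a Mahalanobis form on the difference of matched frames. A single global matrix M is used here for simplicity, whereas the paper learns multiple local metrics within an LMNN-style linear program.

```python
# DTW with a Mahalanobis local cost (single global metric M; the paper learns several).
import numpy as np

def dtw_distance(a, b, M):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diff = a[i - 1] - b[j - 1]
            d = diff @ M @ diff                    # Mahalanobis distance between matched frames
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

a, b = np.random.randn(40, 3), np.random.randn(55, 3)
M = np.eye(3)   # Euclidean special case; a learned positive semi-definite matrix would go here
print(dtw_distance(a, b, M))
```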
InfoGAN: Interpretable Representation Learning by Information Maximizing
Generative Adversarial Nets | cs.LG stat.ML | This paper describes InfoGAN, an information-theoretic extension to the
Generative Adversarial Network that is able to learn disentangled
representations in a completely unsupervised manner. InfoGAN is a generative
adversarial network that also maximizes the mutual information between a small
subset of the latent variables and the observation. We derive a lower bound to
the mutual information objective that can be optimized efficiently, and show
that our training procedure can be interpreted as a variation of the Wake-Sleep
algorithm. Specifically, InfoGAN successfully disentangles writing styles from
digit shapes on the MNIST dataset, pose from lighting of 3D rendered images,
and background digits from the central digit on the SVHN dataset. It also
discovers visual concepts that include hair styles, presence/absence of
eyeglasses, and emotions on the CelebA face dataset. Experiments show that
InfoGAN learns interpretable representations that are competitive with
representations learned by existing fully supervised methods.
| Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever,
Pieter Abbeel | null | 1606.03657 | null | null |
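Schematically, the objective described above augments the usual minimax value $V(D,G)$ with a variational lower bound on the mutual information between the latent code $c$ and the generated sample, estimated by an auxiliary network $Q$:

\[
\min_{G, Q}\; \max_{D} \;\; V(D, G) \;-\; \lambda\, L_I(G, Q), \qquad
L_I(G, Q) \;=\; \mathbb{E}_{c \sim p(c),\, x \sim G(z, c)}\big[\log Q(c \mid x)\big] + H(c) \;\le\; I\big(c;\, G(z, c)\big).
\]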
Weakly Supervised Scalable Audio Content Analysis | cs.SD cs.LG cs.MM | Audio Event Detection is an important task for content analysis of multimedia
data. Most of the current works on detection of audio events is driven through
supervised learning approaches. We propose a weakly supervised learning
framework which can make use of the tremendous amount of web multimedia data
with significantly reduced annotation effort and expense. Specifically, we use
several multiple instance learning algorithms to show that audio event
detection through weak labels is feasible. We also propose a novel scalable
multiple instance learning algorithm and show that its competitive with other
multiple instance learning algorithms for audio event detection tasks.
| Anurag Kumar, Bhiksha Raj | null | 1606.03664 | null | null |
Deep Reinforcement Learning with a Combinatorial Action Space for
Predicting Popular Reddit Threads | cs.CL cs.AI cs.LG | We introduce an online popularity prediction and tracking task as a benchmark
task for reinforcement learning with a combinatorial, natural language action
space. A specified number of discussion threads predicted to be popular are
recommended, chosen from a fixed window of recent comments to track. Novel deep
reinforcement learning architectures are studied for effective modeling of the
value function associated with actions comprised of interdependent sub-actions.
The proposed model, which represents dependence between sub-actions through a
bi-directional LSTM, gives the best performance across different experimental
configurations and domains, and it also generalizes well with varying numbers
of recommendation requests.
| Ji He, Mari Ostendorf, Xiaodong He, Jianshu Chen, Jianfeng Gao, Lihong
Li, Li Deng | null | 1606.03667 | null | null |
Comparison of Several Sparse Recovery Methods for Low Rank Matrices with
Random Samples | cs.LG stat.ML | In this paper, we will investigate the efficacy of IMAT (Iterative Method of
Adaptive Thresholding) in recovering the sparse signal (parameters) for linear
models with missing data. Sparse recovery rises in compressed sensing and
machine learning problems and has various applications necessitating viable
reconstruction methods specifically when we work with big data. This paper will
focus on comparing the power of IMAT in reconstruction of the desired sparse
signal with LASSO. Additionally, we will assume the model has random missing
information. Missing data has been recently of interest in big data and machine
learning problems since they appear in many cases including but not limited to
medical imaging datasets, hospital datasets, and massive MIMO. The dominance of
IMAT over the well-known LASSO will be taken into account in different
scenarios. Simulations and numerical results are also provided to verify the
arguments.
| Ashkan Esmaeili and Farokh Marvasti | null | 1606.03672 | null | null |
Efficient KLMS and KRLS Algorithms: A Random Fourier Feature Perspective | cs.LG stat.ML | We present a new framework for online Least Squares algorithms for nonlinear
modeling in RKH spaces (RKHS). Instead of implicitly mapping the data to a RKHS
(e.g., kernel trick), we map the data to a finite dimensional Euclidean space,
using random features of the kernel's Fourier transform. The advantage is that
the inner product of the mapped data approximates the kernel function. The
resulting "linear" algorithm does not require any form of sparsification,
since, in contrast to all existing algorithms, the solution's size remains
fixed and does not increase with the iteration steps. As a result, the obtained
algorithms are computationally significantly more efficient compared to
previously derived variants, while, at the same time, they converge at similar
speeds and to similar error floors.
| Pantelis Bouboulis and Spyridon Pougkakiotis, Sergios Theodoridis | null | 1606.03685 | null | null |
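The explicit finite-dimensional map alluded to above is, for a Gaussian kernel, the classical random Fourier feature construction of Rahimi and Recht; the snippet below is an illustrative instance (kernel choice and dimensions are assumptions), with the online least-squares filter then run directly on z(x) so that the solution size stays fixed.

```python
# Random Fourier features whose inner products approximate a Gaussian kernel
# k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).  Illustrative parameters.
import numpy as np

def rff_map(X, D=2000, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(D, X.shape[1]))   # frequencies ~ kernel spectrum
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)                 # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = np.random.randn(5, 3)
Z = rff_map(X)
approx = Z @ Z.T
exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
print("max kernel approximation error:", np.abs(approx - exact).max())
```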
Neural Belief Tracker: Data-Driven Dialogue State Tracking | cs.CL cs.AI cs.LG | One of the core components of modern spoken dialogue systems is the belief
tracker, which estimates the user's goal at every step of the dialogue.
However, most current approaches have difficulty scaling to larger, more
complex dialogue domains. This is due to their dependency on either: a) Spoken
Language Understanding models that require large amounts of annotated training
data; or b) hand-crafted lexicons for capturing some of the linguistic
variation in users' language. We propose a novel Neural Belief Tracking (NBT)
framework which overcomes these problems by building on recent advances in
representation learning. NBT models reason over pre-trained word vectors,
learning to compose them into distributed representations of user utterances
and dialogue context. Our evaluation on two datasets shows that this approach
surpasses past limitations, matching the performance of state-of-the-art models
which rely on hand-crafted semantic lexicons and outperforming them when such
lexicons are not provided.
| Nikola Mrk\v{s}i\'c and Diarmuid \'O S\'eaghdha and Tsung-Hsien Wen
and Blaise Thomson and Steve Young | null | 1606.03777 | null | null |
Retrieving and Ranking Similar Questions from Question-Answer Archives
Using Topic Modelling and Topic Distribution Regression | cs.IR cs.CL cs.LG | Presented herein is a novel model for similar question ranking within
collaborative question answer platforms. The presented approach integrates a
regression stage to relate topics derived from questions to those derived from
question-answer pairs. This helps to avoid problems caused by the differences
in vocabulary used within questions and answers, and the tendency for questions
to be shorter than answers. The performance of the model is shown to outperform
translation methods and topic modelling (without regression) on several
real-world datasets.
| Pedro Chahuara, Thomas Lampert, Pierre Gancarski | 10.1007/978-3-319-43997-6_4 | 1606.03783 | null | null |
Open-Set Support Vector Machines | cs.LG stat.ML | Often, when dealing with real-world recognition problems, we do not need, and
often cannot have, knowledge of the entire set of possible classes that might
appear during operational testing. In such cases, we need to think of robust
classification methods able to deal with the "unknown" and properly reject
samples belonging to classes never seen during training. Notwithstanding,
existing classifiers to date were mostly developed for the closed-set scenario,
i.e., the classification setup in which it is assumed that all test samples
belong to one of the classes with which the classifier was trained. In the
open-set scenario, however, a test sample can belong to none of the known
classes and the classifier must properly reject it by classifying it as
unknown. In this work, we extend upon the well-known Support Vector Machines
(SVM) classifier and introduce the Open-Set Support Vector Machines (OSSVM),
which is suitable for recognition in open-set setups. OSSVM balances the
empirical risk and the risk of the unknown and ensures that the region of the
feature space in which a test sample would be classified as known (one of the
known classes) is always bounded, ensuring a finite risk of the unknown. In
this work, we also highlight the properties of the SVM classifier related to
the open-set scenario, and provide necessary and sufficient conditions for an
RBF SVM to have bounded open-space risk.
| Pedro Ribeiro Mendes J\'unior, Terrance E. Boult, Jacques Wainer, and
Anderson Rocha | 10.1109/TSMC.2021.3074496 | 1606.03802 | null | null |
Efficient Learning with a Family of Nonconvex Regularizers by
Redistributing Nonconvexity | math.OC cs.LG stat.ML | The use of convex regularizers allows for easy optimization, though they
often produce biased estimation and inferior prediction performance. Recently,
nonconvex regularizers have attracted a lot of attention and outperformed
convex ones. However, the resultant optimization problem is much harder. In
this paper, for a large class of nonconvex regularizers, we propose to move the
nonconvexity from the regularizer to the loss. The nonconvex regularizer is
then transformed to a familiar convex regularizer, while the resultant loss
function can still be guaranteed to be smooth. Learning with the convexified
regularizer can be performed by existing efficient algorithms originally
designed for convex regularizers (such as the proximal algorithm, Frank-Wolfe
algorithm, alternating direction method of multipliers and stochastic gradient
descent). Extensions are made when the convexified regularizer does not have
closed-form proximal step, and when the loss function is nonconvex, nonsmooth.
Extensive experiments on a variety of machine learning application scenarios
show that optimizing the transformed problem is much faster than running the
state-of-the-art on the original problem.
| Quanming Yao and James T. Kwok | null | 1606.03841 | null | null |
Sorting out typicality with the inverse moment matrix SOS polynomial | cs.LG | We study a surprising phenomenon related to the representation of a cloud of
data points using polynomials. We start with the previously unnoticed empirical
observation that, given a collection (a cloud) of data points, the sublevel
sets of a certain distinguished polynomial capture the shape of the cloud very
accurately. This distinguished polynomial is a sum-of-squares (SOS) derived in
a simple manner from the inverse of the empirical moment matrix. In fact, this
SOS polynomial is directly related to orthogonal polynomials and the
Christoffel function. This allows us to generalize and interpret extremality
properties of orthogonal polynomials and to provide a mathematical rationale
for the observed phenomenon. Among diverse potential applications, we
illustrate the relevance of our results on a network intrusion detection task
for which we obtain performance similar to that of existing dedicated methods reported
in the literature.
| Jean-Bernard Lasserre, Edouard Pauwels | null | 1606.03858 | null | null |
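A minimal numpy sketch of the distinguished polynomial for a 2-D point cloud, assuming a chosen monomial degree and a nonsingular empirical moment matrix: q(x) = v_d(x)^T M^{-1} v_d(x), where v_d stacks all monomials of degree at most d.

```python
# Inverse-moment-matrix SOS polynomial for 2-D data:
# q(x) = v_d(x)^T M^{-1} v_d(x), with M the empirical moment matrix.
import numpy as np

def monomials(points, d):
    # All monomials x1^a * x2^b with a + b <= d, evaluated at each point.
    x1, x2 = points[:, 0], points[:, 1]
    cols = [x1 ** a * x2 ** b
            for a in range(d + 1) for b in range(d + 1 - a)]
    return np.stack(cols, axis=1)             # shape (n_points, n_monomials)

def sos_polynomial(cloud, d=4):
    V = monomials(cloud, d)
    M = V.T @ V / len(cloud)                  # empirical moment matrix
    M_inv = np.linalg.inv(M)                  # assumes M is nonsingular
    return lambda pts: np.einsum("ij,jk,ik->i",
                                 monomials(pts, d), M_inv, monomials(pts, d))

# Sublevel sets of q (at a threshold of the order of the number of
# monomials) tend to hug the shape of the cloud.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 2))
q = sos_polynomial(cloud, d=4)
print(q(cloud[:5]))
```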
Robust Probabilistic Modeling with Bayesian Data Reweighting | stat.ML cs.AI cs.LG | Probabilistic models analyze data by relying on a set of assumptions. Data
that exhibit deviations from these assumptions can undermine inference and
prediction quality. Robust models offer protection against mismatch between a
model's assumptions and reality. We propose a way to systematically detect and
mitigate mismatch of a large class of probabilistic models. The idea is to
raise the likelihood of each observation to a weight and then to infer both the
latent variables and the weights from data. Inferring the weights allows a
model to identify observations that match its assumptions and down-weight
others. This enables robust inference and improves predictive accuracy. We
study four different forms of mismatch with reality, ranging from missing
latent groups to structure misspecification. A Poisson factorization analysis
of the Movielens 1M dataset shows the benefits of this approach in a practical
scenario.
| Yixin Wang, Alp Kucukelbir, David M. Blei | null | 1606.03860 | null | null |
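A minimal sketch of the reweighting idea for a Gaussian location model: each observation's log-likelihood is multiplied by a weight, and the weights are inferred together with the parameter. The Gamma prior on the weights and the MAP optimization below are illustrative assumptions, not the paper's exact model or inference scheme.

```python
# Likelihood reweighting sketch: maximize a joint log density in which each
# observation's log-likelihood is scaled by an inferred weight w_i.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, gamma

np.random.seed(0)
y = np.concatenate([np.random.normal(0.0, 1.0, 95),
                    np.random.normal(8.0, 1.0, 5)])   # 5 corrupted points

def neg_log_joint(params):
    mu, log_w = params[0], params[1:]
    w = np.exp(log_w)                                  # keep weights positive
    loglik = np.sum(w * norm.logpdf(y, loc=mu, scale=1.0))
    logprior = (np.sum(gamma.logpdf(w, a=2.0, scale=0.5))   # illustrative prior
                + norm.logpdf(mu, 0.0, 10.0))
    return -(loglik + logprior)

x0 = np.concatenate([[np.median(y)], np.zeros(len(y))])
res = minimize(neg_log_joint, x0, method="L-BFGS-B")
mu_hat, w_hat = res.x[0], np.exp(res.x[1:])
# Observations that mismatch the model end up with small inferred weights.
```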
Neural Associative Memory for Dual-Sequence Modeling | cs.NE cs.AI cs.CL cs.LG | Many important NLP problems can be posed as dual-sequence or
sequence-to-sequence modeling tasks. Recent advances in building end-to-end
neural architectures have been highly successful in solving such tasks. In this
work we propose a new architecture for dual-sequence modeling that is based on
associative memory. We derive AM-RNNs, a recurrent associative memory (AM)
which augments generic recurrent neural networks (RNN). This architecture is
extended to the Dual AM-RNN which operates on two AMs at once. Our models
achieve very competitive results on textual entailment. A qualitative analysis
demonstrates that long range dependencies between source and target-sequence
can be bridged effectively using Dual AM-RNNs. However, an initial experiment
on auto-encoding reveals that these benefits are not exploited by the system
when learning to solve sequence-to-sequence tasks, which indicates that
additional supervision or regularization is needed.
| Dirk Weissenborn | null | 1606.03864 | null | null |
Inferring Sparsity: Compressed Sensing using Generalized Restricted
Boltzmann Machines | cs.IT cond-mat.dis-nn cs.LG math.IT stat.ML | In this work, we consider compressed sensing reconstruction from $M$
measurements of $K$-sparse structured signals which do not possess a writable
correlation model. Assuming that a generative statistical model, such as a
Boltzmann machine, can be trained in an unsupervised manner on example signals,
we demonstrate how this signal model can be used within a Bayesian framework of
signal reconstruction. By deriving a message-passing inference for general
distribution restricted Boltzmann machines, we are able to integrate these
inferred signal models into approximate message passing for compressed sensing
reconstruction. Finally, we show for the MNIST dataset that this approach can
be very effective, even for $M < K$.
| Eric W. Tramel and Andre Manoel and Francesco Caltagirone and Marylou
Gabri\'e and Florent Krzakala | 10.1109/ITW.2016.7606837 | 1606.03956 | null | null |
Making Contextual Decisions with Low Technical Debt | cs.LG cs.DC | Applications and systems are constantly faced with decisions that require
picking from a set of actions based on contextual information.
Reinforcement-based learning algorithms such as contextual bandits can be very
effective in these settings, but applying them in practice is fraught with
technical debt, and no general system exists that supports them completely. We
address this and create the first general system for contextual learning,
called the Decision Service.
Existing systems often suffer from technical debt that arises from issues
like incorrect data collection and weak debuggability, issues we systematically
address through our ML methodology and system abstractions. The Decision
Service enables all aspects of contextual bandit learning using four system
abstractions which connect together in a loop: explore (the decision space),
log, learn, and deploy. Notably, our new explore and log abstractions ensure
the system produces correct, unbiased data, which our learner uses for online
learning and to enable real-time safeguards, all in a fully reproducible
manner.
The Decision Service has a simple user interface and works with a variety of
applications: we present two live production deployments for content
recommendation that achieved click-through improvements of 25-30%, another with
an 18% revenue lift on the landing page, and ongoing applications in tech support
and machine failure handling. The service makes real-time decisions and learns
continuously and scalably, while significantly lowering technical debt.
| Alekh Agarwal, Sarah Bird, Markus Cozowicz, Luong Hoang, John
Langford, Stephen Lee, Jiaji Li, Dan Melamed, Gal Oshri, Oswaldo Ribas,
Siddhartha Sen, Alex Slivkins | null | 1606.03966 | null | null |
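A toy version of the explore/log/learn loop described above: choose actions epsilon-greedily, log the propensity of each chosen action, and use inverse propensity scoring to get an unbiased off-policy value estimate from the log. All names and constants are illustrative; this is not the Decision Service API.

```python
# Explore/log/learn sketch: epsilon-greedy exploration, propensity logging,
# and inverse-propensity-scored (IPS) off-policy evaluation.
import numpy as np

rng = np.random.default_rng(0)
n_actions, epsilon = 3, 0.1

def choose(scores):
    # epsilon-greedy over per-action scores; return action and its propensity
    greedy = int(np.argmax(scores))
    probs = np.full(n_actions, epsilon / n_actions)
    probs[greedy] += 1.0 - epsilon
    action = rng.choice(n_actions, p=probs)
    return action, probs[action]

log = []
for _ in range(10000):
    context = rng.normal(size=4)
    scores = rng.normal(size=n_actions)                 # stand-in for a model
    action, propensity = choose(scores)
    reward = float(rng.random() < 0.2 + 0.1 * action)   # synthetic environment
    log.append((context, action, reward, propensity))

def ips_value(policy, log):
    # Unbiased estimate of a deterministic policy's value from logged data.
    return np.mean([r / p if policy(c) == a else 0.0
                    for c, a, r, p in log])

print(ips_value(lambda c: 2, log))   # should be close to 0.4
```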
Estimating individual treatment effect: generalization bounds and
algorithms | stat.ML cs.AI cs.LG | There is intense interest in applying machine learning to problems of causal
inference in fields such as healthcare, economics and education. In particular,
individual-level causal inference has important applications such as precision
medicine. We give a new theoretical analysis and family of algorithms for
predicting individual treatment effect (ITE) from observational data, under the
assumption known as strong ignorability. The algorithms learn a "balanced"
representation such that the induced treated and control distributions look
similar. We give a novel, simple and intuitive generalization-error bound
showing that the expected ITE estimation error of a representation is bounded
by a sum of the standard generalization-error of that representation and the
distance between the treated and control distributions induced by the
representation. We use Integral Probability Metrics to measure distances
between distributions, deriving explicit bounds for the Wasserstein and Maximum
Mean Discrepancy (MMD) distances. Experiments on real and simulated data show
the new algorithms match or outperform the state-of-the-art.
| Uri Shalit, Fredrik D. Johansson, David Sontag | null | 1606.03976 | null | null |
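A minimal numpy sketch of the balance term the bound suggests adding to the factual loss: the squared MMD, with an RBF kernel, between treated and control units in representation space. The bandwidth and the way the penalty is combined with the loss are illustrative assumptions.

```python
# Squared MMD (RBF kernel) between treated and control representations,
# used as a balance penalty alongside the factual prediction loss.
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(phi_treated, phi_control, bandwidth=1.0):
    Ktt = rbf_kernel(phi_treated, phi_treated, bandwidth)
    Kcc = rbf_kernel(phi_control, phi_control, bandwidth)
    Ktc = rbf_kernel(phi_treated, phi_control, bandwidth)
    return Ktt.mean() + Kcc.mean() - 2.0 * Ktc.mean()

# Training objective, schematically:
#   factual_loss(representation, outcomes) + alpha * mmd2(phi_t, phi_c)
rng = np.random.default_rng(0)
phi_t = rng.normal(loc=0.5, size=(40, 8))   # representations of treated units
phi_c = rng.normal(loc=0.0, size=(60, 8))   # representations of control units
print(mmd2(phi_t, phi_c))
```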
Trace Norm Regularised Deep Multi-Task Learning | cs.LG | We propose a framework for training multiple neural networks simultaneously.
The parameters from all models are regularised by the tensor trace norm, so
that each neural network is encouraged to reuse others' parameters if possible
-- this is the main motivation behind multi-task learning. In contrast to many
deep multi-task learning models, we do not predefine a parameter sharing
strategy by specifying which layers have tied parameters. Instead, our
framework considers sharing for all shareable layers, and the sharing strategy
is learned in a data-driven way.
| Yongxin Yang, Timothy M. Hospedales | null | 1606.04038 | null | null |
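A simplified sketch of the idea: stack per-task weight vectors into one matrix and penalise its nuclear (trace) norm, so the tasks are pushed toward a shared low-rank structure. The paper applies a tensor trace norm across all shareable layers; this matrix version in PyTorch is only an illustration, and the per-task loss is a stand-in.

```python
# Trace (nuclear) norm regularization over stacked per-task weights.
import torch

n_tasks, d = 5, 32
W = torch.nn.Parameter(torch.randn(n_tasks, d))   # one weight row per task
opt = torch.optim.Adam([W], lr=1e-2)
lam = 0.01

def task_losses(W):
    # stand-in for the sum of per-task supervised losses
    return (W ** 2).sum()

for _ in range(100):
    opt.zero_grad()
    trace_norm = torch.linalg.svdvals(W).sum()    # sum of singular values
    loss = task_losses(W) + lam * trace_norm
    loss.backward()
    opt.step()
```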