title (string, 5-246 chars) | categories (string, 5-94 chars, nullable) | abstract (string, 54-5.03k chars) | authors (string, 0-6.72k chars) | doi (string, 12-54 chars, nullable) | id (string, 6-10 chars, nullable) | year (float64, ~2.02k, nullable) | venue (string, 13 classes) |
---|---|---|---|---|---|---|---|
DeepCare: A Deep Dynamic Memory Model for Predictive Medicine | stat.ML cs.LG | Personalized predictive medicine necessitates the modeling of patient illness
and care processes, which inherently have long-term temporal dependencies.
Healthcare observations, recorded in electronic medical records, are episodic
and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural
network that reads medical records, stores previous illness history, infers
current illness states and predicts future medical outcomes. At the data level,
DeepCare represents care episodes as vectors in space and models patient health
state trajectories through explicit memory of historical records. Built on Long
Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle
irregularly timed events by moderating the forgetting and consolidation of memory
cells. DeepCare also incorporates medical interventions that change the course
of illness and shape future medical risk. Moving up to the health state level,
historical and present health states are then aggregated through multiscale
temporal pooling, before passing through a neural network that estimates future
outcomes. We demonstrate the efficacy of DeepCare for disease progression
modeling, intervention recommendation, and future risk prediction. On two
important cohorts with heavy social and economic burden -- diabetes and mental
health -- the results show improved modeling and risk prediction accuracy.
| Trang Pham, Truyen Tran, Dinh Phung and Svetha Venkatesh | null | 1602.00357 | null | null |
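The time-parameterized forgetting that the DeepCare abstract describes can be sketched as an LSTM step whose forget gate is scaled down as the gap between care episodes grows. This is a minimal, hedged illustration: the decay function, parameter names, and shapes below are assumptions made for exposition, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_decayed_lstm_step(x, h_prev, c_prev, dt, params, decay_rate=0.1):
    """One LSTM step whose forget gate is attenuated by the elapsed time dt
    (e.g. days) since the previous care episode, so older memories decay more.
    The 1/log(e + decay_rate*dt) form is an illustrative choice only."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z + bf) / np.log(np.e + decay_rate * dt)  # time-moderated forgetting
    i = sigmoid(Wi @ z + bi)
    o = sigmoid(Wo @ z + bo)
    c = f * c_prev + i * np.tanh(Wc @ z + bc)
    h = o * np.tanh(c)
    return h, c

# Example with hidden size 4 and 3 input features (random parameters).
h, c, x = np.zeros(4), np.zeros(4), np.random.randn(3)
params = [np.random.randn(4, 7) for _ in range(4)] + [np.zeros(4) for _ in range(4)]
h, c = time_decayed_lstm_step(x, h, c, dt=30.0, params=params)
```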
Visualizing Large-scale and High-dimensional Data | cs.LG cs.HC | We study the problem of visualizing large-scale and high-dimensional data in
a low-dimensional (typically 2D or 3D) space. Much success has been reported
recently by techniques that first compute a similarity structure of the data
points and then project them into a low-dimensional space with the structure
preserved. These two steps suffer from considerable computational costs,
preventing the state-of-the-art methods such as the t-SNE from scaling to
large-scale and high-dimensional data (e.g., millions of data points and
hundreds of dimensions). We propose the LargeVis, a technique that first
constructs an accurately approximated K-nearest neighbor graph from the data
and then lays out the graph in the low-dimensional space. Compared to t-SNE,
LargeVis significantly reduces the computational cost of the graph construction
step and employs a principled probabilistic model for the visualization step,
the objective of which can be effectively optimized through asynchronous
stochastic gradient descent with a linear time complexity. The whole procedure
thus easily scales to millions of high-dimensional data points. Experimental
results on real-world data sets demonstrate that the LargeVis outperforms the
state-of-the-art methods in both efficiency and effectiveness. The
hyper-parameters of LargeVis are also much more stable over different data
sets.
| Jian Tang, Jingzhou Liu, Ming Zhang and Qiaozhu Mei | 10.1145/2872427.2883041 | 1602.00370 | null | null |
ConfidentCare: A Clinical Decision Support System for Personalized
Breast Cancer Screening | cs.LG | Breast cancer screening policies attempt to achieve timely diagnosis by the
regular screening of apparently healthy women. Various clinical decisions are
needed to manage the screening process; those include: selecting the screening
tests for a woman to take, interpreting the test outcomes, and deciding whether
or not a woman should be referred to a diagnostic test. Such decisions are
currently guided by clinical practice guidelines (CPGs), which represent a
one-size-fits-all approach designed to work well on average for a
population, without guaranteeing that it will work well uniformly over that
population. Since the risks and benefits of screening are functions of each
patient's features, personalized screening policies that are tailored to the
features of individuals are needed in order to ensure that the right tests are
recommended to the right woman. In order to address this issue, we present
ConfidentCare: a computer-aided clinical decision support system that learns a
personalized screening policy from the electronic health record (EHR) data.
ConfidentCare operates by recognizing clusters of similar patients, and
learning the best screening policy to adopt for each cluster. A cluster of
patients is a set of patients with similar features (e.g. age, breast density,
family history, etc.), and the screening policy is a set of guidelines on what
actions to recommend for a woman given her features and screening test scores.
The ConfidentCare algorithm ensures that the policy adopted for every cluster of
patients satisfies a predefined accuracy requirement with a high level of
confidence. We show that our algorithm outperforms the current CPGs in terms of
cost-efficiency and false positive rates.
| Ahmed M. Alaa, Kyeong H. Moon, William Hsu, and Mihaela van der Schaar | null | 1602.00374 | null | null |
An Iterative Deep Learning Framework for Unsupervised Discovery of
Speech Features and Linguistic Units with Applications on Spoken Term
Detection | cs.CL cs.LG | In this work we aim to discover high quality speech features and linguistic
units directly from unlabeled speech data in a zero resource scenario. The
results are evaluated using the metrics and corpora proposed in the Zero
Resource Speech Challenge organized at Interspeech 2015. A Multi-layered
Acoustic Tokenizer (MAT) was proposed for automatic discovery of multiple sets
of acoustic tokens from the given corpus. Each acoustic token set is specified
by a set of hyperparameters that describe the model configuration. These sets
of acoustic tokens carry different characteristics of the given corpus and the
language behind it, and thus can be mutually reinforcing. The multiple sets of token
labels are then used as the targets of a Multi-target Deep Neural Network
(MDNN) trained on low-level acoustic features. Bottleneck features extracted
from the MDNN are then used as the feedback input to the MAT and the MDNN
itself in the next iteration. We call this iterative deep learning framework
the Multi-layered Acoustic Tokenizing Deep Neural Network (MAT-DNN), which
generates both high quality speech features for the Track 1 of the Challenge
and acoustic tokens for the Track 2 of the Challenge. In addition, we performed
extra experiments on the same corpora on the application of query-by-example
spoken term detection. The experimental results showed the iterative deep
learning framework of MAT-DNN improved the detection performance due to better
underlying speech features and acoustic tokens.
| Cheng-Tao Chung, Cheng-Yu Tsai, Hsiang-Hung Lu, Chia-Hsiang Liu,
Hung-yi Lee and Lin-shan Lee | null | 1602.00426 | null | null |
Real Time Video Quality Representation Classification of Encrypted HTTP
Adaptive Video Streaming - the Case of Safari | cs.MM cs.CR cs.LG cs.NI | The increasing popularity of HTTP adaptive video streaming services has
dramatically increased bandwidth requirements on operator networks, which
attempt to shape their traffic through Deep Packet Inspection (DPI). However,
Google and certain content providers have started to encrypt their video
services. As a result, operators often encounter difficulties in shaping their
encrypted video traffic via DPI. This highlights the need for new traffic
classification methods for encrypted HTTP adaptive video streaming to enable
smart traffic shaping. These new methods will have to effectively estimate the
quality representation layer and playout buffer. We present a new method and
show for the first time that video quality representation classification for
(YouTube) encrypted HTTP adaptive streaming is possible. We analyze the
performance of this classification method with Safari over HTTPS. Based on a
large number of offline and online traffic classification experiments, we
demonstrate that it can independently classify, in real time, every video
segment into one of the quality representation layers with 97.18% average
accuracy.
| Ran Dubin, Amit Dvir, Ofir Pele, Ofer Hadar, Itay Richman and Ofir
Trabelsi | null | 1602.00489 | null | null |
I Know What You Saw Last Minute - Encrypted HTTP Adaptive Video
Streaming Title Classification | cs.MM cs.LG cs.NI | Desktops and laptops can be maliciously exploited to violate privacy. There
are two main types of attack scenarios: active and passive. In this paper, we
consider the passive scenario where the adversary does not interact actively
with the device, but he is able to eavesdrop on the network traffic of the
device from the network side. Most of the Internet traffic is encrypted and
thus passive attacks are challenging. Previous research has shown that
information can be extracted from encrypted multimedia streams. This includes
video title classification of non-HTTP adaptive streams (non-HAS). This paper
presents an algorithm for encrypted HTTP adaptive video streaming title
classification. We show that an external attacker can identify the video title
from video HTTP adaptive streams (HAS) sites such as YouTube. To the best of
our knowledge, this is the first work that shows this. We provide a large data
set of 10000 YouTube video streams of 100 popular video titles (each title
downloaded 100 times) as examples for this task. The dataset was collected
under real-world network conditions. We present several machine learning
algorithms for the task and run a thorough set of experiments, which show that our
classification accuracy is more than 95%. We also show that our algorithms are
able to classify video titles that are not in the training set as unknown and
some of the algorithms are also able to eliminate false prediction of video
titles and instead report unknown. Finally, we evaluate our algorithms
robustness to delays and packet losses at test time and show that a solution
that uses SVM is the most robust against these changes given enough training
data. We provide the dataset and the crawler for future research.
| Ran Dubin, Amit Dvir, Ofir Pele, Ofer Hadar | 10.1109/TIFS.2017.2730819 | 1602.00490 | null | null |
Graph-based Predictable Feature Analysis | cs.LG | We propose graph-based predictable feature analysis (GPFA), a new method for
unsupervised learning of predictable features from high-dimensional time
series, where high predictability is understood very generically as low
variance in the distribution of the next data point given the previous ones. We
show how this measure of predictability can be understood in terms of graph
embedding as well as how it relates to the information-theoretic measure of
predictive information in special cases. We confirm the effectiveness of GPFA
on different datasets, comparing it to three existing algorithms with similar
objectives---namely slow feature analysis, forecastable component analysis, and
predictable feature analysis---to which GPFA shows very competitive results.
| Bj\"orn Weghenkel and Asja Fischer and Laurenz Wiskott | 10.1007/s10994-017-5632-x | 1602.00554 | null | null |
Multi-object Classification via Crowdsourcing with a Reject Option | cs.LG | Consider designing an effective crowdsourcing system for an $M$-ary
classification task. Crowd workers complete simple binary microtasks whose
results are aggregated to give the final result. We consider the novel scenario
where workers have a reject option so they may skip microtasks when they are
unable or choose not to respond. For example, in mismatched speech
transcription, workers who do not know the language may not be able to respond
to microtasks focused on phonological dimensions outside their categorical
perception. We present an aggregation approach using a weighted majority voting
rule, where each worker's response is assigned an optimized weight to maximize
the crowd's classification performance. We evaluate system performance in both
exact and asymptotic forms. Further, we consider the setting where there may be
a set of greedy workers that complete microtasks even when they are unable to
perform them reliably. We consider an oblivious and an expurgation strategy to
deal with greedy workers, developing an algorithm to adaptively switch between
the two based on the estimated fraction of greedy workers in the anonymous
crowd. Simulation results show improved performance compared with conventional
majority voting.
| Qunwei Li, Aditya Vempaty, Lav R. Varshney, and Pramod K. Varshney | 10.1109/TSP.2016.2630038 | 1602.00575 | null | null |
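A minimal sketch of the weighted majority voting rule with a reject (skip) option described in the abstract above. The log-odds weighting of each worker's estimated reliability is a common choice used here for illustration, and is an assumption rather than the optimized weights the paper derives.

```python
import numpy as np

def weighted_majority_vote(responses, reliabilities):
    """Aggregate one binary microtask from several crowd workers.

    responses: array with entries 1, 0, or -1 (worker used the reject option).
    reliabilities: per-worker probability of answering correctly.
    Log-odds weights are an illustrative choice, not the paper's optimized weights."""
    weights = np.log(reliabilities / (1.0 - reliabilities))
    score = 0.0
    for r, w in zip(responses, weights):
        if r >= 0:                    # skipped microtasks contribute nothing
            score += w if r == 1 else -w
    return int(score > 0)

# Three workers; the second skips, and the reliable first worker dominates.
print(weighted_majority_vote(np.array([1, -1, 0]), np.array([0.9, 0.6, 0.55])))
```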
Learning Data Triage: Linear Decoding Works for Compressive MRI | cs.IT cs.LG math.IT stat.ML | The standard approach to compressive sampling considers recovering an unknown
deterministic signal with certain known structure, and designing the
sub-sampling pattern and recovery algorithm based on the known structure. This
approach requires looking for a good representation that reveals the signal
structure, and solving a non-smooth convex minimization problem (e.g., basis
pursuit). In this paper, another approach is considered: We learn a good
sub-sampling pattern based on available training signals, without knowing the
signal structure in advance, and reconstruct an accordingly sub-sampled signal
by computationally much cheaper linear reconstruction. We provide a theoretical
guarantee on the recovery error, and show via experiments on real-world MRI
data the effectiveness of the proposed compressive MRI scheme.
| Yen-Huan Li and Volkan Cevher | null | 1602.00734 | null | null |
Deep Tracking: Seeing Beyond Seeing Using Recurrent Neural Networks | cs.LG cs.AI cs.CV cs.NE cs.RO | This paper presents to the best of our knowledge the first end-to-end object
tracking approach which directly maps from raw sensor input to object tracks in
sensor space without requiring any feature engineering or system identification
in the form of plant or sensor models. Specifically, our system accepts a
stream of raw sensor data at one end and, in real-time, produces an estimate of
the entire environment state at the output including even occluded objects. We
achieve this by framing the problem as a deep learning task and exploit
sequence models in the form of recurrent neural networks to learn a mapping
from sensor measurements to object tracks. In particular, we propose a learning
method based on a form of input dropout which allows learning in an
unsupervised manner, only based on raw, occluded sensor data without access to
ground-truth annotations. We demonstrate our approach using a synthetic dataset
designed to mimic the task of tracking objects in 2D laser data -- as commonly
encountered in robotics applications -- and show that it learns to track many
dynamic objects despite occlusions and the presence of sensor noise.
| Peter Ondruska and Ingmar Posner | null | 1602.00991 | null | null |
On Deep Multi-View Representation Learning: Objectives and Optimization | cs.LG | We consider learning representations (features) in the setting in which we
have access to multiple unlabeled views of the data for learning while only one
view is available for downstream tasks. Previous work on this problem has
proposed several techniques based on deep neural networks, typically involving
either autoencoder-like networks with a reconstruction objective or paired
feedforward networks with a batch-style correlation-based objective. We analyze
several techniques based on prior work, as well as new variants, and compare
them empirically on image, speech, and text tasks. We find an advantage for
correlation-based representation learning, while the best results on most tasks
are obtained with our new variant, deep canonically correlated autoencoders
(DCCAE). We also explore a stochastic optimization procedure for minibatch
correlation-based objectives and discuss the time/performance trade-offs for
kernel-based and neural network-based implementations.
| Weiran Wang, Raman Arora, Karen Livescu, Jeff Bilmes | null | 1602.01024 | null | null |
Improved Achievability and Converse Bounds for Erd\H{o}s-R\'enyi Graph
Matching | cs.IT cs.LG math.IT | We consider the problem of perfectly recovering the vertex correspondence
between two correlated Erd\H{o}s-R\'enyi (ER) graphs. For a pair of correlated
graphs on the same vertex set, the correspondence between the vertices can be
obscured by randomly permuting the vertex labels of one of the graphs. In some
cases, the structural information in the graphs allow this correspondence to be
recovered. We investigate the information-theoretic threshold for exact
recovery, i.e. the conditions under which the entire vertex correspondence can
be correctly recovered given unbounded computational resources.
Pedarsani and Grossglauser provided an achievability result of this type.
Their result establishes the scaling dependence of the threshold on the number
of vertices. We improve on their achievability bound. We also provide a
converse bound, establishing conditions under which exact recovery is
impossible. Together, these establish the scaling dependence of the threshold
on the level of correlation between the two graphs. The converse and
achievability bounds differ by a factor of two for sparse, significantly
correlated graphs.
| Daniel Cullina, Negar Kiyavash | null | 1602.01042 | null | null |
Better safe than sorry: Risky function exploitation through safe
optimization | stat.AP cs.LG stat.ML | Exploration-exploitation of functions, that is learning and optimizing a
mapping between inputs and expected outputs, is ubiquitous in many real-world
situations. These situations sometimes require us to avoid certain outcomes at
all cost, for example because they are poisonous, harmful, or otherwise
dangerous. We test participants' behavior in scenarios in which they have to
find the optimum of a function while at the same time avoid outputs below a
certain threshold. In two experiments, we find that Safe-Optimization, a
Gaussian Process-based exploration-exploitation algorithm, describes
participants' behavior well and that participants seem to care first about whether
a point is safe and then try to pick the optimal point from all such safe
points. This means that their trade-off between exploration and exploitation
can be seen as an intelligent, approximate, and homeostasis-driven strategy.
| Eric Schulz, Quentin J. M. Huys, Dominik R. Bach, Maarten
Speekenbrink, Andreas Krause | null | 1602.01052 | null | null |
Minimum Regret Search for Single- and Multi-Task Optimization | stat.ML cs.IT cs.LG cs.RO math.IT | We propose minimum regret search (MRS), a novel acquisition function for
Bayesian optimization. MRS bears similarities with information-theoretic
approaches such as entropy search (ES). However, while ES aims in each query at
maximizing the information gain with respect to the global maximum, MRS aims at
minimizing the expected simple regret of its ultimate recommendation for the
optimum. While empirically ES and MRS perform similarly in most cases, MRS
produces fewer outliers with high simple regret than ES. We provide empirical
results both for a synthetic single-task optimization problem as well as for a
simulated multi-task robotic control problem.
| Jan Hendrik Metzen | null | 1602.01064 | null | null |
Interactive algorithms: from pool to stream | stat.ML cs.LG math.ST stat.TH | We consider interactive algorithms in the pool-based setting, and in the
stream-based setting. Interactive algorithms observe suggested elements
(representing actions or queries), and interactively select some of them and
receive responses. Pool-based algorithms can select elements in any order,
while stream-based algorithms observe elements in sequence, and can only select
elements immediately after observing them. We assume that the suggested
elements are generated independently from some source distribution, and ask
what is the stream size required for emulating a pool algorithm with a given
pool size. We provide algorithms and matching lower bounds for general pool
algorithms, and for utility-based pool algorithms. We further show that a
maximal gap between the two settings exists also in the special case of active
learning for binary classification.
| Sivan Sabato and Tom Hess | null | 1602.01132 | null | null |
Single-Solution Hypervolume Maximization and its use for Improving
Generalization of Neural Networks | cs.LG cs.NE stat.ML | This paper introduces the hypervolume maximization with a single solution as
an alternative to the mean loss minimization. The relationship between the two
problems is proved through bounds on the cost function when an optimal solution
to one of the problems is evaluated on the other, with a hyperparameter to
control the similarity between the two problems. This same hyperparameter
allows higher weight to be placed on samples with higher loss when computing
the hypervolume's gradient, whose normalized version can range from the mean
loss to the max loss. An experiment on MNIST with a neural network is used to
validate the theory developed, showing that the hypervolume maximization can
behave similarly to the mean loss minimization and can also provide better
performance, resulting in a 20% reduction of the classification error on the
test set.
| Conrado S. Miranda and Fernando J. Von Zuben | null | 1602.01164 | null | null |
Learning Discriminative Features via Label Consistent Neural Network | cs.CV cs.LG cs.MM cs.NE stat.ML | Deep Convolutional Neural Networks (CNNs) enforce supervised information only
at the output layer, and hidden layers are trained by backpropagating the
prediction error from the output layer without explicit supervision. We propose
a supervised feature learning approach, Label Consistent Neural Network, which
enforces direct supervision in late hidden layers. We associate each neuron in
a hidden layer with a particular class label and encourage it to be activated
for input signals from the same class. More specifically, we introduce a label
consistency regularization called "discriminative representation error" loss
for late hidden layers and combine it with classification error loss to build
our overall objective function. This label consistency constraint alleviates
the common problem of vanishing gradients and leads to faster convergence; it
also makes the features derived from late hidden layers discriminative enough
for classification even using a simple $k$-NN classifier, since input signals
from the same class will have very similar representations. Experimental
results demonstrate that our approach achieves state-of-the-art performances on
several public benchmarks for action and object category recognition.
| Zhuolin Jiang, Yaming Wang, Larry Davis, Walt Andrews, Viktor Rozgic | null | 1602.01168 | null | null |
k-variates++: more pluses in the k-means++ | cs.LG | k-means++ seeding has become a de facto standard for hard clustering
algorithms. In this paper, our first contribution is a two-way generalisation
of this seeding, k-variates++, that includes the sampling of general densities
rather than just a discrete set of Dirac densities anchored at the point
locations, and a generalisation of the well known Arthur-Vassilvitskii (AV)
approximation guarantee, in the form of a bias+variance approximation bound of
the global optimum. This approximation exhibits a reduced dependency on the
"noise" component with respect to the optimal potential --- actually
approaching the statistical lower bound. We show that k-variates++ reduces to
efficient (biased seeding) clustering algorithms tailored to specific
frameworks; these include distributed, streaming and on-line clustering, with
direct approximation results for these algorithms. Finally, we present a novel
application of k-variates++ to differential privacy. For either the specific
frameworks considered here, or for the differential privacy setting, there is
little to no prior results on the direct application of k-means++ and its
approximation bounds --- state of the art contenders appear to be significantly
more complex and / or display less favorable (approximation) properties. We
stress that our algorithms can still be run in cases where there is \textit{no}
closed form solution for the population minimizer. We demonstrate the
applicability of our analysis via experimental evaluation on several domains
and settings, displaying competitive performances vs state of the art.
| Richard Nock, Rapha\"el Canyasse, Roksana Boreli and Frank Nielsen | null | 1602.01198 | null | null |
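For reference, a minimal sketch of the standard k-means++ seeding that k-variates++ generalizes: each new center is sampled with probability proportional to its squared distance to the nearest center already chosen. This shows the baseline only, not the paper's generalization to non-Dirac densities or its bias+variance guarantee.

```python
import numpy as np

def kmeans_pp_seeding(X, k, rng=None):
    """Standard k-means++ seeding (Arthur & Vassilvitskii): pick the first
    center uniformly, then sample each subsequent center with probability
    proportional to its squared distance to the nearest chosen center."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(n, p=probs)])
    return np.stack(centers)

# Usage: seed 3 clusters on toy 2-D data drawn around three means.
X = np.vstack([np.random.randn(50, 2) + m for m in ([0, 0], [5, 5], [0, 5])])
print(kmeans_pp_seeding(X, 3))
```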
Biclustering Readings and Manuscripts via Non-negative Matrix
Factorization, with Application to the Text of Jude | cs.LG | The text-critical practice of grouping witnesses into families or texttypes
often faces two obstacles: Contamination in the manuscript tradition, and
co-dependence in identifying characteristic readings and manuscripts. We
introduce non-negative matrix factorization (NMF) as a simple, unsupervised,
and efficient way to cluster large numbers of manuscripts and readings
simultaneously while summarizing contamination using an easy-to-interpret
mixture model. We apply this method to an extensive collation of the New
Testament epistle of Jude and show that the resulting clusters correspond to
human-identified textual families from existing research.
| Joey McCollum and Stephen Brown | null | 1602.01323 | null | null |
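A minimal sketch of how NMF can bicluster a readings-by-manuscripts matrix of the kind described above. The toy binary matrix and the choice of scikit-learn's NMF are illustrative assumptions, not the authors' actual collation or pipeline.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy binary matrix: rows = variant readings, columns = manuscripts,
# entry 1 if the manuscript attests the reading (hypothetical data).
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # readings x clusters: which cluster each reading loads on
H = model.components_        # clusters x manuscripts: mixture weights per manuscript

print("reading clusters:", W.argmax(axis=1))
print("manuscript mixture weights:\n", H.round(2))
```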
A Kronecker-factored approximate Fisher matrix for convolution layers | stat.ML cs.LG | Second-order optimization methods such as natural gradient descent have the
potential to speed up training of neural networks by correcting for the
curvature of the loss function. Unfortunately, the exact natural gradient is
impractical to compute for large models, and most approximations either require
an expensive iterative procedure or make crude approximations to the curvature.
We present Kronecker Factors for Convolution (KFC), a tractable approximation
to the Fisher matrix for convolutional networks based on a structured
probabilistic model for the distribution over backpropagated derivatives.
Similarly to the recently proposed Kronecker-Factored Approximate Curvature
(K-FAC), each block of the approximate Fisher matrix decomposes as the
Kronecker product of small matrices, allowing for efficient inversion. KFC
captures important curvature information while still yielding comparably
efficient updates to stochastic gradient descent (SGD). We show that the
updates are invariant to commonly used reparameterizations, such as centering
of the activations. In our experiments, approximate natural gradient descent
with KFC was able to train convolutional networks several times faster than
carefully tuned SGD. Furthermore, it was able to train the networks in 10-20
times fewer iterations than SGD, suggesting its potential applicability in a
distributed setting.
| Roger Grosse and James Martens | null | 1602.01407 | null | null |
An ensemble diversity approach to supervised binary hashing | cs.LG cs.CV math.OC stat.ML | Binary hashing is a well-known approach for fast approximate nearest-neighbor
search in information retrieval. Much work has focused on affinity-based
objective functions involving the hash functions or binary codes. These
objective functions encode neighborhood information between data points and are
often inspired by manifold learning algorithms. They ensure that the hash
functions differ from each other through constraints or penalty terms that
encourage codes to be orthogonal or dissimilar across bits, but this couples
the binary variables and complicates the already difficult optimization. We
propose a much simpler approach: we train each hash function (or bit)
independently from each other, but introduce diversity among them using
techniques from classifier ensembles. Surprisingly, we find that not only is
this faster and trivially parallelizable, but it also improves over the more
complex, coupled objective function, and achieves state-of-the-art precision
and recall in experiments with image retrieval.
| Miguel \'A. Carreira-Perpi\~n\'an and Ramin Raziperchikolaei | null | 1602.01557 | null | null |
Long-term Planning by Short-term Prediction | cs.LG | We consider planning problems, which often arise in autonomous driving
applications, in which an agent should decide on immediate actions so as to
optimize a long term objective. For example, when a car tries to merge in a
roundabout it should decide on an immediate acceleration/braking command, while
the long term effect of the command is the success/failure of the merge. Such
problems are characterized by continuous state and action spaces, and by
interaction with multiple agents, whose behavior can be adversarial. We argue
that dual versions of the MDP framework (that depend on the value function and
the $Q$ function) are problematic for autonomous driving applications due to
the non-Markovian nature of the natural state space representation, and due to the
continuous state and action spaces. We propose to tackle the planning task by
decomposing the problem into two phases: First, we apply supervised learning
for predicting the near future based on the present. We require that the
predictor will be differentiable with respect to the representation of the
present. Second, we model a full trajectory of the agent using a recurrent
neural network, where unexplained factors are modeled as (additive) input
nodes. This allows us to solve the long-term planning problem using supervised
learning techniques and direct optimization over the recurrent neural network.
Our approach enables us to learn robust policies by incorporating adversarial
elements to the environment.
| Shai Shalev-Shwartz and Nir Ben-Zrihem and Aviad Cohen and Amnon
Shashua | null | 1602.01580 | null | null |
SDCA without Duality, Regularization, and Individual Convexity | cs.LG | Stochastic Dual Coordinate Ascent is a popular method for solving regularized
loss minimization for the case of convex losses. We describe variants of SDCA
that do not require explicit regularization and do not rely on duality. We
prove linear convergence rates even if individual loss functions are
non-convex, as long as the expected loss is strongly convex.
| Shai Shalev-Shwartz | null | 1602.01582 | null | null |
Minimizing the Maximal Loss: How and Why? | cs.LG | A commonly used learning rule is to approximately minimize the \emph{average}
loss over the training set. Other learning algorithms, such as AdaBoost and
hard-SVM, aim at minimizing the \emph{maximal} loss over the training set. The
average loss is more popular, particularly in deep learning, due to three main
reasons. First, it can be conveniently minimized using online algorithms, that
process a few examples at each iteration. Second, it is often argued that it
makes little sense to minimize the loss on the training set too much, as it will not
be reflected in the generalization loss. Last, the maximal loss is not robust
to outliers. In this paper we describe and analyze an algorithm that can
convert any online algorithm to a minimizer of the maximal loss. We prove that
in some situations better accuracy on the training set is crucial to obtain
good performance on unseen examples. Last, we propose robust versions of the
approach that can handle outliers.
| Shai Shalev-Shwartz and Yonatan Wexler | null | 1602.01690 | null | null |
The Great Time Series Classification Bake Off: An Experimental
Evaluation of Recently Proposed Algorithms. Extended Version | cs.LG | In the last five years there have been a large number of new time series
classification algorithms proposed in the literature. These algorithms have
been evaluated on subsets of the 47 data sets in the University of California,
Riverside time series classification archive. The archive has recently been
expanded to 85 data sets, over half of which have been donated by researchers
at the University of East Anglia. Aspects of previous evaluations have made
comparisons between algorithms difficult. For example, several different
programming languages have been used, experiments involved a single train/test
split and some used normalised data whilst others did not. The relaunch of the
archive provides a timely opportunity to thoroughly evaluate algorithms on a
larger number of datasets. We have implemented 18 recently proposed algorithms
in a common Java framework and compared them against two standard benchmark
classifiers (and each other) by performing 100 resampling experiments on each
of the 85 datasets. We use these results to test several hypotheses relating to
whether the algorithms are significantly more accurate than the benchmarks and
each other. Our results indicate that only 9 of these algorithms are
significantly more accurate than both benchmarks and that one classifier, the
Collective of Transformation Ensembles, is significantly more accurate than all
of the others. All of our experiments and results are reproducible: we release
all of our code, results and experimental details and we hope these experiments
form the basis for more rigorous testing of new algorithms in the future.
| Anthony Bagnall, Aaron Bostrom, James Large and Jason Lines | null | 1602.01711 | null | null |
Asynchronous Methods for Deep Reinforcement Learning | cs.LG | We propose a conceptually simple and lightweight framework for deep
reinforcement learning that uses asynchronous gradient descent for optimization
of deep neural network controllers. We present asynchronous variants of four
standard reinforcement learning algorithms and show that parallel
actor-learners have a stabilizing effect on training allowing all four methods
to successfully train neural network controllers. The best performing method,
an asynchronous variant of actor-critic, surpasses the current state-of-the-art
on the Atari domain while training for half the time on a single multi-core CPU
instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds
on a wide variety of continuous motor control problems as well as on a new task
of navigating random 3D mazes using a visual input.
| Volodymyr Mnih, Adri\`a Puigdom\`enech Badia, Mehdi Mirza, Alex
Graves, Timothy P. Lillicrap, Tim Harley, David Silver, Koray Kavukcuoglu | null | 1602.01783 | null | null |
Random Feature Maps via a Layered Random Projection (LaRP) Framework for
Object Classification | cs.CV cs.LG stat.ML | The approximation of nonlinear kernels via linear feature maps has recently
gained interest due to its applications in reducing the training and testing
time of kernel-based learning algorithms. Current random projection methods
avoid the curse of dimensionality by embedding the nonlinear feature space into
a low dimensional Euclidean space to create nonlinear kernels. We introduce a
Layered Random Projection (LaRP) framework, where we model the linear kernels
and nonlinearity separately for increased training efficiency. The proposed
LaRP framework was assessed using the MNIST hand-written digits database and
the COIL-100 object database, and showed notable improvement in object
classification performance relative to other state-of-the-art random projection
methods.
| A. G. Chung, M. J. Shafiee, and A. Wong | null | 1602.01818 | null | null |
Generate Image Descriptions based on Deep RNN and Memory Cells for
Images Features | cs.CV cs.CL cs.LG | Generating natural language descriptions for images is a challenging task.
The traditional way is to use the convolutional neural network (CNN) to extract
image features, followed by recurrent neural network (RNN) to generate
sentences. In this paper, we present a new model that added memory cells to
gate the feeding of image features to the deep neural network. The intuition is
enabling our model to memorize how much information from images should be fed
at each stage of the RNN. Experiments on Flickr8K and Flickr30K datasets showed
that our model outperforms other state-of-the-art models with higher BLEU
scores.
| Shijian Tang, Song Han | null | 1602.01895 | null | null |
Fast Multiplier Methods to Optimize Non-exhaustive, Overlapping
Clustering | cs.LG | Clustering is one of the most fundamental and important tasks in data mining.
Traditional clustering algorithms, such as K-means, assign every data point to
exactly one cluster. However, in real-world datasets, the clusters may overlap
with each other. Furthermore, often, there are outliers that should not belong
to any cluster. We recently proposed the NEO-K-Means (Non-Exhaustive,
Overlapping K-Means) objective as a way to address both issues in an integrated
fashion. Optimizing this discrete objective is NP-hard, and even though there
is a convex relaxation of the objective, straightforward convex optimization
approaches are too expensive for large datasets. A practical alternative is to
use a low-rank factorization of the solution matrix in the convex formulation.
The resulting optimization problem is non-convex, and we can locally optimize
the objective function using an augmented Lagrangian method. In this paper, we
consider two fast multiplier methods to accelerate the convergence of an
augmented Lagrangian scheme: a proximal method of multipliers and an
alternating direction method of multipliers (ADMM). For the proximal augmented
Lagrangian or proximal method of multipliers, we show a convergence result for
the non-convex case with bound-constrained subproblems. These methods are up to
13 times faster---with no change in quality---compared with a standard
augmented Lagrangian method on problems with over 10,000 variables and bring
runtimes down from over an hour to around 5 minutes.
| Yangyang Hou, Joyce Jiyoung Whang, David F. Gleich, Inderjit S.
Dhillon | null | 1602.01910 | null | null |
Recognition of Visually Perceived Compositional Human Actions by
Multiple Spatio-Temporal Scales Recurrent Neural Networks | cs.CV cs.AI cs.LG | The current paper proposes a novel neural network model for recognizing
visually perceived human actions. The proposed multiple spatio-temporal scales
recurrent neural network (MSTRNN) model is derived by introducing multiple
timescale recurrent dynamics to the conventional convolutional neural network
model. One of the essential characteristics of the MSTRNN is that its
architecture imposes both spatial and temporal constraints simultaneously on
the neural activity, which varies over multiple scales across different layers. As
suggested by the principle of the upward and downward causation, it is assumed
that the network can develop meaningful structures such as functional hierarchy
by taking advantage of such constraints during the course of learning. To
evaluate the characteristics of the model, the current study uses three types
of human action video datasets consisting of different types of primitive
actions and different levels of compositionality. The performance of
the MSTRNN in testing on these datasets is compared with that of other
representative deep learning models used in the field. The analysis of the
internal representation obtained through the learning with the dataset
clarifies what sorts of functional hierarchy can be developed by extracting the
essential compositionality underlying the dataset.
| Haanvid Lee, Minju Jung, and Jun Tani | null | 1602.01921 | null | null |
Compressive Spectral Clustering | cs.DS cs.LG stat.ML | Spectral clustering has become a popular technique due to its high
performance in many contexts. It comprises three main steps: create a
similarity graph between N objects to cluster, compute the first k eigenvectors
of its Laplacian matrix to define a feature vector for each object, and run
k-means on these features to separate objects into k classes. Each of these
three steps becomes computationally intensive for large N and/or k. We propose
to speed up the last two steps based on recent results in the emerging field of
graph signal processing: graph filtering of random signals, and random sampling
of bandlimited graph signals. We prove that our method, with a gain in
computation time that can reach several orders of magnitude, is in fact an
approximation of spectral clustering, for which we are able to control the
error. We test the performance of our method on artificial and real-world
network data.
| Nicolas Tremblay, Gilles Puy, Remi Gribonval, Pierre Vandergheynst | null | 1602.02018 | null | null |
From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label
Classification | cs.CL cs.LG stat.ML | We propose sparsemax, a new activation function similar to the traditional
softmax, but able to output sparse probabilities. After deriving its
properties, we show how its Jacobian can be efficiently computed, enabling its
use in a network trained with backpropagation. Then, we propose a new smooth
and convex loss function which is the sparsemax analogue of the logistic loss.
We reveal an unexpected connection between this new loss and the Huber
classification loss. We obtain promising empirical results in multi-label
classification problems and in attention-based neural networks for natural
language inference. For the latter, we achieve a similar performance as the
traditional softmax, but with a selective, more compact, attention focus.
| Andr\'e F. T. Martins and Ram\'on Fernandez Astudillo | null | 1602.02068 | null | null |
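A minimal sketch of the sparsemax transformation itself, computed as a Euclidean projection onto the probability simplex via the standard sorting routine. This illustrates the forward computation only, not the paper's sparsemax loss or its Jacobian.

```python
import numpy as np

def sparsemax(z):
    """Project a score vector z onto the probability simplex (sparsemax forward pass).
    Unlike softmax, the result can contain exact zeros."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    support = 1 + ks * z_sorted > cumsum          # coordinates kept in the support
    k = ks[support][-1]
    tau = (cumsum[k - 1] - 1.0) / k               # threshold so probabilities sum to 1
    return np.maximum(z - tau, 0.0)

z = np.array([2.0, 1.5, 0.1, -1.0])
print(sparsemax(z), sparsemax(z).sum())           # sparse distribution summing to 1
```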
Compressive PCA for Low-Rank Matrices on Graphs | cs.LG | We introduce a novel framework for an approximate recovery of data matrices
which are low-rank on graphs, from sampled measurements. The rows and columns
of such matrices belong to the span of the first few eigenvectors of the graphs
constructed between their rows and columns. We leverage this property to
recover the non-linear low-rank structures efficiently from sampled data
measurements, with a low cost (linear in n). First, a Restricted Isometry
Property (RIP) condition is introduced for efficient uniform sampling of the
rows and columns of such matrices based on the cumulative coherence of graph
eigenvectors. Secondly, a state-of-the-art fast low-rank recovery method is
suggested for the sampled data. Finally, several efficient, parallel and
parameter-free decoders are presented along with their theoretical analysis for
decoding the low-rank and cluster indicators for the full data matrix. Thus, we
overcome the computational limitations of the standard linear low-rank recovery
methods for big datasets. Our method can also be seen as a major step towards
efficient recovery of non-linear low-rank structures. For a matrix of size n X
p, on a single core machine, our method gains a speed up of $p^2/k$ over Robust
Principal Component Analysis (RPCA), where k << p is the subspace dimension.
Numerically, we can recover a low-rank matrix of size 10304 X 1000, 100 times
faster than Robust PCA.
| Nauman Shahid, Nathanael Perraudin, Gilles Puy, Pierre Vandergheynst | null | 1602.02070 | null | null |
Variance-Reduced and Projection-Free Stochastic Optimization | cs.LG | The Frank-Wolfe optimization algorithm has recently regained popularity for
machine learning applications due to its projection-free property and its
ability to handle structured constraints. However, in the stochastic learning
setting, it is still relatively understudied compared to the gradient descent
counterpart. In this work, leveraging a recent variance reduction technique, we
propose two stochastic Frank-Wolfe variants which substantially improve
previous results in terms of the number of stochastic gradient evaluations
needed to achieve $1-\epsilon$ accuracy. For example, we improve from
$O(\frac{1}{\epsilon})$ to $O(\ln\frac{1}{\epsilon})$ if the objective function
is smooth and strongly convex, and from $O(\frac{1}{\epsilon^2})$ to
$O(\frac{1}{\epsilon^{1.5}})$ if the objective function is smooth and
Lipschitz. The theoretical improvement is also observed in experiments on
real-world datasets for a multiclass classification application.
| Elad Hazan and Haipeng Luo | null | 1602.02101 | null | null |
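For context, a minimal sketch of a plain (deterministic) Frank-Wolfe step over the probability simplex, the projection-free primitive the stochastic variants above build on. The constraint set, objective, and step-size rule here are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=100):
    """Plain Frank-Wolfe over the probability simplex: the linear minimization
    oracle returns a vertex (a basis vector), so no projection is ever needed."""
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # vertex minimizing the linearized objective
        gamma = 2.0 / (t + 2.0)          # classic step-size rule
        x = (1 - gamma) * x + gamma * s
    return x

# Example: minimize ||x - b||^2 over the simplex (b itself lies off the simplex).
b = np.array([0.7, 0.5, -0.2])
x = frank_wolfe_simplex(lambda x: 2 * (x - b), np.ones(3) / 3)
print(x.round(3))
```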
Sequence Classification with Neural Conditional Random Fields | cs.LG | The proliferation of sensor devices monitoring human activity generates
voluminous amounts of temporal sequences that need to be interpreted and
categorized. Moreover, complex behavior detection requires the personalization
of multi-sensor fusion algorithms. Conditional random fields (CRFs) are
commonly used in structured prediction tasks such as part-of-speech tagging in
natural language processing. Conditional probabilities guide the choice of each
tag/label in the sequence, conflating the structured prediction task with the
sequence classification task, where different models provide different
categorizations of the same sequence. The claim of this paper is that CRF models
also provide discriminative models to distinguish between types of sequence
regardless of the accuracy of the labels obtained if we calibrate the class
membership estimate of the sequence. We introduce and compare different neural
network based linear-chain CRFs and we present experiments on two complex
sequence classification and structured prediction tasks to support this claim.
| Myriam Abramson | null | 1602.02123 | null | null |
Reducing Runtime by Recycling Samples | cs.LG stat.ML | Contrary to the situation with stochastic gradient descent, we argue that
when using stochastic methods with variance reduction, such as SDCA, SAG or
SVRG, as well as their variants, it could be beneficial to reuse previously
used samples instead of fresh samples, even when fresh samples are available.
We demonstrate this empirically for SDCA, SAG and SVRG, studying the optimal
sample size one should use, and also uncover behavior that suggests running
SDCA for an integer number of epochs could be wasteful.
| Jialei Wang, Hai Wang, Nathan Srebro | null | 1602.02136 | null | null |
Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters | cs.LG stat.ML | The amount of data available in the world is growing faster than our ability
to deal with it. However, if we take advantage of the internal
\emph{structure}, data may become much smaller for machine learning purposes.
In this paper we focus on one of the fundamental machine learning tasks,
empirical risk minimization (ERM), and provide faster algorithms with the help
from the clustering structure of the data.
We introduce a simple notion of raw clustering that can be efficiently
computed from the data, and propose two algorithms based on clustering
information. Our accelerated algorithm ClusterACDM is built on a novel Haar
transformation applied to the dual space of the ERM problem, and our
variance-reduction based algorithm ClusterSVRG introduces a new gradient
estimator using clustering. Our algorithms outperform their classical
counterparts ACDM and SVRG respectively.
| Zeyuan Allen-Zhu, Yang Yuan, Karthik Sridharan | null | 1602.02151 | null | null |
Daleel: Simplifying Cloud Instance Selection Using Machine Learning | cs.DC cs.LG cs.PF | Decision making in cloud environments is quite challenging due to the
diversity in service offerings and pricing models, especially considering that
the cloud market is an incredibly fast moving one. In addition, there are no
hard and fast rules, each customer has a specific set of constraints (e.g.
budget) and application requirements (e.g. minimum computational resources).
Machine learning can help address some of the complicated decisions by carrying
out customer-specific analytics to determine the most suitable instance type(s)
and the most opportune time for starting or migrating instances. We employ
machine learning techniques to develop an adaptive deployment policy, providing
an optimal match between the customer demands and the available cloud service
offerings. We provide an experimental study based on extensive set of job
executions over a major public cloud infrastructure.
| Faiza Samreen, Yehia Elkhatib, Matthew Rowe, Gordon S. Blair | 10.1109/NOMS.2016.7502858 | 1602.02159 | null | null |
A Note on Alternating Minimization Algorithm for the Matrix Completion
Problem | stat.ML cs.LG cs.NA | We consider the problem of reconstructing a low rank matrix from a subset of
its entries and analyze two variants of the so-called Alternating Minimization
algorithm, which has been proposed in the past. We establish that when the
underlying matrix has rank $r=1$, has positive bounded entries, and the graph
$\mathcal{G}$ underlying the revealed entries has bounded degree and diameter
which is at most logarithmic in the size of the matrix, both algorithms succeed
in reconstructing the matrix approximately in polynomial time starting from an
arbitrary initialization. We further provide simulation results which suggest
that the second algorithm which is based on the message passing type updates,
performs significantly better.
| David Gamarnik and Sidhant Misra | 10.1109/LSP.2016.2576979 | 1602.02164 | null | null |
On Column Selection in Approximate Kernel Canonical Correlation Analysis | cs.LG stat.ML | We study the problem of column selection in large-scale kernel canonical
correlation analysis (KCCA) using the Nystr\"om approximation, where one
approximates two positive semi-definite kernel matrices using "landmark" points
from the training set. When building low-rank kernel approximations in KCCA,
previous work mostly samples the landmarks uniformly at random from the
training set. We propose novel strategies for sampling the landmarks
non-uniformly based on a version of statistical leverage scores recently
developed for kernel ridge regression. We study the approximation accuracy of
the proposed non-uniform sampling strategy, develop an incremental algorithm
that explores the path of approximation ranks and facilitates efficient model
selection, and derive the kernel stability of out-of-sample mapping for our
method. Experimental results on both synthetic and real-world datasets
demonstrate the promise of our method.
| Weiran Wang | null | 1602.02172 | null | null |
Active Information Acquisition | stat.ML cs.LG | We propose a general framework for sequential and dynamic acquisition of
useful information in order to solve a particular task. While our goal could in
principle be tackled by general reinforcement learning, our particular setting
is constrained enough to allow more efficient algorithms. In this paper, we
work under the Learning to Search framework and show how to formulate the goal
of finding a dynamic information acquisition policy in that framework. We apply
our formulation on two tasks, sentiment analysis and image recognition, and
show that the learned policies exhibit good statistical performance. As an
emergent byproduct, the learned policies show a tendency to focus on the most
prominent parts of each instance and give harder instances more attention
without explicitly being trained to do so.
| He He, Paul Mineiro, Nikos Karampatziakis | null | 1602.02181 | null | null |
Convex Relaxation Regression: Black-Box Optimization of Smooth Functions
by Learning Their Convex Envelopes | stat.ML cs.LG | Finding efficient and provable methods to solve non-convex optimization
problems is an outstanding challenge in machine learning and optimization
theory. A popular approach used to tackle non-convex problems is to use convex
relaxation techniques to find a convex surrogate for the problem.
Unfortunately, convex relaxations typically must be found on a
problem-by-problem basis. Thus, providing a general-purpose strategy to
estimate a convex relaxation would have a wide reaching impact. Here, we
introduce Convex Relaxation Regression (CoRR), an approach for learning convex
relaxations for a class of smooth functions. The main idea behind our approach
is to estimate the convex envelope of a function $f$ by evaluating $f$ at a set
of $T$ random points and then fitting a convex function to these function
evaluations. We prove that with probability greater than $1-\delta$, the
solution of our algorithm converges to the global optimizer of $f$ with error
$\mathcal{O} \Big( \big(\frac{\log(1/\delta) }{T} \big)^{\alpha} \Big)$ for
some $\alpha> 0$. Our approach enables the use of convex optimization tools to
solve a class of non-convex optimization problems.
| Mohammad Gheshlaghi Azar, Eva Dyer, Konrad Kording | null | 1602.02191 | null | null |
BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits | cs.LG stat.ML | We present efficient algorithms for the problem of contextual bandits with
i.i.d. covariates, an arbitrary sequence of rewards, and an arbitrary class of
policies. Our algorithm BISTRO requires d calls to the empirical risk
minimization (ERM) oracle per round, where d is the number of actions. The
method uses unlabeled data to make the problem computationally simple. When the
ERM problem itself is computationally hard, we extend the approach by employing
multiplicative approximation algorithms for the ERM. The integrality gap of the
relaxation only enters in the regret bound rather than the benchmark. Finally,
we show that the adversarial version of the contextual bandit problem is
learnable (and efficient) whenever the full-information supervised online
learning problem has a non-trivial regret guarantee (and efficient).
| Alexander Rakhlin and Karthik Sridharan | null | 1602.02196 | null | null |
Efficient Second Order Online Learning by Sketching | cs.LG | We propose Sketched Online Newton (SON), an online second order learning
algorithm that enjoys substantially improved regret guarantees for
ill-conditioned data. SON is an enhanced version of the Online Newton Step,
which, via sketching techniques enjoys a running time linear in the dimension
and sketch size. We further develop sparse forms of the sketching methods (such
as Oja's rule), making the computation linear in the sparsity of features.
Together, the algorithm eliminates all computational obstacles in previous
second order online learning approaches.
| Haipeng Luo, Alekh Agarwal, Nicolo Cesa-Bianchi, John Langford | null | 1602.02202 | null | null |
Classification accuracy as a proxy for two sample testing | cs.LG cs.AI math.ST stat.ML stat.TH | When data analysts train a classifier and check if its accuracy is
significantly different from chance, they are implicitly performing a
two-sample test. We investigate the statistical properties of this flexible
approach in the high-dimensional setting. We prove two results that hold for
all classifiers in any dimensions: if its true error remains $\epsilon$-better
than chance for some $\epsilon>0$ as $d,n \to \infty$, then (a) the
permutation-based test is consistent (has power approaching to one), (b) a
computationally efficient test based on a Gaussian approximation of the null
distribution is also consistent. To get a finer understanding of the rates of
consistency, we study a specialized setting of distinguishing Gaussians with
mean-difference $\delta$ and common (known or unknown) covariance $\Sigma$,
when $d/n \to c \in (0,\infty)$. We study variants of Fisher's linear
discriminant analysis (LDA) such as "naive Bayes" in a nontrivial regime when
$\epsilon \to 0$ (the Bayes classifier has true accuracy approaching 1/2), and
contrast their power with corresponding variants of Hotelling's test.
Surprisingly, the expressions for their power match exactly in terms of
$n,d,\delta,\Sigma$, and the LDA approach is only worse by a constant factor,
achieving an asymptotic relative efficiency (ARE) of $1/\sqrt{\pi}$ for
balanced samples. We also extend our results to high-dimensional elliptical
distributions with finite kurtosis. Other results of independent interest
include minimax lower bounds, and the optimality of Hotelling's test when
$d=o(n)$. Simulation results validate our theory, and we present practical
takeaway messages along with natural open problems.
| Ilmun Kim, Aaditya Ramdas, Aarti Singh, Larry Wasserman | null | 1602.02210 | null | null |
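A minimal sketch of the classifier two-sample test idea described above: label the two samples 0/1, train a classifier, and compare its held-out accuracy against a permutation null. The choice of logistic regression and scikit-learn is an illustrative assumption, not the paper's analyzed setting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classifier_two_sample_test(X, Y, n_perm=200, seed=0):
    """Permutation p-value for H0: X and Y come from the same distribution."""
    rng = np.random.default_rng(seed)
    data = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]

    def heldout_acc(y):
        Xtr, Xte, ytr, yte = train_test_split(data, y, test_size=0.5, random_state=seed)
        return LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

    observed = heldout_acc(labels)
    null = [heldout_acc(rng.permutation(labels)) for _ in range(n_perm)]
    return observed, (1 + sum(a >= observed for a in null)) / (1 + n_perm)

X = np.random.randn(200, 10)
Y = np.random.randn(200, 10) + 0.5          # mean-shifted second sample
print(classifier_two_sample_test(X, Y))     # accuracy well above 0.5, small p-value
```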
Strongly-Typed Recurrent Neural Networks | cs.LG cs.NE | Recurrent neural networks are increasingly popular models for sequential
learning. Unfortunately, although the most effective RNN architectures are
perhaps excessively complicated, extensive searches have not found simpler
alternatives. This paper imports ideas from physics and functional programming
into RNN design to provide guiding principles. From physics, we introduce type
constraints, analogous to the constraints that forbid adding meters to
seconds. From functional programming, we require that strongly-typed
architectures factorize into stateless learnware and state-dependent firmware,
reducing the impact of side-effects. The features learned by strongly-typed
nets have a simple semantic interpretation via dynamic average-pooling on
one-dimensional convolutions. We also show that strongly-typed gradients are
better behaved than in classical architectures, and characterize the
representational power of strongly-typed nets. Finally, experiments show that,
despite being more constrained, strongly-typed architectures achieve lower
training and comparable generalization error to classical architectures.
| David Balduzzi, Muhammad Ghifary | null | 1602.02218 | null | null |
Improved Dropout for Shallow and Deep Learning | cs.LG stat.ML | Dropout has achieved great success in training deep neural
networks by independently zeroing out the outputs of neurons at random. It has
also received a surge of interest for shallow learning, e.g., logistic
regression. However, the independent sampling for dropout could be suboptimal
for the sake of convergence. In this paper, we propose to use multinomial
sampling for dropout, i.e., sampling features or neurons according to a
multinomial distribution with different probabilities for different
features/neurons. To exhibit the optimal dropout probabilities, we analyze the
shallow learning with multinomial dropout and establish the risk bound for
stochastic optimization. By minimizing a sampling dependent factor in the risk
bound, we obtain a distribution-dependent dropout with sampling probabilities
dependent on the second order statistics of the data distribution. To tackle
the issue of evolving distribution of neurons in deep learning, we propose an
efficient adaptive dropout (named \textbf{evolutional dropout}) that computes
the sampling probabilities on-the-fly from a mini-batch of examples. Empirical
studies on several benchmark datasets demonstrate that the proposed dropouts
achieve not only much faster convergence but also a smaller testing error
than the standard dropout. For example, on the CIFAR-100 data, the evolutional
dropout achieves relative improvements over 10\% on the prediction performance
and over 50\% on the convergence speed compared to the standard dropout.
| Zhe Li, Boqing Gong, Tianbao Yang | null | 1602.02220 | null | null |
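A minimal numpy sketch of the data-dependent dropout idea described in the entry above. The per-feature statistic, the scaling to a target keep fraction, and the function name are illustrative assumptions; the paper's exact multinomial sampling scheme may differ.

```python
import numpy as np

def evolutional_dropout_mask(X, keep_fraction=0.5, rng=None):
    """Sample a dropout mask whose per-feature keep probabilities are set from
    second-order statistics of the mini-batch X, then rescale the kept
    coordinates so the expected activation is unchanged (inverted dropout)."""
    rng = np.random.default_rng() if rng is None else rng
    stat = np.sqrt((X ** 2).mean(axis=0)) + 1e-12            # per-feature second moment
    keep_prob = np.clip(stat / stat.sum() * X.shape[1] * keep_fraction, 1e-6, 1.0)
    mask = (rng.random(X.shape[1]) < keep_prob).astype(float)
    return mask / keep_prob

# Usage: X_dropped = X * evolutional_dropout_mask(X)   (broadcasts over the batch)
```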
A Tractable Fully Bayesian Method for the Stochastic Block Model | cs.LG stat.ML | The stochastic block model (SBM) is a generative model revealing macroscopic
structures in graphs. Bayesian methods are used for (i) cluster assignment
inference and (ii) model selection for the number of clusters. In this paper,
we study the behavior of Bayesian inference in the SBM in the large sample
limit. Combining variational approximation and Laplace's method, a consistent
criterion of the fully marginalized log-likelihood is established. Based on
that, we derive a tractable algorithm that solves tasks (i) and (ii)
concurrently, obviating the need for an outer loop to check all model
candidates. Our empirical and theoretical results demonstrate that our method
is scalable in computation, accurate in approximation, and concise in model
selection.
| Kohei Hayashi, Takuya Konishi, Tatsuro Kawamoto | null | 1602.02256 | null | null |
Recovery guarantee of weighted low-rank approximation via alternating
minimization | cs.LG cs.DS stat.ML | Many applications require recovering a ground truth low-rank matrix from
noisy observations of the entries, which in practice is typically formulated as
a weighted low-rank approximation problem and solved by non-convex optimization
heuristics such as alternating minimization. In this paper, we provide provable
recovery guarantee of weighted low-rank via a simple alternating minimization
algorithm. In particular, for a natural class of matrices and weights and
without any assumption on the noise, we bound the spectral norm of the
difference between the recovered matrix and the ground truth, by the spectral
norm of the weighted noise plus an additive error that decreases exponentially
with the number of rounds of alternating minimization, from either
initialization by SVD or, more importantly, random initialization. These
provide the first theoretical results for weighted low-rank via alternating
minimization with non-binary deterministic weights, significantly generalizing
those for matrix completion, the special case with binary weights, since our
assumptions are similar or weaker than those made in existing works.
Furthermore, this is achieved by a very simple algorithm that improves the
vanilla alternating minimization with a simple clipping step.
The key technical challenge is that under non-binary deterministic weights,
na\"ive alternating steps will destroy the incoherence and spectral properties
of the intermediate solutions, which are needed for making progress towards the
ground truth. We show that the properties only need to hold in an average sense
and can be achieved by the clipping step.
We further provide an alternating algorithm that uses a whitening step that
keeps the properties via SDP and Rademacher rounding and thus requires weaker
assumptions. This technique can potentially be applied in some other
applications and is of independent interest.
| Yuanzhi Li, Yingyu Liang, Andrej Risteski | null | 1602.02262 | null | null |
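For illustration, a plain numpy sketch of alternating minimization for the weighted low-rank objective discussed above; the clipping and whitening steps that the paper's analysis relies on are deliberately omitted, and the regularization constant is an assumption added for numerical stability.

```python
import numpy as np

def weighted_als(M, W, rank, n_iters=30, reg=1e-8, seed=0):
    """Minimize || W * (M - U V^T) ||_F^2 by solving a weighted least-squares
    problem for each row of U and each row of V in turn."""
    m, n = M.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    eye = reg * np.eye(rank)
    for _ in range(n_iters):
        for i in range(m):                                   # row-wise update of U
            Vw = V * W[i][:, None]
            U[i] = np.linalg.solve(Vw.T @ V + eye, Vw.T @ M[i])
        for j in range(n):                                   # row-wise update of V
            Uw = U * W[:, j][:, None]
            V[j] = np.linalg.solve(Uw.T @ U + eye, Uw.T @ M[:, j])
    return U, V
```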
DOLPHIn - Dictionary Learning for Phase Retrieval | math.OC cs.IT cs.LG math.IT stat.ML | We propose a new algorithm to learn a dictionary for reconstructing and
sparsely encoding signals from measurements without phase. Specifically, we
consider the task of estimating a two-dimensional image from squared-magnitude
measurements of a complex-valued linear transformation of the original image.
Several recent phase retrieval algorithms exploit underlying sparsity of the
unknown signal in order to improve recovery performance. In this work, we
consider such a sparse signal prior in the context of phase retrieval, when the
sparsifying dictionary is not known in advance. Our algorithm jointly
reconstructs the unknown signal - possibly corrupted by noise - and learns a
dictionary such that each patch of the estimated image can be sparsely
represented. Numerical experiments demonstrate that our approach can obtain
significantly better reconstructions for phase retrieval problems with noise
than methods that cannot exploit such "hidden" sparsity. Moreover, on the
theoretical side, we provide a convergence result for our method.
| Andreas M. Tillmann, Yonina C. Eldar, Julien Mairal | 10.1109/TSP.2016.2607180 | 1602.02263 | null | null |
Ladder Variational Autoencoders | stat.ML cs.LG | Variational Autoencoders are powerful models for unsupervised learning.
However, deep models with several layers of dependent stochastic variables are
difficult to train which limits the improvements obtained using these highly
expressive models. We propose a new inference model, the Ladder Variational
Autoencoder, that recursively corrects the generative distribution by a data
dependent approximate likelihood in a process resembling the recently proposed
Ladder Network. We show that this model provides state of the art predictive
log-likelihood and tighter log-likelihood lower bound compared to the purely
bottom-up inference in layered Variational Autoencoders and other generative
models. We provide a detailed analysis of the learned hierarchical latent
representation and show that our new inference model is qualitatively different
and utilizes a deeper more distributed hierarchy of latent variables. Finally,
we observe that batch normalization and deterministic warm-up (gradually
turning on the KL-term) are crucial for training variational models with many
stochastic layers.
| Casper Kaae S{\o}nderby, Tapani Raiko, Lars Maal{\o}e, S{\o}ren Kaae
S{\o}nderby, Ole Winther | null | 1602.02282 | null | null |
Importance Sampling for Minibatches | cs.LG math.OC stat.ML | Minibatching is a very well studied and highly popular technique in
supervised learning, used by practitioners due to its ability to accelerate
training through better utilization of parallel processing power and reduction
of stochastic variance. Another popular technique is importance sampling -- a
strategy for preferential sampling of more important examples also capable of
accelerating the training process. However, despite considerable effort by the
community in these areas, and due to the inherent technical difficulty of the
problem, there is no existing work combining the power of importance sampling
with the strength of minibatching. In this paper we propose the first {\em
importance sampling for minibatches} and give simple and rigorous complexity
analysis of its performance. We illustrate on synthetic problems that for
training data of certain properties, our sampling can lead to several orders of
magnitude improvement in training time. We then test the new sampling on
several popular datasets, and show that the improvement can reach an order of
magnitude.
| Dominik Csiba and Peter Richt\'arik | null | 1602.02283 | null | null |
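A short numpy sketch of the basic mechanism described above: draw a minibatch with non-uniform probabilities and reweight each term so the gradient estimate stays unbiased. The choice of importance values (e.g. per-example smoothness constants) is the paper's subject and is left abstract here.

```python
import numpy as np

def importance_minibatch(importance, batch_size, rng=None):
    """Return minibatch indices drawn with probability proportional to
    `importance`, plus the weights 1/(n * p_i) that keep the averaged
    per-example gradient an unbiased estimate of the full gradient."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.asarray(importance, dtype=float)
    p = p / p.sum()
    n = len(p)
    idx = rng.choice(n, size=batch_size, replace=True, p=p)
    weights = 1.0 / (n * p[idx])
    return idx, weights

# Usage inside SGD: g = (weights[:, None] * per_example_grads[idx]).mean(axis=0)
```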
A Deep Learning Approach to Unsupervised Ensemble Learning | stat.ML cs.LG | We show how deep learning methods can be applied in the context of
crowdsourcing and unsupervised ensemble learning. First, we prove that the
popular model of Dawid and Skene, which assumes that all classifiers are
conditionally independent, is {\em equivalent} to a Restricted Boltzmann
Machine (RBM) with a single hidden node. Hence, under this model, the posterior
probabilities of the true labels can be instead estimated via a trained RBM.
Next, to address the more general case, where classifiers may strongly violate
the conditional independence assumption, we propose to apply RBM-based Deep
Neural Net (DNN). Experimental results on various simulated and real-world
datasets demonstrate that our proposed DNN approach outperforms other
state-of-the-art methods, in particular when the data violates the conditional
independence assumption.
| Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph
Chang, Yuval Kluger | null | 1602.02285 | null | null |
R\'enyi Divergence Variational Inference | stat.ML cs.LG | This paper introduces the variational R\'enyi bound (VR) that extends
traditional variational inference to R\'enyi's alpha-divergences. This new
family of variational methods unifies a number of existing approaches, and
enables a smooth interpolation from the evidence lower-bound to the log
(marginal) likelihood that is controlled by the value of alpha that
parametrises the divergence. The reparameterization trick, Monte Carlo
approximation and stochastic optimisation methods are deployed to obtain a
tractable and unified framework for optimisation. We further consider negative
alpha values and propose a novel variational inference method as a new special
case in the proposed framework. Experiments on Bayesian neural networks and
variational auto-encoders demonstrate the wide applicability of the VR bound.
| Yingzhen Li, Richard E. Turner | null | 1602.02311 | null | null |
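For reference, the general form of a Rényi-divergence variational bound consistent with the description above (a hedged reconstruction, not quoted from the paper): alpha -> 1 recovers the standard evidence lower bound, while alpha = 0 gives the exact log marginal likelihood.
\[
\mathcal{L}_{\alpha}(q;\,x) \;=\; \frac{1}{1-\alpha}\,\log \mathbb{E}_{q(z)}\!\left[\left(\frac{p(z,x)}{q(z)}\right)^{1-\alpha}\right].
\]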
Stratified Bayesian Optimization | cs.LG math.OC stat.ML | We consider derivative-free black-box global optimization of expensive noisy
functions, when most of the randomness in the objective is produced by a few
influential scalar random inputs. We present a new Bayesian global optimization
algorithm, called Stratified Bayesian Optimization (SBO), which uses this
strong dependence to improve performance. Our algorithm is similar in spirit to
stratification, a technique from simulation, which uses strong dependence on a
categorical representation of the random input to reduce variance. We
demonstrate in numerical experiments that SBO outperforms state-of-the-art
Bayesian optimization benchmarks that do not leverage this dependence.
| Saul Toscano-Palmerin and Peter I. Frazier | null | 1602.02338 | null | null |
Solving Ridge Regression using Sketched Preconditioned SVRG | cs.LG | We develop a novel preconditioning method for ridge regression, based on
recent linear sketching methods. By equipping Stochastic Variance Reduced
Gradient (SVRG) with this preconditioning process, we obtain a significant
speed-up relative to fast stochastic methods such as SVRG, SDCA and SAG.
| Alon Gonen, Francesco Orabona, Shai Shalev-Shwartz | null | 1602.02350 | null | null |
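For context, a textbook sketch of the SVRG update that the method above accelerates; the sketched preconditioning matrix itself, which is the paper's contribution, is not shown, and the epoch length is an assumption.

```python
import numpy as np

def svrg(grad_i, w0, n, step, n_epochs=10, rng=None):
    """Vanilla SVRG: compute a full gradient at a snapshot once per epoch,
    then take variance-reduced stochastic steps around that snapshot."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    for _ in range(n_epochs):
        w_snap = w.copy()
        full_grad = sum(grad_i(w_snap, i) for i in range(n)) / n
        for _ in range(n):
            i = rng.integers(n)
            w = w - step * (grad_i(w, i) - grad_i(w_snap, i) + full_grad)
    return w
```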
Hyperparameter optimization with approximate gradient | stat.ML cs.LG math.OC | Most models in machine learning contain at least one hyperparameter to
control for model complexity. Choosing an appropriate set of hyperparameters is
both crucial in terms of model accuracy and computationally challenging. In
this work we propose an algorithm for the optimization of continuous
hyperparameters using inexact gradient information. An advantage of this method
is that hyperparameters can be updated before model parameters have fully
converged. We also give sufficient conditions for the global convergence of
this method, based on regularity conditions of the involved functions and
summability of errors. Finally, we validate the empirical performance of this
method on the estimation of regularization constants of L2-regularized logistic
regression and kernel Ridge regression. Empirical benchmarks indicate that our
approach is highly competitive with respect to state of the art methods.
| Fabian Pedregosa | null | 1602.02355 | null | null |
NED: An Inter-Graph Node Metric Based On Edit Distance | cs.DB cs.LG cs.SI | Node similarity is a fundamental problem in graph analytics. However, node
similarity between nodes in different graphs (inter-graph nodes) has not
received a lot of attention yet. The inter-graph node similarity is important
in learning a new graph based on the knowledge of an existing graph (transfer
learning on graphs) and has applications in biological, communication, and
social networks. In this paper, we propose a novel distance function for
measuring inter-graph node similarity with edit distance, called NED. In NED,
two nodes are compared according to their local neighborhood structures which
are represented as unordered k-adjacent trees, without relying on labels or
other assumptions. Since the computation problem of tree edit distance on
unordered trees is NP-Complete, we propose a modified tree edit distance,
called TED*, for comparing neighborhood trees. TED* is a metric distance, as
the original tree edit distance, but more importantly, TED* is polynomially
computable. As a metric distance, NED admits efficient indexing, provides
interpretable results, and is shown to perform better than existing approaches on
a number of data analysis tasks, including graph de-anonymization. Finally, the
efficiency and effectiveness of NED are empirically demonstrated using
real-world graphs.
| Haohan Zhu, Xianrui Meng and George Kollios | null | 1602.02358 | null | null |
Supervised and Semi-Supervised Text Categorization using LSTM for Region
Embeddings | stat.ML cs.CL cs.LG | One-hot CNN (convolutional neural network) has been shown to be effective for
text categorization (Johnson & Zhang, 2015). We view it as a special case of a
general framework which jointly trains a linear model with a non-linear feature
generator consisting of `text region embedding + pooling'. Under this
framework, we explore a more sophisticated region embedding method using Long
Short-Term Memory (LSTM). LSTM can embed text regions of variable (and possibly
large) sizes, whereas the region size needs to be fixed in a CNN. We seek
effective and efficient use of LSTM for this purpose in the supervised and
semi-supervised settings. The best results were obtained by combining region
embeddings in the form of LSTM and convolution layers trained on unlabeled
data. The results indicate that on this task, embeddings of text regions, which
can convey complex concepts, are more useful than embeddings of single words in
isolation. We report performances exceeding the previous best results on four
benchmark datasets.
| Rie Johnson, Tong Zhang | null | 1602.02373 | null | null |
Disentangled Representations in Neural Models | cs.LG cs.NE | Representation learning is the foundation for the recent success of neural
network models. However, the distributed representations generated by neural
networks are far from ideal. Due to their highly entangled nature, they are
difficult to reuse and interpret, and they do a poor job of capturing the sparsity
which is present in real-world transformations. In this paper, I describe
methods for learning disentangled representations in the two domains of
graphics and computation. These methods allow neural methods to learn
representations which are easy to interpret and reuse, yet they incur little or
no penalty to performance. In the Graphics section, I demonstrate the ability
of these methods to infer the generating parameters of images and rerender
those images under novel conditions. In the Computation section, I describe a
model which is able to factorize a multitask learning problem into subtasks and
which experiences no catastrophic forgetting. Together these techniques provide
the tools to design a wide range of models that learn disentangled
representations and better model the factors of variation in the real world.
| William Whitney | null | 1602.02383 | null | null |
Network Inference by Learned Node-Specific Degree Prior | stat.ML cs.LG | We propose a novel method for network inference from partially observed edges
using a node-specific degree prior. The degree prior is derived from observed
edges in the network to be inferred, and its hyper-parameters are determined by
cross validation. Then we formulate network inference as a matrix completion
problem regularized by our degree prior. Our theoretical analysis indicates
that this prior favors a network following the learned degree distribution, and
may lead to an improved network recovery error bound compared to previous work.
Experimental results on both simulated and real biological networks demonstrate
the superior performance of our method in various settings.
| Qingming Tang, Lifu Tu, Weiran Wang and Jinbo Xu | null | 1602.02386 | null | null |
Ensemble Robustness and Generalization of Stochastic Deep Learning
Algorithms | cs.LG cs.CV stat.ML | The question why deep learning algorithms generalize so well has attracted
increasing research interest. However, most of the well-established approaches,
such as hypothesis capacity, stability or sparseness, have not provided
complete explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this
work, we focus on the robustness approach (Xu & Mannor, 2012), i.e., if the
error of a hypothesis will not change much due to perturbations of its training
examples, then it will also generalize well. As most deep learning algorithms
are stochastic (e.g., Stochastic Gradient Descent, Dropout, and
Bayes-by-backprop), we revisit the robustness arguments of Xu & Mannor, and
introduce a new approach, ensemble robustness, that concerns the robustness of
a population of hypotheses. Through the lens of ensemble robustness, we reveal
that a stochastic learning algorithm can generalize well as long as its
sensitivity to adversarial perturbations is bounded on average over training
examples. Moreover, an algorithm may be sensitive to some adversarial examples
(Goodfellow et al., 2015) but still generalize well. To support our claims, we
provide extensive simulations for different deep learning algorithms and
different network architectures exhibiting a strong correlation between
ensemble robustness and the ability to generalize.
| Tom Zahavy, Bingyi Kang, Alex Sivak, Jiashi Feng, Huan Xu, Shie Mannor | null | 1602.02389 | null | null |
A Simple Practical Accelerated Method for Finite Sums | stat.ML cs.LG | We describe a novel optimization method for finite sums (such as empirical
risk minimization problems) building on the recently introduced SAGA method.
Our method achieves an accelerated convergence rate on strongly convex smooth
problems. Our method has only one parameter (a step size), and is radically
simpler than other accelerated methods for finite sums. Additionally it can be
applied when the terms are non-smooth, yielding a method applicable in many
areas where operator splitting methods would traditionally be applied.
| Aaron Defazio | null | 1602.02442 | null | null |
Loss factorization, weakly supervised learning and label noise
robustness | cs.LG stat.ML | We prove that the empirical risk of most well-known loss functions factors
into a linear term aggregating all labels with a term that is label free, and
can further be expressed by sums of the loss. This holds true even for
non-smooth, non-convex losses and in any RKHS. The first term is a (kernel)
mean operator --the focal quantity of this work-- which we characterize as the
sufficient statistic for the labels. The result tightens known generalization
bounds and sheds new light on their interpretation.
Factorization has a direct application on weakly supervised learning. In
particular, we demonstrate that algorithms like SGD and proximal methods can be
adapted with minimal effort to handle weak supervision, once the mean operator
has been estimated. We apply this idea to learning with asymmetric noisy
labels, connecting and extending prior work. Furthermore, we show that most
losses enjoy a data-dependent (by the mean operator) form of noise robustness,
in contrast with known negative results.
| Giorgio Patrini, Frank Nielsen, Richard Nock, Marcello Carioni | null | 1602.02450 | null | null |
Efficient Algorithms for Adversarial Contextual Learning | cs.LG | We provide the first oracle efficient sublinear regret algorithms for
adversarial versions of the contextual bandit problem. In this problem, the
learner repeatedly makes an action on the basis of a context and receives
reward for the chosen action, with the goal of achieving reward competitive
with a large class of policies. We analyze two settings: i) in the transductive
setting the learner knows the set of contexts a priori, ii) in the small
separator setting, there exists a small set of contexts such that any two
policies behave differently in one of the contexts in the set. Our algorithms
fall into the follow the perturbed leader family \cite{Kalai2005} and achieve
regret $O(T^{3/4}\sqrt{K\log(N)})$ in the transductive setting and $O(T^{2/3}
d^{3/4} K\sqrt{\log(N)})$ in the separator setting, where $K$ is the number of
actions, $N$ is the number of baseline policies, and $d$ is the size of the
separator. We actually solve the more general adversarial contextual
semi-bandit linear optimization problem, whilst in the full information setting
we address the even more general contextual combinatorial optimization. We
provide several extensions and implications of our algorithms, such as
switching regret and efficient learning with predictable sequences.
| Vasilis Syrgkanis, Akshay Krishnamurthy, Robert E. Schapire | null | 1602.02454 | null | null |
Binarized Neural Networks | cs.LG cs.NE | We introduce a method to train Binarized Neural Networks (BNNs) - neural
networks with binary weights and activations at run-time and when computing the
parameters' gradient at train-time. We conduct two sets of experiments, each
based on a different framework, namely Torch7 and Theano, where we train BNNs
on MNIST, CIFAR-10 and SVHN, and achieve nearly state-of-the-art results.
During the forward pass, BNNs drastically reduce memory size and accesses, and
replace most arithmetic operations with bit-wise operations, which might lead
to a great increase in power-efficiency. Last but not least, we wrote a binary
matrix multiplication GPU kernel with which it is possible to run our MNIST BNN
7 times faster than with an unoptimized GPU kernel, without suffering any loss
in classification accuracy. The code for training and running our BNNs is
available.
| Itay Hubara, Daniel Soudry, Ran El Yaniv | null | 1602.02505 | null | null |
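A small numpy sketch of the two mechanisms the abstract implies: deterministic sign binarization for the forward pass and a straight-through estimator for backpropagating through it. The clipping threshold and function names are illustrative; stochastic binarization variants are not shown.

```python
import numpy as np

def binarize(w):
    """Forward pass: replace real-valued weights/activations by their sign."""
    return np.where(w >= 0.0, 1.0, -1.0)

def straight_through_grad(grad_wrt_binarized, w_real, clip=1.0):
    """Backward pass: pass the incoming gradient through sign() unchanged,
    but zero it where the underlying real-valued parameter has saturated."""
    return grad_wrt_binarized * (np.abs(w_real) <= clip)
```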
Fast K-Means with Accurate Bounds | stat.ML cs.LG | We propose a novel accelerated exact k-means algorithm, which performs better
than the current state-of-the-art low-dimensional algorithm in 18 of 22
experiments, running up to 3 times faster. We also propose a general
improvement of existing state-of-the-art accelerated exact k-means algorithms
through better estimates of the distance bounds used to reduce the number of
distance calculations, and get a speedup in 36 of 44 experiments, up to 1.8
times faster.
We have conducted experiments with our own implementations of existing
methods to ensure homogeneous evaluation of performance, and we show that our
implementations perform as well as or better than existing available
implementations. Finally, we propose simplified variants of standard approaches
and show that they are faster than their fully-fledged counterparts in 59 of 62
experiments.
| James Newling and Fran\c{c}ois Fleuret | null | 1602.02514 | null | null |
Multi-view Kernel Completion | cs.LG stat.ML | In this paper, we introduce the first method that (1) can complete kernel
matrices with completely missing rows and columns as opposed to individual
missing kernel values, (2) does not require any of the kernels to be complete a
priori, and (3) can tackle non-linear kernels. These aspects are necessary in
practical applications such as integrating legacy data sets, learning under
sensor failures and learning when measurements are costly for some of the
views. The proposed approach predicts missing rows by modelling both
within-view and between-view relationships among kernel values. We show, both
on simulated data and real world data, that the proposed method outperforms
existing techniques in the restricted settings where they are available, and
extends applicability to new settings.
| Sahely Bhadra, Samuel Kaski and Juho Rousu | null | 1602.02518 | null | null |
Data-Efficient Reinforcement Learning in Continuous-State POMDPs | stat.ML cs.LG cs.SY | We present a data-efficient reinforcement learning algorithm resistant to
observation noise. Our method extends the highly data-efficient PILCO algorithm
(Deisenroth & Rasmussen, 2011) into partially observed Markov decision
processes (POMDPs) by considering the filtering process during policy
evaluation. PILCO conducts policy search, evaluating each policy by first
predicting an analytic distribution of possible system trajectories. We
additionally predict trajectories w.r.t. a filtering process, achieving
significantly higher performance than combining a filter with a policy
optimised by the original (unfiltered) framework. Our test setup is the
cartpole swing-up task with sensor noise, which involves nonlinear dynamics and
requires nonlinear control.
| Rowan McAllister, Carl Edward Rasmussen | null | 1602.02523 | null | null |
Homogeneity of Cluster Ensembles | cs.LG cs.CV | The expectation and the mean of partitions generated by a cluster ensemble
are not unique in general. This issue poses challenges in statistical inference
and cluster stability. In this contribution, we state sufficient conditions for
uniqueness of expectation and mean. The proposed conditions show that a unique
mean is neither exceptional nor generic. To cope with this issue, we introduce
homogeneity as a measure of how likely a unique mean is for a sample of
partitions. We show that homogeneity is related to cluster stability. This
result points to a possible conflict between cluster stability and diversity in
consensus clustering. To assess homogeneity in a practical setting, we propose
an efficient way to compute a lower bound of homogeneity. Empirical results
using the k-means algorithm suggest that uniqueness of the mean partition is
not exceptional for real-world data. Moreover, for samples of high homogeneity,
uniqueness can be enforced by increasing the number of data points or by
removing outlier partitions. In a broader context, this contribution can be
placed as a further step towards a statistical theory of partitions.
| Brijnesh J. Jain | null | 1602.02543 | null | null |
Generating Images with Perceptual Similarity Metrics based on Deep
Networks | cs.LG cs.CV cs.NE | Image-generating machine learning models are typically trained with loss
functions based on distance in the image space. This often leads to
over-smoothed results. We propose a class of loss functions, which we call deep
perceptual similarity metrics (DeePSiM), that mitigate this problem. Instead of
computing distances in the image space, we compute distances between image
features extracted by deep neural networks. This metric better reflects
perceptual similarity of images and thus leads to better results. We show
three applications: autoencoder training, a modification of a variational
autoencoder, and inversion of deep convolutional networks. In all cases, the
generated images look sharp and resemble natural images.
| Alexey Dosovitskiy and Thomas Brox | null | 1602.02644 | null | null |
Graying the black box: Understanding DQNs | cs.LG cs.AI cs.NE | In recent years there has been growing interest in using deep representations for
reinforcement learning. In this paper, we present a methodology and tools to
analyze Deep Q-networks (DQNs) in a non-blind manner. Moreover, we propose a
new model, the Semi Aggregated Markov Decision Process (SAMDP), and an
algorithm that learns it automatically. The SAMDP model allows us to identify
spatio-temporal abstractions directly from features and may be used as a
sub-goal detector in future work. Using our tools we reveal that the features
learned by DQNs aggregate the state space in a hierarchical fashion, explaining
its success. Moreover, we are able to understand and describe the policies
learned by DQNs for three different Atari2600 games and suggest ways to
interpret, debug and optimize deep neural networks in reinforcement learning.
| Tom Zahavy, Nir Ben Zrihem, Shie Mannor | null | 1602.02658 | null | null |
Exploiting Cyclic Symmetry in Convolutional Neural Networks | cs.LG cs.CV cs.NE | Many classes of images exhibit rotational symmetry. Convolutional neural
networks are sometimes trained using data augmentation to exploit this, but
they are still required to learn the rotation equivariance properties from the
data. Encoding these properties into the network architecture, as we are
already used to doing for translation equivariance by using convolutional
layers, could result in a more efficient use of the parameter budget by
relieving the model from learning them. We introduce four operations which can
be inserted into neural network models as layers, and which can be combined to
make these models partially equivariant to rotations. They also enable
parameter sharing across different orientations. We evaluate the effect of
these architectural modifications on three datasets which exhibit rotational
symmetry and demonstrate improved performance with smaller models.
| Sander Dieleman, Jeffrey De Fauw, Koray Kavukcuoglu | null | 1602.02660 | null | null |
A Variational Analysis of Stochastic Gradient Algorithms | stat.ML cs.LG | Stochastic Gradient Descent (SGD) is an important algorithm in machine
learning. With constant learning rates, it is a stochastic process that, after
an initial phase of convergence, generates samples from a stationary
distribution. We show that SGD with constant rates can be effectively used as
an approximate posterior inference algorithm for probabilistic modeling.
Specifically, we show how to adjust the tuning parameters of SGD such as to
match the resulting stationary distribution to the posterior. This analysis
rests on interpreting SGD as a continuous-time stochastic process and then
minimizing the Kullback-Leibler divergence between its stationary distribution
and the target posterior. (This is in the spirit of variational inference.) In
more detail, we model SGD as a multivariate Ornstein-Uhlenbeck process and then
use properties of this process to derive the optimal parameters. This
theoretical framework also connects SGD to modern scalable inference
algorithms; we analyze the recently proposed stochastic gradient Fisher scoring
under this perspective. We demonstrate that SGD with properly chosen constant
rates gives a new way to optimize hyperparameters in probabilistic models.
| Stephan Mandt, Matthew D. Hoffman, and David M. Blei | null | 1602.02666 | null | null |
Learning to Communicate to Solve Riddles with Deep Distributed Recurrent
Q-Networks | cs.AI cs.LG | We propose deep distributed recurrent Q-networks (DDRQN), which enable teams
of agents to learn to solve communication-based coordination tasks. In these
tasks, the agents are not given any pre-designed communication protocol.
Therefore, in order to successfully communicate, they must first automatically
develop and agree upon their own communication protocol. We present empirical
results on two multi-agent learning problems based on well-known riddles,
demonstrating that DDRQN can successfully solve such tasks and discover elegant
communication protocols to do so. To our knowledge, this is the first time deep
reinforcement learning has succeeded in learning communication protocols. In
addition, we present ablation experiments that confirm that each of the main
components of the DDRQN architecture is critical to its success.
| Jakob N. Foerster, Yannis M. Assael, Nando de Freitas, Shimon Whiteson | null | 1602.02672 | null | null |
Predicting Clinical Events by Combining Static and Dynamic Information
Using Recurrent Neural Networks | cs.LG cs.AI cs.NE | In clinical data sets we often find static information (e.g. patient gender,
blood type, etc.) combined with sequences of data that are recorded during
multiple hospital visits (e.g. medications prescribed, tests performed, etc.).
Recurrent Neural Networks (RNNs) have proven to be very successful for
modelling sequences of data in many areas of Machine Learning. In this work we
present an approach based on RNNs, specifically designed for the clinical
domain, that combines static and dynamic information in order to predict future
events. We work with a database collected in the Charit\'{e} Hospital in Berlin
that contains complete information concerning patients that underwent a kidney
transplantation. After the transplantation three main endpoints can occur:
rejection of the kidney, loss of the kidney and death of the patient. Our goal
is to predict, based on information recorded in the Electronic Health Record of
each patient, whether any of those endpoints will occur within the next six or
twelve months after each visit to the clinic. We compared different types of
RNNs that we developed for this work, with a model based on a Feedforward
Neural Network and a Logistic Regression model. We found that the RNN that we
developed based on Gated Recurrent Units provides the best performance for this
task. We also used the same models for a second task, i.e., next event
prediction, and found that here the model based on a Feedforward Neural Network
outperformed the other models. Our hypothesis is that long-term dependencies
are not as relevant in this task.
| Crist\'obal Esteban, Oliver Staeck, Yinchong Yang and Volker Tresp | null | 1602.02685 | null | null |
Practical Black-Box Attacks against Machine Learning | cs.CR cs.LG | Machine learning (ML) models, e.g., deep neural networks (DNNs), are
vulnerable to adversarial examples: malicious inputs modified to yield
erroneous model outputs, while appearing unmodified to human observers.
Potential attacks include having malicious content like malware identified as
legitimate or controlling vehicle behavior. Yet, all existing adversarial
example attacks require knowledge of either the model internals or its training
data. We introduce the first practical demonstration of an attacker controlling
a remotely hosted DNN with no such knowledge. Indeed, the only capability of
our black-box adversary is to observe labels given by the DNN to chosen inputs.
Our attack strategy consists in training a local model to substitute for the
target DNN, using inputs synthetically generated by an adversary and labeled by
the target DNN. We use the local substitute to craft adversarial examples, and
find that they are misclassified by the targeted DNN. To perform a real-world
and properly-blinded evaluation, we attack a DNN hosted by MetaMind, an online
deep learning API. We find that their DNN misclassifies 84.24% of the
adversarial examples crafted with our substitute. We demonstrate the general
applicability of our strategy to many ML techniques by conducting the same
attack against models hosted by Amazon and Google, using logistic regression
substitutes. They yield adversarial examples misclassified by Amazon and Google
at rates of 96.19% and 88.94%. We also find that this black-box attack strategy
is capable of evading defense strategies previously found to make adversarial
example crafting harder.
| Nicolas Papernot and Patrick McDaniel and Ian Goodfellow and Somesh
Jha and Z. Berkay Celik and Ananthram Swami | null | 1602.02697 | null | null |
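A schematic, heavily simplified sketch of the substitute-training loop the abstract outlines. `oracle`, `fit_model`, and `input_gradient` are hypothetical placeholders for the remote label interface and a local differentiable learner, and the perturbation step is a plain stand-in for the paper's Jacobian-based dataset augmentation; adversarial examples would then be crafted against the returned substitute and transferred to the remote model.

```python
import numpy as np

def train_substitute(oracle, x_seed, fit_model, n_rounds=4, step=0.1):
    """Grow a synthetic training set labeled only by the black-box oracle and
    fit a local substitute model to it at each round."""
    X = np.array(x_seed, dtype=float)
    model = None
    for _ in range(n_rounds):
        y = oracle(X)                                    # only capability: observing labels
        model = fit_model(X, y)                          # local differentiable substitute
        X_new = X + step * np.sign(model.input_gradient(X))
        X = np.vstack([X, X_new])
    return model
```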
Compressed Online Dictionary Learning for Fast fMRI Decomposition | stat.ML cs.LG | We present a method for fast resting-state fMRI spatial decompositions of
very large datasets, based on the reduction of the temporal dimension before
applying dictionary learning on concatenated individual records from groups of
subjects. Introducing a measure of correspondence between spatial
decompositions of rest fMRI, we demonstrates that time-reduced dictionary
learning produces result as reliable as non-reduced decompositions. We also
show that this reduction significantly improves computational scalability.
| Arthur Mensch (PARIETAL), Ga\"el Varoquaux (PARIETAL), Bertrand
Thirion (PARIETAL) | 10.1109/ISBI.2016.7493501 | 1602.02701 | null | null |
Decoy Bandits Dueling on a Poset | cs.LG cs.AI | We address the problem of dueling bandits defined on partially ordered sets,
or posets. In this setting, arms may not be comparable, and there may be
several (incomparable) optimal arms. We propose an algorithm, UnchainedBandits,
that efficiently finds the set of optimal arms of any poset even when pairs of
comparable arms cannot be distinguished from pairs of incomparable arms, with a
set of minimal assumptions. This algorithm relies on the concept of decoys,
which stems from social psychology. For the easier case where the
incomparability information may be accessible, we propose a second algorithm,
SlicingBandits, which takes advantage of this information and achieves a very
significant gain of performance compared to UnchainedBandits. We provide
theoretical guarantees and experimental evaluation for both algorithms.
| Julien Audiffren (CMLA), Ralaivola Liva (LIF) | null | 1602.02706 | null | null |
PAC Reinforcement Learning with Rich Observations | cs.LG stat.ML | We propose and study a new model for reinforcement learning with rich
observations, generalizing contextual bandits to sequential decision making.
These models require an agent to take actions based on observations (features)
with the goal of achieving long-term performance competitive with a large set
of policies. To avoid barriers to sample-efficient learning associated with
large observation spaces and general POMDPs, we focus on problems that can be
summarized by a small number of hidden states and have long-term rewards that
are predictable by a reactive function class. In this setting, we design and
analyze a new reinforcement learning algorithm, Least Squares Value Elimination
by Exploration. We prove that the algorithm learns near optimal behavior after
a number of episodes that is polynomial in all relevant parameters, logarithmic
in the number of policies, and independent of the size of the observation
space. Our result provides theoretical justification for reinforcement learning
with function approximation.
| Akshay Krishnamurthy, Alekh Agarwal, John Langford | null | 1602.02722 | null | null |
Local and Global Convergence of a General Inertial Proximal Splitting
Scheme | math.OC cs.LG math.NA | This paper is concerned with convex composite minimization problems in a
Hilbert space. In these problems, the objective is the sum of two closed,
proper, and convex functions where one is smooth and the other admits a
computationally inexpensive proximal operator. We analyze a general family of
inertial proximal splitting algorithms (GIPSA) for solving such problems. We
establish finiteness of the sum of squared increments of the iterates and
optimality of the accumulation points. Weak convergence of the entire sequence
then follows if the minimum is attained. Our analysis unifies and extends
several previous results.
We then focus on $\ell_1$-regularized optimization, which is the ubiquitous
special case where the nonsmooth term is the $\ell_1$-norm. For certain
parameter choices, GIPSA is amenable to a local analysis for this problem. For
these choices we show that GIPSA achieves finite "active manifold
identification", i.e. convergence in a finite number of iterations to the
optimal support and sign, after which GIPSA reduces to minimizing a local
smooth function. Local linear convergence then holds under certain conditions.
We determine the rate in terms of the inertia, stepsize, and local curvature.
Our local analysis is applicable to certain recent variants of the Fast
Iterative Shrinkage-Thresholding Algorithm (FISTA), for which we establish
active manifold identification and local linear convergence. Our analysis
motivates the use of a momentum restart scheme in these FISTA variants to
obtain the optimal local linear convergence rate.
| Patrick R. Johnstone and Pierre Moulin | 10.1007/s10589-017-9896-7 | 1602.02726 | null | null |
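A compact sketch of one member of the inertial proximal-gradient family analyzed above, specialized to l1-regularized problems; the admissible ranges of the step size and inertia parameter required by the theory are not enforced in this toy version.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def inertial_prox_grad(grad_f, lam, x0, step, inertia=0.5, n_iters=500):
    """Extrapolate with momentum, take a gradient step on the smooth part f,
    then apply the l1 proximal operator."""
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iters):
        y = x + inertia * (x - x_prev)
        x_prev, x = x, soft_threshold(y - step * grad_f(y), step * lam)
    return x
```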
Poor starting points in machine learning | cs.LG cs.NE math.OC stat.ML | Poor (even random) starting points for learning/training/optimization are
common in machine learning. In many settings, the method of Robbins and Monro
(online stochastic gradient descent) is known to be optimal for good starting
points, but may not be optimal for poor starting points -- indeed, for poor
starting points Nesterov acceleration can help during the initial iterations,
even though Nesterov methods not designed for stochastic approximation could
hurt during later iterations. The common practice of training with nontrivial
minibatches enhances the advantage of Nesterov acceleration.
| Mark Tygert | null | 1602.02823 | null | null |
Binarized Neural Networks: Training Deep Neural Networks with Weights
and Activations Constrained to +1 or -1 | cs.LG | We introduce a method to train Binarized Neural Networks (BNNs) - neural
networks with binary weights and activations at run-time. At training-time the
binary weights and activations are used for computing the parameters gradients.
During the forward pass, BNNs drastically reduce memory size and accesses, and
replace most arithmetic operations with bit-wise operations, which is expected
to substantially improve power-efficiency. To validate the effectiveness of
BNNs we conduct two sets of experiments on the Torch7 and Theano frameworks. On
both, BNNs achieved nearly state-of-the-art results over the MNIST, CIFAR-10
and SVHN datasets. Last but not least, we wrote a binary matrix multiplication
GPU kernel with which it is possible to run our MNIST BNN 7 times faster than
with an unoptimized GPU kernel, without suffering any loss in classification
accuracy. The code for training and running our BNNs is available on-line.
| Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv and
Yoshua Bengio | null | 1602.02830 | null | null |
Collaborative filtering via sparse Markov random fields | stat.ML cs.IR cs.LG | Recommender systems play a central role in providing individualized access to
information and services. This paper focuses on collaborative filtering, an
approach that exploits the shared structure among like-minded users and similar
items. In particular, we focus on a formal probabilistic framework known as
Markov random fields (MRF). We address the open problem of structure learning
and introduce a sparsity-inducing algorithm to automatically estimate the
interaction structures between users and between items. Item-item and user-user
correlation networks are obtained as a by-product. Large-scale experiments on
movie recommendation and date matching datasets demonstrate the power of the
proposed method.
| Truyen Tran, Dinh Phung and Svetha Venkatesh | null | 1602.02842 | null | null |
Online Active Linear Regression via Thresholding | stat.ML cs.LG | We consider the problem of online active learning to collect data for
regression modeling. Specifically, we consider a decision maker with a limited
experimentation budget who must efficiently learn an underlying linear
population model. Our main contribution is a novel threshold-based algorithm
for selection of most informative observations; we characterize its performance
and fundamental lower bounds. We extend the algorithm and its guarantees to
sparse linear regression in high-dimensional settings. Simulations suggest the
algorithm is remarkably robust: it provides significant benefits over passive
random sampling in real-world datasets that exhibit high nonlinearity and high
dimensionality --- significantly reducing both the mean and variance of the
squared error.
| Carlos Riquelme, Ramesh Johari, Baosen Zhang | null | 1602.02845 | null | null |
Toward Optimal Feature Selection in Naive Bayes for Text Categorization | stat.ML cs.CL cs.IR cs.LG | Automated feature selection is important for text categorization to reduce
the feature size and to speed up the learning process of classifiers. In this
paper, we present a novel and efficient feature selection framework based on
the Information Theory, which aims to rank the features with their
discriminative capacity for classification. We first revisit two information
measures: Kullback-Leibler divergence and Jeffreys divergence for binary
hypothesis testing, and analyze their asymptotic properties relating to type I
and type II errors of a Bayesian classifier. We then introduce a new divergence
measure, called Jeffreys-Multi-Hypothesis (JMH) divergence, to measure
multi-distribution divergence for multi-class classification. Based on the
JMH-divergence, we develop two efficient feature selection methods, termed
maximum discrimination ($MD$) and $MD-\chi^2$ methods, for text categorization.
The promising results of extensive experiments demonstrate the effectiveness of
the proposed approaches.
| Bo Tang, Steven Kay, and Haibo He | 10.1109/TKDE.2016.2563436 | 1602.02850 | null | null |
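An illustrative sketch of divergence-based feature ranking in the spirit of the criteria described above, restricted to binary (term presence/absence) features and two classes; the paper's MD and MD-chi^2 criteria generalize this idea to multiple classes and are not reproduced exactly here.

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q), element-wise."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def rank_features_jeffreys(X, y):
    """Rank binary features by the Jeffreys (symmetric KL) divergence between
    their class-conditional distributions; higher means more discriminative."""
    p_pos = X[y == 1].mean(axis=0)
    p_neg = X[y == 0].mean(axis=0)
    score = bernoulli_kl(p_pos, p_neg) + bernoulli_kl(p_neg, p_pos)
    return np.argsort(score)[::-1]
```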
Compliance-Aware Bandits | stat.ML cs.LG | Motivated by clinical trials, we study bandits with observable
non-compliance. At each step, the learner chooses an arm, after which, instead of
only observing the reward, it also observes the action that took place. We show
that such noncompliance can be helpful or hurtful to the learner in general.
Unfortunately, naively incorporating compliance information into bandit
algorithms loses guarantees on sublinear regret. We present hybrid algorithms
that maintain regret bounds up to a multiplicative factor and can incorporate
compliance information. Simulations based on real data from the International
Stroke Trial show the practical potential of these algorithms.
| Nicol\'as Della Penna, Mark D. Reid, David Balduzzi | null | 1602.02852 | null | null |
The Role of Typicality in Object Classification: Improving The
Generalization Capacity of Convolutional Neural Networks | cs.CV cs.LG cs.NE | Deep artificial neural networks have made remarkable progress in different
tasks in the field of computer vision. However, the empirical analysis of these
models and investigation of their failure cases has received attention
recently. In this work, we show that deep learning models cannot generalize to
atypical images that are substantially different from training images. This is
in contrast to the superior generalization ability of the visual system in the
human brain. We focus on Convolutional Neural Networks (CNN) as the
state-of-the-art models in object recognition and classification; investigate
this problem in more detail, and hypothesize that training CNN models suffer
from unstructured loss minimization. We propose computational models to improve
the generalization capacity of CNNs by considering how typical a training image
looks like. By conducting an extensive set of experiments we show that
involving a typicality measure can improve the classification results on a new
set of images by a large margin. More importantly, this significant improvement
is achieved without fine-tuning the CNN model on the target image set.
| Babak Saleh and Ahmed Elgammal and Jacob Feldman | null | 1602.02865 | null | null |
Value Iteration Networks | cs.AI cs.LG cs.NE stat.ML | We introduce the value iteration network (VIN): a fully differentiable neural
network with a `planning module' embedded within. VINs can learn to plan, and
are suitable for predicting outcomes that involve planning-based reasoning,
such as policies for reinforcement learning. Key to our approach is a novel
differentiable approximation of the value-iteration algorithm, which can be
represented as a convolutional neural network, and trained end-to-end using
standard backpropagation. We evaluate VIN based policies on discrete and
continuous path-planning domains, and on a natural-language based search task.
We show that by learning an explicit planning computation, VIN policies
generalize better to new, unseen domains.
| Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel | null | 1602.02867 | null | null |
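A dense, non-convolutional toy analogue of the planning computation sketched above: repeated Bellman backups followed by a max over actions. In the actual VIN these backups are realized as convolutions and trained end-to-end; the discount factor and iteration count here are assumptions.

```python
import numpy as np

def value_iteration_module(reward, p_trans, gamma=0.99, n_iters=40):
    """reward:  (n_actions, n_states) immediate rewards
    p_trans: (n_actions, n_states, n_states) transition probabilities
    Returns the value function and the final Q-values."""
    v = np.zeros(reward.shape[1])
    q = reward.copy()
    for _ in range(n_iters):
        q = reward + gamma * np.einsum('ast,t->as', p_trans, v)  # Bellman backup
        v = q.max(axis=0)                                         # max over actions
    return v, q
```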
Classification with Boosting of Extreme Learning Machine Over
Arbitrarily Partitioned Data | cs.LG | Machine learning based computational intelligence methods are widely used to
analyze large scale data sets in this age of big data. Extracting useful
predictive models from these types of data sets is a challenging problem due
to their high complexity. Analyzing large amounts of streaming data that can be
leveraged to derive business value is another complex problem to solve. With
high levels of data availability (\textit{i.e., Big Data}), automatic
classification of such data has become an important and complex task. Hence, we
explore the power of applying MapReduce based Distributed AdaBoosting of
Extreme Learning Machine (ELM) to build a predictive bag of classification
models. Accordingly, (i) data set ensembles are created; (ii) the ELM algorithm is
used to build weak learners (classifier functions); and (iii) a strong
learner is built from the set of weak learners. We applied this training model to the
benchmark knowledge discovery and data mining data sets.
| Ferhat \"Ozg\"ur \c{C}atak | 10.1007/s00500-015-1938-4 | 1602.02887 | null | null |
Robust Ensemble Classifier Combination Based on Noise Removal with
One-Class SVM | cs.LG | In the machine learning area, as the number of labeled input samples becomes very
large, it is very difficult to build a classification model because the input
data set does not fit in memory during the training phase of the algorithm; therefore,
it is necessary to utilize data partitioning to handle the overall data set.
Bagging and boosting based data partitioning methods have been broadly used in
data mining and pattern recognition area. Both of these methods have shown a
great possibility for improving classification model performance. This study is
concerned with the analysis of data set partitioning with noise removal and its
impact on the performance of multiple classifier models. In this study, we
propose noise filtering preprocessing at each data set partition to improve
classifier model performance. We applied the Gini impurity approach to find the
best split percentage of noise filter ratio. The filtered sub data set is then
used to train individual ensemble models.
| Ferhat \"Ozg\"ur \c{C}atak | null | 1602.02888 | null | null |
Secure Multi-Party Computation Based Privacy Preserving Extreme Learning
Machine Algorithm Over Vertically Distributed Data | cs.CR cs.LG | Especially in the Big Data era, the usage of different classification methods
is increasing day by day. The success of these classification methods depends
on the effectiveness of learning methods. Extreme learning machine (ELM)
classification algorithm is a relatively new learning method built on
feed-forward neural networks. The ELM classification algorithm is a simple and fast
method that can create a model from high-dimensional data sets. The traditional ELM
learning algorithm implicitly assumes complete access to the whole data set. This
is a major privacy concern in most cases. Sharing of private data (i.e.
medical records) is prevented because of security concerns. In this research,
we propose an efficient and secure privacy-preserving learning algorithm for
ELM classification over data that is vertically partitioned among several
parties. The new learning method preserves privacy on numerical attributes and
builds a classification model without sharing private data and without disclosing
the data of each party to others.
| Ferhat \"Ozg\"ur \c{C}atak | 10.1007/978-3-319-26535-3_39 | 1602.02899 | null | null |
Nested Mini-Batch K-Means | stat.ML cs.LG | A new algorithm is proposed which accelerates the mini-batch k-means
algorithm of Sculley (2010) by using the distance bounding approach of Elkan
(2003). We argue that, when incorporating distance bounds into a mini-batch
algorithm, already used data should preferentially be reused. To this end we
propose using nested mini-batches, whereby data in a mini-batch at iteration t
is automatically reused at iteration t+1.
Using nested mini-batches presents two difficulties. The first is that
unbalanced use of data can bias estimates, which we resolve by ensuring that
each data sample contributes exactly once to centroids. The second is in
choosing mini-batch sizes, which we address by balancing premature fine-tuning
of centroids with redundancy induced slow-down. Experiments show that the
resulting nmbatch algorithm is very effective, often arriving within 1% of the
empirical minimum 100 times earlier than the standard mini-batch algorithm.
| James Newling and Fran\c{c}ois Fleuret | null | 1602.02934 | null | null |
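For reference, a bare-bones numpy version of the Sculley (2010) mini-batch k-means that the nested variant above builds on; the nesting of mini-batches and the distance bounds, which are the paper's contribution, are not reproduced here.

```python
import numpy as np

def minibatch_kmeans(X, k, batch_size=256, n_iters=100, seed=0):
    """Assign a random mini-batch to its nearest centroids, then move each
    centroid toward its assigned points with a per-centroid learning rate."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    counts = np.zeros(k)
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), min(batch_size, len(X)), replace=False)]
        d2 = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        for x, j in zip(batch, d2.argmin(axis=1)):
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]
    return centers
```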
Spoofing detection under noisy conditions: a preliminary investigation
and an initial database | cs.LG cs.SD | Spoofing detection for automatic speaker verification (ASV), which is to
discriminate between live speech and attacks, has received increasing
attention recently. However, all the previous studies have been done on the
clean data without significant additive noise. To simulate the real-life
scenarios, we perform a preliminary investigation of spoofing detection under
additive noisy conditions, and also describe an initial database for this task.
The noisy database is based on the ASVspoof challenge 2015 database and
generated by artificially adding background noises at different signal-to-noise
ratios (SNRs). Five different additive noises are included. Our preliminary
results show that using the model trained from clean data, the system
performance degrades significantly in noisy conditions. Phase-based features are
more noise-robust than magnitude-based features. The systems also perform
significantly differently under different noise scenarios.
| Xiaohai Tian, Zhizheng Wu, Xiong Xiao, Eng Siong Chng, Haizhou Li | null | 1602.02950 | null | null |
Self-organized control for musculoskeletal robots | cs.RO cs.LG cs.SY | With the accelerated development of robot technologies, optimal control
becomes one of the central themes of research. In traditional approaches, the
controller, by its internal functionality, finds appropriate actions on the
basis of the history of sensor values, guided by the goals, intentions,
objectives, learning schemes, and so on planted into it. The idea is that the
controller controls the world---the body plus its environment---as reliably as
possible. However, in elastically actuated robots this approach faces severe
difficulties. This paper advocates for a new paradigm of self-organized
control. The paper presents a solution with a controller that is devoid of any
functionalities of its own, given by a fixed, explicit and context-free
function of the recent history of the sensor values. When applying this
controller to a muscle-tendon driven arm-shoulder system from the Myorobotics
toolkit, we observe a vast variety of self-organized behavior patterns: when
left alone, the arm realizes pseudo-random sequences of different poses, but
one can also manipulate the system into definite motion patterns. Most
interestingly, after attaching an object, the controller gets into a functional
resonance with the object's internal dynamics: when given a half-filled bottle,
the system spontaneously starts shaking the bottle so that a maximal response
from the dynamics of the water is generated. After attaching a pendulum
to the arm, the controller drives the pendulum into a circular mode. In this
way, the robot discovers dynamical affordances of objects its body is
interacting with. We also discuss perspectives for using this controller
paradigm for intention driven behavior generation.
| Ralf Der and Georg Martius | null | 1602.02990 | null | null |
A Convolutional Attention Network for Extreme Summarization of Source
Code | cs.LG cs.CL cs.SE | Attention mechanisms in neural networks have proved useful for problems in
which the input and output do not have fixed dimension. Often there exist
features that are locally translation invariant and would be valuable for
directing the model's attention, but previous attentional architectures are not
constructed to learn such features specifically. We introduce an attentional
neural network that employs convolution on the input tokens to detect local
time-invariant and long-range topical attention features in a context-dependent
way. We apply this architecture to the problem of extreme summarization of
source code snippets into short, descriptive function name-like summaries.
Using those features, the model sequentially generates a summary by
marginalizing over two attention mechanisms: one that predicts the next summary
token based on the attention weights of the input tokens and another that is
able to copy a code token as-is directly into the summary. We demonstrate our
convolutional attention neural network's performance on 10 popular Java
projects, showing that it achieves better performance than previous
attentional mechanisms.
| Miltiadis Allamanis, Hao Peng, Charles Sutton | null | 1602.03001 | null | null |
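A minimal NumPy sketch of the core idea in the abstract above: a convolution over input-token embeddings produces local attention features, which are scored and normalized into one weight per token. It deliberately omits the summary decoder, the copy-attention mechanism, and training; the single convolution layer, all names, and all shapes are illustrative assumptions.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def conv1d(seq, kernels):
        # 'Same'-padded 1-D convolution over the token axis.
        # seq: (L, D) embeddings, kernels: (w, D, F) filters -> (L, F) features.
        L, D = seq.shape
        w, _, F = kernels.shape
        pad = w // 2
        padded = np.vstack([np.zeros((pad, D)), seq, np.zeros((pad, D))])
        out = np.zeros((L, F))
        for t in range(L):
            out[t] = np.einsum('wd,wdf->f', padded[t:t + w], kernels)
        return out

    def attention_over_tokens(tok_emb, kernels, score_vec):
        feats = np.tanh(conv1d(tok_emb, kernels))   # local, position-wise features
        return softmax(feats @ score_vec)           # one attention weight per token

    # Toy usage: 7 tokens, 16-dim embeddings, window 3, 8 filters (all random).
    rng = np.random.default_rng(0)
    alpha = attention_over_tokens(rng.normal(size=(7, 16)),
                                  rng.normal(size=(3, 16, 8)),
                                  rng.normal(size=8))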
Herding as a Learning System with Edge-of-Chaos Dynamics | stat.ML cs.LG | Herding defines a deterministic dynamical system at the edge of chaos. It
generates a sequence of model states and parameters by alternating parameter
perturbations with state maximizations, where the sequence of states can be
interpreted as "samples" from an associated MRF model. Herding differs from
maximum likelihood estimation in that the sequence of parameters does not
converge to a fixed point and differs from an MCMC posterior sampling approach
in that the sequence of states is generated deterministically. Herding may be
interpreted as a"perturb and map" method where the parameter perturbations are
generated using a deterministic nonlinear dynamical system rather than randomly
from a Gumbel distribution. This chapter studies the distinct statistical
characteristics of the herding algorithm and shows that the fast convergence
rate of the controlled moments may be attributed to edge of chaos dynamics. The
herding algorithm can also be generalized to models with latent variables and
to a discriminative learning setting. The perceptron cycling theorem ensures
that the fast moment matching property is preserved in the more general
framework.
| Yutian Chen and Max Welling | null | 1602.03014 | null | null |
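The sketch below runs the two alternating steps described above, a state maximization followed by a deterministic weight perturbation, on a tiny enumerable state space, and checks that the empirical moments of the visited states approach the targets. The toy feature matrix and the choice of target moments are illustrative; the edge-of-chaos analysis itself is not reproduced.

    import numpy as np

    def herd(phi_states, mu, n_steps=2000):
        # s_t = argmax_s <w_{t-1}, phi(s)>;  w_t = w_{t-1} + mu - phi(s_t)
        w = mu.copy()
        visited = []
        for _ in range(n_steps):
            s = int(np.argmax(phi_states @ w))
            visited.append(s)
            w = w + mu - phi_states[s]
        return visited

    # Toy check: with target moments inside the convex hull of the features,
    # the running moment error of the herded "samples" shrinks roughly as 1/T.
    rng = np.random.default_rng(1)
    phi = rng.normal(size=(5, 3))                   # 5 states, 3 features
    mu = phi.mean(axis=0)
    idx = herd(phi, mu)
    print(np.abs(phi[idx].mean(axis=0) - mu).max())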
Minimax Lower Bounds for Realizable Transductive Classification | stat.ML cs.LG | Transductive learning considers a training set of $m$ labeled samples and a
test set of $u$ unlabeled samples, with the goal of best labeling that
particular test set. Conversely, inductive learning considers a training set of
$m$ labeled samples drawn iid from $P(X,Y)$, with the goal of best labeling any
future samples drawn iid from $P(X)$. This comparison suggests that
transduction is a much easier type of inference than induction, but is this
really the case? This paper provides a negative answer to this question, by
proving the first known minimax lower bounds for transductive, realizable,
binary classification. Our lower bounds show that $m$ should be at least
$\Omega(d/\epsilon + \log(1/\delta)/\epsilon)$ when $\epsilon$-learning a
concept class $\mathcal{H}$ of finite VC-dimension $d<\infty$ with confidence
$1-\delta$, for all $m \leq u$. Three important conclusions follow from this result.
First, general transduction is as hard as general induction, since both
problems have $\Omega(d/m)$ minimax values. Second, the use of unlabeled data
does not help general transduction, since supervised learning algorithms such
as ERM and (Hanneke, 2015) match our transductive lower bounds while ignoring
the unlabeled test set. Third, our transductive lower bounds imply lower bounds
for semi-supervised learning, which add to the important discussion about the
role of unlabeled data in machine learning.
| Ilya Tolstikhin and David Lopez-Paz | null | 1602.03027 | null | null |
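A hedged reading of how the $\Omega(d/m)$ minimax value in the conclusions follows from the stated bound (constants and the $\log(1/\delta)$ term suppressed): the requirement $m \geq c\, d/\epsilon$ for $\epsilon$-learning rearranges to $\epsilon \geq c\, d/m$, so with $m \leq u$ labeled samples any transductive learner must incur worst-case error of order at least $d/m$, which matches the inductive rate.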
The Structured Weighted Violations Perceptron Algorithm | cs.LG | We present the Structured Weighted Violations Perceptron (SWVP) algorithm, a
new structured prediction algorithm that generalizes the Collins Structured
Perceptron (CSP). Unlike CSP, the update rule of SWVP explicitly exploits the
internal structure of the predicted labels. We prove the convergence of SWVP
for linearly separable training sets, provide mistake and generalization
bounds, and show that in the general case these bounds are tighter than those
of the CSP special case. In synthetic data experiments with data drawn from an
HMM, various variants of SWVP substantially outperform its CSP special case.
SWVP also provides encouraging initial dependency parsing results.
| Rotem Dror, Roi Reichart | null | 1602.03040 | null | null |
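The abstract does not spell out the SWVP update itself, so the sketch below shows only the Collins structured perceptron (CSP) special case that SWVP generalizes, with a toy multiclass task standing in for a genuine structured problem; the function names, the block feature map, and the toy data are illustrative assumptions.

    import numpy as np

    def structured_perceptron(examples, feat, argmax_y, d, n_epochs=5):
        # Collins structured perceptron: predict the best structure under the
        # current weights and, on a mistake, add the gold-minus-predicted features.
        w = np.zeros(d)
        for _ in range(n_epochs):
            for x, y_gold in examples:
                y_hat = argmax_y(x, w)
                if y_hat != y_gold:
                    w += feat(x, y_gold) - feat(x, y_hat)
        return w

    # Toy usage: multiclass classification as a degenerate "structured" task.
    labels, dim = [0, 1, 2], 2
    def feat(x, y):
        f = np.zeros(len(labels) * dim)
        f[y * dim:(y + 1) * dim] = x
        return f
    def argmax_y(x, w):
        return max(labels, key=lambda y: float(w @ feat(x, y)))
    examples = [(np.array([1.0, 0.0]), 0), (np.array([0.0, 1.0]), 1),
                (np.array([-1.0, -1.0]), 2)]
    w = structured_perceptron(examples, feat, argmax_y, d=len(labels) * dim)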
Minimum Conditional Description Length Estimation for Markov Random
Fields | cs.IT cs.LG math.IT math.ST stat.TH | In this paper we discuss a method, which we call Minimum Conditional
Description Length (MCDL), for estimating the parameters of a subset of sites
within a Markov random field. We assume that the edges are known for the entire
graph $G=(V,E)$. Then, for a subset $U\subset V$, we estimate the parameters
for nodes and edges in $U$ as well as for edges incident to a node in $U$, by
finding the exponential parameter for that subset that yields the best
compression conditioned on the values on the boundary $\partial U$. Our
estimate is derived from a temporally stationary sequence of observations on
the set $U$. We discuss how this method can also be applied to estimate a
spatially invariant parameter from a single configuration, and in so doing,
derive the Maximum Pseudo-Likelihood (MPL) estimate.
| Matthew G. Reyes and David L. Neuhoff | null | 1602.03061 | null | null |
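The abstract notes that, for a spatially invariant parameter estimated from a single configuration, the method recovers the Maximum Pseudo-Likelihood (MPL) estimate. The sketch below shows the pseudo-log-likelihood that MPL maximizes for a single Ising-type coupling; it is not the paper's conditional-description-length procedure, and the $\pm 1$ spin parameterization and helper names are assumptions made for illustration.

    import numpy as np

    def ising_pseudo_loglik(theta, config, neighbors):
        # Log pseudo-likelihood of a +/-1 spin configuration under a single,
        # spatially invariant coupling theta:
        #   sum_i log P(x_i | x_{N(i)}) = sum_i log sigmoid(2 * theta * x_i * h_i),
        # where h_i is the sum of the spins neighboring site i.
        ll = 0.0
        for i, x_i in enumerate(config):
            h_i = sum(config[j] for j in neighbors[i])
            ll += -np.log1p(np.exp(-2.0 * theta * x_i * h_i))
        return ll

    # MPL estimate: maximize over theta, e.g. with a coarse grid search:
    #   theta_hat = max(np.linspace(-2, 2, 201),
    #                   key=lambda t: ising_pseudo_loglik(t, config, neighbors))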