title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Recursive Multikernel Filters Exploiting Nonlinear Temporal Structure
|
stat.ML cs.LG
|
In kernel methods, temporal information on the data is commonly included by
using time-delayed embeddings as inputs. Recently, an alternative formulation
was proposed by defining a gamma-filter explicitly in a reproducing kernel
Hilbert space, giving rise to a complex model where multiple kernels operate on
different temporal combinations of the input signal. In the original
formulation, the kernels are then simply combined to obtain a single kernel
matrix (for instance by averaging), which provides computational benefits but
discards important information on the temporal structure of the signal.
Inspired by works on multiple kernel learning, we overcome this drawback by
considering the different kernels separately. We propose an efficient strategy
to adaptively combine and select these kernels during the training phase. The
resulting batch and online algorithms automatically learn to process highly
nonlinear temporal information extracted from the input signal, which is
implicitly encoded in the kernel values. We evaluate our proposal on several
artificial and real tasks, showing that it can outperform classical approaches
both in batch and online settings.
|
Steven Van Vaerenbergh, Simone Scardapane, Ignacio Santamaria
| null |
1706.03533
| null | null |
Enriched Deep Recurrent Visual Attention Model for Multiple Object
Recognition
|
cs.CV cs.LG
|
We design an Enriched Deep Recurrent Visual Attention Model (EDRAM) - an
improved attention-based architecture for multiple object recognition. The
proposed model is a fully differentiable unit that can be optimized end-to-end
by using Stochastic Gradient Descent (SGD). The Spatial Transformer (ST) is
employed as the visual attention mechanism, which allows the model to learn the
geometric transformation of objects within images. With the combination of the Spatial
Transformer and the powerful recurrent architecture, the proposed EDRAM can
localize and recognize objects simultaneously. EDRAM has been evaluated on two
publicly available datasets including MNIST Cluttered (with 70K cluttered
digits) and SVHN (with up to 250k real world images of house numbers).
Experiments show that it obtains superior performance as compared with the
state-of-the-art models.
|
Artsiom Ablavatski, Shijian Lu and Jianfei Cai
|
10.1109/WACV.2017.113
|
1706.03581
| null | null |
Context encoding enables machine learning-based quantitative
photoacoustics
|
physics.med-ph cs.LG physics.comp-ph
|
Real-time monitoring of functional tissue parameters, such as local blood
oxygenation, based on optical imaging could provide groundbreaking advances in
the diagnosis and interventional therapy of various diseases. While
photoacoustic (PA) imaging is a novel modality with great potential to measure
optical absorption deep inside tissue, quantification of the measurements
remains a major challenge. In this paper, we introduce the first machine
learning based approach to quantitative PA imaging (qPAI), which relies on
learning the fluence in a voxel to deduce the corresponding optical absorption.
The method encodes relevant information of the measured signal and the
characteristics of the imaging system in voxel-based feature vectors, which
allow the generation of thousands of training samples from a single simulated
PA image. Comprehensive in silico experiments suggest that context encoding
(CE)-qPAI enables highly accurate and robust quantification of the local
fluence and thereby the optical absorption from PA images.
|
Thomas Kirchner, Janek Gr\"ohl and Lena Maier-Hein
|
10.1117/1.JBO.23.5.056008
|
1706.03595
| null | null |
Clustering Small Samples with Quality Guarantees: Adaptivity with
One2all pps
|
cs.LG cs.DS
|
Clustering of data points is a fundamental tool in data analysis. We consider
points $X$ in a relaxed metric space, where the triangle inequality holds
within a constant factor. The {\em cost} of clustering $X$ by $Q$ is
$V(Q)=\sum_{x\in X} d_{xQ}$. Two basic tasks, parametrized by $k \geq 1$, are
{\em cost estimation}, which returns (approximate) $V(Q)$ for queries $Q$ such
that $|Q|=k$ and {\em clustering}, which returns an (approximate) minimizer of
$V(Q)$ of size $|Q|=k$. With very large data sets $X$, we seek efficient
constructions of small samples that act as surrogates to the full data for
performing these tasks. Existing constructions that provide quality guarantees
are either worst-case, and unable to benefit from structure of real data sets,
or make explicit strong assumptions on the structure. We show here how to avoid
both these pitfalls using adaptive designs.
At the core of our design is the {\em one2all} construction of
multi-objective probability-proportional-to-size (pps) samples: Given a set $M$
of centroids and $\alpha \geq 1$, one2all efficiently assigns probabilities to
points so that the clustering cost of {\em each} $Q$ with cost $V(Q) \geq
V(M)/\alpha$ can be estimated well from a sample of size $O(\alpha
|M|\epsilon^{-2})$. For cost queries, we can obtain worst-case sample size
$O(k\epsilon^{-2})$ by applying one2all to a bicriteria approximation $M$, but
we adaptively balance $|M|$ and $\alpha$ to further reduce sample size. For
clustering, we design an adaptive wrapper that applies a base clustering
algorithm to a sample $S$. Our wrapper uses the smallest sample that provides
statistical guarantees that the quality of the clustering on the sample carries
over to the full data set. We demonstrate experimentally the huge gains of
using our adaptive instead of worst-case methods.
|
Edith Cohen, Shiri Chechik, Haim Kaplan
| null |
1706.03607
| null | null |
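To make the sampling machinery above concrete, here is a minimal numpy sketch (not the one2all construction itself, whose probabilities come from a centroid set $M$ and parameter $\alpha$) of probability-proportional-to-size sampling together with the inverse-probability (Horvitz-Thompson) cost estimator it relies on; the sizes, target sample size, and query costs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pps_probs(sizes, k):
    """Inclusion probabilities proportional to size, capped at 1,
    with expected sample size ~k."""
    return np.minimum(1.0, k * sizes / sizes.sum())

def pps_sample(p):
    """Poisson pps sampling: keep each point independently with prob p_x."""
    return np.where(rng.random(len(p)) < p)[0]

def estimate_cost(d_xQ, idx, p):
    """Horvitz-Thompson (inverse-probability) estimator of V(Q) = sum_x d_xQ."""
    return np.sum(d_xQ[idx] / p[idx])

# toy run: "sizes" taken as distances to a reference centroid set M
d_xM = rng.exponential(1.0, size=10_000)
p = pps_probs(d_xM, k=500)
idx = pps_sample(p)
d_xQ = d_xM * rng.uniform(0.8, 1.2, size=10_000)  # costs for a nearby query Q
print(estimate_cost(d_xQ, idx, p), d_xQ.sum())    # estimate vs. exact V(Q)
```

Any query $Q$ whose costs correlate with the sampling sizes is estimated with low relative error, which is what one2all arranges for all $Q$ with $V(Q) \geq V(M)/\alpha$.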
Tackling Over-pruning in Variational Autoencoders
|
cs.LG
|
Variational autoencoders (VAE) are directed generative models that learn
factorial latent variables. As noted by Burda et al. (2015), these models
exhibit the problem of factor over-pruning where a significant number of
stochastic factors fail to learn anything and become inactive. This can limit
their modeling power and their ability to learn diverse and meaningful latent
representations. In this paper, we evaluate several methods to address this
problem and propose a more effective model-based approach called the epitomic
variational autoencoder (eVAE). The so-called epitomes of this model are groups
of mutually exclusive latent factors that compete to explain the data. This
approach helps prevent inactive units since each group is pressured to explain
the data. We compare the approaches with qualitative and quantitative results
on MNIST and TFD datasets. Our results show that eVAE makes efficient use of
model capacity and generalizes better than VAE.
|
Serena Yeung, Anitha Kannan, Yann Dauphin, Li Fei-Fei
| null |
1706.03643
| null | null |
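Over-pruning is straightforward to diagnose: a latent unit whose approximate posterior collapses to the prior contributes (near-)zero KL and carries no information. A minimal, framework-agnostic sketch of counting active units, assuming you already have the per-dimension Gaussian posterior statistics (`mu`, `logvar`) from an encoder:

```python
import numpy as np

def unit_kl(mu, logvar):
    """Per-unit KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    averaged over the batch; mu, logvar have shape (batch, latent_dim)."""
    return (0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)).mean(axis=0)

def active_units(mu, logvar, threshold=0.01):
    """Units whose average KL stays above a small threshold are 'active';
    the rest have effectively been pruned by the objective."""
    return int(np.sum(unit_kl(mu, logvar) > threshold))

# toy check: half the units have collapsed to the prior (mu = 0, var = 1)
rng = np.random.default_rng(0)
mu = np.concatenate([rng.normal(0, 1, (256, 16)), np.zeros((256, 16))], axis=1)
logvar = np.zeros((256, 32))
print(active_units(mu, logvar))   # 16
```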
Certified Defenses for Data Poisoning Attacks
|
cs.LG cs.CR
|
Machine learning systems trained on user-provided data are susceptible to
data poisoning attacks, whereby malicious users inject false training data with
the aim of corrupting the learned model. While recent work has proposed a
number of attacks and defenses, little is understood about the worst-case loss
of a defense in the face of a determined attacker. We address this by
constructing approximate upper bounds on the loss across a broad family of
attacks, for defenders that first perform outlier removal followed by empirical
risk minimization. Our approximation relies on two assumptions: (1) that the
dataset is large enough for statistical concentration between train and test
error to hold, and (2) that outliers within the clean (non-poisoned) data do
not have a strong effect on the model. Our bound comes paired with a candidate
attack that often nearly matches the upper bound, giving us a powerful tool for
quickly assessing defenses on a given dataset. Empirically, we find that even
under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to
attack, while in contrast the IMDB sentiment dataset can be driven from 12% to
23% test error by adding only 3% poisoned data.
|
Jacob Steinhardt, Pang Wei Koh, Percy Liang
| null |
1706.03691
| null | null |
SEVEN: Deep Semi-supervised Verification Networks
|
cs.LG stat.ML
|
Verification determines whether two samples belong to the same class or not,
and has important applications such as face and fingerprint verification, where
thousands or millions of categories are present but each category has scarce
labeled examples, presenting two major challenges for existing deep learning
models. We propose a deep semi-supervised model named SEmi-supervised
VErification Network (SEVEN) to address these challenges. The model consists of
two complementary components. The generative component addresses the lack of
supervision within each category by learning general salient structures from a
large amount of data across categories. The discriminative component exploits
the learned general features to mitigate the lack of supervision within
categories, and also directs the generative component to find more informative
structures of the whole data manifold. The two components are tied together in
SEVEN to allow end-to-end training. Extensive
experiments on four verification tasks demonstrate that SEVEN significantly
outperforms other state-of-the-art deep semi-supervised techniques when labeled
data are in short supply. Furthermore, SEVEN is competitive with fully
supervised baselines trained with a larger amount of labeled data. It indicates
the importance of the generative component in SEVEN.
|
Vahid Noroozi, Lei Zheng, Sara Bahaadini, Sihong Xie, Philip S. Yu
| null |
1706.03692
| null | null |
Channel-Recurrent Autoencoding for Image Modeling
|
cs.LG cs.CV
|
Despite recent successes in synthesizing faces and bedrooms, existing
generative models struggle to capture more complex image types, potentially due
to the oversimplification of their latent space constructions. To tackle this
issue, building on Variational Autoencoders (VAEs), we integrate recurrent
connections across channels to both inference and generation steps, allowing
the high-level features to be captured in global-to-local, coarse-to-fine
manners. Combined with adversarial loss, our channel-recurrent VAE-GAN
(crVAE-GAN) outperforms VAE-GAN in generating a diverse spectrum of high
resolution images while maintaining the same level of computational efficacy.
Our model produces interpretable and expressive latent representations to
benefit downstream tasks such as image completion. Moreover, we propose two
novel regularizations, namely the KL objective weighting scheme over time steps
and mutual information maximization between transformed latent variables and
the outputs, to enhance the training.
|
Wenling Shang and Kihyuk Sohn and Yuandong Tian
| null |
1706.03729
| null | null |
Large-Scale Plant Classification with Deep Neural Networks
|
cs.LG cs.CV stat.AP
|
This paper discusses the potential of applying deep learning techniques for
plant classification and its usage for citizen science in large-scale
biodiversity monitoring. We show that plant classification using near
state-of-the-art convolutional network architectures like ResNet50 achieves
significant improvements in accuracy compared to the most widespread plant
classification application in test sets composed of thousands of different
species labels. We find that the predictions can be confidently used as a
baseline classification in citizen science communities like iNaturalist (or its
Spanish fork, Natusfera) which in turn can share their data with biodiversity
portals like GBIF.
|
Ignacio Heredia
|
10.1145/3075564.3075590
|
1706.03736
| null | null |
Deep reinforcement learning from human preferences
|
stat.ML cs.AI cs.HC cs.LG
|
For sophisticated reinforcement learning (RL) systems to interact usefully
with real-world environments, we need to communicate complex goals to these
systems. In this work, we explore goals defined in terms of (non-expert) human
preferences between pairs of trajectory segments. We show that this approach
can effectively solve complex RL tasks without access to the reward function,
including Atari games and simulated robot locomotion, while providing feedback
on less than one percent of our agent's interactions with the environment. This
reduces the cost of human oversight far enough that it can be practically
applied to state-of-the-art RL systems. To demonstrate the flexibility of our
approach, we show that we can successfully train complex novel behaviors with
about an hour of human time. These behaviors and environments are considerably
more complex than any that have been previously learned from human feedback.
|
Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg,
Dario Amodei
| null |
1706.03741
| null | null |
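The heart of the method is fitting a reward model to pairwise preferences: the probability that one trajectory segment is preferred over another is modeled with a Bradley-Terry/logistic form on summed predicted rewards and trained by cross-entropy. A minimal numpy sketch under simplifying assumptions (a linear reward on state features; the paper uses deep networks and an ensemble):

```python
import numpy as np

def segment_return(theta, seg):
    """Predicted return sum_t r_theta(s_t); here a linear reward
    r_theta(s) = theta . phi(s), purely for illustration."""
    return (seg @ theta).sum()

def preference_nll(theta, seg_a, seg_b, pref_a):
    """Cross-entropy under the Bradley-Terry model
    P(a preferred over b) = sigmoid(R_a - R_b); pref_a in {0, 1}."""
    delta = segment_return(theta, seg_a) - segment_return(theta, seg_b)
    # -log sigmoid(x) == logaddexp(0, -x), numerically stable
    return pref_a * np.logaddexp(0.0, -delta) + (1 - pref_a) * np.logaddexp(0.0, delta)

# one gradient step on a single labeled comparison (human preferred segment a)
rng = np.random.default_rng(0)
theta = np.zeros(4)
seg_a, seg_b = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
delta = segment_return(theta, seg_a) - segment_return(theta, seg_b)
p_a = 1.0 / (1.0 + np.exp(-delta))
theta -= 0.1 * (p_a - 1.0) * (seg_a.sum(0) - seg_b.sum(0))  # d(NLL)/d(theta)
print(preference_nll(theta, seg_a, seg_b, pref_a=1))
```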
Attention Is All You Need
|
cs.CL cs.LG
|
The dominant sequence transduction models are based on complex recurrent or
convolutional neural networks in an encoder-decoder configuration. The best
performing models also connect the encoder and decoder through an attention
mechanism. We propose a new simple network architecture, the Transformer, based
solely on attention mechanisms, dispensing with recurrence and convolutions
entirely. Experiments on two machine translation tasks show these models to be
superior in quality while being more parallelizable and requiring significantly
less time to train. Our model achieves 28.4 BLEU on the WMT 2014
English-to-German translation task, improving over the existing best results,
including ensembles by over 2 BLEU. On the WMT 2014 English-to-French
translation task, our model establishes a new single-model state-of-the-art
BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction
of the training costs of the best models from the literature. We show that the
Transformer generalizes well to other tasks by applying it successfully to
English constituency parsing both with large and limited training data.
|
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion
Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
| null |
1706.03762
| null | null |
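The block the abstract's claim rests on, scaled dot-product attention, is compact enough to state directly: $\mathrm{Attention}(Q,K,V) = \mathrm{softmax}(QK^T/\sqrt{d_k})\,V$. A single-head numpy sketch (omitting masking and the multi-head projections):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Q: (n_q, d_k), K: (n_k, d_k), V: (n_k, d_v)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (n_q, n_k) similarity logits
    return softmax(scores) @ V                # attention-weighted values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(5, 64)), rng.normal(size=(7, 64)), rng.normal(size=(7, 32))
print(scaled_dot_product_attention(Q, K, V).shape)  # (5, 32)
```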
Encoding of phonology in a recurrent neural model of grounded speech
|
cs.CL cs.LG cs.SD
|
We study the representation and encoding of phonemes in a recurrent neural
network model of grounded speech. We use a model which processes images and
their spoken descriptions, and projects the visual and auditory representations
into the same semantic space. We perform a number of analyses on how
information about individual phonemes is encoded in the MFCC features extracted
from the speech signal, and the activations of the layers of the model. Via
experiments with phoneme decoding and phoneme discrimination we show that
phoneme representations are most salient in the lower layers of the model,
where low-level signals are processed at a fine-grained level, although a large
amount of phonological information is retained at the top recurrent layer. We
further find that the attention mechanism following the top recurrent layer
significantly attenuates encoding of phonology and makes the utterance
embeddings much more invariant to synonymy. Moreover, a hierarchical clustering
of phoneme representations learned by the network shows an organizational
structure of phonemes similar to those proposed in linguistics.
|
Afra Alishahi, Marie Barking, Grzegorz Chrupa{\l}a
|
10.18653/v1/K17-1037
|
1706.03815
| null | null |
SmoothGrad: removing noise by adding noise
|
cs.LG cs.CV stat.ML
|
Explaining the output of a deep network remains a challenge. In the case of
an image classifier, one type of explanation is to identify pixels that
strongly influence the final decision. A starting point for this strategy is
the gradient of the class score function with respect to the input image. This
gradient can be interpreted as a sensitivity map, and there are several
techniques that elaborate on this basic idea. This paper makes two
contributions: it introduces SmoothGrad, a simple method that can help visually
sharpen gradient-based sensitivity maps, and it discusses lessons in the
visualization of these maps. We publish the code for our experiments and a
website with our results.
|
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Vi\'egas, Martin
Wattenberg
| null |
1706.03825
| null | null |
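SmoothGrad itself is essentially a one-liner: average the gradient sensitivity map over $n$ noisy copies of the input. A framework-agnostic sketch, assuming you supply a `grad_fn` that returns the class-score gradient with respect to the input:

```python
import numpy as np

def smoothgrad(grad_fn, x, sigma=0.15, n_samples=50, rng=np.random.default_rng(0)):
    """Average gradient sensitivity maps over noisy copies of the input.
    grad_fn(x) -> d(class score)/dx, same shape as x; sigma is the noise
    level as a fraction of the input's dynamic range."""
    noise_std = sigma * (x.max() - x.min())
    total = np.zeros_like(x, dtype=float)
    for _ in range(n_samples):
        total += grad_fn(x + rng.normal(0.0, noise_std, size=x.shape))
    return total / n_samples

# usage, given some model-specific gradient function:
#   saliency = smoothgrad(lambda im: class_score_grad(model, im, label), image)
```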
Recurrent Neural Networks with Top-k Gains for Session-based
Recommendations
|
cs.LG
|
RNNs have been shown to be excellent models for sequential data and in
particular for data that is generated by users in a session-based manner. The
use of RNNs provides impressive performance benefits over classical methods in
session-based recommendations. In this work we introduce novel ranking loss
functions tailored to RNNs in the recommendation setting. The improved
performance of these losses over alternatives, along with further tricks and
refinements described in this work, allow for an overall improvement of up to
35% in terms of MRR and Recall@20 over previous session-based RNN solutions and
up to 53% over classical collaborative filtering approaches. Unlike data
augmentation-based improvements, our method does not increase training times
significantly. We further demonstrate the performance gain of the RNN over
baselines in an online A/B test.
|
Bal\'azs Hidasi, Alexandros Karatzoglou
|
10.1145/3269206.3271761
|
1706.03847
| null | null |
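For context, the classic pairwise loss that ranking losses in this setting build on is BPR: for a positive item and a sampled negative, minimize $-\log\sigma(r_{pos}-r_{neg})$. A minimal sketch of that baseline (illustrative only; the paper's top-k variants differ in how many negatives are sampled and how they are weighted):

```python
import numpy as np

def bpr_loss(pos_score, neg_scores):
    """Bayesian Personalized Ranking over sampled negatives:
    mean_j of -log sigmoid(r_pos - r_j), written stably via logaddexp."""
    return np.mean(np.logaddexp(0.0, -(pos_score - neg_scores)))

scores = np.array([2.1, 0.3, -0.5, 1.9])  # model scores: first item is the target
print(bpr_loss(scores[0], scores[1:]))
```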
Adversarial Feature Matching for Text Generation
|
stat.ML cs.CL cs.LG
|
The Generative Adversarial Network (GAN) has achieved great success in
generating realistic (real-valued) synthetic data. However, convergence issues
and difficulties dealing with discrete data hinder the applicability of GAN to
text. We propose a framework for generating realistic text via adversarial
training. We employ a long short-term memory network as generator, and a
convolutional network as discriminator. Instead of using the standard objective
of GAN, we propose matching the high-dimensional latent feature distributions
of real and synthetic sentences, via a kernelized discrepancy metric. This
eases adversarial training by alleviating the mode-collapsing problem. Our
experiments show superior performance in quantitative evaluation, and
demonstrate that our model can generate realistic-looking sentences.
|
Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen,
Lawrence Carin
| null |
1706.03850
| null | null |
Subspace Clustering via Optimal Direction Search
|
cs.CV cs.IR cs.LG stat.AP stat.ML
|
This letter presents a new spectral-clustering-based approach to the subspace
clustering problem. Underpinning the proposed method is a convex program for
optimal direction search, which for each data point d finds an optimal
direction in the span of the data that has minimum projection on the other data
points and non-vanishing projection on d. The obtained directions are
subsequently leveraged to identify a neighborhood set for each data point. An
alternating direction method of multipliers framework is provided to
efficiently solve for the optimal directions. The proposed method is shown to
notably outperform the existing subspace clustering methods, particularly for
unwieldy scenarios involving high levels of noise and close subspaces, and
yields the state-of-the-art results for the problem of face clustering using
subspace segmentation.
|
Mostafa Rahmani and George Atia
|
10.1109/LSP.2017.2757901
|
1706.03860
| null | null |
MNL-Bandit: A Dynamic Learning Approach to Assortment Selection
|
cs.LG
|
We consider a dynamic assortment selection problem, where in every round the
retailer offers a subset (assortment) of $N$ substitutable products to a
consumer, who selects one of these products according to a multinomial logit
(MNL) choice model. The retailer observes this choice and the objective is to
dynamically learn the model parameters, while optimizing cumulative revenues
over a selling horizon of length $T$. We refer to this exploration-exploitation
formulation as the MNL-Bandit problem. Existing methods for this problem follow
an "explore-then-exploit" approach, which estimates the parameters to a desired
accuracy and then, treating these estimates as if they were the correct
parameter values, offers the optimal assortment based on them. These
approaches require certain a priori knowledge of "separability", determined by
the true parameters of the underlying MNL model, and this in turn is critical
in determining the length of the exploration period. (Separability refers to
the distinguishability of the true optimal assortment from the other
sub-optimal alternatives.) In this paper, we give an efficient algorithm that
simultaneously explores and exploits, achieving performance independent of the
underlying parameters. The algorithm can be implemented in a fully online
manner, without knowledge of the horizon length $T$. Furthermore, the algorithm
is adaptive in the sense that its performance is near-optimal in both the "well
separated" case, as well as the general parameter setting where this separation
need not hold.
|
Shipra Agrawal, Vashist Avadhanula, Vineet Goyal and Assaf Zeevi
| null |
1706.03880
| null | null |
A Well-Tempered Landscape for Non-convex Robust Subspace Recovery
|
cs.LG math.OC stat.ML
|
We present a mathematical analysis of a non-convex energy landscape for
robust subspace recovery. We prove that an underlying subspace is the only
stationary point and local minimizer in a specified neighborhood under a
deterministic condition on a dataset. If the deterministic condition is
satisfied, we further show that a geodesic gradient descent method over the
Grassmannian manifold can exactly recover the underlying subspace when the
method is properly initialized. Proper initialization by principal component
analysis is guaranteed with a simple deterministic condition. Under slightly
stronger assumptions, the gradient descent method with a piecewise constant
step-size scheme achieves linear convergence. The practicality of the
deterministic condition is demonstrated on some statistical models of data, and
the method achieves almost state-of-the-art recovery guarantees on the Haystack
Model for different regimes of sample size and ambient dimension. In
particular, when the ambient dimension is fixed and the sample size is large
enough, we show that our gradient method can exactly recover the underlying
subspace for any fixed fraction of outliers (less than 1).
|
Tyler Maunu, Teng Zhang, Gilad Lerman
| null |
1706.03896
| null | null |
SEP-Nets: Small and Effective Pattern Networks
|
cs.CV cs.LG
|
While going deeper has been witnessed to improve the performance of
convolutional neural networks (CNN), going smaller for CNN has received
increasing attention recently due to its attractiveness for mobile/embedded
applications. It remains an active and important topic how to design a small
network while retaining the performance of large and deep CNNs (e.g., Inception
Nets, ResNets). Although there are already intensive studies on compressing the
size of CNNs, the considerable drop in performance is still a key concern in
many designs. This paper addresses this concern with several new contributions.
First, we propose a simple yet powerful method for compressing the size of deep
CNNs based on parameter binarization. The striking difference from most
previous work on parameter binarization/quantization lies in the different
treatment of $1\times 1$ convolutions and $k\times k$ convolutions ($k>1$),
where we only binarize $k\times k$ convolutions into binary patterns. The
resulting networks are referred to as pattern networks. By doing this, we show
that previous deep CNNs such as GoogLeNet and Inception-type Nets can be
compressed dramatically with marginal drop in performance. Second, in light of
the different functionalities of $1\times 1$ (data projection/transformation)
and $k\times k$ convolutions (pattern extraction), we propose a new block
structure codenamed the pattern residual block that adds transformed feature
maps generated by $1\times 1$ convolutions to the pattern feature maps
generated by $k\times k$ convolutions, based on which we design a small network
with $\sim 1$ million parameters. Combining with our parameter binarization, we
achieve better performance on ImageNet than using similar sized networks
including recently released Google MobileNets.
|
Zhe Li, Xiaoyu Wang, Xutao Lv, Tianbao Yang
| null |
1706.03912
| null | null |
Analyzing the Robustness of Nearest Neighbors to Adversarial Examples
|
stat.ML cs.CR cs.LG
|
Motivated by safety-critical applications, test-time attacks on classifiers
via adversarial examples have recently received a great deal of attention.
However, there is a general lack of understanding on why adversarial examples
arise; whether they originate due to inherent properties of data or due to lack
of training samples remains ill-understood. In this work, we introduce a
theoretical framework analogous to bias-variance theory for understanding these
effects.
We use our framework to analyze the robustness of a canonical non-parametric
classifier - the k-nearest neighbors. Our analysis shows that its robustness
properties depend critically on the value of k - the classifier may be
inherently non-robust for small k, but its robustness approaches that of the
Bayes Optimal classifier for fast-growing k. We propose a novel modified
1-nearest neighbor classifier, and guarantee its robustness in the large sample
limit. Our experiments suggest that this classifier may have good robustness
properties even for reasonable data set sizes.
|
Yizhen Wang, Somesh Jha, Kamalika Chaudhuri
| null |
1706.03922
| null | null |
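For plain 1-NN (as opposed to the paper's modified classifier), pointwise robustness admits a simple certified lower bound: the prediction at $x$ cannot change under any perturbation smaller than half the gap between the nearest differently-labeled and the nearest same-labeled training point. A small sketch of that geometric fact:

```python
import numpy as np

def one_nn_certified_radius(X_train, y_train, x):
    """Certified lower bound on the 1-NN robustness radius at x: moving x
    by eps changes each distance by at most eps, so the label is stable
    whenever d_same + eps < d_diff - eps."""
    d = np.linalg.norm(X_train - x, axis=1)
    label = y_train[d.argmin()]
    d_same = d[y_train == label].min()
    d_diff = d[y_train != label].min()
    return 0.5 * (d_diff - d_same)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
print(one_nn_certified_radius(X, y, np.array([1.5, 0.0])))
```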
Generative Models for Learning from Crowds
|
cs.AI cs.HC cs.LG
|
In this paper, we propose generative probabilistic models for label
aggregation. We use Gibbs sampling and a novel variational inference algorithm
to perform the posterior inference. Empirical results show that our methods
consistently outperform state-of-the-art methods.
|
Chi Hong
| null |
1706.03930
| null | null |
Exact Learning from an Honest Teacher That Answers Membership Queries
|
cs.LG
|
Consider a teacher that holds a function $f:X\to R$ from some class of functions
$C$. The teacher can receive from the learner an element~$d$ in the domain $X$
(a query) and returns the value of the function at $d$, $f(d)\in R$. The
learner's goal is to find $f$ with a minimum number of queries, optimal time
complexity, and optimal resources.
In this survey, we present some of the results known from the literature,
different techniques used, some new problems, and open problems.
|
Nader H. Bshouty
| null |
1706.03935
| null | null |
Accelerated Dual Learning by Homotopic Initialization
|
cs.LG
|
Gradient descent and coordinate descent are well understood in terms of their
asymptotic behavior, but less so in a transient regime often used for
approximations in machine learning. We investigate how proper initialization
can have a profound effect on finding near-optimal solutions quickly. We show
that a certain property of a data set, namely the boundedness of the
correlations between eigenfeatures and the response variable, can lead to
faster initial progress than expected by commonplace analysis. Convex
optimization problems can tacitly benefit from that, but this automatism does
not apply to their dual formulation. We analyze this phenomenon and devise
provably good initialization strategies for dual optimization as well as
heuristics for the non-convex case, relevant for deep learning. We find our
predictions and methods to be experimentally well-supported.
|
Hadi Daneshmand, Hamed Hassani, Thomas Hofmann
| null |
1706.03958
| null | null |
Getting deep recommenders fit: Bloom embeddings for sparse binary
input/output networks
|
cs.LG cs.AI cs.IR cs.NE
|
Recommendation algorithms that incorporate techniques from deep learning are
becoming increasingly popular. Due to the structure of the data coming from
recommendation domains (i.e., one-hot-encoded vectors of item preferences),
these algorithms tend to have large input and output dimensionalities that
dominate their overall size. This makes them difficult to train, due to the
limited memory of graphical processing units, and difficult to deploy on mobile
devices with limited hardware. To address these difficulties, we propose Bloom
embeddings, a compression technique that can be applied to the input and output
of neural network models dealing with sparse high-dimensional binary-coded
instances. Bloom embeddings are computationally efficient, and do not seriously
compromise the accuracy of the model up to 1/5 compression ratios. In some
cases, they even improve over the original accuracy, with relative increases up
to 12%. We evaluate Bloom embeddings on 7 data sets and compare it against 4
alternative methods, obtaining favorable results. We also discuss a number of
further advantages of Bloom embeddings, such as 'on-the-fly' constant-time
operation, zero or marginal space requirements, training time speedups, or the
fact that they do not require any change to the core model architecture or
training configuration.
|
Joan Serr\`a and Alexandros Karatzoglou
| null |
1706.03993
| null | null |
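The construction is Bloom-filter-style: each active one-hot index is mapped by $k$ hash functions into a much smaller binary vector, and candidate scores are read back through the same hashes at the output. A minimal sketch of one plausible encode/decode pair, with simple universal-style modular hashing standing in for whatever hash family an implementation would actually use:

```python
import numpy as np

def _hash_params(k, seed):
    """k pairs (a, b) for universal-style hashes h(x) = (a*x + b) % m."""
    rng = np.random.default_rng(seed)
    return rng.integers(1, 2**31 - 1, size=k), rng.integers(0, 2**31 - 1, size=k)

def bloom_encode(item_ids, m, k=4, seed=0):
    """Project active item ids (one-hot indices) into an m-dim binary
    vector via k hash functions; m << vocabulary size."""
    a, b = _hash_params(k, seed)
    v = np.zeros(m, dtype=np.float32)
    for x in item_ids:
        v[(a * x + b) % m] = 1.0
    return v

def bloom_decode_scores(output_scores, candidate_ids, m, k=4, seed=0):
    """Score a candidate item by averaging the model's outputs at its k
    hashed positions; hash collisions add noise that stays small for
    moderate compression ratios."""
    a, b = _hash_params(k, seed)
    return np.array([output_scores[(a * x + b) % m].mean() for x in candidate_ids])

print(bloom_encode([3, 17, 42_000], m=1024).sum())  # at most 3*k active positions
```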
Recurrent Latent Variable Networks for Session-Based Recommendation
|
cs.IR cs.LG stat.ML
|
In this work, we attempt to ameliorate the impact of data sparsity in the
context of session-based recommendation. Specifically, we seek to devise a
machine learning mechanism capable of extracting subtle and complex underlying
temporal dynamics in the observed session data, so as to inform the
recommendation algorithm. To this end, we improve upon systems that utilize
deep learning techniques with recurrently connected units; we do so by adopting
concepts from the field of Bayesian statistics, namely variational inference.
Our proposed approach consists in treating the network recurrent units as
stochastic latent variables with a prior distribution imposed over them. On
this basis, we proceed to infer corresponding posteriors; these can be used for
prediction and recommendation generation, in a way that accounts for the
uncertainty in the available sparse training data. To allow for our approach to
easily scale to large real-world datasets, we perform inference under an
approximate amortized variational inference (AVI) setup, whereby the learned
posteriors are parameterized via (conventional) neural networks. We perform an
extensive experimental evaluation of our approach using challenging benchmark
datasets, and illustrate its superiority over existing state-of-the-art
techniques.
|
Sotirios Chatzis, Panayiotis Christodoulou, Andreas S. Andreou
| null |
1706.04026
| null | null |
Beyond Monte Carlo Tree Search: Playing Go with Deep Alternative Neural
Network and Long-Term Evaluation
|
cs.AI cs.LG cs.NE
|
Monte Carlo tree search (MCTS) is extremely popular in computer Go which
determines each action by enormous simulations in a broad and deep search tree.
However, human experts select most actions by pattern analysis and careful
evaluation rather than brute search of millions of future interactions. In this
paper, we propose a computer Go system that follows experts' way of thinking and
playing. Our system consists of two parts. The first part is a novel deep
alternative neural network (DANN) used to generate candidates of next move.
Compared with existing deep convolutional neural network (DCNN), DANN inserts
recurrent layer after each convolutional layer and stacks them in an
alternative manner. We show that such a setting can preserve more context of local
features and their evolution, which is beneficial for move prediction. The
second part is a long-term evaluation (LTE) module used to provide a reliable
evaluation of candidates rather than a single probability from move predictor.
This is consistent with human experts' nature of playing, since they can foresee
tens of steps to give an accurate estimation of candidates. In our system, for
each candidate, LTE calculates a cumulative reward after several future
interactions when local variations are settled. Combining criteria from the two
parts, our system determines the optimal choice of next move. For more
comprehensive experiments, we introduce a new professional Go dataset (PGD),
consisting of 253233 professional records. Experiments on GoGoD and PGD
datasets show the DANN can substantially improve performance of move prediction
over pure DCNN. When combining LTE, our system outperforms most relevant
approaches and open engines based on MCTS.
|
Jinzhuo Wang, Wenmin Wang, Ronggang Wang, Wen Gao
| null |
1706.04052
| null | null |
Convergence analysis of belief propagation for pairwise linear Gaussian
models
|
cs.LG stat.ML
|
Gaussian belief propagation (BP) has been widely used for distributed
inference in large-scale networks such as the smart grid, sensor networks, and
social networks, where local measurements/observations are scattered over a
wide geographical area. One particular case is when two neighboring agents
share a common observation. For example, to estimate voltage in the direct
current (DC) power flow model, the current measurement over a power line is
proportional to the voltage difference between two neighboring buses. When
applying the Gaussian BP algorithm to this type of problem, the convergence
condition remains an open issue. In this paper, we analyze the convergence
properties of Gaussian BP for this pairwise linear Gaussian model. We show
analytically that the updating information matrix converges at a geometric rate
to a unique positive definite matrix with arbitrary positive semidefinite
initial value, and further provide the necessary and sufficient condition for
the belief mean vector to converge to the optimal estimate.
|
Jian Du, Shaodan Ma, Yik-Chung Wu, Soummya Kar and Jos\'e M. F. Moura
| null |
1706.04074
| null | null |
Interaction-Based Distributed Learning in Cyber-Physical and Social
Networks
|
math.OC cs.LG math.ST stat.TH
|
In this paper we consider a network scenario in which agents can evaluate
each other according to a score graph that models some physical or social
interaction. The goal is to design a distributed protocol, run by the agents,
allowing them to learn their unknown state among a finite set of possible
values. We propose a Bayesian framework in which scores and states are
associated to probabilistic events with unknown parameters and hyperparameters
respectively. We prove that each agent can learn its state by means of a local
Bayesian classifier and a (centralized) Maximum-Likelihood (ML) estimator of
the parameter-hyperparameter that combines plain ML and Empirical Bayes
approaches. By using tools from graphical models, which allow us to gain
insight on conditional dependences of scores and states, we provide two relaxed
probabilistic models that ultimately lead to ML parameter-hyperparameter
estimators amenable to distributed computation. In order to highlight the
appropriateness of the proposed relaxations, we demonstrate the distributed
estimators on a machine-to-machine testing set-up for anomaly detection and on
a social interaction set-up for user profiling.
|
Francesco Sasso and Angelo Coluccia and Giuseppe Notarstefano
| null |
1706.04081
| null | null |
Provable Alternating Gradient Descent for Non-negative Matrix
Factorization with Strong Correlations
|
cs.LG cs.DS cs.NA stat.ML
|
Non-negative matrix factorization is a basic tool for decomposing data into
the feature and weight matrices under non-negativity constraints, and in
practice is often solved in the alternating minimization framework. However, it
is unclear whether such algorithms can recover the ground-truth feature matrix
when the weights for different features are highly correlated, which is common
in applications. This paper proposes a simple and natural alternating gradient
descent based algorithm, and shows that with a mild initialization it provably
recovers the ground-truth in the presence of strong correlations. In most
interesting cases, the correlation can be in the same order as the highest
possible. Our analysis also reveals its several favorable features including
robustness to noise. We complement our theoretical results with empirical
studies on semi-synthetic datasets, demonstrating its advantage over several
popular methods in recovering the ground-truth.
|
Yuanzhi Li, Yingyu Liang
| null |
1706.04097
| null | null |
Zero-Shot Relation Extraction via Reading Comprehension
|
cs.CL cs.AI cs.LG
|
We show that relation extraction can be reduced to answering simple reading
comprehension questions, by associating one or more natural-language questions
with each relation slot. This reduction has several advantages: we can (1)
learn relation-extraction models by extending recent neural
reading-comprehension techniques, (2) build very large training sets for those
models by combining relation-specific crowd-sourced questions with distant
supervision, and even (3) do zero-shot learning by extracting new relation
types that are only specified at test-time, for which we have no labeled
training examples. Experiments on a Wikipedia slot-filling task demonstrate
that the approach can generalize to new questions for known relation types with
high accuracy, and that zero-shot generalization to unseen relation types is
possible, at lower accuracy levels, setting the bar for future work on this
task.
|
Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer
| null |
1706.04115
| null | null |
Online Learning for Structured Loss Spaces
|
cs.LG
|
We consider prediction with expert advice when the loss vectors are assumed
to lie in a set described by the sum of atomic norm balls. We derive a regret
bound for a general version of the online mirror descent (OMD) algorithm that
uses a combination of regularizers, each adapted to the constituent atomic
norms. The general result recovers standard OMD regret bounds, and yields
regret bounds for new structured settings where the loss vectors are (i) noisy
versions of points from a low-rank subspace, (ii) sparse vectors corrupted with
noise, and (iii) sparse perturbations of low-rank vectors. For the problem of
online learning with structured losses, we also show lower bounds on regret in
terms of rank and sparsity of the source set of the loss vectors, which implies
lower bounds for the above additive loss settings as well.
|
Siddharth Barman, Aditya Gopalan, and Aadirupa Saha
| null |
1706.04125
| null | null |
Personalizing Session-based Recommendations with Hierarchical Recurrent
Neural Networks
|
cs.LG cs.HC cs.IR
|
Session-based recommendations are highly relevant in many modern on-line
services (e.g. e-commerce, video streaming) and recommendation settings.
Recently, Recurrent Neural Networks have been shown to perform very well in
session-based settings. While in many session-based recommendation domains user
identifiers are hard to come by, there are also domains in which user profiles
are readily available. We propose a seamless way to personalize RNN models with
cross-session information transfer and devise a Hierarchical RNN model that
relays and evolves latent hidden states of the RNNs across user sessions.
Results on two industry datasets show large improvements over the session-only
RNNs.
|
Massimo Quadrana, Alexandros Karatzoglou, Bal\'azs Hidasi and Paolo
Cremonesi
|
10.1145/3109859.3109896
|
1706.04148
| null | null |
Gradient descent GAN optimization is locally stable
|
cs.LG cs.AI math.OC stat.ML
|
Despite the growing prominence of generative adversarial networks (GANs),
optimization in GANs is still a poorly understood topic. In this paper, we
analyze the "gradient descent" form of GAN optimization i.e., the natural
setting where we simultaneously take small gradient steps in both generator and
discriminator parameters. We show that even though GAN optimization does not
correspond to a convex-concave game (even for simple parameterizations), under
proper conditions, equilibrium points of this optimization procedure are still
\emph{locally asymptotically stable} for the traditional GAN formulation. On
the other hand, we show that the recently proposed Wasserstein GAN can have
non-convergent limit cycles near equilibrium. Motivated by this stability
analysis, we propose an additional regularization term for gradient descent GAN
updates, which \emph{is} able to guarantee local stability for both the WGAN
and the traditional GAN, and also shows practical promise in speeding up
convergence and addressing mode collapse.
|
Vaishnavh Nagarajan, J. Zico Kolter
| null |
1706.04156
| null | null |
Lost Relatives of the Gumbel Trick
|
stat.ML cs.LG
|
The Gumbel trick is a method to sample from a discrete probability
distribution, or to estimate its normalizing partition function. The method
relies on repeatedly applying a random perturbation to the distribution in a
particular way, each time solving for the most likely configuration. We derive
an entire family of related methods, of which the Gumbel trick is one member,
and show that the new methods have superior properties in several settings with
minimal additional computational cost. In particular, for the Gumbel trick to
yield computational benefits for discrete graphical models, Gumbel
perturbations on all configurations are typically replaced with so-called
low-rank perturbations. We show how a subfamily of our new methods adapts to
this setting, proving new upper and lower bounds on the log partition function
and deriving a family of sequential samplers for the Gibbs distribution.
Finally, we balance the discussion by showing how the simpler analytical form
of the Gumbel trick enables additional theoretical results.
|
Matej Balog, Nilesh Tripuraneni, Zoubin Ghahramani, Adrian Weller
| null |
1706.04161
| null | null |
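The member of the family the abstract starts from, the basic Gumbel(-max) trick: perturbing unnormalized log-potentials $\phi_i$ with i.i.d. Gumbel noise and taking the argmax yields an exact Gibbs sample, and the mean of the maximum recovers $\log Z$ after correcting for the standard Gumbel's nonzero mean. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(phi):
    """argmax_i (phi_i + g_i), g_i ~ Gumbel(0, 1), is an exact sample from
    the Gibbs distribution p(i) proportional to exp(phi_i)."""
    return int(np.argmax(phi + rng.gumbel(size=phi.shape)))

def log_partition_estimate(phi, n=20_000):
    """max_i (phi_i + g_i) is Gumbel(log Z, 1)-distributed, so its mean is
    log Z + Euler-Mascheroni; average the max and subtract the offset."""
    g = rng.gumbel(size=(n,) + phi.shape)
    return (phi + g).max(axis=1).mean() - np.euler_gamma

phi = np.array([1.0, 2.0, 0.5])
print(log_partition_estimate(phi), np.log(np.exp(phi).sum()))   # close
counts = np.bincount([gumbel_max_sample(phi) for _ in range(10_000)])
print(counts / counts.sum(), np.exp(phi) / np.exp(phi).sum())   # close
```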
Hybrid Reward Architecture for Reinforcement Learning
|
cs.LG
|
One of the main challenges in reinforcement learning (RL) is generalisation.
In typical deep RL methods this is achieved by approximating the optimal value
function with a low-dimensional representation using a deep network. While this
approach works well in many domains, in domains where the optimal value
function cannot easily be reduced to a low-dimensional representation, learning
can be very slow and unstable. This paper contributes towards tackling such
challenging domains, by proposing a new method, called Hybrid Reward
Architecture (HRA). HRA takes as input a decomposed reward function and learns
a separate value function for each component reward function. Because each
component typically only depends on a subset of all features, the corresponding
value function can be approximated more easily by a low-dimensional
representation, enabling more effective learning. We demonstrate HRA on a
toy-problem and the Atari game Ms. Pac-Man, where HRA achieves above-human
performance.
|
Harm van Seijen and Mehdi Fatemi and Joshua Romoff and Romain Laroche
and Tavian Barnes and Jeffrey Tsang
| null |
1706.04208
| null | null |
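A tabular caricature of HRA's core mechanism, under stated assumptions (the paper uses deep networks with shared lower layers and discusses several aggregation/bootstrapping variants): the reward decomposes as $r = \sum_k r_k$, each head learns its own value function from $r_k$, and actions are chosen greedily with respect to the aggregated heads.

```python
import numpy as np

class HybridQ:
    """One tabular Q-function per reward component; act on the summed heads."""
    def __init__(self, n_states, n_actions, n_heads, lr=0.1, gamma=0.99):
        self.Q = np.zeros((n_heads, n_states, n_actions))
        self.lr, self.gamma = lr, gamma

    def act(self, s):
        # greedy w.r.t. the aggregate Q_HRA(s, a) = sum_k Q_k(s, a)
        return int(self.Q[:, s, :].sum(axis=0).argmax())

    def update(self, s, a, rewards, s_next, done):
        """rewards: length-n_heads vector (r_1, ..., r_n) with r = sum_k r_k.
        Each head runs an independent Q-learning update on its own reward
        (one of several bootstrapping variants one could choose here)."""
        for k, r_k in enumerate(rewards):
            target = r_k + (0.0 if done else self.gamma * self.Q[k, s_next].max())
            self.Q[k, s, a] += self.lr * (target - self.Q[k, s, a])

agent = HybridQ(n_states=25, n_actions=4, n_heads=3)
agent.update(s=0, a=agent.act(0), rewards=np.array([0.0, 1.0, -0.1]),
             s_next=1, done=False)
```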
Adversarially Regularized Autoencoders
|
cs.LG cs.CL cs.NE
|
Deep latent variable models, trained using variational autoencoders or
generative adversarial networks, are now a key technique for representation
learning of continuous structures. However, applying similar methods to
discrete structures, such as text sequences or discretized images, has proven
to be more challenging. In this work, we propose a flexible method for training
deep latent variable models of discrete structures. Our approach is based on
the recently-proposed Wasserstein autoencoder (WAE) which formalizes the
adversarial autoencoder (AAE) as an optimal transport problem. We first extend
this framework to model discrete sequences, and then further explore different
learned priors targeting a controllable representation. This adversarially
regularized autoencoder (ARAE) allows us to generate natural textual outputs as
well as perform manipulations in the latent space to induce change in the
output space. Finally we show that the latent representation can be trained to
perform unaligned textual style transfer, giving improvements both in
automatic/human evaluation compared to existing methods.
|
Jake Zhao (Junbo), Yoon Kim, Kelly Zhang, Alexander M. Rush and Yann
LeCun
| null |
1706.04223
| null | null |
On Optimistic versus Randomized Exploration in Reinforcement Learning
|
stat.ML cs.LG
|
We discuss the relative merits of optimistic and randomized approaches to
exploration in reinforcement learning. Optimistic approaches presented in the
literature apply an optimistic boost to the value estimate at each state-action
pair and select actions that are greedy with respect to the resulting
optimistic value function. Randomized approaches sample from among
statistically plausible value functions and select actions that are greedy with
respect to the random sample. Prior computational experience suggests that
randomized approaches can lead to far more statistically efficient learning. We
present two simple analytic examples that elucidate why this is the case. In
principle, there should be optimistic approaches that fare well relative to
randomized approaches, but that would require intractable computation.
Optimistic approaches that have been proposed in the literature sacrifice
statistical efficiency for the sake of computational efficiency. Randomized
approaches, on the other hand, may enable simultaneous statistical and
computational efficiency.
|
Ian Osband, Benjamin Van Roy
| null |
1706.04241
| null | null |
Optimization by a quantum reinforcement algorithm
|
cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.LG quant-ph
|
A reinforcement algorithm solves a classical optimization problem by
introducing a feedback to the system that slowly changes the energy landscape
and drives the algorithm toward an optimal solution in the configuration space.
Here, we use this strategy to concentrate (localize) preferentially the wave
function of a quantum particle, which explores the configuration space of the
problem, on an optimal configuration. We examine the method by solving
numerically the equations governing the evolution of the system, which are
similar to the nonlinear Schr\"odinger equations, for small problem sizes. In
particular, we observe that reinforcement increases the minimal energy gap of
the system in a quantum annealing algorithm. Our numerical simulations and the
latter observation show that this kind of quantum feedback might be helpful in
solving a computationally hard optimization problem by a quantum reinforcement
algorithm.
|
A. Ramezanpour
|
10.1103/PhysRevA.96.052307
|
1706.04262
| null | null |
Transfer entropy-based feedback improves performance in artificial
neural networks
|
cs.LG cs.IT cs.NE math.IT
|
The structure of the majority of modern deep neural networks is characterized
by unidirectional feed-forward connectivity across a very large number of
layers. By contrast, the architecture of the cortex of vertebrates contains
fewer hierarchical levels but many recurrent and feedback connections. Here we
show that a small, few-layer artificial neural network that employs feedback
will reach top level performance on a standard benchmark task, otherwise only
obtained by large feed-forward structures. To achieve this we use feed-forward
transfer entropy between neurons to structure feedback connectivity. Transfer
entropy can here intuitively be understood as a measure for the relevance of
certain pathways in the network, which are then amplified by feedback. Feedback
may therefore be key for high network performance in small brain-like
architectures.
|
Sebastian Herzog, Christian Tetzlaff and Florentin W\"org\"otter
| null |
1706.04265
| null | null |
Leveraging Node Attributes for Incomplete Relational Data
|
stat.ML cs.LG cs.SI
|
Relational data are usually highly incomplete in practice, which inspires us
to leverage side information to improve the performance of community detection
and link prediction. This paper presents a Bayesian probabilistic approach that
incorporates various kinds of node attributes encoded in binary form in
relational models with Poisson likelihood. Our method works flexibly with both
directed and undirected relational networks. The inference can be done by
efficient Gibbs sampling which leverages sparsity of both networks and node
attributes. Extensive experiments show that our models achieve the
state-of-the-art link prediction results, especially with highly incomplete
relational data.
|
He Zhao, Lan Du, Wray Buntine
| null |
1706.04289
| null | null |
Dueling Bandits With Weak Regret
|
cs.LG
|
We consider online content recommendation with implicit feedback through
pairwise comparisons, formalized as the so-called dueling bandit problem. We
study the dueling bandit problem in the Condorcet winner setting, and consider
two notions of regret: the more well-studied strong regret, which is 0 only
when both arms pulled are the Condorcet winner; and the less well-studied weak
regret, which is 0 if either arm pulled is the Condorcet winner. We propose a
new algorithm for this problem, Winner Stays (WS), with variations for each
kind of regret: WS for weak regret (WS-W) has expected cumulative weak regret
that is $O(N^2)$, and $O(N\log(N))$ if arms have a total order; WS for strong
regret (WS-S) has expected cumulative strong regret of $O(N^2 + N \log(T))$,
and $O(N\log(N)+N\log(T))$ if arms have a total order. WS-W is the first
dueling bandit algorithm with weak regret that is constant in time. WS is
simple to compute, even for problems with many arms, and we demonstrate through
numerical experiments on simulated and real data that WS has significantly
smaller regret than existing algorithms in both the weak- and strong-regret
settings.
|
Bangrui Chen, Peter I. Frazier
| null |
1706.04304
| null | null |
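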
Teaching Compositionality to CNNs
|
cs.CV cs.LG
|
Convolutional neural networks (CNNs) have shown great success in computer
vision, approaching human-level performance when trained for specific tasks via
application-specific loss functions. In this paper, we propose a method for
augmenting and training CNNs so that their learned features are compositional.
It encourages networks to form representations that disentangle objects from
their surroundings and from each other, thereby promoting better
generalization. Our method is agnostic to the specific details of the
underlying CNN to which it is applied and can in principle be used with any
CNN. As we show in our experiments, the learned representations lead to feature
activations that are more localized and improve performance over
non-compositional baselines in object recognition tasks.
|
Austin Stone, Huayan Wang, Michael Stark, Yi Liu, D. Scott Phoenix,
Dileep George
| null |
1706.04313
| null | null |
Transfer Learning for Neural Semantic Parsing
|
cs.CL cs.LG
|
The goal of semantic parsing is to map natural language to a machine
interpretable meaning representation language (MRL). One of the constraints
that limits full exploration of deep learning technologies for semantic parsing
is the lack of sufficient annotation training data. In this paper, we propose
using sequence-to-sequence in a multi-task setup for semantic parsing with a
focus on transfer learning. We explore three multi-task architectures for
sequence-to-sequence modeling and compare their performance with an
independently trained model. Our experiments show that the multi-task setup
aids transfer learning from an auxiliary task with large labeled data to a
target task with smaller labeled data. We see absolute accuracy gains ranging
from 1.0% to 4.4% in our in-house data set, and we also see good gains ranging
from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and
semantic auxiliary tasks.
|
Xing Fan, Emilio Monti, Lambert Mathias, Markus Dreyer
| null |
1706.04326
| null | null |
A survey of dimensionality reduction techniques based on random
projection
|
cs.LG
|
Dimensionality reduction techniques play important roles in the analysis of
big data. Traditional dimensionality reduction approaches, such as principal
component analysis (PCA) and linear discriminant analysis (LDA), have been
studied extensively in the past few decades. However, as the dimensionality of
data increases, the computational cost of traditional dimensionality reduction
methods grows exponentially, and the computation becomes prohibitively
intractable. These drawbacks have triggered the development of random
projection (RP) techniques, which map high-dimensional data onto a
low-dimensional subspace with extremely reduced time cost. However, the RP
transformation matrix is generated without considering the intrinsic structure
of the original data and usually leads to relatively high distortion.
Therefore, in recent years, methods based on RP have been proposed to address
this problem. In this paper, we summarize the methods used in different
situations to help practitioners to employ the proper techniques for their
specific applications. Meanwhile, we enumerate the benefits and limitations of
the various methods and provide further references for researchers to develop
novel RP-based approaches.
|
Haozhe Xie, Jie Li, Hanqing Xue
| null |
1706.04371
| null | null |
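The data-oblivious baseline the survey builds from is Gaussian random projection, justified by the Johnson-Lindenstrauss lemma: projecting $n$ points to $k = O(\epsilon^{-2}\log n)$ dimensions preserves pairwise distances within $1\pm\epsilon$ with high probability. A minimal sketch (the constant in $k$ is one common choice, not canonical):

```python
import numpy as np

def random_projection(X, eps=0.2, rng=np.random.default_rng(0)):
    """Gaussian JL projection: k = O(log(n)/eps^2) columns, scaled by
    1/sqrt(k) so squared distances are preserved in expectation."""
    n, d = X.shape
    k = int(np.ceil(4 * np.log(n) / eps**2))
    R = rng.normal(0.0, 1.0, size=(d, k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10_000))
Y = random_projection(X)
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Y[0] - Y[1]))  # close
```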
Empirical Analysis of the Hessian of Over-Parametrized Neural Networks
|
cs.LG
|
We study the properties of common loss surfaces through their Hessian matrix.
In particular, in the context of deep learning, we empirically show that the
spectrum of the Hessian is composed of two parts: (1) the bulk centered near
zero, (2) and outliers away from the bulk. We present numerical evidence and
mathematical justifications to the following conjectures laid out by Sagun et
al. (2016): Fixing data, increasing the number of parameters merely scales the
bulk of the spectrum; fixing the dimension and changing the data (for instance
adding more clusters or making the data less separable) only affects the
outliers. We believe that our observations have striking implications for
non-convex optimization in high dimensions. First, the flatness of such
landscapes (which can be measured by the singularity of the Hessian) implies
that classical notions of basins of attraction may be quite misleading, and
that the discussion of wide/narrow basins may need a new perspective built
around the over-parametrization and redundancy that are able to create large
connected components at the bottom of the landscape. Second, the dependence of
the small number of large eigenvalues on the data distribution can be linked to the
spectrum of the covariance matrix of gradients of model outputs. With this in
mind, we may reevaluate the connections within the data-architecture-algorithm
framework of a model, hoping that it would shed light into the geometry of
high-dimensional and non-convex spaces in modern applications. In particular,
we present a case that links the two observations: small and large batch
gradient descent appear to converge to different basins of attraction but we
show that they are in fact connected through their flat region and so belong to
the same basin.
|
Levent Sagun, Utku Evci, V. Ugur Guney, Yann Dauphin, Leon Bottou
| null |
1706.04454
| null | null |
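Spectra like these are computed without ever materializing the Hessian, using Hessian-vector products combined with power iteration or Lanczos for the extreme eigenvalues. A minimal sketch with finite-difference HVPs, assuming you supply a `grad_fn(w)` that returns the loss gradient at parameters `w` (in practice one would use exact autodiff HVPs):

```python
import numpy as np

def hvp(grad_fn, w, v, h=1e-4):
    """Finite-difference Hessian-vector product:
    H v ~= (grad(w + h*v) - grad(w - h*v)) / (2h)."""
    return (grad_fn(w + h * v) - grad_fn(w - h * v)) / (2 * h)

def top_eigenvalue(grad_fn, w, n_iter=100, rng=np.random.default_rng(0)):
    """Power iteration on v -> Hv to estimate the largest-magnitude
    Hessian eigenvalue (an 'outlier' in the paper's terminology)."""
    v = rng.normal(size=w.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        Hv = hvp(grad_fn, w, v)
        lam = float(v @ Hv)                  # Rayleigh quotient
        v = Hv / (np.linalg.norm(Hv) + 1e-12)
    return lam

# toy check on a quadratic L(w) = 0.5 w^T A w, whose Hessian is A
A = np.diag([5.0, 1.0, 0.1])
print(top_eigenvalue(lambda w: A @ w, np.zeros(3)))  # ~5.0
```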
SEARNN: Training RNNs with Global-Local Losses
|
cs.LG stat.ML
|
We propose SEARNN, a novel training algorithm for recurrent neural networks
(RNNs) inspired by the "learning to search" (L2S) approach to structured
prediction. RNNs have been widely successful in structured prediction
applications such as machine translation or parsing, and are commonly trained
using maximum likelihood estimation (MLE). Unfortunately, this training loss is
not always an appropriate surrogate for the test error: by only maximizing the
ground truth probability, it fails to exploit the wealth of information offered
by structured losses. Further, it introduces discrepancies between training and
predicting (such as exposure bias) that may hurt test performance. Instead,
SEARNN leverages test-alike search space exploration to introduce global-local
losses that are closer to the test error. We first demonstrate improved
performance over MLE on two different tasks: OCR and spelling correction. Then,
we propose a subsampling strategy to enable SEARNN to scale to large vocabulary
sizes. This allows us to validate the benefits of our approach on a machine
translation task.
|
R\'emi Leblond, Jean-Baptiste Alayrac, Anton Osokin and Simon
Lacoste-Julien
| null |
1706.04499
| null | null |
Reinforcement Learning with Budget-Constrained Nonparametric Function
Approximation for Opportunistic Spectrum Access
|
cs.IT cs.LG math.IT stat.ML
|
Opportunistic spectrum access is one of the emerging techniques for
maximizing throughput in congested bands and is enabled by predicting idle
slots in spectrum. We propose a kernel-based reinforcement learning approach
coupled with a novel budget-constrained sparsification technique that
efficiently captures the environment to find the best channel access actions.
This approach allows learning and planning over the intrinsic state-action
space and extends well to large state spaces. We apply our methods to evaluate
coexistence of a reinforcement learning-based radio with a multi-channel
adversarial radio and a single-channel CSMA-CA radio. Numerical experiments
show the performance gains over carrier-sense systems.
|
Theodoros Tsiligkaridis, David Romero
| null |
1706.04546
| null | null |
Deep Learning Methods for Efficient Large Scale Video Labeling
|
stat.ML cs.CV cs.LG
|
We present a solution to "Google Cloud and YouTube-8M Video Understanding
Challenge" that ranked 5th place. The proposed model is an ensemble of three
model families, two frame-level and one video-level. Training was performed
on an augmented dataset with cross-validation.
|
Miha Skalic, Marcin Pekalski, Xingguo E. Pan
| null |
1706.04572
| null | null |
On Calibration of Modern Neural Networks
|
cs.LG
|
Confidence calibration -- the problem of predicting probability estimates
representative of the true correctness likelihood -- is important for
classification models in many applications. We discover that modern neural
networks, unlike those from a decade ago, are poorly calibrated. Through
extensive experiments, we observe that depth, width, weight decay, and Batch
Normalization are important factors influencing calibration. We evaluate the
performance of various post-processing calibration methods on state-of-the-art
architectures with image and document classification datasets. Our analysis and
experiments not only offer insights into neural network learning, but also
provide a simple and straightforward recipe for practical settings: on most
datasets, temperature scaling -- a single-parameter variant of Platt Scaling --
is surprisingly effective at calibrating predictions.
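
Temperature scaling itself is simple enough to sketch. The following minimal illustration fits a single scalar T by minimizing the validation negative log-likelihood; the synthetic logits and the bounded search interval are assumptions, not the authors' setup:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(logits, labels, T):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)          # numerically stable softmax
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(val_logits, val_labels):
    res = minimize_scalar(lambda T: nll(val_logits, val_labels, T),
                          bounds=(0.05, 10.0), method="bounded")
    return res.x

# toy usage with synthetic over-confident logits
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=1000)
logits = rng.normal(size=(1000, 5)); logits[np.arange(1000), labels] += 2.0
logits *= 4.0                                     # simulate over-confidence
T = fit_temperature(logits, labels)
print("fitted temperature:", round(T, 2))         # T > 1 indicates over-confidence
```

At test time one simply divides the logits by the fitted T before the softmax; the predicted class is unchanged, only the confidence is recalibrated.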
|
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
| null |
1706.04599
| null | null |
Provable benefits of representation learning
|
cs.LG stat.ML
|
There is general consensus that learning representations is useful for a
variety of reasons, e.g. efficient use of labeled data (semi-supervised
learning), transfer learning and understanding hidden structure of data.
Popular techniques for representation learning include clustering, manifold
learning, kernel-learning, autoencoders, Boltzmann machines, etc.
To study the relative merits of these techniques, it's essential to formalize
the definition and goals of representation learning, so that they all
become instances of the same definition. This paper introduces such a formal
framework that also formalizes the utility of learning the representation. It
is related to previous Bayesian notions, but with some new twists. We show the
usefulness of our framework by exhibiting simple and natural settings -- linear
mixture models and loglinear models, where the power of representation learning
can be formally shown. In these examples, representation learning can be
performed provably and efficiently under plausible assumptions (despite being
NP-hard), and furthermore: (i) it greatly reduces the need for labeled data
(semi-supervised learning) and (ii) it allows solving classification tasks when
simpler approaches like nearest neighbors require too much data, and (iii) it is
more powerful than manifold learning methods.
|
Sanjeev Arora, Andrej Risteski
| null |
1706.04601
| null | null |
Information Potential Auto-Encoders
|
cs.LG cs.IT math.IT stat.ML
|
In this paper, we suggest a framework to make use of mutual information as a
regularization criterion to train Auto-Encoders (AEs). In the proposed
framework, AEs are regularized by minimization of the mutual information
between input and encoding variables of AEs during the training phase. In order
to estimate the entropy of the encoding variables and the mutual information,
we propose a non-parametric method. We also give an information-theoretic view
of Variational AEs (VAEs), which suggests that VAEs can be considered as
parametric methods that estimate entropy. Experimental results show that the
proposed non-parametric models have more degrees of freedom in terms of
representation learning of features drawn from complex distributions such as
Mixture of Gaussians, compared to methods which estimate entropy using
parametric approaches, such as Variational AEs.
|
Yan Zhang and Mete Ozay and Zhun Sun and Takayuki Okatani
| null |
1706.04635
| null | null |
Proximal Backpropagation
|
cs.LG
|
We propose proximal backpropagation (ProxProp) as a novel algorithm that
takes implicit instead of explicit gradient steps to update the network
parameters during neural network training. Our algorithm is motivated by the
step size limitation of explicit gradient descent, which poses an impediment
for optimization. ProxProp is developed from a general point of view on the
backpropagation algorithm, currently the most common technique to train neural
networks via stochastic gradient descent and variants thereof. Specifically, we
show that backpropagation of a prediction error is equivalent to sequential
gradient descent steps on a quadratic penalty energy, which comprises the
network activations as variables of the optimization. We further analyze
theoretical properties of ProxProp and in particular prove that the algorithm
yields a descent direction in parameter space and can therefore be combined
with a wide variety of convergent algorithms. Finally, we devise an efficient
numerical implementation that integrates well with popular deep learning
frameworks. We conclude by demonstrating promising numerical results and show
that ProxProp can be effectively combined with common first order optimizers
such as Adam.
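
The proximal step that replaces the explicit gradient step has a closed form for a linear layer. The sketch below (not the paper's implementation; it isolates a single linear least-squares layer with assumed toy targets, while full ProxProp interleaves such steps with backpropagation across layers) illustrates why the implicit update tolerates step sizes that would make the explicit step diverge:

```python
import numpy as np

def proximal_step(W, A, Z, tau):
    # W+ = argmin_V 1/2 ||A V^T - Z||^2 + 1/(2 tau) ||V - W||^2  (closed form)
    d = A.shape[1]
    rhs = Z.T @ A + W / tau
    return np.linalg.solve(A.T @ A + np.eye(d) / tau, rhs.T).T

rng = np.random.default_rng(0)
A = rng.normal(size=(128, 16))                    # layer inputs (activations)
Z = rng.normal(size=(128, 4))                     # local targets for the layer
W = np.zeros((4, 16))
for _ in range(20):
    # stable even for large tau, where the explicit step W - tau * grad blows up
    W = proximal_step(W, A, Z, tau=10.0)
print("residual:", np.linalg.norm(A @ W.T - Z))
```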
|
Thomas Frerix, Thomas M\"ollenhoff, Michael Moeller, Daniel Cremers
| null |
1706.04638
| null | null |
Differentially Private Learning of Undirected Graphical Models using
Collective Graphical Models
|
cs.LG cs.CR stat.ML
|
We investigate the problem of learning discrete, undirected graphical models
in a differentially private way. We show that the approach of releasing noisy
sufficient statistics using the Laplace mechanism achieves a good trade-off
between privacy, utility, and practicality. A naive learning algorithm that
uses the noisy sufficient statistics "as is" outperforms general-purpose
differentially private learning algorithms. However, it has three limitations:
it ignores knowledge about the data generating process, rests on uncertain
theoretical foundations, and exhibits certain pathologies. We develop a more
principled approach that applies the formalism of collective graphical models
to perform inference over the true sufficient statistics within an
expectation-maximization framework. We show that this learns better models than
competing approaches on both synthetic data and on real human mobility data
used as a case study.
|
Garrett Bernstein, Ryan McKenna, Tao Sun, Daniel Sheldon, Michael Hay,
Gerome Miklau
| null |
1706.04646
| null | null |
A Practical Method for Solving Contextual Bandit Problems Using Decision
Trees
|
cs.LG stat.ML
|
Many efficient algorithms with strong theoretical guarantees have been
proposed for the contextual multi-armed bandit problem. However, applying these
algorithms in practice can be difficult because they require domain expertise
to build appropriate features and to tune their parameters. We propose a new
method for the contextual bandit problem that is simple, practical, and can be
applied with little or no domain expertise. Our algorithm relies on decision
trees to model the context-reward relationship. Decision trees are
non-parametric, interpretable, and work well without hand-crafted features. To
guide the exploration-exploitation trade-off, we use a bootstrapping approach
which abstracts Thompson sampling to non-Bayesian settings. We also discuss
several computational heuristics and demonstrate the performance of our method
on several datasets.
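
A hedged sketch of the general recipe (bootstrapped decision trees as a Thompson-sampling-like exploration scheme) follows; details such as tree depth, warm-start rule, and the per-step refitting are assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class TreeBootstrapBandit:
    def __init__(self, n_arms, seed=0):
        self.n_arms = n_arms
        self.rng = np.random.default_rng(seed)
        self.X = [[] for _ in range(n_arms)]      # per-arm contexts
        self.r = [[] for _ in range(n_arms)]      # per-arm rewards

    def select(self, x):
        scores = []
        for a in range(self.n_arms):
            if len(self.r[a]) < 2:
                return a                          # play each arm a little first
            # bootstrap resample, then fit a tree on the resampled history
            idx = self.rng.integers(0, len(self.r[a]), size=len(self.r[a]))
            tree = DecisionTreeRegressor(max_depth=5)
            tree.fit(np.array(self.X[a])[idx], np.array(self.r[a])[idx])
            scores.append(tree.predict(x.reshape(1, -1))[0])
        return int(np.argmax(scores))

    def update(self, a, x, reward):
        self.X[a].append(x); self.r[a].append(reward)

# toy usage: arm 1 is better when the first context feature is positive
bandit = TreeBootstrapBandit(n_arms=2)
rng = np.random.default_rng(1)
for t in range(500):
    x = rng.normal(size=3)
    a = bandit.select(x)
    reward = float((x[0] > 0) == (a == 1)) + 0.1 * rng.normal()
    bandit.update(a, x, reward)
```

The randomness of the bootstrap resample plays the role of the posterior sample in Thompson sampling, which is what allows the scheme to work without an explicit Bayesian model.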
|
Adam N. Elmachtoub, Ryan McNellis, Sechan Oh, Marek Petrik
| null |
1706.04687
| null | null |
Adaptive Feature Selection: Computationally Efficient Online Sparse
Linear Regression under RIP
|
cs.LG
|
Online sparse linear regression is an online problem where an algorithm
repeatedly chooses a subset of coordinates to observe in an adversarially
chosen feature vector, makes a real-valued prediction, receives the true label,
and incurs the squared loss. The goal is to design an online learning algorithm
with sublinear regret to the best sparse linear predictor in hindsight. Without
any assumptions, this problem is known to be computationally intractable. In
this paper, we assume that the data matrix satisfies the restricted isometry
property, and show that this assumption leads to computationally
efficient algorithms with sublinear regret for two variants of the problem. In
the first variant, the true label is generated according to a sparse linear
model with additive Gaussian noise. In the second, the true label is chosen
adversarially.
|
Satyen Kale, Zohar Karnin, Tengyuan Liang and D\'avid P\'al
| null |
1706.0469
| null | null |
Gradient Descent for Spiking Neural Networks
|
q-bio.NC cs.LG cs.NE stat.ML
|
Many studies of neural computation are based on network models of static
neurons that produce analog output, despite the fact that information
processing in the brain is predominantly carried out by dynamic neurons that
produce discrete pulses called spikes. Research in spike-based computation has
been impeded by the lack of an efficient supervised learning algorithm for spiking
networks. Here, we present a gradient descent method for optimizing spiking
network models by introducing a differentiable formulation of spiking networks
and deriving the exact gradient calculation. For demonstration, we trained
recurrent spiking networks on two dynamic tasks: one that requires optimizing
fast (~millisecond) spike-based interactions for efficient encoding of
information, and a delayed memory XOR task over extended duration (~second).
The results show that our method indeed optimizes the spiking network dynamics
on the time scale of individual spikes as well as behavioral time scales. In
conclusion, our result offers a general-purpose supervised learning algorithm
for spiking neural networks, thus advancing further investigations on
spike-based computation.
|
Dongsung Huh, Terrence J. Sejnowski
| null |
1706.04698
| null | null |
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
|
cs.LG
|
Ongoing research has proposed several methods to defend neural networks
against adversarial examples, many of which researchers have shown to be
ineffective. We ask whether a strong defense can be created by combining
multiple (possibly weak) defenses. To answer this question, we study three
defenses that follow this approach. Two of these are recently proposed defenses
that intentionally combine components designed to work well together. A third
defense combines three independent defenses. For all the components of these
defenses and the combined defenses themselves, we show that an adaptive
adversary can create adversarial examples successfully with low distortion.
Thus, our work implies that an ensemble of weak defenses is not sufficient to
provide strong defense against adversarial examples.
|
Warren He and James Wei and Xinyun Chen and Nicholas Carlini and Dawn
Song
| null |
1706.04701
| null | null |
Deep learning-based numerical methods for high-dimensional parabolic
partial differential equations and backward stochastic differential equations
|
math.NA cs.LG cs.NE math.PR stat.ML
|
We propose a new algorithm for solving parabolic partial differential
equations (PDEs) and backward stochastic differential equations (BSDEs) in high
dimension, by making an analogy between the BSDE and reinforcement learning
with the gradient of the solution playing the role of the policy function, and
the loss function given by the error between the prescribed terminal condition
and the solution of the BSDE. The policy function is then approximated by a
neural network, as is done in deep reinforcement learning. Numerical results
using TensorFlow illustrate the efficiency and accuracy of the proposed
algorithms for several 100-dimensional nonlinear PDEs from physics and finance
such as the Allen-Cahn equation, the Hamilton-Jacobi-Bellman equation, and a
nonlinear pricing model for financial derivatives.
|
Weinan E and Jiequn Han and Arnulf Jentzen
|
10.1007/s40304-017-0117-6
|
1706.04702
| null | null |
Reinforcement Learning under Model Mismatch
|
cs.LG stat.ML
|
We study reinforcement learning under model misspecification, where we do not
have access to the true environment but only to a reasonably close
approximation to it. We address this problem by extending the framework of
robust MDPs to the model-free Reinforcement Learning setting, where we do not
have access to the model parameters, but can only sample states from it. We
define robust versions of Q-learning, SARSA, and TD-learning and prove
convergence to an approximately optimal robust policy and approximate value
function respectively. We scale up the robust algorithms to large MDPs via
function approximation and prove convergence under two different settings. We
prove convergence of robust approximate policy iteration and robust approximate
value iteration for linear architectures (under mild assumptions). We also
define a robust loss function, the mean squared robust projected Bellman error
and give stochastic gradient descent algorithms that are guaranteed to converge
to a local minimum.
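
A heavily simplified sketch of the robust Q-learning flavor appears below. The uncertainty set here is just the empirical set of successor states observed for each state-action pair, which stands in for the paper's confidence region over transition models; the environment, schedule, and hyperparameters are assumptions:

```python
import numpy as np
from collections import defaultdict

def robust_q_learning(env_step, n_states, n_actions, episodes=200,
                      alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    seen = defaultdict(set)                       # (s, a) -> observed next states
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            a = int(rng.integers(n_actions)) if rng.random() < eps \
                else int(np.argmax(Q[s]))
            s2, r = env_step(s, a, rng)
            seen[(s, a)].add(s2)
            # robust (pessimistic) backup over the empirical uncertainty set
            worst = min(np.max(Q[n]) for n in seen[(s, a)])
            Q[s, a] += alpha * (r + gamma * worst - Q[s, a])
            s = s2
    return Q

# toy chain MDP: action 1 moves right with noise; reaching state 4 pays 1
def env_step(s, a, rng):
    s2 = min(s + 1, 4) if (a == 1 and rng.random() < 0.8) else max(s - 1, 0)
    return s2, float(s2 == 4)

Q = robust_q_learning(env_step, n_states=5, n_actions=2)
print(np.argmax(Q, axis=1))                       # greedy policy per state
```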
|
Aurko Roy, Huan Xu and Sebastian Pokutta
| null |
1706.04711
| null | null |
Effective Sequential Classifier Training for SVM-based Multitemporal
Remote Sensing Image Classification
|
cs.CV cs.LG
|
The explosive availability of remote sensing images has challenged supervised
classification algorithms such as Support Vector Machines (SVM), as training
samples tend to be highly limited due to the expensive and laborious task of
ground truthing. The temporal correlation and spectral similarity between
multitemporal images have opened up an opportunity to alleviate this problem.
In this study, a SVM-based Sequential Classifier Training (SCT-SVM) approach is
proposed for multitemporal remote sensing image classification. The approach
leverages the classifiers of previous images to reduce the required number of
training samples for the classifier training of an incoming image. For each
incoming image, a rough classifier is first predicted based on the temporal
trend of a set of previous classifiers. The predicted classifier is then
fine-tuned into a more accurate position with current training samples. This
approach can be applied progressively to sequential image data, with only a
small number of training samples being required from each image. Experiments
were conducted with Sentinel-2A multitemporal data over an agricultural area in
Australia. Results showed that the proposed SCT-SVM achieved better
classification accuracies compared with two state-of-the-art model transfer
algorithms. When training data are insufficient, the overall classification
accuracy of the incoming image was improved from 76.18% to 94.02% with the
proposed SCT-SVM, compared with those obtained without the assistance from
previous images. These results demonstrate that the leverage of a priori
information from previous images can provide advantageous assistance for later
images in multitemporal image classification.
|
Yiqing Guo, Xiuping Jia, and David Paull
|
10.1109/TIP.2018.2808767
|
1706.04719
| null | null |
Target Curricula via Selection of Minimum Feature Sets: a Case Study in
Boolean Networks
|
cs.AI cs.LG
|
We consider the effect of introducing a curriculum of targets when training
Boolean models on supervised Multi Label Classification (MLC) problems. In
particular, we consider how to order targets in the absence of prior knowledge,
and how such a curriculum may be enforced when using meta-heuristics to train
discrete non-linear models.
We show that hierarchical dependencies between targets can be exploited by
enforcing an appropriate curriculum using hierarchical loss functions. On
several multi-output circuit-inference problems with known target difficulties,
Feedforward Boolean Networks (FBNs) trained with such a loss function achieve
significantly lower out-of-sample error, up to $10\%$ in some cases. This
improvement increases as the loss places more emphasis on target order and is
strongly correlated with an easy-to-hard curriculum. We also demonstrate the
same improvements on three real-world models and two Gene Regulatory Network
(GRN) inference problems.
We posit a simple a-priori method for identifying an appropriate target order
and estimating the strength of target relationships in Boolean MLCs. These
methods use intrinsic dimension as a proxy for target difficulty, which is
estimated using optimal solutions to a combinatorial optimisation problem known
as the Minimum-Feature-Set (minFS) problem. We also demonstrate that the same
generalisation gains can be achieved without providing any knowledge of target
difficulty.
|
Shannon Fenn, Pablo Moscato
| null |
1706.04721
| null | null |
Revenue Optimization with Approximate Bid Predictions
|
cs.LG cs.GT
|
In the context of advertising auctions, finding good reserve prices is a
notoriously challenging learning problem. This is due to the heterogeneity of
ad opportunity types and the non-convexity of the objective function. In this
work, we show how to reduce reserve price optimization to the standard setting
of prediction under squared loss, a well understood problem in the learning
community. We further bound the gap between the expected bid and revenue in
terms of the average loss of the predictor. This is the first result that
formally relates the revenue gained to the quality of a standard machine
learned model.
|
Andr\'es Mu\~noz Medina and Sergei Vassilvitskii
| null |
1706.04732
| null | null |
Efficient Representative Subset Selection over Sliding Windows
|
cs.DS cs.LG cs.SI
|
Representative subset selection (RSS) is an important tool for users to draw
insights from massive datasets. Existing literature models RSS as the
submodular maximization problem to capture the "diminishing returns" property
of the representativeness of selected subsets, but often only has a single
constraint (e.g., cardinality), which limits its applications in many
real-world problems. To capture the data recency issue and support different
types of constraints, we formulate dynamic RSS in data streams as maximizing
submodular functions subject to general $d$-knapsack constraints (SMDK) over
sliding windows. We propose a \textsc{KnapWindow} framework (KW) for SMDK. KW
utilizes the \textsc{KnapStream} algorithm (KS) for SMDK in append-only streams
as a subroutine. It maintains a sequence of checkpoints and KS instances over
the sliding window. Theoretically, KW is
$\frac{1-\varepsilon}{1+d}$-approximate for SMDK. Furthermore, we propose a
\textsc{KnapWindowPlus} framework (KW$^{+}$) to improve upon KW. KW$^{+}$
builds an index \textsc{SubKnapChk} to manage the checkpoints and KS instances.
\textsc{SubKnapChk} deletes a checkpoint whenever it can be approximated by its
successors. By keeping much fewer checkpoints, KW$^{+}$ achieves higher
efficiency than KW while still guaranteeing a
$\frac{1-\varepsilon'}{2+2d}$-approximate solution for SMDK. Finally, we
evaluate the efficiency and solution quality of KW and KW$^{+}$ in real-world
datasets. The experimental results demonstrate that KW achieves more than two
orders of magnitude speedups over the batch baseline and preserves high-quality
solutions for SMDK over sliding windows. KW$^{+}$ further runs 5-10 times
faster than KW while providing solutions with equivalent or even better
utilities.
|
Yanhao Wang and Yuchen Li and Kian-Lee Tan
|
10.1109/TKDE.2018.2854182
|
1706.04764
| null | null |
Stochastic Training of Neural Networks via Successive Convex
Approximations
|
stat.ML cs.LG
|
This paper proposes a new family of algorithms for training neural networks
(NNs). These are based on recent developments in the field of non-convex
optimization, going under the general name of successive convex approximation
(SCA) techniques. The basic idea is to iteratively replace the original
(non-convex, high-dimensional) learning problem with a sequence of (strongly
convex) approximations, which are both accurate and simple to optimize.
Differently from similar ideas (e.g., quasi-Newton algorithms), the
approximations can be constructed using only first-order information of the
neural network function, in a stochastic fashion, while exploiting the overall
structure of the learning problem for a faster convergence. We discuss several
use cases, based on different choices for the loss function (e.g., squared loss
and cross-entropy loss), and for the regularization of the NN's weights. We
experiment on several medium-sized benchmark problems, and on a large-scale
dataset involving simulated physical data. The results show how the algorithm
outperforms state-of-the-art techniques, providing faster convergence to a
better minimum. Additionally, we show how the algorithm can be easily
parallelized over multiple computational units without hindering its
performance. In particular, each computational unit can optimize a tailored
surrogate function defined on a randomly assigned subset of the input
variables, whose dimension can be selected depending entirely on the available
computational power.
|
Simone Scardapane, Paolo Di Lorenzo
| null |
1706.04769
| null | null |
Sobolev Training for Neural Networks
|
cs.LG
|
At the heart of deep learning we aim to use neural networks as function
approximators - training them to produce outputs from inputs in emulation of a
ground truth function or data creation process. In many cases we only have
access to input-output pairs from the ground truth, however it is becoming more
common to have access to derivatives of the target output with respect to the
input - for example when the ground truth function is itself a neural network
such as in network compression or distillation. Generally these target
derivatives are not computed, or are ignored. This paper introduces Sobolev
Training for neural networks, which is a method for incorporating these target
derivatives in addition to the target values while training. By optimising
neural networks to not only approximate the function's outputs but also the
function's derivatives we encode additional information about the target
function within the parameters of the neural network. Thereby we can improve
the quality of our predictors, as well as the data-efficiency and
generalization capabilities of our learned function approximation. We provide
theoretical justifications for such an approach as well as examples of
empirical evidence on three distinct domains: regression on classical
optimisation datasets, distilling policies of an agent playing Atari, and on
large-scale applications of synthetic gradients. In all three domains the use
of Sobolev Training, employing target derivatives in addition to target values,
results in models with higher accuracy and stronger generalisation.
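
A minimal PyTorch sketch of the idea follows, assuming the ground-truth derivatives are available (here from a known target function, which is an assumption for illustration): the loss simply adds a term matching the network's input-derivative to the target derivative.

```python
import torch

target = lambda x: torch.sin(3 * x)               # ground truth with known derivative
target_grad = lambda x: 3 * torch.cos(3 * x)

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = (torch.rand(128, 1) * 4 - 2).requires_grad_(True)
    y = net(x)
    # derivative of the network output w.r.t. its input, kept in the graph
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    loss = ((y - target(x)) ** 2).mean() + ((dy - target_grad(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```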
|
Wojciech Marian Czarnecki, Simon Osindero, Max Jaderberg, Grzegorz
\'Swirszcz and Razvan Pascanu
| null |
1706.04859
| null | null |
Second-Order Kernel Online Convex Optimization with Adaptive Sketching
|
stat.ML cs.LG
|
Kernel online convex optimization (KOCO) is a framework combining the
expressiveness of non-parametric kernel models with the regret guarantees of
online learning. First-order KOCO methods such as functional gradient descent
require only $\mathcal{O}(t)$ time and space per iteration, and, when the only
information on the losses is their convexity, achieve a minimax optimal
$\mathcal{O}(\sqrt{T})$ regret. Nonetheless, many common losses in kernel
problems, such as squared loss, logistic loss, and squared hinge loss possess
stronger curvature that can be exploited. In this case, second-order KOCO
methods achieve $\mathcal{O}(\log(\text{Det}(\boldsymbol{K})))$ regret, which
we show scales as $\mathcal{O}(d_{\text{eff}}\log T)$, where $d_{\text{eff}}$
is the effective dimension of the problem and is usually much smaller than
$\mathcal{O}(\sqrt{T})$. The main drawback of second-order methods is their
much higher $\mathcal{O}(t^2)$ space and time complexity. In this paper, we
introduce kernel online Newton step (KONS), a new second-order KOCO method that
also achieves $\mathcal{O}(d_{\text{eff}}\log T)$ regret. To address the
computational complexity of second-order methods, we introduce a new matrix
sketching algorithm for the kernel matrix $\boldsymbol{K}_t$, and show that for
a chosen parameter $\gamma \leq 1$ our Sketched-KONS reduces the space and time
complexity by a factor of $\gamma^2$ to $\mathcal{O}(t^2\gamma^2)$ space and
time per iteration, while incurring only $1/\gamma$ times more regret.
|
Daniele Calandriello, Alessandro Lazaric and Michal Valko
| null |
1706.04892
| null | null |
A Survey Of Cross-lingual Word Embedding Models
|
cs.CL cs.LG
|
Cross-lingual representations of words enable us to reason about word meaning
in multilingual contexts and are a key facilitator of cross-lingual transfer
when developing natural language processing models for low-resource languages.
In this survey, we provide a comprehensive typology of cross-lingual word
embedding models. We compare their data requirements and objective functions.
The recurring theme of the survey is that many of the models presented in the
literature optimize for the same objectives, and that seemingly different
models are often equivalent modulo optimization strategies, hyper-parameters,
and such. We also discuss the different ways cross-lingual word embeddings are
evaluated, as well as future challenges and research horizons.
|
Sebastian Ruder, Ivan Vuli\'c, Anders S{\o}gaard
|
10.1613/jair.1.11640
|
1706.04902
| null | null |
Robust Submodular Maximization: A Non-Uniform Partitioning Approach
|
stat.ML cs.LG
|
We study the problem of maximizing a monotone submodular function subject to
a cardinality constraint $k$, with the added twist that a number of items
$\tau$ from the returned set may be removed. We focus on the worst-case setting
considered in (Orlin et al., 2016), in which a constant-factor approximation
guarantee was given for $\tau = o(\sqrt{k})$. In this paper, we solve a key
open problem raised therein, presenting a new Partitioned Robust (PRo)
submodular maximization algorithm that achieves the same guarantee for more
general $\tau = o(k)$. Our algorithm constructs partitions consisting of
buckets with exponentially increasing sizes, and applies standard submodular
optimization subroutines on the buckets in order to construct the robust
solution. We numerically demonstrate the performance of PRo in data
summarization and influence maximization, demonstrating gains over both the
greedy algorithm and the algorithm of (Orlin et al., 2016).
|
Ilija Bogunovic, Slobodan Mitrovi\'c, Jonathan Scarlett, Volkan Cevher
| null |
1706.04918
| null | null |
Multi-objective Bandits: Optimizing the Generalized Gini Index
|
cs.LG
|
We study the multi-armed bandit (MAB) problem where the agent receives a
vectorial feedback that encodes many possibly competing objectives to be
optimized. The goal of the agent is to find a policy, which can optimize these
objectives simultaneously in a fair way. This multi-objective online
optimization problem is formalized by using the Generalized Gini Index (GGI)
aggregation function. We propose an online gradient descent algorithm which
exploits the convexity of the GGI aggregation function, and controls the
exploration in a careful way achieving a distribution-free regret
$\tilde{\mathcal{O}}(T^{-1/2})$ with high probability. We test our algorithm on
synthetic data as well as on an electric battery control problem where the goal
is to trade off the use of the different cells of a battery in order to balance
their respective degradation rates.
|
Robert Busa-Fekete, Balazs Szorenyi, Paul Weng, Shie Mannor
| null |
1706.04933
| null | null |
Learning Deep ResNet Blocks Sequentially using Boosting Theory
|
cs.LG
|
Deep neural networks are known to be difficult to train due to the
instability of back-propagation. A deep \emph{residual network} (ResNet) with
identity loops remedies this by stabilizing gradient computations. We prove a
boosting theory for the ResNet architecture. We construct $T$ weak module
classifiers, each containing two of the $T$ layers, such that the combined strong
learner is a ResNet. Therefore, we introduce an alternative Deep ResNet
training algorithm, \emph{BoostResNet}, which is particularly suitable for
non-differentiable architectures. Our proposed algorithm merely requires a
sequential training of $T$ "shallow ResNets" which are inexpensive. We prove
that the training error decays exponentially with the depth $T$ if the
\emph{weak module classifiers} that we train perform slightly better than some
weak baseline. In other words, we propose a weak learning condition and prove a
boosting theory for ResNet under the weak learning condition. Our results apply
to general multi-class ResNets. A generalization error bound based on margin
theory is proved and suggests that ResNet is resistant to overfitting for
networks with $\ell_1$-norm-bounded weights.
|
Furong Huang, Jordan Ash, John Langford, Robert Schapire
| null |
1706.04964
| null | null |
Device Placement Optimization with Reinforcement Learning
|
cs.LG cs.AI
|
The past few years have witnessed a growth in size and computational
requirements for training and inference with neural networks. Currently, a
common approach to address these requirements is to use a heterogeneous
distributed environment with a mixture of hardware devices such as CPUs and
GPUs. Importantly, the decision of placing parts of the neural models on
devices is often made by human experts based on simple heuristics and
intuitions. In this paper, we propose a method which learns to optimize device
placement for TensorFlow computational graphs. Key to our method is the use of
a sequence-to-sequence model to predict which subsets of operations in a
TensorFlow graph should run on which of the available devices. The execution
time of the predicted placements is then used as the reward signal to optimize
the parameters of the sequence-to-sequence model. Our main result is that on
Inception-V3 for ImageNet classification, and on recurrent LSTM models for language
modeling and neural machine translation, our model finds non-trivial device
placements that outperform hand-crafted heuristics and traditional algorithmic
methods.
|
Azalia Mirhoseini and Hieu Pham and Quoc V. Le and Benoit Steiner and
Rasmus Larsen and Yuefeng Zhou and Naveen Kumar and Mohammad Norouzi and Samy
Bengio and Jeff Dean
| null |
1706.04972
| null | null |
FreezeOut: Accelerate Training by Progressively Freezing Layers
|
stat.ML cs.LG
|
The early layers of a deep neural net have the fewest parameters, but take up
the most computation. In this extended abstract, we propose to only train the
hidden layers for a set portion of the training run, freezing them out
one-by-one and excluding them from the backward pass. Through experiments on
CIFAR, we empirically demonstrate that FreezeOut yields savings of up to 20%
wall-clock time during training with 3% loss in accuracy for DenseNets, a 20%
speedup without loss of accuracy for ResNets, and no improvement for VGG
networks. Our code is publicly available at
https://github.com/ajbrock/FreezeOut
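
The authors' code is at the URL above; the following is an independent minimal sketch of the schedule idea (a linear freezing schedule on a toy model; the paper's cubic schedule and per-layer learning-rate annealing are omitted, and all hyperparameters here are assumptions):

```python
import torch

layers = torch.nn.ModuleList([torch.nn.Linear(32, 32) for _ in range(5)])
model = torch.nn.Sequential(*layers, torch.nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
total_steps = 1000
# earlier layers freeze earlier: layer i trains for (i + 1)/n of the run
freeze_at = [(i + 1) / len(layers) * total_steps for i in range(len(layers))]

for step in range(total_steps):
    for i, layer in enumerate(layers):
        if step >= freeze_at[i]:
            for p in layer.parameters():
                p.requires_grad_(False)           # exclude from the backward pass
    x = torch.randn(64, 32)
    loss = ((model(x) - 1.0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```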
|
Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston
| null |
1706.04983
| null | null |
Variational Approaches for Auto-Encoding Generative Adversarial Networks
|
stat.ML cs.LG
|
Auto-encoding generative adversarial networks (GANs) combine the standard GAN
algorithm, which discriminates between real and model-generated data, with a
reconstruction loss given by an auto-encoder. Such models aim to prevent mode
collapse in the learned generative model by ensuring that it is grounded in all
the available training data. In this paper, we develop a principle upon which
auto-encoders can be combined with generative adversarial networks by
exploiting the hierarchical structure of the generative model. The underlying
principle shows that variational inference can be used as a basic tool for
learning, but with the intractable likelihood replaced by a synthetic
likelihood, and the unknown posterior distribution replaced by an implicit
distribution; both synthetic likelihoods and implicit posterior distributions
can be learned using discriminators. This allows us to develop a natural fusion
of variational auto-encoders and generative adversarial networks, combining the
best of both these methods. We describe a unified objective for optimization,
discuss the constraints needed to guide learning, connect to the wide range of
existing work, and use a battery of tests to systematically and quantitatively
assess the performance of our method.
|
Mihaela Rosca, Balaji Lakshminarayanan, David Warde-Farley, Shakir
Mohamed
| null |
1706.04987
| null | null |
Consensus-Based Transfer Linear Support Vector Machines for
Decentralized Multi-Task Multi-Agent Learning
|
cs.LG cs.DC
|
Transfer learning has been developed to improve the performances of different
but related tasks in machine learning. However, such processes become less
efficient with the increase of the size of training data and the number of
tasks. Moreover, privacy can be violated as some tasks may contain sensitive
and private data, which are communicated between nodes and tasks. We propose a
consensus-based distributed transfer learning framework, where several tasks
aim to find the best linear support vector machine (SVM) classifiers in a
distributed network. With alternating direction method of multipliers, tasks
can achieve better classification accuracies more efficiently and privately, as
each node and each task train with their own data, and only decision variables
are transferred between different tasks and nodes. Numerical experiments on
MNIST datasets show that the knowledge transferred from the source tasks can be
used to decrease the risks of the target tasks that lack training data or have
unbalanced training labels. We show that the risks of the target tasks in the
nodes without the data of the source tasks can also be reduced using the
information transferred from the nodes that contain the data of the source
tasks. We also show that the target tasks can enter and leave in real-time
without rerunning the whole algorithm.
|
Rui Zhang, Quanyan Zhu
| null |
1706.05039
| null | null |
Human-like Clustering with Deep Convolutional Neural Networks
|
cs.LG cs.CV
|
Classification and clustering have been studied separately in machine
learning and computer vision. Inspired by the recent success of deep learning
models in solving various vision problems (e.g., object recognition, semantic
segmentation) and the fact that humans serve as the gold standard in assessing
clustering algorithms, here, we advocate for a unified treatment of the two
problems and suggest that hierarchical frameworks that progressively build
complex patterns on top of the simpler ones (e.g., convolutional neural
networks) offer a promising solution. We do not dwell much on the learning
mechanisms in these frameworks as they are still a matter of debate, with
respect to biological constraints. Instead, we emphasize the
compositionality of real-world structures and objects. In particular, we
show that CNNs, trained end-to-end using backpropagation with noisy labels,
are able to cluster data points belonging to several overlapping shapes, and do
so much better than state-of-the-art algorithms. The main takeaway lesson
from our study is that mechanisms of human vision, particularly the hierarchical
organization of the visual ventral stream, should be taken into account in
clustering algorithms (e.g., for learning representations in an unsupervised
manner or with minimum supervision) to reach human level clustering
performance. This, by no means, suggests that other methods do not hold merits.
For example, methods relying on pairwise affinities (e.g., spectral clustering)
have been very successful in many scenarios but still fail in some cases (e.g.,
overlapping clusters).
|
Ali Borji and Aysegul Dundar
| null |
1706.05048
| null | null |
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement
Learning
|
cs.AI cs.LG
|
As a step towards developing zero-shot task generalization capabilities in
reinforcement learning (RL), we introduce a new RL problem where the agent
should learn to execute sequences of instructions after learning useful skills
that solve subtasks. In this problem, we consider two types of generalizations:
to previously unseen instructions and to longer sequences of instructions. For
generalization over unseen instructions, we propose a new objective which
encourages learning correspondences between similar subtasks by making
analogies. For generalization over sequential instructions, we present a
hierarchical architecture where a meta controller learns to use the acquired
skills for executing the instructions. To deal with delayed reward, we propose
a new neural architecture in the meta controller that learns when to update the
subtask, which makes learning more efficient. Experimental results on a
stochastic 3D domain show that the proposed ideas are crucial for
generalization to longer instructions as well as unseen instructions.
|
Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli
| null |
1706.05064
| null | null |
Generalization for Adaptively-chosen Estimators via Stable Median
|
cs.LG cs.DS stat.ML
|
Datasets are often reused to perform multiple statistical analyses in an
adaptive way, in which each analysis may depend on the outcomes of previous
analyses on the same dataset. Standard statistical guarantees do not account
for these dependencies and little is known about how to provably avoid
overfitting and false discovery in the adaptive setting. We consider a natural
formalization of this problem in which the goal is to design an algorithm that,
given a limited number of i.i.d.~samples from an unknown distribution, can
answer adaptively-chosen queries about that distribution.
We present an algorithm that estimates the expectations of $k$ arbitrary
adaptively-chosen real-valued estimators using a number of samples that scales
as $\sqrt{k}$. The answers given by our algorithm are essentially as accurate
as if fresh samples were used to evaluate each estimator. In contrast, prior
work yields error guarantees that scale with the worst-case sensitivity of each
estimator. We also give a version of our algorithm that can be used to verify
answers to such queries where the sample complexity depends logarithmically on
the number of queries $k$ (as in the reusable holdout technique).
Our algorithm is based on a simple approximate median algorithm that
satisfies the strong stability guarantees of differential privacy. Our
techniques provide a new approach for analyzing the generalization guarantees
of differentially private algorithms.
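
The core primitive can be sketched compactly. Below is a hedged illustration of a differentially private approximate median of per-chunk estimates, selected with the exponential mechanism over order statistics; the full adaptive-query machinery of the paper is not reproduced, and chunk counts and epsilon are assumptions:

```python
import numpy as np

def private_median(estimates, epsilon, rng):
    v = np.sort(np.asarray(estimates))
    k = len(v)
    # utility of the i-th order statistic: closeness of its rank to the median
    utility = -np.abs(np.arange(k) - k / 2)
    # exponential mechanism (the rank-based utility has sensitivity 1)
    logits = epsilon * utility / 2
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return v[rng.choice(k, p=probs)]

rng = np.random.default_rng(0)
data = rng.normal(loc=0.7, size=5000)
chunks = np.array_split(data, 50)                 # run the estimator on disjoint chunks
estimates = [c.mean() for c in chunks]
print(private_median(estimates, epsilon=1.0, rng=rng))
```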
|
Vitaly Feldman and Thomas Steinke
| null |
1706.05069
| null | null |
Learning Disjunctions of Predicates
|
cs.LG
|
Let $F$ be a set of boolean functions. We present an algorithm for learning
$F_\vee := \{\vee_{f\in S} f \mid S \subseteq F\}$ from membership queries. Our
algorithm asks at most $|F| \cdot OPT(F_\vee)$ membership queries where
$OPT(F_\vee)$ is the minimum worst case number of membership queries for
learning $F_\vee$. When $F$ is a set of halfspaces over a constant dimension
space or a set of variable inequalities, our algorithm runs in polynomial time.
The problem we address has practical importance in the field of program
synthesis, where the goal is to synthesize a program that meets some
requirements. Program synthesis has become popular especially in settings
aiming to help end users. In such settings, the requirements are not provided
upfront and the synthesizer can only learn them by posing membership queries to
the end user. Our work enables such synthesizers to learn the exact
requirements while bounding the number of membership queries.
|
Nader H. Bshouty, Dana Drachsler-Cohen, Martin Vechev, Eran Yahav
| null |
1706.0507
| null | null |
Joint Extraction of Entities and Relations Based on a Novel Tagging
Scheme
|
cs.CL cs.AI cs.LG
|
Joint extraction of entities and relations is an important task in
information extraction. To tackle this problem, we first propose a novel
tagging scheme that can convert the joint extraction task to a tagging problem.
Then, based on our tagging scheme, we study different end-to-end models to
extract entities and their relations directly, without identifying entities and
relations separately. We conduct experiments on a public dataset produced by
distant supervision method and the experimental results show that the tagging
based methods are better than most of the existing pipelined and joint learning
methods. Moreover, the end-to-end model proposed in this paper achieves the
best results on the public dataset.
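
A hedged sketch of the tagging idea: each token tag combines a position part (B/I/E/S), a relation type, and the entity's role in the relation (1 or 2), with "O" for tokens outside any triple. The exact tag spellings and the helper below are illustrative assumptions, not the paper's code:

```python
def tag_sentence(tokens, triples):
    """triples: list of (span1, span2, relation), spans as (start, end)."""
    tags = ["O"] * len(tokens)
    for s1, s2, rel in triples:
        for (start, end), role in ((s1, 1), (s2, 2)):
            if end - start == 1:
                tags[start] = f"S-{rel}-{role}"   # single-token entity
            else:
                tags[start] = f"B-{rel}-{role}"
                for i in range(start + 1, end - 1):
                    tags[i] = f"I-{rel}-{role}"
                tags[end - 1] = f"E-{rel}-{role}"
    return tags

tokens = "Trump is president of the United States".split()
print(tag_sentence(tokens, [((0, 1), (5, 7), "Country-President")]))
```

With this conversion, a standard sequence-labeling model can emit triples directly, which is the crux of the end-to-end formulation.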
|
Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, Bo Xu
| null |
1706.05075
| null | null |
Topic supervised non-negative matrix factorization
|
cs.CL cs.IR cs.LG stat.ML
|
Topic models have been extensively used to organize and interpret the
contents of large, unstructured corpora of text documents. Although topic
models often perform well on traditional training vs. test set evaluations, it
is often the case that the results of a topic model do not align with human
interpretation. This interpretability fallacy is largely due to the
unsupervised nature of topic models, which prohibits any user guidance on the
results of a model. In this paper, we introduce a semi-supervised method called
topic supervised non-negative matrix factorization (TS-NMF) that enables the
user to provide labeled example documents to promote the discovery of more
meaningful semantic structure of a corpus. In this way, the results of TS-NMF
better match the intuition and desired labeling of the user. The core of TS-NMF
relies on solving a non-convex optimization problem for which we derive an
iterative algorithm that is shown to be monotonic and convergent to a local
optimum. We demonstrate the practical utility of TS-NMF on the Reuters and
PubMed corpora, and find that TS-NMF is especially useful for conceptual or
broad topics, where topic key terms are not well understood. Although
identifying an optimal latent structure for the data is not a primary objective
of the proposed approach, we find that TS-NMF achieves higher weighted Jaccard
similarity scores than the contemporary methods, (unsupervised) NMF and latent
Dirichlet allocation, at supervision rates as low as 10% to 20%.
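
A hedged sketch of the optimization flavor: standard multiplicative NMF updates for X ≈ WH, with the document-topic matrix W masked so that labeled documents can only load on their permitted topics (the paper optimizes ||X - (W ∘ L)H||²; the toy corpus, rank, and iteration count below are assumptions):

```python
import numpy as np

def ts_nmf(X, L, k, iters=300, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) * L                    # L: n x k mask of allowed topics
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
        W *= L                                    # enforce the supervision mask
    return W, H

# toy corpus: docs 0-1 labeled topic 0, docs 2-3 labeled topic 1, rest free
X = np.abs(np.random.default_rng(1).random((6, 12)))
L = np.ones((6, 2)); L[0:2, 1] = 0; L[2:4, 0] = 0
W, H = ts_nmf(X, L, k=2)
print(np.round(W, 2))
```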
|
Kelsey MacMillan and James D. Wilson
| null |
1706.05084
| null | null |
An Overview of Multi-Task Learning in Deep Neural Networks
|
cs.LG cs.AI stat.ML
|
Multi-task learning (MTL) has led to successes in many applications of
machine learning, from natural language processing and speech recognition to
computer vision and drug discovery. This article aims to give a general
overview of MTL, particularly in deep neural networks. It introduces the two
most common methods for MTL in Deep Learning, gives an overview of the
literature, and discusses recent advances. In particular, it seeks to help ML
practitioners apply MTL by shedding light on how MTL works and providing
guidelines for choosing appropriate auxiliary tasks.
|
Sebastian Ruder
| null |
1706.05098
| null | null |
Deriving Compact Laws Based on Algebraic Formulation of a Data Set
|
cs.LG
|
In various subjects, there exist compact and consistent relationships between
input and output parameters. Discovering the relationships, or namely compact
laws, in a data set is of great interest in many fields, such as physics,
chemistry, and finance. While data discovery has made great progress in
practice thanks to the success of machine learning in recent years, the
development of analytical approaches in finding the theory behind the data is
relatively slow. In this paper, we develop an innovative approach in
discovering compact laws from a data set. By proposing a novel algebraic
equation formulation, we convert the problem of deriving meaning from data into
formulating a linear algebra model and searching for relationships that fit the
data. A rigorous proof is presented to validate the approach. The algebraic
formulation allows the search of equation candidates in an explicit
mathematical manner. Searching algorithms are also proposed for finding the
governing equations with improved efficiency. For a certain type of compact
theory, our approach assures convergence and the discovery is computationally
efficient and mathematically precise.
|
Wenqing Xu, Mark Stalzer
| null |
1706.05123
| null | null |
One Model To Learn Them All
|
cs.LG stat.ML
|
Deep learning yields great results across many fields, from speech
recognition, image classification, to translation. But for each problem,
getting a deep model to work well involves research into the architecture and a
long period of tuning. We present a single model that yields good results on a
number of problems spanning multiple domains. In particular, this single model
is trained concurrently on ImageNet, multiple translation tasks, image
captioning (COCO dataset), a speech recognition corpus, and an English parsing
task. Our model architecture incorporates building blocks from multiple
domains. It contains convolutional layers, an attention mechanism, and
sparsely-gated layers. Each of these computational blocks is crucial for a
subset of the tasks we train on. Interestingly, even if a block is not crucial
for a task, we observe that adding it never hurts performance and in most cases
improves it on all tasks. We also show that tasks with less data benefit
substantially from joint training with other tasks, while performance on large tasks
degrades only slightly if at all.
|
Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki
Parmar, Llion Jones, Jakob Uszkoreit
| null |
1706.05137
| null | null |
Hidden Talents of the Variational Autoencoder
|
cs.LG
|
Variational autoencoders (VAE) represent a popular, flexible form of deep
generative model that can be stochastically fit to samples from a given random
process using an information-theoretic variational bound on the true underlying
distribution. Once so-obtained, the model can be putatively used to generate
new samples from this distribution, or to provide a low-dimensional latent
representation of existing samples. While quite effective in numerous
application domains, certain important mechanisms which govern the behavior of
the VAE are obfuscated by the intractable integrals and resulting stochastic
approximations involved. Moreover, as a highly non-convex model, it remains
unclear exactly how minima of the underlying energy relate to original design
purposes. We attempt to better quantify these issues by analyzing a series of
tractable special cases of increasing complexity. In doing so, we unveil
interesting connections with more traditional dimensionality reduction models,
as well as an intrinsic yet underappreciated propensity for robustly dismissing
sparse outliers when estimating latent manifolds. With respect to the latter,
we demonstrate that the VAE can be viewed as the natural evolution of recent
robust PCA models, capable of learning nonlinear manifolds of unknown dimension
obscured by gross corruptions.
|
Bin Dai and Yu Wang and John Aston and Gang Hua and David Wipf
| null |
1706.05148
| null | null |
Structured Best Arm Identification with Fixed Confidence
|
cs.LG cs.AI
|
We study the problem of identifying the best action among a set of possible
options when the value of each action is given by a mapping from a number of
noisy micro-observables in the so-called fixed confidence setting. Our main
motivation is the application to the minimax game search, which has been a
major topic of interest in artificial intelligence. In this paper we introduce
an abstract setting to clearly describe the essential properties of the
problem. While previous work only considered a two-move game tree search
problem, our abstract setting can be applied to the general minimax games where
the depth can be non-uniform and arbitrary, and transpositions are allowed. We
introduce a new algorithm (LUCB-micro) for the abstract setting, and give its
lower and upper sample complexity results. Our bounds recover some previous
results, which were only available in more limited settings, while they also
shed further light on how the structure of minimax problems influence sample
complexity.
|
Ruitong Huang, Mohammad M. Ajallooeian, Csaba Szepesv\'ari, Martin
M\"uller
| null |
1706.05198
| null | null |
Learning with Feature Evolvable Streams
|
cs.LG stat.ML
|
Learning with streaming data has attracted much attention during the past few
years. Though most studies consider data stream with fixed features, in real
practice the features may be evolvable. For example, features of data gathered
by limited-lifespan sensors will change when these sensors are substituted by
new ones. In this paper, we propose a novel learning paradigm: \emph{Feature
Evolvable Streaming Learning} where old features would vanish and new features
would occur. Rather than relying on only the current features, we attempt to
recover the vanished features and exploit them to improve performance.
Specifically, we learn two models from the recovered features and the current
features, respectively. To benefit from the recovered features, we develop two
ensemble methods. In the first method, we combine the predictions from two
models and theoretically show that with the assistance of old features, the
performance on new features can be improved. In the second approach, we
dynamically select the best single prediction and establish a better
performance guarantee when the best model switches. Experiments on both
synthetic and real data validate the effectiveness of our proposal.
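
A hedged sketch of the first ensemble variant follows: during a brief overlap both feature sets are observed and a linear map from new to old features is fit; afterwards, predictions from the recovered-features model and the new-features model are combined with exponentially decaying weights. The data-generating process and all hyperparameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))                       # hidden relation: new -> old
w_star = rng.normal(size=4)                       # true concept on old features

def sgd(w, x, y, lr=0.05):
    return w - lr * (w @ x - y) * x               # one online least-squares step

w_old, w_new = np.zeros(4), np.zeros(6)
weights, eta = np.array([0.5, 0.5]), 1.0
pairs = []
for t in range(3000):
    x_new = rng.normal(size=6)
    x_old = x_new @ A + 0.05 * rng.normal(size=4)
    y = w_star @ x_old
    if t < 200:                                   # overlap: both sets observed
        pairs.append((x_new, x_old))
        w_old = sgd(w_old, x_old, y)
        w_new = sgd(w_new, x_new, y)
    else:                                         # old features have vanished
        if t == 200:                              # fit recovery map by least squares
            Xn, Xo = map(np.array, zip(*pairs))
            M, *_ = np.linalg.lstsq(Xn, Xo, rcond=None)
        x_rec = x_new @ M                         # recovered old features
        preds = np.array([w_old @ x_rec, w_new @ x_new])
        y_hat = weights @ preds
        weights *= np.exp(-eta * (preds - y) ** 2)  # weighted-majority update
        weights /= weights.sum()
        w_old = sgd(w_old, x_rec, y)
        w_new = sgd(w_new, x_new, y)
print("ensemble weights:", np.round(weights, 3))
```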
|
Bo-Jian Hou and Lijun Zhang and Zhi-Hua Zhou
| null |
1706.05259
| null | null |
Unsupervised Domain Adaptation with Random Walks on Target Labelings
|
stat.ML cs.LG
|
Unsupervised Domain Adaptation (DA) is used to automatize the task of
labeling data: an unlabeled dataset (target) is annotated using a labeled
dataset (source) from a related domain. We cast domain adaptation as the
problem of finding stable labels for target examples. A new definition of label
stability is proposed, motivated by a generalization error bound for large
margin linear classifiers: a target labeling is stable when, with high
probability, a classifier trained on a random subsample of the target with that
labeling yields the same labeling. We find stable labelings using a random walk
on a directed graph with transition probabilities based on labeling stability.
The majority vote of those labelings visited by the walk yields a stable label
for each target example. The resulting domain adaptation algorithm is
strikingly easy to implement and apply: It does not rely on data
transformations, which are in general computationally prohibitive in the presence
of many input features, and does not need to access the source data, which is
advantageous when data sharing is restricted. By acting on the original feature
space, our method is able to take full advantage of deep features from external
pre-trained neural networks, as demonstrated by the results of our experiments.
|
Twan van Laarhoven and Elena Marchiori
| null |
1706.05335
| null | null |
L2 Regularization versus Batch and Weight Normalization
|
cs.LG stat.ML
|
Batch Normalization is a commonly used trick to improve the training of deep
neural networks. These neural networks use L2 regularization, also called
weight decay, ostensibly to prevent overfitting. However, we show that L2
regularization has no regularizing effect when combined with normalization.
Instead, regularization has an influence on the scale of weights, and thereby
on the effective learning rate. We investigate this dependence, both in theory,
and experimentally. We show that popular optimization methods such as ADAM only
partially eliminate the influence of normalization on the learning rate. This
leads to a discussion on other ways to mitigate this issue.
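
The core observation admits a one-screen numerical check (a minimal sketch with assumed toy shapes, not the paper's experiments): with batch normalization, the layer output is invariant to rescaling the weights, so weight decay cannot regularize the function and acts only through the weight scale, and hence the effective learning rate.

```python
import numpy as np

def bn(h, eps=1e-5):
    # per-feature standardization, as batch normalization does at train time
    return (h - h.mean(0)) / (h.std(0) + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
W = rng.normal(size=(10, 4))
out1 = bn(X @ W)
out2 = bn(X @ (7.3 * W))                          # weights rescaled, e.g. by decay
print(np.abs(out1 - out2).max())                  # ~0: the function is unchanged
```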
|
Twan van Laarhoven
| null |
1706.0535
| null | null |
Local Feature Descriptor Learning with Adaptive Siamese Network
|
cs.LG stat.ML
|
Although recent progress in deep neural networks has led to the development
of learnable local feature descriptors, there is no explicit answer for
estimating the necessary size of a neural network. Specifically, the
local feature is represented in a low dimensional space, so the neural network
should have more compact structure. The small networks required for local
feature descriptor learning may be sensitive to initial conditions and learning
parameters and more likely to become trapped in local minima. In order to
address the above problem, we introduce an adaptive pruning Siamese
Architecture based on neuron activation to learn local feature descriptors,
making the network more computationally efficient with an improved recognition
rate over more complex networks. Our experiments demonstrate that our learned
local feature descriptors outperform state-of-the-art methods in patch
matching.
|
Chong Huang, Qiong Liu, Yan-Ying Chen, Kwang-Ting (Tim) Cheng
| null |
1706.05358
| null | null |
Expected Policy Gradients
|
stat.ML cs.LG
|
We propose expected policy gradients (EPG), which unify stochastic policy
gradients (SPG) and deterministic policy gradients (DPG) for reinforcement
learning. Inspired by expected sarsa, EPG integrates across the action when
estimating the gradient, instead of relying only on the action in the sampled
trajectory. We establish a new general policy gradient theorem, of which the
stochastic and deterministic policy gradient theorems are special cases. We
also prove that EPG reduces the variance of the gradient estimates without
requiring deterministic policies and, for the Gaussian case, with no
computational overhead. Finally, we show that it is optimal in a certain sense
to explore with a Gaussian policy such that the covariance is proportional to
the exponential of the scaled Hessian of the critic with respect to the
actions. We present empirical results confirming that this new form of
exploration substantially outperforms DPG with the Ornstein-Uhlenbeck heuristic
in four challenging MuJoCo domains.
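
A hedged sketch of the central idea for a one-dimensional Gaussian policy: the policy-gradient integral over actions is evaluated (here by Gauss-Hermite quadrature, standing in for the paper's analytic integration) instead of using a single sampled action, removing the sampling variance of stochastic policy gradients. The toy critic and policy parameters are assumptions:

```python
import numpy as np

Q = lambda a: -(a - 1.5) ** 2                     # toy critic, peaked at a = 1.5
mu, sigma = 0.0, 0.5
nodes, wts = np.polynomial.hermite.hermgauss(32)

def epg_grad(mu):
    a = mu + np.sqrt(2) * sigma * nodes           # quadrature points under pi
    # E[ grad_mu log pi(a) * Q(a) ] with grad_mu log pi = (a - mu) / sigma^2
    return (wts * (a - mu) / sigma**2 * Q(a)).sum() / np.sqrt(np.pi)

def spg_grad(mu, rng):                            # single-sample stochastic PG
    a = rng.normal(mu, sigma)
    return (a - mu) / sigma**2 * Q(a)

rng = np.random.default_rng(0)
print("EPG:", epg_grad(mu))                       # deterministic, ~ 2*(1.5 - mu)
print("SPG samples:", [round(spg_grad(mu, rng), 2) for _ in range(4)])
```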
|
Kamil Ciosek and Shimon Whiteson
| null |
1706.05374
| null | null |
A framework for Multi-A(rmed)/B(andit) testing with online FDR control
|
stat.ML cs.LG stat.ME
|
We propose an alternative framework to existing setups for controlling false
alarms when multiple A/B tests are run over time. This setup arises in many
practical applications, e.g. when pharmaceutical companies test new treatment
options against control pills for different diseases, or when internet
companies test their default webpages versus various alternatives over time.
Our framework proposes to replace a sequence of A/B tests by a sequence of
best-arm MAB instances, which can be continuously monitored by the data
scientist. When interleaving the MAB tests with an online false discovery
rate (FDR) algorithm, we can obtain the best of both worlds: low sample
complexity and any-time online FDR control. Our main contributions are: (i) to
propose reasonable definitions of a null hypothesis for MAB instances; (ii) to
demonstrate how one can derive an always-valid sequential p-value that allows
continuous monitoring of each MAB test; and (iii) to show that using rejection
thresholds of online-FDR algorithms as the confidence levels for the MAB
algorithms results in sample optimality, high power, and low FDR at any
point in time. We run extensive simulations to verify our claims, and also
report results on real data collected from the New Yorker Cartoon Caption
contest.
|
Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright
| null |
1706.05378
| null | null |
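A schematic of the interleaving loop, not the paper's procedure: each
experiment is a best-arm MAB that returns an always-valid p-value and is run
at the confidence level handed down by a simplified LORD-style online-FDR
rule. `run_best_arm_mab` is a hypothetical placeholder, and the wealth
dynamics are deliberately crude.

```python
import numpy as np

def run_best_arm_mab(alpha, rng):
    """Hypothetical stand-in: run a best-arm bandit at confidence level
    alpha and return an always-valid p-value for 'an arm beats control'.
    Here every null is true, so the p-value is simply uniform."""
    return rng.uniform()

rng = np.random.default_rng(1)
fdr_level, wealth, rejections = 0.05, 0.05, []
gamma = lambda t: 6 / (np.pi**2 * t**2)   # a summable spending sequence

for t in range(1, 51):
    alpha_t = wealth * gamma(t)           # budget spent on experiment t
    p = run_best_arm_mab(alpha_t, rng)
    if p <= alpha_t:                      # MAB's rejection threshold = alpha_t
        rejections.append(t)
        wealth += fdr_level               # simplistic wealth earn-back
    wealth -= alpha_t

# With only true nulls, few or no rejections is the desired behavior.
print("rejected experiments:", rejections)
```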
A Closer Look at Memorization in Deep Networks
|
stat.ML cs.LG
|
We examine the role of memorization in deep learning, drawing connections to
capacity, generalization, and adversarial robustness. While deep networks are
capable of memorizing noise data, our results suggest that they tend to
prioritize learning simple patterns first. In our experiments, we expose
qualitative differences in gradient-based optimization of deep neural networks
(DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned
explicit regularization (e.g., dropout) we can degrade DNN training performance
on noise datasets without compromising generalization on real data. Our
analysis suggests that dataset-independent notions of effective capacity are
unlikely to explain the generalization performance of deep networks trained
with gradient-based methods, because the training data itself plays an
important role in determining the degree of memorization.
|
Devansh Arpit, Stanis{\l}aw Jastrz\k{e}bski, Nicolas Ballas, David
Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer,
Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
| null |
1706.05394
| null | null |
Control Variates for Stochastic Gradient MCMC
|
stat.CO cs.LG stat.ML
|
It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly
with dataset size. A popular class of methods for solving this issue is
stochastic gradient MCMC. These methods use a noisy estimate of the gradient of
the log posterior, which reduces the per iteration computational cost of the
algorithm. Despite this, there are a number of results suggesting that
stochastic gradient Langevin dynamics (SGLD), probably the most popular of
these methods, still has computational cost proportional to the dataset size.
We suggest an alternative log posterior gradient estimate for stochastic
gradient MCMC, which uses control variates to reduce the variance. We analyse
SGLD using this gradient estimate, and show that, under log-concavity
assumptions on the target distribution, the computational cost required for a
given level of accuracy is independent of the dataset size. Next we show that
a different control variate technique, known as zero variance control
variates, can be applied to SGMCMC algorithms for free. This post-processing
step
improves the inference of the algorithm by reducing the variance of the MCMC
output. Zero variance control variates rely on the gradient of the log
posterior; we explore how the variance reduction is affected by replacing
this with the noisy gradient estimate calculated by SGMCMC. A minimal sketch
of the control-variate gradient estimate follows this entry.
|
Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
| null |
1706.05439
| null | null |
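A minimal sketch of the control-variate gradient estimate inside SGLD, on a
toy Gaussian model with a flat prior: the full-data gradient is computed once
at a fixed centering point, and each iteration adds a minibatch estimate of
the difference of gradients. In this linear toy the correction is exactly
zero-variance; the model, step size, and centering point are illustrative
assumptions.

```python
import numpy as np

# Toy model: x_i ~ N(theta, 1), flat prior, so grad log p(x_i|theta) = x_i - theta.
rng = np.random.default_rng(0)
N, n, step = 100_000, 100, 1e-5
x = rng.normal(1.0, 1.0, size=N)

theta_hat = x.mean()                      # centering point (posterior mode here)
full_grad_hat = np.sum(x - theta_hat)     # full-data gradient at theta_hat, one pass

theta = 0.0
for _ in range(1000):
    idx = rng.integers(0, N, size=n)
    # Control-variate estimate: full gradient at theta_hat plus a minibatch
    # estimate of the *difference* of gradients (zero-variance in this toy).
    g = full_grad_hat + (N / n) * np.sum((x[idx] - theta) - (x[idx] - theta_hat))
    theta += 0.5 * step * g + np.sqrt(step) * rng.normal()   # SGLD step

print(f"SGLD-CV estimate {theta:.4f} vs sample mean {theta_hat:.4f}")
```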
Bayesian Conditional Generative Adversarial Networks
|
cs.LG cs.AI stat.ML
|
Traditional GANs use a deterministic generator function (typically a neural
network) to transform a random noise input $z$ to a sample $\mathbf{x}$ that
the discriminator seeks to distinguish. We propose a new GAN called Bayesian
Conditional Generative Adversarial Networks (BC-GANs) that use a random
generator function to transform a deterministic input $y'$ to a sample
$\mathbf{x}$. Our BC-GANs extend traditional GANs to a Bayesian framework, and
naturally handle unsupervised learning, supervised learning, and
semi-supervised learning problems. Experiments show that the proposed BC-GANs
outperform state-of-the-art methods.
|
M. Ehsan Abbasnejad, Qinfeng Shi, Iman Abbasnejad, Anton van den
Hengel, Anthony Dick
| null |
1706.05477
| null | null |
Variants of RMSProp and Adagrad with Logarithmic Regret Bounds
|
cs.LG cs.AI cs.CV cs.NE stat.ML
|
Adaptive gradient methods have recently become very popular, in particular
because they have been shown to be useful in the training of deep neural
networks. In this paper we analyze RMSProp, originally proposed for the
training of deep neural networks, in the context of online convex
optimization, and show $\sqrt{T}$-type regret bounds. Moreover, we propose
two variants, SC-Adagrad and SC-RMSProp, for which we show logarithmic regret
bounds for strongly convex functions. Finally, we demonstrate experimentally
that these new variants outperform other adaptive gradient techniques and
stochastic gradient descent both in the optimization of strongly convex
functions and in the training of deep neural networks. A minimal sketch of an
SC-Adagrad-style update follows this entry.
|
Mahesh Chandra Mukkamala, Matthias Hein
| null |
1706.05507
| null | null |
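A minimal sketch of an SC-Adagrad-style update on a strongly convex toy
problem, reading the variant as Adagrad without the square root in the
denominator, which is the modification associated with logarithmic regret for
strongly convex functions; the constant damping `delta` and the step size are
assumptions, since the paper analyzes specific choices.

```python
import numpy as np

def sc_adagrad(grad, w0, alpha=5.0, delta=1e-2, steps=200):
    """SC-Adagrad-style iteration: like Adagrad, but divide by the
    accumulated squared gradients *without* a square root (plus a small
    damping delta). Constant delta is a simplification."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad(w)
        v += g * g
        w = w - alpha * g / (v + delta)
    return w

# Strongly convex test problem: f(w) = 0.5 * ||w - 3||^2, so grad(w) = w - 3.
w_final = sc_adagrad(lambda w: w - 3.0, w0=[0.0])
print("final iterate:", w_final)          # approaches [3.]
```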
Rgtsvm: Support Vector Machines on a GPU in R
|
stat.ML cs.LG
|
Rgtsvm provides a fast and flexible support vector machine (SVM)
implementation for the R language. The distinguishing feature of Rgtsvm is that
support vector classification and support vector regression tasks are
implemented on a graphics processing unit (GPU), allowing the library to
scale to millions of examples with >100-fold improvement in performance over
existing implementations. Nevertheless, Rgtsvm retains feature parity and has
an interface that is compatible with the popular e1071 SVM package in R.
Altogether, Rgtsvm enables large SVM models to be created by both experienced
and novice practitioners.
|
Zhong Wang, Tinyi Chu, Lauren A Choate, Charles G Danko
| null |
1706.05544
| null | null |
Coresets for Vector Summarization with Applications to Network Graphs
|
cs.LG
|
We provide a deterministic data summarization algorithm that approximates the
mean $\bar{p}=\frac{1}{n}\sum_{p\in P} p$ of a set $P$ of $n$ vectors in
$\mathbb{R}^d$ by a weighted mean $\tilde{p}$ of a \emph{subset} of
$O(1/\epsilon)$ vectors, i.e., a subset whose size is independent of both $n$
and $d$. We prove that the squared Euclidean distance between $\bar{p}$ and
$\tilde{p}$ is at most $\epsilon$ multiplied by the variance of $P$. We use
this algorithm to maintain an approximated sum of vectors from an unbounded
stream, using memory that is independent of $d$ and logarithmic in the number
$n$ of vectors seen so far. Our main application is to extract and compactly
represent friend groups and activity summaries of users from underlying data
exchanges: in the case of mobile networks, we can use GPS traces to identify
meetings; in the case of social networks, we can use information exchanges to
identify friend groups. Our algorithm provably identifies the {\it Heavy
Hitter} entries in a proximity (adjacency) matrix. We evaluate the algorithm
on several large data sets. A Frank-Wolfe-style sketch of the weighted-mean
construction follows this entry.
|
Dan Feldman, Sedat Ozer, Daniela Rus
| null |
1706.05554
| null | null |
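One standard way to realize such a weighted-mean coreset is Frank-Wolfe on
$\min_w \|\sum_i w_i p_i - \bar{p}\|^2$ over the simplex, which adds at most
one input point per iteration; the sketch below is this generic construction
under that assumption, not necessarily the paper's exact algorithm.

```python
import numpy as np

def mean_coreset(P, iters):
    """Approximate the mean of the rows of P by a sparse weighted mean
    supported on at most iters + 1 input points (Frank-Wolfe sketch)."""
    n, d = P.shape
    p_bar = P.mean(axis=0)
    w = np.zeros(n)
    w[0] = 1.0                               # start at an arbitrary vertex
    for _ in range(iters):
        r = w @ P - p_bar                    # current residual
        j = np.argmin(P @ r)                 # vertex of steepest descent
        d_dir = P[j] - w @ P                 # direction toward e_j
        denom = d_dir @ d_dir
        gamma = np.clip(-(r @ d_dir) / denom, 0.0, 1.0) if denom > 0 else 0.0
        w *= 1 - gamma                       # convex-combination update
        w[j] += gamma
    return w

rng = np.random.default_rng(0)
P = rng.normal(size=(5000, 20))
w = mean_coreset(P, iters=40)
err = np.sum((w @ P - P.mean(axis=0)) ** 2)
var = np.mean(np.sum((P - P.mean(axis=0)) ** 2, axis=1))
print(f"support {np.count_nonzero(w)}, squared error / variance = {err / var:.4f}")
```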
Fatiguing STDP: Learning from Spike-Timing Codes in the Presence of Rate
Codes
|
cs.NE cs.LG stat.ML
|
Spiking neural networks (SNNs) could play a key role in unsupervised machine
learning applications, by virtue of strengths related to learning from the fine
temporal structure of event-based signals. However, some spike-timing-related
strengths of SNNs are hindered by the sensitivity of spike-timing-dependent
plasticity (STDP) rules to input spike rates, as fine temporal correlations may
be obstructed by coarser correlations between firing rates. In this article, we
propose a spike-timing-dependent learning rule that allows a neuron to learn
from the temporally-coded information despite the presence of rate codes. Our
long-term plasticity rule makes use of short-term synaptic fatigue dynamics. We
show analytically that, in contrast to conventional STDP rules, our fatiguing
STDP (FSTDP) helps learn the temporal code, and we derive the necessary
conditions to optimize the learning process. We showcase the effectiveness of
FSTDP in learning spike-timing correlations among processes of different rates
in synthetic data. Finally, we use FSTDP to detect correlations in real-world
weather data from the United States in an experimental realization of the
algorithm that uses a neuromorphic hardware platform comprising phase-change
memristive devices. Taken together, our analyses and demonstrations suggest
that FSTDP paves the way for exploiting the spike-based strengths of SNNs in
real-world applications. A minimal simulation sketch of the fatigue mechanism
follows this entry.
|
Timoleon Moraitis, Abu Sebastian, Irem Boybat, Manuel Le Gallo, Tomas
Tuma, Evangelos Eleftheriou
|
10.1109/IJCNN.2017.7966072
|
1706.05563
| null | null |
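A minimal simulation sketch of the fatigue mechanism (potentiation only;
depression and the paper's exact dynamics are omitted): each presynaptic
spike consumes synaptic resources that recover slowly, so a high-rate but
uncorrelated input drives weaker traces, and hence less potentiation, than a
low-rate input whose spikes causally precede the postsynaptic spikes. All
constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, n_in = 1000, 1.0, 2                  # 1 ms bins, two input channels

# Channel 0: high rate, no timing relation to the postsynaptic neuron.
# Channel 1: low rate, but its spikes precede the postsynaptic spikes.
pre = np.zeros((T, n_in), dtype=bool)
pre[:, 0] = rng.random(T) < 0.10            # ~100 Hz
t1 = np.flatnonzero(rng.random(T) < 0.01)   # ~10 Hz
pre[t1, 1] = True
post = np.zeros(T, dtype=bool)
post[np.clip(t1 + 3, 0, T - 1)] = True      # post fires 3 ms after channel 1

w = np.full(n_in, 0.5)
x_pre = np.zeros(n_in)                      # presynaptic trace (drives LTP)
fatigue = np.ones(n_in)                     # synaptic resources in [0, 1]
tau_x, tau_f, a_plus, u = 20.0, 200.0, 0.01, 0.3

for t in range(T):
    x_pre -= dt * x_pre / tau_x              # trace decay
    fatigue += dt * (1.0 - fatigue) / tau_f  # slow resource recovery
    s = pre[t]
    x_pre[s] += fatigue[s]                   # fatigued spikes leave weaker traces
    fatigue[s] *= 1.0 - u                    # each spike consumes resources
    if post[t]:
        w += a_plus * x_pre                  # fatigue-weighted potentiation

print(f"w(high-rate, uncorrelated) = {w[0]:.3f}, w(low-rate, causal) = {w[1]:.3f}")
```

With fatigue enabled, the causally timed low-rate channel ends up with the
larger weight even though the other channel fires an order of magnitude more
often.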
On the Optimization Landscape of Tensor Decompositions
|
cs.LG cs.DS math.OC math.PR stat.ML
|
Non-convex optimization with local search heuristics has been widely used in
machine learning, achieving many state-of-the-art results. It is increasingly
important to understand why such heuristics work for these NP-hard problems
on typical
data. The landscape of many objective functions in learning has been
conjectured to have the geometric property that "all local optima are
(approximately) global optima", and thus they can be solved efficiently by
local search algorithms. However, establishing such property can be very
difficult.
In this paper, we analyze the optimization landscape of the random
over-complete tensor decomposition problem, which has many applications in
unsupervised learning, especially in learning latent variable models. In
practice, it can be efficiently solved by gradient ascent on a non-convex
objective. We show that, for any small constant $\epsilon > 0$, among the set
of points with function values a factor of $(1+\epsilon)$ larger than the
expectation of the function, all local maxima are approximate global maxima.
Previously, the best-known result only characterized the geometry in small
neighborhoods around the true components. Our result implies that even with
an initialization that is barely better than a random guess, the gradient
ascent algorithm is guaranteed to solve this problem.
Our main technique uses the Kac-Rice formula and random matrix theory. To the
best of our knowledge, this is the first time the Kac-Rice formula has been
successfully applied to counting the number of local minima of a highly
structured random polynomial with dependent coefficients. A minimal
gradient-ascent sketch follows this entry.
|
Rong Ge and Tengyu Ma
| null |
1706.05598
| null | null |
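A hedged illustration of the message: projected gradient ascent on the usual
fourth-order objective $f(x) = \sum_i \langle a_i, x \rangle^4$ for a random
over-complete set of components, started from a purely random unit vector;
the dimensions, step size, and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 100                              # over-complete: n > d components
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)

def grad(x):
    c = A @ x
    return 4 * A.T @ (c ** 3)               # gradient of sum_i <a_i, x>^4

# Projected gradient ascent on the unit sphere from a random start,
# i.e., an initialization barely better than a random guess.
x = rng.normal(size=d)
x /= np.linalg.norm(x)
for _ in range(500):
    x += 0.01 * grad(x)
    x /= np.linalg.norm(x)

print("max |correlation| with a true component:", np.max(np.abs(A @ x)))
```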
Sample, computation vs storage tradeoffs for classification using tensor
subspace models
|
cs.LG stat.ML
|
In this paper, we exhibit the tradeoffs between the (training) sample,
computation, and storage complexity of supervised classification using signal
subspace estimation. Our main tool is the use of tensor subspaces, i.e.,
subspaces with a Kronecker structure, for embedding the data into lower
dimensions. Among subspaces with a Kronecker structure, we show that using
subspaces with a hierarchical structure to represent data leads to improved
tradeoffs. One of the main reasons for the improvement is that embedding data
into these hierarchical Kronecker-structured subspaces prevents overfitting
at higher latent dimensions. A minimal Kronecker-subspace sketch follows this
entry.
|
Mohammadhossein Chaghazardi, Shuchin Aeron
| null |
1706.05599
| null | null |
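A minimal sketch of classification with Kronecker-structured (tensor)
subspaces, assuming a plain HOSVD-style fit rather than the paper's
hierarchical estimators: each class gets a pair $(U, V)$, the embedding of a
sample $X$ is $U^\top X V$, and a test sample is assigned to the class with
the smallest projection residual. All sizes and the data generator are
illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def kron_subspace(X_list, r1, r2):
    """Fit (U, V) by truncated SVDs of the mode unfoldings (HOSVD-style)."""
    T = np.stack(X_list)                                 # (N, m, n)
    M1 = T.transpose(1, 0, 2).reshape(T.shape[1], -1)    # mode-1 unfolding
    M2 = T.transpose(2, 0, 1).reshape(T.shape[2], -1)    # mode-2 unfolding
    U = np.linalg.svd(M1, full_matrices=False)[0][:, :r1]
    V = np.linalg.svd(M2, full_matrices=False)[0][:, :r2]
    return U, V

def residual(X, U, V):
    # Distance from X to the set {U S V^T}; small means X fits the subspace.
    return np.linalg.norm(X - U @ (U.T @ X @ V) @ V.T)

# Two toy classes with different low-rank structure; classify by residual.
def make(B, C):
    return B @ rng.normal(size=(3, 3)) @ C.T + 0.05 * rng.normal(size=(16, 16))

B0 = np.linalg.qr(rng.normal(size=(16, 3)))[0]
C0 = np.linalg.qr(rng.normal(size=(16, 3)))[0]
B1 = np.linalg.qr(rng.normal(size=(16, 3)))[0]
C1 = np.linalg.qr(rng.normal(size=(16, 3)))[0]
subspaces = [kron_subspace([make(B0, C0) for _ in range(50)], 3, 3),
             kron_subspace([make(B1, C1) for _ in range(50)], 3, 3)]

test = make(B0, C0)                          # a fresh class-0 sample
pred = int(np.argmin([residual(test, U, V) for U, V in subspaces]))
print("predicted class:", pred)              # should pick class 0
```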