title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Learning Convolutional Text Representations for Visual Question
Answering | cs.LG cs.CL cs.NE stat.ML | Visual question answering is a recently proposed artificial intelligence task
that requires a deep understanding of both images and texts. In deep learning,
images are typically modeled through convolutional neural networks, and texts
are typically modeled through recurrent neural networks. While the requirement
for modeling images is similar to traditional computer vision tasks, such as
object recognition and image classification, visual question answering raises a
different need for textual representation as compared to other natural language
processing tasks. In this work, we perform a detailed analysis of natural
language questions in visual question answering. Based on the analysis, we
propose to rely on convolutional neural networks for learning textual
representations. By exploring the various properties of convolutional neural
networks specialized for text data, such as width and depth, we present our
"CNN Inception + Gate" model. We show that our model improves question
representations and thus the overall accuracy of visual question answering
models. We also show that the text representation requirement in visual
question answering is more complicated and comprehensive than that in
conventional natural language processing tasks, making it a better task to
evaluate textual representation methods. Shallow models like fastText, which
can obtain results comparable to deep learning models in tasks like text
classification, are not suitable for visual question answering.
| Zhengyang Wang, Shuiwang Ji | 10.1137/1.9781611975321.67 | 1705.06824 | null | null |
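
A minimal, hypothetical sketch of a gated, inception-style 1-D convolutional text encoder in the spirit of the "CNN Inception + Gate" model above; the kernel widths, channel counts, and the GLU gate are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a gated, inception-style text encoder (PyTorch).
# Kernel widths, channel counts, and the GLU gate are illustrative choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedInceptionTextEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, channels=64, widths=(1, 3, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One branch per kernel width; each outputs 2*channels so that a
        # gated linear unit (GLU) can split them into value and gate halves.
        self.branches = nn.ModuleList(
            nn.Conv1d(embed_dim, 2 * channels, w, padding=w // 2) for w in widths
        )

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2) # -> (batch, embed_dim, seq_len)
        outs = [F.glu(conv(x), dim=1) for conv in self.branches]  # gating
        h = torch.cat(outs, dim=1)             # concat inception branches
        return h.max(dim=2).values             # max-pool over time

enc = GatedInceptionTextEncoder(vocab_size=10000)
q = torch.randint(0, 10000, (4, 12))           # a batch of 4 toy questions
print(enc(q).shape)                            # torch.Size([4, 192])
```
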
A Unified Framework for Stochastic Matrix Factorization via Variance
Reduction | stat.ML cs.LG math.OC | We propose a unified framework to speed up the existing stochastic matrix
factorization (SMF) algorithms via variance reduction. Our framework is general
and it subsumes several well-known SMF formulations in the literature. We
perform a non-asymptotic convergence analysis of our framework and derive
computational and sample complexities for our algorithm to converge to an
$\epsilon$-stationary point in expectation. In addition, extensive experiments
for a wide class of SMF formulations demonstrate that our framework
consistently yields faster convergence and a more accurate output dictionary
vis-\`a-vis state-of-the-art frameworks.
| Renbo Zhao, William B. Haskell, Jiashi Feng | null | 1705.06884 | null | null |
Practical Algorithms for Best-K Identification in Multi-Armed Bandits | cs.LG cs.DS stat.ML | In the Best-$K$ identification problem (Best-$K$-Arm), we are given $N$
stochastic bandit arms with unknown reward distributions. Our goal is to
identify the $K$ arms with the largest means with high confidence, by drawing
samples from the arms adaptively. This problem is motivated by various
practical applications and has attracted considerable attention in the past
decade. In this paper, we propose new practical algorithms for the Best-$K$-Arm
problem, which have nearly optimal sample complexity bounds (matching the lower
bound up to logarithmic factors) and outperform the state-of-the-art algorithms
for the Best-$K$-Arm problem (even for $K=1$) in practice.
| Haotian Jiang, Jian Li, Mingda Qiao | null | 1705.06894 | null | null |
CDS Rate Construction Methods by Machine Learning Techniques | q-fin.ST cs.LG q-fin.RM stat.ML | Regulators require financial institutions to estimate counterparty default
risks from liquid CDS quotes for the valuation and risk management of OTC
derivatives. However, the vast majority of counterparties do not have liquid
CDS quotes and need proxy CDS rates. Existing methods cannot account for
counterparty-specific default risks; we propose to construct proxy CDS rates by
associating a liquid CDS proxy with each illiquid counterparty using machine
learning techniques. After testing 156 classifiers from the 8 most popular
classifier families, we found that some classifiers achieve highly satisfactory accuracy
rates. Furthermore, we have rank-ordered the performances and investigated
performance variations amongst and within the 8 classifier families. This paper
is, to the best of our knowledge, the first systematic study of CDS Proxy
construction by Machine Learning techniques, and the first systematic
classifier comparison study based entirely on financial market data. Its
findings both confirm and contrast existing classifier performance literature.
Given the typically highly correlated nature of financial data, we investigated
the impact of correlation on classifier performance. The techniques used in
this paper should be of interest for financial institutions seeking a CDS Proxy
method, and can serve for proxy construction for other financial variables.
Some directions for future research are indicated.
| Raymond Brummelhuis and Zhongmin Luo | null | 1705.06899 | null | null |
Unbiased estimates for linear regression via volume sampling | cs.LG | Given a full rank matrix $X$ with more columns than rows, consider the task
of estimating the pseudo inverse $X^+$ based on the pseudo inverse of a sampled
subset of columns (of size at least the number of rows). We show that this is
possible if the subset of columns is chosen proportional to the squared volume
spanned by the rows of the chosen submatrix (i.e., volume sampling). The
resulting estimator is unbiased and surprisingly the covariance of the
estimator also has a closed form: It equals a specific factor times
$X^{+\top}X^+$. The pseudo inverse plays an important role in solving the linear
least squares problem, where we try to predict a label for each column of $X$.
We assume labels are expensive and we are only given the labels for the small
subset of columns we sample from $X$. Using our methods we show that the weight
vector of the solution for the subproblem is an unbiased estimator of the
optimal solution for the whole problem based on all column labels. We believe
that these new formulas establish a fundamental connection between linear least
squares and volume sampling. We use our methods to obtain an algorithm for
volume sampling that is faster than state-of-the-art and for obtaining bounds
for the total loss of the estimated least-squares solution on all labeled
columns.
| Micha{\l} Derezi\'nski and Manfred K. Warmuth | null | 1705.06908 | null | null |
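
The unbiasedness claim above can be checked numerically by computing the exact expectation over all size-$d$ column subsets; this brute-force check (not the paper's fast sampler) is a minimal sketch.

```python
# Numerical check: enumerate all size-d column subsets, weight each by its
# squared volume det(X_S)^2, and verify that the expected embedded inverse
# equals the pseudo inverse X^+. Brute force, illustration only.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 6
X = rng.standard_normal((d, n))        # full rank, more columns than rows

expectation = np.zeros((n, d))
total = 0.0
for S in itertools.combinations(range(n), d):
    XS = X[:, S]
    w = np.linalg.det(XS) ** 2         # volume sampling weight
    W = np.zeros((n, d))
    W[list(S), :] = np.linalg.inv(XS)  # estimator: inverse of sampled columns
    expectation += w * W
    total += w                         # equals det(X X^T) by Cauchy-Binet

expectation /= total                   # normalize weights to probabilities
print(np.allclose(expectation, np.linalg.pinv(X)))  # True: unbiased
```
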
Spectral-graph Based Classifications: Linear Regression for
Classification and Normalized Radial Basis Function Network | cs.LG | Spectral graph theory has been widely applied in unsupervised and
semi-supervised learning. In this paper, we find for the first time, to our
knowledge, that it also plays a concrete role in supervised classification. It
turns out that two classifiers are inherently related to the theory: linear
regression for classification (LRC) and normalized radial basis function
network (nRBFN), corresponding to linear and nonlinear kernel respectively. The
spectral graph theory provides us with a new insight into a fundamental aspect
of classification: the tradeoff between fitting error and overfitting risk.
With the theory, ideal working conditions for LRC and nRBFN are presented,
which ensure not only zero fitting error but also low overfitting risk. For
quantitative analysis, two concepts, the fitting error and the spectral risk
(indicating overfitting), have been defined. Their bounds for nRBFN and LRC are
derived. A special result shows that the spectral risk of nRBFN is lower
bounded by the number of classes and upper bounded by the size of the radial basis.
When the conditions are not met exactly, the classifiers will pursue the
minimum fitting error, running into the risk of overfitting. It turns out that
$\ell_2$-norm regularization can be applied to control overfitting. Its effect
is explored under the spectral context. It is found that the two terms in the
$\ell_2$-regularized objective are in one-to-one correspondence with the fitting error
and the spectral risk, revealing a tradeoff between the two quantities.
Concerning practical performance, we devise a basis selection strategy to
address the main problem hindering the applications of (n)RBFN. With the
strategy, nRBFN is easy to implement yet flexible. Experiments on 14 benchmark
data sets show the performance of nRBFN is comparable to that of SVM, whereas
the parameter tuning of nRBFN is much easier, leading to reduction of model
selection time.
| Zhenfang Hu, Gang Pan, and Zhaohui Wu | null | 1705.06922 | null | null |
Atari games and Intel processors | cs.DC cs.AI cs.LG | The asynchronous nature of the state-of-the-art reinforcement learning
algorithms such as the Asynchronous Advantage Actor-Critic algorithm, makes
them exceptionally suitable for CPU computations. However, given the fact that
deep reinforcement learning often deals with interpreting visual information, a
large part of the training and inference time is spent performing convolutions. In
this work we present our results on learning strategies in Atari games using a
Convolutional Neural Network, the Math Kernel Library and TensorFlow 0.11rc0
machine learning framework. We also analyze effects of asynchronous
computations on the convergence of reinforcement learning algorithms.
| Robert Adamski, Tomasz Grel, Maciej Klimek and Henryk Michalewski | 10.1007/978-3-319-75931-9_1 | 1705.06936 | null | null |
Nearly second-order asymptotic optimality of sequential change-point
detection with one-sample updates | math.ST cs.LG stat.TH | Sequential change-point detection when the distribution parameters are
unknown is a fundamental problem in statistics and machine learning. When the
post-change parameters are unknown, we consider a set of detection procedures
based on sequential likelihood ratios with non-anticipating estimators
constructed using online convex optimization algorithms such as online mirror
descent, which provides a more versatile approach to tackle complex situations
where recursive maximum likelihood estimators cannot be found. When the
underlying distributions belong to an exponential family and the estimators
satisfy the logarithmic regret property, we show that this approach is nearly
second-order asymptotically optimal. This means that the upper bound for the
false alarm rate of the algorithm (measured by the average-run-length) meets
the lower bound asymptotically up to a log-log factor when the threshold tends
to infinity. Our proof is achieved by making a connection between sequential
change-point detection and online convex optimization, and by leveraging the
logarithmic regret bound of the online mirror descent algorithm. Numerical and real
data examples validate our theory.
| Yang Cao, Liyan Xie, Yao Xie, and Huan Xu | null | 1705.06995 | null | null |
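
A simplified sketch of the detection idea for a Gaussian mean shift with unit variance; the window-limited search and the running-average estimator (which is what the online mirror descent update reduces to for a Gaussian mean) are simplifying assumptions, and the threshold is illustrative.

```python
# Simplified change-point detection sketch: for each candidate change time k
# in a window, accumulate a likelihood ratio that plugs in a *non-anticipating*
# running-average estimate of the post-change mean.
import numpy as np

def detect(x, threshold=15.0, window=50):
    for t in range(1, len(x) + 1):
        best = -np.inf
        for k in range(max(0, t - window), t):       # candidate change points
            llr, mean, m = 0.0, 0.0, 0
            for i in range(k, t):
                # log f_mean(x_i) - log f_0(x_i), mean estimated from x_k..x_{i-1}
                llr += mean * x[i] - 0.5 * mean ** 2
                m += 1
                mean += (x[i] - mean) / m            # one-sample update
            best = max(best, llr)
        if best > threshold:
            return t                                 # alarm time
    return None

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.0, 1, 200)])
print(detect(x))   # typically raises an alarm shortly after time 200
```
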
Effective Representations of Clinical Notes | stat.ML cs.LG | Clinical notes are a rich source of information about patient state. However,
using them to predict clinical events with machine learning models is
challenging. They are very high dimensional, sparse and have complex structure.
Furthermore, training data is often scarce because it is expensive to obtain
reliable labels for many clinical events. These difficulties have traditionally
been addressed by manual feature engineering encoding task specific domain
knowledge. We explored the use of neural networks and transfer learning to
learn representations of clinical notes that are useful for predicting future
clinical events of interest, such as all-cause mortality, inpatient
admissions, and emergency room visits. Our data comprised 2.7 million notes and
115 thousand patients at Stanford Hospital. We used the learned
representations, along with commonly used bag of words and topic model
representations, as features for predictive models of clinical events. We
evaluated the effectiveness of these representations with respect to the
performance of the models trained on small datasets. Models using the neural
network derived representations performed significantly better than models
using the baseline representations with small ($N < 1000$) training datasets.
The learned representations offer significant performance gains over commonly
used baseline representations for a range of predictive modeling tasks and
cohort sizes, offering an effective alternative to task specific feature
engineering when plentiful labeled training data is not available.
| Sebastien Dubois, Nathanael Romano, David C. Kale, Nigam Shah, and
Kenneth Jung | null | 1705.07025 | null | null |
The Landscape of Deep Learning Algorithms | stat.ML cs.LG math.OC | This paper studies the landscape of empirical risk of deep neural networks by
theoretically analyzing its convergence behavior to the population risk as well
as its stationary points and properties. For an $l$-layer linear neural
network, we prove its empirical risk uniformly converges to its population risk
at the rate of $\mathcal{O}(r^{2l}\sqrt{d\log(l)}/\sqrt{n})$ with training
sample size of $n$, the total weight dimension of $d$ and the magnitude bound
$r$ of weight of each layer. We then derive the stability and generalization
bounds for the empirical risk based on this result. Besides, we establish the
uniform convergence of gradient of the empirical risk to its population
counterpart. We prove the one-to-one correspondence of the non-degenerate
stationary points between the empirical and population risks with convergence
guarantees, which describes the landscape of deep neural networks. In addition,
we analyze these properties for deep nonlinear neural networks with sigmoid
activation functions. We prove similar results for convergence behavior of
their empirical risks as well as the gradients and analyze properties of their
non-degenerate stationary points.
To the best of our knowledge, this work is the first to theoretically
characterize the landscapes of deep learning algorithms. Besides, our results
provide the sample complexity of training a good deep neural network. We also
provide theoretical understanding on how the neural network depth $l$, the
layer width, the network size $d$ and parameter magnitude determine the neural
network landscapes.
| Pan Zhou, Jiashi Feng | null | 1705.07038 | null | null |
Posterior sampling for reinforcement learning: worst-case regret bounds | cs.LG | We present an algorithm based on posterior sampling (aka Thompson sampling)
that achieves near-optimal worst-case regret bounds when the underlying Markov
Decision Process (MDP) is communicating with a finite, though unknown,
diameter. Our main result is a high probability regret upper bound of
$\tilde{O}(DS\sqrt{AT})$ for any communicating MDP with $S$ states, $A$ actions
and diameter $D$. Here, regret compares the total reward achieved by the
algorithm to the total expected reward of an optimal infinite-horizon
undiscounted average reward policy, in time horizon $T$. This result closely
matches the known lower bound of $\Omega(\sqrt{DSAT})$. Our techniques involve
proving some novel results about the anti-concentration of the Dirichlet
distribution, which may be of independent interest.
| Shipra Agrawal and Randy Jia | null | 1705.07041 | null | null |
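
A minimal posterior-sampling sketch on a toy MDP; the episodic interaction, known rewards, and finite-horizon planner are simplifying assumptions for illustration, not the paper's communicating-MDP setting.

```python
# Minimal posterior-sampling RL sketch: keep Dirichlet posteriors over
# transition probabilities, sample an MDP each episode, solve it by
# finite-horizon value iteration, and act greedily under the sample.
import numpy as np

rng = np.random.default_rng(0)
S, A, H, EPISODES = 4, 2, 10, 200
P_true = rng.dirichlet(np.ones(S), size=(S, A))     # unknown to the agent
R = rng.uniform(0, 1, size=(S, A))                  # rewards assumed known
counts = np.ones((S, A, S))                         # Dirichlet(1,...,1) prior

def solve(P):                                       # finite-horizon VI
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V                               # shape (S, A)
        pi[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return pi

for ep in range(EPISODES):
    P_sampled = np.array([[rng.dirichlet(counts[s, a]) for a in range(A)]
                          for s in range(S)])
    pi = solve(P_sampled)                           # plan under the sample
    s = 0
    for h in range(H):
        a = pi[h, s]
        s2 = rng.choice(S, p=P_true[s, a])
        counts[s, a, s2] += 1                       # posterior update
        s = s2
# counts now concentrate on P_true along well-visited state-action pairs
```
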
Linear regression without correspondence | cs.LG math.ST stat.ML stat.TH | This article considers algorithmic and statistical aspects of linear
regression when the correspondence between the covariates and the responses is
unknown. First, a fully polynomial-time approximation scheme is given for the
natural least squares optimization problem in any constant dimension. Next, in
an average-case and noise-free setting where the responses exactly correspond
to a linear function of i.i.d. draws from a standard multivariate normal
distribution, an efficient algorithm based on lattice basis reduction is shown
to exactly recover the unknown linear function in arbitrary dimension. Finally,
lower bounds on the signal-to-noise ratio are established for approximate
recovery of the unknown linear function by any estimator.
| Daniel Hsu, Kevin Shi, Xiaorui Sun | null | 1705.07048 | null | null |
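
For intuition, the noise-free problem can be solved by brute force on tiny instances by trying every correspondence; this exponential-in-$n$ baseline is an illustration, not the paper's polynomial-time or lattice-based algorithms.

```python
# Brute-force baseline for linear regression without correspondence: try
# every permutation pi aligning y to the rows of X and keep the
# least-squares fit with the smallest residual. Noise-free toy instance.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, d = 7, 2
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = rng.permutation(X @ w_true)        # responses with lost correspondence

best = (np.inf, None)
for pi in itertools.permutations(range(n)):
    w, *_ = np.linalg.lstsq(X, y[list(pi)], rcond=None)
    r = np.linalg.norm(X @ w - y[list(pi)])
    best = min(best, (r, w), key=lambda t: t[0])

print(best[0])             # near-zero residual at the correct alignment
print(best[1], w_true)     # recovered w matches w_true
```
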
Masked Autoregressive Flow for Density Estimation | stat.ML cs.LG | Autoregressive models are among the best performing neural density
estimators. We describe an approach for increasing the flexibility of an
autoregressive model, based on modelling the random numbers that the model uses
internally when generating data. By constructing a stack of autoregressive
models, each modelling the random numbers of the next model in the stack, we
obtain a type of normalizing flow suitable for density estimation, which we
call Masked Autoregressive Flow. This type of flow is closely related to
Inverse Autoregressive Flow and is a generalization of Real NVP. Masked
Autoregressive Flow achieves state-of-the-art performance in a range of
general-purpose density estimation tasks.
| George Papamakarios, Theo Pavlakou, Iain Murray | null | 1705.07057 | null | null |
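
A single MAF layer can be sketched in a few lines; the linear masked conditioner below is an assumption standing in for the masked neural network (MADE) used in practice.

```python
# One MAF layer in numpy: the shift mu_i and log-scale alpha_i of dimension i
# depend only on x_{<i}, enforced by strictly lower-triangular masks. Density
# evaluation needs a single pass:
#   u = (x - mu(x)) * exp(-alpha(x)),  log p(x) = log N(u; 0, I) - sum_i alpha_i
import numpy as np

rng = np.random.default_rng(0)
D = 5
mask = np.tril(np.ones((D, D)), k=-1)          # strictly lower triangular
W_mu = rng.standard_normal((D, D)) * mask
W_al = rng.standard_normal((D, D)) * mask * 0.1

def log_prob(x):
    mu = W_mu @ x
    alpha = np.tanh(W_al @ x)                  # bounded log-scales for stability
    u = (x - mu) * np.exp(-alpha)              # invert the flow
    log_base = -0.5 * np.sum(u ** 2) - 0.5 * D * np.log(2 * np.pi)
    return log_base - np.sum(alpha)            # minus log-det of the Jacobian

x = rng.standard_normal(D)
print(log_prob(x))
```
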
EE-Grad: Exploration and Exploitation for Cost-Efficient Mini-Batch SGD | cs.LG stat.ML | We present a generic framework for trading off fidelity and cost in computing
stochastic gradients when the costs of acquiring stochastic gradients of
different quality are not known a priori. We consider a mini-batch oracle that
distributes a limited query budget over a number of stochastic gradients and
aggregates them to estimate the true gradient. Since the optimal mini-batch
size depends on the unknown cost-fidelity function, we propose an algorithm,
{\it EE-Grad}, that sequentially explores the performance of mini-batch oracles
and exploits the accumulated knowledge to estimate the one achieving the best
performance in terms of cost-efficiency. We provide performance guarantees for
EE-Grad with respect to the optimal mini-batch oracle, and illustrate these
results in the case of strongly convex objectives. We also provide a simple
numerical example that corroborates our theoretical findings.
| Mehmet A. Donmez and Maxim Raginsky and Andrew C. Singer | null | 1705.0707 | null | null |
What do We Learn by Semantic Scene Understanding for Remote Sensing
imagery in CNN framework? | cs.CV cs.LG | Recently, deep convolutional neural network (DCNN) achieved increasingly
remarkable success and rapidly developed in the field of natural image
recognition. Compared with natural images, remote sensing images are larger in
scale, and the scenes and objects they represent are more macroscopic. This
study inquires whether remote sensing scene and natural scene recognitions
differ and raises the following questions: What are the key factors in remote
sensing scene recognition? Is the DCNN recognition mechanism centered on object
recognition still applicable to the scenarios of remote sensing scene
understanding? We performed several experiments to explore the influence of the
DCNN structure and the scale of remote sensing scene understanding from the
perspective of scene complexity. Our experiment shows that understanding a
complex scene depends on an in-depth network and multiple-scale perception.
Using a visualization method, we qualitatively and quantitatively analyze the
recognition mechanism in a complex remote sensing scene and demonstrate the
importance of multi-objective joint semantic support.
| Haifeng Li, Jian Peng, Chao Tao, Jie Chen, Min Deng | null | 1705.07077 | null | null |
Estimating Accuracy from Unlabeled Data: A Probabilistic Logic Approach | cs.LG cs.AI stat.ML | We propose an efficient method to estimate the accuracy of classifiers using
only unlabeled data. We consider a setting with multiple classification
problems where the target classes may be tied together through logical
constraints. For example, a set of classes may be mutually exclusive, meaning
that a data instance can belong to at most one of them. The proposed method is
based on the intuition that: (i) when classifiers agree, they are more likely
to be correct, and (ii) when the classifiers make a prediction that violates
the constraints, at least one classifier must be making an error. Experiments
on four real-world data sets produce accuracy estimates within a few percent of
the true accuracy, using solely unlabeled data. Our models also outperform
existing state-of-the-art solutions in both estimating accuracies, and
combining multiple classifier outputs. The results emphasize the utility of
logical constraints in estimating accuracy, thus validating our intuition.
| Emmanouil A. Platanios, Hoifung Poon, Tom M. Mitchell, Eric Horvitz | null | 1705.07086 | null | null |
Machine learning for classification and quantification of monoclonal
antibody preparations for cancer therapy | q-bio.QM cs.LG | Monoclonal antibodies constitute one of the most important strategies to
treat patients suffering from cancers such as hematological malignancies and
solid tumors. In order to guarantee the quality of those preparations prepared
at the hospital, quality control has to be developed. The aim of this study was to
explore a noninvasive, nondestructive, and rapid analytical method to ensure
the quality of the final preparation without causing any delay in the process.
We analyzed four mAbs (Infliximab, Bevacizumab, Ramucirumab and Rituximab)
diluted at therapeutic concentration in 0.9% sodium chloride using Raman
spectroscopy. To reduce the prediction errors obtained with traditional
chemometric data analysis, we explored a data-driven approach using statistical
machine learning methods where preprocessing and predictive models are jointly
optimized. We prepared a data analytics workflow and submitted the problem to a
collaborative data challenge platform called Rapid Analytics and Model
Prototyping (RAMP). This allowed us to use solutions from about 300 data
scientists during five days of collaborative work. The prediction of the four
mAb samples was considerably improved, with a misclassification rate of 0.8%
and a mean error rate of 4%.
| Laetitia Le, Camille Marini, Alexandre Gramfort, David Nguyen, Mehdi
Cherti, Sana Tfaili, Ali Tfayli, Arlette Baillet-Guffroy, Patrice Prognon,
Pierre Chaminade, Eric Caudron, Bal\'azs K\'egl | null | 1705.07099 | null | null |
Gradient Estimators for Implicit Models | stat.ML cs.LG | Implicit models, which allow for the generation of samples but not for
point-wise evaluation of probabilities, are omnipresent in real-world problems
tackled by machine learning and a hot topic of current research. Some examples
include data simulators that are widely used in engineering and scientific
research, generative adversarial networks (GANs) for image synthesis, and
hot-off-the-press approximate inference techniques relying on implicit
distributions. The majority of existing approaches to learning implicit models
rely on approximating the intractable distribution or optimisation objective
for gradient-based optimisation, which is liable to produce inaccurate updates
and thus poor models. This paper alleviates the need for such approximations by
proposing the Stein gradient estimator, which directly estimates the score
function of the implicitly defined distribution. The efficacy of the proposed
estimator is empirically demonstrated by examples that include meta-learning
for approximate inference, and entropy regularised GANs that provide improved
sample diversity.
| Yingzhen Li, Richard E. Turner | null | 1705.07107 | null | null |
Deep adversarial neural decoding | q-bio.NC cs.LG stat.ML | Here, we present a novel approach to solve the problem of reconstructing
perceived stimuli from brain responses by combining probabilistic inference
with deep learning. Our approach first inverts the linear transformation from
latent features to brain responses with maximum a posteriori estimation and
then inverts the nonlinear transformation from perceived stimuli to latent
features with adversarial training of convolutional neural networks. We test
our approach with a functional magnetic resonance imaging experiment and show
that it can generate state-of-the-art reconstructions of perceived faces from
brain activations.
| Ya\u{g}mur G\"u\c{c}l\"ut\"urk, Umut G\"u\c{c}l\"u, Katja Seeliger,
Sander Bosch, Rob van Lier, Marcel van Gerven | null | 1705.07109 | null | null |
Fast Singular Value Shrinkage with Chebyshev Polynomial Approximation
Based on Signal Sparsity | cs.NA cs.LG | We propose an approximation method for thresholding of singular values using
Chebyshev polynomial approximation (CPA). Many signal processing problems
require iterative application of singular value decomposition (SVD) for
minimizing the rank of a given data matrix with other cost functions and/or
constraints, which is called matrix rank minimization. In matrix rank
minimization, singular values of a matrix are shrunk by hard-thresholding,
soft-thresholding, or weighted soft-thresholding. However, the computational
cost of SVD is generally too expensive to handle high dimensional signals such
as images; hence, in this case, matrix rank minimization requires enormous
computation time. In this paper, we leverage CPA to (approximately) manipulate
singular values without computing singular values and vectors. The thresholding
of singular values is expressed by a multiplication of certain matrices, which
is derived from a characteristic of CPA. The multiplication is also efficiently
computed using the sparsity of signals. As a result, the computational cost is
significantly reduced. Experimental results suggest the effectiveness of our
method through several image processing applications based on matrix rank
minimization with nuclear norm relaxation in terms of computation time and
approximation precision.
| Masaki Onuki, Shunsuke Ono, Keiichiro Shirai, Yuichi Tanaka | 10.1109/TSP.2017.2745444 | 1705.07112 | null | null |
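
The core trick can be sketched on a symmetric positive semidefinite matrix: approximate soft-thresholding of the spectrum with a Chebyshev series and evaluate it through the three-term matrix recurrence, never computing eigenvalues or eigenvectors. The degree, threshold, and the symmetric special case are illustrative simplifications of the paper's method.

```python
# CPA sketch: shrink the spectrum of a symmetric PSD matrix without an
# eigendecomposition, via a Chebyshev approximation of soft-thresholding.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 60))
M = A @ A.T / 60                               # symmetric PSD test matrix
lmax = np.linalg.norm(M, 2)                    # spectrum lies in [0, lmax]
tau, deg = 0.5 * lmax, 40

# Fit Chebyshev coefficients of f(lam) = max(lam - tau, 0) on [0, lmax],
# parametrized by t in [-1, 1] with lam = (t + 1) * lmax / 2.
t_nodes = np.cos(np.pi * (np.arange(200) + 0.5) / 200)
coeffs = C.chebfit(t_nodes, np.maximum((t_nodes + 1) * lmax / 2 - tau, 0), deg)

# Evaluate the series at S = 2 M / lmax - I via T_{k+1} = 2 S T_k - T_{k-1}.
S = 2 * M / lmax - np.eye(len(M))
T_prev, T_cur = np.eye(len(M)), S
approx = coeffs[0] * T_prev + coeffs[1] * T_cur
for c in coeffs[2:]:
    T_prev, T_cur = T_cur, 2 * S @ T_cur - T_prev
    approx += c * T_cur

w, V = np.linalg.eigh(M)                       # exact shrinkage, for comparison
exact = (V * np.maximum(w - tau, 0)) @ V.T
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # small; shrinks with degree
```
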
Deep Learning as a Tool to Predict Flow Patterns in Two-Phase Flow | cs.LG | In order to better model complex real-world data such as multiphase flow, one
approach is to develop pattern recognition techniques and robust features that
capture the relevant information. In this paper, we use deep learning methods,
and in particular employ the multilayer perceptron, to build an algorithm that
can predict flow patterns in two-phase flow from fluid properties and pipe
conditions. The preliminary results show excellent performance when compared
with classical methods of flow pattern prediction.
| Mohammadmehdi Ezzatabadipour, Parth Singh, Melvin D. Robinson, Pablo
Guillen-Rondon, Carlos Torres | null | 1705.07117 | null | null |
VAE with a VampPrior | cs.LG cs.AI stat.ML | Many different methods to train deep generative models have been introduced
in the past. In this paper, we propose to extend the variational auto-encoder
(VAE) framework with a new type of prior which we call "Variational Mixture of
Posteriors" prior, or VampPrior for short. The VampPrior consists of a mixture
distribution (e.g., a mixture of Gaussians) with components given by
variational posteriors conditioned on learnable pseudo-inputs. We further
extend this prior to a two-layer hierarchical model and show that this
architecture, with a coupled prior and posterior, learns significantly better
models. The model also avoids the usual local optima issues related to useless
latent dimensions that plague VAEs. We provide empirical studies on six
datasets, namely, static and binary MNIST, OMNIGLOT, Caltech 101 Silhouettes,
Frey Faces and Histopathology patches, and show that applying the hierarchical
VampPrior delivers state-of-the-art results on all datasets in the unsupervised
permutation invariant setting and the best results or comparable to SOTA
methods for the approach with convolutional networks.
| Jakub M. Tomczak and Max Welling | null | 1705.0712 | null | null |
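
A sketch of the VampPrior density computation, $\log p(z) = \log \frac{1}{K}\sum_k q(z \mid u_k)$ with learnable pseudo-inputs $u_k$; the toy Gaussian encoder is an assumption standing in for the VAE's encoder network.

```python
# VampPrior sketch: a mixture of the variational posteriors evaluated at
# learnable pseudo-inputs, sharing the encoder with the rest of the VAE.
import math
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):                       # stand-in for q(z|x)
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(input_dim, latent_dim)
        self.logvar = nn.Linear(input_dim, latent_dim)
    def forward(self, x):
        return self.mu(x), self.logvar(x)

class VampPrior(nn.Module):
    def __init__(self, encoder, input_dim, K=50):
        super().__init__()
        self.encoder = encoder
        self.pseudo_inputs = nn.Parameter(torch.randn(K, input_dim))
    def log_prob(self, z):                         # z: (batch, latent_dim)
        mu, logvar = self.encoder(self.pseudo_inputs)   # (K, latent_dim)
        z = z.unsqueeze(1)                              # (batch, 1, latent_dim)
        log_q = -0.5 * (logvar + math.log(2 * math.pi)
                        + (z - mu) ** 2 / logvar.exp()).sum(-1)  # (batch, K)
        return torch.logsumexp(log_q, dim=1) - math.log(mu.shape[0])

enc = ToyEncoder(784, 40)
prior = VampPrior(enc, 784)
print(prior.log_prob(torch.randn(8, 40)).shape)    # torch.Size([8])
```
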
Softmax Q-Distribution Estimation for Structured Prediction: A
Theoretical Interpretation for RAML | cs.LG cs.CL stat.ML | Reward augmented maximum likelihood (RAML), a simple and effective learning
framework to directly optimize towards the reward function in structured
prediction tasks, has led to a number of impressive empirical successes. RAML
incorporates task-specific reward by performing maximum-likelihood updates on
candidate outputs sampled according to an exponentiated payoff distribution,
which gives higher probabilities to candidates that are close to the reference
output. While RAML is notable for its simplicity, efficiency, and its
impressive empirical successes, the theoretical properties of RAML, especially
the behavior of the exponentiated payoff distribution, has not been examined
thoroughly. In this work, we introduce softmax Q-distribution estimation, a
novel theoretical interpretation of RAML, which reveals the relation between
RAML and Bayesian decision theory. The softmax Q-distribution can be regarded
as a smooth approximation of the Bayes decision boundary, and the Bayes
decision rule is achieved by decoding with this Q-distribution. We further show
that RAML is equivalent to approximately estimating the softmax Q-distribution,
with the temperature $\tau$ controlling approximation error. We perform two
experiments, one on synthetic data of multi-class classification and one on
real data of image captioning, to demonstrate the relationship between RAML and
the proposed softmax Q-distribution estimation method, verifying our
theoretical analysis. Additional experiments on three structured prediction
tasks with rewards defined on sequential (named entity recognition), tree-based
(dependency parsing) and irregular (machine translation) structures show
notable improvements over maximum likelihood baselines.
| Xuezhe Ma, Pengcheng Yin, Jingzhou Liu, Graham Neubig, Eduard Hovy | null | 1705.07136 | null | null |
Local Information with Feedback Perturbation Suffices for Dictionary
Learning in Neural Circuits | cs.LG cs.NE | While the sparse coding principle can successfully model information
processing in sensory neural systems, it remains unclear how learning can be
accomplished under neural architectural constraints. Feasible learning rules
must rely solely on synaptically local information in order to be implemented
on spatially distributed neurons. We describe a neural network with spiking
neurons that can address the aforementioned fundamental challenge and solve the
L1-minimizing dictionary learning problem, representing the first model able to
do so. Our major innovation is to introduce feedback synapses to create a
pathway to turn the seemingly non-local information into local ones. The
resulting network encodes the error signal needed for learning as the change of
network steady states caused by feedback, and operates akin to the classical
stochastic gradient descent method.
| Tsung-Han Lin | null | 1705.07149 | null | null |
Clustering under Local Stability: Bridging the Gap between Worst-Case
and Beyond Worst-Case Analysis | cs.DS cs.LG | Recently, there has been substantial interest in clustering research that
takes a beyond worst-case approach to the analysis of algorithms. The typical
idea is to design a clustering algorithm that outputs a near-optimal solution,
provided the data satisfy a natural stability notion. For example, Bilu and
Linial (2010) and Awasthi et al. (2012) presented algorithms that output
near-optimal solutions, assuming the optimal solution is preserved under small
perturbations to the input distances. A drawback to this approach is that the
algorithms are often explicitly built according to the stability assumption and
give no guarantees in the worst case; indeed, several recent algorithms output
arbitrarily bad solutions even when just a small section of the data does not
satisfy the given stability notion.
In this work, we address this concern in two ways. First, we provide
algorithms that inherit the worst-case guarantees of clustering approximation
algorithms, while simultaneously guaranteeing near-optimal solutions when the
data is stable. Our algorithms are natural modifications to existing
state-of-the-art approximation algorithms. Second, we initiate the study of
local stability, which is a property of a single optimal cluster rather than an
entire optimal solution. We show our algorithms output all optimal clusters
which satisfy stability locally. Specifically, we achieve strong positive
results in our local framework under recent stability notions including metric
perturbation resilience (Angelidakis et al. 2017) and robust perturbation
resilience (Balcan and Liang 2012) for the $k$-median, $k$-means, and
symmetric/asymmetric $k$-center objectives.
| Maria-Florina Balcan, Colin White | null | 1705.07157 | null | null |
Relaxed Wasserstein with Applications to GANs | stat.ML cs.LG | Wasserstein Generative Adversarial Networks (WGANs) provide a versatile class
of models, which have attracted great attention in various applications.
However, this framework has two main drawbacks: (i) Wasserstein-1 (or
Earth-Mover) distance is restrictive such that WGANs cannot always fit data
geometry well; (ii) It is difficult to achieve fast training of WGANs. In this
paper, we propose a new class of \textit{Relaxed Wasserstein} (RW) distances by
generalizing Wasserstein-1 distance with Bregman cost functions. We show that
RW distances achieve nice statistical properties while not sacrificing the
computational tractability. Combined with the GANs framework, we develop
Relaxed WGANs (RWGANs) which are not only statistically flexible but can be
approximated efficiently using heuristic approaches. Experiments on real images
demonstrate that the RWGAN with Kullback-Leibler (KL) cost function outperforms
other competing approaches, e.g., WGANs, even with gradient penalty.
| Xin Guo, Johnny Hong, Tianyi Lin and Nan Yang | null | 1705.07164 | null | null |
Nesterov's Acceleration For Second Order Method | cs.LG | Optimization plays a key role in machine learning. Recently, stochastic
second-order methods have attracted much attention due to their low
computational cost in each iteration. However, these algorithms might perform
poorly especially if it is hard to approximate the Hessian well and
efficiently. As far as we know, there is no effective way to handle this
problem. In this paper, we resort to Nesterov's acceleration technique to
improve the convergence performance of a class of second-order methods called
approximate Newton. We show theoretically that Nesterov's acceleration
technique can improve the convergence of approximate Newton just as it does
for first-order methods. We accordingly propose an accelerated regularized
sub-sampled Newton. Our accelerated algorithm performs much better than the
original regularized sub-sampled Newton in experiments, which validates our
theory empirically. Besides, the accelerated regularized sub-sampled Newton has
performance comparable to or even better than state-of-the-art algorithms.
| Haishan Ye and Zhihua Zhang | null | 1705.07171 | null | null |
Espresso: Efficient Forward Propagation for BCNNs | cs.DC cs.CV cs.LG cs.NE | There are many application scenarios for which the computational performance
and memory footprint of the prediction phase of Deep Neural Networks (DNNs)
need to be optimized. Binary Deep Neural Networks (BDNNs) have been shown to be an
effective way of achieving this objective. In this paper, we show how
Convolutional Neural Networks (CNNs) can be implemented using binary
representations. Espresso is a compact, yet powerful library written in C/CUDA
that features all the functionalities required for the forward propagation of
CNNs, in a binary file less than 400KB, without any external dependencies.
Although it is mainly designed to take advantage of massive GPU parallelism,
Espresso also provides an equivalent CPU implementation for CNNs. Espresso
provides special convolutional and dense layers for BCNNs, leveraging
bit-packing and bit-wise computations for efficient execution. These techniques
provide a speed-up of matrix-multiplication routines, and at the same time,
reduce memory usage when storing parameters and activations. We experimentally
show that Espresso is significantly faster than existing implementations of
optimized binary neural networks ($\approx$ 2 orders of magnitude). Espresso is
released under the Apache 2.0 license and is available at
http://github.com/fpeder/espresso.
| Fabrizio Pedersoli and George Tzanetakis and Andrea Tagliasacchi | null | 1705.07175 | null | null |
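
A sketch of the bit-packing idea behind such libraries: with entries in $\{-1,+1\}$, dot products reduce to XOR plus popcount on packed words. This is an illustrative numpy prototype, not Espresso's CUDA implementation; $n$ is kept a multiple of 8 so no padding bits need masking.

```python
# Binary GEMM via bit-packing: pack sign bits into uint8 words, then
# dot(a, b) = n - 2 * popcount(pack(a) XOR pack(b)) for a, b in {-1,+1}^n.
import numpy as np

POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.int32)

def pack(B):                      # B: {-1,+1} matrix, columns multiple of 8
    return np.packbits(B > 0, axis=1)

def binary_matmul(Xp, Wp, n):     # returns X @ W.T computed from packed bits
    xor = Xp[:, None, :] ^ Wp[None, :, :]          # (m, k, n//8)
    return n - 2 * POPCOUNT[xor].sum(axis=2)       # mismatches -> dot product

rng = np.random.default_rng(0)
m, k, n = 4, 3, 64
X = rng.choice([-1, 1], size=(m, n))
W = rng.choice([-1, 1], size=(k, n))
print(np.array_equal(binary_matmul(pack(X), pack(W), n), X @ W.T))  # True
```
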
The High-Dimensional Geometry of Binary Neural Networks | cs.LG | Recent research has shown that one can train a neural network with binary
weights and activations at train time by augmenting the weights with a
high-precision continuous latent variable that accumulates small changes from
stochastic gradient descent. However, there is a dearth of theoretical analysis
to explain why we can effectively capture the features in our data with binary
weights and activations. Our main result is that the neural networks with
binary weights and activations trained using the method of Courbariaux, Hubara
et al. (2016) work because of the high-dimensional geometry of binary vectors.
In particular, the ideal continuous vectors that extract out features in the
intermediate representations of these BNNs are well-approximated by binary
vectors in the sense that dot products are approximately preserved. Compared to
previous research that demonstrated the viability of such BNNs, our work
explains why these BNNs work in terms of this high-dimensional geometry. Our theory serves as
a foundation for understanding not only BNNs but a variety of methods that seek
to compress traditional neural networks. Furthermore, a better understanding of
multilayer binary neural networks serves as a starting point for generalizing
BNNs to other neural network architectures such as recurrent neural networks.
| Alexander G. Anderson, Cory P. Berg | null | 1705.07199 | null | null |
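
A quick numerical illustration of the direction-preservation phenomenon: for a random Gaussian vector $w$, the cosine between $w$ and $\mathrm{sign}(w)$ concentrates around $\sqrt{2/\pi} \approx 0.80$ (an angle of about 37 degrees) as the dimension grows, so binarization roughly preserves direction.

```python
# Angle between a Gaussian vector and its binarization, as dimension grows.
import numpy as np

rng = np.random.default_rng(0)
for d in (10, 100, 10000):
    w = rng.standard_normal(d)
    b = np.sign(w)
    cos = w @ b / (np.linalg.norm(w) * np.linalg.norm(b))
    print(d, cos)          # approaches sqrt(2/pi) ~= 0.7979
```
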
Multi-Stage Variational Auto-Encoders for Coarse-to-Fine Image
Generation | cs.CV cs.LG | Variational auto-encoder (VAE) is a powerful unsupervised learning framework
for image generation. One drawback of VAE is that it generates blurry images
due to its Gaussianity assumption and thus L2 loss. To allow the generation of
high quality images by VAE, we increase the capacity of decoder network by
employing residual blocks and skip connections, which also enable efficient
optimization. To overcome the limitation of L2 loss, we propose to generate
images in a multi-stage manner from coarse to fine. In the simplest case, the
proposed multi-stage VAE divides the decoder into two components in which the
second component generates refined images based on the coarse images generated
by the first component. Since the second component is independent of the VAE
model, it can employ other loss functions beyond the L2 loss and different
model architectures. The proposed framework can be easily generalized to
contain more than two components. Experiment results on the MNIST and CelebA
datasets demonstrate that the proposed multi-stage VAE can generate sharper
images as compared to those from the original VAE.
| Lei Cai and Hongyang Gao and Shuiwang Ji | null | 1705.07202 | null | null |
Ensemble Adversarial Training: Attacks and Defenses | stat.ML cs.CR cs.LG | Adversarial examples are perturbed inputs designed to fool machine learning
models. Adversarial training injects such examples into training data to
increase robustness. To scale this technique to large datasets, perturbations
are crafted using fast single-step methods that maximize a linear approximation
of the model's loss. We show that this form of adversarial training converges
to a degenerate global minimum, wherein small curvature artifacts near the data
points obfuscate a linear approximation of the loss. The model thus learns to
generate weak perturbations, rather than defend against strong ones. As a
result, we find that adversarial training remains vulnerable to black-box
attacks, where we transfer perturbations computed on undefended models, as well
as to a powerful novel single-step attack that escapes the non-smooth vicinity
of the input data via a small random step. We further introduce Ensemble
Adversarial Training, a technique that augments training data with
perturbations transferred from other models. On ImageNet, Ensemble Adversarial
Training yields models with strong robustness to black-box attacks. In
particular, our most robust model won the first round of the NIPS 2017
competition on Defenses against Adversarial Attacks. However, subsequent work
found that more elaborate black-box attacks could significantly enhance
transferability and reduce the accuracy of our models.
| Florian Tram\`er, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow,
Dan Boneh, Patrick McDaniel | null | 1705.07204 | null | null |
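
A sketch of the single-step attack with a random start mentioned above (commonly known as R+FGSM): take a small random step to escape the non-smooth vicinity of the data point, then spend the remaining budget on a gradient-sign step. The budget split and values are illustrative.

```python
# R+FGSM sketch: random start followed by a single gradient-sign step.
import torch
import torch.nn.functional as F

def rfgsm(model, x, y, eps=8 / 255, alpha=2 / 255):
    x_prime = x + alpha * torch.randn_like(x).sign()   # random start
    x_prime = x_prime.clamp(0, 1).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_prime), y)
    grad = torch.autograd.grad(loss, x_prime)[0]
    x_adv = x_prime + (eps - alpha) * grad.sign()      # remaining budget
    return x_adv.clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
print(rfgsm(model, x, y).shape)                        # torch.Size([4, 1, 28, 28])
```
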
Machine learning modeling for time series problem: Predicting flight
ticket prices | cs.LG | Machine learning has been used in all kinds of fields. In this article, we
introduce how machine learning can be applied to time series problems, using
airline ticket price prediction as our specific problem. Airline companies use
many different variables to determine flight ticket prices: an indicator of
whether the travel is during the holidays, the number of free seats in the
plane, etc. Some of the variables are observed, but some of them are hidden.
Based on data over a 103-day period, we trained our models; the best one was
AdaBoost-Decision Tree Classification, which performed 61.35$\%$ better than
the random purchase strategy over the 8 observed routes, with relatively small
variance across these routes. We also considered the situation in which little
historical data is available for some routes (for example, when a route is new)
or in which we want a quick buy-or-wait prediction without training on
historical data; for this problem, we used HMM Sequence Classification based
AdaBoost-Decision Tree Classification to perform predictions on 12 new routes.
Finally, we obtained 31.71$\%$ better performance than the random purchase
strategy.
| Jun Lu | null | 1705.07205 | null | null |
PixColor: Pixel Recursive Colorization | cs.CV cs.LG | We propose a novel approach to automatically produce multiple colorized
versions of a grayscale image. Our method results from the observation that the
task of automated colorization is relatively easy given a low-resolution
version of the color image. We first train a conditional PixelCNN to generate a
low-resolution color image for a given grayscale image. Then, given the generated
low-resolution color image and the original grayscale image as inputs, we train
a second CNN to generate a high-resolution colorization of an image. We
demonstrate that our approach produces more diverse and plausible colorizations
than existing methods, as judged by human raters in a "Visual Turing Test".
| Sergio Guadarrama, Ryan Dahl, David Bieber, Mohammad Norouzi, Jonathon
Shlens, Kevin Murphy | null | 1705.07208 | null | null |
Two-temperature logistic regression based on the Tsallis divergence | cs.LG stat.ML | We develop a variant of multiclass logistic regression that is significantly
more robust to noise. The algorithm has one weight vector per class and the
surrogate loss is a function of the linear activations (one per class). The
surrogate loss of an example with linear activation vector $\mathbf{a}$ and
class $c$ has the form $-\log_{t_1} \exp_{t_2} (a_c - G_{t_2}(\mathbf{a}))$
where the two temperatures $t_1$ and $t_2$ ''temper'' the $\log$ and $\exp$,
respectively, and $G_{t_2}(\mathbf{a})$ is a scalar value that generalizes the
log-partition function. We motivate this loss using the Tsallis divergence. Our
method allows transitioning between non-convex and convex losses by the choice
of the temperature parameters. As the temperature $t_1$ of the logarithm
becomes smaller than the temperature $t_2$ of the exponential, the surrogate
loss becomes ''quasi-convex''. Various tunings of the temperatures recover
previous methods and tuning the degree of non-convexity is crucial in the
experiments. In particular, quasi-convexity and boundedness of the loss provide
significant robustness to the outliers. We explain this by showing that $t_1 <
1$ caps the surrogate loss and $t_2 >1$ makes the predictive distribution have
a heavy tail.
We show that the surrogate loss is Bayes-consistent, even in the non-convex
case. Additionally, we provide efficient iterative algorithms for calculating
the log-partition value in only a few iterations. Our compelling
experimental results on large real-world datasets show the advantage of using
the two-temperature variant in the noisy as well as the noise free case.
| Ehsan Amid, Manfred K. Warmuth, Sriram Srinivasan | null | 1705.0721 | null | null |
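
A sketch of the loss from the abstract: `exp_t` and `log_t` are the standard tempered functions, and the normalizer $G_{t_2}(\mathbf{a})$ is found here by bisection (the paper describes a faster iterative scheme), using that $\sum_c \exp_{t_2}(a_c - G) = 1$ is monotone decreasing in $G \ge \max_c a_c$. Temperatures and activations are illustrative.

```python
# Two-temperature logistic loss sketch with a bisection-based normalizer.
# (For t > 1, exp_t below is valid for the non-positive inputs used here.)
import numpy as np

def log_t(x, t):
    return np.log(x) if t == 1.0 else (x ** (1 - t) - 1) / (1 - t)

def exp_t(x, t):
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1 + (1 - t) * x, 0) ** (1 / (1 - t))

def G(a, t2, iters=80):
    lo, hi = np.max(a), np.max(a) + 20.0   # bracket wide enough for this demo
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if exp_t(a - mid, t2).sum() > 1 else (lo, mid)
    return 0.5 * (lo + hi)

def two_temp_loss(a, c, t1=0.7, t2=1.3):   # activations a, correct class c
    return -log_t(exp_t(a[c] - G(a, t2), t2), t1)

a = np.array([2.0, -1.0, 0.5])
print(two_temp_loss(a, 0))   # small loss: class 0 has the largest activation
print(two_temp_loss(a, 1))   # larger, but capped since t1 < 1 bounds -log_t
```
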
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial
Attacks with Moving Target Defense | cs.LG cs.CR cs.GT | Present attack methods can make state-of-the-art classification systems based
on deep neural networks misclassify every adversarially modified test example.
The design of general defense strategies against a wide range of such attacks
still remains a challenging problem. In this paper, we draw inspiration from
the fields of cybersecurity and multi-agent systems and propose to leverage the
concept of Moving Target Defense (MTD) in designing a meta-defense for
'boosting' the robustness of an ensemble of deep neural networks (DNNs) for
visual classification tasks against such adversarial attacks. To classify an
input image, a trained network is picked randomly from this set of networks by
formulating the interaction between a Defender (who hosts the classification
networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg
Game (BSG). We empirically show that this approach, MTDeep, reduces
misclassification on perturbed images in various datasets such as MNIST,
FashionMNIST, and ImageNet while maintaining high classification accuracy on
legitimate test images. We then demonstrate that our framework, being the first
meta-defense technique, can be used in conjunction with any existing defense
mechanism to provide more resilience against adversarial attacks that can be
afforded by these defense mechanisms. Lastly, to quantify the increase in
robustness of an ensemble-based classification system when we use MTDeep, we
analyze the properties of a set of DNNs and introduce the concept of
differential immunity that formalizes the notion of attack transferability.
| Sailik Sengupta, Tathagata Chakraborti, Subbarao Kambhampati | null | 1705.07213 | null | null |
On Convergence and Stability of GANs | cs.AI cs.CV cs.GT cs.LG cs.NE | We propose studying GAN training dynamics as regret minimization, which is in
contrast to the popular view that there is consistent minimization of a
divergence between real and generated distributions. We analyze the convergence
of GAN training from this new point of view to understand why mode collapse
happens. We hypothesize the existence of undesirable local equilibria in this
non-convex game to be responsible for mode collapse. We observe that these
local equilibria often exhibit sharp gradients of the discriminator function
around some real data points. We demonstrate that these degenerate local
equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show
that DRAGAN enables faster training, achieves improved stability with fewer
mode collapses, and leads to generator networks with better modeling
performance across a variety of architectures and objective functions.
| Naveen Kodali, Jacob Abernethy, James Hays, Zsolt Kira | null | 1705.07215 | null | null |
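
One commonly used form of the DRAGAN gradient penalty can be sketched as follows; the perturbation scale (half the batch standard deviation) and the unit target norm follow the widely circulated formulation and should be treated as assumptions, not the paper's exact recipe.

```python
# DRAGAN-style gradient penalty: penalize the discriminator's gradient norm
# at random perturbations of *real* data points, where sharp gradients were
# observed around local equilibria.
import torch

def dragan_penalty(discriminator, x_real, lam=10.0):
    noise = 0.5 * x_real.std() * torch.rand_like(x_real)
    x_hat = (x_real + noise).detach().requires_grad_(True)
    d_out = discriminator(x_hat)
    grads = torch.autograd.grad(d_out.sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 1))
print(dragan_penalty(D, torch.rand(8, 1, 28, 28)))
```
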
GAR: An efficient and scalable Graph-based Activity Regularization for
semi-supervised learning | cs.LG | In this paper, we propose a novel graph-based approach for semi-supervised
learning problems, which considers an adaptive adjacency of the examples
throughout the unsupervised portion of the training. Adjacency of the examples
is inferred using the predictions of a neural network model which is first
initialized by a supervised pretraining. These predictions are then updated
according to a novel unsupervised objective which regularizes another
adjacency, now linking the output nodes. Regularizing the adjacency of the
output nodes, inferred from the predictions of the network, creates an easier
optimization problem and ensures that the predictions of the
network turn into the optimal embedding. Ultimately, the proposed framework
provides an effective and scalable graph-based solution which is natural to the
operational mechanism of deep neural networks. Our results show comparable
performance with state-of-the-art generative approaches for semi-supervised
learning on an easier-to-train, low-cost framework.
| Ozsel Kilinc, Ismail Uysal | 10.1016/j.neucom.2018.03.028 | 1705.07219 | null | null |
AIDE: An algorithm for measuring the accuracy of probabilistic inference
algorithms | stat.ML cs.AI cs.LG | Approximate probabilistic inference algorithms are central to many fields.
Examples include sequential Monte Carlo inference in robotics, variational
inference in machine learning, and Markov chain Monte Carlo inference in
statistics. A key problem faced by practitioners is measuring the accuracy of
an approximate inference algorithm on a specific data set. This paper
introduces the auxiliary inference divergence estimator (AIDE), an algorithm
for measuring the accuracy of approximate inference algorithms. AIDE is based
on the observation that inference algorithms can be treated as probabilistic
models and the random variables used within the inference algorithm can be
viewed as auxiliary variables. This view leads to a new estimator for the
symmetric KL divergence between the approximating distributions of two
inference algorithms. The paper illustrates application of AIDE to algorithms
for inference in regression, hidden Markov, and Dirichlet process mixture
models. The experiments show that AIDE captures the qualitative behavior of a
broad class of inference algorithms and can detect failure modes of inference
algorithms that are missed by standard heuristics.
| Marco F. Cusumano-Towner, Vikash K. Mansinghka | null | 1705.07224 | null | null |
Speedup from a different parametrization within the Neural Network
algorithm | cs.LG | A different parametrization of the hyperplanes is used in the neural network
algorithm. As demonstrated on several autoencoder examples it significantly
outperforms the usual parametrization, reaching lower training error values
with only a fraction of the number of epochs. We argue that this
parametrization also makes the parameters easier to understand and initialize.
| Michael F. Zimmer | null | 1705.0725 | null | null |
SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms | cs.LG cs.NA | We study two important SVM variants: hard-margin SVM (for linearly separable
cases) and $\nu$-SVM (for linearly non-separable cases). We propose new
algorithms from the perspective of saddle point optimization. Our algorithms
achieve $(1-\epsilon)$-approximations with running time $\tilde{O}(nd+n\sqrt{d
/ \epsilon})$ for both variants, where $n$ is the number of points and $d$ is
the dimensionality. To the best of our knowledge, the current best algorithm
for $\nu$-SVM is based on quadratic programming approach which requires
$\Omega(n^2 d)$ time in worst case~\cite{joachims1998making,platt199912}. In
the paper, we provide the first nearly linear time algorithm for $\nu$-SVM. The
current best algorithm for hard-margin SVM, the Gilbert
algorithm~\cite{gartner2009coresets}, requires $O(nd / \epsilon )$ time. Our
algorithm improves the running time by a factor of $\sqrt{d}/\sqrt{\epsilon}$.
Moreover, our algorithms can be implemented in the distributed settings
naturally. We prove that our algorithms require $\tilde{O}(k(d
+\sqrt{d/\epsilon}))$ communication cost, where $k$ is the number of clients,
which almost matches the theoretical lower bound. Numerical experiments support
our theory and show that our algorithms converge faster on high dimensional,
large and dense data sets, as compared to previous methods.
| Yifei Jin and Lingxiao Huang and Jian Li | null | 1705.07252 | null | null |
Learning Feature Nonlinearities with Non-Convex Regularized Binned
Regression | cs.LG cs.IT math.IT math.OC stat.ML | For various applications, the relations between the dependent and independent
variables are highly nonlinear. Consequently, for large scale complex problems,
neural networks and regression trees are commonly preferred over linear models
such as Lasso. This work proposes learning the feature nonlinearities by
binning feature values and finding the best fit in each quantile using
non-convex regularized linear regression. The algorithm first captures the
dependence between neighboring quantiles by enforcing smoothness via
piecewise-constant/linear approximation and then selects a sparse subset of
good features. We prove that the proposed algorithm is statistically and
computationally efficient. In particular, it achieves linear rate of
convergence while requiring near-minimal number of samples. Evaluations on
synthetic and real datasets demonstrate that algorithm is competitive with
current state-of-the-art and accurately learns feature nonlinearities. Finally,
we explore an interesting connection between the binning stage of our algorithm
and sparse Johnson-Lindenstrauss matrices.
| Samet Oymak, Mehrdad Mahdavi, Jiasi Chen | null | 1705.07256 | null | null |
Stochastic Recursive Gradient Algorithm for Nonconvex Optimization | stat.ML cs.LG math.OC | In this paper, we study and analyze the mini-batch version of StochAstic
Recursive grAdient algoritHm (SARAH), a method employing the stochastic
recursive gradient, for solving empirical loss minimization for the case of
nonconvex losses. We provide a sublinear convergence rate (to stationary
points) for general nonconvex functions and a linear convergence rate for
gradient dominated functions, both of which have some advantages compared to
other modern stochastic gradient algorithms for nonconvex losses.
| Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Tak\'a\v{c} | null | 1705.07261 | null | null |
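
The recursive gradient update is easy to state in code; below is a minimal sketch on a least-squares objective, with step size and loop lengths chosen for illustration.

```python
# Minimal SARAH loop on least squares: the recursive estimate
#   v_t = grad_i(w_t) - grad_i(w_{t-1}) + v_{t-1}
# replaces the plain stochastic gradient between full-gradient snapshots.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

def grad_i(w, i):                       # gradient of 0.5 * (a_i.w - b_i)^2
    return (A[i] @ w - b[i]) * A[i]

w = np.zeros(d)
eta, inner = 0.05, n
for epoch in range(30):
    v = (A.T @ (A @ w - b)) / n         # full gradient at the snapshot
    w_prev, w = w, w - eta * v
    for _ in range(inner):
        i = rng.integers(n)
        v = grad_i(w, i) - grad_i(w_prev, i) + v   # recursive update
        w_prev, w = w, w - eta * v
print(0.5 * np.mean((A @ w - b) ** 2))  # near the least-squares optimum
```
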
Batch Reinforcement Learning on the Industrial Benchmark: First
Experiences | cs.LG cs.AI cs.NE cs.SY | The Particle Swarm Optimization Policy (PSO-P) has been recently introduced
and proven to produce remarkable results on interacting with academic
reinforcement learning benchmarks in an off-policy, batch-based setting. To
further investigate the properties and feasibility on real-world applications,
this paper investigates PSO-P on the so-called Industrial Benchmark (IB), a
novel reinforcement learning (RL) benchmark that aims at being realistic by
including a variety of aspects found in industrial applications, like
continuous state and action spaces, a high dimensional, partially observable
state space, delayed effects, and complex stochasticity. The experimental
results of PSO-P on IB are compared to results of closed-form control policies
derived from the model-based Recurrent Control Neural Network (RCNN) and the
model-free Neural Fitted Q-Iteration (NFQ). Experiments show that PSO-P is not
only of interest for academic benchmarks, but also for real-world industrial
applications, since it also yielded the best performing policy in our IB
setting. Compared to other well established RL techniques, PSO-P produced
outstanding results in performance and robustness, requiring only a relatively
low amount of effort in finding adequate parameters or making complex design
decisions.
| Daniel Hein, Steffen Udluft, Michel Tokic, Alexander Hentschel, Thomas
A. Runkler, Volkmar Sterzing | 10.1109/IJCNN.2017.7966389 | 1705.07262 | null | null |
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection
Methods | cs.LG cs.CR cs.CV | Neural networks are known to be vulnerable to adversarial examples: inputs
that are close to natural inputs but classified incorrectly. In order to better
understand the space of adversarial examples, we survey ten recent proposals
that are designed for detection and compare their efficacy. We show that all
can be defeated by constructing new loss functions. We conclude that
adversarial examples are significantly harder to detect than previously
appreciated, and the properties believed to be intrinsic to adversarial
examples are in fact not. Finally, we propose several simple guidelines for
evaluating future proposed defenses.
| Nicholas Carlini, David Wagner | null | 1705.07263 | null | null |
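The attack pattern surveyed above can be summarized in a few lines: fold the detector's score into the adversary's objective, so gradient descent fools the classifier and the detector simultaneously. This is a schematic sketch; `classifier_loss_grad` and `detector_score_grad` are hypothetical callables standing in for model-specific gradients.

```python
# Schematic of the "new loss function" attack: descend on a combined objective
# so the example is misclassified *and* scored as benign by the detector.
import numpy as np

def attack(x, target, classifier_loss_grad, detector_score_grad,
           steps=100, step_size=0.01, c=1.0):
    x_adv = x.copy()
    for _ in range(steps):
        g = classifier_loss_grad(x_adv, target) + c * detector_score_grad(x_adv)
        x_adv = np.clip(x_adv - step_size * np.sign(g), 0.0, 1.0)
    return x_adv
```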
Search Engine Guided Non-Parametric Neural Machine Translation | cs.CL cs.AI cs.LG | In this paper, we extend an attention-based neural machine translation (NMT)
model by allowing it to access an entire training set of parallel sentence
pairs even after training. The proposed approach consists of two stages. In the
first stage--retrieval stage--, an off-the-shelf, black-box search engine is
used to retrieve a small subset of sentence pairs from a training set given a
source sentence. These pairs are further filtered based on a fuzzy matching
score based on edit distance. In the second stage--translation stage--, a novel
translation model, called translation memory enhanced NMT (TM-NMT), seamlessly
uses both the source sentence and a set of retrieved sentence pairs to perform
the translation. Empirical evaluation on three language pairs (En-Fr, En-De,
and En-Es) shows that the proposed approach significantly outperforms the
baseline approach and that the improvement is more significant when more relevant
sentence pairs are retrieved.
| Jiatao Gu, Yong Wang, Kyunghyun Cho and Victor O.K. Li | null | 1705.07267 | null | null |
Learning to Factor Policies and Action-Value Functions: Factored Action
Space Representations for Deep Reinforcement learning | cs.LG cs.AI | Deep Reinforcement Learning (DRL) methods have performed well in an
increasing number of high-dimensional visual decision making domains. Among
all such visual decision making problems, those with discrete action spaces
often tend to have underlying compositional structure in the said action space.
Such action spaces often contain actions such as go left, go up as well as go
diagonally up and left (which is a composition of the former two actions). The
representations of control policies in such domains have traditionally been
modeled without exploiting this inherent compositional structure in the action
spaces. We propose a new learning paradigm, Factored Action space
Representations (FAR) wherein we decompose a control policy learned using a
Deep Reinforcement Learning Algorithm into independent components, analogous to
decomposing a vector in terms of some orthogonal basis vectors. This
architectural modification of the control policy representation allows the
agent to learn about multiple actions simultaneously, while executing only one
of them. We demonstrate that FAR yields considerable improvements on top of two
DRL algorithms in Atari 2600: FARA3C outperforms A3C (Asynchronous Advantage
Actor Critic) in 9 out of 14 tasks and FARAQL outperforms AQL (Asynchronous
n-step Q-Learning) in 9 out of 13 tasks.
| Sahil Sharma, Aravind Suresh, Rahul Ramesh, Balaraman Ravindran | null | 1705.07269 | null | null |
Deep Sparse Coding Using Optimized Linear Expansion of Thresholds | cs.LG | We address the problem of reconstructing sparse signals from noisy and
compressive measurements using a feed-forward deep neural network (DNN) with an
architecture motivated by the iterative shrinkage-thresholding algorithm
(ISTA). We maintain the weights and biases of the network links as prescribed
by ISTA and model the nonlinear activation function using a linear expansion of
thresholds (LET), which has been very successful in image denoising and
deconvolution. The optimal set of coefficients of the parametrized activation
is learned over a training dataset containing measurement-sparse signal pairs,
corresponding to a fixed sensing matrix. For training, we develop an efficient
second-order algorithm, which requires only matrix-vector product computations
in every training epoch (Hessian-free optimization) and offers convergence
performance superior to gradient-descent optimization. Subsequently, we
derive an improved network architecture inspired by FISTA, a faster version of
ISTA, to achieve similar signal estimation performance with about 50% of the
number of layers. The resulting architecture turns out to be a deep residual
network, which has recently been shown to exhibit superior performance in
several visual recognition tasks. Numerical experiments demonstrate that the
proposed DNN architectures lead to 3 to 4 dB improvement in the reconstruction
signal-to-noise ratio (SNR), compared with the state-of-the-art sparse coding
algorithms.
| Debabrata Mahapatra, Subhadip Mukherjee, and Chandra Sekhar
Seelamantula | null | 1705.0729 | null | null |
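A sketch of the underlying construction: unroll ISTA iterations into network layers with weights fixed as ISTA prescribes. The paper's contribution is to replace the single soft threshold below with a learned linear expansion of thresholds (LET) trained by a Hessian-free second-order method; the sizes and sparsity level here are illustrative.

```python
# Unrolled ISTA: each iteration x <- soft(Sx + Wy, lam/L) is one network layer,
# with S = I - A^T A / L and W = A^T / L fixed by the sensing matrix.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_ista(y, A, lam=0.1, layers=100):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    W = A.T / L
    S = np.eye(A.shape[1]) - A.T @ A / L
    x = np.zeros(A.shape[1])
    for _ in range(layers):                # each iteration = one network layer
        x = soft_threshold(S @ x + W @ y, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100); x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = unrolled_ista(A @ x_true, A)
# Should report the 5-sparse support (up to small spurious entries).
print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))
```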
Lower Bound On the Computational Complexity of Discounted Markov
Decision Problems | cs.CC cs.LG | We study the computational complexity of the infinite-horizon
discounted-reward Markov Decision Problem (MDP) with a finite state space
$|\mathcal{S}|$ and a finite action space $|\mathcal{A}|$. We show that any
randomized algorithm needs a running time at least
$\Omega(|\mathcal{S}|^2|\mathcal{A}|)$ to compute an $\epsilon$-optimal policy
with high probability. We consider two variants of the MDP where the input is
given in specific data structures, including arrays of cumulative probabilities
and binary trees of transition probabilities. For these cases, we show that the
complexity lower bound reduces to $\Omega\left( \frac{|\mathcal{S}|
|\mathcal{A}|}{\epsilon} \right)$. These results reveal the surprising
observation that the computational complexity of the MDP depends on the data
structure of input.
| Yichen Chen and Mengdi Wang | null | 1705.07312 | null | null |
Ensemble Sampling | stat.ML cs.AI cs.LG | Thompson sampling has emerged as an effective heuristic for a broad range of
online decision problems. In its basic form, the algorithm requires computing
and sampling from a posterior distribution over models, which is tractable only
for simple special cases. This paper develops ensemble sampling, which aims to
approximate Thompson sampling while maintaining tractability even in the face
of complex models such as neural networks. Ensemble sampling dramatically
expands on the range of applications for which Thompson sampling is viable. We
establish a theoretical basis that supports the approach and present
computational results that offer further insight.
| Xiuyuan Lu, Benjamin Van Roy | null | 1705.07347 | null | null |
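A minimal ensemble-sampling sketch on a Bernoulli bandit: maintain M perturbed model copies trained on shared data, sample one uniformly each round, and act greedily under it. The ensemble size, perturbation scale, and update rule are illustrative assumptions standing in for the paper's general construction.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.7])
K, M = len(true_means), 10
models = rng.normal(0.5, 0.3, size=(M, K))    # perturbed initial estimates
counts = np.ones((M, K))

for t in range(2000):
    m = rng.integers(M)                       # sample one ensemble member
    arm = int(np.argmax(models[m]))           # act greedily under it
    reward = float(rng.random() < true_means[arm])
    # Each member fits the shared data plus its own perturbation, approximated
    # here by a noisy incremental mean update per member.
    for j in range(M):
        counts[j, arm] += 1
        noisy_r = reward + rng.normal(0, 0.2)
        models[j, arm] += (noisy_r - models[j, arm]) / counts[j, arm]

print("estimated best arm:", int(np.argmax(models.mean(axis=0))))
```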
$\left( \beta, \varpi \right)$-stability for cross-validation and the
choice of the number of folds | stat.ML cs.LG math.ST stat.TH | In this paper, we introduce a new concept of stability for cross-validation,
called the $\left( \beta, \varpi \right)$-stability, and use it as a new
perspective to build the general theory for cross-validation. The $\left(
\beta, \varpi \right)$-stability mathematically connects the generalization
ability and the stability of the cross-validated model via the Rademacher
complexity. Our result reveals mathematically the effect of cross-validation
from two sides: on one hand, cross-validation picks the model with the best
empirical generalization ability by validating all the alternatives on test
sets; on the other hand, cross-validation may compromise the stability of the
model selection by causing subsampling error. Moreover, the difference between
training and test errors in the $q$\textsuperscript{th} round, sometimes referred to
as the generalization error, might be autocorrelated in $q$. Guided by the ideas
above, the $\left( \beta, \varpi \right)$-stability helps us derive a new class
of Rademacher bounds, referred to as the one-round/convoluted Rademacher
bounds, for the stability of cross-validation in both the i.i.d.\ and
non-i.i.d.\ cases. For both light-tail and heavy-tail losses, the new bounds
quantify the stability of the one-round/average test error of the
cross-validated model in terms of its one-round/average training error, the
sample size $n$, the number of folds $K$, the tail property of the loss (encoded
as Orlicz-$\Psi_\nu$ norms) and the Rademacher complexity of the model class
$\Lambda$. The new class of bounds not only quantitatively reveals the
stability of the generalization ability of the cross-validated model, but also
shows empirically the optimal choice for the number of folds $K$, at which the
upper bound of the one-round/average test error is lowest, or, to put it
another way, where the test error is most stable.
| Ning Xu, Jian Hong, Timothy C.G. Fisher | null | 1705.07349 | null | null |
Stabilizing Adversarial Nets With Prediction Methods | cs.LG cs.CV cs.NA | Adversarial neural networks solve many important problems in data science,
but are notoriously difficult to train. These difficulties come from the fact
that optimal weights for adversarial nets correspond to saddle points, and not
minimizers, of the loss function. The alternating stochastic gradient methods
typically used for such problems do not reliably converge to saddle points, and
when convergence does happen it is often highly sensitive to learning rates. We
propose a simple modification of stochastic gradient descent that stabilizes
adversarial networks. We show, both in theory and practice, that the proposed
method reliably converges to saddle points, and is stable with a wider range of
training parameters than a non-prediction method. This makes adversarial
networks less likely to "collapse," and enables faster training with larger
learning rates.
| Abhay Yadav, Sohil Shah, Zheng Xu, David Jacobs and Tom Goldstein | null | 1705.07364 | null | null |
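The prediction step, sketched on the toy saddle problem min_x max_y xy, where plain alternating gradient steps spiral outward: the second player responds to an extrapolated ("predicted") copy of the first player's iterate. The step size is an illustrative choice.

```python
x, y = 1.0, 1.0
lr = 0.2
for t in range(1000):
    x_new = x - lr * y                 # generator step: descend d/dx (x*y) = y
    x_bar = x_new + (x_new - x)        # prediction (lookahead) of the next x
    x = x_new
    y = y + lr * x_bar                 # discriminator ascends against the predicted x
print(f"x = {x:.2e}, y = {y:.2e}")     # both spiral in to the saddle at (0, 0)
```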
Forward Thinking: Building Deep Random Forests | stat.ML cs.LG | The success of deep neural networks has inspired many to wonder whether other
learners could benefit from deep, layered architectures. We present a general
framework called forward thinking for deep learning that generalizes the
architectural flexibility and sophistication of deep neural networks while also
allowing for (i) different types of learning functions in the network, other
than neurons, and (ii) the ability to adaptively deepen the network as needed
to improve results. This is done by training one layer at a time, and once a
layer is trained, the input data are mapped forward through the layer to create
a new learning problem. The process is then repeated, transforming the data
through multiple layers, one at a time, rendering a new dataset, which is
expected to be better behaved, and on which a final output layer can achieve
good performance. In the case where the neurons of deep neural nets are
replaced with decision trees, we call the result a Forward Thinking Deep Random
Forest (FTDRF). We demonstrate a proof of concept by applying FTDRF on the
MNIST dataset. We also provide a general mathematical formulation that allows
for other types of deep learning problems to be considered.
| Kevin Miller, Chris Hettinger, Jeffrey Humpherys, Tyler Jarvis, and
David Kartchner | null | 1705.07366 | null | null |
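A layer-by-layer "forward thinking" sketch with random forests: train one layer, map the data forward by appending its class probabilities to the representation, and repeat. Depth, widths, and dataset are illustrative assumptions; in practice out-of-fold probabilities would be used on the training set to avoid the leakage this naive version has.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

Htr, Hte = Xtr, Xte
for depth in range(3):                           # train one layer at a time
    layer = RandomForestClassifier(n_estimators=100, random_state=depth)
    layer.fit(Htr, ytr)
    if depth < 2:                                # map data forward through the layer
        Htr = np.hstack([Htr, layer.predict_proba(Htr)])
        Hte = np.hstack([Hte, layer.predict_proba(Hte)])
print("test accuracy:", layer.score(Hte, yte))
```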
Mixed Membership Word Embeddings for Computational Social Science | cs.CL cs.AI cs.LG | Word embeddings improve the performance of NLP systems by revealing the
hidden structural relationships between words. Despite their success in many
applications, word embeddings have seen very little use in computational social
science NLP tasks, presumably due to their reliance on big data, and to a lack
of interpretability. I propose a probabilistic model-based word embedding
method which can recover interpretable embeddings, without big data. The key
insight is to leverage mixed membership modeling, in which global
representations are shared, but individual entities (i.e. dictionary words) are
free to use these representations to uniquely differing degrees. I show how to
train the model using a combination of state-of-the-art training techniques for
word embeddings and topic models. The experimental results show an improvement
in predictive language modeling of up to 63% in MRR over the skip-gram, and
demonstrate that the representations are beneficial for supervised learning. I
illustrate the interpretability of the models with computational social science
case studies on State of the Union addresses and NIPS articles.
| James Foulds | null | 1705.07368 | null | null |
Instrument-Armed Bandits | stat.ML cs.LG | We extend the classic multi-armed bandit (MAB) model to the setting of
noncompliance, where the arm pull is a mere instrument and the treatment
applied may differ from it, which gives rise to the instrument-armed bandit
(IAB) problem. The IAB setting is relevant whenever the experimental units are
human since free will, ethics, and the law may prohibit unrestricted or forced
application of treatment. In particular, the setting is relevant in bandit
models of dynamic clinical trials and other controlled trials on human
interventions. Nonetheless, the setting has not been fully investigated in the
bandit literature. We show that there are various and divergent notions of
regret in this setting, all of which coincide only in the classic MAB setting.
We characterize the behavior of these regrets and analyze standard MAB
algorithms. We argue for a particular kind of regret that captures the causal
effect of treatments but show that standard MAB algorithms cannot achieve
sublinear control on this regret. Instead, we develop new algorithms for the
IAB problem, prove new regret bounds for them, and compare them to standard MAB
algorithms in numerical examples.
| Nathan Kallus | null | 1705.07377 | null | null |
Balanced Policy Evaluation and Learning | stat.ML cs.LG math.OC | We present a new approach to the problems of evaluating and learning
personalized decision policies from observational data of past contexts,
decisions, and outcomes. Only the outcome of the enacted decision is available
and the historical policy is unknown. These problems arise in personalized
medicine using electronic health records and in internet advertising. Existing
approaches use inverse propensity weighting (or, doubly robust versions) to
make historical outcome (or, residual) data look as if it were generated by a
new policy being evaluated or learned. But this relies on a plug-in approach
that rejects data points with a decision that disagrees with the new policy,
leading to high variance estimates and ineffective learning. We propose a new,
balance-based approach that also makes the data look like the new policy but
does so directly by finding weights that optimize for balance between the
weighted data and the target policy in the given, finite sample, which is
equivalent to minimizing worst-case or posterior conditional mean square error.
Our policy learner proceeds as a two-level optimization problem over policies
and weights. We demonstrate that this approach markedly outperforms existing
ones both in evaluation and learning, which is unsurprising given the wider
support of balance-based weights. We establish extensive theoretical
consistency guarantees and regret bounds that support this empirical success.
| Nathan Kallus | null | 1705.07384 | null | null |
DeepMasterPrints: Generating MasterPrints for Dictionary Attacks via
Latent Variable Evolution | cs.CV cs.CR cs.LG | Recent research has demonstrated the vulnerability of fingerprint recognition
systems to dictionary attacks based on MasterPrints. MasterPrints are real or
synthetic fingerprints that can fortuitously match with a large number of
fingerprints thereby undermining the security afforded by fingerprint systems.
Previous work by Roy et al. generated synthetic MasterPrints at the
feature-level. In this work we generate complete image-level MasterPrints known
as DeepMasterPrints, whose attack accuracy is found to be much higher than
that of previous methods. The proposed method, referred to as Latent Variable
Evolution, is based on training a Generative Adversarial Network on a set of
real fingerprint images. Stochastic search in the form of the Covariance Matrix
Adaptation Evolution Strategy is then used to search for latent input variables
to the generator network that can maximize the number of impostor matches as
assessed by a fingerprint recognizer. Experiments convey the efficacy of the
proposed method in generating DeepMasterPrints. The underlying method is likely
to have broad applications in fingerprint security as well as fingerprint
synthesis.
| Philip Bontrager, Aditi Roy, Julian Togelius, Nasir Memon, Arun Ross | null | 1705.07386 | null | null |
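Latent Variable Evolution, schematically: search the generator's latent space for inputs whose outputs maximize a black-box matcher score. A simple (mu, lambda) evolution strategy stands in for CMA-ES here, and `generator` and `match_score` are hypothetical callables for a trained GAN and a fingerprint recognizer.

```python
import numpy as np

def latent_variable_evolution(generator, match_score, dim=100,
                              pop=32, parents=8, sigma=0.5, gens=200):
    rng = np.random.default_rng(0)
    mean = np.zeros(dim)
    for _ in range(gens):
        Z = mean + sigma * rng.standard_normal((pop, dim))   # candidate latents
        scores = np.array([match_score(generator(z)) for z in Z])
        elite = Z[np.argsort(scores)[-parents:]]             # keep the best
        mean = elite.mean(axis=0)                            # recombine
        sigma *= 0.99                                        # slowly anneal
    return generator(mean)
```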
Convergence of backpropagation with momentum for network architectures
with skip connections | cs.CV cs.LG | We study a class of deep neural networks whose connections form a directed
acyclic graph (DAG). For backpropagation defined by gradient descent with
adaptive momentum, we show weights converge for a large class of nonlinear
activation functions. The proof generalizes the results of Wu et al. (2008) who
showed convergence for a feed-forward network with one hidden layer. To
illustrate the effectiveness of DAG architectures, we describe an example of
compression through an autoencoder, and compare against sequential feed-forward
networks under several metrics.
| Chirag Agarwal, Joe Klobusicky, and Dan Schonfeld | null | 1705.07404 | null | null |
Unfolding Hidden Barriers by Active Enhanced Sampling | physics.chem-ph cond-mat.stat-mech cs.LG | Collective variable (CV) or order parameter based enhanced sampling
algorithms have achieved great success due to their ability to efficiently
explore the rough potential energy landscapes of complex systems. However, the
degeneracy of microscopic configurations, originating from the orthogonal space
perpendicular to the CVs, is likely to shadow "hidden barriers" and greatly
reduce the efficiency of CV-based sampling. Here we demonstrate that systematic
machine learning of CVs, through enhanced sampling, can iteratively lift such
degeneracies on the fly. We introduce an active learning scheme that consists
of a parametric CV learner based on a deep neural network and a CV-based enhanced
sampler. Our active enhanced sampling (AES) algorithm is capable of identifying
the least informative regions based on a historical sample, forming a positive
feedback loop between the CV learner and sampler. This approach is able to
globally preserve kinetic characteristics by incrementally enhancing both
sample completeness and CV quality.
| Jing Zhang and Ming Chen | 10.1103/PhysRevLett.121.010601 | 1705.07414 | null | null |
Learning Semantic Relatedness From Human Feedback Using Metric Learning | cs.CL cs.LG | Assessing the degree of semantic relatedness between words is an important
task with a variety of semantic applications, such as ontology learning for the
Semantic Web, semantic search or query expansion. To accomplish this in an
automated fashion, many relatedness measures have been proposed. However, most
of these metrics only encode information contained in the underlying corpus and
thus do not directly model human intuition. To solve this, we propose to
utilize a metric learning approach to improve existing semantic relatedness
measures by learning from additional information, such as explicit human
feedback. For this, we argue to use word embeddings instead of traditional
high-dimensional vector representations in order to leverage their semantic
density and to reduce computational cost. We rigorously test our approach on
several domains including tagging data as well as publicly available embeddings
based on Wikipedia texts and navigation. Human feedback about semantic
relatedness for learning and evaluation is extracted from publicly available
datasets such as MEN or WS-353. We find that our method can significantly
improve semantic relatedness measures by learning from additional information,
such as explicit human feedback. For tagging data, we are the first to generate
and study embeddings. Our results are of special interest for ontology and
recommendation engineers, but also for any other researchers and practitioners
of Semantic Web techniques.
| Thomas Niebler, Martin Becker, Christian P\"olitz, Andreas Hotho | null | 1705.07425 | null | null |
Parallel Streaming Wasserstein Barycenters | cs.LG math.OC stat.CO stat.ML | Efficiently aggregating data from different sources is a challenging problem,
particularly when samples from each source are distributed differently. These
differences can be inherent to the inference task or present for other reasons:
sensors in a sensor network may be placed far apart, affecting their individual
measurements. Conversely, it is computationally advantageous to split Bayesian
inference tasks across subsets of data, but data need not be identically
distributed across subsets. One principled way to fuse probability
distributions is via the lens of optimal transport: the Wasserstein barycenter
is a single distribution that summarizes a collection of input measures while
respecting their geometry. However, computing the barycenter scales poorly and
requires discretization of all input distributions and the barycenter itself.
Improving on this situation, we present a scalable, communication-efficient,
parallel algorithm for computing the Wasserstein barycenter of arbitrary
distributions. Our algorithm can operate directly on continuous input
distributions and is optimized for streaming data. Our method is even robust to
nonstationary input distributions and produces a barycenter estimate that
tracks the input measures over time. The algorithm is semi-discrete, needing to
discretize only the barycenter estimate. To the best of our knowledge, we also
provide the first bounds on the quality of the approximate barycenter as the
discretization becomes finer. Finally, we demonstrate the practical
effectiveness of our method, both in tracking moving distributions on a sphere,
as well as in a large-scale Bayesian inference task.
| Matthew Staib, Sebastian Claici, Justin Solomon, Stefanie Jegelka | null | 1705.07443 | null | null |
Learning to Mix n-Step Returns: Generalizing lambda-Returns for Deep
Reinforcement Learning | cs.LG cs.AI | Reinforcement Learning (RL) can model complex behavior policies for
goal-directed sequential decision making tasks. A hallmark of RL algorithms is
Temporal Difference (TD) learning: value function for the current state is
moved towards a bootstrapped target that is estimated using next state's value
function. $\lambda$-returns generalize beyond 1-step returns and strike a
balance between Monte Carlo and TD learning methods. While $\lambda$-returns have
been extensively studied in RL, they have not been explored much in deep RL.
This paper's first contribution is an exhaustive benchmarking of
$\lambda$-returns. Although mathematically tractable, the use of exponentially
decaying weighting of n-step return-based targets in $\lambda$-returns is a
rather ad-hoc design choice. Our second major contribution is that we propose a
generalization of $\lambda$-returns called Confidence-based Autodidactic Returns
(CAR), wherein the RL agent learns the weighting of the n-step returns in an
end-to-end manner. This allows the agent to learn to decide how much it wants
to weigh the n-step return-based targets. In contrast, $\lambda$-returns restrict
RL agents to an exponentially decaying weighting scheme. Autodidactic
returns can be used to improve any RL algorithm that uses TD learning. We
empirically demonstrate that using sophisticated weighted mixtures of
multi-step returns (like CAR and $\lambda$-returns) considerably outperforms the
use of n-step returns. We perform our experiments on the Asynchronous Advantage
Actor Critic (A3C) algorithm in the Atari 2600 domain.
| Sahil Sharma, Girish Raguvir J, Srivatsan Ramesh, Balaraman Ravindran | null | 1705.07445 | null | null |
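The fixed exponential weighting of n-step returns that $\lambda$-returns use (and that CAR replaces with learned weights), computed via the standard backward recursion; gamma and lambda values are illustrative.

```python
# Backward recursion: G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1}),
# with values[-1] bootstrapping the final state.
import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.9):
    G = np.empty(len(rewards))
    g = values[-1]
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * g)
        G[t] = g
    return G

print(lambda_returns(np.ones(5), np.zeros(6)))
```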
Shallow Updates for Deep Reinforcement Learning | cs.AI cs.LG stat.ML | Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN)
have achieved state-of-the-art results in a variety of challenging,
high-dimensional domains. This success is mainly attributed to the power of
deep neural networks to learn rich domain representations for approximating the
value function or policy. Batch reinforcement learning methods with linear
representations, on the other hand, are more stable and require less hyper
parameter tuning. Yet, substantial feature engineering is necessary to achieve
good results. In this work we propose a hybrid approach -- the Least Squares
Deep Q-Network (LS-DQN), which combines rich feature representations learned by
a DRL algorithm with the stability of a linear least squares method. We do this
by periodically re-training the last hidden layer of a DRL network with a batch
least squares update. Key to our approach is a Bayesian regularization term for
the least squares update, which prevents over-fitting to the more recent data.
We tested LS-DQN on five Atari games and demonstrate significant improvement
over vanilla DQN and Double-DQN. We also investigated the reasons for the
superior performance of our method. Interestingly, we found that the
performance improvement can be attributed to the large batch size used by the
LS method when optimizing the last layer.
| Nir Levine, Tom Zahavy, Daniel J. Mankowitz, Aviv Tamar, Shie Mannor | null | 1705.07461 | null | null |
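The last-layer refit in LS-DQN, schematically: ridge regression of the TD targets on the last hidden layer's features, with the Bayesian regularizer shrinking toward the network's current weights. Shapes and the regularization strength are illustrative assumptions.

```python
import numpy as np

def last_layer_ls_update(Phi, targets, w_current, lam=1.0):
    """Phi: (batch, features) last-hidden-layer activations.
    targets: (batch,) TD targets.  w_current: (features,) current weights."""
    d = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(d)
    b = Phi.T @ targets + lam * w_current    # prior centered at the current weights
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((256, 32))
w0 = rng.standard_normal(32)
# Sanity check: with consistent targets, the refit recovers w0.
print(last_layer_ls_update(Phi, Phi @ w0, w0).round(2)[:5])
```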
Why are Big Data Matrices Approximately Low Rank? | cs.LG stat.ML | Matrices of (approximate) low rank are pervasive in data science, appearing
in recommender systems, movie preferences, topic models, medical records, and
genomics. While there is a vast literature on how to exploit low rank structure
in these datasets, there is less attention on explaining why the low rank
structure appears in the first place. Here, we explain the effectiveness of low
rank models in data science by considering a simple generative model for these
matrices: we suppose that each row or column is associated to a (possibly high
dimensional) bounded latent variable, and entries of the matrix are generated
by applying a piecewise analytic function to these latent variables. These
matrices are in general full rank. However, we show that we can approximate
every entry of an $m \times n$ matrix drawn from this model to within a fixed
absolute error by a low rank matrix whose rank grows as $\mathcal O(\log(m +
n))$. Hence any sufficiently large matrix from such a latent variable model can
be approximated, up to a small entrywise error, by a low rank matrix.
| Madeleine Udell and Alex Townsend | null | 1705.07474 | null | null |
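A quick numerical check of the claim above: entries generated by an analytic function of bounded latent variables give rapidly decaying singular values, so a modest rank already yields small entrywise error. Sizes and the generator function are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 1000
u, v = rng.uniform(-1, 1, m), rng.uniform(-1, 1, n)     # bounded latent variables
M = np.exp(np.outer(u, v))                               # analytic entry generator

U, s, Vt = np.linalg.svd(M)
for r in (1, 3, 5, 8):                                   # note log(m + n) ~ 7.6
    Mr = (U[:, :r] * s[:r]) @ Vt[:r]
    print(f"rank {r}: max entrywise error = {np.abs(M - Mr).max():.2e}")
```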
Statistical inference using SGD | cs.LG cs.AI math.OC math.ST stat.ML stat.TH | We present a novel method for frequentist statistical inference in
$M$-estimation problems, based on stochastic gradient descent (SGD) with a
fixed step size: we demonstrate that the average of such SGD sequences can be
used for statistical inference, after proper scaling. An intuitive analysis
using the Ornstein-Uhlenbeck process suggests that such averages are
asymptotically normal. From a practical perspective, our SGD-based inference
procedure is a first order method, and is well-suited for large scale problems.
To show its merits, we apply it to both synthetic and real datasets, and
demonstrate that its accuracy is comparable to classical statistical methods,
while requiring potentially far less computation.
| Tianyang Li, Liu Liu, Anastasios Kyrillidis, Constantine Caramanis | null | 1705.07477 | null | null |
Shake-Shake regularization | cs.LG cs.CV | The method introduced in this paper aims at helping deep learning
practitioners faced with an overfit problem. The idea is to replace, in a
multi-branch network, the standard summation of parallel branches with a
stochastic affine combination. Applied to 3-branch residual networks,
shake-shake regularization improves on the best single shot published results
on CIFAR-10 and CIFAR-100 by reaching test errors of 2.86% and 15.85%.
Experiments on architectures without skip connections or Batch Normalization
show encouraging results and open the door to a large set of applications. Code
is available at https://github.com/xgastaldi/shake-shake
| Xavier Gastaldi | null | 1705.07485 | null | null |
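The core shake-shake operation, schematically: replace the sum of two residual branches with a random affine combination at training time and its expectation at test time (the full method also draws a fresh coefficient for the backward pass). The toy branch functions below are stand-ins for convolutional stacks.

```python
import numpy as np

rng = np.random.default_rng(0)
branch1 = lambda x: 0.5 * x + 1.0
branch2 = lambda x: -0.2 * x + 0.3

def shake_shake_block(x, training=True):
    alpha = rng.uniform() if training else 0.5
    return x + alpha * branch1(x) + (1.0 - alpha) * branch2(x)

x = np.ones(4)
print("train:", shake_shake_block(x))         # stochastic combination
print("test: ", shake_shake_block(x, False))  # deterministic expectation
```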
Annealed Generative Adversarial Networks | stat.ML cs.LG | We introduce a novel framework for adversarial training where the target
distribution is annealed between the uniform distribution and the data
distribution. We posit a conjecture that learning under continuous annealing
in the nonparametric regime is stable irrespective of the divergence measures
in the objective function, and propose an algorithm, dubbed {\ss}-GAN, as a
corollary. In this framework, the fact that the initial support of the
generative network is the whole ambient space, combined with annealing, is key
to balancing the minimax game. In our experiments on synthetic data, MNIST, and
CelebA, {\ss}-GAN with a fixed annealing schedule was stable and did not suffer
from mode collapse.
| Arash Mehrjou, Bernhard Sch\"olkopf, Saeed Saremi | null | 1705.07505 | null | null |
Infrastructure for Usable Machine Learning: The Stanford DAWN Project | cs.LG cs.DB stat.ML | Despite incredible recent advances in machine learning, building machine
learning applications remains prohibitively time-consuming and expensive for
all but the best-trained, best-funded engineering organizations. This expense
comes not from a need for new and improved statistical models but instead from
a lack of systems and tools for supporting end-to-end machine learning
application development, from data preparation and labeling to
productionization and monitoring. In this document, we outline opportunities
for infrastructure supporting usable, end-to-end machine learning applications
in the context of the nascent DAWN (Data Analytics for What's Next) project at
Stanford.
| Peter Bailis, Kunle Olukotun, Christopher Re, Matei Zaharia | null | 1705.07538 | null | null |
Learning from Complementary Labels | stat.ML cs.LG | Collecting labeled data is costly and thus a critical bottleneck in
real-world classification tasks. To mitigate this problem, we propose a novel
setting, namely learning from complementary labels for multi-class
classification. A complementary label specifies a class that a pattern does not
belong to. Collecting complementary labels would be less laborious than
collecting ordinary labels, since users do not have to carefully choose the
correct class from a long list of candidate classes. However, complementary
labels are less informative than ordinary labels and thus a suitable approach
is needed to better learn from them. In this paper, we show that an unbiased
estimator to the classification risk can be obtained only from complementarily
labeled data, if a loss function satisfies a particular symmetric condition. We
derive estimation error bounds for the proposed method and prove that the
optimal parametric convergence rate is achieved. We further show that learning
from complementary labels can be easily combined with learning from ordinary
labels (i.e., ordinary supervised learning), providing a highly practical
implementation of the proposed method. Finally, we experimentally demonstrate
the usefulness of the proposed methods.
| Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama | null | 1705.07541 | null | null |
On the diffusion approximation of nonconvex stochastic gradient descent | stat.ML cs.LG | We study the Stochastic Gradient Descent (SGD) method in nonconvex
optimization problems from the point of view of approximating diffusion
processes. We prove rigorously that the diffusion process can approximate the
SGD algorithm weakly using the weak form of the master equation for probability
evolution. In the small step size regime and the presence of omnidirectional
noise, our weak approximating diffusion process suggests the following dynamics
for the SGD iteration starting from a local minimizer (resp.~saddle point): it
escapes in a number of iterations exponentially (resp.~almost linearly)
dependent on the inverse stepsize. The results are obtained using the theory
for random perturbations of dynamical systems (theory of large deviations for
local minimizers and theory of exiting for unstable stationary points). In
addition, we discuss the effects of batch size for deep neural networks,
and we find that a small batch size is helpful for SGD algorithms to escape
unstable stationary points and sharp minimizers. Our theory indicates that one
should increase the batch size at a later stage for the SGD to be trapped in flat
minimizers for better generalization.
| Wenqing Hu, Chris Junchi Li, Lei Li, Jian-Guo Liu | null | 1705.07562 | null | null |
Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain
Surgeon | cs.NE cs.CV cs.LG | How to develop slim and accurate deep neural networks has become crucial for
real-world applications, especially for those employed in embedded systems.
Though previous work along this research line has shown some promising results,
most existing methods either fail to significantly compress a well-trained deep
network or require a heavy retraining process for the pruned deep network to
re-boost its prediction performance. In this paper, we propose a new layer-wise
pruning method for deep neural networks. In our proposed method, parameters of
each individual layer are pruned independently based on second order
derivatives of a layer-wise error function with respect to the corresponding
parameters. We prove that the final prediction performance drop after pruning
is bounded by a linear combination of the reconstructed errors caused at each
layer. Therefore, there is a guarantee that one only needs to perform a light
retraining process on the pruned network to resume its original prediction
performance. We conduct extensive experiments on benchmark datasets to
demonstrate the effectiveness of our pruning method compared with several
state-of-the-art baseline methods.
| Xin Dong, Shangyu Chen, Sinno Jialin Pan | null | 1705.07565 | null | null |
Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk | cs.IT cs.LG math.IT math.OC math.PR | We examine the theoretical properties of enforcing priors provided by
generative deep neural networks via empirical risk minimization. In particular
we consider two models, one in which the task is to invert a generative neural
network given access to its last layer and another in which the task is to
invert a generative neural network given only compressive linear observations
of its last layer. We establish that in both cases, in suitable regimes of
network layer sizes and under a randomness assumption on the network weights,
the non-convex objective function given by empirical risk minimization does not
have any spurious stationary points. That is, we establish that with high
probability, at any point away from small neighborhoods around two scalar
multiples of the desired solution, there is a descent direction. Hence, there
are no local minima, saddle points, or other stationary points outside these
neighborhoods. These results constitute the first theoretical guarantees which
establish the favorable global geometry of these non-convex optimization
problems, and they bridge the gap between the empirical success of enforcing
deep generative priors and a rigorous understanding of non-linear inverse
problems.
| Paul Hand, Vladislav Voroninski | null | 1705.07576 | null | null |
Classification Using Proximity Catch Digraphs (Technical Report) | cs.LG stat.ME stat.ML | We employ random geometric digraphs to construct semi-parametric classifiers.
These data-random digraphs are from parametrized random digraph families called
proximity catch digraphs (PCDs). A related geometric digraph family, class
cover catch digraph (CCCD), has been used to solve the class cover problem by
using its approximate minimum dominating set. CCCDs showed relatively good
performance in the classification of imbalanced data sets, and although CCCDs
have a convenient construction in $\mathbb{R}^d$, finding minimum dominating
sets is NP-hard and its probabilistic behaviour is not mathematically tractable
except for $d=1$. On the other hand, a particular family of PCDs, called
\emph{proportional-edge} PCDs (PE-PCDs), has mathematically tractable minimum
dominating sets in $\mathbb{R}^d$; however their construction in higher
dimensions may be computationally demanding. More specifically, we show that
the classifiers based on PE-PCDs are prototype-based classifiers such that the
exact minimum number of prototypes (equivalent to minimum dominating sets) is
found in polynomial time in the number of observations. We construct two types
of classifiers based on PE-PCDs. One is a family of hybrid classifiers that
depend on the location of the points of the training data set, and the other is a
family of classifiers solely based on class covers. We assess the
classification performance of our PE-PCD based classifiers by extensive Monte
Carlo simulations, and compare them with that of other commonly used
classifiers. We also show that, similar to CCCD classifiers, our classifiers
are relatively better in classification in the presence of class imbalance.
| Art\"ur Manukyan and Elvan Ceyhan | null | 1705.076 | null | null |
Multi-output Polynomial Networks and Factorization Machines | stat.ML cs.LG | Factorization machines and polynomial networks are supervised polynomial
models based on an efficient low-rank decomposition. We extend these models to
the multi-output setting, i.e., for learning vector-valued functions, with
application to multi-class or multi-task problems. We cast this as the problem
of learning a 3-way tensor whose slices share a common basis and propose a
convex formulation of that problem. We then develop an efficient conditional
gradient algorithm and prove its global convergence, despite the fact that it
involves a non-convex basis selection step. On classification tasks, we show
that our algorithm achieves excellent accuracy with much sparser models than
existing methods. On recommendation system tasks, we show how to combine our
algorithm with a reduction from ordinal regression to multi-output
classification and show that the resulting algorithm outperforms simple
baselines in terms of ranking accuracy.
| Mathieu Blondel, Vlad Niculae, Takuma Otsuka and Naonori Ueda | null | 1705.07603 | null | null |
Streaming Binary Sketching based on Subspace Tracking and Diagonal
Uniformization | cs.LG | In this paper, we address the problem of learning compact
similarity-preserving embeddings for massive high-dimensional streams of data
in order to perform efficient similarity search. We present a new online method
for computing binary compressed representations (sketches) of high-dimensional
real feature vectors. Given an expected code length $c$ and high-dimensional
input data points, our algorithm provides a $c$-bits binary code for preserving
the distance between the points from the original high-dimensional space. Our
algorithm requires neither the storage of the whole dataset nor of a chunk of
it; thus, it is fully adaptable to the streaming setting. It also provides
low time complexity and convergence guarantees. We demonstrate the quality of
our binary sketches through experiments on real data for the nearest neighbors
search task in the online setting.
| Anne Morvan and Antoine Souloumiac and C\'edric Gouy-Pailler and Jamal
Atif | null | 1705.07661 | null | null |
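The static baseline behind the idea: c-bit sketches from signs of random projections approximately preserve angles (SimHash). The paper's method instead learns the projection online via subspace tracking; the dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 256, 64
R = rng.standard_normal((c, d))        # fixed here; learned online in the paper

def sketch(x):
    return R @ x > 0                   # c-bit binary code

x = rng.standard_normal(d)
y = x + 0.3 * rng.standard_normal(d)   # a nearby point
z = rng.standard_normal(d)             # an unrelated point
print("hamming(x, y):", int(np.sum(sketch(x) != sketch(y))))   # small
print("hamming(x, z):", int(np.sum(sketch(x) != sketch(z))))   # near c/2
```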
LOGAN: Membership Inference Attacks Against Generative Models | cs.CR cs.LG | Generative models estimate the underlying distribution of a dataset to
generate realistic samples according to that distribution. In this paper, we
present the first membership inference attacks against generative models: given
a data point, the adversary determines whether or not it was used to train the
model. Our attacks leverage Generative Adversarial Networks (GANs), which
combine a discriminative and a generative model, to detect overfitting and
recognize inputs that were part of training datasets, using the discriminator's
capacity to learn statistical differences in distributions.
We present attacks based on both white-box and black-box access to the target
model, against several state-of-the-art generative models, over datasets of
complex representations of faces (LFW), objects (CIFAR-10), and medical images
(Diabetic Retinopathy). We also discuss the sensitivity of the attacks to
different training parameters, and their robustness against mitigation
strategies, finding that defenses are either ineffective or lead to
significantly worse performances of the generative models in terms of training
stability and/or sample quality.
| Jamie Hayes, Luca Melis, George Danezis, Emiliano De Cristofaro | null | 1705.07663 | null | null |
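The white-box attack in its simplest form: candidate records that the GAN's discriminator scores highest as "real" are predicted to be training members, exploiting the discriminator's overfitting to the training set. `discriminator` is a hypothetical callable returning a real-vs-fake probability.

```python
import numpy as np

def membership_attack(discriminator, candidates, n_members):
    scores = np.array([discriminator(x) for x in candidates])
    ranked = np.argsort(scores)[::-1]      # highest "real" score first
    return ranked[:n_members]              # indices of predicted training members
```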
CayleyNets: Graph Convolutional Neural Networks with Complex Rational
Spectral Filters | cs.LG | The rise of graph-structured data such as social networks, regulatory
networks, citation graphs, and functional brain networks, in combination with
resounding success of deep learning in various applications, has brought the
interest in generalizing deep learning models to non-Euclidean domains. In this
paper, we introduce a new spectral domain convolutional architecture for deep
learning on graphs. The core ingredient of our model is a new class of
parametric rational complex functions (Cayley polynomials) allowing to
efficiently compute spectral filters on graphs that specialize on frequency
bands of interest. Our model generates rich spectral filters that are localized
in space, scales linearly with the size of the input data for
sparsely-connected graphs, and can handle different constructions of Laplacian
operators. Extensive experimental results show the superior performance of our
approach, in comparison to other spectral domain convolutional architectures,
on spectral image classification, community detection, vertex classification
and matrix completion tasks.
| Ron Levie, Federico Monti, Xavier Bresson, Michael M. Bronstein | null | 1705.07664 | null | null |
A Linear-Time Kernel Goodness-of-Fit Test | stat.ML cs.LG | We propose a novel adaptive test of goodness-of-fit, with computational cost
linear in the number of samples. We learn the test features that best indicate
the differences between observed samples and a reference model, by minimizing
the false negative rate. These features are constructed via Stein's method,
meaning that it is not necessary to compute the normalising constant of the
model. We analyse the asymptotic Bahadur efficiency of the new test, and prove
that under a mean-shift alternative, our test always has greater relative
efficiency than a previous linear-time kernel test, regardless of the choice of
parameters for that test. In experiments, the performance of our method exceeds
that of the earlier linear-time test, and matches or exceeds the power of a
quadratic-time kernel test. In high dimensions and where model structure may be
exploited, our goodness of fit test performs far better than a quadratic-time
two-sample test based on the Maximum Mean Discrepancy, with samples drawn from
the model.
| Wittawat Jitkrittum, Wenkai Xu, Zoltan Szabo, Kenji Fukumizu, Arthur
Gretton | null | 1705.07673 | null | null |
Individualized Risk Prognosis for Critical Care Patients: A Multi-task
Gaussian Process Model | cs.LG | We report the development and validation of a data-driven real-time risk
score that provides timely assessments for the clinical acuity of ward patients
based on their temporal lab tests and vital signs, which allows for timely
intensive care unit (ICU) admissions. Unlike the existing risk scoring
technologies, the proposed score is individualized; it uses the electronic
health record (EHR) data to cluster the patients based on their static
covariates into subcohorts of similar patients, and then learns a separate
temporal, non-stationary multi-task Gaussian Process (GP) model that captures
the physiology of every subcohort. Experiments conducted on data from a
heterogeneous cohort of 6,094 patients admitted to the Ronald Reagan UCLA
medical center show that our risk score significantly outperforms the
state-of-the-art risk scoring technologies, such as the Rothman index and MEWS,
in terms of timeliness, true positive rate (TPR), and positive predictive value
(PPV). In particular, the proposed score increases the AUC by 20% and 38% as
compared to Rothman index and MEWS respectively, and can predict ICU admissions
8 hours before clinicians at a PPV of 35% and a TPR of 50%. Moreover, we show
that the proposed risk score allows for better decisions on when to discharge
clinically stable patients from the ward, thereby improving the efficiency of
hospital resource utilization.
| Ahmed M. Alaa, Jinsung Yoon, Scott Hu, and Mihaela van der Schaar | null | 1705.07674 | null | null |
A Regularized Framework for Sparse and Structured Neural Attention | stat.ML cs.CL cs.LG | Modern neural networks are often augmented with an attention mechanism, which
tells the network where to focus within the input. We propose in this paper a
new framework for sparse and structured attention, building upon a smoothed max
operator. We show that the gradient of this operator defines a mapping from
real values to probabilities, suitable as an attention mechanism. Our framework
includes softmax and a slight generalization of the recently-proposed sparsemax
as special cases. However, we also show how our framework can incorporate
modern structured penalties, resulting in more interpretable attention
mechanisms, that focus on entire segments or groups of an input. We derive
efficient algorithms to compute the forward and backward passes of our
attention mechanisms, enabling their use in a neural network trained with
backpropagation. To showcase their potential as a drop-in replacement for
existing ones, we evaluate our attention mechanisms on three large-scale tasks:
textual entailment, machine translation, and sentence summarization. Our
attention mechanisms improve interpretability without sacrificing performance;
notably, on textual entailment and summarization, we outperform the standard
attention mechanisms based on softmax and sparsemax.
| Vlad Niculae and Mathieu Blondel | null | 1705.07704 | null | null |
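Sparsemax, the Euclidean projection onto the probability simplex that this framework recovers as a special case, produces exact zeros where softmax cannot; a few-line reference implementation follows.

```python
import numpy as np

def sparsemax(z):
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = k[1 + k * z_sorted > cumsum]      # coordinates kept in the support
    tau = (cumsum[support[-1] - 1] - 1) / support[-1]
    return np.maximum(z - tau, 0.0)

z = np.array([2.0, 1.5, 0.1, -1.0])
print("sparsemax:", sparsemax(z))               # [0.75, 0.25, 0., 0.]
print("softmax:  ", np.exp(z) / np.exp(z).sum())
```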
An Out-of-the-box Full-network Embedding for Convolutional Neural
Networks | cs.LG cs.NE | Transfer learning for feature extraction can be used to exploit deep
representations in contexts where there is very little training data, where there
are limited computational resources, or when tuning the hyper-parameters needed
for training is not an option. While previous contributions to feature
extraction propose embeddings based on a single layer of the network, in this
paper we propose a full-network embedding which successfully integrates
convolutional and fully connected features, coming from all layers of a deep
convolutional neural network. To do so, the embedding normalizes features in
the context of the problem, and discretizes their values to reduce noise and
regularize the embedding space. Significantly, this also reduces the
computational cost of processing the resultant representations. The proposed
method is shown to outperform single layer embeddings on several image
classification tasks, while also being more robust to the choice of the
pre-trained model used for obtaining the initial features. The performance gap
in classification accuracy between thoroughly tuned solutions and the
full-network embedding is also reduced, which makes the proposed approach a
competitive solution for a large set of applications.
| Dario Garcia-Gasulla, Armand Vilalta, Ferran Par\'es, Jonatan Moreno,
Eduard Ayguad\'e, Jesus Labarta, Ulises Cort\'es and Toyotaro Suzumura | null | 1705.07706 | null | null |
Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset | cs.CV cs.LG | The paucity of videos in current action classification datasets (UCF-101 and
HMDB-51) has made it difficult to identify good video architectures, as most
methods obtain similar performance on existing small-scale benchmarks. This
paper re-evaluates state-of-the-art architectures in light of the new Kinetics
Human Action Video dataset. Kinetics has two orders of magnitude more data,
with 400 human action classes and over 400 clips per class, and is collected
from realistic, challenging YouTube videos. We provide an analysis on how
current architectures fare on the task of action classification on this dataset
and how much performance improves on the smaller benchmark datasets after
pre-training on Kinetics.
We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on
2D ConvNet inflation: filters and pooling kernels of very deep image
classification ConvNets are expanded into 3D, making it possible to learn
seamless spatio-temporal feature extractors from video while leveraging
successful ImageNet architecture designs and even their parameters. We show
that, after pre-training on Kinetics, I3D models considerably improve upon the
state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0%
on UCF-101.
| Joao Carreira and Andrew Zisserman | null | 1705.0775 | null | null |
Dissecting Adam: The Sign, Magnitude and Variance of Stochastic
Gradients | cs.LG stat.ML | The ADAM optimizer is exceedingly popular in the deep learning community.
Often it works very well, sometimes it doesn't. Why? We interpret ADAM as a
combination of two aspects: for each weight, the update direction is determined
by the sign of stochastic gradients, whereas the update magnitude is determined
by an estimate of their relative variance. We disentangle these two aspects and
analyze them in isolation, gaining insight into the mechanisms underlying ADAM.
This analysis also extends recent results on adverse effects of ADAM on
generalization, isolating the sign aspect as the problematic one. Transferring
the variance adaptation to SGD gives rise to a novel method, completing the
practitioner's toolbox for problems where ADAM fails.
| Lukas Balles and Philipp Hennig | null | 1705.07774 | null | null |
Training Deep Networks without Learning Rates Through Coin Betting | cs.LG math.OC stat.ML | Deep learning methods achieve state-of-the-art performance in many
application scenarios. Yet, these methods require a significant amount of
hyperparameters tuning in order to achieve the best results. In particular,
tuning the learning rates in the stochastic optimization process is still one
of the main bottlenecks. In this paper, we propose a new stochastic gradient
descent procedure for deep networks that does not require any learning rate
setting. Contrary to previous methods, we neither adapt the learning rates nor
make use of the assumed curvature of the objective function. Instead, we
reduce the optimization process to a game of betting on a coin and propose a
learning-rate-free optimal algorithm for this scenario. Theoretical convergence
is proven for convex and quasi-convex functions and empirical evidence shows
the advantage of our algorithm over popular stochastic gradient algorithms.
| Francesco Orabona and Tatiana Tommasi | null | 1705.07795 | null | null |
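A per-coordinate coin-betting update in the spirit of the paper's COCOB optimizer. This is a sketch: the bookkeeping and constants below only approximate the published algorithm, and the quadratic test problem is an illustrative assumption.

```python
import numpy as np

class CoinBetting:
    def __init__(self, w0, alpha=100.0):
        self.w0 = np.asarray(w0, dtype=float).copy()
        self.w = self.w0.copy()
        self.L = np.full_like(self.w0, 1e-8)   # running max of |gradient|
        self.G = np.zeros_like(self.w0)        # running sum of |gradient|
        self.reward = np.zeros_like(self.w0)   # wealth won from past bets
        self.theta = np.zeros_like(self.w0)    # running sum of -gradient
        self.alpha = alpha

    def step(self, g):
        self.L = np.maximum(self.L, np.abs(g))
        self.G += np.abs(g)
        self.reward = np.maximum(self.reward - g * (self.w - self.w0), 0.0)
        self.theta -= g
        beta = self.theta / (self.L * np.maximum(self.G + self.L,
                                                 self.alpha * self.L))
        self.w = self.w0 + beta * (self.L + self.reward)   # bet a wealth fraction

opt = CoinBetting(np.array([5.0]))
for _ in range(1000):
    opt.step(2.0 * (opt.w - 1.0))     # gradient of (w - 1)^2
print("final w:", opt.w)              # should settle near the minimizer w = 1
```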
A unified view of entropy-regularized Markov decision processes | cs.LG cs.AI stat.ML | We propose a general framework for entropy-regularized average-reward
reinforcement learning in Markov decision processes (MDPs). Our approach is
based on extending the linear-programming formulation of policy optimization in
MDPs to accommodate convex regularization functions. Our key result is showing
that using the conditional entropy of the joint state-action distributions as
regularization yields a dual optimization problem closely resembling the
Bellman optimality equations. This result enables us to formalize a number of
state-of-the-art entropy-regularized reinforcement learning algorithms as
approximate variants of Mirror Descent or Dual Averaging, and thus to argue
about the convergence properties of these methods. In particular, we show that
the exact version of the TRPO algorithm of Schulman et al. (2015) actually
converges to the optimal policy, while the entropy-regularized policy gradient
methods of Mnih et al. (2016) may fail to converge to a fixed point. Finally,
we illustrate empirically the effects of using various regularization
techniques on learning performance in a simple reinforcement learning setup.
| Gergely Neu and Anders Jonsson and Vicen\c{c} G\'omez | null | 1705.07798 | null | null |
Use Privacy in Data-Driven Systems: Theory and Experiments with Machine
Learnt Programs | cs.CR cs.LG | This paper presents an approach to formalizing and enforcing a class of use
privacy properties in data-driven systems. In contrast to prior work, we focus
on use restrictions on proxies (i.e. strong predictors) of protected
information types. Our definition relates proxy use to intermediate
computations that occur in a program, and identifies two essential properties
that characterize this behavior: 1) its result is strongly associated with the
protected information type in question, and 2) it is likely to causally affect
the final output of the program. For a specific instantiation of this
definition, we present a program analysis technique that detects instances of
proxy use in a model, and provides a witness that identifies which parts of the
corresponding program exhibit the behavior. Recognizing that not all instances
of proxy use of a protected information type are inappropriate, we make use of
a normative judgment oracle that makes this inappropriateness determination for
a given witness. Our repair algorithm uses the witness of an inappropriate
proxy use to transform the model into one that provably does not exhibit proxy
use, while avoiding changes that unduly affect classification accuracy. Using a
corpus of social datasets, our evaluation shows that these algorithms are able
to detect proxy use instances that would be difficult to find using existing
techniques, and subsequently remove them while maintaining acceptable
classification performance.
| Anupam Datta, Matthew Fredrikson, Gihyuk Ko, Piotr Mardziel, Shayak
Sen | null | 1705.07807 | null | null |
Information-theoretic analysis of generalization capability of learning
algorithms | cs.LG cs.IT math.IT stat.ML | We derive upper bounds on the generalization error of a learning algorithm in
terms of the mutual information between its input and output. The bounds
provide an information-theoretic understanding of generalization in learning
problems, and give theoretical guidelines for striking the right balance
between data fit and generalization by controlling the input-output mutual
information. We propose a number of methods for this purpose, among which are
algorithms that regularize the ERM algorithm with relative entropy or with
random noise. Our work extends and leads to nontrivial improvements on the
recent results of Russo and Zou.
| Aolin Xu and Maxim Raginsky | null | 1705.07809 | null | null |
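For reference, the flavor of the headline bound (stated here informally): for a loss that is $\sigma$-sub-Gaussian under the data distribution, with $S$ the $n$-sample input and $W$ the algorithm's output,
\[
  \left| \mathbb{E}\left[ \mathrm{gen}(\mu, P_{W \mid S}) \right] \right|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]
so constraining the input-output mutual information $I(S;W)$ directly constrains the expected generalization gap.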
Minimax Statistical Learning with Wasserstein Distances | cs.LG stat.ML | As opposed to standard empirical risk minimization (ERM), distributionally
robust optimization aims to minimize the worst-case risk over a larger
ambiguity set containing the original empirical distribution of the training
data. In this work, we describe a minimax framework for statistical learning
with ambiguity sets given by balls in Wasserstein space. In particular, we
prove generalization bounds that involve the covering number properties of the
original ERM problem. As an illustrative example, we provide generalization
guarantees for transport-based domain adaptation problems where the Wasserstein
distance between the source and target domain distributions can be reliably
estimated from unlabeled samples.
| Jaeho Lee and Maxim Raginsky | null | 1705.07815 | null | null |
Sparse hierarchical interaction learning with epigraphical projection | cs.LG | This work focuses on learning optimization problems with quadratic
interactions between variables, which go beyond the additive models of
traditional linear learning. We investigate more specifically two different
methods encountered in the literature to deal with this problem: "hierNet" and
structured-sparsity regularization, and study their connections. We propose a
primal-dual proximal algorithm based on an epigraphical projection to optimize
a general formulation of these learning problems. The experimental setting
first highlights the improvement of the proposed procedure compared to
state-of-the-art methods based on fast iterative shrinkage-thresholding
algorithm (i.e. FISTA) or alternating direction method of multipliers (i.e.
ADMM), and then, using the proposed flexible optimization framework, we provide
fair comparisons between the different hierarchical penalizations and their
improvement over the standard $\ell_1$-norm penalization. The experiments are
conducted on both synthetic and real data, and they clearly show that the
proposed primal-dual proximal algorithm based on epigraphical projection is
efficient and effective for solving and investigating the problem of hierarchical
interaction learning.
| Mingyuan Jiu, Nelly Pustelnik, Stefan Janaqi, M\'eriam Chebre, Lin Qi,
Philippe Ricoux | null | 1705.07817 | null | null |
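For context, a minimal ISTA baseline for the underlying $\ell_1$-penalized interaction model looks as follows. This is the kind of standard penalization the paper compares against, not its primal-dual epigraphical algorithm, and all names here are ours.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_interactions(X, y, lam=0.1, n_iter=500):
    """Plain l1-penalized least squares over linear + pairwise-interaction
    features, solved by ISTA. A baseline sketch, not the paper's method."""
    n, p = X.shape
    pairs = [(j, k) for j in range(p) for k in range(j + 1, p)]
    Phi = np.hstack([X] + [X[:, [j]] * X[:, [k]] for j, k in pairs])
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2       # 1 / Lipschitz constant
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ w - y)               # grad of 0.5||Phi w - y||^2
        w = soft_threshold(w - step * grad, step * lam)
    return w[:p], w[p:]                            # main effects, interactions
```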
Regularizing deep networks using efficient layerwise adversarial
training | cs.CV cs.LG stat.ML | Adversarial training has been shown to regularize deep neural networks in
addition to increasing their robustness to adversarial examples. However, its
impact on very deep state-of-the-art networks has not been fully investigated.
In this paper, we present an efficient approach to perform adversarial training
by perturbing intermediate layer activations and study the use of such
perturbations as a regularizer during training. We use these perturbations to
train very deep models such as ResNets and show improvement in performance both
on adversarial and original test data. Our experiments highlight the benefits
of perturbing intermediate layer activations compared to perturbing only the
inputs. The results on CIFAR-10 and CIFAR-100 datasets show the merits of the
proposed adversarial training approach. Additional results on WideResNets show
that our approach provides significant improvement in classification accuracy
for a given base model, outperforming dropout and other base models of larger
size.
| Swami Sankaranarayanan, Arpit Jain, Rama Chellappa and Ser Nam Lim | null | 1705.07819 | null | null |
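A minimal sketch of the core idea in PyTorch, assuming the network has been split into `front` and `back` halves at the layer to be perturbed; the split point, the epsilon value, and the single-step sign perturbation are our simplifications.

```python
import torch
import torch.nn.functional as F

def layerwise_adversarial_loss(front, back, x, y, eps=0.1):
    """Perturb an intermediate activation in the direction that increases
    the loss, then train on the perturbed activation. A single-step sketch
    of the layerwise adversarial-training idea."""
    h = front(x)
    # Gradient of the loss w.r.t. the activation, computed on a detached
    # copy so this probe pass leaves parameter gradients untouched.
    h_probe = h.detach().requires_grad_(True)
    g, = torch.autograd.grad(F.cross_entropy(back(h_probe), y), h_probe)
    h_adv = h + eps * g.sign()               # FGSM-style step on the activation
    return F.cross_entropy(back(h_adv), y)   # backprop through front and back
```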
Stabilizing GAN Training with Multiple Random Projections | cs.LG cs.CV | Training generative adversarial networks is unstable in high dimensions, as
the true data distribution tends to be concentrated in a small fraction of the
ambient space. The discriminator is then quickly able to classify nearly all
generated samples as fake, leaving the generator without meaningful gradients
and causing it to deteriorate after a point in training. In this work, we
propose training a single generator simultaneously against an array of
discriminators, each of which looks at a different random low-dimensional
projection of the data. Individual discriminators, now provided with restricted
views of the input, are unable to reject generated samples perfectly and
continue to provide meaningful gradients to the generator throughout training.
Meanwhile, the generator learns to produce samples consistent with the full
data distribution to satisfy all discriminators simultaneously. We demonstrate
the practical utility of this approach experimentally, and show that it is able
to produce image samples with higher quality than traditional training with a
single discriminator.
| Behnam Neyshabur, Srinadh Bhojanapalli, Ayan Chakrabarti | null | 1705.07831 | null | null |
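A PyTorch sketch of the multi-discriminator setup; the MLP discriminator bodies and the Gaussian projection matrices are placeholder choices of ours, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ProjectedDiscriminators(nn.Module):
    """K discriminators, each seeing a fixed random low-dimensional
    projection of its input."""
    def __init__(self, data_dim, proj_dim, K):
        super().__init__()
        # Fixed (non-trained) random projections, one per discriminator.
        self.register_buffer(
            "P", torch.randn(K, proj_dim, data_dim) / proj_dim ** 0.5)
        self.discs = nn.ModuleList(
            nn.Sequential(nn.Linear(proj_dim, 128), nn.ReLU(),
                          nn.Linear(128, 1)) for _ in range(K))

    def forward(self, x):                    # x: (batch, data_dim)
        # Score x under every projected view.
        return [d(x @ P_k.t()) for d, P_k in zip(self.discs, self.P)]
```

The generator's loss would then sum a standard GAN loss over all K outputs, so no single restricted-view discriminator can reject its samples outright.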
Nonparametric Online Regression while Learning the Metric | cs.LG | We study algorithms for online nonparametric regression that learn the
directions along which the regression function is smoother. Our algorithm
learns the Mahalanobis metric based on the gradient outer product matrix
$\boldsymbol{G}$ of the regression function (automatically adapting to the
effective rank of this matrix), while simultaneously bounding the regret (on
the same data sequence) in terms of the spectrum of $\boldsymbol{G}$. As a
preliminary step in our analysis, we extend a nonparametric online learning
algorithm by Hazan and Megiddo enabling it to compete against functions whose
Lipschitzness is measured with respect to an arbitrary Mahalanobis metric.
| Ilja Kuzborskij, Nicol\`o Cesa-Bianchi | null | 1705.07853 | null | null |
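For reference, the two objects the abstract refers to, in standard notation (our transcription):

```latex
% Gradient outer product matrix of the regression function f, and the
% Mahalanobis (pseudo-)metric induced by a positive semidefinite M:
\[
  \boldsymbol{G} \;=\; \mathbb{E}_{X}\!\left[ \nabla f(X)\, \nabla f(X)^{\top} \right],
  \qquad
  \rho_{\boldsymbol{M}}(x, x') \;=\; \sqrt{(x - x')^{\top} \boldsymbol{M}\,(x - x')}.
\]
% The algorithm estimates M from data so that smoothness (and hence regret)
% is measured along the directions in which f actually varies.
```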
On-the-fly Operation Batching in Dynamic Computation Graphs | cs.LG cs.CL stat.ML | Dynamic neural network toolkits such as PyTorch, DyNet, and Chainer offer
more flexibility for implementing models that cope with data of varying
dimensions and structure, relative to toolkits that operate on statically
declared computations (e.g., TensorFlow, CNTK, and Theano). However, existing
toolkits, both static and dynamic, require that the developer organize the
computations into the batches necessary for exploiting high-performance
algorithms and hardware. This batching task is generally difficult, but it
becomes a major hurdle as architectures become complex. In this paper, we
present an algorithm, and its implementation in the DyNet toolkit, for
automatically batching operations. Developers simply write minibatch
computations as aggregations of single instance computations, and the batching
algorithm seamlessly executes them, on the fly, using computationally efficient
batched operations. On a variety of tasks, we obtain throughput similar to that
obtained with manual batches, as well as comparable speedups over
single-instance learning on architectures that are impractical to batch
manually.
| Graham Neubig and Yoav Goldberg and Chris Dyer | null | 1705.07860 | null | null |
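A minimal sketch of the programming model, assuming DyNet's Python API with autobatching enabled (e.g. via the `--dynet-autobatch 1` command-line flag); the toy model and the `data` iterable are ours.

```python
import dynet as dy   # run with: python train.py --dynet-autobatch 1

pc = dy.ParameterCollection()
W = pc.add_parameters((128, 64))
b = pc.add_parameters(128)
trainer = dy.SimpleSGDTrainer(pc)

def instance_loss(x_vec, y_idx):
    # Single-instance computation: no manual batching anywhere.
    h = dy.tanh(W * dy.inputVector(x_vec) + b)
    return dy.pickneglogsoftmax(h, y_idx)

for minibatch in data:                 # `data`: iterable of lists of (x, y)
    dy.renew_cg()
    # Aggregate per-instance losses; the toolkit batches the identical
    # operations across instances on the fly.
    loss = dy.esum([instance_loss(x, y) for x, y in minibatch])
    loss.forward()
    loss.backward()
    trainer.update()
```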
SmartPaste: Learning to Adapt Source Code | cs.LG cs.SE | Deep Neural Networks have been shown to succeed at a range of natural
language tasks such as machine translation and text summarization. While tasks
on source code (i.e., formal languages) have been considered recently, most work
in this area does not attempt to capitalize on the unique opportunities offered
by its known syntax and structure. In this work, we introduce SmartPaste, a
first task that requires using such information. The task is a variant of the
program repair problem that requires adapting a given (pasted) snippet of code
to the surrounding, existing source code. As first solutions, we design a set of
deep neural models that learn to represent the context of each variable
location and variable usage in a data flow-sensitive way. Our evaluation
suggests that our models can learn to solve the SmartPaste task in many cases,
achieving 58.6% accuracy, while learning meaningful representations of variable
usages.
| Miltiadis Allamanis and Marc Brockschmidt | null | 1705.07867 | null | null |
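To make the task concrete, here is a toy illustration of our own (not an example from the paper): the pasted snippet must be adapted to the names available at the paste site.

```python
# Surrounding context at the paste site:
def scale_sum(values, factor):
    total = 0.0
    # --- pasted snippet, copied from elsewhere, still using foreign names ---
    for item in data:           # a SmartPaste model should rewrite `data`   -> `values`
        acc += item * weight    # ... and rewrite `acc`    -> `total`
                                # ... and rewrite `weight` -> `factor`
    # ----------------------------------------------------------------------
    return total
```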
A Unified Approach to Interpreting Model Predictions | cs.AI cs.LG stat.ML | Understanding why a model makes a certain prediction can be as crucial as the
prediction's accuracy in many applications. However, the highest accuracy for
large modern datasets is often achieved by complex models that even experts
struggle to interpret, such as ensemble or deep learning models, creating a
tension between accuracy and interpretability. In response, various methods
have recently been proposed to help users interpret the predictions of complex
models, but it is often unclear how these methods are related and when one
method is preferable over another. To address this problem, we present a
unified framework for interpreting predictions, SHAP (SHapley Additive
exPlanations). SHAP assigns each feature an importance value for a particular
prediction. Its novel components include: (1) the identification of a new class
of additive feature importance measures, and (2) theoretical results showing
there is a unique solution in this class with a set of desirable properties.
The new class unifies six existing methods, notable because several recent
methods in the class lack the proposed desirable properties. Based on insights
from this unification, we present new methods that show improved computational
performance and/or better consistency with human intuition than previous
approaches.
| Scott Lundberg and Su-In Lee | null | 1705.07874 | null | null |
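A minimal usage sketch with the authors' `shap` Python package, assuming its standard `TreeExplainer` API; the toy data and model are ours.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data: the target depends strongly on feature 0, weakly on feature 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)    # shape: (n_samples, n_features)

# Local accuracy: base value plus the per-feature attributions should
# recover each prediction (up to numerical tolerance).
recon = explainer.expected_value + shap_values.sum(axis=1)
print(np.abs(recon - model.predict(X)).max())
```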
TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep
Learning | cs.LG cs.DC cs.NE | High network communication cost for synchronizing gradients and parameters is
the well-known bottleneck of distributed training. In this work, we propose
TernGrad that uses ternary gradients to accelerate distributed deep learning in
data parallelism. Our approach requires only three numerical levels {-1,0,1},
which can aggressively reduce the communication time. We mathematically prove
the convergence of TernGrad under the assumption of a bound on gradients.
Guided by the bound, we propose layer-wise ternarizing and gradient clipping to
improve its convergence. Our experiments show that applying TernGrad on AlexNet
does not incur any accuracy loss and can even improve accuracy. The accuracy
loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a
performance model is proposed to study the scalability of TernGrad. Experiments
show significant speed gains for various deep neural networks. Our source code
is available.
| Wei Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai
Li | null | 1705.07878 | null | null |
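A NumPy sketch of the stochastic ternarization at the heart of the scheme; the paper additionally applies layer-wise ternarizing and gradient clipping before this step, which we omit here.

```python
import numpy as np

def ternarize(grad, rng=np.random.default_rng()):
    """Stochastic ternarization of a gradient tensor into {-s, 0, +s}."""
    s = np.abs(grad).max()
    if s == 0:
        return grad
    # Keep each coordinate with probability |g_i| / s, so the quantized
    # gradient is an unbiased estimator: E[tern(g)] = g.
    b = rng.random(grad.shape) < np.abs(grad) / s
    return s * np.sign(grad) * b

# Workers then communicate only {-1, 0, +1} codes plus one scalar s per tensor.
g = np.array([0.03, -0.4, 0.0, 0.25])
print(ternarize(g))    # e.g. [ 0.  -0.4  0.   0.4]
```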
Online Factorization and Partition of Complex Networks From Random Walks | cs.LG math.OC stat.ML | Finding the reduced-dimensional structure is critical to understanding
complex networks. Existing approaches such as spectral clustering are
applicable only when the full network is explicitly observed. In this paper, we
focus on the online factorization and partition of implicit large-scale
networks based on observations from an associated random walk. We formulate
this into a nonconvex stochastic factorization problem and propose an efficient
and scalable stochastic generalized Hebbian algorithm. The algorithm is able to
process dependent state-transition data dynamically generated by the underlying
network and learn a low-dimensional representation for each vertex. By applying
a diffusion approximation analysis, we show that the continuous-time limiting
process of the stochastic algorithm converges globally to the "principal
components" of the Markov chain and achieves a nearly optimal sample
complexity. Once given the learned low-dimensional representations, we further
apply clustering techniques to recover the network partition. We show that when
the associated Markov process is lumpable, one can recover the partition
exactly with high probability. We apply the proposed approach to model the
traffic flow of Manhattan as city-wide random walks. By using our algorithm to
analyze the taxi trip data, we discover a latent partition of Manhattan that
closely matches the traffic dynamics.
| Lin F. Yang, Vladimir Braverman, Tuo Zhao, Mengdi Wang | null | 1705.07881 | null | null |
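To convey the flavor of the streaming computation, here is a generic Oja/Hebbian-style sketch of our own on observed random-walk transitions; the paper's exact update, step-size schedule, and guarantees differ.

```python
import numpy as np

def streaming_hebbian(transitions, n_states, k, eta=0.01):
    """Learn a rank-k vertex representation from a stream of (i, j) state
    transitions of a random walk. Illustrative only."""
    U = np.linalg.qr(np.random.randn(n_states, k))[0]   # orthonormal init
    for i, j in transitions:                            # one transition at a time
        x = np.zeros(n_states); x[i] = 1.0              # one-hot current state
        y = np.zeros(n_states); y[j] = 1.0              # one-hot next state
        M = np.outer(x, y @ U)                          # Hebbian signal
        U += eta * (M - U @ (U.T @ M))                  # step toward top subspace
        U = np.linalg.qr(U)[0]                          # re-orthonormalize
    return U   # rows are vertex embeddings; cluster them (e.g. k-means)
               # to recover the network partition
```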
Semantically Decomposing the Latent Spaces of Generative Adversarial
Networks | cs.LG cs.AI cs.CV cs.NE stat.ML | We propose a new algorithm for training generative adversarial networks that
jointly learns latent codes for both identities (e.g. individual humans) and
observations (e.g. specific photographs). By fixing the identity portion of the
latent codes, we can generate diverse images of the same subject, and by fixing
the observation portion, we can traverse the manifold of subjects while
maintaining contingent aspects such as lighting and pose. Our algorithm
features a pairwise training scheme in which each sample from the generator
consists of two images with a common identity code. Corresponding samples from
the real dataset consist of two distinct photographs of the same subject. In
order to fool the discriminator, the generator must produce pairs that are
photorealistic, distinct, and appear to depict the same individual. We augment
both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate
pairwise training. Experiments with human judges and an off-the-shelf face
verification system demonstrate our algorithm's ability to generate convincing,
identity-matched photographs.
| Chris Donahue, Zachary C. Lipton, Akshay Balsubramani, Julian McAuley | null | 1705.07904 | null | null |
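A PyTorch sketch of the pairwise sampling scheme; the generator `G` and the latent dimensions are placeholders of ours.

```python
import torch

def sdgan_generator_batch(G, batch, id_dim, obs_dim, device="cpu"):
    """Each generated pair shares an identity code but has independent
    observation codes, mirroring two real photographs of the same subject."""
    z_id = torch.randn(batch, id_dim, device=device)
    z_obs1 = torch.randn(batch, obs_dim, device=device)
    z_obs2 = torch.randn(batch, obs_dim, device=device)
    x1 = G(torch.cat([z_id, z_obs1], dim=1))
    x2 = G(torch.cat([z_id, z_obs2], dim=1))
    # A Siamese discriminator then scores the pair (x1, x2): to fool it, the
    # pair must be photorealistic, distinct, and depict the same identity.
    return x1, x2
```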
Large Scale Empirical Risk Minimization via Truncated Adaptive Newton
Method | math.OC cs.LG stat.ML | We consider large scale empirical risk minimization (ERM) problems, where
both the problem dimension and variable size are large. In these cases, most
second order methods are infeasible due to the high cost in both computing the
Hessian over all samples and computing its inverse in high dimensions. In this
paper, we propose a novel adaptive sample size second-order method, which
reduces the cost of computing the Hessian by solving a sequence of ERM problems
corresponding to a subset of samples and lowers the cost of computing the
Hessian inverse using a truncated eigenvalue decomposition. We show that while
we geometrically increase the size of the training set at each stage, a single
iteration of the truncated Newton method is sufficient to solve the new ERM
within its statistical accuracy. Moreover, for a large number of samples we are
allowed to double the size of the training set at each stage, and the proposed
method subsequently reaches the statistical accuracy of the full training set
after approximately two effective passes. In addition to this theoretical
result, we show empirically on a number of well known data sets that the
proposed truncated adaptive sample size algorithm outperforms stochastic
alternatives for solving ERM problems.
| Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro | null | 1705.07957 | null | null |
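A NumPy sketch of the two ingredients, combined in the simplest possible way; the sample-size schedule, the choice of k, and the `grad_fn`/`hess_fn` callbacks are our placeholders, not the paper's exact procedure.

```python
import numpy as np

def truncated_newton_step(w, grad_fn, hess_fn, k):
    """One Newton step that inverts only the top-k eigen-directions of the
    Hessian (the truncated eigenvalue decomposition idea)."""
    vals, vecs = np.linalg.eigh(hess_fn(w))   # ascending eigenvalues
    Vk, lk = vecs[:, -k:], vals[-k:]          # top-k eigenpairs
    g = grad_fn(w)
    return w - Vk @ ((Vk.T @ g) / lk)

def adaptive_sample_size_erm(w, data, grad_fn, hess_fn, k, m0=128):
    """Solve a sequence of ERM subproblems on geometrically growing samples,
    taking a single truncated Newton step per stage."""
    m = m0
    while m <= len(data):
        sample = data[:m]
        w = truncated_newton_step(w,
                                  lambda v: grad_fn(v, sample),
                                  lambda v: hess_fn(v, sample), k)
        m *= 2                                # double the training set
    return w
```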
pix2code: Generating Code from a Graphical User Interface Screenshot | cs.LG cs.AI cs.CL cs.CV cs.NE | Transforming a graphical user interface screenshot created by a designer into
computer code is a typical task conducted by a developer in order to build
customized software, websites, and mobile applications. In this paper, we show
that deep learning methods can be leveraged to train a model end-to-end to
automatically generate code from a single input image with over 77% accuracy
for three different platforms (i.e. iOS, Android and web-based technologies).
| Tony Beltramelli | null | 1705.07962 | null | null |
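A generic CNN-encoder / LSTM-decoder sketch of the idea in PyTorch; this is our illustration of the screenshot-to-token-sequence setup, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ScreenshotToCode(nn.Module):
    """Encode the GUI screenshot with a CNN, then decode a token sequence
    in a code/DSL vocabulary with an LSTM."""
    def __init__(self, vocab_size, emb=64, hid=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, hid))
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, image, tokens):
        h0 = self.cnn(image).unsqueeze(0)     # image context as initial state
        dec, _ = self.lstm(self.embed(tokens), (h0, torch.zeros_like(h0)))
        return self.out(dec)                  # next-token logits
```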
Diminishing Batch Normalization | cs.LG | In this paper, we propose a generalization of the Batch Normalization (BN)
algorithm, diminishing batch normalization (DBN), where we update the BN
parameters via a diminishing moving average. BN is so effective in
accelerating the convergence of neural network training that it has become
common practice. Our proposed DBN algorithm retains the overall structure of
the original BN algorithm while introducing a weighted averaging update to
some trainable parameters. We prove that the DBN algorithm converges to a
stationary point with respect to the trainable parameters, and our analysis
extends to the original BN algorithm by setting certain parameters to
constants. To the best of the authors' knowledge, this is the first
convergence analysis of its kind for training with Batch Normalization. We
analyze a two-layer model with an arbitrary activation function; the primary
challenge of the analysis is that some parameters are updated by gradient
steps while others are not. The convergence analysis applies to any
activation function satisfying our common assumptions. In the numerical
experiments, we test the proposed algorithm on modern CNN models with
stochastic gradients and ReLU activations. We observe that DBN outperforms
the original BN algorithm on the MNIST, NI, and CIFAR-10 datasets with
reasonably complex FNN and CNN models.
| Yintai Ma and Diego Klabjan | null | 1705.08011 | null | null |
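A minimal sketch of the diminishing moving-average update; the exact weight schedule is our assumption. With a constant weight this reduces to the original BN-style update.

```python
def dbn_update(mu, sigma2, batch_mean, batch_var, t, alpha0=1.0):
    """Diminishing moving-average update of the BN statistics (a sketch of
    the DBN idea; the paper's exact schedule may differ)."""
    alpha_t = alpha0 / (t + 1)                  # diminishing weight
    mu = (1 - alpha_t) * mu + alpha_t * batch_mean
    sigma2 = (1 - alpha_t) * sigma2 + alpha_t * batch_var
    return mu, sigma2
```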