title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Adaptive Neural Networks for Efficient Inference | cs.LG cs.CV cs.NE stat.ML | We present an approach to adaptively utilize deep neural networks in order to
reduce the evaluation time on new examples without loss of accuracy. Rather
than attempting to redesign or approximate existing networks, we propose two
schemes that adaptively utilize networks. We first pose an adaptive network
evaluation scheme, where we learn a system to adaptively choose the components
of a deep network to be evaluated for each example. By allowing examples
correctly classified using early layers of the system to exit, we avoid the
computational time associated with full evaluation of the network. We extend
this to learn a network selection system that adaptively selects the network to
be evaluated for each example. We show that computational time can be
dramatically reduced by exploiting the fact that many examples can be correctly
classified using relatively efficient networks and that complex,
computationally costly networks are only necessary for a small fraction of
examples. We pose a global objective for learning an adaptive early exit or
network selection policy and solve it by reducing the policy learning problem
to a layer-by-layer weighted binary classification problem. Empirically, these
approaches yield dramatic reductions in computational cost, with up to a 2.8x
speedup on state-of-the-art networks from the ImageNet image recognition
challenge with minimal (<1%) loss of top-5 accuracy.
| Tolga Bolukbasi, Joseph Wang, Ofer Dekel, Venkatesh Saligrama | null | 1702.07811 | null | null |
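
The layer-wise early-exit scheme described above is easy to prototype. Below is a minimal sketch, assuming a list of classifier stages ordered from cheap to expensive, each returning a probability vector; the paper learns the exit policy by reduction to layer-by-layer weighted binary classification, whereas this sketch substitutes a plain softmax-confidence threshold.

```python
import numpy as np

def adaptive_early_exit(stages, x, threshold=0.9):
    """Evaluate increasingly expensive classifier stages on input x and
    stop as soon as one is confident enough. Each stage is a callable
    returning a probability vector; the fixed `threshold` stands in for
    the learned exit policy of the paper."""
    for i, stage in enumerate(stages):
        probs = stage(x)
        if probs.max() >= threshold or i == len(stages) - 1:
            return int(np.argmax(probs)), i  # prediction and exit stage

# hypothetical usage with a cheap linear model and a large network:
# label, exit_stage = adaptive_early_exit([cheap_model, big_model], x)
```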
Unsupervised Sequence Classification using Sequential Output Statistics | cs.LG | We consider learning a sequence classifier without labeled data by using
sequential output statistics. The problem is highly valuable since obtaining
labels in training data is often costly, while the sequential output statistics
(e.g., language models) could be obtained independently of input data and thus
with low or no cost. To address the problem, we propose an unsupervised
learning cost function and study its properties. We show that, compared to
earlier works, it is less inclined to get stuck in trivial solutions and avoids
the need for a strong generative model. Although it is harder to optimize in
its functional form, a stochastic primal-dual gradient method is developed to
effectively solve the problem. Experiment results on real-world datasets
demonstrate that the new unsupervised learning method gives drastically lower
errors than other baseline methods. Specifically, it reaches test errors about
twice of those obtained by fully supervised learning.
| Yu Liu, Jianshu Chen, Li Deng | null | 1702.07817 | null | null |
Deep Voice: Real-time Neural Text-to-Speech | cs.CL cs.LG cs.NE cs.SD | We present Deep Voice, a production-quality text-to-speech system constructed
entirely from deep neural networks. Deep Voice lays the groundwork for truly
end-to-end neural speech synthesis. The system comprises five major building
blocks: a segmentation model for locating phoneme boundaries, a
grapheme-to-phoneme conversion model, a phoneme duration prediction model, a
fundamental frequency prediction model, and an audio synthesis model. For the
segmentation model, we propose a novel way of performing phoneme boundary
detection with deep neural networks using connectionist temporal classification
(CTC) loss. For the audio synthesis model, we implement a variant of WaveNet
that requires fewer parameters and trains faster than the original. By using a
neural network for each component, our system is simpler and more flexible than
traditional text-to-speech systems, where each component requires laborious
feature engineering and extensive domain expertise. Finally, we show that
inference with our system can be performed faster than real time and describe
optimized WaveNet inference kernels on both CPU and GPU that achieve up to 400x
speedups over existing implementations.
| Sercan O. Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew
Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Ng, Jonathan Raiman,
Shubho Sengupta, Mohammad Shoeybi | null | 1702.07825 | null | null |
Rationalization: A Neural Machine Translation Approach to Generating
Natural Language Explanations | cs.AI cs.CL cs.HC cs.LG | We introduce AI rationalization, an approach for generating explanations of
autonomous system behavior as if a human had performed the behavior. We
describe a rationalization technique that uses neural machine translation to
translate internal state-action representations of an autonomous agent into
natural language. We evaluate our technique in the Frogger game environment,
training an autonomous game playing agent to rationalize its action choices
using natural language. A natural language training corpus is collected from
human players thinking out loud as they play the game. We motivate the use of
rationalization as an approach to explanation generation and show the results
of two experiments evaluating the effectiveness of rationalization. Results of
these evaluations show that neural machine translation is able to accurately
generate rationalizations that describe agent behavior, and that
rationalizations are more satisfying to humans than other alternative methods
of explanation.
| Upol Ehsan, Brent Harrison, Larry Chan, Mark O. Riedl | null | 1702.07826 | null | null |
Efficient coordinate-wise leading eigenvector computation | cs.NA cs.LG stat.ML | We develop and analyze efficient "coordinate-wise" methods for finding the
leading eigenvector, where each step involves only a vector-vector product. We
establish global convergence with overall runtime guarantees that are at least
as good as Lanczos's method and dominate it for a slowly decaying spectrum. Our
methods are based on combining a shift-and-invert approach with coordinate-wise
algorithms for linear regression.
| Jialei Wang, Weiran Wang, Dan Garber, Nathan Srebro | null | 1702.07834 | null | null |
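
As a rough sketch of this recipe, the inner shift-and-invert linear systems can be solved with Gauss-Seidel-style coordinate sweeps, so that each update touches one row of the matrix (a vector-vector product); the shift `sigma` is assumed to exceed the largest eigenvalue of A so the system matrix stays positive definite. This is an illustration of the idea, not the paper's analyzed algorithm.

```python
import numpy as np

def coord_solve(M, b, y, sweeps=5):
    """Gauss-Seidel coordinate sweeps for M y = b (M symmetric positive
    definite). Each coordinate update costs one row-vector product."""
    for _ in range(sweeps):
        for j in range(len(b)):
            r = b[j] - M[j] @ y + M[j, j] * y[j]
            y[j] = r / M[j, j]
    return y

def shift_invert_coordinate(A, sigma, iters=20, rng=None):
    """Top eigenvector of symmetric A via shift-and-invert power
    iterations, with the inner solves done coordinate-wise. Requires
    sigma > lambda_max(A) so that M is positive definite."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    M = sigma * np.eye(n) - A
    w = rng.standard_normal(n)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        w = coord_solve(M, w, w.copy())
        w /= np.linalg.norm(w)
    return w
```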
Online Learning with Many Experts | cs.LG | We study the problem of prediction with expert advice when the number of
experts in question may be extremely large or even infinite. We devise an
algorithm that obtains a tight regret bound of $\widetilde{O}(\epsilon T + N +
\sqrt{NT})$, where $N$ is the empirical $\epsilon$-covering number of the
sequence of loss functions generated by the environment. In addition, we
present a hedging procedure that allows us to find the optimal $\epsilon$ in
hindsight.
Finally, we discuss a few interesting applications of our algorithm. We show
how our algorithm is applicable in the approximately low rank experts model of
Hazan et al. (2016), and discuss the case of experts with bounded variation, in
which there is a surprisingly large gap between the regret bounds obtained in
the statistical and online settings.
| Alon Cohen, Shie Mannor | null | 1702.0787 | null | null |
An EM Based Probabilistic Two-Dimensional CCA with Application to Face
Recognition | cs.CV cs.LG stat.ML | Recently, two-dimensional canonical correlation analysis (2DCCA) has been
successfully applied for image feature extraction. Instead of concatenating
the columns of the images into one-dimensional vectors, the method works
directly with two-dimensional image matrices. Although 2DCCA works well
in different recognition tasks, it lacks a probabilistic interpretation. In
this paper, we present a probabilistic framework for 2DCCA called probabilistic
2DCCA (P2DCCA) and an iterative EM based algorithm for optimizing the
parameters. Experimental results on synthetic and real data demonstrate
superior performance in loading factor estimation for P2DCCA compared to 2DCCA.
For real data, three subsets of AR face database and also the UMIST face
database confirm the robustness of the proposed algorithm in face recognition
tasks with different illumination conditions, facial expressions, poses and
occlusions.
| Mehran Safayani, Seyed Hashem Ahmadi, Homayun Afrabandpey and
Abdolreza Mirzaei | 10.1007/s10489-017-1012-2 | 1702.07884 | null | null |
Coarse Grained Exponential Variational Autoencoders | cs.LG | Variational autoencoders (VAE) often use Gaussian or categorical distributions to
model the inference process. This puts a limit on variational learning because
this simplified assumption does not match the true posterior distribution,
which is usually much more sophisticated. To break this limitation and apply
arbitrary parametric distribution during inference, this paper derives a
\emph{semi-continuous} latent representation, which approximates a continuous
density up to a prescribed precision, and is much easier to analyze than its
continuous counterpart because it is fundamentally discrete. We showcase the
proposition by applying polynomial exponential family distributions as the
posterior, which are universal probability density function generators. Our
experimental results show consistent improvements over commonly used VAE
models.
| Ke Sun, Xiangliang Zhang | null | 1702.07904 | null | null |
CHAOS: A Parallelization Scheme for Training Convolutional Neural
Networks on Intel Xeon Phi | cs.DC cs.CV cs.LG | Deep learning is an important component of big-data analytic tools and
intelligent applications such as self-driving cars, computer vision, speech
recognition, or precision medicine. However, the training process is
computationally intensive, and often requires a large amount of time if
performed sequentially. Modern parallel computing systems provide the
capability to reduce the required training time of deep neural networks. In
this paper, we present our parallelization scheme for training convolutional
neural networks (CNN) named Controlled Hogwild with Arbitrary Order of
Synchronization (CHAOS). Major features of CHAOS include the support for thread
and vector parallelism, non-instant updates of weight parameters during
back-propagation without a significant delay, and implicit synchronization in
arbitrary order. CHAOS is tailored for parallel computing systems that are
accelerated with the Intel Xeon Phi. We evaluate our parallelization approach
empirically using measurement techniques and performance modeling for various
numbers of threads and CNN architectures. Experimental results for the MNIST
dataset of handwritten digits using the total number of threads on the Xeon Phi
show speedups of up to 103x compared to the execution on one thread of the Xeon
Phi, 14x compared to the sequential execution on Intel Xeon E5, and 58x
compared to the sequential execution on Intel Core i5.
| Andre Viebke, Suejb Memeti, Sabri Pllana, Ajith Abraham | 10.1007/s11227-017-1994-x | 1702.07908 | null | null |
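
The "controlled Hogwild" ingredient, lock-free updates applied in arbitrary order, can be illustrated with a toy asynchronous SGD using plain Python threads. Logistic regression stands in for the CNN here, and neither the Xeon Phi vectorization nor the paper's synchronization control is modeled, so treat this purely as a sketch of the update scheme.

```python
import threading
import numpy as np

def hogwild_sgd(X, y, n_threads=4, lr=0.01, steps=2000):
    """Lock-free asynchronous SGD: workers read and write a shared weight
    vector without synchronization, so updates land on possibly stale
    parameters (the "non-instant updates ... in arbitrary order" of the
    abstract). Trains logistic regression on (X, y) with y in {0, 1}."""
    w = np.zeros(X.shape[1])                  # shared, unsynchronized state
    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(steps):
            i = rng.integers(len(X))
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            w -= lr * (p - y[i]) * X[i]       # racy in-place update
    threads = [threading.Thread(target=worker, args=(s,)) for s in range(n_threads)]
    for t in threads: t.start()
    for t in threads: t.join()
    return w
```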
Efficient Learning of Mixed Membership Models | cs.LG stat.ML | We present an efficient algorithm for learning mixed membership models when
the number of variables $p$ is much larger than the number of hidden components
$k$. This algorithm reduces the computational complexity of state-of-the-art
tensor methods, which require decomposing an $O\left(p^3\right)$ tensor, to
factorizing $O\left(p/k\right)$ sub-tensors each of size $O\left(k^3\right)$.
In addition, we address the issue of negative entries in the empirical method
of moments based estimators. We provide sufficient conditions under which our
approach has provable guarantees. Our approach obtains competitive empirical
results on both simulated and real data.
| Zilong Tan and Sayan Mukherjee | null | 1702.07933 | null | null |
Stochastic Variance Reduction Methods for Policy Evaluation | cs.LG cs.AI cs.SY math.OC stat.ML | Policy evaluation is a crucial step in many reinforcement-learning
procedures, which estimates a value function that predicts states' long-term
value under a given policy. In this paper, we focus on policy evaluation with
linear function approximation over a fixed dataset. We first transform the
empirical policy evaluation problem into a (quadratic) convex-concave saddle
point problem, and then present a primal-dual batch gradient method, as well as
two stochastic variance reduction methods for solving the problem. These
algorithms scale linearly in both sample size and feature dimension. Moreover,
they achieve linear convergence even when the saddle-point problem has only
strong concavity in the dual variables but no strong convexity in the primal
variables. Numerical experiments on benchmark problems demonstrate the
effectiveness of our methods.
| Simon S. Du, Jianshu Chen, Lihong Li, Lin Xiao, Dengyong Zhou | null | 1702.07944 | null | null |
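
For readers unfamiliar with stochastic variance reduction, the SVRG template on a plain least-squares objective is sketched below; the paper adapts this control-variate idea to the primal-dual saddle-point form of policy evaluation, which this single-variable version deliberately omits.

```python
import numpy as np

def svrg_least_squares(A, b, lr=0.01, epochs=20, rng=None):
    """SVRG for min_w (1/2n)||A w - b||^2: each inner step corrects a
    stochastic gradient with a full-batch "anchor" gradient, shrinking
    the variance and enabling linear convergence."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(epochs):
        w_snap = w.copy()
        full_grad = A.T @ (A @ w_snap - b) / n        # anchor gradient
        for _ in range(n):
            i = rng.integers(n)
            g_i = A[i] * (A[i] @ w - b[i])
            g_i_snap = A[i] * (A[i] @ w_snap - b[i])
            w -= lr * (g_i - g_i_snap + full_grad)    # variance-reduced step
    return w
```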
Generative Adversarial Active Learning | cs.LG stat.ML | We propose a new active learning by query synthesis approach using Generative
Adversarial Networks (GAN). Different from regular active learning, the
resulting algorithm adaptively synthesizes training instances for querying to
increase learning speed. We generate queries according to the uncertainty
principle, but our idea can work with other active learning principles. We
report results from various numerical experiments to demonstrate the
effectiveness of the proposed approach. In some settings, the proposed algorithm
outperforms traditional pool-based approaches. To the best of our knowledge, this
is the first active learning work using GAN.
| Jia-Jie Zhu, Jos\'e Bento | null | 1702.07956 | null | null |
Efficient Online Bandit Multiclass Learning with $\tilde{O}(\sqrt{T})$
Regret | cs.LG stat.ML | We present an efficient second-order algorithm with
$\tilde{O}(\frac{1}{\eta}\sqrt{T})$ regret for the bandit online multiclass
problem. The regret bound holds simultaneously with respect to a family of loss
functions parameterized by $\eta$, for a range of $\eta$ restricted by the norm
of the competitor. The family of loss functions ranges from hinge loss
($\eta=0$) to squared hinge loss ($\eta=1$). This provides a solution to the
open problem of (J. Abernethy and A. Rakhlin. An efficient bandit algorithm for
$\sqrt{T}$-regret in online multiclass prediction? In COLT, 2009). We test our
algorithm experimentally, showing that it also performs favorably against
earlier algorithms.
| Alina Beygelzimer, Francesco Orabona, Chicheng Zhang | null | 1702.07958 | null | null |
Supervised Learning of Labeled Pointcloud Differences via Cover-Tree
Entropy Reduction | cs.LG cs.CV stat.ML | We introduce a new algorithm, called CDER, for supervised machine learning
that merges the multi-scale geometric properties of Cover Trees with the
information-theoretic properties of entropy. CDER applies to a training set of
labeled pointclouds embedded in a common Euclidean space. If typical
pointclouds corresponding to distinct labels tend to differ at any scale in any
sub-region, CDER can identify these differences in (typically) linear time,
creating a set of distributional coordinates which act as a feature extraction
mechanism for supervised learning. We describe theoretical properties and
implementation details of CDER, and illustrate its benefits on several
synthetic examples.
| Abraham Smith and Paul Bendich and John Harer and Alex Pieloch and Jay
Hineman | null | 1702.07959 | null | null |
Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs | cs.LG math.OC stat.ML | Deep learning models are often successfully trained using gradient descent,
despite the worst case hardness of the underlying non-convex optimization
problem. The key question is then under what conditions can one prove that
optimization will succeed. Here we provide a strong result of this kind. We
consider a neural net with one hidden layer and a convolutional structure with
no overlap and a ReLU activation function. For this architecture we show that
learning is NP-complete in the general case, but that when the input
distribution is Gaussian, gradient descent converges to the global optimum in
polynomial time. To the best of our knowledge, this is the first global
optimality guarantee of gradient descent on a convolutional neural network with
ReLU activations.
| Alon Brutzkus, Amir Globerson | null | 1702.07966 | null | null |
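
The architecture analyzed here (one filter applied to non-overlapping patches, ReLU, average pooling) is small enough to reproduce in a few lines. Below is a sketch of gradient descent on the squared loss with Gaussian inputs and a planted filter, where recovery of the filter is what the paper proves; the learning rate, sample size, and step count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d, n = 4, 8, 4096                     # patches per input, patch dim, samples
w_star = rng.standard_normal(d)          # planted ("teacher") filter
X = rng.standard_normal((n, k, d))       # Gaussian inputs, non-overlapping patches

def net(w):                              # shared filter -> ReLU -> average pool
    return np.maximum(X @ w, 0.0).mean(axis=1)

y = net(w_star)
w = 0.1 * rng.standard_normal(d)
for step in range(5000):
    err = net(w) - y                               # (n,)
    mask = (X @ w > 0).astype(float)               # ReLU gates, (n, k)
    grad = np.einsum('n,np,npd->d', err, mask, X) / (n * k)
    w -= 0.3 * grad
print(np.linalg.norm(w - w_star))        # typically near zero for generic inits
```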
Ratio Utility and Cost Analysis for Privacy Preserving Subspace
Projection | stat.ML cs.LG | With a rapidly increasing number of devices connected to the internet, big
data has been applied to various domains of human life. Nevertheless, it has
also opened new avenues for breaching users' privacy. Hence there is a strong
need for techniques that enable data owners to privatize their data
while keeping it useful for intended applications. Existing methods, however,
do not offer enough flexibility for controlling the utility-privacy trade-off
and may incur unfavorable results when privacy requirements are high. To tackle
these drawbacks, we propose a compressive-privacy based method, namely RUCA
(Ratio Utility and Cost Analysis), which can not only maximize performance for
a privacy-insensitive classification task but also minimize the ability of any
classifier to infer private information from the data. Experimental results on
Census and Human Activity Recognition data sets demonstrate that RUCA
significantly outperforms existing privacy preserving data projection
techniques for a wide range of privacy pricings.
| Mert Al, Shibiao Wan, Sun-Yuan Kung | null | 1702.07976 | null | null |
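
The exact RUCA criterion is given in the paper; as a rough illustration of the compressive-privacy idea, the sketch below seeks projection directions with high between-class scatter for the utility labels relative to scatter for the privacy labels plus a cost term, via a generalized eigenproblem. The specific ratio objective and the `rho` weighting are assumptions for illustration, not the paper's formula.

```python
import numpy as np
from scipy.linalg import eigh

def between_scatter(X, y):
    """Between-class scatter matrix for labels y."""
    mu = X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]))
    for c in np.unique(y):
        diff = X[y == c].mean(axis=0) - mu
        S += (y == c).sum() * np.outer(diff, diff)
    return S

def privacy_aware_projection(X, y_util, y_priv, dim=2, rho=1.0):
    """Directions discriminative for the utility task but not for the
    private attribute: solve S_util v = lambda (S_priv + rho S_tot) v
    and keep the top generalized eigenvectors (an illustrative
    criterion, not necessarily RUCA's)."""
    S_util = between_scatter(X, y_util)
    S_priv = between_scatter(X, y_priv)
    S_tot = np.cov(X.T) * (len(X) - 1)
    vals, vecs = eigh(S_util, S_priv + rho * S_tot + 1e-8 * np.eye(X.shape[1]))
    return vecs[:, np.argsort(vals)[::-1][:dim]]
```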
Maximum-Likelihood Augmented Discrete Generative Adversarial Networks | cs.AI cs.CL cs.LG | Despite the successes in capturing continuous distributions, the application
of generative adversarial networks (GANs) to discrete settings, like natural
language tasks, is rather restricted. The fundamental reason is the difficulty
of back-propagation through discrete random variables combined with the
inherent instability of the GAN training objective. To address these problems,
we propose Maximum-Likelihood Augmented Discrete Generative Adversarial
Networks. Instead of directly optimizing the GAN objective, we derive a novel
and low-variance objective using the discriminator's output, which
corresponds to the log-likelihood. Compared with the original, the new
objective is proved to be consistent in theory and beneficial in practice. The
experimental results on various discrete datasets demonstrate the effectiveness
of the proposed approach.
| Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu
Song, Yoshua Bengio | null | 1702.07983 | null | null |
Kiefer Wolfowitz Algorithm is Asymptotically Optimal for a Class of
Non-Stationary Bandit Problems | stat.ML cs.LG | We consider the problem of designing an allocation rule or an "online
learning algorithm" for a class of bandit problems in which the set of control
actions available at each time $s$ is a convex, compact subset of
$\mathbb{R}^d$. Upon choosing an action $x$ at time $s$, the algorithm obtains
a noisy value of the unknown and time-varying function $f_s$ evaluated at $x$.
The "regret" of an algorithm is the gap between its expected reward, and the
reward earned by a strategy which has the knowledge of the function $f_s$ at
each time $s$ and hence chooses the action $x_s$ that maximizes $f_s$.
For this non-stationary bandit problem set-up, we consider two variants of
the Kiefer-Wolfowitz (KW) algorithm: i) KW with fixed step-size $\beta$, and ii)
KW with sliding window of length $L$. We show that if the number of times that
the function $f_s$ varies during time $T$ is $o(T)$, and if the learning rates
of the proposed algorithms are chosen "optimally", then the regret of the
proposed algorithms is $o(T)$, and hence the algorithms are asymptotically
efficient.
| Rahul Singh and Taposh Banerjee | null | 1702.08 | null | null |
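
Variant (i), KW with a fixed step size, is a few lines of two-sided finite-difference stochastic approximation; keeping beta fixed (rather than decaying) is what lets the iterate track a slowly drifting maximizer. A toy sketch on a one-dimensional drifting quadratic (projection onto the compact action set is left as a comment):

```python
import numpy as np

def kw_fixed_step(f_noisy, x0, beta=0.05, c=0.1, steps=2000, rng=None):
    """Kiefer-Wolfowitz with a fixed step size: ascend a (possibly
    time-varying) function from noisy evaluations via two-sided finite
    differences along each coordinate."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for s in range(steps):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = c
            g[i] = (f_noisy(x + e, s) - f_noisy(x - e, s)) / (2 * c)
        x = x + beta * g   # then project onto the compact action set if needed
    return x

# toy drifting target: f_s(x) = -(x - theta_s)^2 + noise
theta = lambda s: np.array([np.sin(2 * np.pi * s / 1000)])
f = lambda x, s: -np.sum((x - theta(s)) ** 2) + 0.01 * np.random.randn()
print(kw_fixed_step(f, np.zeros(1)))
```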
Bayesian Nonparametric Feature and Policy Learning for Decision-Making | cs.LG cs.CV | Learning from demonstrations has gained increasing interest in the recent
past, enabling an agent to learn how to make decisions by observing an
experienced teacher. While many approaches have been proposed to solve this
problem, there is little work that focuses on reasoning about the observed
behavior. We assume that, in many practical problems, an agent makes its
decision based on latent features, indicating a certain action. Therefore, we
propose a generative model for the states and actions. Inference reveals the
number of features, the features, and the policies, allowing us to learn and to
analyze the underlying structure of the observed behavior. Further, our
approach enables prediction of actions for new states. Simulations are used to
assess the performance of the algorithm based upon this model. Moreover, the
problem of learning a driver's behavior is investigated, demonstrating the
performance of the proposed model in a real-world scenario.
| J\"urgen Hahn and Abdelhak M. Zoubir | null | 1702.08001 | null | null |
Support vector machine and its bias correction in high-dimension,
low-sample-size settings | stat.ML cs.LG | In this paper, we consider asymptotic properties of the support vector
machine (SVM) in high-dimension, low-sample-size (HDLSS) settings. We show that
the hard-margin linear SVM holds a consistency property in which
misclassification rates tend to zero as the dimension goes to infinity under
certain severe conditions. We show that the SVM is very biased in HDLSS
settings and its performance is affected by the bias directly. In order to
overcome such difficulties, we propose a bias-corrected SVM (BC-SVM). We show
that the BC-SVM gives preferable performance in HDLSS settings. We also
discuss the SVMs in multiclass HDLSS settings. Finally, we check the
performance of the classifiers in actual data analyses.
| Yugo Nakayama, Kazuyoshi Yata, Makoto Aoshima | null | 1702.08019 | null | null |
Criticality & Deep Learning I: Generally Weighted Nets | cs.AI cs.LG | Motivated by the idea that criticality and universality of phase transitions
might play a crucial role in achieving and sustaining learning and intelligent
behaviour in biological and artificial networks, we analyse a theoretical and a
pragmatic experimental set up for critical phenomena in deep learning. On the
theoretical side, we use results from statistical physics to carry out critical
point calculations in feed-forward/fully connected networks, while on the
experimental side we set out to find traces of criticality in deep neural
networks. This is our first step in a series of upcoming investigations to map
out the relationship between criticality and learning in deep networks.
| Dan Oprisa, Peter Toth | null | 1702.08039 | null | null |
Learning Control for Air Hockey Striking using Deep Reinforcement
Learning | cs.LG cs.RO | We consider the task of learning control policies for a robotic mechanism
striking a puck in an air hockey game. The control signal is a direct command
to the robot's motors. We employ a model-free deep reinforcement learning
framework to learn the motor skills of striking the puck accurately in order
to score. We propose certain improvements to the standard learning scheme which
make the deep Q-learning algorithm feasible when it might otherwise fail. Our
improvements include integrating prior knowledge into the learning scheme, and
accounting for the changing distribution of samples in the experience replay
buffer. Finally we present our simulation results for aimed striking which
demonstrate the successful learning of this task, and the improvement in
algorithm stability due to the proposed modifications.
| Ayal Taitler and Nahum Shimkin | null | 1702.08074 | null | null |
Selection of training populations (and other subset selection problems)
with an accelerated genetic algorithm (STPGA: An R-package for selection of
training populations with a genetic algorithm) | stat.ME cs.LG q-bio.GN q-bio.QM stat.AP | Optimal subset selection is an important task that has numerous algorithms
designed for it and has many application areas. STPGA contains a special
genetic algorithm supplemented with a tabu memory property (that keeps track of
previously tried solutions and their fitness for a number of iterations), and
with a regression of the fitness of the solutions on their coding that is used
to form the ideal estimated solution (look ahead property) to search for
solutions of generic optimal subset selection problems. I have initially
developed the programs for the specific problem of selecting training
populations for genomic prediction or association problems, therefore I give
discussion of the theory behind optimal design of experiments to explain the
default optimization criteria in STPGA, and illustrate the use of the programs
in this endeavor. Nevertheless, I have picked a few other areas of application:
supervised and unsupervised variable selection based on kernel alignment,
supervised variable selection with design criteria, influential observation
identification for regression, solving mixed integer quadratic optimization
problems, balancing gains and inbreeding in a breeding population. Some of
these illustrations pertain to new statistical approaches.
| Deniz Akdemir | null | 1702.08088 | null | null |
Dropping Convexity for More Efficient and Scalable Online Multiview
Learning | cs.LG math.OC stat.ML | Multiview representation learning is very popular for latent factor analysis.
It naturally arises in many data analysis, machine learning, and information
retrieval applications to model dependent structures among multiple data
sources. For computational convenience, existing approaches usually formulate
the multiview representation learning as convex optimization problems, where
global optima can be obtained by certain algorithms in polynomial time.
However, many pieces of evidence have corroborated that heuristic nonconvex
approaches also have good empirical computational performance and convergence
to the global optima, although there is a lack of theoretical justification.
Such a gap between theory and practice motivates us to study a nonconvex
formulation for multiview representation learning, which can be efficiently
solved by a simple stochastic gradient descent (SGD) algorithm. We first
illustrate the geometry of the nonconvex formulation; then we establish
asymptotic global rates of convergence to the global optima by diffusion
approximations. Numerical experiments are provided to support our theory.
| Zhehui Chen, Lin F. Yang, Chris J. Li, Tuo Zhao | null | 1702.08134 | null | null |
Deceiving Google's Perspective API Built for Detecting Toxic Comments | cs.LG cs.CY cs.SI | Social media platforms provide an environment where people can freely engage
in discussions. Unfortunately, they also give rise to several problems, such as
online harassment. Recently, Google and Jigsaw started a project called
Perspective, which uses machine learning to automatically detect toxic
language. A demonstration website has also been launched, which allows anyone
to type a phrase in the interface and instantaneously see the toxicity score
[1]. In this paper, we propose an attack on the Perspective toxic detection
system based on adversarial examples. We show that an adversary can subtly
modify a highly toxic phrase so that the system assigns it a significantly
lower toxicity score. We apply the attack to the sample phrases provided
in the Perspective website and show that we can consistently reduce the
toxicity scores to the level of the non-toxic phrases. The existence of such
adversarial examples is very harmful for toxic detection systems and seriously
undermines their usability.
| Hossein Hosseini, Sreeram Kannan, Baosen Zhang and Radha Poovendran | null | 1702.08138 | null | null |
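
The attack described here boils down to small, human-readable misspellings that push the model's toxicity score down. A toy perturbation function in that spirit (the paper's exact edit operations may differ; this one just duplicates a letter or inserts a dot inside longer words):

```python
import random

def perturb(phrase, seed=0):
    """Adversarially misspell words while keeping them readable to humans:
    duplicate one letter or insert a dot inside each word longer than
    three characters. An illustrative stand-in for the attacks in the
    paper, not their exact procedure."""
    random.seed(seed)
    out = []
    for w in phrase.split():
        if len(w) > 3:
            i = random.randrange(1, len(w) - 1)
            w = w[:i] + ('.' if random.random() < 0.5 else w[i]) + w[i:]
        out.append(w)
    return ' '.join(out)

print(perturb("this is a hypothetical toxic phrase"))
```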
Improved Variational Autoencoders for Text Modeling using Dilated
Convolutions | cs.NE cs.CL cs.LG | Recent work on generative modeling of text has found that variational
auto-encoders (VAE) incorporating LSTM decoders perform worse than simpler LSTM
language models (Bowman et al., 2015). This negative result is so far poorly
understood, but has been attributed to the propensity of LSTM decoders to
ignore conditioning information from the encoder. In this paper, we experiment
with a new type of decoder for VAE: a dilated CNN. By changing the decoder's
dilation architecture, we control the effective context from previously
generated words. In experiments, we find that there is a trade-off between the
contextual capacity of the decoder and the amount of encoding information used.
We show that with the right decoder, VAE can outperform LSTM language models.
We demonstrate perplexity gains on two datasets, representing the first
positive experimental result on the use of VAE for generative modeling of text.
Further, we conduct an in-depth investigation of the use of VAE (with our new
decoding architecture) for semi-supervised and unsupervised labeling tasks,
demonstrating gains over several strong baselines.
| Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, Taylor Berg-Kirkpatrick | null | 1702.08139 | null | null |
McKernel: A Library for Approximate Kernel Expansions in Log-linear Time | cs.LG stat.ML | McKernel introduces a framework to use kernel approximates in the mini-batch
setting with Stochastic Gradient Descent (SGD) as an alternative to Deep
Learning. Based on Random Kitchen Sinks [Rahimi and Recht 2007], we provide a
C++ library for Large-scale Machine Learning. It contains a CPU optimized
implementation of the algorithm in [Le et al. 2013], that allows the
computation of approximated kernel expansions in log-linear time. The algorithm
requires computing products with Walsh-Hadamard matrices. A cache-friendly Fast
Walsh-Hadamard transform that achieves compelling speed and outperforms current
state-of-the-art methods has been developed. McKernel establishes the
foundation of a new learning architecture that obtains large-scale non-linear
classification by combining lightning-fast kernel expansions with a linear
classifier. It operates in the mini-batch setting, working analogously to Neural
Networks. We show the validity of our method through extensive experiments on
MNIST and FASHION MNIST [Xiao et al. 2017].
| J. D. Curt\'o and I. C. Zarza and Feng Yang and Alex Smola and
Fernando de la Torre and Chong Wah Ngo and Luc van Gool | null | 1702.08159 | null | null |
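
The core primitive, a fast Walsh-Hadamard transform used in place of the dense Gaussian matrix of random Fourier features, fits in a few lines. The sketch below uses a single sign-flip/Hadamard/Gaussian-rescale block; production implementations like McKernel stack several such blocks and are far more cache-careful, so this only conveys the idea.

```python
import numpy as np

def fwht(a):
    """Iterative fast Walsh-Hadamard transform, O(d log d); the length of
    `a` must be a power of two."""
    a = a.copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            x, y = a[i:i+h].copy(), a[i+h:i+2*h].copy()
            a[i:i+h], a[i+h:i+2*h] = x + y, x - y
        h *= 2
    return a

def rff_structured(X, sigma=1.0, seed=0):
    """Random-Fourier-style features with a structured Hadamard projection
    replacing the dense Gaussian matrix (the Fastfood/McKernel trick).
    Simplified to one sign-diagonal + Hadamard + Gaussian-scale block;
    d (the number of columns of X) must be a power of two."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    signs = rng.choice([-1.0, 1.0], d)
    scale = rng.standard_normal(d)       # calibrates rows toward Gaussian
    proj = np.array([fwht(x * signs) for x in X]) * scale / (sigma * np.sqrt(d))
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(d)
```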
Reinforcement Learning with Deep Energy-Based Policies | cs.LG cs.AI | We propose a method for learning expressive energy-based policies for
continuous states and actions, which has been feasible only in tabular domains
before. We apply our method to learning maximum entropy policies, resulting
in a new algorithm, called soft Q-learning, that expresses the optimal policy
via a Boltzmann distribution. We use the recently proposed amortized Stein
variational gradient descent to learn a stochastic sampling network that
approximates samples from this distribution. The benefits of the proposed
algorithm include improved exploration and compositionality that allows
transferring skills between tasks, which we confirm in simulated experiments
with swimming and walking robots. We also draw a connection to actor-critic
methods, which can be viewed as performing approximate inference on the
corresponding energy-based model.
| Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine | null | 1702.08165 | null | null |
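
In the tabular case the optimal maximum-entropy policy has a closed form: the soft Bellman backup replaces the max with a log-sum-exp, and the policy is Boltzmann in Q. The paper's contribution is making this tractable for continuous actions (where sampling from the Boltzmann distribution is hard) via amortized Stein variational gradient descent; the discrete sketch below only shows the energy-based target.

```python
import numpy as np
from scipy.special import logsumexp, softmax

def soft_q_iteration(P, R, alpha=0.1, gamma=0.95, iters=200):
    """Tabular soft Q-iteration. P is a transition tensor of shape
    (S, A, S), R a reward matrix of shape (S, A); alpha is the entropy
    temperature. Returns the soft-optimal Q and its Boltzmann policy."""
    S, A = R.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = alpha * logsumexp(Q / alpha, axis=1)   # soft value (log-sum-exp)
        Q = R + gamma * (P @ V)                    # soft Bellman backup
    return Q, softmax(Q / alpha, axis=1)           # Boltzmann policy over Q
```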
Communication-efficient Algorithms for Distributed Stochastic Principal
Component Analysis | cs.LG | We study the fundamental problem of Principal Component Analysis in a
statistical distributed setting in which each machine out of $m$ stores a
sample of $n$ points sampled i.i.d. from a single unknown distribution. We
study algorithms for estimating the leading principal component of the
population covariance matrix that are both communication-efficient and achieve
estimation error of the order of the centralized ERM solution that uses all
$mn$ samples. On the negative side, we show that in contrast to results
obtained for distributed estimation under convexity assumptions, for the PCA
objective, simply averaging the local ERM solutions cannot guarantee error that
is consistent with the centralized ERM. We show that this unfortunate
phenomenon can be remedied by performing a simple correction step that
correlates the individual solutions, and provides an estimator that is consistent with the
centralized ERM for sufficiently-large $n$. We also introduce an iterative
distributed algorithm that is applicable in any regime of $n$, which is based
on distributed matrix-vector products. The algorithm gives significant
acceleration in terms of communication rounds over previous distributed
algorithms, in a wide regime of parameters.
| Dan Garber, Ohad Shamir, Nathan Srebro | null | 1702.08169 | null | null |
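
The failure of naive averaging is easy to see: local leading eigenvectors are only defined up to sign, so the vectors can cancel. One simple way to "correlate" the local solutions, in the spirit of the correction step described here, is to average the rank-one projectors instead; this sketch trades the paper's communication efficiency for clarity, since it ships d-by-d matrices.

```python
import numpy as np

def distributed_top_pc(samples_per_machine):
    """Sketch of one-round distributed PCA: each machine computes its
    local leading eigenvector v, the center averages the projectors
    v v^T (sign-invariant) and returns the top eigenvector of the
    average. Naively averaging the v's instead can cancel to zero."""
    d = samples_per_machine[0].shape[1]
    M = np.zeros((d, d))
    for X in samples_per_machine:
        C = X.T @ X / len(X)               # local covariance estimate
        v = np.linalg.eigh(C)[1][:, -1]    # local leading eigenvector
        M += np.outer(v, v) / len(samples_per_machine)
    return np.linalg.eigh(M)[1][:, -1]
```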
Fixed-point optimization of deep neural networks with adaptive step size
retraining | cs.LG | Fixed-point optimization of deep neural networks plays an important role in
hardware based design and low-power implementations. Many deep neural networks
show fairly good performance even with 2- or 3-bit precision when quantized
weights are fine-tuned by retraining. We propose an improved fixed-point
optimization algorithm that estimates the quantization step size dynamically
during the retraining. In addition, a gradual quantization scheme is also
tested, which sequentially applies fixed-point optimizations from high- to
low-precision. The experiments are conducted for feed-forward deep neural
networks (FFDNNs), convolutional neural networks (CNNs), and recurrent neural
networks (RNNs).
| Sungho Shin, Yoonho Boo, and Wonyong Sung | null | 1702.08171 | null | null |
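
A minimal sketch of the quantizer inside such a retraining loop, with the step size re-estimated from the current weight distribution instead of being fixed up front; the paper's step-size estimator is more refined than this grid search.

```python
import numpy as np

def quantize(w, bits, step):
    """Uniform symmetric fixed-point quantizer with step size `step`."""
    levels = 2 ** (bits - 1)
    return np.clip(np.round(w / step), -levels, levels - 1) * step

def fit_step(w, bits, candidates=64):
    """Re-estimate the quantization step size from the current weights by
    minimizing quantization MSE over a grid, the crudest form of the
    adaptive step-size idea in the abstract."""
    grid = np.linspace(1e-4, np.abs(w).max(), candidates)
    errs = [np.mean((w - quantize(w, bits, s)) ** 2) for s in grid]
    return grid[int(np.argmin(errs))]

# during retraining: run the forward pass with quantize(w, 2, fit_step(w, 2))
# while gradient updates are applied to the full-precision copy of w.
```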
DeepNAT: Deep Convolutional Neural Network for Segmenting Neuroanatomy | cs.CV cs.AI cs.LG | We introduce DeepNAT, a 3D Deep convolutional neural network for the
automatic segmentation of NeuroAnaTomy in T1-weighted magnetic resonance
images. DeepNAT is an end-to-end learning-based approach to brain segmentation
that jointly learns an abstract feature representation and a multi-class
classification. We propose a 3D patch-based approach, where we predict not only
the center voxel of the patch but also its neighbors, which is formulated
as multi-task learning. To address a class imbalance problem, we arrange two
networks hierarchically, where the first one separates foreground from
background, and the second one identifies 25 brain structures on the
foreground. Since patches lack spatial context, we augment them with
coordinates. To this end, we introduce a novel intrinsic parameterization of
the brain volume, formed by eigenfunctions of the Laplace-Beltrami operator. As
network architecture, we use three convolutional layers with pooling, batch
normalization, and non-linearities, followed by fully connected layers with
dropout. The final segmentation is inferred from the probabilistic output of
the network with a 3D fully connected conditional random field, which ensures
label agreement between close voxels. The roughly 2.7 million parameters in the
network are learned with stochastic gradient descent. Our results show that
DeepNAT compares favorably to state-of-the-art methods. Finally, the purely
learning-based method may have a high potential for the adaptation to young,
old, or diseased brains by fine-tuning the pre-trained network with a small
training sample on the target application, where the availability of larger
datasets with manual annotations may boost the overall segmentation accuracy in
the future.
| Christian Wachinger, Martin Reuter, Tassilo Klein | 10.1016/j.neuroimage.2017.02.035 | 1702.08192 | null | null |
Algorithmic Chaining and the Role of Partial Feedback in Online
Nonparametric Learning | stat.ML cs.LG math.ST stat.TH | We investigate contextual online learning with nonparametric (Lipschitz)
comparison classes under different assumptions on losses and feedback
information. For full information feedback and Lipschitz losses, we design the
first explicit algorithm achieving the minimax regret rate (up to log factors).
In a partial feedback model motivated by second-price auctions, we obtain
algorithms for Lipschitz and semi-Lipschitz losses with regret bounds improving
on the known bounds for standard bandit feedback. Our analysis combines novel
results for contextual second-price auctions with a novel algorithmic approach
based on chaining. When the context space is Euclidean, our chaining approach
is efficient and delivers an even better regret bound.
| Nicol\`o Cesa-Bianchi, Pierre Gaillard (SIERRA), Claudio Gentile,
S\'ebastien Gerchinovitz (IMT) | null | 1702.08211 | null | null |
Variational Inference using Implicit Distributions | stat.ML cs.LG | Generative adversarial networks (GANs) have given us a great tool to fit
implicit generative models to data. Implicit distributions are ones we can
sample from easily, and take derivatives of samples with respect to model
parameters. These models are highly expressive and we argue they can prove just
as useful for variational inference (VI) as they are for generative modelling.
Several papers have proposed GAN-like algorithms for inference; however,
connections to the theory of VI are not always well understood. This paper
provides a unifying review of existing algorithms establishing connections
between variational autoencoders, adversarially learned inference, operator VI,
GAN-based image reconstruction, and more. Secondly, the paper provides a
framework for building new algorithms: depending on the way the variational
bound is expressed we introduce prior-contrastive and joint-contrastive
methods, and show practical inference algorithms based on either density ratio
estimation or denoising.
| Ferenc Husz\'ar | null | 1702.08235 | null | null |
Scalable k-Means Clustering via Lightweight Coresets | stat.ML cs.DC cs.DS cs.LG stat.CO | Coresets are compact representations of data sets such that models trained on
a coreset are provably competitive with models trained on the full data set. As
such, they have been successfully used to scale up clustering models to massive
data sets. While existing approaches generally only allow for multiplicative
approximation errors, we propose a novel notion of lightweight coresets that
allows for both multiplicative and additive errors. We provide a single
algorithm to construct lightweight coresets for k-means clustering as well as
soft and hard Bregman clustering. The algorithm is substantially faster than
existing constructions, embarrassingly parallel, and the resulting coresets are
smaller. We further show that the proposed approach naturally generalizes to
statistical k-means clustering and that, compared to existing results, it can
be used to compute smaller summaries for empirical risk minimization. In
extensive experiments, we demonstrate that the proposed algorithm outperforms
existing data summarization strategies in practice.
| Olivier Bachem, Mario Lucic, Andreas Krause | null | 1702.08248 | null | null |
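
The construction behind lightweight coresets is strikingly simple: sample points from a mixture of the uniform distribution and the normalized squared distances to the data mean, then reweight by inverse probability. A sketch following the abstract's description (the constants mirror the common presentation of lightweight coresets and should be checked against the paper):

```python
import numpy as np

def lightweight_coreset(X, m, rng=None):
    """Sample an m-point weighted coreset for k-means: half the sampling
    mass is uniform, half proportional to squared distance from the data
    mean; weights are inverse sampling probabilities."""
    rng = rng or np.random.default_rng(0)
    n = len(X)
    d2 = ((X - X.mean(axis=0)) ** 2).sum(axis=1)
    q = 0.5 / n + 0.5 * d2 / d2.sum()
    idx = rng.choice(n, size=m, replace=True, p=q)
    return X[idx], 1.0 / (m * q[idx])      # coreset points and weights
```

A weighted k-means run on the returned points and weights then approximates clustering the full data set.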
Uniform Deviation Bounds for Unbounded Loss Functions like k-Means | stat.ML cs.LG | Uniform deviation bounds limit the difference between a model's expected loss
and its loss on an empirical sample uniformly for all models in a learning
problem. As such, they are a critical component to empirical risk minimization.
In this paper, we provide a novel framework to obtain uniform deviation bounds
for loss functions which are *unbounded*. In our main application, this allows
us to obtain bounds for $k$-Means clustering under weak assumptions on the
underlying distribution. If the fourth moment is bounded, we prove a rate of
$\mathcal{O}\left(m^{-\frac12}\right)$ compared to the previously known
$\mathcal{O}\left(m^{-\frac14}\right)$ rate. Furthermore, we show that the rate
also depends on the kurtosis - the normalized fourth moment which measures the
"tailedness" of a distribution. We further provide improved rates under
progressively stronger assumptions, namely, bounded higher moments,
subgaussianity and bounded support.
| Olivier Bachem, Mario Lucic, S. Hamed Hassani, Andreas Krause | null | 1702.08249 | null | null |
Adaptive Ensemble Prediction for Deep Neural Networks based on
Confidence Level | cs.LG cs.CV stat.ML | Ensembling multiple predictions is a widely used technique for improving the
accuracy of various machine learning tasks. One obvious drawback of ensembling
is its higher execution cost during inference. In this paper, we first describe
our insights on the relationship between the probability of prediction and the
effect of ensembling with current deep neural networks; ensembling rarely fixes
mispredictions for inputs already predicted with high probability, even though
a non-negligible number of such inputs are mispredicted. This finding motivated
us to develop a way to adaptively control the ensembling. If the prediction for
an input reaches a high enough probability, i.e., the output from the softmax
function, on the basis of the confidence level, we stop ensembling for this
input to avoid wasting computation power. We evaluated the adaptive ensembling
by using various datasets and showed that it reduces the computation cost
significantly while achieving accuracy similar to that of static ensembling
using a pre-defined number of local predictions. We also show that our
statistically rigorous confidence-level-based early-exit condition reduces the
burden of task-dependent threshold tuning more effectively than a naive early
exit based on a pre-defined threshold, while also yielding better accuracy at
the same cost.
| Hiroshi Inoue | null | 1702.08259 | null | null |
Adaptive Learning to Speed-Up Control of Prosthetic Hands: a Few Things
Everybody Should Know | cs.LG | A number of studies have proposed to use domain adaptation to reduce the
training effort needed to control an upper-limb prosthesis by exploiting
pre-trained models from prior subjects. These studies generally reported
impressive reductions in the required number of training samples to achieve a
certain level of accuracy for intact subjects. We further investigate two
popular methods in this field to verify whether this result equally applies to
amputees. Our findings show instead that this improvement can largely be
attributed to a suboptimal hyperparameter configuration. When hyperparameters
are appropriately tuned, the standard approach that does not exploit prior
information performs on par with the more complicated transfer learning
algorithms. Additionally, earlier studies erroneously assumed that the number
of training samples relates proportionally to the effort required from the
subject. However, a repetition of a movement is the atomic unit for subjects,
and the total number of repetitions should therefore be used as a reliable
measure of training effort. Even when correcting for this mistake, we do not
find any performance increase due to the use of prior models.
| Valentina Gregori, Arjan Gijsberts, Barbara Caputo | null | 1702.08283 | null | null |
Approximate Inference with Amortised MCMC | stat.ML cs.LG | We propose a novel approximate inference algorithm that approximates a target
distribution by amortising the dynamics of a user-selected MCMC sampler. The
idea is to initialise MCMC using samples from an approximation network, apply
the MCMC operator to improve these samples, and finally use the samples to
update the approximation network, thereby improving its quality. This provides a
new generic framework for approximate inference, allowing us to deploy highly
complex, or implicitly defined approximation families with intractable
densities, including approximations produced by warping a source of randomness
through a deep neural network. Experiments consider image modelling with deep
generative models as a challenging test for the method. Deep models trained
using amortised MCMC are shown to generate realistic looking samples as well as
producing diverse imputations for images with regions of missing pixels.
| Yingzhen Li, Richard E. Turner, Qiang Liu | null | 1702.08343 | null | null |
Dynamic Word Embeddings | stat.ML cs.LG | We present a probabilistic language model for time-stamped text data which
tracks the semantic evolution of individual words over time. The model
represents words and contexts by latent trajectories in an embedding space. At
each moment in time, the embedding vectors are inferred from a probabilistic
version of word2vec [Mikolov et al., 2013]. These embedding vectors are
connected in time through a latent diffusion process. We describe two scalable
variational inference algorithms--skip-gram smoothing and skip-gram
filtering--that allow us to train the model jointly over all times; thus
learning on all data while simultaneously allowing word and context vectors to
drift. Experimental results on three different corpora demonstrate that our
dynamic model infers word embedding trajectories that are more interpretable
and lead to higher predictive likelihoods than competing methods that are based
on static models trained separately on time slices.
| Robert Bamler and Stephan Mandt | null | 1702.08359 | null | null |
Neural Map: Structured Memory for Deep Reinforcement Learning | cs.LG | A critical component to enabling intelligent reasoning in partially
observable environments is memory. Despite this importance, Deep Reinforcement
Learning (DRL) agents have so far used relatively simple memory architectures,
with the main methods to overcome partial observability being either a temporal
convolution over the past k frames or an LSTM layer. More recent work (Oh et
al., 2016) has gone beyond these architectures by using memory networks, which
allow more sophisticated addressing schemes over the past k frames. But even
these architectures are unsatisfactory because they are limited to remembering
information only from the last k frames. In this paper,
we develop a memory system with an adaptable write operator that is customized
to the sorts of 3D environments that DRL agents typically interact with. This
architecture, called the Neural Map, uses a spatially structured 2D memory
image to learn to store arbitrary information about the environment over long
time lags. We demonstrate empirically that the Neural Map surpasses previous
DRL memories on a set of challenging 2D and 3D maze environments and show that
it is capable of generalizing to environments that were not seen during
training.
| Emilio Parisotto and Ruslan Salakhutdinov | null | 1702.0836 | null | null |
Learning Hierarchical Features from Generative Models | cs.LG stat.ML | Deep neural networks have been shown to be very successful at learning
feature hierarchies in supervised learning tasks. Generative models, on the
other hand, have benefited less from hierarchical models with multiple layers
of latent variables. In this paper, we prove that hierarchical latent variable
models do not take advantage of the hierarchical structure when trained with
existing variational methods, and provide some limitations on the kind of
features existing models can learn. Finally, we propose an alternative
architecture that does not suffer from these limitations. Our model is able to
learn highly interpretable and disentangled hierarchical features on several
natural image datasets with no task specific regularization or prior knowledge.
| Shengjia Zhao, Jiaming Song, Stefano Ermon | null | 1702.08396 | null | null |
McGan: Mean and Covariance Feature Matching GAN | cs.LG stat.ML | We introduce new families of Integral Probability Metrics (IPM) for training
Generative Adversarial Networks (GAN). Our IPMs are based on matching
statistics of distributions embedded in a finite dimensional feature space.
Mean and covariance feature matching IPMs allow for stable training of GANs,
which we will call McGan. McGan minimizes a meaningful loss between
distributions.
| Youssef Mroueh, Tom Sercu, Vaibhava Goel | null | 1702.08398 | null | null |
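
A sketch of the mean-matching member of this IPM family: the critic's feature map phi is trained to maximize, and the generator to minimize, the distance between the first moments of the features. The constraint keeping the feature space bounded (which the paper needs for a well-defined IPM) is omitted here, so this is only the loss, not the full training procedure.

```python
import torch

def mean_matching_ipm(phi, real, fake, p=2):
    """Mean feature matching IPM surrogate: distance between first moments
    of critic features on real and generated batches. `phi` maps a batch
    to feature vectors; the critic ascends this quantity, the generator
    descends it through `fake`."""
    mu_real = phi(real).mean(dim=0)
    mu_fake = phi(fake).mean(dim=0)
    return torch.norm(mu_real - mu_fake, p=p)
```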
Boundary-Seeking Generative Adversarial Networks | stat.ML cs.LG | Generative adversarial networks (GANs) are a learning framework that rely on
training a discriminator to estimate a measure of difference between a target
and generated distributions. GANs, as normally formulated, rely on the
generated samples being completely differentiable w.r.t. the generative
parameters, and thus do not work for discrete data. We introduce a method for
training GANs with discrete data that uses the estimated difference measure
from the discriminator to compute importance weights for generated samples,
thus providing a policy gradient for training the generator. The importance
weights have a strong connection to the decision boundary of the discriminator,
and we call our method boundary-seeking GANs (BGANs). We demonstrate the
effectiveness of the proposed algorithm with discrete image and character-based
natural language generation. In addition, the boundary-seeking objective
extends to continuous data, which can be used to improve stability of training,
and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN)
bedrooms, and Imagenet without conditioning.
| R Devon Hjelm and Athul Paul Jacob and Tong Che and Adam Trischler and
Kyunghyun Cho and Yoshua Bengio | null | 1702.08431 | null | null |
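
For discrete samples the boundary-seeking update reduces to a weighted, REINFORCE-style surrogate in which the weights come from the discriminator's output. A sketch under the standard GAN parameterization; self-normalizing the weights over the batch is one common variance-reduction choice and may differ in detail from the paper.

```python
import torch

def bgan_generator_loss(d_logits, log_probs):
    """Boundary-seeking generator update for discrete data. `d_logits`
    are discriminator logits on generated samples; `log_probs` are the
    generator's log-probabilities of those same samples. The weights
    estimate the likelihood ratio D / (1 - D) and are treated as
    constants (no gradient flows through them)."""
    with torch.no_grad():
        d = torch.sigmoid(d_logits)
        w = d / (1.0 - d + 1e-8)        # likelihood-ratio estimate
        w = w / w.sum()                 # self-normalize over the batch
    return -(w * log_probs).sum()       # policy-gradient surrogate
```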
Boosted Generative Models | cs.LG cs.AI stat.ML | We propose a novel approach for using unsupervised boosting to create an
ensemble of generative models, where models are trained in sequence to correct
earlier mistakes. Our meta-algorithmic framework can leverage any existing base
learner that permits likelihood evaluation, including recent deep expressive
models. Further, our approach allows the ensemble to include discriminative
models trained to distinguish real data from model-generated data. We show
theoretical conditions under which incorporating a new model in the ensemble
will improve the fit and empirically demonstrate the effectiveness of our
black-box boosting algorithms on density estimation, classification, and sample
generation on benchmark datasets for a wide range of generative models.
| Aditya Grover, Stefano Ermon | null | 1702.08484 | null | null |
SGD Learns the Conjugate Kernel Class of the Network | cs.LG cs.DS stat.ML | We show that the standard stochastic gradient descent (SGD) algorithm is
guaranteed to learn, in polynomial time, a function that is competitive with
the best function in the conjugate kernel space of the network, as defined in
Daniely, Frostig and Singer. The result holds for log-depth networks from a
rich family of architectures. To the best of our knowledge, it is the first
polynomial-time guarantee for the standard neural network learning algorithm
for networks of depth more than two.
As corollaries, it follows that for neural networks of any depth between $2$
and $\log(n)$, SGD is guaranteed to learn, in polynomial time, constant degree
polynomials with polynomially bounded coefficients. Likewise, it follows that
SGD on large enough networks can learn any continuous function (not in
polynomial time), complementing classical expressivity results.
| Amit Daniely | null | 1702.08503 | null | null |
Learning Deep Visual Object Models From Noisy Web Data: How to Make it
Work | cs.CV cs.DB cs.LG cs.RO | Deep networks thrive when trained on large-scale data collections. This has
given ImageNet a central role in the development of deep architectures for
visual object classification. However, ImageNet was created during a specific
period in time, and as such it is prone to aging, as well as dataset bias
issues. Moving beyond fixed training datasets will lead to more robust visual
systems, especially when deployed on robots in new environments which must
train on the objects they encounter there. To make this possible, it is
important to break free from the need for manual annotators. Recent work has
begun to investigate how to use the massive amount of images available on the
Web in place of manual image annotations. We contribute to this research thread
with two findings: (1) a study correlating a given level of label noise to
the expected drop in accuracy, for two deep architectures, on two different
types of noise, that clearly identifies GoogLeNet as a suitable architecture
for learning from Web data; (2) a recipe for the creation of Web datasets with
minimal noise and maximum visual variability, based on a visual and natural
language processing concept expansion strategy. By combining these two results,
we obtain a method for learning powerful deep object models automatically from
the Web. We confirm the effectiveness of our approach through object
categorization experiments using our Web-derived version of ImageNet on a
popular robot vision benchmark database, and on a lifelong object discovery
task on a mobile robot.
| Nizar Massouh, Francesca Babiloni, Tatiana Tommasi, Jay Young, Nick
Hawes and Barbara Caputo | 10.1109/IROS.2017.8206444 | 1702.08513 | null | null |
Semi-parametric Network Structure Discovery Models | cs.LG stat.ML | We propose a network structure discovery model for continuous observations
that generalizes linear causal models by incorporating a Gaussian process (GP)
prior on a network-independent component, and random sparsity and weight
matrices as the network-dependent parameters. This approach provides flexible
modeling of network-independent trends in the observations as well as
uncertainty quantification around the discovered network structure. We
establish a connection between our model and multi-task GPs and develop an
efficient stochastic variational inference algorithm for it. Furthermore, we
formally show that our approach is numerically stable and in fact numerically
easy to carry out almost everywhere on the support of the random variables
involved. Finally, we evaluate our model on three applications, showing that it
outperforms previous approaches. We provide a qualitative and quantitative
analysis of the structures discovered for domains such as the study of the full
genome regulation of the yeast Saccharomyces cerevisiae.
| Amir Dezfouli, Edwin V. Bonilla, Richard Nock | null | 1702.0853 | null | null |
Competing Bandits: Learning under Competition | cs.GT cs.LG | Most modern systems strive to learn from interactions with users, and many
engage in exploration: making potentially suboptimal choices for the sake of
acquiring new information. We initiate a study of the interplay between
exploration and competition--how such systems balance the exploration for
learning and the competition for users. Here the users play three distinct
roles: they are customers that generate revenue, they are sources of data for
learning, and they are self-interested agents which choose among the competing
systems. In our model, we consider competition between two multi-armed bandit
algorithms faced with the same bandit instance. Users arrive one by one and
choose among the two algorithms, so that each algorithm makes progress if and
only if it is chosen. We ask whether and to what extent competition
incentivizes the adoption of better bandit algorithms. We investigate this
issue for several models of user response, as we vary the degree of rationality
and competitiveness in the model. Our findings are closely related to the
"competition vs. innovation" relationship, a well-studied theme in economics.
| Yishay Mansour, Aleksandrs Slivkins, Zhiwei Steven Wu | null | 1702.08533 | null | null |
Fast Threshold Tests for Detecting Discrimination | stat.ML cs.LG | Threshold tests have recently been proposed as a useful method for detecting
bias in lending, hiring, and policing decisions. For example, in the case of
credit extensions, these tests aim to estimate the bar for granting loans to
white and minority applicants, with a higher inferred threshold for minorities
indicative of discrimination. This technique, however, requires fitting a
complex Bayesian latent variable model for which inference is often
computationally challenging. Here we develop a method for fitting threshold
tests that is two orders of magnitude faster than the existing approach,
reducing computation from hours to minutes. To achieve these performance gains,
we introduce and analyze a flexible family of probability distributions on the
interval [0, 1] -- which we call discriminant distributions -- that is
computationally efficient to work with. We demonstrate our technique by
analyzing 2.7 million police stops of pedestrians in New York City.
| Emma Pierson, Sam Corbett-Davies, Sharad Goel | null | 1702.08536 | null | null |
Active Learning Using Uncertainty Information | stat.ML cs.LG | Many active learning methods belong to the retraining-based approaches, which
select one unlabeled instance, add it to the training set with its possible
labels, retrain the classification model, and evaluate the criteria that we
base our selection on. However, since the true label of the selected instance
is unknown, these methods resort to calculating the average-case or worst-case
performance with respect to the unknown label. In this paper, we propose a
different method to solve this problem. In particular, our method aims to make
use of the uncertainty information to enhance the performance of
retraining-based models. We apply our method to two state-of-the-art algorithms
and carry out extensive experiments on a wide variety of real-world datasets.
The results clearly demonstrate the effectiveness of the proposed method and
indicate it can reduce human labeling efforts in many real-life applications.
| Yazhou Yang and Marco Loog | null | 1702.08540 | null | null
Diameter-Based Active Learning | cs.LG stat.ML | To date, the tightest upper and lower bounds for the active learning of
general concept classes have been in terms of a parameter of the learning
problem called the splitting index. We provide, for the first time, an
efficient algorithm that is able to realize this upper bound, and we
empirically demonstrate its good performance.
| Christopher Tosh, Sanjoy Dasgupta | null | 1702.08553 | null | null |
Optimal Experiment Design for Causal Discovery from Fixed Number of
Experiments | cs.LG cs.AI stat.ML | We study the problem of causal structure learning over a set of random
variables when the experimenter is allowed to perform at most $M$ experiments
in a non-adaptive manner. We consider the optimal learning strategy in terms of
minimizing the portion of the structure that remains unknown given the limited
number of experiments, in both the Bayesian and minimax settings. We characterize the
theoretical optimal solution and propose an algorithm, which designs the
experiments efficiently in terms of time complexity. We show that for bounded
degree graphs, in the minimax case and in the Bayesian case with uniform
priors, our proposed algorithm is a $\rho$-approximation algorithm, where
$\rho$ is independent of the order of the underlying graph. Simulations on both
synthetic and real data show that the performance of our algorithm is very
close to the optimal solution.
| AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash | null | 1702.08567 | null | null |
eXpose: A Character-Level Convolutional Neural Network with Embeddings
For Detecting Malicious URLs, File Paths and Registry Keys | cs.CR cs.LG | For years security machine learning research has promised to obviate the need
for signature based detection by automatically learning to detect indicators of
attack. Unfortunately, this vision hasn't come to fruition: in fact, developing
and maintaining today's security machine learning systems can require
engineering resources that are comparable to that of signature-based detection
systems, due in part to the need to develop and continuously tune the
"features" these machine learning systems look at as attacks evolve. Deep
learning, a subfield of machine learning, promises to change this by operating
on raw input signals and automating the process of feature design and
extraction. In this paper we propose the eXpose neural network, which uses a
deep learning approach we have developed to take generic, raw short character
strings as input (a common case for security inputs, which include artifacts
like potentially malicious URLs, file paths, named pipes, named mutexes, and
registry keys), and learns to simultaneously extract features and classify
using character-level embeddings and a convolutional neural network. In addition
to completely automating the feature design and extraction process, eXpose
outperforms manual feature extraction based baselines on all of the intrusion
detection problems we tested it on, yielding a 5%-10% detection rate gain at
0.1% false positive rate compared to these baselines.
| Joshua Saxe and Konstantin Berlin | null | 1702.08568 | null | null |
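To make the architecture described in the abstract above concrete, here is a minimal character-level convolutional classifier in PyTorch. This is a sketch under assumptions, not the authors' implementation: the vocabulary size, embedding width, filter count and the 200-character padding are all illustrative choices.

```python
import torch
import torch.nn as nn

class CharConvNet(nn.Module):
    """Character embeddings -> 1-D convolution -> global max pool -> logit."""
    def __init__(self, vocab=128, emb=32, filters=64, kernel=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel)
        self.fc = nn.Linear(filters, 1)

    def forward(self, char_ids):                        # (batch, seq_len) int64
        x = self.emb(char_ids).transpose(1, 2)          # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).max(dim=2).values  # global max pooling
        return self.fc(x).squeeze(-1)                   # maliciousness logit

# illustrative usage: encode a URL as ASCII codes, padded to a fixed length
url = "http://example.com/login.php?id=1"
ids = torch.tensor([[min(ord(c), 127) for c in url.ljust(200)]])
score = torch.sigmoid(CharConvNet()(ids))
```

Training such a network end-to-end on labeled strings is what removes the manual feature-design step the abstract refers to.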
Learning Vector Autoregressive Models with Latent Processes | cs.LG stat.ML | We study the problem of learning the support of the transition matrix between
random processes in a Vector Autoregressive (VAR) model from samples when a
subset of the processes are latent. It is well known that ignoring the effect
of the latent processes may lead to very different estimates of the influences
among observed processes, and we are concerned with identifying the influences
among the observed processes, those between the latent ones, and those from the
latent to the observed ones. We show that the support of the transition matrix
among the observed processes and lengths of all latent paths between any two
observed processes can be identified successfully under some conditions on the
VAR model. From the lengths of latent paths, we reconstruct the latent subgraph
(representing the influences among the latent processes) with a minimum number
of variables uniquely if its topology is a directed tree. Furthermore, we
propose an algorithm that finds all possible minimal latent graphs under some
conditions on the lengths of latent paths. Our results apply to both
non-Gaussian and Gaussian cases, and experimental results on various synthetic
and real-world datasets validate our theoretical results.
| Saber Salehkaleybar, Jalal Etesami, Negar Kiyavash, Kun Zhang | null | 1702.08575 | null | null |
Depth Creates No Bad Local Minima | cs.LG cs.NE math.OC stat.ML | In deep learning, \textit{depth}, as well as \textit{nonlinearity}, create
non-convex loss surfaces. Then, does depth alone create bad local minima? In
this paper, we prove that without nonlinearity, depth alone does not create bad
local minima, although it induces a non-convex loss surface. Using this insight,
we greatly simplify a recently proposed proof to show that all of the local
minima of feedforward deep linear neural networks are global minima. Our
theoretical results generalize previous results with fewer assumptions, and
this analysis provides a method to show similar results beyond square loss in
deep linear models.
| Haihao Lu and Kenji Kawaguchi | null | 1702.08580 | null | null
Can Boltzmann Machines Discover Cluster Updates? | physics.comp-ph cond-mat.stat-mech cs.LG stat.ML | Boltzmann machines are physics-informed generative models with wide
applications in machine learning. They can learn the probability distribution
from an input dataset and generate new samples accordingly. Applying them back
to physics, Boltzmann machines are ideal recommender systems to accelerate
Monte Carlo simulation of physical systems due to their flexibility and
effectiveness. More intriguingly, we show that the generative sampling of the
Boltzmann Machines can even discover unknown cluster Monte Carlo algorithms.
The creative power comes from the latent representation of the Boltzmann
machines, which learn to mediate complex interactions and identify clusters of
the physical system. We demonstrate these findings with concrete examples of
the classical Ising model with and without four spin plaquette interactions.
Our results endorse a fresh research paradigm where intelligent machines are
designed to create or inspire human discovery of innovative algorithms.
| Lei Wang | 10.1103/PhysRevE.96.051301 | 1702.08586 | null | null |
The Shattered Gradients Problem: If resnets are the answer, then what is
the question? | cs.NE cs.LG stat.ML | A long-standing obstacle to progress in deep learning is the problem of
vanishing and exploding gradients. Although the problem has largely been
overcome via carefully constructed initializations and batch normalization,
architectures incorporating skip-connections such as highway and resnets
perform much better than standard feedforward architectures despite well-chosen
initialization and batch normalization. In this paper, we identify the
shattered gradients problem. Specifically, we show that the correlation between
gradients in standard feedforward networks decays exponentially with depth
resulting in gradients that resemble white noise whereas, in contrast, the
gradients in architectures with skip-connections are far more resistant to
shattering, decaying sublinearly. Detailed empirical evidence is presented in
support of the analysis, on both fully-connected networks and convnets.
Finally, we present a new "looks linear" (LL) initialization that prevents
shattering, with preliminary experiments showing that the new initialization
makes it possible to train very deep networks without the addition of skip-connections.
| David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma,
Brian McWilliams | null | 1702.08591 | null | null |
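The "looks linear" initialization mentioned above can be sketched as follows. This is our reading, not the authors' code: each ReLU unit is paired with a mirrored copy, so that at initialization the pair (relu(Wx), relu(-Wx)) loses no information, since relu(Wx) - relu(-Wx) = Wx, and a correspondingly mirrored next layer reproduces a linear map.

```python
import torch
import torch.nn as nn

def looks_linear_init_(linear):
    """Mirror-block init sketch: weight = [W; -W] for an even output width.

    With the next layer initialized as [V, -V], the two layers compute
    V @ W @ x at initialization, so the network starts out exactly linear
    and gradients do not shatter.
    """
    out_dim, in_dim = linear.weight.shape
    assert out_dim % 2 == 0, "LL init needs mirrored unit pairs"
    w = torch.randn(out_dim // 2, in_dim) * (2.0 / in_dim) ** 0.5
    with torch.no_grad():
        linear.weight.copy_(torch.cat([w, -w], dim=0))
        if linear.bias is not None:
            linear.bias.zero_()

layer = nn.Linear(16, 32)
looks_linear_init_(layer)
```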
Towards A Rigorous Science of Interpretable Machine Learning | stat.ML cs.AI cs.LG | As machine learning systems become ubiquitous, there has been a surge of
interest in interpretable machine learning: systems that provide explanation
for their outputs. These explanations are often used to qualitatively assess
other criteria such as safety or non-discrimination. However, despite the
interest in interpretability, there is very little consensus on what
interpretable machine learning is and how it should be measured. In this
position paper, we first define interpretability and describe when
interpretability is needed (and when it is not). Next, we suggest a taxonomy
for rigorous evaluation and expose open questions towards a more rigorous
science of interpretable machine learning.
| Finale Doshi-Velez and Been Kim | null | 1702.08608 | null | null |
Progress Estimation and Phase Detection for Sequential Processes | cs.LG cs.HC | Process modeling and understanding are fundamental for advanced
human-computer interfaces and automation systems. Most recent research has
focused on activity recognition, but little has been done on sensor-based
detection of process progress. We introduce a real-time, sensor-based system
for modeling, recognizing and estimating the progress of a work process. We
implemented a multimodal deep learning structure to extract the relevant
spatio-temporal features from multiple sensory inputs and used a novel deep
regression structure for overall completeness estimation. Using process
completeness estimation with a Gaussian mixture model, our system can predict
the phase for sequential processes. The performance speed, calculated using
completeness estimation, allows online estimation of the remaining time. To
train our system, we introduced a novel rectified hyperbolic tangent (rtanh)
activation function and a conditional loss. Our system was tested on data
obtained from the medical process (trauma resuscitation) and sports events
(Olympic swimming competition). Our system outperformed the existing
trauma-resuscitation phase detectors with a phase detection accuracy of over
86%, an F1-score of 0.67, a completeness estimation error of under 12.6%, and a
remaining-time estimation error of less than 7.5 minutes. For the Olympic
swimming dataset, our system achieved an accuracy of 88%, an F1-score of 0.58,
a completeness estimation error of 6.3% and a remaining-time estimation error
of 2.9 minutes.
| Xinyu Li, Yanyi Zhang, Jianyu Zhang, Yueyang Chen, Shuhong Chen, Yue
Gu, Moliang Zhou, Richard A. Farneth, Ivan Marsic and Randall S. Burd | null | 1702.08623 | null | null |
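The abstract above names a rectified hyperbolic tangent but does not spell it out here; one plausible reading, assumed purely for illustration, clips the negative half of tanh so that a completeness estimate stays in [0, 1):

```python
import numpy as np

def rtanh(x):
    # assumed form only: rectify tanh so outputs lie in [0, 1), which suits
    # a bounded regression target such as fractional process completeness
    return np.maximum(np.tanh(x), 0.0)
```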
Analysis of Agent Expertise in Ms. Pac-Man using
Value-of-Information-based Policies | cs.LG cs.AI cs.IT math.IT | Conventional reinforcement learning methods for Markov decision processes
rely on weakly-guided, stochastic searches to drive the learning process. It
can therefore be difficult to predict what agent behaviors might emerge. In
this paper, we consider an information-theoretic cost function for performing
constrained stochastic searches that promote the formation of risk-averse to
risk-favoring behaviors. This cost function is the value of information, which
provides the optimal trade-off between the expected return of a policy and the
policy's complexity; policy complexity is measured by the number of bits and
controlled by a single hyperparameter on the cost function. As the policy
complexity is reduced, the agents will increasingly eschew risky actions. This
reduces the potential for high accrued rewards. As the policy complexity
increases, the agents will take actions, regardless of the risk, that can raise
the long-term rewards. The obtainable reward depends on a single, tunable
hyperparameter that regulates the degree of policy complexity.
We evaluate the performance of value-of-information-based policies on a
stochastic version of Ms. Pac-Man. A major component of this paper is the
demonstration that ranges of policy complexity values yield different game-play
styles, and an explanation of why this occurs. We also show that our
reinforcement-learning search mechanism is more efficient than the others we
utilize. This result implies that value-of-information theory is
appropriate for framing the exploitation-exploration trade-off in reinforcement
learning.
| Isaac J. Sledge, Jose C. Principe | 10.1109/TG.2018.2808201 | 1702.08628 | null | null |
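The value-of-information trade-off described above can be illustrated with a small Blahut-Arimoto-style iteration. This is a generic sketch of the underlying rate-distortion view, not the authors' exact agent; the inverse temperature beta plays the role of the single complexity hyperparameter, with small beta yielding simple, risk-averse policies and large beta recovering the greedy one.

```python
import numpy as np

def voi_policy(Q, beta, iters=100):
    """Trade expected return against policy complexity (in bits).

    Alternates pi(a|s) proportional to p(a) * exp(beta * Q[s, a]) with
    p(a) = mean_s pi(a|s), assuming a uniform state distribution.
    """
    n_states, n_actions = Q.shape
    p = np.full(n_actions, 1.0 / n_actions)   # marginal action prior
    for _ in range(iters):
        pi = p[None, :] * np.exp(beta * Q)
        pi /= pi.sum(axis=1, keepdims=True)
        p = pi.mean(axis=0)                   # update the marginal
    return pi

Q = np.array([[1.0, 0.0], [0.0, 2.0]])
print(voi_policy(Q, beta=0.1))   # near-uniform: low complexity, risk-averse
print(voi_policy(Q, beta=10.0))  # near-greedy: high complexity
```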
Learning What Data to Learn | cs.LG cs.AI stat.ML | Machine learning is essentially the science of playing with data. An
adaptive data selection strategy, enabling to dynamically choose different data
at various training stages, can reach a more effective model in a more
efficient way. In this paper, we propose a deep reinforcement learning
framework, which we call \emph{\textbf{N}eural \textbf{D}ata \textbf{F}ilter}
(\textbf{NDF}), to explore automatic and adaptive data selection in the
training process. In particular, NDF takes advantage of a deep neural network
to adaptively select and filter important data instances from a sequential
stream of training data, such that the future accumulative reward (e.g., the
convergence speed) is maximized. In contrast to previous studies in data
selection that are mainly based on heuristic strategies, NDF is quite generic
and thus can be widely suitable for many machine learning tasks. Taking neural
network training with stochastic gradient descent (SGD) as an example,
comprehensive experiments with respect to various neural network modeling
(e.g., multi-layer perceptron networks, convolutional neural networks and
recurrent neural networks) and several applications (e.g., image classification
and text understanding) demonstrate that NDF-powered SGD can achieve accuracy
comparable to the standard SGD process while using less data and fewer iterations.
| Yang Fan and Fei Tian and Tao Qin and Jiang Bian and Tie-Yan Liu | null | 1702.08635 | null | null |
Auto-clustering Output Layer: Automatic Learning of Latent Annotations
in Neural Networks | cs.LG | In this paper, we discuss a different type of semi-supervised setting: a
coarse level of labeling is available for all observations but the model has to
learn a fine level of latent annotation for each one of them. Problems in this
setting are likely to be encountered in many domains such as text
categorization, protein function prediction, image classification as well as in
exploratory scientific studies such as medical and genomics research. We
consider this setting as simultaneously performed supervised classification
(per the available coarse labels) and unsupervised clustering (within each one
of the coarse labels) and propose a novel output layer modification called
auto-clustering output layer (ACOL) that allows concurrent classification and
clustering based on Graph-based Activity Regularization (GAR) technique. As the
proposed output layer modification duplicates the softmax nodes at the output
layer for each class, GAR allows for competitive learning between these
duplicates within a traditional error-correction learning framework to ultimately
enable a neural network to learn the latent annotations in this partially
supervised setup. We demonstrate how the coarse label supervision impacts
performance and helps propagate useful clustering information between
sub-classes. Comparative tests on three of the most popular image datasets
MNIST, SVHN and CIFAR-100 rigorously demonstrate the effectiveness and
competitiveness of the proposed approach.
| Ozsel Kilinc, Ismail Uysal | null | 1702.08648 | null | null |
Speeding Up Latent Variable Gaussian Graphical Model Estimation via
Nonconvex Optimizations | stat.ML cs.LG | We study the estimation of the latent variable Gaussian graphical model
(LVGGM), where the precision matrix is the superposition of a sparse matrix and
a low-rank matrix. In order to speed up the estimation of the sparse plus
low-rank components, we propose a sparsity constrained maximum likelihood
estimator based on matrix factorization, and an efficient alternating gradient
descent algorithm with hard thresholding to solve it. Our algorithm is orders
of magnitude faster than the convex relaxation based methods for LVGGM. In
addition, we prove that our algorithm is guaranteed to linearly converge to the
unknown sparse and low-rank components up to the optimal statistical precision.
Experiments on both synthetic and genomic data demonstrate the superiority of
our algorithm over the state-of-the-art algorithms and corroborate our theory.
| Pan Xu and Jian Ma and Quanquan Gu | null | 1702.08651 | null | null |
Towards Deeper Understanding of Variational Autoencoding Models | cs.LG stat.ML | We propose a new family of optimization criteria for variational
auto-encoding models, generalizing the standard evidence lower bound. We
provide conditions under which they recover the data distribution and learn
latent features, and formally show that common issues such as blurry samples
and uninformative latent features arise when these conditions are not met.
Based on these new insights, we propose a new sequential VAE model that can
generate sharp samples on the LSUN image dataset based on pixel-wise
reconstruction loss, and propose an optimization criterion that encourages
unsupervised learning of informative latent features.
| Shengjia Zhao, Jiaming Song, Stefano Ermon | null | 1702.08658 | null | null |
On architectural choices in deep learning: From network structure to
gradient convergence and parameter estimation | cs.LG math.OC stat.ML | We study mechanisms to characterize how the asymptotic convergence of
backpropagation in deep architectures, in general, is related to the network
structure, and how it may be influenced by other design choices including
activation type, denoising and dropout rate. We seek to analyze whether network
architecture and input data statistics may guide the choices of learning
parameters and vice versa. Given the broad applicability of deep architectures,
this issue is interesting from both a theoretical and a practical standpoint.
Using properties of general nonconvex objectives (with first-order
information), we first build the association between structural, distributional
and learnability aspects of the network vis-\`a-vis their interaction with
parameter convergence rates. We identify a nice relationship between feature
denoising and dropout, and construct families of networks that achieve the same
level of convergence. We then derive a workflow that provides systematic
guidance regarding the choice of network sizes and learning parameters often
mediated by input statistics. Our technical results are corroborated by an
extensive set of evaluations, presented in this paper as well as independent
empirical observations reported by other groups. We also perform experiments
showing the practical implications of our framework for choosing the best
fully-connected design for a given problem.
| Vamsi K Ithapu, Sathya N Ravi, Vikas Singh | null | 1702.08670 | null | null
Borrowing Treasures from the Wealthy: Deep Transfer Learning through
Selective Joint Fine-tuning | cs.CV cs.AI cs.LG cs.NE stat.ML | Deep neural networks require a large amount of labeled training data during
supervised learning. However, collecting and labeling so much data might be
infeasible in many cases. In this paper, we introduce a source-target selective
joint fine-tuning scheme for improving the performance of deep learning tasks
with insufficient training data. In this scheme, a target learning task with
insufficient training data is carried out simultaneously with another source
learning task with abundant training data. However, the source learning task
does not use all existing training data. Our core idea is to identify and use a
subset of training images from the original source learning task whose
low-level characteristics are similar to those from the target learning task,
and jointly fine-tune shared convolutional layers for both tasks. Specifically,
we compute descriptors from linear or nonlinear filter bank responses on
training images from both tasks, and use such descriptors to search for a
desired subset of training samples for the source learning task.
Experiments demonstrate that our selective joint fine-tuning scheme achieves
state-of-the-art performance on multiple visual classification tasks with
insufficient training data for deep learning. Such tasks include Caltech 256,
MIT Indoor 67, Oxford Flowers 102 and Stanford Dogs 120. In comparison to
fine-tuning without a source domain, the proposed method can improve the
classification accuracy by 2% - 10% using a single model.
| Weifeng Ge, Yizhou Yu | null | 1702.08690 | null | null
Finding Statistically Significant Interactions between Continuous
Features | stat.ML cs.LG stat.ME | The search for higher-order feature interactions that are statistically
significantly associated with a class variable is of high relevance in fields
such as Genetics or Healthcare, but the combinatorial explosion of the
candidate space makes this problem extremely challenging in terms of
computational efficiency and proper correction for multiple testing. While
recent progress has been made regarding this challenge for binary features, we
here present the first solution for continuous features. We propose an
algorithm which overcomes the combinatorial explosion of the search space of
higher-order interactions by deriving a lower bound on the p-value for each
interaction, which enables us to massively prune interactions that can never
reach significance and to thereby gain more statistical power. In our
experiments, our approach efficiently detects all significant interactions in a
variety of synthetic and real-world datasets.
| Mahito Sugiyama and Karsten Borgwardt | null | 1702.08694 | null | null |
Learning rates for classification with Gaussian kernels | cs.LG math.OC stat.ML | This paper aims at a refined error analysis for binary classification using
support vector machine (SVM) with Gaussian kernel and convex loss. Our first
result shows that for some loss functions such as the truncated quadratic loss
and quadratic loss, SVM with Gaussian kernel can reach the almost optimal
learning rate, provided the regression function is smooth. Our second result
shows that, for a large number of loss functions, under some Tsybakov noise
assumption, if the regression function is infinitely smooth, then SVM with
Gaussian kernel can achieve the learning rate of order $m^{-1}$, where $m$ is
the number of samples.
| Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang | null | 1702.08701 | null | null |
Algorithmic stability and hypothesis complexity | stat.ML cs.LG | We introduce a notion of algorithmic stability of learning algorithms---that
we term \emph{argument stability}---that captures stability of the hypothesis
output by the learning algorithm in the normed space of functions from which
hypotheses are selected. The main result of the paper bounds the generalization
error of any learning algorithm in terms of its argument stability. The bounds
are based on martingale inequalities in the Banach space to which the
hypotheses belong. We apply the general bounds to bound the performance of some
learning algorithms based on empirical risk minimization and stochastic
gradient descent.
| Tongliang Liu and G\'abor Lugosi and Gergely Neu and Dacheng Tao | null | 1702.08712 | null | null |
Learning Discrete Representations via Information Maximizing
Self-Augmented Training | stat.ML cs.LG | Learning discrete representations of data is a central machine learning task
because of the compactness of the representations and ease of interpretation.
The task includes clustering and hash learning as special cases. Deep neural
networks are promising to be used because they can model the non-linearity of
data and scale to large datasets. However, their model complexity is huge, and
therefore, we need to carefully regularize the networks in order to learn
useful representations that exhibit intended invariance for applications of
interest. To this end, we propose a method called Information Maximizing
Self-Augmented Training (IMSAT). In IMSAT, we use data augmentation to impose
the invariance on discrete representations. More specifically, we encourage the
predicted representations of augmented data points to be close to those of the
original data points in an end-to-end fashion. At the same time, we maximize
the information-theoretic dependency between data and their predicted discrete
representations. Extensive experiments on benchmark datasets show that IMSAT
produces state-of-the-art results for both clustering and unsupervised hash
learning.
| Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi
Sugiyama | null | 1702.08720 | null | null
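A minimal transcription of the IMSAT objective as we read it from the abstract above: maximize the information-theoretic dependency I(X;Y) = H(Y) - H(Y|X) between inputs and predicted discrete representations, while penalizing divergence between predictions on original and augmented points. The weight lam and the augmentation itself are placeholders.

```python
import torch
import torch.nn.functional as F

def imsat_loss(logits, logits_aug, lam=0.1):
    """Self-augmented training penalty minus mutual information (minimize)."""
    p = F.softmax(logits, dim=1)
    p_mean = p.mean(dim=0)
    h_marginal = -(p_mean * (p_mean + 1e-8).log()).sum()   # H(Y)
    h_cond = -(p * (p + 1e-8).log()).sum(dim=1).mean()     # H(Y|X)
    mutual_info = h_marginal - h_cond
    # keep predictions on augmented points close to the originals
    sat = F.kl_div(F.log_softmax(logits_aug, dim=1), p,
                   reduction="batchmean")
    return sat - lam * mutual_info
```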
ShaResNet: reducing residual network parameter number by sharing weights | cs.CV cs.LG | Deep Residual Networks have reached the state of the art in many image
processing tasks such as image classification. However, the cost of a gain in
accuracy in terms of depth and memory is prohibitive as it requires a higher
number of residual blocks, up to double the initial value. To tackle this
problem, we propose in this paper a way to reduce the redundant information of
the networks. We share the weights of convolutional layers between residual
blocks operating at the same spatial scale. The signal flows multiple times in
the same convolutional layer. The resulting architecture, called ShaResNet,
contains block specific layers and shared layers. These ShaResNet are trained
exactly in the same fashion as the commonly used residual networks. We show, on
the one hand, that they are almost as efficient as their sequential
counterparts while involving fewer parameters, and on the other hand that they
are more efficient than a residual network with the same number of parameters.
For example, a 152-layer-deep residual network can be reduced to 106
convolutional layers, i.e. a 39\% reduction in parameters, while losing less than
0.2\% accuracy on ImageNet.
| Alexandre Boulch | null | 1702.08782 | null | null |
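Weight sharing across residual blocks at the same spatial scale, as described above, amounts to reusing one convolution module in several blocks. A minimal PyTorch sketch under our own simplifications (a single shared conv per block, with batch norms kept block-specific):

```python
import torch.nn as nn

class SharedBlock(nn.Module):
    """Residual block whose second 3x3 conv is shared across the stage."""
    def __init__(self, channels, shared_conv):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)  # block-specific
        self.shared = shared_conv                                 # shared weights
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        return self.relu(x + self.bn2(self.shared(out)))

# one conv instance reused by every block of the stage shares its parameters
shared = nn.Conv2d(64, 64, 3, padding=1)
stage = nn.Sequential(*[SharedBlock(64, shared) for _ in range(4)])
```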
Robust Budget Allocation via Continuous Submodular Functions | cs.LG cs.AI cs.DS cs.SI math.OC | The optimal allocation of resources for maximizing influence, spread of
information or coverage, has gained attention in the past years, in particular
in machine learning and data mining. But in applications, the parameters of the
problem are rarely known exactly, and using wrong parameters can lead to
undesirable outcomes. We hence revisit a continuous version of the Budget
Allocation or Bipartite Influence Maximization problem introduced by Alon et
al. (2012) from a robust optimization perspective, where an adversary may
choose the least favorable parameters within a confidence set. The resulting
problem is a nonconvex-concave saddle point problem (or game). We show that
this nonconvex problem can be solved exactly by leveraging connections to
continuous submodular functions, and by solving a constrained submodular
minimization problem. Although constrained submodular minimization is hard in
general, here, we establish conditions under which such a problem can be solved
to arbitrary precision $\epsilon$.
| Matthew Staib and Stefanie Jegelka | null | 1702.08791 | null | null |
Central Moment Discrepancy (CMD) for Domain-Invariant Representation
Learning | stat.ML cs.LG | The learning of domain-invariant representations in the context of domain
adaptation with neural networks is considered. We propose a new regularization
method that minimizes the discrepancy between domain-specific latent feature
representations directly in the hidden activation space. Although some standard
distribution matching approaches exist that can be interpreted as the matching
of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit
order-wise matching of higher order moments has not been considered before. We
propose to match the higher order central moments of probability distributions
by means of order-wise moment differences. Our model does not require
computationally expensive distance and kernel matrix computations. We utilize
the equivalent representation of probability distributions by moment sequences
to define a new distance function, called Central Moment Discrepancy (CMD). We
prove that CMD is a metric on the set of probability distributions on a compact
interval. We further prove that convergence of probability distributions on
compact intervals w.r.t. the new metric implies convergence in distribution of
the respective random variables. We test our approach on two different
benchmark data sets for object recognition (Office) and sentiment analysis of
product reviews (Amazon reviews). CMD achieves a new state-of-the-art
performance on most domain adaptation tasks of Office and outperforms networks
trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural
Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity
analysis shows that the new approach is stable w.r.t. parameter changes in a
certain interval. The source code of the experiments is publicly available.
| Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas
Natschl\"ager, Susanne Saminger-Platz | null | 1702.08811 | null | null |
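The Central Moment Discrepancy itself is simple to compute. Below is a small NumPy sketch of its empirical form as we read it from the abstract above: match means, then order-wise central moments, with each term scaled by the width of the compact interval [a, b]; the truncation order K is a free parameter here.

```python
import numpy as np

def cmd(x, y, K=5, a=0.0, b=1.0):
    """Empirical Central Moment Discrepancy between samples x and y.

    x, y: arrays of shape (n, d) with coordinates assumed to lie in [a, b].
    """
    mx, my = x.mean(axis=0), y.mean(axis=0)
    d = np.linalg.norm(mx - my) / (b - a)
    for k in range(2, K + 1):
        ck_x = ((x - mx) ** k).mean(axis=0)   # order-k central moments
        ck_y = ((y - my) ** k).mean(axis=0)
        d += np.linalg.norm(ck_x - ck_y) / (b - a) ** k
    return d

rng = np.random.default_rng(0)
print(cmd(rng.uniform(size=(500, 3)), rng.uniform(size=(500, 3))))
```

Note that, unlike MMD, no kernel or pairwise distance matrix is required, which is the computational advantage claimed above.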
Learning Deep Nearest Neighbor Representations Using Differentiable
Boundary Trees | cs.LG | Nearest neighbor (kNN) methods have been gaining popularity in recent years
in light of advances in hardware and efficiency of algorithms. There is a
plethora of methods to choose from today, each with their own advantages and
disadvantages. One requirement shared by all kNN-based methods is the need
for a good representation and distance measure between samples.
We introduce a new method called differentiable boundary tree which allows
for learning deep kNN representations. We build on the recently proposed
boundary tree algorithm which allows for efficient nearest neighbor
classification, regression and retrieval. By modelling traversals in the tree
as stochastic events, we are able to form a differentiable cost function which
is associated with the tree's predictions. Using a deep neural network to
transform the data and back-propagating through the tree allows us to learn
good representations for kNN methods.
We demonstrate that our method is able to learn suitable representations
allowing for very efficient trees with a clearly interpretable structure.
| Daniel Zoran, Balaji Lakshminarayanan, Charles Blundell | null | 1702.08833 | null | null |
Deep Forest | cs.LG stat.ML | Current deep learning models are mostly built upon neural networks, i.e.,
multiple layers of parameterized differentiable nonlinear modules that can be
trained by backpropagation. In this paper, we explore the possibility of
building deep models based on non-differentiable modules. We conjecture that
the mystery behind the success of deep neural networks owes much to three
characteristics, i.e., layer-by-layer processing, in-model feature
transformation and sufficient model complexity. We propose the gcForest
approach, which generates \textit{deep forest} holding these characteristics.
This is a decision tree ensemble approach with far fewer hyper-parameters than
deep neural networks, and its model complexity can be automatically determined
in a data-dependent way. Experiments show that its performance is quite robust
to hyper-parameter settings, such that in most cases, even across different
data from different domains, it is able to get excellent performance by using
the same default setting. This study opens the door to deep learning based on
non-differentiable modules, and exhibits the possibility of constructing deep
models without using backpropagation.
| Zhi-Hua Zhou and Ji Feng | null | 1702.08835 | null | null |
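One level of the cascade can be sketched with scikit-learn: each level receives the input features augmented with the class-probability vectors produced (out-of-fold) by several forests at the previous level. This is a toy reading of the cascade idea, not the reference implementation; the estimator counts and the 3-fold scheme are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_predict

def cascade_level(X, y, prev_probas=None):
    """Return class vectors from two forests fit on [X, previous vectors]."""
    feats = X if prev_probas is None else np.hstack([X, prev_probas])
    probas = []
    for Forest in (RandomForestClassifier, ExtraTreesClassifier):
        clf = Forest(n_estimators=100, random_state=0)
        # out-of-fold predictions avoid overfitting the augmented features
        probas.append(cross_val_predict(clf, feats, y, cv=3,
                                        method="predict_proba"))
    return np.hstack(probas)

X, y = make_classification(n_samples=300, random_state=0)
level1 = cascade_level(X, y)
level2 = cascade_level(X, y, prev_probas=level1)
```

Growing levels until validation performance stops improving is one way the model complexity can be determined in a data-dependent fashion, as the abstract describes.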
Iterative Bayesian Learning for Crowdsourced Regression | cs.LG stat.ML | Crowdsourcing platforms emerged as popular venues for purchasing human
intelligence at low cost for large volumes of tasks. As many low-paid workers
are prone to give noisy answers, a common practice is to add redundancy by
assigning multiple workers to each task and then simply average out these
answers. However, to fully harness the wisdom of the crowd, one needs to learn
the heterogeneous quality of each worker. We resolve this fundamental challenge
in crowdsourced regression tasks, i.e., the answer takes continuous labels,
where identifying good or bad workers becomes much more non-trivial compared to
a classification setting of discrete labels. In particular, we introduce a
Bayesian iterative scheme and show that it provably achieves the optimal mean
squared error. Our evaluations on synthetic and real-world datasets support our
theoretical results and show the superiority of the proposed scheme.
| Jungseul Ok, Sewoong Oh, Yunhun Jang, Jinwoo Shin, and Yung Yi | null | 1702.08840 | null | null
Deep Semi-Random Features for Nonlinear Function Approximation | cs.LG cs.NE stat.ML | We propose semi-random features for nonlinear function approximation. The
flexibility of semi-random features lies between the fully adjustable units in
deep learning and the random features used in kernel methods. For one hidden
layer models with semi-random features, we prove with no unrealistic
assumptions that the model classes contain an arbitrarily good function as the
width increases (universality), and despite non-convexity, we can find such a
good function (optimization theory) that generalizes to unseen new data
(generalization bound). For deep models, with no unrealistic assumptions, we
prove universal approximation ability, a lower bound on approximation error, a
partial optimization guarantee, and a generalization bound. Depending on the
problems, the generalization bound of deep semi-random features can be
exponentially better than the known bounds of deep ReLU nets; our
generalization error bound can be independent of the depth, the number of
trainable weights as well as the input dimensionality. In experiments, we show
that semi-random features can match the performance of neural networks by using
slightly more units, and it outperforms random features by using significantly
fewer units. Moreover, we introduce a new implicit ensemble method by using
semi-random features.
| Kenji Kawaguchi, Bo Xie, Vikas Verma, Le Song | null | 1702.08882 | null | null |
Low-rank Label Propagation for Semi-supervised Learning with 100
Million Samples | cs.LG | The success of semi-supervised learning crucially relies on the scalability
to a huge amount of unlabelled data that are needed to capture the underlying
manifold structure for better classification. Since computing the pairwise
similarity between the training data is prohibitively expensive for most kinds
of input data, there is currently no general ready-to-use semi-supervised
learning method/tool available for learning with tens of millions or more data
points. In this paper, we adopted the idea of two low-rank label propagation
algorithms, GLNP (Global Linear Neighborhood Propagation) and Kernel Nystr\"om
Approximation, and implemented the parallelized version of the two algorithms
accelerated with Nesterov's accelerated projected gradient descent for Big-data
Label Propagation (BigLP).
The parallel algorithms are tested on five real datasets ranging from 7000 to
10,000,000 in size and a simulation dataset of 100,000,000 samples. In the
experiments, the implementation can scale up to datasets with 100,000,000
samples and hundreds of features and the algorithms also significantly improved
the prediction accuracy when only a very small percentage of the data is
labeled. The results demonstrate that the BigLP implementation is highly
scalable to big data and effective in utilizing the unlabeled data for
semi-supervised learning.
| Raphael Petegrosso, Wei Zhang, Zhuliu Li, Yousef Saad and Rui Kuang | null | 1702.08884 | null | null |
Stabilising Experience Replay for Deep Multi-Agent Reinforcement
Learning | cs.AI cs.LG cs.MA | Many real-world problems, such as network packet routing and urban traffic
control, are naturally modeled as multi-agent reinforcement learning (RL)
problems. However, existing multi-agent RL methods typically scale poorly with
the problem size. Therefore, a key challenge is to translate the success of
deep learning on single-agent RL to the multi-agent setting. A major stumbling
block is that independent Q-learning, the most popular multi-agent RL method,
introduces nonstationarity that makes it incompatible with the experience
replay memory on which deep Q-learning relies. This paper proposes two methods
that address this problem: 1) using a multi-agent variant of importance
sampling to naturally decay obsolete data and 2) conditioning each agent's
value function on a fingerprint that disambiguates the age of the data sampled
from the replay memory. Results on a challenging decentralised variant of
StarCraft unit micromanagement confirm that these methods enable the successful
combination of experience replay with multi-agent RL.
| Jakob Foerster, Nantas Nardelli, Gregory Farquhar, Triantafyllos
Afouras, Philip H. S. Torr, Pushmeet Kohli, Shimon Whiteson | null | 1702.08887 | null | null |
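Method (2) above is light to implement: condition each agent's value function on a small fingerprint identifying how old a replayed sample is. A sketch, with the training iteration and the exploration rate standing in for the fingerprint (illustrative features, not necessarily the paper's exact choice):

```python
import numpy as np

def with_fingerprint(obs, train_iter, epsilon, max_iters=1_000_000):
    """Append a data-age fingerprint to an agent's observation.

    Replayed transitions then carry enough information for the Q-network
    to disambiguate which stage of the other agents' learning generated
    them, mitigating nonstationarity in multi-agent experience replay.
    """
    fingerprint = np.array([train_iter / max_iters, epsilon],
                           dtype=np.float32)
    return np.concatenate([obs.astype(np.float32), fingerprint])

obs = np.zeros(8)
aug = with_fingerprint(obs, train_iter=250_000, epsilon=0.2)  # shape (10,)
```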
Bridging the Gap Between Value and Policy Based Reinforcement Learning | cs.AI cs.LG stat.ML | We establish a new connection between value and policy based reinforcement
learning (RL) based on a relationship between softmax temporal value
consistency and policy optimality under entropy regularization. Specifically,
we show that softmax consistent action values correspond to optimal entropy
regularized policy probabilities along any action sequence, regardless of
provenance. From this observation, we develop a new RL algorithm, Path
Consistency Learning (PCL), that minimizes a notion of soft consistency error
along multi-step action sequences extracted from both on- and off-policy
traces. We examine the behavior of PCL in different scenarios and show that PCL
can be interpreted as generalizing both actor-critic and Q-learning algorithms.
We subsequently deepen the relationship by showing how a single model can be
used to represent both a policy and the corresponding softmax state values,
eliminating the need for a separate critic. The experimental evaluation
demonstrates that PCL significantly outperforms strong actor-critic and
Q-learning baselines across several benchmarks.
| Ofir Nachum, Mohammad Norouzi, Kelvin Xu, Dale Schuurmans | null | 1702.08892 | null | null |
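A plausible transcription of the soft consistency error minimized by PCL over a sub-trajectory of length d; the discounting, the entropy weight tau and the squared penalty follow the description above, while the tensor shapes are our own convention.

```python
import torch

def path_consistency_loss(values, log_pis, rewards, tau=0.1, gamma=1.0):
    """Squared soft consistency error along one d-step path.

    values:  (d+1,) state values V(s_0..s_d)
    log_pis: (d,)   log pi(a_t | s_t) along the path
    rewards: (d,)   rewards r_t
    """
    d = rewards.shape[0]
    disc = gamma ** torch.arange(d, dtype=rewards.dtype)
    c = (-values[0] + (gamma ** d) * values[-1]
         + (disc * (rewards - tau * log_pis)).sum())
    return 0.5 * c ** 2
```

Because the error is defined for any sub-trajectory, it can be minimized on paths drawn from either on-policy or off-policy traces, as the abstract notes.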
Hierarchical Implicit Models and Likelihood-Free Variational Inference | stat.ML cs.LG stat.CO stat.ME | Implicit probabilistic models are a flexible class of models defined by a
simulation process for data. They form the basis for theories which encompass
our understanding of the physical world. Despite this fundamental nature, the
use of implicit models remains limited due to challenges in specifying complex
latent structure in them, and in performing inferences in such models with
large data sets. In this paper, we first introduce hierarchical implicit models
(HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian
modeling, thereby defining models via simulators of data with rich hidden
structure. Next, we develop likelihood-free variational inference (LFVI), a
scalable variational inference algorithm for HIMs. Key to LFVI is specifying a
variational family that is also implicit. This matches the model's flexibility
and allows for accurate approximation of the posterior. We demonstrate diverse
applications: a large-scale physical simulator for predator-prey populations in
ecology; a Bayesian generative adversarial network for discrete data; and a
deep implicit model for text generation.
| Dustin Tran, Rajesh Ranganath, David M. Blei | null | 1702.08896 | null | null |
Lipschitz Optimisation for Lipschitz Interpolation | cs.LG stat.ML | Techniques known as Nonlinear Set Membership prediction, Kinky Inference or
Lipschitz Interpolation are fast and numerically robust approaches to
nonparametric machine learning that have been proposed to be utilised in the
context of system identification and learning-based control. They utilise
presupposed Lipschitz properties in order to compute inferences over unobserved
function values. Unfortunately, most of these approaches rely on exact
knowledge about the input space metric as well as about the Lipschitz constant.
Furthermore, existing techniques to estimate the Lipschitz constants from the
data are not robust to noise or seem to be ad-hoc and typically are decoupled
from the ultimate learning and prediction task. To overcome these limitations,
we propose an approach for optimising parameters of the presupposed metrics by
minimising validation set prediction errors. To avoid poor performance due to
local minima, we propose to utilise Lipschitz properties of the optimisation
objective to ensure global optimisation success. The resulting approach is a
new flexible method for nonparametric black-box learning. We provide
experimental evidence of the competitiveness of our approach on artificial as
well as on real data.
| Jan-Peter Calliess | null | 1702.08898 | null | null |
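The Lipschitz interpolation predictor underlying the method is compact enough to state in full, and choosing the Lipschitz constant L (and metric parameters) by minimizing validation prediction error is the optimization step the abstract proposes. A NumPy sketch with a Euclidean metric assumed:

```python
import numpy as np

def lipschitz_interpolate(Xq, X, y, L):
    """Kinky-inference prediction: midpoint of the Lipschitz envelope."""
    D = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    upper = (y[None, :] + L * D).min(axis=1)   # tightest ceiling
    lower = (y[None, :] - L * D).max(axis=1)   # tightest floor
    return 0.5 * (upper + lower)

def fit_L(X_tr, y_tr, X_val, y_val, grid=np.logspace(-2, 2, 50)):
    """Pick L by validation error, in the spirit of the approach above."""
    errs = [np.abs(lipschitz_interpolate(X_val, X_tr, y_tr, L) - y_val).mean()
            for L in grid]
    return grid[int(np.argmin(errs))]
```

A grid search is shown for simplicity; the paper's point is that the validation objective itself has exploitable Lipschitz structure, enabling global optimization.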
A description length approach to determining the number of k-means
clusters | stat.ML cs.LG | We present an asymptotic criterion to determine the optimal number of
clusters in k-means. We consider k-means as data compression, and propose to
adopt the number of clusters that minimizes the estimated description length
after compression. Here we report two types of compression ratio based on two
ways to quantify the description length of data after compression. This
approach further offers a way to evaluate whether clusters obtained with
k-means have a hierarchical structure by examining whether multi-stage
compression can further reduce the description length. We applied our criteria
to synthetic data and empirical neuroimaging data to determine the number of
clusters, observing how the criteria behave across different types of dataset
and how suitable each of the two criteria is for a given dataset. We found that
our method can offer reasonable clustering results that
are useful for dimension reduction. While our numerical results revealed a
dependency of the criteria on various aspects of the dataset, such as its
dimensionality, the description length approach proposed here provides useful
guidance to determine the number of clusters in a principled manner when
underlying properties of the data are unknown and only inferred from
observation of data.
| Hiromitsu Mizutani (1) and Ryota Kanai (1) ((1) Araya Inc.) | null | 1703.00039 | null | null |
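To make the criterion concrete, here is one crude two-part code a reader might implement: bits for the cluster indices, bits for the centroids, and a Gaussian code for the residuals. The actual criteria in the paper are asymptotic and differ in detail, so treat this purely as a sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def description_length(X, k, centroid_bits=32):
    """Two-part code length (in bits) of X compressed with k-means."""
    n, d = X.shape
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    var = max(km.inertia_ / (n * d), 1e-12)      # residual variance
    index_bits = n * np.log2(k) if k > 1 else 0.0
    model_bits = k * d * centroid_bits           # transmit the centroids
    residual_bits = 0.5 * n * d * np.log2(2 * np.pi * np.e * var)
    return index_bits + model_bits + residual_bits

X = np.random.default_rng(0).normal(size=(500, 2))
best_k = min(range(1, 11), key=lambda k: description_length(X, k))
```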
Provably Optimal Algorithms for Generalized Linear Contextual Bandits | cs.LG cs.AI stat.ML | Contextual bandits are widely used in Internet services from news
recommendation to advertising, and to Web search. Generalized linear models
(logistic regression in particular) have demonstrated stronger performance
than linear models in many applications where rewards are binary. However, most
theoretical analyses on contextual bandits so far are on linear bandits. In
this work, we propose an upper confidence bound based algorithm for generalized
linear contextual bandits, which achieves an $\tilde{O}(\sqrt{dT})$ regret over
$T$ rounds with $d$ dimensional feature vectors. This regret matches the
minimax lower bound, up to logarithmic terms, and improves on the best previous
result by a $\sqrt{d}$ factor, assuming the number of arms is fixed. A key
component in our analysis is to establish a new, sharp finite-sample confidence
bound for maximum-likelihood estimates in generalized linear models, which may
be of independent interest. We also analyze a simpler upper confidence bound
algorithm, which is useful in practice, and prove it to have optimal regret for
certain cases.
| Lihong Li and Yu Lu and Dengyong Zhou | null | 1703.00048 | null | null |
Achieving non-discrimination in prediction | cs.LG stat.ML | Discrimination-aware classification is receiving increasing attention in
data science fields. The pre-process methods for constructing a
discrimination-free classifier first remove discrimination from the training
data, and then learn the classifier from the cleaned data. However, they lack a
theoretical guarantee for the potential discrimination when the classifier is
deployed for prediction. In this paper, we fill this gap by mathematically
bounding the probability of the discrimination in prediction being within a
given interval in terms of the training data and classifier. We adopt the
causal model for modeling the data generation mechanism, and formally define
discrimination in the population, in a dataset, and in prediction. We obtain two
important theoretical results: (1) the discrimination in prediction can still
exist even if the discrimination in the training data is completely removed;
and (2) not all pre-process methods can ensure non-discrimination in prediction
even though they can achieve non-discrimination in the modified training data.
Based on the results, we develop a two-phase framework for constructing a
discrimination-free classifier with a theoretical guarantee. The experiments
demonstrate the theoretical results and show the effectiveness of our two-phase
framework.
| Lu Zhang (1), Yongkai Wu (1), Xintao Wu (1) ((1) University of
Arkansas) | null | 1703.00060 | null | null
On the Power of Learning from $k$-Wise Queries | cs.LG cs.DS | Several well-studied models of access to data samples, including statistical
queries, local differential privacy and low-communication algorithms rely on
queries that provide information about a function of a single sample. (For
example, a statistical query (SQ) gives an estimate of $\mathbb{E}_{x \sim D}[q(x)]$
for any choice of the query function $q$ mapping $X$ to the reals, where $D$ is
an unknown data distribution over $X$.) Yet some data analysis algorithms rely
on properties of functions that depend on multiple samples. Such algorithms
would be naturally implemented using $k$-wise queries each of which is
specified by a function $q$ mapping $X^k$ to the reals. Hence it is natural to
ask whether algorithms using $k$-wise queries can solve learning problems more
efficiently and by how much.
Blum, Kalai and Wasserman (2003) showed that for any weak PAC learning
problem over a fixed distribution, the complexity of learning with $k$-wise SQs
is smaller than the (unary) SQ complexity by a factor of at most $2^k$. We show
that for more general problems over distributions the picture is substantially
richer. For every $k$, the complexity of distribution-independent PAC learning
with $k$-wise queries can be exponentially larger than learning with
$(k+1)$-wise queries. We then give two approaches for simulating a $k$-wise
query using unary queries. The first approach exploits the structure of the
problem that needs to be solved. It generalizes and strengthens (exponentially)
the results of Blum et al. It allows us to derive strong lower bounds for
learning DNF formulas and stochastic constraint satisfaction problems that hold
against algorithms using $k$-wise queries. The second approach exploits the
$k$-party communication complexity of the $k$-wise query function.
| Vitaly Feldman, Badih Ghazi | null | 1703.00066 | null | null |
Multi-Sensor Data Pattern Recognition for Multi-Target Localization: A
Machine Learning Approach | cs.SY cs.LG stat.ML | Data-target pairing is an important step towards multi-target localization
for the intelligent operation of unmanned systems. Target localization plays a
crucial role in numerous applications, such as search and rescue missions,
traffic management and surveillance. The objective of this paper is to present
an innovative target location learning approach, where numerous machine
learning approaches, including K-means clustering and support vector machines
(SVM), are used to learn the data pattern across a list of spatially
distributed sensors. To enable the accurate data association from different
sensors for accurate target localization, appropriate data pre-processing is
essential, which is then followed by the application of different machine
learning algorithms to appropriately group data from different sensors for the
accurate localization of multiple targets. Through simulation examples, the
performance of these machine learning algorithms is quantified and compared.
| Kasthurirengan Suresh, Samuel Silva, Johnathan Votion, and Yongcan Cao | null | 1703.00084 | null | null |
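The clustering half of the pipeline described above is straightforward to reproduce. A toy example with scikit-learn, where noisy measurements from spatially distributed sensors are grouped so that each cluster centre estimates one target location (synthetic data, two targets assumed):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
targets = np.array([[0.0, 0.0], [5.0, 5.0]])   # unknown in practice
# each of several sensors reports noisy 2-D position measurements
measurements = np.vstack([t + 0.3 * rng.standard_normal((60, 2))
                          for t in targets])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(measurements)
estimated_targets = km.cluster_centers_        # data-target pairing result
print(estimated_targets)
```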
Gram-CTC: Automatic Unit Selection and Target Decomposition for Sequence
Labelling | cs.CL cs.LG cs.NE | Most existing sequence labelling models rely on a fixed decomposition of a
target sequence into a sequence of basic units. These methods suffer from two
major drawbacks: 1) the set of basic units is fixed, such as the set of words,
characters or phonemes in speech recognition, and 2) the decomposition of
target sequences is fixed. These drawbacks usually result in sub-optimal
performance of modeling sequences. In this pa- per, we extend the popular CTC
loss criterion to alleviate these limitations, and propose a new loss function
called Gram-CTC. While preserving the advantages of CTC, Gram-CTC automatically
learns the best set of basic units (grams), as well as the most suitable
decomposition of tar- get sequences. Unlike CTC, Gram-CTC allows the model to
output variable number of characters at each time step, which enables the model
to capture longer term dependency and improves the computational efficiency. We
demonstrate that the proposed Gram-CTC improves CTC in terms of both
performance and efficiency on the large vocabulary speech recognition task at
multiple scales of data, and that with Gram-CTC we can outperform the
state-of-the-art on a standard speech benchmark.
| Hairong Liu, Zhenyao Zhu, Xiangang Li, Sanjeev Satheesh | null | 1703.00096 | null | null |
SARAH: A Novel Method for Machine Learning Problems Using Stochastic
Recursive Gradient | stat.ML cs.LG math.OC | In this paper, we propose a StochAstic Recursive grAdient algoritHm (SARAH),
as well as its practical variant SARAH+, as a novel approach to finite-sum
minimization problems. Different from the vanilla SGD and other modern
stochastic methods such as SVRG, S2GD, SAG and SAGA, SARAH admits a simple
recursive framework for updating stochastic gradient estimates; when comparing
to SAG/SAGA, SARAH does not require a storage of past gradients. The linear
convergence rate of SARAH is proven under strong convexity assumption. We also
prove a linear convergence rate (in the strongly convex case) for an inner loop
of SARAH, a property that SVRG does not possess. Numerical experiments
demonstrate the efficiency of our algorithm.
| Lam M. Nguyen, Jie Liu, Katya Scheinberg, Martin Tak\'a\v{c} | null | 1703.00102 | null | null |
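The recursive gradient estimate that gives SARAH its name is $v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}$, seeded by a full gradient at each outer iteration and requiring no table of past gradients. A minimal sketch (step size and loop lengths illustrative):

```python
import numpy as np

def sarah(grad_i, w0, n, lr=0.05, inner=100, outer=10,
          rng=np.random.default_rng(0)):
    """grad_i(w, i): gradient of the i-th summand at w; returns final iterate."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(outer):
        v = np.mean([grad_i(w, i) for i in range(n)], axis=0)  # full gradient
        w_prev, w = w, w - lr * v
        for _ in range(inner):
            i = rng.integers(n)
            v = grad_i(w, i) - grad_i(w_prev, i) + v   # recursive estimate
            w_prev, w = w, w - lr * v                  # no stored gradients
    return w

# least-squares example: f_i(w) = 0.5 * (a_i . w - b_i)^2
A, b = np.random.randn(200, 5), np.random.randn(200)
w = sarah(lambda w, i: (A[i] @ w - b[i]) * A[i], np.zeros(5), n=200)
```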
Dual Iterative Hard Thresholding: From Non-convex Sparse Minimization to
Non-smooth Concave Maximization | cs.LG stat.ML | Iterative Hard Thresholding (IHT) is a class of projected gradient descent
methods for optimizing sparsity-constrained minimization models, with the best
known efficiency and scalability in practice. As far as we know, the existing
IHT-style methods are designed for sparse minimization in primal form. It
remains open to explore duality theory and algorithms in such a non-convex and
NP-hard problem setting. In this paper, we bridge this gap by establishing a
duality theory for sparsity-constrained minimization with $\ell_2$-regularized
loss function and proposing an IHT-style algorithm for dual maximization. Our
sparse duality theory provides a set of sufficient and necessary conditions
under which the original NP-hard/non-convex problem can be equivalently solved
in a dual formulation. The proposed dual IHT algorithm is a super-gradient
method for maximizing the non-smooth dual objective. An interesting finding is
that the sparse recovery performance of dual IHT is invariant to the Restricted
Isometry Property (RIP), which is required by virtually all the existing primal
IHT algorithms without sparsity relaxation. Moreover, a stochastic variant of
dual IHT is proposed for large-scale stochastic optimization. Numerical results
demonstrate the superiority of dual IHT algorithms to the state-of-the-art
primal IHT-style algorithms in model estimation accuracy and computational
efficiency.
| Bo Liu, Xiao-Tong Yuan, Lezi Wang, Qingshan Liu, Dimitris N. Metaxas | null | 1703.00119 | null | null |
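For reference, the primal IHT template the paper builds on is a gradient step followed by hard thresholding to the k largest-magnitude coordinates; the dual algorithm described above applies an analogous super-gradient step in the dual space. A sketch of the primal version:

```python
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w, zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]
    out[idx] = w[idx]
    return out

def iht(grad, w0, k, lr=0.1, iters=200):
    """Iterative Hard Thresholding for min f(w) s.t. ||w||_0 <= k."""
    w = hard_threshold(np.asarray(w0, dtype=float), k)
    for _ in range(iters):
        w = hard_threshold(w - lr * grad(w), k)
    return w

# sparse least squares: recover a 3-sparse signal
rng = np.random.default_rng(0)
A, x_true = rng.standard_normal((80, 40)), np.zeros(40)
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = iht(lambda w: A.T @ (A @ w - y) / len(y), np.zeros(40), k=3)
```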
Revisiting Unsupervised Learning for Defect Prediction | cs.SE cs.LG | Collecting quality data from software projects can be time-consuming and
expensive. Hence, some researchers explore "unsupervised" approaches to quality
prediction that do not require labelled data. An alternative technique is to
use "supervised" approaches that learn models from project data labelled with,
say, "defective" or "not-defective". Most researchers use these supervised
models since, it is argued, they can exploit more knowledge of the projects.
At FSE'16, Yang et al. reported startling results where unsupervised defect
predictors outperformed supervised predictors for effort-aware just-in-time
defect prediction. If confirmed, these results would lead to a dramatic
simplification of a seemingly complex task (data mining) that is widely
explored in the software engineering literature.
This paper repeats and refutes those results as follows. (1) There is much
variability in the efficacy of the Yang et al. predictors so even with their
approach, some supervised data is required to prune weaker predictors away.
(2) Their findings were grouped across $N$ projects. When we repeat their
analysis on a project-by-project basis, supervised predictors are seen to work
better.
Even though this paper rejects the specific conclusions of Yang et al., we
still endorse their general goal. In our experiments, supervised predictors
did not perform outstandingly better than unsupervised ones for effort-aware
just-in-time defect prediction. Hence, there may indeed be some combination of
unsupervised learners that achieves performance comparable to supervised ones. We
therefore encourage others to work in this promising area.
| Wei Fu, Tim Menzies | 10.1145/3106237.3106257 | 1703.00132 | null | null |
Easy over Hard: A Case Study on Deep Learning | cs.SE cs.LG | While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training time limits the ability of (a)~a
researcher to test the stability of their conclusion via repeated runs with
different random seeds; and (b)~other researchers to repeat, improve, or even
refute that original work.
For example, recently, deep learning was used to find which questions in the
Stack Overflow programmer discussion forum can be linked together. That deep
learning system took 14 hours to execute. We show here that a very simple
optimizer called DE, used to fine-tune an SVM, can achieve similar (and
sometimes better) results. The DE approach terminated in 10 minutes, i.e., 84
times faster than the deep learning method.
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
| Wei Fu, Tim Menzies | 10.1145/3106237.3106256 | 1703.00133 | null | null |
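The DE-tuned SVM baseline is easy to reproduce in outline with SciPy's stock differential evolution. The dataset, search ranges and budget below are illustrative, not the paper's exact setup.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def objective(theta):
    """Negative cross-validated accuracy at (log10 C, log10 gamma)."""
    C, gamma = 10.0 ** theta[0], 10.0 ** theta[1]
    return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

result = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)],
                                maxiter=10, popsize=10, seed=0)
best_C, best_gamma = 10.0 ** result.x
print(best_C, best_gamma, -result.fun)
```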
Theoretical Properties for Neural Networks with Weight Matrices of Low
Displacement Rank | cs.LG cs.CV stat.ML | Recently low displacement rank (LDR) matrices, or so-called structured
matrices, have been proposed to compress large-scale neural networks. Empirical
results have shown that neural networks with weight matrices of LDR matrices,
referred as LDR neural networks, can achieve significant reduction in space and
computational complexity while retaining high accuracy. We formally study LDR
matrices in deep learning. First, we prove the universal approximation property
of LDR neural networks with a mild condition on the displacement operators. We
then show that the error bounds of LDR neural networks are as efficient as
general neural networks with both single-layer and multiple-layer structure.
Finally, we propose a back-propagation-based training algorithm for general LDR
neural networks.
| Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, Victor Pan and
Bo Yuan | null | 1703.00144 | null | null |
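As a concrete instance of an LDR/structured weight matrix, the sketch below uses a circulant layer, whose n-by-n weight matrix is defined by n parameters and applied in O(n log n) via the FFT; this illustrates the compression idea only, not the paper's training algorithm:

```python
# Sketch: a circulant weight matrix needs n parameters instead of n*n,
# and its matrix-vector product reduces to circular convolution via FFT.
import numpy as np

class CirculantLayer:
    def __init__(self, n, rng):
        self.c = rng.standard_normal(n)   # n parameters instead of n*n

    def forward(self, x):
        # Circulant matvec: C @ x == ifft(fft(c) * fft(x)).real
        return np.fft.ifft(np.fft.fft(self.c) * np.fft.fft(x)).real

rng = np.random.default_rng(0)
n = 8
layer = CirculantLayer(n, rng)
x = rng.standard_normal(n)

# Check against the explicit circulant matrix (column j = roll(c, j)).
C = np.array([np.roll(layer.c, j) for j in range(n)]).T
assert np.allclose(C @ x, layer.forward(x))
print(layer.forward(x))
```

Other LDR families (e.g., Toeplitz, Hankel, Vandermonde, Cauchy) admit similarly fast products, which is what makes the space and time savings possible.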
Modular Representation of Layered Neural Networks | stat.ML cs.LG | Layered neural networks have greatly improved the performance of various
applications including image processing, speech recognition, natural language
processing, and bioinformatics. However, it is still difficult to discover or
interpret knowledge from the inference provided by a layered neural network,
since its internal representation has many nonlinear and complex parameters
embedded in hierarchical layers. Therefore, it becomes important to establish a
new methodology by which layered neural networks can be understood.
In this paper, we propose a new method for extracting a global and simplified
structure from a layered neural network. Based on network analysis, the
proposed method detects communities or clusters of units with similar
connection patterns. We show its effectiveness by applying it to three use
cases. (1) Network decomposition: it can decompose a trained neural network
into multiple small independent networks, thus dividing the problem and reducing
the computation time. (2) Training assessment: the appropriateness of a trained
result with a given hyperparameter or randomly chosen initial parameters can be
evaluated by using a modularity index. (3) Data analysis: applied to practical data,
it reveals the community structure in the input, hidden, and output layers,
which serves as a clue for discovering knowledge from a trained neural network.
| Chihiro Watanabe, Kaoru Hiramatsu, Kunio Kashino | null | 1703.00168 | null | null |
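A rough sketch of the decomposition idea, assuming a trained network's units are treated as graph nodes connected when a weight magnitude exceeds a threshold, with communities found by a standard modularity method (the random weights, threshold, and community algorithm are illustrative choices, not the paper's exact procedure):

```python
# Sketch: detect communities of units with strong connections in a
# layered network; random weights stand in for a trained model.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(0)
sizes = [6, 8, 4]                                     # units per layer
weights = [rng.standard_normal((sizes[i], sizes[i + 1]))
           for i in range(len(sizes) - 1)]

G = nx.Graph()
offsets = np.concatenate([[0], np.cumsum(sizes)])
for l, W in enumerate(weights):
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            if abs(W[i, j]) > 1.0:                    # keep strong connections
                G.add_edge(int(offsets[l]) + i, int(offsets[l + 1]) + j,
                           weight=abs(W[i, j]))

comms = greedy_modularity_communities(G, weight="weight")
for k, c in enumerate(comms):
    print(f"community {k}: units {sorted(c)}")
print("modularity:", modularity(G, comms, weight="weight"))
```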
L$^3$-SVMs: Landmarks-based Linear Local Support Vectors Machines | stat.ML cs.LG | For their ability to capture non-linearities in the data and to scale to
large training sets, local Support Vector Machines (SVMs) have received
special attention during the past decade. In this paper, we introduce a new
local SVM method, called L$^3$-SVMs, which clusters the input space, carries
out dimensionality reduction by projecting the data on landmarks, and jointly
learns a linear combination of local models. Simple and effective, our
algorithm is also theoretically well-founded. Using the framework of Uniform
Stability, we show that our SVM formulation comes with generalization
guarantees on the true risk. The experiments based on the simplest
configuration of our model (i.e. landmarks randomly selected, linear
projection, linear kernel) show that L$^3$-SVMs are very competitive w.r.t. the
state of the art and open the door to exciting new lines of research.
| Valentina Zantedeschi, R\'emi Emonet, Marc Sebban | null | 1703.00284 | null | null |
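A simplified sketch of the simplest configuration mentioned above (random landmarks, linear projection, linear models), assuming k-means for the clustering step and independent per-cluster SVMs rather than the paper's jointly learned combination:

```python
# Sketch: cluster the input space, project data onto random landmarks,
# and fit one linear SVM per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

landmarks = X[rng.choice(len(X), size=10, replace=False)]
Phi = X @ landmarks.T                        # linear projection onto landmarks

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
models = {}
for c in range(3):
    mask = km.labels_ == c
    if len(np.unique(y[mask])) < 2:          # degenerate cluster: constant vote
        models[c] = int(y[mask][0])
    else:
        models[c] = LinearSVC().fit(Phi[mask], y[mask])

def predict(X_new):
    Phi_new = X_new @ landmarks.T
    out = []
    for c, p in zip(km.predict(X_new), Phi_new):
        m = models[c]
        out.append(m if isinstance(m, int) else int(m.predict(p[None, :])[0]))
    return np.array(out)

print("train accuracy:", (predict(X) == y).mean())
```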
Graph-based Isometry Invariant Representation Learning | cs.CV cs.LG | Learning transformation invariant representations of visual data is an
important problem in computer vision. Deep convolutional networks have
demonstrated remarkable results for image and video classification tasks.
However, they have achieved only limited success in the classification of
images that undergo geometric transformations. In this work we present a novel
Transformation Invariant Graph-based Network (TIGraNet), which learns
graph-based features that are inherently invariant to isometric transformations
such as rotation and translation of input images. In particular, images are
represented as signals on graphs, which permits replacing classical
convolution and pooling layers in deep networks with graph spectral convolution
and dynamic graph pooling layers that together contribute to invariance to
isometric transformations. Our experiments show high performance on rotated and
translated images from the test set compared to classical architectures that
are very sensitive to transformations in the data. The inherent invariance
properties of our framework provide key advantages, such as increased
resiliency to data variability and sustained performance with limited training
sets.
| Renata Khasanova and Pascal Frossard | null | 1703.00356 | null | null |
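A bare-bones sketch of the graph spectral convolution building block, filtering an image-as-graph-signal with a polynomial of the graph Laplacian; the grid graph and filter coefficients are illustrative, and the full TIGraNet architecture is not reproduced:

```python
# Sketch: graph spectral convolution as a polynomial filter in the
# graph Laplacian, applied to a signal living on a pixel grid graph.
import numpy as np
import networkx as nx

G = nx.grid_2d_graph(4, 4)                      # pixels of a 4x4 image
L = nx.laplacian_matrix(G).toarray().astype(float)

rng = np.random.default_rng(0)
x = rng.standard_normal(L.shape[0])             # image as a signal on the graph

def spectral_conv(L, x, theta):
    """Apply the polynomial filter sum_k theta[k] * L^k to signal x."""
    out = np.zeros_like(x)
    Lx = x.copy()
    for t in theta:
        out += t * Lx
        Lx = L @ Lx
    return out

y = spectral_conv(L, x, theta=[0.5, -0.2, 0.05])   # degree-2 filter
print(y[:5])
```

Because such filters depend on the graph structure rather than on node coordinates, permuting or rotating the underlying image leaves the filter itself unchanged, which is the source of the isometry invariance.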
Approximate Computational Approaches for Bayesian Sensor Placement in
High Dimensions | stat.CO cs.LG | Since the cost of installing and maintaining sensors is usually high, sensor
locations are always strategically selected. For those aiming at inferring
certain quantities of interest (QoI), it is desirable to explore the dependency
between sensor measurements and QoI. One of the most popular metrics for this
dependency is mutual information, which naturally measures how much information
about one variable can be obtained given the other. However, computing mutual
information is always challenging, and the result is unreliable in high
dimensions. In this paper, we propose an approach to find an approximate lower
bound of mutual information and compute it in a lower dimension. Then, sensors
are placed where the highest mutual information (lower bound) is achieved, and
the QoI is inferred via Bayes' rule given sensor measurements. In addition, Bayesian
optimization is introduced to provide a continuous mutual information surface
over the domain and thus reduce the number of evaluations. A chemical release
accident is simulated where multiple sensors are placed to locate the source of
the release. The results show that the proposed approach is both effective and
efficient in inferring the QoI.
| Xiao Lin, Asif Chowdhury, Xiaofan Wang, Gabriel Terejanu | null | 1703.00368 | null | null |
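A toy sketch of mutual-information-driven placement under a joint Gaussian assumption, where MI has a closed form in terms of covariances and candidate locations are scored greedily; the synthetic covariance model, the paper's approximate lower bound, and the Bayesian optimization loop are all simplified away here:

```python
# Sketch: greedily place sensors where Gaussian mutual information with
# the quantity of interest (QoI) is highest. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_samples = 20, 5000

# Synthetic joint samples of the QoI and candidate sensor readings.
qoi = rng.standard_normal(n_samples)
sensors = (np.outer(qoi, rng.uniform(0, 1, n_candidates))
           + 0.5 * rng.standard_normal((n_samples, n_candidates)))

def gaussian_mi(qoi, S):
    """I(qoi; S) = 0.5 * log( var(qoi) det(Cov_S) / det(Cov_joint) )."""
    joint = np.column_stack([qoi, S])
    C = np.cov(joint, rowvar=False)
    return 0.5 * np.log(C[0, 0] * np.linalg.det(C[1:, 1:]) / np.linalg.det(C))

chosen = []
for _ in range(3):                               # place 3 sensors greedily
    remaining = [j for j in range(n_candidates) if j not in chosen]
    scores = [gaussian_mi(qoi, sensors[:, chosen + [j]]) for j in remaining]
    chosen.append(remaining[int(np.argmax(scores))])
print("chosen sensor indices:", chosen)
```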
Gradient Boosting on Stochastic Data Streams | cs.LG | Boosting is a popular ensemble algorithm that generates more powerful
learners by linearly combining base models from a simpler hypothesis class. In
this work, we investigate the problem of adapting batch gradient boosting for
minimizing convex loss functions to online setting where the loss at each
iteration is i.i.d sampled from an unknown distribution. To generalize from
batch to online, we first introduce the definition of online weak learning edge
with which for strongly convex and smooth loss functions, we present an
algorithm, Streaming Gradient Boosting (SGB) with exponential shrinkage
guarantees in the number of weak learners. We further present an adaptation of
SGB to optimize non-smooth loss functions, for which we derive a O(ln N/N)
convergence rate. We also show that our analysis can extend to adversarial
online learning setting under a stronger assumption that the online weak
learning edge will hold in adversarial setting. We finally demonstrate
experimental results showing that in practice our algorithms can achieve
competitive results as classic gradient boosting while using less computation.
| Hanzhang Hu and Wen Sun and Arun Venkatraman and Martial Hebert and J.
Andrew Bagnell | null | 1703.00377 | null | null |
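An illustrative sketch in the spirit of streaming boosting for a smooth convex loss (squared error): on each i.i.d. sample, each weak learner takes an online gradient step toward the residual left by the learners before it. The linear weak learners and fixed step size are assumptions; this mirrors the flavor of SGB, not its exact algorithm or guarantees:

```python
# Sketch: stage-wise online boosting where each weak learner fits the
# residual of the partial ensemble before it, one sample at a time.
import numpy as np

rng = np.random.default_rng(0)
d, n_learners, lr = 5, 10, 0.05
W = np.zeros((n_learners, d))             # each row: one linear weak learner

def predict(x):
    return sum(W[k] @ x for k in range(n_learners))

true_w = rng.standard_normal(d)
losses = []
for t in range(2000):                     # i.i.d. stream
    x = rng.standard_normal(d)
    y = true_w @ x + 0.1 * rng.standard_normal()
    partial = 0.0
    for k in range(n_learners):           # stage-wise residual fitting
        residual = y - partial            # negative gradient of squared loss
        W[k] += lr * residual * x         # online gradient step
        partial += W[k] @ x
    losses.append((y - predict(x)) ** 2)

print("mean squared loss, first vs last 100 rounds:",
      np.mean(losses[:100]), np.mean(losses[-100:]))
```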
Privacy-Preserving Personal Model Training | cs.LG | Many current Internet services rely on inferences from models trained on user
data. Commonly, both the training and inference tasks are carried out using
cloud resources fed by personal data collected at scale from users. Holding and
using such large collections of personal data in the cloud creates privacy
risks to the data subjects, but is currently required for users to benefit from
such services. We explore how to provide for model training and inference in a
system where computation is pushed to the data in preference to moving data to
the cloud, obviating many current privacy risks. Specifically, we take an
initial model learnt from a small set of users and retrain it locally using
data from a single user. We evaluate on two tasks: one supervised learning
task, using a neural network to recognise users' current activity from
accelerometer traces; and one unsupervised learning task, identifying topics in
a large set of documents. In both cases the accuracy is improved. We also
analyse the robustness of our approach against adversarial attacks, as well as
its feasibility by presenting a performance evaluation on a representative
resource-constrained device (a Raspberry Pi).
| Sandra Servia-Rodriguez, Liang Wang, Jianxin R. Zhao, Richard Mortier,
Hamed Haddadi | null | 1703.0038 | null | null |
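A small sketch of the train-globally, retrain-locally pattern described above, assuming a linear classifier with incremental updates as a stand-in for the paper's models; the data, per-user distribution shift, and split are synthetic:

```python
# Sketch: fit an initial model on pooled data (as if in the cloud), then
# refine it on-device with one user's data via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def user_data(shift, n=200):
    """Each user draws from a slightly different distribution."""
    X = rng.standard_normal((n, 10)) + shift
    y = (X.sum(axis=1) + rng.standard_normal(n) > shift.sum()).astype(int)
    return X, y

# Initial model from a small set of users (pooled, e.g., in the cloud).
X_pool, y_pool = user_data(np.zeros(10), n=1000)
model = SGDClassifier(random_state=0).fit(X_pool, y_pool)

# One user's private data never leaves the device; retrain locally.
shift = rng.uniform(-1, 1, 10)
X_user, y_user = user_data(shift)
X_train, y_train = X_user[:150], y_user[:150]
X_test, y_test = X_user[150:], y_user[150:]

print("before local retraining:", model.score(X_test, y_test))
for _ in range(5):                      # a few local passes
    model.partial_fit(X_train, y_train)
print("after local retraining: ", model.score(X_test, y_test))
```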
The Statistical Recurrent Unit | cs.LG cs.AI stat.ML | Sophisticated gated recurrent neural network architectures like LSTMs and
GRUs have been shown to be highly effective in a myriad of applications. We
develop an un-gated unit, the statistical recurrent unit (SRU), that is able to
learn long-term dependencies in data by keeping only moving averages of
statistics. The SRU's architecture is simple, un-gated, and contains a
comparable number of parameters to LSTMs; yet, SRUs perform favorably compared
to more sophisticated LSTM and GRU alternatives, often outperforming one or
both in various tasks. We show the efficacy of SRUs as compared to LSTMs and
GRUs in an unbiased manner by optimizing the respective architectures'
hyperparameters in a Bayesian optimization scheme for both synthetic and
real-world tasks.
| Junier B. Oliva, Barnabas Poczos, Jeff Schneider | null | 1703.00381 | null | null |
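A compact, untrained sketch of the un-gated recurrence described above: keep exponential moving averages of learned statistics at several time scales and read the output from those averages. Dimensions, retention scales, and the nonlinearity are illustrative assumptions, with random weights standing in for learned ones:

```python
# Sketch: a statistical-recurrent-unit-style cell with no gates, only
# multi-scale moving averages of statistics.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_stat, scales = 3, 8, [0.0, 0.5, 0.9, 0.99]   # retention factors

W_x = rng.standard_normal((d_stat, d_in)) * 0.5
W_mu = rng.standard_normal((d_stat, d_stat * len(scales))) * 0.1
W_out = rng.standard_normal((4, d_stat * len(scales))) * 0.5

relu = lambda z: np.maximum(z, 0.0)
mu = np.zeros((len(scales), d_stat))       # one moving average per scale

for t in range(20):                         # process a random sequence
    x = rng.standard_normal(d_in)
    # Statistics depend on the input and on the multi-scale averages.
    phi = relu(W_x @ x + W_mu @ mu.ravel())
    for i, a in enumerate(scales):          # un-gated recurrence:
        mu[i] = a * mu[i] + (1 - a) * phi   # simple moving averages
    out = relu(W_out @ mu.ravel())

print("final output:", out)
```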