title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in
Neural Networks
|
cs.NE cs.LG stat.ML
|
Deep neural networks have become widely used, obtaining remarkable results in
domains such as computer vision, speech recognition, natural language
processing, audio recognition, social network filtering, machine translation,
and bio-informatics, where they have produced results comparable to human
experts. However, these networks can be easily fooled by adversarial
perturbations: minimal changes to correctly classified inputs that cause the
network to misclassify them. This phenomenon represents a concern for both
safety and security, but it is currently unclear how to measure a network's
robustness against such perturbations. Existing techniques are limited to
checking robustness around a few individual input points, providing only very
limited guarantees. We propose a novel approach for automatically identifying
safe regions of the input space, within which the network is robust against
adversarial perturbations. The approach is data-guided, relying on clustering
to identify well-defined geometric regions as candidate safe regions. We then
utilize verification techniques to confirm that these regions are safe or to
provide counter-examples showing that they are not safe. We also introduce the
notion of targeted robustness which, for a given target label and region,
ensures that an NN does not map any input in the region to the target label. We
evaluated our technique on the MNIST dataset and on a neural network
implementation of a controller for the next-generation Airborne Collision
Avoidance System for unmanned aircraft (ACAS Xu). For these networks, our
approach identified multiple regions which were completely safe as well as some
which were only safe for specific labels. It also discovered several
adversarial perturbations of interest.
|
Divya Gopinath, Guy Katz, Corina S. Pasareanu, Clark Barrett
| null |
1710.00486
| null | null |
Online control of the false discovery rate with decaying memory
|
stat.ME cs.LG math.ST stat.ML stat.TH
|
In the online multiple testing problem, p-values corresponding to different
null hypotheses are observed one by one, and the decision of whether or not to
reject the current hypothesis must be made immediately, after which the next
p-value is observed. Alpha-investing algorithms to control the false discovery
rate (FDR), formulated by Foster and Stine, have been generalized and applied
to many settings, including quality-preserving databases in science and
multiple A/B or multi-armed bandit tests for internet commerce. This paper
improves the class of generalized alpha-investing algorithms (GAI) in four
ways: (a) we show how to uniformly improve the power of the entire class of
monotone GAI procedures by awarding more alpha-wealth for each rejection,
giving a win-win resolution to a recent dilemma raised by Javanmard and
Montanari, (b) we demonstrate how to incorporate prior weights to indicate
domain knowledge of which hypotheses are likely to be non-null, (c) we allow
for differing penalties for false discoveries to indicate that some hypotheses
may be more important than others, (d) we define a new quantity called the
decaying memory false discovery rate (mem-FDR) that may be more meaningful for
truly temporal applications, and which alleviates problems that we describe and
refer to as "piggybacking" and "alpha-death". Our GAI++ algorithms incorporate
all four generalizations simultaneously, and reduce to more powerful variants
of earlier algorithms when the weights and decay are all set to unity. Finally,
we also describe a simple method to derive new online FDR rules based on an
estimated false discovery proportion.
|
Aaditya Ramdas, Fanny Yang, Martin J. Wainwright, Michael I. Jordan
| null |
1710.00499
| null | null |
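For concreteness, one natural way to write the decaying-memory quantity described in the abstract above (an illustrative formalization, not necessarily the paper's exact definition): with decay parameter $d \in (0, 1]$, rejection indicators $R_t$ and false-rejection indicators $V_t$,

```latex
\mathrm{mem\text{-}FDR}(T) \;=\;
\mathbb{E}\!\left[
  \frac{\sum_{t \le T} d^{\,T-t}\, V_t}
       {\bigl(\sum_{t \le T} d^{\,T-t}\, R_t\bigr) \vee 1}
\right]
```

Setting $d = 1$ recovers the usual FDR, while $d < 1$ discounts old discoveries, which is what alleviates the "piggybacking" and "alpha-death" effects mentioned in the abstract.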
Remote Sensing Image Classification with Large Scale Gaussian Processes
|
cs.LG stat.AP stat.ML
|
Current remote sensing image classification problems have to deal with an
unprecedented amount of heterogeneous and complex data sources. Upcoming
missions will soon provide large data streams that will make land cover/use
classification difficult. Machine learning classifiers can help here, and
many methods are currently available. A popular kernel classifier is the
Gaussian process classifier (GPC), since it approaches the classification
problem with a solid probabilistic treatment, thus yielding confidence
intervals for the predictions as well as results that are very competitive with
state-of-the-art neural networks and support vector machines. However, its
computational cost is prohibitive for large scale applications, and constitutes
the main obstacle precluding wide adoption. This paper tackles this problem by
introducing two novel efficient methodologies for Gaussian Process (GP)
classification. We first include the standard random Fourier features
approximation into GPC, which largely decreases its computational cost and
permits large scale remote sensing image classification. In addition, we
propose a model which avoids randomly sampling a number of Fourier frequencies,
and alternatively learns the optimal ones within a variational Bayes approach.
The performance of the proposed methods is illustrated in complex problems of
cloud detection from multispectral imagery and infrared sounding data.
Excellent empirical results support the proposal in both computational cost and
accuracy.
|
Pablo Morales-Alvarez and Adrian Perez-Suay and Rafael Molina and
Gustau Camps-Valls
|
10.1109/TGRS.2017.2758922
|
1710.00575
| null | null |
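As a reference point for the first method above, here is a minimal NumPy sketch of the standard random Fourier features approximation for an RBF kernel (the variational frequency-learning variant is not shown; names and defaults are illustrative):

```python
import numpy as np

def rff_features(X, n_features=500, gamma=1.0, seed=0):
    """Map inputs to random Fourier features approximating an RBF kernel.

    k(x, y) = exp(-gamma * ||x - y||^2) is approximated by z(x) . z(y),
    with frequencies W drawn from the kernel's spectral density N(0, 2*gamma*I).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: train any linear (e.g. logistic / GP-style) classifier on Z.
X = np.random.randn(1000, 20)
Z = rff_features(X)   # (1000, 500) explicit feature map
```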
Improving speech recognition by revising gated recurrent units
|
cs.CL cs.AI cs.LG cs.NE
|
Speech recognition is largely taking advantage of deep learning, showing that
substantial benefits can be obtained by modern Recurrent Neural Networks
(RNNs). The most popular RNNs are Long Short-Term Memory (LSTMs), which
typically reach state-of-the-art performance in many tasks thanks to their
ability to learn long-term dependencies and robustness to vanishing gradients.
Nevertheless, LSTMs have a rather complex design with three multiplicative
gates, which might impair their efficient implementation. An attempt to simplify
LSTMs has recently led to Gated Recurrent Units (GRUs), which are based on just
two multiplicative gates.
This paper builds on these efforts by further revising GRUs and proposing a
simplified architecture potentially more suitable for speech recognition. The
contribution of this work is two-fold. First, we suggest removing the reset
gate in the GRU design, resulting in a more efficient single-gate architecture.
Second, we propose to replace tanh with ReLU activations in the state update
equations. Results show that, in our implementation, the revised architecture
reduces the per-epoch training time by more than 30% and consistently
improves recognition performance across different tasks, input features, and
noisy conditions when compared to a standard GRU.
|
Mirco Ravanelli, Philemon Brakel, Maurizio Omologo, Yoshua Bengio
| null |
1710.00641
| null | null |
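A minimal NumPy sketch of one step of the revised unit as described in the abstract above (reset gate removed, tanh replaced by ReLU in the candidate state). This is an illustrative reading of the abstract, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def revised_gru_step(x, h, Wz, Uz, Wh, Uh, bz, bh):
    """Single-gate, ReLU-based recurrent update in the spirit of the
    revised GRU: one update gate, no reset gate, ReLU candidate state."""
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate (the only gate)
    h_cand = np.maximum(0.0, Wh @ x + Uh @ h + bh)  # ReLU candidate state
    return z * h + (1.0 - z) * h_cand               # convex combination

# Toy usage with random parameters.
d_in, d_h = 8, 16
rng = np.random.default_rng(0)
params = [rng.normal(scale=0.1, size=s) for s in
          [(d_h, d_in), (d_h, d_h), (d_h, d_in), (d_h, d_h), (d_h,), (d_h,)]]
h = np.zeros(d_h)
for t in range(5):
    h = revised_gru_step(rng.normal(size=d_in), h, *params)
```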
Scalable Nonlinear AUC Maximization Methods
|
cs.LG
|
The area under the ROC curve (AUC) is a measure of interest in various
machine learning and data mining applications. It has been widely used to
evaluate classification performance on heavily imbalanced data. The kernelized
AUC maximization machines have established a superior generalization ability
compared to linear AUC machines because of their capability in modeling the
complex nonlinear structure underlying most real-world data. However, the high
training complexity renders the kernelized AUC machines infeasible for
large-scale data. In this paper, we present two nonlinear AUC maximization
algorithms that optimize pairwise linear classifiers over a finite-dimensional
feature space constructed via the k-means Nystr\"{o}m method. Our first
algorithm maximizes the AUC metric by optimizing a pairwise squared hinge loss
function using the truncated Newton method. However, the second-order batch AUC
maximization method becomes expensive to optimize for extremely massive
datasets. This motivates us to develop a first-order stochastic AUC maximization
algorithm that incorporates a scheduled regularization update and scheduled
averaging techniques to accelerate the convergence of the classifier.
Experiments on several benchmark datasets demonstrate that the proposed AUC
classifiers are more efficient than kernelized AUC machines while surpassing,
or at least matching, their AUC performance. The experiments also show that
the proposed stochastic AUC classifier
outperforms the state-of-the-art online AUC maximization methods in terms of
AUC classification accuracy.
|
Majdi Khalid, Indrakshi Ray, and Hamidreza Chitsaz
| null |
1710.0076
| null | null |
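To make the pairwise squared hinge objective above concrete, a small NumPy sketch of its batch gradient over explicit features (the paper optimizes this with truncated Newton over k-means Nystroem features plus a stochastic first-order variant; this toy version only illustrates the loss):

```python
import numpy as np

def pairwise_squared_hinge_auc_grad(w, X_pos, X_neg, lam=1e-3):
    """Gradient of the pairwise squared hinge surrogate for AUC:
    L(w) = mean over (i,j) of max(0, 1 - w.(x_i^+ - x_j^-))^2 + lam*||w||^2."""
    diffs = X_pos[:, None, :] - X_neg[None, :, :]   # all positive-negative pairs
    margins = 1.0 - diffs @ w
    active = np.maximum(margins, 0.0)               # only violated pairs contribute
    grad = -2.0 * np.einsum('ij,ijk->k', active, diffs) / active.size
    return grad + 2.0 * lam * w

rng = np.random.default_rng(0)
X_pos, X_neg = rng.normal(1, 1, (30, 5)), rng.normal(0, 1, (200, 5))
w = np.zeros(5)
for _ in range(200):                                # plain gradient descent
    w -= 0.1 * pairwise_squared_hinge_auc_grad(w, X_pos, X_neg)
```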
Deep Learning for Unsupervised Insider Threat Detection in Structured
Cybersecurity Data Streams
|
cs.NE cs.CR cs.LG stat.ML
|
Analysis of an organization's computer network activity is a key component of
early detection and mitigation of insider threat, a growing concern for many
organizations. Raw system logs are a prototypical example of streaming data
that can quickly scale beyond the cognitive power of a human analyst. As a
prospective filter for the human analyst, we present an online unsupervised
deep learning approach to detect anomalous network activity from system logs in
real time. Our models decompose anomaly scores into the contributions of
individual user behavior features for increased interpretability to aid
analysts reviewing potential cases of insider threat. Using the CERT Insider
Threat Dataset v6.2 and threat detection recall as our performance metric, our
novel deep and recurrent neural network models outperform Principal Component
Analysis, Support Vector Machine and Isolation Forest based anomaly detection
baselines. For our best model, the events labeled as insider threat activity in
our dataset had an average anomaly score in the 95.53rd percentile, demonstrating
our approach's potential to greatly reduce analyst workloads.
|
Aaron Tuor, Samuel Kaplan, Brian Hutchinson, Nicole Nichols, Sean
Robinson
| null |
1710.00811
| null | null |
Detecting Adversarial Attacks on Neural Network Policies with Visual
Foresight
|
cs.CV cs.CR cs.LG
|
Deep reinforcement learning has shown promising results in learning control
policies for complex sequential decision-making tasks. However, these neural
network-based policies are known to be vulnerable to adversarial examples. This
vulnerability poses a potentially serious threat to safety-critical systems
such as autonomous vehicles. In this paper, we propose a defense mechanism to
defend reinforcement learning agents from adversarial attacks by leveraging an
action-conditioned frame prediction module. Our core idea is that the
adversarial examples targeting a neural network-based policy are not
effective for the frame prediction model. By comparing the action distribution
produced by a policy from processing the current observed frame to the action
distribution produced by the same policy from processing the predicted frame
from the action-conditioned frame prediction module, we can detect the presence
of adversarial examples. Beyond detecting the presence of adversarial examples,
our method allows the agent to continue performing the task using the predicted
frame when the agent is under attack. We evaluate the performance of our
algorithm using five Atari 2600 games. Our results demonstrate that the
proposed defense mechanism achieves favorable performance against baseline
algorithms in detecting adversarial examples and in earning rewards when the
agents are under attack.
|
Yen-Chen Lin, Ming-Yu Liu, Min Sun, Jia-Bin Huang
| null |
1710.00814
| null | null |
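A schematic of the detection rule described above, assuming hypothetical `policy` and `predict_frame` callables standing in for the learned modules; the distance measure and threshold are illustrative choices, not the paper's exact ones:

```python
import numpy as np

def kl(p, q, eps=1e-8):
    """KL divergence between two discrete action distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def detect_and_act(policy, predict_frame, prev_frame, prev_action, frame, thresh):
    """Visual-foresight-style defense: compare the policy's action
    distribution on the observed frame with its distribution on the
    action-conditioned predicted frame; large disagreement signals an attack,
    in which case the agent acts from the predicted frame instead."""
    predicted = predict_frame(prev_frame, prev_action)
    p_obs = policy(frame)        # distribution on the observed frame
    p_pred = policy(predicted)   # distribution on the predicted frame
    if kl(p_pred, p_obs) > thresh:
        return int(np.argmax(p_pred)), True   # under attack: use predicted frame
    return int(np.argmax(p_obs)), False
```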
Continuous-Time Relationship Prediction in Dynamic Heterogeneous
Information Networks
|
cs.SI cs.LG
|
Online social networks, World Wide Web, media and technological networks, and
other types of so-called information networks are ubiquitous nowadays. These
information networks are inherently heterogeneous and dynamic. They are
heterogeneous as they consist of multi-typed objects and relations, and they
are dynamic as they are constantly evolving over time. One of the challenging
issues in such heterogeneous and dynamic environments is to forecast those
relationships in the network that will appear in the future. In this paper, we
try to solve the problem of continuous-time relationship prediction in dynamic
and heterogeneous information networks. This implies predicting the time it
takes for a relationship to appear in the future, given its features that have
been extracted by considering both heterogeneity and temporal dynamics of the
underlying network. To this end, we first introduce a feature extraction
framework that combines the power of meta-path-based modeling and recurrent
neural networks to effectively extract features suitable for relationship
prediction regarding heterogeneity and dynamicity of the networks. Next, we
propose a supervised non-parametric approach, called Non-Parametric Generalized
Linear Model (NP-GLM), which infers the hidden underlying probability
distribution of the relationship building time given its features. We then
present a learning algorithm to train NP-GLM and an inference method to answer
time-related queries. Extensive experiments conducted on synthetic data and
three real-world datasets, namely Delicious, MovieLens, and DBLP, demonstrate
the effectiveness of NP-GLM in solving the continuous-time relationship
prediction problem vis-a-vis competitive baselines.
|
Sina Sajadmanesh, Sogol Bazargani, Jiawei Zhang and Hamid R. Rabiee
|
10.1145/3333028
|
1710.00818
| null | null |
R\'enyi Differential Privacy Mechanisms for Posterior Sampling
|
cs.LG cs.AI cs.CR
|
Using a recently proposed privacy definition of R\'enyi Differential Privacy
(RDP), we re-examine the inherent privacy of releasing a single sample from a
posterior distribution. We exploit the impact of the prior distribution in
mitigating the influence of individual data points. In particular, we focus on
sampling from an exponential family and specific generalized linear models,
such as logistic regression. We propose novel RDP mechanisms as well as
offering a new RDP analysis for an existing method in order to add value to the
RDP framework. Each method is capable of achieving arbitrary RDP privacy
guarantees, and we offer experimental results of their efficacy.
|
Joseph Geumlek, Shuang Song, Kamalika Chaudhuri
| null |
1710.00892
| null | null |
Online and Distributed Robust Regressions under Adversarial Data
Corruption
|
cs.DS cs.LG stat.ML
|
In today's era of big data, robust least-squares regression becomes a more
challenging problem when considering the adversarial corruption along with
explosive growth of datasets. Traditional robust methods can handle the noise
but suffer from several challenges when applied to huge datasets, including 1)
computational infeasibility of handling an entire dataset at once, 2) existence
of heterogeneously distributed corruption, and 3) difficulty in corruption
estimation when data cannot be entirely loaded. This paper proposes online and
distributed robust regression approaches, both of which can concurrently
address all the above challenges. Specifically, the distributed algorithm
optimizes the regression coefficients of each data block via heuristic hard
thresholding and combines all the estimates in a distributed robust
consolidation. Furthermore, an online version of the distributed algorithm is
proposed to incrementally update the existing estimates with new incoming data.
We also prove that our algorithms benefit from strong robustness guarantees in
terms of regression coefficient recovery with a constant upper bound on the
error of state-of-the-art batch methods. Extensive experiments on synthetic and
real datasets demonstrate that our approaches are superior to those of existing
methods in effectiveness, with competitive efficiency.
|
Xuchao Zhang, Liang Zhao, Arnold P. Boedihardjo, Chang-Tien Lu
| null |
1710.00904
| null | null |
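A single-machine sketch of the heuristic-hard-thresholding idea above (the actual paper operates per data block and consolidates estimates across blocks in a distributed fashion; `n_clean`, the assumed number of uncorrupted points, is an illustrative knob):

```python
import numpy as np

def robust_lstsq_hard_threshold(X, y, n_clean, n_iters=20):
    """Robust least squares by alternating between fitting on the points
    currently believed clean and re-selecting the n_clean smallest residuals
    (hard thresholding). Minimal single-block sketch."""
    keep = np.arange(len(y))                  # start by trusting all points
    beta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        resid = np.abs(y - X @ beta)
        keep = np.argsort(resid)[:n_clean]    # hard-threshold the residuals
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)); beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=500)
y[:50] += 10.0                                # adversarial corruption
print(robust_lstsq_hard_threshold(X, y, n_clean=420))
```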
Facial Key Points Detection using Deep Convolutional Neural Network -
NaimishNet
|
cs.CV cs.LG stat.ML
|
Facial Key Points (FKPs) Detection is an important and challenging problem in
the fields of computer vision and machine learning. It involves predicting the
coordinates of the FKPs, e.g., nose tip, centers of the eyes, for a given face.
In this paper, we propose a LeNet-adapted deep CNN model, NaimishNet, to
operate on facial key points data and compare our model's performance against
existing state-of-the-art approaches.
|
Naimish Agarwal, Artus Krohn-Grimberghe, Ranjana Vyas
| null |
1710.00977
| null | null |
Training Feedforward Neural Networks with Standard Logistic Activations
is Feasible
|
cs.NE cs.LG stat.ML
|
Training feedforward neural networks with standard logistic activations is
considered difficult because of the intrinsic properties of these sigmoidal
functions. This work aims at showing that these networks can be trained to
achieve generalization performance comparable to that of networks based on
hyperbolic tangent activations. The solution consists of applying a set of conditions in
parameter initialization, which have been derived from the study of the
properties of a single neuron from an information-theoretic perspective. The
proposed initialization is validated through an extensive experimental
analysis.
|
Emanuele Sansone, Francesco G.B. De Natale
| null |
1710.01013
| null | null |
Learning Affinity via Spatial Propagation Networks
|
cs.CV cs.LG
|
In this paper, we propose spatial propagation networks for learning the
affinity matrix for vision tasks. We show that by constructing a row/column
linear propagation model, the spatially varying transformation matrix exactly
constitutes an affinity matrix that models dense, global pairwise relationships
of an image. Specifically, we develop a three-way connection for the linear
propagation model, which (a) formulates a sparse transformation matrix, where
all elements can be the output from a deep CNN, but (b) results in a dense
affinity matrix that effectively models any task-specific pairwise similarity
matrix. Instead of designing the similarity kernels according to image features
of two points, we can directly output all the similarities in a purely
data-driven manner. The spatial propagation network is a generic framework that
can be applied to many affinity-related tasks, including but not limited to
image matting, segmentation, and colorization. Essentially, the
model can learn semantically-aware affinity values for high-level vision tasks
due to the powerful learning capability of the deep neural network classifier.
We validate the framework on the task of refinement for image segmentation
boundaries. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic
segmentation tasks show that the spatial propagation network provides a
general, effective and efficient solution for generating high-quality
segmentation results.
|
Sifei Liu, Shalini De Mello, Jinwei Gu, Guangyu Zhong, Ming-Hsuan
Yang, Jan Kautz
| null |
1710.0102
| null | null |
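To illustrate the core mechanism, a scalar one-way version of the row/column linear propagation model (the paper's three-way connection and CNN-predicted weights are omitted; this is a simplified sketch):

```python
import numpy as np

def linear_propagate_row(x, p):
    """One left-to-right pass of a scalar linear propagation model:
        h[i] = (1 - p[i]) * x[i] + p[i] * h[i-1].
    Unrolling shows h[i] = sum_j w_ij * x[j] with w_ij a product of the
    propagation weights, i.e. the local scan induces a dense (triangular)
    affinity matrix. In the paper, p would be predicted per pixel by a CNN."""
    h = np.empty_like(x)
    h[0] = x[0]
    for i in range(1, len(x)):
        h[i] = (1.0 - p[i]) * x[i] + p[i] * h[i - 1]
    return h

x = np.sin(np.linspace(0.0, 3.0, 64))
h = linear_propagate_row(x, np.full(64, 0.8))   # smoothed, globally-coupled output
```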
A Fully Convolutional Network for Semantic Labeling of 3D Point Clouds
|
cs.CV cs.LG stat.ML
|
When classifying point clouds, a large amount of time is devoted to the
process of engineering a reliable set of features which are then passed to a
classifier of choice. Generally, such features - usually derived from the
3D-covariance matrix - are computed using the surrounding neighborhood of
points. While these features capture local information, the process is usually
time-consuming, and requires the application at multiple scales combined with
contextual methods in order to adequately describe the diversity of objects
within a scene. In this paper we present a 1D-fully convolutional network that
consumes terrain-normalized points directly with the corresponding spectral
data, if available, to generate point-wise labeling while implicitly learning
contextual features in an end-to-end fashion. Our method uses only the
3D-coordinates and three corresponding spectral features for each point.
Spectral features may either be extracted from 2D-georeferenced images, as
shown here for Light Detection and Ranging (LiDAR) point clouds, or extracted
directly for passive-derived point clouds, i.e., from multiple-view imagery. We
train our network by splitting the data into square regions, and use a pooling
layer that respects the permutation-invariance of the input points. Evaluated
using the ISPRS 3D Semantic Labeling Contest, our method scored second place
with an overall accuracy of 81.6%. We ranked third place with a mean F1-score
of 63.32%, surpassing the F1-score of the method with highest accuracy by
1.69%. In addition to labeling 3D-point clouds, we also show that our method
can be easily extended to 2D-semantic segmentation tasks, with promising
initial results.
|
Mohammed Yousefhussien, David J. Kelbe, Emmett J. Ientilucci and Carl
Salvaggio
|
10.1016/j.isprsjprs.2018.03.018
|
1710.01408
| null | null |
Usable & Scalable Learning Over Relational Data With Automatic Language
Bias
|
cs.DB cs.LG
|
Relational databases are valuable resources for learning novel and
interesting relations and concepts. In order to constrain the search through
the large space of candidate definitions, users must tune the algorithm by
specifying a language bias. Unfortunately, specifying the language bias is done
via trial and error and is guided by the expert's intuitions. We propose
AutoBias, a system that leverages information in the schema and content of the
database to automatically induce the language bias used by popular relational
learning systems. We show that AutoBias delivers the same accuracy as using
manually-written language bias while imposing only a slight overhead on the
running time of the learning algorithm.
|
Jose Picado, Arash Termehchy, Sudhanshu Pathak, Alan Fern, Praveen
Ilango, Yunqiao Cai
| null |
1710.0142
| null | null |
Mechanisms of dimensionality reduction and decorrelation in deep neural
networks
|
cs.LG cond-mat.stat-mech stat.ML
|
Deep neural networks are widely used in various domains. However, the nature
of computations at each layer of the deep networks is far from being well
understood. Increasing the interpretability of deep neural networks is thus
important. Here, we construct a mean-field framework to understand how compact
representations are developed across layers, not only in deterministic deep
networks with random weights but also in generative deep networks where
unsupervised learning is carried out. Our theory shows that the deep
computation implements a dimensionality reduction while maintaining a finite
level of weak correlations between neurons for possible feature extraction.
Mechanisms of dimensionality reduction and decorrelation are unified in the
same framework. This work may pave the way for understanding how a sensory
hierarchy works.
|
Haiping Huang
|
10.1103/PhysRevE.98.062313
|
1710.01467
| null | null |
Image Labeling Based on Graphical Models Using Wasserstein Messages and
Geometric Assignment
|
cs.LG cs.CV cs.NA math.OC
|
We introduce a novel approach to Maximum A Posteriori inference based on
discrete graphical models. By utilizing local Wasserstein distances for
coupling assignment measures across edges of the underlying graph, a given
discrete objective function is smoothly approximated and restricted to the
assignment manifold. A corresponding multiplicative update scheme combines in a
single process (i) geometric integration of the resulting Riemannian gradient
flow and (ii) rounding to integral solutions that represent valid labelings.
Throughout this process, local marginalization constraints known from the
established LP relaxation are satisfied, whereas the smooth geometric setting
results in rapidly converging iterations that can be carried out in parallel
for every edge.
|
Ruben H\"uhnerbein, Fabrizio Savarino, Freddie \r{A}str\"om, Christoph
Schn\"orr
|
10.1137/17M1150669
|
1710.01493
| null | null |
Constructing multi-modality and multi-classifier radiomics predictive
models through reliable classifier fusion
|
cs.LG physics.med-ph stat.ML
|
Radiomics aims to extract and analyze large numbers of quantitative features
from medical images and is highly promising in staging, diagnosing, and
predicting outcomes of cancer treatments. Nevertheless, several challenges need
to be addressed to construct an optimal radiomics predictive model. First, the
predictive performance of the model may be reduced when features extracted from
an individual imaging modality are blindly combined into a single predictive
model. Second, because many different types of classifiers are available to
construct a predictive model, selecting an optimal classifier for a particular
application is still challenging. In this work, we developed multi-modality and
multi-classifier radiomics predictive models that address the aforementioned
issues in currently available models. Specifically, a new reliable classifier
fusion strategy was proposed to optimally combine output from different
modalities and classifiers. In this strategy, modality-specific classifiers
were first trained, and an analytic evidential reasoning (ER) rule was
developed to fuse the output score from each modality to construct an optimal
predictive model. One public data set and two clinical case studies were
performed to validate model performance. The experimental results indicated
that the proposed ER rule based radiomics models outperformed the traditional
models that rely on a single classifier or simply use combined features from
different modalities.
|
Zhiguo Zhou, Zhi-Jie Zhou, Hongxia Hao, Shulong Li, Xi Chen, You
Zhang, Michael Folkert, and Jing Wang
| null |
1710.01614
| null | null |
On the Sample Complexity of the Linear Quadratic Regulator
|
math.OC cs.LG stat.ML
|
This paper addresses the optimal control problem known as the Linear
Quadratic Regulator in the case when the dynamics are unknown. We propose a
multi-stage procedure, called Coarse-ID control, that estimates a model from a
few experimental trials, estimates the error in that model with respect to the
truth, and then designs a controller using both the model and uncertainty
estimate. Our technique uses contemporary tools from random matrix theory to
bound the error in the estimation procedure. We also employ a recently
developed approach to control synthesis called System Level Synthesis that
enables robust control design by solving a convex optimization problem. We
provide end-to-end bounds on the relative error in control cost that are nearly
optimal in the number of parameters and that highlight salient properties of
the system to be controlled such as closed-loop sensitivity and optimal control
magnitude. We show experimentally that the Coarse-ID approach enables efficient
computation of a stabilizing controller in regimes where simple control schemes
that do not take the model uncertainty into account fail to stabilize the true
system.
|
Sarah Dean, Horia Mania, Nikolai Matni, Benjamin Recht and Stephen Tu
| null |
1710.01688
| null | null |
Context Embedding Networks
|
cs.LG cs.AI cs.CV stat.ML
|
Low dimensional embeddings that capture the main variations of interest in
collections of data are important for many applications. One way to construct
these embeddings is to acquire estimates of similarity from the crowd. However,
similarity is a multi-dimensional concept that varies from individual to
individual. Existing models for learning embeddings from the crowd typically
make simplifying assumptions such as all individuals estimate similarity using
the same criteria, the list of criteria is known in advance, or that the crowd
workers are not influenced by the data that they see. To overcome these
limitations we introduce Context Embedding Networks (CENs). In addition to
learning interpretable embeddings from images, CENs also model worker biases
for different attributes along with the visual context, i.e., the visual
attributes highlighted by a set of images. Experiments on two noisy crowd
annotated datasets show that modeling both worker bias and visual context
results in more interpretable embeddings compared to existing approaches.
|
Kun Ho Kim, Oisin Mac Aodha, Pietro Perona
| null |
1710.01691
| null | null |
IQ of Neural Networks
|
cs.LG cs.AI cs.CV
|
IQ tests are an accepted method for assessing human intelligence. The tests
consist of several parts that must be solved under a time constraint. Of all
the tested abilities, pattern recognition has been found to have the highest
correlation with general intelligence. This is primarily because pattern
recognition is the ability to find order in a noisy environment, a necessary
skill for intelligent agents. In this paper, we propose a convolutional neural
network (CNN) model for solving geometric pattern recognition problems. The CNN
receives as input multiple ordered input images and outputs the next image
according to the pattern. Our CNN is able to solve problems involving rotation,
reflection, color, size and shape patterns and score within the top 5% of human
performance.
|
Dokhyam Hoshen, Michael Werman
| null |
1710.01692
| null | null |
Model-free prediction of noisy chaotic time series by deep learning
|
cs.LG physics.comp-ph physics.data-an
|
We present a deep neural network for a model-free prediction of a chaotic
dynamical system from noisy observations. The proposed deep learning model aims
to predict the conditional probability distribution of a state variable. The
Long Short-Term Memory network (LSTM) is employed to model the nonlinear
dynamics and a softmax layer is used to approximate a probability distribution.
The LSTM model is trained by minimizing a regularized cross-entropy function.
The LSTM model is validated on two delay-time chaotic dynamical systems, the
Mackey-Glass and Ikeda equations. It is shown that the present LSTM makes a
good prediction of the nonlinear dynamics by effectively filtering out the
noise. It is found that the prediction uncertainty of a multiple-step forecast
of the LSTM model is not a monotonic function of time; the predicted standard
deviation may increase or decrease dynamically in time.
|
Kyongmin Yeo
| null |
1710.01693
| null | null |
DeepTFP: Mobile Time Series Data Analytics based Traffic Flow Prediction
|
cs.LG
|
Traffic flow prediction is an important research issue to avoid traffic
congestion in transportation systems. Congestion can be avoided by forecasting
traffic flow and then planning transportation accordingly.
Achieving traffic flow prediction is challenging as the prediction is affected
by many complex factors such as inter-region traffic, vehicles' relations, and
sudden events. However, as the mobile data of vehicles has been widely
collected by sensor-embedded devices in transportation systems, it is possible
to predict the traffic flow by analysing mobile data. This study proposes a
deep learning based prediction algorithm, DeepTFP, to collectively predict the
traffic flow on each and every traffic road of a city. This algorithm uses
three deep residual neural networks to model temporal closeness, period, and
trend properties of traffic flow. Each residual neural network consists of a
branch of residual convolutional units. DeepTFP aggregates the outputs of the
three residual neural networks to optimize the parameters of a time series
prediction model. Comparative experiments on mobile time series data from the
transportation system of England demonstrate that the proposed DeepTFP
outperforms the Long Short-Term Memory (LSTM) architecture based method in
prediction accuracy.
|
Yuanfang Chen, Falin Chen, Yizhi Ren, Ting Wu, Ye Yao
| null |
1710.01695
| null | null |
Decomposition of Nonlinear Dynamical Systems Using Koopman Gramians
|
cs.SY cs.LG math.DS math.OC
|
In this paper we propose a new Koopman operator approach to the decomposition
of nonlinear dynamical systems using Koopman Gramians. We introduce the notion
of an input-Koopman operator, and show how input-Koopman operators can be used
to cast a nonlinear system into the classical state-space form, and identify
conditions under which input and state observable functions are well separated.
We then extend an existing method of dynamic mode decomposition for learning
Koopman operators from data known as deep dynamic mode decomposition to systems
with controls or disturbances. We illustrate the accuracy of the method in
learning an input-state separable Koopman operator for an example system, even
when the underlying system exhibits mixed state-input terms. We next introduce
a nonlinear decomposition algorithm, based on Koopman Gramians, that maximizes
internal subsystem observability and disturbance rejection from unwanted noise
from other subsystems. We derive a relaxation based on Koopman Gramians and
multi-way partitioning for the resulting NP-hard decomposition problem. We
lastly illustrate the proposed algorithm with the swing dynamics for an IEEE
39-bus system.
|
Zhiyuan Liu, Soumya Kundu, Lijun Chen, and Enoch Yeung
| null |
1710.01719
| null | null |
Neural Task Programming: Learning to Generalize Across Hierarchical
Tasks
|
cs.AI cs.LG cs.RO
|
In this work, we propose a novel robot learning framework called Neural Task
Programming (NTP), which bridges the idea of few-shot learning from
demonstration and neural program induction. NTP takes as input a task
specification (e.g., video demonstration of a task) and recursively decomposes
it into finer sub-task specifications. These specifications are fed to a
hierarchical neural program, where bottom-level programs are callable
subroutines that interact with the environment. We validate our method in three
robot manipulation tasks. NTP achieves strong generalization across sequential
tasks that exhibit hierarchical and compositional structures. The experimental
results show that NTP learns to generalize well towards unseen tasks with
increasing lengths, variable topologies, and changing objectives.
|
Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei,
Silvio Savarese
| null |
1710.01813
| null | null |
To prune, or not to prune: exploring the efficacy of pruning for model
compression
|
stat.ML cs.LG
|
Model pruning seeks to induce sparsity in a deep neural network's various
connection matrices, thereby reducing the number of nonzero-valued parameters
in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep
networks at the cost of only a marginal loss in accuracy and achieve a sizable
reduction in model size. This hints at the possibility that the baseline models
in these experiments are perhaps severely over-parameterized at the outset and
a viable alternative for model compression might be to simply reduce the number
of hidden units while maintaining the model's dense connection structure,
exposing a similar trade-off in model size and accuracy. We investigate these
two distinct paths for model compression within the context of energy-efficient
inference in resource-constrained environments and propose a new gradual
pruning technique that is simple and straightforward to apply across a variety
of models/datasets with minimal tuning and can be seamlessly incorporated
within the training process. We compare the accuracy of large, but pruned
models (large-sparse) and their smaller, but dense (small-dense) counterparts
with identical memory footprint. Across a broad range of neural network
architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find
large-sparse models to consistently outperform small-dense models and achieve
up to 10x reduction in number of non-zero parameters with minimal loss in
accuracy.
|
Michael Zhu, Suyog Gupta
| null |
1710.01878
| null | null |
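A sketch of the gradual magnitude-pruning mechanics described above. The cubic sparsity ramp is an assumed illustrative schedule (consult the paper for the exact form); in practice the mask is recomputed periodically during training while surviving weights keep learning:

```python
import numpy as np

def sparsity_schedule(step, s_init, s_final, start_step, end_step):
    """Gradual sparsity target ramping from s_init to s_final (cubic ramp
    assumed here for illustration)."""
    if step <= start_step:
        return s_init
    if step >= end_step:
        return s_final
    frac = (step - start_step) / (end_step - start_step)
    return s_final + (s_init - s_final) * (1.0 - frac) ** 3

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights,
    returning pruned weights and the binary mask."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights, np.ones_like(weights)
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = (np.abs(weights) > thresh).astype(weights.dtype)
    return weights * mask, mask

w = np.random.default_rng(0).normal(size=(256, 256))
for step in range(0, 10001, 1000):                 # interleave with training steps
    w, mask = magnitude_prune(w, sparsity_schedule(step, 0.0, 0.9, 2000, 8000))
```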
Data Augmentation of Spectral Data for Convolutional Neural Network
(CNN) Based Deep Chemometrics
|
cs.LG
|
Deep learning methods are used on spectroscopic data to predict drug content
in tablets from near infrared (NIR) spectra. Using convolutional neural
networks (CNNs), features are extracted from the spectroscopic data. Extended
multiplicative scatter correction (EMSC) and a novel spectral data augmentation
method are benchmarked as preprocessing steps. The learned models perform
better or on par with hypothetical optimal partial least squares (PLS) models
for all combinations of preprocessing. Data augmentation with subsequent EMSC
in combination gave the best results. The deep learning CNN models also
outperform the PLS models in an extrapolation challenge created using data
from a second instrument and from an analyte concentration not covered by the
training data. Qualitative investigations of the CNNs' kernel activations show
their resemblance to well-known data processing methods such as smoothing,
slope/derivative, thresholding, and spectral region selection.
|
Esben Jannik Bjerrum, Mads Glahder, Thomas Skov
| null |
1710.01927
| null | null |
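For illustration, a plausible form of spectral data augmentation in the spirit described above: random offset, multiplicative scaling, and slope variations across the wavelength axis (the paper's exact augmentation may differ, so treat this as an illustrative variant):

```python
import numpy as np

def augment_spectrum(x, rng, offset_sd=0.01, mult_sd=0.01, slope_sd=0.01):
    """Randomly perturb a 1-D spectrum with an additive offset, a
    multiplicative scaling, and a linear slope across the wavelength axis;
    these mimic typical scatter-like variations in NIR data."""
    wavelength = np.linspace(-1.0, 1.0, x.size)
    offset = rng.normal(0.0, offset_sd)
    mult = 1.0 + rng.normal(0.0, mult_sd)
    slope = rng.normal(0.0, slope_sd)
    return mult * x + offset + slope * wavelength

rng = np.random.default_rng(0)
spectrum = np.exp(-np.linspace(-3, 3, 256) ** 2)   # toy absorbance band
batch = np.stack([augment_spectrum(spectrum, rng) for _ in range(8)])
```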
Alternating Iteratively Reweighted Minimization Algorithms for Low-Rank
Matrix Factorization
|
cs.LG
|
Nowadays, the availability of large-scale data in disparate application
domains urges the deployment of sophisticated tools for extracting valuable
knowledge out of this huge bulk of information. In that vein, low-rank
representations (LRRs) which seek low-dimensional embeddings of data have
naturally appeared. In an effort to reduce computational complexity and improve
estimation performance, LRR has been viewed via a matrix factorization (MF)
perspective. Recently, low-rank MF (LRMF) approaches have been proposed for
tackling the inherent weakness of MF, i.e., the unawareness of the dimension of
the low-dimensional space where data reside. Herein, inspired by the merits of
iterative reweighted schemes for rank minimization, we come up with a generic
low-rank promoting regularization function. Then, focusing on a specific
instance of it, we propose a regularizer that imposes column-sparsity jointly
on the two matrix factors that result from MF, thus promoting low-rankness on
the optimization problem. The problems of denoising, matrix completion and
non-negative matrix factorization (NMF) are redefined according to the new LRMF
formulation and solved via efficient Newton-type algorithms with proven
theoretical guarantees as to their convergence and rates of convergence to
stationary points. The effectiveness of the proposed algorithms is verified in
diverse simulated and real data experiments.
|
Paris V. Giampouras, Athanasios A. Rontogiannis and Konstantinos D.
Koutroumbas
| null |
1710.02004
| null | null |
McDiarmid Drift Detection Methods for Evolving Data Streams
|
stat.ML cs.DB cs.LG
|
Increasingly, Internet of Things (IoT) domains, such as sensor networks,
smart cities, and social networks, generate vast amounts of data. Such data are
not only unbounded; their content also evolves dynamically over time, often in
unforeseen ways. These variations are
due to so-called concept drifts, caused by changes in the underlying data
generation mechanisms. In a classification setting, concept drift causes the
previously learned models to become inaccurate, unsafe and even unusable.
Accordingly, concept drifts need to be detected, and handled, as soon as
possible. In medical applications and emergency response settings, for example,
changes in behaviour should be detected in near real-time, to avoid potential
loss of life. To this end, we introduce the McDiarmid Drift Detection Method
(MDDM), which utilizes McDiarmid's inequality in order to detect concept drift.
The MDDM approach proceeds by sliding a window over prediction results and
associating window entries with weights. Higher weights are assigned to the most
recent entries, in order to emphasize their importance. As instances are
processed, the detection algorithm compares a weighted mean of elements inside
the sliding window with the maximum weighted mean observed so far. A
significant difference between the two weighted means, upper-bounded by the
McDiarmid inequality, implies a concept drift. Our extensive experimentation
against synthetic and real-world data streams shows that our novel method
outperforms the state-of-the-art. Specifically, MDDM yields shorter detection
delays as well as lower false negative rates, while maintaining high
classification accuracies.
|
Ali Pesaranghader, Herna Viktor, Eric Paquet
| null |
1710.0203
| null | null |
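A compact sketch of the detection scheme described above: a weighted sliding window over prediction correctness, compared against the best weighted mean seen so far using a McDiarmid-style bound. The geometric weighting and bound constant are illustrative choices, not the paper's exact parameterization:

```python
import numpy as np

class MDDM:
    """Sliding-window drift detector in the spirit of MDDM."""

    def __init__(self, window=100, ratio=1.01, delta=1e-6):
        self.window, self.delta = window, delta
        w = ratio ** np.arange(window)     # most recent entry weighted most
        self.v = w / w.sum()               # normalized weights c_i
        # McDiarmid: P(mean - E[mean] >= eps) <= exp(-2 eps^2 / sum c_i^2)
        self.eps = np.sqrt(np.sum(self.v ** 2) * np.log(1.0 / delta) / 2.0)
        self.buf, self.max_mean = [], 0.0

    def update(self, correct):
        """Feed 1 if the classifier was correct, 0 otherwise.
        Returns True when drift is signalled."""
        self.buf.append(float(correct))
        if len(self.buf) < self.window:
            return False
        self.buf = self.buf[-self.window:]          # keep the latest window
        mean = float(np.dot(self.v, self.buf))      # weighted mean of window
        self.max_mean = max(self.max_mean, mean)
        return (self.max_mean - mean) >= self.eps   # significant drop => drift
```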
Reliable Clustering of Bernoulli Mixture Models
|
cs.LG cs.IT math.IT stat.ML
|
A Bernoulli Mixture Model (BMM) is a finite mixture of random binary vectors
with independent dimensions. The problem of clustering BMM data arises in a
variety of real-world applications, ranging from population genetics to
activity analysis in social networks. In this paper, we analyze the
clusterability of BMMs from a theoretical perspective, when the number of
clusters is unknown. In particular, we stipulate a set of conditions on the
sample complexity and dimension of the model in order to guarantee the Probably
Approximately Correct (PAC)-clusterability of a dataset. To the best of our
knowledge, these findings are the first non-asymptotic bounds on the sample
complexity of learning or clustering BMMs.
|
Amir Najafi, Abolfazl Motahari, Hamid R. Rabiee
| null |
1710.02101
| null | null |
Learning Graphical Models from a Distributed Stream
|
cs.AI cs.LG stat.ML
|
A current challenge for data management systems is to support the
construction and maintenance of machine learning models over data that is
large, multi-dimensional, and evolving. While systems that could support these
tasks are emerging, the need to scale to distributed, streaming data requires
new models and algorithms. In this setting, as well as computational
scalability and model accuracy, we also need to minimize the amount of
communication between distributed processors, which is the chief component of
latency. We study Bayesian networks, the workhorse of graphical models, and
present a communication-efficient method for continuously learning and
maintaining a Bayesian network model over data that is arriving as a
distributed stream partitioned across multiple processors. We show a strategy
for maintaining model parameters that leads to an exponential reduction in
communication when compared with baseline approaches to maintain the exact MLE
(maximum likelihood estimation). Meanwhile, our strategy provides similar
prediction errors for the target distribution and for classification tasks.
|
Yu Zhang, Srikanta Tirthapura, Graham Cormode
| null |
1710.02103
| null | null |
A study of Thompson Sampling with Parameter h
|
cs.LG cs.IT math.IT
|
The Thompson Sampling algorithm is a well-known Bayesian algorithm for solving
the stochastic multi-armed bandit problem. At each time step the algorithm
chooses each arm with probability proportional to that arm being the current
best arm. We modify the strategy by introducing a parameter h which alters the
importance of the probability of an arm being the current best arm. We show
that the optimality of Thompson sampling is robust to this perturbation within
a range of parameter values for two-armed bandits.
|
Qiang Ha
| null |
1710.02174
| null | null |
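The abstract leaves the exact role of h open; one plausible reading is to exponentiate each arm's probability of being the current best. A toy NumPy sketch under that assumption, for Bernoulli arms with Beta posteriors (h = 1 recovers standard Thompson sampling in expectation):

```python
import numpy as np

def ts_h_choose(alpha, beta, h, rng, n_mc=1000):
    """Estimate each arm's probability of being best via posterior Monte
    Carlo samples, then pick an arm with probability proportional to that
    probability raised to the power h (assumed interpretation of h)."""
    samples = rng.beta(alpha, beta, size=(n_mc, len(alpha)))
    p_best = np.bincount(samples.argmax(axis=1), minlength=len(alpha)) / n_mc
    probs = p_best ** h
    probs /= probs.sum()
    return rng.choice(len(alpha), p=probs)

# Two-armed Bernoulli bandit with true means 0.6 and 0.5.
rng = np.random.default_rng(0)
mu, alpha, beta = np.array([0.6, 0.5]), np.ones(2), np.ones(2)
for t in range(500):
    arm = ts_h_choose(alpha, beta, h=2.0, rng=rng)
    reward = rng.random() < mu[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward
```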
Porcupine Neural Networks: (Almost) All Local Optima are Global
|
stat.ML cs.LG
|
Neural networks have been used prominently in several machine learning and
statistics applications. In general, the underlying optimization of neural
networks is non-convex which makes their performance analysis challenging. In
this paper, we take a novel approach to this problem by asking whether one can
constrain neural network weights to make its optimization landscape have good
theoretical properties while at the same time, be a good approximation for the
unconstrained one. For two-layer neural networks, we provide affirmative
answers to these questions by introducing Porcupine Neural Networks (PNNs)
whose weight vectors are constrained to lie over a finite set of lines. We show
that most local optima of PNN optimizations are global, and we characterize the
regions where bad local optima may exist. Moreover, our
theoretical and empirical results suggest that an unconstrained neural network
can be approximated using a polynomially-large PNN.
|
Soheil Feizi, Hamid Javadi, Jesse Zhang and David Tse
| null |
1710.02196
| null | null |
Stacked Structure Learning for Lifted Relational Neural Networks
|
cs.LG cs.AI stat.ML
|
Lifted Relational Neural Networks (LRNNs) describe relational domains using
weighted first-order rules which act as templates for constructing feed-forward
neural networks. While previous work has shown that using LRNNs can lead to
state-of-the-art results in various ILP tasks, these results depended on
hand-crafted rules. In this paper, we extend the framework of LRNNs with
structure learning, thus enabling a fully automated learning process. Similarly
to many ILP methods, our structure learning algorithm proceeds in an iterative
fashion by top-down searching through the hypothesis space of all possible Horn
clauses, considering the predicates that occur in the training examples as well
as invented soft concepts entailed by the best weighted rules found so far. In
the experiments, we demonstrate the ability to automatically induce useful
hierarchical soft concepts leading to deep LRNNs with a competitive predictive
power.
|
Gustav Sourek, Martin Svatos, Filip Zelezny, Steven Schockaert, Ondrej
Kuzelka
| null |
1710.02221
| null | null |
Dilated Recurrent Neural Networks
|
cs.AI cs.LG
|
Learning with recurrent neural networks (RNNs) on long sequences is a
notoriously difficult task. There are three major challenges: 1) complex
dependencies, 2) vanishing and exploding gradients, and 3) efficient
parallelization. In this paper, we introduce a simple yet effective RNN
connection structure, the DilatedRNN, which simultaneously tackles all of these
challenges. The proposed architecture is characterized by multi-resolution
dilated recurrent skip connections and can be combined flexibly with diverse
RNN cells. Moreover, the DilatedRNN reduces the number of parameters needed and
enhances training efficiency significantly, while matching state-of-the-art
performance (even with standard RNN cells) in tasks involving very long-term
dependencies. To provide a theory-based quantification of the architecture's
advantages, we introduce a memory capacity measure, the mean recurrent length,
which is more suitable for RNNs with long skip connections than existing
measures. We rigorously prove the advantages of the DilatedRNN over other
recurrent neural architectures. The code for our method is publicly available
at https://github.com/code-terminator/DilatedRNN
|
Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan,
Xiaodong Cui, Michael Witbrock, Mark Hasegawa-Johnson, Thomas S. Huang
| null |
1710.02224
| null | null |
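The connection structure above is easy to state in code: the recurrent state consumed at step t is the one produced at step t - d. A minimal sketch, where `cell` is a stand-in for any RNN cell (LSTM, GRU, or vanilla):

```python
import numpy as np

def dilated_rnn_layer(xs, cell, state0, dilation):
    """Run a recurrent cell with a dilated skip connection:
        s_t = cell(x_t, s_{t - dilation}).
    Only the connection structure is sketched here; stacking layers with
    dilations 1, 2, 4, ... yields the multi-resolution hierarchy described
    in the abstract."""
    buf = [state0] * dilation          # ring buffer of the last `dilation` states
    out = []
    for t, x in enumerate(xs):
        s = cell(x, buf[t % dilation]) # state from step t - dilation
        buf[t % dilation] = s
        out.append(s)
    return out

# Toy vanilla-RNN cell.
rng = np.random.default_rng(0)
W, U = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
cell = lambda x, s: np.tanh(W @ x + U @ s)
states = dilated_rnn_layer([rng.normal(size=3) for _ in range(10)],
                           cell, np.zeros(4), dilation=4)
```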
How Much Chemistry Does a Deep Neural Network Need to Know to Make
Accurate Predictions?
|
stat.ML cs.AI cs.CV cs.LG
|
The meteoric rise of deep learning models in computer vision research, having
achieved human-level accuracy in image recognition tasks, is firm evidence of
the impact of representation learning in deep neural networks. In the chemistry
domain, recent advances have also led to the development of similar CNN models,
such as Chemception, which are trained to predict chemical properties using
images of molecular drawings. In this work, we investigate the effects of
systematically removing and adding localized domain-specific information to the
image channels of the training data. By augmenting images with only three
additional pieces of basic information, and without introducing any architectural
changes, we demonstrate that an augmented Chemception (AugChemception)
outperforms the original model in the prediction of toxicity, activity, and
solvation free energy. Then, by altering the information content in the images,
and examining the resulting model's performance, we also identify two distinct
learning patterns in predicting toxicity/activity as compared to solvation free
energy. These patterns suggest that Chemception is learning about its tasks in
a manner that is consistent with established knowledge. Thus, our work
demonstrates that advanced chemical knowledge is not a pre-requisite for deep
learning models to accurately predict complex chemical properties.
|
Garrett B. Goh, Charles Siegel, Abhinav Vishnu, Nathan O. Hodas,
Nathan Baker
| null |
1710.02238
| null | null |
Solving differential equations with unknown constitutive relations as
recurrent neural networks
|
cs.LG math.NA
|
We solve a system of ordinary differential equations with an unknown
functional form of a sink (reaction rate) term. We assume that the measurements
(time series) of state variables are partially available, and we use a recurrent
neural network to "learn" the reaction rate from these data. This is achieved by
including the discretized ordinary differential equations as part of a recurrent
neural network training problem. We extend TensorFlow's recurrent neural
network architecture to create a simple but scalable and effective solver for
the unknown functions, and apply it to a fed-batch bioreactor simulation
problem. Use of techniques from recent deep learning literature enables
training of functions with behavior manifesting over thousands of time steps.
Our networks are structurally similar to recurrent neural networks, but
differences in design and function require modifications to the conventional
wisdom about training such networks.
|
Tobias Hagge, Panos Stinis, Enoch Yeung and Alexandre M. Tartakovsky
| null |
1710.02242
| null | null |
Linear-Time Sequence Classification using Restricted Boltzmann Machines
|
cs.LG stat.ML
|
Classification of sequence data is the topic of interest for dynamic Bayesian
models and Recurrent Neural Networks (RNNs). While the former can explicitly
model the temporal dependencies between class variables, the latter have a
capability of learning representations. Several attempts have been made to
improve performance by combining these two approaches or increasing the
processing capability of the hidden units in RNNs. This often results in
complex models with a large number of learning parameters. In this paper, a
compact model is proposed which offers both representation learning and
temporal inference of class variables by rolling Restricted Boltzmann Machines
(RBMs) and class variables over time. We address the key issue of
intractability in this variant of RBMs by optimising a conditional
distribution, instead of a joint distribution. Experiments reported in the
paper on melody modelling and optical character recognition show that the
proposed model can outperform the state-of-the-art. Also, the experimental
results on optical character recognition, part-of-speech tagging and text
chunking demonstrate that our model is comparable to recurrent neural networks
with complex memory gates while requiring far fewer parameters.
|
Son N. Tran, Srikanth Cherla, Artur Garcez, Tillman Weyde
| null |
1710.02245
| null | null |
Learnable Explicit Density for Continuous Latent Space and Variational
Inference
|
cs.LG cs.AI stat.ML
|
In this paper, we study two aspects of the variational autoencoder (VAE): the
prior distribution over the latent variables and its corresponding posterior.
First, we decompose the learning of VAEs into layerwise density estimation, and
argue that having a flexible prior is beneficial to both sample generation and
inference. Second, we analyze the family of inverse autoregressive flows
(inverse AF) and show that with further improvement, inverse AF could be used
as universal approximation to any complicated posterior. Our analysis results
in a unified approach to parameterizing a VAE, without the need to restrict
ourselves to use factorial Gaussians in the latent real space.
|
Chin-Wei Huang, Ahmed Touati, Laurent Dinh, Michal Drozdzal, Mohammad
Havaei, Laurent Charlin, Aaron Courville
| null |
1710.02248
| null | null |
Lattice Recurrent Unit: Improving Convergence and Statistical Efficiency
for Sequence Modeling
|
cs.LG cs.AI cs.NE
|
Recurrent neural networks have shown remarkable success in modeling
sequences. However, low-resource situations still adversely affect the
generalizability of these models. We introduce a new family of models, called
Lattice Recurrent Units (LRU), to address the challenge of learning deep
multi-layer recurrent models with limited resources. LRU models achieve this
goal by creating distinct (but coupled) flow of information inside the units: a
first flow along time dimension and a second flow along depth dimension. It
also offers a symmetry in how information can flow horizontally and vertically.
We analyze the effects of decoupling three different components of our LRU
model: Reset Gate, Update Gate, and Projected State. We evaluate this new
family of LRU models on computational convergence rates and statistical efficiency.
Our experiments are performed on four publicly-available datasets, comparing
with Grid-LSTM and Recurrent Highway networks. Our results show that LRU has
better empirical computational convergence rates and statistical efficiency
values, along with learning more accurate language models.
|
Chaitanya Ahuja and Louis-Philippe Morency
| null |
1710.02254
| null | null |
Discovering Playing Patterns: Time Series Clustering of Free-To-Play
Game Data
|
stat.ML cs.LG
|
The classification of time series data is a challenge common to all
data-driven fields. However, there is no agreement about which are the most
efficient techniques to group unlabeled time-ordered data. This is because a
successful classification of time series patterns depends on the goal and the
domain of interest, i.e. it is application-dependent.
In this article, we study free-to-play game data. In this domain, clustering
similar time series information is increasingly important due to the large
amount of data collected by current mobile and web applications. We evaluate
which methods accurately cluster time series of mobile games, focusing on
player behavior data. We identify and validate several aspects of the
clustering: the similarity measures and the representation techniques to reduce
the high dimensionality of time series. As a robustness test, we compare
various temporal datasets of player activity from two free-to-play video-games.
With these techniques we extract temporal patterns of player behavior
relevant for the evaluation of game events and game-business diagnosis. Our
experiments provide intuitive visualizations to validate the results of the
clustering and to determine the optimal number of clusters. Additionally, we
assess the common characteristics of the players belonging to the same group.
This study allows us to improve the understanding of player dynamics and churn
behavior.
|
Alain Saas, Anna Guitart and \'Africa Peri\'a\~nez
|
10.1109/CIG.2016.7860442
|
1710.02268
| null | null |
Efficient K-Shot Learning with Regularized Deep Networks
|
cs.CV cs.LG stat.ML
|
Feature representations from pre-trained deep neural networks have been known
to exhibit excellent generalization and utility across a variety of related
tasks. Fine-tuning is by far the simplest and most widely used approach that
seeks to exploit and adapt these feature representations to novel tasks with
limited data. Despite the effectiveness of fine-tuning, it is often sub-optimal
and requires very careful optimization to prevent severe over-fitting to small
datasets. The problem of sub-optimality and over-fitting is due in part to the
large number of parameters used in a typical deep convolutional neural network.
To address these problems, we propose a simple yet effective regularization
method for fine-tuning pre-trained deep networks for the task of k-shot
learning. To prevent overfitting, our key strategy is to cluster the model
parameters while ensuring intra-cluster similarity and inter-cluster diversity
of the parameters, effectively regularizing the dimensionality of the parameter
search space. In particular, we identify groups of neurons within each layer of
a deep network that share similar activation patterns. When the network is to
be fine-tuned for a classification task using only k examples, we propagate a
single gradient to all of the neuron parameters that belong to the same group.
The grouping of neurons is non-trivial as neuron activations depend on the
distribution of the input data. To efficiently search for optimal groupings
conditioned on the input data, we propose a reinforcement learning search
strategy using recurrent networks to learn the optimal group assignments for
each network layer. Experimental results show that our method can be easily
applied to several popular convolutional neural networks and improve upon other
state-of-the-art fine-tuning based k-shot learning strategies by more than 10%.
|
Donghyun Yoo, Haoqi Fan, Vishnu Naresh Boddeti, Kris M. Kitani
| null |
1710.02277
| null | null |
Deep Convolutional Neural Networks as Generic Feature Extractors
|
cs.CV cs.LG cs.NE
|
Recognizing objects in natural images is an intricate problem involving
multiple conflicting objectives. Deep convolutional neural networks, trained on
large datasets, achieve convincing results and are currently the
state-of-the-art approach for this task. However, the long time needed to train
such deep networks is a major drawback. We tackled this problem by reusing a
previously trained network. For this purpose, we first trained a deep
convolutional network on the ILSVRC2012 dataset. We then maintained the learned
convolution kernels and only retrained the classification part on different
datasets. Using this approach, we achieved an accuracy of 67.68 % on CIFAR-100,
compared to the previous state-of-the-art result of 65.43 %. Furthermore, our
findings indicate that convolutional networks are able to learn generic feature
extractors that can be used for different tasks.
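As an illustration of the retraining scheme described in this abstract, the
following minimal PyTorch sketch freezes the learned convolution kernels of an
ImageNet-pretrained network and retrains only the classification part; the
specific architecture (resnet18) and hyperparameters are illustrative
assumptions, not the network used in the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a network pretrained on ImageNet (ILSVRC2012).
    model = models.resnet18(pretrained=True)

    # Maintain the learned convolution kernels: freeze all parameters.
    for param in model.parameters():
        param.requires_grad = False

    # Replace and retrain only the classification part, e.g. for CIFAR-100.
    model.fc = nn.Linear(model.fc.in_features, 100)
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)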
|
Lars Hertel, Erhardt Barth, Thomas K\"aster, Thomas Martinetz
| null |
1710.02286
| null | null |
Rainbow: Combining Improvements in Deep Reinforcement Learning
|
cs.AI cs.LG
|
The deep reinforcement learning community has made several independent
improvements to the DQN algorithm. However, it is unclear which of these
extensions are complementary and can be fruitfully combined. This paper
examines six extensions to the DQN algorithm and empirically studies their
combination. Our experiments show that the combination provides
state-of-the-art performance on the Atari 2600 benchmark, both in terms of data
efficiency and final performance. We also provide results from a detailed
ablation study that shows the contribution of each component to overall
performance.
|
Matteo Hessel, Joseph Modayil, Hado van Hasselt, Tom Schaul, Georg
Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, David Silver
| null |
1710.02298
| null | null |
Projection Based Weight Normalization for Deep Neural Networks
|
cs.LG cs.AI cs.CV
|
Optimizing deep neural networks (DNNs) often suffers from the ill-conditioned
problem. We observe that the scaling-based weight space symmetry property in
rectified nonlinear networks causes this negative effect. Therefore, we
propose to constrain the incoming weights of each neuron to be unit-norm, which
is formulated as an optimization problem over the Oblique manifold. A simple yet
efficient method referred to as projection based weight normalization (PBWN) is
also developed to solve this problem. PBWN executes standard gradient updates,
followed by projecting the updated weight back to the Oblique manifold. The
proposed method has the property of regularization and collaborates well with
the commonly used batch normalization technique. We conduct comprehensive
experiments on several widely-used image datasets including CIFAR-10,
CIFAR-100, SVHN and ImageNet for supervised learning over the state-of-the-art
convolutional neural networks, such as Inception, VGG and residual networks.
The results show that our method is able to improve the performance of DNNs
with different architectures consistently. We also apply our method to Ladder
network for semi-supervised learning on permutation invariant MNIST dataset,
and our method outperforms the state-of-the-art methods: we obtain test errors
as 2.52%, 1.06%, and 0.91% with only 20, 50, and 100 labeled samples,
respectively.
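A minimal NumPy sketch of the PBWN step described above, assuming a
fully-connected layer whose rows hold the incoming weights of each neuron; the
learning rate is illustrative.

    import numpy as np

    def pbwn_update(W, grad, lr=0.1):
        # Standard gradient update, then project each neuron's incoming
        # weight vector back onto the unit sphere (the Oblique manifold).
        W = W - lr * grad
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        return W / np.maximum(norms, 1e-12)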
|
Lei Huang, Xianglong Liu, Bo Lang and Bo Li
| null |
1710.02338
| null | null |
Accumulated Gradient Normalization
|
stat.ML cs.DC cs.LG
|
This work addresses the instability in asynchronous data parallel
optimization. It does so by introducing a novel distributed optimizer which is
able to efficiently optimize a centralized model under communication
constraints. The optimizer achieves this by pushing a normalized sequence of
first-order gradients to a parameter server. This implies that the magnitude of
a worker delta is smaller compared to an accumulated gradient, and provides a
better direction towards a minimum compared to first-order gradients, which in
turn also forces possible implicit momentum fluctuations to be more aligned
since we make the assumption that all workers contribute towards a single
minimum. As a result, our approach mitigates the parameter staleness problem
more effectively since staleness in asynchrony induces (implicit) momentum, and
achieves a better convergence rate compared to other optimizers such as
asynchronous EASGD and DynSGD, which we show empirically.
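The following NumPy sketch shows one plausible reading of a worker's round
under accumulated gradient normalization; the number of local steps, the
normalization by their count, and grad_fn are assumptions made for
illustration.

    import numpy as np

    def agn_worker_delta(theta, grad_fn, batches, lr=0.01):
        # Take several local first-order steps, then push the average of
        # the accumulated gradients to the parameter server; the resulting
        # delta has smaller magnitude and a smoother direction.
        local = theta.copy()
        accumulated = np.zeros_like(theta)
        for batch in batches:
            g = grad_fn(local, batch)
            accumulated += g
            local -= lr * g
        return accumulated / len(batches)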
|
Joeri Hermans, Gerasimos Spanakis and Rico M\"ockel
| null |
1710.02368
| null | null |
End-to-end Driving via Conditional Imitation Learning
|
cs.RO cs.CV cs.LG
|
Deep networks trained on demonstrations of human driving have learned to
follow roads and avoid obstacles. However, driving policies trained via
imitation learning cannot be controlled at test time. A vehicle trained
end-to-end to imitate an expert cannot be guided to take a specific turn at an
upcoming intersection. This limits the utility of such systems. We propose to
condition imitation learning on high-level command input. At test time, the
learned driving policy functions as a chauffeur that handles sensorimotor
coordination but continues to respond to navigational commands. We evaluate
different architectures for conditional imitation learning in vision-based
driving. We conduct experiments in realistic three-dimensional simulations of
urban driving and on a 1/5 scale robotic truck that is trained to drive in a
residential area. Both systems drive based on visual input yet remain
responsive to high-level navigational commands. The supplementary video can be
viewed at https://youtu.be/cFtnflNe5fM
|
Felipe Codevilla, Matthias M\"uller, Antonio L\'opez, Vladlen Koltun,
Alexey Dosovitskiy
| null |
1710.0241
| null | null |
Machine Learning for Drug Overdose Surveillance
|
cs.CY cs.LG stat.ML
|
We describe two recently proposed machine learning approaches for discovering
emerging trends in fatal accidental drug overdoses. The Gaussian Process Subset
Scan enables early detection of emerging patterns in spatio-temporal data,
accounting for both the non-iid nature of the data and the fact that detecting
subtle patterns requires integration of information across multiple spatial
areas and multiple time steps. We apply this approach to 17 years of
county-aggregated data for monthly opioid overdose deaths in the New York City
metropolitan area, showing clear advantages in the utility of discovered
patterns as compared to typical anomaly detection approaches.
To detect and characterize emerging overdose patterns that differentially
affect a subpopulation of the data, including geographic, demographic, and
behavioral patterns (e.g., which combinations of drugs are involved), we apply
the Multidimensional Tensor Scan to 8 years of case-level overdose data from
Allegheny County, PA. We discover previously unidentified overdose patterns
which reveal unusual demographic clusters, show impacts of drug legislation,
and demonstrate potential for early detection and targeted intervention. These
approaches to early detection of overdose patterns can inform prevention and
response efforts, as well as understanding the effects of policy changes.
|
Daniel B. Neill (1), William Herlands (1) ((1) Carnegie Mellon
University)
| null |
1710.02458
| null | null |
Socially Compliant Navigation through Raw Depth Inputs with Generative
Adversarial Imitation Learning
|
cs.RO cs.AI cs.LG
|
We present an approach for mobile robots to learn to navigate in dynamic
environments with pedestrians via raw depth inputs, in a socially compliant
manner. To achieve this, we adopt a generative adversarial imitation learning
(GAIL) strategy, which improves upon a pre-trained behavior cloning policy. Our
approach overcomes the disadvantages of previous methods, which heavily
depend on full knowledge of the location and velocity of nearby
pedestrians: this not only requires specific sensors, but extracting
such state information from raw sensory input can also consume much computation
time. In this paper, our proposed GAIL-based model operates directly on raw
depth inputs and plans in real-time. Experiments show that our GAIL-based
approach greatly improves the safety and efficiency of the behavior of mobile
robots from pure behavior cloning. The real-world deployment also shows that
our method is capable of guiding autonomous vehicles to navigate in a socially
compliant manner directly through raw depth inputs. In addition, we release a
simulation plugin for modeling pedestrian behaviors based on the social force
model.
|
Lei Tai and Jingwei Zhang and Ming Liu and Wolfram Burgard
| null |
1710.02543
| null | null |
Real-Time Illegal Parking Detection System Based on Deep Learning
|
cs.CV cs.LG stat.ML
|
Illegal parking has become an increasingly serious problem. Current methods
for detecting illegally parked vehicles are based on background segmentation.
However, such methods lack robustness and are sensitive to the environment.
Benefiting from deep learning, this paper proposes a novel illegal vehicle
parking detection system. Illegal vehicles captured by camera are first located
and classified by the well-known Single Shot MultiBox Detector (SSD) algorithm.
To improve performance, we propose to optimize SSD by adjusting the aspect
ratio of the default boxes to better accommodate our dataset. After that,
movement tracking and analysis is adopted to identify illegally parked vehicles
in the region of interest (ROI). Experiments show that the system achieves 99%
accuracy and real-time (25 FPS) detection with strong
robustness in complex environments.
|
Xuemei Xie, Chenye Wang, Shu Chen, Guangming Shi, Zhifu Zhao
|
10.1145/3094243.3094261
|
1710.02546
| null | null |
An Optimization Approach to Learning Falling Rule Lists
|
cs.LG
|
A falling rule list is a probabilistic decision list for binary
classification, consisting of a series of if-then rules with antecedents in the
if clauses and probabilities of the desired outcome ("1") in the then clauses.
Just as in a regular decision list, the order of rules in a falling rule list
is important -- each example is classified by the first rule whose antecedent
it satisfies. Unlike a regular decision list, a falling rule list requires the
probabilities of the desired outcome ("1") to be monotonically decreasing down
the list. We propose an optimization approach to learning falling rule lists
and "softly" falling rule lists, along with Monte-Carlo search algorithms that
use bounds on the optimal solution to prune the search space.
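To make the structure concrete, here is a toy falling rule list in Python; the
antecedents and probabilities are invented for illustration and satisfy the
required monotone decrease down the list.

    # Hypothetical rules: (antecedent, P(outcome = 1)); probabilities
    # decrease monotonically down the list, and the last rule is the default.
    falling_rule_list = [
        (lambda x: x["age"] > 60 and x["smoker"], 0.85),
        (lambda x: x["smoker"], 0.40),
        (lambda x: x["age"] > 60, 0.25),
        (lambda x: True, 0.05),
    ]

    def predict_proba(x, rule_list):
        # Each example is classified by the first rule whose antecedent
        # it satisfies.
        for antecedent, p in rule_list:
            if antecedent(x):
                return p

    assert predict_proba({"age": 70, "smoker": False}, falling_rule_list) == 0.25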
|
Chaofan Chen, Cynthia Rudin
| null |
1710.02572
| null | null |
Ranking and Selection as Stochastic Control
|
cs.LG stat.ML
|
Under a Bayesian framework, we formulate the fully sequential sampling and
selection decision in statistical ranking and selection as a stochastic control
problem, and derive the associated Bellman equation. Using value function
approximation, we derive an approximately optimal allocation policy. We show
that this policy is not only computationally efficient but also possesses both
one-step-ahead and asymptotic optimality for independent normal sampling
distributions. Moreover, the proposed allocation policy is easily generalizable
in the approximate dynamic programming paradigm.
|
Yijie Peng, Edwin K. P. Chong, Chun-Hung Chen and Michael C. Fu
| null |
1710.02619
| null | null |
Topic Modeling based on Keywords and Context
|
cs.CL cs.IR cs.LG
|
Current topic models often suffer from discovering topics not matching human
intuition, unnatural switching of topics within documents and high
computational demands. We address these concerns by proposing a topic model and
an inference algorithm based on automatically identifying characteristic
keywords for topics. Keywords influence topic-assignments of nearby words. Our
algorithm learns (key)word-topic scores and self-regulates the number of
topics. Inference is simple and easily parallelizable. Qualitative analysis
yields results comparable to state-of-the-art models (e.g., LDA), but with
different strengths and weaknesses. Quantitative analysis using 9 datasets
shows gains in terms of classification accuracy, PMI score, computational
performance and consistency of topic assignments within documents, while most
often using fewer topics.
|
Johannes Schneider
| null |
1710.0265
| null | null |
Beyond Log-concavity: Provable Guarantees for Sampling Multi-modal
Distributions using Simulated Tempering Langevin Monte Carlo
|
cs.LG cs.DS math.PR stat.ML
|
A key task in Bayesian statistics is sampling from distributions that are
only specified up to a partition function (i.e., constant of proportionality).
However, without any assumptions, sampling (even approximately) can be #P-hard,
and few works have provided "beyond worst-case" guarantees for such settings.
For log-concave distributions, classical results going back to Bakry and
\'Emery (1985) show that natural continuous-time Markov chains called Langevin
diffusions mix in polynomial time. The most salient feature of log-concavity
violated in practice is uni-modality: commonly, the distributions we wish to
sample from are multi-modal. In the presence of multiple deep and
well-separated modes, Langevin diffusion suffers from torpid mixing.
We address this problem by combining Langevin diffusion with simulated
tempering. The result is a Markov chain that mixes more rapidly by
transitioning between different temperatures of the distribution. We analyze
this Markov chain for the canonical multi-modal distribution: a mixture of
gaussians (of equal variance). The algorithm based on our Markov chain provably
samples from distributions that are close to mixtures of gaussians, given
access to the gradient of the log-pdf. For the analysis, we use a spectral
decomposition theorem for graphs (Gharan and Trevisan, 2014) and a Markov chain
decomposition technique (Madras and Randall, 2002).
|
Rong Ge, Holden Lee, Andrej Risteski
| null |
1710.02736
| null | null |
A New Spectral Clustering Algorithm
|
cs.LG cs.CV physics.geo-ph
|
We present a new clustering algorithm that is based on searching for natural
gaps in the components of the lowest energy eigenvectors of the Laplacian of a
graph. In comparing the performance of the proposed method with a set of other
popular methods (KMEANS, spectral-KMEANS, and an agglomerative method) in the
context of the Lancichinetti-Fortunato-Radicchi (LFR) Benchmark for undirected
weighted overlapping networks, we find that the new method outperforms the
other spectral methods considered in certain parameter regimes. Finally, in an
application to climate data involving one of the most important modes of
interannual climate variability, the El Nino Southern Oscillation phenomenon,
we demonstrate the ability of the new algorithm to readily identify different
flavors of the phenomenon.
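The core idea, searching for natural gaps in the components of low-energy
Laplacian eigenvectors, can be sketched in a few lines of NumPy; this
two-cluster toy using only the Fiedler vector is a simplification, not the
authors' implementation.

    import numpy as np

    def eigengap_bipartition(A):
        # Split the nodes at the largest gap in the sorted components of
        # the second-lowest (Fiedler) eigenvector of the graph Laplacian.
        L = np.diag(A.sum(axis=1)) - A           # unnormalized Laplacian
        _, vecs = np.linalg.eigh(L)              # ascending eigenvalues
        fiedler = vecs[:, 1]
        order = np.argsort(fiedler)
        gaps = np.diff(fiedler[order])
        labels = np.zeros(len(A), dtype=int)
        labels[order[np.argmax(gaps) + 1:]] = 1  # nodes above the largest gap
        return labels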
|
W.R. Casper and Balu Nadiga
| null |
1710.02756
| null | null |
Protein identification with deep learning: from abc to xyz
|
cs.CE cs.LG q-bio.BM
|
Proteins are the main workhorses of biological functions in a cell, a tissue,
or an organism. Identification and quantification of proteins in a given
sample, e.g. a cell type under normal/disease conditions, are fundamental tasks
for the understanding of human health and disease. In this paper, we present
DeepNovo, a deep learning-based tool to address the problem of protein
identification from tandem mass spectrometry data. The idea was first proposed
in the context of de novo peptide sequencing [1] in which convolutional neural
networks and recurrent neural networks were applied to predict the amino acid
sequence of a peptide from its spectrum, a similar task to generating a caption
from an image. We further develop DeepNovo to perform sequence database search,
the main technique for peptide identification that greatly benefits from
numerous existing protein databases. We combine the two modules, de novo
sequencing and database search, into a single deep learning framework for
peptide identification, and integrate the de Bruijn graph assembly technique to
offer a
complete solution to reconstruct protein sequences from tandem mass
spectrometry data. This paper describes a comprehensive protocol of DeepNovo
for protein identification, including training neural network models, dynamic
programming search, database querying, estimation of false discovery rate, and
de Bruijn graph assembly. Training and testing data, model implementations, and
comprehensive tutorials in form of IPython notebooks are available in our
GitHub repository (https://github.com/nh2tran/DeepNovo).
|
Ngoc Hieu Tran, Zachariah Levine, Lei Xin, Baozhen Shan, Ming Li
| null |
1710.02765
| null | null |
Bayesian Alignments of Warped Multi-Output Gaussian Processes
|
stat.ML cs.LG
|
We propose a novel Bayesian approach to modelling nonlinear alignments of
time series based on latent shared information. We apply the method to the
real-world problem of finding common structure in the sensor data of wind
turbines introduced by the underlying latent and turbulent wind field. The
proposed model allows for both arbitrary alignments of the inputs and
non-parametric output warpings to transform the observations. This gives rise
to multiple deep Gaussian process models connected via latent generating
processes. We present an efficient variational approximation based on nested
variational compression and show how the model can be used to extract shared
information between dependent time series, recovering an interpretable
functional decomposition of the learning problem. We show results for an
artificial data set and real-world data of two wind turbines.
|
Markus Kaiser, Clemens Otte, Thomas Runkler, Carl Henrik Ek
| null |
1710.02766
| null | null |
Structural Feature Selection for Event Logs
|
cs.LG cs.DB cs.SE stat.ML
|
We consider the problem of classifying business process instances based on
structural features derived from event logs. The main motivation is to provide
machine learning based techniques with quick response times for interactive
computer assisted root cause analysis. In particular, we create structural
features from process mining such as activity and transition occurrence counts,
and ordering of activities to be evaluated as potential features for
classification. We show that adding such structural features increases the
amount of information, thus potentially increasing classification accuracy.
However, there is an inherent trade-off as using too many features leads to too
long run-times for machine learning classification models. One way to improve
the machine learning algorithms' run-time is to only select a small number of
features by a feature selection algorithm. However, the run-time required by
the feature selection algorithm must also be taken into account. Also, the
classification accuracy should not suffer too much from the feature selection.
The main contributions of this paper are as follows: First, we propose and
compare six different feature selection algorithms by means of an experimental
setup comparing their classification accuracy and achievable response times.
Second, we discuss the potential use of feature selection results for computer
assisted root cause analysis as well as the properties of different types of
structural features in the context of feature selection.
|
Markku Hinkka, Teemu Lehto, Keijo Heljanko, Alexander Jung
|
10.1007/978-3-319-74030-0_2
|
1710.02823
| null | null |
RUM: network Representation learning throUgh Multi-level structural
information preservation
|
cs.LG cs.SI
|
We have witnessed the discovery of many techniques for network representation
learning in recent years, ranging from encoding the context in random walks to
embedding the lower order connections, to finding latent space representations
with auto-encoders. However, existing techniques look mostly into the
local structures in a network, while higher-level properties such as global
community structures are often neglected. We propose a novel network
representation learning framework called RUM (network Representation
learning throUgh Multi-level structural information preservation). In RUM, we
incorporate three essential aspects of a node that capture a network's
characteristics in multiple levels: a node's affiliated local triads, its
neighborhood relationships, and its global community affiliations. Therefore
the framework explicitly and comprehensively preserves the structural
information of a network, extending the encoding process both to the local end
of the structural information spectrum and to the global end. The framework is
also flexible enough to take various community discovery algorithms as its
preprocessor. Empirical results show that the representations learned by RUM
have demonstrated substantial performance advantages in real-life tasks.
|
Yanlei Yu, Zhiwu Lu, Jiajun Liu, Guoping Zhao, Ji-Rong Wen, Kai Zheng
| null |
1710.02836
| null | null |
Reconstruction of Hidden Representation for Robust Feature Extraction
|
cs.LG cs.CV stat.ML
|
This paper aims to develop a new and robust approach to feature
representation. Motivated by the success of Auto-Encoders, we first
theoretically summarize the general properties of all algorithms that are based
on traditional Auto-Encoders: 1) The reconstruction error of the input cannot
be lower than a lower bound, which can be viewed as a guiding principle for
reconstructing the input. Additionally, when the input is corrupted with
noises, the reconstruction error of the corrupted input also cannot be lower
than a lower bound. 2) The reconstruction of a hidden representation achieving
its ideal situation is the necessary condition for the reconstruction of the
input to reach the ideal state. 3) Minimizing the Frobenius norm of the
Jacobian matrix of the hidden representation has a deficiency and may result in
a much worse local optimum value. We believe that minimizing the reconstruction
error of the hidden representation is more robust than minimizing the Frobenius
norm of the Jacobian matrix of the hidden representation. Based on the above
analysis, we propose a new model termed Double Denoising Auto-Encoders (DDAEs),
which uses corruption and reconstruction on both the input and the hidden
representation. We demonstrate that the proposed model is highly flexible and
extensible and has a potentially better capability to learn invariant and
robust feature representations. We also show that our model is more robust than
Denoising Auto-Encoders (DAEs) for dealing with noises or inessential features.
Furthermore, we detail how to train DDAEs with two different pre-training
methods by optimizing the objective function in a combined and separate manner,
respectively. Comparative experiments illustrate that the proposed model is
significantly better for representation learning than the state-of-the-art
models.
|
Zeng Yu, Tianrui Li, Ning Yu, Yi Pan, Hongmei Chen, Bing Liu
|
10.1145/3284174
|
1710.02844
| null | null |
An Analysis of the Value of Information when Exploring Stochastic,
Discrete Multi-Armed Bandits
|
cs.AI cs.LG stat.ML
|
In this paper, we propose an information-theoretic exploration strategy for
stochastic, discrete multi-armed bandits that achieves optimal regret. Our
strategy is based on the value of information criterion. This criterion
measures the trade-off between policy information and obtainable rewards. High
amounts of policy information are associated with exploration-dominant searches
of the space and yield high rewards. Low amounts of policy information favor
the exploitation of existing knowledge. Information, in this criterion, is
quantified by a parameter that can be varied during search. We demonstrate that
a simulated-annealing-like update of this parameter, with a sufficiently fast
cooling schedule, leads to an optimal regret that is logarithmic with respect
to the number of episodes.
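As a rough sketch of the annealed exploration described above: with the
information parameter varied during search, the policy behaves like soft-max
arm selection under a cooled temperature. The schedule and constants below are
assumptions, not the paper's exact criterion.

    import numpy as np

    def select_arm(pulls, reward_sums, episode, tau0=1.0):
        # Soft-max arm selection with a slowly cooled temperature, moving
        # from exploration-dominant to exploitation-dominant behavior.
        means = reward_sums / np.maximum(pulls, 1)
        tau = tau0 / np.log(episode + 2)         # cooling schedule
        logits = means / tau
        p = np.exp(logits - logits.max())        # numerically stable soft-max
        return np.random.choice(len(means), p=p / p.sum())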
|
Isaac J. Sledge, Jose C. Principe
|
10.3390/e20030155
|
1710.02869
| null | null |
Recurrent Deterministic Policy Gradient Method for Bipedal Locomotion on
Rough Terrain Challenge
|
cs.AI cs.LG cs.RO
|
This paper presents a deep learning framework that is capable of solving
partially observable locomotion tasks based on our novel interpretation of
Recurrent Deterministic Policy Gradient (RDPG). We study the bias of the
sampled error measure and its variance, induced by the partial observability of
the environment and by subtrajectory sampling, respectively. Three major
improvements
are introduced in our RDPG based learning framework: tail-step bootstrap of
interpolated temporal difference, initialisation of hidden state using past
trajectory scanning, and injection of external experiences learned by other
agents. The proposed learning framework was implemented to solve the
Bipedal-Walker challenge in OpenAI's gym simulation environment where only
partial state information is available. Our simulation study shows that the
autonomous behaviors generated by the RDPG agent are highly adaptive to a
variety of obstacles and enable the agent to effectively traverse rugged
terrains over long distances with a higher success rate than leading contenders.
|
Doo Re Song, Chuanyu Yang, Christopher McGreavy, Zhibin Li
|
10.1109/ICARCV.2018.8581309
|
1710.02896
| null | null |
Enhancing Interpretability of Black-box Soft-margin SVM by Integrating
Data-based Priors
|
stat.ML cs.LG
|
The lack of interpretability often makes black-box models difficult to apply
in many practical domains. For this reason, the current work proposes to
incorporate data-based prior information, at the input port of the black-box
model, into the black-box soft-margin SVM model to enhance its
interpretability. The concept and incorporation mechanism of data-based prior
information are successively developed, based on which the interpretable or
partly interpretable SVM optimization model is designed and then solved through
handily rewriting the optimization problem as a nonlinear quadratic programming
problem. An algorithm for mining data-based linear prior information from data
set is also proposed, which generates a linear expression with respect to two
appropriate inputs identified from all inputs of the system. Finally, the
proposed interpretability enhancement strategy is applied to eight benchmark
examples to demonstrate its effectiveness.
|
Shaohan Chen, Chuanhou Gao, and Ping Zhang
| null |
1710.02924
| null | null |
Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE,
and node2vec
|
cs.SI cs.LG stat.ML
|
Since the invention of word2vec, the skip-gram model has significantly
advanced the research of network embedding, such as the recent emergence of the
DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of
the aforementioned models with negative sampling can be unified into the matrix
factorization framework with closed forms. Our analysis and proofs reveal that:
(1) DeepWalk empirically produces a low-rank transformation of a network's
normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk
when the size of vertices' context is set to one; (3) As an extension of LINE,
PTE can be viewed as the joint factorization of multiple networks' Laplacians;
(4) node2vec is factorizing a matrix related to the stationary distribution and
transition probability tensor of a 2nd-order random walk. We further provide
the theoretical connections between skip-gram based network embedding
algorithms and the theory of graph Laplacian. Finally, we present the NetMF
method as well as its approximation algorithm for computing network embedding.
Our method offers significant improvements over DeepWalk and LINE for
conventional network mining tasks. This work lays the theoretical foundation
for skip-gram based network embedding methods, leading to a better
understanding of latent network representation learning.
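A compact NumPy sketch of the small-window NetMF construction implied above:
build the closed-form DeepWalk matrix, apply an element-wise truncated
logarithm, and factorize with SVD. Dense linear algebra is used purely for
illustration; the window size T and negative-sampling parameter b are free
parameters.

    import numpy as np

    def netmf_embed(A, dim=16, T=2, b=1.0):
        vol = A.sum()
        d = A.sum(axis=1)
        P = A / d[:, None]                        # random-walk transitions
        S = sum(np.linalg.matrix_power(P, r) for r in range(1, T + 1))
        M = (vol / (b * T)) * S / d[None, :]      # closed-form DeepWalk matrix
        logM = np.log(np.maximum(M, 1.0))         # element-wise truncated log
        U, s, _ = np.linalg.svd(logM)
        return U[:, :dim] * np.sqrt(s[:dim])      # rank-dim embedding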
|
Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, Jie Tang
|
10.1145/3159652.3159706
|
1710.02971
| null | null |
SGD for robot motion? The effectiveness of stochastic optimization on a
new benchmark for biped locomotion tasks
|
cs.RO cs.LG math.OC
|
Trajectory optimization and posture generation are hard problems in robot
locomotion, which can be non-convex and have multiple local optima. Progress on
these problems is further hindered by a lack of open benchmarks, since
comparisons of different solutions are difficult to make. In this paper we
introduce a new benchmark for trajectory optimization and posture generation of
legged robots, using a pre-defined scenario, robot and constraints, as well as
evaluation criteria. We evaluate state-of-the-art trajectory optimization
algorithms based on sequential quadratic programming (SQP) on the benchmark, as
well as new stochastic and incremental optimization methods borrowed from the
large-scale machine learning literature. Interestingly we show that some of
these stochastic and incremental methods, which are based on stochastic
gradient descent (SGD), achieve higher success rates than SQP on tough
initializations. Inspired by this observation we also propose a new incremental
variant of SQP which updates only a random subset of the costs and constraints
at each iteration. The algorithm is the best performing in both success rate
and convergence speed, improving over SQP by up to 30% in both criteria. The
benchmark's resources and a solution evaluation script are made openly
available.
|
Martim Brandao, Kenji Hashimoto, Atsuo Takanishi
| null |
1710.03029
| null | null |
Unifying Local and Global Change Detection in Dynamic Networks
|
cs.LG stat.ML
|
Many real-world networks are complex dynamical systems, where both local
(e.g., changing node attributes) and global (e.g., changing network topology)
processes unfold over time. Local dynamics may provoke global changes in the
network, and the ability to detect such effects could have profound
implications for a number of real-world problems. Most existing techniques
focus individually on either local or global aspects of the problem or treat
the two in isolation from each other. In this paper we propose a novel network
model that simultaneously accounts for both local and global dynamics. To the
best of our knowledge, this is the first attempt at modeling and detecting
local and global change points on dynamic networks via a unified generative
framework. Our model is built upon the popular mixed membership stochastic
blockmodels (MMSB) with sparse co-evolving patterns. We derive an efficient
stochastic gradient Langevin dynamics (SGLD) sampler for our proposed model,
which allows it to scale to potentially very large networks. Finally, we
validate our model on both synthetic and real-world data and demonstrate its
superiority over several baselines.
|
Wenzhe Li, Dong Guo, Greg Ver Steeg, Aram Galstyan
| null |
1710.03035
| null | null |
Learning Graph Representations with Embedding Propagation
|
cs.LG
|
We propose Embedding Propagation (EP), an unsupervised learning framework for
graph-structured data. EP learns vector representations of graphs by passing
two types of messages between neighboring nodes. Forward messages consist of
label representations such as representations of words and other attributes
associated with the nodes. Backward messages consist of gradients that result
from aggregating the label representations and applying a reconstruction loss.
Node representations are finally computed from the representation of their
labels. With significantly fewer parameters and hyperparameters, an instance of
EP is competitive with and often outperforms state of the art unsupervised and
semi-supervised learning methods on a range of benchmark data sets.
|
Alberto Garcia-Duran and Mathias Niepert
| null |
1710.03059
| null | null |
full-FORCE: A Target-Based Method for Training Recurrent Networks
|
cs.NE cs.LG q-bio.NC stat.ML
|
Trained recurrent networks are powerful tools for modeling dynamic neural
computations. We present a target-based method for modifying the full
connectivity matrix of a recurrent network to train it to perform tasks
involving temporally complex input/output transformations. The method
introduces a second network during training to provide suitable "target"
dynamics useful for performing the task. Because it exploits the full recurrent
connectivity, the method produces networks that perform tasks with fewer
neurons and greater noise robustness than traditional least-squares (FORCE)
approaches. In addition, we show how introducing additional input signals into
the target-generating network, which act as task hints, greatly extends the
range of tasks that can be learned and provides control over the complexity and
nature of the dynamics of the trained, task-performing network.
|
Brian DePasquale, Christopher J. Cueva, Kanaka Rajan, G. Sean Escola,
L.F. Abbott
|
10.1371/journal.pone.0191527
|
1710.0307
| null | null |
Verification of Binarized Neural Networks via Inter-Neuron Factoring
|
cs.SE cs.LG cs.LO
|
We study the problem of formal verification of Binarized Neural Networks
(BNN), which have recently been proposed as an energy-efficient alternative to
traditional learning networks. The verification of BNNs, using the reduction to
hardware verification, can be made even more scalable by factoring computations
among neurons within the same layer. By proving the NP-hardness of finding
optimal factoring as well as the hardness of PTAS approximability, we design
polynomial-time search heuristics to generate factoring solutions. The overall
framework allows applying verification techniques to moderately-sized BNNs for
embedded devices with thousands of neurons and inputs.
|
Chih-Hong Cheng, Georg N\"uhrenberg, Chung-Hao Huang, Harald Ruess
| null |
1710.03107
| null | null |
Toward Multidiversified Ensemble Clustering of High-Dimensional Data:
From Subspaces to Metrics and Beyond
|
cs.LG cs.CV
|
The rapid emergence of high-dimensional data in various areas has brought new
challenges to current ensemble clustering research. To deal with the curse of
dimensionality, recently considerable efforts in ensemble clustering have been
made by means of different subspace-based techniques. However, besides the
emphasis on subspaces, rather limited attention has been paid to the potential
diversity in similarity/dissimilarity metrics. It remains a surprisingly open
problem in ensemble clustering how to create and aggregate a large population
of diversified metrics, and furthermore, how to jointly investigate the
multi-level diversity in the large populations of metrics, subspaces, and
clusters in a unified framework. To tackle this problem, this paper proposes a
novel multidiversified ensemble clustering approach. In particular, we create a
large number of diversified metrics by randomizing a scaled exponential
similarity kernel, which are then coupled with random subspaces to form a large
set of metric-subspace pairs. Based on the similarity matrices derived from
these metric-subspace pairs, an ensemble of diversified base clusterings can
thereby be constructed. Further, an entropy-based criterion is utilized to
explore the cluster-wise diversity in ensembles, based on which three specific
ensemble clustering algorithms are presented by incorporating three types of
consensus functions. Extensive experiments are conducted on 30 high-dimensional
datasets, including 18 cancer gene expression datasets and 12 image/speech
datasets, which demonstrate the superiority of our algorithms over the
state-of-the-art. The source code is available at
https://github.com/huangdonghere/MDEC.
|
Dong Huang, Chang-Dong Wang, Jian-Huang Lai, Chee-Keong Kwoh
|
10.1109/TCYB.2021.3049633
|
1710.03113
| null | null |
Random Projection and Its Applications
|
cs.LG cs.AI
|
Random Projection is a foundational research topic that connects a number of
machine learning algorithms under a similar mathematical basis. It is used to
reduce the dimensionality of a dataset by efficiently projecting the data
points into a lower-dimensional space while preserving the original relative
distances between the data points. In this paper, we explain the random
projection method, covering its mathematical background and foundations, the
applications that currently adopt it, and an overview of current research
directions.
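A minimal sketch of the method in NumPy: a scaled Gaussian random matrix
projects the points down to d dimensions while approximately preserving
pairwise distances (Johnson-Lindenstrauss); the dimensions below are
illustrative.

    import numpy as np

    def random_projection(X, d):
        # Project n points from D dimensions down to d.
        D = X.shape[1]
        R = np.random.randn(D, d) / np.sqrt(d)   # scaled Gaussian matrix
        return X @ R

    X = np.random.rand(100, 10000)
    Y = random_projection(X, 50)                 # 10000 -> 50 dimensions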
|
Mahmoud Nabil
| null |
1710.03163
| null | null |
On Formalizing Fairness in Prediction with Machine Learning
|
cs.LG cs.AI stat.ML
|
Machine learning algorithms for prediction are increasingly being used in
critical decisions affecting human lives. Various fairness formalizations, with
no firm consensus yet, are employed to prevent such algorithms from
systematically discriminating against people based on certain attributes
protected by law. The aim of this article is to survey how fairness is
formalized in the machine learning literature for the task of prediction and
present these formalizations with their corresponding notions of distributive
justice from the social sciences literature. We provide theoretical as well as
empirical critiques of these notions from the social sciences literature and
explain how these critiques limit the suitability of the corresponding fairness
formalizations to certain domains. We also suggest two notions of distributive
justice which address some of these critiques and discuss avenues for
prospective fairness formalizations.
|
Pratik Gajane and Mykola Pechenizkiy
| null |
1710.03184
| null | null |
Forecasting Across Time Series Databases using Recurrent Neural Networks
on Groups of Similar Series: A Clustering Approach
|
cs.LG cs.DB econ.EM stat.AP stat.ML
|
With the advent of Big Data, databases containing large quantities of similar
time series are nowadays available in many applications. Forecasting
time series in these domains with traditional univariate forecasting procedures
leaves great potential for producing accurate forecasts untapped. Recurrent
neural networks (RNNs), and in particular Long Short-Term Memory (LSTM)
networks, have proven recently that they are able to outperform
state-of-the-art univariate time series forecasting methods in this context
when trained across all available time series. However, if the time series
database is heterogeneous, accuracy may degenerate, so that on the way towards
fully automatic forecasting methods in this space, a notion of similarity
between the time series needs to be built into the methods. To this end, we
present a prediction model that can be used with different types of RNN models
on subgroups of similar time series, which are identified by time series
clustering techniques. We assess our proposed methodology using LSTM networks,
a widely popular RNN variant. Our method achieves competitive results on
benchmarking datasets under competition evaluation procedures. In particular,
in terms of mean sMAPE accuracy, it consistently outperforms the baseline LSTM
model and outperforms all other methods on the CIF2016 forecasting competition
dataset.
|
Kasun Bandara, Christoph Bergmeir, Slawek Smyl
| null |
1710.03222
| null | null |
Function space analysis of deep learning representation layers
|
cs.AI cs.LG stat.ML
|
In this paper we propose a function space approach to Representation Learning
and the analysis of the representation layers in deep learning architectures.
We show how to compute a weak-type Besov smoothness index that quantifies the
geometry of the clustering in the feature space. This approach was already
applied successfully to improve the performance of machine learning algorithms
such as the Random Forest and tree-based Gradient Boosting. Our experiments
demonstrate that in well-known and well-performing trained networks, the Besov
smoothness of the training set, measured in the corresponding hidden layer
feature map representation, increases from layer to layer. We also contribute
to the understanding of generalization by showing how the Besov smoothness of
the representations decreases as we add more mis-labeling to the training
data. We hope this approach will contribute to the de-mystification of some
aspects of deep learning.
|
Oren Elisha and Shai Dekel
| null |
1710.03263
| null | null |
Checkpoint Ensembles: Ensemble Methods from a Single Training Process
|
cs.LG
|
We present the checkpoint ensembles method that can learn ensemble models in
a single training process. Although checkpoint ensembles can be applied to any
parametric iterative learning technique, here we focus on neural networks.
Neural networks' composable and simple neurons make it possible to capture many
individual and interaction effects among features. However, small sample sizes
and sampling noise may result in patterns in the training data that are not
representative of the true relationship between the features and the outcome.
As a solution, regularization during training is often used (e.g. dropout).
However, regularization is no panacea -- it does not perfectly address
overfitting. Even with methods like dropout, two methodologies are commonly
used in practice. First is to utilize a validation set independent to the
training set as a way to decide when to stop training. Second is to use
ensemble methods to further reduce overfitting and take advantage of local
optima (i.e. averaging over the predictions of several models). In this paper,
we explore checkpoint ensembles -- a simple technique that combines these two
ideas in one training process. Checkpoint ensembles improve performance by
averaging the predictions from "checkpoints" of the best models within a
single training process. We use three real-world data sets -- text, image, and
electronic health record data -- using three prediction models: a vanilla
neural network, a convolutional neural network, and a long short term memory
network to show that checkpoint ensembles outperform existing methods: a method
that selects a model by minimum validation score, and two methods that average
models by weights. Our results also show that checkpoint ensembles capture a
portion of the performance gains that traditional ensembles provide.
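A minimal sketch of the checkpoint-ensemble prediction step, assuming
checkpoints were recorded during training as dictionaries holding a validation
score and model weights, and that predict_fn applies a set of weights to data;
both names are placeholders.

    import numpy as np

    def checkpoint_ensemble_predict(checkpoints, predict_fn, X, top_k=5):
        # Average the predicted probabilities of the top-k checkpoints
        # (ranked by validation score) from a single training run.
        best = sorted(checkpoints, key=lambda c: c["val_score"],
                      reverse=True)[:top_k]
        probs = [predict_fn(c["weights"], X) for c in best]
        return np.mean(probs, axis=0)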
|
Hugh Chen and Scott Lundberg and Su-In Lee
| null |
1710.03282
| null | null |
Coresets for Dependency Networks
|
cs.AI cs.LG stat.ML
|
Many applications infer the structure of a probabilistic graphical model from
data to elucidate the relationships between variables. But how can we train
graphical models on a massive data set? In this paper, we show how to construct
coresets (compressed data sets which can be used as a proxy for the original
data and have provably bounded worst-case error) for Gaussian dependency networks
(DNs), i.e., cyclic directed graphical models over Gaussians, where the parents
of each variable are its Markov blanket. Specifically, we prove that Gaussian
DNs admit coresets of size independent of the size of the data set.
Unfortunately, this does not extend to DNs over members of the exponential
family in general. As we will prove, Poisson DNs do not admit small coresets.
Despite this worst-case result, we will provide an argument why our coreset
construction for DNs can still work well in practice on count data. To
corroborate our theoretical results, we empirically evaluated the resulting
Core DNs on real data sets.
|
Alejandro Molina, Alexander Munteanu, Kristian Kersting
| null |
1710.03285
| null | null |
Sum-Product Networks for Hybrid Domains
|
cs.LG stat.ML
|
While all kinds of mixed data (from personal data, over panel and scientific
data, to public and commercial data) are collected and stored, building
probabilistic graphical models for these hybrid domains becomes more difficult.
Users spend significant amounts of time in identifying the parametric form of
the random variables (Gaussian, Poisson, Logit, etc.) involved and learning the
mixed models. To make this difficult task easier, we propose the first
trainable probabilistic deep architecture for hybrid domains that features
tractable queries. It is based on Sum-Product Networks (SPNs) with piecewise
polynomial leaf distributions together with novel nonparametric decomposition
and conditioning steps using the Hirschfeld-Gebelein-R\'enyi Maximum
Correlation Coefficient. This relieves the user from deciding a-priori the
parametric form of the random variables but is still expressive enough to
effectively approximate any continuous distribution and permits efficient
learning and inference. Our empirical evidence shows that the architecture,
called Mixed SPNs, can indeed capture complex distributions across a wide range
of hybrid domains.
|
Alejandro Molina, Antonio Vergari, Nicola Di Mauro, Sriraam Natarajan,
Floriana Esposito, Kristian Kersting
| null |
1710.03297
| null | null |
Massive Open Online Courses Temporal Profiling for Dropout Prediction
|
cs.IR cs.LG
|
Massive Open Online Courses (MOOCs) are attracting the attention of people
all over the world. Regardless of the platform, the numbers of registrants for
online courses are impressive but, at the same time, completion rates are
disappointing. Understanding the mechanisms of dropping out based on the
learner profile arises as a crucial task in MOOCs, since it will allow
intervening at the right moment in order to assist the learner in completing
the course. In this paper, the dropout behaviour of learners in a MOOC is
thoroughly studied by first extracting features that describe the behavior of
learners within the course and then by comparing three classifiers (Logistic
Regression, Random Forest and AdaBoost) in two tasks: predicting which users
will have dropped out by a certain week and predicting which users will drop
out on a specific week. The former proved to be considerably easier, with
all three classifiers performing equally well. However, the accuracy for the
second task is lower, and Logistic Regression tends to perform slightly better
than the other two algorithms. We found that features that reflect an active
attitude of the user towards the MOOC, such as submitting their assignment,
posting on the Forum and filling their Profile, are strong indicators of
persistence.
|
Tom Rolandus Hagedoorn, Gerasimos Spanakis
| null |
1710.03323
| null | null |
Energy-efficient Amortized Inference with Cascaded Deep Classifiers
|
cs.LG
|
Deep neural networks have been remarkably successful in various AI tasks but
often incur high computation and energy costs in energy-constrained applications
such as mobile sensing. We address this problem by proposing a novel framework
that optimizes the prediction accuracy and energy cost simultaneously, thus
enabling effective cost-accuracy trade-off at test time. In our framework, each
data instance is pushed into a cascade of deep neural networks with increasing
sizes, and a selection module is used to sequentially determine when a
sufficiently accurate classifier can be used for this data instance. The
cascade of neural networks and the selection module are jointly trained in an
end-to-end fashion by the REINFORCE algorithm to optimize a trade-off between
the computational cost and the predictive accuracy. Our method is able to
simultaneously improve the accuracy and efficiency by learning to assign easy
instances to fast yet sufficiently accurate classifiers to save computation and
energy cost, while assigning harder instances to deeper and more powerful
classifiers to ensure satisfactory accuracy. With extensive experiments on
several image classification datasets using cascaded ResNet classifiers, we
demonstrate that our method outperforms the standard well-trained ResNets in
accuracy but only requires less than 20% and 50% FLOPs cost on the CIFAR-10/100
datasets and 66% on the ImageNet dataset, respectively.
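To make the test-time control flow concrete, here is a sketch of the cascade
with the learned REINFORCE selection module replaced by a simple confidence
threshold; the models are assumed to be callables returning class
probabilities.

    import numpy as np

    def cascade_predict(models, x, threshold=0.9):
        # Push the instance through increasingly large classifiers and
        # exit as soon as one of them is confident enough.
        for model in models[:-1]:
            probs = model(x)
            if probs.max() >= threshold:
                return probs.argmax()            # easy instance: exit early
        return models[-1](x).argmax()            # hard instance: deepest model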
|
Jiaqi Guan, Yang Liu, Qiang Liu, Jian Peng
| null |
1710.03368
| null | null |
On- and Off-Policy Monotonic Policy Improvement
|
cs.AI cs.LG stat.ML
|
Monotonic policy improvement and off-policy learning are two main desirable
properties for reinforcement learning algorithms. In this paper, by lower
bounding the performance difference of two policies, we show that the monotonic
policy improvement is guaranteed from on- and off-policy mixture samples. An
optimization procedure which applies the proposed bound can be regarded as an
off-policy natural policy gradient method. In order to support the theoretical
result, we provide a trust region policy optimization method using experience
replay as a naive application of our bound, and evaluate its performance in two
classical benchmark problems.
|
Ryo Iwaki and Minoru Asada
| null |
1710.03442
| null | null |
Safe Semi-Supervised Learning of Sum-Product Networks
|
stat.ML cs.LG
|
In several domains obtaining class annotations is expensive while at the same
time unlabelled data are abundant. While most semi-supervised approaches
enforce restrictive assumptions on the data distribution, recent work has
managed to learn semi-supervised models in a non-restrictive regime. However,
so far such approaches have only been proposed for linear models. In this work,
we introduce semi-supervised parameter learning for Sum-Product Networks
(SPNs). SPNs are deep probabilistic models admitting inference in linear time
in the number of network edges. Our approach has several advantages, as it (1)
allows generative and discriminative semi-supervised learning, (2) guarantees
that adding unlabelled data can increase, but not degrade, the performance
(safe), and (3) is computationally efficient and does not enforce restrictive
assumptions on the data distribution. We show on a variety of data sets that
safe semi-supervised learning with SPNs is competitive compared to
state-of-the-art and can lead to a better generative and discriminative
objective value than a purely supervised approach.
|
Martin Trapp, Tamas Madl, Robert Peharz, Franz Pernkopf, Robert Trappl
| null |
1710.03444
| null | null |
Learning to Generalize: Meta-Learning for Domain Generalization
|
cs.LG
|
Domain shift refers to the well known problem that a model trained in one
source domain performs poorly when applied to a target domain with different
statistics. Domain Generalization (DG) techniques attempt to alleviate this
issue by producing models which by design generalize well to novel testing
domains. We propose a novel meta-learning method for domain generalization.
Rather than designing a specific model that is robust to domain shift as in
most previous DG work, we propose a model agnostic training procedure for DG.
Our algorithm simulates train/test domain shift during training by synthesizing
virtual testing domains within each mini-batch. The meta-optimization objective
requires that steps to improve training domain performance should also improve
testing domain performance. This meta-learning procedure trains models with
good generalization ability to novel domains. We evaluate our method and
achieve state of the art results on a recent cross-domain image classification
benchmark, as well demonstrating its potential on two classic reinforcement
learning tasks.
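A first-order NumPy sketch of the meta-update described above, with
second-order terms dropped: a virtual gradient step on the meta-train domains
must also reduce the loss on the held-out meta-test domains. loss_grad and the
step sizes are assumptions.

    import numpy as np

    def mldg_step(theta, loss_grad, train_doms, test_doms,
                  alpha=0.01, beta=1.0, gamma=0.01):
        # Virtual train/test domain shift synthesized within one update.
        g_train = np.mean([loss_grad(theta, d) for d in train_doms], axis=0)
        theta_virtual = theta - alpha * g_train
        g_test = np.mean([loss_grad(theta_virtual, d) for d in test_doms],
                         axis=0)
        return theta - gamma * (g_train + beta * g_test)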
|
Da Li, Yongxin Yang, Yi-Zhe Song, Timothy M. Hospedales
| null |
1710.03463
| null | null |
An Analysis of Dropout for Matrix Factorization
|
cs.LG stat.ML
|
Dropout is a simple yet effective algorithm for regularizing neural networks
by randomly dropping out units through Bernoulli multiplicative noise, and for
some restricted problem classes, such as linear or logistic regression, several
theoretical studies have demonstrated the equivalence between dropout and a
fully deterministic optimization problem with data-dependent Tikhonov
regularization. This work presents a theoretical analysis of dropout for matrix
factorization, where Bernoulli random variables are used to drop a factor,
thereby attempting to control the size of the factorization. While recent work
has demonstrated the empirical effectiveness of dropout for matrix
factorization, a theoretical understanding of the regularization properties of
dropout in this context remains elusive. This work demonstrates the equivalence
between dropout and a fully deterministic model for matrix factorization in
which the factors are regularized by the sum of the product of the norms of the
columns. While the resulting regularizer is closely related to a variational
form of the nuclear norm, suggesting that dropout may limit the size of the
factorization, we show that it is possible to trivially lower the objective
value by doubling the size of the factorization. We show that this problem is
caused by the use of a fixed dropout rate, which motivates the use of a rate
that increases with the size of the factorization. Synthetic experiments
validate our theoretical findings.
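The analyzed setting can be written in a few lines: a stochastic gradient step
on ||X - U diag(z) V^T||_F^2, where z drops each of the r factors
independently. A minimal NumPy sketch follows; the inverted-dropout scaling is
an assumption.

    import numpy as np

    def dropout_mf_step(U, V, X, rate=0.5, lr=0.01):
        r = U.shape[1]
        z = (np.random.rand(r) > rate) / (1.0 - rate)  # Bernoulli factor mask
        R = X - (U * z) @ V.T                # residual with dropped factors
        gU = R @ (V * z)                     # gradient w.r.t. U
        gV = R.T @ (U * z)                   # gradient w.r.t. V
        return U + lr * gU, V + lr * gV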
|
Jacopo Cavazza, Connor Lane, Benjamin D. Haeffele, Vittorio Murino,
Ren\'e Vidal
| null |
1710.03487
| null | null |
Underestimated cost of targeted attacks on complex networks
|
cs.SI cs.LG physics.soc-ph
|
The robustness of complex networks under targeted attacks is deeply connected
to the resilience of complex systems, i.e., the ability to make appropriate
responses to the attacks. In this article, we investigate state-of-the-art
targeted node attack algorithms and demonstrate that they become very
inefficient when the cost of the attack is taken into consideration. We make
the explicit assumption that the cost of removing a node is proportional to the
number of adjacent links that are removed, i.e., higher-degree nodes have a
higher cost. Finally, for the case when it is possible to
attack links, we propose a simple and efficient edge removal strategy named
Hierarchical Power Iterative Normalized cut (HPI-Ncut). The results on real and
artificial networks show that the HPI-Ncut algorithm outperforms all the node
removal and link removal attack algorithms when the cost of the attack is taken
into consideration. In addition, we show that on sparse networks, the
complexity of this hierarchical power iteration edge removal algorithm is only
$O(n\log^{2+\epsilon}(n))$.
|
Xiao-Long Ren, Niels Gleinig, Dijana Tolic, Nino Antulov-Fantulin
|
10.1155/2018/9826243
|
1710.03522
| null | null |
Fast and Strong Convergence of Online Learning Algorithms
|
cs.LG stat.ML
|
In this paper, we study the online learning algorithm without explicit
regularization terms. This algorithm is essentially a stochastic gradient
descent scheme in a reproducing kernel Hilbert space (RKHS). The polynomially
decaying step size in each iteration plays the role of regularization and
ensures the generalization ability of the online learning algorithm. We develop
a novel capacity-dependent analysis of the performance of the last iterate of
the online learning algorithm. The contribution of this paper is two-fold.
First, our analysis leads to a convergence rate in the standard mean square
distance which is the best known so far. Second, we establish, for the first time,
the strong convergence of the last iterate with polynomially decaying step
sizes in the RKHS norm. We demonstrate that the theoretical analysis
established in this paper fully exploits the fine structure of the underlying
RKHS, and thus can lead to sharp error estimates of the online learning
algorithm.
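Concretely, the unregularized scheme referred to above is the standard online
kernel least-squares update, written here for completeness with a polynomially
decaying step size; the exponent range is the usual assumption:

    f_{t+1} = f_t - \eta_t (f_t(x_t) - y_t) K_{x_t},
    \eta_t = \eta_1 t^{-\theta}, \theta \in (0, 1),

where K_{x_t} = K(x_t, .) denotes the kernel section at x_t.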
|
Zheng-Chu Guo and Lei Shi
| null |
1710.036
| null | null |
CTD: Fast, Accurate, and Interpretable Method for Static and Dynamic
Tensor Decompositions
|
cs.NA cs.LG stat.ML
|
How can we find patterns and anomalies in a tensor, or multi-dimensional
array, in an efficient and directly interpretable way? How can we do this in an
online environment, where a new tensor arrives each time step? Finding patterns
and anomalies in a tensor is a crucial problem with many applications,
including building safety monitoring, patient health monitoring, cyber
security, terrorist detection, and fake user detection in social networks.
Standard PARAFAC and Tucker decomposition results are not directly
interpretable. Although a few sampling-based methods have previously been
proposed towards better interpretability, they need to be made faster, more
memory efficient, and more accurate.
In this paper, we propose CTD, a fast, accurate, and directly interpretable
tensor decomposition method based on sampling. CTD-S, the static version of
CTD, comes with provable accuracy guarantees and is 17~83x more accurate than
the state-of-the-art method. By removing redundancy, CTD-S is also 5~86x
faster and 7~12x more memory-efficient than the state-of-the-art method.
CTD-D, the dynamic version of CTD, is the first interpretable dynamic tensor
decomposition method ever proposed. It is also 2~3x faster than the already
fast CTD-S, by exploiting the factors from the previous time step and by
reordering operations. With CTD, we demonstrate how the results can be
effectively interpreted in online distributed denial of service (DDoS) attack
detection.
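  CTD itself is more involved, but the following hedged sketch of a generic
column-sampling (CX-style) decomposition conveys why sampling actual fibers of
the matricized tensor yields directly interpretable factors; it is our own
illustration, not the authors' algorithm:

import numpy as np

def sampled_cx_decomposition(M, k, seed=0):
    # Sample k actual columns (fibers of the matricized tensor) with
    # probability proportional to their squared norms; the sampled
    # columns form a directly interpretable factor C.
    rng = np.random.default_rng(seed)
    probs = np.sum(M ** 2, axis=0) / np.sum(M ** 2)
    cols = rng.choice(M.shape[1], size=k, replace=False, p=probs)
    C = M[:, cols]
    # Mixing matrix via least squares so that M ~= C @ X.
    X, *_ = np.linalg.lstsq(C, M, rcond=None)
    return C, X, cols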
|
Jungwoo Lee, Dongjin Choi, and Lee Sael
|
10.1371/journal.pone.0200579
|
1710.03608
| null | null |
LinXGBoost: Extension of XGBoost to Generalized Local Linear Models
|
cs.LG stat.ML
|
XGBoost is often presented as the algorithm that wins every ML competition.
Surprisingly, this is true even though predictions are piecewise constant. This
might be justified in high-dimensional input spaces, but when the number of
features is low, a piecewise linear model is likely to perform better. We
extend XGBoost into LinXGBoost, which stores a linear model at each leaf.
This extension, equivalent to piecewise regularized least-squares, is
particularly attractive for regressing functions that exhibit jumps or
discontinuities, which are notoriously hard to fit. We compare our extension
to vanilla XGBoost and Random Forests in experiments on both synthetic and
real-world data sets.
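  A minimal sketch of the linear-model-leaves idea, using a single decision
tree with per-leaf ridge regressions in place of the full gradient-boosting
machinery (an illustration of the concept, not the LinXGBoost
implementation):

import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge

def fit_piecewise_linear(X, y, max_leaf_nodes=8, alpha=1e-3):
    # A tree partitions the input space; each leaf then stores a
    # regularized linear model instead of a constant prediction.
    tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes).fit(X, y)
    leaf_ids = tree.apply(X)
    models = {leaf: Ridge(alpha=alpha).fit(X[leaf_ids == leaf],
                                           y[leaf_ids == leaf])
              for leaf in np.unique(leaf_ids)}

    def predict(X_new):
        new_ids = tree.apply(X_new)
        out = np.empty(len(X_new))
        for leaf in np.unique(new_ids):
            mask = new_ids == leaf
            out[mask] = models[leaf].predict(X_new[mask])
        return out

    return predict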
|
Laurent de Vito
| null |
1710.03634
| null | null |
Continuous Adaptation via Meta-Learning in Nonstationary and Competitive
Environments
|
cs.LG cs.AI
|
The ability to continuously learn and adapt from limited experience in
nonstationary environments is an important milestone on the path towards
general intelligence. In this paper, we cast the problem of continuous
adaptation into the learning-to-learn framework. We develop a simple
gradient-based meta-learning algorithm suitable for adaptation in dynamically
changing and adversarial scenarios. Additionally, we design a new multi-agent
competitive environment, RoboSumo, and define iterated adaptation games for
testing various aspects of continuous adaptation strategies. We demonstrate
that meta-learning enables significantly more efficient adaptation than
reactive baselines in the few-shot regime. Our experiments with a population of
agents that learn and compete suggest that meta-learners are the fittest.
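  A first-order sketch of the inner/outer adaptation loop, assuming each task
exposes a loss-gradient oracle grad_fn (a hypothetical interface; the paper's
full algorithm additionally handles nonstationarity, which is omitted here):

import numpy as np

def meta_update(theta, task_grads, inner_lr=0.05, outer_lr=0.01):
    # One meta-step: adapt a copy of theta to each task with a single
    # inner gradient step, then move theta using the gradients evaluated
    # at the adapted parameters (first-order approximation).
    meta_grad = np.zeros_like(theta)
    for grad_fn in task_grads:
        adapted = theta - inner_lr * grad_fn(theta)   # fast adaptation
        meta_grad += grad_fn(adapted)                 # outer-loop signal
    return theta - outer_lr * meta_grad / len(task_grads)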
|
Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor
Mordatch, Pieter Abbeel
| null |
1710.03641
| null | null |
High-dimensional dynamics of generalization error in neural networks
|
stat.ML cs.LG physics.data-an q-bio.NC
|
We perform an average case analysis of the generalization dynamics of large
neural networks trained using gradient descent. We study the
practically-relevant "high-dimensional" regime where the number of free
parameters in the network is on the order of or even larger than the number of
examples in the dataset. Using random matrix theory and exact solutions in
linear models, we derive the generalization error and training error dynamics
of learning and analyze how they depend on the dimensionality of data and
signal to noise ratio of the learning problem. We find that the dynamics of
gradient descent learning naturally protect against overtraining and
overfitting in large networks. Overtraining is worst at intermediate network
sizes, when the effective number of free parameters equals the number of
samples, and thus can be reduced by making a network smaller or larger.
Additionally, in the high-dimensional regime, low generalization error requires
starting with small initial weights. We then turn to non-linear neural
networks, and show that making networks very large does not harm their
generalization performance. On the contrary, it can in fact reduce
overtraining, even without early stopping or regularization of any sort. We
identify two novel phenomena underlying this behavior in overcomplete models:
first, there is a frozen subspace of the weights in which no learning occurs
under gradient descent; and second, the statistical properties of the
high-dimensional regime yield better-conditioned input correlations which
protect against overtraining. We demonstrate that naive application of
worst-case theories such as Rademacher complexity is inaccurate in predicting
the generalization performance of deep neural networks, and derive an
alternative bound which incorporates the frozen subspace and conditioning
effects and qualitatively matches the behavior observed in simulation.
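  The linear-model setting can be reproduced in a few lines; the sketch below
(our illustration) trains a linear student on a noisy teacher with as many
parameters as examples and prints the characteristic gap between training and
test error as overtraining sets in:

import numpy as np

rng = np.random.default_rng(0)
n, p, snr, lr, steps = 100, 100, 2.0, 0.01, 2001
w_true = rng.standard_normal(p) / np.sqrt(p)
X, X_test = rng.standard_normal((n, p)), rng.standard_normal((2000, p))
y = X @ w_true + rng.standard_normal(n) / snr
y_test = X_test @ w_true
w = 1e-3 * rng.standard_normal(p)   # small initial weights matter here
for t in range(steps):
    w -= lr * X.T @ (X @ w - y) / n  # full-batch gradient descent
    if t % 200 == 0:
        train = np.mean((X @ w - y) ** 2)
        test = np.mean((X_test @ w - y_test) ** 2)
        print(f"step {t:5d}  train {train:.3f}  test {test:.3f}")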
|
Madhu S. Advani, Andrew M. Saxe
| null |
1710.03667
| null | null |
Fast and Safe: Accelerated gradient methods with optimality certificates
and underestimate sequences
|
math.OC cs.LG
|
In this work we introduce the concept of an Underestimate Sequence (UES),
which is motivated by Nesterov's estimate sequence. Our definition of a UES
utilizes three sequences, one of which is a lower bound (or under-estimator) of
the objective function. The question of how to construct an appropriate
sequence of lower bounds is addressed, and we present lower bounds for strongly
convex smooth functions and for strongly convex composite functions, which
adhere to the UES framework. Further, we propose several first-order methods
for minimizing strongly convex functions in both the smooth and composite
cases. The algorithms, based on efficiently updating lower bounds on the
objective functions, have natural stopping conditions that provide the user
with a certificate of optimality. Convergence of all algorithms is guaranteed
through the UES framework, and we show that all presented algorithms converge
linearly, with the accelerated variants enjoying the optimal linear rate of
convergence.
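  The paper's UES construction is more refined, but the flavor of a computable
optimality certificate can be sketched with the generic strong-convexity lower
bound f* >= f(x) - ||grad f(x)||^2 / (2 mu), which turns the gradient norm
into a certified gap that doubles as a stopping condition (our illustration,
not the UES algorithm itself):

import numpy as np

def gd_with_certificate(grad, x0, mu, L, tol=1e-8, max_iter=100000):
    # Strong convexity with parameter mu gives the computable bound
    #   f(x) - f* <= ||grad f(x)||^2 / (2 mu),
    # so the gap below certifies suboptimality at every iterate.
    x = np.asarray(x0, dtype=float)
    gap = np.inf
    for _ in range(max_iter):
        g = grad(x)
        gap = float(g @ g) / (2.0 * mu)
        if gap <= tol:
            break
        x = x - g / L   # standard 1/L gradient step for L-smooth objectives
    return x, gap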
|
Majid Jahani, Naga Venkata C. Gudapati, Chenxin Ma, Rachael Tappenden,
Martin Tak\'a\v{c}
| null |
1710.03695
| null | null |
Mixed Precision Training
|
cs.AI cs.LG stat.ML
|
Deep neural networks have enabled progress in a wide variety of applications.
Growing the size of the neural network typically results in improved accuracy.
As model sizes grow, the memory and compute requirements for training these
models also increase. We introduce a technique to train deep neural networks
using half precision floating point numbers. In our technique, weights,
activations and gradients are stored in IEEE half-precision format.
Half-precision floating point numbers have a limited numerical range compared
to
single-precision numbers. We propose two techniques to handle this loss of
information. Firstly, we recommend maintaining a single-precision copy of the
weights that accumulates the gradients after each optimizer step. This
single-precision copy is rounded to half-precision format during training.
Secondly, we propose scaling the loss appropriately to handle the loss of
information with half-precision gradients. We demonstrate that this approach
works for a wide variety of models, including convolutional neural networks,
recurrent neural networks and generative adversarial networks. This technique
works for large scale models with more than 100 million parameters trained on
large datasets. Using this approach, we can reduce the memory consumption of
deep learning models by nearly 2x. In future processors, we can also expect a
significant computation speedup using half-precision hardware units.
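  A conceptual sketch of one optimizer step under this recipe, assuming a
hypothetical grad_fn that backpropagates the scaled loss through a
half-precision copy of the weights:

import numpy as np

def mixed_precision_sgd_step(master_w, grad_fn, lr=0.01, loss_scale=1024.0):
    # FP32 master weights, FP16 compute copy, and loss scaling to keep
    # small gradient values representable in half precision.
    w_half = master_w.astype(np.float16)            # half-precision copy
    grad_half = grad_fn(w_half, loss_scale)         # backprop on scaled loss
    grad = grad_half.astype(np.float32) / loss_scale  # unscale in FP32
    if not np.all(np.isfinite(grad)):               # skip step on overflow
        return master_w
    return master_w - lr * grad                     # update master weights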
|
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos,
Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev,
Ganesh Venkatesh, Hao Wu
| null |
1710.03740
| null | null |
End-to-End Deep Learning for Steering Autonomous Vehicles Considering
Temporal Dependencies
|
cs.LG
|
Steering a car through traffic is a complex task that is difficult to cast
into algorithms. Therefore, researchers turn to training artificial neural
networks on a front-facing camera data stream along with the associated
steering angles. Nevertheless, most existing solutions consider only the visual
camera frames as input, thus ignoring the temporal relationship between frames.
In this work, we propose a Convolutional Long Short-Term Memory Recurrent
Neural Network (C-LSTM), that is end-to-end trainable, to learn both visual and
dynamic temporal dependencies of driving. Additionally, we pose the steering
angle regression problem as a classification problem while imposing a spatial
relationship between the output layer neurons. This method is based on
learning a sinusoidal function that encodes steering angles. To train and
validate our proposed methods, we used the publicly available Comma.ai
dataset. Our solution improved the steering root mean square error by 35%
over recent methods and yielded 87% more stable steering.
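  One plausible form of such an encoding (an assumption for illustration; the
paper's exact parameterization may differ) assigns each output neuron a soft
target on a raised cosine centred at the true angle, so that spatially close
neurons receive similar targets:

import numpy as np

def sinusoidal_targets(angle, bins, width=0.4):
    # Soft classification targets: one raised-cosine bump, centred on the
    # true steering angle, evaluated at each output neuron's bin centre.
    centred = np.clip((bins - angle) / width, -np.pi, np.pi)
    return 0.5 * (1.0 + np.cos(centred))

bins = np.linspace(-1.0, 1.0, 21)   # discretized steering range
print(np.round(sinusoidal_targets(0.1, bins), 2))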
|
Hesham M. Eraqi, Mohamed N. Moustafa, Jens Honer
| null |
1710.03804
| null | null |
Inference on Auctions with Weak Assumptions on Information
|
econ.EM cs.GT cs.LG math.ST stat.TH
|
Given a sample of bids from independent auctions, this paper examines the
question of inference on auction fundamentals (e.g. valuation distributions,
welfare measures) under weak assumptions on information structure. The question
is important as it allows us to learn about the valuation distribution in a
robust way, i.e., without assuming that a particular information structure
holds across observations. We leverage the recent contributions of
\cite{Bergemann2013} in the robust mechanism design literature that exploit the
link between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in
incomplete information games to construct an econometrics framework for
learning about auction fundamentals using observed data on bids. We showcase
our construction of identified sets in private value and common value auctions.
Our approach for constructing these sets inherits the computational simplicity
of solving for correlated equilibria: checking whether a particular valuation
distribution belongs to the identified set is as simple as determining whether
a {\it linear} program is feasible. A similar linear program can be used to
construct the identified set on various welfare measures and counterfactual
objects. For inference and to summarize statistical uncertainty, we propose
novel finite sample methods using tail inequalities that are used to construct
confidence regions on sets. We also highlight methods based on Bayesian
bootstrap and subsampling. A set of Monte Carlo experiments shows adequate
finite sample properties of our inference procedures. We illustrate our methods
using data from OCS auctions.
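  The membership test reduces to off-the-shelf linear programming; a hedged
sketch, with the construction of the equality constraints (bid marginals,
equilibrium conditions) left to the caller as inputs:

import numpy as np
from scipy.optimize import linprog

def in_identified_set(A_eq, b_eq, n_cells):
    # Feasibility LP: does a nonnegative joint distribution over the
    # (valuation, bid) cells exist that satisfies the linear equality
    # constraints encoding observed bid marginals and the candidate
    # valuation distribution? Zero objective: only feasibility matters.
    res = linprog(c=np.zeros(n_cells), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_cells, method="highs")
    return res.status == 0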
|
Vasilis Syrgkanis, Elie Tamer, Juba Ziani
| null |
1710.03830
| null | null |
Disentangled Representations via Synergy Minimization
|
cs.LG cs.IT math.IT
|
Scientists often seek simplified representations of complex systems to
facilitate prediction and understanding. If the factors comprising a
representation allow us to make accurate predictions about our system, but
obscuring any subset of the factors destroys our ability to make predictions,
we say that the representation exhibits informational synergy. We argue that
synergy is an undesirable feature in learned representations and that
explicitly minimizing synergy can help disentangle the true factors of
variation underlying data. We explore different ways of quantifying synergy,
deriving new closed-form expressions in some cases, and then show how to modify
learning to produce representations that are minimally synergistic. We
introduce a benchmark task to disentangle separate characters from images of
words. We demonstrate that Minimally Synergistic (MinSyn) representations
correctly disentangle characters while methods relying on statistical
independence fail.
|
Greg Ver Steeg, Rob Brekelmans, Hrayr Harutyunyan, and Aram Galstyan
| null |
1710.03839
| null | null |
Using Task Descriptions in Lifelong Machine Learning for Improved
Performance and Zero-Shot Transfer
|
cs.LG stat.ML
|
Knowledge transfer between tasks can improve the performance of learned
models, but requires an accurate estimate of the inter-task relationships to
identify the relevant knowledge to transfer. These inter-task relationships are
typically estimated based on training data for each task, which is inefficient
in lifelong learning settings where the goal is to learn each consecutive task
rapidly from as little data as possible. To reduce this burden, we develop a
lifelong learning method based on coupled dictionary learning that utilizes
high-level task descriptions to model the inter-task relationships. We show
that using task descriptors improves the performance of the learned task
policies, providing both theoretical justification for the benefit and
empirical demonstration of the improvement across a variety of learning
problems. Given only the descriptor for a new task, the lifelong learner is
also able to accurately predict a model for the new task through zero-shot
learning using the coupled dictionary, eliminating the need to gather training
data before addressing the task.
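  A minimal sketch of the zero-shot step, assuming dictionaries D (descriptor
side) and L (policy side) that share a sparse code; both are hypothetical
stand-ins for the learned coupled dictionaries:

import numpy as np
from sklearn.linear_model import Lasso

def zero_shot_policy(descriptor, D, L, sparsity=0.1):
    # Recover a sparse code for the new task by regressing its descriptor
    # against dictionary D, then reconstruct the policy parameters from
    # the coupled dictionary L using the same code.
    code = Lasso(alpha=sparsity, fit_intercept=False).fit(D, descriptor).coef_
    return L @ code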
|
David Isele, Mohammad Rostami, Eric Eaton
| null |
1710.03850
| null | null |
On Estimation of $L_{r}$-Norms in Gaussian White Noise Models
|
math.ST cs.LG stat.TH
|
We provide a complete picture of asymptotically minimax estimation of
$L_r$-norms (for any $r\ge 1$) of the mean in Gaussian white noise model over
Nikolskii-Besov spaces. In this regard, we complement the work of Lepski,
Nemirovski and Spokoiny (1999), who considered the cases of $r=1$ (with
poly-logarithmic gap between upper and lower bounds) and $r$ even (with
asymptotically sharp upper and lower bounds) over H\"{o}lder spaces. We
additionally consider the case of asymptotically adaptive minimax estimation
and demonstrate a difference between even and non-even $r$ in terms of an
investigator's ability to produce asymptotically adaptive minimax estimators
without paying a penalty.
|
Yanjun Han, Jiantao Jiao, Rajarshi Mukherjee
|
10.1007/s00440-020-00982-x
|
1710.03863
| null | null |
Learning Task Specifications from Demonstrations
|
cs.LG cs.AI cs.LO
|
Real world applications often naturally decompose into several sub-tasks. In
many settings (e.g., robotics) demonstrations provide a natural way to specify
the sub-tasks. However, most methods for learning from demonstrations either do
not provide guarantees that the artifacts learned for the sub-tasks can be
safely recombined or limit the types of composition available. Motivated by
this deficit, we consider the problem of inferring Boolean non-Markovian
rewards (also known as logical trace properties or specifications) from
demonstrations provided by an agent operating in an uncertain, stochastic
environment. Crucially, specifications admit well-defined composition rules
that are typically easy to interpret. In this paper, we formulate the
specification inference task as a maximum a posteriori (MAP) probability
inference problem, apply the principle of maximum entropy to derive an analytic
demonstration likelihood model and give an efficient approach to search for the
most likely specification in a large candidate pool of specifications. In our
experiments, we demonstrate how learning specifications can help avoid common
problems that often arise due to ad-hoc reward composition.
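  A hedged sketch of the MAP search over a candidate pool, assuming each
candidate exposes hypothetical holds(trace) and prior attributes; the paper's
analytic likelihood model is richer than the satisfaction-rate surrogate used
here:

import numpy as np

def map_specification(demos, candidates, beta=5.0):
    # Score each candidate Boolean specification by a maximum-entropy
    # style demonstration likelihood (empirical satisfaction rate at
    # inverse temperature beta) combined with a prior, and return the
    # maximum a posteriori candidate.
    def log_posterior(spec):
        sat_rate = np.mean([spec.holds(trace) for trace in demos])
        return beta * sat_rate + np.log(spec.prior)
    return max(candidates, key=log_posterior)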
|
Marcell Vazquez-Chanlatte, Susmit Jha, Ashish Tiwari, Mark K. Ho,
Sanjit A. Seshia
| null |
1710.03875
| null | null |
PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement
Learning and Sampling-based Planning
|
cs.AI cs.LG cs.RO
|
We present PRM-RL, a hierarchical method for long-range navigation task
completion that combines sampling-based path planning with reinforcement
learning (RL). The RL agents learn short-range, point-to-point navigation
policies that capture robot dynamics and task constraints without knowledge of
the large-scale topology. Next, the sampling-based planners provide roadmaps
which connect robot configurations that can be successfully navigated by the RL
agent. The same RL agents are used to control the robot under the direction of
the planning, enabling long-range navigation. We use Probabilistic Roadmaps
(PRMs) as the sampling-based planner.
feature-based and deep neural net policies in continuous state and action
spaces. We evaluate PRM-RL, both in simulation and on-robot, on two navigation
tasks with non-trivial robot dynamics: end-to-end differential drive indoor
navigation in office environments, and aerial cargo delivery in urban
environments with load displacement constraints. Our results show improvement
in task completion over both RL agents on their own and traditional
sampling-based planners. In the indoor navigation task, PRM-RL successfully
completes up to 215 m long trajectories under noisy sensor conditions, and the
aerial cargo delivery completes flights over 1000 m without violating the task
constraints in an environment 63 million times larger than used in training.
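  A minimal sketch of the roadmap-construction step, with rl_can_navigate
standing in (as an assumption) for a Monte Carlo rollout check of the learned
point-to-point policy:

import numpy as np

def build_prm_rl_roadmap(samples, rl_can_navigate, k=5):
    # Probabilistic roadmap whose edges are admitted only when the
    # short-range RL policy can navigate between the two sampled
    # configurations, so every roadmap edge is executable by the agent.
    edges = []
    for i, q in enumerate(samples):
        dists = np.linalg.norm(samples - q, axis=1)
        for j in np.argsort(dists)[1:k + 1]:    # k nearest neighbours
            if rl_can_navigate(q, samples[j]):  # rollout-based edge check
                edges.append((i, int(j)))
    return edges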
|
Aleksandra Faust, Oscar Ramirez, Marek Fiser, Kenneth Oslund, Anthony
Francis, James Davidson, and Lydia Tapia
| null |
1710.03937
| null | null |
When is Network Lasso Accurate: The Vector Case
|
cs.LG
|
A recently proposed learning algorithm for massive network-structured data
sets (big data over networks) is the network Lasso (nLasso), which extends the
well-known Lasso estimator from sparse models to network-structured datasets.
Efficient implementations of the nLasso have been presented using modern convex
optimization methods. In this paper, we provide sufficient conditions on the
network structure and available label information such that nLasso accurately
learns a vector-valued graph signal (representing label information) from the
information provided by the labels of a few data points.
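  For reference, a sketch of the objective being analyzed, written for
vector-valued node signals; labeled, edges, and weights describe the sampling
set and the graph (our notation, following the usual nLasso formulation):

import numpy as np

def nlasso_objective(X, y, labeled, edges, weights, lam=1.0):
    # Squared error on the labeled nodes plus a weighted sum of unsquared
    # 2-norm differences across graph edges, a TV-like penalty that
    # favours piecewise-constant vector-valued graph signals.
    fit = sum(np.sum((X[i] - y[i]) ** 2) for i in labeled)
    tv = sum(w * np.linalg.norm(X[i] - X[j])
             for (i, j), w in zip(edges, weights))
    return fit + lam * tv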
|
Nguyen Tran, Saeed Basirian, Alexander Jung
| null |
1710.03942
| null | null |
Adaptive multi-penalty regularization based on a generalized Lasso path
|
stat.ML cs.LG math.NA
|
For many algorithms, parameter tuning remains a challenging and critical
task, which becomes tedious and infeasible in a multi-parameter setting.
Multi-penalty regularization, successfully used for solving underdetermined
sparse regression problems of unmixing type, where signal and noise are
additively mixed, is one such example. In this paper, we propose a novel
algorithmic
framework for an adaptive parameter choice in multi-penalty regularization with
a focus on the correct support recovery. Building upon the theory of
regularization paths and algorithms for single-penalty functionals, we extend
these ideas to a multi-penalty framework by providing an efficient procedure
for the construction of regions containing structurally similar solutions,
i.e., solutions with the same sparsity and sign pattern, over the whole range
of parameters. Combining this with a model selection criterion, we can choose
regularization parameters in a data-adaptive manner. Another advantage of our
algorithm is that it provides an overview on the solution stability over the
whole range of parameters. This can be further exploited to obtain additional
insights into the problem of interest. We provide a numerical analysis of our
method and compare it to the state-of-the-art single-penalty algorithms for
compressed sensing problems in order to demonstrate the robustness and power of
the proposed algorithm.
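  A single-penalty sketch of the region-construction idea, grouping
regularization parameters by the sparsity and sign pattern of the solution
(the paper's algorithm extends this to the multi-penalty case with an
efficient path-following construction):

import numpy as np
from sklearn.linear_model import Lasso

def support_regions(X, y, alphas):
    # Group regularization parameters by the sign pattern of the fitted
    # coefficients; contiguous groups delimit regions of structurally
    # similar solutions over the parameter range.
    regions = {}
    for a in alphas:
        coef = Lasso(alpha=a, fit_intercept=False,
                     max_iter=10000).fit(X, y).coef_
        pattern = tuple(np.sign(coef).astype(int))
        regions.setdefault(pattern, []).append(float(a))
    return regions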
|
Markus Grasmair, Timo Klock, and Valeriya Naumova
| null |
1710.03971
| null | null |