title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---
A Grassmannian Approach to Zero-Shot Learning for Network Intrusion
Detection
|
cs.CR cs.LG
|
One of the main problems in Network Intrusion Detection comes from the constant
rise of new attacks, for which not enough labeled examples are available.
Traditional Machine Learning approaches hardly address such a problem. It can
be overcome with Zero-Shot Learning, a new approach in the field of Computer
Vision, which can be described in two stages: the Attribute Learning (AL) stage
and the Inference Stage. The goal of this paper is to
propose a new Inference Stage algorithm for Network Intrusion Detection. In
order to attain this objective, we first put forward an experimental setup
for the evaluation of the Zero-Shot Learning in Network Intrusion Detection
related tasks. Secondly, a decision tree based algorithm is applied to extract
rules for generating the attributes in the AL stage. Finally, using a
representation of a Zero-Shot Class as a point in the Grassmann manifold, an
explicit formula for the shortest distance between points in that manifold can
be used to compute the geodesic distance between the Zero-Shot Classes which
represent the new attacks and the Known Classes corresponding to the attack
categories. The experimental results in the datasets KDD Cup 99 and NSL-KDD
show that our approach with Zero-Shot Learning successfully addresses the
Network Intrusion Detection problem.
|
Jorge Rivero, Bernardete Ribeiro, Ning Chen, F\'atima Silva Leite
| null |
1709.07984
| null | null |
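The geodesic-distance step described above can be made concrete. A minimal sketch, assuming each class (zero-shot or known) is represented by an orthonormal basis of a subspace, i.e., a point on the Grassmann manifold; the geodesic distance is then the 2-norm of the principal angles between the subspaces. The random class representations below are placeholders, not the paper's attribute construction.

```python
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_geodesic_distance(A, B):
    """Geodesic distance between subspaces span(A) and span(B).

    A, B: (n, k) matrices whose columns form orthonormal bases of the
    subspaces; the distance is the 2-norm of the principal angles.
    """
    thetas = subspace_angles(A, B)  # principal angles in [0, pi/2]
    return np.linalg.norm(thetas)

# Toy stand-ins for a Zero-Shot Class and a Known Class, each encoded
# as a 3-dimensional subspace of a 10-dimensional feature space.
rng = np.random.default_rng(0)
zero_shot_class = np.linalg.qr(rng.normal(size=(10, 3)))[0]
known_class = np.linalg.qr(rng.normal(size=(10, 3)))[0]
print(grassmann_geodesic_distance(zero_shot_class, known_class))
```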
Deep Learning for Secure Mobile Edge Computing
|
cs.CR cs.LG cs.NI
|
Mobile edge computing (MEC) is a promising approach for enabling
cloud-computing capabilities at the edge of cellular networks. Nonetheless,
security is becoming an increasingly important issue in MEC-based applications.
In this paper, we propose a deep-learning-based model to detect security
threats. The model uses unsupervised learning to automate the detection
process, and uses location information as an important feature to improve the
performance of detection. Our proposed model can be used to detect malicious
applications at the edge of a cellular network, which is a serious security
threat. Extensive experiments are carried out with 10 different datasets; the
results illustrate that our deep-learning-based model achieves an average gain
of 6% in accuracy compared with state-of-the-art machine learning algorithms.
|
Yuanfang Chen, Yan Zhang, Sabita Maharjan
| null |
1709.08025
| null | null |
Statistical Parametric Speech Synthesis Incorporating Generative
Adversarial Networks
|
cs.SD cs.LG eess.AS
|
A method for statistical parametric speech synthesis incorporating generative
adversarial networks (GANs) is proposed. Although powerful deep neural network
(DNN) techniques can be applied to artificially synthesize speech waveforms,
the quality of synthetic speech is low compared with that of natural speech. One
of the issues causing the quality degradation is an over-smoothing effect often
observed in the generated speech parameters. A GAN introduced in this paper
consists of two neural networks: a discriminator to distinguish natural and
generated samples, and a generator to deceive the discriminator. In the
proposed framework incorporating the GANs, the discriminator is trained to
distinguish natural and generated speech parameters, while the acoustic models
are trained to minimize the weighted sum of the conventional minimum generation
loss and an adversarial loss for deceiving the discriminator. Since the
objective of the GANs is to minimize the divergence (i.e., distribution
difference) between the natural and generated speech parameters, the proposed
method effectively alleviates the over-smoothing effect on the generated speech
parameters. We evaluated the effectiveness for text-to-speech and voice
conversion, and found that the proposed method can generate more natural
spectral parameters and $F_0$ than the conventional minimum generation error
training algorithm, regardless of its hyper-parameter settings. Furthermore, we
investigated the effect of the divergence of various GANs, and found that a
Wasserstein GAN minimizing the Earth-Mover's distance works the best in terms
of improving synthetic speech quality.
|
Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari
| null |
1709.08041
| null | null |
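A minimal sketch of the acoustic-model objective described above: the weighted sum of a conventional minimum generation loss and an adversarial loss for deceiving the discriminator. The MSE generation loss, the weight `w_adv`, and the discriminator interface are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def acoustic_model_loss(generated, natural, discriminator, w_adv=1.0):
    """Weighted sum of a generation loss and an adversarial loss.

    generated, natural: batches of speech-parameter frames.
    discriminator: module mapping parameters -> P(sample is natural).
    """
    # Conventional minimum generation (here simply MSE) loss.
    l_mge = F.mse_loss(generated, natural)
    # Adversarial loss: reward generated parameters that the
    # discriminator labels as natural (target label 1).
    d_out = discriminator(generated)
    l_adv = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    return l_mge + w_adv * l_adv
```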
Feature-based time-series analysis
|
cs.LG
|
This work presents an introduction to feature-based time-series analysis. The
time series as a data type is first described, along with an overview of the
interdisciplinary time-series analysis literature. I then summarize the range
of feature-based representations for time series that have been developed to
aid interpretable insights into time-series structure. Particular emphasis is
given to emerging research that facilitates wide comparison of feature-based
representations, allowing us to understand which properties of a time-series
dataset make it suited to a particular feature-based representation or analysis
algorithm. The future of time-series analysis is likely to embrace
approaches that exploit machine learning methods to partially automate human
learning to aid understanding of the complex dynamical patterns in the time
series we measure from the world.
|
Ben D. Fulcher
| null |
1709.08055
| null | null |
Cross-modal Recurrent Models for Weight Objective Prediction from
Multimodal Time-series Data
|
stat.ML cs.AI cs.LG q-bio.QM
|
We analyse multimodal time-series data corresponding to weight, sleep and
steps measurements. We focus on predicting whether a user will successfully
achieve his/her weight objective. For this, we design several deep long
short-term memory (LSTM) architectures, including a novel cross-modal LSTM
(X-LSTM), and demonstrate their superiority over baseline approaches. The
X-LSTM improves parameter efficiency by processing each modality separately and
allowing for information flow between them by way of recurrent
cross-connections. We present a general hyperparameter optimisation technique
for X-LSTMs, which allows us to significantly improve on the LSTM and a prior
state-of-the-art cross-modal approach, using a comparable number of parameters.
Finally, we visualise the model's predictions, revealing implications about
latent variables in this task.
|
Petar Veli\v{c}kovi\'c, Laurynas Karazija, Nicholas D. Lane, Sourav
Bhattacharya, Edgar Liberis, Pietro Li\`o, Angela Chieh, Otmane Bellahsen,
Matthieu Vegreville
|
10.1145/3240925.3240937
|
1709.08073
| null | null |
GP-SUM. Gaussian Processes Filtering of non-Gaussian Beliefs
|
cs.RO cs.LG stat.ML
|
This work studies the problem of stochastic dynamic filtering and state
propagation with complex beliefs. The main contribution is GP-SUM, a filtering
algorithm tailored to dynamic systems and observation models expressed as
Gaussian Processes (GP), and to states represented as a weighted sum of
Gaussians. The key attribute of GP-SUM is that it does not rely on
linearizations of the dynamic or observation models, or on unimodal Gaussian
approximations of the belief, and hence enables tracking of complex state
distributions. The algorithm can be seen as a combination of a sampling-based
filter with a probabilistic Bayes filter. On the one hand, GP-SUM operates by
sampling the state distribution and propagating each sample through the dynamic
system and observation models. On the other hand, it achieves effective
sampling and accurate probabilistic propagation by relying on the GP form of
the system, and the sum-of-Gaussian form of the belief. We show that GP-SUM
outperforms several GP-Bayes and Particle Filters on a standard benchmark. We
also demonstrate its use in a pushing task, predicting with experimental
accuracy the naturally occurring non-Gaussian distributions.
|
Maria Bauza, Alberto Rodriguez
| null |
1709.0812
| null | null |
Self-supervised learning: When is fusion of the primary and secondary
sensor cue useful?
|
cs.RO cs.AI cs.LG
|
Self-supervised learning (SSL) is a reliable learning mechanism in which a
robot enhances its perceptual capabilities. Typically, in SSL a trusted,
primary sensor cue provides supervised training data to a secondary sensor cue.
In this article, a theoretical analysis is performed on the fusion of the
primary and secondary cue in a minimal model of SSL. A proof is provided that
determines the specific conditions under which it is favorable to perform
fusion. In short, it is favorable when (i) the prior on the target value is
strong or (ii) the secondary cue is sufficiently accurate. The theoretical
findings are validated with computational experiments. Subsequently, a
real-world case study is performed to investigate if fusion in SSL is also
beneficial when assumptions of the minimal model are not met. In particular, a
flying robot learns to map pressure measurements to sonar height measurements
and then fuses the two, resulting in better height estimation. Fusion is also
beneficial in the opposite case, when pressure is the primary cue. The analysis
and results are encouraging to study SSL fusion also for other robots and
sensors.
|
G.C.H.E. de Croon
| null |
1709.08126
| null | null |
Function approximation with zonal function networks with activation
functions analogous to the rectified linear unit functions
|
cs.LG math.NA
|
A zonal function (ZF) network on the $q$ dimensional sphere $\mathbb{S}^q$ is
a network of the form $\mathbf{x}\mapsto \sum_{k=1}^n
a_k\phi(\mathbf{x}\cdot\mathbf{x}_k)$ where $\phi :[-1,1]\to\mathbf{R}$ is the
activation function, $\mathbf{x}_k\in\mathbb{S}^q$ are the centers, and
$a_k\in\mathbb{R}$. While the approximation properties of such networks are
well studied in the context of positive definite activation functions, recent
interest in deep and shallow networks motivates the study of activation
functions of the form $\phi(t)=|t|$, which are not positive definite. In this
paper, we define an appropriate smoothness class and establish approximation
properties of such networks for functions in this class. The centers can be
chosen independently of the target function, and the coefficients are linear
combinations of the training data. The constructions preserve rotational
symmetries.
|
Hrushikesh N. Mhaskar
| null |
1709.08174
| null | null |
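The zonal function network defined above is straightforward to evaluate; a minimal NumPy sketch with the non-positive-definite activation $\phi(t)=|t|$. The random centers and coefficients are illustrative only; the paper constructs the coefficients as linear combinations of training data.

```python
import numpy as np

def zf_network(x, centers, coeffs, phi=np.abs):
    """Evaluate a zonal function network on the sphere S^q.

    x:       (q+1,) unit vector, the evaluation point.
    centers: (n, q+1) unit vectors x_k on the sphere.
    coeffs:  (n,) real coefficients a_k.
    Returns  sum_k a_k * phi(x . x_k).
    """
    return coeffs @ phi(centers @ x)

# Toy example on S^2 (q = 2) with n = 5 random centers.
rng = np.random.default_rng(1)
centers = rng.normal(size=(5, 3))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
coeffs = rng.normal(size=5)
x = np.array([0.0, 0.0, 1.0])
print(zf_network(x, centers, coeffs))
```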
An Optimal Online Method of Selecting Source Policies for Reinforcement
Learning
|
cs.AI cs.LG stat.ML
|
Transfer learning significantly accelerates the reinforcement learning
process by exploiting relevant knowledge from previous experiences. The problem
of optimally selecting source policies during the learning process is important
yet challenging, and there has been little theoretical analysis of this
problem. In this paper, we develop an optimal online method to select source
policies for reinforcement learning. This method formulates online source
policy selection as a multi-armed bandit problem and augments Q-learning with
policy reuse. We provide theoretical guarantees of the optimal selection
process and convergence to the optimal policy. In addition, we conduct
experiments on a grid-based robot navigation domain to demonstrate its
efficiency and robustness by comparing to the state-of-the-art transfer
learning method.
|
Siyuan Li and Chongjie Zhang
| null |
1709.08201
| null | null |
Learning crystal plasticity using digital image correlation: Examples
from discrete dislocation dynamics
|
cond-mat.mtrl-sci cond-mat.mes-hall cond-mat.stat-mech cs.CE cs.LG
|
Digital image correlation (DIC) is a well-established, non-invasive technique
for tracking and quantifying the deformation of mechanical samples under
strain. While it provides an obvious way to observe incremental and aggregate
displacement information, it seems likely that DIC data sets, which after all
reflect the spatially-resolved response of a microstructure to loads, contain
much richer information than has generally been extracted from them. In this
paper, we demonstrate a machine-learning approach to quantifying the prior
deformation history of a crystalline sample based on its response to a
subsequent DIC test. This prior deformation history is encoded in the
microstructure through the inhomogeneity of the dislocation microstructure, and
in the spatial correlations of the dislocation patterns, which mediate the
system's response to the DIC test load. Our domain consists of deformed
crystalline thin films generated by a discrete dislocation plasticity
simulation. We explore the range of applicability of machine learning (ML) for
typical experimental protocols, and as a function of possible size effects and
stochasticity. Plasticity size effects may directly influence the data,
rendering unsupervised techniques unable to distinguish different plasticity
regimes.
|
Stefanos Papanikolaou, Michail Tzimas, Andrew C.E. Reid and Stephen A.
Langer
| null |
1709.08225
| null | null |
HDLTex: Hierarchical Deep Learning for Text Classification
|
cs.LG cs.AI cs.CL cs.CV cs.IR
|
The continually increasing number of documents produced each year
necessitates ever-improving information processing methods for searching,
retrieving, and organizing text. Central to these information processing
methods is document classification, which has become an important application
for supervised learning. Recently the performance of these traditional
classifiers has degraded as the number of documents has increased. This is
because the growth in the number of documents has been accompanied by an
increase in the number of categories. This paper approaches this problem differently
from current document classification methods that view the problem as
multi-class classification. Instead we perform hierarchical classification
using an approach we call Hierarchical Deep Learning for Text classification
(HDLTex). HDLTex employs stacks of deep learning architectures to provide
specialized understanding at each level of the document hierarchy.
|
Kamran Kowsari, Donald E. Brown, Mojtaba Heidarysafa, Kiana Jafari
Meimandi, Matthew S. Gerber, Laura E. Barnes
|
10.1109/ICMLA.2017.0-134
|
1709.08267
| null | null |
Learning Graph-Structured Sum-Product Networks for Probabilistic
Semantic Maps
|
cs.LG
|
We introduce Graph-Structured Sum-Product Networks (GraphSPNs), a
probabilistic approach to structured prediction for problems where dependencies
between latent variables are expressed in terms of arbitrary, dynamic graphs.
While many approaches to structured prediction place strict constraints on the
interactions between inferred variables, many real-world problems can only be
characterized using complex graph structures of varying size, often
contaminated with noise when obtained from real data. Here, we focus on one
such problem in the domain of robotics. We demonstrate how GraphSPNs can be
used to bolster inference about semantic, conceptual place descriptions using
noisy topological relations discovered by a robot exploring large-scale office
spaces. Through experiments, we show that GraphSPNs consistently outperform the
traditional approach based on undirected graphical models, successfully
disambiguating information in global semantic maps built from uncertain, noisy
local evidence. We further exploit the probabilistic nature of the model to
infer marginal distributions over semantic descriptions of as yet unexplored
places and detect spatial environment configurations that are novel and
incongruent with the known evidence.
|
Kaiyu Zheng, Andrzej Pronobis, Rajesh P. N. Rao
| null |
1709.08274
| null | null |
Underwater Multi-Robot Convoying using Visual Tracking by Detection
|
cs.RO cs.AI cs.LG
|
We present a robust multi-robot convoying approach that relies on visual
detection of the leading agent, thus enabling target following in unstructured
3-D environments. Our method is based on the idea of tracking-by-detection,
which interleaves efficient model-based object detection with temporal
filtering of image-based bounding box estimation. This approach has the
important advantage of mitigating tracking drift (i.e. drifting away from the
target object), which is a common symptom of model-free trackers and is
detrimental to sustained convoying in practice. To illustrate our solution, we
collected extensive footage of an underwater robot in ocean settings, and
hand-annotated its location in each frame. Based on this dataset, we present an
empirical comparison of multiple tracker variants, including the use of several
convolutional neural networks, both with and without recurrent connections, as
well as frequency-based model-free trackers. We also demonstrate the
practicality of this tracking-by-detection strategy in real-world scenarios by
successfully controlling a legged underwater robot in five degrees of freedom
to follow another robot's independent motion.
|
Florian Shkurti, Wei-Di Chang, Peter Henderson, Md Jahidul Islam, Juan
Camilo Gamboa Higuera, Jimmy Li, Travis Manderson, Anqi Xu, Gregory Dudek,
Junaed Sattar
| null |
1709.08292
| null | null |
Learning Context-Sensitive Convolutional Filters for Text Processing
|
cs.CL cs.LG stat.ML
|
Convolutional neural networks (CNNs) have recently emerged as a popular
building block for natural language processing (NLP). Despite their success,
most existing CNN models employed in NLP share the same learned (and static)
set of filters for all input sentences. In this paper, we consider an approach
of using a small meta network to learn context-sensitive convolutional filters
for text processing. The role of the meta network is to abstract the contextual
information of a sentence or document into a set of input-aware filters. We
further generalize this framework to model sentence pairs, where a
bidirectional filter generation mechanism is introduced to encapsulate
co-dependent sentence representations. In our benchmarks on four different
tasks, including ontology classification, sentiment analysis, answer sentence
selection, and paraphrase identification, our proposed model, a modified CNN
with context-sensitive filters, consistently outperforms the standard CNN and
attention-based CNN baselines. By visualizing the learned context-sensitive
filters, we further validate and rationalize the effectiveness of the proposed
framework.
|
Dinghan Shen, Martin Renqiang Min, Yitong Li, Lawrence Carin
| null |
1709.08294
| null | null |
Non-iterative Label Propagation in Optimal Leading Forest
|
cs.LG cs.AI
|
Graph-based semi-supervised learning (GSSL) has an intuitive representation and
can be improved by exploiting matrix computation. However, it has to perform
iterative optimization to achieve a preset objective, which usually leads to
low efficiency. Another inconvenience of GSSL is that when new data arrive, the
graph construction and the optimization have to be conducted all over again. We
propose a sound assumption, arguing that the neighboring data
points are not in peer-to-peer relation, but in a partial-ordered relation
induced by the local density and distance between the data; and the label of a
center can be regarded as the contribution of its followers. Starting from the
assumption, we develop a highly efficient non-iterative label propagation
algorithm based on a novel data structure named the optimal leading forest
(LaPOLeaF). The major weaknesses of the traditional GSSL are addressed by this
study. We further scale LaPOLeaF to accommodate big data by utilizing block
distance matrix technique, parallel computing, and Locality-Sensitive Hashing
(LSH). Experiments on large datasets have shown the promising results of the
proposed methods.
|
Ji Xu and Guoyin Wang
| null |
1709.08426
| null | null |
Towards continuous control of flippers for a multi-terrain robot using
deep reinforcement learning
|
cs.RO cs.AI cs.LG
|
In this paper we focus on developing a control algorithm for multi-terrain
tracked robots with flippers using a reinforcement learning (RL) approach. The
work is based on the deep deterministic policy gradient (DDPG) algorithm,
proven to be very successful in simple simulation environments. The algorithm
works in an end-to-end fashion in order to control the continuous position of
the flippers. This end-to-end approach makes it easy to apply the controller to
a wide array of circumstances, but this flexibility comes at the cost of a
harder learning problem. The complexity of the task is increased further by the
fact that real multi-terrain robots move in partially observable environments.
Notwithstanding these complications, being able to smoothly control a
multi-terrain robot can produce huge benefits in the daily lives of impaired
people or in search and rescue situations.
|
Giuseppe Paolo, Lei Tai and Ming Liu
| null |
1709.0843
| null | null |
House Price Prediction Using LSTM
|
cs.LG stat.ML
|
In this paper, we use the house price data ranging from January 2004 to
October 2016 to predict the average house price of November and December in
2016 for each district in Beijing, Shanghai, Guangzhou and Shenzhen. We apply
an Autoregressive Integrated Moving Average (ARIMA) model to generate the
baseline and LSTM networks to build the prediction model. These algorithms are
compared in terms of Mean Squared Error. The results show that the LSTM model
has excellent properties for predicting time series. Stateful LSTM networks and
stacked LSTM networks are also employed to further improve the accuracy of the
house price prediction model.
|
Xiaochen Chen, Lai Wei, Jiaxin Xu
| null |
1709.08432
| null | null |
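A minimal sketch of the kind of LSTM prediction model described above, trained on sliding windows of a monthly price series and compared by mean squared error. The window length, layer sizes, and synthetic series are assumptions, not the paper's data or settings.

```python
import numpy as np
from tensorflow import keras

WINDOW = 12  # use 12 months of history to predict the next month

# Synthetic stand-in for one district's monthly average price series.
prices = np.cumsum(np.random.default_rng(2).normal(size=156)) + 100.0
X = np.stack([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])
y = prices[WINDOW:]
X = X[..., None]  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.Input(shape=(WINDOW, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.evaluate(X, y, verbose=0))  # in-sample MSE
```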
An efficient clustering algorithm from the measure of local Gaussian
distribution
|
cs.DB cs.LG
|
In this paper, I introduce a fast, novel clustering algorithm based on the
Gaussian distribution that can guarantee a minimum separation between cluster
centroids, given as a parameter $d_s$. The worst-case run-time complexity of
this algorithm is approximately $O(T\times N \times \log(N))$, where $T$ is the
number of iteration steps and $N$ is the number of features.
|
Yuan-Yen Tai
| null |
1709.0847
| null | null |
J-MOD$^{2}$: Joint Monocular Obstacle Detection and Depth Estimation
|
cs.LG cs.RO
|
In this work, we propose an end-to-end deep architecture that jointly learns
to detect obstacles and estimate their depth for MAV flight applications. Most
of the existing approaches either rely on Visual SLAM systems or on depth
estimation models to build 3D maps and detect obstacles. However, for the task
of avoiding obstacles this level of complexity is not required. Recent works
have proposed multi-task architectures to both perform scene understanding and
depth estimation. We follow their track and propose a specific architecture to
jointly estimate depth and obstacles, without the need to compute a global map,
but maintaining compatibility with a global SLAM system if needed. The network
architecture is devised to exploit the joint information of the obstacle
detection task, that produces more reliable bounding boxes, with the depth
estimation one, increasing the robustness of both to scenario changes. We call
this architecture J-MOD$^{2}$. We test the effectiveness of our approach with
experiments on sequences with different appearance and focal lengths and
compare it to state-of-the-art multi-task methods that jointly perform semantic
segmentation and depth estimation. In addition, we show the integration in a
full system using a set of simulated navigation experiments where a MAV
explores an unknown scenario and plans safe trajectories by using our detection
model.
|
Michele Mancini, Gabriele Costante, Paolo Valigi and Thomas A.
Ciarfuglia
|
10.1109/LRA.2018.2800083
|
1709.0848
| null | null |
Enhanced Quantum Synchronization via Quantum Machine Learning
|
quant-ph cond-mat.mes-hall cs.AI cs.LG stat.ML
|
We study the quantum synchronization between a pair of two-level systems
inside two coupled cavities. By using a digital-analog decomposition of the
master equation that rules the system dynamics, we show that this approach
leads to quantum synchronization between both two-level systems. Moreover, we
can identify in this digital-analog block decomposition the fundamental
elements of a quantum machine learning protocol, in which the agent and the
environment (learning units) interact through a mediating system, namely, the
register. If we can additionally equip this algorithm with a classical feedback
mechanism, which consists of projective measurements in the register,
reinitialization of the register state and local conditional operations on the
agent and environment subspace, a powerful and flexible quantum machine
learning protocol emerges. Indeed, numerical simulations show that this
protocol enhances the synchronization process, even when every subsystem
experiences different loss/decoherence mechanisms, and gives us the flexibility
to choose the synchronization state. Finally, we propose an implementation
based on current technologies in superconducting circuits.
|
F. A. C\'ardenas-L\'opez, M. Sanz, J. C. Retamal, E. Solano
|
10.1002/qute.201800076
|
1709.08519
| null | null |
Predictive-State Decoders: Encoding the Future into Recurrent Networks
|
stat.ML cs.LG
|
Recurrent neural networks (RNNs) are a vital modeling technique that rely on
internal states learned indirectly by optimization of a supervised,
unsupervised, or reinforcement training loss. RNNs are used to model dynamic
processes that are characterized by underlying latent states whose form is
often unknown, precluding their analytic representation inside an RNN. In the
Predictive-State Representation (PSR) literature, latent state processes are
modeled by an internal state representation that directly models the
distribution of future observations, and most recent work in this area has
relied on explicitly representing and targeting sufficient statistics of this
probability distribution. We seek to combine the advantages of RNNs and PSRs by
augmenting existing state-of-the-art recurrent neural networks with
Predictive-State Decoders (PSDs), which add supervision to the network's
internal state representation to target predicting future observations.
Predictive-State Decoders are simple to implement and easily incorporated into
existing training pipelines via additional loss regularization. We demonstrate
the effectiveness of PSDs with experimental results in three different domains:
probabilistic filtering, Imitation Learning, and Reinforcement Learning. In
each, our method improves statistical performance of state-of-the-art recurrent
baselines and does so with fewer iterations and less data.
|
Arun Venkatraman, Nicholas Rhinehart, Wen Sun, Lerrel Pinto, Martial
Hebert, Byron Boots, Kris M. Kitani, J. Andrew Bagnell
| null |
1709.0852
| null | null |
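The Predictive-State Decoder idea above reduces to one extra supervised head on the RNN's internal state, trained to predict the next $k$ observations and added to the task loss as a regularizer. A minimal PyTorch sketch; the GRU backbone, horizon `k`, and weight `lam` are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PSDRegularizedRNN(nn.Module):
    """RNN whose hidden state is also decoded into future observations."""

    def __init__(self, obs_dim, hidden_dim, k=3):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.task_head = nn.Linear(hidden_dim, obs_dim)
        # Predictive-state decoder: hidden state -> next k observations.
        self.psd = nn.Linear(hidden_dim, k * obs_dim)
        self.k = k

    def forward(self, obs):
        hidden, _ = self.rnn(obs)           # (batch, T, hidden_dim)
        return self.task_head(hidden), self.psd(hidden)

def psd_loss(model, obs, task_targets, lam=0.5):
    task_out, future_pred = model(obs)
    task_loss = nn.functional.mse_loss(task_out, task_targets)
    # Supervise the hidden state at time t with observations t+1 .. t+k.
    B, T, D = obs.shape
    k = model.k
    future = torch.stack([obs[:, t + 1:t + 1 + k].reshape(B, -1)
                          for t in range(T - k)], dim=1)
    psd_l = nn.functional.mse_loss(future_pred[:, :T - k], future)
    return task_loss + lam * psd_l
```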
Generative learning for deep networks
|
cs.LG cs.CV cs.NE stat.ML
|
Generative learning, which takes into account the full distribution of the
data, is not feasible with deep neural networks (DNNs) because they model
only the conditional distribution of the outputs given the inputs. Current
solutions are either based on joint probability models facing difficult
estimation problems or learn two separate networks, mapping inputs to outputs
(recognition) and vice-versa (generation). We propose an intermediate approach.
First, we show that forward computation in DNNs with logistic sigmoid
activations corresponds to a simplified approximate Bayesian inference in a
directed probabilistic multi-layer model. This connection allows us to
interpret a DNN as a probabilistic model of the output and all hidden units
given the
input. Second, we propose that in order for the recognition and generation
networks to be more consistent with the joint model of the data, weights of the
recognition and generator network should be related by transposition. We
demonstrate in a tentative experiment that such a coupled pair can be learned
generatively, modelling the full distribution of the data, and has enough
capacity to perform well in both recognition and generation.
|
Boris Flach, Alexander Shekhovtsov, Ondrej Fikar
| null |
1709.08524
| null | null |
Analytic solution and stationary phase approximation for the Bayesian
lasso and elastic net
|
stat.ME cs.LG math.ST q-bio.QM stat.TH
|
The lasso and elastic net linear regression models impose a
double-exponential prior distribution on the model parameters to achieve
regression shrinkage and variable selection, allowing the inference of robust
models from large data sets. However, there has been limited success in
deriving estimates for the full posterior distribution of regression
coefficients in these models, due to a need to evaluate analytically
intractable partition function integrals. Here, the Fourier transform is used
to express these integrals as complex-valued oscillatory integrals over
"regression frequencies". This results in an analytic expansion and stationary
phase approximation for the partition functions of the Bayesian lasso and
elastic net, where the non-differentiability of the double-exponential prior
has so far eluded such an approach. Use of this approximation leads to highly
accurate numerical estimates for the expectation values and marginal posterior
distributions of the regression coefficients, and allows for Bayesian inference
of much higher dimensional models than previously possible.
|
Tom Michoel
| null |
1709.08535
| null | null |
The Consciousness Prior
|
cs.LG cs.AI stat.ML
|
A new prior is proposed for learning representations of high-level concepts
of the kind we manipulate with language. This prior can be combined with other
priors in order to help disentangle abstract factors from each other. It is
inspired by cognitive neuroscience theories of consciousness, seen as a
bottleneck through which just a few elements, after having been selected by
attention from a broader pool, are then broadcast and condition further
processing, both in perception and decision-making. The set of recently
selected elements one becomes aware of is seen as forming a low-dimensional
conscious state. This conscious state combines the few concepts
constituting a conscious thought, i.e., what one is immediately conscious of at
a particular moment. We claim that this architectural and
information-processing constraint corresponds to assumptions about the joint
distribution between high-level concepts. To the extent that these assumptions
are generally true (and the form of natural language seems consistent with
them), they can form a useful prior for representation learning. A
low-dimensional thought or conscious state is analogous to a sentence: it
involves only a few variables and yet can make a statement with very high
probability of being true. This is consistent with a joint distribution (over
high-level concepts) which has the form of a sparse factor graph, i.e., where
the dependencies captured by each factor of the factor graph involve only very
few variables while creating a strong dip in the overall energy function. The
consciousness prior also makes it natural to map conscious states to natural
language utterances or to express classical AI knowledge in a form similar to
facts and rules, albeit capturing uncertainty as well as efficient search
mechanisms implemented by attention mechanisms.
|
Yoshua Bengio
| null |
1709.08568
| null | null |
EZLearn: Exploiting Organic Supervision in Large-Scale Data Annotation
|
cs.CL cs.LG
|
Many real-world applications require automated data annotation, such as
identifying tissue origins based on gene expressions and classifying images
into semantic categories. Annotation classes are often numerous and subject to
changes over time, and annotating examples has become the major bottleneck for
supervised learning methods. In science and other high-value domains, large
repositories of data samples are often available, together with two sources of
organic supervision: a lexicon for the annotation classes, and text
descriptions that accompany some data samples. Distant supervision has emerged
as a promising paradigm for exploiting such indirect supervision by
automatically annotating examples where the text description contains a class
mention in the lexicon. However, due to linguistic variations and ambiguities,
such training data is inherently noisy, which limits the accuracy of this
approach. In this paper, we introduce an auxiliary natural language processing
system for the text modality, and incorporate co-training to reduce noise and
augment signal in distant supervision. Without using any manually labeled data,
our EZLearn system learned to accurately annotate data samples in functional
genomics and scientific figure comprehension, substantially outperforming
state-of-the-art supervised methods trained on tens of thousands of annotated
examples.
|
Maxim Grechkin, Hoifung Poon, Bill Howe
| null |
1709.086
| null | null |
Towards automation of data quality system for CERN CMS experiment
|
physics.data-an cs.AI cs.LG hep-ex
|
Daily operation of a large-scale experiment is a challenging task,
particularly from the perspective of routine monitoring of the quality of the
data being taken. We describe an approach that uses Machine Learning for an
automated data quality monitoring system, based on partial use of data qualified
manually by detector experts. The system automatically classifies marginal
cases of both good and bad data, and uses human expert decisions to classify
the remaining "grey area" cases.
This study uses collision data collected by the CMS experiment at the LHC in
2010. We demonstrate that the proposed workflow is able to automatically
process at least 20\% of samples without noticeable degradation of the result.
|
Maxim Borisyak, Fedor Ratnikov, Denis Derkach and Andrey Ustyuzhanin
|
10.1088/1742-6596/898/9/092041
|
1709.08607
| null | null |
Long Text Generation via Adversarial Training with Leaked Information
|
cs.CL cs.AI cs.LG
|
Automatically generating coherent and semantically meaningful text has many
applications in machine translation, dialogue systems, image captioning, etc.
Recently, by combining with policy gradient, Generative Adversarial Nets (GAN)
that use a discriminative model to guide the training of the generative model
as a reinforcement learning policy has shown promising results in text
generation. However, the scalar guiding signal is only available after the
entire text has been generated and lacks intermediate information about text
structure during the generative process. As such, its success is limited when
the generated text samples are long (more than 20 words). In this
paper, we propose a new framework, called LeakGAN, to address the problem for
long text generation. We allow the discriminative net to leak its own
high-level extracted features to the generative net to further help the
guidance. The generator incorporates such informative signals into all
generation steps through an additional Manager module, which takes the
extracted features of current generated words and outputs a latent vector to
guide the Worker module for next-word generation. Our extensive experiments on
synthetic data and various real-world tasks with Turing test demonstrate that
LeakGAN is highly effective in long text generation and also improves the
performance in short text generation scenarios. More importantly, without any
supervision, LeakGAN would be able to implicitly learn sentence structures only
through the interaction between Manager and Worker.
|
Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, Jun Wang
| null |
1709.08624
| null | null |
Glass-Box Program Synthesis: A Machine Learning Approach
|
cs.LG cs.AI stat.ML
|
Recently proposed models which learn to write computer programs from data use
either input/output examples or rich execution traces. Instead, we argue that a
novel alternative is to use a glass-box loss function, given as a program
itself that can be directly inspected. Glass-box optimization covers a wide
range of problems, from computing the greatest common divisor of two integers,
to learning-to-learn problems.
In this paper, we present an intelligent search system which learns, given
the partial program and the glass-box problem, the probabilities over the space
of programs. We empirically demonstrate that our informed search procedure
leads to significant improvements compared to brute-force program search, both
in terms of accuracy and time. For our experiments we use rich context free
grammars inspired by number theory, text processing, and algebra. Our results
show that (i) performing 4 rounds of our framework typically solves about 70%
of the target problems, (ii) our framework can improve itself even in
domain-agnostic scenarios, and (iii) it can solve problems that would otherwise
be too
slow to solve with brute-force search.
|
Konstantina Christakopoulou, Adam Tauman Kalai
| null |
1709.08669
| null | null |
Methodology and Results for the Competition on Semantic Similarity
Evaluation and Entailment Recognition for PROPOR 2016
|
cs.CL cs.LG
|
In this paper, we present the methodology and the results obtained by our
team, dubbed Blue Man Group, in the ASSIN (from the Portuguese {\it
Avalia\c{c}\~ao de Similaridade Sem\^antica e Infer\^encia Textual})
competition, held at PROPOR 2016\footnote{International Conference on the
Computational Processing of the Portuguese Language -
http://propor2016.di.fc.ul.pt/}. Our team's strategy consisted of evaluating
methods based on semantic word vectors, following two distinct directions: 1)
to make use of low-dimensional, compact, feature sets, and 2) deep
learning-based strategies dealing with high-dimensional feature vectors.
Evaluation results demonstrated that the first strategy was more promising, so
that the results from the second strategy have been discarded. As a result, by
considering the best run of each of the six teams, we have been able to achieve
the best accuracy and F1 values in entailment recognition, in the Brazilian
Portuguese set, and the best F1 score overall. In the semantic similarity task,
our team was ranked second in the Brazilian Portuguese set, and third
considering both sets.
|
Luciano Barbosa, Paulo R. Cavalin, Victor Guimaraes and Matthias
Kormaksson
| null |
1709.08694
| null | null |
Stochastic Nonconvex Optimization with Large Minibatches
|
cs.LG
|
We study stochastic optimization of nonconvex loss functions, which are
typical objectives for training neural networks. We propose stochastic
approximation algorithms which optimize a series of regularized, nonlinearized
losses on large minibatches of samples, using only first-order gradient
information. Our algorithms provably converge to an approximate critical point
of the expected objective with faster rates than minibatch stochastic gradient
descent, and facilitate better parallelization by allowing larger minibatches.
|
Weiran Wang, Nathan Srebro
| null |
1709.08728
| null | null |
Understanding a Version of Multivariate Symmetric Uncertainty to assist
in Feature Selection
|
cs.LG cs.IT math.IT stat.ML
|
In this paper, we analyze the behavior of the multivariate symmetric
uncertainty (MSU) measure through the use of statistical simulation techniques
under various mixes of informative and non-informative randomly generated
features. Experiments show how the number of attributes, their cardinalities,
and the sample size affect the MSU. We discovered a condition that preserves
good quality in the MSU under different combinations of these three factors,
providing a new useful criterion to help drive the process of dimension
reduction.
|
Gustavo Sosa-Cabrera, Miguel Garc\'ia-Torres, Santiago G\'omez,
Christian Schaerer, Federico Divina
| null |
1709.0873
| null | null |
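For reference, the pairwise symmetric uncertainty on which the MSU builds is $SU(X,Y) = 2\,I(X;Y)/(H(X)+H(Y))$. A sketch for discrete features follows; note that the multivariate extension analyzed in the paper generalizes the entropies to joint distributions over feature sets and is not reproduced here.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)) for discrete arrays."""
    mi = mutual_info_score(x, y)      # I(X; Y) in nats
    hx = entropy(np.bincount(x))      # H(X) in nats (counts normalized)
    hy = entropy(np.bincount(y))
    return 2.0 * mi / (hx + hy) if hx + hy > 0 else 0.0

rng = np.random.default_rng(3)
x = rng.integers(0, 4, size=1000)             # informative feature
y = (x + rng.integers(0, 2, size=1000)) % 4   # noisy copy of x
z = rng.integers(0, 4, size=1000)             # non-informative feature
print(symmetric_uncertainty(x, y), symmetric_uncertainty(x, z))
```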
A Deep Learning Model for Traffic Flow State Classification Based on
Smart Phone Sensor Data
|
cs.LG
|
This study proposes a Deep Belief Network model to classify traffic flow
states. The model is capable of processing massive, high-density, and
noise-contaminated data sets generated from smartphone sensors. The statistical
features of vehicle acceleration, angular acceleration, and GPS speed data,
recorded by smartphone software, are analyzed, and then used as input for
traffic flow state classification. Data from a five-day experiment is used to
train and test the proposed model. A total of 747,856 sets of data are
generated and used for both traffic flow state classification and sensitivity
analysis of input variables. The result shows that the proposed Deep Belief
Network model is superior to traditional machine learning methods in both
classification performance and computational efficiency.
|
Wenwen Tu, Feng Xiao, Liping Fu, Guangyuan Pan
| null |
1709.08802
| null | null |
Catching Anomalous Distributed Photovoltaics: An Edge-based Multi-modal
Anomaly Detection
|
cs.SY cs.CR cs.LG
|
A significant challenge in energy system cyber security is the current
inability to detect cyber-physical attacks targeting and originating from
distributed grid-edge devices such as photovoltaics (PV) panels, smart flexible
loads, and electric vehicles. We address this concern by designing and
developing a distributed, multi-modal anomaly detection approach that can sense
the health of the device and the electric power grid from the edge. This is
realized by exploiting unsupervised machine learning algorithms on multiple
sources of time-series data, fusing these multiple local observations and
flagging anomalies when a deviation from the normal behavior is observed.
We particularly focus on the cyber-physical threats to distributed PVs that
have the potential to cause local disturbances or grid instabilities by
creating supply-demand mismatches, reverse power flow conditions, etc. We use an
open source power system simulation tool called GridLAB-D, loaded with real
smart home and solar datasets to simulate the smart grid scenarios and to
illustrate the impact of PV attacks on the power system. Various attacks
targeting PV panels that create voltage fluctuations, reverse power flow, etc.
were designed and performed. We observe that while individual unsupervised
learning algorithms such as OCSVM, Corrupt RF, and PCA excel at identifying
particular attack types, PCA with Convex Hull outperforms all algorithms in
identifying all designed attacks with a true positive rate of 83.64% and an
accuracy of 95.78%. Our key insight is that due to the heterogeneous nature of
the distribution grid and the uncertainty in the type of the attack being
launched, relying on a single mode of information for defense can lead to
increased false alarms and missed detection rates as one can design attacks to
hide within those uncertainties and remain stealthy.
|
Devu Manikantan Shilay, Kin Gwn Lorey, Tianshu Weiz, Teems Lovetty,
and Yu Cheng
| null |
1709.0883
| null | null |
Learning a Predictive Model for Music Using PULSE
|
cs.LG cs.SD eess.AS
|
Predictive models for music are studied by researchers of algorithmic
composition, the cognitive sciences and machine learning. They serve as base
models for composition, can simulate human prediction and provide a
multidisciplinary application domain for learning algorithms. A particularly
well established and constantly advanced subtask is the prediction of
monophonic melodies. As melodies typically involve non-Markovian dependencies
their prediction requires a capable learning algorithm. In this thesis, I apply
the recent feature discovery and learning method PULSE to the realm of symbolic
music modeling. PULSE is comprised of a feature generating operation and
L1-regularized optimization. These are used to iteratively expand and cull the
feature set, effectively exploring feature spaces that are too large for common
feature selection approaches. I design a general Python framework for PULSE,
propose task-optimized feature generating operations and various
music-theoretically motivated features that are evaluated on a standard corpus
of monophonic folk and chorale melodies. The proposed method significantly
outperforms comparable state-of-the-art models. I further discuss the free
parameters of the learning algorithm and analyze the feature composition of the
learned models. The models learned by PULSE afford easy inspection and are
musicologically interpreted for the first time.
|
Jonas Langhabel
| null |
1709.08842
| null | null |
Active Learning amidst Logical Knowledge
|
cs.AI cs.LG cs.LO
|
Structured prediction is ubiquitous in applications of machine learning such
as knowledge extraction and natural language processing. Structure can often be
formulated in terms of logical constraints. We consider the question of how to
perform efficient active learning in the presence of logical constraints among
variables inferred by different classifiers. We propose several methods and
provide theoretical results that demonstrate the inappropriateness of employing
uncertainty guided sampling, a commonly used active learning method.
Furthermore, experiments on ten different datasets demonstrate that the methods
significantly outperform alternatives in practice. The results are of practical
significance in situations where labeled data is scarce.
|
Emmanouil Antonios Platanios and Ashish Kapoor and Eric Horvitz
| null |
1709.0885
| null | null |
Object-oriented Neural Programming (OONP) for Document Understanding
|
cs.LG cs.AI cs.CL cs.NE
|
We propose Object-oriented Neural Programming (OONP), a framework for
semantically parsing documents in specific domains. Basically, OONP reads a
document and parses it into a predesigned object-oriented data structure
(referred to as ontology in this paper) that reflects the domain-specific
semantics of the document. An OONP parser models semantic parsing as a decision
process: a neural net-based Reader sequentially goes through the document, and
during the process it builds and updates an intermediate ontology to summarize
its partial understanding of the text it covers. OONP supports a rich family of
operations (both symbolic and differentiable) for composing the ontology, and a
wide variety of forms (both symbolic and differentiable) for representing the
state and the document. An OONP parser can be trained with supervision of
different forms and strengths, including supervised learning (SL),
reinforcement learning (RL), and a hybrid of the two. Our experiments on both
synthetic and real-world document parsing tasks have shown that OONP can learn
to handle fairly complicated ontologies with training data of modest size.
|
Zhengdong Lu and Xianggen Liu and Haotian Cui and Yukun Yan and Daqi
Zheng
| null |
1709.08853
| null | null |
Generating Sentences by Editing Prototypes
|
cs.CL cs.AI cs.LG cs.NE stat.ML
|
We propose a new generative model of sentences that first samples a prototype
sentence from the training corpus and then edits it into a new sentence.
Compared to traditional models that generate from scratch either left-to-right
or by first sampling a latent sentence vector, our prototype-then-edit model
improves perplexity on language modeling and generates higher quality outputs
according to human evaluation. Furthermore, the model gives rise to a latent
edit vector that captures interpretable semantics such as sentence similarity
and sentence-level analogies.
|
Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, Percy Liang
| null |
1709.08878
| null | null |
On the regularization of Wasserstein GANs
|
stat.ML cs.LG
|
Since their invention, generative adversarial networks (GANs) have become a
popular approach for learning to model a distribution of real (unlabeled) data.
Convergence problems during training are overcome by Wasserstein GANs which
minimize the distance between the model and the empirical distribution in terms
of a different metric, but thereby introduce a Lipschitz constraint into the
optimization problem. A simple way to enforce the Lipschitz constraint on the
class of functions, which can be modeled by the neural network, is weight
clipping. It was proposed that training can be improved by instead augmenting
the loss by a regularization term that penalizes the deviation of the gradient
of the critic (as a function of the network's input) from one. We present
theoretical arguments why using a weaker regularization term enforcing the
Lipschitz constraint is preferable. These arguments are supported by
experimental results on toy data sets.
|
Henning Petzka, Asja Fischer, Denis Lukovnicov
| null |
1709.08894
| null | null |
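The regularization terms under discussion can be sketched concretely. Below, a critic gradient penalty evaluated on real/fake interpolates (assuming flat feature vectors): the two-sided version penalizes any deviation of the gradient norm from one, while the weaker one-sided variant, in the spirit of the abstract's argument, penalizes only norms exceeding one.

```python
import torch

def gradient_penalty(critic, real, fake, one_sided=True):
    """Penalty on the critic's input gradient along real/fake interpolates."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    x = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
    grad_norm = grad.view(grad.size(0), -1).norm(2, dim=1)
    if one_sided:
        # Weaker penalty: only gradient norms above 1 violate the
        # Lipschitz constraint, so only those are penalized.
        return torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean()
    return (grad_norm - 1.0).pow(2).mean()
```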
A physical model for efficient ranking in networks
|
physics.soc-ph cs.LG cs.SI physics.data-an
|
We present a physically-inspired model and an efficient algorithm to infer
hierarchical rankings of nodes in directed networks. It assigns real-valued
ranks to nodes rather than simply ordinal ranks, and it formalizes the
assumption that interactions are more likely to occur between individuals with
similar ranks. It provides a natural statistical significance test for the
inferred hierarchy, and it can be used to perform inference tasks such as
predicting the existence or direction of edges. The ranking is obtained by
solving a linear system of equations, which is sparse if the network is; thus
the resulting algorithm is extremely efficient and scalable. We illustrate
these findings by analyzing real and synthetic data, including datasets from
animal behavior, faculty hiring, social support networks, and sports
tournaments. We show that our method often outperforms a variety of others, in
both speed and accuracy, in recovering the underlying ranks and predicting edge
directions.
|
Caterina De Bacco, Daniel B. Larremore and Cristopher Moore
| null |
1709.09002
| null | null |
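A minimal sketch in the spirit of the abstract above, under the assumption that each directed edge $i \to j$ acts as a spring preferring rank difference $s_i - s_j = 1$; minimizing the total spring energy then yields a sparse linear system. The exact system and regularization in the paper may differ; the small `alpha` ridge term is an assumption added here to make the solution unique.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def spring_ranks(A, alpha=1e-2):
    """Real-valued ranks from a directed (sparse) adjacency matrix A.

    Minimizes sum_ij A_ij * (s_i - s_j - 1)^2 / 2, whose optimality
    condition is (D_out + D_in - A - A^T + alpha*I) s = d_out - d_in.
    """
    A = sp.csr_matrix(A, dtype=float)
    d_out = np.asarray(A.sum(axis=1)).ravel()
    d_in = np.asarray(A.sum(axis=0)).ravel()
    n = A.shape[0]
    M = sp.diags(d_out + d_in) - A - A.T + alpha * sp.eye(n)
    return spsolve(M.tocsc(), d_out - d_in)

# Toy hierarchy: 0 beats 1 and 2, 1 beats 2.
A = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
print(spring_ranks(A))  # ranks should decrease from node 0 to node 2
```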
AutoEncoder by Forest
|
cs.LG stat.ML
|
Auto-encoding is an important task which is typically realized by deep neural
networks (DNNs) such as convolutional neural networks (CNNs). In this paper, we
propose EncoderForest (abbrv. eForest), the first tree-ensemble-based
auto-encoder. We present a procedure for enabling forests to do backward
reconstruction by utilizing the equivalence classes defined by decision paths
of the trees, and demonstrate its usage in both supervised and unsupervised
settings. Experiments show that, compared with DNN autoencoders, eForest is
able to obtain lower reconstruction error with fast training speed, while the
model itself is reusable and damage-tolerant.
|
Ji Feng and Zhi-Hua Zhou
| null |
1709.09018
| null | null |
MDP environments for the OpenAI Gym
|
cs.LG
|
The OpenAI Gym provides researchers and enthusiasts with simple-to-use
environments for reinforcement learning. Even the simplest environments have a
level of complexity that can obscure the inner workings of RL approaches and
make debugging difficult. This whitepaper describes a Python framework that
makes it very easy to create simple Markov-Decision-Process environments
programmatically by specifying state transitions and rewards of deterministic
and non-deterministic MDPs in a domain-specific language in Python. It then
presents results and visualizations created with this MDP framework.
|
Andreas Kirsch
| null |
1709.09069
| null | null |
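A minimal sketch of specifying an MDP programmatically in the Gym style, with explicit transition tables covering both deterministic and non-deterministic cases. The dictionary-based specification and class below are illustrative placeholders, not the whitepaper's actual domain-specific language.

```python
import random

class SimpleMDPEnv:
    """Tiny Gym-style environment built from explicit transition tables.

    transitions[state][action] is a list of (probability, next_state,
    reward, done) tuples, which also covers non-deterministic MDPs.
    """

    def __init__(self, transitions, start_state=0):
        self.transitions = transitions
        self.start_state = start_state
        self.state = start_state

    def reset(self):
        self.state = self.start_state
        return self.state

    def step(self, action):
        outcomes = self.transitions[self.state][action]
        probs = [p for p, *_ in outcomes]
        _, next_state, reward, done = random.choices(outcomes, probs)[0]
        self.state = next_state
        return next_state, reward, done, {}

# Two-state MDP: action 1 in state 0 moves to the terminal state 1.
env = SimpleMDPEnv({0: {0: [(1.0, 0, 0.0, False)],
                        1: [(1.0, 1, 1.0, True)]}})
obs = env.reset()
print(env.step(1))  # -> (1, 1.0, True, {})
```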
Output Range Analysis for Deep Neural Networks
|
cs.LG stat.ML
|
Deep neural networks (NN) are extensively used for machine learning tasks
such as image classification, perception and control of autonomous systems.
Increasingly, these deep NNs are also being deployed in high-assurance
applications. Thus, there is a pressing need for developing techniques to
verify neural networks to check whether certain user-expected properties are
satisfied. In this paper, we study a specific verification problem of computing
a guaranteed range for the output of a deep neural network given a set of
inputs represented as a convex polyhedron. Range estimation is a key primitive
for verifying deep NNs. We present an efficient range estimation algorithm that
uses a combination of local search and linear programming problems to
efficiently find the maximum and minimum values taken by the outputs of the NN
over the given input set. In contrast to recently proposed "monolithic"
optimization approaches, we use local gradient descent to repeatedly find and
eliminate local minima of the function. The final global optimum is certified
using a mixed integer programming instance. We implement our approach and
compare it with Reluplex, a recently proposed solver for deep neural networks.
We demonstrate the effectiveness of the proposed approach for verification of
NNs used in automated control as well as those used in classification.
|
Souradeep Dutta, Susmit Jha, Sriram Sankaranarayanan, Ashish Tiwari
| null |
1709.0913
| null | null |
EDEN: Evolutionary Deep Networks for Efficient Machine Learning
|
stat.ML cs.LG cs.NE
|
Deep neural networks continue to show improved performance with increasing
depth, an encouraging trend that implies an explosion in the possible
permutations of network architectures and hyperparameters for which there is
little intuitive guidance. To address this increasing complexity, we propose
Evolutionary DEep Networks (EDEN), a computationally efficient
neuro-evolutionary algorithm which interfaces to any deep neural network
platform, such as TensorFlow. We show that EDEN evolves simple yet successful
architectures built from embedding, 1D and 2D convolutional, max pooling and
fully connected layers along with their hyperparameters. Evaluation of EDEN
across seven image and sentiment classification datasets shows that it reliably
finds good networks -- and in three cases achieves state-of-the-art results --
even on a single GPU, in just 6-24 hours. Our study provides a first attempt at
applying neuro-evolution to the creation of 1D convolutional networks for
sentiment analysis including the optimisation of the embedding layer.
|
Emmanuel Dufourq, Bruce A. Bassett
| null |
1709.09161
| null | null |
A feasibility study for predicting optimal radiation therapy dose
distributions of prostate cancer patients from patient anatomy using deep
learning
|
physics.med-ph cs.AI cs.CV cs.LG
|
With the advancement of treatment modalities in radiation therapy for cancer
patients, outcomes have improved, but at the cost of increased treatment plan
complexity and planning time. The accurate prediction of dose distributions
would alleviate this issue by guiding clinical plan optimization to save time
and maintain high quality plans. We have modified a convolutional deep network
model, U-net (originally designed for segmentation purposes), for predicting
dose from patient image contours of the planning target volume (PTV) and organs
at risk (OAR). We show that, as an example, we are able to accurately predict
the dose of intensity-modulated radiation therapy (IMRT) for prostate cancer
patients, where the average Dice similarity coefficient is 0.91 when comparing
the predicted vs. true isodose volumes between 0% and 100% of the prescription
dose. The average value of the absolute differences in [max, mean] dose is
found to be under 5% of the prescription dose, specifically for each structure
is [1.80%, 1.03%](PTV), [1.94%, 4.22%](Bladder), [1.80%, 0.48%](Body), [3.87%,
1.79%](L Femoral Head), [5.07%, 2.55%](R Femoral Head), and [1.26%,
1.62%](Rectum) of the prescription dose. We thus managed to map a desired
radiation dose distribution from a patient's PTV and OAR contours. As an
additional advantage, relatively little data was used in the techniques and
models described in this paper.
|
Dan Nguyen, Troy Long, Xun Jia, Weiguo Lu, Xuejun Gu, Zohaib Iqbal,
Steve Jiang
| null |
1709.09233
| null | null |
FSL-BM: Fuzzy Supervised Learning with Binary Meta-Feature for
Classification
|
cs.LG cs.AI stat.AP stat.ML
|
This paper introduces a novel real-time Fuzzy Supervised Learning with Binary
Meta-Feature (FSL-BM) for big data classification tasks. The study of real-time
algorithms addresses several major concerns, namely accuracy, memory
consumption, the ability to relax assumptions, and time complexity. Attaining a
fast computational model providing fuzzy logic and supervised learning is one
of the main challenges in machine learning. In this research paper, we present
the FSL-BM algorithm as an efficient solution for supervised learning with
fuzzy logic processing, using a binary meta-feature representation with Hamming
distance and a hash function to relax assumptions. While many studies focused on
reducing time complexity and increasing accuracy during the last decade, the
novel contribution of this proposed solution comes through the integration of
Hamming distance, hash functions, binary meta-features, and binary
classification to provide a real-time supervised method. The Hash Table (HT)
component gives fast access to existing indices and therefore allows the
generation of new indices in constant time, which supersedes existing fuzzy
supervised algorithms with better or comparable results. To summarize, the
main contribution of this technique for real-time Fuzzy Supervised Learning is
to represent the hypothesis through binary input as a meta-feature space and
to create the Fuzzy Supervised Hash table used to train and validate the model.
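A minimal sketch of the general idea follows: binary meta-features indexed by a
hash table, with a Hamming-distance fallback for unseen keys and fuzzy class
memberships from per-key counts. The binarisation rule and function names are
illustrative assumptions, not the authors' exact construction.

    import numpy as np

    def to_binary_meta_features(x, thresholds):
        # Illustrative binarisation: one bit per input feature
        return tuple(int(v > t) for v, t in zip(x, thresholds))

    def hamming(a, b):
        return sum(u != v for u, v in zip(a, b))

    def train(X, y, thresholds):
        # Hash each binary meta-feature vector to its per-class counts
        table = {}
        for x, label in zip(X, y):
            key = to_binary_meta_features(x, thresholds)
            counts = table.setdefault(key, {})
            counts[label] = counts.get(label, 0) + 1
        return table

    def predict(table, x, thresholds):
        query = to_binary_meta_features(x, thresholds)
        # O(1) hash lookup; fall back to the nearest stored key otherwise
        key = query if query in table else min(
            table, key=lambda k: hamming(k, query))
        counts = table[key]
        total = sum(counts.values())
        return {label: c / total for label, c in counts.items()}

    X = np.array([[0.2, 0.9], [0.8, 0.1], [0.7, 0.2]])
    y = ["a", "b", "b"]
    table = train(X, y, thresholds=[0.5, 0.5])
    print(predict(table, np.array([0.9, 0.3]), thresholds=[0.5, 0.5]))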
|
Kamran Kowsari, Nima Bari, Roman Vichr, Farhad A. Goodarzi
|
10.1007/978-3-030-03405-4_46
|
1709.09268
| null | null |
Cold-Start Reinforcement Learning with Softmax Policy Gradient
|
cs.LG
|
Policy-gradient approaches to reinforcement learning have two common and
undesirable overhead procedures, namely warm-start training and sample variance
reduction. In this paper, we describe a reinforcement learning method based on
a softmax value function that requires neither of these procedures. Our method
combines the advantages of policy-gradient methods with the efficiency and
simplicity of maximum-likelihood approaches. We apply this new cold-start
reinforcement learning method in training sequence generation models for
structured output prediction problems. Empirical evidence validates this method
on automatic summarization and image captioning tasks.
|
Nan Ding, Radu Soricut
| null |
1709.09346
| null | null |
Slim-DP: A Light Communication Data Parallelism for DNN
|
cs.DC cs.LG
|
Data parallelism has emerged as a necessary technique to accelerate the
training of deep neural networks (DNN). In a typical data parallelism approach,
the local workers push the latest updates of all the parameters to the
parameter server and pull all merged parameters back periodically. However,
with the increasing size of DNN models and the large number of workers in
practice, this typical data parallelism cannot achieve satisfactory training
acceleration, since it usually suffers from the heavy communication cost due to
transferring huge amount of information between workers and the parameter
server. In-depth understanding of DNNs has revealed that they are usually
highly redundant, such that deleting a considerable proportion of the
parameters will not significantly degrade the model performance. This
redundancy property exposes a
great opportunity to reduce the communication cost by only transferring the
information of those significant parameters during the parallel training.
However, if we only transfer information of temporally significant parameters
of the latest snapshot, we may miss the parameters that are insignificant now
but have potential to become significant as the training process goes on. To
this end, we design an Explore-Exploit framework to dynamically choose the
subset to be communicated, which is comprised of the significant parameters in
the latest snapshot together with a random explored set of other parameters. We
propose to measure the significance of the parameter by the combination of its
magnitude and gradient. Our experimental results demonstrate that our proposed
Slim-DP can achieve better training acceleration than standard data parallelism
and its communication-efficient version by saving communication time without
loss of accuracy.
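A minimal sketch of the Explore-Exploit subset selection follows. Measuring
significance as |w| + |gradient| is one plausible instantiation of "the
combination of magnitude and gradient", and the split sizes are placeholders.

    import numpy as np

    def select_subset(params, grads, k_exploit, k_explore, rng):
        # Exploit: the currently most significant parameters
        significance = np.abs(params) + np.abs(grads)
        exploit = np.argsort(-significance)[:k_exploit]
        # Explore: a random sample of the remaining parameters, so entries
        # that may become significant later still get communicated sometimes
        rest = np.setdiff1d(np.arange(params.size), exploit)
        explore = rng.choice(rest, size=k_explore, replace=False)
        return np.concatenate([exploit, explore])

    rng = np.random.default_rng(0)
    w, g = rng.normal(size=1000), rng.normal(size=1000)
    idx = select_subset(w, g, k_exploit=80, k_explore=20, rng=rng)
    print(idx.size)  # only these 100 of 1000 entries would be transferred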
|
Shizhao Sun, Wei Chen, Jiang Bian, Xiaoguang Liu, Tie-Yan Liu
| null |
1709.09393
| null | null |
A Benchmark Environment Motivated by Industrial Control Problems
|
cs.AI cs.LG cs.SY
|
In the research area of reinforcement learning (RL), frequently novel and
promising methods are developed and introduced to the RL community. However,
although many researchers are keen to apply their methods on real-world
problems, implementing such methods in real industry environments often is a
frustrating and tedious process. Generally, academic research groups have only
limited access to real industrial data and applications. For this reason, new
methods are usually developed, evaluated and compared by using artificial
software benchmarks. On one hand, these benchmarks are designed to provide
interpretable RL training scenarios and detailed insight into the learning
process of the method on hand. On the other hand, they usually do not share
much similarity with industrial real-world applications. For this reason we
used our industry experience to design a benchmark which bridges the gap
between freely available, documented, and motivated artificial benchmarks and
properties of real industrial problems. The resulting industrial benchmark (IB)
has been made publicly available to the RL community by publishing its Java and
Python code, including an OpenAI Gym wrapper, on Github. In this paper we
motivate and describe in detail the IB's dynamics and identify prototypic
experimental settings that capture common situations in real-world industry
control problems.
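As a usage illustration, the snippet below runs a random policy through a
classic Gym-style interaction loop. The environment id "IB-v0" is a placeholder
assumption, not necessarily the registered name; the actual id should be taken
from the published Github code.

    import gym

    # "IB-v0" is a placeholder id; check the Github repository for the
    # actual registration name of the industrial benchmark environment.
    env = gym.make("IB-v0")
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()  # random policy as a smoke test
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print(total_reward)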
|
Daniel Hein, Stefan Depeweg, Michel Tokic, Steffen Udluft, Alexander
Hentschel, Thomas A. Runkler, Volkmar Sterzing
|
10.1109/SSCI.2017.8280935
|
1709.09480
| null | null |
Neural networks for topology optimization
|
cs.LG math.NA
|
In this research, we propose a deep learning based approach for speeding up
the topology optimization methods. The problem we seek to solve is the layout
problem. The main novelty of this work is to state the problem as an image
segmentation task. We leverage the power of deep learning methods as the
efficient pixel-wise image labeling technique to perform the topology
optimization. We introduce convolutional encoder-decoder architecture and the
overall approach of solving the above-described problem with high performance.
The conducted experiments demonstrate the significant acceleration of the
optimization process. The proposed approach has excellent generalization
properties. We demonstrate that the proposed model can also be applied to
other problems. The successful results, as well as the drawbacks of the
current method, are discussed.
|
Ivan Sosnovik, Ivan Oseledets
| null |
1709.09578
| null | null |
Connectivity Learning in Multi-Branch Networks
|
cs.LG cs.CV
|
While much of the work in the design of convolutional networks over the last
five years has revolved around the empirical investigation of the importance of
depth, filter sizes, and number of feature channels, recent studies have shown
that branching, i.e., splitting the computation along parallel but distinct
threads and then aggregating their outputs, represents a new promising
dimension for significant improvements in performance. To combat the complexity
of design choices in multi-branch architectures, prior work has adopted simple
strategies, such as a fixed branching factor, the same input being fed to all
parallel branches, and an additive combination of the outputs produced by all
branches at aggregation points.
In this work we remove these predefined choices and propose an algorithm to
learn the connections between branches in the network. Instead of being chosen
a priori by the human designer, the multi-branch connectivity is learned
simultaneously with the weights of the network by optimizing a single loss
function defined with respect to the end task. We demonstrate our approach on
the problem of multi-class image classification using three different datasets
where it yields consistently higher accuracy compared to the state-of-the-art
"ResNeXt" multi-branch network given the same learning capacity.
|
Karim Ahmed and Lorenzo Torresani
| null |
1709.09582
| null | null |
Riemannian approach to batch normalization
|
cs.LG
|
Batch Normalization (BN) has proven to be an effective algorithm for deep
neural network training by normalizing the input to each neuron and reducing
the internal covariate shift. The space of weight vectors in the BN layer can
be naturally interpreted as a Riemannian manifold, which is invariant to linear
scaling of weights. Following the intrinsic geometry of this manifold provides
a new learning rule that is more efficient and easier to analyze. We also
propose intuitive and effective gradient clipping and regularization methods
for the proposed algorithm by utilizing the geometry of the manifold. The
resulting algorithm consistently outperforms the original BN on various types
of network architectures and datasets.
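A minimal sketch of one such manifold-aware update, assuming the
scale-invariant weight space is treated as the unit sphere: project the
Euclidean gradient onto the tangent space, take a step, then retract by
renormalising. The paper's exact learning rule may differ in detail.

    import numpy as np

    def riemannian_sgd_step(w, grad, lr):
        # Tangent-space projection removes the radial (scale) component,
        # which BN makes irrelevant; renormalising retracts to the sphere.
        w_unit = w / np.linalg.norm(w)
        tangent_grad = grad - np.dot(grad, w_unit) * w_unit
        w_new = w_unit - lr * tangent_grad
        return w_new / np.linalg.norm(w_new)

    w = np.array([3.0, 4.0])
    g = np.array([1.0, -2.0])
    print(riemannian_sgd_step(w, g, lr=0.1))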
|
Minhyung Cho, Jaehyung Lee
| null |
1709.09603
| null | null |
How regularization affects the critical points in linear networks
|
math.OC cs.LG
|
This paper is concerned with the problem of representing and learning a
linear transformation using a linear neural network. In recent years, there has
been a growing interest in the study of such networks in part due to the
successes of deep learning. The main question of this body of research and also
of this paper pertains to the existence and optimality properties of the
critical points of the mean-squared loss function. The primary concern here is
the robustness of the critical points with regularization of the loss function.
An optimal control model is introduced for this purpose and a learning
algorithm (a regularized form of backprop) derived for the same using
Hamilton's formulation of optimal control. The formulation is used to provide a
complete characterization of the critical points in terms of the solutions of a
nonlinear matrix-valued equation, referred to as the characteristic equation.
Analytical and numerical tools from bifurcation theory are used to compute the
critical points via the solutions of the characteristic equation. The main
conclusion is that the critical point diagram can be fundamentally different
even with arbitrarily small amounts of regularization.
|
Amirhossein Taghvaei, Jin W. Kim, Prashant G. Mehta
| null |
1709.09625
| null | null |
Lower Bounds on the Bayes Risk of the Bayesian BTL Model with
Applications to Comparison Graphs
|
cs.IT cs.LG math.IT
|
We consider the problem of aggregating pairwise comparisons to obtain a
consensus ranking order over a collection of objects. We use the popular
Bradley-Terry-Luce (BTL) model which allows us to probabilistically describe
pairwise comparisons between objects. In particular, we employ the Bayesian BTL
model, which allows for meaningful prior assumptions and copes with situations
where the number of objects is large and the number of comparisons between some
objects is small or even zero. For the conventional Bayesian BTL model, we
derive information-theoretic lower bounds on the Bayes risk of estimators for
norm-based distortion functions. We compare the information-theoretic lower
bound with the Bayesian Cram\'{e}r-Rao lower bound we derive for the case when
the Bayes risk is the mean squared error. We illustrate the utility of the
bounds through simulations by comparing them with the error performance of an
expectation-maximization based inference algorithm proposed for the Bayesian
BTL model. We draw parallels between pairwise comparisons in the BTL model and
inter-player games represented as edges in a comparison graph and analyze the
effect of various graph structures on the lower bounds. We also extend the
information-theoretic and Bayesian Cram\'{e}r-Rao lower bounds to the more
general Bayesian BTL model which takes into account home-field advantage.
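For concreteness, the BTL pairwise comparison probability underlying the above
can be sketched as follows, parameterising skills on the log scale. Modelling
home-field advantage as a multiplicative factor on the home player's weight is
the usual formulation and is shown here as an assumption.

    import numpy as np

    def btl_win_prob(skill_i, skill_j, home_advantage=1.0):
        # BTL: P(i beats j) = w_i / (w_i + w_j), with w = exp(skill);
        # home_advantage > 1 scales the home player's weight.
        wi = home_advantage * np.exp(skill_i)
        wj = np.exp(skill_j)
        return wi / (wi + wj)

    print(btl_win_prob(1.0, 0.0))        # ~0.731 for a one-unit skill gap
    print(btl_win_prob(0.0, 0.0, 1.5))   # home side favoured at equal skill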
|
Mine Alsan, Ranjitha Prasad and Vincent Y. F. Tan
|
10.1109/JSTSP.2018.2827303
|
1709.09676
| null | null |
KeyVec: Key-semantics Preserving Document Representations
|
cs.CL cs.LG cs.NE
|
Previous studies have demonstrated the empirical success of word embeddings
in various applications. In this paper, we investigate the problem of learning
distributed representations for text documents which many machine learning
algorithms take as input for a number of NLP tasks.
We propose a neural network model, KeyVec, which learns document
representations with the goal of preserving key semantics of the input text. It
enables the learned low-dimensional vectors to retain the topics and important
information from the documents that will flow to downstream tasks. Our
empirical evaluations show the superior quality of KeyVec representations in
two different document understanding tasks.
|
Bin Bi and Hao Ma
| null |
1709.09749
| null | null |
Sampling Without Compromising Accuracy in Adaptive Data Analysis
|
cs.LG cs.DS
|
In this work, we study how to use sampling to speed up mechanisms for
answering adaptive queries into datasets without reducing the accuracy of those
mechanisms. This is important to do when both the datasets and the number of
queries asked are very large. In particular, we describe a mechanism that
provides a polynomial speed-up per query over previous mechanisms, without
needing to increase the total amount of data required to maintain the same
generalization error as before. We prove that this speed-up holds for arbitrary
statistical queries. We also provide an even faster method for achieving
statistically-meaningful responses wherein the mechanism is only allowed to see
a constant number of samples from the data per query. Finally, we show that our
general results yield a simple, fast, and unified approach for adaptively
optimizing convex and strongly convex functions over a dataset.
|
Benjamin Fish, Lev Reyzin, Benjamin I. P. Rubinstein
| null |
1709.09778
| null | null |
Generative Adversarial Mapping Networks
|
cs.LG
|
Generative Adversarial Networks (GANs) have shown impressive performance in
generating photo-realistic images. They fit generative models by minimizing
certain distance measure between the real image distribution and the generated
data distribution. Several distance measures have been used, such as
Jensen-Shannon divergence, $f$-divergence, and Wasserstein distance, and
choosing an appropriate distance measure is very important for training the
generative network. In this paper, we choose to use the maximum mean
discrepancy (MMD) as the distance metric, which has several nice theoretical
guarantees. In fact, generative moment matching network (GMMN) (Li, Swersky,
and Zemel 2015) is such a generative model which contains only one generator
network $G$ trained by directly minimizing MMD between the real and generated
distributions. However, it fails to generate meaningful samples on challenging
benchmark datasets, such as CIFAR-10 and LSUN. To improve on GMMN, we propose
to add an extra network $F$, called mapper. $F$ maps both real data
distribution and generated data distribution from the original data space to a
feature representation space $\mathcal{R}$, and it is trained to maximize MMD
between the two mapped distributions in $\mathcal{R}$, while the generator $G$
tries to minimize the MMD. We call the new model generative adversarial mapping
networks (GAMNs). We demonstrate that the adversarial mapper $F$ can help $G$
to better capture the underlying data distribution. We also show that GAMN
significantly outperforms GMMN, and is also superior to or comparable with
other state-of-the-art GAN based methods on MNIST, CIFAR-10 and LSUN-Bedrooms
datasets.
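A minimal NumPy sketch of the (biased) squared-MMD estimate with a Gaussian
kernel, the quantity the mapper maximises and the generator minimises in the
description above; the bandwidth choice here is an arbitrary placeholder.

    import numpy as np

    def gaussian_kernel(X, Y, sigma=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def mmd2(X, Y, sigma=1.0):
        # Biased estimate of MMD^2 between samples X and Y
        return (gaussian_kernel(X, X, sigma).mean()
                + gaussian_kernel(Y, Y, sigma).mean()
                - 2.0 * gaussian_kernel(X, Y, sigma).mean())

    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(128, 2))
    fake = rng.normal(0.5, 1.0, size=(128, 2))
    print(mmd2(real, fake))  # > 0 when the two distributions differ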
|
Jianbo Guo, Guangxiang Zhu, Jian Li
| null |
1709.09820
| null | null |
Distance-based Confidence Score for Neural Network Classifiers
|
cs.AI cs.CV cs.LG stat.ML
|
The reliable measurement of confidence in classifiers' predictions is very
important for many applications and is, therefore, an important part of
classifier design. Yet, although deep learning has received tremendous
attention in recent years, not much progress has been made in quantifying the
prediction confidence of neural network classifiers. Bayesian models offer a
mathematically grounded framework to reason about model uncertainty, but
usually come with prohibitive computational costs. In this paper we propose a
simple, scalable method to achieve a reliable confidence score, based on the
data embedding derived from the penultimate layer of the network. We
investigate two ways to achieve desirable embeddings, by using either a
distance-based loss or Adversarial Training. We then test the benefits of our
method when used for classification error prediction, weighting an ensemble of
classifiers, and novelty detection. In all tasks we show significant
improvement over traditional, commonly used confidence scores.
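As one plausible instantiation of a distance-based score (not necessarily the
exact score used in the paper), the sketch below converts distances from a
penultimate-layer embedding to per-class centroids into a confidence
distribution:

    import numpy as np

    def confidence_from_distances(embedding, class_centroids, temperature=1.0):
        # Softmax over negative distances in embedding space: points far
        # from every centroid receive low, flat confidence.
        dists = np.linalg.norm(class_centroids - embedding[None, :], axis=1)
        logits = -dists / temperature
        e = np.exp(logits - logits.max())
        return e / e.sum()

    centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
    print(confidence_from_distances(np.array([0.2, -0.1]), centroids))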
|
Amit Mandelbaum and Daphna Weinshall
| null |
1709.09844
| null | null |
A Generative Model for Score Normalization in Speaker Recognition
|
stat.ML cs.LG cs.SD eess.AS
|
We propose a theoretical framework for thinking about score normalization,
which confirms that normalization is not needed under (admittedly fragile)
ideal conditions. If, however, these conditions are not met, e.g. under
data-set shift between training and runtime, our theory reveals dependencies
between scores that could be exploited by strategies such as score
normalization. Indeed, it has been demonstrated over and over experimentally
that various ad-hoc score normalization recipes do work. We present a first
attempt at using probability theory to design a generative score-space
normalization model which gives similar improvements to ZT-norm on the
text-dependent RSR 2015 database.
|
Albert Swart and Niko Brummer
| null |
1709.09868
| null | null |
Are we done with object recognition? The iCub robot's perspective
|
cs.RO cs.AI cs.CV cs.LG
|
We report on an extensive study of the benefits and limitations of current
deep learning approaches to object recognition in robot vision scenarios,
introducing a novel dataset used for our investigation. To avoid the biases in
currently available datasets, we consider a natural human-robot interaction
setting to design a data-acquisition protocol for visual object recognition on
the iCub humanoid robot. Analyzing the performance of off-the-shelf models
trained off-line on large-scale image retrieval datasets, we show the necessity
for knowledge transfer. We evaluate different ways in which this last step can
be done, and identify the major bottlenecks affecting robotic scenarios. By
studying both object categorization and identification problems, we highlight
key differences between object recognition in robotics applications and in
image retrieval tasks, for which the considered deep learning approaches have
been originally designed. In a nutshell, our results confirm the remarkable
improvements yielded by deep learning in this setting, while pointing to
specific open challenges that need to be addressed for seamless deployment in
robotics.
|
Giulia Pasquale, Carlo Ciliberto, Francesca Odone, Lorenzo Rosasco and
Lorenzo Natale
|
10.1016/j.robot.2018.11.001
|
1709.09882
| null | null |
The model of an anomaly detector for HiLumi LHC magnets based on
Recurrent Neural Networks and adaptive quantization
|
cs.LG physics.acc-ph physics.ins-det
|
This paper focuses on an examination of an applicability of Recurrent Neural
Network models for detecting anomalous behavior of the CERN superconducting
magnets. In order to conduct the experiments, the authors designed and
implemented an adaptive signal quantization algorithm and a custom GRU-based
detector and developed a method for the detector parameters selection. Three
different datasets were used for testing the detector. Two artificially
generated datasets were used to assess the raw performance of the system
whereas the 231 MB dataset composed of the signals acquired from HiLumi magnets
was intended for real-life experiments and model training. Several different
setups of the developed anomaly detection system were evaluated and compared
with state-of-the-art OC-SVM reference model operating on the same data. The
OC-SVM model was equipped with a rich set of feature extractors accounting for
a range of the input signal properties. It was determined in the course of the
experiments that the detector, along with its supporting design methodology,
reaches an F1 equal to or very close to 1 for almost all test sets. Due to the
profile of the data, the best_length setup of the detector turned out to
perform the best among all five tested configuration schemes of the detection
system. The quantization parameters have the biggest impact on the overall
performance of the detector with the best values of input/output grid equal to
16 and 8, respectively. The proposed solution of the detection significantly
outperformed OC-SVM-based detector in most of the cases, with much more stable
performance across all the datasets.
|
Maciej Wielgosz, Matej Mertik, Andrzej Skocze\'n, Ernesto De Matteis
|
10.1016/j.engappai.2018.06.012
|
1709.09883
| null | null |
SUBIC: A Supervised Bi-Clustering Approach for Precision Medicine
|
cs.LG stat.ML
|
Traditional medicine typically applies one-size-fits-all treatment for the
entire patient population whereas precision medicine develops tailored
treatment schemes for different patient subgroups. The fact that some factors
may be more significant for a specific patient subgroup motivates clinicians
and medical researchers to develop new approaches to subgroup detection and
analysis, which is an effective strategy to personalize treatment. In this
study, we propose a novel patient subgroup detection method, called Supervised
Biclustering (SUBIC) using convex optimization and apply our approach to detect
patient subgroups and prioritize risk factors for hypertension (HTN) in a
vulnerable demographic subgroup (African-American). Our approach not only finds
patient subgroups with guidance of a clinically relevant target variable but
also identifies and prioritizes risk factors by pursuing sparsity of the input
variables and encouraging similarity among the input variables and between the
input and target variables.
|
Milad Zafar Nezhad, Dongxiao Zhu, Najibesadat Sadati, Kai Yang,
Phillip Levy
|
10.1109/ICMLA.2017.00-68
|
1709.09929
| null | null |
Premise Selection for Theorem Proving by Deep Graph Embedding
|
cs.AI cs.LG cs.LO
|
We propose a deep learning-based approach to the problem of premise
selection: selecting mathematical statements relevant for proving a given
conjecture. We represent a higher-order logic formula as a graph that is
invariant to variable renaming but still fully preserves syntactic and semantic
information. We then embed the graph into a vector via a novel embedding method
that preserves the information of edge ordering. Our approach achieves
state-of-the-art results on the HolStep dataset, improving the classification
accuracy from 83% to 90.3%.
|
Mingzhe Wang, Yihe Tang, Jian Wang, Jia Deng
| null |
1709.09994
| null | null |
Sparse Hierarchical Regression with Polynomials
|
math.OC cs.LG stat.ML
|
We present a novel method for exact hierarchical sparse polynomial
regression. Our regressor is that degree $r$ polynomial which depends on at
most $k$ inputs, counting at most $\ell$ monomial terms, which minimizes the
sum of the squares of its prediction errors. The previous hierarchical sparse
specification aligns well with modern big data settings where many inputs are
not relevant for prediction purposes and the functional complexity of the
regressor needs to be controlled as to avoid overfitting. We present a two-step
approach to this hierarchical sparse regression problem. First, we discard
irrelevant inputs using an extremely fast input ranking heuristic. Secondly, we
take advantage of modern cutting plane methods for integer optimization to
solve our resulting reduced hierarchical $(k, \ell)$-sparse problem exactly.
The ability of our method to identify all $k$ relevant inputs and all $\ell$
monomial terms is shown empirically to experience a phase transition.
Crucially, the same transition also presents itself in our ability to reject
all irrelevant features and monomials as well. In the regime where our method
is statistically powerful, its computational complexity is interestingly on par
with Lasso based heuristics. The presented work fills a void in terms of a lack
of powerful disciplined nonlinear sparse regression methods in high-dimensional
settings. Our method is shown empirically to scale to regression problems with
$n\approx 10,000$ observations for input dimension $p\approx 1,000$.
|
Dimitris Bertsimas, Bart Van Parys
| null |
1709.10030
| null | null |
Introducing DeepBalance: Random Deep Belief Network Ensembles to Address
Class Imbalance
|
stat.ML cs.LG
|
Class imbalance problems manifest in domains such as financial fraud
detection or network intrusion analysis, where the prevalence of one class is
much higher than another. Typically, practitioners are more interested in
predicting the minority class than the majority class as the minority class may
carry a higher misclassification cost. However, classifier performance
deteriorates in the face of class imbalance as oftentimes classifiers may
predict every point as the majority class. Methods for dealing with class
imbalance include cost-sensitive learning or resampling techniques. In this
paper, we introduce DeepBalance, an ensemble of deep belief networks trained
with balanced bootstraps and random feature selection. We demonstrate that our
proposed method outperforms baseline resampling methods such as SMOTE and
under- and over-sampling in metrics such as AUC and sensitivity when applied to
highly imbalanced financial transaction data. Additionally, we explore
performance and training time implications of various model parameters.
Furthermore, we show that our model is easily parallelizable, which can reduce
training times. Finally, we present an implementation of DeepBalance in R.
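A minimal sketch of the balanced bootstrap used to train each ensemble member
follows, written in Python even though the paper's implementation is in R.
Resampling every class to the minority count is one standard way to balance a
bootstrap; DeepBalance's exact resampling scheme may differ in detail.

    import numpy as np

    def balanced_bootstrap(X, y, rng):
        # Resample each class (with replacement) to the minority-class size,
        # so every bootstrap used to train a network is class-balanced.
        classes, counts = np.unique(y, return_counts=True)
        m = counts.min()
        idx = np.concatenate([
            rng.choice(np.where(y == c)[0], size=m, replace=True)
            for c in classes])
        rng.shuffle(idx)
        return X[idx], y[idx]

    rng = np.random.default_rng(0)
    y = np.array([0] * 990 + [1] * 10)   # heavy class imbalance
    X = rng.normal(size=(1000, 3))
    Xb, yb = balanced_bootstrap(X, y, rng)
    print(np.bincount(yb))               # [10 10]: balanced sample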
|
Peter Xenopoulos
| null |
1709.10056
| null | null |
Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep
Reinforcement Learning
|
cs.RO cs.AI cs.LG cs.MA
|
Developing a safe and efficient collision avoidance policy for multiple
robots is challenging in decentralized scenarios where each robot generates
its paths without observing other robots' states and intents. While other
distributed multi-robot collision avoidance systems exist, they often require
extracting agent-level features to plan a local collision-free action, which
can be computationally prohibitive and not robust. More importantly, in
practice the performance of these methods is much lower than their centralized
counterparts.
We present a decentralized sensor-level collision avoidance policy for
multi-robot systems, which directly maps raw sensor measurements to an agent's
steering commands in terms of movement velocity. As a first step toward
reducing the performance gap between decentralized and centralized methods, we
present a multi-scenario multi-stage training framework to find an optimal
policy which is trained over a large number of robots on rich, complex
environments simultaneously using a policy gradient based reinforcement
learning algorithm. We validate the learned sensor-level collision avoidance
policy in a variety of simulated scenarios with thorough performance
evaluations and show that the final learned policy is able to find time
efficient, collision-free paths for a large-scale robot system. We also
demonstrate that the learned policy can be well generalized to new scenarios
that do not appear in the entire training period, including navigating a
heterogeneous group of robots and a large-scale scenario with 100 robots.
Videos are available at https://sites.google.com/view/drlmaca
|
Pinxin Long, Tingxiang Fan, Xinyi Liao, Wenxi Liu, Hao Zhang and Jia
Pan
| null |
1709.10082
| null | null |
Learning Complex Dexterous Manipulation with Deep Reinforcement Learning
and Demonstrations
|
cs.LG cs.AI cs.RO
|
Dexterous multi-fingered hands are extremely versatile and provide a generic
way to perform a multitude of tasks in human-centric environments. However,
effectively controlling them remains challenging due to their high
dimensionality and large number of potential contacts. Deep reinforcement
learning (DRL) provides a model-agnostic approach to control complex dynamical
systems, but has not been shown to scale to high-dimensional dexterous
manipulation. Furthermore, deployment of DRL on physical systems remains
challenging due to sample inefficiency. Consequently, the success of DRL in
robotics has thus far been limited to simpler manipulators and tasks. In this
work, we show that model-free DRL can effectively scale up to complex
manipulation tasks with a high-dimensional 24-DoF hand, and solve them from
scratch in simulated experiments. Furthermore, with the use of a small number
of human demonstrations, the sample complexity can be significantly reduced,
which enables learning with sample sizes equivalent to a few hours of robot
experience. The use of demonstrations results in policies that exhibit very
natural movements and, surprisingly, are also substantially more robust.
|
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John
Schulman, Emanuel Todorov, Sergey Levine
| null |
1709.10087
| null | null |
Overcoming Exploration in Reinforcement Learning with Demonstrations
|
cs.LG cs.AI cs.NE cs.RO
|
Exploration in environments with sparse rewards has been a persistent problem
in reinforcement learning (RL). Many tasks are natural to specify with a sparse
reward, and manually shaping a reward function can result in suboptimal
performance. However, finding a non-zero reward is exponentially more difficult
with increasing task horizon or action dimensionality. This puts many
real-world tasks out of practical reach of RL methods. In this work, we use
demonstrations to overcome the exploration problem and successfully learn to
perform long-horizon, multi-step robotics tasks with continuous control such as
stacking blocks with a robot arm. Our method, which builds on top of Deep
Deterministic Policy Gradients and Hindsight Experience Replay, provides an
order of magnitude of speedup over RL on simulated robotics tasks. It is simple
to implement and makes only the additional assumption that we can collect a
small set of demonstrations. Furthermore, our method is able to solve tasks not
solvable by either RL or behavior cloning alone, and often ends up
outperforming the demonstrator policy.
|
Ashvin Nair, Bob McGrew, Marcin Andrychowicz, Wojciech Zaremba, Pieter
Abbeel
| null |
1709.10089
| null | null |
Online Learning with Randomized Feedback Graphs for Optimal PUE Attacks
in Cognitive Radio Networks
|
cs.NI cs.CR cs.LG
|
In a cognitive radio network, a secondary user learns the spectrum
environment and dynamically accesses the channel where the primary user is
inactive. At the same time, a primary user emulation (PUE) attacker can send
falsified primary user signals and prevent the secondary user from utilizing
the available channel. The best attacking strategies that an attacker can apply
have not been well studied. In this paper, for the first time, we study optimal
PUE attack strategies by formulating an online learning problem where the
attacker needs to dynamically decide the attacking channel in each time slot
based on its attacking experience. The challenge in our problem is that since
the PUE attack happens in the spectrum sensing phase, the attacker cannot
observe the reward on the attacked channel. To address this challenge, we
utilize the attacker's observation capability. We propose online learning-based
attacking strategies based on the attacker's observation capabilities. Through
our analysis, we show that with no observation within the attacking slot, the
attacker loses on the regret order, and with the observation of at least one
channel, there is a significant improvement on the attacking performance.
Observation of multiple channels does not give additional benefit to the
attacker (only a constant scaling) though it gives insight on the number of
observations required to achieve the minimum constant factor. Our proposed
algorithms are optimal in the sense that their regret upper bounds match their
corresponding regret lower bounds. We show consistency between simulation and
analytical results under various system parameters.
|
Monireh Dabaghchian, Amir Alipour-Fanid, Kai Zeng, Qingsi Wang, Peter
Auer
| null |
1709.10128
| null | null |
A Simple and Fast Algorithm for L1-norm Kernel PCA
|
stat.ML cs.LG
|
We present an algorithm for L1-norm kernel PCA and provide a convergence
analysis for it. While an optimal solution of L2-norm kernel PCA can be
obtained through matrix decomposition, finding that of L1-norm kernel PCA is
not trivial due to its non-convexity and non-smoothness. We provide a novel
reformulation through which an equivalent, geometrically interpretable problem
is obtained. Based on the geometric interpretation of the reformulated problem,
we present a fixed-point type algorithm that iteratively computes a binary
weight for each observation. As the algorithm requires only inner products of
data vectors, it is computationally efficient and the kernel trick is
applicable. In the convergence analysis, we show that the algorithm converges
to a local optimal solution in a finite number of steps. Moreover, we provide a
rate of convergence analysis, which has never been done for any L1-norm PCA
algorithm, proving that the sequence of objective values converges at a linear
rate. In numerical experiments, we show that the algorithm is robust in the
presence of entry-wise perturbations and computationally scalable, especially
in a large-scale setting. Lastly, we introduce an application to outlier
detection where the model based on the proposed algorithm outperforms the
benchmark algorithms.
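The fixed-point iteration for the first L1-norm principal component can be
sketched in the linear (non-kernel) case as below: each observation receives a
binary weight sign(w^T x_i), and w is the normalised signed sum. This is a
sketch of the underlying idea only; the paper's kernel version reformulates
the iteration so that only inner products of data vectors are needed.

    import numpy as np

    def l1_pca_first_component(X, n_iter=100, seed=0):
        # Fixed-point iteration: w <- normalize(sum_i sign(w^T x_i) x_i)
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            signs = np.sign(X @ w)
            signs[signs == 0] = 1.0      # break ties consistently
            w_new = X.T @ signs
            w_new /= np.linalg.norm(w_new)
            if np.allclose(w_new, w):    # converged in finitely many steps
                break
            w = w_new
        return w

    X = np.random.default_rng(1).normal(size=(200, 5))
    print(l1_pca_first_component(X))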
|
Cheolmin Kim, Diego Klabjan
|
10.1109/TPAMI.2019.2903505
|
1709.10152
| null | null |
Deep TAMER: Interactive Agent Shaping in High-Dimensional State Spaces
|
cs.AI cs.LG
|
While recent advances in deep reinforcement learning have allowed autonomous
learning agents to succeed at a variety of complex tasks, existing algorithms
generally require a lot of training data. One way to increase the speed at
which agents are able to learn to perform tasks is by leveraging the input of
human trainers. Although such input can take many forms, real-time,
scalar-valued feedback is especially useful in situations where it proves
difficult or impossible for humans to provide expert demonstrations. Previous
approaches have shown the usefulness of human input provided in this fashion
(e.g., the TAMER framework), but they have thus far not considered
high-dimensional state spaces or employed the use of deep learning. In this
paper, we do both: we propose Deep TAMER, an extension of the TAMER framework
that leverages the representational power of deep neural networks in order to
learn complex tasks in just a short amount of time with a human trainer. We
demonstrate Deep TAMER's success by using it and just 15 minutes of
human-provided feedback to train an agent that performs better than humans on
the Atari game of Bowling - a task that has proven difficult for even
state-of-the-art reinforcement learning methods.
|
Garrett Warnell, Nicholas Waytowich, Vernon Lawhern, Peter Stone
| null |
1709.10163
| null | null |
A Neural Comprehensive Ranker (NCR) for Open-Domain Question Answering
|
cs.CL cs.AI cs.LG cs.NE
|
This paper proposes a novel neural machine reading model for open-domain
question answering at scale. Existing machine comprehension models typically
assume that a short piece of relevant text containing answers is already
identified and given to the models, from which the models are designed to
extract answers. This assumption, however, is not realistic for building a
large-scale open-domain question answering system which requires both deep text
understanding and identifying relevant text from corpus simultaneously.
In this paper, we introduce Neural Comprehensive Ranker (NCR) that integrates
both passage ranking and answer extraction in one single framework. A Q&A
system based on this framework allows users to issue an open-domain question
without needing to provide a piece of text that must contain the answer.
Experiments show that the unified NCR model is able to outperform the
state-of-the-art in both retrieval of relevant text and answer extraction.
|
Bin Bi and Hao Ma
| null |
1709.10204
| null | null |
Provably Minimally-Distorted Adversarial Examples
|
cs.LG cs.AI cs.CR
|
The ability to deploy neural networks in real-world, safety-critical systems
is severely limited by the presence of adversarial examples: slightly perturbed
inputs that are misclassified by the network. In recent years, several
techniques have been proposed for increasing robustness to adversarial examples
--- and yet most of these have been quickly shown to be vulnerable to future
attacks. For example, over half of the defenses proposed by papers accepted at
ICLR 2018 have already been broken. We propose to address this difficulty
through formal verification techniques. We show how to construct provably
minimally distorted adversarial examples: given an arbitrary neural network and
input sample, we can construct adversarial examples which we prove are of
minimal distortion. Using this approach, we demonstrate that one of the recent
ICLR defense proposals, adversarial retraining, provably succeeds at increasing
the distortion required to construct adversarial examples by a factor of 4.2.
|
Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill
| null |
1709.10207
| null | null |
Comparison of PCA with ICA from data distribution perspective
|
stat.ML cs.LG
|
We performed an empirical comparison of ICA and PCA algorithms by applying
them on two simulated noisy time series with varying distribution parameters
and level of noise. In general, ICA shows better results than PCA because it
takes into account higher moments of data distribution. On the other hand, PCA
remains quite sensitive to the level of correlations among signals.
|
Miron Ivanov
| null |
1709.10222
| null | null |
DAGGER: A sequential algorithm for FDR control on DAGs
|
stat.ME cs.LG math.ST stat.ML stat.TH
|
We propose a linear-time, single-pass, top-down algorithm for multiple
testing on directed acyclic graphs (DAGs), where nodes represent hypotheses and
edges specify a partial ordering in which hypotheses must be tested. The
procedure is guaranteed to reject a sub-DAG with bounded false discovery rate
(FDR) while satisfying the logical constraint that a rejected node's parents
must also be rejected. It is designed for sequential testing settings, where
the DAG structure is known a priori but the $p$-values are obtained
selectively (such as in a sequence of experiments); however, the algorithm is
also applicable in
non-sequential settings when all $p$-values can be calculated in advance (such
as variable/model selection). Our DAGGER algorithm, shorthand for Greedily
Evolving Rejections on DAGs, provably controls the false discovery rate under
independence, positive dependence or arbitrary dependence of the $p$-values.
The DAGGER procedure specializes to known algorithms in the special cases of
trees and line graphs, and simplifies to the classical Benjamini-Hochberg
procedure when the DAG has no edges. We explore the empirical performance of
DAGGER using simulations, as well as a real dataset corresponding to a gene
ontology, showing favorable performance in terms of time and power.
|
Aaditya Ramdas, Jianbo Chen, Martin J. Wainwright, Michael I. Jordan
| null |
1709.10250
| null | null |
Privacy Preserving Identification Using Sparse Approximation with
Ambiguization
|
cs.CR cs.LG stat.ML
|
In this paper, we consider a privacy preserving encoding framework for
identification applications covering biometrics, physical object security and
the Internet of Things (IoT). The proposed framework is based on a sparsifying
transform, which consists of a trained linear map, an element-wise
nonlinearity, and privacy amplification. The sparsifying transform and privacy
amplification are not symmetric for the data owner and data user. We
demonstrate that the proposed approach is closely related to sparse ternary
codes (STC), a recent information-theoretic concept proposed for fast
approximate nearest neighbor (ANN) search in high-dimensional feature spaces,
which, being machine learning in nature, also offers significant benefits in
comparison to sparse approximation and binary embedding approaches. We
demonstrate that the privacy of the database outsourced to a server as well as
the privacy of the data user are preserved at low computational, storage, and
communication cost.
|
Behrooz Razeghi, Slava Voloshynovskiy, Dimche Kostadinov and Olga
Taran
| null |
1709.10297
| null | null |
A Nonlinear Orthogonal Non-Negative Matrix Factorization Approach to
Subspace Clustering
|
stat.ML cs.LG
|
A recent theoretical analysis shows the equivalence between non-negative
matrix factorization (NMF) and spectral clustering based approach to subspace
clustering. As NMF and many of its variants are essentially linear, we
introduce a nonlinear NMF with explicit orthogonality and derive general
kernel-based orthogonal multiplicative update rules to solve the subspace
clustering problem. In nonlinear orthogonal NMF framework, we propose two
subspace clustering algorithms, named kernel-based non-negative subspace
clustering KNSC-Ncut and KNSC-Rcut and establish their connection with spectral
normalized cut and ratio cut clustering. We further extend the nonlinear
orthogonal NMF framework and introduce a graph regularization to obtain a
factorization that respects a local geometric structure of the data after the
nonlinear mapping. The proposed NMF-based approach to subspace clustering takes
into account the nonlinear nature of the manifold, as well as its intrinsic
local geometry, which considerably improves the clustering performance when
compared to the several recently proposed state-of-the-art methods.
|
Dijana Tolic, Nino Antulov-Fantulin, Ivica Kopriva
|
10.1016/j.patcog.2018.04.029
|
1709.10323
| null | null |
Structured Embedding Models for Grouped Data
|
cs.CL cs.LG stat.ML
|
Word embeddings are a powerful approach for analyzing language, and
exponential family embeddings (EFE) extend them to other types of data. Here we
develop structured exponential family embeddings (S-EFE), a method for
discovering embeddings that vary across related groups of data. We study how
the word usage of U.S. Congressional speeches varies across states and party
affiliation, how words are used differently across sections of the ArXiv, and
how the co-purchase patterns of groceries can vary across seasons. Key to the
success of our method is that the groups share statistical information. We
develop two sharing strategies: hierarchical modeling and amortization. We
demonstrate the benefits of this approach in empirical studies of speeches,
abstracts, and shopping baskets. We show how S-EFE enables group-specific
interpretation of word usage, and outperforms EFE in predicting held-out data.
|
Maja Rudolph, Francisco Ruiz, Susan Athey, David Blei
| null |
1709.10367
| null | null |
An Empirical Evaluation of Rule Extraction from Recurrent Neural
Networks
|
cs.LG
|
Rule extraction from black-box models is critical in domains that require
model validation before implementation, as can be the case in credit scoring
and medical diagnosis. Though already a challenging problem in statistical
learning in general, the difficulty is even greater when highly non-linear,
recursive models, such as recurrent neural networks (RNNs), are fit to data.
Here, we study the extraction of rules from second-order recurrent neural
networks trained to recognize the Tomita grammars. We show that production
rules can be stably extracted from trained RNNs and that in certain cases the
rules outperform the trained RNNs.
|
Qinglong Wang, Kaixuan Zhang, Alexander G. Ororbia II, Xinyu Xing, Xue
Liu, C. Lee Giles
| null |
1709.10380
| null | null |
Learning how to learn: an adaptive dialogue agent for incrementally
learning visually grounded word meanings
|
cs.CL cs.AI cs.LG cs.RO
|
We present an optimised multi-modal dialogue agent for interactive learning
of visually grounded word meanings from a human tutor, trained on real
human-human tutoring data. Within a life-long interactive learning period, the
agent, trained using Reinforcement Learning (RL), must be able to handle
natural conversations with human users and achieve good learning performance
(accuracy) while minimising human effort in the learning process. We train and
evaluate this system in interaction with a simulated human tutor, which is
built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual
learning task. The results show that: 1) The learned policy can coherently
interact with the simulated user to achieve the goal of the task (i.e. learning
visual attributes of objects, e.g. colour and shape); and 2) it finds a better
trade-off between classifier accuracy and tutoring costs than hand-crafted
rule-based policies, including ones with dynamic policies.
|
Yanchao Yu, Arash Eshghi, Oliver Lemon
|
10.18653/v1/W17-2802
|
1709.10423
| null | null |
Training an adaptive dialogue policy for interactive learning of
visually grounded word meanings
|
cs.CL cs.AI cs.LG cs.RO
|
We present a multi-modal dialogue system for interactive learning of
perceptually grounded word meanings from a human tutor. The system integrates
an incremental, semantic parsing/generation framework - Dynamic Syntax and Type
Theory with Records (DS-TTR) - with a set of visual classifiers that are
learned throughout the interaction and which ground the meaning representations
that it produces. We use this system in interaction with a simulated human
tutor to study the effects of different dialogue policies and capabilities on
the accuracy of learned meanings, learning rates, and efforts/costs to the
tutor. We show that the overall performance of the learning agent is affected
by (1) who takes initiative in the dialogues; (2) the ability to express/use
their confidence level about visual attributes; and (3) the ability to process
elliptical and incrementally constructed dialogue turns. Ultimately, we train
an adaptive dialogue policy which optimises the trade-off between classifier
accuracy and tutoring costs.
|
Yanchao Yu, Arash Eshghi, Oliver Lemon
|
10.18653/v1/W16-3643
|
1709.10426
| null | null |
The BURCHAK corpus: a Challenge Data Set for Interactive Learning of
Visually Grounded Word Meanings
|
cs.CL cs.AI cs.LG cs.RO
|
We motivate and describe a new freely available human-human dialogue dataset
for interactive learning of visually grounded word meanings through ostensive
definition by a tutor to a learner. The data has been collected using a novel,
character-by-character variant of the DiET chat tool (Healey et al., 2003;
Mills and Healey, submitted) with a novel task, where a Learner needs to learn
invented visual attribute words (such as "burchak" for square) from a tutor.
As such, the text-based interactions closely resemble face-to-face conversation
and thus contain many of the linguistic phenomena encountered in natural,
spontaneous dialogue. These include self- and other-correction, mid-sentence
continuations, interruptions, overlaps, fillers, and hedges. We also present a
generic n-gram framework for building user (i.e. tutor) simulations from this
type of incremental data, which is freely available to researchers. We show
that the simulations produce outputs that are similar to the original data
(e.g. 78% turn match similarity). Finally, we train and evaluate a
Reinforcement Learning dialogue control agent for learning visually grounded
word meanings, trained from the BURCHAK corpus. The learned policy shows
comparable performance to a rule-based system built previously.
|
Yanchao Yu, Arash Eshghi, Gregory Mills, Oliver Joseph Lemon
|
10.18653/v1/W17-2001
|
1709.10431
| null | null |
Convergence Analysis of Distributed Stochastic Gradient Descent with
Shuffling
|
stat.ML cs.LG
|
When using stochastic gradient descent to solve large-scale machine learning
problems, a common practice of data processing is to shuffle the training data,
partition the data across multiple machines if needed, and then perform several
epochs of training on the re-shuffled (either locally or globally) data. The
above procedure makes the instances used to compute the gradients no longer
independently sampled from the training data set. Then does the distributed SGD
method have desirable convergence properties in this practical situation? In
this paper, we give answers to this question. First, we give a mathematical
formulation for the practical data processing procedure in distributed machine
learning, which we call data partition with global/local shuffling. We observe
that global shuffling is equivalent to without-replacement sampling if the
shuffling operations are independent. We prove that SGD with global shuffling
has convergence guarantees in both convex and non-convex cases. An interesting
finding is that non-convex tasks like deep learning are better suited to
shuffling than convex tasks. Second, we conduct the convergence analysis for
SGD with local shuffling. The convergence rate for local shuffling is slower
than that for global shuffling, since information is lost when there is no
communication between the data partitions. Finally,
we consider the situation when the permutation after shuffling is not uniformly
distributed (insufficient shuffling), and discuss the condition under which
this insufficiency will not influence the convergence rate. Our theoretical
results provide important insights to large-scale machine learning, especially
in the selection of data processing methods in order to achieve faster
convergence and good speedup. Our theoretical findings are verified by
extensive experiments on logistic regression and deep neural networks.
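The two data-processing schemes analysed above can be sketched as follows; the
helper names are illustrative, not taken from the paper.

    import numpy as np

    def global_shuffle_and_partition(data, n_workers, rng):
        # Global shuffling: permute the full dataset each epoch, then split
        # across workers; with independent per-epoch permutations this is
        # equivalent to without-replacement sampling.
        perm = rng.permutation(len(data))
        return np.array_split(data[perm], n_workers)

    def local_shuffle(partitions, rng):
        # Local shuffling: each worker only permutes its own fixed partition,
        # so no information crosses partition boundaries between epochs.
        return [p[rng.permutation(len(p))] for p in partitions]

    rng = np.random.default_rng(0)
    data = np.arange(12)
    parts = global_shuffle_and_partition(data, n_workers=3, rng=rng)
    print([p.tolist() for p in local_shuffle(parts, rng)])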
|
Qi Meng, Wei Chen, Yue Wang, Zhi-Ming Ma, Tie-Yan Liu
| null |
1709.10432
| null | null |
A representer theorem for deep kernel learning
|
cs.LG math.NA
|
In this paper we provide a finite-sample and an infinite-sample representer
theorem for the concatenation of (linear combinations of) kernel functions of
reproducing kernel Hilbert spaces. These results serve as mathematical
foundation for the analysis of machine learning algorithms based on
compositions of functions. As a direct consequence in the finite-sample case,
the corresponding infinite-dimensional minimization problems can be recast into
(nonlinear) finite-dimensional minimization problems, which can be tackled with
nonlinear optimization algorithms. Moreover, we show how concatenated machine
learning problems can be reformulated as neural networks and how our
representer theorem applies to a broad class of state-of-the-art deep learning
methods.
|
Bastian Bohn, Michael Griebel, Christian Rieger
| null |
1709.10441
| null | null |
Improving image generative models with human interactions
|
cs.CV cs.LG cs.NE
|
GANs provide a framework for training generative models which mimic a data
distribution. However, in many cases we wish to train these generative models
to optimize some auxiliary objective function within the data it generates,
such as making more aesthetically pleasing images. In some cases, these
objective functions are difficult to evaluate, e.g. they may require human
interaction. Here, we develop a system for efficiently improving a GAN to
target an objective involving human interaction, specifically generating images
that increase rates of positive user interactions. To improve the generative
model, we build a model of human behavior in the targeted domain from a
relatively small set of interactions, and then use this behavioral model as an
auxiliary loss function to improve the generative model. We show that this
system is successful at improving positive interaction rates, at least on
simulated data, and characterize some of the factors that affect its
performance.
|
Andrew Kyle Lampinen, David So, Douglas Eck, and Fred Bertsch
| null |
1709.10459
| null | null |
Self-supervised Deep Reinforcement Learning with Generalized Computation
Graphs for Robot Navigation
|
cs.LG cs.AI cs.RO
|
Enabling robots to autonomously navigate complex environments is essential
for real-world deployment. Prior methods approach this problem by having the
robot maintain an internal map of the world, and then use a localization and
planning method to navigate through the internal map. However, these approaches
often include a variety of assumptions, are computationally intensive, and do
not learn from failures. In contrast, learning-based methods improve as the
robot acts in the environment, but are difficult to deploy in the real-world
due to their high sample complexity. To address the need to learn complex
policies with few samples, we propose a generalized computation graph that
subsumes value-based model-free methods and model-based methods, with specific
instantiations interpolating between model-free and model-based. We then
instantiate this graph to form a navigation model that learns from raw images
and is sample efficient. Our simulated car experiments explore the design
decisions of our navigation model, and show our approach outperforms
single-step and $N$-step double Q-learning. We also evaluate our approach on a
real-world RC car and show it can learn to navigate through a complex indoor
environment with a few hours of fully autonomous, self-supervised training.
Videos of the experiments and code can be found at github.com/gkahn13/gcg
|
Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
| null |
1709.10489
| null | null |
A generalization of the Jensen divergence: The chord gap divergence
|
cs.LG cs.IT math.IT
|
We introduce a novel family of distances, called the chord gap divergences,
that generalizes the Jensen divergences (also called the Burbea-Rao distances),
and study its properties. It follows a generalization of the celebrated
statistical Bhattacharyya distance that is frequently met in applications. We
report an iterative concave-convex procedure for computing centroids, and
analyze the performance of the $k$-means++ clustering with respect to that new
dissimilarity measure by introducing the Taylor-Lagrange remainder form of the
skew Jensen divergences.
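For reference, the skew Jensen divergence that this family generalizes is, for
a strictly convex generator $F$ and skew parameter $\alpha \in (0,1)$,
$$J_F^{\alpha}(p, q) = (1-\alpha)\, F(p) + \alpha\, F(q) - F\big((1-\alpha)\, p + \alpha\, q\big),$$
with the symmetric Jensen (Burbea-Rao) case recovered at $\alpha = \tfrac{1}{2}$.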
|
Frank Nielsen
| null |
1709.10498
| null | null |
Unsupervised Domain Adaptation with Copula Models
|
cs.LG cs.CV stat.ML
|
We study the task of unsupervised domain adaptation, where no labeled data
from the target domain is provided during training time. To deal with the
potential discrepancy between the source and target distributions, both in
features and labels, we exploit a copula-based regression framework. The
benefits of this approach are two-fold: (a) it allows us to model a broader
range of conditional predictive densities beyond the common exponential family,
(b) we show how to leverage Sklar's theorem, the essence of the copula
formulation relating the joint density to the copula dependency functions, to
find effective feature mappings that mitigate the domain mismatch. By
transforming the data to a copula domain, we show on a number of benchmark
datasets (including human emotion estimation), and using different regression
models for prediction, that we can achieve a more robust and accurate
estimation of target labels, compared to recently proposed feature
transformation (adaptation) methods.
|
Cuong D. Tran and Ognjen Rudovic and Vladimir Pavlovic
| null |
1710.00018
| null | null |
Learning the Exact Topology of Undirected Consensus Networks
|
cs.SY cs.LG
|
In this article, we present a method to learn the interaction topology of a
network of agents undergoing linear consensus updates in a non invasive manner.
Our approach is based on multivariate Wiener filtering, which is known to
recover spurious edges apart from the true edges in the topology. The main
contribution of this work is to show that in the case of undirected consensus
networks, all spurious links obtained using Wiener filtering can be identified
using frequency response of the Wiener filters. Thus, the exact interaction
topology of the agents is unveiled. The method presented requires time series
measurements of the state of the agents and does not require any knowledge of
link weights. To the best of our knowledge this is the first approach that
provably reconstructs the structure of undirected consensus networks with
correlated noise. We illustrate the effectiveness of the method developed
through numerical simulations as well as experiments on a five node network of
Raspberry Pis.
|
Saurav Talukdar, Deepjyoti Deka, Sandeep Attree, Donatello Materassi
and Murti V. Salapaka
| null |
1710.00032
| null | null |
Language-dependent I-Vectors for LRE15
|
stat.ML cs.LG
|
A standard recipe for spoken language recognition is to apply a Gaussian
back-end to i-vectors. This ignores the uncertainty in the i-vector extraction,
which could be important especially for short utterances. A recent paper by
Cumani, Plchot and Fer proposes a solution to propagate that uncertainty into
the backend. We propose an alternative method of propagating the uncertainty.
|
Niko Br\"ummer and Albert Swart
| null |
1710.00085
| null | null |
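A minimal sketch of the standard recipe referenced above: a Gaussian back-end over
i-vectors with per-language means and one shared covariance. It deliberately ignores
the extraction uncertainty whose propagation is the subject of the paper, and the
array shapes are assumptions.

```python
import numpy as np

def fit_gaussian_backend(ivecs, labels):
    """Per-language means with a single shared within-class covariance."""
    classes = np.unique(labels)
    means = {c: ivecs[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([ivecs[labels == c] - means[c] for c in classes])
    prec = np.linalg.inv(np.cov(centered, rowvar=False)
                         + 1e-6 * np.eye(ivecs.shape[1]))
    return classes, means, prec

def score(x, classes, means, prec):
    """Language log-likelihoods up to a constant shared by all classes."""
    return {c: -0.5 * (x - means[c]) @ prec @ (x - means[c]) for c in classes}
```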
User-friendly guarantees for the Langevin Monte Carlo with inaccurate
gradient
|
math.ST cs.LG math.PR stat.CO stat.ML stat.TH
|
In this paper, we study the problem of sampling from a given probability
density function that is known to be smooth and strongly log-concave. We
analyze several methods of approximate sampling based on discretizations of the
(highly overdamped) Langevin diffusion and establish guarantees on their error
measured in the Wasserstein-2 distance. Our guarantees improve or extend the
state-of-the-art results in three directions. First, we provide an upper bound
on the error of the first-order Langevin Monte Carlo (LMC) algorithm with
optimized varying step-size. This result has the advantage of being horizon
free (we do not need to know in advance the target precision) and of improving
the corresponding result for the constant step-size by a logarithmic factor.
Second, we study the case where accurate evaluations of the gradient of the
log-density are unavailable, but one can have access to approximations of the
aforementioned gradient. In such a situation, we consider both deterministic
and stochastic approximations of the gradient and provide an upper bound on the
sampling error of the first-order LMC that quantifies the impact of the
gradient evaluation inaccuracies. Third, we establish upper bounds for two
versions of the second-order LMC, which leverage the Hessian of the
log-density. We provide nonasymptotic guarantees on the sampling error of these
second-order LMCs. These guarantees reveal that the second-order LMC algorithms
improve on the first-order LMC in ill-conditioned settings.
|
Arnak S. Dalalyan and Avetik G. Karagulyan
| null |
1710.00095
| null | null |
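A sketch of the first-order LMC iteration analyzed above, $x_{k+1} = x_k - \gamma
\nabla f(x_k) + \sqrt{2\gamma}\,\xi_k$ with $\xi_k \sim N(0, I)$. The
standard-Gaussian target and constant step size are illustrative; the paper's
inaccurate-gradient setting corresponds to handing this routine a deterministic or
stochastic approximation of the gradient.

```python
import numpy as np

def lmc(grad_f, x0, gamma, n_steps, rng=np.random.default_rng(0)):
    """First-order Langevin Monte Carlo with a constant step size gamma."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - gamma * grad_f(x) + np.sqrt(2.0 * gamma) * noise
        samples[k] = x
    return samples

# Target: standard Gaussian, f(x) = ||x||^2 / 2, so grad f(x) = x.
draws = lmc(lambda x: x, x0=np.zeros(2), gamma=0.05, n_steps=10000)
```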
Toward Scalable Machine Learning and Data Mining: the Bioinformatics
Case
|
cs.DC cs.LG stat.ML
|
In an effort to overcome the data deluge in computational biology and
bioinformatics and to facilitate bioinformatics research in the era of big
data, we identify some of the most influential algorithms that have been widely
used in the bioinformatics community. These top data mining and machine
learning algorithms cover classification, clustering, regression, graphical
model-based learning, and dimensionality reduction. The goal of this study is
to guide the focus of scalable computing experts in the endeavor of applying
new storage and scalable computation designs to bioinformatics algorithms that
merit their attention most, following the engineering maxim of "optimize the
common case".
|
Faraz Faghri, Sayed Hadi Hashemi, Mohammad Babaeizadeh, Mike A. Nalls,
Saurabh Sinha, Roy H. Campbell
| null |
1710.00112
| null | null |
Improved Training for Self-Training by Confidence Assessments
|
cs.LG
|
It is well known that for some tasks, labeled data sets may be hard to
gather. We therefore tackle the problem of insufficient training data. We
examine methods for learning from unlabeled data after an initial training on
a limited labeled data set. The suggested approach can be used as an online
learning method on the unlabeled test set. In the general classification task,
whenever we predict a label with high enough confidence, we treat it as a true
label and train on it accordingly. For the semantic segmentation task, a
classic example of an expensive data labeling process, we do so pixel-wise.
Our suggested approaches were applied to the MNIST data-set
as a proof of concept for a vision classification task and on the ADE20K
data-set in order to tackle the semi-supervised semantic segmentation problem.
|
Gal Hyams, Daniel Greenfeld, Dor Bank
| null |
1710.00209
| null | null |
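A minimal sketch of the confidence-thresholded self-training loop described above for
the general classification task; the logistic-regression model, two-Gaussian toy data,
0.95 threshold, and round count are illustrative assumptions rather than the paper's
MNIST/ADE20K setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(model, X_lab, y_lab, X_unl, threshold=0.95, rounds=5):
    """Retrain on pseudo-labels whose predicted confidence clears a threshold."""
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        model.fit(X, y)
        if len(X_unl) == 0:
            break
        proba = model.predict_proba(X_unl)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break
        X = np.vstack([X, X_unl[keep]])      # treat confident predictions
        y = np.concatenate([y, pred[keep]])  # as true labels and retrain
        X_unl = X_unl[~keep]
    return model

rng = np.random.default_rng(0)
X_all = np.vstack([rng.normal(0, 1, (150, 2)), rng.normal(3, 1, (150, 2))])
y_all = np.repeat([0, 1], 150)
idx = rng.permutation(300)
X_all, y_all = X_all[idx], y_all[idx]
model = self_train(LogisticRegression(), X_all[:20], y_all[:20], X_all[20:])
```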
The Deep Ritz method: A deep learning-based numerical algorithm for
solving variational problems
|
cs.LG stat.ML
|
We propose a deep learning based method, the Deep Ritz Method, for
numerically solving variational problems, particularly the ones that arise from
partial differential equations. The Deep Ritz method is naturally nonlinear,
naturally adaptive and has the potential to work in rather high dimensions. The
framework is quite simple and fits well with the stochastic gradient descent
method used in deep learning. We illustrate the method on several problems
including some eigenvalue problems.
|
Weinan E, Bing Yu
| null |
1710.00211
| null | null |
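A hedged one-dimensional sketch of the Deep Ritz idea: minimize the Ritz energy
$E(u) = \int \tfrac{1}{2}|u'|^2 - f u \, dx$ plus a boundary penalty by stochastic
gradient descent over Monte Carlo quadrature points, for $-u'' = f$ on $(0,1)$ with
exact solution $u^* = \sin(\pi x)$. Network width, penalty weight, and optimizer
settings are assumptions (a recent PyTorch is assumed), not the paper's configuration.

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: torch.pi ** 2 * torch.sin(torch.pi * x)  # so u* = sin(pi x)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)      # Monte Carlo quadrature
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du ** 2 - f(x) * u).mean()      # Ritz energy on (0, 1)
    bc = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()
    loss = energy + 100.0 * bc                      # penalize u(0), u(1) != 0
    opt.zero_grad()
    loss.backward()
    opt.step()
```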
Bayesian estimation from few samples: community detection and related
problems
|
cs.DS cs.CC cs.LG math.PR stat.ML
|
We propose an efficient meta-algorithm for Bayesian estimation problems that
is based on low-degree polynomials, semidefinite programming, and tensor
decomposition. The algorithm is inspired by recent lower bound constructions
for sum-of-squares and related to the method of moments. Our focus is on sample
complexity bounds that are as tight as possible (up to additive lower-order
terms) and often achieve statistical thresholds or conjectured computational
thresholds.
Our algorithm recovers the best known bounds for community detection in the
sparse stochastic block model, a widely-studied class of estimation problems
for community detection in graphs. We obtain the first recovery guarantees for
the mixed-membership stochastic block model (Airoldi et al.) in constant
average degree graphs---up to what we conjecture to be the computational
threshold for this model. We show that our algorithm exhibits a sharp
computational threshold for the stochastic block model with multiple
communities beyond the Kesten--Stigum bound---giving evidence that this task
may require exponential time.
The basic strategy of our algorithm is strikingly simple: we compute the
best-possible low-degree approximation for the moments of the posterior
distribution of the parameters and use a robust tensor decomposition algorithm
to recover the parameters from these approximate posterior moments.
|
Samuel B. Hopkins and David Steurer
| null |
1710.00264
| null | null |
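For orientation only, a toy two-community stochastic block model with recovery by the
sign of the adjacency matrix's second eigenvector. This simple spectral method is
adequate in the moderately dense regime simulated here, whereas the paper's
meta-algorithm targets tight guarantees down to the sparse Kesten--Stigum regime where
naive spectral methods fail; the parameters n, a, b are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 1000, 12.0, 3.0                    # expected within/between degrees
z = np.repeat([0, 1], n // 2)                # planted community labels
p = np.where(z[:, None] == z[None, :], a / n, b / n)
A = np.triu(rng.random((n, n)) < p, 1).astype(float)
A = A + A.T                                  # symmetric simple graph

vals, vecs = np.linalg.eigh(A)
z_hat = (vecs[:, -2] > 0).astype(int)        # sign of 2nd-top eigenvector
acc = max((z_hat == z).mean(), ((1 - z_hat) == z).mean())
```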
A Versatile Approach to Evaluating and Testing Automated Vehicles based
on Kernel Methods
|
cs.LG cs.RO stat.ML
|
Evaluation and validation of complicated control systems are crucial to
guarantee usability and safety. Usually, failure happens in some very rarely
encountered situations, but once triggered, the consequence is disastrous.
Accelerated Evaluation is a methodology that efficiently tests those
rarely-occurring yet critical failures via smartly-sampled test cases. The
distribution used in sampling is pivotal to the performance of the method, but
building a suitable distribution requires case-by-case analysis. This paper
proposes a versatile approach for constructing the sampling distribution using
a kernel method. The approach uses statistical learning tools to approximate
the critical event sets and constructs distributions based on the unique
properties of Gaussian distributions. We apply the method to evaluate
automated vehicles. Numerical experiments show that the proposed approach can
robustly identify the rare failures and significantly reduce the evaluation
time.
|
Zhiyuan Huang, Yaohui Guo, Henry Lam, Ding Zhao
| null |
1710.00283
| null | null |
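A toy sketch of the importance-sampling principle behind Accelerated Evaluation:
estimating a rare failure probability by sampling from a distribution shifted toward
the failure region and reweighting by the likelihood ratio. The scalar event, the
hand-picked N(4, 1) proposal, and the sample sizes are assumptions; the paper's
contribution is constructing such proposals automatically with kernel methods.

```python
import numpy as np

rng = np.random.default_rng(0)
fail = lambda x: x > 4.0                 # rare failure event under N(0, 1)

# Naive Monte Carlo: P(X > 4) ~ 3.2e-5, so 1e5 draws see almost no failures.
p_naive = fail(rng.standard_normal(100_000)).mean()

# Importance sampling: propose from N(4, 1), reweight by phi(z) / q(z)
# (the normalizing constants cancel because the variances match).
z = rng.normal(4.0, 1.0, size=100_000)
w = np.exp(-0.5 * z ** 2) / np.exp(-0.5 * (z - 4.0) ** 2)
p_is = (fail(z) * w).mean()              # low-variance estimate of the tail
```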
libact: Pool-based Active Learning in Python
|
cs.LG
|
libact is a Python package designed to make active learning easier for
general users. The package not only implements several popular active learning
strategies, but also features the active-learning-by-learning meta-algorithm
that assists the users to automatically select the best strategy on the fly.
Furthermore, the package provides a unified interface for implementing more
strategies, models and application-specific labelers. The package is
open-source on GitHub, and can be easily installed from the Python Package
Index repository.
|
Yao-Yuan Yang, Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen,
Hsuan-Tien Lin
| null |
1710.00379
| null | null |
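A usage sketch of the pool-based loop; the interface names (Dataset, IdealLabeler,
UncertaintySampling, make_query, update, train) follow the project's README, but exact
signatures may differ across versions, and the toy data, seed count, and query budget
below are assumptions.

```python
# pip install libact
import numpy as np
from libact.base.dataset import Dataset
from libact.labelers import IdealLabeler
from libact.models import LogisticRegression
from libact.query_strategies import UncertaintySampling

X = np.random.default_rng(0).standard_normal((100, 5))
y_true = (X[:, 0] > 0).astype(int)
y = [y_true[i] if i < 10 else None for i in range(100)]  # 10 labeled seeds

trn_ds = Dataset(X, y)                     # None marks unlabeled entries
lbr = IdealLabeler(Dataset(X, y_true))     # oracle that knows all labels
qs = UncertaintySampling(trn_ds, method='lc', model=LogisticRegression())

for _ in range(20):                        # query budget of 20 labels
    ask_id = qs.make_query()               # strategy selects the next example
    trn_ds.update(ask_id, lbr.label(X[ask_id]))
model = LogisticRegression()
model.train(trn_ds)                        # libact models train on a Dataset
```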
Privacy with Estimation Guarantees
|
cs.IT cs.LG math.IT
|
We study the central problem in data privacy: how to share data with an
analyst while providing both privacy and utility guarantees to the user that
owns the data. In this setting, we present an estimation-theoretic analysis of
the privacy-utility trade-off (PUT). Here, an analyst is allowed to reconstruct
(in a mean-squared error sense) certain functions of the data (utility), while
other private functions should not be reconstructed with distortion below a
certain threshold (privacy). We demonstrate how chi-square information captures
the fundamental PUT in this case and provide bounds for the best PUT. We
propose a convex program to compute privacy-assuring mappings when the
functions to be disclosed and hidden are known a priori and the data
distribution is known. We derive lower bounds on the minimum mean-squared error
of estimating a target function from the disclosed data and evaluate the
robustness of our approach when an empirical distribution is used to compute
the privacy-assuring mappings instead of the true data distribution. We
illustrate the proposed approach through two numerical experiments.
|
Hao Wang, Lisa Vo, Flavio P. Calmon, Muriel M\'edard, Ken R. Duffy,
Mayank Varia
| null |
1710.00447
| null | null |
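A toy numeric sketch of the MMSE privacy metric used above: for a hypothetical
discrete source and a fixed disclosure channel P(y|x), the best mean-squared estimate
of a private function s(X) from the disclosed Y is E[s(X)|Y], and the achieved
distortion is its MSE. The channel and function below are assumptions; the paper's
convex program chooses the channel itself subject to such constraints.

```python
import numpy as np

# Toy source X uniform on {0,1,2,3}; disclosure channel keeps X with prob 0.7
# and flips to each other symbol with prob 0.1; private function s = parity.
P_x = np.full(4, 0.25)
P_y_given_x = 0.7 * np.eye(4) + 0.1 * (np.ones((4, 4)) - np.eye(4))
s = np.array([0, 1, 0, 1], dtype=float)

P_xy = P_x[:, None] * P_y_given_x            # joint distribution over (x, y)
P_y = P_xy.sum(axis=0)
P_x_given_y = P_xy / P_y                     # Bayes posterior, columns over y

s_hat = s @ P_x_given_y                      # E[s(X) | Y = y] for each y
mmse = ((s[:, None] - s_hat[None, :]) ** 2 * P_xy).sum()
```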
Asymptotic Allocation Rules for a Class of Dynamic Multi-armed Bandit
Problems
|
cs.LG
|
This paper presents a class of Dynamic Multi-Armed Bandit problems where the
reward can be modeled as the noisy output of a time varying linear stochastic
dynamic system that satisfies some boundedness constraints. The class allows
many seemingly different problems with time varying option characteristics to
be considered in a single framework. It also opens up the possibility of
considering many new problems of practical importance. For instance, it
affords the simultaneous consideration of temporal option unavailabilities and
the dependencies between options with time-varying option characteristics in a
seamless manner. We show that, for this class of problems, the combination of
any Upper Confidence Bound type algorithm with any efficient reward estimator
for the expected reward ensures the logarithmic bounding of the expected
cumulative regret. We demonstrate the versatility of the approach by the
explicit consideration of a new example of practical interest.
|
T. W. U. Madhushani, D. H. S. Maithripala and N. E. Leonard
| null |
1710.00450
| null | null |
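A minimal sketch of the Upper Confidence Bound allocation rule the result above builds
on, here with a static running-mean estimator on two assumed Gaussian arms; in the
paper's dynamic setting this estimator is replaced by an efficient estimator of the
time-varying expected reward, with the same logarithmic regret guarantee.

```python
import numpy as np

def ucb(arms, horizon):
    """UCB-type allocation: pull the arm maximizing empirical mean plus a
    confidence width; arms[i]() returns a noisy reward."""
    k = len(arms)
    counts, sums = np.zeros(k), np.zeros(k)
    for t in range(horizon):
        if t < k:
            i = t                          # initialization: pull each arm once
        else:
            index = sums / counts + np.sqrt(2 * np.log(t) / counts)
            i = int(np.argmax(index))
        counts[i] += 1
        sums[i] += arms[i]()
    return counts

rng = np.random.default_rng(1)
pulls = ucb([lambda: rng.normal(0.3, 1.0), lambda: rng.normal(0.5, 1.0)], 5000)
# The better arm (mean 0.5) receives the bulk of the pulls.
```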
Deep Abstract Q-Networks
|
cs.LG cs.AI
|
We examine the problem of learning and planning on high-dimensional domains
with long horizons and sparse rewards. Recent approaches have shown great
successes in many Atari 2600 domains. However, domains with long horizons and
sparse rewards, such as Montezuma's Revenge and Venture, remain challenging for
existing methods. Methods using abstraction (Dietterich 2000; Sutton, Precup,
and Singh 1999) have been shown to be useful in tackling long-horizon problems. We
combine recent techniques of deep reinforcement learning with existing
model-based approaches using an expert-provided state abstraction. We construct
toy domains that elucidate the problem of long horizons, sparse rewards and
high-dimensional inputs, and show that our algorithm significantly outperforms
previous methods on these domains. Our abstraction-based approach outperforms
Deep Q-Networks (Mnih et al. 2015) on Montezuma's Revenge and Venture, and
exhibits backtracking behavior that is absent from previous methods.
|
Melrose Roderick, Christopher Grimm, Stefanie Tellex
| null |
1710.00459
| null | null |
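A heavily simplified sketch of the high-level piece of such an architecture: tabular
Q-learning over an expert-provided state abstraction phi(s). The env interface
(reset, step, n_actions) and phi are assumed placeholders, and in the paper the
low-level transitions between abstract states are handled by deep Q-networks rather
than tabulated.

```python
import numpy as np
from collections import defaultdict

def abstract_q_learning(env, phi, episodes, alpha=0.1, gamma=0.99, eps=0.1,
                        rng=np.random.default_rng(0)):
    """Tabular Q-learning on abstract states z = phi(s)."""
    Q = defaultdict(lambda: np.zeros(env.n_actions))
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            z = phi(s)
            a = (rng.integers(env.n_actions) if rng.random() < eps
                 else int(np.argmax(Q[z])))        # epsilon-greedy action
            s2, r, done = env.step(a)
            target = r + gamma * (0.0 if done else Q[phi(s2)].max())
            Q[z][a] += alpha * (target - Q[z][a])  # one-step TD update
            s = s2
    return Q
```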
Weighted-SVD: Matrix Factorization with Weights on the Latent Factors
|
cs.IR cs.LG stat.ML
|
The Matrix Factorization models, sometimes called the latent factor models,
are a family of methods in the recommender system research area to (1) generate
the latent factors for the users and the items and (2) predict users' ratings
on items based on their latent factors. However, current Matrix Factorization
models presume that all the latent factors are equally weighted, which may not
always be a reasonable assumption in practice. In this paper, we propose a new
model, called Weighted-SVD, to integrate the linear regression model with the
SVD model such that each latent factor is accompanied by a corresponding weight
parameter. This mechanism allows the latent factors to have different weights
in influencing the final ratings. The complexity of the Weighted-SVD model is
slightly larger than the SVD model but much smaller than the SVD++ model. We
compared the Weighted-SVD model with several latent factor models on five
public datasets based on the Root-Mean-Squared-Errors (RMSEs). The results show
that the Weighted-SVD model outperforms the baseline methods in all the
experimental datasets under almost all settings.
|
Hung-Hsuan Chen
| null |
1710.00482
| null | null |
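A sketch of the model as described: SGD on r_hat(u, i) = mu + b_u + b_i +
sum_k w_k * p_{uk} * q_{ik}, an SVD-style factorization with one learned weight per
latent factor. The bias terms and all hyperparameters below are illustrative
assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_svd(ratings, n_users, n_items, k=10, lr=0.005, reg=0.02,
                 epochs=20, rng=np.random.default_rng(0)):
    """SGD for mu + b_u + b_i + w @ (P[u] * Q[i]) with per-factor weights w."""
    mu = np.mean([r for _, _, r in ratings])
    bu, bi = np.zeros(n_users), np.zeros(n_items)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    w = np.ones(k)                                  # per-factor weights
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - (mu + bu[u] + bi[i] + w @ (P[u] * Q[i]))
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            # Simultaneous update: right-hand sides use the old values.
            P[u], Q[i], w = (P[u] + lr * (e * w * Q[i] - reg * P[u]),
                             Q[i] + lr * (e * w * P[u] - reg * Q[i]),
                             w + lr * (e * P[u] * Q[i] - reg * w))
    return mu, bu, bi, P, Q, w

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0)]
params = weighted_svd(ratings, n_users=2, n_items=3, k=4)
```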