title (string, 5-246 chars) | categories (string, 5-94 chars, nullable) | abstract (string, 54-5.03k chars) | authors (string, 0-6.72k chars) | doi (string, 12-54 chars, nullable) | id (string, 6-10 chars, nullable) | year (float64, nullable) | venue (string, 13 classes)
---|---|---|---|---|---|---|---|
A Practically Competitive and Provably Consistent Algorithm for Uplift
Modeling | cs.LG cs.AI stat.ML | Randomized experiments have been critical tools of decision making for
decades. However, subjects can show significant heterogeneity in response to
treatments in many important applications. Therefore it is not enough to simply
know which treatment is optimal for the entire population. What we need is a
model that correctly customizes treatment assignment based on subject
characteristics. The problem of constructing such models from randomized
experiment data is known as Uplift Modeling in the literature. Many algorithms
have been proposed for uplift modeling and some have generated promising
results on various data sets. Yet little is known about the theoretical
properties of these algorithms. In this paper, we propose a new tree-based
ensemble algorithm for uplift modeling. Experiments show that our algorithm can
achieve competitive results on both synthetic and industry-provided data. In
addition, by properly tuning the "node size" parameter, our algorithm is proved
to be consistent under mild regularity conditions. This is the first consistent
algorithm for uplift modeling that we are aware of.
| Yan Zhao, Xiao Fang, David Simchi-Levi | null | 1709.03683 | null | null |
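To make the estimand concrete: uplift modeling seeks $\tau(x) = E[Y \mid X=x, T=1] - E[Y \mid X=x, T=0]$ from randomized data. Below is a minimal sketch of the generic "two-model" baseline that tree-based uplift methods like the paper's are typically compared against. This is not the paper's consistent ensemble, and the `min_samples_leaf` knob only loosely mirrors the "node size" parameter mentioned in the abstract.

```python
# Generic "two-model" uplift baseline (illustration only, not the paper's
# algorithm): fit separate outcome models on treated and control units;
# the uplift estimate is their prediction gap.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_two_model_uplift(X, y, t, min_samples_leaf=50):
    # Larger leaves trade variance for bias, loosely analogous to the
    # "node size" parameter discussed in the abstract.
    m1 = RandomForestRegressor(min_samples_leaf=min_samples_leaf).fit(X[t == 1], y[t == 1])
    m0 = RandomForestRegressor(min_samples_leaf=min_samples_leaf).fit(X[t == 0], y[t == 0])
    return lambda Xnew: m1.predict(Xnew) - m0.predict(Xnew)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
t = rng.integers(0, 2, size=2000)              # randomized assignment
y = X[:, 0] * t + 0.1 * rng.normal(size=2000)  # true uplift depends on X[:, 0]
uplift = fit_two_model_uplift(X, y, t)
print(uplift(X[:5]))
```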
RRA: Recurrent Residual Attention for Sequence Learning | cs.LG cs.AI | In this paper, we propose a recurrent neural network (RNN) with residual
attention (RRA) to learn long-range dependencies from sequential data. We
propose to add residual connections across timesteps to RNN, which explicitly
enhances the interaction between current state and hidden states that are
several timesteps apart. This also allows training errors to be directly
back-propagated through residual connections and effectively alleviates the
vanishing gradient problem. We further reformulate an attention mechanism over
residual connections. An attention gate is defined to summarize the individual
contribution from multiple previous hidden states in computing the current
state. We evaluate RRA on three tasks: the adding problem, pixel-by-pixel MNIST
classification and sentiment analysis on the IMDB dataset. Our experiments
demonstrate that RRA yields better performance, faster convergence and more
stable training compared to a standard LSTM network. Furthermore, RRA shows
highly competitive performance to the state-of-the-art methods.
| Cheng Wang | null | 1709.03714 | null | null |
Adaptive Graph Signal Processing: Algorithms and Optimal Sampling
Strategies | cs.LG cs.SY | The goal of this paper is to propose novel strategies for adaptive learning
of signals defined over graphs, which are observed over a (randomly
time-varying) subset of vertices. We recast two classical adaptive algorithms
in the graph signal processing framework, namely, the least mean squares (LMS)
and the recursive least squares (RLS) adaptive estimation strategies. For both
methods, a detailed mean-square analysis illustrates the effect of random
sampling on the adaptive reconstruction capability and the steady-state
performance. Then, several probabilistic sampling strategies are proposed to
design the sampling probability at each node in the graph, with the aim of
optimizing the tradeoff between steady-state performance, graph sampling rate,
and convergence rate of the adaptive algorithms. Finally, a distributed RLS
strategy is derived and is shown to be convergent to its centralized
counterpart. Numerical simulations carried out over both synthetic and real
data illustrate the good performance of the proposed sampling and
reconstruction strategies for (possibly distributed) adaptive learning of
signals defined over graphs.
| Paolo Di Lorenzo, Paolo Banelli, Elvin Isufi, Sergio Barbarossa, Geert
Leus | 10.1109/TSP.2018.2835384 | 1709.03726 | null | null |
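For readers who want the flavor of the LMS recursion described above, here is a minimal numpy sketch under standard assumptions (a known bandlimited subspace spanned by the first F Laplacian eigenvectors). The variable names and toy setup are illustrative, not taken from the paper.

```python
# Minimal sketch of adaptive LMS reconstruction of a bandlimited graph
# signal from randomly sampled vertices.
import numpy as np

def graph_lms(U_F, masks, ys, mu=0.5):
    """x[k+1] = x[k] + mu * B * D[k] * (y[k] - x[k]), where B = U_F U_F^T is
    the bandlimiting projector and D[k] a time-varying sampling mask."""
    B = U_F @ U_F.T
    x = np.zeros(U_F.shape[0])
    for y, mask in zip(ys, masks):
        x = x + mu * B @ (mask * (y - x))   # observe only sampled vertices
    return x

# Toy usage: path graph, bandlimited signal, half the vertices sampled per step.
rng = np.random.default_rng(0)
N, F, T = 30, 5, 400
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # path-graph Laplacian
_, U = np.linalg.eigh(L)
x_true = U[:, :F] @ rng.normal(size=F)
masks = (rng.random(size=(T, N)) < 0.5).astype(float)
ys = x_true + 0.05 * rng.normal(size=(T, N))
print(np.linalg.norm(graph_lms(U[:, :F], masks, ys) - x_true))  # small residual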
Interpreting Shared Deep Learning Models via Explicable Boundary Trees | cs.LG cs.HC | Despite outperforming humans in many tasks, deep neural network models are
also criticized for the lack of transparency and interpretability in decision
making. The opaqueness results in uncertainty and low confidence when deploying
such a model in model sharing scenarios, where the model is developed by a third
party. For a supervised machine learning model, sharing training process
including training data provides an effective way to gain trust and to better
understand model predictions. However, it is not always possible to share all
training data due to privacy and policy constraints. In this paper, we propose
a method to disclose a small set of training data that is just sufficient for
users to gain insight into a complicated model. The method constructs a
boundary tree using selected training data and the tree is able to approximate
the complicated model with high fidelity. We show that traversing data points
in the tree gives users significantly better understanding of the model and
paves the way for trustworthy model sharing.
| Huijun Wu, Chen Wang, Jie Yin, Kai Lu, Liming Zhu | null | 1709.03730 | null | null |
Learning Graph-Level Representation for Drug Discovery | cs.LG stat.ML | Predicting macroscopic influences of drugs on the human body, like efficacy and
toxicity, is a central problem of small-molecule based drug discovery.
Molecules can be represented as undirected graphs, and we can utilize graph
convolutional networks to predict molecular properties. However, graph
convolutional networks and other graph neural networks all focus on learning
node-level representation rather than graph-level representation. Previous
works simply sum all feature vectors for all nodes in the graph to obtain the
graph feature vector for drug prediction. In this paper, we introduce a dummy
super node that is connected with all nodes in the graph by a directed edge as
the representation of the graph and modify the graph operation to help the
dummy super node learn graph-level features. Thus, we can handle graph-level
classification and regression in the same way as node-level classification and
regression. In addition, we apply focal loss to address class imbalance in drug
datasets. The experiments on MoleculeNet show that our method can effectively
improve the performance of molecular property prediction.
| Junying Li, Deng Cai, Xiaofei He | null | 1709.03741 | null | null |
Deep Mean-Shift Priors for Image Restoration | cs.CV cs.LG | In this paper we introduce a natural image prior that directly represents a
Gaussian-smoothed version of the natural image distribution. We include our
prior in a formulation of image restoration as a Bayes estimator that also
allows us to solve noise-blind image restoration problems. We show that the
gradient of our prior corresponds to the mean-shift vector on the natural image
distribution. In addition, we learn the mean-shift vector field using denoising
autoencoders, and use it in a gradient descent approach to perform Bayes risk
minimization. We demonstrate competitive results for noise-blind deblurring,
super-resolution, and demosaicing.
| Siavash Arjomand Bigdeli, Meiguang Jin, Paolo Favaro, Matthias Zwicker | null | 1709.03749 | null | null |
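The abstract's key relation is that the gradient of the prior is approximated by the mean-shift vector, which a trained denoising autoencoder $r$ provides as $(r(x) - x)/\sigma^2$. A hedged sketch of the resulting gradient-descent restoration loop follows; `dae` and `data_grad` are placeholders, not the authors' trained network or exact estimator.

```python
# Hedged sketch of Bayes-risk gradient descent with a mean-shift prior:
# the prior gradient is the DAE's mean-shift vector field.
import numpy as np

def restore(y, dae, data_grad, sigma=0.1, lr=0.1, steps=100):
    x = y.copy()
    for _ in range(steps):
        prior_grad = (dae(x) - x) / sigma**2        # mean-shift vector field
        x = x - lr * (data_grad(x, y) - prior_grad)
    return x

dae = lambda x: 0.9 * x                     # placeholder denoiser (shrinks towards 0)
data_grad = lambda x, y: (x - y) / 0.5**2   # Gaussian likelihood gradient (toy)
print(restore(np.ones(4), dae, data_grad))
```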
Learning with Bounded Instance- and Label-dependent Label Noise | stat.ML cs.LG | Instance- and Label-dependent label Noise (ILN) widely exists in real-world
datasets but has been rarely studied. In this paper, we focus on Bounded
Instance- and Label-dependent label Noise (BILN), a particular case of ILN
where the label noise rates -- the probabilities that the true labels of
examples flip into the corrupted ones -- have an upper bound less than $1$.
Specifically, we introduce the concept of distilled examples, i.e. examples
whose labels are identical with the labels assigned for them by the Bayes
optimal classifier, and prove that under certain conditions classifiers learnt
on distilled examples will converge to the Bayes optimal classifier. Inspired
by the idea of learning with distilled examples, we then propose a learning
algorithm with theoretical guarantees for its robustness to BILN. Finally,
empirical evaluations on both synthetic and real-world datasets show the
effectiveness of our algorithm in learning with BILN.
| Jiacheng Cheng, Tongliang Liu, Kotagiri Ramamohanarao, Dacheng Tao | null | 1709.03768 | null | null |
Dual Discriminator Generative Adversarial Nets | cs.LG stat.ML | We propose in this paper a novel approach to tackle the problem of mode
collapse encountered in generative adversarial network (GAN). Our idea is
intuitive but proven to be very effective, especially in addressing some key
limitations of GAN. In essence, it combines the Kullback-Leibler (KL) and
reverse KL divergences into a unified objective function, thus it exploits the
complementary statistical properties from these divergences to effectively
diversify the estimated density in capturing multi-modes. We term our method
dual discriminator generative adversarial nets (D2GAN) which, unlike GAN, has
two discriminators; together with a generator, it retains the analogy of a
minimax game, wherein one discriminator assigns high scores to samples from the
data distribution whilst the other, conversely, favors data from the
generator, and the generator produces data to fool both discriminators. We
develop theoretical analysis to show that, given the maximal discriminators,
optimizing the generator of D2GAN reduces to minimizing both KL and reverse KL
divergences between data distribution and the distribution induced from the
data generated by the generator, hence effectively avoiding the mode collapse
problem. We conduct extensive experiments on synthetic and real-world
large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made
our best effort to compare our D2GAN with the latest state-of-the-art GAN
variants in comprehensive qualitative and quantitative evaluations. The
experimental results demonstrate the competitive and superior performance of
our approach in generating good quality and diverse samples over baselines, and
the capability of our method to scale up to ImageNet database.
| Tu Dinh Nguyen, Trung Le, Hung Vu, Dinh Phung | null | 1709.03831 | null | null |
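As we read the abstract, the three-player objective combines a KL-oriented and a reverse-KL-oriented discriminator. A compact rendering follows; this is our reconstruction, not copied verbatim from the paper, with weights $\alpha, \beta$ and positive-valued discriminators $D_1, D_2$:

```latex
% Hedged reconstruction of the two-discriminator objective; alpha and beta
% weight the KL and reverse-KL contributions.
\min_G \max_{D_1, D_2} \;
    \alpha \, \mathbb{E}_{x \sim P_{\mathrm{data}}}[\log D_1(x)]
  + \mathbb{E}_{z \sim P_z}[-D_1(G(z))]
  + \mathbb{E}_{x \sim P_{\mathrm{data}}}[-D_2(x)]
  + \beta \, \mathbb{E}_{z \sim P_z}[\log D_2(G(z))]
```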
Imitation Learning for Vision-based Lane Keeping Assistance | cs.LG | This paper aims to investigate direct imitation learning from human drivers
for the task of lane keeping assistance in highway and country roads using
grayscale images from a single front view camera. The employed method utilizes
convolutional neural networks (CNN) to act as a policy that is driving a
vehicle. The policy is successfully learned via imitation learning using
real-world data collected from human drivers and is evaluated in closed-loop
simulated environments, demonstrating good driving behaviour and robustness
to domain changes. Evaluation is based on two proposed performance metrics
measuring how well the vehicle is positioned in a lane and the smoothness of
the driven trajectory.
| Christopher Innocenti, Henrik Lind\'en, Ghazaleh Panahandeh, Lennart
Svensson, Nasser Mohammadiha | null | 1709.03853 | null | null |
Meta-QSAR: a large-scale application of meta-learning to drug design and
discovery | cs.AI cs.LG | We investigate the learning of quantitative structure activity relationships
(QSARs) as a case-study of meta-learning. This application area is of the
highest societal importance, as it is a key step in the development of new
medicines. The standard QSAR learning problem is: given a target (usually a
protein) and a set of chemical compounds (small molecules) with associated
bioactivities (e.g. inhibition of the target), learn a predictive mapping from
molecular representation to activity. Although almost every type of machine
learning method has been applied to QSAR learning, there is no agreed single
best way of learning QSARs, and therefore the problem area is well-suited to
meta-learning. We first carried out the most comprehensive ever comparison of
machine learning methods for QSAR learning: 18 regression methods, 6 molecular
representations, applied to more than 2,700 QSAR problems. (These results have
been made publicly available on OpenML and represent a valuable resource for
testing novel meta-learning methods.) We then investigated the utility of
algorithm selection for QSAR problems. We found that this meta-learning
approach outperformed the best individual QSAR learning method (random forests
using a molecular fingerprint representation) by up to 13%, on average. We
conclude that meta-learning outperforms base-learning methods for QSAR
learning, and as this investigation is one of the most extensive comparisons
of base- and meta-learning methods ever made, it provides evidence
for the general effectiveness of meta-learning over base-learning.
| Ivan Olier, Noureddin Sadawi, G. Richard Bickerton, Joaquin
Vanschoren, Crina Grosan, Larisa Soldatova and Ross D. King | null | 1709.03854 | null | null |
Agnostic Learning by Refuting | cs.LG | The sample complexity of learning a Boolean-valued function class is
precisely characterized by its Rademacher complexity. This has little bearing,
however, on the sample complexity of \emph{efficient} agnostic learning.
We introduce \emph{refutation complexity}, a natural computational analog of
Rademacher complexity of a Boolean concept class and show that it exactly
characterizes the sample complexity of \emph{efficient} agnostic learning.
Informally, refutation complexity of a class $\mathcal{C}$ is the minimum
number of example-label pairs required to efficiently distinguish between the
case that the labels correlate with the evaluation of some member of
$\mathcal{C}$ (\emph{structure}) and the case where the labels are i.i.d.
Rademacher random variables (\emph{noise}). The easy direction of this
relationship was implicitly used in the recent framework for improper PAC
learning lower bounds of Daniely and co-authors via connections to the hardness
of refuting random constraint satisfaction problems. Our work can be seen as
making the relationship between agnostic learning and refutation implicit in
their work into an explicit equivalence. In a recent, independent work, Salil
Vadhan discovered a similar relationship between refutation and PAC-learning in
the realizable (i.e. noiseless) case.
| Pravesh K. Kothari and Roi Livni | null | 1709.03871 | null | null |
High-Dimensional Dependency Structure Learning for Physical Processes | cs.LG stat.ML | In this paper, we consider the use of structure learning methods for
probabilistic graphical models to identify statistical dependencies in
high-dimensional physical processes. Such processes are often synthetically
characterized using PDEs (partial differential equations) and are observed in a
variety of natural phenomena, including geoscience data capturing atmospheric
and hydrological phenomena. Classical structure learning approaches such as the
PC algorithm and variants are challenging to apply due to their high
computational and sample requirements. Modern approaches, often based on sparse
regression and variants, do come with finite sample guarantees, but are usually
highly sensitive to the choice of hyper-parameters, e.g., the parameter $\lambda$
of the sparsity-inducing constraint or regularization. In this paper, we present
ACLIME-ADMM, an efficient two-step algorithm for adaptive structure learning,
which estimates an edge specific parameter $\lambda_{ij}$ in the first step,
and uses these parameters to learn the structure in the second step. Both steps
of our algorithm use (inexact) ADMM to solve suitable linear programs, and all
iterations can be done in closed form in an efficient block parallel manner. We
compare ACLIME-ADMM with baselines on both synthetic data simulated by partial
differential equations (PDEs) that model advection-diffusion processes, and
real data (50 years) of daily global geopotential heights to study information
flow in the atmosphere. ACLIME-ADMM is shown to be efficient, stable, and
competitive, usually better than the baselines especially on difficult
problems. On real data, ACLIME-ADMM recovers the underlying structure of global
atmospheric circulation, including switches in wind directions at the equator
and tropics entirely from the data.
| Jamal Golmohammadi, Imme Ebert-Uphoff, Sijie He, Yi Deng and Arindam
Banerjee | null | 1709.03891 | null | null |
End-to-End United Video Dehazing and Detection | cs.CV cs.AI cs.LG | The recent development of CNN-based image dehazing has revealed the
effectiveness of end-to-end modeling. However, extending the idea to end-to-end
video dehazing has not been explored yet. In this paper, we propose an
End-to-End Video Dehazing Network (EVD-Net), to exploit the temporal
consistency between consecutive video frames. A thorough study has been
conducted over a number of structure options, to identify the best temporal
fusion strategy. Furthermore, we build an End-to-End United Video Dehazing and
Detection Network (EVDD-Net), which concatenates and jointly trains EVD-Net with
a video object detection model. The resulting augmented end-to-end pipeline has
demonstrated much more stable and accurate detection results in hazy video.
| Boyi Li and Xiulian Peng and Zhangyang Wang and Jizheng Xu and Dan
Feng | null | 1709.03919 | null | null |
Deep Reinforcement Learning with Surrogate Agent-Environment Interface | cs.LG | In this paper, we propose surrogate agent-environment interface (SAEI) in
reinforcement learning. We also show that learning based on the probability
surrogate agent-environment interface yields an optimal policy for the task
agent-environment interface. We introduce a surrogate probability action and
develop the probability surrogate action deterministic policy gradient (PSADPG)
algorithm based on SAEI. This algorithm enables continuous control of discrete
actions. The experiments show that PSADPG achieves the performance of DQN in
certain tasks, exhibiting the nature of a stochastic optimal policy in the
initial training stage.
| Song Wang, Yu Jing | null | 1709.03942 | null | null |
Support Spinor Machine | cs.LG eess.SP q-fin.ST stat.ML | We generalize the support vector machine to a support spinor machine by using
the mathematical structure of the wedge product in order to extend the
underlying field from a vector field to a spinor field. The separating
hyperplane is extended to a Kolmogorov space of time series data, which allows
us to extend the structure of the support vector machine to a support tensor
machine and a support tensor machine moduli space. We test the support spinor
machine on one-class classification of endpoints in the physiological state of
time series data after empirical mode analysis, and compare it with a support
vector machine. We implement the support spinor machine algorithm using
Holo-Hilbert amplitude modulation for fully nonlinear and nonstationary time
series data analysis.
| Kabin Kanjamapornkul, Richard Pin\v{c}\'ak, Sanphet Chunithpaisan,
Erik Barto\v{s} | 10.1016/j.dsp.2017.07.023 | 1709.03943 | null | null |
Multimodal Content Analysis for Effective Advertisements on YouTube | cs.AI cs.LG cs.MM cs.NE | The rapid advances in e-commerce and Web 2.0 technologies have greatly
increased the impact of commercial advertisements on the general public. As a
key enabling technology, a multitude of recommender systems exist which
analyze user features and browsing patterns to recommend appealing
advertisements to users. In this work, we seek to study the
attributes that characterize an effective advertisement and recommend a useful
set of features to aid the designing and production processes of commercial
advertisements. We analyze the temporal patterns from multimedia content of
advertisement videos including auditory, visual and textual components, and
study their individual roles and synergies in the success of an advertisement.
The objective of this work is then to measure the effectiveness of an
advertisement, and to recommend a useful set of features to advertisement
designers to make it more successful and approachable to users. Our proposed
framework employs the signal processing technique of cross modality feature
learning where data streams from different components are employed to train
separate neural network models and are then fused together to learn a shared
representation. Subsequently, a neural network model trained on this joint
feature embedding representation is utilized as a classifier to predict
advertisement effectiveness. We validate our approach using subjective ratings
from a dedicated user study, the sentiment strength of online viewer comments,
and a viewer opinion metric of the ratio of the Likes and Views received by
each advertisement from an online platform.
| Nikhita Vedula, Wei Sun, Hyunhwan Lee, Harsh Gupta, Mitsunori Ogihara,
Joseph Johnson, Gang Ren, Srinivasan Parthasarathy | null | 1709.03946 | null | null |
Refining Source Representations with Relation Networks for Neural
Machine Translation | cs.CL cs.AI cs.LG | Although neural machine translation (NMT) with the encoder-decoder framework
has achieved great success in recent times, it still suffers from some
drawbacks: RNNs tend to forget old information that is often useful, and the
encoder operates only over words without considering word relationships. To
solve these problems, we introduce relation networks (RN) into NMT to refine
the encoding representations of the source. In our method, the RN first
augments the representation of each source word with its neighbors and reasons
about all the possible pairwise relations between them. Then the source
representations and all the relations are fed to the attention module and the
decoder together, keeping the main encoder-decoder architecture unchanged.
Experiments on two Chinese-to-English data sets in different scales both show
that our method can outperform the competitive baselines significantly.
| Wen Zhang, Jiawei Hu, Yang Feng, Qun Liu | null | 1709.03980 | null | null |
Adaptive Exploration-Exploitation Tradeoff for Opportunistic Bandits | cs.LG stat.ML | In this paper, we propose and study opportunistic bandits - a new variant of
bandits where the regret of pulling a suboptimal arm varies under different
environmental conditions, such as network load or produce price. When the
load/price is low, so is the cost/regret of pulling a suboptimal arm (e.g.,
trying a suboptimal network configuration). Therefore, intuitively, we could
explore more when the load/price is low and exploit more when the load/price is
high. Inspired by this intuition, we propose an Adaptive Upper-Confidence-Bound
(AdaUCB) algorithm to adaptively balance the exploration-exploitation tradeoff
for opportunistic bandits. We prove that AdaUCB achieves $O(\log T)$ regret
with a smaller coefficient than the traditional UCB algorithm. Furthermore,
AdaUCB achieves $O(1)$ regret with respect to $T$ if the exploration cost is
zero when the load level is below a certain threshold. Last, based on both
synthetic data and real-world traces, experimental results show that AdaUCB
significantly outperforms other bandit algorithms, such as UCB and TS (Thompson
Sampling), under large load/price fluctuations.
| Huasen Wu, Xueying Guo, Xin Liu | null | 1709.04004 | null | null |
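A minimal sketch of the load-adaptive intuition stated above: keep the exploration bonus when the load (hence the regret of a suboptimal pull) is low, and act greedily when it is high. This is a schematic reading, not AdaUCB's exact index; `load_threshold` and `alpha` are illustrative parameters.

```python
# Schematic load-adaptive UCB (our reading of the intuition, not AdaUCB's
# exact index): the exploration bonus is kept only when exploring is cheap.
import numpy as np

def ada_ucb_pull(means, counts, t, load, load_threshold=0.5, alpha=2.0):
    scale = 1.0 if load <= load_threshold else 0.0
    bonus = scale * np.sqrt(alpha * np.log(max(t, 2)) / np.maximum(counts, 1))
    return int(np.argmax(means + bonus))

means = np.array([0.6, 0.5, 0.45])
counts = np.array([50, 2, 5])
print(ada_ucb_pull(means, counts, t=58, load=0.2))  # low load: explores arm 1
print(ada_ucb_pull(means, counts, t=58, load=0.9))  # high load: greedy arm 0
```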
Shifting Mean Activation Towards Zero with Bipolar Activation Functions | stat.ML cs.LG cs.NE | We propose a simple extension to the ReLU-family of activation functions that
allows them to shift the mean activation across a layer towards zero. Combined
with proper weight initialization, this alleviates the need for normalization
layers. We explore the training of deep vanilla recurrent neural networks
(RNNs) with up to 144 layers, and show that bipolar activation functions help
learning in this setting. On the Penn Treebank and Text8 language modeling
tasks we obtain competitive results, improving on the best reported results for
non-gated networks. In experiments with convolutional neural networks without
batch normalization, we find that bipolar activations produce a faster drop in
training error and result in a lower test error on the CIFAR-10
classification task.
| Lars Eidnes, Arild N{\o}kland | null | 1709.04054 | null | null |
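The bipolar construction is simple enough to state in a few lines: for half of the units in a layer apply f, and for the other half apply -f(-x), so positive and negative activations roughly cancel in the layer mean. A minimal numpy version for ReLU (our own illustration):

```python
# Bipolar ReLU: even-indexed units use relu(x), odd-indexed units use
# -relu(-x) (which equals min(x, 0)), shifting the layer mean towards zero.
import numpy as np

def bipolar_relu(x):
    """x has shape (..., units)."""
    return np.where(np.arange(x.shape[-1]) % 2 == 0,
                    np.maximum(x, 0.0),   # standard ReLU
                    np.minimum(x, 0.0))   # -relu(-x)

print(bipolar_relu(np.array([[1.0, 1.0, -1.0, -1.0]])))  # [[ 1.  0.  0. -1.]]
```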
Parallelizing Linear Recurrent Neural Nets Over Sequence Length | cs.NE cs.AI cs.LG | Recurrent neural networks (RNNs) are widely used to model sequential data but
their non-linear dependencies between sequence elements prevent parallelizing
training over sequence length. We show that the training of RNNs with only linear
sequential dependencies can be parallelized over the sequence length using the
parallel scan algorithm, leading to rapid training on long sequences even with
small minibatch size. We develop a parallel linear recurrence CUDA kernel and
show that it can be applied to immediately speed up training and inference of
several state-of-the-art RNN architectures by up to 9x. We abstract recent work
on linear RNNs into a new framework of linear surrogate RNNs and develop a
linear surrogate model for the long short-term memory unit, the GILR-LSTM, that
utilizes parallel linear recurrence. We extend sequence learning to new
extremely long sequence regimes that were previously out of reach by
successfully training a GILR-LSTM on a synthetic sequence classification task
with a one million timestep dependency.
| Eric Martin, Chris Cundy | null | 1709.04057 | null | null |
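The enabling observation is that the linear recurrence h[t] = a[t] * h[t-1] + b[t] composes associatively: applying (a1, b1) then (a2, b2) equals applying (a1*a2, a2*b1 + b2), so all prefixes can be computed by a balanced scan in O(log n) parallel steps. A numpy recursive-doubling sketch follows; the paper's actual implementation is a CUDA kernel, so this is only illustrative.

```python
# Recursive-doubling (Hillis-Steele) scan for h[t] = a[t]*h[t-1] + b[t].
import numpy as np

def linear_scan(a, b):
    a, b = a.copy(), b.copy()
    n, shift = len(a), 1
    while shift < n:                      # O(log n) parallel steps
        a_prev = np.concatenate([np.ones(shift), a[:-shift]])
        b_prev = np.concatenate([np.zeros(shift), b[:-shift]])
        b = a * b_prev + b                # compose (a_prev, b_prev) then (a, b)
        a = a * a_prev
        shift *= 2
    return b                              # b[t] now equals h[t] with h[-1] = 0

a, b = np.random.rand(8), np.random.randn(8)
h, seq = 0.0, []
for t in range(8):
    h = a[t] * h + b[t]
    seq.append(h)
print(np.allclose(linear_scan(a, b), seq))  # True
```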
Variational Reasoning for Question Answering with Knowledge Graph | cs.LG cs.AI cs.CL | Knowledge graph (KG) is known to be helpful for the task of question
answering (QA), since it provides well-structured relational information
between entities, and allows one to further infer indirect facts. However, it
is challenging to build QA systems which can learn to reason over knowledge
graphs based on question-answer pairs alone. First, when people ask questions,
their expressions are noisy (for example, typos in texts, or variations in
pronunciations), which is non-trivial for the QA system to match those
mentioned entities to the knowledge graph. Second, many questions require
multi-hop logic reasoning over the knowledge graph to retrieve the answers. To
address these challenges, we propose a novel and unified deep learning
architecture, and an end-to-end variational learning algorithm which can handle
noise in questions, and learn multi-hop reasoning simultaneously. Our method
achieves state-of-the-art performance on a recent benchmark dataset in the
literature. We also derive a series of new benchmark datasets, including
questions for multi-hop reasoning, questions paraphrased by neural translation
model, and questions in human voice. Our method yields very promising results
on all these challenging datasets.
| Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander J. Smola, Le Song | null | 1709.04071 | null | null |
Linear Stochastic Approximation: Constant Step-Size and Iterate
Averaging | cs.LG cs.SY stat.ML | We consider $d$-dimensional linear stochastic approximation algorithms (LSAs)
with a constant step-size and the so called Polyak-Ruppert (PR) averaging of
iterates. LSAs are widely applied in machine learning and reinforcement
learning (RL), where the aim is to compute an appropriate $\theta_{*} \in
\mathbb{R}^d$ (that is an optimum or a fixed point) using noisy data and $O(d)$
updates per iteration. In this paper, we are motivated by the problem (in RL)
of policy evaluation from experience replay using the \emph{temporal
difference} (TD) class of learning algorithms that are also LSAs. For LSAs with
a constant step-size, and PR averaging, we provide bounds for the mean squared
error (MSE) after $t$ iterations. We assume that data is \iid with finite
variance (underlying distribution being $P$) and that the expected dynamics is
Hurwitz. For a given LSA with PR averaging, and data distribution $P$
satisfying the said assumptions, we show that there exists a range of constant
step-sizes such that its MSE decays as $O(\frac{1}{t})$.
We examine the conditions under which a constant step-size can be chosen
uniformly for a class of data distributions $\mathcal{P}$, and show that not
all data distributions `admit' such a uniform constant step-size. We also
suggest a heuristic step-size tuning algorithm to choose a constant step-size
of a given LSA for a given data distribution $P$. We compare our results with
related work and also discuss the implication of our results in the context of
TD algorithms that are LSAs.
| Chandrashekar Lakshminarayanan and Csaba Szepesv\'ari | null | 1709.04073 | null | null |
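A small numpy sketch of the setting above: theta* solves A theta = b, each iteration sees a noisy pair (A_k, b_k), the step size gamma is constant, and the Polyak-Ruppert average of the iterates is the reported estimate. The constants and the noise model are our own toy choices.

```python
# Constant step-size LSA with Polyak-Ruppert iterate averaging (toy setup).
import numpy as np

rng = np.random.default_rng(0)
d, T, gamma = 4, 20000, 0.05
M = np.eye(d) + 0.1 * rng.normal(size=(d, d))
A = M @ M.T                                   # positive definite => -A is Hurwitz
b = rng.normal(size=d)
theta_star = np.linalg.solve(A, b)

theta, avg = np.zeros(d), np.zeros(d)
for k in range(1, T + 1):
    A_k = A + 0.1 * rng.normal(size=(d, d))   # i.i.d. noisy observations
    b_k = b + 0.1 * rng.normal(size=d)
    theta = theta + gamma * (b_k - A_k @ theta)
    avg += (theta - avg) / k                  # running Polyak-Ruppert average
print(np.linalg.norm(avg - theta_star))       # MSE of the average decays as O(1/t)
```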
Pre-training Neural Networks with Human Demonstrations for Deep
Reinforcement Learning | cs.LG cs.AI | Deep reinforcement learning (deep RL) has achieved superior performance in
complex sequential tasks by using a deep neural network as its function
approximator and by learning directly from raw images. A drawback of using raw
images is that deep RL must learn the state feature representation from the raw
images in addition to learning a policy. As a result, deep RL can require a
prohibitively large amount of training time and data to reach reasonable
performance, making it difficult to use deep RL in real-world applications,
especially when data is expensive. In this work, we speed up training by
addressing half of what deep RL is trying to solve --- learning features. Our
approach is to learn some of the important features by pre-training the deep RL
network's hidden layers via supervised learning using a small set of human
demonstrations. We empirically evaluate our approach using deep Q-network (DQN)
and asynchronous advantage actor-critic (A3C) algorithms on the Atari 2600
games of Pong, Freeway, and Beamrider. Our results show that: 1) pre-training
with human demonstrations in a supervised learning manner is better at
discovering features relative to pre-training naively in DQN, and 2)
initializing a deep RL network with a pre-trained model provides a significant
improvement in training time even when pre-training from a small number of
human demonstrations.
| Gabriel V. de la Cruz Jr, Yunshu Du and Matthew E. Taylor | null | 1709.04083 | null | null |
A Constrained, Weighted-L1 Minimization Approach for Joint Discovery of
Heterogeneous Neural Connectivity Graphs | q-bio.NC cs.LG | Determining functional brain connectivity is crucial to understanding the
brain and neural differences underlying disorders such as autism. Recent
studies have used Gaussian graphical models to learn brain connectivity via
statistical dependencies across brain regions from neuroimaging. However,
previous studies often fail to properly incorporate priors tailored to
neuroscience, such as preferring shorter connections. To remedy this problem,
this paper introduces a novel, weighted-$\ell_1$, multi-task graphical
model (W-SIMULE). This model elegantly incorporates a flexible prior, along
with a parallelizable formulation. Additionally, W-SIMULE extends the
often-used Gaussian assumption, leading to considerable performance increases.
Here, applications to fMRI data show that W-SIMULE succeeds in determining
functional connectivity in terms of (1) log-likelihood, (2) finding edges that
differentiate groups, and (3) classifying different groups based on their
connectivity, achieving 58.6\% accuracy on the ABIDE dataset. Having
established W-SIMULE's effectiveness, we use it to link four key areas to autism, all of
which are consistent with the literature. Due to its elegant domain adaptivity,
W-SIMULE can be readily applied to various data types to effectively estimate
connectivity.
| Chandan Singh, Beilun Wang, Yanjun Qi | null | 1709.04090 | null | null |
Co-training for Demographic Classification Using Deep Learning from
Label Proportions | cs.CV cs.LG stat.ML | Deep learning algorithms have recently produced state-of-the-art accuracy in
many classification tasks, but this success is typically dependent on access to
many annotated training examples. For domains without such data, an attractive
alternative is to train models with light, or distant supervision. In this
paper, we introduce a deep neural network for the Learning from Label
Proportion (LLP) setting, in which the training data consist of bags of
unlabeled instances with associated label distributions for each bag. We
introduce a new regularization layer, Batch Averager, that can be appended to
the last layer of any deep neural network to convert it from supervised
learning to LLP. This layer can be implemented readily with existing deep
learning packages. To further support domains in which the data consist of two
conditionally independent feature views (e.g. image and text), we propose a
co-training algorithm that iteratively generates pseudo bags and refits the
deep LLP model to improve classification accuracy. We demonstrate our models on
demographic attribute classification (gender and race/ethnicity), which has
many applications in social media analysis, public health, and marketing. We
conduct experiments to predict demographics of Twitter users based on their
tweets and profile image, without requiring any user-level annotations for
training. We find that the deep LLP approach outperforms baselines for both
text and image features separately. Additionally, we find that the co-training
algorithm improves image and text classification by 4% and 8% absolute F1,
respectively. Finally, an ensemble of text and image classifiers further
improves the absolute F1 measure by 4% on average.
| Ehsan Mohammady Ardehaly, Aron Culotta | 10.1109/ICDMW.2017.144 | 1709.04108 | null | null |
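The Batch Averager idea, as described, reduces to comparing the batch mean of instance-level predictions with the bag's known label proportions. A hedged numpy sketch of one plausible bag-level loss follows; the KL choice and shapes are our assumptions, not necessarily the paper's exact layer.

```python
# Sketch of a bag-level LLP loss: average per-instance predictions over a
# bag and penalize divergence from the bag's known label proportions.
import numpy as np

def batch_averager_loss(instance_probs, bag_proportions, eps=1e-8):
    """instance_probs: (bag_size, n_classes) softmax outputs for one bag;
    bag_proportions: (n_classes,) known label distribution of the bag."""
    bag_mean = instance_probs.mean(axis=0)
    # KL(bag_proportions || bag_mean) as the bag-level training signal
    return np.sum(bag_proportions * (np.log(bag_proportions + eps)
                                     - np.log(bag_mean + eps)))

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(batch_averager_loss(probs, np.array([0.5, 0.5])))
```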
Empower Sequence Labeling with Task-Aware Neural Language Model | cs.CL cs.LG | Linguistic sequence labeling is a general modeling approach that encompasses
a variety of problems, such as part-of-speech tagging and named entity
recognition. Recent advances in neural networks (NNs) make it possible to build
reliable models without handcrafted features. However, in many cases, it is
hard to obtain sufficient annotations to train these models. In this study, we
develop a novel neural framework to extract abundant knowledge hidden in raw
texts to empower the sequence labeling task. Besides word-level knowledge
contained in pre-trained word embeddings, character-aware neural language
models are incorporated to extract character-level knowledge. Transfer learning
techniques are further adopted to mediate different components and guide the
language model towards the key knowledge. Compared to previous methods, this
task-specific knowledge allows us to adopt a more concise model and conduct
more efficient training. Different from most transfer learning methods, the
proposed framework does not rely on any additional supervision. It extracts
knowledge from self-contained order information of training sequences.
Extensive experiments on benchmark datasets demonstrate the effectiveness of
leveraging character-level knowledge and the efficiency of co-training. For
example, on the CoNLL03 NER task, model training completes in about 6 hours on
a single GPU, reaching F1 score of 91.71$\pm$0.10 without using any extra
annotation.
| Liyuan Liu, Jingbo Shang, Frank F. Xu, Xiang Ren, Huan Gui, Jian Peng,
Jiawei Han | null | 1709.04109 | null | null |
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial
Examples | stat.ML cs.CR cs.LG | Recent studies have highlighted the vulnerability of deep neural networks
(DNNs) to adversarial examples - a visually indistinguishable adversarial image
can easily be crafted to cause a well-trained model to misclassify. Existing
methods for crafting adversarial examples are based on $L_2$ and $L_\infty$
distortion metrics. However, despite the fact that $L_1$ distortion accounts
for the total variation and encourages sparsity in the perturbation, little has
been developed for crafting $L_1$-based adversarial examples. In this paper, we
formulate the process of attacking DNNs via adversarial examples as an
elastic-net regularized optimization problem. Our elastic-net attacks to DNNs
(EAD) feature $L_1$-oriented adversarial examples and include the
state-of-the-art $L_2$ attack as a special case. Experimental results on MNIST,
CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial
examples with small $L_1$ distortion and attains similar attack performance to
the state-of-the-art methods in different attack scenarios. More importantly,
EAD leads to improved attack transferability and complements adversarial
training for DNNs, suggesting novel insights on leveraging $L_1$ distortion in
adversarial machine learning and security implications of DNNs.
| Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi and Cho-Jui Hsieh | null | 1709.04114 | null | null |
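The abstract's objective is an elastic-net regularized attack: minimize $c \cdot f(x') + \beta \|x'-x\|_1 + \|x'-x\|_2^2$, whose L1 term is handled naturally by a proximal (ISTA-style) update. Below is a sketch under that reading, with `smooth_grad` standing in for the gradient of the smooth terms (the loss $f$ plus the L2 penalty); it is our illustration, not the paper's released code.

```python
# ISTA-style step for an elastic-net attack: gradient step on the smooth
# terms, then soft-thresholding of the perturbation around the original x.
import numpy as np

def soft_threshold(z, thresh):
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def ead_step(x_adv, x_orig, smooth_grad, lr=0.1, beta=0.05):
    z = x_adv - lr * smooth_grad(x_adv)
    return x_orig + soft_threshold(z - x_orig, lr * beta)

# Toy demo: the smooth gradient pulls towards a target; the L1 prox keeps
# the perturbation sparse.
x0 = np.zeros(5)
g = lambda x: 2.0 * (x - np.array([0.0, 0.5, 0.0, 0.0, 0.0]))
x = x0.copy()
for _ in range(200):
    x = ead_step(x, x0, g)
print(np.round(x, 3))  # sparse perturbation concentrated on one coordinate
```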
HitFraud: A Broad Learning Approach for Collective Fraud Detection in
Heterogeneous Information Networks | cs.LG cs.CR | On electronic game platforms, different payment transactions have different
levels of risk. Risk is generally higher for digital goods in e-commerce.
However, it differs based on product and its popularity, the offer type
(packaged game, virtual currency to a game or subscription service), storefront
and geography. Existing fraud policies and models make decisions independently
for each transaction based on transaction attributes, payment velocities, user
characteristics, and other relevant information. However, suspicious
transactions may still evade detection and hence we propose a broad learning
approach leveraging a graph based perspective to uncover relationships among
suspicious transactions, i.e., inter-transaction dependency. Our focus is to
detect suspicious transactions by capturing common fraudulent behaviors that
would not be considered suspicious when being considered in isolation. In this
paper, we present HitFraud that leverages heterogeneous information networks
for collective fraud detection by exploring correlated and fast evolving
fraudulent behaviors. First, a heterogeneous information network is designed to
link entities of interest in the transaction database via different semantics.
Then, graph based features are efficiently discovered from the network
exploiting the concept of meta-paths, and decisions on frauds are made
collectively on test instances. Experiments on real-world payment transaction
data from Electronic Arts demonstrate that the prediction performance is
effectively boosted by HitFraud with fast convergence where the computation of
meta-path based features is largely optimized. Notably, recall is improved by
up to 7.93% and F-score by 4.62% compared to baselines.
| Bokai Cao, Mia Mao, Siim Viidu, Philip S. Yu | null | 1709.04129 | null | null |
Recursive Exponential Weighting for Online Non-convex Optimization | cs.LG | In this paper, we investigate the online non-convex optimization problem,
which generalizes the classic online convex optimization problem by relaxing
the convexity assumption on the cost function.
For this type of problem, the classic exponential weighting online algorithm
has recently been shown to attain a sub-linear regret of $O(\sqrt{T\log T})$.
In this paper, we introduce a novel recursive structure to the online
algorithm to define a recursive exponential weighting algorithm that attains a
regret of $O(\sqrt{T})$, matching the well-known regret lower bound.
To the best of our knowledge, this is the first online algorithm with
provable $O(\sqrt{T})$ regret for the online non-convex optimization problem.
| Lin Yang, Cheng Tan and Wing Shing Wong | null | 1709.04136 | null | null |
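For context, the exponential-weighting primitive that the recursive algorithm builds on maintains a distribution over a discretized decision set; a minimal sketch follows (the grid size and eta are illustrative choices, and the paper's contribution is the recursive wrapper around updates of this form).

```python
# Minimal exponential-weighting primitive over a discretized decision set.
import numpy as np

def exp_weights(cum_losses, eta):
    w = np.exp(-eta * (cum_losses - cum_losses.min()))  # stabilized weights
    return w / w.sum()

grid = np.linspace(0.0, 1.0, 101)        # discretized decision set in [0, 1]
cum = np.zeros_like(grid)
for t in range(200):
    cum += (grid - 0.3) ** 2             # loss of every grid point this round
p = exp_weights(cum, eta=0.3)
print(grid[np.argmax(p)])                # the distribution concentrates near 0.3
```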
Asymptotic Bayesian Generalization Error in Latent Dirichlet Allocation
and Stochastic Matrix Factorization | math.ST cs.LG stat.ML stat.TH | Latent Dirichlet allocation (LDA) is useful in document analysis, image
processing, and many information systems; however, its generalization
performance has been left unknown because it is a singular learning machine to
which regular statistical theory cannot be applied.
Stochastic matrix factorization (SMF) is a restricted matrix factorization in
which the matrix factors are stochastic; each column of such a matrix lies in a simplex.
SMF is being applied to image recognition and text mining. We can understand
SMF as a statistical model by which a stochastic matrix of given data is
represented by a product of two stochastic matrices, whose generalization
performance has also been left unknown because of non-regularity.
In this paper, by using an algebraic and geometric method, we show the
analytic equivalence of LDA and SMF, both of which have the same real log
canonical threshold (RLCT); consequently, they asymptotically have the same
Bayesian generalization error and the same log marginal likelihood. Moreover,
we derive an upper bound on the RLCT and prove that it is smaller than the
dimension of the parameter divided by two; hence, their Bayesian generalization
errors are smaller than those of regular statistical models.
| Naoki Hayashi and Sumio Watanabe | 10.1007/s42979-020-0071-3 | 1709.04212 | null | null |
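The main claims compress into two displays (our rendering of the abstract; n is the sample size, d the parameter dimension, and lambda the shared RLCT). For a regular model of the same dimension, lambda would equal d/2, so the bound strictly improves on the regular case.

```latex
% Our compact rendering of the abstract's claims: LDA and SMF share one RLCT,
% which governs the asymptotic Bayesian generalization error G_n.
G_n = \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right),
\qquad
\lambda_{\mathrm{LDA}} = \lambda_{\mathrm{SMF}} = \lambda < \frac{d}{2}
```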
Action Schema Networks: Generalised Policies with Deep Learning | cs.AI cs.LG | In this paper, we introduce the Action Schema Network (ASNet): a neural
network architecture for learning generalised policies for probabilistic
planning problems. By mimicking the relational structure of planning problems,
ASNets are able to adopt a weight-sharing scheme which allows the network to be
applied to any problem from a given planning domain. This allows the cost of
training the network to be amortised over all problems in that domain. Further,
we propose a training method which balances exploration and supervised training
on small problems to produce a policy which remains robust when evaluated on
larger problems. In experiments, we show that ASNet's learning capability
allows it to significantly outperform traditional non-learning planners in
several challenging domains.
| Sam Toyer, Felipe Trevizan, Sylvie Thi\'ebaux, Lexing Xie | null | 1709.04271 | null | null |
Automated Cloud Provisioning on AWS using Deep Reinforcement Learning | cs.DC cs.AI cs.LG | As the use of cloud computing continues to rise, controlling cost becomes
increasingly important. Yet there is evidence that 30\% - 45\% of cloud spend
is wasted. Existing tools for cloud provisioning typically rely on highly
trained human experts to specify what to monitor, thresholds for triggering
action, and actions. In this paper we explore the use of reinforcement learning
(RL) to acquire policies to balance performance and spend, allowing humans to
specify what they want as opposed to how to do it, minimizing the need for
cloud expertise. Empirical results with tabular, deep, and dueling double deep
Q-learning with the CloudSim simulator show the utility of RL and the relative
merits of the approaches. We also demonstrate effective policy transfer
learning from an extremely simple simulator to CloudSim, with the next step
being transfer from CloudSim to an Amazon Web Services physical environment.
| Zhiguang Wang, Chul Gwon, Tim Oates, Adam Iezzi | null | 1709.04305 | null | null |
Pattern Recognition using Artificial Immune System | cs.NE cs.LG | In this thesis, the use of Artificial Immune Systems (AIS) in machine
learning is studied. The thesis focuses on immune-inspired algorithms
such as the clonal selection algorithm and the artificial immune network,
and studies the effect of changing algorithm parameters on performance.
Then, a new immune-inspired algorithm for unsupervised classification is
proposed. The new algorithm is based on the clonal selection principle and is
named Unsupervised Clonal Selection Classification (UCSC). The proposed
algorithm is almost parameter-free: its parameters are data driven, and it
adjusts itself to make the classification as fast as possible. The performance
of UCSC is evaluated, and the experiments show that the proposed algorithm
performs well and is more reliable.
| Mohammad Tarek Al Muallim | null | 1709.04317 | null | null |
Neural Network Based Nonlinear Weighted Finite Automata | cs.FL cs.AI cs.CL cs.LG | Weighted finite automata (WFA) can expressively model functions defined over
strings but are inherently linear models. Given the recent successes of
nonlinear models in machine learning, it is natural to wonder whether
ex-tending WFA to the nonlinear setting would be beneficial. In this paper, we
propose a novel model of neural network based nonlinearWFA model (NL-WFA) along
with a learning algorithm. Our learning algorithm is inspired by the spectral
learning algorithm for WFAand relies on a nonlinear decomposition of the
so-called Hankel matrix, by means of an auto-encoder network. The expressive
power of NL-WFA and the proposed learning algorithm are assessed on both
synthetic and real-world data, showing that NL-WFA can lead to smaller model
sizes and infer complex grammatical structures from data.
| Tianyu Li, Guillaume Rabusseau, Doina Precup | null | 1709.04380 | null | null |
Generating Music Medleys via Playing Music Puzzle Games | stat.ML cs.LG cs.SD | Generating music medleys is about finding an optimal permutation of a given
set of music clips. Toward this goal, we propose a self-supervised learning
task, called the music puzzle game, to train neural network models to learn the
sequential patterns in music. In essence, such a game requires machines to
correctly sort a few multisecond music fragments. In the training stage, we
learn the model by sampling multiple non-overlapping fragment pairs from the
same songs and seeking to predict whether a given pair is consecutive and is in
the correct chronological order. For testing, we design a number of puzzle
games with different difficulty levels, the most difficult one being music
medley, which requires sorting fragments from different songs. On the basis of
state-of-the-art Siamese convolutional network, we propose an improved
architecture that learns to embed frame-level similarity scores computed from
the input fragment pairs to a common space, where fragment pairs in the correct
order can be more easily identified. Our results show that the resulting model,
dubbed the similarity embedding network (SEN), performs better than
competing models across different games, including music jigsaw puzzle, music
sequencing, and music medley. Example results can be found at our project
website, https://remyhuang.github.io/DJnet.
| Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang | null | 1709.04384 | null | null |
Tight Semi-Nonnegative Matrix Factorization | stat.ML cs.LG | The nonnegative matrix factorization is a widely used, flexible matrix
decomposition, finding applications in biology, image and signal processing and
information retrieval, among other areas. Here we present a related matrix
factorization. A multi-objective optimization problem finds conical
combinations of templates that approximate a given data matrix. The templates
are chosen so that as far as possible only the initial data set can be
represented this way. However, the templates are not required to be nonnegative
nor convex combinations of the original data.
| David W Dreisigmeyer | null | 1709.04395 | null | null |
On Early-stage Debunking Rumors on Twitter: Leveraging the Wisdom of
Weak Learners | cs.SI cs.LG | Recently a lot of progress has been made in rumor modeling and rumor
detection for micro-blogging streams. However, existing automated methods do
not perform very well for early rumor detection, which is crucial in many
settings, e.g., in crisis situations. One reason for this is that aggregated
rumor features such as propagation features, which work well in the long run,
are - due to their accumulating characteristic - not very helpful in the early
phase of a rumor. In this work, we present an approach for early rumor
detection, which leverages Convolutional Neural Networks for learning the
hidden representations of individual rumor-related tweets to gain insights on
the credibility of each tweet. We then aggregate the predictions from the very
beginning of a rumor to obtain the overall event credits (so-called wisdom),
and finally combine it with a time series based rumor classification model. Our
extensive experiments show a clearly improved classification performance within
the critical very first hours of a rumor. For a better understanding, we also
conduct an extensive feature evaluation that emphasizes the early stage and
shows that the low-level credibility features have the best predictability at
all phases of the rumor lifetime.
| Tu Ngoc Nguyen, Cheng Li, Claudia Nieder\'ee | null | 1709.04402 | null | null |
An Inversion-Based Learning Approach for Improving Impromptu Trajectory
Tracking of Robots with Non-Minimum Phase Dynamics | cs.RO cs.LG cs.SY | This paper presents a learning-based approach for impromptu trajectory
tracking for non-minimum phase systems, i.e., systems with unstable inverse
dynamics. Inversion-based feedforward approaches are commonly used for
improving tracking performance; however, these approaches are not directly
applicable to non-minimum phase systems due to their inherent instability. In
order to resolve the instability issue, existing methods have assumed that the
system model is known and used pre-actuation or inverse approximation
techniques. In this work, we propose an approach for learning a stable,
approximate inverse of a non-minimum phase baseline system directly from its
input-output data. Through theoretical discussions, simulations, and
experiments on two different platforms, we show the stability of our proposed
approach and its effectiveness for high-accuracy, impromptu tracking. Our
approach also shows that including more information in the training, as is
commonly assumed to be useful, does not lead to better performance but may
trigger instability and impact the effectiveness of the overall approach.
| Siqi Zhou, Mohamed K. Helwa and Angela P. Schoellig | 10.1109/LRA.2018.2801471 | 1709.04407 | null | null |
A Learning and Masking Approach to Secure Learning | cs.CR cs.CV cs.LG | Deep Neural Networks (DNNs) have been shown to be vulnerable against
adversarial examples, which are data points cleverly constructed to fool the
classifier. Such attacks can be devastating in practice, especially as DNNs are
being applied to ever increasing critical tasks like image recognition in
autonomous driving. In this paper, we introduce a new perspective on the
problem. We do so by first defining robustness of a classifier to adversarial
exploitation. Next, we show that the problem of adversarial example generation
can be posed as learning problem. We also categorize attacks in literature into
high and low perturbation attacks; well-known attacks like fast-gradient sign
method (FGSM) and our attack produce higher perturbation adversarial examples
while the more potent but computationally inefficient Carlini-Wagner (CW)
attack is low perturbation. Next, we show that the dual approach of the attack
learning problem can be used as a defensive technique that is effective against
high perturbation attacks. Finally, we show that a classifier masking method
achieved by adding noise to a neural network's logit output protects
against low distortion attacks such as the CW attack. We also show that both
our learning and masking defense can work simultaneously to protect against
multiple attacks. We demonstrate the efficacy of our techniques by
experimenting with the MNIST and CIFAR-10 datasets.
| Linh Nguyen, Sky Wang, Arunesh Sinha | null | 1709.04447 | null | null |
A Study of AI Population Dynamics with Million-agent Reinforcement
Learning | cs.AI cs.LG cs.MA | We conduct an empirical study on discovering the ordered collective dynamics
of a population of intelligent agents, driven by million-agent
reinforcement learning. Our intention is to put intelligent agents into a
simulated natural context and verify if the principles developed in the real
world could also be used in understanding an artificially-created intelligent
population. To achieve this, we simulate a large-scale predator-prey world,
where the laws of the world are designed by only the findings or logical
equivalence that have been discovered in nature. We endow the agents with the
intelligence based on deep reinforcement learning (DRL). In order to scale the
population size up to millions of agents, a large-scale DRL training platform with
redesigned experience buffer is proposed. Our results show that the population
dynamics of AI agents, driven only by each agent's individual self-interest,
reveal an ordered pattern that is similar to the Lotka-Volterra model studied
in population biology. We further discover the emergent behaviors of collective
adaptations in studying how the agents' grouping behaviors will change with the
environmental resources. Both of the two findings could be explained by the
self-organization theory in nature.
| Yaodong Yang, Lantao Yu, Yiwei Bai, Jun Wang, Weinan Zhang, Ying Wen,
Yong Yu | null | 1709.04511 | null | null |
Differentially Private Mixture of Generative Neural Networks | cs.LG cs.CR | Generative models are used in a wide range of applications building on large
amounts of contextually rich information. Due to possible privacy violations of
the individuals whose data is used to train these models, however, publishing
or sharing generative models is not always viable. In this paper, we present a
novel technique for privately releasing generative models and entire
high-dimensional datasets produced by these models. We model the generator
distribution of the training data with a mixture of $k$ generative neural
networks. These are trained together and collectively learn the generator
distribution of a dataset. Data is divided into $k$ clusters, using a novel
differentially private kernel $k$-means, and each cluster is then given to a separate
generative neural network, such as a Restricted Boltzmann Machine or
Variational Autoencoder, which is trained only on its own cluster using
differentially private gradient descent. We evaluate our approach using the
MNIST dataset, as well as call detail records and transit datasets, showing
that it produces realistic synthetic samples, which can also be used to
accurately compute an arbitrary number of counting queries.
| Gergely Acs, Luca Melis, Claude Castelluccia, and Emiliano De
Cristofaro | null | 1709.04514 | null | null |
Normalized Direction-preserving Adam | cs.LG stat.ML | Adaptive optimization algorithms, such as Adam and RMSprop, have shown better
optimization performance than stochastic gradient descent (SGD) in some
scenarios. However, recent studies show that they often lead to worse
generalization performance than SGD, especially for training deep neural
networks (DNNs). In this work, we identify the reasons that Adam generalizes
worse than SGD, and develop a variant of Adam to eliminate the generalization
gap. The proposed method, normalized direction-preserving Adam (ND-Adam),
enables more precise control of the direction and step size for updating weight
vectors, leading to significantly improved generalization performance.
Following a similar rationale, we further improve the generalization
performance in classification tasks by regularizing the softmax logits. By
bridging the gap between SGD and Adam, we also hope to shed light on why
certain optimization algorithms generalize better than others.
| Zijun Zhang, Lin Ma, Zongpeng Li, Chuan Wu | null | 1709.04546 | null | null |
Ignoring Distractors in the Absence of Labels: Optimal Linear Projection
to Remove False Positives During Anomaly Detection | cs.LG | In the anomaly detection setting, the native feature embedding can be a
crucial source of bias. We present a technique, Feature Omission using Context
in Unsupervised Settings (FOCUS) to learn a feature mapping that is invariant
to changes exemplified in training sets while retaining as much descriptive
power as possible. While this method could apply to many unsupervised settings,
we focus on applications in anomaly detection, where little task-labeled data
is available. Our algorithm requires only non-anomalous sets of data, and does
not require that the contexts in the training sets match the context of the
test set. By maximizing within-set variance and minimizing between-set
variance, we are able to identify and remove distracting features while
retaining fidelity to the descriptiveness needed at test time. In the linear
case, our formulation reduces to a generalized eigenvalue problem that can be
solved quickly and applied to test sets outside the context of the training
sets. This technique allows us to align technical definitions of anomaly
detection with human definitions through appropriate mappings of the feature
space. We demonstrate that this method is able to remove uninformative parts of
the feature space for the anomaly detection setting.
| Allison Del Giorno, J. Andrew Bagnell, Martial Hebert | null | 1709.04549 | null | null |
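In the linear case the abstract reduces FOCUS to a generalized eigenvalue problem. A minimal sketch of that reduction follows, under our reading that one keeps directions with high within-set variance relative to between-set variance; the function name, regularizer, and interface are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def focus_projection(sets, k):
    """Hypothetical linear FOCUS-style projection (names are ours).

    sets: list of (n_i, d) arrays of non-anomalous data from different
    contexts. Keeps the k directions with the largest ratio of within-set
    to between-set variance via a generalized eigenvalue problem.
    """
    d = sets[0].shape[1]
    mu = np.vstack(sets).mean(axis=0)
    S_w = np.zeros((d, d))                         # within-set scatter
    S_b = np.zeros((d, d))                         # between-set scatter
    for X in sets:
        mu_i = X.mean(axis=0)
        Xc = X - mu_i
        S_w += Xc.T @ Xc
        S_b += len(X) * np.outer(mu_i - mu, mu_i - mu)
    reg = 1e-6 * np.trace(S_b) / d + 1e-12         # keep S_b invertible
    vals, vecs = eigh(S_w, S_b + reg * np.eye(d))  # solves S_w v = lambda * S_b v
    order = np.argsort(vals)[::-1]                 # largest ratio first
    return vecs[:, order[:k]]                      # (d, k) projection matrix
```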
MOLTE: a Modular Optimal Learning Testing Environment | cs.LG | We address the relative paucity of empirical testing of learning algorithms
(of any type) by introducing a new public-domain, Modular, Optimal Learning
Testing Environment (MOLTE) for Bayesian ranking and selection problems,
stochastic bandits or sequential experimental design problems. The Matlab-based
simulator allows the comparison of a number of learning policies (represented
as a series of .m modules) in the context of a wide range of problems (each
represented in its own .m module) which makes it easy to add new algorithms and
new test problems. State-of-the-art policies and various problem classes are
provided in the package. The choice of problems and policies is guided through
a spreadsheet-based interface. Different graphical metrics are included. MOLTE
is designed to be compatible with parallel computing to scale up from local
desktop to clusters and clouds. We offer MOLTE as an easy-to-use tool for the
research community that will make it possible to perform much more
comprehensive testing, spanning a broader selection of algorithms and test
problems. We demonstrate the capabilities of MOLTE through a series of
comparisons of policies on a starter library of test problems. We also address
the problem of tuning and constructing priors that have been largely overlooked
in optimal learning literature. We envision MOLTE as a modest spur to provide
researchers an easy environment to study interesting questions involved in
optimal learning.
| Yingfei Wang, Warren Powell | null | 1709.04553 | null | null |
Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network | cs.LG cs.AI stat.ML | The prediction of organic reaction outcomes is a fundamental problem in
computational chemistry. Since a reaction may involve hundreds of atoms, fully
exploring the space of possible transformations is intractable. The current
solution utilizes reaction templates to limit the space, but it suffers from
coverage and efficiency issues. In this paper, we propose a template-free
approach to efficiently explore the space of product molecules by first
pinpointing the reaction center -- the set of nodes and edges where graph edits
occur. Since only a small number of atoms contribute to the reaction center, we can
directly enumerate candidate products. The generated candidates are scored by a
Weisfeiler-Lehman Difference Network that models high-order interactions
between changes occurring at nodes across the molecule. Our framework
outperforms the top-performing template-based approach by a 10\% margin,
while running orders of magnitude faster. Finally, we demonstrate that the
model accuracy rivals the performance of domain experts.
| Wengong Jin, Connor W. Coley, Regina Barzilay, Tommi Jaakkola | null | 1709.04555 | null | null |
Learning Unknown Markov Decision Processes: A Thompson Sampling Approach | cs.LG | We consider the problem of learning an unknown Markov Decision Process (MDP)
that is weakly communicating in the infinite horizon setting. We propose a
Thompson Sampling-based reinforcement learning algorithm with dynamic episodes
(TSDE). At the beginning of each episode, the algorithm generates a sample from
the posterior distribution over the unknown model parameters. It then follows
the optimal stationary policy for the sampled model for the rest of the
episode. The duration of each episode is dynamically determined by two stopping
criteria. The first stopping criterion controls the growth rate of episode
length. The second stopping criterion triggers when the number of visits to any
state-action pair doubles. We establish $\tilde O(HS\sqrt{AT})$ bounds on
expected regret under a Bayesian setting, where $S$ and $A$ are the sizes of
the state and action spaces, $T$ is time, and $H$ is the bound of the span.
This regret bound matches the best available bound for weakly communicating
MDPs. Numerical results show that it performs better than existing algorithms for
infinite horizon MDPs.
| Yi Ouyang, Mukul Gagrani, Ashutosh Nayyar, Rahul Jain | null | 1709.0457 | null | null |
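The two stopping criteria of TSDE can be made concrete with a short sketch of the episode loop. Here `env`, `posterior`, and `solve` are assumed interfaces rather than any particular library, and details such as the handling of zero visit counts follow our reading of the abstract.

```python
def tsde(env, posterior, solve, horizon):
    """Sketch of TSDE's dynamic-episode loop. posterior.sample() draws MDP
    parameters, posterior.update() refines the posterior from a transition,
    and solve(params) returns the sampled model's optimal stationary policy
    as a callable (all assumed interfaces)."""
    visits, t, prev_len = {}, 0, 0
    s = env.reset()
    while t < horizon:
        t_start, start = t, dict(visits)     # snapshot visit counts at episode start
        policy = solve(posterior.sample())   # draw one model from the posterior
        while t < horizon:
            a = policy(s)
            s_next = env.step(a)
            posterior.update(s, a, s_next)
            visits[(s, a)] = visits.get((s, a), 0) + 1
            doubled = visits[(s, a)] > 2 * start.get((s, a), 0)   # criterion 2: visits doubled
            s, t = s_next, t + 1
            if t - t_start >= prev_len + 1 or doubled:            # criterion 1: length growth control
                break
        prev_len = t - t_start
```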
A Framework for Generalizing Graph-based Representation Learning Methods | stat.ML cs.AI cs.LG cs.SI | Random walks are at the heart of many existing deep learning algorithms for
graph data. However, such algorithms have many limitations that arise from the
use of random walks, e.g., the features resulting from these methods are unable
to transfer to new nodes and graphs as they are tied to node identity. In this
work, we introduce the notion of attributed random walks which serves as a
basis for generalizing existing methods such as DeepWalk, node2vec, and many
others that leverage random walks. Our proposed framework enables these methods
to be more widely applicable for both transductive and inductive learning as
well as for use on graphs with attributes (if available). This is achieved by
learning functions that generalize to new nodes and graphs. We show that our
proposed framework is effective with an average AUC improvement of 16.1% while
requiring on average 853 times less space than existing methods on a variety of
graphs from several domains.
| Nesreen K. Ahmed, Ryan A. Rossi, Rong Zhou, John Boaz Lee, Xiangnan
Kong, Theodore L. Willke and Hoda Eldardiry | null | 1709.04596 | null | null |
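The notion of attributed random walks can be illustrated with a short sketch: an ordinary random walk that emits attribute-derived node types instead of node identities, so the resulting sequences transfer across nodes and graphs. Names and the type mapping are illustrative assumptions.

```python
import numpy as np

def attributed_walks(adj, node_type, length=40, walks_per_node=10, seed=0):
    """Sketch of attributed random walks. adj: dict node -> list of
    neighbours; node_type: dict node -> type id derived from attributes
    (e.g. by binning structural features). Returns type sequences."""
    rng = np.random.default_rng(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            v, walk = start, [node_type[start]]
            for _ in range(length - 1):
                if not adj[v]:                       # dead end: stop this walk
                    break
                v = adj[v][rng.integers(len(adj[v]))]
                walk.append(node_type[v])            # emit the type, not the identity
            walks.append(walk)
    return walks  # train a skip-gram model over these sequences, as in DeepWalk
```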
Random matrix approach for primal-dual portfolio optimization problems | q-fin.PM cond-mat.dis-nn cs.CE cs.LG math.OC | In this paper, we revisit the portfolio optimization problems of the
minimization/maximization of investment risk under constraints of budget and
investment concentration (primal problem) and the maximization/minimization of
investment concentration under constraints of budget and investment risk (dual
problem) for the case that the variances of the return rates of the assets are
identical. We analyze both optimization problems by using the Lagrange
multiplier method and the random matrix approach. Thereafter, we compare the
results obtained from our proposed approach with the results obtained in
previous work. Moreover, we use numerical experiments to validate the results
obtained from the replica approach and the random matrix approach as methods
for analyzing both the primal and dual portfolio optimization problems.
| Daichi Tada, Hisashi Yamamoto and Takashi Shinzato | 10.7566/JPSJ.86.124804 | 1709.0462 | null | null |
Subspace Clustering using Ensembles of $K$-Subspaces | cs.CV cs.LG stat.ML | Subspace clustering is the unsupervised grouping of points lying near a union
of low-dimensional linear subspaces. Algorithms based directly on geometric
properties of such data tend to either provide poor empirical performance, lack
theoretical guarantees, or depend heavily on their initialization. We present a
novel geometric approach to the subspace clustering problem that leverages
ensembles of the K-subspaces (KSS) algorithm via the evidence accumulation
clustering framework. Our algorithm, referred to as ensemble K-subspaces
(EKSS), forms a co-association matrix whose $(i,j)$th entry is the number of
times points $i$ and $j$ are clustered together by several runs of KSS with random
initializations. We prove general recovery guarantees for any algorithm that
forms an affinity matrix with entries close to a monotonic transformation of
pairwise absolute inner products. We then show that a specific instance of EKSS
results in an affinity matrix with entries of this form, and hence our proposed
algorithm can provably recover subspaces under similar conditions to
state-of-the-art algorithms. This is, to the best of our knowledge, the
first recovery guarantee for evidence accumulation clustering and for KSS
variants. We show on synthetic data that our method performs well in the
traditionally challenging settings of subspaces with large intersection,
subspaces with small principal angles, and noisy data. Finally, we evaluate our
algorithm on six common benchmark datasets and show that unlike existing
methods, EKSS achieves excellent empirical performance when there are both a
small and large number of points per subspace.
| John Lipor, David Hong, Yan Shuo Tan, and Laura Balzano | null | 1709.04744 | null | null |
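A minimal sketch of the EKSS co-association construction described above; the parameter names and the details of the KSS inner loop (e.g. the number of refinement iterations) are our choices, not the authors'.

```python
import numpy as np

def ekss_affinity(X, K, r, B=50, kss_iters=10, seed=0):
    """Sketch of the EKSS co-association matrix.

    X: (d, n) data, K candidate subspaces of dimension r, B random restarts.
    Entry (i, j) of the output is the fraction of runs in which points i and
    j were assigned to the same subspace.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    A = np.zeros((n, n))
    for _ in range(B):
        # random orthonormal bases as the KSS initialization
        U = [np.linalg.qr(rng.standard_normal((d, r)))[0] for _ in range(K)]
        for _ in range(kss_iters):
            # assign each point to the subspace with the largest projection norm
            proj = np.stack([np.linalg.norm(Uk.T @ X, axis=0) for Uk in U])
            labels = proj.argmax(axis=0)
            # re-fit each subspace as the top-r left singular vectors of its points
            for k in range(K):
                Xk = X[:, labels == k]
                if Xk.shape[1] >= r:
                    U[k] = np.linalg.svd(Xk, full_matrices=False)[0][:, :r]
        A += labels[:, None] == labels[None, :]
    return A / B   # feed to spectral clustering to obtain the final segmentation
```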
From Plants to Landmarks: Time-invariant Plant Localization that uses
Deep Pose Regression in Agricultural Fields | cs.RO cs.CV cs.LG | Agricultural robots are expected to increase yields in a sustainable way and
automate precision tasks, such as weeding and plant monitoring. At the same
time, they move in a continuously changing, semi-structured field environment,
in which features are hard to find and reproduce at a later time.
Challenges for Lidar and visual detection systems stem from the fact that
plants can be very small, overlapping and have a steadily changing appearance.
Therefore, a popular way to localize vehicles with high accuracy is based on
expensive global navigation satellite systems and not on natural landmarks.
The contribution of this work is a novel image-based plant localization
technique that uses the time-invariant stem emerging point as a reference. Our
approach is based on a fully convolutional neural network that learns landmark
localization from RGB and NIR image input in an end-to-end manner. The network
performs pose regression to generate a plant location likelihood map. Our
approach allows us to cope with visual variances of plants both for different
species and different growth stages. We achieve high localization accuracies as
shown in detailed evaluations of a sugar beet cultivation phase. In experiments
with our BoniRob we demonstrate that detections can be robustly reproduced with
centimeter accuracy.
| Florian Kraemer, Alexander Schaefer, Andreas Eitel, Johan Vertens,
Wolfram Burgard | null | 1709.04751 | null | null |
Denoising Autoencoders for Overgeneralization in Neural Networks | cs.AI cs.CV cs.LG | Despite the recent developments that allowed neural networks to achieve
impressive performance on a variety of applications, these models are
intrinsically affected by the problem of overgeneralization, due to their
partitioning of the full input space into the fixed set of target classes used
during training. Thus it is possible for novel inputs belonging to categories
unknown during training or even completely unrecognizable to humans to fool the
system into classifying them as one of the known classes, even with a high
degree of confidence. Solving this problem may help improve the security of
such systems in critical applications, and may further lead to applications in
the context of open set recognition and 1-class recognition. This paper
presents a novel way to compute a confidence score using denoising autoencoders
and shows that such a confidence score can correctly identify the regions of the
input space close to the training distribution by approximately identifying its
local maxima.
| Giacomo Spigler | 10.1109/TPAMI.2019.2909876 | 1709.04762 | null | null |
Interpretable Graph-Based Semi-Supervised Learning via Flows | stat.ML cs.LG | In this paper, we consider the interpretability of the foundational
Laplacian-based semi-supervised learning approaches on graphs. We introduce a
novel flow-based learning framework that subsumes the foundational approaches
and additionally provides a detailed, transparent, and easily understood
expression of the learning process in terms of graph flows. As a result, one
can visualize and interactively explore the precise subgraph along which the
information from labeled nodes flows to an unlabeled node of interest.
Surprisingly, the proposed framework avoids trading accuracy for
interpretability, but in fact leads to improved prediction accuracy, which is
supported both by theoretical considerations and empirical results. The
flow-based framework guarantees the maximum principle by construction and can
handle directed graphs in an out-of-the-box manner.
| Raif M. Rustamov and James T. Klosowski | null | 1709.04764 | null | null |
On Multi-Relational Link Prediction with Bilinear Models | cs.LG | We study bilinear embedding models for the task of multi-relational link
prediction and knowledge graph completion. Bilinear models belong to the most
basic models for this task, they are comparably efficient to train and use, and
they can provide good prediction performance. The main goal of this paper is to
explore the expressiveness of and the connections between various bilinear
models proposed in the literature. In particular, a substantial number of
models can be represented as bilinear models with certain additional
constraints enforced on the embeddings. We explore whether or not these
constraints lead to universal models, which can in principle represent every
set of relations, and whether or not there are subsumption relationships
between various models. We report results of an independent experimental study
that evaluates recent bilinear models in a common experimental setup. Finally,
we provide evidence that relation-level ensembles of multiple bilinear models
can achieve state-of-the art prediction performance.
| Yanjie Wang, Rainer Gemulla, Hui Li | null | 1709.04808 | null | null |
Informed Non-convex Robust Principal Component Analysis with Features | stat.ML cs.CV cs.LG | We revisit the problem of robust principal component analysis with features
acting as prior side information. To this aim, a novel, elegant, non-convex
optimization approach is proposed to decompose a given observation matrix into
a low-rank core and the corresponding sparse residual. Rigorous theoretical
analysis of the proposed algorithm results in exact recovery guarantees with
low computational complexity. Aptly designed synthetic experiments demonstrate
that our method is the first to wholly harness the power of non-convexity over
convexity in terms of both recoverability and speed. That is, the proposed
non-convex approach is more accurate and faster compared to the best available
algorithms for the problem under study. Two real-world applications, namely
image classification and face denoising further exemplify the practical
superiority of the proposed method.
| Niannan Xue, Jiankang Deng, Yannis Panagakis, Stefanos Zafeiriou | null | 1709.04836 | null | null |
Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework
for Traffic Forecasting | cs.LG stat.ML | Timely and accurate traffic forecasting is crucial for urban traffic control
and guidance. Due to the high nonlinearity and complexity of traffic flow,
traditional methods cannot satisfy the requirements of mid- and long-term
prediction tasks and often neglect spatial and temporal dependencies. In this
paper, we propose a novel deep learning framework, Spatio-Temporal Graph
Convolutional Networks (STGCN), to tackle the time series prediction problem in
traffic domain. Instead of applying regular convolutional and recurrent units,
we formulate the problem on graphs and build the model with complete
convolutional structures, which enable much faster training speed with fewer
parameters. Experiments show that our model STGCN effectively captures
comprehensive spatio-temporal correlations through modeling multi-scale traffic
networks and consistently outperforms state-of-the-art baselines on various
real-world traffic datasets.
| Bing Yu, Haoteng Yin, Zhanxing Zhu | 10.24963/ijcai.2018/505 | 1709.04875 | null | null |
Control-Oriented Learning on the Fly | math.OC cs.LG cs.RO cs.SY | This paper focuses on developing a strategy for control of systems whose
dynamics are almost entirely unknown. This situation arises naturally in a
scenario where a system undergoes a critical failure. In that case, it is
imperative to retain the ability to satisfy basic control objectives in order
to avert an imminent catastrophe. A prime example of such an objective is the
reach-avoid problem, where a system needs to move to a certain state in a
constrained state space. To deal with limitations on our knowledge of system
dynamics, we develop a theory of myopic control. The primary goal of myopic
control is to, at any given time, optimize the current direction of the system
trajectory, given solely the information obtained about the system until that
time. We propose an algorithm that uses small perturbations in the control
effort to learn local dynamics while simultaneously ensuring that the system
moves in a direction that appears to be nearly optimal, and provide hard bounds
for its suboptimality. We additionally verify the usefulness of the algorithm
on a simulation of a damaged aircraft seeking to avoid a crash, as well as on
an example of a Van der Pol oscillator.
| Melkior Ornik, Arie Israel, Ufuk Topcu | null | 1709.04889 | null | null |
Convolutional Networks for Spherical Signals | cs.LG | The success of convolutional networks in learning problems involving planar
signals such as images is due to their ability to exploit the translation
symmetry of the data distribution through weight sharing. Many areas of science
and engineering deal with signals with other symmetries, such as rotation
invariant data on the sphere. Examples include climate and weather science,
astrophysics, and chemistry. In this paper we present spherical convolutional
networks. These networks use convolutions on the sphere and rotation group,
which results in rotational weight sharing and rotation equivariance. Using a
synthetic spherical MNIST dataset, we show that spherical convolutional
networks are very effective at dealing with rotationally invariant
classification problems.
| Taco Cohen, Mario Geiger, Jonas K\"ohler and Max Welling | null | 1709.04893 | null | null |
One-Shot Visual Imitation Learning via Meta-Learning | cs.LG cs.AI cs.CV cs.RO | In order for a robot to be a generalist that can perform a wide range of
jobs, it must be able to acquire a wide variety of skills quickly and
efficiently in complex unstructured environments. High-capacity models such as
deep neural networks can enable a robot to represent complex skills, but
learning each skill from scratch then becomes infeasible. In this work, we
present a meta-imitation learning method that enables a robot to learn how to
learn more efficiently, allowing it to acquire new skills from just a single
demonstration. Unlike prior methods for one-shot imitation, our method can
scale to raw pixel inputs and requires data from significantly fewer prior
tasks for effective learning of new skills. Our experiments on both simulated
and real robot platforms demonstrate the ability to learn new tasks,
end-to-end, from a single visual demonstration.
| Chelsea Finn, Tianhe Yu, Tianhao Zhang, Pieter Abbeel, Sergey Levine | null | 1709.04905 | null | null |
Shared Learning : Enhancing Reinforcement in $Q$-Ensembles | cs.LG cs.AI | Deep Reinforcement Learning has been able to achieve amazing successes in a
variety of domains from video games to continuous control by trying to maximize
the cumulative reward. However, most of these successes rely on algorithms that
require a large amount of data to train in order to obtain results on par with
human-level performance. This is not feasible if we are to deploy these systems
on real-world tasks, and hence there has been an increased thrust in exploring
data efficient algorithms. To this end, we propose the Shared Learning
framework aimed at making $Q$-ensemble algorithms data-efficient. For achieving
this, we look into some principles of transfer learning which aim to study the
benefits of information exchange across tasks in reinforcement learning and
adapt transfer to learning our value function estimates in a novel manner. In
this paper, we consider the special case of transfer between the value function
estimates in the $Q$-ensemble architecture of BootstrappedDQN. We further
empirically demonstrate how our proposed framework can help in speeding up the
learning process in $Q$-ensembles with minimum computational overhead on a
suite of Atari 2600 Games.
| Rakesh R Menon, Balaraman Ravindran | null | 1709.04909 | null | null |
Dynamic Pricing in Competitive Markets | cs.LG cs.GT | Dynamic pricing of goods in a competitive environment to maximize revenue is
a natural objective and has been a subject of research over the years. In this
paper, we focus on a class of markets exhibiting the substitutes property with
sellers having divisible and replenishable goods. Depending on the prices
chosen, each seller observes a certain demand which is satisfied subject to the
supply constraint. The goal of the seller is to price her good dynamically so
as to maximize her revenue. For the static market case, when the consumer
utility satisfies the Constant Elasticity of Substitution (CES) property, we
give a $O(\sqrt{T})$ regret bound on the maximum loss in revenue of a seller
using a modified version of the celebrated Online Gradient Descent Algorithm by
Zinkevich. For a more specialized set of consumer utilities satisfying the
iso-elasticity condition, we show that when each seller uses a
regret-minimizing algorithm satisfying a certain technical property, the regret
with respect to $(1-\alpha)$ times optimal revenue is bounded as $O(T^{1/4} /
\sqrt{\alpha})$. We extend this result to markets with dynamic supplies and
prove a corresponding dynamic regret bound, whose guarantee deteriorates
smoothly with the inherent instability of the market. As a side-result, we also
extend the previously known convergence results of these algorithms in a
general game to the dynamic setting.
| Paresh Nakhe | null | 1709.0496 | null | null |
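The static-market result builds on a modified version of Zinkevich's Online Gradient Descent. As a point of reference, here is the plain projected online gradient ascent on prices with the classic step size; `revenue_grad` and the box projection are illustrative assumptions, and the paper analyzes a modified variant.

```python
import numpy as np

def ogd_prices(revenue_grad, p0, T, lo, hi, G=1.0, D=1.0):
    """Plain Zinkevich-style projected online gradient ascent on prices.

    revenue_grad(t, p) is an assumed callable returning the gradient of the
    round-t revenue at price vector p; prices are projected onto [lo, hi].
    G and D bound the gradient norm and the feasible set's diameter.
    """
    p = np.asarray(p0, dtype=float)
    history = []
    for t in range(1, T + 1):
        eta = D / (G * np.sqrt(t))           # classic step size giving O(sqrt(T)) regret
        p = p + eta * revenue_grad(t, p)     # gradient ascent on revenue
        p = np.clip(p, lo, hi)               # projection onto the feasible price box
        history.append(p.copy())
    return np.array(history)
```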
Two-sample Statistics Based on Anisotropic Kernels | stat.ML cs.LG stat.AP stat.CO | The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD)
statistic for measuring the distance between two distributions given
finitely-many multivariate samples. When the distributions are locally
low-dimensional, the proposed test can be made more powerful to distinguish
certain alternatives by incorporating local covariance matrices and
constructing an anisotropic kernel. The kernel matrix is asymmetric; it
computes the affinity between $n$ data points and a set of $n_R$ reference
points, where $n_R$ can be drastically smaller than $n$. While the proposed
statistic can be viewed as a special class of Reproducing Kernel Hilbert Space
MMD, the consistency of the test is proved, under mild assumptions of the
kernel, as long as $\|p-q\| \sqrt{n} \to \infty $, and a finite-sample lower
bound of the testing power is obtained. Applications to flow cytometry and
diffusion MRI datasets are demonstrated, which motivate the proposed approach
to compare distributions.
| Xiuyuan Cheng, Alexander Cloninger and Ronald R. Coifman | null | 1709.05006 | null | null |
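A minimal sketch of the asymmetric anisotropic kernel statistic described above, using Gaussian kernels with reference-point-local Mahalanobis distances; the interface and the way local covariances are supplied are our assumptions.

```python
import numpy as np

def anisotropic_mmd(X, Y, R, C, sigma=1.0):
    """Sketch of a two-sample statistic with an asymmetric anisotropic kernel.

    X: (n1, d) and Y: (n2, d) samples; R: (n_R, d) reference points with
    local covariances C: (n_R, d, d), e.g. estimated by local PCA. The kernel
    column for reference r uses the Mahalanobis distance induced by C[r].
    """
    d = R.shape[1]
    def embed(Z):
        out = np.empty((len(Z), len(R)))
        for r in range(len(R)):
            Cinv = np.linalg.inv(C[r] + 1e-6 * np.eye(d))
            diff = Z - R[r]
            out[:, r] = np.exp(-np.sum((diff @ Cinv) * diff, axis=1) / (2 * sigma**2))
        return out
    # squared difference of mean embeddings evaluated over the reference set
    return np.sum((embed(X).mean(axis=0) - embed(Y).mean(axis=0)) ** 2)
```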
Learning Intrinsic Sparse Structures within Long Short-Term Memory | cs.LG cs.AI cs.CL cs.NE | Model compression is significant for the wide adoption of Recurrent Neural
Networks (RNNs) in both user devices possessing limited resources and business
clusters requiring quick responses to large-scale service requests. This work
aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the
sizes of basic structures within LSTM units, including input updates, gates,
hidden states, cell states and outputs. Independently reducing the sizes of
basic structures can result in inconsistent dimensions among them, and
consequently, end up with invalid LSTM units. To overcome the problem, we
propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS
will simultaneously decrease the sizes of all basic structures by one and
thereby always maintain the dimension consistency. By learning ISS within LSTM
units, the obtained LSTMs remain regular while having much smaller basic
structures. Based on group Lasso regularization, our method achieves 10.59x
speedup without any perplexity loss on a Penn TreeBank language modeling task.
The method is also successfully evaluated with a compact model of only 2.69M
weights for machine question answering on the SQuAD dataset. Our approach also
extends successfully to non-LSTM RNNs, such as Recurrent Highway Networks
(RHNs). Our source code is publicly available at
https://github.com/wenwei202/iss-rnns
| Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang,
Fang Liu, Bin Hu, Yiran Chen, Hai Li | null | 1709.05027 | null | null |
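The ISS grouping can be sketched concretely for a single-layer `torch.nn.LSTM`: removing group h deletes row h of every gate matrix and column h of the recurrent matrix at once, keeping dimensions consistent. This is a simplified reading of the paper's grouping; the full ISS also spans connections into subsequent layers.

```python
import torch

def iss_group_lasso(lstm: torch.nn.LSTM, lam: float = 1e-4) -> torch.Tensor:
    """Group-Lasso penalty over per-hidden-unit ISS-style groups of a
    single-layer LSTM (a simplified sketch of the paper's grouping)."""
    H = lstm.hidden_size
    W_ih, W_hh = lstm.weight_ih_l0, lstm.weight_hh_l0   # shapes (4H, in), (4H, H)
    penalty = torch.zeros((), dtype=W_ih.dtype, device=W_ih.device)
    for h in range(H):
        group = torch.cat([
            W_ih[h::H].reshape(-1),   # row h of all four gates, input weights
            W_hh[h::H].reshape(-1),   # row h of all four gates, recurrent weights
            W_hh[:, h],               # column h: outgoing recurrent weights of unit h
        ])
        penalty = penalty + group.norm(p=2)
    return lam * penalty

# usage sketch: loss = task_loss + iss_group_lasso(model.lstm)
```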
Self-Guiding Multimodal LSTM - when we do not have a perfect training
dataset for image captioning | cs.CV cs.CL cs.LG | In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning
model is proposed to handle uncontrolled, imbalanced real-world image-sentence
datasets. We collect the FlickrNYC dataset from Flickr as our testbed, with
306,165 images whose original user-uploaded text descriptions are utilized as
the ground truth for training. Descriptions in the FlickrNYC dataset vary
dramatically, ranging from short term-level descriptions to long
paragraph-level descriptions, and can describe any visual aspect or even refer to
objects that are not depicted. To deal with the imbalanced and noisy situation
and to fully explore the dataset itself, we propose a novel guiding textual
feature extracted utilizing a multimodal LSTM (m-LSTM) model. Training of
m-LSTM is based on the portion of data in which the image content and the
corresponding descriptions are strongly bonded. Afterwards, during the training
of sg-LSTM on the remaining training data, this guiding information serves as
additional input to the network along with the image representations and the
ground-truth descriptions. By integrating these input components into a
multimodal block, we aim to form a training scheme with the textual information
tightly coupled with the image content. The experimental results demonstrate
that the proposed sg-LSTM model outperforms the traditional state-of-the-art
multimodal RNN captioning framework in successfully describing the key
components of the input images.
| Yang Xian, Yingli Tian | null | 1709.05038 | null | null |
Disentangled Variational Auto-Encoder for Semi-supervised Learning | cs.LG cs.AI | Semi-supervised learning is attracting increasing attention because datasets
in many domains lack sufficient labeled data. Variational
Auto-Encoder (VAE), in particular, has demonstrated the benefits of
semi-supervised learning. The majority of existing semi-supervised VAEs utilize
a classifier to exploit label information, where the parameters of the
classifier are introduced to the VAE. Given the limited labeled data, learning
the parameters for the classifiers may not be an optimal solution for
exploiting label information. Therefore, in this paper, we develop a novel
approach for semi-supervised VAE without classifier. Specifically, we propose a
new model called Semi-supervised Disentangled VAE (SDVAE), which encodes the
input data into a disentangled representation and a non-interpretable
representation; the category information is then directly utilized to
regularize the disentangled representation via the equality constraint. To
further enhance the feature learning ability of the proposed VAE, we
incorporate reinforcement learning to relieve the lack of data. The dynamic
framework is capable of dealing with both image and text data with its
corresponding encoder and decoder networks. Extensive experiments on image and
text datasets demonstrate the effectiveness of the proposed framework.
| Yang Li, Quan Pan, Suhang Wang, Haiyun Peng, Tao Yang, Erik Cambria | null | 1709.05047 | null | null |
Learning Compact Geometric Features | cs.CV cs.GR cs.LG | We present an approach to learning features that represent the local geometry
around a point in an unstructured point cloud. Such features play a central
role in geometric registration, which supports diverse applications in robotics
and 3D vision. Current state-of-the-art local features for unstructured point
clouds have been manually crafted and none combines the desirable properties of
precision, compactness, and robustness. We show that features with these
properties can be learned from data, by optimizing deep networks that map
high-dimensional histograms into low-dimensional Euclidean spaces. The
presented approach yields a family of features, parameterized by dimension,
that are both more compact and more accurate than existing descriptors.
| Marc Khoury, Qian-Yi Zhou, Vladlen Koltun | null | 1709.05056 | null | null |
Accelerating SGD for Distributed Deep-Learning Using Approximated
Hessian Matrix | cs.LG | We introduce a novel method to compute a rank $m$ approximation of the
inverse of the Hessian matrix in the distributed regime. By leveraging the
differences in gradients and parameters of multiple Workers, we are able to
efficiently implement a distributed approximation of the Newton-Raphson method.
We also present preliminary results which underline advantages and challenges
of second-order methods for large stochastic optimization problems. In
particular, our work suggests that novel strategies for combining gradients
provide further information on the loss surface.
| S\'ebastien M. R. Arnold and Chunming Wang | null | 1709.05069 | null | null |
Shapechanger: Environments for Transfer Learning | cs.LG cs.RO | We present Shapechanger, a library for transfer reinforcement learning
specifically designed for robotic tasks. We consider three types of knowledge
transfer---from simulation to simulation, from simulation to real, and from
real to real---and a wide range of tasks with continuous states and actions.
Shapechanger is under active development and open-sourced at:
https://github.com/seba-1511/shapechanger/.
| S\'ebastien M. R. Arnold, Tsam Kiu Pun, Th\'eo-Tim J. Denisart and
Francisco J. Valero-Cuevas | null | 1709.0507 | null | null |
Multi-Label Zero-Shot Human Action Recognition via Joint Latent Ranking
Embedding | cs.CV cs.AI cs.LG | Human action recognition refers to automatically recognizing human actions in
a video clip. In reality, there often exist multiple human actions in a video
stream. Such a video stream is often weakly-annotated with a set of relevant
human action labels at a global level rather than assigning each label to a
specific video episode corresponding to a single action, which leads to a
multi-label learning problem. Furthermore, there are many meaningful human
actions in reality but it would be extremely difficult to collect/annotate
video clips covering all of the various human actions, which leads to a zero-shot
learning scenario. To the best of our knowledge, there is no work that has
addressed all the above issues together in human action recognition. In this
paper, we formulate a real-world human action recognition task as a multi-label
zero-shot learning problem and propose a framework to tackle this problem in a
holistic way. Our framework holistically tackles the issue of unknown temporal
boundaries between different actions for multi-label learning and exploits the
side information regarding the semantic relationship between different human
actions for knowledge transfer. Consequently, our framework leads to a joint
latent ranking embedding for multi-label zero-shot human action recognition. A
novel neural architecture of two component models and an alternate learning
algorithm are proposed to carry out the joint latent ranking embedding
learning. Thus, multi-label zero-shot recognition is done by measuring
relatedness scores of action labels to a test video clip in the joint latent
visual and semantic embedding spaces. We evaluate our framework with different
settings, including a novel data split scheme designed especially for
evaluating multi-label zero-shot learning, on two datasets: Breakfast and
Charades. The experimental results demonstrate the effectiveness of our
framework.
| Qian Wang and Ke Chen | null | 1709.05107 | null | null |
Trend Detection based Regret Minimization for Bandit Problems | cs.LG | We study a variation of the classical multi-armed bandits problem. In this
problem, the learner has to make a sequence of decisions, picking from a fixed
set of choices. In each round, she receives as feedback only the loss incurred
from the chosen action. Conventionally, this problem has been studied when
losses of the actions are drawn from an unknown distribution or when they are
adversarial. In this paper, we study this problem when the losses of the
actions also satisfy certain structural properties and, especially, exhibit a
trend structure. When this is true, we show that using \textit{trend
detection}, we can achieve regret of order $\tilde{O} (N \sqrt{TK})$ with
respect to a switching strategy for the version of the problem where a single
action is chosen in each round and $\tilde{O} (Nm \sqrt{TK})$ when $m$ actions
are chosen each round. This guarantee is a significant improvement over the
conventional benchmark. Our approach can, as a framework, be applied in
combination with various well-known bandit algorithms, like Exp3. For both
versions of the problem, we give regret guarantees also for the
\textit{anytime} setting, i.e. when the length of the choice-sequence is not
known in advance. Finally, we pinpoint the advantages of our method by
comparing it to some well-known other strategies.
| Paresh Nakhe, Rebecca Reiffenh\"auser | 10.1109/DSAA.2016.35 | 1709.05156 | null | null |
LSTM Fully Convolutional Networks for Time Series Classification | cs.LG stat.ML | Fully convolutional neural networks (FCN) have been shown to achieve
state-of-the-art performance on the task of classifying time series sequences.
We propose the augmentation of fully convolutional networks with long short
term memory recurrent neural network (LSTM RNN) sub-modules for time series
classification. Our proposed models significantly enhance the performance of
fully convolutional networks with a nominal increase in model size and require
minimal preprocessing of the dataset. The proposed Long Short Term Memory Fully
Convolutional Network (LSTM-FCN) achieves state-of-the-art performance compared
with existing methods. We also explore the use of an attention mechanism to
improve time
series classification with the Attention Long Short Term Memory Fully
Convolutional Network (ALSTM-FCN). Utilization of the attention mechanism
allows one to visualize the decision process of the LSTM cell. Furthermore, we
propose fine-tuning as a method to enhance the performance of trained models.
We provide an overall analysis of our model's performance and compare it with
other techniques.
| Fazle Karim, Somshubra Majumdar, Houshang Darabi and Shun Chen | 10.1109/ACCESS.2017.2779939 | 1709.05206 | null | null |
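A compact sketch of the LSTM-FCN topology in Keras for univariate series; the filter sizes, the dimension shuffle before the LSTM, and the dropout rate follow common descriptions of the model and may differ from the authors' released code.

```python
import tensorflow as tf
from tensorflow.keras import layers

def lstm_fcn(n_steps, n_classes, n_lstm=8):
    """Sketch of an LSTM-FCN classifier for univariate time series."""
    inp = layers.Input(shape=(n_steps, 1))

    # FCN branch: three Conv-BN-ReLU blocks, then global average pooling
    x = inp
    for filters, width in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, width, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)

    # LSTM branch: the series is dimension-shuffled so the LSTM sees a single
    # time step with n_steps features, followed by heavy dropout
    y = layers.Permute((2, 1))(inp)
    y = layers.LSTM(n_lstm)(y)
    y = layers.Dropout(0.8)(y)

    out = layers.Dense(n_classes, activation="softmax")(layers.concatenate([x, y]))
    return tf.keras.Model(inp, out)

# model = lstm_fcn(n_steps=128, n_classes=5)
# model.compile(optimizer="adam", loss="categorical_crossentropy")
```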
A Spectral Method for Activity Shaping in Continuous-Time Information
Cascades | stat.ML cs.AI cs.LG cs.SI math.OC | Information Cascades Model captures dynamical properties of user activity in
a social network. In this work, we develop a novel framework for activity
shaping under the Continuous-Time Information Cascades Model which allows the
administrator for local control actions by allocating targeted resources that
can alter the spread of the process. Our framework employs the optimization of
the spectral radius of the Hazard matrix, a quantity that has been shown to
drive the maximum influence in a network, while enjoying a simple convex
relaxation when used to minimize the influence of the cascade. In addition,
use-cases such as quarantine and node immunization are discussed to highlight
the generality of the proposed activity shaping framework. Finally, we present
the NetShape influence minimization method which is compared favorably to
baseline and state-of-the-art approaches through simulations on real social
networks.
| Kevin Scaman, Argyris Kalogeratos, Luca Corinzia, Nicolas Vayatis | null | 1709.05231 | null | null |
A Generic Framework for Interesting Subspace Cluster Detection in
Multi-attributed Networks | cs.LG cs.AI | Detection of interesting (e.g., coherent or anomalous) clusters has been
studied extensively on plain or univariate networks, with various applications.
Recently, algorithms have been extended to networks with multiple attributes
for each node in the real-world. In a multi-attributed network, often, a
cluster of nodes is only interesting for a subset (subspace) of attributes, and
this type of clusters is called subspace clusters. However, in the current
literature, few methods are capable of detecting subspace clusters, which
involves concurrent feature selection and network cluster detection. These
relevant methods are mostly heuristic-driven and customized for specific
application scenarios.
In this work, we present a generic and theoretical framework for detection of
interesting subspace clusters in large multi-attributed networks. Specifically,
we propose a subspace graph-structured matching pursuit algorithm, namely,
SG-Pursuit, to address a broad class of such problems for different score
functions (e.g., coherence or anomalous functions) and topology constraints
(e.g., connected subgraphs and dense subgraphs). We prove that our algorithm 1)
runs in nearly-linear time on the network size and the total number of
attributes and 2) enjoys rigorous guarantees (geometrical convergence rate and
tight error bound) analogous to those of the state-of-the-art algorithms for
sparse feature selection problems and subgraph detection problems. As a case
study, we specialize SG-Pursuit to optimize a number of well-known score
functions for two typical tasks, including detection of coherent dense and
anomalous connected subspace clusters in real-world networks. Empirical
evidence demonstrates that our proposed generic algorithm SG-Pursuit performs
superior over state-of-the-art methods that are designed specifically for these
two tasks.
| Feng Chen, Baojian Zhou, Adil Alim, Liang Zhao | null | 1709.05246 | null | null |
Detection of Anomalies in Large Scale Accounting Data using Deep
Autoencoder Networks | cs.LG cs.CE | Learning to detect fraud in large-scale accounting data is one of the
long-standing challenges in financial statement audits or fraud investigations.
Nowadays, the majority of applied techniques refer to handcrafted rules derived
from known fraud scenarios. While fairly successful, these rules exhibit the
drawback that they often fail to generalize beyond known fraud scenarios and
fraudsters gradually find ways to circumvent them. To overcome this
disadvantage and inspired by the recent success of deep learning we propose the
application of deep autoencoder neural networks to detect anomalous journal
entries. We demonstrate that the trained network's reconstruction error for a
journal entry, regularized by the entry's individual attribute probabilities,
can be interpreted as a highly adaptive anomaly assessment. Experiments on two
real-world datasets of journal entries show the
effectiveness of the approach resulting in high f1-scores of 32.93 (dataset A)
and 16.95 (dataset B) and less false positive alerts compared to state of the
art baseline methods. Initial feedback received by chartered accountants and
fraud examiners underpinned the quality of the approach in capturing highly
relevant accounting anomalies.
| Marco Schreyer, Timur Sattarov, Damian Borth, Andreas Dengel and Bernd
Reimer | null | 1709.05254 | null | null |
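A heavily simplified sketch of the scoring idea, in which the reconstruction error of an entry is reweighted by the rarity of its attribute values; the weighting scheme is our reading of the abstract, and `model` stands for any trained Keras-style autoencoder.

```python
import numpy as np

def anomaly_scores(model, X, attr_probs):
    """Sketch of an autoencoder anomaly score for one-hot encoded journal
    entries. attr_probs[i, j] is the empirical frequency of entry i's
    observed value for attribute j; the rarity weighting is illustrative."""
    recon = model.predict(X)
    err = np.mean((X - recon) ** 2, axis=1)      # per-entry reconstruction error
    rarity = 1.0 - attr_probs.mean(axis=1)       # entries with rare attribute values weigh more
    return err * rarity                          # rank entries by score for review
```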
Supervising Unsupervised Learning | cs.AI cs.LG stat.ML | We introduce a framework to leverage knowledge acquired from a repository of
(heterogeneous) supervised datasets to new unsupervised datasets. Our
perspective avoids the subjectivity inherent in unsupervised learning by
reducing it to supervised learning, and provides a principled way to evaluate
unsupervised algorithms. We demonstrate the versatility of our framework via
simple agnostic bounds on unsupervised problems. In the context of clustering,
our approach helps choose the number of clusters and the clustering algorithm,
remove outliers, and provably circumvent Kleinberg's impossibility result.
Experimental results across hundreds of problems demonstrate improved
performance on unsupervised data with simple algorithms, despite the fact that
our problems come from heterogeneous domains. Additionally, our framework lets
us leverage deep networks to learn common features from many such small
datasets, and perform zero-shot learning.
| Vikas K. Garg, Adam Kalai | null | 1709.05262 | null | null |
Optimal approximation of piecewise smooth functions using deep ReLU
neural networks | math.FA cs.LG stat.ML | We study the necessary and sufficient complexity of ReLU neural networks---in
terms of depth and number of weights---which is required for approximating
classifier functions in $L^2$. As a model class, we consider the set
$\mathcal{E}^\beta (\mathbb R^d)$ of possibly discontinuous piecewise $C^\beta$
functions $f : [-1/2, 1/2]^d \to \mathbb R$, where the different smooth regions
of $f$ are separated by $C^\beta$ hypersurfaces. For dimension $d \geq 2$,
regularity $\beta > 0$, and accuracy $\varepsilon > 0$, we construct artificial
neural networks with ReLU activation function that approximate functions from
$\mathcal{E}^\beta(\mathbb R^d)$ up to $L^2$ error of $\varepsilon$. The
constructed networks have a fixed number of layers, depending only on $d$ and
$\beta$, and they have $O(\varepsilon^{-2(d-1)/\beta})$ many nonzero weights,
which we prove to be optimal. In addition to the optimality in terms of the
number of weights, we show that in order to achieve the optimal approximation
rate, one needs ReLU networks of a certain depth. Precisely, for piecewise
$C^\beta(\mathbb R^d)$ functions, this minimal depth is given---up to a
multiplicative constant---by $\beta/d$. Up to a log factor, our constructed
networks match this bound. This partly explains the benefits of depth for ReLU
networks by showing that deep networks are necessary to achieve efficient
approximation of (piecewise) smooth functions. Finally, we analyze
approximation in high-dimensional spaces where the function $f$ to be
approximated can be factorized into a smooth dimension reducing feature map
$\tau$ and classifier function $g$---defined on a low-dimensional feature
space---as $f = g \circ \tau$. We show that in this case the approximation rate
depends only on the dimension of the feature space and not the input dimension.
| Philipp Petersen, Felix Voigtlaender | null | 1709.05289 | null | null |
Dynamic Capacity Estimation in Hopfield Networks | cs.NE cs.LG | Understanding the memory capacity of neural networks remains a challenging
problem in implementing artificial intelligence systems. In this paper, we
address the notion of capacity with respect to Hopfield networks and propose a
dynamic approach to monitoring a network's capacity. We define our
understanding of capacity as the maximum number of stored patterns which can be
retrieved when probed by the stored patterns. Prior work in this area has
presented static expressions dependent on neuron count $N$, forcing network
designers to assume worst-case input characteristics for bias and correlation
when setting the capacity of the network. Instead, our model operates
simultaneously with the learning Hopfield network and concludes on a capacity
estimate based on the patterns which were stored. By continuously updating the
crosstalk associated with the stored patterns, our model guards the network
from overwriting its memory traces and exceeding its capacity. We simulate our
model using artificially generated random patterns, which can be set to a
desired bias and correlation, and observe capacity estimates that are between
93% and 97% accurate. As a result, our model doubles the memory efficiency of Hopfield
networks in comparison to the static and worst-case capacity estimate while
minimizing the risk of lost patterns.
| Saarthak Sarup and Mingoo Seok | null | 1709.0534 | null | null |
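The crosstalk quantity at the heart of the monitoring model can be computed directly from the stored patterns under Hebbian weights. A sketch follows; the retrieval-failure condition in the closing comment is the standard textbook one, and the paper's exact capacity criterion may differ.

```python
import numpy as np

def crosstalk(patterns, probe_idx):
    """Per-neuron crosstalk of the Hebbian Hopfield update when probing with
    stored pattern `probe_idx` (patterns: (p, N) array of +/-1 values):

        C_i = (1/N) * sum_{mu != probe} xi^mu_i * (xi^mu . xi^probe)
    """
    p, N = patterns.shape
    x = patterns[probe_idx]
    overlaps = patterns @ x          # inner products of every pattern with the probe
    overlaps[probe_idx] = 0          # exclude the probe's own (signal) term
    return (overlaps @ patterns) / N

# bit i of the retrieved pattern flips when C_i * x_i <= -1; monitoring this
# while storing new patterns yields a dynamic, data-dependent capacity estimate
```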
Anomaly Detection for a Water Treatment System Using Unsupervised
Machine Learning | cs.LG | In this paper, we propose and evaluate the application of unsupervised
machine learning to anomaly detection for a Cyber-Physical System (CPS). We
compare two methods: Deep Neural Networks (DNN) adapted to time series data
generated by a CPS, and one-class Support Vector Machines (SVM). These methods
are evaluated against data from the Secure Water Treatment (SWaT) testbed, a
scaled-down but fully operational raw water purification plant. For both
methods, we first train detectors using a log generated by SWaT operating under
normal conditions. Then, we evaluate the performance of both methods using a
log generated by SWaT operating under 36 different attack scenarios. We find
that our DNN generates fewer false positives than our one-class SVM while our
SVM detects slightly more anomalies. Overall, our DNN has a slightly better
F-measure than our SVM. We discuss the characteristics of the DNN and one-class
SVM used in this experiment, and compare the advantages and disadvantages of
the two methods.
| Jun Inoue, Yoriyuki Yamagata, Yuqi Chen, Christopher M. Poskitt, Jun
Sun | 10.1109/ICDMW.2017.149 | 1709.05342 | null | null |
Supervised and Unsupervised Speech Enhancement Using Nonnegative Matrix
Factorization | cs.SD cs.LG | Reducing the interference noise in a monaural noisy speech signal has been a
challenging task for many years. Compared to traditional unsupervised speech
enhancement methods, e.g., Wiener filtering, supervised approaches, such as
algorithms based on hidden Markov models (HMM), lead to higher-quality enhanced
speech signals. However, the main practical difficulty of these approaches is
that for each noise type a model is required to be trained a priori. In this
paper, we investigate a new class of supervised speech denoising algorithms
using nonnegative matrix factorization (NMF). We propose a novel speech
enhancement method that is based on a Bayesian formulation of NMF (BNMF). To
circumvent the mismatch problem between the training and testing stages, we
propose two solutions. First, we use an HMM in combination with BNMF (BNMF-HMM)
to derive a minimum mean square error (MMSE) estimator for the speech signal
with no information about the underlying noise type. Second, we suggest a
scheme to learn the required noise BNMF model online, which is then used to
develop an unsupervised speech enhancement system. Extensive experiments are
carried out to investigate the performance of the proposed methods under
different conditions. Moreover, we compare the performance of the developed
algorithms with state-of-the-art speech enhancement schemes using various
objective measures. Our simulations show that the proposed BNMF-based methods
outperform the competing algorithms substantially.
| Nasser Mohammadiha, Paris Smaragdis, Arne Leijon | 10.1109/TASL.2013.2270369 | 1709.05362 | null | null |
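As a point of reference for the NMF building block (the paper's BNMF is a Bayesian variant), the standard KL-divergence multiplicative updates on a magnitude spectrogram are sketched below.

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, seed=0):
    """Multiplicative-update NMF under the KL divergence.
    V: (f, t) nonnegative magnitude spectrogram. Returns W: (f, r), H: (r, t)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-3
    H = rng.random((r, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + 1e-12)   # update activations
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + 1e-12)   # update dictionary
    return W, H
```

For enhancement one would typically train separate speech and noise dictionaries, fix W = [W_s, W_n] on the noisy spectrogram while inferring H, and apply a Wiener-like mask W_s H_s / (W H) to the noisy signal; the Bayesian and online variants in the paper refine this basic scheme.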
Road Friction Estimation for Connected Vehicles using Supervised Machine
Learning | cs.LG stat.ML | In this paper, the problem of road friction prediction from a fleet of
connected vehicles is investigated. A framework is proposed to predict the road
friction level using both historical friction data from the connected cars and
data from weather stations, and comparative results from different methods are
presented. The problem is formulated as a classification task where the
available data is used to train three machine learning models including
logistic regression, support vector machine, and neural networks to predict the
friction class (slippery or non-slippery) in the future for specific road
segments. In addition to the friction values, which are measured by moving
vehicles, additional parameters such as humidity, temperature, and rainfall are
used to obtain a set of descriptive feature vectors as input to the
classification methods. The proposed prediction models are evaluated for
different prediction horizons (0 to 120 minutes in the future) where the
evaluation shows that the neural networks method leads to more stable results
in different conditions.
| Ghazaleh Panahandeh, Erik Ek, Nasser Mohammadiha | 10.1109/IVS.2017.7995885 | 1709.05379 | null | null |
The Uncertainty Bellman Equation and Exploration | cs.AI cs.LG math.OC stat.ML | We consider the exploration/exploitation problem in reinforcement learning.
For exploitation, it is well known that the Bellman equation connects the value
at any time-step to the expected value at subsequent time-steps. In this paper
we consider a similar \textit{uncertainty} Bellman equation (UBE), which
connects the uncertainty at any time-step to the expected uncertainties at
subsequent time-steps, thereby extending the potential exploratory benefit of a
policy beyond individual time-steps. We prove that the unique fixed point of
the UBE yields an upper bound on the variance of the posterior distribution of
the Q-values induced by any policy. This bound can be much tighter than
traditional count-based bonuses that compound standard deviation rather than
variance. Importantly, and unlike several existing approaches to optimism, this
method scales naturally to large systems with complex generalization.
Substituting our UBE-exploration strategy for $\epsilon$-greedy improves DQN
performance on 51 out of 57 games in the Atari suite.
| Brendan O'Donoghue, Ian Osband, Remi Munos, Volodymyr Mnih | null | 1709.0538 | null | null |
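In the tabular case the UBE fixed point can be computed by simple sweeps, analogous to policy evaluation. A sketch in our notation follows, where `nu` is a local uncertainty estimate (e.g. a count-based variance) and the exploration rule in the closing comment replaces epsilon-greedy.

```python
import numpy as np

def solve_ube(P, pi, nu, gamma=0.99, n_sweeps=1000, tol=1e-8):
    """Tabular fixed point of the uncertainty Bellman equation (our notation):

        u(s,a) = nu(s,a) + gamma^2 * E_{s'~P(.|s,a), a'~pi(.|s')}[u(s',a')]

    P: (S, A, S) transition tensor, pi: (S, A) policy, nu: (S, A) local
    uncertainty. Returns u: (S, A).
    """
    S, A = nu.shape
    u = np.zeros((S, A))
    for _ in range(n_sweeps):
        next_u = (pi * u).sum(axis=1)         # (S,) expected uncertainty under pi
        u_new = nu + gamma**2 * (P.reshape(S * A, S) @ next_u).reshape(S, A)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# exploration: act greedily w.r.t. Q[s, a] + beta * np.sqrt(u[s, a])
```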
Multi-Agent Distributed Lifelong Learning for Collective Knowledge
Acquisition | cs.LG | Lifelong machine learning methods acquire knowledge over a series of
consecutive tasks, continually building upon their experience. Current lifelong
learning algorithms rely upon a single learning agent that has centralized
access to all data. In this paper, we extend the idea of lifelong learning from
a single agent to a network of multiple agents that collectively learn a series
of tasks. Each agent faces some (potentially unique) set of tasks; the key idea
is that knowledge learned from these tasks may benefit other agents trying to
learn different (but related) tasks. Our Collective Lifelong Learning Algorithm
(CoLLA) provides an efficient way for a network of agents to share their
learned knowledge in a distributed and decentralized manner, while preserving
the privacy of the locally observed data. Note that a decentralized scheme is a
subclass of distributed algorithms where a central server does not exist and in
addition to data, computations are also distributed among the agents. We
provide theoretical guarantees for robust performance of the algorithm and
empirically demonstrate that CoLLA outperforms existing approaches for
distributed multi-task learning on a variety of data sets.
| Mohammad Rostami, Soheil Kolouri, Kyungnam Kim, Eric Eaton | null | 1709.05412 | null | null |
Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting
Neural Networks | cs.LG cs.GR stat.ML | We present a technique for efficiently synthesizing images of atmospheric
clouds using a combination of Monte Carlo integration and neural networks. The
intricacies of Lorenz-Mie scattering and the high albedo of cloud-forming
aerosols make rendering of clouds---e.g. the characteristic silver lining and
the "whiteness" of the inner body---challenging for methods based solely on
Monte Carlo integration or diffusion theory. We approach the problem
differently. Instead of simulating all light transport during rendering, we
pre-learn the spatial and directional distribution of radiant flux from tens of
cloud exemplars. To render a new scene, we sample visible points of the cloud
and, for each, extract a hierarchical 3D descriptor of the cloud geometry with
respect to the shading location and the light source. The descriptor is input
to a deep neural network that predicts the radiance function for each shading
configuration. We make the key observation that progressively feeding the
hierarchical descriptor into the network enhances the network's ability to
learn faster and predict with high accuracy while using few coefficients. We
also employ a block design with residual connections to further improve
performance. A GPU implementation of our method synthesizes images of clouds
that are nearly indistinguishable from the reference solution within seconds
interactively. Our method thus represents a viable solution for applications
such as cloud design and, thanks to its temporal stability, also for
high-quality production of animated content.
| Simon Kallweit and Thomas M\"uller and Brian McWilliams and Markus
Gross and Jan Nov\'ak | 10.1145/3130800.3130880 | 1709.05418 | null | null |
Grade Prediction with Temporal Course-wise Influence | cs.LG | There is a critical need to develop new educational technology applications
that analyze the data collected by universities to ensure that students
graduate in a timely fashion (4 to 6 years) and are well prepared for jobs in
their respective fields of study. In this paper, we present a novel
approach for analyzing historical educational records from a large, public
university to perform next-term grade prediction; i.e., to estimate the grades
that a student will get in a course that he/she will enroll in the next term.
Accurate next-term grade prediction holds the promise for better student degree
planning, personalized advising and automated interventions to ensure that
students stay on track in their chosen degree program and graduate on time. We
present a factorization-based approach called Matrix Factorization with
Temporal Course-wise Influence that incorporates course-wise influence effects
and temporal effects for grade prediction. In this model, students and courses
are represented in a latent "knowledge" space. The grade of a student on a
course is modeled as the similarity of their latent representation in the
"knowledge" space. Course-wise influence is considered as an additional factor
in the grade prediction. Our experimental results show that the proposed method
outperforms several baseline approaches and infer meaningful patterns between
pairs of courses within academic programs.
| Zhiyun Ren, Xia Ning, Huzefa Rangwala | null | 1709.05433 | null | null |
Learning Sampling Distributions for Robot Motion Planning | cs.RO cs.LG | A defining feature of sampling-based motion planning is the reliance on an
implicit representation of the state space, which is enabled by a set of
probing samples. Traditionally, these samples are drawn either
probabilistically or deterministically to uniformly cover the state space. Yet,
the motion of many robotic systems is often restricted to "small" regions of
the state space, due to, for example, differential constraints or
collision-avoidance constraints. To accelerate the planning process, it is thus
desirable to devise non-uniform sampling strategies that favor sampling in
those regions where an optimal solution might lie. This paper proposes a
methodology for non-uniform sampling, whereby a sampling distribution is
learned from demonstrations, and then used to bias sampling. The sampling
distribution is computed through a conditional variational autoencoder,
allowing sample generation from the latent space conditioned on the specific
planning problem. This methodology is general, can be used in combination with
any sampling-based planner, and can effectively exploit the underlying
structure of a planning problem while maintaining the theoretical guarantees of
sampling-based approaches. Specifically, on several planning problems, the
proposed methodology is shown to effectively learn representations for the
relevant regions of the state space, resulting in an order of magnitude
improvement in terms of success rate and convergence to the optimal cost.
| Brian Ichter, James Harrison, Marco Pavone | null | 1709.05448 | null | null |
Subset Labeled LDA for Large-Scale Multi-Label Classification | stat.ML cs.LG | Labeled Latent Dirichlet Allocation (LLDA) is an extension of the standard
unsupervised Latent Dirichlet Allocation (LDA) algorithm, to address
multi-label learning tasks. Previous work has shown it to perform on par with
other state-of-the-art multi-label methods. Nonetheless, with increasing label
set sizes, LLDA encounters scalability issues. In this work, we introduce
Subset LLDA, a simple variant of the standard LLDA algorithm, that not only can
effectively scale up to problems with hundreds of thousands of labels but also
improves over the LLDA state-of-the-art. We conduct extensive experiments on
eight data sets, with label set sizes ranging from hundreds to hundreds of
thousands, comparing our proposed algorithm with the previously proposed LLDA
algorithms (Prior--LDA, Dep--LDA), as well as the state of the art in extreme
multi-label classification. The results show a steady advantage of our method
over the other LLDA algorithms and competitive results compared to the extreme
multi-label classification algorithms.
| Yannis Papanikolaou and Grigorios Tsoumakas | null | 1709.0548 | null | null |
A statistical interpretation of spectral embedding: the generalised
random dot product graph | stat.ML cs.LG | Spectral embedding is a procedure which can be used to obtain vector
representations of the nodes of a graph. This paper proposes a generalisation
of the latent position network model known as the random dot product graph, to
allow interpretation of those vector representations as latent position
estimates. The generalisation is needed to model heterophilic connectivity
(e.g., `opposites attract') and to cope with negative eigenvalues more
generally. We show that, whether the adjacency or normalised Laplacian matrix
is used, spectral embedding produces uniformly consistent latent position
estimates with asymptotically Gaussian error (up to identifiability). The
standard and mixed membership stochastic block models are special cases in
which the latent positions take only $K$ distinct vector values, representing
communities, or live in the $(K-1)$-simplex with those vertices, respectively.
Under the stochastic block model, our theory suggests spectral clustering using
a Gaussian mixture model (rather than $K$-means) and, under mixed membership,
fitting the minimum volume enclosing simplex, existing recommendations
previously only supported under non-negative-definite assumptions. Empirical
improvements in link prediction (over the random dot product graph), and the
potential to uncover richer latent structure (than posited under the standard
or mixed membership stochastic block models) are demonstrated in a
cyber-security example.
| Patrick Rubin-Delanchy, Joshua Cape, Minh Tang and Carey E. Priebe | null | 1709.05506 | null | null |
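A toy sketch of the pipeline the theory supports: adjacency spectral embedding using the largest-magnitude eigenvalues (so negative eigenvalues are handled), followed by Gaussian-mixture clustering rather than K-means. The two-block stochastic block model and all parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
B = np.array([[0.5, 0.1], [0.1, 0.4]])           # block connection probabilities
z = rng.integers(0, 2, size=200)                 # true community labels
A = (rng.uniform(size=(200, 200)) < B[z][:, z]).astype(float)
A = np.triu(A, 1); A = A + A.T                   # symmetric adjacency, no self-loops

vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:2]         # K largest-magnitude eigenvalues
X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))    # latent position estimates

labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
print(np.mean(labels == z))                      # agreement, up to label switching
```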
DeepLung: 3D Deep Convolutional Nets for Automated Pulmonary Nodule
Detection and Classification | cs.CV cs.LG cs.NE | In this work, we present a fully automated lung CT cancer diagnosis system,
DeepLung. DeepLung consists of two parts: nodule detection and classification.
Considering the 3D nature of lung CT data, two 3D networks are designed for
nodule detection and classification, respectively. Specifically, a 3D Faster
R-CNN is designed for nodule detection with a U-net-like encoder-decoder
structure to effectively learn nodule features. For nodule classification,
gradient boosting machine (GBM) with 3D dual path network (DPN) features is
proposed. The nodule classification subnetwork is validated on a public dataset
from LIDC-IDRI, on which it achieves better performance than state-of-the-art
approaches, and surpasses the average performance of four experienced doctors.
For the DeepLung system, candidate nodules are detected first by the nodule
detection subnetwork, and nodule diagnosis is conducted by the classification
subnetwork. Extensive experimental results demonstrate that DeepLung is
comparable to experienced doctors for both nodule-level and patient-level
diagnosis on the LIDC-IDRI dataset.
| Wentao Zhu, Chaochun Liu, Wei Fan, Xiaohui Xie | null | 1709.05538 | null | null |
Generating Compact Tree Ensembles via Annealing | stat.ML cs.LG | Tree ensembles are flexible predictive models that can capture relevant
variables and, to some extent, their interactions in a compact and interpretable
manner. Most algorithms for obtaining tree ensembles are based on versions of
boosting or Random Forest. Previous work showed that boosting algorithms
exhibit a cyclic behavior of selecting the same tree again and again due to the
way the loss is optimized. At the same time, Random Forest is not based on loss
optimization and yields a more complex and less interpretable model. In this
paper we present a novel method for obtaining compact tree ensembles by growing
a large pool of trees in parallel with many independent boosting threads and
then selecting a small subset and updating their leaf weights by loss
optimization. We allow for the trees in the initial pool to have different
depths which further helps with generalization. Experiments on real datasets
show that the obtained model usually has a smaller loss than boosting, which is
also reflected in a lower misclassification error on the test set.
| Gitesh Dawer, Yangzi Guo, Adrian Barbu | null | 1709.05545 | null | null |
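A rough sketch of the pool-then-select idea: grow many trees in independent boosting runs, then keep a sparse subset with refitted weights. Here an L1-penalized logistic regression stands in for the paper's selection and leaf-weight optimization; it is illustrative only, not the authors' method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

pool = []
for seed in range(5):                            # independent boosting "threads"
    gb = GradientBoostingClassifier(n_estimators=20, max_depth=3,
                                    random_state=seed).fit(X, y)
    pool.extend(gb.estimators_[:, 0])            # collect the individual trees

tree_preds = np.column_stack([t.predict(X) for t in pool])
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector.fit(tree_preds, y)                      # sparse weights pick a small subset
kept = np.flatnonzero(selector.coef_[0])
print(f"kept {kept.size} of {len(pool)} trees")
```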
Deep Automated Multi-task Learning | cs.LG stat.ML | Multi-task learning (MTL) has recently contributed to learning better
representations in service of various NLP tasks. MTL aims at improving the
performance of a primary task by jointly training on a secondary task. This
paper introduces automated tasks, which exploit the sequential nature of the
input data, as secondary tasks in an MTL model. We explore next word
prediction, next character prediction, and missing word completion as potential
automated tasks. Our results show that training on a primary task in parallel
with a secondary automated task improves both the convergence speed and
accuracy for the primary task. We suggest two methods for augmenting an
existing network with automated tasks and establish better performance in topic
prediction, sentiment analysis, and hashtag recommendation. Finally, we show
that the MTL models can perform well on datasets that are small and colloquial
by nature.
| Davis Liang, Yan Shu | null | 1709.05554 | null | null |
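A minimal sketch of the primary-plus-automated-task setup: a shared encoder feeds both a primary classification head and a next-word-prediction head, and the joint loss is a weighted sum. The GRU encoder, all dimensions, and the 0.3 weight are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTLModel(nn.Module):
    def __init__(self, vocab=1000, emb=64, hidden=128, n_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)  # shared encoder
        self.primary_head = nn.Linear(hidden, n_classes)      # e.g. topic prediction
        self.lm_head = nn.Linear(hidden, vocab)               # automated next-word task

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        return self.primary_head(h[:, -1]), self.lm_head(h[:, :-1])

model = MTLModel()
tokens = torch.randint(0, 1000, (8, 20))
labels = torch.randint(0, 5, (8,))
primary_logits, lm_logits = model(tokens)
loss = F.cross_entropy(primary_logits, labels) \
     + 0.3 * F.cross_entropy(lm_logits.reshape(-1, 1000), tokens[:, 1:].reshape(-1))
loss.backward()
```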
Speech Dereverberation Using Nonnegative Convolutive Transfer Function
and Spectro-Temporal Modeling | cs.SD cs.LG | This paper presents two single-channel speech dereverberation methods to
enhance the quality of speech signals that have been recorded in an enclosed
space. For both methods, the room acoustics are modeled using a nonnegative
approximation of the convolutive transfer function (NCTF), and to additionally
exploit the spectral properties of the speech signal, such as the low-rank
nature of the speech spectrogram, the speech spectrogram is modeled using
nonnegative matrix factorization (NMF). Two methods are described to combine
the NCTF and NMF models. In the first method, referred to as the integrated
method, a cost function is constructed by directly integrating the speech NMF
model into the NCTF model, while in the second method, referred to as the
weighted method, the NCTF- and NMF-based cost functions are weighted and summed.
Efficient update rules are derived to solve both optimization problems. In
addition, an extension of the integrated method is presented, which exploits
the temporal dependencies of the speech signal. Several experiments are
performed on reverberant speech signals with and without background noise,
where the integrated method yields a considerably higher speech quality than
the baseline NCTF method and a state-of-the-art spectral enhancement method.
Moreover, the experimental results indicate that the weighted method can even
lead to a better performance in terms of instrumental quality measures, but
that the optimal weighting parameter depends on the room acoustics and the
utilized NMF model. Modeling the temporal dependencies in the integrated method
was found to be useful only for highly reverberant conditions.
| Nasser Mohammadiha, Simon Doclo | 10.1109/TASLP.2015.2501724 | 1709.05557 | null | null |
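For reference, a minimal sketch of the NMF building block used to model the speech spectrogram, with standard Euclidean multiplicative updates; the paper derives its own update rules for the coupled NCTF-NMF cost functions, which this does not reproduce.

```python
import numpy as np

def nmf(S, rank=8, iters=100, eps=1e-9, rng=None):
    # Standard multiplicative updates for S ~ W @ H with nonnegativity
    # (Euclidean cost); a stand-in, not the paper's derived updates.
    rng = rng or np.random.default_rng(0)
    F, T = S.shape
    W, H = rng.uniform(size=(F, rank)), rng.uniform(size=(rank, T))
    for _ in range(iters):
        H *= (W.T @ S) / (W.T @ W @ H + eps)
        W *= (S @ H.T) / (W @ H @ H.T + eps)
    return W, H

S = np.abs(np.random.default_rng(1).normal(size=(64, 200))) ** 2  # toy spectrogram
W, H = nmf(S)
print(np.linalg.norm(S - W @ H) / np.linalg.norm(S))              # relative fit
```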
Nonnegative HMM for Babble Noise Derived from Speech HMM: Application to
Speech Enhancement | cs.SD cs.LG | Deriving a good model for multitalker babble noise can facilitate different
speech processing algorithms, e.g. noise reduction, to reduce the so-called
cocktail party difficulty. In the available systems, the fact that the babble
waveform is generated as a sum of $N$ different speech waveforms is not exploited
explicitly. In this paper, first we develop a gamma hidden Markov model for
power spectra of the speech signal, and then formulate it as a sparse
nonnegative matrix factorization (NMF). Second, the sparse NMF is extended by
relaxing the sparsity constraint, and a novel model for babble noise (gamma
nonnegative HMM) is proposed in which the babble basis matrix is the same as
the speech basis matrix, and only the activation factors (weights) of the basis
vectors are different for the two signals over time. Finally, a noise reduction
algorithm is proposed using the derived speech and babble models. All of the
stationary model parameters are estimated using the expectation-maximization
(EM) algorithm, whereas the time-varying parameters, i.e. the gain parameters
of speech and babble signals, are estimated using a recursive EM algorithm. The
objective and subjective listening evaluations show that the proposed babble
model and the final noise reduction algorithm significantly outperform the
conventional methods.
| Nasser Mohammadiha, Arne Leijon | 10.1109/TASL.2013.2243435 | 1709.05559 | null | null |
MultiNet: Multi-Modal Multi-Task Learning for Autonomous Driving | cs.LG cs.RO | Autonomous driving requires operation in different behavioral modes ranging
from lane following and intersection crossing to turning and stopping. However,
most existing deep learning approaches to autonomous driving do not consider
the behavioral mode in the training strategy. This paper describes a technique
for learning multiple distinct behavioral modes in a single deep neural network
through the use of multi-modal multi-task learning. We study the effectiveness
of this approach, denoted MultiNet, using self-driving model cars for driving
in unstructured environments such as sidewalks and unpaved roads. Using labeled
data from over one hundred hours of driving our fleet of 1/10th scale model
cars, we trained different neural networks to predict the steering angle and
driving speed of the vehicle in different behavioral modes. We show that in
each case, MultiNet networks outperform networks trained on individual modes
while using a fraction of the total number of parameters.
| Sauhaarda Chowdhuri, Tushar Pankaj, Karl Zipser | null | 1709.05581 | null | null |
Mitigating Evasion Attacks to Deep Neural Networks via Region-based
Classification | cs.CR cs.LG stat.ML | Deep neural networks (DNNs) have transformed several artificial intelligence
research areas including computer vision, speech recognition, and natural
language processing. However, recent studies demonstrated that DNNs are
vulnerable to adversarial manipulations at testing time. Specifically, suppose
we have a testing example, whose label can be correctly predicted by a DNN
classifier. An attacker can add a small carefully crafted noise to the testing
example such that the DNN classifier predicts an incorrect label, where the
crafted testing example is called an adversarial example. Such attacks are called
evasion attacks. Evasion attacks are one of the biggest challenges for
deploying DNNs in safety- and security-critical applications such as
self-driving cars. In this work, we develop new methods to defend against
evasion attacks. Our key observation is that adversarial examples are close to
the classification boundary. Therefore, we propose region-based classification
to be robust to adversarial examples. For a benign/adversarial testing example,
we aggregate information over a hypercube centered at the example to predict its
label. In contrast, traditional classifiers perform point-based classification,
i.e., given a testing example, the classifier predicts its label based on the
testing example alone. Our evaluation results on MNIST and CIFAR-10 datasets
demonstrate that our region-based classification can significantly mitigate
evasion attacks without sacrificing classification accuracy on benign examples.
Specifically, our region-based classification achieves the same classification
accuracy on testing benign examples as point-based classification, but our
region-based classification is significantly more robust than point-based
classification to various evasion attacks.
| Xiaoyu Cao, Neil Zhenqiang Gong | null | 1709.05583 | null | null |
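A minimal sketch of region-based classification: sample points from a hypercube centered at the test example, predict each with an ordinary point-based classifier, and take a majority vote. The radius `r` and sample count are tunable assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def region_predict(classifier, x, r=0.3, n_samples=100, rng=None):
    rng = rng or np.random.default_rng(0)
    noise = rng.uniform(-r, r, size=(n_samples, x.shape[0]))
    preds = classifier.predict(x + noise)        # labels over the hypercube
    values, counts = np.unique(preds, return_counts=True)
    return values[np.argmax(counts)]             # majority vote

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(50, 2)) - 1, rng.normal(size=(50, 2)) + 1])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)             # any point-based classifier
print(region_predict(clf, np.array([0.2, 0.1])))
```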
Representation Learning on Graphs: Methods and Applications | cs.SI cs.LG | Machine learning on graphs is an important and ubiquitous task with
applications ranging from drug design to friendship recommendation in social
networks. The primary challenge in this domain is finding a way to represent,
or encode, graph structure so that it can be easily exploited by machine
learning models. Traditionally, machine learning approaches relied on
user-defined heuristics to extract features encoding structural information
about a graph (e.g., degree statistics or kernel functions). However, recent
years have seen a surge in approaches that automatically learn to encode graph
structure into low-dimensional embeddings, using techniques based on deep
learning and nonlinear dimensionality reduction. Here we provide a conceptual
review of key advancements in this area of representation learning on graphs,
including matrix factorization-based methods, random-walk based algorithms, and
graph neural networks. We review methods to embed individual nodes as well as
approaches to embed entire (sub)graphs. In doing so, we develop a unified
framework to describe these recent approaches, and we highlight a number of
important applications and directions for future work.
| William L. Hamilton, Rex Ying, and Jure Leskovec | null | 1709.05584 | null | null |
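As a concrete instance of the random-walk family reviewed here (DeepWalk/node2vec style), a minimal sketch of the walk-generation step; in the full methods, embeddings are then trained so that inner products track walk co-occurrence statistics. The graph and parameters are toy assumptions.

```python
import numpy as np

def random_walks(adj_list, walk_len=10, walks_per_node=5, rng=None):
    # adj_list: dict mapping node -> list of neighbor nodes.
    rng = rng or np.random.default_rng(0)
    walks = []
    for start in adj_list:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                nbrs = adj_list[node]
                if not nbrs:
                    break
                node = nbrs[rng.integers(len(nbrs))]  # uniform neighbor step
                walk.append(node)
            walks.append(walk)
    return walks

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(random_walks(adj, walk_len=5, walks_per_node=2)[:3])
```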
Characterization of Hemodynamic Signal by Learning Multi-View
Relationships | stat.ML cs.LG | Multi-view data are increasingly prevalent in practice. It is often relevant
to analyze the relationships between pairs of views by multi-view component
analysis techniques such as Canonical Correlation Analysis (CCA). However, data
may easily exhibit nonlinear relations, which CCA cannot reveal. We aim to
investigate the usefulness of nonlinear multi-view relations to characterize
multi-view data in an explainable manner. To address this challenge, we propose
a method to characterize globally nonlinear multi-view relationships as a
mixture of linear relationships. The method is a form of clustering: it
identifies partitions of observations that exhibit the same relationships and
learns those relationships simultaneously. Unlike almost all other clustering
methods, it defines clusters by multi-view rather than spatial relationships.
Furthermore, we introduce a supervised classification method that builds on our
clustering method by employing multi-view relationships as discriminative
factors. The value of these methods resides in their capability to find useful
structure in the data that single-view or current multi-view methods may
struggle to find. We demonstrate the potential utility of the proposed approach
using an application in clinical informatics to detect and characterize slow
bleeding in patients whose central venous pressure (CVP) is monitored at the
bedside. Presently, CVP is considered an insensitive measure of a subject's
intravascular volume status or its change. However, we reason that features of
CVP during inspiration and expiration should be informative in early
identification of emerging changes of patient status. We empirically show how
the proposed method can help discover and analyze multiple-to-multiple
correlations, which could be nonlinear or vary throughout the population, by
finding explainable structure of operational interest to practitioners.
| Eric Lei, Kyle Miller, Michael R. Pinsky, Artur Dubrawski | null | 1709.05602 | null | null |
Multi-Entity Dependence Learning with Rich Context via Conditional
Variational Auto-encoder | cs.LG stat.ML | Multi-Entity Dependence Learning (MEDL) explores conditional correlations
among multiple entities. The availability of rich contextual information
requires a nimble learning scheme that tightly integrates with deep neural
networks and has the ability to capture correlation structures among
exponentially many outcomes. We propose MEDL_CVAE, which encodes a conditional
multivariate distribution as a generating process. As a result, the variational
lower bound of the joint likelihood can be optimized via a conditional
variational auto-encoder and trained end-to-end on GPUs. Our MEDL_CVAE was
motivated by two real-world applications in computational sustainability: one
studies the spatial correlation among multiple bird species using the eBird
data and the other models multi-dimensional landscape composition and human
footprint in the Amazon rainforest with satellite images. We show that
MEDL_CVAE captures rich dependency structures, scales better than previous
methods, and further improves on the joint likelihood taking advantage of very
large datasets that are beyond the capacity of previous methods.
| Luming Tang, Yexiang Xue, Di Chen, Carla P. Gomes | null | 1709.05612 | null | null |
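A minimal sketch of a conditional VAE whose negative ELBO matches the variational lower bound described above, for a multivariate binary outcome y given context x. The architecture and dimensions are illustrative assumptions; the paper conditions on far richer features with deeper networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=16, y_dim=10, z_dim=4, h=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h, z_dim), nn.Linear(h, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + x_dim, h), nn.ReLU(),
                                 nn.Linear(h, y_dim))

    def forward(self, x, y):
        e = self.enc(torch.cat([x, y], dim=-1))                   # encoder q(z|x,y)
        mu, logvar = self.mu(e), self.logvar(e)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        logits = self.dec(torch.cat([z, x], dim=-1))              # decoder p(y|z,x)
        recon = F.binary_cross_entropy_with_logits(logits, y, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl                                         # negative ELBO

model = CVAE()
x, y = torch.randn(32, 16), torch.randint(0, 2, (32, 10)).float()
loss = model(x, y)
loss.backward()
```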
On Inductive Abilities of Latent Factor Models for Relational Learning | cs.LG cs.AI stat.ML | Latent factor models are increasingly popular for modeling multi-relational
knowledge graphs. Owing to their vectorial nature, it is not only hard to
interpret why this class of models works so well, but also to understand where
they fail and how they might be improved. We conduct an experimental survey of
state-of-the-art models, not towards a purely comparative end, but as a means
to get insight about their inductive abilities. To assess the strengths and
weaknesses of each model, we create simple tasks that exhibit, first, atomic
properties of binary relations and, second, common inter-relational inference
through synthetic genealogies. Based on these experimental results, we propose
new research directions to improve on existing models.
| Th\'eo Trouillon, \'Eric Gaussier, Christopher R. Dance, Guillaume
Bouchard | null | 1709.05666 | null | null |
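For concreteness, one member of the model class surveyed above is the DistMult-style trilinear score: a triple (subject, relation, object) scores highly when the element-wise product of its embeddings is large. A minimal sketch with random embeddings:

```python
import numpy as np

def score(E, R, s, r, o):
    # DistMult-style trilinear product over entity and relation embeddings.
    # Note this particular score is symmetric in (s, o), one of the atomic
    # relational properties such surveys probe.
    return np.sum(E[s] * R[r] * E[o])

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 16))   # entity embeddings
R = rng.normal(size=(10, 16))    # relation embeddings
print(score(E, R, s=3, r=1, o=42))
```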
Neural Affine Grayscale Image Denoising | cs.CV cs.LG | We propose a new grayscale image denoiser, dubbed as Neural Affine Image
Denoiser (Neural AIDE), which utilizes a neural network in a novel way. Unlike
other neural network based image denoising methods, which typically apply
simple supervised learning to learn a mapping from a noisy patch to a clean
patch, we train a neural network to learn an \emph{affine} mapping
that gets applied to a noisy pixel, based on its context. Our formulation
enables both supervised training of the network from the labeled training
dataset and adaptive fine-tuning of the network parameters using the given
noisy image subject to denoising. The key tool behind Neural AIDE is an
estimated loss function for the MSE of the affine mapping, computed solely
from the noisy data. As a result, our algorithm can outperform most of the
recent state-of-the-art methods in the standard benchmark datasets. Moreover,
our fine-tuning method can nicely overcome one of the drawbacks of the
patch-level supervised learning methods in image denoising; namely, a model
trained in a supervised manner with a mismatched noise variance can be mostly
corrected as long as the matched noise variance is available during the
fine-tuning step.
| Sungmin Cha, Taesup Moon | null | 1709.05672 | null | null |
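To illustrate the estimated-loss idea for an affine map x_hat = a*z + b under additive Gaussian noise with known variance sigma^2, a short calculation gives an unbiased estimate of the true MSE that uses only the noisy pixel: E[(aZ + b - X)^2] = E[((a - 1)Z + b)^2] + (2a - 1)sigma^2. This is a standard derivation under those assumptions, not necessarily the paper's exact loss function.

```python
import numpy as np

def estimated_mse(a, b, z, sigma2):
    # Unbiased MSE estimate from noisy data only (additive Gaussian noise):
    # E[(a*Z + b - X)^2] = E[((a - 1)*Z + b)^2] + (2a - 1) * sigma2.
    return ((a - 1.0) * z + b) ** 2 + (2.0 * a - 1.0) * sigma2

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100_000)              # clean pixels (for the sanity check)
z = x + rng.normal(0, 0.1, size=x.shape)         # noisy observations
a, b, sigma2 = 0.8, 0.1, 0.01
print(np.mean((a * z + b - x) ** 2))             # true MSE
print(np.mean(estimated_mse(a, b, z, sigma2)))   # estimate, no clean data needed
```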
FlashProfile: A Framework for Synthesizing Data Profiles | cs.LG | We address the problem of learning a syntactic profile for a collection of
strings, i.e. a set of regex-like patterns that succinctly describe the
syntactic variations in the strings. Real-world datasets, typically curated
from multiple sources, often contain data in various syntactic formats. Thus,
any data processing task is preceded by the critical step of data format
identification. However, manual inspection of data to identify the different
formats is infeasible in standard big-data scenarios.
Prior techniques are restricted to a small set of pre-defined patterns (e.g.
digits, letters, words, etc.), and provide no control over granularity of
profiles. We define syntactic profiling as a problem of clustering strings
based on syntactic similarity, followed by identifying patterns that succinctly
describe each cluster. We present a technique for synthesizing such profiles
over a given language of patterns, which also allows for interactive refinement
by requesting a desired number of clusters.
Using a state-of-the-art inductive synthesis framework, PROSE, we have
implemented our technique as FlashProfile. Across $153$ tasks over $75$ large
real datasets, we observe a median profiling time of only $\sim\,0.7\,$s.
Furthermore, we show that access to syntactic profiles may allow for more
accurate synthesis of programs, i.e. using fewer examples, in
programming-by-example (PBE) workflows such as FlashFill.
| Saswat Padhi, Prateek Jain, Daniel Perelman, Oleksandr Polozov, Sumit
Gulwani, Todd Millstein | 10.1145/3276520 | 1709.05725 | null | null |
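A toy sketch of the clustering-by-syntax step: map each string to a coarse token-class pattern and group strings that share one. FlashProfile synthesizes far richer patterns over a pattern language via program synthesis; this only illustrates the idea of clustering strings by syntactic shape.

```python
import re
from collections import defaultdict

def pattern(s):
    # Collapse digit runs to D+ and letter runs to L+, in a single pass.
    return re.sub(r"[0-9]+|[A-Za-z]+",
                  lambda m: "D+" if m.group(0)[0].isdigit() else "L+", s)

strings = ["2017-09-19", "1999-01-01", "Sep 19, 2017", "N/A"]
clusters = defaultdict(list)
for s in strings:
    clusters[pattern(s)].append(s)
for pat, members in clusters.items():
    print(pat, "->", members)
```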
Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies | cs.RO cs.AI cs.CV cs.LG cs.SY | Various approaches have been proposed to learn visuo-motor policies for
real-world robotic applications. One solution is first learning in simulation
then transferring to the real world. In the transfer, most existing approaches
need real-world images with labels. However, the labelling process is often
expensive or even impractical in many robotic applications. In this paper, we
propose an adversarial discriminative sim-to-real transfer approach to reduce
the cost of labelling real data. The effectiveness of the approach is
demonstrated with modular networks in a table-top object reaching task where a
7 DoF arm is controlled in velocity mode to reach a blue cuboid in clutter
through visual observations. The adversarial transfer approach reduced the
labelled real data requirement by 50%. Policies can be transferred to real
environments with only 93 labelled and 186 unlabelled real images. The
transferred visuo-motor policies are robust to novel (not seen in training)
objects in clutter and even a moving target, achieving a 97.8% success rate and
1.8 cm control accuracy.
| Fangyi Zhang, J\"urgen Leitner, Zongyuan Ge, Michael Milford, Peter
Corke | null | 1709.05746 | null | null |