title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Communication-Free Parallel Supervised Topic Models | cs.LG cs.CL cs.IR stat.ML | Embarrassingly (communication-free) parallel Markov chain Monte Carlo (MCMC)
methods are commonly used in learning graphical models. However, MCMC cannot be
directly applied to learning topic models because of the quasi-ergodicity
problem caused by the multimodal distribution of topics. In this paper, we develop
an embarrassingly parallel MCMC algorithm for sLDA. Our algorithm works by
switching the order of sampled-topic combination and labeling-variable
prediction in sLDA, which overcomes the quasi-ergodicity problem because
high-dimensional topics that follow a multimodal distribution are projected into
one-dimensional document labels that follow a unimodal distribution. Our
empirical experiments confirm that the out-of-sample prediction performance
using our embarrassingly parallel algorithm is comparable to non-parallel sLDA
while the computation time is significantly reduced.
| Lee Gao, Ronghuo Zheng | null | 1708.03052 | null | null |
Online Interactive Collaborative Filtering Using Multi-Armed Bandit with
Dependent Arms | cs.IR cs.LG | Online interactive recommender systems strive to promptly suggest to
consumers appropriate items (e.g., movies, news articles) according to the
current context including both the consumer and item content information.
However, such context information is often unavailable in practice for the
recommendation, where only the users' interaction data on items can be
utilized. Moreover, the lack of interaction records, especially for new users
and items, further degrades recommendation performance. To address these
issues, collaborative filtering (CF), one of the recommendation techniques
relying on the interaction data only, as well as the online multi-armed bandit
mechanisms, capable of achieving the balance between exploitation and
exploration, are adopted in the online interactive recommendation settings, by
assuming independent items (i.e., arms). Nonetheless, the assumption rarely
holds in reality, since the real-world items tend to be correlated with each
other (e.g., two articles with similar topics). In this paper, we study online
interactive collaborative filtering problems by considering the dependencies
among items. We explicitly formulate the item dependencies as the clusters on
arms, where the arms within a single cluster share similar latent topics.
In light of the topic modeling techniques, we come up with a generative model
to generate the items from their underlying topics. Furthermore, an efficient
online algorithm based on particle learning is developed for inferring both
latent parameters and states of our model. Additionally, our inferred model can
be naturally integrated with existing multi-armed selection strategies in the
online interactive collaborative filtering setting. Empirical studies on two real-world
applications, online recommendations of movies and news, demonstrate both the
effectiveness and efficiency of the proposed approach.
| Qing Wang, Chunqiu Zeng, Wubai Zhou, Tao Li, Larisa Shwartz, Genady
Ya. Grabarnik | 10.1109/TKDE.2018.2866041 | 1708.03058 | null | null |
A Machine Learning Approach to Routing | cs.NI cs.LG | Can ideas and techniques from machine learning be leveraged to automatically
generate "good" routing configurations? We investigate the power of data-driven
routing protocols. Our results suggest that applying ideas and techniques from
deep reinforcement learning to this context yields high performance, motivating
further research along these lines.
| Asaf Valadarsky, Michael Schapira, Dafna Shahaf, Aviv Tamar | null | 1708.03074 | null | null |
Hypotheses testing on infinite random graphs | cs.LG cs.IT math.IT math.ST stat.ML stat.TH | Drawing on some recent results that provide the formalism necessary to
define stationarity for infinite random graphs, this paper initiates the
study of statistical and learning questions pertaining to these objects.
Specifically, a criterion for the existence of a consistent test for complex
hypotheses is presented, generalizing the corresponding results on time series.
As an application, it is shown how one can test that a tree has the Markov
property, or, more generally, to estimate its memory.
| Daniil Ryabko | null | 1708.03131 | null | null |
DNN and CNN with Weighted and Multi-task Loss Functions for Audio Event
Detection | cs.SD cs.LG | This report presents our audio event detection system submitted for Task 2,
"Detection of rare sound events", of DCASE 2017 challenge. The proposed system
is based on convolutional neural networks (CNNs) and deep neural networks
(DNNs) coupled with novel weighted and multi-task loss functions and
state-of-the-art phase-aware signal enhancement. The loss functions are
tailored for audio event detection in audio streams. The weighted loss is
designed to tackle the common issue of imbalanced data in background/foreground
classification while the multi-task loss enables the networks to simultaneously
model the class distribution and the temporal structures of the target events
for recognition. Our proposed systems significantly outperform the challenge
baseline, improving F-score from 72.7% to 90.0% and reducing detection error
rate from 0.53 to 0.18 on average on the development data. On the evaluation
data, our submission obtains an average F1-score of 88.3% and an error rate of
0.22 which are significantly better than those obtained by the DCASE baseline
(i.e. an F1-score of 64.1% and an error rate of 0.64).
| Huy Phan, Martin Krawczyk-Becker, Timo Gerkmann, Alfred Mertins | null | 1708.03211 | null | null |
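The weighted loss admits a compact sketch: a per-frame weighted binary cross-entropy that up-weights the rare foreground frames. The PyTorch form and the weight values below are illustrative assumptions, not the submission's exact loss.

```python
import torch
import torch.nn.functional as F

def weighted_bce(logits, targets, w_fg=10.0, w_bg=1.0):
    # Per-frame weights: foreground (rare event) frames get a larger
    # weight than background frames; w_fg/w_bg are illustrative values,
    # not the challenge submission's actual settings.
    weights = torch.where(targets > 0.5,
                          torch.full_like(targets, w_fg),
                          torch.full_like(targets, w_bg))
    return F.binary_cross_entropy_with_logits(logits, targets, weight=weights)
```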
Improved Fixed-Rank Nystr\"om Approximation via QR Decomposition:
Practical and Theoretical Aspects | stat.ML cs.CV cs.LG | The Nystrom method is a popular technique that uses a small number of
landmark points to compute a fixed-rank approximation of large kernel matrices
that arise in machine learning problems. In practice, to ensure high quality
approximations, the number of landmark points is chosen to be greater than the
target rank. However, for simplicity the standard Nystrom method uses a
sub-optimal procedure for rank reduction. In this paper, we examine the
drawbacks of the standard Nystrom method in terms of poor performance and lack
of theoretical guarantees. To address these issues, we present an efficient
modification for generating improved fixed-rank Nystrom approximations.
Theoretical analysis and numerical experiments are provided to demonstrate the
advantages of the modified method over the standard Nystrom method. Overall,
the aim of this paper is to convince researchers to use the modified method, as
it has nearly identical computational complexity, is easy to code, has greatly
improved accuracy in many cases, and is optimal in a sense that we make
precise.
| Farhad Pourkamali-Anaraki, Stephen Becker | 10.1016/j.neucom.2019.06.070 | 1708.03218 | null | null |
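A minimal sketch of the QR-based rank reduction as we read the abstract: take a thin QR of the landmark columns, eigendecompose the small core matrix, and truncate there instead of truncating the landmark block directly. Scaling and regularization details in the paper may differ.

```python
import numpy as np

def nystrom_qr(C, W, r):
    # C: n x m columns of the kernel matrix at m landmark points,
    # W: m x m landmark block, r: target rank (r < m).
    Q, R = np.linalg.qr(C)              # thin QR of the landmark columns
    M = R @ np.linalg.pinv(W) @ R.T     # small m x m core matrix
    M = (M + M.T) / 2.0                 # symmetrize for numerical stability
    vals, vecs = np.linalg.eigh(M)
    top = np.argsort(vals)[::-1][:r]    # keep top-r eigenpairs
    U = Q @ vecs[:, top]                # n x r factor
    return U, vals[top]                 # approximation: U @ diag(vals) @ U.T
```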
Automatic Selection of t-SNE Perplexity | cs.AI cs.LG stat.AP stat.ML | t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most widely
used dimensionality reduction methods for data visualization, but it has a
perplexity hyperparameter that requires manual selection. In practice, proper
tuning of t-SNE perplexity requires users to understand the inner workings of
the method as well as to have hands-on experience. We propose a model selection
objective for t-SNE perplexity that requires negligible extra computation
beyond that of the t-SNE itself. We empirically validate that the perplexity
settings found by our approach are consistent with preferences elicited from
human experts across a number of datasets. The similarities of our approach to
Bayesian information criteria (BIC) and minimum description length (MDL) are
also analyzed.
| Yanshuai Cao, Luyu Wang | null | 1708.03229 | null | null |
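A sketch of perplexity selection by a BIC-style score; the penalty form below is an assumption for illustration, since the abstract does not spell out the objective.

```python
import numpy as np
from sklearn.manifold import TSNE

def select_perplexity(X, candidates=(5, 10, 30, 50, 100)):
    # BIC-style score: final KL divergence plus a penalty growing with
    # perplexity (the paper's exact criterion may differ; this form is an
    # assumption for illustration).
    n = len(X)
    scores = {}
    for p in candidates:
        tsne = TSNE(perplexity=p, random_state=0).fit(X)
        scores[p] = 2.0 * tsne.kl_divergence_ + np.log(n) * p / n
    return min(scores, key=scores.get), scores
```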
Robust polynomial regression up to the information theoretic limit | cs.DS cs.LG | We consider the problem of robust polynomial regression, where one receives
samples $(x_i, y_i)$ that are usually within $\sigma$ of a polynomial $y =
p(x)$, but have a $\rho$ chance of being arbitrary adversarial outliers.
Previously, it was known how to efficiently estimate $p$ only when $\rho <
\frac{1}{\log d}$. We give an algorithm that works for the entire feasible
range of $\rho < 1/2$, while simultaneously improving other parameters of the
problem. We complement our algorithm, which gives a factor 2 approximation,
with impossibility results that show, for example, that a $1.09$ approximation
is impossible even with infinitely many samples.
| Daniel Kane, Sushrut Karmalkar, Eric Price | null | 1708.03257 | null | null |
Output Reachable Set Estimation and Verification for Multi-Layer Neural
Networks | cs.LG | In this paper, the output reachable estimation and safety verification
problems for multi-layer perceptron neural networks are addressed. First, a
concept called maximum sensitivity is introduced and, for a class of
multi-layer perceptrons whose activation functions are monotonic functions, the
maximum sensitivity can be computed via solving convex optimization problems.
Then, using a simulation-based method, the output reachable set estimation
problem for neural networks is formulated into a chain of optimization
problems. Finally, an automated safety verification is developed based on the
output reachable set estimation result. An application to the safety
verification for a robotic arm model with two joints is presented to show the
effectiveness of the proposed approaches.
| Weiming Xiang, Hoang-Dung Tran, Taylor T. Johnson | null | 1708.03322 | null | null |
Resilient Linear Classification: An Approach to Deal with Attacks on
Training Data | cs.LG cs.AI cs.CR cs.SY | Data-driven techniques are used in cyber-physical systems (CPS) for
controlling autonomous vehicles, handling demand responses for energy
management, and modeling human physiology for medical devices. These
data-driven techniques extract models from training data, where their
performance is often analyzed with respect to random errors in the training
data. However, if the training data is maliciously altered by attackers, the
effect of these attacks on the learning algorithms underpinning data-driven CPS
has yet to be considered. In this paper, we analyze the resilience of
classification algorithms to training data attacks. Specifically, a generic
metric is proposed that is tailored to measure resilience of classification
algorithms with respect to worst-case tampering of the training data. Using the
metric, we show that traditional linear classification algorithms are resilient
under restricted conditions. To overcome these limitations, we propose a linear
classification algorithm with a majority constraint and prove that it is
strictly more resilient than the traditional algorithms. Evaluations on both
synthetic data and a real-world retrospective arrhythmia medical case-study
show that the traditional algorithms are vulnerable to tampered training data,
whereas the proposed algorithm is more resilient (as measured by worst-case
tampering).
| Sangdon Park, James Weimer and Insup Lee | 10.1145/3055004.3055006 | 1708.03366 | null | null |
Topical Behavior Prediction from Massive Logs | cs.LG | In this paper, we study topical behavior at a large scale. We use the
network logs where each entry contains the entity ID, the timestamp, and the
meta data about the activity. Both the temporal and the spatial relationships
of the behavior are explored with deep learning architectures combining the
recurrent neural network (RNN) and the convolutional neural network (CNN). To
make the behavioral data appropriate for the spatial learning in the CNN, we
propose several reduction steps to form the topical metrics and to place them
homogeneously like pixels in the images. The experimental result shows both
temporal and spatial gains when compared against a multilayer perceptron (MLP)
network. A new learning framework called the spatially connected convolutional
networks (SCCN) is introduced to predict the topical metrics more efficiently.
| Shih-Chieh Su | null | 1708.03381 | null | null |
Jumping across biomedical contexts using compressive data fusion | cs.LG q-bio.MN stat.ML | Motivation: The rapid growth of diverse biological data allows us to consider
interactions between a variety of objects, such as genes, chemicals, molecular
signatures, diseases, pathways and environmental exposures. Often, any pair of
objects--such as a gene and a disease--can be related in different ways, for
example, directly via gene-disease associations or indirectly via functional
annotations, chemicals and pathways. Different ways of relating these objects
carry different semantic meanings. However, traditional methods disregard these
semantics and thus cannot fully exploit their value in data modeling.
Results: We present Medusa, an approach to detect size-k modules of objects
that, taken together, appear most significant to another set of objects. Medusa
operates on large-scale collections of heterogeneous data sets and explicitly
distinguishes between diverse data semantics. It advances research along two
dimensions: it builds on collective matrix factorization to derive different
semantics, and it formulates the growing of the modules as a submodular
optimization program. Medusa is flexible in choosing or combining semantic
meanings and provides theoretical guarantees about detection quality. In a
systematic study on 310 complex diseases, we show the effectiveness of Medusa
in associating genes with diseases and detecting disease modules. We
demonstrate that in predicting gene-disease associations Medusa compares
favorably to methods that ignore diverse semantic meanings. We find that the
utility of different semantics depends on disease categories and that, overall,
Medusa recovers disease modules more accurately when combining different
semantics.
| Marinka Zitnik and Blaz Zupan | null | 1708.03392 | null | null |
Optimal Errors and Phase Transitions in High-Dimensional Generalized
Linear Models | cs.IT cond-mat.dis-nn cs.AI cs.LG math-ph math.IT math.MP | Generalized linear models (GLMs) arise in high-dimensional machine learning,
statistics, communications and signal processing. In this paper we analyze GLMs
when the data matrix is random, as relevant in problems such as compressed
sensing, error-correcting codes or benchmark models in neural networks. We
evaluate the mutual information (or "free entropy") from which we deduce the
Bayes-optimal estimation and generalization errors. Our analysis applies to the
high-dimensional limit where both the number of samples and the dimension are
large and their ratio is fixed. Non-rigorous predictions for the optimal errors
existed for special cases of GLMs, e.g. for the perceptron, in the field of
statistical physics based on the so-called replica method. Our present paper
rigorously establishes those decades old conjectures and brings forward their
algorithmic interpretation in terms of performance of the generalized
approximate message-passing algorithm. Furthermore, we tightly characterize,
for many learning problems, regions of parameters for which this algorithm
achieves the optimal performance, and locate the associated sharp phase
transitions separating learnable and non-learnable regions. We believe that
this random version of GLMs can serve as a challenging benchmark for
multi-purpose algorithms. This paper is divided into two parts that can be read
independently: The first part (main part) presents the model and main results,
discusses some applications and sketches the main ideas of the proof. The
second part (supplementary information) is much more detailed and provides
more examples as well as all the proofs.
| Jean Barbier, Florent Krzakala, Nicolas Macris, L\'eo Miolane, Lenka
Zdeborov\'a | 10.1073/pnas.1802705116 | 1708.03395 | null | null |
Variational Deep Semantic Hashing for Text Documents | cs.IR cs.LG | As the amount of textual data has been rapidly increasing over the past
decade, efficient similarity search methods have become a crucial component of
large-scale information retrieval systems. A popular strategy is to represent
original data samples by compact binary codes through hashing. A spectrum of
machine learning methods have been utilized, but they often lack expressiveness
and flexibility in modeling to learn effective representations. The recent
advances of deep learning in a wide range of applications have demonstrated its
capability to learn robust and powerful feature representations for complex
data. Especially, deep generative models naturally combine the expressiveness
of probabilistic generative models with the high capacity of deep neural
networks, which is very suitable for text modeling. However, little work has
leveraged the recent progress in deep learning for text hashing.
In this paper, we propose a series of novel deep document generative models
for text hashing. The first proposed model is unsupervised while the second one
is supervised by utilizing document labels/tags for hashing. The third model
further considers document-specific factors that affect the generation of
words. The probabilistic generative formulation of the proposed models provides
a principled framework for model extension, uncertainty estimation, simulation,
and interpretability. Based on variational inference and reparameterization,
the proposed models can be interpreted as encoder-decoder deep neural networks
and thus they are capable of learning complex nonlinear distributed
representations of the original documents. We conduct a comprehensive set of
experiments on four public testbeds. The experimental results have demonstrated
the effectiveness of the proposed supervised learning models for text hashing.
| Suthee Chaidaroon and Yi Fang | null | 1708.03436 | null | null |
An Ensemble Classification Algorithm Based on Information Entropy for
Data Streams | cs.DS cs.LG | The data stream mining problem has drawn wide concern in the areas of machine
learning and data mining. In some recent studies, ensemble classification has
been widely used in concept drift detection; however, most of these studies regard
classification accuracy as the criterion for judging whether concept drift is
happening. Information entropy is an important and effective method for
measuring uncertainty. Based on information entropy theory, a new algorithm
that uses information entropy to evaluate a classification result is developed. It
uses ensemble classification techniques, and the weight of each classifier is
determined by the entropy of the result produced by the ensemble classifier
system. When the concept in the data stream changes, classifiers whose weight
falls below a threshold value are discarded at once to adapt to the new
concept. In the experimental analysis, six datasets are used and four
comparison algorithms are evaluated. The results show that the proposed method not only
handles concept drift effectively, but also achieves better classification
accuracy and time performance than the competing algorithms.
| Junhong Wang, Shuliang Xu, Bingqian Duan, Caifeng Liu, Jiye Liang | null | 1708.03496 | null | null |
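A minimal sketch of entropy-based classifier weighting, assuming weights inversely proportional to each classifier's mean prediction entropy; the paper's exact weighting and abandonment threshold may differ.

```python
import numpy as np

def entropy_weights(prob_outputs, eps=1e-12):
    # prob_outputs: one (n_samples, n_classes) probability matrix per base
    # classifier. Lower mean prediction entropy -> higher weight. A sketch
    # of the entropy-based weighting idea, not the paper's exact scheme.
    weights = []
    for P in prob_outputs:
        H = -np.sum(P * np.log(P + eps), axis=1).mean()  # mean entropy
        weights.append(1.0 / (H + eps))
    w = np.asarray(weights)
    return w / w.sum()
```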
Neural Expectation Maximization | cs.LG cs.NE stat.ML | Many real world tasks such as reasoning and physical interaction require
identification and manipulation of conceptual entities. A first step towards
solving these tasks is the automated discovery of distributed symbol-like
representations. In this paper, we explicitly formalize this problem as
inference in a spatial mixture model where each component is parametrized by a
neural network. Based on the Expectation Maximization framework we then derive
a differentiable clustering method that simultaneously learns how to group and
represent individual entities. We evaluate our method on the (sequential)
perceptual grouping task and find that it is able to accurately recover the
constituent objects. We demonstrate that the learned representations are useful
for next-step prediction.
| Klaus Greff, Sjoerd van Steenkiste, J\"urgen Schmidhuber | null | 1708.03498 | null | null |
A Fast Noniterative Algorithm for Compressive Sensing Using Binary
Measurement Matrices | cs.IT cs.LG math.IT | In this paper we present a new algorithm for compressive sensing that makes
use of binary measurement matrices and achieves exact recovery of ultra sparse
vectors, in a single pass and without any iterations. Due to its noniterative
nature, our algorithm is hundreds of times faster than $\ell_1$-norm
minimization, and methods based on expander graphs, both of which require
multiple iterations. Our algorithm can accommodate nearly sparse vectors, in
which case it recovers the index set of the largest components, and can also
accommodate burst noise measurements. Compared to compressive sensing methods
that are guaranteed to achieve exact recovery of all sparse vectors, our method
requires fewer measurements. However, methods that achieve statistical recovery,
that is, recovery of almost all but not all sparse vectors, can require fewer
measurements than our method.
| Mahsa Lotfi and Mathukumalli Vidyasagar | null | 1708.03608 | null | null |
Time Series Anomaly Detection; Detection of anomalous drops with limited
features and sparse examples in noisy highly periodic data | stat.ML cs.LG | Google uses continuous streams of data from industry partners in order to
deliver accurate results to users. Unexpected drops in traffic can be an
indication of an underlying issue and may be an early warning that remedial
action may be necessary. Detecting such drops is non-trivial because streams
are variable and noisy, with roughly regular spikes (in many different shapes)
in traffic data. We investigated the question of whether or not we can predict
anomalies in these data streams. Our goal is to utilize Machine Learning and
statistical approaches to classify anomalous drops in periodic, but noisy,
traffic patterns. Since we do not have a large body of labeled examples to
directly apply supervised learning for anomaly classification, we approached
the problem in two parts. First we used TensorFlow to train our various models
including DNNs, RNNs, and LSTMs to perform regression and predict the expected
value in the time series. Secondly we created anomaly detection rules that
compared the actual values to predicted values. Since the problem requires
finding sustained anomalies, rather than just short delays or momentary
inactivity in the data, our two detection methods focused on continuous
sections of activity rather than just single points. We tried multiple
combinations of our models and rules and found that using the intersection of
our two anomaly detection methods proved to be an effective method of detecting
anomalies on almost all of our models. In the process we also found that not
all data fell within our experimental assumptions, as one data stream had no
periodicity, and therefore no time based model could predict it.
| Dominique T. Shipmon, Jason M. Gurevitch, Paolo M. Piselli and Stephen
T. Edwards | null | 1708.03665 | null | null |
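The second stage (rule-based detection of sustained drops on top of model predictions) can be sketched as below; the tolerance and run-length parameters are assumptions to be tuned per stream.

```python
def sustained_drops(actual, predicted, tol, min_len):
    # Flag runs where the actual value stays below (prediction - tol) for
    # at least min_len consecutive steps -- a sketch of the "sustained
    # anomaly" rule the abstract describes; tol and min_len are assumed
    # tunable per traffic stream.
    runs, start = [], None
    for i, (a, p) in enumerate(zip(actual, predicted)):
        low = a < p - tol
        if low and start is None:
            start = i
        elif not low and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(actual) - start >= min_len:
        runs.append((start, len(actual)))
    return runs
```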
Deep Incremental Boosting | stat.ML cs.CV cs.LG | This paper introduces Deep Incremental Boosting, a new technique derived from
AdaBoost, specifically adapted to work with Deep Learning methods, that reduces
the required training time and improves generalisation. We draw inspiration
from Transfer of Learning approaches to reduce the start-up time to training
each incremental Ensemble member. We show a set of experiments that outlines
some preliminary results on some common Deep Learning datasets and discuss the
potential improvements Deep Incremental Boosting brings to traditional Ensemble
methods in Deep Learning.
| Alan Mosca, George D Magoulas | null | 1708.03704 | null | null |
Eigenvalue Decay Implies Polynomial-Time Learnability for Neural
Networks | cs.LG cs.DS | We consider the problem of learning function classes computed by neural
networks with various activations (e.g. ReLU or Sigmoid), a task believed to be
computationally intractable in the worst-case. A major open problem is to
understand the minimal assumptions under which these classes admit provably
efficient algorithms. In this work we show that a natural distributional
assumption corresponding to {\em eigenvalue decay} of the Gram matrix yields
polynomial-time algorithms in the non-realizable setting for expressive classes
of networks (e.g. feed-forward networks of ReLUs). We make no assumptions on
the structure of the network or the labels. Given sufficiently-strong
polynomial eigenvalue decay, we obtain {\em fully}-polynomial time algorithms
in {\em all} the relevant parameters with respect to square-loss. Milder decay
assumptions also lead to improved algorithms. This is the first purely
distributional assumption that leads to polynomial-time algorithms for networks
of ReLUs, even with one hidden layer. Further, unlike prior distributional
assumptions (e.g., the marginal distribution is Gaussian), eigenvalue decay has
been observed in practice on common data sets.
| Surbhi Goel, Adam Klivans | null | 1708.03708 | null | null |
OpenML Benchmarking Suites | stat.ML cs.LG | Machine learning research depends on objectively interpretable, comparable,
and reproducible algorithm benchmarks. We advocate the use of curated,
comprehensive suites of machine learning tasks to standardize the setup,
execution, and reporting of benchmarks. We enable this through software tools
that help to create and leverage these benchmarking suites. These are
seamlessly integrated into the OpenML platform, and accessible through
interfaces in Python, Java, and R. OpenML benchmarking suites (a) are easy to
use through standardized data formats, APIs, and client libraries; (b) come
with extensive meta-information on the included datasets; and (c) allow
benchmarks to be shared and reused in future studies. We then present a first,
carefully curated and practical benchmarking suite for classification: the
OpenML Curated Classification benchmarking suite 2018 (OpenML-CC18). Finally,
we discuss use cases and applications which demonstrate the usefulness of
OpenML benchmarking suites and the OpenML-CC18 in particular.
| Bernd Bischl, Giuseppe Casalicchio, Matthias Feurer, Pieter Gijsbers,
Frank Hutter, Michel Lang, Rafael G. Mantovani, Jan N. van Rijn, Joaquin
Vanschoren | null | 1708.03731 | null | null |
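Fetching the OpenML-CC18 suite through the Python client looks roughly as follows (API names as in recent openml-python releases):

```python
import openml

# Fetch the OpenML-CC18 benchmarking suite and iterate over its tasks.
suite = openml.study.get_suite("OpenML-CC18")
for task_id in suite.tasks[:3]:          # first three tasks as a demo
    task = openml.tasks.get_task(task_id)
    X, y = task.get_X_and_y()
    print(task_id, X.shape)
```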
Sparse Coding and Autoencoders | cs.LG math.OC stat.ML | In "Dictionary Learning" one tries to recover incoherent matrices $A^* \in
\mathbb{R}^{n \times h}$ (typically overcomplete and whose columns are assumed
to be normalized) and sparse vectors $x^* \in \mathbb{R}^h$ with a small
support of size $h^p$ for some $0 < p < 1$ while having access to observations
$y \in \mathbb{R}^n$ where $y = A^*x^*$. In this work we undertake a rigorous
analysis of whether gradient descent on the squared loss of an autoencoder can
solve the dictionary learning problem. The "Autoencoder" architecture we
consider is a $\mathbb{R}^n \rightarrow \mathbb{R}^n$ mapping with a single
ReLU activation layer of size $h$.
Under very mild distributional assumptions on $x^*$, we prove that the norm
of the expected gradient of the standard squared loss function is
asymptotically (in sparse code dimension) negligible for all points in a small
neighborhood of $A^*$. This is supported with experimental evidence using
synthetic data. We also conduct experiments to suggest that $A^*$ is a local
minimum. Along the way we prove that a layer of ReLU gates can be set up to
automatically recover the support of the sparse codes. This property holds
independent of the loss function. We believe that it could be of independent
interest.
| Akshay Rangamani, Anirbit Mukherjee, Amitabh Basu, Tejaswini
Ganapathy, Ashish Arora, Sang Chin and Trac D. Tran | null | 1708.03735 | null | null |
Direct-Manipulation Visualization of Deep Networks | cs.LG cs.HC stat.ML | The recent successes of deep learning have led to a wave of interest from
non-experts. Gaining an understanding of this technology, however, is
difficult. While the theory is important, it is also helpful for novices to
develop an intuitive feel for the effect of different hyperparameters and
structural variations. We describe TensorFlow Playground, an interactive, open
sourced visualization that allows users to experiment via direct manipulation
rather than coding, enabling them to quickly build an intuition about neural
nets.
| Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Vi\'egas, Martin
Wattenberg | null | 1708.03788 | null | null |
Training Support Vector Machines using Coresets | cs.DS cs.LG | We present a novel coreset construction algorithm for solving classification
tasks using Support Vector Machines (SVMs) in a computationally efficient
manner. A coreset is a weighted subset of the original data points that
provably approximates the original set. We show that coresets of size
polylogarithmic in $n$ and polynomial in $d$ exist for a set of $n$ input
points with $d$ features and present an $(\epsilon,\delta)$-FPRAS for
constructing coresets for scalable SVM training. Our method leverages the
insight that data points are often redundant and uses an importance sampling
scheme based on the sensitivity of each data point to construct coresets
efficiently. We evaluate the performance of our algorithm in accelerating SVM
training against real-world data sets and compare our algorithm to
state-of-the-art coreset approaches. Our empirical results show that our
approach outperforms a state-of-the-art coreset approach and uniform sampling
in enabling computational speedups while achieving low approximation error.
| Cenk Baykal, Lucas Liebenwein, Wilko Schwarting | null | 1708.03835 | null | null |
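The sensitivity-based importance sampling step can be sketched as below; computing the sensitivities themselves is the paper's contribution and is not reproduced here.

```python
import numpy as np

def sensitivity_sample(X, y, s, m, seed=0):
    # Importance-sample m points with probability proportional to their
    # sensitivity s[i], reweighting by inverse probability so the weighted
    # subsample approximates the full SVM objective in expectation.
    rng = np.random.default_rng(seed)
    p = s / s.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    w = 1.0 / (m * p[idx])               # coreset weights
    return X[idx], y[idx], w
```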
IoT Data Analytics Using Deep Learning | cs.NI cs.LG | Deep learning is a popular machine learning approach which has achieved a lot
of progress in all traditional machine learning areas. Internet of thing (IoT)
and Smart City deployments are generating large amounts of time-series sensor
data in need of analysis. Applying deep learning to these domains has been an
important topic of research. The Long-Short Term Memory (LSTM) network has been
proven to be well suited for dealing with and predicting important events with
long intervals and delays in the time series. LSTM networks have the ability to
maintain long-term memory. In an LSTM network, a stacked LSTM hidden layer also
makes it possible to learn high-level temporal features without the need for
any fine tuning and preprocessing which would be required by other techniques.
In this paper, we construct a long-short term memory (LSTM) recurrent neural
network structure and use the normal time series training set to build the
prediction model. We then use the prediction errors from this model
to construct a Gaussian naive Bayes model that detects whether the original sample
is abnormal. This method is called LSTM-Gauss-NBayes for short. We use three
real-world data sets, each of which involves long-term time dependence,
short-term time dependence, or even very weak time dependence. The experimental
results show that LSTM-Gauss-NBayes is an effective and robust model.
| Xiaofeng Xie, Di Wu, Siping Liu, Renfa Li | null | 1708.03854 | null | null |
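The second stage, as we read the abstract, fits a Gaussian to prediction errors on normal data and flags unlikely errors; the threshold below is an illustrative assumption.

```python
import numpy as np
from scipy.stats import norm

def fit_error_model(errors):
    # Fit a Gaussian to prediction errors on normal data (the "Gauss"
    # stage; the "NBayes" stage factorizes over multiple error features).
    return errors.mean(), errors.std()

def flag_anomalies(errors, mu, sigma, alpha=1e-3):
    # Flag samples whose error is too unlikely under the fitted Gaussian;
    # alpha is an illustrative threshold, not the paper's setting.
    return norm.pdf(errors, loc=mu, scale=sigma) < alpha
```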
Image Quality Assessment Guided Deep Neural Networks Training | cs.CV cs.LG cs.MM | For many computer vision problems, the deep neural networks are trained and
validated based on the assumption that the input images are pristine (i.e.,
artifact-free). However, digital images are subject to a wide range of
distortions in real application scenarios, while the practical issues regarding
image quality in high level visual information understanding have been largely
ignored. In this paper, in view of the fact that most widely deployed deep
learning models are susceptible to various image distortions, the distorted
images are involved for data augmentation in the deep neural network training
process to learn a reliable model for practical applications. In particular, an
image quality assessment based label smoothing method, which aims at
regularizing the label distribution of training images, is further proposed to
tune the objective functions in learning the neural network. Experimental
results show that the proposed method is effective in dealing with both low and
high quality images in the typical image classification task.
| Zhuo Chen, Weisi Lin, Shiqi Wang, Long Xu, Leida Li | null | 1708.0388 | null | null |
Leveraging Sparse and Dense Feature Combinations for Sentiment
Classification | cs.CL cs.IR cs.LG | Neural networks are one of the most popular approaches for many natural
language processing tasks such as sentiment analysis. They often outperform
traditional machine learning models and achieve the state-of-art results on
most tasks. However, many existing deep learning models are complex, difficult
to train and provide a limited improvement over simpler methods. We propose a
simple, robust and powerful model for sentiment classification. This model
outperforms many deep learning models and achieves comparable results to other
deep learning models with complex architectures on sentiment analysis datasets.
We publish the code online.
| Tao Yu, Christopher Hidey, Owen Rambow and Kathleen McKeown | null | 1708.0394 | null | null |
Gradient Methods for Submodular Maximization | cs.LG math.OC | In this paper, we study the problem of maximizing continuous submodular
functions that naturally arise in many learning applications such as those
involving utility functions in active learning and sensing, matrix
approximations and network inference. Despite the apparent lack of convexity in
such functions, we prove that stochastic projected gradient methods can provide
strong approximation guarantees for maximizing continuous submodular functions
with convex constraints. More specifically, we prove that for monotone
continuous DR-submodular functions, all fixed points of projected gradient
ascent provide a factor $1/2$ approximation to the global maxima. We also study
stochastic gradient and mirror methods and show that after
$\mathcal{O}(1/\epsilon^2)$ iterations these methods reach solutions which
achieve in expectation objective values exceeding
$(\frac{\text{OPT}}{2}-\epsilon)$. An immediate application of our results is
to maximize submodular functions that are defined stochastically, i.e. the
submodular function is defined as an expectation over a family of submodular
functions with an unknown distribution. We will show how stochastic gradient
methods are naturally well-suited for this setting, leading to a factor $1/2$
approximation when the function is monotone. In particular, it allows us to
approximately maximize discrete, monotone submodular optimization problems via
projected gradient descent on a continuous relaxation, directly connecting the
discrete and continuous domains. Finally, experiments on real data demonstrate
that our projected gradient methods consistently achieve the best utility
compared to other continuous baselines while remaining competitive in terms of
computational effort.
| Hamed Hassani, Mahdi Soltanolkotabi and Amin Karbasi | null | 1708.03949 | null | null |
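Projected gradient ascent over a convex feasible set is the core primitive; a minimal sketch (with a box constraint as the example set) follows.

```python
import numpy as np

def projected_gradient_ascent(grad, project, x0, eta=0.05, iters=200):
    # Maximize a monotone continuous DR-submodular function over a convex
    # set: gradient (or stochastic gradient) steps followed by projection.
    # Per the abstract, fixed points give a 1/2 approximation to the maximum.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = project(x + eta * grad(x))
    return x

# Example feasible set: the box [0, 1]^d
# project = lambda x: np.clip(x, 0.0, 1.0)
```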
Sentiment Analysis by Joint Learning of Word Embeddings and Classifier | cs.CL cs.AI cs.LG stat.ML | Word embeddings are representations of individual words of a text document in
a vector space and they are often useful for performing natural language
processing tasks. Current state of the art algorithms for learning word
embeddings learn vector representations from large corpora of text documents in
an unsupervised fashion. This paper introduces SWESA (Supervised Word
Embeddings for Sentiment Analysis), an algorithm for sentiment analysis via
word embeddings. SWESA leverages document label information to learn vector
representations of words from a modest corpus of text documents by solving an
optimization problem that minimizes a cost function with respect to both word
embeddings as well as classification accuracy. Analysis reveals that SWESA
provides an efficient way of estimating the dimension of the word embeddings
that are to be learned. Experiments on several real world data sets show that
SWESA has superior performance when compared to previously suggested
approaches to word embeddings and sentiment analysis tasks.
| Prathusha Kameswara Sarma, Bill Sethares | null | 1708.03995 | null | null |
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural
Networks without Training Substitute Models | stat.ML cs.CR cs.LG | Deep neural networks (DNNs) are one of the most prominent technologies of our
time, as they achieve state-of-the-art performance in many machine learning
tasks, including but not limited to image classification, text mining, and
speech processing. However, recent research on DNNs has indicated
ever-increasing concern on the robustness to adversarial examples, especially
for security-critical tasks such as traffic sign identification for autonomous
driving. Studies have unveiled the vulnerability of a well-trained DNN by
demonstrating the ability to generate barely noticeable (to both human and
machines) adversarial images that lead to misclassification. Furthermore,
researchers have shown that these adversarial images are highly transferable by
simply training and attacking a substitute model built upon the target model,
known as a black-box attack to DNNs.
Similar to the setting of training substitute models, in this paper we
propose an effective black-box attack that also only has access to the input
(images) and the output (confidence scores) of a targeted DNN. However,
different from leveraging attack transferability from substitute models, we
propose zeroth order optimization (ZOO) based attacks to directly estimate the
gradients of the targeted DNN for generating adversarial examples. We use
zeroth order stochastic coordinate descent along with dimension reduction,
hierarchical attack and importance sampling techniques to efficiently attack
black-box models. By exploiting zeroth order optimization, improved attacks to
the targeted DNN can be accomplished, sparing the need for training substitute
models and avoiding the loss in attack transferability. Experimental results on
MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective
as the state-of-the-art white-box attack and significantly outperforms existing
black-box attacks via substitute models.
| Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh | 10.1145/3128572.3140448 | 1708.03999 | null | null |
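The core zeroth-order update, a symmetric-difference gradient estimate per coordinate from two score queries, can be sketched as follows; the full attack adds dimension reduction, hierarchical grouping, and importance sampling on top of it.

```python
import numpy as np

def zoo_coordinate_step(f, x, i, h=1e-4, lr=0.01):
    # Estimate df/dx_i from two score queries via a symmetric difference
    # quotient, then take one coordinate-descent step. f is the attack
    # loss built from the target model's confidence scores.
    e = np.zeros_like(x)
    e[i] = h
    g = (f(x + e) - f(x - e)) / (2.0 * h)
    x_new = x.copy()
    x_new[i] -= lr * g
    return x_new
```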
Group-driven Reinforcement Learning for Personalized mHealth
Intervention | cs.LG cs.CY | Due to the popularity of smartphones and wearable devices nowadays, mobile
health (mHealth) technologies are promising to bring positive and wide impacts
on people's health. State-of-the-art decision-making methods for mHealth rely
on some ideal assumptions. Those methods either assume that the users are
completely homogeneous or completely heterogeneous. However, in reality, a user
might be similar to some, but not all, users. In this paper, we propose a
novel group-driven reinforcement learning method for the mHealth. We aim to
understand how to share information among similar users to better convert the
limited user information into sharper learned RL policies. Specifically, we
employ the K-means clustering method to group users based on their trajectory
information similarity and learn a shared RL policy for each group. Extensive
experiment results have shown that our method can achieve clear gains over the
state-of-the-art RL methods for mHealth.
| Feiyun Zhu and Jun Guo and Zheng Xu and Peng Liao and Junzhou Huang | null | 1708.04001 | null | null |
Rocket Launching: A Universal and Efficient Framework for Training
Well-performing Light Net | stat.ML cs.LG | Models applied to real-time response tasks, like click-through rate (CTR)
prediction model, require high accuracy and rigorous response time. Therefore,
top-performing deep models of high depth and complexity are not well suited for
these applications with the limitations on the inference time. In order to
further improve the neural networks' performance given the time and
computational limitations, we propose an approach that exploits a cumbersome
net to help train the lightweight net for prediction. We dub the whole process
rocket launching, where the cumbersome booster net is used to guide the
learning of the target light net throughout the whole training process. We
analyze different loss functions aiming at pushing the light net to behave
similarly to the booster net, and adopt the loss with best performance in our
experiments. We use one technique called gradient block to improve the
performance of the light net and booster net further. Experiments on benchmark
datasets and real-life industrial advertisement data show that our light
model can get performance only previously achievable with more complex models.
| Guorui Zhou, Ying Fan, Runpeng Cui, Weijie Bian, Xiaoqiang Zhu, Kun
Gai | null | 1708.04106 | null | null |
Early Improving Recurrent Elastic Highway Network | cs.LG | To model time-varying nonlinear temporal dynamics in sequential data, a
recurrent network capable of varying and adjusting the recurrence depth between
input intervals is examined. The recurrence depth is extended by several
intermediate hidden state units, and the weight parameters involved in
determining these units are dynamically calculated. The motivation behind the
paper lies in overcoming a deficiency in Recurrent Highway Networks and
improving their performances which are currently at the forefront of RNNs: 1)
Determining the appropriate number of recurrent depth in RHN for different
tasks is a huge burden and just setting it to a large number is computationally
wasteful with possible repercussion in terms of performance degradation and
high latency. Expanding on the idea of adaptive computation time (ACT), with
the use of an elastic gate in the form of a rectified exponentially decreasing
function taking the previous hidden state and input as arguments, the
proposed model is able to evaluate the appropriate recurrent depth for each
input. The rectified gating function enables the most significant intermediate
hidden state updates to come early such that significant performance gain is
achieved early. 2) Updating the weights from that of previous intermediate
layer offers a richer representation than the use of shared weights across all
intermediate recurrence layers. The weight update procedure is just an
expansion of the idea underlying hypernetworks. To substantiate the
effectiveness of the proposed network, we conducted three experiments:
regression on synthetic data, human activity recognition, and language modeling
on the Penn Treebank dataset. The proposed networks showed better performance
than other state-of-the-art recurrent networks in all three experiments.
| Hyunsin Park and Chang D. Yoo | null | 1708.04116 | null | null |
Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for
Continuous Control | cs.LG | Policy gradient methods in reinforcement learning have become increasingly
prevalent for state-of-the-art performance in continuous control tasks. Novel
methods typically benchmark against a few key algorithms such as deep
deterministic policy gradients and trust region policy optimization. As such,
it is important to present and use consistent baseline experiments. However,
this can be difficult due to general variance in the algorithms,
hyper-parameter tuning, and environment stochasticity. We investigate and
discuss: the significance of hyper-parameters in policy gradients for
continuous control, general variance in the algorithms, and reproducibility of
reported results. We provide guidelines on reporting novel results as
comparisons against baseline methods such that future researchers can make
informed decisions when investigating novel methods.
| Riashat Islam, Peter Henderson, Maziar Gomrokchi, Doina Precup | null | 1708.04133 | null | null |
Learning to Plan Chemical Syntheses | cs.AI cs.LG physics.chem-ph | From medicines to materials, small organic molecules are indispensable for
human well-being. To plan their syntheses, chemists employ a problem solving
technique called retrosynthesis. In retrosynthesis, target molecules are
recursively transformed into increasingly simpler precursor compounds until a
set of readily available starting materials is obtained. Computer-aided
retrosynthesis would be a highly valuable tool; however, past approaches were
slow and provided results of unsatisfactory quality. Here, we employ Monte
Carlo Tree Search (MCTS) to efficiently discover retrosynthetic routes. MCTS
was combined with an expansion policy network that guides the search, and an
"in-scope" filter network to pre-select the most promising retrosynthetic
steps. These deep neural networks were trained on 12 million reactions, which
represents essentially all reactions ever published in organic chemistry. Our
system solves almost twice as many molecules and is 30 times faster in
comparison to the traditional search method based on extracted rules and
hand-coded heuristics. Finally, after a 60-year history of computer-aided
synthesis planning, chemists can no longer distinguish between routes generated
by a computer system and real routes taken from the scientific literature. We
anticipate that our method will accelerate drug and materials discovery by
assisting chemists to plan better syntheses faster, and by enabling fully
automated robot synthesis.
| Marwin H.S. Segler, Mike Preuss, Mark P. Waller | 10.1038/nature25978 | 1708.04202 | null | null |
Sampling High Throughput Data for Anomaly Detection of Data-Base
Activity | cs.CR cs.LG | Data leakage and theft from databases is a dangerous threat to organizations.
Data Security and Data Privacy protection systems (DSDP) monitor data access
and usage to identify leakage or suspicious activities that should be
investigated. Because of the high velocity nature of database systems, such
systems audit only a portion of the vast number of transactions that take
place. Anomalies are investigated by a Security Officer (SO) in order to choose
the proper response. In this paper we investigate the effect of sampling
methods based on the risk the transaction poses and propose a new method for
"combined sampling" for capturing a more varied sample.
| Hagit Grushka-Cohen, Oded Sofer, Ofer Biller, Michael Dymshits, Lior
Rokach, Bracha Shapira | null | 1708.04278 | null | null |
MHTN: Modal-adversarial Hybrid Transfer Network for Cross-modal
Retrieval | cs.MM cs.CV cs.LG | Cross-modal retrieval has drawn wide interest for retrieval across different
modalities of data. However, existing methods based on DNN face the challenge
of insufficient cross-modal training data, which limits the training
effectiveness and easily leads to overfitting. Transfer learning is for
relieving the problem of insufficient training data, but it mainly focuses on
knowledge transfer only from large-scale datasets as single-modal source domain
to single-modal target domain. Such large-scale single-modal datasets also
contain rich modal-independent semantic knowledge that can be shared across
different modalities. Besides, large-scale cross-modal datasets are very
labor-consuming to collect and label, so it is significant to fully exploit the
knowledge in single-modal datasets for boosting cross-modal retrieval. This
paper proposes modal-adversarial hybrid transfer network (MHTN), which to the
best of our knowledge is the first work to realize knowledge transfer from
single-modal source domain to cross-modal target domain, and learn cross-modal
common representation. It is an end-to-end architecture with two subnetworks:
(1) Modal-sharing knowledge transfer subnetwork is proposed to jointly transfer
knowledge from a large-scale single-modal dataset in source domain to all
modalities in target domain with a star network structure, which distills
modal-independent supplementary knowledge for promoting cross-modal common
representation learning. (2) Modal-adversarial semantic learning subnetwork is
proposed to construct an adversarial training mechanism between common
representation generator and modality discriminator, making the common
representation discriminative for semantics but indiscriminative for modalities
to enhance cross-modal semantic consistency during transfer process.
Comprehensive experiments on 4 widely-used datasets show its effectiveness and
generality.
| Xin Huang, Yuxin Peng and Mingkuan Yuan | null | 1708.04308 | null | null |
Collaborative Filtering using Denoising Auto-Encoders for Market Basket
Data | stat.ML cs.LG | Recommender systems (RS) help users navigate large sets of items in the
search for "interesting" ones. One approach to RS is Collaborative Filtering
(CF), which is based on the idea that similar users are interested in similar
items. Most model-based approaches to CF seek to train a
machine-learning/data-mining model based on sparse data; the model is then used
to provide recommendations. While most of the proposed approaches are effective
for small-size situations, the combinatorial nature of the problem makes it
impractical for medium-to-large instances. In this work we present a novel
approach to CF that works by training a Denoising Auto-Encoder (DAE) on
corrupted baskets, i.e., baskets from which one or more items have been
removed. The DAE is then forced to learn to reconstruct the original basket
given its corrupted input. Due to recent advancements in optimization and other
technologies for training neural-network models (such as DAE), the proposed
method results in a scalable and practical approach to CF. The contribution of
this work is twofold: (1) to identify missing items in observed baskets and,
thus, directly providing a CF model; and, (2) to construct a generative model
of baskets which may be used, for instance, in simulation analysis or as part
of a more complex analytical method.
| Andres G. Abad and Luis I. Reyes-Castro | null | 1708.04312 | null | null |
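The basket-corruption step can be sketched as below, assuming a binary basket matrix and an illustrative drop probability; training pairs are (corrupted, original).

```python
import numpy as np

def corrupt_baskets(B, drop_prob=0.2, seed=0):
    # B: binary basket matrix (n_baskets x n_items). Randomly remove items
    # from each basket; the DAE is then trained to reconstruct B from the
    # corrupted input (drop_prob is an illustrative setting).
    rng = np.random.default_rng(seed)
    drop = (rng.random(B.shape) < drop_prob) & (B > 0)
    return np.where(drop, 0, B)

# At recommendation time, high reconstruction scores for items absent from
# a basket suggest the "missing" items the abstract describes.
```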
Distance and Similarity Measures Effect on the Performance of K-Nearest
Neighbor Classifier -- A Review | cs.LG cs.AI | The K-nearest neighbor (KNN) classifier is one of the simplest and most
common classifiers, yet its performance competes with the most complex
classifiers in the literature. The core of this classifier depends mainly on
measuring the distance or similarity between the tested examples and the
training examples. This raises a major question: which distance measure should
be used for the KNN classifier among the large number of distance and
similarity measures available? This review attempts to answer this question
by evaluating the performance (measured by accuracy, precision and recall)
of the KNN using a large number of distance measures, tested on a number of
real-world datasets, with and without adding different levels of noise. The
experimental results show that the performance of the KNN classifier depends
significantly on the distance used, and they reveal large gaps between
the performances of different distances. We found that a recently proposed
non-convex distance performed best on most datasets compared
to the other tested distances. In addition, the performance of the KNN with
this top-performing distance degraded by only about $20\%$ even when the noise
level reached $90\%$; this is true for most of the distances used as well. This means
that the KNN classifier using any of the top $10$ distances tolerates noise to a
certain degree. Moreover, the results show that some distances are less
affected by the added noise compared to other distances.
| V. B. Surya Prasath, Haneen Arafat Abu Alfeilat, Ahmad B. A. Hassanat,
Omar Lasassmeh, Ahmad S. Tarawneh, Mahmoud Bashir Alhasanat, Hamzeh S. Eyal
Salman | 10.1089/big.2018.0175 | 1708.04321 | null | null |
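The kind of comparison the review performs is easy to reproduce with scikit-learn; the metric list below is a small illustrative subset of the distances the review evaluates.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_metrics(X, y, metrics=("euclidean", "manhattan", "chebyshev")):
    # Cross-validated KNN accuracy under different distance metrics; the
    # review covers many more distances than the three listed here.
    return {m: cross_val_score(
                KNeighborsClassifier(n_neighbors=5, metric=m), X, y, cv=5
            ).mean()
            for m in metrics}
```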
Graph Classification via Deep Learning with Virtual Nodes | cs.LG cs.AI stat.ML | Learning representation for graph classification turns a variable-size graph
into a fixed-size vector (or matrix). Such a representation works nicely with
algebraic manipulations. Here we introduce a simple method to augment an
attributed graph with a virtual node that is bidirectionally connected to all
existing nodes. The virtual node represents the latent aspects of the graph,
which are not immediately available from the attributes and local connectivity
structures. The expanded graph is then put through any node representation
method. The representation of the virtual node is then the representation of
the entire graph. In this paper, we use the recently introduced Column Network
for the expanded graph, resulting in a new end-to-end graph classification
model dubbed Virtual Column Network (VCN). The model is validated on two tasks:
(i) predicting bio-activity of chemical compounds, and (ii) finding software
vulnerability from source code. Results demonstrate that VCN is competitive
against well-established rivals.
| Trang Pham, Truyen Tran, Hoa Dam, Svetha Venkatesh | null | 1708.04357 | null | null |
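The graph augmentation itself is a few lines; a sketch on an adjacency matrix follows, with zero attributes for the virtual node as an assumption.

```python
import numpy as np

def add_virtual_node(A, X):
    # A: (n x n) adjacency matrix, X: (n x d) node attributes. Append one
    # node bidirectionally connected to all existing nodes; after running
    # any node-representation method on the expanded graph, the virtual
    # node's vector serves as the graph-level representation.
    n, d = X.shape
    A2 = np.zeros((n + 1, n + 1), dtype=A.dtype)
    A2[:n, :n] = A
    A2[n, :n] = 1    # virtual -> all existing nodes
    A2[:n, n] = 1    # all existing nodes -> virtual
    return A2, np.vstack([X, np.zeros((1, d))])  # zero attrs: an assumption
```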
Theoretical Foundation of Co-Training and Disagreement-Based Algorithms | cs.LG cs.AI stat.ML | Disagreement-based approaches generate multiple classifiers and exploit the
disagreement among them with unlabeled data to improve learning performance.
Co-training is a representative paradigm of them, which trains two classifiers
separately on two sufficient and redundant views; while for the applications
where there is only one view, several successful variants of co-training with
two different classifiers on single-view data instead of two views have been
proposed. For these disagreement-based approaches, there are several important
issues which are still unsolved. In this article, we present theoretical
analyses to address these issues, providing a theoretical foundation of
co-training and disagreement-based approaches.
| Wei Wang and Zhi-Hua Zhou | null | 1708.04403 | null | null |
Extractive Summarization using Deep Learning | cs.CL cs.IR cs.LG | This paper proposes a text summarization approach for factual reports using a
deep learning model. This approach consists of three phases: feature
extraction, feature enhancement, and summary generation, which work together to
assimilate core information and generate a coherent, understandable summary. We
are exploring various features to improve the set of sentences selected for the
summary, and are using a Restricted Boltzmann Machine to enhance and abstract
those features to improve resultant accuracy without losing any important
information. The sentences are scored based on those enhanced features and an
extractive summary is constructed. Experimentation carried out on several
articles demonstrates the effectiveness of the proposed approach. Source code
available at: https://github.com/vagisha-nidhi/TextSummarizer
| Sukriti Verma and Vagisha Nidhi | null | 1708.04439 | null | null |
Actively Learning what makes a Discrete Sequence Valid | stat.ML cs.LG | Deep learning techniques have been hugely successful for traditional
supervised and unsupervised machine learning problems. In large part, these
techniques solve continuous optimization problems. Recently however, discrete
generative deep learning models have been successfully used to efficiently
search high-dimensional discrete spaces. These methods work by representing
discrete objects as sequences, for which powerful sequence-based deep models
can be employed. Unfortunately, these techniques are significantly hindered by
the fact that these generative models often produce invalid sequences. As a
step towards solving this problem, we propose to learn a deep recurrent
validator model. Given a partial sequence, our model learns the probability of
that sequence occurring as the beginning of a full valid sequence. Thus this
identifies valid versus invalid sequences and crucially it also provides
insight about how individual sequence elements influence the validity of
discrete objects. To learn this model we propose an approach inspired by
seminal work in Bayesian active learning. On a synthetic dataset, we
demonstrate the ability of our model to distinguish valid and invalid
sequences. We believe this is a key step toward learning generative models that
faithfully produce valid discrete objects.
| David Janz, Jos van der Westhuizen, Jos\'e Miguel Hern\'andez-Lobato | null | 1708.04465 | null | null |
SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks | cs.NE cs.AR cs.LG | Convolutional Neural Networks (CNNs) have emerged as a fundamental technology
for machine learning. High performance and extreme energy efficiency are
critical for deployments of CNNs in a wide range of situations, especially
mobile platforms such as autonomous vehicles, cameras, and electronic personal
assistants. This paper introduces the Sparse CNN (SCNN) accelerator
architecture, which improves performance and energy efficiency by exploiting
the zero-valued weights that stem from network pruning during training and
zero-valued activations that arise from the common ReLU operator applied during
inference. Specifically, SCNN employs a novel dataflow that enables maintaining
the sparse weights and activations in a compressed encoding, which eliminates
unnecessary data transfers and reduces storage requirements. Furthermore, the
SCNN dataflow facilitates efficient delivery of those weights and activations
to the multiplier array, where they are extensively reused. In addition, the
accumulation of multiplication products is performed in a novel accumulator
array. Our results show that on contemporary neural networks, SCNN can improve
both performance and energy by a factor of 2.7x and 2.3x, respectively, over a
comparably provisioned dense CNN accelerator.
| Angshuman Parashar, Minsoo Rhu, Anurag Mukkara, Antonio Puglielli,
Rangharajan Venkatesan, Brucek Khailany, Joel Emer, Stephen W. Keckler, and
William J. Dally | null | 1708.04485 | null | null |
Self-adaptive node-based PCA encodings | cs.NE cs.LG cs.SE | In this paper we propose an algorithm, Simple Hebbian PCA, and prove that it
is able to calculate the principal component analysis (PCA) in a distributed
fashion across nodes. It simplifies existing network structures by removing
intralayer weights, essentially cutting the number of weights that need to be
trained in half.
| Leonard Johard, Victor Rivera, Manuel Mazzara, and JooYoung Lee | null | 1708.04498 | null | null |
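For context, the classical single-neuron Hebbian PCA update (Oja's rule) that this line of work builds on can be sketched as below. This is the textbook rule, shown purely for orientation, not the authors' distributed Simple Hebbian PCA algorithm.

```python
import numpy as np

# Oja's rule: a Hebbian weight update whose vector converges to the first
# principal component of the input distribution. Textbook version, included
# only to illustrate the family of node-based PCA updates the abstract refers to.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3)) @ np.diag([3.0, 1.0, 0.3])  # anisotropic data
X -= X.mean(axis=0)

w, lr = rng.normal(size=3), 1e-3
for x in X:
    y = w @ x                      # neuron output
    w += lr * y * (x - y * w)      # Hebbian term with built-in normalisation

print(np.round(w / np.linalg.norm(w), 3))  # ~ first principal axis
```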
Learning from Noisy Label Distributions | cs.LG cs.AI stat.ML | In this paper, we consider a novel machine learning problem, that is,
learning a classifier from noisy label distributions. In this problem, each
instance with a feature vector belongs to at least one group. Then, instead of
the true label of each instance, we observe the label distribution of the
instances associated with a group, where the label distribution is distorted by
an unknown noise. Our goals are to (1) estimate the true label of each
instance, and (2) learn a classifier that predicts the true label of a new
instance. We propose a probabilistic model that considers true label
distributions of groups and parameters that represent the noise as hidden
variables. The model can be learned based on a variational Bayesian method. In
numerical experiments, we show that the proposed model outperforms existing
methods in terms of the estimation of the true labels of instances.
| Yuya Yoshikawa | null | 1708.04529 | null | null |
Real-time Load Prediction with High Velocity Smart Home Data Stream | cs.LG | This paper addresses the use of smart-home sensor streams for continuous
prediction of energy loads of individual households which participate as an
agent in local markets. We introduce a new device-level energy consumption
dataset recorded over three years which includes high-resolution energy
measurements from electrical devices collected within a pilot program. Using
data from that pilot, we analyze the applicability of various machine learning
mechanisms for continuous load prediction. Specifically, we address short-term
load prediction that is required for load balancing in electrical micro-grids.
We report on the prediction performance and the computational requirements of a
broad range of prediction mechanisms. Furthermore, we present an architecture
and an experimental evaluation of applying this prediction on the data stream.
| Christoph Doblander and Martin Strohbach and Holger Ziekow and
Hans-Arno Jacobsen | null | 1708.04613 | null | null |
Attentional Factorization Machines: Learning the Weight of Feature
Interactions via Attention Networks | cs.LG | Factorization Machines (FMs) are a supervised learning approach that enhances
the linear regression model by incorporating the second-order feature
interactions. Despite its effectiveness, FM can be hindered by modelling all
feature interactions with the same weight, as not all feature interactions are
equally useful and predictive. For example, the interactions with useless
features may even introduce noises and adversely degrade the performance. In
this work, we improve FM by discriminating the importance of different feature
interactions. We propose a novel model named Attentional Factorization Machine
(AFM), which learns the importance of each feature interaction from data via a
neural attention network. Extensive experiments on two real-world datasets
demonstrate the effectiveness of AFM. Empirically, on regression tasks AFM
improves over FM with an $8.6\%$ relative improvement, and consistently
outperforms the state-of-the-art deep learning methods Wide&Deep and DeepCross
with a much simpler structure and fewer model parameters. Our implementation of
AFM is publicly available at:
https://github.com/hexiangnan/attentional_factorization_machine
| Jun Xiao, Hao Ye, Xiangnan He, Hanwang Zhang, Fei Wu, Tat-Seng Chua | null | 1708.04617 | null | null |
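A forward pass of AFM's central component, attention-weighted pooling over pairwise feature interactions, can be sketched as follows. All dimensions and parameters are random stand-ins (no training is shown, and the global bias and linear terms of the full model are omitted).

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
m, k, t = 5, 8, 4                      # active features, embedding / attention sizes
V = rng.normal(size=(m, k))            # embeddings of the active features
W, b = rng.normal(size=(t, k)), np.zeros(t)    # attention network parameters
h, p = rng.normal(size=t), rng.normal(size=k)  # attention and prediction vectors

pairs = np.array([V[i] * V[j] for i, j in combinations(range(m), 2)])
scores = np.array([h @ np.maximum(W @ e + b, 0) for e in pairs])  # attention logits
att = np.exp(scores - scores.max()); att /= att.sum()             # softmax over pairs
y = p @ (att[:, None] * pairs).sum(axis=0)     # attention-weighted pooling
print("interaction score:", round(float(y), 4))
```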
Deep Learning the Ising Model Near Criticality | cond-mat.dis-nn cs.LG stat.ML | It is well established that neural networks with deep architectures perform
better than shallow networks for many tasks in machine learning. In statistical
physics, while there has been recent interest in representing physical data
with generative modelling, the focus has been on shallow neural networks. A
natural question to ask is whether deep neural networks hold any advantage over
shallow networks in representing such data. We investigate this question by
using unsupervised, generative graphical models to learn the probability
distribution of a two-dimensional Ising system. Deep Boltzmann machines, deep
belief networks, and deep restricted Boltzmann networks are trained on thermal
spin configurations from this system, and compared to the shallow architecture
of the restricted Boltzmann machine. We benchmark the models, focussing on the
accuracy of generating energetic observables near the phase transition, where
these quantities are most difficult to approximate. Interestingly, after
training the generative networks, we observe that the accuracy essentially
depends only on the number of neurons in the first hidden layer of the network,
and not on other model details such as network depth or model type. This is
evidence that shallow networks are more efficient than deep networks at
representing physical probability distributions associated with Ising systems
near criticality.
| Alan Morningstar and Roger G. Melko | null | 1708.04622 | null | null |
Machine Learning for Survival Analysis: A Survey | cs.LG stat.ML | Accurately predicting the time of occurrence of an event of interest is a
critical problem in longitudinal data analysis. One of the main challenges in
this context is the presence of instances whose event outcomes become
unobservable after a certain time point or when some instances do not
experience any event during the monitoring period. Such a phenomenon is called
censoring which can be effectively handled using survival analysis techniques.
Traditionally, statistical approaches have been widely developed in the
literature to overcome this censoring issue. In addition, many machine learning
algorithms are adapted to effectively handle survival data and tackle other
challenging problems that arise in real-world data. In this survey, we provide
a comprehensive and structured review of the representative statistical methods
along with the machine learning techniques used in survival analysis and
provide a detailed taxonomy of the existing methods. We also discuss several
topics that are closely related to survival analysis and illustrate several
successful applications in various real-world application domains. We hope that
this paper will provide a more thorough understanding of the recent advances in
survival analysis and offer some guidelines on applying these approaches to
solve new problems that arise in applications with censored data.
| Ping Wang, Yan Li, Chandan K. Reddy | null | 1708.04649 | null | null |
Guiding Network Analysis using Graph Slepians: An Illustration for the
C. Elegans Connectome | cs.LG q-bio.NC | Spectral approaches of network analysis heavily rely upon the
eigendecomposition of the graph Laplacian. For instance, in graph signal
processing, the Laplacian eigendecomposition is used to define the graph
Fourier transform and then transpose signal processing operations to graphs by
implementing them in the spectral domain. Here, we build on recent work that
generalized Slepian functions to the graph setting. In particular, graph
Slepians are band-limited graph signals with maximal energy concentration in a
given subgraph. We show how this approach can be used to guide network
analysis; i.e., we propose a visualization that reveals the network
organization of a subgraph while striking a balance with the global network
structure. These
developments are illustrated for the structural connectome of the C. Elegans.
| Dimitri Van De Ville, Robin Demesmaeker, Maria Giulia Preti | null | 1708.04657 | null | null |
DeepFaceLIFT: Interpretable Personalized Models for Automatic Estimation
of Self-Reported Pain | cs.CV cs.AI cs.LG | Previous research on automatic pain estimation from facial expressions has
focused primarily on "one-size-fits-all" metrics (such as PSPI). In this work,
we focus on directly estimating each individual's self-reported visual-analog
scale (VAS) pain metric, as this is considered the gold standard for pain
measurement. The VAS pain score is highly subjective and context-dependent, and
its range can vary significantly among different persons. To tackle these
issues, we propose a novel two-stage personalized model, named DeepFaceLIFT,
for automatic estimation of VAS. This model is based on (1) Neural Network and
(2) Gaussian process regression models, and is used to personalize the
estimation of self-reported pain via a set of hand-crafted personal features
and multi-task learning. We show on the benchmark dataset for pain analysis
(The UNBC-McMaster Shoulder Pain Expression Archive) that the proposed
personalized model largely outperforms the traditional, unpersonalized models:
the intra-class correlation improves from a baseline performance of 19\% to a
personalized performance of 35\% while also providing confidence in the
model's estimates -- in contrast to existing models for the
target task. Additionally, DeepFaceLIFT automatically discovers the
pain-relevant facial regions for each person, allowing for an easy
interpretation of the pain-related facial cues.
| Dianbo Liu, Fengjiao Peng, Andrew Shea, Ognjen (Oggi) Rudovic,
Rosalind Picard | null | 1708.0467 | null | null |
Learning Graph While Training: An Evolving Graph Convolutional Neural
Network | cs.LG cs.CV | Convolution Neural Networks on Graphs are important generalization and
extension of classical CNNs. While previous works generally assumed that the
graph structures of samples are regular with unified dimensions, in many
applications, they are highly diverse or even not well defined. Under some
circumstances, e.g. chemical molecular data, clustering or coarsening for
simplifying the graphs is hard to be justified chemically. In this paper, we
propose a more general and flexible graph convolution network (EGCN) fed by
batch of arbitrarily shaped data together with their evolving graph Laplacians
trained in supervised fashion. Extensive experiments have been conducted to
demonstrate the superior performance in terms of both the acceleration of
parameter fitting and the significantly improved prediction accuracy on
multiple graph-structured datasets.
| Ruoyu Li, Junzhou Huang | null | 1708.04675 | null | null |
Augmentor: An Image Augmentation Library for Machine Learning | cs.CV cs.LG stat.ML | The generation of artificial data based on existing observations, known as
data augmentation, is a technique used in machine learning to improve model
accuracy, generalisation, and to control overfitting. Augmentor is a software
package, available in both Python and Julia versions, that provides a high
level API for the expansion of image data using a stochastic, pipeline-based
approach which effectively allows for images to be sampled from a distribution
of augmented images at runtime. Augmentor provides methods for most standard
augmentation practices as well as several advanced features such as
label-preserving, randomised elastic distortions, and provides many helper
functions for typical augmentation tasks used in machine learning.
| Marcus D. Bloice, Christof Stocker, Andreas Holzinger | null | 1708.0468 | null | null |
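A typical pipeline, following the package's documented style, might look like the sketch below; the directory path is illustrative, and exact method signatures should be checked against the release in use.

```python
import Augmentor

# Build a stochastic augmentation pipeline over a directory of images. Each
# operation fires independently with its probability, so sampled images are
# effectively drawn from a distribution of augmented variants at runtime.
p = Augmentor.Pipeline("images/")                # path is illustrative
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.zoom(probability=0.5, min_factor=1.1, max_factor=1.4)
p.random_distortion(probability=0.4, grid_width=4, grid_height=4, magnitude=8)
p.sample(1000)                                   # write 1000 augmented images to disk
```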
VQS: Linking Segmentations to Questions and Answers for Supervised
Attention in VQA and Question-Focused Semantic Segmentation | cs.CV cs.CL cs.LG | Rich and dense human labeled datasets are among the main enabling factors for
the recent advance on vision-language understanding. Many seemingly distant
annotations (e.g., semantic segmentation and visual question answering (VQA))
are inherently connected in that they reveal different levels and perspectives
of human understandings about the same visual scenes --- and even the same set
of images (e.g., of COCO). The popularity of COCO correlates those annotations
and tasks. Explicitly linking them up may significantly benefit both individual
tasks and the unified vision and language modeling. We present the preliminary
work of linking the instance segmentations provided by COCO to the questions
and answers (QAs) in the VQA dataset, and name the collected links visual
questions and segmentation answers (VQS). They transfer human supervision
between the previously separate tasks, offer more effective leverage to
existing problems, and also open the door for new research problems and models.
We study two applications of the VQS data in this paper: supervised attention
for VQA and a novel question-focused semantic segmentation task. For the
former, we obtain state-of-the-art results on the VQA real multiple-choice task
by simply augmenting the multilayer perceptrons with some attention features
that are learned using the segmentation-QA links as explicit supervision. To
put the latter in perspective, we study two plausible methods and compare them
to an oracle method assuming that the instance segmentations are given at the
test stage.
| Chuang Gan, Yandong Li, Haoxiang Li, Chen Sun, Boqing Gong | null | 1708.04686 | null | null |
GANs for Biological Image Synthesis | cs.CV cs.LG stat.ML | In this paper, we propose a novel application of Generative Adversarial
Networks (GAN) to the synthesis of cells imaged by fluorescence microscopy.
Compared to natural images, cells tend to have a simpler and more geometric
global structure that facilitates image generation. However, the correlation
between the spatial pattern of different fluorescent proteins reflects
important biological functions, and synthesized images have to capture these
relationships to be relevant for biological applications. We adapt GANs to the
task at hand and propose new models with causal dependencies between image
channels that can generate multi-channel images, which would be impossible to
obtain experimentally. We evaluate our approach using two independent
techniques and compare it against sensible baselines. Finally, we demonstrate
that by interpolating across the latent space we can mimic the known changes in
protein localization that occur through time during the cell cycle, allowing us
to predict temporal evolution from static images.
| Anton Osokin, Anatole Chessel, Rafael E. Carazo Salas and Federico
Vaggi | null | 1708.04692 | null | null |
Learning Rich Geographical Representations: Predicting Colorectal Cancer
Survival in the State of Iowa | cs.LG | Neural networks are capable of learning rich, nonlinear feature
representations shown to be beneficial in many predictive tasks. In this work,
we use these models to explore the use of geographical features in predicting
colorectal cancer survival curves for patients in the state of Iowa, spanning
the years 1989 to 2012. Specifically, we compare model performance using a
newly defined metric -- area between the curves (ABC) -- to assess (a) whether
survival curves can be reasonably predicted for colorectal cancer patients in
the state of Iowa, (b) whether geographical features improve predictive
performance, and (c) whether a simple binary representation or richer, spectral
clustering-based representation perform better. Our findings suggest that
survival curves can be reasonably estimated on average, with predictive
performance deviating at the five-year survival mark. We also find that
geographical features improve predictive performance, and that the best
performance is obtained using richer, spectral analysis-elicited features.
| Michael T. Lash, Yuqi Sun, Xun Zhou, Charles F. Lynch, W. Nick Street | null | 1708.04714 | null | null |
Privacy-Enabled Biometric Search | cs.CR cs.LG | Biometrics have a long-held hope of replacing passwords by establishing a
non-repudiated identity and providing authentication with convenience.
Convenience drives consumers toward biometrics-based access management
solutions. Unlike passwords, biometrics cannot be script-injected; however,
biometric data is considered highly sensitive due to its personal nature and
unique association with users. Biometrics differ from passwords in that
compromised passwords may be reset. Compromised biometrics offer no such
relief. A compromised biometric offers unlimited risk in privacy (anyone can
view the biometric) and authentication (anyone may use the biometric).
Standards such as the Biometric Open Protocol Standard (BOPS) (IEEE 2410-2016)
provide a detailed mechanism to authenticate biometrics based on pre-enrolled
devices and a previous identity by storing the biometric in encrypted form.
This paper describes a biometric-agnostic approach that addresses the privacy
concerns of biometrics through the implementation of BOPS. Specifically, two
novel concepts are introduced. First, a biometric is applied to a neural
network to create a feature vector. This neural network alone can be used for
one-to-one matching (authentication), but would require a search in linear time
for the one-to-many case (identity lookup). The classifying algorithm described
in this paper addresses this concern by producing normalized floating-point
values for each feature vector. This allows authentication lookup to occur in
up to polynomial time, allowing for search in encrypted biometric databases
with speed, accuracy and privacy.
| Scott Streit, Brian Streit, Stephen Suffian | null | 1708.04726 | null | null |
Deconvolutional Paragraph Representation Learning | cs.CL cs.LG stat.ML | Learning latent representations from long text sequences is an important
first step in many natural language processing applications. Recurrent Neural
Networks (RNNs) have become a cornerstone for this challenging task. However,
the quality of sentences during RNN-based decoding (reconstruction) decreases
with the length of the text. We propose a sequence-to-sequence, purely
convolutional and deconvolutional autoencoding framework that is free of the
above issue, while also being computationally efficient. The proposed method is
simple, easy to implement and can be leveraged as a building block for many
applications. We show empirically that compared to RNNs, our framework is
better at reconstructing and correcting long paragraphs. Quantitative
evaluation on semi-supervised text classification and summarization tasks
demonstrates the potential for better utilization of long unlabeled text data.
| Yizhe Zhang, Dinghan Shen, Guoyin Wang, Zhe Gan, Ricardo Henao,
Lawrence Carin | null | 1708.04729 | null | null |
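A minimal sketch of such a purely convolutional/deconvolutional text autoencoder is given below (PyTorch; all sizes are illustrative assumptions). The loss here reconstructs embeddings for simplicity and is not the paper's exact training objective.

```python
import torch, torch.nn as nn

vocab, emb, L = 1000, 64, 48
embed = nn.Embedding(vocab, emb)
encoder = nn.Sequential(                       # strided Conv1d halves the length twice
    nn.Conv1d(emb, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.Conv1d(128, 256, kernel_size=4, stride=2, padding=1), nn.ReLU())
decoder = nn.Sequential(                       # deconvolutions restore the length
    nn.ConvTranspose1d(256, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose1d(128, emb, kernel_size=4, stride=2, padding=1))

tokens = torch.randint(vocab, (8, L))          # a batch of token-id sequences
target = embed(tokens).transpose(1, 2)         # (batch, emb, L)
recon = decoder(encoder(target))               # no recurrence anywhere
loss = nn.functional.mse_loss(recon, target.detach())
print(recon.shape, float(loss))
```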
Geometric Enclosing Networks | cs.LG cs.AI stat.ML | Training models to generate data has increasingly attracted research attention
and has become important in modern applications. We propose in this paper a
new geometry-based optimization approach to address this problem. Orthogonal to
current state-of-the-art density-based approaches, most notably VAE and GAN, we
present a fresh new idea that borrows the principle of minimal enclosing ball
to train a generator $G(\mathbf{z})$ in such a way that both training and
generated data, after being mapped to the feature space, are enclosed in the
same sphere. We develop theory to guarantee that the mapping is bijective so
that its inverse from feature space to data space results in expressive
nonlinear contours to describe the data manifold, hence ensuring data generated
are also lying on the data manifold learned from training data. Our model
enjoys a nice geometric interpretation, hence termed Geometric Enclosing
Networks (GEN), and possesses some key advantages over its rivals, namely a
simple and easy-to-control optimization formulation, avoidance of mode
collapse, and efficient learning of a data manifold representation in a
completely unsupervised manner. We conducted extensive experiments on synthetic
and real-world datasets to illustrate the behaviors, strengths and weaknesses
of the proposed GEN, in particular its ability to handle multi-modal data and
the quality of its generated data.
| Trung Le, Hung Vu, Tu Dinh Nguyen, Dinh Phung | null | 1708.04733 | null | null |
Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction | stat.ML cs.AI cs.LG | Missing data and noisy observations pose significant challenges for reliably
predicting events from irregularly sampled multivariate time series
(longitudinal) data. Imputation methods, which are typically used for
completing the data prior to event prediction, lack a principled mechanism to
account for the uncertainty due to missingness. Alternatively, state-of-the-art
joint modeling techniques can be used for jointly modeling the longitudinal and
event data and compute event probabilities conditioned on the longitudinal
observations. These approaches, however, make strong parametric assumptions and
do not easily scale to multivariate signals with many observations. Our
proposed approach consists of several key innovations. First, we develop a
flexible and scalable joint model based upon sparse multiple-output Gaussian
processes. Unlike state-of-the-art joint models, the proposed model can explain
highly challenging structure including non-Gaussian noise while scaling to
large data. Second, we derive an optimal policy for predicting events using the
distribution of the event occurrence estimated by the joint model. The derived
policy trades off the cost of a delayed detection versus incorrect assessments
and abstains from making decisions when the estimated event probability does
not satisfy the derived confidence criteria. Experiments on a large dataset
show that the proposed framework significantly outperforms state-of-the-art
techniques in event prediction.
| Hossein Soleimani, James Hensman, Suchi Saria | null | 1708.04757 | null | null |
Active Orthogonal Matching Pursuit for Sparse Subspace Clustering | cs.LG cs.CV cs.IT math.IT stat.ML | Sparse Subspace Clustering (SSC) is a state-of-the-art method for clustering
high-dimensional data points lying in a union of low-dimensional subspaces.
However, while $\ell_1$ optimization-based SSC algorithms suffer from high
computational complexity, other variants of SSC, such as Orthogonal Matching
Pursuit-based SSC (OMP-SSC), lose clustering accuracy in pursuit of improving
time efficiency. In this letter, we propose a novel Active OMP-SSC, which
improves clustering accuracy of OMP-SSC by adaptively updating data points and
randomly dropping data points in the OMP process, while still enjoying the low
computational complexity of greedy pursuit algorithms. We provide heuristic
analysis of our approach, and explain how these two active steps achieve a
better tradeoff between connectivity and separation. Numerical results on both
synthetic data and real-world data validate our analyses and show the
advantages of the proposed active algorithm.
| Yanxi Chen, Gen Li and Yuantao Gu | 10.1109/LSP.2017.2741509 | 1708.04764 | null | null |
Racing Thompson: an Efficient Algorithm for Thompson Sampling with
Non-conjugate Priors | cs.LG stat.ML | Thompson sampling has impressive empirical performance for many multi-armed
bandit problems. But current algorithms for Thompson sampling only work for the
case of conjugate priors, since these algorithms require inferring the posterior,
which is often computationally intractable when the prior is not conjugate. In
this paper, we propose a novel algorithm for Thompson sampling which only
requires drawing samples from a tractable distribution, so our algorithm is
efficient even when the prior is non-conjugate. To do this, we reformulate
Thompson sampling as an optimization problem via the Gumbel-Max trick. After
that we construct a set of random variables and our goal is to identify the one
with highest mean. Finally, we solve it with techniques in best arm
identification.
| Yichi Zhou, Jun Zhu, Jingwei Zhuo | null | 1708.04781 | null | null |
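The reformulation above rests on the Gumbel-Max trick, demonstrated minimally below: adding independent Gumbel noise to unnormalised log-weights and taking the argmax yields exact categorical samples without normalising a posterior. This shows only the trick itself, not the paper's best-arm-identification machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.log(np.array([0.2, 0.5, 0.3]))      # unnormalised log-probabilities

def gumbel_max_sample(logits, n):
    g = rng.gumbel(size=(n, len(logits)))       # i.i.d. Gumbel(0, 1) noise
    return (logits + g).argmax(axis=1)          # argmax ~ Categorical(p)

draws = gumbel_max_sample(logits, 100_000)
print(np.bincount(draws) / draws.size)          # ~ [0.2, 0.5, 0.3]
```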
StarCraft II: A New Challenge for Reinforcement Learning | cs.LG cs.AI | This paper introduces SC2LE (StarCraft II Learning Environment), a
reinforcement learning environment based on the StarCraft II game. This domain
poses a new grand challenge for reinforcement learning, representing a more
difficult class of problems than considered in most prior work. It is a
multi-agent problem with multiple players interacting; there is imperfect
information due to a partially observed map; it has a large action space
involving the selection and control of hundreds of units; it has a large state
space that must be observed solely from raw input feature planes; and it has
delayed credit assignment requiring long-term strategies over thousands of
steps. We describe the observation, action, and reward specification for the
StarCraft II domain and provide an open source Python-based interface for
communicating with the game engine. In addition to the main game maps, we
provide a suite of mini-games focusing on different elements of StarCraft II
gameplay. For the main game maps, we also provide an accompanying dataset of
game replay data from human expert players. We give initial baseline results
for neural networks trained from this data to predict game outcomes and player
actions. Finally, we present initial baseline results for canonical deep
reinforcement learning agents applied to the StarCraft II domain. On the
mini-games, these agents learn to achieve a level of play that is comparable to
a novice player. However, when trained on the main game, these agents are
unable to make significant progress. Thus, SC2LE offers a new and challenging
environment for exploring deep reinforcement learning algorithms and
architectures.
| Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander
Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich K\"uttler, John
Agapiou, Julian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen,
Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy
Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence,
Anders Ekermo, Jacob Repp, Rodney Tsing | null | 1708.04782 | null | null |
BitNet: Bit-Regularized Deep Neural Networks | cs.LG stat.ML | We present a novel optimization strategy for training neural networks which
we call "BitNet". The parameters of neural networks are usually unconstrained
and have a dynamic range dispersed over all real values. Our key idea is to
limit the expressive power of the network by dynamically controlling the range
and set of values that the parameters can take. We formulate this idea using a
novel end-to-end approach that circumvents the discrete parameter space by
optimizing a relaxed continuous and differentiable upper bound of the typical
classification loss function. The approach can be interpreted as a
regularization inspired by the Minimum Description Length (MDL) principle. For
each layer of the network, our approach optimizes real-valued translation and
scaling factors and arbitrary precision integer-valued parameters (weights). We
empirically compare BitNet to an equivalent unregularized model on the MNIST
and CIFAR-10 datasets. We show that BitNet converges faster to a superior
quality solution. Additionally, the resulting model has significant savings in
memory due to the use of integer-valued parameters.
| Aswin Raghavan, Mohamed Amer, Sek Chai, Graham Taylor | null | 1708.04788 | null | null |
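The flavour of range-constrained, integer-valued weights with real-valued translation and scaling can be conveyed by the plain uniform quantizer below. This is an illustrative hard quantizer; BitNet itself optimises a continuous, differentiable relaxation rather than rounding directly.

```python
import numpy as np

def quantize(w, bits=3):
    # Map weights onto 2**bits integer levels via a per-layer offset and scale,
    # then de-quantise back to real values for use in the forward pass.
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2 ** bits - 1)
    q = np.round((w - lo) / scale)              # integer-valued parameters
    return lo + scale * q

rng = np.random.default_rng(0)
w = rng.normal(size=8)
print(np.round(w, 3))
print(np.round(quantize(w), 3))                 # same range, 8 distinct levels
```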
Efficient Compression Technique for Sparse Sets | cs.IT cs.LG math.IT | Recent technological advancements have led to the generation of huge amounts
of data over the web, such as text, image, audio and video. Most of this data
is high dimensional and sparse, e.g., the bag-of-words representation used
for representing text. Often, an efficient search for similar data points needs
to be performed in many applications like clustering, nearest neighbour search,
ranking and indexing. Even though there have been significant increases in
computational power, a simple brute-force similarity-search on such datasets is
inefficient and at times impossible. Thus, it is desirable to get a compressed
representation which preserves the similarity between data points. In this
work, we consider the data points as sets and use Jaccard similarity as the
similarity measure. Compression techniques are generally evaluated on the
following parameters: 1) randomness required for compression, 2) time required
for compression, 3) dimension of the data after compression, and 4) space
required to store the compressed data. Ideally, the compressed representation
of the data should be such that the similarity between each pair of data
points is preserved, while keeping the time and the randomness required for
compression as low as possible.
We show that the compression technique suggested by Pratap and Kulkarni also
works well for Jaccard similarity. We present a theoretical proof of the same
and complement it with rigorous experimentations on synthetic as well as
real-world datasets. We also compare our results with the state-of-the-art
"min-wise independent permutation", and show that our compression algorithm
achieves almost equal accuracy while significantly reducing the compression
time and the randomness.
| Rameshwar Pratap, Ishan Sohony, Raghav Kulkarni | null | 1708.04799 | null | null |
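The baseline the abstract compares against, min-wise independent permutations (MinHash), estimates Jaccard similarity from small signatures as sketched below. The linear hash with a non-prime modulus is a simplification for illustration; this shows the baseline, not the paper's compression scheme.

```python
import numpy as np

def minhash_signature(s, n_hashes=256, universe=10**6, seed=0):
    # One approximate min-wise hash per row: sig[i] = min over x of (a_i*x + b_i) % m.
    rng = np.random.default_rng(seed)
    a = rng.integers(1, universe, size=n_hashes)
    b = rng.integers(0, universe, size=n_hashes)
    x = np.fromiter(s, dtype=np.int64)
    return ((a[:, None] * x[None, :] + b[:, None]) % universe).min(axis=1)

A, B = set(range(0, 600)), set(range(200, 800))
sa, sb = minhash_signature(A), minhash_signature(B)
print("estimated Jaccard:", (sa == sb).mean())          # signature agreement rate
print("exact Jaccard:    ", len(A & B) / len(A | B))    # 400 / 800 = 0.5
```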
Weighted parallel SGD for distributed unbalanced-workload training
system | cs.LG cs.AI stat.ML | Stochastic gradient descent (SGD) is a popular stochastic optimization method
in machine learning. Traditional parallel SGD algorithms, e.g., SimuParallel
SGD, often require all nodes to have the same performance or to consume equal
quantities of data. However, these requirements are difficult to satisfy when
the parallel SGD algorithms run in a heterogeneous computing environment;
low-performance nodes will exert a negative influence on the final result. In
this paper, we propose an algorithm called weighted parallel SGD (WP-SGD).
WP-SGD combines weighted model parameters from different nodes in the system to
produce the final output. WP-SGD makes use of the reduction in standard
deviation to compensate for the loss from the inconsistency in performance of
nodes in the cluster, which means that WP-SGD does not require that all nodes
consume equal quantities of data. We also analyze the theoretical feasibility
of running two other parallel SGD algorithms combined with WP-SGD in a
heterogeneous environment. The experimental results show that WP-SGD
significantly outperforms the traditional parallel SGD algorithms on
distributed training systems with an unbalanced workload.
| Cheng Daning, Li Shigang and Zhang Yunquan | null | 1708.04801 | null | null |
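The combination step can be sketched as a weighted average of per-node parameters. The weighting rule below (proportional to the data each node consumed) is purely illustrative; WP-SGD derives its weights from the standard-deviation analysis described in the abstract.

```python
import numpy as np

def combine(node_params, node_weights):
    w = np.asarray(node_weights, dtype=float)
    w /= w.sum()                                  # normalise combination weights
    return sum(wi * p for wi, p in zip(w, node_params))

# Three heterogeneous nodes trained on unequal data shares.
params = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.7, 2.5])]
samples_seen = [10_000, 8_000, 1_500]             # unbalanced workload
print(combine(params, samples_seen))
```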
Fixed effects testing in high-dimensional linear mixed models | stat.ME cs.LG math.ST stat.ML stat.TH | Many scientific and engineering challenges -- ranging from pharmacokinetic
drug dosage allocation and personalized medicine to marketing mix (4Ps)
recommendations -- require an understanding of the unobserved heterogeneity in
order to develop the best decision making-processes. In this paper, we develop
a hypothesis test and the corresponding p-value for testing for the
significance of the homogeneous structure in linear mixed models. A robust
matching moment construction is used for creating a test that adapts to the
size of the model sparsity. When unobserved heterogeneity at a cluster level is
constant, we show that our test is both consistent and unbiased even when the
dimension of the model is extremely high. Our theoretical results rely on a new
family of adaptive sparse estimators of the fixed effects that do not require
consistent estimation of the random effects. Moreover, our inference results do
not require consistent model selection. We showcase that moment matching can be
extended to nonlinear mixed effects models and to generalized linear mixed
effects models. In numerical and real data experiments, we find that the
developed method is extremely accurate, that it adapts to the size of the
underlying model and is decidedly powerful in the presence of irrelevant
covariates.
| Jelena Bradic, Gerda Claeskens, Thomas Gueuning | null | 1708.04887 | null | null |
mAnI: Movie Amalgamation using Neural Imitation | cs.CL cs.LG | Cross-modal data retrieval has been the basis of various creative tasks
performed by Artificial Intelligence (AI). One such highly challenging task for
AI is to convert a book into its corresponding movie, as creative filmmakers
do today. In this research, we take the first step
towards it by visualizing the content of a book using its corresponding movie
visuals. Given a set of sentences from a book or even a fan-fiction written in
the same universe, we employ deep learning models to visualize the input by
stitching together relevant frames from the movie. We studied and compared
three different types of setting to match the book with the movie content: (i)
Dialog model: using only the dialog from the movie, (ii) Visual model: using
only the visual content from the movie, and (iii) Hybrid model: using the
dialog and the visual content from the movie. Experiments on the publicly
available MovieBook dataset shows the effectiveness of the proposed models.
| Naveen Panwar, Shreya Khare, Neelamadhav Gantayat, Rahul Aralikatte,
Senthil Mani, Anush Sankaran | null | 1708.04923 | null | null |
Fault in your stars: An Analysis of Android App Reviews | cs.LG cs.CL | Mobile app distribution platforms such as Google Play Store allow users to
share their feedback about downloaded apps in the form of a review comment and
a corresponding star rating. Typically, the star rating ranges from one to five
stars, with one star denoting a high sense of dissatisfaction with the app and
five stars denoting a high sense of satisfaction.
Unfortunately, due to a variety of reasons, often the star rating provided by
a user is inconsistent with the opinion expressed in the review. For example,
consider the following review for the Facebook App on Android; "Awesome App".
One would reasonably expect the rating for this review to be five stars, but
the actual rating is one star!
Such inconsistent ratings can lead to a deflated (or inflated) overall
average rating of an app which can affect user downloads, as typically users
look at the average star ratings while making a decision on downloading an app.
Also, the app developers receive a biased feedback about the application that
does not represent ground reality. This is especially significant for small
apps with a few thousand downloads as even a small number of mismatched reviews
can bring down the average rating drastically.
In this paper, we conducted a study on this review-rating mismatch problem.
We manually examined 8600 reviews from 10 popular Android apps and found that
20% of the ratings in our dataset were inconsistent with the review. Further,
we developed three systems, two based on traditional machine learning and one
on deep learning, to automatically identify reviews whose
rating did not match with the opinion expressed in the review. Our deep
learning system performed the best and had an accuracy of 92% in identifying
the correct star rating to be associated with a given review.
| Rahul Aralikatte, Giriprasad Sridhara, Neelamadhav Gantayat, Senthil
Mani | 10.1145/3152494.3152500 | 1708.04968 | null | null |
Adaptive Threshold Sampling | stat.ML cs.LG | Sampling is a fundamental problem in computer science and statistics.
However, for a given task and stream, it is often not possible to choose good
sampling probabilities in advance. We derive a general framework for adaptively
changing the sampling probabilities via a collection of thresholds. In general,
adaptive sampling procedures introduce dependence amongst the sampled points,
making it difficult to compute expectations and ensure estimators are unbiased
or consistent. Our framework addresses this issue and further shows when
adaptive thresholds can be treated as if they were fixed thresholds that sample
items
independently. This makes our adaptive sampling schemes simple to apply as
there is no need to create custom estimators for the sampling method.
Using our framework, we derive new samplers that can address a broad range of
new and existing problems including sampling with memory rather than sample
size budgets, stratified samples, multiple objectives, distinct counting, and
sliding windows. In particular, we design a sampling procedure for the top-K
problem where, unlike in the heavy-hitter problem, the sketch size and sampling
probabilities are adaptively chosen.
| Daniel Ting | null | 1708.0497 | null | null |
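A concrete instance is bottom-k sampling, where the k-th smallest random hash acts as an adaptive threshold that shrinks as the stream grows; the framework above formalises when such samples may be treated as fixed-threshold samples. The sketch below uses fresh uniform draws as a stand-in for item hashes.

```python
import numpy as np

def bottom_k_sample(stream, k, seed=0):
    rng = np.random.default_rng(seed)
    kept = []                                   # (hash, item) pairs
    for item in stream:
        kept.append((rng.random(), item))       # stand-in for a hash of the item
        kept.sort()
        kept = kept[:k]                         # adaptive threshold = kept[-1][0]
    return [it for _, it in kept], kept[-1][0]

sample, tau = bottom_k_sample(range(10_000), k=50)
print(len(sample), "items kept; adaptive threshold ~ k/n =", round(tau, 4))
```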
ANI-1: A data set of 20M off-equilibrium DFT calculations for organic
molecules | physics.chem-ph cs.LG physics.data-an | One of the grand challenges in modern theoretical chemistry is designing and
implementing approximations that expedite ab initio methods without loss of
accuracy. Machine learning (ML), in particular neural networks, is emerging as
a powerful approach to constructing various forms of transferable atomistic
potentials. They have been successfully applied in a variety of applications in
chemistry, biology, catalysis, and solid-state physics. However, these models
are heavily dependent on the quality and quantity of data used in their
fitting. Fitting highly flexible ML potentials comes at a cost: a vast amount
of reference data is required to properly train these models. We address this
need by providing access to a large computational DFT database, which consists
of 20M conformations for 57,454 small organic molecules. We believe it will
become a new standard benchmark for comparison of current and future methods in
the ML potential community.
| Justin S. Smith, Olexandr Isayev, and Adrian E. Roitberg | 10.1038/sdata.2017.193 | 1708.04987 | null | null |
Neural Factorization Machines for Sparse Predictive Analytics | cs.LG | Many predictive tasks of web applications need to model categorical
variables, such as user IDs and demographics like genders and occupations. To
apply standard machine learning techniques, these categorical predictors are
always converted to a set of binary features via one-hot encoding, making the
resultant feature vector highly sparse. To learn from such sparse data
effectively, it is crucial to account for the interactions between features.
Factorization Machines (FMs) are a popular solution for efficiently using the
second-order feature interactions. However, FM models feature interactions in a
linear way, which can be insufficient for capturing the non-linear and complex
inherent structure of real-world data. While deep neural networks have recently
been applied to learn non-linear feature interactions in industry, such as the
Wide&Deep by Google and DeepCross by Microsoft, the deep structure meanwhile
makes them difficult to train.
In this paper, we propose a novel model Neural Factorization Machine (NFM)
for prediction under sparse settings. NFM seamlessly combines the linearity of
FM in modelling second-order feature interactions and the non-linearity of
neural network in modelling higher-order feature interactions. Conceptually,
NFM is more expressive than FM since FM can be seen as a special case of NFM
without hidden layers. Empirical results on two regression tasks show that with
one hidden layer only, NFM significantly outperforms FM with a 7.3% relative
improvement. Compared to the recent deep learning methods Wide&Deep and
DeepCross, our NFM uses a shallower structure but offers better performance,
being much easier to train and tune in practice.
| Xiangnan He, Tat-Seng Chua | null | 1708.05027 | null | null |
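NFM's Bilinear Interaction pooling layer can be sketched directly from the classical FM identity, which pools all pairwise element-wise products in linear time. Sizes below are illustrative, and the hidden layers stacked on top of the pooled vector are omitted.

```python
import numpy as np

def bi_interaction(V, x):
    # Pools sum_{i<j} (x_i v_i) * (x_j v_j) elementwise in O(k n) time using
    # the identity 0.5 * ((sum_i x_i v_i)^2 - sum_i (x_i v_i)^2).
    Vx = V * x[:, None]
    s = Vx.sum(axis=0)
    return 0.5 * (s * s - (Vx * Vx).sum(axis=0))

rng = np.random.default_rng(0)
V = rng.normal(size=(6, 4))                 # 6 features, embedding size 4
x = np.array([1.0, 0, 1, 0, 1, 0])          # sparse one-hot-style input
print(bi_interaction(V, x))                 # k-dim vector fed to hidden layers
```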
Corrupt Bandits for Preserving Local Privacy | cs.LG stat.ML | We study a variant of the stochastic multi-armed bandit (MAB) problem in
which the rewards are corrupted. In this framework, motivated by privacy
preservation in online recommender systems, the goal is to maximize the sum of
the (unobserved) rewards, based on observing transformations of these rewards
through a stochastic corruption process with known parameters. We
provide a lower bound on the expected regret of any bandit algorithm in this
corrupted setting. We devise a frequentist algorithm, KLUCB-CF, and a Bayesian
algorithm, TS-CF, and give upper bounds on their regret. We also provide the
appropriate corruption parameters to guarantee a desired level of local privacy
and analyze how this impacts the regret. Finally, we present some experimental
results that confirm our analysis.
| Pratik Gajane, Tanguy Urvoy, Emilie Kaufmann | null | 1708.05033 | null | null |
Data-driven Advice for Applying Machine Learning to Bioinformatics
Problems | q-bio.QM cs.LG stat.ML | As the bioinformatics field grows, it must keep pace not only with new data
but with new algorithms. Here we contribute a thorough analysis of 13
state-of-the-art, commonly used machine learning algorithms on a set of 165
publicly available classification problems in order to provide data-driven
algorithm recommendations to current researchers. We present a number of
statistical and visual comparisons of algorithm performance and quantify the
effect of model selection and algorithm tuning for each algorithm and dataset.
The analysis culminates in the recommendation of five algorithms with
hyperparameters that maximize classifier performance across the tested
problems, as well as general guidelines for applying machine learning to
supervised classification problems.
| Randal S. Olson, William La Cava, Zairah Mustahsan, Akshay Varik,
Jason H. Moore | null | 1708.0507 | null | null |
The Mean and Median Criterion for Automatic Kernel Bandwidth Selection
for Support Vector Data Description | cs.LG cs.AI stat.ML | Support vector data description (SVDD) is a popular technique for detecting
anomalies. The SVDD classifier partitions the whole space into an inlier
region, which consists of the region near the training data, and an outlier
region, which consists of points away from the training data. The computation
of the SVDD classifier requires a kernel function, and the Gaussian kernel is a
common choice for the kernel function. The Gaussian kernel has a bandwidth
parameter, whose value is important for good results. A small bandwidth leads
to overfitting, and the resulting SVDD classifier overestimates the number of
anomalies. A large bandwidth leads to underfitting, and the classifier fails to
detect many anomalies. In this paper we present a new automatic, unsupervised
method for selecting the Gaussian kernel bandwidth. The selected value can be
computed quickly, and it is competitive with existing bandwidth selection
methods.
| Arin Chaudhuri, Deovrat Kakde, Carol Sadek, Laura Gonzalez, Seunghyun
Kong | 10.1109/ICDMW.2017.116 | 1708.05106 | null | null |
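For orientation only, two common unsupervised bandwidth heuristics, the mean and the median of pairwise distances, are sketched below. These convey the flavour of mean/median criteria but are not the closed-form criterion the paper derives.

```python
import numpy as np
from scipy.spatial.distance import pdist

def bandwidth_heuristics(X):
    d = pdist(X)                              # all pairwise Euclidean distances
    return d.mean(), np.median(d)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
mean_bw, median_bw = bandwidth_heuristics(X)
print("mean-distance bandwidth:", round(mean_bw, 3),
      "| median-distance bandwidth:", round(median_bw, 3))
```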
Structure Learning of $H$-colorings | cs.DM cs.LG math.CO | We study the structure learning problem for $H$-colorings, an important class
of Markov random fields that capture key combinatorial structures on graphs,
including proper colorings and independent sets, as well as spin systems from
statistical physics. The learning problem is as follows: for a fixed (and
known) constraint graph $H$ with $q$ colors and an unknown graph $G=(V,E)$ with
$n$ vertices, given uniformly random $H$-colorings of $G$, how many samples are
required to learn the edges of the unknown graph $G$? We give a
characterization of $H$ for which the problem is identifiable for every $G$,
i.e., we can learn $G$ with an infinite number of samples. We also show that
there are identifiable constraint graphs for which one cannot hope to learn
every graph $G$ efficiently.
We focus particular attention on the case of proper vertex $q$-colorings of
graphs of maximum degree $d$ where intriguing connections to statistical
physics phase transitions appear. We prove that in the tree uniqueness region
(when $q>d$) the problem is identifiable and we can learn $G$ in ${\rm
poly}(d,q) \times O(n^2\log{n})$ time. In contrast for soft-constraint systems,
such as the Ising model, the best possible running time is exponential in $d$.
In the tree non-uniqueness region (when $q\leq d$) we prove that the problem is
not identifiable and thus $G$ cannot be learned. Moreover, when $q<d-\sqrt{d} +
\Theta(1)$ we prove that even learning an equivalent graph (any graph with the
same set of $H$-colorings) is computationally hard---sample complexity is
exponential in $n$ in the worst case. We further explore the connection between
the efficiency/hardness of the structure learning problem and the
uniqueness/non-uniqueness phase transition for general $H$-colorings and prove
that under the well-known Dobrushin uniqueness condition, we can learn $G$ in
${\rm poly}(d,q)\times O(n^2\log{n})$ time.
| Antonio Blanca, Zongchen Chen, Daniel \v{S}tefankovi\v{c}, Eric Vigoda | null | 1708.05118 | null | null |
Deep & Cross Network for Ad Click Predictions | cs.LG stat.ML | Feature engineering has been the key to the success of many prediction
models. However, the process is non-trivial and often requires manual feature
engineering or exhaustive searching. DNNs are able to automatically learn
feature interactions; however, they generate all the interactions implicitly,
and are not necessarily efficient in learning all types of cross features. In
this paper, we propose the Deep & Cross Network (DCN) which keeps the benefits
of a DNN model, and beyond that, it introduces a novel cross network that is
more efficient in learning certain bounded-degree feature interactions. In
particular, DCN explicitly applies feature crossing at each layer, requires no
manual feature engineering, and adds negligible extra complexity to the DNN
model. Our experimental results have demonstrated its superiority over the
state-of-the-art algorithms on the CTR prediction dataset and dense classification
dataset, in terms of both model accuracy and memory usage.
| Ruoxi Wang, Bin Fu, Gang Fu, Mingliang Wang | null | 1708.05123 | null | null |
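The cross network's explicit feature crossing follows the layer rule $x_{l+1} = x_0 (x_l^\top w_l) + b_l + x_l$, which adds only O(d) parameters per layer while raising the polynomial degree of the crosses by one. A minimal sketch with random stand-in parameters:

```python
import numpy as np

def cross_layer(x0, xl, w, b):
    # x_{l+1} = x0 * (xl . w) + b + xl : a scalar projection times the raw
    # input, plus a bias and a residual connection.
    return x0 * (xl @ w) + b + xl

rng = np.random.default_rng(0)
d = 5
x0 = rng.normal(size=d)
x = x0
for _ in range(3):                            # a 3-layer cross network
    x = cross_layer(x0, x, rng.normal(size=d), np.zeros(d))
print(x)
```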
Scalable trust-region method for deep reinforcement learning using
Kronecker-factored approximation | cs.LG | In this work, we propose to apply trust region optimization to deep
reinforcement learning using a recently proposed Kronecker-factored
approximation to the curvature. We extend the framework of natural policy
gradient and propose to optimize both the actor and the critic using
Kronecker-factored approximate curvature (K-FAC) with trust region; hence we
call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To
the best of our knowledge, this is the first scalable trust region natural
gradient method for actor-critic methods. It is also a method that learns
non-trivial tasks in continuous control as well as discrete control policies
directly from raw pixel inputs. We tested our approach across discrete domains
in Atari games as well as continuous domains in the MuJoCo environment. With
the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold
improvement in sample efficiency on average, compared to previous
state-of-the-art on-policy actor-critic methods. Code is available at
https://github.com/openai/baselines
| Yuhuai Wu, Elman Mansimov, Shun Liao, Roger Grosse, Jimmy Ba | null | 1708.05144 | null | null |
Revisiting revisits in trajectory recommendation | cs.LG | Trajectory recommendation is the problem of recommending a sequence of places
in a city for a tourist to visit. It is strongly desirable for the recommended
sequence to avoid loops, as tourists typically would not wish to revisit the
same location. Given some learned model that scores sequences, how can we then
find the highest-scoring sequence that is loop-free? This paper studies this
problem, with three contributions. First, we detail three distinct approaches
to the problem -- graph-based heuristics, integer linear programming, and list
extensions of the Viterbi algorithm -- and qualitatively summarise their
strengths and weaknesses. Second, we explicate how two ostensibly different
approaches to the list Viterbi algorithm are in fact fundamentally identical.
Third, we conduct experiments on real-world trajectory recommendation datasets
to identify the tradeoffs imposed by each of the three approaches. Overall, our
results indicate that a greedy graph-based heuristic offers excellent
performance and runtime, leading us to recommend its use for removing loops at
prediction time.
| Aditya Krishna Menon, Dawei Chen, Lexing Xie, Cheng Soon Ong | null | 1708.05165 | null | null |
Learning Universal Adversarial Perturbations with Generative Models | cs.CR cs.LG stat.ML | Neural networks are known to be vulnerable to adversarial examples, inputs
that have been intentionally perturbed to remain visually similar to the source
input, but cause a misclassification. It was recently shown that given a
dataset and classifier, there exist so-called universal adversarial
perturbations: a single perturbation that causes a misclassification when
applied to any input. In this work, we introduce universal adversarial
networks, a generative network that is capable of fooling a target classifier
when its generated output is added to a clean sample from a dataset. We show
that this technique improves on known universal adversarial attacks.
| Jamie Hayes and George Danezis | null | 1708.05207 | null | null |
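A minimal PyTorch sketch of the idea follows: a generator maps fixed noise to one bounded perturbation, trained so that adding it to any clean sample raises the target classifier's loss. The architectures, the stand-in classifier, and the single-batch "loader" are all illustrative assumptions, not the paper's setup.

```python
import torch, torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())      # perturbation generator
clf = nn.Sequential(nn.Linear(784, 10))                # stand-in pretrained classifier
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
eps = 0.1                                              # perturbation budget
z = torch.randn(1, 100)                                # fixed noise -> one universal delta

for x, y in [(torch.rand(64, 784), torch.randint(10, (64,)))]:  # data loader stub
    delta = eps * G(z)                                 # bounded via tanh and eps
    loss = -nn.functional.cross_entropy(clf(x + delta), y)      # maximise clf loss
    opt.zero_grad(); loss.backward(); opt.step()
print("trained universal perturbation shape:", tuple(delta.shape))
```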
Deep Learning at 15PF: Supervised and Semi-Supervised Classification for
Scientific Data | cs.PF cs.CV cs.LG | This paper presents the first, 15-PetaFLOP Deep Learning system for solving
scientific pattern classification problems on contemporary HPC architectures.
We develop supervised convolutional architectures for discriminating signals in
high-energy physics data as well as semi-supervised architectures for
localizing and classifying extreme weather in climate data. Our
Intelcaffe-based implementation obtains $\sim$2TFLOP/s on a single Cori
Phase-II Xeon-Phi node. We use a hybrid strategy employing synchronous
node-groups, while using asynchronous communication across groups. We use this
strategy to scale training of a single model to $\sim$9600 Xeon-Phi nodes;
obtaining peak performance of 11.73-15.07 PFLOP/s and sustained performance of
11.41-13.27 PFLOP/s. At scale, our HEP architecture produces state-of-the-art
classification accuracy on a dataset with 10M images, exceeding that achieved
by selections on high-level physics-motivated features. Our semi-supervised
architecture successfully extracts weather patterns in a 15TB climate dataset.
Our results demonstrate that Deep Learning can be optimized and scaled
effectively on many-core, HPC systems.
| Thorsten Kurth, Jian Zhang, Nadathur Satish, Ioannis Mitliagkas, Evan
Racah, Mostofa Ali Patwary, Tareq Malas, Narayanan Sundaram, Wahid Bhimji,
Mikhail Smorkalov, Jack Deslippe, Mikhail Shiryaev, Srinivas Sridharan,
Prabhat, Pradeep Dubey | null | 1708.05256 | null | null |
Designing and building the mlpack open-source machine learning library | cs.MS cs.LG cs.SE | mlpack is an open-source C++ machine learning library with an emphasis on
speed and flexibility. Since its inception in 2007, it has grown to be
a large project implementing a wide variety of machine learning algorithms,
from standard techniques such as decision trees and logistic regression to
modern techniques such as deep neural networks as well as other
recently-published cutting-edge techniques not found in any other library.
mlpack is quite fast, with benchmarks showing mlpack outperforming other
libraries' implementations of the same methods. mlpack has an active community,
with contributors from around the world---including some from PUST. This short
paper describes the goals and design of mlpack, discusses how the open-source
community functions, and shows an example usage of mlpack for a simple data
science problem.
| Ryan R. Curtin, Marcus Edel | null | 1708.05279 | null | null |
Learning Musical Relations using Gated Autoencoders | cs.SD cs.AI cs.LG | Music is usually highly structured and it is still an open question how to
design models which can successfully learn to recognize and represent musical
structure. A fundamental problem is that structurally related patterns can have
very distinct appearances, because the structural relationships are often based
on transformations of musical material, like chromatic or diatonic
transposition, inversion, retrograde, or rhythm change. In this preliminary
work, we study the potential of two unsupervised learning techniques -
Restricted Boltzmann Machines (RBMs) and Gated Autoencoders (GAEs) - to capture
pre-defined transformations from constructed data pairs. We evaluate the models
by using the learned representations as inputs in a discriminative task where
for a given type of transformation (e.g. diatonic transposition), the specific
relation between two musical patterns must be recognized (e.g. an upward
transposition of diatonic steps). Furthermore, we measure the reconstruction
error of the models when reconstructing transformed musical patterns. Lastly, we
test the models in an analogy-making task. We find that it is difficult to
learn musical transformations with the RBM and that the GAE is much more
adequate for this task, since it is able to learn representations of specific
transformations that are largely content-invariant. We believe these results
show that models such as GAEs may provide the basis for more encompassing music
analysis systems, by endowing them with a better understanding of the
structures underlying music.
| Stefan Lattner, Maarten Grachten, Gerhard Widmer | null | 1708.05325 | null | null |
SMASH: One-Shot Model Architecture Search through HyperNetworks | cs.LG | Designing architectures for deep neural networks requires expert knowledge
and substantial computation time. We propose a technique to accelerate
architecture selection by learning an auxiliary HyperNet that generates the
weights of a main model conditioned on that model's architecture. By comparing
the relative validation performance of networks with HyperNet-generated
weights, we can effectively search over a wide range of architectures at the
cost of a single training run. To facilitate this search, we develop a flexible
mechanism based on memory read-writes that allows us to define a wide range of
network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as
special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100,
STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with
similarly-sized hand-designed networks. Our code is available at
https://github.com/ajbrock/SMASH
| Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston | null | 1708.05344 | null | null |
PixelNN: Example-based Image Synthesis | cs.CV cs.GR cs.LG | We present a simple nearest-neighbor (NN) approach that synthesizes
high-frequency photorealistic images from an "incomplete" signal such as a
low-resolution image, a surface normal map, or edges. Current state-of-the-art
deep generative models designed for such conditional image synthesis lack two
important things: (1) they are unable to generate a large set of diverse
outputs, due to the mode collapse problem; and (2) they are not interpretable,
making it difficult to control the synthesized output. We demonstrate that NN
approaches potentially address such limitations, but suffer in accuracy on
small datasets. We design a simple pipeline that combines the best of both
worlds: the first stage uses a convolutional neural network (CNN) to map the
input to an (overly-smoothed) image, and the second stage uses a pixel-wise
nearest neighbor method to map the smoothed output to multiple high-quality,
high-frequency outputs in a controllable manner. We demonstrate our approach
for various input modalities, and for various domains ranging from human faces
to cats-and-dogs to shoes and handbags.
| Aayush Bansal and Yaser Sheikh and Deva Ramanan | null | 1708.05349 | null | null |
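To make the second, pixel-wise nearest-neighbor stage concrete, below is a toy NumPy sketch that, for every pixel of a smoothed image, copies the pixel from whichever training exemplar best matches the surrounding patch. Matching only at the same spatial location, the patch size, and the synthetic image pairs are simplifying assumptions; the CNN first stage and the paper's matching details are not reproduced.

import numpy as np

def pixelwise_nn(smooth, train_smooth, train_sharp, patch=5):
    # For each pixel, find the training exemplar whose same-location patch
    # best matches the query patch, and copy its high-frequency pixel.
    r = patch // 2
    H, W = smooth.shape
    pad = lambda im: np.pad(im, r, mode="reflect")
    q_im = pad(smooth)
    t_ims = [pad(t) for t in train_smooth]
    out = np.zeros_like(smooth)
    for i in range(H):
        for j in range(W):
            q = q_im[i:i + patch, j:j + patch]
            dists = [np.sum((t[i:i + patch, j:j + patch] - q) ** 2)
                     for t in t_ims]
            out[i, j] = train_sharp[int(np.argmin(dists))][i, j]
    return out

rng = np.random.default_rng(0)
train_sharp = [rng.random((32, 32)) for _ in range(4)]   # "high-frequency" targets
train_smooth = [0.5 * t for t in train_sharp]            # fake smoothed inputs
result = pixelwise_nn(0.5 * rng.random((32, 32)), train_smooth, train_sharp)
print(result.shape)                                      # (32, 32)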
Unsupervised Heart-rate Estimation in Wearables With Liquid States and A
Probabilistic Readout | cs.NE cs.LG | Heart-rate estimation is a fundamental feature of modern wearable devices. In
this paper we propose a machine intelligent approach for heart-rate estimation
from electrocardiogram (ECG) data collected using wearable devices. The novelty
of our approach lies in (1) encoding spatio-temporal properties of ECG signals
directly into spike trains and using these to excite recurrently connected
spiking neurons in a Liquid State Machine computation model; (2) a novel
learning algorithm; and (3) an intelligently designed unsupervised readout
based on Fuzzy c-Means clustering of spike responses from a subset of neurons
(Liquid states), selected using particle swarm optimization. Our approach
differs from existing works by learning directly from ECG signals (allowing
personalization), without requiring costly data annotations. Additionally, our
approach can be easily implemented on state-of-the-art spiking-based
neuromorphic systems, offering high accuracy with a significantly lower energy
footprint, leading to an extended battery life of wearable devices. We
validated our approach with CARLsim, a GPU accelerated spiking neural network
simulator modeling Izhikevich spiking neurons with Spike Timing Dependent
Plasticity (STDP) and homeostatic scaling. A range of subjects are considered
from in-house clinical trials and public ECG databases. Results show high
accuracy and low energy footprint in heart-rate estimation across subjects with
and without cardiac irregularities, signifying the strong potential of this
approach to be integrated in future wearable devices.
| Anup Das, Paruthi Pradhapan, Willemijn Groenendaal, Prathyusha
Adiraju, Raj Thilak Rajan, Francky Catthoor, Siebren Schaafsma, Jeffrey L.
Krichmar, Nikil Dutt and Chris Van Hoof | 10.1016/j.neunet.2017.12.015 | 1708.05356 | null | null |
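To illustrate the unsupervised readout step, the following NumPy sketch runs Fuzzy c-Means on a matrix of liquid-state features; the random feature matrix stands in for spike responses of the PSO-selected neurons, and the cluster count and fuzzifier m are arbitrary choices rather than values from the paper.

import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=50, seed=0):
    # Standard Fuzzy c-Means: alternate center updates and soft memberships.
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = 1.0 / (d ** (2.0 / (m - 1.0)))   # u_ik proportional to d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

states = np.random.default_rng(1).random((200, 16))  # liquid states (stand-in)
memberships, centers = fuzzy_cmeans(states)
print(memberships.argmax(axis=1)[:10])               # soft cluster per sample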
Efficient Use of Limited-Memory Accelerators for Linear Learning on
Heterogeneous Systems | cs.LG cs.DC math.OC stat.ML | We propose a generic algorithmic building block to accelerate training of
machine learning models on heterogeneous compute systems. Our scheme makes it
possible to efficiently employ compute accelerators such as GPUs and FPGAs for the training
of large-scale machine learning models, when the training data exceeds their
memory capacity. Also, it provides adaptivity to any system's memory hierarchy
in terms of size and processing speed. Our technique is built upon novel
theoretical insights regarding primal-dual coordinate methods, and uses duality
gap information to dynamically decide which part of the data should be made
available for fast processing. To illustrate the power of our approach we
demonstrate its performance for training of generalized linear models on a
large-scale dataset exceeding the memory size of a modern GPU, showing an
order-of-magnitude speedup over existing approaches.
| Celestine D\"unner, Thomas Parnell, Martin Jaggi | null | 1708.05357 | null | null |
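The following NumPy sketch illustrates how duality-gap information can drive data selection, specialized to ridge regression solved through its dual: each training example carries a closed-form gap, and only the highest-gap examples are kept on the memory-limited accelerator. The per-example gap formula follows the standard dual derivation for the squared loss; the selection rule and budget are our simplifications, not the paper's exact scheme.

import numpy as np

def per_example_gaps(A, y, alpha, lam):
    # For min_w 0.5*||Aw - y||^2 + 0.5*lam*||w||^2 with dual variables alpha
    # and w = A.T @ alpha / lam, the duality gap decomposes per example as
    # gap_i = 0.5 * (a_i . w - y_i + alpha_i)^2.
    w = A.T @ alpha / lam
    return 0.5 * (A @ w - y + alpha) ** 2

rng = np.random.default_rng(0)
A, y = rng.standard_normal((1000, 20)), rng.standard_normal(1000)
alpha = np.zeros(1000)                   # dual iterate (here: initialization)
gaps = per_example_gaps(A, y, alpha, lam=1.0)

# Keep on the accelerator only the examples with the largest gaps, i.e. those
# on which further coordinate updates are expected to pay off the most.
device_budget = 100                      # examples that fit in device memory
selected = np.argsort(gaps)[-device_budget:]
print(selected.shape)                    # (100,)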
Robust Contextual Bandit via the Capped-$\ell_{2}$ norm | cs.LG stat.ML | This paper considers the actor-critic contextual bandit for the mobile health
(mHealth) intervention. The state-of-the-art decision-making methods in mHealth
generally assume that the noise in the dynamic system follows the Gaussian
distribution. Those methods use the least-square-based algorithm to estimate
the expected reward, which is prone to the existence of outliers. To deal with
the issue of outliers, we propose a novel robust actor-critic contextual bandit
method for the mHealth intervention. In the critic updating, the
capped-$\ell_{2}$ norm is used to measure the approximation error, which
prevents outliers from dominating our objective. The critic update yields a set
of sample weights; incorporating these weights gives a weighted objective for
the actor update, in which samples that were badly noised during the critic
update receive zero weight. As a result, the robustness of both the actor and
critic updates is enhanced. There is a key parameter in the
capped-$\ell_{2}$ norm. We provide a reliable method to properly set it by
making use of one of the most fundamental definitions of outliers in
statistics. Extensive experiment results demonstrate that our method can
achieve almost identical results compared with the state-of-the-art methods on
the dataset without outliers and dramatically outperform them on the datasets
noised by outliers.
| Feiyun Zhu, Xinliang Zhu, Sheng Wang, Jiawen Yao, Junzhou Huang | null | 1708.05446 | null | null |
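As a sketch of the reweighting behavior induced by the capped-l2 norm, the NumPy snippet below alternates a weighted least-squares fit with a weight assignment that zeroes out samples whose squared residual exceeds the cap, setting the cap from the classic 1.5*IQR outlier rule. The IQR rule and the alternating scheme are our assumptions, standing in for the paper's exact parameter-setting method and actor-critic machinery.

import numpy as np

def capped_l2_weights(residuals, eps):
    # Weight 1 for samples whose squared residual is below the cap, else 0.
    return (residuals ** 2 <= eps).astype(float)

def fit_robust(X, y, n_iter=10):
    w = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ w
        q1, q3 = np.percentile(np.abs(r), [25, 75])
        eps = (q3 + 1.5 * (q3 - q1)) ** 2    # cap from the 1.5*IQR outlier rule
        sw = capped_l2_weights(r, eps)
        # Weighted least squares; zero-weight rows drop out of the fit entirely.
        w = np.linalg.lstsq(X * sw[:, None], sw * y, rcond=None)[0]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(200)
y[:10] += 50.0                               # inject gross outliers
print(np.abs(fit_robust(X, y) - w_true).max())  # stays small despite outliers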
Large Margin Learning in Set to Set Similarity Comparison for Person
Re-identification | cs.CV cs.LG stat.ML | Person re-identification (Re-ID) aims at matching images of the same person
across disjoint camera views, which is a challenging problem in the multimedia
analysis, multimedia editing, and content-based media retrieval communities. The
major challenge lies in how to preserve similarity of the same person across
video footages with large appearance variations, while discriminating different
individuals. To address this problem, conventional methods usually consider the
pairwise similarity between persons by only measuring the point to point (P2P)
distance. In this paper, we propose to use deep learning technique to model a
novel set to set (S2S) distance, in which the underlying objective focuses on
preserving the compactness of intra-class samples for each camera view, while
maximizing the margin between the intra-class set and inter-class set. The S2S
distance metric consists of three terms, namely the class-identity term,
the relative distance term and the regularization term. The class-identity term
keeps the intra-class samples within each camera view gathered together, the
relative distance term maximizes the distance between the intra-class set
and the inter-class set across different camera views, and the regularization
term smooths the parameters of the deep convolutional neural network (CNN). As a
result, the final learned deep model can effectively find out the matched
target to the probe object among various candidates in the video gallery by
learning discriminative and stable feature representations. Using the CUHK01,
CUHK03, PRID2011 and Market1501 benchmark datasets, we conducted extensive
comparative evaluations to demonstrate the advantages of our method over the
state-of-the-art approaches.
| Sanping Zhou, Jinjun Wang, Rui Shi, Qiqi Hou, Yihong Gong, Nanning
Zheng | null | 1708.05512 | null | null |
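A toy PyTorch sketch of a three-term set-to-set objective follows: a compactness term pulls each class toward its center, a hinge term pushes the nearest inter-class sample at least a margin beyond the farthest intra-class sample, and regularization is delegated to optimizer weight decay. The margin value, distance definitions, and random embeddings are illustrative assumptions, not the paper's formulation.

import torch

def s2s_loss(feats, labels, margin=1.0):
    loss_compact, loss_margin = 0.0, 0.0
    classes = labels.unique()
    for c in classes:
        pos, neg = feats[labels == c], feats[labels != c]
        center = pos.mean(dim=0)
        d_pos = (pos - center).pow(2).sum(dim=1)   # intra-class spread
        d_neg = (neg - center).pow(2).sum(dim=1)   # distance to other classes
        loss_compact = loss_compact + d_pos.mean()
        # Hinge: nearest negative must stay `margin` beyond farthest positive.
        loss_margin = loss_margin + torch.relu(d_pos.max() + margin - d_neg.min())
    return (loss_compact + loss_margin) / len(classes)

feats = torch.randn(32, 128, requires_grad=True)   # CNN embeddings (stand-in)
labels = torch.arange(32) % 4                      # four identities, 8 samples each
s2s_loss(feats, labels).backward()                 # weight decay handled by the optimizer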
Practical Block-wise Neural Network Architecture Generation | cs.CV cs.LG | Convolutional neural networks have gained a remarkable success in computer
vision. However, most usable network architectures are hand-crafted and usually
require expertise and elaborate design. In this paper, we provide a block-wise
network generation pipeline called BlockQNN which automatically builds
high-performance networks using the Q-Learning paradigm with epsilon-greedy
exploration strategy. The optimal network block is constructed by the learning
agent which is trained sequentially to choose component layers. We stack the
block to construct the whole auto-generated network. To accelerate the
generation process, we also propose a distributed asynchronous framework and an
early stop strategy. The block-wise generation brings unique advantages: (1) it
achieves results competitive with the hand-crafted state-of-the-art networks on
image classification; additionally, the best network generated by BlockQNN
achieves a 3.54% top-1 error rate on CIFAR-10, which beats all existing
auto-generated networks; (2) it offers a tremendous reduction of the search
space in designing networks, requiring only 3 days with 32 GPUs; and (3) it has
strong generalizability, in that the network built on CIFAR also performs well
on the larger-scale ImageNet dataset.
| Zhao Zhong, Junjie Yan, Wei Wu, Jing Shao, Cheng-Lin Liu | null | 1708.05552 | null | null |
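The snippet below is a toy Python sketch of the epsilon-greedy Q-learning loop for choosing block layers sequentially; the layer vocabulary, block depth, and the mocked reward (standing in for validation accuracy of the assembled network) are all assumptions for illustration.

import random
from collections import defaultdict

LAYERS = ["conv3x3", "conv5x5", "maxpool", "identity"]
MAX_DEPTH, EPS, ALPHA, GAMMA = 4, 0.1, 0.5, 1.0
Q = defaultdict(float)                  # Q[(depth, layer)] value estimates

def evaluate(block):
    return random.random()              # stand-in for train + validation accuracy

for episode in range(200):
    block, trajectory = [], []
    for depth in range(MAX_DEPTH):      # build one block, layer by layer
        if random.random() < EPS:
            layer = random.choice(LAYERS)          # explore
        else:
            layer = max(LAYERS, key=lambda a: Q[(depth, a)])  # exploit
        trajectory.append((depth, layer))
        block.append(layer)
    reward = evaluate(block)            # reward arrives only at episode end
    for depth, layer in trajectory:     # back up the terminal reward
        if depth + 1 < MAX_DEPTH:
            target = GAMMA * max(Q[(depth + 1, a)] for a in LAYERS)
        else:
            target = reward
        Q[(depth, layer)] += ALPHA * (target - Q[(depth, layer)])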
Induction of Decision Trees based on Generalized Graph Queries | cs.LG cs.AI | Usually, decision tree induction algorithms are limited to working with
non-relational data. Given a record, they do not take into account other
objects' attributes, even though these can provide valuable information for the
learning task. In this paper we present GGQ-ID3, a multi-relational decision
tree learning algorithm that uses Generalized Graph Queries (GGQ) as predicates
in the decision nodes. GGQs can express complex patterns (including cycles) and
can be refined step by step. Also, they can evaluate structures (not only
single records) and perform Regular Pattern Matching. GGQs are built
dynamically (via pattern mining) during the GGQ-ID3 tree construction process. We
will show how to use GGQ-ID3 to perform multi-relational machine learning
keeping complexity under control. Finally, some real examples of automatically
obtained classification trees and semantic patterns are shown.
| Pedro Almagro-Blanco, Fernando Sancho-Caparrini | null | 1708.05563 | null | null |
LADDER: A Human-Level Bidding Agent for Large-Scale Real-Time Online
Auctions | cs.LG cs.AI cs.CL cs.GT | We present LADDER, the first deep reinforcement learning agent that can
successfully learn control policies for large-scale real-world problems
directly from raw inputs composed of high-level semantic information. The agent
is based on an asynchronous stochastic variant of DQN (Deep Q Network) named
DASQN. The inputs of the agent are plain-text descriptions of states of a game
of incomplete information, i.e. real-time large-scale online auctions, and the
rewards are auction profits of very large scale. We apply the agent to an
essential portion of JD's online RTB (real-time bidding) advertising business
and find that it easily beats the former state-of-the-art bidding policy that
had been carefully engineered and calibrated by human experts: during JD.com's
June 18th anniversary sale, the agent increased the company's ad revenue from
the portion by more than 50%, while the advertisers' ROI (return on investment)
also improved significantly.
| Yu Wang, Jiayi Liu, Yuxiang Liu, Jun Hao, Yang He, Jinghe Hu, Weipeng
P. Yan, Mantian Li | null | 1708.05565 | null | null |
Statistical Latent Space Approach for Mixed Data Modelling and
Applications | cs.LG stat.ML | The analysis of mixed data raises challenges in statistics and
machine learning. One of the two most prominent challenges is to develop new
statistical techniques and methodologies to effectively handle mixed data by
making the data less heterogeneous with minimum loss of information. The other
challenge is that such methods must be applicable to large-scale tasks that
deal with huge amounts of mixed data. To tackle these challenges, we
introduce parameter sharing and balancing extensions to our recent model, the
mixed-variate restricted Boltzmann machine (MV.RBM) which can transform
heterogeneous data into homogeneous representation. We also integrate
structured sparsity and distance metric learning into RBM-based models. Our
proposed methods are applied in various applications including latent patient
profile modelling in medical data analysis and representation learning for
image retrieval. The experimental results demonstrate that the models perform
better than baseline methods on medical data and outperform state-of-the-art
rivals on the image dataset.
| Tu Dinh Nguyen, Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1708.05594 | null | null |
Nonnegative Restricted Boltzmann Machines for Parts-based
Representations Discovery and Predictive Model Stabilization | cs.LG | The success of any machine learning system depends critically on effective
representations of data. In many cases, it is desirable that a representation
scheme uncovers the parts-based, additive nature of the data. Of current
representation learning schemes, restricted Boltzmann machines (RBMs) have
proved to be highly effective in unsupervised settings. However, when it comes
to parts-based discovery, RBMs do not usually produce satisfactory results. We
enhance such capacity of RBMs by introducing nonnegativity into the model
weights, resulting in a variant called nonnegative restricted Boltzmann machine
(NRBM). The NRBM produces not only controllable decomposition of data into
interpretable parts but also offers a way to estimate the intrinsic nonlinear
dimensionality of data, and helps to stabilize linear predictive models. We
demonstrate the capacity of our model on applications such as handwritten digit
recognition, face recognition, document classification and patient readmission
prognosis. The decomposition quality on images is comparable with or better
than what produced by the nonnegative matrix factorization (NMF), and the
thematic features uncovered from text are qualitatively interpretable in a
similar manner to that of the latent Dirichlet allocation (LDA). The stability
performance of feature selection on medical data is better than RBM and
competitive with NMF. The learned features, when used for classification, are
more discriminative than those discovered by both NMF and LDA and comparable
with those by RBM.
| Tu Dinh Nguyen, Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1708.05603 | null | null |
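A compact NumPy sketch of training with nonnegative weights follows: a standard CD-1 update whose result is projected onto the nonnegative orthant. Projection is one simple way to enforce the constraint and is our assumption here; the authors' exact mechanism for introducing nonnegativity may differ. Shapes and the binary batch are stand-ins.

import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid, lr = 784, 64, 0.01
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def cd1_step(W, v0):
    # One step of contrastive divergence (CD-1), then project the weights
    # onto the nonnegative orthant to keep the representation parts-based.
    h0 = sigmoid(v0 @ W + b_h)
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + b_v)             # one Gibbs step
    h1 = sigmoid(v1 @ W + b_h)
    W = W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
    return np.maximum(W, 0.0)                      # nonnegativity constraint

W = np.abs(rng.standard_normal((n_vis, n_hid))) * 0.01   # start nonnegative
batch = (rng.random((32, n_vis)) < 0.5).astype(float)    # stand-in binary data
for _ in range(10):
    W = cd1_step(W, batch)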
Accelerating recurrent neural network training using sequence bucketing
and multi-GPU data parallelization | cs.LG cs.NE | An efficient algorithm for recurrent neural network training is presented.
The approach increases the training speed for tasks where the length of the input
sequence may vary significantly. The proposed approach is based on the optimal
batch bucketing by input sequence length and data parallelization on multiple
graphical processing units. The baseline training performance without sequence
bucketing is compared with the proposed solution for a different number of
buckets. An example is given for the online handwriting recognition task using
an LSTM recurrent neural network. The evaluation is performed in terms of the
wall clock time, number of epochs, and validation loss value.
| Viacheslav Khomenko (1), Oleg Shyshkov (1), Olga Radyvonenko (1),
Kostiantyn Bokhan (1) ((1) Samsung R&D Institute Ukraine SRK) | 10.1109/DSMP.2016.7583516 | 1708.05604 | null | null |
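A minimal Python sketch of the bucketing idea follows: sequences are grouped by length so that each batch is padded only up to its bucket boundary rather than the global maximum length. Bucket boundaries, batch size, and the toy data are arbitrary example values.

import random

def make_buckets(sequences, boundaries=(10, 20, 40, 80)):
    buckets = {b: [] for b in boundaries}
    for seq in sequences:
        for b in boundaries:
            if len(seq) <= b:
                buckets[b].append(seq)   # smallest bucket the sequence fits in
                break
    return buckets

def batches(buckets, batch_size=32, pad=0):
    for b, seqs in buckets.items():
        random.shuffle(seqs)             # shuffle within each bucket
        for i in range(0, len(seqs), batch_size):
            chunk = seqs[i:i + batch_size]
            # Pad only up to the bucket boundary, not the global maximum.
            yield [s + [pad] * (b - len(s)) for s in chunk]

data = [[1] * random.randint(1, 80) for _ in range(1000)]
for batch in batches(make_buckets(data)):
    pass                                 # feed each padded batch to the RNN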
Learning to Transfer | cs.AI cs.LG stat.ML | Transfer learning borrows knowledge from a source domain to facilitate
learning in a target domain. Two primary issues to be addressed in transfer
learning are what and how to transfer. For a pair of domains, adopting
different transfer learning algorithms results in different knowledge
transferred between them. To discover the optimal transfer learning algorithm
that maximally improves the learning performance in the target domain,
researchers have to exhaustively explore all existing transfer learning
algorithms, which is computationally intractable. In practice, a sub-optimal
algorithm is selected instead, a choice that requires considerable expertise
and is made in an ad-hoc way.
Meanwhile, it is widely accepted in educational psychology that human beings
improve transfer learning skills of deciding what to transfer through
meta-cognitive reflection on inductive transfer learning practices. Motivated
by this, we propose a novel transfer learning framework known as Learning to
Transfer (L2T) that automatically determines what and how best to transfer
by leveraging previous transfer learning experiences. We establish the L2T
framework in two stages: 1) we first learn a reflection function encoding
transfer learning skills from experiences; and 2) we infer what and how to
transfer for a newly arrived pair of domains by optimizing the reflection
function. Extensive experiments demonstrate the L2T's superiority over several
state-of-the-art transfer learning algorithms and its effectiveness on
discovering more transferable knowledge.
| Ying Wei, Yu Zhang, Qiang Yang | null | 1708.05629 | null | null |
Multi-objective Contextual Multi-armed Bandit with a Dominant Objective | cs.LG | In this paper, we propose a new multi-objective contextual multi-armed bandit
(MAB) problem with two objectives, where one of the objectives dominates the
other objective. Unlike single-objective MAB problems in which the learner
obtains a random scalar reward for each arm it selects, in the proposed
problem, the learner obtains a random reward vector, where each component of
the reward vector corresponds to one of the objectives and the distribution of
the reward depends on the context that is provided to the learner at the
beginning of each round. We call this problem contextual multi-armed bandit
with a dominant objective (CMAB-DO). In CMAB-DO, the goal of the learner is to
maximize its total reward in the non-dominant objective while ensuring that it
maximizes its total reward in the dominant objective. In this case, the optimal
arm given a context is the one that maximizes the expected reward in the
non-dominant objective among all arms that maximize the expected reward in the
dominant objective. First, we show that the optimal arm lies in the Pareto
front. Then, we propose the multi-objective contextual multi-armed bandit
algorithm (MOC-MAB), and define two performance measures: the 2-dimensional
(2D) regret and the Pareto regret. We show that both the 2D regret and the
Pareto regret of MOC-MAB are sublinear in the number of rounds. We also compare
the performance of the proposed algorithm with other state-of-the-art methods
in synthetic and real-world datasets. The proposed model and the algorithm have
a wide range of real-world applications that involve multiple and possibly
conflicting objectives ranging from wireless communication to medical diagnosis
and recommender systems.
| Cem Tekin and Eralp Turgay | 10.1109/TSP.2018.2841822 | 1708.05655 | null | null |
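The following Python sketch illustrates lexicographic arm selection with a dominant objective: among arms whose dominant-objective upper confidence bound is maximal up to a tolerance, the arm with the best non-dominant bound is played. The confidence radius, tolerance, and simulated rewards are illustrative stand-ins, not the MOC-MAB constants.

import math
import random

n_arms, horizon = 5, 2000
counts = [0] * n_arms
sums = [[0.0, 0.0] for _ in range(n_arms)]  # [dominant, non-dominant] totals

def pull(arm):                               # stand-in for the environment
    return (random.gauss(0.5 + 0.1 * arm, 0.1),
            random.gauss(1.0 - 0.1 * arm, 0.1))

for t in range(1, horizon + 1):
    if 0 in counts:
        arm = counts.index(0)                # play every arm once first
    else:
        rad = [math.sqrt(2 * math.log(t) / counts[a]) for a in range(n_arms)]
        ucb1 = [sums[a][0] / counts[a] + rad[a] for a in range(n_arms)]
        ucb2 = [sums[a][1] / counts[a] + rad[a] for a in range(n_arms)]
        best = max(ucb1)
        # Candidates: arms near-optimal in the dominant objective.
        cand = [a for a in range(n_arms) if ucb1[a] >= best - rad[a]]
        arm = max(cand, key=lambda a: ucb2[a])  # break ties by the other UCB
    r1, r2 = pull(arm)
    counts[arm] += 1
    sums[arm][0] += r1
    sums[arm][1] += r2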
Data-Driven Tree Transforms and Metrics | stat.ML cs.LG q-bio.QM | We consider the analysis of high dimensional data given in the form of a
matrix with columns consisting of observations and rows consisting of features.
Often the data is such that the observations do not reside on a regular grid,
and the given order of the features is arbitrary and does not convey a notion
of locality. Therefore, traditional transforms and metrics cannot be used for
data organization and analysis. In this paper, our goal is to organize the data
by defining an appropriate representation and metric such that they respect the
smoothness and structure underlying the data. We also aim to generalize the
joint clustering of observations and features in the case where the data do not
fall into clear disjoint groups. For this purpose, we propose multiscale
data-driven transforms and metrics based on trees. Their construction is
implemented in an iterative refinement procedure that exploits the
co-dependencies between features and observations. Beyond the organization of a
single dataset, our approach enables us to transfer the organization learned
from one dataset to another and to integrate several datasets together. We
present an application to breast cancer gene expression analysis: learning
metrics on the genes to cluster the tumor samples into cancer sub-types and
validating the joint organization of both the genes and the samples. We
demonstrate that using our approach to combine information from multiple gene
expression cohorts, acquired by different profiling technologies, improves the
clustering of tumor samples.
| Gal Mishne, Ronen Talmon, Israel Cohen, Ronald R. Coifman and Yuval
Kluger | null | 1708.05768 | null | null |
Semi-supervised Conditional GANs | stat.ML cs.LG | We introduce a new model for building conditional generative models in a
semi-supervised setting to conditionally generate data given attributes by
adapting the GAN framework. The proposed semi-supervised GAN (SS-GAN) model
uses a pair of stacked discriminators to learn, respectively, the marginal
distribution of the data and the conditional distribution of the attributes
given the data. In the semi-supervised setting, the marginal distribution (which
is often harder to learn) is learned from the labeled + unlabeled data, and the
conditional distribution is learned purely from the labeled data. Our
experimental results demonstrate that this model performs significantly better
compared to existing semi-supervised conditional GAN models.
| Kumar Sricharan, Raja Bala, Matthew Shreve, Hui Ding, Kumar Saketh,
Jin Sun | null | 1708.05789 | null | null |
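To make the stacked-discriminator layout concrete, here is an illustrative PyTorch sketch of a single forward pass: D_u scores real versus fake on labeled plus unlabeled data to shape the marginal, while D_c scores (data, attribute) pairs on labeled data only for the conditional. Network sizes, the one-hot attributes, and the loss bookkeeping are our assumptions; the alternating optimization loop is omitted.

import torch
import torch.nn as nn

d_x, d_a, d_z = 64, 10, 32
G = nn.Sequential(nn.Linear(d_z + d_a, 128), nn.ReLU(), nn.Linear(128, d_x))
D_u = nn.Sequential(nn.Linear(d_x, 128), nn.ReLU(), nn.Linear(128, 1))        # marginal
D_c = nn.Sequential(nn.Linear(d_x + d_a, 128), nn.ReLU(), nn.Linear(128, 1))  # conditional
bce = nn.BCEWithLogitsLoss()

x_unlab = torch.randn(32, d_x)                          # unlabeled real data
x_lab = torch.randn(16, d_x)                            # labeled real data
a_lab = torch.eye(d_a)[torch.randint(0, d_a, (16,))]    # their attributes
z = torch.randn(32, d_z)
a = torch.eye(d_a)[torch.randint(0, d_a, (32,))]        # sampled attributes
fake = G(torch.cat([z, a], dim=1))

# Marginal discriminator sees all real data; conditional one only labeled pairs.
loss_u = bce(D_u(torch.cat([x_unlab, x_lab])), torch.ones(48, 1)) + \
         bce(D_u(fake.detach()), torch.zeros(32, 1))
loss_c = bce(D_c(torch.cat([x_lab, a_lab], dim=1)), torch.ones(16, 1)) + \
         bce(D_c(torch.cat([fake.detach(), a], dim=1)), torch.zeros(32, 1))
loss_g = bce(D_u(fake), torch.ones(32, 1)) + \
         bce(D_c(torch.cat([fake, a], dim=1)), torch.ones(32, 1))
print(loss_u.item(), loss_c.item(), loss_g.item())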