title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
AutoMOS: Learning a non-intrusive assessor of naturalness-of-speech | cs.CL cs.LG stat.ML | Developers of text-to-speech synthesizers (TTS) often make use of human
raters to assess the quality of synthesized speech. We demonstrate that we can
model human raters' mean opinion scores (MOS) of synthesized speech using a
deep recurrent neural network whose inputs consist solely of a raw waveform.
Our best models provide utterance-level estimates of MOS only moderately
inferior to sampled human ratings, as shown by Pearson and Spearman
correlations. When multiple utterances are scored and averaged, a scenario
common in synthesizer quality assessment, AutoMOS achieves correlations
approaching those of human raters. The AutoMOS model has a number of
applications, such as the ability to explore the parameter space of a speech
synthesizer without requiring a human-in-the-loop.
| Brian Patton, Yannis Agiomyrgiannakis, Michael Terry, Kevin Wilson,
Rif A. Saurous, D. Sculley | null | 1611.09207 | null | null |
Robust Variational Inference | cs.LG stat.ML | Variational inference is a powerful tool for approximate inference. However,
it mainly focuses on the evidence lower bound as the variational objective, and the
development of other measures for variational inference is a promising area of
research. This paper proposes a robust modification of the evidence and a lower
bound for the evidence, which is applicable when the majority of the training
set samples are random noise objects. We provide experiments with variational
autoencoders to show the advantage of this objective over the evidence lower bound
on synthetic datasets obtained by adding uninformative noise objects to MNIST
and OMNIGLOT. Additionally, for the original MNIST and OMNIGLOT datasets we
observe a small improvement over the non-robust evidence lower bound.
| Michael Figurnov, Kirill Struminsky, Dmitry Vetrov | null | 1611.09226 | null | null |
Efficient Convolutional Auto-Encoding via Random Convexification and
Frequency-Domain Minimization | stat.ML cs.LG cs.NE | The omnipresence of deep learning architectures such as deep convolutional
neural networks (CNNs) is fueled by the synergistic combination of
ever-increasing labeled datasets and specialized hardware. Despite the
indisputable success, the reliance on huge amounts of labeled data and
specialized hardware can be a limiting factor when approaching new
applications. To help alleviate these limitations, we propose an efficient
learning strategy for layer-wise unsupervised training of deep CNNs on
conventional hardware in acceptable time. Our proposed strategy consists of
randomly convexifying the reconstruction contractive auto-encoding (RCAE)
learning objective and solving the resulting large-scale convex minimization
problem in the frequency domain via coordinate descent (CD). The main
advantages of our proposed learning strategy are: (1) a single tunable
optimization parameter; (2) fast and guaranteed convergence; and (3) the
possibility of full parallelization. Numerical experiments show that our proposed learning
strategy scales (in the worst case) linearly with image size, number of filters
and filter size.
| Meshia C\'edric Oveneke, Mitchel Aliosha-Perez, Yong Zhao, Dongmei
Jiang and Hichem Sahli | null | 1611.09232 | null | null |
Dense Prediction on Sequences with Time-Dilated Convolutions for Speech
Recognition | cs.CL cs.LG cs.NE | In computer vision, pixelwise dense prediction is the task of predicting a
label for each pixel in the image. Convolutional neural networks achieve good
performance on this task, while being computationally efficient. In this paper
we carry these ideas over to the problem of assigning a sequence of labels to a
set of speech frames, a task commonly known as framewise classification. We
show that the dense prediction view of framewise classification offers several
advantages and insights, including computational efficiency and the ability to
apply batch normalization. When doing dense prediction we pay specific
attention to strided pooling in time and introduce an asymmetric dilated
convolution, called time-dilated convolution, that allows for efficient and
elegant implementation of pooling in time. We show results using time-dilated
convolutions in a very deep VGG-style CNN with batch normalization on the Hub5
Switchboard-2000 benchmark task. With a big n-gram language model, we achieve
7.7% WER, which is the best single-model, single-pass performance reported so
far.
| Tom Sercu and Vaibhava Goel | null | 1611.09288 | null | null |
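The time-dilated convolution described above is, in effect, a 1-D convolution whose kernel taps are spread `dilation` frames apart along the time axis. The NumPy sketch below is not the authors' implementation; the function name, the valid-mode convolution, and the toy frame/feature sizes are assumptions chosen purely for illustration.

```python
import numpy as np

def time_dilated_conv(x, w, dilation=2):
    """Convolve a (T, C_in) frame sequence with a (K, C_in, C_out) kernel
    whose K taps are spaced `dilation` frames apart in time ("valid" mode)."""
    T, _ = x.shape
    K, _, c_out = w.shape
    span = (K - 1) * dilation              # temporal extent of the kernel
    out = np.zeros((T - span, c_out))
    for t in range(T - span):
        for k in range(K):
            # tap k reads the frame `k * dilation` steps ahead of position t
            out[t] += x[t + k * dilation] @ w[k]
    return out

# Toy usage: 100 speech frames with 40 features, a 3-tap kernel, dilation 2.
x = np.random.randn(100, 40)
w = np.random.randn(3, 40, 64)
y = time_dilated_conv(x, w, dilation=2)
print(y.shape)   # (96, 64): each output sees frames t, t+2 and t+4
```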
Improving Policy Gradient by Exploring Under-appreciated Rewards | cs.LG cs.AI | This paper presents a novel form of policy gradient for model-free
reinforcement learning (RL) with improved exploration properties. Current
policy-based methods use entropy regularization to encourage undirected
exploration of the reward landscape, which is ineffective in high dimensional
spaces with sparse rewards. We propose a more directed exploration strategy
that promotes exploration of under-appreciated reward regions. An action
sequence is considered under-appreciated if its log-probability under the
current policy under-estimates its resulting reward. The proposed exploration
strategy is easy to implement, requiring small modifications to an
implementation of the REINFORCE algorithm. We evaluate the approach on a set of
algorithmic tasks that have long challenged RL methods. Our approach reduces
hyper-parameter sensitivity and demonstrates significant improvements over
baseline methods. Our algorithm successfully solves a benchmark multi-digit
addition task and generalizes to long sequences. This is, to our knowledge, the
first time that a pure RL method has solved addition using only reward
feedback.
| Ofir Nachum, Mohammad Norouzi, Dale Schuurmans | null | 1611.09321 | null | null |
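The notion of an under-appreciated sequence above translates into a simple weighting: sequences whose scaled reward exceeds their log-probability are up-weighted in the exploration term. The NumPy sketch below is a hedged illustration of that weighting only; the temperature `tau`, the toy numbers, and the way the weights would be combined with the standard REINFORCE gradient are assumptions rather than the paper's exact recipe.

```python
import numpy as np

def under_appreciation_weights(rewards, log_probs, tau=0.5):
    """Self-normalized weights that emphasize action sequences whose
    log-probability under-estimates their (temperature-scaled) reward."""
    scores = rewards / tau - log_probs
    scores -= scores.max()                 # for numerical stability
    w = np.exp(scores)
    return w / w.sum()

# Toy usage: 4 sequences sampled from the current policy.
rewards   = np.array([1.0, 0.2, 0.9, 0.1])
log_probs = np.array([-2.0, -1.0, -9.0, -1.5])   # the third is under-appreciated
print(under_appreciation_weights(rewards, log_probs))
```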
Accelerated Gradient Temporal Difference Learning | cs.AI cs.LG stat.ML | The family of temporal difference (TD) methods spans a spectrum from
computationally frugal linear methods like TD({\lambda}) to data-efficient
least-squares methods. Least-squares methods make the best use of available data
by directly computing the TD solution and thus do not require tuning a typically
highly sensitive learning-rate parameter, but they require quadratic computation and
storage. Recent algorithmic developments have yielded several sub-quadratic
methods that use an approximation to the least squares TD solution, but incur
bias. In this paper, we propose a new family of accelerated gradient TD (ATD)
methods that (1) provide similar data efficiency benefits to least-squares
methods, at a fraction of the computation and storage; (2) significantly reduce
parameter sensitivity compared to linear TD methods, and (3) are asymptotically
unbiased. We illustrate these claims with a proof of convergence in expectation
and experiments on several benchmark domains and a large-scale industrial
energy allocation domain.
| Yangchen Pan, Adam White, Martha White | null | 1611.09328 | null | null |
Dictionary Learning with Equiprobable Matching Pursuit | cs.LG | Sparse signal representations based on linear combinations of learned atoms
have been used to obtain state-of-the-art results in several practical signal
processing applications. Approximation methods are needed to process
high-dimensional signals in this way because the problem of calculating optimal
atoms for sparse coding is NP-hard. Here we study greedy algorithms for
unsupervised learning of dictionaries of shift-invariant atoms and propose a
new method where each atom is selected with the same probability on average,
which corresponds to the homeostatic regulation of a recurrent convolutional
neural network. Equiprobable selection can be used with several greedy
algorithms for dictionary learning to ensure that all atoms adapt during
training and that no particular atom is more likely to take part in the linear
combination on average. We demonstrate via simulation experiments that
dictionary learning with equiprobable selection results in higher entropy of
the sparse representation and lower reconstruction and denoising errors, both
in the case of ordinary matching pursuit and orthogonal matching pursuit with
shift-invariant dictionaries. Furthermore, we show that the computational costs
of the matching pursuits are lower with equiprobable selection, leading to
faster and more accurate dictionary learning algorithms.
| Fredrik Sandin, Sergio Martin-del-Campo | 10.1109/IJCNN.2017.7965902 | 1611.09333 | null | null |
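As a rough illustration of equiprobable selection, the sketch below runs plain matching pursuit with a per-atom homeostatic gain that drops whenever an atom wins and slowly recovers otherwise, nudging selection frequencies toward uniformity. The gain update, the step size `eta`, and the function name are assumptions for illustration; the paper's homeostatic rule may differ in detail.

```python
import numpy as np

def equiprobable_mp(x, D, n_select=10, eta=0.05):
    """Matching pursuit over a unit-norm dictionary D (atoms as columns),
    with a gain-modulated selection step that discourages over-used atoms."""
    residual = x.copy()
    gains = np.ones(D.shape[1])
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_select):
        corr = D.T @ residual
        k = np.argmax(gains * np.abs(corr))          # gain-modulated selection
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
        gains -= eta * (np.arange(D.shape[1]) == k)  # penalize the winner ...
        gains += eta / D.shape[1]                    # ... and let all gains recover
    return coeffs, residual

# Toy usage: a random unit-norm dictionary and a random signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(64)
coeffs, residual = equiprobable_mp(x, D)
print(np.count_nonzero(coeffs), round(float(np.linalg.norm(residual)), 3))
```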
Diet Networks: Thin Parameters for Fat Genomics | cs.LG stat.ML | Learning tasks such as those involving genomic data often pose a serious
challenge: the number of input features can be orders of magnitude larger than
the number of training examples, making it difficult to avoid overfitting, even
when using known regularization techniques. We focus here on tasks in which
the input is a description of the genetic variation specific to a patient, the
single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs.
Improving the ability of deep learning to handle such datasets could have an
important impact in precision medicine, where high-dimensional data regarding a
particular patient is used to make predictions of interest. Even though the
amount of data for such tasks is increasing, this mismatch between the number
of examples and the number of inputs remains a concern. Naive implementations
of classifier neural networks involve a huge number of free parameters in their
first layer: each input feature is associated with as many parameters as there
are hidden units. We propose a novel neural network parametrization which
considerably reduces the number of free parameters. It is based on the idea
that we can first learn or provide a distributed representation for each input
feature (e.g. for each position in the genome where variations are observed),
and then learn (with another neural network called the parameter prediction
network) how to map a feature's distributed representation to the vector of
parameters specific to that feature in the classifier neural network (the
weights which link the value of the feature to each of the hidden units). We
show experimentally on a population stratification task of interest to medical
studies that the proposed approach can significantly reduce both the number of
parameters and the error rate of the classifier.
| Adriana Romero, Pierre Luc Carrier, Akram Erraqabi, Tristan Sylvain,
Alex Auvolat, Etienne Dejoie, Marc-Andr\'e Legault, Marie-Pierre Dub\'e,
Julie G. Hussin, Yoshua Bengio | null | 1611.09340 | null | null |
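The central trick above, predicting the fat first-layer weight matrix from per-feature embeddings, fits in a few lines. The sketch below uses a single linear parameter-prediction layer, random embeddings, and toy sizes; all of these are illustrative assumptions, since the actual model uses learned embeddings and a deeper prediction network.

```python
import numpy as np

rng = np.random.default_rng(0)
d, e, h = 10_000, 50, 100    # d input features (SNPs), e-dim embeddings, h hidden units

E = rng.standard_normal((d, e))          # one embedding per input feature
V = rng.standard_normal((e, h)) * 0.01   # parameter-prediction "network" (linear here)

W = E @ V                                # predicted first-layer weights, d x h
x = rng.integers(0, 3, size=d).astype(float)   # one patient's ternary SNP vector (toy)
hidden = np.maximum(0, x @ W)            # first hidden layer of the classifier

print(W.shape, hidden.shape)             # (10000, 100) (100,)
print("free parameters:", V.size, "instead of", W.size)
```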
Unifying Multi-Domain Multi-Task Learning: Tensor and Neural Network
Perspectives | cs.LG | Multi-domain learning aims to benefit from simultaneously learning across
several different but related domains. In this chapter, we propose a single
framework that unifies multi-domain learning (MDL) and the related but better
studied area of multi-task learning (MTL). By exploiting the concept of a
\emph{semantic descriptor} we show how our framework encompasses various
classic and recent MDL/MTL algorithms as special cases with different semantic
descriptor encodings. As a second contribution, we present a higher order
generalisation of this framework, capable of simultaneous
multi-task-multi-domain learning. This generalisation has two mathematically
equivalent views in multi-linear algebra and gated neural networks
respectively. Moreover, by exploiting the semantic descriptor, it provides
neural networks the capability of zero-shot learning (ZSL), where a classifier
is generated for an unseen class without any training data; as well as
zero-shot domain adaptation (ZSDA), where a model is generated for an unseen
domain without any training data. In practice, this framework provides a
powerful yet easy to implement method that can be flexibly applied to MTL, MDL,
ZSL and ZSDA.
| Yongxin Yang, Timothy M. Hospedales | null | 1611.09345 | null | null |
The Emergence of Organizing Structure in Conceptual Representation | cs.LG stat.ML | Both scientists and children make important structural discoveries, yet their
computational underpinnings are not well understood. Structure discovery has
previously been formalized as probabilistic inference about the right
structural form --- where form could be a tree, ring, chain, grid, etc. [Kemp &
Tenenbaum (2008). The discovery of structural form. PNAS, 105(3), 10687-10692].
While this approach can learn intuitive organizations, including a tree for
animals and a ring for the color circle, it assumes a strong inductive bias
that considers only these particular forms, and each form is explicitly
provided as initial knowledge. Here we introduce a new computational model of
how organizing structure can be discovered, utilizing a broad hypothesis space
with a preference for sparse connectivity. Given that the inductive bias is
more general, the model's initial knowledge shows little qualitative
resemblance to some of the discoveries it supports. As a consequence, the model
can also learn complex structures for domains that lack intuitive description,
as well as predict human property induction judgments without explicit
structural forms. By allowing form to emerge from sparsity, our approach
clarifies how both the richness and flexibility of human conceptual
organization can coexist.
| Brenden M. Lake, Neil D. Lawrence, Joshua B. Tenenbaum | null | 1611.09384 | null | null |
Safety-Aware Robot Damage Recovery Using Constrained Bayesian
Optimization and Simulated Priors | cs.RO cs.LG | The recently introduced Intelligent Trial-and-Error (IT&E) algorithm showed
that robots can adapt to damage in a matter of a few trials. The success of
this algorithm relies on two components: prior knowledge acquired through
simulation with an intact robot, and Bayesian optimization (BO) that operates
on-line, on the damaged robot. While IT&E leads to fast damage recovery, it
does not incorporate any safety constraints that prevent the robot from
attempting harmful behaviors. In this work, we address this limitation by
replacing the BO component with a constrained BO procedure. We evaluate our
approach on a simulated damaged humanoid robot that needs to crawl as fast as
possible, while performing as few unsafe trials as possible. We compare our new
"safety-aware IT&E" algorithm to IT&E and a multi-objective version of IT&E in
which the safety constraints are treated as separate objectives. Our results show
that our algorithm outperforms the other approaches, both in crawling speed
within the safe regions and in the number of unsafe trials.
| Vaios Papaspyros, Konstantinos Chatzilygeroudis, Vassilis Vassiliades
and Jean-Baptiste Mouret | null | 1611.09419 | null | null |
Emergence of foveal image sampling from learning to attend in visual
scenes | cs.NE cs.AI cs.LG | We describe a neural attention model with a learnable retinal sampling
lattice. The model is trained on a visual search task requiring the
classification of an object embedded in a visual scene amidst background
distractors using the smallest number of fixations. We explore the tiling
properties that emerge in the model's retinal sampling lattice after training.
Specifically, we show that this lattice resembles the eccentricity dependent
sampling lattice of the primate retina, with a high resolution region in the
fovea surrounded by a low resolution periphery. Furthermore, we find conditions
where these emergent properties are amplified or eliminated, providing clues to
their function.
| Brian Cheung, Eric Weiss, Bruno Olshausen | null | 1611.09430 | null | null |
Input Switched Affine Networks: An RNN Architecture Designed for
Interpretability | cs.AI cs.CL cs.LG cs.NE | There exist many problem domains where the interpretability of neural network
models is essential for deployment. Here we introduce a recurrent architecture
composed of input-switched affine transformations - in other words an RNN
without any explicit nonlinearities, but with input-dependent recurrent
weights. This simple form allows the RNN to be analyzed via straightforward
linear methods: we can exactly characterize the linear contribution of each
input to the model predictions; we can use a change-of-basis to disentangle
input, output, and computational hidden unit subspaces; we can fully
reverse-engineer the architecture's solution to a simple task. Despite this
ease of interpretation, the input switched affine network achieves reasonable
performance on text modeling tasks, and allows greater computational
efficiency than networks with standard nonlinearities.
| Jakob N. Foerster, Justin Gilmer, Jan Chorowski, Jascha
Sohl-Dickstein, David Sussillo | null | 1611.09434 | null | null |
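Because the recurrence has no explicit nonlinearity, an input-switched affine network can be written and unrolled in a handful of lines, which is exactly what makes the linear analyses above possible. The sketch below is a minimal NumPy forward pass with an arbitrary vocabulary size, hidden width, and initialization; the readout layer and the paper's analysis tools are omitted.

```python
import numpy as np

def isan_forward(tokens, W, b, h0):
    """Hidden states of an input-switched affine network: the recurrent
    weights W[t] and bias b[t] are selected by the current input token t."""
    h = h0.copy()
    states = []
    for t in tokens:
        h = W[t] @ h + b[t]        # purely affine update, switched by the input
        states.append(h)
    return np.stack(states)

# Toy usage: vocabulary of 5 tokens, 8 hidden units.
rng = np.random.default_rng(0)
V, H = 5, 8
W = rng.standard_normal((V, H, H)) * 0.2
b = rng.standard_normal((V, H)) * 0.1
states = isan_forward([0, 3, 3, 1, 4], W, b, np.zeros(H))
print(states.shape)   # (5, 8)
```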
The empirical size of trained neural networks | stat.ML cs.LG | ReLU neural networks define piecewise linear functions of their inputs.
However, initializing and training a neural network is very different from
fitting a linear spline. In this paper, we expand empirically upon previous
theoretical work to demonstrate features of trained neural networks. Standard
network initialization and training produce networks vastly simpler than a
naive parameter count would suggest and can impart odd features to the trained
network. However, we also show the forced simplicity is beneficial and, indeed,
critical for the wide success of these networks.
| Kevin K. Chen, Anthony Gamst, Alden Walker | null | 1611.09444 | null | null |
The Upper Bound on Knots in Neural Networks | stat.ML cs.LG | Neural networks with rectified linear unit activations are essentially
multivariate linear splines. As such, one of many ways to measure the
"complexity" or "expressivity" of a neural network is to count the number of
knots in the spline model. We study the number of knots in fully-connected
feedforward neural networks with rectified linear unit activation functions. We
intentionally keep the neural networks very simple, so as to make theoretical
analyses more approachable. An induction on the number of layers $l$ reveals a
tight upper bound on the number of knots in $\mathbb{R} \to \mathbb{R}^p$ deep
neural networks. With $n_i \gg 1$ neurons in layer $i = 1, \dots, l$, the upper
bound is approximately $n_1 \dots n_l$. We then show that the exact upper bound
is tight, and we demonstrate the upper bound with an example. The purpose of
these analyses is to pave a path for understanding the behavior of general
$\mathbb{R}^q \to \mathbb{R}^p$ neural networks.
| Kevin K. Chen | null | 1611.09448 | null | null |
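A quick empirical check of the bound is easy to run: build a small random $\mathbb{R} \to \mathbb{R}$ ReLU network, count slope changes of its output on a dense grid, and compare against $n_1 n_2$. The snippet below is only such a sanity check under its own assumptions (random Gaussian weights, a finite grid, a numerical slope tolerance); it is not the paper's proof or its exact tight bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 6, 5                                # widths of the two hidden layers

# Random fully-connected R -> R ReLU network.
W1, b1 = rng.standard_normal((n1, 1)), rng.standard_normal(n1)
W2, b2 = rng.standard_normal((n2, n1)), rng.standard_normal(n2)
w3, b3 = rng.standard_normal(n2), rng.standard_normal()

def f(x):
    h1 = np.maximum(0, W1 @ x[None, :] + b1[:, None])
    h2 = np.maximum(0, W2 @ h1 + b2[:, None])
    return w3 @ h2 + b3

xs = np.linspace(-20.0, 20.0, 400_001)
slopes = np.diff(f(xs)) / np.diff(xs)
changes = ~np.isclose(np.diff(slopes), 0.0, atol=1e-6)
# a knot inside a grid cell produces a short run of slope changes, so count runs
knots = int(np.sum(changes[1:] & ~changes[:-1])) + int(changes[0])

print(f"empirical knots: {knots}, approximate upper bound n1*n2 = {n1 * n2}")
```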
Cost-Sensitive Reference Pair Encoding for Multi-Label Learning | cs.LG stat.ML | Label space expansion for multi-label classification (MLC) is a methodology
that encodes the original label vectors to higher dimensional codes before
training and decodes the predicted codes back to the label vectors during
testing. The methodology has been demonstrated to improve the performance of
MLC algorithms when coupled with off-the-shelf error-correcting codes for
encoding and decoding. Nevertheless, such a coding scheme can be complicated to
implement, and cannot easily satisfy a common application need of
cost-sensitive MLC---adapting to different evaluation criteria of interest. In
this work, we show that a simpler coding scheme based on the concept of a
reference pair of label vectors achieves cost-sensitivity more naturally. In
particular, our proposed cost-sensitive reference pair encoding (CSRPE)
algorithm contains cluster-based encoding, weight-based training and
voting-based decoding steps, all utilizing the cost information. Furthermore,
we leverage the cost information embedded in the code space of CSRPE to propose
a novel active learning algorithm for cost-sensitive MLC. Extensive
experimental results verify that CSRPE performs better than state-of-the-art
algorithms across different MLC criteria. The results also demonstrate that the
CSRPE-backed active learning algorithm is superior to existing algorithms for
active MLC, and further justify the usefulness of CSRPE.
| Yao-Yuan Yang, Kuan-Hao Huang, Chih-Wei Chang, Hsuan-Tien Lin | 10.1007/978-3-319-93034-3_12 | 1611.09461 | null | null |
Fast Wavenet Generation Algorithm | cs.SD cs.DS cs.LG | This paper presents an efficient implementation of the Wavenet generation
process called Fast Wavenet. Compared to a naive implementation that has
complexity O(2^L) (L denotes the number of layers in the network), our proposed
approach removes redundant convolution operations by caching previous
calculations, thereby reducing the complexity to O(L) time. Timing experiments
show significant advantages of our fast implementation over a naive one. While
this method is presented for Wavenet, the same scheme can be applied anytime
one wants to perform autoregressive generation or online prediction using a
model with dilated convolution layers. The code for our method is publicly
available.
| Tom Le Paine, Pooya Khorrami, Shiyu Chang, Yang Zhang, Prajit
Ramachandran, Mark A. Hasegawa-Johnson, Thomas S. Huang | null | 1611.09482 | null | null |
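The caching scheme is easy to sketch: every dilated layer keeps a queue of its recent inputs whose length equals its dilation, so generating one sample touches each layer exactly once instead of recomputing the whole receptive field. The class below is a stripped-down illustration with scalar samples and plain tanh layers; Wavenet's gated activations, skip connections, and softmax output are omitted, and all names and sizes are assumptions.

```python
import numpy as np
from collections import deque

class FastDilatedStack:
    """O(L)-per-sample generation for a stack of 2-tap dilated layers:
    each layer caches its own input history in a queue of length `dilation`."""

    def __init__(self, dilations, rng):
        self.dilations = dilations
        self.w = rng.standard_normal((len(dilations), 2)) * 0.3   # two taps per layer
        self.queues = [deque([0.0] * d, maxlen=d) for d in dilations]

    def step(self, x):
        for i, d in enumerate(self.dilations):
            delayed = self.queues[i][0]     # this layer's input from d steps ago
            self.queues[i].append(x)        # cache the current input
            x = np.tanh(self.w[i, 0] * x + self.w[i, 1] * delayed)
        return x

rng = np.random.default_rng(0)
net = FastDilatedStack([1, 2, 4, 8], rng)
sample = 0.0
for _ in range(16):                         # free-running autoregressive generation
    sample = net.step(sample)
print(sample)
```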
Graph-Based Manifold Frequency Analysis for Denoising | cs.LG stat.ML | We propose a new framework for manifold denoising based on processing in the
graph Fourier frequency domain, derived from the spectral decomposition of the
discrete graph Laplacian. Our approach uses the Spectral Graph Wavelet
transform in order to perform non-iterative denoising directly in the graph
frequency domain, an approach inspired by conventional wavelet-based signal
denoising methods. We theoretically justify our approach, based on the fact
that for smooth manifolds the coordinate information energy is localized in the
low spectral graph wavelet sub-bands, while the noise affects all frequency
bands in a similar way. Experimental results show that our proposed manifold
frequency denoising (MFD) approach significantly outperforms the state of the
art denoising methods, and is robust to a wide range of parameter selections,
e.g., the choice of k nearest neighbor connectivity of the graph.
| Shay Deutsch, Antonio Ortega, Gerard Medioni | null | 1611.09510 | null | null |
Associative Memory using Dictionary Learning and Expander Decoding | stat.ML cs.IT cs.LG math.IT | An associative memory is a framework of content-addressable memory that
stores a collection of message vectors (or a dataset) over a neural network
while enabling a neurally feasible mechanism to recover any message in the
dataset from its noisy version. Designing an associative memory requires
addressing two main tasks: 1) learning phase: given a dataset, learn a concise
representation of the dataset in the form of a graphical model (or a neural
network), 2) recall phase: given a noisy version of a message vector from the
dataset, output the correct message vector via a neurally feasible algorithm
over the network learnt during the learning phase. This paper studies the
problem of designing a class of neural associative memories which learns a
network representation for a large dataset that ensures correction against a
large number of adversarial errors during the recall phase. Specifically, the
associative memories designed in this paper can store a dataset containing
$\exp(n)$ $n$-length message vectors over a network with $O(n)$ nodes and can
tolerate $\Omega(\frac{n}{{\rm polylog} n})$ adversarial errors. This paper
carries out this memory design by mapping the learning phase and recall phase
to the tasks of dictionary learning with a square dictionary and iterative
error correction in an expander code, respectively.
| Arya Mazumdar and Ankit Singh Rawat | null | 1611.09621 | null | null |
Improving Variational Auto-Encoders using Householder Flow | cs.LG stat.ML | Variational auto-encoders (VAE) are scalable and powerful generative models.
However, the choice of the variational posterior determines tractability and
flexibility of the VAE. Commonly, latent variables are modeled using the normal
distribution with a diagonal covariance matrix. This results in computational
efficiency but typically it is not flexible enough to match the true posterior
distribution. One fashion of enriching the variational posterior distribution
is application of normalizing flows, i.e., a series of invertible
transformations to latent variables with a simple posterior. In this paper, we
follow this line of thinking and propose a volume-preserving flow that uses a
series of Householder transformations. We show empirically on the MNIST dataset and
histopathology data that the proposed flow allows us to obtain a more flexible
variational posterior and competitive results compared to other normalizing
flows.
| Jakub M. Tomczak and Max Welling | null | 1611.09630 | null | null |
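Each Householder step is a one-line reflection, which is what keeps the flow cheap and volume-preserving. The sketch below pushes a latent sample through a few reflections and checks that its norm is unchanged; in a real VAE the reflection vectors would be produced by the encoder, so the random vectors here are placeholders.

```python
import numpy as np

def householder_flow(z, vs):
    """Apply a series of Householder reflections H = I - 2 v v^T / ||v||^2
    to a latent sample z; each H is orthogonal, so the flow is volume-preserving."""
    for v in vs:
        z = z - 2.0 * np.dot(v, z) / np.dot(v, v) * v
    return z

# Toy usage: a 4-dimensional latent sample pushed through 3 reflections.
rng = np.random.default_rng(0)
z0 = rng.standard_normal(4)
vs = rng.standard_normal((3, 4))    # in a VAE these come from the encoder
z = householder_flow(z0, vs)
print(round(float(np.linalg.norm(z0)), 6), round(float(np.linalg.norm(z)), 6))  # norms match
```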
Gossip training for deep learning | cs.CV cs.LG stat.ML | We address the issue of speeding up the training of convolutional networks.
Here we study a distributed method adapted to stochastic gradient descent
(SGD). The parallel optimization setup uses several threads, each applying
individual gradient descents on a local variable. We propose a new way to share
information between different threads inspired by gossip algorithms and showing
good consensus convergence properties. Our method, called GoSGD, has the
advantage of being fully asynchronous and decentralized. We compare our method to
the recent EASGD in \cite{elastic} on CIFAR-10 and show encouraging results.
| Michael Blot, David Picard, Matthieu Cord, Nicolas Thome | null | 1611.09726 | null | null |
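The gossip exchange at the heart of GoSGD can be sketched as random pairwise mixing of parameter copies. The snippet below shows only the consensus part: the local SGD steps each worker would interleave are omitted, and the symmetric 0.5/0.5 mix is a simplification of the weighted exchange used in the paper.

```python
import numpy as np

def gossip_round(params, rng, mix=0.5):
    """One gossip exchange: a random pair of workers mixes their parameter copies."""
    i, j = rng.choice(len(params), size=2, replace=False)
    avg = mix * params[i] + (1.0 - mix) * params[j]
    params[i], params[j] = avg.copy(), avg.copy()
    return params

# Toy usage: 8 workers starting from different parameter vectors.
rng = np.random.default_rng(0)
params = [rng.standard_normal(10) for _ in range(8)]
for _ in range(200):                         # local gradient steps would go here
    params = gossip_round(params, rng)
spread = float(np.max(np.std(np.stack(params), axis=0)))
print(f"max parameter std across workers after gossip: {spread:.2e}")
```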
Co-adaptive learning over a countable space | stat.ML cs.LG | Co-adaptation is a special form of on-line learning where an algorithm
$\mathcal{A}$ must assist an unknown algorithm $\mathcal{B}$ to perform some
task. This is a general framework and has applications in recommendation
systems, search, education, and much more. Today, the most common use of
co-adaptive algorithms is in brain-computer interfacing (BCI), where algorithms
help patients gain and maintain control over prosthetic devices. While previous
studies have shown strong empirical results (Kowalski et al., 2013; Orsborn et
al., 2014) or have analyzed specific examples (Merel et al., 2013, 2015),
there is no general analysis of the co-adaptive learning problem. Here we will
study the co-adaptive learning problem in the online, closed-loop setting. We
will prove that, with high probability, co-adaptive learning is guaranteed to
outperform learning with a fixed decoder as long as a particular condition is
met.
| Michael Rabadi | null | 1611.09816 | null | null |
Measuring and modeling the perception of natural and unconstrained gaze
in humans and machines | q-bio.NC cs.AI cs.CV cs.LG | Humans are remarkably adept at interpreting the gaze direction of other
individuals in their surroundings. This skill is at the core of the ability to
engage in joint visual attention, which is essential for establishing social
interactions. How accurate are humans in determining the gaze direction of
others in lifelike scenes, when they can move their heads and eyes freely, and
what are the sources of information for the underlying perceptual processes?
These questions pose a challenge from both empirical and computational
perspectives, due to the complexity of the visual input in real-life
situations. Here we measure empirically human accuracy in perceiving the gaze
direction of others in lifelike scenes, and study computationally the sources
of information and representations underlying this cognitive capacity. We show
that humans perform better in face-to-face conditions compared with recorded
conditions, and that this advantage is not due to the availability of input
dynamics. We further show that humans are still performing well when only the
eyes-region is visible, rather than the whole face. We develop a computational
model, which replicates the pattern of human performance, including the finding
that the eyes-region contains, on its own, the required information for
estimating both head orientation and direction of gaze. Consistent with
neurophysiological findings on task-specific face regions in the brain, the
learned computational representations reproduce perceptual effects such as the
Wollaston illusion, when trained to estimate direction of gaze, but not when
trained to recognize objects or faces.
| Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon
Ullman | null | 1611.09819 | null | null |
Learning Features of Music from Scratch | stat.ML cs.LG cs.SD | This paper introduces a new large-scale music dataset, MusicNet, to serve as
a source of supervision and evaluation of machine learning methods for music
research. MusicNet consists of hundreds of freely-licensed classical music
recordings by 10 composers, written for 11 instruments, together with
instrument/note annotations resulting in over 1 million temporal labels on 34
hours of chamber music performances under various studio and microphone
conditions.
The paper defines a multi-label classification task to predict notes in
musical recordings, along with an evaluation protocol, and benchmarks several
machine learning architectures for this task: i) learning from spectrogram
features; ii) end-to-end learning with a neural net; iii) end-to-end learning
with a convolutional neural net. These experiments show that end-to-end models
trained for note prediction learn frequency selective filters as a low-level
representation of audio.
| John Thickstun, Zaid Harchaoui, Sham Kakade | null | 1611.09827 | null | null |
Identity-sensitive Word Embedding through Heterogeneous Networks | cs.CL cs.LG stat.ML | Most existing word embedding approaches do not distinguish the same words in
different contexts, therefore ignoring their contextual meanings. As a result,
the learned embeddings of these words are usually a mixture of multiple
meanings. In this paper, we acknowledge multiple identities of the same word in
different contexts and learn the \textbf{identity-sensitive} word embeddings.
Based on an identity-labeled text corpora, a heterogeneous network of words and
word identities is constructed to model different levels of word
co-occurrences. The heterogeneous network is further embedded into a
low-dimensional space through a principled network embedding approach, through
which we are able to obtain the embeddings of words and the embeddings of word
identities. We study three different types of word identities including topics,
sentiments and categories. Experimental results on real-world data sets show
that the identity-sensitive word embeddings learned by our approach indeed
capture different meanings of words and outperform competitive methods on
tasks including text classification and word similarity computation.
| Jian Tang, Meng Qu, and Qiaozhu Mei | null | 1611.09878 | null | null |
Exploration for Multi-task Reinforcement Learning with Deep Generative
Models | cs.AI cs.LG stat.ML | Exploration in multi-task reinforcement learning is critical in training
agents to deduce the underlying MDP. Many of the existing exploration
frameworks such as $E^3$, $R_{max}$, Thompson sampling assume a single
stationary MDP and are not suitable for system identification in the multi-task
setting. We present a novel method to facilitate exploration in multi-task
reinforcement learning using deep generative models. We supplement our method
with a low dimensional energy model to learn the underlying MDP distribution
and provide a resilient and adaptive exploration signal to the agent. We
evaluate our method on a new set of environments and provide intuitive
interpretation of our results.
| Sai Praveen Bangaru, JS Suhas and Balaraman Ravindran | null | 1611.09894 | null | null |
Autism Spectrum Disorder Classification using Graph Kernels on
Multidimensional Time Series | stat.ML cs.LG | We present an approach to model time series data from resting state fMRI for
autism spectrum disorder (ASD) severity classification. We propose to adopt
kernel machines and employ graph kernels that define a kernel dot product
between two graphs. This enables us to take advantage of spatio-temporal
information to capture the dynamics of the brain network, as opposed to
aggregating them in the spatial or temporal dimension. In addition to the
conventional similarity graphs, we explore the use of L1 graph using sparse
coding, and the persistent homology of time delay embeddings, in the proposed
pipeline for ASD classification. In our experiments on two datasets from the
ABIDE collection, we demonstrate a consistent and significant advantage in
using graph kernels over traditional linear or nonlinear kernels for a variety
of time series features.
| Rushil Anirudh, Jayaraman J. Thiagarajan, Irene Kim, Wolfgang Polonik | null | 1611.09897 | null | null |
C-RNN-GAN: Continuous recurrent neural networks with adversarial
training | cs.AI cs.LG | Generative adversarial networks have been proposed as a way of efficiently
training deep generative neural networks. We propose a generative adversarial
model that works on continuous sequential data, and apply it by training it on
a collection of classical music. We conclude that it generates music that
sounds better and better as the model is trained, report statistics on
generated music, and let the reader judge the quality by downloading the
generated songs.
| Olof Mogren | null | 1611.09904 | null | null |
Capacity and Trainability in Recurrent Neural Networks | stat.ML cs.AI cs.LG cs.NE | Two potential bottlenecks on the expressiveness of recurrent neural networks
(RNNs) are their ability to store information about the task in their
parameters, and to store information about the input history in their units. We
show experimentally that all common RNN architectures achieve nearly the same
per-task and per-unit capacity bounds with careful training, for a variety of
tasks and stacking depths. They can store an amount of task information which
is linear in the number of parameters, and is approximately 5 bits per
parameter. They can additionally store approximately one real number from their
input history per hidden unit. We further find that for several tasks it is the
per-task parameter capacity bound that determines performance. These results
suggest that many previous results comparing RNN architectures are driven
primarily by differences in training effectiveness, rather than differences in
capacity. Supporting this observation, we compare training difficulty for
several architectures, and show that vanilla RNNs are far more difficult to
train, yet have slightly higher capacity. Finally, we propose two novel RNN
architectures, one of which is easier to train than the LSTM or GRU for deeply
stacked architectures.
| Jasmine Collins, Jascha Sohl-Dickstein and David Sussillo | null | 1611.09913 | null | null |
Less is More: Learning Prominent and Diverse Topics for Data
Summarization | cs.LG cs.CL cs.IR | Statistical topic models efficiently facilitate the exploration of
large-scale data sets. Many models have been developed and broadly used to
summarize the semantic structure in news, science, social media, and digital
humanities. However, a common and practical objective in data exploration tasks
is not to enumerate all existing topics, but to quickly extract representative
ones that broadly cover the content of the corpus, i.e., a few topics that
serve as a good summary of the data. Most existing topic models fit exactly the
number of topics a user specifies, which imposes an unnecessary
burden on users who have limited prior knowledge. We instead propose new
models that are able to learn fewer but more representative topics for the
purpose of data summarization. We propose a reinforced random walk that allows
prominent topics to absorb tokens from similar and smaller topics, thus
enhancing the diversity among the top topics extracted. With this reinforced
random walk as a general process embedded in classical topic models, we obtain
\textit{diverse topic models} that are able to extract the most prominent and
diverse topics from data. The inference procedures of these diverse topic
models remain as simple and efficient as the classical models. Experimental
results demonstrate that the diverse topic models not only discover topics that
better summarize the data, but also require minimal prior knowledge of the
users.
| Jian Tang, Cheng Li, Ming Zhang, and Qiaozhu Mei | null | 1611.09921 | null | null |
Neural Combinatorial Optimization with Reinforcement Learning | cs.AI cs.LG stat.ML | This paper presents a framework to tackle combinatorial optimization problems
using neural networks and reinforcement learning. We focus on the traveling
salesman problem (TSP) and train a recurrent network that, given a set of city
coordinates, predicts a distribution over different city permutations. Using
negative tour length as the reward signal, we optimize the parameters of the
recurrent network using a policy gradient method. We compare learning the
network parameters on a set of training graphs against learning them on
individual test graphs. Despite the computational expense, without much
engineering and heuristic design, Neural Combinatorial Optimization achieves
close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied
to the KnapSack, another NP-hard problem, the same method obtains optimal
solutions for instances with up to 200 items.
| Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio | null | 1611.09940 | null | null |
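The reward signal above is simply the negative length of the sampled tour, which is straightforward to reproduce. In the sketch below, random permutations stand in for samples from the pointer-network policy and a batch-mean baseline replaces the paper's critic; only the reward and advantage computation is shown, not the policy or its gradient update.

```python
import numpy as np

def tour_length(coords, perm):
    """Total length of the closed tour visiting the 2-D cities in order `perm`."""
    path = coords[perm]
    steps = np.diff(np.vstack([path, path[:1]]), axis=0)   # wrap back to the start
    return float(np.sum(np.linalg.norm(steps, axis=1)))

rng = np.random.default_rng(0)
coords = rng.random((10, 2))                        # 10 random cities in the unit square
tours = [rng.permutation(10) for _ in range(16)]    # stand-ins for policy samples
rewards = np.array([-tour_length(coords, p) for p in tours])
advantages = rewards - rewards.mean()               # REINFORCE weight for each sample

print(rewards.round(3))
print(advantages.round(3))
```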
Low-dimensional Data Embedding via Robust Ranking | cs.AI cs.LG stat.ML | We describe a new method called t-ETE for finding a low-dimensional embedding
of a set of objects in Euclidean space. We formulate the embedding problem as a
joint ranking problem over a set of triplets, where each triplet captures the
relative similarities between three objects in the set. By exploiting recent
advances in robust ranking, t-ETE produces high-quality embeddings even in the
presence of a significant amount of noise and better preserves local scale than
known methods, such as t-STE and t-SNE. In particular, our method produces
significantly better results than t-SNE on signature datasets while also being
faster to compute.
| Ehsan Amid, Nikos Vlassis, Manfred K. Warmuth | null | 1611.09957 | null | null |
Machine Learning for Dental Image Analysis | stat.ML cs.CV cs.LG | In order to study the application of artificial intelligence (AI) to dental
imaging, we applied AI technology to classify a set of panoramic radiographs
using (a) a convolutional neural network (CNN) which is a form of an artificial
neural network (ANN), (b) representative image cognition algorithms that
implement scale-invariant feature transform (SIFT), and (c) histogram of
oriented gradients (HOG).
| Young-jun Yu | null | 1611.09958 | null | null |
Fast Supervised Discrete Hashing and its Analysis | cs.CV cs.LG cs.MM | In this paper, we propose a learning-based supervised discrete hashing
method. Binary hashing is widely used for large-scale image retrieval as well
as video and document searches because the compact representation of binary
code is essential for data storage and reasonable for query searches using
bit-operations. The recently proposed Supervised Discrete Hashing (SDH)
efficiently solves mixed-integer programming problems by alternating
optimization and the Discrete Cyclic Coordinate descent (DCC) method. We show
that the SDH model can be simplified without performance degradation based on
some preliminary experiments; we call the approximate model for this the "Fast
SDH" (FSDH) model. We analyze the FSDH model and provide a mathematically exact
solution for it. In contrast to SDH, our model does not require an alternating
optimization algorithm and does not depend on initial values. FSDH is also
easier to implement than Iterative Quantization (ITQ). Experimental results
involving a large-scale database showed that FSDH outperforms conventional SDH
in terms of precision, recall, and computation time.
| Gou Koutaki, Keiichiro Shirai, Mitsuru Ambai | null | 1611.10017 | null | null |
Active Deep Learning for Classification of Hyperspectral Images | cs.LG cs.CV stat.ML | Active deep learning classification of hyperspectral images is considered in
this paper. Deep learning has achieved success in many applications, but
good-quality labeled samples are needed to construct a deep learning network.
It is expensive to obtain good labeled samples in hyperspectral images for remote
sensing applications. An active learning algorithm based on a weighted
incremental dictionary learning is proposed for such applications. The proposed
algorithm selects training samples that maximize two selection criteria, namely
representativeness and uncertainty. This algorithm trains a deep network
efficiently by actively selecting training samples at each iteration. The
proposed algorithm is applied for the classification of hyperspectral images,
and compared with other classification algorithms employing active learning. It
is shown that the proposed algorithm is efficient and effective in classifying
hyperspectral images.
| Peng Liu, Hui Zhang, and Kie B. Eom | null | 1611.10031 | null | null |
Subsampled online matrix factorization with convergence guarantees | math.OC cs.LG stat.ML | We present a matrix factorization algorithm that scales to input matrices
that are large in both dimensions (i.e., that contain more than 1 TB of data).
The algorithm streams the matrix columns while subsampling them, resulting in
low complexity per iteration and a reasonable memory footprint. In contrast to
previous online matrix factorization methods, our approach relies on
low-dimensional statistics from past iterates to control the extra variance
introduced by subsampling. We present a convergence analysis that guarantees
that we reach a stationary point of the problem. Large speed-ups can be obtained
compared to previous online algorithms that do not perform subsampling, thanks
to the feature redundancy that often exists in high-dimensional settings.
| Arthur Mensch (PARIETAL), Julien Mairal (LEAR), Ga\"el Varoquaux
(PARIETAL), Bertrand Thirion (PARIETAL) | null | 1611.10041 | null | null |
Performance Tuning of Hadoop MapReduce: A Noisy Gradient Approach | cs.DC cs.LG | Hadoop MapReduce is a framework for distributed storage and processing of
large datasets that is quite popular in big data analytics. It has various
configuration parameters (knobs) which play an important role in deciding the
performance i.e., the execution time of a given big data processing job.
Default values of these parameters do not always result in good performance and
hence it is important to tune them. However, there is inherent difficulty in
tuning the parameters due to two important reasons - firstly, the parameter
search space is large and secondly, there are cross-parameter interactions.
Hence, there is a need for a dimensionality-free method which can automatically
tune the configuration parameters by taking into account the cross-parameter
dependencies. In this paper, we propose a novel Hadoop parameter tuning
methodology, based on a noisy gradient algorithm known as the simultaneous
perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the
parameters by directly observing the performance of the Hadoop MapReduce
system. The approach followed is independent of parameter dimensions and
requires only $2$ observations per iteration while tuning. We demonstrate the
effectiveness of our methodology in achieving good performance on popular
Hadoop benchmarks namely \emph{Grep}, \emph{Bigram}, \emph{Inverted Index},
\emph{Word Co-occurrence} and \emph{Terasort}. Our method, when tested on a
25-node Hadoop cluster, shows a 66\% decrease in the execution time of Hadoop jobs on
average, compared to the default configuration. Further, we also observe a
reduction of 45\% in execution times compared to prior methods.
| Sandeep Kumar, Sindhu Padakandla, Chandrashekar L, Priyank Parihar, K
Gopinath, Shalabh Bhatnagar | 10.1109/CLOUD.2017.55 | 1611.10052 | null | null |
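SPSA itself is compact: both loss measurements perturb every knob at once along a random ±1 direction, so the per-iteration cost does not grow with the number of parameters being tuned. The sketch below optimizes a noisy quadratic that merely stands in for job execution time as a function of normalized configuration knobs; the constant gains `a` and `c` are a simplification of the decaying gain sequences usually paired with SPSA.

```python
import numpy as np

def spsa_step(theta, loss_fn, a=0.05, c=0.1, rng=None):
    """One SPSA update: a two-measurement gradient estimate along a random
    simultaneous +/-1 perturbation of all coordinates."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    g_hat = (loss_fn(theta + c * delta) - loss_fn(theta - c * delta)) / (2 * c * delta)
    return theta - a * g_hat

rng = np.random.default_rng(0)
target = np.array([0.3, -0.5, 0.8, 0.1])

def noisy_runtime(x):
    # stand-in for measuring a Hadoop job's execution time at configuration x
    return float(np.sum((x - target) ** 2)) + 0.01 * rng.standard_normal()

theta = np.zeros(4)
for _ in range(300):
    theta = spsa_step(theta, noisy_runtime, rng=rng)
print(theta.round(2))    # drifts toward `target` using only 2 measurements per step
```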
Effective Quantization Methods for Recurrent Neural Networks | cs.LG cs.CV | Reducing bit-widths of weights, activations, and gradients of a Neural
Network can shrink its storage size and memory usage, and also allow for faster
training and inference by exploiting bitwise operations. However, previous
attempts for quantization of RNNs show considerable performance degradation
when using low bit-width weights and activations. In this paper, we propose
methods to quantize the structure of gates and interlinks in LSTM and GRU
cells. In addition, we propose balanced quantization methods for weights to
further reduce performance degradation. Experiments on the PTB and IMDB datasets
confirm the effectiveness of our methods, as the performance of our models matches or
surpasses the previous state of the art for quantized RNNs.
| Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou,
Yuheng Zou | null | 1611.10176 | null | null |
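One way to read "balanced quantization" is that each quantization level should be used by roughly the same number of weights. The sketch below implements that reading with percentile bin edges and per-bin medians; it illustrates the balancing idea only, is not necessarily the paper's exact procedure, and the 2-bit setting and function name are arbitrary choices.

```python
import numpy as np

def balanced_quantize(w, bits=2):
    """Quantize weights so each of the 2**bits levels is used by roughly the
    same number of weights: percentile bin edges, per-bin median as the level."""
    n_levels = 2 ** bits
    edges = np.percentile(w, np.linspace(0, 100, n_levels + 1))
    idx = np.clip(np.searchsorted(edges, w, side="right") - 1, 0, n_levels - 1)
    levels = np.array([np.median(w[idx == k]) for k in range(n_levels)])
    return levels[idx], levels

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000) * 0.1            # toy "recurrent weight matrix"
wq, levels = balanced_quantize(w, bits=2)
print(levels.round(4))
print(np.bincount(np.searchsorted(levels, wq)))  # near-equal usage of each level
```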
Unit Commitment using Nearest Neighbor as a Short-Term Proxy | cs.LG cs.AI | We devise the Unit Commitment Nearest Neighbor (UCNN) algorithm to be used as
a proxy for quickly approximating outcomes of short-term decisions, to make
tractable hierarchical long-term assessment and planning for large power
systems. Experimental results on updated versions of IEEE-RTS79 and IEEE-RTS96
show high accuracy, measured on operational cost, achieved in runtimes that are
lower by several orders of magnitude than the traditional approach.
| Gal Dalal, Elad Gilboa, Shie Mannor, Louis Wehenkel | null | 1611.10215 | null | null |
Behavior-Based Machine-Learning: A Hybrid Approach for Predicting Human
Decision Making | cs.LG cs.GT | A large body of work in behavioral fields attempts to develop models that
describe the way people, as opposed to rational agents, make decisions. A
recent Choice Prediction Competition (2015) challenged researchers to suggest a
model that captures 14 classic choice biases and can predict human decisions
under risk and ambiguity. The competition focused on simple decision problems,
in which human subjects were asked to repeatedly choose between two gamble
options.
In this paper we present our approach for predicting human decision behavior:
we suggest to use machine learning algorithms with features that are based on
well-established behavioral theories. The basic idea is that these
psychological features are essential for the representation of the data and are
important for the success of the learning process. We implement a vanilla model
in which we train SVM models using behavioral features that rely on the
psychological properties underlying the competition baseline model. We show
that this basic model captures the 14 choice biases and outperforms all the
other learning-based models in the competition. The preliminary results suggest
that such hybrid models can significantly improve the prediction of human
decision making, and are a promising direction for future research.
| Gali Noti, Effi Levi, Yoav Kolumbus and Amit Daniely | null | 1611.10228 | null | null |
SeDMiD for Confusion Detection: Uncovering Mind State from Time Series
Brain Wave Data | q-bio.NC cs.AI cs.LG | Understanding how the brain functions has been an intriguing topic for years.
With the recent progress on collecting massive data and developing advanced
technology, people have become interested in addressing the challenge of
decoding brain wave data into meaningful mind states, with many machine
learning models and algorithms being revisited and developed, especially the
ones that handle time series data because of the nature of brain waves.
However, many of these time series models, like HMM with hidden state in
discrete space or State Space Model with hidden state in continuous space, only
work with one source of data and cannot handle different sources of information
simultaneously. In this paper, we propose an extension of State Space Model to
work with different sources of information together with its learning and
inference algorithms. We apply this model to decode the mind state of students
during lectures based on their brain waves and achieve significantly better
results compared to traditional methods.
| Jingkang Yang, Haohan Wang, Jun Zhu, Eric P. Xing | null | 1611.10252 | null | null |
Reliably Learning the ReLU in Polynomial Time | cs.LG cs.CC stat.ML | We give the first dimension-efficient algorithms for learning Rectified
Linear Units (ReLUs), which are functions of the form $\mathbf{x} \mapsto
\max(0, \mathbf{w} \cdot \mathbf{x})$ with $\mathbf{w} \in \mathbb{S}^{n-1}$.
Our algorithm works in the challenging Reliable Agnostic learning model of
Kalai, Kanade, and Mansour (2009) where the learner is given access to a
distribution $\cal{D}$ on labeled examples but the labeling may be arbitrary.
We construct a hypothesis that simultaneously minimizes the false-positive rate
and the loss on inputs given positive labels by $\cal{D}$, for any convex,
bounded, and Lipschitz loss function.
The algorithm runs in polynomial-time (in $n$) with respect to any
distribution on $\mathbb{S}^{n-1}$ (the unit sphere in $n$ dimensions) and for
any error parameter $\epsilon = \Omega(1/\log n)$ (this yields a PTAS for a
question raised by F. Bach on the complexity of maximizing ReLUs). These
results are in contrast to known efficient algorithms for reliably learning
linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong
assumptions are required on the marginal distribution. We can compose our
results to obtain the first set of efficient algorithms for learning
constant-depth networks of ReLUs.
Our techniques combine kernel methods and polynomial approximations with a
"dual-loss" approach to convex programming. As a byproduct we obtain a number
of applications including the first set of efficient algorithms for "convex
piecewise-linear fitting" and the first efficient algorithms for noisy
polynomial reconstruction of low-weight polynomials on the unit sphere.
| Surbhi Goel, Varun Kanade, Adam Klivans, Justin Thaler | null | 1611.10258 | null | null |
Weighted bandits or: How bandits learn distorted values that are not
expected | cs.LG stat.ML | Motivated by models of human decision making proposed to explain commonly
observed deviations from conventional expected value preferences, we formulate
two stochastic multi-armed bandit problems with distorted probabilities on the
cost distributions: the classic $K$-armed bandit and the linearly parameterized
bandit. In both settings, we propose algorithms that are inspired by Upper
Confidence Bound (UCB), incorporate cost distortions, and exhibit sublinear
regret assuming H\"{o}lder continuous weight distortion functions. For the
$K$-armed setting, we show that the algorithm, called W-UCB, achieves
problem-dependent regret $O(L^2 M^2 \log n/ \Delta^{\frac{2}{\alpha}-1})$,
where $n$ is the number of plays, $\Delta$ is the gap in distorted expected
value between the best and next best arm, $L$ and $\alpha$ are the H\"{o}lder
constants for the distortion function, and $M$ is an upper bound on costs, and
a problem-independent regret bound of
$O((KL^2M^2)^{\alpha/2}n^{(2-\alpha)/2})$. We also present a matching lower
bound on the regret, showing that the regret of W-UCB is essentially
unimprovable over the class of H\"{o}lder-continuous weight distortions. For
the linearly parameterized setting, we develop a new algorithm, a variant of
the Optimism in the Face of Uncertainty Linear bandit (OFUL) algorithm called
WOFUL (Weight-distorted OFUL), and show that it has regret $O(d\sqrt{n} \;
\mbox{polylog}(n))$ with high probability, for sub-Gaussian cost distributions.
Finally, numerical examples demonstrate the advantages resulting from using
distortion-aware learning algorithms.
| Aditya Gopalan, L.A. Prashanth, Michael Fu and Steve Marcus | null | 1611.10283 | null | null |
Influential Node Detection in Implicit Social Networks using Multi-task
Gaussian Copula Models | cs.SI cs.LG stat.ML | Influential node detection is a central research topic in social network
analysis. Many existing methods rely on the assumption that the network
structure is completely known \textit{a priori}. However, in many applications,
network structure is unavailable to explain the underlying information
diffusion phenomenon. To address the challenge of information diffusion
analysis with incomplete knowledge of network structure, we develop a
multi-task low rank linear influence model. By exploiting the relationships
between contagions, our approach can simultaneously predict the volume (i.e.
time series prediction) for each contagion (or topic) and automatically
identify the most influential nodes for each contagion. The proposed model is
validated using synthetic data and an ISIS twitter dataset. In addition to
improving the volume prediction performance significantly, we show that the
proposed approach can reliably infer the most influential users for specific
contagions.
| Qunwei Li, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Zhenliang
Zhang, Pramod K. Varshney | null | 1611.10305 | null | null |
The observer-assisted method for adjusting hyper-parameters in deep
learning algorithms | cs.LG cs.AI | This paper presents a concept of a novel method for adjusting
hyper-parameters in Deep Learning (DL) algorithms. An external agent-observer
monitors a performance of a selected Deep Learning algorithm. The observer
learns to model the DL algorithm using a series of random experiments.
Consequently, it may be used for predicting a response of the DL algorithm in
terms of a selected quality measurement to a set of hyper-parameters. This
allows to construct an ensemble composed of a series of evaluators which
constitute an observer-assisted architecture. The architecture may be used to
gradually iterate towards the best achievable quality score in tiny steps
governed by a unit of progress. The algorithm is stopped when the maximum
number of steps is reached or no further progress is made.
| Maciej Wielgosz | null | 1611.10328 | null | null |
SLA Violation Prediction In Cloud Computing: A Machine Learning
Perspective | cs.DC cs.LG | Service level agreement (SLA) is an essential part of cloud systems to ensure
maximum availability of services for customers. When an SLA is violated, the
provider has to pay penalties. In this paper, we explore two machine learning
models: Naive Bayes and Random Forest Classifiers to predict SLA violations.
Since SLA violations are a rare event in the real world (~0.2 %), the
classification task becomes more challenging. In order to overcome these
challenges, we use several re-sampling methods. We find that random forests
with SMOTE-ENN re-sampling have the best performance among the methods considered, with
an accuracy of 99.88% and an F_1 score of 0.9980.
| Reyhane Askari Hemmat, Abdelhakim Hafid | null | 1611.10338 | null | null |
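The best-performing pipeline reported above is straightforward to assemble with scikit-learn and imbalanced-learn. The sketch below uses a synthetic data set that only mimics the roughly 0.2% violation rate, keeps default hyper-parameters, and therefore will not reproduce the paper's exact scores.

```python
from imblearn.combine import SMOTEENN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the SLA trace: ~0.2% of samples are violations.
X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.998, 0.002], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Re-sample only the training split: SMOTE oversamples violations,
# then ENN removes noisy majority samples near the class boundary.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_res, y_res)
print("F1 on the untouched test split:", round(f1_score(y_te, clf.predict(X_te)), 3))
```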
Joint Causal Inference from Multiple Contexts | cs.LG cs.AI stat.ML | The gold standard for discovering causal relations is by means of
experimentation. Over the last decades, alternative methods have been proposed
that can infer causal relations between variables from certain statistical
patterns in purely observational data. We introduce Joint Causal Inference
(JCI), a novel approach to causal discovery from multiple data sets from
different contexts that elegantly unifies both approaches. JCI is a causal
modeling framework rather than a specific algorithm, and it can be implemented
using any causal discovery algorithm that can take into account certain
background knowledge. JCI can deal with different types of interventions (e.g.,
perfect, imperfect, stochastic, etc.) in a unified fashion, and does not
require knowledge of intervention targets or types in case of interventional
data. We explain how several well-known causal discovery algorithms can be seen
as addressing special cases of the JCI framework, and we also propose novel
implementations that extend existing causal discovery methods for purely
observational data to the JCI setting. We evaluate different JCI
implementations on synthetic data and on flow cytometry protein expression data
and conclude that JCI implementations can considerably outperform
state-of-the-art causal discovery algorithms.
| Joris M. Mooij, Sara Magliacane, Tom Claassen | null | 1611.10351 | null | null |
Semi-supervised Kernel Metric Learning Using Relative Comparisons | cs.LG stat.ML | We consider the problem of metric learning subject to a set of constraints on
relative-distance comparisons between the data items. Such constraints are
meant to reflect side-information that is not expressed directly in the feature
vectors of the data items. The relative-distance constraints used in this work
are particularly effective in expressing structures at a finer level of detail
than must-link (ML) and cannot-link (CL) constraints, which are most commonly
used for semi-supervised clustering. Relative-distance constraints are thus
useful in settings where providing an ML or a CL constraint is difficult
because the granularity of the true clustering is unknown.
Our main contribution is an efficient algorithm for learning a kernel matrix
using the log determinant divergence --- a variant of the Bregman divergence
--- subject to a set of relative-distance constraints. The learned kernel
matrix can then be employed by many different kernel methods in a wide range of
applications. In our experimental evaluations, we consider a semi-supervised
clustering setting and show empirically that kernels found by our algorithm
yield clusterings of higher quality than existing approaches that either use
ML/CL constraints or a different means to implement the supervision using
relative comparisons.
| Ehsan Amid, Aristides Gionis, Antti Ukkonen | null | 1612.00086 | null | null |
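For orientation, the log determinant (Burg) divergence mentioned above is commonly written as follows; this is the standard textbook form stated here as an assumption about notation (learned kernel matrix $K$, prior kernel $K_0$, size $n$), not a quotation from the paper.

```latex
% Standard LogDet (Burg) divergence between positive-definite matrices K and K_0.
D_{\mathrm{ld}}(K, K_0) \;=\; \operatorname{tr}\!\left(K K_0^{-1}\right)
  \;-\; \log\det\!\left(K K_0^{-1}\right) \;-\; n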
Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling | cs.LG stat.ML | We study the problem of recovering an incomplete $m\times n$ matrix of rank
$r$ with columns arriving online over time. This is known as the problem of
life-long matrix completion, and is widely applied in recommendation systems,
computer vision, system identification, etc. The challenge is to design
provable algorithms tolerant to a large amount of noise, with small sample
complexity. In this work, we give algorithms achieving strong guarantees under
two realistic noise models. Under bounded deterministic noise, an adversary can
add any bounded yet unstructured noise to each column. For this problem, we
present an algorithm that returns a matrix with small error, with sample
complexity almost as small as the best prior results in the noiseless case. For
sparse random noise, where the corrupted columns are sparse and drawn randomly,
we give an algorithm that exactly recovers a $\mu_0$-incoherent matrix with
probability at least $1-\delta$ with sample complexity as small as
$O\left(\mu_0rn\log (r/\delta)\right)$. This result advances the
state-of-the-art work and matches the lower bound in a worst case. We also
study the scenario where the hidden matrix lies on a mixture of subspaces and
show that the sample complexity can be even smaller. Our proposed algorithms
perform well experimentally in both synthetic and real-world datasets.
| Maria-Florina Balcan and Hongyang Zhang | null | 1612.001 | null | null |
When to Reset Your Keys: Optimal Timing of Security Updates via Learning | cs.LG cs.AI cs.CR | Cybersecurity is increasingly threatened by advanced and persistent attacks.
As these attacks are often designed to disable a system (or a critical
resource, e.g., a user account) repeatedly, it is crucial for the defender to
keep updating its security measures to strike a balance between the risk of
being compromised and the cost of security updates. Moreover, these decisions
often need to be made with limited and delayed feedback due to the stealthy
nature of advanced attacks. In addition to targeted attacks, such an optimal
timing policy under incomplete information has broad applications in
cybersecurity. Examples include key rotation, password change, application of
patches, and virtual machine refreshing. However, rigorous studies of optimal
timing are rare. Further, existing solutions typically rely on a pre-defined
attack model that is known to the defender, which is often not the case in
practice. In this work, we make an initial effort towards achieving optimal
timing of security updates in the face of unknown stealthy attacks. We consider
a variant of the influential FlipIt game model with asymmetric feedback and
unknown attack time distribution, which provides a general model for consecutive
security updates. The defender's problem is then modeled as a time associative
bandit problem with dependent arms. We derive upper confidence bound based
learning policies that achieve low regret compared with optimal periodic
defense strategies that can only be derived when attack time distributions are
known.
| Zizhan Zheng, Ness B. Shroff, Prasant Mohapatra | null | 1612.00108 | null | null |
A New Method for Classification of Datasets for Data Mining | cs.LG cs.DB stat.ML | Decision tree is an important method for both induction research and data
mining, and is mainly used for model classification and prediction. The ID3
algorithm is the most widely used decision tree algorithm to date. In
this paper, ID3's shortcoming of being inclined to choose attributes with many
values is discussed, and a new decision tree algorithm that is an improved
version of ID3 is proposed. In the proposed algorithm, attributes are divided
into groups and then we apply selection measure 5 to these groups. If the
information gain is not good, the attribute values are divided into groups
again. These steps are repeated until a good classification/misclassification
ratio is obtained. The proposed algorithm classifies the data sets more
accurately and efficiently.
| Singh Vijendra, Hemjyotsana Parashar and Nisha Vasudeva | null | 1612.00151 | null | null |
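To make the selection criterion concrete, the entropy-based information gain that ID3-style algorithms maximize can be computed as in the sketch below; the toy attribute and labels are illustrative placeholders, and the grouping step described in the abstract is not reproduced here.

```python
# Hedged sketch of the information-gain computation underlying ID3-style
# attribute selection: Gain(S, A) = H(S) - sum_v |S_v|/|S| * H(S_v).
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute_index):
    """Reduction in entropy from partitioning by the chosen attribute."""
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute_index], []).append(label)
    remainder = sum(len(p) / len(labels) * entropy(p) for p in partitions.values())
    return entropy(labels) - remainder

# Toy example: attribute 0 ('outlook') against play / don't-play labels.
rows = [("sunny",), ("sunny",), ("overcast",), ("rain",)]
labels = ["no", "no", "yes", "yes"]
print(information_gain(rows, labels, 0))   # 1.0 for this toy split
```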
Adversarial Images for Variational Autoencoders | cs.NE cs.CV cs.LG | We investigate adversarial attacks for autoencoders. We propose a procedure
that distorts the input image to mislead the autoencoder in reconstructing a
completely different target image. We attack the internal latent
representations, attempting to make the adversarial input produce an internal
representation as similar as possible to the target's. We find that
autoencoders are much more robust to the attack than classifiers: while some
examples have tolerably small input distortion, and reasonable similarity to
the target image, there is a quasi-linear trade-off between those aims. We
report results on MNIST and SVHN datasets, and also test regular deterministic
autoencoders, reaching similar conclusions in all cases. Finally, we show that
the usual adversarial attack for classifiers, while being much easier, also
exhibits a direct proportionality between distortion of the input and misdirection
of the output. That proportionality, however, is hidden by the normalization of
the output, which maps a linear layer into non-linear probabilities.
| Pedro Tabacof, Julia Tavares, Eduardo Valle | null | 1612.00155 | null | null |
Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using
Householder Reflections | cs.LG | The problem of learning long-term dependencies in sequences using Recurrent
Neural Networks (RNNs) is still a major challenge. Recent methods have been
suggested to solve this problem by constraining the transition matrix to be
unitary during training which ensures that its norm is equal to one and
prevents exploding gradients. These methods either have limited expressiveness
or scale poorly with the size of the network when compared with the simple RNN
case, especially when using stochastic gradient descent with a small mini-batch
size. Our contributions are as follows: we first show that constraining the
transition matrix to be unitary is a special case of an orthogonal constraint.
Then we present a new parametrisation of the transition matrix which allows
efficient training of an RNN while ensuring that the matrix is always
orthogonal. Our results show that the orthogonal constraint on the transition
matrix applied through our parametrisation gives similar benefits to the
unitary constraint, without the time complexity limitations.
| Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, James Bailey | null | 1612.00188 | null | null |
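A minimal numerical illustration of the underlying idea, assuming nothing beyond NumPy: a product of Householder reflections $H(u) = I - 2uu^\top/\|u\|^2$ is always orthogonal, which is the property such a parametrisation exploits. The sizes and number of reflection vectors below are arbitrary, and no training is shown.

```python
# Hedged sketch: an orthogonal matrix built as a product of Householder reflections.
import numpy as np

def householder(u):
    """H(u) = I - 2 u u^T / ||u||^2, a reflection and hence orthogonal."""
    u = u.reshape(-1, 1)
    return np.eye(len(u)) - 2.0 * (u @ u.T) / (u.T @ u)

rng = np.random.default_rng(0)
n, k = 8, 4                       # hidden size and number of reflection vectors (illustrative)
W = np.eye(n)
for _ in range(k):
    W = W @ householder(rng.standard_normal(n))

print(np.allclose(W.T @ W, np.eye(n)))   # True: W is orthogonal by construction
```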
Learning molecular energies using localized graph kernels | physics.comp-ph cs.LG stat.ML | Recent machine learning methods make it possible to model potential energy of
atomic configurations with chemical-level accuracy (as calculated from
ab-initio calculations) and at speeds suitable for molecular dynamics
simulation. Best performance is achieved when the known physical constraints
are encoded in the machine learning models. For example, the atomic energy is
invariant under global translations and rotations; it is also invariant to
permutations of same-species atoms. Although simple to state, these symmetries
are complicated to encode into machine learning algorithms. In this paper, we
present a machine learning approach based on graph theory that naturally
incorporates translation, rotation, and permutation symmetries. Specifically,
we use a random walk graph kernel to measure the similarity of two adjacency
matrices, each of which represents a local atomic environment. This Graph
Approximated Energy (GRAPE) approach is flexible and admits many possible
extensions. We benchmark a simple version of GRAPE by predicting atomization
energies on a standard dataset of organic molecules.
| G. Ferr\'e, T. Haut and K. Barros | 10.1063/1.4978623 | 1612.00193 | null | null |
Training Bit Fully Convolutional Network for Fast Semantic Segmentation | cs.CV cs.LG | Fully convolutional neural networks give accurate, per-pixel prediction for
input images and have applications like semantic segmentation. However, a
typical FCN usually requires lots of floating point computation and large
run-time memory, which effectively limits its usability. We propose a method to
train Bit Fully Convolution Network (BFCN), a fully convolutional neural
network that has low bit-width weights and activations. Because most of its
computation-intensive convolutions are accomplished between low bit-width
numbers, a BFCN can be accelerated by an efficient bit-convolution
implementation. On CPU, the dot product operation between two bit vectors can
be reduced to bitwise operations and popcounts, which can offer much higher
throughput than 32-bit multiplications and additions.
To validate the effectiveness of BFCN, we conduct experiments on the PASCAL
VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights
and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1%
of the resources on FPGA, can achieve performance comparable to the 32-bit
counterpart.
| He Wen, Shuchang Zhou, Zhe Liang, Yuxiang Zhang, Dieqiao Feng, Xinyu
Zhou, Cong Yao | null | 1612.00212 | null | null |
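The CPU speedup claim rests on the observation that a dot product between two vectors with entries in {-1, +1} reduces to an XNOR followed by a popcount once the vectors are packed into bit masks. The sketch below illustrates only that identity; it is not the paper's kernel implementation, and the vector length and packing are illustrative.

```python
# Hedged illustration of the bit-convolution trick: XNOR + popcount equals the
# dot product of two {-1, +1} vectors stored as bit masks (bit 1 -> +1).
import numpy as np

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    matches = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")  # XNOR, then popcount
    return 2 * matches - n

rng = np.random.default_rng(0)
n = 64
a = rng.integers(0, 2, n)
b = rng.integers(0, 2, n)
a_bits = int("".join(map(str, a)), 2)
b_bits = int("".join(map(str, b)), 2)

assert binary_dot(a_bits, b_bits, n) == int(np.dot(2 * a - 1, 2 * b - 1))
```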
The Coconut Model with Heterogeneous Strategies and Learning | q-fin.EC cs.LG nlin.AO | In this paper, we develop an agent-based version of the Diamond search
equilibrium model - also called Coconut Model. In this model, agents are faced
with production decisions that have to be evaluated based on their expectations
about the future utility of the produced entity which in turn depends on the
global production level via a trading mechanism. While the original dynamical
systems formulation assumes an infinite number of homogeneously adapting agents
obeying strong rationality conditions, the agent-based setting allows us to
discuss the effects of heterogeneous and adaptive expectations and enables the
analysis of non-equilibrium trajectories. Starting from a baseline
implementation that matches the asymptotic behavior of the original model, we
show how agent heterogeneity can be accounted for in the aggregate dynamical
equations. We then show that when agents adapt their strategies by a simple
temporal difference learning scheme, the system converges to one of the fixed
points of the original system. Systematic simulations reveal that this is the
only stable equilibrium solution.
| Sven Banisch and Eckehard Olbrich | null | 1612.00221 | null | null |
Interaction Networks for Learning about Objects, Relations and Physics | cs.AI cs.LG | Reasoning about objects, relations, and physics is central to human
intelligence, and a key goal of artificial intelligence. Here we introduce the
interaction network, a model which can reason about how objects in complex
systems interact, supporting dynamical predictions, as well as inferences about
the abstract properties of the system. Our model takes graphs as input,
performs object- and relation-centric reasoning in a way that is analogous to a
simulation, and is implemented using deep neural networks. We evaluate its
ability to reason about several challenging physical domains: n-body problems,
rigid-body collision, and non-rigid dynamics. Our results show it can be
trained to accurately simulate the physical trajectories of dozens of objects
over thousands of time steps, estimate abstract quantities such as energy, and
generalize automatically to systems with different numbers and configurations
of objects and relations. Our interaction network implementation is the first
general-purpose, learnable physics engine, and a powerful general framework for
reasoning about objects and relations in a wide variety of complex real-world
domains.
| Peter W. Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, Koray
Kavukcuoglu | null | 1612.00222 | null | null |
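The object- and relation-centric computation can be pictured as follows: a relation model maps each (sender, receiver, relation attribute) triple to an "effect", effects are summed per receiver, and an object model updates each object's state. The sketch below is only an illustration of that data flow; it uses random linear maps in place of trained MLPs, and all sizes are invented placeholders.

```python
# Hedged sketch of one interaction-network-style step (data flow only, untrained).
import numpy as np

rng = np.random.default_rng(0)
D_s, D_r, D_e = 4, 1, 8            # object state, relation attribute, effect sizes (illustrative)

def mlp(x, w):                     # stand-in for a learned MLP
    return np.tanh(x @ w)

W_rel = rng.standard_normal((2 * D_s + D_r, D_e))
W_obj = rng.standard_normal((D_s + D_e, D_s))

def interaction_step(objects, senders, receivers, rel_attrs):
    # Relation-centric reasoning: one effect per edge.
    edge_in = np.concatenate([objects[senders], objects[receivers], rel_attrs], axis=1)
    effects = mlp(edge_in, W_rel)
    # Aggregate effects arriving at each receiver object.
    agg = np.zeros((len(objects), D_e))
    np.add.at(agg, receivers, effects)
    # Object-centric update from state plus aggregated effects.
    return mlp(np.concatenate([objects, agg], axis=1), W_obj)

objects = rng.standard_normal((3, D_s))                # three objects
senders, receivers = np.array([0, 1]), np.array([1, 2])
rel_attrs = rng.standard_normal((2, D_r))
print(interaction_step(objects, senders, receivers, rel_attrs).shape)   # (3, 4)
```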
A Theoretical Framework for Robustness of (Deep) Classifiers against
Adversarial Examples | cs.LG cs.CR cs.CV | Most machine learning classifiers, including deep neural networks, are
vulnerable to adversarial examples. Such inputs are typically generated by
adding small but purposeful modifications that lead to incorrect outputs while
remaining imperceptible to human eyes. The goal of this paper is not to introduce a
single method, but to make theoretical steps towards fully understanding
adversarial examples. By using concepts from topology, our theoretical analysis
brings forth the key reasons why an adversarial example can fool a classifier
($f_1$) and adds its oracle ($f_2$, like human eyes) in such analysis. By
investigating the topological relationship between two (pseudo)metric spaces
corresponding to predictor $f_1$ and oracle $f_2$, we develop necessary and
sufficient conditions that can determine if $f_1$ is always robust
(strong-robust) against adversarial examples according to $f_2$. Interestingly
our theorems indicate that just one unnecessary feature can make $f_1$ not
strong-robust, and the right feature representation learning is the key to
getting a classifier that is both accurate and strong-robust.
| Beilun Wang, Ji Gao, Yanjun Qi | null | 1612.00334 | null | null |
A Compositional Object-Based Approach to Learning Physical Dynamics | cs.AI cs.LG | We present the Neural Physics Engine (NPE), a framework for learning
simulators of intuitive physics that naturally generalize across variable
object count and different scene configurations. We propose a factorization of
a physical scene into composable object-based representations and a neural
network architecture whose compositional structure factorizes object dynamics
into pairwise interactions. Like a symbolic physics engine, the NPE is endowed
with generic notions of objects and their interactions; realized as a neural
network, it can be trained via stochastic gradient descent to adapt to specific
object properties and dynamics of different worlds. We evaluate the efficacy of
our approach on simple rigid body dynamics in two-dimensional worlds. By
comparing to less structured architectures, we show that the NPE's
compositional representation of the structure in physical interactions improves
its ability to predict movement, generalize across variable object count and
different scene configurations, and infer latent properties of objects such as
mass.
| Michael B. Chang, Tomer Ullman, Antonio Torralba, Joshua B. Tenenbaum | null | 1612.00341 | null | null |
Large-scale Validation of Counterfactual Learning Methods: A Test-Bed | cs.LG cs.AI stat.ML | The ability to perform effective off-policy learning would revolutionize the
process of building better interactive systems, such as search engines and
recommendation systems for e-commerce, computational advertising and news.
Recent approaches for off-policy evaluation and learning in these settings
appear promising. With this paper, we provide real-world data and a
standardized test-bed to systematically investigate these algorithms using data
from display advertising. In particular, we consider the problem of filling a
banner ad with an aggregate of multiple products the user may want to purchase.
This paper presents our test-bed, the sanity checks we ran to ensure its
validity, and shows results comparing state-of-the-art off-policy learning
methods like doubly robust optimization, POEM, and reductions to supervised
learning using regression baselines. Our results show experimental evidence
that recent off-policy learning methods can improve upon state-of-the-art
supervised learning techniques on a large-scale real-world data set.
| Damien Lefortier, Adith Swaminathan, Xiaotao Gu, Thorsten Joachims,
Maarten de Rijke | null | 1612.00367 | null | null |
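As background for the estimators compared above, the basic inverse-propensity-score (IPS) estimate of a target policy's value from logged bandit feedback can be written as below; this is the standard form (logged contexts $x_i$, actions $a_i$, rewards $r_i$, logging propensities $p_i$), not a formula quoted from the paper.

```latex
% Standard IPS estimate of the value of policy \pi from n logged interactions.
\hat{V}_{\mathrm{IPS}}(\pi) \;=\; \frac{1}{n} \sum_{i=1}^{n}
  r_i \, \frac{\pi(a_i \mid x_i)}{p_i}
```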
Spatial Decompositions for Large Scale SVMs | stat.ML cs.LG | Although support vector machines (SVMs) are theoretically well understood,
their underlying optimization problem becomes very expensive, if, for example,
hundreds of thousands of samples and a non-linear kernel are considered.
Several approaches have been proposed in the past to address this serious
limitation. In this work we investigate a decomposition strategy that learns on
small, spatially defined data chunks. Our contributions are twofold: on the
theoretical side we establish an oracle inequality for the overall learning
method using the hinge loss, and show that the resulting rates match those
known for SVMs solving the complete optimization problem with Gaussian kernels.
On the practical side we compare our approach to learning SVMs on small,
randomly chosen chunks. Here it turns out that for comparable training times
our approach is significantly faster during testing and also reduces the test
error in most cases significantly. Furthermore, we show that our approach
easily scales up to 10 million training samples: including hyper-parameter
selection using cross validation, the entire training only takes a few hours on
a single machine. Finally, we report an experiment on 32 million training
samples. All experiments used liquidSVM (Steinwart and Thomann, 2017).
| Philipp Thomann and Ingrid Blaschzyk and Mona Meister and Ingo
Steinwart | null | 1612.00374 | null | null |
Piecewise Latent Variables for Neural Variational Text Processing | cs.CL cs.AI cs.LG cs.NE | Advances in neural variational inference have facilitated the learning of
powerful directed graphical models with continuous latent variables, such as
variational autoencoders. The hope is that such models will learn to represent
rich, multi-modal latent factors in real-world data, such as natural language
text. However, current models often assume simplistic priors on the latent
variables - such as the uni-modal Gaussian distribution - which are incapable
of representing complex latent factors efficiently. To overcome this
restriction, we propose the simple, but highly flexible, piecewise constant
distribution. This distribution has the capacity to represent an exponential
number of modes of a latent target distribution, while remaining mathematically
tractable. Our results demonstrate that incorporating this new latent
distribution into different models yields substantial improvements in natural
language processing tasks such as document modeling and natural language
generation for dialogue.
| Iulian V. Serban, Alexander G. Ororbia II, Joelle Pineau, Aaron
Courville | null | 1612.00377 | null | null |
Tuning the Scheduling of Distributed Stochastic Gradient Descent with
Bayesian Optimization | stat.ML cs.LG | We present an optimizer which uses Bayesian optimization to tune the system
parameters of distributed stochastic gradient descent (SGD). Given a specific
context, our goal is to quickly find efficient configurations which
appropriately balance the load between the available machines to minimize the
average SGD iteration time. Our experiments consider setups with over thirty
parameters. Traditional Bayesian optimization, which uses a Gaussian process as
its model, is not well suited to such high dimensional domains. To reduce
convergence time, we exploit the available structure. We design a probabilistic
model which simulates the behavior of distributed SGD and use it within
Bayesian optimization. Our model can exploit many runtime measurements for
inference per evaluation of the objective function. Our experiments show that
our resulting optimizer converges to efficient configurations within ten
iterations, and the optimized configurations outperform those found by a generic
optimizer in thirty iterations by up to 2X.
| Valentin Dalibard, Michael Schaarschmidt, Eiko Yoneki | null | 1612.00383 | null | null |
Diet2Vec: Multi-scale analysis of massive dietary data | stat.ML cs.LG stat.AP | Smart phone apps that enable users to easily track their diets have become
widespread in the last decade. This has created an opportunity to discover new
insights into obesity and weight loss by analyzing the eating habits of the
users of such apps. In this paper, we present diet2vec: an approach to modeling
latent structure in a massive database of electronic diet journals. Through an
iterative contract-and-expand process, our model learns real-valued embeddings
of users' diets, as well as embeddings for individual foods and meals. We
demonstrate the effectiveness of our approach on a real dataset of 55K users of
the popular diet-tracking app LoseIt (http://www.loseit.com/). To the
best of our knowledge, this is the largest fine-grained diet tracking study in
the history of nutrition and obesity research. Our results suggest that
diet2vec finds interpretable results at all levels, discovering intuitive
representations of foods, meals, and diets.
| Wesley Tansey and Edward W. Lowe Jr. and James G. Scott | null | 1612.00388 | null | null |
Hypervolume-based Multi-objective Bayesian Optimization with Student-t
Processes | stat.ML cs.LG | Student-$t$ processes have recently been proposed as an appealing alternative
non-parametric function prior. They feature enhanced flexibility and
predictive variance. In this work the use of Student-$t$ processes is explored
for multi-objective Bayesian optimization. In particular, an analytical
expression for the hypervolume-based probability of improvement is developed
for independent Student-$t$ process priors of the objectives. Its effectiveness
is shown on a multi-objective optimization problem which is known to be
difficult with traditional Gaussian processes.
| Joachim van der Herten and Ivo Couckuyt and Tom Dhaene | null | 1612.00393 | null | null |
Deep Variational Information Bottleneck | cs.LG cs.IT math.IT | We present a variational approximation to the information bottleneck of
Tishby et al. (1999). This variational approach allows us to parameterize the
information bottleneck model using a neural network and leverage the
reparameterization trick for efficient training. We call this method "Deep
Variational Information Bottleneck", or Deep VIB. We show that models trained
with the VIB objective outperform those that are trained with other forms of
regularization, in terms of generalization performance and robustness to
adversarial attack.
| Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy | null | 1612.0041 | null | null |
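In the notation usually adopted for this model, the variational bound that is optimized takes the following form, with encoder $p(z\mid x)$, decoder $q(y\mid z)$, variational marginal $r(z)$, and trade-off parameter $\beta$; the exact notation here is an assumption.

```latex
% Variational information bottleneck objective (to be minimized).
\mathcal{L} \;=\; \mathbb{E}_{p(x,y)}\,\mathbb{E}_{p(z \mid x)}
  \big[-\log q(y \mid z)\big]
  \;+\; \beta\, \mathbb{E}_{p(x)}\,
  \mathrm{KL}\!\big(p(z \mid x)\,\|\,r(z)\big)
```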
Generalizing Skills with Semi-Supervised Reinforcement Learning | cs.LG cs.AI cs.RO | Deep reinforcement learning (RL) can acquire complex behaviors from low-level
inputs, such as images. However, real-world applications of such methods
require generalizing to the vast variability of the real world. Deep networks
are known to achieve remarkable generalization when provided with massive
amounts of labeled data, but can we provide this breadth of experience to an RL
agent, such as a robot? The robot might continuously learn as it explores the
world around it, even while deployed. However, this learning requires access to
a reward function, which is often hard to measure in real-world domains, where
the reward could depend on, for example, unknown positions of objects or the
emotional state of the user. Conversely, it is often quite practical to provide
the agent with reward functions in a limited set of situations, such as when a
human supervisor is present or in a controlled setting. Can we make use of this
limited supervision, and still benefit from the breadth of experience an agent
might collect on its own? In this paper, we formalize this problem as
semi-supervised reinforcement learning, where the reward function can only be
evaluated in a set of "labeled" MDPs, and the agent must generalize its
behavior to the wide range of states it might encounter in a set of "unlabeled"
MDPs, by using experience from both settings. Our proposed method infers the
task objective in the unlabeled MDPs through an algorithm that resembles
inverse RL, using the agent's own prior experience in the labeled MDPs as a
kind of demonstration of optimal behavior. We evaluate our method on
challenging tasks that require control directly from images, and show that our
approach can improve the generalization of a learned deep neural network policy
by using experience for which no reward function is available. We also show
that our method outperforms direct supervised learning of the reward.
| Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine | null | 1612.00429 | null | null |
Transfer Learning Across Patient Variations with Hidden Parameter Markov
Decision Processes | stat.ML cs.AI cs.LG | Due to physiological variation, patients diagnosed with the same condition
may exhibit divergent, but related, responses to the same treatments. Hidden
Parameter Markov Decision Processes (HiP-MDPs) tackle this transfer-learning
problem by embedding these tasks into a low-dimensional space. However, the
original formulation of HiP-MDP had a critical flaw: the embedding uncertainty
was modeled independently of the agent's state uncertainty, requiring an
unnatural training procedure in which all tasks visited every part of the state
space---possible for robots that can be moved to a particular location,
impossible for human patients. We update the HiP-MDP framework and extend it to
more robustly develop personalized medicine strategies for HIV treatment.
| Taylor Killian, George Konidaris, Finale Doshi-Velez | null | 1612.00475 | null | null |
Canonical Correlation Analysis for Analyzing Sequences of Medical
Billing Codes | stat.ML cs.LG | We propose using canonical correlation analysis (CCA) to generate features
from sequences of medical billing codes. Applying this novel use of CCA to a
database of medical billing codes for patients with diverticulitis, we first
demonstrate that the CCA embeddings capture meaningful relationships among the
codes. We then generate features from these embeddings and establish their
usefulness in predicting future elective surgery for diverticulitis, an
important marker in efforts for reducing costs in healthcare.
| Corinne L. Jones, Sham M. Kakade, Lucas W. Thornblade, David R. Flum,
Abraham D. Flaxman | null | 1612.00516 | null | null |
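A minimal sketch of fitting CCA to two views with scikit-learn, for orientation only: how the two views are actually constructed from billing-code sequences is not specified here, and the arrays below are synthetic placeholders rather than the paper's data.

```python
# Hedged sketch: CCA between two synthetic "views" (e.g., code counts from two
# time windows); the embeddings would then serve as downstream features.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))       # view 1 (placeholder)
Y = rng.standard_normal((200, 30))       # view 2 (placeholder)

cca = CCA(n_components=10).fit(X, Y)
X_emb, Y_emb = cca.transform(X, Y)       # low-dimensional canonical embeddings
print(X_emb.shape, Y_emb.shape)
```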
A Noise-Filtering Approach for Cancer Drug Sensitivity Prediction | cs.LG q-bio.GN stat.ML | Accurately predicting drug responses to cancer is an important problem
hindering oncologists' efforts to find the most effective drugs to treat
cancer, which is a core goal in precision medicine. The scientific community
has focused on improving this prediction based on genomic, epigenomic, and
proteomic datasets measured in human cancer cell lines. Real-world cancer cell
lines contain noise, which degrades the performance of machine learning
algorithms. This problem is rarely addressed in the existing approaches. In
this paper, we present a noise-filtering approach that integrates techniques
from numerical linear algebra and information retrieval targeted at filtering
out noisy cancer cell lines. By filtering out noisy cancer cell lines, we can
train machine learning algorithms on better quality cancer cell lines. We
evaluate the performance of our approach and compare it with an existing
approach using the Area Under the ROC Curve (AUC) on clinical trial data. The
experimental results show that our proposed approach is stable and also yields
the highest AUC at a statistically significant level.
| Turki Turki and Zhi Wei | null | 1612.00525 | null | null |
Breast Mass Classification from Mammograms using Deep Convolutional
Neural Networks | cs.CV cs.LG | Mammography is the most widely used method to screen breast cancer. Because
of its mostly manual nature, variability in mass appearance, and low
signal-to-noise ratio, a significant number of breast masses are missed or
misdiagnosed. In this work, we present how Convolutional Neural Networks can be
used to directly classify pre-segmented breast masses in mammograms as benign
or malignant, using a combination of transfer learning, careful pre-processing
and data augmentation to overcome limited training data. We achieve
state-of-the-art results on the DDSM dataset, surpassing human performance, and
show interpretability of our model.
| Daniel L\'evy, Arzav Jain | null | 1612.00542 | null | null |
Higher Order Mutual Information Approximation for Feature Selection | cs.LG | Feature selection is a process of choosing a subset of relevant features so
that the quality of prediction models can be improved. An extensive body of
work exists on information-theoretic feature selection, based on maximizing
Mutual Information (MI) between subsets of features and class labels. The prior
methods use a lower order approximation, by treating the joint entropy as a
summation of several single variable entropies. This leads to locally optimal
selections and misses multi-way feature combinations. We present a higher order
MI based approximation technique called Higher Order Feature Selection (HOFS).
Instead of producing a single list of features, our method produces a ranked
collection of feature subsets that maximizes MI, giving better comprehension
(feature ranking) as to which features work best together when selected, due to
their underlying interdependent structure. Our experiments demonstrate that the
proposed method performs better than existing feature selection approaches
while keeping similar running times and computational complexity.
| Jilin Wu and Soumyajit Gupta and Chandrajit Bajaj | null | 1612.00554 | null | null |
Self-critical Sequence Training for Image Captioning | cs.LG cs.AI cs.CV | Recently it has been shown that policy-gradient methods for reinforcement
learning can be utilized to train deep end-to-end systems directly on
non-differentiable metrics for the task at hand. In this paper we consider the
problem of optimizing image captioning systems using reinforcement learning,
and show that by carefully optimizing our systems using the test metrics of the
MSCOCO task, significant gains in performance can be realized. Our systems are
built using a new optimization approach that we call self-critical sequence
training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather
than estimating a "baseline" to normalize the rewards and reduce variance,
utilizes the output of its own test-time inference algorithm to normalize the
rewards it experiences. Using this approach, estimating the reward signal (as
actor-critic methods must do) and estimating normalization (as REINFORCE
algorithms typically do) is avoided, while at the same time harmonizing the
model with respect to its test-time inference procedure. Empirically we find
that directly optimizing the CIDEr metric with SCST and greedy decoding at
test-time is highly effective. Our results on the MSCOCO evaluation server
establish a new state-of-the-art on the task, improving the best result in
terms of CIDEr from 104.9 to 114.7.
| Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross and
Vaibhava Goel | null | 1612.00563 | null | null |
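The self-critical policy gradient can be summarized as follows, where $w^{s}$ is a sampled caption, $\hat{w}$ is the caption produced by greedy test-time decoding, and $r(\cdot)$ is the CIDEr reward; the notation here is an assumption about how the abstract's description is usually written.

```latex
% Self-critical sequence training gradient (per example).
\nabla_\theta L(\theta) \;\approx\;
  -\big(r(w^{s}) - r(\hat{w})\big)\,
  \nabla_\theta \log p_\theta(w^{s})
```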
Active Search for Sparse Signals with Region Sensing | stat.ML cs.AI cs.LG | Autonomous systems can be used to search for sparse signals in a large space;
e.g., aerial robots can be deployed to localize threats, detect gas leaks, or
respond to distress calls. Intuitively, search algorithms may increase
efficiency by collecting aggregate measurements summarizing large contiguous
regions. However, most existing search methods either ignore the possibility of
such region observations (e.g., Bayesian optimization and multi-armed bandits)
or make strong assumptions about the sensing mechanism that allow each
measurement to arbitrarily encode all signals in the entire environment (e.g.,
compressive sensing). We propose an algorithm that actively collects data to
search for sparse signals using only noisy measurements of the average values
on rectangular regions (including single points), based on the greedy
maximization of information gain. We analyze our algorithm in 1d and show that
it requires $\tilde{O}(\frac{n}{\mu^2}+k^2)$ measurements to recover all of $k$
signal locations with small Bayes error, where $\mu$ and $n$ are the signal
strength and the size of the search space, respectively. We also show that
active designs can be fundamentally more efficient than passive designs with
region sensing, contrasting with the results of Arias-Castro, Candes, and
Davenport (2013). We demonstrate the empirical performance of our algorithm on
a search problem using satellite image data and in high dimensions.
| Yifei Ma and Roman Garnett and Jeff Schneider | null | 1612.00583 | null | null |
Development of a hybrid learning system based on SVM, ANFIS and domain
knowledge: DKFIS | cs.LG cs.CE stat.AP stat.ML | This paper presents the development of a hybrid learning system based on
Support Vector Machines (SVM), Adaptive Neuro-Fuzzy Inference System (ANFIS)
and domain knowledge to solve a prediction problem. The proposed two-stage Domain
Knowledge based Fuzzy Information System (DKFIS) improves the prediction
accuracy attained by ANFIS alone. The proposed framework has been implemented
on a noisy and incomplete dataset acquired from a hydrocarbon field located in
the western part of India. Here, oil saturation has been predicted from four
different well logs i.e. gamma ray, resistivity, density, and clay volume. In
the first stage, depending on zero or near-zero versus non-zero oil saturation
levels, the input vector is classified into two classes (Class 0 and Class 1)
using SVM. The classification results have been further fine-tuned applying
expert knowledge based on the relationship among predictor variables i.e. well
logs and the target variable, oil saturation. In the second stage, an ANFIS is designed to
predict non-zero (Class 1) oil saturation values from predictor logs. The
predicted output has been further refined based on expert knowledge. It is
apparent from the experimental results that the expert intervention with
qualitative judgment at each stage has rendered the prediction into the
feasible and realistic ranges. The performance analysis of the prediction in
terms of four performance metrics, namely correlation coefficient (CC), root
mean square error (RMSE), absolute error mean (AEM), and scatter index (SI), has
established DKFIS as a useful tool for reservoir characterization.
| Soumi Chaki, Aurobinda Routray, William K. Mohanty, Mamata Jenamani | null | 1612.00585 | null | null |
Communication Lower Bounds for Distributed Convex Optimization:
Partition Data on Features | cs.LG stat.ML | Recently, there has been an increasing interest in designing distributed
convex optimization algorithms under the setting where the data matrix is
partitioned on features. Algorithms under this setting sometimes have many
advantages over those under the setting where data is partitioned on samples,
especially when the number of features is huge. Therefore, it is important to
understand the inherent limitations of these optimization problems. In this
paper, with certain restrictions on the communication allowed in the
procedures, we develop tight lower bounds on communication rounds for a broad
class of non-incremental algorithms under this setting. We also provide a lower
bound on communication rounds for a class of (randomized) incremental
algorithms.
| Zihao Chen, Luo Luo, Zhihua Zhang | null | 1612.00599 | null | null |
Predictive Clinical Decision Support System with RNN Encoding and Tensor
Decoding | cs.LG | With the introduction of Electronic Health Records, large amounts of
digital data have become available for analysis and decision support. When
physicians are prescribing treatments to a patient, they need to consider a
large range of data variety and volume, making decisions increasingly complex.
Machine learning based Clinical Decision Support systems can be a solution to
the data challenges. In this work we focus on a class of decision support in
which the physicians' decision is directly predicted. Concretely, the model
would assign higher probabilities to decisions that it presumes the physician
is more likely to make. Thus the CDS system can provide physicians with
rational recommendations. We also address the problem of correlation in target
features: Often a physician is required to make multiple (sub-)decisions in a
block, and that these decisions are mutually dependent. We propose a solution
to the target correlation problem using a tensor factorization model. In order
to handle the patients' historical information as sequential data, we apply the
so-called Encoder-Decoder-Framework which is based on Recurrent Neural Networks
(RNN) as encoders and a tensor factorization model as a decoder, a combination
which is novel in machine learning. In experiments with real-world datasets,
we show that the proposed model achieves better prediction performance.
| Yinchong Yang, Peter A. Fasching, Markus Wallwiener, Tanja N. Fehm,
Sara Y. Brucker, Volker Tresp | null | 1612.00611 | null | null |
A temporal model for multiple sclerosis course evolution | stat.ML cs.LG | Multiple Sclerosis is a degenerative condition of the central nervous system
that affects nearly 2.5 million individuals in terms of their physical,
cognitive, psychological and social capabilities. Researchers are currently
investigating the use of patient-reported outcome measures for the
assessment of the impact and evolution of the disease on patients' lives.
To date, a clear understanding of the use of such measures to predict the
evolution of the disease is still lacking. In this work we resort to
regularized machine learning methods for binary classification and multiple
output regression. We propose a pipeline that can be used to predict the
disease progression from patient reported measures. The obtained model is
tested on a data set collected from an ongoing clinical research project.
| Samuele Fiorini, Andrea Tacchino, Giampaolo Brichetto, Alessandro
Verri, Annalisa Barla | null | 1612.00615 | null | null |
A General Framework for Density Based Time Series Clustering Exploiting
a Novel Admissible Pruning Strategy | cs.LG | Time Series Clustering is an important subroutine in many higher-level data
mining analyses, including data editing for classifiers, summarization, and
outlier detection. It is well known that for similarity search the superiority
of Dynamic Time Warping (DTW) over Euclidean distance gradually diminishes as
we consider ever larger datasets. However, as we shall show, the same is not
true for clustering. Clustering time series under DTW remains a computationally
expensive operation. In this work, we address this issue in two ways. We
propose a novel pruning strategy that exploits both the upper and lower bounds
to prune off a very large fraction of the expensive distance calculations. This
pruning strategy is admissible and gives us provably identical results to the
brute force algorithm, but is at least an order of magnitude faster. For
datasets where even this level of speedup is inadequate, we show that we can
use a simple heuristic to order the unavoidable calculations in a
most-useful-first ordering, thus casting the clustering into an anytime
framework. We demonstrate the utility of our ideas with both single and
multidimensional case studies in the domains of astronomy, speech physiology,
medicine and entomology. In addition, we show the generality of our clustering
framework to other domains by efficiently obtaining semantically significant
clusters in protein sequences using the Edit Distance, the discrete data
analogue of DTW.
| Nurjahan Begum, Liudmila Ulanova, Hoang Anh Dau, Jun Wang and Eamonn
Keogh | null | 1612.00637 | null | null |
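To give a flavour of bound-based pruning for DTW (the paper's own strategy uses both upper and lower bounds and differs in its details), the widely used LB_Keogh lower bound below lets a clustering routine skip the exact DTW computation whenever the bound already exceeds the current best distance. The window radius and series are illustrative.

```python
# Hedged illustration of lower-bound pruning for DTW-based clustering.
import numpy as np

def lb_keogh(q, c, r):
    """Admissible lower bound on DTW(q, c) under a Sakoe-Chiba band of radius r."""
    lb = 0.0
    for i, ci in enumerate(c):
        lo, hi = max(0, i - r), min(len(q), i + r + 1)
        upper, lower = q[lo:hi].max(), q[lo:hi].min()
        if ci > upper:
            lb += (ci - upper) ** 2
        elif ci < lower:
            lb += (ci - lower) ** 2
    return np.sqrt(lb)

def prune(query, candidates, best_so_far, r=5):
    """Keep only candidates whose lower bound does not already exceed best_so_far."""
    return [c for c in candidates if lb_keogh(query, c, r) < best_so_far]

rng = np.random.default_rng(0)
series = [rng.standard_normal(128) for _ in range(100)]
print(len(prune(series[0], series[1:], best_so_far=5.0)))
```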
Inferring Cognitive Models from Data using Approximate Bayesian
Computation | cs.HC cs.AI cs.LG stat.ML | An important problem for HCI researchers is to estimate the parameter values
of a cognitive model from behavioral data. This is a difficult problem, because
of the substantial complexity and variety in human behavioral strategies. We
report an investigation into a new approach using approximate Bayesian
computation (ABC) to condition model parameters to data and prior knowledge. As
the case study we examine menu interaction, where we have click time data only
to infer a cognitive model that implements a search behaviour with parameters
such as fixation duration and recall probability. Our results demonstrate that
ABC (i) improves estimates of model parameter values, (ii) enables meaningful
comparisons between model variants, and (iii) supports fitting models to
individual users. ABC provides ample opportunities for theoretical HCI research
by allowing principled inference of model parameter values and their
uncertainty.
| Antti Kangasr\"a\"asi\"o, Kumaripaba Athukorala, Andrew Howes, Jukka
Corander, Samuel Kaski, Antti Oulasvirta | 10.1145/3025453.3025576 | 1612.00653 | null | null |
Predicting Patient State-of-Health using Sliding Window and Recurrent
Classifiers | stat.ML cs.LG | Bedside monitors in Intensive Care Units (ICUs) frequently sound incorrectly,
slowing response times and desensitising nurses to alarms (Chambrin, 2001),
causing true alarms to be missed (Hug et al., 2011). We compare sliding window
predictors with recurrent predictors to classify patient state-of-health from
ICU multivariate time series; we report slightly improved performance for the
RNN for three out of four targets.
| Adam McCarthy and Christopher K.I. Williams | null | 1612.00662 | null | null |
Voxelwise nonlinear regression toolbox for neuroimage analysis:
Application to aging and neurodegenerative disease modeling | stat.ML cs.CV cs.LG q-bio.NC stat.AP | This paper describes a new neuroimaging analysis toolbox that allows for the
modeling of nonlinear effects at the voxel level, overcoming limitations of
methods based on linear models like the GLM. We illustrate its features using a
relevant example in which distinct nonlinear trajectories of Alzheimer's
disease related brain atrophy patterns were found across the full biological
spectrum of the disease. The open-source toolbox presented in this paper is
available at https://github.com/imatge-upc/VNeAT.
| Santi Puch, Asier Aduriz, Adri\`a Casamitjana, Veronica Vilaplana,
Paula Petrone, Gr\'egory Operto, Raffaele Cacciaglia, Stavros Skouras, Carles
Falcon, Jos\'e Luis Molinuevo, Juan Domingo Gispert | null | 1612.00667 | null | null |
Reliable Evaluation of Neural Network for Multiclass Classification of
Real-world Data | cs.NE cs.LG | This paper presents a systematic evaluation of Neural Network (NN) for
classification of real-world data. In the field of machine learning, it is
often seen that a single parameter, 'predictive accuracy', is used
for evaluating the performance of a classifier model. However, this parameter
might not be considered reliable given a dataset with a very high level of
skewness. To demonstrate such behavior, seven different types of datasets have
been used to evaluate a Multilayer Perceptron (MLP) using twelve (12) different
parameters which include micro- and macro-level estimation. In the present
study, the most common prediction problem, 'multiclass' classification,
has been considered. The results obtained for the different parameters on
each of the datasets demonstrate interesting findings that support the
usability of this set of performance evaluation parameters.
| Siddharth Dinesh, Tirtharaj Dash | null | 1612.00671 | null | null |
Identifying and Categorizing Anomalies in Retinal Imaging Data | cs.LG cs.CV | The identification and quantification of markers in medical images is
critical for diagnosis, prognosis and management of patients in clinical
practice. Supervised- or weakly supervised training enables the detection of
findings that are known a priori. It does not scale well, and a priori
definition limits the vocabulary of markers to known entities reducing the
accuracy of diagnosis and prognosis. Here, we propose the identification of
anomalies in large-scale medical imaging data using healthy examples as a
reference. We detect and categorize candidates for anomaly findings untypical
for the observed data. A deep convolutional autoencoder is trained on healthy
retinal images. The learned model generates a new feature representation, and
the distribution of healthy retinal patches is estimated by a one-class support
vector machine. Results demonstrate that we can identify pathologic regions in
images without using expert annotations. A subsequent clustering categorizes
findings into clinically meaningful classes. In addition the learned features
outperform standard embedding approaches in a classification task.
| Philipp Seeb\"ock, Sebastian Waldstein, Sophie Klimscha, Bianca S.
Gerendas, Ren\'e Donner, Thomas Schlegl, Ursula Schmidt-Erfurth and Georg
Langs | null | 1612.00686 | null | null |
Probabilistic Neural Programs | cs.NE cs.AI cs.LG | We present probabilistic neural programs, a framework for program induction
that permits flexible specification of both a computational model and inference
algorithm while simultaneously enabling the use of deep neural networks.
Probabilistic neural programs combine a computation graph for specifying a
neural network with an operator for weighted nondeterministic choice. Thus, a
program describes both a collection of decisions as well as the neural network
architecture used to make each one. We evaluate our approach on a challenging
diagram question answering task where probabilistic neural programs correctly
execute nearly twice as many programs as a baseline model.
| Kenton W. Murray and Jayant Krishnamurthy | null | 1612.00712 | null | null |
Cognitive Deep Machine Can Train Itself | cs.LG cs.AI cs.NE | Machine learning is making substantial progress in diverse applications. The
success is mostly due to advances in deep learning. However, deep learning can
make mistakes and its generalization abilities to new tasks are questionable.
We ask when and how one can combine network outputs, when (i) details of the
observations are evaluated by learned deep components and (ii) facts and
confirmation rules are available in knowledge based systems. We show that in
limited contexts the required number of training samples can be low and
self-improvement of pre-trained networks in more general context is possible.
We argue that the combination of sparse outlier detection with deep components
that can support each other diminishes the fragility of deep methods, an
important requirement for engineering applications. We argue that supervised
learning of labels may be fully eliminated under certain conditions: a
component based architecture together with a knowledge based system can train
itself and provide high quality answers. We demonstrate these concepts on the
State Farm Distracted Driver Detection benchmark. We argue that the view of the
Study Panel (2016) may overestimate the requirements on `years of focused
research' and `careful, unique construction' for `AI systems'.
| Andr\'as L\H{o}rincz, M\'at\'e Cs\'akv\'ari, \'Aron F\'othi, Zolt\'an
\'Ad\'am Milacski, Andr\'as S\'ark\'any, Zolt\'an T\H{o}s\'er | null | 1612.00745 | null | null |
Asynchronous Stochastic Gradient MCMC with Elastic Coupling | stat.ML cs.AI cs.LG | We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling
for problems where we can leverage (stochastic) gradients to define continuous
dynamics which explore the target distribution. We outline a solution strategy
for this setting based on stochastic gradient Hamiltonian Monte Carlo sampling
(SGHMC) which we alter to include an elastic coupling term that ties together
multiple MCMC instances. The proposed strategy turns inherently sequential HMC
algorithms into asynchronous parallel versions. First experiments empirically
show that the resulting parallel sampler significantly speeds up exploration of
the target distribution, when compared to standard SGHMC, and is less prone to
the harmful effects of stale gradients than a naive parallelization approach.
| Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter | null | 1612.00767 | null | null |
A simple squared-error reformulation for ordinal classification | stat.ML cs.LG | In this paper, we explore ordinal classification (in the context of deep
neural networks) through a simple modification of the squared error loss which
allows it not only to be sensitive to class ordering, but also allows
the possibility of having a discrete probability distribution over the classes.
Our formulation is based on the use of a softmax hidden layer, which has
received relatively little attention in the literature. We empirically evaluate
its performance on the Kaggle diabetic retinopathy dataset, an ordinal and
high-resolution dataset and show that it outperforms all of the baselines
employed.
| Christopher Beckham, Christopher Pal | null | 1612.00775 | null | null |
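A small sketch of one plausible reading of this reformulation: take the expectation of the class index under the softmax output and penalise its squared distance to the integer label. This is an illustration under that assumption, not the authors' exact loss; shapes and values are placeholders.

```python
# Hedged sketch: squared error on the expected class index of a softmax output.
import numpy as np

def ordinal_squared_error(logits, y):
    """logits: (batch, K) scores; y: (batch,) integer labels in {0, ..., K-1}."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                  # softmax over ordered classes
    expected_class = p @ np.arange(logits.shape[1])    # E[k] under the softmax
    return np.mean((expected_class - y) ** 2)

print(ordinal_squared_error(np.array([[2.0, 1.0, 0.1]]), np.array([0])))
```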
Overcoming catastrophic forgetting in neural networks | cs.LG cs.AI stat.ML | The ability to learn tasks in a sequential fashion is crucial to the
development of artificial intelligence. Neural networks are not, in general,
capable of this and it has been widely thought that catastrophic forgetting is
an inevitable feature of connectionist models. We show that it is possible to
overcome this limitation and train networks that can maintain expertise on
tasks which they have not experienced for a long time. Our approach remembers
old tasks by selectively slowing down learning on the weights important for
those tasks. We demonstrate our approach is scalable and effective by solving a
set of classification tasks based on the MNIST hand written digit dataset and
by learning several Atari 2600 games sequentially.
| James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness,
Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho,
Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan
Kumaran, Raia Hadsell | 10.1073/pnas.1611835114 | 1612.00796 | null | null |
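The selective slow-down is usually written as a quadratic penalty that anchors parameters important for an earlier task A while training on a new task B, with $F_i$ the Fisher information of parameter $\theta_i$ and $\lambda$ controlling the strength of the constraint; the notation below is an assumption about the standard presentation.

```latex
% Elastic weight consolidation loss when training on task B after task A.
\mathcal{L}(\theta) \;=\; \mathcal{L}_{B}(\theta)
  \;+\; \sum_i \frac{\lambda}{2}\, F_i \,
  \big(\theta_i - \theta^{*}_{A,i}\big)^2
```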
Restricted Strong Convexity Implies Weak Submodularity | stat.ML cs.IT cs.LG math.IT | We connect high-dimensional subset selection and submodular maximization. Our
results extend the work of Das and Kempe (2011) from the setting of linear
regression to arbitrary objective functions. For greedy feature selection, this
connection allows us to obtain strong multiplicative performance bounds on
several methods without statistical modeling assumptions. We also derive
recovery guarantees of this form under standard assumptions. Our work shows
that greedy algorithms perform within a constant factor from the best possible
subset-selection solution for a broad class of general objective functions. Our
methods allow a direct control over the number of obtained features as opposed
to regularization parameters that only implicitly control sparsity. Our proof
technique uses the concept of weak submodularity initially defined by Das and
Kempe. We draw a connection between convex analysis and submodular set function
theory which may be of independent interest for other statistical learning
applications that have combinatorial structure.
| Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand
Negahban | null | 1612.00804 | null | null |
Perspective Transformer Nets: Learning Single-View 3D Object
Reconstruction without 3D Supervision | cs.CV cs.GR cs.LG | Understanding the 3D world is a fundamental problem in computer vision.
However, learning a good representation of 3D objects is still an open problem
due to the high dimensionality of the data and many factors of variation
involved. In this work, we investigate the task of single-view 3D object
reconstruction from a learning agent's perspective. We formulate the learning
process as an interaction between 3D and 2D representations and propose an
encoder-decoder network with a novel projection loss defined by the perspective
transformation. More importantly, the projection loss enables the unsupervised
learning using 2D observation without explicit 3D supervision. We demonstrate
the ability of the model in generating 3D volume from a single 2D image with
three sets of experiments: (1) learning from single-class objects; (2) learning
from multi-class objects and (3) testing on novel object classes. Results show
superior performance and better generalization ability for 3D object
reconstruction when the projection loss is involved.
| Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee | null | 1612.00814 | null | null |
Summary - TerpreT: A Probabilistic Programming Language for Program
Induction | cs.LG cs.AI cs.NE | We study machine learning formulations of inductive program synthesis; that
is, given input-output examples, synthesize source code that maps inputs to
corresponding outputs. Our key contribution is TerpreT, a domain-specific
language for expressing program synthesis problems. A TerpreT model is composed
of a specification of a program representation and an interpreter that
describes how programs map inputs to outputs. The inference task is to observe
a set of input-output examples and infer the underlying program. From a TerpreT
model we automatically perform inference using four different back-ends:
gradient descent (thus each TerpreT model can be seen as defining a
differentiable interpreter), linear program (LP) relaxations for graphical
models, discrete satisfiability solving, and the Sketch program synthesis
system. TerpreT has two main benefits. First, it enables rapid exploration of a
range of domains, program representations, and interpreter models. Second, it
separates the model specification from the inference algorithm, allowing proper
comparisons between different approaches to inference.
We illustrate the value of TerpreT by developing several interpreter models
and performing an extensive empirical comparison between alternative inference
algorithms on a variety of program models. To our knowledge, this is the first
work to compare gradient-based search over program space to traditional
search-based alternatives. Our key empirical finding is that constraint solvers
dominate the gradient descent and LP-based formulations.
This is a workshop summary of a longer report at arXiv:1608.04428
| Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman,
Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow | null | 1612.00817 | null | null |
Learning with Hierarchical Gaussian Kernels | stat.ML cs.LG | We investigate iterated compositions of weighted sums of Gaussian kernels and
provide an interpretation of the construction that shows some similarities with
the architectures of deep neural networks. On the theoretical side, we show
that these kernels are universal and that SVMs using these kernels are
universally consistent. We further describe a parameter optimization method for
the kernel parameters and empirically compare this method to SVMs, random
forests, a multiple kernel learning approach, and to some deep neural networks.
| Ingo Steinwart and Philipp Thomann and Nico Schmid | null | 1612.00824 | null | null |
Learning Operations on a Stack with Neural Turing Machines | cs.LG | Multiple extensions of Recurrent Neural Networks (RNNs) have been proposed
recently to address the difficulty of storing information over long time
periods. In this paper, we experiment with the capacity of Neural Turing
Machines (NTMs) to deal with these long-term dependencies on well-balanced
strings of parentheses. We show that not only does the NTM emulate a stack with
its heads and learn an algorithm to recognize such words, but it is also
capable of strongly generalizing to much longer sequences.
| Tristan Deleu, Joseph Dureau | null | 1612.00827 | null | null |
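For context, the sketch below is the classical stack-based recognizer for well-balanced bracket strings, i.e. the kind of algorithm the NTM is trained to emulate with its read/write heads; it is not the NTM itself, and the use of several bracket types here is an assumption rather than the paper's exact task definition.

```python
def is_balanced(s):
    """Classical stack-based check for well-balanced bracket strings."""
    pairs = {"(": ")", "[": "]", "{": "}"}
    closers = set(pairs.values())
    stack = []
    for ch in s:
        if ch in pairs:                      # opening bracket: push its closer
            stack.append(pairs[ch])
        elif ch in closers:                  # closing bracket: must match the top
            if not stack or stack.pop() != ch:
                return False
    return not stack                         # balanced iff nothing is left open

assert is_balanced("(()[]{})")
assert not is_balanced("([)]")
```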
Scribbler: Controlling Deep Image Synthesis with Sketch and Color | cs.CV cs.LG | Recently, there have been several promising methods to generate realistic
imagery from deep convolutional networks. These methods sidestep the
traditional computer graphics rendering pipeline and instead generate imagery
at the pixel level by learning from large collections of photos (e.g. faces or
bedrooms). However, these methods are of limited utility because it is
difficult for a user to control what the network produces. In this paper, we
propose a deep adversarial image synthesis architecture that is conditioned on
sketched boundaries and sparse color strokes to generate realistic cars,
bedrooms, or faces. We demonstrate a sketch-based image synthesis system which
allows users to 'scribble' over the sketch to indicate the preferred color for
objects. Our network can then generate convincing images that satisfy both the
color and the sketch constraints of the user. The network is feed-forward, which
allows users to see the effect of their edits in real time. We compare to
recent work on sketch to image synthesis and show that our approach can
generate more realistic, more diverse, and more controllable outputs. The
architecture is also effective at user-guided colorization of grayscale images.
| Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays | null | 1612.00835 | null | null |
Making the V in VQA Matter: Elevating the Role of Image Understanding in
Visual Question Answering | cs.CV cs.AI cs.CL cs.LG | Problems at the intersection of vision and language are of significant
importance both as challenging research questions and for the rich set of
applications they enable. However, inherent structure in our world and bias in
our language tend to be a simpler signal for learning than visual modalities,
resulting in models that ignore visual information, leading to an inflated
sense of their capability.
We propose to counter these language priors for the task of Visual Question
Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance
the popular VQA dataset by collecting complementary images such that every
question in our balanced dataset is associated with not just a single image,
but rather a pair of similar images that result in two different answers to the
question. Our dataset is by construction more balanced than the original VQA
dataset and has approximately twice the number of image-question pairs. Our
complete balanced dataset is publicly available at www.visualqa.org as part of
the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA
v2.0).
We further benchmark a number of state-of-the-art VQA models on our balanced
dataset. All models perform significantly worse on our balanced dataset,
suggesting that these models have indeed learned to exploit language priors.
This finding provides the first concrete empirical evidence for what seems to
be a qualitative sense among practitioners.
Finally, our data collection protocol for identifying complementary images
enables us to develop a novel interpretable model, which in addition to
providing an answer to the given (image, question) pair, also provides a
counter-example based explanation. Specifically, it identifies an image that is
similar to the original image but which it believes has a different answer to the
same question. This can help in building trust for machines among their users.
| Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh | null | 1612.00837 | null | null |
A novel multiclass SVM based framework to classify lithology from well
logs: a real-world application | cs.LG stat.AP stat.ML | Support vector machines (SVMs) have been recognized as a potential tool for
supervised classification analyses in different domains of research. In
essence, SVM is a binary classifier. Therefore, in case of a multiclass
problem, the problem is divided into a series of binary problems which are
solved by binary classifiers, and finally the classification results are
combined following either the one-against-one or one-against-all strategies. In
this paper, an attempt has been made to classify lithology using a multiclass
SVM based framework using well logs as predictor variables. Here, the lithology
is classified into four classes, namely sand, shaly sand, sandy shale, and shale,
based on the relative values of sand and shale fractions as suggested by an
expert geologist. The available dataset, consisting of well logs (gamma ray,
neutron porosity, density, and P-sonic) and class information from four closely
spaced wells in an onshore hydrocarbon field, is divided into training and
testing sets. We have used the one-against-all strategy to combine the results of
multiple binary classifiers. The reported results established the superiority
of multiclass SVM compared to other classifiers in terms of classification
accuracy. The selection of kernel function and associated parameters has also
been investigated here. It can be envisaged from the results achieved in this
study that the proposed framework based on multiclass SVM can further be used
to solve classification problems. In future research endeavors, seismic
attributes can be introduced into the framework to classify the lithology
throughout a study area from seismic inputs.
| Soumi Chaki, Aurobinda Routray, William K. Mohanty, Mamata Jenamani | null | 1612.00840 | null | null |
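A minimal sketch of a one-against-all multiclass SVM of the kind described in this abstract, using scikit-learn. The well-log feature values, class labels, and SVM hyperparameters (RBF kernel, C, gamma) are placeholders standing in for the paper's data and tuned settings, not the authors' actual setup.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder well-log features: gamma ray, neutron porosity, density, P-sonic.
rng = np.random.RandomState(0)
X_train = rng.randn(200, 4)
y_train = rng.randint(0, 4, size=200)  # 0: sand, 1: shaly sand, 2: sandy shale, 3: shale
X_test = rng.randn(50, 4)

# One-against-all combination of binary RBF-kernel SVMs; in practice C and gamma
# would be selected by cross-validation rather than fixed as here.
clf = make_pipeline(
    StandardScaler(),
    OneVsRestClassifier(SVC(kernel="rbf", C=10.0, gamma="scale")),
)
clf.fit(X_train, y_train)
predicted_lithology = clf.predict(X_test)
```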
A Novel Framework based on SVDD to Classify Water Saturation from
Seismic Attributes | cs.LG stat.ML | Water saturation is an important property in reservoir engineering domain.
Thus, satisfactory classification of water saturation from seismic attributes
is beneficial for reservoir characterization. However, the diverse and non-linear
nature of subsurface attributes makes the classification task difficult. In
this context, this paper proposes a generalized Support Vector Data Description
(SVDD) based novel classification framework to classify water saturation into
two classes (Class high and Class low) from three seismic attributes: seismic
impedance, amplitude envelope, and seismic sweetness. G-metric means and program
execution time are used to quantify the performance of the proposed framework
along with established supervised classifiers. The documented results imply
that the proposed framework is superior to existing classifiers. The present
study is envisioned to contribute to further reservoir modeling.
| Soumi Chaki, Akhilesh Kumar Verma, Aurobinda Routray, William K.
Mohanty, Mamata Jenamani | null | 1612.00841 | null | null |
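As a rough sketch of an SVDD-style two-class scheme like the one this abstract describes, the code below fits one data description per class and assigns test samples to the class whose description scores them higher. scikit-learn's OneClassSVM is used here as a stand-in for SVDD (the two are closely related for Gaussian kernels); the attribute values and the nu/gamma settings are illustrative assumptions, and the paper's generalized SVDD formulation may differ.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# Placeholder seismic attributes: impedance, amplitude envelope, sweetness.
rng = np.random.RandomState(1)
X_high = rng.randn(150, 3) + 1.0   # stand-in for "Class high" training samples
X_low = rng.randn(150, 3) - 1.0    # stand-in for "Class low" training samples
X_test = rng.randn(40, 3)

scaler = StandardScaler().fit(np.vstack([X_high, X_low]))

# One data description per class.
models = {
    label: OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(scaler.transform(X))
    for label, X in (("high", X_high), ("low", X_low))
}

# Assign each test sample to the class whose description scores it higher.
scores = {label: m.decision_function(scaler.transform(X_test))
          for label, m in models.items()}
predicted_class = np.where(scores["high"] >= scores["low"], "high", "low")
```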
Success Probability of Exploration: a Concrete Analysis of Learning
Efficiency | cs.LG | Exploration has been a crucial part of reinforcement learning, yet several
important questions concerning exploration efficiency are still not answered
satisfactorily by existing analytical frameworks. These questions include
exploration parameter setting, situation analysis, and hardness of MDPs, all of
which are unavoidable for practitioners. To bridge the gap between the theory
and practice, we propose a new analytical framework called the success
probability of exploration. We show that those important questions of
exploration above can all be answered under our framework, and the answers
provided by our framework meet the needs of practitioners better than the
existing ones. More importantly, we introduce a concrete and practical approach
to evaluating the success probabilities in certain MDPs without the need to
actually run the learning algorithm. We then provide empirical results to
verify our approach, and demonstrate how the success probability of exploration
can be used to analyse and predict the behaviours and possible outcomes of
exploration, which are key to answering the important questions of
exploration.
| Liangpeng Zhang, Ke Tang, Xin Yao | null | 1612.00882 | null | null |
Parameter Compression of Recurrent Neural Networks and Degradation of
Short-term Memory | cs.CV cs.LG cs.NE | The significant computational costs of deploying neural networks in
large-scale or resource-constrained environments, such as data centers and
mobile devices, have spurred interest in model compression, which can achieve a
reduction in both arithmetic operations and storage memory. Several techniques
have been proposed for reducing or compressing the parameters for feed-forward
and convolutional neural networks, but less is understood about the effect of
parameter compression on recurrent neural networks (RNN). In particular, the
extent to which the recurrent parameters can be compressed, and the impact on
short-term memory performance, are not well understood. In this paper, we study
the effect of complexity reduction, through singular value decomposition rank
reduction, on RNN and minimal gated recurrent unit (MGRU) networks for several
tasks. We show that considerable rank reduction is possible when compressing
recurrent weights, even without fine tuning. Furthermore, we propose a
perturbation model for the effect of general perturbations, such as a
compression, on the recurrent parameters of RNNs. The model is tested against a
noiseless memorization experiment that elucidates the short-term memory
performance. In this way, we demonstrate that the effect of compression of
recurrent parameters is dependent on the degree of temporal coherence present
in the data and task. This work can guide on-the-fly RNN compression for novel
environments or tasks, and provides insight for applying RNN compression in
low-power devices, such as hearing aids.
| Jonathan A. Cox | null | 1612.00891 | null | null |
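A minimal sketch of the singular value decomposition rank-reduction step mentioned in this abstract, applied to a stand-alone recurrent weight matrix. The hidden size and target rank are made-up numbers, and this omits the RNN/MGRU training and memorization experiments the paper actually evaluates.

```python
import numpy as np

def low_rank_approx(W, rank):
    """Best rank-`rank` approximation of W via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Hypothetical recurrent weight matrix of a vanilla RNN with 256 hidden units.
rng = np.random.RandomState(0)
W_hh = rng.randn(256, 256) / np.sqrt(256)

# Rank-32 factors (256x32 and 32x256) store 16,384 parameters instead of 65,536.
W_hh_compressed = low_rank_approx(W_hh, rank=32)
rel_err = np.linalg.norm(W_hh - W_hh_compressed) / np.linalg.norm(W_hh)
print(f"relative Frobenius error at rank 32: {rel_err:.3f}")
```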