categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
list |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG stat.ML
| null |
1611.0963
| null | null | null | null | null |
Improving Variational Auto-Encoders using Householder Flow
|
Variational auto-encoders (VAE) are scalable and powerful generative models.
However, the choice of the variational posterior determines tractability and
flexibility of the VAE. Commonly, latent variables are modeled using the normal
distribution with a diagonal covariance matrix. This results in computational
efficiency but typically it is not flexible enough to match the true posterior
distribution. One fashion of enriching the variational posterior distribution
is application of normalizing flows, i.e., a series of invertible
transformations to latent variables with a simple posterior. In this paper, we
follow this line of thinking and propose a volume-preserving flow that uses a
series of Householder transformations. We show empirically on MNIST dataset and
histopathology data that the proposed flow allows to obtain more flexible
variational posterior and competitive results comparing to other normalizing
flows.
|
[
"Jakub M. Tomczak and Max Welling"
] |
null | null |
1611.09630
| null | null |
http://arxiv.org/pdf/1611.09630v4
|
2017-01-27T00:36:51Z
|
2016-11-29T13:49:31Z
|
Improving Variational Auto-Encoders using Householder Flow
|
Variational auto-encoders (VAE) are scalable and powerful generative models. However, the choice of the variational posterior determines tractability and flexibility of the VAE. Commonly, latent variables are modeled using the normal distribution with a diagonal covariance matrix. This results in computational efficiency but typically it is not flexible enough to match the true posterior distribution. One fashion of enriching the variational posterior distribution is application of normalizing flows, i.e., a series of invertible transformations to latent variables with a simple posterior. In this paper, we follow this line of thinking and propose a volume-preserving flow that uses a series of Householder transformations. We show empirically on MNIST dataset and histopathology data that the proposed flow allows to obtain more flexible variational posterior and competitive results comparing to other normalizing flows.
|
[
"['Jakub M. Tomczak' 'Max Welling']"
] |
cs.CV cs.LG stat.ML
| null |
1611.09726
| null | null |
http://arxiv.org/pdf/1611.09726v1
|
2016-11-29T17:01:31Z
|
2016-11-29T17:01:31Z
|
Gossip training for deep learning
|
We address the issue of speeding up the training of convolutional networks.
Here we study a distributed method adapted to stochastic gradient descent
(SGD). The parallel optimization setup uses several threads, each applying
individual gradient descents on a local variable. We propose a new way to share
information between different threads inspired by gossip algorithms and showing
good consensus convergence properties. Our method called GoSGD has the
advantage to be fully asynchronous and decentralized. We compared our method to
the recent EASGD in \cite{elastic} on CIFAR-10 show encouraging results.
|
[
"['Michael Blot' 'David Picard' 'Matthieu Cord' 'Nicolas Thome']",
"Michael Blot, David Picard, Matthieu Cord, Nicolas Thome"
] |
stat.ML cs.LG
| null |
1611.09816
| null | null |
http://arxiv.org/pdf/1611.09816v2
|
2016-11-30T20:25:36Z
|
2016-11-29T20:07:32Z
|
Co-adaptive learning over a countable space
|
Co-adaptation is a special form of on-line learning where an algorithm
$\mathcal{A}$ must assist an unknown algorithm $\mathcal{B}$ to perform some
task. This is a general framework and has applications in recommendation
systems, search, education, and much more. Today, the most common use of
co-adaptive algorithms is in brain-computer interfacing (BCI), where algorithms
help patients gain and maintain control over prosthetic devices. While previous
studies have shown strong empirical results Kowalski et al. (2013); Orsborn et
al. (2014) or have been studied in specific examples Merel et al. (2013, 2015),
there is no general analysis of the co-adaptive learning problem. Here we will
study the co-adaptive learning problem in the online, closed-loop setting. We
will prove that, with high probability, co-adaptive learning is guaranteed to
outperform learning with a fixed decoder as long as a particular condition is
met.
|
[
"Michael Rabadi",
"['Michael Rabadi']"
] |
q-bio.NC cs.AI cs.CV cs.LG
| null |
1611.09819
| null | null |
http://arxiv.org/pdf/1611.09819v1
|
2016-11-29T20:11:09Z
|
2016-11-29T20:11:09Z
|
Measuring and modeling the perception of natural and unconstrained gaze
in humans and machines
|
Humans are remarkably adept at interpreting the gaze direction of other
individuals in their surroundings. This skill is at the core of the ability to
engage in joint visual attention, which is essential for establishing social
interactions. How accurate are humans in determining the gaze direction of
others in lifelike scenes, when they can move their heads and eyes freely, and
what are the sources of information for the underlying perceptual processes?
These questions pose a challenge from both empirical and computational
perspectives, due to the complexity of the visual input in real-life
situations. Here we measure empirically human accuracy in perceiving the gaze
direction of others in lifelike scenes, and study computationally the sources
of information and representations underlying this cognitive capacity. We show
that humans perform better in face-to-face conditions compared with recorded
conditions, and that this advantage is not due to the availability of input
dynamics. We further show that humans are still performing well when only the
eyes-region is visible, rather than the whole face. We develop a computational
model, which replicates the pattern of human performance, including the finding
that the eyes-region contains on its own, the required information for
estimating both head orientation and direction of gaze. Consistent with
neurophysiological findings on task-specific face regions in the brain, the
learned computational representations reproduce perceptual effects such as the
Wollaston illusion, when trained to estimate direction of gaze, but not when
trained to recognize objects or faces.
|
[
"['Daniel Harari' 'Tao Gao' 'Nancy Kanwisher' 'Joshua Tenenbaum'\n 'Shimon Ullman']",
"Daniel Harari, Tao Gao, Nancy Kanwisher, Joshua Tenenbaum, Shimon\n Ullman"
] |
stat.ML cs.LG cs.SD
| null |
1611.09827
| null | null |
http://arxiv.org/pdf/1611.09827v2
|
2017-04-06T01:13:41Z
|
2016-11-29T20:26:00Z
|
Learning Features of Music from Scratch
|
This paper introduces a new large-scale music dataset, MusicNet, to serve as
a source of supervision and evaluation of machine learning methods for music
research. MusicNet consists of hundreds of freely-licensed classical music
recordings by 10 composers, written for 11 instruments, together with
instrument/note annotations resulting in over 1 million temporal labels on 34
hours of chamber music performances under various studio and microphone
conditions.
The paper defines a multi-label classification task to predict notes in
musical recordings, along with an evaluation protocol, and benchmarks several
machine learning architectures for this task: i) learning from spectrogram
features; ii) end-to-end learning with a neural net; iii) end-to-end learning
with a convolutional neural net. These experiments show that end-to-end models
trained for note prediction learn frequency selective filters as a low-level
representation of audio.
|
[
"['John Thickstun' 'Zaid Harchaoui' 'Sham Kakade']",
"John Thickstun, Zaid Harchaoui, Sham Kakade"
] |
cs.CL cs.LG stat.ML
| null |
1611.09878
| null | null |
http://arxiv.org/pdf/1611.09878v1
|
2016-11-29T21:12:04Z
|
2016-11-29T21:12:04Z
|
Identity-sensitive Word Embedding through Heterogeneous Networks
|
Most existing word embedding approaches do not distinguish the same words in
different contexts, therefore ignoring their contextual meanings. As a result,
the learned embeddings of these words are usually a mixture of multiple
meanings. In this paper, we acknowledge multiple identities of the same word in
different contexts and learn the \textbf{identity-sensitive} word embeddings.
Based on an identity-labeled text corpora, a heterogeneous network of words and
word identities is constructed to model different-levels of word
co-occurrences. The heterogeneous network is further embedded into a
low-dimensional space through a principled network embedding approach, through
which we are able to obtain the embeddings of words and the embeddings of word
identities. We study three different types of word identities including topics,
sentiments and categories. Experimental results on real-world data sets show
that the identity-sensitive word embeddings learned by our approach indeed
capture different meanings of words and outperforms competitive methods on
tasks including text classification and word similarity computation.
|
[
"Jian Tang, Meng Qu, and Qiaozhu Mei",
"['Jian Tang' 'Meng Qu' 'Qiaozhu Mei']"
] |
cs.AI cs.LG stat.ML
| null |
1611.09894
| null | null |
http://arxiv.org/pdf/1611.09894v1
|
2016-11-29T21:32:25Z
|
2016-11-29T21:32:25Z
|
Exploration for Multi-task Reinforcement Learning with Deep Generative
Models
|
Exploration in multi-task reinforcement learning is critical in training
agents to deduce the underlying MDP. Many of the existing exploration
frameworks such as $E^3$, $R_{max}$, Thompson sampling assume a single
stationary MDP and are not suitable for system identification in the multi-task
setting. We present a novel method to facilitate exploration in multi-task
reinforcement learning using deep generative models. We supplement our method
with a low dimensional energy model to learn the underlying MDP distribution
and provide a resilient and adaptive exploration signal to the agent. We
evaluate our method on a new set of environments and provide intuitive
interpretation of our results.
|
[
"Sai Praveen Bangaru, JS Suhas and Balaraman Ravindran",
"['Sai Praveen Bangaru' 'JS Suhas' 'Balaraman Ravindran']"
] |
stat.ML cs.LG
| null |
1611.09897
| null | null |
http://arxiv.org/pdf/1611.09897v1
|
2016-11-29T21:39:23Z
|
2016-11-29T21:39:23Z
|
Autism Spectrum Disorder Classification using Graph Kernels on
Multidimensional Time Series
|
We present an approach to model time series data from resting state fMRI for
autism spectrum disorder (ASD) severity classification. We propose to adopt
kernel machines and employ graph kernels that define a kernel dot product
between two graphs. This enables us to take advantage of spatio-temporal
information to capture the dynamics of the brain network, as opposed to
aggregating them in the spatial or temporal dimension. In addition to the
conventional similarity graphs, we explore the use of L1 graph using sparse
coding, and the persistent homology of time delay embeddings, in the proposed
pipeline for ASD classification. In our experiments on two datasets from the
ABIDE collection, we demonstrate a consistent and significant advantage in
using graph kernels over traditional linear or non linear kernels for a variety
of time series features.
|
[
"Rushil Anirudh, Jayaraman J. Thiagarajan, Irene Kim, Wolfgang Polonik",
"['Rushil Anirudh' 'Jayaraman J. Thiagarajan' 'Irene Kim'\n 'Wolfgang Polonik']"
] |
cs.AI cs.LG
| null |
1611.09904
| null | null |
http://arxiv.org/pdf/1611.09904v1
|
2016-11-29T21:53:09Z
|
2016-11-29T21:53:09Z
|
C-RNN-GAN: Continuous recurrent neural networks with adversarial
training
|
Generative adversarial networks have been proposed as a way of efficiently
training deep generative neural networks. We propose a generative adversarial
model that works on continuous sequential data, and apply it by training it on
a collection of classical music. We conclude that it generates music that
sounds better and better as the model is trained, report statistics on
generated music, and let the reader judge the quality by downloading the
generated songs.
|
[
"['Olof Mogren']",
"Olof Mogren"
] |
stat.ML cs.AI cs.LG cs.NE
| null |
1611.09913
| null | null |
http://arxiv.org/pdf/1611.09913v3
|
2017-03-03T17:39:34Z
|
2016-11-29T22:13:20Z
|
Capacity and Trainability in Recurrent Neural Networks
|
Two potential bottlenecks on the expressiveness of recurrent neural networks
(RNNs) are their ability to store information about the task in their
parameters, and to store information about the input history in their units. We
show experimentally that all common RNN architectures achieve nearly the same
per-task and per-unit capacity bounds with careful training, for a variety of
tasks and stacking depths. They can store an amount of task information which
is linear in the number of parameters, and is approximately 5 bits per
parameter. They can additionally store approximately one real number from their
input history per hidden unit. We further find that for several tasks it is the
per-task parameter capacity bound that determines performance. These results
suggest that many previous results comparing RNN architectures are driven
primarily by differences in training effectiveness, rather than differences in
capacity. Supporting this observation, we compare training difficulty for
several architectures, and show that vanilla RNNs are far more difficult to
train, yet have slightly higher capacity. Finally, we propose two novel RNN
architectures, one of which is easier to train than the LSTM or GRU for deeply
stacked architectures.
|
[
"Jasmine Collins, Jascha Sohl-Dickstein and David Sussillo",
"['Jasmine Collins' 'Jascha Sohl-Dickstein' 'David Sussillo']"
] |
cs.LG cs.CL cs.IR
| null |
1611.09921
| null | null |
http://arxiv.org/pdf/1611.09921v2
|
2016-12-01T02:45:34Z
|
2016-11-29T22:24:30Z
|
Less is More: Learning Prominent and Diverse Topics for Data
Summarization
|
Statistical topic models efficiently facilitate the exploration of
large-scale data sets. Many models have been developed and broadly used to
summarize the semantic structure in news, science, social media, and digital
humanities. However, a common and practical objective in data exploration tasks
is not to enumerate all existing topics, but to quickly extract representative
ones that broadly cover the content of the corpus, i.e., a few topics that
serve as a good summary of the data. Most existing topic models fit exactly the
same number of topics as a user specifies, which have imposed an unnecessary
burden to the users who have limited prior knowledge. We instead propose new
models that are able to learn fewer but more representative topics for the
purpose of data summarization. We propose a reinforced random walk that allows
prominent topics to absorb tokens from similar and smaller topics, thus
enhances the diversity among the top topics extracted. With this reinforced
random walk as a general process embedded in classical topic models, we obtain
\textit{diverse topic models} that are able to extract the most prominent and
diverse topics from data. The inference procedures of these diverse topic
models remain as simple and efficient as the classical models. Experimental
results demonstrate that the diverse topic models not only discover topics that
better summarize the data, but also require minimal prior knowledge of the
users.
|
[
"['Jian Tang' 'Cheng Li' 'Ming Zhang' 'Qiaozhu Mei']",
"Jian Tang, Cheng Li, Ming Zhang, and Qiaozhu Mei"
] |
cs.AI cs.LG stat.ML
| null |
1611.0994
| null | null | null | null | null |
Neural Combinatorial Optimization with Reinforcement Learning
|
This paper presents a framework to tackle combinatorial optimization problems
using neural networks and reinforcement learning. We focus on the traveling
salesman problem (TSP) and train a recurrent network that, given a set of city
coordinates, predicts a distribution over different city permutations. Using
negative tour length as the reward signal, we optimize the parameters of the
recurrent network using a policy gradient method. We compare learning the
network parameters on a set of training graphs against learning them on
individual test graphs. Despite the computational expense, without much
engineering and heuristic designing, Neural Combinatorial Optimization achieves
close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied
to the KnapSack, another NP-hard problem, the same method obtains optimal
solutions for instances with up to 200 items.
|
[
"Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio"
] |
null | null |
1611.09940
| null | null |
http://arxiv.org/pdf/1611.09940v3
|
2017-01-12T23:55:36Z
|
2016-11-29T23:22:39Z
|
Neural Combinatorial Optimization with Reinforcement Learning
|
This paper presents a framework to tackle combinatorial optimization problems using neural networks and reinforcement learning. We focus on the traveling salesman problem (TSP) and train a recurrent network that, given a set of city coordinates, predicts a distribution over different city permutations. Using negative tour length as the reward signal, we optimize the parameters of the recurrent network using a policy gradient method. We compare learning the network parameters on a set of training graphs against learning them on individual test graphs. Despite the computational expense, without much engineering and heuristic designing, Neural Combinatorial Optimization achieves close to optimal results on 2D Euclidean graphs with up to 100 nodes. Applied to the KnapSack, another NP-hard problem, the same method obtains optimal solutions for instances with up to 200 items.
|
[
"['Irwan Bello' 'Hieu Pham' 'Quoc V. Le' 'Mohammad Norouzi' 'Samy Bengio']"
] |
cs.AI cs.LG stat.ML
| null |
1611.09957
| null | null |
http://arxiv.org/pdf/1611.09957v2
|
2017-05-16T21:21:03Z
|
2016-11-30T01:03:11Z
|
Low-dimensional Data Embedding via Robust Ranking
|
We describe a new method called t-ETE for finding a low-dimensional embedding
of a set of objects in Euclidean space. We formulate the embedding problem as a
joint ranking problem over a set of triplets, where each triplet captures the
relative similarities between three objects in the set. By exploiting recent
advances in robust ranking, t-ETE produces high-quality embeddings even in the
presence of a significant amount of noise and better preserves local scale than
known methods, such as t-STE and t-SNE. In particular, our method produces
significantly better results than t-SNE on signature datasets while also being
faster to compute.
|
[
"Ehsan Amid, Nikos Vlassis, Manfred K. Warmuth",
"['Ehsan Amid' 'Nikos Vlassis' 'Manfred K. Warmuth']"
] |
stat.ML cs.CV cs.LG
| null |
1611.09958
| null | null |
http://arxiv.org/pdf/1611.09958v2
|
2016-12-02T11:27:37Z
|
2016-11-30T01:17:34Z
|
Machine Learning for Dental Image Analysis
|
In order to study the application of artificial intelligence (AI) to dental
imaging, we applied AI technology to classify a set of panoramic radiographs
using (a) a convolutional neural network (CNN) which is a form of an artificial
neural network (ANN), (b) representative image cognition algorithms that
implement scale-invariant feature transform (SIFT), and (c) histogram of
oriented gradients (HOG).
|
[
"Young-jun Yu",
"['Young-jun Yu']"
] |
cs.CV cs.LG cs.MM
| null |
1611.10017
| null | null |
http://arxiv.org/pdf/1611.10017v1
|
2016-11-30T06:35:39Z
|
2016-11-30T06:35:39Z
|
Fast Supervised Discrete Hashing and its Analysis
|
In this paper, we propose a learning-based supervised discrete hashing
method. Binary hashing is widely used for large-scale image retrieval as well
as video and document searches because the compact representation of binary
code is essential for data storage and reasonable for query searches using
bit-operations. The recently proposed Supervised Discrete Hashing (SDH)
efficiently solves mixed-integer programming problems by alternating
optimization and the Discrete Cyclic Coordinate descent (DCC) method. We show
that the SDH model can be simplified without performance degradation based on
some preliminary experiments; we call the approximate model for this the "Fast
SDH" (FSDH) model. We analyze the FSDH model and provide a mathematically exact
solution for it. In contrast to SDH, our model does not require an alternating
optimization algorithm and does not depend on initial values. FSDH is also
easier to implement than Iterative Quantization (ITQ). Experimental results
involving a large-scale database showed that FSDH outperforms conventional SDH
in terms of precision, recall, and computation time.
|
[
"Gou Koutaki, Keiichiro Shirai, Mitsuru Ambai",
"['Gou Koutaki' 'Keiichiro Shirai' 'Mitsuru Ambai']"
] |
cs.LG cs.CV stat.ML
| null |
1611.10031
| null | null |
http://arxiv.org/pdf/1611.10031v1
|
2016-11-30T07:34:46Z
|
2016-11-30T07:34:46Z
|
Active Deep Learning for Classification of Hyperspectral Images
|
Active deep learning classification of hyperspectral images is considered in
this paper. Deep learning has achieved success in many applications, but
good-quality labeled samples are needed to construct a deep learning network.
It is expensive getting good labeled samples in hyperspectral images for remote
sensing applications. An active learning algorithm based on a weighted
incremental dictionary learning is proposed for such applications. The proposed
algorithm selects training samples that maximize two selection criteria, namely
representative and uncertainty. This algorithm trains a deep network
efficiently by actively selecting training samples at each iteration. The
proposed algorithm is applied for the classification of hyperspectral images,
and compared with other classification algorithms employing active learning. It
is shown that the proposed algorithm is efficient and effective in classifying
hyperspectral images.
|
[
"['Peng Liu' 'Hui Zhang' 'Kie B. Eom']",
"Peng Liu, Hui Zhang, and Kie B. Eom"
] |
math.OC cs.LG stat.ML
| null |
1611.10041
| null | null |
http://arxiv.org/pdf/1611.10041v1
|
2016-11-30T08:10:58Z
|
2016-11-30T08:10:58Z
|
Subsampled online matrix factorization with convergence guarantees
|
We present a matrix factorization algorithm that scales to input matrices
that are large in both dimensions (i.e., that contains morethan 1TB of data).
The algorithm streams the matrix columns while subsampling them, resulting in
low complexity per iteration andreasonable memory footprint. In contrast to
previous online matrix factorization methods, our approach relies on
low-dimensional statistics from past iterates to control the extra variance
introduced by subsampling. We present a convergence analysis that guarantees us
to reach a stationary point of the problem. Large speed-ups can be obtained
compared to previous online algorithms that do not perform subsampling, thanks
to the feature redundancy that often exists in high-dimensional settings.
|
[
"Arthur Mensch (PARIETAL), Julien Mairal (LEAR), Ga\\\"el Varoquaux\n (PARIETAL), Bertrand Thirion (PARIETAL)",
"['Arthur Mensch' 'Julien Mairal' 'Gaël Varoquaux' 'Bertrand Thirion']"
] |
cs.DC cs.LG
|
10.1109/CLOUD.2017.55
|
1611.10052
| null | null |
http://arxiv.org/abs/1611.10052v2
|
2016-12-16T09:45:04Z
|
2016-11-30T08:52:11Z
|
Performance Tuning of Hadoop MapReduce: A Noisy Gradient Approach
|
Hadoop MapReduce is a framework for distributed storage and processing of
large datasets that is quite popular in big data analytics. It has various
configuration parameters (knobs) which play an important role in deciding the
performance i.e., the execution time of a given big data processing job.
Default values of these parameters do not always result in good performance and
hence it is important to tune them. However, there is inherent difficulty in
tuning the parameters due to two important reasons - firstly, the parameter
search space is large and secondly, there are cross-parameter interactions.
Hence, there is a need for a dimensionality-free method which can automatically
tune the configuration parameters by taking into account the cross-parameter
dependencies. In this paper, we propose a novel Hadoop parameter tuning
methodology, based on a noisy gradient algorithm known as the simultaneous
perturbation stochastic approximation (SPSA). The SPSA algorithm tunes the
parameters by directly observing the performance of the Hadoop MapReduce
system. The approach followed is independent of parameter dimensions and
requires only $2$ observations per iteration while tuning. We demonstrate the
effectiveness of our methodology in achieving good performance on popular
Hadoop benchmarks namely \emph{Grep}, \emph{Bigram}, \emph{Inverted Index},
\emph{Word Co-occurrence} and \emph{Terasort}. Our method, when tested on a 25
node Hadoop cluster shows 66\% decrease in execution time of Hadoop jobs on an
average, when compared to the default configuration. Further, we also observe a
reduction of 45\% in execution times, when compared to prior methods.
|
[
"['Sandeep Kumar' 'Sindhu Padakandla' 'Chandrashekar L' 'Priyank Parihar'\n 'K Gopinath' 'Shalabh Bhatnagar']",
"Sandeep Kumar, Sindhu Padakandla, Chandrashekar L, Priyank Parihar, K\n Gopinath, Shalabh Bhatnagar"
] |
cs.LG cs.CV
| null |
1611.10176
| null | null |
http://arxiv.org/pdf/1611.10176v1
|
2016-11-30T14:33:08Z
|
2016-11-30T14:33:08Z
|
Effective Quantization Methods for Recurrent Neural Networks
|
Reducing bit-widths of weights, activations, and gradients of a Neural
Network can shrink its storage size and memory usage, and also allow for faster
training and inference by exploiting bitwise operations. However, previous
attempts for quantization of RNNs show considerable performance degradation
when using low bit-width weights and activations. In this paper, we propose
methods to quantize the structure of gates and interlinks in LSTM and GRU
cells. In addition, we propose balanced quantization methods for weights to
further reduce performance degradation. Experiments on PTB and IMDB datasets
confirm effectiveness of our methods as performances of our models match or
surpass the previous state-of-the-art of quantized RNN.
|
[
"['Qinyao He' 'He Wen' 'Shuchang Zhou' 'Yuxin Wu' 'Cong Yao' 'Xinyu Zhou'\n 'Yuheng Zou']",
"Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou,\n Yuheng Zou"
] |
cs.LG cs.AI
| null |
1611.10215
| null | null |
http://arxiv.org/pdf/1611.10215v3
|
2018-02-28T11:28:55Z
|
2016-11-30T15:24:55Z
|
Unit Commitment using Nearest Neighbor as a Short-Term Proxy
|
We devise the Unit Commitment Nearest Neighbor (UCNN) algorithm to be used as
a proxy for quickly approximating outcomes of short-term decisions, to make
tractable hierarchical long-term assessment and planning for large power
systems. Experimental results on updated versions of IEEE-RTS79 and IEEE-RTS96
show high accuracy measured on operational cost, achieved in runtimes that are
lower in several orders of magnitude than the traditional approach.
|
[
"Gal Dalal, Elad Gilboa, Shie Mannor, Louis Wehenkel",
"['Gal Dalal' 'Elad Gilboa' 'Shie Mannor' 'Louis Wehenkel']"
] |
cs.LG cs.GT
| null |
1611.10228
| null | null |
http://arxiv.org/pdf/1611.10228v1
|
2016-11-30T15:44:57Z
|
2016-11-30T15:44:57Z
|
Behavior-Based Machine-Learning: A Hybrid Approach for Predicting Human
Decision Making
|
A large body of work in behavioral fields attempts to develop models that
describe the way people, as opposed to rational agents, make decisions. A
recent Choice Prediction Competition (2015) challenged researchers to suggest a
model that captures 14 classic choice biases and can predict human decisions
under risk and ambiguity. The competition focused on simple decision problems,
in which human subjects were asked to repeatedly choose between two gamble
options.
In this paper we present our approach for predicting human decision behavior:
we suggest to use machine learning algorithms with features that are based on
well-established behavioral theories. The basic idea is that these
psychological features are essential for the representation of the data and are
important for the success of the learning process. We implement a vanilla model
in which we train SVM models using behavioral features that rely on the
psychological properties underlying the competition baseline model. We show
that this basic model captures the 14 choice biases and outperforms all the
other learning-based models in the competition. The preliminary results suggest
that such hybrid models can significantly improve the prediction of human
decision making, and are a promising direction for future research.
|
[
"['Gali Noti' 'Effi Levi' 'Yoav Kolumbus' 'Amit Daniely']",
"Gali Noti, Effi Levi, Yoav Kolumbus and Amit Daniely"
] |
q-bio.NC cs.AI cs.LG
| null |
1611.10252
| null | null |
http://arxiv.org/pdf/1611.10252v1
|
2016-11-29T18:11:00Z
|
2016-11-29T18:11:00Z
|
SeDMiD for Confusion Detection: Uncovering Mind State from Time Series
Brain Wave Data
|
Understanding how brain functions has been an intriguing topic for years.
With the recent progress on collecting massive data and developing advanced
technology, people have become interested in addressing the challenge of
decoding brain wave data into meaningful mind states, with many machine
learning models and algorithms being revisited and developed, especially the
ones that handle time series data because of the nature of brain waves.
However, many of these time series models, like HMM with hidden state in
discrete space or State Space Model with hidden state in continuous space, only
work with one source of data and cannot handle different sources of information
simultaneously. In this paper, we propose an extension of State Space Model to
work with different sources of information together with its learning and
inference algorithms. We apply this model to decode the mind state of students
during lectures based on their brain waves and reach a significant better
results compared to traditional methods.
|
[
"['Jingkang Yang' 'Haohan Wang' 'Jun Zhu' 'Eric P. Xing']",
"Jingkang Yang, Haohan Wang, Jun Zhu, Eric P. Xing"
] |
cs.LG cs.CC stat.ML
| null |
1611.10258
| null | null |
http://arxiv.org/pdf/1611.10258v1
|
2016-11-30T16:42:23Z
|
2016-11-30T16:42:23Z
|
Reliably Learning the ReLU in Polynomial Time
|
We give the first dimension-efficient algorithms for learning Rectified
Linear Units (ReLUs), which are functions of the form $\mathbf{x} \mapsto
\max(0, \mathbf{w} \cdot \mathbf{x})$ with $\mathbf{w} \in \mathbb{S}^{n-1}$.
Our algorithm works in the challenging Reliable Agnostic learning model of
Kalai, Kanade, and Mansour (2009) where the learner is given access to a
distribution $\cal{D}$ on labeled examples but the labeling may be arbitrary.
We construct a hypothesis that simultaneously minimizes the false-positive rate
and the loss on inputs given positive labels by $\cal{D}$, for any convex,
bounded, and Lipschitz loss function.
The algorithm runs in polynomial-time (in $n$) with respect to any
distribution on $\mathbb{S}^{n-1}$ (the unit sphere in $n$ dimensions) and for
any error parameter $\epsilon = \Omega(1/\log n)$ (this yields a PTAS for a
question raised by F. Bach on the complexity of maximizing ReLUs). These
results are in contrast to known efficient algorithms for reliably learning
linear threshold functions, where $\epsilon$ must be $\Omega(1)$ and strong
assumptions are required on the marginal distribution. We can compose our
results to obtain the first set of efficient algorithms for learning
constant-depth networks of ReLUs.
Our techniques combine kernel methods and polynomial approximations with a
"dual-loss" approach to convex programming. As a byproduct we obtain a number
of applications including the first set of efficient algorithms for "convex
piecewise-linear fitting" and the first efficient algorithms for noisy
polynomial reconstruction of low-weight polynomials on the unit sphere.
|
[
"['Surbhi Goel' 'Varun Kanade' 'Adam Klivans' 'Justin Thaler']",
"Surbhi Goel, Varun Kanade, Adam Klivans, Justin Thaler"
] |
cs.LG stat.ML
| null |
1611.10283
| null | null |
http://arxiv.org/pdf/1611.10283v3
|
2023-10-31T05:53:46Z
|
2016-11-30T17:37:51Z
|
Weighted bandits or: How bandits learn distorted values that are not
expected
|
Motivated by models of human decision making proposed to explain commonly
observed deviations from conventional expected value preferences, we formulate
two stochastic multi-armed bandit problems with distorted probabilities on the
cost distributions: the classic $K$-armed bandit and the linearly parameterized
bandit. In both settings, we propose algorithms that are inspired by Upper
Confidence Bound (UCB), incorporate cost distortions, and exhibit sublinear
regret assuming \holder continuous weight distortion functions. For the
$K$-armed setting, we show that the algorithm, called W-UCB, achieves
problem-dependent regret $O(L^2 M^2 \log n/ \Delta^{\frac{2}{\alpha}-1})$,
where $n$ is the number of plays, $\Delta$ is the gap in distorted expected
value between the best and next best arm, $L$ and $\alpha$ are the H\"{o}lder
constants for the distortion function, and $M$ is an upper bound on costs, and
a problem-independent regret bound of
$O((KL^2M^2)^{\alpha/2}n^{(2-\alpha)/2})$. We also present a matching lower
bound on the regret, showing that the regret of W-UCB is essentially
unimprovable over the class of H\"{o}lder-continuous weight distortions. For
the linearly parameterized setting, we develop a new algorithm, a variant of
the Optimism in the Face of Uncertainty Linear bandit (OFUL) algorithm called
WOFUL (Weight-distorted OFUL), and show that it has regret $O(d\sqrt{n} \;
\mbox{polylog}(n))$ with high probability, for sub-Gaussian cost distributions.
Finally, numerical examples demonstrate the advantages resulting from using
distortion-aware learning algorithms.
|
[
"Aditya Gopalan, L.A. Prashanth, Michael Fu and Steve Marcus",
"['Ravi Kumar Kolla' 'Prashanth L. A.' 'Aditya Gopalan'\n 'Krishna Jagannathan' 'Michael Fu' 'Steve Marcus']"
] |
cs.SI cs.LG stat.ML
| null |
1611.10305
| null | null |
http://arxiv.org/pdf/1611.10305v1
|
2016-11-30T18:46:55Z
|
2016-11-30T18:46:55Z
|
Influential Node Detection in Implicit Social Networks using Multi-task
Gaussian Copula Models
|
Influential node detection is a central research topic in social network
analysis. Many existing methods rely on the assumption that the network
structure is completely known \textit{a priori}. However, in many applications,
network structure is unavailable to explain the underlying information
diffusion phenomenon. To address the challenge of information diffusion
analysis with incomplete knowledge of network structure, we develop a
multi-task low rank linear influence model. By exploiting the relationships
between contagions, our approach can simultaneously predict the volume (i.e.
time series prediction) for each contagion (or topic) and automatically
identify the most influential nodes for each contagion. The proposed model is
validated using synthetic data and an ISIS twitter dataset. In addition to
improving the volume prediction performance significantly, we show that the
proposed approach can reliably infer the most influential users for specific
contagions.
|
[
"['Qunwei Li' 'Bhavya Kailkhura' 'Jayaraman J. Thiagarajan'\n 'Zhenliang Zhang' 'Pramod K. Varshney']",
"Qunwei Li, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Zhenliang\n Zhang, Pramod K. Varshney"
] |
cs.LG cs.AI
| null |
1611.10328
| null | null |
http://arxiv.org/pdf/1611.10328v1
|
2016-11-30T19:37:48Z
|
2016-11-30T19:37:48Z
|
The observer-assisted method for adjusting hyper-parameters in deep
learning algorithms
|
This paper presents a concept of a novel method for adjusting
hyper-parameters in Deep Learning (DL) algorithms. An external agent-observer
monitors a performance of a selected Deep Learning algorithm. The observer
learns to model the DL algorithm using a series of random experiments.
Consequently, it may be used for predicting a response of the DL algorithm in
terms of a selected quality measurement to a set of hyper-parameters. This
allows to construct an ensemble composed of a series of evaluators which
constitute an observer-assisted architecture. The architecture may be used to
gradually iterate towards to the best achievable quality score in tiny steps
governed by a unit of progress. The algorithm is stopped when the maximum
number of steps is reached or no further progress is made.
|
[
"['Maciej Wielgosz']",
"Maciej Wielgosz"
] |
cs.DC cs.LG
| null |
1611.10338
| null | null |
http://arxiv.org/pdf/1611.10338v1
|
2016-11-30T20:07:34Z
|
2016-11-30T20:07:34Z
|
SLA Violation Prediction In Cloud Computing: A Machine Learning
Perspective
|
Service level agreement (SLA) is an essential part of cloud systems to ensure
maximum availability of services for customers. With a violation of SLA, the
provider has to pay penalties. In this paper, we explore two machine learning
models: Naive Bayes and Random Forest Classifiers to predict SLA violations.
Since SLA violations are a rare event in the real world (~0.2 %), the
classification task becomes more challenging. In order to overcome these
challenges, we use several re-sampling methods. We find that random forests
with SMOTE-ENN re-sampling have the best performance among other methods with
the accuracy of 99.88 % and F_1 score of 0.9980.
|
[
"['Reyhane Askari Hemmat' 'Abdelhakim Hafid']",
"Reyhane Askari Hemmat, Abdelhakim Hafid"
] |
cs.LG cs.AI stat.ML
| null |
1611.10351
| null | null | null | null | null |
Joint Causal Inference from Multiple Contexts
|
The gold standard for discovering causal relations is by means of
experimentation. Over the last decades, alternative methods have been proposed
that can infer causal relations between variables from certain statistical
patterns in purely observational data. We introduce Joint Causal Inference
(JCI), a novel approach to causal discovery from multiple data sets from
different contexts that elegantly unifies both approaches. JCI is a causal
modeling framework rather than a specific algorithm, and it can be implemented
using any causal discovery algorithm that can take into account certain
background knowledge. JCI can deal with different types of interventions (e.g.,
perfect, imperfect, stochastic, etc.) in a unified fashion, and does not
require knowledge of intervention targets or types in case of interventional
data. We explain how several well-known causal discovery algorithms can be seen
as addressing special cases of the JCI framework, and we also propose novel
implementations that extend existing causal discovery methods for purely
observational data to the JCI setting. We evaluate different JCI
implementations on synthetic data and on flow cytometry protein expression data
and conclude that JCI implementations can considerably outperform
state-of-the-art causal discovery algorithms.
|
[
"Joris M. Mooij, Sara Magliacane, Tom Claassen"
] |
cs.LG stat.ML
| null |
1612.00086
| null | null |
http://arxiv.org/pdf/1612.00086v2
|
2016-12-03T09:09:05Z
|
2016-12-01T00:16:53Z
|
Semi-supervised Kernel Metric Learning Using Relative Comparisons
|
We consider the problem of metric learning subject to a set of constraints on
relative-distance comparisons between the data items. Such constraints are
meant to reflect side-information that is not expressed directly in the feature
vectors of the data items. The relative-distance constraints used in this work
are particularly effective in expressing structures at finer level of detail
than must-link (ML) and cannot-link (CL) constraints, which are most commonly
used for semi-supervised clustering. Relative-distance constraints are thus
useful in settings where providing an ML or a CL constraint is difficult
because the granularity of the true clustering is unknown.
Our main contribution is an efficient algorithm for learning a kernel matrix
using the log determinant divergence --- a variant of the Bregman divergence
--- subject to a set of relative-distance constraints. The learned kernel
matrix can then be employed by many different kernel methods in a wide range of
applications. In our experimental evaluations, we consider a semi-supervised
clustering setting and show empirically that kernels found by our algorithm
yield clusterings of higher quality than existing approaches that either use
ML/CL constraints or a different means to implement the supervision using
relative comparisons.
|
[
"['Ehsan Amid' 'Aristides Gionis' 'Antti Ukkonen']",
"Ehsan Amid, Aristides Gionis, Antti Ukkonen"
] |
cs.LG stat.ML
| null |
1612.001
| null | null | null | null | null |
Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling
|
We study the problem of recovering an incomplete $m\times n$ matrix of rank
$r$ with columns arriving online over time. This is known as the problem of
life-long matrix completion, and is widely applied to recommendation system,
computer vision, system identification, etc. The challenge is to design
provable algorithms tolerant to a large amount of noises, with small sample
complexity. In this work, we give algorithms achieving strong guarantee under
two realistic noise models. In bounded deterministic noise, an adversary can
add any bounded yet unstructured noise to each column. For this problem, we
present an algorithm that returns a matrix of a small error, with sample
complexity almost as small as the best prior results in the noiseless case. For
sparse random noise, where the corrupted columns are sparse and drawn randomly,
we give an algorithm that exactly recovers an $\mu_0$-incoherent matrix by
probability at least $1-\delta$ with sample complexity as small as
$O\left(\mu_0rn\log (r/\delta)\right)$. This result advances the
state-of-the-art work and matches the lower bound in a worst case. We also
study the scenario where the hidden matrix lies on a mixture of subspaces and
show that the sample complexity can be even smaller. Our proposed algorithms
perform well experimentally in both synthetic and real-world datasets.
|
[
"Maria-Florina Balcan and Hongyang Zhang"
] |
null | null |
1612.00100
| null | null |
http://arxiv.org/pdf/1612.00100v1
|
2016-12-01T01:10:07Z
|
2016-12-01T01:10:07Z
|
Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling
|
We study the problem of recovering an incomplete $mtimes n$ matrix of rank $r$ with columns arriving online over time. This is known as the problem of life-long matrix completion, and is widely applied to recommendation system, computer vision, system identification, etc. The challenge is to design provable algorithms tolerant to a large amount of noises, with small sample complexity. In this work, we give algorithms achieving strong guarantee under two realistic noise models. In bounded deterministic noise, an adversary can add any bounded yet unstructured noise to each column. For this problem, we present an algorithm that returns a matrix of a small error, with sample complexity almost as small as the best prior results in the noiseless case. For sparse random noise, where the corrupted columns are sparse and drawn randomly, we give an algorithm that exactly recovers an $mu_0$-incoherent matrix by probability at least $1-delta$ with sample complexity as small as $Oleft(mu_0rnlog (r/delta)right)$. This result advances the state-of-the-art work and matches the lower bound in a worst case. We also study the scenario where the hidden matrix lies on a mixture of subspaces and show that the sample complexity can be even smaller. Our proposed algorithms perform well experimentally in both synthetic and real-world datasets.
|
[
"['Maria-Florina Balcan' 'Hongyang Zhang']"
] |
cs.LG cs.AI cs.CR
| null |
1612.00108
| null | null |
http://arxiv.org/pdf/1612.00108v2
|
2016-12-02T18:26:18Z
|
2016-12-01T01:43:24Z
|
When to Reset Your Keys: Optimal Timing of Security Updates via Learning
|
Cybersecurity is increasingly threatened by advanced and persistent attacks.
As these attacks are often designed to disable a system (or a critical
resource, e.g., a user account) repeatedly, it is crucial for the defender to
keep updating its security measures to strike a balance between the risk of
being compromised and the cost of security updates. Moreover, these decisions
often need to be made with limited and delayed feedback due to the stealthy
nature of advanced attacks. In addition to targeted attacks, such an optimal
timing policy under incomplete information has broad applications in
cybersecurity. Examples include key rotation, password change, application of
patches, and virtual machine refreshing. However, rigorous studies of optimal
timing are rare. Further, existing solutions typically rely on a pre-defined
attack model that is known to the defender, which is often not the case in
practice. In this work, we make an initial effort towards achieving optimal
timing of security updates in the face of unknown stealthy attacks. We consider
a variant of the influential FlipIt game model with asymmetric feedback and
unknown attack time distribution, which provides a general model to consecutive
security updates. The defender's problem is then modeled as a time associative
bandit problem with dependent arms. We derive upper confidence bound based
learning policies that achieve low regret compared with optimal periodic
defense strategies that can only be derived when attack time distributions are
known.
|
[
"['Zizhan Zheng' 'Ness B. Shroff' 'Prasant Mohapatra']",
"Zizhan Zheng, Ness B. Shroff, Prasant Mohapatra"
] |
cs.LG cs.DB stat.ML
| null |
1612.00151
| null | null |
http://arxiv.org/pdf/1612.00151v1
|
2016-12-01T05:24:36Z
|
2016-12-01T05:24:36Z
|
A New Method for Classification of Datasets for Data Mining
|
Decision tree is an important method for both induction research and data
mining, which is mainly used for model classification and prediction. ID3
algorithm is the most widely used algorithm in the decision tree so far. In
this paper, the shortcoming of ID3's inclining to choose attributes with many
values is discussed, and then a new decision tree algorithm which is improved
version of ID3. In our proposed algorithm attributes are divided into groups
and then we apply the selection measure 5 for these groups. If information gain
is not good then again divide attributes values into groups. These steps are
done until we get good classification/misclassification ratio. The proposed
algorithms classify the data sets more accurately and efficiently.
|
[
"['Singh Vijendra' 'Hemjyotsana Parashar' 'Nisha Vasudeva']",
"Singh Vijendra, Hemjyotsana Parashar and Nisha Vasudeva"
] |
cs.NE cs.CV cs.LG
| null |
1612.00155
| null | null |
http://arxiv.org/pdf/1612.00155v1
|
2016-12-01T05:59:57Z
|
2016-12-01T05:59:57Z
|
Adversarial Images for Variational Autoencoders
|
We investigate adversarial attacks for autoencoders. We propose a procedure
that distorts the input image to mislead the autoencoder in reconstructing a
completely different target image. We attack the internal latent
representations, attempting to make the adversarial input produce an internal
representation as similar as possible as the target's. We find that
autoencoders are much more robust to the attack than classifiers: while some
examples have tolerably small input distortion, and reasonable similarity to
the target image, there is a quasi-linear trade-off between those aims. We
report results on MNIST and SVHN datasets, and also test regular deterministic
autoencoders, reaching similar conclusions in all cases. Finally, we show that
the usual adversarial attack for classifiers, while being much easier, also
presents a direct proportion between distortion on the input, and misdirection
on the output. That proportionality however is hidden by the normalization of
the output, which maps a linear layer into non-linear probabilities.
|
[
"['Pedro Tabacof' 'Julia Tavares' 'Eduardo Valle']",
"Pedro Tabacof, Julia Tavares, Eduardo Valle"
] |
cs.LG
| null |
1612.00188
| null | null |
http://arxiv.org/pdf/1612.00188v5
|
2017-06-13T07:07:33Z
|
2016-12-01T09:55:10Z
|
Efficient Orthogonal Parametrisation of Recurrent Neural Networks Using
Householder Reflections
|
The problem of learning long-term dependencies in sequences using Recurrent
Neural Networks (RNNs) is still a major challenge. Recent methods have been
suggested to solve this problem by constraining the transition matrix to be
unitary during training which ensures that its norm is equal to one and
prevents exploding gradients. These methods either have limited expressiveness
or scale poorly with the size of the network when compared with the simple RNN
case, especially when using stochastic gradient descent with a small mini-batch
size. Our contributions are as follows; we first show that constraining the
transition matrix to be unitary is a special case of an orthogonal constraint.
Then we present a new parametrisation of the transition matrix which allows
efficient training of an RNN while ensuring that the matrix is always
orthogonal. Our results show that the orthogonal constraint on the transition
matrix applied through our parametrisation gives similar benefits to the
unitary constraint, without the time complexity limitations.
|
[
"['Zakaria Mhammedi' 'Andrew Hellicar' 'Ashfaqur Rahman' 'James Bailey']",
"Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, James Bailey"
] |
physics.comp-ph cs.LG stat.ML
|
10.1063/1.4978623
|
1612.00193
| null | null |
http://arxiv.org/abs/1612.00193v2
|
2017-04-25T10:03:41Z
|
2016-12-01T10:23:59Z
|
Learning molecular energies using localized graph kernels
|
Recent machine learning methods make it possible to model potential energy of
atomic configurations with chemical-level accuracy (as calculated from
ab-initio calculations) and at speeds suitable for molecular dynam- ics
simulation. Best performance is achieved when the known physical constraints
are encoded in the machine learning models. For example, the atomic energy is
invariant under global translations and rotations, it is also invariant to
permutations of same-species atoms. Although simple to state, these symmetries
are complicated to encode into machine learning algorithms. In this paper, we
present a machine learning approach based on graph theory that naturally
incorporates translation, rotation, and permutation symmetries. Specifically,
we use a random walk graph kernel to measure the similarity of two adjacency
matrices, each of which represents a local atomic environment. This Graph
Approximated Energy (GRAPE) approach is flexible and admits many possible
extensions. We benchmark a simple version of GRAPE by predicting atomization
energies on a standard dataset of organic molecules.
|
[
"G. Ferr\\'e, T. Haut and K. Barros",
"['G. Ferré' 'T. Haut' 'K. Barros']"
] |
cs.CV cs.LG
| null |
1612.00212
| null | null |
http://arxiv.org/pdf/1612.00212v1
|
2016-12-01T11:56:15Z
|
2016-12-01T11:56:15Z
|
Training Bit Fully Convolutional Network for Fast Semantic Segmentation
|
Fully convolutional neural networks give accurate, per-pixel prediction for
input images and have applications like semantic segmentation. However, a
typical FCN usually requires lots of floating point computation and large
run-time memory, which effectively limits its usability. We propose a method to
train Bit Fully Convolution Network (BFCN), a fully convolutional neural
network that has low bit-width weights and activations. Because most of its
computation-intensive convolutions are accomplished between low bit-width
numbers, a BFCN can be accelerated by an efficient bit-convolution
implementation. On CPU, the dot product operation between two bit vectors can
be reduced to bitwise operations and popcounts, which can offer much higher
throughput than 32-bit multiplications and additions.
To validate the effectiveness of BFCN, we conduct experiments on the PASCAL
VOC 2012 semantic segmentation task and Cityscapes. Our BFCN with 1-bit weights
and 2-bit activations, which runs 7.8x faster on CPU or requires less than 1\%
resources on FPGA, can achieve comparable performance as the 32-bit
counterpart.
|
[
"['He Wen' 'Shuchang Zhou' 'Zhe Liang' 'Yuxiang Zhang' 'Dieqiao Feng'\n 'Xinyu Zhou' 'Cong Yao']",
"He Wen, Shuchang Zhou, Zhe Liang, Yuxiang Zhang, Dieqiao Feng, Xinyu\n Zhou, Cong Yao"
] |
q-fin.EC cs.LG nlin.AO
| null |
1612.00221
| null | null |
http://arxiv.org/pdf/1612.00221v1
|
2016-12-01T12:24:46Z
|
2016-12-01T12:24:46Z
|
The Coconut Model with Heterogeneous Strategies and Learning
|
In this paper, we develop an agent-based version of the Diamond search
equilibrium model - also called Coconut Model. In this model, agents are faced
with production decisions that have to be evaluated based on their expectations
about the future utility of the produced entity which in turn depends on the
global production level via a trading mechanism. While the original dynamical
systems formulation assumes an infinite number of homogeneously adapting agents
obeying strong rationality conditions, the agent-based setting allows to
discuss the effects of heterogeneous and adaptive expectations and enables the
analysis of non-equilibrium trajectories. Starting from a baseline
implementation that matches the asymptotic behavior of the original model, we
show how agent heterogeneity can be accounted for in the aggregate dynamical
equations. We then show that when agents adapt their strategies by a simple
temporal difference learning scheme, the system converges to one of the fixed
points of the original system. Systematic simulations reveal that this is the
only stable equilibrium solution.
|
[
"Sven Banisch and Eckehard Olbrich",
"['Sven Banisch' 'Eckehard Olbrich']"
] |
cs.AI cs.LG
| null |
1612.00222
| null | null |
http://arxiv.org/pdf/1612.00222v1
|
2016-12-01T12:34:54Z
|
2016-12-01T12:34:54Z
|
Interaction Networks for Learning about Objects, Relations and Physics
|
Reasoning about objects, relations, and physics is central to human
intelligence, and a key goal of artificial intelligence. Here we introduce the
interaction network, a model which can reason about how objects in complex
systems interact, supporting dynamical predictions, as well as inferences about
the abstract properties of the system. Our model takes graphs as input,
performs object- and relation-centric reasoning in a way that is analogous to a
simulation, and is implemented using deep neural networks. We evaluate its
ability to reason about several challenging physical domains: n-body problems,
rigid-body collision, and non-rigid dynamics. Our results show it can be
trained to accurately simulate the physical trajectories of dozens of objects
over thousands of time steps, estimate abstract quantities such as energy, and
generalize automatically to systems with different numbers and configurations
of objects and relations. Our interaction network implementation is the first
general-purpose, learnable physics engine, and a powerful general framework for
reasoning about object and relations in a wide variety of complex real-world
domains.
|
[
"Peter W. Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, Koray\n Kavukcuoglu",
"['Peter W. Battaglia' 'Razvan Pascanu' 'Matthew Lai' 'Danilo Rezende'\n 'Koray Kavukcuoglu']"
] |
cs.LG cs.CR cs.CV
| null |
1612.00334
| null | null | null | null | null |
A Theoretical Framework for Robustness of (Deep) Classifiers against
Adversarial Examples
|
Most machine learning classifiers, including deep neural networks, are
vulnerable to adversarial examples. Such inputs are typically generated by
adding small but purposeful modifications that lead to incorrect outputs while
imperceptible to human eyes. The goal of this paper is not to introduce a
single method, but to make theoretical steps towards fully understanding
adversarial examples. By using concepts from topology, our theoretical analysis
brings forth the key reasons why an adversarial example can fool a classifier
($f_1$) and adds its oracle ($f_2$, like human eyes) in such analysis. By
investigating the topological relationship between two (pseudo)metric spaces
corresponding to predictor $f_1$ and oracle $f_2$, we develop necessary and
sufficient conditions that can determine if $f_1$ is always robust
(strong-robust) against adversarial examples according to $f_2$. Interestingly
our theorems indicate that just one unnecessary feature can make $f_1$ not
strong-robust, and the right feature representation learning is the key to
getting a classifier that is both accurate and strong-robust.
|
[
"Beilun Wang, Ji Gao, Yanjun Qi"
] |
null | null |
1612.00334v
| null | null |
http://arxiv.org/pdf/1612.00334v12
|
2017-09-27T16:02:48Z
|
2016-12-01T16:20:39Z
|
A Theoretical Framework for Robustness of (Deep) Classifiers against
Adversarial Examples
|
Most machine learning classifiers, including deep neural networks, are vulnerable to adversarial examples. Such inputs are typically generated by adding small but purposeful modifications that lead to incorrect outputs while imperceptible to human eyes. The goal of this paper is not to introduce a single method, but to make theoretical steps towards fully understanding adversarial examples. By using concepts from topology, our theoretical analysis brings forth the key reasons why an adversarial example can fool a classifier ($f_1$) and adds its oracle ($f_2$, like human eyes) in such analysis. By investigating the topological relationship between two (pseudo)metric spaces corresponding to predictor $f_1$ and oracle $f_2$, we develop necessary and sufficient conditions that can determine if $f_1$ is always robust (strong-robust) against adversarial examples according to $f_2$. Interestingly our theorems indicate that just one unnecessary feature can make $f_1$ not strong-robust, and the right feature representation learning is the key to getting a classifier that is both accurate and strong-robust.
|
[
"['Beilun Wang' 'Ji Gao' 'Yanjun Qi']"
] |
cs.AI cs.LG
| null |
1612.00341
| null | null |
http://arxiv.org/pdf/1612.00341v2
|
2017-03-04T17:44:06Z
|
2016-12-01T16:39:04Z
|
A Compositional Object-Based Approach to Learning Physical Dynamics
|
We present the Neural Physics Engine (NPE), a framework for learning
simulators of intuitive physics that naturally generalize across variable
object count and different scene configurations. We propose a factorization of
a physical scene into composable object-based representations and a neural
network architecture whose compositional structure factorizes object dynamics
into pairwise interactions. Like a symbolic physics engine, the NPE is endowed
with generic notions of objects and their interactions; realized as a neural
network, it can be trained via stochastic gradient descent to adapt to specific
object properties and dynamics of different worlds. We evaluate the efficacy of
our approach on simple rigid body dynamics in two-dimensional worlds. By
comparing to less structured architectures, we show that the NPE's
compositional representation of the structure in physical interactions improves
its ability to predict movement, generalize across variable object count and
different scene configurations, and infer latent properties of objects such as
mass.
|
[
"Michael B. Chang, Tomer Ullman, Antonio Torralba, Joshua B. Tenenbaum",
"['Michael B. Chang' 'Tomer Ullman' 'Antonio Torralba'\n 'Joshua B. Tenenbaum']"
] |
cs.LG cs.AI stat.ML
| null |
1612.00367
| null | null |
http://arxiv.org/pdf/1612.00367v2
|
2017-06-25T11:00:30Z
|
2016-12-01T17:59:53Z
|
Large-scale Validation of Counterfactual Learning Methods: A Test-Bed
|
The ability to perform effective off-policy learning would revolutionize the
process of building better interactive systems, such as search engines and
recommendation systems for e-commerce, computational advertising and news.
Recent approaches for off-policy evaluation and learning in these settings
appear promising. With this paper, we provide real-world data and a
standardized test-bed to systematically investigate these algorithms using data
from display advertising. In particular, we consider the problem of filling a
banner ad with an aggregate of multiple products the user may want to purchase.
This paper presents our test-bed, the sanity checks we ran to ensure its
validity, and shows results comparing state-of-the-art off-policy learning
methods like doubly robust optimization, POEM, and reductions to supervised
learning using regression baselines. Our results show experimental evidence
that recent off-policy learning methods can improve upon state-of-the-art
supervised learning techniques on a large-scale real-world data set.
|
[
"['Damien Lefortier' 'Adith Swaminathan' 'Xiaotao Gu' 'Thorsten Joachims'\n 'Maarten de Rijke']",
"Damien Lefortier, Adith Swaminathan, Xiaotao Gu, Thorsten Joachims,\n Maarten de Rijke"
] |
stat.ML cs.LG
| null |
1612.00374
| null | null |
http://arxiv.org/pdf/1612.00374v2
|
2018-02-08T14:54:51Z
|
2016-12-01T18:14:33Z
|
Spatial Decompositions for Large Scale SVMs
|
Although support vector machines (SVMs) are theoretically well understood,
their underlying optimization problem becomes very expensive, if, for example,
hundreds of thousands of samples and a non-linear kernel are considered.
Several approaches have been proposed in the past to address this serious
limitation. In this work we investigate a decomposition strategy that learns on
small, spatially defined data chunks. Our contributions are two fold: On the
theoretical side we establish an oracle inequality for the overall learning
method using the hinge loss, and show that the resulting rates match those
known for SVMs solving the complete optimization problem with Gaussian kernels.
On the practical side we compare our approach to learning SVMs on small,
randomly chosen chunks. Here it turns out that for comparable training times
our approach is significantly faster during testing and also reduces the test
error in most cases significantly. Furthermore, we show that our approach
easily scales up to 10 million training samples: including hyper-parameter
selection using cross validation, the entire training only takes a few hours on
a single machine. Finally, we report an experiment on 32 million training
samples. All experiments used liquidSVM (Steinwart and Thomann, 2017).
|
[
"['Philipp Thomann' 'Ingrid Blaschzyk' 'Mona Meister' 'Ingo Steinwart']",
"Philipp Thomann and Ingrid Blaschzyk and Mona Meister and Ingo\n Steinwart"
] |
cs.CL cs.AI cs.LG cs.NE
| null |
1612.00377
| null | null |
http://arxiv.org/pdf/1612.00377v4
|
2017-09-23T13:33:55Z
|
2016-12-01T18:49:23Z
|
Piecewise Latent Variables for Neural Variational Text Processing
|
Advances in neural variational inference have facilitated the learning of
powerful directed graphical models with continuous latent variables, such as
variational autoencoders. The hope is that such models will learn to represent
rich, multi-modal latent factors in real-world data, such as natural language
text. However, current models often assume simplistic priors on the latent
variables - such as the uni-modal Gaussian distribution - which are incapable
of representing complex latent factors efficiently. To overcome this
restriction, we propose the simple, but highly flexible, piecewise constant
distribution. This distribution has the capacity to represent an exponential
number of modes of a latent target distribution, while remaining mathematically
tractable. Our results demonstrate that incorporating this new latent
distribution into different models yields substantial improvements in natural
language processing tasks such as document modeling and natural language
generation for dialogue.
|
[
"['Iulian V. Serban' 'Alexander G. Ororbia II' 'Joelle Pineau'\n 'Aaron Courville']",
"Iulian V. Serban, Alexander G. Ororbia II, Joelle Pineau, Aaron\n Courville"
] |
stat.ML cs.LG
| null |
1612.00383
| null | null |
http://arxiv.org/pdf/1612.00383v1
|
2016-12-01T19:08:12Z
|
2016-12-01T19:08:12Z
|
Tuning the Scheduling of Distributed Stochastic Gradient Descent with
Bayesian Optimization
|
We present an optimizer which uses Bayesian optimization to tune the system
parameters of distributed stochastic gradient descent (SGD). Given a specific
context, our goal is to quickly find efficient configurations which
appropriately balance the load between the available machines to minimize the
average SGD iteration time. Our experiments consider setups with over thirty
parameters. Traditional Bayesian optimization, which uses a Gaussian process as
its model, is not well suited to such high dimensional domains. To reduce
convergence time, we exploit the available structure. We design a probabilistic
model which simulates the behavior of distributed SGD and use it within
Bayesian optimization. Our model can exploit many runtime measurements for
inference per evaluation of the objective function. Our experiments show that
our resulting optimizer converges to efficient configurations within ten
iterations, the optimized configurations outperform those found by generic
optimizer in thirty iterations by up to 2X.
|
[
"Valentin Dalibard, Michael Schaarschmidt, Eiko Yoneki",
"['Valentin Dalibard' 'Michael Schaarschmidt' 'Eiko Yoneki']"
] |
stat.ML cs.LG stat.AP
| null |
1612.00388
| null | null |
http://arxiv.org/pdf/1612.00388v1
|
2016-12-01T19:21:22Z
|
2016-12-01T19:21:22Z
|
Diet2Vec: Multi-scale analysis of massive dietary data
|
Smart phone apps that enable users to easily track their diets have become
widespread in the last decade. This has created an opportunity to discover new
insights into obesity and weight loss by analyzing the eating habits of the
users of such apps. In this paper, we present diet2vec: an approach to modeling
latent structure in a massive database of electronic diet journals. Through an
iterative contract-and-expand process, our model learns real-valued embeddings
of users' diets, as well as embeddings for individual foods and meals. We
demonstrate the effectiveness of our approach on a real dataset of 55K users of
the popular diet-tracking app LoseIt\footnote{http://www.loseit.com/}. To the
best of our knowledge, this is the largest fine-grained diet tracking study in
the history of nutrition and obesity research. Our results suggest that
diet2vec finds interpretable results at all levels, discovering intuitive
representations of foods, meals, and diets.
|
[
"Wesley Tansey and Edward W. Lowe Jr. and James G. Scott",
"['Wesley Tansey' 'Edward W. Lowe Jr.' 'James G. Scott']"
] |
stat.ML cs.LG
| null |
1612.00393
| null | null |
http://arxiv.org/pdf/1612.00393v1
|
2016-12-01T19:41:50Z
|
2016-12-01T19:41:50Z
|
Hypervolume-based Multi-objective Bayesian Optimization with Student-t
Processes
|
Student-$t$ processes have recently been proposed as an appealing alternative
non-parameteric function prior. They feature enhanced flexibility and
predictive variance. In this work the use of Student-$t$ processes are explored
for multi-objective Bayesian optimization. In particular, an analytical
expression for the hypervolume-based probability of improvement is developed
for independent Student-$t$ process priors of the objectives. Its effectiveness
is shown on a multi-objective optimization problem which is known to be
difficult with traditional Gaussian processes.
|
[
"Joachim van der Herten and Ivo Couckuyt and Tom Dhaene",
"['Joachim van der Herten' 'Ivo Couckuyt' 'Tom Dhaene']"
] |
cs.LG cs.IT math.IT
| null |
1612.0041
| null | null | null | null | null |
Deep Variational Information Bottleneck
|
We present a variational approximation to the information bottleneck of
Tishby et al. (1999). This variational approach allows us to parameterize the
information bottleneck model using a neural network and leverage the
reparameterization trick for efficient training. We call this method "Deep
Variational Information Bottleneck", or Deep VIB. We show that models trained
with the VIB objective outperform those that are trained with other forms of
regularization, in terms of generalization performance and robustness to
adversarial attack.
|
[
"Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, Kevin Murphy"
] |
null | null |
1612.00410
| null | null |
http://arxiv.org/pdf/1612.00410v7
|
2019-10-23T22:47:44Z
|
2016-12-01T20:12:40Z
|
Deep Variational Information Bottleneck
|
We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method "Deep Variational Information Bottleneck", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack.
|
[
"['Alexander A. Alemi' 'Ian Fischer' 'Joshua V. Dillon' 'Kevin Murphy']"
] |
cs.LG cs.AI cs.RO
| null |
1612.00429
| null | null |
http://arxiv.org/pdf/1612.00429v2
|
2017-03-09T19:46:12Z
|
2016-12-01T20:48:39Z
|
Generalizing Skills with Semi-Supervised Reinforcement Learning
|
Deep reinforcement learning (RL) can acquire complex behaviors from low-level
inputs, such as images. However, real-world applications of such methods
require generalizing to the vast variability of the real world. Deep networks
are known to achieve remarkable generalization when provided with massive
amounts of labeled data, but can we provide this breadth of experience to an RL
agent, such as a robot? The robot might continuously learn as it explores the
world around it, even while deployed. However, this learning requires access to
a reward function, which is often hard to measure in real-world domains, where
the reward could depend on, for example, unknown positions of objects or the
emotional state of the user. Conversely, it is often quite practical to provide
the agent with reward functions in a limited set of situations, such as when a
human supervisor is present or in a controlled setting. Can we make use of this
limited supervision, and still benefit from the breadth of experience an agent
might collect on its own? In this paper, we formalize this problem as
semisupervised reinforcement learning, where the reward function can only be
evaluated in a set of "labeled" MDPs, and the agent must generalize its
behavior to the wide range of states it might encounter in a set of "unlabeled"
MDPs, by using experience from both settings. Our proposed method infers the
task objective in the unlabeled MDPs through an algorithm that resembles
inverse RL, using the agent's own prior experience in the labeled MDPs as a
kind of demonstration of optimal behavior. We evaluate our method on
challenging tasks that require control directly from images, and show that our
approach can improve the generalization of a learned deep neural network policy
by using experience for which no reward function is available. We also show
that our method outperforms direct supervised learning of the reward.
|
[
"['Chelsea Finn' 'Tianhe Yu' 'Justin Fu' 'Pieter Abbeel' 'Sergey Levine']",
"Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine"
] |
stat.ML cs.AI cs.LG
| null |
1612.00475
| null | null |
http://arxiv.org/pdf/1612.00475v1
|
2016-12-01T21:26:52Z
|
2016-12-01T21:26:52Z
|
Transfer Learning Across Patient Variations with Hidden Parameter Markov
Decision Processes
|
Due to physiological variation, patients diagnosed with the same condition
may exhibit divergent, but related, responses to the same treatments. Hidden
Parameter Markov Decision Processes (HiP-MDPs) tackle this transfer-learning
problem by embedding these tasks into a low-dimensional space. However, the
original formulation of HiP-MDP had a critical flaw: the embedding uncertainty
was modeled independently of the agent's state uncertainty, requiring an
unnatural training procedure in which all tasks visited every part of the state
space---possible for robots that can be moved to a particular location,
impossible for human patients. We update the HiP-MDP framework and extend it to
more robustly develop personalized medicine strategies for HIV treatment.
|
[
"['Taylor Killian' 'George Konidaris' 'Finale Doshi-Velez']",
"Taylor Killian, George Konidaris, Finale Doshi-Velez"
] |
stat.ML cs.LG
| null |
1612.00516
| null | null |
http://arxiv.org/pdf/1612.00516v2
|
2017-01-06T16:42:36Z
|
2016-12-01T23:38:34Z
|
Canonical Correlation Analysis for Analyzing Sequences of Medical
Billing Codes
|
We propose using canonical correlation analysis (CCA) to generate features
from sequences of medical billing codes. Applying this novel use of CCA to a
database of medical billing codes for patients with diverticulitis, we first
demonstrate that the CCA embeddings capture meaningful relationships among the
codes. We then generate features from these embeddings and establish their
usefulness in predicting future elective surgery for diverticulitis, an
important marker in efforts for reducing costs in healthcare.
|
[
"Corinne L. Jones, Sham M. Kakade, Lucas W. Thornblade, David R. Flum,\n Abraham D. Flaxman",
"['Corinne L. Jones' 'Sham M. Kakade' 'Lucas W. Thornblade' 'David R. Flum'\n 'Abraham D. Flaxman']"
] |
cs.LG q-bio.GN stat.ML
| null |
1612.00525
| null | null |
http://arxiv.org/pdf/1612.00525v2
|
2016-12-05T05:15:51Z
|
2016-12-02T00:41:11Z
|
A Noise-Filtering Approach for Cancer Drug Sensitivity Prediction
|
Accurately predicting drug responses to cancer is an important problem
hindering oncologists' efforts to find the most effective drugs to treat
cancer, which is a core goal in precision medicine. The scientific community
has focused on improving this prediction based on genomic, epigenomic, and
proteomic datasets measured in human cancer cell lines. Real-world cancer cell
lines contain noise, which degrades the performance of machine learning
algorithms. This problem is rarely addressed in the existing approaches. In
this paper, we present a noise-filtering approach that integrates techniques
from numerical linear algebra and information retrieval targeted at filtering
out noisy cancer cell lines. By filtering out noisy cancer cell lines, we can
train machine learning algorithms on better quality cancer cell lines. We
evaluate the performance of our approach and compare it with an existing
approach using the Area Under the ROC Curve (AUC) on clinical trial data. The
experimental results show that our proposed approach is stable and also yields
the highest AUC at a statistically significant level.
|
[
"['Turki Turki' 'Zhi Wei']",
"Turki Turki and Zhi Wei"
] |
cs.CV cs.LG
| null |
1612.00542
| null | null |
http://arxiv.org/pdf/1612.00542v1
|
2016-12-02T02:06:15Z
|
2016-12-02T02:06:15Z
|
Breast Mass Classification from Mammograms using Deep Convolutional
Neural Networks
|
Mammography is the most widely used method to screen breast cancer. Because
of its mostly manual nature, variability in mass appearance, and low
signal-to-noise ratio, a significant number of breast masses are missed or
misdiagnosed. In this work, we present how Convolutional Neural Networks can be
used to directly classify pre-segmented breast masses in mammograms as benign
or malignant, using a combination of transfer learning, careful pre-processing
and data augmentation to overcome limited training data. We achieve
state-of-the-art results on the DDSM dataset, surpassing human performance, and
show interpretability of our model.
|
[
"Daniel L\\'evy, Arzav Jain",
"['Daniel Lévy' 'Arzav Jain']"
] |
cs.LG
| null |
1612.00554
| null | null |
http://arxiv.org/pdf/1612.00554v1
|
2016-12-02T03:34:44Z
|
2016-12-02T03:34:44Z
|
Higher Order Mutual Information Approximation for Feature Selection
|
Feature selection is a process of choosing a subset of relevant features so
that the quality of prediction models can be improved. An extensive body of
work exists on information-theoretic feature selection, based on maximizing
Mutual Information (MI) between subsets of features and class labels. The prior
methods use a lower order approximation, by treating the joint entropy as a
summation of several single variable entropies. This leads to locally optimal
selections and misses multi-way feature combinations. We present a higher order
MI based approximation technique called Higher Order Feature Selection (HOFS).
Instead of producing a single list of features, our method produces a ranked
collection of feature subsets that maximizes MI, giving better comprehension
(feature ranking) as to which features work best together when selected, due to
their underlying interdependent structure. Our experiments demonstrate that the
proposed method performs better than existing feature selection approaches
while keeping similar running times and computational complexity.
|
[
"Jilin Wu and Soumyajit Gupta and Chandrajit Bajaj",
"['Jilin Wu' 'Soumyajit Gupta' 'Chandrajit Bajaj']"
] |
cs.LG cs.AI cs.CV
| null |
1612.00563
| null | null |
http://arxiv.org/pdf/1612.00563v2
|
2017-11-16T02:38:37Z
|
2016-12-02T04:37:22Z
|
Self-critical Sequence Training for Image Captioning
|
Recently it has been shown that policy-gradient methods for reinforcement
learning can be utilized to train deep end-to-end systems directly on
non-differentiable metrics for the task at hand. In this paper we consider the
problem of optimizing image captioning systems using reinforcement learning,
and show that by carefully optimizing our systems using the test metrics of the
MSCOCO task, significant gains in performance can be realized. Our systems are
built using a new optimization approach that we call self-critical sequence
training (SCST). SCST is a form of the popular REINFORCE algorithm that, rather
than estimating a "baseline" to normalize the rewards and reduce variance,
utilizes the output of its own test-time inference algorithm to normalize the
rewards it experiences. Using this approach, estimating the reward signal (as
actor-critic methods must do) and estimating normalization (as REINFORCE
algorithms typically do) is avoided, while at the same time harmonizing the
model with respect to its test-time inference procedure. Empirically we find
that directly optimizing the CIDEr metric with SCST and greedy decoding at
test-time is highly effective. Our results on the MSCOCO evaluation sever
establish a new state-of-the-art on the task, improving the best result in
terms of CIDEr from 104.9 to 114.7.
|
[
"Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross and\n Vaibhava Goel",
"['Steven J. Rennie' 'Etienne Marcheret' 'Youssef Mroueh' 'Jarret Ross'\n 'Vaibhava Goel']"
] |
stat.ML cs.AI cs.LG
| null |
1612.00583
| null | null |
http://arxiv.org/pdf/1612.00583v1
|
2016-12-02T07:44:45Z
|
2016-12-02T07:44:45Z
|
Active Search for Sparse Signals with Region Sensing
|
Autonomous systems can be used to search for sparse signals in a large space;
e.g., aerial robots can be deployed to localize threats, detect gas leaks, or
respond to distress calls. Intuitively, search algorithms may increase
efficiency by collecting aggregate measurements summarizing large contiguous
regions. However, most existing search methods either ignore the possibility of
such region observations (e.g., Bayesian optimization and multi-armed bandits)
or make strong assumptions about the sensing mechanism that allow each
measurement to arbitrarily encode all signals in the entire environment (e.g.,
compressive sensing). We propose an algorithm that actively collects data to
search for sparse signals using only noisy measurements of the average values
on rectangular regions (including single points), based on the greedy
maximization of information gain. We analyze our algorithm in 1d and show that
it requires $\tilde{O}(\frac{n}{\mu^2}+k^2)$ measurements to recover all of $k$
signal locations with small Bayes error, where $\mu$ and $n$ are the signal
strength and the size of the search space, respectively. We also show that
active designs can be fundamentally more efficient than passive designs with
region sensing, contrasting with the results of Arias-Castro, Candes, and
Davenport (2013). We demonstrate the empirical performance of our algorithm on
a search problem using satellite image data and in high dimensions.
|
[
"['Yifei Ma' 'Roman Garnett' 'Jeff Schneider']",
"Yifei Ma and Roman Garnett and Jeff Schneider"
] |
cs.LG cs.CE stat.AP stat.ML
| null |
1612.00585
| null | null |
http://arxiv.org/pdf/1612.00585v1
|
2016-12-02T07:56:23Z
|
2016-12-02T07:56:23Z
|
Development of a hybrid learning system based on SVM, ANFIS and domain
knowledge: DKFIS
|
This paper presents the development of a hybrid learning system based on
Support Vector Machines (SVM), Adaptive Neuro-Fuzzy Inference System (ANFIS)
and domain knowledge to solve prediction problem. The proposed two-stage Domain
Knowledge based Fuzzy Information System (DKFIS) improves the prediction
accuracy attained by ANFIS alone. The proposed framework has been implemented
on a noisy and incomplete dataset acquired from a hydrocarbon field located at
western part of India. Here, oil saturation has been predicted from four
different well logs i.e. gamma ray, resistivity, density, and clay volume. In
the first stage, depending on zero or near zero and non-zero oil saturation
levels the input vector is classified into two classes (Class 0 and Class 1)
using SVM. The classification results have been further fine-tuned applying
expert knowledge based on the relationship among predictor variables i.e. well
logs and target variable - oil saturation. Second, an ANFIS is designed to
predict non-zero (Class 1) oil saturation values from predictor logs. The
predicted output has been further refined based on expert knowledge. It is
apparent from the experimental results that the expert intervention with
qualitative judgment at each stage has rendered the prediction into the
feasible and realistic ranges. The performance analysis of the prediction in
terms of four performance metrics such as correlation coefficient (CC), root
mean square error (RMSE), and absolute error mean (AEM), scatter index (SI) has
established DKFIS as a useful tool for reservoir characterization.
|
[
"Soumi Chaki, Aurobinda Routray, William K. Mohanty, Mamata Jenamani",
"['Soumi Chaki' 'Aurobinda Routray' 'William K. Mohanty' 'Mamata Jenamani']"
] |
cs.LG stat.ML
| null |
1612.00599
| null | null |
http://arxiv.org/pdf/1612.00599v1
|
2016-12-02T09:01:57Z
|
2016-12-02T09:01:57Z
|
Communication Lower Bounds for Distributed Convex Optimization:
Partition Data on Features
|
Recently, there has been an increasing interest in designing distributed
convex optimization algorithms under the setting where the data matrix is
partitioned on features. Algorithms under this setting sometimes have many
advantages over those under the setting where data is partitioned on samples,
especially when the number of features is huge. Therefore, it is important to
understand the inherent limitations of these optimization problems. In this
paper, with certain restrictions on the communication allowed in the
procedures, we develop tight lower bounds on communication rounds for a broad
class of non-incremental algorithms under this setting. We also provide a lower
bound on communication rounds for a class of (randomized) incremental
algorithms.
|
[
"['Zihao Chen' 'Luo Luo' 'Zhihua Zhang']",
"Zihao Chen, Luo Luo, Zhihua Zhang"
] |
cs.LG
| null |
1612.00611
| null | null |
http://arxiv.org/pdf/1612.00611v1
|
2016-12-02T10:03:09Z
|
2016-12-02T10:03:09Z
|
Predictive Clinical Decision Support System with RNN Encoding and Tensor
Decoding
|
With the introduction of the Electric Health Records, large amounts of
digital data become available for analysis and decision support. When
physicians are prescribing treatments to a patient, they need to consider a
large range of data variety and volume, making decisions increasingly complex.
Machine learning based Clinical Decision Support systems can be a solution to
the data challenges. In this work we focus on a class of decision support in
which the physicians' decision is directly predicted. Concretely, the model
would assign higher probabilities to decisions that it presumes the physician
are more likely to make. Thus the CDS system can provide physicians with
rational recommendations. We also address the problem of correlation in target
features: Often a physician is required to make multiple (sub-)decisions in a
block, and that these decisions are mutually dependent. We propose a solution
to the target correlation problem using a tensor factorization model. In order
to handle the patients' historical information as sequential data, we apply the
so-called Encoder-Decoder-Framework which is based on Recurrent Neural Networks
(RNN) as encoders and a tensor factorization model as a decoder, a combination
which is novel in machine learning. With experiments with real-world datasets
we show that the proposed model does achieve better prediction performances.
|
[
"Yinchong Yang, Peter A. Fasching, Markus Wallwiener, Tanja N. Fehm,\n Sara Y. Brucker, Volker Tresp",
"['Yinchong Yang' 'Peter A. Fasching' 'Markus Wallwiener' 'Tanja N. Fehm'\n 'Sara Y. Brucker' 'Volker Tresp']"
] |
stat.ML cs.LG
| null |
1612.00615
| null | null |
http://arxiv.org/pdf/1612.00615v1
|
2016-12-02T10:13:16Z
|
2016-12-02T10:13:16Z
|
A temporal model for multiple sclerosis course evolution
|
Multiple Sclerosis is a degenerative condition of the central nervous system
that affects nearly 2.5 million of individuals in terms of their physical,
cognitive, psychological and social capabilities. Researchers are currently
investigating on the use of patient reported outcome measures for the
assessment of impact and evolution of the disease on the life of the patients.
To date, a clear understanding on the use of such measures to predict the
evolution of the disease is still lacking. In this work we resort to
regularized machine learning methods for binary classification and multiple
output regression. We propose a pipeline that can be used to predict the
disease progression from patient reported measures. The obtained model is
tested on a data set collected from an ongoing clinical research project.
|
[
"Samuele Fiorini, Andrea Tacchino, Giampaolo Brichetto, Alessandro\n Verri, Annalisa Barla",
"['Samuele Fiorini' 'Andrea Tacchino' 'Giampaolo Brichetto'\n 'Alessandro Verri' 'Annalisa Barla']"
] |
cs.LG
| null |
1612.00637
| null | null |
http://arxiv.org/pdf/1612.00637v1
|
2016-12-02T11:27:44Z
|
2016-12-02T11:27:44Z
|
A General Framework for Density Based Time Series Clustering Exploiting
a Novel Admissible Pruning Strategy
|
Time Series Clustering is an important subroutine in many higher-level data
mining analyses, including data editing for classifiers, summarization, and
outlier detection. It is well known that for similarity search the superiority
of Dynamic Time Warping (DTW) over Euclidean distance gradually diminishes as
we consider ever larger datasets. However, as we shall show, the same is not
true for clustering. Clustering time series under DTW remains a computationally
expensive operation. In this work, we address this issue in two ways. We
propose a novel pruning strategy that exploits both the upper and lower bounds
to prune off a very large fraction of the expensive distance calculations. This
pruning strategy is admissible and gives us provably identical results to the
brute force algorithm, but is at least an order of magnitude faster. For
datasets where even this level of speedup is inadequate, we show that we can
use a simple heuristic to order the unavoidable calculations in a
most-useful-first ordering, thus casting the clustering into an anytime
framework. We demonstrate the utility of our ideas with both single and
multidimensional case studies in the domains of astronomy, speech physiology,
medicine and entomology. In addition, we show the generality of our clustering
framework to other domains by efficiently obtaining semantically significant
clusters in protein sequences using the Edit Distance, the discrete data
analogue of DTW.
|
[
"Nurjahan Begum, Liudmila Ulanova, Hoang Anh Dau, Jun Wang and Eamonn\n Keogh",
"['Nurjahan Begum' 'Liudmila Ulanova' 'Hoang Anh Dau' 'Jun Wang'\n 'Eamonn Keogh']"
] |
cs.HC cs.AI cs.LG stat.ML
|
10.1145/3025453.3025576
|
1612.00653
| null | null |
http://arxiv.org/abs/1612.00653v2
|
2017-01-13T12:15:47Z
|
2016-12-02T12:20:47Z
|
Inferring Cognitive Models from Data using Approximate Bayesian
Computation
|
An important problem for HCI researchers is to estimate the parameter values
of a cognitive model from behavioral data. This is a difficult problem, because
of the substantial complexity and variety in human behavioral strategies. We
report an investigation into a new approach using approximate Bayesian
computation (ABC) to condition model parameters to data and prior knowledge. As
the case study we examine menu interaction, where we have click time data only
to infer a cognitive model that implements a search behaviour with parameters
such as fixation duration and recall probability. Our results demonstrate that
ABC (i) improves estimates of model parameter values, (ii) enables meaningful
comparisons between model variants, and (iii) supports fitting models to
individual users. ABC provides ample opportunities for theoretical HCI research
by allowing principled inference of model parameter values and their
uncertainty.
|
[
"['Antti Kangasrääsiö' 'Kumaripaba Athukorala' 'Andrew Howes'\n 'Jukka Corander' 'Samuel Kaski' 'Antti Oulasvirta']",
"Antti Kangasr\\\"a\\\"asi\\\"o, Kumaripaba Athukorala, Andrew Howes, Jukka\n Corander, Samuel Kaski, Antti Oulasvirta"
] |
stat.ML cs.LG
| null |
1612.00662
| null | null |
http://arxiv.org/pdf/1612.00662v1
|
2016-12-02T12:44:31Z
|
2016-12-02T12:44:31Z
|
Predicting Patient State-of-Health using Sliding Window and Recurrent
Classifiers
|
Bedside monitors in Intensive Care Units (ICUs) frequently sound incorrectly,
slowing response times and desensitising nurses to alarms (Chambrin, 2001),
causing true alarms to be missed (Hug et al., 2011). We compare sliding window
predictors with recurrent predictors to classify patient state-of-health from
ICU multivariate time series; we report slightly improved performance for the
RNN for three out of four targets.
|
[
"['Adam McCarthy' 'Christopher K. I. Williams']",
"Adam McCarthy and Christopher K.I. Williams"
] |
stat.ML cs.CV cs.LG q-bio.NC stat.AP
| null |
1612.00667
| null | null |
http://arxiv.org/pdf/1612.00667v3
|
2017-04-18T20:12:16Z
|
2016-12-02T12:59:11Z
|
Voxelwise nonlinear regression toolbox for neuroimage analysis:
Application to aging and neurodegenerative disease modeling
|
This paper describes a new neuroimaging analysis toolbox that allows for the
modeling of nonlinear effects at the voxel level, overcoming limitations of
methods based on linear models like the GLM. We illustrate its features using a
relevant example in which distinct nonlinear trajectories of Alzheimer's
disease related brain atrophy patterns were found across the full biological
spectrum of the disease. The open-source toolbox presented in this paper is
available at https://github.com/imatge-upc/VNeAT.
|
[
"['Santi Puch' 'Asier Aduriz' 'Adrià Casamitjana' 'Veronica Vilaplana'\n 'Paula Petrone' 'Grégory Operto' 'Raffaele Cacciaglia' 'Stavros Skouras'\n 'Carles Falcon' 'José Luis Molinuevo' 'Juan Domingo Gispert']",
"Santi Puch, Asier Aduriz, Adri\\`a Casamitjana, Veronica Vilaplana,\n Paula Petrone, Gr\\'egory Operto, Raffaele Cacciaglia, Stavros Skouras, Carles\n Falcon, Jos\\'e Luis Molinuevo, Juan Domingo Gispert"
] |
cs.NE cs.LG
| null |
1612.00671
| null | null |
http://arxiv.org/pdf/1612.00671v1
|
2016-11-30T19:58:44Z
|
2016-11-30T19:58:44Z
|
Reliable Evaluation of Neural Network for Multiclass Classification of
Real-world Data
|
This paper presents a systematic evaluation of Neural Network (NN) for
classification of real-world data. In the field of machine learning, it is
often seen that a single parameter that is 'predictive accuracy' is being used
for evaluating the performance of a classifier model. However, this parameter
might not be considered reliable given a dataset with very high level of
skewness. To demonstrate such behavior, seven different types of datasets have
been used to evaluate a Multilayer Perceptron (MLP) using twelve(12) different
parameters which include micro- and macro-level estimation. In the present
study, the most common problem of prediction called 'multiclass' classification
has been considered. The results that are obtained for different parameters for
each of the dataset could demonstrate interesting findings to support the
usability of these set of performance evaluation parameters.
|
[
"['Siddharth Dinesh' 'Tirtharaj Dash']",
"Siddharth Dinesh, Tirtharaj Dash"
] |
cs.LG cs.CV
| null |
1612.00686
| null | null |
http://arxiv.org/pdf/1612.00686v1
|
2016-12-02T14:05:49Z
|
2016-12-02T14:05:49Z
|
Identifying and Categorizing Anomalies in Retinal Imaging Data
|
The identification and quantification of markers in medical images is
critical for diagnosis, prognosis and management of patients in clinical
practice. Supervised- or weakly supervised training enables the detection of
findings that are known a priori. It does not scale well, and a priori
definition limits the vocabulary of markers to known entities reducing the
accuracy of diagnosis and prognosis. Here, we propose the identification of
anomalies in large-scale medical imaging data using healthy examples as a
reference. We detect and categorize candidates for anomaly findings untypical
for the observed data. A deep convolutional autoencoder is trained on healthy
retinal images. The learned model generates a new feature representation, and
the distribution of healthy retinal patches is estimated by a one-class support
vector machine. Results demonstrate that we can identify pathologic regions in
images without using expert annotations. A subsequent clustering categorizes
findings into clinically meaningful classes. In addition the learned features
outperform standard embedding approaches in a classification task.
|
[
"Philipp Seeb\\\"ock, Sebastian Waldstein, Sophie Klimscha, Bianca S.\n Gerendas, Ren\\'e Donner, Thomas Schlegl, Ursula Schmidt-Erfurth and Georg\n Langs",
"['Philipp Seeböck' 'Sebastian Waldstein' 'Sophie Klimscha'\n 'Bianca S. Gerendas' 'René Donner' 'Thomas Schlegl'\n 'Ursula Schmidt-Erfurth' 'Georg Langs']"
] |
cs.NE cs.AI cs.LG
| null |
1612.00712
| null | null |
http://arxiv.org/pdf/1612.00712v1
|
2016-12-02T15:46:09Z
|
2016-12-02T15:46:09Z
|
Probabilistic Neural Programs
|
We present probabilistic neural programs, a framework for program induction
that permits flexible specification of both a computational model and inference
algorithm while simultaneously enabling the use of deep neural networks.
Probabilistic neural programs combine a computation graph for specifying a
neural network with an operator for weighted nondeterministic choice. Thus, a
program describes both a collection of decisions as well as the neural network
architecture used to make each one. We evaluate our approach on a challenging
diagram question answering task where probabilistic neural programs correctly
execute nearly twice as many programs as a baseline model.
|
[
"Kenton W. Murray and Jayant Krishnamurthy",
"['Kenton W. Murray' 'Jayant Krishnamurthy']"
] |
cs.LG cs.AI cs.NE
| null |
1612.00745
| null | null |
http://arxiv.org/pdf/1612.00745v1
|
2016-12-02T16:49:07Z
|
2016-12-02T16:49:07Z
|
Cognitive Deep Machine Can Train Itself
|
Machine learning is making substantial progress in diverse applications. The
success is mostly due to advances in deep learning. However, deep learning can
make mistakes and its generalization abilities to new tasks are questionable.
We ask when and how one can combine network outputs, when (i) details of the
observations are evaluated by learned deep components and (ii) facts and
confirmation rules are available in knowledge based systems. We show that in
limited contexts the required number of training samples can be low and
self-improvement of pre-trained networks in more general context is possible.
We argue that the combination of sparse outlier detection with deep components
that can support each other diminish the fragility of deep methods, an
important requirement for engineering applications. We argue that supervised
learning of labels may be fully eliminated under certain conditions: a
component based architecture together with a knowledge based system can train
itself and provide high quality answers. We demonstrate these concepts on the
State Farm Distracted Driver Detection benchmark. We argue that the view of the
Study Panel (2016) may overestimate the requirements on `years of focused
research' and `careful, unique construction' for `AI systems'.
|
[
"Andr\\'as L\\H{o}rincz, M\\'at\\'e Cs\\'akv\\'ari, \\'Aron F\\'othi, Zolt\\'an\n \\'Ad\\'am Milacski, Andr\\'as S\\'ark\\'any, Zolt\\'an T\\H{o}s\\'er",
"['András Lőrincz' 'Máté Csákvári' 'Áron Fóthi' 'Zoltán Ádám Milacski'\n 'András Sárkány' 'Zoltán Tősér']"
] |
stat.ML cs.AI cs.LG
| null |
1612.00767
| null | null |
http://arxiv.org/pdf/1612.00767v2
|
2016-12-08T09:19:30Z
|
2016-12-02T17:43:33Z
|
Asynchronous Stochastic Gradient MCMC with Elastic Coupling
|
We consider parallel asynchronous Markov Chain Monte Carlo (MCMC) sampling
for problems where we can leverage (stochastic) gradients to define continuous
dynamics which explore the target distribution. We outline a solution strategy
for this setting based on stochastic gradient Hamiltonian Monte Carlo sampling
(SGHMC) which we alter to include an elastic coupling term that ties together
multiple MCMC instances. The proposed strategy turns inherently sequential HMC
algorithms into asynchronous parallel versions. First experiments empirically
show that the resulting parallel sampler significantly speeds up exploration of
the target distribution, when compared to standard SGHMC, and is less prone to
the harmful effects of stale gradients than a naive parallelization approach.
|
[
"Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, Frank Hutter",
"['Jost Tobias Springenberg' 'Aaron Klein' 'Stefan Falkner' 'Frank Hutter']"
] |
stat.ML cs.LG
| null |
1612.00775
| null | null |
http://arxiv.org/pdf/1612.00775v2
|
2017-01-09T16:04:38Z
|
2016-12-02T17:57:04Z
|
A simple squared-error reformulation for ordinal classification
|
In this paper, we explore ordinal classification (in the context of deep
neural networks) through a simple modification of the squared error loss which
not only allows it to not only be sensitive to class ordering, but also allows
the possibility of having a discrete probability distribution over the classes.
Our formulation is based on the use of a softmax hidden layer, which has
received relatively little attention in the literature. We empirically evaluate
its performance on the Kaggle diabetic retinopathy dataset, an ordinal and
high-resolution dataset and show that it outperforms all of the baselines
employed.
|
[
"Christopher Beckham, Christopher Pal",
"['Christopher Beckham' 'Christopher Pal']"
] |
cs.LG cs.AI stat.ML
|
10.1073/pnas.1611835114
|
1612.00796
| null | null |
http://arxiv.org/abs/1612.00796v2
|
2017-01-25T13:01:51Z
|
2016-12-02T19:18:37Z
|
Overcoming catastrophic forgetting in neural networks
|
The ability to learn tasks in a sequential fashion is crucial to the
development of artificial intelligence. Neural networks are not, in general,
capable of this and it has been widely thought that catastrophic forgetting is
an inevitable feature of connectionist models. We show that it is possible to
overcome this limitation and train networks that can maintain expertise on
tasks which they have not experienced for a long time. Our approach remembers
old tasks by selectively slowing down learning on the weights important for
those tasks. We demonstrate our approach is scalable and effective by solving a
set of classification tasks based on the MNIST hand written digit dataset and
by learning several Atari 2600 games sequentially.
|
[
"['James Kirkpatrick' 'Razvan Pascanu' 'Neil Rabinowitz' 'Joel Veness'\n 'Guillaume Desjardins' 'Andrei A. Rusu' 'Kieran Milan' 'John Quan'\n 'Tiago Ramalho' 'Agnieszka Grabska-Barwinska' 'Demis Hassabis'\n 'Claudia Clopath' 'Dharshan Kumaran' 'Raia Hadsell']",
"James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness,\n Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho,\n Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan\n Kumaran, Raia Hadsell"
] |
stat.ML cs.IT cs.LG math.IT
| null |
1612.00804
| null | null |
http://arxiv.org/pdf/1612.00804v2
|
2017-10-12T05:25:40Z
|
2016-12-02T19:32:55Z
|
Restricted Strong Convexity Implies Weak Submodularity
|
We connect high-dimensional subset selection and submodular maximization. Our
results extend the work of Das and Kempe (2011) from the setting of linear
regression to arbitrary objective functions. For greedy feature selection, this
connection allows us to obtain strong multiplicative performance bounds on
several methods without statistical modeling assumptions. We also derive
recovery guarantees of this form under standard assumptions. Our work shows
that greedy algorithms perform within a constant factor from the best possible
subset-selection solution for a broad class of general objective functions. Our
methods allow a direct control over the number of obtained features as opposed
to regularization parameters that only implicitly control sparsity. Our proof
technique uses the concept of weak submodularity initially defined by Das and
Kempe. We draw a connection between convex analysis and submodular set function
theory which may be of independent interest for other statistical learning
applications that have combinatorial structure.
|
[
"['Ethan R. Elenberg' 'Rajiv Khanna' 'Alexandros G. Dimakis'\n 'Sahand Negahban']",
"Ethan R. Elenberg, Rajiv Khanna, Alexandros G. Dimakis, Sahand\n Negahban"
] |
cs.CV cs.GR cs.LG
| null |
1612.00814
| null | null |
http://arxiv.org/pdf/1612.00814v3
|
2017-08-13T02:40:50Z
|
2016-12-01T05:51:37Z
|
Perspective Transformer Nets: Learning Single-View 3D Object
Reconstruction without 3D Supervision
|
Understanding the 3D world is a fundamental problem in computer vision.
However, learning a good representation of 3D objects is still an open problem
due to the high dimensionality of the data and many factors of variation
involved. In this work, we investigate the task of single-view 3D object
reconstruction from a learning agent's perspective. We formulate the learning
process as an interaction between 3D and 2D representations and propose an
encoder-decoder network with a novel projection loss defined by the perspective
transformation. More importantly, the projection loss enables the unsupervised
learning using 2D observation without explicit 3D supervision. We demonstrate
the ability of the model in generating 3D volume from a single 2D image with
three sets of experiments: (1) learning from single-class objects; (2) learning
from multi-class objects and (3) testing on novel object classes. Results show
superior performance and better generalization ability for 3D object
reconstruction when the projection loss is involved.
|
[
"['Xinchen Yan' 'Jimei Yang' 'Ersin Yumer' 'Yijie Guo' 'Honglak Lee']",
"Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, Honglak Lee"
] |
cs.LG cs.AI cs.NE
| null |
1612.00817
| null | null |
http://arxiv.org/pdf/1612.00817v1
|
2016-12-02T20:08:22Z
|
2016-12-02T20:08:22Z
|
Summary - TerpreT: A Probabilistic Programming Language for Program
Induction
|
We study machine learning formulations of inductive program synthesis; that
is, given input-output examples, synthesize source code that maps inputs to
corresponding outputs. Our key contribution is TerpreT, a domain-specific
language for expressing program synthesis problems. A TerpreT model is composed
of a specification of a program representation and an interpreter that
describes how programs map inputs to outputs. The inference task is to observe
a set of input-output examples and infer the underlying program. From a TerpreT
model we automatically perform inference using four different back-ends:
gradient descent (thus each TerpreT model can be seen as defining a
differentiable interpreter), linear program (LP) relaxations for graphical
models, discrete satisfiability solving, and the Sketch program synthesis
system. TerpreT has two main benefits. First, it enables rapid exploration of a
range of domains, program representations, and interpreter models. Second, it
separates the model specification from the inference algorithm, allowing proper
comparisons between different approaches to inference.
We illustrate the value of TerpreT by developing several interpreter models
and performing an extensive empirical comparison between alternative inference
algorithms on a variety of program models. To our knowledge, this is the first
work to compare gradient-based search over program space to traditional
search-based alternatives. Our key empirical finding is that constraint solvers
dominate the gradient descent and LP-based formulations.
This is a workshop summary of a longer report at arXiv:1608.04428
|
[
"['Alexander L. Gaunt' 'Marc Brockschmidt' 'Rishabh Singh' 'Nate Kushman'\n 'Pushmeet Kohli' 'Jonathan Taylor' 'Daniel Tarlow']",
"Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman,\n Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow"
] |
stat.ML cs.LG
| null |
1612.00824
| null | null |
http://arxiv.org/pdf/1612.00824v1
|
2016-12-02T20:23:31Z
|
2016-12-02T20:23:31Z
|
Learning with Hierarchical Gaussian Kernels
|
We investigate iterated compositions of weighted sums of Gaussian kernels and
provide an interpretation of the construction that shows some similarities with
the architectures of deep neural networks. On the theoretical side, we show
that these kernels are universal and that SVMs using these kernels are
universally consistent. We further describe a parameter optimization method for
the kernel parameters and empirically compare this method to SVMs, random
forests, a multiple kernel learning approach, and to some deep neural networks.
|
[
"Ingo Steinwart and Philipp Thomann and Nico Schmid",
"['Ingo Steinwart' 'Philipp Thomann' 'Nico Schmid']"
] |
cs.LG
| null |
1612.00827
| null | null |
http://arxiv.org/pdf/1612.00827v1
|
2016-12-02T20:31:44Z
|
2016-12-02T20:31:44Z
|
Learning Operations on a Stack with Neural Turing Machines
|
Multiple extensions of Recurrent Neural Networks (RNNs) have been proposed
recently to address the difficulty of storing information over long time
periods. In this paper, we experiment with the capacity of Neural Turing
Machines (NTMs) to deal with these long-term dependencies on well-balanced
strings of parentheses. We show that not only does the NTM emulate a stack with
its heads and learn an algorithm to recognize such words, but it is also
capable of strongly generalizing to much longer sequences.
|
[
"['Tristan Deleu' 'Joseph Dureau']",
"Tristan Deleu, Joseph Dureau"
] |
cs.CV cs.LG
| null |
1612.00835
| null | null |
http://arxiv.org/pdf/1612.00835v2
|
2016-12-05T20:06:57Z
|
2016-12-02T20:53:01Z
|
Scribbler: Controlling Deep Image Synthesis with Sketch and Color
|
Recently, there have been several promising methods to generate realistic
imagery from deep convolutional networks. These methods sidestep the
traditional computer graphics rendering pipeline and instead generate imagery
at the pixel level by learning from large collections of photos (e.g. faces or
bedrooms). However, these methods are of limited utility because it is
difficult for a user to control what the network produces. In this paper, we
propose a deep adversarial image synthesis architecture that is conditioned on
sketched boundaries and sparse color strokes to generate realistic cars,
bedrooms, or faces. We demonstrate a sketch based image synthesis system which
allows users to 'scribble' over the sketch to indicate preferred color for
objects. Our network can then generate convincing images that satisfy both the
color and the sketch constraints of user. The network is feed-forward which
allows users to see the effect of their edits in real time. We compare to
recent work on sketch to image synthesis and show that our approach can
generate more realistic, more diverse, and more controllable outputs. The
architecture is also effective at user-guided colorization of grayscale images.
|
[
"['Patsorn Sangkloy' 'Jingwan Lu' 'Chen Fang' 'Fisher Yu' 'James Hays']",
"Patsorn Sangkloy, Jingwan Lu, Chen Fang, Fisher Yu, James Hays"
] |
cs.CV cs.AI cs.CL cs.LG
| null |
1612.00837
| null | null |
http://arxiv.org/pdf/1612.00837v3
|
2017-05-15T17:58:49Z
|
2016-12-02T20:57:07Z
|
Making the V in VQA Matter: Elevating the Role of Image Understanding in
Visual Question Answering
|
Problems at the intersection of vision and language are of significant
importance both as challenging research questions and for the rich set of
applications they enable. However, inherent structure in our world and bias in
our language tend to be a simpler signal for learning than visual modalities,
resulting in models that ignore visual information, leading to an inflated
sense of their capability.
We propose to counter these language priors for the task of Visual Question
Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance
the popular VQA dataset by collecting complementary images such that every
question in our balanced dataset is associated with not just a single image,
but rather a pair of similar images that result in two different answers to the
question. Our dataset is by construction more balanced than the original VQA
dataset and has approximately twice the number of image-question pairs. Our
complete balanced dataset is publicly available at www.visualqa.org as part of
the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA
v2.0).
We further benchmark a number of state-of-art VQA models on our balanced
dataset. All models perform significantly worse on our balanced dataset,
suggesting that these models have indeed learned to exploit language priors.
This finding provides the first concrete empirical evidence for what seems to
be a qualitative sense among practitioners.
Finally, our data collection protocol for identifying complementary images
enables us to develop a novel interpretable model, which in addition to
providing an answer to the given (image, question) pair, also provides a
counter-example based explanation. Specifically, it identifies an image that is
similar to the original image, but it believes has a different answer to the
same question. This can help in building trust for machines among their users.
|
[
"['Yash Goyal' 'Tejas Khot' 'Douglas Summers-Stay' 'Dhruv Batra'\n 'Devi Parikh']",
"Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh"
] |
cs.LG stat.AP stat.ML
| null |
1612.0084
| null | null | null | null | null |
A novel multiclassSVM based framework to classify lithology from well
logs: a real-world application
|
Support vector machines (SVMs) have been recognized as a potential tool for
supervised classification analyses in different domains of research. In
essence, SVM is a binary classifier. Therefore, in case of a multiclass
problem, the problem is divided into a series of binary problems which are
solved by binary classifiers, and finally the classification results are
combined following either the one-against-one or one-against-all strategies. In
this paper, an attempt has been made to classify lithology using a multiclass
SVM based framework using well logs as predictor variables. Here, the lithology
is classified into four classes such as sand, shaly sand, sandy shale and shale
based on the relative values of sand and shale fractions as suggested by an
expert geologist. The available dataset consisting well logs (gamma ray,
neutron porosity, density, and P-sonic) and class information from four closely
spaced wells from an onshore hydrocarbon field is divided into training and
testing sets. We have used one-against-all strategy to combine the results of
multiple binary classifiers. The reported results established the superiority
of multiclass SVM compared to other classifiers in terms of classification
accuracy. The selection of kernel function and associated parameters has also
been investigated here. It can be envisaged from the results achieved in this
study that the proposed framework based on multiclass SVM can further be used
to solve classification problems. In future research endeavor, seismic
attributes can be introduced in the framework to classify the lithology
throughout a study area from seismic inputs.
|
[
"Soumi Chaki, Aurobinda Routray, William K. Mohanty, Mamata Jenamani"
] |
null | null |
1612.00840
| null | null |
http://arxiv.org/pdf/1612.00840v1
|
2016-12-02T07:55:16Z
|
2016-12-02T07:55:16Z
|
A novel multiclassSVM based framework to classify lithology from well
logs: a real-world application
|
Support vector machines (SVMs) have been recognized as a potential tool for supervised classification analyses in different domains of research. In essence, SVM is a binary classifier. Therefore, in case of a multiclass problem, the problem is divided into a series of binary problems which are solved by binary classifiers, and finally the classification results are combined following either the one-against-one or one-against-all strategies. In this paper, an attempt has been made to classify lithology using a multiclass SVM based framework using well logs as predictor variables. Here, the lithology is classified into four classes such as sand, shaly sand, sandy shale and shale based on the relative values of sand and shale fractions as suggested by an expert geologist. The available dataset consisting well logs (gamma ray, neutron porosity, density, and P-sonic) and class information from four closely spaced wells from an onshore hydrocarbon field is divided into training and testing sets. We have used one-against-all strategy to combine the results of multiple binary classifiers. The reported results established the superiority of multiclass SVM compared to other classifiers in terms of classification accuracy. The selection of kernel function and associated parameters has also been investigated here. It can be envisaged from the results achieved in this study that the proposed framework based on multiclass SVM can further be used to solve classification problems. In future research endeavor, seismic attributes can be introduced in the framework to classify the lithology throughout a study area from seismic inputs.
|
[
"['Soumi Chaki' 'Aurobinda Routray' 'William K. Mohanty' 'Mamata Jenamani']"
] |
cs.LG stat.ML
| null |
1612.00841
| null | null |
http://arxiv.org/pdf/1612.00841v1
|
2016-12-02T07:57:08Z
|
2016-12-02T07:57:08Z
|
A Novel Framework based on SVDD to Classify Water Saturation from
Seismic Attributes
|
Water saturation is an important property in reservoir engineering domain.
Thus, satisfactory classification of water saturation from seismic attributes
is beneficial for reservoir characterization. However, diverse and non-linear
nature of subsurface attributes makes the classification task difficult. In
this context, this paper proposes a generalized Support Vector Data Description
(SVDD) based novel classification framework to classify water saturation into
two classes (Class high and Class low) from three seismic attributes seismic
impedance, amplitude envelop, and seismic sweetness. G-metric means and program
execution time are used to quantify the performance of the proposed framework
along with established supervised classifiers. The documented results imply
that the proposed framework is superior to existing classifiers. The present
study is envisioned to contribute in further reservoir modeling.
|
[
"Soumi Chaki, Akhilesh Kumar Verma, Aurobinda Routray, William K.\n Mohanty, Mamata Jenamani",
"['Soumi Chaki' 'Akhilesh Kumar Verma' 'Aurobinda Routray'\n 'William K. Mohanty' 'Mamata Jenamani']"
] |
cs.LG
| null |
1612.00882
| null | null |
http://arxiv.org/pdf/1612.00882v1
|
2016-12-02T22:38:37Z
|
2016-12-02T22:38:37Z
|
Success Probability of Exploration: a Concrete Analysis of Learning
Efficiency
|
Exploration has been a crucial part of reinforcement learning, yet several
important questions concerning exploration efficiency are still not answered
satisfactorily by existing analytical frameworks. These questions include
exploration parameter setting, situation analysis, and hardness of MDPs, all of
which are unavoidable for practitioners. To bridge the gap between the theory
and practice, we propose a new analytical framework called the success
probability of exploration. We show that those important questions of
exploration above can all be answered under our framework, and the answers
provided by our framework meet the needs of practitioners better than the
existing ones. More importantly, we introduce a concrete and practical approach
to evaluating the success probabilities in certain MDPs without the need of
actually running the learning algorithm. We then provide empirical results to
verify our approach, and demonstrate how the success probability of exploration
can be used to analyse and predict the behaviours and possible outcomes of
exploration, which are the keys to the answer of the important questions of
exploration.
|
[
"Liangpeng Zhang, Ke Tang, Xin Yao",
"['Liangpeng Zhang' 'Ke Tang' 'Xin Yao']"
] |
cs.CV cs.LG cs.NE
| null |
1612.00891
| null | null |
http://arxiv.org/pdf/1612.00891v2
|
2017-02-24T18:22:30Z
|
2016-12-02T23:11:10Z
|
Parameter Compression of Recurrent Neural Networks and Degradation of
Short-term Memory
|
The significant computational costs of deploying neural networks in
large-scale or resource constrained environments, such as data centers and
mobile devices, has spurred interest in model compression, which can achieve a
reduction in both arithmetic operations and storage memory. Several techniques
have been proposed for reducing or compressing the parameters for feed-forward
and convolutional neural networks, but less is understood about the effect of
parameter compression on recurrent neural networks (RNN). In particular, the
extent to which the recurrent parameters can be compressed and the impact on
short-term memory performance, is not well understood. In this paper, we study
the effect of complexity reduction, through singular value decomposition rank
reduction, on RNN and minimal gated recurrent unit (MGRU) networks for several
tasks. We show that considerable rank reduction is possible when compressing
recurrent weights, even without fine tuning. Furthermore, we propose a
perturbation model for the effect of general perturbations, such as a
compression, on the recurrent parameters of RNNs. The model is tested against a
noiseless memorization experiment that elucidates the short-term memory
performance. In this way, we demonstrate that the effect of compression of
recurrent parameters is dependent on the degree of temporal coherence present
in the data and task. This work can guide on-the-fly RNN compression for novel
environments or tasks, and provides insight for applying RNN compression in
low-power devices, such as hearing aids.
|
[
"Jonathan A. Cox",
"['Jonathan A. Cox']"
] |
cs.CL cs.LG
| null |
1612.00913
| null | null |
http://arxiv.org/pdf/1612.00913v2
|
2017-01-04T08:39:09Z
|
2016-12-03T02:13:18Z
|
End-to-End Joint Learning of Natural Language Understanding and Dialogue
Manager
|
Natural language understanding and dialogue policy learning are both
essential in conversational systems that predict the next system actions in
response to a current user utterance. Conventional approaches aggregate
separate models of natural language understanding (NLU) and system action
prediction (SAP) as a pipeline that is sensitive to noisy outputs of
error-prone NLU. To address the issues, we propose an end-to-end deep recurrent
neural network with limited contextual dialogue memory by jointly training NLU
and SAP on DSTC4 multi-domain human-human dialogues. Experiments show that our
proposed model significantly outperforms the state-of-the-art pipeline models
for both NLU and SAP, which indicates that our joint model is capable of
mitigating the affects of noisy NLU outputs, and NLU model can be refined by
error flows backpropagating from the extra supervised signals of system
actions.
|
[
"['Xuesong Yang' 'Yun-Nung Chen' 'Dilek Hakkani-Tur' 'Paul Crook'\n 'Xiujun Li' 'Jianfeng Gao' 'Li Deng']",
"Xuesong Yang, Yun-Nung Chen, Dilek Hakkani-Tur, Paul Crook, Xiujun Li,\n Jianfeng Gao, Li Deng"
] |
cs.LG cs.NE q-bio.QM stat.ML
| null |
1612.00962
| null | null |
http://arxiv.org/pdf/1612.00962v1
|
2016-12-03T12:16:21Z
|
2016-12-03T12:16:21Z
|
Positive blood culture detection in time series data using a BiLSTM
network
|
The presence of bacteria or fungi in the bloodstream of patients is abnormal
and can lead to life-threatening conditions. A computational model based on a
bidirectional long short-term memory artificial neural network, is explored to
assist doctors in the intensive care unit to predict whether examination of
blood cultures of patients will return positive. As input it uses nine
monitored clinical parameters, presented as time series data, collected from
2177 ICU admissions at the Ghent University Hospital. Our main goal is to
determine if general machine learning methods and more specific, temporal
models, can be used to create an early detection system. This preliminary
research obtains an area of 71.95% under the precision recall curve, proving
the potential of temporal neural networks in this context.
|
[
"['Leen De Baets' 'Joeri Ruyssinck' 'Thomas Peiffer' 'Johan Decruyenaere'\n 'Filip De Turck' 'Femke Ongenae' 'Tom Dhaene']",
"Leen De Baets, Joeri Ruyssinck, Thomas Peiffer, Johan Decruyenaere,\n Filip De Turck, Femke Ongenae, Tom Dhaene"
] |
cs.SI cs.LG stat.ML
| null |
1612.00984
| null | null |
http://arxiv.org/pdf/1612.00984v2
|
2017-10-07T18:18:55Z
|
2016-12-03T16:42:59Z
|
Estimating latent feature-feature interactions in large feature-rich
graphs
|
Real-world complex networks describe connections between objects; in reality,
those objects are often endowed with some kind of features. How does the
presence or absence of such features interplay with the network link structure?
Although the situation here described is truly ubiquitous, there is a limited
body of research dealing with large graphs of this kind. Many previous works
considered homophily as the only possible transmission mechanism translating
node features into links. Other authors, instead, developed more sophisticated
models, that are able to handle complex feature interactions, but are unfit to
scale to very large networks. We expand on the MGJ model, where interactions
between pairs of features can foster or discourage link formation. In this
work, we will investigate how to estimate the latent feature-feature
interactions in this model. We shall propose two solutions: the first one
assumes feature independence and it is essentially based on Naive Bayes; the
second one, which relaxes the independence assumption assumption, is based on
perceptrons. In fact, we show it is possible to cast the model equation in
order to see it as the prediction rule of a perceptron. We analyze how
classical results for the perceptrons can be interpreted in this context; then,
we define a fast and simple perceptron-like algorithm for this task, which can
process $10^8$ links in minutes. We then compare these two techniques, first
with synthetic datasets that follows our model, gaining evidence that the Naive
independence assumptions are detrimental in practice. Secondly, we consider a
real, large-scale citation network where each node (i.e., paper) can be
described by different types of characteristics; there, our algorithm can
assess how well each set of features can explain the links, and thus finding
meaningful latent feature-feature interactions.
|
[
"Corrado Monti and Paolo Boldi",
"['Corrado Monti' 'Paolo Boldi']"
] |
stat.ML cs.LG
| null |
1612.0102
| null | null | null | null | null |
Hypothesis Transfer Learning via Transformation Functions
|
We consider the Hypothesis Transfer Learning (HTL) problem where one
incorporates a hypothesis trained on the source domain into the learning
procedure of the target domain. Existing theoretical analysis either only
studies specific algorithms or only presents upper bounds on the generalization
error but not on the excess risk. In this paper, we propose a unified
algorithm-dependent framework for HTL through a novel notion of transformation
function, which characterizes the relation between the source and the target
domains. We conduct a general risk analysis of this framework and in
particular, we show for the first time, if two domains are related, HTL enjoys
faster convergence rates of excess risks for Kernel Smoothing and Kernel Ridge
Regression than those of the classical non-transfer learning settings.
Experiments on real world data demonstrate the effectiveness of our framework.
|
[
"Simon Shaolei Du, Jayanth Koushik, Aarti Singh, and Barnabas Poczos"
] |
null | null |
1612.01020
| null | null |
http://arxiv.org/pdf/1612.01020v4
|
2017-11-05T16:24:27Z
|
2016-12-03T21:22:43Z
|
Hypothesis Transfer Learning via Transformation Functions
|
We consider the Hypothesis Transfer Learning (HTL) problem where one incorporates a hypothesis trained on the source domain into the learning procedure of the target domain. Existing theoretical analysis either only studies specific algorithms or only presents upper bounds on the generalization error but not on the excess risk. In this paper, we propose a unified algorithm-dependent framework for HTL through a novel notion of transformation function, which characterizes the relation between the source and the target domains. We conduct a general risk analysis of this framework and in particular, we show for the first time, if two domains are related, HTL enjoys faster convergence rates of excess risks for Kernel Smoothing and Kernel Ridge Regression than those of the classical non-transfer learning settings. Experiments on real world data demonstrate the effectiveness of our framework.
|
[
"['Simon Shaolei Du' 'Jayanth Koushik' 'Aarti Singh' 'Barnabas Poczos']"
] |
q-bio.GN cs.LG stat.ML
| null |
1612.0103
| null | null | null | null | null |
Large scale modeling of antimicrobial resistance with interpretable
classifiers
|
Antimicrobial resistance is an important public health concern that has
implications in the practice of medicine worldwide. Accurately predicting
resistance phenotypes from genome sequences shows great promise in promoting
better use of antimicrobial agents, by determining which antibiotics are likely
to be effective in specific clinical cases. In healthcare, this would allow for
the design of treatment plans tailored for specific individuals, likely
resulting in better clinical outcomes for patients with bacterial infections.
In this work, we present the recent work of Drouin et al. (2016) on using Set
Covering Machines to learn highly interpretable models of antibiotic resistance
and complement it by providing a large scale application of their method to the
entire PATRIC database. We report prediction results for 36 new datasets and
present the Kover AMR platform, a new web-based tool allowing the visualization
and interpretation of the generated models.
|
[
"Alexandre Drouin, Fr\\'ed\\'eric Raymond, Ga\\\"el Letarte St-Pierre,\n Mario Marchand, Jacques Corbeil, Fran\\c{c}ois Laviolette"
] |
null | null |
1612.01030
| null | null |
http://arxiv.org/pdf/1612.01030v1
|
2016-12-03T22:52:44Z
|
2016-12-03T22:52:44Z
|
Large scale modeling of antimicrobial resistance with interpretable
classifiers
|
Antimicrobial resistance is an important public health concern that has implications in the practice of medicine worldwide. Accurately predicting resistance phenotypes from genome sequences shows great promise in promoting better use of antimicrobial agents, by determining which antibiotics are likely to be effective in specific clinical cases. In healthcare, this would allow for the design of treatment plans tailored for specific individuals, likely resulting in better clinical outcomes for patients with bacterial infections. In this work, we present the recent work of Drouin et al. (2016) on using Set Covering Machines to learn highly interpretable models of antibiotic resistance and complement it by providing a large scale application of their method to the entire PATRIC database. We report prediction results for 36 new datasets and present the Kover AMR platform, a new web-based tool allowing the visualization and interpretation of the generated models.
|
[
"['Alexandre Drouin' 'Frédéric Raymond' 'Gaël Letarte St-Pierre'\n 'Mario Marchand' 'Jacques Corbeil' 'François Laviolette']"
] |
stat.ML cs.LG stat.AP
| null |
1612.01055
| null | null |
http://arxiv.org/pdf/1612.01055v1
|
2016-12-04T03:20:54Z
|
2016-12-04T03:20:54Z
|
Modeling trajectories of mental health: challenges and opportunities
|
More than two thirds of mental health problems have their onset during
childhood or adolescence. Identifying children at risk for mental illness later
in life and predicting the type of illness is not easy. We set out to develop a
platform to define subtypes of childhood social-emotional development using
longitudinal, multifactorial trait-based measures. Subtypes discovered through
this study could ultimately advance psychiatric knowledge of the early
behavioural signs of mental illness. To this extent we have examined two types
of models: latent class mixture models and GP-based models. Our findings
indicate that while GP models come close in accuracy of predicting future
trajectories, LCMMs predict the trajectories as well in a fraction of the time.
Unfortunately, neither of the models are currently accurate enough to lead to
immediate clinical impact. The available data related to the development of
childhood mental health is often sparse with only a few time points measured
and require novel methods with improved efficiency and accuracy.
|
[
"['Lauren Erdman' 'Ekansh Sharma' 'Eva Unternahrer' 'Shantala Hari Dass'\n 'Kieran ODonnell' 'Sara Mostafavi' 'Rachel Edgar' 'Michael Kobor'\n 'Helene Gaudreau' 'Michael Meaney' 'Anna Goldenberg']",
"Lauren Erdman, Ekansh Sharma, Eva Unternahrer, Shantala Hari Dass,\n Kieran ODonnell, Sara Mostafavi, Rachel Edgar, Michael Kobor, Helene\n Gaudreau, Michael Meaney, Anna Goldenberg"
] |
cs.AI cs.LG cs.MM cs.SD
| null |
1612.01058
| null | null |
http://arxiv.org/pdf/1612.01058v1
|
2016-12-04T03:36:51Z
|
2016-12-04T03:36:51Z
|
Algorithmic Songwriting with ALYSIA
|
This paper introduces ALYSIA: Automated LYrical SongwrIting Application.
ALYSIA is based on a machine learning model using Random Forests, and we
discuss its success at pitch and rhythm prediction. Next, we show how ALYSIA
was used to create original pop songs that were subsequently recorded and
produced. Finally, we discuss our vision for the future of Automated
Songwriting for both co-creative and autonomous systems.
|
[
"['Margareta Ackerman' 'David Loker']",
"Margareta Ackerman and David Loker"
] |
cs.LG
| null |
1612.01064
| null | null |
http://arxiv.org/pdf/1612.01064v3
|
2017-02-23T06:52:28Z
|
2016-12-04T05:00:22Z
|
Trained Ternary Quantization
|
Deep neural networks are widely used in machine learning applications.
However, the deployment of large neural networks models can be difficult to
deploy on mobile devices with limited power budgets. To solve this problem, we
propose Trained Ternary Quantization (TTQ), a method that can reduce the
precision of weights in neural networks to ternary values. This method has very
little accuracy degradation and can even improve the accuracy of some models
(32, 44, 56-layer ResNet) on CIFAR-10 and AlexNet on ImageNet. And our AlexNet
model is trained from scratch, which means it's as easy as to train normal full
precision model. We highlight our trained quantization method that can learn
both ternary values and ternary assignment. During inference, only ternary
values (2-bit weights) and scaling factors are needed, therefore our models are
nearly 16x smaller than full-precision models. Our ternary models can also be
viewed as sparse binary weight networks, which can potentially be accelerated
with custom circuit. Experiments on CIFAR-10 show that the ternary models
obtained by trained quantization method outperform full-precision models of
ResNet-32,44,56 by 0.04%, 0.16%, 0.36%, respectively. On ImageNet, our model
outperforms full-precision AlexNet model by 0.3% of Top-1 accuracy and
outperforms previous ternary models by 3%.
|
[
"['Chenzhuo Zhu' 'Song Han' 'Huizi Mao' 'William J. Dally']",
"Chenzhuo Zhu, Song Han, Huizi Mao, William J. Dally"
] |
cs.SE cs.AI cs.LG
| null |
1612.01078
| null | null |
http://arxiv.org/pdf/1612.01078v1
|
2016-12-04T06:59:14Z
|
2016-12-04T06:59:14Z
|
Enhancing Use Case Points Estimation Method Using Soft Computing
Techniques
|
Software estimation is a crucial task in software engineering. Software
estimation encompasses cost, effort, schedule, and size. The importance of
software estimation becomes critical in the early stages of the software life
cycle when the details of software have not been revealed yet. Several
commercial and non-commercial tools exist to estimate software in the early
stages. Most software effort estimation methods require software size as one of
the important metric inputs and consequently, software size estimation in the
early stages becomes essential. One of the approaches that has been used for
about two decades in the early size and effort estimation is called use case
points. Use case points method relies on the use case diagram to estimate the
size and effort of software projects. Although the use case points method has
been widely used, it has some limitations that might adversely affect the
accuracy of estimation. This paper presents some techniques using fuzzy logic
and neural networks to improve the accuracy of the use case points method.
Results showed that an improvement up to 22% can be obtained using the proposed
approach.
|
[
"Ali Bou Nassif, Luiz Fernando Capretz, Danny Ho",
"['Ali Bou Nassif' 'Luiz Fernando Capretz' 'Danny Ho']"
] |
cs.AI cs.LG cs.RO
| null |
1612.01086
| null | null |
http://arxiv.org/pdf/1612.01086v3
|
2017-03-26T08:43:23Z
|
2016-12-04T08:28:38Z
|
Deep Learning of Robotic Tasks without a Simulator using Strong and Weak
Human Supervision
|
We propose a scheme for training a computerized agent to perform complex
human tasks such as highway steering. The scheme is designed to follow a
natural learning process whereby a human instructor teaches a computerized
trainee. The learning process consists of five elements: (i) unsupervised
feature learning; (ii) supervised imitation learning; (iii) supervised reward
induction; (iv) supervised safety module construction; and (v) reinforcement
learning. We implemented the last four elements of the scheme using deep
convolutional networks and applied it to successfully create a computerized
agent capable of autonomous highway steering over the well-known racing game
Assetto Corsa. We demonstrate that the use of the last four elements is
essential to effectively carry out the steering task using vision alone,
without access to a driving simulator internals, and operating in wall-clock
time. This is made possible also through the introduction of a safety network,
a novel way for preventing the agent from performing catastrophic mistakes
during the reinforcement learning stage.
|
[
"Bar Hilleli and Ran El-Yaniv",
"['Bar Hilleli' 'Ran El-Yaniv']"
] |
cs.LG
| null |
1612.01094
| null | null |
http://arxiv.org/pdf/1612.01094v1
|
2016-12-04T10:39:49Z
|
2016-12-04T10:39:49Z
|
Learning to superoptimize programs - Workshop Version
|
Superoptimization requires the estimation of the best program for a given
computational task. In order to deal with large programs, superoptimization
techniques perform a stochastic search. This involves proposing a modification
of the current program, which is accepted or rejected based on the improvement
achieved. The state of the art method uses uniform proposal distributions,
which fails to exploit the problem structure to the fullest. To alleviate this
deficiency, we learn a proposal distribution over possible modifications using
Reinforcement Learning. We provide convincing results on the superoptimization
of "Hacker's Delight" programs.
|
[
"['Rudy Bunel' 'Alban Desmaison' 'M. Pawan Kumar' 'Philip H. S. Torr'\n 'Pushmeet Kohli']",
"Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H.S.Torr, Pushmeet\n Kohli"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.