title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Online and Differentially-Private Tensor Decomposition | stat.ML cs.LG | In this paper, we resolve many of the key algorithmic questions regarding
robustness, memory efficiency, and differential privacy of tensor
decomposition. We propose simple variants of the tensor power method which
enjoy these strong properties. We present the first guarantees for an online
tensor power method, which has a linear memory requirement. Moreover, we present
a noise-calibrated tensor power method with efficient privacy guarantees. At
the heart of all these guarantees lies a careful perturbation analysis derived
in this paper which improves upon the existing results significantly.
| Yining Wang, Animashree Anandkumar | null | 1606.06237 | null | null |
Learning in Games: Robustness of Fast Convergence | cs.GT cs.LG | We show that learning algorithms satisfying a $\textit{low approximate
regret}$ property experience fast convergence to approximate optimality in a
large class of repeated games. Our property, which simply requires that each
learner has small regret compared to a $(1+\epsilon)$-multiplicative
approximation to the best action in hindsight, is ubiquitous among learning
algorithms; it is satisfied even by the vanilla Hedge forecaster. Our results
improve upon recent work of Syrgkanis et al. [SALS15] in a number of ways. We
require only that players observe payoffs under other players' realized
actions, as opposed to expected payoffs. We further show that convergence
occurs with high probability, and show convergence under bandit feedback.
Finally, we improve upon the speed of convergence by a factor of $n$, the
number of players. Both the scope of settings and the class of algorithms for
which our analysis provides fast convergence are considerably broader than in
previous work.
Our framework applies to dynamic population games via a low approximate
regret property for shifting experts. Here we strengthen the results of
Lykouris et al. [LST16] in two ways: We allow players to select learning
algorithms from a larger class, which includes a minor variant of the basic
Hedge algorithm, and we increase the maximum churn in players for which
approximate optimality is achieved.
In the bandit setting we present a new algorithm which provides a "small
loss"-type bound with improved dependence on the number of actions in utility
settings, and is both simple and efficient. This result may be of independent
interest.
| Dylan J. Foster, Zhiyuan Li, Thodoris Lykouris, Karthik Sridharan, Eva
Tardos | null | 1606.06244 | null | null |
An Empirical Comparison of Sampling Quality Metrics: A Case Study for
Bayesian Nonnegative Matrix Factorization | cs.LG stat.ML | In this work, we empirically explore the question: how can we assess the
quality of samples from some target distribution? We assume that the samples
are provided by some valid Monte Carlo procedure, so we are guaranteed that the
collection of samples will asymptotically approximate the true distribution.
Most current evaluation approaches focus on two questions: (1) Has the chain
mixed, that is, is it sampling from the distribution? and (2) How independent
are the samples (as MCMC procedures produce correlated samples)? Focusing on
the case of Bayesian nonnegative matrix factorization, we empirically evaluate
standard metrics of sampler quality as well as propose new metrics to capture
aspects that these measures fail to expose. The aspect of sampling that is of
particular interest to us is the ability (or inability) of sampling methods to
move between multiple optima in NMF problems. As a proxy, we propose and study
a number of metrics that might quantify the diversity of a set of NMF
factorizations obtained by a sampler through quantifying the coverage of the
posterior distribution. We compare the performance of a number of standard
sampling methods for NMF in terms of these new metrics.
| Arjumand Masood and Weiwei Pan and Finale Doshi-Velez | null | 1606.06250 | null | null |
Visualizing textual models with in-text and word-as-pixel highlighting | stat.ML cs.CL cs.LG | We explore two techniques which use color to make sense of statistical text
models. One method uses in-text annotations to illustrate a model's view of
particular tokens in particular documents. Another uses a high-level,
"words-as-pixels" graphic to display an entire corpus. Together, these methods
offer both zoomed-in and zoomed-out perspectives into a model's understanding
of text. We show how these interconnected methods help diagnose a classifier's
poor performance on Twitter slang, and make sense of a topic model on
historical political texts.
| Abram Handler, Su Lin Blodgett, Brendan O'Connor | null | 1606.06352 | null | null |
Complex Embeddings for Simple Link Prediction | cs.AI cs.LG stat.ML | In statistical relational learning, the link prediction problem is key to
automatically understanding the structure of large knowledge bases. As in previous
studies, we propose to solve this problem through latent factorization.
However, here we make use of complex valued embeddings. The composition of
complex embeddings can handle a large variety of binary relations, among them
symmetric and antisymmetric relations. Compared to state-of-the-art models such
as Neural Tensor Network and Holographic Embeddings, our approach based on
complex embeddings is arguably simpler, as it only uses the Hermitian dot
product, the complex counterpart of the standard dot product between real
vectors. Our approach is scalable to large datasets as it remains linear in
both space and time, while consistently outperforming alternative approaches on
standard link prediction benchmarks.
| Th\'eo Trouillon, Johannes Welbl, Sebastian Riedel, \'Eric Gaussier,
Guillaume Bouchard | null | 1606.06357 | null | null |
A Probabilistic Generative Grammar for Semantic Parsing | cs.CL cs.LG stat.ML | Domain-general semantic parsing is a long-standing goal in natural language
processing, where the semantic parser is capable of robustly parsing sentences
from domains outside those on which it was trained. Current approaches largely rely
on additional supervision from new domains in order to generalize to those
domains. We present a generative model of natural language utterances and
logical forms and demonstrate its application to semantic parsing. Our approach
relies on domain-independent supervision to generalize to new domains. We
derive and implement efficient algorithms for training, parsing, and sentence
generation. The work relies on a novel application of hierarchical Dirichlet
processes (HDPs) for structured prediction, which we also present in this
manuscript.
This manuscript is an excerpt of chapter 4 from the Ph.D. thesis of Saparov
(2022), where the model plays a central role in a larger natural language
understanding system.
This manuscript provides a new simplified and more complete presentation of
the work first introduced in Saparov, Saraswat, and Mitchell (2017). The
description and proofs of correctness of the training algorithm, parsing
algorithm, and sentence generation algorithm are much simplified in this new
presentation. We also describe the novel application of hierarchical Dirichlet
processes for structured prediction. In addition, we extend the earlier work
with a new model of word morphology, which utilizes the comprehensive
morphological data from Wiktionary.
| Abulhair Saparov | null | 1606.06361 | null | null |
FSMJ: Feature Selection with Maximum Jensen-Shannon Divergence for Text
Categorization | stat.ML cs.LG | In this paper, we present a new wrapper feature selection approach based on
Jensen-Shannon (JS) divergence, termed feature selection with maximum
JS-divergence (FSMJ), for text categorization. Unlike most existing feature
selection approaches, the proposed FSMJ approach is based on real-valued
features which provide more information for discrimination than binary-valued
features used in conventional approaches. We show that the FSMJ is a greedy
approach and the JS-divergence monotonically increases when more features are
selected. We conduct several experiments on real-life data sets, compared with
the state-of-the-art feature selection approaches for text categorization. The
superior performance of the proposed FSMJ approach demonstrates its
effectiveness and further indicates its wide potential applications on data
mining.
| Bo Tang, Haibo He | null | 1606.06366 | null | null |
Unanimous Prediction for 100% Precision with Application to Learning
Semantic Mappings | cs.LG cs.AI cs.CL | Can we train a system that, on any new input, either says "don't know" or
makes a prediction that is guaranteed to be correct? We answer the question in
the affirmative provided our model family is well-specified. Specifically, we
introduce the unanimity principle: only predict when all models consistent with
the training data predict the same output. We operationalize this principle for
semantic parsing, the task of mapping utterances to logical forms. We develop a
simple, efficient method that reasons over the infinite set of all consistent
models by only checking two of the models. We prove that our method obtains
100% precision even with a modest amount of training data from a possibly
adversarial distribution. Empirically, we demonstrate the effectiveness of our
approach on the standard GeoQuery dataset.
| Fereshte Khani, Martin Rinard, Percy Liang | null | 1606.06368 | null | null |
Contextual Weisfeiler-Lehman Graph Kernel For Malware Detection | cs.CR cs.LG | In this paper, we propose a novel graph kernel specifically to address a
challenging problem in the field of cyber-security, namely, malware detection.
Previous research has revealed the following: (1) Graph representations of
programs are ideally suited for malware detection as they are robust against
several attacks, (2) Besides capturing topological neighbourhoods (i.e.,
structural information) from these graphs it is important to capture the
context under which the neighbourhoods are reachable to accurately detect
malicious neighbourhoods.
We observe that state-of-the-art graph kernels, such as Weisfeiler-Lehman
kernel (WLK) capture the structural information well but fail to capture
contextual information. To address this, we develop the Contextual
Weisfeiler-Lehman kernel (CWLK) which is capable of capturing both these types
of information. We show that for the malware detection problem, CWLK is more
expressive and hence more accurate than WLK while maintaining comparable
efficiency. Through our large-scale experiments with more than 50,000
real-world Android apps, we demonstrate that CWLK outperforms two
state-of-the-art graph kernels (including WLK) and three malware detection
techniques by more than 5.27% and 4.87% F-measure, respectively, while
maintaining high efficiency. This high accuracy and efficiency make CWLK
suitable for large-scale real-world malware detection.
| Annamalai Narayanan, Guozhu Meng, Liu Yang, Jinliang Liu and Lihui
Chen | null | 1606.06369 | null | null |
Kernel-based Generative Learning in Distortion Feature Space | stat.ML cs.LG | This paper presents a novel kernel-based generative classifier which is
defined in a distortion subspace using polynomial series expansion, named
Kernel-Distortion (KD) classifier. An iterative kernel selection algorithm is
developed to steadily improve classification performance by repeatedly removing
and adding kernels. The experimental results on character recognition
application not only show that the proposed generative classifier performs
better than many existing classifiers, but also illustrate that it has
different recognition capability compared to the state-of-the-art
discriminative classifier - deep belief network. The recognition diversity
indicates that a hybrid combination of the proposed generative classifier and
the discriminative classifier could further improve the classification
performance. Two hybrid combination methods, cascading and stacking, have been
implemented to verify the diversity and the improvement of the proposed
classifier.
| Bo Tang, Paul M. Baggenstoss, Haibo He | null | 1606.06377 | null | null |
A Novel Framework to Expedite Systematic Reviews by Automatically
Building Information Extraction Training Corpora | cs.IR cs.CL cs.LG | A systematic review identifies and collates various clinical studies and
compares data elements and results in order to provide an evidence-based answer
for a particular clinical question. The process is manual and involves a lot of
time. A tool to automate this process is lacking. The aim of this work is to
develop a framework using natural language processing and machine learning to
build information extraction algorithms to identify data elements in a new
primary publication, without having to go through the expensive task of manual
annotation to build gold standards for each data element type. The system is
developed in two stages. Initially, it uses information contained in existing
systematic reviews to identify the sentences from the PDF files of the included
references that contain specific data elements of interest using a modified
Jaccard similarity measure. These sentences have been treated as labeled data. A
Support Vector Machine (SVM) classifier is trained on this labeled data to
extract data elements of interests from a new article. We conducted experiments
on Cochrane Database systematic reviews related to congestive heart failure
using inclusion criteria as an example data element. The empirical results show
that the proposed system automatically identifies sentences containing the data
element of interest with a high recall (93.75%) and reasonable precision
(27.05% - which means the reviewers have to read only 3.7 sentences on
average). The empirical results suggest that the tool is retrieving valuable
information from the reference articles, even when it is time-consuming to
identify them manually. Thus we hope that the tool will be useful for automatic
data extraction from biomedical research publications. The future scope of this
work is to generalize this information framework for all types of systematic
reviews.
| Tanmay Basu, Shraman Kumar, Abhishek Kalyan, Priyanka Jayaswal, Pawan
Goyal, Stephen Pettifer and Siddhartha R. Jonnalagadda | null | 1606.06424 | null | null |
An artificial neural network to find correlation patterns in an
arbitrary number of variables | cs.LG q-bio.NC stat.ML | Methods to find correlation among variables are of interest to many
disciplines, including statistics, machine learning, (big) data mining and
neurosciences. Parameters that measure correlation between two variables are of
limited utility when used with multiple variables. In this work, I propose a
simple criterion to measure correlation among an arbitrary number of variables,
based on a data set. The central idea is to i) design a function of the
variables that can take different forms depending on a set of parameters, ii)
calculate the difference between a statistic associated with the function
computed on the data set and the same statistic computed on a randomised
version of the data set, called the "scrambled" data set, and iii) optimise the
parameters to maximise this difference. Many such functions can be organised in
layers, which can in turn be stacked one on top of the other, forming a neural
network. The function parameters are searched with an enhanced genetic
algorithm called POET and the resulting method is tested on a cancer gene data
set. The method may have potential implications for some issues that affect the
field of neural networks, such as overfitting, the need to process huge amounts
of data for training and the presence of "adversarial examples".
| Alessandro Fontana | null | 1606.06564 | null | null |
Concrete Problems in AI Safety | cs.AI cs.LG | Rapid progress in machine learning and artificial intelligence (AI) has
brought increasing attention to the potential impacts of AI technologies on
society. In this paper we discuss one such potential impact: the problem of
accidents in machine learning systems, defined as unintended and harmful
behavior that may emerge from poor design of real-world AI systems. We present
a list of five practical research problems related to accident risk,
categorized according to whether the problem originates from having the wrong
objective function ("avoiding side effects" and "avoiding reward hacking"), an
objective function that is too expensive to evaluate frequently ("scalable
supervision"), or undesirable behavior during the learning process ("safe
exploration" and "distributional shift"). We review previous work in these
areas as well as suggesting research directions with a focus on relevance to
cutting-edge AI systems. Finally, we consider the high-level question of how to
think most productively about the safety of forward-looking applications of AI.
| Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John
Schulman, Dan Man\'e | null | 1606.06565 | null | null |
Augmenting Supervised Neural Networks with Unsupervised Objectives for
Large-scale Image Classification | cs.LG cs.CV | Unsupervised learning and supervised learning are key research topics in deep
learning. However, as high-capacity supervised neural networks trained with a
large amount of labels have achieved remarkable success in many computer vision
tasks, the availability of large-scale labeled images has reduced the significance
of unsupervised learning. Inspired by the recent trend toward revisiting the
importance of unsupervised learning, we investigate joint supervised and
unsupervised learning in a large-scale setting by augmenting existing neural
networks with decoding pathways for reconstruction. First, we demonstrate that
the intermediate activations of pretrained large-scale classification networks
preserve almost all the information of input images except a portion of local
spatial details. Then, by end-to-end training of the entire augmented
architecture with the reconstructive objective, we show improvement of the
network performance for supervised tasks. We evaluate several variants of
autoencoders, including the recently proposed "what-where" autoencoder that
uses the encoder pooling switches, to study the importance of the architecture
design. Taking the 16-layer VGGNet trained under the ImageNet ILSVRC 2012
protocol as a strong baseline for image classification, our methods improve the
validation-set accuracy by a noticeable margin.
| Yuting Zhang, Kibok Lee, Honglak Lee | null | 1606.06582 | null | null |
ML-based tactile sensor calibration: A universal approach | cs.RO cs.LG | We study the responses of two tactile sensors, the fingertip sensor from the
iCub and the BioTac under different external stimuli. The question of interest
is to which degree both sensors i) allow the estimation of force exerted on the
sensor and ii) enable the recognition of differing degrees of curvature. Making
use of a force-controlled linear motor affecting the tactile sensors, we acquire
several high-quality data sets allowing the study of both sensors under exactly
the same conditions. We also examined the structure of the representation of
tactile stimuli in the recorded tactile sensor data using t-SNE embeddings. The
experiments show that both the iCub and the BioTac excel in different settings.
| Maximilian Karl, Artur Lohrer, Dhananjay Shah, Frederik Diehl, Max
Fiedler, Saahil Ognawala, Justin Bayer, Patrick van der Smagt | null | 1606.06588 | null | null |
Question Relevance in VQA: Identifying Non-Visual And False-Premise
Questions | cs.CV cs.CL cs.LG | Visual Question Answering (VQA) is the task of answering natural-language
questions about images. We introduce the novel problem of determining the
relevance of questions to images in VQA. Current VQA models do not reason about
whether a question is even related to the given image (e.g. What is the capital
of Argentina?) or if it requires information from external resources to answer
correctly. This can break the continuity of a dialogue in human-machine
interaction. Our approaches for determining relevance are composed of two
stages. Given an image and a question, (1) we first determine whether the
question is visual or not, (2) if visual, we determine whether the question is
relevant to the given image or not. Our approaches, based on LSTM-RNNs, VQA
model uncertainty, and caption-question similarity, are able to outperform
strong baselines on both relevance tasks. We also present human studies showing
that VQA models augmented with such question relevance reasoning are perceived
as more intelligent, reasonable, and human-like.
| Arijit Ray, Gordon Christie, Mohit Bansal, Dhruv Batra, Devi Parikh | null | 1606.06622 | null | null |
On Multiplicative Integration with Recurrent Neural Networks | cs.LG | We introduce a general and simple structural design called Multiplicative
Integration (MI) to improve recurrent neural networks (RNNs). MI changes the
way in which information from different sources flows and is integrated in the
computational building block of an RNN, while introducing almost no extra
parameters. The new structure can be easily embedded into many popular RNN
models, including LSTMs and GRUs. We empirically analyze its learning behaviour
and conduct evaluations on several tasks using different RNN models. Our
experimental results demonstrate that Multiplicative Integration can provide a
substantial performance boost over many of the existing RNN models.
| Yuhuai Wu, Saizheng Zhang, Ying Zhang, Yoshua Bengio and Ruslan
Salakhutdinov | null | 1606.06630 | null | null |
Tracking Time-Vertex Propagation using Dynamic Graph Wavelets | cs.LG | Graph Signal Processing generalizes classical signal processing to signal or
data indexed by the vertices of a weighted graph. So far, the research efforts
have been focused on static graph signals. However numerous applications
involve graph signals evolving in time, such as spreading or propagation of
waves on a network. The analysis of this type of data requires a new set of
methods that fully takes into account the time and graph dimensions. We propose
a novel class of wavelet frames named Dynamic Graph Wavelets, whose time-vertex
evolution follows a dynamic process. We demonstrate that this set of functions
can be combined with sparsity based approaches such as compressive sensing to
reveal information on the dynamic processes occurring on a graph. Experiments
on real seismological data show the efficiency of the technique, allowing us to
estimate the epicenter of earthquake events recorded by a seismic network.
| Francesco Grassi, Nathanael Perraudin, Benjamin Ricaud | null | 1606.06653 | null | null |
A Stackelberg Game Perspective on the Conflict Between Machine Learning
and Data Obfuscation | cs.CR cs.LG | Data is the new oil; this refrain is repeated extensively in the age of
internet tracking, machine learning, and data analytics. Social network
analysis, cookie-based advertising, and government surveillance are all
evidence of the use of data for commercial and national interests. Public
pressure, however, is mounting for the protection of privacy. Frameworks such
as differential privacy offer machine learning algorithms methods to guarantee
limits to information disclosure, but they are seldom implemented. Recently,
however, developers have made significant efforts to undermine tracking through
obfuscation tools that hide user characteristics in a sea of noise. These
services highlight an emerging clash between tracking and data obfuscation. In
this paper, we conceptualize this conflict through a dynamic game between users
and a machine learning algorithm that uses empirical risk minimization. First,
a machine learner declares a privacy protection level, and then users respond
by choosing their own perturbation amounts. We study the interaction between
the users and the learner using a Stackelberg game. The utility functions
quantify accuracy using expected loss and privacy in terms of the bounds of
differential privacy. In equilibrium, we find selfish users tend to cause
significant utility loss to trackers by perturbing heavily, in a phenomenon
reminiscent of public good games. Trackers, however, can improve the balance by
proactively perturbing the data themselves. While other work in this area has
studied privacy markets and mechanism design for truthful reporting of user
information, we take a different viewpoint by considering both user and learner
perturbation.
| Jeffrey Pawlick and Quanyan Zhu | null | 1606.06771 | null | null |
Scalable Semi-supervised Learning with Graph-based Kernel Machine | cs.LG | Acquiring labels is often costly, whereas unlabeled data are usually easy to
obtain in modern machine learning applications. Semi-supervised learning
provides a principled machine learning framework to address such situations,
and has been applied successfully in many real-world applications and
industries. Nonetheless, most existing semi-supervised learning methods
encounter two serious limitations when applied to modern and large-scale
datasets: computational burden and memory usage demand. To this end, we present
in this paper the Graph-based semi-supervised Kernel Machine (GKM), a method
that leverages the generalization ability of kernel-based method with the
geometrical and distributive information formulated through a spectral graph
induced from data for semi-supervised learning purpose. Our proposed GKM can be
solved directly in the primal form using the Stochastic Gradient Descent method
with the ideal convergence rate $O(\frac{1}{T})$. Besides, our formulation is
suitable for a wide spectrum of important loss functions in the literature of
machine learning (e.g., Hinge, smooth Hinge, Logistic, L1, and
$\epsilon$-insensitive) and smoothness functions (i.e., $l_p(t) = |t|^p$ with
$p\ge1$). We further show that the well-known Laplacian Support Vector Machine
is a special case of our formulation. We validate our proposed method on
several benchmark datasets to demonstrate that GKM is appropriate for the
large-scale datasets since it is optimal in memory usage and yields superior
classification accuracy whilst simultaneously achieving a significant
computation speed-up in comparison with the state-of-the-art baselines.
| Trung Le, Khanh Nguyen, Van Nguyen, Vu Nguyen, Dinh Phung | null | 1606.06793 | null | null |
Link Prediction via Matrix Completion | cs.SI cs.LG physics.soc-ph | Inspired by the practical importance of social networks, economic networks,
biological networks and so on, studies on large and complex networks have
attracted a surge of attention in recent years. Link prediction is a
fundamental issue to understand the mechanisms by which new links are added to
the networks. We introduce the method of robust principal component analysis
(robust PCA) into link prediction, and estimate the missing entries of the
adjacency matrix. On the one hand, our algorithm is based on the sparsity and
low-rank property of the matrix; on the other hand, it also performs very well when
the network is dense. This is because a relatively dense real network is also
sparse in comparison to the complete graph. According to extensive experiments
on real networks from disparate fields, when the target network is connected
and sufficiently dense, whether it is weighted or unweighted, our method is
demonstrated to be very effective, with prediction accuracy considerably
improved compared with many state-of-the-art algorithms.
| Ratha Pech, Dong Hao, Liming Pan, Hong Cheng and Tao Zhou | 10.1209/0295-5075/117/38002 | 1606.06812 | null | null |
A Curriculum Learning Method for Improved Noise Robustness in Automatic
Speech Recognition | cs.CL cs.LG cs.SD | The performance of automatic speech recognition systems under noisy
environments still leaves room for improvement. Speech enhancement or feature
enhancement techniques for increasing noise robustness of these systems usually
add components to the recognition system that need careful optimization. In
this work, we propose the use of a relatively simple curriculum training
strategy called accordion annealing (ACCAN). It uses a multi-stage training
schedule where samples at signal-to-noise ratio (SNR) values as low as 0dB are
first added and samples at increasingly higher SNR values are gradually added up
to an SNR value of 50dB. We also use a method called per-epoch noise mixing
(PEM) that generates noisy training samples online during training and thus
enables dynamically changing the SNR of our training data. Both the ACCAN and
the PEM methods are evaluated on an end-to-end speech recognition pipeline on
the Wall Street Journal corpus. ACCAN decreases the average word error rate
(WER) on the 20dB to -10dB SNR range by up to 31.4% when compared to a
conventional multi-condition training method.
| Stefan Braun, Daniel Neil, Shih-Chii Liu | null | 1606.06864 | null | null |
A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic
Modeling in Speech Recognition | cs.NE cs.CL cs.LG cs.SD | We present a comprehensive study of deep bidirectional long short-term memory
(LSTM) recurrent neural network (RNN) based acoustic models for automatic
speech recognition (ASR). We study the effect of size and depth and train
models of up to 8 layers. We investigate the training aspect and study
different variants of optimization methods, batching, truncated
backpropagation, different regularization techniques such as dropout and $L_2$
regularization, and different gradient clipping variants.
The major part of the experimental analysis was performed on the Quaero
corpus. Additional experiments were also performed on the Switchboard corpus.
Our best LSTM model has a relative improvement in word error rate of over 14\%
compared to our best feed-forward neural network (FFNN) baseline on the Quaero
task. On this task, we get our best result with an 8 layer bidirectional LSTM
and we show that a pretraining scheme with layer-wise construction helps for
deep LSTMs.
Finally, we compare the training computation time of many of the presented
experiments in relation to recognition performance.
All the experiments were done with RETURNN, the RWTH extensible training
framework for universal recurrent neural networks in combination with RASR, the
RWTH ASR toolkit.
| Albert Zeyer, Patrick Doetsch, Paul Voigtlaender, Ralf Schl\"uter,
Hermann Ney | 10.1109/ICASSP.2017.7952599 | 1606.06871 | null | null |
A segmental framework for fully-unsupervised large-vocabulary speech
recognition | cs.CL cs.LG | Zero-resource speech technology is a growing research area that aims to
develop methods for speech processing in the absence of transcriptions,
lexicons, or language modelling text. Early term discovery systems focused on
identifying isolated recurring patterns in a corpus, while more recent
full-coverage systems attempt to completely segment and cluster the audio into
word-like units---effectively performing unsupervised speech recognition. This
article presents the first attempt we are aware of to apply such a system to
large-vocabulary multi-speaker data. Our system uses a Bayesian modelling
framework with segmental word representations: each word segment is represented
as a fixed-dimensional acoustic embedding obtained by mapping the sequence of
feature frames to a single embedding vector. We compare our system on English
and Xitsonga datasets to state-of-the-art baselines, using a variety of
measures including word error rate (obtained by mapping the unsupervised output
to ground truth transcriptions). Very high word error rates are reported---in
the order of 70--80% for speaker-dependent and 80--95% for speaker-independent
systems---highlighting the difficulty of this task. Nevertheless, in terms of
cluster quality and word segmentation metrics, we show that by imposing a
consistent top-down segmentation while also using bottom-up knowledge from
detected syllable boundaries, both single-speaker and multi-speaker versions of
our system outperform a purely bottom-up single-speaker syllable-based
approach. We also show that the discovered clusters can be made less speaker-
and gender-specific by using an unsupervised autoencoder-like feature extractor
to learn better frame-level features (prior to embedding). Our system's
discovered clusters are still less pure than those of unsupervised term
discovery systems, but provide far greater coverage.
| Herman Kamper, Aren Jansen, Sharon Goldwater | 10.1016/j.csl.2017.04.008 | 1606.06950 | null | null |
Towards stationary time-vertex signal processing | cs.LG cs.SI stat.ML | Graph-based methods for signal processing have shown promise for the analysis
of data exhibiting irregular structure, such as those found in social,
transportation, and sensor networks. Yet, though these systems are often
dynamic, state-of-the-art methods for signal processing on graphs ignore the
dimension of time, treating successive graph signals independently or taking a
global average. To address this shortcoming, this paper considers the
statistical analysis of time-varying graph signals. We introduce a novel
definition of joint (time-vertex) stationarity, which generalizes the classical
definition of time stationarity and the more recent definition appropriate for
graphs. Joint stationarity gives rise to a scalable Wiener optimization
framework for joint denoising, semi-supervised learning, or more generally
inversing a linear operator, that is provably optimal. Experimental results on
real weather data demonstrate that taking into account graph and time
dimensions jointly can yield significant accuracy improvements in the
reconstruction effort.
| Nathanael Perraudin and Andreas Loukas and Francesco Grassi and Pierre
Vandergheynst | null | 1606.06962 | null | null |
Ancestral Causal Inference | cs.LG cs.AI stat.ML | Constraint-based causal discovery from limited data is a notoriously
difficult challenge due to the many borderline independence test decisions.
Several approaches to improve the reliability of the predictions by exploiting
redundancy in the independence information have been proposed recently. Though
promising, existing approaches can still be greatly improved in terms of
accuracy and scalability. We present a novel method that reduces the
combinatorial explosion of the search space by using a more coarse-grained
representation of causal information, drastically reducing computation time.
Additionally, we propose a method to score causal predictions based on their
confidence. Crucially, our implementation also allows one to easily combine
observational and interventional data and to incorporate various types of
available background knowledge. We prove soundness and asymptotic consistency
of our method and demonstrate that it can outperform the state-of-the-art on
synthetic data, achieving a speedup of several orders of magnitude. We
illustrate its practical feasibility by applying it on a challenging protein
data set.
| Sara Magliacane, Tom Claassen, Joris M. Mooij | null | 1606.07035 | null | null |
Toward Interpretable Topic Discovery via Anchored Correlation
Explanation | stat.ML cs.CL cs.LG | Many predictive tasks, such as diagnosing a patient based on their medical
chart, are ultimately defined by the decisions of human experts. Unfortunately,
encoding experts' knowledge is often time consuming and expensive. We propose a
simple way to use fuzzy and informal knowledge from experts to guide discovery
of interpretable latent topics in text. The underlying intuition of our
approach is that latent factors should be informative about both correlations
in the data and a set of relevance variables specified by an expert.
Mathematically, this approach is a combination of the information bottleneck
and Total Correlation Explanation (CorEx). We give a preliminary evaluation of
Anchored CorEx, showing that it produces more coherent and interpretable topics
on two distinct corpora.
| Kyle Reing, David C. Kale, Greg Ver Steeg, Aram Galstyan | null | 1606.07043 | null | null |
Finite Sample Prediction and Recovery Bounds for Ordinal Embedding | stat.ML cs.LG | The goal of ordinal embedding is to represent items as points in a
low-dimensional Euclidean space given a set of constraints in the form of
distance comparisons like "item $i$ is closer to item $j$ than item $k$".
Ordinal constraints like this often come from human judgments. To account for
errors and variation in judgments, we consider the noisy situation in which the
given constraints are independently corrupted by reversing the correct
constraint with some probability. This paper makes several new contributions to
this problem. First, we derive prediction error bounds for ordinal embedding
with noise by exploiting the fact that the rank of a distance matrix of points
in $\mathbb{R}^d$ is at most $d+2$. These bounds characterize how well a
learned embedding predicts new comparative judgments. Second, we investigate
the special case of a known noise model and study the Maximum Likelihood
estimator. Third, knowledge of the noise model enables us to relate prediction
errors to embedding accuracy. This relationship is highly non-trivial since we
show that the linear map corresponding to distance comparisons is
non-invertible, but there exists a nonlinear map that is invertible. Fourth,
two new algorithms for ordinal embedding are proposed and evaluated in
experiments.
| Lalit Jain, Kevin Jamieson, Robert Nowak | null | 1606.07081 | null | null |
Manifold Approximation by Moving Least-Squares Projection (MMLS) | cs.GR cs.LG math.DG | In order to avoid the curse of dimensionality, frequently encountered in Big
Data analysis, there has been vast development in the field of linear and
nonlinear dimension reduction techniques in recent years. These techniques
(sometimes referred to as manifold learning) assume that the scattered input
data is lying on a lower dimensional manifold, thus the high dimensionality
problem can be overcome by learning the lower dimensionality behavior. However,
in real life applications, data is often very noisy. In this work, we propose a
method to approximate $\mathcal{M}$ a $d$-dimensional $C^{m+1}$ smooth
submanifold of $\mathbb{R}^n$ ($d \ll n$) based upon noisy scattered data
points (i.e., a data cloud). We assume that the data points are located "near"
the lower dimensional manifold and suggest a non-linear moving least-squares
projection on an approximating $d$-dimensional manifold. Under some mild
assumptions, the resulting approximant is shown to be infinitely smooth and of
high approximation order (i.e., $O(h^{m+1})$, where $h$ is the fill distance
and $m$ is the degree of the local polynomial approximation). The method
presented here assumes no analytic knowledge of the approximated manifold and
the approximation algorithm is linear in the large dimension $n$. Furthermore,
the approximating manifold can serve as a framework to perform operations
directly on the high dimensional data in a computationally efficient manner.
This way, the preparatory step of dimension reduction, which induces
distortions to the data, can be avoided altogether.
| Barak Sober and David Levin | 10.1007/s00365-019-09489-8 | 1606.07104 | null | null |
Visualizing Dynamics: from t-SNE to SEMI-MDPs | stat.ML cs.LG | Deep Reinforcement Learning (DRL) is a trending field of research, showing
great promise in many challenging problems such as playing Atari, solving Go
and controlling robots. While DRL agents perform well in practice we are still
missing the tools to analyze their performance and visualize the temporal
abstractions that they learn. In this paper, we present a novel method that
automatically discovers an internal Semi Markov Decision Process (SMDP) model
in the Deep Q Network's (DQN) learned representation. We suggest a novel
visualization method that represents the SMDP model by a directed graph and
visualize it above a t-SNE map. We show how we can interpret the agent's policy
and give evidence for the hierarchical state aggregation that DQNs are learning
automatically. Our algorithm is fully automatic, does not require any domain
specific knowledge and is evaluated by a novel likelihood-based evaluation
criterion.
| Nir Ben Zrihem, Tom Zahavy, Shie Mannor | null | 1606.07112 | null | null |
Explainable Restricted Boltzmann Machines for Collaborative Filtering | stat.ML cs.LG | Most accurate recommender systems are black-box models, hiding the reasoning
behind their recommendations. Yet explanations have been shown to increase the
user's trust in the system in addition to providing other benefits such as
scrutability, meaning the ability to verify the validity of recommendations.
This gap between accuracy and transparency or explainability has generated an
interest in automated explanation generation methods. Restricted Boltzmann
Machines (RBM) are accurate models for CF that also lack interpretability. In
this paper, we focus on RBM based collaborative filtering recommendations, and
further assume the absence of any additional data source, such as item content
or user attributes. We thus propose a new Explainable RBM technique that
computes the top-n recommendation list from items that are explainable.
Experimental results show that our method is effective in generating accurate
and explainable recommendations.
| Behnoush Abdollahi, Olfa Nasraoui | null | 1606.07129 | null | null |
An Approach to Stable Gradient Descent Adaptation of Higher-Order Neural
Units | cs.NE cs.AI cs.CE cs.LG cs.SY | Stability evaluation of a weight-update system of higher-order neural units
(HONUs) with polynomial aggregation of neural inputs (also known as classes of
polynomial neural networks) for adaptation of both feedforward and recurrent
HONUs by a gradient descent method is introduced. An essential core of the
approach is based on spectral radius of a weight-update system, and it allows
stability monitoring and its maintenance at every adaptation step individually.
Assuring stability of the weight-update system (at every single adaptation
step) naturally results in adaptation stability of the whole neural
architecture that adapts to target data. As an aside, the used approach
highlights the fact that the weight optimization of HONU is a linear problem,
so the proposed approach can be generally extended to any neural architecture
that is linear in its adaptable parameters.
| Ivo Bukovsky and Noriyasu Homma | 10.1109/TNNLS.2016.2572310 | 1606.07149 | null | null |
Adaptive and Scalable Android Malware Detection through Online Learning | cs.CR cs.LG | It is well-known that malware constantly evolves so as to evade detection and
this causes the entire malware population to be non-stationary. Contrary to
this fact, prior works on machine learning based Android malware detection have
assumed that the distribution of the observed malware characteristics (i.e.,
features) does not change over time. In this work, we address the problem of
malware population drift and propose a novel online machine learning based
framework, named DroidOL to handle it and effectively detect malware. In order
to perform accurate detection, security-sensitive behaviors are captured from
apps in the form of inter-procedural control-flow sub-graph features using a
state-of-the-art graph kernel. In order to perform scalable detection and to
adapt to the drift and evolution in malware population, an online
passive-aggressive classifier is used.
In a large-scale comparative analysis with more than 87,000 apps, DroidOL
achieves 84.29% accuracy outperforming two state-of-the-art malware techniques
by more than 20% in their typical batch learning setting and more than 3% when
they are continuously re-trained. Our experimental findings strongly indicate
that online learning based approaches are highly suitable for real-world
malware detection.
| Annamalai Narayanan, Liu Yang, Lihui Chen and Liu Jinliang | null | 1606.07150 | null | null |
Interpretable Machine Learning Models for the Digital Clock Drawing Test | stat.ML cs.LG | The Clock Drawing Test (CDT) is a rapid, inexpensive, and popular
neuropsychological screening tool for cognitive conditions. The Digital Clock
Drawing Test (dCDT) uses novel software to analyze data from a digitizing
ballpoint pen that reports its position with considerable spatial and temporal
precision, making possible the analysis of both the drawing process and final
product. We developed methodology to analyze pen stroke data from these
drawings, and computed a large collection of features which were then analyzed
with a variety of machine learning techniques. The resulting scoring systems
were designed to be more accurate than the systems currently used by
clinicians, but just as interpretable and easy to use. The systems also allow
us to quantify the tradeoff between accuracy and interpretability. We created
automated versions of the CDT scoring systems currently used by clinicians,
allowing us to benchmark our models, which indicated that our machine learning
models substantially outperformed the existing scoring systems.
| William Souillard-Mandar, Randall Davis, Cynthia Rudin, Rhoda Au, Dana
Penney | null | 1606.07163 | null | null |
Learning Dynamic Classes of Events using Stacked Multilayer Perceptron
Networks | cs.IR cs.LG | People often use a web search engine to find information about events of
interest, for example, sports competitions, political elections, festivals and
entertainment news. In this paper, we study a problem of detecting
event-related queries, which is the first step before selecting a suitable
time-aware retrieval model. In general, event-related information needs can be
observed in query streams through various temporal patterns of user search
behavior, e.g., spiky peaks for popular events, and periodicities for
repetitive events. However, it is also common that users search for non-popular
events, which may not exhibit temporal variations in query streams, e.g., past
events recently occurred, historical events triggered by anniversaries or
similar events, and future events anticipated to happen. To address the
challenge of detecting dynamic classes of events, we propose a novel deep
learning model to classify a given query into a predetermined set of multiple
event types. Our proposed model, a Stacked Multilayer Perceptron (S-MLP)
network, consists of multilayer perceptron used as a basic learning unit. We
assemble stacked units to further learn complex relationships between neurons
in successive layers. To evaluate our proposed model, we conduct experiments
using real-world queries and a set of manually created ground-truth labels.
Preliminary results have shown that our proposed deep learning model
outperforms the state-of-the-art classification models significantly.
| Nattiya Kanhabua and Huamin Ren and Thomas B. Moeslund | null | 1606.07219 | null | null |
Deep Learning Markov Random Field for Semantic Segmentation | cs.CV cs.LG | Semantic segmentation tasks can be well modeled by Markov Random Field (MRF).
This paper addresses semantic segmentation by incorporating high-order
relations and mixture of label contexts into MRF. Unlike previous works that
optimized MRFs using iterative algorithm, we solve MRF by proposing a
Convolutional Neural Network (CNN), namely Deep Parsing Network (DPN), which
enables deterministic end-to-end computation in a single forward pass.
Specifically, DPN extends a contemporary CNN to model unary terms and
additional layers are devised to approximate the mean field (MF) algorithm for
pairwise terms. It has several appealing properties. First, different from the
recent works that required many iterations of MF during back-propagation, DPN
is able to achieve high performance by approximating one iteration of MF.
Second, DPN represents various types of pairwise terms, making many existing
models as its special cases. Furthermore, pairwise terms in DPN provide a
unified framework to encode rich contextual information in high-dimensional
data, such as images and videos. Third, DPN makes MF easier to be parallelized
and speeded up, thus enabling efficient inference. DPN is thoroughly evaluated
on standard semantic image/video segmentation benchmarks, where a single DPN
model yields state-of-the-art segmentation accuracies on PASCAL VOC 2012,
Cityscapes dataset and CamVid dataset.
| Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang | null | 1606.07230 | null | null |
Algorithmic Composition of Melodies with Deep Recurrent Neural Networks | stat.ML cs.LG | A big challenge in algorithmic composition is to devise a model that is both
easily trainable and able to reproduce the long-range temporal dependencies
typical of music. Here we investigate how artificial neural networks can be
trained on a large corpus of melodies and turned into automated music composers
able to generate new melodies coherent with the style they have been trained
on. We employ gated recurrent unit networks that have been shown to be
particularly efficient in learning complex sequential activations with
arbitrary long time lags. Our model processes rhythm and melody in parallel
while modeling the relation between these two features. Using such an approach,
we were able to generate interesting complete melodies or suggest possible
continuations of a melody fragment that is coherent with the characteristics of
the fragment itself.
| Florian Colombo, Samuel P. Muscinelli, Alexander Seeholzer, Johanni
Brea and Wulfram Gerstner | 10.13140/RG.2.1.2436.5683 | 1606.07251 | null | null |
On the Theoretical Capacity of Evolution Strategies to Statistically
Learn the Landscape Hessian | cs.NE cs.LG | We study the theoretical capacity to statistically learn local landscape
information by Evolution Strategies (ESs). Specifically, we investigate the
covariance matrix when constructed by ESs operating with the selection operator
alone. We model continuous generation of candidate solutions about quadratic
basins of attraction, with deterministic selection of the decision vectors that
minimize the objective function values. Our goal is to rigorously show that
accumulation of winning individuals carries the potential to reveal valuable
information about the search landscape, e.g., as already practically utilized
by derandomized ES variants. We first show that the statistically-constructed
covariance matrix over such winning decision vectors shares the same
eigenvectors with the Hessian matrix about the optimum. We then provide an
analytic approximation of this covariance matrix for a non-elitist multi-child
$(1,\lambda)$-strategy, which holds for a large population size $\lambda$.
Finally, we also numerically corroborate our results.
| Ofer M. Shir, Jonathan Roslund and Amir Yehudayoff | null | 1606.07262 | null | null |
Multiclass feature learning for hyperspectral image classification:
sparse and hierarchical solutions | stat.ML cs.LG | In this paper, we tackle the question of discovering an effective set of
spatial filters to solve hyperspectral classification problems. Instead of
fixing a priori the filters and their parameters using expert knowledge, we let
the model find them within random draws in the (possibly infinite) space of
possible filters. We define an active set feature learner that includes in the
model only features that improve the classifier. To this end, we consider a
fast and linear classifier, multiclass logistic classification, and show that
with a good representation (the filters discovered), such a simple classifier
can reach at least state-of-the-art performance. We apply the proposed active
set learner in four hyperspectral image classification problems, including
agricultural and urban classification at different resolutions, as well as
multimodal data. We also propose a hierarchical setting, which allows us to
generate more complex banks of features that can better describe the
nonlinearities present in the data.
| Devis Tuia, R\'emi Flamary, Nicolas Courty | 10.1016/j.isprsjprs.2015.01.006 | 1606.07279 | null | null |
Event Abstraction for Process Mining using Supervised Learning
Techniques | cs.LG | Process mining techniques focus on extracting insight in processes from event
logs. In many cases, events recorded in the event log are too fine-grained,
causing process discovery algorithms to discover incomprehensible process
models or process models that are not representative of the event log. We show
that when process discovery algorithms are only able to discover an
unrepresentative process model from a low-level event log, structure in the
process can in some cases still be discovered by first abstracting the event
log to a higher level of granularity. This gives rise to the challenge to
bridge the gap between an original low-level event log and a desired high-level
perspective on this log, such that a more structured or more comprehensible
process model can be discovered. We show that supervised learning can be
leveraged for the event abstraction task when annotations with high-level
interpretations of the low-level events are available for a subset of the
sequences (i.e., traces). We present a method to generate feature vector
representations of events based on XES extensions, and describe an approach to
abstract events in an event log with Conditional Random Fields using these event
features. Furthermore, we propose a sequence-focused metric to evaluate
supervised event abstraction results that fits closely to the tasks of process
discovery and conformance checking. We conclude this paper by demonstrating the
usefulness of supervised event abstraction for obtaining more structured and/or
more comprehensible process models using both real life event data and
synthetic event data.
| Niek Tax, Natalia Sidorova, Reinder Haakma, Wil M. P. van der Aalst | 10.1007/978-3-319-56994-9_18 | 1606.07283 | null | null |
Importance sampling strategy for non-convex randomized block-coordinate
descent | cs.LG math.OC | As the number of samples and dimensionality of optimization problems related
to statistics and machine learning explode, block coordinate descent algorithms
have gained popularity since they reduce the original problem to several
smaller ones. Coordinates to be optimized are usually selected randomly
according to a given probability distribution. We introduce an importance
sampling strategy that helps randomized coordinate descent algorithms to focus
on blocks that are still far from convergence. The framework applies to
problems composed of the sum of two possibly non-convex terms, one being
separable and non-smooth. We have compared our algorithm to a full gradient
proximal approach as well as to a randomized block coordinate algorithm that
considers uniform sampling and cyclic block coordinate descent. Experimental
evidence shows the clear benefit of using an importance sampling strategy.
| R\'emi Flamary (LAGRANGE, OCA), Alain Rakotomamonjy (LITIS), Gilles
Gasso (LITIS) | 10.1109/CAMSAP.2015.7383796 | 1606.07286 | null | null |
Non-convex regularization in remote sensing | stat.ML cs.LG | In this paper, we study the effect of different regularizers and their
implications in high dimensional image classification and sparse linear
unmixing. Although kernelization or sparse methods are globally accepted
solutions for processing data in high dimensions, we present here a study on
the impact of the form of regularization used and its parametrization. We
consider regularization via traditional squared ($\ell_2$) and sparsity-promoting
($\ell_1$) norms, as well as more unconventional nonconvex regularizers ($\ell_p$
and Log Sum Penalty). We compare their properties and advantages on several
classification and linear unmixing tasks and provide advice on the choice of the best
regularizer for the problem at hand. Finally, we also provide a fully
functional toolbox for the community.
| Devis Tuia, Remi Flamary, Michel Barlaud | 10.1109/TGRS.2016.2585201 | 1606.07289 | null | null |
Explaining Predictions of Non-Linear Classifiers in NLP | cs.CL cs.IR cs.LG cs.NE stat.ML | Layer-wise relevance propagation (LRP) is a recently proposed technique for
explaining predictions of complex non-linear classifiers in terms of input
variables. In this paper, we apply LRP for the first time to natural language
processing (NLP). More precisely, we use it to explain the predictions of a
convolutional neural network (CNN) trained on a topic categorization task. Our
analysis highlights which words are relevant for a specific prediction of the
CNN. We compare our technique to standard sensitivity analysis, both
qualitatively and quantitatively, using a "word deleting" perturbation
experiment, a PCA analysis, and various visualizations. All experiments
validate the suitability of LRP for explaining the CNN predictions, which is
also in line with results reported in recent image classification studies.
| Leila Arras and Franziska Horn and Gr\'egoire Montavon and
Klaus-Robert M\"uller and Wojciech Samek | null | 1606.07298 | null | null |
Unsupervised preprocessing for Tactile Data | cs.RO cs.LG stat.ML | Tactile information is important for gripping, stable grasp, and in-hand
manipulation, yet the complexity of tactile data prevents widespread use of
such sensors. We make use of an unsupervised learning algorithm that transforms
the complex tactile data into a compact, latent representation without the need
to record ground truth reference data. These compact representations can either
be used directly in a reinforcement learning based controller or can be used to
calibrate the tactile sensor to physical quantities with only a few datapoints.
We show the quality of our latent representation by predicting important
features and with a simple control task.
| Maximilian Karl, Justin Bayer, Patrick van der Smagt | null | 1606.07312 | null | null |
Nearly-optimal Robust Matrix Completion | cs.LG cs.NA | In this paper, we consider the problem of Robust Matrix Completion (RMC)
where the goal is to recover a low-rank matrix by observing a small number of
its entries out of which a few can be arbitrarily corrupted. We propose a
simple projected gradient descent method to estimate the low-rank matrix that
alternately performs a projected gradient descent step and cleans up a few of
the corrupted entries using hard-thresholding. Our algorithm solves RMC using
a nearly optimal number of observations as well as a nearly optimal number of
corruptions. Our result also implies significant improvement over the existing
time complexity bounds for the low-rank matrix completion problem. Finally, an
application of our result to the robust PCA problem (low-rank+sparse matrix
separation) leads to a nearly linear time (in matrix dimensions) algorithm for
the same; existing state-of-the-art methods require quadratic time. Our
empirical results corroborate our theoretical results and show that even for
moderate-sized problems, our method for robust PCA is an order of magnitude
faster than the existing methods.
| Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain | null | 1606.07315 | null | null |
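A minimal sketch of one iteration of the alternating scheme described in the abstract above, under illustrative conventions (unit step size, a fixed corruption threshold, SVD-based rank projection); the paper's exact step sizes and threshold schedule differ.

import numpy as np

def rmc_step(M_obs, mask, L, rank, thresh):
    # One projected-gradient step for robust matrix completion (sketch).
    # M_obs: observed entries (zeros elsewhere); mask: boolean observation mask;
    # L: current low-rank estimate; thresh: hard-threshold level for corruptions.
    R = (L - M_obs) * mask                       # residual on observed entries
    S = np.where(np.abs(R) > thresh, R, 0.0)     # treat large residuals as corruptions
    G = (L - M_obs - S) * mask                   # gradient of the cleaned-up fit
    Y = L - G                                    # gradient step (unit step, illustrative)
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]  # project back onto rank-r matrices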
DropNeuron: Simplifying the Structure of Deep Neural Networks | cs.CV cs.LG stat.ML | Deep learning using multi-layer neural networks (NNs) architecture manifests
superb power in modern machine learning systems. The trained Deep Neural
Networks (DNNs) are typically large. The question we would like to address is
whether it is possible to simplify the NN during training process to achieve a
reasonable performance within an acceptable computational time. We present a
novel approach to optimising a deep neural network through regularisation of
the network architecture. We propose regularisers which support a simple
mechanism for dropping neurons during the network training process. The method
supports the construction of simpler deep neural networks whose performance is
comparable to that of the original network. As a proof of concept, we evaluate the
proposed method on examples including sparse linear regression, a deep
autoencoder and a convolutional neural network. The evaluations demonstrate
excellent performance.
The code for this work can be found in
http://www.github.com/panweihit/DropNeuron
| Wei Pan and Hao Dong and Yike Guo | null | 1606.07326 | null | null |
Analyzing the Behavior of Visual Question Answering Models | cs.CL cs.AI cs.CV cs.LG | Recently, a number of deep-learning based models have been proposed for the
task of Visual Question Answering (VQA). The performance of most models is
clustered around 60-70%. In this paper we propose systematic methods to analyze
the behavior of these models as a first step towards recognizing their
strengths and weaknesses, and identifying the most fruitful directions for
progress. We analyze two models, one each from two major classes of VQA models
-- with-attention and without-attention and show the similarities and
differences in the behavior of these models. We also analyze the winning entry
of the VQA Challenge 2016.
Our behavior analysis reveals that despite recent progress, today's VQA
models are "myopic" (tend to fail on sufficiently novel instances), often "jump
to conclusions" (converge on a predicted answer after 'listening' to just half
the question), and are "stubborn" (do not change their answers across images).
| Aishwarya Agrawal, Dhruv Batra, Devi Parikh | null | 1606.07356 | null | null |
Parallel SGD: When does averaging help? | stat.ML cs.LG | Consider a number of workers running SGD independently on the same pool of
data and averaging the models every once in a while -- a common but not well
understood practice. We study model averaging as a variance-reducing mechanism
and describe two ways in which the frequency of averaging affects convergence.
For convex objectives, we show the benefit of frequent averaging depends on the
gradient variance envelope. For non-convex objectives, we illustrate that this
benefit depends on the presence of multiple globally optimal points. We
complement our findings with multicore experiments on both synthetic and real
data.
| Jian Zhang, Christopher De Sa, Ioannis Mitliagkas, Christopher R\'e | null | 1606.07365 | null | null |
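A toy Python simulation of the practice being analyzed in the abstract above: K independent SGD workers on a shared least-squares problem, with model averaging every tau steps. All constants are illustrative.

import numpy as np

rng = np.random.default_rng(1)
d, n, K, tau = 10, 1000, 4, 25                    # dim, samples, workers, averaging period
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.1 * rng.standard_normal(n)
workers = [np.zeros(d) for _ in range(K)]

for step in range(1, 501):
    for k in range(K):
        i = rng.integers(n)                       # each worker samples independently
        g = (X[i] @ workers[k] - y[i]) * X[i]     # stochastic gradient
        workers[k] -= 0.01 * g
    if step % tau == 0:                           # periodic model averaging
        avg = sum(workers) / K
        workers = [avg.copy() for _ in range(K)]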
Personalized Prognostic Models for Oncology: A Machine Learning Approach | stat.AP cs.LG stat.ML | We have applied a little-known data transformation to subsets of the
Surveillance, Epidemiology, and End Results (SEER) publicly available data of
the National Cancer Institute (NCI) to make it suitable input to standard
machine learning classifiers. This transformation properly treats the
right-censored data in the SEER data and the resulting Random Forest and
Multi-Layer Perceptron models predict full survival curves. Treating the 6-, 12-,
and 60-month points of the resulting survival curves as 3 binary classifiers,
the 18 resulting classifiers have AUC values ranging from .765 to .885. Further
evidence that the models have generalized well from the training data is
provided by the extremely high levels of agreement between the random forest
and neural network models' predictions on the 6-, 12-, and 60-month binary
classifiers.
| David Dooling, Angela Kim, Barbara McAneny, Jennifer Webster | null | 1606.07369 | null | null |
Multi-Stage Temporal Difference Learning for 2048-like Games | cs.LG | Szubert and Jaskowski successfully used temporal difference (TD) learning
together with n-tuple networks for playing the game 2048. However, we observed
that programs based on TD learning still hardly reach large
tiles. In this paper, we propose multi-stage TD (MS-TD) learning, a kind of
hierarchical reinforcement learning method, to effectively improve the
performance for the rates of reaching large tiles, which are good metrics to
analyze the strength of 2048 programs. Our experiments showed significant
improvements over the one without using MS-TD learning. Namely, using 3-ply
expectimax search, the program with MS-TD learning reached 32768-tiles with a
rate of 18.31%, while the one with TD learning did not reach any. After further
tuning, our 2048 program reached 32768-tiles at a rate of 31.75% in 10,000
games, and one of these games even reached a 65536-tile, which is, to our
knowledge, the first time a 65536-tile has ever been reached. In addition, the MS-TD learning method
can be easily applied to other 2048-like games, such as Threes. Based on MS-TD
learning, our experiments for Threes also demonstrated similar performance
improvement, where the program with MS-TD learning reached 6144-tiles with a
rate of 7.83%, while the one with TD learning only reached 0.45%.
| Kun-Hao Yeh, I-Chen Wu, Chu-Hsuan Hsueh, Chia-Chuan Chang, Chao-Chin
Liang, Han Chiang | null | 1606.07374 | null | null |
Robust Learning of Fixed-Structure Bayesian Networks | cs.DS cs.AI cs.LG math.ST stat.TH | We investigate the problem of learning Bayesian networks in a robust model
where an $\epsilon$-fraction of the samples are adversarially corrupted. In
this work, we study the fully observable discrete case where the structure of
the network is given. Even in this basic setting, previous learning algorithms
either run in exponential time or lose dimension-dependent factors in their
error guarantees. We provide the first computationally efficient robust
learning algorithm for this problem with dimension-independent error
guarantees. Our algorithm has near-optimal sample complexity, runs in
polynomial time, and achieves error that scales nearly-linearly with the
fraction of adversarially corrupted samples. Finally, we show on both synthetic
and semi-synthetic data that our algorithm performs well in practice.
| Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart | null | 1606.07384 | null | null |
Deep Recurrent Neural Networks for Supernovae Classification | astro-ph.IM astro-ph.CO cs.LG physics.data-an | We apply deep recurrent neural networks, which are capable of learning
complex sequential information, to classify supernovae\footnote{Code available
at
\href{https://github.com/adammoss/supernovae}{https://github.com/adammoss/supernovae}}.
The observational time and filter fluxes are used as inputs to the network, but
since the inputs are agnostic additional data such as host galaxy information
can also be included. Using the Supernovae Photometric Classification Challenge
(SPCC) data, we find that deep networks are capable of learning about light
curves; however, the performance of the network is highly sensitive to the
amount of training data. For a training size of 50\% of the representational
SPCC dataset (around $10^4$ supernovae) we obtain a type-Ia vs. non-type-Ia
classification accuracy of 94.7\%, an area under the Receiver Operating
Characteristic curve AUC of 0.986 and a SPCC figure-of-merit $F_1=0.64$. When
using only the data for the early-epoch challenge defined by the SPCC we
achieve a classification accuracy of 93.1\%, AUC of 0.977 and $F_1=0.58$,
results almost as good as with the whole light-curve. By employing
bidirectional neural networks we can acquire impressive classification results
between supernova types I, II and III at an accuracy of 90.4\% and AUC of
0.974. We also apply a pre-trained model to obtain classification probabilities
as a function of time, and show it can give early indications of supernovae
type. Our method is competitive with existing algorithms and has applications
for future large-scale photometric surveys.
| Tom Charnock and Adam Moss | 10.3847/2041-8213/aa603d | 1606.07442 | null | null |
The VGLC: The Video Game Level Corpus | cs.HC cs.AI cs.LG | Levels are a key component of many different video games, and a large body of
work has been produced on how to procedurally generate game levels. Recently,
Machine Learning techniques have been applied to video game level generation
with the aim of automatically generating levels that have the properties
of the training corpus. Towards that end we have made available a corpus of
video game levels in an easy-to-parse format ideal for different machine
learning and other game AI research purposes.
| Adam James Summerville, Sam Snodgrass, Michael Mateas, Santiago
Onta\~n\'on | null | 1606.07487 | null | null |
Sort Story: Sorting Jumbled Images and Captions into Stories | cs.CL cs.AI cs.CV cs.LG | Temporal common sense has applications in AI tasks such as QA, multi-document
summarization, and human-AI communication. We propose the task of sequencing --
given a jumbled set of aligned image-caption pairs that belong to a story, the
task is to sort them such that the output sequence forms a coherent story. We
present multiple approaches, via unary (position) and pairwise (order)
predictions, and their ensemble-based combinations, achieving strong results on
this task. We use both text-based and image-based features, which yield
complementary improvements. Using qualitative examples, we demonstrate that our
models have learnt interesting aspects of temporal common sense.
| Harsh Agrawal, Arjun Chandrasekaran, Dhruv Batra, Devi Parikh, Mohit
Bansal | null | 1606.07493 | null | null |
Is a Picture Worth Ten Thousand Words in a Review Dataset? | cs.CV cs.CL cs.IR cs.LG cs.NE | While textual reviews have become prominent in many recommendation-based
systems, building automated frameworks to provide relevant visual cues for text
reviews where pictures are not available is a new form of task confronted by
data mining and machine learning researchers. Suggestions of pictures that are
relevant to the content of a review could significantly benefit the users by
increasing the effectiveness of a review. We propose a deep learning-based
framework to automatically: (1) tag the images available in a review dataset,
(2) generate a caption for each image that does not have one, and (3) enhance
each review by recommending relevant images that might not be uploaded by the
corresponding reviewer. We evaluate the proposed framework using the Yelp
Challenge Dataset. While a subset of the images in this particular dataset are
correctly captioned, the majority of the pictures do not have any associated
text. Moreover, there is no mapping between reviews and images. Each image has
a corresponding business-tag where the picture was taken, though. The overall
data setting and unavailability of crucial pieces required for a mapping make
the problem of recommending images for reviews a major challenge. Qualitative
and quantitative evaluations indicate that our proposed framework provides high
quality enhancements through automatic captioning, tagging, and recommendation
for mapping reviews and images.
| Roberto Camacho Barranco (1), Laura M. Rodriguez (1), Rebecca Urbina
(1), and M. Shahriar Hossain (1) ((1) The University of Texas at El Paso) | null | 1606.07496 | null | null |
On the Solvability of Inductive Problems: A Study in Epistemic Topology | cs.LO cs.LG | We investigate the issues of inductive problem-solving and learning by
doxastic agents. We provide topological characterizations of solvability and
learnability, and we use them to prove that AGM-style belief revision is
"universal", i.e., that every solvable problem is solvable by AGM conditioning.
| Alexandru Baltag (Institute for logic, Language and Computation.
University of Amsterdam), Nina Gierasimczuk (Institute for Logic, Language
and Computation. University of Amsterdam), Sonja Smets (Institute for Logic,
Language and Computation. University of Amsterdam) | 10.4204/EPTCS.215.7 | 1606.07518 | null | null |
Satisfying Real-world Goals with Dataset Constraints | cs.LG | The goal of minimizing misclassification error on a training set is often
just one of several real-world goals that might be defined on different
datasets. For example, one may require a classifier to also make positive
predictions at some specified rate for some subpopulation (fairness), or to
achieve a specified empirical recall. Other real-world goals include reducing
churn with respect to a previously deployed model, or stabilizing online
training. In this paper we propose handling multiple goals on multiple datasets
by training with dataset constraints, using the ramp penalty to accurately
quantify costs, and present an efficient algorithm to approximately optimize
the resulting non-convex constrained optimization problem. Experiments on both
benchmark and real-world industry datasets demonstrate the effectiveness of our
approach.
| Gabriel Goh, Andrew Cotter, Maya Gupta, Michael Friedlander | null | 1606.07558 | null | null |
Multipartite Ranking-Selection of Low-Dimensional Instances by
Supervised Projection to High-Dimensional Space | stat.ML cs.CV cs.LG | Pruning of redundant or irrelevant instances of data is a key to every
successful solution for pattern recognition. In this paper, we present a novel
ranking-selection framework for low-length but highly correlated instances.
Instead of working in the low-dimensional instance space, we learn a supervised
projection to high-dimensional space spanned by the number of classes in the
dataset under study. Imposing higher distinctions by exposing the notion of
labels to the instances allows us to deploy one-versus-all ranking for each
individual class and to select quality instances via adaptive thresholding of
the overall scores. To prove the efficiency of our paradigm, we employ it for
the purpose of texture understanding which is a hard recognition challenge due
to high similarity of texture pixels and low dimensionality of their color
features. Our experiments show considerable improvements in recognition
performance over other local descriptors on several publicly available
datasets.
| Arash Shahriari | null | 1606.07575 | null | null |
Regression Trees and Random forest based feature selection for malaria
risk exposure prediction | stat.ML cs.LG | This paper deals with prediction of anopheles number, the main vector of
malaria risk, using environmental and climate variables. The variables
selection is based on an automatic machine learning method using regression
trees, and random forests combined with stratified two levels cross validation.
The minimum threshold of variable importance is assessed using the quadratic
distance of variable importances, while the optimal subset of selected variables
is used to perform predictions. Finally, the results proved to be
qualitatively better, in terms of selection, prediction, and CPU time,
than those obtained by the GLM-Lasso method.
| Bienvenue Kouway\`e | null | 1606.07578 | null | null |
Is the Bellman residual a bad proxy? | cs.LG stat.ML | This paper aims at theoretically and empirically comparing two standard
optimization criteria for Reinforcement Learning: i) maximization of the mean
value and ii) minimization of the Bellman residual. For that purpose, we place
ourselves in the framework of policy search algorithms, that are usually
designed to maximize the mean value, and derive a method that minimizes the
residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ over policies. A theoretical analysis
shows how good this proxy is for policy optimization, and notably that it is
better than its value-based counterpart. We also propose experiments on
randomly generated generic Markov decision processes, specifically designed for
studying the influence of the involved concentrability coefficient. They show
that the Bellman residual is generally a bad proxy for policy optimization and
that directly maximizing the mean value is much better, despite the current
lack of deep theoretical analysis. This might seem obvious, as directly
addressing the problem of interest is usually better, but given the prevalence
of (projected) Bellman residual minimization in value-based reinforcement
learning, we believe that this question is worth considering.
| Matthieu Geist and Bilal Piot and Olivier Pietquin | null | 1606.07636 | null | null |
Hybrid Recommender System based on Autoencoders | cs.LG cs.IR | A standard model for Recommender Systems is the Matrix Completion setting:
given partially known matrix of ratings given by users (rows) to items
(columns), infer the unknown ratings. In the last decades, few attempts were
made to handle that objective with Neural Networks, but recently an
architecture based on Autoencoders proved to be a promising approach. In the
current paper, we enhance that architecture (i) by using a loss function
adapted to input data with missing values, and (ii) by incorporating side
information. The experiments demonstrate that while side information only
slightly improves the test error averaged over all users/items, it has more impact
on cold users/items.
| Florian Strub (CRIStAL, SEQUEL), Romaric Gaudel (CRIStAL, SEQUEL),
J\'er\'emie Mary (CRIStAL, SEQUEL) | 10.1145/2988450.2988456 | 1606.07659 | null | null |
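Enhancement (i) in the abstract above amounts to computing the reconstruction loss only over observed ratings. A minimal sketch, assuming unobserved entries are encoded as NaN (an assumption, not the paper's data format):

import numpy as np

def masked_mse(pred, ratings):
    # Reconstruction loss over observed entries only; NaN marks "missing".
    mask = ~np.isnan(ratings)
    diff = pred[mask] - ratings[mask]
    return np.mean(diff ** 2)

# Example: a 2-user x 3-item rating matrix with two missing entries.
ratings = np.array([[5.0, np.nan, 3.0],
                    [np.nan, 4.0, 1.0]])
pred = np.full_like(ratings, 3.0)
print(masked_mse(pred, ratings))   # averages over the 4 observed cells only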
Collective Semi-Supervised Learning for User Profiling in Social Media | cs.SI cs.LG | The abundance of user-generated data in social media has incentivized the
development of methods to infer the latent attributes of users, which are
crucially useful for personalization, advertising and recommendation. However,
the current user profiling approaches have limited success, due to the lack of
a principled way to integrate different types of social relationships of a
user, and the reliance on scarcely-available labeled data in building a
prediction model. In this paper, we present a novel solution termed Collective
Semi-Supervised Learning (CSL), which provides a principled means to integrate
different types of social relationship and unlabeled data under a unified
computational framework. The joint learning from multiple relationships and
unlabeled data yields a computationally sound and accurate approach to model
user attributes in social media. Extensive experiments using Twitter data have
demonstrated the efficacy of our CSL approach in inferring user attributes such
as account type and marital status. We also show how CSL can be used to
determine important user features, and to make inference on a larger user
population.
| Richard J. Oentaryo, Ee-Peng Lim, Freddy Chong Tat Chua, Jia-Wei Low,
David Lo | null | 1606.07707 | null | null |
Neural Network Based Next-Song Recommendation | cs.IR cs.AI cs.LG | Recently, the next-item/basket recommendation system, which considers the
sequential relation between bought items, has drawn the attention of researchers.
The utilization of sequential patterns has boosted performance on several kinds
of recommendation tasks. Inspired by natural language processing (NLP)
techniques, we propose a novel neural network (NN) based next-song recommender,
CNN-rec, in this paper. Then, we compare the proposed system with several NN
based and classic recommendation systems on the next-song recommendation task.
Verification results indicate the proposed system outperforms classic systems
and has comparable performance with the state-of-the-art system.
| Kai-Chun Hsu, Szu-Yu Chou, Yi-Hsuan Yang, Tai-Shih Chi | null | 1606.07722 | null | null |
Sampling-based Gradient Regularization for Capturing Long-Term
Dependencies in Recurrent Neural Networks | cs.NE cs.LG | Vanishing (and exploding) gradients effect is a common problem for recurrent
neural networks with nonlinear activation functions which use backpropagation
method for calculation of derivatives. Deep feedforward neural networks with
many hidden layers also suffer from this effect. In this paper we propose a
novel universal technique that makes the norm of the gradient stay in the
suitable range. We construct a way to estimate a contribution of each training
example to the norm of the long-term components of the target function's
gradient. Using this subroutine we can construct mini-batches for the
stochastic gradient descent (SGD) training that leads to high performance and
accuracy of the trained network even for very complex tasks. We provide a
straightforward mathematical estimation of a mini-batch's impact on the
gradient norm and prove its correctness theoretically. To check our framework
experimentally we use special synthetic benchmarks for testing RNNs on their
ability to capture long-term dependencies. Our network can detect links between
events in the (temporal) sequence at ranges of approximately 100 time steps and longer.
| Artem Chernodub and Dimitri Nowicki | null | 1606.07767 | null | null |
Precise neural network computation with imprecise analog devices | cs.NE cs.AI cs.LG | The operations used for neural network computation map favorably onto simple
analog circuits, which outshine their digital counterparts in terms of
compactness and efficiency. Nevertheless, such implementations have been
largely supplanted by digital designs, partly because of device mismatch
effects due to material and fabrication imperfections. We propose a framework
that exploits the power of deep learning to compensate for this mismatch by
incorporating the measured device variations as constraints in the neural
network training process. This eliminates the need for mismatch minimization
strategies and allows circuit complexity and power-consumption to be reduced to
a minimum. Our results, based on large-scale simulations as well as a prototype
VLSI chip implementation, indicate a processing efficiency comparable to current
state-of-the-art digital implementations. This method is suitable for future
technology based on nanodevices with large variability, such as memristive
arrays.
| Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael
Pfeiffer | null | 1606.07786 | null | null |
Wide & Deep Learning for Recommender Systems | cs.LG cs.IR stat.ML | Generalized linear models with nonlinear feature transformations are widely
used for large-scale regression and classification problems with sparse inputs.
Memorization of feature interactions through a wide set of cross-product
feature transformations are effective and interpretable, while generalization
requires more feature engineering effort. With less feature engineering, deep
neural networks can generalize better to unseen feature combinations through
low-dimensional dense embeddings learned for the sparse features. However, deep
neural networks with embeddings can over-generalize and recommend less relevant
items when the user-item interactions are sparse and high-rank. In this paper,
we present Wide & Deep learning---jointly trained wide linear models and deep
neural networks---to combine the benefits of memorization and generalization
for recommender systems. We productionized and evaluated the system on Google
Play, a commercial mobile app store with over one billion active users and over
one million apps. Online experiment results show that Wide & Deep significantly
increased app acquisitions compared with wide-only and deep-only models. We
have also open-sourced our implementation in TensorFlow.
| Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar
Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa
Ispir, Rohan Anil, Zakaria Haque, Lichan Hong, Vihan Jain, Xiaobing Liu,
Hemal Shah | null | 1606.07792 | null | null |
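A minimal Keras sketch of the joint wide-and-deep architecture described in the abstract above: a linear model over (hypothetical) hashed cross-product features concatenated with an embedding-based MLP, trained jointly on a single logit. Feature sizes, layer widths, and the single shared optimizer are illustrative simplifications; the production system in the paper uses many feature columns and separate optimizers (FTRL for the wide part, AdaGrad for the deep part).

import tensorflow as tf

n_cross = 10000   # hypothetical number of hashed cross-product (wide) features
n_sparse = 1000   # hypothetical vocabulary of one sparse (deep) feature

wide_in = tf.keras.Input(shape=(n_cross,), name="wide")            # multi-hot crosses
deep_in = tf.keras.Input(shape=(1,), dtype="int32", name="deep")   # sparse feature id
deep = tf.keras.layers.Flatten()(tf.keras.layers.Embedding(n_sparse, 32)(deep_in))
for units in (256, 128, 64):                                       # deep MLP tower
    deep = tf.keras.layers.Dense(units, activation="relu")(deep)
logit = tf.keras.layers.Dense(1)(tf.keras.layers.Concatenate()([wide_in, deep]))
out = tf.keras.layers.Activation("sigmoid")(logit)                 # joint prediction
model = tf.keras.Model([wide_in, deep_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")        # single optimizer for brevity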
Sequence-Level Knowledge Distillation | cs.CL cs.LG cs.NE | Neural machine translation (NMT) offers a novel alternative formulation of
translation that is potentially simpler than statistical approaches. However to
reach competitive performance, NMT models need to be exceedingly large. In this
paper we consider applying knowledge distillation approaches (Bucila et al.,
2006; Hinton et al., 2015) that have proven successful for reducing the size of
neural models in other domains to the problem of NMT. We demonstrate that
standard knowledge distillation applied to word-level prediction can be
effective for NMT, and also introduce two novel sequence-level versions of
knowledge distillation that further improve performance, and somewhat
surprisingly, seem to eliminate the need for beam search (even when applied on
the original teacher model). Our best student model runs 10 times faster than
its state-of-the-art teacher with little loss in performance. It is also
significantly better than a baseline model trained without knowledge
distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight
pruning on top of knowledge distillation results in a student model that has 13
times fewer parameters than the original teacher model, with a decrease of 0.4
BLEU.
| Yoon Kim, Alexander M. Rush | null | 1606.07947 | null | null |
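Word-level knowledge distillation, the baseline that the sequence-level variants in the abstract above build on, trains the student against the teacher's per-position soft distribution. A minimal numpy sketch; the temperature T and the numerical floor are illustrative assumptions, and the sequence-level versions instead train on the teacher's beam-search outputs.

import numpy as np

def word_level_distill_loss(student_logits, teacher_logits, T=1.0):
    # Cross-entropy of the student against the teacher's soft distribution,
    # per time step; logits have shape (time, vocab).
    def softmax(z):
        z = (z - z.max(axis=-1, keepdims=True)) / T
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    q = softmax(teacher_logits)                  # teacher soft targets
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -np.mean((q * log_p).sum(axis=-1))    # mean over time steps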
Bidirectional Recurrent Neural Networks for Medical Event Detection in
Electronic Health Records | cs.CL cs.LG cs.NE | Sequence labeling for extraction of medical events and their attributes from
unstructured text in Electronic Health Record (EHR) notes is a key step towards
semantic understanding of EHRs. It has important applications in health
informatics including pharmacovigilance and drug surveillance. The state of the
art supervised machine learning models in this domain are based on Conditional
Random Fields (CRFs) with features calculated from fixed context windows. In
this application, we explored various recurrent neural network frameworks and
show that they significantly outperformed the CRF models.
| Abhyuday Jagannatha, Hong Yu | null | 1606.07953 | null | null |
Gear fault diagnosis based on Gaussian correlation of vibrations signals
and wavelet coefficients | cs.IT cs.CE cs.LG math.IT | The features of non-stationary multi-component signals are often difficult to
be extracted for expert systems. In this paper, a new method for feature
extraction that is based on maximization of local Gaussian correlation function
of wavelet coefficients and the signal is presented. The effect of using empirical mode
decomposition (EMD) to decompose multi-component signals into intrinsic mode
functions (IMFs), before applying the local Gaussian correlation, is discussed. The
experimental vibration signals from two gearbox systems are used to show the
efficiency of the presented method. Linear support vector machine (SVM) is
utilized to classify feature sets extracted with the presented method. The
obtained results show that the features extracted in this method have excellent
ability to classify faults without any additional feature selection; it is also
shown that EMD can improve or degrade features according to the utilized
feature reduction method.
| Amir Hosein Zamanian, Abdolreza Ohadi | 10.1016/j.asoc.2011.06.020 | 1606.07981 | null | null |
Fast Methods for Recovering Sparse Parameters in Linear Low Rank Models | cs.LG stat.ML | In this paper, we investigate the recovery of a sparse weight vector
(parameter vector) from a set of noisy linear combinations. However, only
partial information about the matrix representing the linear combinations is
available. Assuming a low-rank structure for the matrix, one natural solution
would be to first apply a matrix completion on the data, and then to solve the
resulting compressed sensing problem. In big data applications such as massive
MIMO and medical data, the matrix completion step imposes a huge computational
burden. Here, we propose to reduce the computational cost of the completion
task by ignoring the columns corresponding to zero elements in the sparse
vector. To this end, we employ a technique to initially approximate the support
of the sparse vector. We further propose to unify the partial matrix completion
and sparse vector recovery into an augmented four-step problem. Simulation
results reveal that the augmented approach achieves the best performance, while
both proposed methods outperform the natural two-step technique with
substantially less computational requirements.
| Ashkan Esmaeili, Arash Amini, and Farokh Marvasti | null | 1606.08009 | null | null |
Training LDCRF model on unsegmented sequences using Connectionist
Temporal Classification | cs.LG cs.CV | Many machine learning problems such as speech recognition, gesture
recognition, and handwriting recognition are concerned with simultaneous
segmentation and labeling of sequence data. Latent-dynamic conditional random
field (LDCRF) is a well-known discriminative method that has been successfully
used for this task. However, LDCRF can only be trained with pre-segmented data
sequences in which the label of each frame is available a priori. In the realm
of neural networks, the invention of connectionist temporal classification
(CTC) made it possible to train recurrent neural networks on unsegmented
sequences with great success. In this paper, we use CTC to train an LDCRF model
on unsegmented sequences. Experimental results on two gesture recognition tasks
show that the proposed method outperforms LDCRFs, hidden Markov models, and
conditional random fields.
| Amir Ahooye Atashin, Kamaledin Ghiasi-Shirazi, Ahad Harati | null | 1606.08051 | null | null |
Exact gradient updates in time independent of output size for the
spherical loss family | cs.NE cs.LG | An important class of problems involves training deep neural networks with
sparse prediction targets of very high dimension D. These occur naturally in
e.g. neural language models or the learning of word-embeddings, often posed as
predicting the probability of next words among a vocabulary of size D (e.g.
200,000). Computing the equally large, but typically non-sparse D-dimensional
output vector from a last hidden layer of reasonable dimension d (e.g. 500)
incurs a prohibitive O(Dd) computational cost for each example, as does
updating the $D \times d$ output weight matrix and computing the gradient
needed for backpropagation to previous layers. While efficient handling of
large sparse network inputs is trivial, the case of large sparse targets is
not, and has thus so far been sidestepped with approximate alternatives such as
hierarchical softmax or sampling-based approximations during training. In this
work we develop an original algorithmic approach which, for a family of loss
functions that includes squared error and spherical softmax, can compute the
exact loss, gradient update for the output weights, and gradient for
backpropagation, all in $O(d^{2})$ per example instead of $O(Dd)$, remarkably
without ever computing the D-dimensional output. The proposed algorithm yields
a speedup of up to $D/4d$ i.e. two orders of magnitude for typical sizes, for
that critical part of the computations that often dominates the training time
in this kind of network architecture.
| Pascal Vincent, Alexandre de Br\'ebisson, Xavier Bouthillier | null | 1606.08061 | null | null |
Improved Recurrent Neural Networks for Session-based Recommendations | cs.LG | Recurrent neural networks (RNNs) were recently proposed for the session-based
recommendation task. The models showed promising improvements over traditional
recommendation approaches. In this work, we further study RNN-based models for
session-based recommendations. We propose the application of two techniques to
improve model performance, namely, data augmentation, and a method to account
for shifts in the input data distribution. We also empirically study the use of
generalised distillation, and a novel alternative model that directly predicts
item embeddings. Experiments on the RecSys Challenge 2015 dataset demonstrate
relative improvements of 12.8% and 14.8% over previously reported results on
the Recall@20 and Mean Reciprocal Rank@20 metrics respectively.
| Yong Kiam Tan, Xinxing Xu and Yong Liu | null | 1606.08117 | null | null |
Supervised learning based on temporal coding in spiking neural networks | cs.NE cs.LG | Gradient descent training techniques are remarkably successful in training
analog-valued artificial neural networks (ANNs). Such training techniques,
however, do not transfer easily to spiking networks due to the spike generation
hard non-linearity and the discrete nature of spike communication. We show that
in a feedforward spiking network that uses a temporal coding scheme where
information is encoded in spike times instead of spike rates, the network
input-output relation is differentiable almost everywhere. Moreover, this
relation is piece-wise linear after a transformation of variables. Methods for
training ANNs thus carry directly to the training of such spiking networks as
we show when training on the permutation invariant MNIST task. In contrast to
rate-based spiking networks that are often used to approximate the behavior of
ANNs, the networks we present spike much more sparsely, and their behavior
cannot be directly approximated by conventional ANNs. Our results highlight a new
approach for controlling the behavior of spiking networks with realistic
temporal dynamics, opening up the potential for using these networks to process
spike patterns with complex temporal information.
| Hesham Mostafa | null | 1606.08165 | null | null |
Out-of-Sample Extension for Dimensionality Reduction of Noisy Time
Series | stat.ML cs.CG cs.CV cs.LG cs.NE | This paper proposes an out-of-sample extension framework for a global
manifold learning algorithm (Isomap) that uses temporal information in
out-of-sample points in order to make the embedding more robust to noise and
artifacts. Given a set of noise-free training data and its embedding, the
proposed framework extends the embedding for a noisy time series. This is
achieved by adding a spatio-temporal compactness term to the optimization
objective of the embedding. To the best of our knowledge, this is the first
method for out-of-sample extension of manifold embeddings that leverages timing
information available for the extension set. Experimental results demonstrate
that our out-of-sample extension algorithm renders a more robust and accurate
embedding of sequentially ordered image data in the presence of various noise
and artifacts when compared to other timing-aware embeddings. Additionally, we
show that an out-of-sample extension framework based on the proposed algorithm
outperforms the state of the art in eye-gaze estimation.
| Hamid Dadkhahi and Marco F. Duarte and Benjamin Marlin | 10.1109/TIP.2017.2735189 | 1606.08282 | null | null |
Lifted Rule Injection for Relation Embeddings | cs.LG cs.AI cs.CL | Methods based on representation learning currently hold the state-of-the-art
in many natural language processing and knowledge base inference tasks. Yet, a
major challenge is how to efficiently incorporate commonsense knowledge into
such models. A recent approach regularizes relation and entity representations
by propositionalization of first-order logic rules. However,
propositionalization does not scale beyond domains with only few entities and
rules. In this paper we present a highly efficient method for incorporating
implication rules into distributed representations for automated knowledge base
construction. We map entity-tuple embeddings into an approximately Boolean
space and encourage a partial ordering over relation embeddings based on
implication rules mined from WordNet. Surprisingly, we find that the strong
restriction of the entity-tuple embedding space does not hurt the
expressiveness of the model and even acts as a regularizer that improves
generalization. By incorporating a few commonsense rules, we achieve an increase
of 2 percentage points in mean average precision over a matrix factorization
baseline, while observing a negligible increase in runtime.
| Thomas Demeester and Tim Rockt\"aschel and Sebastian Riedel | null | 1606.08359 | null | null |
A Reduction for Optimizing Lattice Submodular Functions with Diminishing
Returns | cs.DS cs.AI cs.LG | A function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ is DR-submodular if
it satisfies $f({\bf x} + \chi_i) -f ({\bf x}) \ge f({\bf y} + \chi_i) - f({\bf
y})$ for all ${\bf x}\le {\bf y}, i\in E$. Recently, the problem of maximizing
a DR-submodular function $f: \mathbb{Z}_+^E \rightarrow \mathbb{R}_+$ subject
to a budget constraint $\|{\bf x}\|_1 \leq B$ as well as additional constraints
has received significant attention \cite{SKIK14,SY15,MYK15,SY16}.
In this note, we give a generic reduction from the DR-submodular setting to
the submodular setting. The running time of the reduction and the size of the
resulting submodular instance depends only \emph{logarithmically} on $B$. Using
this reduction, one can translate the results for unconstrained and constrained
submodular maximization to the DR-submodular setting for many types of
constraints in a unified manner.
| Alina Ene, Huy L. Nguyen | null | 1606.08362 | null | null |
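The DR inequality defined in the abstract above can be checked by brute force on tiny instances, which helps make the definition concrete. A Python sketch (exponential in |E| and B, illustration only):

import numpy as np
from itertools import product

def is_dr_submodular(f, E, B):
    # Brute-force check of f(x + chi_i) - f(x) >= f(y + chi_i) - f(y)
    # for all x <= y on the box {0, ..., B}^E (tiny instances only).
    pts = list(product(range(B + 1), repeat=E))
    for x in pts:
        for y in pts:
            if not all(a <= b for a, b in zip(x, y)):
                continue
            for i in range(E):
                if y[i] + 1 > B:          # the increment must stay inside the box
                    continue
                xi = tuple(v + (j == i) for j, v in enumerate(x))
                yi = tuple(v + (j == i) for j, v in enumerate(y))
                if f(xi) - f(x) < f(yi) - f(y) - 1e-9:
                    return False
    return True

# Example: separable concave functions are DR-submodular.
print(is_dr_submodular(lambda x: sum(np.sqrt(v) for v in x), E=2, B=3))  # True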
Gaussian Error Linear Units (GELUs) | cs.LG | We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation function is $x\Phi(x)$, where
$\Phi(x)$ is the standard Gaussian cumulative distribution function. The GELU
nonlinearity weights inputs by their value, rather than gates inputs by their
sign as in ReLUs ($x\mathbf{1}_{x>0}$). We perform an empirical evaluation of
the GELU nonlinearity against the ReLU and ELU activations and find performance
improvements across all considered computer vision, natural language
processing, and speech tasks.
| Dan Hendrycks and Kevin Gimpel | null | 1606.08415 | null | null |
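The activation from the abstract above, with the tanh approximation also given in the paper, in a minimal numpy/scipy sketch:

import numpy as np
from scipy.stats import norm

def gelu(x):
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF.
    return x * norm.cdf(x)

def gelu_tanh(x):
    # The tanh approximation given in the paper.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))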
Symmetric and antisymmetric properties of solutions to kernel-based
machine learning problems | cs.LG | A particularly interesting instance of supervised learning with kernels is
when each training example is associated with two objects, as in pairwise
classification (Brunner et al., 2012), and in supervised learning of preference
relations (Herbrich et al., 1998). In these cases, one may want to embed
additional prior knowledge into the optimization problem associated with the
training of the learning machine, modeled, respectively, by the symmetry of its
optimal solution with respect to an exchange of order between the two objects,
and by its antisymmetry. Extending the approach proposed in (Brunner et al.,
2012) (where only the symmetric case was considered), we show, focusing on
support vector binary classification, how such embedding is possible through
the choice of a suitable pairwise kernel, which takes as inputs the individual
feature vectors and also the group feature vectors associated with the two
objects. We also prove that the symmetry/antisymmetry constraints still hold
when considering the sequence of suboptimal solutions generated by one version
of the Sequential Minimal Optimization (SMO) algorithm, and we present
numerical results supporting the theoretical findings. We conclude discussing
extensions of the main results to support vector regression, to transductive
support vector machines, and to several kinds of graph kernels, including
diffusion kernels.
| Giorgio Gnecco | null | 1606.08501 | null | null |
A Learning Algorithm for Relational Logistic Regression: Preliminary
Results | cs.AI cs.LG stat.ML | Relational logistic regression (RLR) is a representation of conditional
probability in terms of weighted formulae for modelling multi-relational data.
In this paper, we develop a learning algorithm for RLR models. Learning an RLR
model from data consists of two steps: 1- learning the set of formulae to be
used in the model (a.k.a. structure learning) and learning the weight of each
formula (a.k.a. parameter learning). For structure learning, we deploy Schmidt
and Murphy's hierarchical assumption: first we learn a model with simple
formulae, then more complex formulae are added iteratively only if all their
sub-formulae have proven effective in previous learned models. For parameter
learning, we convert the problem into a non-relational learning problem and use
an off-the-shelf logistic regression learning algorithm from Weka, an
open-source machine learning tool, to learn the weights. We also indicate how
hidden features about the individuals can be incorporated into RLR to boost the
learning performance. We compare our learning algorithm to other structure and
parameter learning algorithms in the literature, and compare the performance of
RLR models to standard logistic regression and RDN-Boost on a modified version
of the MovieLens data-set.
| Bahare Fatemi, Seyed Mehran Kazemi, David Poole | null | 1606.08531 | null | null |
A Local Density-Based Approach for Local Outlier Detection | cs.AI cs.LG stat.ML | This paper presents a simple but effective density-based outlier detection
approach with the local kernel density estimation (KDE). A Relative
Density-based Outlier Score (RDOS) is introduced to measure the local
outlierness of objects, in which the density distribution at the location of an
object is estimated with a local KDE method based on extended nearest neighbors
of the object. Instead of using only $k$ nearest neighbors, we further consider
reverse nearest neighbors and shared nearest neighbors of an object for density
distribution estimation. Some theoretical properties of the proposed RDOS
including its expected value and false alarm probability are derived. A
comprehensive experimental study on both synthetic and real-life data sets
demonstrates that our approach is more effective than state-of-the-art outlier
detection methods.
| Bo Tang and Haibo He | null | 1606.08538 | null | null |
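A simplified Python sketch of the score's flavor: estimate a Gaussian-kernel density at each point from its k nearest neighbors, and compare it to the mean density of those neighbors (a ratio above 1 suggests an outlier). This keeps only the plain kNN part; the paper's RDOS additionally uses reverse and shared nearest neighbors, and the bandwidth h here is an illustrative constant.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def kde_outlier_scores(X, k=10, h=1.0):
    # Local Gaussian KDE at each point, estimated from its k nearest neighbors.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nn.kneighbors(X)
    dist, idx = dist[:, 1:], idx[:, 1:]          # drop the self-neighbor
    dens = np.mean(np.exp(-(dist ** 2) / (2 * h ** 2)), axis=1)
    neigh_dens = dens[idx].mean(axis=1)          # mean density of the neighbors
    return neigh_dens / (dens + 1e-12)           # relative score; > 1 suggests outlier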
Estimating the class prior and posterior from noisy positives and
unlabeled data | stat.ML cs.LG | We develop a classification algorithm for estimating posterior distributions
from positive-unlabeled data, that is robust to noise in the positive labels
and effective for high-dimensional data. In recent years, several algorithms
have been proposed to learn from positive-unlabeled data; however, many of
these contributions remain theoretical, performing poorly on real
high-dimensional data that is typically contaminated with noise. We build on
this previous work to develop two practical classification algorithms that
explicitly model the noise in the positive labels and utilize univariate
transforms built on discriminative classifiers. We prove that these univariate
transforms preserve the class prior, enabling estimation in the univariate
space and avoiding kernel density estimation for high-dimensional data. The
theoretical development and both parametric and nonparametric algorithms
proposed here constitute an important step towards the widespread use of robust
classification algorithms for positive-unlabeled data.
| Shantanu Jain, Martha White, Predrag Radivojac | null | 1606.08561 | null | null |
Alternating Back-Propagation for Generator Network | stat.ML cs.CV cs.LG cs.NE | This paper proposes an alternating back-propagation algorithm for learning
the generator network model. The model is a non-linear generalization of factor
analysis. In this model, the mapping from the continuous latent factors to the
observed signal is parametrized by a convolutional neural network. The
alternating back-propagation algorithm iterates the following two steps: (1)
Inferential back-propagation, which infers the latent factors by Langevin
dynamics or gradient descent. (2) Learning back-propagation, which updates the
parameters given the inferred latent factors by gradient descent. The gradient
computations in both steps are powered by back-propagation, and they share most
of their code in common. We show that the alternating back-propagation
algorithm can learn realistic generator models of natural images, video
sequences, and sounds. Moreover, it can also be used to learn from incomplete
or indirect training data.
| Tian Han, Yang Lu, Song-Chun Zhu, and Ying Nian Wu | null | 1606.08571 | null | null |
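A minimal numpy sketch of the two alternating steps on the simplest instance of the model in the abstract above, a linear (factor-analysis) generator; the learning rates and data are illustrative assumptions, and a Langevin variant would add Gaussian noise to the Z update as noted in the comment.

import numpy as np

rng = np.random.default_rng(0)
d, m, n = 20, 3, 500                           # data dim, latent dim, sample size
W_true = rng.standard_normal((d, m))
X = rng.standard_normal((n, m)) @ W_true.T + 0.1 * rng.standard_normal((n, d))

W = 0.1 * rng.standard_normal((d, m))          # generator parameters
Z = np.zeros((n, m))                           # latent factors, inferred per example

for epoch in range(500):
    R = X - Z @ W.T
    # (1) inferential back-propagation: gradient step on log p(X, Z) w.r.t. Z
    Z += 0.01 * (R @ W - Z)                    # Langevin adds sqrt(2*lr)*noise here
    R = X - Z @ W.T
    # (2) learning back-propagation: gradient step on log p(X, Z) w.r.t. W
    W += 0.05 * (R.T @ Z) / n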
Clustering-Based Relational Unsupervised Representation Learning with an
Explicit Distributed Representation | stat.ML cs.LG | The goal of unsupervised representation learning is to extract a new
representation of data, such that solving many different tasks becomes easier.
Existing methods typically focus on vectorized data and offer little support
for relational data, which additionally describe relationships among instances.
In this work we introduce an approach for relational unsupervised
representation learning. Viewing a relational dataset as a hypergraph, new
features are obtained by clustering vertices and hyperedges. To find a
representation suited for many relational learning tasks, a wide range of
similarities between relational objects is considered, e.g. feature and
structural similarities. We experimentally evaluate the proposed approach and
show that models learned on such latent representations perform better, have
lower complexity, and outperform the existing approaches on classification
tasks.
| Sebastijan Dumancic and Hendrik Blockeel | 10.24963/ijcai.2017/226 | 1606.08658 | null | null |
Theory reconstruction: a representation learning view on predicate
invention | stat.ML cs.LG cs.LO | With this positional paper we present a representation learning view on
predicate invention. The intention of this proposal is to bridge the relational
and deep learning communities on the problem of predicate invention. We propose
a theory reconstruction approach, a formalism that extends the autoencoder approach
to representation learning to the relational setting. Our intention is to
start a discussion to define a unifying framework for predicate invention and
theory revision.
| Sebastijan Dumancic and Wannes Meert and Hendrik Blockeel | null | 1606.08660 | null | null |
Reviving Threshold-Moving: a Simple Plug-in Bagging Ensemble for Binary
and Multiclass Imbalanced Data | cs.LG stat.AP stat.ML | Class imbalance presents a major hurdle in the application of data mining
methods. A common practice to deal with it is to create ensembles of
classifiers that learn from resampled balanced data. For example, bagged
decision trees combined with random undersampling (RUS) or the synthetic
minority oversampling technique (SMOTE). However, most of the resampling
methods entail asymmetric changes to the examples of different classes, which
in turn can introduce its own biases in the model. Furthermore, those methods
require a performance measure to be specified a priori before learning. An
alternative is to use a so-called threshold-moving method that a posteriori
changes the decision threshold of a model to counteract the imbalance, thus has
the potential to adapt to the performance measure of interest. Surprisingly,
little attention has been paid to the potential of combining bagging ensemble
with threshold-moving. In this paper, we present probability thresholding
bagging (PT-bagging), a versatile plug-in method that fills this gap. Contrary
to usual rebalancing practice, our method preserves the natural class
distribution of the data resulting in well calibrated posterior probabilities.
We also extend the proposed method to handle multiclass data. The method is
validated on binary and multiclass benchmark data sets. We perform analyses
that provide insights into the proposed method.
| Guillem Collell, Drazen Prelec, Kaustubh Patil | null | 1606.08698 | null | null |
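A minimal scikit-learn sketch of the plug-in idea for the binary case: train a plain bagging ensemble on the natural class distribution, then move the decision threshold a posteriori. Thresholding at the positive-class prior is one simple rule used here for illustration; the paper tunes the threshold to the performance measure of interest.

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def pt_bagging_predict(X_train, y_train, X_test, threshold=None):
    # Bagged trees on the natural class distribution + threshold-moving.
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(X_train, y_train)
    if threshold is None:
        threshold = np.mean(y_train == 1)     # default: the positive-class prior
    p = bag.predict_proba(X_test)[:, 1]       # posterior probability estimates
    return (p >= threshold).astype(int)       # decision threshold moved from 0.5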
"Show me the cup": Reference with Continuous Representations | cs.CL cs.AI cs.LG | One of the most basic functions of language is to refer to objects in a
shared scene. Modeling reference with continuous representations is challenging
because it requires individuation, i.e., tracking and distinguishing an
arbitrary number of referents. We introduce a neural network model that, given
a definite description and a set of objects represented by natural images,
points to the intended object if the expression has a unique referent, or
indicates a failure, if it does not. The model, directly trained on reference
acts, is competitive with a pipeline manually engineered to perform the same
task, both when referents are purely visual, and when they are characterized by
a combination of visual and linguistic properties.
| Gemma Boleda and Sebastian Pad\'o and Marco Baroni | 10.1007/978-3-319-77113-7_17 | 1606.08777 | null | null |
Adaptive Training of Random Mapping for Data Quantization | cs.LG cs.AI | Data quantization learns encoding results of data with certain requirements,
and provides a broad perspective of many real-world applications to data
handling. Nevertheless, the results of encoder is usually limited to
multivariate inputs with the random mapping, and side information of binary
codes are hardly to mostly depict the original data patterns as possible. In
the literature, cosine based random quantization has attracted much attentions
due to its intrinsic bounded results. Nevertheless, it usually suffers from the
uncertain outputs, and information of original data fails to be fully preserved
in the reduced codes. In this work, a novel binary embedding method, termed
adaptive training quantization (ATQ), is proposed to learn the ideal transform
of random encoder, where the limitation of cosine random mapping is tackled. As
an adaptive learning idea, the reduced mapping is adaptively calculated with
idea of data group, while the bias of random transform is to be improved to
hold most matching information. Experimental results show that the proposed
method is able to obtain outstanding performance compared with other random
quantization methods.
| Miao Cheng, Ah Chung Tsoi | null | 1606.08808 | null | null |
European Union regulations on algorithmic decision-making and a "right
to explanation" | stat.ML cs.CY cs.LG | We summarize the potential impact that the European Union's new General Data
Protection Regulation will have on the routine use of machine learning
algorithms. Slated to take effect as law across the EU in 2018, it will
restrict automated individual decision-making (that is, algorithms that make
decisions based on user-level predictors) which "significantly affect" users.
The law will also effectively create a "right to explanation," whereby a user
can ask for an explanation of an algorithmic decision that was made about them.
We argue that while this law will pose large challenges for industry, it
highlights opportunities for computer scientists to take the lead in designing
algorithms and evaluation frameworks which avoid discrimination and enable
explanation.
| Bryce Goodman and Seth Flaxman | 10.1609/aimag.v38i3.2741 | 1606.08813 | null | null |
Multi-View Kernel Consensus For Data Analysis | cs.LG stat.ML | The input data features set for many data driven tasks is high-dimensional
while the intrinsic dimension of the data is low. Data analysis methods aim to
uncover the underlying low dimensional structure imposed by the low dimensional
hidden parameters by utilizing distance metrics that consider the set of
attributes as a single monolithic set. However, the transformation of the
low-dimensional phenomena into the measured high-dimensional observations might
distort the distance metric. This distortion can affect the desired estimated
low-dimensional geometric structure. In this paper, we suggest utilizing the
redundancy in the attribute domain by partitioning the attributes into multiple
subsets we call views. The proposed methods utilize the agreement, also called
consensus, between different views to extract valuable geometric information
that unifies multiple views about the intrinsic relationships among several
different observations. This unification enhances the information that a single
view or a simple concatenation of views provides.
| Moshe Salhov, Ofir Lindenbaum, Yariv Aizenbud, Avi Silberschatz, Yoel
Shkolnisky, Amir Averbuch | null | 1606.08819 | null | null |
Active Ranking from Pairwise Comparisons and when Parametric Assumptions
Don't Help | cs.LG cs.AI cs.IT math.IT stat.ML | We consider sequential or active ranking of a set of n items based on noisy
pairwise comparisons. Items are ranked according to the probability that a
given item beats a randomly chosen item, and ranking refers to partitioning the
items into sets of pre-specified sizes according to their scores. This notion
of ranking includes as special cases the identification of the top-k items and
the total ordering of the items. We first analyze a sequential ranking
algorithm that counts the number of comparisons won, and uses these counts to
decide whether to stop, or to compare another pair of items, chosen based on
confidence intervals specified by the data collected up to that point. We prove
that this algorithm succeeds in recovering the ranking using a number of
comparisons that is optimal up to logarithmic factors. This guarantee does not
require any structural properties of the underlying pairwise probability
matrix, unlike a significant body of past work on pairwise ranking based on
parametric models such as the Thurstone or Bradley-Terry-Luce models. It has
been a long-standing open question as to whether or not imposing these
parametric assumptions allows for improved ranking algorithms. For stochastic
comparison models, in which the pairwise probabilities are bounded away from
zero, our second contribution is to resolve this issue by proving a lower bound
for parametric models. This shows, perhaps surprisingly, that these popular
parametric modeling choices offer at most logarithmic gains for stochastic
comparisons.
| Reinhard Heckel and Nihar B. Shah and Kannan Ramchandran and Martin J.
Wainwright | null | 1606.08842 | null | null |
Technical Report: Towards a Universal Code Formatter through Machine
Learning | cs.PL cs.AI cs.LG | There are many declarative frameworks that allow us to implement code
formatters relatively easily for any specific language, but constructing them
is cumbersome. The first problem is that "everybody" wants to format their code
differently, leading to either many formatter variants or a ridiculous number
of configuration options. Second, the size of each implementation scales with a
language's grammar size, leading to hundreds of rules.
In this paper, we solve the formatter construction problem using a novel
approach, one that automatically derives formatters for any given language
without intervention from a language expert. We introduce a code formatter
called CodeBuff that uses machine learning to abstract formatting rules from a
representative corpus, using a carefully designed feature set. Our experiments
on Java, SQL, and ANTLR grammars show that CodeBuff is efficient, has excellent
accuracy, and is grammar invariant for a given language. It also generalizes to
a 4th language tested during manuscript preparation.
| Terence Parr and Jurgen Vinju | null | 1606.08866 | null | null
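To illustrate the flavour of learning formatting from a corpus, here is a tiny nearest-neighbour sketch that predicts the whitespace to emit before each token from a crude context feature vector; the features, labels, and classifier are illustrative assumptions and do not reflect CodeBuff's actual feature set or model.

```python
from sklearn.neighbors import KNeighborsClassifier

def features(tokens, i):
    # Crude context: hashed previous token, hashed current token, position.
    prev = tokens[i - 1] if i > 0 else "<start>"
    return [hash(prev) % 1000, hash(tokens[i]) % 1000, i % 80]

# Toy "corpus": a token stream plus the whitespace seen before each token.
corpus_tokens = ["int", "x", "=", "1", ";", "int", "y", "=", "2", ";"]
corpus_ws = ["none", "space", "space", "space", "none",
             "newline", "space", "space", "space", "none"]

X = [features(corpus_tokens, i) for i in range(len(corpus_tokens))]
clf = KNeighborsClassifier(n_neighbors=3).fit(X, corpus_ws)

def format_tokens(tokens):
    out = []
    for i in range(len(tokens)):
        ws = clf.predict([features(tokens, i)])[0]
        out.append({"none": "", "space": " ", "newline": "\n"}[ws])
        out.append(tokens[i])
    return "".join(out)

print(format_tokens(["int", "z", "=", "3", ";"]))
```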
Defending Non-Bayesian Learning against Adversarial Attacks | cs.DC cs.LG | This paper addresses the problem of non-Bayesian learning over multi-agent
networks, where agents repeatedly collect partially informative observations
about an unknown state of the world, and try to collaboratively learn the true
state. We focus on the impact of the adversarial agents on the performance of
consensus-based non-Bayesian learning, where non-faulty agents combine local
learning updates with consensus primitives. In particular, we consider the
scenario where an unknown subset of agents suffer Byzantine faults -- agents
suffering Byzantine faults behave arbitrarily. Two different learning rules are
proposed.
| Lili Su, Nitin H. Vaidya | null | 1606.08883 | null | null |
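Since the abstract does not spell out the two rules, the sketch below shows one common Byzantine-robust flavour of such updates: an agent aggregates its own and its neighbours' log-beliefs with a coordinate-wise trimmed mean (blunting arbitrarily corrupted values), then folds in its local log-likelihood. The trimming rule is an illustrative robust aggregator, not necessarily either of the paper's proposed rules.

```python
import numpy as np

def trimmed_mean(rows, f):
    """Drop the f largest and f smallest entries per coordinate, then
    average. Requires more than 2*f rows."""
    s = np.sort(rows, axis=0)
    return s[f:len(rows) - f].mean(axis=0)

def learning_step(own_log_belief, neighbour_log_beliefs, log_likelihood, f=1):
    """One non-Bayesian learning update for a single non-faulty agent.

    own_log_belief        : current log-belief vector over hypotheses
    neighbour_log_beliefs : (num_neighbours, num_hypotheses) received vectors,
                            up to f of which may be Byzantine
    log_likelihood        : log p(new observation | hypothesis)
    """
    stacked = np.vstack([own_log_belief, neighbour_log_beliefs])
    consensus = trimmed_mean(stacked, f)
    updated = consensus + log_likelihood
    m = updated.max()  # renormalize in log space (log-sum-exp trick)
    return updated - (m + np.log(np.exp(updated - m).sum()))
```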
Exact Lower Bounds for the Agnostic Probably-Approximately-Correct (PAC)
Machine Learning Model | cs.LG math.PR math.ST stat.TH | We provide an exact non-asymptotic lower bound on the minimax expected excess
risk (EER) in the agnostic probably-approximately-correct (PAC) machine
learning classification model and identify minimax learning algorithms as
certain maximally symmetric and minimally randomized "voting" procedures. Based
on this result, an exact asymptotic lower bound on the minimax EER is provided.
This bound is of the simple form $c_\infty/\sqrt{\nu}$ as $\nu\to\infty$, where
$c_\infty=0.16997\dots$ is a universal constant, $\nu=m/d$, $m$ is the size of
the training sample, and $d$ is the Vapnik--Chervonenkis dimension of the
hypothesis class. It is shown that the differences between these asymptotic and
non-asymptotic bounds, as well as the differences between these two bounds and
the maximum EER of any learning algorithms that minimize the empirical risk,
are asymptotically negligible, and all these differences are due to ties in the
mentioned "voting" procedures. A few easy to compute non-asymptotic lower
bounds on the minimax EER are also obtained, which are shown to be close to the
exact asymptotic lower bound $c_\infty/\sqrt{\nu}$ even for rather small values
of the ratio $\nu=m/d$. As an application of these results, we substantially
improve existing lower bounds on the tail probability of the excess risk. Among
the tools used are Bayes estimation and apparently new identities and
inequalities for binomial distributions.
| Aryeh Kontorovich and Iosif Pinelis | null | 1606.08920 | null | null |
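As a quick numeric illustration of the asymptotic bound, the snippet below evaluates $c_\infty/\sqrt{\nu}$ for a few sample sizes, using the truncated constant quoted in the abstract.

```python
# Asymptotic minimax lower bound c_inf / sqrt(nu), with nu = m / d.
c_inf = 0.16997  # universal constant, truncated as in the abstract
for m, d in [(1000, 10), (10000, 10), (100000, 10)]:
    nu = m / d
    print(f"m={m:>6}, d={d}: minimax EER lower bound ~ {c_inf / nu ** 0.5:.5f}")
```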
subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs
from Large Graphs | cs.LG cs.AI cs.CR cs.SE | In this paper, we present subgraph2vec, a novel approach for learning latent
representations of rooted subgraphs from large graphs inspired by recent
advancements in Deep Learning and Graph Kernels. These latent representations
encode semantic substructure dependencies in a continuous vector space, which
is easily exploited by statistical models for tasks such as graph
classification, clustering, link prediction and community detection.
subgraph2vec leverages local information obtained from the neighbourhoods of
nodes to learn their latent representations in an unsupervised fashion. We
demonstrate that subgraph vectors learnt by our approach could be used in
conjunction with classifiers such as CNNs, SVMs and relational data clustering
algorithms to achieve significantly superior accuracies. Also, we show that the
subgraph vectors could be used for building a deep learning variant of
Weisfeiler-Lehman graph kernel. Our experiments on several benchmark and
large-scale real-world datasets reveal that subgraph2vec achieves significant
improvements in accuracies over existing graph kernels on both supervised and
unsupervised learning tasks. Specifically, on two real-world program analysis
tasks, namely, code clone and malware detection, subgraph2vec outperforms
state-of-the-art kernels by more than 17% and 4%, respectively.
| Annamalai Narayanan, Mahinthan Chandramohan, Lihui Chen, Yang Liu and
Santhoshkumar Saminathan | null | 1606.08928 | null | null |
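A hedged sketch of the pipeline's shape: Weisfeiler-Lehman relabeling turns each node's rooted subgraphs of increasing height into string tokens, which a skip-gram model then embeds. The networkx/gensim usage (gensim >= 4) and the degree-based initial labels are illustrative assumptions, not the authors' implementation.

```python
import networkx as nx
from gensim.models import Word2Vec

def wl_subgraph_tokens(G, height=2):
    """Per node, the sequence of WL labels encoding its rooted subgraphs
    of height 0..height."""
    labels = {v: str(G.degree(v)) for v in G.nodes()}
    docs = [[labels[v]] for v in G.nodes()]
    for _ in range(height):
        new = {}
        for v in G.nodes():
            neigh = sorted(labels[u] for u in G.neighbors(v))
            new[v] = labels[v] + "|" + ",".join(neigh)
        labels = new
        for i, v in enumerate(G.nodes()):
            docs[i].append(labels[v])
    return docs

G = nx.karate_club_graph()
sentences = wl_subgraph_tokens(G)  # one "document" of subgraph tokens per node
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1, sg=1)
vec = model.wv[sentences[0][1]]    # embedding of node 0's height-1 subgraph
```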
Non-linear Label Ranking for Large-scale Prediction of Long-Term User
Interests | cs.AI cs.LG stat.ML | We consider the problem of personalization of online services from the
viewpoint of ad targeting, where we seek to find the best ad categories to be
shown to each user, resulting in improved user experience and increased
advertisers' revenue. We propose to address this problem as a task of ranking
the ad categories depending on a user's preference, and introduce a novel label
ranking approach capable of efficiently learning non-linear, highly accurate
models in large-scale settings. Experiments on a real-world advertising data
set with more than 3.2 million users show that the proposed algorithm
outperforms the existing solutions in terms of both rank loss and top-K
retrieval performance, strongly suggesting the benefit of using the proposed
model on large-scale ranking problems.
| Nemanja Djuric, Mihajlo Grbovic, Vladan Radosavljevic, Narayan
Bhamidipati, Slobodan Vucetic | null | 1606.08963 | null | null |
Decision making via semi-supervised machine learning techniques | cs.LG | Semi-supervised learning (SSL) is a class of supervised learning tasks and
techniques that also exploit unlabeled data for training. SSL significantly
reduces labeling-related costs and is able to handle large data sets. The
primary objective is the extraction of robust inference rules. Decision
support systems (DSSs) that utilize SSL have significant advantages: only a
small amount of labeled data is required for initialization, after which new
(unlabeled) data can be exploited to improve the system's performance. Thus,
the DSS is continuously adapted to new conditions with minimum effort.
Techniques that are cost-effective and easily adapted to dynamic systems can
be beneficial for many practical applications. Such application fields
include: (a) monitoring of industrial assembly lines, (b) sea border
surveillance, (c) detection of elders' falls, (d) inspection of transportation
tunnels, (e) defect recognition in concrete foundation piles, (f) financial
assessment of commercial sector companies, and (g) advanced image filtering
for cultural heritage applications.
| Eftychios Protopapadakis | null | 1606.09022 | null | null |
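As one concrete instance of the kind of SSL loop such a DSS can build on, here is a minimal self-training sketch: fit on the labeled pool, pseudo-label only high-confidence unlabeled points, and repeat. The classifier and confidence threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold  # pseudo-label confident points
        if not keep.any():
            break
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~keep]
    return clf
```

Starting from a small labeled pool, each round grows the training set with the model's own confident predictions, which is what lets the system keep adapting as new unlabeled data arrives.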
A Distributional Semantics Approach to Implicit Language Learning | cs.CL cs.LG | In the present paper we show that distributional information is particularly
important when considering concept availability under implicit language
learning conditions. Based on results from different behavioural experiments, we
argue that the implicit learnability of semantic regularities depends on the
degree to which the relevant concept is reflected in language use. In our
simulations, we train a Vector-Space model on either an English or a Chinese
corpus and then feed the resulting representations to a feed-forward neural
network. The task of the neural network is to find a mapping between the word
representations and the novel words. Using datasets from four behavioural
experiments, which used different semantic manipulations, we were able to
obtain learning patterns very similar to those obtained by humans.
| Dimitrios Alikaniotis and John N. Williams | null | 1606.09058 | null | null |
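A compact sketch of the simulation pipeline's second stage: a small feed-forward network trained to map word representations to the experimental classes. Random vectors stand in here for the corpus-trained vector-space representations, and the "semantic regularity" is a synthetic stand-in for the behavioural manipulations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, h = 500, 50, 32
X = rng.standard_normal((n, d))            # stand-in word representations
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # stand-in semantic regularity

W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(300):
    Z = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1.0 / (1.0 + np.exp(-(Z @ W2 + b2)))  # sigmoid output
    g = (p - y[:, None]) / n                  # cross-entropy gradient
    gZ = (g @ W2.T) * (1.0 - Z ** 2)          # backprop through tanh
    W2 -= lr * Z.T @ g; b2 -= lr * g.sum(axis=0)
    W1 -= lr * X.T @ gZ; b1 -= lr * gZ.sum(axis=0)
print("train accuracy:", ((p[:, 0] > 0.5) == (y > 0.5)).mean())
```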
Actor-critic versus direct policy search: a comparison based on sample
complexity | cs.LG | Sample efficiency is a critical property when optimizing policy parameters
for the controller of a robot. In this paper, we evaluate two state-of-the-art
policy optimization algorithms. One is a recent deep reinforcement learning
method based on an actor-critic algorithm, Deep Deterministic Policy Gradient
(DDPG), that has been shown to perform well on various control benchmarks. The
other one is a direct policy search method, Covariance Matrix Adaptation
Evolution Strategy (CMA-ES), a black-box optimization method that is widely
used for robot learning. The algorithms are evaluated on a continuous version
of the mountain car benchmark problem, so as to compare their sample
complexity. From a preliminary analysis, we expect DDPG to be more sample
efficient than CMA-ES, which is confirmed by our experimental results.
| Arnaud de Froissard de Broissia and Olivier Sigaud | null | 1606.09152 | null | null |
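For flavour, here is a stripped-down (mu, lambda) evolution strategy (a simplified stand-in for CMA-ES, without covariance adaptation) with explicit bookkeeping of the episode budget, which is the quantity the comparison is about. The toy fitness function is an illustrative placeholder; reproducing the paper's setup would require rollouts on the continuous mountain car benchmark.

```python
import numpy as np

def fitness(theta):
    """Placeholder episodic return for a policy with parameters theta;
    swap in real environment rollouts to measure sample complexity."""
    return -np.sum((theta - 1.0) ** 2)

def simple_es(dim=4, pop=16, elite=4, sigma=0.5, gens=50):
    mean, evals = np.zeros(dim), 0
    for _ in range(gens):
        candidates = mean + sigma * np.random.randn(pop, dim)
        scores = np.array([fitness(t) for t in candidates])
        evals += pop  # each candidate costs (at least) one episode
        best = candidates[np.argsort(scores)[-elite:]]
        mean = best.mean(axis=0)  # move the search distribution to the elite
    return mean, evals

theta, total_evals = simple_es()
print("episodes consumed:", total_evals)
```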
Disease Trajectory Maps | stat.ML cs.LG stat.AP | Medical researchers are coming to appreciate that many diseases are in fact
complex, heterogeneous syndromes composed of subpopulations that express
different variants of a related complication. Time series data extracted from
individual electronic health records (EHR) offer an exciting new way to study
subtle differences in the way these diseases progress over time. In this paper,
we focus on answering two questions that can be asked using these databases of
time series. First, we want to understand whether there are individuals with
similar disease trajectories and whether there are a small number of degrees of
freedom that account for differences in trajectories across the population.
Second, we want to understand how important clinical outcomes are associated
with disease trajectories. To answer these questions, we propose the Disease
Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional
representations of sparse and irregularly sampled time series. We propose a
stochastic variational inference algorithm for learning the DTM that allows the
model to scale to large modern medical datasets. To demonstrate the DTM, we
analyze data collected on patients with the complex autoimmune disease,
scleroderma. We find that DTM learns meaningful representations of disease
trajectories and that the representations are significantly associated with
important clinical outcomes.
| Peter Schulam and Raman Arora | null | 1606.09184 | null | null |
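To illustrate the shape of the problem (sparse, irregularly sampled series mapped to low-dimensional codes), here is a crude two-stage stand-in: per-patient ridge regression onto a polynomial basis, followed by PCA on the coefficients. This is explicitly not the DTM's joint probabilistic model or its stochastic variational inference; a two-stage fit ignores the uncertainty the DTM propagates.

```python
import numpy as np

def fit_coeffs(times, values, degree=3, ridge=1e-2):
    """Ridge-regress one irregularly sampled series onto a polynomial basis."""
    B = np.vander(times, degree + 1, increasing=True)
    A = B.T @ B + ridge * np.eye(B.shape[1])
    return np.linalg.solve(A, B.T @ values)

# Toy cohort: each "patient" has a few visits at irregular times.
rng = np.random.default_rng(0)
coeffs = []
for _ in range(200):
    t = np.sort(rng.uniform(0.0, 1.0, rng.integers(4, 10)))
    y = 2.0 * t - 3.0 * t ** 2 + 0.1 * rng.standard_normal(t.size)
    coeffs.append(fit_coeffs(t, y))

C = np.array(coeffs)
C -= C.mean(axis=0)                  # center before PCA
_, _, Vt = np.linalg.svd(C, full_matrices=False)
codes = C @ Vt[:2].T                 # 2-D trajectory representations
```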