title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Sequential Dialogue Context Modeling for Spoken Language Understanding | cs.CL cs.AI cs.LG | Spoken Language Understanding (SLU) is a key component of goal oriented
dialogue systems, parsing user utterances into semantic frame
representations. Traditionally, SLU does not utilize the dialogue history beyond
the previous system turn, and contextual ambiguities are resolved by
downstream components. In this paper, we explore novel approaches for modeling
dialogue context in a recurrent neural network (RNN) based language
understanding system. We propose the Sequential Dialogue Encoder Network, which
encodes context from the dialogue history in chronological order. We
compare the performance of our proposed architecture with two context models,
one that uses just the previous turn context and another that encodes dialogue
context in a memory network, but loses the order of utterances in the dialogue
history. Experiments with a multi-domain dialogue dataset demonstrate that the
proposed architecture results in reduced semantic frame error rates.
| Ankur Bapna, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck | null | 1705.03455 | null | null |
OMNIRank: Risk Quantification for P2P Platforms with Deep Learning | cs.CY cs.LG | P2P lending presents as an innovative and flexible alternative for
conventional lending institutions like banks, where lenders and borrowers
directly make transactions and benefit each other without complicated
verifications. However, due to lack of specialized laws, delegated monitoring
and effective managements, P2P platforms may spawn potential risks, such as
withdrawal failures, involvement in investigations and even runaway bosses, which
cause great losses to lenders and are especially serious and notorious in
China. Although abundant public information and data related to P2P platforms
are available on the Internet, the challenges of multi-sourcing and
heterogeneity remain. In this paper, we propose a novel deep learning model,
OMNIRank, which comprehends multi-dimensional features of P2P platforms for
risk quantification and produces scores for ranking. We first construct a
large-scale flexible crawling framework and obtain great amounts of
multi-source heterogeneous data of domestic P2P platforms since 2007 from the
Internet. Cleaning steps such as duplicate and noise removal, null handling,
format unification and fusion are applied to improve data quality. Then we
extract deep features of P2P platforms via text comprehension, topic modeling,
knowledge graph and sentiment analysis, which are delivered as inputs to
OMNIRank, a deep learning model for risk quantification of P2P platforms.
Finally, according to the rankings generated by OMNIRank, we build rich data
visualizations and interactions, providing lenders with comprehensive
information support, decision suggestions and safety guidance.
| Honglun Zhang, Haiyang Wang, Xiaming Chen, Yongkun Wang, Yaohui Jin | null | 1705.03497 | null | null |
DeepDeath: Learning to Predict the Underlying Cause of Death with Big
Data | cs.CL cs.LG stat.ML | Multiple cause-of-death data provides a valuable source of information that
can be used to enhance health standards by predicting health related
trajectories in societies with large populations. These data are often
available in large quantities across U.S. states and require Big Data
techniques to uncover complex hidden patterns. We design two different classes
of models suitable for large-scale analysis of mortality data, a Hadoop-based
ensemble of random forests trained over N-grams, and DeepDeath, a deep
classifier based on the recurrent neural network (RNN). We apply both classes
to the mortality data provided by the National Center for Health Statistics and
show that while both perform significantly better than the random classifier,
the deep model that utilizes long short-term memory networks (LSTMs) surpasses
the N-gram based models and is capable of learning the temporal aspect of the
data without a need for building ad-hoc, expert-driven features.
| Hamid Reza Hassanzadeh, Ying Sha, May D. Wang | null | 1705.03508 | null | null |
Policy Iterations for Reinforcement Learning Problems in Continuous Time
and Space -- Fundamental Theory and Methods | cs.AI cs.LG cs.SY eess.SY | Policy iteration (PI) is a recursive process of policy evaluation and
improvement for solving an optimal decision-making/control problem, or in other
words, a reinforcement learning (RL) problem. PI has also served as the
foundation for developing RL methods. In this paper, we propose two PI
methods, called differential PI (DPI) and integral PI (IPI), and their
variants, for a general RL framework in continuous time and space (CTS), where
the environment is modeled by a system of ordinary differential equations
(ODEs). The proposed methods inherit the current ideas of PI in classical RL
and optimal control and theoretically support the existing RL algorithms in
CTS: TD-learning and value-gradient-based (VGB) greedy policy update. We also
provide case studies including 1) discounted RL and 2) optimal control tasks.
Fundamental mathematical properties -- admissibility, uniqueness of the
solution to the Bellman equation (BE), monotone improvement, convergence, and
optimality of the solution to the Hamilton-Jacobi-Bellman equation (HJBE) --
are all investigated in depth and improved upon relative to the existing theory,
for both the general framework and the case studies. Finally, the proposed
methods are simulated with an inverted-pendulum model, in model-based and
partially model-free implementations, to support the theory and investigate them further.
| Jaeyoung Lee and Richard S. Sutton | 10.1016/j.automatica.2020.109421 | 1705.0352 | null | null |
A Large-Scale Exploration of Factors Affecting Hand Hygiene Compliance
Using Linear Predictive Models | cs.CY cs.LG stat.AP | This large-scale study, consisting of 24.5 million hand hygiene opportunities
spanning 19 distinct facilities in 10 different states, uses linear predictive
models to expose factors that may affect hand hygiene compliance. We examine
the use of features such as temperature, relative humidity, influenza severity,
day/night shift, federal holidays and the presence of new residents in
predicting daily hand hygiene compliance. The results suggest that colder
temperatures and federal holidays have an adverse effect on hand hygiene
compliance rates, and that individual cultures and attitudes regarding hand
hygiene seem to exist among facilities.
| Michael T. Lash, Jason Slater, Philip M. Polgreen, and Alberto M.
Segre | null | 1705.0354 | null | null |
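As a rough illustration of the kind of linear predictive model described above, the sketch below fits an ordinary least-squares model on synthetic daily data with the feature types the abstract lists (temperature, humidity, influenza severity, shift, holidays, new residents). The column names, data and coefficients are hypothetical, not from the study.
```python
# Hypothetical sketch: a linear model of daily hand hygiene compliance built
# from the feature types named in the abstract. Data and names are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_days = 365
X = pd.DataFrame({
    "temperature_c": rng.normal(15, 10, n_days),
    "relative_humidity": rng.uniform(20, 90, n_days),
    "influenza_severity": rng.uniform(0, 1, n_days),
    "night_shift": rng.integers(0, 2, n_days),
    "federal_holiday": rng.integers(0, 2, n_days),
    "new_residents": rng.integers(0, 2, n_days),
})
# Synthetic daily compliance rate, just to make the example runnable.
y = 0.8 + 0.002 * X["temperature_c"] - 0.05 * X["federal_holiday"] \
    + rng.normal(0, 0.02, n_days)

model = LinearRegression().fit(X, y)
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:20s} {coef:+.4f}")  # sign/size of each factor's estimated effect
```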
CORe50: a New Dataset and Benchmark for Continuous Object Recognition | cs.CV cs.AI cs.LG cs.RO | Continuous/Lifelong learning of high-dimensional data streams is a
challenging research problem. In fact, fully retraining models each time new
data become available is infeasible, due to computational and storage issues,
while na\"ive incremental strategies have been shown to suffer from
catastrophic forgetting. In the context of real-world object recognition
applications (e.g., robotic vision), where continuous learning is crucial, very
few datasets and benchmarks are available to evaluate and compare emerging
techniques. In this work we propose a new dataset and benchmark CORe50,
specifically designed for continuous object recognition, and introduce baseline
approaches for different continuous learning scenarios.
| Vincenzo Lomonaco and Davide Maltoni | null | 1705.0355 | null | null |
Relevance-based Word Embedding | cs.IR cs.CL cs.LG cs.NE | Learning a high-dimensional dense representation for vocabulary terms, also
known as a word embedding, has recently attracted much attention in natural
language processing and information retrieval tasks. The embedding vectors are
typically learned based on term proximity in a large corpus. This means that
the objective in well-known word embedding algorithms, e.g., word2vec, is to
accurately predict adjacent word(s) for a given word or context. However, this
objective is not necessarily equivalent to the goal of many information
retrieval (IR) tasks. The primary objective in various IR tasks is to capture
relevance instead of term proximity, syntactic, or even semantic similarity.
This is the motivation for developing unsupervised relevance-based word
embedding models that learn word representations based on query-document
relevance information. In this paper, we propose two learning models with
different objective functions; one learns a relevance distribution over the
vocabulary set for each query, and the other classifies each term as belonging
to the relevant or non-relevant class for each query. To train our models, we
used over six million unique queries and the top ranked documents retrieved in
response to each query, which are assumed to be relevant to the query. We
extrinsically evaluate our learned word representation models using two IR
tasks: query expansion and query classification. Both query expansion
experiments on four TREC collections and query classification experiments on
the KDD Cup 2005 dataset suggest that the relevance-based word embedding models
significantly outperform state-of-the-art proximity-based embedding models,
such as word2vec and GloVe.
| Hamed Zamani, W. Bruce Croft | null | 1705.03556 | null | null |
Deep Episodic Value Iteration for Model-based Meta-Reinforcement
Learning | stat.ML cs.AI cs.LG | We present a new deep meta reinforcement learner, which we call Deep Episodic
Value Iteration (DEVI). DEVI uses a deep neural network to learn a similarity
metric for a non-parametric model-based reinforcement learning algorithm. Our
model is trained end-to-end via back-propagation. Despite being trained using
the model-free Q-learning objective, we show that DEVI's model-based internal
structure provides `one-shot' transfer to changes in reward and transition
structure, even for tasks with very high-dimensional state spaces.
| Steven Stenberg Hansen | null | 1705.03562 | null | null |
Spatial Random Sampling: A Structure-Preserving Data Sketching Tool | cs.LG stat.ME stat.ML | Random column sampling is not guaranteed to yield data sketches that preserve
the underlying structures of the data and may not sample sufficiently from
less-populated data clusters. Also, adaptive sampling can often provide
accurate low rank approximations, yet may fall short of producing descriptive
data sketches, especially when the cluster centers are linearly dependent.
Motivated by that, this paper introduces a novel randomized column sampling
tool dubbed Spatial Random Sampling (SRS), in which data points are sampled
based on their proximity to randomly sampled points on the unit sphere. The
most compelling feature of SRS is that the corresponding probability of
sampling from a given data cluster is proportional to the surface area the
cluster occupies on the unit sphere, independent of the size of the cluster
population. Although it is fully randomized, SRS is shown to provide
descriptive and balanced data representations. The proposed idea addresses a
pressing need in data science and holds potential to inspire many novel
approaches for analysis of big data.
| Mostafa Rahmani, George Atia | 10.1109/LSP.2017.2723472 | 1705.03566 | null | null |
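The abstract describes sampling data points by their proximity to random points on the unit sphere; the sketch below is one plausible reading of that idea (normalize points to the sphere, draw random directions, keep the nearest data point to each), not the authors' exact procedure. Note how the small cluster is sampled far more often than its 10% share of the data.
```python
# Illustrative sketch of spatial random sampling as stated in the abstract;
# details (tie handling, duplicates) are assumptions, not the paper's algorithm.
import numpy as np

def spatial_random_sample(X, n_samples, seed=0):
    """Sample row indices of X by proximity to random points on the unit sphere."""
    rng = np.random.default_rng(seed)
    U = X / np.clip(np.linalg.norm(X, axis=1, keepdims=True), 1e-12, None)
    R = rng.standard_normal((n_samples, X.shape[1]))   # random directions
    R /= np.linalg.norm(R, axis=1, keepdims=True)      # points on the unit sphere
    # For each random point, keep the most aligned (closest on the sphere) data point.
    return np.unique(np.argmax(R @ U.T, axis=1))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(1.0, 0.1, (900, 5)),    # large cluster
               rng.normal(-1.0, 0.1, (100, 5))])  # small cluster
idx = spatial_random_sample(X, n_samples=20)
print("sampled from the small cluster:", int(np.sum(idx >= 900)), "of", len(idx))
```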
Discovery Radiomics via Evolutionary Deep Radiomic Sequencer Discovery
for Pathologically-Proven Lung Cancer Detection | cs.NE cs.CV cs.LG | While lung cancer is the second most diagnosed form of cancer in men and
women, a sufficiently early diagnosis can be pivotal in patient survival rates.
Imaging-based, or radiomics-driven, detection methods have been developed to
aid diagnosticians, but largely rely on hand-crafted features which may not
fully encapsulate the differences between cancerous and healthy tissue.
Recently, the concept of discovery radiomics was introduced, where custom
abstract features are discovered from readily available imaging data. We
propose a novel evolutionary deep radiomic sequencer discovery approach based
on evolutionary deep intelligence. Motivated by patient privacy concerns and
the idea of operational artificial intelligence, the evolutionary deep radiomic
sequencer discovery approach organically evolves increasingly more efficient
deep radiomic sequencers that produce significantly more compact yet similarly
descriptive radiomic sequences over multiple generations. As a result, this
framework improves operational efficiency and enables diagnosis to be run
locally at the radiologist's computer while maintaining detection accuracy. We
evaluated the evolved deep radiomic sequencer (EDRS) discovered via the
proposed evolutionary deep radiomic sequencer discovery framework against
state-of-the-art radiomics-driven and discovery radiomics methods using
clinical lung CT data with pathologically-proven diagnostic data from the
LIDC-IDRI dataset. The evolved deep radiomic sequencer shows improved
sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%)
relative to previous radiomics approaches.
| Mohammad Javad Shafiee, Audrey G. Chung, Farzad Khalvati, Masoom A.
Haider, and Alexander Wong | null | 1705.03572 | null | null |
Collaborative Descriptors: Convolutional Maps for Preprocessing | cs.CV cs.LG | The paper presents a novel concept for collaborative descriptors between
deeply learned and hand-crafted features. To achieve this concept, we apply
convolutional maps for pre-processing, namely, the convolutional maps are used
as input to hand-crafted features. We recorded an increase in the performance
rate of +17.06 % (multi-class object recognition) and +24.71 % (car detection)
from grayscale input to convolutional maps. Although the framework is
straightforward, the concept can be adopted for an improved representation.
| Hirokatsu Kataoka, Kaori Abe, Akio Nakamura, Yutaka Satoh | null | 1705.03595 | null | null |
An initialization method for the k-means using the concept of useful
nearest centers | cs.LG | The aim of k-means is to minimize the squared sum of Euclidean distances from
the mean (SSEDM) of each cluster. The k-means algorithm can effectively optimize this
function, but it is too sensitive to the initial centers (seeds). This paper
proposes a method for initializing k-means using the concept of a useful
nearest center for each data point.
| Hassan Ismkhan | null | 1705.03613 | null | null |
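For reference, the SSEDM objective mentioned above is $\sum_k \sum_{x \in C_k} \|x - \mu_k\|^2$; a minimal sketch of evaluating it is below. The paper's "useful nearest center" initialization itself is not reproduced here.
```python
# Sketch of the SSEDM objective: the squared sum of Euclidean distances of each
# point to the mean of its cluster (the quantity k-means minimizes).
import numpy as np

def ssedm(X, labels):
    """Squared sum of Euclidean distances of each point to its cluster mean."""
    total = 0.0
    for k in np.unique(labels):
        cluster = X[labels == k]
        center = cluster.mean(axis=0)              # the cluster mean
        total += np.sum((cluster - center) ** 2)   # squared distances to the mean
    return total

rng = np.random.default_rng(0)
X = rng.random((200, 2))
labels = rng.integers(0, 3, 200)                   # an arbitrary assignment
print("SSEDM of this assignment:", round(ssedm(X, labels), 3))
```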
Inferring and Executing Programs for Visual Reasoning | cs.CV cs.CL cs.LG | Existing methods for visual reasoning attempt to directly map inputs to
outputs using black-box architectures without explicitly modeling the
underlying reasoning processes. As a result, these black-box models often learn
to exploit biases in the data rather than learning to perform visual reasoning.
Inspired by module networks, this paper proposes a model for visual reasoning
that consists of a program generator that constructs an explicit representation
of the reasoning process to be performed, and an execution engine that executes
the resulting program to produce an answer. Both the program generator and the
execution engine are implemented by neural networks, and are trained using a
combination of backpropagation and REINFORCE. Using the CLEVR benchmark for
visual reasoning, we show that our model significantly outperforms strong
baselines and generalizes better in a variety of settings.
| Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy
Hoffman, Li Fei-Fei, C. Lawrence Zitnick, Ross Girshick | null | 1705.03633 | null | null |
Deep Speaker Feature Learning for Text-independent Speaker Verification | cs.SD cs.CL cs.LG | Recently deep neural networks (DNNs) have been used to learn speaker
features. However, the quality of the learned features is not sufficiently
good, so a complex back-end model, either neural or probabilistic, has to be
used to address the residual uncertainty when applied to speaker verification,
just as with raw features. This paper presents a convolutional time-delay deep
neural network structure (CT-DNN) for speaker feature learning. Our
experimental results on the Fisher database demonstrated that this CT-DNN can
produce high-quality speaker features: even with a single feature (0.3 seconds
including the context), the EER can be as low as 7.68%. This effectively
confirmed that the speaker trait is largely a deterministic short-time property
rather than a long-time distributional pattern, and therefore can be extracted
from just dozens of frames.
| Lantian Li, Yixiang Chen, Ying Shi, Zhiyuan Tang, Dong Wang | null | 1705.0367 | null | null |
Comments on the proof of adaptive submodular function minimization | stat.ML cs.LG | We point out an issue with Theorem 5 appearing in "Group-based active query
selection for rapid diagnosis in time-critical situations". Theorem 5 bounds
the expected number of queries for a greedy algorithm to identify the class of
an item within a constant factor of optimal. The theorem is based on the
correctness of a result on the minimization of adaptive submodular functions. We
present an example that shows that a critical step in Theorem A.11 of "Adaptive
Submodularity: Theory and Applications in Active Learning and Stochastic
Optimization" is incorrect.
| Feng Nan and Venkatesh Saligrama | null | 1705.03771 | null | null |
Hybrid Isolation Forest - Application to Intrusion Detection | cs.LG | From the identification of a drawback in the Isolation Forest (IF) algorithm
that limits its use in the scope of anomaly detection, we propose two
extensions that allow us to firstly overcome the previously mentioned limitation and
secondly to provide it with some supervised learning capability. The resulting
Hybrid Isolation Forest (HIF) that we propose is first evaluated on a synthetic
dataset to analyze the effect of the new meta-parameters that are introduced
and verify that the addressed limitation of the IF algorithm is effectively
overcome. We then compare the two algorithms on the ISCX benchmark dataset, in
the context of a network intrusion detection application. Our experiments show
that HIF outperforms IF, and also challenges the 1-class and 2-class SVM
baselines while being computationally efficient.
| Pierre-Fran\c{c}ois Marteau, Saeid Soheily-Khah, Nicolas B\'echet | null | 1705.038 | null | null |
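The HIF extensions are not spelled out in the abstract; for orientation, the unsupervised Isolation Forest baseline it builds on is available off the shelf, e.g. in scikit-learn. The data and parameters below are illustrative only.
```python
# Baseline Isolation Forest (the IF algorithm the abstract extends), using the
# scikit-learn implementation; this is not the proposed Hybrid Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(1000, 4))       # "normal" traffic-like features
anomalies = rng.uniform(-6, 6, size=(20, 4))    # scattered anomalous points
X = np.vstack([normal, anomalies])

iforest = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
iforest.fit(X)
scores = -iforest.score_samples(X)              # higher = more anomalous
print("mean anomaly score, normal points:  ", scores[:1000].mean().round(3))
print("mean anomaly score, injected points:", scores[1000:].mean().round(3))
```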
Context Attentive Bandits: Contextual Bandit with Restricted Context | cs.AI cs.LG stat.ML | We consider a novel formulation of the multi-armed bandit model, which we
call the contextual bandit with restricted context, where only a limited number
of features can be accessed by the learner at every iteration. This novel
formulation is motivated by different online problems arising in clinical
trials, recommender systems and attention modeling. Herein, we adapt the
standard multi-armed bandit algorithm known as Thompson Sampling to take
advantage of our restricted context setting, and propose two novel algorithms,
called the Thompson Sampling with Restricted Context (TSRC) and the Windows
Thompson Sampling with Restricted Context (WTSRC), for handling stationary and
nonstationary environments, respectively. Our empirical results demonstrate
advantages of the proposed approaches on several real-life datasets.
| Djallel Bouneffouf, Irina Rish, Guillermo A. Cecchi, Raphael Feraud | null | 1705.03821 | null | null |
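TSRC and WTSRC are described only as adaptations of standard Thompson Sampling; for context, a minimal Bernoulli Thompson Sampling loop (the base algorithm, without the restricted-context machinery) looks like this:
```python
# Standard Bernoulli Thompson Sampling with Beta posteriors; the
# restricted-context variants TSRC/WTSRC are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])   # unknown arm reward probabilities
n_arms = len(true_means)
alpha = np.ones(n_arms)                  # Beta posterior: successes + 1
beta = np.ones(n_arms)                   # Beta posterior: failures + 1

for t in range(5000):
    theta = rng.beta(alpha, beta)        # sample a plausible mean per arm
    arm = int(np.argmax(theta))          # play the arm with the largest sample
    reward = float(rng.random() < true_means[arm])
    alpha[arm] += reward
    beta[arm] += 1.0 - reward

print("posterior mean estimates:", np.round(alpha / (alpha + beta), 3))
```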
Context-Aware Hierarchical Online Learning for Performance Maximization
in Mobile Crowdsourcing | cs.LG cs.HC cs.SI | In mobile crowdsourcing (MCS), mobile users accomplish outsourced human
intelligence tasks. MCS requires an appropriate task assignment strategy, since
different workers may have different performance in terms of acceptance rate
and quality. Task assignment is challenging, since a worker's performance (i)
may fluctuate, depending on both the worker's current personal context and the
task context, (ii) is not known a priori, but has to be learned over time.
Moreover, learning context-specific worker performance requires access to
context information, which may not be available at a central entity due to
communication overhead or privacy concerns. Additionally, evaluating worker
performance might require costly quality assessments. In this paper, we propose
a context-aware hierarchical online learning algorithm addressing the problem
of performance maximization in MCS. In our algorithm, a local controller (LC)
in the mobile device of a worker regularly observes the worker's context,
her/his decisions to accept or decline tasks and the quality in completing
tasks. Based on these observations, the LC regularly estimates the worker's
context-specific performance. The mobile crowdsourcing platform (MCSP) then
selects workers based on performance estimates received from the LCs. This
hierarchical approach enables the LCs to learn context-specific worker
performance and it enables the MCSP to select suitable workers. In addition,
our algorithm preserves worker context locally, and it keeps the number of
required quality assessments low. We prove that our algorithm converges to the
optimal task assignment strategy. Moreover, the algorithm outperforms simpler
task assignment strategies in experiments based on synthetic and real data.
| Sabrina Klos (n\'ee M\"uller), Cem Tekin, Mihaela van der Schaar, Anja
Klein | 10.1109/TNET.2018.2828415 | 1705.03822 | null | null |
Net2Vec: Deep Learning for the Network | cs.NI cs.LG | We present Net2Vec, a flexible high-performance platform that allows the
execution of deep learning algorithms in the communication network. Net2Vec is
able to capture data from the network at more than 60Gbps, transform it into
meaningful tuples and apply predictions over the tuples in real time. This
platform can be used for different purposes ranging from traffic classification
to network performance analysis.
Finally, we showcase the use of Net2Vec by implementing and testing a
solution able to profile network users at line rate using traces coming from a
real network. We show that the use of deep learning for this case outperforms
the baseline method both in terms of accuracy and performance.
| Roberto Gonzalez, Filipe Manco, Alberto Garcia-Duran, Jose Mendes,
Felipe Huici, Saverio Niccolini, Mathias Niepert | null | 1705.03881 | null | null |
Why & When Deep Learning Works: Looking Inside Deep Learnings | cs.LG | The Intel Collaborative Research Institute for Computational Intelligence
(ICRI-CI) has been heavily supporting Machine Learning and Deep Learning
research from its foundation in 2012. We have asked six leading ICRI-CI Deep
Learning researchers to address the challenge of "Why & When Deep Learning
works", with the goal of looking inside Deep Learning, providing insights on
how deep networks function, and uncovering key observations on their
expressiveness, limitations, and potential. The output of this challenge
resulted in five papers that address different facets of deep learning. These
different facets include a high-level understanding of why and when deep
networks work (and do not work), the impact of geometry on the expressiveness
of deep networks, and making deep networks interpretable.
| Ronny Ronen | null | 1705.03921 | null | null |
GQ($\lambda$) Quick Reference and Implementation Guide | cs.LG | This document should serve as a quick reference for and guide to the
implementation of linear GQ($\lambda$), a gradient-based off-policy
temporal-difference learning algorithm. Explanations of the intuition and theory
behind the algorithm are provided elsewhere (e.g., Maei & Sutton 2010, Maei
2011). If you have questions or concerns about the content in this document or the
attached java code please email Adam White ([email protected]).
The code is provided as part of the source files in the arXiv submission.
| Adam White and Richard S. Sutton | null | 1705.03967 | null | null |
Mining Functional Modules by Multiview-NMF of Phenome-Genome Association | cs.LG q-bio.QM | Background: Mining gene modules from genomic data is an important step to
detect gene members of pathways or other relations such as protein-protein
interactions. In this work, we explore the plausibility of detecting gene
modules by factorizing gene-phenotype associations from a phenotype ontology
rather than the conventionally used gene expression data. In particular, the
hierarchical structure of ontology has not been sufficiently utilized in
clustering genes while functionally related genes are consistently associated
with phenotypes on the same path in the phenotype ontology. Results: We propose
a hierarchical Nonnegative Matrix Factorization (NMF)-based method, called
Consistent Multiple Nonnegative Matrix Factorization (CMNMF), to factorize
genome-phenome association matrix at two levels of the hierarchical structure
in phenotype ontology for mining gene functional modules. CMNMF constrains the
gene clusters from the association matrices at two consecutive levels to be
consistent since the genes are annotated with both the child phenotype and the
parent phenotype in the consecutive levels. CMNMF also restricts the identified
phenotype clusters to be densely connected in the phenotype ontology hierarchy.
In the experiments on mining functionally related genes from mouse phenotype
ontology and human phenotype ontology, CMNMF effectively improved clustering
performance over the baseline methods. Gene ontology enrichment analysis was
also conducted to reveal interesting gene modules. Conclusions: Utilizing the
information in the hierarchical structure of phenotype ontology, CMNMF can
identify functional gene modules with more biological significance than the
conventional methods. CMNMF could also be a better tool for predicting members
of gene pathways and protein-protein interactions. Availability:
https://github.com/nkiip/CMNMF
| YaoGong Zhang, YingJie Xu, Xin Fan, YuXiang Hong, Jiahui Liu, ZhiCheng
He, YaLou Huang and MaoQiang Xie | null | 1705.03998 | null | null |
Fast Stochastic Variance Reduced ADMM for Stochastic Composition
Optimization | cs.LG stat.ML | We consider the stochastic composition optimization problem proposed in
\cite{wang2017stochastic}, which has applications ranging from estimation to
statistical and machine learning. We propose the first ADMM-based algorithm
named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly
convex and Lipschitz smooth objectives, and has a convergence rate of $O( \log
S/S)$, which improves upon the $O(S^{-4/9})$ rate in
\cite{wang2016accelerating} when the objective is convex and Lipschitz smooth.
Moreover, com-SVR-ADMM possesses a rate of $O(1/\sqrt{S})$ when the objective
is convex but without Lipschitz smoothness. We also conduct experiments and
show that it outperforms existing algorithms.
| Yue Yu, Longbo Huang | null | 1705.04138 | null | null |
Program Induction by Rationale Generation : Learning to Solve and
Explain Algebraic Word Problems | cs.AI cs.CL cs.LG | Solving algebraic word problems requires executing a series of arithmetic
operations---a program---to obtain a final answer. However, since programs can
be arbitrarily complicated, inducing them directly from question-answer pairs
is a formidable challenge. To make this task more feasible, we solve these
problems by generating answer rationales, sequences of natural language and
human-readable mathematical expressions that derive the final answer through a
series of small steps. Although rationales do not explicitly specify programs,
they provide a scaffolding for their structure via intermediate milestones. To
evaluate our approach, we have created a new 100,000-sample dataset of
questions, answers and rationales. Experimental results show that indirect
supervision of program learning via answer rationales is a promising strategy
for inducing arithmetic programs.
| Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom | null | 1705.04146 | null | null |
A First Empirical Study of Emphatic Temporal Difference Learning | cs.AI cs.LG | In this paper we present the first empirical study of the emphatic
temporal-difference learning algorithm (ETD), comparing it with conventional
temporal-difference learning, in particular, with linear TD(0), on on-policy
and off-policy variations of the Mountain Car problem. The initial motivation
for developing ETD was that it has good convergence properties under off-policy
training (Sutton, Mahmood and White 2016), but it is also a new algorithm for
the on-policy case. In both our on-policy and off-policy experiments, we found
that each method converged to a characteristic asymptotic level of error, with
ETD better than TD(0). TD(0) achieved a still lower error level temporarily
before falling back to its higher asymptote, whereas ETD never showed this kind
of "bounce". In the off-policy case (in which TD(0) is not guaranteed to
converge), ETD was significantly slower.
| Sina Ghiassian, Banafsheh Rafiee, Richard S. Sutton | null | 1705.04185 | null | null |
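For reference, the conventional baseline in this comparison, linear TD(0), has a compact update; the sketch below runs it on a toy random-walk chain rather than the Mountain Car task used in the paper.
```python
# Generic on-policy linear TD(0), the conventional baseline ETD is compared
# against. The environment here is a toy random walk, not Mountain Car.
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, alpha = 5, 0.99, 0.1
phi = np.eye(n_states)                  # one-hot (tabular) features
w = np.zeros(n_states)                  # linear value-function weights

for episode in range(2000):
    s = n_states // 2                   # start in the middle of the chain
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        done = s_next < 0 or s_next >= n_states
        r = 1.0 if s_next >= n_states else 0.0
        v_next = 0.0 if done else w @ phi[s_next]
        delta = r + gamma * v_next - w @ phi[s]   # TD error
        w += alpha * delta * phi[s]               # TD(0) update
        if done:
            break
        s = s_next

print("estimated state values:", np.round(w, 3))
```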
Nonnegative Matrix Factorization with Transform Learning | cs.LG | Traditional NMF-based signal decomposition relies on the factorization of
spectral data, which is typically computed by means of short-time frequency
transform. In this paper we propose to relax the choice of a pre-fixed
transform and learn a short-time orthogonal transform together with the
factorization. To this end, we formulate a regularized optimization problem
reminiscent of conventional NMF, yet with the transform as additional unknown
parameters, and design a novel block-descent algorithm enabling us to find
stationary points of this objective function. The proposed joint transform
learning and factorization approach is tested for two audio signal processing
experiments, illustrating its conceptual and practical benefits.
| Dylan Fagot, C\'edric F\'evotte and Herwig Wendt | null | 1705.04193 | null | null |
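The joint transform-and-factorization algorithm is the paper's contribution; for context, the conventional NMF step it extends (Lee-Seung multiplicative updates for the Frobenius objective $\|V - WH\|_F^2$) is standard and sketched below.
```python
# Conventional NMF with multiplicative updates for ||V - WH||_F^2, the standard
# factorization the abstract builds on; the transform-learning part is omitted.
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

V = np.abs(np.random.randn(64, 200))           # e.g. a magnitude spectrogram
W, H = nmf(V, rank=8)
print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```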
Incremental Learning Through Deep Adaptation | cs.CV cs.LG | Given an existing trained neural network, it is often desirable to learn new
capabilities without hindering performance of those already learned. Existing
approaches either learn sub-optimal solutions, require joint training, or incur
a substantial increment in the number of parameters for each added domain,
typically as many as the original network. We propose a method called
\emph{Deep Adaptation Networks} (DAN) that constrains newly learned filters to
be linear combinations of existing ones. DANs precisely preserve performance on
the original domain, require a fraction (typically 13\%, dependent on network
architecture) of the number of parameters compared to standard fine-tuning
procedures, and converge in fewer training cycles to a comparable or better
level of performance. When coupled with standard network quantization
techniques, we further reduce the parameter cost to around 3\% of the original
with negligible or no loss in accuracy. The learned architecture can be
controlled to switch between various learned representations, enabling a single
network to solve a task from multiple different domains. We conduct extensive
experiments showing the effectiveness of our method on a range of image
classification tasks and explore different aspects of its behavior.
| Amir Rosenfeld, John K. Tsotsos | null | 1705.04228 | null | null |
K-sets+: a Linear-time Clustering Algorithm for Data Points with a
Sparse Similarity Measure | cs.DS cs.LG | In this paper, we first propose a new iterative algorithm, called the K-sets+
algorithm for clustering data points in a semi-metric space, where the distance
measure does not necessarily satisfy the triangular inequality. We show that
the K-sets+ algorithm converges in a finite number of iterations and it retains
the same performance guarantee as the K-sets algorithm for clustering data
points in a metric space. We then extend the applicability of the K-sets+
algorithm from data points in a semi-metric space to data points that only have
a symmetric similarity measure. Such an extension leads to great reduction of
computational complexity. In particular, for an n * n similarity matrix with m
nonzero elements in the matrix, the computational complexity of the K-sets+
algorithm is O((Kn + m)I), where I is the number of iterations. The memory
complexity to achieve that computational complexity is O(Kn + m). As such, both
the computational complexity and the memory complexity are linear in n when the
n * n similarity matrix is sparse, i.e., m = O(n). We also conduct various
experiments to show the effectiveness of the K-sets+ algorithm by using a
synthetic dataset from the stochastic block model and a real network from the
WonderNetwork website.
| Cheng-Shang Chang, Chia-Tai Chang, Duan-Shin Lee and Li-Heng Liou | null | 1705.04249 | null | null |
Learning to see people like people | cs.CV cs.AI cs.LG | Humans make complex inferences on faces, ranging from objective properties
(gender, ethnicity, expression, age, identity, etc) to subjective judgments
(facial attractiveness, trustworthiness, sociability, friendliness, etc). While
the objective aspects of face perception have been extensively studied,
relatively fewer computational models have been developed for the social
impressions of faces. Bridging this gap, we develop a method to predict human
impressions of faces in 40 subjective social dimensions, using deep
representations from state-of-the-art neural networks. We find that model
performance grows as the human consensus on a face trait increases, and that
model predictions outperform human groups in correlation with human averages.
This illustrates the learnability of subjective social perception of faces,
especially when there is high human consensus. Our system can be used to decide
which photographs from a personal collection will make the best impression. The
results are significant for the field of social robotics, demonstrating that
robots can learn the subjective judgments defining the underlying fabric of
human interaction.
| Amanda Song, Linjie Li, Chad Atalla, Garrison Cottrell | null | 1705.04282 | null | null |
Phase recovery and holographic image reconstruction using deep learning
in neural networks | cs.CV cs.IR cs.LG physics.app-ph physics.optics | Phase recovery from intensity-only measurements forms the heart of coherent
imaging techniques and holography. Here we demonstrate that a neural network
can learn to perform phase recovery and holographic image reconstruction after
appropriate training. This deep learning-based approach provides an entirely
new framework to conduct holographic imaging by rapidly eliminating twin-image
and self-interference related spatial artifacts. Compared to existing
approaches, this neural network based method is significantly faster to
compute, and reconstructs improved phase and amplitude images of the objects
using only one hologram, i.e., it requires fewer measurements in addition
to being computationally faster. We validated this method by reconstructing
phase and amplitude images of various samples, including blood and Pap smears,
and tissue sections. These results are broadly applicable to any phase recovery
problem, and highlight that through machine learning challenging problems in
imaging science can be overcome, providing new avenues to design powerful
computational imaging systems.
| Yair Rivenson, Yibo Zhang, Harun Gunaydin, Da Teng, Aydogan Ozcan | 10.1038/lsa.2017.141 | 1705.04286 | null | null |
Bayesian Approaches to Distribution Regression | stat.ML cs.LG | Distribution regression has recently attracted much interest as a generic
solution to the problem of supervised learning where labels are available at
the group level, rather than at the individual level. Current approaches,
however, do not propagate the uncertainty in observations due to sampling
variability in the groups. This effectively assumes that small and large groups
are estimated equally well, and should have equal weight in the final
regression. We account for this uncertainty with a Bayesian distribution
regression formalism, improving the robustness and performance of the model
when group sizes vary. We frame our models in a neural network style, allowing
for simple MAP inference using backpropagation to learn the parameters, as well
as MCMC-based inference which can fully propagate uncertainty. We demonstrate
our approach on illustrative toy datasets, as well as on a challenging problem
of predicting age from images.
| Ho Chung Leon Law, Danica J. Sutherland, Dino Sejdinovic, Seth Flaxman | null | 1705.04293 | null | null |
The Network Nullspace Property for Compressed Sensing of Big Data over
Networks | stat.ML cs.LG | We present a novel condition, which we term the network nullspace property,
which ensures accurate recovery of graph signals representing massive
network-structured datasets from few signal values. The network nullspace
property couples the cluster structure of the underlying network with
the geometry of the sampling set. Our results can be used to design efficient
sampling strategies based on the network topology.
| Alexander Jung, Madelon Hulsebos | null | 1705.04379 | null | null |
Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks | cs.LG cs.AI math.DS stat.ML | Existing methods for arterial blood pressure (BP) estimation directly map the
input physiological signals to output BP values without explicitly modeling the
underlying temporal dependencies in BP dynamics. As a result, these models
suffer from accuracy decay over a long time and thus require frequent
calibration. In this work, we address this issue by formulating BP estimation
as a sequence prediction problem in which both the input and target are
temporal sequences. We propose a novel deep recurrent neural network (RNN)
consisting of multilayered Long Short-Term Memory (LSTM) networks, which are
incorporated with (1) a bidirectional structure to access larger-scale context
information of input sequence, and (2) residual connections to allow gradients
in deep RNN to propagate more effectively. The proposed deep RNN model was
tested on a static BP dataset, and it achieved root mean square error (RMSE) of
3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction
respectively, surpassing the accuracy of traditional BP prediction models. On a
multi-day BP dataset, the deep RNN achieved RMSE of 3.84, 5.25, 5.80 and 5.81
mmHg for SBP prediction on the 1st day, 2nd day, 4th day and 6th month after the
1st day, and 1.80, 4.78, 5.0 and 5.21 mmHg for the corresponding DBP
predictions, outperforming all previous models by a notable margin.
The experimental results suggest that modeling the temporal dependencies in BP
dynamics significantly improves the long-term BP prediction accuracy.
| Peng Su, Xiao-Rong Ding, Yuan-Ting Zhang, Jing Liu, Fen Miao, Ni Zhao | null | 1705.04524 | null | null |
Learning ReLUs via Gradient Descent | cs.LG cs.IT math.IT math.OC stat.ML | In this paper we study the problem of learning Rectified Linear Units (ReLUs)
which are functions of the form $\max(0,\langle w,x\rangle)$ with $w$ denoting the weight
vector. We study this problem in the high-dimensional regime where the number
of observations is smaller than the dimension of the weight vector. We assume
that the weight vector belongs to some closed set (convex or nonconvex) which
captures known side-information about its structure. We focus on the realizable
model where the inputs are chosen i.i.d.~from a Gaussian distribution and the
labels are generated according to a planted weight vector. We show that
projected gradient descent, when initialized at 0, converges at a linear
rate to the planted model with a number of samples that is optimal up to
numerical constants. Our results on the dynamics of convergence of these very
shallow neural nets may provide some insights towards understanding the
dynamics of deeper architectures.
| Mahdi Soltanolkotabi | null | 1705.04591 | null | null |
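The setting described above (Gaussian inputs, labels from a planted weight vector, gradient descent initialized at zero) translates into a short experiment. The sketch below omits the projection onto a structured constraint set (which is what allows fewer observations than dimensions in the paper), so it uses n > d, and it treats the ReLU derivative at 0 as 1 so the first step can leave the origin; both are simplifying assumptions.
```python
# Sketch of the realizable ReLU model: Gaussian inputs, labels y = max(0, <w*, x>),
# plain (unprojected) gradient descent initialized at 0.
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 400
w_star = rng.standard_normal(d)              # planted weight vector
X = rng.standard_normal((n, d))              # i.i.d. Gaussian inputs
y = np.maximum(0.0, X @ w_star)              # labels from the planted ReLU

w = np.zeros(d)                              # initialized at 0, as in the abstract
lr = 1.0 / np.linalg.norm(X, 2) ** 2         # conservative step size
for _ in range(2000):
    z = X @ w
    # (Sub)gradient of 0.5 * sum (max(0,<w,x>) - y)^2; ReLU'(0) is taken as 1
    # so the very first step moves away from w = 0.
    grad = X.T @ ((np.maximum(0.0, z) - y) * (z >= 0))
    w -= lr * grad

print("relative estimation error:",
      np.linalg.norm(w - w_star) / np.linalg.norm(w_star))
```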
Molecular Generation with Recurrent Neural Networks (RNNs) | cs.LG q-bio.BM | The potential number of drug like small molecules is estimated to be between
10^23 and 10^60 while current databases of known compounds are orders of
magnitude smaller with approximately 10^8 compounds. This discrepancy has led
to an interest in generating virtual libraries using hand crafted chemical
rules and fragment based methods to cover a larger area of chemical space and
generate chemical libraries for use in in silico drug discovery endeavors. Here
we explore to what extent a recurrent neural network with long short-term
memory cells can figure out sensible chemical rules and generate synthesizable
molecules by being trained on existing compounds encoded as SMILES. The
networks can to a high extent generate novel, but chemically sensible
molecules. The properties of the molecules are tuned by training on two
different datasets consisting of fragment like molecules and drug like
molecules. The produced molecules and the training databases have very similar
distributions of molar weight, predicted logP, number of hydrogen bond
acceptors and donors, number of rotatable bonds and topological polar surface
area when compared to their respective training sets. The compounds are in most
cases synthesizable, as assessed with the SA score and Wiley ChemPlanner.
| Esben Jannik Bjerrum, Richard Threlfall | null | 1705.04612 | null | null |
Forecasting using incomplete models | cs.LG | We consider the task of forecasting an infinite sequence of future
observations based on some number of past observations, where the probability
measure generating the observations is "suspected" to satisfy one or more of a
set of incomplete models, i.e. convex sets in the space of probability
measures. This setting is in some sense intermediate between the realizable
setting where the probability measure comes from some known set of probability
measures (which can be addressed using e.g. Bayesian inference) and the
unrealizable setting where the probability measure is completely arbitrary. We
demonstrate a method of forecasting which guarantees that, whenever the true
probability measure satisfies an incomplete model in a given countable set, the
forecast converges to the same incomplete model in the (appropriately
normalized) Kantorovich-Rubinstein metric. This is analogous to merging of
opinions for Bayesian inference, except that convergence in the
Kantorovich-Rubinstein metric is weaker than convergence in total variation.
| Vanessa Kosoy | null | 1705.0463 | null | null |
Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines:
A Majorization--Minimization Algorithm Approach | stat.CO cs.LG stat.ML | Support vector machines (SVMs) are an important tool in modern data analysis.
Traditionally, support vector machines have been fitted via quadratic
programming, either using purpose-built or off-the-shelf algorithms. We present
an alternative approach to SVM fitting via the majorization--minimization (MM)
paradigm. Algorithms that are derived via MM algorithm constructions can be
shown to monotonically decrease their objectives at each iteration, as well as
be globally convergent to stationary points. We demonstrate the construction of
iteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm,
for SVM risk minimization problems involving the hinge, least-square,
squared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net
penalizations. Successful implementations of our algorithms are presented via
some numerical examples.
| Hien D. Nguyen and Geoffrey J. McLachlan | null | 1705.04651 | null | null |
Monaural Audio Speaker Separation with Source Contrastive Estimation | cs.SD cs.AI cs.LG stat.ML | We propose an algorithm to separate simultaneously speaking persons from each
other, the "cocktail party problem", using a single microphone. Our approach
involves a deep recurrent neural network regression to a vector space that is
descriptive of independent speakers. Such a vector space can embed empirically
determined speaker characteristics and is optimized by distinguishing between
speaker masks. We call this technique source-contrastive estimation. The
methodology is inspired by negative sampling, which has seen success in natural
language processing, where an embedding is learned by correlating and
de-correlating a given input vector with output weights. Although the matrix
determined by the output weights is dependent on a set of known speakers, we
only use the input vectors during inference. Doing so will ensure that source
separation is explicitly speaker-independent. Our approach is similar to recent
deep neural network clustering and permutation-invariant training research; we
use weighted spectral features and masks to augment individual speaker
frequencies while filtering out other speakers. We avoid, however, the severe
computational burden of other approaches with our technique. Furthermore, by
training a vector space rather than combinations of different speakers or
differences thereof, we avoid the so-called permutation problem during
training. Our algorithm offers an intuitive, computationally efficient response
to the cocktail party problem, and most importantly boasts better empirical
performance than other current techniques.
| Cory Stephenson, Patrick Callier, Abhinav Ganesh, Karl Ni | null | 1705.04662 | null | null |
Deep Learning Microscopy | cs.LG cs.CV physics.optics | We demonstrate that a deep neural network can significantly improve optical
microscopy, enhancing its spatial resolution over a large field-of-view and
depth-of-field. After its training, the only input to this network is an image
acquired using a regular optical microscope, without any changes to its design.
We blindly tested this deep learning approach using various tissue samples that
are imaged with low-resolution and wide-field systems, where the network
rapidly outputs an image with remarkably better resolution, matching the
performance of higher numerical aperture lenses, also significantly surpassing
their limited field-of-view and depth-of-field. These results are
transformative for various fields that use microscopy tools, including e.g.,
life sciences, where optical microscopy is considered as one of the most widely
used and deployed techniques. Beyond such applications, our presented approach
is broadly applicable to other imaging modalities, also spanning different
parts of the electromagnetic spectrum, and can be used to design computational
imagers that get better and better as they continue to image specimens and
establish new transformations among different modes of imaging.
| Yair Rivenson, Zoltan Gorocs, Harun Gunaydin, Yibo Zhang, Hongda Wang,
Aydogan Ozcan | 10.1364/OPTICA.4.001437 | 1705.04709 | null | null |
Bayesian Decision Making in Groups is Hard | math.ST cs.CC cs.LG cs.MA cs.SI stat.TH | We study the computations that Bayesian agents undertake when exchanging
opinions over a network. The agents act repeatedly on their private information
and take myopic actions that maximize their expected utility according to a
fully rational posterior belief. We show that such computations are NP-hard for
two natural utility functions: one with binary actions, and another where
agents reveal their posterior beliefs. In fact, we show that distinguishing
between posteriors that are concentrated on different states of the world is
NP-hard. Therefore, even approximating the Bayesian posterior beliefs is hard.
We also describe a natural search algorithm to compute agents' actions, which
we call elimination of impossible signals, and show that if the network is
transitive, the algorithm can be modified to run in polynomial time.
| Jan H\k{a}z{\l}a, Ali Jadbabaie, Elchanan Mossel, M. Amin Rahimian | 10.1287/opre.2020.2000 | 1705.0477 | null | null |
Automatically Redundant Features Removal for Unsupervised Feature
Selection via Sparse Feature Graph | cs.LG | The redundant features existing in high dimensional datasets always affect
the performance of learning and mining algorithms. How to detect and remove
them is an important research topic in machine learning and data mining
research. In this paper, we propose a graph based approach to find and remove
those redundant features automatically for high dimensional data. Based on the
sparse learning based unsupervised feature selection framework, Sparse Feature
Graph (SFG) is introduced not only to model the redundancy between two
features, but also to disclose the group redundancy between two groups of
features. With SFG, we can divide the whole features into different groups, and
improve the intrinsic structure of data by removing detected redundant
features. With accurate data structure, quality indicator vectors can be
obtained to improve the learning performance of existing unsupervised feature
selection algorithms such as multi-cluster feature selection (MCFS). Our
experimental results on benchmark datasets show that the proposed SFG and
feature redundancy removal algorithm can improve the performance of unsupervised
feature selection algorithms consistently.
| Shuchu Han, Hao Huang, Hong Qin | null | 1705.04804 | null | null |
Efficient Parallel Methods for Deep Reinforcement Learning | cs.LG | We propose a novel framework for efficient parallelization of deep
reinforcement learning algorithms, enabling these algorithms to learn from
multiple actors on a single machine. The framework is algorithm agnostic and
can be applied to on-policy, off-policy, value based and policy gradient based
algorithms. Given its inherent parallelism, the framework can be efficiently
implemented on a GPU, allowing the usage of powerful models while significantly
reducing training time. We demonstrate the effectiveness of our framework by
implementing an advantage actor-critic algorithm on a GPU, using on-policy
experiences and employing synchronous updates. Our algorithm achieves
state-of-the-art performance on the Atari domain after only a few hours of
training. Our framework thus opens the door for much faster experimentation on
demanding problem domains. Our implementation is open-source and is made public
at https://github.com/alfredvc/paac
| Alfredo V. Clemente, Humberto N. Castej\'on, Arjun Chandra | null | 1705.04862 | null | null |
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex
Optimization | cs.LG | In many modern machine learning applications, structures of underlying
mathematical models often yield nonconvex optimization problems. Due to the
intractability of nonconvexity, there is a rising need to develop efficient
methods for solving general nonconvex problems with certain performance
guarantee. In this work, we investigate the accelerated proximal gradient
method for nonconvex programming (APGnc). The method compares a usual
proximal gradient step and a linear extrapolation step, and accepts the one
that has the lower function value, to achieve a monotonic decrease. Specifically,
under a general nonsmooth and nonconvex setting, we provide a rigorous argument
to show that the limit points of the sequence generated by APGnc are critical
points of the objective function. Then, by exploiting the
Kurdyka-{\L}ojasiewicz (\KL) property for a broad class of functions, we
establish the linear and sub-linear convergence rates of the function value
sequence generated by APGnc. We further propose a stochastic variance reduced
APGnc (SVRG-APGnc), and establish its linear convergence under a special case
of the \KL property. We also extend the analysis to the inexact version of
these methods and develop an adaptive momentum strategy that improves the
numerical performance.
| Qunwei Li, Yi Zhou, Yingbin Liang, Pramod K. Varshney | null | 1705.04925 | null | null |
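The accept-the-lower-value rule described above (take both a plain proximal gradient step and an extrapolated one, keep whichever has the smaller objective) can be sketched directly. Below it is applied to an l1-regularized least-squares objective purely to illustrate the update rule, not to reproduce the paper's nonconvex analysis; problem data and the momentum weight are arbitrary.
```python
# Sketch of the APGnc-style rule: at each iteration take a plain proximal
# gradient step and an extrapolated (momentum) step, accept the lower objective.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
b = rng.standard_normal(100)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part

def F(x):                                # full objective: smooth + nonsmooth
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def prox_grad_step(y):
    grad = A.T @ (A @ y - b)
    return soft_threshold(y - grad / L, lam / L)

x = x_prev = np.zeros(50)
beta = 0.9                               # extrapolation (momentum) weight
for k in range(300):
    x_plain = prox_grad_step(x)                        # usual proximal step
    x_extra = prox_grad_step(x + beta * (x - x_prev))  # extrapolated step
    x_prev = x
    x = x_plain if F(x_plain) <= F(x_extra) else x_extra  # accept the lower value
print("final objective:", F(x))
```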
Detecting Statistical Interactions from Neural Network Weights | stat.ML cs.LG | Interpreting neural networks is a crucial and challenging task in machine
learning. In this paper, we develop a novel framework for detecting statistical
interactions captured by a feedforward multilayer neural network by directly
interpreting its learned weights. Depending on the desired interactions, our
method can achieve significantly better or similar interaction detection
performance compared to the state-of-the-art without searching an exponential
solution space of possible interactions. We obtain this accuracy and efficiency
by observing that interactions between input features are created by the
non-additive effect of nonlinear activation functions, and that interacting
paths are encoded in weight matrices. We demonstrate the performance of our
method and the importance of discovered interactions via experimental results
on both synthetic datasets and real-world application datasets.
| Michael Tsang, Dehua Cheng, Yan Liu | null | 1705.04977 | null | null |
Discrete-Continuous ADMM for Transductive Inference in Higher-Order MRFs | cs.LG | This paper introduces a novel algorithm for transductive inference in
higher-order MRFs, where the unary energies are parameterized by a variable
classifier. The considered task is posed as a joint optimization problem in the
continuous classifier parameters and the discrete label variables. In contrast
to prior approaches such as convex relaxations, we propose an advantageous
decoupling of the objective function into discrete and continuous subproblems
and a novel, efficient optimization method related to ADMM. This approach
preserves integrality of the discrete label variables and guarantees global
convergence to a critical point. We demonstrate the advantages of our approach
in several experiments including video object segmentation on the DAVIS data
set and interactive image segmentation.
| Emanuel Laude, Jan-Hendrik Lange, Jonas Sch\"upfer, Csaba Domokos,
Laura Leal-Taix\'e, Frank R. Schmidt, Bjoern Andres, Daniel Cremers | null | 1705.0502 | null | null |
Discrete Sequential Prediction of Continuous Actions for Deep RL | cs.LG cs.AI stat.ML | It has long been assumed that high dimensional continuous control problems
cannot be solved effectively by discretizing individual dimensions of the
action space due to the exponentially large number of bins over which policies
would have to be learned. In this paper, we draw inspiration from the recent
success of sequence-to-sequence models for structured prediction problems to
develop policies over discretized spaces. Central to this method is the
realization that complex functions over high dimensional spaces can be modeled
by neural networks that predict one dimension at a time. Specifically, we show
how Q-values and policies over continuous spaces can be modeled using a next
step prediction model over discretized dimensions. With this parameterization,
it is possible to both leverage the compositional structure of action spaces
during learning, as well as compute maxima over action spaces (approximately).
On a simple example task we demonstrate empirically that our method can perform
global search, which effectively gets around the local optimization issues that
plague DDPG. We apply the technique to off-policy (Q-learning) methods and show
that our method can achieve the state-of-the-art for off-policy methods on
several continuous control tasks.
| Luke Metz, Julian Ibarz, Navdeep Jaitly, James Davidson | null | 1705.05035 | null | null |
Robust Frequent Directions with Application in Online Learning | cs.LG | The frequent directions (FD) technique is a deterministic approach for online
sketching that has many applications in machine learning. The conventional FD
is a heuristic procedure that often outputs rank deficient matrices. To
overcome the rank deficiency problem, we propose a new sketching strategy
called robust frequent directions (RFD) by introducing a regularization term.
RFD can be derived from an optimization problem. It updates the sketch matrix
and the regularization term adaptively and jointly. RFD reduces the
approximation error of FD without increasing the computational cost. We also
apply RFD to online learning and propose an effective hyperparameter-free
online Newton algorithm. We derive a regret bound for our online Newton
algorithm based on RFD, which guarantees the robustness of the algorithm. The
experimental studies demonstrate that the proposed method outperforms
state-of-the-art second order online learning algorithms.
| Luo Luo, Cheng Chen, Zhihua Zhang, Wu-Jun Li, Tong Zhang | null | 1705.05067 | null | null |
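RFD itself is not specified in the abstract beyond adding an adaptive regularization term; the conventional Frequent Directions sketch it modifies is standard and shown below (a 2*ell-row buffer with a periodic SVD shrink).
```python
# Conventional Frequent Directions (FD) sketching, the baseline the abstract
# modifies; the robust variant RFD and its regularizer are not reproduced here.
import numpy as np

def frequent_directions(A, ell):
    """Return an (ell x d) sketch B such that B^T B approximates A^T A."""
    n, d = A.shape
    B = np.zeros((2 * ell, d))
    filled = 0

    def shrink(M):
        _, s, Vt = np.linalg.svd(M, full_matrices=False)
        delta = s[ell] ** 2 if len(s) > ell else 0.0
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
        out = np.zeros_like(M)
        out[:len(s)] = s[:, None] * Vt   # rows ell.. become (numerically) zero
        return out

    for i in range(n):
        if filled == 2 * ell:            # buffer full: compress back to ell rows
            B = shrink(B)
            filled = ell
        B[filled] = A[i]
        filled += 1
    return shrink(B)[:ell]

A = np.random.randn(3000, 40)
B = frequent_directions(A, ell=10)
err = np.linalg.norm(A.T @ A - B.T @ B, 2) / np.linalg.norm(A.T @ A, 2)
print("relative covariance error of the FD sketch:", round(err, 4))
```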
Active Learning for Graph Embedding | cs.LG stat.ML | Graph embedding provides an efficient solution for graph analysis by
converting the graph into a low-dimensional space which preserves the structure
information. In contrast to the graph structure data, the i.i.d. node embedding
can be processed efficiently in terms of both time and space. Current
semi-supervised graph embedding algorithms assume the labelled nodes are given,
which may not always be true in the real world. Since manually labelling all
training data is impractical, selecting which subset of the training data to
label so as to maximize the graph analysis task performance is of great

importance. This motivates our proposed active graph embedding (AGE) framework,
in which we design a general active learning query strategy for any
semi-supervised graph embedding algorithm. AGE selects the most informative
nodes as the training labelled nodes based on the graphical information (i.e.,
node centrality) as well as the learnt node embedding (i.e., node
classification uncertainty and node embedding representativeness). Different
query criteria are combined with the time-sensitive parameters which shift the
focus from graph based query criteria to embedding based criteria as the
learning progresses. Experiments have been conducted on three public data sets
and the results verified the effectiveness of each component of our query
strategy and the power of combining them using time-sensitive parameters. Our
code is available online at: https://github.com/vwz/AGE.
| Hongyun Cai, Vincent W. Zheng, Kevin Chen-Chuan Chang | null | 1705.05085 | null | null |
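A hedged sketch of an AGE-style query score (not the released implementation at the URL above): it combines classification entropy, embedding density, and node centrality, with time-sensitive weights that move from the graph-based criterion toward the embedding-based ones as training progresses. The weighting schedule, cluster count, and the random placeholder data are all assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def age_query_scores(probs, embeddings, adjacency, epoch, total_epochs, k=7):
    # 1) Uncertainty: entropy of the current class predictions.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # 2) Representativeness: closeness to the nearest embedding cluster centre.
    centres = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings).cluster_centers_
    dists = np.min(np.linalg.norm(embeddings[:, None, :] - centres[None], axis=2), axis=1)
    density = 1.0 / (1.0 + dists)
    # 3) Centrality: degree centrality from the adjacency matrix.
    centrality = adjacency.sum(axis=1)

    def to_percentile(x):              # put all criteria on a comparable scale
        return np.argsort(np.argsort(x)) / (len(x) - 1)

    # Time-sensitive weights: early on trust the graph, later trust the embedding.
    gamma = 1.0 - epoch / total_epochs
    alpha = beta = (1.0 - gamma) / 2.0
    return (alpha * to_percentile(entropy)
            + beta * to_percentile(density)
            + gamma * to_percentile(centrality))

rng = np.random.default_rng(0)
n = 100
probs = rng.dirichlet(np.ones(5), size=n)          # placeholder predictions
emb = rng.standard_normal((n, 16))                 # placeholder embeddings
adj = (rng.random((n, n)) < 0.05).astype(float)    # placeholder graph
scores = age_query_scores(probs, emb, adj, epoch=3, total_epochs=30)
print(np.argsort(-scores)[:5])                     # 5 most informative nodes to label
```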
Bandit Regret Scaling with the Effective Loss Range | cs.LG stat.ML | We study how the regret guarantees of nonstochastic multi-armed bandits can
be improved, if the effective range of the losses in each round is small (e.g.
the maximal difference between two losses in a given round). Despite a recent
impossibility result, we show how this can be made possible under certain mild
additional assumptions, such as availability of rough estimates of the losses,
or advance knowledge of the loss of a single, possibly unspecified arm. Along
the way, we develop a novel technique which might be of independent interest,
to convert any multi-armed bandit algorithm with regret depending on the loss
range, to an algorithm with regret depending only on the effective range, while
avoiding predictably bad arms altogether.
| Nicol\`o Cesa-Bianchi and Ohad Shamir | null | 1705.05091 | null | null |
Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination | cs.RO cs.AI cs.CV cs.LG cs.SY | This paper introduces an end-to-end fine-tuning method to improve hand-eye
coordination in modular deep visuo-motor policies (modular networks) where each
module is trained independently. Benefiting from weighted losses, the
fine-tuning method significantly improves the performance of the policies for a
robotic planar reaching task.
| Fangyi Zhang, J\"urgen Leitner, Michael Milford, Peter I. Corke | null | 1705.05116 | null | null |
Layerwise Systematic Scan: Deep Boltzmann Machines and Beyond | cs.LG cs.DS | For Markov chain Monte Carlo methods, one of the greatest discrepancies
between theory and system is the scan order - while most theoretical
development on the mixing time analysis deals with random updates, real-world
systems are implemented with systematic scans. We bridge this gap for models
that exhibit a bipartite structure, including, most notably, the
Restricted/Deep Boltzmann Machine. The de facto implementation for these models
scans variables in a layerwise fashion. We show that the Gibbs sampler with a
layerwise alternating scan order has its relaxation time (in terms of epochs)
no larger than that of a random-update Gibbs sampler (in terms of variable
updates). We also construct examples to show that this bound is asymptotically
tight. Through standard inequalities, our result also implies a comparison on
the mixing times.
| Heng Guo and Kaan Kara and Ce Zhang | null | 1705.05154 | null | null |
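A small numpy illustration of the layerwise (systematic) scan order discussed above, for a restricted Boltzmann machine: each sweep updates the whole hidden layer given the visible layer, then the whole visible layer given the hidden layer. The weights and data are random placeholders, not tied to the paper's experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layerwise_gibbs(W, b_v, b_h, v0, epochs, rng):
    v = v0.copy()
    for _ in range(epochs):
        # Update the entire hidden layer in one block...
        p_h = sigmoid(v @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # ...then the entire visible layer in one block.
        p_v = sigmoid(h @ W.T + b_v)
        v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h

rng = np.random.default_rng(0)
n_v, n_h = 20, 10
W = 0.1 * rng.standard_normal((n_v, n_h))
b_v, b_h = np.zeros(n_v), np.zeros(n_h)
v0 = (rng.random(n_v) < 0.5).astype(float)
v, h = layerwise_gibbs(W, b_v, b_h, v0, epochs=100, rng=rng)
print(v, h)
```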
Emotion in Reinforcement Learning Agents and Robots: A Survey | cs.LG cs.AI cs.HC cs.RO stat.ML | This article provides the first survey of computational models of emotion in
reinforcement learning (RL) agents. The survey focuses on agent/robot emotions,
and mostly ignores human user emotions. Emotions are recognized as functional
in decision-making by influencing motivation and action selection. Therefore,
computational emotion models are usually grounded in the agent's decision
making architecture, of which RL is an important subclass. Studying emotions in
RL-based agents is useful for three research fields. For machine learning (ML)
researchers, emotion models may improve learning efficiency. For the
interactive ML and human-robot interaction (HRI) community, emotions can
communicate state and enhance user investment. Lastly, it allows affective
modelling (AM) researchers to investigate their emotion theories in a
successful AI agent class. This survey provides background on emotion theory
and RL. It systematically addresses 1) from what underlying dimensions (e.g.,
homeostasis, appraisal) emotions can be derived and how these can be modelled
in RL-agents, 2) what types of emotions have been derived from these
dimensions, and 3) how these emotions may either influence the learning
efficiency of the agent or be useful as social signals. We also systematically
compare evaluation criteria, and draw connections to important RL sub-domains
like (intrinsic) motivation and model-based RL. In short, this survey provides
both a practical overview for engineers wanting to implement emotions in their
RL agents, and identifies challenges and directions for future emotion-RL
research.
| Thomas M. Moerland, Joost Broekens, Catholijn M. Jonker | 10.1007/s10994-017-5666-0 | 1705.05172 | null | null |
Modeling of the Latent Embedding of Music using Deep Neural Network | cs.SD cs.LG | While both the data volume and the heterogeneity of digital music content are
huge, it has become increasingly important and convenient to build
recommendation or search systems that facilitate surfacing this content to the
user or consumer community. Most recommendation models fall into two primary
categories: collaborative filtering based and content based approaches.
Variants of the collaborative filtering approach suffer from the common "cold
start" and "long tail" problems, where there is little user interaction data to
reveal user opinions or affinities about the content and results are distorted
towards popular content. Content-based approaches are sometimes limited by the
richness of the available content data, resulting in heavily biased and coarse
recommendations. In recent years, deep neural networks have enjoyed great
success in large-scale image and video recognition. In this paper, we propose
and experiment with a deep convolutional neural network that imitates how the
human brain processes hierarchical structures in auditory signals, such as
music and speech, at various
timescales. This approach can be used to discover the latent factor models of
the music based upon acoustic hyper-images that are extracted from the raw
audio waves of music. These latent embeddings can be used either as features to
feed to subsequent models, such as collaborative filtering, or to build
similarity metrics between songs, or to classify music based on the labels for
training such as genre, mood, sentiment, etc.
| Zhou Xing, Eddy Baik, Yan Jiao, Nilesh Kulkarni, Chris Li, Gautam
Muralidhar, Marzieh Parandehgheibi, Erik Reed, Abhishek Singhal, Fei Xiao and
Chris Pouliot | null | 1705.05229 | null | null |
Comparison of Maximum Likelihood and GAN-based training of Real NVPs | cs.LG | We train a generator by maximum likelihood and we also train the same
generator architecture by Wasserstein GAN. We then compare the generated
samples, exact log-probability densities and approximate Wasserstein distances.
We show that an independent critic trained to approximate Wasserstein distance
between the validation set and the generator distribution helps detect
overfitting. Finally, we use ideas from the one-shot learning literature to
develop a novel fast learning critic.
| Ivo Danihelka, Balaji Lakshminarayanan, Benigno Uria, Daan Wierstra,
Peter Dayan | null | 1705.05263 | null | null |
Extending Defensive Distillation | cs.LG cs.CR stat.ML | Machine learning is vulnerable to adversarial examples: inputs carefully
modified to force misclassification. Designing defenses against such inputs
remains largely an open problem. In this work, we revisit defensive
distillation---which is one of the mechanisms proposed to mitigate adversarial
examples---to address its limitations. We view our results not only as an
effective way of addressing some of the recently discovered attacks but also as
reinforcing the importance of improved training techniques.
| Nicolas Papernot and Patrick McDaniel | null | 1705.05264 | null | null |
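A minimal sketch of the basic defensive-distillation training recipe that the abstract above revisits (the temperatures, architecture, and random data here are placeholders, not the paper's setup): a teacher is trained with a temperature-T softmax, and a distilled network of the same architecture is then trained on the teacher's soft labels at the same temperature.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(in_dim=20, n_classes=5):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

def train(model, inputs, targets, T, steps=200, soft=False):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(steps):
        logits = model(inputs) / T                 # temperature-scaled logits
        if soft:                                   # targets are soft labels
            loss = F.kl_div(F.log_softmax(logits, dim=1), targets, reduction="batchmean")
        else:                                      # targets are hard labels
            loss = F.cross_entropy(logits, targets)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

T = 20.0
x = torch.randn(256, 20)
y = torch.randint(0, 5, (256,))

teacher = train(make_net(), x, y, T)                       # 1) teacher at temperature T
with torch.no_grad():
    soft_labels = F.softmax(teacher(x) / T, dim=1)         # 2) soft labels
student = train(make_net(), x, soft_labels, T, soft=True)  # 3) distilled model
# At test time the distilled model is used with temperature 1 (plain logits).
```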
Learning from Clinical Judgments: Semi-Markov-Modulated Marked Hawkes
Processes for Risk Prognosis | cs.LG | Critically ill patients in regular wards are vulnerable to unanticipated
adverse events which require prompt transfer to the intensive care unit (ICU).
To allow for accurate prognosis of deteriorating patients, we develop a novel
continuous-time probabilistic model for a monitored patient's temporal sequence
of physiological data. Our model captures "informatively sampled" patient
episodes: the clinicians' decisions on when to observe a hospitalized patient's
vital signs and lab tests over time are represented by a marked Hawkes process,
with intensity parameters that are modulated by the patient's latent clinical
states, and with observable physiological data (mark process) modeled as a
switching multi-task Gaussian process. In addition, our model captures
"informatively censored" patient episodes by representing the patient's latent
clinical states as an absorbing semi-Markov jump process. The model parameters
are learned from offline patient episodes in the electronic health records via
an EM-based algorithm. Experiments conducted on a cohort of patients admitted
to a major medical center over a 3-year period show that risk prognosis based
on our model significantly outperforms the currently deployed medical risk
scores and other baseline machine learning algorithms.
| Ahmed M. Alaa, Scott Hu, Mihaela van der Schaar | null | 1705.05267 | null | null |
Curiosity-driven Exploration by Self-supervised Prediction | cs.LG cs.AI cs.CV cs.RO stat.ML | In many real-world scenarios, rewards extrinsic to the agent are extremely
sparse, or absent altogether. In such cases, curiosity can serve as an
intrinsic reward signal to enable the agent to explore its environment and
learn skills that might be useful later in its life. We formulate curiosity as
the error in an agent's ability to predict the consequence of its own actions
in a visual feature space learned by a self-supervised inverse dynamics model.
Our formulation scales to high-dimensional continuous state spaces like images,
bypasses the difficulties of directly predicting pixels, and, critically,
ignores the aspects of the environment that cannot affect the agent. The
proposed approach is evaluated in two environments: VizDoom and Super Mario
Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where
curiosity allows for far fewer interactions with the environment to reach the
goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent
to explore more efficiently; and 3) generalization to unseen scenarios (e.g.
new levels of the same game) where the knowledge gained from earlier experience
helps the agent explore new places much faster than starting from scratch. Demo
video and code available at https://pathak22.github.io/noreward-rl/
| Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell | null | 1705.05363 | null | null |
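A compact, hedged sketch of an intrinsic-curiosity-style module as described above: an encoder phi, an inverse model that predicts the action from (phi(s_t), phi(s_{t+1})), and a forward model whose prediction error in feature space is used as the intrinsic reward. Layer sizes, the discrete action space, and the random transitions are placeholders rather than the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    def __init__(self, obs_dim, n_actions, feat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, feat_dim))
        self.inverse = nn.Linear(2 * feat_dim, n_actions)
        self.forward_model = nn.Linear(feat_dim + n_actions, feat_dim)
        self.n_actions = n_actions

    def forward(self, s, s_next, a):
        phi, phi_next = self.encoder(s), self.encoder(s_next)
        a_onehot = F.one_hot(a, self.n_actions).float()
        # Inverse dynamics: which action caused the transition?
        inv_logits = self.inverse(torch.cat([phi, phi_next], dim=1))
        inv_loss = F.cross_entropy(inv_logits, a)
        # Forward dynamics in feature space; its error is the curiosity signal.
        phi_pred = self.forward_model(torch.cat([phi, a_onehot], dim=1))
        fwd_err = 0.5 * ((phi_pred - phi_next.detach()) ** 2).sum(dim=1)
        return fwd_err.detach(), inv_loss + fwd_err.mean()

icm = ICM(obs_dim=8, n_actions=4)
s, s_next = torch.randn(16, 8), torch.randn(16, 8)
a = torch.randint(0, 4, (16,))
r_int, icm_loss = icm(s, s_next, a)   # add r_int to the (possibly sparse) extrinsic reward
```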
Maximum Selection and Ranking under Noisy Comparisons | cs.LG | We consider $(\epsilon,\delta)$-PAC maximum-selection and ranking for general
probabilistic models whose comparison probabilities satisfy strong stochastic
transitivity and stochastic triangle inequality. Modifying the popular knockout
tournament, we propose a maximum-selection algorithm that uses
$\mathcal{O}\left(\frac{n}{\epsilon^2}\log \frac{1}{\delta}\right)$
comparisons, a number tight up to a constant factor. We then derive a general
framework that improves the performance of many ranking algorithms, and combine
it with merge sort and binary search to obtain a ranking algorithm that uses
$\mathcal{O}\left(\frac{n\log n (\log \log n)^3}{\epsilon^2}\right)$
comparisons for any $\delta\ge\frac1n$, a number optimal up to a $(\log \log
n)^3$ factor.
| Moein Falahatgar and Alon Orlitsky and Venkatadheeraj Pichapati and
Ananda Theertha Suresh | null | 1705.05366 | null | null |
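A simplified knockout-tournament simulation for noisy maximum selection in the spirit of the abstract above. The constant per-duel comparison budget is a crude stand-in, not the carefully tuned schedule that yields the stated sample-complexity bounds, and the comparison model is a toy example.

```python
import numpy as np

def duel(i, j, p, rng, m):
    """Compare arms i and j m times; p[i, j] = Pr(i beats j)."""
    wins_i = rng.random(m) < p[i, j]
    return i if wins_i.sum() * 2 >= m else j

def knockout_max(arms, p, rng, m=201):
    arms = list(arms)
    rng.shuffle(arms)
    while len(arms) > 1:
        winners = []
        if len(arms) % 2 == 1:           # odd arm out gets a bye
            winners.append(arms.pop())
        for i, j in zip(arms[0::2], arms[1::2]):
            winners.append(duel(i, j, p, rng, m))
        arms = winners
    return arms[0]

# Toy model satisfying strong stochastic transitivity: arm k beats arm l
# (k > l) with probability 0.5 + 0.3 * (k - l) / n.
rng = np.random.default_rng(0)
n = 16
p = np.fromfunction(lambda i, j: 0.5 + 0.3 * (i - j) / n, (n, n))
best = knockout_max(range(n), p, rng)
print(best)    # usually n - 1, the arm with the highest win probabilities
```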
Probabilistically Safe Policy Transfer | cs.RO cs.AI cs.LG | Although learning-based methods have great potential for robotics, one
concern is that a robot that updates its parameters might cause large amounts
of damage before it learns the optimal policy. We formalize the idea of safe
learning in a probabilistic sense by defining an optimization problem: we
desire to maximize the expected return while keeping the expected damage below
a given safety limit. We study this optimization for the case of a robot
manipulator with safety-based torque limits. We would like to ensure that the
damage constraint is maintained at every step of the optimization and not just
at convergence. To achieve this aim, we introduce a novel method which predicts
how modifying the torque limit, as well as how updating the policy parameters,
might affect the robot's safety. We show through a number of experiments that
our approach allows the robot to improve its performance while ensuring that
the expected damage constraint is not violated during the learning process.
| David Held, Zoe McCarthy, Michael Zhang, Fred Shentu, Pieter Abbeel | null | 1705.05394 | null | null |
Learning Probabilistic Programs Using Backpropagation | cs.LG cs.AI stat.ML | Probabilistic modeling enables combining domain knowledge with learning from
data, thereby supporting learning from fewer training instances than purely
data-driven methods. However, learning probabilistic models is difficult and
has not achieved the level of performance of methods such as deep neural
networks on many tasks. In this paper, we attempt to address this issue by
presenting a method for learning the parameters of a probabilistic program
using backpropagation. Our approach opens the possibility to building deep
probabilistic programming models that are trained in a similar way to neural
networks.
| Avi Pfeffer | null | 1705.05396 | null | null |
Repeated Inverse Reinforcement Learning | cs.AI cs.LG | We introduce a novel repeated Inverse Reinforcement Learning problem: the
agent has to act on behalf of a human in a sequence of tasks and wishes to
minimize the number of tasks in which it surprises the human by acting suboptimally
with respect to how the human would have acted. Each time the human is
surprised, the agent is provided a demonstration of the desired behavior by the
human. We formalize this problem, including how the sequence of tasks is
chosen, in a few different ways and provide some foundational results.
| Kareem Amin, Nan Jiang, Satinder Singh | null | 1705.05427 | null | null |
Sparse Coding by Spiking Neural Networks: Convergence Theory and
Computational Results | cs.LG cs.NA cs.NE q-bio.NC | In a spiking neural network (SNN), individual neurons operate autonomously
and only communicate with other neurons sparingly and asynchronously via spike
signals. These characteristics render a massively parallel hardware
implementation of an SNN a potentially powerful computer, albeit a non-von Neumann
one. But can one guarantee that an SNN computer solves some important problems
reliably? In this paper, we formulate a mathematical model of one SNN that can
be configured for a sparse coding problem for feature extraction. With a
moderate but well-defined assumption, we prove that the SNN indeed solves
sparse coding. To the best of our knowledge, this is the first rigorous result
of this kind.
| Ping Tak Peter Tang, Tsung-Han Lin, Mike Davies | null | 1705.05475 | null | null |
Distributed Statistical Machine Learning in Adversarial Settings:
Byzantine Gradient Descent | cs.DC cs.CR cs.LG stat.ML | We consider the problem of distributed statistical machine learning in
adversarial settings, where some unknown and time-varying subset of working
machines may be compromised and behave arbitrarily to prevent an accurate model
from being learned. This setting captures the potential adversarial attacks
faced by Federated Learning -- a modern machine learning paradigm that is
proposed by Google researchers and has been intensively studied for ensuring
user privacy. Formally, we focus on a distributed system consisting of a
parameter server and $m$ working machines. Each working machine keeps $N/m$
data samples, where $N$ is the total number of samples. The goal is to
collectively learn the underlying true model parameter of dimension $d$.
In classical batch gradient descent methods, the gradients reported to the
server by the working machines are aggregated via simple averaging, which is
vulnerable to a single Byzantine failure. In this paper, we propose a Byzantine
gradient descent method based on the geometric median of means of the
gradients. We show that our method can tolerate $q \le (m-1)/2$ Byzantine
failures, and the parameter estimate converges in $O(\log N)$ rounds with an
estimation error of $\sqrt{d(2q+1)/N}$, hence approaching the optimal error
rate $\sqrt{d/N}$ in the centralized and failure-free setting. The total
computational complexity of our algorithm is of $O((Nd/m) \log N)$ at each
working machine and $O(md + kd \log^3 N)$ at the central server, and the total
communication cost is of $O(m d \log N)$. We further provide an application of
our general results to the linear regression problem.
A key challenge in the above problem is that Byzantine failures create
arbitrary and unspecified dependency among the iterations and the aggregated
gradients. We prove that the aggregated gradient converges uniformly to the
true gradient function.
| Yudong Chen, Lili Su, Jiaming Xu | null | 1705.05491 | null | null |
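A hedged sketch of the aggregation rule described above: partition the worker gradients into groups, average within each group, and aggregate the group means by their geometric median (computed here with a few Weiszfeld iterations). The number of groups, the iteration count, and the synthetic gradients are illustrative choices, not the paper's exact protocol.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    x = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), eps)  # avoid divide-by-zero
        w = 1.0 / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < eps:
            break
        x = x_new
    return x

def byzantine_robust_aggregate(gradients, n_groups):
    groups = np.array_split(gradients, n_groups)
    means = np.stack([g.mean(axis=0) for g in groups])
    return geometric_median(means)

rng = np.random.default_rng(0)
d, m, q = 10, 30, 5
true_grad = rng.standard_normal(d)
grads = true_grad + 0.1 * rng.standard_normal((m, d))    # honest workers
grads[:q] = 100.0 * rng.standard_normal((q, d))           # q Byzantine workers
agg = byzantine_robust_aggregate(grads, n_groups=2 * q + 1)
print(np.linalg.norm(agg - true_grad))   # small despite the corrupted reports
```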
Data clustering with edge domination in complex networks | cs.SI cs.LG physics.soc-ph | This paper presents a model for a dynamical system where particles dominate
edges in a complex network. The proposed dynamical system is then extended to
an application on the problem of community detection and data clustering. In
the case of the data clustering problem, 6 different techniques were simulated
on 10 different datasets in order to compare with the proposed technique. The
results show that the proposed algorithm performs well when the number of
clusters is known to the algorithm in advance.
| Paulo Roberto Urio, Zhao Liang | null | 1705.05494 | null | null |
The power of deeper networks for expressing natural functions | cs.LG cs.NE stat.ML | It is well-known that neural networks are universal approximators, but that
deeper networks tend in practice to be more powerful than shallower ones. We
shed light on this by proving that the total number of neurons $m$ required to
approximate natural classes of multivariate polynomials of $n$ variables grows
only linearly with $n$ for deep neural networks, but grows exponentially when
merely a single hidden layer is allowed. We also provide evidence that when the
number of hidden layers is increased from $1$ to $k$, the neuron requirement
grows exponentially not with $n$ but with $n^{1/k}$, suggesting that the
minimum number of layers required for practical expressibility grows only
logarithmically with $n$.
| David Rolnick (MIT), Max Tegmark (MIT) | null | 1705.05502 | null | null |
Learning Hard Alignments with Variational Inference | cs.AI cs.LG stat.ML | There has recently been significant interest in hard attention models for
tasks such as object recognition, visual captioning and speech recognition.
Hard attention can offer benefits over soft attention such as decreased
computational cost, but training hard attention models can be difficult because
of the discrete latent variables they introduce. Previous work used REINFORCE
and Q-learning to approach these issues, but those methods can provide
high-variance gradient estimates and be slow to train. In this paper, we tackle
the problem of learning hard attention for a sequential task using variational
inference methods, specifically the recently introduced VIMCO and NVIL.
Furthermore, we propose a novel baseline that adapts VIMCO to this setting. We
demonstrate our method on a phoneme recognition task in clean and noisy
environments and show that our method outperforms REINFORCE, with the
difference being greater for a more complicated task.
| Dieterich Lawson, Chung-Cheng Chiu, George Tucker, Colin Raffel, Kevin
Swersky, Navdeep Jaitly | null | 1705.05524 | null | null |
Metaheuristic Design of Feedforward Neural Networks: A Review of Two
Decades of Research | cs.NE cs.LG | Over the past two decades, the feedforward neural network (FNN) optimization
has been a key interest among the researchers and practitioners of multiple
disciplines. The FNN optimization is often viewed from the various
perspectives: the optimization of weights, network architecture, activation
nodes, learning parameters, learning environment, etc. Researchers adopted such
different viewpoints mainly to improve the FNN's generalization ability.
Gradient-descent algorithms such as backpropagation have been widely applied to
optimize FNNs, and their success is evident from the FNN's application to
numerous real-world problems. However, due to the limitations of gradient-based
optimization methods, metaheuristic algorithms, including evolutionary
algorithms, swarm intelligence, etc., are still being widely explored by
researchers aiming to obtain well-generalizing FNNs for a given problem. This
article attempts to summarize a broad spectrum of FNN optimization
methodologies, including conventional and metaheuristic approaches. It also
connects various research directions that emerged out of FNN optimization
practice, such as evolving neural networks (NN), cooperative coevolution NN,
complex-valued NN, deep learning, extreme learning machines, quantum NN, etc.
Additionally, it identifies interesting research challenges for future work to
cope with the present information processing era.
| Varun Kumar Ojha, Ajith Abraham, V\'aclav Sn\'a\v{s}el | 10.1016/j.engappai.2017.01.013 | 1705.05584 | null | null |
Learning Convex Regularizers for Optimal Bayesian Denoising | cs.LG stat.ML | We propose a data-driven algorithm for the maximum a posteriori (MAP)
estimation of stochastic processes from noisy observations. The primary
statistical properties of the sought signal are specified by the penalty
function (i.e., negative logarithm of the prior probability density function).
Our alternating direction method of multipliers (ADMM)-based approach
translates the estimation task into successive applications of the proximal
mapping of the penalty function. Capitalizing on this direct link, we define
the proximal operator as a parametric spline curve and optimize the spline
coefficients by minimizing the average reconstruction error for a given
training set. The key aspects of our learning method are that the associated
penalty function is constrained to be convex and the convergence of the ADMM
iterations is proven. As a result of these theoretical guarantees, adaptation
of the proposed framework to different levels of measurement noise is extremely
simple and does not require any retraining. We apply our method to estimation
of both sparse and non-sparse models of L\'{e}vy processes for which the
minimum mean square error (MMSE) estimators are available. We carry out a
single training session and perform comparisons at various signal-to-noise
ratio (SNR) values. Simulations illustrate that the performance of our
algorithm is practically identical to the one of the MMSE estimator
irrespective of the noise power.
| Ha Q. Nguyen and Emrah Bostan and Michael Unser | 10.1109/TSP.2017.2777407 | 1705.05591 | null | null |
Learning how to explain neural networks: PatternNet and
PatternAttribution | stat.ML cs.LG | DeConvNet, Guided BackProp, and LRP were invented to better understand deep
neural networks. We show that these methods do not produce the theoretically
correct explanation for a linear model. Yet they are used on multi-layer
networks with millions of parameters. This is a cause for concern since linear
models are simple neural networks. We argue that explanation methods for neural
nets should work reliably in the limit of simplicity, the linear models. Based
on our analysis of linear models we propose a generalization that yields two
explanation techniques (PatternNet and PatternAttribution) that are
theoretically sound for linear models and produce improved explanations for
deep networks.
| Pieter-Jan Kindermans, Kristof T. Sch\"utt, Maximilian Alber,
Klaus-Robert M\"uller, Dumitru Erhan, Been Kim, Sven D\"ahne | null | 1705.05598 | null | null |
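A small numpy illustration of the linear case analyzed above: for a linear model s = w^T x, the explanation direction used by pattern-based methods is a = Cov(x, s) / Var(s), which can differ markedly from the weight vector w itself. The toy data construction below is my own, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100000
signal_dir = np.array([1.0, 0.0])        # the signal lives along the first axis
distractor_dir = np.array([1.0, 1.0])    # a distractor correlated with both axes

s_true = rng.standard_normal(n)
d_noise = rng.standard_normal(n)
X = np.outer(s_true, signal_dir) + np.outer(d_noise, distractor_dir)

# A linear model that recovers s_true exactly must cancel the distractor:
w = np.array([1.0, -1.0])
s = X @ w                                 # equals s_true

# The "pattern": covariance between the input and the model output.
a = (X * s[:, None]).mean(axis=0) / s.var()
print("weights w:", w)                    # [ 1, -1] -> points away from the signal
print("pattern a:", np.round(a, 2))       # approx [1, 0] -> points at the signal
```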
Learning Edge Representations via Low-Rank Asymmetric Projections | cs.LG cs.SI stat.ML | We propose a new method for embedding graphs while preserving directed edge
information. Learning such continuous-space vector representations (or
embeddings) of nodes in a graph is an important first step for using network
information (from social networks, user-item graphs, knowledge bases, etc.) in
many machine learning tasks.
Unlike previous work, we (1) explicitly model an edge as a function of node
embeddings, and we (2) propose a novel objective, the "graph likelihood", which
contrasts information from sampled random walks with non-existent edges.
Individually, both of these contributions improve the learned representations,
especially when there are memory constraints on the total size of the
embeddings. When combined, our contributions enable us to significantly improve
the state-of-the-art by learning more concise representations that better
preserve the graph structure.
We evaluate our method on a variety of link-prediction tasks including social
networks, collaboration networks, and protein interactions, showing that our
proposed method learns representations with error reductions of up to 76% and
55% on directed and undirected graphs, respectively. In addition, we show that the
representations learned by our method are quite space efficient, producing
embeddings which have higher structure-preserving accuracy but are 10 times
smaller.
| Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou | 10.1145/3132847.3132959 | 1705.05615 | null | null |
Social Media-based Substance Use Prediction | cs.CL cs.LG cs.SI | In this paper, we demonstrate how the state-of-the-art machine learning and
text mining techniques can be used to build effective social media-based
substance use detection systems. Since a substance use ground truth is
difficult to obtain on a large scale, to maximize system performance, we
explore different feature learning methods to take advantage of a large amount
of unsupervised social media data. We also demonstrate the benefit of using
multi-view unsupervised feature learning to combine heterogeneous user
information such as Facebook "likes" and "status updates" to enhance system
performance. Based on our evaluation, our best models achieved 86% AUC for
predicting tobacco use, 81% for alcohol use and 84% for drug use, all of which
significantly outperformed existing methods. Our investigation has also
uncovered interesting relations between a user's social media behavior (e.g.,
word usage) and substance use.
| Tao Ding, Warren K. Bickel, Shimei Pan | null | 1705.05633 | null | null |
To tune or not to tune the number of trees in random forest? | stat.ML cs.LG | The number of trees T in the random forest (RF) algorithm for supervised
learning has to be set by the user. It is controversial whether T should simply
be set to the largest computationally manageable value or whether a smaller T
may in some cases be better. While the principle underlying bagging is that
"more trees are better", in practice the classification error rate sometimes
reaches a minimum before increasing again as the number of trees grows. The
goal of this paper is four-fold: (i) providing theoretical results showing that
the expected error rate may be a non-monotonous function of the number of trees
and explaining under which circumstances this happens; (ii) providing
theoretical results showing that such non-monotonous patterns cannot be
observed for other performance measures such as the Brier score and the
logarithmic loss (for classification) and the mean squared error (for
regression); (iii) illustrating the extent of the problem through an
application to a large number (n = 306) of datasets from the public database
OpenML; (iv) finally arguing in favor of setting T to a computationally
feasible large number, depending on convergence properties of the desired
performance measure.
| Philipp Probst, Anne-Laure Boulesteix | null | 1705.05654 | null | null |
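A hedged illustration (not taken from the paper) of inspecting how the out-of-bag error evolves as trees are added to a random forest, using scikit-learn's warm_start option to grow the same forest incrementally. The dataset and the grid of tree counts are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

rf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)
for n_trees in range(25, 301, 25):
    rf.set_params(n_estimators=n_trees)
    rf.fit(X, y)                       # only the newly added trees are fitted
    print(n_trees, 1.0 - rf.oob_score_)
# The printed OOB error curve typically flattens well before the largest value,
# which is the kind of convergence behaviour the paper suggests monitoring.
```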
Learning Image Relations with Contrast Association Networks | cs.CV cs.LG | Inferring the relations between two images is an important class of tasks in
computer vision. Examples of such tasks include computing optical flow and
stereo disparity. We treat relation inference as a machine learning
problem and tackle it with neural networks. A key to the problem is learning a
representation of relations. We propose a new neural network module, contrast
association unit (CAU), which explicitly models the relations between two sets
of input variables. Due to the non-negativity of the weights in CAU, we adopt a
multiplicative update algorithm for learning these weights. Experiments show
that neural networks with CAUs are more effective in learning five fundamental
image transformations than conventional neural networks.
| Yao Lu, Zhirong Yang, Juho Kannala, Samuel Kaski | null | 1705.05665 | null | null |
Optimal Warping Paths are unique for almost every Pair of Time Series | cs.LG cs.AI stat.ML | Update rules for learning in dynamic time warping spaces are based on optimal
warping paths between parameter and input time series. In general, optimal
warping paths are not unique resulting in adverse effects in theory and
practice. Under the assumption of squared error local costs, we show that no
two warping paths have identical costs almost everywhere in a measure-theoretic
sense. Two direct consequences of this result are: (i) optimal warping paths
are unique almost everywhere, and (ii) the set of all pairs of time series with
multiple equal-cost warping paths coincides with the union of exponentially
many zero sets of quadratic forms. One implication of the proposed results is
that typical distance-based cost functions such as the k-means objective are
differentiable almost everywhere and can be minimized by subgradient methods.
| Brijnesh J. Jain and David Schultz | null | 1705.05681 | null | null |
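A standard dynamic-programming implementation of DTW with squared-error local costs (a generic sketch, not the paper's code). The backtracking step breaks ties arbitrarily, which is precisely where multiple optimal warping paths can arise; the result above says such ties occur only on a measure-zero set of pairs.

```python
import numpy as np

def dtw(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2          # squared-error local cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack one optimal warping path.
    path, i, j = [(n - 1, m - 1)], n, m
    while (i, j) != (1, 1):
        steps = {(i - 1, j - 1): D[i - 1, j - 1],
                 (i - 1, j): D[i - 1, j],
                 (i, j - 1): D[i, j - 1]}
        i, j = min(steps, key=steps.get)               # arbitrary tie-breaking
        path.append((i - 1, j - 1))
    return D[n, m], path[::-1]

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
y = np.array([0.0, 2.0, 0.0])
cost, path = dtw(x, y)
print(cost, path)
```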
A Long Short-Term Memory Recurrent Neural Network Framework for Network
Traffic Matrix Prediction | cs.NI cs.LG | Network Traffic Matrix (TM) prediction is defined as the problem of
estimating future network traffic from the previous and achieved network
traffic data. It is widely used in network planning, resource management and
network security. Long Short-Term Memory (LSTM) is a specific recurrent neural
network (RNN) architecture that is well-suited to learn from experience to
classify, process and predict time series with time lags of unknown size. LSTMs
have been shown to model temporal sequences and their long-range dependencies
more accurately than conventional RNNs. In this paper, we propose an LSTM RNN
framework for predicting short- and long-term Traffic Matrices (TM) in large
networks. By validating our framework on real-world data from the GEANT
network, we show that our LSTM models converge quickly and give
state-of-the-art TM prediction performance for relatively small-sized models.
| Abdelhadi Azzouni, Guy Pujolle | null | 1705.0569 | null | null |
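A hedged sketch of an LSTM one-step-ahead traffic-matrix predictor in the spirit of the abstract above: each traffic matrix snapshot is flattened to a vector, a window of past snapshots is fed to an LSTM, and the final hidden state is mapped to the next snapshot. The network size, window length, node count and synthetic data are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TMPredictor(nn.Module):
    def __init__(self, n_flows, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_flows, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_flows)

    def forward(self, window):                  # window: (batch, steps, n_flows)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])            # next traffic matrix, flattened

n_nodes, steps = 23, 10                         # e.g. a 23-node topology, 10-step window
n_flows = n_nodes * n_nodes
series = torch.rand(500, n_flows)               # synthetic normalized TM sequence

X = torch.stack([series[t - steps:t] for t in range(steps, len(series))])
Y = series[steps:]
model = TMPredictor(n_flows)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(20):
    loss = nn.functional.mse_loss(model(X), Y)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```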
Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs | cs.AI cs.CL cs.LG | The availability of large scale event data with time stamps has given rise to
dynamically evolving knowledge graphs that contain temporal information for
each edge. Reasoning over time in such dynamic knowledge graphs is not yet well
understood. To this end, we present Know-Evolve, a novel deep evolutionary
knowledge network that learns non-linearly evolving entity representations over
time. The occurrence of a fact (edge) is modeled as a multivariate point
process whose intensity function is modulated by the score for that fact
computed based on the learned entity embeddings. We demonstrate significantly
improved performance over various relational learning approaches on two large
scale real-world datasets. Further, our method effectively predicts occurrence
or recurrence time of a fact which is novel compared to prior reasoning
approaches in multi-relational setting.
| Rakshit Trivedi, Hanjun Dai, Yichen Wang, Le Song | null | 1705.05742 | null | null |
Hierarchical Temporal Representation in Linear Reservoir Computing | cs.LG stat.ML | Recently, studies on deep Reservoir Computing (RC) highlighted the role of
layering in deep recurrent neural networks (RNNs). In this paper, the use of
linear recurrent units allows us to bring more evidence on the intrinsic
hierarchical temporal representation in deep RNNs through frequency analysis
applied to the state signals. The potentiality of our approach is assessed on
the class of Multiple Superimposed Oscillator tasks. Furthermore, our
investigation provides useful insights to open a discussion on the main aspects
that characterize the deep learning framework in the temporal domain.
| Claudio Gallicchio, Alessio Micheli, Luca Pedrelli | null | 1705.05782 | null | null |
Demystifying Relational Latent Representations | cs.AI cs.LG stat.ML | Latent features learned by deep learning approaches have proven to be a
powerful tool for machine learning. They serve as a data abstraction that makes
learning easier by capturing regularities in data explicitly. Their benefits
motivated their adaptation to the relational learning context. In our previous
work, we introduced an approach that learns relational latent features by means
of clustering instances and their relations. The major drawback of latent
representations is that they are often black-box and difficult to interpret.
This work addresses these issues and shows that (1) latent features created by
clustering are interpretable and capture interesting properties of data; (2)
they identify local regions of instances that match well with the label, which
partially explains their benefit; and (3) although the number of latent
features generated by this approach is large, often many of them are highly
redundant and can be removed without hurting performance much.
| Sebastijan Duman\v{c}i\'c and Hendrik Blockeel | null | 1705.05785 | null | null |
Real-Time Adaptive Image Compression | stat.ML cs.CV cs.LG | We present a machine learning-based approach to lossy image compression which
outperforms all existing codecs, while running in real-time.
Our algorithm typically produces files 2.5 times smaller than JPEG and JPEG
2000, 2 times smaller than WebP, and 1.7 times smaller than BPG on datasets of
generic images across all quality levels. At the same time, our codec is
designed to be lightweight and deployable: for example, it can encode or decode
the Kodak dataset in around 10ms per image on GPU.
Our architecture is an autoencoder featuring pyramidal analysis, an adaptive
coding module, and regularization of the expected codelength. We also
supplement our approach with adversarial training specialized towards use in a
compression setting: this enables us to produce visually pleasing
reconstructions for very low bitrates.
| Oren Rippel, Lubomir Bourdev | null | 1705.05823 | null | null |
DeepGO: Predicting protein functions from sequence and interactions
using a deep ontology-aware classifier | q-bio.GN cs.LG q-bio.QM | A large number of protein sequences are becoming available through the
application of novel high-throughput sequencing technologies. Experimental
functional characterization of these proteins is time-consuming and expensive,
and is often done rigorously only for a few selected model organisms.
Computational function prediction approaches have been suggested to fill this
gap. The functions of proteins are classified using the Gene Ontology (GO),
which contains over 40,000 classes. Additionally, proteins have multiple
functions, making function prediction a large-scale, multi-class, multi-label
problem.
We have developed a novel method to predict protein function from sequence.
We use deep learning to learn features from protein sequences as well as a
cross-species protein-protein interaction network. Our approach specifically
outputs information in the structure of the GO and utilizes the dependencies
between GO classes as background information to construct a deep learning
model. We evaluate our method using the standards established by the
Computational Assessment of Function Annotation (CAFA) and demonstrate a
significant improvement over baseline methods such as BLAST, with significant
improvement for predicting cellular locations.
| Maxat Kulmanov, Mohammed Asif Khan and Robert Hoehndorf | 10.1093/bioinformatics/btx624 | 1705.05919 | null | null |
Sub-sampled Cubic Regularization for Non-convex Optimization | cs.LG math.OC stat.ML | We consider the minimization of non-convex functions that typically arise in
machine learning. Specifically, we focus our attention on a variant of trust
region methods known as cubic regularization. This approach is particularly
attractive because it escapes strict saddle points and it provides stronger
convergence guarantees than first- and second-order as well as classical trust
region methods. However, it suffers from a high computational complexity that
makes it impractical for large-scale learning. Here, we propose a novel method
that uses sub-sampling to lower this computational cost. By the use of
concentration inequalities we provide a sampling scheme that gives sufficiently
accurate gradient and Hessian approximations to retain the strong global and
local convergence guarantees of cubically regularized methods. To the best of
our knowledge this is the first work that gives global convergence guarantees
for a sub-sampled variant of cubic regularization on non-convex functions.
Furthermore, we provide experimental results supporting our theory.
| Jonas Moritz Kohler and Aurelien Lucchi | null | 1705.05933 | null | null |
One Shot Joint Colocalization and Cosegmentation | cs.CV cs.LG | This paper presents a novel framework in which image cosegmentation and
colocalization are cast into a single optimization problem that integrates
information from low level appearance cues with that of high level localization
cues in a very weakly supervised manner. In contrast to multi-task learning
paradigm that learns similar tasks using a shared representation, the proposed
framework leverages two representations at different levels and simultaneously
discriminates between foreground and background at the bounding box and
superpixel level using discriminative clustering. We show empirically that
constraining the two problems at different scales enables the transfer of
semantic localization cues to improve cosegmentation output whereas local
appearance based segmentation cues help colocalization. The unified framework
outperforms strong baseline approaches, of learning the two problems
separately, by a large margin on four benchmark datasets. Furthermore, it
obtains competitive results compared to the state of the art for cosegmentation
on two benchmark datasets and second best result for colocalization on Pascal
VOC 2007.
| Abhishek Sharma | null | 1705.06 | null | null |
An Investigation of Newton-Sketch and Subsampled Newton Methods | math.OC cs.LG stat.ML | Sketching, a dimensionality reduction technique, has received much attention
in the statistics community. In this paper, we study sketching in the context
of Newton's method for solving finite-sum optimization problems in which the
number of variables and data points are both large. We study two forms of
sketching that perform dimensionality reduction in data space: Hessian
subsampling and randomized Hadamard transformations. Each has its own
advantages, and their relative tradeoffs have not been investigated in the
optimization literature. Our study focuses on practical versions of the two
methods in which the resulting linear systems of equations are solved
approximately, at every iteration, using an iterative solver. The advantages of
using the conjugate gradient method vs. a stochastic gradient iteration are
revealed through a set of numerical experiments, and a complexity analysis of
the Hessian subsampling method is presented.
| Albert S. Berahas, Raghu Bollapragada and Jorge Nocedal | null | 1705.06211 | null | null |
Practical Processing of Mobile Sensor Data for Continual Deep Learning
Predictions | cs.LG cs.HC | We present a practical approach for processing mobile sensor time series data
for continual deep learning predictions. The approach comprises data cleaning,
normalization, capping, time-based compression, and finally classification with
a recurrent neural network. We demonstrate the effectiveness of the approach in
a case study with 279 participants. On the basis of sparse sensor events, the
network continually predicts whether the participants would attend to a
notification within 10 minutes. Compared to a random baseline, the classifier
achieves a 40% performance increase (AUC of 0.702) on a withheld test set. This
approach makes it possible to forgo resource-intensive, domain-specific, error-prone
feature engineering, which may drastically increase the applicability of
machine learning to mobile phone sensor data.
| Kleomenis Katevas, Ilias Leontiadis, Martin Pielot, Joan Serr\`a | 10.1145/3089801.3089802 | 1705.06224 | null | null |
Learning to Represent Haptic Feedback for Partially-Observable Tasks | cs.RO cs.AI cs.LG | The sense of touch, being the earliest sensory system to develop in a human
body [1], plays a critical part in our daily interaction with the environment.
In order to successfully complete a task, many manipulation interactions
require incorporating haptic feedback. However, manually designing a feedback
mechanism can be extremely challenging. In this work, we consider manipulation
tasks that need to incorporate tactile sensor feedback in order to modify a
provided nominal plan. To incorporate partial observation, we present a new
framework that models the task as a partially observable Markov decision
process (POMDP) and learns an appropriate representation of haptic feedback
which can serve as the state for a POMDP model. The model, that is parametrized
by deep recurrent neural networks, utilizes variational Bayes methods to
optimize the approximate posterior. Finally, we build on deep Q-learning to be
able to select the optimal action in each state without access to a simulator.
We test our model on a PR2 robot for multiple tasks of turning a knob until it
clicks.
| Jaeyong Sung, J. Kenneth Salisbury, Ashutosh Saxena | null | 1705.06243 | null | null |
Supervised Machine Learning for Signals Having RRC Shaped Pulses | cs.IT cs.LG math.IT | Classification performances of the supervised machine learning techniques
such as support vector machines, neural networks and logistic regression are
compared for modulation recognition purposes. The simple and robust features
are used to distinguish continuous-phase FSK from QAM-PSK signals. Signals
having root-raised-cosine shaped pulses are simulated in extremely noisy
conditions having joint impurities of block fading, lack of symbol and sampling
synchronization, carrier offset, and additive white Gaussian noise. The
features are based on sample mean and sample variance of the imaginary part of
the product of two consecutive complex signal values.
| Mohammad Bari, Hussain Taher, Syed Saad Sherazi, Milos Doroslovacki | 10.1109/ACSSC.2016.7869124 | 1705.06299 | null | null |
Automatic Goal Generation for Reinforcement Learning Agents | cs.LG cs.AI cs.RO | Reinforcement learning is a powerful technique to train an agent to perform a
task. However, an agent that is trained using reinforcement learning is only
capable of achieving the single task that is specified via its reward function.
Such an approach does not scale well to settings in which an agent needs to
perform a diverse set of tasks, such as navigating to varying positions in a
room or moving objects to varying locations. Instead, we propose a method that
allows an agent to automatically discover the range of tasks that it is capable
of performing. We use a generator network to propose tasks for the agent to try
to achieve, specified as goal states. The generator network is optimized using
adversarial training to produce tasks that are always at the appropriate level
of difficulty for the agent. Our method thus automatically produces a
curriculum of tasks for the agent to learn. We show that, by using this
framework, an agent can efficiently and automatically learn to perform a wide
set of tasks without requiring any prior knowledge of its environment. Our
method can also learn to achieve tasks with sparse rewards, which traditionally
pose significant challenges.
| Carlos Florensa, David Held, Xinyang Geng, Pieter Abbeel | null | 1705.06366 | null | null |
Maximum Margin Principal Components | stat.ML cs.LG | Principal Component Analysis (PCA) is a very successful dimensionality
reduction technique, widely used in predictive modeling. A key factor in its
widespread use in this domain is the fact that the projection of a dataset onto
its first $K$ principal components minimizes the sum of squared errors between
the original data and the projected data over all possible rank $K$
projections. Thus, PCA provides optimal low-rank representations of data for
least-squares linear regression under standard modeling assumptions. On the
other hand, when the loss function for a prediction problem is not the
least-squares error, PCA is typically a heuristic choice of dimensionality
reduction -- in particular for classification problems under the zero-one loss.
In this paper we target classification problems by proposing a straightforward
alternative to PCA that aims to minimize the difference in margin distribution
between the original and the projected data. Extensive experiments show that
our simple approach typically outperforms PCA on any particular dataset, in
terms of classification error, though this difference is not always
statistically significant, and despite being a filter method is frequently
competitive with Partial Least Squares (PLS) and Lasso on a wide range of
datasets.
| Xianghui Luo and Robert J. Durrant | null | 1705.06371 | null | null |
Learning a bidirectional mapping between human whole-body motion and
natural language using deep recurrent neural networks | cs.LG cs.CL cs.RO stat.ML | Linking human whole-body motion and natural language is of great interest for
the generation of semantic representations of observed human behaviors as well
as for the generation of robot behaviors based on natural language input. While
there has been a large body of research in this area, most approaches that
exist today require a symbolic representation of motions (e.g. in the form of
motion primitives), which have to be defined a priori or require complex
segmentation algorithms. In contrast, recent advances in the field of neural
networks and especially deep learning have demonstrated that sub-symbolic
representations that can be learned end-to-end usually outperform more
traditional approaches, for applications such as machine translation. In this
paper we propose a generative model that learns a bidirectional mapping between
human whole-body motion and natural language using deep recurrent neural
networks (RNNs) and sequence-to-sequence learning. Our approach does not
require any segmentation or manual feature engineering and learns a distributed
representation, which is shared for all motions and descriptions. We evaluate
our approach on 2,846 human whole-body motions and 6,187 natural language
descriptions thereof from the KIT Motion-Language Dataset. Our results clearly
demonstrate the effectiveness of the proposed model: We show that our model
generates a wide variety of realistic motions only from descriptions thereof in
form of a single sentence. Conversely, our model is also capable of generating
correct and detailed natural language descriptions from human motions.
| Matthias Plappert, Christian Mandery, Tamim Asfour | 10.1016/j.robot.2018.07.006 | 1705.064 | null | null |
Sample-Efficient Algorithms for Recovering Structured Signals from
Magnitude-Only Measurements | stat.ML cs.LG | We consider the problem of recovering a signal $\mathbf{x}^* \in
\mathbf{R}^n$, from magnitude-only measurements $y_i =
|\left\langle\mathbf{a}_i,\mathbf{x}^*\right\rangle|$ for $i \in [m]$. Also called
phase retrieval, this is a fundamental challenge in bio- and astronomical
imaging and in speech processing. The problem above is ill-posed; additional
assumptions on the signal and/or the measurements are necessary. In this paper
we first study the case where the signal $\mathbf{x}^*$ is $s$-sparse. We
develop a novel algorithm that we call Compressive Phase Retrieval with
Alternating Minimization, or CoPRAM. Our algorithm is simple; it combines the
classical alternating minimization approach for phase retrieval with the CoSaMP
algorithm for sparse recovery. Despite its simplicity, we prove that CoPRAM
achieves a sample complexity of $O(s^2\log n)$ with Gaussian measurements
$\mathbf{a}_i$, matching the best known existing results; moreover, it
demonstrates linear convergence in theory and practice. Additionally, it
requires no extra tuning parameters other than signal sparsity $s$ and is
robust to noise. When the sorted coefficients of the sparse signal exhibit a
power law decay, we show that CoPRAM achieves a sample complexity of $O(s\log
n)$, which is close to the information-theoretic limit. We also consider the
case where the signal $\mathbf{x}^*$ arises from structured sparsity models. We
specifically examine the case of block-sparse signals with uniform block size
of $b$ and block sparsity $k=s/b$. For this problem, we design a recovery
algorithm Block CoPRAM that further reduces the sample complexity to $O(ks\log
n)$. For sufficiently large block lengths of $b=\Theta(s)$, this bound equates
to $O(s\log n)$. To our knowledge, this constitutes the first end-to-end
algorithm for phase retrieval where the Gaussian sample complexity has a
sub-quadratic dependence on the signal sparsity level.
| Gauri Jagatap and Chinmay Hegde | null | 1705.06412 | null | null |
Delving into adversarial attacks on deep policies | stat.ML cs.LG | Adversarial examples have been shown to exist for a variety of deep learning
architectures. Deep reinforcement learning has shown promising results on
training agent policies directly on raw inputs such as image pixels. In this
paper we present a novel study into adversarial attacks on deep reinforcement
learning policies. We compare the effectiveness of the attacks using adversarial
examples vs. random noise. We present a novel method for reducing the number of
times adversarial examples need to be injected for a successful attack, based
on the value function. We further explore how re-training on random noise and
FGSM perturbations affects the resilience against adversarial examples.
| Jernej Kos, Dawn Song | null | 1705.06452 | null | null |
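A minimal FGSM sketch of the kind of adversarial perturbation the abstract above compares against random noise, shown here on a placeholder classifier rather than a trained deep RL policy: perturb the input by epsilon times the sign of the gradient of the loss with respect to the input. The model, observation shape, and epsilon are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, target, epsilon):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def random_noise(x, epsilon):
    return (x + epsilon * torch.randn_like(x).sign()).clamp(0.0, 1.0)

# Placeholder "policy": a small classifier over flattened 84x84 observations.
model = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 64), nn.ReLU(), nn.Linear(64, 6))
obs = torch.rand(8, 1, 84, 84)
actions = model(obs).argmax(dim=1)              # the actions the policy would take

adv_obs = fgsm(model, obs, actions, epsilon=0.01)
noisy_obs = random_noise(obs, epsilon=0.01)
print((model(adv_obs).argmax(dim=1) != actions).float().mean())    # flip rate, FGSM
print((model(noisy_obs).argmax(dim=1) != actions).float().mean())  # flip rate, noise
```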
Evolving Ensemble Fuzzy Classifier | cs.LG cs.AI | The concept of ensemble learning offers a promising avenue in learning from
data streams under complex environments because it addresses the bias and
variance dilemma better than its single model counterpart and features a
reconfigurable structure, which is well suited to the given context. While
various extensions of ensemble learning for mining non-stationary data streams
can be found in the literature, most of them are crafted around a static base
classifier and revisit preceding samples in the sliding window for a
retraining step. This feature causes computationally prohibitive complexity and
is not flexible enough to cope with rapidly changing environments. Their
complexities are often demanding because it involves a large collection of
offline classifiers due to the absence of structural complexities reduction
mechanisms and lack of an online feature selection mechanism. A novel evolving
ensemble classifier, namely Parsimonious Ensemble pENsemble, is proposed in
this paper. pENsemble differs from existing architectures in the fact that it
is built upon an evolving classifier from data streams, termed Parsimonious
Classifier pClass. pENsemble is equipped with an ensemble pruning mechanism,
which estimates a localized generalization error of a base classifier. A
dynamic online feature selection scenario is integrated into the pENsemble.
This method allows for dynamic selection and deselection of input features on
the fly. pENsemble adopts a dynamic ensemble structure to output a final
classification decision where it features a novel drift detection scenario to
grow the ensemble structure. The efficacy of pENsemble has been
demonstrated through rigorous numerical studies with dynamic and evolving data
streams where it delivers the most encouraging performance in attaining a
tradeoff between accuracy and complexity.
| Mahardhika Pratama, Witold Pedrycz, Edwin Lughofer | 10.1109/TFUZZ.2018.2796099 | 1705.0646 | null | null |
Online learnability of Statistical Relational Learning in anomaly
detection | cs.LG cs.AI | Statistical Relational Learning (SRL) methods for anomaly detection are
introduced via a security-related application. Operational requirements for
online learning stability are outlined and compared to mathematical definitions
as applied to the learning process of a representative SRL method - Bayesian
Logic Programs (BLP). Since a formal proof of online stability appears to be
impossible, tentative common sense requirements are formulated and tested by
theoretical and experimental analysis of a simple and analytically tractable
BLP model. It is found that learning algorithms in initial stages of online
learning can lock on unstable false predictors that nevertheless comply with
our tentative stability requirements and thus masquerade as bona fide
solutions. The very expressiveness of SRL seems to cause significant stability
issues in settings with many variables and scarce data. We conclude that
reliable anomaly detection with SRL-methods requires monitoring by an
overarching framework that may involve a comprehensive context knowledge base
or human supervision.
| Magnus J\"andel, Pontus Svenson, Niclas Wadstr\"omer | null | 1705.06573 | null | null |
DeepXplore: Automated Whitebox Testing of Deep Learning Systems | cs.LG cs.CR cs.SE | Deep learning (DL) systems are increasingly deployed in safety- and
security-critical domains including self-driving cars and malware detection,
where the correctness and predictability of a system's behavior for corner case
inputs are of great importance. Existing DL testing depends heavily on manually
labeled data and therefore often fails to expose erroneous behaviors for rare
inputs.
We design, implement, and evaluate DeepXplore, the first whitebox framework
for systematically testing real-world DL systems. First, we introduce neuron
coverage for systematically measuring the parts of a DL system exercised by
test inputs. Next, we leverage multiple DL systems with similar functionality
as cross-referencing oracles to avoid manual checking. Finally, we demonstrate
how finding inputs for DL systems that both trigger many differential behaviors
and achieve high neuron coverage can be represented as a joint optimization
problem and solved efficiently using gradient-based search techniques.
DeepXplore efficiently finds thousands of incorrect corner case behaviors
(e.g., self-driving cars crashing into guard rails and malware masquerading as
benign software) in state-of-the-art DL models with thousands of neurons
trained on five popular datasets including ImageNet and Udacity self-driving
challenge data. For all tested DL models, on average, DeepXplore generated one
test input demonstrating incorrect behavior within one second while running
only on a commodity laptop. We further show that the test inputs generated by
DeepXplore can also be used to retrain the corresponding DL model to improve
the model's accuracy by up to 3%.
| Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana | 10.1145/3132747.3132785 | 1705.0664 | null | null |
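A minimal sketch of the neuron-coverage metric this abstract introduces: the fraction of hidden units activated above a threshold by at least one test input. The toy two-layer ReLU network, its random weights, and the zero threshold are assumptions for illustration; DeepXplore itself operates on real DL models and adds the differential-testing and joint-optimization steps.

# Toy illustration of neuron coverage: the fraction of hidden units that are
# "activated" (output above a threshold) by at least one input in a test set.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 32)), np.zeros(32)   # hidden layer 1
W2, b2 = rng.normal(size=(32, 16)), np.zeros(16)   # hidden layer 2

def hidden_activations(x):
    h1 = np.maximum(0.0, x @ W1 + b1)              # ReLU
    h2 = np.maximum(0.0, h1 @ W2 + b2)
    return np.concatenate([h1, h2], axis=-1)

def neuron_coverage(test_inputs, threshold=0.0):
    acts = hidden_activations(test_inputs)          # shape (n_inputs, n_neurons)
    covered = (acts > threshold).any(axis=0)        # covered by at least one input
    return covered.mean()

X_test = rng.normal(size=(100, 10))
print(f"neuron coverage: {neuron_coverage(X_test):.2%}")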
Limited-Memory Matrix Adaptation for Large Scale Black-box Optimization | cs.NE cs.LG math.OC | The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a popular
method to deal with nonconvex and/or stochastic optimization problems when the
gradient information is not available. Being based on the CMA-ES, the recently
proposed Matrix Adaptation Evolution Strategy (MA-ES) provides a rather
surprising result that the covariance matrix and all associated operations
(e.g., potentially unstable eigendecomposition) can be replaced in the CMA-ES
by an updated transformation matrix without any loss of performance. In order to
further simplify MA-ES and reduce its $\mathcal{O}\big(n^2\big)$ time and
storage complexity to $\mathcal{O}\big(n\log(n)\big)$, we present the
Limited-Memory Matrix Adaptation Evolution Strategy (LM-MA-ES) for efficient
zeroth-order large-scale optimization. The algorithm demonstrates
state-of-the-art performance on a set of established large-scale benchmarks. We
explore the algorithm on the problem of generating adversarial inputs for a
(non-smooth) random forest classifier, demonstrating a surprising vulnerability
of the classifier.
| Ilya Loshchilov, Tobias Glasmachers, Hans-Georg Beyer | null | 1705.06693 | null | null |
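For readers unfamiliar with the setting, the sketch below is a bare-bones (mu/mu, lambda) evolution strategy on a quadratic test function. It illustrates zeroth-order optimization only and deliberately omits the (limited-memory) matrix adaptation that LM-MA-ES contributes; the population size, step-size decay, and sphere objective are arbitrary choices.

# Minimal (mu/mu, lambda) evolution strategy; not the LM-MA-ES algorithm.
import numpy as np

def simple_es(f, x0, sigma=0.5, lam=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    mu = lam // 2
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        z = rng.normal(size=(lam, x.size))          # isotropic samples
        candidates = x + sigma * z
        order = np.argsort([f(c) for c in candidates])
        x = candidates[order[:mu]].mean(axis=0)     # recombine the best mu
        sigma *= 0.98                               # crude step-size decay
    return x

sphere = lambda v: float(np.dot(v, v))
print(simple_es(sphere, x0=np.ones(10)))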
Learning Spatiotemporal Features for Infrared Action Recognition with 3D
Convolutional Neural Networks | cs.CV cs.AI cs.LG cs.MM | Infrared (IR) imaging has the potential to enable more robust action
recognition systems compared to visible spectrum cameras due to lower
sensitivity to lighting conditions and appearance variability. While the action
recognition task on videos collected from visible spectrum imaging has received
much attention, action recognition in IR videos is significantly less explored.
Our objective is to exploit imaging data in this modality for the action
recognition task. In this work, we propose a novel two-stream 3D convolutional
neural network (CNN) architecture by introducing the discriminative code layer
and the corresponding discriminative code loss function. The proposed network
processes IR image and the IR-based optical flow field sequences. We pretrain
the 3D CNN model on the visible spectrum Sports-1M action dataset and finetune
it on the Infrared Action Recognition (InfAR) dataset. To the best of our knowledge,
this is the first application of the 3D CNN to action recognition in the IR
domain. We conduct an elaborate analysis of different fusion schemes (weighted
average, single and double-layer neural nets) applied to different 3D CNN
outputs. Experimental results demonstrate that our approach can achieve
state-of-the-art average precision (AP) performances on the InfAR dataset: (1)
the proposed two-stream 3D CNN achieves the best reported 77.5% AP, and (2) our
3D CNN model applied to the optical flow fields achieves the best reported
single stream 75.42% AP.
| Zhuolin Jiang, Viktor Rozgic, Sancar Adali | null | 1705.06709 | null | null |
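A minimal sketch of weighted-average late fusion, one of the fusion schemes this abstract compares: per-class scores from the IR stream and the optical-flow stream are combined with fixed weights before taking the argmax. The score arrays and the 0.6/0.4 weights are toy assumptions, not values from the paper.

# Weighted-average late fusion of class scores from two streams.
import numpy as np

def fuse_scores(ir_scores, flow_scores, w_ir=0.6, w_flow=0.4):
    fused = w_ir * ir_scores + w_flow * flow_scores
    return fused.argmax(axis=-1)                    # predicted action per clip

ir_scores = np.array([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])     # 2 clips, 3 classes
flow_scores = np.array([[0.2, 0.2, 0.6], [0.6, 0.3, 0.1]])
print(fuse_scores(ir_scores, flow_scores))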
Discovering the Graph Structure in the Clustering Results | stat.ML cs.LG | In a standard cluster analysis, such as k-means, in addition to cluster
locations and the distances between them, it is important to know whether the
clusters are connected or well separated from each other. The main focus of
this paper is discovering the relations between the resulting clusters. We
propose a new method based on pairwise overlapping k-means clustering that, in
addition to the cluster means, provides the graph structure of their relations.
The proposed method has a set of parameters that can be tuned in order to
control the sensitivity of the model and the desired relative size of the
pairwise overlapping interval between the means of two adjacent clusters,
i.e., the level of overlap. We present the exact formula for calculating that
parameter. The empirical study presented in the paper demonstrates that our
approach works well not only on toy data but also complements standard
clustering results with a reasonable graph structure on real datasets, such as
financial indices and restaurants.
| Evgeny Bauman, Konstantin Bauman | null | 1705.06753 | null | null |
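A simplified sketch of the idea of attaching a graph structure to k-means output: two clusters are connected when a noticeable fraction of their points projects into a band around the midpoint between the two centroids. The projection-based overlap test and the band/min_frac parameters are simplifying assumptions, not the exact criterion derived in the paper.

# Sketch: build a relation graph over k-means clusters via a crude overlap test.
import numpy as np
from sklearn.cluster import KMeans

def cluster_graph(X, k=4, band=0.1, min_frac=0.05, seed=0):
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    centers, labels = km.cluster_centers_, km.labels_
    edges = []
    for i in range(k):
        for j in range(i + 1, k):
            d = centers[j] - centers[i]
            pts = X[(labels == i) | (labels == j)]
            # project points onto the segment between the two centers (0..1)
            t = (pts - centers[i]) @ d / (d @ d)
            in_band = np.mean(np.abs(t - 0.5) < band)
            if in_band >= min_frac:
                edges.append((i, j))                # clusters i and j are related
    return edges

X = np.random.default_rng(0).normal(size=(300, 2))
print(cluster_graph(X))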
Feature Control as Intrinsic Motivation for Hierarchical Reinforcement
Learning | cs.LG cs.AI | The problem of sparse rewards is one of the hardest challenges in
contemporary reinforcement learning. Hierarchical reinforcement learning (HRL)
tackles this problem by using a set of temporally-extended actions, or options,
each of which has its own subgoal. These subgoals are normally handcrafted for
specific tasks. Here, though, we introduce a generic class of subgoals with
broad applicability in the visual domain. Underlying our approach (in common
with work using "auxiliary tasks") is the hypothesis that the ability to
control aspects of the environment is an inherently useful skill to have. We
incorporate such subgoals in an end-to-end hierarchical reinforcement learning
system and test two variants of our algorithm on a number of games from the
Atari suite. We highlight the advantage of our approach in one of the hardest
games -- Montezuma's revenge -- for which the ability to handle sparse rewards
is key. Our agent learns several times faster than the current state-of-the-art
HRL agent in this game, reaching a similar level of performance. UPDATE
22/11/17: We found that a standard A3C agent with a simple shaped reward, i.e.
extrinsic reward + feature control intrinsic reward, has comparable performance
to our agent in Montezuma's Revenge. In light of the new experiments performed,
the advantage of our HRL approach can be attributed more to its ability to
learn useful features from intrinsic rewards rather than its ability to explore
and reuse abstracted skills with hierarchical components. This has led us to a
new conclusion about the result.
| Nat Dilokthanakul, Christos Kaplanis, Nick Pawlowski, Murray Shanahan | 10.1109/TNNLS.2019.2891792 | 1705.06769 | null | null |
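A minimal sketch of the shaped reward mentioned in the update note: the extrinsic reward plus an intrinsic bonus for changing a chosen feature of the observation between consecutive frames. The patch-mean feature and the 0.05 scaling are illustrative assumptions, not the feature-control subgoals used by the full HRL agent.

# Sketch of extrinsic reward + a "feature control" intrinsic bonus.
import numpy as np

def feature_control_bonus(prev_obs, obs, beta=0.05):
    # reward the agent for changing the selected feature (here: mean intensity
    # of a fixed image patch) between consecutive frames
    feat_prev = prev_obs[20:40, 20:40].mean()
    feat_now = obs[20:40, 20:40].mean()
    return beta * abs(feat_now - feat_prev)

def shaped_reward(extrinsic, prev_obs, obs):
    return extrinsic + feature_control_bonus(prev_obs, obs)

prev_frame = np.zeros((84, 84)); frame = np.ones((84, 84))
print(shaped_reward(0.0, prev_frame, frame))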
Pixel Deconvolutional Networks | cs.LG cs.CV cs.NE stat.ML | Deconvolutional layers have been widely used in a variety of deep models for
up-sampling, including encoder-decoder networks for semantic segmentation and
deep generative models for unsupervised learning. One of the key limitations of
deconvolutional operations is that they result in the so-called checkerboard
problem. This is caused by the fact that no direct relationship exists among
adjacent pixels on the output feature map. To address this problem, we propose
the pixel deconvolutional layer (PixelDCL) to establish direct relationships
among adjacent pixels on the up-sampled feature map. Our method is based on a
fresh interpretation of the regular deconvolution operation. The resulting
PixelDCL can be used to replace any deconvolutional layer in a plug-and-play
manner without compromising the fully trainable capabilities of original
models. The proposed PixelDCL may result in a slight decrease in efficiency, but
this can be overcome by an implementation trick. Experimental results on
semantic segmentation demonstrate that PixelDCL can consider spatial features
such as edges and shapes and yield more accurate segmentation outputs than
deconvolutional layers. When used in image generation tasks, our PixelDCL can
largely overcome the checkerboard problem suffered by regular deconvolution
operations.
| Hongyang Gao and Hao Yuan and Zhengyang Wang and Shuiwang Ji | null | 1705.0682 | null | null |
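A simplified sketch of the pixel-deconvolution idea: the four sub-maps of a 2x up-sampled feature map are generated sequentially, so later sub-maps can depend on earlier ones and adjacent output pixels acquire a direct relationship. The 1x1 linear maps and random weights below stand in for learned convolutions and are not the PixelDCL layer itself.

# Sequentially generate and interleave four sub-maps for 2x up-sampling.
import numpy as np

rng = np.random.default_rng(0)
C = 8                                              # input channels
W_a = rng.normal(size=(C, C)) * 0.1                # sub-map A from input
W_b = rng.normal(size=(2 * C, C)) * 0.1            # sub-map B from input + A
W_c = rng.normal(size=(3 * C, C)) * 0.1            # sub-map C from input + A + B
W_d = rng.normal(size=(4 * C, C)) * 0.1            # sub-map D from all previous

def pixel_deconv_2x(x):                            # x: (H, W, C)
    a = x @ W_a
    b = np.concatenate([x, a], axis=-1) @ W_b
    c = np.concatenate([x, a, b], axis=-1) @ W_c
    d = np.concatenate([x, a, b, c], axis=-1) @ W_d
    H, W, _ = x.shape
    out = np.zeros((2 * H, 2 * W, C))
    out[0::2, 0::2], out[0::2, 1::2] = a, b        # interleave the four sub-maps
    out[1::2, 0::2], out[1::2, 1::2] = c, d
    return out

print(pixel_deconv_2x(rng.normal(size=(4, 4, C))).shape)   # (8, 8, 8)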
Spatial Variational Auto-Encoding via Matrix-Variate Normal
Distributions | cs.LG cs.CV cs.NE stat.ML | The key idea of variational auto-encoders (VAEs) resembles that of
traditional auto-encoder models in which spatial information is supposed to be
explicitly encoded in the latent space. However, the latent variables in VAEs
are vectors, which can be interpreted as multiple feature maps of size 1x1.
Such representations can only convey spatial information implicitly when
coupled with powerful decoders. In this work, we propose spatial VAEs that use
feature maps of larger size as latent variables to explicitly capture spatial
information. This is achieved by allowing the latent variables to be sampled
from matrix-variate normal (MVN) distributions whose parameters are computed
from the encoder network. To increase dependencies among locations on latent
feature maps and reduce the number of parameters, we further propose spatial
VAEs via low-rank MVN distributions. Experimental results show that the
proposed spatial VAEs outperform original VAEs in capturing rich structural and
spatial information.
| Zhengyang Wang, Hao Yuan, Shuiwang Ji | null | 1705.06821 | null | null |
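A minimal sketch of sampling a latent feature map from a matrix-variate normal distribution with low-rank covariance factors, the mechanism this abstract builds on: with row covariance U = A A^T and column covariance V = B B^T, draw a standard normal core X and form Z = M + A X B^T. The sizes, rank, and random parameters are illustrative; in a spatial VAE they would be produced by the encoder network.

# Sample a latent feature map Z ~ MN(M, A A^T, B B^T) via Z = M + A X B^T.
import numpy as np

rng = np.random.default_rng(0)
h, w, r = 8, 8, 2                                   # latent map size and rank
M = rng.normal(size=(h, w))                         # mean feature map
A = rng.normal(size=(h, r)) * 0.1                   # row-covariance factor
B = rng.normal(size=(w, r)) * 0.1                   # column-covariance factor

def sample_latent_map():
    X = rng.normal(size=(r, r))                     # standard normal core
    return M + A @ X @ B.T                          # low-rank matrix-variate sample

print(sample_latent_map().shape)                    # (8, 8)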