title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Optimal Generalized Decision Trees via Integer Programming | cs.LG math.OC stat.ML | Decision trees have been a very popular class of predictive models for
decades due to their interpretability and good performance on categorical
features. However, they are not always robust and tend to overfit the data.
Additionally, if allowed to grow large, they lose interpretability. In this
paper, we present a mixed integer programming formulation to construct optimal
decision trees of a prespecified size. We take the special structure of
categorical features into account and allow combinatorial decisions (based on
subsets of values of features) at each node. Our approach can also handle
numerical features via thresholding. We show that very good accuracy can be
achieved with small trees using moderately-sized training sets. The
optimization problems we solve are tractable with modern solvers.
| Oktay Gunluk, Jayant Kalagnanam, Minhan Li, Matt Menickelly, Katya
Scheinberg | null | 1612.03225 | null | null |
Active Learning for Speech Recognition: the Power of Gradients | cs.CL cs.LG stat.ML | In training speech recognition systems, labeling audio clips can be
expensive, and not all data is equally valuable. Active learning aims to label
only the most informative samples to reduce cost. For speech recognition,
confidence scores and other likelihood-based active learning methods have been
shown to be effective. Gradient-based active learning methods, however, are
still not well-understood. This work investigates the Expected Gradient Length
(EGL) approach in active learning for end-to-end speech recognition. We justify
EGL from a variance reduction perspective, and observe that EGL's measure of
informativeness picks novel samples uncorrelated with confidence scores.
Experimentally, we show that EGL can reduce word errors by 11\%, or
alternatively, reduce the number of samples to label by 50\%, when compared to
random sampling.
| Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh,
Adam Coates | null | 1612.03226 | null | null |
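The EGL criterion above lends itself to a compact illustration. Below is a minimal NumPy sketch of probability-weighted gradient-norm scoring for pool-based active learning, using a multinomial logistic-regression model in place of an end-to-end speech model; the model choice, names, and shapes are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of Expected Gradient Length (EGL) scoring for active learning.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def egl_score(x, W):
    """EGL of one unlabeled sample x under weights W (shape [n_classes, dim]):
    EGL(x) = sum_y p(y|x) * || grad_W log p(y|x) ||."""
    p = softmax(W @ x)                      # predictive distribution, [n_classes]
    score = 0.0
    for y, p_y in enumerate(p):
        onehot = np.zeros_like(p); onehot[y] = 1.0
        grad = np.outer(onehot - p, x)      # gradient of log-likelihood wrt W
        score += p_y * np.linalg.norm(grad)
    return score

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))                 # toy 3-class model
pool = rng.normal(size=(100, 5))            # unlabeled pool
scores = np.array([egl_score(x, W) for x in pool])
query = scores.argsort()[-10:]              # label the 10 highest-EGL samples
```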
Generalized Deep Image to Image Regression | cs.CV cs.LG cs.NE | We present a Deep Convolutional Neural Network architecture which serves as a
generic image-to-image regressor that can be trained end-to-end without any
further machinery. Our proposed architecture: the Recursively Branched
Deconvolutional Network (RBDN) develops a cheap multi-context image
representation very early on using an efficient recursive branching scheme with
extensive parameter sharing and learnable upsampling. This multi-context
representation is subjected to a highly non-linear locality preserving
transformation by the remainder of our network, which comprises a series of
convolutions/deconvolutions without any spatial downsampling. The RBDN
architecture is fully convolutional and can handle variable sized images during
inference. We provide qualitative/quantitative results on $3$ diverse tasks:
relighting, denoising, and colorization, and show that our proposed RBDN
architecture obtains comparable results to the state-of-the-art on each of
these tasks when used off-the-shelf without any post processing or
task-specific architectural modifications.
| Venkataraman Santhanam, Vlad I. Morariu, Larry S. Davis | null | 1612.03268 | null | null |
Gradient Coding | stat.ML cs.DC cs.IT cs.LG math.IT stat.CO | We propose a novel coding theoretic framework for mitigating stragglers in
distributed learning. We show how carefully replicating data blocks and coding
across gradients can provide tolerance to failures and stragglers for
Synchronous Gradient Descent. We implement our schemes in Python (using MPI) to
run on Amazon EC2, and show how we compare against baseline approaches in
running time and generalization error.
| Rashish Tandon, Qi Lei, Alexandros G. Dimakis and Nikos Karampatziakis | null | 1612.03301 | null | null |
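As a toy illustration of the abstract above, the sketch below codes the gradients of three data partitions across three workers so that any two workers suffice to recover the full gradient (one straggler tolerated). The coefficients match the kind of replication scheme the paper uses as an introductory example, and the decoding weights are worked out by hand here; treat both as one valid choice rather than the paper's general construction.

```python
# Toy gradient-coding sketch: 3 workers, each holding 2 of 3 data partitions,
# tolerating 1 straggler.
import numpy as np

rng = np.random.default_rng(1)
g1, g2, g3 = (rng.normal(size=4) for _ in range(3))  # per-partition gradients
full = g1 + g2 + g3

# Each worker sends one coded combination of its two local partition gradients.
f = {1: 0.5 * g1 + g2, 2: g2 - g3, 3: 0.5 * g1 + g3}

# Decoding weights for every pair of surviving (non-straggler) workers.
decode = {(1, 2): (2, -1), (1, 3): (1, 1), (2, 3): (1, 2)}

for (i, j), (a, b) in decode.items():
    assert np.allclose(a * f[i] + b * f[j], full)
print("any 2 of 3 workers suffice to recover the full gradient")
```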
Knowledge Elicitation via Sequential Probabilistic Inference for
High-Dimensional Prediction | cs.AI cs.HC cs.LG stat.ML | Prediction in a small-sized sample with a large number of covariates, the
"small n, large p" problem, is challenging. This setting is encountered in
multiple applications, such as precision medicine, where obtaining additional
samples can be extremely costly or even impossible, and extensive research
effort has recently been dedicated to finding principled solutions for accurate
prediction. However, a valuable source of additional information, domain
experts, has not yet been efficiently exploited. We formulate knowledge
elicitation generally as a probabilistic inference process, where expert
knowledge is sequentially queried to improve predictions. In the specific case
of sparse linear regression, where we assume the expert has knowledge about the
values of the regression coefficients or about the relevance of the features,
we propose an algorithm and computational approximation for fast and efficient
interaction, which sequentially identifies the most informative features on
which to query expert knowledge. Evaluations of our method in experiments with
simulated and real users show improved prediction accuracy already with a small
effort from the expert.
| Pedram Daee, Tomi Peltola, Marta Soare, Samuel Kaski | 10.1007/s10994-017-5651-7 | 1612.03328 | null | null |
An Empirical Study of ADMM for Nonconvex Problems | math.OC cs.LG | The alternating direction method of multipliers (ADMM) is a common
optimization tool for solving constrained and non-differentiable problems. We
provide an empirical study of the practical performance of ADMM on several
nonconvex applications, including l0 regularized linear regression, l0
regularized image denoising, phase retrieval, and eigenvector computation. Our
experiments suggest that ADMM performs well on a broad class of non-convex
problems. Moreover, recently proposed adaptive ADMM methods, which
automatically tune penalty parameters as the method runs, can improve algorithm
efficiency and solution quality compared to ADMM with a non-tuned penalty.
| Zheng Xu, Soham De, Mario Figueiredo, Christoph Studer, Tom Goldstein | null | 1612.03349 | null | null |
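One of the nonconvex applications named above, l0-regularized linear regression, is simple enough to sketch. The minimal ADMM loop below is illustrative only: the penalty rho is hand-tuned here, whereas the adaptive ADMM variants the abstract mentions tune it automatically as the method runs.

```python
# Minimal ADMM sketch for l0-regularized least squares,
#   min_x (1/2)||Ax - b||^2 + lam * ||x||_0,
# via the splitting x = z; the z-update is a hard threshold, which is what
# makes the problem nonconvex.
import numpy as np

def admm_l0(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    lhs = AtA + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(lhs, Atb + rho * (z - u))   # ridge-like x-update
        v = x + u
        z = np.where(v**2 > 2 * lam / rho, v, 0.0)      # hard threshold
        u = u + x - z                                   # dual update
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.nonzero(admm_l0(A, b))[0])   # ideally recovers indices {0, 1, 2}
```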
Non-negative Factorization of the Occurrence Tensor from Financial
Contracts | cs.CE cs.LG stat.ML | We propose an algorithm for the non-negative factorization of an occurrence
tensor built from heterogeneous networks. We use the l0 norm to model sparse errors
over discrete values (occurrences), and use decomposed factors to model the
embedded groups of nodes. An efficient splitting method is developed to
optimize the nonconvex and nonsmooth objective. We study both synthetic
problems and a new dataset built from financial documents, resMBS.
| Zheng Xu, Furong Huang, Louiqa Raschid, Tom Goldstein | null | 1612.03350 | null | null |
Technical Report: A Generalized Matching Pursuit Approach for
Graph-Structured Sparsity | cs.LG cs.AI stat.ML | Sparsity-constrained optimization is an important and challenging problem
that has wide applicability in data mining, machine learning, and statistics.
In this paper, we focus on sparsity-constrained optimization in cases where the
cost function is a general nonlinear function and, in particular, the sparsity
constraint is defined by a graph-structured sparsity model. Existing methods
explore this problem in the context of sparse estimation in linear models. To
the best of our knowledge, this is the first work to present an efficient
approximation algorithm, namely, Graph-structured Matching Pursuit (Graph-Mp),
to optimize a general nonlinear function subject to graph-structured
constraints. We prove that our algorithm enjoys the strong guarantees analogous
to those designed for linear models in terms of convergence rate and
approximation accuracy. As a case study, we specialize Graph-Mp to optimize a
number of well-known graph scan statistic models for the connected subgraph
detection task, and empirical evidence demonstrates that our general algorithm outperforms state-of-the-art methods that are designed specifically for the task of connected subgraph detection.
| Feng Chen, Baojian Zhou | null | 1612.03364 | null | null |
Non-Redundant Spectral Dimensionality Reduction | cs.LG stat.ML | Spectral dimensionality reduction algorithms are widely used in numerous
domains, including for recognition, segmentation, tracking and visualization.
However, despite their popularity, these algorithms suffer from a major
limitation known as the "repeated Eigen-directions" phenomenon. That is, many
of the embedding coordinates they produce typically capture the same direction
along the data manifold. This leads to redundant and inefficient
representations that do not reveal the true intrinsic dimensionality of the
data. In this paper, we propose a general method for avoiding redundancy in
spectral algorithms. Our approach relies on replacing the orthogonality
constraints underlying those methods by unpredictability constraints.
Specifically, we require that each embedding coordinate be unpredictable (in
the statistical sense) from all previous ones. We prove that these constraints
necessarily prevent redundancy, and provide a simple technique to incorporate
them into existing methods. As we illustrate on challenging high-dimensional
scenarios, our approach produces significantly more informative and compact
representations, which improve visualization and classification tasks.
| Yochai Blau and Tomer Michaeli | 10.1007/978-3-319-71249-9_16 | 1612.03412 | null | null |
Lock-Free Optimization for Non-Convex Problems | stat.ML cs.LG | Stochastic gradient descent (SGD) and its variants have attracted much attention in machine learning due to their efficiency and effectiveness for optimization. To handle large-scale problems, researchers have recently proposed several lock-free-strategy-based parallel SGD (LF-PSGD) methods for multi-core systems. However, existing works have only proved the convergence of these LF-PSGD methods for convex problems. To the best of our knowledge, no work has proved the convergence of the LF-PSGD methods for non-convex problems. In this paper, we provide theoretical proofs of the convergence of two representative LF-PSGD methods, Hogwild! and AsySVRG, for non-convex problems.
Empirical results also show that both Hogwild! and AsySVRG are convergent on
non-convex problems, which successfully verifies our theoretical results.
| Shen-Yi Zhao, Gong-Duo Zhang, Wu-Jun Li | null | 1612.03441 | null | null |
Noisy subspace clustering via matching pursuits | cs.LG cs.IT math.IT stat.ML | Sparsity-based subspace clustering algorithms have attracted significant
attention thanks to their excellent performance in practical applications. A
prominent example is the sparse subspace clustering (SSC) algorithm by
Elhamifar and Vidal, which performs spectral clustering based on an adjacency
matrix obtained by sparsely representing each data point in terms of all the
other data points via the Lasso. When the number of data points is large or the
dimension of the ambient space is high, the computational complexity of SSC
quickly becomes prohibitive. Dyer et al. observed that SSC-OMP, obtained by replacing the Lasso with the greedy orthogonal matching pursuit (OMP) algorithm, results in significantly lower computational complexity, while often yielding
comparable performance. The central goal of this paper is an analytical
performance characterization of SSC-OMP for noisy data. Moreover, we introduce
and analyze the SSC-MP algorithm, which employs matching pursuit (MP) in lieu
of OMP. Both SSC-OMP and SSC-MP are proven to succeed even when the subspaces
intersect and when the data points are contaminated by severe noise. The
clustering conditions we obtain for SSC-OMP and SSC-MP are similar to those for
SSC and for the thresholding-based subspace clustering (TSC) algorithm due to
Heckel and B\"olcskei. Analytical results in combination with numerical results
indicate that both SSC-OMP and SSC-MP with a data-dependent stopping criterion
automatically detect the dimensions of the subspaces underlying the data.
Moreover, experiments on synthetic and on real data show that SSC-MP compares
very favorably to SSC, SSC-OMP, TSC, and the nearest subspace neighbor
algorithm, both in terms of clustering performance and running time. In
addition, we find that, in contrast to SSC-OMP, the performance of SSC-MP is
very robust with respect to the choice of parameters in the stopping criteria.
| Michael Tschannen and Helmut B\"olcskei | 10.1109/TIT.2018.2812824 | 1612.03450 | null | null |
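A rough sketch of the SSC-OMP pipeline discussed above, using scikit-learn's OMP and spectral clustering. A fixed sparsity level stands in for the data-dependent stopping criterion analyzed in the paper, and the toy data are two random low-dimensional subspaces; everything here is an illustrative assumption, not the authors' implementation.

```python
# SSC-OMP sketch: sparsely self-represent each point with OMP over the
# remaining points, symmetrize the coefficients into an affinity matrix,
# then spectrally cluster.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.cluster import SpectralClustering

def ssc_omp(X, n_clusters, sparsity=3):
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
        omp.fit(X[others].T, X[i])          # dictionary columns = other points
        C[i, others] = omp.coef_
    W = np.abs(C) + np.abs(C).T             # symmetric affinity
    return SpectralClustering(n_clusters, affinity="precomputed").fit_predict(W)

# Two random 2-dimensional subspaces of R^6, 30 noisy points each.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(30, 2)) @ rng.normal(size=(2, 6)) for _ in range(2)])
X += 0.01 * rng.normal(size=X.shape)
print(ssc_omp(X, n_clusters=2))
```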
Self-calibrating Neural Networks for Dimensionality Reduction | cs.LG cs.NE q-bio.NC stat.ML | Recently, a novel family of biologically plausible online algorithms for
reducing the dimensionality of streaming data has been derived from the
similarity matching principle. In these algorithms, the number of output
dimensions can be determined adaptively by thresholding the singular values of
the input data matrix. However, setting such a threshold requires knowing the
magnitude of the desired singular values in advance. Here we propose online
algorithms where the threshold is self-calibrating based on the singular values
computed from the existing observations. To derive these algorithms from the
similarity matching cost function we propose novel regularizers. As before,
these online algorithms can be implemented by Hebbian/anti-Hebbian neural
networks in which the learning rule depends on the chosen regularizer. We
demonstrate both mathematically and via simulation the effectiveness of these
online algorithms in various settings.
| Yuansi Chen, Cengiz Pehlevan, Dmitri B. Chklovskii | 10.1109/ACSSC.2016.7869625 | 1612.03480 | null | null |
Kernel-based Reconstruction of Space-time Functions on Dynamic Graphs | cs.LG eess.SP stat.ML | Graph-based methods pervade the inference toolkits of numerous disciplines
including sociology, biology, neuroscience, physics, chemistry, and
engineering. A challenging problem encountered in this context pertains to
determining the attributes of a set of vertices given those of another subset
at possibly different time instants. Leveraging spatiotemporal dynamics can
drastically reduce the number of observed vertices, and hence the cost of
sampling. Alleviating the limited flexibility of existing approaches, the
present paper broadens the existing kernel-based graph function reconstruction
framework to accommodate time-evolving functions over possibly time-evolving
topologies. This approach inherits the versatility and generality of
kernel-based methods, for which no knowledge on distributions or second-order
statistics is required. Systematic guidelines are provided to construct two
families of space-time kernels with complementary strengths. The first
facilitates judicious control of regularization on a space-time frequency
plane, whereas the second can afford time-varying topologies. Batch and online
estimators are also put forth, and a novel kernel Kalman filter is developed to
obtain these estimates at affordable computational cost. Numerical tests with
real data sets corroborate the merits of the proposed methods relative to
competing alternatives.
| Daniel Romero, Vassilis N. Ioannidis, Georgios B. Giannakis | null | 1612.03615 | null | null |
FastText.zip: Compressing text classification models | cs.CL cs.LG | We consider the problem of producing compact architectures for text
classification, such that the full model fits in a limited amount of memory.
After considering different solutions inspired by the hashing literature, we
propose a method built upon product quantization to store word embeddings.
While the original technique leads to a loss in accuracy, we adapt this method
to circumvent quantization artefacts. Our experiments carried out on several
benchmarks show that our approach typically requires two orders of magnitude
less memory than fastText while being only slightly inferior with respect to
accuracy. As a result, it outperforms the state of the art by a good margin in
terms of the compromise between memory usage and accuracy.
| Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze,
H\'erve J\'egou, Tomas Mikolov | null | 1612.03651 | null | null |
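The storage trick underlying FastText.zip, product quantization of the embedding matrix, can be sketched in a few lines. The following is a bare-bones illustration with assumed sub-vector and codebook sizes; it omits the paper's additional steps (e.g., vocabulary pruning and retraining) that limit the loss in accuracy.

```python
# Bare-bones product quantization: split each d-dim vector into m sub-vectors
# and replace each sub-vector with the index of its nearest k-means centroid.
import numpy as np
from sklearn.cluster import KMeans

def pq_train(E, m=4, k=16):
    d = E.shape[1] // m
    books, codes = [], []
    for j in range(m):
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(E[:, j*d:(j+1)*d])
        books.append(km.cluster_centers_)
        codes.append(km.labels_.astype(np.uint8))   # 1 byte per sub-vector
    return books, np.stack(codes, axis=1)

def pq_decode(books, codes):
    return np.hstack([books[j][codes[:, j]] for j in range(len(books))])

E = np.random.default_rng(4).normal(size=(1000, 32)).astype(np.float32)
books, codes = pq_train(E)                 # 4 bytes/vector vs 128 bytes raw
print(np.mean((E - pq_decode(books, codes)) ** 2))  # reconstruction error
```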
Analysis and Optimization of Loss Functions for Multiclass, Top-k, and
Multilabel Classification | cs.CV cs.LG stat.ML | Top-k error is currently a popular performance measure on large scale image
classification benchmarks such as ImageNet and Places. Despite its wide
acceptance, our understanding of this metric is limited as most of the previous
research is focused on its special case, the top-1 error. In this work, we
explore two directions that shed more light on the top-k error. First, we
provide an in-depth analysis of established and recently proposed single-label
multiclass methods along with a detailed account of efficient optimization
algorithms for them. Our results indicate that the softmax loss and the smooth
multiclass SVM are surprisingly competitive in top-k error uniformly across all
k, which can be explained by our analysis of multiclass top-k calibration.
Further improvements for a specific k are possible with a number of proposed
top-k loss functions. Second, we use the top-k methods to explore the
transition from multiclass to multilabel learning. In particular, we find that
it is possible to obtain effective multilabel classifiers on Pascal VOC using a
single label per image for training, while the gap between multiclass and
multilabel methods on MS COCO is more significant. Finally, our contribution of
efficient algorithms for training with the considered top-k and multilabel loss
functions is of independent interest.
| Maksim Lapin, Matthias Hein, and Bernt Schiele | null | 1612.03663 | null | null |
Neurogenesis Deep Learning | cs.NE cs.LG stat.ML | Neural machine learning methods, such as deep neural networks (DNN), have
achieved remarkable success in a number of complex data processing tasks. These
methods have arguably had their strongest impact on tasks such as image and
audio processing - data processing domains in which humans have long held clear
advantages over conventional algorithms. In contrast to biological neural
systems, which are capable of learning continuously, deep artificial networks
have a limited ability for incorporating new information in an already trained
network. As a result, methods for continuous learning are potentially highly
impactful in enabling the application of deep networks to dynamic data sets.
Here, inspired by the process of adult neurogenesis in the hippocampus, we
explore the potential for adding new neurons to deep layers of artificial
neural networks in order to facilitate their acquisition of novel information
while preserving previously trained data representations. Our results on the
MNIST handwritten digit dataset and the NIST SD 19 dataset, which includes
lower and upper case letters and digits, demonstrate that neurogenesis is well
suited for addressing the stability-plasticity dilemma that has long challenged
adaptive machine learning algorithms.
| Timothy J. Draelos, Nadine E. Miner, Christopher C. Lamb, Jonathan A.
Cox, Craig M. Vineyard, Kristofor D. Carlson, William M. Severa, Conrad D.
James, and James B. Aimone | 10.1109/IJCNN.2017.7965898 | 1612.03770 | null | null |
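The core operation the abstract describes, adding neurons to a trained layer without disturbing what the network already computes, can be sketched directly: incoming weights of new units are initialized randomly and outgoing weights to zero, so the network's function is unchanged at insertion. This is only the widening step, with assumed shapes and init scales; the paper's full method is richer than this.

```python
# Sketch of "add a neuron without changing the function", then fine-tune.
import numpy as np

def add_neurons(W_in, b, W_out, n_new, rng):
    """Grow a hidden layer h = f(W_in x + b), y = W_out h by n_new units."""
    W_in = np.vstack([W_in, 0.1 * rng.normal(size=(n_new, W_in.shape[1]))])
    b = np.concatenate([b, np.zeros(n_new)])
    W_out = np.hstack([W_out, np.zeros((W_out.shape[0], n_new))])  # silent at first
    return W_in, b, W_out

rng = np.random.default_rng(5)
W1, b1, W2 = rng.normal(size=(8, 4)), np.zeros(8), rng.normal(size=(3, 8))
x = rng.normal(size=4)
y_before = W2 @ np.tanh(W1 @ x + b1)
W1, b1, W2 = add_neurons(W1, b1, W2, n_new=2, rng=rng)
y_after = W2 @ np.tanh(W1 @ x + b1)
assert np.allclose(y_before, y_after)      # function preserved at insertion
```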
Online Reinforcement Learning for Real-Time Exploration in Continuous
State and Action Markov Decision Processes | cs.AI cs.LG | This paper presents a new method to learn online policies in continuous
state, continuous action, model-free Markov decision processes, with two
properties that are crucial for practical applications. First, the policies are
implementable with a very low computational cost: once the policy is computed,
the action corresponding to a given state is obtained in logarithmic time with
respect to the number of samples used. Second, our method is versatile: it does
not rely on any a priori knowledge of the structure of optimal policies. We
build upon the Fitted Q-iteration algorithm which represents the $Q$-value as
the average of several regression trees. Our algorithm, the Fitted Policy
Forest algorithm (FPF), computes a regression forest representing the Q-value
and transforms it into a single tree representing the policy, while keeping
control on the size of the policy using resampling and leaf merging. We
introduce an adaptation of Multi-Resolution Exploration (MRE) which is
particularly suited to FPF. We assess the performance of FPF on three classical
benchmarks for reinforcement learning: the "Inverted Pendulum", the "Double
Integrator" and "Car on the Hill" and show that FPF equals or outperforms other
algorithms, although these algorithms rely on the use of particular
representations of the policies, especially chosen in order to fit each of the
three problems. Finally, we show that the combination of FPF and MRE finds nearly optimal solutions in problems where $\epsilon$-greedy approaches would fail.
| Ludovic Hofer, Hugo Gimbert | null | 1612.03780 | null | null |
A Unit Selection Methodology for Music Generation Using Deep Neural
Networks | cs.SD cs.AI cs.LG | Several methods exist for a computer to generate music based on data
including Markov chains, recurrent neural networks, recombinancy, and grammars.
We explore the use of unit selection and concatenation as a means of generating
music using a procedure based on ranking, where we consider a unit to be a
variable length number of measures of music. We first examine whether a unit
selection method, that is restricted to a finite size unit library, can be
sufficient for encompassing a wide spectrum of music. We do this by developing
a deep autoencoder that encodes a musical input and reconstructs the input by
selecting from the library. We then describe a generative model that combines a
deep structured semantic model (DSSM) with an LSTM to predict the next unit,
where units consist of four, two, and one measures of music. We evaluate the
generative model using objective metrics including mean rank and accuracy and
with a subjective listening test in which expert musicians are asked to
complete a forced-choice ranking task. We compare our model to a note-level
generative baseline that consists of a stacked LSTM trained to predict forward
by one note.
| Mason Bretan, Gil Weinberg, and Larry Heck | null | 1612.03789 | null | null |
Generalizable Features From Unsupervised Learning | stat.ML cs.CV cs.LG | Humans learn a predictive model of the world and use this model to reason
about future events and the consequences of actions. In contrast to most
machine predictors, we exhibit an impressive ability to generalize to unseen
scenarios and reason intelligently in these settings. One important aspect of
this ability is physical intuition (Lake et al., 2016). In this work, we explore
the potential of unsupervised learning to find features that promote better
generalization to settings outside the supervised training distribution. Our
task is predicting the stability of towers of square blocks. We demonstrate
that an unsupervised model, trained to predict future frames of a video
sequence of stable and unstable block configurations, can yield features that
support extrapolating stability prediction to block configurations outside the training set distribution.
| Mehdi Mirza, Aaron Courville, Yoshua Bengio | null | 1612.03809 | null | null |
Tensor Decompositions via Two-Mode Higher-Order SVD (HOSVD) | cs.LG stat.ML | Tensor decompositions have rich applications in statistics and machine
learning, and developing efficient, accurate algorithms for the problem has
received much attention recently. Here, we present a new method built on
Kruskal's uniqueness theorem to decompose symmetric, nearly orthogonally
decomposable tensors. Unlike the classical higher-order singular value
decomposition which unfolds a tensor along a single mode, we consider
unfoldings along two modes and use rank-1 constraints to characterize the
underlying components. This tensor decomposition method provably handles a
greater level of noise compared to previous methods and achieves a high
estimation accuracy. Numerical results demonstrate that our algorithm is robust
to various noise distributions and that it performs especially favorably as the
order increases.
| Miaoyan Wang and Yun S. Song | null | 1612.03839 | null | null |
Knowledge Completion for Generics using Guided Tensor Factorization | cs.AI cs.LG stat.ML | Given a knowledge base or KB containing (noisy) facts about common nouns or
generics, such as "all trees produce oxygen" or "some animals live in forests",
we consider the problem of inferring additional such facts at a precision
similar to that of the starting KB. Such KBs capture general knowledge about
the world, and are crucial for various applications such as question answering.
Different from commonly studied named entity KBs such as Freebase, generics KBs
involve quantification, have more complex underlying regularities, tend to be
more incomplete, and violate the commonly used locally closed world assumption
(LCWA). We show that existing KB completion methods struggle with this new
task, and present the first approach that is successful. Our results
demonstrate that external information, such as relation schemas and entity
taxonomies, if used appropriately, can be a surprisingly powerful tool in this
setting. First, our simple yet effective knowledge guided tensor factorization
approach achieves state-of-the-art results on two generics KBs (80% precise)
for science, doubling their size at 74%-86% precision. Second, our novel
taxonomy guided, submodular, active learning method for collecting annotations
about rare entities (e.g., oriole, a bird) is 6x more effective at inferring
further new facts about them than multiple active learning baselines.
| Hanie Sedghi and Ashish Sabharwal | null | 1612.03871 | null | null |
Inverse Compositional Spatial Transformer Networks | cs.CV cs.LG | In this paper, we establish a theoretical connection between the classical
Lucas & Kanade (LK) algorithm and the emerging topic of Spatial Transformer
Networks (STNs). STNs are of interest to the vision and learning communities
due to their natural ability to combine alignment and classification within the
same theoretical framework. Inspired by the Inverse Compositional (IC) variant
of the LK algorithm, we present Inverse Compositional Spatial Transformer
Networks (IC-STNs). We demonstrate that IC-STNs can achieve better performance
than conventional STNs with less model capacity; in particular, we show
superior performance in pure image alignment tasks as well as in joint alignment/classification problems on real-world data.
| Chen-Hsuan Lin, Simon Lucey | null | 1612.03897 | null | null |
Design of Data-Driven Mathematical Laws for Optimal Statistical
Classification Systems | cs.LG | This article will devise data-driven, mathematical laws that generate
optimal, statistical classification systems which achieve minimum error rates
for data distributions with unchanging statistics. Thereby, I will design
learning machines that minimize the expected risk or probability of
misclassification. I will devise a system of fundamental equations of binary
classification for a classification system in statistical equilibrium. I will
use this system of equations to formulate the problem of learning unknown,
linear and quadratic discriminant functions from data as a locus problem,
thereby formulating geometric locus methods within a statistical framework.
Solving locus problems involves finding equations of curves or surfaces defined
by given properties and finding graphs or loci of given equations. I will
devise three systems of data-driven, locus equations that generate optimal,
statistical classification systems. Each class of learning machines satisfies
fundamental statistical laws for a classification system in statistical
equilibrium. Thereby, I will formulate three classes of learning machines that
are scalable modules for optimal, statistical pattern recognition systems, all
of which are capable of performing a wide variety of statistical pattern
recognition tasks, where any given M-class statistical pattern recognition
system exhibits optimal generalization performance for an M-class feature
space.
| Denise M. Reeves | null | 1612.03902 | null | null |
Identification of release sources in advection-diffusion system by
machine learning combined with Green function inverse method | cs.LG stat.ML | The identification of sources of advection-diffusion transport is usually based on solving complex ill-posed inverse models against the available state-variable data records. However, if there are several sources with
different locations and strengths, the data records represent mixtures rather
than the separate influences of the original sources. Importantly, the number
of these original release sources is typically unknown, which hinders
reliability of the classical inverse-model analyses. To address this challenge,
we present here a novel hybrid method for identification of the unknown number
of release sources. Our hybrid method, called HNMF, couples unsupervised
learning based on Nonnegative Matrix Factorization (NMF) and inverse-analysis
Green functions method. HNMF synergistically performs decomposition of the
recorded mixtures, finds the number of the unknown sources and uses the Green
function of advection-diffusion equation to identify their characteristics. In
the paper, we introduce the method and demonstrate that it is capable of
identifying the advection velocity and dispersivity of the medium as well as
the unknown number, locations, and properties of various sets of synthetic
release sources with different space and time dependencies, based only on the
recorded data. HNMF can be applied directly to any problem controlled by a
partial-differential parabolic equation where mixtures of an unknown number of
sources are measured at multiple locations.
| Valentin G. Stanev, Filip L. Iliev, Scott Hansen, Velimir V.
Vesselinov, Boian S. Alexandrov | 10.1016/j.apm.2018.03.006 | 1612.03948 | null | null |
Nonnegative Matrix Factorization for identification of unknown number of
sources emitting delayed signals | cs.LG stat.ML | Factor analysis is broadly used as a powerful unsupervised machine learning
tool for reconstruction of hidden features in recorded mixtures of signals. In
the case of a linear approximation, the mixtures can be decomposed by a variety
of model-free Blind Source Separation (BSS) algorithms. Most of the available
BSS algorithms consider an instantaneous mixing of signals, while the case when
the mixtures are linear combinations of signals with delays is less explored.
Especially difficult is the case when the number of sources of the signals with
delays is unknown and has to be determined from the data as well. To address
this problem, in this paper, we present a new method based on Nonnegative
Matrix Factorization (NMF) that is capable of identifying: (a) the unknown
number of the sources, (b) the delays and speed of propagation of the signals,
and (c) the locations of the sources. Our method can be used to decompose
records of mixtures of signals with delays emitted by an unknown number of
sources in a nondispersive medium, based only on recorded data. This is the
case, for example, when electromagnetic signals from multiple antennas are
received asynchronously; or mixtures of acoustic or seismic signals recorded by
sensors located at different positions; or when a shift in frequency is induced
by the Doppler effect. By applying our method to synthetic datasets, we
demonstrate its ability to identify the unknown number of sources as well as
the waveforms, the delays, and the strengths of the signals. Using Bayesian
analysis, we also evaluate estimation uncertainties and identify the region of
likelihood where the positions of the sources can be found.
| Filip L. Iliev, Valentin G. Stanev, Velimir V. Vesselinov, Boian S.
Alexandrov | null | 1612.03950 | null | null |
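Both this paper and the preceding one build on nonnegative matrix factorization. As a point of reference, here is the classical multiplicative-update NMF (Lee-Seung style) that such methods extend; the delay/shift handling and the model selection for the unknown number of sources are the papers' contributions and are not sketched here.

```python
# Classical NMF by multiplicative updates: X ~ W H with W, H >= 0.
import numpy as np

def nmf(X, r, iters=500, eps=1e-9):
    rng = np.random.default_rng(6)
    n, m = X.shape
    W, H = rng.random((n, r)), rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # multiplicative H-update
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # multiplicative W-update
    return W, H

# Mixtures of 3 nonnegative sources observed at 10 sensors.
rng = np.random.default_rng(7)
S = rng.random((3, 200))                       # source signals
A = rng.random((10, 3))                        # mixing matrix
W, H = nmf(A @ S, r=3)
print(np.linalg.norm(A @ S - W @ H) / np.linalg.norm(A @ S))  # relative error
```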
Hybrid Repeat/Multi-point Sampling for Highly Volatile Objective
Functions | stat.ML cs.AI cs.LG cs.RO | A key drawback of the current generation of artificial decision-makers is
that they do not adapt well to changes in unexpected situations. This paper
addresses the situation in which an AI for aerial dog fighting, with tunable
parameters that govern its behavior, will optimize behavior with respect to an
objective function that must be evaluated and learned through simulations. Once
this objective function has been modeled, the agent can then choose its desired
behavior in different situations. Bayesian optimization with a Gaussian Process
surrogate is used as the method for investigating the objective function. One
key benefit is that during optimization the Gaussian Process learns a global
estimate of the true objective function, with predicted outcomes and a
statistical measure of confidence in areas that haven't been investigated yet.
However, standard Bayesian optimization does not perform consistently or
provide an accurate Gaussian Process surrogate function for highly volatile
objective functions. We treat these problems by introducing a novel sampling
technique called Hybrid Repeat/Multi-point Sampling. This technique gives the AI the ability to learn optimal behaviors in a highly uncertain environment. More
importantly, it not only improves the reliability of the optimization, but also
creates a better model of the entire objective surface. With this improved
model the agent is equipped to better adapt behaviors.
| Brett Israelsen and Nisar Ahmed | null | 1612.03981 | null | null |
An empirical analysis of the optimization of deep network loss surfaces | cs.LG | The success of deep neural networks hinges on our ability to accurately and
efficiently optimize high-dimensional, non-convex functions. In this paper, we
empirically investigate the loss functions of state-of-the-art networks, and
how commonly-used stochastic gradient descent variants optimize these loss
functions. To do this, we visualize the loss functions by projecting them down
to low-dimensional spaces chosen based on the convergence points of different
optimization algorithms. Our observations suggest that optimization algorithms
encounter and choose different descent directions at many saddle points to find
different final weights. Based on consistency we observe across re-runs of the
same stochastic optimization algorithm, we hypothesize that each optimization
algorithm makes characteristic choices at these saddle points.
| Daniel Jiwoong Im, Michael Tao, Kristin Branson | null | 1612.04010 | null | null |
Generative Adversarial Parallelization | cs.LG stat.ML | Generative Adversarial Networks have become one of the most studied
frameworks for unsupervised learning due to their intuitive formulation. They
have also been shown to be capable of generating convincing examples in limited
domains, such as low-resolution images. However, they still prove difficult to
train in practice and tend to ignore modes of the data generating distribution.
Quantitatively capturing effects such as mode coverage and more generally the
quality of the generative model still remain elusive. We propose Generative
Adversarial Parallelization, a framework in which many GANs or their variants
are trained simultaneously, exchanging their discriminators. This eliminates
the tight coupling between a generator and discriminator, leading to improved
convergence and improved coverage of modes. We also propose an improved variant
of the recently proposed Generative Adversarial Metric and show how it can
score individual GANs or their collections under the GAP model.
| Daniel Jiwoong Im, He Ma, Chris Dongjoo Kim, Graham Taylor | null | 1612.04021 | null | null |
Distributed Multi-Task Relationship Learning | cs.LG stat.ML | Multi-task learning aims to learn multiple tasks jointly by exploiting their
relatedness to improve the generalization performance for each task.
Traditionally, to perform multi-task learning, one needs to centralize data
from all the tasks to a single machine. However, in many real-world
applications, data of different tasks may be geo-distributed over different
local machines. Due to the heavy communication caused by transmitting the data and to concerns over data privacy and security, it is impossible to send the data of different tasks to a master machine to perform multi-task learning. Therefore, in this paper, we propose a distributed multi-task learning framework that alternately learns predictive models for each task and the relationships between tasks, in the parameter-server paradigm. In
our framework, we first offer a general dual form for a family of regularized
multi-task relationship learning methods. Subsequently, we propose a
communication-efficient primal-dual distributed optimization algorithm to solve
the dual problem by carefully designing local subproblems to make the dual
problem decomposable. Moreover, we provide a theoretical convergence analysis
for the proposed algorithm, which is specific for distributed multi-task
relationship learning. We conduct extensive experiments on both synthetic and
real-world datasets to evaluate our proposed framework in terms of
effectiveness and convergence.
| Sulin Liu, Sinno Jialin Pan, Qirong Ho | null | 1612.04022 | null | null |
DizzyRNN: Reparameterizing Recurrent Neural Networks for Norm-Preserving
Backpropagation | cs.LG | The vanishing and exploding gradient problems are well-studied obstacles that
make it difficult for recurrent neural networks to learn long-term time
dependencies. We propose a reparameterization of standard recurrent neural
networks to update linear transformations in a provably norm-preserving way
through Givens rotations. Additionally, we use the absolute value function as
an element-wise non-linearity to preserve the norm of backpropagated signals
over the entire network. We show that this reparameterization reduces the
number of parameters and maintains the same algorithmic complexity as a
standard recurrent neural network, while outperforming standard recurrent
neural networks with orthogonal initializations and Long Short-Term Memory
networks on the copy problem.
| Victor Dorobantu, Per Andre Stromhaug, Jess Renteria | null | 1612.04035 | null | null |
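The norm-preserving ingredients described above are easy to verify numerically. The sketch below composes Givens rotations into an exactly orthogonal transition matrix and applies |x| as the elementwise nonlinearity; the rotation-plane ordering and angle initialization are illustrative, not the paper's exact parameterization.

```python
# Givens-rotation parameterization of an orthogonal hidden transition.
import numpy as np

def givens(n, i, j, theta):
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def rotation_from_angles(n, angles):
    """Compose one Givens rotation per (i, j) plane into an orthogonal matrix."""
    W = np.eye(n)
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            W = givens(n, i, j, angles[k]) @ W
            k += 1
    return W

n = 6
rng = np.random.default_rng(8)
angles = rng.uniform(-np.pi, np.pi, size=n * (n - 1) // 2)  # trainable params
W = rotation_from_angles(n, angles)
h = rng.normal(size=n)
# Orthogonal W plus |x| nonlinearity preserves the norm of the signal.
assert np.isclose(np.linalg.norm(np.abs(W @ h)), np.linalg.norm(h))
```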
Theory and Tools for the Conversion of Analog to Spiking Convolutional
Neural Networks | stat.ML cs.CV cs.LG cs.NE | Deep convolutional neural networks (CNNs) have shown great potential for
numerous real-world machine learning applications, but performing inference in
large CNNs in real-time remains a challenge. We have previously demonstrated
that traditional CNNs can be converted into deep spiking neural networks
(SNNs), which exhibit similar accuracy while reducing both latency and
computational load as a consequence of their data-driven, event-based style of
computing. Here we provide a novel theory that explains why this conversion is
successful, and derive from it several new tools to convert a larger and more
powerful class of deep networks into SNNs. We identify the main sources of
approximation errors in previous conversion methods, and propose simple
mechanisms to fix these issues. Furthermore, we develop spiking implementations
of common CNN operations such as max-pooling, softmax, and batch-normalization,
which allow almost loss-less conversion of arbitrary CNN architectures into the
spiking domain. Empirical evaluation of different network architectures on the
MNIST and CIFAR10 benchmarks leads to the best SNN results reported to date.
| Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer | null | 1612.04052 | null | null |
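A minimal illustration of the rate-coding idea behind ANN-to-SNN conversion: an integrate-and-fire neuron with reset-by-subtraction, driven by a constant input, fires at a rate that tracks the ReLU of that input. Time step, threshold, and duration are assumed values chosen for the sketch, not the paper's settings.

```python
# Integrate-and-fire rate coding: firing rate approximates relu(input).
import numpy as np

def if_rate(inp, T=10000, dt=1e-3, v_thresh=1.0):
    """Firing rate of an integrate-and-fire neuron under constant input."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += inp * dt                  # integrate the input current
        if v >= v_thresh:              # fire, then reset by subtraction
            v -= v_thresh
            spikes += 1
    return spikes / (T * dt)           # spikes per unit simulated time

for a in [-0.5, 0.0, 0.3, 0.7]:
    print(a, if_rate(a), max(a, 0.0))  # the rate tracks relu(a)
```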
Joint Bayesian Gaussian discriminant analysis for speaker verification | cs.SD cs.LG | State-of-the-art i-vector based speaker verification relies on variants of
Probabilistic Linear Discriminant Analysis (PLDA) for discriminant analysis. We
are mainly motivated by the recent work of the joint Bayesian (JB) method,
which is originally proposed for discriminant analysis in face verification. We
apply JB to speaker verification and make three contributions beyond the
original JB. 1) In contrast to the EM iterations with approximated statistics
in the original JB, the EM iterations with exact statistics are employed and
give better performance. 2) We propose to do simultaneous diagonalization (SD)
of the within-class and between-class covariance matrices to achieve efficient
testing, which has broader application scope than the SVD-based efficient
testing method in the original JB. 3) We scrutinize similarities and
differences between various Gaussian PLDAs and JB, complementing the previous
analysis of comparing JB only with Prince-Elder PLDA. Extensive experiments are
conducted on NIST SRE10 core condition 5, empirically validating the
superiority of JB with faster convergence rate and 9-13% EER reduction compared
with state-of-the-art PLDA.
| Yiyan Wang, Haotian Xu, Zhijian Ou | null | 1612.04056 | null | null |
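The simultaneous diagonalization (SD) step mentioned in contribution 2) corresponds to a generalized eigendecomposition, which SciPy exposes directly. The sketch below only verifies the SD property on random SPD matrices; the surrounding JB scoring machinery is not reproduced, and the matrices here are illustrative stand-ins for the within-class and between-class covariances.

```python
# Simultaneous diagonalization of Sw and Sb via a generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(9)
A = rng.normal(size=(20, 20)); Sw = A @ A.T + 1e-3 * np.eye(20)  # SPD stand-in
B = rng.normal(size=(20, 20)); Sb = B @ B.T
w, V = eigh(Sb, Sw)                    # solves Sb v = w Sw v
assert np.allclose(V.T @ Sw @ V, np.eye(20), atol=1e-6)   # whitened
assert np.allclose(V.T @ Sb @ V, np.diag(w), atol=1e-6)   # diagonalized
```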
Corporate Disruption in the Science of Machine Learning | cs.CY cs.LG | This MSc dissertation considers the effects of the current corporate interest
on researchers in the field of machine learning. Situated within the field's
cyclical history of academic, public and corporate interest, this dissertation
investigates how current researchers view recent developments and negotiate
their own research practices within an environment of increased commercial
interest and funding. The original research consists of in-depth interviews
with 12 machine learning researchers working in both academia and industry.
Building on theory from science, technology and society studies, this
dissertation problematizes the traditional narratives of the neoliberalization
of academic research by allowing the researchers themselves to discuss how
their career choices, working environments and interactions with others in the
field have been affected by the reinvigorated corporate interest of recent
years.
| Sam Work | null | 1612.04108 | null | null |
Parsimonious Online Learning with Kernels via Sparse Projections in
Function Space | stat.ML cs.LG | Despite their attractiveness, popular perception is that techniques for
nonparametric function approximation do not scale to streaming data due to an
intractable growth in the amount of storage they require. To solve this problem
in a memory-affordable way, we propose an online technique based on functional
stochastic gradient descent in tandem with supervised sparsification based on
greedy function subspace projections. The method, called parsimonious online
learning with kernels (POLK), provides a controllable tradeoff between its
solution accuracy and the amount of memory it requires. We derive conditions
under which the generated function sequence converges almost surely to the
optimal function, and we establish that the memory requirement remains finite.
We evaluate POLK for kernel multi-class logistic regression and kernel
hinge-loss classification on three canonical data sets: a synthetic Gaussian
mixture model, the MNIST hand-written digits, and the Brodatz texture database.
On all three tasks, we observe a favorable tradeoff among objective function evaluation, classification performance, and the complexity of the nonparametric regressor extracted by the proposed method.
| Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro | null | 1612.04111 | null | null |
Upper Bound of Bayesian Generalization Error in Non-negative Matrix
Factorization | math.ST cs.LG stat.ML stat.TH | Non-negative matrix factorization (NMF) is a new knowledge discovery method
that is used for text mining, signal processing, bioinformatics, and consumer
analysis. However, its basic properties as a learning machine have not yet been clarified, since it is not a regular statistical model; as a result, a theoretical optimization method for NMF has not yet been established. In this paper, we study the real log canonical threshold of NMF and give an upper bound on the generalization error in Bayesian learning. The results show that the generalization error of matrix factorization can be made smaller than that of regular statistical models if Bayesian learning is applied.
| Naoki Hayashi, Sumio Watanabe | 10.1016/j.neucom.2017.04.068 | 1612.04112 | null | null |
Information Extraction with Character-level Neural Networks and Free
Noisy Supervision | cs.CL cs.IR cs.LG | We present an architecture for information extraction from text that augments
an existing parser with a character-level neural network. The network is
trained using a measure of consistency of extracted data with existing
databases as a form of noisy supervision. Our architecture combines the ability
of constraint-based information extraction systems to easily incorporate domain
knowledge and constraints with the ability of deep neural networks to leverage
large amounts of data to learn complex features. Boosting the existing parser's
precision, the system led to large improvements over a mature and highly tuned
constraint-based production information extraction system used at Bloomberg for
financial language text.
| Philipp Meerkamp (Bloomberg LP) and Zhengyi Zhou (AT&T Labs Research) | null | 1612.04118 | null | null |
TF.Learn: TensorFlow's High-level Module for Distributed Machine
Learning | cs.DC cs.LG | TF.Learn is a high-level Python module for distributed machine learning
inside TensorFlow. It provides an easy-to-use Scikit-learn style interface to
simplify the process of creating, configuring, training, evaluating, and
experimenting with a machine learning model. TF.Learn integrates a wide range of state-of-the-art machine learning algorithms built on top of TensorFlow's low-level APIs for small to large-scale supervised and unsupervised problems. This module
focuses on bringing machine learning to non-specialists using a general-purpose
high-level language as well as researchers who want to implement, benchmark,
and compare their new methods in a structured environment. Emphasis is put on
ease of use, performance, documentation, and API consistency.
| Yuan Tang | null | 1612.04251 | null | null |
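A hedged usage sketch of the Scikit-learn-style workflow described above. TF.Learn lived under tf.contrib.learn in TensorFlow 1.x, and its exact signatures shifted across releases, so the argument names below should be read as indicative rather than authoritative.

```python
# Indicative tf.contrib.learn usage (TensorFlow 1.x era); signatures varied
# across releases, so treat argument names as assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.contrib import learn

x = np.random.rand(100, 4).astype(np.float32)
y = (x.sum(axis=1) > 2.0).astype(np.int64)   # toy binary labels

feature_columns = learn.infer_real_valued_columns_from_input(x)
clf = learn.DNNClassifier(hidden_units=[16, 8], n_classes=2,
                          feature_columns=feature_columns)
clf.fit(x=x, y=y, steps=200)                 # Scikit-learn style training
print(clf.evaluate(x=x, y=y, steps=1))       # evaluation metrics dict
```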
Neuro-symbolic representation learning on biological knowledge graphs | q-bio.QM cs.LG q-bio.MN | Motivation: Biological data and knowledge bases increasingly rely on Semantic
Web technologies and the use of knowledge graphs for data integration,
retrieval and federated queries. In the past years, feature learning methods
that are applicable to graph-structured data are becoming available, but have
not yet been widely applied and evaluated on structured biological knowledge.
Results: We develop a novel method for feature learning on biological knowledge
graphs. Our method combines symbolic methods, in particular knowledge
representation using symbolic logic and automated reasoning, with neural
networks to generate embeddings of nodes that encode for related information
within knowledge graphs. Through the use of symbolic logic, these embeddings
contain both explicit and implicit information. We apply these embeddings to
the prediction of edges in the knowledge graph representing problems of
function prediction, finding candidate genes of diseases, protein-protein
interactions, or drug target relations, and demonstrate performance that
matches and sometimes outperforms traditional approaches based on manually
crafted features. Our method can be applied to any biological knowledge graph,
and will thereby open up the increasing amount of Semantic Web based knowledge
bases in biology to use in machine learning and data analytics. Availability
and Implementation:
https://github.com/bio-ontology-research-group/walking-rdf-and-owl Contact:
[email protected]
| Mona Alshahrani, Mohammed Asif Khan, Omar Maddouri, Akira R Kinjo,
N\'uria Queralt-Rosinach, Robert Hoehndorf | 10.1093/bioinformatics/btx275 | 1612.04256 | null | null |
An equation-of-state-meter of QCD transition from deep learning | hep-ph cs.LG hep-th nucl-th stat.ML | Supervised learning with a deep convolutional neural network is used to
identify the QCD equation of state (EoS) employed in relativistic hydrodynamic
simulations of heavy-ion collisions from the simulated final-state particle
spectra $\rho(p_T,\Phi)$. High-level correlations of $\rho(p_T,\Phi)$ learned
by the neural network act as an effective "EoS-meter" in detecting the nature
of the QCD transition. The EoS-meter is model-independent and insensitive to other simulation inputs, especially the initial conditions. Thus it provides a powerful direct connection between heavy-ion collision observables and the bulk
properties of QCD.
| Long-Gang Pang, Kai Zhou, Nan Su, Hannah Petersen, Horst St\"ocker and
Xin-Nian Wang | null | 1612.04262 | null | null |
Towards Adaptive Training of Agent-based Sparring Partners for Fighter
Pilots | stat.ML cs.AI cs.LG cs.RO | A key requirement for the current generation of artificial decision-makers is
that they should adapt well to changes in unexpected situations. This paper
addresses the situation in which an AI for aerial dog fighting, with tunable
parameters that govern its behavior, must optimize behavior with respect to an
objective function that is evaluated and learned through simulations. Bayesian
optimization with a Gaussian Process surrogate is used as the method for
investigating the objective function. One key benefit is that during
optimization, the Gaussian Process learns a global estimate of the true
objective function, with predicted outcomes and a statistical measure of
confidence in areas that haven't been investigated yet. Having a model of the
objective function is important for being able to understand possible outcomes
in the decision space; for example this is crucial for training and providing
feedback to human pilots. However, standard Bayesian optimization does not
perform consistently or provide an accurate Gaussian Process surrogate function
for highly volatile objective functions. We treat these problems by introducing
a novel sampling technique called Hybrid Repeat/Multi-point Sampling. This
technique gives the AI the ability to learn optimal behaviors in a highly uncertain
environment. More importantly, it not only improves the reliability of the
optimization, but also creates a better model of the entire objective surface.
With this improved model the agent is equipped to more accurately/efficiently
predict performance in unexplored scenarios.
| Brett W. Israelsen, Nisar Ahmed, Kenneth Center, Roderick Green,
Winston Bennett Jr | null | 1612.04315 | null | null |
Incorporating Human Domain Knowledge into Large Scale Cost Function
Learning | cs.RO cs.AI cs.LG | Recent advances have shown the capability of Fully Convolutional Neural
Networks (FCN) to model cost functions for motion planning in the context of
learning driving preferences purely based on demonstration data from human
drivers. While pure learning from demonstrations in the framework of Inverse
Reinforcement Learning (IRL) is a promising approach, we can benefit from well
informed human priors and incorporate them into the learning process. Our work
achieves this by pretraining a model to regress to a manual cost function and
refining it based on Maximum Entropy Deep Inverse Reinforcement Learning. When
injecting prior knowledge as pretraining for the network, we achieve higher
robustness, more visually distinct obstacle boundaries, and the ability to
capture instances of obstacles that elude models that purely learn from
demonstration data. Furthermore, by exploiting these human priors, the
resulting model can more accurately handle corner cases that are scarcely seen
in the demonstration data, such as stairs, slopes, and underpasses.
| Markus Wulfmeier, Dushyant Rao, Ingmar Posner | null | 1612.04318 | null | null |
Fast Patch-based Style Transfer of Arbitrary Style | cs.CV cs.GR cs.LG | Artistic style transfer is an image synthesis problem where the content of an
image is reproduced with the style of another. Recent works show that a
visually appealing style transfer can be achieved by using the hidden
activations of a pretrained convolutional neural network. However, existing
methods either apply (i) an optimization procedure that works for any style
image but is very expensive, or (ii) an efficient feedforward network that only
allows a limited number of trained styles. In this work we propose a simpler
optimization objective based on local matching that combines the content
structure and style textures in a single layer of the pretrained network. We
show that our objective has desirable properties such as a simpler optimization
landscape, intuitive parameter tuning, and consistent frame-by-frame
performance on video. Furthermore, we use 80,000 natural images and 80,000
paintings to train an inverse network that approximates the result of the
optimization. This results in a procedure for artistic style transfer that is
efficient but also allows arbitrary content and style images.
| Tian Qi Chen and Mark Schmidt | null | 1612.04337 | null | null |
End-to-End Deep Reinforcement Learning for Lane Keeping Assist | stat.ML cs.LG cs.RO | Reinforcement learning is considered to be a strong AI paradigm which can be
used to teach machines through interaction with the environment and learning
from their mistakes, but it has not yet been successfully used for automotive
applications. There has recently been a revival of interest in the topic,
however, driven by the ability of deep learning algorithms to learn good
representations of the environment. Motivated by Google DeepMind's successful
demonstrations of learning for games from Breakout to Go, we will propose
different methods for autonomous driving using deep reinforcement learning.
This is of particular interest as it is difficult to pose autonomous driving as
a supervised learning problem as it has a strong interaction with the
environment including other vehicles, pedestrians and roadworks. As this is a
relatively new area of research for autonomous driving, we will formulate two
main categories of algorithms: 1) Discrete actions category, and 2) Continuous
actions category. For the discrete actions category, we will deal with the Deep Q-Network algorithm (DQN), while for the continuous actions category, we will deal with the Deep Deterministic Actor Critic algorithm (DDAC). In addition, we will also evaluate the performance of these two categories on an open-source racing car simulator called TORCS (The Open Racing Car Simulator). Our simulation results demonstrate learning of autonomous
maneuvering in a scenario of complex road curvatures and simple interaction
with other vehicles. Finally, we explain the effect of some restricted
conditions, put on the car during the learning phase, on the convergence time
for finishing its learning phase.
| Ahmad El Sallab, Mohammed Abdou, Etienne Perot and Senthil Yogamani | null | 1612.04340 | null | null |
Stacked Generative Adversarial Networks | cs.CV cs.LG cs.NE stat.ML | In this paper, we propose a novel generative model named Stacked Generative
Adversarial Networks (SGAN), which is trained to invert the hierarchical
representations of a bottom-up discriminative network. Our model consists of a
top-down stack of GANs, each learned to generate lower-level representations
conditioned on higher-level representations. A representation discriminator is
introduced at each feature hierarchy to encourage the representation manifold
of the generator to align with that of the bottom-up discriminative network,
leveraging the powerful discriminative representations to guide the generative
model. In addition, we introduce a conditional loss that encourages the use of
conditional information from the layer above, and a novel entropy loss that
maximizes a variational lower bound on the conditional entropy of generator
outputs. We first train each stack independently, and then train the whole
model end-to-end. Unlike the original GAN that uses a single noise vector to
represent all the variations, our SGAN decomposes variations into multiple
levels and gradually resolves uncertainties in the top-down generative process.
Based on visual inspection, Inception scores and visual Turing test, we
demonstrate that SGAN is able to generate images of much higher quality than
GANs without stacking.
| Xun Huang, Yixuan Li, Omid Poursaeed, John Hopcroft, Serge Belongie | null | 1612.04357 | null | null |
Inferring object rankings based on noisy pairwise comparisons from
multiple annotators | stat.ML cs.LG | Ranking a set of objects involves establishing an order allowing for
comparisons between any pair of objects in the set. Oftentimes, due to the
unavailability of a ground truth of ranked orders, researchers resort to
obtaining judgments from multiple annotators followed by inferring the ground
truth based on the collective knowledge of the crowd. However, the aggregation
is often ad-hoc and involves imposing stringent assumptions in inferring the
ground truth (e.g. majority vote). In this work, we propose
Expectation-Maximization (EM) based algorithms that rely on the judgments from
multiple annotators and the object attributes for inferring the latent ground
truth. The algorithm learns the relation between the latent ground truth and
object attributes as well as annotator specific probabilities of flipping, a
metric to assess annotator quality. We further extend the EM algorithm to allow
for a variable probability of flipping based on the pair of objects at hand. We
test our algorithms on two data sets with synthetic annotations and investigate
the impact of annotator quality and quantity on the inferred ground truth. We
also obtain results on two other data sets with annotations from
machine/human annotators and interpret the output trends based on the data
characteristics.
| Rahul Gupta, Shrikanth Narayanan | null | 1612.04413 | null | null |
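A minimal sketch of the flipping-probability EM described above, on synthetic pairwise judgments; the full model in the paper additionally conditions on object attributes and allows pair-dependent flip rates, which this toy omits.

```python
import numpy as np

# EM for noisy pairwise comparisons: latent preference z_k per pair, annotator a
# flips the true label with probability p_a. Synthetic data; uniform prior on z.
rng = np.random.default_rng(0)
n_pairs, n_annot = 200, 5
z_true = rng.integers(0, 2, n_pairs)                 # latent ground-truth preferences
p_true = rng.uniform(0.05, 0.35, n_annot)            # true annotator flip rates
flips = rng.random((n_pairs, n_annot)) < p_true
Y = np.where(flips, 1 - z_true[:, None], z_true[:, None])   # observed judgments

p = np.full(n_annot, 0.2)                            # initial flip-rate estimates (< 0.5)
for _ in range(50):
    # E-step: posterior P(z_k = 1 | Y, p)
    ll1 = np.where(Y == 1, 1 - p, p).prod(axis=1)    # likelihood if z_k = 1
    ll0 = np.where(Y == 0, 1 - p, p).prod(axis=1)    # likelihood if z_k = 0
    gamma = ll1 / (ll1 + ll0)
    # M-step: expected disagreement of each annotator with the latent label
    disagree = gamma[:, None] * (Y == 0) + (1 - gamma)[:, None] * (Y == 1)
    p = disagree.mean(axis=0)

print(np.round(p, 3))   # recovers p_true up to the usual label-switching symmetry
```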
User Model-Based Intent-Aware Metrics for Multilingual Search Evaluation | cs.IR cs.CL cs.HC cs.LG stat.ML | Despite the growing importance of the multilingual aspect of web search, no
appropriate offline metrics to evaluate its quality have been proposed so far. At the
same time, personal language preferences can be regarded as intents of a query.
This approach translates the multilingual search problem into a particular task
of search diversification. Furthermore, the standard intent-aware approach
could be adopted to build a diversified metric for multilingual search on the
basis of a classical IR metric such as ERR. The intent-aware approach estimates
user satisfaction under a user behavior model. We show, however, that the
underlying user behavior models are not realistic in the multilingual case, and
that the resulting intent-aware metrics do not appropriately estimate user
satisfaction. We develop a novel approach to building intent-aware user behavior
models, which overcomes these limitations and yields quality metrics that
better correlate with standard online metrics of user satisfaction.
| Alexey Drutsa (Yandex, Moscow, Russia), Andrey Shutovich (Yandex,
Moscow, Russia), Philipp Pushnyakov (Yandex, Moscow, Russia), Evgeniy
Krokhalyov (Yandex, Moscow, Russia), Gleb Gusev (Yandex, Moscow, Russia),
Pavel Serdyukov (Yandex, Moscow, Russia) | null | 1612.04418 | null | null |
Improving Neural Language Models with a Continuous Cache | cs.CL cs.LG | We propose an extension to neural network language models to adapt their
prediction to the recent history. Our model is a simplified version of memory
augmented networks, which stores past hidden activations as memory and accesses
them through a dot product with the current hidden activation. This mechanism
is very efficient and scales to very large memory sizes. We also draw a link
between the use of external memory in neural networks and cache models used
with count-based language models. We demonstrate on several language model datasets
that our approach performs significantly better than recent memory augmented
networks.
| Edouard Grave, Armand Joulin, Nicolas Usunier | null | 1612.04426 | null | null |
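The cache mechanism itself fits in a few lines; a sketch follows, with placeholder dimensions and a uniform base distribution standing in for the trained language model (theta and lam are assumed names for the cache flatness and interpolation hyperparameters).

```python
import numpy as np

# Continuous cache: past hidden states act as keys, past words as values; the
# cache distribution is interpolated with the base LM distribution.
rng = np.random.default_rng(0)
V, d, T = 1000, 64, 50                  # vocab size, hidden size, history length
H = rng.standard_normal((T, d))         # stored hidden activations h_1 .. h_T
x = rng.integers(0, V, T)               # words observed alongside them
h_t = rng.standard_normal(d)            # current hidden activation
p_model = np.full(V, 1.0 / V)           # base LM distribution (placeholder)

theta, lam = 0.3, 0.2
scores = np.exp(theta * H @ h_t)        # dot-product match with each past state
p_cache = np.zeros(V)
np.add.at(p_cache, x, scores)           # accumulate match scores per word
p_cache /= p_cache.sum()

p_next = (1 - lam) * p_model + lam * p_cache   # final next-word distribution
```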
Identification of Cancer Patient Subgroups via Smoothed Shortest Path
Graph Kernel | cs.CE cs.LG | Characterizing patient somatic mutations through next-generation sequencing
technologies opens up possibilities for refining cancer subtypes. However,
catalogues of mutations reveal that only a small fraction of the genes are
altered frequently in patients. On the other hand different genomic alterations
may perturb the same pathways. We propose a novel clustering procedure that
quantifies the similarities of patients from their mutational profile on
pathways via a novel graph kernel. We represent each KEGG pathway as an
undirected graph. For each patient the vertex labels are assigned based on her
altered genes. Smoothed shortest path graph kernel (smSPK) evaluates each pair
of patients by comparing their vertex labeled pathway graphs. Our clustering
procedure involves two steps: first, the smSPK kernel matrix derived for each
pathway is input to the kernel k-means algorithm and each pathway is evaluated
individually; in the next step, only those pathways that are successful are
combined into a single kernel input to kernel k-means to stratify patients.
Evaluating the procedure on simulated data showed that smSPK clusters patients
with up to 88\% accuracy. Finally, to identify ovarian cancer patient subgroups,
we apply our methodology to The Cancer Genome Atlas ovarian data, which involves
481 patients. The identified subgroups are evaluated through survival analysis.
Grouping patients into four clusters results in patient groups that are
significantly different in their survival times ($p$-value $\le 0.005$).
| Ali Burak \"Unal, \"Oznur Ta\c{s}tan | null | 1612.04431 | null | null |
Disentangling Space and Time in Video with Hierarchical Variational
Auto-encoders | cs.CV cs.LG stat.ML | There are many forms of feature information present in video data. Principal
among them are object identity information, which is largely static across
multiple video frames, and object pose and style information which continuously
transforms from frame to frame. Most existing models confound these two types
of representation by mapping them to a shared feature space. In this paper we
propose a probabilistic approach for learning separable representations of
object identity and pose information using unsupervised video data. Our
approach leverages a deep generative model with a factored prior distribution
that encodes properties of temporal invariances in the hidden feature set.
Learning is achieved via variational inference. We present results of learning
identity and pose information on a dataset of moving characters as well as a
dataset of rotating 3D objects. Our experimental results demonstrate our
model's success in factoring its representation, and demonstrate that the model
achieves improved performance in transfer learning tasks.
| Will Grathwohl, Aaron Wilson | null | 1612.04440 | null | null |
Retrieving sinusoids from nonuniformly sampled data using recursive
formulation | cs.IT cs.LG math.IT | A heuristic procedure based on a novel recursive formulation of sinusoid (RFS)
and on regression with predictive least squares (LS) enables the decomposition of both
uniformly and nonuniformly sampled 1-d signals into a sparse set of sinusoids
(SSS). An optimal SSS is found by Levenberg-Marquardt (LM) optimization of RFS
parameters of near-optimal sinusoids combined with common criteria for the
estimation of the number of sinusoids embedded in noise. The procedure
estimates both the cardinality and the parameters of the SSS. The proposed
algorithm makes it possible to identify the RFS parameters of a sinusoid from a data
sequence containing only a fraction of its cycle. In extreme cases when the
frequency of a sinusoid approaches zero the algorithm is able to detect a
linear trend in data. Also, an irregular sampling pattern enables the algorithm
to correctly reconstruct the under-sampled sinusoid. The parsimonious nature of
the resulting models opens up the possibility of using the proposed method in
machine learning and in expert and intelligent systems that need analysis and
simple representation of 1-d signals. The properties of the proposed algorithm
are evaluated on examples of irregularly sampled artificial signals in noise
and are compared with high accuracy frequency estimation algorithms based on
linear prediction (LP) approach, particularly with respect to Cramer-Rao Bound
(CRB).
| Ivan Maric | 10.1016/j.eswa.2016.10.057 | 1612.04599 | null | null |
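As a point of reference for what the RFS/LM procedure accomplishes, here is a generic least-squares baseline for recovering a single sinusoid from nonuniform samples by scanning candidate frequencies; it is not the recursive formulation itself.

```python
import numpy as np

# Recover one sinusoid from irregularly sampled data: for each candidate
# frequency f, the model y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c is linear
# in (a, b, c), so it can be fit by ordinary least squares.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 80))                    # irregular sample times
f_true = 0.73
y = 2.0 * np.sin(2 * np.pi * f_true * t + 0.4) + 0.1 * rng.standard_normal(t.size)

best_sse, best_f = np.inf, None
for f in np.linspace(0.01, 2.0, 2000):
    A = np.column_stack([np.sin(2 * np.pi * f * t),
                         np.cos(2 * np.pi * f * t),
                         np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = np.sum((A @ coef - y) ** 2)                  # residual sum of squares
    if sse < best_sse:
        best_sse, best_f = sse, f

print("estimated frequency:", best_f)   # close to f_true
```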
Predicting Process Behaviour using Deep Learning | cs.LG stat.ML | Predicting business process behaviour is an important aspect of business
process management. Motivated by research in natural language processing, this
paper describes an application of deep learning with recurrent neural networks
to the problem of predicting the next event in a business process. This is both
a novel method in process prediction, which has largely relied on explicit
process models, and also a novel application of deep learning methods. The
approach is evaluated on two real datasets and our results surpass the
state-of-the-art in prediction precision.
| Joerg Evermann, Jana-Rebecca Rehse, Peter Fettke | 10.1016/j.dss.2017.04.003 | 1612.04600 | null | null |
Harmonic Networks: Deep Translation and Rotation Equivariance | cs.CV cs.LG stat.ML | Translating or rotating an input image should not affect the results of many
computer vision tasks. Convolutional neural networks (CNNs) are already
translation equivariant: input image translations produce proportionate feature
map translations. This is not the case for rotations. Global rotation
equivariance is typically sought through data augmentation, but patch-wise
equivariance is more difficult. We present Harmonic Networks or H-Nets, a CNN
exhibiting equivariance to patch-wise translation and 360-rotation. We achieve
this by replacing regular CNN filters with circular harmonics, returning a
maximal response and orientation for every receptive field patch.
H-Nets use a rich, parameter-efficient and low computational complexity
representation, and we show that deep feature maps within the network encode
complicated rotational invariants. We demonstrate that our layers are general
enough to be used in conjunction with the latest architectures and techniques,
such as deep supervision and batch normalization. We also achieve
state-of-the-art classification on rotated-MNIST, and competitive results on
other benchmark challenges.
| Daniel E. Worrall, Stephan J. Garbin, Daniyar Turmukhambetov and
Gabriel J. Brostow | null | 1612.04642 | null | null |
Stable Memory Allocation in the Hippocampus: Fundamental Limits and
Neural Realization | cs.NE cs.DS cs.LG | It is believed that the hippocampus functions as a memory allocator in the
brain, the mechanism of which remains unknown. In Valiant's neuroidal model, the
hippocampus was described as a randomly connected graph, the computation on
which maps input to a set of activated neuroids with stable size. Valiant
proposed three requirements for the hippocampal circuit to become a stable
memory allocator (SMA): stability, continuity and orthogonality. The
functionality of SMA in hippocampus is essential in further computation within
cortex, according to Valiant's model.
In this paper, we put these requirements for memorization functions into
rigorous mathematical formulation and introduce the concept of capacity, based
on the probability of erroneous allocation. We prove fundamental limits for the
capacity and error probability of SMA, in both data-independent and
data-dependent settings. We also establish an example of stable memory
allocator that can be implemented via neuroidal circuits. Both theoretical
bounds and simulation results show that the neural SMA functions well.
| Wenlong Mou, Zhi Wang, Liwei Wang | null | 1612.04659 | null | null |
An Architecture for Deep, Hierarchical Generative Models | cs.LG | We present an architecture which lets us train deep, directed generative
models with many layers of latent variables. We include deterministic paths
between all latent variables and the generated output, and provide a richer set
of connections between computations for inference and generation, which enables
more effective communication of information throughout the model during
training. To improve performance on natural images, we incorporate a
lightweight autoregressive model in the reconstruction distribution. These
techniques permit end-to-end training of models with 10+ layers of latent
variables. Experiments show that our approach achieves state-of-the-art
performance on standard image modelling benchmarks, can expose latent class
structure in the absence of label information, and can provide convincing
imputations of occluded regions in natural images.
| Philip Bachman | null | 1612.04739 | null | null |
Incorporating Language Level Information into Acoustic Models | cs.CL cs.LG cs.SD | This paper proposes a class of novel Deep Recurrent Neural Networks that can
incorporate language-level information into acoustic models. For simplicity, we
name these networks Recurrent Deep Language Networks (RDLNs). Multiple
variants of RDLNs were considered, including two kinds of context information,
two methods to process the context, and two methods to incorporate the
language-level information. RDLNs provide possible methods to fine-tune the
whole Automatic Speech Recognition (ASR) system in the acoustic modeling
process.
| Peidong Wang, Deliang Wang | null | 1612.04744 | null | null |
Encapsulating models and approximate inference programs in probabilistic
modules | cs.AI cs.LG stat.ML | This paper introduces the probabilistic module interface, which allows
encapsulation of complex probabilistic models with latent variables alongside
custom stochastic approximate inference machinery, and provides a
platform-agnostic abstraction barrier separating the model internals from the
host probabilistic inference system. The interface can be seen as a stochastic
generalization of a standard simulation and density interface for probabilistic
primitives. We show that sound approximate inference algorithms can be
constructed for networks of probabilistic modules, and we demonstrate that the
interface can be implemented using learned stochastic inference networks and
MCMC and SMC approximate inference programs.
| Marco F. Cusumano-Towner, Vikash K. Mansinghka | null | 1612.04759 | null | null |
Detect, Replace, Refine: Deep Structured Prediction For Pixel Wise
Labeling | cs.CV cs.LG | Pixel wise image labeling is an interesting and challenging problem with
great significance in the computer vision community. In order for a dense
labeling algorithm to be able to achieve accurate and precise results, it has
to consider the dependencies that exist in the joint space of both the input
and the output variables. An implicit approach for modeling those dependencies
is by training a deep neural network that, given as input an initial estimate
of the output labels and the input image, predicts a new
refined estimate for the labels. In this context, our work is concerned with
determining the optimal architecture for performing the label improvement task. We
argue that the prior approaches of either directly predicting new label
estimates or predicting residual corrections w.r.t. the initial labels with
feed-forward deep network architectures are sub-optimal. Instead, we propose a
generic architecture that decomposes the label improvement task to three steps:
1) detecting the initial label estimates that are incorrect, 2) replacing the
incorrect labels with new ones, and finally 3) refining the renewed labels by
predicting residual corrections w.r.t. them. Furthermore, we explore and
compare various alternative architectures that consist of the
aforementioned Detect, Replace, and Refine components. We extensively
evaluate the examined architectures in the challenging task of dense disparity
estimation (stereo matching) and we report both quantitative and qualitative
results on three different datasets. Finally, our dense disparity estimation
network that implements the proposed generic architecture, achieves
state-of-the-art results in the KITTI 2015 test surpassing prior approaches by
a significant margin.
| Spyros Gidaris, Nikos Komodakis | null | 1612.04770 | null | null |
Deep Function Machines: Generalized Neural Networks for Topological
Layer Expression | stat.ML cs.CV cs.LG | In this paper we propose a generalization of deep neural networks called deep
function machines (DFMs). DFMs act on vector spaces of arbitrary (possibly
infinite) dimension and we show that a family of DFMs are invariant to the
dimension of input data; that is, the parameterization of the model does not
directly hinge on the quality of the input (e.g., high-resolution images). Using
this generalization we provide a new theory of universal approximation of
bounded non-linear operators between function spaces. We then suggest that DFMs
provide an expressive framework for designing new neural network layer types
with topological considerations in mind. Finally, we introduce a novel
architecture, RippLeNet, for resolution invariant computer vision, which
empirically achieves state of the art invariance.
| William H. Guss | null | 1612.04799 | null | null |
Anomaly Detection Using the Knowledge-based Temporal Abstraction Method | cs.LG cs.AI | The rapid growth in stored time-oriented data necessitates the development of
new methods for handling, processing, and interpreting large amounts of
temporal data. One important example of such processing is detecting anomalies
in time-oriented data. The Knowledge-Based Temporal Abstraction method was
previously proposed for intelligent interpretation of temporal data based on
predefined domain knowledge. In this study we propose a framework that
integrates the KBTA method with a temporal pattern mining process for anomaly
detection. According to the proposed method, a temporal pattern mining process
is applied to a database of basic temporal abstractions in order to
extract patterns representing normal behavior. These patterns are then analyzed
in order to identify abnormal time periods characterized by a significantly
small number of normal patterns. The proposed approach was demonstrated using a
dataset collected from a real server.
| Asaf Shabtai | null | 1612.04804 | null | null |
Uncovering the Dynamics of Crowdlearning and the Value of Knowledge | cs.SI cs.LG physics.soc-ph stat.ML | Learning from the crowd has become increasingly popular in the Web and social
media. There is a wide variety of crowdlearning sites in which, on the one
hand, users learn from the knowledge that other users contribute to the site,
and, on the other hand, knowledge is reviewed and curated by the same users
using assessment measures such as upvotes or likes.
In this paper, we present a probabilistic modeling framework of
crowdlearning, which uncovers the evolution of a user's expertise over time by
leveraging other users' assessments of her contributions. The model allows for
both off-site and on-site learning and captures forgetting of knowledge. We
then develop a scalable estimation method to fit the model parameters from
millions of recorded learning and contributing events. We show the
effectiveness of our model by tracing activity of ~25 thousand users in Stack
Overflow over a 4.5 year period. We find that answers with high knowledge value
are rare. Newbies and experts tend to acquire less knowledge than users in the
middle range. Prolific learners tend to be also proficient contributors that
post answers with high knowledge value.
| Utkarsh Upadhyay, Isabel Valera, Manuel Gomez-Rodriguez | 10.1145/3018661.3018685 | 1612.04831 | null | null |
Constraint Selection in Metric Learning | cs.LG stat.ML | A number of machine learning algorithms use a metric, or a distance, in
order to compare individuals. The Euclidean distance is usually employed, but
it may be more efficient to learn a parametric distance such as the Mahalanobis
metric. Learning such a metric has been a hot topic for more than ten years now,
and a number of methods have been proposed to efficiently learn it. However,
the nature of the problem makes it quite difficult for large scale data, as
well as data for which classes overlap. This paper presents a simple way of
improving accuracy and scalability of any iterative metric learning algorithm,
where constraints are obtained prior to the algorithm. The proposed approach
relies on a loss-dependent weighted selection of constraints that are used for
learning the metric. Using the corresponding dedicated loss function, the
method clearly obtains better results than state-of-the-art methods,
both in terms of accuracy and time complexity. Experimental results on
real-world, and potentially large, datasets demonstrate the effectiveness
of our proposition.
| Hoel Le Capitaine | null | 1612.04853 | null | null |
Bayesian Optimization for Machine Learning: A Practical Guidebook | cs.LG | The engineering of machine learning systems is still a nascent field, relying
on a seemingly daunting collection of quickly evolving tools and best
practices. It is our hope that this guidebook will serve as a useful resource
for machine learning practitioners looking to take advantage of Bayesian
optimization techniques. We outline four example machine learning problems that
can be solved using open source machine learning libraries, and highlight the
benefits of using Bayesian optimization in the context of these common machine
learning applications.
| Ian Dewancker, Michael McCourt, Scott Clark | null | 1612.04858 | null | null |
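A minimal sketch of the kind of workflow the guidebook covers, using the open-source scikit-optimize library; the model and search space below are illustrative choices, not taken from the text.

```python
# Bayesian optimization of hyperparameters with scikit-optimize (skopt).
from skopt import gp_minimize
from skopt.space import Real, Integer
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)

def objective(params):
    learning_rate, n_estimators = params
    clf = GradientBoostingClassifier(learning_rate=learning_rate,
                                     n_estimators=n_estimators, random_state=0)
    # gp_minimize minimizes, so return negative CV accuracy
    return -cross_val_score(clf, X, y, cv=3).mean()

space = [Real(1e-3, 1.0, prior="log-uniform", name="learning_rate"),
         Integer(50, 300, name="n_estimators")]
result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best params:", result.x, "best CV accuracy:", -result.fun)
```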
Interpretable Semantic Textual Similarity: Finding and explaining
differences between sentences | cs.CL cs.AI cs.LG | User acceptance of artificial intelligence agents might depend on their
ability to explain their reasoning, which requires adding an interpretability
layer that helps users understand their behavior. This paper focuses
on adding an interpretable layer on top of Semantic Textual Similarity (STS),
which measures the degree of semantic equivalence between two sentences. The
interpretability layer is formalized as the alignment between pairs of segments
across the two sentences, where the relation between the segments is labeled
with a relation type and a similarity score. We present a publicly available
dataset of sentence pairs annotated following the formalization. We then
develop a system trained on this dataset which, given a sentence pair, explains
what is similar and different, in the form of graded and typed segment
alignments. When evaluated on the dataset, the system performs better than an
informed baseline, showing that the dataset and task are well-defined and
feasible. Most importantly, two user studies show how the system output can be
used to automatically produce explanations in natural language. Users performed
better when having access to the explanations, providing preliminary evidence
that our dataset and method to automatically produce explanations are useful in
real applications.
| I. Lopez-Gazpio and M. Maritxalar and A. Gonzalez-Agirre and G. Rigau
and L. Uria and E. Agirre | 10.1016/j.knosys.2016.12.013 | 1612.04868 | null | null |
A Data-Driven Compressive Sensing Framework Tailored For
Energy-Efficient Wearable Sensing | cs.LG cs.IT math.IT | Compressive sensing (CS) is a promising technology for realizing
energy-efficient wireless sensors for long-term health monitoring. However,
conventional model-driven CS frameworks suffer from limited compression ratio
and reconstruction quality when dealing with physiological signals due to
inaccurate models and the neglect of individual variability. In this paper, we
propose a data-driven CS framework that can learn signal characteristics and
personalized features from any individual recording of physiologic signals to
enhance CS performance with a minimized number of measurements. Such
improvements are accomplished by a co-training approach that optimizes the
sensing matrix and the dictionary towards improved restricted isometry property
and signal sparsity, respectively. Experimental results upon ECG signals show
that the proposed method, at a compression ratio of 10x, successfully reduces
the isometry constant of the trained sensing matrices by 86% against random
matrices and improves the overall reconstructed signal-to-noise ratio by 15dB
over conventional model-driven approaches.
| Kai Xu, Yixing Li, Fengbo Ren | null | 1612.04887 | null | null |
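For readers unfamiliar with the CS setting, a generic sketch of sensing and sparse recovery follows (random sensing matrix plus orthogonal matching pursuit); the paper's actual contribution — jointly learning the sensing matrix and dictionary — is not reproduced here.

```python
import numpy as np

# Compressive sensing toy: measure a k-sparse signal with a random matrix,
# then recover it with orthogonal matching pursuit (OMP).
rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                        # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)                # sensing matrix
y = Phi @ x                                                   # compressed measurements

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))        # most correlated atom
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef                           # update residual

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x - x_hat))
```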
Deep learning is effective for the classification of OCT images of
normal versus Age-related Macular Degeneration | stat.ML cs.CV cs.LG | Objective: The advent of Electronic Medical Records (EMR) with large
electronic imaging databases along with advances in deep neural networks with
machine learning has provided a unique opportunity to achieve milestones in
automated image analysis. Optical coherence tomography (OCT) is the most
commonly obtained imaging modality in ophthalmology and represents a dense and
rich dataset when combined with labels derived from the EMR. We sought to
determine if deep learning could be utilized to distinguish normal OCT images
from images from patients with Age-related Macular Degeneration (AMD). Methods:
Automated extraction of an OCT imaging database was performed and linked to
clinical endpoints from the EMR. OCT macula scans were obtained by Heidelberg
Spectralis, and each OCT scan was linked to EMR clinical endpoints extracted
from EPIC. The central 11 images were selected from each OCT scan of two
cohorts of patients: normal and AMD. Cross-validation was performed using a
random subset of patients. Area under receiver operator curves (auROC) were
constructed at an independent image level, macular OCT level, and patient
level. Results: Of an extraction of 2.6 million OCT images linked to clinical
datapoints from the EMR, 52,690 normal and 48,312 AMD macular OCT images were
selected. A deep neural network was trained to categorize images as either
normal or AMD. At the image level, we achieved an auROC of 92.78% with an
accuracy of 87.63%. At the macula level, we achieved an auROC of 93.83% with an
accuracy of 88.98%. At a patient level, we achieved an auROC of 97.45% with an
accuracy of 93.45%. Peak sensitivity and specificity with optimal cutoffs were
92.64% and 93.69% respectively. Conclusions: Deep learning techniques are
effective for classifying OCT images. These findings have important
implications in utilizing OCT in automated screening and computer aided
diagnosis tools.
| Cecilia S. Lee, Doug M. Baughman, Aaron Y. Lee | null | 1612.04891 | null | null |
Efficient Distributed Semi-Supervised Learning using Stochastic
Regularization over Affinity Graphs | stat.ML cs.DC cs.LG | We describe a computationally efficient, stochastic graph-regularization
technique that can be utilized for the semi-supervised training of deep neural
networks in a parallel or distributed setting. We utilize a technique, first
described in [13] for the construction of mini-batches for stochastic gradient
descent (SGD) based on synthesized partitions of an affinity graph that are
consistent with the graph structure, but also preserve enough stochasticity for
convergence of SGD to good local minima. We show how our technique allows a
graph-based semi-supervised loss function to be decomposed into a sum over
objectives, facilitating data parallelism for scalable training of machine
learning models. Empirical results indicate that our method significantly
improves classification accuracy compared to the fully-supervised case when the
fraction of labeled data is low, and in the parallel case, achieves significant
speed-up in terms of wall-clock time to convergence. We show the results for
both sequential and distributed-memory semi-supervised DNN training on a speech
corpus.
| Sunil Thulasidasan, Jeffrey Bilmes, Garrett Kenyon | null | 1612.04898 | null | null |
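A toy sketch of the graph-regularization idea described in this and the following entry; the papers use an entropic regularizer and graph-consistent mini-batches, whereas this toy uses a squared-difference penalty on a full batch for brevity.

```python
import numpy as np

# Graph-smoothness penalty on model outputs: predictions on nodes connected in
# the affinity graph are encouraged to agree.
rng = np.random.default_rng(0)
n, c = 30, 4
P = rng.dirichlet(np.ones(c), size=n)          # predicted class distributions per node
W = (rng.random((n, n)) < 0.1).astype(float)   # random affinity graph (placeholder)
W = np.triu(W, 1)
W = W + W.T                                    # symmetric, zero diagonal

i, j = np.nonzero(np.triu(W, 1))               # one entry per undirected edge
graph_penalty = np.sum(W[i, j] * np.sum((P[i] - P[j]) ** 2, axis=1))
# total loss = supervised cross-entropy on labeled nodes + lam * graph_penalty
print("graph penalty:", graph_penalty)
```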
Semi-Supervised Phone Classification using Deep Neural Networks and
Stochastic Graph-Based Entropic Regularization | stat.ML cs.LG | We describe a graph-based semi-supervised learning framework in the context
of deep neural networks that uses a graph-based entropic regularizer to favor
smooth solutions over a graph induced by the data. The main contribution of
this work is a computationally efficient, stochastic graph-regularization
technique that uses mini-batches that are consistent with the graph structure,
but also provides enough stochasticity (in terms of mini-batch data diversity)
for convergence of stochastic gradient descent methods to good solutions. For
this work, we focus on results of frame-level phone classification accuracy on
the TIMIT speech corpus but our method is general and scalable to much larger
data sets. Results indicate that our method significantly improves
classification accuracy compared to the fully-supervised case when the fraction
of labeled data is low, and it is competitive with other methods in the fully
labeled case.
| Sunil Thulasidasan, Jeffrey Bilmes | null | 1612.04899 | null | null |
Dynamical Kinds and their Discovery | stat.ML cs.AI cs.LG cs.SY | We demonstrate the possibility of classifying causal systems into kinds that
share a common structure without first constructing an explicit dynamical model
or using prior knowledge of the system dynamics. The algorithmic ability to
determine whether arbitrary systems are governed by causal relations of the
same form offers significant practical applications in the development and
validation of dynamical models. It is also of theoretical interest as an
essential stage in the scientific inference of laws from empirical data. The
algorithm presented is based on the dynamical symmetry approach to dynamical
kinds. A dynamical symmetry with respect to time is an intervention on one or
more variables of a system that commutes with the time evolution of the system.
A dynamical kind is a class of systems sharing a set of dynamical symmetries.
The algorithm presented classifies deterministic, time-dependent causal systems
by directly comparing their exhibited symmetries. Using simulated, noisy data
from a variety of nonlinear systems, we show that this algorithm correctly
sorts systems into dynamical kinds. It is robust under significant sampling
error, is immune to violations of normality in sampling error, and fails
gracefully with increasing dynamical similarity. The algorithm we demonstrate
is the first to address this aspect of automated scientific discovery.
| Benjamin C. Jantzen | null | 1612.04933 | null | null |
Improving Neural Network Generalization by Combining Parallel Circuits
with Dropout | cs.NE cs.LG | In an attempt to solve the lengthy training times of neural networks, we
proposed Parallel Circuits (PCs), a biologically inspired architecture.
Previous work has shown that this approach fails to maintain generalization
performance in spite of achieving sharp speed gains. To address this issue, and
motivated by the way Dropout prevents node co-adaptation, in this paper, we
suggest an improvement by extending Dropout to the PC architecture. The paper
provides multiple insights into this combination, including a variety of fusion
approaches. Experiments show promising results in which improved error rates
are achieved in most cases, whilst maintaining the speed advantage of the PC
approach.
| Kien Tuong Phan, Tomas Henrique Maul, Tuong Thuy Vu, Lai Weng Kin | 10.1007/978-3-319-46675-0_63 | 1612.04970 | null | null |
Graph-based semi-supervised learning for relational networks | cs.SI cs.LG physics.data-an stat.ML | We address the problem of semi-supervised learning in relational networks,
networks in which nodes are entities and links are the relationships or
interactions between them. Typically this problem is confounded with the
problem of graph-based semi-supervised learning (GSSL), because both problems
represent the data as a graph and predict the missing class labels of nodes.
However, not all graphs are created equally. In GSSL a graph is constructed,
often from independent data, based on similarity. As such, edges tend to
connect instances with the same class label. Relational networks, however, can
be more heterogeneous and edges do not always indicate similarity. For
instance, instead of links being more likely to connect nodes with the same
class label, they may occur more frequently between nodes with different class
labels (link-heterogeneity). Or nodes with the same class label do not
necessarily have the same type of connectivity across the whole network
(class-heterogeneity), e.g. in a network of sexual interactions we may observe
links between opposite genders in some parts of the graph and links between the
same genders in others. Performing classification in networks with different
types of heterogeneity is a hard problem that is made harder still when we do
not know a-priori the type or level of heterogeneity. Here we present two
scalable approaches for graph-based semi-supervised learning for the more
general case of relational networks. We demonstrate these approaches on
synthetic and real-world networks that display different link patterns within
and between classes. Compared to state-of-the-art approaches, ours give better
classification performance without prior knowledge of how classes interact. In
particular, our two-step label propagation algorithm gives consistently good
accuracy and runs on networks of over 1.6 million nodes and 30 million edges in
around 12 seconds.
| Leto Peel | null | 1612.05001 | null | null |
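For context, a standard label-propagation sketch in the style this work builds on (iterate F <- alpha*S*F + (1-alpha)*Y on a normalized affinity graph); the paper's two-step variant for heterogeneous link patterns is not shown.

```python
import numpy as np

# Classic label propagation with symmetric normalization (Zhou et al.-style).
rng = np.random.default_rng(0)
n, c = 200, 2
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)                          # undirected graph, no self-loops
d = np.maximum(A.sum(axis=1), 1e-12)
S = A / np.sqrt(np.outer(d, d))                 # D^{-1/2} A D^{-1/2}

Y = np.zeros((n, c))
labeled = rng.choice(n, 20, replace=False)
Y[labeled, rng.integers(0, c, 20)] = 1          # a few seed labels

alpha, F = 0.9, Y.copy()
for _ in range(100):
    F = alpha * S @ F + (1 - alpha) * Y         # propagate, then clamp to seeds
pred = F.argmax(axis=1)                         # labels for all nodes
```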
Optimal structure and parameter learning of Ising models | cond-mat.stat-mech cs.LG physics.data-an stat.ML | Reconstruction of structure and parameters of an Ising model from binary
samples is a problem of practical importance in a variety of disciplines,
ranging from statistical physics and computational biology to image processing
and machine learning. The focus of the research community shifted towards
developing universal reconstruction algorithms which are both computationally
efficient and require the minimal amount of expensive data. We introduce a new
method, Interaction Screening, which accurately estimates the model parameters
using local optimization problems. The algorithm provably achieves perfect
graph structure recovery with an information-theoretically optimal number of
samples, notably in the low-temperature regime which is known to be the hardest
for learning. The efficacy of Interaction Screening is assessed through
extensive numerical tests on synthetic Ising models of various topologies with
different types of interactions, as well as on real data produced by a D-Wave
quantum computer. This study shows that the Interaction Screening method is an
exact, tractable and optimal technique universally solving the inverse Ising
problem.
| Andrey Y. Lokhov, Marc Vuffray, Sidhant Misra, Michael Chertkov | null | 1612.05024 | null | null |
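A minimal sketch of the Interaction Screening objective for a single spin, minimized by gradient descent on toy data; the full method adds an l1 penalty and thresholding for exact structure recovery, which this sketch omits (as does the local field term).

```python
import numpy as np

# Interaction Screening for spin i (zero field): minimize the empirical mean of
# exp(-s_i * sum_{j != i} J_j s_j) over the couplings J.
rng = np.random.default_rng(0)
M, p, i = 5000, 6, 0
S = rng.choice([-1.0, 1.0], size=(M, p))       # placeholder spin samples
others = [j for j in range(p) if j != i]

J = np.zeros(len(others))
lr = 0.2
for _ in range(500):
    act = S[:, i] * (S[:, others] @ J)         # s_i * sum_j J_j s_j per sample
    w = np.exp(-act)                           # screening weights
    # gradient of mean(exp(-act)) w.r.t. J_j is -mean(w * s_i * s_j)
    grad = -(w[:, None] * S[:, i, None] * S[:, others]).mean(axis=0)
    J -= lr * grad

print("estimated couplings to node", i, ":", np.round(J, 3))
```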
Towards Score Following in Sheet Music Images | cs.LG cs.CV | This paper addresses the matching of short music audio snippets to the
corresponding pixel location in images of sheet music. A system is presented
that simultaneously learns to read notes, listens to music and matches the
currently played music to its corresponding notes in the sheet. It consists of
an end-to-end multi-modal convolutional neural network that takes as input
images of sheet music and spectrograms of the respective audio snippets. It
learns to predict, for a given unseen audio snippet (covering approximately one
bar of music), the corresponding position in the respective score line. Our
results suggest that with the use of (deep) neural networks -- which have
proven to be powerful image processing models -- working with sheet music
becomes feasible and a promising future research direction.
| Matthias Dorfer, Andreas Arzt, Gerhard Widmer | null | 1612.05050 | null | null |
Graphical RNN Models | cs.NE cs.LG | Many time series are generated by a set of entities that interact with one
another over time. This paper introduces a broad, flexible framework to learn
from multiple inter-dependent time series generated by such entities. Our
framework explicitly models the entities and their interactions through time.
It achieves this by building on the capabilities of Recurrent Neural Networks,
while also offering several ways to incorporate domain knowledge/constraints
into the model architecture. The capabilities of our approach are showcased
through an application to weather prediction, which shows gains over strong
baselines.
| Ashish Bora, Sugato Basu, Joydeep Ghosh | null | 1612.05054 | null | null |
Feature Learning for Chord Recognition: The Deep Chroma Extractor | cs.SD cs.LG | We explore frame-level audio feature learning for chord recognition using
artificial neural networks. We present the argument that chroma vectors
potentially hold enough information to model harmonic content of audio for
chord recognition, but that standard chroma extractors compute features that
are too noisy. This leads us to propose a learned chroma feature extractor based on
artificial neural networks. It is trained to compute chroma features that
encode harmonic information important for chord recognition, while being robust
to irrelevant interferences. We achieve this by feeding the network an audio
spectrum with context instead of a single frame as input. This way, the network
can learn to selectively compensate for noise and resolve harmonic ambiguities.
We compare the resulting features to hand-crafted ones by using a simple
linear frame-wise classifier for chord recognition on various data sets. The
results show that the learned feature extractor produces superior chroma
vectors for chord recognition.
| Filip Korzeniowski and Gerhard Widmer | null | 1612.05065 | null | null |
Towards End-to-End Audio-Sheet-Music Retrieval | cs.SD cs.IR cs.LG | This paper demonstrates the feasibility of learning to retrieve short
snippets of sheet music (images) when given a short query excerpt of music
(audio) -- and vice versa --, without any symbolic representation of music or
scores. This would be highly useful in many content-based musical retrieval
scenarios. Our approach is based on Deep Canonical Correlation Analysis (DCCA)
and learns correlated latent spaces allowing for cross-modality retrieval in
both directions. Initial experiments with relatively simple monophonic music
show promising results.
| Matthias Dorfer, Andreas Arzt, Gerhard Widmer | null | 1612.05070 | null | null |
A Fully Convolutional Deep Auditory Model for Musical Chord Recognition | cs.LG cs.SD | Chord recognition systems depend on robust feature extraction pipelines.
While these pipelines are traditionally hand-crafted, recent advances in
end-to-end machine learning have begun to inspire researchers to explore
data-driven methods for such tasks. In this paper, we present a chord
recognition system that uses a fully convolutional deep auditory model for
feature extraction. The extracted features are processed by a Conditional
Random Field that decodes the final chord sequence. Both processing stages are
trained automatically and do not require expert knowledge for optimising
parameters. We show that the learned auditory system extracts musically
interpretable features, and that the proposed chord recognition system achieves
results on par or better than state-of-the-art algorithms.
| Filip Korzeniowski and Gerhard Widmer | 10.1109/MLSP.2016.7738895 | 1612.05082 | null | null |
Coupling Adaptive Batch Sizes with Learning Rates | cs.LG cs.CV stat.ML | Mini-batch stochastic gradient descent and variants thereof have become
standard for large-scale empirical risk minimization like the training of
neural networks. These methods are usually used with a constant batch size
chosen by simple empirical inspection. The batch size significantly influences
the behavior of the stochastic optimization algorithm, though, since it
determines the variance of the gradient estimates. This variance also changes
over the optimization process; when using a constant batch size, stability and
convergence are thus often enforced by means of a (manually tuned) decreasing
learning rate schedule.
We propose a practical method for dynamic batch size adaptation. It estimates
the variance of the stochastic gradients and adapts the batch size to decrease
the variance proportionally to the value of the objective function, removing
the need for the aforementioned learning rate decrease. In contrast to recent
related work, our algorithm couples the batch size to the learning rate,
directly reflecting the known relationship between the two. On popular image
classification benchmarks, our batch size adaptation yields faster optimization
convergence, while simultaneously simplifying learning rate tuning. A
TensorFlow implementation is available.
| Lukas Balles and Javier Romero and Philipp Hennig | null | 1612.05086 | null | null |
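A toy illustration of the coupling idea on a least-squares problem: estimate the gradient variance from per-sample gradients and grow the batch size so the variance stays proportional to the objective value. The schedule constants below are illustrative, not the paper's.

```python
import numpy as np

# SGD on a linear least-squares toy with a variance-driven batch-size schedule.
rng = np.random.default_rng(0)
N, d = 10000, 10
A = rng.standard_normal((N, d))
b = rng.standard_normal(N)
w = np.zeros(d)
lr, batch = 0.05, 16

for step in range(200):
    idx = rng.choice(N, batch, replace=False)
    r = A[idx] @ w - b[idx]
    per_sample_grads = 2 * r[:, None] * A[idx]        # gradient of (a_i.w - b_i)^2
    g = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0).sum() / batch  # variance of the mean gradient
    loss = np.mean((A @ w - b) ** 2)                  # full-data loss (toy only)
    if var > loss:                                    # keep variance ~ objective value
        batch = min(2 * batch, N)
    w -= lr * g
```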
On the Potential of Simple Framewise Approaches to Piano Transcription | cs.SD cs.LG | In an attempt to explore the limitations of simple approaches to the task
of piano transcription (as usually defined in MIR), we conduct an in-depth
analysis of neural network-based framewise transcription. We systematically
compare different popular input representations for transcription systems to
determine the ones most suitable for use with neural networks. Exploiting
recent advances in training techniques and new regularizers, and taking into
account hyper-parameter tuning, we show that it is possible, by simple
bottom-up frame-wise processing, to obtain a piano transcriber that outperforms
the current published state of the art on the publicly available MAPS dataset
-- without any complex post-processing steps. Thus, we propose this simple
approach as a new baseline for this dataset, for future transcription research
to build on and improve.
| Rainer Kelz, Matthias Dorfer, Filip Korzeniowski, Sebastian B\"ock,
Andreas Arzt, Gerhard Widmer | null | 1612.05153 | null | null |
Separation of Concerns in Reinforcement Learning | cs.LG cs.AI | In this paper, we propose a framework for solving a single-agent task by
using multiple agents, each focusing on different aspects of the task. This
approach has two main advantages: 1) it allows for training specialized agents
on different parts of the task, and 2) it provides a new way to transfer
knowledge, by transferring trained agents. Our framework generalizes the
traditional hierarchical decomposition, in which, at any moment in time, a
single agent has control until it has solved its particular subtask. We
illustrate our framework with empirical experiments on two domains.
| Harm van Seijen and Mehdi Fatemi and Joshua Romoff and Romain Laroche | null | 1612.05159 | null | null |
CSVideoNet: A Real-time End-to-end Learning Framework for
High-frame-rate Video Compressive Sensing | cs.CV cs.LG | This paper addresses the real-time encoding-decoding problem for
high-frame-rate video compressive sensing (CS). Unlike prior works that perform
reconstruction using iterative optimization-based approaches, we propose a
non-iterative model, named "CSVideoNet". CSVideoNet directly learns the inverse
mapping of CS and reconstructs the original input in a single forward
propagation. To overcome the limitations of existing CS cameras, we propose a
multi-rate CNN and a synthesizing RNN to improve the trade-off between
compression ratio (CR) and spatial-temporal resolution of the reconstructed
videos. The experiment results demonstrate that CSVideoNet significantly
outperforms the state-of-the-art approaches. With no pre/post-processing, we
achieve 25dB PSNR recovery quality at 100x CR, with a frame rate of 125 fps on
a Titan X GPU. Due to the feedforward and high-data-concurrency natures of
CSVideoNet, it can take advantage of GPU acceleration to achieve three orders
of magnitude speed-up over conventional iterative-based approaches. We share
the source code at https://github.com/PSCLab-ASU/CSVideoNet.
| Kai Xu, Fengbo Ren | null | 1612.05203 | null | null |
Tunable Efficient Unitary Neural Networks (EUNN) and their application
to RNNs | cs.LG cs.NE stat.ML | Using unitary (instead of general) matrices in artificial neural networks
(ANNs) is a promising way to solve the gradient explosion/vanishing problem, as
well as to enable ANNs to learn long-term correlations in the data. This
approach appears particularly promising for Recurrent Neural Networks (RNNs).
In this work, we present a new architecture for implementing an Efficient
Unitary Neural Network (EUNNs); its main advantages can be summarized as
follows. Firstly, the representation capacity of the unitary space in an EUNN
is fully tunable, ranging from a subspace of SU(N) to the entire unitary space.
Secondly, the computational complexity for training an EUNN is merely
$\mathcal{O}(1)$ per parameter. Finally, we test the performance of EUNNs on
the standard copying task, the pixel-permuted MNIST digit recognition benchmark
as well as the Speech Prediction Test (TIMIT). We find that our architecture
significantly outperforms both other state-of-the-art unitary RNNs and the LSTM
architecture, in terms of the final performance and/or the wall-clock training
speed. EUNNs are thus promising alternatives to RNNs and LSTMs for a wide
variety of applications.
| Li Jing, Yichen Shen, Tena Dub\v{c}ek, John Peurifoy, Scott Skirlo,
Yann LeCun, Max Tegmark, Marin Solja\v{c}i\'c | null | 1612.05231 | null | null |
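A minimal sketch of the building block behind such parameterizations: composing 2x2 complex Givens rotations yields a matrix that is unitary by construction (the EUNN's specific layout and tunable parameter count are simplified away here).

```python
import numpy as np

# Build an N x N unitary matrix as a product of 2x2 complex Givens rotations.
rng = np.random.default_rng(0)
N = 8
U = np.eye(N, dtype=complex)
for i in range(N - 1):
    theta, phi = rng.uniform(0, 2 * np.pi, 2)
    G = np.eye(N, dtype=complex)
    G[i, i] = np.exp(1j * phi) * np.cos(theta)
    G[i, i + 1] = -np.sin(theta)
    G[i + 1, i] = np.exp(1j * phi) * np.sin(theta)
    G[i + 1, i + 1] = np.cos(theta)
    U = G @ U                                   # each factor is exactly unitary

print(np.allclose(U.conj().T @ U, np.eye(N)))   # True: U^H U = I
```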
Private Learning on Networks | cs.DC cs.LG math.OC | Continual data collection and widespread deployment of machine learning
algorithms, particularly the distributed variants, have raised new privacy
challenges. In a distributed machine learning scenario, the dataset is stored
among several machines and they solve a distributed optimization problem to
collectively learn the underlying model. We present a secure multi-party
computation inspired privacy preserving distributed algorithm for optimizing a
convex function consisting of several possibly non-convex functions. Each
individual objective function is privately stored with an agent while the
agents communicate model parameters with neighbor machines connected in a
network. We show that our algorithm can correctly optimize the overall
objective function and learn the underlying model accurately. We further prove
that under a vertex connectivity condition on the topology, our algorithm
preserves privacy of individual objective functions. We establish limits on
what a coalition of adversaries can learn by observing the messages and states
shared over a network.
| Shripad Gade and Nitin H. Vaidya | null | 1612.05236 | null | null |
A Simple Approach to Multilingual Polarity Classification in Twitter | cs.CL cs.LG stat.ML | Recently, sentiment analysis has received a lot of attention due to the
interest in mining opinions of social media users. Sentiment analysis consists
in determining the polarity of a given text, i.e., its degree of positiveness
or negativeness. Traditionally, sentiment analysis algorithms have been
tailored to a specific language, given the complexity of handling the
lexical variations and errors introduced by the people generating content. In
this contribution, our aim is to provide a simple to implement and easy to use
multilingual framework, that can serve as a baseline for sentiment analysis
contests, and as starting point to build new sentiment analysis systems. We
compare our approach in eight different languages, three of them have important
international contests, namely, SemEval (English), TASS (Spanish), and
SENTIPOLC (Italian). Within the competitions our approach reaches medium-to-high
positions in the rankings, whereas in the remaining languages it
outperforms the reported results.
| Eric S. Tellez, Sabino Miranda Jim\'enez, Mario Graff, Daniela
Moctezuma, Ranyart R. Su\'arez, Oscar S. Siordia | null | 1612.05270 | null | null |
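In the same spirit, a minimal language-agnostic sentiment baseline using character n-gram TF-IDF features and a linear classifier; the paper's exact feature set and tokenization differ, and the toy data below is a stand-in.

```python
# Language-agnostic sentiment baseline: character n-grams + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["me encanta este producto", "this is terrible", "fantastico!", "muy malo"]
labels = ["pos", "neg", "pos", "neg"]      # toy data; real corpora are per-language

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # works across languages
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["es terrible", "i love it"]))
```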
Automatic time-series phenotyping using massive feature extraction | cs.LG physics.data-an q-bio.QM | Across a far-reaching diversity of scientific and industrial applications, a
general key problem involves relating the structure of time-series data to a
meaningful outcome, such as detecting anomalous events from sensor recordings,
or diagnosing patients from physiological time-series measurements like heart
rate or brain activity. Currently, researchers must devote considerable effort
manually devising, or searching for, properties of their time series that are
suitable for the particular analysis problem at hand. Addressing this
non-systematic and time-consuming procedure, here we introduce a new tool,
hctsa, that selects interpretable and useful properties of time series
automatically, by comparing implementations over 7700 time-series features
drawn from diverse scientific literatures. Using two exemplar biological
applications, we show how hctsa allows researchers to leverage decades of
time-series research to quantify and understand informative structure in their
time-series data.
| Ben D Fulcher and Nick S Jones | 10.1016/j.cels.2017.10.001 | 1612.05296 | null | null |
A Survey of Inductive Biases for Factorial Representation-Learning | cs.LG cs.AI | With the resurgence of interest in neural networks, representation learning
has re-emerged as a central focus in artificial intelligence. Representation
learning refers to the discovery of useful encodings of data that make
domain-relevant information explicit. Factorial representations identify
underlying independent causal factors of variation in data. A factorial
representation is compact and faithful, makes the causal factors explicit, and
facilitates human interpretation of data. Factorial representations support a
variety of applications, including the generation of novel examples, indexing
and search, novelty detection, and transfer learning.
This article surveys various constraints that encourage a learning algorithm
to discover factorial representations. I dichotomize the constraints in terms
of unsupervised and supervised inductive bias. Unsupervised inductive biases
exploit assumptions about the environment, such as the statistical distribution
of factor coefficients, assumptions about the perturbations a factor should be
invariant to (e.g. a representation of an object can be invariant to rotation,
translation or scaling), and assumptions about how factors are combined to
synthesize an observation. Supervised inductive biases are constraints on the
representations based on additional information connected to observations.
Supervisory labels come in a variety of types, which vary in how strongly they
constrain the representation, how many factors are labeled, how many
observations are labeled, and whether or not we know the associations between
the constraints and the factors they are related to.
This survey brings together a wide variety of models that all touch on the
problem of learning factorial representations and lays out a framework for
comparing these models based on the strengths of the underlying supervised and
unsupervised inductive biases.
| Karl Ridgeway | null | 1612.05299 | null | null |
Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme
under Weak Strong Convexity Assumption | cs.LG math.OC stat.ML | We propose a projected semi-stochastic gradient descent method with a
mini-batch scheme for improving both the theoretical complexity and practical
performance of the general stochastic gradient descent method (SGD). We are
able to prove linear convergence under weak strong convexity assumption. This
requires no strong convexity assumption for minimizing the sum of smooth convex
functions subject to a compact polyhedral set, a setting that remains popular
across the machine learning community. Our PS2GD preserves the low cost per iteration and
high optimization accuracy via stochastic gradient variance-reduced technique,
and admits a simple parallel implementation with mini-batches. Moreover, PS2GD
is also applicable to the dual problem of SVMs with hinge loss.
| Jie Liu, Martin Takac | null | 1612.05356 | null | null |
Neural networks based EEG-Speech Models | cs.SD cs.LG | In this paper, we propose an end-to-end neural network (NN) based EEG-speech
(NES) modeling framework, in which three network structures are developed to
map imagined EEG signals to phonemes. The proposed NES models incorporate a
language model based EEG feature extraction layer, an acoustic feature mapping
layer, and a restricted Boltzmann machine (RBM) based feature learning
layer. The NES models can jointly realize the representation of multichannel
EEG signals and the projection of acoustic speech signals. Among three proposed
NES models, two augmented networks utilize spoken EEG signals as either bias or
gate information to strengthen the feature learning and translation of imagined
EEG signals. Experimental results show that all three proposed NES models
outperform the baseline support vector machine (SVM) method on EEG-speech
classification. With respect to binary classification, our approach achieves
comparable results relative to the deep belief network approach.
| Pengfei Sun and Jun Qin | null | 1612.05369 | null | null |
Defensive Player Classification in the National Basketball Association | cs.LG cs.AI | The National Basketball Association(NBA) has expanded their data gathering
and have heavily invested in new technologies to gather advanced performance
metrics on players. This expanded data set allows analysts to use unique
performance metrics in models to estimate and classify player performance.
Instead of grouping players together based on physical attributes and positions
played, analysts can group together players that play similarly to each other
based on these tracked metrics. Existing methods for player classification have
typically used offensive metrics for clustering [1]. There have been attempts
to classify players using past defensive metrics, but the lack of quality
metrics has not produced promising results. The classifications presented in
the paper use newly introduced defensive metrics to find different defensive
positions for each player. Without knowing the number of categories that
players can be cast into, Gaussian Mixture Models (GMM) can be applied to find
the optimal number of clusters. In the model presented, five different
defensive player types can be identified.
| Neil Seward | null | 1612.05502 | null | null |
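A minimal sketch of the clustering step described above: fit GMMs with varying numbers of components and pick the count minimizing BIC. The features below are random placeholders for the defensive tracking metrics.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Select the number of player types by BIC over candidate GMMs.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 6))          # players x defensive metrics (placeholder)

bics = []
for k in range(1, 10):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics.append(gmm.bic(X))                # lower BIC = better fit/complexity trade-off
best_k = int(np.argmin(bics)) + 1
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
print("selected number of defensive player types:", best_k)
```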
Deep Reinforcement Learning with Successor Features for Navigation
across Similar Environments | cs.RO cs.AI cs.LG | In this paper we consider the problem of robot navigation in simple maze-like
environments where the robot has to rely on its onboard sensors to perform the
navigation task. In particular, we are interested in solutions to this problem
that do not require localization, mapping or planning. Additionally, we require
that our solution can quickly adapt to new situations (e.g., changing
navigation goals and environments). To meet these criteria we frame this
problem as a sequence of related reinforcement learning tasks. We propose a
successor feature based deep reinforcement learning algorithm that can learn to
transfer knowledge from previously mastered navigation tasks to new problem
instances. Our algorithm substantially decreases the required learning time
after the first task instance has been solved, which makes it easily adaptable
to changing environments. We validate our method in both simulated and real
robot experiments with a Robotino and compare it to a set of baseline methods
including classical planning-based navigation.
| Jingwei Zhang, Jost Tobias Springenberg, Joschka Boedecker, Wolfram
Burgard | null | 1612.05533 | null | null |
Models, networks and algorithmic complexity | cs.LG | I aim to show that models, classification or generating functions,
invariances and datasets are algorithmically equivalent concepts once properly
defined, and provide some concrete examples of them. I then show that a) neural
networks (NNs) of different kinds can be seen to implement models, b) that
perturbations of inputs and nodes in NNs trained to optimally implement simple
models propagate strongly, c) that there is a framework in which recurrent,
deep and shallow networks can be seen to fall into a descriptive power
hierarchy in agreement with notions from the theory of recursive functions. The
motivation for these definitions and following analysis lies in the context of
cognitive neuroscience, and in particular in Ruffini (2016), where the concept
of model is used extensively, as is the concept of algorithmic complexity.
| Giulio Ruffini | null | 1612.05627 | null | null |
An Alternative Softmax Operator for Reinforcement Learning | cs.AI cs.LG stat.ML | A softmax operator applied to a set of values acts somewhat like the
maximization function and somewhat like an average. In sequential decision
making, softmax is often used in settings where it is necessary to maximize
utility but also to hedge against problems that arise from putting all of one's
weight behind a single maximum utility decision. The Boltzmann softmax operator
is the most commonly used softmax operator in this setting, but we show that
this operator is prone to misbehavior. In this work, we study a differentiable
softmax operator that, among other properties, is a non-expansion ensuring a
convergent behavior in learning and planning. We introduce a variant of the SARSA
algorithm that, by utilizing the new operator, computes a Boltzmann policy with
a state-dependent temperature parameter. We show that the algorithm is
convergent and that it performs favorably in practice.
| Kavosh Asadi, Michael L. Littman | null | 1612.05628 | null | null |
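The differentiable non-expansion studied here is, to our reading, the
mellowmax operator; a numerically stable sketch under that assumption, with
omega as the temperature-like parameter:

```python
# Sketch of the mellowmax operator (assuming this is the paper's operator):
#   mm_w(x) = log( (1/n) * sum_i exp(w * x_i) ) / w
import numpy as np
from scipy.special import logsumexp

def mellowmax(x, omega=5.0):
    x = np.asarray(x, dtype=float)
    # logsumexp for stability; subtracting log(n) gives the mean inside the log
    return (logsumexp(omega * x) - np.log(len(x))) / omega

vals = [1.0, 2.0, 3.0]
print(mellowmax(vals, omega=0.1))   # ~ mean as omega -> 0
print(mellowmax(vals, omega=50.0))  # ~ max as omega -> infinity
```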
A User Simulator for Task-Completion Dialogues | cs.LG cs.AI cs.CL | Despite widespread interests in reinforcement-learning for task-oriented
dialogue systems, several obstacles can frustrate research and development
progress. First, reinforcement learners typically require interaction with the
environment, so conventional dialogue corpora cannot be used directly. Second,
each task presents specific challenges, requiring a separate corpus of
task-specific annotated data. Third, collecting and annotating human-machine or
human-human conversations for task-oriented dialogues requires extensive domain
knowledge. Because building an appropriate dataset can be both financially
costly and time-consuming, one popular approach is to build a user simulator
based upon a corpus of example dialogues. Then, one can train reinforcement
learning agents in an online fashion as they interact with the simulator.
Dialogue agents trained on these simulators can serve as an effective starting
point. Once agents master the simulator, they may be deployed in a real
environment to interact with humans, and continue to be trained online. To ease
empirical algorithmic comparisons in dialogues, this paper introduces a new,
publicly available simulation framework, where our simulator, designed for the
movie-booking domain, leverages both rules and collected data. The simulator
supports two tasks: movie ticket booking and movie seeking. Finally, we
demonstrate several agents and detail the procedure to add and test your own
agent in the proposed framework.
| Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao,
Yun-Nung Chen | null | 1612.05688 | null | null |
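A toy sketch of the simulator-in-the-loop pattern the abstract describes; the
ToyUserSimulator class and its actions are invented placeholders, not the
released framework's API:

```python
# Sketch: an RL dialogue agent trains by interacting with a user simulator
# instead of a static corpus.
import random

class ToyUserSimulator:
    """Rule-based stand-in: the 'user' wants a specific movie ticket."""
    def __init__(self):
        self.goal = {"movie": "Inception", "tickets": 2}
    def respond(self, agent_action):
        if agent_action == "confirm_booking":
            return "success", 1.0, True       # observation, reward, done
        return "inform_" + random.choice(list(self.goal)), -0.1, False

def run_episode(policy, sim, max_turns=10):
    obs, total = "greeting", 0.0
    for _ in range(max_turns):
        action = policy(obs)
        obs, reward, done = sim.respond(action)
        total += reward
        if done:
            break
    return total

def random_policy(obs):
    return random.choice(["request_movie", "request_tickets", "confirm_booking"])

print(run_episode(random_policy, ToyUserSimulator()))
```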
Reinforcement Learning Using Quantum Boltzmann Machines | quant-ph cs.AI cs.LG cs.NE math.OC | We investigate whether quantum annealers with select chip layouts can
outperform classical computers in reinforcement learning tasks. We associate a
transverse field Ising spin Hamiltonian with a layout of qubits similar to that
of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to
numerically simulate quantum sampling from this system. We design a
reinforcement learning algorithm in which the set of visible nodes representing
the states and actions of an optimal policy are the first and last layers of
the deep network. In the absence of a transverse field, our simulations show
that DBMs are trained more effectively than restricted Boltzmann machines
(RBMs) with
the same number of nodes. We then develop a framework for training the network
as a quantum Boltzmann machine (QBM) in the presence of a significant
transverse field for reinforcement learning. This method also outperforms the
reinforcement learning method that uses RBMs.
| Daniel Crawford, Anna Levit, Navid Ghadermarzy, Jaspreet S. Oberoi,
Pooya Ronagh | null | 1612.05695 | null | null |
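The quantum training itself relies on (simulated) quantum annealing, but the
classical RBM baseline mentioned above is commonly trained by approximating
Q(s,a) with the machine's negative free energy; a sketch of that classical
reference point, with random placeholder weights rather than trained ones:

```python
# Sketch: free-energy-based Q-values for a binary RBM whose visible units
# encode a (state, action) pair. Weights here are random placeholders.
import numpy as np

def rbm_free_energy(v, b, c, W):
    """F(v) = -b.v - sum_j log(1 + exp(c_j + W[:, j].v)) for a binary RBM."""
    return -(b @ v) - np.sum(np.log1p(np.exp(c + v @ W)))

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4           # visible = concatenated (state, action) bits
b = rng.normal(size=n_visible)       # visible biases
c = rng.normal(size=n_hidden)        # hidden biases
W = rng.normal(size=(n_visible, n_hidden))

sa = rng.integers(0, 2, size=n_visible).astype(float)  # one (s, a) encoding
q_value = -rbm_free_energy(sa, b, c, W)  # Q(s, a) ~ -F(s, a)
print(q_value)
```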
Mutual information for fitting deep nonlinear models | math.OC cs.LG stat.ML | Deep nonlinear models pose a challenge for fitting parameters due to a lack of
knowledge of the hidden layer and the potentially non-affine relation of the
initial and observed layers. In the present work we investigate the use of
information theoretic measures such as mutual information and Kullback-Leibler
(KL) divergence as objective functions for fitting such models without
knowledge of the hidden layer. We investigate one model as a proof of concept
and one application to cognitive performance. We further investigate the use of
optimizers with these methods. Mutual information is largely successful as an
objective, depending on the parameters. KL divergence is found to be similarly
successful, given some knowledge of the statistics of the hidden layer.
| Jacob S. Hunter (1) and Nathan O. Hodas (1) ((1) Pacific Northwest
National Laboratory) | null | 1612.05708 | null | null |
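A toy sketch of using a histogram-based mutual-information estimate as a
fitting objective; the one-parameter tanh model and the synthetic data below
are invented stand-ins, not the paper's models:

```python
# Sketch: fit a model parameter by maximizing mutual information between
# model outputs and observations, without a likelihood for the hidden layer.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(0)
inputs = rng.normal(size=2000)
observed = np.tanh(1.7 * inputs) + 0.1 * rng.normal(size=2000)  # unknown map

# Grid-search a one-parameter model by maximizing MI with the observations.
thetas = np.linspace(0.1, 3.0, 30)
scores = [mutual_information(np.tanh(t * inputs), observed) for t in thetas]
print("best theta:", thetas[int(np.argmax(scores))])
```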
Exploiting sparsity to build efficient kernel based collaborative
filtering for top-N item recommendation | cs.IR cs.AI cs.LG | The increasing availability of implicit feedback datasets has raised
interest in developing effective collaborative filtering techniques able to
deal asymmetrically with unambiguous positive feedback and ambiguous negative
feedback. In this paper, we propose a principled kernel-based collaborative
filtering method for top-N item recommendation with implicit feedback. We
present an efficient implementation using the linear kernel, and we show how to
generalize it to kernels of the dot product family preserving the efficiency.
We also investigate the elements that influence the sparsity of a standard
cosine kernel. This analysis shows that the sparsity of the kernel strongly
depends on the properties of the dataset, in particular on its long-tail
distribution. We compare our method with state-of-the-art algorithms, achieving
good results in terms of both efficiency and effectiveness.
| Mirko Polato and Fabio Aiolli | null | 1612.05729 | null | null |
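A minimal sketch of top-N scoring with a linear kernel on implicit feedback,
written as a generic item-based formulation rather than the paper's exact
method; the tiny rating matrix is a placeholder:

```python
# Sketch: item-item linear kernel from binary implicit feedback, then
# score and rank unseen items for one user.
import numpy as np
from scipy.sparse import csr_matrix

# Binary user-item matrix R (implicit positive feedback only).
R = csr_matrix(np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 0, 0, 0],
], dtype=float))

K = (R.T @ R).toarray()          # item-item linear kernel; sparsity follows data
np.fill_diagonal(K, 0.0)         # don't recommend an item because of itself

user = 0
scores = R[user].toarray().ravel() @ K
seen = R[user].toarray().ravel() > 0
scores[seen] = -np.inf           # exclude already-consumed items
top_n = np.argsort(-scores)[:2]
print("recommend items:", top_n)
```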
Towards Wide Learning: Experiments in Healthcare | stat.ML cs.LG | In this paper, a Wide Learning architecture is proposed that attempts to
automate the feature engineering portion of the machine learning (ML) pipeline.
Feature engineering is widely considered the most time-consuming and
expertise-demanding portion of any ML task. The proposed feature recommendation
approach is tested on 3 healthcare datasets: a) PhysioNet Challenge 2016
dataset of phonocardiogram (PCG) signals, b) MIMIC II blood pressure
classification dataset of photoplethysmogram (PPG) signals and c) an emotion
classification dataset of PPG signals. While the proposed method beats the
state-of-the-art techniques on the second and third datasets, it reaches 94.38%
of the accuracy level of the winner of the PhysioNet Challenge 2016. In all
cases, the effort needed to reach satisfactory performance was drastically less
(a few days) than with manual feature engineering.
| Snehasis Banerjee, Tanushyam Chattopadhyay, Swagata Biswas, Rohan
Banerjee, Anirban Dutta Choudhury, Arpan Pal and Utpal Garain | null | 1612.0573 | null | null |
Machine Learning, Linear and Bayesian Models for Logistic Regression in
Failure Detection Problems | cs.LG stat.ML | In this work, we study the use of logistic regression in manufacturing
failure detection. As a data set for the analysis, we used the data from the
Kaggle competition Bosch Production Line Performance. We considered the use of
machine learning, linear and Bayesian models. For the machine learning
approach, we analyzed the XGBoost tree-based classifier to obtain a high
classification score.
Using the generalized linear model for logistic regression makes it possible to
analyze the influence of the factors under study. The Bayesian approach for
logistic regression gives the statistical distribution for the parameters of
the model. It can be useful in probabilistic analyses, e.g. risk assessment.
| B. Pavlyshenko | null | 1612.0574 | null | null |
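A sketch of the two non-Bayesian approaches on synthetic data (random
placeholder features, not the Bosch dataset); it assumes the xgboost package
is installed:

```python
# Sketch: GLM (interpretable coefficients) vs. gradient-boosted trees
# (higher accuracy) on a synthetic binary failure-detection task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier  # assumes xgboost is installed

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("GLM coefficients (factor influence):", np.round(glm.coef_, 2))

xgb = XGBClassifier(n_estimators=200, max_depth=3,
                    eval_metric="logloss").fit(X_tr, y_tr)
print("GLM acc:", glm.score(X_te, y_te), "XGB acc:", xgb.score(X_te, y_te))
```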
Learning to predict where to look in interactive environments using deep
recurrent q-learning | cs.CV cs.LG | Bottom-Up (BU) saliency models do not perform well in complex interactive
environments where humans are actively engaged in tasks (e.g., sandwich making
and playing video games). In this paper, we leverage Reinforcement Learning
(RL) to highlight task-relevant locations of input frames. We propose a soft
attention mechanism combined with the Deep Q-Network (DQN) model to teach an RL
agent how to play a game and where to look by focusing on the most pertinent
parts of its visual input. Our evaluations on several Atari 2600 games show
that the soft-attention-based model could predict fixation locations
significantly better than bottom-up models such as the Itti-Koch saliency and
Graph-Based Visual Saliency (GBVS) models.
| Sajad Mousavi, Michael Schukat, Enda Howley, Ali Borji and Nasser
Mozayani | null | 1612.05753 | null | null |
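A toy sketch of soft spatial attention of the kind described: a softmax over
per-location scores yields weights, and their weighted sum feeds the Q-network
head. The dimensions and scoring vector below are placeholders, not the
paper's architecture:

```python
# Sketch: soft attention over convolutional feature locations; the argmax
# of the attention weights plays the role of a predicted fixation location.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
L, D = 49, 64                       # L = 7x7 spatial locations, D = channels
features = rng.normal(size=(L, D))  # conv features per location
w = rng.normal(size=D)              # learned scoring vector (placeholder)

alpha = softmax(features @ w)       # one attention weight per location
context = alpha @ features          # weighted feature vector fed to the Q head
print("most attended location:", int(np.argmax(alpha)))
```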
A new recurrent neural network based predictive model for Faecal
Calprotectin analysis: A retrospective study | cs.LG | Faecal Calprotectin (FC) is a surrogate marker for intestinal inflammation,
as seen in Inflammatory Bowel Disease (IBD), but not for cancer. In this
retrospective study of 804 patients, an enhanced benchmark predictive model for
analyzing FC is developed, based on a novel state-of-the-art Echo State Network
(ESN), an advanced dynamic recurrent neural network which implements a
biologically plausible architecture, and a supervised learning mechanism. The
proposed machine learning driven predictive model is benchmarked against a
conventional logistic regression model, demonstrating statistically significant
performance improvements.
| Zeeshan Khawar Malik, Zain U. Hussain, Ziad Kobti, Charlie W. Lees,
Newton Howard and Amir Hussain | null | 1612.05794 | null | null |
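A minimal sketch of generic echo state network mechanics (fixed random
reservoir, ridge-regression readout); the input sequence and target below are
placeholders, not the clinical FC data:

```python
# Sketch: ESN reservoir update x(t+1) = tanh(W_in u(t) + W x(t)), with only
# the linear readout trained (here by ridge regression).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 3, 100, 500
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius below 1

U = rng.normal(size=(T, n_in))              # placeholder input sequence
y = (U[:, 0] > 0).astype(float)             # placeholder binary target

X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W_in @ U[t] + W @ x)        # reservoir update
    X[t] = x

ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train accuracy:", ((X @ W_out > 0.5) == (y > 0.5)).mean())
```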
EgoTransfer: Transferring Motion Across Egocentric and Exocentric
Domains using Deep Neural Networks | cs.CV cs.LG cs.NE | Mirror neurons have been observed in the primary motor cortex of primate
species, in particular in humans and monkeys. A mirror neuron fires when a
person performs a certain action, and also when he observes the same action
being performed by another person. A crucial step towards building fully
autonomous intelligent systems with human-like learning abilities is the
capability to model the mirror neuron. On one hand, the abundance of
egocentric cameras in the past few years has offered the opportunity to study
many vision problems from the first-person perspective. A great deal of
interesting research has been done during the past few years, trying to explore
various computer vision tasks from the perspective of the self. On the other
hand, videos recorded by traditional static cameras, capture humans performing
different actions from an exocentric third-person perspective. In this work, we
take the first step towards relating motion information across these two
perspectives. We train models that predict motion in an egocentric view, by
observing it from an exocentric view, and vice versa. This allows models to
predict how an egocentric motion would look from outside. To do so, we
train linear and nonlinear models and evaluate their performance in terms of
retrieving the egocentric (exocentric) motion features, while having access to
an exocentric (egocentric) motion feature. Our experimental results demonstrate
that motion information can be successfully transferred across the two views.
| Shervin Ardeshir, Krishna Regmi, and Ali Borji | null | 1612.05836 | null | null |
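A toy sketch of the linear variant: learning a least-squares map from
exocentric to egocentric motion features and scoring it by retrieval, as the
abstract describes; all features below are synthetic placeholders:

```python
# Sketch: linear cross-view transfer of motion features, evaluated by
# top-1 retrieval of the true egocentric feature from the prediction.
import numpy as np

rng = np.random.default_rng(0)
n, d_exo, d_ego = 500, 32, 32
exo = rng.normal(size=(n, d_exo))
A_true = rng.normal(size=(d_exo, d_ego))
ego = exo @ A_true + 0.1 * rng.normal(size=(n, d_ego))  # paired training clips

A_hat, *_ = np.linalg.lstsq(exo, ego, rcond=None)       # transfer map
ego_pred = exo @ A_hat

# Is each predicted ego feature closest to its own ground-truth feature?
dists = np.linalg.norm(ego_pred[:, None, :] - ego[None, :, :], axis=2)
print("top-1 retrieval accuracy:",
      (dists.argmin(axis=1) == np.arange(n)).mean())
```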