title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Sampling Generative Networks | cs.NE cs.LG stat.ML | We introduce several techniques for sampling and visualizing the latent
spaces of generative models. Replacing linear interpolation with spherical
linear interpolation prevents diverging from a model's prior distribution and
produces sharper samples. J-Diagrams and MINE grids are introduced as
visualizations of manifolds created by analogies and nearest neighbors. We
demonstrate two new techniques for deriving attribute vectors: bias-corrected
vectors with data replication and synthetic vectors with data augmentation.
Binary classification using attribute vectors is presented as a technique
supporting quantitative analysis of the latent space. Most techniques are
intended to be independent of model type and examples are shown on both
Variational Autoencoders and Generative Adversarial Networks.
| Tom White | null | 1609.04468 | null | null |
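The spherical interpolation trick described in this abstract is easy to reproduce. Below is a minimal numpy sketch of slerp; the latent dimensionality and the near-parallel fallback threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def slerp(p0, p1, t):
    """Spherical linear interpolation between two latent vectors.

    A minimal sketch of the idea in the abstract: interpolating along a
    great arc keeps samples at a norm consistent with a Gaussian prior,
    unlike straight linear interpolation.
    """
    omega = np.arccos(np.clip(
        np.dot(p0 / np.linalg.norm(p0), p1 / np.linalg.norm(p1)), -1.0, 1.0))
    so = np.sin(omega)
    if so < 1e-8:                      # nearly parallel: fall back to lerp
        return (1.0 - t) * p0 + t * p1
    return np.sin((1.0 - t) * omega) / so * p0 + np.sin(t * omega) / so * p1

# Usage: interpolate between two latent codes of a hypothetical decoder.
z0, z1 = np.random.randn(100), np.random.randn(100)
path = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 9)]
```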
Tsallis Regularized Optimal Transport and Ecological Inference | cs.LG | Optimal transport is a powerful framework for computing distances between
probability distributions. We unify the two main approaches to optimal
transport, namely Monge-Kantorovitch and Sinkhorn-Cuturi, into what we define
as Tsallis regularized optimal transport (TROT). TROT interpolates a rich
family of distortions from Wasserstein to Kullback-Leibler, encompassing as
well Pearson, Neyman and Hellinger divergences, to name a few. We show that
metric properties known for Sinkhorn-Cuturi generalize to TROT, and provide
efficient algorithms for finding the optimal transportation plan with formal
convergence proofs. We also present the first application of optimal transport
to the problem of ecological inference, that is, the reconstruction of joint
distributions from their marginals, a problem of wide interest in the social
sciences. TROT provides a convenient framework for ecological inference by
allowing the computation of the joint distribution (that is, the optimal
transportation plan itself) when side information is available, which is
typically, e.g., what a census represents in political science.
Experiments on data from the 2012 US presidential elections display the
potential of TROT in delivering a faithful reconstruction of the joint
distribution of ethnic groups and voter preferences.
| Boris Muzellec and Richard Nock and Giorgio Patrini and Frank Nielsen | null | 1609.04495 | null | null |
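For context, here is a minimal sketch of the Sinkhorn-Cuturi iteration that TROT generalizes (the Kullback-Leibler end of the family named in the abstract); the Tsallis-regularized variant replaces the exponential kernel, and the marginals, cost matrix and regularization strength below are illustrative.

```python
import numpy as np

def sinkhorn(r, c, M, lam=10.0, n_iter=200):
    """Entropic optimal transport (Sinkhorn-Cuturi), the special case of
    the Tsallis-regularized family described in the abstract.
    r, c: marginals; M: cost matrix; lam: inverse regularization strength."""
    K = np.exp(-lam * M)
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan with marginals r, c

r = np.array([0.5, 0.5]); c = np.array([0.25, 0.75])
M = np.array([[0.0, 1.0], [1.0, 0.0]])
P = sinkhorn(r, c, M)
```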
Column Networks for Collective Classification | cs.LG cs.AI stat.ML | Relational learning deals with data that are characterized by relational
structures. An important task is collective classification, which is to jointly
classify networked objects. While it holds great promise of producing better
accuracy than non-collective classifiers, collective classification is
computationally challenging and has not leveraged the recent breakthroughs in
deep learning. We present Column Network (CLN), a novel deep learning model for
collective classification in multi-relational domains. CLN has many desirable
theoretical properties: (i) it encodes multi-relations between any two
instances; (ii) it is deep and compact, allowing complex functions to be
approximated at the network level with a small set of free parameters; (iii)
local and relational features are learned simultaneously; (iv) long-range,
higher-order dependencies between instances are supported naturally; and (v)
crucially, learning and inference are efficient, linear in the size of the
network and the number of relations. We evaluate CLN on multiple real-world
applications: (a) delay prediction in software projects, (b) PubMed Diabetes
publication classification and (c) film genre classification. In all
applications, CLN demonstrates a higher accuracy than state-of-the-art rivals.
| Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1609.04508 | null | null |
Structured Dropout for Weak Label and Multi-Instance Learning and Its
Application to Score-Informed Source Separation | cs.LG cs.SD | Many success stories involving deep neural networks are instances of
supervised learning, where available labels power gradient-based learning
methods. Creating such labels, however, can be expensive and thus there is
increasing interest in weak labels which only provide coarse information, with
uncertainty regarding time, location or value. Using such labels often leads to
considerable challenges for the learning process. Current methods for
weak-label training often employ standard supervised approaches that
additionally reassign or prune labels during the learning process. The
information gain, however, is often limited as only the importance of labels
where the network already yields reasonable results is boosted. We propose
treating weak-label training as an unsupervised problem and use the labels to
guide the representation learning to induce structure. To this end, we propose
two autoencoder extensions: class activity penalties and structured dropout. We
demonstrate the capabilities of our approach in the context of score-informed
source separation of music.
| Sebastian Ewert and Mark B. Sandler | null | 1609.04557 | null | null |
Recursive nearest agglomeration (ReNA): fast clustering for
approximation of structured signals | stat.ML cs.LG | In this work, we revisit fast dimension reduction approaches, as with random
projections and random sampling. Our goal is to summarize the data to decrease
computational costs and memory footprint of subsequent analysis. Such dimension
reduction can be very efficient when the signals of interest have a strong
structure, such as with images. We focus on this setting and investigate
feature clustering schemes for data reductions that capture this structure. An
impediment to fast dimension reduction is that good clustering comes with large
algorithmic costs. We address it by contributing a linear-time agglomerative
clustering scheme, Recursive Nearest Agglomeration (ReNA). Unlike existing fast
agglomerative schemes, it avoids the creation of giant clusters. We empirically
validate that it approximates the data as well as traditional
variance-minimizing clustering schemes that have a quadratic complexity. In
addition, we analyze signal approximation with feature clustering and show that
it can remove noise, improving subsequent analysis steps. As a consequence,
data reduction by clustering features with ReNA yields very fast and accurate
models, making it possible to process large datasets on a budget. Our
theoretical analysis is backed by extensive experiments on publicly-available
data that illustrate the computational efficiency and the denoising properties
of the resulting
dimension reduction scheme.
| Andrés Hoyos-Idrobo (PARIETAL, NEUROSPIN), Gaël Varoquaux
(PARIETAL, NEUROSPIN), Jonas Kahn, Bertrand Thirion (PARIETAL) | 10.1109/TPAMI.2018.2815524 | 1609.04608 | null | null |
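A hedged sketch of the feature-clustering reduction described above, using scikit-learn's FeatureAgglomeration (Ward linkage) as a stand-in for ReNA itself; the synthetic structured data and the cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration

# Cluster correlated features, replace each cluster by its mean, and use
# the inverse map to approximate (and denoise) the original signal.
rng = np.random.default_rng(0)
n, p = 200, 1024
base = rng.normal(size=(n, 32))
X = np.repeat(base, p // 32, axis=1) + 0.1 * rng.normal(size=(n, p))

agglo = FeatureAgglomeration(n_clusters=64)
X_red = agglo.fit_transform(X)          # 1024 features -> 64 cluster means
X_hat = agglo.inverse_transform(X_red)  # approximation of the original data
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```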
On Unbounded Delays in Asynchronous Parallel Fixed-Point Algorithms | math.OC cs.DC cs.LG | The need for scalable numerical solutions has motivated the development of
asynchronous parallel algorithms, where a set of nodes run in parallel with
little or no synchronization, thus computing with delayed information. This
paper studies the convergence of the asynchronous parallel algorithm ARock
under potentially unbounded delays.
ARock is a general asynchronous algorithm that has many applications. It
parallelizes fixed-point iterations by letting a set of nodes randomly choose
solution coordinates and update them in an asynchronous parallel fashion. ARock
takes some recent asynchronous coordinate descent algorithms as special cases
and gives rise to new asynchronous operator-splitting algorithms. Existing
analysis of ARock assumes the delays to be bounded, and uses this bound to set
a step size that is important to both convergence and efficiency. Other work,
though allowing unbounded delays, imposes strict conditions on the underlying
fixed-point operator, resulting in limited applications.
In this paper, convergence is established under unbounded delays, which can
be either stochastic or deterministic. The proposed step sizes are more
practical and generally larger than those in the existing work. The step size
adapts to the delay distribution or the current delay being experienced in the
system. New Lyapunov functions, which are the key to analyzing asynchronous
algorithms, are generated to obtain our results. A set of applicable
optimization algorithms with large-scale applications is given, including
machine learning and scientific computing algorithms.
| Robert Hannah, Wotao Yin | null | 1609.04746 | null | null |
An overview of gradient descent optimization algorithms | cs.LG | Gradient descent optimization algorithms, while increasingly popular, are
often used as black-box optimizers, as practical explanations of their
strengths and weaknesses are hard to come by. This article aims to provide the
reader with intuitions with regard to the behaviour of different algorithms
that will allow her to put them to use. In the course of this overview, we look
at different variants of gradient descent, summarize challenges, introduce the
most common optimization algorithms, review architectures in a parallel and
distributed setting, and investigate additional strategies for optimizing
gradient descent.
| Sebastian Ruder | null | 1609.04747 | null | null |
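As a worked companion to the overview above, here is a numpy sketch of three of the update rules such surveys cover (plain SGD, momentum, Adam); the toy objective and hyperparameters are illustrative, not taken from the article.

```python
import numpy as np

def grad(theta):                       # toy objective: f(theta) = ||theta||^2 / 2
    return theta

theta = np.ones(3); lr = 0.1
# Plain SGD step
theta_sgd = theta - lr * grad(theta)
# Momentum step
v = np.zeros(3); gamma = 0.9
v = gamma * v + lr * grad(theta)
theta_mom = theta - v
# Adam step (with bias correction)
m, s, t, b1, b2, eps = np.zeros(3), np.zeros(3), 1, 0.9, 0.999, 1e-8
g = grad(theta)
m = b1 * m + (1 - b1) * g
s = b2 * s + (1 - b2) * g**2
m_hat, s_hat = m / (1 - b1**t), s / (1 - b2**t)
theta_adam = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
```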
Coherence Pursuit: Fast, Simple, and Robust Principal Component Analysis | cs.LG stat.ML | This paper presents a remarkably simple, yet powerful, algorithm termed
Coherence Pursuit (CoP) for robust Principal Component Analysis (PCA). As
inliers lie in a low dimensional subspace and are mostly correlated, an inlier
is likely to have strong mutual coherence with a large number of data points.
By contrast, outliers either do not admit low dimensional structures or form
small clusters. In either case, an outlier is unlikely to bear strong
resemblance to a large number of data points. Given that, CoP sets an outlier
apart from an inlier by comparing their coherence with the rest of the data
points. The mutual coherences are computed by forming the Gram matrix of the
normalized data points. Subsequently, the sought subspace is recovered from the
span of the subset of the data points that exhibit strong coherence with the
rest of the data. As CoP only involves one simple matrix multiplication, it is
significantly faster than the state-of-the-art robust PCA algorithms. We derive
analytical performance guarantees for CoP under different models for the
distributions of inliers and outliers in both noise-free and noisy settings.
CoP is the first robust PCA algorithm that is simultaneously non-iterative,
provably robust to both unstructured and structured outliers, and can tolerate
a large number of unstructured outliers.
| Mostafa Rahmani, George Atia | 10.1109/TSP.2017.2749215 | 1609.04789 | null | null |
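The coherence computation described above is simple enough to sketch directly; this is a hedged numpy rendering, where the squared-coherence score and the `n_keep` parameter are illustrative simplifications of the paper's procedure.

```python
import numpy as np

def coherence_pursuit(X, rank, n_keep):
    """Score each column by its total mutual coherence with the rest of
    the data (via the Gram matrix of normalized points) and recover the
    subspace from the most coherent columns."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # normalize points
    G = Xn.T @ Xn                                       # Gram matrix
    np.fill_diagonal(G, 0.0)                            # ignore self-coherence
    scores = np.sum(G**2, axis=1)                       # coherence per point
    keep = np.argsort(scores)[-n_keep:]                 # most coherent = inliers
    U, _, _ = np.linalg.svd(X[:, keep], full_matrices=False)
    return U[:, :rank]                                  # basis of the inlier span
```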
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp
Minima | cs.LG math.OC | The stochastic gradient descent (SGD) method and its variants are algorithms
of choice for many Deep Learning tasks. These methods operate in a small-batch
regime wherein a fraction of the training data, say $32$-$512$ data points, is
sampled to compute an approximation to the gradient. It has been observed in
practice that when using a larger batch there is a degradation in the quality
of the model, as measured by its ability to generalize. We investigate the
cause for this generalization drop in the large-batch regime and present
numerical evidence that supports the view that large-batch methods tend to
converge to sharp minimizers of the training and testing functions - and as is
well known, sharp minima lead to poorer generalization. In contrast,
small-batch methods consistently converge to flat minimizers, and our
experiments support a commonly held view that this is due to the inherent noise
in the gradient estimation. We discuss several strategies to attempt to help
large-batch methods eliminate this generalization gap.
| Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail
Smelyanskiy and Ping Tak Peter Tang | null | 1609.04836 | null | null |
Predicting Shot Making in Basketball Learnt from Adversarial Multiagent
Trajectories | stat.ML cs.LG | In this paper, we predict the likelihood of a player making a shot in
basketball from multiagent trajectories. Previous approaches to similar
problems center on hand-crafting features to capture domain-specific knowledge.
Although intuitive, recent work in deep learning has shown this approach is
prone to missing important predictive features. To circumvent this issue, we
present a convolutional neural network (CNN) approach where we initially
represent the multiagent behavior as an image. To encode the adversarial nature
of basketball, we use a multi-channel image which we then feed into a CNN.
Additionally, to capture the temporal aspect of the trajectories we "fade" the
player trajectories. We find that this approach is superior to a traditional
FFN model. By using gradient ascent to create images using an already trained
CNN, we discover what features the CNN filters learn. Last, we find that a
combined CNN+FFN is the best performing network with an error rate of 39%.
| Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan | null | 1609.04849 | null | null |
Image-to-Markup Generation with Coarse-to-Fine Attention | cs.CV cs.CL cs.LG cs.NE | We present a neural encoder-decoder model to convert images into
presentational markup based on a scalable coarse-to-fine attention mechanism.
Our method is evaluated in the context of image-to-LaTeX generation, and we
introduce a new dataset of real-world rendered mathematical expressions paired
with LaTeX markup. We show that unlike neural OCR techniques using CTC-based
models, attention-based approaches can tackle this non-standard OCR task. Our
approach outperforms classical mathematical OCR systems by a large margin on
in-domain rendered data, and, with pretraining, also performs well on
out-of-domain handwritten data. To reduce the inference complexity associated
with the attention-based approaches, we introduce a new coarse-to-fine
attention layer that selects a support region before applying attention.
| Yuntian Deng, Anssi Kanervisto, Jeffrey Ling, Alexander M. Rush | null | 1609.04938 | null | null |
Exploration Potential | cs.LG cs.AI | We introduce exploration potential, a quantity that measures how much a
reinforcement learning agent has explored its environment class. In contrast to
information gain, exploration potential takes the problem's reward structure
into account. This leads to an exploration criterion that is both necessary and
sufficient for asymptotic optimality (learning to act optimally across the
entire environment class). Our experiments in multi-armed bandits use
exploration potential to illustrate how different algorithms make the tradeoff
between exploration and exploitation.
| Jan Leike | null | 1609.04994 | null | null |
A Formal Solution to the Grain of Truth Problem | cs.AI cs.GT cs.LG | A Bayesian agent acting in a multi-agent environment learns to predict the
other agents' policies if its prior assigns positive probability to them (in
other words, its prior contains a \emph{grain of truth}). Finding a reasonably
large class of policies that contains the Bayes-optimal policies with respect
to this class is known as the \emph{grain of truth problem}. Only small classes
are known to have a grain of truth and the literature contains several related
impossibility results. In this paper we present a formal and general solution
to the full grain of truth problem: we construct a class of policies that
contains all computable policies as well as Bayes-optimal policies for every
lower semicomputable prior over the class. When the environment is unknown,
Bayes-optimal agents may fail to act optimally even asymptotically. However,
agents based on Thompson sampling converge to play $\epsilon$-Nash equilibria
in arbitrary unknown computable multi-agent environments. While these results
are purely theoretical, we show that they can be computationally approximated
arbitrarily closely.
| Jan Leike, Jessica Taylor, Benya Fallenstein | null | 1609.05058 | null | null |
Learning Opposites Using Neural Networks | cs.LG cs.NE | Many research works have successfully extended algorithms such as
evolutionary algorithms, reinforcement agents and neural networks using
"opposition-based learning" (OBL). Two types of the "opposites" have been
defined in the literature, namely \textit{type-I} and \textit{type-II}. The
former are linear in nature and applicable to the variable space, hence easy to
calculate. On the other hand, type-II opposites capture the "oppositeness" in
the output space. In fact, type-I opposites are considered a special case of
type-II opposites where inputs and outputs have a linear relationship. However,
in many real-world problems, inputs and outputs do in fact exhibit a nonlinear
relationship. Therefore, type-II opposites are expected to be better in
capturing the sense of "opposition" in terms of the input-output relation. In
the absence of any knowledge about the problem at hand, there seems to be no
intuitive way to calculate the type-II opposites. In this paper, we introduce
an approach to learn type-II opposites from the given inputs and their outputs
using artificial neural networks (ANNs). We first perform \emph{opposition
mining} on the sample data, and then use the mined data to learn the
relationship between input $x$ and its opposite $\breve{x}$. We have validated
our algorithm using various benchmark functions to compare it against an
evolving fuzzy inference approach that has been recently introduced. The
results show the better performance of a neural approach to learn the
opposites. This will create new possibilities for integrating oppositional
schemes within existing algorithms promising a potential increase in
convergence speed and/or accuracy.
| Shivam Kalra, Aditya Sriram, Shahryar Rahnamayan, H.R. Tizhoosh | null | 1609.05123 | null | null |
No-Regret Replanning under Uncertainty | cs.RO cs.LG | This paper explores the problem of path planning under uncertainty.
Specifically, we consider online receding horizon based planners that need to
operate in a latent environment where the latent information can be modeled via
Gaussian Processes. Online path planning in latent environments is challenging
since the robot needs to explore the environment to obtain a more accurate model
of the latent information for better planning later, while also achieving the
task as quickly as possible. We propose UCB-style algorithms that are popular in
the bandit setting and show how those analyses can be adapted to online
robotic path planning problems. The proposed algorithm trades off exploration
and exploitation in a near-optimal manner and has appealing no-regret properties.
We demonstrate the efficacy of the framework on the application of aircraft
flight path planning when the winds are partially observed.
| Wen Sun, Niteesh Sood, Debadeepta Dey, Gireeja Ranade, Siddharth
Prakash and Ashish Kapoor | null | 1609.05162 | null | null |
Information Theoretic Limits of Data Shuffling for Distributed Learning | cs.IT cs.DC cs.LG math.IT | Data shuffling is one of the fundamental building blocks for distributed
learning algorithms; it increases the statistical gain for each step of the
learning process. In each iteration, different shuffled data points are
assigned by a central node to a distributed set of workers to perform local
computations, which leads to communication bottlenecks. The focus of this paper
is on formalizing and understanding the fundamental information-theoretic
trade-off between storage (per worker) and the worst-case communication
overhead for the data shuffling problem. We completely characterize the
information theoretic trade-off for $K=2$, and $K=3$ workers, for any value of
storage capacity, and show that increasing the storage across workers can
reduce the communication overhead by leveraging coding. We propose a novel and
systematic data delivery and storage update strategy for each data shuffle
iteration, which preserves the structural properties of the storage across the
workers, and aids in minimizing the communication overhead in subsequent data
shuffling iterations.
| Mohamed Attia, Ravi Tandon | null | 1609.05181 | null | null |
Gradient Descent Learns Linear Dynamical Systems | cs.LG cs.DS math.OC stat.ML | We prove that stochastic gradient descent efficiently converges to the global
optimizer of the maximum likelihood objective of an unknown linear
time-invariant dynamical system from a sequence of noisy observations generated
by the system. Even though the objective function is non-convex, we provide
polynomial running time and sample complexity bounds under strong but natural
assumptions. Linear systems identification has been studied for many decades,
yet, to the best of our knowledge, these are the first polynomial guarantees
for the problem we consider.
| Moritz Hardt, Tengyu Ma, Benjamin Recht | null | 1609.05191 | null | null |
Scaling up Echo-State Networks with multiple light scattering | cs.ET cs.LG physics.optics | Echo-State Networks and Reservoir Computing have been studied for more than a
decade. They provide a simpler yet powerful alternative to Recurrent Neural
Networks: every internal weight is fixed and only the last linear layer is
trained. They involve many multiplications by dense random matrices. Very large
networks are difficult to obtain, as the complexity scales quadratically both
in time and memory. Here, we present a novel optical implementation of
Echo-State Networks using light-scattering media and a Digital Micromirror
Device. As a proof of concept, binary networks have been successfully trained
to predict the chaotic Mackey-Glass time series. This new method is fast, power
efficient and easily scalable to very large networks.
| Jonathan Dong, Sylvain Gigan, Florent Krzakala, Gilles Wainrib | 10.1109/SSP.2018.8450698 | 1609.05204 | null | null |
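A minimal numpy sketch of the (digital) echo-state recipe the abstract builds on: fixed random reservoir weights, with only the linear readout trained by ridge regression; the reservoir size, spectral radius, input signal and ridge constant are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 200, 1000
# Fixed random weights (never trained), as in reservoir computing.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

u = np.sin(np.arange(T) * 0.2)[:, None]           # stand-in input signal
x = np.zeros(n_res); states = []
for t in range(T):
    x = np.tanh(W_in @ u[t] + W @ x)
    states.append(x.copy())
S = np.array(states)

# Only the linear readout is trained (ridge regression), one-step-ahead target.
y = u[1:, 0]; S_tr = S[:-1]
ridge = 1e-6
W_out = np.linalg.solve(S_tr.T @ S_tr + ridge * np.eye(n_res), S_tr.T @ y)
pred = S_tr @ W_out
```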
ReasoNet: Learning to Stop Reading in Machine Comprehension | cs.LG cs.NE | Teaching a computer to read and answer general questions pertaining to a
document is a challenging yet unsolved problem. In this paper, we describe a
novel neural network architecture called the Reasoning Network (ReasoNet) for
machine comprehension tasks. ReasoNets make use of multiple turns to
effectively exploit and then reason over the relation among queries, documents,
and answers. Different from previous approaches using a fixed number of turns
during inference, ReasoNets introduce a termination state to relax this
constraint on the reasoning depth. With the use of reinforcement learning,
ReasoNets can dynamically determine whether to continue the comprehension
process after digesting intermediate results, or to terminate reading when it
concludes that existing information is adequate to produce an answer. ReasoNets
have achieved exceptional performance in machine comprehension datasets,
including unstructured CNN and Daily Mail datasets, the Stanford SQuAD dataset,
and a structured Graph Reachability dataset.
| Yelong Shen, Po-Sen Huang, Jianfeng Gao, Weizhu Chen | 10.1145/3097983.3098177 | 1609.05284 | null | null |
Sparse Boltzmann Machines with Structure Learning as Applied to Text
Analysis | cs.LG | We are interested in exploring the possibility and benefits of structure
learning for deep models. As the first step, this paper investigates the matter
for Restricted Boltzmann Machines (RBMs). We conduct the study with Replicated
Softmax, a variant of RBMs for unsupervised text analysis. We present a method
for learning what we call Sparse Boltzmann Machines, where each hidden unit is
connected to a subset of the visible units instead of all of them. Empirical
results show that the method yields models with significantly improved model
fit and interpretability as compared with RBMs where each hidden unit is
connected to all visible units.
| Zhourong Chen, Nevin L. Zhang, Dit-Yan Yeung, Peixian Chen | null | 1609.05294 | null | null |
Fast and Effective Algorithms for Symmetric Nonnegative Matrix
Factorization | cs.CV cs.LG stat.ML | Symmetric Nonnegative Matrix Factorization (SNMF) models arise naturally as
simple reformulations of many standard clustering algorithms including the
popular spectral clustering method. Recent work has demonstrated that an
elementary instance of SNMF provides superior clustering quality compared to
many classic clustering algorithms on a variety of synthetic and real world
data sets. In this work, we present novel reformulations of this instance of
SNMF based on the notion of variable splitting and produce two fast and
effective algorithms for its optimization using i) the provably convergent
Accelerated Proximal Gradient (APG) procedure and ii) a heuristic version of
the Alternating Direction Method of Multipliers (ADMM) framework. Our two
algorithms present an interesting tradeoff between computational speed and
mathematical convergence guarantee: while the former method is provably
convergent it is considerably slower than the latter approach, for which we
also provide significant but less stringent mathematical proof regarding its
convergence. Through extensive experiments we show not only that the efficacy
of these approaches is equal to that of the state of the art SNMF algorithm,
but also that the latter of our algorithms is extremely fast being one to two
orders of magnitude faster in terms of total computation time than the state of
the art approach, outperforming even spectral clustering in terms of
computation time on large data sets.
| Reza Borhani, Jeremy Watt, Aggelos Katsaggelos | null | 1609.05342 | null | null |
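A hedged baseline sketch of symmetric NMF, $\min_{H \geq 0} \|A - HH^T\|_F^2$, solved by projected gradient descent on a toy affinity matrix; the paper's APG and ADMM algorithms are more sophisticated, and the step size and iteration count here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3
ground = rng.random((n, k))
A = ground @ ground.T                         # toy symmetric nonnegative input

H = rng.random((n, k))
eta = 1e-3
for _ in range(2000):
    G = -4 * (A - H @ H.T) @ H                # gradient of ||A - H H^T||_F^2
    H = np.maximum(H - eta * G, 0.0)          # project onto the nonnegative orthant
```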
Leveraging Environmental Correlations: The Thermodynamics of Requisite
Variety | cond-mat.stat-mech cs.IT cs.LG math.DS math.IT nlin.AO | Key to biological success, the requisite variety that confronts an adaptive
organism is the set of detectable, accessible, and controllable states in its
environment. We analyze its role in the thermodynamic functioning of
information ratchets---a form of autonomous Maxwellian Demon capable of
exploiting fluctuations in an external information reservoir to harvest useful
work from a thermal bath. This establishes a quantitative paradigm for
understanding how adaptive agents leverage structured thermal environments for
their own thermodynamic benefit. General ratchets behave as memoryful
communication channels, interacting with their environment sequentially and
storing results to an output. The bulk of thermal ratchets analyzed to date,
however, assume memoryless environments that generate input signals without
temporal correlations. Employing computational mechanics and a new
information-processing Second Law of Thermodynamics (IPSL) we remove these
restrictions, analyzing general finite-state ratchets interacting with
structured environments that generate correlated input signals. On the one
hand, we demonstrate that a ratchet need not have memory to exploit an
uncorrelated environment. On the other, and more appropriate to biological
adaptation, we show that a ratchet must have memory to most effectively
leverage structure and correlation in its environment. The lesson is that to
optimally harvest work a ratchet's memory must reflect the input generator's
memory. Finally, we investigate achieving the IPSL bounds on the amount of work
a ratchet can extract from its environment, discovering that finite-state,
optimal ratchets are unable to reach these bounds. In contrast, we show that
infinite-state ratchets can go well beyond these bounds by utilizing their own
infinite "negentropy". We conclude with an outline of the collective
thermodynamics of information-ratchet swarms.
| Alexander B. Boyd, Dibyendu Mandal, and James P. Crutchfield | 10.1007/s10955-017-1776-0 | 1609.05353 | null | null |
Online Learning of Combinatorial Objects via Extended Formulation | cs.LG | The standard techniques for online learning of combinatorial objects perform
multiplicative updates followed by projections into the convex hull of all the
objects. However, this methodology can be expensive if the convex hull contains
many facets. For example, the convex hull of $n$-symbol Huffman trees is known
to have exponentially many facets (Maurras et al., 2010). We get around this
difficulty by exploiting extended formulations (Kaibel, 2011), which encode the
polytope of combinatorial objects in a higher dimensional "extended" space with
only polynomially many facets. We develop a general framework for converting
extended formulations into efficient online algorithms with good relative loss
bounds. We present applications of our framework to online learning of Huffman
trees and permutations. The regret bounds of the resulting algorithms are
within a factor of $O(\sqrt{\log(n)})$ of the state-of-the-art specialized
algorithms for permutations, and depending on the loss regimes, improve on or
match the state-of-the-art for Huffman trees. Our method is general and can be
applied to other combinatorial objects.
| Holakou Rahmanian, David P. Helmbold, S.V.N. Vishwanathan | null | 1609.05374 | null | null |
ADAGIO: Fast Data-aware Near-Isometric Linear Embeddings | stat.ML cs.LG | Many important applications, including signal reconstruction, parameter
estimation, and signal processing in a compressed domain, rely on a
low-dimensional representation of the dataset that preserves all pairwise
distances between the data points and leverages the inherent geometric
structure that is typically present. Recently, Hegde, Sankaranarayanan, Yin and
Baraniuk \cite{hedge2015} proposed the first data-aware near-isometric linear
embedding which achieves the best of both worlds. However, their method NuMax
does not scale to large-scale datasets.
Our main contribution is a simple, data-aware, near-isometric linear
dimensionality reduction method which significantly outperforms a
state-of-the-art method \cite{hedge2015} with respect to scalability while
achieving high quality near-isometries. Furthermore, our method comes with
strong worst-case theoretical guarantees that allow us to guarantee the quality
of the obtained near-isometry. We verify experimentally the efficiency of our
method on numerous real-world datasets, where we find that our method ($<$10
secs) is more than 3,000$\times$ faster than the state-of-the-art method
\cite{hedge2015} ($>$9 hours) on medium-scale datasets with 60,000 data points
in 784 dimensions. Finally, we use our method as a preprocessing step to
increase the computational efficiency of a classification application and for
speeding up approximate nearest neighbor queries.
| Jarosław Błasiok, Charalampos E. Tsourakakis | null | 1609.05388 | null | null |
Predicting Future Shanghai Stock Market Price using ANN in the Period
21-Sep-2016 to 11-Oct-2016 | cs.LG q-fin.ST | Predicting the prices of stocks at any stock market remains a quest for many
investors and researchers. Those who trade at the stock market tend to use
technical, fundamental or time series analysis in their predictions. These
methods usually guide on trends and not the exact likely prices. It is for this
reason that Artificial Intelligence systems, such as an Artificial Neural
Network (a feedforward multi-layer perceptron with error backpropagation), can be
used for such predictions. A difficulty in neural network application is the
determination of suitable network parameters. Previous research by the author
determined a 5:21:21:1 network with 80% training data (4 years of training
data) to be a good enough model for stock prediction. This
model has been put to the test in predicting selected Shanghai Stock Exchange
stocks in the future period of 21-Sep-2016 to 11-Oct-2016, about one week after
the publication of these predictions. The research aims at confirming that
simple neural network systems can be quite powerful in typical stock market
predictions.
| Barack Wamkaya Wanjawa | null | 1609.05394 | null | null |
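A hedged sketch of the 5:21:21:1 topology mentioned above (5 lagged inputs, two hidden layers of 21 units, 1 output) using scikit-learn's MLPRegressor; the synthetic price series and the lag-window feature construction are illustrative stand-ins for the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100   # stand-in price series

lags = 5                                          # 5 lagged prices as inputs
X = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
y = prices[lags:]
split = int(0.8 * len(X))                         # 80% training data

model = MLPRegressor(hidden_layer_sizes=(21, 21), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
forecast = model.predict(X[split:])
```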
A Deep Metric for Multimodal Registration | cs.CV cs.LG cs.NE | Multimodal registration is a challenging problem in medical imaging due to the
high variability of tissue appearance under different imaging modalities. The
crucial component here is the choice of the right similarity measure. We make a
step towards a general learning-based solution that can be adapted to specific
situations and present a metric based on a convolutional neural network. Our
network can be trained from scratch even from a few aligned image pairs. The
metric is validated on intersubject deformable registration on a dataset
different from the one used for training, demonstrating good generalization. In
this task, we outperform mutual information by a significant margin.
| Martin Simonovsky, Benjamín Gutiérrez-Becker, Diana Mateus, Nassir
Navab, Nikos Komodakis | null | 1609.05396 | null | null |
SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient | cs.LG cs.AI | As a new way of training generative models, Generative Adversarial Nets (GAN)
that uses a discriminative model to guide the training of the generative model
has enjoyed considerable success in generating real-valued data. However, it
has limitations when the goal is to generate sequences of discrete tokens. A
major reason lies in that the discrete outputs from the generative model make
it difficult to pass the gradient update from the discriminative model to the
generative model. Also, the discriminative model can only assess a complete
sequence, while for a partially generated sequence, it is non-trivial to
balance its current score and the future one once the entire sequence has been
generated. In this paper, we propose a sequence generation framework, called
SeqGAN, to solve the problems. Modeling the data generator as a stochastic
policy in reinforcement learning (RL), SeqGAN bypasses the generator
differentiation problem by directly performing gradient policy update. The RL
reward signal comes from the GAN discriminator judged on a complete sequence,
and is passed back to the intermediate state-action steps using Monte Carlo
search. Extensive experiments on synthetic data and real-world tasks
demonstrate significant improvements over strong baselines.
| Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu | null | 1609.05473 | null | null |
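A minimal numpy sketch of the policy-gradient idea behind SeqGAN: the generator is a stochastic policy over discrete tokens, and a reward computed on the complete sequence is pushed back with REINFORCE. The bigram policy and the stand-in reward (a toy discriminator) are illustrative, and the paper's Monte Carlo rollouts for intermediate steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, lr = 5, 8, 0.1                     # vocab size, sequence length, step size
logits = np.zeros((V, V))                # bigram policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reward(tokens):                      # stand-in discriminator: likes token 3
    return np.mean(np.array(tokens) == 3)

for _ in range(500):
    seq, prev = [], 0
    for _ in range(L):                   # sample a full sequence from the policy
        p = softmax(logits[prev])
        tok = rng.choice(V, p=p)
        seq.append((prev, tok))
        prev = tok
    R = reward([t for _, t in seq])      # reward only on the complete sequence
    for prev, tok in seq:                # REINFORCE: grad log pi * reward
        p = softmax(logits[prev])
        g = -p; g[tok] += 1.0            # d log p(tok) / d logits[prev]
        logits[prev] += lr * R * g
```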
Probabilistic Feature Selection and Classification Vector Machine | cs.LG stat.ML | Sparse Bayesian learning is a state-of-the-art supervised learning algorithm
that can choose a subset of relevant samples from the input data and make
reliable probabilistic predictions. However, in the presence of
high-dimensional data with irrelevant features, traditional sparse Bayesian
classifiers suffer from performance degradation and low efficiency by failing
to eliminate irrelevant features. To tackle this problem, we propose a novel
sparse Bayesian embedded feature selection method that adopts truncated
Gaussian distributions as both sample and feature priors. The proposed method,
called probabilistic feature selection and classification vector machine
(PFCVM$_{LP}$), is able to simultaneously select relevant features and samples for
classification tasks. In order to derive the analytical solutions, Laplace
approximation is applied to compute approximate posteriors and marginal
likelihoods. Finally, parameters and hyperparameters are optimized by the
type-II maximum likelihood method. Experiments on three datasets validate the
performance of PFCVM$_{LP}$ along two dimensions: classification performance and
effectiveness for feature selection. Finally, we analyze the generalization
performance and derive a generalization error bound for PFCVM$_{LP}$. By tightening
the bound, the importance of feature selection is demonstrated.
| Bingbing Jiang, Chang Li, Maarten de Rijke, Xin Yao and Huanhuan Chen | 10.1145/3309541 | 1609.05486 | null | null |
Towards Deep Symbolic Reinforcement Learning | cs.AI cs.LG | Deep reinforcement learning (DRL) brings the power of deep neural networks to
bear on the generic task of trial-and-error learning, and its effectiveness has
been convincingly demonstrated on tasks such as Atari video games and the game
of Go. However, contemporary DRL systems inherit a number of shortcomings from
the current generation of deep learning techniques. For example, they require
very large datasets to work effectively, entailing that they are slow to learn
even when such datasets are available. Moreover, they lack the ability to
reason on an abstract level, which makes it difficult to implement high-level
cognitive functions such as transfer learning, analogical reasoning, and
hypothesis-based reasoning. Finally, their operation is largely opaque to
humans, rendering them unsuitable for domains in which verifiability is
important. In this paper, we propose an end-to-end reinforcement learning
architecture comprising a neural back end and a symbolic front end with the
potential to overcome each of these shortcomings. As proof-of-concept, we
present a preliminary implementation of the architecture and apply it to
several variants of a simple video game. We show that the resulting system --
though just a prototype -- learns effectively, and, by acquiring a set of
symbolic rules that are easily comprehensible to humans, dramatically
outperforms a conventional, fully neural DRL system on a stochastic variant of
the game.
| Marta Garnelo, Kai Arulkumaran, Murray Shanahan | null | 1609.05518 | null | null |
Playing FPS Games with Deep Reinforcement Learning | cs.AI cs.LG | Advances in deep reinforcement learning have allowed autonomous agents to
perform well on Atari games, often outperforming humans, using only raw pixels
to make their decisions. However, most of these games take place in 2D
environments that are fully observable to the agent. In this paper, we present
the first architecture to tackle 3D environments in first-person shooter games,
which involve partially observable states. Typically, deep reinforcement
learning methods only utilize visual input for training. We present a method to
augment these models to exploit game feature information such as the presence
of enemies or items, during the training phase. Our model is trained to
simultaneously learn these features along with minimizing a Q-learning
objective, which is shown to dramatically improve the training speed and
performance of our agent. Our architecture is also modularized to allow
different models to be independently trained for different phases of the game.
We show that the proposed architecture substantially outperforms built-in AI
agents of the game as well as humans in deathmatch scenarios.
| Guillaume Lample, Devendra Singh Chaplot | null | 1609.05521 | null | null |
Principled Option Learning in Markov Decision Processes | cs.LG stat.ML | It is well known that options can make planning more efficient, among their
many benefits. Thus far, algorithms for autonomously discovering a set of
useful options were heuristic. Naturally, a principled way of finding a set of
useful options may be more promising and insightful. In this paper we suggest a
mathematical characterization of good sets of options using tools from
information theory. This characterization enables us to find conditions for a
set of options to be optimal, and yields an algorithm that outputs a useful set
of options; we illustrate the proposed algorithm in simulation.
| Roy Fox, Michal Moshkovitz and Naftali Tishby | null | 1609.05524 | null | null |
Sequential Ensemble Learning for Outlier Detection: A Bias-Variance
Perspective | cs.LG stat.ML | Ensemble methods for classification and clustering have been effectively used
for decades, while ensemble learning for outlier detection has only been
studied recently. In this work, we design a new ensemble approach for outlier
detection in multi-dimensional point data, which provides improved accuracy by
reducing error through both bias and variance. Although classification and
outlier detection appear as different problems, their theoretical underpinnings
are quite similar in terms of the bias-variance trade-off [1], where outlier
detection is considered as a binary classification task with unobserved labels
but a similar bias-variance decomposition of error.
In this paper, we propose a sequential ensemble approach called CARE that
employs a two-phase aggregation of the intermediate results in each iteration
to reach the final outcome. Unlike existing outlier ensembles which solely
incorporate a parallel framework by aggregating the outcomes of independent
base detectors to reduce variance, our ensemble incorporates both the parallel
and sequential building blocks to reduce bias as well as variance by ($i$)
successively eliminating outliers from the original dataset to build a better
data model on which outlierness is estimated (sequentially), and ($ii$)
combining the results from individual base detectors and across iterations
(in parallel). Through extensive experiments on sixteen real-world datasets
mainly from the UCI machine learning repository [2], we show that CARE performs
significantly better than or at least similar to the individual baselines. We
also compare CARE with the state-of-the-art outlier ensembles where it also
provides significant improvement when it is the winner and remains close
otherwise.
| Shebuti Rayana, Wen Zhong and Leman Akoglu | null | 1609.05528 | null | null |
Learning Personalized Optimal Control for Repeatedly Operated Systems | cs.LG stat.ML | We consider the problem of online learning of optimal control for repeatedly
operated systems in the presence of parametric uncertainty. During each round
of operation, the environment selects system parameters according to a fixed but
unknown probability distribution. These parameters govern the dynamics of a
plant. An agent chooses a control input to the plant and is then revealed the
cost of the choice. In this setting, we design an agent that personalizes the
control input to this plant taking into account the stochasticity involved. We
demonstrate the effectiveness of our approach on a simulated system.
| Theja Tulabandhula | null | 1609.05536 | null | null |
On Randomized Distributed Coordinate Descent with Quantized Updates | stat.ML cs.LG | In this paper, we study the randomized distributed coordinate descent
algorithm with quantized updates. In the literature, the iteration complexity
of the randomized distributed coordinate descent algorithm has been
characterized under the assumption that machines can exchange updates with an
infinite precision. We consider a practical scenario in which the message
exchange occurs over channels with finite capacity, and hence the updates have
to be quantized. We derive sufficient conditions on the quantization error such
that the algorithm with quantized updates still converges. We further verify our
theoretical results by running an experiment, where we apply the algorithm with
quantized updates to solve a linear regression problem.
| Mostafa El Gamal and Lifeng Lai | null | 1609.05539 | null | null |
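An illustrative sketch of the setting above: randomized coordinate descent on a least-squares problem where each coordinate update is passed through a uniform quantizer before being applied; the problem instance, quantization step and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 10)); x_true = rng.normal(size=10)
b = A @ x_true

def quantize(v, step=1e-3):
    return step * np.round(v / step)      # uniform quantizer (finite-capacity link)

x = np.zeros(10)
L = np.sum(A**2, axis=0)                  # per-coordinate Lipschitz constants
for _ in range(2000):
    j = rng.integers(10)                  # randomized coordinate choice
    g = A[:, j] @ (A @ x - b)             # partial gradient for coordinate j
    x[j] -= quantize(g / L[j])            # apply the quantized update
```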
Opponent Modeling in Deep Reinforcement Learning | cs.LG | Opponent modeling is necessary in multi-agent settings where secondary agents
with competing goals also adapt their strategies, yet it remains challenging
because strategies interact with each other and change. Most previous work
focuses on developing probabilistic models or parameterized strategies for
specific applications. Inspired by the recent success of deep reinforcement
learning, we present neural-based models that jointly learn a policy and the
behavior of opponents. Instead of explicitly predicting the opponent's action,
we encode observation of the opponents into a deep Q-Network (DQN); however, we
retain explicit modeling (if desired) using multitasking. By using a
Mixture-of-Experts architecture, our model automatically discovers different
strategy patterns of opponents without extra supervision. We evaluate our
models on a simulated soccer game and a popular trivia game, showing superior
performance over DQN and its variants.
| He He, Jordan Boyd-Graber, Kevin Kwok, Hal Daumé III | null | 1609.05559 | null | null |
Tensor Completion by Alternating Minimization under the Tensor Train
(TT) Model | cs.NA cs.IT cs.LG math.IT | Using the matrix product state (MPS) representation of tensor train
decompositions, in this paper we propose a tensor completion algorithm which
alternates over the matrices (tensors) in the MPS representation. This
development is motivated in part by the success of matrix completion algorithms
which alternate over the (low-rank) factors. We comment on the computational
complexity of the proposed algorithm and numerically compare it with existing
methods employing low rank tensor train approximation for data completion as
well as several other recently proposed methods. We show that our method is
superior to existing ones for a variety of real settings.
| Wenqi Wang and Vaneet Aggarwal and Shuchin Aeron | null | 1609.05587 | null | null |
Enhancing LambdaMART Using Oblivious Trees | cs.IR cs.LG | Learning to rank is a machine learning technique broadly used in many areas
such as document retrieval, collaborative filtering or question answering. We
present experimental results which suggest that the performance of the current
state-of-the-art learning to rank algorithm LambdaMART, when used for document
retrieval for search engines, can be improved if standard regression trees are
replaced by oblivious trees. This paper provides a comparison of both variants
and our results demonstrate that the use of oblivious trees can improve the
performance by more than $2.2\%$. Additional experimental analysis of the
influence of a number of features and of a size of the training set is also
provided and confirms the desirability of properties of oblivious decision
trees.
| Michal Ferov and Marek Modrý | null | 1609.05610 | null | null |
Stochastic Matrix Factorization | stat.ML cs.LG | This paper considers a restriction to non-negative matrix factorization in
which at least one matrix factor is stochastic. That is, the elements of the
matrix factors are non-negative and the columns of one matrix factor sum to 1.
This restriction includes topic models, a popular method for analyzing
unstructured data. It also includes a method for storing and finding pictures.
The paper presents necessary and sufficient conditions on the observed data
such that the factorization is unique. In addition, the paper characterizes
natural bounds on the parameters for any observed data and presents a
consistent least squares estimator. The results are illustrated using a topic
model analysis of PhD abstracts in economics and the problem of storing and
retrieving a set of pictures of faces.
| Christopher Adams | null | 1609.05772 | null | null |
Inherent Trade-Offs in the Fair Determination of Risk Scores | cs.LG cs.CY stat.ML | Recent discussion in the public sphere about algorithmic classification has
involved tension between competing notions of what it means for a probabilistic
classification to be fair to different groups. We formalize three fairness
conditions that lie at the heart of these debates, and we prove that except in
highly constrained special cases, there is no method that can satisfy these
three conditions simultaneously. Moreover, even satisfying all three conditions
approximately requires that the data lie in an approximate version of one of
the constrained special cases identified by our theorem. These results suggest
some of the ways in which key notions of fairness are incompatible with each
other, and hence provide a framework for thinking about the trade-offs between
them.
| Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan | null | 1609.05807 | null | null |
The Projected Power Method: An Efficient Algorithm for Joint Alignment
from Pairwise Differences | cs.IT cs.CV cs.LG math.IT math.OC stat.ML | Various applications involve assigning discrete label values to a collection
of objects based on some pairwise noisy data. Due to the discrete---and hence
nonconvex---structure of the problem, computing the optimal assignment
(e.g.~maximum likelihood assignment) becomes intractable at first sight. This
paper makes progress towards efficient computation by focusing on a concrete
joint alignment problem---that is, the problem of recovering $n$ discrete
variables $x_i \in \{1,\cdots, m\}$, $1\leq i\leq n$ given noisy observations
of their modulo differences $\{x_i - x_j~\mathsf{mod}~m\}$. We propose a
low-complexity and model-free procedure, which operates in a lifted space by
representing distinct label values in orthogonal directions, and which attempts
to optimize quadratic functions over hypercubes. Starting with a first guess
computed via a spectral method, the algorithm successively refines the iterates
via projected power iterations. We prove that for a broad class of statistical
models, the proposed projected power method makes no error---and hence
converges to the maximum likelihood estimate---in a suitable regime. Numerical
experiments have been carried out on both synthetic and real data to
demonstrate the practicality of our algorithm. We expect this algorithmic
framework to be effective for a broad range of discrete assignment problems.
| Yuxin Chen and Emmanuel Candes | null | 1609.05820 | null | null |
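A hedged sketch of the projected power idea for joint alignment: lift each discrete variable to an m-dimensional block, repeatedly aggregate the votes implied by the observed pairwise differences, and project each block back to a one-hot vector. The random (rather than spectral) initialization and the hard argmax projection are simplifications for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 6
x_true = rng.integers(m, size=n)
D = (x_true[:, None] - x_true[None, :]) % m          # clean modulo differences
flip = rng.random((n, n)) < 0.15                      # corrupt 15% of entries
D = np.where(flip, rng.integers(m, size=(n, n)), D)

Z = np.eye(m)[rng.integers(m, size=n)]                # random one-hot init
for _ in range(30):
    Y = np.zeros_like(Z)
    for i in range(n):
        for j in range(n):
            if i != j:
                Y[i] += np.roll(Z[j], D[i, j])        # vote for x_i = x_j + d_ij
    Z = np.eye(m)[Y.argmax(axis=1)]                   # project back to one-hot

x_hat = Z.argmax(axis=1)
# Recovery is only up to a global shift: (x_hat - x_true) mod m is constant.
```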
A Cheap Linear Attention Mechanism with Fast Lookups and Fixed-Size
Representations | cs.LG cs.IR cs.NE stat.ML | The softmax content-based attention mechanism has proven to be very
beneficial in many applications of recurrent neural networks. Nevertheless it
suffers from two major computational limitations. First, its computations for
an attention lookup scale linearly in the size of the attended sequence.
Second, it does not encode the sequence into a fixed-size representation but
instead requires to memorize all the hidden states. These two limitations
restrict the use of the softmax attention mechanism to relatively small-scale
applications with short sequences and few lookups per sequence. In this work we
introduce a family of linear attention mechanisms designed to overcome the two
limitations listed above. We show that removing the softmax non-linearity from
the traditional attention formulation yields constant-time attention lookups
and fixed-size representations of the attended sequences. These properties make
these linear attention mechanisms particularly suitable for large-scale
applications with extreme query loads, real-time requirements and memory
constraints. Early experiments on a question answering task show that these
linear mechanisms yield significantly better accuracy results than no
attention, but obviously worse than their softmax alternative.
| Alexandre de Brébisson, Pascal Vincent | null | 1609.05866 | null | null |
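A small numpy sketch contrasting softmax attention with the linear variant this abstract describes: without the softmax, the attended sequence can be pre-summarized as a fixed-size d x d matrix, making each lookup constant-time in the sequence length. Shapes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 64
K = rng.normal(size=(T, d))              # keys of the attended sequence
V = rng.normal(size=(T, d))              # values
q = rng.normal(size=d)                   # one query

# Softmax attention: O(T) work per lookup, all T states kept in memory.
w = np.exp(K @ q); w /= w.sum()
out_softmax = w @ V

# Linear attention: fixed-size summary, constant-time lookups.
C = K.T @ V                              # d x d summary, built once
out_linear = q @ C                       # each lookup ignores T entirely
```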
Online and Distributed learning of Gaussian mixture models by Bayesian
Moment Matching | cs.AI cs.LG stat.ML | The Gaussian mixture model is a classic technique for clustering and data
modeling that is used in numerous applications. With the rise of big data,
there is a need for parameter estimation techniques that can handle streaming
data and distribute the computation over several processors. While online
variants of the Expectation Maximization (EM) algorithm exist, their data
efficiency is reduced by a stochastic approximation of the E-step and it is not
clear how to distribute the computation over multiple processors. We propose a
Bayesian learning technique that lends itself naturally to online and
distributed computation. Since the Bayesian posterior is not tractable, we
project it onto a family of tractable distributions after each observation by
matching a set of sufficient moments. This Bayesian moment matching technique
compares favorably to online EM in terms of time and accuracy on a set of data
modeling benchmarks.
| Priyank Jaini and Pascal Poupart | null | 1609.05881 | null | null |
A Quantum Implementation Model for Artificial Neural Networks | quant-ph cs.LG cs.NE | The learning process for multi layered neural networks with many nodes makes
heavy demands on computational resources. In some neural network models, the
learning formulas, such as the Widrow-Hoff formula, do not change the
eigenvectors of the weight matrix while flattening the eigenvalues. In the limit,
these iterative formulas result in terms formed by the principal components of
the weight matrix, i.e., the eigenvectors corresponding to the non-zero
eigenvalues. In quantum computing, the phase estimation algorithm is known to
provide speed-ups over the conventional algorithms for the eigenvalue-related
problems. Combining the quantum amplitude amplification with the phase
estimation algorithm, a quantum implementation model for artificial neural
networks using the Widrow-Hoff learning rule is presented. The complexity of
the model is found to be linear in the size of the weight matrix. This provides
a quadratic improvement over the classical algorithms.
| Ammar Daskin | 10.12743/quanta.v7i1.65 | 1609.05884 | null | null |
Conformalized Kernel Ridge Regression | stat.ML cs.LG stat.AP | General predictive models do not provide a measure of confidence in
predictions without Bayesian assumptions. A way to circumvent potential
restrictions is to use conformal methods for constructing non-parametric
confidence regions, which offer guarantees regarding validity. In this paper we
provide a detailed description of a computationally efficient conformal
procedure for Kernel Ridge Regression (KRR), and conduct a comparative
numerical study to see how well conformal regions perform against the Bayesian
confidence sets. The results suggest that conformalized KRR can yield
predictive confidence regions with specified coverage rate, which is essential
in constructing anomaly detection systems based on predictive models.
| Evgeny Burnaev and Ivan Nazarov | null | 1609.05959 | null | null |
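A minimal split-conformal sketch around scikit-learn's KernelRidge; the paper develops a more efficient full conformal procedure for KRR, so this simplified stand-in (with illustrative data, kernel parameters and miscoverage level) only shows the shape of the guarantee.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 300)

tr, cal = slice(0, 200), slice(200, 300)
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1.0).fit(X[tr], y[tr])

alpha = 0.1                                       # target 90% coverage
res = np.abs(y[cal] - model.predict(X[cal]))      # calibration residuals
q = np.quantile(res, np.ceil((len(res) + 1) * (1 - alpha)) / len(res))

x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
interval = (pred - q, pred + q)                   # valid under exchangeability
```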
An Approach for Self-Training Audio Event Detectors Using Web Data | cs.SD cs.LG cs.MM | Audio Event Detection (AED) aims to recognize sounds within audio and video
recordings. AED employs machine learning algorithms commonly trained and tested
on annotated datasets. However, available datasets are limited in number of
samples and hence it is difficult to model acoustic diversity. Therefore, we
propose combining labeled audio from a dataset and unlabeled audio from the web
to improve the sound models. The audio event detectors are trained on the
labeled audio and run on the unlabeled audio downloaded from YouTube. Whenever
the detectors recognized any of the known sounds with high confidence, the
unlabeled audio was used to re-train the detectors. The performance of the
re-trained detectors is compared to that of the original detectors using
the annotated test set. Results showed an improvement in AED, and uncovered
challenges of using web audio from videos.
| Benjamin Elizalde, Ankit Shah, Siddharth Dalmia, Min Hun Lee, Rohan
Badlani, Anurag Kumar, Bhiksha Raj and Ian Lane | null | 1609.06026 | null | null |
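A generic sketch of the confidence-based self-training loop the abstract describes, with a logistic-regression classifier and random features standing in for real audio detectors and web audio; the confidence threshold and number of rounds are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(200, 20)); y_lab = rng.integers(2, size=200)
X_unlab = rng.normal(size=(1000, 20))             # stand-in for web audio

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(3):                                # a few self-training rounds
    proba = clf.predict_proba(X_unlab)
    conf = proba.max(axis=1)
    keep = conf > 0.95                            # high-confidence detections
    if not keep.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[keep]])     # fold them into training data
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    X_unlab = X_unlab[~keep]
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
```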
Modelling Stock-market Investors as Reinforcement Learning Agents
[Correction] | cs.CE cs.LG | Decision making in uncertain and risky environments is a prominent area of
research. Standard economic theories fail to fully explain human behaviour,
while a potentially promising alternative may lie in the direction of
Reinforcement Learning (RL) theory. We analyse data for 46 players extracted
from a financial market online game and test whether Reinforcement Learning
(Q-Learning) could capture these players' behaviour using a risk measure based
on financial modeling. Moreover we test an earlier hypothesis that players are
"naïve" (short-sighted). Our results indicate that a simple Reinforcement
Learning model which considers only the selling component of the task captures
the decision-making process for a subset of players, but this is not sufficient
to draw any conclusion on the population. We also find no significant
improvement in the fit to the players when using a full RL model over a myopic
version, where only immediate reward is valued by the players. This indicates
that players, if using a Reinforcement Learning approach, do so naïvely.
| Alvin Pastore, Umberto Esposito, Eleni Vasilaki | 10.1109/EAIS.2015.7368789 | 1609.06086 | null | null |
Distributed Adaptive Learning of Graph Signals | cs.LG stat.ML | The aim of this paper is to propose distributed strategies for adaptive
learning of signals defined over graphs. Assuming the graph signal to be
bandlimited, the method enables distributed reconstruction, with guaranteed
performance in terms of mean-square error, and tracking from a limited number
of sampled observations taken from a subset of vertices. A detailed mean square
analysis is carried out, illustrating the role played by the sampling strategy
in the performance of the proposed method. Finally, some useful
strategies for distributed selection of the sampling set are provided. Several
numerical results validate our theoretical findings, and illustrate the
performance of the proposed method for distributed adaptive learning of signals
defined over graphs.
| P. Di Lorenzo, P. Banelli, S. Barbarossa, S. Sardellitti | 10.1109/TSP.2017.2708035 | 1609.06100 | null | null
FastBDT: A speed-optimized and cache-friendly implementation of
stochastic gradient-boosted decision trees for multivariate classification | cs.LG | Stochastic gradient-boosted decision trees are widely employed for
multivariate classification and regression tasks. This paper presents a
speed-optimized and cache-friendly implementation for multivariate
classification called FastBDT. FastBDT is one order of magnitude faster during
the fitting phase and the application phase than popular implementations in
software frameworks like TMVA, scikit-learn and XGBoost. The concepts used to
optimize the execution time, together with performance studies, are discussed
in detail in this paper. The key ideas include: an equal-frequency binning of
the input data, which allows replacing expensive floating-point operations with
integer operations while at the same time increasing the quality of the
classification; and a cache-friendly linear access pattern to the input data,
in contrast to usual implementations, which exhibit a random access pattern.
FastBDT provides interfaces to C/C++, Python and TMVA. It is extensively used
in the field of high energy physics by the Belle II experiment.
| Thomas Keck | null | 1609.06119 | null | null |
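The equal-frequency binning idea can be sketched in a few lines of Python; the bin count and the quantile-based edges below are illustrative and are not taken from FastBDT's actual C++ code.

# Map a float feature to small integer bin indices with ~equal occupancy,
# so later tree fitting can use integer comparisons and histogram counts.
import numpy as np

def equal_frequency_bin(feature, n_bins=16):
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, feature).astype(np.uint8)

x = np.random.exponential(size=10_000)
binned = equal_frequency_bin(x)   # values in {0, ..., 15}
print(np.bincount(binned))        # roughly uniform counts (up to ties)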
A framework for mining process models from emails logs | cs.CL cs.LG | Due to its wide use in personal, but most importantly, professional contexts,
email represents a valuable source of information that can be harvested for
understanding, reengineering and repurposing undocumented business processes of
companies and institutions. Towards this aim, a few researchers investigated
the problem of extracting process-oriented information from email logs in order
to take advantage of the many available process mining techniques and tools. In
this paper we go further in this direction by proposing a new method for mining
process models from email logs that leverages unsupervised machine learning
techniques with little human involvement. Moreover, our method allows emails to
be semi-automatically labeled with activity names, which can be used for
activity recognition in new incoming emails. A use case demonstrates the
usefulness of the proposed solution on a modest-sized, yet real-world, dataset
containing emails that belong to two different process models.
| Diana Jlailaty and Daniela Grigori and Khalid Belhajjame | null | 1609.06127 | null | null |
mlr Tutorial | cs.LG | This document provides an in-depth introduction to the mlr framework for
machine learning experiments in R.
| Julia Schiffner, Bernd Bischl, Michel Lang, Jakob Richter, Zachary M.
Jones, Philipp Probst, Florian Pfisterer, Mason Gallo, Dominik Kirchhoff,
Tobias K\"uhn, Janek Thomas, Lars Kotthoff | null | 1609.06146 | null | null |
Unsupervised learning of transcriptional regulatory networks via latent
tree graphical models | q-bio.MN cs.LG | Gene expression is a readily-observed quantification of transcriptional
activity and cellular state that enables the recovery of the relationships
between regulators and their target genes. Reconstructing transcriptional
regulatory networks from gene expression data is a problem that has attracted
much attention, but previous work often makes the simplifying (but unrealistic)
assumption that regulator activity is represented by mRNA levels. We use a
latent tree graphical model to analyze gene expression without relying on
transcription factor expression as a proxy for regulator activity. The latent
tree model is a type of Markov random field that includes both observed gene
variables and latent (hidden) variables, which factorize on a Markov tree.
Through efficient unsupervised learning approaches, we determine which groups
of genes are co-regulated by hidden regulators and the activity levels of those
regulators. Post-processing annotates many of these discovered latent variables
as specific transcription factors or groups of transcription factors. Other
latent variables do not necessarily represent physical regulators but instead
reveal hidden structure in the gene expression such as shared biological
function. We apply the latent tree graphical model to a yeast stress response
dataset. In addition to novel predictions, such as condition-specific binding
of the transcription factor Msn4, our model recovers many known aspects of the
yeast regulatory network. These include groups of co-regulated genes,
condition-specific regulator activity, and combinatorial regulation among
transcription factors. The latent tree graphical model is a general approach
for analyzing gene expression data that requires no prior knowledge of which
possible regulators exist, regulator activity, or where transcription factors
physically bind.
| Anthony Gitter, Furong Huang, Ragupathyraj Valluvan, Ernest Fraenkel,
Animashree Anandkumar | null | 1609.06335 | null | null |
Recognizing Detailed Human Context In-the-Wild from Smartphones and
Smartwatches | cs.AI cs.CY cs.HC cs.LG | The ability to automatically recognize a person's behavioral context can
contribute to health monitoring, aging care and many other domains. Validating
context recognition in-the-wild is crucial to promote practical applications
that work in real-life settings. We collected over 300k minutes of sensor data
with context labels from 60 subjects. Unlike previous studies, our subjects
used their own personal phone, in any way that was convenient to them, and
engaged in their routine in their natural environments. Unscripted behavior and
unconstrained phone usage resulted in situations that are harder to recognize.
We demonstrate how fusion of multi-modal sensors is important for resolving
such cases. We present a baseline system, and encourage researchers to use our
public dataset to compare methods and improve context recognition in-the-wild.
| Yonatan Vaizman and Katherine Ellis and Gert Lanckriet | null | 1609.06354 | null | null |
Geometry-Based Next Frame Prediction from Monocular Video | cs.LG cs.CV | We consider the problem of next frame prediction from video input. A
recurrent convolutional neural network is trained to predict depth from
monocular video input, which, along with the current video image and the camera
trajectory, can then be used to compute the next frame. Unlike prior next-frame
prediction approaches, we take advantage of the scene geometry and use the
predicted depth for generating the next frame prediction. Our approach can
produce rich next frame predictions which include depth information attached to
each pixel. Another novel aspect of our approach is that it predicts depth from
a sequence of images (e.g. in a video), rather than from a single still image.
We evaluate the proposed approach on the KITTI dataset, a standard dataset for
benchmarking tasks relevant to autonomous driving. The proposed method produces
results which are visually and numerically superior to existing methods that
directly predict the next frame. We show that the accuracy of depth prediction
improves as more prior frames are considered.
| Reza Mahjourian, Martin Wicke, Anelia Angelova | null | 1609.06377 | null | null |
Multiclass Classification Calibration Functions | stat.ML cs.LG | In this paper we refine the process of computing calibration functions for a
number of multiclass classification surrogate losses. Calibration functions are
a powerful tool for easily converting bounds for the surrogate risk (which can
be computed through well-known methods) into bounds for the true risk, the
probability of making a mistake. They are particularly suitable in
non-parametric settings, where the approximation error can be controlled, and
provide tighter bounds than the common technique of upper-bounding the 0-1 loss
by the surrogate loss.
The abstract nature of the more sophisticated existing calibration function
results requires calibration functions to be explicitly derived on a
case-by-case basis, demanding repeated effort whenever bounds for a new
surrogate loss are needed. We devise a streamlined analysis that simplifies
the process of deriving calibration functions for a large number of surrogate
losses that have been proposed in the literature. The effort of deriving
calibration functions is then reduced to verifying, for a chosen surrogate
loss, a small number of conditions that we introduce.
As case studies, we recover existing calibration functions for the well-known
loss of Lee et al. (2004), and also provide novel calibration functions for
well-known losses, including the one-versus-all loss and the logistic
regression loss, plus a number of other losses that have been shown to be
classification-calibrated in the past, but for which no calibration function
had been derived.
| Bernardo \'Avila Pires and Csaba Szepesv\'ari | null | 1609.06385 | null | null |
Learning HMMs with Nonparametric Emissions via Spectral Decompositions
of Continuous Matrices | stat.ML cs.LG | Recently, there has been a surge of interest in using spectral methods for
estimating latent variable models. However, it is usually assumed that the
distribution of the observations conditioned on the latent variables is either
discrete or belongs to a parametric family. In this paper, we study the
estimation of an $m$-state hidden Markov model (HMM) with only smoothness
assumptions, such as H\"olderian conditions, on the emission densities. By
leveraging some recent advances in continuous linear algebra and numerical
analysis, we develop a computationally efficient spectral algorithm for
learning nonparametric HMMs. Our technique is based on computing an SVD on
nonparametric estimates of density functions by viewing them as
\emph{continuous matrices}. We derive sample complexity bounds via
concentration results for nonparametric density estimation and novel
perturbation theory results for continuous matrices. We implement our method
using Chebyshev polynomial approximations. Our method is competitive with other
baselines on synthetic and real problems and is also very computationally
efficient.
| Kirthevasan Kandasamy, Maruan Al-Shedivat, Eric P. Xing | null | 1609.06390 | null | null
Large-Scale Strategic Games and Adversarial Machine Learning | cs.GT cs.LG | Decision making in modern large-scale and complex systems such as
communication networks, smart electricity grids, and cyber-physical systems
motivate novel game-theoretic approaches. This paper investigates big strategic
(non-cooperative) games where a finite number of individual players each have a
large number of continuous decision variables and input data points. Such
high-dimensional decision spaces and big data sets lead to computational
challenges, relating to efforts in non-linear optimization scaling up to large
systems of variables. In addition to these computational challenges, real-world
players often have limited information about their preference parameters due to
the prohibitive cost of identifying them or due to operating in dynamic online
settings. The challenge of limited information is exacerbated in high
dimensions and big data sets. Motivated by both computational and information
limitations that constrain the direct solution of big strategic games, our
investigation centers around reductions using linear transformations such as
random projection methods and their effect on Nash equilibrium solutions.
Specific analytical results are presented for quadratic games and
approximations. In addition, an adversarial learning game is presented where
random projection and sampling schemes are investigated.
| Tansu Alpcan, Benjamin I. P. Rubinstein, Christopher Leckie | null | 1609.06438 | null | null |
AMOS: An Automated Model Order Selection Algorithm for Spectral Graph
Clustering | cs.SI cs.LG stat.ML | One of the longstanding problems in spectral graph clustering (SGC) is the
so-called model order selection problem: automated selection of the correct
number of clusters. This is equivalent to the problem of finding the number of
connected components or communities in an undirected graph. In this paper, we
propose AMOS, an automated model order selection algorithm for SGC. Based on a
recent analysis of clustering reliability for SGC under the random
interconnection model, AMOS works by incrementally increasing the number of
clusters, estimating the quality of identified clusters, and providing a series
of clustering reliability tests. Consequently, AMOS outputs clusters of minimal
model order with statistical clustering reliability guarantees. Compared to
three other automated graph clustering methods on real-world datasets, AMOS
shows superior performance in terms of multiple external and internal
clustering metrics.
| Pin-Yu Chen and Thibaut Gensollen and Alfred O. Hero III | null | 1609.06457 | null | null |
Network-regularized Sparse Logistic Regression Models for Clinical Risk
Prediction and Biomarker Discovery | q-bio.GN cs.LG stat.ML | Molecular profiling data (e.g., gene expression) has been used for clinical
risk prediction and biomarker discovery. However, it is necessary to integrate
other prior knowledge like biological pathways or gene interaction networks to
improve the predictive ability and biological interpretability of biomarkers.
Here, we first introduce a general regularized Logistic Regression (LR)
framework with regularized term $\lambda \|\bm{w}\|_1 +
\eta\bm{w}^T\bm{M}\bm{w}$, which can reduce to different penalties, including
Lasso, elastic net, and network-regularized terms with different $\bm{M}$. This
framework can be easily solved in a unified manner by a cyclic coordinate
descent algorithm which can avoid inverse matrix operation and accelerate the
computing speed. However, if those estimated $\bm{w}_i$ and $\bm{w}_j$ have
opposite signs, then the traditional network-regularized penalty may not
perform well. To address this, we introduce a novel network-regularized sparse
LR model with a new penalty $\lambda \|\bm{w}\|_1 + \eta|\bm{w}|^T\bm{M}|\bm{w}|$
that accounts for the difference between the absolute values of the
coefficients, and we develop two efficient algorithms to solve it. Finally, we
test our methods and compare them with related approaches on simulated and real
data to show their efficiency.
| Wenwen Min, Juan Liu, Shihua Zhang | null | 1609.06480 | null | null
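A sketch of evaluating the proposed objective in Python is given below; the matrix M (typically built from a gene-interaction network, e.g. a normalized Laplacian) and the regularization strengths are assumptions, and the cyclic coordinate descent solver itself is omitted.

# Logistic loss plus the absolute-value network penalty
#   lam * ||w||_1 + eta * |w|^T M |w|   (a sketch; solver omitted).
import numpy as np

def objective(w, X, y, M, lam=0.1, eta=0.1):
    # X: (n, p); y in {0, 1}; M: symmetric (p, p) network matrix.
    margins = (2 * y - 1) * (X @ w)          # map labels to {-1, +1}
    log_loss = np.mean(np.log1p(np.exp(-margins)))
    aw = np.abs(w)
    return log_loss + lam * aw.sum() + eta * aw @ M @ aw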
Document Image Coding and Clustering for Script Discrimination | cs.CV cs.AI cs.CL cs.LG cs.NE | The paper introduces a new method for discrimination of documents given in
different scripts. The document is mapped into a uniformly coded text of
numerical values. It is derived from the position of the letters in the text
line, based on their typographical characteristics. Each code is considered as
a gray level. Accordingly, the coded text determines a 1-D image, on which
texture analysis by run-length statistics and local binary pattern is
performed. It defines feature vectors representing the script content of the
document. A modified clustering approach applied to the document feature vectors
groups documents written in the same script. Experimentation performed on two
custom oriented databases of historical documents in old Cyrillic, angular and
round Glagolitic as well as Antiqua and Fraktur scripts demonstrates the
superiority of the proposed method with respect to well-known methods in the
state-of-the-art.
| Darko Brodic, Alessia Amelio, Zoran N. Milivojevic, Milena Jevtic | null | 1609.06492 | null | null |
Bibliographic Analysis on Research Publications using Authors,
Categorical Labels and the Citation Network | cs.DL cs.LG stat.ML | Bibliographic analysis considers the author's research areas, the citation
network and the paper content among other things. In this paper, we combine
these three in a topic model that produces a bibliographic model of authors,
topics and documents, using a nonparametric extension of a combination of the
Poisson mixed-topic link model and the author-topic model. This gives rise to
the Citation Network Topic Model (CNTM). We propose a novel and efficient
inference algorithm for the CNTM to explore subsets of research publications
from CiteSeerX. The publication datasets are organised into three corpora,
totalling about 168k publications with about 62k authors. The queried
datasets are made available online. On three publicly available corpora, in
addition to the queried datasets, our proposed model demonstrates improved
performance in both model fitting and document clustering, compared to several
baselines. Moreover, our model allows extraction of additional useful knowledge
from the corpora, such as the visualisation of the author-topics network.
Additionally, we propose a simple method to incorporate supervision into topic
modelling to achieve further improvement on the clustering task.
| Kar Wai Lim and Wray Buntine | 10.1007/s10994-016-5554-z | 1609.06532 | null | null |
On Data-Independent Properties for Density-Based Dissimilarity Measures
in Hybrid Clustering | stat.ML cs.LG | Hybrid clustering combines partitional and hierarchical clustering for
computational effectiveness and versatility in cluster shape. In such
clustering, a dissimilarity measure plays a crucial role in the hierarchical
merging. The dissimilarity measure has great impact on the final clustering,
and data-independent properties are needed to choose the right dissimilarity
measure for the problem at hand. Properties for distance-based dissimilarity
measures have been studied for decades, but properties for density-based
dissimilarity measures have so far received little attention. Here, we propose
six data-independent properties to evaluate density-based dissimilarity
measures associated with hybrid clustering, regarding equality, orthogonality,
symmetry, outlier and noise observations, and light-tailed models for
heavy-tailed clusters. The significance of the properties is investigated, and
we study some well-known dissimilarity measures based on Shannon entropy,
misclassification rate, Bhattacharyya distance and Kullback-Leibler divergence
with respect to the proposed properties. As none of them satisfy all the
proposed properties, we introduce a new dissimilarity measure based on the
Kullback-Leibler information and show that it satisfies all proposed
properties. The effect of the proposed properties is also illustrated on
several real and simulated data sets.
| Kajsa M{\o}llersen, Subhra S. Dhar, Fred Godtliebsen | 10.4236/am.2016.715143 | 1609.06533 | null | null |
Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced
Datasets in Machine Learning | cs.LG | Imbalanced-learn is an open-source python toolbox aiming at providing a wide
range of methods to cope with the problem of imbalanced dataset frequently
encountered in machine learning and pattern recognition. The implemented
state-of-the-art methods can be categorized into 4 groups: (i) under-sampling,
(ii) over-sampling, (iii) combination of over- and under-sampling, and (iv)
ensemble learning methods. The proposed toolbox only depends on numpy, scipy,
and scikit-learn and is distributed under MIT license. Furthermore, it is fully
compatible with scikit-learn and is part of the scikit-learn-contrib supported
project. Documentation, unit tests as well as integration tests are provided to
ease usage and contribution. The toolbox is publicly available in GitHub:
https://github.com/scikit-learn-contrib/imbalanced-learn.
| Guillaume Lemaitre and Fernando Nogueira and Christos K. Aridas | null | 1609.06570 | null | null
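A minimal usage sketch of the toolbox's scikit-learn-style API follows; note that the resampling method name has varied across releases (recent versions expose fit_resample), so check the installed version's documentation.

# Over-sampling an imbalanced dataset with SMOTE from imbalanced-learn.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.95, 0.05],
                           random_state=0)
print(Counter(y))                                 # imbalanced classes
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_res))                             # balanced after resampling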
Theoretical Evaluation of Feature Selection Methods based on Mutual
Information | stat.ML cs.LG | Feature selection methods are usually evaluated by wrapping specific
classifiers and datasets in the evaluation process, resulting very often in
unfair comparisons between methods. In this work, we develop a theoretical
framework that allows obtaining the true feature ordering of two-dimensional
sequential forward feature selection methods based on mutual information, which
is independent of entropy or mutual information estimation methods,
classifiers, or datasets, and leads to an undoubtful comparison of the methods.
Moreover, the theoretical framework unveils problems intrinsic to some methods
that are otherwise difficult to detect, namely inconsistencies in the
construction of the objective function used to select the candidate features,
due to various types of indeterminations and to the possibility of the entropy
of continuous random variables taking null and negative values.
| Cl\'audia Pascoal, M. Ros\'ario Oliveira, Ant\'onio Pacheco, and Rui
Valadas | null | 1609.06575 | null | null |
Twitter Opinion Topic Model: Extracting Product Opinions from Tweets by
Leveraging Hashtags and Sentiment Lexicon | cs.CL cs.IR cs.LG | Aspect-based opinion mining is widely applied to review data to aggregate or
summarize opinions of a product, and the current state-of-the-art is achieved
with Latent Dirichlet Allocation (LDA)-based model. Although social media data
like tweets are laden with opinions, their "dirty" nature (as natural language)
has discouraged researchers from applying LDA-based opinion model for product
review mining. Tweets are often informal, unstructured and lacking labeled data
such as categories and ratings, making it challenging for product opinion
mining. In this paper, we propose an LDA-based opinion model named Twitter
Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM
leverages hashtags, mentions, emoticons and strong sentiment words that are
present in tweets in its discovery process. It improves opinion prediction by
modeling the target-opinion interaction directly, thus discovering target
specific opinion words, neglected in existing approaches. Moreover, we propose
a new formulation of incorporating sentiment prior information into a topic
model, by utilizing an existing public sentiment lexicon. This is novel in that
it learns and updates with the data. We conduct experiments on 9 million tweets
on electronic products, and demonstrate the improved performance of TOTM in
both quantitative evaluations and qualitative analysis. We show that
aspect-based opinion analysis on massive volume of tweets provides useful
opinions on products.
| Kar Wai Lim, Wray Buntine | 10.1145/2661829.2662005 | 1609.06578 | null | null |
Privacy-Friendly Mobility Analytics using Aggregate Location Data | cs.CR cs.CY cs.LG | Location data can be extremely useful to study commuting patterns and
disruptions, as well as to predict real-time traffic volumes. At the same time,
however, the fine-grained collection of user locations raises serious privacy
concerns, as this can reveal sensitive information about the users, such as,
life style, political and religious inclinations, or even identities. In this
paper, we study the feasibility of crowd-sourced mobility analytics over
aggregate location information: users periodically report their location, using
a privacy-preserving aggregation protocol, so that the server can only recover
aggregates -- i.e., how many, but not which, users are in a region at a given
time. We experiment with real-world mobility datasets obtained from the
Transport For London authority and the San Francisco Cabs network, and present
a novel methodology based on time series modeling that is geared to forecast
traffic volumes in regions of interest and to detect mobility anomalies in
them. In the presence of anomalies, we also make enhanced traffic volume
predictions by feeding our model with additional information from correlated
regions. Finally, we present and evaluate a mobile app prototype, called
Mobility Data Donors (MDD), in terms of computation, communication, and energy
overhead, demonstrating the real-world deployability of our techniques.
| Apostolos Pyrgelis and Emiliano De Cristofaro and Gordon Ross | null | 1609.06582 | null | null |
Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient
Convolutional Neural Networks | cs.RO cs.AI cs.CV cs.LG cs.NE | This paper proposes a computationally efficient approach to detecting objects
natively in 3D point clouds using convolutional neural networks (CNNs). In
particular, this is achieved by leveraging a feature-centric voting scheme to
implement novel convolutional layers which explicitly exploit the sparsity
encountered in the input. To this end, we examine the trade-off between
accuracy and speed for different architectures and additionally propose to use
an L1 penalty on the filter activations to further encourage sparsity in the
intermediate representations. To the best of our knowledge, this is the first
work to propose sparse convolutional layers and L1 regularisation for efficient
large-scale processing of 3D data. We demonstrate the efficacy of our approach
on the KITTI object detection benchmark and show that Vote3Deep models with as
few as three layers outperform the previous state of the art in both laser and
laser-vision based approaches by margins of up to 40% while remaining highly
competitive in terms of processing time.
| Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, Ingmar
Posner | null | 1609.06666 | null | null |
Character-level and Multi-channel Convolutional Neural Networks for
Large-scale Authorship Attribution | cs.CL cs.LG | Convolutional neural networks (CNNs) have demonstrated superior capability
for extracting information from raw signals in computer vision. Recently,
character-level and multi-channel CNNs have exhibited excellent performance for
sentence classification tasks. We apply CNNs to large-scale authorship
attribution, which aims to determine an unknown text's author among many
candidate authors, motivated by their ability to process character-level
signals and to differentiate between a large number of classes, while making
fast predictions in comparison to state-of-the-art approaches. We extensively
evaluate CNN-based approaches that leverage word and character channels and
compare them against state-of-the-art methods for a large range of author
numbers, shedding new light on traditional approaches. We show that
character-level CNNs outperform the state-of-the-art on four out of five
datasets in different domains. Additionally, we present the first application
of authorship attribution to reddit.
| Sebastian Ruder, Parsa Ghaffari, John G. Breslin | null | 1609.06686 | null | null |
SoftTarget Regularization: An Effective Technique to Reduce Over-Fitting
in Neural Networks | cs.LG | Deep neural networks are learning models with a very high capacity and
therefore prone to over-fitting. Many regularization techniques such as
Dropout, DropConnect, and weight decay all attempt to solve the problem of
over-fitting by reducing the capacity of their respective models (Srivastava et
al., 2014), (Wan et al., 2013), (Krogh & Hertz, 1992). In this paper we
introduce a new form of regularization that guides the learning problem in a
way that reduces over-fitting without sacrificing the capacity of the model.
The mistakes that models make in early stages of training carry information
about the learning problem. By adjusting the labels of the current training
epoch through a weighted average of the real labels and an exponential average
of past soft-targets, we achieve a regularization scheme as powerful as Dropout
without necessarily reducing the capacity of the model, while simplifying the
learning problem. SoftTarget regularization
proved to be an effective tool in various neural network architectures.
| Armen Aghajanyan | null | 1609.06693 | null | null |
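The label-adjustment rule can be sketched in NumPy as below; the mixing coefficients beta and gamma are hypothetical placeholders for whatever schedule is tuned in practice.

# Per-epoch SoftTarget-style label adjustment (a sketch).
import numpy as np

def soft_targets(y_true, ema_soft, preds, beta=0.9, gamma=0.9):
    # Exponential moving average of past model outputs (soft targets) ...
    ema_soft = gamma * ema_soft + (1.0 - gamma) * preds
    # ... mixed with the real one-hot labels for the current epoch.
    y_adjusted = beta * y_true + (1.0 - beta) * ema_soft
    return y_adjusted, ema_soft

# Each epoch: preds = model.predict(X)
#             y_adj, ema = soft_targets(Y, ema, preds)
#             then train the epoch against y_adj instead of Y.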
PixelNet: Towards a General Pixel-level Architecture | cs.CV cs.LG | We explore architectures for general pixel-level prediction problems, from
low-level edge detection to mid-level surface normal estimation to high-level
semantic segmentation. Convolutional predictors, such as the
fully-convolutional network (FCN), have achieved remarkable success by
exploiting the spatial redundancy of neighboring pixels through convolutional
processing. Though computationally efficient, we point out that such approaches
are not statistically efficient during learning precisely because spatial
redundancy limits the information learned from neighboring pixels. We
demonstrate that (1) stratified sampling allows us to add diversity during
batch updates and (2) sampled multi-scale features allow us to explore more
nonlinear predictors (multiple fully-connected layers followed by ReLU) that
improve overall accuracy. Finally, our objective is to show how a single
architecture can achieve performance better than (or comparable to)
architectures designed
for a particular task. Interestingly, our single architecture produces
state-of-the-art results for semantic segmentation on PASCAL-Context, surface
normal estimation on NYUDv2 dataset, and edge detection on BSDS without
contextual post-processing.
| Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, Deva Ramanan | null | 1609.06694 | null | null |
Nonparametric Bayesian Topic Modelling with the Hierarchical Pitman-Yor
Processes | stat.ML cs.CL cs.LG | The Dirichlet process and its extension, the Pitman-Yor process, are
stochastic processes that take probability distributions as a parameter. These
processes can be stacked up to form a hierarchical nonparametric Bayesian
model. In this article, we present efficient methods for the use of these
processes in this hierarchical context, and apply them to latent variable
models for text analytics. In particular, we propose a general framework for
designing these Bayesian models, which are called topic models in the computer
science community. We then propose a specific nonparametric Bayesian topic
model for modelling text from social media. We focus on tweets (posts on
Twitter) in this article due to their ease of access. We find that our
nonparametric model performs better than existing parametric models in both
goodness of fit and real world applications.
| Kar Wai Lim, Wray Buntine, Changyou Chen, Lan Du | 10.1016/j.ijar.2016.07.007 | 1609.06783 | null | null |
Decoupled Asynchronous Proximal Stochastic Gradient Descent with
Variance Reduction | cs.LG math.OC | In the era of big data, optimizing large scale machine learning problems
becomes a challenging task and draws significant attention. Asynchronous
optimization algorithms come out as a promising solution. Recently, decoupled
asynchronous proximal stochastic gradient descent (DAP-SGD) is proposed to
minimize a composite function. It is claimed to off-load the computational
bottleneck from the server to the workers by allowing workers to evaluate the
proximal operators, so that the server only needs to perform element-wise
operations. However, it still suffers from a slow convergence rate because the
variance of the stochastic gradient is nonzero. In this paper, we propose a
faster method, the decoupled asynchronous proximal stochastic variance reduced
gradient descent method (DAP-SVRG). We prove that our method has linear
convergence for strongly convex problems. Large-scale experiments are also
conducted in this paper, and the results corroborate our theoretical analysis.
| Zhouyuan Huo, Bin Gu, Heng Huang | null | 1609.06804 | null | null |
Bibliographic Analysis with the Citation Network Topic Model | cs.DL cs.LG stat.ML | Bibliographic analysis considers an author's research areas, the citation
network and paper content among other things. In this paper, we combine these
three in a topic model that produces a bibliographic model of authors, topics
and documents using a non-parametric extension of a combination of the Poisson
mixed-topic link model and the author-topic model. We propose a novel and
efficient inference algorithm for the model to explore subsets of research
publications from CiteSeerX. Our model demonstrates improved performance in
both model fitting and a clustering task compared to several baselines.
| Kar Wai Lim, Wray Buntine | null | 1609.06826 | null | null |
Hawkes Processes with Stochastic Excitations | cs.LG stat.ML | We propose an extension to Hawkes processes by treating the levels of
self-excitation as a stochastic differential equation. Our new point process
allows better approximation in application domains where events and intensities
accelerate each other with correlated levels of contagion. We generalize a
recent algorithm for simulating draws from Hawkes processes whose levels of
excitation are stochastic processes, and propose a hybrid Markov chain Monte
Carlo approach for model fitting. Our sampling procedure scales linearly with
the number of required events and does not require stationarity of the point
process. A modular inference procedure consisting of a combination of Gibbs and
Metropolis-Hastings steps is put forward. We recover expectation
maximization as a special case. Our general approach is illustrated for
contagion following geometric Brownian motion and exponential Langevin
dynamics.
| Young Lee, Kar Wai Lim, Cheng Soon Ong | null | 1609.06831 | null | null |
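For reference, the constant-excitation baseline that the paper generalizes can be simulated by Ogata-style thinning, sketched below; the SDE-driven excitation levels and the hybrid MCMC fitting are omitted, and the parameters are illustrative (stability needs alpha < beta).

# Ogata-style thinning for a Hawkes process with exponential kernel:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
import numpy as np

def intensity(t, events, mu, alpha, beta):
    ev = np.asarray(events, dtype=float)
    return mu + alpha * np.exp(-beta * (t - ev)).sum()

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        # Between events the intensity decays, so its current value
        # upper-bounds it until the next accepted point.
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        if rng.uniform() <= intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)

print(len(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=100.0)))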
Exact Sampling from Determinantal Point Processes | cs.LG math.PR stat.ML | Determinantal point processes (DPPs) are an important concept in random
matrix theory and combinatorics. They have also recently attracted interest in
the study of numerical methods for machine learning, as they offer an elegant
"missing link" between independent Monte Carlo sampling and deterministic
evaluation on regular grids, applicable to a general set of spaces. This is
helpful whenever an algorithm explores to reduce uncertainty, such as in active
learning, Bayesian optimization, reinforcement learning, and marginalization in
graphical models. To draw samples from a DPP in practice, existing literature
focuses on approximate schemes of low cost, or comparably inefficient exact
algorithms like rejection sampling. We point out that, for many settings of
relevance to machine learning, it is also possible to draw exact samples from
DPPs on continuous domains. We start from an intuitive example on the real
line, which is then generalized to multivariate real vector spaces. We also
compare to previously studied approximations, showing that exact sampling,
despite higher cost, can be preferable where precision is needed.
| Philipp Hennig and Roman Garnett | null | 1609.06840 | null | null
Randomized Independent Component Analysis | stat.ML cs.LG cs.SY math.PR math.ST stat.TH | Independent component analysis (ICA) is a method for recovering statistically
independent signals from observations of unknown linear combinations of the
sources. Some of the most accurate ICA decomposition methods require searching
for the inverse transformation which minimizes different approximations of the
Mutual Information, a measure of statistical independence of random vectors.
Two such approximations are the Kernel Generalized Variance or the Kernel
Canonical Correlation which has been shown to reach the highest performance of
ICA methods. However, the computational effort necessary just for computing
these measures is cubic in the sample size. Hence, optimizing them becomes even
more computationally demanding, in terms of both space and time. Here, we
propose a couple of alternative novel measures based on randomized features of
the samples - the Randomized Generalized Variance and the Randomized Canonical
Correlation. The computational complexity of calculating the proposed
alternatives is linear in the sample size, and they provide a controllable
approximation of their Kernel-based non-random versions. We also show that
optimizing the proposed statistical properties yields comparable separation
error an order of magnitude faster than Kernel-based measures.
| Matan Sela and Ron Kimmel | null | 1609.06942 | null | null |
Semiring Programming: A Declarative Framework for Generalized Sum
Product Problems | cs.AI cs.LG cs.LO | To solve hard problems, AI relies on a variety of disciplines such as logic,
probabilistic reasoning, machine learning and mathematical programming.
Although it is widely accepted that solving real-world problems requires an
integration amongst these, contemporary representation methodologies offer
little support for this.
In an attempt to alleviate this situation, we introduce a new declarative
programming framework that provides abstractions of well-known problems such as
SAT, Bayesian inference, generative models, and convex optimization. The
semantics of programs is defined in terms of first-order structures with
semiring labels, which allows us to freely combine and integrate problems from
different AI disciplines.
| Vaishak Belle, Luc De Raedt | null | 1609.06954 | null | null |
Early Warning System for Seismic Events in Coal Mines Using Machine
Learning | cs.LG stat.ML | This document describes an approach to the problem of predicting dangerous
seismic events in active coal mines up to 8 hours in advance. It was developed
as a part of the AAIA'16 Data Mining Challenge: Predicting Dangerous Seismic
Events in Active Coal Mines. The solutions presented consist of ensembles of
various predictive models trained on different sets of features. The best one
achieved a winning score of 0.939 AUC.
| Robert Bogucki, Jan Lasek, Jan Kanty Milczek, Michal Tadeusiak | null | 1609.06957 | null | null |
Pose-Selective Max Pooling for Measuring Similarity | cs.CV cs.AI cs.LG stat.ML | In this paper, we deal with two challenges for measuring the similarity of
the subject identities in practical video-based face recognition - the
variation of the head pose in uncontrolled environments and the computational
expense of processing videos. Since the frame-wise feature mean is unable to
characterize the pose diversity among frames, we define and preserve the
overall pose diversity and closeness in a video. Then, identity will be the
only source of variation across videos since the pose varies even within a
single video. Instead of simply using all the frames, we select those faces
whose pose point is closest to the centroid of the K-means cluster containing
that pose point. Then, we represent a video as a bag of frame-wise deep face
features while the number of features has been reduced from hundreds to K.
Since the video representation can well represent the identity, now we measure
the subject similarity between two videos as the max correlation among all
possible pairs in the two bags of features. On the official 5,000 video-pairs
of the YouTube Face dataset for face verification, our algorithm achieves a
comparable performance with VGG-face that averages over deep features of all
frames. Other vision tasks can also benefit from the generic idea of employing
geometric cues to improve the descriptiveness of deep features.
| Xiang Xiang and Trac D. Tran | null | 1609.07042 | null | null |
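The frame selection and bag-similarity steps can be sketched as below; the pose estimator, the deep face-feature extractor, and the cluster count K are assumed to be given.

# K-means on per-frame pose points, keep the frame nearest each centroid,
# then score two videos by max correlation over all feature pairs.
import numpy as np
from sklearn.cluster import KMeans

def select_frames(pose_points, K=9, seed=0):
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(pose_points)
    picked = []
    for k in range(K):
        idx = np.where(km.labels_ == k)[0]
        d = np.linalg.norm(pose_points[idx] - km.cluster_centers_[k], axis=1)
        picked.append(idx[np.argmin(d)])
    return np.array(picked)

def video_similarity(feats_a, feats_b):
    # feats_*: (K, d) bags of frame-wise deep features.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return (a @ b.T).max()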
Quantized Neural Networks: Training Neural Networks with Low Precision
Weights and Activations | cs.NE cs.LG | We introduce a method to train Quantized Neural Networks (QNNs) --- neural
networks with extremely low precision (e.g., 1-bit) weights and activations, at
run-time. At train-time the quantized weights and activations are used for
computing the parameter gradients. During the forward pass, QNNs drastically
reduce memory size and accesses, and replace most arithmetic operations with
bit-wise operations. As a result, power consumption is expected to be
drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and
ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to
their 32-bit counterparts. For example, our quantized version of AlexNet with
1-bit weights and 2-bit activations achieves $51\%$ top-1 accuracy. Moreover,
we quantize the parameter gradients to 6 bits as well, which enables gradient
computation using only bit-wise operations. Quantized recurrent neural networks
were tested over the Penn Treebank dataset, and achieved comparable accuracy as
their 32-bit counterparts using only 4-bits. Last but not least, we programmed
a binary matrix multiplication GPU kernel with which it is possible to run our
MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering
any loss in classification accuracy. The QNN code is available online.
| Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv and
Yoshua Bengio | null | 1609.07061 | null | null |
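The 1-bit weight quantization building block can be sketched with a straight-through gradient estimator in PyTorch, as below; this shows only the binarization step, not the paper's full recipe (activation and gradient quantization are omitted).

# Binarization with a straight-through estimator: sign() forward,
# clipped-identity gradient backward.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Pass the gradient through, cancelling it where |w| > 1.
        return grad_out * (w.abs() <= 1).float()

w = torch.randn(4, 4, requires_grad=True)
BinarizeSTE.apply(w).sum().backward()
print(w.grad)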
Large Margin Nearest Neighbor Classification using Curved Mahalanobis
Distances | cs.LG cs.CG cs.CV | We consider the supervised classification problem of machine learning in
Cayley-Klein projective geometries: We show how to learn a curved Mahalanobis
metric distance corresponding to either the hyperbolic geometry or the elliptic
geometry using the Large Margin Nearest Neighbor (LMNN) framework. We report on
our experimental results, and further consider the case of learning a mixed
curved Mahalanobis distance. Besides, we show that the Cayley-Klein Voronoi
diagrams are affine and can be built from equivalent (clipped) power diagrams,
and that Cayley-Klein balls have Mahalanobis shapes with displaced
centers.
| Frank Nielsen and Boris Muzellec and Richard Nock | null | 1609.07082 | null | null |
(Bandit) Convex Optimization with Biased Noisy Gradient Oracles | cs.LG stat.ML | Algorithms for bandit convex optimization and online learning often rely on
constructing noisy gradient estimates, which are then used in appropriately
adjusted first-order algorithms, replacing actual gradients. Depending on the
properties of the function to be optimized and the nature of ``noise'' in the
bandit feedback, the bias and variance of gradient estimates exhibit various
tradeoffs. In this paper we propose a novel framework that replaces the
specific gradient estimation methods with an abstract oracle. With the help of
the new framework we unify previous works, reproducing their results in a clean
and concise fashion, while, perhaps more importantly, the framework also allows
us to formally show that to achieve the optimal root-$n$ rate either the
algorithms that use existing gradient estimators, or the proof techniques used
to analyze them have to go beyond what exists today.
| Xiaowei Hu, Prashanth L.A., Andr\'as Gy\"orgy and Csaba Szepesv\'ari | null | 1609.07087 | null | null |
Learning Modular Neural Network Policies for Multi-Task and Multi-Robot
Transfer | cs.LG cs.RO | Reinforcement learning (RL) can automate a wide variety of robotic skills,
but learning each new skill requires considerable real-world data collection
and manual representation engineering to design policy classes or features.
Using deep reinforcement learning to train general purpose neural network
policies alleviates some of the burden of manual representation engineering by
using expressive policy classes, but exacerbates the challenge of data
collection, since such methods tend to be less efficient than RL with
low-dimensional, hand-designed representations. Transfer learning can mitigate
this problem by enabling us to transfer information from one skill to another
and even from one robot to another. We show that neural network policies can be
decomposed into "task-specific" and "robot-specific" modules, where the
task-specific modules are shared across robots, and the robot-specific modules
are shared across all tasks on that robot. This allows for sharing task
information, such as perception, between robots and sharing robot information,
such as dynamics and kinematics, between tasks. We exploit this decomposition
to train mix-and-match modules that can solve new robot-task combinations that
were not seen during training. Using a novel neural network architecture, we
demonstrate the effectiveness of our transfer method for enabling zero-shot
generalization with a variety of robots and tasks in simulation for both visual
and non-visual tasks.
| Coline Devin, Abhishek Gupta, Trevor Darrell, Pieter Abbeel, Sergey
Levine | null | 1609.07088 | null | null |
Neural Photo Editing with Introspective Adversarial Networks | cs.LG cs.CV cs.NE stat.ML | The increasingly photorealistic sample quality of generative image models
suggests their feasibility in applications beyond image generation. We present
the Neural Photo Editor, an interface that leverages the power of generative
neural networks to make large, semantically coherent changes to existing
images. To tackle the challenge of achieving accurate reconstructions without
loss of feature quality, we introduce the Introspective Adversarial Network, a
novel hybridization of the VAE and GAN. Our model efficiently captures
long-range dependencies through use of a computational block based on
weight-shared dilated convolutions, and improves generalization performance
with Orthogonal Regularization, a novel weight regularization method. We
validate our contributions on CelebA, SVHN, and CIFAR-100, and produce samples
and reconstructions with high visual fidelity.
| Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston | null | 1609.07093 | null | null |
A Fully Convolutional Neural Network for Speech Enhancement | cs.LG | In hearing aids, the presence of babble noise greatly degrades the
intelligibility of human speech. However, removing the babble without
creating artifacts in human speech is a challenging task in a low SNR
environment. Here, we sought to solve the problem by finding a `mapping'
between noisy speech spectra and clean speech spectra via supervised learning.
Specifically, we propose using fully Convolutional Neural Networks, which
have far fewer parameters than fully connected networks. The
proposed network, Redundant Convolutional Encoder Decoder (R-CED), demonstrates
that a convolutional network can be 12 times smaller than a recurrent network
and yet achieves better performance, which shows its applicability for an
embedded system: the hearing aids.
| Se Rim Park, Jinwon Lee | null | 1609.07132 | null | null |
Input Convex Neural Networks | cs.LG math.OC | This paper presents the input convex neural network architecture. These are
scalar-valued (potentially deep) neural networks with constraints on the
network parameters such that the output of the network is a convex function of
(some of) the inputs. The networks allow for efficient inference via
optimization over some inputs to the network given others, and can be applied
to settings including structured prediction, data imputation, reinforcement
learning, and others. In this paper we lay the basic groundwork for these
models, proposing methods for inference, optimization and learning, and analyze
their representational power. We show that many existing neural network
architectures can be made input-convex with a minor modification, and develop
specialized optimization algorithms tailored to this setting. Finally, we
highlight the performance of the methods on multi-label prediction, image
completion, and reinforcement learning problems, where we show improvement over
the existing state of the art in many cases.
| Brandon Amos, Lei Xu, J. Zico Kolter | null | 1609.07152 | null | null |
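The input-convex forward pass can be sketched in NumPy as below; the layer sizes are illustrative, and the elementwise non-negativity of the z-to-z weights (enforced here with abs), together with convex and non-decreasing activations, is what makes the output convex in x.

# Forward pass of an input convex network:
#   z_{i+1} = relu(Wz_i @ z_i + Wx_i @ x + b_i),  with Wz_i >= 0.
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

def icnn_forward(x, Wz_list, Wx_list, b_list):
    z = relu(Wx_list[0] @ x + b_list[0])        # first layer has no z-term
    for Wz, Wx, b in zip(Wz_list, Wx_list[1:], b_list[1:]):
        z = relu(np.abs(Wz) @ z + Wx @ x + b)   # abs() keeps Wz >= 0
    return z

x = rng.normal(size=3)
Wz = [rng.normal(size=(8, 8)), rng.normal(size=(1, 8))]
Wx = [rng.normal(size=(8, 3)), rng.normal(size=(8, 3)), rng.normal(size=(1, 3))]
b = [np.zeros(8), np.zeros(8), np.zeros(1)]
print(icnn_forward(x, Wz, Wx, b))               # scalar output, convex in x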
Multilayer Spectral Graph Clustering via Convex Layer Aggregation | cs.LG cs.SI stat.ML | Multilayer graphs are commonly used for representing different relations
between entities and handling heterogeneous data processing tasks. New
challenges arise in multilayer graph clustering for assigning clusters to a
common multilayer node set and for combining information from each layer. This
paper presents a theoretical framework for multilayer spectral graph clustering
of the nodes via convex layer aggregation. Under a novel multilayer signal plus
noise model, we provide a phase transition analysis that establishes the
existence of a critical value on the noise level that permits reliable cluster
separation. The analysis also specifies analytical upper and lower bounds on
the critical value, where the bounds become exact when the clusters have
identical sizes. Numerical experiments on synthetic multilayer graphs are
conducted to validate the phase transition analysis and study the effect of
layer weights and noise levels on clustering reliability.
| Pin-Yu Chen and Alfred O. Hero III | null | 1609.07200 | null | null
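The aggregation-then-clustering pipeline can be sketched as below; the layer weights are placeholders on the simplex, and the paper's weight selection and phase-transition-based reliability analysis are omitted.

# Convexly combine layer adjacencies, then run spectral clustering.
import numpy as np
from sklearn.cluster import SpectralClustering

def multilayer_sgc(adjacencies, weights, n_clusters):
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)  # simplex constraint
    A = sum(wi * Ai for wi, Ai in zip(w, adjacencies))  # convex aggregation
    sc = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return sc.fit_predict(A)

# Usage (two layers over a common node set):
# labels = multilayer_sgc([A1, A2], weights=[0.6, 0.4], n_clusters=3)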
A Novel Progressive Multi-label Classifier for Class-incremental Data | cs.LG cs.NE | In this paper, a progressive learning algorithm for multi-label
classification is designed to learn new labels while retaining the knowledge
of previous labels. New output neurons corresponding to new labels are added,
and the neural network connections and parameters are automatically
restructured as if the label had been introduced from the beginning. This work
is the first of its kind on multi-label classifiers for class-incremental
learning. It is useful for real-world applications such as robotics where
streaming data are available and the number of labels is often unknown. Based
on the Extreme Learning Machine framework, a novel universal classifier with
plug and play capabilities for progressive multi-label classification is
developed. Experimental results on various benchmark synthetic and real
datasets validate the efficiency and effectiveness of our proposed algorithm.
| Mihika Dave, Sahil Tapiawala, Meng Joo Er, Rajasekar Venkatesan | null | 1609.07215 | null | null |
Using Neural Network Formalism to Solve Multiple-Instance Problems | cs.LG stat.ML | Many objects in the real world are difficult to describe by a single
numerical vector of a fixed length, whereas describing them by a set of vectors
is more natural. Therefore, Multiple Instance Learning (MIL) techniques have
been steadily gaining importance in recent years. The MIL formalism
represents each object (sample) by a set (bag) of feature vectors (instances)
of fixed length where knowledge about objects (e.g., class label) is available
on bag level but not necessarily on instance level. Many standard tools
including supervised classifiers have already been adapted to the MIL setting
since the problem was formalized in the late nineties. In this work we propose
a neural
network (NN) based formalism that intuitively bridges the gap between MIL
problem definition and the vast existing knowledge-base of standard models and
classifiers. We show that the proposed NN formalism is effectively optimizable
by a modified back-propagation algorithm and can reveal unknown patterns inside
bags. Comparison to eight types of classifiers from the prior art on a set of
14 publicly available benchmark datasets confirms the advantages and accuracy
of the proposed solution.
| Tomas Pevny and Petr Somol | null | 1609.07257 | null | null |
Constraint-Based Clustering Selection | stat.ML cs.LG | Semi-supervised clustering methods incorporate a limited amount of
supervision into the clustering process. Typically, this supervision is
provided by the user in the form of pairwise constraints. Existing methods use
such constraints in one of the following ways: they adapt their clustering
procedure, their similarity metric, or both. All of these approaches operate
within the scope of individual clustering algorithms. In contrast, we propose
to use constraints to choose between clusterings generated by very different
unsupervised clustering algorithms, run with different parameter settings. We
empirically show that this simple approach often outperforms existing
semi-supervised clustering methods.
| Toon Van Craenendonck, Hendrik Blockeel | null | 1609.07272 | null | null |
Discovering Sound Concepts and Acoustic Relations In Text | cs.SD cs.AI cs.LG | In this paper we describe approaches for discovering acoustic concepts and
relations in text. The first major goal is to be able to identify text phrases
which contain a notion of audibility and can be termed as a sound or an
acoustic concept. We also propose a method to define an acoustic scene through
a set of sound concepts. We use pattern matching and part-of-speech tags to
generate sound concepts from large-scale text corpora. We use dependency
parsing and LSTM recurrent neural network to predict a set of sound concepts
for a given acoustic scene. These methods are not only helpful in creating an
acoustic knowledge base but in the future can also directly help acoustic event
and scene detection research.
| Anurag Kumar, Bhiksha Raj, Ndapandula Nakashole | null | 1609.07384 | null | null |
Gated Neural Networks for Option Pricing: Rationality by Design | q-fin.CP cs.LG q-fin.PR | We propose a neural network approach to price EU call options that
significantly outperforms some existing pricing models and comes with
guarantees that its predictions are economically reasonable. To achieve this,
we introduce a class of gated neural networks that automatically learn to
divide-and-conquer the problem space for robust and accurate pricing. We then
derive instantiations of these networks that are 'rational by design' in terms
of naturally encoding a valid call option surface that enforces no arbitrage
principles. This integration of human insight within data-driven learning
provides significantly better generalisation in pricing performance due to the
encoded inductive bias in the learning, guarantees sanity in the model's
predictions, and provides econometrically useful byproducts such as the
risk-neutral density.
| Yongxin Yang, Yu Zheng, Timothy M. Hospedales | null | 1609.07472 | null | null |
Screening Rules for Convex Problems | math.OC cs.LG stat.ML | We propose a new framework for deriving screening rules for convex
optimization problems. Our approach covers a large class of constrained and
penalized optimization formulations, and works in two steps. First, given any
approximate point, the structure of the objective function and the duality gap
is used to gather information on the optimal solution. In the second step, this
information is used to produce screening rules, i.e. safely identifying
unimportant weight variables of the optimal solution. Our general framework
leads to a large variety of useful existing as well as new screening rules for
many applications. For example, we provide new screening rules for general
simplex and $L_1$-constrained problems, Elastic Net, squared-loss Support
Vector Machines, minimum enclosing ball, as well as structured norm regularized
problems, such as group lasso.
| Anant Raj, Jakob Olbrich, Bernd G\"artner, Bernhard Sch\"olkopf,
Martin Jaggi | null | 1609.07478 | null | null |
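As one concrete instance of such rules, the sketch below implements a duality-gap-based safe rule for the Lasso in Python; the constants follow the standard Lasso dual scaling and should be checked against the paper's general derivation.

# Safe screening for the Lasso  min_w 0.5*||y - Xw||^2 + lam*||w||_1:
# build a dual-feasible point from the residual, measure the duality gap,
# and discard feature j whenever |x_j^T theta| + r * ||x_j|| < 1.
import numpy as np

def screen_lasso(X, y, w, lam):
    rho = y - X @ w                                    # residual
    theta = rho / max(lam, np.abs(X.T @ rho).max())    # dual-feasible point
    primal = 0.5 * rho @ rho + lam * np.abs(w).sum()
    dual = 0.5 * (y @ y) - 0.5 * lam**2 * ((theta - y / lam) ** 2).sum()
    gap = max(primal - dual, 0.0)
    r = np.sqrt(2.0 * gap) / lam                       # safe-region radius
    scores = np.abs(X.T @ theta) + r * np.linalg.norm(X, axis=0)
    return scores < 1.0   # True -> coefficient is zero at the optimum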
A Rotation Invariant Latent Factor Model for Moveme Discovery from
Static Poses | cs.CV cs.LG | We tackle the problem of learning a rotation invariant latent factor model
when the training data is comprised of lower-dimensional projections of the
original feature space. The main goal is the discovery of a set of 3-D bases
poses that can characterize the manifold of primitive human motions, or
movemes, from a training set of 2-D projected poses obtained from still images
taken at various camera angles. The proposed technique for basis discovery is
data-driven rather than hand-designed. The learned representation is rotation
invariant, and can reconstruct any training instance from multiple viewing
angles. We apply our method to modeling human poses in sports (via the Leeds
Sports Dataset), and demonstrate the effectiveness of the learned bases in a
range of applications such as activity classification, inference of dynamics
from a single frame, and synthetic representation of movements.
| Matteo Ruggero Ronchi, Joon Sik Kim and Yisong Yue | null | 1609.07495 | null | null |
Fast Learning of Clusters and Topics via Sparse Posteriors | stat.ML cs.AI cs.LG | Mixture models and topic models generate each observation from a single
cluster, but standard variational posteriors for each observation assign
positive probability to all possible clusters. This requires dense storage and
runtime costs that scale with the total number of clusters, even though
typically only a few clusters have significant posterior mass for any data
point. We propose a constrained family of sparse variational distributions that
allow at most $L$ non-zero entries, where the tunable threshold $L$ trades off
speed for accuracy. Previous sparse approximations have used hard assignments
($L=1$), but we find that moderate values of $L>1$ provide superior
performance. Our approach easily integrates with stochastic or incremental
optimization algorithms to scale to millions of examples. Experiments training
mixture models of image patches and topic models for news articles show that
our approach produces better-quality models in far less time than baseline
methods.
| Michael C. Hughes and Erik B. Sudderth | null | 1609.07521 | null | null |
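A minimal sketch of the core projection step: compute dense (log) posteriors, keep only the top-L entries per observation, and renormalize. In the paper this is embedded in variational updates for mixtures and topic models; the function and variable names here are assumptions.

```python
import numpy as np

def sparse_responsibilities(log_post, L):
    """Project dense per-observation cluster posteriors onto at-most-L-sparse
    distributions: keep the L clusters with highest mass, then renormalize.
    log_post: (N, K) unnormalized log posteriors. Returns indices and weights."""
    N, K = log_post.shape
    top = np.argpartition(log_post, K - L, axis=1)[:, K - L:]   # (N, L) indices
    vals = np.take_along_axis(log_post, top, axis=1)
    vals -= vals.max(axis=1, keepdims=True)                     # stable softmax
    w = np.exp(vals)
    w /= w.sum(axis=1, keepdims=True)
    return top, w

rng = np.random.default_rng(0)
log_post = rng.gumbel(size=(5, 1000))      # 1000 clusters, but only L matter
idx, w = sparse_responsibilities(log_post, L=4)
print(idx.shape, w.shape, w.sum(axis=1))   # (5, 4) (5, 4) all ones
```

Storage and downstream summary statistics then cost O(L) per observation instead of O(K), which is where the speed-accuracy trade-off in the abstract comes from.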
A Tutorial on Distributed (Non-Bayesian) Learning: Problem, Algorithms
and Results | math.OC cs.LG cs.MA cs.SI stat.ML | We overview some results on distributed learning with focus on a family of
recently proposed algorithms known as non-Bayesian social learning. We consider
different approaches to the distributed learning problem and its algorithmic
solutions for the case of finitely many hypotheses. The original centralized
problem is discussed at first, and then followed by a generalization to the
distributed setting. The results on convergence and convergence rate are
presented for both asymptotic and finite time regimes. Various extensions are
discussed such as those dealing with directed time-varying networks, Nesterov's
acceleration technique, and a continuum of hypotheses.
| Angelia Nedić, Alex Olshevsky and César A. Uribe | null | 1609.07537 | null | null |
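The basic non-Bayesian social learning update surveyed here combines a consensus step over neighbours' (log-)beliefs with a local Bayesian likelihood update. The sketch below simulates the common log-linear variant on a complete graph with Gaussian private signals; the mixing matrix, likelihood family, and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_agents, n_hyp, T = 4, 3, 300
means = np.array([-1.0, 0.0, 1.0])          # hypotheses = candidate signal means
true_theta = 2                              # index of the true hypothesis
A = np.full((n_agents, n_agents), 1 / n_agents)   # doubly stochastic mixing

log_mu = np.zeros((n_agents, n_hyp))        # uniform initial log-beliefs
for t in range(T):
    s = rng.normal(means[true_theta], 1.0, size=n_agents)  # private signals
    log_lik = norm.logpdf(s[:, None], loc=means[None, :], scale=1.0)
    # log-linear consensus on neighbours' beliefs + local Bayesian update
    log_mu = A @ log_mu + log_lik
    log_mu -= log_mu.max(axis=1, keepdims=True)            # normalize stably
    log_mu -= np.log(np.exp(log_mu).sum(axis=1, keepdims=True))

print(np.exp(log_mu).round(3))   # every agent concentrates on hypothesis 2
```

The convergence-rate results discussed in the tutorial quantify how fast this concentration happens as a function of the network's mixing properties.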
Derivative Delay Embedding: Online Modeling of Streaming Time Series | cs.LG | The staggering amount of streaming time series coming from the real world
calls for more efficient and effective online modeling solutions. For time
series modeling, most existing works make unrealistic assumptions, such as
requiring the input data to be of fixed length or well aligned, which demands
extra effort on segmentation or normalization of the raw streaming data.
Although some works claim their approaches are invariant to data length and
misalignment, they are too time-consuming to model a streaming time series in
an online manner. We propose a novel and more practical online modeling and
classification scheme, DDE-MGM, which does not make any assumptions on the time
series while maintaining high efficiency and state-of-the-art performance. The
derivative delay embedding (DDE) is developed to incrementally transform time
series to the embedding space, where the intrinsic characteristics of the data
are preserved as recursive patterns regardless of the stream length and
misalignment. Then, a non-parametric Markov geographic model (MGM) is proposed
to both model and classify the pattern in an online manner. Experimental
results demonstrate the effectiveness and superior classification accuracy of
the proposed DDE-MGM in an online setting as compared to the state-of-the-art.
| Zhifei Zhang, Yang Song, Wei Wang, and Hairong Qi | null | 1609.07540 | null | null |
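A minimal sketch of the embedding step, assuming a first-order derivative and a fixed delay: each incoming sample is differenced and pushed into a small sliding window, from which delay vectors are emitted online. The exact parameterization in the paper may differ; this only shows why the construction is insensitive to stream length and to constant offsets.

```python
import numpy as np
from collections import deque

def dde_stream(samples, tau=3, dim=2):
    """Incrementally map a raw stream to derivative-delay-embedding points:
    z_t = (x'_t, x'_{t-tau}, ..., x'_{t-(dim-1)tau}) with x'_t = x_t - x_{t-1}.
    Only O(dim * tau) memory is kept, so arbitrary-length streams are fine."""
    buf = deque(maxlen=(dim - 1) * tau + 1)   # sliding window of derivatives
    prev = None
    for x in samples:                          # 'samples' may be a generator
        if prev is not None:
            buf.append(x - prev)               # first-order derivative
            if len(buf) == buf.maxlen:
                yield np.array([buf[-1 - k * tau] for k in range(dim)])
        prev = x

t = np.linspace(0, 8 * np.pi, 400)
stream = np.sin(t) + 0.5                   # offset is removed by the derivative
points = np.array(list(dde_stream(stream, tau=5, dim=2)))
print(points.shape)                        # the embedding traces a closed loop
```

Recurring patterns in the source series show up as recurring trajectories in this space, which is what the Markov geographic model then tracks online.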
Informative Planning and Online Learning with Sparse Gaussian Processes | cs.RO cs.AI cs.LG stat.ML | A big challenge in environmental monitoring is the spatiotemporal variation
of the phenomena to be observed. To enable persistent sensing and estimation in
such a setting, it is beneficial to have a time-varying underlying
environmental model. Here we present a planning and learning method that
enables an autonomous marine vehicle to perform persistent ocean monitoring
tasks by learning and refining an environmental model. To alleviate the
computational bottleneck caused by the large volume of accumulated data, we propose a
framework that iterates between a planning component aimed at collecting the
most information-rich data, and a sparse Gaussian Process learning component
where the environmental model and hyperparameters are learned online by taking
advantage of only a subset of data that provides the greatest contribution. Our
simulations with ground-truth ocean data show that the proposed method is both
accurate and efficient.
| Kai-Chieh Ma, Lantao Liu, Gaurav S. Sukhatme | null | 1609.07560 | null | null |
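The sketch below shows the shape of such an iteration in a deliberately crude form: a Gaussian Process over a toy scalar field, a planning step that samples where the predictive variance is highest, and a budgeted subset step that drops the point best explained by the rest. The kernel, the field, and the leave-one-out sparsification heuristic are all stand-in assumptions, not the paper's sparse-GP machinery; hyperparameter learning and mean prediction (which use the measurements y) are omitted.

```python
import numpy as np

def rbf(A, B, ls=0.4):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_var(Xtr, Xq, noise=1e-2):
    """Posterior predictive variance of a zero-mean, unit-scale RBF GP."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xtr, Xq)
    return 1.0 - (Ks * np.linalg.solve(K, Ks)).sum(0)

rng = np.random.default_rng(0)
field = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])  # toy "ocean" field
cand = rng.uniform(0, 1, size=(400, 2))       # candidate measurement sites
X = rng.uniform(0, 1, size=(2, 2)); y = field(X)
budget = 25                                   # points the sparse model may keep

for _ in range(40):
    j = int(np.argmax(gp_var(X, cand)))       # sample where the GP is most unsure
    X = np.vstack([X, cand[j]]); y = np.append(y, field(cand[j][None]))
    if len(X) > budget:                       # crude stand-in for the paper's
        loo = [gp_var(np.delete(X, i, 0), X[i:i + 1])[0] for i in range(len(X))]
        i = int(np.argmin(loo))               # subset selection: drop the point
        X = np.delete(X, i, 0); y = np.delete(y, i)   # best explained by the rest

print(f"kept {len(X)} points; mean candidate variance "
      f"{gp_var(X, cand).mean():.3f}")
```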
Dynamic Pricing in High-dimensions | stat.ML cs.LG | We study the pricing problem faced by a firm that sells a large number of
products, described via a wide range of features, to customers that arrive over
time. Customers independently make purchasing decisions according to a general
choice model that includes product features and customers' characteristics,
encoded as $d$-dimensional numerical vectors, as well as the price offered. The
parameters of the choice model are a priori unknown to the firm, but can be
learned as the (binary-valued) sales data accrues over time. The firm's
objective is to minimize the regret, i.e., the expected revenue loss against a
clairvoyant policy that knows the parameters of the choice model in advance,
and always offers the revenue-maximizing price. This setting is motivated in
part by the prevalence of online marketplaces that allow for real-time pricing.
We assume a structured choice model, parameters of which depend on $s_0$ out of
the $d$ product features. We propose a dynamic policy, called Regularized
Maximum Likelihood Pricing (RMLP) that leverages the (sparsity) structure of
the high-dimensional model and obtains a logarithmic regret in $T$. More
specifically, the regret of our algorithm is $O(s_0 \log d \cdot \log T)$.
Furthermore, we show that no policy can obtain regret better than $O(s_0 (\log
d + \log T))$.
| Adel Javanmard and Hamid Nazerzadeh | null | 1609.07574 | null | null |
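The sketch below shows the fit-then-price loop at the heart of such a policy, assuming a logistic choice model with utility x·θ − αp: periodically refit an L1-regularized maximum-likelihood estimate on the binary sales history, then offer the price that maximizes estimated expected revenue. RMLP's exact episode schedule, regularization path, and regret guarantees are not reproduced; all names and constants are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, s0, T = 50, 4, 2000
theta = np.zeros(d); theta[:s0] = rng.normal(size=s0)  # sparse true taste vector
alpha = 1.0                                            # true price sensitivity
prices = np.linspace(0.1, 3.0, 30)

def buy_prob(x, p, th, al):
    return 1.0 / (1.0 + np.exp(-(x @ th - al * p)))    # logistic choice model

X_hist, p_hist, sale_hist = [], [], []
th_hat, al_hat = np.zeros(d), 1.0
for t in range(T):
    x = rng.normal(size=d) / np.sqrt(d)
    if t < 100:
        p = rng.choice(prices)                 # short initial exploration phase
    else:
        if t % 100 == 0:                       # refit once per "episode"
            Z = np.column_stack([np.array(X_hist), np.array(p_hist)])
            mdl = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
            mdl.fit(Z, sale_hist)
            th_hat, al_hat = mdl.coef_[0, :d], -mdl.coef_[0, d]
        rev = prices * buy_prob(x, prices, th_hat, al_hat)
        p = prices[np.argmax(rev)]             # greedy revenue-maximizing price
    sale = rng.random() < buy_prob(x, p, theta, alpha)
    X_hist.append(x); p_hist.append(p); sale_hist.append(int(sale))

print("estimated support:", np.flatnonzero(np.abs(th_hat) > 1e-3)[:10])
```

The L1 penalty is what exploits the s_0-sparsity of the true model, which is the source of the log d (rather than polynomial-in-d) dependence in the regret bound.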
Information-Theoretic Methods for Planning and Learning in Partially
Observable Markov Decision Processes | cs.LG | Bounded agents are limited by intrinsic constraints on their ability to
process information that is available in their sensors and memory and choose
actions and memory updates. In this dissertation, we model these constraints as
information-rate constraints on communication channels connecting these various
internal components of the agent. We make four major contributions detailed
below and many smaller contributions detailed in each section. First, we
formulate the problem of optimizing the agent under both extrinsic and
intrinsic constraints and develop the main tools for solving it. Second, we
identify another reason for the challenging convergence properties of the
optimization algorithm, which is the bifurcation structure of the update
operator near phase transitions. Third, we study the special case of
linear-Gaussian dynamics and quadratic cost (LQG), where the optimal solution
has a particularly simple and solvable form. Fourth, we explore the learning
task, where the model of the world dynamics is unknown and sample-based updates
are used instead.
| Roy Fox | null | 1609.07672 | null | null |
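One of the standard tools in this setting is a Blahut-Arimoto-style alternation for a policy under an information-rate constraint: the policy is a softmax of the value tilted toward a marginal action distribution, and the marginal is recomputed from the policy. The sketch below is a minimal instance for a one-step problem; the names, the toy Q table, and the convergence budget are assumptions.

```python
import numpy as np

def rate_limited_policy(Q, rho, beta, iters=200):
    """Blahut-Arimoto-style iteration for a policy under an information-rate
    constraint: maximize E[Q] - (1/beta) * I(S;A).  Q: (S, A) values,
    rho: (S,) state distribution, beta: inverse trade-off temperature."""
    S, A = Q.shape
    p_a = np.full(A, 1.0 / A)                    # marginal action distribution
    for _ in range(iters):
        pi = p_a[None, :] * np.exp(beta * Q)     # pi(a|s) ~ p(a) e^{beta Q(s,a)}
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = rho @ pi                           # consistent action marginal
    rate = np.sum(rho[:, None] * pi * np.log(pi / p_a[None, :]))
    return pi, rate

rng = np.random.default_rng(0)
Q = rng.normal(size=(6, 4))
rho = np.full(6, 1 / 6)
for beta in (0.1, 1.0, 10.0):
    pi, rate = rate_limited_policy(Q, rho, beta)
    print(f"beta={beta:5.1f}  I(S;A)={rate:.3f} nats")
```

Sweeping beta traces the trade-off curve between expected value and the channel rate I(S;A); the bifurcation phenomena mentioned above arise as beta crosses phase transitions of this very update operator.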
Learning by Stimulation Avoidance: A Principle to Control Spiking Neural
Networks Dynamics | cs.NE cs.AI cs.LG | Learning based on networks of real neurons, and by extension biologically
inspired models of neural networks, has yet to find general learning rules
leading to widespread applications. In this paper, we argue for the existence
of a principle that allows steering the dynamics of a biologically inspired neural
network. Using carefully timed external stimulation, the network can be driven
towards a desired dynamical state. We term this principle "Learning by
Stimulation Avoidance" (LSA). We demonstrate through simulation that the
minimal sufficient conditions leading to LSA in artificial networks are also
sufficient to reproduce learning results similar to those obtained in
biological neurons by Shahaf and Marom [1]. We examine the mechanism's basic
dynamics in a reduced network, and demonstrate how it scales up to a network of
100 neurons. We show that LSA has a higher explanatory power than existing
hypotheses about the response of biological neural networks to external
stimulation, and can be used as a learning rule for an embodied application:
learning of wall avoidance by a simulated robot. The surge in popularity of
artificial neural networks is mostly directed toward disembodied models of neurons
with biologically irrelevant dynamics: to the authors' knowledge, this is the
first work demonstrating sensory-motor learning with random spiking networks
through pure Hebbian learning.
| Lana Sinapayen, Atsushi Masumori, Takashi Ikegami | 10.1371/journal.pone.0170388 | 1609.07706 | null | null |
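The sketch below is a rate-based caricature of the LSA loop, not the paper's spiking model: an aversive stimulation stays on until a readout neuron responds, pure Hebbian growth strengthens whatever pathway was co-active, and because that pathway terminates the stimulation, the response latency shrinks trial after trial (qualitatively like the Shahaf and Marom experiments). All parameters and the network structure are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta, theta = 20, 0.01, 0.8
W = np.abs(rng.normal(0.05, 0.02, size=(n, n)))   # random positive weights
np.fill_diagonal(W, 0.0)
inp, out = 0, n - 1                    # stimulated neuron / behaviour readout

latencies = []
for trial in range(25):
    a = np.zeros(n)
    for step in range(1, 300):
        a[inp] = 1.0                   # aversive external stimulation is ON
        a = np.tanh(W @ a)             # one step of (rate-based) dynamics
        W = np.clip(W + eta * np.outer(a, a), 0.0, 0.3)  # pure Hebbian growth
        np.fill_diagonal(W, 0.0)
        if a[out] > theta:             # desired response reached:
            break                      # ... stimulation is removed, trial ends
    latencies.append(step)

print(latencies)  # response latency shrinks over trials, as in LSA
```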