| title | categories | abstract | authors | doi | id | year | venue |
|---|---|---|---|---|---|---|---|
Large-Scale Detection of Non-Technical Losses in Imbalanced Data Sets | cs.LG cs.AI | Non-technical losses (NTL) such as electricity theft cause significant harm
to our economies, as in some countries they may range up to 40% of the total
electricity distributed. Detecting NTLs requires costly on-site inspections.
Accurate prediction of NTLs for customers using machine learning is therefore
crucial. To date, related research has largely ignored the fact that the two
classes of regular and non-regular customers are highly imbalanced and that NTL
proportions may change, and has mostly considered small data sets, often not
allowing the results to be deployed in production. In this paper, we present a
comprehensive approach
to assess three NTL detection models for different NTL proportions in large
real world data sets of 100Ks of customers: Boolean rules, fuzzy logic and
Support Vector Machine. This work has produced appreciable results that are
about to be deployed in a leading industry solution. We believe that the
considerations and observations made in this contribution are necessary for
future smart meter research in order to report their effectiveness on
imbalanced and large real world data sets.
| Patrick O. Glauner, Andre Boechat, Lautaro Dolberg, Radu State, Franck
Bettinger, Yves Rangoni, Diogo Duarte | null | 1602.08350 | null | null |
Simple Bayesian Algorithms for Best Arm Identification | cs.LG | This paper considers the optimal adaptive allocation of measurement effort
for identifying the best among a finite set of options or designs. An
experimenter sequentially chooses designs to measure and observes noisy signals
of their quality with the goal of confidently identifying the best design after
a small number of measurements. This paper proposes three simple and intuitive
Bayesian algorithms for adaptively allocating measurement effort, and
formalizes a sense in which these seemingly naive rules are the best possible.
One proposal is top-two probability sampling, which computes the two designs
with the highest posterior probability of being optimal, and then randomizes to
select among these two. One is a variant of top-two sampling which considers
not only the probability a design is optimal, but the expected amount by which
its quality exceeds that of other designs. The final algorithm is a modified
version of Thompson sampling that is tailored for identifying the best design.
We prove that these simple algorithms satisfy a sharp optimality property. In a
frequentist setting where the true quality of the designs is fixed, one hopes
the posterior definitively identifies the optimal design, in the sense that
the posterior probability assigned to the event that some other design is
optimal converges to zero as measurements are collected. We show that under the
proposed algorithms this convergence occurs at an exponential rate, and the
corresponding exponent is the best possible among all allocation rules.
| Daniel Russo | null | 1602.08448 | null | null |
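The top-two probability sampling rule described in the entry above can be illustrated for Bernoulli designs with Beta posteriors. The sketch below is a minimal Monte Carlo version, not the paper's exact algorithm; the tuning parameter `beta`, the uniform priors, and the posterior sampling scheme are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_two_sample(successes, failures, beta=0.5, n_mc=1000):
    """Pick the next design to measure via top-two probability sampling.

    successes/failures: per-design counts of Bernoulli observations, giving a
    Beta(1 + s, 1 + f) posterior over each design's quality.
    """
    k = len(successes)
    # Monte Carlo draws from each posterior to estimate P(design i is best).
    draws = rng.beta(1 + successes, 1 + failures, size=(n_mc, k))
    p_best = np.bincount(draws.argmax(axis=1), minlength=k) / n_mc
    order = np.argsort(-p_best)
    first, second = order[0], order[1]
    # Randomize between the two designs most likely to be optimal.
    return first if rng.random() < beta else second

# Toy measurement loop: designs with true qualities theta, posterior counts s/f.
theta = np.array([0.3, 0.5, 0.55])
s = np.zeros(3)
f = np.zeros(3)
for _ in range(2000):
    i = top_two_sample(s, f)
    if rng.random() < theta[i]:
        s[i] += 1
    else:
        f[i] += 1
print("posterior means:", (1 + s) / (2 + s + f))
```

In this sketch the measurement effort quickly concentrates on the two leading designs, which is the behavior the abstract's optimality analysis is concerned with.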
A Single Model Explains both Visual and Auditory Precortical Coding | q-bio.NC cs.CV cs.LG cs.NE | Precortical neural systems encode information collected by the senses, but
the driving principles of the encoding used have remained a subject of debate.
We present a model of retinal coding that is based on three constraints:
information preservation, minimization of the neural wiring, and response
equalization. The resulting novel version of sparse principal components
analysis successfully captures a number of known characteristics of the retinal
coding system, such as center-surround receptive fields, color opponency
channels, and spatiotemporal responses that correspond to magnocellular and
parvocellular pathways. Furthermore, when trained on auditory data, the same
model learns receptive fields well fit by gammatone filters, commonly used to
model precortical auditory coding. This suggests that efficient coding may be a
unifying principle of precortical encoding across modalities.
| Honghao Shan, Matthew H. Tong, Garrison W. Cottrell | null | 1602.08486 | null | null |
Lie Access Neural Turing Machine | cs.NE cs.AI cs.LG | Following the recent trend in explicit neural memory structures, we present a
new design of an external memory, wherein memories are stored in a Euclidean
key space $\mathbb R^n$. An LSTM controller performs read and write via
specialized read and write heads. It can move a head by either providing a new
address in the key space (aka random access) or moving from its previous
position via a Lie group action (aka Lie access). In this way, the "L" and "R"
instructions of a traditional Turing Machine are generalized to arbitrary
elements of a fixed Lie group action. For this reason, we name this new model
the Lie Access Neural Turing Machine, or LANTM.
We tested two different configurations of LANTM against an LSTM baseline in
several basic experiments. We found the right configuration of LANTM to
outperform the baseline in all of our experiments. In particular, we trained
LANTM on addition of $k$-digit numbers for $2 \le k \le 16$, but it was able to
generalize almost perfectly to $17 \le k \le 32$, all with a number of
parameters two orders of magnitude below that of the LSTM baseline.
| Greg Yang | null | 1602.08671 | null | null |
A Structured Variational Auto-encoder for Learning Deep Hierarchies of
Sparse Features | stat.ML cs.LG stat.CO | In this note we present a generative model of natural images consisting of a
deep hierarchy of layers of latent random variables, each of which follows a
new type of distribution that we call rectified Gaussian. These rectified
Gaussian units allow spike-and-slab type sparsity, while retaining the
differentiability necessary for efficient stochastic gradient variational
inference. To learn the parameters of the new model, we approximate the
posterior of the latent variables with a variational auto-encoder. Rather than
making the usual mean-field assumption, however, the encoder parameterizes a new
type of structured variational approximation that retains the prior
dependencies of the generative model. Using this structured posterior
approximation, we are able to perform joint training of deep models with many
layers of latent random variables, without having to resort to stacking or
other layerwise training procedures.
| Tim Salimans | null | 1602.08734 | null | null |
Resource Constrained Structured Prediction | stat.ML cs.CL cs.CV cs.LG | We study the problem of structured prediction under test-time budget
constraints. We propose a novel approach applicable to a wide range of
structured prediction problems in computer vision and natural language
processing. Our approach seeks to adaptively generate computationally costly
features during test-time in order to reduce the computational cost of
prediction while maintaining prediction performance. We show that training the
adaptive feature generation system can be reduced to a series of structured
learning problems, resulting in efficient training using existing structured
learning algorithms. This framework provides theoretical justification for
several existing heuristic approaches found in the literature. We evaluate our
proposed adaptive system on two structured prediction tasks, optical character
recognition (OCR) and dependency parsing, and show strong performance in
reduction of the feature costs without degrading accuracy.
| Tolga Bolukbasi, Kai-Wei Chang, Joseph Wang, Venkatesh Saligrama | null | 1602.08761 | null | null |
Investigating practical linear temporal difference learning | cs.LG cs.AI stat.ML | Off-policy reinforcement learning has many applications including: learning
from demonstration, learning multiple goal seeking policies in parallel, and
representing predictive knowledge. Recently there has been a proliferation of
new policy-evaluation algorithms that fill a longstanding algorithmic void in
reinforcement learning: combining robustness to off-policy sampling, function
approximation, linear complexity, and temporal difference (TD) updates. This
paper contains two main contributions. First, we derive two new hybrid TD
policy-evaluation algorithms, which fill a gap in this collection of
algorithms. Second, we perform an empirical comparison to elicit which of these
new linear TD methods should be preferred in different situations, and make
concrete suggestions about practical use.
| Adam White, Martha White | null | 1602.08771 | null | null |
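As background for the family of methods compared in the entry above, a plain linear TD(0) policy-evaluation update looks as follows. This is the textbook baseline rather than the hybrid TD algorithms derived in the paper, and the random features and reward below are placeholders.

```python
import numpy as np

def td0_update(w, x, r, x_next, alpha=0.05, gamma=0.99):
    """One linear TD(0) step: w parameterizes the value estimate v(s) ~ w . x(s)."""
    td_error = r + gamma * w @ x_next - w @ x
    return w + alpha * td_error * x

# Toy usage with random features standing in for an environment rollout.
rng = np.random.default_rng(1)
d = 8
w = np.zeros(d)
x = rng.normal(size=d)
for _ in range(1000):
    x_next = rng.normal(size=d)
    r = float(x[0] > 0)          # placeholder reward signal
    w = td0_update(w, x, r, x_next)
    x = x_next
```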
Does quantification without adjustments work? | stat.ML cs.LG math.ST stat.TH | Classification is the task of predicting the class labels of objects based on
the observation of their features. In contrast, quantification has been defined
as the task of determining the prevalences of the different sorts of class
labels in a target dataset. The simplest approach to quantification is Classify
& Count where a classifier is optimised for classification on a training set
and applied to the target dataset for the prediction of class labels. In the
case of binary quantification, the number of predicted positive labels is then
used as an estimate of the prevalence of the positive class in the target
dataset. Since the performance of Classify & Count for quantification is known
to be inferior, its results are typically subject to adjustments. However, some
researchers have recently suggested that Classify & Count might actually work
without adjustments if it is based on a classifier that was specifically trained
for quantification. We discuss the theoretical foundation for this claim and
explore its potential and limitations with a numerical example based on the
binormal model with equal variances. In order to identify an optimal quantifier
in the binormal setting, we introduce the concept of local Bayes optimality. As
a side remark, we present a complete proof of a theorem by Ye et al. (2012).
| Dirk Tasche | null | 1602.08780 | null | null |
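The Classify & Count baseline discussed above, and the standard adjusted variant that corrects it with the classifier's true- and false-positive rates, can be written in a few lines. The logistic-regression classifier and the synthetic prevalence-shifted data below are placeholders, not the binormal example from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_and_count(clf, X_target):
    """Prevalence estimate = fraction of target items predicted positive."""
    return clf.predict(X_target).mean()

def adjusted_count(clf, X_train, y_train, X_target):
    """Correct Classify & Count using the classifier's tpr/fpr on training data."""
    pred = clf.predict(X_train)
    tpr = pred[y_train == 1].mean()
    fpr = pred[y_train == 0].mean()
    cc = classify_and_count(clf, X_target)
    return np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0)

# Toy data with a prevalence shift between the training and target sets.
rng = np.random.default_rng(2)
def sample(n, prev):
    y = (rng.random(n) < prev).astype(int)
    X = rng.normal(loc=y[:, None], scale=1.0, size=(n, 2))
    return X, y

X_tr, y_tr = sample(5000, prev=0.5)
X_tg, y_tg = sample(5000, prev=0.2)
clf = LogisticRegression().fit(X_tr, y_tr)
print("true prevalence: ", y_tg.mean())
print("classify & count:", classify_and_count(clf, X_tg))
print("adjusted count:  ", adjusted_count(clf, X_tr, y_tr, X_tg))
```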
Iterative Aggregation Method for Solving Principal Component Analysis
Problems | cs.NA cs.IR cs.LG | Motivated by the previously developed multilevel aggregation method for
solving structural analysis problems, a novel two-level aggregation approach for
efficient iterative solution of Principal Component Analysis (PCA) problems is
proposed. The coarse aggregation model of the original covariance matrix is
used in the iterative solution of the eigenvalue problem by a power iteration
method. The method is tested on several data sets consisting of a large number of
text documents.
| Vitaly Bulgakov | null | 1602.08800 | null | null |
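The power-iteration core of the approach above is straightforward to sketch. The two-level step below simply averages groups of variables to form a coarse covariance and then refines on the full matrix; that grouping is an assumption of this sketch, not the paper's aggregation model.

```python
import numpy as np

def power_iteration(C, v0=None, n_iter=200, tol=1e-10, seed=0):
    """Leading eigenpair of a symmetric PSD matrix C by power iterations."""
    v = np.random.default_rng(seed).normal(size=C.shape[0]) if v0 is None else v0.copy()
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = C @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            v = w
            break
        v = w
    return v, float(v @ C @ v)

# Crude two-level scheme: solve on a coarse (group-averaged) covariance first,
# then refine on the full covariance starting from the prolonged coarse vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40)) @ rng.normal(size=(40, 40))   # toy data matrix
C = np.cov(X, rowvar=False)
groups = np.arange(40) // 4                                  # 10 aggregated variables
A = np.stack([(groups == g) / (groups == g).sum() for g in range(10)])
v_coarse, _ = power_iteration(A @ C @ A.T)                   # coarse eigenproblem
v_fine, lam = power_iteration(C, v0=A.T @ v_coarse, n_iter=30)
print("leading eigenvalue estimate:", lam,
      "exact:", float(np.linalg.eigvalsh(C)[-1]))
```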
Collaborative Learning of Stochastic Bandits over a Social Network | cs.LG stat.ML | We consider a collaborative online learning paradigm, wherein a group of
agents connected through a social network are engaged in playing a stochastic
multi-armed bandit game. Each time an agent takes an action, the corresponding
reward is instantaneously observed by the agent, as well as its neighbours in
the social network. We perform a regret analysis of various policies in this
collaborative learning setting. A key finding of this paper is that natural
extensions of widely-studied single agent learning policies to the network
setting need not perform well in terms of regret. In particular, we identify a
class of non-altruistic and individually consistent policies, and argue by
deriving regret lower bounds that they are liable to suffer a large regret in
the networked setting. We also show that the learning performance can be
substantially improved if the agents exploit the structure of the network, and
develop a simple learning algorithm based on dominating sets of the network.
Specifically, we first consider a star network, which is a common motif in
hierarchical social networks, and show analytically that the hub agent can be
used as an information sink to expedite learning and improve the overall
regret. We also derive networkwide regret bounds for the algorithm applied to
general networks. We conduct numerical experiments on a variety of networks to
corroborate our analytical results.
| Ravi Kumar Kolla, Krishna Jagannathan, Aditya Gopalan | null | 1602.08886 | null | null |
High-Dimensional $L_2$Boosting: Rate of Convergence | stat.ML cs.LG econ.EM math.ST stat.ME stat.TH | Boosting is one of the most significant developments in machine learning.
This paper studies the rate of convergence of $L_2$Boosting, which is tailored
for regression, in a high-dimensional setting. Moreover, we introduce so-called
\textquotedblleft post-Boosting\textquotedblright. This is a post-selection
estimator which applies ordinary least squares to the variables selected in the
first stage by $L_2$Boosting. Another variant is \textquotedblleft Orthogonal
Boosting\textquotedblright\ where after each step an orthogonal projection is
conducted. We show that both post-$L_2$Boosting and the orthogonal boosting
achieve the same rate of convergence as LASSO in a sparse, high-dimensional
setting. We show that the rate of convergence of the classical $L_2$Boosting
depends on the design matrix described by a sparse eigenvalue constant. To show
the latter results, we derive new approximation results for the pure greedy
algorithm, based on analyzing the revisiting behavior of $L_2$Boosting. We also
introduce feasible rules for early stopping, which can be easily implemented
and used in applied work. Our results also allow a direct comparison between
LASSO and boosting which has been missing from the literature. Finally, we
present simulation studies and applications to illustrate the relevance of our
theoretical results and to provide insights into the practical aspects of
boosting. In these simulation studies, post-$L_2$Boosting clearly outperforms
LASSO.
| Ye Luo and Martin Spindler and Jannis K\"uck | null | 1602.08927 | null | null |
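Componentwise $L_2$Boosting (the pure greedy algorithm analyzed above) and the post-Boosting OLS refit can be sketched as follows. The fixed number of steps and the shrinkage factor `nu` are simple placeholders rather than the feasible early-stopping rules derived in the paper.

```python
import numpy as np

def l2_boosting(X, y, n_steps=200, nu=0.1):
    """Componentwise L2Boosting (pure greedy algorithm with shrinkage nu)."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_steps):
        coefs = X.T @ resid / col_norm2          # univariate LS fit per covariate
        gain = coefs ** 2 * col_norm2            # residual-sum-of-squares reduction
        j = int(np.argmax(gain))                 # covariate that fits the residual best
        beta[j] += nu * coefs[j]
        resid -= nu * coefs[j] * X[:, j]
    return beta, np.flatnonzero(beta)

def post_boosting(X, y, support):
    """Post-L2Boosting: refit by ordinary least squares on the selected variables."""
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    beta = np.zeros(X.shape[1])
    beta[support] = coef
    return beta

rng = np.random.default_rng(0)
n, p, s = 200, 500, 5                            # sparse, high-dimensional design
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:s] = 1.0
y = X @ beta_true + 0.5 * rng.normal(size=n)
beta_b, supp = l2_boosting(X, y)
beta_post = post_boosting(X, y, supp)
print("selected variables:", len(supp),
      "post-boosting error:", float(np.linalg.norm(beta_post - beta_true)))
```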
Representation of linguistic form and function in recurrent neural
networks | cs.CL cs.LG | We present novel methods for analyzing the activation patterns of RNNs from a
linguistic point of view and explore the types of linguistic structure they
learn. As a case study, we use a multi-task gated recurrent network
architecture consisting of two parallel pathways with shared word embeddings
trained on predicting the representations of the visual scene corresponding to
an input sentence, and predicting the next word in the same sentence. Based on
our proposed method to estimate the amount of contribution of individual tokens
in the input to the final prediction of the networks, we show that the image
prediction pathway: a) is sensitive to the information structure of the
sentence b) pays selective attention to lexical categories and grammatical
functions that carry semantic information c) learns to treat the same input
token differently depending on its grammatical functions in the sentence. In
contrast the language model is comparatively more sensitive to words with a
syntactic function. Furthermore, we propose methods to explore the function
of individual hidden units in RNNs and show that the two pathways of the
architecture in our case study contain specialized units tuned to patterns
informative for the task, some of which can carry activations to later time
steps to encode long-term dependencies.
| \'Akos K\'ad\'ar, Grzegorz Chrupa{\l}a, Afra Alishahi | null | 1602.08952 | null | null |
Even Trolls Are Useful: Efficient Link Classification in Signed Networks | cs.LG cs.SI physics.soc-ph | We address the problem of classifying the links of signed social networks
given their full structural topology. Motivated by a binary user behaviour
assumption, which is supported by decades of research in psychology, we develop
an efficient and surprisingly simple approach to solve this classification
problem. Our methods operate both within the active and batch settings. We
demonstrate that the algorithms we developed are extremely fast in both
theoretical and practical terms. Within the active setting, we provide a new
complexity measure and a rigorous analysis of our methods that hold for
arbitrary signed networks. We validate our theoretical claims carrying out a
set of experiments on three well known real-world datasets, showing that our
methods outperform the competitors while being much faster.
| G\'eraud Le Falher and Fabio Vitale | null | 1602.08986 | null | null |
Beyond CCA: Moment Matching for Multi-View Models | stat.ML cs.LG | We introduce three novel semi-parametric extensions of probabilistic
canonical correlation analysis with identifiability guarantees. We consider
moment matching techniques for estimation in these models. For that, by drawing
explicit links between the new models and a discrete version of independent
component analysis (DICA), we first extend the DICA cumulant tensors to the new
discrete version of CCA. By further using a close connection with independent
component analysis, we introduce generalized covariance matrices, which can
replace the cumulant tensors in the moment matching framework, and, therefore,
improve sample complexity and simplify derivations and algorithms
significantly. As the tensor power method or orthogonal joint diagonalization
are not applicable in the new setting, we use non-orthogonal joint
diagonalization techniques for matching the cumulants. We demonstrate
performance of the proposed models and estimation techniques on experiments
with both synthetic and real datasets.
| Anastasia Podosinnikova, Francis Bach, and Simon Lacoste-Julien | null | 1602.09013 | null | null |
Easy Monotonic Policy Iteration | cs.LG cs.AI stat.ML | A key problem in reinforcement learning for control with general function
approximators (such as deep neural networks and other nonlinear functions) is
that, for many algorithms employed in practice, updates to the policy or
$Q$-function may fail to improve performance---or worse, actually cause the
policy performance to degrade. Prior work has addressed this for policy
iteration by deriving tight policy improvement bounds; by optimizing the lower
bound on policy improvement, a better policy is guaranteed. However, existing
approaches suffer from bounds that are hard to optimize in practice because
they include sup norm terms which cannot be efficiently estimated or
differentiated. In this work, we derive a better policy improvement bound where
the sup norm of the policy divergence has been replaced with an average
divergence; this leads to an algorithm, Easy Monotonic Policy Iteration, that
generates sequences of policies with guaranteed non-decreasing returns and is
easy to implement in a sample-based framework.
| Joshua Achiam | null | 1602.09118 | null | null |
Learning, Visualizing, and Exploiting a Model for the Intrinsic Value of
a Batted Ball | stat.AP cs.LG | We present an algorithm for learning the intrinsic value of a batted ball in
baseball. This work addresses the fundamental problem of separating the value
of a batted ball at contact from factors such as the defense, weather, and
ballpark that can affect its observed outcome. The algorithm uses a Bayesian
model to construct a continuous mapping from a vector of batted ball parameters
to an intrinsic measure defined as the expected value of a linear weights
representation for run value. A kernel method is used to build nonparametric
estimates for the component probability density functions in Bayes theorem from
a set of over one hundred thousand batted ball measurements recorded by the
HITf/x system during the 2014 major league baseball (MLB) season.
Cross-validation is used to determine the optimal vector of smoothing
parameters for the density estimates. Properties of the mapping are visualized
by considering reduced-dimension subsets of the batted ball parameter space. We
use the mapping to derive statistics for intrinsic quality of contact for
batters and pitchers which have the potential to improve the accuracy of player
models and forecasting systems. We also show that the new approach leads to a
simple automated measure of contact-adjusted defense and provides insight into
the impact of environmental variables on batted balls.
| Glenn Healey | null | 1603.00050 | null | null |
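The mapping described above can be mimicked with one kernel density estimate per outcome class combined through Bayes' theorem. In the sketch below, the outcome run values, the feature set, and the fixed bandwidth are illustrative placeholders; the paper selects smoothing parameters by cross-validation and uses HITf/x measurements.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Illustrative linear-weights run values per outcome (placeholders, not the
# values used in the paper).
RUN_VALUE = {"out": 0.0, "single": 0.47, "double": 0.77, "triple": 1.04, "hr": 1.40}

def fit_outcome_densities(X, outcomes, bandwidth=2.0):
    """One KDE per outcome class, p(x | outcome), plus the class priors."""
    models, priors = {}, {}
    for o in RUN_VALUE:
        Xo = X[outcomes == o]
        models[o] = KernelDensity(bandwidth=bandwidth).fit(Xo)
        priors[o] = len(Xo) / len(X)
    return models, priors

def intrinsic_value(x, models, priors):
    """Expected run value at batted-ball parameters x, via Bayes' theorem."""
    x = np.atleast_2d(x)
    post = {o: np.exp(m.score_samples(x))[0] * priors[o] for o, m in models.items()}
    z = sum(post.values())
    return sum(RUN_VALUE[o] * post[o] / z for o in post)

# Usage sketch (hypothetical arrays): X holds rows of batted-ball parameters
# such as (exit speed, vertical angle, horizontal angle), and `outcomes` the
# observed result of each batted ball.
# models, priors = fit_outcome_densities(X, np.asarray(outcomes))
# print(intrinsic_value([103.0, 27.0, -5.0], models, priors))
```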
Characterizing Diseases from Unstructured Text: A Vocabulary Driven
Word2vec Approach | cs.LG cs.CL stat.ML | Traditional disease surveillance can be augmented with a wide variety of
real-time sources such as news and social media. However, these sources are in
general unstructured, and construction of surveillance tools such as
taxonomical correlations and trace mapping involves considerable human
supervision. In this paper, we motivate a disease vocabulary driven word2vec
model (Dis2Vec) to model diseases and constituent attributes as word embeddings
from the HealthMap news corpus. We use these word embeddings to automatically
create disease taxonomies and evaluate our model against corresponding human
annotated taxonomies. We compare our model accuracies against several
state-of-the-art word2vec methods. Our results demonstrate that Dis2Vec
outperforms traditional distributed vector representations in its ability to
faithfully capture taxonomical attributes across different class of diseases
such as endemic, emerging and rare.
| Saurav Ghosh, Prithwish Chakraborty, Emily Cohn, John S. Brownstein,
and Naren Ramakrishnan | null | 1603.00106 | null | null |
On Tie Strength Augmented Social Correlation for Inferring Preference of
Mobile Telco Users | cs.SI cs.IR cs.LG | For mobile telecom operators, it is critical to build preference profiles of
their customers and connected users, which can help operators make better
marketing strategies, and provide more personalized services. With the
deployment of deep packet inspection (DPI) in telecom networks, it is possible
for the telco operators to obtain user online preference. However, DPI has its
limitations and user preference derived only from DPI faces sparsity and cold
start problems. To better infer the user preference, social correlation in
telco users network derived from Call Detailed Records (CDRs) with regard to
online preference is investigated. Though widely verified in several online
social networks, social correlation between online preference of users in
mobile telco networks, where the CDR-derived relationships carry fewer social
properties and users' mobile internet surfing activities are not visible to
their neighbourhood, has not been explored at a large scale. Based on a real world
telecom dataset including CDRs and preference of more than $550K$ users for
several months, we verified that correlation does exist between online
preference in such an \textit{ambiguous} social network. Furthermore, we found
that the stronger the ties that users build, the more similar their
preferences may be. After defining the preference inferring task as a Top-$K$
recommendation problem, we incorporated a Matrix Factorization Collaborative
Filtering model with social correlation and tie strength based on call patterns
to generate Top-$K$ preferred categories for users. The proposed Tie Strength
Augmented Social Recommendation (TSASoRec) model takes data sparsity and cold
start user problems into account, considering both the recorded and missing
recorded category entries. The experiment on real dataset shows the proposed
model can better infer user preference, especially for cold start users.
| Shifeng Liu, Zheng Hu, Sujit Dey and Xin Ke | null | 1603.00145 | null | null |
Convolutional Rectifier Networks as Generalized Tensor Decompositions | cs.NE cs.LG | Convolutional rectifier networks, i.e. convolutional neural networks with
rectified linear activation and max or average pooling, are the cornerstone of
modern deep learning. However, despite their wide use and success, our
theoretical understanding of the expressive properties that drive these
networks is partial at best. On the other hand, we have a much firmer grasp of
these issues in the world of arithmetic circuits. Specifically, it is known
that convolutional arithmetic circuits possess the property of "complete depth
efficiency", meaning that besides a negligible set, all functions that can be
implemented by a deep network of polynomial size, require exponential size in
order to be implemented (or even approximated) by a shallow network. In this
paper we describe a construction based on generalized tensor decompositions,
that transforms convolutional arithmetic circuits into convolutional rectifier
networks. We then use mathematical tools available from the world of arithmetic
circuits to prove new results. First, we show that convolutional rectifier
networks are universal with max pooling but not with average pooling. Second,
and more importantly, we show that depth efficiency is weaker with
convolutional rectifier networks than it is with convolutional arithmetic
circuits. This leads us to believe that developing effective methods for
training convolutional arithmetic circuits, thereby fulfilling their expressive
potential, may give rise to a deep learning architecture that is provably
superior to convolutional rectifier networks but has so far been overlooked by
practitioners.
| Nadav Cohen, Amnon Shashua | null | 1603.00162 | null | null |
Segmental Recurrent Neural Networks for End-to-end Speech Recognition | cs.CL cs.LG cs.NE | We study the segmental recurrent neural network for end-to-end acoustic
modelling. This model connects the segmental conditional random field (CRF)
with a recurrent neural network (RNN) used for feature extraction. Compared to
most previous CRF-based acoustic models, it does not rely on an external system
to provide features or segmentation boundaries. Instead, this model
marginalises out all the possible segmentations, and features are extracted
from the RNN trained together with the segmental CRF. In essence, this model is
self-contained and can be trained end-to-end. In this paper, we discuss
practical training and decoding issues as well as the method to speed up the
training in the context of speech recognition. We performed experiments on the
TIMIT dataset. We achieved a 17.3% phone error rate (PER) from the first-pass
decoding --- the best reported result using CRFs, despite the fact that we only
used a zeroth-order CRF and did not use any language model.
| Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith and Steve Renals | null | 1603.00223 | null | null |
Noisy Activation Functions | cs.LG cs.NE stat.ML | Common nonlinear activation functions used in neural networks can cause
training difficulties due to the saturation behavior of the activation
function, which may hide dependencies that are not visible to vanilla-SGD
(using first order gradients only). Gating mechanisms that use softly
saturating activation functions to emulate the discrete switching of digital
logic circuits are good examples of this. We propose to exploit the injection
of appropriate noise so that the gradients may flow easily, even if the
noiseless application of the activation function would yield zero gradient.
Large noise will dominate the noise-free gradient and allow stochastic gradient
descent to explore more. By adding noise only to the problematic parts of the
activation function, we allow the optimization procedure to explore the
boundary between the degenerate (saturating) and the well-behaved parts of the
activation function. We also establish connections to simulated annealing, when
the amount of noise is annealed down, making it easier to optimize hard
objective functions. We find experimentally that replacing such saturating
activation functions by noisy variants helps training in many contexts,
yielding state-of-the-art or competitive results on different datasets and
tasks, especially when training seems to be the most difficult, e.g., when
curriculum learning is necessary to obtain good results.
| Caglar Gulcehre, Marcin Moczulski, Misha Denil and Yoshua Bengio | null | 1603.00391 | null | null |
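A minimal version of the idea above, noise injected only where a saturating nonlinearity would otherwise yield zero gradient, can be written for a hard-tanh unit. The particular noise form used here (magnitude proportional to how far the pre-activation lies in the saturated region) is a simplification of the family of noisy activations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def hard_tanh(x):
    return np.clip(x, -1.0, 1.0)

def noisy_hard_tanh(x, c=1.0, training=True):
    """Hard-tanh with noise added only in the saturated regions.

    The noise magnitude is proportional to |x - hard_tanh(x)|, i.e. to how far
    the pre-activation is past the saturation threshold, so the unit stays
    deterministic in its linear region and stochastic exactly where the true
    gradient would be zero.
    """
    h = hard_tanh(x)
    if not training:
        return h
    delta = np.abs(x - h)                       # zero inside the linear region
    noise = rng.normal(size=np.shape(x))
    # Push the output back toward the interior of the linear region.
    return h - np.sign(x) * c * delta * np.abs(noise)

x = np.linspace(-3, 3, 7)
print(hard_tanh(x))
print(noisy_hard_tanh(x))
```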
A Nonlinear Adaptive Filter Based on the Model of Simple Multilinear
Functionals | cs.SY cs.LG | Nonlinear adaptive filtering allows for modeling of some additional aspects
of a general system and usually relies on highly complex algorithms, such as
those based on the Volterra series. Through the use of the Kronecker product
and some basic facts of tensor algebra, we propose a simple model of
nonlinearity, one that can be interpreted as a product of the outputs of K FIR
linear filters, and compute its cost function together with its gradient, which
allows for some analysis of the optimization problem. We use these results
in a stochastic gradient framework, from which we derive an LMS-like algorithm
and investigate the problems of multi-modality in the mean-square error surface
and the choice of adequate initial conditions. Its computational complexity is
calculated. The new algorithm is tested in a system identification setup and is
compared with other polynomial algorithms from the literature, presenting
favorable convergence and/or computational complexity.
| Felipe C. Pinheiro and C\'assio G. Lopes | null | 1603.00427 | null | null |
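The model above, with an output formed as the product of K FIR filter outputs, yields an LMS-like update from the gradient of the instantaneous squared error: each filter is adapted using the product of the other filters' outputs as an effective gain. The sketch below assumes a fixed filter length, step size, and toy identification setup, and is not the exact algorithm of the paper.

```python
import numpy as np

def fir(h, x):
    """Output of FIR filter h applied to signal x (zero output before warm-up)."""
    M = len(h)
    return np.array([h @ x[n - M:n][::-1] if n >= M else 0.0
                     for n in range(len(x))])

def multilinear_lms(x, d, K=2, M=8, mu=0.01, seed=0):
    """LMS-like adaptation of K length-M FIR filters whose outputs are multiplied."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(K, M))   # small random init avoids the all-zero point
    errors = []
    for n in range(M, len(x)):
        u = x[n - M:n][::-1]            # tap-delay vector, most recent sample first
        s = W @ u                       # individual filter outputs, shape (K,)
        y = np.prod(s)                  # multilinear output
        e = d[n] - y
        errors.append(e)
        for k in range(K):
            # dy/dW[k] = (product of the other filters' outputs) * u
            W[k] += mu * e * np.prod(np.delete(s, k)) * u
    return W, np.array(errors)

# Toy system identification: the unknown plant is itself a product of two FIR filters.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
h1, h2 = 0.5 * rng.normal(size=8), 0.5 * rng.normal(size=8)
d = fir(h1, x) * fir(h2, x)
W, err = multilinear_lms(x, d)
print("MSE over the last 500 samples:", float(np.mean(err[-500:] ** 2)))
```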
Crowdsourcing On-street Parking Space Detection | cs.HC cs.LG | As the number of vehicles continues to grow, parking spaces are at a premium
in city streets. Additionally, due to the lack of knowledge about street
parking spaces, heuristically circling the blocks not only costs drivers' time and
fuel, but also increases city congestion. In the wake of recent trend to build
convenient, green and energy-efficient smart cities, we rethink common
techniques adopted by high-profile smart parking systems, and present a
user-engaged (crowdsourcing) and sonar-based prototype to identify urban
on-street parking spaces. The prototype includes an ultrasonic sensor, a GPS
receiver and associated Arduino micro-controllers. It is mounted on the
passenger side of a car to measure the distance from the vehicle to the nearest
roadside obstacle. Multiple road tests are conducted around Wheatley, Oxford to
gather results and emulate the crowdsourcing approach. By extracting parked
vehicles' features from the collected trace, a supervised learning algorithm is
developed to estimate roadside parking occupancy and spot illegally parked
vehicles. A quantity estimation model is derived to calculate the required
number of sensing units to cover urban streets. The estimation is
quantitatively compared to a fixed sensing solution. The results show that the
crowdsourcing approach would need substantially fewer sensors compared to the fixed
sensing system.
| Ruizhi Liao, Cristian Roman, Peter Ball, Shumao Ou, Liping Chen | null | 1603.00441 | null | null |
Guided Cost Learning: Deep Inverse Optimal Control via Policy
Optimization | cs.LG cs.AI cs.RO | Reinforcement learning can acquire complex behaviors from high-level
specifications. However, defining a cost function that can be optimized
effectively and encodes the correct task is challenging in practice. We explore
how inverse optimal control (IOC) can be used to learn behaviors from
demonstrations, with applications to torque control of high-dimensional robotic
systems. Our method addresses two key challenges in inverse optimal control:
first, the need for informative features and effective regularization to impose
structure on the cost, and second, the difficulty of learning the cost function
under unknown dynamics for high-dimensional continuous systems. To address the
former challenge, we present an algorithm capable of learning arbitrary
nonlinear cost functions, such as neural networks, without meticulous feature
engineering. To address the latter challenge, we formulate an efficient
sample-based approximation for MaxEnt IOC. We evaluate our method on a series
of simulated tasks and real-world robotic manipulation problems, demonstrating
substantial improvement over prior methods both in terms of task complexity and
sample efficiency.
| Chelsea Finn, Sergey Levine, Pieter Abbeel | null | 1603.00448 | null | null |
Solving Combinatorial Games using Products, Projections and
Lexicographically Optimal Bases | cs.LG | In order to find Nash-equilibria for two-player zero-sum games where each
player plays combinatorial objects like spanning trees, matchings etc, we
consider two online learning algorithms: the online mirror descent (OMD)
algorithm and the multiplicative weights update (MWU) algorithm. The OMD
algorithm requires the computation of a certain Bregman projection, that has
closed form solutions for simple convex sets like the Euclidean ball or the
simplex. However, for general polyhedra one often needs to exploit the general
machinery of convex optimization. We give a novel primal-style algorithm for
computing Bregman projections on the base polytopes of polymatroids. Next, in
the case of the MWU algorithm, although it scales logarithmically in the number
of pure strategies or experts $N$ in terms of regret, the algorithm takes time
polynomial in $N$; this especially becomes a problem when learning
combinatorial objects. We give a general recipe to simulate the multiplicative
weights update algorithm in time polynomial in their natural dimension. This is
useful whenever there exists a polynomial time generalized counting oracle
(even if approximate) over these objects. Finally, using the combinatorial
structure of symmetric Nash-equilibria (SNE) when both players play bases of
matroids, we show that these can be found with a single projection or convex
minimization (without using online learning).
| Swati Gupta, Michel Goemans, Patrick Jaillet | null | 1603.00522 | null | null |
LOFS: Library of Online Streaming Feature Selection | cs.LG stat.ML | As an emerging research direction, online streaming feature selection deals
with sequentially added dimensions in a feature space while the number of data
instances is fixed. Online streaming feature selection provides a new,
complementary algorithmic methodology to enrich online feature selection,
especially targeting high dimensionality in big data analytics. This paper
introduces the first comprehensive open-source library for use in MATLAB that
implements the state-of-the-art algorithms of online streaming feature
selection. The library is designed to facilitate the development of new
algorithms in this exciting research direction and make comparisons between the
new methods and existing ones available.
| Kui Yu, Wei Ding, Xindong Wu | null | 1603.00531 | null | null |
Asymptotic behavior of $\ell_p$-based Laplacian regularization in
semi-supervised learning | cs.LG stat.ML | Given a weighted graph with $N$ vertices, consider a real-valued regression
problem in a semi-supervised setting, where one observes $n$ labeled vertices,
and the task is to label the remaining ones. We present a theoretical study of
$\ell_p$-based Laplacian regularization under a $d$-dimensional geometric
random graph model. We provide a variational characterization of the
performance of this regularized learner as $N$ grows to infinity while $n$
stays constant; the associated optimality conditions lead to a partial
differential equation that must be satisfied by the associated function
estimate $\hat{f}$. From this formulation we derive several predictions on the
limiting behavior of the $d$-dimensional function $\hat{f}$, including (a) a phase
transition in its smoothness at the threshold $p = d + 1$, and (b) a tradeoff
between smoothness and sensitivity to the underlying unlabeled data
distribution $P$. Thus, over the range $p \leq d$, the function estimate
$\hat{f}$ is degenerate and "spiky," whereas for $p\geq d+1$, the function
estimate $\hat{f}$ is smooth. We show that the effect of the underlying density
vanishes monotonically with $p$, such that in the limit $p = \infty$,
corresponding to the so-called Absolutely Minimal Lipschitz Extension, the
estimate $\hat{f}$ is independent of the distribution $P$. Under the assumption
of semi-supervised smoothness, ignoring $P$ can lead to poor statistical
performance, in particular, we construct a specific example for $d=1$ to
demonstrate that $p=2$ has lower risk than $p=\infty$ due to the former penalty
adapting to $P$ and the latter ignoring it. We also provide simulations that
verify the accuracy of our predictions for finite sample sizes. Together, these
properties show that $p = d+1$ is an optimal choice, yielding a function
estimate $\hat{f}$ that is both smooth and non-degenerate, while remaining
maximally sensitive to $P$.
| Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J. Wainwright,
Michael I. Jordan | null | 1603.00564 | null | null |
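The $\ell_p$-based Laplacian regularization analyzed above amounts to minimizing $\sum_{ij} w_{ij}\,|f_i - f_j|^p$ over the unlabeled vertices with the labeled values held fixed. The projected-gradient sketch below uses a toy proximity graph; the graph construction, step size, and iteration count are placeholder choices.

```python
import numpy as np

def lp_laplacian_regression(W, labeled_idx, labeled_vals, p=3.0,
                            lr=0.01, n_iter=2000):
    """Minimize sum_{ij} W_ij |f_i - f_j|^p with f clamped on the labeled vertices."""
    n = W.shape[0]
    f = np.zeros(n)
    f[labeled_idx] = labeled_vals
    free = np.setdiff1d(np.arange(n), labeled_idx)
    rows, cols = np.nonzero(W)
    for _ in range(n_iter):
        diff = f[rows] - f[cols]
        # gradient of each directed-edge penalty with respect to its row endpoint
        g_edge = W[rows, cols] * p * np.abs(diff) ** (p - 1) * np.sign(diff)
        grad = np.zeros(n)
        np.add.at(grad, rows, g_edge)
        f[free] -= lr * grad[free]          # labeled vertices stay fixed
    return f

# Toy usage: a proximity graph on points of the line with two labeled endpoints.
rng = np.random.default_rng(7)
pts = np.sort(rng.random(60))
W = (np.abs(pts[:, None] - pts[None, :]) < 0.08).astype(float)
np.fill_diagonal(W, 0.0)
f = lp_laplacian_regression(W, labeled_idx=np.array([0, 59]),
                            labeled_vals=np.array([0.0, 1.0]))
```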
Without-Replacement Sampling for Stochastic Gradient Methods:
Convergence Results and Application to Distributed Optimization | cs.LG math.OC stat.ML | Stochastic gradient methods for machine learning and optimization problems
are usually analyzed assuming data points are sampled \emph{with} replacement.
In practice, however, sampling \emph{without} replacement is very common,
easier to implement in many cases, and often performs better. In this paper, we
provide competitive convergence guarantees for without-replacement sampling,
under various scenarios, for three types of algorithms: Any algorithm with
online regret guarantees, stochastic gradient descent, and SVRG. A useful
application of our SVRG analysis is a nearly-optimal algorithm for regularized
least squares in a distributed setting, in terms of both communication
complexity and runtime complexity, when the data is randomly partitioned and
the condition number can be as large as the data size per machine (up to
logarithmic factors). Our proof techniques combine ideas from stochastic
optimization, adversarial online learning, and transductive learning theory,
and can potentially be applied to other stochastic optimization and learning
problems.
| Ohad Shamir | null | 1603.00570 | null | null |
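The contrast studied above is, in code, just the difference between drawing indices i.i.d. with replacement and reshuffling the data once per epoch. The least-squares objective and step size below are stand-ins.

```python
import numpy as np

def sgd_least_squares(X, y, epochs=20, lr=0.01, replacement=False, seed=0):
    """Plain SGD on the least-squares objective with either sampling scheme."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        if replacement:
            order = rng.integers(0, n, size=n)     # i.i.d. sampling with replacement
        else:
            order = rng.permutation(n)             # random reshuffling, no repeats
        for i in order:
            grad = (X[i] @ w - y[i]) * X[i]
            w -= lr * grad
    return w

rng = np.random.default_rng(8)
X = rng.normal(size=(1000, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=1000)
for mode in (False, True):
    w = sgd_least_squares(X, y, replacement=mode)
    print("with" if mode else "without", "replacement:",
          float(np.linalg.norm(w - w_true)))
```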
Distributed Estimation of Dynamic Parameters : Regret Analysis | math.OC cs.LG cs.SI | This paper addresses the estimation of a time- varying parameter in a
network. A group of agents sequentially receive noisy signals about the
parameter (or moving target), which does not follow any particular dynamics.
The parameter is not observable to an individual agent, but it is globally
identifiable for the whole network. Viewing the problem with an online
optimization lens, we aim to provide the finite-time or non-asymptotic analysis
of the problem. To this end, we use a notion of dynamic regret which suits the
online, non-stationary nature of the problem. In our setting, dynamic regret
can be recognized as a finite-time counterpart of stability in the mean-square
sense. We develop a distributed, online algorithm for tracking the moving
target. Defining the path-length as the consecutive differences between target
locations, we express an upper bound on regret in terms of the path-length of
the target and network errors. We further show the consistency of the result
with static setting and noiseless observations.
| Shahin Shahrampour, Alexander Rakhlin, Ali Jadbabaie | null | 1603.00576 | null | null |
PLATO: Policy Learning using Adaptive Trajectory Optimization | cs.LG | Policy search can in principle acquire complex strategies for control of
robots and other autonomous systems. When the policy is trained to process raw
sensory inputs, such as images and depth maps, it can also acquire a strategy
that combines perception and control. However, effectively processing such
complex inputs requires an expressive policy class, such as a large neural
network. These high-dimensional policies are difficult to train, especially
when learning to control safety-critical systems. We propose PLATO, an
algorithm that trains complex control policies with supervised learning, using
model-predictive control (MPC) to generate the supervision, hence never in need
of running a partially trained and potentially unsafe policy. PLATO uses an
adaptive training method to modify the behavior of MPC to gradually match the
learned policy in order to generate training samples at states that are likely
to be visited by the learned policy. PLATO also maintains the MPC cost as an
objective to avoid highly undesirable actions that would result from strictly
following the learned policy before it has been fully trained. We prove that
this type of adaptive MPC expert produces supervision that leads to good
long-horizon performance of the resulting policy. We also empirically
demonstrate that MPC can still avoid dangerous on-policy actions in unexpected
situations during training. Our empirical results on a set of challenging
simulated aerial vehicle tasks demonstrate that, compared to prior methods,
PLATO learns faster, experiences substantially fewer catastrophic failures
(crashes) during training, and often converges to a better policy.
| Gregory Kahn, Tianhao Zhang, Sergey Levine, Pieter Abbeel | null | 1603.00622 | null | null |
Probabilistic Relational Model Benchmark Generation | cs.LG cs.AI | The validation of any database mining methodology goes through an evaluation
process where benchmarks availability is essential. In this paper, we aim to
randomly generate relational database benchmarks that allow to check
probabilistic dependencies among the attributes. We are particularly interested
in Probabilistic Relational Models (PRMs), which extend Bayesian Networks (BNs)
to a relational data mining context and enable effective and robust reasoning
over relational data. Even though a panoply of works has focused, separately,
on the generation of random Bayesian networks and relational databases, no work
has been identified for PRMs on that track. This paper provides an algorithmic
approach for generating random PRMs from scratch to fill this gap. The proposed
method allows to generate PRMs as well as synthetic relational data from a
randomly generated relational schema and a random set of probabilistic
dependencies. This can be of interest not only for machine learning researchers
to evaluate their proposals in a common framework, but also for databases
designers to evaluate the effectiveness of the components of a database
management system.
| Mouna Ben Ishak (LARODEC), Rajani Chulyadyo (LINA), Philippe Leray
(LINA) | null | 1603.00709 | null | null |
Continuous Deep Q-Learning with Model-based Acceleration | cs.LG cs.AI cs.RO cs.SY | Model-free reinforcement learning has been successfully applied to a range of
challenging problems, and has recently been extended to handle large neural
network policies and value functions. However, the sample complexity of
model-free algorithms, particularly when using high-dimensional function
approximators, tends to limit their applicability to physical systems. In this
paper, we explore algorithms and representations to reduce the sample
complexity of deep reinforcement learning for continuous control tasks. We
propose two complementary techniques for improving the efficiency of such
algorithms. First, we derive a continuous variant of the Q-learning algorithm,
which we call normalized advantage functions (NAF), as an alternative to the
more commonly used policy gradient and actor-critic methods. The NAF representation
allows us to apply Q-learning with experience replay to continuous tasks, and
substantially improves performance on a set of simulated robotic control tasks.
To further improve the efficiency of our approach, we explore the use of
learned models for accelerating model-free reinforcement learning. We show that
iteratively refitted local linear models are especially effective for this, and
demonstrate substantially faster learning on domains where such models are
applicable.
| Shixiang Gu and Timothy Lillicrap and Ilya Sutskever and Sergey Levine | null | 1603.00748 | null | null |
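The normalized advantage function mentioned above constrains $Q$ to be quadratic in the action, $Q(s,a) = V(s) - \tfrac12 (a-\mu(s))^\top P(s)\,(a-\mu(s))$ with $P(s)$ positive definite, so the greedy action is simply $\mu(s)$. A small numerical illustration of that form, with hand-made values standing in for network outputs:

```python
import numpy as np

def naf_q(a, v, mu, L):
    """Q(s, a) = V(s) - 0.5 (a - mu)^T P (a - mu), with P = L L^T positive definite.

    In a NAF-style network, v, mu and L would be network outputs at state s;
    here they are fixed toy values.
    """
    P = L @ L.T
    d = a - mu
    return v - 0.5 * d @ P @ d

v = 3.0
mu = np.array([0.2, -0.5])
L = np.array([[1.0, 0.0],
              [0.3, 0.8]])          # lower-triangular factor keeps P symmetric PD

print(naf_q(mu, v, mu, L))          # equals V(s): the maximizing action is a = mu
print(naf_q(mu + 0.5, v, mu, L))    # any other action scores strictly lower
```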
Equity forecast: Predicting long term stock price movement using machine
learning | cs.LG q-fin.GN | Long term investment is one of the major investment strategies. However,
calculating intrinsic value of some company and evaluating shares for long term
investment is not easy, since analysts have to consider a large number of
financial indicators and evaluate them in the right manner. So far, little help
in predicting the direction of the company value over the longer period of time
has been provided from the machines. In this paper we present a machine
learning aided approach to evaluate an equity's future price over the long
term. Our method is able to correctly predict whether a company's value will
be 10% higher or not over the period of one year in 76.5% of cases.
| Nikola Milosevic | null | 1603.00751 | null | null |
Automatic Differentiation Variational Inference | stat.ML cs.AI cs.LG stat.CO | Probabilistic modeling is iterative. A scientist posits a simple model, fits
it to her data, refines it according to her analysis, and repeats. However,
fitting complex models to large data is a bottleneck in this process. Deriving
algorithms for new models can be both mathematically and computationally
challenging, which makes it difficult to efficiently cycle through the steps.
To this end, we develop automatic differentiation variational inference (ADVI).
Using our method, the scientist only provides a probabilistic model and a
dataset, nothing else. ADVI automatically derives an efficient variational
inference algorithm, freeing the scientist to refine and explore many models.
ADVI supports a broad class of models---no conjugacy assumptions are required. We
study ADVI across ten different models and apply it to a dataset with millions
of observations. ADVI is integrated into Stan, a probabilistic programming
system; it is available for immediate use.
| Alp Kucukelbir, Dustin Tran, Rajesh Ranganath, Andrew Gelman, David M.
Blei | null | 1603.00788 | null | null |
Character-based Neural Machine Translation | cs.CL cs.LG cs.NE stat.ML | Neural Machine Translation (MT) has reached state-of-the-art results.
However, one of the main challenges that neural MT still faces is dealing with
very large vocabularies and morphologically rich languages. In this paper, we
propose a neural MT system using character-based embeddings in combination with
convolutional and highway layers to replace the standard lookup-based word
representations. The resulting unlimited-vocabulary and affix-aware source word
embeddings are tested in a state-of-the-art neural MT based on an
attention-based bidirectional recurrent neural network. The proposed MT scheme
provides improved results even when the source language is not morphologically
rich. Improvements up to 3 BLEU points are obtained in the German-English WMT
task.
| Marta R. Costa-Juss\`a and Jos\'e A. R. Fonollosa | null | 1603.00810 | null | null |
Shallow and Deep Convolutional Networks for Saliency Prediction | cs.CV cs.LG | The prediction of salient areas in images has been traditionally addressed
with hand-crafted features based on neuroscience principles. This paper,
however, addresses the problem with a completely data-driven approach by
training a convolutional neural network (convnet). The learning process is
formulated as a minimization of a loss function that measures the Euclidean
distance of the predicted saliency map with the provided ground truth. The
recent publication of large datasets of saliency prediction has provided enough
data to train end-to-end architectures that are both fast and accurate. Two
designs are proposed: a shallow convnet trained from scratch, and another,
deeper solution whose first three layers are adapted from another network
trained for classification. To the authors' knowledge, these are the first
end-to-end CNNs trained and tested for the purpose of saliency prediction.
| Junting Pan, Kevin McGuinness, Elisa Sayrol, Noel O'Connor and Xavier
Giro-i-Nieto | null | 1603.00845 | null | null |
Molecular Graph Convolutions: Moving Beyond Fingerprints | stat.ML cs.LG | Molecular "fingerprints" encoding structural information are the workhorse of
cheminformatics and machine learning in drug discovery applications. However,
fingerprint representations necessarily emphasize particular aspects of the
molecular structure while ignoring others, rather than allowing the model to
make data-driven decisions. We describe molecular "graph convolutions", a
machine learning architecture for learning from undirected graphs, specifically
small molecules. Graph convolutions use a simple encoding of the molecular
graph---atoms, bonds, distances, etc.---which allows the model to take greater
advantage of information in the graph structure. Although graph convolutions do
not outperform all fingerprint-based methods, they (along with other
graph-based methods) represent a new paradigm in ligand-based virtual screening
with exciting opportunities for future improvement.
| Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, Patrick
Riley | 10.1007/s10822-016-9938-8 | 1603.00856 | null | null |
Counter-fitting Word Vectors to Linguistic Constraints | cs.CL cs.LG | In this work, we present a novel counter-fitting method which injects
antonymy and synonymy constraints into vector space representations in order to
improve the vectors' capability for judging semantic similarity. Applying this
method to publicly available pre-trained word vectors leads to a new state of
the art performance on the SimLex-999 dataset. We also show how the method can
be used to tailor the word vector space for the downstream task of dialogue
state tracking, resulting in robust improvements across different dialogue
domains.
| Nikola Mrk\v{s}i\'c and Diarmuid \'O S\'eaghdha and Blaise Thomson and
Milica Ga\v{s}i\'c and Lina Rojas-Barahona and Pei-Hao Su and David Vandyke
and Tsung-Hsien Wen and Steve Young | null | 1603.00892 | null | null |
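A heavily simplified counter-fitting pass, synonym pairs pulled together, antonym pairs pushed apart, and every vector kept close to its original position, can be sketched as below. The update rules, margin, and toy word list are assumptions of this sketch and not the paper's exact three-term objective.

```python
import numpy as np

def counter_fit(vectors, synonyms, antonyms, n_iter=50, lr=0.1,
                ant_margin=0.3, reg=0.02):
    """Simplified counter-fitting pass over a dictionary of word vectors.

    Synonym pairs are pulled together, antonym pairs are pushed apart whenever
    their similarity exceeds a margin, and every vector is nudged back toward
    its original position (a stand-in for the paper's vector-space preservation
    term). All vectors are kept at unit length.
    """
    original = {w: v / np.linalg.norm(v) for w, v in vectors.items()}
    vec = {w: v.copy() for w, v in original.items()}
    for _ in range(n_iter):
        for a, b in synonyms:                       # attract
            vec[a] += lr * (vec[b] - vec[a])
            vec[b] += lr * (vec[a] - vec[b])
        for a, b in antonyms:                       # repel
            if vec[a] @ vec[b] > ant_margin:
                vec[a] -= lr * vec[b]
                vec[b] -= lr * vec[a]
        for w in vec:                               # stay near the original space
            vec[w] += reg * (original[w] - vec[w])
            vec[w] /= np.linalg.norm(vec[w])
    return vec

# Toy usage with random 10-dimensional vectors for four hypothetical entries.
rng = np.random.default_rng(0)
words = ["cheap", "inexpensive", "expensive", "pricey"]
vecs = {w: rng.normal(size=10) for w in words}
out = counter_fit(vecs,
                  synonyms=[("cheap", "inexpensive"), ("expensive", "pricey")],
                  antonyms=[("cheap", "expensive")])
print("syn:", float(out["cheap"] @ out["inexpensive"]),
      "ant:", float(out["cheap"] @ out["expensive"]))
```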
Super Mario as a String: Platformer Level Generation Via LSTMs | cs.NE cs.LG | The procedural generation of video game levels has existed for at least 30
years, but only recently have machine learning approaches been used to generate
levels without specifying the rules for generation. A number of these have
looked at platformer levels as a sequence of characters and performed
generation using Markov chains. In this paper we examine the use of Long
Short-Term Memory recurrent neural networks (LSTMs) for the purpose of
generating levels trained from a corpus of Super Mario Brothers levels. We
analyze a number of different data representations and how the generated levels
fit into the space of human authored Super Mario Brothers levels.
| Adam Summerville and Michael Mateas | null | 1603.00930 | null | null |
Training Input-Output Recurrent Neural Networks through Spectral Methods | cs.LG cs.NE stat.ML | We consider the problem of training input-output recurrent neural networks
(RNN) for sequence labeling tasks. We propose a novel spectral approach for
learning the network parameters. It is based on decomposition of the
cross-moment tensor between the output and a non-linear transformation of the
input, based on score functions. We guarantee consistent learning with
polynomial sample and computational complexity under transparent conditions
such as non-degeneracy of model parameters, polynomial activations for the
neurons, and a Markovian evolution of the input sequence. We also extend our
results to Bidirectional RNN which uses both previous and future information to
output the label at each time point, and is employed in many NLP tasks such as
POS tagging.
| Hanie Sedghi and Anima Anandkumar | null | 1603.00954 | null | null |
Audio Word2Vec: Unsupervised Learning of Audio Segment Representations
using Sequence-to-sequence Autoencoder | cs.SD cs.LG | The vector representations of fixed dimensionality for words (in text)
offered by Word2Vec have been shown to be very useful in many application
scenarios, in particular due to the semantic information they carry. This paper
proposes a parallel version, the Audio Word2Vec. It offers the vector
representations of fixed dimensionality for variable-length audio segments.
These vector representations are shown to describe the sequential phonetic
structures of the audio segments to a good degree, with very attractive real
world applications such as query-by-example Spoken Term Detection (STD). In
this STD application, the proposed approach significantly outperformed the
conventional Dynamic Time Warping (DTW) based approaches at significantly lower
computation requirements. We propose unsupervised learning of Audio Word2Vec
from audio data without human annotation using Sequence-to-sequence Autoencoder
(SA). SA consists of two RNNs equipped with Long Short-Term Memory (LSTM)
units: the first RNN (encoder) maps the input audio sequence into a vector
representation of fixed dimensionality, and the second RNN (decoder) maps the
representation back to the input audio sequence. The two RNNs are jointly
trained by minimizing the reconstruction error. Denoising Sequence-to-sequence
Autoencoder (DSA) is further proposed, offering more robust learning.
| Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, Lin-Shan Lee | null | 1603.00982 | null | null |
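The Sequence-to-sequence Autoencoder described above can be sketched with a pair of LSTMs in PyTorch: the encoder's final hidden state serves as the fixed-length audio embedding, and the decoder reconstructs the frame sequence from it. The 39-dimensional acoustic features, the zero-input decoder, and the single-layer networks below are assumptions of this sketch, not the paper's exact SA/DSA configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqAutoencoder(nn.Module):
    def __init__(self, feat_dim=39, hidden_dim=128):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):
        # x: (batch, time, feat_dim), e.g. acoustic frames of an audio segment
        _, (h, c) = self.encoder(x)          # h: (1, batch, hidden_dim)
        z = h[-1]                            # fixed-length segment embedding
        # The decoder is conditioned on the embedding through its initial state
        # and reconstructs the sequence from zero inputs (teacher forcing omitted).
        dec_in = torch.zeros_like(x)
        y, _ = self.decoder(dec_in, (h, c))
        return self.out(y), z

model = Seq2SeqAutoencoder()
x = torch.randn(8, 50, 39)                   # toy batch: 8 segments of 50 frames
recon, embedding = model(x)
loss = nn.functional.mse_loss(recon, x)      # reconstruction error to minimize
loss.backward()
```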
Learning Functions: When Is Deep Better Than Shallow | cs.LG | While the universal approximation property holds both for hierarchical and
shallow networks, we prove that deep (hierarchical) networks can approximate
the class of compositional functions with the same accuracy as shallow networks
but with an exponentially lower number of training parameters as well as
VC-dimension. This theorem settles an old conjecture by Bengio on the role of
depth in networks. We then define a general class of scalable, shift-invariant
algorithms to show a simple and natural set of requirements that justify deep
convolutional networks.
| Hrushikesh Mhaskar, Qianli Liao, Tomaso Poggio | null | 1603.00988 | null | null |
Convolutional Neural Networks using Logarithmic Data Representation | cs.NE cs.LG | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits.
| Daisuke Miyashita and Edward H. Lee and Boris Murmann | null | 1603.01025 | null | null |
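The core of the representation above is replacing linear fixed-point levels with powers of two, so that multiplications become sign flips and bit shifts. A minimal weight-quantization sketch follows; the 3-bit exponent budget and clipping range are placeholder choices.

```python
import numpy as np

def log2_quantize(w, n_bits=3, max_exp=0):
    """Quantize weights to sign * 2^e with e drawn from a small integer range.

    n_bits covers the exponent levels plus an implicit zero code; values whose
    magnitude rounds below the smallest representable power of two become zero.
    """
    n_levels = 2 ** n_bits - 1                   # reserve one code for zero
    min_exp = max_exp - (n_levels - 1)
    sign = np.sign(w)
    with np.errstate(divide="ignore"):
        exp = np.round(np.log2(np.abs(w)))
    exp = np.clip(exp, min_exp - 1, max_exp)     # one step below min_exp maps to zero
    return sign * np.where(exp < min_exp, 0.0, 2.0 ** exp)

w = np.array([0.9, 0.3, 0.12, -0.04, 0.002, -0.6])
print(log2_quantize(w))
# Dot products against such weights reduce to sign flips and bit shifts, which
# is what removes the need for bulky digital multipliers in hardware.
```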
Modeling the Sequence of Brain Volumes by Local Mesh Models for Brain
Decoding | cs.LG cs.AI cs.CV | We represent the sequence of fMRI (Functional Magnetic Resonance Imaging)
brain volumes recorded during a cognitive stimulus by a graph which consists of
a set of local meshes. The corresponding cognitive process, encoded in the
brain, is then represented by these meshes each of which is estimated assuming
a linear relationship among the voxel time series in a predefined locality.
First, we define the concept of locality in two neighborhood systems, namely,
the spatial and functional neighborhoods. Then, we construct spatially and
functionally local meshes around each voxel, called seed voxel, by connecting
it either to its spatial or functional p-nearest neighbors. The mesh formed
around a voxel is a directed sub-graph with a star topology, where the
direction of the edges is taken towards the seed voxel at the center of the
mesh. We represent the time series recorded at each seed voxel in terms of
linear combination of the time series of its p-nearest neighbors in the mesh.
The relationships between a seed voxel and its neighbors are represented by the
edge weights of each mesh, and are estimated by solving a linear regression
equation. The estimated mesh edge weights lead to a better representation of
information in the brain for encoding and decoding of the cognitive tasks. We
test our model on visual object recognition and emotional memory retrieval
experiments using Support Vector Machines that are trained using the mesh edge
weights as features. In the experimental analysis, we observe that the edge
weights of the spatial and functional meshes perform better than the
state-of-the-art brain decoding models.
| Itir Onal, Mete Ozay, Eda Mizrak, Ilke Oztekin, Fatos T. Yarman Vural | null | 1603.01067 | null | null |
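A small numpy sketch of the per-voxel estimation step described above: the time series of a seed voxel is regressed on the time series of its p nearest neighbours, and the least-squares coefficients become the mesh edge weights used as features. Neighbour selection and fMRI data loading are assumed to happen elsewhere; the toy data here is synthetic.

```python
import numpy as np

def mesh_edge_weights(seed_ts, neighbor_ts):
    """Estimate mesh edge weights for one seed voxel.

    seed_ts:     (T,) time series of the seed voxel
    neighbor_ts: (T, p) time series of its p nearest neighbours
    Returns the (p,) least-squares regression coefficients (edge weights).
    """
    weights, *_ = np.linalg.lstsq(neighbor_ts, seed_ts, rcond=None)
    return weights

# Toy example: 100 time points, 5 neighbours.
T, p = 100, 5
neighbors = np.random.randn(T, p)
seed = neighbors @ np.array([0.5, -0.2, 0.1, 0.0, 0.3]) + 0.01 * np.random.randn(T)
print(mesh_edge_weights(seed, neighbors))
```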
Deep Reinforcement Learning from Self-Play in Imperfect-Information
Games | cs.LG cs.AI cs.GT | Many real-world applications can be described as large-scale games of
imperfect information. To deal with these challenging domains, prior work has
focused on computing Nash equilibria in a handcrafted abstraction of the
domain. In this paper we introduce the first scalable end-to-end approach to
learning approximate Nash equilibria without prior domain knowledge. Our method
combines fictitious self-play with deep reinforcement learning. When applied to
Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium,
whereas common reinforcement learning methods diverged. In Limit Texas Holdem,
a poker game of real-world scale, NFSP learnt a strategy that approached the
performance of state-of-the-art, superhuman algorithms based on significant
domain expertise.
| Johannes Heinrich, David Silver | null | 1603.01121 | null | null |
End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF | cs.LG cs.CL stat.ML | State-of-the-art sequence labeling systems traditionally require large
amounts of task-specific knowledge in the form of hand-crafted features and
data pre-processing. In this paper, we introduce a novel neural network
architecture that benefits from both word- and character-level representations
automatically, by using a combination of bidirectional LSTM, CNN and CRF. Our
system is truly end-to-end, requiring no feature engineering or data
pre-processing, thus making it applicable to a wide range of sequence labeling
tasks. We evaluate our system on two data sets for two sequence labeling tasks
--- Penn Treebank WSJ corpus for part-of-speech (POS) tagging and CoNLL 2003
corpus for named entity recognition (NER). We obtain state-of-the-art
performance on both datasets --- 97.55\% accuracy for POS tagging and
91.21\% F1 for NER.
| Xuezhe Ma, Eduard Hovy | null | 1603.01354 | null | null |
Learning deep representation of multityped objects and tasks | stat.ML cs.CV cs.LG | We introduce a deep multitask architecture to integrate multityped
representations of multimodal objects. This multitype exposition is less
abstract than the multimodal characterization, but more machine-friendly, and
thus is more precise to model. For example, an image can be described by
multiple visual views, which can be in the forms of bag-of-words (counts) or
color/texture histograms (real-valued). At the same time, the image may have
several social tags, which are best described using a sparse binary vector. Our
deep model takes as input multiple type-specific features, narrows the
cross-modality semantic gaps, learns cross-type correlation, and produces a
high-level homogeneous representation. At the same time, the model supports
heterogeneously typed tasks. We demonstrate the capacity of the model on two
applications: social image retrieval and multiple concept prediction. The deep
architecture produces more compact representation, naturally integrates
multiviews and multimodalities, exploits better side information, and most
importantly, performs competitively against baselines.
| Truyen Tran, Dinh Phung and Svetha Venkatesh | null | 1603.01359 | null | null |
A Unified View of Localized Kernel Learning | cs.LG stat.ML | Multiple Kernel Learning, or MKL, extends (kernelized) SVM by attempting to
learn not only a classifier/regressor but also the best kernel for the training
task, usually from a combination of existing kernel functions. Most MKL methods
seek the combined kernel that performs best over every training example,
sacrificing performance in some areas to seek a global optimum. Localized
kernel learning (LKL) overcomes this limitation by allowing the training
algorithm to match a component kernel to the examples that can exploit it best.
Several approaches to the localized kernel learning problem have been explored
in the last several years. We unify many of these approaches under one simple
system and design a new algorithm with improved performance. We also develop
enhanced versions of existing algorithms, with an eye on scalability and
performance.
| John Moeller, Sarathkrishna Swaminathan, Suresh Venkatasubramanian | null | 1603.01374 | null | null |
Normalization Propagation: A Parametric Technique for Removing Internal
Covariate Shift in Deep Networks | stat.ML cs.LG | While the authors of Batch Normalization (BN) identify and address an
important problem involved in training deep networks-- Internal Covariate
Shift-- the current solution has certain drawbacks. Specifically, BN depends on
batch statistics for layerwise input normalization during training which makes
the estimates of mean and standard deviation of input (distribution) to hidden
layers inaccurate for validation due to shifting parameter values (especially
during initial training epochs). Also, BN cannot be used with batch-size 1
during training. We address these drawbacks by proposing a non-adaptive
normalization technique for removing internal covariate shift, that we call
Normalization Propagation. Our approach does not depend on batch statistics,
but rather uses a data-independent parametric estimate of mean and
standard-deviation in every layer thus being computationally faster compared
with BN. We exploit the observation that the pre-activations before Rectified
Linear Units follow a Gaussian distribution in deep networks, and that once the
first and second order statistics of any given dataset are normalized, we can
forward propagate this normalization without the need for recalculating the
approximate statistics for hidden layers.
| Devansh Arpit, Yingbo Zhou, Bhargava U. Kota, Venu Govindaraju | null | 1603.01431 | null | null |
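A simplified numpy sketch of the idea, not the authors' exact formulation: the data is normalized once, and each hidden layer rescales its pre-activation by the incoming weight norms while the post-ReLU statistics are corrected with the analytic mean and standard deviation of a rectified standard Gaussian, so no batch statistics are needed. The single-layer structure and constants are illustrative.

```python
import numpy as np

RELU_MEAN = 1.0 / np.sqrt(2.0 * np.pi)          # E[max(0, z)] for z ~ N(0, 1)
RELU_STD = np.sqrt(0.5 - 1.0 / (2.0 * np.pi))   # std of max(0, z) for z ~ N(0, 1)

def normprop_layer(x, W, b):
    """One hidden layer with data-independent normalization (sketch).

    x is assumed to be approximately zero-mean, unit-variance per dimension;
    the layer then preserves these statistics analytically.
    """
    # Scale each output unit by the norm of its incoming weights so the
    # pre-activation stays roughly standard normal.
    pre = (x @ W + b) / np.linalg.norm(W, axis=0)
    post = np.maximum(pre, 0.0)                  # ReLU
    # Correct the post-ReLU statistics with the analytic constants.
    return (post - RELU_MEAN) / RELU_STD

x = np.random.randn(64, 100)                     # already-normalized inputs
W = np.random.randn(100, 50) / np.sqrt(100)
h = normprop_layer(x, W, np.zeros(50))
print(h.mean(), h.std())                         # roughly 0 and 1
```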
Sequential ranking under random semi-bandit feedback | cs.DS cs.LG | In many web applications, a recommendation is not a single item suggested to
a user but a list of possibly interesting contents that may be ranked in some
contexts. The combinatorial bandit problem has been studied quite extensively
over the last two years, and many theoretical results now exist: lower bounds on
the regret and asymptotically optimal algorithms. However, because of the
variety of situations that can be considered, results are designed to solve the
problem for a specific reward structure such as the Cascade Model. The present
work focuses on the problem of ranking items when the user is allowed to click
on several items while scanning the list from top to bottom.
| Hossein Vahabi, Paul Lagr\'ee, Claire Vernade, Olivier Capp\'e | null | 1603.01450 | null | null |
Integrated Sequence Tagging for Medieval Latin Using Deep Representation
Learning | cs.CL cs.LG stat.ML | In this paper we consider two sequence tagging tasks for medieval Latin:
part-of-speech tagging and lemmatization. These are both basic, yet
foundational preprocessing steps in applications such as text re-use detection.
Nevertheless, they are generally complicated by the considerable orthographic
variation which is typical of medieval Latin. In Digital Classics, these tasks
are traditionally solved in a (i) cascaded and (ii) lexicon-dependent fashion.
For example, a lexicon is used to generate all the potential lemma-tag pairs
for a token, and next, a context-aware PoS-tagger is used to select the most
appropriate tag-lemma pair. Apart from the problems with out-of-lexicon items,
error percolation is a major downside of such approaches. In this paper we
explore the possibility to elegantly solve these tasks using a single,
integrated approach. For this, we make use of a layered neural network
architecture from the field of deep representation learning.
| Mike Kestemont, Jeroen De Gussem | 10.46298/jdmdh.1398 | 1603.01597 | null | null |
Network Morphism | cs.LG cs.CV cs.NE | We present in this paper a systematic study on how to morph a well-trained
neural network to a new one so that its network function can be completely
preserved. We define this as \emph{network morphism} in this research. After
morphing a parent network, the child network is expected to inherit the
knowledge from its parent network and also has the potential to continue
growing into a more powerful one with much shortened training time. The first
requirement for this network morphism is its ability to handle diverse morphing
types of networks, including changes of depth, width, kernel size, and even
subnet. To meet this requirement, we first introduce the network morphism
equations, and then develop novel morphing algorithms for all these morphing
types for both classic and convolutional neural networks. The second
requirement for this network morphism is its ability to deal with non-linearity
in a network. We propose a family of parametric-activation functions to
facilitate the morphing of any continuous non-linear activation neurons.
Experimental results on benchmark datasets and typical neural networks
demonstrate the effectiveness of the proposed network morphism scheme.
| Tao Wei, Changhu Wang, Yong Rui, Chang Wen Chen | null | 1603.01670 | null | null |
Classifier ensemble creation via false labelling | cs.LG | In this paper, a novel approach to classifier ensemble creation is presented.
While other ensemble creation techniques are based on careful selection of
existing classifiers or preprocessing of the data, the presented approach
automatically creates an optimal labelling for a number of classifiers, which
are then assigned to the original data instances and fed to classifiers. The
approach has been evaluated on high-dimensional biomedical datasets. The
results show that the approach outperformed individual approaches in all cases.
| B\'alint Antal | 10.1016/j.knosys.2015.07.009 | 1603.01716 | null | null |
Variational methods for Conditional Multimodal Deep Learning | cs.CV cs.LG stat.ML | In this paper, we address the problem of conditional modality learning,
whereby one is interested in generating one modality given the other. While it
is straightforward to learn a joint distribution over multiple modalities using
a deep multimodal architecture, we observe that such models aren't very
effective at conditional generation. Hence, we address the problem by learning
conditional distributions between the modalities. We use variational methods
for maximizing the corresponding conditional log-likelihood. The resultant deep
model, which we refer to as conditional multimodal autoencoder (CMMA), forces
the latent representation obtained from a single modality alone to be `close'
to the joint representation obtained from multiple modalities. We use the
proposed model to generate faces from attributes. We show that the faces
generated from attributes using the proposed model, are qualitatively and
quantitatively more representative of the attributes from which they were
generated, than those obtained by other deep generative models. We also propose
a secondary task, whereby the existing faces are modified by modifying the
corresponding attributes. We observe that the modifications in faces introduced
by the proposed model are representative of the corresponding modifications in
attributes.
| Gaurav Pandey and Ambedkar Dukkipati | null | 1603.01801 | null | null |
Hierarchical Decision Making In Electricity Grid Management | cs.AI cs.LG stat.AP | The power grid is a complex and vital system that necessitates careful
reliability management. Managing the grid is a difficult problem with multiple
time scales of decision making and stochastic behavior due to renewable energy
generation, variable demand, and unplanned outages. Solving this problem in the
face of uncertainty requires a new methodology with tractable algorithms. In
this work, we introduce a new model for hierarchical decision making in complex
systems. We apply reinforcement learning (RL) methods to learn a proxy, i.e., a
level of abstraction, for real-time power grid reliability. We devise an
algorithm that alternates between slow time-scale policy improvement, and fast
time-scale value function approximation. We compare our results to prevailing
heuristics, and show the strength of our method.
| Gal Dalal, Elad Gilboa, Shie Mannor | null | 1603.01840 | null | null |
Online Learning to Rank with Feedback at the Top | cs.LG | We consider an online learning to rank setting in which, at each round, an
oblivious adversary generates a list of $m$ documents, pertaining to a query,
and the learner produces scores to rank the documents. The adversary then
generates a relevance vector and the learner updates its ranker according to
the feedback received. We consider the setting where the feedback is restricted
to be the relevance levels of only the top $k$ documents in the ranked list for
$k \ll m$. However, the performance of the learner is judged based on the
unrevealed full relevance vectors, using an appropriate learning to rank loss
function. We develop efficient algorithms for well known losses in the
pointwise, pairwise and listwise families. We also prove that no online
algorithm can have sublinear regret, with top-1 feedback, for any loss that is
calibrated with respect to NDCG. We apply our algorithms on benchmark datasets
demonstrating efficient online learning of a ranking function from highly
restricted feedback.
| Sougata Chaudhuri and Ambuj Tewari | null | 1603.01855 | null | null |
Generalization error bounds for learning to rank: Does the length of
document lists matter? | cs.LG | We consider the generalization ability of algorithms for learning to rank at
a query level, a problem also called subset ranking. Existing generalization
error bounds necessarily degrade as the size of the document list associated
with a query increases. We show that such a degradation is not intrinsic to the
problem. For several loss functions, including the cross-entropy loss used in
the well known ListNet method, there is \emph{no} degradation in generalization
ability as document lists become longer. We also provide novel generalization
error bounds under $\ell_1$ regularization and faster convergence rates if the
loss function is smooth.
| Ambuj Tewari and Sougata Chaudhuri | null | 1603.01860 | null | null |
Personalized Advertisement Recommendation: A Ranking Approach to Address
the Ubiquitous Click Sparsity Problem | cs.LG cs.IR | We study the problem of personalized advertisement recommendation (PAR),
which consists of a user visiting a system (website) and the system displaying
one of $K$ ads to the user. The system uses an internal ad recommendation
policy to map the user's profile (context) to one of the ads. The user either
clicks or ignores the ad and correspondingly, the system updates its
recommendation policy. The PAR problem is usually tackled by scalable
\emph{contextual bandit} algorithms, where the policies are generally based on
classifiers. A practical problem in PAR is extreme click sparsity, due to very
few users actually clicking on ads. We systematically study the drawback of
using contextual bandit algorithms based on classifier-based policies, in the face
of extreme click sparsity. We then suggest an alternate policy, based on
rankers, learnt by optimizing the Area Under the Curve (AUC) ranking loss,
which can significantly alleviate the problem of click sparsity. We conduct
extensive experiments on public datasets, as well as three industry proprietary
datasets, to illustrate the improvement in click-through-rate (CTR) obtained by
using the ranker-based policy over classifier-based policies.
| Sougata Chaudhuri and Georgios Theocharous and Mohammad Ghavamzadeh | null | 1603.01870 | null | null |
Confidence-Constrained Maximum Entropy Framework for Learning from
Multi-Instance Data | cs.LG cs.IT math.IT stat.ML | Multi-instance data, in which each object (bag) contains a collection of
instances, are widespread in machine learning, computer vision, bioinformatics,
signal processing, and social sciences. We present a maximum entropy (ME)
framework for learning from multi-instance data. In this approach each bag is
represented as a distribution using the principle of ME. We introduce the
concept of confidence-constrained ME (CME) to simultaneously learn the
structure of distribution space and infer each distribution. The shared
structure underlying each density is used to learn from instances inside each
bag. The proposed CME is free of tuning parameters. We devise a fast
optimization algorithm capable of handling large scale multi-instance data. In
the experimental section, we evaluate the performance of the proposed approach
in terms of exact rank recovery in the space of distributions and compare it
with the regularized ME approach. Moreover, we compare the performance of CME
with Multi-Instance Learning (MIL) state-of-the-art algorithms and show a
comparable performance in terms of accuracy with reduced computational
complexity.
| Behrouz Behmardi, Forrest Briggs, Xiaoli Z. Fern, and Raviv Raich | null | 1603.01901 | null | null |
A Latent Variable Recurrent Neural Network for Discourse Relation
Language Models | cs.CL cs.LG cs.NE stat.ML | This paper presents a novel latent variable recurrent neural network
architecture for jointly modeling sequences of words and (possibly latent)
discourse relations between adjacent sentences. A recurrent neural network
generates individual words, thus reaping the benefits of
discriminatively-trained vector representations. The discourse relations are
represented with a latent variable, which can be predicted or marginalized,
depending on the task. The resulting model can therefore employ a training
objective that includes not only discourse relation classification, but also
word prediction. As a result, it outperforms state-of-the-art alternatives for
two tasks: implicit discourse relation classification in the Penn Discourse
Treebank, and dialog act classification in the Switchboard corpus. Furthermore,
by marginalizing over latent discourse relations at test time, we obtain a
discourse informed language model, which improves over a strong LSTM baseline.
| Yangfeng Ji and Gholamreza Haffari and Jacob Eisenstein | null | 1603.01913 | null | null |
Differentially Private Policy Evaluation | cs.LG stat.ML | We present the first differentially private algorithms for reinforcement
learning, which apply to the task of evaluating a fixed policy. We establish
two approaches for achieving differential privacy, provide a theoretical
analysis of the privacy and utility of the two algorithms, and show promising
results on simple empirical examples.
| Borja Balle, Maziar Gomrokchi, Doina Precup | null | 1603.02010 | null | null |
Unscented Bayesian Optimization for Safe Robot Grasping | cs.RO cs.AI cs.LG cs.SY | We address the robot grasp optimization problem of unknown objects
considering uncertainty in the input space. Grasping unknown objects can be
achieved by using a trial and error exploration strategy. Bayesian optimization
is a sample efficient optimization algorithm that is especially suitable for
such setups, as it actively reduces the number of trials for learning about the
function to optimize. In fact, this active object exploration is the same
strategy that infants use to learn optimal grasps. One problem that arises while
learning grasping policies is that some configurations of grasp parameters may
be very sensitive to error in the relative pose between the object and robot
end-effector. We call these configurations unsafe because small errors during
grasp execution may turn good grasps into bad grasps. Therefore, to reduce the
risk of grasp failure, grasps should be planned in safe areas. We propose a new
algorithm, Unscented Bayesian optimization that is able to perform sample
efficient optimization while taking into consideration input noise to find safe
optima. The contribution of Unscented Bayesian optimization is twofold, as it
provides a new decision process that drives exploration to safe regions and a
new selection procedure that chooses the optimum in terms of its safety without
extra analysis or computational cost. Both contributions are rooted in the
strong theory behind the unscented transformation, a popular nonlinear
approximation method. We show its advantages with respect to the classical
Bayesian optimization both in synthetic problems and in realistic robot grasp
simulations. The results highlight that our method achieves optimal and robust
grasping policies after a few trials, while the selected grasps remain in safe
regions.
| Jos\'e Nogueira, Ruben Martinez-Cantin, Alexandre Bernardino and
Lorenzo Jamone | null | 1603.02038 | null | null |
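A rough numpy sketch of the unscented treatment of input noise described above: instead of evaluating an acquisition function at a single query point, it is averaged over sigma points generated by the unscented transform of an assumed input-noise covariance. The acquisition function, noise covariance, and sigma-point parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def sigma_points(x, cov, kappa=1.0):
    """2n+1 sigma points and weights of the unscented transform around x."""
    n = x.size
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_acquisition(acq, x, input_cov):
    """Average an acquisition function over the sigma points of the input noise."""
    pts, w = sigma_points(x, input_cov)
    return np.sum(w * np.array([acq(p) for p in pts]))

# Toy acquisition with a narrow ("unsafe") peak and a broad ("safe") one.
acq = lambda p: np.exp(-50 * (p[0] - 0.2) ** 2) + 0.8 * np.exp(-2 * (p[0] - 0.7) ** 2)
cov = np.array([[0.01]])
for x0 in (0.2, 0.7):
    print(x0, unscented_acquisition(acq, np.array([x0]), cov))
```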
Learning Shared Representations in Multi-task Reinforcement Learning | cs.AI cs.LG | We investigate a paradigm in multi-task reinforcement learning (MT-RL) in
which an agent is placed in an environment and needs to learn to perform a
series of tasks, within this space. Since the environment does not change,
there is potentially a lot of common ground amongst tasks and learning to solve
them individually seems extremely wasteful. In this paper, we explicitly model
and learn this shared structure as it arises in the state-action value space.
We will show how one can jointly learn optimal value-functions by modifying the
popular Value-Iteration and Policy-Iteration procedures to accommodate this
shared representation assumption and leverage the power of multi-task
supervised learning. Finally, we demonstrate that the proposed model and
training procedures are able to infer good value functions, even in low-sample
regimes. In addition to data efficiency, we will show in our analysis
that learning abstractions of the state space jointly across tasks leads to
more robust, transferable representations with the potential for better
generalization.
| Diana Borsa and Thore Graepel and John Shawe-Taylor | null | 1603.02041 | null | null |
Optimal dictionary for least squares representation | cs.LG math.OC stat.ML | Dictionaries are collections of vectors used for representations of random
vectors in Euclidean spaces. Recent research on optimal dictionaries is focused
on constructing dictionaries that offer sparse representations, i.e.,
$\ell_0$-optimal representations. Here we consider the problem of finding
optimal dictionaries with which representations of samples of a random vector
are optimal in an $\ell_2$-sense: optimality of representation is defined as
attaining the minimal average $\ell_2$-norm of the coefficients used to
represent the random vector. With the help of recent results on rank-$1$
decompositions of symmetric positive semidefinite matrices, we provide an
explicit description of $\ell_2$-optimal dictionaries as well as their
algorithmic constructions in polynomial time.
| Mohammed Rayyan Sheriff and Debasish Chatterjee | null | 1603.02074 | null | null |
Distributed Multi-Task Learning with Shared Representation | cs.LG stat.ML | We study the problem of distributed multi-task learning with shared
representation, where each machine aims to learn a separate, but related, task
in an unknown shared low-dimensional subspace, i.e., when the predictor matrix
has low rank. We consider a setting where each task is handled by a different
machine, with samples for the task available locally on the machine, and study
communication-efficient methods for exploiting the shared structure.
| Jialei Wang, Mladen Kolar, Nathan Srebro | null | 1603.02185 | null | null |
Gaussian Process Regression for Out-of-Sample Extension | cs.LG cs.CV | Manifold learning methods are useful for high dimensional data analysis. Many
of the existing methods produce a low dimensional representation that attempts
to describe the intrinsic geometric structure of the original data. Typically,
this process is computationally expensive and the produced embedding is limited
to the training data. In many real life scenarios, the ability to produce
embedding of unseen samples is essential. In this paper we propose a Bayesian
non-parametric approach for out-of-sample extension. The method is based on
Gaussian Process Regression and independent of the manifold learning algorithm.
Additionally, the method naturally provides a measure for the degree of
abnormality for a newly arrived data point that did not participate in the
training process. We derive the mathematical connection between the proposed
method and the Nystrom extension and show that the latter is a special case of
the former. We present extensive experimental results that demonstrate the
performance of the proposed method and compare it to other existing
out-of-sample extension methods.
| Oren Barkan, Jonathan Weill and Amir Averbuch | null | 1603.02194 | null | null |
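A minimal scikit-learn sketch of the described idea: a Gaussian Process regressor is fit from the high-dimensional training points to their previously computed low-dimensional embedding, so unseen samples can be mapped into the embedding, with the predictive standard deviation serving as an abnormality score. The spectral embedding used here is just an illustrative stand-in for any manifold learning output; kernel and data choices are assumptions.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.RandomState(0)
X_train = rng.randn(200, 10)                      # high-dimensional training data
Y_train = SpectralEmbedding(n_components=2).fit_transform(X_train)

# GP mapping from the ambient space to the learned embedding coordinates.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, Y_train)

X_new = rng.randn(5, 10)                          # unseen samples
Y_new, std = gpr.predict(X_new, return_std=True)  # embedding + abnormality score
print(Y_new.shape, std.shape)
```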
Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning
and Large-Scale Data Collection | cs.LG cs.AI cs.CV cs.RO | We describe a learning-based approach to hand-eye coordination for robotic
grasping from monocular images. To learn hand-eye coordination for grasping, we
trained a large convolutional neural network to predict the probability that
task-space motion of the gripper will result in successful grasps, using only
monocular camera images and independently of camera calibration or the current
robot pose. This requires the network to observe the spatial relationship
between the gripper and objects in the scene, thus learning hand-eye
coordination. We then use this network to servo the gripper in real time to
achieve successful grasps. To train our network, we collected over 800,000
grasp attempts over the course of two months, using between 6 and 14 robotic
manipulators at any given time, with differences in camera placement and
hardware. Our experimental evaluation demonstrates that our method achieves
effective real-time control, can successfully grasp novel objects, and corrects
mistakes by continuous servoing.
| Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen | null | 1603.02199 | null | null |
Online Sparse Linear Regression | cs.LG | We consider the online sparse linear regression problem, which is the problem
of sequentially making predictions observing only a limited number of features
in each round, to minimize regret with respect to the best sparse linear
regressor, where prediction accuracy is measured by square loss. We give an
inefficient algorithm that obtains regret bounded by $\tilde{O}(\sqrt{T})$
after $T$ prediction rounds. We complement this result by showing that no
algorithm running in polynomial time per iteration can achieve regret bounded
by $O(T^{1-\delta})$ for any constant $\delta > 0$ unless $\text{NP} \subseteq
\text{BPP}$. This computational hardness result resolves an open problem
presented in COLT 2014 (Kale, 2014) and also posed by Zolghadr et al. (2013).
This hardness result holds even if the algorithm is allowed to access more
features than the best sparse linear regressor up to a logarithmic factor in
the dimension.
| Dean Foster, Satyen Kale and Howard Karloff | null | 1603.02250 | null | null |
Stochastic dual averaging methods using variance reduction techniques
for regularized empirical risk minimization problems | math.OC cs.LG stat.ML | We consider a composite convex minimization problem associated with
regularized empirical risk minimization, which often arises in machine
learning. We propose two new stochastic gradient methods that are based on
stochastic dual averaging method with variance reduction. Our methods generate
a sparser solution than the existing methods because we do not need to take the
average of the history of the solutions. This is favorable in terms of both
interpretability and generalization. Moreover, our methods have theoretical
support for both a strongly and a non-strongly convex regularizer and achieve
the best known convergence rates among existing nonaccelerated stochastic
gradient methods.
| Tomoya Murata and Taiji Suzuki | null | 1603.02412 | null | null |
A Bayesian non-parametric method for clustering high-dimensional binary
data | stat.AP cs.LG stat.ML | In many real life problems, objects are described by large number of binary
features. For instance, documents are characterized by presence or absence of
certain keywords; cancer patients are characterized by presence or absence of
certain mutations etc. In such cases, grouping together similar
objects/profiles based on such high dimensional binary features is desirable,
but challenging. Here, I present a Bayesian non-parametric algorithm for
clustering high dimensional binary data. It uses a Dirichlet Process (DP)
mixture model and simulated annealing to not only cluster binary data, but also
find optimal number of clusters in the data. The performance of the algorithm
was evaluated and compared with other algorithms using simulated datasets. It
outperformed all other clustering methods that were tested in the simulation
studies. It was also used to cluster real datasets arising from document
analysis, handwritten image analysis and cancer research. It successfully
divided a set of documents based on their topics, hand written images based on
different styles of writing digits and identified tissue and mutation
specificity of chemotherapy treatments.
| Tapesh Santra | null | 1603.02494 | null | null |
Mixture Proportion Estimation via Kernel Embedding of Distributions | cs.LG stat.ML | Mixture proportion estimation (MPE) is the problem of estimating the weight
of a component distribution in a mixture, given samples from the mixture and
component. This problem constitutes a key part in many "weakly supervised
learning" problems like learning with positive and unlabelled samples, learning
with label noise, anomaly detection and crowdsourcing. While there have been
several methods proposed to solve this problem, to the best of our knowledge no
efficient algorithm with a proven convergence rate towards the true proportion
exists for this problem. We fill this gap by constructing a provably correct
algorithm for MPE, and derive convergence rates under certain assumptions on
the distribution. Our method is based on embedding distributions onto an RKHS,
and implementing it only requires solving a simple convex quadratic programming
problem a few times. We run our algorithm on several standard classification
datasets, and demonstrate that it performs comparably to or better than other
algorithms on most datasets.
| Harish G. Ramaswamy and Clayton Scott and Ambuj Tewari | null | 1603.02501 | null | null |
Variational Autoencoders for Semi-supervised Text Classification | cs.CL cs.LG | Although semi-supervised variational autoencoder (SemiVAE) works in image
classification task, it fails in text classification task if using vanilla LSTM
as its decoder. From a perspective of reinforcement learning, it is verified
that the decoder's capability to distinguish between different categorical
labels is essential. Therefore, Semi-supervised Sequential Variational
Autoencoder (SSVAE) is proposed, which increases the capability by feeding
label into its decoder RNN at each time-step. Two specific decoder structures
are investigated and both of them are verified to be effective. Besides, in
order to reduce the computational complexity in training, a novel optimization
method is proposed, which estimates the gradient of the unlabeled objective
function by sampling, along with two variance reduction techniques.
Experimental results on Large Movie Review Dataset (IMDB) and AG's News corpus
show that the proposed approach significantly improves the classification
accuracy compared with pure-supervised classifiers, and achieves competitive
performance against previous advanced methods. State-of-the-art results can be
obtained by integrating other pretraining-based methods.
| Weidi Xu, Haoze Sun, Chao Deng, Ying Tan | null | 1603.02514 | null | null |
On the inconsistency of $\ell_1$-penalised sparse precision matrix
estimation | cs.LG stat.CO stat.ML | Various $\ell_1$-penalised estimation methods such as graphical lasso and
CLIME are widely used for sparse precision matrix estimation. Many of these
methods have been shown to be consistent under various quantitative assumptions
about the underlying true covariance matrix. Intuitively, these conditions are
related to situations where the penalty term will dominate the optimisation. In
this paper, we explore the consistency of $\ell_1$-based methods for a class of
sparse latent variable-like models, which are strongly motivated by several
types of applications. We show that all $\ell_1$-based methods fail
dramatically for models with nearly linear dependencies between the variables.
We also study the consistency on models derived from real gene expression data
and note that the assumptions needed for consistency never hold even for modest
sized gene networks and $\ell_1$-based methods also become unreliable in
practice for larger networks.
| Otte Hein\"avaara, Janne Lepp\"a-aho, Jukka Corander and Antti Honkela | null | 1603.02532 | null | null |
Batched Lazy Decision Trees | cs.LG | We introduce a batched lazy algorithm for supervised classification using
decision trees. It avoids unnecessary visits to irrelevant nodes when it is
used to make predictions with either eagerly or lazily trained decision trees.
A set of experiments demonstrate that the proposed algorithm can outperform
both the conventional and lazy decision tree algorithms in terms of computation
time as well as memory consumption, without compromising accuracy.
| Mathieu Guillame-Bert and Artur Dubrawski | null | 1603.02578 | null | null |
Prediction of Infinite Words with Automata | cs.FL cs.LG | In the classic problem of sequence prediction, a predictor receives a
sequence of values from an emitter and tries to guess the next value before it
appears. The predictor masters the emitter if there is a point after which all
of the predictor's guesses are correct. In this paper we consider the case in
which the predictor is an automaton and the emitted values are drawn from a
finite set; i.e., the emitted sequence is an infinite word. We examine the
predictive capabilities of finite automata, pushdown automata, stack automata
(a generalization of pushdown automata), and multihead finite automata. We
relate our predicting automata to purely periodic words, ultimately periodic
words, and multilinear words, describing novel prediction algorithms for
mastering these sequences.
| Tim Smith | null | 1603.02597 | null | null |
UTA-poly and UTA-splines: additive value functions with polynomial
marginals | math.OC cs.AI cs.LG | Additive utility function models are widely used in multiple criteria
decision analysis. In such models, a numerical value is associated to each
alternative involved in the decision problem. It is computed by aggregating the
scores of the alternative on the different criteria of the decision problem.
The score of an alternative is determined by a marginal value function that
evolves monotonically as a function of the performance of the alternative on
this criterion. Determining the shape of the marginals is not easy for a
decision maker. It is easier for him/her to make statements such as
"alternative $a$ is preferred to $b$". In order to help the decision maker, UTA
disaggregation procedures use linear programming to approximate the marginals
by piecewise linear functions based only on such statements. In this paper, we
propose to infer polynomials and splines instead of piecewise linear functions
for the marginals. To this end, we use semidefinite programming instead of
linear programming. We illustrate this new elicitation method and present some
experimental results.
| Olivier Sobrie and Nicolas Gillis and Vincent Mousseau and Marc Pirlot | 10.1016/j.ejor.2017.03.021 | 1603.02626 | null | null |
DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range
Data | cs.RO cs.CV cs.LG cs.NE | We introduce the DROW detector, a deep learning based detector for 2D range
data. Laser scanners are lighting invariant, provide accurate range data, and
typically cover a large field of view, making them interesting sensors for
robotics applications. So far, research on detection in laser range data has
been dominated by hand-crafted features and boosted classifiers, potentially
losing performance due to suboptimal design choices. We propose a Convolutional
Neural Network (CNN) based detector for this task. We show how to effectively
apply CNNs for detection in 2D range data, and propose a depth preprocessing
step and voting scheme that significantly improve CNN performance. We
demonstrate our approach on wheelchairs and walkers, obtaining state of the art
detection results. Apart from the training data, none of our design choices
limits the detector to these two classes, though. We provide a ROS node for our
detector and release our dataset containing 464k laser scans, out of which 24k
were annotated.
| Lucas Beyer and Alexander Hermans and Bastian Leibe | null | 1603.02636 | null | null |
Small ensembles of kriging models for optimization | math.OC cs.LG stat.ML | The Efficient Global Optimization (EGO) algorithm uses a conditional
Gaussian Process (GP) to approximate an objective function known at a finite
number of observation points and sequentially adds new points which maximize
the Expected Improvement criterion according to the GP. The important factor
that controls the efficiency of EGO is the GP covariance function (or kernel)
which should be chosen according to the objective function. Traditionally, a
parameterized family of covariance functions is considered whose parameters
are learned through statistical procedures such as maximum likelihood or
cross-validation. However, it may be questioned whether statistical procedures
for learning covariance functions are the most efficient for optimization as
they target a global agreement between the GP and the observations which is not
the ultimate goal of optimization. Furthermore, statistical learning procedures
are computationally expensive. The main alternative to the statistical learning
of the GP is self-adaptation, where the algorithm tunes the kernel parameters
based on their contribution to objective function improvement. After
questioning the possibility of self-adaptation for kriging based optimizers,
this paper proposes a novel approach for tuning the length-scale of the GP in
EGO: At each iteration, a small ensemble of kriging models structured by their
length-scales is created. All of the models contribute to an iterate in an
EGO-like fashion. Then, the set of models is densified around the model whose
length-scale yielded the best iterate and further points are produced.
Numerical experiments are provided which motivate the use of many
length-scales. The tested implementation does not perform better than the
classical EGO algorithm in a sequential context but shows the potential of the
approach for parallel implementations.
| Hossein Mohammadi, Rodolphe Le Riche, Eric Touboul | null | 1603.02638 | null | null |
Online but Accurate Inference for Latent Variable Models with Local
Gibbs Sampling | cs.LG stat.ML | We study parameter inference in large-scale latent variable models. We first
propose a unified treatment of online inference for latent variable models
from a non-canonical exponential family, and draw explicit links between
several previously proposed frequentist or Bayesian methods. We then propose a
novel inference method for the frequentist estimation of parameters, that
adapts MCMC methods to online inference of latent variable models with the
proper use of local Gibbs sampling. Then, for latent Dirichlet allocation, we
provide an extensive set of experiments and comparisons with existing work,
where our new approach outperforms all previously proposed methods. In
particular, using Gibbs sampling for latent variable inference is superior to
variational inference in terms of test log-likelihoods. Moreover, Bayesian
inference through variational methods perform poorly, sometimes leading to
worse fits with latent variables of higher dimensionality.
| Christophe Dupuy (SIERRA), Francis Bach (LIENS, SIERRA) | null | 1603.02644 | null | null |
Rank Aggregation for Course Sequence Discovery | cs.LG | In this work, we adapt the rank aggregation framework for the discovery of
optimal course sequences at the university level. Each student provides a
partial ranking of the courses taken throughout his or her undergraduate
career. We compute pairwise rank comparisons between courses based on the order
students typically take them, aggregate the results over the entire student
population, and then obtain a proxy for the rank offset between pairs of
courses. We extract a global ranking of the courses via several state-of-the
art algorithms for ranking with pairwise noisy information, including
SerialRank, Rank Centrality, and the recent SyncRank based on the group
synchronization problem. We test this application of rank aggregation on 15
years of student data from the Department of Mathematics at the University of
California, Los Angeles (UCLA). Furthermore, we experiment with the above
approach on different subsets of the student population conditioned on final
GPA, and highlight several differences in the obtained rankings that uncover
hidden pre-requisites in the Mathematics curriculum.
| Mihai Cucuringu, Charlie Marshak, Dillon Montag, and Puck Rombach | null | 1603.02695 | null | null |
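A small numpy sketch of the aggregation step described above: partial per-student course orderings are turned into a pairwise offset matrix (how often course i precedes course j), and a simple score — here the row sums of the skew-symmetric offset matrix, a Borda-like count rather than SerialRank, Rank Centrality, or SyncRank — yields a global ordering. Course names and student data are synthetic.

```python
import numpy as np

# Each student provides a partial ordering of the courses taken (earliest first).
courses = ["Calc1", "Calc2", "LinAlg", "RealAnalysis", "Topology"]
students = [
    ["Calc1", "Calc2", "LinAlg"],
    ["Calc1", "LinAlg", "RealAnalysis"],
    ["Calc1", "Calc2", "RealAnalysis", "Topology"],
    ["LinAlg", "RealAnalysis", "Topology"],
]

n = len(courses)
index = {c: i for i, c in enumerate(courses)}
C = np.zeros((n, n))                       # C[i, j]: times course i preceded course j
for order in students:
    for a in range(len(order)):
        for b in range(a + 1, len(order)):
            C[index[order[a]], index[order[b]]] += 1

offset = C - C.T                           # skew-symmetric pairwise offsets
score = offset.sum(axis=1)                 # Borda-like aggregate score
for course, s in sorted(zip(courses, score), key=lambda t: -t[1]):
    print(course, s)
```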
Best-of-K Bandits | cs.LG stat.ML | This paper studies the Best-of-K Bandit game: At each time the player chooses
a subset S among all N-choose-K possible options and observes reward max(X(i) :
i in S) where X is a random vector drawn from a joint distribution. The
objective is to identify the subset that achieves the highest expected reward
with high probability using as few queries as possible. We present
distribution-dependent lower bounds based on a particular construction which
force a learner to consider all N-choose-K subsets, and match naive extensions
of known upper bounds in the bandit setting obtained by treating each subset as
a separate arm. Nevertheless, we present evidence that exhaustive search may be
avoided for certain, favorable distributions because the influence of
high-order correlations may be dominated by lower-order statistics.
Finally, we present an algorithm and analysis for independent arms, which
mitigates the surprising non-trivial information occlusion that occurs due to
only observing the max in the subset. This may inform strategies for more
general dependent measures, and we complement these results with independent-arm
lower bounds.
| Max Simchowitz, Kevin Jamieson, Benjamin Recht | null | 1603.02752 | null | null |
XGBoost: A Scalable Tree Boosting System | cs.LG | Tree boosting is a highly effective and widely used machine learning method.
In this paper, we describe a scalable end-to-end tree boosting system called
XGBoost, which is used widely by data scientists to achieve state-of-the-art
results on many machine learning challenges. We propose a novel sparsity-aware
algorithm for sparse data and weighted quantile sketch for approximate tree
learning. More importantly, we provide insights on cache access patterns, data
compression and sharding to build a scalable tree boosting system. By combining
these insights, XGBoost scales beyond billions of examples using far fewer
resources than existing systems.
| Tianqi Chen and Carlos Guestrin | 10.1145/2939672.2939785 | 1603.02754 | null | null |
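A minimal usage sketch with the XGBoost Python package; the dataset, parameter values, and the choice of the approximate tree method are illustrative assumptions, not settings taken from the paper above.

```python
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X, y = rng.randn(1000, 20), rng.randint(0, 2, 1000)

dtrain = xgb.DMatrix(X[:800], label=y[:800])
dvalid = xgb.DMatrix(X[800:], label=y[800:])

params = {
    "objective": "binary:logistic",
    "max_depth": 4,
    "eta": 0.1,              # learning rate
    "tree_method": "approx", # quantile-sketch-based approximate tree learning
}
bst = xgb.train(params, dtrain, num_boost_round=100,
                evals=[(dvalid, "valid")], early_stopping_rounds=10)
pred = bst.predict(dvalid)
print(pred[:5])
```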
megaman: Manifold Learning with Millions of points | cs.LG cs.CG stat.ML | Manifold Learning is a class of algorithms seeking a low-dimensional
non-linear representation of high-dimensional data. Thus manifold learning
algorithms are, at least in theory, most applicable to high-dimensional data
with sample sizes large enough to enable accurate estimation of the manifold. Despite this,
most existing manifold learning implementations are not particularly scalable.
Here we present a Python package that implements a variety of manifold learning
algorithms in a modular and scalable fashion, using fast approximate neighbors
searches and fast sparse eigendecompositions. The package incorporates
theoretical advances in manifold learning, such as the unbiased Laplacian
estimator and the estimation of the embedding distortion by the Riemannian
metric method. In benchmarks, even on a single-core desktop computer, our code
embeds millions of data points in minutes, and takes just 200 minutes to embed
the main sample of galaxy spectra from the Sloan Digital Sky Survey ---
consisting of 0.6 million samples in 3750-dimensions --- a task which has not
previously been possible.
| James McQueen and Marina Meila and Jacob VanderPlas and Zhongyue Zhang | null | 1603.02763 | null | null |
Optimized Kernel Entropy Components | stat.ML cs.LG | This work addresses two main issues of the standard Kernel Entropy Component
Analysis (KECA) algorithm: the optimization of the kernel decomposition and the
optimization of the Gaussian kernel parameter. KECA roughly reduces to a
sorting of the importance of kernel eigenvectors by entropy instead of by
variance as in Kernel Principal Components Analysis. In this work, we propose
an extension of the KECA method, named Optimized KECA (OKECA), that directly
extracts the optimal features retaining most of the data entropy by means of
compacting the information in very few features (often in just one or two). The
proposed method produces features which have higher expressive power. In
particular, it is based on the Independent Component Analysis (ICA) framework,
and introduces an extra rotation to the eigen-decomposition, which is optimized
via gradient ascent search. This maximum entropy preservation suggests that
OKECA features are more efficient than KECA features for density estimation. In
addition, a critical issue in both methods is the selection of the kernel
parameter since it critically affects the resulting performance. Here we
analyze the most common kernel length-scale selection criteria. Results of both
methods are illustrated in different synthetic and real problems. Results show
that 1) OKECA returns projections with more expressive power than KECA, 2) the
most successful rule for estimating the kernel parameter is based on maximum
likelihood, and 3) OKECA is more robust to the selection of the length-scale
parameter in kernel density estimation.
| Emma Izquierdo-Verdiguier, Valero Laparra, Robert Jenssen, Luis
G\'omez-Chova, Gustau Camps-Valls | 10.1109/TNNLS.2016.2530403. | 1603.02806 | null | null |
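A small numpy sketch of the baseline KECA step that OKECA builds on: an RBF kernel matrix is eigendecomposed and the components are ranked by their contribution to the Renyi quadratic entropy estimate, lambda_i (1^T e_i)^2, rather than by variance. The ICA-style rotation of OKECA itself is not shown, and the kernel width is an arbitrary illustrative value.

```python
import numpy as np
from scipy.spatial.distance import cdist

def keca_features(X, n_components=2, sigma=1.0):
    """Kernel Entropy Component Analysis projection (sketch)."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma ** 2))
    eigvals, eigvecs = np.linalg.eigh(K)              # ascending order
    # Entropy contribution of each component: lambda_i * (1^T e_i)^2.
    contrib = eigvals * (eigvecs.sum(axis=0) ** 2)
    idx = np.argsort(contrib)[::-1][:n_components]
    # Project the training data onto the selected components.
    return eigvecs[:, idx] * np.sqrt(np.clip(eigvals[idx], 0.0, None))

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4.0])
Z = keca_features(X, n_components=2, sigma=2.0)
print(Z.shape)
```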
Faster learning of deep stacked autoencoders on multi-core systems using
synchronized layer-wise pre-training | cs.LG | Deep neural networks are capable of modelling highly non-linear functions by
capturing different levels of abstraction of data hierarchically. While
training deep networks, first the system is initialized near a good optimum by
greedy layer-wise unsupervised pre-training. However, with burgeoning data and
increasing dimensions of the architecture, the time complexity of this approach
becomes enormous. Also, greedy pre-training of the layers often proves
detrimental by over-training a layer, causing it to lose harmony with the rest
of the network. In this paper a synchronized parallel algorithm for
pre-training deep networks on multi-core machines has been proposed. Different
layers are trained by parallel threads running on different cores with regular
synchronization. Thus the pre-training process becomes faster and chances of
over-training are reduced. This is experimentally validated using a stacked
autoencoder for dimensionality reduction of MNIST handwritten digit database.
The proposed algorithm achieved 26\% speed-up compared to greedy layer-wise
pre-training for achieving the same reconstruction accuracy, substantiating its
potential as an alternative.
| Anirban Santara, Debapriya Maji, DP Tejas, Pabitra Mitra and Arobinda
Gupta | null | 1603.02836 | null | null |
Starting Small -- Learning with Adaptive Sample Sizes | cs.LG | For many machine learning problems, data is abundant and it may be
prohibitive to make multiple passes through the full training set. In this
context, we investigate strategies for dynamically increasing the effective
sample size, when using iterative methods such as stochastic gradient descent.
Our interest is motivated by the rise of variance-reduced methods, which
achieve linear convergence rates that scale favorably for smaller sample sizes.
Exploiting this feature, we show -- theoretically and empirically -- how to
obtain significant speed-ups with a novel algorithm that reaches statistical
accuracy on an $n$-sample in $2n$, instead of $n \log n$ steps.
| Hadi Daneshmand, Aurelien Lucchi, Thomas Hofmann | null | 1603.02839 | null | null |
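A toy numpy sketch of the dynamic-sample-size idea, not the authors' variance-reduced algorithm: plain SGD for least squares run on a working subset whose size doubles after each pass, so early passes are cheap while the final pass sees the full sample. Step size and schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.RandomState(0)
n, d = 10000, 20
X = rng.randn(n, d)
w_true = rng.randn(d)
y = X @ w_true + 0.1 * rng.randn(n)

w = np.zeros(d)
m, step = 100, 0.01                       # initial working-set size, step size
while True:
    idx = rng.permutation(n)[:m]          # current working subset
    for i in idx:                         # one SGD pass over the subset
        grad = (X[i] @ w - y[i]) * X[i]
        w -= step * grad
    if m == n:
        break
    m = min(2 * m, n)                     # double the effective sample size
print(np.linalg.norm(w - w_true))
```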
Low-rank passthrough neural networks | cs.LG cs.NE | Various common deep learning architectures, such as LSTMs, GRUs, Resnets and
Highway Networks, employ state passthrough connections that support training
with high feed-forward depth or recurrence over many time steps. These
"Passthrough Networks" architectures also enable the decoupling of the network
state size from the number of parameters of the network, a possibility that has
been studied by \newcite{Sak2014} with their low-rank parametrization of the LSTM.
In this work we extend this line of research, proposing effective, low-rank and
low-rank plus diagonal matrix parametrizations for Passthrough Networks which
exploit this decoupling property, reducing the data complexity and memory
requirements of the network while preserving its memory capacity. This is
particularly beneficial in low-resource settings as it supports expressive
models with a compact parametrization less susceptible to overfitting. We
present competitive experimental results on several tasks, including language
modeling and a near state of the art result on sequential randomly-permuted
MNIST classification, a hard task on natural data.
| Antonio Valerio Miceli Barone | null | 1603.03116 | null | null |
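An illustrative numpy sketch of the low-rank plus diagonal parametrization discussed above, applied to a single state-transition matrix: instead of a dense d x d matrix, the transition uses W = diag(D) + U V^T with rank r much smaller than d, cutting the parameter count from d^2 to d + 2dr. The LSTM/GRU gating machinery around it is omitted, and the sizes are arbitrary.

```python
import numpy as np

d, r = 512, 16                           # state size and rank (illustrative values)

D = np.random.randn(d) * 0.1             # diagonal part: d parameters
U = np.random.randn(d, r) / np.sqrt(d)   # low-rank factors: 2 * d * r parameters
V = np.random.randn(d, r) / np.sqrt(d)

def lowrank_plus_diag_matvec(h):
    """Apply (diag(D) + U V^T) to a state vector without forming the dense matrix."""
    return D * h + U @ (V.T @ h)

h = np.random.randn(d)
print(lowrank_plus_diag_matvec(h).shape)
print("parameters:", d + 2 * d * r, "vs. dense:", d * d)
```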
Theoretical Comparisons of Positive-Unlabeled Learning against
Positive-Negative Learning | cs.LG stat.ML | In PU learning, a binary classifier is trained from positive (P) and
unlabeled (U) data without negative (N) data. Although N data is missing, it
sometimes outperforms PN learning (i.e., ordinary supervised learning).
Hitherto, neither theoretical nor experimental analysis has been given to
explain this phenomenon. In this paper, we theoretically compare PU (and NU)
learning against PN learning based on the upper bounds on estimation errors. We
find simple conditions when PU and NU learning are likely to outperform PN
learning, and we prove that, in terms of the upper bounds, either PU or NU
learning (depending on the class-prior probability and the sizes of P and N
data) given infinite U data will improve on PN learning. Our theoretical
findings well agree with the experimental results on artificial and benchmark
data even when the experimental setup does not match the theoretical
assumptions exactly.
| Gang Niu, Marthinus Christoffel du Plessis, Tomoya Sakai, Yao Ma, and
Masashi Sugiyama | null | 1603.03130 | null | null |
Scenario Submodular Cover | cs.DS cs.LG | Many problems in Machine Learning can be modeled as submodular optimization
problems. Recent work has focused on stochastic or adaptive versions of these
problems. We consider the Scenario Submodular Cover problem, which is a
counterpart to the Stochastic Submodular Cover problem studied by Golovin and
Krause. In Scenario Submodular Cover, the goal is to produce a cover with
minimum expected cost, where the expectation is with respect to an empirical
joint distribution, given as input by a weighted sample of realizations. In
contrast, in Stochastic Submodular Cover, the variables of the input
distribution are assumed to be independent, and the distribution of each
variable is given as input. Building on algorithms developed by Cicalese et al.
and Golovin and Krause for related problems, we give two approximation
algorithms for Scenario Submodular Cover over discrete distributions. The first
achieves an approximation factor of O(log Qm), where m is the size of the
sample and Q is the goal utility. The second, simpler algorithm achieves an
approximation bound of O(log QW), where Q is the goal utility and W is the sum
of the integer weights. (Both bounds assume an integer-valued utility
function.) Our results yield approximation bounds for other problems involving
non-independent distributions that are explicitly specified by their support.
| Nathaniel Grammel, Lisa Hellerstein, Devorah Kletenik, Patrick Lin | null | 1603.03158 | null | null |
Personalized Speech recognition on mobile devices | cs.CL cs.LG cs.SD | We describe a large vocabulary speech recognition system that is accurate,
has low latency, and yet has a small enough memory and computational footprint
to run faster than real-time on a Nexus 5 Android smartphone. We employ a
quantized Long Short-Term Memory (LSTM) acoustic model trained with
connectionist temporal classification (CTC) to directly predict phoneme
targets, and further reduce its memory footprint using an SVD-based compression
scheme. Additionally, we minimize our memory footprint by using a single
language model for both dictation and voice command domains, constructed using
Bayesian interpolation. Finally, in order to properly handle device-specific
information, such as proper names and other context-dependent information, we
inject vocabulary items into the decoder graph and bias the language model
on-the-fly. Our system achieves 13.5% word error rate on an open-ended
dictation task, running with a median speed that is seven times faster than
real-time.
| Ian McGraw, Rohit Prabhavalkar, Raziel Alvarez, Montse Gonzalez
Arenas, Kanishka Rao, David Rybach, Ouais Alsharif, Hasim Sak, Alexander
Gruenstein, Francoise Beaufays, Carolina Parada | null | 1603.03185 | null | null |
Pymanopt: A Python Toolbox for Optimization on Manifolds using Automatic
Differentiation | cs.MS cs.LG math.OC stat.ML | Optimization on manifolds is a class of methods for optimization of an
objective function, subject to constraints which are smooth, in the sense that
the set of points which satisfy the constraints admits the structure of a
differentiable manifold. While many optimization problems are of the described
form, technicalities of differential geometry and the laborious calculation of
derivatives pose a significant barrier for experimenting with these methods.
We introduce Pymanopt (available at https://pymanopt.github.io), a toolbox
for optimization on manifolds, implemented in Python, that---similarly to the
Manopt Matlab toolbox---implements several manifold geometries and optimization
algorithms. Moreover, we lower the barriers to users further by using automated
differentiation for calculating derivative information, saving users time and
saving them from potential calculation and implementation errors.
| James Townsend, Niklas Koep, Sebastian Weichwald | null | 1603.03236 | null | null |
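A usage sketch in the spirit of the toolbox description above: a cost function over the Stiefel manifold is handed to the toolbox, which differentiates it automatically and runs a manifold optimizer. The class and function names follow the early (circa 2016) Pymanopt interface and should be treated as assumptions, since later releases have reorganized the API.

```python
import autograd.numpy as np
from pymanopt import Problem
from pymanopt.manifolds import Stiefel
from pymanopt.solvers import SteepestDescent

# Dominant 2-dimensional invariant subspace of a symmetric matrix, found by
# minimizing -trace(X^T A X) over the Stiefel manifold of 5 x 2 orthonormal X.
A = np.random.randn(5, 5)
A = A + A.T

manifold = Stiefel(5, 2)
cost = lambda X: -np.trace(np.dot(X.T, np.dot(A, X)))  # autograd supplies gradients

problem = Problem(manifold=manifold, cost=cost)
X_opt = SteepestDescent().solve(problem)
print(X_opt.shape)
```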
An Innovative Imputation and Classification Approach for Accurate
Disease Prediction | cs.DB cs.IR cs.LG | Imputation of missing attribute values in medical datasets, as a step towards
extracting hidden knowledge from them, is an interesting and challenging
research topic. One cannot eliminate missing values in medical records: some
tests may not have been conducted because they are not cost-effective, values
may have been missed during clinical trials, or values may simply not have been
recorded, to name some of the reasons. Data mining researchers
have been proposing various approaches to find and impute missing values to
increase classification accuracies so that disease may be predicted accurately.
In this paper, we propose a novel imputation approach for imputation of missing
values and performing classification after fixing missing values. The approach
is based on clustering concept and aims at dimensionality reduction of the
records. The case study discussed shows that missing values can be fixed and
imputed efficiently by achieving dimensionality reduction. The importance of
proposed approach for classification is visible in the case study which assigns
single class label in contrary to multi-label assignment if dimensionality
reduction is not performed.
| Yelipe UshaRani, P. Sammulal | null | 1603.03281 | null | null |
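The entry above does not spell out its clustering scheme; the following is a generic sketch of cluster-based imputation (fit clusters on complete records, fill missing entries from the nearest centroid computed over the observed features), not the authors' exact algorithm. The toy data are made up.

import numpy as np
from sklearn.cluster import KMeans

# Toy medical-style records with missing values (NaN); purely illustrative.
X = np.array([
    [5.1, 120.0, 80.0],
    [4.9, np.nan, 82.0],
    [6.0, 140.0, np.nan],
    [5.8, 135.0, 88.0],
    [5.0, 118.0, 79.0],
])

complete = X[~np.isnan(X).any(axis=1)]          # records with no missing values
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(complete)

X_imputed = X.copy()
for i, row in enumerate(X):
    mask = np.isnan(row)
    if mask.any():
        # Assign the record to the nearest centroid using only observed features,
        # then copy that centroid's values into the missing positions.
        dists = [np.linalg.norm(row[~mask] - c[~mask])
                 for c in kmeans.cluster_centers_]
        X_imputed[i, mask] = kmeans.cluster_centers_[int(np.argmin(dists))][mask]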
Scalable Linear Causal Inference for Irregularly Sampled Time Series
with Long Range Dependencies | cs.LG stat.ME | Linear causal analysis is central to a wide range of important applications
spanning finance, the physical sciences, and engineering. Much of the existing
literature in linear causal analysis operates in the time domain.
Unfortunately, the direct application of time domain linear causal analysis to
many real-world time series presents three critical challenges: irregular
temporal sampling, long range dependencies, and scale. Moreover, real-world
data is often collected at irregular time intervals across vast arrays of
decentralized sensors and with long range dependencies which make naive time
domain correlation estimators spurious. In this paper we present a frequency
domain based estimation framework which naturally handles irregularly sampled
data and long range dependencies while enabling memory and communication
efficient distributed processing of time series data. By operating in the
frequency domain we eliminate the need to interpolate and help mitigate the
effects of long range dependencies. We implement and evaluate our new work-flow
in the distributed setting using Apache Spark and demonstrate on both Monte
Carlo simulations and high-frequency financial trading that we can accurately
recover causal structure at scale.
| Francois W. Belletti, Evan R. Sparks, Michael J. Franklin, Alexandre
M. Bayen, Joseph E. Gonzalez | null | 1603.03336 | null | null |
Near-Optimal Active Learning of Halfspaces via Query Synthesis in the
Noisy Setting | cs.AI cs.IT cs.LG math.IT | In this paper, we consider the problem of actively learning a linear
classifier through query synthesis where the learner can construct artificial
queries in order to estimate the true decision boundaries. This problem has
recently gained a lot of interest in automated science and adversarial reverse
engineering for which only heuristic algorithms are known. In such
applications, queries can be constructed de novo to elicit information (e.g.,
automated science) or to evade detection with minimal cost (e.g., adversarial
reverse engineering). We develop a general framework, called dimension coupling
(DC), that 1) reduces a d-dimensional learning problem to d-1 low dimensional
sub-problems, 2) solves each sub-problem efficiently, 3) appropriately
aggregates the results and outputs a linear classifier, and 4) provides a
theoretical guarantee for all possible schemes of aggregation. The proposed
method is proved resilient to noise. We show that the DC framework avoids the
curse of dimensionality: its computational complexity scales linearly with the
dimension. Moreover, we show that the query complexity of DC is near optimal
(within a constant factor of the optimum algorithm). To further support our
theoretical analysis, we compare the performance of DC with the existing work.
We observe that DC consistently outperforms the prior art in terms of query
complexity while often running orders of magnitude faster.
| Lin Chen, Hamed Hassani, Amin Karbasi | null | 1603.03515 | null | null |
Watch-n-Patch: Unsupervised Learning of Actions and Relations | cs.CV cs.LG cs.RO | There is a large variation in the activities that humans perform in their
everyday lives. We consider modeling these composite human activities which
comprise multiple basic level actions in a completely unsupervised setting.
Our model learns high-level co-occurrence and temporal relations between the
actions. We consider the video as a sequence of short-term action clips, which
contains human-words and object-words. An activity is described by a set of
action-topics and object-topics indicating which actions are present and which
objects are being interacted with. We then propose a new probabilistic model
relating the words and the topics. It allows us to model long-range action
relations that commonly exist in the composite activities, which is challenging
in previous works. We apply our model to the unsupervised action segmentation
and clustering, and to a novel application that detects forgotten actions,
which we call action patching. For evaluation, we contribute a new challenging
RGB-D activity video dataset recorded by the new Kinect v2, which contains
several human daily activities as compositions of multiple actions interacting
with different objects. Moreover, we develop a robotic system that watches
people and reminds them by applying our action patching algorithm. Our
robotic setup can be easily deployed on any assistive robot.
| Chenxia Wu, Jiemi Zhang, Ozan Sener, Bart Selman, Silvio Savarese,
Ashutosh Saxena | null | 1603.03541 | null | null |
Learning from Imbalanced Multiclass Sequential Data Streams Using
Dynamically Weighted Conditional Random Fields | cs.LG | The present study introduces a method for improving the classification
performance of imbalanced multiclass data streams from wireless body worn
sensors. Data imbalance is an inherent problem in activity recognition caused
by the irregular time distribution of activities, which are sequential and
dependent on previous movements. We use conditional random fields (CRF), a
graphical model for structured classification, to take advantage of
dependencies between activities in a sequence. However, CRFs do not consider
the negative effects of class imbalance during training. We propose a
class-wise dynamically weighted CRF (dWCRF) where weights are automatically
determined during training by maximizing the expected overall F-score. Our
results based on three case studies from a healthcare application using a
batteryless body worn sensor, demonstrate that our method, in general, improves
overall and minority class F-score when compared to other CRF based classifiers
and achieves similar or better overall and class-wise performance when compared
to SVM based classifiers under conditions of limited training data. We also
confirm the performance of our approach using an additional battery powered
body worn sensor dataset, achieving similar results in cases of high class
imbalance.
| Roberto L. Shinmoto Torres and Damith C. Ranasinghe and Qinfeng Shi
and Anton van den Hengel | null | 1603.03627 | null | null |
Efficient forward propagation of time-sequences in convolutional neural
networks using Deep Shifting | cs.LG cs.CV cs.NE | When a Convolutional Neural Network is used for on-the-fly evaluation of
continuously updating time-sequences, many redundant convolution operations are
performed. We propose the method of Deep Shifting, which remembers previously
calculated results of convolution operations in order to minimize the number of
calculations. The reduction in complexity is at least a constant and in the
best case quadratic. We demonstrate that this method does indeed save
significant computation time in a practical implementation, especially when the
network receives a large number of time-frames.
| Koen Groenland, Sander Bohte | null | 1603.03657 | null | null |
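A minimal single-channel sketch of the caching idea behind the Deep Shifting entry above: for a streaming 1-D convolution, each incoming frame requires only one new dot product, while previously computed outputs are kept rather than recomputed. This illustrates the general principle, not the authors' implementation; the kernel size and buffer length are arbitrary.

import numpy as np
from collections import deque

k = 3
kernel = np.random.randn(k)       # convolution kernel covering k consecutive frames

frames = deque(maxlen=k)          # the most recent k input frames
outputs = deque(maxlen=100)       # cached convolution outputs, never recomputed

def push_frame(x):
    """Receive one new time frame and compute only the newly available output."""
    frames.append(x)
    if len(frames) == k:
        # Only this single dot product is computed per incoming frame;
        # naive re-evaluation would redo it for every position in the window.
        outputs.append(float(np.dot(kernel, np.array(frames))))

for x in np.random.randn(20):
    push_frame(x)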
Nonstationary Distance Metric Learning | stat.ML cs.LG | Recent work in distance metric learning has focused on learning
transformations of data that best align with provided sets of pairwise
similarity and dissimilarity constraints. The learned transformations lead to
improved retrieval, classification, and clustering algorithms due to the better
adapted distance or similarity measures. Here, we introduce the problem of
learning these transformations when the underlying constraint generation
process is nonstationary. This nonstationarity can be due to changes in either
the ground-truth clustering used to generate constraints or changes to the
feature subspaces in which the class structure is apparent. We propose and
evaluate COMID-SADL, an adaptive, online approach for learning and tracking
optimal metrics as they change over time, one that is highly robust to a variety of
nonstationary behaviors in the changing metric. We demonstrate COMID-SADL on
both real and synthetic data sets and show significant performance improvements
relative to previously proposed batch and online distance metric learning
algorithms.
| Kristjan Greenewald, Stephen Kelley, Alfred Hero | null | 1603.03678 | null | null |
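For context on the metric learning entry above: learned metrics of this kind are commonly parameterized as Mahalanobis distances d_M(x, y) = sqrt((x - y)^T M (x - y)) with M = L^T L positive semidefinite. The sketch below uses a made-up L purely to show the equivalence between the matrix view and the linear-transformation view; it is not the COMID-SADL algorithm.

import numpy as np

# An illustrative "learned" linear transformation (values are made up).
L = np.array([[2.0, 0.0],
              [0.5, 1.0]])
M = L.T @ L                       # the corresponding positive semidefinite metric matrix

def mahalanobis(x, y, M):
    d = x - y
    return float(np.sqrt(d @ M @ d))

x, y = np.array([1.0, 2.0]), np.array([2.0, 0.5])
print(mahalanobis(x, y, M))            # distance under the learned metric
print(np.linalg.norm(L @ x - L @ y))   # same value via the transformation view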
Determination of the edge of criticality in echo state networks through
Fisher information maximization | physics.data-an cs.LG cs.NE | It is a widely accepted fact that the computational capability of recurrent
neural networks is maximized on the so-called "edge of criticality". Once the
network operates in this configuration, it performs efficiently on a specific
application both in terms of (i) low prediction error and (ii) high short-term
memory capacity. Since the behavior of recurrent networks is strongly
influenced by the particular input signal driving the dynamics, a universal,
application-independent method for determining the edge of criticality is still
missing. In this paper, we aim at addressing this issue by proposing a
theoretically motivated, unsupervised method based on Fisher information for
determining the edge of criticality in recurrent neural networks. It is proven
that Fisher information is maximized for (finite-size) systems operating in
such critical regions. However, Fisher information is notoriously difficult to
compute and requires either the probability density function or the conditional
dependence of the system states with respect to the model parameters. The paper
takes advantage of a recently-developed non-parametric estimator of the Fisher
information matrix and provides a method to determine the critical region of
echo state networks, a particular class of recurrent networks. The considered
control parameters, which indirectly affect the echo state network performance,
are explored to identify those configurations lying on the edge of criticality
and, as such, maximizing Fisher information and computational performance.
Experimental results on benchmarks and real-world data demonstrate the
effectiveness of the proposed method.
| Lorenzo Livi, Filippo Maria Bianchi, Cesare Alippi | 10.1109/TNNLS.2016.2644268 | 1603.03685 | null | null |
Searching for Topological Symmetry in Data Haystack | cs.LG | Finding interesting symmetrical topological structures in high-dimensional
systems is an important problem in statistical machine learning. The limited
amount of available high-dimensional data and its sensitivity to noise pose
computational challenges for finding symmetry. Our paper presents a new method
to find local symmetries in a low-dimensional 2-D grid structure which is
embedded in a high-dimensional structure. To compute the symmetry in a grid
structure, we
introduce three legal grid moves, (i) Commutation, (ii) Cyclic Permutation, and
(iii) Stabilization, applied to sets of local grid squares called grid blocks.
The three grid moves are legal transformations as they preserve the statistical
distribution of Hamming distances in each grid block. We coin the term grid
symmetry for data on the 2-D data grid: the statistical distributions of
Hamming distance are preserved after a sequence of grid moves.
We have computed and analyzed the grid symmetry of data on multivariate
Gaussian distributions and Gamma distributions with noise.
| Kallol Roy and Anh Tong and Jaesik Choi | null | 1603.03703 | null | null |
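A small sketch related to the grid symmetry entry above: it checks that permuting the rows of a binary grid block leaves the multiset of pairwise Hamming distances between rows unchanged. The specific grid moves (commutation, cyclic permutation, stabilization) are defined in the paper itself; this only illustrates the invariance property for a simple row swap on made-up data.

import numpy as np
from itertools import combinations
from collections import Counter

rng = np.random.default_rng(0)
block = rng.integers(0, 2, size=(4, 4))      # a 4x4 binary grid block

def hamming_distribution(b):
    # Multiset of pairwise Hamming distances between the rows of the block.
    return Counter(int(np.sum(r1 != r2)) for r1, r2 in combinations(b, 2))

permuted = block[[1, 0, 2, 3]]                # swap the first two rows
print(hamming_distribution(block) == hamming_distribution(permuted))   # True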