title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Continuous control with deep reinforcement learning | cs.LG stat.ML | We adapt the ideas underlying the success of Deep Q-Learning to the
continuous action domain. We present an actor-critic, model-free algorithm
based on the deterministic policy gradient that can operate over continuous
action spaces. Using the same learning algorithm, network architecture and
hyper-parameters, our algorithm robustly solves more than 20 simulated physics
tasks, including classic problems such as cartpole swing-up, dexterous
manipulation, legged locomotion and car driving. Our algorithm is able to find
policies whose performance is competitive with those found by a planning
algorithm with full access to the dynamics of the domain and its derivatives.
We further demonstrate that for many of the tasks the algorithm can learn
policies end-to-end: directly from raw pixel inputs.
| Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas
Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra | null | 1509.02971 | null | null |
Compatible Value Gradients for Reinforcement Learning of Continuous Deep
Policies | cs.LG cs.AI cs.NE stat.ML | This paper proposes GProp, a deep reinforcement learning algorithm for
continuous policies with compatible function approximation. The algorithm is
based on two innovations. Firstly, we present a temporal-difference based
method for learning the gradient of the value-function. Secondly, we present
the deviator-actor-critic (DAC) model, which comprises three neural networks
that estimate the value function, its gradient, and determine the actor's
policy respectively. We evaluate GProp on two challenging tasks: a contextual
bandit problem constructed from nonparametric regression datasets that is
designed to probe the ability of reinforcement learning algorithms to
accurately estimate gradients; and the octopus arm, a challenging reinforcement
learning benchmark. GProp is competitive with fully supervised methods on the
bandit task and achieves the best performance to date on the octopus arm.
| David Balduzzi, Muhammad Ghifary | null | 1509.03005 | null | null |
Fast low-rank estimation by projected gradient descent: General
statistical and algorithmic guarantees | math.ST cs.LG stat.ML stat.TH | Optimization problems with rank constraints arise in many applications,
including matrix regression, structured PCA, matrix completion and matrix
decomposition problems. An attractive heuristic for solving such problems is to
factorize the low-rank matrix, and to run projected gradient descent on the
nonconvex factorized optimization problem. The goal of this paper is to
provide a general theoretical framework for understanding when such methods
work well, and to characterize the nature of the resulting fixed point. We
provide a simple set of conditions under which projected gradient descent, when
given a suitable initialization, converges geometrically to a statistically
useful solution. Our results are applicable even when the initial solution is
outside any region of local convexity, and even when the problem is globally
concave. Working in a non-asymptotic framework, we show that our conditions are
satisfied for a wide range of concrete models, including matrix regression,
structured PCA, matrix completion with real and quantized observations, matrix
decomposition, and graph clustering problems. Simulation results show excellent
agreement with the theoretical predictions.
| Yudong Chen, Martin J. Wainwright | null | 1509.03025 | null | null |
Recurrent Reinforcement Learning: A Hybrid Approach | cs.LG cs.AI cs.SY | Successful applications of reinforcement learning in real-world problems
often require dealing with partially observable states. It is in general very
challenging to construct and infer hidden states as they often depend on the
agent's entire interaction history and may require substantial domain
knowledge. In this work, we investigate a deep-learning approach to learning
the representation of states in partially observable tasks, with minimal prior
knowledge of the domain. In particular, we propose a new family of hybrid
models that combines the strength of both supervised learning (SL) and
reinforcement learning (RL), trained in a joint fashion: The SL component can
be a recurrent neural network (RNN) or its long short-term memory (LSTM)
version, which is equipped with the desired property of being able to capture
long-term dependency on history, thus providing an effective way of learning
the representation of hidden states. The RL component is a deep Q-network (DQN)
that learns to optimize the control for maximizing long-term rewards. Extensive
experiments in a direct mailing campaign problem demonstrate the effectiveness
and advantages of the proposed approach, which performs the best among a set of
previous state-of-the-art methods.
| Xiujun Li, Lihong Li, Jianfeng Gao, Xiaodong He, Jianshu Chen, Li
Deng, Ji He | null | 1509.03044 | null | null |
Use it or Lose it: Selective Memory and Forgetting in a Perpetual
Learning Machine | cs.LG | In a recent article we described a new type of deep neural network - a
Perpetual Learning Machine (PLM) - which is capable of learning 'on the fly'
like a brain by existing in a state of Perpetual Stochastic Gradient Descent
(PSGD). Here, by simulating the process of practice, we demonstrate both
selective memory and selective forgetting when we introduce statistical recall
biases during PSGD. Frequently recalled memories are remembered, whilst
memories recalled rarely are forgotten. This results in a 'use it or lose it'
stimulus driven memory process that is similar to human memory.
| Andrew J.R. Simpson | null | 1509.03185 | null | null |
A new Initial Centroid finding Method based on Dissimilarity Tree for
K-means Algorithm | cs.LG | Cluster analysis is one of the primary data analysis techniques in data
mining, and K-means is one of the most commonly used partitioning clustering algorithms. In
the K-means algorithm, the resulting set of clusters depends on the choice of initial
centroids. If initial centroids that are coherent with the
arrangement of the data can be found, a better set of clusters can be obtained. This paper
proposes a method based on a Dissimilarity Tree to find better initial
centroids, as well as more accurate clusters, with less computational
time. Theoretical analysis and experimental results indicate that the proposed
method can effectively improve the accuracy of the clusters and reduce the
computational complexity of the K-means algorithm.
| Abhishek Kumar and Suresh Chandra Gupta | null | 1509.03200 | null | null |
Gibbs Sampling Strategies for Semantic Perception of Streaming Video
Data | cs.RO cs.LG | Topic modeling of streaming sensor data can be used for high level perception
of the environment by a mobile robot. In this paper we compare various Gibbs
sampling strategies for topic modeling of streaming spatiotemporal data, such
as video captured by a mobile robot. Compared to previous work on online topic
modeling, such as o-LDA and incremental LDA, we show that the proposed
technique results in lower online and final perplexity, given the realtime
constraints.
| Yogesh Girdhar and Gregory Dudek | null | 1509.03242 | null | null |
A deep matrix factorization method for learning attribute
representations | cs.CV cs.LG stat.ML | Semi-Non-negative Matrix Factorization is a technique that learns a
low-dimensional representation of a dataset that lends itself to a clustering
interpretation. It is possible that the mapping between this new representation
and our original data matrix contains rather complex hierarchical information
with implicit lower-level hidden attributes, that classical one level
clustering methodologies can not interpret. In this work we propose a novel
model, Deep Semi-NMF, that is able to learn such hidden representations that
lend themselves to an interpretation of clustering according to different,
unknown attributes of a given dataset. We also present a semi-supervised
version of the algorithm, named Deep WSF, that allows the use of (partial)
prior information for each of the known attributes of a dataset, allowing
the model to be used on datasets with mixed attribute knowledge. Finally, we
show that our models are able to learn low-dimensional representations that are
better suited not only for clustering but also for classification, outperforming
Semi-Non-negative Matrix Factorization as well as other state-of-the-art
methodologies.
| George Trigeorgis, Konstantinos Bousmalis, Stefanos Zafeiriou, Bjoern
W.Schuller | null | 1509.03248 | null | null |
Performance Bounds for Pairwise Entity Resolution | stat.ML cs.CY cs.DB cs.LG | One significant challenge to scaling entity resolution algorithms to massive
datasets is understanding how performance changes after moving beyond the realm
of small, manually labeled reference datasets. Unlike traditional machine
learning tasks, when an entity resolution algorithm performs well on small
hold-out datasets, there is no guarantee this performance holds on larger
hold-out datasets. We prove simple bounding properties between the performance
of a match function on a small validation set and the performance of a pairwise
entity resolution algorithm on arbitrarily sized datasets. Thus, our approach
enables optimization of pairwise entity resolution algorithms for large
datasets, using a small set of labeled data.
| Matt Barnes, Kyle Miller, Artur Dubrawski | null | 1509.03302 | null | null |
Hessian-free Optimization for Learning Deep Multidimensional Recurrent
Neural Networks | cs.LG cs.NE stat.ML | Multidimensional recurrent neural networks (MDRNNs) have shown a remarkable
performance in the area of speech and handwriting recognition. The performance
of an MDRNN is improved by further increasing its depth, and the difficulty of
learning the deeper network is overcome by using Hessian-free (HF)
optimization. Given that connectionist temporal classification (CTC) is
utilized as an objective of learning an MDRNN for sequence labeling, the
non-convexity of CTC poses a problem when applying HF to the network. As a
solution, a convex approximation of CTC is formulated and its relationship with
the EM algorithm and the Fisher information matrix is discussed. An MDRNN up to
a depth of 15 layers is successfully trained using HF, resulting in an improved
performance for sequence labeling.
| Minhyung Cho, Chandra Shekhar Dhir, Jaehyung Lee | null | 1509.03475 | null | null |
Hardness of Online Sleeping Combinatorial Optimization Problems | cs.LG cs.DS | We show that several online combinatorial optimization problems that admit
efficient no-regret algorithms become computationally hard in the sleeping
setting where a subset of actions becomes unavailable in each round.
Specifically, we show that the sleeping versions of these problems are at least
as hard as PAC learning DNF expressions, a long-standing open problem. We show
hardness for the sleeping versions of Online Shortest Paths, Online Minimum
Spanning Tree, Online $k$-Subsets, Online $k$-Truncated Permutations, Online
Minimum Cut, and Online Bipartite Matching. The hardness result for the
sleeping version of the Online Shortest Paths problem resolves an open problem
presented at COLT 2015 (Koolen et al., 2015).
| Satyen Kale and Chansoo Lee and D\'avid P\'al | null | 1509.03600 | null | null |
Toward better feature weighting algorithms: a focus on Relief | cs.LG | Feature weighting algorithms try to solve a problem of great importance
nowadays in machine learning: the search for a relevance measure for the
features of a given domain. This relevance is primarily used for feature
selection, as feature weighting can be seen as a generalization of it, but it is
also useful for better understanding a problem's domain or for guiding an inductor in
its learning process. The Relief family of algorithms has proven to be very
effective at this task. Some other feature weighting methods are reviewed in
order to give some context, and then the different existing extensions to the
original algorithm are explained.
One of Relief's known issues is the performance degradation of its estimates
when redundant features are present. A novel theoretical definition of
redundancy level is given in order to guide the work towards an extension of
the algorithm that is more robust against redundancy. A new extension is
presented that aims to improve the algorithm's performance. Experiments
were conducted to test this new extension against the existing ones on a set of
artificial and real datasets, and showed that in certain cases it improves the
accuracy of the weight estimates.
| Gabriel Prat Masramon and Llu\'is A. Belanche Mu\~noz | null | 1509.03755 | null | null |
Dropping Convexity for Faster Semi-definite Optimization | stat.ML cs.DS cs.IT cs.LG cs.NA math.IT math.OC | We study the minimization of a convex function $f(X)$ over the set of
$n\times n$ positive semi-definite matrices, but when the problem is recast as
$\min_U g(U) := f(UU^\top)$, with $U \in \mathbb{R}^{n \times r}$ and $r \leq
n$. We study the performance of gradient descent on $g$---which we refer to as
Factored Gradient Descent (FGD)---under standard assumptions on the original
function $f$.
We provide a rule for selecting the step size and, with this choice, show
that the local convergence rate of FGD mirrors that of standard gradient
descent on the original $f$: i.e., after $k$ steps, the error is $O(1/k)$ for
smooth $f$, and exponentially small in $k$ when $f$ is (restricted) strongly
convex. In addition, we provide a procedure to initialize FGD for (restricted)
strongly convex objectives and when one only has access to $f$ via a
first-order oracle; for several problem instances, such proper initialization
leads to global convergence guarantees.
FGD and similar procedures are widely used in practice for problems that can
be posed as matrix factorization. To the best of our knowledge, this is the
first paper to provide precise convergence rate guarantees for general convex
functions under standard convex assumptions.
| Srinadh Bhojanapalli, Anastasios Kyrillidis, Sujay Sanghavi | null | 1509.03917 | null | null |
Parametric Maxflows for Structured Sparse Learning with Convex
Relaxations of Submodular Functions | cs.LG cs.NA | The proximal problem for structured penalties obtained via convex relaxations
of submodular functions is known to be equivalent to minimizing separable
convex functions over the corresponding submodular polyhedra. In this paper, we
reveal a comprehensive class of structured penalties for which this
problem can be solved via an efficiently solvable class of parametric maxflow
optimizations. We then show that the parametric maxflow algorithm proposed by
Gallo et al. and its variants, which run, in the worst case, at the cost of
only a constant factor times a single computation of the corresponding maxflow
optimization, can be adapted to solve the proximal problems for those
penalties. Several existing structured penalties satisfy these conditions;
thus, regularized learning with these penalties is solvable quickly using the
parametric maxflow algorithm. We also investigate the empirical runtime
performance of the proposed framework.
| Yoshinobu Kawahara and Yutaro Yamaguchi | null | 1509.03946 | null | null |
Optimization of anemia treatment in hemodialysis patients via
reinforcement learning | stat.ML cs.AI cs.LG | Objective: Anemia is a frequent comorbidity in hemodialysis patients that can
be successfully treated by administering erythropoiesis-stimulating agents
(ESAs). ESAs dosing is currently based on clinical protocols that often do not
account for the high inter- and intra-individual variability in the patient's
response. As a result, the hemoglobin level of some patients oscillates around
the target range, which is associated with multiple risks and side-effects.
This work proposes a methodology based on reinforcement learning (RL) to
optimize ESA therapy.
Methods: RL is a data-driven approach for solving sequential decision-making
problems that are formulated as Markov decision processes (MDPs). Computing
optimal drug administration strategies for chronic diseases is a sequential
decision-making problem in which the goal is to find the best sequence of drug
doses. MDPs are particularly suitable for modeling these problems due to their
ability to capture the uncertainty associated with the outcome of the treatment
and the stochastic nature of the underlying process. The RL algorithm employed
in the proposed methodology is fitted Q iteration (FQI), which stands out for its
ability to make efficient use of data.
Results: The experiments reported here are based on a computational model
that describes the effect of ESAs on the hemoglobin level. The performance of
the proposed method is evaluated and compared with the well-known Q-learning
algorithm and with a standard protocol. Simulation results show that the
performance of Q-learning is substantially lower than FQI and the protocol.
Conclusion: Although prospective validation is required, promising results
demonstrate the potential of RL to become an alternative to current protocols.
| Pablo Escandell-Montero, Milena Chermisi, Jos\'e M.
Mart\'inez-Mart\'inez, Juan G\'omez-Sanchis, Carlo Barbieri, Emilio
Soria-Olivas, Flavio Mari, Joan Vila-Franc\'es, Andrea Stopper, Emanuele
Gatti and Jos\'e D. Mart\'in-Guerrero | 10.1016/j.artmed.2014.07.004 | 1509.03977 | null | null |
Fame for sale: efficient detection of fake Twitter followers | cs.SI cs.CR cs.LG | $\textit{Fake followers}$ are those Twitter accounts specifically created to
inflate the number of followers of a target account. Fake followers are
dangerous for the social platform and beyond, since they may alter concepts
like popularity and influence in the Twittersphere - hence impacting on
economy, politics, and society. In this paper, we contribute along different
dimensions. First, we review some of the most relevant existing features and
rules (proposed by Academia and Media) for anomalous Twitter accounts
detection. Second, we create a baseline dataset of verified human and fake
follower accounts. This baseline dataset is publicly available to the
scientific community. Then, we exploit the baseline dataset to train a set of
machine-learning classifiers built over the reviewed rules and features. Our
results show that most of the rules proposed by Media provide unsatisfactory
performance in revealing fake followers, while features proposed in the past by
Academia for spam detection provide good results. Building on the most
promising features, we revise the classifiers both in terms of reduction of
overfitting and cost for gathering the data needed to compute the features. The
final result is a novel $\textit{Class A}$ classifier, general enough to thwart
overfitting, lightweight thanks to the usage of the less costly features, and
still able to correctly classify more than 95% of the accounts of the original
training set. We ultimately perform an information fusion-based sensitivity
analysis, to assess the global sensitivity of each of the features employed by
the classifier. The findings reported in this paper, other than being supported
by a thorough experimental methodology and interesting on their own, also pave
the way for further investigation on the novel issue of fake Twitter followers.
| Stefano Cresci, Roberto Di Pietro, Marinella Petrocchi, Angelo
Spognardi, Maurizio Tesconi | 10.1016/j.dss.2015.09.003 | 1509.04098 | null | null |
Model Accuracy and Runtime Tradeoff in Distributed Deep Learning: A
Systematic Study | stat.ML cs.DC cs.LG cs.NE | This paper presents Rudra, a parameter server based distributed computing
framework tuned for training large-scale deep neural networks. Using variants
of the asynchronous stochastic gradient descent algorithm we study the impact
of synchronization protocol, stale gradient updates, minibatch size, learning
rates, and number of learners on runtime performance and model accuracy. We
introduce a new learning rate modulation strategy to counter the effect of
stale gradients and propose a new synchronization protocol that can effectively
bound the staleness in gradients, improve runtime performance and achieve good
model accuracy. Our empirical investigation reveals a principled approach for
distributed training of neural networks: the mini-batch size per learner should
be reduced as more learners are added to the system to preserve the model
accuracy. We validate this approach using commonly-used image classification
benchmarks: CIFAR10 and ImageNet.
| Suyog Gupta, Wei Zhang, Fei Wang | null | 1509.04210 | null | null |
Double Relief with progressive weighting function | cs.LG cs.AI | Feature weighting algorithms try to solve a problem of great importance
nowadays in machine learning: the search for a relevance measure for the
features of a given domain. This relevance is primarily used for feature
selection, as feature weighting can be seen as a generalization of it, but it is
also useful for better understanding a problem's domain or for guiding an inductor in
its learning process. The Relief family of algorithms has proven to be very
effective at this task.
In previous work, a new extension was proposed that aimed to improve the
algorithm's performance, and it was shown that in certain cases it improved the
accuracy of the weight estimates. However, it also seemed to be sensitive to some
characteristics of the data. An improvement of that previously presented
extension is presented in this work that aims to make it more robust to
problem-specific characteristics. An experimental design is proposed to test its
performance. Results of the tests show that it indeed increases the robustness
of the previously proposed extension.
| Gabriel Prat Masramon and Llu\'is A. Belanche Mu\~noz | null | 1509.04265 | null | null |
Voted Kernel Regularization | cs.LG | This paper presents an algorithm, Voted Kernel Regularization, that provides
the flexibility of using potentially very complex kernel functions such as
predictors based on much higher-degree polynomial kernels, while benefitting
from strong learning guarantees. The success of our algorithm arises from
derived bounds that suggest a new regularization penalty in terms of the
Rademacher complexities of the corresponding families of kernel maps. In a
series of experiments we demonstrate the improved performance of our algorithm
as compared to baselines. Furthermore, the algorithm enjoys several favorable
properties. The optimization problem is convex, it allows for learning with
non-PDS kernels, and the solutions are highly sparse, resulting in improved
classification speed and memory requirements.
| Corinna Cortes, Prasoon Goyal, Vitaly Kuznetsov, Mehryar Mohri | null | 1509.04340 | null | null |
Towards Making High Dimensional Distance Metric Learning Practical | cs.LG | In this work, we study distance metric learning (DML) for high dimensional
data. A typical approach for DML with high dimensional data is to perform the
dimensionality reduction first before learning the distance metric. The main
shortcoming of this approach is that it may result in a suboptimal solution due
to the subspace removed by the dimensionality reduction method. In this work,
we present a dual random projection framework for DML with high dimensional data
that explicitly addresses the limitation of dimensionality reduction for DML.
The key idea is to first project all the data points into a low dimensional
space by random projection, and compute the dual variables using the projected
vectors. It then reconstructs the distance metric in the original space using
the estimated dual variables. The proposed method, on one hand, enjoys the
light computation of random projection, and on the other hand, alleviates the
limitation of most dimensionality reduction methods. We verify both empirically
and theoretically the effectiveness of the proposed algorithm for high
dimensional DML.
| Qi Qian, Rong Jin, Lijun Zhang and Shenghuo Zhu | null | 1509.04355 | null | null |
Precise Phase Transition of Total Variation Minimization | cs.IT cs.LG math.IT math.OC stat.ML | Characterizing the phase transitions of convex optimizations in recovering
structured signals or data is of central importance in compressed sensing,
machine learning and statistics. The phase transitions of many convex
optimization signal recovery methods such as $\ell_1$ minimization and nuclear
norm minimization are well understood through recent years' research. However,
rigorously characterizing the phase transition of total variation (TV)
minimization in recovering sparse-gradient signal is still open. In this paper,
we fully characterize the phase transition curve of the TV minimization. Our
proof builds on Donoho, Johnstone and Montanari's conjectured phase transition
curve for the TV approximate message passing algorithm (AMP), together with the
linkage between the minimax Mean Square Error of a denoising problem and the
high-dimensional convex geometry for TV minimization.
| Bingwen Zhang, Weiyu Xu, Jian-Feng Cai and Lifeng Lai | null | 1509.04376 | null | null |
Exponential Family Matrix Completion under Structural Constraints | stat.ML cs.LG | We consider the matrix completion problem of recovering a structured matrix
from noisy and partial measurements. Recent works have proposed tractable
estimators with strong statistical guarantees for the case where the underlying
matrix is low--rank, and the measurements consist of a subset, either of the
exact individual entries, or of the entries perturbed by additive Gaussian
noise, which is thus implicitly suited for thin--tailed continuous data.
Arguably, common applications of matrix completion require estimators for (a)
heterogeneous data--types, such as skewed--continuous, count, binary, etc., (b)
for heterogeneous noise models (beyond Gaussian), which capture varied
uncertainty in the measurements, and (c) heterogeneous structural constraints
beyond low--rank, such as block--sparsity, or a superposition structure of
low--rank plus elementwise sparseness, among others. In this paper, we provide
a vastly unified framework for generalized matrix completion by considering a
matrix completion setting wherein the matrix entries are sampled from any
member of the rich family of exponential family distributions; and impose
general structural constraints on the underlying matrix, as captured by a
general regularizer $\mathcal{R}(.)$. We propose a simple convex regularized
$M$--estimator for the generalized framework, and provide a unified and novel
statistical analysis for this general class of estimators. We finally
corroborate our theoretical results on simulated datasets.
| Suriya Gunasekar, Pradeep Ravikumar, Joydeep Ghosh | null | 1509.04397 | null | null |
Adapting Resilient Propagation for Deep Learning | cs.NE cs.CV cs.LG stat.ML | The Resilient Propagation (Rprop) algorithm has been very popular for
backpropagation training of multilayer feed-forward neural networks in various
applications. The standard Rprop however encounters difficulties in the context
of deep neural networks as typically happens with gradient-based learning
algorithms. In this paper, we propose a modification of the Rprop that combines
standard Rprop steps with a special dropout technique. We apply the method for
training Deep Neural Networks as standalone components and in ensemble
formulations. Results on the MNIST dataset show that the proposed modification
alleviates standard Rprop's problems demonstrating improved learning speed and
accuracy.
| Alan Mosca and George D. Magoulas | null | 1509.04612 | null | null |
Dynamic Poisson Factorization | cs.LG cs.IR stat.ML | Models for recommender systems use latent factors to explain the preferences
and behaviors of users with respect to a set of items (e.g., movies, books,
academic papers). Typically, the latent factors are assumed to be static and,
given these factors, the observed preferences and behaviors of users are
assumed to be generated without order. These assumptions limit the explorative
and predictive capabilities of such models, since users' interests and item
popularity may evolve over time. To address this, we propose dPF, a dynamic
matrix factorization model based on the recent Poisson factorization model for
recommendations. dPF models the time evolving latent factors with a Kalman
filter and the actions with Poisson distributions. We derive a scalable
variational inference algorithm to infer the latent factors. Finally, we
demonstrate dPF on 10 years of user click data from arXiv.org, one of the
largest repositories of scientific papers and a formidable source of information
about the behavior of scientists. Empirically we show performance improvement
over both static and, more recently proposed, dynamic recommendation models. We
also provide a thorough exploration of the inferred posteriors over the latent
variables.
| Laurent Charlin and Rajesh Ranganath and James McInerney and David M.
Blei | 10.1145/2792838.2800174 | 1509.04640 | null | null |
Forecasting Method for Grouped Time Series with the Use of k-Means
Algorithm | cs.LG | The paper focuses on a forecasting method for grouped time series that
uses cluster analysis algorithms. The $k$-means algorithm is suggested as
the basic one for clustering. The coordinates of the cluster centers are put in
correspondence with summarizing time series data, the centroids of the clusters.
A description of these time series, the centroids of the clusters, is
implemented with the use of forecasting models based on strict binary
trees and a modified clonal selection algorithm. With the help of such
forecasting models, the possibility of forming analytic dependences is shown.
It is suggested to use a common forecasting model, constructed for the
time series of the cluster centroid, to forecast the private
(individual) time series in the cluster. The promising application of the
suggested method to grouped time series forecasting is demonstrated.
| N.N. Astakhova, L.A. Demidova, E.V. Nikulchev | 10.12988/ams.2015.55391 | 1509.04705 | null | null |
On the Expressive Power of Deep Learning: A Tensor Analysis | cs.NE cs.LG cs.NA stat.ML | It has long been conjectured that hypothesis spaces suitable for data that is
compositional in nature, such as text or images, may be more efficiently
represented with deep hierarchical networks than with shallow ones. Despite the
vast empirical evidence supporting this belief, theoretical justifications to
date are limited. In particular, they do not account for the locality, sharing
and pooling constructs of convolutional networks, the most successful deep
learning architecture to date. In this work we derive a deep network
architecture based on arithmetic circuits that inherently employs locality,
sharing and pooling. An equivalence between the networks and hierarchical
tensor factorizations is established. We show that a shallow network
corresponds to CP (rank-1) decomposition, whereas a deep network corresponds to
Hierarchical Tucker decomposition. Using tools from measure theory and matrix
algebra, we prove that besides a negligible set, all functions that can be
implemented by a deep network of polynomial size, require exponential size in
order to be realized (or even approximated) by a shallow network. Since
log-space computation transforms our networks into SimNets, the result applies
directly to a deep learning architecture demonstrating promising empirical
performance. The construction and theory developed in this paper shed new light
on various practices and ideas employed by the deep learning community.
| Nadav Cohen, Or Sharir, Amnon Shashua | null | 1509.05009 | null | null |
Fast Sequence Component Analysis for Attack Detection in Synchrophasor
Networks | cs.LG cs.CR | Modern power systems have begun integrating synchrophasor technologies into
part of daily operations. Given the number of solutions offered and the
maturity rate of application development, it is not a matter of "if" but of
"when" with regard to these technologies becoming ubiquitous in
control centers around the world. While the benefits are numerous, the
functionality of operator-level applications can easily be nullified by
injection of deceptive data signals disguised as genuine measurements. Such
deceptive action is a common precursor to nefarious, often malicious activity.
A correlation coefficient characterization and machine learning methodology are
proposed to detect and identify injection of spoofed data signals. The proposed
method utilizes statistical relationships intrinsic to power system parameters,
which are quantified and presented. Several spoofing schemes have been
developed to qualitatively and quantitatively demonstrate detection
capabilities.
| Jordan Landford, Rich Meier, Richard Barella, Xinghui Zhao, Eduardo
Cotilla-Sanchez, Robert B. Bass, Scott Wallace | null | 1509.05086 | null | null |
Revealed Preference at Scale: Learning Personalized Preferences from
Assortment Choices | stat.ML cs.LG math.OC | We consider the problem of learning the preferences of a heterogeneous
population by observing choices from an assortment of products, ads, or other
offerings. Our observation model takes a form common in assortment planning
applications: each arriving customer is offered an assortment consisting of a
subset of all possible offerings; we observe only the assortment and the
customer's single choice.
In this paper we propose a mixture choice model with a natural underlying
low-dimensional structure, and show how to estimate its parameters. In our
model, the preferences of each customer or segment follow a separate parametric
choice model, but the underlying structure of these parameters over all the
models has low dimension. We show that a nuclear-norm regularized maximum
likelihood estimator can learn the preferences of all customers using a number
of observations much smaller than the number of item-customer combinations.
This result shows the potential for structural assumptions to speed up learning
and improve revenues in assortment planning and customization. We provide a
specialized factored gradient descent algorithm and study the success of the
approach empirically.
| Nathan Kallus, Madeleine Udell | null | 1509.05113 | null | null |
Fast Gaussian Process Regression for Big Data | cs.LG stat.ML | Gaussian Processes are widely used for regression tasks. A known limitation
in the application of Gaussian Processes to regression tasks is that the
computation of the solution requires performing a matrix inversion. The
solution also requires the storage of a large matrix in memory. These factors
restrict the application of Gaussian Process regression to small and moderate
size data sets. We present an algorithm that combines estimates from models
developed using subsets of the data obtained in a manner similar to the
bootstrap. The sample size is a critical parameter for this algorithm.
Guidelines for reasonable choices of algorithm parameters, based on detailed
experimental study, are provided. Various techniques have been proposed to
scale Gaussian Processes to large scale regression tasks. The most appropriate
choice depends on the problem context. The proposed method is most appropriate
for problems where an additive model works well and the response depends on a
small number of features. The minimax rate of convergence for such problems is
attractive and we can build effective models with a small subset of the data.
The Stochastic Variational Gaussian Process and the Sparse Gaussian Process are
also appropriate choices for such problems. These methods pick a subset of data
based on theoretical considerations. The proposed algorithm uses bagging and
random sampling. Results from experiments conducted as part of this study
indicate that the algorithm presented in this work can be as effective as these
methods. Model stacking can be used to combine the model developed with the
proposed method with models from other methods for large scale regression such
as Gradient Boosted Trees. This can yield performance gains.
| Sourish Das, Sasanka Roy, Rajiv Sambasivan | null | 1509.05142 | null | null |
Generalized Emphatic Temporal Difference Learning: Bias-Variance
Analysis | stat.ML cs.LG | We consider the off-policy evaluation problem in Markov decision processes
with function approximation. We propose a generalization of the recently
introduced \emph{emphatic temporal differences} (ETD) algorithm
\citep{SuttonMW15}, which encompasses the original ETD($\lambda$), as well as
several other off-policy evaluation algorithms as special cases. We call this
framework ETD($\lambda$, $\beta$), where our introduced parameter $\beta$
controls the decay rate of an importance-sampling term. We study conditions
under which the projected fixed-point equation underlying ETD($\lambda$,
$\beta$) involves a contraction operator, allowing us to present the first
asymptotic error bounds (bias) for ETD($\lambda$, $\beta$). Our results
show that the original ETD algorithm always involves a contraction operator,
and its bias is bounded. Moreover, by controlling $\beta$, our proposed
generalization allows trading-off bias for variance reduction, thereby
achieving a lower total error.
| Assaf Hallak, Aviv Tamar, Remi Munos, Shie Mannor | null | 1509.05172 | null | null |
Taming the ReLU with Parallel Dither in a Deep Neural Network | cs.LG | Rectified Linear Units (ReLU) seem to have displaced traditional 'smooth'
nonlinearities as activation-function-du-jour in many - but not all - deep
neural network (DNN) applications. However, nobody seems to know why. In this
article, we argue that ReLU are useful because they are ideal demodulators -
this helps them perform fast abstract learning. However, this fast learning
comes at the expense of serious nonlinear distortion products - decoy features.
We show that Parallel Dither acts to suppress the decoy features, preventing
overfitting and leaving the true features cleanly demodulated for rapid,
reliable learning.
| Andrew J.R. Simpson | null | 1509.05173 | null | null |
(Blue) Taxi Destination and Trip Time Prediction from Partial
Trajectories | stat.ML cs.AI cs.CY cs.LG | Real-time estimation of destination and travel time for taxis is of great
importance for existing electronic dispatch systems. We present an approach
based on trip matching and ensemble learning, in which we leverage the patterns
observed in a dataset of roughly 1.7 million taxi journeys to predict the
corresponding final destination and travel time for ongoing taxi trips, as a
solution for the ECML/PKDD Discovery Challenge 2015 competition. The results of
our empirical evaluation show that our approach is effective and very robust,
which led our team -- BlueTaxi -- to the 3rd and 7th position of the final
rankings for the trip time and destination prediction tasks, respectively.
Given the fact that the final rankings were computed using a very small test
set (with only 320 trips) we believe that our approach is one of the most
robust solutions for the challenge based on the consistency of our good results
across the test sets.
| Hoang Thanh Lam and Ernesto Diaz-Aviles and Alessandra Pascale and
Yiannis Gkoufas and Bei Chen | null | 1509.05257 | null | null |
DeXpression: Deep Convolutional Neural Network for Expression
Recognition | cs.CV cs.LG | We propose a convolutional neural network (CNN) architecture for facial
expression recognition. The proposed architecture is independent of any
hand-crafted feature extraction and performs better than the earlier proposed
convolutional neural network based approaches. We visualize the automatically
extracted features which have been learned by the network in order to provide a
better understanding. The standard datasets, i.e., the Extended Cohn-Kanade
(CKP) and MMI Facial Expression Database, are used for the quantitative
evaluation. On
the CKP set the current state of the art approach, using CNNs, achieves an
accuracy of 99.2%. For the MMI dataset, currently the best accuracy for emotion
recognition is 93.33%. The proposed architecture achieves 99.6% for CKP and
98.63% for MMI, therefore performing better than the state of the art using
CNNs. Automatic facial expression recognition has a broad spectrum of
applications such as human-computer interaction and safety systems. This is due
to the fact that non-verbal cues are important forms of communication and play
a pivotal role in interpersonal communication. The performance of the proposed
architecture endorses the efficacy and reliable usage of the proposed work for
real world applications.
| Peter Burkert, Felix Trier, Muhammad Zeshan Afzal, Andreas Dengel,
Marcus Liwicki | null | 1509.05371 | null | null |
Learning to Hash for Indexing Big Data - A Survey | cs.LG | The explosive growth in big data has attracted much attention in designing
efficient indexing and search methods recently. In many critical applications
such as large-scale search and pattern matching, finding the nearest neighbors
to a query is a fundamental research problem. However, the straightforward
solution using exhaustive comparison is infeasible due to the prohibitive
computational complexity and memory requirement. In response, Approximate
Nearest Neighbor (ANN) search based on hashing techniques has become popular
due to its promising performance in both efficiency and accuracy. Prior
randomized hashing methods, e.g., Locality-Sensitive Hashing (LSH), explore
data-independent hash functions with random projections or permutations.
Although having elegant theoretic guarantees on the search quality in certain
metric spaces, performance of randomized hashing has been shown insufficient in
many real-world applications. As a remedy, new approaches incorporating
data-driven learning methods in development of advanced hash functions have
emerged. Such learning to hash methods exploit information such as data
distributions or class labels when optimizing the hash codes or functions.
Importantly, the learned hash codes are able to preserve the proximity of
neighboring data in the original feature spaces in the hash code spaces. The
goal of this paper is to provide readers with systematic understanding of
insights, pros and cons of the emerging techniques. We provide a comprehensive
survey of the learning to hash framework and representative techniques of
various types, including unsupervised, semi-supervised, and supervised. In
addition, we also summarize recent hashing approaches utilizing the deep
learning models. Finally, we discuss the future direction and trends of
research in this area.
| Jun Wang, Wei Liu, Sanjiv Kumar, Shih-Fu Chang | null | 1509.05472 | null | null |
Algorithmic statistics, prediction and machine learning | cs.LG cs.IT math.IT | Algorithmic statistics considers the following problem: given a binary string
$x$ (e.g., some experimental data), find a "good" explanation of this data. It
uses algorithmic information theory to define formally what is a good
explanation. In this paper we extend this framework in two directions.
First, the explanations are not only interesting in themselves but also used
for prediction: we want to know what kind of data we may reasonably expect in
similar situations (repeating the same experiment). We show that some kind of
hierarchy can be constructed both in terms of algorithmic statistics and using
the notion of a priori probability, and these two approaches turn out to be
equivalent.
Second, a more realistic approach, which goes back to machine learning theory,
assumes that we are given not a single data string $x$ but a set of "positive
examples" $x_1,\ldots,x_l$ that all belong to some unknown set $A$, a property
that we want to learn. We want this set $A$ to contain all positive examples
and to be as small and simple as possible. We show how algorithmic statistics
can be extended to cover this situation.
| Alexey Milovanov | null | 1509.05473 | null | null |
Fast and Simple PCA via Convex Optimization | math.OC cs.LG cs.NA math.NA | The problem of principal component analysis (PCA) is traditionally solved by
spectral or algebraic methods. We show how computing the leading principal
component could be reduced to solving a \textit{small} number of
well-conditioned {\it convex} optimization problems. This gives rise to a new
efficient method for PCA based on recent advances in stochastic methods for
convex optimization.
In particular we show that given a $d\times d$ matrix $X =
\frac{1}{n}\sum_{i=1}^n x_i x_i^{\top}$ with top eigenvector $u$ and top
eigenvalue $\lambda_1$ it is possible to: (i) compute a unit vector $w$ such
that $(w^{\top}u)^2 \geq 1-\epsilon$ in
$\tilde{O}\left(\frac{d}{\delta^2}+N\right)$ time, where $\delta = \lambda_1 -
\lambda_2$ and $N$ is the total number of non-zero entries in
$x_1,\ldots,x_n$; (ii) compute a unit vector $w$ such that $w^{\top}Xw \geq
\lambda_1-\epsilon$ in $\tilde{O}(d/\epsilon^2)$ time. To the
best of our knowledge, these bounds are the fastest to date for a wide regime
of parameters. These results could be further accelerated when $\delta$ (in the
first case) and $\epsilon$ (in the second case) are smaller than $\sqrt{d/N}$.
| Dan Garber and Elad Hazan | null | 1509.05647 | null | null |
Accelerating Optimization via Adaptive Prediction | stat.ML cs.LG | We present a powerful general framework for designing data-dependent
optimization algorithms, building upon and unifying recent techniques in
adaptive regularization, optimistic gradient predictions, and problem-dependent
randomization. We first present a series of new regret guarantees that hold at
any time and under very minimal assumptions, and then show how different
relaxations recover existing algorithms, both basic as well as more recent
sophisticated ones. Finally, we show how combining adaptivity, optimism, and
problem-dependent randomization can guide the design of algorithms that benefit
from more favorable guarantees than recent state-of-the-art methods.
| Mehryar Mohri, Scott Yang | null | 1509.05760 | null | null |
"Oddball SGD": Novelty Driven Stochastic Gradient Descent for Training
Deep Neural Networks | cs.LG | Stochastic Gradient Descent (SGD) is arguably the most popular of the machine
learning methods applied to training deep neural networks (DNN) today. It has
recently been demonstrated that SGD can be statistically biased so that certain
elements of the training set are learned more rapidly than others. In this
article, we place SGD into a feedback loop whereby the probability of selection
is proportional to error magnitude. This provides a novelty-driven oddball SGD
process that learns more rapidly than traditional SGD by prioritising those
elements of the training set with the largest novelty (error). In our DNN
example, oddball SGD trains some 50x faster than regular SGD.
| Andrew J.R. Simpson | null | 1509.05765 | null | null |
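The selection rule described in the oddball SGD abstract can be sketched as follows. This is a minimal illustration of sampling training examples with probability proportional to their current error magnitude; the fixed error table is an assumption for the demo, not the paper's DNN setup.

```python
import random

# Sketch of novelty-driven ("oddball") example selection: the probability of
# drawing a training example is proportional to its current error magnitude.
def novelty_driven_index(errors, rng):
    """Sample an index with probability proportional to its error."""
    total = sum(errors)
    r = rng.random() * total
    acc = 0.0
    for i, e in enumerate(errors):
        acc += e
        if r <= acc:
            return i
    return len(errors) - 1

rng = random.Random(0)
errors = [0.1, 0.1, 0.1, 10.0]  # one high-novelty ("oddball") example
counts = [0, 0, 0, 0]
for _ in range(1000):
    counts[novelty_driven_index(errors, rng)] += 1
# The high-error example dominates the draws (expected share about 97%),
# so in a training loop it would be revisited far more often.
```

In the paper's setting the error table would be refreshed from the network's per-example loss after each update, closing the feedback loop.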
BLC: Private Matrix Factorization Recommenders via Automatic Group
Learning | cs.LG stat.ML | We propose a privacy-enhanced matrix factorization recommender that exploits
the fact that users can often be grouped together by interest. This allows a
form of "hiding in the crowd" privacy. We introduce a novel matrix
factorization approach suited to making recommendations in a shared group (or
nym) setting and the BLC algorithm for carrying out this matrix factorization
in a privacy-enhanced manner. We demonstrate that the increased privacy does
not come at the cost of reduced recommendation accuracy.
| Alessandro Checco, Giuseppe Bianchi, Doug Leith | null | 1509.05789 | null | null |
Word, graph and manifold embedding from Markov processes | cs.CL cs.LG stat.ML | Continuous vector representations of words and objects appear to carry
surprisingly rich semantic content. In this paper, we advance both the
conceptual and theoretical understanding of word embeddings in three ways.
First, we ground embeddings in semantic spaces studied in
cognitive-psychometric literature and introduce new evaluation tasks. Second,
in contrast to prior work, we take metric recovery as the key object of study,
unify existing algorithms as consistent metric recovery methods based on
co-occurrence counts from simple Markov random walks, and propose a new
recovery algorithm. Third, we generalize metric recovery to graphs and
manifolds, relating co-occurrence counts on random walks in graphs and random
processes on manifolds to the underlying metric to be recovered, thereby
reconciling manifold estimation and embedding algorithms. We compare embedding
algorithms across a range of tasks, from nonlinear dimensionality reduction to
three semantic language tasks, including analogies, sequence completion, and
classification.
| Tatsunori B. Hashimoto, David Alvarez-Melis, Tommi S. Jaakkola | null | 1509.05808 | null | null |
STDP as presynaptic activity times rate of change of postsynaptic
activity | cs.NE cs.LG q-bio.NC | We introduce a weight update formula that is expressed only in terms of
firing rates and their derivatives and that results in changes consistent with
those associated with spike-timing dependent plasticity (STDP) rules and
biological observations, even though the explicit timing of spikes is not
needed. The new rule changes a synaptic weight in proportion to the product of
the presynaptic firing rate and the temporal rate of change of activity on the
postsynaptic side. These quantities are interesting for studying a theoretical
explanation of synaptic changes from a machine learning perspective. In
particular, if neural dynamics moved neural activity towards reducing some
objective function, then this STDP rule would correspond to stochastic gradient
descent on that objective function.
| Yoshua Bengio, Thomas Mesnard, Asja Fischer, Saizheng Zhang and Yuhuai
Wu | null | 1509.05936 | null | null |
Telugu OCR Framework using Deep Learning | stat.ML cs.AI cs.CV cs.LG cs.NE | In this paper, we address the task of Optical Character Recognition(OCR) for
the Telugu script. We present an end-to-end framework that segments the text
image, classifies the characters and extracts lines using a language model. The
segmentation is based on mathematical morphology. The classification module,
which is the most challenging task of the three, is a deep convolutional neural
network. The language is modelled as a third-degree Markov chain at the glyph
level. Telugu script is a complex alphasyllabary and the language is
agglutinative, making the problem hard. In this paper we apply the latest
advances in neural networks to achieve state-of-the-art error rates. We also
review convolutional neural networks in great detail and expound the
statistical justification behind the many tricks needed to make Deep Learning
work.
| Rakesh Achanta, Trevor Hastie | null | 1509.05962 | null | null |
Denoising without access to clean data using a partitioned autoencoder | cs.NE cs.LG | Training a denoising autoencoder neural network requires access to truly
clean data, a requirement which is often impractical. To remedy this, we
introduce a method to train an autoencoder using only noisy data, having
examples with and without the signal class of interest. The autoencoder learns
a partitioned representation of signal and noise, learning to reconstruct each
separately. We illustrate the method by denoising birdsong audio (available
abundantly in uncontrolled noisy datasets) using a convolutional autoencoder.
| Dan Stowell and Richard E. Turner | null | 1509.05982 | null | null |
Robust Image Sentiment Analysis Using Progressively Trained and Domain
Transferred Deep Networks | cs.CV cs.IR cs.LG | Sentiment analysis of online user generated content is important for many
social media analytics tasks. Researchers have largely relied on textual
sentiment analysis to develop systems to predict political elections, measure
economic indicators, and so on. Recently, social media users are increasingly
using images and videos to express their opinions and share their experiences.
Sentiment analysis of such large scale visual content can help better extract
user sentiments toward events or topics, such as those in image tweets, so that
prediction of sentiment from visual content is complementary to textual
sentiment analysis. Motivated by the needs in leveraging large scale yet noisy
training data to solve the extremely challenging problem of image sentiment
analysis, we employ Convolutional Neural Networks (CNN). We first design a
suitable CNN architecture for image sentiment analysis. We obtain half a
million training samples by using a baseline sentiment algorithm to label
Flickr images. To make use of such noisy machine labeled data, we employ a
progressive strategy to fine-tune the deep network. Furthermore, we improve the
performance on Twitter images by inducing domain transfer with a small number
of manually labeled Twitter images. We have conducted extensive experiments on
manually labeled Twitter images. The results show that the proposed CNN can
achieve better performance in image sentiment analysis than competing
algorithms.
| Quanzeng You, Jiebo Luo, Hailin Jin, Jianchao Yang | null | 1509.06041 | null | null |
Significance Analysis of High-Dimensional, Low-Sample Size Partially
Labeled Data | stat.ML cs.LG stat.ME | Classification and clustering are both important topics in statistical
learning. A natural question herein is whether predefined classes are really
different from one another, or whether clusters are really there. Specifically,
we may be interested in knowing whether the two classes defined by some class
labels (when they are provided), or the two clusters tagged by a clustering
algorithm (where class labels are not provided), are from the same underlying
distribution. Although both are challenging questions for the high-dimensional,
low-sample size data, there has been some recent development for both. However,
when it is costly to manually place labels on observations, it is often that
only a small portion of the class labels is available. In this article, we
propose a significance analysis approach for this type of data, namely
partially labeled data. Our method makes use of the whole data and tries to
test the class difference as if all the labels were observed. Compared to a
testing method that ignores the label information, our method provides a
greater power while maintaining the size, as illustrated by a comprehensive
simulation study. Theoretical properties of the proposed method are studied
with emphasis on the high-dimensional, low-sample size setting. Our simulated
examples help to understand when and how the information extracted from the
labeled data can be effective. A real data example further illustrates the
usefulness of the proposed method.
| Qiyi Lu, Xingye Qiao | null | 1509.06088 | null | null |
Multilayer bootstrap network for unsupervised speaker recognition | cs.LG cs.SD | We apply multilayer bootstrap network (MBN), a recently proposed unsupervised
learning method, to unsupervised speaker recognition. The proposed method first
extracts supervectors from an unsupervised universal background model, then
reduces the dimension of the high-dimensional supervectors by multilayer
bootstrap network, and finally conducts unsupervised speaker recognition by
clustering the low-dimensional data. Comparisons with two unsupervised and one
supervised speaker recognition technique demonstrate the effectiveness and
robustness of the proposed method.
| Xiao-Lei Zhang | null | 1509.06095 | null | null |
Deep Spatial Autoencoders for Visuomotor Learning | cs.LG cs.CV cs.RO | Reinforcement learning provides a powerful and flexible framework for
automated acquisition of robotic motion skills. However, applying reinforcement
learning requires a sufficiently detailed representation of the state,
including the configuration of task-relevant objects. We present an approach
that automates state-space construction by learning a state representation
directly from camera images. Our method uses a deep spatial autoencoder to
acquire a set of feature points that describe the environment for the current
task, such as the positions of objects, and then learns a motion skill with
these feature points using an efficient reinforcement learning method based on
local linear models. The resulting controller reacts continuously to the
learned feature points, allowing the robot to dynamically manipulate objects in
the world with closed-loop control. We demonstrate our method with a PR2 robot
on tasks that include pushing a free-standing toy block, picking up a bag of
rice using a spatula, and hanging a loop of rope on a hook at various
positions. In each task, our method automatically learns to track task-relevant
objects and manipulate their configuration with the robot's arm.
| Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine,
Pieter Abbeel | null | 1509.06113 | null | null |
The Utility of Clustering in Prediction Tasks | cs.LG | We explore the utility of clustering in reducing error in various prediction
tasks. Previous work has hinted at improvements in prediction accuracy when
clustering algorithms are used to pre-process the data. In this
work we more deeply investigate the direct utility of using clustering to
improve prediction accuracy and provide explanations for why this may be so. We
look at a number of datasets, run k-means at different scales and for each
scale we train predictors. This produces k sets of predictions. These
predictions are then combined by a na\"ive ensemble. We observed that this use
of a predictor in conjunction with clustering improved the prediction accuracy
in most datasets. We believe this indicates the predictive utility of
exploiting structure in the data and the data compression handed over by
clustering. We also found that this method improves upon the predictions of
even a Random Forest predictor, which suggests that it provides a novel and
useful source of variance in the prediction process.
| Shubhendu Trivedi, Zachary A. Pardos, Neil T. Heffernan | null | 1509.06163 | null | null |
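The pipeline in the clustering-utility abstract can be sketched at a single scale k: cluster the inputs with k-means, fit one predictor per cluster, and compare against a single global predictor. The per-cluster mean predictors and the two-Gaussian toy data below are illustrative stand-ins, not the paper's datasets or predictors.

```python
import numpy as np

# Plain k-means (Lloyd's algorithm) so the sketch is self-contained.
def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Toy data: two well-separated input clusters with different response levels.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 0.3, (100, 1)), rng.normal(5, 0.3, (100, 1))])
y = np.concatenate([np.zeros(100), np.ones(100)]) + rng.normal(0, 0.05, 200)

centers, labels = kmeans(X, k=2)
# One (trivial) predictor per cluster: the cluster's mean response.
per_cluster = {j: (y[labels == j].mean() if np.any(labels == j) else y.mean())
               for j in range(2)}
pred = np.array([per_cluster[j] for j in labels])
mse_clustered = ((y - pred) ** 2).mean()
mse_global = ((y - y.mean()) ** 2).mean()  # single global predictor baseline
```

Running k-means at several scales and ensembling the resulting predictors, as the abstract describes, generalizes this single-scale comparison.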
Sports highlights generation based on acoustic events detection: A rugby
case study | cs.SD cs.AI cs.LG | We approach the challenging problem of generating highlights from sports
broadcasts utilizing audio information only. A language-independent,
multi-stage classification approach is employed for detection of key acoustic
events which then act as a platform for summarization of highlight scenes.
Objective results and human experience indicate that our system is highly
efficient.
| Anant Baijal, Jaeyoun Cho, Woojung Lee and Byeong-Seob Ko | 10.1109/ICCE.2015.7066303 | 1509.06279 | null | null |
Efficient Neighborhood Selection for Gaussian Graphical Models | stat.ML cs.IT cs.LG math.IT | This paper addresses the problem of neighborhood selection for Gaussian
graphical models. We present two heuristic algorithms: a forward-backward
greedy algorithm for general Gaussian graphical models based on mutual
information test, and a threshold-based algorithm for walk summable Gaussian
graphical models. Both algorithms are shown to be structurally consistent and
efficient. Numerical results show that both algorithms work very well.
| Yingxiang Yang, Jalal Etesami, Negar Kiyavash | null | 1509.06449 | null | null |
Harmonic Extension | cs.LG math.NA | In this paper, we consider the harmonic extension problem, which is widely
used in many applications of machine learning. We find that the traditional
graph-Laplacian method fails to produce a good approximation of the
classical harmonic function. To tackle this problem, we propose a new method
called the point integral method (PIM). We consider the harmonic extension
problem from the point of view of solving PDEs on manifolds. The basic idea of
the PIM method is to approximate the harmonicity using an integral equation,
which is easy to discretize from points. Based on the integral equation, we
explain why the traditional graph Laplacian may fail to approximate the
harmonicity in the classical sense and propose a different approach, which we
call the volume constraint method (VCM). Theoretically, both the PIM and the
VCM compute a harmonic function with convergence guarantees, and practically,
they are both simple, amounting to solving a linear system. One important
application of the harmonic extension in machine learning is semi-supervised
learning. We run a popular semi-supervised learning algorithm by Zhu et al.
over a couple of well-known datasets and compare the performance of the
aforementioned approaches. Our experiments show the PIM performs the best.
| Zuoqiang Shi and Jian Sun and Minghao Tian | null | 1509.06458 | null | null |
Deep Reinforcement Learning with Double Q-learning | cs.LG | The popular Q-learning algorithm is known to overestimate action values under
certain conditions. It was not previously known whether, in practice, such
overestimations are common, whether they harm performance, and whether they can
generally be prevented. In this paper, we answer all these questions
affirmatively. In particular, we first show that the recent DQN algorithm,
which combines Q-learning with a deep neural network, suffers from substantial
overestimations in some games in the Atari 2600 domain. We then show that the
idea behind the Double Q-learning algorithm, which was introduced in a tabular
setting, can be generalized to work with large-scale function approximation. We
propose a specific adaptation to the DQN algorithm and show that the resulting
algorithm not only reduces the observed overestimations, as hypothesized, but
that this also leads to much better performance on several games.
| Hado van Hasselt, Arthur Guez, David Silver | null | 1509.06461 | null | null |
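The tabular idea the Double Q-learning abstract generalizes can be sketched as follows: maintain two value tables, select the greedy action with one and evaluate it with the other. The toy transition (state 0 to terminal-like state 1, reward 1) and the step size are illustrative assumptions, not from the paper.

```python
import random

# One Double Q-learning update: decouple action selection from evaluation
# to remove the overestimation bias of the single-table max operator.
def double_q_update(QA, QB, s, a, r, s_next, alpha=0.5, gamma=0.9):
    if random.random() < 0.5:
        best = max(QA[s_next], key=QA[s_next].get)                     # select with A
        QA[s][a] += alpha * (r + gamma * QB[s_next][best] - QA[s][a])  # evaluate with B
    else:
        best = max(QB[s_next], key=QB[s_next].get)                     # select with B
        QB[s][a] += alpha * (r + gamma * QA[s_next][best] - QB[s][a])  # evaluate with A

random.seed(0)
actions = [0, 1]
QA = {s: {a: 0.0 for a in actions} for s in (0, 1)}
QB = {s: {a: 0.0 for a in actions} for s in (0, 1)}
for _ in range(200):
    double_q_update(QA, QB, s=0, a=0, r=1.0, s_next=1)
# State 1 is never updated here, so its values stay 0 and both tables
# converge toward the true value 1.0 for (s=0, a=0).
```

The DQN adaptation in the paper replaces the two tables with the online and target networks of DQN.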
Tensorizing Neural Networks | cs.LG cs.NE | Deep neural networks currently demonstrate state-of-the-art performance in
several domains. At the same time, models of this class are very demanding in
terms of computational resources. In particular, a large amount of memory is
required by commonly used fully-connected layers, making it hard to use the
models on low-end devices and preventing further increases in model size.
In this paper we convert the dense weight matrices of the fully-connected
layers to the Tensor Train format such that the number of parameters is reduced
by a huge factor and at the same time the expressive power of the layer is
preserved. In particular, for the Very Deep VGG networks we report the
compression factor of the dense weight matrix of a fully-connected layer up to
200000 times leading to the compression factor of the whole network up to 7
times.
| Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, Dmitry Vetrov | null | 1509.06569 | null | null |
Graph Kernels exploiting Weisfeiler-Lehman Graph Isomorphism Test
Extensions | cs.LG cs.AI | In this paper we present a novel graph kernel framework inspired by the
Weisfeiler-Lehman (WL) isomorphism tests. Any WL test comprises a relabelling
phase of the nodes based on test-specific information extracted from the graph,
for example the set of neighbours of a node. We define a novel relabelling and
derive two kernels of the framework from it. The novel kernels are very fast
to compute and achieve state-of-the-art results on five real-world datasets.
| Giovanni Da San Martino, Nicol\`o Navarin, and Alessandro Sperduti | 10.1007/978-3-319-12640-1_12 | 1509.06589 | null | null |
Reasoning about Entailment with Neural Attention | cs.CL cs.AI cs.LG cs.NE | While most approaches to automatically recognizing entailment relations have
used classifiers employing hand engineered features derived from complex
natural language processing pipelines, in practice their performance has been
only slightly better than bag-of-word pair classifiers using only lexical
similarity. The only attempt so far to build an end-to-end differentiable
neural network for entailment failed to outperform such a simple similarity
classifier. In this paper, we propose a neural model that reads two sentences
to determine entailment using long short-term memory units. We extend this
model with a word-by-word neural attention mechanism that encourages reasoning
over entailments of pairs of words and phrases. Furthermore, we present a
qualitative analysis of attention weights produced by this model, demonstrating
such reasoning capabilities. On a large entailment dataset this model
outperforms the previous best neural model and a classifier with engineered
features by a substantial margin. It is the first generic end-to-end
differentiable system that achieves state-of-the-art accuracy on a textual
entailment dataset.
| Tim Rockt\"aschel, Edward Grefenstette, Karl Moritz Hermann,
Tom\'a\v{s} Ko\v{c}isk\'y, Phil Blunsom | null | 1509.06664 | null | null |
Learning Deep Control Policies for Autonomous Aerial Vehicles with
MPC-Guided Policy Search | cs.LG cs.RO | Model predictive control (MPC) is an effective method for controlling robotic
systems, particularly autonomous aerial vehicles such as quadcopters. However,
application of MPC can be computationally demanding, and typically requires
estimating the state of the system, which can be challenging in complex,
unstructured environments. Reinforcement learning can in principle forego the
need for explicit state estimation and acquire a policy that directly maps
sensor readings to actions, but is difficult to apply to unstable systems that
are liable to fail catastrophically during training before an effective policy
has been found. We propose to combine MPC with reinforcement learning in the
framework of guided policy search, where MPC is used to generate data at
training time, under full state observations provided by an instrumented
training environment. This data is used to train a deep neural network policy,
which is allowed to access only the raw observations from the vehicle's onboard
sensors. After training, the neural network policy can successfully control the
robot without knowledge of the full state, and at a fraction of the
computational cost of MPC. We evaluate our method by learning obstacle
avoidance policies for a simulated quadrotor, using simulated onboard sensors
and no explicit state estimation at test time.
| Tianhao Zhang, Gregory Kahn, Sergey Levine, Pieter Abbeel | null | 1509.06791 | null | null |
Bandit Label Inference for Weakly Supervised Learning | cs.LG stat.ML | The scarcity of data annotated at the desired level of granularity is a
recurring issue in many applications. Significant amounts of effort have been
devoted to developing weakly supervised methods tailored to each individual
setting, which are often carefully designed to take advantage of the particular
properties of weak supervision regimes, form of available data and prior
knowledge of the task at hand. Unfortunately, it is difficult to adapt these
methods to new tasks and/or forms of data, which often require different weak
supervision regimes or models. We present a general-purpose method that can
solve any weakly supervised learning problem irrespective of the weak
supervision regime or the model. The proposed method turns any off-the-shelf
strongly supervised classifier into a weakly supervised classifier and allows
the user to specify an arbitrary weak supervision regime via a loss
function. We apply the method to several different weak supervision regimes and
demonstrate competitive results compared to methods specifically engineered for
those settings.
| Ke Li and Jitendra Malik | null | 1509.06807 | null | null |
Learning Wake-Sleep Recurrent Attention Models | cs.LG | Despite their success, convolutional neural networks are computationally
expensive because they must examine all image locations. Stochastic
attention-based models have been shown to improve computational efficiency at
test time, but they remain difficult to train because of intractable posterior
inference and high variance in the stochastic gradient estimates. Borrowing
techniques from the literature on training deep generative models, we present
the Wake-Sleep Recurrent Attention Model, a method for training stochastic
attention networks which improves posterior inference and which reduces the
variability in the stochastic gradients. We show that our method can greatly
speed up the training time for stochastic attention networks in the domains of
image classification and caption generation.
| Jimmy Ba, Roger Grosse, Ruslan Salakhutdinov, Brendan Frey | null | 1509.06812 | null | null |
Model-based Reinforcement Learning with Parametrized Physical Models and
Optimism-Driven Exploration | cs.LG cs.RO | In this paper, we present a robotic model-based reinforcement learning method
that combines ideas from model identification and model predictive control. We
use a feature-based representation of the dynamics that allows the dynamics
model to be fitted with a simple least squares procedure, and the features are
identified from a high-level specification of the robot's morphology,
consisting of the number and connectivity structure of its links. Model
predictive control is then used to choose the actions under an optimistic model
of the dynamics, which produces an efficient and goal-directed exploration
strategy. We present real time experimental results on standard benchmark
problems involving the pendulum, cartpole, and double pendulum systems.
Experiments indicate that our method is able to learn a range of benchmark
tasks substantially faster than the previous best methods. To evaluate our
approach on a realistic robotic control task, we also demonstrate real time
control of a simulated 7 degree of freedom arm.
| Christopher Xie, Sachin Patil, Teodor Moldovan, Sergey Levine, Pieter
Abbeel | null | 1509.06824 | null | null |
Supersizing Self-supervision: Learning to Grasp from 50K Tries and 700
Robot Hours | cs.LG cs.CV cs.RO | Current learning-based robot grasping approaches exploit human-labeled
datasets for training the models. However, there are two problems with such a
methodology: (a) since each object can be grasped in multiple ways, manually
labeling grasp locations is not a trivial task; (b) human labeling is biased by
semantics. While there have been attempts to train robots using trial-and-error
experiments, the amount of data used in such experiments remains substantially
low and hence makes the learner prone to over-fitting. In this paper, we take
the leap of increasing the available training data to 40 times more than prior
work, leading to a dataset size of 50K data points collected over 700 hours of
robot grasping attempts. This allows us to train a Convolutional Neural Network
(CNN) for the task of predicting grasp locations without severe overfitting. In
our formulation, we recast the regression problem to an 18-way binary
classification over image patches. We also present a multi-stage learning
approach where a CNN trained in one stage is used to collect hard negatives in
subsequent stages. Our experiments clearly show the benefit of using
large-scale datasets (and multi-stage training) for the task of grasping. We
also compare to several baselines and show state-of-the-art performance on
generalization to unseen objects for grasping.
| Lerrel Pinto and Abhinav Gupta | null | 1509.06825 | null | null |
One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation
and Neural Network Priors | cs.LG cs.RO | One of the key challenges in applying reinforcement learning to complex
robotic control tasks is the need to gather large amounts of experience in
order to find an effective policy for the task at hand. Model-based
reinforcement learning can achieve good sample efficiency, but requires the
ability to learn a model of the dynamics that is good enough to learn an
effective policy. In this work, we develop a model-based reinforcement learning
algorithm that combines prior knowledge from previous tasks with online
adaptation of the dynamics model. These two ingredients enable highly
sample-efficient learning even in regimes where estimating the true dynamics is
very difficult, since the online model adaptation allows the method to locally
compensate for unmodeled variation in the dynamics. We encode the prior
experience into a neural network dynamics model, adapt it online by
progressively refitting a local linear model of the dynamics, and use model
predictive control to plan under these dynamics. Our experimental results show
that this approach can be used to solve a variety of complex robotic
manipulation tasks in just a single attempt, using prior data from other
manipulation behaviors.
| Justin Fu, Sergey Levine, Pieter Abbeel | null | 1509.06841 | null | null |
Efficient reconstruction of transmission probabilities in a spreading
process from partial observations | physics.soc-ph cond-mat.stat-mech cs.LG cs.SI stat.ML | The problem of reconstructing a diffusion network and its transmission
probabilities from data has attracted considerable attention in the past
several years. A number of recent papers introduced efficient algorithms for
the estimation of spreading parameters, based on the maximization of the
likelihood of observed cascades, assuming that the full information for all the
nodes in the network is available. In this work, we focus on a more realistic
and restricted scenario, in which only a partial information on the cascades is
available: either the set of activation times for a limited number of nodes, or
the states of nodes for a subset of observation times. To tackle this problem,
we first introduce a framework based on the maximization of the likelihood of
the incomplete diffusion trace. However, we argue that the computation of this
incomplete likelihood is a computationally hard problem, and show that a fast
and robust reconstruction of transmission probabilities in sparse networks can
be achieved with a new algorithm based on recently introduced dynamic
message-passing equations for the spreading processes. The suggested approach
can be easily generalized to a large class of discrete and continuous dynamic
models, as well as to the cases of dynamically-changing networks and noisy
information.
| Andrey Y. Lokhov, Theodor Misiakiewicz | null | 1509.06893 | null | null |
Fast k-NN search | stat.ML cs.DS cs.LG | Efficient index structures for fast approximate nearest neighbor queries are
required in many applications such as recommendation systems. In
high-dimensional spaces, many conventional methods suffer from excessive usage
of memory and slow response times. We propose a method where multiple random
projection trees are combined by a novel voting scheme. The key idea is to
exploit the redundancy in a large number of candidate sets obtained by
independently generated random projections in order to reduce the number of
expensive exact distance evaluations. The method is straightforward to
implement using sparse projections which leads to a reduced memory footprint
and fast index construction. Furthermore, it enables grouping of the required
computations into big matrix multiplications, which leads to additional savings
due to cache effects and low-level parallelization. We demonstrate by extensive
experiments on a wide variety of data sets that the method is faster than
existing partitioning tree or hashing based approaches, making it the fastest
available technique on high accuracy levels.
| Ville Hyv\"onen, Teemu Pitk\"anen, Sotiris Tasoulis, Elias
J\"a\"asaari, Risto Tuomainen, Liang Wang, Jukka Corander, Teemu Roos | 10.1109/BigData.2016.7840682 | 1509.06957 | null | null |
A Novel Pre-processing Scheme to Improve the Prediction of Sand Fraction
from Seismic Attributes using Neural Networks | cs.CE cs.LG | This paper presents a novel pre-processing scheme to improve the prediction
of sand fraction from multiple seismic attributes such as seismic impedance,
amplitude and frequency using machine learning and information filtering. The
available well logs along with the 3-D seismic data have been used to benchmark
the proposed pre-processing stage using a methodology which primarily consists
of three steps: pre-processing, training and post-processing. An Artificial
Neural Network (ANN) with a conjugate-gradient learning algorithm has been used
to model the sand fraction. The available sand fraction data from the high
resolution well logs has far more information content than the low resolution
seismic attributes. Therefore, regularization schemes based on Fourier
Transform (FT), Wavelet Decomposition (WD) and Empirical Mode Decomposition
(EMD) have been proposed to shape the high resolution sand fraction data for
effective machine learning. The input data sets have been segregated into
training, testing and validation sets. The test results are primarily used to
check different network structures and activation function performances. Once
the network passes the testing phase with an acceptable performance in terms of
the selected evaluators, the validation phase follows. In the validation stage,
the prediction model is tested against unseen data. The network yielding
satisfactory performance in the validation stage is used to predict
lithological properties from seismic attributes throughout a given volume.
Finally, a post-processing scheme using 3-D spatial filtering is implemented
for smoothing the sand fraction in the volume. Prediction of lithological
properties using this framework is helpful for Reservoir Characterization.
| Soumi Chaki, Aurobinda Routray and William K. Mohanty | 10.1109/JSTARS.2015.2404808 | 1509.07065 | null | null |
Detecting phase transitions in collective behavior using manifold's
curvature | math.DS cs.LG cs.MA math.GT stat.ML | If a given behavior of a multi-agent system restricts the phase variable to an
invariant manifold, then we define a phase transition as a change of physical
characteristics such as speed, coordination, and structure. We define such a
phase transition as splitting an underlying manifold into two sub-manifolds
with distinct dimensionalities around the singularity where the phase
transition physically exists. Here, we propose a method of detecting phase
transitions and splitting the manifold into phase-transition-free
sub-manifolds. Therein, we utilize a relationship between curvature and
singular value ratio of points sampled in a curve, and then extend the
assertion to higher dimensions using the shape operator. Then we show that
the same phase transition can also be approximated by singular value ratios
computed locally over the data in a neighborhood on the manifold. We validate
the phase-transition detection method using a particle simulation and three
real-world examples.
| Kelum Gajamannage, Erik M. Bollt | 10.3934/mbe.2017027 | 1509.07078 | null | null |
Deep Temporal Sigmoid Belief Networks for Sequence Modeling | stat.ML cs.LG | Deep dynamic generative models are developed to learn sequential dependencies
in time-series data. The multi-layered model is designed by constructing a
hierarchy of temporal sigmoid belief networks (TSBNs), defined as a sequential
stack of sigmoid belief networks (SBNs). Each SBN has a contextual hidden
state, inherited from the previous SBNs in the sequence, and is used to
regulate its hidden bias. Scalable learning and inference algorithms are
derived by introducing a recognition model that yields fast sampling from the
variational posterior. This recognition model is trained jointly with the
generative model, by maximizing its variational lower bound on the
log-likelihood. Experimental results on bouncing balls, polyphonic music,
motion capture, and text streams show that the proposed approach achieves
state-of-the-art predictive performance, and has the capacity to synthesize
various sequences.
| Zhe Gan, Chunyuan Li, Ricardo Henao, David Carlson and Lawrence Carin | null | 1509.07087 | null | null |
A review of learning vector quantization classifiers | cs.LG astro-ph.IM cs.NE stat.ML | In this work we present a review of the state of the art of Learning Vector
Quantization (LVQ) classifiers. A taxonomy is proposed which integrates the
most relevant LVQ approaches to date. The main concepts associated with modern
LVQ approaches are defined. A comparison is made among eleven LVQ classifiers
using one real-world and two artificial datasets.
| David Nova and Pablo A. Estevez | 10.1007/s00521-013-1535-3 | 1509.07093 | null | null |
On The Direct Maximization of Quadratic Weighted Kappa | cs.LG | In recent years, quadratic weighted kappa has been growing in popularity in
the machine learning community as an evaluation metric in domains where the
target labels to be predicted are drawn from integer ratings, usually obtained
from human experts. For example, it was the metric of choice in several recent,
high-profile machine learning contests hosted on Kaggle:
https://www.kaggle.com/c/asap-aes , https://www.kaggle.com/c/asap-sas ,
https://www.kaggle.com/c/diabetic-retinopathy-detection . Yet, little is
understood about the nature of this metric, its underlying mathematical
properties, where it fits among other common evaluation metrics such as mean
squared error (MSE) and correlation, or if it can be optimized analytically,
and if so, how. Much of this is due to the cumbersome way that this metric is
commonly defined. In this paper we first derive an equivalent but much simpler,
and more useful, definition for quadratic weighted kappa, and then employ this
alternate form to address the above issues.
| David Vaughn, Derek Justice | null | 1509.07107 | null | null |
IllinoisSL: A JAVA Library for Structured Prediction | cs.LG cs.CL stat.ML | IllinoisSL is a Java library for learning structured prediction models. It
supports structured Support Vector Machines and structured Perceptron. The
library consists of a core learning module and several applications, which can
be executed from command-lines. Documentation is provided to guide users. In
comparison to other structured learning libraries, IllinoisSL is efficient,
general, and easy to use.
| Kai-Wei Chang and Shyam Upadhyay and Ming-Wei Chang and Vivek Srikumar
and Dan Roth | null | 1509.07179 | null | null |
Sparsity-based Correction of Exponential Artifacts | cs.LG | This paper describes an exponential transient excision algorithm (ETEA). In
biomedical time series analysis, e.g., in vivo neural recording and
electrocorticography (ECoG), some measurement artifacts take the form of
piecewise exponential transients. The proposed method is formulated as an
unconstrained convex optimization problem, regularized by a smoothed l1-norm
penalty function, which can be solved by the majorization-minimization (MM) method.
With a slight modification of the regularizer, ETEA can also suppress more
irregular piecewise smooth artifacts, especially ocular artifacts (OA) in
electroencephalography (EEG) data. Examples of a synthetic signal, EEG data,
and ECoG data are presented to illustrate the proposed algorithms.
| Yin Ding and Ivan W. Selesnick | 10.1016/j.sigpro.2015.09.017 | 1509.07234 | null | null |
Provable approximation properties for deep neural networks | stat.ML cs.LG cs.NE | We discuss approximation of functions using deep neural nets. Given a
function $f$ on a $d$-dimensional manifold $\Gamma \subset \mathbb{R}^m$, we
construct a sparsely-connected depth-4 neural network and bound its error in
approximating $f$. The size of the network depends on dimension and curvature
of the manifold $\Gamma$, the complexity of $f$, in terms of its wavelet
description, and only weakly on the ambient dimension $m$. Essentially, our
network computes wavelet functions, which are computed from Rectified Linear
Units (ReLU).
| Uri Shaham, Alexander Cloninger, Ronald R. Coifman | 10.1016/j.acha.2016.04.003 | 1509.07385 | null | null |
Adaptive Sequential Optimization with Applications to Machine Learning | cs.LG cs.DS | A framework is introduced for solving a sequence of slowly changing
optimization problems, including those arising in regression and classification
applications, using optimization algorithms such as stochastic gradient descent
(SGD). The optimization problems change slowly in the sense that the minimizers
change at either a fixed or bounded rate. A method based on estimates of the
change in the minimizers and properties of the optimization algorithm is
introduced for adaptively selecting the number of samples needed from the
distributions underlying each problem in order to ensure that the excess risk,
i.e., the expected gap between the loss achieved by the approximate minimizer
produced by the optimization algorithm and the exact minimizer, does not exceed
a target level. Experiments with synthetic and real data are used to confirm
that this approach performs well.
| Craig Wilson and Venugopal V. Veeravalli | null | 1509.07422 | null | null |
A 128 channel Extreme Learning Machine based Neural Decoder for Brain
Machine Interfaces | cs.LG cs.HC | Currently, state-of-the-art motor intention decoding algorithms in
brain-machine interfaces are mostly implemented on a PC and consume a
significant amount of power. A machine learning co-processor in 0.35um CMOS for motor
intention decoding in brain-machine interfaces is presented in this paper.
Using Extreme Learning Machine algorithm and low-power analog processing, it
achieves an energy efficiency of 290 GMACs/W at a classification rate of 50 Hz.
The learning in the second stage and the corresponding digitally stored
coefficients are used to increase the robustness of the core analog processor.
The chip is verified with neural data recorded in a monkey finger movement experiment,
achieving a decoding accuracy of 99.3% for movement type. The same co-processor
is also used to decode time of movement from asynchronous neural spikes. With
time-delayed feature dimension enhancement, the classification accuracy can be
increased by 5% with a limited number of input channels. Further, a sparsity
promoting training scheme enables reduction of number of programmable weights
by ~2X.
| Yi Chen, Enyi Yao, Arindam Basu | 10.1109/TBCAS.2015.2483618 | 1509.07450 | null | null |
Spatially Encoding Temporal Correlations to Classify Temporal Data Using
Convolutional Neural Networks | cs.LG | We propose an off-line approach to explicitly encode temporal patterns
spatially as different types of images, namely, Gramian Angular Fields and
Markov Transition Fields. This enables the use of techniques from computer
vision for feature learning and classification. We used Tiled Convolutional
Neural Networks to learn high-level features from individual GAF, MTF, and
GAF-MTF images on 12 benchmark time series datasets and two real
spatial-temporal trajectory datasets. The classification results of our
approach are competitive with state-of-the-art approaches on both types of
data. An analysis of the features and weights learned by the CNNs explains why
the approach works.
| Zhiguang Wang and Tim Oates | null | 1509.07481 | null | null |
Linear-time Learning on Distributions with Approximate Kernel Embeddings | stat.ML cs.LG | Many interesting machine learning problems are best posed by considering
instances that are distributions, or sample sets drawn from distributions.
Previous work devoted to machine learning tasks with distributional inputs has
done so through pairwise kernel evaluations between pdfs (or sample sets).
While such an approach is fine for smaller datasets, the computation of an $N
\times N$ Gram matrix is prohibitive in large datasets. Recent scalable
estimators that work over pdfs have done so only with kernels that use
Euclidean metrics, like the $L_2$ distance. However, there are a myriad of
other useful metrics available, such as total variation, Hellinger distance,
and the Jensen-Shannon divergence. This work develops the first random features
for pdfs whose dot product approximates kernels using these non-Euclidean
metrics, allowing estimators using such kernels to scale to large datasets by
working in a primal space, without computing large Gram matrices. We provide an
analysis of the approximation error in using our proposed random features and
show empirically the quality of our approximation both in estimating a Gram
matrix and in solving learning tasks in real-world and synthetic data.
| Danica J. Sutherland and Junier B. Oliva and Barnab\'as P\'oczos and
Jeff Schneider | null | 1509.07553 | null | null |
A Review of Feature Selection Methods Based on Mutual Information | cs.LG stat.ML | In this work we present a review of the state of the art of information
theoretic feature selection methods. The concepts of feature relevance,
redundance and complementarity (synergy) are clearly defined, as well as Markov
blanket. The problem of optimal feature selection is defined. A unifying
theoretical framework is described, which can retrofit successful heuristic
criteria, indicating the approximations made by each method. A number of open
problems in the field are presented.
| Jorge R. Vergara, Pablo A. Est\'evez | 10.1007/s00521-013-1368-0 | 1509.07577 | null | null |
Online Stochastic Linear Optimization under One-bit Feedback | cs.LG | In this paper, we study a special bandit setting of online stochastic linear
optimization, where only one-bit of information is revealed to the learner at
each round. This problem has found many applications including online
advertisement and online recommendation. We assume the binary feedback is a
random variable generated from the logit model, and aim to minimize the regret
defined by the unknown linear function. Although the existing method for
generalized linear bandits can be applied to our problem, the high computational
cost makes it impractical for real-world problems. To address this challenge,
we develop an efficient online learning algorithm by exploiting particular
structures of the observation model. Specifically, we adopt online Newton step
to estimate the unknown parameter and derive a tight confidence region based on
the exponential concavity of the logistic loss. Our analysis shows that the
proposed algorithm achieves a regret bound of $O(d\sqrt{T})$, which matches the
optimal result of stochastic linear bandits.
| Lijun Zhang, Tianbao Yang, Rong Jin, Zhi-Hua Zhou | null | 1509.07728 | null | null |
A Mathematical Theory for Clustering in Metric Spaces | cs.LG | Clustering is one of the most fundamental problems in data analysis and it
has been studied extensively in the literature. Though many clustering
algorithms have been proposed, clustering theories that justify the use of
these clustering algorithms are still unsatisfactory. In particular, one of the
fundamental challenges is to address the following question:
What is a cluster in a set of data points?
In this paper, we make an attempt to address such a question by considering a
set of data points associated with a distance measure (metric). We first
propose a new cohesion measure in terms of the distance measure. Using the
cohesion measure, we define a cluster as a set of points that are cohesive to
themselves. For such a definition, we show there are various equivalent
statements that have intuitive explanations. We then consider the second
question:
How do we find clusters and good partitions of clusters under such a
definition?
For such a question, we propose a hierarchical agglomerative algorithm and a
partitional algorithm. Unlike standard hierarchical agglomerative algorithms,
our hierarchical agglomerative algorithm has a specific stopping criterion and
it stops with a partition of clusters. Our partitional algorithm, called the
K-sets algorithm in the paper, appears to be a new iterative algorithm. Unlike
the Lloyd iteration that needs two-step minimization, our K-sets algorithm only
takes one-step minimization.
One of the most interesting findings of our paper is the duality result
between a distance measure and a cohesion measure. Such a duality result leads
to a dual K-sets algorithm for clustering a set of data points with a cohesion
measure. The dual K-sets algorithm converges in the same way as a sequential
version of the classical kernel K-means algorithm. The key difference is that a
cohesion measure does not need to be positive semi-definite.
| Cheng-Shang Chang, Wanjiun Liao, Yu-Sheng Chen, and Li-Heng Liou | 10.1109/TNSE.2016.2516339 | 1509.07755 | null | null |
Computational Intelligence Challenges and Applications on Large-Scale
Astronomical Time Series Databases | astro-ph.IM cs.LG | Time-domain astronomy (TDA) is facing a paradigm shift caused by the
exponential growth of the sample size, data complexity and data generation
rates of new astronomical sky surveys. For example, the Large Synoptic Survey
Telescope (LSST), which will begin operations in northern Chile in 2022, will
generate a nearly 150 Petabyte imaging dataset of the southern hemisphere sky.
The LSST will stream data at rates of 2 Terabytes per hour, effectively
capturing an unprecedented movie of the sky. The LSST is expected not only to
improve our understanding of time-varying astrophysical objects, but also to
reveal a plethora of yet unknown faint and fast-varying phenomena. To cope with
the paradigm shift to data-driven astronomy, the fields of astroinformatics
and astrostatistics have been created recently. The new data-oriented paradigms
for astronomy combine statistics, data mining, knowledge discovery, machine
learning and computational intelligence, in order to provide the automated and
robust methods needed for the rapid detection and classification of known
astrophysical objects as well as the unsupervised characterization of novel
phenomena. In this article we present an overview of machine learning and
computational intelligence applications to TDA. Future big data challenges and
new lines of research in TDA, focusing on the LSST, are identified and
discussed from the viewpoint of computational intelligence/machine learning.
Interdisciplinary collaboration will be required to cope with the challenges
posed by the deluge of astronomical data coming from the LSST.
| Pablo Huijse and Pablo A. Estevez and Pavlos Protopapas and Jose C.
Principe and Pablo Zegers | 10.1109/MCI.2014.2326100 | 1509.07823 | null | null |
Deep Multimodal Embedding: Manipulating Novel Objects with Point-clouds,
Language and Trajectories | cs.RO cs.AI cs.CV cs.LG | A robot operating in a real-world environment needs to perform reasoning over
a variety of sensor modalities such as vision, language and motion
trajectories. However, it is extremely challenging to manually design features
relating such disparate modalities. In this work, we introduce an algorithm
that learns to embed point-cloud, natural language, and manipulation trajectory
data into a shared embedding space with a deep neural network. To learn
semantically meaningful spaces throughout our network, we use a loss-based
margin to bring embeddings of relevant pairs closer together while driving
less-relevant cases from different modalities further apart. We use this both
to pre-train its lower layers and fine-tune our final embedding space, leading
to a more robust representation. We test our algorithm on the task of
manipulating novel objects and appliances based on prior experience with other
objects. On a large dataset, we achieve significant improvements in both
accuracy and inference time over the previous state of the art. We also perform
end-to-end experiments on a PR2 robot utilizing our learned embedding space.
| Jaeyong Sung, Ian Lenz, Ashutosh Saxena | null | 1509.07831 | null | null |
Evasion and Hardening of Tree Ensemble Classifiers | cs.LG cs.CR stat.ML | Classifier evasion consists in finding, for a given instance $x$, the nearest
instance $x'$ such that the classifier predictions of $x$ and $x'$ are
different. We present two novel algorithms for systematically computing
evasions for tree ensembles such as boosted trees and random forests. Our first
algorithm uses a Mixed Integer Linear Program solver and finds the optimal
evading instance under an expressive set of constraints. Our second algorithm
trades off optimality for speed by using symbolic prediction, a novel algorithm
for fast finite differences on tree ensembles. On a digit recognition task, we
demonstrate that both gradient boosted trees and random forests are extremely
susceptible to evasions. Finally, we harden a boosted tree model without loss
of predictive accuracy by augmenting the training set of each boosting round
with evading instances, a technique we call adversarial boosting.
| Alex Kantchelian, J. D. Tygar, Anthony D. Joseph | null | 1509.07892 | null | null |
Algorithms for Linear Bandits on Polyhedral Sets | cs.LG | We study the stochastic linear optimization problem with bandit feedback. The
arms take values in an $N$-dimensional space and belong to a bounded
polyhedron described by finitely many linear inequalities. We provide a lower
bound for the expected regret that scales as $\Omega(N\log T)$. We then provide
a nearly optimal algorithm and show that its expected regret scales as
$O(N\log^{1+\epsilon}(T))$ for an arbitrarily small $\epsilon > 0$. The
algorithm alternates sequentially between exploration and exploitation
intervals, where a deterministic set of arms is played in the exploration
intervals and a greedily selected arm is played in the exploitation intervals.
We also develop an
algorithm that achieves the optimal regret when the sub-Gaussianity parameter
of the noise term is known. Our key insight is that for a polyhedron the optimal
arm is robust to small perturbations in the reward function. Consequently, a
greedily selected arm is guaranteed to be optimal when the estimation error
falls below some suitable threshold. Our solution resolves a question posed by
Rusmevichientong and Tsitsiklis (2011) that left open the possibility of
efficient algorithms with asymptotic logarithmic regret bounds. We also show
that the regret upper bounds hold with probability $1$. Our numerical
investigations show that while the theoretical results are asymptotic, the
performance of our algorithms compares favorably with state-of-the-art
algorithms
in finite time as well.
| Manjesh K. Hanawal and Amir Leshem and Venkatesh Saligrama | null | 1509.07927 | null | null |
Super-Resolution Off the Grid | cs.LG | Super-resolution is the problem of recovering a superposition of point
sources using bandlimited measurements, which may be corrupted with noise. This
signal processing problem arises in numerous imaging problems, ranging from
astronomy to biology to spectroscopy, where it is common to take (coarse)
Fourier measurements of an object. Of particular interest is in obtaining
estimation procedures which are robust to noise, with the following desirable
statistical and computational properties: we seek to use coarse Fourier
measurements (bounded by some cutoff frequency); we hope to take a
(quantifiably) small number of measurements; we desire our algorithm to run
quickly.
Suppose we have $k$ point sources in $d$ dimensions, where the points are
separated by at least $\Delta$ from each other (in Euclidean distance). This
work provides an algorithm with the following favorable guarantees: - The
algorithm uses Fourier measurements whose frequencies are bounded by
$O(1/\Delta)$ (up to log factors). Previous algorithms require a cutoff
frequency which may be as large as $\Omega(d/\Delta)$. - The number of
measurements taken by, and the computational complexity of, our algorithm are
bounded by a polynomial in both the number of points $k$ and the dimension $d$,
with no dependence on the separation $\Delta$. In contrast, previous algorithms
depended inverse polynomially on the minimal separation and exponentially on
the dimension for both of these quantities.
Our estimation procedure itself is simple: we take random bandlimited
measurements (as opposed to taking an exponential number of measurements on the
hyper-grid). Furthermore, our analysis and algorithm are elementary (based on
concentration bounds for sampling and the singular value decomposition).
| Qingqing Huang, Sham M. Kakade | null | 1509.07943 | null | null |
Modeling Curiosity in a Mobile Robot for Long-Term Autonomous
Exploration and Monitoring | cs.RO cs.CV cs.LG | This paper presents a novel approach to modeling curiosity in a mobile robot,
which is useful for monitoring and adaptive data collection tasks, especially
in the context of long term autonomous missions where pre-programmed missions
are likely to have limited utility. We use a real-time topic modeling technique
to build a semantic perception model of the environment, using which we plan a
path through the locations in the world with high semantic information content.
The life-long learning behavior of the proposed perception model makes it
suitable for long-term exploration missions. We validate the approach using
simulated exploration experiments using aerial and underwater data, and
demonstrate an implementation on the Aqua underwater robot in a variety of
scenarios. We find that the proposed exploration paths, which are biased
towards locations with high topic perplexity, produce better terrain models
with high discriminative power. Moreover, we show that the proposed algorithm
implemented on the Aqua robot is able to perform tasks such as coral reef
inspection, diver following, and sea floor exploration, without any prior
training or
preparation.
| Yogesh Girdhar, Gregory Dudek | 10.1007/s10514-015-9500-x | 1509.07975 | null | null |
Probably certifiably correct k-means clustering | cs.IT cs.DS cs.LG math.IT math.ST stat.TH | Recently, Bandeira [arXiv:1509.00824] introduced a new type of algorithm (the
so-called probably certifiably correct algorithm) that combines fast solvers
with the optimality certificates provided by convex relaxations. In this paper,
we devise such an algorithm for the problem of k-means clustering. First, we
prove that Peng and Wei's semidefinite relaxation of k-means is tight with high
probability under a distribution of planted clusters called the stochastic ball
model. Our proof follows from a new dual certificate for integral solutions of
this semidefinite program. Next, we show how to test the optimality of a
proposed k-means solution using this dual certificate in quasilinear time.
Finally, we analyze a version of spectral clustering from Peng and Wei that is
designed to solve k-means in the case of two clusters. In particular, we show
that this quasilinear-time method typically recovers planted clusters under the
stochastic ball model.
| Takayuki Iguchi, Dustin G. Mixon, Jesse Peterson, Soledad Villar | null | 1509.07983 | null | null |
Deep Trans-layer Unsupervised Networks for Representation Learning | cs.NE cs.CV cs.LG | Learning features from massive unlabelled data is a vast prevalent topic for
high-level tasks in many machine learning applications. The recent great
improvements on benchmark data sets, achieved by increasingly complex
unsupervised learning methods and deep learning models with many parameters,
usually require many tedious tricks and much expertise to tune. However,
filters learned by these complex architectures are quite similar to standard
hand-crafted features visually. In this paper, unsupervised learning methods,
such as PCA or auto-encoder, are employed as the building block to learn filter
banks at each layer. The lower layer responses are transferred to the last
layer (trans-layer) to form a more complete representation retaining more
information. In addition, some beneficial methods such as local contrast
normalization and whitening are added to the proposed deep trans-layer networks
to further boost performance. The trans-layer representations are followed by
block histograms with binary encoder schema to learn translation and rotation
invariant representations, which are utilized to do high-level tasks such as
recognition and classification. Compared to traditional deep learning methods,
the implemented feature learning method has far fewer parameters and is
validated in several typical experiments, such as digit recognition on MNIST
and MNIST variations, object recognition on Caltech 101 dataset and face
verification on LFW dataset. The deep trans-layer unsupervised learning
achieves 99.45% accuracy on MNIST dataset, 67.11% accuracy on 15 samples per
class and 75.98% accuracy on 30 samples per class on Caltech 101 dataset,
87.10% on LFW dataset.
| Wentao Zhu, Jun Miao, Laiyun Qing, Xilin Chen | null | 1509.08038 | null | null |
End-to-End Text-Dependent Speaker Verification | cs.LG cs.SD | In this paper we present a data-driven, integrated approach to speaker
verification, which maps a test utterance and a few reference utterances
directly to a single score for verification and jointly optimizes the system's
components using the same evaluation protocol and metric as at test time. Such
an approach will result in simple and efficient systems, requiring little
domain-specific knowledge and making few model assumptions. We implement the
idea by formulating the problem as a single neural network architecture,
including the estimation of a speaker model on only a few utterances, and
evaluate it on our internal "Ok Google" benchmark for text-dependent speaker
verification. The proposed approach appears to be very effective for big data
applications like ours that require highly accurate, easy-to-maintain systems
with a small footprint.
| Georg Heigold, Ignacio Moreno, Samy Bengio, Noam Shazeer | null | 1509.08062 | null | null |
Online Object Tracking, Learning and Parsing with And-Or Graphs | cs.CV cs.LG | This paper presents a method, called AOGTracker, for simultaneously tracking,
learning and parsing (TLP) of unknown objects in video sequences with a
hierarchical and compositional And-Or graph (AOG) representation. The AOG
captures both structural and appearance variations of a target object in a
principled way. The TLP method is formulated in the Bayesian framework with
spatial and temporal dynamic programming (DP) algorithms inferring object
bounding boxes on-the-fly. During online learning, the AOG is discriminatively
learned using latent SVM to account for appearance (e.g., lighting and partial
occlusion) and structural (e.g., different poses and viewpoints) variations of
a tracked object, as well as distractors (e.g., similar objects) in background.
Three key issues in online inference and learning are addressed: (i)
maintaining purity of positive and negative examples collected online, (ii)
controlling model complexity in latent structure learning, and (iii) identifying
critical moments to re-learn the structure of AOG based on its intrackability.
The intrackability measures uncertainty of an AOG based on its score maps in a
frame. In experiments, our AOGTracker is tested on two popular tracking
benchmarks with the same parameter setting: the TB-100/50/CVPR2013 benchmarks,
and the VOT benchmarks --- VOT 2013, 2014, 2015 and TIR2015 (thermal imagery
tracking). In the former, our AOGTracker outperforms state-of-the-art tracking
algorithms including two trackers based on deep convolutional network. In the
latter, our AOGTracker outperforms all other trackers in VOT2013 and is
comparable to the state-of-the-art methods in VOT2014, 2015 and TIR2015.
| Tianfu Wu and Yang Lu and Song-Chun Zhu | null | 1509.08067 | null | null |
Non-asymptotic Analysis of $\ell_1$-norm Support Vector Machines | cs.IT cs.LG math.FA math.IT math.ST stat.TH | Support Vector Machines (SVM) with $\ell_1$ penalty have become a standard
tool in the analysis of high-dimensional classification problems with sparsity
constraints in many applications, including bioinformatics and signal
processing. Although SVM have been studied intensively in the literature, this
paper provides, to our knowledge, the first non-asymptotic results on the
performance of $\ell_1$-SVM in the identification of sparse classifiers. We
show that a $d$-dimensional $s$-sparse
classification vector can be (with high probability) well approximated from
only $O(s\log(d))$ Gaussian trials. The methods used in the proof include
concentration of measure and probability in Banach spaces.
| Anton Kolleck, Jan Vyb\'iral | null | 1509.08083 | null | null |
Representation Benefits of Deep Feedforward Networks | cs.LG cs.NE | This note provides a family of classification problems, indexed by a positive
integer $k$, where all shallow networks with fewer than exponentially (in $k$)
many nodes exhibit error at least $1/6$, whereas a deep network with 2 nodes in
each of $2k$ layers achieves zero error, as does a recurrent network with 3
distinct nodes iterated $k$ times. The proof is elementary, and the networks
are standard feedforward networks with ReLU (Rectified Linear Unit)
nonlinearities.
| Matus Telgarsky | null | 1509.08101 | null | null |
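This depth-separation result can be made concrete with a small sketch (a hypothetical Python illustration of the standard tent-map construction; the paper's exact family of classification problems is not reproduced here): a two-node ReLU layer computes a tent map on [0, 1], and composing k such layers yields a function with exponentially many linear pieces that a shallow network with few nodes cannot match.

```python
def relu(x):
    """Rectified Linear Unit."""
    return max(x, 0.0)

def tent(x):
    # A 2-node ReLU layer computing the tent map on [0, 1]:
    # tent(x) = 2x on [0, 1/2] and 2 - 2x on [1/2, 1].
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_tent(x, k):
    # Composing k tent layers produces a sawtooth with 2**(k-1)
    # peaks, i.e. exponentially many linear pieces in depth k.
    for _ in range(k):
        x = tent(x)
    return x
```

Labeling points by whether `deep_tent` exceeds 1/2 gives a problem that the deep network solves exactly while any shallow network must misclassify a constant fraction of the input.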
Discriminative Learning of the Prototype Set for Nearest Neighbor
Classification | cs.LG | The nearest neighbor rule is a classic yet essential classification model,
particularly in problems where the supervising information is given by pairwise
dissimilarities and an embedding function is not easily obtained. Prototype
selection provides a means of generalizing and improving the efficiency of the
nearest neighbor model, but many existing methods assume and rely on the
analyses of the input vector space. In this paper, we explore a
dissimilarity-based, parametrized model of the nearest neighbor rule. In the
proposed model, the selection of the nearest prototypes is influenced by the
parameters of the respective prototypes. It provides a formulation for
minimizing the violation of the extended nearest neighbor rule over the
training set in a tractable form to exploit numerical techniques. We show that
the minimization problem reduces to a large-margin principle learning and
demonstrate its advantage by empirical comparisons with other prototype
selection methods.
| Shin Ando | null | 1509.08102 | null | null |
Feature Selection for classification of hyperspectral data by minimizing
a tight bound on the VC dimension | cs.LG | Hyperspectral data consists of a large number of features which require
sophisticated analysis to be extracted. A popular approach to reduce
computational cost, facilitate information representation and accelerate
knowledge discovery is to eliminate bands that do not improve the
classification and analysis methods being applied. In particular, algorithms
that perform band elimination should be designed to take advantage of the
specifics of the classification method being used. This paper employs a
recently proposed filter-feature-selection algorithm based on minimizing a
tight bound on the VC dimension. We have successfully applied this algorithm to
determine a reasonable subset of bands without any user-defined stopping
criteria on widely used hyperspectral images and demonstrate that this method
outperforms state-of-the-art methods in terms of both sparsity of the feature
set and accuracy of classification.
| Phool Preet, Sanjit Singh Batra, Jayadeva | null | 1509.08112 | null | null |
Optimal Copula Transport for Clustering Multivariate Time Series | cs.LG stat.ML | This paper presents a new methodology for clustering multivariate time series
leveraging optimal transport between copulas. Copulas are used to encode both
(i) intra-dependence of a multivariate time series, and (ii) inter-dependence
between two time series. Then, optimal copula transport allows us to define two
distances between multivariate time series: (i) one for measuring
intra-dependence dissimilarity, (ii) another one for measuring inter-dependence
dissimilarity based on a new multivariate dependence coefficient which is
robust to noise, deterministic, and which can target specified dependencies.
| Gautier Marti, Frank Nielsen, Philippe Donnat | null | 1509.08144 | null | null |
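The copula-encoding step can be sketched as follows (a minimal hypothetical illustration; the paper's optimal-transport distance between copulas is not implemented here): mapping a bivariate sample to normalized ranks yields empirical copula observations with uniform margins, stripping away the marginal distributions and retaining only the dependence structure.

```python
import numpy as np

def empirical_copula(x, y):
    """Map a bivariate sample to its empirical copula observations:
    normalized ranks in (0, 1], leaving only the dependence structure."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / n  # ranks of x, scaled to (0, 1]
    v = (np.argsort(np.argsort(y)) + 1) / n  # ranks of y, scaled to (0, 1]
    return np.column_stack([u, v])
```

Distances between two such rank clouds (e.g. via optimal transport) then compare dependence patterns independently of the margins.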
Analysis of Intelligent Classifiers and Enhancing the Detection Accuracy
for Intrusion Detection System | cs.CR cs.LG | In this paper we discuss and analyze some of the intelligent classifiers
which allow for automatic detection and classification of network attacks for
any intrusion detection system. We will proceed initially with their analysis
using the WEKA software to work with the classifiers on a well-known IDS
(Intrusion Detection Systems) dataset like NSL-KDD dataset. The NSL-KDD dataset
of network attacks was created in a military network by MIT Lincoln Labs. Then
we will discuss and experiment with some of the hybrid AI (Artificial
Intelligence) classifiers that can be used for IDS, and finally we develop
Java software with the three most efficient classifiers and compare it with
other options. The outputs show the detection accuracy and efficiency of the
single and combined classifiers used.
| Mohanad Albayati and Biju Issac | 10.1080/18756891.2015.1084705 | 1509.08239 | null | null |
High-dimensional Time Series Prediction with Missing Values | cs.LG stat.ML | High-dimensional time series prediction is needed in applications as diverse
as demand forecasting and climatology. Often, such applications require methods
that are both highly scalable, and deal with noisy data in terms of corruptions
or missing values. Classical time series methods usually fall short of handling
both these issues. In this paper, we propose to adapt matrix completion
approaches that have previously been successfully applied to large scale noisy
data, but which fail to adequately model high-dimensional time series due to
temporal dependencies. We present a novel temporal regularized matrix
factorization (TRMF) framework which supports data-driven temporal dependency
learning and enables forecasting ability to our new matrix factorization
approach. TRMF is highly general, and subsumes many existing matrix
factorization approaches for time series data. We make interesting connections
to graph regularized matrix factorization methods in the context of learning
the dependencies. Experiments on both real and synthetic data show that TRMF
outperforms several existing approaches for common time series tasks.
| Hsiang-Fu Yu and Nikhil Rao and Inderjit S. Dhillon | null | 1509.08333 | null | null |
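The idea can be sketched numerically (a hypothetical simplification; the paper's actual TRMF solver and regularizer family are not reproduced): factor the series matrix Y as F X and penalize each latent series in X for deviating from an AR(1) prediction of its previous value, so the factorization itself learns a temporal dependency usable for forecasting.

```python
import numpy as np

def trmf_loss(Y, F, X, w, lam):
    """|| Y - F X ||_F^2 plus an AR(1) temporal penalty on the rows
    of the latent series matrix X (per-row coefficients w)."""
    resid = Y - F @ X
    ar = X[:, 1:] - w[:, None] * X[:, :-1]
    return float(np.sum(resid ** 2) + lam * np.sum(ar ** 2))

def trmf_step(Y, F, X, w, lam=0.1, lr=1e-3):
    """One joint gradient step on the factors F and X."""
    resid = Y - F @ X
    ar = X[:, 1:] - w[:, None] * X[:, :-1]
    gF = -2.0 * resid @ X.T
    gX = -2.0 * F.T @ resid
    gX[:, 1:] += 2.0 * lam * ar           # penalty pulls x_t toward w*x_{t-1}
    gX[:, :-1] -= 2.0 * lam * w[:, None] * ar
    return F - lr * gF, X - lr * gX
```

Once fitted, the AR(1) coefficients w extrapolate the latent series forward, and F maps the extrapolation back to forecasts of Y.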
Compressive spectral embedding: sidestepping the SVD | stat.ML cs.LG | Spectral embedding based on the Singular Value Decomposition (SVD) is a
widely used "preprocessing" step in many learning tasks, typically leading to
dimensionality reduction by projecting onto a number of dominant singular
vectors and rescaling the coordinate axes (by a predefined function of the
singular values). However, the number of such vectors required to capture
problem structure grows with problem size, and even partial SVD computation
becomes a bottleneck. In this paper, we propose a low-complexity compressive
spectral embedding algorithm, which employs random projections and finite-order
polynomial expansions to compute approximations to SVD-based embedding. For an
$m \times n$ matrix with $T$ non-zeros, its time complexity is
$O((T+m+n)\log(m+n))$, and the embedding dimension is $O(\log(m+n))$, both of
which are independent of
the number of singular vectors whose effect we wish to capture. To the best of
our knowledge, this is the first work to circumvent this dependence on the
number of singular vectors for general SVD-based embeddings. The key to
sidestepping the SVD is the observation that, for downstream inference tasks
such as clustering and classification, we are only interested in using the
resulting embedding to evaluate pairwise similarity metrics derived from the
Euclidean norm, rather than capturing the effect of the underlying matrix on
arbitrary vectors as a partial SVD tries to do. Our numerical results on
network datasets demonstrate the efficacy of the proposed method, and motivate
further exploration of its application to large-scale inference tasks.
| Dinesh Ramasamy and Upamanyu Madhow | null | 1509.08360 | null | null |
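The central trick — probing a polynomial of the similarity matrix with random vectors instead of computing singular vectors — can be sketched as follows (an assumed simplification using the monomial filter p(x) = x**degree; the paper's actual polynomial design and normalizations differ):

```python
import numpy as np

def compressive_embedding(A, d, degree=3, seed=0):
    """Embed the rows of A into d dimensions without an SVD by
    computing p(A A^T) G for random Gaussian probes G, where the
    monomial filter p(x) = x**degree emphasizes dominant singular
    directions. A A^T is never formed explicitly."""
    rng = np.random.default_rng(seed)
    m, _ = A.shape
    G = rng.standard_normal((m, d)) / np.sqrt(d)  # random probes
    E = G
    for _ in range(degree):
        E = A @ (A.T @ E)  # one application of A A^T, cost O(T d) when sparse
    return E
```

Pairwise Euclidean distances between rows of the result then approximate similarities under the polynomially rescaled spectrum, which is all that downstream clustering or classification needs.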
Distance-Penalized Active Learning Using Quantile Search | stat.ML cs.LG | Adaptive sampling theory has shown that, with proper assumptions on the
signal class, algorithms exist to reconstruct a signal in $\mathbb{R}^{d}$ with
an optimal number of samples. We generalize this problem to the case of spatial
signals, where the sampling cost is a function of both the number of samples
taken and the distance traveled during estimation. This is motivated by our
work studying regions of low oxygen concentration in the Great Lakes. We show
that for one-dimensional threshold classifiers, a tradeoff between the number
of samples taken and distance traveled can be achieved using a generalization
of binary search, which we refer to as quantile search. We characterize both
the estimation error after a fixed number of samples and the distance traveled
in the noiseless case, as well as the estimation error in the case of noisy
measurements. We illustrate our results in both simulations and experiments and
show that our method outperforms existing algorithms in the majority of
practical scenarios.
| John Lipor, Brandon Wong, Donald Scavia, Branko Kerkez, and Laura
Balzano | null | 1509.08387 | null | null |
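The noiseless one-dimensional case can be sketched as follows (hypothetical code, not the paper's algorithm verbatim): probing at the 1/m quantile of the current interval rather than its midpoint keeps the sampler near one end, trading extra samples for shorter travel; m = 2 recovers binary search.

```python
def quantile_search(f, lo, hi, m=3.0, tol=1e-3):
    """Locate the jump of a noiseless step function f on [lo, hi].

    f(x) should be 0 left of the threshold and 1 right of it.
    """
    while hi - lo > tol:
        x = lo + (hi - lo) / m      # probe at the 1/m quantile
        if f(x) == 0:               # threshold lies to the right
            lo = x
        else:                       # threshold lies to the left
            hi = x
    return 0.5 * (lo + hi)
```

Each probe shrinks the interval by a factor of at most max(1/m, 1 - 1/m), so the estimate still converges; larger m simply concentrates the probes near the lower endpoint.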
Efficient Empowerment | stat.ML cs.LG | Empowerment quantifies the influence an agent has on its environment. This is
formally achieved by the maximum of the expected KL-divergence between the
distribution of the successor state conditioned on a specific action and a
distribution where the actions are marginalised out. This is a natural
candidate for an intrinsic reward signal in the context of reinforcement
learning: the agent will place itself in a situation where its actions have
maximum stability and maximum influence on the future. The limiting factor so
far has been the computational complexity of the method: the only way of
calculating it has so far been a brute-force algorithm, reducing the
applicability of the method to environments with a small set of discrete
states. In this work,
we propose to use an efficient approximation for marginalising out the actions
in the case of continuous environments. This allows fast evaluation of
empowerment, paving the way towards challenging environments such as real world
robotics. The method is demonstrated on a pendulum swing-up problem.
| Maximilian Karl, Justin Bayer, Patrick van der Smagt | null | 1509.08455 | null | null |
Optimization over Sparse Symmetric Sets via a Nonmonotone Projected
Gradient Method | math.OC cs.LG cs.NA stat.CO stat.ML | We consider the problem of minimizing a Lipschitz differentiable function
over a class of sparse symmetric sets that has wide applications in engineering
and science. For this problem, it is known that any accumulation point of the
classical projected gradient (PG) method with a constant stepsize $1/L$
satisfies the $L$-stationarity optimality condition that was introduced in [3].
In this paper we introduce a new optimality condition that is stronger than the
$L$-stationarity optimality condition. We also propose a nonmonotone projected
gradient (NPG) method for this problem by incorporating some support-changing
and coordinate-swapping strategies into a projected gradient method with
variable stepsizes. It is shown that any accumulation point of NPG satisfies
the new optimality condition and moreover it is a coordinatewise stationary
point. Under some suitable assumptions, we further show that it is a global or
a local minimizer of the problem. Numerical experiments are conducted to
compare the performance of PG and NPG. The computational results demonstrate
that NPG has substantially better solution quality than PG; moreover, it is at
least comparable to, and sometimes much faster than, PG in terms of speed.
| Zhaosong Lu | null | 1509.08581 | null | null |
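For intuition, the classical PG baseline on a plain sparsity set can be sketched as follows (a hypothetical minimal version; the paper's NPG adds nonmonotone variable stepsizes plus the support-changing and coordinate-swapping strategies, and handles general sparse symmetric sets):

```python
import numpy as np

def project_sparse(x, s):
    """Euclidean projection onto {x : ||x||_0 <= s}: keep the s
    largest-magnitude entries and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def projected_gradient(grad, x0, s, lr, steps):
    """Projected gradient with a constant stepsize lr."""
    x = project_sparse(x0, s)
    for _ in range(steps):
        x = project_sparse(x - lr * grad(x), s)
    return x
```

On a simple separable objective the iterates settle on the best support; it is exactly this fixed-support stagnation that the paper's support-changing and swapping moves are designed to escape.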
Semantics, Representations and Grammars for Deep Learning | cs.LG cs.NE stat.ML | Deep learning is currently the subject of intensive study. However,
fundamental concepts such as representations are not formally defined --
researchers "know them when they see them" -- and there is no common language
for describing and analyzing algorithms. This essay proposes an abstract
framework that identifies the essential features of current practice and may
provide a foundation for future developments.
The backbone of almost all deep learning algorithms is backpropagation, which
is simply a gradient computation distributed over a neural network. The main
ingredients of the framework are thus, unsurprisingly: (i) game theory, to
formalize distributed optimization; and (ii) communication protocols, to track
the flow of zeroth and first-order information. The framework allows natural
definitions of semantics (as the meaning encoded in functions), representations
(as functions whose semantics is chosen to optimize a criterion) and grammars
(as communication protocols equipped with first-order convergence guarantees).
Much of the essay is spent discussing examples taken from the literature. The
ultimate aim is to develop a graphical language for describing the structure of
deep learning algorithms that backgrounds the details of the optimization
procedure and foregrounds how the components interact. Inspiration is taken
from probabilistic graphical models and factor graphs, which capture the
essential structural features of multivariate distributions.
| David Balduzzi | null | 1509.08627 | null | null |