categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
list |
---|---|---|---|---|---|---|---|---|---|---|
cs.LG stat.ML
|
10.1109/TNNLS.2017.2709838
|
1610.06811
| null | null |
http://arxiv.org/abs/1610.06811v1
|
2016-10-21T14:55:48Z
|
2016-10-21T14:55:48Z
|
Convex Formulation for Kernel PCA and its Use in Semi-Supervised
Learning
|
In this paper, Kernel PCA is reinterpreted as the solution to a convex
optimization problem. Actually, there is a constrained convex problem for each
principal component, so that the constraints guarantee that the principal
component is indeed a solution, and not a mere saddle point. Although these
insights do not imply any algorithmic improvement, they can be used to further
understand the method, formulate possible extensions and properly address them.
As an example, a new convex optimization problem for semi-supervised
classification is proposed, which seems particularly well-suited whenever the
number of known labels is small. Our formulation resembles a Least Squares SVM
problem with a regularization parameter multiplied by a negative sign, combined
with a variational principle for Kernel PCA. Our primal optimization principle
for semi-supervised learning is solved in terms of the Lagrange multipliers.
Numerical experiments in several classification tasks illustrate the
performance of the proposed model in problems with only a few labeled data.
|
[
"['Carlos M. Alaíz' 'Michaël Fanuel' 'Johan A. K. Suykens']",
"Carlos M. Ala\\'iz, Micha\\\"el Fanuel, Johan A. K. Suykens"
] |
cs.LG stat.ML
| null |
1610.06848
| null | null |
http://arxiv.org/pdf/1610.06848v3
|
2017-07-09T16:36:03Z
|
2016-10-19T00:19:25Z
|
An Efficient Minibatch Acceptance Test for Metropolis-Hastings
|
We present a novel Metropolis-Hastings method for large datasets that uses
small expected-size minibatches of data. Previous work on reducing the cost of
Metropolis-Hastings tests yield variable data consumed per sample, with only
constant factor reductions versus using the full dataset for each sample. Here
we present a method that can be tuned to provide arbitrarily small batch sizes,
by adjusting either proposal step size or temperature. Our test uses the
noise-tolerant Barker acceptance test with a novel additive correction
variable. The resulting test has similar cost to a normal SGD update. Our
experiments demonstrate several order-of-magnitude speedups over previous work.
|
[
"['Daniel Seita' 'Xinlei Pan' 'Haoyu Chen' 'John Canny']",
"Daniel Seita, Xinlei Pan, Haoyu Chen, John Canny"
] |
cs.CR cs.LG
| null |
1610.06918
| null | null |
http://arxiv.org/pdf/1610.06918v1
|
2016-10-21T19:58:29Z
|
2016-10-21T19:58:29Z
|
Learning to Protect Communications with Adversarial Neural Cryptography
|
We ask whether neural networks can learn to use secret keys to protect
information from other neural networks. Specifically, we focus on ensuring
confidentiality properties in a multiagent system, and we specify those
properties in terms of an adversary. Thus, a system may consist of neural
networks named Alice and Bob, and we aim to limit what a third neural network
named Eve learns from eavesdropping on the communication between Alice and Bob.
We do not prescribe specific cryptographic algorithms to these neural networks;
instead, we train end-to-end, adversarially. We demonstrate that the neural
networks can learn how to perform forms of encryption and decryption, and also
how to apply these operations selectively in order to meet confidentiality
goals.
|
[
"['Martín Abadi' 'David G. Andersen']",
"Mart\\'in Abadi and David G. Andersen (Google Brain)"
] |
cs.LG cs.AI cs.AR cs.CV
| null |
1610.0692
| null | null | null | null | null |
Bit-pragmatic Deep Neural Network Computing
|
We quantify a source of ineffectual computations when processing the
multiplications of the convolutional layers in Deep Neural Networks (DNNs) and
propose Pragmatic (PRA), an architecture that exploits it improving performance
and energy efficiency. The source of these ineffectual computations is best
understood in the context of conventional multipliers which generate internally
multiple terms, that is, products of the multiplicand and powers of two, which
added together produce the final product [1]. At runtime, many of these terms
are zero as they are generated when the multiplicand is combined with the
zero-bits of the multiplicator. While conventional bit-parallel multipliers
calculate all terms in parallel to reduce individual product latency, PRA
calculates only the non-zero terms using a) on-the-fly conversion of the
multiplicator representation into an explicit list of powers of two, and b)
hybrid bit-parallel multplicand/bit-serial multiplicator processing units. PRA
exploits two sources of ineffectual computations: 1) the aforementioned zero
product terms which are the result of the lack of explicitness in the
multiplicator representation, and 2) the excess in the representation precision
used for both multiplicants and multiplicators, e.g., [2]. Measurements
demonstrate that for the convolutional layers, a straightforward variant of PRA
improves performance by 2.6x over the DaDiaNao (DaDN) accelerator [3] and by
1.4x over STR [4]. Similarly, PRA improves energy efficiency by 28% and 10% on
average compared to DaDN and STR. An improved cross lane synchronication scheme
boosts performance improvements to 3.1x over DaDN. Finally, Pragmatic benefits
persist even with an 8-bit quantized representation [5].
|
[
"J. Albericio, P. Judd, A. Delm\\'as, S. Sharify, A. Moshovos"
] |
null | null |
1610.06920
| null | null |
http://arxiv.org/pdf/1610.06920v1
|
2016-10-20T22:16:05Z
|
2016-10-20T22:16:05Z
|
Bit-pragmatic Deep Neural Network Computing
|
We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragmatic (PRA), an architecture that exploits it improving performance and energy efficiency. The source of these ineffectual computations is best understood in the context of conventional multipliers which generate internally multiple terms, that is, products of the multiplicand and powers of two, which added together produce the final product [1]. At runtime, many of these terms are zero as they are generated when the multiplicand is combined with the zero-bits of the multiplicator. While conventional bit-parallel multipliers calculate all terms in parallel to reduce individual product latency, PRA calculates only the non-zero terms using a) on-the-fly conversion of the multiplicator representation into an explicit list of powers of two, and b) hybrid bit-parallel multplicand/bit-serial multiplicator processing units. PRA exploits two sources of ineffectual computations: 1) the aforementioned zero product terms which are the result of the lack of explicitness in the multiplicator representation, and 2) the excess in the representation precision used for both multiplicants and multiplicators, e.g., [2]. Measurements demonstrate that for the convolutional layers, a straightforward variant of PRA improves performance by 2.6x over the DaDiaNao (DaDN) accelerator [3] and by 1.4x over STR [4]. Similarly, PRA improves energy efficiency by 28% and 10% on average compared to DaDN and STR. An improved cross lane synchronication scheme boosts performance improvements to 3.1x over DaDN. Finally, Pragmatic benefits persist even with an 8-bit quantized representation [5].
|
[
"['J. Albericio' 'P. Judd' 'A. Delmás' 'S. Sharify' 'A. Moshovos']"
] |
cs.AI cs.LG stat.ML
| null |
1610.0694
| null | null | null | null | null |
Safety Verification of Deep Neural Networks
|
Deep neural networks have achieved impressive experimental results in image
classification, but can surprisingly be unstable with respect to adversarial
perturbations, that is, minimal changes to the input image that cause the
network to misclassify it. With potential applications including perception
modules and end-to-end controllers for self-driving cars, this raises concerns
about their safety. We develop a novel automated verification framework for
feed-forward multi-layer neural networks based on Satisfiability Modulo Theory
(SMT). We focus on safety of image classification decisions with respect to
image manipulations, such as scratches or changes to camera angle or lighting
conditions that would result in the same class being assigned by a human, and
define safety for an individual decision in terms of invariance of the
classification within a small neighbourhood of the original image. We enable
exhaustive search of the region by employing discretisation, and propagate the
analysis layer by layer. Our method works directly with the network code and,
in contrast to existing methods, can guarantee that adversarial examples, if
they exist, are found for the given region and family of manipulations. If
found, adversarial examples can be shown to human testers and/or used to
fine-tune the network. We implement the techniques using Z3 and evaluate them
on state-of-the-art networks, including regularised and deep learning networks.
We also compare against existing techniques to search for adversarial examples
and estimate network robustness.
|
[
"Xiaowei Huang and Marta Kwiatkowska and Sen Wang and Min Wu"
] |
null | null |
1610.06940
| null | null |
http://arxiv.org/pdf/1610.06940v3
|
2017-05-05T10:16:50Z
|
2016-10-21T20:16:16Z
|
Safety Verification of Deep Neural Networks
|
Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.
|
[
"['Xiaowei Huang' 'Marta Kwiatkowska' 'Sen Wang' 'Min Wu']"
] |
cs.AI cs.LG stat.ML
| null |
1610.06972
| null | null |
http://arxiv.org/pdf/1610.06972v1
|
2016-10-21T23:17:03Z
|
2016-10-21T23:17:03Z
|
Learning Cost-Effective Treatment Regimes using Markov Decision
Processes
|
Decision makers, such as doctors and judges, make crucial decisions such as
recommending treatments to patients, and granting bails to defendants on a
daily basis. Such decisions typically involve weighting the potential benefits
of taking an action against the costs involved. In this work, we aim to
automate this task of learning \emph{cost-effective, interpretable and
actionable treatment regimes}. We formulate this as a problem of learning a
decision list -- a sequence of if-then-else rules -- which maps characteristics
of subjects (eg., diagnostic test results of patients) to treatments. We
propose a novel objective to construct a decision list which maximizes outcomes
for the population, and minimizes overall costs. We model the problem of
learning such a list as a Markov Decision Process (MDP) and employ a variant of
the Upper Confidence Bound for Trees (UCT) strategy which leverages customized
checks for pruning the search space effectively. Experimental results on real
world observational data capturing judicial bail decisions and treatment
recommendations for asthma patients demonstrate the effectiveness of our
approach.
|
[
"['Himabindu Lakkaraju' 'Cynthia Rudin']",
"Himabindu Lakkaraju, Cynthia Rudin"
] |
cs.LG
| null |
1610.06998
| null | null |
http://arxiv.org/pdf/1610.06998v1
|
2016-10-22T05:19:44Z
|
2016-10-22T05:19:44Z
|
Ranking of classification algorithms in terms of mean-standard deviation
using A-TOPSIS
|
In classification problems when multiples algorithms are applied to different
benchmarks a difficult issue arises, i.e., how can we rank the algorithms? In
machine learning it is common run the algorithms several times and then a
statistic is calculated in terms of means and standard deviations. In order to
compare the performance of the algorithms, it is very common to employ
statistical tests. However, these tests may also present limitations, since
they consider only the means and not the standard deviations of the obtained
results. In this paper, we present the so called A-TOPSIS, based on TOPSIS
(Technique for Order Preference by Similarity to Ideal Solution), to solve the
problem of ranking and comparing classification algorithms in terms of means
and standard deviations. We use two case studies to illustrate the A-TOPSIS for
ranking classification algorithms and the results show the suitability of
A-TOPSIS to rank the algorithms. The presented approach is general and can be
applied to compare the performance of stochastic algorithms in machine
learning. Finally, to encourage researchers to use the A-TOPSIS for ranking
algorithms we also presented in this work an easy-to-use A-TOPSIS web
framework.
|
[
"Andre G. C. Pacheco and Renato A. Krohling",
"['Andre G. C. Pacheco' 'Renato A. Krohling']"
] |
cs.CV cs.LG
| null |
1610.07031
| null | null |
http://arxiv.org/pdf/1610.07031v3
|
2017-07-22T22:09:17Z
|
2016-10-22T10:46:01Z
|
Exercise Motion Classification from Large-Scale Wearable Sensor Data
Using Convolutional Neural Networks
|
The ability to accurately identify human activities is essential for
developing automatic rehabilitation and sports training systems. In this paper,
large-scale exercise motion data obtained from a forearm-worn wearable sensor
are classified with a convolutional neural network (CNN). Time-series data
consisting of accelerometer and orientation measurements are formatted as
images, allowing the CNN to automatically extract discriminative features. A
comparative study on the effects of image formatting and different CNN
architectures is also presented. The best performing configuration classifies
50 gym exercises with 92.1% accuracy.
|
[
"Terry Taewoong Um, Vahid Babakeshizadeh and Dana Kuli\\'c",
"['Terry Taewoong Um' 'Vahid Babakeshizadeh' 'Dana Kulić']"
] |
stat.ML cs.LG
| null |
1610.07116
| null | null |
http://arxiv.org/pdf/1610.07116v2
|
2018-02-10T17:55:44Z
|
2016-10-23T02:56:03Z
|
Online Classification with Complex Metrics
|
We present a framework and analysis of consistent binary classification for
complex and non-decomposable performance metrics such as the F-measure and the
Jaccard measure. The proposed framework is general, as it applies to both batch
and online learning, and to both linear and non-linear models. Our work follows
recent results showing that the Bayes optimal classifier for many complex
metrics is given by a thresholding of the conditional probability of the
positive class. This manuscript extends this thresholding characterization --
showing that the utility is strictly locally quasi-concave with respect to the
threshold for a wide range of models and performance metrics. This, in turn,
motivates simple normalized gradient ascent updates for threshold estimation.
We present a finite-sample regret analysis for the resulting procedure. In
particular, the risk for the batch case converges to the Bayes risk at the same
rate as that of the underlying conditional probability estimation, and the risk
of proposed online algorithm converges at a rate that depends on the
conditional probability estimation risk. For instance, in the special case
where the conditional probability model is logistic regression, our procedure
achieves $O(\frac{1}{\sqrt{n}})$ sample complexity, both for batch and online
training. Empirical evaluation shows that the proposed algorithms out-perform
alternatives in practice, with comparable or better prediction performance and
reduced run time for various metrics and datasets.
|
[
"Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar",
"['Bowei Yan' 'Oluwasanmi Koyejo' 'Kai Zhong' 'Pradeep Ravikumar']"
] |
cs.LG cs.IR stat.ML
| null |
1610.07119
| null | null |
http://arxiv.org/pdf/1610.07119v2
|
2017-02-19T03:33:03Z
|
2016-10-23T03:25:05Z
|
Cross Device Matching for Online Advertising with Neural Feature
Ensembles : First Place Solution at CIKM Cup 2016
|
We describe the 1st place winning approach for the CIKM Cup 2016 Challenge.
In this paper, we provide an approach to reasonably identify same users across
multiple devices based on browsing logs. Our approach regards a candidate
ranking problem as pairwise classification and utilizes an unsupervised neural
feature ensemble approach to learn latent features of users. Combined with
traditional hand crafted features, each user pair feature is fed into a
supervised classifier in order to perform pairwise classification. Lastly, we
propose supervised and unsupervised inference techniques.
|
[
"Minh C. Phan, Yi Tay, Tuan-Anh Nguyen Pham",
"['Minh C. Phan' 'Yi Tay' 'Tuan-Anh Nguyen Pham']"
] |
cs.LG
| null |
1610.07183
| null | null |
http://arxiv.org/pdf/1610.07183v1
|
2016-10-23T15:13:46Z
|
2016-10-23T15:13:46Z
|
How to be Fair and Diverse?
|
Due to the recent cases of algorithmic bias in data-driven decision-making,
machine learning methods are being put under the microscope in order to
understand the root cause of these biases and how to correct them. Here, we
consider a basic algorithmic task that is central in machine learning:
subsampling from a large data set. Subsamples are used both as an end-goal in
data summarization (where fairness could either be a legal, political or moral
requirement) and to train algorithms (where biases in the samples are often a
source of bias in the resulting model). Consequently, there is a growing effort
to modify either the subsampling methods or the algorithms themselves in order
to ensure fairness. However, in doing so, a question that seems to be
overlooked is whether it is possible to produce fair subsamples that are also
adequately representative of the feature space of the data set - an important
and classic requirement in machine learning. Can diversity and fairness be
simultaneously ensured? We start by noting that, in some applications,
guaranteeing one does not necessarily guarantee the other, and a new approach
is required. Subsequently, we present an algorithmic framework which allows us
to produce both fair and diverse samples. Our experimental results on an image
summarization task show marked improvements in fairness without compromising
feature diversity by much, giving us the best of both the worlds.
|
[
"['L. Elisa Celis' 'Amit Deshpande' 'Tarun Kathuria' 'Nisheeth K. Vishnoi']",
"L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi"
] |
stat.ML cs.LG
|
10.1016/j.compbiomed.2017.09.007
|
1610.07187
| null | null |
http://arxiv.org/abs/1610.07187v3
|
2017-09-19T09:52:06Z
|
2016-10-23T15:51:46Z
|
Learning Deep Architectures for Interaction Prediction in
Structure-based Virtual Screening
|
We introduce a deep learning architecture for structure-based virtual
screening that generates fixed-sized fingerprints of proteins and small
molecules by applying learnable atom convolution and softmax operations to each
compound separately. These fingerprints are further transformed non-linearly,
their inner-product is calculated and used to predict the binding potential.
Moreover, we show that widely used benchmark datasets may be insufficient for
testing structure-based virtual screening methods that utilize machine
learning. Therefore, we introduce a new benchmark dataset, which we constructed
based on DUD-E and PDBBind databases.
|
[
"['Adam Gonczarek' 'Jakub M. Tomczak' 'Szymon Zaręba' 'Joanna Kaczmar'\n 'Piotr Dąbrowski' 'Michał J. Walczak']",
"Adam Gonczarek, Jakub M. Tomczak, Szymon Zar\\k{e}ba, Joanna Kaczmar,\n Piotr D\\k{a}browski, Micha{\\l} J. Walczak"
] |
cs.LG cs.NE
| null |
1610.07258
| null | null |
http://arxiv.org/pdf/1610.07258v3
|
2016-11-26T21:02:49Z
|
2016-10-24T01:53:12Z
|
Representation Learning with Deconvolution for Multivariate Time Series
Classification and Visualization
|
We propose a new model based on the deconvolutional networks and SAX
discretization to learn the representation for multivariate time series.
Deconvolutional networks fully exploit the advantage the powerful
expressiveness of deep neural networks in the manner of unsupervised learning.
We design a network structure specifically to capture the cross-channel
correlation with deconvolution, forcing the pooling operation to perform the
dimension reduction along each position in the individual channel.
Discretization based on Symbolic Aggregate Approximation is applied on the
feature vectors to further extract the bag of features. We show how this
representation and bag of features helps on classification. A full comparison
with the sequence distance based approach is provided to demonstrate the
effectiveness of our approach on the standard datasets. We further build the
Markov matrix from the discretized representation from the deconvolution to
visualize the time series as complex networks, which show more class-specific
statistical properties and clear structures with respect to different labels.
|
[
"['Zhiguang Wang' 'Wei Song' 'Lu Liu' 'Fan Zhang' 'Junxiao Xue'\n 'Yangdong Ye' 'Ming Fan' 'Mingliang Xu']",
"Zhiguang Wang, Wei Song, Lu Liu, Fan Zhang, Junxiao Xue, Yangdong Ye,\n Ming Fan, Mingliang Xu"
] |
cs.LG cs.HC
| null |
1610.07273
| null | null |
http://arxiv.org/pdf/1610.07273v4
|
2018-08-14T16:54:22Z
|
2016-10-24T03:39:35Z
|
Encoding Temporal Markov Dynamics in Graph for Visualizing and Mining
Time Series
|
Time series and signals are attracting more attention across statistics,
machine learning and pattern recognition as it appears widely in the industry
especially in sensor and IoT related research and applications, but few
advances has been achieved in effective time series visual analytics and
interaction due to its temporal dimensionality and complex dynamics. Inspired
by recent effort on using network metrics to characterize time series for
classification, we present an approach to visualize time series as complex
networks based on the first order Markov process in its temporal ordering. In
contrast to the classical bar charts, line plots and other statistics based
graph, our approach delivers more intuitive visualization that better preserves
both the temporal dependency and frequency structures. It provides a natural
inverse operation to map the graph back to raw signals, making it possible to
use graph statistics to characterize time series for better visual exploration
and statistical analysis. Our experimental results suggest the effectiveness on
various tasks such as pattern discovery and classification on both synthetic
and the real time series and sensor data.
|
[
"Lu Liu, Zhiguang Wang",
"['Lu Liu' 'Zhiguang Wang']"
] |
stat.ML cs.IT cs.LG math.IT
| null |
1610.07379
| null | null |
http://arxiv.org/pdf/1610.07379v1
|
2016-10-24T12:18:00Z
|
2016-10-24T12:18:00Z
|
Truncated Variance Reduction: A Unified Approach to Bayesian
Optimization and Level-Set Estimation
|
We present a new algorithm, truncated variance reduction (TruVaR), that
treats Bayesian optimization (BO) and level-set estimation (LSE) with Gaussian
processes in a unified fashion. The algorithm greedily shrinks a sum of
truncated variances within a set of potential maximizers (BO) or unclassified
points (LSE), which is updated based on confidence bounds. TruVaR is effective
in several important settings that are typically non-trivial to incorporate
into myopic algorithms, including pointwise costs and heteroscedastic noise. We
provide a general theoretical guarantee for TruVaR covering these aspects, and
use it to recover and strengthen existing results on BO and LSE. Moreover, we
provide a new result for a setting where one can select from a number of noise
levels having associated costs. We demonstrate the effectiveness of the
algorithm on both synthetic and real-world data sets.
|
[
"['Ilija Bogunovic' 'Jonathan Scarlett' 'Andreas Krause' 'Volkan Cevher']",
"Ilija Bogunovic and Jonathan Scarlett and Andreas Krause and Volkan\n Cevher"
] |
cs.NI cs.LG
| null |
1610.07419
| null | null |
http://arxiv.org/pdf/1610.07419v1
|
2016-10-24T14:07:56Z
|
2016-10-24T14:07:56Z
|
Using Machine Learning to Detect Noisy Neighbors in 5G Networks
|
5G networks are expected to be more dynamic and chaotic in their structure
than current networks. With the advent of Network Function Virtualization
(NFV), Network Functions (NF) will no longer be tightly coupled with the
hardware they are running on, which poses new challenges in network management.
Noisy neighbor is a term commonly used to describe situations in NFV
infrastructure where an application experiences degradation in performance due
to the fact that some of the resources it needs are occupied by other
applications in the same cloud node. These situations cannot be easily
identified using straightforward approaches, which calls for the use of
sophisticated methods for NFV infrastructure management. In this paper we
demonstrate how Machine Learning (ML) techniques can be used to identify such
events. Through experiments using data collected at real NFV infrastructure, we
show that standard models for automated classification can detect the noisy
neighbor phenomenon with an accuracy of more than 90% in a simple scenario.
|
[
"Udi Margolin, Alberto Mozo, Bruno Ordozgoiti, Danny Raz, Elisha\n Rosensweig, Itai Segall",
"['Udi Margolin' 'Alberto Mozo' 'Bruno Ordozgoiti' 'Danny Raz'\n 'Elisha Rosensweig' 'Itai Segall']"
] |
stat.ML cs.LG
|
10.1016/j.neunet.2017.04.004
|
1610.07448
| null | null |
http://arxiv.org/abs/1610.07448v3
|
2017-04-20T08:55:19Z
|
2016-10-24T14:58:56Z
|
A Framework for Parallel and Distributed Training of Neural Networks
|
The aim of this paper is to develop a general framework for training neural
networks (NNs) in a distributed environment, where training data is partitioned
over a set of agents that communicate with each other through a sparse,
possibly time-varying, connectivity pattern. In such distributed scenario, the
training problem can be formulated as the (regularized) optimization of a
non-convex social cost function, given by the sum of local (non-convex) costs,
where each agent contributes with a single error term defined with respect to
its local dataset. To devise a flexible and efficient solution, we customize a
recently proposed framework for non-convex optimization over networks, which
hinges on a (primal) convexification-decomposition technique to handle
non-convexity, and a dynamic consensus procedure to diffuse information among
the agents. Several typical choices for the training criterion (e.g., squared
loss, cross entropy, etc.) and regularization (e.g., $\ell_2$ norm, sparsity
inducing penalties, etc.) are included in the framework and explored along the
paper. Convergence to a stationary solution of the social non-convex problem is
guaranteed under mild assumptions. Additionally, we show a principled way
allowing each agent to exploit a possible multi-core architecture (e.g., a
local cloud) in order to parallelize its local optimization step, resulting in
strategies that are both distributed (across the agents) and parallel (inside
each agent) in nature. A comprehensive set of experimental results validate the
proposed approach.
|
[
"['Simone Scardapane' 'Paolo Di Lorenzo']",
"Simone Scardapane and Paolo Di Lorenzo"
] |
math.OC cs.LG stat.ML
| null |
1610.07519
| null | null |
http://arxiv.org/pdf/1610.07519v2
|
2017-01-20T15:20:57Z
|
2016-10-24T18:10:11Z
|
A Variational Bayesian Approach for Image Restoration. Application to
Image Deblurring with Poisson-Gaussian Noise
|
In this paper, a methodology is investigated for signal recovery in the
presence of non-Gaussian noise. In contrast with regularized minimization
approaches often adopted in the literature, in our algorithm the regularization
parameter is reliably estimated from the observations. As the posterior density
of the unknown parameters is analytically intractable, the estimation problem
is derived in a variational Bayesian framework where the goal is to provide a
good approximation to the posterior distribution in order to compute posterior
mean estimates. Moreover, a majorization technique is employed to circumvent
the difficulties raised by the intricate forms of the non-Gaussian likelihood
and of the prior density. We demonstrate the potential of the proposed approach
through comparisons with state-of-the-art techniques that are specifically
tailored to signal recovery in the presence of mixed Poisson-Gaussian noise.
Results show that the proposed approach is efficient and achieves performance
comparable with other methods where the regularization parameter is manually
tuned from the ground truth.
|
[
"Yosra Marnissi, Yuling Zheng, Emilie Chouzenoux, Jean-Christophe\n Pesquet",
"['Yosra Marnissi' 'Yuling Zheng' 'Emilie Chouzenoux'\n 'Jean-Christophe Pesquet']"
] |
cs.SY cs.LG
| null |
1610.0752
| null | null | null | null | null |
Nonlinear Adaptive Algorithms on Rank-One Tensor Models
|
This work proposes a low complexity nonlinearity model and develops adaptive
algorithms over it. The model is based on the decomposable---or rank-one, in
tensor language---Volterra kernels. It may also be described as a product of
FIR filters, which explains its low-complexity. The rank-one model is also
interesting because it comes from a well-posed problem in approximation theory.
The paper uses such model in an estimation theory context to develop an exact
gradient-type algorithm, from which adaptive algorithms such as the least mean
squares (LMS) filter and its data-reuse version---the TRUE-LMS---are derived.
Stability and convergence issues are addressed. The algorithms are then tested
in simulations, which show its good performance when compared to other
nonlinear processing algorithms in the literature.
|
[
"Felipe C. Pinheiro, Cassio G. Lopes"
] |
null | null |
1610.07520
| null | null |
http://arxiv.org/pdf/1610.07520v1
|
2016-10-24T18:12:18Z
|
2016-10-24T18:12:18Z
|
Nonlinear Adaptive Algorithms on Rank-One Tensor Models
|
This work proposes a low complexity nonlinearity model and develops adaptive algorithms over it. The model is based on the decomposable---or rank-one, in tensor language---Volterra kernels. It may also be described as a product of FIR filters, which explains its low-complexity. The rank-one model is also interesting because it comes from a well-posed problem in approximation theory. The paper uses such model in an estimation theory context to develop an exact gradient-type algorithm, from which adaptive algorithms such as the least mean squares (LMS) filter and its data-reuse version---the TRUE-LMS---are derived. Stability and convergence issues are addressed. The algorithms are then tested in simulations, which show its good performance when compared to other nonlinear processing algorithms in the literature.
|
[
"['Felipe C. Pinheiro' 'Cassio G. Lopes']"
] |
cs.LG
| null |
1610.07563
| null | null |
http://arxiv.org/pdf/1610.07563v1
|
2016-10-24T19:27:52Z
|
2016-10-24T19:27:52Z
|
On Multiplicative Multitask Feature Learning
|
We investigate a general framework of multiplicative multitask feature
learning which decomposes each task's model parameters into a multiplication of
two components. One of the components is used across all tasks and the other
component is task-specific. Several previous methods have been proposed as
special cases of our framework. We study the theoretical properties of this
framework when different regularization conditions are applied to the two
decomposed components. We prove that this framework is mathematically
equivalent to the widely used multitask feature learning methods that are based
on a joint regularization of all model parameters, but with a more general form
of regularizers. Further, an analytical formula is derived for the across-task
component as related to the task-specific component for all these regularizers,
leading to a better understanding of the shrinkage effect. Study of this
framework motivates new multitask learning algorithms. We propose two new
learning formulations by varying the parameters in the proposed framework.
Empirical studies have revealed the relative advantages of the two new
formulations by comparing with the state of the art, which provides instructive
insights into the feature learning problem with multiple tasks.
|
[
"['Xin Wang' 'Jinbo Bi' 'Shipeng Yu' 'Jiangwen Sun']",
"Xin Wang, Jinbo Bi, Shipeng Yu, Jiangwen Sun"
] |
cs.CL cs.LG stat.ML
| null |
1610.07569
| null | null |
http://arxiv.org/pdf/1610.07569v1
|
2016-10-24T19:35:29Z
|
2016-10-24T19:35:29Z
|
Geometry of Polysemy
|
Vector representations of words have heralded a transformational approach to
classical problems in NLP; the most popular example is word2vec. However, a
single vector does not suffice to model the polysemous nature of many
(frequent) words, i.e., words with multiple meanings. In this paper, we propose
a three-fold approach for unsupervised polysemy modeling: (a) context
representations, (b) sense induction and disambiguation and (c) lexeme (as a
word and sense pair) representations. A key feature of our work is the finding
that a sentence containing a target word is well represented by a low rank
subspace, instead of a point in a vector space. We then show that the subspaces
associated with a particular sense of the target word tend to intersect over a
line (one-dimensional subspace), which we use to disambiguate senses using a
clustering algorithm that harnesses the Grassmannian geometry of the
representations. The disambiguation algorithm, which we call $K$-Grassmeans,
leads to a procedure to label the different senses of the target word in the
corpus -- yielding lexeme vector representations, all in an unsupervised manner
starting from a large (Wikipedia) corpus in English. Apart from several
prototypical target (word,sense) examples and a host of empirical studies to
intuit and justify the various geometric representations, we validate our
algorithms on standard sense induction and disambiguation datasets and present
new state-of-the-art results.
|
[
"['Jiaqi Mu' 'Suma Bhat' 'Pramod Viswanath']",
"Jiaqi Mu, Suma Bhat, Pramod Viswanath"
] |
cs.CV cs.LG
| null |
1610.07584
| null | null |
http://arxiv.org/pdf/1610.07584v2
|
2017-01-04T18:35:52Z
|
2016-10-24T19:53:41Z
|
Learning a Probabilistic Latent Space of Object Shapes via 3D
Generative-Adversarial Modeling
|
We study the problem of 3D object generation. We propose a novel framework,
namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects
from a probabilistic space by leveraging recent advances in volumetric
convolutional networks and generative adversarial nets. The benefits of our
model are three-fold: first, the use of an adversarial criterion, instead of
traditional heuristic criteria, enables the generator to capture object
structure implicitly and to synthesize high-quality 3D objects; second, the
generator establishes a mapping from a low-dimensional probabilistic space to
the space of 3D objects, so that we can sample objects without a reference
image or CAD models, and explore the 3D object manifold; third, the adversarial
discriminator provides a powerful 3D shape descriptor which, learned without
supervision, has wide applications in 3D object recognition. Experiments
demonstrate that our method generates high-quality 3D objects, and our
unsupervisedly learned features achieve impressive performance on 3D object
recognition, comparable with those of supervised learning methods.
|
[
"['Jiajun Wu' 'Chengkai Zhang' 'Tianfan Xue' 'William T. Freeman'\n 'Joshua B. Tenenbaum']",
"Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, Joshua B.\n Tenenbaum"
] |
cs.CV cs.LG
| null |
1610.07629
| null | null |
http://arxiv.org/pdf/1610.07629v5
|
2017-02-09T16:29:09Z
|
2016-10-24T20:06:54Z
|
A Learned Representation For Artistic Style
|
The diversity of painting styles represents a rich visual vocabulary for the
construction of an image. The degree to which one may learn and parsimoniously
capture this visual vocabulary measures our understanding of the higher level
features of paintings, if not images in general. In this work we investigate
the construction of a single, scalable deep network that can parsimoniously
capture the artistic style of a diversity of paintings. We demonstrate that
such a network generalizes across a diversity of artistic styles by reducing a
painting to a point in an embedding space. Importantly, this model permits a
user to explore new painting styles by arbitrarily combining the styles learned
from individual paintings. We hope that this work provides a useful step
towards building rich models of paintings and offers a window on to the
structure of the learned representation of artistic style.
|
[
"Vincent Dumoulin, Jonathon Shlens, Manjunath Kudlur",
"['Vincent Dumoulin' 'Jonathon Shlens' 'Manjunath Kudlur']"
] |
stat.ML cs.LG
| null |
1610.0765
| null | null | null | null | null |
A Theoretical Analysis of Noisy Sparse Subspace Clustering on
Dimensionality-Reduced Data
|
Subspace clustering is the problem of partitioning unlabeled data points into
a number of clusters so that data points within one cluster lie approximately
on a low-dimensional linear subspace. In many practical scenarios, the
dimensionality of data points to be clustered are compressed due to constraints
of measurement, computation or privacy. In this paper, we study the theoretical
properties of a popular subspace clustering algorithm named sparse subspace
clustering (SSC) and establish formal success conditions of SSC on
dimensionality-reduced data. Our analysis applies to the most general fully
deterministic model where both underlying subspaces and data points within each
subspace are deterministically positioned, and also a wide range of
dimensionality reduction techniques (e.g., Gaussian random projection, uniform
subsampling, sketching) that fall into a subspace embedding framework (Meng &
Mahoney, 2013; Avron et al., 2014). Finally, we apply our analysis to a
differentially private SSC algorithm and established both privacy and utility
guarantees of the proposed method.
|
[
"Yining Wang, Yu-Xiang Wang and Aarti Singh"
] |
null | null |
1610.07650
| null | null |
http://arxiv.org/pdf/1610.07650v1
|
2016-10-24T20:54:07Z
|
2016-10-24T20:54:07Z
|
A Theoretical Analysis of Noisy Sparse Subspace Clustering on
Dimensionality-Reduced Data
|
Subspace clustering is the problem of partitioning unlabeled data points into a number of clusters so that data points within one cluster lie approximately on a low-dimensional linear subspace. In many practical scenarios, the dimensionality of data points to be clustered are compressed due to constraints of measurement, computation or privacy. In this paper, we study the theoretical properties of a popular subspace clustering algorithm named sparse subspace clustering (SSC) and establish formal success conditions of SSC on dimensionality-reduced data. Our analysis applies to the most general fully deterministic model where both underlying subspaces and data points within each subspace are deterministically positioned, and also a wide range of dimensionality reduction techniques (e.g., Gaussian random projection, uniform subsampling, sketching) that fall into a subspace embedding framework (Meng & Mahoney, 2013; Avron et al., 2014). Finally, we apply our analysis to a differentially private SSC algorithm and established both privacy and utility guarantees of the proposed method.
|
[
"['Yining Wang' 'Yu-Xiang Wang' 'Aarti Singh']"
] |
cs.LG
| null |
1610.07667
| null | null |
http://arxiv.org/pdf/1610.07667v2
|
2016-10-26T06:11:07Z
|
2016-10-24T22:12:52Z
|
Predicting Counterfactuals from Large Historical Data and Small
Randomized Trials
|
When a new treatment is considered for use, whether a pharmaceutical drug or
a search engine ranking algorithm, a typical question that arises is, will its
performance exceed that of the current treatment? The conventional way to
answer this counterfactual question is to estimate the effect of the new
treatment in comparison to that of the conventional treatment by running a
controlled, randomized experiment. While this approach theoretically ensures an
unbiased estimator, it suffers from several drawbacks, including the difficulty
in finding representative experimental populations as well as the cost of
running such trials. Moreover, such trials neglect the huge quantities of
available control-condition data which are often completely ignored.
In this paper we propose a discriminative framework for estimating the
performance of a new treatment given a large dataset of the control condition
and data from a small (and possibly unrepresentative) randomized trial
comparing new and old treatments. Our objective, which requires minimal
assumptions on the treatments, models the relation between the outcomes of the
different conditions. This allows us to not only estimate mean effects but also
to generate individual predictions for examples outside the randomized sample.
We demonstrate the utility of our approach through experiments in three
areas: Search engine operation, treatments to diabetes patients, and market
value estimation for houses. Our results demonstrate that our approach can
reduce the number and size of the currently performed randomized controlled
experiments, thus saving significant time, money and effort on the part of
practitioners.
|
[
"['Nir Rosenfeld' 'Yishay Mansour' 'Elad Yom-Tov']",
"Nir Rosenfeld, Yishay Mansour, Elad Yom-Tov"
] |
cs.LG cs.AI cs.NE
| null |
1610.07675
| null | null |
http://arxiv.org/pdf/1610.07675v6
|
2016-12-13T23:32:24Z
|
2016-10-24T22:38:52Z
|
Surprisal-Driven Zoneout
|
We propose a novel method of regularization for recurrent neural networks
called suprisal-driven zoneout. In this method, states zoneout (maintain their
previous value rather than updating), when the suprisal (discrepancy between
the last state's prediction and target) is small. Thus regularization is
adaptive and input-driven on a per-neuron basis. We demonstrate the
effectiveness of this idea by achieving state-of-the-art bits per character of
1.31 on the Hutter Prize Wikipedia dataset, significantly reducing the gap to
the best known highly-engineered compression methods.
|
[
"['Kamil Rocki' 'Tomasz Kornuta' 'Tegan Maharaj']",
"Kamil Rocki, Tomasz Kornuta, Tegan Maharaj"
] |
stat.ML cs.LG
| null |
1610.07677
| null | null |
http://arxiv.org/pdf/1610.07677v1
|
2016-10-24T23:07:16Z
|
2016-10-24T23:07:16Z
|
A Bayesian Ensemble for Unsupervised Anomaly Detection
|
Methods for unsupervised anomaly detection suffer from the fact that the data
is unlabeled, making it difficult to assess the optimality of detection
algorithms. Ensemble learning has shown exceptional results in classification
and clustering problems, but has not seen as much research in the context of
outlier detection. Existing methods focus on combining output scores of
individual detectors, but this leads to outputs that are not easily
interpretable. In this paper, we introduce a theoretical foundation for
combining individual detectors with Bayesian classifier combination. Not only
are posterior distributions easily interpreted as the probability distribution
of anomalies, but bias, variance, and individual error rates of detectors are
all easily obtained. Performance on real-world datasets shows high accuracy
across varied types of time series data.
|
[
"Edward Yu, Parth Parekh",
"['Edward Yu' 'Parth Parekh']"
] |
cs.LG
| null |
1610.07686
| null | null |
http://arxiv.org/pdf/1610.07686v1
|
2016-10-25T00:01:33Z
|
2016-10-25T00:01:33Z
|
Co-Occuring Directions Sketching for Approximate Matrix Multiply
|
We introduce co-occurring directions sketching, a deterministic algorithm for
approximate matrix product (AMM), in the streaming model. We show that
co-occuring directions achieves a better error bound for AMM than other
randomized and deterministic approaches for AMM. Co-occurring directions gives
a $1 + \epsilon$ -approximation of the optimal low rank approximation of a
matrix product. Empirically our algorithm outperforms competing methods for
AMM, for a small sketch size. We validate empirically our theoretical findings
and algorithms
|
[
"Youssef Mroueh, Etienne Marcheret, Vaibhava Goel",
"['Youssef Mroueh' 'Etienne Marcheret' 'Vaibhava Goel']"
] |
cs.LG
| null |
1610.07717
| null | null |
http://arxiv.org/pdf/1610.07717v3
|
2017-05-19T21:20:18Z
|
2016-10-25T03:31:58Z
|
Distributed and parallel time series feature extraction for industrial
big data applications
|
The all-relevant problem of feature selection is the identification of all
strongly and weakly relevant attributes. This problem is especially hard to
solve for time series classification and regression in industrial applications
such as predictive maintenance or production line optimization, for which each
label or regression target is associated with several time series and
meta-information simultaneously. Here, we are proposing an efficient, scalable
feature extraction algorithm for time series, which filters the available
features in an early stage of the machine learning pipeline with respect to
their significance for the classification or regression task, while controlling
the expected percentage of selected but irrelevant features. The proposed
algorithm combines established feature extraction methods with a feature
importance filter. It has a low computational complexity, allows to start on a
problem with only limited domain knowledge available, can be trivially
parallelized, is highly scalable and based on well studied non-parametric
hypothesis tests. We benchmark our proposed algorithm on all binary
classification problems of the UCR time series classification archive as well
as time series from a production line optimization project and simulated
stochastic processes with underlying qualitative change of dynamics.
|
[
"['Maximilian Christ' 'Andreas W. Kempa-Liehr' 'Michael Feindt']",
"Maximilian Christ, Andreas W. Kempa-Liehr, Michael Feindt"
] |
cs.LG cs.NA
| null |
1610.07722
| null | null |
http://arxiv.org/pdf/1610.07722v1
|
2016-10-25T04:08:11Z
|
2016-10-25T04:08:11Z
|
Sparse Hierarchical Tucker Factorization and its Application to
Healthcare
|
We propose a new tensor factorization method, called the Sparse
Hierarchical-Tucker (Sparse H-Tucker), for sparse and high-order data tensors.
Sparse H-Tucker is inspired by its namesake, the classical Hierarchical Tucker
method, which aims to compute a tree-structured factorization of an input data
set that may be readily interpreted by a domain expert. However, Sparse
H-Tucker uses a nested sampling technique to overcome a key scalability problem
in Hierarchical Tucker, which is the creation of an unwieldy intermediate dense
core tensor; the result of our approach is a faster, more space-efficient, and
more accurate method. We extensively test our method on a real healthcare
dataset, which is collected from 30K patients and results in an 18th order
sparse data tensor. Unlike competing methods, Sparse H-Tucker can analyze the
full data set on a single multi-threaded machine. It can also do so more
accurately and in less time than the state-of-the-art: on a 12th order subset
of the input data, Sparse H-Tucker is 18x more accurate and 7.5x faster than a
previously state-of-the-art method. Even for analyzing low order tensors (e.g.,
4-order), our method requires close to an order of magnitude less time and over
two orders of magnitude less memory, as compared to traditional tensor
factorization methods such as CP and Tucker. Moreover, we observe that Sparse
H-Tucker scales nearly linearly in the number of non-zero tensor elements. The
resulting model also provides an interpretable disease hierarchy, which is
confirmed by a clinical expert.
|
[
"Ioakeim Perros and Robert Chen and Richard Vuduc and Jimeng Sun",
"['Ioakeim Perros' 'Robert Chen' 'Richard Vuduc' 'Jimeng Sun']"
] |
stat.ML cs.LG
| null |
1610.07733
| null | null |
http://arxiv.org/pdf/1610.07733v1
|
2016-10-25T05:10:49Z
|
2016-10-25T05:10:49Z
|
Approximate cross-validation formula for Bayesian linear regression
|
Cross-validation (CV) is a technique for evaluating the ability of
statistical models/learning systems based on a given data set. Despite its wide
applicability, the rather heavy computational cost can prevent its use as the
system size grows. To resolve this difficulty in the case of Bayesian linear
regression, we develop a formula for evaluating the leave-one-out CV error
approximately without actually performing CV. The usefulness of the developed
formula is tested by statistical mechanical analysis for a synthetic model.
This is confirmed by application to a real-world supernova data set as well.
|
[
"Yoshiyuki Kabashima, Tomoyuki Obuchi, Makoto Uemura",
"['Yoshiyuki Kabashima' 'Tomoyuki Obuchi' 'Makoto Uemura']"
] |
cs.NE cs.LG
| null |
1610.07752
| null | null |
http://arxiv.org/pdf/1610.07752v1
|
2016-10-25T07:11:11Z
|
2016-10-25T07:11:11Z
|
Big Models for Big Data using Multi objective averaged one dependence
estimators
|
Even though, many researchers tried to explore the various possibilities on
multi objective feature selection, still it is yet to be explored with best of
its capabilities in data mining applications rather than going for developing
new ones. In this paper, multi-objective evolutionary algorithm ENORA is used
to select the features in a multi-class classification problem. The fusion of
AnDE (averaged n-dependence estimators) with n=1, a variant of naive Bayes with
efficient feature selection by ENORA is performed in order to obtain a fast
hybrid classifier which can effectively learn from big data. This method aims
at solving the problem of finding optimal feature subset from full data which
at present still remains to be a difficult problem. The efficacy of the
obtained classifier is extensively evaluated with a range of most popular 21
real world dataset, ranging from small to big. The results obtained are
encouraging in terms of time, Root mean square error, zero-one loss and
classification accuracy.
|
[
"['Mrutyunjaya Panda']",
"Mrutyunjaya Panda"
] |
math.OC cs.LG stat.ML
| null |
1610.07797
| null | null |
http://arxiv.org/pdf/1610.07797v3
|
2017-03-03T21:34:24Z
|
2016-10-25T09:14:40Z
|
Frank-Wolfe Algorithms for Saddle Point Problems
|
We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained
smooth convex-concave saddle point (SP) problems. Remarkably, the method only
requires access to linear minimization oracles. Leveraging recent advances in
FW optimization, we provide the first proof of convergence of a FW-type saddle
point solver over polytopes, thereby partially answering a 30 year-old
conjecture. We also survey other convergence results and highlight gaps in the
theoretical underpinnings of FW-style algorithms. Motivating applications
without known efficient alternatives are explored through structured prediction
with combinatorial penalties as well as games over matching polytopes involving
an exponential number of constraints.
|
[
"Gauthier Gidel, Tony Jebara and Simon Lacoste-Julien",
"['Gauthier Gidel' 'Tony Jebara' 'Simon Lacoste-Julien']"
] |
cs.LG cs.NE stat.ML
| null |
1610.07857
| null | null |
http://arxiv.org/pdf/1610.07857v1
|
2016-10-21T09:11:53Z
|
2016-10-21T09:11:53Z
|
Hybrid clustering-classification neural network in the medical
diagnostics of reactive arthritis
|
The hybrid clustering-classification neural network is proposed. This network
allows increasing a quality of information processing under the condition of
overlapping classes due to the rational choice of a learning rate parameter and
introducing a special procedure of fuzzy reasoning in the clustering process,
which occurs both with an external learning signal (supervised) and without the
one (unsupervised). As similarity measure neighborhood function or membership
one, cosine structures are used, which allow to provide a high flexibility due
to self-learning-learning process and to provide some new useful properties.
Many realized experiments have confirmed the efficiency of proposed hybrid
clustering-classification neural network; also, this network was used for
solving diagnostics task of reactive arthritis.
|
[
"['Yevgeniy Bodyanskiy' 'Olena Vynokurova' 'Volodymyr Savvo'\n 'Tatiana Tverdokhlib' 'Pavlo Mulesa']",
"Yevgeniy Bodyanskiy, Olena Vynokurova, Volodymyr Savvo, Tatiana\n Tverdokhlib, Pavlo Mulesa"
] |
cs.LG cs.FL
| null |
1610.07883
| null | null |
http://arxiv.org/pdf/1610.07883v1
|
2016-10-25T14:10:11Z
|
2016-10-25T14:10:11Z
|
Generalization Bounds for Weighted Automata
|
This paper studies the problem of learning weighted automata from a finite
labeled training sample. We consider several general families of weighted
automata defined in terms of three different measures: the norm of an
automaton's weights, the norm of the function computed by an automaton, or the
norm of the corresponding Hankel matrix. We present new data-dependent
generalization guarantees for learning weighted automata expressed in terms of
the Rademacher complexity of these families. We further present upper bounds on
these Rademacher complexities, which reveal key new data-dependent terms
related to the complexity of learning weighted automata.
|
[
"['Borja Balle' 'Mehryar Mohri']",
"Borja Balle and Mehryar Mohri"
] |
stat.ML cs.LG
| null |
1610.08077
| null | null |
http://arxiv.org/pdf/1610.08077v1
|
2016-10-25T20:18:24Z
|
2016-10-25T20:18:24Z
|
A statistical framework for fair predictive algorithms
|
Predictive modeling is increasingly being employed to assist human
decision-makers. One purported advantage of replacing human judgment with
computer models in high stakes settings-- such as sentencing, hiring, policing,
college admissions, and parole decisions-- is the perceived "neutrality" of
computers. It is argued that because computer models do not hold personal
prejudice, the predictions they produce will be equally free from prejudice.
There is growing recognition that employing algorithms does not remove the
potential for bias, and can even amplify it, since training data were
inevitably generated by a process that is itself biased. In this paper, we
provide a probabilistic definition of algorithmic bias. We propose a method to
remove bias from predictive models by removing all information regarding
protected variables from the permitted training data. Unlike previous work in
this area, our framework is general enough to accommodate arbitrary data types,
e.g. binary, continuous, etc. Motivated by models currently in use in the
criminal justice system that inform decisions on pre-trial release and
paroling, we apply our proposed method to a dataset on the criminal histories
of individuals at the time of sentencing to produce "race-neutral" predictions
of re-arrest. In the process, we demonstrate that the most common approach to
creating "race-neutral" models-- omitting race as a covariate-- still results
in racially disparate predictions. We then demonstrate that the application of
our proposed method to these data removes racial disparities from predictions
with minimal impact on predictive accuracy.
|
[
"['Kristian Lum' 'James Johndrow']",
"Kristian Lum and James Johndrow"
] |
cs.RO cs.CV cs.LG
| null |
1610.0812
| null | null | null | null | null |
Image Segmentation for Fruit Detection and Yield Estimation in Apple
Orchards
|
Ground vehicles equipped with monocular vision systems are a valuable source
of high resolution image data for precision agriculture applications in
orchards. This paper presents an image processing framework for fruit detection
and counting using orchard image data. A general purpose image segmentation
approach is used, including two feature learning algorithms; multi-scale
Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These
networks were extended by including contextual information about how the image
data was captured (metadata), which correlates with some of the appearance
variations and/or class distributions observed in the data. The pixel-wise
fruit segmentation output is processed using the Watershed Segmentation (WS)
and Circular Hough Transform (CHT) algorithms to detect and count individual
fruits. Experiments were conducted in a commercial apple orchard near
Melbourne, Australia. The results show an improvement in fruit segmentation
performance with the inclusion of metadata on the previously benchmarked MLP
network. We extend this work with CNNs, bringing agrovision closer to the
state-of-the-art in computer vision, where although metadata had negligible
influence, the best pixel-wise F1-score of $0.791$ was achieved. The WS
algorithm produced the best apple detection and counting results, with a
detection F1-score of $0.858$. As a final step, image fruit counts were
accumulated over multiple rows at the orchard and compared against the
post-harvest fruit counts that were obtained from a grading and counting
machine. The count estimates using CNN and WS resulted in the best performance
for this dataset, with a squared correlation coefficient of $r^2=0.826$.
|
[
"Suchet Bargoti, James Underwood"
] |
null | null |
1610.08120
| null | null |
http://arxiv.org/pdf/1610.08120v1
|
2016-10-25T23:38:02Z
|
2016-10-25T23:38:02Z
|
Image Segmentation for Fruit Detection and Yield Estimation in Apple
Orchards
|
Ground vehicles equipped with monocular vision systems are a valuable source of high resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data. A general purpose image segmentation approach is used, including two feature learning algorithms; multi-scale Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These networks were extended by including contextual information about how the image data was captured (metadata), which correlates with some of the appearance variations and/or class distributions observed in the data. The pixel-wise fruit segmentation output is processed using the Watershed Segmentation (WS) and Circular Hough Transform (CHT) algorithms to detect and count individual fruits. Experiments were conducted in a commercial apple orchard near Melbourne, Australia. The results show an improvement in fruit segmentation performance with the inclusion of metadata on the previously benchmarked MLP network. We extend this work with CNNs, bringing agrovision closer to the state-of-the-art in computer vision, where although metadata had negligible influence, the best pixel-wise F1-score of $0.791$ was achieved. The WS algorithm produced the best apple detection and counting results, with a detection F1-score of $0.858$. As a final step, image fruit counts were accumulated over multiple rows at the orchard and compared against the post-harvest fruit counts that were obtained from a grading and counting machine. The count estimates using CNN and WS resulted in the best performance for this dataset, with a squared correlation coefficient of $r^2=0.826$.
|
[
"['Suchet Bargoti' 'James Underwood']"
] |
cs.LG stat.ML
| null |
1610.08123
| null | null |
http://arxiv.org/pdf/1610.08123v4
|
2017-09-28T07:40:29Z
|
2016-10-25T23:43:49Z
|
Socratic Learning: Augmenting Generative Models to Incorporate Latent
Subsets in Training Data
|
A challenge in training discriminative models like neural networks is
obtaining enough labeled training data. Recent approaches use generative models
to combine weak supervision sources, like user-defined heuristics or knowledge
bases, to label training data. Prior work has explored learning accuracies for
these sources even without ground truth labels, but they assume that a single
accuracy parameter is sufficient to model the behavior of these sources over
the entire training set. In particular, they fail to model latent subsets in
the training data in which the supervision sources perform differently than on
average. We present Socratic learning, a paradigm that uses feedback from a
corresponding discriminative model to automatically identify these subsets and
augments the structure of the generative model accordingly. Experimentally, we
show that without any ground truth labels, the augmented generative model
reduces error by up to 56.06% for a relation extraction task compared to a
state-of-the-art weak supervision technique that utilizes generative models.
|
[
"Paroma Varma, Bryan He, Dan Iter, Peng Xu, Rose Yu, Christopher De Sa,\n Christopher R\\'e",
"['Paroma Varma' 'Bryan He' 'Dan Iter' 'Peng Xu' 'Rose Yu'\n 'Christopher De Sa' 'Christopher Ré']"
] |
cs.LG cs.AI cs.NA stat.ML
| null |
1610.08127
| null | null |
http://arxiv.org/pdf/1610.08127v1
|
2016-10-26T00:10:44Z
|
2016-10-26T00:10:44Z
|
Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation
|
We present a fast variational Bayesian algorithm for performing non-negative
matrix factorisation and tri-factorisation. We show that our approach achieves
faster convergence per iteration and timestep (wall-clock) than Gibbs sampling
and non-probabilistic approaches, and do not require additional samples to
estimate the posterior. We show that in particular for matrix tri-factorisation
convergence is difficult, but our variational Bayesian approach offers a fast
solution, allowing the tri-factorisation approach to be used more effectively.
|
[
"['Thomas Brouwer' 'Jes Frellsen' \"Pietro Lio'\"]",
"Thomas Brouwer, Jes Frellsen, Pietro Lio'"
] |
stat.ML cs.LG cs.SD
|
10.1121/1.4972527
|
1610.08166
| null | null |
http://arxiv.org/abs/1610.08166v1
|
2016-10-26T04:50:35Z
|
2016-10-26T04:50:35Z
|
Automatic measurement of vowel duration via structured prediction
|
A key barrier to making phonetic studies scalable and replicable is the need
to rely on subjective, manual annotation. To help meet this challenge, a
machine learning algorithm was developed for automatic measurement of a widely
used phonetic measure: vowel duration. Manually-annotated data were used to
train a model that takes as input an arbitrary length segment of the acoustic
signal containing a single vowel that is preceded and followed by consonants
and outputs the duration of the vowel. The model is based on the structured
prediction framework. The input signal and a hypothesized set of a vowel's
onset and offset are mapped to an abstract vector space by a set of acoustic
feature functions. The learning algorithm is trained in this space to minimize
the difference in expectations between predicted and manually-measured vowel
durations. The trained model can then automatically estimate vowel durations
without phonetic or orthographic transcription. Results comparing the model to
three sets of manually annotated data suggest it out-performed the current gold
standard for duration measurement, an HMM-based forced aligner (which requires
orthographic or phonetic transcription as an input).
|
[
"Yossi Adi, Joseph Keshet, Emily Cibelli, Erin Gustafson, Cynthia\n Clopper, Matthew Goldrick",
"['Yossi Adi' 'Joseph Keshet' 'Emily Cibelli' 'Erin Gustafson'\n 'Cynthia Clopper' 'Matthew Goldrick']"
] |
cs.LG cs.CL
| null |
1610.08229
| null | null |
http://arxiv.org/pdf/1610.08229v1
|
2016-10-26T08:48:10Z
|
2016-10-26T08:48:10Z
|
Word Embeddings and Their Use In Sentence Classification Tasks
|
This paper have two parts. In the first part we discuss word embeddings. We
discuss the need for them, some of the methods to create them, and some of
their interesting properties. We also compare them to image embeddings and see
how word embedding and image embedding can be combined to perform different
tasks. In the second part we implement a convolutional neural network trained
on top of pre-trained word vectors. The network is used for several
sentence-level classification tasks, and achieves state-of-art (or comparable)
results, demonstrating the great power of pre-trainted word embeddings over
random ones.
|
[
"Amit Mandelbaum and Adi Shalev",
"['Amit Mandelbaum' 'Adi Shalev']"
] |
cs.LG math.ST stat.ML stat.TH
|
10.1007/978-3-319-46379-7_17
|
1610.08239
| null | null |
http://arxiv.org/abs/1610.08239v2
|
2016-10-27T12:13:37Z
|
2016-10-26T09:13:28Z
|
Things Bayes can't do
|
The problem of forecasting conditional probabilities of the next event given
the past is considered in a general probabilistic setting. Given an arbitrary
(large, uncountable) set C of predictors, we would like to construct a single
predictor that performs asymptotically as well as the best predictor in C, on
any data. Here we show that there are sets C for which such predictors exist,
but none of them is a Bayesian predictor with a prior concentrated on C. In
other words, there is a predictor with sublinear regret, but every Bayesian
predictor must have a linear regret. This negative finding is in sharp contrast
with previous results that establish the opposite for the case when one of the
predictors in $C$ achieves asymptotically vanishing error. In such a case, if
there is a predictor that achieves asymptotically vanishing error for any
measure in C, then there is a Bayesian predictor that also has this property,
and whose prior is concentrated on (a countable subset of) C.
|
[
"Daniil Ryabko",
"['Daniil Ryabko']"
] |
math.ST cs.IT cs.LG math.IT stat.TH
| null |
1610.08249
| null | null |
http://arxiv.org/pdf/1610.08249v2
|
2016-11-01T13:01:03Z
|
2016-10-26T09:29:32Z
|
Universality of Bayesian mixture predictors
|
The problem is that of sequential probability forecasting for finite-valued
time series. The data is generated by an unknown probability distribution over
the space of all one-way infinite sequences. It is known that this measure
belongs to a given set C, but the latter is completely arbitrary (uncountably
infinite, without any structure given). The performance is measured with
asymptotic average log loss. In this work it is shown that the minimax
asymptotic performance is always attainable, and it is attained by a convex
combination of a countably many measures from the set C (a Bayesian mixture).
This was previously only known for the case when the best achievable asymptotic
error is 0. This also contrasts previous results that show that in the
non-realizable case all Bayesian mixtures may be suboptimal, while there is a
predictor that achieves the optimal performance.
|
[
"Daniil Ryabko",
"['Daniil Ryabko']"
] |
cs.LG
| null |
1610.0825
| null | null | null | null | null |
An Improved Approach for Prediction of Parkinson's Disease using Machine
Learning Techniques
|
Parkinson's disease (PD) is one of the major public health problems in the
world. It is a well-known fact that around one million people suffer from
Parkinson's disease in the United States whereas the number of people suffering
from Parkinson's disease worldwide is around 5 million. Thus, it is important
to predict Parkinson's disease in early stages so that early plan for the
necessary treatment can be made. People are mostly familiar with the motor
symptoms of Parkinson's disease, however, an increasing amount of research is
being done to predict the Parkinson's disease from non-motor symptoms that
precede the motor ones. If an early and reliable prediction is possible then a
patient can get a proper treatment at the right time. Nonmotor symptoms
considered are Rapid Eye Movement (REM) sleep Behaviour Disorder (RBD) and
olfactory loss. Developing machine learning models that can help us in
predicting the disease can play a vital role in early prediction. In this
paper, we extend a work which used the non-motor features such as RBD and
olfactory loss. Along with this the extended work also uses important
biomarkers. In this paper, we try to model this classifier using different
machine learning models that have not been used before. We developed automated
diagnostic models using Multilayer Perceptron, BayesNet, Random Forest and
Boosted Logistic Regression. It has been observed that Boosted Logistic
Regression provides the best performance with an impressive accuracy of 97.159
% and the area under the ROC curve was 98.9%. Thus, it is concluded that these
models can be used for early prediction of Parkinson's disease.
|
[
"Kamal Nayan Reddy Challa, Venkata Sasank Pagolu, Ganapati Panda,\n Babita Majhi"
] |
null | null |
1610.08250
| null | null |
http://arxiv.org/pdf/1610.08250v1
|
2016-10-26T09:34:39Z
|
2016-10-26T09:34:39Z
|
An Improved Approach for Prediction of Parkinson's Disease using Machine
Learning Techniques
|
Parkinson's disease (PD) is one of the major public health problems in the world. It is a well-known fact that around one million people suffer from Parkinson's disease in the United States whereas the number of people suffering from Parkinson's disease worldwide is around 5 million. Thus, it is important to predict Parkinson's disease in early stages so that early plan for the necessary treatment can be made. People are mostly familiar with the motor symptoms of Parkinson's disease, however, an increasing amount of research is being done to predict the Parkinson's disease from non-motor symptoms that precede the motor ones. If an early and reliable prediction is possible then a patient can get a proper treatment at the right time. Nonmotor symptoms considered are Rapid Eye Movement (REM) sleep Behaviour Disorder (RBD) and olfactory loss. Developing machine learning models that can help us in predicting the disease can play a vital role in early prediction. In this paper, we extend a work which used the non-motor features such as RBD and olfactory loss. Along with this the extended work also uses important biomarkers. In this paper, we try to model this classifier using different machine learning models that have not been used before. We developed automated diagnostic models using Multilayer Perceptron, BayesNet, Random Forest and Boosted Logistic Regression. It has been observed that Boosted Logistic Regression provides the best performance with an impressive accuracy of 97.159 % and the area under the ROC curve was 98.9%. Thus, it is concluded that these models can be used for early prediction of Parkinson's disease.
|
[
"['Kamal Nayan Reddy Challa' 'Venkata Sasank Pagolu' 'Ganapati Panda'\n 'Babita Majhi']"
] |
quant-ph cs.AI cs.LG
|
10.1103/PhysRevLett.117.130501
|
1610.08251
| null | null |
http://arxiv.org/abs/1610.08251v1
|
2016-10-26T09:35:11Z
|
2016-10-26T09:35:11Z
|
Quantum-enhanced machine learning
|
The emerging field of quantum machine learning has the potential to
substantially aid in the problems and scope of artificial intelligence. This is
only enhanced by recent successes in the field of classical machine learning.
In this work we propose an approach for the systematic treatment of machine
learning, from the perspective of quantum information. Our approach is general
and covers all three main branches of machine learning: supervised,
unsupervised and reinforcement learning. While quantum improvements in
supervised and unsupervised learning have been reported, reinforcement learning
has received much less attention. Within our approach, we tackle the problem of
quantum enhancements in reinforcement learning as well, and propose a
systematic scheme for providing improvements. As an example, we show that
quadratic improvements in learning efficiency, and exponential improvements in
performance over limited time periods, can be obtained for a broad class of
learning problems.
|
[
"['Vedran Dunjko' 'Jacob M. Taylor' 'Hans J. Briegel']",
"Vedran Dunjko, Jacob M. Taylor, Hans J. Briegel"
] |
cs.CV cs.AI cs.LG stat.ML
| null |
1610.08401
| null | null |
http://arxiv.org/pdf/1610.08401v3
|
2017-03-09T17:01:25Z
|
2016-10-26T16:30:45Z
|
Universal adversarial perturbations
|
Given a state-of-the-art deep neural network classifier, we show the
existence of a universal (image-agnostic) and very small perturbation vector
that causes natural images to be misclassified with high probability. We
propose a systematic algorithm for computing universal perturbations, and show
that state-of-the-art deep neural networks are highly vulnerable to such
perturbations, albeit being quasi-imperceptible to the human eye. We further
empirically analyze these universal perturbations and show, in particular, that
they generalize very well across neural networks. The surprising existence of
universal perturbations reveals important geometric correlations among the
high-dimensional decision boundary of classifiers. It further outlines
potential security breaches with the existence of single directions in the
input space that adversaries can possibly exploit to break a classifier on most
natural images.
|
[
"['Seyed-Mohsen Moosavi-Dezfooli' 'Alhussein Fawzi' 'Omar Fawzi'\n 'Pascal Frossard']",
"Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal\n Frossard"
] |
cs.RO cs.LG
|
10.1109/IROS.2015.7353783
|
1610.08424
| null | null |
http://arxiv.org/abs/1610.08424v1
|
2016-10-26T17:00:23Z
|
2016-10-26T17:00:23Z
|
Counterfactual Reasoning about Intent for Interactive Navigation in
Dynamic Environments
|
Many modern robotics applications require robots to function autonomously in
dynamic environments including other decision making agents, such as people or
other robots. This calls for fast and scalable interactive motion planning.
This requires models that take into consideration the other agent's intended
actions in one's own planning. We present a real-time motion planning framework
that brings together a few key components including intention inference by
reasoning counterfactually about potential motion of the other agents as they
work towards different goals. By using a light-weight motion model, we achieve
efficient iterative planning for fluid motion when avoiding pedestrians, in
parallel with goal inference for longer range movement prediction. This
inference framework is coupled with a novel distributed visual tracking method
that provides reliable and robust models for the current belief-state of the
monitored environment. This combined approach represents a computationally
efficient alternative to previously studied policy learning methods that often
require significant offline training or calibration and do not yet scale to
densely populated environments. We validate this framework with experiments
involving multi-robot and human-robot navigation. We further validate the
tracker component separately on much larger scale unconstrained pedestrian data
sets.
|
[
"['A. Bordallo' 'F. Previtali' 'N. Nardelli' 'S. Ramamoorthy']",
"A. Bordallo, F. Previtali, N. Nardelli, S. Ramamoorthy"
] |
stat.ML cs.LG
|
10.1145/3038912.3052660
|
1610.08452
| null | null |
http://arxiv.org/abs/1610.08452v2
|
2017-03-08T19:04:28Z
|
2016-10-26T18:34:48Z
|
Fairness Beyond Disparate Treatment & Disparate Impact: Learning
Classification without Disparate Mistreatment
|
Automated data-driven decision making systems are increasingly being used to
assist, or even replace humans in many settings. These systems function by
learning from historical decisions, often taken by humans. In order to maximize
the utility of these systems (or, classifiers), their training involves
minimizing the errors (or, misclassifications) over the given historical data.
However, it is quite possible that the optimally trained classifier makes
decisions for people belonging to different social groups with different
misclassification rates (e.g., misclassification rates for females are higher
than for males), thereby placing these groups at an unfair disadvantage. To
account for and avoid such unfairness, in this paper, we introduce a new notion
of unfairness, disparate mistreatment, which is defined in terms of
misclassification rates. We then propose intuitive measures of disparate
mistreatment for decision boundary-based classifiers, which can be easily
incorporated into their formulation as convex-concave constraints. Experiments
on synthetic as well as real world datasets show that our methodology is
effective at avoiding disparate mistreatment, often at a small cost in terms of
accuracy.
|
[
"Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna\n P. Gummadi",
"['Muhammad Bilal Zafar' 'Isabel Valera' 'Manuel Gomez Rodriguez'\n 'Krishna P. Gummadi']"
] |
cs.LG stat.ML
| null |
1610.08495
| null | null |
http://arxiv.org/pdf/1610.08495v1
|
2016-09-12T17:48:38Z
|
2016-09-12T17:48:38Z
|
Adaptive matching pursuit for sparse signal recovery
|
Spike and Slab priors have been of much recent interest in signal processing
as a means of inducing sparsity in Bayesian inference. Applications domains
that benefit from the use of these priors include sparse recovery, regression
and classification. It is well-known that solving for the sparse coefficient
vector to maximize these priors results in a hard non-convex and mixed integer
programming problem. Most existing solutions to this optimization problem
either involve simplifying assumptions/relaxations or are computationally
expensive. We propose a new greedy and adaptive matching pursuit (AMP)
algorithm to directly solve this hard problem. Essentially, in each step of the
algorithm, the set of active elements would be updated by either adding or
removing one index, whichever results in better improvement. In addition, the
intermediate steps of the algorithm are calculated via an inexpensive Cholesky
decomposition which makes the algorithm much faster. Results on simulated data
sets as well as real-world image recovery challenges confirm the benefits of
the proposed AMP, particularly in providing a superior cost-quality trade-off
over existing alternatives.
|
[
"Tiep H. Vu, Hojjat S. Mousavi, Vishal Monga",
"['Tiep H. Vu' 'Hojjat S. Mousavi' 'Vishal Monga']"
] |
cs.RO cs.AI cs.LG
| null |
1610.085
| null | null | null | null | null |
Synthesis of Shared Control Protocols with Provable Safety and
Performance Guarantees
|
We formalize synthesis of shared control protocols with correctness
guarantees for temporal logic specifications. More specifically, we introduce a
modeling formalism in which both a human and an autonomy protocol can issue
commands to a robot towards performing a certain task. These commands are
blended into a joint input to the robot. The autonomy protocol is synthesized
using an abstraction of possible human commands accounting for randomness in
decisions caused by factors such as fatigue or incomprehensibility of the
problem at hand. The synthesis is designed to ensure that the resulting robot
behavior satisfies given safety and performance specifications, e.g., in
temporal logic. Our solution is based on nonlinear programming and we address
the inherent scalability issue by presenting alternative methods. We assess the
feasibility and the scalability of the approach by an experimental evaluation.
|
[
"Nils Jansen and Murat Cubuktepe and Ufuk Topcu"
] |
null | null |
1610.08500
| null | null |
http://arxiv.org/pdf/1610.08500v1
|
2016-10-26T19:49:09Z
|
2016-10-26T19:49:09Z
|
Synthesis of Shared Control Protocols with Provable Safety and
Performance Guarantees
|
We formalize synthesis of shared control protocols with correctness guarantees for temporal logic specifications. More specifically, we introduce a modeling formalism in which both a human and an autonomy protocol can issue commands to a robot towards performing a certain task. These commands are blended into a joint input to the robot. The autonomy protocol is synthesized using an abstraction of possible human commands accounting for randomness in decisions caused by factors such as fatigue or incomprehensibility of the problem at hand. The synthesis is designed to ensure that the resulting robot behavior satisfies given safety and performance specifications, e.g., in temporal logic. Our solution is based on nonlinear programming and we address the inherent scalability issue by presenting alternative methods. We assess the feasibility and the scalability of the approach by an experimental evaluation.
|
[
"['Nils Jansen' 'Murat Cubuktepe' 'Ufuk Topcu']"
] |
stat.ML cs.LG
| null |
1610.08611
| null | null |
http://arxiv.org/pdf/1610.08611v1
|
2016-10-27T04:17:46Z
|
2016-10-27T04:17:46Z
|
Causal Network Learning from Multiple Interventions of Unknown
Manipulated Targets
|
In this paper, we discuss structure learning of causal networks from multiple
data sets obtained by external intervention experiments where we do not know
what variables are manipulated. For example, the conditions in these
experiments are changed by changing temperature or using drugs, but we do not
know what target variables are manipulated by the external interventions. From
such data sets, the structure learning becomes more difficult. For this case,
we first discuss the identifiability of causal structures. Next we present a
graph-merging method for learning causal networks for the case that the sample
sizes are large for these interventions. Then for the case that the sample
sizes of these interventions are relatively small, we propose a data-pooling
method for learning causal networks in which we pool all data sets of these
interventions together for the learning. Further we propose a re-sampling
approach to evaluate the edges of the causal network learned by the
data-pooling method. Finally we illustrate the proposed learning methods by
simulations.
|
[
"['Yango He' 'Zhi Geng']",
"Yango He, Zhi Geng"
] |
cs.LG cs.CL
| null |
1610.08613
| null | null |
http://arxiv.org/pdf/1610.08613v2
|
2017-03-07T04:04:33Z
|
2016-10-27T04:28:29Z
|
Can Active Memory Replace Attention?
|
Several mechanisms to focus attention of a neural network on selected parts
of its input or memory have been used successfully in deep learning models in
recent years. Attention has improved image classification, image captioning,
speech recognition, generative models, and learning algorithmic tasks, but it
had probably the largest impact on neural machine translation.
Recently, similar improvements have been obtained using alternative
mechanisms that do not focus on a single part of a memory but operate on all of
it in parallel, in a uniform way. Such mechanism, which we call active memory,
improved over attention in algorithmic tasks, image processing, and in
generative modelling.
So far, however, active memory has not improved over attention for most
natural language processing tasks, in particular for machine translation. We
analyze this shortcoming in this paper and propose an extended model of active
memory that matches existing attention models on neural machine translation and
generalizes better to longer sentences. We investigate this model and explain
why previous active memory models did not succeed. Finally, we discuss when
active memory brings most benefits and where attention can be a better choice.
|
[
"{\\L}ukasz Kaiser and Samy Bengio",
"['Łukasz Kaiser' 'Samy Bengio']"
] |
stat.ML cs.LG
| null |
1610.08628
| null | null |
http://arxiv.org/pdf/1610.08628v1
|
2016-10-27T06:19:27Z
|
2016-10-27T06:19:27Z
|
Regret Bounds for Lifelong Learning
|
We consider the problem of transfer learning in an online setting. Different
tasks are presented sequentially and processed by a within-task algorithm. We
propose a lifelong learning strategy which refines the underlying data
representation used by the within-task algorithm, thereby transferring
information from one task to the next. We show that when the within-task
algorithm comes with some regret bound, our strategy inherits this good
property. Our bounds are in expectation for a general loss function, and
uniform for a convex loss. We discuss applications to dictionary learning and
finite set of predictors. In the latter case, we improve previous
$O(1/\sqrt{m})$ bounds to $O(1/m)$ where $m$ is the per task sample size.
|
[
"Pierre Alquier and The Tien Mai and Massimiliano Pontil",
"['Pierre Alquier' 'The Tien Mai' 'Massimiliano Pontil']"
] |
q-bio.QM cs.LG
|
10.1016/j.compbiolchem.2018.01.009
|
1610.08664
| null | null |
http://arxiv.org/abs/1610.08664v1
|
2016-10-27T08:52:19Z
|
2016-10-27T08:52:19Z
|
A random version of principal component analysis in data clustering
|
Principal component analysis (PCA) is a widespread technique for data
analysis that relies on the covariance-correlation matrix of the analyzed data.
However to properly work with high-dimensional data, PCA poses severe
mathematical constraints on the minimum number of different replicates or
samples that must be included in the analysis. Here we show that a modified
algorithm works not only on well dimensioned datasets, but also on degenerated
ones.
|
[
"Luigi Leonardo Palese",
"['Luigi Leonardo Palese']"
] |
stat.ML cs.LG
| null |
1610.08696
| null | null |
http://arxiv.org/pdf/1610.08696v3
|
2017-01-18T04:41:17Z
|
2016-10-27T10:50:55Z
|
Learning Bound for Parameter Transfer Learning
|
We consider a transfer-learning problem by using the parameter transfer
approach, where a suitable parameter of feature mapping is learned through one
task and applied to another objective task. Then, we introduce the notion of
the local stability and parameter transfer learnability of parametric feature
mapping,and thereby derive a learning bound for parameter transfer algorithms.
As an application of parameter transfer learning, we discuss the performance of
sparse coding in self-taught learning. Although self-taught learning algorithms
with plentiful unlabeled data often show excellent empirical performance, their
theoretical analysis has not been studied. In this paper, we also provide the
first theoretical learning bound for self-taught learning.
|
[
"['Wataru Kumagai']",
"Wataru Kumagai"
] |
cs.LG stat.ML
| null |
1610.08738
| null | null |
http://arxiv.org/pdf/1610.08738v4
|
2017-02-10T15:22:24Z
|
2016-10-27T12:13:05Z
|
Compressive K-means
|
The Lloyd-Max algorithm is a classical approach to perform K-means
clustering. Unfortunately, its cost becomes prohibitive as the training dataset
grows large. We propose a compressive version of K-means (CKM), that estimates
cluster centers from a sketch, i.e. from a drastically compressed
representation of the training dataset. We demonstrate empirically that CKM
performs similarly to Lloyd-Max, for a sketch size proportional to the number
of cen-troids times the ambient dimension, and independent of the size of the
original dataset. Given the sketch, the computational complexity of CKM is also
independent of the size of the dataset. Unlike Lloyd-Max which requires several
replicates, we further demonstrate that CKM is almost insensitive to
initialization. For a large dataset of 10^7 data points, we show that CKM can
run two orders of magnitude faster than five replicates of Lloyd-Max, with
similar clustering performance on artificial data. Finally, CKM achieves lower
classification errors on handwritten digits classification.
|
[
"['Nicolas Keriven' 'Nicolas Tremblay' 'Yann Traonmilin' 'Rémi Gribonval']",
"Nicolas Keriven (PANAMA), Nicolas Tremblay (GIPSA-CICS), Yann\n Traonmilin (PANAMA), R\\'emi Gribonval (PANAMA)"
] |
stat.ML cs.CR cs.LG stat.ME
| null |
1610.08749
| null | null |
http://arxiv.org/pdf/1610.08749v2
|
2017-04-10T12:59:55Z
|
2016-10-27T12:34:36Z
|
Differentially Private Variational Inference for Non-conjugate Models
|
Many machine learning applications are based on data collected from people,
such as their tastes and behaviour as well as biological traits and genetic
data. Regardless of how important the application might be, one has to make
sure individuals' identities or the privacy of the data are not compromised in
the analysis. Differential privacy constitutes a powerful framework that
prevents breaching of data subject privacy from the output of a computation.
Differentially private versions of many important Bayesian inference methods
have been proposed, but there is a lack of an efficient unified approach
applicable to arbitrary models. In this contribution, we propose a
differentially private variational inference method with a very wide
applicability. It is built on top of doubly stochastic variational inference, a
recent advance which provides a variational solution to a large class of
models. We add differential privacy into doubly stochastic variational
inference by clipping and perturbing the gradients. The algorithm is made more
efficient through privacy amplification from subsampling. We demonstrate the
method can reach an accuracy close to non-private level under reasonably strong
privacy guarantees, clearly improving over previous sampling-based alternatives
especially in the strong privacy regime.
|
[
"['Joonas Jälkö' 'Onur Dikmen' 'Antti Honkela']",
"Joonas J\\\"alk\\\"o and Onur Dikmen and Antti Honkela"
] |
cs.CL cs.LG
| null |
1610.08763
| null | null |
http://arxiv.org/pdf/1610.08763v2
|
2017-06-02T19:28:19Z
|
2016-10-27T13:20:25Z
|
CoType: Joint Extraction of Typed Entities and Relations with Knowledge
Bases
|
Extracting entities and relations for types of interest from text is
important for understanding massive text corpora. Traditionally, systems of
entity relation extraction have relied on human-annotated corpora for training
and adopted an incremental pipeline. Such systems require additional human
expertise to be ported to a new domain, and are vulnerable to errors cascading
down the pipeline. In this paper, we investigate joint extraction of typed
entities and relations with labeled data heuristically obtained from knowledge
bases (i.e., distant supervision). As our algorithm for type labeling via
distant supervision is context-agnostic, noisy training data poses unique
challenges for the task. We propose a novel domain-independent framework,
called CoType, that runs a data-driven text segmentation algorithm to extract
entity mentions, and jointly embeds entity mentions, relation mentions, text
features and type labels into two low-dimensional spaces (for entity and
relation mentions respectively), where, in each space, objects whose types are
close will also have similar representations. CoType, then using these learned
embeddings, estimates the types of test (unlinkable) mentions. We formulate a
joint optimization problem to learn embeddings from text corpora and knowledge
bases, adopting a novel partial-label loss function for noisy labeled data and
introducing an object "translation" function to capture the cross-constraints
of entities and relations on each other. Experiments on three public datasets
demonstrate the effectiveness of CoType across different domains (e.g., news,
biomedical), with an average of 25% improvement in F1 score compared to the
next best method.
|
[
"['Xiang Ren' 'Zeqiu Wu' 'Wenqi He' 'Meng Qu' 'Clare R. Voss' 'Heng Ji'\n 'Tarek F. Abdelzaher' 'Jiawei Han']",
"Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Tarek\n F. Abdelzaher, Jiawei Han"
] |
stat.ML cs.LG
| null |
1610.08838
| null | null |
http://arxiv.org/pdf/1610.08838v1
|
2016-10-27T15:30:35Z
|
2016-10-27T15:30:35Z
|
A Category Space Approach to Supervised Dimensionality Reduction
|
Supervised dimensionality reduction has emerged as an important theme in the
last decade. Despite the plethora of models and formulations, there is a lack
of a simple model which aims to project the set of patterns into a space
defined by the classes (or categories). To this end, we set up a model in which
each class is represented as a 1D subspace of the vector space formed by the
features. Assuming the set of classes does not exceed the cardinality of the
features, the model results in multi-class supervised learning in which the
features of each class are projected into the class subspace. Class
discrimination is automatically guaranteed via the imposition of orthogonality
of the 1D class sub-spaces. The resulting optimization problem - formulated as
the minimization of a sum of quadratic functions on a Stiefel manifold - while
being non-convex (due to the constraints), nevertheless has a structure for
which we can identify when we have reached a global minimum. After formulating
a version with standard inner products, we extend the formulation to
reproducing kernel Hilbert spaces in a straightforward manner. The optimization
approach also extends in a similar fashion to the kernel version. Results and
comparisons with the multi-class Fisher linear (and kernel) discriminants and
principal component analysis (linear and kernel) showcase the relative merits
of this approach to dimensionality reduction.
|
[
"['Anthony O. Smith' 'Anand Rangarajan']",
"Anthony O. Smith and Anand Rangarajan"
] |
stat.ML cs.LG
| null |
1610.08861
| null | null |
http://arxiv.org/pdf/1610.08861v1
|
2016-10-27T16:09:30Z
|
2016-10-27T16:09:30Z
|
On Bochner's and Polya's Characterizations of Positive-Definite Kernels
and the Respective Random Feature Maps
|
Positive-definite kernel functions are fundamental elements of kernel methods
and Gaussian processes. A well-known construction of such functions comes from
Bochner's characterization, which connects a positive-definite function with a
probability distribution. Another construction, which appears to have attracted
less attention, is Polya's criterion that characterizes a subset of these
functions. In this paper, we study the latter characterization and derive a
number of novel kernels little known previously.
In the context of large-scale kernel machines, Rahimi and Recht (2007)
proposed a random feature map (random Fourier) that approximates a kernel
function, through independent sampling of the probability distribution in
Bochner's characterization. The authors also suggested another feature map
(random binning), which, although not explicitly stated, comes from Polya's
characterization. We show that with the same number of random samples, the
random binning map results in an Euclidean inner product closer to the kernel
than does the random Fourier map. The superiority of the random binning map is
confirmed empirically through regressions and classifications in the
reproducing kernel Hilbert space.
|
[
"['Jie Chen' 'Dehua Cheng' 'Yan Liu']",
"Jie Chen, Dehua Cheng, Yan Liu"
] |
cs.CV cs.LG
| null |
1610.08904
| null | null |
http://arxiv.org/pdf/1610.08904v1
|
2016-10-27T17:51:18Z
|
2016-10-27T17:51:18Z
|
Local Similarity-Aware Deep Feature Embedding
|
Existing deep embedding methods in vision tasks are capable of learning a
compact Euclidean space from images, where Euclidean distances correspond to a
similarity metric. To make learning more effective and efficient, hard sample
mining is usually employed, with samples identified through computing the
Euclidean feature distance. However, the global Euclidean distance cannot
faithfully characterize the true feature similarity in a complex visual feature
space, where the intraclass distance in a high-density region may be larger
than the interclass distance in low-density regions. In this paper, we
introduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of
learning a similarity metric adaptive to local feature structure. The metric
can be used to select genuinely hard samples in a local neighborhood to guide
the deep embedding learning in an online and robust manner. The new layer is
appealing in that it is pluggable to any convolutional networks and is trained
end-to-end. Our local similarity-aware feature embedding not only demonstrates
faster convergence and boosted performance on two complex image retrieval
datasets, its large margin nature also leads to superior generalization results
under the large and open set scenarios of transfer learning and zero-shot
learning on ImageNet 2010 and ImageNet-10K datasets.
|
[
"Chen Huang, Chen Change Loy, Xiaoou Tang",
"['Chen Huang' 'Chen Change Loy' 'Xiaoou Tang']"
] |
cs.LG cs.AI stat.ML
| null |
1610.08936
| null | null |
http://arxiv.org/pdf/1610.08936v3
|
2017-10-05T01:14:56Z
|
2016-10-27T19:08:57Z
|
Learning Scalable Deep Kernels with Recurrent Structure
|
Many applications in speech, robotics, finance, and biology deal with
sequential data, where ordering matters and recurrent structures are common.
However, this structure cannot be easily captured by standard kernel functions.
To model such structure, we propose expressive closed-form kernel functions for
Gaussian processes. The resulting model, GP-LSTM, fully encapsulates the
inductive biases of long short-term memory (LSTM) recurrent networks, while
retaining the non-parametric probabilistic advantages of Gaussian processes. We
learn the properties of the proposed kernels by optimizing the Gaussian process
marginal likelihood using a new provably convergent semi-stochastic gradient
procedure and exploit the structure of these kernels for scalable training and
prediction. This approach provides a practical representation for Bayesian
LSTMs. We demonstrate state-of-the-art performance on several benchmarks, and
thoroughly investigate a consequential autonomous driving application, where
the predictive uncertainties provided by GP-LSTM are uniquely valuable.
|
[
"['Maruan Al-Shedivat' 'Andrew Gordon Wilson' 'Yunus Saatchi' 'Zhiting Hu'\n 'Eric P. Xing']",
"Maruan Al-Shedivat, Andrew Gordon Wilson, Yunus Saatchi, Zhiting Hu,\n Eric P. Xing"
] |
cs.CV cs.LG cs.SD
| null |
1610.09001
| null | null |
http://arxiv.org/pdf/1610.09001v1
|
2016-10-27T20:23:39Z
|
2016-10-27T20:23:39Z
|
SoundNet: Learning Sound Representations from Unlabeled Video
|
We learn rich natural sound representations by capitalizing on large amounts
of unlabeled sound data collected in the wild. We leverage the natural
synchronization between vision and sound to learn an acoustic representation
using two-million unlabeled videos. Unlabeled video has the advantage that it
can be economically acquired at massive scales, yet contains useful signals
about natural sound. We propose a student-teacher training procedure which
transfers discriminative visual knowledge from well established visual
recognition models into the sound modality using unlabeled video as a bridge.
Our sound representation yields significant performance improvements over the
state-of-the-art results on standard benchmarks for acoustic scene/object
classification. Visualizations suggest some high-level semantics automatically
emerge in the sound network, even though it is trained without ground truth
labels.
|
[
"['Yusuf Aytar' 'Carl Vondrick' 'Antonio Torralba']",
"Yusuf Aytar, Carl Vondrick, Antonio Torralba"
] |
cs.CV cs.LG cs.MM
| null |
1610.09003
| null | null |
http://arxiv.org/pdf/1610.09003v1
|
2016-10-27T20:24:36Z
|
2016-10-27T20:24:36Z
|
Cross-Modal Scene Networks
|
People can recognize scenes across many different modalities beyond natural
images. In this paper, we investigate how to learn cross-modal scene
representations that transfer across modalities. To study this problem, we
introduce a new cross-modal scene dataset. While convolutional neural networks
can categorize scenes well, they also learn an intermediate representation not
aligned across modalities, which is undesirable for cross-modal transfer
applications. We present methods to regularize cross-modal convolutional neural
networks so that they have a shared representation that is agnostic of the
modality. Our experiments suggest that our scene representation can help
transfer representations across modalities for retrieval. Moreover, our
visualizations suggest that units emerge in the shared representation that tend
to activate on consistent concepts independently of the modality.
|
[
"Yusuf Aytar, Lluis Castrejon, Carl Vondrick, Hamed Pirsiavash, Antonio\n Torralba",
"['Yusuf Aytar' 'Lluis Castrejon' 'Carl Vondrick' 'Hamed Pirsiavash'\n 'Antonio Torralba']"
] |
cs.LG
| null |
1610.09027
| null | null |
http://arxiv.org/pdf/1610.09027v1
|
2016-10-27T22:38:05Z
|
2016-10-27T22:38:05Z
|
Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes
|
Neural networks augmented with external memory have the ability to learn
algorithmic solutions to complex tasks. These models appear promising for
applications such as language modeling and machine translation. However, they
scale poorly in both space and time as the amount of memory grows --- limiting
their applicability to real-world domains. Here, we present an end-to-end
differentiable memory access scheme, which we call Sparse Access Memory (SAM),
that retains the representational power of the original approaches whilst
training efficiently with very large memories. We show that SAM achieves
asymptotic lower bounds in space and time complexity, and find that an
implementation runs $1,\!000\times$ faster and with $3,\!000\times$ less
physical memory than non-sparse models. SAM learns with comparable data
efficiency to existing models on a range of synthetic tasks and one-shot
Omniglot character recognition, and can scale to tasks requiring $100,\!000$s
of time steps and memories. As well, we show how our approach can be adapted
for models that maintain temporal associations between memories, as with the
recently introduced Differentiable Neural Computer.
|
[
"Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior,\n Greg Wayne, Alex Graves, Timothy P Lillicrap",
"['Jack W Rae' 'Jonathan J Hunt' 'Tim Harley' 'Ivo Danihelka'\n 'Andrew Senior' 'Greg Wayne' 'Alex Graves' 'Timothy P Lillicrap']"
] |
stat.ML cs.LG stat.CO stat.ME
| null |
1610.09033
| null | null |
http://arxiv.org/pdf/1610.09033v3
|
2018-03-15T01:08:06Z
|
2016-10-27T23:32:25Z
|
Operator Variational Inference
|
Variational inference is an umbrella term for algorithms which cast Bayesian
inference as optimization. Classically, variational inference uses the
Kullback-Leibler divergence to define the optimization. Though this divergence
has been widely used, the resultant posterior approximation can suffer from
undesirable statistical properties. To address this, we reexamine variational
inference from its roots as an optimization problem. We use operators, or
functions of functions, to design variational objectives. As one example, we
design a variational objective with a Langevin-Stein operator. We develop a
black box algorithm, operator variational inference (OPVI), for optimizing any
operator objective. Importantly, operators enable us to make explicit the
statistical and computational tradeoffs for variational inference. We can
characterize different properties of variational objectives, such as objectives
that admit data subsampling---allowing inference to scale to massive data---as
well as objectives that admit variational programs---a rich class of posterior
approximations that does not require a tractable density. We illustrate the
benefits of OPVI on a mixture model and a generative model of images.
|
[
"Rajesh Ranganath, Jaan Altosaar, Dustin Tran, David M. Blei",
"['Rajesh Ranganath' 'Jaan Altosaar' 'Dustin Tran' 'David M. Blei']"
] |
stat.ML cs.LG
| null |
1610.09038
| null | null |
http://arxiv.org/pdf/1610.09038v1
|
2016-10-27T23:54:31Z
|
2016-10-27T23:54:31Z
|
Professor Forcing: A New Algorithm for Training Recurrent Networks
|
The Teacher Forcing algorithm trains recurrent networks by supplying observed
sequence values as inputs during training and using the network's own
one-step-ahead predictions to do multi-step sampling. We introduce the
Professor Forcing algorithm, which uses adversarial domain adaptation to
encourage the dynamics of the recurrent network to be the same when training
the network and when sampling from the network over multiple time steps. We
apply Professor Forcing to language modeling, vocal synthesis on raw waveforms,
handwriting generation, and image generation. Empirically we find that
Professor Forcing acts as a regularizer, improving test likelihood on character
level Penn Treebank and sequential MNIST. We also find that the model
qualitatively improves samples, especially when sampling for a large number of
time steps. This is supported by human evaluation of sample quality. Trade-offs
between Professor Forcing and Scheduled Sampling are discussed. We produce
T-SNEs showing that Professor Forcing successfully makes the dynamics of the
network during training and sampling more similar.
|
[
"Alex Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville,\n Yoshua Bengio",
"['Alex Lamb' 'Anirudh Goyal' 'Ying Zhang' 'Saizheng Zhang'\n 'Aaron Courville' 'Yoshua Bengio']"
] |
cs.LG stat.ML
| null |
1610.09072
| null | null |
http://arxiv.org/pdf/1610.09072v1
|
2016-10-28T03:50:00Z
|
2016-10-28T03:50:00Z
|
Orthogonal Random Features
|
We present an intriguing discovery related to Random Fourier Features: in
Gaussian kernel approximation, replacing the random Gaussian matrix by a
properly scaled random orthogonal matrix significantly decreases kernel
approximation error. We call this technique Orthogonal Random Features (ORF),
and provide theoretical and empirical justification for this behavior.
Motivated by this discovery, we further propose Structured Orthogonal Random
Features (SORF), which uses a class of structured discrete orthogonal matrices
to speed up the computation. The method reduces the time cost from
$\mathcal{O}(d^2)$ to $\mathcal{O}(d \log d)$, where $d$ is the data
dimensionality, with almost no compromise in kernel approximation quality
compared to ORF. Experiments on several datasets verify the effectiveness of
ORF and SORF over the existing methods. We also provide discussions on using
the same type of discrete orthogonal structure for a broader range of
applications.
|
[
"['Felix X. Yu' 'Ananda Theertha Suresh' 'Krzysztof Choromanski'\n 'Daniel Holtmann-Rice' 'Sanjiv Kumar']",
"Felix X. Yu, Ananda Theertha Suresh, Krzysztof Choromanski, Daniel\n Holtmann-Rice, Sanjiv Kumar"
] |
stat.ML cs.LG
|
10.1080/08839514.2018.1448143
|
1610.09075
| null | null |
http://arxiv.org/abs/1610.09075v2
|
2018-08-07T01:44:58Z
|
2016-10-28T04:06:59Z
|
Missing Data Imputation for Supervised Learning
|
Missing data imputation can help improve the performance of prediction models
in situations where missing data hide useful information. This paper compares
methods for imputing missing categorical data for supervised classification
tasks. We experiment on two machine learning benchmark datasets with missing
categorical data, comparing classifiers trained on non-imputed (i.e., one-hot
encoded) or imputed data with different levels of additional missing-data
perturbation. We show imputation methods can increase predictive accuracy in
the presence of missing-data perturbation, which can actually improve
prediction accuracy by regularizing the classifier. We achieve the
state-of-the-art on the Adult dataset with missing-data perturbation and
k-nearest-neighbors (k-NN) imputation.
|
[
"Jason Poulos and Rafael Valle",
"['Jason Poulos' 'Rafael Valle']"
] |
cs.LG stat.ML
| null |
1610.09083
| null | null |
http://arxiv.org/pdf/1610.09083v1
|
2016-10-28T05:47:51Z
|
2016-10-28T05:47:51Z
|
SOL: A Library for Scalable Online Learning Algorithms
|
SOL is an open-source library for scalable online learning algorithms, and is
particularly suitable for learning with high-dimensional data. The library
provides a family of regular and sparse online learning algorithms for
large-scale binary and multi-class classification tasks with high efficiency,
scalability, portability, and extensibility. SOL was implemented in C++, and
provided with a collection of easy-to-use command-line tools, python wrappers
and library calls for users and developers, as well as comprehensive documents
for both beginners and advanced users. SOL is not only a practical machine
learning toolbox, but also a comprehensive experimental platform for online
learning research. Experiments demonstrate that SOL is highly efficient and
scalable for large-scale machine learning with high-dimensional data.
|
[
"Yue Wu, Steven C.H. Hoi, Chenghao Liu, Jing Lu, Doyen Sahoo, Nenghai\n Yu",
"['Yue Wu' 'Steven C. H. Hoi' 'Chenghao Liu' 'Jing Lu' 'Doyen Sahoo'\n 'Nenghai Yu']"
] |
cs.IT cs.LG math.IT math.PR math.ST stat.TH
| null |
1610.0911
| null | null | null | null | null |
$f$-Divergence Inequalities via Functional Domination
|
This paper considers derivation of $f$-divergence inequalities via the
approach of functional domination. Bounds on an $f$-divergence based on one or
several other $f$-divergences are introduced, dealing with pairs of probability
measures defined on arbitrary alphabets. In addition, a variety of bounds are
shown to hold under boundedness assumptions on the relative information. The
journal paper, which includes more approaches for the derivation of
f-divergence inequalities and proofs, is available on the arXiv at
https://arxiv.org/abs/1508.00335, and it has been published in the IEEE Trans.
on Information Theory, vol. 62, no. 11, pp. 5973-6006, November 2016.
|
[
"Igal Sason and Sergio Verd\\'u"
] |
null | null |
1610.09110
| null | null |
http://arxiv.org/pdf/1610.09110v1
|
2016-10-28T08:11:26Z
|
2016-10-28T08:11:26Z
|
$f$-Divergence Inequalities via Functional Domination
|
This paper considers derivation of $f$-divergence inequalities via the approach of functional domination. Bounds on an $f$-divergence based on one or several other $f$-divergences are introduced, dealing with pairs of probability measures defined on arbitrary alphabets. In addition, a variety of bounds are shown to hold under boundedness assumptions on the relative information. The journal paper, which includes more approaches for the derivation of f-divergence inequalities and proofs, is available on the arXiv at https://arxiv.org/abs/1508.00335, and it has been published in the IEEE Trans. on Information Theory, vol. 62, no. 11, pp. 5973-6006, November 2016.
|
[
"['Igal Sason' 'Sergio Verdú']"
] |
stat.ML cs.LG
| null |
1610.09127
| null | null |
http://arxiv.org/pdf/1610.09127v2
|
2017-12-14T11:14:32Z
|
2016-10-28T09:04:30Z
|
Adaptive regularization for Lasso models in the context of
non-stationary data streams
|
Large scale, streaming datasets are ubiquitous in modern machine learning.
Streaming algorithms must be scalable, amenable to incremental training and
robust to the presence of non-stationarity. In this work consider the problem
of learning $\ell_1$ regularized linear models in the context of streaming
data. In particular, the focus of this work revolves around how to select the
regularization parameter when data arrives sequentially and the underlying
distribution is non-stationary (implying the choice of optimal regularization
parameter is itself time-varying). We propose a framework through which to
infer an adaptive regularization parameter. Our approach employs an $\ell_1$
penalty constraint where the corresponding sparsity parameter is iteratively
updated via stochastic gradient descent. This serves to reformulate the choice
of regularization parameter in a principled framework for online learning. The
proposed method is derived for linear regression and subsequently extended to
generalized linear models. We validate our approach using simulated and real
datasets and present an application to a neuroimaging dataset.
|
[
"['Ricardo Pio Monti' 'Christoforos Anagnostopoulos' 'Giovanni Montana']",
"Ricardo Pio Monti, Christoforos Anagnostopoulos, Giovanni Montana"
] |
cs.CL cs.LG
| null |
1610.09158
| null | null |
http://arxiv.org/pdf/1610.09158v1
|
2016-10-28T10:26:40Z
|
2016-10-28T10:26:40Z
|
Towards a continuous modeling of natural language domains
|
Humans continuously adapt their style and language to a variety of domains.
However, a reliable definition of `domain' has eluded researchers thus far.
Additionally, the notion of discrete domains stands in contrast to the
multiplicity of heterogeneous domains that humans navigate, many of which
overlap. In order to better understand the change and variation of human
language, we draw on research in domain adaptation and extend the notion of
discrete domains to the continuous spectrum. We propose representation
learning-based models that can adapt to continuous domains and detail how these
can be used to investigate variation in language. To this end, we propose to
use dialogue modeling as a test bed due to its proximity to language modeling
and its social component.
|
[
"Sebastian Ruder, Parsa Ghaffari, and John G. Breslin",
"['Sebastian Ruder' 'Parsa Ghaffari' 'John G. Breslin']"
] |
cs.LG
| null |
1610.09201
| null | null |
http://arxiv.org/pdf/1610.09201v1
|
2016-10-25T20:19:35Z
|
2016-10-25T20:19:35Z
|
A Conceptual Development of Quench Prediction App build on LSTM and ELQA
framework
|
This article presents a development of web application for quench prediction
in \gls{te-mpe-ee} at CERN. The authors describe an ELectrical Quality
Assurance (ELQA) framework, a platform which was designed for rapid development
of web integrated data analysis applications for different analysis needed
during the hardware commissioning of the Large Hadron Collider (LHC). In second
part the article describes a research carried out with the data collected from
Quench Detection System by means of using an LSTM recurrent neural network. The
article discusses and presents a conceptual work of implementing quench
prediction application for \gls{te-mpe-ee} based on the ELQA and quench
prediction algorithm.
|
[
"Matej Mertik and Maciej Wielgosz and Andrzej Skocze\\'n",
"['Matej Mertik' 'Maciej Wielgosz' 'Andrzej Skoczeń']"
] |
cs.LG
| null |
1610.09269
| null | null |
http://arxiv.org/pdf/1610.09269v1
|
2016-10-28T15:30:21Z
|
2016-10-28T15:30:21Z
|
Hierarchical Clustering via Spreading Metrics
|
We study the cost function for hierarchical clusterings introduced by
[arXiv:1510.05043] where hierarchies are treated as first-class objects rather
than deriving their cost from projections into flat clusters. It was also shown
in [arXiv:1510.05043] that a top-down algorithm returns a hierarchical
clustering of cost at most $O\left(\alpha_n \log n\right)$ times the cost of
the optimal hierarchical clustering, where $\alpha_n$ is the approximation
ratio of the Sparsest Cut subroutine used. Thus using the best known
approximation algorithm for Sparsest Cut due to Arora-Rao-Vazirani, the top
down algorithm returns a hierarchical clustering of cost at most
$O\left(\log^{3/2} n\right)$ times the cost of the optimal solution. We improve
this by giving an $O(\log{n})$-approximation algorithm for this problem. Our
main technical ingredients are a combinatorial characterization of ultrametrics
induced by this cost function, deriving an Integer Linear Programming (ILP)
formulation for this family of ultrametrics, and showing how to iteratively
round an LP relaxation of this formulation by using the idea of \emph{sphere
growing} which has been extensively used in the context of graph partitioning.
We also prove that our algorithm returns an $O(\log{n})$-approximate
hierarchical clustering for a generalization of this cost function also studied
in [arXiv:1510.05043]. Experiments show that the hierarchies found by using the
ILP formulation as well as our rounding algorithm often have better projections
into flat clusters than the standard linkage based algorithms. We also give
constant factor inapproximability results for this problem.
|
[
"Aurko Roy and Sebastian Pokutta",
"['Aurko Roy' 'Sebastian Pokutta']"
] |
cs.LG cs.IR stat.ML
| null |
1610.09274
| null | null |
http://arxiv.org/pdf/1610.09274v1
|
2016-10-28T15:33:25Z
|
2016-10-28T15:33:25Z
|
Toward Implicit Sample Noise Modeling: Deviation-driven Matrix
Factorization
|
The objective function of a matrix factorization model usually aims to
minimize the average of a regression error contributed by each element.
However, given the existence of stochastic noises, the implicit deviations of
sample data from their true values are almost surely diverse, which makes each
data point not equally suitable for fitting a model. In this case, simply
averaging the cost among data in the objective function is not ideal.
Intuitively we would like to emphasize more on the reliable instances (i.e.,
those contain smaller noise) while training a model. Motivated by such
observation, we derive our formula from a theoretical framework for optimal
weighting under heteroscedastic noise distribution. Specifically, by modeling
and learning the deviation of data, we design a novel matrix factorization
model. Our model has two advantages. First, it jointly learns the deviation and
conducts dynamic reweighting of instances, allowing the model to converge to a
better solution. Second, during learning the deviated instances are assigned
lower weights, which leads to faster convergence since the model does not need
to overfit the noise. The experiments are conducted in clean recommendation and
noisy sensor datasets to test the effectiveness of the model in various
scenarios. The results show that our model outperforms the state-of-the-art
factorization and deep learning models in both accuracy and efficiency.
|
[
"Guang-He Lee, Shao-Wen Yang, Shou-De Lin",
"['Guang-He Lee' 'Shao-Wen Yang' 'Shou-De Lin']"
] |
cs.LG cs.AI stat.ML
| null |
1610.09296
| null | null |
http://arxiv.org/pdf/1610.09296v3
|
2017-01-12T16:13:14Z
|
2016-10-28T16:17:03Z
|
Improving Sampling from Generative Autoencoders with Markov Chains
|
We focus on generative autoencoders, such as variational or adversarial
autoencoders, which jointly learn a generative model alongside an inference
model. Generative autoencoders are those which are trained to softly enforce a
prior on the latent distribution learned by the inference model. We call the
distribution to which the inference model maps observed samples, the learned
latent distribution, which may not be consistent with the prior. We formulate a
Markov chain Monte Carlo (MCMC) sampling process, equivalent to iteratively
decoding and encoding, which allows us to sample from the learned latent
distribution. Since, the generative model learns to map from the learned latent
distribution, rather than the prior, we may use MCMC to improve the quality of
samples drawn from the generative model, especially when the learned latent
distribution is far from the prior. Using MCMC sampling, we are able to reveal
previously unseen differences between generative autoencoders trained either
with or without a denoising criterion.
|
[
"Antonia Creswell, Kai Arulkumaran, Anil Anthony Bharath",
"['Antonia Creswell' 'Kai Arulkumaran' 'Anil Anthony Bharath']"
] |
cs.LG math.OC stat.ML
| null |
1610.093
| null | null | null | null | null |
Globally Optimal Training of Generalized Polynomial Neural Networks with
Nonlinear Spectral Methods
|
The optimization problem behind neural networks is highly non-convex.
Training with stochastic gradient descent and variants requires careful
parameter tuning and provides no guarantee to achieve the global optimum. In
contrast we show under quite weak assumptions on the data that a particular
class of feedforward neural networks can be trained globally optimal with a
linear convergence rate with our nonlinear spectral method. Up to our knowledge
this is the first practically feasible method which achieves such a guarantee.
While the method can in principle be applied to deep networks, we restrict
ourselves for simplicity in this paper to one and two hidden layer networks.
Our experiments confirm that these models are rich enough to achieve good
performance on a series of real-world datasets.
|
[
"Antoine Gautier, Quynh Nguyen and Matthias Hein"
] |
null | null |
1610.09300
| null | null |
http://arxiv.org/pdf/1610.09300v1
|
2016-10-28T16:28:23Z
|
2016-10-28T16:28:23Z
|
Globally Optimal Training of Generalized Polynomial Neural Networks with
Nonlinear Spectral Methods
|
The optimization problem behind neural networks is highly non-convex. Training with stochastic gradient descent and variants requires careful parameter tuning and provides no guarantee to achieve the global optimum. In contrast we show under quite weak assumptions on the data that a particular class of feedforward neural networks can be trained globally optimal with a linear convergence rate with our nonlinear spectral method. Up to our knowledge this is the first practically feasible method which achieves such a guarantee. While the method can in principle be applied to deep networks, we restrict ourselves for simplicity in this paper to one and two hidden layer networks. Our experiments confirm that these models are rich enough to achieve good performance on a series of real-world datasets.
|
[
"['Antoine Gautier' 'Quynh Nguyen' 'Matthias Hein']"
] |
null | null |
1610.09307
| null | null |
http://arxiv.org/pdf/1610.09307v2
|
2016-10-31T19:44:44Z
|
2016-10-28T16:37:39Z
|
Correlated-PCA: Principal Components' Analysis when Data and Noise are
Correlated
|
Given a matrix of observed data, Principal Components Analysis (PCA) computes a small number of orthogonal directions that contain most of its variability. Provably accurate solutions for PCA have been in use for a long time. However, to the best of our knowledge, all existing theoretical guarantees for it assume that the data and the corrupting noise are mutually independent, or at least uncorrelated. This is valid in practice often, but not always. In this paper, we study the PCA problem in the setting where the data and noise can be correlated. Such noise is often also referred to as "data-dependent noise". We obtain a correctness result for the standard eigenvalue decomposition (EVD) based solution to PCA under simple assumptions on the data-noise correlation. We also develop and analyze a generalization of EVD, cluster-EVD, that improves upon EVD in certain regimes.
|
[
"['Namrata Vaswani' 'Han Guo']"
] |
stat.ML cs.LG
| null |
1610.09322
| null | null |
http://arxiv.org/pdf/1610.09322v4
|
2017-06-14T00:11:55Z
|
2016-10-28T17:24:45Z
|
Homotopy Analysis for Tensor PCA
|
Developing efficient and guaranteed nonconvex algorithms has been an
important challenge in modern machine learning. Algorithms with good empirical
performance such as stochastic gradient descent often lack theoretical
guarantees. In this paper, we analyze the class of homotopy or continuation
methods for global optimization of nonconvex functions. These methods start
from an objective function that is efficient to optimize (e.g. convex), and
progressively modify it to obtain the required objective, and the solutions are
passed along the homotopy path. For the challenging problem of tensor PCA, we
prove global convergence of the homotopy method in the "high noise" regime. The
signal-to-noise requirement for our algorithm is tight in the sense that it
matches the recovery guarantee for the best degree-4 sum-of-squares algorithm.
In addition, we prove a phase transition along the homotopy path for tensor
PCA. This allows to simplify the homotopy method to a local search algorithm,
viz., tensor power iterations, with a specific initialization and a noise
injection procedure, while retaining the theoretical guarantees.
|
[
"Anima Anandkumar, Yuan Deng, Rong Ge, Hossein Mobahi",
"['Anima Anandkumar' 'Yuan Deng' 'Rong Ge' 'Hossein Mobahi']"
] |
cs.LG
| null |
1610.09369
| null | null |
http://arxiv.org/pdf/1610.09369v1
|
2016-10-28T11:57:26Z
|
2016-10-28T11:57:26Z
|
Discriminative Gaifman Models
|
We present discriminative Gaifman models, a novel family of relational
machine learning models. Gaifman models learn feature representations bottom up
from representations of locally connected and bounded-size regions of knowledge
bases (KBs). Considering local and bounded-size neighborhoods of knowledge
bases renders logical inference and learning tractable, mitigates the problem
of overfitting, and facilitates weight sharing. Gaifman models sample
neighborhoods of knowledge bases so as to make the learned relational models
more robust to missing objects and relations which is a common situation in
open-world KBs. We present the core ideas of Gaifman models and apply them to
large-scale relational learning problems. We also discuss the ways in which
Gaifman models relate to some existing relational machine learning approaches.
|
[
"['Mathias Niepert']",
"Mathias Niepert"
] |
stat.ML cs.LG
| null |
1610.0942
| null | null | null | null | null |
Dynamic matrix recovery from incomplete observations under an exact
low-rank constraint
|
Low-rank matrix factorizations arise in a wide variety of applications --
including recommendation systems, topic models, and source separation, to name
just a few. In these and many other applications, it has been widely noted that
by incorporating temporal information and allowing for the possibility of
time-varying models, significant improvements are possible in practice.
However, despite the reported superior empirical performance of these dynamic
models over their static counterparts, there is limited theoretical
justification for introducing these more complex models. In this paper we aim
to address this gap by studying the problem of recovering a dynamically
evolving low-rank matrix from incomplete observations. First, we propose the
locally weighted matrix smoothing (LOWEMS) framework as one possible approach
to dynamic matrix recovery. We then establish error bounds for LOWEMS in both
the {\em matrix sensing} and {\em matrix completion} observation models. Our
results quantify the potential benefits of exploiting dynamic constraints both
in terms of recovery accuracy and sample complexity. To illustrate these
benefits we provide both synthetic and real-world experimental results.
|
[
"Liangbei Xu and Mark A. Davenport"
] |
null | null |
1610.09420
| null | null |
http://arxiv.org/pdf/1610.09420v1
|
2016-10-28T22:44:29Z
|
2016-10-28T22:44:29Z
|
Dynamic matrix recovery from incomplete observations under an exact
low-rank constraint
|
Low-rank matrix factorizations arise in a wide variety of applications -- including recommendation systems, topic models, and source separation, to name just a few. In these and many other applications, it has been widely noted that by incorporating temporal information and allowing for the possibility of time-varying models, significant improvements are possible in practice. However, despite the reported superior empirical performance of these dynamic models over their static counterparts, there is limited theoretical justification for introducing these more complex models. In this paper we aim to address this gap by studying the problem of recovering a dynamically evolving low-rank matrix from incomplete observations. First, we propose the locally weighted matrix smoothing (LOWEMS) framework as one possible approach to dynamic matrix recovery. We then establish error bounds for LOWEMS in both the {em matrix sensing} and {em matrix completion} observation models. Our results quantify the potential benefits of exploiting dynamic constraints both in terms of recovery accuracy and sample complexity. To illustrate these benefits we provide both synthetic and real-world experimental results.
|
[
"['Liangbei Xu' 'Mark A. Davenport']"
] |
cs.LG cs.IR cs.SI
| null |
1610.09428
| null | null |
http://arxiv.org/pdf/1610.09428v1
|
2016-10-28T23:38:22Z
|
2016-10-28T23:38:22Z
|
Beyond Exchangeability: The Chinese Voting Process
|
Many online communities present user-contributed responses such as reviews of
products and answers to questions. User-provided helpfulness votes can
highlight the most useful responses, but voting is a social process that can
gain momentum based on the popularity of responses and the polarity of existing
votes. We propose the Chinese Voting Process (CVP) which models the evolution
of helpfulness votes as a self-reinforcing process dependent on position and
presentation biases. We evaluate this model on Amazon product reviews and more
than 80 StackExchange forums, measuring the intrinsic quality of individual
responses and behavioral coefficients of different communities.
|
[
"['Moontae Lee' 'Seok Hyun Jin' 'David Mimno']",
"Moontae Lee, Seok Hyun Jin, David Mimno"
] |
cs.LG
| null |
1610.09447
| null | null |
http://arxiv.org/pdf/1610.09447v3
|
2016-11-14T02:14:36Z
|
2016-10-29T03:39:16Z
|
Asynchronous Stochastic Block Coordinate Descent with Variance Reduction
|
Asynchronous parallel implementations for stochastic optimization have
received huge successes in theory and practice recently. Asynchronous
implementations with lock-free are more efficient than the one with writing or
reading lock. In this paper, we focus on a composite objective function
consisting of a smooth convex function $f$ and a block separable convex
function, which widely exists in machine learning and computer vision. We
propose an asynchronous stochastic block coordinate descent algorithm with the
accelerated technology of variance reduction (AsySBCDVR), which are with
lock-free in the implementation and analysis. AsySBCDVR is particularly
important because it can scale well with the sample size and dimension
simultaneously. We prove that AsySBCDVR achieves a linear convergence rate when
the function $f$ is with the optimal strong convexity property, and a sublinear
rate when $f$ is with the general convexity. More importantly, a near-linear
speedup on a parallel system with shared memory can be obtained.
|
[
"['Bin Gu' 'Zhouyuan Huo' 'Heng Huang']",
"Bin Gu, Zhouyuan Huo, Heng Huang"
] |
cs.LG cs.DC
| null |
1610.09451
| null | null |
http://arxiv.org/pdf/1610.09451v1
|
2016-10-29T04:21:24Z
|
2016-10-29T04:21:24Z
|
KeystoneML: Optimizing Pipelines for Large-Scale Advanced Analytics
|
Modern advanced analytics applications make use of machine learning
techniques and contain multiple steps of domain-specific and general-purpose
processing with high resource requirements. We present KeystoneML, a system
that captures and optimizes the end-to-end large-scale machine learning
applications for high-throughput training in a distributed environment with a
high-level API. This approach offers increased ease of use and higher
performance over existing systems for large scale learning. We demonstrate the
effectiveness of KeystoneML in achieving high quality statistical accuracy and
scalable training using real world datasets in several domains. By optimizing
execution KeystoneML achieves up to 15x training throughput over unoptimized
execution on a real image classification application.
|
[
"['Evan R. Sparks' 'Shivaram Venkataraman' 'Tomer Kaftan'\n 'Michael J. Franklin' 'Benjamin Recht']",
"Evan R. Sparks, Shivaram Venkataraman, Tomer Kaftan, Michael J.\n Franklin, Benjamin Recht"
] |
null | null |
1610.09461
| null | null |
http://arxiv.org/pdf/1610.09461v3
|
2017-06-20T16:51:03Z
|
2016-10-29T06:02:17Z
|
Fast Learning with Nonconvex L1-2 Regularization
|
Convex regularizers are often used for sparse learning. They are easy to optimize, but can lead to inferior prediction performance. The difference of $ell_1$ and $ell_2$ ($ell_{1-2}$) regularizer has been recently proposed as a nonconvex regularizer. It yields better recovery than both $ell_0$ and $ell_1$ regularizers on compressed sensing. However, how to efficiently optimize its learning problem is still challenging. The main difficulty is that both the $ell_1$ and $ell_2$ norms in $ell_{1-2}$ are not differentiable, and existing optimization algorithms cannot be applied. In this paper, we show that a closed-form solution can be derived for the proximal step associated with this regularizer. We further extend the result for low-rank matrix learning and the total variation model. Experiments on both synthetic and real data sets show that the resultant accelerated proximal gradient algorithm is more efficient than other noncovex optimization algorithms.
|
[
"['Quanming Yao' 'James T. Kwok' 'Xiawei Guo']"
] |
cs.IT cs.LG math.IT stat.ML
| null |
1610.09463
| null | null |
http://arxiv.org/pdf/1610.09463v1
|
2016-10-29T06:12:03Z
|
2016-10-29T06:12:03Z
|
Sparse Signal Recovery for Binary Compressed Sensing by Majority Voting
Neural Networks
|
In this paper, we propose majority voting neural networks for sparse signal
recovery in binary compressed sensing. The majority voting neural network is
composed of several independently trained feedforward neural networks employing
the sigmoid function as an activation function. Our empirical study shows that
a choice of a loss function used in training processes for the network is of
prime importance. We found a loss function suitable for sparse signal recovery,
which includes a cross entropy-like term and an $L_1$ regularized term. From
the experimental results, we observed that the majority voting neural network
achieves excellent recovery performance, which is approaching the optimal
performance as the number of component nets grows. The simple architecture of
the majority voting neural networks would be beneficial for both software and
hardware implementations.
|
[
"['Daisuke Ito' 'Tadashi Wadayama']",
"Daisuke Ito and Tadashi Wadayama"
] |
cs.LG
| null |
1610.09491
| null | null |
http://arxiv.org/pdf/1610.09491v1
|
2016-10-29T11:48:28Z
|
2016-10-29T11:48:28Z
|
SDP Relaxation with Randomized Rounding for Energy Disaggregation
|
We develop a scalable, computationally efficient method for the task of
energy disaggregation for home appliance monitoring. In this problem the goal
is to estimate the energy consumption of each appliance over time based on the
total energy-consumption signal of a household. The current state of the art is
to model the problem as inference in factorial HMMs, and use quadratic
programming to find an approximate solution to the resulting quadratic integer
program. Here we take a more principled approach, better suited to integer
programming problems, and find an approximate optimum by combining convex
semidefinite relaxations randomized rounding, as well as a scalable ADMM method
that exploits the special structure of the resulting semidefinite program.
Simulation results both in synthetic and real-world datasets demonstrate the
superiority of our method.
|
[
"Kiarash Shaloudegi, Andr\\'as Gy\\\"orgy, Csaba Szepesv\\'ari, and Wilsun\n Xu",
"['Kiarash Shaloudegi' 'András György' 'Csaba Szepesvári' 'Wilsun Xu']"
] |
cs.LG stat.ML
| null |
1610.09512
| null | null |
http://arxiv.org/pdf/1610.09512v2
|
2016-12-01T19:21:45Z
|
2016-10-29T14:01:58Z
|
Contextual Decision Processes with Low Bellman Rank are PAC-Learnable
|
This paper studies systematic exploration for reinforcement learning with
rich observations and function approximation. We introduce a new model called
contextual decision processes, that unifies and generalizes most prior
settings. Our first contribution is a complexity measure, the Bellman rank,
that we show enables tractable learning of near-optimal behavior in these
processes and is naturally small for many well-studied reinforcement learning
settings. Our second contribution is a new reinforcement learning algorithm
that engages in systematic exploration to learn contextual decision processes
with low Bellman rank. Our algorithm provably learns near-optimal behavior with
a number of samples that is polynomial in all relevant parameters but
independent of the number of unique observations. The approach uses Bellman
error minimization with optimistic exploration and provides new insights into
efficient exploration for reinforcement learning with function approximation.
|
[
"Nan Jiang, Akshay Krishnamurthy, Alekh Agarwal, John Langford, Robert\n E. Schapire",
"['Nan Jiang' 'Akshay Krishnamurthy' 'Alekh Agarwal' 'John Langford'\n 'Robert E. Schapire']"
] |
cs.LG
| null |
1610.09513
| null | null |
http://arxiv.org/pdf/1610.09513v1
|
2016-10-29T14:05:10Z
|
2016-10-29T14:05:10Z
|
Phased LSTM: Accelerating Recurrent Network Training for Long or
Event-based Sequences
|
Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for
extracting patterns from temporal sequences. However, current RNN models are
ill-suited to process irregularly sampled data triggered by events generated in
continuous time by sensors or other neurons. Such data can occur, for example,
when the input comes from novel event-driven artificial sensors that generate
sparse, asynchronous streams of events or from multiple conventional sensors
with different update intervals. In this work, we introduce the Phased LSTM
model, which extends the LSTM unit by adding a new time gate. This gate is
controlled by a parametrized oscillation with a frequency range that produces
updates of the memory cell only during a small percentage of the cycle. Even
with the sparse updates imposed by the oscillation, the Phased LSTM network
achieves faster convergence than regular LSTMs on tasks which require learning
of long sequences. The model naturally integrates inputs from sensors of
arbitrary sampling rates, thereby opening new areas of investigation for
processing asynchronous sensory events that carry timing information. It also
greatly improves the performance of LSTMs in standard RNN applications, and
does so with an order-of-magnitude fewer computes at runtime.
|
[
"['Daniel Neil' 'Michael Pfeiffer' 'Shih-Chii Liu']",
"Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.