categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG
| null |
1612.08714
| null | null |
http://arxiv.org/pdf/1612.08714v2
|
2016-12-30T17:56:48Z
|
2016-12-27T19:39:23Z
|
Clustering with Confidence: Finding Clusters with Statistical Guarantees
|
Clustering is a widely used unsupervised learning method for finding
structure in the data. However, the resulting clusters are typically presented
without any guarantees on their robustness; slightly changing the used data
sample or re-running a clustering algorithm involving some stochastic component
may lead to completely different clusters. There is, hence, a need for
techniques that can quantify the instability of the generated clusters. In this
study, we propose a technique for quantifying the instability of a clustering
solution and for finding robust clusters, termed core clusters, which
correspond to clusters where the co-occurrence probability of each data item
within a cluster is at least $1 - \alpha$. We demonstrate how solving the core
clustering problem is linked to finding the largest maximal cliques in a graph.
We show that the method can be used with both clustering and classification
algorithms. The proposed method is tested on both simulated and real datasets.
The results show that the obtained clusters indeed meet the guarantees on
robustness.
|
[
"Andreas Henelius, Kai Puolamäki, Henrik Boström, Panagiotis\n Papapetrou",
"['Andreas Henelius' 'Kai Puolamäki' 'Henrik Boström'\n 'Panagiotis Papapetrou']"
] |
cs.LG
|
10.1109/TASE.2018.2876430
|
1612.08789
| null | null |
http://arxiv.org/abs/1612.08789v2
|
2019-02-01T18:18:21Z
|
2016-12-28T02:31:22Z
|
Automatic Composition and Optimization of Multicomponent Predictive
Systems With an Extended Auto-WEKA
|
Composition and parameterization of multicomponent predictive systems (MCPSs)
consisting of chains of data transformation steps are a challenging task.
Auto-WEKA is a tool to automate the combined algorithm selection and
hyperparameter (CASH) optimization problem. In this paper, we extend the CASH
problem and Auto-WEKA to support the MCPS, including preprocessing steps for
both classification and regression tasks. We define the optimization problem in
which the search space consists of suitably parameterized Petri nets forming
the sought MCPS solutions. In the experimental analysis, we focus on examining
the impact of considerably extending the search space (from approximately
22,000 to 812 billion possible combinations of methods and categorical
hyperparameters). In a range of extensive experiments, three different
optimization strategies are used to automatically compose MCPSs for 21 publicly
available data sets. The diversity of the composed MCPSs found is an indication
that fully and automatically exploiting different combinations of data cleaning
and preprocessing techniques is possible and highly beneficial for different
predictive models. We also present the results on seven data sets from real
chemical production processes. Our findings can have a major impact on the
development of high-quality predictive models as well as their maintenance and
scalability aspects needed in modern applications and deployment scenarios.
|
[
"['Manuel Martin Salvador' 'Marcin Budka' 'Bogdan Gabrys']",
"Manuel Martin Salvador, Marcin Budka, Bogdan Gabrys"
] |
cs.LG cs.DS stat.ML
| null |
1612.08795
| null | null |
http://arxiv.org/pdf/1612.08795v1
|
2016-12-28T03:35:59Z
|
2016-12-28T03:35:59Z
|
Provable learning of Noisy-or Networks
|
Many machine learning applications use latent variable models to explain
structure in data, whereby visible variables (= coordinates of the given
datapoint) are explained as a probabilistic function of some hidden variables.
Finding parameters with the maximum likelihood is NP-hard even in very simple
settings. In recent years, provably efficient algorithms were nevertheless
developed for models with linear structures: topic models, mixture models,
hidden markov models, etc. These algorithms use matrix or tensor decomposition,
and make some reasonable assumptions about the parameters of the underlying
model.
But matrix or tensor decomposition seems of little use when the latent
variable model has nonlinearities. The current paper shows how to make
progress: tensor decomposition is applied for learning the single-layer {\em
noisy or} network, which is a textbook example of a Bayes net, and used for
example in the classic QMR-DT software for diagnosing which disease(s) a
patient may have by observing the symptoms he/she exhibits.
The technical novelty here, which should be useful in other settings in
future, is analysis of tensor decomposition in presence of systematic error
(i.e., where the noise/error is correlated with the signal, and doesn't
decrease as number of samples goes to infinity). This requires rethinking all
steps of tensor decomposition methods from the ground up.
For simplicity our analysis is stated assuming that the network parameters
were chosen from a probability distribution but the method seems more generally
applicable.
|
[
"Sanjeev Arora, Rong Ge, Tengyu Ma, Andrej Risteski",
"['Sanjeev Arora' 'Rong Ge' 'Tengyu Ma' 'Andrej Risteski']"
] |
cs.LG cs.AI cs.NE
| null |
1612.08810
| null | null | null | null | null |
The Predictron: End-To-End Learning and Planning
|
One of the key challenges of artificial intelligence is to learn models that
are effective in the context of planning. In this document we introduce the
predictron architecture. The predictron consists of a fully abstract model,
represented by a Markov reward process, that can be rolled forward multiple
"imagined" planning steps. Each forward pass of the predictron accumulates
internal rewards and values over multiple planning depths. The predictron is
trained end-to-end so as to make these accumulated values accurately
approximate the true value function. We applied the predictron to procedurally
generated random mazes and a simulator for the game of pool. The predictron
yielded significantly more accurate predictions than conventional deep neural
network architectures.
|
[
"David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur\n Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz,\n Andre Barreto, Thomas Degris"
] |
null | null |
1612.08810
| null | null |
http://arxiv.org/pdf/1612.08810v3
|
2017-07-20T09:21:54Z
|
2016-12-28T06:47:15Z
|
The Predictron: End-To-End Learning and Planning
|
One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract model, represented by a Markov reward process, that can be rolled forward multiple "imagined" planning steps. Each forward pass of the predictron accumulates internal rewards and values over multiple planning depths. The predictron is trained end-to-end so as to make these accumulated values accurately approximate the true value function. We applied the predictron to procedurally generated random mazes and a simulator for the game of pool. The predictron yielded significantly more accurate predictions than conventional deep neural network architectures.
|
[
"['David Silver' 'Hado van Hasselt' 'Matteo Hessel' 'Tom Schaul'\n 'Arthur Guez' 'Tim Harley' 'Gabriel Dulac-Arnold' 'David Reichert'\n 'Neil Rabinowitz' 'Andre Barreto' 'Thomas Degris']"
] |
stat.ML cs.LG
| null |
1612.08875
| null | null |
http://arxiv.org/pdf/1612.08875v3
|
2019-01-08T11:01:35Z
|
2016-12-28T13:17:07Z
|
The Pessimistic Limits and Possibilities of Margin-based Losses in
Semi-supervised Learning
|
Consider a classification problem where we have both labeled and unlabeled
data available. We show that for linear classifiers defined by convex
margin-based surrogate losses that are decreasing, it is impossible to
construct any semi-supervised approach that is able to guarantee an improvement
over the supervised classifier measured by this surrogate loss on the labeled
and unlabeled data. For convex margin-based loss functions that also increase,
we demonstrate safe improvements are possible.
|
[
"Jesse H. Krijthe and Marco Loog",
"['Jesse H. Krijthe' 'Marco Loog']"
] |
cs.AI cs.LG cs.RO
| null |
1612.08967
| null | null |
http://arxiv.org/pdf/1612.08967v1
|
2016-12-28T19:53:08Z
|
2016-12-28T19:53:08Z
|
Efficient iterative policy optimization
|
We tackle the issue of finding a good policy when the number of policy
updates is limited. This is done by approximating the expected policy reward as
a sequence of concave lower bounds which can be efficiently maximized,
drastically reducing the number of policy updates required to achieve good
performance. We also extend existing methods to negative rewards, enabling the
use of control variates.
|
[
"['Nicolas Le Roux']",
"Nicolas Le Roux"
] |
stat.ML cs.LG
| null |
1612.09007
| null | null |
http://arxiv.org/pdf/1612.09007v1
|
2016-12-28T23:43:27Z
|
2016-12-28T23:43:27Z
|
A Deep Learning Approach To Multiple Kernel Fusion
|
Kernel fusion is a popular and effective approach for combining multiple
features that characterize different aspects of data. Traditional approaches
for Multiple Kernel Learning (MKL) attempt to learn the parameters for
combining the kernels through sophisticated optimization procedures. In this
paper, we propose an alternative approach that creates dense embeddings for
data using the kernel similarities and adopts a deep neural network
architecture for fusing the embeddings. In order to improve the effectiveness
of this network, we introduce the kernel dropout regularization strategy
coupled with the use of an expanded set of composition kernels. Experiment
results on a real-world activity recognition dataset show that the proposed
architecture is effective in fusing kernels and achieves state-of-the-art
performance.
|
[
"Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan\n Natesan Ramamurthy, Andreas Spanias",
"['Huan Song' 'Jayaraman J. Thiagarajan' 'Prasanna Sattigeri'\n 'Karthikeyan Natesan Ramamurthy' 'Andreas Spanias']"
] |
cs.LG cs.AI cs.CV
| null |
1612.09030
| null | null | null | null | null |
Meta-Unsupervised-Learning: A supervised approach to unsupervised
learning
|
We introduce a new paradigm to investigate unsupervised learning, reducing
unsupervised learning to supervised learning. Specifically, we mitigate the
subjectivity in unsupervised decision-making by leveraging knowledge acquired
from prior, possibly heterogeneous, supervised learning tasks. We demonstrate
the versatility of our framework via comprehensive expositions and detailed
experiments on several unsupervised problems such as (a) clustering, (b)
outlier detection, and (c) similarity prediction under a common umbrella of
meta-unsupervised-learning. We also provide rigorous PAC-agnostic bounds to
establish the theoretical foundations of our framework, and show that our
framing of meta-clustering circumvents Kleinberg's impossibility theorem for
clustering.
|
[
"Vikas K. Garg and Adam Tauman Kalai"
] |
null | null |
1612.09030
| null | null |
http://arxiv.org/pdf/1612.09030v2
|
2017-01-03T17:34:39Z
|
2016-12-29T03:20:33Z
|
Meta-Unsupervised-Learning: A supervised approach to unsupervised
learning
|
We introduce a new paradigm to investigate unsupervised learning, reducing unsupervised learning to supervised learning. Specifically, we mitigate the subjectivity in unsupervised decision-making by leveraging knowledge acquired from prior, possibly heterogeneous, supervised learning tasks. We demonstrate the versatility of our framework via comprehensive expositions and detailed experiments on several unsupervised problems such as (a) clustering, (b) outlier detection, and (c) similarity prediction under a common umbrella of meta-unsupervised-learning. We also provide rigorous PAC-agnostic bounds to establish the theoretical foundations of our framework, and show that our framing of meta-clustering circumvents Kleinberg's impossibility theorem for clustering.
|
[
"['Vikas K. Garg' 'Adam Tauman Kalai']"
] |
math.OC cs.LG stat.ML
| null |
1612.09034
| null | null |
http://arxiv.org/pdf/1612.09034v4
|
2017-05-30T06:20:50Z
|
2016-12-29T04:25:28Z
|
Geometric descent method for convex composite minimization
|
In this paper, we extend the geometric descent method recently proposed by
Bubeck, Lee and Singh to tackle nonsmooth and strongly convex composite
problems. We prove that our proposed algorithm, dubbed geometric proximal
gradient method (GeoPG), converges with a linear rate $(1-1/\sqrt{\kappa})$ and
thus achieves the optimal rate among first-order methods, where $\kappa$ is the
condition number of the problem. Numerical results on linear regression and
logistic regression with elastic net regularization show that GeoPG compares
favorably with Nesterov's accelerated proximal gradient method, especially when
the problem is ill-conditioned.
|
[
"Shixiang Chen, Shiqian Ma, Wei Liu",
"['Shixiang Chen' 'Shiqian Ma' 'Wei Liu']"
] |
cs.LG
| null |
1612.09057
| null | null |
http://arxiv.org/pdf/1612.09057v4
|
2018-09-04T23:18:35Z
|
2016-12-29T07:26:26Z
|
Deep Learning and Hierarchal Generative Models
|
It is argued that deep learning is efficient for data that is generated from
hierarchal generative models. Examples of such generative models include
wavelet scattering networks, functions of compositional structure, and deep
rendering models. Unfortunately so far, for all such models, it is either not
rigorously known that they can be learned efficiently, or it is not known that
"deep algorithms" are required in order to learn them.
We propose a simple family of "generative hierarchal models" which can be
efficiently learned and where "deep" algorithms are necessary for learning. Our
definition of "deep" algorithms is based on the empirical observation that deep
nets necessarily use correlations between features. More formally, we show that
in a semi-supervised setting, given access to low-order moments of the labeled
data and all of the unlabeled data, it is information theoretically impossible
to perform classification while at the same time there is an efficient
algorithm, that given all labelled and unlabeled data, perfectly labels all
unlabelled data with high probability.
For the proof, we use and strengthen the fact that Belief Propagation does
not admit a good approximation in terms of linear functions.
|
[
"['Elchanan Mossel']",
"Elchanan Mossel"
] |
cs.LG stat.ML
| null |
1612.09076
| null | null |
http://arxiv.org/pdf/1612.09076v1
|
2016-12-29T08:53:20Z
|
2016-12-29T08:53:20Z
|
Selecting Bases in Spectral learning of Predictive State Representations
via Model Entropy
|
Predictive State Representations (PSRs) are powerful techniques for modelling
dynamical systems, which represent a state as a vector of predictions about
future observable events (tests). In PSRs, one of the fundamental problems is
the learning of the PSR model of the underlying system. Recently, spectral
methods have been successfully used to address this issue by treating the
learning problem as the task of computing a singular value decomposition (SVD)
over a submatrix of a special type of matrix called the Hankel matrix. Under
the assumptions that the rows and columns of the submatrix of the Hankel matrix
are sufficient (which usually means a very large number of rows and columns,
and almost fails in practice) and the entries of the matrix can be estimated
accurately, it has been proven that the spectral approach for learning PSRs is
statistically consistent and the learned parameters can converge to the true
parameters. However, in practice, due to the limit of the computation ability,
only a finite set of rows or columns can be chosen to be used for the spectral
learning. While different sets of columns usually lead to variant accuracy of
the learned model, in this paper, we propose an approach for selecting the set
of columns, namely basis selection, by adopting a concept of model entropy to
measure the accuracy of the learned model. Experimental results are shown to
demonstrate the effectiveness of the proposed approach.
|
[
"Yunlong Liu and Hexing Zhu",
"['Yunlong Liu' 'Hexing Zhu']"
] |
stat.AP cs.LG
| null |
1612.09106
| null | null |
http://arxiv.org/pdf/1612.09106v3
|
2017-09-18T08:37:11Z
|
2016-12-29T11:47:23Z
|
Sequence-to-point learning with neural networks for nonintrusive load
monitoring
|
Energy disaggregation (a.k.a nonintrusive load monitoring, NILM), a
single-channel blind source separation problem, aims to decompose the mains
which records the whole house electricity consumption into appliance-wise
readings. This problem is difficult because it is inherently unidentifiable.
Recent approaches have shown that the identifiability problem could be reduced
by introducing domain knowledge into the model. Deep neural networks have been
shown to be a promising approach for these problems, but sliding windows are
necessary to handle the long sequences which arise in signal processing
problems, which raises issues about how to combine predictions from different
sliding windows. In this paper, we propose sequence-to-point learning, where
the input is a window of the mains and the output is a single point of the
target appliance. We use convolutional neural networks to train the model.
Interestingly, we systematically show that the convolutional neural networks
can inherently learn the signatures of the target appliances, which are
automatically added into the model to reduce the identifiability problem. We
applied the proposed neural network approaches to real-world household energy
data, and show that the methods achieve state-of-the-art performance, improving
two standard error measures by 84% and 92%.
|
[
"Chaoyun Zhang, Mingjun Zhong, Zongzuo Wang, Nigel Goddard, Charles\n Sutton",
"['Chaoyun Zhang' 'Mingjun Zhong' 'Zongzuo Wang' 'Nigel Goddard'\n 'Charles Sutton']"
] |
cs.LG
| null |
1612.09122
| null | null |
http://arxiv.org/pdf/1612.09122v1
|
2016-12-29T12:29:20Z
|
2016-12-29T12:29:20Z
|
Modeling documents with Generative Adversarial Networks
|
This paper describes a method for using Generative Adversarial Networks to
learn distributed representations of natural language documents. We propose a
model that is based on the recently proposed Energy-Based GAN, but instead uses
a Denoising Autoencoder as the discriminator network. Document representations
are extracted from the hidden layer of the discriminator and evaluated both
quantitatively and qualitatively.
|
[
"John Glover",
"['John Glover']"
] |
cs.LG
| null |
1612.09147
| null | null |
http://arxiv.org/pdf/1612.09147v2
|
2017-01-26T16:44:56Z
|
2016-12-29T14:02:31Z
|
Linear Learning with Sparse Data
|
Linear predictors are especially useful when the data is high-dimensional and
sparse. One of the standard techniques used to train a linear predictor is the
Averaged Stochastic Gradient Descent (ASGD) algorithm. We present an efficient
implementation of ASGD that avoids dense vector operations. We also describe a
translation invariant extension called Centered Averaged Stochastic Gradient
Descent (CASGD).
|
[
"['Ofer Dekel']",
"Ofer Dekel"
] |
cs.SY cs.LG stat.ML
| null |
1612.09158
| null | null |
http://arxiv.org/pdf/1612.09158v1
|
2016-12-29T14:32:51Z
|
2016-12-29T14:32:51Z
|
The interplay between system identification and machine learning
|
Learning from examples is one of the key problems in science and engineering.
It deals with function reconstruction from a finite set of direct and noisy
samples. Regularization in reproducing kernel Hilbert spaces (RKHSs) is widely
used to solve this task and includes powerful estimators such as regularization
networks. Recent achievements include the proof of the statistical consistency
of these kernel-based approaches. Parallel to this, many different system
identification techniques have been developed but the interaction with machine
learning does not appear so strong yet. One reason is that the RKHSs usually
employed in machine learning do not embed the information available on dynamic
systems, e.g. BIBO stability. In addition, in system identification the
independent data assumptions routinely adopted in machine learning are never
satisfied in practice. This paper provides new results which strengthen the
connection between system identification and machine learning. Our starting
point is the introduction of RKHSs of dynamic systems. They contain functionals
over spaces defined by system inputs and allow to interpret system
identification as learning from examples. In both linear and nonlinear
settings, it is shown that this perspective permits to derive in a relatively
simple way conditions on RKHS stability (i.e. the property of containing only
BIBO stable systems or predictors), also facilitating the design of new kernels
for system identification. Furthermore, we prove the convergence of the
regularized estimator to the optimal predictor under conditions typical of
dynamic systems.
|
[
"['Gianluigi Pillonetto']",
"Gianluigi Pillonetto"
] |
cs.NE cs.AI cs.LG
| null |
1612.09205
| null | null |
http://arxiv.org/pdf/1612.09205v1
|
2016-12-29T17:14:05Z
|
2016-12-29T17:14:05Z
|
Deep neural heart rate variability analysis
|
Despite the pain and limited accuracy of blood tests for early recognition
of cardiovascular disease, they dominate risk screening and triage. On the
other hand, heart rate variability is non-invasive and cheap, but not
considered accurate enough for clinical practice. Here, we tackle heart beat
interval based classification with deep learning. We introduce an end to end
differentiable hybrid architecture, consisting of a layer of biological neuron
models of cardiac dynamics (modified FitzHugh Nagumo neurons) and several
layers of a standard feed-forward neural network. The proposed model is
evaluated on ECGs from 474 stable at-risk (coronary artery disease) patients,
and 1172 chest pain patients of an emergency department. We show that it can
significantly outperform models based on traditional heart rate variability
predictors, as well as approaching or in some cases outperforming clinical
blood tests, based only on 60 seconds of inter-beat intervals.
|
[
"Tamas Madl",
"['Tamas Madl']"
] |
stat.ML cs.LG
| null |
1612.09283
| null | null |
http://arxiv.org/pdf/1612.09283v1
|
2016-12-29T20:40:52Z
|
2016-12-29T20:40:52Z
|
Generalized Intersection Kernel
|
Following the very recent line of work on the ``generalized min-max'' (GMM)
kernel, this study proposes the ``generalized intersection'' (GInt) kernel and
the related ``normalized generalized min-max'' (NGMM) kernel. In computer
vision, the (histogram) intersection kernel has been popular, and the GInt
kernel generalizes it to data which can have both negative and positive
entries. Through an extensive empirical classification study on 40 datasets
from the UCI repository, we are able to show that this (tuning-free) GInt
kernel performs fairly well.
The empirical results also demonstrate that the NGMM kernel typically
outperforms the GInt kernel. Interestingly, the NGMM kernel has another
interpretation --- it is the ``asymmetrically transformed'' version of the GInt
kernel, based on the idea of ``asymmetric hashing''. Just like the GMM kernel,
the NGMM kernel can be efficiently linearized through (e.g.,) generalized
consistent weighted sampling (GCWS), as empirically validated in our study.
Owing to the discrete nature of hashed values, it also provides a scheme for
approximate near neighbor search.
|
[
"Ping Li",
"['Ping Li']"
] |
cs.LG math.OC stat.ML
| null |
1612.09296
| null | null |
http://arxiv.org/pdf/1612.09296v3
|
2018-01-20T02:45:55Z
|
2016-12-29T20:57:19Z
|
Symmetry, Saddle Points, and Global Optimization Landscape of Nonconvex
Matrix Factorization
|
We propose a general theory for studying the landscape of nonconvex
optimization with underlying symmetric structures for a class of machine
learning problems (e.g., low-rank matrix factorization, phase retrieval, and
deep linear neural networks). Specifically, we characterize the locations of
stationary points and the null space of Hessian matrices of the objective
function via the lens of invariant groups. As a major motivating example, we
apply the proposed general theory to characterize the global landscape of the
nonconvex optimization in the low-rank matrix factorization problem. In
particular, we illustrate how the rotational symmetry group gives rise to
infinitely many nonisolated strict saddle points and equivalent global minima
of the objective function. By explicitly identifying all stationary points, we
divide the entire parameter space into three regions: ($\mathcal{R}_1$) the region
containing the neighborhoods of all strict saddle points, where the objective
has negative curvatures; ($\mathcal{R}_2$) the region containing neighborhoods of all
global minima, where the objective enjoys strong convexity along certain
directions; and ($\mathcal{R}_3$) the complement of the above regions, where the
gradient has sufficiently large magnitudes. We further extend our result to the
matrix sensing problem. Such global landscape implies strong global convergence
guarantees for popular iterative algorithms with arbitrary initial solutions.
|
[
"['Xingguo Li' 'Junwei Lu' 'Raman Arora' 'Jarvis Haupt' 'Han Liu'\n 'Zhaoran Wang' 'Tuo Zhao']",
"Xingguo Li, Junwei Lu, Raman Arora, Jarvis Haupt, Han Liu, Zhaoran\n Wang, Tuo Zhao"
] |
cs.LG stat.ML
| null |
1612.09328
| null | null |
http://arxiv.org/pdf/1612.09328v3
|
2017-11-21T16:04:21Z
|
2016-12-29T22:02:53Z
|
The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point
Process
|
Many events occur in the world. Some event types are stochastically excited
or inhibited---in the sense of having their probabilities elevated or
decreased---by patterns in the sequence of previous events. Discovering such
patterns can help us predict which type of event will happen next and when. We
model streams of discrete events in continuous time, by constructing a neurally
self-modulating multivariate point process in which the intensities of multiple
event types evolve according to a novel continuous-time LSTM. This generative
model allows past events to influence the future in complex and realistic ways,
by conditioning future event intensities on the hidden state of a recurrent
neural network that has consumed the stream of past events. Our model has
desirable qualitative properties. It achieves competitive likelihood and
predictive accuracy on real and synthetic datasets, including under
missing-data conditions.
|
[
"['Hongyuan Mei' 'Jason Eisner']",
"Hongyuan Mei and Jason Eisner"
] |
cs.CG cs.LG math.ST stat.TH
| null |
1612.09434
| null | null |
http://arxiv.org/pdf/1612.09434v1
|
2016-12-30T09:33:07Z
|
2016-12-30T09:33:07Z
|
Data driven estimation of Laplace-Beltrami operator
|
Approximations of Laplace-Beltrami operators on manifolds through graph
Laplacians have become popular tools in data analysis and machine learning.
These discretized operators usually depend on bandwidth parameters whose tuning
remains a theoretical and practical problem. In this paper, we address this
problem for the unnormalized graph Laplacian by establishing an oracle
inequality that opens the door to a well-founded data-driven procedure for the
bandwidth selection. Our approach relies on recent results by Lacour and
Massart [LM15] on the so-called Lepski's method.
|
[
"['Frédéric Chazal' 'Ilaria Giulini' 'Bertrand Michel']",
"Frédéric Chazal (DATASHAPE), Ilaria Giulini (DATASHAPE), Bertrand\n Michel (LSTA)"
] |
cs.LG
| null |
1612.09438
| null | null |
http://arxiv.org/pdf/1612.09438v2
|
2017-01-12T17:26:42Z
|
2016-12-30T09:57:27Z
|
Automatic Discoveries of Physical and Semantic Concepts via Association
Priors of Neuron Groups
|
The recent successful deep neural networks are largely trained in a
supervised manner. It {\it associates} complex patterns of input samples with
neurons in the last layer, which form representations of {\it concepts}. In
spite of their successes, the properties of complex patterns associated a
learned concept remain elusive. In this work, by analyzing how neurons are
associated with concepts in supervised networks, we hypothesize that with
proper priors to regulate learning, neural networks can automatically associate
neurons in the intermediate layers with concepts that are aligned with real
world concepts, when trained only with labels that associate concepts with top
level neurons, which is a plausible way for unsupervised learning. We develop a
prior to verify the hypothesis and experimentally find that the proposed prior helps
neural networks automatically learn both basic physical concepts at the lower
layers, e.g., rotation of filters, and highly semantic concepts at the higher
layers, e.g., fine-grained categories of an entry-level category.
|
[
"['Shuai Li' 'Kui Jia' 'Xiaogang Wang']",
"Shuai Li, Kui Jia, Xiaogang Wang"
] |
cs.LG cs.AI stat.ML
| null |
1612.09465
| null | null |
http://arxiv.org/pdf/1612.09465v1
|
2016-12-30T11:51:14Z
|
2016-12-30T11:51:14Z
|
Adaptive Lambda Least-Squares Temporal Difference Learning
|
Temporal Difference learning or TD($\lambda$) is a fundamental algorithm in
the field of reinforcement learning. However, setting TD's $\lambda$ parameter,
which controls the timescale of TD updates, is generally left up to the
practitioner. We formalize the $\lambda$ selection problem as a bias-variance
trade-off where the solution is the value of $\lambda$ that leads to the
smallest Mean Squared Value Error (MSVE). To solve this trade-off we suggest
applying Leave-One-Trajectory-Out Cross-Validation (LOTO-CV) to search the
space of $\lambda$ values. Unfortunately, this approach is too computationally
expensive for most practical applications. For Least Squares TD (LSTD) we show
that LOTO-CV can be implemented efficiently to automatically tune $\lambda$ and
apply function optimization methods to efficiently search the space of
$\lambda$ values. The resulting algorithm, ALLSTD, is parameter free and our
experiments demonstrate that ALLSTD is significantly computationally faster
than the na\"{i}ve LOTO-CV implementation while achieving similar performance.
|
[
"['Timothy A. Mann' 'Hugo Penedones' 'Shie Mannor' 'Todd Hester']",
"Timothy A. Mann and Hugo Penedones and Shie Mannor and Todd Hester"
] |
cs.LG
| null |
1612.09529
| null | null |
http://arxiv.org/pdf/1612.09529v1
|
2016-12-29T05:55:34Z
|
2016-12-29T05:55:34Z
|
Linking the Neural Machine Translation and the Prediction of Organic
Chemistry Reactions
|
Finding the main product of a chemical reaction is one of the important
problems of organic chemistry. This paper describes a method of applying a
neural machine translation model to the prediction of organic chemical
reactions. In order to translate 'reactants and reagents' to 'products', a
gated recurrent unit based sequence-to-sequence model and a parser to generate
input tokens for the model from reaction SMILES strings were built. Training sets
are composed of reactions from the patent databases, and reactions manually
generated applying the elementary reactions in an organic chemistry textbook of
Wade. The trained models were tested by examples and problems in the textbook.
The prediction process does not need manual encoding of rules (e.g., SMARTS
transformations) to predict products, hence it only needs sufficient training
reaction sets to learn new types of reactions.
|
[
"Juno Nam and Jurae Kim",
"['Juno Nam' 'Jurae Kim']"
] |
stat.AP cs.LG stat.ML
| null |
1612.09596
| null | null |
http://arxiv.org/pdf/1612.09596v1
|
2016-12-30T20:56:41Z
|
2016-12-30T20:56:41Z
|
Counterfactual Prediction with Deep Instrumental Variables Networks
|
We are in the middle of a remarkable rise in the use and capability of
artificial intelligence. Much of this growth has been fueled by the success of
deep learning architectures: models that map from observables to outputs via
multiple layers of latent representations. These deep learning algorithms are
effective tools for unstructured prediction, and they can be combined in AI
systems to solve complex automated reasoning problems. This paper provides a
recipe for combining ML algorithms to solve for causal effects in the presence
of instrumental variables -- sources of treatment randomization that are
conditionally independent from the response. We show that a flexible IV
specification resolves into two prediction tasks that can be solved with deep
neural nets: a first-stage network for treatment prediction and a second-stage
network whose loss function involves integration over the conditional treatment
distribution. This Deep IV framework imposes some specific structure on the
stochastic gradient descent routine used for training, but it is general enough
that we can take advantage of off-the-shelf ML capabilities and avoid extensive
algorithm customization. We outline how to obtain out-of-sample causal
validation in order to avoid over-fit. We also introduce schemes for both
Bayesian and frequentist inference: the former via a novel adaptation of
dropout training, and the latter via a data splitting routine.
|
[
"Jason Hartford, Greg Lewis, Kevin Leyton-Brown, Matt Taddy",
"['Jason Hartford' 'Greg Lewis' 'Kevin Leyton-Brown' 'Matt Taddy']"
] |
astro-ph.IM astro-ph.GA astro-ph.HE cs.LG gr-qc
|
10.1103/PhysRevD.97.044039
|
1701.00008
| null | null |
http://arxiv.org/abs/1701.00008v3
|
2017-11-09T18:50:10Z
|
2016-12-30T21:00:02Z
|
Deep Neural Networks to Enable Real-time Multimessenger Astrophysics
|
Gravitational wave astronomy has set in motion a scientific revolution. To
further enhance the science reach of this emergent field, there is a pressing
need to increase the depth and speed of the gravitational wave algorithms that
have enabled these groundbreaking discoveries. To contribute to this effort, we
introduce Deep Filtering, a new highly scalable method for end-to-end
time-series signal processing, based on a system of two deep convolutional
neural networks, which we designed for classification and regression to rapidly
detect and estimate parameters of signals in highly noisy time-series data
streams. We demonstrate a novel training scheme with gradually increasing noise
levels, and a transfer learning procedure between the two networks. We showcase
the application of this method for the detection and parameter estimation of
gravitational waves from binary black hole mergers. Our results indicate that
Deep Filtering significantly outperforms conventional machine learning
techniques, achieves similar performance compared to matched-filtering while
being several orders of magnitude faster thus allowing real-time processing of
raw big data with minimal resources. More importantly, Deep Filtering extends
the range of gravitational wave signals that can be detected with ground-based
gravitational wave detectors. This framework leverages recent advances in
artificial intelligence algorithms and emerging hardware architectures, such as
deep-learning-optimized GPUs, to facilitate real-time searches of gravitational
wave sources and their electromagnetic and astro-particle counterparts.
|
[
"['Daniel George' 'E. A. Huerta']",
"Daniel George, E. A. Huerta"
] |
cs.LG
| null |
1701.00160
| null | null | null | null | null |
NIPS 2016 Tutorial: Generative Adversarial Networks
|
This report summarizes the tutorial presented by the author at NIPS 2016 on
generative adversarial networks (GANs). The tutorial describes: (1) Why
generative modeling is a topic worth studying, (2) how generative models work,
and how GANs compare to other generative models, (3) the details of how GANs
work, (4) research frontiers in GANs, and (5) state-of-the-art image models
that combine GANs with other methods. Finally, the tutorial contains three
exercises for readers to complete, and the solutions to these exercises.
|
[
"Ian Goodfellow"
] |
null | null |
1701.00160
| null | null |
http://arxiv.org/pdf/1701.00160v4
|
2017-04-03T21:57:48Z
|
2016-12-31T19:17:17Z
|
NIPS 2016 Tutorial: Generative Adversarial Networks
|
This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.
|
[
"['Ian Goodfellow']"
] |
stat.ML cs.LG
| null |
1701.00167
| null | null |
http://arxiv.org/pdf/1701.00167v1
|
2016-12-31T21:17:08Z
|
2016-12-31T21:17:08Z
|
Very Fast Kernel SVM under Budget Constraints
|
In this paper we propose a fast online Kernel SVM algorithm under tight
budget constraints. We propose to split the input space using LVQ and train a
Kernel SVM in each cluster. To allow for online training, we propose to limit
the size of the support vector set of each cluster using different strategies.
We show in the experiment that our algorithm is able to achieve high accuracy
while having a very high number of samples processed per second both in
training and in the evaluation.
|
[
"David Picard",
"['David Picard']"
] |
math.OC cs.AI cs.LG cs.SY stat.ML
| null |
1701.00178
| null | null | null | null | null |
Lazily Adapted Constant Kinky Inference for Nonparametric Regression and
Model-Reference Adaptive Control
|
Techniques known as Nonlinear Set Membership prediction, Lipschitz
Interpolation or Kinky Inference are approaches to machine learning that
utilise presupposed Lipschitz properties to compute inferences over unobserved
function values. Provided a bound on the true best Lipschitz constant of the
target function is known a priori they offer convergence guarantees as well as
bounds around the predictions. Considering a more general setting that builds
on Hoelder continuity relative to pseudo-metrics, we propose an online method
for estimating the Hoelder constant online from function value observations
that possibly are corrupted by bounded observational errors. Utilising this to
compute adaptive parameters within a kinky inference rule gives rise to a
nonparametric machine learning method, for which we establish strong universal
approximation guarantees. That is, we show that our prediction rule can learn
any continuous function in the limit of increasingly dense data to within a
worst-case error bound that depends on the level of observational uncertainty.
We apply our method in the context of nonparametric model-reference adaptive
control (MRAC). Across a range of simulated aircraft roll-dynamics and
performance metrics our approach outperforms recently proposed alternatives
that were based on Gaussian processes and RBF-neural networks. For
discrete-time systems, we provide guarantees on the tracking success of our
learning-based controllers both for the batch and the online learning setting.
|
[
"Jan-Peter Calliess"
] |
cs.LG cs.CR
| null |
1701.00220
| null | null | null | null | null |
Classification of Smartphone Users Using Internet Traffic
|
Today, smartphone devices are owned by a large portion of the population and
have become a very popular platform for accessing the Internet. Smartphones
provide the user with immediate access to information and services. However,
they can easily expose the user to many privacy risks. Applications that are
installed on the device and entities with access to the device's Internet
traffic can reveal private information about the smartphone user and steal
sensitive content stored on the device or transmitted by the device over the
Internet. In this paper, we present a method to reveal various demographics and
technical computer skills of smartphone users by their Internet traffic
records, using machine learning classification models. We implement and
evaluate the method on real life data of smartphone users and show that
smartphone users can be classified by their gender, smoking habits, software
programming experience, and other characteristics.
|
[
"Andrey Finkelstein, Ron Biton, Rami Puzis, Asaf Shabtai"
] |
null | null |
1701.00220
| null | null |
http://arxiv.org/pdf/1701.00220v1
|
2017-01-01T08:12:49Z
|
2017-01-01T08:12:49Z
|
Classification of Smartphone Users Using Internet Traffic
|
Today, smartphone devices are owned by a large portion of the population and have become a very popular platform for accessing the Internet. Smartphones provide the user with immediate access to information and services. However, they can easily expose the user to many privacy risks. Applications that are installed on the device and entities with access to the device's Internet traffic can reveal private information about the smartphone user and steal sensitive content stored on the device or transmitted by the device over the Internet. In this paper, we present a method to reveal various demographics and technical computer skills of smartphone users by their Internet traffic records, using machine learning classification models. We implement and evaluate the method on real life data of smartphone users and show that smartphone users can be classified by their gender, smoking habits, software programming experience, and other characteristics.
|
[
"['Andrey Finkelstein' 'Ron Biton' 'Rami Puzis' 'Asaf Shabtai']"
] |
cs.LG stat.ML
| null |
1701.00251
| null | null |
http://arxiv.org/pdf/1701.00251v1
|
2017-01-01T15:18:13Z
|
2017-01-01T15:18:13Z
|
Outlier Robust Online Learning
|
We consider the problem of learning from noisy data in practical settings
where the size of data is too large to store on a single machine. More
challenging, the data coming from the wild may contain malicious outliers. To
address the scalability and robustness issues, we present an online robust
learning (ORL) approach. ORL is simple to implement and has provable robustness
guarantee -- in stark contrast to existing online learning approaches that are
generally fragile to outliers. We specialize the ORL approach for two concrete
cases: online robust principal component analysis and online linear regression.
We demonstrate the efficiency and robustness advantages of ORL through
comprehensive simulations and predicting image tags on a large-scale data set.
We also discuss extension of the ORL to distributed learning and provide
experimental evaluations.
|
[
"['Jiashi Feng' 'Huan Xu' 'Shie Mannor']",
"Jiashi Feng, Huan Xu, Shie Mannor"
] |
cs.LG stat.ML
| null |
1701.00299
| null | null |
http://arxiv.org/pdf/1701.00299v3
|
2018-03-05T02:03:00Z
|
2017-01-02T00:09:14Z
|
Dynamic Deep Neural Networks: Optimizing Accuracy-Efficiency Trade-offs
by Selective Execution
|
We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
deep neural network that allows selective execution. Given an input, only a
subset of D2NN neurons are executed, and the particular subset is determined by
the D2NN itself. By pruning unnecessary computation depending on input, D2NNs
provide a way to improve computational efficiency. To achieve dynamic selective
execution, a D2NN augments a feed-forward deep neural network (directed acyclic
graph of differentiable modules) with controller modules. Each controller
module is a sub-network whose output is a decision that controls whether other
modules can execute. A D2NN is trained end to end. Both regular and controller
modules in a D2NN are learnable and are jointly trained to optimize both
accuracy and efficiency. Such training is achieved by integrating
backpropagation with reinforcement learning. With extensive experiments of
various D2NN architectures on image classification tasks, we demonstrate that
D2NNs are general and flexible, and can effectively optimize
accuracy-efficiency trade-offs.
|
[
"Lanlan Liu, Jia Deng",
"['Lanlan Liu' 'Jia Deng']"
] |
cs.LG cs.CV
| null |
1701.00485
| null | null |
http://arxiv.org/pdf/1701.00485v2
|
2017-01-04T13:54:51Z
|
2017-01-02T04:28:16Z
|
Two-Bit Networks for Deep Learning on Resource-Constrained Embedded
Devices
|
With the rapid proliferation of Internet of Things and intelligent edge
devices, there is an increasing need for implementing machine learning
algorithms, including deep learning, on resource-constrained mobile embedded
devices with limited memory and computation power. Typical large Convolutional
Neural Networks (CNNs) need large amounts of memory and computational power,
and cannot be deployed on embedded devices efficiently. We present Two-Bit
Networks (TBNs) for model compression of CNNs with edge weights constrained to
(-2, -1, 1, 2), which can be encoded with two bits. Our approach can reduce the
memory usage and improve computational efficiency significantly while achieving
good performance in terms of classification accuracy, thus representing a
reasonable tradeoff between model size and performance.
|
[
"Wenjia Meng, Zonghua Gu, Ming Zhang, Zhaohui Wu",
"['Wenjia Meng' 'Zonghua Gu' 'Ming Zhang' 'Zhaohui Wu']"
] |
cs.NA cs.LG stat.ML
| null |
1701.00573
| null | null |
http://arxiv.org/pdf/1701.00573v3
|
2017-03-22T22:19:57Z
|
2017-01-03T03:31:03Z
|
Robust method for finding sparse solutions to linear inverse problems
using an L2 regularization
|
We analyzed the performance of a biologically inspired algorithm called the
Corrected Projections Algorithm (CPA) when a sparseness constraint is required
to unambiguously reconstruct an observed signal using atoms from an
overcomplete dictionary. By changing the geometry of the estimation problem,
CPA gives an analytical expression for a binary variable that indicates the
presence or absence of a dictionary atom using an L2 regularizer. The
regularized solution can be implemented using an efficient real-time
Kalman-filter type of algorithm. The smoother L2 regularization of CPA makes it
very robust to noise, and CPA outperforms other methods in identifying known
atoms in the presence of strong novel atoms in the signal.
|
[
"Gonzalo H Otazu",
"['Gonzalo H Otazu']"
] |
q-bio.QM cs.LG
| null |
1701.00593
| null | null |
http://arxiv.org/pdf/1701.00593v2
|
2017-04-12T23:47:27Z
|
2017-01-03T06:08:52Z
|
HLA class I binding prediction via convolutional neural networks
|
Many biological processes are governed by protein-ligand interactions. One
such example is the recognition of self and nonself cells by the immune system.
This immune response process is regulated by the major histocompatibility
complex (MHC) protein which is encoded by the human leukocyte antigen (HLA)
complex. Understanding the binding potential between MHC and peptides can lead
to the design of more potent, peptide-based vaccines and immunotherapies for
infectious autoimmune diseases.
We apply machine learning techniques from the natural language processing
(NLP) domain to address the task of MHC-peptide binding prediction. More
specifically, we introduce a new distributed representation of amino acids,
named HLA-Vec, that can be used for a variety of downstream proteomic machine
learning tasks. We then propose a deep convolutional neural network
architecture, named HLA-CNN, for the task of HLA class I-peptide binding
prediction. Experimental results show combining the new distributed
representation with our HLA-CNN architecture achieves state-of-the-art results
in the majority of the latest two Immune Epitope Database (IEDB) weekly
automated benchmark datasets. We further apply our model to predict binding on
the human genome and identify 15 genes with potential for self binding.
|
[
"['Yeeleng Scott Vang' 'Xiaohui Xie']",
"Yeeleng Scott Vang and Xiaohui Xie"
] |
cs.LG
| null |
1701.00597
| null | null |
http://arxiv.org/pdf/1701.00597v1
|
2017-01-03T07:07:14Z
|
2017-01-03T07:07:14Z
|
Deep Convolutional Neural Networks for Pairwise Causality
|
Discovering causal models from observational and interventional data is an
important first step preceding what-if analysis or counterfactual reasoning. As
has been shown before, the direction of pairwise causal relations can, under
certain conditions, be inferred from observational data via standard
gradient-boosted classifiers (GBC) using carefully engineered statistical
features. In this paper we apply deep convolutional neural networks (CNNs) to
this problem by plotting attribute pairs as 2-D scatter plots that are fed to
the CNN as images. We evaluate our approach on the 'Cause-Effect Pairs' NIPS
2013 Data Challenge. We observe that a weighted ensemble of CNN with the
earlier GBC approach yields significant improvement. Further, we observe that
when less training data is available, our approach performs better than the GBC
based approach suggesting that CNN models pre-trained to determine the
direction of pairwise causal direction could have wider applicability in causal
discovery and enabling what-if or counterfactual analysis.
|
[
"Karamjit Singh, Garima Gupta, Lovekesh Vig, Gautam Shroff, and Puneet\n Agarwal",
"['Karamjit Singh' 'Garima Gupta' 'Lovekesh Vig' 'Gautam Shroff'\n 'Puneet Agarwal']"
] |
cs.LG cs.DC
| null |
1701.00609
| null | null |
http://arxiv.org/pdf/1701.00609v1
|
2017-01-03T09:18:22Z
|
2017-01-03T09:18:22Z
|
Akid: A Library for Neural Network Research and Production from a
Dataism Approach
|
Neural networks are a revolutionary but immature technique that is fast
evolving and heavily relies on data. To benefit from the newest development and
newly available data, we want the gap between research and production to be as
small as possible. On the other hand, differing from traditional machine learning
models, neural network is not just yet another statistic model, but a model for
the natural processing engine --- the brain. In this work, we describe a neural
network library named {\texttt akid}. It provides higher level of abstraction
for entities (abstracted as blocks) in nature upon the abstraction done on
signals (abstracted as tensors) by Tensorflow, characterizing the dataism
observation that all entities in nature processes input and emit out in some
ways. It includes a full stack of software that provides abstraction to let
researchers focus on research instead of implementation, while at the same time
the developed program can also be put into production seamlessly in a
distributed environment, and be production ready. At the top application stack,
it provides out-of-box tools for neural network applications. Lower down, akid
provides a programming paradigm that lets user easily build customized models.
The distributed computing stack handles the concurrency and communication, thus
letting models be trained or deployed to a single GPU, multiple GPUs, or a
distributed environment without affecting how a model is specified in the
programming paradigm stack. Lastly, the distributed deployment stack handles
how the distributed computing is deployed, thus decoupling the research
prototype environment with the actual production environment, and is able to
dynamically allocate computing resources, so development (Devs) and operations
(Ops) could be separated. Please refer to http://akid.readthedocs.io/en/latest/
for documentation.
|
[
"['Shuai Li']",
"Shuai Li"
] |
stat.ML cs.LG
| null |
1701.00677
| null | null |
http://arxiv.org/pdf/1701.00677v1
|
2017-01-03T12:33:53Z
|
2017-01-03T12:33:53Z
|
New Methods of Enhancing Prediction Accuracy in Linear Models with
Missing Data
|
In this paper, prediction for linear systems with missing information is
investigated. New methods are introduced to improve the Mean Squared Error
(MSE) on the test set in comparison to state-of-the-art methods, through
appropriate tuning of Bias-Variance trade-off. First, the use of proposed Soft
Weighted Prediction (SWP) algorithm and its efficacy are depicted and compared
to previous works for non-missing scenarios. The algorithm is then modified and
optimized for missing scenarios. It is shown that controlled over-fitting by
suggested algorithms will improve prediction accuracy in various cases.
Simulation results approve our heuristics in enhancing the prediction accuracy.
|
[
"['Mohammad Amin Fakharian' 'Ashkan Esmaeili' 'Farokh Marvasti']",
"Mohammad Amin Fakharian, Ashkan Esmaeili, and Farokh Marvasti"
] |
cs.LG
|
10.1109/BigData.2016.7840826
|
1701.00705
| null | null |
http://arxiv.org/abs/1701.00705v1
|
2016-12-29T20:40:42Z
|
2016-12-29T20:40:42Z
|
Using Big Data to Enhance the Bosch Production Line Performance: A
Kaggle Challenge
|
This paper describes our approach to the Bosch production line performance
challenge run by Kaggle.com. Maximizing the production yield is at the heart of
the manufacturing industry. At the Bosch assembly line, data is recorded for
products as they progress through each stage. Data science methods are applied
to this huge data repository consisting of records of tests and measurements made
for each component along the assembly line to predict internal failures. We
found that it is possible to train a model that predicts which parts are most
likely to fail. Thus a smarter failure detection system can be built and the
parts tagged likely to fail can be salvaged to decrease operating costs and
increase the profit margins.
|
[
"['Ankita Mangal' 'Nishant Kumar']",
"Ankita Mangal and Nishant Kumar"
] |
cs.IT cs.LG math.AG math.IT
| null |
1701.00737
| null | null |
http://arxiv.org/pdf/1701.00737v2
|
2017-04-26T01:06:44Z
|
2017-01-03T16:23:48Z
|
Deterministic and Probabilistic Conditions for Finite Completability of
Low-rank Multi-View Data
|
We consider the multi-view data completion problem, i.e., to complete a
matrix $\mathbf{U}=[\mathbf{U}_1|\mathbf{U}_2]$ where the ranks of
$\mathbf{U},\mathbf{U}_1$, and $\mathbf{U}_2$ are given. In particular, we
investigate the fundamental conditions on the sampling pattern, i.e., locations
of the sampled entries for finite completability of such a multi-view data
given the corresponding rank constraints. In contrast with the existing
analysis on Grassmannian manifold for a single-view matrix, i.e., conventional
matrix completion, we propose a geometric analysis on the manifold structure
for multi-view data to incorporate more than one rank constraint. We provide a
deterministic necessary and sufficient condition on the sampling pattern for
finite completability. We also give a probabilistic condition in terms of the
number of samples per column that guarantees finite completability with high
probability. Finally, using the developed tools, we derive the deterministic
and probabilistic guarantees for unique completability.
|
[
"['Morteza Ashraphijuo' 'Xiaodong Wang' 'Vaneet Aggarwal']",
"Morteza Ashraphijuo and Xiaodong Wang and Vaneet Aggarwal"
] |
cs.LG nlin.CD
| null |
1701.00754
| null | null |
http://arxiv.org/pdf/1701.00754v1
|
2017-01-01T05:17:58Z
|
2017-01-01T05:17:58Z
|
Using Artificial Neural Networks (ANN) to Control Chaos
|
Controlling Chaos could be a big factor in getting great stable amounts of
energy out of small amounts of not necessarily stable resources. By definition,
Chaos is getting huge changes in the system's output due to unpredictable small
changes in initial conditions, and that means we could take advantage of this
fact and select the proper control system to manipulate system's initial
conditions and inputs in general and get a desirable output out of otherwise a
Chaotic system. That was accomplished by first building some known chaotic
circuit (Chua circuit) and the NI's MultiSim was used to simulate the ANN
control system. It was shown that this technique can also be used to stabilize
some hard to stabilize electronic systems.
|
[
"Ibrahim Ighneiwaa, Salwa Hamidatoua, and Fadia Ben Ismaela",
"['Ibrahim Ighneiwaa' 'Salwa Hamidatoua' 'Fadia Ben Ismaela']"
] |
stat.ML cs.LG math.NA
| null |
1701.00757
| null | null |
http://arxiv.org/pdf/1701.00757v1
|
2017-01-03T17:42:34Z
|
2017-01-03T17:42:34Z
|
Clustering Signed Networks with the Geometric Mean of Laplacians
|
Signed networks allow to model positive and negative relationships. We
analyze existing extensions of spectral clustering to signed networks. It turns
out that existing approaches do not recover the ground truth clustering in
several situations where either the positive or the negative network structures
contain no noise. Our analysis shows that these problems arise as existing
approaches take some form of arithmetic mean of the Laplacians of the positive
and negative part. As a solution we propose to use the geometric mean of the
Laplacians of positive and negative part and show that it outperforms the
existing approaches. While the geometric mean of matrices is computationally
expensive, we show that eigenvectors of the geometric mean can be computed
efficiently, leading to a numerical scheme for sparse matrices which is of
independent interest.
|
[
"['Pedro Mercado' 'Francesco Tudisco' 'Matthias Hein']",
"Pedro Mercado, Francesco Tudisco and Matthias Hein"
] |
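As a rough illustration of the construction in the entry above, the sketch below forms the geometric mean of two symmetric positive definite matrices and reads a spectral embedding off its eigenvectors. It is a dense toy computation under my own naming; the paper's actual contribution, an efficient eigensolver for the geometric mean of large sparse Laplacians, is not reproduced here.

```python
import numpy as np

def spd_power(A, p):
    """A**p for a symmetric positive definite matrix, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w ** p) @ V.T

def geometric_mean(A, B):
    """Geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Ahi = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Ahi @ B @ Ahi, 0.5) @ Ah

# Lpos / Lneg stand in for (regularized) Laplacians of the positive and
# negative parts of a signed graph; cluster on the smallest eigenvectors of G.
rng = np.random.default_rng(1)
def random_spd(n):
    X = rng.normal(size=(n, n))
    return X @ X.T + n * np.eye(n)

Lpos, Lneg = random_spd(20), random_spd(20)
G = geometric_mean(Lpos, Lneg)
_, vecs = np.linalg.eigh((G + G.T) / 2)   # symmetrize against round-off
embedding = vecs[:, :3]                   # input to k-means in practice
print(embedding.shape)
```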
cs.LG
| null |
1701.00831
| null | null |
http://arxiv.org/pdf/1701.00831v1
|
2017-01-03T20:54:52Z
|
2017-01-03T20:54:52Z
|
Collapsing of dimensionality
|
We analyze a new approach to Machine Learning coming from a modification of
classical regularization networks by casting the process in the time dimension,
leading to a sort of collapse of dimensionality in the problem of learning the
model parameters. This approach allows the definition of an online learning
algorithm that progressively accumulates the knowledge provided in the input
trajectory. The regularization principle leads to a solution based on a
dynamical system that is paired with a procedure to develop a graph structure
that stores the input regularities acquired from the temporal evolution. We
report an extensive experimental exploration of the behavior of the parameters
of the proposed model and an evaluation on an artificial dataset.
|
[
"Marco Gori, Marco Maggini, Alessandro Rossi",
"['Marco Gori' 'Marco Maggini' 'Alessandro Rossi']"
] |
cs.CL cs.LG
| null |
1701.00851
| null | null |
http://arxiv.org/pdf/1701.00851v1
|
2017-01-03T22:26:10Z
|
2017-01-03T22:26:10Z
|
Unsupervised neural and Bayesian models for zero-resource speech
processing
|
In settings where only unlabelled speech data is available, zero-resource
speech technology needs to be developed without transcriptions, pronunciation
dictionaries, or language modelling text. There are two central problems in
zero-resource speech processing: (i) finding frame-level feature
representations which make it easier to discriminate between linguistic units
(phones or words), and (ii) segmenting and clustering unlabelled speech into
meaningful units. In this thesis, we argue that a combination of top-down and
bottom-up modelling is advantageous in tackling these two problems.
To address the problem of frame-level representation learning, we present the
correspondence autoencoder (cAE), a neural network trained with weak top-down
supervision from an unsupervised term discovery system. By combining this
top-down supervision with unsupervised bottom-up initialization, the cAE yields
much more discriminative features than previous approaches. We then present our
unsupervised segmental Bayesian model that segments and clusters unlabelled
speech into hypothesized words. By imposing a consistent top-down segmentation
while also using bottom-up knowledge from detected syllable boundaries, our
system outperforms several others on multi-speaker conversational English and
Xitsonga speech data. Finally, we show that the clusters discovered by the
segmental Bayesian model can be made less speaker- and gender-specific by using
features from the cAE instead of traditional acoustic features.
In summary, the different models and systems presented in this thesis show
that both top-down and bottom-up modelling can improve representation learning,
segmentation and clustering of unlabelled speech data.
|
[
"Herman Kamper",
"['Herman Kamper']"
] |
cs.CL cs.LG stat.ML
| null |
1701.00874
| null | null |
http://arxiv.org/pdf/1701.00874v4
|
2017-09-03T21:12:40Z
|
2017-01-04T00:10:17Z
|
Neural Probabilistic Model for Non-projective MST Parsing
|
In this paper, we propose a probabilistic parsing model, which defines a
proper conditional probability distribution over non-projective dependency
trees for a given sentence, using neural representations as inputs. The neural
network architecture is based on bi-directional LSTM-CNNs, which benefit from
both word- and character-level representations automatically by using a
combination of bidirectional LSTM and CNN. On top of the neural network, we
introduce a probabilistic structured layer, defining a conditional log-linear
model over non-projective trees. We evaluate our model on 17 different
datasets, across 14 different languages. By exploiting Kirchhoff's Matrix-Tree
Theorem (Tutte, 1984), the partition functions and marginals can be computed
efficiently, leading to a straight-forward end-to-end model training procedure
via back-propagation. Our parser achieves state-of-the-art parsing performance
on nine datasets.
|
[
"['Xuezhe Ma' 'Eduard Hovy']",
"Xuezhe Ma, Eduard Hovy"
] |
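The abstract above relies on Kirchhoff's Matrix-Tree Theorem to obtain partition functions over non-projective dependency trees. As a minimal illustration of that step (not the authors' code), the NumPy sketch below computes the log partition function from random edge and root scores, following my reading of the commonly used single-root Laplacian construction; the function name and toy scores are my own.

```python
import numpy as np

def log_partition_nonprojective(scores, root_scores):
    """Log partition function over non-projective dependency trees via the
    Matrix-Tree Theorem; `scores[h, m]` is the score of head h -> modifier m."""
    W = np.exp(scores)
    np.fill_diagonal(W, 0.0)              # no self-attachments
    L = -W
    np.fill_diagonal(L, W.sum(axis=0))    # L[m, m] = sum_h W[h, m]
    L[0, :] = np.exp(root_scores)         # first row holds root -> m weights
    sign, logdet = np.linalg.slogdet(L)   # partition function Z = det(L)
    return logdet

rng = np.random.default_rng(0)
n = 4                                      # toy 4-word sentence
print(log_partition_nonprojective(rng.normal(size=(n, n)), rng.normal(size=n)))
```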
cs.AI cs.LG cs.LO
|
10.1007/978-3-319-59271-8_5
|
1701.00877
| null | null |
http://arxiv.org/abs/1701.00877v2
|
2017-01-18T23:39:05Z
|
2017-01-04T00:45:37Z
|
On the Usability of Probably Approximately Correct Implication Bases
|
We revisit the notion of probably approximately correct implication bases
from the literature and present a first formulation in the language of formal
concept analysis, with the goal to investigate whether such bases represent a
suitable substitute for exact implication bases in practical use-cases. To this
end, we quantitatively examine the behavior of probably approximately correct
implication bases on artificial and real-world data sets and compare their
precision and recall with respect to their corresponding exact implication
bases. Using a small example, we also provide qualitative insight that
implications from probably approximately correct bases can still represent
meaningful knowledge from a given data set.
|
[
"Daniel Borchmann, Tom Hanika, Sergei Obiedkov",
"['Daniel Borchmann' 'Tom Hanika' 'Sergei Obiedkov']"
] |
stat.ML cs.LG
| null |
1701.00903
| null | null |
http://arxiv.org/pdf/1701.00903v1
|
2017-01-04T05:53:46Z
|
2017-01-04T05:53:46Z
|
An Interval-Based Bayesian Generative Model for Human Complex Activity
Recognition
|
Complex activity recognition is challenging due to the inherent uncertainty
and diversity of performing a complex activity. Normally, each instance of a
complex activity has its own configuration of atomic actions and their temporal
dependencies. We propose in this paper an atomic action-based Bayesian model
that constructs Allen's interval relation networks to characterize complex
activities with structural varieties in a probabilistic generative way: By
introducing latent variables from the Chinese restaurant process, our approach
is able to capture all possible styles of a particular complex activity as a
unique set of distributions over atomic actions and relations. We also show
that local temporal dependencies can be retained and are globally consistent in
the resulting interval network. Moreover, network structure can be learned from
empirical data. A new dataset of complex hand activities has been constructed
and made publicly available, which is much larger in size than any existing
datasets. Empirical evaluations on benchmark datasets as well as our in-house
dataset demonstrate the competitiveness of our approach.
|
[
"Li Liu and Yongzhong Yang and Lakshmi Narasimhan Govindarajan and Shu\n Wang and Bin Hu and Li Cheng and David S. Rosenblum",
"['Li Liu' 'Yongzhong Yang' 'Lakshmi Narasimhan Govindarajan' 'Shu Wang'\n 'Bin Hu' 'Li Cheng' 'David S. Rosenblum']"
] |
cs.LG cs.CR cs.CV q-bio.NC stat.ML
|
10.1162/neco_a_01143
|
1701.00939
| null | null |
http://arxiv.org/abs/1701.00939v1
|
2017-01-04T09:40:09Z
|
2017-01-04T09:40:09Z
|
Dense Associative Memory is Robust to Adversarial Inputs
|
Deep neural networks (DNN) trained in a supervised way suffer from two known
problems. First, the minima of the objective function used in learning
correspond to data points (also known as rubbish examples or fooling images)
that lack semantic similarity with the training data. Second, a clean input can
be changed by a small, and often imperceptible for human vision, perturbation,
so that the resulting deformed input is misclassified by the network. These
findings emphasize the differences between the ways DNN and humans classify
patterns, and raise a question of designing learning algorithms that more
accurately mimic human perception compared to the existing methods.
Our paper examines these questions within the framework of Dense Associative
Memory (DAM) models. These models are defined by the energy function, with
higher order (higher than quadratic) interactions between the neurons. We show
that in the limit when the power of the interaction vertex in the energy
function is sufficiently large, these models have the following three
properties. First, the minima of the objective function are free from rubbish
images, so that each minimum is a semantically meaningful pattern. Second,
artificial patterns poised precisely at the decision boundary look ambiguous to
human subjects and share aspects of both classes that are separated by that
decision boundary. Third, adversarial images constructed by models with small
power of the interaction vertex, which are equivalent to DNN with rectified
linear units (ReLU), fail to transfer to and fool the models with higher order
interactions. This opens up a possibility to use higher order models for
detecting and stopping malicious adversarial attacks. The presented results
suggest that DAM with higher order energy functions are closer to human visual
perception than DNN with ReLUs.
|
[
"Dmitry Krotov, John J Hopfield",
"['Dmitry Krotov' 'John J Hopfield']"
] |
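For reference, the Dense Associative Memory energy with higher-order interactions has, in the rectified-polynomial form I am familiar with, the shape below; n is the power of the interaction vertex discussed in the abstract, sigma the binary state, and xi^mu the stored patterns (my restatement, consult the paper for the exact conventions).

```latex
% Dense Associative Memory energy with interaction power n (rectified
% polynomial form; my restatement of the model class, not copied from the paper)
E(\sigma) = -\sum_{\mu=1}^{K} F\!\left(\xi^{\mu}\cdot\sigma\right),
\qquad
F(x) = \begin{cases} x^{n}, & x \ge 0,\\ 0, & x < 0. \end{cases}
```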
cs.LG
| null |
1701.01000
| null | null |
http://arxiv.org/pdf/1701.01000v4
|
2018-06-02T16:09:47Z
|
2017-01-04T13:26:57Z
|
Online Learning Sensing Matrix and Sparsifying Dictionary Simultaneously
for Compressive Sensing
|
This paper considers the problem of simultaneously learning the Sensing
Matrix and Sparsifying Dictionary (SMSD) on a large training dataset. To
address the formulated joint learning problem, we propose an online algorithm
that consists of a closed-form solution for optimizing the sensing matrix with
a fixed sparsifying dictionary and a stochastic method for learning the
sparsifying dictionary on a large dataset when the sensing matrix is given.
Benefiting from training on a large dataset, the compressive sensing (CS)
system obtained by the proposed algorithm yields a much better performance in
terms of signal recovery accuracy than the existing ones. The simulation
results on natural images demonstrate the effectiveness of the suggested online
algorithm compared with the existing methods.
|
[
"['Tao Hong' 'Zhihui Zhu']"
] |
cs.CV cs.LG cs.NE
| null |
1701.01036
| null | null |
http://arxiv.org/pdf/1701.01036v2
|
2017-07-01T13:21:11Z
|
2017-01-04T14:54:20Z
|
Demystifying Neural Style Transfer
|
Neural Style Transfer has recently demonstrated very exciting results which
have attracted attention in both academia and industry. Despite the amazing
results, the principle of neural style transfer, especially why the Gram
matrices could represent style, remains unclear. In this paper, we propose a novel
interpretation of neural style transfer by treating it as a domain adaptation
problem. Specifically, we theoretically show that matching the Gram matrices of
feature maps is equivalent to minimizing the Maximum Mean Discrepancy (MMD) with
the second order polynomial kernel. Thus, we argue that the essence of neural
style transfer is to match the feature distributions between the style images
and the generated images. To further support our standpoint, we experiment with
several other distribution alignment methods, and achieve appealing results. We
believe this novel interpretation connects these two important research fields,
and could enlighten future research.
|
[
"Yanghao Li, Naiyan Wang, Jiaying Liu and Xiaodi Hou",
"['Yanghao Li' 'Naiyan Wang' 'Jiaying Liu' 'Xiaodi Hou']"
] |
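The claimed equivalence can be made explicit with a short calculation. Writing the feature maps as matrices with one column per spatial position and expanding the Frobenius norm of the Gram difference gives, up to the Gram normalization convention, a biased MMD estimator with the second-order polynomial kernel:

```latex
% F = [f_1,...,f_N], F' = [f'_1,...,f'_N]: feature maps with one column per
% spatial position; G = F F^T and G' = F' F'^T the corresponding Gram matrices.
\| G - G' \|_F^2
  = \sum_{i,j} (f_i^{\top} f_j)^2
  - 2 \sum_{i,j} (f_i^{\top} f'_j)^2
  + \sum_{i,j} (f_i'^{\top} f'_j)^2
  = N^2 \, \widehat{\mathrm{MMD}}^2\!\left[\{f_i\},\{f'_j\};\; k(a,b) = (a^{\top} b)^2\right].
```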
cs.LG stat.ML
| null |
1701.01095
| null | null |
http://arxiv.org/pdf/1701.01095v3
|
2017-04-20T20:37:39Z
|
2017-01-04T18:20:47Z
|
Estimating Quality in Multi-Objective Bandits Optimization
|
Many real-world applications are characterized by a number of conflicting
performance measures. As optimizing in a multi-objective setting leads to a set
of non-dominated solutions, a preference function is required for selecting the
solution with the appropriate trade-off between the objectives. The question
is: how good do estimations of these objectives have to be in order for the
solution maximizing the preference function to remain unchanged? In this paper,
we introduce the concept of preference radius to characterize the robustness of
the preference function and provide guidelines for controlling the quality of
estimations in the multi-objective setting. More specifically, we provide a
general formulation of multi-objective optimization under the bandits setting.
We show how the preference radius relates to the optimal gap and we use this
concept to provide a theoretical analysis of the Thompson sampling algorithm
from multivariate normal priors. We finally present experiments to support the
theoretical results and highlight the fact that one cannot simply scalarize
multi-objective problems into single-objective problems.
|
[
"['Audrey Durand' 'Christian Gagné']",
"Audrey Durand, Christian Gagn\\'e"
] |
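A generic flavour of the setting above: Thompson sampling over arms whose objective vectors have Gaussian posteriors, with an arbitrary preference function applied to the sampled means. The preference function, the known-variance posterior, and all constants below are my own toy choices; the paper's analysis (preference radius, regret bounds) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, sigma = 5, 2, 0.5                       # arms, objectives, noise std
true_means = rng.uniform(0, 1, size=(K, d))   # unknown objective vectors
pref = lambda m: min(m[0], 2 * m[1])          # hypothetical preference function

# Known-variance Gaussian model: with a N(0, sigma^2 I) prior, the posterior of
# an arm's mean objective vector is N(sum / (n + 1), sigma^2 / (n + 1) I).
counts, sums = np.zeros(K), np.zeros((K, d))
for _ in range(2000):
    post_mean = sums / (counts[:, None] + 1.0)
    post_std = sigma / np.sqrt(counts + 1.0)
    samples = post_mean + post_std[:, None] * rng.normal(size=(K, d))
    a = int(np.argmax([pref(s) for s in samples]))         # Thompson step
    counts[a] += 1
    sums[a] += true_means[a] + sigma * rng.normal(size=d)  # noisy reward

print("most played:", int(np.argmax(counts)),
      "true best:", int(np.argmax([pref(m) for m in true_means])))
```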
cs.LG cs.CV
| null |
1701.01218
| null | null |
http://arxiv.org/pdf/1701.01218v1
|
2017-01-05T06:04:53Z
|
2017-01-05T06:04:53Z
|
Overlapping Cover Local Regression Machines
|
We present the Overlapping Domain Cover (ODC) notion for kernel machines, as
a set of overlapping subsets of the data that covers the entire training set
and is optimized to be as spatially cohesive as possible. We show how this
notion benefits the speed of local kernel machines for regression while
minimizing the prediction error. We propose an efficient ODC framework, which
is applicable to various regression models and in particular reduces the
complexity of Twin Gaussian Processes (TGP) regression from cubic to quadratic.
Our notion is also applicable to several kernel methods (e.g., Gaussian Process
Regression (GPR) and IWTGP regression, as shown in our experiments). We also
theoretically justify how the overlapping cover improves local prediction. We
validated and analyzed our method on three benchmark human pose estimation
datasets, and interesting findings are discussed.
|
[
"Mohamed Elhoseiny and Ahmed Elgammal",
"['Mohamed Elhoseiny' 'Ahmed Elgammal']"
] |
stat.ML cs.LG
|
10.1007/s00180-017-0742-2
|
1701.01293
| null | null |
http://arxiv.org/abs/1701.01293v2
|
2017-05-04T07:03:28Z
|
2017-01-05T12:33:19Z
|
OpenML: An R Package to Connect to the Machine Learning Platform OpenML
|
OpenML is an online machine learning platform where researchers can easily
share data, machine learning tasks and experiments as well as organize them
online to work and collaborate more efficiently. In this paper, we present an R
package to interface with the OpenML platform and illustrate its usage in
combination with the machine learning R package mlr. We show how the OpenML
package allows R users to easily search, download and upload data sets and
machine learning tasks. Furthermore, we also show how to upload results of
experiments, share them with others and download results from other users.
Beyond ensuring reproducibility of results, the OpenML platform automates much
of the drudge work, speeds up research, facilitates collaboration and increases
the users' visibility online.
|
[
"['Giuseppe Casalicchio' 'Jakob Bossek' 'Michel Lang' 'Dominik Kirchhoff'\n 'Pascal Kerschke' 'Benjamin Hofner' 'Heidi Seibold' 'Joaquin Vanschoren'\n 'Bernd Bischl']",
"Giuseppe Casalicchio, Jakob Bossek, Michel Lang, Dominik Kirchhoff,\n Pascal Kerschke, Benjamin Hofner, Heidi Seibold, Joaquin Vanschoren, Bernd\n Bischl"
] |
cs.AI cs.GT cs.LG
| null |
1701.01302
| null | null |
http://arxiv.org/pdf/1701.01302v3
|
2017-05-13T08:33:46Z
|
2017-01-05T13:00:05Z
|
Toward negotiable reinforcement learning: shifting priorities in Pareto
optimal sequential decision-making
|
Existing multi-objective reinforcement learning (MORL) algorithms do not
account for objectives that arise from players with differing beliefs.
Concretely, consider two players with different beliefs and utility functions
who may cooperate to build a machine that takes actions on their behalf. A
representation is needed for how much the machine's policy will prioritize each
player's interests over time. Assuming the players have reached common
knowledge of their situation, this paper derives a recursion that any Pareto
optimal policy must satisfy. Two qualitative observations can be made from the
recursion: the machine must (1) use each player's own beliefs in evaluating how
well an action will serve that player's utility function, and (2) shift the
relative priority it assigns to each player's expected utilities over time, by
a factor proportional to how well that player's beliefs predict the machine's
inputs. Observation (2) represents a substantial divergence from na\"{i}ve
linear utility aggregation (as in Harsanyi's utilitarian theorem, and existing
MORL algorithms), which is shown here to be inadequate for Pareto optimal
sequential decision-making on behalf of players with different beliefs.
|
[
"Andrew Critch",
"['Andrew Critch']"
] |
cs.IR cs.LG stat.ML
| null |
1701.01325
| null | null |
http://arxiv.org/pdf/1701.01325v1
|
2017-01-05T14:14:52Z
|
2017-01-05T14:14:52Z
|
Outlier Detection for Text Data : An Extended Version
|
The problem of outlier detection is extremely challenging in many domains
such as text, in which the attribute values are typically non-negative, and
most values are zero. In such cases, it often becomes difficult to separate the
outliers from the natural variations in the patterns in the underlying data. In
this paper, we present a matrix factorization method, which is naturally able
to distinguish the anomalies with the use of low rank approximations of the
underlying data. Our iterative algorithm TONMF is based on block coordinate
descent (BCD) framework. We define blocks over the term-document matrix such
that the function becomes solvable. Given most recently updated values of other
matrix blocks, we always update one block at a time to its optimal. Our
approach has significant advantages over traditional methods for text outlier
detection. Finally, we present experimental results illustrating the
effectiveness of our method over competing methods.
|
[
"['Ramakrishnan Kannan' 'Hyenkyun Woo' 'Charu C. Aggarwal' 'Haesun Park']",
"Ramakrishnan Kannan, Hyenkyun Woo, Charu C. Aggarwal, Haesun Park"
] |
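As a simplified stand-in for the low-rank idea above (not the authors' TONMF algorithm, which additionally models an explicit outlier block via block coordinate descent), one can score documents by how poorly a small non-negative factorization reconstructs them; the toy corpus below is made up.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["machine learning for text mining",
        "text mining and statistical learning",
        "learning sparse text representations",
        "recipes for sourdough bread baking"]         # the intended outlier
A = TfidfVectorizer().fit_transform(docs)             # document-term matrix

nmf = NMF(n_components=2, random_state=0, max_iter=500)
W = nmf.fit_transform(A)                              # low-rank document factors
residual = np.linalg.norm(A.toarray() - W @ nmf.components_, axis=1)
print(docs[int(np.argmax(residual))])                 # highest-residual document
```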
cs.NE cs.AI cs.LG physics.chem-ph stat.ML
| null |
1701.01329
| null | null |
http://arxiv.org/pdf/1701.01329v1
|
2017-01-05T14:28:34Z
|
2017-01-05T14:28:34Z
|
Generating Focussed Molecule Libraries for Drug Discovery with Recurrent
Neural Networks
|
In de novo drug design, computational strategies are used to generate novel
molecules with good affinity to the desired biological target. In this work, we
show that recurrent neural networks can be trained as generative models for
molecular structures, similar to statistical language models in natural
language processing. We demonstrate that the properties of the generated
molecules correlate very well with the properties of the molecules used to
train the model. In order to enrich libraries with molecules active towards a
given biological target, we propose to fine-tune the model with small sets of
molecules, which are known to be active against that target.
Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test
molecules that medicinal chemists designed, whereas against Plasmodium
falciparum (Malaria) it reproduced 28% of 1240 test molecules. When coupled
with a scoring function, our model can perform the complete de novo drug design
cycle to generate large sets of novel molecules for drug discovery.
|
[
"Marwin H.S. Segler, Thierry Kogej, Christian Tyrchan, Mark P. Waller",
"['Marwin H. S. Segler' 'Thierry Kogej' 'Christian Tyrchan'\n 'Mark P. Waller']"
] |
cs.LG
| null |
1701.01358
| null | null |
http://arxiv.org/pdf/1701.01358v1
|
2017-01-05T15:40:44Z
|
2017-01-05T15:40:44Z
|
NeuroRule: A Connectionist Approach to Data Mining
|
Classification, which involves finding rules that partition a given data set
into disjoint groups, is one class of data mining problems. Approaches proposed
so far for mining classification rules for large databases are mainly decision
tree based symbolic learning methods. The connectionist approach based on
neural networks has been thought not well suited for data mining. One of the
major reasons cited is that knowledge generated by neural networks is not
explicitly represented in the form of rules suitable for verification or
interpretation by humans. This paper examines this issue. With our newly
developed algorithms, rules which are similar to, or more concise than those
generated by the symbolic methods can be extracted from the neural networks.
The data mining process using neural networks with the emphasis on rule
extraction is described. Experimental results and comparison with previously
published works are presented.
|
[
"['Hongjun Lu' 'Rudy Setiono' 'Huan Liu']",
"Hongjun Lu and Rudy Setiono and Huan Liu"
] |
cs.DS cs.LG math.NA stat.ML
| null |
1701.01394
| null | null |
http://arxiv.org/pdf/1701.01394v2
|
2018-03-30T00:51:22Z
|
2017-01-05T17:31:16Z
|
On spectral partitioning of signed graphs
|
We argue that the standard graph Laplacian is preferable for spectral
partitioning of signed graphs compared to the signed Laplacian. Simple examples
demonstrate that partitioning based on signs of components of the leading
eigenvectors of the signed Laplacian may be meaningless, in contrast to
partitioning based on the Fiedler vector of the standard graph Laplacian for
signed graphs. We observe that negative eigenvalues are beneficial for spectral
partitioning of signed graphs, making the Fiedler vector easier to compute.
|
[
"Andrew V. Knyazev",
"['Andrew V. Knyazev']"
] |
cs.AI cs.LG cs.RO
|
10.1109/IECON.2016.7793388
|
1701.01497
| null | null |
http://arxiv.org/abs/1701.01497v1
|
2017-01-05T23:01:08Z
|
2017-01-05T23:01:08Z
|
Learning local trajectories for high precision robotic tasks :
application to KUKA LBR iiwa Cartesian positioning
|
To ease the development of robot learning in industry, two conditions need to
be fulfilled. Manipulators must be able to learn high accuracy and precision
tasks while being safe for workers in the factory. In this paper, we extend
previously submitted work, which consists of rapid learning of local
high-accuracy behaviors. By exploration and regression, linear and quadratic
models are learnt for the dynamics and cost function, respectively. Iterative Linear
Quadratic Gaussian Regulator combined with cost quadratic regression can
converge rapidly in the final stages towards high accuracy behavior as the cost
function is modelled quite precisely. In this paper, both a different cost
function and a second order improvement method are implemented within this
framework. We also propose an analysis of the algorithm parameters through
simulation for a positioning task. Finally, an experimental validation on a
KUKA LBR iiwa robot is carried out. This collaborative robot manipulator can be
easily programmed into safety mode, which makes it qualified for the second
industry constraint stated above.
|
[
"Joris Guerin, Olivier Gibaru, Eric Nyiri and Stephane Thiery",
"['Joris Guerin' 'Olivier Gibaru' 'Eric Nyiri' 'Stephane Thiery']"
] |
cs.LG cs.DS math.OC stat.ML
| null |
1701.01722
| null | null |
http://arxiv.org/pdf/1701.01722v3
|
2017-09-18T01:04:08Z
|
2017-01-06T18:43:53Z
|
Follow the Compressed Leader: Faster Online Learning of Eigenvectors and
Faster MMWU
|
The online problem of computing the top eigenvector is fundamental to machine
learning. In both adversarial and stochastic settings, previous results (such
as matrix multiplicative weight update, follow the regularized leader, follow
the compressed leader, block power method) either achieve optimal regret but
run slow, or run fast at the expense of losing a $\sqrt{d}$ factor in total
regret where $d$ is the matrix dimension.
We propose a $\textit{follow-the-compressed-leader (FTCL)}$ framework which
achieves optimal regret without sacrificing the running time. Our idea is to
"compress" the matrix strategy to dimension 3 in the adversarial setting, or
dimension 1 in the stochastic setting. These respectively resolve two open
questions regarding the design of optimal and efficient algorithms for the
online eigenvector problem.
|
[
"['Zeyuan Allen-Zhu' 'Yuanzhi Li']",
"Zeyuan Allen-Zhu and Yuanzhi Li"
] |
cs.LG
| null |
1701.01887
| null | null |
http://arxiv.org/pdf/1701.01887v1
|
2017-01-07T21:44:04Z
|
2017-01-07T21:44:04Z
|
Deep Learning for Time-Series Analysis
|
In many real-world applications, e.g., speech recognition or sleep stage
classification, data are captured over the course of time, constituting a
Time-Series. Time-Series often contain temporal dependencies that cause two
otherwise identical points of time to belong to different classes or predict
different behavior. This characteristic generally increases the difficulty of
analysing them. Existing techniques often depended on hand-crafted features
that were expensive to create and required expert knowledge of the field. With
the advent of Deep Learning, new models of unsupervised learning of features for
Time-Series analysis and forecasting have been developed. Such new developments
are the topic of this paper: a review of the main Deep Learning techniques is
presented, and some applications to Time-Series analysis are summarized. The
results make it clear that Deep Learning has a lot to contribute to the field.
|
[
"John Cristian Borges Gamboa",
"['John Cristian Borges Gamboa']"
] |
cs.LG stat.AP
| null |
1701.01917
| null | null |
http://arxiv.org/pdf/1701.01917v1
|
2017-01-08T06:11:34Z
|
2017-01-08T06:11:34Z
|
See the Near Future: A Short-Term Predictive Methodology to Traffic Load
in ITS
|
The Intelligent Transportation System (ITS) targets to a coordinated traffic
system by applying the advanced wireless communication technologies for road
traffic scheduling. Towards an accurate road traffic control, the short-term
traffic forecasting to predict the road traffic at a particular site over a
short period is often useful and important. In existing works, the Seasonal
Autoregressive Integrated Moving Average (SARIMA) model is a popular approach.
The scheme, however, encounters two challenges: 1) the analysis of related data
is insufficient, so some important features of the data may be neglected; and
2) with data presenting different features, it is unlikely that one
predictive model can fit all situations. To tackle the above issues, in this
work, we develop a hybrid model to improve the accuracy of SARIMA. Specifically,
we first explore the autocorrelation and distribution features present in traffic
flow to revise structure of the time series model. Based on the Gaussian
distribution of traffic flow, a hybrid model with a Bayesian learning algorithm
is developed which can effectively expand the application scenarios of SARIMA.
We show the efficiency and accuracy of our proposal using both analysis and
experimental studies. Using the real-world trace data, we show that the
proposed predicting approach can achieve satisfactory performance in practice.
|
[
"['Xun Zhou' 'Changle Li' 'Zhe Liu' 'Tom H. Luan' 'Zhifang Miao' 'Lina Zhu'\n 'Lei Xiong']",
"Xun Zhou, Changle Li, Zhe Liu, Tom H. Luan, Zhifang Miao, Lina Zhu and\n Lei Xiong"
] |
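For readers unfamiliar with the SARIMA baseline being extended here, the snippet below fits a seasonal ARIMA model to synthetic hourly traffic counts with statsmodels and produces a short-term forecast. The order and seasonal period are illustrative choices, not the paper's settings, and the hybrid Bayesian component is not shown.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly traffic counts with a daily (24-step) seasonal pattern.
rng = np.random.default_rng(0)
t = np.arange(24 * 21)                                   # three weeks of data
flow = 200 + 80 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 10, t.size)

model = SARIMAX(pd.Series(flow), order=(1, 0, 1), seasonal_order=(1, 1, 1, 24))
fitted = model.fit(disp=False)
print(fitted.forecast(steps=6))                          # next six hours
```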
cs.LG
|
10.1007/s10618-020-00691-y
|
1701.02026
| null | null |
http://arxiv.org/abs/1701.02026v3
|
2019-05-18T15:10:29Z
|
2017-01-08T22:25:04Z
|
Large-scale network motif analysis using compression
|
We introduce a new method for finding network motifs: interesting or
informative subgraph patterns in a network. Subgraphs are motifs when their
frequency in the data is high compared to the expected frequency under a null
model. To compute this expectation, a full or approximate count of the
occurrences of a motif is normally repeated on as many as 1000 random graphs
sampled from the null model; a prohibitively expensive step. We use ideas from
the Minimum Description Length (MDL) literature to define a new measure of
motif relevance. With our method, samples from the null model are not required.
Instead we compute the probability of the data under the null model and compare
this to the probability under a specially designed alternative model. With this
new relevance test, we can search for motifs by random sampling, rather than
requiring an accurate count of all instances of a motif. This allows motif
analysis to scale to networks with billions of links.
|
[
"Peter Bloem and Steven de Rooij",
"['Peter Bloem' 'Steven de Rooij']"
] |
stat.ML cs.LG
| null |
1701.02046
| null | null |
http://arxiv.org/pdf/1701.02046v2
|
2017-03-09T17:25:16Z
|
2017-01-09T01:20:55Z
|
Tunable GMM Kernels
|
The recently proposed "generalized min-max" (GMM) kernel can be efficiently
linearized, with direct applications in large-scale statistical learning and
fast near neighbor search. The linearized GMM kernel was extensively compared
with the linearized radial basis function (RBF) kernel. On a large number of
classification tasks, the tuning-free GMM kernel performs (surprisingly) well
compared to the best-tuned RBF kernel. Nevertheless, one would naturally expect
that the GMM kernel ought to be further improved if we introduce tuning
parameters.
In this paper, we study three simple constructions of tunable GMM kernels:
(i) the exponentiated-GMM (or eGMM) kernel, (ii) the powered-GMM (or pGMM)
kernel, and (iii) the exponentiated-powered-GMM (epGMM) kernel. The pGMM kernel
can still be efficiently linearized by modifying the original hashing procedure
for the GMM kernel. On about 60 publicly available classification datasets, we
verify that the proposed tunable GMM kernels typically improve over the
original GMM kernel. On some datasets, the improvements can be astonishingly
significant.
For example, on 11 popular datasets which were used for testing deep learning
algorithms and tree methods, our experiments show that the proposed tunable GMM
kernels are strong competitors to trees and deep nets. The previous studies
developed tree methods including "abc-robust-logitboost" and demonstrated the
excellent performance on those 11 datasets (and other datasets), by
establishing the second-order tree-split formula and new derivatives for
multi-class logistic loss. Compared to tree methods like
"abc-robust-logitboost" (which are slow and need substantial model sizes), the
tunable GMM kernels produce largely comparable results.
|
[
"Ping Li",
"['Ping Li']"
] |
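To make the kernel definitions above concrete, here is a small sketch of the generalized min-max kernel and its powered variant as I understand them: each vector is split into non-negative positive and negative parts, and the kernel is a ratio of summed coordinate-wise minima and maxima, optionally raised to a tuning power p. The function name and the exact pGMM form are my reading of the abstract, not the authors' code.

```python
import numpy as np

def pgmm_kernel(x, y, p=1.0):
    """Powered generalized min-max (pGMM) kernel between two vectors:
    split each vector into non-negative positive/negative parts, then take
    sum(min(u, v)**p) / sum(max(u, v)**p).  p = 1 recovers the tuning-free
    GMM kernel; the powered form here is an assumption on my part."""
    u = np.concatenate([np.maximum(x, 0.0), np.maximum(-x, 0.0)])
    v = np.concatenate([np.maximum(y, 0.0), np.maximum(-y, 0.0)])
    return (np.minimum(u, v) ** p).sum() / (np.maximum(u, v) ** p).sum()

x = np.array([0.5, -1.0, 2.0])
y = np.array([1.5, -0.5, 0.0])
print(pgmm_kernel(x, y), pgmm_kernel(x, y, p=0.5))
```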
cs.LG cs.AI stat.ML
| null |
1701.02058
| null | null |
http://arxiv.org/pdf/1701.02058v1
|
2017-01-09T03:49:26Z
|
2017-01-09T03:49:26Z
|
Coupled Compound Poisson Factorization
|
We present a general framework, the coupled compound Poisson factorization
(CCPF), to capture the missing-data mechanism in extremely sparse data sets by
coupling a hierarchical Poisson factorization with an arbitrary data-generating
model. We derive a stochastic variational inference algorithm for the resulting
model and, as examples of our framework, implement three different
data-generating models---a mixture model, linear regression, and factor
analysis---to robustly model non-random missing data in the context of
clustering, prediction, and matrix factorization. In all three cases, we test
our framework against models that ignore the missing-data mechanism on large
scale studies with non-random missing data, and we show that explicitly
modeling the missing-data mechanism substantially improves the quality of the
results, as measured using data log likelihood on a held-out test set.
|
[
"Mehmet E. Basbug, Barbara E. Engelhardt",
"['Mehmet E. Basbug' 'Barbara E. Engelhardt']"
] |
stat.ML cs.LG q-bio.NC
| null |
1701.02133
| null | null |
http://arxiv.org/pdf/1701.02133v1
|
2017-01-09T11:06:39Z
|
2017-01-09T11:06:39Z
|
Deep driven fMRI decoding of visual categories
|
Deep neural networks have been developed drawing inspiration from the brain
visual pathway, implementing an end-to-end approach: from image data to video
object classes. However, building an fMRI decoder with the typical structure of
a Convolutional Neural Network (CNN), i.e. learning multiple levels of
representations, seems impractical due to the lack of brain data. As a possible
solution, this work presents the first hybrid fMRI and deep features decoding
approach: collected fMRI and deep learnt representations of video object
classes are linked together by means of Kernel Canonical Correlation Analysis.
In decoding, this allows exploiting the discriminatory power of CNN by relating
the fMRI representation to the last layer of CNN (fc7). We show the
effectiveness of embedding fMRI data onto a subspace related to deep features
in distinguishing semantic visual categories based solely on brain imaging
data.
|
[
"['Michele Svanera' 'Sergio Benini' 'Gal Raz' 'Talma Hendler'\n 'Rainer Goebel' 'Giancarlo Valente']",
"Michele Svanera, Sergio Benini, Gal Raz, Talma Hendler, Rainer Goebel,\n and Giancarlo Valente"
] |
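A toy version of the linking step above, with linear CCA from scikit-learn standing in for kernel CCA (which scikit-learn does not provide) and simulated fMRI and fc7 features; all dimensions and names below are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, fmri_dim, fc7_dim, k = 300, 120, 256, 10
latent = rng.normal(size=(n, k))                       # shared stimulus content
fmri = latent @ rng.normal(size=(k, fmri_dim)) + 0.5 * rng.normal(size=(n, fmri_dim))
fc7 = latent @ rng.normal(size=(k, fc7_dim)) + 0.5 * rng.normal(size=(n, fc7_dim))

cca = CCA(n_components=5).fit(fmri, fc7)
fmri_proj, fc7_proj = cca.transform(fmri, fc7)         # shared subspace
print(fmri_proj.shape, fc7_proj.shape)
# Category decoding would then e.g. assign each fMRI projection the label of
# its nearest fc7 projection.
```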
cs.CR cs.LG
| null |
1701.02145
| null | null |
http://arxiv.org/pdf/1701.02145v1
|
2017-01-09T11:46:58Z
|
2017-01-09T11:46:58Z
|
Shallow and Deep Networks Intrusion Detection System: A Taxonomy and
Survey
|
Intrusion detection has attracted a considerable interest from researchers
and industries. The community, after many years of research, still faces the
problem of building reliable and efficient IDS that are capable of handling
large quantities of data, with changing patterns in real time situations. The
work presented in this manuscript classifies intrusion detection systems (IDS).
Moreover, a taxonomy and survey of shallow and deep networks intrusion
detection systems is presented based on previous and current works. This
taxonomy and survey reviews machine learning techniques and their performance
in detecting anomalies. Feature selection which influences the effectiveness of
machine learning (ML) IDS is discussed to explain the role of feature selection
in the classification and training phase of ML IDS. Finally, a discussion of
the false and true positive alarm rates is presented to help researchers model
reliable and efficient machine learning based intrusion detection systems.
|
[
"['Elike Hodo' 'Xavier Bellekens' 'Andrew Hamilton' 'Christos Tachtatzis'\n 'Robert Atkinson']",
"Elike Hodo, Xavier Bellekens, Andrew Hamilton, Christos Tachtatzis and\n Robert Atkinson"
] |
cs.PL cs.LG
| null |
1701.02284
| null | null |
http://arxiv.org/pdf/1701.02284v1
|
2017-01-09T18:02:13Z
|
2017-01-09T18:02:13Z
|
DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning
|
In recent years, Deep Learning (DL) has found great success in domains such
as multimedia understanding. However, the complex nature of multimedia data
makes it difficult to develop DL-based software. The state-of-the-art tools,
such as Caffe, TensorFlow, Torch7, and CNTK, while successful in their
applicable domains, are programming libraries with fixed user interfaces,
internal representations, and execution environments. This makes it difficult to
implement portable and customized DL applications.
In this paper, we present DeepDSL, a domain specific language (DSL) embedded
in Scala, that compiles deep networks written in DeepDSL to Java source code.
DeepDSL provides (1) intuitive constructs to support compact encoding of deep
networks; (2) symbolic gradient derivation of the networks; (3) static analysis
for memory consumption and error detection; and (4) DSL-level optimization to
improve memory and runtime efficiency.
DeepDSL programs are compiled into compact, efficient, customizable, and
portable Java source code, which operates the CUDA and CUDNN interfaces running
on Nvidia GPU via a Java Native Interface (JNI) library. We evaluated DeepDSL
with a number of popular DL networks. Our experiments show that the compiled
programs have very competitive runtime performance and memory efficiency
compared to the existing libraries.
|
[
"Tian Zhao, Xiaobing Huang, Yu Cao",
"['Tian Zhao' 'Xiaobing Huang' 'Yu Cao']"
] |
cs.LG stat.ML
| null |
1701.02291
| null | null |
http://arxiv.org/pdf/1701.02291v2
|
2017-01-12T07:44:17Z
|
2017-01-09T18:29:07Z
|
QuickNet: Maximizing Efficiency and Efficacy in Deep Architectures
|
We present QuickNet, a fast and accurate network architecture that is both
faster and significantly more accurate than other fast deep architectures like
SqueezeNet. Furthermore, it uses less parameters than previous networks, making
it more memory efficient. We do this by making two major modifications to the
reference Darknet model (Redmon et al, 2015): 1) The use of depthwise separable
convolutions and 2) The use of parametric rectified linear units. We make the
observation that parametric rectified linear units are computationally
equivalent to leaky rectified linear units at test time and the observation
that separable convolutions can be interpreted as a compressed Inception
network (Chollet, 2016). Using these observations, we derive a network
architecture, which we call QuickNet, that is both faster and more accurate
than previous models. Our architecture provides at least four major advantages:
(1) A smaller model size, which is more tenable on memory constrained systems;
(2) A significantly faster network which is more tenable on computationally
constrained systems; (3) A high accuracy of 95.7 percent on the CIFAR-10
dataset, which outperforms all but one result published so far, although we note
that the two works are orthogonal approaches and can be combined; (4) Orthogonality
to previous model compression approaches allowing for further speed gains to be
realized.
|
[
"['Tapabrata Ghosh']",
"Tapabrata Ghosh"
] |
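The two modifications mentioned above (depthwise separable convolutions and parametric rectified linear units) can be sketched in a few lines of PyTorch. The block below is a generic illustration with made-up channel sizes and an added batch-norm layer; it is not the actual QuickNet architecture.

```python
import torch
import torch.nn as nn

class SeparablePReLUBlock(nn.Module):
    """Depthwise separable convolution followed by PReLU; a generic sketch of
    the two modifications named above, not the actual QuickNet layers."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # depthwise: one 3x3 filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # pointwise: 1x1 convolution that mixes channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)        # added for stability
        self.act = nn.PReLU(out_ch)             # leaky-ReLU-like at test time

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(SeparablePReLUBlock(32, 64)(x).shape)     # torch.Size([1, 64, 56, 56])
```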
math.AC cs.CC cs.DM cs.LG math.CO
| null |
1701.02302
| null | null |
http://arxiv.org/pdf/1701.02302v3
|
2017-04-06T07:30:15Z
|
2017-01-09T18:58:27Z
|
A Homological Theory of Functions
|
In computational complexity, a complexity class is given by a set of problems
or functions, and a basic challenge is to show separations of complexity
classes $A \not= B$ especially when $A$ is known to be a subset of $B$. In this
paper we introduce a homological theory of functions that can be used to
establish complexity separations, while also providing other interesting
consequences. We propose to associate a topological space $S_A$ to each class
of functions $A$, such that, to separate complexity classes $A \subseteq B'$,
it suffices to observe a change in "the number of holes", i.e. homology, in
$S_A$ as a subclass $B$ of $B'$ is added to $A$. In other words, if the
homologies of $S_A$ and $S_{A \cup B}$ are different, then $A \not= B'$. We
develop the underlying theory of functions based on combinatorial and
homological commutative algebra and Stanley-Reisner theory, and recover Minsky
and Papert's 1969 result that parity cannot be computed by nonmaximal degree
polynomial threshold functions. In the process, we derive a "maximal principle"
for polynomial threshold functions that is used to extend this result further
to arbitrary symmetric functions. A surprising coincidence is demonstrated,
where the maximal dimension of "holes" in $S_A$ upper bounds the VC dimension
of $A$, with equality for common computational cases such as the class of
polynomial threshold functions or the class of linear functionals in $\mathbb
F_2$, or common algebraic cases such as when the Stanley-Reisner ring of $S_A$
is Cohen-Macaulay. As another interesting application of our theory, we prove a
result that a priori has nothing to do with complexity separation: it
characterizes when a vector subspace intersects the positive cone, in terms of
homological conditions. By analogy to Farkas' result doing the same with
*linear conditions*, we call our theorem the Homological Farkas Lemma.
|
[
"Greg Yang",
"['Greg Yang']"
] |
cs.LG
| null |
1701.02377
| null | null |
http://arxiv.org/pdf/1701.02377v1
|
2017-01-09T22:29:08Z
|
2017-01-09T22:29:08Z
|
The principle of cognitive action - Preliminary experimental analysis
|
In this document we show a first implementation and some preliminary results
of a new theory, which addresses Machine Learning problems in the frameworks of
Classical Mechanics and Variational Calculus. We give a general formulation of
the problem and then study the basic behaviors of the model on simple
practical implementations.
|
[
"Marco Gori, Marco Maggini, Alessandro Rossi",
"['Marco Gori' 'Marco Maggini' 'Alessandro Rossi']"
] |
stat.ML cs.LG
| null |
1701.02386
| null | null |
http://arxiv.org/pdf/1701.02386v2
|
2017-05-24T11:45:00Z
|
2017-01-09T23:19:28Z
|
AdaGAN: Boosting Generative Models
|
Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an
effective method for training generative models of complex data such as natural
images. However, they are notoriously hard to train and can suffer from the
problem of missing modes where the model is not able to produce examples in
certain regions of the space. We propose an iterative procedure, called AdaGAN,
where at every step we add a new component into a mixture model by running a
GAN algorithm on a reweighted sample. This is inspired by boosting algorithms,
where many potentially weak individual predictors are greedily aggregated to
form a strong composite predictor. We prove that such an incremental procedure
leads to convergence to the true distribution in a finite number of steps if
each step is optimal, and convergence at an exponential rate otherwise. We also
illustrate experimentally that this procedure addresses the problem of missing
modes.
|
[
"Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann\n Simon-Gabriel and Bernhard Sch\\\"olkopf",
"['Ilya Tolstikhin' 'Sylvain Gelly' 'Olivier Bousquet'\n 'Carl-Johann Simon-Gabriel' 'Bernhard Schölkopf']"
] |
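A skeleton of the boosting-style loop described above, with a hypothetical `train_component` callable standing in for a GAN (or any density model) training step and a deliberately crude reweighting rule; the paper derives a specific reweighting and mixture-weight schedule that this sketch does not reproduce.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def adagan_style_mixture(data, n_steps, train_component, beta=0.3):
    """Skeleton of the additive loop: refit a component on a reweighted sample,
    add it to the mixture, upweight poorly covered points.  `train_component`
    is a hypothetical callable returning a fitted model with score_samples()."""
    weights = np.full(len(data), 1.0 / len(data))
    mixture = []                                        # (beta, component) pairs
    for _ in range(n_steps):
        idx = np.random.choice(len(data), size=len(data), p=weights)
        mixture.append((beta, train_component(data[idx])))
        # crude stand-in: upweight points the current mixture assigns low density
        log_density = np.mean([c.score_samples(data) for _, c in mixture], axis=0)
        weights = np.exp(-log_density)
        weights /= weights.sum()
    return mixture

data = np.random.default_rng(0).normal(size=(500, 2))
mix = adagan_style_mixture(data, 3, lambda d: KernelDensity().fit(d))
print(len(mix))
```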
cs.LG cs.AI
| null |
1701.02392
| null | null |
http://arxiv.org/pdf/1701.02392v1
|
2017-01-09T23:36:05Z
|
2017-01-09T23:36:05Z
|
Reinforcement Learning via Recurrent Convolutional Neural Networks
|
Deep Reinforcement Learning has enabled the learning of policies for complex
tasks in partially observable environments, without explicitly learning the
underlying model of the tasks. While such model-free methods achieve
considerable performance, they often ignore the structure of the task. We present
a natural representation of Reinforcement Learning (RL) problems using
Recurrent Convolutional Neural Networks (RCNNs), to better exploit this
inherent structure. We define 3 such RCNNs, whose forward passes execute an
efficient Value Iteration, propagate beliefs of state in partially observable
environments, and choose optimal actions respectively. Backpropagating
gradients through these RCNNs allows the system to explicitly learn the
Transition Model and Reward Function associated with the underlying MDP,
serving as an elegant alternative to classical model-based RL. We evaluate the
proposed algorithms in simulation, considering a robot planning problem. We
demonstrate the capability of our framework to reduce the cost of replanning,
learn accurate MDP models, and finally re-plan with learnt models to achieve
near-optimal policies.
|
[
"['Tanmay Shankar' 'Santosha K. Dwivedy' 'Prithwijit Guha']",
"Tanmay Shankar, Santosha K. Dwivedy, Prithwijit Guha"
] |
cs.LG math.NA stat.ML
|
10.1016/j.jcp.2017.07.050
|
1701.02440
| null | null |
http://arxiv.org/abs/1701.02440v1
|
2017-01-10T05:14:22Z
|
2017-01-10T05:14:22Z
|
Machine Learning of Linear Differential Equations using Gaussian
Processes
|
This work leverages recent advances in probabilistic machine learning to
discover conservation laws expressed by parametric linear equations. Such
equations involve, but are not limited to, ordinary and partial differential,
integro-differential, and fractional order operators. Here, Gaussian process
priors are modified according to the particular form of such operators and are
employed to infer parameters of the linear equations from scarce and possibly
noisy observations. Such observations may come from experiments or "black-box"
computer simulations.
|
[
"['Maziar Raissi' 'George Em. Karniadakis']"
] |
cs.CL cs.AI cs.CV cs.LG
| null |
1701.02477
| null | null |
http://arxiv.org/pdf/1701.02477v1
|
2017-01-10T08:47:56Z
|
2017-01-10T08:47:56Z
|
Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic
Speech Recognition
|
Multi-task learning (MTL) involves the simultaneous training of two or more
related tasks over shared representations. In this work, we apply MTL to
audio-visual automatic speech recognition(AV-ASR). Our primary task is to learn
a mapping between audio-visual fused features and frame labels obtained from
an acoustic GMM/HMM model. This is combined with an auxiliary task which maps
visual features to frame labels obtained from a separate visual GMM/HMM model.
The MTL model is tested at various levels of babble noise and the results are
compared with a base-line hybrid DNN-HMM AV-ASR model. Our results indicate
that MTL is especially useful at higher levels of noise. Compared to the baseline,
up to 7\% relative improvement in WER is reported at -3 dB SNR.
|
[
"['Abhinav Thanda' 'Shankar M Venkatesan']",
"Abhinav Thanda, Shankar M Venkatesan"
] |
cs.CL cs.LG
| null |
1701.02481
| null | null |
http://arxiv.org/pdf/1701.02481v3
|
2017-05-08T03:19:20Z
|
2017-01-10T08:59:38Z
|
Implicitly Incorporating Morphological Information into Word Embedding
|
In this paper, we propose three novel models to enhance word embedding by
implicitly using morphological information. Experiments on word similarity and
syntactic analogy show that the implicit models are superior to traditional
explicit ones. Our models outperform all state-of-the-art baselines and
significantly improve the performance on both tasks. Moreover, our performance
on the smallest corpus is similar to the performance of CBOW on the corpus
which is five times the size of ours. Parameter analysis indicates that the
implicit models can supplement semantic information during the word embedding
training process.
|
[
"['Yang Xu' 'Jiawei Liu']",
"Yang Xu and Jiawei Liu"
] |
cs.LG cs.AI cs.GT
|
10.1145/3018661.3018702
|
1701.02490
| null | null |
http://arxiv.org/abs/1701.02490v2
|
2017-01-12T01:37:39Z
|
2017-01-10T09:30:29Z
|
Real-Time Bidding by Reinforcement Learning in Display Advertising
|
The majority of online display ads are served through real-time bidding (RTB)
--- each ad display impression is auctioned off in real-time when it is just
being generated from a user visit. To place an ad automatically and optimally,
it is critical for advertisers to devise a learning algorithm to cleverly bid
an ad impression in real-time. Most previous works consider the bid decision as
a static optimization problem of either treating the value of each impression
independently or setting a bid price to each segment of ad volume. However, the
bidding for a given ad campaign would repeatedly happen during its life span
before the budget runs out. As such, each bid is strategically correlated by
the constrained budget and the overall effectiveness of the campaign (e.g., the
rewards from generated clicks), which is only observed after the campaign has
completed. Thus, it is of great interest to devise an optimal bidding strategy
sequentially so that the campaign budget can be dynamically allocated across
all the available impressions on the basis of both the immediate and future
rewards. In this paper, we formulate the bid decision process as a
reinforcement learning problem, where the state space is represented by the
auction information and the campaign's real-time parameters, while an action is
the bid price to set. By modeling the state transition via auction competition,
we build a Markov Decision Process framework for learning the optimal bidding
policy to optimize the advertising performance in the dynamic real-time bidding
environment. Furthermore, the scalability problem from the large real-world
auction volume and campaign budget is well handled by state value approximation
using neural networks.
|
[
"['Han Cai' 'Kan Ren' 'Weinan Zhang' 'Kleanthis Malialis' 'Jun Wang'\n 'Yong Yu' 'Defeng Guo']"
] |
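The sequential view described above can be illustrated with a tiny tabular dynamic program over (remaining auctions, remaining budget); the win-probability curve and per-win value below are made up, and the paper's neural value-function approximation is omitted.

```python
import numpy as np

T, B, max_bid = 20, 30, 5            # remaining auctions, budget, bid levels
p_win = lambda b: b / (b + 2.0)      # made-up win probability for bid b
value_per_win = 1.0                  # e.g. expected click value

V = np.zeros((T + 1, B + 1))         # V[t, budget] = optimal expected reward
for t in range(1, T + 1):
    for budget in range(B + 1):
        best = V[t - 1, budget]      # option: do not bid on this impression
        for b in range(1, min(max_bid, budget) + 1):
            win = p_win(b)
            best = max(best, win * (value_per_win + V[t - 1, budget - b])
                             + (1 - win) * V[t - 1, budget])
        V[t, budget] = best
print(V[T, B])                       # value of the whole campaign
```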
cs.LG stat.ML
|
10.1109/TNNLS.2020.2973293
|
1701.02511
| null | null |
http://arxiv.org/abs/1701.02511v5
|
2020-02-10T01:31:57Z
|
2017-01-10T10:42:25Z
|
Heterogeneous domain adaptation: An unsupervised approach
|
Domain adaptation leverages the knowledge in one domain - the source domain -
to improve learning efficiency in another domain - the target domain. Existing
heterogeneous domain adaptation research is relatively well-progressed, but
only in situations where the target domain contains at least a few labeled
instances. In contrast, heterogeneous domain adaptation with an unlabeled
target domain has not been well-studied. To contribute to the research in this
emerging field, this paper presents: (1) an unsupervised knowledge transfer
theorem that guarantees the correctness of transferring knowledge; and (2) a
principal angle-based metric to measure the distance between two pairs of
domains: one pair comprises the original source and target domains and the
other pair comprises two homogeneous representations of two domains. The
theorem and the metric have been implemented in an innovative transfer model,
called a Grassmann-Linear monotonic maps-geodesic flow kernel (GLG), that is
specifically designed for heterogeneous unsupervised domain adaptation (HeUDA).
The linear monotonic maps meet the conditions of the theorem and are used to
construct homogeneous representations of the heterogeneous domains. The metric
shows the extent to which the homogeneous representations have preserved the
information in the original source and target domains. By minimizing the
proposed metric, the GLG model learns the homogeneous representations of
heterogeneous domains and transfers knowledge through these learned
representations via a geodesic flow kernel. To evaluate the model, five public
datasets were reorganized into ten HeUDA tasks across three applications:
cancer detection, credit assessment, and text classification. The experiments
demonstrate that the proposed model delivers superior performance over the
existing baselines.
|
[
"Feng Liu, Guanquan Zhang, Jie Lu",
"['Feng Liu' 'Guanquan Zhang' 'Jie Lu']"
] |
cs.CV cs.LG
| null |
1701.02676
| null | null |
http://arxiv.org/pdf/1701.02676v1
|
2017-01-10T16:43:03Z
|
2017-01-10T16:43:03Z
|
Unsupervised Image-to-Image Translation with Generative Adversarial
Networks
|
It is useful to automatically transform an image from its original form into
some synthetic form (style, partial contents, etc.), while keeping the original
structure or semantics. We define this requirement as the "image-to-image
translation" problem, and propose a general approach to achieve it based on
deep convolutional and conditional generative adversarial networks (GANs),
which since 2014 have achieved phenomenal success in learning to map images
from noise input. In this work, we develop a two-step (unsupervised) learning
method to translate images between different domains by using unlabeled images,
without specifying any correspondence between them, so as to avoid the cost of
acquiring labeled data. Compared with prior works, we demonstrate the generality
of our model, with which a variety of translations can be conducted by a single
type of model. Such capability is desirable in applications like bidirectional
translation.
|
[
"Hao Dong, Paarth Neekhara, Chao Wu, Yike Guo",
"['Hao Dong' 'Paarth Neekhara' 'Chao Wu' 'Yike Guo']"
] |
cs.CL cs.LG stat.ML
| null |
1701.02720
| null | null |
http://arxiv.org/pdf/1701.02720v1
|
2017-01-10T18:30:11Z
|
2017-01-10T18:30:11Z
|
Towards End-to-End Speech Recognition with Deep Convolutional Neural
Networks
|
Convolutional Neural Networks (CNNs) are effective models for reducing spectral variations and modeling spectral correlations in acoustic features for automatic speech recognition (ASR). Hybrid speech recognition systems incorporating CNNs with Hidden Markov Models/Gaussian Mixture Models (HMMs/GMMs) have achieved the state-of-the-art in various benchmarks. Meanwhile, Connectionist Temporal Classification (CTC) with Recurrent Neural Networks (RNNs), which is proposed for labeling unsegmented sequences, makes it feasible to train an end-to-end speech recognition system instead of hybrid settings. However, RNNs are computationally expensive and sometimes difficult to train. In this paper, inspired by the advantages of both CNNs and the CTC approach, we propose an end-to-end speech framework for sequence labeling, by combining hierarchical CNNs with CTC directly without recurrent connections. By evaluating the approach on the TIMIT phoneme recognition task, we show that the proposed model is not only computationally efficient, but also competitive with the existing baseline systems. Moreover, we argue that CNNs have the capability to model temporal correlations with appropriate context information.
|
[
"['Ying Zhang' 'Mohammad Pezeshki' 'Philemon Brakel' 'Saizheng Zhang'\n 'Cesar Laurent Yoshua Bengio' 'Aaron Courville']"
] |
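The record above proposes combining hierarchical CNNs with CTC and no recurrent connections. A minimal sketch of that combination in PyTorch follows; the feature dimensions, label-set size, and network depth are assumptions of this sketch, not the paper's TIMIT configuration.

```python
# Sketch: a non-recurrent convolutional acoustic model trained with CTC.
# Feature dimensions and network depth are illustrative assumptions.
# Requires: torch (PyTorch).
import torch
import torch.nn as nn

NUM_CLASSES = 40   # e.g. 39 phoneme labels + 1 CTC blank (assumed label set)
FEAT_DIM = 40      # e.g. 40 log-mel filterbank channels (assumed)

class ConvCTCModel(nn.Module):
    def __init__(self):
        super().__init__()
        # 1-D convolutions over time; no recurrent connections.
        self.conv = nn.Sequential(
            nn.Conv1d(FEAT_DIM, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.out = nn.Linear(128, NUM_CLASSES)

    def forward(self, feats):                 # feats: (batch, time, FEAT_DIM)
        h = self.conv(feats.transpose(1, 2))  # -> (batch, 128, time)
        logits = self.out(h.transpose(1, 2))  # -> (batch, time, classes)
        return logits.log_softmax(dim=-1)

model = ConvCTCModel()
ctc = nn.CTCLoss(blank=0)                     # class 0 reserved for the blank

feats = torch.randn(2, 100, FEAT_DIM)         # two utterances, 100 frames each
log_probs = model(feats).transpose(0, 1)      # CTCLoss expects (time, batch, classes)
targets = torch.randint(1, NUM_CLASSES, (2, 20))   # dummy phoneme sequences
loss = ctc(log_probs, targets,
           torch.full((2,), 100, dtype=torch.long),   # input lengths
           torch.full((2,), 20, dtype=torch.long))    # target lengths
loss.backward()                               # gradients for end-to-end training
print(float(loss))
```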
stat.ML cs.IT cs.LG math.IT
| null |
1701.02789
| null | null |
http://arxiv.org/pdf/1701.02789v3
|
2017-03-09T22:50:38Z
|
2017-01-10T21:26:03Z
|
Identifying Best Interventions through Online Importance Sampling
|
Motivated by applications in computational advertising and systems biology,
we consider the problem of identifying the best out of several possible soft
interventions at a source node $V$ in an acyclic causal directed graph, to
maximize the expected value of a target node $Y$ (located downstream of $V$).
Our setting imposes a fixed total budget for sampling under various
interventions, along with cost constraints on different types of interventions.
We pose this as a best arm identification bandit problem with $K$ arms where
each arm is a soft intervention at $V,$ and leverage the information leakage
among the arms to provide the first gap dependent error and simple regret
bounds for this problem. Our results are a significant improvement over the
traditional best arm identification results. We empirically show that our
algorithms outperform the state of the art in the Flow Cytometry data-set, and
also apply our algorithm for model interpretation of the Inception-v3 deep net
that classifies images.
|
[
"Rajat Sen, Karthikeyan Shanmugam, Alexandros G. Dimakis, and Sanjay\n Shakkottai",
"['Rajat Sen' 'Karthikeyan Shanmugam' 'Alexandros G. Dimakis'\n 'Sanjay Shakkottai']"
] |
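The abstract above frames intervention selection as best-arm identification under a fixed budget. The paper's contribution is an importance-sampling scheme that exploits information leakage between arms, which the abstract does not specify; the sketch below therefore shows only the generic fixed-budget baseline (sequential halving) that such methods improve upon.

```python
# Sketch: sequential halving for best-arm identification under a budget.
# This is a generic baseline, not the paper's importance-sampling algorithm.
# Requires: numpy.
import numpy as np

def sequential_halving(pull, n_arms, budget):
    """Return the index of the arm believed to have the highest mean reward."""
    active = list(range(n_arms))
    counts = np.zeros(n_arms)
    sums = np.zeros(n_arms)
    rounds = int(np.ceil(np.log2(n_arms)))
    per_round = budget // max(rounds, 1)
    while len(active) > 1 and per_round > 0:
        pulls_each = max(per_round // len(active), 1)
        for a in active:
            for _ in range(pulls_each):
                sums[a] += pull(a)
                counts[a] += 1
        means = sums[active] / counts[active]
        # Keep the better half of the active arms for the next round.
        order = np.argsort(means)[::-1]
        active = [active[i] for i in order[: max(len(active) // 2, 1)]]
    return active[0]

rng = np.random.default_rng(0)
true_means = np.array([0.1, 0.3, 0.55, 0.5])           # arm 2 is best
pull = lambda a: rng.normal(true_means[a], 1.0)        # one noisy reward sample
print(sequential_halving(pull, n_arms=4, budget=4000))
```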
stat.ML cs.LG
|
10.1109/TSP.2017.2739100
|
1701.02804
| null | null |
http://arxiv.org/abs/1701.02804v1
|
2017-01-07T03:44:54Z
|
2017-01-07T03:44:54Z
|
Similarity Function Tracking using Pairwise Comparisons
|
Recent work in distance metric learning has focused on learning
transformations of data that best align with specified pairwise similarity and
dissimilarity constraints, often supplied by a human observer. The learned
transformations lead to improved retrieval, classification, and clustering
algorithms due to the better adapted distance or similarity measures. Here, we
address the problem of learning these transformations when the underlying
constraint generation process is nonstationary. This nonstationarity can be due
to changes in either the ground-truth clustering used to generate constraints
or changes in the feature subspaces in which the class structure is apparent.
We propose Online Convex Ensemble StrongLy Adaptive Dynamic Learning (OCELAD),
a general adaptive, online approach for learning and tracking optimal metrics
as they change over time that is highly robust to a variety of nonstationary
behaviors in the changing metric. We apply the OCELAD framework to an ensemble
of online learners. Specifically, we create a retro-initialized composite
objective mirror descent (COMID) ensemble (RICE) consisting of a set of
parallel COMID learners with different learning rates, and demonstrate
parameter-free RICE-OCELAD metric learning on both synthetic data and a highly
nonstationary Twitter dataset. We show significant performance improvements and
increased robustness to nonstationary effects relative to previously proposed
batch and online distance metric learning algorithms.
|
[
"['Kristjan Greenewald' 'Stephen Kelley' 'Brandon Oselio'\n 'Alfred O. Hero III']",
"Kristjan Greenewald, Stephen Kelley, Brandon Oselio, Alfred O. Hero\n III"
] |
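The abstract above builds OCELAD on an ensemble of online metric learners (RICE). The ensemble weighting is not given in the abstract, so the sketch below only illustrates a single online Mahalanobis-metric learner updated from pairwise similar/dissimilar constraints; the learning rate and hinge margin are assumptions of this sketch.

```python
# Sketch: one online Mahalanobis-metric learner driven by pairwise
# similar/dissimilar constraints (a single base learner; OCELAD's adaptive
# ensemble re-weighting is not reproduced here). Requires: numpy.
import numpy as np

def psd_project(M):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def online_metric_step(M, x, y, similar, margin=1.0, lr=0.1):
    """Hinge-loss gradient step on the metric M for one labeled pair."""
    d = x - y
    dist2 = d @ M @ d                        # squared Mahalanobis distance
    outer = np.outer(d, d)
    if similar and dist2 > margin:           # similar pair too far apart
        M = M - lr * outer
    elif not similar and dist2 < margin:     # dissimilar pair too close
        M = M + lr * outer
    return psd_project(M)

rng = np.random.default_rng(0)
dim = 5
M = np.eye(dim)
for _ in range(500):
    x, y = rng.normal(size=dim), rng.normal(size=dim)
    similar = rng.random() < 0.5             # stand-in constraint stream
    M = online_metric_step(M, x, y, similar)
print(np.round(M, 2))
```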
cs.LG cs.CV stat.ML
| null |
1701.02815
| null | null |
http://arxiv.org/pdf/1701.02815v2
|
2017-08-12T21:36:09Z
|
2017-01-11T00:23:34Z
|
Stochastic Generative Hashing
|
Learning-based binary hashing has become a powerful paradigm for fast search
and retrieval in massive databases. However, due to the requirement of discrete
outputs for the hash functions, learning such functions is known to be very
challenging. In addition, the objective functions adopted by existing hashing
techniques are mostly chosen heuristically. In this paper, we propose a novel
generative approach to learn hash functions through the Minimum Description Length
principle such that the learned hash codes maximally compress the dataset and
can also be used to regenerate the inputs. We also develop an efficient
learning algorithm based on the stochastic distributional gradient, which
avoids the notorious difficulty caused by binary output constraints, to jointly
optimize the parameters of the hash function and the associated generative
model. Extensive experiments on a variety of large-scale datasets show that the
proposed method achieves better retrieval results than the existing
state-of-the-art methods.
|
[
"Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, Le Song",
"['Bo Dai' 'Ruiqi Guo' 'Sanjiv Kumar' 'Niao He' 'Le Song']"
] |
cs.LG
| null |
1701.02886
| null | null |
http://arxiv.org/pdf/1701.02886v4
|
2019-02-07T08:26:12Z
|
2017-01-11T08:36:54Z
|
The empirical Christoffel function with applications in data analysis
|
We illustrate the potential applications in machine learning of the
Christoffel function, or more precisely, its empirical counterpart associated
with a counting measure uniformly supported on a finite set of points. Firstly,
we provide a thresholding scheme which allows one to approximate the support of
a measure from a finite subset of its moments with strong asymptotic guarantees.
Secondly, we provide a consistency result which relates the empirical
Christoffel function and its population counterpart in the limit of large
samples. Finally, we illustrate the relevance of our results on simulated and
real world datasets for several applications in statistics and machine
learning: (a) density and support estimation from finite samples, (b) outlier
and novelty detection and (c) affine matching.
|
[
"Jean-Bernard Lasserre and Edouard Pauwels",
"['Jean-Bernard Lasserre' 'Edouard Pauwels']"
] |
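The empirical Christoffel function described above has, in its standard form, a simple closed expression: with $v_d(x)$ a monomial basis up to degree $d$ and $M_d$ the empirical moment matrix of the data, $\Lambda_d(x) = 1/\big(v_d(x)^\top M_d^{-1} v_d(x)\big)$, and small values flag points outside the support. A minimal sketch for 2-D data and degree 2 (the degree and ridge term are choices of this sketch, not the paper's):

```python
# Sketch: the empirical Christoffel function as an outlier/novelty score.
# The monomial basis is hard-coded for 2-D data and degree 2 for brevity.
# Requires: numpy.
import numpy as np

def monomials_deg2(X):
    """Monomial basis [1, x, y, x^2, xy, y^2] evaluated at each row of X."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

def christoffel_scores(X_train, X_query, ridge=1e-8):
    V = monomials_deg2(X_train)                        # (n, p)
    M = V.T @ V / len(X_train)                         # empirical moment matrix
    M_inv = np.linalg.inv(M + ridge * np.eye(M.shape[1]))
    Vq = monomials_deg2(X_query)
    # Lambda(x) = 1 / (v(x)^T M^{-1} v(x)); small values flag outliers.
    return 1.0 / np.einsum('ij,jk,ik->i', Vq, M_inv, Vq)

rng = np.random.default_rng(0)
inliers = rng.normal(size=(500, 2))
queries = np.array([[0.0, 0.0], [6.0, 6.0]])           # in-support vs. far away
print(christoffel_scores(inliers, queries))            # first score >> second
```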
stat.ML cs.CV cs.LG
| null |
1701.02892
| null | null |
http://arxiv.org/pdf/1701.02892v1
|
2017-01-11T08:52:53Z
|
2017-01-11T08:52:53Z
|
Multivariate Regression with Grossly Corrupted Observations: A Robust
Approach and its Applications
|
This paper studies the problem of multivariate linear regression where a
portion of the observations is grossly corrupted or is missing, and the
magnitudes and locations of such occurrences are unknown a priori. To deal
with this problem, we propose a new approach that explicitly considers the error
source as well as its sparse nature. An interesting property of our
approach lies in its ability to allow individual regression output elements
or tasks to possess their own noise levels. Moreover, despite working with a
non-smooth optimization problem, our approach is still guaranteed to converge to
its optimal solution. Experiments on synthetic data demonstrate the
competitiveness of our approach compared with existing multivariate regression
models. In addition, empirically our approach has been validated with very
promising results on two exemplar real-world applications: The first concerns
the prediction of \textit{Big-Five} personality based on user behaviors at
social network sites (SNSs), while the second is 3D human hand pose estimation
from depth images. The implementation of our approach and comparison methods as
well as the involved datasets are made publicly available in support of the
open-source and reproducible research initiatives.
|
[
"['Xiaowei Zhang' 'Chi Xu' 'Yu Zhang' 'Tingshao Zhu' 'Li Cheng']",
"Xiaowei Zhang and Chi Xu and Yu Zhang and Tingshao Zhu and Li Cheng"
] |
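The approach above models observations as a regression fit plus a sparse gross-error term. The following is a hedged sketch of that modeling idea, using a generic objective minimize ||Y − XW − S||_F² + λ||S||_1 solved by alternating least squares and soft-thresholding; the paper's actual objective additionally learns per-output noise levels, which is omitted here.

```python
# Sketch: a generic "least squares + sparse gross-error term" decomposition,
#   minimize ||Y - X W - S||_F^2 + lam * ||S||_1,
# solved by alternating least squares on W and soft-thresholding on S.
# Requires: numpy.
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def robust_multivariate_regression(X, Y, lam=0.5, iters=50):
    n, d = X.shape
    S = np.zeros_like(Y)
    XtX_inv = np.linalg.inv(X.T @ X + 1e-8 * np.eye(d))
    for _ in range(iters):
        W = XtX_inv @ X.T @ (Y - S)               # least squares given S
        S = soft_threshold(Y - X @ W, lam / 2.0)  # prox of lam*||S||_1
    return W, S

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
W_true = rng.normal(size=(5, 3))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 3))
Y[::25] += 10.0                                   # gross corruption on a few rows
W_hat, S_hat = robust_multivariate_regression(X, Y)
print(np.round(np.abs(W_hat - W_true).max(), 3))  # coefficient error stays small
```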
cs.LG stat.ML
| null |
1701.02960
| null | null |
http://arxiv.org/pdf/1701.02960v2
|
2017-11-01T14:01:34Z
|
2017-01-11T13:08:52Z
|
Fast mixing for Latent Dirichlet allocation
|
Markov chain Monte Carlo (MCMC) algorithms are ubiquitous in probability
theory in general and in machine learning in particular. A Markov chain is
devised so that its stationary distribution is some probability distribution of
interest. Then one samples from the given distribution by running the Markov
chain for a "long time" until it appears to be stationary and then collects the
sample. However these chains are often very complex and there are no
theoretical guarantees that stationarity is actually reached. In this paper we
study the Gibbs sampler of the posterior distribution of a very simple case of
Latent Dirichlet Allocation, the arguably most well known Bayesian unsupervised
learning model for text generation and text classification. It is shown that
when the corpus consists of two long documents of equal length $m$ and the
vocabulary consists of only two different words, the mixing time is at most of
order $m^2\log m$ (which corresponds to $m\log m$ rounds over the corpus). It
will be apparent from our analysis that it seems very likely that the mixing
time is not much worse in the more relevant case when the number of documents
and the size of the vocabulary are also large, as long as each word is
represented a large number of times in each document, even though the
computations involved may be intractable.
|
[
"Johan Jonasson",
"['Johan Jonasson']"
] |
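The paper above analyzes the mixing time of the Gibbs sampler for a toy LDA instance (two documents, two words). For reference, a standard collapsed Gibbs sampler for LDA, run here on exactly such a toy corpus, looks as follows; the implementation is the textbook sampler, not code or notation taken from the paper.

```python
# Sketch: a collapsed Gibbs sampler for LDA on a toy corpus with two documents
# and a two-word vocabulary. Requires: numpy.
import numpy as np

def lda_gibbs(docs, n_topics, vocab_size, alpha=0.5, beta=0.5,
              n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    # z[d][i] = topic assignment of the i-th token of document d.
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]
    ndk = np.zeros((len(docs), n_topics))      # document-topic counts
    nkw = np.zeros((n_topics, vocab_size))     # topic-word counts
    nk = np.zeros(n_topics)                    # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Conditional p(z = k | everything else), up to a constant.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return nkw + beta                          # unnormalized topic-word weights

# Two long documents over a two-word vocabulary, as in the analyzed setting.
docs = [[0] * 80 + [1] * 20, [0] * 20 + [1] * 80]
print(np.round(lda_gibbs(docs, n_topics=2, vocab_size=2), 1))
```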
stat.ML cs.LG
| null |
1701.03006
| null | null |
http://arxiv.org/pdf/1701.03006v1
|
2017-01-11T15:18:18Z
|
2017-01-11T15:18:18Z
|
Compressive Sensing via Convolutional Factor Analysis
|
We solve the compressive sensing problem via convolutional factor analysis,
where the convolutional dictionaries are learned {\em in situ} from the
compressed measurements. An alternating direction method of multipliers (ADMM)
paradigm for compressive sensing inversion based on convolutional factor
analysis is developed. The proposed algorithm provides reconstructed images as
well as features, which can be directly used for recognition ($e.g.$,
classification) tasks. When a deep (multilayer) model is constructed, a
stochastic unpooling process is employed to build a generative model. During
reconstruction and testing, we project the upper layer dictionary to the data
level and only a single layer deconvolution is required. We demonstrate that
using $\sim30\%$ (relative to pixel numbers) compressed measurements, the
proposed model achieves the classification accuracy comparable to the original
data on MNIST. We also observe that when the compressed measurements are very
limited ($e.g.$, $<10\%$), the upper layer dictionary can provide better
reconstruction results than the bottom layer.
|
[
"['Xin Yuan' 'Yunchen Pu' 'Lawrence Carin']",
"Xin Yuan, Yunchen Pu, Lawrence Carin"
] |
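The model above learns convolutional dictionaries from the compressed measurements themselves and inverts them with an ADMM scheme. The sketch below shows only the classical building block: compressive-sensing inversion by sparse coding with a fixed random dictionary and ISTA. The convolutional factor analysis and stochastic unpooling parts are not reproduced.

```python
# Sketch: classical compressive-sensing inversion by sparse coding (ISTA on a
# lasso objective) with a fixed sensing matrix. Requires: numpy.
import numpy as np

def ista(y, A, lam=0.01, n_iters=2000):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 with proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)
        x = x - grad / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 8                         # signal dim, ~30% measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix
y = A @ x_true                               # compressed measurements
x_hat = ista(y, A)
print(np.round(np.abs(x_hat - x_true).max(), 3))  # error should be small
```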
cs.CV cs.LG stat.ML
| null |
1701.03077
| null | null |
http://arxiv.org/pdf/1701.03077v10
|
2019-04-04T20:05:33Z
|
2017-01-11T17:39:14Z
|
A General and Adaptive Robust Loss Function
|
We present a generalization of the Cauchy/Lorentzian, Geman-McClure,
Welsch/Leclerc, generalized Charbonnier, Charbonnier/pseudo-Huber/L1-L2, and L2
loss functions. By introducing robustness as a continuous parameter, our loss
function allows algorithms built around robust loss minimization to be
generalized, which improves performance on basic vision tasks such as
registration and clustering. Interpreting our loss as the negative log of a
univariate density yields a general probability distribution that includes
normal and Cauchy distributions as special cases. This probabilistic
interpretation enables the training of neural networks in which the robustness
of the loss automatically adapts itself during training, which improves
performance on learning-based tasks such as generative image synthesis and
unsupervised monocular depth estimation, without requiring any manual parameter
tuning.
|
[
"Jonathan T. Barron",
"['Jonathan T. Barron']"
] |
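The abstract above does not state the loss in closed form. As recalled from the published version of the paper (treat the exact expression as an assumption here, not as text quoted from the abstract), the family can be written as $\rho(x,\alpha,c) = \frac{|\alpha-2|}{\alpha}\big[((x/c)^2/|\alpha-2| + 1)^{\alpha/2} - 1\big]$, with L2, Cauchy, and Welsch recovered at $\alpha = 2$, $0$, and $-\infty$. A small sketch:

```python
# Sketch of the general robust loss family described above, with the common
# special cases handled explicitly. The closed form is recalled from the
# published paper and should be treated as an assumption. Requires: numpy.
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """rho(x, alpha, c): alpha interpolates between classical robust losses."""
    z = (x / c) ** 2
    if alpha == 2:                       # L2 (quadratic) loss
        return 0.5 * z
    if alpha == 0:                       # Cauchy / Lorentzian loss
        return np.log(0.5 * z + 1.0)
    if np.isneginf(alpha):               # Welsch / Leclerc loss
        return 1.0 - np.exp(-0.5 * z)
    b = abs(alpha - 2.0)                 # general case
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

x = np.linspace(-4, 4, 9)
for a in (2.0, 1.0, 0.0, -2.0, -np.inf):
    print(a, np.round(general_robust_loss(x, a), 2))
```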
cs.CV cs.AI cs.LG stat.ML
| null |
1701.03102
| null | null |
http://arxiv.org/pdf/1701.03102v1
|
2017-01-11T16:34:29Z
|
2017-01-11T16:34:29Z
|
Linear Disentangled Representation Learning for Facial Actions
|
The limited annotated data available for the recognition of facial expressions and
action units hampers the training of deep networks, which can learn
disentangled invariant features. However, a linear model with just several
parameters is normally not demanding in terms of training data. In this paper,
we propose an elegant linear model to untangle confounding factors in
challenging realistic multichannel signals such as 2D face videos. The simple
yet powerful model does not rely on huge training data and is natural for
recognizing facial actions without explicitly disentangling the identity. Based
on well-understood, intuitive linear models such as Sparse Representation based
Classification (SRC), previous attempts require a preprocessing step of explicit
decoupling, which is practically inexact. Instead, we exploit the low-rank
property across frames to subtract the underlying neutral faces which are
modeled jointly with sparse representation on the action components with group
sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot
automatic method on raw face videos performs as competitively as SRC applied to
manually prepared action components and performs even better than SRC in terms
of true positive rate. We apply the model to the even more challenging task of
facial action unit recognition, verified on the MPI Face Video Database
(MPI-VDB), achieving decent performance. All the programs and data have been
made publicly available.
|
[
"['Xiang Xiang' 'Trac D. Tran']",
"Xiang Xiang, Trac D. Tran"
] |
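The method above separates a shared low-rank neutral-face component from sparse per-frame action components. Below is a hedged sketch of that separation using standard robust PCA (principal component pursuit via inexact ALM); the paper's group-sparsity and SRC components are not included, and the synthetic "video" is a placeholder.

```python
# Sketch: low-rank (neutral face) + sparse (action) separation via standard
# principal component pursuit, solved with an inexact augmented Lagrangian
# method. Requires: numpy.
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def robust_pca(D, lam=None, n_iters=100):
    """Decompose D ~ L (low-rank) + S (sparse)."""
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
    Y = np.zeros_like(D)                     # simplified dual initialization
    L, S = np.zeros_like(D), np.zeros_like(D)
    for _ in range(n_iters):
        L = svt(D - S + Y / mu, 1.0 / mu)    # low-rank update
        S = soft(D - L + Y / mu, lam / mu)   # sparse update
        Y = Y + mu * (D - L - S)             # dual ascent on the constraint
        mu = min(mu * rho, 1e7)
    return L, S

rng = np.random.default_rng(0)
frames, dim = 50, 200
neutral = np.outer(np.ones(frames), rng.normal(size=dim))   # rank-1 "neutral face"
actions = np.zeros((frames, dim))
actions[rng.random((frames, dim)) < 0.02] = 3.0             # sparse activations
D = neutral + actions
L_hat, S_hat = robust_pca(D)
print(np.round(np.abs(L_hat - neutral).max(), 2),           # both errors should be
      np.round(np.abs(S_hat - actions).max(), 2))           # small if it separates
```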
stat.AP cs.AI cs.LG
| null |
1701.03162
| null | null |
http://arxiv.org/pdf/1701.03162v1
|
2016-12-10T06:30:25Z
|
2016-12-10T06:30:25Z
|
Real-time eSports Match Result Prediction
|
In this paper, we try to predict the winning team of a match in the
multiplayer eSports game Dota 2. To address the weaknesses of previous work, we
consider more aspects of prior (pre-match) features from individual players'
match history, as well as real-time (during-match) features at each minute as
the match progresses. We use logistic regression, the proposed Attribute
Sequence Model, and their combinations as the prediction models. In a dataset
of 78362 matches where 20631 matches contain replay data, our experiments show
that adding more aspects of prior features improves accuracy from 58.69% to
71.49%, and introducing real-time features achieves up to 93.73% accuracy when
predicting at the 40th minute.
|
[
"Yifan Yang and Tian Qin and Yu-Heng Lei",
"['Yifan Yang' 'Tian Qin' 'Yu-Heng Lei']"
] |
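The prediction setup above concatenates prior (pre-match) features with real-time features at a given minute and feeds them to logistic regression, among other models. A minimal sketch with synthetic placeholder features follows; the real feature construction and the Attribute Sequence Model are not reproduced.

```python
# Sketch: predicting the match winner from prior (pre-match) features plus
# real-time features available at a chosen minute, with logistic regression.
# All features and labels below are synthetic placeholders.
# Requires: numpy, scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_matches = 5000
prior = rng.normal(size=(n_matches, 6))          # e.g. per-player history stats
realtime_40 = rng.normal(size=(n_matches, 4))    # e.g. gold/XP lead at minute 40
logits = 0.8 * prior[:, 0] + 2.5 * realtime_40[:, 0]
y = (rng.random(n_matches) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.hstack([prior, realtime_40])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("prior + real-time accuracy:", round(clf.score(X_te, y_te), 3))

clf_prior = LogisticRegression(max_iter=1000).fit(X_tr[:, :6], y_tr)
print("prior-only accuracy:", round(clf_prior.score(X_te[:, :6], y_te), 3))
```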
cs.LG cs.SD
| null |
1701.03198
| null | null |
http://arxiv.org/pdf/1701.03198v1
|
2017-01-12T01:02:22Z
|
2017-01-12T01:02:22Z
|
Unsupervised Latent Behavior Manifold Learning from Acoustic Features:
audio2behavior
|
Behavioral annotation using signal processing and machine learning is highly
dependent on training data and manual annotations of behavioral labels.
Previous studies have shown that speech encodes significant
behavioral information and can be used in a variety of automated behavior
recognition tasks. However, extracting behavioral information from speech is
still a difficult task due to the sparseness of training data coupled with the
complex, high-dimensional nature of speech and the multiple
information streams it encodes. In this work we exploit the slowly varying
properties of human behavior. We hypothesize that nearby segments of speech
share the same behavioral context and hence share a similar underlying
representation in a latent space. Specifically, we propose a Deep Neural
Network (DNN) model to connect behavioral context and derive the behavioral
manifold in an unsupervised manner. We evaluate the proposed manifold in the
couples therapy domain and also provide examples from publicly available data
(e.g. stand-up comedy). We further investigate training within the couples'
therapy domain and from movie data. The results are extremely encouraging and
promise improved behavioral quantification in an unsupervised manner, and
warrant further investigation in a range of applications.
|
[
"['Haoqi Li' 'Brian Baucom' 'Panayiotis Georgiou']",
"Haoqi Li, Brian Baucom, Panayiotis Georgiou"
] |