categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
list |
---|---|---|---|---|---|---|---|---|---|---|
cs.CR cs.LG
|
10.1109/AINA.2013.88
|
1608.00848
| null | null |
http://arxiv.org/abs/1608.00848v1
|
2016-08-02T14:48:49Z
|
2016-08-02T14:48:49Z
|
A New Android Malware Detection Approach Using Bayesian Classification
|
Mobile malware has been growing in scale and complexity as smartphone usage
continues to rise. Android has surpassed other mobile platforms as the most
popular whilst also witnessing a dramatic increase in malware targeting the
platform. A worrying trend that is emerging is the increasing sophistication of
Android malware to evade detection by traditional signature-based scanners. As
such, Android app marketplaces remain at risk of hosting malicious apps that
could evade detection before being downloaded by unsuspecting users. Hence, in
this paper we present an effective approach to alleviate this problem based on
Bayesian classification models obtained from static code analysis. The models
are built from a collection of code and app characteristics that provide
indicators of potential malicious activities. The models are evaluated with
real malware samples in the wild and results of experiments are presented to
demonstrate the effectiveness of the proposed approach.
|
[
"Suleiman Y. Yerima, Sakir Sezer, Gavin McWilliams, Igor Muttik",
"['Suleiman Y. Yerima' 'Sakir Sezer' 'Gavin McWilliams' 'Igor Muttik']"
] |
cs.CV cs.LG
| null |
1608.00853
| null | null |
http://arxiv.org/pdf/1608.00853v1
|
2016-08-02T14:57:18Z
|
2016-08-02T14:57:18Z
|
A study of the effect of JPG compression on adversarial images
|
Neural network image classifiers are known to be vulnerable to adversarial
images, i.e., natural images which have been modified by an adversarial
perturbation specifically designed to be imperceptible to humans yet fool the
classifier. Not only can adversarial images be generated easily, but these
images will often be adversarial for networks trained on disjoint subsets of
data or with different architectures. Adversarial images represent a potential
security risk as well as a serious machine learning challenge---it is clear
that vulnerable neural networks perceive images very differently from humans.
Noting that virtually every image classification data set is composed of JPG
images, we evaluate the effect of JPG compression on the classification of
adversarial images. For Fast-Gradient-Sign perturbations of small magnitude, we
found that JPG compression often reverses the drop in classification accuracy
to a large extent, but not always. As the magnitude of the perturbations
increases, JPG recompression alone is insufficient to reverse the effect.
|
[
"['Gintare Karolina Dziugaite' 'Zoubin Ghahramani' 'Daniel M. Roy']",
"Gintare Karolina Dziugaite, Zoubin Ghahramani, Daniel M. Roy"
] |
cs.LG stat.ML
| null |
1608.0086
| null | null | null | null | null |
Hierarchically Compositional Kernels for Scalable Nonparametric Learning
|
We propose a novel class of kernels to alleviate the high computational cost
of large-scale nonparametric learning with kernel methods. The proposed kernel
is defined based on a hierarchical partitioning of the underlying data domain,
where the Nystr\"om method (a globally low-rank approximation) is married with
a locally lossless approximation in a hierarchical fashion. The kernel
maintains (strict) positive-definiteness. The corresponding kernel matrix
admits a recursively off-diagonal low-rank structure, which allows for fast
linear algebra computations. Suppressing the factor of data dimension, the
memory and arithmetic complexities for training a regression or a classifier
are reduced from $O(n^2)$ and $O(n^3)$ to $O(nr)$ and $O(nr^2)$, respectively,
where $n$ is the number of training examples and $r$ is the rank on each level
of the hierarchy. Although other randomized approximate kernels entail a
similar complexity, empirical results show that the proposed kernel achieves a
matching performance with a smaller $r$. We demonstrate comprehensive
experiments to show the effective use of the proposed kernel on data sizes up
to the order of millions.
|
[
"Jie Chen, Haim Avron, Vikas Sindhwani"
] |
null | null |
1608.00860
| null | null |
http://arxiv.org/pdf/1608.00860v2
|
2017-08-14T15:11:25Z
|
2016-08-02T15:07:25Z
|
Hierarchically Compositional Kernels for Scalable Nonparametric Learning
|
We propose a novel class of kernels to alleviate the high computational cost of large-scale nonparametric learning with kernel methods. The proposed kernel is defined based on a hierarchical partitioning of the underlying data domain, where the Nystr"om method (a globally low-rank approximation) is married with a locally lossless approximation in a hierarchical fashion. The kernel maintains (strict) positive-definiteness. The corresponding kernel matrix admits a recursively off-diagonal low-rank structure, which allows for fast linear algebra computations. Suppressing the factor of data dimension, the memory and arithmetic complexities for training a regression or a classifier are reduced from $O(n^2)$ and $O(n^3)$ to $O(nr)$ and $O(nr^2)$, respectively, where $n$ is the number of training examples and $r$ is the rank on each level of the hierarchy. Although other randomized approximate kernels entail a similar complexity, empirical results show that the proposed kernel achieves a matching performance with a smaller $r$. We demonstrate comprehensive experiments to show the effective use of the proposed kernel on data sizes up to the order of millions.
|
[
"['Jie Chen' 'Haim Avron' 'Vikas Sindhwani']"
] |
cs.CR cs.LG
|
10.1145/2811411.2811514
|
1608.00866
| null | null |
http://arxiv.org/abs/1608.00866v1
|
2016-08-02T15:26:41Z
|
2016-08-02T15:26:41Z
|
PageRank in Malware Categorization
|
In this paper, we propose a malware categorization method that models malware
behavior in terms of instructions using PageRank. PageRank computes ranks of
web pages based on structural information and can also compute ranks of
instructions that represent the structural information of the instructions in
malware analysis methods. Our malware categorization method uses the computed
ranks as features in machine learning algorithms. In the evaluation, we compare
the effectiveness of different PageRank algorithms and also investigate bagging
and boosting algorithms to improve the categorization accuracy.
|
[
"['BooJoong Kang' 'Suleiman Y. Yerima' 'Kieran McLaughlin' 'Sakir Sezer']",
"BooJoong Kang, Suleiman Y. Yerima, Kieran McLaughlin, Sakir Sezer"
] |
stat.ML cs.AI cs.LG
| null |
1608.00876
| null | null |
http://arxiv.org/pdf/1608.00876v1
|
2016-08-02T15:48:58Z
|
2016-08-02T15:48:58Z
|
Relational Similarity Machines
|
This paper proposes Relational Similarity Machines (RSM): a fast, accurate,
and flexible relational learning framework for supervised and semi-supervised
learning tasks. Despite the importance of relational learning, most existing
methods are hard to adapt to different settings, due to issues with efficiency,
scalability, accuracy, and flexibility for handling a wide variety of
classification problems, data, constraints, and tasks. For instance, many
existing methods perform poorly for multi-class classification problems, graphs
that are sparsely labeled or network data with low relational autocorrelation.
In contrast, the proposed relational learning framework is designed to be (i)
fast for learning and inference at real-time interactive rates, and (ii)
flexible for a variety of learning settings (multi-class problems), constraints
(few labeled instances), and application domains. The experiments demonstrate
the effectiveness of RSM for a variety of tasks and data.
|
[
"Ryan A. Rossi, Rong Zhou, Nesreen K. Ahmed",
"['Ryan A. Rossi' 'Rong Zhou' 'Nesreen K. Ahmed']"
] |
cs.LG cs.CL cs.NE
| null |
1608.00895
| null | null |
http://arxiv.org/pdf/1608.00895v2
|
2017-01-10T14:25:28Z
|
2016-08-02T16:43:27Z
|
RETURNN: The RWTH Extensible Training framework for Universal Recurrent
Neural Networks
|
In this work we release our extensible and easily configurable neural network
training software. It provides a rich set of functional layers with a
particular focus on efficient training of recurrent neural network topologies
on multiple GPUs. The source of the software package is public and freely
available for academic research purposes and can be used as a framework or as a
standalone tool which supports a flexible configuration. The software allows to
train state-of-the-art deep bidirectional long short-term memory (LSTM) models
on both one dimensional data like speech or two dimensional data like
handwritten text and was used to develop successful submission systems in
several evaluation campaigns.
|
[
"['Patrick Doetsch' 'Albert Zeyer' 'Paul Voigtlaender' 'Ilya Kulikov'\n 'Ralf Schlüter' 'Hermann Ney']",
"Patrick Doetsch, Albert Zeyer, Paul Voigtlaender, Ilya Kulikov, Ralf\n Schl\\\"uter, Hermann Ney"
] |
cs.SI cs.LG physics.soc-ph
|
10.7566/JPSJ.85.114802
|
1608.0092
| null | null | null | null | null |
Community Detection Algorithm Combining Stochastic Block Model and
Attribute Data Clustering
|
We propose a new algorithm to detect the community structure in a network
that utilizes both the network structure and vertex attribute data. Suppose we
have the network structure together with the vertex attribute data, that is,
the information assigned to each vertex associated with the community to which
it belongs. The problem addressed this paper is the detection of the community
structure from the information of both the network structure and the vertex
attribute data. Our approach is based on the Bayesian approach that models the
posterior probability distribution of the community labels. The detection of
the community structure in our method is achieved by using belief propagation
and an EM algorithm. We numerically verified the performance of our method
using computer-generated networks and real-world networks.
|
[
"Shun Kataoka, Takuto Kobayashi, Muneki Yasuda, and Kazuyuki Tanaka"
] |
null | null |
1608.00920
| null | null |
http://arxiv.org/abs/1608.00920v1
|
2016-07-21T10:21:08Z
|
2016-07-21T10:21:08Z
|
Community Detection Algorithm Combining Stochastic Block Model and
Attribute Data Clustering
|
We propose a new algorithm to detect the community structure in a network that utilizes both the network structure and vertex attribute data. Suppose we have the network structure together with the vertex attribute data, that is, the information assigned to each vertex associated with the community to which it belongs. The problem addressed this paper is the detection of the community structure from the information of both the network structure and the vertex attribute data. Our approach is based on the Bayesian approach that models the posterior probability distribution of the community labels. The detection of the community structure in our method is achieved by using belief propagation and an EM algorithm. We numerically verified the performance of our method using computer-generated networks and real-world networks.
|
[
"['Shun Kataoka' 'Takuto Kobayashi' 'Muneki Yasuda' 'Kazuyuki Tanaka']"
] |
cs.LG
| null |
1608.01072
| null | null |
http://arxiv.org/pdf/1608.01072v1
|
2016-08-03T04:42:02Z
|
2016-08-03T04:42:02Z
|
Fuzzy c-Shape: A new algorithm for clustering finite time series
waveforms
|
The existence of large volumes of time series data in many applications has
motivated data miners to investigate specialized methods for mining time series
data. Clustering is a popular data mining method due to its powerful
exploratory nature and its usefulness as a preprocessing step for other data
mining techniques. This article develops two novel clustering algorithms for
time series data that are extensions of a crisp c-shapes algorithm. The two new
algorithms are heuristic derivatives of fuzzy c-means (FCM). Fuzzy c-Shapes
plus (FCS+) replaces the inner product norm in the FCM model with a shape-based
distance function. Fuzzy c-Shapes double plus (FCS++) uses the shape-based
distance, and also replaces the FCM cluster centers with shape-extracted
prototypes. Numerical experiments on 48 real time series data sets show that
the two new algorithms outperform state-of-the-art shape-based clustering
algorithms in terms of accuracy and efficiency. Four external cluster validity
indices (the Rand index, Adjusted Rand Index, Variation of Information, and
Normalized Mutual Information) are used to match candidate partitions generated
by each of the studied algorithms. All four indices agree that for these finite
waveform data sets, FCS++ gives a small improvement over FCS+, and in turn,
FCS+ is better than the original crisp c-shapes method. Finally, we apply two
tests of statistical significance to the three algorithms. The Wilcoxon and
Friedman statistics both rank the three algorithms in exactly the same way as
the four cluster validity indices.
|
[
"Fateme Fahiman, Jame C.Bezdek, Sarah M.Erfani, Christopher Leckie,\n Marimuthu Palaniswami",
"['Fateme Fahiman' 'Jame C. Bezdek' 'Sarah M. Erfani' 'Christopher Leckie'\n 'Marimuthu Palaniswami']"
] |
cs.RO cs.AI cs.CV cs.LG
| null |
1608.01127
| null | null |
http://arxiv.org/pdf/1608.01127v1
|
2016-08-03T09:25:35Z
|
2016-08-03T09:25:35Z
|
Autonomous Grounding of Visual Field Experience through Sensorimotor
Prediction
|
In a developmental framework, autonomous robots need to explore the world and
learn how to interact with it. Without an a priori model of the system, this
opens the challenging problem of having robots master their interface with the
world: how to perceive their environment using their sensors, and how to act in
it using their motors. The sensorimotor approach of perception claims that a
naive agent can learn to master this interface by capturing regularities in the
way its actions transform its sensory inputs. In this paper, we apply such an
approach to the discovery and mastery of the visual field associated with a
visual sensor. A computational model is formalized and applied to a simulated
system to illustrate the approach.
|
[
"Alban Laflaqui\\`ere",
"['Alban Laflaquière']"
] |
cs.LG
| null |
1608.01198
| null | null |
http://arxiv.org/pdf/1608.01198v2
|
2016-08-09T15:28:15Z
|
2016-08-03T14:19:00Z
|
Ensemble-driven support vector clustering: From ensemble learning to
automatic parameter estimation
|
Support vector clustering (SVC) is a versatile clustering technique that is
able to identify clusters of arbitrary shapes by exploiting the kernel trick.
However, one hurdle that restricts the application of SVC lies in its
sensitivity to the kernel parameter and the trade-off parameter. Although many
extensions of SVC have been developed, to the best of our knowledge, there is
still no algorithm that is able to effectively estimate the two crucial
parameters in SVC without supervision. In this paper, we propose a novel
support vector clustering approach termed ensemble-driven support vector
clustering (EDSVC), which for the first time tackles the automatic parameter
estimation problem for SVC based on ensemble learning, and is capable of
producing robust clustering results in a purely unsupervised manner.
Experimental results on multiple real-world datasets demonstrate the
effectiveness of our approach.
|
[
"Dong Huang, Chang-Dong Wang, Jian-Huang Lai, Yun Liang, Shan Bian, Yu\n Chen",
"['Dong Huang' 'Chang-Dong Wang' 'Jian-Huang Lai' 'Yun Liang' 'Shan Bian'\n 'Yu Chen']"
] |
cs.LG stat.ML
| null |
1608.0123
| null | null | null | null | null |
Learning a Driving Simulator
|
Comma.ai's approach to Artificial Intelligence for self-driving cars is based
on an agent that learns to clone driver behaviors and plans maneuvers by
simulating future events in the road. This paper illustrates one of our
research approaches for driving simulation. One where we learn to simulate.
Here we investigate variational autoencoders with classical and learned cost
functions using generative adversarial networks for embedding road frames.
Afterwards, we learn a transition model in the embedded space using action
conditioned Recurrent Neural Networks. We show that our approach can keep
predicting realistic looking video for several frames despite the transition
model being optimized without a cost function in the pixel space.
|
[
"Eder Santana, George Hotz"
] |
null | null |
1608.01230
| null | null |
http://arxiv.org/pdf/1608.01230v1
|
2016-08-03T15:49:12Z
|
2016-08-03T15:49:12Z
|
Learning a Driving Simulator
|
Comma.ai's approach to Artificial Intelligence for self-driving cars is based on an agent that learns to clone driver behaviors and plans maneuvers by simulating future events in the road. This paper illustrates one of our research approaches for driving simulation. One where we learn to simulate. Here we investigate variational autoencoders with classical and learned cost functions using generative adversarial networks for embedding road frames. Afterwards, we learn a transition model in the embedded space using action conditioned Recurrent Neural Networks. We show that our approach can keep predicting realistic looking video for several frames despite the transition model being optimized without a cost function in the pixel space.
|
[
"['Eder Santana' 'George Hotz']"
] |
cs.CL cs.LG
| null |
1608.01238
| null | null |
http://arxiv.org/pdf/1608.01238v1
|
2016-08-03T16:12:23Z
|
2016-08-03T16:12:23Z
|
Improving Quality of Hierarchical Clustering for Large Data Series
|
Brown clustering is a hard, hierarchical, bottom-up clustering of words in a
vocabulary. Words are assigned to clusters based on their usage pattern in a
given corpus. The resulting clusters and hierarchical structure can be used in
constructing class-based language models and for generating features to be used
in NLP tasks. Because of its high computational cost, the most-used version of
Brown clustering is a greedy algorithm that uses a window to restrict its
search space. Like other clustering algorithms, Brown clustering finds a
sub-optimal, but nonetheless effective, mapping of words to clusters. Because
of its ability to produce high-quality, human-understandable cluster, Brown
clustering has seen high uptake the NLP research community where it is used in
the preprocessing and feature generation steps.
Little research has been done towards improving the quality of Brown
clusters, despite the greedy and heuristic nature of the algorithm. The
approaches tried so far have focused on: studying the effect of the
initialisation in a similar algorithm; tuning the parameters used to define the
desired number of clusters and the behaviour of the algorithm; and including a
separate parameter to differentiate the window from the desired number of
clusters. However, some of these approaches have not yielded significant
improvements in cluster quality.
In this thesis, a close analysis of the Brown algorithm is provided,
revealing important under-specifications and weaknesses in the original
algorithm. These have serious effects on cluster quality and reproducibility of
research using Brown clustering. In the second part of the thesis, two
modifications are proposed. Finally, a thorough evaluation is performed,
considering both the optimization criterion of Brown clustering and the
performance of the resulting class-based language models.
|
[
"['Manuel R. Ciosici']",
"Manuel R. Ciosici"
] |
cs.LG math.OC stat.ML
| null |
1608.01264
| null | null |
http://arxiv.org/pdf/1608.01264v1
|
2016-08-03T17:33:16Z
|
2016-08-03T17:33:16Z
|
Fast and Simple Optimization for Poisson Likelihood Models
|
Poisson likelihood models have been prevalently used in imaging, social
networks, and time series analysis. We propose fast, simple,
theoretically-grounded, and versatile, optimization algorithms for Poisson
likelihood modeling. The Poisson log-likelihood is concave but not
Lipschitz-continuous. Since almost all gradient-based optimization algorithms
rely on Lipschitz-continuity, optimizing Poisson likelihood models with a
guarantee of convergence can be challenging, especially for large-scale
problems.
We present a new perspective allowing to efficiently optimize a wide range of
penalized Poisson likelihood objectives. We show that an appropriate saddle
point reformulation enjoys a favorable geometry and a smooth structure.
Therefore, we can design a new gradient-based optimization algorithm with
$O(1/t)$ convergence rate, in contrast to the usual $O(1/\sqrt{t})$ rate of
non-smooth minimization alternatives. Furthermore, in order to tackle problems
with large samples, we also develop a randomized block-decomposition variant
that enjoys the same convergence rate yet more efficient iteration cost.
Experimental results on several point process applications including social
network estimation and temporal recommendation show that the proposed algorithm
and its randomized block variant outperform existing methods both on synthetic
and real-world datasets.
|
[
"Niao He, Zaid Harchaoui, Yichen Wang, Le Song",
"['Niao He' 'Zaid Harchaoui' 'Yichen Wang' 'Le Song']"
] |
cs.LG cs.CL
| null |
1608.01281
| null | null |
http://arxiv.org/pdf/1608.01281v1
|
2016-08-03T18:35:12Z
|
2016-08-03T18:35:12Z
|
Learning Online Alignments with Continuous Rewards Policy Gradient
|
Sequence-to-sequence models with soft attention had significant success in
machine translation, speech recognition, and question answering. Though capable
and easy to use, they require that the entirety of the input sequence is
available at the beginning of inference, an assumption that is not valid for
instantaneous translation and speech recognition. To address this problem, we
present a new method for solving sequence-to-sequence problems using hard
online alignments instead of soft offline alignments. The online alignments
model is able to start producing outputs without the need to first process the
entire input sequence. A highly accurate online sequence-to-sequence model is
useful because it can be used to build an accurate voice-based instantaneous
translator. Our model uses hard binary stochastic decisions to select the
timesteps at which outputs will be produced. The model is trained to produce
these stochastic decisions using a standard policy gradient method. In our
experiments, we show that this model achieves encouraging performance on TIMIT
and Wall Street Journal (WSJ) speech recognition datasets.
|
[
"['Yuping Luo' 'Chung-Cheng Chiu' 'Navdeep Jaitly' 'Ilya Sutskever']",
"Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, Ilya Sutskever"
] |
cs.LG stat.ML
| null |
1608.0141
| null | null | null | null | null |
Bayesian Kernel and Mutual $k$-Nearest Neighbor Regression
|
We propose Bayesian extensions of two nonparametric regression methods which
are kernel and mutual $k$-nearest neighbor regression methods. Derived based on
Gaussian process models for regression, the extensions provide distributions
for target value estimates and the framework to select the hyperparameters. It
is shown that both the proposed methods asymptotically converge to kernel and
mutual $k$-nearest neighbor regression methods, respectively. The simulation
results show that the proposed methods can select proper hyperparameters and
are better than or comparable to the former methods for an artificial data set
and a real world data set.
|
[
"Hyun-Chul Kim"
] |
null | null |
1608.01410
| null | null |
http://arxiv.org/pdf/1608.01410v1
|
2016-08-04T01:33:34Z
|
2016-08-04T01:33:34Z
|
Bayesian Kernel and Mutual $k$-Nearest Neighbor Regression
|
We propose Bayesian extensions of two nonparametric regression methods which are kernel and mutual $k$-nearest neighbor regression methods. Derived based on Gaussian process models for regression, the extensions provide distributions for target value estimates and the framework to select the hyperparameters. It is shown that both the proposed methods asymptotically converge to kernel and mutual $k$-nearest neighbor regression methods, respectively. The simulation results show that the proposed methods can select proper hyperparameters and are better than or comparable to the former methods for an artificial data set and a real world data set.
|
[
"['Hyun-Chul Kim']"
] |
cs.LG stat.ML
| null |
1608.01747
| null | null |
http://arxiv.org/pdf/1608.01747v1
|
2016-08-05T03:37:46Z
|
2016-08-05T03:37:46Z
|
A Distance for HMMs based on Aggregated Wasserstein Metric and State
Registration
|
We propose a framework, named Aggregated Wasserstein, for computing a
dissimilarity measure or distance between two Hidden Markov Models with state
conditional distributions being Gaussian. For such HMMs, the marginal
distribution at any time spot follows a Gaussian mixture distribution, a fact
exploited to softly match, aka register, the states in two HMMs. We refer to
such HMMs as Gaussian mixture model-HMM (GMM-HMM). The registration of states
is inspired by the intrinsic relationship of optimal transport and the
Wasserstein metric between distributions. Specifically, the components of the
marginal GMMs are matched by solving an optimal transport problem where the
cost between components is the Wasserstein metric for Gaussian distributions.
The solution of the optimization problem is a fast approximation to the
Wasserstein metric between two GMMs. The new Aggregated Wasserstein distance is
a semi-metric and can be computed without generating Monte Carlo samples. It is
invariant to relabeling or permutation of the states. This distance quantifies
the dissimilarity of GMM-HMMs by measuring both the difference between the two
marginal GMMs and the difference between the two transition matrices. Our new
distance is tested on the tasks of retrieval and classification of time series.
Experiments on both synthetic data and real data have demonstrated its
advantages in terms of accuracy as well as efficiency in comparison with
existing distances based on the Kullback-Leibler divergence.
|
[
"['Yukun Chen' 'Jianbo Ye' 'Jia Li']",
"Yukun Chen, Jianbo Ye, and Jia Li"
] |
cs.LG
| null |
1608.01874
| null | null |
http://arxiv.org/pdf/1608.01874v1
|
2016-08-05T13:19:47Z
|
2016-08-05T13:19:47Z
|
Forward Stagewise Additive Model for Collaborative Multiview Boosting
|
Multiview assisted learning has gained significant attention in recent years
in supervised learning genre. Availability of high performance computing
devices enables learning algorithms to search simultaneously over multiple
views or feature spaces to obtain an optimum classification performance. The
paper is a pioneering attempt of formulating a mathematical foundation for
realizing a multiview aided collaborative boosting architecture for multiclass
classification. Most of the present algorithms apply multiview learning
heuristically without exploring the fundamental mathematical changes imposed on
traditional boosting. Also, most of the algorithms are restricted to two class
or view setting. Our proposed mathematical framework enables collaborative
boosting across any finite dimensional view spaces for multiclass learning. The
boosting framework is based on forward stagewise additive model which minimizes
a novel exponential loss function. We show that the exponential loss function
essentially captures difficulty of a training sample space instead of the
traditional `1/0' loss. The new algorithm restricts a weak view from over
learning and thereby preventing overfitting. The model is inspired by our
earlier attempt on collaborative boosting which was devoid of mathematical
justification. The proposed algorithm is shown to converge much nearer to
global minimum in the exponential loss space and thus supersedes our previous
algorithm. The paper also presents analytical and numerical analysis of
convergence and margin bounds for multiview boosting algorithms and we show
that our proposed ensemble learning manifests lower error bound and higher
margin compared to our previous model. Also, the proposed model is compared
with traditional boosting and recent multiview boosting algorithms.
|
[
"Avisek Lahiri, Biswajit Paria, Prabir Kumar Biswas",
"['Avisek Lahiri' 'Biswajit Paria' 'Prabir Kumar Biswas']"
] |
stat.ML cs.LG
| null |
1608.01976
| null | null |
http://arxiv.org/pdf/1608.01976v1
|
2016-08-05T19:02:19Z
|
2016-08-05T19:02:19Z
|
Kernel Ridge Regression via Partitioning
|
In this paper, we investigate a divide and conquer approach to Kernel Ridge
Regression (KRR). Given n samples, the division step involves separating the
points based on some underlying disjoint partition of the input space (possibly
via clustering), and then computing a KRR estimate for each partition. The
conquering step is simple: for each partition, we only consider its own local
estimate for prediction. We establish conditions under which we can give
generalization bounds for this estimator, as well as achieve optimal minimax
rates. We also show that the approximation error component of the
generalization error is lesser than when a single KRR estimate is fit on the
data: thus providing both statistical and computational advantages over a
single KRR estimate over the entire data (or an averaging over random
partitions as in other recent work, [30]). Lastly, we provide experimental
validation for our proposed estimator and our assumptions.
|
[
"['Rashish Tandon' 'Si Si' 'Pradeep Ravikumar' 'Inderjit Dhillon']",
"Rashish Tandon, Si Si, Pradeep Ravikumar, Inderjit Dhillon"
] |
cs.LG
| null |
1608.0201
| null | null | null | null | null |
Communication-Efficient Parallel Block Minimization for Kernel Machines
|
Kernel machines often yield superior predictive performance on various tasks;
however, they suffer from severe computational challenges. In this paper, we
show how to overcome the important challenge of speeding up kernel machines. In
particular, we develop a parallel block minimization framework for solving
kernel machines, including kernel SVM and kernel logistic regression. Our
framework proceeds by dividing the problem into smaller subproblems by forming
a block-diagonal approximation of the Hessian matrix. The subproblems are then
solved approximately in parallel. After that, a communication efficient line
search procedure is developed to ensure sufficient reduction of the objective
function value at each iteration. We prove global linear convergence rate of
the proposed method with a wide class of subproblem solvers, and our analysis
covers strongly convex and some non-strongly convex functions. We apply our
algorithm to solve large-scale kernel SVM problems on distributed systems, and
show a significant improvement over existing parallel solvers. As an example,
on the covtype dataset with half-a-million samples, our algorithm can obtain an
approximate solution with 96% accuracy in 20 seconds using 32 machines, while
all the other parallel kernel SVM solvers require more than 2000 seconds to
achieve a solution with 95% accuracy. Moreover, our algorithm can scale to very
large data sets, such as the kdd algebra dataset with 8 million samples and 20
million features.
|
[
"Cho-Jui Hsieh and Si Si and Inderjit S. Dhillon"
] |
null | null |
1608.02010
| null | null |
http://arxiv.org/pdf/1608.02010v1
|
2016-08-05T20:15:51Z
|
2016-08-05T20:15:51Z
|
Communication-Efficient Parallel Block Minimization for Kernel Machines
|
Kernel machines often yield superior predictive performance on various tasks; however, they suffer from severe computational challenges. In this paper, we show how to overcome the important challenge of speeding up kernel machines. In particular, we develop a parallel block minimization framework for solving kernel machines, including kernel SVM and kernel logistic regression. Our framework proceeds by dividing the problem into smaller subproblems by forming a block-diagonal approximation of the Hessian matrix. The subproblems are then solved approximately in parallel. After that, a communication efficient line search procedure is developed to ensure sufficient reduction of the objective function value at each iteration. We prove global linear convergence rate of the proposed method with a wide class of subproblem solvers, and our analysis covers strongly convex and some non-strongly convex functions. We apply our algorithm to solve large-scale kernel SVM problems on distributed systems, and show a significant improvement over existing parallel solvers. As an example, on the covtype dataset with half-a-million samples, our algorithm can obtain an approximate solution with 96% accuracy in 20 seconds using 32 machines, while all the other parallel kernel SVM solvers require more than 2000 seconds to achieve a solution with 95% accuracy. Moreover, our algorithm can scale to very large data sets, such as the kdd algebra dataset with 8 million samples and 20 million features.
|
[
"['Cho-Jui Hsieh' 'Si Si' 'Inderjit S. Dhillon']"
] |
cs.LG cs.CL
| null |
1608.02071
| null | null |
http://arxiv.org/pdf/1608.02071v1
|
2016-08-06T06:24:59Z
|
2016-08-06T06:24:59Z
|
Transferring Knowledge from Text to Predict Disease Onset
|
In many domains such as medicine, training data is in short supply. In such
cases, external knowledge is often helpful in building predictive models. We
propose a novel method to incorporate publicly available domain expertise to
build accurate models. Specifically, we use word2vec models trained on a
domain-specific corpus to estimate the relevance of each feature's text
description to the prediction problem. We use these relevance estimates to
rescale the features, causing more important features to experience weaker
regularization.
We apply our method to predict the onset of five chronic diseases in the next
five years in two genders and two age groups. Our rescaling approach improves
the accuracy of the model, particularly when there are few positive examples.
Furthermore, our method selects 60% fewer features, easing interpretation by
physicians. Our method is applicable to other domains where feature and outcome
descriptions are available.
|
[
"Yun Liu, Kun-Ta Chuang, Fu-Wen Liang, Huey-Jen Su, Collin M. Stultz,\n John V. Guttag",
"['Yun Liu' 'Kun-Ta Chuang' 'Fu-Wen Liang' 'Huey-Jen Su' 'Collin M. Stultz'\n 'John V. Guttag']"
] |
cs.CL cs.AI cs.LG
| null |
1608.02076
| null | null |
http://arxiv.org/pdf/1608.02076v2
|
2016-09-22T08:52:31Z
|
2016-08-06T07:16:31Z
|
Bi-directional Attention with Agreement for Dependency Parsing
|
We develop a novel bi-directional attention model for dependency parsing,
which learns to agree on headword predictions from the forward and backward
parsing directions. The parsing procedure for each direction is formulated as
sequentially querying the memory component that stores continuous headword
embeddings. The proposed parser makes use of {\it soft} headword embeddings,
allowing the model to implicitly capture high-order parsing history without
dramatically increasing the computational complexity. We conduct experiments on
English, Chinese, and 12 other languages from the CoNLL 2006 shared task,
showing that the proposed model achieves state-of-the-art unlabeled attachment
scores on 6 languages.
|
[
"['Hao Cheng' 'Hao Fang' 'Xiaodong He' 'Jianfeng Gao' 'Li Deng']",
"Hao Cheng and Hao Fang and Xiaodong He and Jianfeng Gao and Li Deng"
] |
cs.LG
| null |
1608.02126
| null | null |
http://arxiv.org/pdf/1608.02126v1
|
2016-08-06T16:30:47Z
|
2016-08-06T16:30:47Z
|
How Much Did it Rain? Predicting Real Rainfall Totals Based on Radar
Data
|
We applied a variety of parametric and non-parametric machine learning models
to predict the probability distribution of rainfall based on 1M training
examples over a single year across several U.S. states. Our top performing
model based on a squared loss objective was a cross-validated parametric
k-nearest-neighbor predictor that took about six days to compute, and was
competitive in a world-wide competition.
|
[
"Adam Lesnikowski",
"['Adam Lesnikowski']"
] |
cs.CR cs.CV cs.LG
| null |
1608.02128
| null | null |
http://arxiv.org/pdf/1608.02128v1
|
2016-08-06T16:50:26Z
|
2016-08-06T16:50:26Z
|
Spoofing 2D Face Detection: Machines See People Who Aren't There
|
Machine learning is increasingly used to make sense of the physical world yet
may suffer from adversarial manipulation. We examine the Viola-Jones 2D face
detection algorithm to study whether images can be created that humans do not
notice as faces yet the algorithm detects as faces. We show that it is possible
to construct images that Viola-Jones recognizes as containing faces yet no
human would consider a face. Moreover, we show that it is possible to construct
images that fool facial detection even when they are printed and then
photographed.
|
[
"Michael McCoyd and David Wagner",
"['Michael McCoyd' 'David Wagner']"
] |
cs.LG cs.CV
| null |
1608.02146
| null | null |
http://arxiv.org/pdf/1608.02146v2
|
2017-09-13T21:17:33Z
|
2016-08-06T19:29:58Z
|
Leveraging Union of Subspace Structure to Improve Constrained Clustering
|
Many clustering problems in computer vision and other contexts are also
classification problems, where each cluster shares a meaningful label. Subspace
clustering algorithms in particular are often applied to problems that fit this
description, for example with face images or handwritten digits. While it is
straightforward to request human input on these datasets, our goal is to reduce
this input as much as possible. We present a pairwise-constrained clustering
algorithm that actively selects queries based on the union-of-subspaces model.
The central step of the algorithm is in querying points of minimum margin
between estimated subspaces; analogous to classifier margin, these lie near the
decision boundary. We prove that points lying near the intersection of
subspaces are points with low margin. Our procedure can be used after any
subspace clustering algorithm that outputs an affinity matrix. We demonstrate
on several datasets that our algorithm drives the clustering error down
considerably faster than the state-of-the-art active query algorithms on
datasets with subspace structure and is competitive on other datasets.
|
[
"John Lipor and Laura Balzano",
"['John Lipor' 'Laura Balzano']"
] |
cs.LG cs.CC stat.ML
| null |
1608.02198
| null | null |
http://arxiv.org/pdf/1608.02198v3
|
2017-04-17T06:12:23Z
|
2016-08-07T09:35:44Z
|
A General Characterization of the Statistical Query Complexity
|
Statistical query (SQ) algorithms are algorithms that have access to an {\em
SQ oracle} for the input distribution $D$ instead of i.i.d.~ samples from $D$.
Given a query function $\phi:X \rightarrow [-1,1]$, the oracle returns an
estimate of ${\bf E}_{ x\sim D}[\phi(x)]$ within some tolerance $\tau_\phi$
that roughly corresponds to the number of samples.
In this work we demonstrate that the complexity of solving general problems
over distributions using SQ algorithms can be captured by a relatively simple
notion of statistical dimension that we introduce. SQ algorithms capture a
broad spectrum of algorithmic approaches used in theory and practice, most
notably, convex optimization techniques. Hence our statistical dimension allows
to investigate the power of a variety of algorithmic approaches by analyzing a
single linear-algebraic parameter. Such characterizations were investigated
over the past 20 years in learning theory but prior characterizations are
restricted to the much simpler setting of classification problems relative to a
fixed distribution on the domain (Blum et al., 1994; Bshouty and Feldman, 2002;
Yang, 2001; Simon, 2007; Feldman, 2012; Szorenyi, 2009). Our characterization
is also the first to precisely characterize the necessary tolerance of queries.
We give applications of our techniques to two open problems in learning theory
and to algorithms that are subject to memory and communication constraints.
|
[
"['Vitaly Feldman']",
"Vitaly Feldman"
] |
cs.RO cs.CV cs.LG
| null |
1608.02239
| null | null |
http://arxiv.org/pdf/1608.02239v1
|
2016-08-07T16:30:42Z
|
2016-08-07T16:30:42Z
|
Deep Learning a Grasp Function for Grasping under Gripper Pose
Uncertainty
|
This paper presents a new method for parallel-jaw grasping of isolated
objects from depth images, under large gripper pose uncertainty. Whilst most
approaches aim to predict the single best grasp pose from an image, our method
first predicts a score for every possible grasp pose, which we denote the grasp
function. With this, it is possible to achieve grasping robust to the gripper's
pose uncertainty, by smoothing the grasp function with the pose uncertainty
function. Therefore, if the single best pose is adjacent to a region of poor
grasp quality, that pose will no longer be chosen, and instead a pose will be
chosen which is surrounded by a region of high grasp quality. To learn this
function, we train a Convolutional Neural Network which takes as input a single
depth image of an object, and outputs a score for each grasp pose across the
image. Training data for this is generated by use of physics simulation and
depth image simulation with 3D object meshes, to enable acquisition of
sufficient data without requiring exhaustive real-world experiments. We
evaluate with both synthetic and real experiments, and show that the learned
grasp score is more robust to gripper pose uncertainty than when this
uncertainty is not accounted for.
|
[
"Edward Johns, Stefan Leutenegger and Andrew J. Davison",
"['Edward Johns' 'Stefan Leutenegger' 'Andrew J. Davison']"
] |
cs.LG cs.CR stat.ML
| null |
1608.02257
| null | null |
http://arxiv.org/pdf/1608.02257v2
|
2016-08-09T20:20:17Z
|
2016-08-07T19:03:52Z
|
Robust High-Dimensional Linear Regression
|
The effectiveness of supervised learning techniques has made them ubiquitous
in research and practice. In high-dimensional settings, supervised learning
commonly relies on dimensionality reduction to improve performance and identify
the most important factors in predicting outcomes. However, the economic
importance of learning has made it a natural target for adversarial
manipulation of training data, which we term poisoning attacks. Prior
approaches to dealing with robust supervised learning rely on strong
assumptions about the nature of the feature matrix, such as feature
independence and sub-Gaussian noise with low variance. We propose an integrated
method for robust regression that relaxes these assumptions, assuming only that
the feature matrix can be well approximated by a low-rank matrix. Our
techniques integrate improved robust low-rank matrix approximation and robust
principle component regression, and yield strong performance guarantees.
Moreover, we experimentally show that our methods significantly outperform
state of the art both in running time and prediction error.
|
[
"['Chang Liu' 'Bo Li' 'Yevgeniy Vorobeychik' 'Alina Oprea']",
"Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea"
] |
cs.LG cs.NE
| null |
1608.02292
| null | null |
http://arxiv.org/pdf/1608.02292v1
|
2016-08-08T01:10:51Z
|
2016-08-08T01:10:51Z
|
Online Adaptation of Deep Architectures with Reinforcement Learning
|
Online learning has become crucial to many problems in machine learning. As
more data is collected sequentially, quickly adapting to changes in the data
distribution can offer several competitive advantages such as avoiding loss of
prior knowledge and more efficient learning. However, adaptation to changes in
the data distribution (also known as covariate shift) needs to be performed
without compromising past knowledge already built in into the model to cope
with voluminous and dynamic data. In this paper, we propose an online stacked
Denoising Autoencoder whose structure is adapted through reinforcement
learning. Our algorithm forces the network to exploit and explore favourable
architectures employing an estimated utility function that maximises the
accuracy of an unseen validation sequence. Different actions, such as Pool,
Increment and Merge are available to modify the structure of the network. As we
observe through a series of experiments, our approach is more responsive,
robust, and principled than its counterparts for non-stationary as well as
stationary data distributions. Experimental results indicate that our algorithm
performs better at preserving gained prior knowledge and responding to changes
in the data distribution.
|
[
"['Thushan Ganegedara' 'Lionel Ott' 'Fabio Ramos']",
"Thushan Ganegedara, Lionel Ott and Fabio Ramos"
] |
cs.LG
| null |
1608.02301
| null | null |
http://arxiv.org/pdf/1608.02301v1
|
2016-08-08T02:39:42Z
|
2016-08-08T02:39:42Z
|
Uncovering Voice Misuse Using Symbolic Mismatch
|
Voice disorders affect an estimated 14 million working-aged Americans, and
many more worldwide. We present the first large scale study of vocal misuse
based on long-term ambulatory data collected by an accelerometer placed on the
neck. We investigate an unsupervised data mining approach to uncovering latent
information about voice misuse.
We segment signals from over 253 days of data from 22 subjects into over a
hundred million single glottal pulses (closures of the vocal folds), cluster
segments into symbols, and use symbolic mismatch to uncover differences between
patients and matched controls, and between patients pre- and post-treatment.
Our results show significant behavioral differences between patients and
controls, as well as between some pre- and post-treatment patients. Our
proposed approach provides an objective basis for helping diagnose behavioral
voice disorders, and is a first step towards a more data-driven understanding
of the impact of voice therapy.
|
[
"Marzyeh Ghassemi, Zeeshan Syed, Daryush D. Mehta, Jarrad H. Van Stan,\n Robert E. Hillman, and John V. Guttag",
"['Marzyeh Ghassemi' 'Zeeshan Syed' 'Daryush D. Mehta' 'Jarrad H. Van Stan'\n 'Robert E. Hillman' 'John V. Guttag']"
] |
cs.LG cs.AI stat.ML
| null |
1608.02341
| null | null |
http://arxiv.org/pdf/1608.02341v1
|
2016-08-08T07:44:24Z
|
2016-08-08T07:44:24Z
|
Towards Representation Learning with Tractable Probabilistic Models
|
Probabilistic models learned as density estimators can be exploited in
representation learning beside being toolboxes used to answer inference queries
only. However, how to extract useful representations highly depends on the
particular model involved. We argue that tractable inference, i.e. inference
that can be computed in polynomial time, can enable general schemes to extract
features from black box models. We plan to investigate how Tractable
Probabilistic Models (TPMs) can be exploited to generate embeddings by random
query evaluations. We devise two experimental designs to assess and compare
different TPMs as feature extractors in an unsupervised representation learning
framework. We show some experimental results on standard image datasets by
applying such a method to Sum-Product Networks and Mixture of Trees as
tractable models generating embeddings.
|
[
"Antonio Vergari and Nicola Di Mauro and Floriana Esposito",
"['Antonio Vergari' 'Nicola Di Mauro' 'Floriana Esposito']"
] |
cs.LG
| null |
1608.02484
| null | null |
http://arxiv.org/pdf/1608.02484v1
|
2016-08-08T15:23:26Z
|
2016-08-08T15:23:26Z
|
Interpolated Discretized Embedding of Single Vectors and Vector Pairs
for Classification, Metric Learning and Distance Approximation
|
We propose a new embedding method for a single vector and for a pair of
vectors. This embedding method enables: a) efficient classification and
regression of functions of single vectors; b) efficient approximation of
distance functions; and c) non-Euclidean, semimetric learning. To the best of
our knowledge, this is the first work that enables learning any general,
non-Euclidean, semimetrics. That is, our method is a universal semimetric
learning and approximation method that can approximate any distance function
with as high accuracy as needed with or without semimetric constraints. The
project homepage including code is at: http://www.ariel.ac.il/sites/ofirpele/ID
|
[
"Ofir Pele and Yakir Ben-Aliz",
"['Ofir Pele' 'Yakir Ben-Aliz']"
] |
null | null |
1608.02546
| null | null |
http://arxiv.org/pdf/1608.02546v2
|
2016-12-08T17:41:49Z
|
2016-08-08T18:30:22Z
|
A Stackelberg Game Perspective on the Conflict Between Machine Learning
and Data Obfuscation
|
Data is the new oil; this refrain is repeated extensively in the age of internet tracking, machine learning, and data analytics. As data collection becomes more personal and pervasive, however, public pressure is mounting for privacy protection. In this atmosphere, developers have created applications to add noise to user attributes visible to tracking algorithms. This creates a strategic interaction between trackers and users when incentives to maintain privacy and improve accuracy are misaligned. In this paper, we conceptualize this conflict through an N+1-player, augmented Stackelberg game. First a machine learner declares a privacy protection level, and then users respond by choosing their own perturbation amounts. We use the general frameworks of differential privacy and empirical risk minimization to quantify the utility components due to privacy and accuracy, respectively. In equilibrium, each user perturbs her data independently, which leads to a high net loss in accuracy. To remedy this scenario, we show that the learner improves his utility by proactively perturbing the data himself. While other work in this area has studied privacy markets and mechanism design for truthful reporting of user information, we take a different viewpoint by considering both user and learner perturbation.
|
[
"['Jeffrey Pawlick' 'Quanyan Zhu']"
] |
cs.CL cs.LG
| null |
1608.02689
| null | null |
http://arxiv.org/pdf/1608.02689v2
|
2017-05-30T14:31:13Z
|
2016-08-09T04:38:38Z
|
Multi-task Domain Adaptation for Sequence Tagging
|
Many domain adaptation approaches rely on learning cross domain shared
representations to transfer the knowledge learned in one domain to other
domains. Traditional domain adaptation only considers adapting for one task. In
this paper, we explore multi-task representation learning under the domain
adaptation scenario. We propose a neural network framework that supports domain
adaptation for multiple tasks simultaneously, and learns shared representations
that better generalize for domain adaptation. We apply the proposed framework
to domain adaptation for sequence tagging problems considering two tasks:
Chinese word segmentation and named entity recognition. Experiments show that
multi-task domain adaptation works better than disjoint domain adaptation for
each task, and achieves the state-of-the-art results for both tasks in the
social media domain.
|
[
"['Nanyun Peng' 'Mark Dredze']",
"Nanyun Peng and Mark Dredze"
] |
cs.AI cs.CV cs.LG cs.LO
| null |
1608.02693
| null | null |
http://arxiv.org/pdf/1608.02693v1
|
2016-08-09T05:48:51Z
|
2016-08-09T05:48:51Z
|
Deeply Semantic Inductive Spatio-Temporal Learning
|
We present an inductive spatio-temporal learning framework rooted in
inductive logic programming. With an emphasis on visuo-spatial language, logic,
and cognition, the framework supports learning with relational spatio-temporal
features identifiable in a range of domains involving the processing and
interpretation of dynamic visuo-spatial imagery. We present a prototypical
system, and an example application in the domain of computing for visual arts
and computational cognitive science.
|
[
"['Jakob Suchan' 'Mehul Bhatt' 'Carl Schultz']",
"Jakob Suchan and Mehul Bhatt and Carl Schultz"
] |
cs.CV cs.AI cs.CL cs.LG
| null |
1608.02717
| null | null |
http://arxiv.org/pdf/1608.02717v1
|
2016-08-09T08:24:02Z
|
2016-08-09T08:24:02Z
|
Mean Box Pooling: A Rich Image Representation and Output Embedding for
the Visual Madlibs Task
|
We present Mean Box Pooling, a novel visual representation that pools over
CNN representations of a large number, highly overlapping object proposals. We
show that such representation together with nCCA, a successful multimodal
embedding technique, achieves state-of-the-art performance on the Visual
Madlibs task. Moreover, inspired by the nCCA's objective function, we extend
classical CNN+LSTM approach to train the network by directly maximizing the
similarity between the internal representation of the deep learning
architecture and candidate answers. Again, such approach achieves a significant
improvement over the prior work that also uses CNN+LSTM approach on Visual
Madlibs.
|
[
"Ashkan Mokarian and Mateusz Malinowski and Mario Fritz",
"['Ashkan Mokarian' 'Mateusz Malinowski' 'Mario Fritz']"
] |
cs.CV cs.LG cs.NE
| null |
1608.02728
| null | null |
http://arxiv.org/pdf/1608.02728v1
|
2016-08-09T08:59:47Z
|
2016-08-09T08:59:47Z
|
OnionNet: Sharing Features in Cascaded Deep Classifiers
|
The focus of our work is speeding up evaluation of deep neural networks in
retrieval scenarios, where conventional architectures may spend too much time
on negative examples. We propose to replace a monolithic network with our novel
cascade of feature-sharing deep classifiers, called OnionNet, where subsequent
stages may add both new layers as well as new feature channels to the previous
ones. Importantly, intermediate feature maps are shared among classifiers,
preventing them from the necessity of being recomputed. To accomplish this, the
model is trained end-to-end in a principled way under a joint loss. We validate
our approach in theory and on a synthetic benchmark. As a result demonstrated
in three applications (patch matching, object detection, and image retrieval),
our cascade can operate significantly faster than both monolithic networks and
traditional cascades without sharing at the cost of marginal decrease in
precision.
|
[
"Martin Simonovsky and Nikos Komodakis",
"['Martin Simonovsky' 'Nikos Komodakis']"
] |
stat.ML cs.LG
| null |
1608.02731
| null | null |
http://arxiv.org/pdf/1608.02731v1
|
2016-08-09T09:01:13Z
|
2016-08-09T09:01:13Z
|
Posterior Sampling for Reinforcement Learning Without Episodes
|
This is a brief technical note to clarify some of the issues with applying
the application of the algorithm posterior sampling for reinforcement learning
(PSRL) in environments without fixed episodes. In particular, this paper aims
to:
- Review some of results which have been proven for finite horizon MDPs
(Osband et al 2013, 2014a, 2014b, 2016) and also for MDPs with finite ergodic
structure (Gopalan et al 2014).
- Review similar results for optimistic algorithms in infinite horizon
problems (Jaksch et al 2010, Bartlett and Tewari 2009, Abbasi-Yadkori and
Szepesvari 2011), with particular attention to the dynamic episode growth.
- Highlight the delicate technical issue which has led to a fault in the
proof of the lazy-PSRL algorithm (Abbasi-Yadkori and Szepesvari 2015). We
present an explicit counterexample to this style of argument. Therefore, we
suggest that the Theorem 2 in (Abbasi-Yadkori and Szepesvari 2015) be instead
considered a conjecture, as it has no rigorous proof.
- Present pragmatic approaches to apply PSRL in infinite horizon problems. We
conjecture that, under some additional assumptions, it will be possible to
obtain bounds $O( \sqrt{T} )$ even without episodic reset.
We hope that this note serves to clarify existing results in the field of
reinforcement learning and provides interesting motivation for future work.
|
[
"Ian Osband, Benjamin Van Roy",
"['Ian Osband' 'Benjamin Van Roy']"
] |
stat.ML cs.LG
| null |
1608.02732
| null | null |
http://arxiv.org/pdf/1608.02732v1
|
2016-08-09T09:02:01Z
|
2016-08-09T09:02:01Z
|
On Lower Bounds for Regret in Reinforcement Learning
|
This is a brief technical note to clarify the state of lower bounds on regret
for reinforcement learning. In particular, this paper:
- Reproduces a lower bound on regret for reinforcement learning, similar to
the result of Theorem 5 in the journal UCRL2 paper (Jaksch et al 2010).
- Clarifies that the proposed proof of Theorem 6 in the REGAL paper (Bartlett
and Tewari 2009) does not hold using the standard techniques without further
work. We suggest that this result should instead be considered a conjecture as
it has no rigorous proof.
- Suggests that the conjectured lower bound given by (Bartlett and Tewari
2009) is incorrect and, in fact, it is possible to improve the scaling of the
upper bound to match the weaker lower bounds presented in this paper.
We hope that this note serves to clarify existing results in the field of
reinforcement learning and provides interesting motivation for future work.
|
[
"Ian Osband, Benjamin Van Roy",
"['Ian Osband' 'Benjamin Van Roy']"
] |
stat.ML cs.LG
| null |
1608.02861
| null | null |
http://arxiv.org/pdf/1608.02861v1
|
2016-08-09T16:46:53Z
|
2016-08-09T16:46:53Z
|
Classification with the pot-pot plot
|
We propose a procedure for supervised classification that is based on
potential functions. The potential of a class is defined as a kernel density
estimate multiplied by the class's prior probability. The method transforms the
data to a potential-potential (pot-pot) plot, where each data point is mapped
to a vector of potentials. Separation of the classes, as well as classification
of new data points, is performed on this plot. For this, either the
$\alpha$-procedure ($\alpha$-P) or $k$-nearest neighbors ($k$-NN) are employed.
For data that are generated from continuous distributions, these classifiers
prove to be strongly Bayes-consistent. The potentials depend on the kernel and
its bandwidth used in the density estimate. We investigate several variants of
bandwidth selection, including joint and separate pre-scaling and a bandwidth
regression approach. The new method is applied to benchmark data from the
literature, including simulated data sets as well as 50 sets of real data. It
compares favorably to known classification methods such as LDA, QDA, max kernel
density estimates, $k$-NN, and $DD$-plot classification using depth functions.
|
[
"Oleksii Pokotylo and Karl Mosler",
"['Oleksii Pokotylo' 'Karl Mosler']"
] |
cs.LG
|
10.14569/IJACSA.2016.070710
|
1608.02888
| null | null |
http://arxiv.org/abs/1608.02888v1
|
2016-08-06T12:48:40Z
|
2016-08-06T12:48:40Z
|
Effective Data Mining Technique for Classification Cancers via Mutations
in Gene using Neural Network
|
The prediction plays the important role in detecting efficient protection and
therapy of cancer. The prediction of mutations in gene needs a diagnostic and
classification, which is based on the whole database (big dataset), to reach
sufficient accuracy results. Since the tumor suppressor P53 is approximately
about fifty percentage of all human tumors because mutations that occur in the
TP53 gene into the cells. So, this paper is applied on tumor p53, where the
problem is there are several primitive databases (excel database) contain
datasets of TP53 gene with its tumor protein p53, these databases are rich
datasets that cover all mutations and cause diseases (cancers). But these Data
Bases cannot reach to predict and diagnosis cancers, i.e. the big datasets have
not efficient Data Mining method, which can predict, diagnosis the mutation,
and classify the cancer of patient. The goal of this paper to reach a Data
Mining technique, that employs neural network, which bases on the big datasets.
Also, offers friendly predictions, flexible, and effective classified cancers,
in order to overcome the previous techniques drawbacks. This proposed technique
is done by using two approaches, first, bioinformatics techniques by using
BLAST, CLUSTALW, etc, in order to know if there are malignant mutations or not.
The second, data mining by using neural network; it is selected (12) out of
(53) TP53 gene database fields. To clarify, one of these 12 fields (gene
location field) did not exists in TP53 gene database; therefore, it is added to
the database of TP53 gene in training and testing back propagation algorithm,
in order to classify specifically the types of cancers. Feed Forward Back
Propagation supports this Data Mining method with data training rate (1) and
Mean Square Error (MSE) (0.00000000000001). This effective technique allows in
a quick, accurate and easy way to classify the type of cancer.
|
[
"['Ayad Ghany Ismaeel' 'Dina Yousif Mikhail']",
"Ayad Ghany Ismaeel, Dina Yousif Mikhail"
] |
cs.LG cs.CL cs.IT math.IT
| null |
1608.02893
| null | null |
http://arxiv.org/pdf/1608.02893v2
|
2016-08-26T20:55:41Z
|
2016-08-08T01:30:45Z
|
Syntactically Informed Text Compression with Recurrent Neural Networks
|
We present a self-contained system for constructing natural language models
for use in text compression. Our system improves upon previous neural network
based models by utilizing recent advances in syntactic parsing -- Google's
SyntaxNet -- to augment character-level recurrent neural networks. RNNs have
proven exceptional in modeling sequence data such as text, as their
architecture allows for modeling of long-term contextual information.
|
[
"['David Cox']",
"David Cox"
] |
cs.NE cs.AI cs.LG
| null |
1608.02971
| null | null |
http://arxiv.org/pdf/1608.02971v1
|
2016-08-09T20:04:40Z
|
2016-08-09T20:04:40Z
|
Neuroevolution-Based Inverse Reinforcement Learning
|
The problem of Learning from Demonstration is targeted at learning to perform
tasks based on observed examples. One approach to Learning from Demonstration
is Inverse Reinforcement Learning, in which actions are observed to infer
rewards. This work combines a feature based state evaluation approach to
Inverse Reinforcement Learning with neuroevolution, a paradigm for modifying
neural networks based on their performance on a given task. Neural networks are
used to learn from a demonstrated expert policy and are evolved to generate a
policy similar to the demonstration. The algorithm is discussed and evaluated
against competitive feature-based Inverse Reinforcement Learning approaches. At
the cost of execution time, neural networks allow for non-linear combinations
of features in state evaluations. These valuations may correspond to state
value or state reward. This results in better correspondence to observed
examples as opposed to using linear combinations. This work also extends
existing work on Bayesian Non-Parametric Feature Construction for Inverse
Reinforcement Learning by using non-linear combinations of intermediate data to
improve performance. The algorithm is observed to be specifically suitable for
a linearly solvable non-deterministic Markov Decision Processes in which
multiple rewards are sparsely scattered in state space. A conclusive
performance hierarchy between evaluated algorithms is presented.
|
[
"['Karan K. Budhraja' 'Tim Oates']",
"Karan K. Budhraja and Tim Oates"
] |
cs.CL cs.LG cs.NE
| null |
1608.02996
| null | null |
http://arxiv.org/pdf/1608.02996v1
|
2016-08-09T22:24:16Z
|
2016-08-09T22:24:16Z
|
Towards cross-lingual distributed representations without parallel text
trained with adversarial autoencoders
|
Current approaches to learning vector representations of text that are
compatible between different languages usually require some amount of parallel
text, aligned at word, sentence or at least document level. We hypothesize
however, that different natural languages share enough semantic structure that
it should be possible, in principle, to learn compatible vector representations
just by analyzing the monolingual distribution of words.
In order to evaluate this hypothesis, we propose a scheme to map word vectors
trained on a source language to vectors semantically compatible with word
vectors trained on a target language using an adversarial autoencoder.
We present preliminary qualitative results and discuss possible future
developments of this technique, such as applications to cross-lingual sentence
representations.
|
[
"Antonio Valerio Miceli Barone",
"['Antonio Valerio Miceli Barone']"
] |
cs.MM cs.LG
|
10.1109/TMM.2017.2690144
|
1608.03016
| null | null |
http://arxiv.org/abs/1608.03016v2
|
2017-04-15T05:26:23Z
|
2016-08-10T01:11:32Z
|
Mining Fashion Outfit Composition Using An End-to-End Deep Learning
Approach on Set Data
|
Composing fashion outfits involves deep understanding of fashion standards
while incorporating creativity for choosing multiple fashion items (e.g.,
Jewelry, Bag, Pants, Dress). In fashion websites, popular or high-quality
fashion outfits are usually designed by fashion experts and followed by large
audiences. In this paper, we propose a machine learning system to compose
fashion outfits automatically. The core of the proposed automatic composition
system is to score fashion outfit candidates based on the appearances and
meta-data. We propose to leverage outfit popularity on fashion oriented
websites to supervise the scoring component. The scoring component is a
multi-modal multi-instance deep learning system that evaluates instance
aesthetics and set compatibility simultaneously. In order to train and evaluate
the proposed composition system, we have collected a large scale fashion outfit
dataset with 195K outfits and 368K fashion items from Polyvore. Although the
fashion outfit scoring and composition is rather challenging, we have achieved
an AUC of 85% for the scoring component, and an accuracy of 77% for a
constrained composition task.
|
[
"Yuncheng Li, LiangLiang Cao, Jiang Zhu, Jiebo Luo",
"['Yuncheng Li' 'LiangLiang Cao' 'Jiang Zhu' 'Jiebo Luo']"
] |
cs.LG stat.ML
| null |
1608.03023
| null | null |
http://arxiv.org/pdf/1608.03023v3
|
2017-03-08T07:58:32Z
|
2016-08-10T01:51:36Z
|
Stochastic Rank-1 Bandits
|
We propose stochastic rank-$1$ bandits, a class of online learning problems
where at each step a learning agent chooses a pair of row and column arms, and
receives the product of their values as a reward. The main challenge of the
problem is that the individual values of the row and column are unobserved. We
assume that these values are stochastic and drawn independently. We propose a
computationally-efficient algorithm for solving our problem, which we call
Rank1Elim. We derive a $O((K + L) (1 / \Delta) \log n)$ upper bound on its
$n$-step regret, where $K$ is the number of rows, $L$ is the number of columns,
and $\Delta$ is the minimum of the row and column gaps; under the assumption
that the mean row and column rewards are bounded away from zero. To the best of
our knowledge, we present the first bandit algorithm that finds the maximum
entry of a rank-$1$ matrix whose regret is linear in $K + L$, $1 / \Delta$, and
$\log n$. We also derive a nearly matching lower bound. Finally, we evaluate
Rank1Elim empirically on multiple problems. We observe that it leverages the
structure of our problems and can learn near-optimal solutions even if our
modeling assumptions are mildly violated.
|
[
"['Sumeet Katariya' 'Branislav Kveton' 'Csaba Szepesvari' 'Claire Vernade'\n 'Zheng Wen']",
"Sumeet Katariya, Branislav Kveton, Csaba Szepesvari, Claire Vernade,\n and Zheng Wen"
] |
stat.ML cs.LG
| null |
1608.031
| null | null | null | null | null |
Estimation from Indirect Supervision with Linear Moments
|
In structured prediction problems where we have indirect supervision of the
output, maximum marginal likelihood faces two computational obstacles:
non-convexity of the objective and intractability of even a single gradient
computation. In this paper, we bypass both obstacles for a class of what we
call linear indirectly-supervised problems. Our approach is simple: we solve a
linear system to estimate sufficient statistics of the model, which we then use
to estimate parameters via convex optimization. We analyze the statistical
properties of our approach and show empirically that it is effective in two
settings: learning with local privacy constraints and learning from low-cost
count-based annotations.
|
[
"Aditi Raghunathan, Roy Frostig, John Duchi, Percy Liang"
] |
null | null |
1608.03100
| null | null |
http://arxiv.org/pdf/1608.03100v1
|
2016-08-10T09:19:07Z
|
2016-08-10T09:19:07Z
|
Estimation from Indirect Supervision with Linear Moments
|
In structured prediction problems where we have indirect supervision of the output, maximum marginal likelihood faces two computational obstacles: non-convexity of the objective and intractability of even a single gradient computation. In this paper, we bypass both obstacles for a class of what we call linear indirectly-supervised problems. Our approach is simple: we solve a linear system to estimate sufficient statistics of the model, which we then use to estimate parameters via convex optimization. We analyze the statistical properties of our approach and show empirically that it is effective in two settings: learning with local privacy constraints and learning from low-cost count-based annotations.
|
[
"['Aditi Raghunathan' 'Roy Frostig' 'John Duchi' 'Percy Liang']"
] |
math.OC cs.IT cs.LG math.IT
| null |
1608.03248
| null | null |
http://arxiv.org/pdf/1608.03248v2
|
2017-11-19T22:48:13Z
|
2016-08-10T18:15:58Z
|
Combination of LMS Adaptive Filters with Coefficients Feedback
|
Parallel combinations of adaptive filters have been effectively used to
improve the performance of adaptive algorithms and address well-known
trade-offs, such as convergence rate vs. steady-state error. Nevertheless,
typical combinations suffer from a convergence stagnation issue due to the fact
that the component filters run independently. Solutions to this issue usually
involve conditional transfers of coefficients between filters, which although
effective, are hard to generalize to combinations with more filters or when
there is no clearly faster adaptive filter. In this work, a more natural
solution is proposed by cyclically feeding back the combined coefficient vector
to all component filters. Besides coping with convergence stagnation, this new
topology improves tracking and supervisor stability, and bridges an important
conceptual gap between combinations of adaptive filters and variable step size
schemes. We analyze the steady-state, tracking, and transient performance of
this topology for LMS component filters and supervisors with generic activation
functions. Numerical examples are used to illustrate how coefficients feedback
can improve the performance of parallel combinations at a small computational
overhead.
|
[
"Luiz F. O. Chamon and Cassio G. Lopes",
"['Luiz F. O. Chamon' 'Cassio G. Lopes']"
] |
cs.LG math.FA
| null |
1608.03287
| null | null |
http://arxiv.org/pdf/1608.03287v1
|
2016-08-10T20:02:40Z
|
2016-08-10T20:02:40Z
|
Deep vs. shallow networks : An approximation theory perspective
|
The paper briefy reviews several recent results on hierarchical architectures
for learning from examples, that may formally explain the conditions under
which Deep Convolutional Neural Networks perform much better in function
approximation problems than shallow, one-hidden layer architectures. The paper
announces new results for a non-smooth activation function - the ReLU function
- used in present-day neural networks, as well as for the Gaussian networks. We
propose a new definition of relative dimension to encapsulate different notions
of sparsity of a function class that can possibly be exploited by deep networks
but not by shallow ones to drastically reduce the complexity required for
approximation and learning.
|
[
"['Hrushikesh Mhaskar' 'Tomaso Poggio']",
"Hrushikesh Mhaskar and Tomaso Poggio"
] |
cs.LG stat.ML
|
10.1145/2987538.2987540
|
1608.03333
| null | null |
http://arxiv.org/abs/1608.03333v1
|
2016-08-11T00:48:00Z
|
2016-08-11T00:48:00Z
|
Temporal Learning and Sequence Modeling for a Job Recommender System
|
We present our solution to the job recommendation task for RecSys Challenge
2016. The main contribution of our work is to combine temporal learning with
sequence modeling to capture complex user-item activity patterns to improve job
recommendations. First, we propose a time-based ranking model applied to
historical observations and a hybrid matrix factorization over time re-weighted
interactions. Second, we exploit sequence properties in user-items activities
and develop a RNN-based recommendation model. Our solution achieved 5$^{th}$
place in the challenge among more than 100 participants. Notably, the strong
performance of our RNN approach shows a promising new direction in employing
sequence modeling for recommendation systems.
|
[
"Kuan Liu, Xing Shi, Anoop Kumar, Linhong Zhu, Prem Natarajan",
"['Kuan Liu' 'Xing Shi' 'Anoop Kumar' 'Linhong Zhu' 'Prem Natarajan']"
] |
cs.LG stat.ML
| null |
1608.03339
| null | null |
http://arxiv.org/pdf/1608.03339v2
|
2017-03-11T07:45:26Z
|
2016-08-11T01:20:23Z
|
Distributed learning with regularized least squares
|
We study distributed learning with the least squares regularization scheme in
a reproducing kernel Hilbert space (RKHS). By a divide-and-conquer approach,
the algorithm partitions a data set into disjoint data subsets, applies the
least squares regularization scheme to each data subset to produce an output
function, and then takes an average of the individual output functions as a
final global estimator or predictor. We show with error bounds in expectation
in both the $L^2$-metric and RKHS-metric that the global output function of
this distributed learning is a good approximation to the algorithm processing
the whole data in one single machine. Our error bounds are sharp and stated in
a general setting without any eigenfunction assumption. The analysis is
achieved by a novel second order decomposition of operator differences in our
integral operator approach. Even for the classical least squares regularization
scheme in the RKHS associated with a general kernel, we give the best learning
rate in the literature.
|
[
"['Shao-Bo Lin' 'Xin Guo' 'Ding-Xuan Zhou']",
"Shao-Bo Lin, Xin Guo, Ding-Xuan Zhou"
] |
cs.DB cs.LG
| null |
1608.03344
| null | null |
http://arxiv.org/pdf/1608.03344v1
|
2016-08-11T01:55:04Z
|
2016-08-11T01:55:04Z
|
Multi-source Hierarchical Prediction Consolidation
|
In big data applications such as healthcare data mining, due to privacy
concerns, it is necessary to collect predictions from multiple information
sources for the same instance, with raw features being discarded or withheld
when aggregating multiple predictions. Besides, crowd-sourced labels need to be
aggregated to estimate the ground truth of the data. Because of the imperfect
predictive models or human crowdsourcing workers, noisy and conflicting
information is ubiquitous and inevitable. Although state-of-the-art aggregation
methods have been proposed to handle label spaces with flat structures, as the
label space is becoming more and more complicated, aggregation under a label
hierarchical structure becomes necessary but has been largely ignored. These
label hierarchies can be quite informative as they are usually created by
domain experts to make sense of highly complex label correlations for many
real-world cases like protein functionality interactions or disease
relationships.
We propose a novel multi-source hierarchical prediction consolidation method
to effectively exploits the complicated hierarchical label structures to
resolve the noisy and conflicting information that inherently originates from
multiple imperfect sources. We formulate the problem as an optimization problem
with a closed-form solution. The proposed method captures the smoothness
overall information sources as well as penalizing any consolidation result that
violates the constraints derived from the label hierarchy. The hierarchical
instance similarity, as well as the consolidation result, are inferred in a
totally unsupervised, iterative fashion. Experimental results on both synthetic
and real-world datasets show the effectiveness of the proposed method over
existing alternatives.
|
[
"['Chenwei Zhang' 'Sihong Xie' 'Yaliang Li' 'Jing Gao' 'Wei Fan'\n 'Philip S. Yu']",
"Chenwei Zhang, Sihong Xie, Yaliang Li, Jing Gao, Wei Fan, Philip S. Yu"
] |
cs.LG q-bio.QM stat.ML
| null |
1608.0353
| null | null | null | null | null |
Semi-Supervised Prediction of Gene Regulatory Networks Using Machine
Learning Algorithms
|
Use of computational methods to predict gene regulatory networks (GRNs) from
gene expression data is a challenging task. Many studies have been conducted
using unsupervised methods to fulfill the task; however, such methods usually
yield low prediction accuracies due to the lack of training data. In this
article, we propose semi-supervised methods for GRN prediction by utilizing two
machine learning algorithms, namely support vector machines (SVM) and random
forests (RF). The semi-supervised methods make use of unlabeled data for
training. We investigate inductive and transductive learning approaches, both
of which adopt an iterative procedure to obtain reliable negative training data
from the unlabeled data. We then apply our semi-supervised methods to gene
expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluate
the performance of our methods using the expression data. Our analysis
indicated that the transductive learning approach outperformed the inductive
learning approach for both organisms. However, there was no conclusive
difference identified in the performance of SVM and RF. Experimental results
also showed that the proposed semi-supervised methods performed better than
existing supervised methods for both organisms.
|
[
"Nihir Patel and Jason T. L. Wang"
] |
null | null |
1608.03530
| null | null |
http://arxiv.org/pdf/1608.03530v1
|
2016-08-11T16:52:03Z
|
2016-08-11T16:52:03Z
|
Semi-Supervised Prediction of Gene Regulatory Networks Using Machine
Learning Algorithms
|
Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabeled data for training. We investigate inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabeled data. We then apply our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluate the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.
|
[
"['Nihir Patel' 'Jason T. L. Wang']"
] |
stat.ML cs.LG
| null |
1608.03533
| null | null | null | null | null |
Sequence Graph Transform (SGT): A Feature Embedding Function for
Sequence Data Mining
|
Sequence feature embedding is a challenging task due to the unstructuredness
of sequence, i.e., arbitrary strings of arbitrary length. Existing methods are
efficient in extracting short-term dependencies but typically suffer from
computation issues for the long-term. Sequence Graph Transform (SGT), a feature
embedding function, that can extract a varying amount of short- to long-term
dependencies without increasing the computation is proposed. SGT's properties
are analytically proved for interpretation under normal and uniform
distribution assumptions. SGT features yield significantly superior results in
sequence clustering and classification with higher accuracy and lower
computation as compared to the existing methods, including the state-of-the-art
sequence/string Kernels and LSTM.
|
[
"Chitta Ranjan, Samaneh Ebrahimi and Kamran Paynabar"
] |
null | null |
1608.03533v
| null | null |
http://arxiv.org/pdf/1608.03533v15
|
2021-10-05T00:32:17Z
|
2016-08-11T16:59:19Z
|
Sequence Graph Transform (SGT): A Feature Embedding Function for
Sequence Data Mining
|
Sequence feature embedding is a challenging task due to the unstructuredness of sequence, i.e., arbitrary strings of arbitrary length. Existing methods are efficient in extracting short-term dependencies but typically suffer from computation issues for the long-term. Sequence Graph Transform (SGT), a feature embedding function, that can extract a varying amount of short- to long-term dependencies without increasing the computation is proposed. SGT's properties are analytically proved for interpretation under normal and uniform distribution assumptions. SGT features yield significantly superior results in sequence clustering and classification with higher accuracy and lower computation as compared to the existing methods, including the state-of-the-art sequence/string Kernels and LSTM.
|
[
"['Chitta Ranjan' 'Samaneh Ebrahimi' 'Kamran Paynabar']"
] |
cs.LG cs.AI cs.IR stat.ML
| null |
1608.03544
| null | null |
http://arxiv.org/pdf/1608.03544v2
|
2017-02-27T17:16:22Z
|
2016-08-06T14:13:28Z
|
On Context-Dependent Clustering of Bandits
|
We investigate a novel cluster-of-bandit algorithm CAB for collaborative
recommendation tasks that implements the underlying feedback sharing mechanism
by estimating the neighborhood of users in a context-dependent manner. CAB
makes sharp departures from the state of the art by incorporating collaborative
effects into inference as well as learning processes in a manner that
seamlessly interleaving explore-exploit tradeoffs and collaborative steps. We
prove regret bounds under various assumptions on the data, which exhibit a
crisp dependence on the expected number of clusters over the users, a natural
measure of the statistical difficulty of the learning task. Experiments on
production and real-world datasets show that CAB offers significantly increased
prediction performance against a representative pool of state-of-the-art
methods.
|
[
"['Claudio Gentile' 'Shuai Li' 'Purushottam Kar' 'Alexandros Karatzoglou'\n 'Evans Etrue' 'Giovanni Zappella']",
"Claudio Gentile, Shuai Li, Purushottam Kar, Alexandros Karatzoglou,\n Evans Etrue, Giovanni Zappella"
] |
stat.ML cs.LG stat.AP
| null |
1608.03585
| null | null |
http://arxiv.org/pdf/1608.03585v1
|
2016-08-11T19:56:27Z
|
2016-08-11T19:56:27Z
|
Warm Starting Bayesian Optimization
|
We develop a framework for warm-starting Bayesian optimization, that reduces
the solution time required to solve an optimization problem that is one in a
sequence of related problems. This is useful when optimizing the output of a
stochastic simulator that fails to provide derivative information, for which
Bayesian optimization methods are well-suited. Solving sequences of related
optimization problems arises when making several business decisions using one
optimization model and input data collected over different time periods or
markets. While many gradient-based methods can be warm started by initiating
optimization at the solution to the previous problem, this warm start approach
does not apply to Bayesian optimization methods, which carry a full metamodel
of the objective function from iteration to iteration. Our approach builds a
joint statistical model of the entire collection of related objective
functions, and uses a value of information calculation to recommend points to
evaluate.
|
[
"Matthias Poloczek, Jialei Wang, and Peter I. Frazier",
"['Matthias Poloczek' 'Jialei Wang' 'Peter I. Frazier']"
] |
stat.ML cs.LG cs.NE
| null |
1608.03639
| null | null |
http://arxiv.org/pdf/1608.03639v1
|
2016-08-11T23:48:44Z
|
2016-08-11T23:48:44Z
|
Faster Training of Very Deep Networks Via p-Norm Gates
|
A major contributing factor to the recent advances in deep neural networks is
structural units that let sensory information and gradients to propagate
easily. Gating is one such structure that acts as a flow control. Gates are
employed in many recent state-of-the-art recurrent models such as LSTM and GRU,
and feedforward models such as Residual Nets and Highway Networks. This enables
learning in very deep networks with hundred layers and helps achieve
record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP
(e.g., machine translation with GRU). However, there is limited work in
analysing the role of gating in the learning process. In this paper, we propose
a flexible $p$-norm gating scheme, which allows user-controllable flow and as a
consequence, improve the learning speed. This scheme subsumes other existing
gating schemes, including those in GRU, Highway Networks and Residual Nets as
special cases. Experiments on large sequence and vector datasets demonstrate
that the proposed gating scheme helps improve the learning speed significantly
without extra overhead.
|
[
"Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Trang Pham' 'Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
] |
cs.LG cs.DS stat.ML
| null |
1608.03643
| null | null |
http://arxiv.org/pdf/1608.03643v2
|
2016-11-16T16:31:27Z
|
2016-08-12T00:36:42Z
|
Chi-squared Amplification: Identifying Hidden Hubs
|
We consider the following general hidden hubs model: an $n \times n$ random
matrix $A$ with a subset $S$ of $k$ special rows (hubs): entries in rows
outside $S$ are generated from the probability distribution $p_0 \sim
N(0,\sigma_0^2)$; for each row in $S$, some $k$ of its entries are generated
from $p_1 \sim N(0,\sigma_1^2)$, $\sigma_1>\sigma_0$, and the rest of the
entries from $p_0$. The problem is to identify the high-degree hubs
efficiently. This model includes and significantly generalizes the planted
Gaussian Submatrix Model, where the special entries are all in a $k \times k$
submatrix. There are two well-known barriers: if $k\geq c\sqrt{n\ln n}$, just
the row sums are sufficient to find $S$ in the general model. For the submatrix
problem, this can be improved by a $\sqrt{\ln n}$ factor to $k \ge c\sqrt{n}$
by spectral methods or combinatorial methods. In the variant with $p_0=\pm 1$
(with probability $1/2$ each) and $p_1\equiv 1$, neither barrier has been
broken.
We give a polynomial-time algorithm to identify all the hidden hubs with high
probability for $k \ge n^{0.5-\delta}$ for some $\delta >0$, when
$\sigma_1^2>2\sigma_0^2$. The algorithm extends to the setting where planted
entries might have different variances each at least as large as $\sigma_1^2$.
We also show a nearly matching lower bound: for $\sigma_1^2 \le 2\sigma_0^2$,
there is no polynomial-time Statistical Query algorithm for distinguishing
between a matrix whose entries are all from $N(0,\sigma_0^2)$ and a matrix with
$k=n^{0.5-\delta}$ hidden hubs for any $\delta >0$. The lower bound as well as
the algorithm are related to whether the chi-squared distance of the two
distributions diverges. At the critical value $\sigma_1^2=2\sigma_0^2$, we show
that the general hidden hubs problem can be solved for $k\geq c\sqrt n(\ln
n)^{1/4}$, improving on the naive row sum-based method.
|
[
"['Ravi Kannan' 'Santosh Vempala']",
"Ravi Kannan and Santosh Vempala"
] |
cs.LG cs.CV cs.NE
| null |
1608.03644
| null | null |
http://arxiv.org/pdf/1608.03644v4
|
2016-10-18T20:20:22Z
|
2016-08-12T00:43:59Z
|
Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences
Using Deep Neural Networks
|
Deep neural network (DNN) models have recently obtained state-of-the-art
prediction accuracy for the transcription factor binding (TFBS) site
classification task. However, it remains unclear how these approaches identify
meaningful DNA sequence signals and give insights as to why TFs bind to certain
locations. In this paper, we propose a toolkit called the Deep Motif Dashboard
(DeMo Dashboard) which provides a suite of visualization strategies to extract
motifs, or sequence patterns from deep neural network models for TFBS
classification. We demonstrate how to visualize and understand three important
DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our
first visualization method is finding a test sequence's saliency map which uses
first-order derivatives to describe the importance of each nucleotide in making
the final prediction. Second, considering recurrent models make predictions in
a temporal manner (from one end of a TFBS sequence to the other), we introduce
temporal output scores, indicating the prediction score of a model over time
for a sequential input. Lastly, a class-specific visualization strategy finds
the optimal input sequence for a given TFBS positive class via stochastic
gradient optimization. Our experimental results indicate that a
convolutional-recurrent architecture performs the best among the three
architectures. The visualization techniques indicate that CNN-RNN makes
predictions by modeling both motifs as well as dependencies among them.
|
[
"Jack Lanchantin, Ritambhara Singh, Beilun Wang, and Yanjun Qi",
"['Jack Lanchantin' 'Ritambhara Singh' 'Beilun Wang' 'Yanjun Qi']"
] |
cs.LG
| null |
1608.03647
| null | null |
http://arxiv.org/pdf/1608.03647v2
|
2017-04-23T16:46:17Z
|
2016-08-12T07:24:57Z
|
Learning with Value-Ramp
|
We study a learning principle based on the intuition of forming ramps. The
agent tries to follow an increasing sequence of values until the agent meets a
peak of reward. The resulting Value-Ramp algorithm is natural, easy to
configure, and has a robust implementation with natural numbers.
|
[
"Tom J. Ameloot and Jan Van den Bussche",
"['Tom J. Ameloot' 'Jan Van den Bussche']"
] |
cs.NE cs.LG stat.ML
| null |
1608.03665
| null | null |
http://arxiv.org/pdf/1608.03665v4
|
2016-10-18T04:03:41Z
|
2016-08-12T03:20:43Z
|
Learning Structured Sparsity in Deep Neural Networks
|
High demand for computation resources severely hinders deployment of
large-scale Deep Neural Networks (DNN) in resource constrained devices. In this
work, we propose a Structured Sparsity Learning (SSL) method to regularize the
structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs.
SSL can: (1) learn a compact structure from a bigger DNN to reduce computation
cost; (2) obtain a hardware-friendly structured sparsity of DNN to efficiently
accelerate the DNNs evaluation. Experimental results show that SSL achieves on
average 5.1x and 3.1x speedups of convolutional layer computation of AlexNet
against CPU and GPU, respectively, with off-the-shelf libraries. These speedups
are about twice speedups of non-structured sparsity; (3) regularize the DNN
structure to improve classification accuracy. The results show that for
CIFAR-10, regularization on layer depth can reduce 20 layers of a Deep Residual
Network (ResNet) to 18 layers while improve the accuracy from 91.25% to 92.60%,
which is still slightly higher than that of original ResNet with 32 layers. For
AlexNet, structure regularization by SSL also reduces the error by around ~1%.
Open source code is in https://github.com/wenwei202/caffe/tree/scnn
|
[
"Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Li",
"['Wei Wen' 'Chunpeng Wu' 'Yandan Wang' 'Yiran Chen' 'Hai Li']"
] |
cs.RO cs.LG
| null |
1608.03694
| null | null |
http://arxiv.org/pdf/1608.03694v1
|
2016-08-12T07:26:59Z
|
2016-08-12T07:26:59Z
|
Density Matching Reward Learning
|
In this paper, we focus on the problem of inferring the underlying reward
function of an expert given demonstrations, which is often referred to as
inverse reinforcement learning (IRL). In particular, we propose a model-free
density-based IRL algorithm, named density matching reward learning (DMRL),
which does not require model dynamics. The performance of DMRL is analyzed
theoretically and the sample complexity is derived. Furthermore, the proposed
DMRL is extended to handle nonlinear IRL problems by assuming that the reward
function is in the reproducing kernel Hilbert space (RKHS) and kernel DMRL
(KDMRL) is proposed. The parameters for KDMRL can be computed analytically,
which greatly reduces the computation time. The performance of KDMRL is
extensively evaluated in two sets of experiments: grid world and track driving
experiments. In grid world experiments, the proposed KDMRL method is compared
with both model-based and model-free IRL methods and shows superior performance
on a nonlinear reward setting and competitive performance on a linear reward
setting in terms of expected value differences. Then we move on to more
realistic experiments of learning different driving styles for autonomous
navigation in complex and dynamic tracks using KDMRL and receding horizon
control.
|
[
"['Sungjoon Choi' 'Kyungjae Lee' 'Andy Park' 'Songhwai Oh']",
"Sungjoon Choi, Kyungjae Lee, Andy Park, Songhwai Oh"
] |
cond-mat.dis-nn cond-mat.stat-mech cs.LG q-bio.NC
|
10.1103/PhysRevE.94.062310
|
1608.03714
| null | null |
http://arxiv.org/abs/1608.03714v2
|
2016-11-11T01:49:13Z
|
2016-08-12T08:35:22Z
|
Unsupervised feature learning from finite data by message passing:
discontinuous versus continuous phase transition
|
Unsupervised neural network learning extracts hidden features from unlabeled
training data. This is used as a pretraining step for further supervised
learning in deep networks. Hence, understanding unsupervised learning is of
fundamental importance. Here, we study the unsupervised learning from a finite
number of data, based on the restricted Boltzmann machine learning. Our study
inspires an efficient message passing algorithm to infer the hidden feature,
and estimate the entropy of candidate features consistent with the data. Our
analysis reveals that the learning requires only a few data if the feature is
salient and extensively many if the feature is weak. Moreover, the entropy of
candidate features monotonically decreases with data size and becomes negative
(i.e., entropy crisis) before the message passing becomes unstable, suggesting
a discontinuous phase transition. In terms of convergence time of the message
passing algorithm, the unsupervised learning exhibits an easy-hard-easy
phenomenon as the training data size increases. All these properties are
reproduced in an approximate Hopfield model, with an exception that the entropy
crisis is absent, and only continuous phase transition is observed. This key
difference is also confirmed in a handwritten digits dataset. This study
deepens our understanding of unsupervised learning from a finite number of
data, and may provide insights into its role in training deep networks.
|
[
"['Haiping Huang' 'Taro Toyoizumi']",
"Haiping Huang and Taro Toyoizumi"
] |
cs.NE cs.CV cs.LG
| null |
1608.03793
| null | null |
http://arxiv.org/pdf/1608.03793v2
|
2016-08-16T18:36:44Z
|
2016-08-12T13:50:24Z
|
Applying Deep Learning to Basketball Trajectories
|
One of the emerging trends for sports analytics is the growing use of player
and ball tracking data. A parallel development is deep learning predictive
approaches that use vast quantities of data with less reliance on feature
engineering. This paper applies recurrent neural networks in the form of
sequence modeling to predict whether a three-point shot is successful. The
models are capable of learning the trajectory of a basketball without any
knowledge of physics. For comparison, a baseline static machine learning model
with a full set of features, such as angle and velocity, in addition to the
positional data is also tested. Using a dataset of over 20,000 three pointers
from NBA SportVu data, the models based simply on sequential positional data
outperform a static feature rich machine learning model in predicting whether a
three-point shot is successful. This suggests deep learning models may offer an
improvement to traditional feature based machine learning methods for tracking
data.
|
[
"['Rajiv Shah' 'Rob Romijnders']",
"Rajiv Shah and Rob Romijnders"
] |
stat.ML cs.IR cs.LG
| null |
1608.03811
| null | null |
http://arxiv.org/pdf/1608.03811v1
|
2016-08-12T14:40:46Z
|
2016-08-12T14:40:46Z
|
Content-based image retrieval tutorial
|
This paper functions as a tutorial for individuals interested to enter the
field of information retrieval but wouldn't know where to begin from. It
describes two fundamental yet efficient image retrieval techniques, the first
being k - nearest neighbors (knn) and the second support vector machines(svm).
The goal is to provide the reader with both the theoretical and practical
aspects in order to acquire a better understanding. Along with this tutorial we
have also developed the equivalent software1 using the MATLAB environment in
order to illustrate the techniques, so that the reader can have a hands-on
experience.
|
[
"['Joani Mitro']",
"Joani Mitro"
] |
cs.DC cs.LG math.OC
| null |
1608.03866
| null | null |
http://arxiv.org/pdf/1608.03866v2
|
2016-12-19T15:19:25Z
|
2016-08-12T18:34:06Z
|
Distributed Optimization for Client-Server Architecture with Negative
Gradient Weights
|
Availability of both massive datasets and computing resources have made
machine learning and predictive analytics extremely pervasive. In this work we
present a synchronous algorithm and architecture for distributed optimization
motivated by privacy requirements posed by applications in machine learning. We
present an algorithm for the recently proposed multi-parameter-server
architecture. We consider a group of parameter servers that learn a model based
on randomized gradients received from clients. Clients are computational
entities with private datasets (inducing a private objective function), that
evaluate and upload randomized gradients to the parameter servers. The
parameter servers perform model updates based on received gradients and share
the model parameters with other servers. We prove that the proposed algorithm
can optimize the overall objective function for a very general architecture
involving $C$ clients connected to $S$ parameter servers in an arbitrary time
varying topology and the parameter servers forming a connected network.
|
[
"['Shripad Gade' 'Nitin H. Vaidya']",
"Shripad Gade and Nitin H. Vaidya"
] |
cs.CL cs.LG cs.SI
| null |
1608.03902
| null | null |
http://arxiv.org/pdf/1608.03902v1
|
2016-08-12T20:19:16Z
|
2016-08-12T20:19:16Z
|
Rapid Classification of Crisis-Related Data on Social Networks using
Convolutional Neural Networks
|
The role of social media, in particular microblogging platforms such as
Twitter, as a conduit for actionable and tactical information during disasters
is increasingly acknowledged. However, time-critical analysis of big crisis
data on social media streams brings challenges to machine learning techniques,
especially the ones that use supervised learning. The Scarcity of labeled data,
particularly in the early hours of a crisis, delays the machine learning
process. The current state-of-the-art classification methods require a
significant amount of labeled data specific to a particular event for training
plus a lot of feature engineering to achieve best results. In this work, we
introduce neural network based classification methods for binary and
multi-class tweet classification task. We show that neural network based models
do not require any feature engineering and perform better than state-of-the-art
methods. In the early hours of a disaster when no labeled data is available,
our proposed method makes the best use of the out-of-event data and achieves
good results.
|
[
"Dat Tien Nguyen, Kamela Ali Al Mannai, Shafiq Joty, Hassan Sajjad,\n Muhammad Imran, Prasenjit Mitra",
"['Dat Tien Nguyen' 'Kamela Ali Al Mannai' 'Shafiq Joty' 'Hassan Sajjad'\n 'Muhammad Imran' 'Prasenjit Mitra']"
] |
cs.LG
| null |
1608.03933
| null | null |
http://arxiv.org/pdf/1608.03933v3
|
2017-11-02T07:15:50Z
|
2016-08-13T03:24:11Z
|
Improved Dynamic Regret for Non-degenerate Functions
|
Recently, there has been a growing research interest in the analysis of
dynamic regret, which measures the performance of an online learner against a
sequence of local minimizers. By exploiting the strong convexity, previous
studies have shown that the dynamic regret can be upper bounded by the
path-length of the comparator sequence. In this paper, we illustrate that the
dynamic regret can be further improved by allowing the learner to query the
gradient of the function multiple times, and meanwhile the strong convexity can
be weakened to other non-degenerate conditions. Specifically, we introduce the
squared path-length, which could be much smaller than the path-length, as a new
regularity of the comparator sequence. When multiple gradients are accessible
to the learner, we first demonstrate that the dynamic regret of strongly convex
functions can be upper bounded by the minimum of the path-length and the
squared path-length. We then extend our theoretical guarantee to functions that
are semi-strongly convex or self-concordant. To the best of our knowledge, this
is the first time that semi-strong convexity and self-concordance are utilized
to tighten the dynamic regret.
|
[
"Lijun Zhang, Tianbao Yang, Jinfeng Yi, Rong Jin, Zhi-Hua Zhou",
"['Lijun Zhang' 'Tianbao Yang' 'Jinfeng Yi' 'Rong Jin' 'Zhi-Hua Zhou']"
] |
stat.ML cs.CV cs.LG
| null |
1608.03974
| null | null |
http://arxiv.org/pdf/1608.03974v1
|
2016-08-13T11:19:22Z
|
2016-08-13T11:19:22Z
|
Recurrent Fully Convolutional Neural Networks for Multi-slice MRI
Cardiac Segmentation
|
In cardiac magnetic resonance imaging, fully-automatic segmentation of the
heart enables precise structural and functional measurements to be taken, e.g.
from short-axis MR images of the left-ventricle. In this work we propose a
recurrent fully-convolutional network (RFCN) that learns image representations
from the full stack of 2D slices and has the ability to leverage inter-slice
spatial dependences through internal memory units. RFCN combines anatomical
detection and segmentation into a single architecture that is trained
end-to-end thus significantly reducing computational time, simplifying the
segmentation pipeline, and potentially enabling real-time applications. We
report on an investigation of RFCN using two datasets, including the publicly
available MICCAI 2009 Challenge dataset. Comparisons have been carried out
between fully convolutional networks and deep restricted Boltzmann machines,
including a recurrent version that leverages inter-slice spatial correlation.
Our studies suggest that RFCN produces state-of-the-art results and can
substantially improve the delineation of contours near the apex of the heart.
|
[
"['Rudra P K Poudel' 'Pablo Lamata' 'Giovanni Montana']",
"Rudra P K Poudel and Pablo Lamata and Giovanni Montana"
] |
cs.LG cs.NE math.OC
| null |
1608.03983
| null | null |
http://arxiv.org/pdf/1608.03983v5
|
2017-05-03T16:28:09Z
|
2016-08-13T13:46:05Z
|
SGDR: Stochastic Gradient Descent with Warm Restarts
|
Restart techniques are common in gradient-free optimization to deal with
multimodal functions. Partial warm restarts are also gaining popularity in
gradient-based optimization to improve the rate of convergence in accelerated
gradient schemes to deal with ill-conditioned functions. In this paper, we
propose a simple warm restart technique for stochastic gradient descent to
improve its anytime performance when training deep neural networks. We
empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where
we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively.
We also demonstrate its advantages on a dataset of EEG recordings and on a
downsampled version of the ImageNet dataset. Our source code is available at
https://github.com/loshchil/SGDR
|
[
"['Ilya Loshchilov' 'Frank Hutter']",
"Ilya Loshchilov and Frank Hutter"
] |
cs.LG cs.IR stat.ML
| null |
1608.04037
| null | null |
http://arxiv.org/pdf/1608.04037v1
|
2016-08-13T23:45:21Z
|
2016-08-13T23:45:21Z
|
An approach to dealing with missing values in heterogeneous data using
k-nearest neighbors
|
Techniques such as clusterization, neural networks and decision making
usually rely on algorithms that are not well suited to deal with missing
values. However, real world data frequently contains such cases. The simplest
solution is to either substitute them by a best guess value or completely
disregard the missing values. Unfortunately, both approaches can lead to biased
results. In this paper, we propose a technique for dealing with missing values
in heterogeneous data using imputation based on the k-nearest neighbors
algorithm. It can handle real (which we refer to as crisp henceforward),
interval and fuzzy data. The effectiveness of the algorithm is tested on
several datasets and the numerical results are promising.
|
[
"Davi E. N. Frossard, Igor O. Nunes, Renato A. Krohling",
"['Davi E. N. Frossard' 'Igor O. Nunes' 'Renato A. Krohling']"
] |
cs.LG cs.CV
| null |
1608.04062
| null | null |
http://arxiv.org/pdf/1608.04062v2
|
2016-09-08T17:46:13Z
|
2016-08-14T05:35:11Z
|
Stacked Approximated Regression Machine: A Simple Deep Learning Approach
|
With the agreement of my coauthors, I Zhangyang Wang would like to withdraw
the manuscript "Stacked Approximated Regression Machine: A Simple Deep Learning
Approach". Some experimental procedures were not included in the manuscript,
which makes a part of important claims not meaningful. In the relevant
research, I was solely responsible for carrying out the experiments; the other
coauthors joined in the discussions leading to the main algorithm.
Please see the updated text for more details.
|
[
"['Zhangyang Wang' 'Shiyu Chang' 'Qing Ling' 'Shuai Huang' 'Xia Hu'\n 'Honghui Shi' 'Thomas S. Huang']",
"Zhangyang Wang, Shiyu Chang, Qing Ling, Shuai Huang, Xia Hu, Honghui\n Shi, Thomas S. Huang"
] |
cs.LG stat.ML
| null |
1608.04063
| null | null |
http://arxiv.org/pdf/1608.04063v1
|
2016-08-14T06:01:21Z
|
2016-08-14T06:01:21Z
|
Bayesian Model Selection Methods for Mutual and Symmetric $k$-Nearest
Neighbor Classification
|
The $k$-nearest neighbor classification method ($k$-NNC) is one of the
simplest nonparametric classification methods. The mutual $k$-NN classification
method (M$k$NNC) is a variant of $k$-NNC based on mutual neighborship. We
propose another variant of $k$-NNC, the symmetric $k$-NN classification method
(S$k$NNC) based on both mutual neighborship and one-sided neighborship. The
performance of M$k$NNC and S$k$NNC depends on the parameter $k$ as the one of
$k$-NNC does. We propose the ways how M$k$NN and S$k$NN classification can be
performed based on Bayesian mutual and symmetric $k$-NN regression methods with
the selection schemes for the parameter $k$. Bayesian mutual and symmetric
$k$-NN regression methods are based on Gaussian process models, and it turns
out that they can do M$k$NN and S$k$NN classification with new encodings of
target values (class labels). The simulation results show that the proposed
methods are better than or comparable to $k$-NNC, M$k$NNC and S$k$NNC with the
parameter $k$ selected by the leave-one-out cross validation method not only
for an artificial data set but also for real world data sets.
|
[
"['Hyun-Chul Kim']",
"Hyun-Chul Kim"
] |
cs.LG
| null |
1608.04077
| null | null |
http://arxiv.org/pdf/1608.04077v3
|
2017-02-28T08:25:33Z
|
2016-08-14T09:19:26Z
|
Generative Knowledge Transfer for Neural Language Models
|
In this paper, we propose a generative knowledge transfer technique that
trains an RNN based language model (student network) using text and output
probabilities generated from a previously trained RNN (teacher network). The
text generation can be conducted by either the teacher or the student network.
We can also improve the performance by taking the ensemble of soft labels
obtained from multiple teacher networks. This method can be used for privacy
conscious language model adaptation because no user data is directly used for
training. Especially, when the soft labels of multiple devices are aggregated
via a trusted third party, we can expect very strong privacy protection.
|
[
"['Sungho Shin' 'Kyuyeon Hwang' 'Wonyong Sung']",
"Sungho Shin, Kyuyeon Hwang, and Wonyong Sung"
] |
cs.CV cs.LG
| null |
1608.0408
| null | null | null | null | null |
Dynamic Hand Gesture Recognition for Wearable Devices with Low
Complexity Recurrent Neural Networks
|
Gesture recognition is a very essential technology for many wearable devices.
While previous algorithms are mostly based on statistical methods including the
hidden Markov model, we develop two dynamic hand gesture recognition techniques
using low complexity recurrent neural network (RNN) algorithms. One is based on
video signal and employs a combined structure of a convolutional neural network
(CNN) and an RNN. The other uses accelerometer data and only requires an RNN.
Fixed-point optimization that quantizes most of the weights into two bits is
conducted to optimize the amount of memory size for weight storage and reduce
the power consumption in hardware and software based implementations.
|
[
"Sungho Shin and Wonyong Sung"
] |
null | null |
1608.04080
| null | null |
http://arxiv.org/pdf/1608.04080v1
|
2016-08-14T09:32:17Z
|
2016-08-14T09:32:17Z
|
Dynamic Hand Gesture Recognition for Wearable Devices with Low
Complexity Recurrent Neural Networks
|
Gesture recognition is a very essential technology for many wearable devices. While previous algorithms are mostly based on statistical methods including the hidden Markov model, we develop two dynamic hand gesture recognition techniques using low complexity recurrent neural network (RNN) algorithms. One is based on video signal and employs a combined structure of a convolutional neural network (CNN) and an RNN. The other uses accelerometer data and only requires an RNN. Fixed-point optimization that quantizes most of the weights into two bits is conducted to optimize the amount of memory size for weight storage and reduce the power consumption in hardware and software based implementations.
|
[
"['Sungho Shin' 'Wonyong Sung']"
] |
cs.NE cs.LG
| null |
1608.04171
| null | null |
http://arxiv.org/pdf/1608.04171v4
|
2017-06-07T04:34:06Z
|
2016-08-15T02:49:17Z
|
Power Data Classification: A Hybrid of a Novel Local Time Warping and
LSTM
|
In this paper, for the purpose of data centre energy consumption monitoring
and analysis, we propose to detect the running programs in a server by
classifying the observed power consumption series. Time series classification
problem has been extensively studied with various distance measurements
developed; also recently the deep learning based sequence models have been
proved to be promising. In this paper, we propose a novel distance measurement
and build a time series classification algorithm hybridizing nearest neighbour
and long short term memory (LSTM) neural network. More specifically, first we
propose a new distance measurement termed as Local Time Warping (LTW), which
utilizes a user-specified set for local warping, and is designed to be
non-commutative and non-dynamic programming. Second we hybridize the 1NN-LTW
and LSTM together. In particular, we combine the prediction probability vector
of 1NN-LTW and LSTM to determine the label of the test cases. Finally, using
the power consumption data from a real data center, we show that the proposed
LTW can improve the classification accuracy of DTW from about 84% to 90%. Our
experimental results prove that the proposed LTW is competitive on our data set
compared with existed DTW variants and its non-commutative feature is indeed
beneficial. We also test a linear version of LTW and it can significantly
outperform existed linear runtime lower bound methods like LB_Keogh.
Furthermore, with the hybrid algorithm, for the power series classification
task we achieve an accuracy up to about 93%. Our research can inspire more
studies on time series distance measurement and the hybrid of the deep learning
models with other traditional models.
|
[
"Yuanlong Li, Han Hu, Yonggang Wen, Jun Zhang",
"['Yuanlong Li' 'Han Hu' 'Yonggang Wen' 'Jun Zhang']"
] |
cs.SC cs.LG
|
10.1109/SYNASC.2016.020
|
1608.04219
| null | null |
http://arxiv.org/abs/1608.04219v1
|
2016-08-15T09:44:29Z
|
2016-08-15T09:44:29Z
|
Using Machine Learning to Decide When to Precondition Cylindrical
Algebraic Decomposition With Groebner Bases
|
Cylindrical Algebraic Decomposition (CAD) is a key tool in computational
algebraic geometry, particularly for quantifier elimination over real-closed
fields. However, it can be expensive, with worst case complexity doubly
exponential in the size of the input. Hence it is important to formulate the
problem in the best manner for the CAD algorithm. One possibility is to
precondition the input polynomials using Groebner Basis (GB) theory. Previous
experiments have shown that while this can often be very beneficial to the CAD
algorithm, for some problems it can significantly worsen the CAD performance.
In the present paper we investigate whether machine learning, specifically a
support vector machine (SVM), may be used to identify those CAD problems which
benefit from GB preconditioning. We run experiments with over 1000 problems
(many times larger than previous studies) and find that the machine learned
choice does better than the human-made heuristic.
|
[
"['Zongyan Huang' 'Matthew England' 'James H. Davenport'\n 'Lawrence C. Paulson']",
"Zongyan Huang, Matthew England, James H. Davenport and Lawrence C.\n Paulson"
] |
cs.CV cs.HC cs.LG stat.ML
| null |
1608.04236
| null | null |
http://arxiv.org/pdf/1608.04236v2
|
2016-08-16T08:06:24Z
|
2016-08-15T11:14:35Z
|
Generative and Discriminative Voxel Modeling with Convolutional Neural
Networks
|
When working with three-dimensional data, choice of representation is key. We
explore voxel-based models, and present evidence for the viability of
voxellated representations in applications including shape modeling and object
classification. Our key contributions are methods for training voxel-based
variational autoencoders, a user interface for exploring the latent space
learned by the autoencoder, and a deep convolutional neural network
architecture for object classification. We address challenges unique to
voxel-based representations, and empirically evaluate our models on the
ModelNet benchmark, where we demonstrate a 51.5% relative improvement in the
state of the art for object classification.
|
[
"Andrew Brock, Theodore Lim, J.M. Ritchie, Nick Weston",
"['Andrew Brock' 'Theodore Lim' 'J. M. Ritchie' 'Nick Weston']"
] |
stat.ML cs.LG
| null |
1608.04245
| null | null |
http://arxiv.org/pdf/1608.04245v2
|
2016-08-16T10:41:32Z
|
2016-08-15T11:42:51Z
|
The Bayesian Low-Rank Determinantal Point Process Mixture Model
|
Determinantal point processes (DPPs) are an elegant model for encoding
probabilities over subsets, such as shopping baskets, of a ground set, such as
an item catalog. They are useful for a number of machine learning tasks,
including product recommendation. DPPs are parametrized by a positive
semi-definite kernel matrix. Recent work has shown that using a low-rank
factorization of this kernel provides remarkable scalability improvements that
open the door to training on large-scale datasets and computing online
recommendations, both of which are infeasible with standard DPP models that use
a full-rank kernel. In this paper we present a low-rank DPP mixture model that
allows us to represent the latent structure present in observed subsets as a
mixture of a number of component low-rank DPPs, where each component DPP is
responsible for representing a portion of the observed data. The mixture model
allows us to effectively address the capacity constraints of the low-rank DPP
model. We present an efficient and scalable Markov Chain Monte Carlo (MCMC)
learning algorithm for our model that uses Gibbs sampling and stochastic
gradient Hamiltonian Monte Carlo (SGHMC). Using an evaluation on several
real-world product recommendation datasets, we show that our low-rank DPP
mixture model provides substantially better predictive performance than is
possible with a single low-rank or full-rank DPP, and significantly better
performance than several other competing recommendation methods in many cases.
|
[
"['Mike Gartrell' 'Ulrich Paquet' 'Noam Koenigstein']",
"Mike Gartrell, Ulrich Paquet, Noam Koenigstein"
] |
cs.LG cs.IT math.IT
| null |
1608.0432
| null | null | null | null | null |
Correlated-PCA: Principal Components' Analysis when Data and Noise are
Correlated
|
Given a matrix of observed data, Principal Components Analysis (PCA) computes
a small number of orthogonal directions that contain most of its variability.
Provably accurate solutions for PCA have been in use for a long time. However,
to the best of our knowledge, all existing theoretical guarantees for it assume
that the data and the corrupting noise are mutually independent, or at least
uncorrelated. This is valid in practice often, but not always. In this paper,
we study the PCA problem in the setting where the data and noise can be
correlated. Such noise is often also referred to as "data-dependent noise". We
obtain a correctness result for the standard eigenvalue decomposition (EVD)
based solution to PCA under simple assumptions on the data-noise correlation.
We also develop and analyze a generalization of EVD, cluster-EVD, that improves
upon EVD in certain regimes.
|
[
"Namrata Vaswani, Han Guo"
] |
null | null |
1608.04320
| null | null |
http://arxiv.org/pdf/1608.04320v2
|
2016-11-02T17:55:02Z
|
2016-08-15T16:32:57Z
|
Correlated-PCA: Principal Components' Analysis when Data and Noise are
Correlated
|
Given a matrix of observed data, Principal Components Analysis (PCA) computes a small number of orthogonal directions that contain most of its variability. Provably accurate solutions for PCA have been in use for a long time. However, to the best of our knowledge, all existing theoretical guarantees for it assume that the data and the corrupting noise are mutually independent, or at least uncorrelated. This is valid in practice often, but not always. In this paper, we study the PCA problem in the setting where the data and noise can be correlated. Such noise is often also referred to as "data-dependent noise". We obtain a correctness result for the standard eigenvalue decomposition (EVD) based solution to PCA under simple assumptions on the data-noise correlation. We also develop and analyze a generalization of EVD, cluster-EVD, that improves upon EVD in certain regimes.
|
[
"['Namrata Vaswani' 'Han Guo']"
] |
cs.LG stat.ML
| null |
1608.04331
| null | null |
http://arxiv.org/pdf/1608.04331v1
|
2016-08-15T17:12:09Z
|
2016-08-15T17:12:09Z
|
Consistency constraints for overlapping data clustering
|
We examine overlapping clustering schemes with functorial constraints, in the
spirit of Carlsson--Memoli. This avoids issues arising from the chaining
required by partition-based methods. Our principal result shows that any
clustering functor is naturally constrained to refine single-linkage clusters
and be refined by maximal-linkage clusters. We work in the context of metric
spaces with non-expansive maps, which is appropriate for modeling data
processing which does not increase information content.
|
[
"['Jared Culbertson' 'Dan P. Guralnik' 'Jakob Hansen' 'Peter F. Stiller']",
"Jared Culbertson, Dan P. Guralnik, Jakob Hansen, Peter F. Stiller"
] |
cs.LG cs.CV cs.DB
|
10.1137/17M1121184
|
1608.04348
| null | null |
http://arxiv.org/abs/1608.04348v2
|
2017-03-15T19:50:07Z
|
2016-08-15T18:03:51Z
|
Anomaly detection and classification for streaming data using PDEs
|
Nondominated sorting, also called Pareto Depth Analysis (PDA), is widely used
in multi-objective optimization and has recently found important applications
in multi-criteria anomaly detection. Recently, a partial differential equation
(PDE) continuum limit was discovered for nondominated sorting leading to a very
fast approximate sorting algorithm called PDE-based ranking. We propose in this
paper a fast real-time streaming version of the PDA algorithm for anomaly
detection that exploits the computational advantages of PDE continuum limits.
Furthermore, we derive new PDE continuum limits for sorting points within their
nondominated layers and show how the new PDEs can be used to classify anomalies
based on which criterion was more significantly violated. We also prove
statistical convergence rates for PDE-based ranking, and present the results of
numerical experiments with both synthetic and real data.
|
[
"Bilal Abbasi, Jeff Calder, Adam M. Oberman",
"['Bilal Abbasi' 'Jeff Calder' 'Adam M. Oberman']"
] |
cs.SD cs.CV cs.LG cs.NE
|
10.1109/LSP.2017.2657381
|
1608.04363
| null | null |
http://arxiv.org/abs/1608.04363v2
|
2016-11-28T17:48:04Z
|
2016-08-15T18:57:10Z
|
Deep Convolutional Neural Networks and Data Augmentation for
Environmental Sound Classification
|
The ability of deep convolutional neural networks (CNN) to learn
discriminative spectro-temporal patterns makes them well suited to
environmental sound classification. However, the relative scarcity of labeled
data has impeded the exploitation of this family of high-capacity models. This
study has two primary contributions: first, we propose a deep convolutional
neural network architecture for environmental sound classification. Second, we
propose the use of audio data augmentation for overcoming the problem of data
scarcity and explore the influence of different augmentations on the
performance of the proposed CNN architecture. Combined with data augmentation,
the proposed model produces state-of-the-art results for environmental sound
classification. We show that the improved performance stems from the
combination of a deep, high-capacity model and an augmented training set: this
combination outperforms both the proposed CNN without augmentation and a
"shallow" dictionary learning model with augmentation. Finally, we examine the
influence of each augmentation on the model's classification accuracy for each
class, and observe that the accuracy for each class is influenced differently
by each augmentation, suggesting that the performance of the model could be
improved further by applying class-conditional data augmentation.
|
[
"Justin Salamon and Juan Pablo Bello",
"['Justin Salamon' 'Juan Pablo Bello']"
] |
cs.LG stat.ML
| null |
1608.04414
| null | null |
http://arxiv.org/pdf/1608.04414v3
|
2016-12-26T06:37:48Z
|
2016-08-15T21:19:51Z
|
Generalization of ERM in Stochastic Convex Optimization: The Dimension
Strikes Back
|
In stochastic convex optimization the goal is to minimize a convex function
$F(x) \doteq {\mathbf E}_{{\mathbf f}\sim D}[{\mathbf f}(x)]$ over a convex set
$\cal K \subset {\mathbb R}^d$ where $D$ is some unknown distribution and each
$f(\cdot)$ in the support of $D$ is convex over $\cal K$. The optimization is
commonly based on i.i.d.~samples $f^1,f^2,\ldots,f^n$ from $D$. A standard
approach to such problems is empirical risk minimization (ERM) that optimizes
$F_S(x) \doteq \frac{1}{n}\sum_{i\leq n} f^i(x)$. Here we consider the question
of how many samples are necessary for ERM to succeed and the closely related
question of uniform convergence of $F_S$ to $F$ over $\cal K$. We demonstrate
that in the standard $\ell_p/\ell_q$ setting of Lipschitz-bounded functions
over a $\cal K$ of bounded radius, ERM requires sample size that scales
linearly with the dimension $d$. This nearly matches standard upper bounds and
improves on $\Omega(\log d)$ dependence proved for $\ell_2/\ell_2$ setting by
Shalev-Shwartz et al. (2009). In stark contrast, these problems can be solved
using dimension-independent number of samples for $\ell_2/\ell_2$ setting and
$\log d$ dependence for $\ell_1/\ell_\infty$ setting using other approaches. We
further show that our lower bound applies even if the functions in the support
of $D$ are smooth and efficiently computable and even if an $\ell_1$
regularization term is added. Finally, we demonstrate that for a more general
class of bounded-range (but not Lipschitz-bounded) stochastic convex programs
an infinite gap appears already in dimension 2.
|
[
"['Vitaly Feldman']",
"Vitaly Feldman"
] |
cs.LG cs.NE
| null |
1608.04426
| null | null |
http://arxiv.org/pdf/1608.04426v4
|
2017-02-17T02:49:12Z
|
2016-08-15T22:28:05Z
|
Regularization for Unsupervised Deep Neural Nets
|
Unsupervised neural networks, such as restricted Boltzmann machines (RBMs)
and deep belief networks (DBNs), are powerful tools for feature selection and
pattern recognition tasks. We demonstrate that overfitting occurs in such
models just as in deep feedforward neural networks, and discuss possible
regularization methods to reduce overfitting. We also propose a "partial"
approach to improve the efficiency of Dropout/DropConnect in this scenario, and
discuss the theoretical justification of these methods from model convergence
and likelihood bounds. Finally, we compare the performance of these methods
based on their likelihood and classification error rates for various pattern
recognition data sets.
|
[
"['Baiyang Wang' 'Diego Klabjan']",
"Baiyang Wang, Diego Klabjan"
] |
cs.LG cs.AI cs.NE
| null |
1608.04428
| null | null |
http://arxiv.org/pdf/1608.04428v1
|
2016-08-15T22:34:50Z
|
2016-08-15T22:34:50Z
|
TerpreT: A Probabilistic Programming Language for Program Induction
|
We study machine learning formulations of inductive program synthesis; given
input-output examples, we try to synthesize source code that maps inputs to
corresponding outputs. Our aims are to develop new machine learning approaches
based on neural networks and graphical models, and to understand the
capabilities of machine learning techniques relative to traditional
alternatives, such as those based on constraint solving from the programming
languages community.
Our key contribution is the proposal of TerpreT, a domain-specific language
for expressing program synthesis problems. TerpreT is similar to a
probabilistic programming language: a model is composed of a specification of a
program representation (declarations of random variables) and an interpreter
describing how programs map inputs to outputs (a model connecting unknowns to
observations). The inference task is to observe a set of input-output examples
and infer the underlying program. TerpreT has two main benefits. First, it
enables rapid exploration of a range of domains, program representations, and
interpreter models. Second, it separates the model specification from the
inference algorithm, allowing like-to-like comparisons between different
approaches to inference. From a single TerpreT specification we automatically
perform inference using four different back-ends. These are based on gradient
descent, linear program (LP) relaxations for graphical models, discrete
satisfiability solving, and the Sketch program synthesis system.
We illustrate the value of TerpreT by developing several interpreter models
and performing an empirical comparison between alternative inference
algorithms. Our key empirical finding is that constraint solvers dominate the
gradient descent and LP-based formulations. We conclude with suggestions for
the machine learning community to make progress on program synthesis.
|
[
"['Alexander L. Gaunt' 'Marc Brockschmidt' 'Rishabh Singh' 'Nate Kushman'\n 'Pushmeet Kohli' 'Jonathan Taylor' 'Daniel Tarlow']",
"Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman,\n Pushmeet Kohli, Jonathan Taylor, Daniel Tarlow"
] |
cs.IR cs.LG
| null |
1608.04468
| null | null |
http://arxiv.org/pdf/1608.04468v1
|
2016-08-16T02:56:24Z
|
2016-08-16T02:56:24Z
|
Unbiased Learning-to-Rank with Biased Feedback
|
Implicit feedback (e.g., clicks, dwell times, etc.) is an abundant source of
data in human-interactive systems. While implicit feedback has many advantages
(e.g., it is inexpensive to collect, user centric, and timely), its inherent
biases are a key obstacle to its effective use. For example, position bias in
search rankings strongly influences how many clicks a result receives, so that
directly using click data as a training signal in Learning-to-Rank (LTR)
methods yields sub-optimal results. To overcome this bias problem, we present a
counterfactual inference framework that provides the theoretical basis for
unbiased LTR via Empirical Risk Minimization despite biased data. Using this
framework, we derive a Propensity-Weighted Ranking SVM for discriminative
learning from implicit feedback, where click models take the role of the
propensity estimator. In contrast to most conventional approaches to de-bias
the data using click models, this allows training of ranking functions even in
settings where queries do not repeat. Beyond the theoretical support, we show
empirically that the proposed learning method is highly effective in dealing
with biases, that it is robust to noise and propensity model misspecification,
and that it scales efficiently. We also demonstrate the real-world
applicability of our approach on an operational search engine, where it
substantially improves retrieval performance.
|
[
"Thorsten Joachims, Adith Swaminathan, Tobias Schnabel",
"['Thorsten Joachims' 'Adith Swaminathan' 'Tobias Schnabel']"
] |
stat.ML cs.LG
| null |
1608.04471
| null | null |
http://arxiv.org/pdf/1608.04471v3
|
2019-09-09T17:31:39Z
|
2016-08-16T03:24:20Z
|
Stein Variational Gradient Descent: A General Purpose Bayesian Inference
Algorithm
|
We propose a general purpose variational inference algorithm that forms a
natural counterpart of gradient descent for optimization. Our method
iteratively transports a set of particles to match the target distribution, by
applying a form of functional gradient descent that minimizes the KL
divergence. Empirical studies are performed on various real world models and
datasets, on which our method is competitive with existing state-of-the-art
methods. The derivation of our method is based on a new theoretical result that
connects the derivative of KL divergence under smooth transforms with Stein's
identity and a recently proposed kernelized Stein discrepancy, which is of
independent interest.
|
[
"Qiang Liu and Dilin Wang",
"['Qiang Liu' 'Dilin Wang']"
] |
stat.ME cs.LG stat.ML
| null |
1608.04478
| null | null |
http://arxiv.org/pdf/1608.04478v1
|
2016-08-16T04:31:52Z
|
2016-08-16T04:31:52Z
|
A Geometrical Approach to Topic Model Estimation
|
In the probabilistic topic models, the quantity of interest---a low-rank
matrix consisting of topic vectors---is hidden in the text corpus matrix,
masked by noise, and the Singular Value Decomposition (SVD) is a potentially
useful tool for learning such a low-rank matrix. However, the connection
between this low-rank matrix and the singular vectors of the text corpus matrix
are usually complicated and hard to spell out, so how to use SVD for learning
topic models faces challenges. In this paper, we overcome the challenge by
revealing a surprising insight: there is a low-dimensional simplex structure
which can be viewed as a bridge between the low-rank matrix of interest and the
SVD of the text corpus matrix, and allows us to conveniently reconstruct the
former using the latter. Such an insight motivates a new SVD approach to
learning topic models, which we analyze with delicate random matrix theory and
derive the rate of convergence. We support our methods and theory numerically,
using both simulated data and real data.
|
[
"Zheng Tracy Ke",
"['Zheng Tracy Ke']"
] |
cs.NE cs.CV cs.LG
| null |
1608.04493
| null | null |
http://arxiv.org/pdf/1608.04493v2
|
2016-11-10T00:17:25Z
|
2016-08-16T06:23:05Z
|
Dynamic Network Surgery for Efficient DNNs
|
Deep learning has become a ubiquitous technology to improve machine
intelligence. However, most of the existing deep models are structurally very
complex, making them difficult to be deployed on the mobile platforms with
limited computational power. In this paper, we propose a novel network
compression method called dynamic network surgery, which can remarkably reduce
the network complexity by making on-the-fly connection pruning. Unlike the
previous methods which accomplish this task in a greedy way, we properly
incorporate connection splicing into the whole process to avoid incorrect
pruning and make it as a continual network maintenance. The effectiveness of
our method is proved with experiments. Without any accuracy loss, our method
can efficiently compress the number of parameters in LeNet-5 and AlexNet by a
factor of $\bm{108}\times$ and $\bm{17.7}\times$ respectively, proving that it
outperforms the recent pruning method by considerable margins. Code and some
models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.
|
[
"Yiwen Guo, Anbang Yao, Yurong Chen",
"['Yiwen Guo' 'Anbang Yao' 'Yurong Chen']"
] |
cs.CE cs.LG stat.ML
| null |
1608.0455
| null | null | null | null | null |
Fast Calculation of the Knowledge Gradient for Optimization of
Deterministic Engineering Simulations
|
A novel efficient method for computing the Knowledge-Gradient policy for
Continuous Parameters (KGCP) for deterministic optimization is derived. The
differences with Expected Improvement (EI), a popular choice for Bayesian
optimization of deterministic engineering simulations, are explored. Both
policies and the Upper Confidence Bound (UCB) policy are compared on a number
of benchmark functions including a problem from structural dynamics. It is
empirically shown that KGCP has similar performance as the EI policy for many
problems, but has better convergence properties for complex (multi-modal)
optimization problems as it emphasizes more on exploration when the model is
confident about the shape of optimal regions. In addition, the relationship
between Maximum Likelihood Estimation (MLE) and slice sampling for estimation
of the hyperparameters of the underlying models, and the complexity of the
problem at hand, is studied.
|
[
"Joachim van der Herten and Ivo Couckuyt and Dirk Deschrijver and Tom\n Dhaene"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.