title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
MultiView Diffusion Maps | cs.LG stat.ML | In this paper, we address the challenging task of achieving multi-view
dimensionality reduction. The goal is to effectively use the availability of
multiple views for extracting a coherent low-dimensional representation of the
data. The proposed method exploits the intrinsic relation within each view, as
well as the mutual relations between views. The multi-view dimensionality
reduction is achieved by defining a cross-view model in which an implied random
walk process is restrained to hop between objects in the different views. The
method is robust to scaling and insensitive to small structural changes in the
data. We define new diffusion distances and analyze the spectra of the proposed
kernel. We show that the proposed framework is useful for various machine
learning applications such as clustering, classification, and manifold
learning. Finally, by fusing multi-sensor seismic data we present a method for
automatic identification of seismic events.
| Ofir Lindenbaum, Arie Yeredor, Moshe Salhov, Amir Averbuch | null | 1508.05550 | null | null |
Necessary and Sufficient Conditions and a Provably Efficient Algorithm
for Separable Topic Discovery | cs.LG cs.CL cs.IR stat.ML | We develop necessary and sufficient conditions and a novel provably
consistent and efficient algorithm for discovering topics (latent factors) from
observations (documents) that are realized from a probabilistic mixture of
shared latent factors that have certain properties. Our focus is on the class
of topic models in which each shared latent factor contains a novel word that
is unique to that factor, a property that has come to be known as separability.
Our algorithm is based on the key insight that the novel words correspond to
the extreme points of the convex hull formed by the row-vectors of a suitably
normalized word co-occurrence matrix. We leverage this geometric insight to
establish polynomial computation and sample complexity bounds based on a few
isotropic random projections of the rows of the normalized word co-occurrence
matrix. Our proposed random-projections-based algorithm is naturally amenable
to an efficient distributed implementation and is attractive for modern
web-scale distributed data mining applications.
| Weicong Ding, Prakash Ishwar, Venkatesh Saligrama | null | 1508.05565 | null | null |
Learning Sampling Distributions for Efficient Object Detection | cs.CV cs.LG | Object detection is an important task in computer vision and learning
systems. Multistage particle windows (MPW), proposed by Gualdi et al., is an
algorithm for fast and accurate object detection. By sampling particle windows
from a proposal distribution (PD), MPW avoids exhaustively scanning the image.
Despite its success, it is unknown how to determine the number of stages and
the number of particle windows in each stage. Moreover, it has to generate too
many particle windows in the initialization step, and it redraws unnecessarily
many particle windows around object-like regions. In this paper, we attempt to
solve the problems of MPW. An important fact we use is that a randomly
generated particle window is unlikely to contain the object, because the object
is a sparse event relative to the huge number of candidate
windows. Therefore, we design the proposal distribution so as to efficiently
reject the huge number of non-object windows. Specifically, we propose the
concepts of rejection, acceptance, and ambiguity windows and regions. This
contrasts with MPW, which utilizes only one region of support. The PD of MPW is
acceptance-oriented whereas the PD of our method (called iPW) is
rejection-oriented. Experimental results on human and face detection
demonstrate the efficiency and effectiveness of the iPW algorithm. The source
code is publicly accessible.
| Yanwei Pang, Jiale Cao, and Xuelong Li | 10.1109/TCYB.2015.2508603 | 1508.05581 | null | null |
The Max $K$-Armed Bandit: A PAC Lower Bound and tighter Algorithms | stat.ML cs.AI cs.LG | We consider the Max $K$-Armed Bandit problem, where a learning agent is faced
with several sources (arms) of items (rewards), and interested in finding the
best item overall. At each time step the agent chooses an arm, and obtains a
random real valued reward. The rewards of each arm are assumed to be i.i.d.,
with an unknown probability distribution that generally differs among the arms.
Under the PAC framework, we provide lower bounds on the sample complexity of
any $(\epsilon,\delta)$-correct algorithm, and propose algorithms that attain
this bound up to logarithmic factors. We compare the performance of these
multi-arm algorithms to the variant in which the arms are not distinguishable
by the agent and are chosen randomly at each stage. Interestingly, when the
maximal rewards of the arms happen to be similar, the latter approach may
provide better performance.
| Yahel David and Nahum Shimkin | null | 1508.05608 | null | null |
Fast Asynchronous Parallel Stochastic Gradient Descent | stat.ML cs.LG | Stochastic gradient descent~(SGD) and its variants have become more and more
popular in machine learning due to their efficiency and effectiveness. To
handle large-scale problems, researchers have recently proposed several
parallel SGD methods for multicore systems. However, existing parallel SGD
methods cannot achieve satisfactory performance in real applications. In this
paper, we propose a fast asynchronous parallel SGD method, called AsySVRG, by
designing an asynchronous strategy to parallelize the recently proposed SGD
variant called stochastic variance reduced gradient~(SVRG). Both theoretical
and empirical results show that AsySVRG can outperform existing
state-of-the-art parallel SGD methods like Hogwild! in terms of convergence
rate and computation cost.
| Shen-Yi Zhao and Wu-Jun Li | null | 1508.05711 | null | null |
Searching for significant patterns in stratified data | stat.ML cs.LG | Significant pattern mining, the problem of finding itemsets that are
significantly enriched in one class of objects, is statistically challenging,
as the large space of candidate patterns leads to an enormous multiple testing
problem. Recently, the concept of testability was proposed as one approach to
correct for multiple testing in pattern mining while retaining statistical
power. Still, these strategies based on testability do not allow one to
condition the test of significance on the observed covariates, which severely
limits their utility in biomedical applications. Here we propose a strategy and
an efficient algorithm to perform significant pattern mining in the presence of
categorical covariates with K states.
| Felipe Llinares-Lopez, Laetitia Papaxanthos, Dean Bodenham, Karsten
Borgwardt | null | 1508.05803 | null | null |
Stochastic Behavior of the Nonnegative Least Mean Fourth Algorithm for
Stationary Gaussian Inputs and Slow Learning | cs.NA cs.LG | Some system identification problems impose nonnegativity constraints on the
parameters to estimate due to inherent physical characteristics of the unknown
system. The nonnegative least-mean-square (NNLMS) algorithm and its variants
allow one to address this problem in an online manner. A nonnegative least mean
fourth (NNLMF) algorithm has been recently proposed to improve the performance
of these algorithms in cases where the measurement noise is not Gaussian. This
paper provides a first theoretical analysis of the stochastic behavior of the
NNLMF algorithm for stationary Gaussian inputs and slow learning. Simulation
results illustrate the accuracy of the proposed analysis.
| Jingen Ni, Jian Yang, Jie Chen, C\'edric Richard, Jos\'e Carlos M.
Bermudez | null | 1508.05873 | null | null |
ERBlox: Combining Matching Dependencies with Machine Learning for Entity
Resolution | cs.DB cs.AI cs.LG | Entity resolution (ER), an important and common data cleaning problem, is
about detecting data duplicate representations for the same external entities,
and merging them into single representations. Relatively recently, declarative
rules called matching dependencies (MDs) have been proposed for specifying
similarity conditions under which attribute values in database records are
merged. In this work we show the process and the benefits of integrating three
components of ER: (a) Classifiers for duplicate/non-duplicate record pairs
built using machine learning (ML) techniques, (b) MDs for supporting both the
blocking phase of ML and the merge itself; and (c) The use of the declarative
language LogiQL -an extended form of Datalog supported by the LogicBlox
platform- for data processing, and the specification and enforcement of MDs.
| Zeinab Bahmani, Leopoldo Bertossi and Nikolaos Vasiloglou | null | 1508.06013 | null | null |
AUC Optimisation and Collaborative Filtering | stat.ML cs.LG | In recommendation systems, one is interested in the ranking of the predicted
items as opposed to other losses such as the mean squared error. Although a
variety of ways to evaluate rankings exist in the literature, here we focus on
the Area Under the ROC Curve (AUC) as it is widely used and has a strong
theoretical underpinning. In practical recommendation, only items at the top of
the ranked list are presented to the users. With this in mind, we propose a
class of objective functions over matrix factorisations which primarily
represent a smooth surrogate for the real AUC, and in a special case we show
how to prioritise the top of the list. The objectives are differentiable and
optimised through a carefully designed stochastic gradient-descent-based
algorithm which scales linearly with the size of the data. In the special case
of square loss we show how to improve computational complexity by leveraging
previously computed measures. To understand theoretically the underlying matrix
factorisation approaches we study both the consistency of the loss functions
with respect to AUC, and generalisation using Rademacher theory. The resulting
generalisation analysis gives strong motivation for the optimisation under
study. Finally, we provide computational results on the efficacy of the
proposed method using synthetic and real data.
| Charanpal Dhanjal (LTCI), Romaric Gaudel (SEQUEL), Stephan Clemencon
(LTCI) | null | 1508.06091 | null | null |
An analysis of numerical issues in neural training by pseudoinversion | cs.LG cs.NE | Some novel strategies have recently been proposed for single hidden layer
neural network training that randomly set the weights from input to hidden
layer, while weights from hidden to output layer are analytically determined by
pseudoinversion. These techniques are gaining popularity in spite of their
known numerical issues when singular and/or almost singular matrices are
involved. In this paper we discuss a critical use of Singular Value Analysis
for identification of these drawbacks and we propose an original use of
regularisation to determine the output weights, based on the concept of
critical hidden layer size. This approach also allows us to limit the training
computational effort. Besides, we introduce a novel technique which relies on an
effective determination of input weights to the hidden layer dimension. This
approach is tested for both regression and classification tasks, resulting in a
significant performance improvement with respect to alternative methods.
| R. Cancelliere and R. Deluca and M. Gai and P. Gallinari and L. Rubini | null | 1508.06092 | null | null |
OCReP: An Optimally Conditioned Regularization for Pseudoinversion Based
Neural Training | cs.NE cs.LG stat.ML | In this paper we consider the training of single hidden layer neural networks
by pseudoinversion, which, in spite of its popularity, is sometimes affected by
numerical instability issues. Regularization is known to be effective in such
cases, so that we introduce, in the framework of Tikhonov regularization, a
matricial reformulation of the problem which allows us to use the condition
number as a diagnostic tool for identification of instability. By imposing
well-conditioning requirements on the relevant matrices, our theoretical
analysis allows the identification of an optimal value for the regularization
parameter from the standpoint of stability. We compare with the value derived
by cross-validation for overfitting control and optimisation of the
generalization performance. We test our method for both regression and
classification tasks. The proposed method is quite effective in terms of
predictivity, often with some improvement on performance with respect to the
reference cases considered. This approach, due to analytical determination of
the regularization parameter, dramatically reduces the computational load
required by many other techniques.
| Rossella Cancelliere, Mario Gai, Patrick Gallinari, Luca Rubini | 10.1016/j.neunet.2015.07.015 | 1508.06095 | null | null |
Robot Language Learning, Generation, and Comprehension | cs.RO cs.AI cs.CL cs.HC cs.LG | We present a unified framework which supports grounding natural-language
semantics in robotic driving. This framework supports acquisition (learning
grounded meanings of nouns and prepositions from human annotation of robotic
driving paths), generation (using such acquired meanings to generate sentential
description of new robotic driving paths), and comprehension (using such
acquired meanings to support automated driving to accomplish navigational goals
specified in natural language). We evaluate the performance of these three
tasks by having independent human judges rate the semantic fidelity of the
sentences associated with paths, achieving overall average correctness of 94.6%
and overall average completeness of 85.6%.
| Daniel Paul Barrett, Scott Alan Bronikowski, Haonan Yu, and Jeffrey
Mark Siskind | null | 1508.06161 | null | null |
Clustering With Side Information: From a Probabilistic Model to a
Deterministic Algorithm | stat.ML cs.AI cs.LG stat.CO | In this paper, we propose a model-based clustering method (TVClust) that
robustly incorporates noisy side information as soft-constraints and aims to
seek a consensus between side information and the observed data. Our method is
based on a nonparametric Bayesian hierarchical model that combines the
probabilistic model for the data instance and the one for the side-information.
An efficient Gibbs sampling algorithm is proposed for posterior inference.
Using the small-variance asymptotics of our probabilistic model, we then derive
a new deterministic clustering algorithm (RDP-means). It can be viewed as an
extension of K-means that allows for the inclusion of side information and has
the additional property that the number of clusters does not need to be
specified a priori. Empirical studies have been carried out to compare our work
with many constrained clustering algorithms from the literature on both a
variety of data sets and under a variety of conditions such as using noisy side
information and erroneous k values. Our experiments show strong
results for our probabilistic and deterministic approaches under these
conditions when compared to other algorithms in the literature.
| Daniel Khashabi, John Wieting, Jeffrey Yufei Liu, Feng Liang | null | 1508.06235 | null | null |
Multiple kernel multivariate performance learning using cutting plane
algorithm | cs.LG cs.CV | In this paper, we propose a multi-kernel classifier learning algorithm to
optimize a given nonlinear and nonsmooth multivariate classifier performance
measure. Moreover, to solve the problem of kernel function selection and kernel
parameter tuning, we propose to construct an optimal kernel as a weighted linear
combination of some candidate kernels. The learning of the classifier parameters
and the kernel weights is unified in a single objective function that minimizes
an upper bound of the given multivariate performance measure. The objective
function is optimized with regard to the classifier parameters and the kernel
weights alternately in an iterative algorithm using the cutting plane algorithm.
The developed algorithm is evaluated on two different pattern classification
methods with regard to various multivariate performance measure optimization
problems. The experimental results show that the proposed algorithm outperforms the
competing methods.
| Jingbin Wang, Haoxiang Wang, Yihua Zhou, Nancy McDonald | null | 1508.06264 | null | null |
SPRIGHT: A Fast and Robust Framework for Sparse Walsh-Hadamard Transform | cs.IT cs.LG math.IT | We consider the problem of computing the Walsh-Hadamard Transform (WHT) of
some $N$-length input vector in the presence of noise, where the $N$-point
Walsh spectrum is $K$-sparse with $K = {O}(N^{\delta})$ scaling sub-linearly in
the input dimension $N$ for some $0<\delta<1$. Over the past decade, there has
been a resurgence in research related to the computation of Discrete Fourier
Transform (DFT) for some length-$N$ input signal that has a $K$-sparse Fourier
spectrum. In particular, through a sparse-graph code design, our earlier work
on the Fast Fourier Aliasing-based Sparse Transform (FFAST) algorithm computes
the $K$-sparse DFT in time ${O}(K\log K)$ by taking ${O}(K)$ noiseless samples.
Inspired by the coding-theoretic design framework, Scheibler et al. proposed
the Sparse Fast Hadamard Transform (SparseFHT) algorithm that elegantly
computes the $K$-sparse WHT in the absence of noise using ${O}(K\log N)$
samples in time ${O}(K\log^2 N)$. However, the SparseFHT algorithm explicitly
exploits the noiseless nature of the problem, and is not equipped to deal with
scenarios where the observations are corrupted by noise. Therefore, a question
of critical interest is whether this coding-theoretic framework can be made
robust to noise. Further, if the answer is yes, what is the extra price that
needs to be paid for being robust to noise? In this paper, we show, quite
interestingly, that there is {\it no extra price} that needs to be paid for
being robust to noise other than a constant factor. In other words, we can
maintain the same sample complexity ${O}(K\log N)$ and the computational
complexity ${O}(K\log^2 N)$ as those of the noiseless case, using our SParse
Robust Iterative Graph-based Hadamard Transform (SPRIGHT) algorithm.
| Xiao Li, Joseph K. Bradley, Sameer Pawar, Kannan Ramchandran | null | 1508.06336 | null | null |
Gaussian Mixture Models with Component Means Constrained in Pre-selected
Subspaces | stat.ML cs.LG | We investigate a Gaussian mixture model (GMM) with component means
constrained in a pre-selected subspace. Applications to classification and
clustering are explored. An EM-type estimation algorithm is derived. We prove
that the subspace containing the component means of a GMM with a common
covariance matrix also contains the modes of the density and the class means.
This motivates us to find a subspace by applying weighted principal component
analysis to the modes of a kernel density and the class means. To circumvent
the difficulty of deciding the kernel bandwidth, we acquire multiple subspaces
from the kernel densities based on a sequence of bandwidths. The GMM
constrained by each subspace is estimated; and the model yielding the maximum
likelihood is chosen. A dimension reduction property is proved in the sense of
being informative for classification or clustering. Experiments on real and
simulated data sets are conducted to examine several ways of determining the
subspace and to compare with the reduced rank mixture discriminant analysis
(MDA). Our new method with the simple technique of spanning the subspace only
by class means often outperforms the reduced rank MDA when the subspace
dimension is very low, making it particularly appealing for visualization.
| Mu Qiao and Jia Li | null | 1508.06388 | null | null |
Nested Hierarchical Dirichlet Processes for Multi-Level Non-Parametric
Admixture Modeling | stat.ML cs.LG | The Dirichlet Process (DP) is a Bayesian non-parametric prior for infinite mixture
modeling, where the number of mixture components grows with the number of data
items. The Hierarchical Dirichlet Process (HDP) is an extension of the DP for
grouped data, often used for non-parametric topic modeling, where each group is
a mixture over shared mixture densities. The Nested Dirichlet Process (nDP), on
the other hand, is an extension of the DP for learning group level
distributions from data, simultaneously clustering the groups. It allows group
level distributions to be shared across groups in a non-parametric setting,
leading to a non-parametric mixture of mixtures. The nCRF extends the nDP for
multilevel non-parametric mixture modeling, enabling the modeling of topic
hierarchies. However, the nDP and nCRF do not allow sharing of distributions as
required in many applications, motivating the need for multi-level
non-parametric admixture modeling. We address this gap by proposing multi-level
nested HDPs (nHDP) where the base distribution of the HDP is itself an HDP at
each level thereby leading to admixtures of admixtures at each level. Because
of couplings between various HDP levels, scaling up is naturally a challenge
during inference. We propose a multi-level nested Chinese Restaurant Franchise
(nCRF) representation for the nested HDP, with which we outline an inference
algorithm based on Gibbs Sampling. We evaluate our model with the two level
nHDP for non-parametric entity topic modeling where an inner HDP creates a
countably infinite set of topic mixtures and associates them with author
entities, while an outer HDP associates documents with these author entities.
In our experiments on two real world research corpora, the nHDP is able to
generalize significantly better than existing models and detect missing author
entities with a reasonable level of accuracy.
| Lavanya Sita Tekumalla, Priyanka Agrawal, Indrajit Bhattacharya | null | 1508.06446 | null | null |
Greedy methods, randomization approaches and multi-arm bandit algorithms
for efficient sparsity-constrained optimization | cs.LG | Several sparsity-constrained algorithms such as Orthogonal Matching Pursuit
or the Frank-Wolfe algorithm with sparsity constraints work by iteratively
selecting a novel atom to add to the current non-zero set of variables. This
selection step is usually performed by computing the gradient and then by
looking for the gradient component with maximal absolute entry. This step can
be computationally expensive especially for large-scale and high-dimensional
data. In this work, we aim at accelerating these sparsity-constrained
optimization algorithms by exploiting the key observation that, for these
algorithms to work, one only needs the coordinate of the gradient's top entry.
Hence, we introduce algorithms based on greedy methods and randomization
approaches that aim at cheaply estimating the gradient and its top entry.
Another of our contributions is to cast the problem of finding the best gradient
entry as a best arm identification in a multi-armed bandit problem. Owing to
this novel insight, we are able to provide a bandit-based algorithm that
directly estimates the top entry in a very efficient way. Theoretical
observations stating that the resulting inexact Frank-Wolfe or Orthogonal
Matching Pursuit algorithms act, with high probability, similarly to their
exact versions are also given. We have carried out several experiments showing
that the greedy deterministic and the bandit approaches we propose can achieve
an acceleration of an order of magnitude while being as efficient as the exact
gradient when used in algorithms such as OMP, Frank-Wolfe or CoSaMP.
| A Rakotomamonjy (LITIS), S Ko\c{c}o (QARMA), Liva Ralaivola (QARMA) | null | 1508.06477 | null | null |
Deep Convolutional Neural Networks for Smile Recognition | cs.CV cs.LG cs.NE | This thesis describes the design and implementation of a smile detector based
on deep convolutional neural networks. It starts with a summary of neural
networks, the difficulties of training them and new training methods, such as
Restricted Boltzmann Machines or autoencoders. It then provides a literature
review of convolutional neural networks and recurrent neural networks. In order
to select databases for smile recognition, comprehensive statistics of
databases popular in the field of facial expression recognition were generated
and are summarized in this thesis. It then proposes a model for smile
detection, of which the main part is implemented. The experimental results are
discussed in this thesis and justified based on a comprehensive model selection
performed. All experiments were run on a Tesla K40c GPU benefiting from a
speedup of up to factor 10 over the computations on a CPU. A smile detection
test accuracy of 99.45% is achieved for the Denver Intensity of Spontaneous
Facial Action (DISFA) database, significantly outperforming existing approaches
with accuracies ranging from 65.55% to 79.67%. This experiment is re-run under
various variations, such as retaining less neutral images or only the low or
high intensities, of which the results are extensively compared.
| Patrick O. Glauner | null | 1508.06535 | null | null |
A review of homomorphic encryption and software tools for encrypted
statistical machine learning | stat.ML cs.CR cs.LG | Recent advances in cryptography promise to enable secure statistical
computation on encrypted data, whereby a limited set of operations can be
carried out without the need to first decrypt. We review these homomorphic
encryption schemes in a manner accessible to statisticians and machine
learners, focusing on pertinent limitations inherent in the current state of
the art. These limitations restrict the kind of statistics and machine learning
algorithms which can be implemented and we review those which have been
successfully applied in the literature. Finally, we document a high performance
R package implementing a recent homomorphic scheme in a general framework.
| Louis J. M. Aslett, Pedro M. Esperan\c{c}a, Chris C. Holmes | null | 1508.06574 | null | null |
Towards universal neural nets: Gibbs machines and ACE | cs.CV cs.LG cs.NE | We study from a physics viewpoint a class of generative neural nets, Gibbs
machines, designed for gradual learning. While including variational
auto-encoders, they offer a broader universal platform for incrementally adding
newly learned features, including physical symmetries. Their direct connection
to statistical physics and information geometry is established. A variational
Pythagorean theorem justifies invoking the exponential/Gibbs class of
probabilities for creating brand new objects. Combining these nets with
classifiers gives rise to a brand of universal generative neural nets:
stochastic auto-classifier-encoders (ACE). ACE have state-of-the-art
performance in their class, both for classification and density estimation for
the MNIST data set.
| Galin Georgiev | null | 1508.06585 | null | null |
Online Anomaly Detection via Class-Imbalance Learning | cs.LG | Anomaly detection is an important task in many real world applications such
as fraud detection, suspicious activity detection, health care monitoring etc.
In this paper, we tackle this problem from a supervised learning perspective in
an online learning setting. We maximize the well-known \emph{Gmean} metric for
class-imbalance learning in an online learning framework. Specifically, we show
that maximizing \emph{Gmean} is equivalent to minimizing a convex surrogate
loss function, and based on that we propose a novel online learning algorithm for
anomaly detection. We then show, through extensive experiments, that the
performance of the proposed algorithm with respect to the $sum$ metric is as
good as that of a recently proposed Cost-Sensitive Online Classification (CSOC)
algorithm for class-imbalance learning over various benchmark data sets, while
keeping the running time close to that of the perceptron algorithm. Another
conclusion is that
other competitive online algorithms do not perform consistently over data sets
of varying size. This shows the potential applicability of our proposed
approach.
| Chandresh Kumar Maurya, Durga Toshniwal, Gopalan Vijendran Venkoparao | null | 1508.06717 | null | null |
Encrypted statistical machine learning: new privacy preserving methods | stat.ML cs.CR cs.LG stat.ME | We present two new statistical machine learning methods designed to learn on
fully homomorphic encrypted (FHE) data. The introduction of FHE schemes
following Gentry (2009) opens up the prospect of privacy preserving statistical
machine learning analysis and modelling of encrypted data without compromising
security constraints. We propose tailored algorithms for applying extremely
random forests, involving a new cryptographic stochastic fraction estimator,
and na\"{i}ve Bayes, involving a semi-parametric model for the class decision
boundary, and show how they can be used to learn and predict from encrypted
data. We demonstrate that these techniques perform competitively on a variety
of classification data sets and provide detailed information about the
computational practicalities of these and other FHE methods.
| Louis J. M. Aslett, Pedro M. Esperan\c{c}a, Chris C. Holmes | null | 1508.06845 | null | null |
Compressive Sensing via Low-Rank Gaussian Mixture Models | stat.ML cs.LG stat.AP | We develop a new compressive sensing (CS) inversion algorithm by utilizing
the Gaussian mixture model (GMM). While the compressive sensing is performed
globally on the entire image as implemented in our lensless camera, a low-rank
GMM is imposed on the local image patches. This low-rank GMM is derived via
eigenvalue thresholding of the GMM trained on the projection of the measurement
data, thus learned {\em in situ}. The GMM and the projection of the measurement
data are updated iteratively during the reconstruction. Our GMM algorithm
degrades to the piecewise linear estimator (PLE) if each patch is represented
by a single Gaussian model. Inspired by this, a low-rank PLE algorithm is also
developed for CS inversion, constituting an additional contribution of this
paper. Extensive results on both simulation data and real data captured by the
lensless camera demonstrate the efficacy of the proposed algorithm.
Furthermore, we compare the CS reconstruction results using our algorithm with
JPEG compression. Simulation results demonstrate that when limited
bandwidth is available (a small number of measurements), our algorithm can
achieve results comparable to JPEG.
| Xin Yuan, Hong Jiang, Gang Huang, Paul A. Wilford | null | 1508.06901 | null | null |
Rapid Exact Signal Scanning with Deep Convolutional Neural Networks | cs.LG cs.CV cs.NE | A rigorous formulation of the dynamics of a signal processing scheme aimed at
dense signal scanning without any loss in accuracy is introduced and analyzed.
Related methods proposed in the recent past lack a satisfactory analysis of
whether they actually fulfill any exactness constraints. This is improved
through an exact characterization of the requirements for a sound sliding
window approach. The tools developed in this paper are especially beneficial if
Convolutional Neural Networks are employed, but can also be used as a more
general framework to validate related approaches to signal scanning. The
proposed theory helps to eliminate redundant computations and renders special
case treatment unnecessary, resulting in a dramatic boost in efficiency
particularly on massively parallel processors. This is demonstrated both
theoretically in a computational complexity analysis and empirically on modern
parallel processors.
| Markus Thom and Franz Gritschneder | 10.1109/TSP.2016.2631454 | 1508.06904 | null | null |
Multi-armed Bandit Problem with Known Trend | cs.LG | We consider a variant of the multi-armed bandit model, which we call
multi-armed bandit problem with known trend, where the gambler knows the shape
of the reward function of each arm but not its distribution. This new problem
is motivated by different online problems like active learning, music and
interface recommendation applications, where, when an arm is sampled by the
model, the received reward changes according to a known trend. By adapting the
standard multi-armed bandit algorithm UCB1 to take advantage of this setting,
we propose the new algorithm named A-UCB that assumes a stochastic model. We
provide upper bounds of the regret which compare favourably with the ones of
UCB1. We also confirm this experimentally with different simulations.
| Djallel Bouneffouf and Rapha\"el Feraud | null | 1508.07091 | null | null |
Partitioning Large Scale Deep Belief Networks Using Dropout | stat.ML cs.LG cs.NE | Deep learning methods have shown great promise in many practical
applications, ranging from speech recognition, visual object recognition, to
text processing. However, most of the current deep learning methods suffer from
scalability problems for large-scale applications, forcing researchers or users
to focus on small-scale problems with fewer parameters.
In this paper, we consider a well-known machine learning model, deep belief
networks (DBNs) that have yielded impressive classification performance on a
large number of benchmark machine learning tasks. To scale up DBN, we propose
an approach that can use the computing clusters in a distributed environment to
train large models, while the dense matrix computations within a single machine
are sped up using graphics processors (GPU). When training a DBN, each machine
randomly drops out a portion of neurons in each hidden layer, for each training
case, making the remaining neurons only learn to detect features that are
generally helpful for producing the correct answer. Within our approach, we
have developed four methods to combine outcomes from each machine to form a
unified model. Our preliminary experiment on the MNIST handwritten digit
database demonstrates that our approach improves on the state-of-the-art test
error rate.
| Yanping Huang, Sai Zhang | null | 1508.07096 | null | null |
Regularized Kernel Recursive Least Square Algorithm | cs.LG stat.ML | In most adaptive signal processing applications, system linearity is assumed
and adaptive linear filters are thus used. The traditional class of supervised
adaptive filters rely on error-correction learning for their adaptive
capability. The kernel method is a powerful nonparametric modeling tool for
pattern analysis and statistical signal processing. Through a nonlinear
mapping, kernel methods transform the data into a set of points in a
Reproducing Kernel Hilbert Space. KRLS achieves high accuracy and has a fast
convergence rate in stationary scenarios. However, the good performance is
obtained at the cost of high computational complexity. Sparsification in kernel
methods is known to reduce computational complexity and memory
consumption.
| Songlin Zhao | null | 1508.07103 | null | null |
Parallel Dither and Dropout for Regularising Deep Neural Networks | cs.LG cs.NE | Effective regularisation during training can mean the difference between
success and failure for deep neural networks. Recently, dither has been
suggested as an alternative to dropout for regularisation during batch-averaged
stochastic gradient descent (SGD). In this article, we show that these methods
fail without batch averaging and we introduce a new, parallel regularisation
method that may be used without batch averaging. Our results for
parallel-regularised non-batch-SGD are substantially better than what is
possible with batch-SGD. Furthermore, our results demonstrate that dither and
dropout are complementary.
| Andrew J.R. Simpson | null | 1508.07130 | null | null |
Competitive and Penalized Clustering Auto-encoder | cs.LG | The paper has been withdrawn since more effective experiments should be
completed.
Auto-encoders (AE) have been widely applied in different fields of machine
learning. However, as a deep model, there is a large number of learnable
parameters in the AE, which can cause over-fitting and slow learning speed in
practice. Many researchers have studied the intrinsic structure of the AE and
shown different useful methods to regularize those parameters. In this paper,
we present a novel regularization method based on a clustering algorithm which
is able to classify the parameters into different groups. With this
regularization, parameters in a given group have approximate equivalent values
and over-fitting problem could be alleviated. Moreover, due to the competitive
behavior of clustering algorithm, this model also overcomes some intrinsic
problems of clustering algorithms, like the determination of the number of clusters.
Experiments on handwritten digits recognition verify the effectiveness of our
novel model.
| Zihao Wang, Yiuming Cheung | null | 1508.07175 | null | null |
Varying-coefficient models with isotropic Gaussian process priors | cs.LG stat.ML | We study learning problems in which the conditional distribution of the
output given the input varies as a function of additional task variables. In
varying-coefficient models with Gaussian process priors, a Gaussian process
generates the functional relationship between the task variables and the
parameters of this conditional. Varying-coefficient models subsume hierarchical
Bayesian multitask models, but also generalizations in which the conditional
varies continuously, for instance, in time or space. However, Bayesian
inference in varying-coefficient models is generally intractable. We show that
inference for varying-coefficient models with isotropic Gaussian process priors
resolves to standard inference for a Gaussian process that can be solved
efficiently. MAP inference in this model resolves to multitask learning using
task and instance kernels, and inference for hierarchical Bayesian multitask
models can be carried out efficiently using graph-Laplacian kernels. We report
on experiments for geospatial prediction.
| Matthias Bussas, Christoph Sawade, Tobias Scheffer and Niels Landwehr | null | 1508.07192 | null | null |
Linked Component Analysis from Matrices to High Order Tensors:
Applications to Biomedical Data | cs.CE cs.LG cs.NA | With the increasing availability of various sensor technologies, we now have
access to large amounts of multi-block (also called multi-set,
multi-relational, or multi-view) data that need to be jointly analyzed to
explore their latent connections. Various component analysis methods have
played an increasingly important role for the analysis of such coupled data. In
this paper, we first provide a brief review of existing matrix-based (two-way)
component analysis methods for the joint analysis of such data with a focus on
biomedical applications. Then, we discuss their important extensions and
generalization to multi-block multiway (tensor) data. We show how constrained
multi-block tensor decomposition methods are able to extract similar or
statistically dependent common features that are shared by all blocks, by
incorporating the multiway nature of data. Special emphasis is given to the
flexible common and individual feature analysis of multi-block data with the
aim to simultaneously extract common and individual latent components with
desired properties and types of diversity. Illustrative examples are given to
demonstrate their effectiveness for biomedical data analysis.
| Guoxu Zhou, Qibin Zhao, Yu Zhang, T\"ulay Adal{\i}, Shengli Xie,
Andrzej Cichocki | 10.1109/JPROC.2015.2474704 | 1508.07416 | null | null |
X-TREPAN: a multi class regression and adapted extraction of
comprehensible decision tree in artificial neural networks | cs.LG cs.NE | In this work, the TREPAN algorithm is enhanced and extended for extracting
decision trees from neural networks. We empirically evaluated the performance
of the algorithm on a set of databases from real world events. This benchmark
enhancement was achieved by adapting Single-test TREPAN and C4.5 decision tree
induction algorithms to analyze the datasets. The models are then compared with
X-TREPAN for comprehensibility and classification accuracy. Furthermore, we
validate the experimentations by applying statistical methods. Finally, the
modified algorithm is extended to work with multi-class regression problems and
the ability to comprehend generalized feed forward networks is achieved.
| Awudu Karim and Shangbo Zhou | null | 1508.07551 | null | null |
Feature Selection via Binary Simultaneous Perturbation Stochastic
Approximation | stat.ML cs.LG | Feature selection (FS) has become an indispensable task in dealing with
today's highly complex pattern recognition problems with massive number of
features. In this study, we propose a new wrapper approach for FS based on
binary simultaneous perturbation stochastic approximation (BSPSA). This
pseudo-gradient descent stochastic algorithm starts with an initial feature
vector and moves toward the optimal feature vector via successive iterations.
In each iteration, the current feature vector's individual components are
perturbed simultaneously by random offsets from a qualified probability
distribution. We present computational experiments on datasets with numbers of
features ranging from a few dozens to thousands using three widely-used
classifiers as wrappers: nearest neighbor, decision tree, and linear support
vector machine. We compare our methodology against the full set of features as
well as a binary genetic algorithm and sequential FS methods using
cross-validated classification error rate and AUC as the performance criteria.
Our results indicate that features selected by BSPSA compare favorably to
alternative methods in general and BSPSA can yield superior feature sets for
datasets with tens of thousands of features by examining an extremely small
fraction of the solution space. We are not aware of any other wrapper FS
methods that are computationally feasible with good convergence properties for
such large datasets.
| Vural Aksakalli and Milad Malekipirbazari | null | 1508.07630 | null | null |
Directional Decision Lists | stat.ML cs.LG stat.CO | In this paper we introduce a novel family of decision lists consisting of
highly interpretable models which can be learned efficiently in a greedy
manner. The defining property is that all rules are oriented in the same
direction. Particular examples of this family are decision lists with
monotonically decreasing (or increasing) probabilities. On simulated data we
empirically confirm that the proposed model family is easier to train than
general decision lists. We exemplify the practical usability of our approach by
identifying problem symptoms in a manufacturing process.
| Marc Goessling and Shan Kang | null | 1508.07643 | null | null |
Domain Generalization for Object Recognition with Multi-task
Autoencoders | cs.CV cs.AI cs.LG stat.ML | The problem of domain generalization is to take knowledge acquired from a
number of related domains where training data is available, and to then
successfully apply it to previously unseen domains. We propose a new feature
learning algorithm, Multi-Task Autoencoder (MTAE), that provides good
generalization performance for cross-domain object recognition.
Our algorithm extends the standard denoising autoencoder framework by
substituting artificially induced corruption with naturally occurring
inter-domain variability in the appearance of objects. Instead of
reconstructing images from noisy versions, MTAE learns to transform the
original image into analogs in multiple related domains. It thereby learns
features that are robust to variations across domains. The learnt features are
then used as inputs to a classifier.
We evaluated the performance of the algorithm on benchmark image recognition
datasets, where the task is to learn features from multiple datasets and to
then predict the image label from unseen datasets. We found that (denoising)
MTAE outperforms alternative autoencoder-based models as well as the current
state-of-the-art algorithms for domain generalization.
| Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang and David
Balduzzi | null | 1508.07680 | null | null |
Word Representations, Tree Models and Syntactic Functions | cs.CL cs.LG stat.ML | Word representations induced from models with discrete latent variables
(e.g.\ HMMs) have been shown to be beneficial in many NLP applications. In this
work, we exploit labeled syntactic dependency trees and formalize the induction
problem as unsupervised learning of tree-structured hidden Markov models.
Syntactic functions are used as additional observed variables in the model,
influencing both transition and emission components. Such syntactic information
can potentially lead to capturing more fine-grain and functional distinctions
between words, which, in turn, may be desirable in many NLP applications. We
evaluate the word representations on two tasks -- named entity recognition and
semantic frame identification. We observe improvements from exploiting
syntactic function information in both cases, with results rivaling those of
state-of-the-art representation learning methods. Additionally, we revisit the
relationship between sequential and unlabeled-tree models and find that the
advantage of the latter is not self-evident.
| Simon \v{S}uster and Gertjan van Noord and Ivan Titov | null | 1508.07709 | null | null |
Coordinate Dual Averaging for Decentralized Online Optimization with
Nonseparable Global Objectives | math.OC cs.LG cs.SY | We consider a decentralized online convex optimization problem in a network
of agents, where each agent controls only a coordinate (or a part) of the
global decision vector. For such a problem, we propose two decentralized
variants (ODA-C and ODA-PS) of Nesterov's primal-dual algorithm with dual
averaging. In ODA-C, to mitigate the disagreements on the primal-vector
updates, the agents implement a generalization of the local
information-exchange dynamics recently proposed by Li and Marden over a static
undirected graph. In ODA-PS, the agents implement the broadcast-based push-sum
dynamics over a time-varying sequence of uniformly connected digraphs. We show
that the regret bounds in both cases have sublinear growth of $O(\sqrt{T})$,
with the time horizon $T$, when the stepsize is of the form $1/\sqrt{t}$ and
the objective functions are Lipschitz-continuous convex functions with
Lipschitz gradients. We also implement the proposed algorithms on a sensor
network to complement our theoretical analysis.
| Soomin Lee, Angelia Nedi\'c, Maxim Raginsky | 10.1109/TCNS.2016.2573639 | 1508.07933 | null | null |
Wald-Kernel: Learning to Aggregate Information for Sequential Inference | stat.ML cs.LG | Sequential hypothesis testing is a desirable decision making strategy in any
time sensitive scenario. Compared with fixed sample-size testing, sequential
testing is capable of achieving identical probability of error requirements
using fewer samples on average. For a binary detection problem, it is well known
that, for known density functions, accumulating the likelihood ratio statistics
is time optimal under a fixed error rate constraint. This paper considers the
problem of learning a binary sequential detector from training samples when
density functions are unavailable. We formulate the problem as a constrained
likelihood ratio estimation which can be solved efficiently through convex
optimization by imposing Reproducing Kernel Hilbert Space (RKHS) structure on
the log-likelihood ratio function. In addition, we provide a computationally
efficient approximated solution for large scale data set. The proposed
algorithm, namely Wald-Kernel, is tested on a synthetic data set and two real
world data sets, together with previous approaches for likelihood ratio
estimation. Our empirical results show that the classifier trained through the
proposed technique achieves smaller average sampling cost than previous
approaches proposed in the literature for the same error rate.
| Diyan Teng and Emre Ertin | null | 1508.07964 | null | null |
Value function approximation via low-rank models | cs.LG cs.AI | We propose a novel value function approximation technique for Markov decision
processes. We consider the problem of compactly representing the state-action
value function using a low-rank and sparse matrix model. The problem is to
decompose a matrix that encodes the true value function into low-rank and
sparse components, and we achieve this using Robust Principal Component
Analysis (PCA). Under minimal assumptions, this Robust PCA problem can be
solved exactly via the Principal Component Pursuit convex optimization problem.
We experiment with the procedure on several examples and demonstrate that our method
yields approximations essentially identical to the true function.
| Hao Yi Ong | null | 1509.00061 | null | null |
Metastatic liver tumour segmentation from discriminant Grassmannian
manifolds | cs.LG cs.CV | The early detection, diagnosis and monitoring of liver cancer progression can
be achieved with the precise delineation of metastatic tumours. However,
accurate automated segmentation remains challenging due to the presence of
noise, inhomogeneity and the high appearance variability of malignant tissue.
In this paper, we propose an unsupervised metastatic liver tumour segmentation
framework using a machine learning approach based on discriminant Grassmannian
manifolds which learns the appearance of tumours with respect to normal tissue.
First, the framework learns within-class and between-class similarity
distributions from a training set of images to discover the optimal manifold
discrimination between normal and pathological tissue in the liver. Second, a
conditional optimisation scheme computes nonlocal pairwise as well as
pattern-based clique potentials from the manifold subspace to recognise regions
with similar labelings and to incorporate global consistency in the
segmentation process. The proposed framework was validated on a clinical
database of 43 CT images from patients with metastatic liver cancer. Compared
to state-of-the-art methods, our method achieves a better performance on two
separate datasets of metastatic liver tumours from different clinical sites,
yielding an overall mean Dice similarity coefficient of 90.7 +/- 2.4 in over 50
tumours with an average volume of 27.3 mm3.
| Samuel Kadoury, Eugene Vorontsov, An Tang | 10.1088/0031-9155/60/16/6459 | 1509.00083 | null | null |
Multi-Sensor Slope Change Detection | stat.ML cs.LG math.ST stat.TH | We develop a mixture procedure for multi-sensor systems to monitor data
streams for a change-point that causes a gradual degradation to a subset of the
streams. Observations are assumed to be initially normal random variables with
known constant means and variances. After the change-point, observations in the
subset will have increasing or decreasing means. The subset and the
rates of change are unknown. Our procedure uses a mixture statistic, which
assumes that each sensor is affected by the change-point with probability
$p_0$. Analytic expressions are obtained for the average run length (ARL) and
the expected detection delay (EDD) of the mixture procedure, which are
demonstrated to be quite accurate numerically. We establish the asymptotic
optimality of the mixture procedure. Numerical examples demonstrate the good
performance of the proposed procedure. We also discuss an adaptive mixture
procedure using empirical Bayes. This paper extends our earlier work on
detecting an abrupt change-point that causes a mean-shift, by tackling the
challenges posed by the non-stationarity of the slope-change problem.
| Yang Cao, Yao Xie, and Nagi Gebraeel | null | 1509.00114 | null | null |
Online Supervised Subspace Tracking | cs.LG math.ST stat.ML stat.TH | We present a framework for supervised subspace tracking, when there are two
time series $x_t$ and $y_t$, one being the high-dimensional predictors and the
other being the response variables and the subspace tracking needs to take into
consideration of both sequences. It extends the classic online subspace
tracking work which can be viewed as tracking of $x_t$ only. Our online
sufficient dimensionality reduction (OSDR) is a meta-algorithm that can be
applied to various cases including linear regression, logistic regression,
multiple linear regression, multinomial logistic regression, support vector
machine, the random dot product model and the multi-scale union-of-subspace
model. OSDR reduces data-dimensionality on-the-fly with low-computational
complexity and it can also handle missing data and dynamic data. OSDR uses an
alternating minimization scheme and updates the subspace via gradient descent
on the Grassmannian manifold. The subspace update can be performed efficiently
utilizing the fact that the Grassmannian gradient with respect to the subspace
in many settings is rank-one (or low-rank in certain cases). The optimization
problem for OSDR is non-convex and hard to analyze in general; we provide
convergence analysis of OSDR in a simple linear regression setting. The good
performance of OSDR compared with the conventional unsupervised subspace
tracking is demonstrated via numerical examples on simulated and real data.
| Yao Xie, Ruiyang Song, Hanjun Dai, Qingbin Li, Le Song | null | 1509.00137 | null | null |
Learning A Task-Specific Deep Architecture For Clustering | cs.LG cs.CV stat.ML | While sparse coding-based clustering methods have shown to be successful,
their bottlenecks in both efficiency and scalability limit the practical usage.
In recent years, deep learning has been proved to be a highly effective,
efficient and scalable feature learning tool. In this paper, we propose to
emulate the sparse coding-based clustering pipeline in the context of deep
learning, leading to a carefully crafted deep model benefiting from both. A
feed-forward network structure, named TAGnet, is constructed based on a
graph-regularized sparse coding algorithm. It is then trained with
task-specific loss functions from end to end. We discover that connecting deep
learning to sparse coding benefits not only the model performance, but also its
initialization and interpretation. Moreover, by introducing auxiliary
clustering tasks to the intermediate feature hierarchy, we formulate DTAGnet
and obtain a further performance boost. Extensive experiments demonstrate that
the proposed model gains remarkable margins over several state-of-the-art
methods.
| Zhangyang Wang, Shiyu Chang, Jiayu Zhou, Meng Wang, Thomas S. Huang | null | 1509.00151 | null | null |
Learning Deep $\ell_0$ Encoders | cs.LG stat.ML | Despite its nonconvex nature, $\ell_0$ sparse approximation is desirable in
many theoretical and application cases. We study the $\ell_0$ sparse
approximation problem with the tool of deep learning, by proposing Deep
$\ell_0$ Encoders. Two typical forms, the $\ell_0$ regularized problem and the
$M$-sparse problem, are investigated. Based on solid iterative algorithms, we
model them as feed-forward neural networks, through introducing novel neurons
and pooling functions. Enforcing such structural priors acts as an effective
network regularization. The deep encoders also enjoy faster inference, larger
learning capacity, and better scalability compared to conventional sparse
coding solutions. Furthermore, under task-driven losses, the models can be
conveniently optimized from end to end. Numerical results demonstrate the
impressive performances of the proposed encoders.
| Zhangyang Wang, Qing Ling, Thomas S. Huang | null | 1509.00153 | null | null |
Differentially Private Online Learning for Cloud-Based Video
Recommendation with Multimedia Big Data in Social Networks | cs.LG | With the rapid growth in multimedia services and the enormous offers of video
contents in online social networks, users have difficulty in finding content
that matches their interests. Therefore, various personalized recommendation systems have been
proposed. However, they ignore that the accelerated proliferation of social
media data has led to the big data era, which has greatly impeded the process
of video recommendation. In addition, none of them has considered both the
privacy of users' contexts (e.g., social status, ages and hobbies) and video
service vendors' repositories, which are extremely sensitive and of significant
commercial value. To handle the problems, we propose a cloud-assisted
differentially private video recommendation system based on distributed online
learning. In our framework, service vendors are modeled as distributed
cooperative learners, recommending videos according to user's context, while
simultaneously adapting the video-selection strategy based on user-click
feedback to maximize total user clicks (reward). Considering the sparsity and
heterogeneity of big social media data, we also propose a novel geometric
differentially private model, which can greatly reduce the performance
(recommendation accuracy) loss. Our simulations show that the proposed algorithms outperform other existing methods and strike a delicate balance between computation accuracy and the level of privacy preservation.
| Pan Zhou, Yingxue Zhou, Dapeng Wu and Hai Jin | null | 1509.00181 | null | null |
Fingerprinting-Based Positioning in Distributed Massive MIMO Systems | cs.IT cs.LG math.IT | Location awareness in wireless networks may enable many applications such as
emergency services, autonomous driving and geographic routing. Although there
are many available positioning techniques, none of them is adapted to work with
massive multiple-input multiple-output (MIMO) systems, which represent a leading 5G
technology candidate. In this paper, we discuss possible solutions for
positioning of mobile stations using a vector of signals at the base station,
equipped with many antennas distributed over deployment area. Our main proposal
is to use fingerprinting techniques based on a vector of received signal
strengths. Methods of this kind are able to work in highly cluttered multipath
environments, and require just one base station, in contrast to standard
range-based and angle-based techniques. We also provide a solution for
fingerprinting-based positioning based on Gaussian process regression, and
discuss main applications and challenges.
| Vladimir Savic and Erik G. Larsson | null | 1509.00202 | null | null |
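A minimal sketch of the fingerprinting idea, using scikit-learn's Gaussian process regression with one GP per coordinate; the deployment geometry and the log-distance path-loss model for received signal strength below are illustrative assumptions, not values taken from the paper.

```python
# Sketch: RSS-fingerprint positioning via Gaussian process regression.
# Geometry, path-loss model and kernel choices are assumed for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_antennas, n_train = 16, 200
antenna_pos = rng.uniform(0, 100, size=(n_antennas, 2))     # distributed antennas

def rss(positions):
    """Toy received-signal-strength model with shadowing noise (assumed)."""
    d = np.linalg.norm(positions[:, None, :] - antenna_pos[None, :, :], axis=-1)
    return -30.0 - 30.0 * np.log10(d + 1.0) + rng.normal(0.0, 2.0, d.shape)

train_pos = rng.uniform(0, 100, size=(n_train, 2))
X_train = rss(train_pos)                                     # fingerprint database
gps = [GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(1.0),
                                normalize_y=True).fit(X_train, train_pos[:, dim])
       for dim in range(2)]                                  # one GP per coordinate

test_pos = rng.uniform(0, 100, size=(20, 2))
X_test = rss(test_pos)
est = np.column_stack([gp.predict(X_test) for gp in gps])
print("mean positioning error:", np.mean(np.linalg.norm(est - test_pos, axis=1)))
```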
Fast Randomized Singular Value Thresholding for Low-rank Optimization | cs.CV cs.LG | Rank minimization can be converted into tractable surrogate problems, such as
Nuclear Norm Minimization (NNM) and Weighted NNM (WNNM). The problems related
to NNM, or WNNM, can be solved iteratively by applying a closed-form proximal
operator, called Singular Value Thresholding (SVT), or Weighted SVT, but they
suffer from high computational cost of Singular Value Decomposition (SVD) at
each iteration. We propose a fast and accurate approximation method for SVT,
that we call fast randomized SVT (FRSVT), with which we avoid direct
computation of SVD. The key idea is to extract an approximate basis for the
range of the matrix from its compressed matrix. Given the basis, we compute
partial singular values of the original matrix from the small factored matrix.
In addition, by developing a range propagation method, our method further
speeds up the extraction of approximate basis at each iteration. Our
theoretical analysis shows the relationship between the approximation bound of
SVD and its effect on NNM via SVT. Along with the analysis, our empirical
results quantitatively and qualitatively show that our approximation rarely
harms the convergence of the host algorithms. We assess the efficiency and
accuracy of the proposed method on various computer vision problems, e.g.,
subspace clustering, weather artifact removal, and simultaneous multi-image
alignment and rectification.
| Tae-Hyun Oh, Yasuyuki Matsushita, Yu-Wing Tai, In So Kweon | null | 1509.00296 | null | null |
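The following sketch shows the core randomized SVT step described above: extract an approximate range basis from a compressed matrix, compute the singular values of the small factored matrix, and soft-threshold them. The rank and oversampling values are assumptions, and the paper's range-propagation speed-up is omitted.

```python
# Sketch of one randomized SVT step: random sketch + QR for an approximate
# range basis, SVD of the small factored matrix, then soft-thresholding.
import numpy as np

def randomized_svt(A, tau, rank, oversample=5):
    Omega = np.random.randn(A.shape[1], rank + oversample)   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                           # approximate basis of range(A)
    Ub, S, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # SVD of the small matrix
    return (Q @ Ub) * np.maximum(S - tau, 0.0) @ Vt          # soft-threshold and rebuild

A = np.random.randn(500, 80) @ np.random.randn(80, 400)      # low-rank test matrix
X_fast = randomized_svt(A, tau=1.0, rank=100)

U, S, Vt = np.linalg.svd(A, full_matrices=False)             # exact SVT for comparison
X_exact = U * np.maximum(S - 1.0, 0.0) @ Vt
print(np.linalg.norm(X_fast - X_exact) / np.linalg.norm(X_exact))
```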
Sensor-Type Classification in Buildings | cs.LG | Many sensors/meters are deployed in commercial buildings to monitor and
optimize their performance. However, because sensor metadata is inconsistent
across buildings, software-based solutions are tightly coupled to the sensor
metadata conventions (i.e. schemas and naming) for each building. Running the
same software across buildings requires significant integration effort.
Metadata normalization is critical for scaling the deployment process and
allows us to decouple building-specific conventions from the code written for
building applications. It also allows us to deal with missing metadata. One
important aspect of normalization is to differentiate sensors by the type of
phenomena being observed. In this paper, we propose a general, simple, yet
effective classification scheme to differentiate sensors in buildings by type.
We perform ensemble learning on data collected from over 2000 sensor streams in
two buildings. Our approach is able to achieve more than 92% accuracy for
classification within buildings and more than 82% accuracy for across
buildings. We also introduce a method for identifying potential misclassified
streams. This is important because it allows us to identify opportunities to
attain more input from experts -- input that could help improve classification
accuracy when ground truth is unavailable. We show that by adjusting a
threshold value we are able to identify at least 30% of the misclassified
instances.
| Dezhi Hong, Jorge Ortiz, Arka Bhattacharya, Kamin Whitehouse | null | 1509.00498 | null | null |
Importance Weighted Autoencoders | cs.LG stat.ML | The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently
proposed generative model pairing a top-down generative network with a
bottom-up recognition network which approximates posterior inference. It
typically makes strong assumptions about posterior inference, for instance that
the posterior distribution is approximately factorial, and that its parameters
can be approximated with nonlinear regression from the observations. As we show
empirically, the VAE objective can lead to overly simplified representations
which fail to use the network's entire modeling capacity. We present the
importance weighted autoencoder (IWAE), a generative model with the same
architecture as the VAE, but which uses a strictly tighter log-likelihood lower
bound derived from importance weighting. In the IWAE, the recognition network
uses multiple samples to approximate the posterior, giving it increased
flexibility to model complex posteriors which do not fit the VAE modeling
assumptions. We show empirically that IWAEs learn richer latent space
representations than VAEs, leading to improved test log-likelihood on density
estimation benchmarks.
| Yuri Burda, Roger Grosse, Ruslan Salakhutdinov | null | 1509.00519 | null | null |
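A small numerical sketch of the importance-weighted bound $\mathcal{L}_k = \mathbb{E}\big[\log \tfrac{1}{k}\sum_{i=1}^{k} p(x,h_i)/q(h_i|x)\big]$ on a toy linear-Gaussian model where $\log p(x)$ is known analytically; the distributions below are illustrative assumptions rather than the neural networks used in the paper, and the bound visibly tightens toward the true log-likelihood as $k$ grows.

```python
# Toy check of the importance-weighted bound; q(h|x) is deliberately crude.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

x = 1.5                                   # a single observation
q_mu, q_sigma = 0.5, 1.2                  # assumed "recognition" distribution q(h|x)

def iwae_bound(k, n_mc=20000, seed=0):
    rng = np.random.default_rng(seed)
    h = rng.normal(q_mu, q_sigma, size=(n_mc, k))            # k samples per estimate
    log_w = (norm.logpdf(h, 0, 1) + norm.logpdf(x, h, 1)     # log p(h) + log p(x|h)
             - norm.logpdf(h, q_mu, q_sigma))                # - log q(h|x)
    return np.mean(logsumexp(log_w, axis=1) - np.log(k))     # E[log mean of k weights]

print("true log p(x):", norm.logpdf(x, 0, np.sqrt(2)))
for k in (1, 5, 50):                                         # k = 1 recovers the VAE bound
    print("L_%d = %.4f" % (k, iwae_bound(k)))
```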
Discovery of Web Usage Profiles Using Various Clustering Techniques | cs.DB cs.IR cs.LG | The explosive growth of World Wide Web (WWW) has necessitated the development
of Web personalization systems in order to understand the user preferences to
dynamically serve customized content to individual users. To reveal information
about user preferences from Web usage data, Web Usage Mining (WUM) techniques
are extensively being applied to the Web log data. Clustering techniques are
widely used in WUM to capture similar interests and trends among users
accessing a Web site. Clustering aims to divide a data set into groups or
clusters where inter-cluster similarities are minimized while the intra cluster
similarities are maximized. This paper reviews four of the popularly used
clustering techniques: k-Means, k-Medoids, Leader and DBSCAN. These techniques
are implemented and tested against the Web user navigational data. Performance
and validity results of each technique are presented and compared.
| Zahid Ansari, Waseem Ahmed, M.F. Azeem and A.Vinaya Babu | null | 1509.00692 | null | null |
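A quick comparison sketch of two of the reviewed algorithms (k-Means and DBSCAN) using scikit-learn; the synthetic blobs merely stand in for per-user navigation feature vectors, and all parameter values are assumptions rather than the paper's settings.

```python
# Comparison sketch: k-Means vs. DBSCAN on synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, DBSCAN
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=600, centers=[[-5, -5], [-5, 5], [5, -5], [5, 5]],
                  cluster_std=0.8, random_state=0)

km_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
db_labels = DBSCAN(eps=1.0, min_samples=8).fit_predict(X)

print("k-Means silhouette:", silhouette_score(X, km_labels))
mask = db_labels != -1                     # DBSCAN marks noise points with label -1
print("DBSCAN silhouette (non-noise):", silhouette_score(X[mask], db_labels[mask]))
```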
Heavy-tailed Independent Component Analysis | cs.LG math.ST stat.CO stat.ML stat.TH | Independent component analysis (ICA) is the problem of efficiently recovering
a matrix $A \in \mathbb{R}^{n\times n}$ from i.i.d. observations of $X=AS$
where $S \in \mathbb{R}^n$ is a random vector with mutually independent
coordinates. This problem has been intensively studied, but all existing
efficient algorithms with provable guarantees require that the coordinates
$S_i$ have finite fourth moments. We consider the heavy-tailed ICA problem
where we do not make this assumption, or even the weaker assumption of finite second moments. This problem
also has received considerable attention in the applied literature. In the
present work, we first give a provably efficient algorithm that works under the
assumption that for constant $\gamma > 0$, each $S_i$ has finite
$(1+\gamma)$-moment, thus substantially weakening the moment requirement
condition for the ICA problem to be solvable. We then give an algorithm that
works under the assumption that matrix $A$ has orthogonal columns but requires
no moment assumptions. Our techniques draw ideas from convex geometry and
exploit standard properties of the multivariate spherical Gaussian distribution
in a novel way.
| Joseph Anderson, Navin Goyal, Anupama Nandi, Luis Rademacher | null | 1509.00727 | null | null |
A DEEP analysis of the META-DES framework for dynamic selection of
ensemble of classifiers | cs.LG stat.ML | Dynamic ensemble selection (DES) techniques work by estimating the level of
competence of each classifier from a pool of classifiers. Only the most
competent ones are selected to classify a given test sample. Hence, the key
issue in DES is the criterion used to estimate the level of competence of the
classifiers in predicting the label of a given test sample. In order to perform
a more robust ensemble selection, we proposed the META-DES framework using
meta-learning, where multiple criteria are encoded as meta-features and are
passed down to a meta-classifier that is trained to estimate the competence
level of a given classifier. In this technical report, we present a
step-by-step analysis of each phase of the framework during training and test.
We show how each set of meta-features is extracted as well as their impact on
the estimation of the competence level of the base classifier. Moreover, we analyze the impact of several factors on the system performance, such as the number of classifiers in the pool, the use of different linear base classifiers, and the size of the validation data. We show that using the
dynamic selection of linear classifiers through the META-DES framework, we can
solve complex non-linear classification problems where other combination
techniques such as AdaBoost cannot.
| Rafael M. O. Cruz, Robert Sabourin, George D. C. Cavalcanti | null | 1509.00825 | null | null |
What to talk about and how? Selective Generation using LSTMs with
Coarse-to-Fine Alignment | cs.CL cs.AI cs.LG cs.NE | We propose an end-to-end, domain-independent neural encoder-aligner-decoder
model for selective generation, i.e., the joint task of content selection and
surface realization. Our model first encodes a full set of over-determined
database event records via an LSTM-based recurrent neural network, then
utilizes a novel coarse-to-fine aligner to identify the small subset of salient
records to talk about, and finally employs a decoder to generate free-form
descriptions of the aligned, selected records. Our model achieves the best
selection and generation results reported to-date (with 59% relative
improvement in generation) on the benchmark WeatherGov dataset, despite using
no specialized features or linguistic resources. Using an improved k-nearest
neighbor beam filter helps further. We also perform a series of ablations and
visualizations to elucidate the contributions of our key model components.
Lastly, we evaluate the generalizability of our model on the RoboCup dataset,
and get results that are competitive with or better than the state-of-the-art,
despite being severely data-starved.
| Hongyuan Mei and Mohit Bansal and Matthew R. Walter | null | 1509.00838 | null | null |
On-the-Fly Learning in a Perpetual Learning Machine | cs.LG | Despite the promise of brain-inspired machine learning, deep neural networks
(DNN) have frustratingly failed to bridge the deceptively large gap between
learning and memory. Here, we introduce a Perpetual Learning Machine; a new
type of DNN that is capable of brain-like dynamic 'on the fly' learning because
it exists in a self-supervised state of Perpetual Stochastic Gradient Descent.
Thus, we provide the means to unify learning and memory within a machine
learning framework. We also explore the elegant duality of abstraction and
synthesis: the Yin and Yang of deep learning.
| Andrew J.R. Simpson | null | 1509.00913 | null | null |
Bayesian Masking: Sparse Bayesian Estimation with Weaker Shrinkage Bias | stat.ML cs.LG | A common strategy for sparse linear regression is to introduce
regularization, which eliminates irrelevant features by letting the
corresponding weights be zeros. However, regularization often shrinks the
estimator for relevant features, which leads to incorrect feature selection.
Motivated by the above-mentioned issue, we propose Bayesian masking (BM), a
sparse estimation method which imposes no regularization on the weights. The
key concept of BM is to introduce binary latent variables that randomly mask
features. Estimating the masking rates determines the relevance of the features
automatically. We derive a variational Bayesian inference algorithm that
maximizes the lower bound of the factorized information criterion (FIC), which
is a recently developed asymptotic criterion for evaluating the marginal
log-likelihood. In addition, we propose reparametrization to accelerate the
convergence of the derived algorithm. Finally, we show that BM outperforms
Lasso and automatic relevance determination (ARD) in terms of the
sparsity-shrinkage trade-off.
| Yohei Kondo, Kohei Hayashi, Shin-ichi Maeda | null | 1509.01004 | null | null |
Training a Restricted Boltzmann Machine for Classification by Labeling
Model Samples | cs.LG | We propose an alternative method for training a classification model. Using
the MNIST set of handwritten digits and Restricted Boltzmann Machines, it is
possible to reach a classification performance competitive to semi-supervised
learning if we first train a model in an unsupervised fashion on unlabeled data
only, and then manually add labels to model samples instead of training data
samples with the help of a GUI. This approach can benefit from the fact that
model samples can be presented to the human labeler in a video-like fashion,
resulting in a higher number of labeled examples. Also, after some initial
training, hard-to-classify examples can be distinguished from easy ones
automatically, saving manual work.
| Malte Probst and Franz Rothlauf | null | 1509.01053 | null | null |
A tree-based kernel for graphs with continuous attributes | cs.LG | The availability of graph data with node attributes that can be either
discrete or real-valued is constantly increasing. While existing kernel methods
are effective techniques for dealing with graphs having discrete node labels,
their adaptation to non-discrete or continuous node attributes has been
limited, mainly for computational issues. Recently, a few kernels especially
tailored for this domain, and that trade predictive performance for
computational efficiency, have been proposed. In this paper, we propose a graph
kernel for complex and continuous nodes' attributes, whose features are tree
structures extracted from specific graph visits. The kernel manages to keep the
same complexity of state-of-the-art kernels while implicitly using a larger
feature space. We further present an approximated variant of the kernel which
reduces its complexity significantly. Experimental results obtained on six
real-world datasets show that the kernel is the best performing one on most of
them. Moreover, in most cases the approximated version reaches comparable
performances to current state-of-the-art kernels in terms of classification
accuracy while greatly shortening the running times.
| Giovanni Da San Martino, Nicol\`o Navarin and Alessandro Sperduti | 10.1109/TNNLS.2017.2705694 | 1509.01116 | null | null |
Semi-described and semi-supervised learning with Gaussian processes | stat.ML cs.AI cs.LG math.PR | Propagating input uncertainty through non-linear Gaussian process (GP)
mappings is intractable. This hinders the task of training GPs using uncertain
and partially observed inputs. In this paper we refer to this task as
"semi-described learning". We then introduce a GP framework that solves both,
the semi-described and the semi-supervised learning problems (where missing
values occur in the outputs). Auto-regressive state space simulation is also
recognised as a special case of semi-described learning. To achieve our goal we
develop variational methods for handling semi-described inputs in GPs, and
couple them with algorithms that allow for imputing the missing values while
treating the uncertainty in a principled, Bayesian manner. Extensive
experiments on simulated and real-world data study the problems of iterative
forecasting and regression/classification with missing values. The results
suggest that the principled propagation of uncertainty stemming from our
framework can significantly improve performance in these tasks.
| Andreas Damianou, Neil D. Lawrence | null | 1509.01168 | null | null |
Fast Clustering and Topic Modeling Based on Rank-2 Nonnegative Matrix
Factorization | cs.LG cs.IR cs.NA | The importance of unsupervised clustering and topic modeling is well
recognized with ever-increasing volumes of text data. In this paper, we propose
a fast method for hierarchical clustering and topic modeling called HierNMF2.
Our method is based on fast Rank-2 nonnegative matrix factorization (NMF) that
performs binary clustering and an efficient node splitting rule. Further
utilizing the final leaf nodes generated in HierNMF2 and the idea of
nonnegative least squares fitting, we propose a new clustering/topic modeling
method called FlatNMF2 that recovers a flat clustering/topic modeling result in
a very simple yet significantly more effective way than any other existing
methods. We implement highly optimized open source software in C++ for both
HierNMF2 and FlatNMF2 for hierarchical and partitional clustering/topic
modeling of document data sets.
Substantial experimental tests are presented that illustrate significant
improvements both in computational time and in the quality of solutions. We
compare our methods to other clustering methods including K-means, standard
NMF, and CLUTO, and also topic modeling methods including latent Dirichlet
allocation (LDA) and recently proposed algorithms for NMF with separability
constraints. Overall, we present efficient tools for analyzing large-scale data
sets, and techniques that can be generalized to many other data analytics
problem domains.
| Da Kuang, Barry Drake, Haesun Park | null | 1509.01208 | null | null |
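A toy sketch of the hierarchical idea: recursively binary-split the documents with a rank-2 NMF at each node. The depth-limited recursion and the generic scikit-learn NMF solver below are simplifying assumptions; the actual HierNMF2 uses a dedicated Rank-2 algorithm and a principled node-splitting rule.

```python
# Toy recursion: split columns of a term-document matrix with rank-2 NMF.
import numpy as np
from sklearn.decomposition import NMF

def hier_rank2(X, docs, depth):
    """Return leaf document sets from depth-limited recursive binary splits."""
    if depth == 0 or len(docs) < 4:
        return [docs]
    model = NMF(n_components=2, init="nndsvda", max_iter=500).fit(X[:, docs])
    labels = model.components_.argmax(axis=0)      # assign each doc to one of 2 topics
    left, right = docs[labels == 0], docs[labels == 1]
    if len(left) == 0 or len(right) == 0:
        return [docs]
    return hier_rank2(X, left, depth - 1) + hier_rank2(X, right, depth - 1)

rng = np.random.default_rng(0)
X = rng.poisson(0.3, size=(1000, 200)).astype(float)   # toy term-document counts
leaves = hier_rank2(X, np.arange(200), depth=3)
print([len(leaf) for leaf in leaves])
```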
Train faster, generalize better: Stability of stochastic gradient
descent | cs.LG math.OC stat.ML | We show that parametric models trained by a stochastic gradient method (SGM)
with few iterations have vanishing generalization error. We prove our results
by arguing that SGM is algorithmically stable in the sense of Bousquet and
Elisseeff. Our analysis only employs elementary tools from convex and
continuous optimization. We derive stability bounds for both convex and
non-convex optimization under standard Lipschitz and smoothness assumptions.
Applying our results to the convex case, we provide new insights for why
multiple epochs of stochastic gradient methods generalize well in practice. In
the non-convex case, we give a new interpretation of common practices in neural
networks, and formally show that popular techniques for training large deep
models are indeed stability-promoting. Our findings conceptually underscore the
importance of reducing training time beyond its obvious benefit.
| Moritz Hardt and Benjamin Recht and Yoram Singer | null | 1509.01240 | null | null |
Machine Learning Methods to Analyze Arabidopsis Thaliana Plant Root
Growth | cs.LG | One of the challenging problems in biology is to classify plants based on
their reaction on genetic mutation. Arabidopsis Thaliana is a plant that is so
interesting, because its genetic structure has some similarities with that of
human beings. Biologists classify the type of this plant to mutated and not
mutated (wild) types. Phenotypic analysis of these types is a time-consuming
and costly effort by individuals. In this paper, we propose a modified feature
extraction step by using velocity and acceleration of root growth. In the
second step, for plant classification, we employed different Support Vector
Machine (SVM) kernels and two hybrid systems of neural networks. Gated Negative
Correlation Learning (GNCL) and Mixture of Negatively Correlated Experts (MNCE)
are two ensemble methods based on complementary feature of classical
classifiers; Mixture of Expert (ME) and Negative Correlation Learning (NCL).
The hybrid systems conserve of advantages and decrease the effects of
disadvantages of NCL and ME. Our experiments show that MNCE and GNCL improve the performance of classical classifiers; however, some SVM kernel functions perform better than the classifiers based on neural network ensemble methods. Moreover, the kernels require less time to obtain a classification rate.
| Hamidreza Farhidzadeh | null | 1509.01270 | null | null |
Probabilistic Neural Network Training for Semi-Supervised Classifiers | cs.LG | In this paper, we propose another version of help-training approach by
employing a Probabilistic Neural Network (PNN) that improves the performance of
the main discriminative classifier in the semi-supervised strategy. We
introduce the PNN-training algorithm and use it for training the support vector
machine (SVM) with a few numbers of labeled data and a large number of
unlabeled data. We try to find the best labels for unlabeled data and then use
SVM to enhance the classification rate. We test our method on two famous
benchmarks and show the efficiency of our method in comparison with previous
methods.
| Hamidreza Farhidzadeh | null | 1509.01271 | null | null |
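A rough sketch of the help-training idea: a Parzen-window (PNN-style) classifier pseudo-labels the unlabeled pool, and an SVM is then trained on the expanded label set. The toy two-moons data, kernel width and SVM settings are assumptions, not the paper's experimental setup.

```python
# Sketch: PNN-style pseudo-labeling followed by SVM training (toy data).
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=600, noise=0.15, random_state=0)
labeled = np.zeros(len(y), dtype=bool)
labeled[np.random.default_rng(0).choice(len(y), 20, replace=False)] = True

def pnn_predict(Xq, Xl, yl, sigma=0.2):
    """Pick the class with the largest summed Gaussian kernel to its examples."""
    d2 = ((Xq[:, None, :] - Xl[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(yl)
    scores = np.stack([K[:, yl == c].sum(axis=1) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]

pseudo = y.copy()
pseudo[~labeled] = pnn_predict(X[~labeled], X[labeled], y[labeled])
svm = SVC(kernel="rbf", gamma=5.0).fit(X, pseudo)     # train on real + pseudo labels
print("accuracy against the true labels:", svm.score(X, y))
```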
Incremental Active Opinion Learning Over a Stream of Opinionated
Documents | cs.IR cs.CL cs.LG | Applications that learn from opinionated documents, like tweets or product
reviews, face two challenges. First, the opinionated documents constitute an
evolving stream, where both the author's attitude and the vocabulary itself may
change. Second, labels of documents are scarce and labels of words are
unreliable, because the sentiment of a word depends on the (unknown) context in
the author's mind. Most of the research on mining over opinionated streams
focuses on the first aspect of the problem, whereas for the second a continuous
supply of labels from the stream is assumed. Such an assumption though is
utopian as the stream is infinite and the labeling cost is prohibitive. To this
end, we investigate the potential of active stream learning algorithms that ask
for labels on demand. Our proposed ACOSTREAM approach works with limited
labels: it uses an initial seed of labeled documents, occasionally requests
additional labels for documents from the human expert and incrementally adapts
to the underlying stream while exploiting the available labeled documents. At its core, ACOSTREAM consists of an MNB classifier coupled with "sampling"
strategies for requesting class labels for new unlabeled documents. In the
experiments, we evaluate the classifier performance over time by varying: (a)
the class distribution of the opinionated stream, while assuming that the set
of the words in the vocabulary is fixed but their polarities may change with
the class distribution; and (b) the number of unknown words arriving at each
moment, while the class polarity may also change. Our results show that active
learning on a stream of opinionated documents delivers good performance while requiring only a small selection of labels.
| Max Zimmermann, Eirini Ntoutsi, Myra Spiliopoulou | null | 1509.01288 | null | null |
l1-norm Penalized Orthogonal Forward Regression | cs.LG stat.ML | A l1-norm penalized orthogonal forward regression (l1-POFR) algorithm is
proposed based on the concept of leave-one-out mean square error (LOOMSE).
Firstly, a new l1-norm penalized cost function is defined in the constructed
orthogonal space, and each orthogonal basis is associated with an individually
tunable regularization parameter. Secondly, due to orthogonal computation, the
LOOMSE can be analytically computed without actually splitting the data set,
and moreover a closed form of the optimal regularization parameter in terms of
minimal LOOMSE is derived. Thirdly, a lower bound for regularization parameters
is proposed, which can be used for robust LOOMSE estimation by adaptively
detecting and removing regressors to an inactive set so that the computational
cost of the algorithm is significantly reduced. Illustrative examples are
included to demonstrate the effectiveness of this new l1-POFR approach.
| Xia Hong, Sheng Chen, Yi Guo, Junbin Gao | null | 1509.01323 | null | null |
Deep Broad Learning - Big Models for Big Data | cs.LG | Deep learning has demonstrated the power of detailed modeling of complex
high-order (multivariate) interactions in data. For some learning tasks there
is power in learning models that are not only Deep but also Broad. By Broad, we
mean models that incorporate evidence from large numbers of features. This is
of especial value in applications where many different features and
combinations of features all carry small amounts of information about the
class. The most accurate models will integrate all that information. In this
paper, we propose an algorithm for Deep Broad Learning called DBL. The proposed
algorithm has a tunable parameter $n$, that specifies the depth of the model.
It provides straightforward paths towards out-of-core learning for large data.
We demonstrate that DBL learns models from large quantities of data with
accuracy that is highly competitive with the state-of-the-art.
| Nayyar A. Zaidi, Geoffrey I. Webb, Mark J. Carman, Francois Petitjean | null | 1509.01346 | null | null |
Parallel and Distributed Approaches for Graph Based Semi-supervised
Learning | cs.LG | Two approaches for graph based semi-supervised learning are proposed. The
first approach is based on iteration of an affine map. A key element of the affine map iteration is sparse matrix-vector multiplication, which has several very efficient parallel implementations. The second approach belongs to the class of Markov Chain Monte Carlo (MCMC) algorithms. It is based on sampling of nodes by performing a random walk on the graph. The latter approach is distributed by its nature and can be easily implemented on several processors or over the network. Both theoretical and practical evaluations are provided. It is found that the nodes are classified into their class with very small error. The sampling algorithm's ability to track new incoming nodes and to classify them is also demonstrated.
| Konstantin Avrachenkov (MAESTRO), Vivek Borkar, Krishnakant Saboo | null | 1509.01349 | null | null |
Diffusion-KLMS Algorithm and its Performance Analysis for Non-Linear
Distributed Networks | cs.LG cs.DC cs.IT cs.SY math.IT | In a distributed network environment, the diffusion-least mean squares (LMS)
algorithm gives faster convergence than the original LMS algorithm. It has also
been observed that, the diffusion-LMS generally outperforms other distributed
LMS algorithms like spatial LMS and incremental LMS. However, both the original
LMS and diffusion-LMS are not applicable in non-linear environments where data
may not be linearly separable. A variant of LMS called kernel-LMS (KLMS) has
been proposed in the literature for such non-linearities. In this paper, we
propose a kernelised version of diffusion-LMS for non-linear distributed
environments. Simulations show that the proposed approach has superior
convergence as compared to algorithms of the same genre. We also introduce a
technique to predict the transient and steady-state behaviour of the proposed
algorithm. The techniques proposed in this work (or algorithms of same genre)
can be easily extended to distributed parameter estimation applications like
cooperative spectrum sensing and massive multiple input multiple output (MIMO)
receiver design which are potential components for 5G communication systems.
| Rangeet Mitra and Vimal Bhatia | null | 1509.01352 | null | null |
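A minimal single-node kernel-LMS sketch on a toy nonlinear regression task; the diffusion (combine-across-neighbors) step that the paper adds is omitted, and the Gaussian kernel width and step size are assumed values.

```python
# Single-node KLMS sketch; the diffusion/combination step is omitted.
import numpy as np

def gauss_kernel(x, centers, sigma=1.0):
    return np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
d = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(500)    # nonlinear desired signal

eta, centers, alphas, sq_err = 0.5, [], [], []
for x, dn in zip(X, d):
    y = float(np.dot(alphas, gauss_kernel(x, np.asarray(centers)))) if centers else 0.0
    e = dn - y                    # instantaneous error
    centers.append(x)             # KLMS grows a dictionary of past inputs
    alphas.append(eta * e)        # coefficient of the newly added kernel unit
    sq_err.append(e ** 2)

print("MSE over first 50 samples:", np.mean(sq_err[:50]))
print("MSE over last 50 samples: ", np.mean(sq_err[-50:]))
```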
CNN Based Hashing for Image Retrieval | cs.CV cs.LG | Along with data on the web increasing dramatically, hashing is becoming more
and more popular as a method of approximate nearest neighbor search. Previous
supervised hashing methods utilized similarity/dissimilarity matrix to get
semantic information. But the matrix is not easy to construct for a new
dataset. Rather than reconstructing the matrix, we propose a straightforward CNN-based hashing method, i.e. binarizing the activations of a fully
connected layer with threshold 0 and taking the binary result as hash codes.
This method achieved the best performance on CIFAR-10 and was comparable with
the state-of-the-art on MNIST. And our experiments on CIFAR-10 suggested that
the signs of activations may carry more information than the relative values of
activations between samples, and that the co-adaption between feature extractor
and hash functions is important for hashing.
| Jinma Guo and Jianmin Li | null | 1509.01354 | null | null |
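The hashing step itself is simple enough to sketch directly: binarize fully-connected activations at threshold 0 and retrieve by Hamming distance. The random matrices below merely stand in for real CNN activations, and the code length is an arbitrary choice.

```python
# Sketch of the hashing and retrieval step; random data stands in for CNN features.
import numpy as np

def hash_codes(activations):
    return (activations > 0).astype(np.uint8)      # sign-binarized hash codes

rng = np.random.default_rng(0)
db_codes = hash_codes(rng.standard_normal((10000, 48)))    # 48-bit codes for the database
q_code = hash_codes(rng.standard_normal((1, 48)))          # query code

hamming = np.count_nonzero(db_codes != q_code, axis=1)
top10 = np.argsort(hamming)[:10]                           # retrieval by Hamming distance
print(top10, hamming[top10])
```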
Predicting SLA Violations in Real Time using Online Machine Learning | cs.NI cs.LG cs.SE stat.ML | Detecting faults and SLA violations in a timely manner is critical for
telecom providers, in order to avoid loss in business, revenue and reputation.
At the same time predicting SLA violations for user services in telecom
environments is difficult, due to time-varying user demands and infrastructure
load conditions.
In this paper, we propose a service-agnostic online learning approach,
whereby the behavior of the system is learned on the fly, in order to predict
client-side SLA violations. The approach uses device-level metrics, which are
collected in a streaming fashion on the server side.
Our results show that the approach can produce highly accurate predictions
(>90% classification accuracy and < 10% false alarm rate) in scenarios where
SLA violations are predicted for a video-on-demand service under changing load
patterns. The paper also highlights the limitations of traditional offline
learning methods, which perform significantly worse in many of the considered
scenarios.
| Jawwad Ahmed, Andreas Johnsson, Rerngvit Yanggratoke, John Ardelius,
Christofer Flinta, Rolf Stadler | null | 1509.01386 | null | null |
Coordinate Descent Methods for Symmetric Nonnegative Matrix
Factorization | cs.NA cs.CV cs.LG math.OC stat.ML | Given a symmetric nonnegative matrix $A$, symmetric nonnegative matrix
factorization (symNMF) is the problem of finding a nonnegative matrix $H$,
usually with much fewer columns than $A$, such that $A \approx HH^T$. SymNMF
can be used for data analysis and in particular for various clustering tasks.
In this paper, we propose simple and very efficient coordinate descent schemes
to solve this problem, and that can handle large and sparse input matrices. The
effectiveness of our methods is illustrated on synthetic and real-world data
sets, and we show that they perform favorably compared to recent
state-of-the-art methods.
| Arnaud Vandaele, Nicolas Gillis, Qi Lei, Kai Zhong, Inderjit Dhillon | 10.1109/TSP.2016.2591510 | 1509.01404 | null | null |
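For illustration only, the sketch below fits the symNMF objective $\|A - HH^T\|_F^2$ with plain projected gradient descent on a planted block matrix; the paper's per-entry coordinate descent updates are different and considerably more efficient, and the step size and iteration count here are assumptions.

```python
# Baseline sketch for the symNMF objective using projected gradient descent.
import numpy as np

def symnmf_pgd(A, r, steps=3000, lr=2e-4, seed=0):
    H = np.random.default_rng(seed).uniform(0, 1, size=(A.shape[0], r))
    for _ in range(steps):
        grad = 4 * (H @ (H.T @ H) - A @ H)        # gradient of ||A - HH^T||_F^2
        H = np.maximum(H - lr * grad, 0.0)        # project onto the nonnegative orthant
    return H

rng = np.random.default_rng(1)
B = rng.uniform(0, 1, size=(60, 3))               # planted nonnegative factor
A = B @ B.T                                       # symmetric nonnegative input
H = symnmf_pgd(A, r=3)
print("relative error:", np.linalg.norm(A - H @ H.T) / np.linalg.norm(A))
```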
Quantization based Fast Inner Product Search | cs.AI cs.LG stat.ML | We propose a quantization based approach for fast approximate Maximum Inner
Product Search (MIPS). Each database vector is quantized in multiple subspaces
via a set of codebooks, learned directly by minimizing the inner product
quantization error. Then, the inner product of a query to a database vector is
approximated as the sum of inner products with the subspace quantizers.
Different from recently proposed LSH approaches to MIPS, the database vectors
and queries do not need to be augmented in a higher dimensional feature space.
We also provide a theoretical analysis of the proposed approach, consisting of
the concentration results under mild assumptions. Furthermore, if a small
sample of example queries is given at the training time, we propose a modified
codebook learning procedure which further improves the accuracy. Experimental
results on a variety of datasets including those arising from deep neural
networks show that the proposed approach significantly outperforms the existing
state-of-the-art.
| Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski and David Simcha | null | 1509.01469 | null | null |
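A simplified product-quantization sketch of the search scheme: split vectors into subspaces, quantize each subspace with a codebook, and approximate a query's inner product as a sum of per-subspace look-ups. The codebooks below are learned with ordinary k-means (reconstruction error) rather than by minimizing inner-product quantization error as the paper proposes, and all sizes are arbitrary.

```python
# Simplified product-quantized inner-product search (codebooks via plain k-means).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
d, m, k, n = 64, 8, 16, 5000              # dim, subspaces, codewords/subspace, db size
X = rng.standard_normal((n, d))
sub = np.split(np.arange(d), m)           # contiguous subspace index blocks

codebooks, codes = [], np.empty((n, m), dtype=int)
for j, idx in enumerate(sub):
    km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(X[:, idx])
    codebooks.append(km.cluster_centers_)
    codes[:, j] = km.labels_

q = rng.standard_normal(d)
lut = np.stack([codebooks[j] @ q[idx] for j, idx in enumerate(sub)])   # (m, k) look-ups
approx_ip = lut[np.arange(m), codes].sum(axis=1)   # sum of per-subspace inner products
exact_ip = X @ q
print("top-1 agrees with exact search:", int(np.argmax(approx_ip)) == int(np.argmax(exact_ip)))
```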
EM Algorithms for Weighted-Data Clustering with Application to
Audio-Visual Scene Analysis | cs.CV cs.LG stat.ML | Data clustering has received a lot of attention and numerous methods,
algorithms and software packages are available. Among these techniques,
parametric finite-mixture models play a central role due to their interesting
mathematical properties and to the existence of maximum-likelihood estimators
based on expectation-maximization (EM). In this paper we propose a new mixture
model that associates a weight with each observed point. We introduce the
weighted-data Gaussian mixture and we derive two EM algorithms. The first one
considers a fixed weight for each observation. The second one treats each
weight as a random variable following a gamma distribution. We propose a model
selection method based on a minimum message length criterion, provide a weight
initialization strategy, and validate the proposed algorithms by comparing them
with several state of the art parametric and non-parametric clustering
techniques. We also demonstrate the effectiveness and robustness of the
proposed clustering technique in the presence of heterogeneous data, namely
audio-visual scene analysis.
| Israel D. Gebru, Xavier Alameda-Pineda, Florence Forbes and Radu
Horaud | 10.1109/TPAMI.2016.2522425 | 1509.01509 | null | null |
Minimum Spectral Connectivity Projection Pursuit | stat.ML cs.LG | We study the problem of determining the optimal low dimensional projection
for maximising the separability of a binary partition of an unlabelled dataset,
as measured by spectral graph theory. This is achieved by finding projections
which minimise the second eigenvalue of the graph Laplacian of the projected
data, which corresponds to a non-convex, non-smooth optimisation problem. We
show that the optimal univariate projection based on spectral connectivity
converges to the vector normal to the maximum margin hyperplane through the
data, as the scaling parameter is reduced to zero. This establishes a
connection between connectivity as measured by spectral graph theory and
maximal Euclidean separation. The computational cost associated with each
eigen-problem is quadratic in the number of data. To mitigate this issue, we
propose an approximation method using microclusters with provable approximation
error bounds. Combining multiple binary partitions within a divisive
hierarchical model allows us to construct clustering solutions admitting
clusters with varying scales and lying within different subspaces. We evaluate
the performance of the proposed method on a large collection of benchmark
datasets and find that it compares favourably with existing methods for
projection pursuit and dimension reduction for data clustering.
| David P. Hofmeyr and Nicos G. Pavlidis and Idris A. Eckley | 10.1007/s11222-018-9814-6 | 1509.01546 | null | null |
Giraffe: Using Deep Reinforcement Learning to Play Chess | cs.AI cs.LG cs.NE | This report presents Giraffe, a chess engine that uses self-play to discover
all its domain-specific knowledge, with minimal hand-crafted knowledge given by
the programmer. Unlike previous attempts using machine learning only to perform
parameter-tuning on hand-crafted evaluation functions, Giraffe's learning
system also performs automatic feature extraction and pattern recognition. The
trained evaluation function performs comparably to the evaluation functions of
state-of-the-art chess engines - all of which contain thousands of lines of
carefully hand-crafted pattern recognizers, tuned over many years by both
computer chess experts and human chess masters. Giraffe is the most successful
attempt thus far at using end-to-end machine learning to play chess.
| Matthew Lai | null | 1509.01549 | null | null |
Efficient Sampling for k-Determinantal Point Processes | cs.LG | Determinantal Point Processes (DPPs) are elegant probabilistic models of
repulsion and diversity over discrete sets of items. But their applicability to
large sets is hindered by expensive cubic-complexity matrix operations for
basic tasks such as sampling. In light of this, we propose a new method for
approximate sampling from discrete $k$-DPPs. Our method takes advantage of the
diversity property of subsets sampled from a DPP, and proceeds in two stages:
first it constructs coresets for the ground set of items; thereafter, it
efficiently samples subsets based on the constructed coresets. As opposed to
previous approaches, our algorithm aims to minimize the total variation
distance to the original distribution. Experiments on both synthetic and real
datasets indicate that our sampling algorithm works efficiently on large data
sets, and yields more accurate samples than previous approaches.
| Chengtao Li, Stefanie Jegelka and Suvrit Sra | null | 1509.01618 | null | null |
Character-level Convolutional Networks for Text Classification | cs.LG cs.CL | This article offers an empirical exploration on the use of character-level
convolutional networks (ConvNets) for text classification. We constructed
several large-scale datasets to show that character-level convolutional
networks could achieve state-of-the-art or competitive results. Comparisons are
offered against traditional models such as bag of words, n-grams and their
TFIDF variants, and deep learning models such as word-based ConvNets and
recurrent neural networks.
| Xiang Zhang, Junbo Zhao, Yann LeCun | null | 1509.01626 | null | null |
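As a sketch of the input encoding such character-level models typically consume, the snippet below quantizes raw text into a fixed-length sequence of one-hot character vectors; the alphabet and maximum length are assumptions chosen for illustration, not necessarily the paper's exact configuration.

```python
# Sketch of character quantization: fixed alphabet, fixed length, one-hot columns.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}"
CHAR2IDX = {c: i for i, c in enumerate(ALPHABET)}
MAX_LEN = 1014

def quantize(text):
    """Return a (len(ALPHABET), MAX_LEN) one-hot matrix; unknown chars stay all-zero."""
    x = np.zeros((len(ALPHABET), MAX_LEN), dtype=np.float32)
    for pos, ch in enumerate(text.lower()[:MAX_LEN]):
        idx = CHAR2IDX.get(ch)
        if idx is not None:
            x[idx, pos] = 1.0
    return x

x = quantize("Character-level ConvNets read raw text one character at a time.")
print(x.shape, int(x.sum()))
```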
Reinforcement Learning with Parameterized Actions | cs.AI cs.LG | We introduce a model-free algorithm for learning in Markov decision processes
with parameterized actions-discrete actions with continuous parameters. At each
step the agent must select both which action to use and which parameters to use
with that action. We introduce the Q-PAMDP algorithm for learning in these
domains, show that it converges to a local optimum, and compare it to direct
policy search in the goal-scoring and Platform domains.
| Warwick Masson, Pravesh Ranchod, George Konidaris | null | 1509.01644 | null | null |
Gravitational Clustering | cs.LG | The downfall of many supervised learning algorithms, such as neural networks,
is the inherent need for a large amount of training data. Although there is a
lot of buzz about big data, there is still the problem of doing classification
from a small dataset. Other methods such as support vector machines, although
capable of dealing with few samples, are inherently binary classifiers, and are
in need of learning strategies such as One vs All in the case of
multi-classification. In the presence of a large number of classes this can
become problematic. In this paper we present a novel approach to supervised
learning through the method of clustering. Unlike traditional methods such as
K-Means, Gravitational Clustering does not require the initial number of clusters and builds the clusters automatically; individual samples can be arbitrarily weighted, and it requires only a few samples while staying resilient to over-fitting.
| Armen Aghajanyan | null | 1509.01659 | null | null |
HAMSI: A Parallel Incremental Optimization Algorithm Using Quadratic
Approximations for Solving Partially Separable Problems | stat.ML cs.LG | We propose HAMSI (Hessian Approximated Multiple Subsets Iteration), which is
a provably convergent, second order incremental algorithm for solving
large-scale partially separable optimization problems. The algorithm is based
on a local quadratic approximation, and hence, allows incorporating curvature
information to speed-up the convergence. HAMSI is inherently parallel and it
scales nicely with the number of processors. Combined with techniques for
effectively utilizing modern parallel computer architectures, we illustrate
that the proposed method converges more rapidly than a parallel stochastic
gradient descent when both methods are used to solve large-scale matrix
factorization problems. This performance gain comes only at the expense of
using memory that scales linearly with the total size of the optimization
variables. We conclude that HAMSI may be considered as a viable alternative in
many large scale problems, where first order methods based on variants of
stochastic gradient descent are applicable.
| Kamer Kaya, Figen \"Oztoprak, \c{S}. \.Ilker Birbil, A. Taylan Cemgil,
Umut \c{S}im\c{s}ekli, Nurdan Kuru, Hazal Koptagel, M. Kaan \"Ozt\"urk | null | 1509.01698 | null | null |
Theoretic Analysis and Extremely Easy Algorithms for Domain Adaptive
Feature Learning | cs.LG | Domain adaptation problems arise in a variety of applications, where a
training dataset from the \textit{source} domain and a test dataset from the
\textit{target} domain typically follow different distributions. The primary
difficulty in designing effective learning models to solve such problems lies
in how to bridge the gap between the source and target distributions. In this
paper, we provide comprehensive analysis of feature learning algorithms used in
conjunction with linear classifiers for domain adaptation. Our analysis shows
that in order to achieve good adaptation performance, the second moments of the
source domain distribution and target domain distribution should be similar.
Based on our new analysis, a novel extremely easy feature learning algorithm
for domain adaptation is proposed. Furthermore, our algorithm is extended by
leveraging multiple layers, leading to a deep linear model. We evaluate the
effectiveness of the proposed algorithms in terms of domain adaptation tasks on
the Amazon review dataset and the spam dataset from the ECML/PKDD 2006
discovery challenge.
| Wenhao Jiang, Cheng Deng, Wei Liu, Feiping Nie, Fu-lai Chung, Heng
Huang | null | 1509.01710 | null | null |
Theoretical and Experimental Analyses of Tensor-Based Regression and
Classification | cs.LG stat.ML | We theoretically and experimentally investigate tensor-based regression and
classification. Our focus is regularization with various tensor norms,
including the overlapped trace norm, the latent trace norm, and the scaled
latent trace norm. We first give dual optimization methods using the
alternating direction method of multipliers, which is computationally efficient
when the number of training samples is moderate. We then theoretically derive
an excess risk bound for each tensor norm and clarify their behavior. Finally,
we perform extensive experiments using simulated and real data and demonstrate
the superiority of tensor-based learning methods over vector- and matrix-based
learning methods.
| Kishan Wimalawarne, Ryota Tomioka and Masashi Sugiyama | null | 1509.01770 | null | null |
Sampled Weighted Min-Hashing for Large-Scale Topic Mining | cs.LG cs.CL cs.IR | We present Sampled Weighted Min-Hashing (SWMH), a randomized approach to
automatically mine topics from large-scale corpora. SWMH generates multiple
random partitions of the corpus vocabulary based on term co-occurrence and
agglomerates highly overlapping inter-partition cells to produce the mined
topics. While other approaches define a topic as a probabilistic distribution
over a vocabulary, SWMH topics are ordered subsets of such vocabulary.
Interestingly, the topics mined by SWMH underlie themes from the corpus at
different levels of granularity. We extensively evaluate the meaningfulness of
the mined topics both qualitatively and quantitatively on the NIPS (1.7 K
documents), 20 Newsgroups (20 K), Reuters (800 K) and Wikipedia (4 M) corpora.
Additionally, we compare the quality of SWMH with Online LDA topics for
document representation in classification.
| Gibran Fuentes-Pineda and Ivan Vladimir Meza-Ruiz | 10.1007/978-3-319-19264-2_20 | 1509.01771 | null | null |
Research: Analysis of Transport Model that Approximates Decision Taker's
Preferences | cs.LG cs.AI math.OC stat.AP | The paper provides a method for solving the reverse Monge-Kantorovich transport problem (TP). It makes it possible to accumulate the positive decision-taking experience of a decision-taker in situations that can be presented in the form of a TP. The initial data for the solution of the inverse TP consist of information on orders, inventories and effective decisions taken by the decision-taker. The result of solving the inverse TP contains estimates of the TP's payoff matrix elements, which can be used in new situations to select the solution corresponding to the preferences of the decision-taker. The method thus captures the decision-taker's experience so that it can be reused by others, and builds a model of the decision-taker's preferences in a specific application area. The model can be updated regularly to ensure its relevance and its adequacy to the decision-taker's system of preferences, keeping it adaptive to the decision-taker's current preferences.
| Valery Vilisov | 10.13140/RG.2.1.5085.6166 | 1509.01815 | null | null |
On collapsed representation of hierarchical Completely Random Measures | math.ST cs.LG stat.TH | The aim of the paper is to provide an exact approach for generating a Poisson
process sampled from a hierarchical CRM, without having to instantiate the
infinitely many atoms of the random measures. We use completely random
measures~(CRM) and hierarchical CRM to define a prior for Poisson processes. We
derive the marginal distribution of the resultant point process, when the
underlying CRM is marginalized out. Using well known properties unique to
Poisson processes, we were able to derive an exact approach for instantiating a
Poisson process with a hierarchical CRM prior. Furthermore, we derive Gibbs
sampling strategies for hierarchical CRM models based on Chinese restaurant
franchise sampling scheme. As an example, we present the sum of generalized
gamma process (SGGP), and show its application in topic-modelling. We show that
one can determine the power-law behaviour of the topics and words in a Bayesian
fashion, by defining a prior on the parameters of SGGP.
| Gaurav Pandey and Ambedkar Dukkipati | null | 1509.01817 | null | null |
Deep Online Convex Optimization by Putting Forecaster to Sleep | cs.LG cs.GT cs.NE | Methods from convex optimization such as accelerated gradient descent are
widely used as building blocks for deep learning algorithms. However, the
reasons for their empirical success are unclear, since neural networks are not
convex and standard guarantees do not apply. This paper develops the first
rigorous link between online convex optimization and error backpropagation on
convolutional networks. The first step is to introduce circadian games, a mild
generalization of convex games with similar convergence properties. The main
result is that error backpropagation on a convolutional network is equivalent
to playing out a circadian game. It follows immediately that the waking-regret
of players in the game (the units in the neural network) controls the overall
rate of convergence of the network. Finally, we explore some implications of
the results: (i) we describe the representations learned by a neural network
game-theoretically, (ii) propose a learning setting at the level of individual
units that can be plugged into deep architectures, and (iii) propose a new
approach to adaptive model selection by applying bandit algorithms to choose
which players to wake on each round.
| David Balduzzi | null | 1509.01851 | null | null |
Hierarchical Deep Learning Architecture For 10K Objects Classification | cs.CV cs.LG cs.NE | Evolution of visual object recognition architectures based on Convolutional
Neural Networks & Convolutional Deep Belief Networks paradigms has
revolutionized artificial Vision Science. These architectures extract & learn
the real world hierarchical visual features utilizing supervised & unsupervised
learning approaches respectively. Neither approach, however, scales up realistically to provide recognition for a very large number of objects, as high as 10K. We propose a two-level hierarchical deep learning architecture inspired
by divide & conquer principle that decomposes the large scale recognition
architecture into root & leaf level model architectures. Each of the root &
leaf level models is trained exclusively to provide superior results than
possible by any 1-level deep learning architecture prevalent today. The
proposed architecture classifies objects in two steps. In the first step the
root level model classifies the object in a high level category. In the second
step, the leaf level recognition model for the recognized high level category
is selected among all the leaf models. This leaf level model is presented with
the same input object image which classifies it in a specific category. Also we
propose a blend of leaf level models trained with either supervised or
unsupervised learning approaches. Unsupervised learning is suitable whenever
labelled data is scarce for the specific leaf level models. Currently the
training of leaf level models is in progress; we have trained 25 out of the total 47 leaf level models so far, with a best-case top-5 error rate of 3.2% on the validation data set for those leaf models. We also demonstrate that the validation error of the leaf level models saturates towards the above-mentioned accuracy as the number of epochs is increased to more than sixty.
| Atul Laxman Katole, Krishna Prasad Yellapragada, Amish Kumar Bedi,
Sehaj Singh Kalra and Mynepalli Siva Chaitanya | 10.5121/csit.2015.51408 | 1509.01951 | null | null |
Automated Analysis of Behavioural Variability and Filial Imprinting of
Chicks (G. gallus), using Autonomous Robots | q-bio.QM cs.LG cs.RO physics.bio-ph | Inter-individual variability has various impacts in animal social behaviour.
This implies that not only collective behaviours have to be studied but also
the behavioural variability of each member composing the groups. To understand
those effects on group behaviour, we develop a quantitative methodology based
on automated ethograms and autonomous robots to study the inter-individual
variability among social animals. We choose chicks of \textit{Gallus gallus
domesticus} as a classic social animal model system for their suitability in
laboratory and controlled experimentation. Moreover, even domesticated chickens present social structures involving forms of leadership and filial imprinting.
We develop an imprinting methodology on autonomous robots to study individual
and social behaviour of free-moving animals. This allows us to quantify the behaviours of large numbers of animals. We develop an automated experimental methodology that allows relatively fast controlled experiments and efficient data analysis. Our analyses are based on high-throughput data, allowing a fine quantification of individual behavioural traits. We quantify
the efficiency of various state-of-the-art algorithms to automate data analysis
and produce automated ethograms. We show that the use of robots makes it possible to provide controlled and quantified stimuli to the animals in the absence of human intervention. We quantify the individual behaviour of 205 chicks obtained from
hatching after synchronized fecundation. Our results show a high variability of
individual behaviours and of imprinting quality and success. Three classes of
chicks are observed with various levels of imprinting. Our study shows that the
concomitant use of autonomous robots and automated ethograms allows detailed
and quantitative analysis of behavioural patterns of animals in controlled
laboratory experiments.
| A. Gribovskiy, F. Mondada, J.L. Deneubourg, L. Cazenille, N. Bredeche,
J. Halloy | null | 1509.01957 | null | null |
Data-selective Transfer Learning for Multi-Domain Speech Recognition | cs.LG cs.CL cs.SD | Negative transfer in training of acoustic models for automatic speech
recognition has been reported in several contexts such as domain change or
speaker characteristics. This paper proposes a novel technique to overcome
negative transfer by efficient selection of speech data for acoustic model
training. Here, data is chosen based on its relevance to a specific target. A submodular
function based on likelihood ratios is used to determine how acoustically
similar each training utterance is to a target test set. The approach is
evaluated on a wide-domain data set, covering speech from radio and TV
broadcasts, telephone conversations, meetings, lectures and read speech.
Experiments demonstrate that the proposed technique both finds relevant data
and limits negative transfer. Results on a 6--hour test set show a relative
improvement of 4% with data selection over using all data in PLP based models,
and 2% with DNN features.
| Mortaza Doulaty, Oscar Saz, Thomas Hain | null | 1509.02409 | null | null |
Improved Twitter Sentiment Prediction through Cluster-then-Predict Model | cs.IR cs.CL cs.LG cs.SI | Over the past decade humans have experienced exponential growth in the use of
online resources, in particular social media and microblogging websites such as
Facebook, Twitter, YouTube and also mobile applications such as WhatsApp, Line,
etc. Many companies have identified these resources as a rich mine of marketing
knowledge. This knowledge provides valuable feedback which allows them to
further develop the next generation of their product. In this paper, sentiment
analysis of a product is performed by extracting tweets about that product and
classifying the tweets as expressing positive or negative sentiment toward it. The
authors propose a hybrid approach which combines unsupervised learning in the
form of K-means clustering to cluster the tweets and then performing supervised
learning methods such as Decision Trees and Support Vector Machines for
classification.
| Rishabh Soni, K. James Mathai | null | 1509.02437 | null | null |
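A minimal cluster-then-predict sketch in scikit-learn: TF-IDF features, k-Means to group the tweets, then one linear SVM per cluster. The toy texts, cluster count and majority-class fallback rule are assumptions standing in for real tweet data and tuning.

```python
# Cluster-then-predict sketch: TF-IDF -> k-Means -> per-cluster linear SVMs.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

texts = ["love this phone", "battery is terrible", "great camera quality",
         "screen cracked after a day", "amazing value", "worst purchase ever"] * 20
labels = np.array([1, 0, 1, 0, 1, 0] * 20)

X = TfidfVectorizer().fit_transform(texts)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

models = {}
for c in np.unique(clusters):
    mask = clusters == c
    if len(np.unique(labels[mask])) > 1:               # train a per-cluster classifier
        models[c] = LinearSVC().fit(X[mask], labels[mask])

preds = np.empty(len(labels), dtype=int)
for c in np.unique(clusters):
    mask = clusters == c
    preds[mask] = (models[c].predict(X[mask]) if c in models
                   else int(labels[mask].mean() > 0.5))  # majority-class fallback
print("training accuracy:", (preds == labels).mean())
```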
A Behavior Analysis-Based Game Bot Detection Approach Considering
Various Play Styles | cs.LG cs.AI | An approach for game bot detection in MMORPGs is proposed based on the
analysis of game playing behavior. Since MMORPGs are large scale games, users
can play in various ways. This variety in playing behavior makes it hard to
detect game bots based on play behaviors. In order to cope with this problem,
the proposed approach observes game playing behaviors of users and groups them
by their behavioral similarities. Then, it develops a local bot detection model
for each player group. Since the locally optimized models can more accurately
detect game bots within each player group, the combination of those models
brings about overall improvement. For a practical purpose of reducing the
workloads of the game servers in service, the game data is collected at a low
resolution in time. Behavioral features are selected and developed to
accurately detect game bots with the low resolution data, considering common
aspects of MMORPG playing. Through the experiment with the real data from a
game currently in service, it is shown that the proposed local model approach
yields more accurate results.
| Yeounoh Chung, Chang-yong Park, Noo-ri Kim, Hana Cho, Taebok Yoon,
Hunjoo Lee and Jee-Hyong Lee | 10.4218/etrij.13.2013.0049 | 1509.02458 | null | null |
Deep Attributes from Context-Aware Regional Neural Codes | cs.CV cs.LG cs.NE | Recently, many studies employ the middle-layer outputs of convolutional neural
network models (CNN) as features for different visual recognition tasks.
Although promising results have been achieved in some empirical studies, such
type of representations still suffer from the well-known issue of semantic gap.
This paper proposes so-called deep attribute framework to alleviate this issue
from three aspects. First, we introduce object region proposals as intermediaries
to represent target images, and extract features from region proposals. Second,
we study aggregating features from different CNN layers for all region
proposals. The aggregation yields a holistic yet compact representation of
input images. Results show that cross-region max-pooling of soft-max layer
output outperforms all other layers. As the soft-max layer directly corresponds to
semantic concepts, this representation is named "deep attributes". Third, we
observe that only a small portion of the regions generated by the object proposal algorithm are correlated with the classification target. Therefore, we introduce
context-aware region refining algorithm to pick out contextual regions and
build context-aware classifiers.
We apply the proposed deep attributes framework for various vision tasks.
Extensive experiments are conducted on standard benchmarks for three visual
recognition tasks, i.e., image classification, fine-grained recognition and
visual instance retrieval. Results show that deep attribute approaches achieve
state-of-the-art results and outperform existing peer methods by a
significant margin, even though some benchmarks have little overlap of concepts
with the pre-trained CNN models.
| Jianwei Luo and Jianguo Li and Jun Wang and Zhiguo Jiang and Yurong
Chen | null | 1509.02470 | null | null |
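The core aggregation step named in the abstract above, cross-region max-pooling of soft-max scores over region proposals, reduces to a few lines once per-region scores are available; the array shapes below are assumptions, and the CNN forward pass and proposal generation are outside this sketch.

```python
# Hypothetical sketch of "deep attributes": max-pool per-region soft-max scores over all proposals.
import numpy as np

R, C = 50, 1000                  # assumed: 50 region proposals, 1000 soft-max classes
region_scores = np.random.rand(R, C)
region_scores /= region_scores.sum(axis=1, keepdims=True)    # rows behave like soft-max outputs

deep_attributes = region_scores.max(axis=0)                  # cross-region max-pooling -> one C-dim vector
deep_attributes /= np.linalg.norm(deep_attributes)           # optional L2 normalisation
print(deep_attributes.shape)     # (1000,)
```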
Optimizing Static and Adaptive Probing Schedules for Rapid Event
Detection | cs.DS cs.LG | We formulate and study a fundamental search and detection problem, Schedule
Optimization, motivated by a variety of real-world applications, ranging from
monitoring content changes on the web, social networks, and user activities to
detecting failures in large systems with many individual machines.
We consider a large system consisting of many nodes, where each node has its
own rate of generating new events, or items. A monitoring application can probe
a small number of nodes at each step, and our goal is to compute a probing
schedule that minimizes the expected number of undiscovered items in the
system, or equivalently, minimizes the expected time to discover a new item in
the system.
We study the Schedule Optimization problem both for deterministic and
randomized memoryless algorithms. We provide lower bounds on the cost of an
optimal schedule and construct close to optimal schedules with rigorous
mathematical guarantees. Finally, we present an adaptive algorithm that starts
with no prior information on the system and converges to the optimal memoryless
algorithm by adapting to observed data.
| Ahmad Mahmoody, Evgenios M. Kornaropoulos, and Eli Upfal | null | 1509.02487 | null | null |
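As a hedged illustration of the memoryless randomized schedules referred to in the preceding abstract, the simulation below estimates the average number of undiscovered items under a given probing distribution; the rates, horizon and the two candidate distributions are assumptions, and no claim is made that either matches the paper's optimal schedule.

```python
# Hypothetical simulation: memoryless randomized probing over nodes with different item rates.
import numpy as np

rng = np.random.default_rng(1)
rates = np.array([0.1, 0.5, 1.0, 2.0])          # assumed items generated per step at each node
T = 10_000                                      # simulation horizon (steps)

def avg_undiscovered(probe_probs):
    pending = np.zeros_like(rates)              # undiscovered items waiting at each node
    total = 0.0
    for _ in range(T):
        pending += rng.poisson(rates)           # new items arrive
        node = rng.choice(len(rates), p=probe_probs)
        pending[node] = 0                       # probing a node discovers all of its items
        total += pending.sum()
    return total / T

uniform = np.full(len(rates), 1 / len(rates))
proportional = rates / rates.sum()
print("uniform probing:", avg_undiscovered(uniform))
print("rate-proportional probing:", avg_undiscovered(proportional))
```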
DeepCough: A Deep Convolutional Neural Network in A Wearable Cough
Detection System | cs.NE cs.LG | In this paper, we present a system that employs a wearable acoustic sensor
and a deep convolutional neural network for detecting coughs. We evaluate the
performance of our system on 14 healthy volunteers and compare it to that of
other cough detection systems that have been reported in the literature.
Experimental results show that our system achieves a classification sensitivity
of 95.1% and a specificity of 99.5%.
| Justice Amoh and Kofi Odame | null | 1509.02512 | null | null |
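The abstract above does not specify the network architecture; the sketch below is a purely hypothetical small 2-D CNN over log-mel spectrogram patches in tf.keras, intended only to show the general shape of a binary cough/non-cough classifier of this kind.

```python
# Hypothetical sketch: a small CNN for binary cough detection on spectrogram patches.
# Input shape, layer sizes and training setup are assumptions, not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 32, 1)),      # assumed: 64 mel bands x 32 frames, 1 channel
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # cough vs. non-cough
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```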
Asynchronous Distributed ADMM for Large-Scale Optimization- Part I:
Algorithm and Convergence Analysis | cs.DC cs.LG cs.SY | Aiming at solving large-scale learning problems, this paper studies
distributed optimization methods based on the alternating direction method of
multipliers (ADMM). By formulating the learning problem as a consensus problem,
the ADMM can be used to solve the consensus problem in a fully parallel fashion
over a computer network with a star topology. However, traditional synchronized
computation does not scale well with the problem size, as the speed of the
algorithm is limited by the slowest workers. This is particularly true in a
heterogeneous network where the computing nodes experience different
computation and communication delays. In this paper, we propose an asynchronous
distributed ADMM (AD-ADMM), which can effectively improve the time efficiency of
distributed optimization. Our main interest lies in analyzing the convergence
conditions of the AD-ADMM, under the popular partially asynchronous model,
which is defined based on a maximum tolerable delay of the network.
Specifically, by considering general and possibly non-convex cost functions, we
show that the AD-ADMM is guaranteed to converge to the set of
Karush-Kuhn-Tucker (KKT) points as long as the algorithm parameters are chosen
appropriately according to the network delay. We further illustrate that the
asynchrony of the ADMM has to be handled with care, as slightly modifying the
implementation of the AD-ADMM can jeopardize the algorithm convergence, even
under a standard convex setting.
| Tsung-Hui Chang, Mingyi Hong, Wei-Cheng Liao and Xiangfeng Wang | 10.1109/TSP.2016.2537271 | 1509.02597 | null | null |
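For intuition about the consensus formulation mentioned in the abstract above, here is a minimal synchronous consensus-ADMM sketch for distributed least squares; the asynchronous, delay-tolerant machinery that the paper actually analyses is not reproduced, and the data, penalty parameter and iteration count are illustrative assumptions.

```python
# Hypothetical sketch: synchronous consensus ADMM for distributed least squares,
#   minimize sum_i 0.5 * ||A_i x - b_i||^2 over a shared variable x.
# The asynchronous (AD-ADMM) bookkeeping is omitted; only the consensus splitting is shown.
import numpy as np

rng = np.random.default_rng(0)
N, d, rho = 4, 5, 1.0                              # 4 workers, 5-dim variable (assumed)
A = [rng.standard_normal((20, d)) for _ in range(N)]
x_true = rng.standard_normal(d)
b = [Ai @ x_true + 0.01 * rng.standard_normal(20) for Ai in A]

x = [np.zeros(d) for _ in range(N)]                # local copies held by workers
u = [np.zeros(d) for _ in range(N)]                # scaled dual variables
z = np.zeros(d)                                    # consensus variable at the master

for _ in range(100):
    # Local x-updates (closed form for least squares), performed in parallel by workers.
    x = [np.linalg.solve(Ai.T @ Ai + rho * np.eye(d), Ai.T @ bi + rho * (z - ui))
         for Ai, bi, ui in zip(A, b, u)]
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)   # master averages
    u = [ui + xi - z for xi, ui in zip(x, u)]                # dual ascent

print(np.linalg.norm(z - x_true))                  # should be small after convergence
```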
Asynchronous Distributed ADMM for Large-Scale Optimization- Part II:
Linear Convergence Analysis and Numerical Performance | cs.DC cs.LG cs.SY | The alternating direction method of multipliers (ADMM) has been recognized as
a versatile approach for solving modern large-scale machine learning and signal
processing problems efficiently. When the data size and/or the problem
dimension is large, a distributed version of ADMM can be used, which is capable
of distributing the computation load and the data set to a network of computing
nodes. Unfortunately, a direct synchronous implementation of such an algorithm
does not scale well with the problem size, as the algorithm speed is limited by
the slowest computing nodes. To address this issue, in a companion paper, we
have proposed an asynchronous distributed ADMM (AD-ADMM) and studied its
worst-case convergence conditions. In this paper, we further the study by
characterizing the conditions under which the AD-ADMM achieves linear
convergence. Our conditions as well as the resulting linear rates reveal the
impact that various algorithm parameters, network delay and network size have
on the algorithm performance. To demonstrate the superior time efficiency of
the proposed AD-ADMM, we test the AD-ADMM on a high-performance computer
cluster by solving a large-scale logistic regression problem.
| Tsung-Hui Chang, Wei-Cheng Liao, Mingyi Hong and Xiangfeng Wang | 10.1109/TSP.2016.2537261 | 1509.02604 | null | null |
Finite Dictionary Variants of the Diffusion KLMS Algorithm | cs.SY cs.DC cs.IT cs.LG math.IT | Diffusion-based distributed learning approaches have been found to be a
viable solution for learning over linearly separable datasets over a network.
However, approaches to date are suited only to linearly separable datasets and
need to be extended to scenarios in which a non-linearity must be learned. In
such scenarios, the recently proposed diffusion kernel least mean squares
(KLMS) has been found to perform better than diffusion least mean squares
(LMS). The drawback of diffusion KLMS is that it requires unbounded storage for
observations (also called the dictionary). This paper formulates the diffusion KLMS
in a fixed budget setting such that the storage requirement is curtailed while
maintaining appreciable performance in terms of convergence. Simulations have
been carried out to validate the two newly proposed algorithms, quantised
diffusion KLMS (QDKLMS) and fixed-budget diffusion KLMS (FBDKLMS), against
KLMS; the results indicate that both proposed algorithms deliver better
performance than KLMS while reducing the dictionary storage requirement.
| Rangeet Mitra and Vimal Bhatia | null | 1509.02730 | null | null |
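For the fixed-storage idea in the abstract above, here is a single-node quantised KLMS sketch: a Gaussian-kernel online learner that adds a new input to the dictionary only if it lies farther than a threshold from every existing centre, and otherwise folds the update into the nearest centre's coefficient. The diffusion (network combination) step of the paper is not shown, and the kernel width, step size and threshold are assumptions.

```python
# Hypothetical sketch: quantised kernel LMS (single node); the diffusion step is omitted.
import numpy as np

def gauss(x, c, sigma=0.5):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

eta, eps_q = 0.5, 0.3          # step size and quantisation (dictionary) threshold (assumed)
centers, alphas = [], []       # dictionary of centres and their coefficients

def predict(x):
    return sum(a * gauss(x, c) for a, c in zip(alphas, centers))

rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-1, 1, size=2)
    d = np.sin(3 * x[0]) + 0.3 * x[1]              # assumed non-linear target
    e = d - predict(x)                             # prediction error
    if not centers:
        centers.append(x); alphas.append(eta * e)
        continue
    dists = [np.linalg.norm(x - c) for c in centers]
    j = int(np.argmin(dists))
    if dists[j] <= eps_q:
        alphas[j] += eta * e                       # merge the update into the nearest centre
    else:
        centers.append(x); alphas.append(eta * e)  # grow the dictionary

print("dictionary size:", len(centers))
```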
Clustering by Hierarchical Nearest Neighbor Descent (H-NND) | stat.ML cs.CV cs.LG stat.ME | Previously, in 2014, we proposed the Nearest Descent (ND) method, capable of
generating an efficient graph called the in-tree (IT). Due to several attractive
and effective features, this IT structure proves well suited for data
clustering. Although some redundant edges exist in the IT, they usually have
salient features and are thus not hard to remove.
Subsequently, in order to prevent the seemingly redundant edges from
occurring, we proposed the Nearest Neighbor Descent (NND) by adding the
"Neighborhood" constraint on ND. Consequently, clusters automatically emerged,
without the additional requirement of removing the redundant edges. However,
NND was still not perfect, since it introduced a new and worse problem: the
"over-partitioning" problem.
Now, in this paper, we propose a method, called the Hierarchical Nearest
Neighbor Descent (H-NND), which overcomes the over-partitioning problem of NND
by using a hierarchical strategy. Specifically, H-NND uses ND to effectively
merge the over-segmented sub-graphs or clusters that NND produces. Like ND,
H-NND also generates the IT structure, in which the redundant edges once again
appear. This seemingly returns to the situation that ND faces. However,
compared with ND, the redundant edges in the IT structure generated by H-NND
are generally more salient, and are thus much easier and more reliable to
identify, even by the simplest edge-removal method, which uses edge
length as the only measure. In other words, the IT structure constructed by
H-NND is better suited for data clustering. We demonstrate this on several
clustering datasets of varying shapes, dimensions and attributes. Moreover,
compared with ND, H-NND generally takes less computation time to construct the
IT data structure for the input data.
| Teng Qiu, Yongjie Li | null | 1509.02805 | null | null |
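A hedged sketch of an in-tree (IT) construction in the spirit of the Nearest Descent idea described above: each point links to its nearest neighbour among higher-density points, and clusters are obtained by cutting long edges. The density estimate, edge-cut rule and data are illustrative assumptions, not the exact ND or H-NND procedure.

```python
# Hypothetical in-tree construction: link each point to its nearest higher-density neighbour.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])  # two assumed blobs
n = len(X)
D = np.linalg.norm(X[:, None] - X[None, :], axis=2)      # pairwise distances
density = np.exp(-(D / 0.5) ** 2).sum(axis=1)            # simple kernel density score (assumed)

parent = np.full(n, -1)
edge_len = np.zeros(n)
for i in range(n):
    higher = np.where(density > density[i])[0]           # candidate "descent" targets
    if len(higher):
        parent[i] = higher[np.argmin(D[i, higher])]      # nearest higher-density point
        edge_len[i] = D[i, parent[i]]

# Toy rule: cut the single longest edge to split the in-tree into two clusters.
parent[np.argmax(edge_len)] = -1

def root(i):
    while parent[i] != -1:
        i = parent[i]
    return i

labels = np.array([root(i) for i in range(n)])
print(np.unique(labels))                                  # one label per resulting cluster
```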
Statistical Inference, Learning and Models in Big Data | stat.ML cs.LG | The need for new methods to deal with big data is a common theme in most
scientific fields, although its definition tends to vary with the context.
Statistical ideas are an essential part of this, and as a partial response, a
thematic program on statistical inference, learning, and models in big data was
held in 2015 in Canada, under the general direction of the Canadian Statistical
Sciences Institute, with major funding from, and most activities located at,
the Fields Institute for Research in Mathematical Sciences. This paper gives an
overview of the topics covered, describing challenges and strategies that seem
common to many different areas of application, and including some examples of
applications to make these challenges and strategies more concrete.
| Beate Franke and Jean-Fran\c{c}ois Plante and Ribana Roscher and Annie
Lee and Cathal Smyth and Armin Hatefi and Fuqi Chen and Einat Gil and
Alexander Schwing and Alessandro Selvitella and Michael M. Hoffman and Roger
Grosse and Dieter Hendricks and Nancy Reid | 10.1111/insr.12176 | 1509.02900 | null | null |
Sensor Selection by Linear Programming | stat.ML cs.LG | We learn sensor trees from training data to minimize sensor acquisition costs
during test time. Our system adaptively selects sensors at each stage if
necessary to make a confident classification. We pose the problem as empirical
risk minimization over the choice of trees and node decision rules. We
decompose the problem, which is known to be intractable, into combinatorial
(tree structures) and continuous parts (node decision rules) and propose to
solve them separately. Using training data, we greedily solve for the
combinatorial tree structures; for the continuous part, which is a
non-convex multilinear objective function, we derive convex surrogate loss
functions that are piecewise linear. The resulting problem can be cast as a
linear program and has the advantage of guaranteed convergence, global
optimality, repeatability and computational efficiency. We show that our
proposed approach outperforms the state of the art on a number of benchmark
datasets.
| Joseph Wang, Kirill Trapeznikov, Venkatesh Saligrama | null | 1509.02954 | null | null |
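The tree learning and LP relaxation in the abstract above are involved; as a simplified illustration of the underlying adaptive-acquisition idea (stop early when a cheap sensor already yields a confident prediction, otherwise pay for the next sensor), here is a two-stage sketch. The sensors, costs, threshold and classifiers are all assumptions, and this is not the paper's LP formulation.

```python
# Hypothetical two-stage sketch of adaptive sensor acquisition (not the paper's LP method):
# query the cheap sensor first and acquire the expensive one only if confidence is low.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
cheap, expensive = X[:, :2], X                      # assumed: first 2 features come from the cheap sensor

clf_cheap = LogisticRegression().fit(cheap, y)
clf_full = LogisticRegression().fit(expensive, y)

threshold, cost_cheap, cost_full = 0.8, 1.0, 5.0    # illustrative confidence threshold and sensor costs
total_cost, preds = 0.0, []
for i in range(len(X)):
    p = clf_cheap.predict_proba(cheap[i:i + 1])[0]
    if p.max() >= threshold:                        # confident: stop with the cheap sensor
        preds.append(int(np.argmax(p))); total_cost += cost_cheap
    else:                                           # otherwise acquire the remaining sensors
        preds.append(int(clf_full.predict(expensive[i:i + 1])[0])); total_cost += cost_full

print("accuracy:", np.mean(np.array(preds) == y), "avg cost:", total_cost / len(X))
```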