title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
FRULER: Fuzzy Rule Learning through Evolution for Regression | cs.LG cs.AI stat.ML | In regression problems, TSK fuzzy systems are widely used due
to the precision of the obtained models. Moreover, simple linear TSK
models are a good choice in many real problems because the relationship
between the output and input variables is easy to understand. In this paper we
present FRULER, a new genetic fuzzy system for automatically learning accurate
and simple linguistic TSK fuzzy rule bases for regression problems. In order to
reduce the complexity of the learned models while keeping a high accuracy, the
algorithm consists of three stages: instance selection, multi-granularity fuzzy
discretization of the input variables, and the evolutionary learning of the
rule base that uses the Elastic Net regularization to obtain the consequents of
the rules. Each stage was validated using 28 real-world datasets and FRULER was
compared with three state-of-the-art genetic fuzzy systems. Experimental
results show that FRULER achieves the most accurate and simplest models, even
compared with approximative approaches.
| I. Rodr\'iguez-Fdez, M. Mucientes, A. Bugar\'in | null | 1507.04997 | null | null |
Massively Deep Artificial Neural Networks for Handwritten Digit
Recognition | cs.CV cs.LG cs.NE | Greedily trained Restricted Boltzmann Machines yield a fairly low 0.72% error rate on
the famous MNIST database of handwritten digits. All that was required to
achieve this result was a high number of hidden layers consisting of many
neurons, and a graphics card to greatly speed up the rate of learning.
| Keiron O'Shea | null | 1507.05053 | null | null |
Type I and Type II Bayesian Methods for Sparse Signal Recovery using
Scale Mixtures | cs.LG stat.ML | In this paper, we propose a generalized scale mixture family of
distributions, namely the Power Exponential Scale Mixture (PESM) family, to
model the sparsity inducing priors currently in use for sparse signal recovery
(SSR). We show that the successful and popular methods such as LASSO,
Reweighted $\ell_1$ and Reweighted $\ell_2$ methods can be formulated in a
unified manner in a maximum a posteriori (MAP) or Type I Bayesian framework
using an appropriate member of the PESM family as the sparsity inducing prior.
In addition, exploiting the natural hierarchical framework induced by the PESM
family, we utilize these priors in a Type II framework and develop the
corresponding EM based estimation algorithms. We provide some insight into the
differences between Type I and Type II methods; of particular interest in the
algorithmic development is the Type II variant of the popular and successful
reweighted $\ell_1$ method. Extensive empirical results are provided
and they show that the Type II methods exhibit better support recovery than the
corresponding Type I methods.
| Ritwik Giri, Bhaskar D. Rao | 10.1109/TSP.2016.2546231 | 1507.05087 | null | null |
The Mondrian Process for Machine Learning | stat.ML cs.LG | This report is concerned with the Mondrian process and its applications in
machine learning. The Mondrian process is a guillotine-partition-valued
stochastic process that possesses an elegant self-consistency property. The
first part of the report uses simple concepts from applied probability to
define the Mondrian process and explore its properties.
The Mondrian process has been used as the main building block of a clever
online random forest classification algorithm that turns out to be equivalent
to its batch counterpart. We outline a slight adaptation of this algorithm to
regression, as the remainder of the report uses regression as a case study of
how Mondrian processes can be utilized in machine learning. In particular, the
Mondrian process will be used to construct a fast approximation to the
computationally expensive kernel ridge regression problem with a Laplace
kernel.
The complexity of random guillotine partitions generated by a Mondrian
process and hence the complexity of the resulting regression models is
controlled by a lifetime hyperparameter. It turns out that these models can be
efficiently trained and evaluated for all lifetimes in a given range at once,
without needing to retrain them from scratch for each lifetime value. This
leads to an efficient procedure for determining the right model complexity for
a dataset at hand.
The limitation of having a single lifetime hyperparameter will motivate the
final Mondrian grid model, in which each input dimension is endowed with its
own lifetime parameter. In this model we preserve the property that its
hyperparameters can be tweaked without needing to retrain the modified model
from scratch.
| Matej Balog and Yee Whye Teh | null | 1507.05181 | null | null |
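The abstract above describes the Mondrian process generatively. As a minimal sketch, the following sampler follows the standard recursive definition (exponential cut times with rate equal to the box's total side length, cut dimension proportional to side length, cut location uniform); the `budget` argument plays the role of the lifetime hyperparameter discussed in the report.

```python
import random

def sample_mondrian(lower, upper, budget, rng=None):
    """Recursively sample a guillotine partition from a Mondrian process.

    lower/upper: per-dimension bounds of the current box.
    budget: remaining lifetime; larger budgets give finer partitions.
    """
    rng = rng or random.Random(0)
    sides = [u - l for l, u in zip(lower, upper)]
    total = sum(sides)
    # Time to the next cut is exponential with rate equal to the box's
    # total side length; if it exceeds the budget, the box stays a leaf.
    cost = rng.expovariate(total) if total > 0 else float("inf")
    if cost > budget:
        return {"leaf": (list(lower), list(upper))}
    # Cut dimension is chosen with probability proportional to side length,
    # and the cut location is uniform along that side (self-consistency).
    r, acc, dim = rng.uniform(0, total), 0.0, 0
    for d, s in enumerate(sides):
        acc += s
        if r <= acc:
            dim = d
            break
    cut = rng.uniform(lower[dim], upper[dim])
    left_up, right_lo = list(upper), list(lower)
    left_up[dim], right_lo[dim] = cut, cut
    return {"dim": dim, "cut": cut,
            "left": sample_mondrian(lower, left_up, budget - cost, rng),
            "right": sample_mondrian(right_lo, upper, budget - cost, rng)}

print(sample_mondrian([0.0, 0.0], [1.0, 1.0], budget=2.0))
```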
Fairness Constraints: Mechanisms for Fair Classification | stat.ML cs.LG | Algorithmic decision making systems are ubiquitous across a wide variety of
online as well as offline services. These systems rely on complex learning
methods and vast amounts of data to optimize the service functionality,
satisfaction of the end user and profitability. However, there is a growing
concern that these automated decisions can lead, even in the absence of intent,
to a lack of fairness, i.e., their outcomes can disproportionately hurt (or,
benefit) particular groups of people sharing one or more sensitive attributes
(e.g., race, sex). In this paper, we introduce a flexible mechanism to design
fair classifiers by leveraging a novel intuitive measure of decision boundary
(un)fairness. We instantiate this mechanism with two well-known classifiers,
logistic regression and support vector machines, and show on real-world data
that our mechanism allows for a fine-grained control on the degree of fairness,
often at a small cost in terms of accuracy.
| Muhammad Bilal Zafar and Isabel Valera and Manuel Gomez Rodriguez and
Krishna P. Gummadi | null | 1507.05259 | null | null |
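As a rough illustration of the mechanism sketched in the abstract above, the snippet below fits a logistic regression while bounding the covariance between the sensitive attribute and the signed distance to the decision boundary. The constraint form follows the abstract's description; the synthetic data, the `cov_budget` knob, and the SLSQP solver are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_fair_logreg(X, y, z, cov_budget=0.01):
    """Logistic regression subject to a decision-boundary covariance bound.

    The fairness proxy is the covariance between the sensitive attribute z
    and the signed distance theta^T x to the decision boundary; shrinking
    cov_budget trades accuracy for fairness.  Labels y are in {-1, +1}.
    """
    def nll(theta):
        return np.logaddexp(0.0, -y * (X @ theta)).mean()
    def cov(theta):
        return np.mean((z - z.mean()) * (X @ theta))
    cons = [{"type": "ineq", "fun": lambda t: cov_budget - cov(t)},
            {"type": "ineq", "fun": lambda t: cov_budget + cov(t)}]
    return minimize(nll, np.zeros(X.shape[1]), constraints=cons).x

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
z = (X[:, 0] > 0).astype(float)                  # sensitive attribute
y = np.sign(X @ np.array([1.0, 1.0, 0.5]) + 0.3 * rng.normal(size=500))
theta = fit_fair_logreg(X, y, z)
print("covariance at solution:", round(np.mean((z - z.mean()) * (X @ theta)), 4))
```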
2 Notes on Classes with Vapnik-Chervonenkis Dimension 1 | cs.LG | The Vapnik-Chervonenkis dimension is a combinatorial parameter that reflects
the "complexity" of a set of sets (a.k.a. concept classes). It has been
introduced by Vapnik and Chervonenkis in their seminal 1971 paper and has since
found many applications, most notably in machine learning theory and in
computational geometry. Arguably the most influential consequence of the VC
analysis is the fundamental theorem of statistical machine learning, stating
that a concept class is learnable (in some precise sense) if and only if its
VC-dimension is finite. Furthermore, for such classes a most simple learning
rule - empirical risk minimization (ERM) - is guaranteed to succeed.
The simplest non-trivial structures, in terms of the VC-dimension, are the
classes (i.e., sets of subsets) for which that dimension is 1.
In this note we show a couple of curious results concerning such classes. The
first result shows that such classes share a very simple structure, and, as a
corollary, the labeling information contained in any sample labeled by such a
class can be compressed into a single instance.
The second result shows that due to some subtle measurability issues, in
spite of the above mentioned fundamental theorem, there are classes of
dimension 1 for which an ERM learning rule fails miserably.
| Shai Ben-David | null | 1507.05307 | null | null |
Fast Adaptive Weight Noise | stat.ML cs.LG | Marginalising out uncertain quantities within the internal representations or
parameters of neural networks is of central importance for a wide range of
learning techniques, such as empirical, variational or full Bayesian methods.
We set out to generalise fast dropout (Wang & Manning, 2013) to cover a wider
variety of noise processes in neural networks. This leads to an efficient
calculation of the marginal likelihood and predictive distribution which evades
sampling and the consequential increase in training time due to highly variant
gradient estimates. This allows us to approximate variational Bayes for the
parameters of feed-forward neural networks. Inspired by the minimum description
length principle, we also propose and experimentally verify the direct
optimisation of the regularised predictive distribution. The methods yield
results competitive with previous neural network based approaches and Gaussian
processes on a wide range of regression tasks.
| Justin Bayer and Maximilian Karl and Daniela Korhammer and Patrick van
der Smagt | null | 1507.05331 | null | null |
Regret Guarantees for Item-Item Collaborative Filtering | cs.LG cs.IR cs.IT math.IT stat.ML | There is much empirical evidence that item-item collaborative filtering works
well in practice. Motivated to understand this, we provide a framework to
design and analyze various recommendation algorithms. The setup amounts to
online binary matrix completion, where at each time a random user requests a
recommendation and the algorithm chooses an entry to reveal in the user's row.
The goal is to minimize regret, or equivalently to maximize the number of +1
entries revealed at any time. We analyze an item-item collaborative filtering
algorithm that can achieve fundamentally better performance compared to
user-user collaborative filtering. The algorithm achieves good "cold-start"
performance (appropriately defined) by quickly making good recommendations to
new users about whom there is little information.
| Guy Bresler, Devavrat Shah, and Luis F. Voloch | null | 1507.05371 | null | null |
Canonical Correlation Forests | stat.ML cs.LG | We introduce canonical correlation forests (CCFs), a new decision tree
ensemble method for classification and regression. Individual canonical
correlation trees are binary decision trees with hyperplane splits based on
local canonical correlation coefficients calculated during training. Unlike
axis-aligned alternatives, the decision surfaces of CCFs are not restricted to
the coordinate system of the input features and therefore more naturally
represent data with correlated inputs. CCFs naturally accommodate multiple
outputs, provide a similar computational complexity to random forests, and
inherit their impressive robustness to the choice of input parameters. As part
of the CCF training algorithm, we also introduce projection bootstrapping, a
novel alternative to bagging for oblique decision tree ensembles which
maintains use of the full dataset in selecting split points, often leading to
improvements in predictive accuracy. Our experiments show that, even without
parameter tuning, CCFs out-perform axis-aligned random forests and other
state-of-the-art tree ensemble methods on both classification and regression
problems, delivering both improved predictive accuracy and faster training
times. We further show that they outperform all of the 179 classifiers
considered in a recent extensive survey.
| Tom Rainforth and Frank Wood | null | 1507.05444 | null | null |
AMP: a new time-frequency feature extraction method for intermittent
time-series data | cs.LG | The characterisation of time-series data via their most salient features is
extremely important in a range of machine learning tasks, not least with
regard to classification and clustering. While there exist many feature
extraction techniques suitable for non-intermittent time-series data, these
approaches are not always appropriate for intermittent time-series data, where
intermittency is characterized by constant values for large periods of time
punctuated by sharp and transient increases or decreases in value.
Motivated by this, we present aggregation, mode decomposition and projection
(AMP), a feature extraction technique particularly suited to intermittent
time-series data which contain time-frequency patterns. For our method all
individual time-series within a set are combined to form a non-intermittent
aggregate. This is decomposed into a set of components which represent the
intrinsic time-frequency signals within the data set. Individual time-series
can then be fit to these components to obtain a set of numerical features that
represent their intrinsic time-frequency patterns. To demonstrate the
effectiveness of AMP, we evaluate it on the real-world task of clustering
intermittent time-series data. Using synthetically generated data we show that
a clustering approach which uses the features derived from AMP significantly
outperforms traditional clustering methods. Our technique is further
exemplified on a real world data set where AMP can be used to discover
groupings of individuals which correspond to real world sub-populations.
| Duncan Barrack, James Goulding, Keith Hopcraft, Simon Preston and
Gavin Smith | null | 1507.05455 | null | null |
On the Minimax Risk of Dictionary Learning | stat.ML cs.IT cs.LG math.IT | We consider the problem of learning a dictionary matrix from a number of
observed signals, which are assumed to be generated via a linear model with a
common underlying dictionary. In particular, we derive lower bounds on the
minimum achievable worst case mean squared error (MSE), regardless of
computational complexity of the dictionary learning (DL) schemes. By casting DL
as a classical (or frequentist) estimation problem, the lower bounds on the
worst case MSE are derived by following an established information-theoretic
approach to minimax estimation. The main conceptual contribution of this paper
is the adaption of the information-theoretic approach to minimax estimation for
the DL problem in order to derive lower bounds on the worst case MSE of any DL
scheme. We derive three different lower bounds applying to different generative
models for the observed signals. The first bound applies to a wide range of
models; it only requires the existence of a covariance matrix of the (unknown)
underlying coefficient vector. By specializing this bound to the case of sparse
coefficient distributions, and assuming the true dictionary satisfies the
restricted isometry property, we obtain a lower bound on the worst case MSE of
DL schemes in terms of a signal to noise ratio (SNR). The third bound applies
to a more restrictive subclass of coefficient distributions by requiring the
non-zero coefficients to be Gaussian. While, compared with the previous two
bounds, the applicability of this final bound is the most limited, it is the
tightest of the three bounds in the low SNR regime.
| Alexander Jung, Yonina C. Eldar, Norbert G\"ortz | null | 1507.05498 | null | null |
Clustering Tree-structured Data on Manifold | cs.CV cs.LG | Tree-structured data usually contain both topological and geometrical
information, and are necessarily considered on a manifold instead of in Euclidean
space for appropriate data parameterization and analysis. In this study, we
propose a novel tree-structured data parameterization, called
Topology-Attribute matrix (T-A matrix), so the data clustering task can be
conducted on matrix manifold. We incorporate the structure constraints embedded
in data into the non-negative matrix factorization method to determine meta-trees
from the T-A matrix, and the signature vector of each single tree can then be
extracted by meta-tree decomposition. The meta-tree space turns out to be a
cone space, in which we explore the distance metric and implement the
clustering algorithm based on concepts like the Fr\'echet mean. Finally, the
T-A matrix based clustering (TAMBAC) framework is evaluated and compared using
both simulated data and real retinal images to illustrate its efficiency and
accuracy.
| Na Lu, Hongyu Miao | null | 1507.05532 | null | null |
Building a Large-scale Multimodal Knowledge Base System for Answering
Visual Queries | cs.CV cs.LG | The complexity of the visual world creates significant challenges for
comprehensive visual understanding. In spite of recent successes in visual
recognition, today's vision systems would still struggle to deal with visual
queries that require a deeper reasoning. We propose a knowledge base (KB)
framework to handle an assortment of visual queries, without the need to train
new classifiers for new tasks. Building such a large-scale multimodal KB
presents a major challenge of scalability. We cast a large-scale MRF into a KB
representation, incorporating visual, textual and structured data, as well as
their diverse relations. We introduce a scalable knowledge base construction
system that is capable of building a KB with half a billion variables and
millions of parameters in a few hours. Our system achieves competitive results
compared to purpose-built models on standard recognition and retrieval tasks,
while exhibiting greater flexibility in answering richer visual queries.
| Yuke Zhu, Ce Zhang, Christopher R\'e and Li Fei-Fei | null | 1507.05670 | null | null |
Compression of Fully-Connected Layer in Neural Network by Kronecker
Product | cs.NE cs.CV cs.LG | In this paper we propose and study a technique to reduce the number of
parameters and computation time in fully-connected layers of neural networks
using the Kronecker product, at a mild cost in prediction quality. The
technique proceeds by replacing Fully-Connected layers with so-called Kronecker
Fully-Connected layers, where the weight matrices of the FC layers are
approximated by linear combinations of multiple Kronecker products of smaller
matrices. In particular, given a model trained on SVHN dataset, we are able to
construct a new KFC model with 73\% reduction in total number of parameters,
while the error only rises mildly. In contrast, the low-rank method can only
achieve 35\% reduction in total number of parameters given similar quality
degradation allowance. If we only compare the KFC layer with its counterpart
fully-connected layer, the reduction in the number of parameters exceeds 99\%.
The amount of computation is also reduced as we replace matrix product of the
large matrices in FC layers with matrix products of a few smaller matrices in
KFC layers. Further experiments on MNIST, SVHN and some Chinese Character
recognition models also demonstrate effectiveness of our technique.
| Shuchang Zhou, Jia-Nan Wu | null | 1507.05775 | null | null |
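The paper above approximates FC weight matrices by linear combinations of several Kronecker products; as a simplified sketch of the core idea, the snippet below computes the best single-term approximation via the Van Loan-Pitsianis rearrangement (the multi-term KFC construction and the chosen factor shapes here are assumptions).

```python
import numpy as np

def nearest_kronecker(W, m1, n1, m2, n2):
    """Best single-term approximation W ~ kron(A, B) (Van Loan-Pitsianis).

    W must have shape (m1*m2, n1*n2).  Rearranging W so that each (m2 x n2)
    block becomes a row turns the problem into a rank-1 approximation,
    which the leading SVD pair solves exactly in Frobenius norm.
    """
    assert W.shape == (m1 * m2, n1 * n2)
    R = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A, B

W = np.random.default_rng(0).normal(size=(32 * 8, 16 * 4))  # a 256 x 64 FC weight
A, B = nearest_kronecker(W, 32, 16, 8, 4)
err = np.linalg.norm(W - np.kron(A, B)) / np.linalg.norm(W)
# Parameters drop from 256*64 = 16384 to 32*16 + 8*4 = 544.
print(f"relative error with one Kronecker term: {err:.3f}")
```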
Bandit-Based Task Assignment for Heterogeneous Crowdsourcing | cs.LG | We consider a task assignment problem in crowdsourcing, which is aimed at
collecting as many reliable labels as possible within a limited budget. A
challenge in this scenario is how to cope with the diversity of tasks and the
task-dependent reliability of workers, e.g., a worker may be good at
recognizing the name of sports teams, but not be familiar with cosmetics
brands. We refer to this practical setting as heterogeneous crowdsourcing. In
this paper, we propose a contextual bandit formulation for task assignment in
heterogeneous crowdsourcing, which is able to deal with the
exploration-exploitation trade-off in worker selection. We also theoretically
investigate the regret bounds for the proposed method, and demonstrate its
practical usefulness experimentally.
| Hao Zhang, Yao Ma, Masashi Sugiyama | null | 1507.05800 | null | null |
On Identifying Anomalies in Tor Usage with Applications in Detecting
Internet Censorship | cs.CY cs.LG cs.NI | We develop a means to detect ongoing per-country anomalies in the daily usage
metrics of the Tor anonymous communication network, and demonstrate the
applicability of this technique to identifying likely periods of internet
censorship and related events. The presented approach identifies contiguous
anomalous periods, rather than daily spikes or drops, and allows anomalies to
be ranked according to deviation from expected behaviour.
The developed method is implemented as a running tool, with outputs published
daily by mailing list. This list highlights per-country anomalous Tor usage,
and produces a daily ranking of countries according to the level of detected
anomalous behaviour. This list has been active since August 2016, and is in use
by a number of individuals, academics, and NGOs as an early warning system for
potential censorship events.
We focus on Tor; however, the presented approach is more generally applicable
to usage data of other services, both individually and in combination. We
demonstrate that combining multiple data sources allows more specific
identification of likely Tor blocking events. We demonstrate our approach
in comparison to existing anomaly detection tools, and against both known
historical internet censorship events and synthetic datasets. Finally, we
detail a number of significant recent anomalous events and behaviours
identified by our tool.
| Joss Wright, Alexander Darer, Oliver Farnan | 10.1145/3201064.3201093 | 1507.05819 | null | null |
A study of the classification of low-dimensional data with supervised
manifold learning | cs.LG | Supervised manifold learning methods learn data representations by preserving
the geometric structure of data while enhancing the separation between data
samples from different classes. In this work, we propose a theoretical study of
supervised manifold learning for classification. We consider nonlinear
dimensionality reduction algorithms that yield linearly separable embeddings of
training data and present generalization bounds for this type of algorithms. A
necessary condition for satisfactory generalization performance is that the
embedding allow the construction of a sufficiently regular interpolation
function in relation with the separation margin of the embedding. We show that
for supervised embeddings satisfying this condition, the classification error
decays at an exponential rate with the number of training samples. Finally, we
examine the separability of supervised nonlinear embeddings that aim to
preserve the low-dimensional geometric structure of data based on graph
representations. The proposed analysis is supported by experiments on several
real data sets.
| Elif Vural and Christine Guillemot | null | 1507.05880 | null | null |
Clustering is Efficient for Approximate Maximum Inner Product Search | cs.LG cs.CL stat.ML | Efficient Maximum Inner Product Search (MIPS) is an important task that has a
wide applicability in recommendation systems and classification with a large
number of classes. Solutions based on locality-sensitive hashing (LSH) as well
as tree-based solutions have been investigated in the recent literature, to
perform approximate MIPS in sublinear time. In this paper, we compare these to
another extremely simple approach for solving approximate MIPS, based on
variants of the k-means clustering algorithm. Specifically, we propose to train
a spherical k-means, after having reduced the MIPS problem to a Maximum Cosine
Similarity Search (MCSS). Experiments on two standard recommendation system
benchmarks as well as on large vocabulary word embeddings, show that this
simple approach yields much higher speedups, for the same retrieval precision,
than current state-of-the-art hashing-based and tree-based methods. This simple
method also yields more robust retrievals when the query is corrupted by noise.
| Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, Yoshua
Bengio | null | 1507.05910 | null | null |
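As a small sketch of the clustering-based MIPS idea above: one standard way to reduce MIPS to MCSS is to append a coordinate sqrt(M^2 - |x|^2) to each database vector so all augmented vectors share the same norm; spherical k-means then buckets the database, and queries scan only a few clusters. The augmentation trick and the probe count below are common choices, not necessarily the paper's exact construction.

```python
import numpy as np

def build_index(X, k, iters=20, seed=0):
    """Reduce MIPS to MCSS by norm augmentation, then run spherical k-means.

    Each database vector x gains a coordinate sqrt(M^2 - |x|^2) (M = max norm),
    so all augmented vectors share the same norm and inner products with an
    augmented query become proportional to cosine similarities.
    """
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(X, axis=1)
    M = norms.max()
    Xa = np.hstack([X, np.sqrt(np.maximum(M**2 - norms**2, 0.0))[:, None]])
    Xu = Xa / np.linalg.norm(Xa, axis=1, keepdims=True)
    C = Xu[rng.choice(len(Xu), size=k, replace=False)]
    for _ in range(iters):
        assign = (Xu @ C.T).argmax(axis=1)       # nearest centroid by cosine
        for j in range(k):
            members = Xu[assign == j]
            if len(members):                     # re-estimate and renormalize
                c = members.sum(axis=0)
                C[j] = c / np.linalg.norm(c)
    return C, assign

def mips_query(q, X, C, assign, probes=2):
    """Scan only the `probes` clusters whose centroids best match the query."""
    qa = np.append(q, 0.0)                       # the query is padded with zero
    top = np.argsort(C @ qa)[-probes:]
    cand = np.flatnonzero(np.isin(assign, top))
    return cand[np.argmax(X[cand] @ q)]          # exact scores on candidates

X = np.random.default_rng(1).normal(size=(5000, 64))
q = np.random.default_rng(2).normal(size=64)
C, assign = build_index(X, k=50)
print("approximate:", mips_query(q, X, C, assign), " exact:", int(np.argmax(X @ q)))
```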
On the Worst-Case Approximability of Sparse PCA | stat.ML cs.CC cs.DS cs.LG | It is well known that Sparse PCA (Sparse Principal Component Analysis) is
NP-hard to solve exactly on worst-case instances. What is the complexity of
solving Sparse PCA approximately? Our contributions include: 1) a simple and
efficient algorithm that achieves an $n^{-1/3}$-approximation; 2) NP-hardness
of approximation to within $(1-\varepsilon)$, for some small constant
$\varepsilon > 0$; 3) SSE-hardness of approximation to within any constant
factor; and 4) an $\exp\exp\left(\Omega\left(\sqrt{\log \log n}\right)\right)$
("quasi-quasi-polynomial") gap for the standard semidefinite program.
| Siu On Chan, Dimitris Papailiopoulos, Aviad Rubinstein | null | 1507.05950 | null | null |
Optimal Testing for Properties of Distributions | cs.DS cs.IT cs.LG math.IT math.ST stat.TH | Given samples from an unknown distribution $p$, is it possible to distinguish
whether $p$ belongs to some class of distributions $\mathcal{C}$ versus $p$
being far from every distribution in $\mathcal{C}$? This fundamental question
has received tremendous attention in statistics, focusing primarily on
asymptotic analysis, and more recently in information theory and theoretical
computer science, where the emphasis has been on small sample size and
computational complexity. Nevertheless, even for basic properties of
distributions such as monotonicity, log-concavity, unimodality, independence,
and monotone-hazard rate, the optimal sample complexity is unknown.
We provide a general approach via which we obtain sample-optimal and
computationally efficient testers for all these distribution families. At the
core of our approach is an algorithm which solves the following problem: Given
samples from an unknown distribution $p$, and a known distribution $q$, are $p$
and $q$ close in $\chi^2$-distance, or far in total variation distance?
The optimality of our testers is established by providing matching lower
bounds with respect to both $n$ and $\varepsilon$. Finally, a necessary
building block for our testers and an important byproduct of our work are the
first known computationally efficient proper learners for discrete log-concave
and monotone hazard rate distributions.
| Jayadev Acharya, Constantinos Daskalakis, Gautam Kamath | null | 1507.05952 | null | null |
Practical Selection of SVM Supervised Parameters with Different Feature
Representations for Vowel Recognition | cs.CL cs.LG | It is known that the classification performance of Support Vector Machine
(SVM) can be considerably affected by the parameters of the kernel
trick and the regularization parameter, C. Thus, in this article, we propose a
study in order to find the suitable kernel with which SVM may achieve good
generalization performance as well as the parameters to use. We need to analyze
the behavior of the SVM classifier when these parameters take very small or
very large values. The study is conducted for a multi-class vowel recognition
using the TIMIT corpus. Furthermore, for the experiments, we used different
feature representations such as MFCC and PLP. Finally, a comparative study was
done to point out the impact of the choice of the parameters, kernel trick and
feature representations on the performance of the SVM classifier.
| Rimah Amami, Dorra Ben Ayed, Noureddine Ellouze | null | 1507.06020 | null | null |
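A hedged sketch of the kind of kernel/C sweep the study above describes, using scikit-learn; `load_digits` stands in for the MFCC/PLP vowel features from TIMIT, and the grid values are illustrative, chosen to span both very small and very large parameter settings.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# load_digits stands in for the MFCC/PLP vowel features used in the paper.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Sweep C and the kernel parameters over several orders of magnitude, since
# both very small and very large values can hurt generalization.
grid = {"svc__C": [0.01, 1, 100, 10000],
        "svc__gamma": [1e-4, 1e-3, 1e-2, 1e-1],
        "svc__kernel": ["rbf", "poly", "linear"]}
search = GridSearchCV(make_pipeline(StandardScaler(), SVC()), grid, cv=3)
search.fit(X_tr, y_tr)
print(search.best_params_, "test accuracy:", round(search.score(X_te, y_te), 3))
```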
An Empirical Comparison of SVM and Some Supervised Learning Algorithms
for Vowel recognition | cs.CL cs.LG | In this article, we conduct a study on the performance of some supervised
learning algorithms for vowel recognition. This study aims to compare the
accuracy of each algorithm. Thus, we present an empirical comparison between
five supervised learning classifiers and two combined classifiers: SVM, KNN,
Naive Bayes, Quadratic Bayes Normal (QDC) and Nearest Mean. These algorithms
were tested for vowel recognition using TIMIT Corpus and Mel-frequency cepstral
coefficients (MFCCs).
| Rimah Amami, Dorra Ben Ayed, Noureddine Ellouze | null | 1507.06021 | null | null |
Robust speech recognition using consensus function based on multi-layer
networks | cs.CL cs.LG | Clustering ensembles merge numerous partitions of a given dataset into
a single clustering solution. The clustering ensemble has emerged as a potent
approach for improving both the robustness and the stability of
unsupervised classification results. One of the major problems in clustering
ensembles is to find the best consensus function. Finding final partition from
different clustering results requires skillfulness and robustness of the
classification algorithm. In addition, the major problem with the consensus
function is its sensitivity to the used data sets quality. This limitation is
due to the existence of noisy, silent or redundant data. This paper proposes a
novel consensus function of cluster ensembles based on Multilayer networks
technique and a maintenance database method. This maintenance database approach
is used in order to handle any given noisy speech and, thus, to guarantee the
quality of the databases. This can generate good results and efficient data
partitions. To show its effectiveness, we support our strategy with empirical
evaluation using distorted speech from Aurora speech databases.
| Rimah Amami, Ghaith Manita, Abir Smiti | 10.1109/CISTI.2014.6877093 | 1507.06023 | null | null |
Incorporating Belief Function in SVM for Phoneme Recognition | cs.CL cs.LG | The Support Vector Machine (SVM) method has been widely used in numerous
classification tasks. The main idea of this algorithm is based on the principle
of margin maximization to find a hyperplane which separates the data into
two different classes. In this paper, SVM is applied to the phoneme recognition
task. However, in many real-world problems, each phoneme in the data set for
recognition problems may differ in the degree of significance due to noise,
inaccuracies, or abnormal characteristics; all these problems can lead to
inaccuracies in the prediction phase. Unfortunately, the standard formulation
of SVM does not take into account all those problems and, in particular, the
variation in the speech input. This paper presents a new formulation of SVM
(B-SVM) that attributes to each phoneme a confidence degree computed based on
its geometric position in the space. Then, this degree is used in order to
strengthen the class membership of the tested phoneme. Hence, we introduce a
reformulation of the standard SVM that incorporates the degree of belief.
Experimental performance on TIMIT database shows the effectiveness of the
proposed method B-SVM on a phoneme recognition problem.
| Rimah Amami, Dorra Ben Ayed, Noureddine Ellouze | 10.1007/978-3-319-07617-1_17 | 1507.06025 | null | null |
The challenges of SVM optimization using Adaboost on a phoneme
recognition problem | cs.CL cs.LG | The use of digital technology is growing at a very fast pace, which has led to
the emergence of systems based on cognitive infocommunications. The expansion
of this sector imposes the use of combining methods in order to ensure
robustness in cognitive systems.
| Rimah Amami, Dorra Ben Ayed, Noureddine Ellouze | 10.1109/CogInfoCom.2013.6719292 | 1507.06028 | null | null |
MixEst: An Estimation Toolbox for Mixture Models | stat.ML cs.LG | Mixture models are powerful statistical models used in many applications
ranging from density estimation to clustering and classification. When dealing
with mixture models, there are many issues that the experimenter should be
aware of and needs to solve. The MixEst toolbox is a powerful and user-friendly
package for MATLAB that implements several state-of-the-art approaches to
address these problems. Additionally, MixEst gives the possibility of using
manifold optimization for fitting the density model, a feature specific to this
toolbox. MixEst simplifies using and integration of mixture models in
statistical models and applications. For developing mixture models of new
densities, the user just needs to provide a few functions for that statistical
distribution and the toolbox takes care of all the issues regarding mixture
models. MixEst is available at visionlab.ut.ac.ir/mixest and is fully
documented and is licensed under GPL.
| Reshad Hosseini and Mohamadreza Mash'al | null | 1507.06065 | null | null |
Banzhaf Random Forests | cs.LG cs.CV stat.ML | Random forests are a type of ensemble method which makes predictions by
combining the results of several independent trees. However, the theory of
random forests has long been outpaced by their application. In this paper, we
propose a novel random forests algorithm based on cooperative game theory.
Banzhaf power index is employed to evaluate the power of each feature by
traversing possible feature coalitions. Unlike the previously used information
gain rate of information theory, which simply chooses the most informative
feature, the Banzhaf power index can be considered as a metric of the
importance of each feature on the dependency among a group of features. More
importantly, we have proved the consistency of the proposed algorithm, named
Banzhaf random forests (BRF). This theoretical analysis takes a step towards
narrowing the gap between the theory and practice of random forests for
classification problems. Experiments on several UCI benchmark data sets show
that BRF is competitive with state-of-the-art classifiers and dramatically
outperforms previous consistent random forests. Particularly, it is much more
efficient than previous consistent random forests.
| Jianyuan Sun and Guoqiang Zhong and Junyu Dong and Yajuan Cai | null | 1507.06105 | null | null |
Training Very Deep Networks | cs.LG cs.NE | Theoretical and empirical evidence indicates that the depth of neural
networks is crucial for their success. However, training becomes more difficult
as depth increases, and training of very deep networks remains an open problem.
Here we introduce a new architecture designed to overcome this. Our so-called
highway networks allow unimpeded information flow across many layers on
information highways. They are inspired by Long Short-Term Memory recurrent
networks and use adaptive gating units to regulate the information flow. Even
with hundreds of layers, highway networks can be trained directly through
simple gradient descent. This enables the study of extremely deep and efficient
architectures.
| Rupesh Kumar Srivastava, Klaus Greff, J\"urgen Schmidhuber | null | 1507.06228 | null | null |
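The gating rule behind the highway networks described above admits a very short sketch: y = H(x)·T(x) + x·(1 - T(x)), where T is a sigmoid transform gate. Below is a minimal NumPy version; the tanh nonlinearity and the negative gate bias (which biases layers toward carrying x through) are conventional choices treated here as assumptions.

```python
import numpy as np

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = H(x) * T(x) + x * (1 - T(x)).

    T is a sigmoid transform gate; a negative gate bias pushes the layer
    toward carrying x through unchanged, which eases training at depth.
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = np.tanh(x @ W_h + b_h)        # candidate transformation
    T = sigmoid(x @ W_t + b_t)        # transform gate in (0, 1)
    return H * T + x * (1.0 - T)

rng = np.random.default_rng(0)
d, depth = 8, 50
x = rng.standard_normal((4, d))
for _ in range(depth):                # even 50 layers stay numerically tame
    W_h = 0.1 * rng.standard_normal((d, d))
    W_t = 0.1 * rng.standard_normal((d, d))
    x = highway_layer(x, W_h, np.zeros(d), W_t, np.full(d, -2.0))
print(x.shape)                        # (4, 8)
```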
Evaluation of Spectral Learning for the Identification of Hidden Markov
Models | stat.ML cs.LG math.OC | Hidden Markov models have successfully been applied as models of discrete
time series in many fields. Often, when applied in practice, the parameters of
these models have to be estimated. The currently predominating identification
methods, such as maximum-likelihood estimation and especially
expectation-maximization, are iterative and prone to have problems with local
minima. A non-iterative method employing a spectral subspace-like approach has
recently been proposed in the machine learning literature. This paper evaluates
the performance of this algorithm, and compares it to the performance of the
expectation-maximization algorithm, on a number of numerical examples. We find
that the performance is mixed; it successfully identifies some systems with
relatively few available observations, but fails completely for some systems
even when a large amount of observations is available. An open question is how
this discrepancy can be explained. We provide some indications that it could be
related to how well-conditioned some system parameters are.
| Robert Mattila, Cristian R. Rojas, Bo Wahlberg | null | 1507.06346 | null | null |
Sum-of-Squares Lower Bounds for Sparse PCA | cs.LG cs.CC math.ST stat.CO stat.ML stat.TH | This paper establishes a statistical versus computational trade-off for
solving a basic high-dimensional machine learning problem via a basic convex
relaxation method. Specifically, we consider the {\em Sparse Principal
Component Analysis} (Sparse PCA) problem, and the family of {\em
Sum-of-Squares} (SoS, aka Lasserre/Parillo) convex relaxations. It was well
known that in large dimension $p$, a planted $k$-sparse unit vector can be {\em
in principle} detected using only $n \approx k\log p$ (Gaussian or Bernoulli)
samples, but all {\em efficient} (polynomial time) algorithms known require $n
\approx k^2$ samples. It was also known that this quadratic gap cannot be
improved by the most basic {\em semi-definite} (SDP, aka spectral)
relaxation, equivalent to a degree-2 SoS algorithms. Here we prove that also
degree-4 SoS algorithms cannot improve this quadratic gap. This average-case
lower bound adds to the small collection of hardness results in machine
learning for this powerful family of convex relaxation algorithms. Moreover,
our design of moments (or "pseudo-expectations") for this lower bound is quite
different than previous lower bounds. Establishing lower bounds for higher
degree SoS algorithms remains a challenging problem.
| Tengyu Ma, Avi Wigderson | null | 1507.06370 | null | null |
Dynamic Matrix Factorization with Priors on Unknown Values | stat.ML cs.IR cs.LG | Advanced and effective collaborative filtering methods based on explicit
feedback assume that unknown ratings do not follow the same model as the
observed ones (\emph{not missing at random}). In this work, we build on this
assumption, and introduce a novel dynamic matrix factorization framework that
allows to set an explicit prior on unknown values. When new ratings, users, or
items enter the system, we can update the factorization in time independent of
the size of data (number of users, items and ratings). Hence, we can quickly
recommend items even to very recent users. We test our methods on three large
datasets, including two very sparse ones, in static and dynamic conditions. In
each case, we outrank state-of-the-art matrix factorization methods that do not
use a prior on unknown ratings.
| Robin Devooght and Nicolas Kourtellis and Amin Mantrach | null | 1507.06452 | null | null |
Deep Recurrent Q-Learning for Partially Observable MDPs | cs.LG | Deep Reinforcement Learning has yielded proficient controllers for complex
tasks. However, these controllers have limited memory and rely on being able to
perceive the complete game screen at each decision point. To address these
shortcomings, this article investigates the effects of adding recurrency to a
Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected
layer with a recurrent LSTM. The resulting \textit{Deep Recurrent Q-Network}
(DRQN), although capable of seeing only a single frame at each timestep,
successfully integrates information through time and replicates DQN's
performance on standard Atari games and partially observed equivalents
featuring flickering game screens. Additionally, when trained with partial
observations and evaluated with incrementally more complete observations,
DRQN's performance scales as a function of observability. Conversely, when
trained with full observations and evaluated with partial observations, DRQN's
performance degrades less than DQN's. Thus, given the same length of history,
recurrency is a viable alternative to stacking a history of frames in the DQN's
input layer, and while recurrency confers no systematic advantage when learning
to play the game, the recurrent net can better adapt at evaluation time if the
quality of observations changes.
| Matthew Hausknecht and Peter Stone | null | 1507.06527 | null | null |
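A hedged PyTorch sketch of the architecture change the DRQN abstract describes: the usual Atari conv stack, with the first post-convolutional fully-connected layer replaced by an LSTM. The layer sizes follow the common DQN configuration and are assumptions here, not taken from the paper.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """DQN conv stack, but the first post-conv FC layer is an LSTM,
    so the network integrates information across single frames over time."""

    def __init__(self, n_actions, hidden=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64 * 7 * 7, hidden, batch_first=True)
        self.q = nn.Linear(hidden, n_actions)

    def forward(self, frames, state=None):
        # frames: (batch, time, 1, 84, 84) -- one frame per step, not a stack.
        b, t = frames.shape[:2]
        z = self.conv(frames.flatten(0, 1)).flatten(1)   # (b*t, 64*7*7)
        out, state = self.lstm(z.view(b, t, -1), state)  # integrate over time
        return self.q(out), state                        # Q-values per step

net = DRQN(n_actions=6)
q, _ = net(torch.zeros(2, 4, 1, 84, 84))
print(q.shape)  # torch.Size([2, 4, 6])
```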
Manitest: Are classifiers really invariant? | cs.CV cs.LG stat.ML | Invariance to geometric transformations is a highly desirable property of
automatic classifiers in many image recognition tasks. Nevertheless, it is
unclear to which extent state-of-the-art classifiers are invariant to basic
transformations such as rotations and translations. This is mainly due to the
lack of general methods that properly measure such an invariance. In this
paper, we propose a rigorous and systematic approach for quantifying the
invariance to geometric transformations of any classifier. Our key idea is to
cast the problem of assessing a classifier's invariance as the computation of
geodesics along the manifold of transformed images. We propose the Manitest
method, built on the efficient Fast Marching algorithm to compute the
invariance of classifiers. Our new method quantifies in particular the
importance of data augmentation for learning invariance from data, and the
increased invariance of convolutional neural networks with depth. We foresee
that the proposed generic tool for measuring invariance to a large class of
geometric transformations and arbitrary classifiers will have many applications
for evaluating and comparing classifiers based on their invariance, and help
improving the invariance of existing classifiers.
| Alhussein Fawzi, Pascal Frossard | null | 1507.06535 | null | null |
Human Pose Estimation with Iterative Error Feedback | cs.CV cs.LG cs.NE | Hierarchical feature extractors such as Convolutional Networks (ConvNets)
have achieved impressive performance on a variety of classification tasks using
purely feedforward processing. Feedforward architectures can learn rich
representations of the input space but do not explicitly model dependencies in
the output spaces, which are quite structured for tasks such as articulated
human pose estimation or object segmentation. Here we propose a framework that
expands the expressive power of hierarchical feature extractors to encompass
both input and output spaces, by introducing top-down feedback. Instead of
directly predicting the outputs in one go, we use a self-correcting model that
progressively changes an initial solution by feeding back error predictions, in
a process we call Iterative Error Feedback (IEF). IEF shows excellent
performance on the task of articulated pose estimation in the challenging MPII
and LSP benchmarks, matching the state-of-the-art without requiring ground
truth scale annotation.
| Joao Carreira, Pulkit Agrawal, Katerina Fragkiadaki, Jitendra Malik | null | 1507.06550 | null | null |
Multi-scale exploration of convex functions and bandit convex
optimization | math.MG cs.LG math.OC math.PR stat.ML | We construct a new map from a convex function to a distribution on its
domain, with the property that this distribution is a multi-scale exploration
of the function. We use this map to solve a decade-old open problem in
adversarial bandit convex optimization by showing that the minimax regret for
this problem is $\tilde{O}(\mathrm{poly}(n) \sqrt{T})$, where $n$ is the
dimension and $T$ the number of rounds. This bound is obtained by studying the
dual Bayesian maximin regret via the information ratio analysis of Russo and
Van Roy, and then using the multi-scale exploration to solve the Bayesian
problem.
| S\'ebastien Bubeck and Ronen Eldan | null | 1507.06580 | null | null |
Supervised Collective Classification for Crowdsourcing | cs.SI cs.LG stat.ML | Crowdsourcing utilizes the wisdom of crowds for collective classification via
information (e.g., labels of an item) provided by labelers. Current
crowdsourcing algorithms are mainly unsupervised methods that are unaware of
the quality of crowdsourced data. In this paper, we propose a supervised
collective classification algorithm that aims to identify reliable labelers
from the training data (e.g., items with known labels). The reliability (i.e.,
weighting factor) of each labeler is determined via a saddle point algorithm.
The results on several crowdsourced data show that supervised methods can
achieve better classification accuracy than unsupervised methods, and our
proposed method outperforms other algorithms.
| Pin-Yu Chen, Chia-Wei Lien, Fu-Jen Chu, Pai-Shun Ting, Shin-Ming Cheng | 10.1109/GLOCOMW.2015.7414077 | 1507.06682 | null | null |
Linear Contextual Bandits with Knapsacks | cs.LG math.OC stat.ML | We consider the linear contextual bandit problem with resource consumption,
in addition to reward generation. In each round, the outcome of pulling an arm
is a reward as well as a vector of resource consumptions. The expected values
of these outcomes depend linearly on the context of that arm. The
budget/capacity constraints require that the total consumption doesn't exceed
the budget for each resource. The objective is once again to maximize the total
reward. This problem turns out to be a common generalization of classic linear
contextual bandits (linContextual), bandits with knapsacks (BwK), and the
online stochastic packing problem (OSPP). We present algorithms with
near-optimal regret bounds for this problem. Our bounds compare favorably to
results on the unstructured version of the problem where the relation between
the contexts and the outcomes could be arbitrary, but the algorithm only
competes against a fixed set of policies accessible through an optimization
oracle. We combine techniques from the work on linContextual, BwK, and OSPP in
a nontrivial manner while also tackling new difficulties that are not present
in any of these special cases.
| Shipra Agrawal and Nikhil R. Devanur | null | 1507.06738 | null | null |
Differentially Private Analysis of Outliers | stat.ML cs.CR cs.LG | This paper investigates differentially private analysis of distance-based
outliers. The problem of outlier detection is to find a small number of
instances that are apparently distant from the remaining instances. On the
other hand, the objective of differential privacy is to conceal presence (or
absence) of any particular instance. Outlier detection and privacy protection
are thus intrinsically conflicting tasks. In this paper, instead of reporting
outliers detected, we present two types of differentially private queries that
help to understand behavior of outliers. One is the query to count outliers,
which reports the number of outliers that appear in a given subspace. Our
formal analysis on the exact global sensitivity of outlier counts reveals that
the regular global-sensitivity-based method can make the outputs too noisy,
particularly when the dimensionality of the given subspace is high. Noting that
the counts of outliers are typically expected to be relatively small compared
to the number of data, we introduce a mechanism based on the smooth upper bound
of the local sensitivity. The other is the query to discover the top-$h$ subspaces
containing a large number of outliers. This task can be naively achieved by
issuing count queries to each subspace in turn. However, the number of
subspaces can grow exponentially in the data dimensionality. This can cause
serious consumption of the privacy budget. For this task, we propose an
exponential mechanism with a customized score function for subspace discovery.
To the best of our knowledge, this study is the first attempt to ensure
differential privacy for distance-based outlier analysis. We demonstrate our
methods on synthesized and real datasets. The experimental results
show that our methods achieve better utility than the global sensitivity
based methods.
| Rina Okada, Kazuto Fukuchi, Kazuya Kakizaki and Jun Sakuma | null | 1507.06763 | null | null |
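As a minimal sketch of the count-outliers query above under the Laplace mechanism: a point is a distance-based outlier if it has fewer than k neighbours within radius r, and the released count gets noise scaled to sensitivity/epsilon. The `sensitivity` argument is a hypothetical stand-in; the paper derives the exact global sensitivity and a smooth upper bound of the local sensitivity, neither of which is reproduced here.

```python
import numpy as np

def dp_outlier_count(X, r, k, sensitivity, epsilon, seed=0):
    """Release the number of distance-based outliers with Laplace noise.

    A point is an outlier if it has fewer than k neighbours within radius r.
    `sensitivity` is a placeholder for the bound derived in the paper: the
    naive global sensitivity of outlier counts can be large, since one added
    point may flip the outlier status of many of its neighbours.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbours = (D < r).sum(axis=1) - 1          # exclude the point itself
    true_count = int((neighbours < k).sum())
    noise = np.random.default_rng(seed).laplace(scale=sensitivity / epsilon)
    return true_count, true_count + noise

X = np.random.default_rng(1).normal(size=(300, 2))
exact, private = dp_outlier_count(X, r=0.3, k=5, sensitivity=10.0, epsilon=1.0)
print(exact, round(private, 1))
```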
Implicitly Constrained Semi-Supervised Least Squares Classification | stat.ML cs.LG | We introduce a novel semi-supervised version of the least squares classifier.
This implicitly constrained least squares (ICLS) classifier minimizes the
squared loss on the labeled data among the set of parameters implied by all
possible labelings of the unlabeled data. Unlike other discriminative
semi-supervised methods, our approach does not introduce explicit additional
assumptions into the objective function, but leverages implicit assumptions
already present in the choice of the supervised least squares classifier. We
show this approach can be formulated as a quadratic programming problem and its
solution can be found using a simple gradient descent procedure. We prove that,
in a certain way, our method never leads to performance worse than the
supervised classifier. Experimental results corroborate this theoretical result
in the multidimensional case on benchmark datasets, also in terms of the error
rate.
| Jesse H. Krijthe and Marco Loog | null | 1507.06802 | null | null |
A Neighbourhood-Based Stopping Criterion for Contrastive Divergence
Learning | cs.NE cs.LG | Restricted Boltzmann Machines (RBMs) are general unsupervised learning
devices to ascertain generative models of data distributions. RBMs are often
trained using the Contrastive Divergence learning algorithm (CD), an
approximation to the gradient of the data log-likelihood. A simple
reconstruction error is often used as a stopping criterion for CD, although
several authors
\cite{schulz-et-al-Convergence-Contrastive-Divergence-2010-NIPSw,
fischer-igel-Divergence-Contrastive-Divergence-2010-ICANN} have raised doubts
concerning the feasibility of this procedure. In many cases the evolution curve
of the reconstruction error is monotonic while the log-likelihood is not, thus
indicating that the former is not a good estimator of the optimal stopping
point for learning. However, not many alternatives to the reconstruction error
have been discussed in the literature. In this manuscript we investigate simple
alternatives to the reconstruction error, based on the inclusion of information
contained in neighboring states to the training set, as a stopping criterion
for CD learning.
| E. Romero, F. Mazzanti, J. Delgado | null | 1507.06803 | null | null |
Multimodal Deep Learning for Robust RGB-D Object Recognition | cs.CV cs.LG cs.NE cs.RO | Robust object recognition is a crucial ingredient of many, if not all,
real-world robotics applications. This paper leverages recent progress on
Convolutional Neural Networks (CNNs) and proposes a novel RGB-D architecture
for object recognition. Our architecture is composed of two separate CNN
processing streams - one for each modality - which are consecutively combined
with a late fusion network. We focus on learning with imperfect sensor data, a
typical problem in real-world robotics tasks. For accurate learning, we
introduce a multi-stage training methodology and two crucial ingredients for
handling depth data with CNNs. The first is an effective encoding of depth
information for CNNs that enables learning without the need for large depth
datasets. The second is a data augmentation scheme for robust learning with depth
images by corrupting them with realistic noise patterns. We present
state-of-the-art results on the RGB-D object dataset and show recognition in
challenging RGB-D real-world noisy settings.
| Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin
Riedmiller, Wolfram Burgard | null | 1507.06821 | null | null |
The Polylingual Labeled Topic Model | cs.CL cs.IR cs.LG | In this paper, we present the Polylingual Labeled Topic Model, a model which
combines the characteristics of the existing Polylingual Topic Model and
Labeled LDA. The model accounts for multiple languages with separate topic
distributions for each language while restricting the permitted topics of a
document to a set of predefined labels. We explore the properties of the model
in a two-language setting on a dataset from the social science domain. Our
experiments show that our model outperforms LDA and Labeled LDA in terms of
their held-out perplexity and that it produces semantically coherent topics
which are well interpretable by human subjects.
| Lisa Posch, Arnim Bleier, Philipp Schaer, Markus Strohmaier | 10.1007/978-3-319-24489-1_26 | 1507.06829 | null | null |
A Reinforcement Learning Approach to Online Learning of Decision Trees | cs.LG | Online decision tree learning algorithms typically examine all features of a
new data point to update model parameters. We propose a novel alternative,
Reinforcement Learning- based Decision Trees (RLDT), that uses Reinforcement
Learning (RL) to actively examine a minimal number of features of a data point
to classify it with high accuracy. Furthermore, RLDT optimizes a long term
return, providing a better alternative to the traditional myopic greedy
approach to growing decision trees. We demonstrate that this approach performs
as well as batch learning algorithms and other online decision tree learning
algorithms, while making significantly fewer queries about the features of the
data points. We also show that RLDT can effectively handle concept drift.
| Abhinav Garlapati, Aditi Raghunathan, Vaishnavh Nagarajan and
Balaraman Ravindran | null | 1507.06923 | null | null |
Fast and Accurate Recurrent Neural Network Acoustic Models for Speech
Recognition | cs.CL cs.LG cs.NE stat.ML | We have recently shown that deep Long Short-Term Memory (LSTM) recurrent
neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as
acoustic models for speech recognition. More recently, we have shown that the
performance of sequence trained context dependent (CD) hidden Markov model
(HMM) acoustic models using such LSTM RNNs can be equaled by sequence trained
phone models initialized with connectionist temporal classification (CTC). In
this paper, we present techniques that further improve performance of LSTM RNN
acoustic models for large vocabulary speech recognition. We show that frame
stacking and reduced frame rate lead to more accurate models and faster
decoding. CD phone modeling leads to further improvements. We also present
initial results for LSTM RNN models outputting words directly.
| Ha\c{s}im Sak, Andrew Senior, Kanishka Rao, Fran\c{c}oise Beaufays | null | 1507.06947 | null | null |
Perturbed Iterate Analysis for Asynchronous Stochastic Optimization | stat.ML cs.DC cs.DS cs.LG math.OC | We introduce and analyze stochastic optimization methods where the input to
each gradient update is perturbed by bounded noise. We show that this framework
forms the basis of a unified approach to analyze asynchronous implementations
of stochastic optimization algorithms. In this framework, asynchronous
stochastic optimization algorithms can be thought of as serial methods
operating on noisy inputs. Using our perturbed iterate framework, we provide
new analyses of the Hogwild! algorithm and asynchronous stochastic coordinate
descent, that are simpler than earlier analyses, remove many assumptions of
previous models, and in some cases yield improved upper bounds on the
convergence rates. We proceed to apply our framework to develop and analyze
KroMagnon: a novel, parallel, sparse stochastic variance-reduced gradient
(SVRG) algorithm. We demonstrate experimentally on a 16-core machine that the
sparse and parallel version of SVRG is in some cases more than four orders of
magnitude faster than the standard SVRG algorithm.
| Horia Mania, Xinghao Pan, Dimitris Papailiopoulos, Benjamin Recht,
Kannan Ramchandran, Michael I. Jordan | null | 1507.06970 | null | null |
Dimensionality-reduced subspace clustering | stat.ML cs.IT cs.LG math.IT | Subspace clustering refers to the problem of clustering unlabeled
high-dimensional data points into a union of low-dimensional linear subspaces,
whose number, orientations, and dimensions are all unknown. In practice one may
have access to dimensionality-reduced observations of the data only, resulting,
e.g., from undersampling due to complexity and speed constraints on the
acquisition device or mechanism. More pertinently, even if the high-dimensional
data set is available it is often desirable to first project the data points
into a lower-dimensional space and to perform clustering there; this reduces
storage requirements and computational cost. The purpose of this paper is to
quantify the impact of dimensionality reduction through random projection on
the performance of three subspace clustering algorithms, all of which are based
on principles from sparse signal recovery. Specifically, we analyze the
thresholding based subspace clustering (TSC) algorithm, the sparse subspace
clustering (SSC) algorithm, and an orthogonal matching pursuit variant thereof
(SSC-OMP). We find, for all three algorithms, that dimensionality reduction
down to the order of the subspace dimensions is possible without incurring
significant performance degradation. Moreover, these results are order-wise
optimal in the sense that reducing the dimensionality further leads to a
fundamentally ill-posed clustering problem. Our findings carry over to the
noisy case as illustrated through analytical results for TSC and simulations
for SSC and SSC-OMP. Extensive experiments on synthetic and real data
complement our theoretical findings.
| Reinhard Heckel, Michael Tschannen, and Helmut B\"olcskei | null | 1507.07105 | null | null |
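A rough sketch of one of the three pipelines analyzed above: thresholding-based subspace clustering (TSC) applied after a Gaussian random projection. The neighbour count q, the reduced dimension, and the spectral-clustering backend are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def tsc_after_projection(X, n_clusters, q=5, p_reduced=20, seed=0):
    """Thresholding-based subspace clustering on randomly projected data.

    A Gaussian random projection to p_reduced dimensions stands in for the
    dimensionality reduction studied in the paper; TSC then keeps, for each
    point, its q largest absolute correlations and clusters the graph.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(size=(p_reduced, X.shape[1])) / np.sqrt(p_reduced)
    Z = X @ P.T
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    C = np.abs(Z @ Z.T)
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    idx = np.argsort(C, axis=1)[:, -q:]          # q nearest neighbours per point
    np.put_along_axis(A, idx, np.take_along_axis(C, idx, axis=1), axis=1)
    A = A + A.T                                  # symmetrize the affinity
    return SpectralClustering(n_clusters, affinity="precomputed",
                              random_state=seed).fit_predict(A)

# Two 3-dimensional subspaces inside ambient dimension 100.
rng = np.random.default_rng(1)
B1, B2 = rng.normal(size=(100, 3)), rng.normal(size=(100, 3))
X = np.vstack([rng.normal(size=(50, 3)) @ B1.T, rng.normal(size=(50, 3)) @ B2.T])
print(tsc_after_projection(X, n_clusters=2)[:10])
```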
A Framework of Sparse Online Learning and Its Applications | cs.LG | The amount of data in our society has been exploding in today's era of big
data. In this paper, we address several open challenges of big data stream
classification, including high volume, high velocity, high dimensionality, high
sparsity, and high class-imbalance. Many existing studies in data mining
literature solve data stream classification tasks in a batch learning setting,
which suffers from poor efficiency and scalability when dealing with big data.
To overcome the limitations, this paper investigates an online learning
framework for big data stream classification tasks. Unlike some existing online
data stream classification techniques that are often based on first-order
online learning, we propose a framework of Sparse Online Classification (SOC)
for data stream classification, which includes some state-of-the-art
first-order sparse online learning algorithms as special cases and allows us to
derive a new effective second-order online learning algorithm for data stream
classification. In addition, we also propose a new cost-sensitive sparse online
learning algorithm by extending the framework with application to tackle online
anomaly detection tasks where class distribution of data could be very
imbalanced. We also analyze the theoretical bounds of the proposed method, and
finally conduct an extensive set of experiments, in which encouraging results
validate the efficacy of the proposed algorithms in comparison to a family of
state-of-the-art techniques on a variety of data stream classification tasks.
| Dayong Wang and Pengcheng Wu and Peilin Zhao and Steven C.H. Hoi | null | 1507.07146 | null | null |
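
To illustrate the first-order end of the framework, here is a hedged sketch of one sparse online update: a gradient step on the logistic loss followed by soft-thresholding, which keeps the weight vector sparse. The actual SOC algorithms, in particular the second-order and cost-sensitive variants, differ in their update rules.

    import numpy as np

    def sparse_online_logistic(stream, dim, eta=0.1, lam=0.01):
        """First-order sparse online classification sketch: each arriving
        example triggers a gradient step on the logistic loss, then a
        soft-thresholding step that zeroes out small weights."""
        w = np.zeros(dim)
        for x, y in stream:                          # labels y in {-1, +1}
            grad = -y * x / (1.0 + np.exp(y * (w @ x)))
            w = w - eta * grad
            w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)
        return w
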
True Online Emphatic TD($\lambda$): Quick Reference and Implementation
Guide | cs.LG | This document is a guide to the implementation of true online emphatic
TD($\lambda$), a model-free temporal-difference algorithm for learning to make
long-term predictions which combines the emphasis idea (Sutton, Mahmood & White
2015) and the true-online idea (van Seijen & Sutton 2014). The setting used
here includes linear function approximation, the possibility of off-policy
training, and all the generality of general value functions, as well as the
emphasis algorithm's notion of "interest".
| Richard S. Sutton | null | 1507.07147 | null | null |
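
For orientation, the following is a sketch of the plain (non-true-online) emphatic TD(lambda) update with linear function approximation and constant interest; the guide's true online version additionally replaces the accumulating trace with a dutch-style trace, so treat this only as a simplified reference.

    import numpy as np

    def emphatic_td_lambda(theta, transitions, alpha, gamma, lam, interest=1.0):
        """Simplified emphatic TD(lambda) sketch. Each transition is
        (x, r, x_next, rho) with feature vectors x, reward r, and
        importance-sampling ratio rho (rho = 1 when on-policy)."""
        e = np.zeros_like(theta)                 # eligibility trace
        F, rho_prev = 0.0, 1.0                   # followon trace
        for x, r, x_next, rho in transitions:
            F = rho_prev * gamma * F + interest
            M = lam * interest + (1.0 - lam) * F # emphasis
            e = rho * (gamma * lam * e + M * x)
            delta = r + gamma * (theta @ x_next) - theta @ x
            theta = theta + alpha * delta * e
            rho_prev = rho
        return theta
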
Task Selection for Bandit-Based Task Assignment in Heterogeneous
Crowdsourcing | cs.LG | Task selection (picking an appropriate labeling task) and worker selection
(assigning the labeling task to a suitable worker) are two major challenges in
task assignment for crowdsourcing. Recently, worker selection has been
successfully addressed by the bandit-based task assignment (BBTA) method, while
task selection has not been thoroughly investigated yet. In this paper, we
experimentally compare several task selection strategies borrowed from active
learning literature, and show that the least confidence strategy significantly
improves the performance of task assignment in crowdsourcing.
| Hao Zhang, Masashi Sugiyama | null | 1507.07199 | null | null |
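
The least confidence strategy the abstract singles out amounts to a few lines; in the sketch below, each task carries a current posterior over labels (an assumed interface) and the most uncertain task is picked next.

    import numpy as np

    def least_confidence_task(posteriors):
        """Pick the task whose current label estimate is least certain:
        posteriors[t] is the estimated class distribution for task t,
        and least confidence minimizes max_c P(c) over tasks."""
        confidences = np.array([p.max() for p in posteriors])
        return int(np.argmin(confidences))
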
Reduced-Set Kernel Principal Components Analysis for Improving the
Training and Execution Speed of Kernel Machines | stat.ML cs.LG | This paper presents a practical, and theoretically well-founded, approach to
improve the speed of kernel manifold learning algorithms relying on spectral
decomposition. Utilizing recent insights in kernel smoothing and learning with
integral operators, we propose Reduced Set KPCA (RSKPCA), which also suggests
an easy-to-implement method to remove or replace samples with minimal effect on
the empirical operator. A simple data point selection procedure is given to
generate a substitute density for the data, with accuracy that is governed by a
user-tunable parameter. The effect of the approximation on the quality of the
KPCA solution, in terms of spectral and operator errors, can be shown directly
in terms of the density estimate error and as a function of this parameter. We
show in experiments that RSKPCA can improve both training and evaluation time
of KPCA by up to an order of magnitude, and compares favorably to the
widely-used Nystrom and density-weighted Nystrom methods.
| Hassan A. Kingravi, Patricio A. Vela, Alexandar Gray | null | 1507.07260 | null | null |
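
A Nystrom-flavored stand-in for the reduced-set idea is sketched below: eigendecompose the kernel on a small landmark set and extend to all points through the cross-kernel. RSKPCA's actual landmark choice is density-based rather than uniform, and kernel centering is omitted here for brevity.

    import numpy as np

    def reduced_set_kpca(X, n_landmarks, n_components, gamma, seed=0):
        """Approximate kernel PCA features from a reduced point set
        (uniformly sampled here; a density-based selection is closer to
        the paper). Returns an (n_samples, n_components) embedding."""
        rng = np.random.default_rng(seed)
        L = X[rng.choice(len(X), size=n_landmarks, replace=False)]
        rbf = lambda A, B: np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))
        vals, vecs = np.linalg.eigh(rbf(L, L))       # ascending order
        vals = np.maximum(vals[::-1][:n_components], 1e-12)
        vecs = vecs[:, ::-1][:, :n_components]
        return rbf(X, L) @ (vecs / np.sqrt(vals))    # out-of-sample extension
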
A genetic algorithm for autonomous navigation in partially observable
domain | cs.LG cs.AI cs.NE | The problem of autonomous navigation is one of the basic problems in
robotics. In general, it can be challenging when an autonomous vehicle is
placed in a partially observable domain. In this paper we consider a
simplistic environment model and introduce a navigation algorithm based on
Learning Classifier Systems.
| Maxim Borisyak, Andrey Ustyuzhanin | null | 1507.07374 | null | null |
Estimating an Activity Driven Hidden Markov Model | stat.ML cs.DS cs.LG cs.SI math.ST stat.TH | We define a Hidden Markov Model (HMM) in which each hidden state has
time-dependent $\textit{activity levels}$ that drive transitions and emissions,
and show how to estimate its parameters. Our construction is motivated by the
problem of inferring human mobility on sub-daily time scales from, for example,
mobile phone records.
| David A. Meyer and Asif Shakeel | null | 1507.07495 | null | null |
Distributed Stochastic Variance Reduced Gradient Methods and A Lower
Bound for Communication Complexity | math.OC cs.LG stat.ML | We study distributed optimization algorithms for minimizing the average of
convex functions. The applications include empirical risk minimization problems
in statistical machine learning where the datasets are large and have to be
stored on different machines. We design a distributed stochastic variance
reduced gradient algorithm that, under certain conditions on the condition
number, simultaneously achieves the optimal parallel runtime, amount of
communication and rounds of communication among all distributed first-order
methods up to constant factors. Our method and its accelerated extension also
outperform existing distributed algorithms in terms of the rounds of
communication as long as the condition number is not too large compared to the
size of data in each machine. We also prove a lower bound for the number of
rounds of communication for a broad class of distributed first-order methods
including the proposed algorithms in this paper. We show that our accelerated
distributed stochastic variance reduced gradient algorithm achieves this lower
bound so that it uses the fewest rounds of communication among all distributed
first-order algorithms.
| Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang | null | 1507.07595 | null | null |
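
A sketch of one communication round in this style of distributed SVRG follows: the machines jointly compute the full gradient at a reference point (one round of communication), after which a single machine performs local variance-reduced steps on its own shard. The exact schedule, shard rotation, and acceleration in the paper are more involved.

    import numpy as np

    def distributed_svrg_round(w, shards, step, local_steps):
        """One round of a distributed SVRG sketch for least squares.
        shards is a list of (X_k, y_k) pairs, one per machine."""
        n_total = sum(len(y_k) for _, y_k in shards)
        # Communication: aggregate the full gradient at the reference point.
        g_full = sum(X_k.T @ (X_k @ w - y_k) for X_k, y_k in shards) / n_total
        # Local computation on one machine's shard, no further messages.
        X_k, y_k = shards[0]
        w_ref, w_loc = w.copy(), w.copy()
        for _ in range(local_steps):
            i = np.random.randint(len(y_k))
            xi = X_k[i]
            g = xi * (xi @ w_loc - y_k[i]) - xi * (xi @ w_ref - y_k[i]) + g_full
            w_loc = w_loc - step * g
        return w_loc
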
Training recurrent networks online without backtracking | cs.NE cs.LG stat.ML | We introduce the "NoBackTrack" algorithm to train the parameters of dynamical
systems such as recurrent neural networks. This algorithm works in an online,
memoryless setting, thus requiring no backpropagation through time, and is
scalable, avoiding the large computational and memory cost of maintaining the
full gradient of the current state with respect to the parameters.
The algorithm essentially maintains, at each time, a single search direction
in parameter space. The evolution of this search direction is partly stochastic
and is constructed in such a way to provide, at every time, an unbiased random
estimate of the gradient of the loss function with respect to the parameters.
Because the gradient estimate is unbiased, on average over time the parameter
is updated as it should.
The resulting gradient estimate can then be fed to a lightweight Kalman-like
filter to yield an improved algorithm. For recurrent neural networks, the
resulting algorithms scale linearly with the number of parameters.
Small-scale experiments confirm the suitability of the approach, showing that
the stochastic approximation of the gradient introduced in the algorithm is not
detrimental to learning. In particular, the Kalman-like version of NoBackTrack
is superior to backpropagation through time (BPTT) when the time span of
dependencies in the data is longer than the truncation span for BPTT.
| Yann Ollivier, Corentin Tallec, Guillaume Charpiat | null | 1507.07680 | null | null |
Zero-Shot Domain Adaptation via Kernel Regression on the Grassmannian | cs.LG cs.CV | Most visual recognition methods implicitly assume the data distribution
remains unchanged from training to testing. However, in practice domain shift
often exists, where real-world factors such as lighting and sensor type change
between train and test, and classifiers do not generalise from source to target
domains. It is impractical to train separate models for all possible situations
because collecting and labelling the data is expensive. Domain adaptation
algorithms aim to ameliorate domain shift, allowing a model trained on a source
to perform well on a different target domain. However, even for the setting of
unsupervised domain adaptation, where the target domain is unlabelled,
collecting data for every possible target domain is still costly. In this
paper, we propose a new domain adaptation method that has no need to access
either data or labels of the target domain when it can be described by a
parametrised vector and there exist several related source domains within the
same parametric space. It greatly reduces the burden of data collection and
annotation, and our experiments show some promising results.
| Yongxin Yang and Timothy Hospedales | null | 1507.07830 | null | null |
Detect & Describe: Deep learning of bank stress in the news | q-fin.CP cs.AI cs.LG cs.NE q-fin.RM | News is a pertinent source of information on financial risks and stress
factors, which nevertheless is challenging to harness due to the sparse and
unstructured nature of natural text. We propose an approach based on
distributional semantics and deep learning with neural networks to model and
link text to a scarce set of bank distress events. Through unsupervised
training, we learn semantic vector representations of news articles as
predictors of distress events. The predictive model that we learn can signal
coinciding stress with an aggregated index at bank or European level, while
crucially allowing for automatic extraction of text descriptions of the events,
based on passages with high stress levels. The method offers insight that
models based on other types of data cannot provide, while offering a general
means for interpreting this type of semantic-predictive model. We model bank
distress with data on 243 events and 6.6M news articles for 101 large European
banks.
| Samuel R\"onnqvist and Peter Sarlin | null | 1507.07870 | null | null |
Optimally Confident UCB: Improved Regret for Finite-Armed Bandits | cs.LG math.OC | I present the first algorithm for stochastic finite-armed bandits that
simultaneously enjoys order-optimal problem-dependent regret and worst-case
regret. Besides the theoretical results, the new algorithm is simple, efficient
and empirically superb. The approach is based on UCB, but with a carefully
chosen confidence parameter that optimally balances the risk of failing
confidence intervals against the cost of excessive optimism.
| Tor Lattimore | null | 1507.07880 | null | null |
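
The index structure involved is the standard one; the sketch below exposes the confidence level as a plain constant, whereas the paper's contribution lies precisely in choosing that quantity optimally.

    import numpy as np

    def ucb_choose_arm(counts, means, t, conf=2.0):
        """Generic UCB arm selection: play each arm once, then pick the
        arm maximizing empirical mean plus an exploration bonus whose
        size is controlled by the confidence parameter."""
        if np.any(counts == 0):
            return int(np.argmax(counts == 0))    # unplayed arm first
        bonus = np.sqrt(conf * np.log(t) / counts)
        return int(np.argmax(means + bonus))
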
Sparse Multidimensional Patient Modeling using Auxiliary Confidence
Labels | cs.LG | In this work, we focus on the problem of learning a classification model that
performs inference on patient Electronic Health Records (EHRs). Often, a large
amount of costly expert supervision is required to learn such a model. To
reduce this cost, we obtain confidence labels that indicate how sure an expert
is in the class labels she provides. If meaningful confidence information can
be incorporated into a learning method, fewer patient instances may need to be
labeled to learn an accurate model. In addition, while accuracy of predictions
is important for any inference model, a model of patients must be interpretable
so that clinicians can understand how the model is making decisions. To these
ends, we develop a novel metric learning method called Confidence bAsed MEtric
Learning (CAMEL) that supports inclusion of confidence labels, but also
emphasizes interpretability in three ways. First, our method induces sparsity,
thus producing simple models that use only a few features from patient EHRs.
Second, CAMEL naturally produces confidence scores that can be taken into
consideration when clinicians make treatment decisions. Third, the metrics
learned by CAMEL induce multidimensional spaces where each dimension represents
a different "factor" that clinicians can use to assess patients. In our
experimental evaluation, we show on a real-world clinical data set that our
CAMEL methods are able to learn models that are as accurate as, or more
accurate than, other methods that use the same supervision. Furthermore, we
show that when CAMEL uses confidence scores it is able to learn models as
accurate as, or more accurate than, the others we tested while using only 10%
of the training instances. Finally, we perform
qualitative assessments on the metrics learned by CAMEL and show that they
identify and clearly articulate important factors in how the model performs
inference.
| Eric Heim and Milos Hauskrecht (University of Pittsburgh) | null | 1507.07955 | null | null |
An algorithm for online tensor prediction | stat.ML cs.IT cs.LG math.IT | We present a new method for online prediction and learning of tensors
($N$-way arrays, $N > 2$) from sequential measurements. We focus on the specific
case of 3-D tensors and exploit a recently developed framework of structured
tensor decompositions proposed in [1]. In this framework it is possible to
treat 3-D tensors as linear operators and appropriately generalize notions of
rank and positive definiteness to tensors in a natural way. Using these notions
we propose a generalization of the matrix exponentiated gradient descent
algorithm [2] to a tensor exponentiated gradient descent algorithm using an
extension of the notion of von-Neumann divergence to tensors. Then following a
similar construction as in [3], we exploit this algorithm to propose an online
algorithm for learning and prediction of tensors with provable regret
guarantees. Simulation results are presented on semi-synthetic data sets of
ratings evolving in time under local influence over a social network. The
results indicate superior performance compared to other (online) convex tensor
completion methods.
| John Pothier, Josh Girson, Shuchin Aeron | null | 1507.07974 | null | null |
A constrained optimization perspective on actor critic algorithms and
application to network routing | cs.LG math.OC | We propose a novel actor-critic algorithm with guaranteed convergence to an
optimal policy for a discounted reward Markov decision process. The actor
incorporates a descent direction that is motivated by the solution of a certain
non-linear optimization problem. We also discuss an extension to incorporate
function approximation and demonstrate the practicality of our algorithms on a
network routing application.
| Prashanth L.A., H.L. Prasad, Shalabh Bhatnagar and Prakash Chandra | null | 1507.07984 | null | null |
Document Embedding with Paragraph Vectors | cs.CL cs.AI cs.LG | Paragraph Vectors has been recently proposed as an unsupervised method for
learning distributed representations for pieces of text. In their work, the
authors showed that the method can learn an embedding of movie review texts
which can be leveraged for sentiment analysis. That proof of concept, while
encouraging, was rather narrow. Here we consider tasks other than sentiment
analysis, provide a more thorough comparison of Paragraph Vectors to other
document modelling algorithms such as Latent Dirichlet Allocation, and evaluate
performance of the method as we vary the dimensionality of the learned
representation. We benchmarked the models on two document similarity data sets,
one from Wikipedia, one from arXiv. We observe that the Paragraph Vector method
performs significantly better than other methods, and propose a simple
improvement to enhance embedding quality. Somewhat surprisingly, we also show
that, much like word embeddings, vector operations on Paragraph Vectors can
yield meaningful semantic results.
| Andrew M. Dai and Christopher Olah and Quoc V. Le | null | 1507.07998 | null | null |
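
For readers who want to reproduce the flavor of these experiments, Paragraph Vectors is available as Doc2Vec in the gensim library; the toy corpus and hyperparameters below are placeholders, not the settings used in the paper.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    docs = ["the first toy document", "a second toy document"]
    corpus = [TaggedDocument(words=d.split(), tags=[i])
              for i, d in enumerate(docs)]
    model = Doc2Vec(corpus, vector_size=100, window=5, min_count=1, epochs=20)
    vec = model.dv[0]                      # learned paragraph vector for doc 0
    neighbors = model.dv.most_similar(0)   # most similar documents
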
STC Anti-spoofing Systems for the ASVspoof 2015 Challenge | cs.SD cs.LG stat.ML | This paper presents the Speech Technology Center (STC) systems submitted to
Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof)
Challenge 2015. In this work we investigate different acoustic feature spaces
to determine reliable and robust countermeasures against spoofing attacks. In
addition to the commonly used front-end MFCC features we explored features
derived from phase spectrum and features based on applying the multiresolution
wavelet transform. Similar to state-of-the-art ASV systems, we used the
standard TV-JFA approach for probability modelling in spoofing detection
systems. Experiments performed on the development and evaluation datasets of
the Challenge demonstrate that the use of phase-related and wavelet-based
features provides a substantial input into the efficiency of the resulting STC
systems. In our research we also focused on the comparison of the linear (SVM)
and nonlinear (DBN) classifiers.
| Sergey Novoselov, Alexandr Kozlov, Galina Lavrentyeva, Konstantin
Simonchik, Vadim Shchemelinin | null | 1507.08074 | null | null |
Learning Representations for Outlier Detection on a Budget | cs.LG | The problem of detecting a small number of outliers in a large dataset is an
important task in many fields from fraud detection to high-energy physics. Two
approaches have emerged to tackle this problem: unsupervised and supervised.
Supervised approaches require a sufficient amount of labeled data and are
challenged by novel types of outliers and inherent class imbalance, whereas
unsupervised methods do not take advantage of available labeled training
examples and often exhibit poorer predictive performance. We propose BORE (a
Bagged Outlier Representation Ensemble) which uses unsupervised outlier scoring
functions (OSFs) as features in a supervised learning framework. BORE is able
to adapt to arbitrary OSF feature representations, to the imbalance in labeled
data as well as to prediction-time constraints on computational cost. We
demonstrate the good performance of BORE compared to a variety of competing
methods in the non-budgeted and the budgeted outlier detection problem on 12
real-world datasets.
| Barbora Micenkov\'a, Brian McWilliams, Ira Assent | null | 1507.08104 | null | null |
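
The core construction, turning unsupervised outlier scores into features for a supervised learner, can be sketched with off-the-shelf components; the particular scoring functions and the classifier below are illustrative stand-ins for the OSF ensemble used in BORE.

    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import LocalOutlierFactor

    def osf_features(X_train, X):
        """Stack unsupervised outlier scores as a feature representation."""
        iso = IsolationForest(random_state=0).fit(X_train)
        lof = LocalOutlierFactor(novelty=True).fit(X_train)
        return np.column_stack([iso.score_samples(X), lof.score_samples(X)])

    # Supervised layer on top, weighting classes to handle imbalance:
    # clf = LogisticRegression(class_weight="balanced")
    # clf.fit(osf_features(X_train, X_train), y_train)
    # scores = clf.predict_proba(osf_features(X_train, X_test))[:, 1]
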
IT-Dendrogram: A New Member of the In-Tree (IT) Clustering Family | stat.ML cs.CV cs.LG stat.ME | Previously, we proposed a physically-inspired method to construct data points
into an effective in-tree (IT) structure, in which the underlying cluster
structure in the dataset is well revealed. Although some edges in the IT
structure need to be removed, such undesired edges are generally
distinguishable from other edges and thus are easy to determine. For
instance, when the IT structures for the 2-dimensional (2D) datasets are
graphically presented, those undesired edges can be easily spotted and
interactively determined. However, in practice, there are many datasets that do
not lie in the 2D Euclidean space, thus their IT structures cannot be
graphically presented. But if we can effectively map those IT structures into a
visualized space in which the salient features of those undesired edges are
preserved, then the undesired edges in the IT structures can still be visually
determined in a visualization environment. Previously, this was achieved
by our method called IT-map. The outstanding advantage of IT-map is that
clusters can still be found even with the so-called crowding problem in the
embedding.
In this paper, we propose another method, called IT-Dendrogram, to achieve
the same goal through an effective combination of the IT structure and the
single link hierarchical clustering (SLHC) method. Like IT-map, IT-Dendrogram
can also effectively represent the IT structures in a visualization
environment, whereas using another form, called the Dendrogram. IT-Dendrogram
can serve as another visualization method to determine the undesired edges in
the IT structures and thus benefit the IT-based clustering analysis. This was
demonstrated on several datasets with different shapes, dimensions, and
attributes. Unlike IT-map, IT-Dendrogram can always avoid the crowding problem,
which could help users make more reliable cluster analysis in certain problems.
| Teng Qiu, Yongjie Li | null | 1507.08155 | null | null |
EESEN: End-to-End Speech Recognition using Deep RNN Models and
WFST-based Decoding | cs.CL cs.LG | The performance of automatic speech recognition (ASR) has improved
tremendously due to the application of deep neural networks (DNNs). Despite
this progress, building a new ASR system remains a challenging task, requiring
various resources, multiple training stages and significant expertise. This
paper presents our Eesen framework which drastically simplifies the existing
pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen
involves learning a single recurrent neural network (RNN) predicting
context-independent targets (phonemes or characters). To remove the need for
pre-generated frame labels, we adopt the connectionist temporal classification
(CTC) objective function to infer the alignments between speech and label
sequences. A distinctive feature of Eesen is a generalized decoding approach
based on weighted finite-state transducers (WFSTs), which enables the efficient
incorporation of lexicons and language models into CTC decoding. Experiments
show that compared with the standard hybrid DNN systems, Eesen achieves
comparable word error rates (WERs), while at the same time speeding up decoding
significantly.
| Yajie Miao, Mohammad Gowayyed, Florian Metze | null | 1507.08240 | null | null |
A Gauss-Newton Method for Markov Decision Processes | cs.AI cs.LG stat.ML | Approximate Newton methods are a standard optimization tool which aim to
maintain the benefits of Newton's method, such as a fast rate of convergence,
whilst alleviating its drawbacks, such as computationally expensive calculation
or estimation of the inverse Hessian. In this work we investigate approximate
Newton methods for policy optimization in Markov Decision Processes (MDPs). We
first analyse the structure of the Hessian of the objective function for MDPs.
We show that, like the gradient, the Hessian exhibits useful structure in the
context of MDPs and we use this analysis to motivate two Gauss-Newton Methods
for MDPs. Like the Gauss-Newton method for non-linear least squares, these
methods involve approximating the Hessian by ignoring certain terms in the
Hessian which are difficult to estimate. The approximate Hessians possess
desirable properties, such as negative definiteness, and we demonstrate several
important performance guarantees including guaranteed ascent directions,
invariance to affine transformation of the parameter space, and convergence
guarantees. We finally provide a unifying perspective of key policy search
algorithms, demonstrating that our second Gauss-Newton algorithm is closely
related to both the EM-algorithm and natural gradient ascent applied to MDPs,
but performs significantly better in practice on a range of challenging
domains.
| Thomas Furmston and Guy Lever | null | 1507.08271 | null | null |
Deep Learning for Single-View Instance Recognition | cs.CV cs.LG cs.NE cs.RO | Deep learning methods have typically been trained on large datasets in which
many training examples are available. However, many real-world product datasets
have only a small number of images available for each product. We explore the
use of deep learning methods for recognizing object instances when we have only
a single training example per class. We show that feedforward neural networks
outperform state-of-the-art methods for recognizing objects from novel
viewpoints even when trained from just a single image per object. To further
improve our performance on this task, we propose to take advantage of a
supplementary dataset in which we observe a separate set of objects from
multiple viewpoints. We introduce a new approach for training deep learning
methods for instance recognition with limited training data, in which we use an
auxiliary multi-view dataset to train our network to be robust to viewpoint
changes. We find that this approach leads to a more robust classifier for
recognizing objects from novel viewpoints, outperforming previous
state-of-the-art approaches including keypoint-matching, template-based
techniques, and sparse coding.
| David Held, Sebastian Thrun, Silvio Savarese | null | 1507.08286 | null | null |
Distributed Mini-Batch SDCA | cs.LG math.OC | We present an improved analysis of mini-batched stochastic dual coordinate
ascent for regularized empirical loss minimization (i.e. SVM and SVM-type
objectives). Our analysis allows for flexible sampling schemes, including where
data is distributed across machines, and combines a dependence on the smoothness
of the loss and/or the data spread (measured through the spectral norm).
| Martin Tak\'a\v{c} and Peter Richt\'arik and Nathan Srebro | null | 1507.08322 | null | null |
VMF-SNE: Embedding for Spherical Data | cs.LG | t-SNE is a well-known approach to embedding high-dimensional data and has
been widely used in data visualization. The basic assumption of t-SNE is that
the data are non-constrained in the Euclidean space and the local proximity can
be modelled by Gaussian distributions. This assumption does not hold for a wide
range of data types in practical applications, for instance spherical data for
which the local proximity is better modelled by the von Mises-Fisher (vMF)
distribution instead of the Gaussian. This paper presents a vMF-SNE embedding
algorithm to embed spherical data. An iterative process is derived to produce
an efficient embedding. The results on a simulation data set demonstrated that
vMF-SNE produces better embeddings than t-SNE for spherical data.
| Mian Wang, Dong Wang | 10.1007/s11040-015-9171-z | 1507.08379 | null | null |
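
The key change relative to t-SNE is the input affinity, sketched below: local proximity on the unit sphere is scored with the vMF kernel exp(kappa * x_i . x_j) instead of a Gaussian. A single global concentration kappa is assumed here for brevity.

    import numpy as np

    def vmf_affinities(X, kappa):
        """vMF-based pairwise affinities for spherical data, to be used
        in place of t-SNE's Gaussian input probabilities."""
        X = X / np.linalg.norm(X, axis=1, keepdims=True)
        logits = kappa * (X @ X.T)
        np.fill_diagonal(logits, -np.inf)            # exclude self-pairs
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P = P / P.sum(axis=1, keepdims=True)         # conditional p(j|i)
        return (P + P.T) / (2.0 * len(X))            # symmetrized joint
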
Tag-Weighted Topic Model For Large-scale Semi-Structured Documents | cs.CL cs.IR cs.LG stat.ML | The evolution of the Internet has produced massive numbers of
Semi-Structured Documents (SSDs). These SSDs contain both unstructured features (e.g.,
plain text) and metadata (e.g., tags). Most previous works focused on modeling
the unstructured text, and recently, some other methods have been proposed to
model the unstructured text with specific tags. To build a general model for
SSDs remains an important problem in terms of both model fitness and
efficiency. We propose a novel method to model the SSDs by a so-called
Tag-Weighted Topic Model (TWTM). TWTM is a framework that leverages both the
tags and words information, not only to learn the document-topic and topic-word
distributions, but also to infer the tag-topic distributions for text mining
tasks. We present an efficient variational inference method with an EM
algorithm for estimating the model parameters. Meanwhile, we propose three
large-scale solutions for our model under the MapReduce distributed computing
platform for modeling large-scale SSDs. The experimental results show the
effectiveness, efficiency and the robustness by comparing our model with the
state-of-the-art methods in document modeling, tags prediction and text
classification. We also show the performance of the three distributed solutions
in terms of time and accuracy on document modeling.
| Shuangyin Li, Jiefei Li, Guan Huang, Ruiyang Tan, and Rong Pan | null | 1507.08396 | null | null |
Framework for learning agents in quantum environments | quant-ph cs.AI cs.LG | In this paper we provide a broad framework for describing learning agents in
general quantum environments. We analyze the types of classically specified
environments which allow for quantum enhancements in learning, by contrasting
environments to quantum oracles. We show that whether or not quantum
improvements are at all possible depends on the internal structure of the
quantum environment. If the environments are constructed and the internal
structure is appropriately chosen, or if the agent has limited capacities to
influence the internal states of the environment, we show that improvements in
learning times are possible in a broad range of scenarios. Such scenarios we
call luck-favoring settings. The case of constructed environments is
particularly relevant for the class of model-based learning agents, where our
results imply a near-generic improvement.
| Vedran Dunjko, Jacob M. Taylor and Hans J. Briegel | null | 1507.08482 | null | null |
Robustness in sparse linear models: relative efficiency based on robust
approximate message passing | math.ST cs.AI cs.LG stat.ME stat.ML stat.TH | Understanding efficiency in high dimensional linear models is a longstanding
problem of interest. Classical work with smaller dimensional problems dating
back to Huber and Bickel has illustrated the benefits of efficient loss
functions. When the number of parameters $p$ is of the same order as the sample
size $n$, $p \approx n$, an efficiency pattern different from the one of Huber
was recently established. In this work, we consider the effects of model
selection on the estimation efficiency of penalized methods. In particular, we
explore whether sparsity results in new efficiency patterns when $p > n$. In
the interest of deriving the asymptotic mean squared error for regularized
M-estimators, we use the powerful framework of approximate message passing. We
propose a novel, robust and sparse approximate message passing algorithm
(RAMP), that is adaptive to the error distribution. Our algorithm includes many
non-quadratic and non-differentiable loss functions. We derive its asymptotic
mean squared error and show its convergence, while allowing $p, n, s \to
\infty$, with $n/p \in (0,1)$ and $n/s \in (1,\infty)$. We identify new
patterns of relative efficiency regarding a number of penalized $M$ estimators,
when $p$ is much larger than $n$. We show that the classical information bound
is no longer reachable, even for light--tailed error distributions. We show
that the penalized least absolute deviation estimator dominates the penalized
least square estimator, in cases of heavy--tailed distributions. We observe
this pattern for all choices of the number of non-zero parameters $s$, both $s
\leq n$ and $s \approx n$. In non-penalized problems where $s =p \approx n$,
the opposite regime holds. Therefore, we discover that the presence of model
selection significantly changes the efficiency patterns.
| Jelena Bradic | null | 1507.08726 | null | null |
Action-Conditional Video Prediction using Deep Networks in Atari Games | cs.LG cs.AI cs.CV | Motivated by vision-based reinforcement learning (RL) problems, in particular
Atari games from the recent benchmark Arcade Learning Environment (ALE), we
consider spatio-temporal prediction problems where future (image-)frames are
dependent on control variables or actions as well as previous frames. While not
composed of natural scenes, frames in Atari games are high-dimensional in size,
can involve tens of objects with one or more objects being controlled by the
actions directly and many other objects being influenced indirectly, can
involve entry and departure of objects, and can involve deep partial
observability. We propose and evaluate two deep neural network architectures
that consist of encoding, action-conditional transformation, and decoding
layers based on convolutional neural networks and recurrent neural networks.
Experimental results show that the proposed architectures are able to generate
visually-realistic frames that are also useful for control over approximately
100-step action-conditional futures in some games. To the best of our
knowledge, this paper is the first to make and evaluate long-term predictions
on high-dimensional video conditioned by control inputs.
| Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh | null | 1507.08750 | null | null |
An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with
Two-Point Feedback | cs.LG math.OC stat.ML | We consider the closely related problems of bandit convex optimization with
two-point feedback, and zero-order stochastic convex optimization with two
function evaluations per round. We provide a simple algorithm and analysis
which is optimal for convex Lipschitz functions. This improves on
\cite{dujww13}, which only provides an optimal result for smooth functions;
Moreover, the algorithm and analysis are simpler, and readily extend to
non-Euclidean problems. The algorithm is based on a small but surprisingly
powerful modification of the gradient estimator.
| Ohad Shamir | null | 1507.08752 | null | null |
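
The two-point estimator at the heart of this setting fits in a few lines; the scaling below is the usual d/(2*delta) factor for a uniformly random unit direction, and is meant only as a reference implementation of the estimator, not of the full algorithm.

    import numpy as np

    def two_point_gradient(f, x, delta):
        """Zero-order gradient estimate from two function evaluations
        along a random unit direction u; it is an unbiased estimate of
        the gradient of a smoothed version of f."""
        u = np.random.randn(len(x))
        u = u / np.linalg.norm(u)
        return (len(x) / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
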
Fast Stochastic Algorithms for SVD and PCA: Convergence Properties and
Convexity | cs.LG cs.NA math.NA math.OC stat.ML | We study the convergence properties of the VR-PCA algorithm introduced by
\cite{shamir2015stochastic} for fast computation of leading singular vectors.
We prove several new results, including a formal analysis of a block version of
the algorithm, and convergence from random initialization. We also make a few
observations of independent interest, such as how pre-initializing with just a
single exact power iteration can significantly improve the runtime of
stochastic methods, and what are the convexity and non-convexity properties of
the underlying optimization problem.
| Ohad Shamir | null | 1507.08788 | null | null |
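
For reference, one epoch of VR-PCA for the leading singular vector can be sketched as below: SVRG-style variance-corrected power-iteration steps with renormalization. Step size and initialization (e.g., the single exact power iteration mentioned above) are left schematic.

    import numpy as np

    def vr_pca_epoch(w, X, eta, num_updates):
        """One VR-PCA epoch (sketch) for the top eigenvector of
        (1/n) X^T X, where the rows of X are data points."""
        n = len(X)
        w_ref = w.copy()
        u = X.T @ (X @ w_ref) / n            # full matrix-vector product
        for _ in range(num_updates):
            i = np.random.randint(n)
            xi = X[i]
            w = w + eta * (xi * (xi @ w) - xi * (xi @ w_ref) + u)
            w = w / np.linalg.norm(w)
        return w
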
A Visual Embedding for the Unsupervised Extraction of Abstract Semantics | cs.CV cs.LG cs.NE | Vector-space word representations obtained from neural network models have
been shown to enable semantic operations based on vector arithmetic. In this
paper, we explore the existence of similar information on vector
representations of images. For that purpose we define a methodology to obtain
large, sparse vector representations of image classes, and generate vectors
through the state-of-the-art deep learning architecture GoogLeNet for 20K
images obtained from ImageNet. We first evaluate the resultant vector-space
semantics through its correlation with WordNet distances, and find vector
distances to be strongly correlated with linguistic semantics. We then explore
the location of images within the vector space, finding elements close in
WordNet to be clustered together, regardless of significant visual variances
(e.g. 118 dog types). More surprisingly, we find that the space unsupervisedly
separates complex classes without prior knowledge (e.g. living things).
Afterwards, we consider vector arithmetic. Although we are unable to obtain
meaningful results in this regard, we discuss the various problems we
encountered and how we intend to solve them. Finally, we discuss the impact
of our research for cognitive systems, focusing on the role of the architecture
being used.
| D. Garcia-Gasulla, J. B\'ejar, U. Cort\'es, E. Ayguad\'e, J. Labarta,
T. Suzumura and R. Chen | null | 1507.08818 | null | null |
A novel multivariate performance optimization method based on sparse
coding and hyper-predictor learning | cs.LG cs.CV cs.NA | In this paper, we investigate the problem of optimizing multivariate
performance measures, and propose a novel algorithm for it. Different from
traditional machine learning methods, which optimize simple loss functions to
learn a prediction function, the problem studied in this paper is how to learn
an effective hyper-predictor for a tuple of data points, so that a complex loss
function corresponding to a multivariate performance measure can be minimized.
We propose to represent the tuple of data points as a tuple of sparse codes via
a dictionary, and then apply a linear function to compare a sparse code against
a given candidate class label. To learn the dictionary, the sparse codes, and
the parameters of the linear function, we propose a joint optimization problem
in which both the reconstruction error and sparsity of the sparse codes, as
well as the upper bound of the complex loss function, are minimized. Moreover,
the upper bound of the loss function is approximated by the sparse codes and
the linear function parameters. To optimize this problem, we develop an
iterative algorithm based on gradient descent methods to learn the sparse codes
and the hyper-predictor parameters alternately. Experimental results on several
benchmark data sets show the advantage of the proposed methods over other
state-of-the-art algorithms.
| Jiachen Yang, Zhiyong Ding, Fei Guo, Huogen Wang, Nick Hughes | 10.1016/j.neunet.2015.07.011 | 1507.08847 | null | null |
Artificial Neural Networks Applied to Taxi Destination Prediction | cs.LG cs.NE | We describe our first-place solution to the ECML/PKDD discovery challenge on
taxi destination prediction. The task consisted in predicting the destination
of a taxi based on the beginning of its trajectory, represented as a
variable-length sequence of GPS points, and diverse associated
meta-information, such as the departure time, the driver id and client
information. Contrary to most published competitor approaches, we used an
almost fully automated approach based on neural networks and we ranked first
out of 381 teams. The architectures we tried use multi-layer perceptrons,
bidirectional recurrent neural networks and models inspired from recently
introduced memory networks. Our approach could easily be adapted to other
applications in which the goal is to predict a fixed-length output from a
variable-length sequence.
| Alexandre de Br\'ebisson, \'Etienne Simon, Alex Auvolat, Pascal
Vincent, Yoshua Bengio | null | 1508.00021 | null | null |
Turnover Prediction Of Shares using Data Mining techniques : A Case
Study | cs.LG | Predicting the turnover of a company in the ever-fluctuating stock market has
always proved to be a precarious and difficult task. Data mining is a
well-known sphere of Computer Science that aims at extracting meaningful
information from large databases. However, despite the
existence of many algorithms for the purpose of predicting the future trends,
their efficiency is questionable as their predictions suffer from a high error
rate. The objective of this paper is to investigate various classification
algorithms to predict the turnover of different companies based on the Stock
price. The authorized dataset for predicting the turnover was taken from
www.bsc.com and included the stock market values of various companies over the
past 10 years. The algorithms were investigated using the "R" tool. The feature
selection algorithm, Boruta, was run on this dataset to extract the important
and influential features for classification. With these extracted features, the
Total Turnover of the company was predicted using various classification
algorithms like Random Forest, Decision Tree, SVM and Multinomial Regression.
This prediction mechanism was implemented to predict the turnover of a company
on an everyday basis and hence could help navigate through dubious stock market
trades. An accuracy rate of 95% was achieved by the above prediction process.
Moreover, the importance of stock market attributes was established as well.
| D.S. Shashaank, V. Sruthi, M.L.S Vijayalakshimi and Jacob Shomona
Garcia | null | 1508.00088 | null | null |
An Analytic Framework for Maritime Situation Analysis | cs.LG | Maritime domain awareness is critical for protecting sea lanes, ports,
harbors, offshore structures and critical infrastructures against common
threats and illegal activities. Limited surveillance resources constrain
maritime domain awareness and compromise full security coverage at all times.
This situation calls for innovative intelligent systems for interactive
situation analysis to assist marine authorities and security personnel in their
routine surveillance operations. In this article, we propose a novel situation
analysis framework to analyze marine traffic data and differentiate various
scenarios of vessel engagement for the purpose of detecting anomalies of
interest for marine vessels that operate over some period of time in relative
proximity to each other. The proposed framework views vessel behavior as
probabilistic processes and uses machine learning to model common vessel
interaction patterns. We represent patterns of interest as left-to-right Hidden
Markov Models and classify such patterns using Support Vector Machines.
| Hamed Yaghoubi Shahir, Uwe Gl\"asser, Amir Yaghoubi Shahir, Hans Wehn | null | 1508.00181 | null | null |
PTE: Predictive Text Embedding through Large-scale Heterogeneous Text
Networks | cs.CL cs.LG cs.NE | Unsupervised text embedding methods, such as Skip-gram and Paragraph Vector,
have been attracting increasing attention due to their simplicity, scalability,
and effectiveness. However, comparing to sophisticated deep learning
architectures such as convolutional neural networks, these methods usually
yield inferior results when applied to particular machine learning tasks. One
possible reason is that these text embedding methods learn the representation
of text in a fully unsupervised way, without leveraging the labeled information
available for the task. Although the low dimensional representations learned
are applicable to many different tasks, they are not particularly tuned for any
task. In this paper, we fill this gap by proposing a semi-supervised
representation learning method for text data, which we call the
\textit{predictive text embedding} (PTE). Predictive text embedding utilizes
both labeled and unlabeled data to learn the embedding of text. The labeled
information and different levels of word co-occurrence information are first
represented as a large-scale heterogeneous text network, which is then embedded
into a low dimensional space through a principled and efficient algorithm. This
low dimensional embedding not only preserves the semantic closeness of words
and documents, but also has a strong predictive power for the particular task.
Compared to recent supervised approaches based on convolutional neural
networks, predictive text embedding is comparable or more effective, much more
efficient, and has fewer parameters to tune.
| Jian Tang, Meng Qu, Qiaozhu Mei | 10.1145/2783258.2783307 | 1508.00200 | null | null |
Toward a Robust Sparse Data Representation for Wireless Sensor Networks | cs.NI cs.LG cs.NE | Compressive sensing has been successfully used for optimized operations in
wireless sensor networks. However, raw data collected by sensors may be neither
originally sparse nor easily transformed into a sparse data representation.
This paper addresses the problem of transforming source data collected by
sensor nodes into a sparse representation with a few nonzero elements. Our
contributions that address three major issues include: 1) an effective method
that extracts population sparsity of the data, 2) a sparsity ratio guarantee
scheme, and 3) a customized learning algorithm of the sparsifying dictionary.
We introduce an unsupervised neural network to extract an intrinsic sparse
coding of the data. The sparse codes are generated at the activation of the
hidden layer using a sparsity nomination constraint and a shrinking mechanism.
Our analysis using real data samples shows that the proposed method outperforms
conventional sparsity-inducing methods.
| Mohammad Abu Alsheikh, Shaowei Lin, Hwee-Pink Tan, and Dusit Niyato | 10.1109/LCN.2015.7366290 | 1508.00230 | null | null |
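
The shrinking mechanism at the hidden layer can be sketched as a top-k nomination step, shown below with an assumed pre-trained weight matrix; the paper's network learns these weights unsupervised and ties the threshold to a target sparsity ratio.

    import numpy as np

    def sparse_codes(X, W, b, k):
        """Generate sparse codes at the hidden layer: compute activations,
        then keep only the k largest per sample and zero out the rest,
        guaranteeing a fixed sparsity ratio of k / n_hidden."""
        H = np.maximum(X @ W + b, 0.0)               # hidden activations
        thresh = np.partition(H, -k, axis=1)[:, -k][:, None]
        return np.where(H >= thresh, H, 0.0)
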
Optimal Radio Frequency Energy Harvesting with Limited Energy Arrival
Knowledge | cs.IT cs.LG math.IT | In this paper, we develop optimal policies for deciding when a wireless node
with radio frequency (RF) energy harvesting (EH) capabilities should try and
harvest ambient RF energy. While the idea of RF-EH is appealing, it is not
always beneficial to attempt to harvest energy; in environments where the
ambient energy is low, nodes could consume more energy being awake with their
harvesting circuits turned on than they can extract from the ambient radio
signals; it is then better to enter a sleep mode until the ambient RF energy
increases. Towards this end, we consider a scenario with intermittent energy
arrivals and a wireless node that wakes up for a period of time (herein called
the time-slot) and harvests energy. If enough energy is harvested during the
time-slot, then the harvesting is successful and excess energy is stored;
however, if there does not exist enough energy the harvesting is unsuccessful
and energy is lost.
We assume that the ambient energy level is constant during the time-slot, and
changes at slot boundaries. The energy level dynamics are described by a
two-state Gilbert-Elliott Markov chain model, where the state of the Markov
chain can only be observed during the harvesting action, and not when in sleep
mode. Two scenarios are studied under this model. In the first scenario, we
assume that we have knowledge of the transition probabilities of the Markov
chain and formulate the problem as a Partially Observable Markov Decision
Process (POMDP), where we find a threshold-based optimal policy. In the second
scenario, we assume that we do not have any knowledge about these parameters and
formulate the problem as a Bayesian adaptive POMDP; to reduce the complexity of
the computations we also propose a heuristic posterior sampling algorithm. The
performance of our approaches is demonstrated via numerical examples.
| Zhenhua Zou and Anders Gidmark and Themistoklis Charalambous and
Mikael Johansson | null | 1508.00285 | null | null |
Time-series modeling with undecimated fully convolutional neural
networks | stat.ML cs.LG | We present a new convolutional neural network-based time-series model.
Typical convolutional neural network (CNN) architectures rely on the use of
max-pooling operators in between layers, which leads to reduced resolution at
the top layers. Instead, in this work we consider a fully convolutional network
(FCN) architecture that uses causal filtering operations, and allows for the
rate of the output signal to be the same as that of the input signal. We
furthermore propose an undecimated version of the FCN, which we refer to as the
undecimated fully convolutional neural network (UFCNN), and is motivated by the
undecimated wavelet transform. Our experimental results verify that using the
undecimated version of the FCN is necessary in order to allow for effective
time-series modeling. The UFCNN has several advantages compared to other
time-series models such as the recurrent neural network (RNN) and long
short-term memory (LSTM), since it does not suffer from either the vanishing or
exploding gradients problems, and is therefore easier to train. Convolution
operations can also be implemented more efficiently compared to the recursion
that is involved in RNN-based models. We evaluate the performance of our model
in a synthetic target tracking task using bearing only measurements generated
from a state-space model, a probabilistic modeling of polyphonic music
sequences problem, and a high frequency trading task using a time-series of
ask/bid quotes and their corresponding volumes. Our experimental results using
synthetic and real datasets verify the significant advantages of the UFCNN
compared to the RNN and LSTM baselines.
| Roni Mittelman | null | 1508.00317 | null | null |
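
The causal, undecimated filtering operation underlying each UFCNN layer can be written directly; the sketch below is a single 1-D channel in numpy, whereas the model stacks many such learned filters with nonlinearities.

    import numpy as np

    def causal_conv(x, h):
        """Causal, undecimated 1-D convolution: y[t] depends only on
        x[0..t], and the output has the same length (rate) as the input,
        since no pooling or decimation is applied."""
        padded = np.concatenate([np.zeros(len(h) - 1), x])
        return np.convolve(padded, h, mode="valid")
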
On the Importance of Normalisation Layers in Deep Learning with
Piecewise Linear Activation Units | cs.CV cs.LG cs.NE | Deep feedforward neural networks with piecewise linear activations are
currently producing the state-of-the-art results in several public datasets.
The combination of deep learning models and piecewise linear activation
functions allows for the estimation of exponentially complex functions with the
use of a large number of subnetworks specialized in the classification of
similar input examples. During the training process, these subnetworks avoid
overfitting with an implicit regularization scheme based on the fact that they
must share their parameters with other subnetworks. Using this framework, we
have made an empirical observation that can improve even more the performance
of such models. We notice that these models assume a balanced initial
distribution of data points with respect to the domain of the piecewise linear
activation function. If that assumption is violated, then the piecewise linear
activation units can degenerate into purely linear activation units, which can
result in a significant reduction of their capacity to learn complex functions.
Furthermore, as the number of model layers increases, this unbalanced initial
distribution makes the model ill-conditioned. Therefore, we propose the
introduction of batch normalisation units into deep feedforward neural networks
with piecewise linear activations, which drives a more balanced use of these
activation units, where each region of the activation function is trained with
a relatively large proportion of training samples. Also, this batch
normalisation promotes the pre-conditioning of very deep learning models. We
show that introducing maxout and batch normalisation units into the
network-in-network model results in a model that produces classification results that are
better than or comparable to the current state of the art in CIFAR-10,
CIFAR-100, MNIST, and SVHN datasets.
| Zhibin Liao, Gustavo Carneiro | null | 1508.00330 | null | null |
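
A minimal PyTorch sketch of the proposed combination is given below: a linear map into k maxout pieces with batch normalisation applied to the pre-activations, so that every linear region of the unit sees a balanced share of the data. Layer sizes are illustrative.

    import torch
    import torch.nn as nn

    class BNMaxout(nn.Module):
        """Batch-normalised maxout block: normalise pre-activations,
        then take the maximum over the linear pieces."""
        def __init__(self, d_in, d_out, pieces=2):
            super().__init__()
            self.pieces = pieces
            self.linear = nn.Linear(d_in, d_out * pieces)
            self.bn = nn.BatchNorm1d(d_out * pieces)

        def forward(self, x):
            z = self.bn(self.linear(x))
            z = z.view(x.shape[0], -1, self.pieces)
            return z.max(dim=2).values               # maxout over pieces
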
Integrated Inference and Learning of Neural Factors in Structural
Support Vector Machines | stat.ML cs.CV cs.LG cs.NE | Tackling pattern recognition problems in areas such as computer vision,
bioinformatics, speech or text recognition is often done best by taking into
account task-specific statistical relations between output variables. In
structured prediction, this internal structure is used to predict multiple
outputs simultaneously, leading to more accurate and coherent predictions.
Structural support vector machines (SSVMs) are nonprobabilistic models that
optimize a joint input-output function through margin-based learning. Because
SSVMs generally disregard the interplay between unary and interaction factors
during the training phase, final parameters are suboptimal. Moreover, its
factors are often restricted to linear combinations of input features, limiting
its generalization power. To improve prediction accuracy, this paper proposes:
(i) Joint inference and learning by integration of back-propagation and
loss-augmented inference in SSVM subgradient descent; (ii) Extending SSVM
factors to neural networks that form highly nonlinear functions of input
features. Image segmentation benchmark results demonstrate improvements over
conventional SSVM training methods in terms of accuracy, highlighting the
feasibility of end-to-end SSVM training with neural factors.
| Rein Houthooft, Filip De Turck | 10.1016/j.patcog.2016.03.014 | 1508.00451 | null | null |
A variational approach to path estimation and parameter inference of
hidden diffusion processes | math.OC cs.LG cs.SY math.PR math.ST stat.TH | We consider a hidden Markov model, where the signal process, given by a
diffusion, is only indirectly observed through some noisy measurements. The
article develops a variational method for approximating the hidden states of
the signal process given the full set of observations. This, in particular,
leads to systematic approximations of the smoothing densities of the signal
process. The paper then demonstrates how an efficient inference scheme, based
on this variational approach to the approximation of the hidden states, can be
designed to estimate the unknown parameters of stochastic differential
equations. Two examples at the end illustrate the efficacy and the accuracy of
the presented method.
| Tobias Sutter, Arnab Ganguly, Heinz Koeppl | null | 1508.00506 | null | null |
A Weakly Supervised Learning Approach based on Spectral Graph-Theoretic
Grouping | cs.LG cs.AI | In this study, a spectral graph-theoretic grouping strategy for weakly
supervised classification is introduced, where a limited number of labelled
samples and a larger set of unlabelled samples are used to construct a larger
annotated training set composed of strongly labelled and weakly labelled
samples. The inherent relationship between the set of strongly labelled samples
and the set of unlabelled samples is established via spectral grouping, with
the unlabelled samples subsequently weakly annotated based on the strongly
labelled samples within the associated spectral groups. A number of similarity
graph models for spectral grouping, including two new similarity graph models
introduced in this study, are explored to investigate their performance in the
context of weakly supervised classification in handling different types of
data. Experimental results using benchmark datasets as well as real EMG
datasets demonstrate that the proposed approach to weakly supervised
classification can provide noticeable improvements in classification
performance, and that the proposed similarity graph models can lead to ultimate
learning results that are either better than or on a par with existing
similarity graph models in the context of spectral grouping for weakly
supervised classification.
| Tameem Adel, Alexander Wong, Daniel Stashuk | null | 1508.00507 | null | null |
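
The grouping-then-annotation step can be sketched with an off-the-shelf spectral clustering routine, as below; the paper's similarity graph models (including the two new ones) would replace the default RBF affinity used here.

    import numpy as np
    from sklearn.cluster import SpectralClustering

    def weakly_annotate(X, y, labeled_idx, n_groups):
        """Cluster labeled and unlabeled samples together, then weakly
        label each unlabeled sample with the majority strong label of
        its spectral group; groups with no labeled member stay at -1.
        y holds trusted labels only at positions in labeled_idx."""
        groups = SpectralClustering(n_clusters=n_groups,
                                    affinity="rbf").fit_predict(X)
        labeled = np.zeros(len(X), dtype=bool)
        labeled[labeled_idx] = True
        y_weak = np.where(labeled, y, -1)
        for g in range(n_groups):
            members = groups == g
            strong = y[members & labeled]
            if strong.size:
                vals, counts = np.unique(strong, return_counts=True)
                y_weak[members & ~labeled] = vals[np.argmax(counts)]
        return y_weak
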
Maintaining prediction quality under the condition of a growing
knowledge space | cs.AI cs.LG | Intelligence can be understood as an agent's ability to predict its
environment's dynamic by a level of precision which allows it to effectively
foresee opportunities and threats. Under the assumption that such intelligence
relies on a knowledge space, any effective reasoning would benefit from a
maximum portion of useful and a minimum portion of misleading knowledge
fragments. This begs the question of how the quality of such a knowledge space
can be kept high as the amount of knowledge keeps growing. This article proposes a
mathematical model to describe general principles of how quality of a growing
knowledge space evolves depending on error rate, error propagation and
countermeasures. We also show to what extent the quality of a knowledge
space collapses when the removal of low-quality knowledge fragments occurs too
slowly for a given knowledge space's growth rate.
| Christoph Jahnz | null | 1508.00509 | null | null |
Sparse PCA via Bipartite Matchings | stat.ML cs.DS cs.LG math.OC | We consider the following multi-component sparse PCA problem: given a set of
data points, we seek to extract a small number of sparse components with
disjoint supports that jointly capture the maximum possible variance. These
components can be computed one by one, repeatedly solving the single-component
problem and deflating the input data matrix, but as we show this greedy
procedure is suboptimal. We present a novel algorithm for sparse PCA that
jointly optimizes multiple disjoint components. The extracted features capture
variance that lies within a multiplicative factor arbitrarily close to 1 from
the optimal. Our algorithm is combinatorial and computes the desired components
by solving multiple instances of the bipartite maximum weight matching problem.
Its complexity grows as a low order polynomial in the ambient dimension of the
input data matrix, but exponentially in its rank. However, it can be
effectively applied on a low-dimensional sketch of the data; this allows us to
obtain polynomial-time approximation guarantees via spectral bounds. We
evaluate our algorithm on real data-sets and empirically demonstrate that in
many cases it outperforms existing, deflation-based approaches.
| Megasthenis Asteris, Dimitris Papailiopoulos, Anastasios Kyrillidis,
Alexandros G. Dimakis | null | 1508.00625 | null | null |
Bayesian mixtures of spatial spline regressions | stat.ME cs.LG stat.CO stat.ML | This work concerns the framework of model-based clustering for spatial
functional data, where the data are surfaces. We first introduce a Bayesian
spatial spline regression model with mixed-effects (BSSR) for modeling spatial
function data. The BSSR model is based on Nodal basis functions for spatial
regression and accommodates both common mean behavior for the data through a
fixed-effects part, and inter-individual variability thanks to a
random-effects part. Then, in order to model populations of spatial functional
data issued from heterogeneous groups, we integrate the BSSR model into a
mixture framework. The resulting model is a Bayesian mixture of spatial spline
regressions with mixed-effects (BMSSR) used for density estimation and
model-based surface clustering. The models, through their Bayesian formulation,
allow to integrate possible prior knowledge on the data structure and
constitute a good alternative to the recent mixture of spatial spline regressions
model estimated in a maximum likelihood framework via the
expectation-maximization (EM) algorithm. The Bayesian model inference is
performed by Markov Chain Monte Carlo (MCMC) sampling. We derive two Gibbs
samplers to infer the BSSR and the BMSSR models and apply them to simulated
surfaces and a real problem of handwritten digit recognition using the MNIST
data set. The obtained results highlight the potential benefit of the proposed
Bayesian approaches for modeling surfaces possibly dispersed in particular in
clusters.
| Faicel Chamroukhi | null | 1508.00635 | null | null |
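For intuition, here is what the label-sampling step of a Gibbs sweep for such a mixture might look like. The spline-regression likelihood and the updates for coefficients, mixing weights, and variances are omitted, and the interface is an assumption rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_labels(log_lik, log_weights):
    """One Gibbs step: draw a component label for each surface from its
    posterior.  log_lik[i, g] is the log-likelihood of surface i under
    component g's spatial spline regression (assumed precomputed)."""
    logp = log_lik + log_weights               # unnormalized log posterior
    logp -= logp.max(axis=1, keepdims=True)    # stabilize before exponentiating
    p = np.exp(logp)
    p /= p.sum(axis=1, keepdims=True)
    return np.array([rng.choice(p.shape[1], p=row) for row in p])
```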
Episodic Multi-armed Bandits | cs.LG stat.ML | We introduce a new class of reinforcement learning methods referred to as
{\em episodic multi-armed bandits} (eMAB). In eMAB the learner proceeds in {\em
episodes}, each composed of several {\em steps}, in which it chooses an action
and observes a feedback signal. Moreover, in each step, it can take a special
action, called the $stop$ action, that ends the current episode. After the
$stop$ action is taken, the learner collects a terminal reward, and observes
the costs and terminal rewards associated with each step of the episode. The
goal of the learner is to maximize its cumulative gain (i.e., the terminal
reward minus costs) over all episodes by learning to choose the best sequence
of actions based on the feedback. First, we define an {\em oracle} benchmark,
which sequentially selects the actions that maximize the expected immediate
gain. Then, we propose our online learning algorithm, named {\em FeedBack
Adaptive Learning} (FeedBAL), and prove that its regret with respect to the
benchmark is bounded with high probability and increases logarithmically in
expectation. Moreover, the regret only has polynomial dependence on the number
of steps, actions and states. eMAB can be used to model applications that
involve humans in the loop, ranging from personalized medical screening to
personalized web-based education, where sequences of actions are taken in each
episode, and optimal behavior requires adapting the chosen actions based on the
feedback.
| Cem Tekin and Mihaela van der Schaar | null | 1508.00641 | null | null |
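The episode structure described above can be captured by a small interaction loop. The environment interface below (reset/feedback/step/terminal_reward) is hypothetical, and FeedBAL's actual index computations are not reproduced:

```python
# Sketch of the eMAB protocol: within an episode the learner takes actions,
# pays per-step costs, and may take the special "stop" action to end the
# episode and collect the terminal reward.

def run_episode(env, policy):
    env.reset()
    total_cost = 0.0
    while True:
        action = policy.choose(env.feedback())
        if action == "stop":
            # gain for the episode = terminal reward minus accumulated costs
            return env.terminal_reward() - total_cost
        total_cost += env.step(action)   # each non-stop step incurs a cost
```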
Adaptivity and Computation-Statistics Tradeoffs for Kernel and Distance
based High Dimensional Two Sample Testing | math.ST cs.AI cs.IT cs.LG math.IT stat.ML stat.TH | Nonparametric two sample testing is a decision theoretic problem that
involves identifying differences between two random variables without making
parametric assumptions about their underlying distributions. We refer to the
most common settings as mean difference alternatives (MDA), for testing
differences only in first moments, and general difference alternatives (GDA),
for testing any difference in distributions. A large number of
test statistics have been proposed for both these settings. This paper connects
three classes of statistics - high dimensional variants of Hotelling's t-test,
statistics based on Reproducing Kernel Hilbert Spaces, and energy statistics
based on pairwise distances. We ask the question: how much statistical power do
popular kernel and distance based tests for GDA have when the unknown
distributions differ in their means, compared to specialized tests for MDA?
We formally characterize the power of popular tests for GDA like the Maximum
Mean Discrepancy with the Gaussian kernel (gMMD) and bandwidth-dependent
variants of the Energy Distance with the Euclidean norm (eED) in the
high-dimensional MDA regime. Some practically important properties include:
(a) eED and gMMD have asymptotically equal power; furthermore, they enjoy a
free lunch because, while they are additionally consistent for GDA, they also
have the same power as specialized high-dimensional t-test variants for MDA.
All these tests are asymptotically optimal (including matching constants)
under MDA for spherical covariances, according to simple lower bounds; (b) the
power of gMMD is independent of the kernel bandwidth, as long as it is larger
than the choice made by the median heuristic; (c) there is a clear and smooth
computation-statistics tradeoff for linear-time, subquadratic-time and
quadratic-time versions of these tests, with more computation resulting in
higher power.
| Aaditya Ramdas, Sashank J. Reddi, Barnabas Poczos, Aarti Singh, Larry
Wasserman | null | 1508.00655 | null | null |
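The quadratic-time gMMD statistic referenced above is standard and easy to state. Below is a minimal NumPy sketch of the unbiased estimator together with the median heuristic for the bandwidth; the linear- and subquadratic-time variants in the computation-statistics tradeoff are not shown:

```python
import numpy as np

def gaussian_mmd2(X, Y, bandwidth):
    """Unbiased quadratic-time estimate of MMD^2 with a Gaussian kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    np.fill_diagonal(Kxx, 0.0)   # drop i = j terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2 * Kxy.mean()

def median_heuristic(X, Y):
    """Median pairwise distance over the pooled sample, the bandwidth
    choice that property (b) above refers to."""
    Z = np.vstack([X, Y])
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.median(d2[d2 > 0]))
```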
Parameter Database : Data-centric Synchronization for Scalable Machine
Learning | cs.DB cs.LG | We propose a new data-centric synchronization framework for carrying out
machine learning (ML) tasks in a distributed environment. Our framework
exploits the iterative nature of ML algorithms and relaxes the
application-agnostic bulk synchronous parallel (BSP) paradigm that has
previously been used for distributed machine learning. Data-centric
synchronization complements function-centric synchronization, which uses
stale updates to increase the
throughput of distributed ML computations. Experiments to validate our
framework suggest that we can attain substantial improvement over BSP while
guaranteeing sequential correctness of ML tasks.
| Naman Goel, Divyakant Agrawal, Sanjay Chawla, Ahmed Elmagarmid | null | 1508.00703 | null | null |
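The abstract does not spell out the protocol, but the stale-update idea it builds on can be sketched as a bounded-staleness clock, as in stale-synchronous parameter servers. Everything below is an illustrative assumption, not the paper's design:

```python
# Illustrative bounded-staleness sketch: a worker may read parameters only
# if it is at most `bound` iterations ahead of the slowest worker.

class StalenessClock:
    def __init__(self, bound):
        self.bound = bound
        self.clock = {}   # worker id -> completed iterations

    def tick(self, worker):
        self.clock[worker] = self.clock.get(worker, 0) + 1

    def can_read(self, worker):
        slowest = min(self.clock.values(), default=0)
        return self.clock.get(worker, 0) - slowest <= self.bound
```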
Multi-Label Active Learning from Crowds | cs.LG cs.SI | Multi-label active learning is a hot topic in reducing labeling cost by
optimally choosing the most valuable instance for which to query a label from
an oracle. In this paper, we consider pool-based multi-label active learning
in the crowdsourcing setting, where during the active query process, instead
of resorting to a high-cost oracle for the ground truth, multiple low-cost
imperfect annotators with various expertise are available for labeling. To deal
with this problem, we propose the MAC (Multi-label Active learning from Crowds)
approach, which incorporates the local influence of label correlations to build a
probabilistic model over the multi-label classifier and annotators. Based on
this model, we can estimate the labels for instances as well as the expertise
of each annotator. Then we propose instance-selection and annotator-selection
criteria that consider the uncertainty/diversity of instances and the
reliability of annotators, such that the most reliable annotator will be
queried for the most valuable instances. Experimental results demonstrate the
effectiveness of the proposed approach.
| Shao-Yuan Li, Yuan Jiang, Zhi-Hua Zhou | null | 1508.00722 | null | null |
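A toy version of the selection step described above might look like the following. The probabilistic model that produces `label_probs` and `reliability` is not implemented here, and the diversity term is omitted for brevity:

```python
import numpy as np

def select_query(label_probs, reliability):
    """Pick the most uncertain instance (by total Bernoulli label entropy)
    and the most reliable annotator.  label_probs[i, l] is the estimated
    probability that instance i carries label l."""
    eps = 1e-12
    entropy = -(label_probs * np.log(label_probs + eps)
                + (1 - label_probs) * np.log(1 - label_probs + eps)).sum(axis=1)
    return int(np.argmax(entropy)), int(np.argmax(reliability))
```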
Fixed-point algorithms for learning determinantal point processes | cs.LG | Determinantal point processes (DPPs) offer an elegant tool for encoding
probabilities over subsets of a ground set. Discrete DPPs are parametrized by a
positive semidefinite matrix (called the DPP kernel), and estimating this
kernel is key to learning DPPs from observed data. We consider the task of
learning the DPP kernel, and develop for it a surprisingly simple yet effective
new algorithm. Our algorithm offers the following benefits over previous
approaches: (a) it is much simpler; (b) it yields equally good and sometimes
even better local maxima; and (c) it runs an order of magnitude faster on large
problems. We present experimental results on both real and simulated data to
illustrate the numerical performance of our technique.
| Zelda Mariet, Suvrit Sra | null | 1508.00792 | null | null |
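For concreteness, a Picard-style fixed-point update consistent with the abstract's description is sketched below: Delta is the gradient of the DPP log-likelihood and the update is L <- L + L Delta L. Step-size choices and convergence safeguards are omitted; consult the paper for the exact algorithm:

```python
import numpy as np

def picard_step(L, subsets):
    """One fixed-point update for the DPP kernel L given observed subsets.
    grad = (1/n) * sum_i U_i (L_{Y_i})^{-1} U_i^T - (L + I)^{-1}, where U_i
    selects the rows/columns in subset Y_i."""
    d = L.shape[0]
    grad = -np.linalg.inv(L + np.eye(d))
    for Y in subsets:                              # Y: list of observed item indices
        block = np.zeros_like(L)
        block[np.ix_(Y, Y)] = np.linalg.inv(L[np.ix_(Y, Y)])
        grad += block / len(subsets)
    return L + L @ grad @ L                        # Picard-style update
```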
Perceptron like Algorithms for Online Learning to Rank | cs.LG stat.ML | Perceptron is a classic online algorithm for learning a classification
function. In this paper, we provide a novel extension of the perceptron
algorithm to the learning to rank problem in information retrieval. We consider
popular listwise performance measures such as Normalized Discounted Cumulative
Gain (NDCG) and Average Precision (AP). A modern perspective on perceptron for
classification is that it is simply an instance of online gradient descent
(OGD), during mistake rounds, using the hinge loss function. Motivated by this
interpretation, we propose a novel family of listwise, large margin ranking
surrogates. Members of this family can be thought of as analogs of the hinge
loss. Exploiting a certain self-bounding property of the proposed family, we
provide a guarantee on the cumulative NDCG (or AP) induced loss incurred by our
perceptron-like algorithm. We show that, if there exists a perfect oracle
ranker which can correctly rank each instance in an online sequence of ranking
data, with some margin, the cumulative loss of the perceptron algorithm on that
sequence is bounded by a constant, irrespective of the length of the sequence.
This result is reminiscent of Novikoff's convergence theorem for the
classification perceptron. Moreover, we prove a lower bound on the cumulative
loss achievable by any deterministic algorithm, under the assumption of
existence of perfect oracle ranker. The lower bound shows that our perceptron
bound is not tight, and we propose another, \emph{purely online}, algorithm
which achieves the lower bound. We provide empirical results on simulated and
large commercial datasets to corroborate our theoretical results.
| Sougata Chaudhuri and Ambuj Tewari | null | 1508.00842 | null | null |
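The interpretation that motivates this paper, the perceptron as online gradient descent on the hinge loss during mistake rounds, is easy to state in code. The listwise ranking surrogates themselves are not reproduced here:

```python
import numpy as np

def perceptron_ogd(stream, dim, lr=1.0):
    """Classification perceptron as OGD on the hinge loss: an update is
    made only on mistake rounds, where the hinge loss has nonzero gradient.
    `stream` yields (x, y) pairs with y in {-1, +1}."""
    w = np.zeros(dim)
    for x, y in stream:
        if y * (w @ x) <= 0:      # mistake round
            w += lr * y * x       # gradient step on the hinge loss
    return w
```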
Structured Prediction: From Gaussian Perturbations to Linear-Time
Principled Algorithms | stat.ML cs.LG | Margin-based structured prediction commonly uses a maximum loss over all
possible structured outputs \cite{Altun03,Collins04b,Taskar03}. In natural
language processing, recent work \cite{Zhang14,Zhang15} has proposed the use of
the maximum loss over random structured outputs sampled independently from some
proposal distribution. This method is linear-time in the number of random
structured outputs and trivially parallelizable. We study this family of loss
functions in the PAC-Bayes framework under Gaussian perturbations
\cite{McAllester07}. Under some technical conditions and up to statistical
accuracy, we show that this family of loss functions produces a tighter upper
bound of the Gibbs decoder distortion than commonly used methods. Thus, using
the maximum loss over random structured outputs is a principled way of learning
the parameter of structured prediction models. Besides explaining the
experimental success of \cite{Zhang14,Zhang15}, our theoretical results show
that more general techniques are possible.
| Jean Honorio, Tommi Jaakkola | null | 1508.00945 | null | null |
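The sampled maximum-loss surrogate studied above can be sketched directly. Here `proposal`, `score`, and `loss` are problem-specific placeholders (proposal distribution, decoder score, and task loss), and no particular NLP task is assumed:

```python
def sampled_max_loss(w, x, y_true, proposal, score, loss, n_samples):
    """Maximum margin violation over n_samples structured outputs drawn
    independently from the proposal distribution -- linear-time in the
    number of samples, versus a max over all possible outputs."""
    worst = 0.0
    for _ in range(n_samples):
        y = proposal()                                    # random structured output
        violation = loss(y, y_true) + score(w, x, y) - score(w, x, y_true)
        worst = max(worst, violation)
    return worst
```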
MAP Support Detection for Greedy Sparse Signal Recovery Algorithms in
Compressive Sensing | cs.IT cs.LG math.IT | A reliable support detection is essential for a greedy algorithm to
reconstruct a sparse signal accurately from compressed and noisy measurements.
This paper proposes a novel support detection method for greedy algorithms,
which is referred to as "\textit{maximum a posteriori (MAP) support
detection}". Unlike existing support detection methods that identify support
indices with the largest correlation value in magnitude per iteration, the
proposed method selects them with the largest likelihood ratios computed under
the true and null support hypotheses by simultaneously exploiting the
distributions of sensing matrix, sparse signal, and noise. Leveraging this
technique, MAP-Matching Pursuit (MAP-MP) is first presented to show the
advantages of exploiting the proposed support detection method, and a
sufficient condition for perfect signal recovery is derived for the case when
the sparse signal is binary. Subsequently, a set of iterative greedy
algorithms, called MAP-generalized Orthogonal Matching Pursuit (MAP-gOMP),
MAP-Compressive Sampling Matching Pursuit (MAP-CoSaMP), and MAP-Subspace
Pursuit (MAP-SP) are presented to demonstrate the applicability of the proposed
support detection method to existing greedy algorithms. Empirical results
show that the proposed greedy algorithms with highly reliable support
detection can be better, faster, and easier to implement than basis pursuit via
linear programming.
| Namyoon Lee | 10.1109/TSP.2016.2580527 | 1508.00964 | null | null |
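The contrast drawn above can be sketched as a pluggable scoring rule in one greedy step: the default score is the conventional correlation magnitude, while the MAP rule would replace it with per-index likelihood ratios computed from the sensing-matrix, signal, and noise distributions (that model-specific computation is omitted here):

```python
import numpy as np

def select_index(A, residual, score_fn=None):
    """One support-detection step of a greedy recovery algorithm.
    score_fn(A, residual) -> per-column scores; defaults to |A^T r|,
    the conventional correlation-based rule."""
    scores = np.abs(A.T @ residual) if score_fn is None else score_fn(A, residual)
    return int(np.argmax(scores))
```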
Progressive EM for Latent Tree Models and Hierarchical Topic Detection | cs.LG cs.CL cs.IR stat.ML | Hierarchical latent tree analysis (HLTA) is recently proposed as a new method
for topic detection. It differs fundamentally from the LDA-based methods in
terms of topic definition, topic-document relationship, and learning method. It
has been shown to discover significantly more coherent topics and better topic
hierarchies. However, HLTA relies on the Expectation-Maximization (EM)
algorithm for parameter estimation and hence is not efficient enough to deal
with large datasets. In this paper, we propose a method to drastically speed up
HLTA using a technique inspired by recent advances in the method of moments.
Empirical experiments show that our method greatly improves the efficiency of
HLTA. It is as efficient as the state-of-the-art LDA-based method for
hierarchical topic detection and finds substantially better topics and topic
hierarchies.
| Peixian Chen, Nevin L. Zhang, Leonard K.M. Poon, Zhourong Chen | null | 1508.00973 | null | null |
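For context, the computation that makes plain EM expensive on large corpora is the per-document posterior (E-step) shown below. The moments-inspired speedup the paper proposes is not sketched, and the interface is an assumption:

```python
import numpy as np

def em_step(log_lik, log_prior):
    """One EM iteration for a discrete latent variable.  log_lik[i, z] is
    the log-likelihood of document i under latent state z."""
    logp = log_lik + log_prior
    logp -= logp.max(axis=1, keepdims=True)    # stabilize
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)    # E-step: responsibilities
    new_log_prior = np.log(resp.mean(axis=0))  # M-step for mixing proportions
    return resp, new_log_prior
```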