title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Non-convex Regularizations for Feature Selection in Ranking With Sparse
SVM | cs.LG | Feature selection in learning to rank has recently emerged as a crucial
issue. Whereas several preprocessing approaches have been proposed, only a few
works have focused on integrating feature selection into the learning
process. In this work, we propose a general framework for feature selection in
learning to rank using SVM with a sparse regularization term. We investigate
both classical convex regularizations such as $\ell_1$ or weighted $\ell_1$
and non-convex regularization terms such as log penalty, Minimax Concave
Penalty (MCP) or $\ell_p$ pseudo norm with $p<1$. Two algorithms are
proposed, first an accelerated proximal approach for solving the convex
problems, second a reweighted $\ell_1$ scheme to address the non-convex
regularizations. We conduct intensive experiments on nine datasets from Letor
3.0 and Letor 4.0 corpora. Numerical results show that the use of non-convex
regularizations we propose leads to more sparsity in the resulting models while
prediction performance is preserved. The number of features is decreased by up
to a factor of six compared to the $\ell_1$ regularization. In addition, the
software is publicly available on the web.
| L\'ea Laporte (IRIT), R\'emi Flamary (OCA, LAGRANGE), Stephane Canu
(LITIS), S\'ebastien D\'ejean (IMT), Josiane Mothe (IRIT) | 10.1109/TNNLS.2013.2286696 | 1507.00500 | null | null |
Optimal Transport for Domain Adaptation | cs.LG | Domain adaptation from one data space (or domain) to another is one of the
most challenging tasks of modern data analytics. If the adaptation is done
correctly, models built on a specific data space become more robust when
confronted with data depicting the same semantic concepts (the classes), but
observed by another observation system with its own specificities. Among the
many strategies proposed to adapt one domain to another, finding a common
representation has shown excellent properties: by finding a common
representation for both domains, a single classifier can be effective in both
and use labelled samples from the source domain to predict the unlabelled
samples of the target domain. In this paper, we propose a regularized
unsupervised optimal transportation model to perform the alignment of the
representations in the source and target domains. We learn a transportation
plan matching both PDFs, which constrains labelled samples in the source domain
to remain close during transport. In this way, we simultaneously exploit the few
labelled samples in the source domain and the unlabelled distributions observed in
both domains. Experiments on toy and challenging real visual adaptation
examples show the interest of the method, which consistently outperforms
state-of-the-art approaches.
| Nicolas Courty (OBELIX), R\'emi Flamary (LAGRANGE, OCA), Devis Tuia
(LASIG), Alain Rakotomamonjy (LITIS) | null | 1507.00504 | null | null |
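The abstract above centres on learning a coupling between the source and target distributions. Below is a minimal sketch of entropy-regularized optimal transport with a barycentric mapping of the source samples; it is illustrative only, omits the paper's class-based regularizer, and all toy data, names, and constants are assumptions made here.

```python
# Minimal sketch: entropy-regularized OT (Sinkhorn) plus barycentric mapping of
# source samples onto the target domain. Not the authors' full regularized model.
import numpy as np

def sinkhorn(a, b, M, reg, n_iter=200):
    """Entropy-regularized transport plan between histograms a, b with cost M."""
    K = np.exp(-M / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # coupling matrix

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(50, 2))              # toy source samples
Xt = rng.normal(2.0, 1.0, size=(60, 2))              # toy target samples
a = np.full(50, 1 / 50)                              # uniform source weights
b = np.full(60, 1 / 60)                              # uniform target weights
M = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1) # squared Euclidean cost

G = sinkhorn(a, b, M, reg=0.1)
# Barycentric mapping: transport each source point into the target domain; a
# classifier trained on the mapped source (with its labels) can then be applied
# to the unlabelled target samples.
Xs_mapped = (G @ Xt) / G.sum(axis=1, keepdims=True)
```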
Regularized linear system identification using atomic, nuclear and
kernel-based norms: the role of the stability constraint | cs.SY cs.LG | Inspired by ideas taken from the machine learning literature, new
regularization techniques have been recently introduced in linear system
identification. In particular, all the adopted estimators solve a regularized
least squares problem, differing in the nature of the penalty term assigned to
the impulse response. Popular choices include atomic and nuclear norms (applied
to Hankel matrices) as well as norms induced by the so-called stable spline
kernels. In this paper, a comparative study of estimators based on these
different types of regularizers is reported. Our findings reveal that stable
spline kernels outperform approaches based on atomic and nuclear norms since
they suitably embed information on impulse response stability and smoothness.
This point is illustrated using the Bayesian interpretation of regularization.
We also design a new class of regularizers defined by "integral" versions of
stable spline/TC kernels. Under quite realistic experimental conditions, the
new estimators outperform classical prediction error methods also when the
latter are equipped with an oracle for model order selection.
| Gianluigi Pillonetto, Tianshi Chen, Alessandro Chiuso, Giuseppe De
Nicolao, Lennart Ljung | null | 1507.00564 | null | null |
Self-Learning Cloud Controllers: Fuzzy Q-Learning for Knowledge
Evolution | cs.SY cs.AI cs.DC cs.LG cs.SE | Cloud controllers aim at responding to application demands by automatically
scaling the compute resources at runtime to meet performance guarantees and
minimize resource costs. Existing cloud controllers often resort to scaling
strategies that are codified as a set of adaptation rules. However, for a cloud
provider, applications running on top of the cloud infrastructure are more or
less black-boxes, making it difficult at design time to define optimal or
pre-emptive adaptation rules. Thus, the burden of making adaptation decisions
is often delegated to the cloud application. Yet, in most cases, application
developers in turn have limited knowledge of the cloud infrastructure. In this
paper, we propose learning adaptation rules during runtime. To this end, we
introduce FQL4KE, a self-learning fuzzy cloud controller. In particular, FQL4KE
learns and modifies fuzzy rules at runtime. The benefit is that for designing
cloud controllers, we do not have to rely solely on precise design-time
knowledge, which may be difficult to acquire. FQL4KE empowers users to specify
cloud controllers by simply adjusting weights representing priorities in system
goals instead of specifying complex adaptation rules. The applicability of
FQL4KE has been experimentally assessed as part of the cloud application
framework ElasticBench. The experimental results indicate that FQL4KE
outperforms our previously developed fuzzy controller without learning
mechanisms and the native Azure auto-scaling.
| Pooyan Jamshidi, Amir Sharifloo, Claus Pahl, Andreas Metzger, Giovani
Estrada | null | 1507.00567 | null | null |
SQL for SRL: Structure Learning Inside a Database System | cs.LG cs.DB | The position we advocate in this paper is that relational algebra can provide
a unified language for both representing and computing with
statistical-relational objects, much as linear algebra does for traditional
single-table machine learning. Relational algebra is implemented in the
Structured Query Language (SQL), which is the basis of relational database
management systems. To support our position, we have developed the FACTORBASE
system, which uses SQL as a high-level scripting language for
statistical-relational learning of a graphical model structure. The design
philosophy of FACTORBASE is to manage statistical models as first-class
citizens inside a database. Our implementation shows how our SQL constructs in
FACTORBASE facilitate fast, modular, and reliable program development.
Empirical evidence from six benchmark databases indicates that leveraging
database system capabilities achieves scalable model structure learning.
| Oliver Schulte and Zhensong Qian | null | 1507.00646 | null | null |
Distributional Smoothing with Virtual Adversarial Training | stat.ML cs.LG | We propose local distributional smoothness (LDS), a new notion of smoothness
for statistical models that can be used as a regularization term to promote the
smoothness of the model distribution. We name the LDS-based regularization
virtual adversarial training (VAT). The LDS of a model at an input datapoint is
defined as the KL-divergence based robustness of the model distribution against
local perturbation around the datapoint. VAT resembles adversarial training,
but distinguishes itself in that it determines the adversarial direction from
the model distribution alone without using the label information, making it
applicable to semi-supervised learning. The computational cost for VAT is
relatively low. For neural networks, the approximated gradient of the LDS can be
computed with no more than three pairs of forward and back propagations. When
we applied our technique to supervised and semi-supervised learning for the
MNIST dataset, it outperformed all the training methods other than the current
state-of-the-art method, which is based on a highly advanced generative model.
We also applied our method to SVHN and NORB, and confirmed our method's
superior performance over the current state-of-the-art semi-supervised method
applied to these datasets.
| Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii | null | 1507.00677 | null | null |
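The local distributional smoothness described in the abstract above can be written out explicitly. The following is a sketch using notation introduced here ($\epsilon$ is the perturbation radius, $\theta$ the model parameters), a paraphrase rather than a verbatim equation from the paper:

$$
r_{\mathrm{vadv}}(x) = \arg\max_{\|r\|_2 \le \epsilon}
  D_{\mathrm{KL}}\!\left( p(y \mid x; \theta) \,\|\, p(y \mid x + r; \theta) \right),
\qquad
\mathrm{LDS}(x; \theta) = -\, D_{\mathrm{KL}}\!\left( p(y \mid x; \theta) \,\|\, p(y \mid x + r_{\mathrm{vadv}}(x); \theta) \right).
$$

Since neither expression uses the true label, the resulting regularizer applies equally to unlabelled inputs, matching the semi-supervised use described in the abstract.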
Fast, Provable Algorithms for Isotonic Regression in all
$\ell_{p}$-norms | cs.LG cs.DS math.ST stat.TH | Given a directed acyclic graph $G,$ and a set of values $y$ on the vertices,
the Isotonic Regression of $y$ is a vector $x$ that respects the partial order
described by $G,$ and minimizes $||x-y||,$ for a specified norm. This paper
gives improved algorithms for computing the Isotonic Regression for all
weighted $\ell_{p}$-norms with rigorous performance guarantees. Our algorithms
are quite practical, and their variants can be implemented to run fast in
practice.
| Rasmus Kyng and Anup Rao and Sushant Sachdeva | null | 1507.00710 | null | null |
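As a concrete restatement of the problem defined in the abstract above (with $E$ denoting the edge set of $G$ and $w$ the per-vertex weights, notation introduced here), weighted $\ell_p$ isotonic regression can be written as:

$$
\min_{x \in \mathbb{R}^{V}} \; \|x - y\|_{w,p}
= \Bigl( \sum_{v \in V} w_v \, |x_v - y_v|^{p} \Bigr)^{1/p}
\quad \text{subject to} \quad x_u \le x_v \ \ \text{for all } (u,v) \in E .
$$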
Incentivizing Exploration In Reinforcement Learning With Deep Predictive
Models | cs.AI cs.LG stat.ML | Achieving efficient and scalable exploration in complex domains poses a major
challenge in reinforcement learning. While Bayesian and PAC-MDP approaches to
the exploration problem offer strong formal guarantees, they are often
impractical in higher dimensions due to their reliance on enumerating the
state-action space. Hence, exploration in complex domains is often performed
with simple epsilon-greedy methods. In this paper, we consider the challenging
Atari games domain, which requires processing raw pixel inputs and delayed
rewards. We evaluate several more sophisticated exploration strategies,
including Thompson sampling and Boltzmann exploration, and propose a new
exploration method based on assigning exploration bonuses from a concurrently
learned model of the system dynamics. By parameterizing our learned model with
a neural network, we are able to develop a scalable and efficient approach to
exploration bonuses that can be applied to tasks with complex, high-dimensional
state spaces. In the Atari domain, our method provides the most consistent
improvement across a range of games that pose a major challenge for prior
methods. In addition to raw game-scores, we also develop an AUC-100 metric for
the Atari Learning domain to evaluate the impact of exploration on this
benchmark.
| Bradly C. Stadie, Sergey Levine, Pieter Abbeel | null | 1507.00814 | null | null |
D-MFVI: Distributed Mean Field Variational Inference using Bregman ADMM | cs.LG stat.ML | Bayesian models provide a framework for probabilistic modelling of complex
datasets. However, many such models are computationally demanding, especially
in the presence of large datasets. On the other hand, in sensor network
applications, statistical (Bayesian) parameter estimation usually needs
distributed algorithms, in which both data and computation are distributed
across the nodes of the network. In this paper we propose a general framework
for distributed Bayesian learning using Bregman Alternating Direction Method of
Multipliers (B-ADMM). We demonstrate the utility of our framework, with Mean
Field Variational Bayes (MFVB) as the primitive for distributed Matrix
Factorization (MF) and distributed affine structure from motion (SfM).
| Behnam Babagholami-Mohamadabadi, Sejong Yoon, Vladimir Pavlovic | null | 1507.00824 | null | null |
Ridge Regression, Hubness, and Zero-Shot Learning | cs.LG stat.ML | This paper discusses the effect of hubness in zero-shot learning, when ridge
regression is used to find a mapping from the example space to the label
space. Contrary to the existing approach, which attempts to find a mapping from
the example space to the label space, we show that mapping labels into the
example space is desirable to suppress the emergence of hubs in the subsequent
nearest neighbor search step. Assuming a simple data model, we prove that the
proposed approach indeed reduces hubness. This was verified empirically on the
tasks of bilingual lexicon extraction and image labeling: hubness was reduced
with both of these tasks and the accuracy was improved accordingly.
| Yutaro Shigeto, Ikumi Suzuki, Kazuo Hara, Masashi Shimbo, Yuji
Matsumoto | null | 1507.00825 | null | null |
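The contrast drawn in the abstract above is between two ridge-regression directions. A minimal numpy sketch is given below; the data, dimensions, and regularization strength are placeholders introduced here, not the authors' code.

```python
# Forward ridge (examples -> label space) vs. reverse ridge (labels -> example
# space), each followed by nearest-neighbour search; the reverse direction is
# the one the abstract argues suppresses hubs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # example vectors (e.g. image or word features)
Y = rng.normal(size=(200, 20))    # label vectors (e.g. attribute embeddings)
lam = 1.0                         # ridge strength (placeholder)

def ridge(A, B, lam):
    """Closed-form ridge map W with B ~= A @ W: W = (A^T A + lam*I)^{-1} A^T B."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ B)

W_fwd = ridge(X, Y, lam)   # forward: map examples into the label space
W_rev = ridge(Y, X, lam)   # reverse: map labels into the example space

x_query = X[0]
# Reverse direction: project every candidate label into the example space and
# pick the one whose projection is nearest to the query example.
projected_labels = Y @ W_rev
pred_rev = np.argmin(((projected_labels - x_query) ** 2).sum(axis=1))
# Forward direction, shown for contrast (hub-prone according to the abstract).
pred_fwd = np.argmin((((x_query @ W_fwd) - Y) ** 2).sum(axis=1))
```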
LogDet Rank Minimization with Application to Subspace Clustering | cs.CV cs.LG stat.ML | Low-rank matrix is desired in many machine learning and computer vision
problems. Most of the recent studies use the nuclear norm as a convex surrogate
of the rank operator. However, all singular values are simply added together by
the nuclear norm, and thus the rank may not be well approximated in practical
problems. In this paper, we propose to use a log-determinant (LogDet) function
as a smooth and closer, though non-convex, approximation to rank for obtaining
a low-rank representation in subspace clustering. An augmented Lagrange
multiplier strategy is applied to iteratively optimize the LogDet-based
non-convex objective function on potentially large-scale data. By making use of
the angular information of principal directions of the resultant low-rank
representation, an affinity graph matrix is constructed for spectral
clustering. Experimental results on motion segmentation and face clustering
data demonstrate that the proposed method often outperforms state-of-the-art
subspace clustering algorithms.
| Zhao Kang, Chong Peng, Jie Cheng and Qiang Chen | 10.1155/2015/824289 | 1507.00908 | null | null |
Twitter Sentiment Analysis: Lexicon Method, Machine Learning Method and
Their Combination | cs.CL cs.IR cs.LG stat.ME stat.ML | This paper covers the two approaches for sentiment analysis: i) the lexicon-based
method; ii) the machine learning method. We describe several techniques to
implement these approaches and discuss how they can be adopted for sentiment
classification of Twitter messages. We present a comparative study of different
lexicon combinations and show that enhancing sentiment lexicons with emoticons,
abbreviations and social-media slang expressions increases the accuracy of
lexicon-based classification for Twitter. We discuss the importance of feature
generation and feature selection processes for machine learning sentiment
classification. To quantify the performance of the main sentiment analysis
methods over Twitter we run these algorithms on a benchmark Twitter dataset
from the SemEval-2013 competition, task 2-B. The results show that the machine
learning method based on SVM and Naive Bayes classifiers outperforms the
lexicon method. We present a new ensemble method that uses a lexicon-based
sentiment score as an input feature for the machine learning approach. The
combined method proved to produce more precise classifications. We also show
that employing a cost-sensitive classifier for highly unbalanced datasets
yields an improvement in sentiment classification performance of up to 7%.
| Olga Kolchyna, Tharsis T. P. Souza, Philip Treleaven, Tomaso Aste | null | 1507.00955 | null | null |
Describing Multimedia Content using Attention-based Encoder--Decoder
Networks | cs.NE cs.CL cs.CV cs.LG | Whereas deep neural networks were first mostly used for classification tasks,
they are rapidly expanding in the realm of structured output problems, where
the observed target is composed of multiple random variables that have a rich
joint distribution, given the input. We focus in this paper on the case where
the input also has a rich structure and the input and output structures are
somehow related. We describe systems that learn to attend to different places
in the input, for each element of the output, for a variety of tasks: machine
translation, image caption generation, video clip description and speech
recognition. All these systems are based on a shared set of building blocks:
gated recurrent neural networks and convolutional neural networks, along with
trained attention mechanisms. We report on experimental results with these
systems, showing impressively good performance and the advantage of the
attention mechanism.
| Kyunghyun Cho, Aaron Courville, Yoshua Bengio | 10.1109/TMM.2015.2477044 | 1507.01053 | null | null |
Convex Factorization Machine for Regression | stat.ML cs.LG | We propose the convex factorization machine (CFM), which is a convex variant
of the widely used Factorization Machines (FMs). Specifically, we employ a
linear+quadratic model and regularize the linear term with the
$\ell_2$-regularizer and the quadratic term with the trace norm regularizer.
Then, we formulate the CFM optimization as a semidefinite programming problem
and propose an efficient optimization procedure with Hazan's algorithm. A key
advantage of CFM over existing FMs is that it can find a globally optimal
solution, while FMs may get a poor locally optimal solution since the objective
function of FMs is non-convex. In addition, the proposed algorithm is simple
yet effective and can be implemented easily. Finally, CFM is a general
factorization method and can also be used for other factorization problems,
including multi-view matrix factorization and tensor completion
problems. Using synthetic and MovieLens datasets, we first show that the
proposed CFM achieves results competitive to FMs. Furthermore, in a
toxicogenomics prediction task, we show that CFM outperforms a state-of-the-art
tensor factorization method.
| Makoto Yamada, Wenzhao Lian, Amit Goyal, Jianhui Chen, Kishan
Wimalawarne, Suleiman A Khan, Samuel Kaski, Hiroshi Mamitsuka, Yi Chang | null | 1507.01073 | null | null |
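The model family described in the abstract above can be written compactly. The display below uses notation introduced here ($Z$ is the symmetric interaction-weight matrix, $\ell$ a convex loss) and is a paraphrase rather than the paper's exact formulation:

$$
f(x) = w_0 + w^{\top} x + \sum_{i < j} Z_{ij} \, x_i x_j ,
\qquad
\min_{w_0, w, Z} \; \sum_{n} \ell\bigl(y_n, f(x_n)\bigr)
  + \lambda_1 \|w\|_2^{2} + \lambda_2 \|Z\|_{*} ,
$$

where the trace (nuclear) norm $\|Z\|_{*}$ takes the place of the explicit low-rank factorization used by standard FMs, which is what makes the problem convex and, as the abstract notes, amenable to a semidefinite programming formulation solved with Hazan's algorithm.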
Correlated Multiarmed Bandit Problem: Bayesian Algorithms and Regret
Analysis | math.OC cs.LG stat.ML | We consider the correlated multiarmed bandit (MAB) problem in which the
rewards associated with each arm are modeled by a multivariate Gaussian random
variable, and we investigate the influence of the assumptions in the Bayesian
prior on the performance of the upper credible limit (UCL) algorithm and a new
correlated UCL algorithm. We rigorously characterize the influence of accuracy,
confidence, and correlation scale in the prior on the decision-making
performance of the algorithms. Our results show how priors and correlation
structure can be leveraged to improve performance.
| Vaibhav Srivastava, Paul Reverdy, Naomi Ehrich Leonard | null | 1507.01160 | null | null |
Dependency Recurrent Neural Language Models for Sentence Completion | cs.CL cs.AI cs.LG | Recent work on language modelling has shifted focus from count-based models
to neural models. In these works, the words in each sentence are always
considered in a left-to-right order. In this paper we show how we can improve
the performance of the recurrent neural network (RNN) language model by
incorporating the syntactic dependencies of a sentence, which have the effect
of bringing relevant contexts closer to the word being predicted. We evaluate
our approach on the Microsoft Research Sentence Completion Challenge and show
that the dependency RNN proposed improves over the RNN by about 10 points in
accuracy. Furthermore, we achieve results comparable with the state-of-the-art
models on this task.
| Piotr Mirowski, Andreas Vlachos | null | 1507.01193 | null | null |
Combining Models of Approximation with Partial Learning | cs.LG | In Gold's framework of inductive inference, the model of partial learning
requires the learner to output exactly one correct index for the target object
and only the target object infinitely often. Since infinitely many of the
learner's hypotheses may be incorrect, it is not obvious whether a partial
learner can be modified to "approximate" the target object.
Fulk and Jain (Approximate inference and scientific method. Information and
Computation 114(2):179--191, 1994) introduced a model of approximate learning
of recursive functions. The present work extends their research and solves an
open problem of Fulk and Jain by showing that there is a learner which
approximates and partially identifies every recursive function by outputting a
sequence of hypotheses which, in addition, are also almost all finite variants
of the target function.
The subsequent study is dedicated to the question of how these findings
generalise to the learning of r.e. languages from positive data. Here three
variants of approximate learning will be introduced and investigated with
respect to the question whether they can be combined with partial learning.
Following the line of Fulk and Jain's research, further investigations provide
conditions under which partial language learners can eventually output only
finite variants of the target language. The combinabilities of other partial
learning criteria will also be briefly studied.
| Ziyuan Gao, Frank Stephan and Sandra Zilles | null | 1507.01215 | null | null |
Scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit | cs.CV cs.LG stat.ML | Subspace clustering methods based on $\ell_1$, $\ell_2$ or nuclear norm
regularization have become very popular due to their simplicity, theoretical
guarantees and empirical success. However, the choice of the regularizer can
greatly impact both theory and practice. For instance, $\ell_1$ regularization
is guaranteed to give a subspace-preserving affinity (i.e., there are no
connections between points from different subspaces) under broad conditions
(e.g., arbitrary subspaces and corrupted data). However, it requires solving a
large scale convex optimization problem. On the other hand, $\ell_2$ and
nuclear norm regularization provide efficient closed form solutions, but
require very strong assumptions to guarantee a subspace-preserving affinity,
e.g., independent subspaces and uncorrupted data. In this paper we study a
subspace clustering method based on orthogonal matching pursuit. We show that
the method is both computationally efficient and guaranteed to give a
subspace-preserving affinity under broad conditions. Experiments on synthetic
data verify our theoretical analysis, and applications in handwritten digit and
face clustering show that our approach achieves the best trade off between
accuracy and efficiency.
| Chong You, Daniel P. Robinson, Rene Vidal | null | 1507.01238 | null | null |
Experiments on Parallel Training of Deep Neural Network using Model
Averaging | cs.LG cs.NE | In this work we apply model averaging to the parallel training of deep neural
networks (DNNs). Parallelization is done in a model-averaging manner. Data is
partitioned and distributed to different nodes for local model updates, and
model averaging across nodes is done every few minibatches. We use multiple
GPUs for data parallelization, and Message Passing Interface (MPI) for
communication between nodes, which allows us to perform model averaging
frequently without losing much time on communication. We investigate the
effectiveness of Natural Gradient Stochastic Gradient Descent (NG-SGD) and
Restricted Boltzmann Machine (RBM) pretraining for parallel training in
model-averaging framework, and explore the best setups in terms of different
learning rate schedules, averaging frequencies and minibatch sizes. It is shown
that NG-SGD and RBM pretraining benefit parameter-averaging based model
training. On the 300h Switchboard dataset, a 9.3-times speedup is achieved
using 16 GPUs and a 17-times speedup using 32 GPUs, with limited decoding accuracy
loss.
| Hang Su, Haoyu Chen | null | 1507.01239 | null | null |
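The training loop sketched in the abstract above (local SGD on data shards, periodic averaging of parameters across nodes) can be illustrated in a few lines. The snippet below is a single-process toy with a linear model standing in for the DNN and no MPI; all names and constants are placeholders.

```python
# Toy parameter-averaging parallel SGD: each "worker" takes local minibatch steps
# on its own data shard, and every avg_every steps all workers average parameters
# (the all-reduce that runs over MPI across GPUs in the real system).
import numpy as np

rng = np.random.default_rng(0)
n_workers, dim, lr, avg_every = 4, 10, 0.05, 5
w_true = rng.normal(size=dim)

def make_shard(n=256):
    X = rng.normal(size=(n, dim))
    return X, X @ w_true + 0.1 * rng.normal(size=n)

shards = [make_shard() for _ in range(n_workers)]      # one data shard per node
workers = [np.zeros(dim) for _ in range(n_workers)]    # one model replica per node

for step in range(100):
    for k, (X, y) in enumerate(shards):
        idx = rng.integers(0, len(y), size=32)          # local minibatch
        grad = 2 * X[idx].T @ (X[idx] @ workers[k] - y[idx]) / 32
        workers[k] = workers[k] - lr * grad             # local SGD step
    if (step + 1) % avg_every == 0:                     # model-averaging round
        mean_w = np.mean(workers, axis=0)
        workers = [mean_w.copy() for _ in range(n_workers)]
```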
Semi-supervised Multi-sensor Classification via Consensus-based
Multi-View Maximum Entropy Discrimination | cs.IT cs.AI cs.LG math.IT | In this paper, we consider multi-sensor classification when there is a large
number of unlabeled samples. The problem is formulated under the multi-view
learning framework and a Consensus-based Multi-View Maximum Entropy
Discrimination (CMV-MED) algorithm is proposed. By iteratively maximizing the
stochastic agreement between multiple classifiers on the unlabeled dataset, the
algorithm simultaneously learns multiple high accuracy classifiers. We
demonstrate that our proposed method can yield improved performance over
previous multi-view learning approaches by comparing performance on three real
multi-sensor data sets.
| Tianpei Xie, Nasser M. Nasrabadi and Alfred O. Hero III | 10.1109/ICASSP.2015.7178308 | 1507.01269 | null | null |
Learning Deep Neural Network Policies with Continuous Memory States | cs.LG cs.RO | Policy learning for partially observed control tasks requires policies that
can remember salient information from past observations. In this paper, we
present a method for learning policies with internal memory for
high-dimensional, continuous systems, such as robotic manipulators. Our
approach consists of augmenting the state and action space of the system with
continuous-valued memory states that the policy can read from and write to.
Learning general-purpose policies with this type of memory representation
directly is difficult, because the policy must automatically figure out the
most salient information to memorize at each time step. We show that, by
decomposing this policy search problem into a trajectory optimization phase and
a supervised learning phase through a method called guided policy search, we
can acquire policies with effective memorization and recall strategies.
Intuitively, the trajectory optimization phase chooses the values of the memory
states that will make it easier for the policy to produce the right action in
future states, while the supervised learning phase encourages the policy to use
memorization actions to produce those memory states. We evaluate our method on
tasks involving continuous control in manipulation and navigation settings, and
show that our method can learn complex policies that successfully complete a
range of tasks that require memory.
| Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel | null | 1507.01273 | null | null |
Scan $B$-Statistic for Kernel Change-Point Detection | cs.LG math.ST stat.ML stat.TH | Detecting the emergence of an abrupt change-point is a classic problem in
statistics and machine learning. Kernel-based nonparametric statistics have
been used for this task; they enjoy fewer assumptions on the distributions than
parametric approaches and can handle high-dimensional data. In this paper we
focus on the scenario when the amount of background data is large, and propose
two related computationally efficient kernel-based statistics for change-point
detection, which are inspired by the recently developed $B$-statistics. A novel
theoretical result of the paper is the characterization of the tail probability
of these statistics using the change-of-measure technique, which focuses on
characterizing the tail of the detection statistics rather than obtaining its
asymptotic distribution under the null distribution. Such approximations are
crucial to control the false alarm rate, which corresponds to the significance
level in offline change-point detection and the average-run-length in online
change-point detection. Our approximations are shown to be highly accurate.
Thus, they provide a convenient way to find detection thresholds for both
offline and online cases without the need to resort to the more expensive
simulations or bootstrapping. We show that our methods perform well on both
synthetic data and real data.
| Shuang Li, Yao Xie, Hanjun Dai, and Le Song | null | 1507.01279 | null | null |
End-to-end Convolutional Network for Saliency Prediction | cs.CV cs.LG cs.NE | The prediction of saliency areas in images has been traditionally addressed
with hand-crafted features based on neuroscience principles. This paper, however,
addresses the problem with a completely data-driven approach by training a
convolutional network. The learning process is formulated as a minimization of
a loss function that measures the Euclidean distance of the predicted saliency
map with the provided ground truth. The recent publication of large datasets of
saliency prediction has provided enough data to train a not very deep
architecture which is both fast and accurate. The convolutional network in this
paper, named JuntingNet, won the LSUN 2015 challenge on saliency prediction
with a superior performance in all considered metrics.
| Junting Pan and Xavier Gir\'o-i-Nieto | null | 1507.01422 | null | null |
Revisiting Large Scale Distributed Machine Learning | cs.DC cs.LG | Nowadays, with the widespread use of smartphones and other portable gadgets
equipped with a variety of sensors, data is ubiquitously available, and the focus
of machine learning has shifted from being able to infer from small training
samples to dealing with large-scale, high-dimensional data. In domains such as
personal healthcare applications, which motivates this survey, distributed
machine learning is a promising line of research, both for scaling up learning
algorithms and, mostly, for dealing with data which is inherently produced at
different locations. This report offers a thorough overview of
state-of-the-art algorithms for distributed machine learning, for both
supervised and unsupervised learning, ranging from simple linear logistic
regression to graphical models and clustering. We propose future directions for
most categories, specific to the potential personal healthcare applications.
With this in mind, the report focuses on how security and low communication
overhead can be assured in the specific case of a strictly client-server
architectural model. As particular directions, we provide an exhaustive
presentation of an empirical clustering algorithm, k-windows, and propose an
asynchronous distributed machine learning algorithm that would scale well and
also would be computationally cheap and easy to implement.
| Radu Cristian Ionescu | null | 1507.01461 | null | null |
Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization | math.OC cs.LG | We propose a new first-order optimisation algorithm to solve high-dimensional
non-smooth composite minimisation problems. Typical examples of such problems
have an objective that decomposes into a non-smooth empirical risk part and a
non-smooth regularisation penalty. The proposed algorithm, called Semi-Proximal
Mirror-Prox, leverages the Fenchel-type representation of one part of the
objective while handling the other part of the objective via linear
minimization over the domain. The algorithm stands in contrast with more
classical proximal gradient algorithms with smoothing, which require the
computation of proximal operators at each iteration and can therefore be
impractical for high-dimensional problems. We establish the theoretical
convergence rate of Semi-Proximal Mirror-Prox, which exhibits the optimal
complexity bounds, i.e. $O(1/\epsilon^2)$, for the number of calls to linear
minimization oracle. We present promising experimental results showing the
interest of the approach in comparison to competing methods.
| Niao He and Zaid Harchaoui | null | 1507.01476 | null | null |
Grid Long Short-Term Memory | cs.NE cs.CL cs.LG | This paper introduces Grid Long Short-Term Memory, a network of LSTM cells
arranged in a multidimensional grid that can be applied to vectors, sequences
or higher dimensional data such as images. The network differs from existing
deep LSTM architectures in that the cells are connected between network layers
as well as along the spatiotemporal dimensions of the data. The network
provides a unified way of using LSTM for both deep and sequential computation.
We apply the model to algorithmic tasks such as 15-digit integer addition and
sequence memorization, where it is able to significantly outperform the
standard LSTM. We then give results for two empirical tasks. We find that 2D
Grid LSTM achieves 1.47 bits per character on the Wikipedia character
prediction benchmark, which is state-of-the-art among neural approaches. In
addition, we use the Grid LSTM to define a novel two-dimensional translation
model, the Reencoder, and show that it outperforms a phrase-based reference
system on a Chinese-to-English translation task.
| Nal Kalchbrenner, Ivo Danihelka, Alex Graves | null | 1507.01526 | null | null |
A Simple Algorithm for Maximum Margin Classification, Revisited | cs.LG | In this note, we revisit the algorithm of Har-Peled et al. [HRZ07] for
computing a linear maximum margin classifier. Our presentation is
self-contained, and the algorithm itself is slightly simpler than the original
algorithm. The algorithm itself is a simple Perceptron-like iterative
algorithm. For more details and background, the reader is referred to the
original paper.
| Sariel Har-Peled | null | 1507.01563 | null | null |
Emphatic Temporal-Difference Learning | cs.LG cs.AI | Emphatic algorithms are temporal-difference learning algorithms that change
their effective state distribution by selectively emphasizing and
de-emphasizing their updates on different time steps. Recent works by Sutton,
Mahmood and White (2015), and Yu (2015) show that by varying the emphasis in a
particular way, these algorithms become stable and convergent under off-policy
training with linear function approximation. This paper serves as a unified
summary of the available results from both works. In addition, we demonstrate
the empirical benefits from the flexibility of emphatic algorithms, including
state-dependent discounting, state-dependent bootstrapping, and the
user-specified allocation of function approximation resources.
| A. Rupam Mahmood, Huizhen Yu, Martha White, Richard S. Sutton | null | 1507.01569 | null | null |
Learning Tractable Probabilistic Models for Fault Localization | cs.SE cs.LG | In recent years, several probabilistic techniques have been applied to
various debugging problems. However, most existing probabilistic debugging
systems use relatively simple statistical models, and fail to generalize across
multiple programs. In this work, we propose Tractable Fault Localization Models
(TFLMs) that can be learned from data, and probabilistically infer the location
of the bug. While most previous statistical debugging methods generalize over
many executions of a single program, TFLMs are trained on a corpus of
previously seen buggy programs, and learn to identify recurring patterns of
bugs. Widely-used fault localization techniques such as TARANTULA evaluate the
suspiciousness of each line in isolation; in contrast, a TFLM defines a joint
probability distribution over buggy indicator variables for each line. Joint
distributions with rich dependency structure are often computationally
intractable; TFLMs avoid this by exploiting recent developments in tractable
probabilistic models (specifically, Relational SPNs). Further, TFLMs can
incorporate additional sources of information, including coverage-based
features such as TARANTULA. We evaluate the fault localization performance of
TFLMs that include TARANTULA scores as features in the probabilistic model. Our
study shows that the learned TFLMs isolate bugs more effectively than previous
statistical methods or using TARANTULA directly.
| Aniruddh Nath and Pedro Domingos | null | 1507.01698 | null | null |
Rethinking LDA: moment matching for discrete ICA | stat.ML cs.LG | We consider moment matching techniques for estimation in Latent Dirichlet
Allocation (LDA). By drawing explicit links between LDA and discrete versions
of independent component analysis (ICA), we first derive a new set of
cumulant-based tensors, with an improved sample complexity. Moreover, we reuse
standard ICA techniques such as joint diagonalization of tensors to improve
over existing methods based on the tensor power method. In an extensive set of
experiments on both synthetic and real datasets, we show that our new
combination of tensors and orthogonal joint diagonalization techniques
outperforms existing moment matching methods.
| Anastasia Podosinnikova, Francis Bach, and Simon Lacoste-Julien | null | 1507.01784 | null | null |
Dependency-based Convolutional Neural Networks for Sentence Embedding | cs.CL cs.AI cs.LG | In sentence modeling and classification, convolutional neural network
approaches have recently achieved state-of-the-art results, but all such
efforts process word vectors sequentially and neglect long-distance
dependencies. To exploit both deep learning and linguistic structures, we
propose a tree-based convolutional neural network model which exploits various
long-distance relationships between words. Our model improves the sequential
baselines on all three sentiment and question classification tasks, and
achieves the highest published accuracy on TREC.
| Mingbo Ma and Liang Huang and Bing Xiang and Bowen Zhou | null | 1507.01839 | null | null |
A linear approach for sparse coding by a two-layer neural network | cs.LG physics.data-an | Many approaches to transform classification problems from non-linear to
linear by feature transformation have been recently presented in the
literature. These notably include sparse coding methods and deep neural
networks. However, many of these approaches require the repeated application of
a learning process upon the presentation of unseen data input vectors, or else
involve the use of large numbers of parameters and hyper-parameters, which must
be chosen through cross-validation, thus increasing running time dramatically.
In this paper, we propose and experimentally investigate a new approach for the
purpose of overcoming limitations of both kinds. The proposed approach makes
use of a linear auto-associative network (called SCNN) with just one hidden
layer. The combination of this architecture with a specific error function to
be minimized enables one to learn a linear encoder computing a sparse code
which turns out to be as similar as possible to the sparse coding that one
obtains by re-training the neural network. Importantly, the linearity of SCNN
and the choice of the error function allow one to achieve reduced running time
in the learning phase. The proposed architecture is evaluated on the basis of
two standard machine learning tasks. Its performance is compared with that
of recently proposed non-linear auto-associative neural networks. The overall
results suggest that linear encoders can be profitably used to obtain sparse
data representations in the context of machine learning problems, provided that
an appropriate error function is used during the learning phase.
| Alessandro Montalto, Giovanni Tessitore, Roberto Prevete | null | 1507.01892 | null | null |
Wasserstein Training of Boltzmann Machines | stat.ML cs.LG | The Boltzmann machine provides a useful framework to learn highly complex,
multimodal and multiscale data distributions that occur in the real world. The
default method to learn its parameters consists of minimizing the
Kullback-Leibler (KL) divergence from training samples to the Boltzmann model.
We propose in this work a novel approach for Boltzmann training which assumes
that a meaningful metric between observations is given. This metric can be
represented by the Wasserstein distance between distributions, for which we
derive a gradient with respect to the model parameters. Minimization of this
new Wasserstein objective leads to generative models that are better when
considering the metric and that have a cluster-like structure. We demonstrate
the practical potential of these models for data completion and denoising, for
which the metric between observations plays a crucial role.
| Gr\'egoire Montavon, Klaus-Robert M\"uller, Marco Cuturi | null | 1507.01972 | null | null |
Learning Leading Indicators for Time Series Predictions | cs.LG stat.ML | We consider the problem of learning models for forecasting multiple
time-series systems together with discovering the leading indicators that serve
as good predictors for the system. We model the systems by linear vector
autoregressive models (VAR) and link the discovery of leading indicators to
inferring sparse graphs of Granger-causality. We propose new problem
formulations and develop two new methods to learn such models, gradually
increasing the complexity of assumptions and approaches. While the first method
assumes common structures across the whole system, our second method uncovers
model clusters based on the Granger-causality and leading indicators together
with learning the model parameters. We study the performance of our methods on
a comprehensive set of experiments and confirm their efficacy and their
advantages over state-of-the-art sparse VAR and graphical Granger learning
methods.
| Magda Gregorova, Alexandros Kalousis, St\'ephane Marchand-Maillet | null | 1507.01978 | null | null |
A Bayesian Approach for Online Classifier Ensemble | cs.LG | We propose a Bayesian approach for recursively estimating the classifier
weights in online learning of a classifier ensemble. In contrast with past
methods, such as stochastic gradient descent or online boosting, our approach
estimates the weights by recursively updating its posterior distribution. For a
specified class of loss functions, we show that it is possible to formulate a
suitably defined likelihood function and hence use the posterior distribution
as an approximation to the global empirical loss minimizer. If the stream of
training data is sampled from a stationary process, we can also show that our
approach admits a superior rate of convergence to the expected loss minimizer
than is possible with standard stochastic gradient descent. In experiments with
real-world datasets, our formulation often performs better than
state-of-the-art stochastic gradient descent and online boosting algorithms.
| Qinxun Bai, Henry Lam, Stan Sclaroff | null | 1507.02011 | null | null |
Beyond Convexity: Stochastic Quasi-Convex Optimization | cs.LG math.OC | Stochastic convex optimization is a basic and well studied primitive in
machine learning. It is well known that convex and Lipschitz functions can be
minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized
Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent, which
updates according to the direction of the gradients, rather than the gradients
themselves. In this paper we analyze a stochastic version of NGD and prove its
convergence to a global minimum for a wider class of functions: we require the
functions to be quasi-convex and locally-Lipschitz. Quasi-convexity broadens
the concept of unimodality to multidimensions and allows for certain types of
saddle points, which are a known hurdle for first-order optimization methods
such as gradient descent. Locally-Lipschitz functions are only required to be
Lipschitz in a small region around the optimum. This assumption circumvents
gradient explosion, which is another known hurdle for gradient descent
variants. Interestingly, unlike the vanilla SGD algorithm, the stochastic
normalized gradient descent algorithm provably requires a minimal minibatch
size.
| Elad Hazan, Kfir Y. Levy, Shai Shalev-Shwartz | null | 1507.02030 | null | null |
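The update discussed in the abstract above uses only the direction of the minibatch gradient, not its magnitude. A minimal sketch follows, with an ordinary least-squares objective standing in for the quasi-convex losses studied in the paper; data and constants are placeholders.

```python
# Stochastic normalized gradient descent (SNGD): step along g / ||g|| rather
# than g itself. The abstract notes a non-trivial minibatch size is required.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=1000)

w, lr, batch = np.zeros(5), 0.1, 64
for t in range(500):
    idx = rng.integers(0, len(y), size=batch)
    g = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / batch   # minibatch gradient
    norm = np.linalg.norm(g)
    if norm > 0:
        w = w - lr * g / norm                          # normalized (direction-only) step
```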
Shedding Light on the Asymmetric Learning Capability of AdaBoost | cs.LG cs.AI cs.CV | In this paper, we propose a different insight to analyze AdaBoost. This
analysis reveals that, beyond some preconceptions, AdaBoost can be directly
used as an asymmetric learning algorithm, preserving all its theoretical
properties. A novel class-conditional description of AdaBoost, which models the
actual asymmetric behavior of the algorithm, is presented.
| Iago Landesa-V\'azquez, Jos\'e Luis Alba-Castro | 10.1016/j.patrec.2011.10.022 | 1507.02084 | null | null |
Double-Base Asymmetric AdaBoost | cs.CV cs.AI cs.LG | Based on the use of different exponential bases to define class-dependent
error bounds, a new and highly efficient asymmetric boosting scheme, coined as
AdaBoostDB (Double-Base), is proposed. Supported by a fully theoretical
derivation procedure, unlike most of the other approaches in the literature,
our algorithm preserves all the formal guarantees and properties of original
(cost-insensitive) AdaBoost, similarly to the state-of-the-art Cost-Sensitive
AdaBoost algorithm. However, the key advantage of AdaBoostDB is that our novel
derivation scheme enables an extremely efficient conditional search procedure,
dramatically improving and simplifying the training phase of the algorithm.
Experiments, both over synthetic and real datasets, reveal that AdaBoostDB is
able to save over 99% training time with regard to Cost-Sensitive AdaBoost,
providing the same cost-sensitive results. This computational advantage of
AdaBoostDB can make a difference in problems managing huge pools of weak
classifiers in which boosting techniques are commonly used.
| Iago Landesa-V\'azquez, Jos\'e Luis Alba-Castro | 10.1016/j.neucom.2013.02.019 | 1507.02154 | null | null |
An Empirical Study on Budget-Aware Online Kernel Algorithms for Streams
of Graphs | cs.LG | Kernel methods are considered an effective technique for on-line learning.
Many approaches have been developed for compactly representing the dual
solution of a kernel method when the problem imposes memory constraints.
However, in the literature no work is specifically tailored to streams of graphs.
Motivated by the fact that the size of the feature space representation of many
state-of-the-art graph kernels is relatively small and thus it is explicitly
computable, we study whether executing kernel algorithms in the feature space
can be more effective than the classical dual approach. We study three
different algorithms and various strategies for managing the budget. Efficiency
and efficacy of the proposed approaches are experimentally assessed on
relatively large graph streams exhibiting concept drift. It turns out that,
when strict memory budget constraints have to be enforced, working in feature
space, given the current state of the art on graph kernels, is more than a
viable alternative to dual approaches, both in terms of speed and
classification performance.
| Giovanni Da San Martino, Nicol\`o Navarin, Alessandro Sperduti | null | 1507.02158 | null | null |
Extending local features with contextual information in graph kernels | cs.LG | Graph kernels are usually defined in terms of simpler kernels over local
substructures of the original graphs. Different kernels consider different
types of substructures. However, in some cases they have similar predictive
performances, probably because the substructures can be interpreted as
approximations of the subgraphs they induce. In this paper, we propose to
associate to each feature a piece of information about the context in which the
feature appears in the graph. A substructure appearing in two different graphs
will match only if it appears with the same context in both graphs. We propose
a kernel based on this idea that considers trees as substructures, and where
the contexts are features too. The kernel is inspired by the framework in
[6], although it is not part of it. We give an efficient algorithm for computing
the kernel and show promising results on real-world graph classification
datasets.
| Nicol\`o Navarin, Alessandro Sperduti, Riccardo Tesselli | 10.1007/978-3-319-26561-2_33 | 1507.02186 | null | null |
AutoCompete: A Framework for Machine Learning Competition | stat.ML cs.LG | In this paper, we propose AutoCompete, a highly automated machine learning
framework for tackling machine learning competitions. This framework has been
learned by us, validated and improved over a period of more than two years by
participating in online machine learning competitions. It aims at minimizing
human interference required to build a first useful predictive model and to
assess the practical difficulty of a given machine learning challenge. The
proposed system helps in identifying data types, choosing a machine learning
model, tuning hyper-parameters, avoiding over-fitting and optimizing for a
provided evaluation metric. We also observe that the proposed system produces
better (or comparable) results with less runtime as compared to other
approaches.
| Abhishek Thakur and Artus Krohn-Grimberghe | null | 1507.02188 | null | null |
Intersecting Faces: Non-negative Matrix Factorization With New
Guarantees | cs.LG stat.ML | Non-negative matrix factorization (NMF) is a natural model of admixture and
is widely used in science and engineering. A plethora of algorithms have been
developed to tackle NMF, but due to the non-convex nature of the problem, there
is little guarantee on how well these methods work. Recently, a surge of
research has focused on a very restricted class of NMFs, called separable NMF,
where provably correct algorithms have been developed. In this paper, we
propose the notion of subset-separable NMF, which substantially generalizes the
property of separability. We show that subset-separability is a natural
necessary condition for the factorization to be unique or to have minimum
volume. We developed the Face-Intersect algorithm which provably and
efficiently solves subset-separable NMF under natural conditions, and we prove
that our algorithm is robust to small noise. We explored the performance of
Face-Intersect on simulations and discuss settings where it empirically
outperformed the state-of-the-art methods. Our work is a step towards finding
provably correct algorithms that solve large classes of NMF problems.
| Rong Ge and James Zou | null | 1507.02189 | null | null |
Robust Sparse Blind Source Separation | stat.AP cs.LG stat.ML | Blind Source Separation is a widely used technique to analyze multichannel
data. In many real-world applications, its results can be significantly
hampered by the presence of unknown outliers. In this paper, a novel algorithm
coined rGMCA (robust Generalized Morphological Component Analysis) is
introduced to retrieve sparse sources in the presence of outliers. It
explicitly estimates the sources, the mixing matrix, and the outliers. It also
takes advantage of the estimation of the outliers to further implement a
weighting scheme, which provides a highly robust separation procedure.
Numerical experiments demonstrate the efficiency of rGMCA to estimate the
mixing matrix in comparison with standard BSS techniques.
| Cecile Chenot, Jerome Bobin and Jeremy Rapin | 10.1109/LSP.2015.2463232 | 1507.02216 | null | null |
Optimal approximate matrix product in terms of stable rank | cs.DS cs.LG stat.ML | We prove, using the subspace embedding guarantee in a black box way, that one
can achieve the spectral norm guarantee for approximate matrix multiplication
with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$
rows. Here $\tilde{r}$ is the maximum stable rank, i.e. squared ratio of
Frobenius and operator norms, of the two matrices being multiplied. This is a
quantitative improvement over previous work of [MZ11, KVZ14], and is also
optimal for any oblivious dimensionality-reducing map. Furthermore, due to the
black box reliance on the subspace embedding property in our proofs, our
theorem can be applied to a much more general class of sketching matrices than
what was known before, in addition to achieving better bounds. For example, one
can apply our theorem to efficient subspace embeddings such as the Subsampled
Randomized Hadamard Transform or sparse subspace embeddings, or even with
subspace embedding constructions that may be developed in the future.
Our main theorem, via connections with spectral error matrix multiplication
shown in prior work, implies quantitative improvements for approximate least
squares regression and low rank approximation. Our main result has also already
been applied to improve dimensionality reduction guarantees for $k$-means
clustering [CEMMP14], and implies new results for nonparametric regression
[YPW15].
We also separately point out that the proof of the "BSS" deterministic
row-sampling result of [BSS12] can be modified to show that for any matrices
$A, B$ of stable rank at most $\tilde{r}$, one can achieve the spectral norm
guarantee for approximate matrix multiplication of $A^T B$ by deterministically
sampling $O(\tilde{r}/\varepsilon^2)$ rows that can be found in polynomial
time. The original result of [BSS12] was for rank instead of stable rank. Our
observation leads to a stronger version of a main theorem of [KMST10].
| Michael B. Cohen, Jelani Nelson, David P. Woodruff | null | 1507.02268 | null | null |
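Written out, the guarantee described in the abstract above takes roughly the following form (a paraphrase, with $S$ the $m \times n$ sketching matrix and $\|\cdot\|$ the operator norm):

$$
\tilde{r} = \max\!\left( \frac{\|A\|_F^2}{\|A\|^2}, \; \frac{\|B\|_F^2}{\|B\|^2} \right),
\qquad
\bigl\| (SA)^{\top} (SB) - A^{\top} B \bigr\| \;\le\; \varepsilon \, \|A\| \, \|B\|
\quad \text{for } m = O(\tilde{r}/\varepsilon^{2}) .
$$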
The Information Sieve | stat.ML cs.IT cs.LG math.IT | We introduce a new framework for unsupervised learning of representations
based on a novel hierarchical decomposition of information. Intuitively, data
is passed through a series of progressively fine-grained sieves. Each layer of
the sieve recovers a single latent factor that is maximally informative about
multivariate dependence in the data. The data is transformed after each pass so
that the remaining unexplained information trickles down to the next layer.
Ultimately, we are left with a set of latent factors explaining all the
dependence in the original data and remainder information consisting of
independent noise. We present a practical implementation of this framework for
discrete variables and apply it to a variety of fundamental tasks in
unsupervised learning including independent component analysis, lossy and
lossless compression, and predicting missing values in data.
| Greg Ver Steeg and Aram Galstyan | null | 1507.02284 | null | null |
COEVOLVE: A Joint Point Process Model for Information Diffusion and
Network Co-evolution | cs.SI cs.LG physics.soc-ph stat.ML | Information diffusion in online social networks is affected by the underlying
network topology, but it also has the power to change it. Online users are
constantly creating new links when exposed to new information sources, and in
turn these links are alternating the way information spreads. However, these
two highly intertwined stochastic processes, information diffusion and network
evolution, have been predominantly studied separately, ignoring their
co-evolutionary dynamics.
We propose a temporal point process model, COEVOLVE, for such joint dynamics,
allowing the intensity of one process to be modulated by that of the other.
This model allows us to efficiently simulate interleaved diffusion and network
events, and generate traces obeying common diffusion and network patterns
observed in real-world networks. Furthermore, we also develop a convex
optimization framework to learn the parameters of the model from historical
diffusion and network evolution traces. We experimented with both synthetic
data and data gathered from Twitter, and show that our model provides a good
fit to the data as well as more accurate predictions than alternatives.
| Mehrdad Farajtabar and Yichen Wang and Manuel Gomez Rodriguez and
Shuang Li and Hongyuan Zha and Le Song | null | 1507.02293 | null | null |
Achieving Synergy in Cognitive Behavior of Humanoids via Deep Learning
of Dynamic Visuo-Motor-Attentional Coordination | cs.AI cs.LG cs.RO | The current study examines how adequate coordination among different
cognitive processes including visual recognition, attention switching, action
preparation and generation can be developed via learning of robots by
introducing a novel model, the Visuo-Motor Deep Dynamic Neural Network (VMDNN).
The proposed model is built on coupling of a dynamic vision network, a motor
generation network, and a higher level network allocated on top of these two.
The simulation experiments using the iCub simulator were conducted for
cognitive tasks including visual object manipulation responding to human
gestures. The results showed that synergetic coordination can be developed via
iterative learning through the whole network when a spatio-temporal hierarchy and
a temporal one are self-organized in the visual pathway and in the motor
pathway, respectively, such that the higher level can manipulate them with
abstraction.
| Jungsik Hwang, Minju Jung, Naveen Madapana, Jinhyung Kim, Minkyu Choi
and Jun Tani | 10.1109/HUMANOIDS.2015.7363448 | 1507.02347 | null | null |
Intrinsic Non-stationary Covariance Function for Climate Modeling | stat.ML cs.LG | Designing a covariance function that represents the underlying correlation is
a crucial step in modeling complex natural systems, such as climate models.
Geospatial datasets at a global scale usually suffer from non-stationarity and
non-uniformly smooth spatial boundaries. A Gaussian process regression using a
non-stationary covariance function has shown promise for this task, as this
covariance function adapts to the variable correlation structure of the
underlying distribution. In this paper, we generalize the non-stationary
covariance function to address the aforementioned global scale geospatial
issues. We define this generalized covariance function as an intrinsic
non-stationary covariance function, because it uses intrinsic statistics of the
symmetric positive definite matrices to represent the characteristic length
scale and, thereby, models the local stochastic process. Experiments on a
synthetic and real dataset of relative sea level changes across the world
demonstrate improvements in the error metrics for the regression estimates
using our newly proposed approach.
| Chintan A. Dalal, Vladimir Pavlovic, Robert E. Kopp | null | 1507.02356 | null | null |
Decentralized Joint-Sparse Signal Recovery: A Sparse Bayesian Learning
Approach | cs.LG cs.IT math.IT | This work proposes a decentralized, iterative, Bayesian algorithm called
CB-DSBL for in-network estimation of multiple jointly sparse vectors by a
network of nodes, using noisy and underdetermined linear measurements. The
proposed algorithm exploits the network-wide joint sparsity of the unknown
sparse vectors to recover them from a significantly smaller number of local
measurements than standalone sparse signal recovery schemes require. To reduce
the amount of inter-node communication and the associated overheads, the nodes
exchange messages with only a small subset of their single hop neighbors. Under
this communication scheme, we separately analyze the convergence of the
underlying Alternating Directions Method of Multipliers (ADMM) iterations used
in our proposed algorithm and establish its linear convergence rate. The
findings from the convergence analysis of decentralized ADMM are used to
accelerate the convergence of the proposed CB-DSBL algorithm. Using Monte Carlo
simulations, we demonstrate the superior signal reconstruction as well as
support recovery performance of our proposed algorithm compared to existing
decentralized algorithms: DRL-1, DCOMP and DCSP.
| Saurabh Khanna, Chandra R. Murthy | 10.1109/TSIPN.2016.2612120 | 1507.02387 | null | null |
Differentially Private Ordinary Least Squares | cs.DS cs.CR cs.LG | Linear regression is one of the most prevalent techniques in machine
learning, however, it is also common to use linear regression for its
\emph{explanatory} capabilities rather than label prediction. Ordinary Least
Squares (OLS) is often used in statistics to establish a correlation between an
attribute (e.g. gender) and a label (e.g. income) in the presence of other
(potentially correlated) features. OLS assumes a particular model that randomly
generates the data, and derives \emph{$t$-values} --- representing the
likelihood of each real value to be the true correlation. Using $t$-values, OLS
can release a \emph{confidence interval}, which is an interval on the reals
that is likely to contain the true correlation, and when this interval does not
intersect the origin, we can \emph{reject the null hypothesis} as it is likely
that the true correlation is non-zero. Our work aims at achieving similar
guarantees on data under differentially private estimators. First, we show that
for well-spread data, the Gaussian Johnson-Lindenstrauss Transform (JLT) gives
a very good approximation of $t$-values. Second, when the JLT approximates Ridge
regression (linear regression with $l_2$-regularization), we derive, under
certain conditions, confidence intervals using the projected data. Lastly, we
derive, under different conditions, confidence intervals for the "Analyze
Gauss" algorithm (Dwork et al, STOC 2014).
| Or Sheffet | null | 1507.02482 | null | null |
Faster Convex Optimization: Simulated Annealing with an Efficient
Universal Barrier | math.OC cs.LG | This paper explores a surprising equivalence between two seemingly-distinct
convex optimization methods. We show that simulated annealing, a well-studied
random walk algorithm, is directly equivalent, in a certain sense, to the
central path interior point algorithm for the entropic universal barrier
function. This connection exhibits several benefits. First, we are able to
improve the state-of-the-art time complexity for convex optimization under the
membership oracle model. We improve the analysis of the randomized algorithm of
Kalai and Vempala by utilizing tools developed by Nesterov and Nemirovskii that
underlie the central path following interior point algorithm. We are able to
tighten the temperature schedule for simulated annealing, which gives an
improved running time, reducing it by a factor of the square root of the
dimension in certain instances. Second, we get an efficient randomized interior point method with an
efficiently computable universal barrier for any convex set described by a
membership oracle. Previously, efficiently computable barriers were known only
for particular convex sets.
| Jacob Abernethy, Elad Hazan | null | 1507.02528 | null | null |
Sampling from a log-concave distribution with Projected Langevin Monte
Carlo | math.PR cs.DS cs.LG | We extend the Langevin Monte Carlo (LMC) algorithm to compactly supported
measures via a projection step, akin to projected Stochastic Gradient Descent
(SGD). We show that (projected) LMC allows one to sample in polynomial time from a
log-concave distribution with smooth potential. This gives a new Markov chain
to sample from a log-concave distribution. Our main result shows in particular
that when the target distribution is uniform, LMC mixes in $\tilde{O}(n^7)$
steps (where $n$ is the dimension). We also provide preliminary experimental
evidence that LMC performs at least as well as hit-and-run, for which a better
mixing time of $\tilde{O}(n^4)$ was proved by Lov{\'a}sz and Vempala.
| S\'ebastien Bubeck, Ronen Eldan, Joseph Lehec | null | 1507.02564 | null | null |
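A minimal sketch of the projected Langevin Monte Carlo step described above, assuming a smooth potential f with gradient grad_f and taking the Euclidean unit ball as the convex body; the step size, iteration count and example potential are illustrative placeholders, not the schedule analyzed in the paper.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_lmc(grad_f, x0, eta=1e-3, n_steps=10000, radius=1.0, rng=None):
    """Projected LMC: a noisy gradient step followed by a projection.

    The target density is proportional to exp(-f) restricted to the ball,
    where grad_f is the gradient of the smooth potential f.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = project_ball(x - eta * grad_f(x) + np.sqrt(2.0 * eta) * noise, radius)
        samples.append(x.copy())
    return np.array(samples)

# Example: sample from a standard Gaussian restricted to the unit ball.
samples = projected_lmc(grad_f=lambda x: x, x0=np.zeros(5))
```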
Sparse Approximation via Generating Point Sets | cs.CG cs.LG | $ \newcommand{\kalg}{{k_{\mathrm{alg}}}}
\newcommand{\kopt}{{k_{\mathrm{opt}}}}
\newcommand{\algset}{{T}} \renewcommand{\Re}{\mathbb{R}}
\newcommand{\eps}{\varepsilon} \newcommand{\pth}[2][\!]{#1\left({#2}\right)}
\newcommand{\npoints}{n} \newcommand{\ballD}{\mathsf{b}}
\newcommand{\dataset}{{P}} $ For a set $\dataset$ of $\npoints$ points in the
unit ball $\ballD \subseteq \Re^d$, consider the problem of finding a small
subset $\algset \subseteq \dataset$ such that its convex-hull
$\eps$-approximates the convex-hull of the original set. We present an
efficient algorithm to compute such a $\eps'$-approximation of size $\kalg$,
where $\eps'$ is a function of $\eps$, and $\kalg$ is a function of the minimum
size $\kopt$ of such an $\eps$-approximation. Surprisingly, there is no
dependency on the dimension $d$ in both bounds. Furthermore, every point of
$\dataset$ can be $\eps$-approximated by a convex-combination of points of
$\algset$ that is $O(1/\eps^2)$-sparse.
Our result can be viewed as a method for sparse, convex autoencoding:
approximately representing the data in a compact way using sparse combinations
of a small subset $\algset$ of the original data. The new algorithm can be
kernelized, and it preserves sparsity in the original input.
| Avrim Blum, Sariel Har-Peled and Benjamin Raichel | null | 1507.02574 | null | null |
Fast rates in statistical and online learning | cs.LG stat.ML | The speed with which a learning algorithm converges as it is presented with
more data is a central problem in machine learning --- a fast rate of
convergence means less data is needed for the same level of performance. The
pursuit of fast rates in online and statistical learning has led to the
discovery of many conditions in learning theory under which fast learning is
possible. We show that most of these conditions are special cases of a single,
unifying condition, that comes in two forms: the central condition for 'proper'
learning algorithms that always output a hypothesis in the given model, and
stochastic mixability for online algorithms that may make predictions outside
of the model. We show that under surprisingly weak assumptions both conditions
are, in a certain sense, equivalent. The central condition has a
re-interpretation in terms of convexity of a set of pseudoprobabilities,
linking it to density estimation under misspecification. For bounded losses, we
show how the central condition enables a direct proof of fast rates and we
prove its equivalence to the Bernstein condition, itself a generalization of
the Tsybakov margin condition, both of which have played a central role in
obtaining fast rates in statistical learning. Yet, while the Bernstein
condition is two-sided, the central condition is one-sided, making it more
suitable to deal with unbounded losses. In its stochastic mixability form, our
condition generalizes both a stochastic exp-concavity condition identified by
Juditsky, Rigollet and Tsybakov and Vovk's notion of mixability. Our unifying
conditions thus provide a substantial step towards a characterization of fast
rates in statistical learning, similar to how classical mixability
characterizes constant regret in the sequential prediction with expert advice
setting.
| Tim van Erven and Peter D. Gr\"unwald and Nishant A. Mehta and Mark D.
Reid and Robert C. Williamson | null | 1507.02592 | null | null |
Quantum Inspired Training for Boltzmann Machines | cs.LG quant-ph | We present an efficient classical algorithm for training deep Boltzmann
machines (DBMs) that uses rejection sampling in concert with variational
approximations to estimate the gradients of the training objective function.
Our algorithm is inspired by a recent quantum algorithm for training DBMs. We
obtain rigorous bounds on the errors in the approximate gradients; in turn, we
find that choosing the instrumental distribution to minimize the alpha=2
divergence with the Gibbs state minimizes the asymptotic algorithmic
complexity. Our rejection sampling approach can yield more accurate gradients
than low-order contrastive divergence training and the costs incurred in
finding increasingly accurate gradients can be easily parallelized. Finally our
algorithm can train full Boltzmann machines and scales more favorably with the
number of layers in a DBM than greedy contrastive divergence training.
| Nathan Wiebe, Ashish Kapoor, Christopher Granade, Krysta M Svore | null | 1507.02642 | null | null |
Semi-Supervised Learning with Ladder Networks | cs.NE cs.LG stat.ML | We combine supervised learning with unsupervised learning in deep neural
networks. The proposed model is trained to simultaneously minimize the sum of
supervised and unsupervised cost functions by backpropagation, avoiding the
need for layer-wise pre-training. Our work builds on the Ladder network
proposed by Valpola (2015), which we extend by combining the model with
supervision. We show that the resulting model reaches state-of-the-art
performance in semi-supervised MNIST and CIFAR-10 classification, in addition
to permutation-invariant MNIST classification with all labels.
| Antti Rasmus and Harri Valpola and Mikko Honkala and Mathias Berglund
and Tapani Raiko | null | 1507.02672 | null | null |
Locally Non-linear Embeddings for Extreme Multi-label Learning | cs.LG cs.IR math.OC stat.ML | The objective in extreme multi-label learning is to train a classifier that
can automatically tag a novel data point with the most relevant subset of
labels from an extremely large label set. Embedding based approaches make
training and prediction tractable by assuming that the training label matrix is
low-rank and hence the effective number of labels can be reduced by projecting
the high dimensional label vectors onto a low dimensional linear subspace.
Still, leading embedding approaches have been unable to deliver high prediction
accuracies or scale to large problems as the low rank assumption is violated in
most real world applications.
This paper develops the X-One classifier to address both limitations. The
main technical contribution in X-One is a formulation for learning a small
ensemble of local distance preserving embeddings which can accurately predict
infrequently occurring (tail) labels. This allows X-One to break free of the
traditional low-rank assumption and boost classification accuracy by learning
embeddings which preserve pairwise distances between only the nearest label
vectors.
We conducted extensive experiments on several real-world as well as benchmark
data sets and compared our method against state-of-the-art methods for extreme
multi-label classification. Experiments reveal that X-One can make
significantly more accurate predictions than the state-of-the-art methods
including both embeddings (by as much as 35%) as well as trees (by as much as
6%). X-One can also scale efficiently to data sets with a million labels which
are beyond the pale of leading embedding methods.
| Kush Bhatia and Himanshu Jain and Purushottam Kar and Prateek Jain and
Manik Varma | null | 1507.02743 | null | null |
Utility-based Dueling Bandits as a Partial Monitoring Game | cs.LG | Partial monitoring is a generic framework for sequential decision-making with
incomplete feedback. It encompasses a wide class of problems such as dueling
bandits, learning with expert advice, dynamic pricing, dark pools, and label
efficient prediction. We study the utility-based dueling bandit problem as an
instance of the partial monitoring problem and prove that it fits the time-regret
partial monitoring hierarchy as an easy, i.e. Theta(sqrt{T}), instance. We
survey some partial monitoring algorithms and see how they could be used to
solve dueling bandits efficiently. Keywords: Online learning, Dueling Bandits,
Partial Monitoring, Partial Feedback, Multiarmed Bandits
| Pratik Gajane and Tanguy Urvoy | null | 1507.02750 | null | null |
Adaptive Mixtures of Factor Analyzers | stat.ML cs.IT cs.LG math.IT | A mixture of factor analyzers is a semi-parametric density estimator that
generalizes the well-known mixtures of Gaussians model by allowing each
Gaussian in the mixture to be represented in a different lower-dimensional
manifold. This paper presents a robust and parsimonious model selection
algorithm for training a mixture of factor analyzers, carrying out simultaneous
clustering and locally linear, globally nonlinear dimensionality reduction.
Permitting a different number of factors per mixture component, the algorithm
adapts the model complexity to the data complexity. We compare the proposed
algorithm with related automatic model selection algorithms on a number of
benchmarks. The results indicate the effectiveness of this fast and robust
approach in clustering, manifold learning and class-conditional modeling.
| Heysem Kaya and Albert Ali Salah | null | 1507.02801 | null | null |
Spectral Smoothing via Random Matrix Perturbations | cs.LG | We consider stochastic smoothing of spectral functions of matrices using
perturbations commonly studied in random matrix theory. We show that a spectral
function remains spectral when smoothed using a unitarily invariant
perturbation distribution. We then derive state-of-the-art smoothing bounds for
the maximum eigenvalue function using the Gaussian Orthogonal Ensemble (GOE).
Smoothing the maximum eigenvalue function is important for applications in
semidefinite optimization and online learning. As a direct consequence of our
GOE smoothing results, we obtain an $O((N \log N)^{1/4} \sqrt{T})$ expected
regret bound for the online variance minimization problem using an algorithm
that performs only a single maximum eigenvector computation per time step. Here
$T$ is the number of rounds and $N$ is the matrix dimension. Our algorithm and
its analysis also extend to the more general online PCA problem where the
learner has to output a rank $k$ subspace. The algorithm just requires
computing $k$ maximum eigenvectors per step and enjoys an $O(k (N \log N)^{1/4}
\sqrt{T})$ expected regret bound.
| Jacob Abernethy, Chansoo Lee, Ambuj Tewari | null | 1507.03032 | null | null |
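As a rough illustration of the GOE smoothing discussed above, the following sketch estimates the smoothed maximum-eigenvalue function by Monte Carlo; the smoothing scale sigma and sample count are arbitrary placeholders, and none of the paper's regret-bound machinery is reproduced.

```python
import numpy as np

def goe(n, rng):
    """Draw a matrix from the Gaussian Orthogonal Ensemble (symmetric Gaussian)."""
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2.0)

def smoothed_max_eig(A, sigma=0.1, n_samples=200, rng=None):
    """Monte Carlo estimate of E[lambda_max(A + sigma * Z)] with Z ~ GOE."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    vals = [np.linalg.eigvalsh(A + sigma * goe(n, rng))[-1]  # eigvalsh is ascending
            for _ in range(n_samples)]
    return float(np.mean(vals))
```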
Tight Risk Bounds for Multi-Class Margin Classifiers | stat.ML cs.LG | We consider a problem of risk estimation for large-margin multi-class
classifiers. We propose a novel risk bound for the multi-class classification
problem. The bound involves the marginal distribution of the classifier and the
Rademacher complexity of the hypothesis class. We prove that our bound is tight
in the number of classes. Finally, we compare our bound with the related ones
and provide a simplified version of the bound for the multi-class
classification with kernel based hypotheses.
| Yury Maximov, Daria Reshetova | 10.1134/S105466181604009X | 1507.03040 | null | null |
A new boosting algorithm based on dual averaging scheme | cs.LG | The fields of machine learning and mathematical optimization increasingly
intertwined. The special topic on supervised learning and convex optimization
examines this interplay. The training part of most supervised learning
algorithms can usually be reduced to an optimization problem that minimizes a
loss between model predictions and training data. While most optimization
techniques focus on accuracy and speed of convergence, the qualities of good
optimization algorithm from the machine learning perspective can be quite
different since machine learning is more than fitting the data. Better
optimization algorithms that minimize the training loss can possibly give very
poor generalization performance. In this paper, we examine a particular kind of
machine learning algorithm, boosting, whose training process can be viewed as
functional coordinate descent on the exponential loss. We study the relation
between optimization techniques and machine learning by implementing a new
boosting algorithm, DABoost, based on a dual-averaging scheme, and studying its
generalization performance. We show that DABoost, although slower in reducing
the training error, in general enjoys a better generalization error than
AdaBoost.
| Nan Wang | null | 1507.03125 | null | null |
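To make the "boosting as functional coordinate descent on the exponential loss" viewpoint concrete, here is a sketch of the standard AdaBoost recursion with decision stumps; it is the baseline the paper compares against, not the proposed dual-averaging DABoost update.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Exponential-loss coordinate descent with decision stumps (labels in {-1, +1})."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # optimal step along this coordinate
        w *= np.exp(-alpha * y * pred)          # reweight according to the exp loss
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, np.array(alphas)

def adaboost_predict(stumps, alphas, X):
    scores = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(scores)
```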
A Review of Nonnegative Matrix Factorization Methods for Clustering | stat.ML cs.LG cs.NA | Nonnegative Matrix Factorization (NMF) was first introduced as a low-rank
matrix approximation technique, and has enjoyed a wide area of applications.
Although NMF does not seem related to the clustering problem at first, it was
shown that they are closely linked. In this report, we provide a gentle
introduction to clustering and NMF before reviewing the theoretical
relationship between them. We then explore several NMF variants, namely Sparse
NMF, Projective NMF, Nonnegative Spectral Clustering and Cluster-NMF, along
with their clustering interpretations.
| Ali Caner T\"urkmen | null | 1507.03194 | null | null |
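A minimal sketch of the NMF-clustering connection reviewed above: factor the nonnegative data matrix and assign each sample to its dominant factor. It uses scikit-learn's plain NMF; the Sparse, Projective and Cluster-NMF variants discussed in the report are not implemented here.

```python
import numpy as np
from sklearn.decomposition import NMF

def nmf_cluster(X, n_clusters, random_state=0):
    """Cluster nonnegative data by assigning each sample to its strongest NMF factor."""
    model = NMF(n_components=n_clusters, init="nndsvd", max_iter=500,
                random_state=random_state)
    W = model.fit_transform(X)       # samples x factors
    return np.argmax(W, axis=1)      # cluster label = dominant factor
```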
Homotopy Continuation Approaches for Robust SV Classification and
Regression | stat.ML cs.LG | In support vector machine (SVM) applications with unreliable data that
contains a portion of outliers, non-robustness of SVMs often causes
considerable performance deterioration. Although many approaches for improving
the robustness of SVMs have been studied, two major challenges remain in robust
SVM learning. First, robust learning algorithms are essentially formulated as
non-convex optimization problems. It is thus important to develop a non-convex
optimization method for robust SVM that can find a good local optimal solution.
The second practical issue is how one can tune the hyperparameter that controls
the balance between robustness and efficiency. Unfortunately, due to the
non-convexity, robust SVM solutions with slightly different hyper-parameter
values can be significantly different, which makes model selection highly
unstable. In this paper, we address these two issues simultaneously by
introducing a novel homotopy approach to non-convex robust SVM learning. Our
basic idea is to introduce parametrized formulations of robust SVM which bridge
the standard SVM and fully robust SVM via the parameter that represents the
influence of outliers. We characterize the necessary and sufficient conditions
of the local optimal solutions of robust SVM, and develop an algorithm that can
trace a path of local optimal solutions when the influence of outliers is
gradually decreased. An advantage of our homotopy approach is that it can be
interpreted as simulated annealing, a common approach for finding a good local
optimal solution in non-convex optimization problems. In addition, our homotopy
method allows stable and efficient model selection based on the path of local
optimal solutions. Empirical performances of the proposed approach are
demonstrated through intensive numerical experiments both on robust
classification and regression problems.
| Shinya Suzumura, Kohei Ogawa, Masashi Sugiyama, Masayuki Karasuyama,
Ichiro Takeuchi | null | 1507.03229 | null | null |
Tensor principal component analysis via sum-of-squares proofs | cs.LG cs.CC cs.DS stat.ML | We study a statistical model for the tensor principal component analysis
problem introduced by Montanari and Richard: Given an order-$3$ tensor $T$ of
the form $T = \tau \cdot v_0^{\otimes 3} + A$, where $\tau \geq 0$ is a
signal-to-noise ratio, $v_0$ is a unit vector, and $A$ is a random noise
tensor, the goal is to recover the planted vector $v_0$. For the case that $A$
has iid standard Gaussian entries, we give an efficient algorithm to recover
$v_0$ whenever $\tau \geq \omega(n^{3/4} \log(n)^{1/4})$, and certify that the
recovered vector is close to a maximum likelihood estimator, all with high
probability over the random choice of $A$. The previous best algorithms with
provable guarantees required $\tau \geq \Omega(n)$.
In the regime $\tau \leq o(n)$, natural tensor-unfolding-based spectral
relaxations for the underlying optimization problem break down (in the sense
that their integrality gap is large). To go beyond this barrier, we use convex
relaxations based on the sum-of-squares method. Our recovery algorithm proceeds
by rounding a degree-$4$ sum-of-squares relaxation of the
maximum-likelihood-estimation problem for the statistical model. To complement
our algorithmic results, we show that degree-$4$ sum-of-squares relaxations
break down for $\tau \leq O(n^{3/4}/\log(n)^{1/4})$, which demonstrates that
improving our current guarantees (by more than logarithmic factors) would
require new techniques or might even be intractable.
Finally, we show how to exploit additional problem structure in order to
solve our sum-of-squares relaxations, up to some approximation, very
efficiently. Our fastest algorithm runs in nearly-linear time using shifted
(matrix) power iteration and has similar guarantees as above. The analysis of
this algorithm also confirms a variant of a conjecture of Montanari and Richard
about singular vectors of tensor unfoldings.
| Samuel B. Hopkins and Jonathan Shi and David Steurer | null | 1507.03269 | null | null |
Cluster-Aided Mobility Predictions | cs.LG | Predicting the future location of users in wireless net- works has numerous
applications, and can help service providers to improve the quality of service
perceived by their clients. The location predictors proposed so far estimate
the next location of a specific user by inspecting the past individual
trajectories of this user. As a consequence, when the training data collected
for a given user is limited, the resulting prediction is inaccurate. In this
paper, we develop cluster-aided predictors that exploit past trajectories
collected from all users to predict the next location of a given user. These
predictors rely on clustering techniques and extract from the training data
similarities among the mobility patterns of the various users to improve the
prediction accuracy. Specifically, we present CAMP (Cluster-Aided Mobility
Predictor), a cluster-aided predictor whose design is based on recent
non-parametric Bayesian statistical tools. CAMP is robust and adaptive in the
sense that it exploits similarities in users' mobility only if such
similarities are really present in the training data. We analytically prove the
consistency of the predictions provided by CAMP, and investigate its
performance using two large-scale datasets. CAMP significantly outperforms
existing predictors, and in particular those that only exploit individual past
trajectories.
| Jaeseong Jeong, Mathieu Leconte and Alexandre Proutiere | null | 1507.03292 | null | null |
Quantitative Evaluation of Performance and Validity Indices for
Clustering the Web Navigational Sessions | cs.LG cs.SI | Clustering techniques are widely used in Web Usage Mining to capture similar
interests and trends among users accessing a Web site. For this purpose, web
access logs generated at a particular web site are preprocessed to discover the
user navigational sessions. Clustering techniques are then applied to group the
user session data into user session clusters, where inter-cluster similarities
are minimized while the intra-cluster similarities are maximized. Since the
application of different clustering algorithms generally results in different
sets of cluster formation, it is important to evaluate the performance of these
methods in terms of accuracy and validity of the clusters, and also the time
required to generate them, using appropriate performance measures. This paper
describes various validity and accuracy measures including Dunn's Index, Davies
Bouldin Index, C Index, Rand Index, Jaccard Index, Silhouette Index, Fowlkes
Mallows and Sum of the Squared Error (SSE). We conducted the performance
evaluation of the following clustering techniques: k-Means, k-Medoids, Leader,
Single Link Agglomerative Hierarchical and DBSCAN. These techniques are
implemented and tested against the Web user navigational data. Finally their
performance results are presented and compared.
| Zahid Ansari, M.F. Azeem, Waseem Ahmed and A.Vinaya Babu | null | 1507.03340 | null | null |
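A small sketch of how a few of the validity indices listed above could be computed for a k-Means partition: silhouette and Davies-Bouldin come from scikit-learn, and Dunn's index is implemented directly since it is not available there. The synthetic blob data stands in for preprocessed session feature vectors; the remaining indices and algorithms from the paper are omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

def dunn_index(X, labels):
    """Dunn's index: smallest inter-cluster distance / largest intra-cluster diameter."""
    clusters = [X[labels == c] for c in np.unique(labels)]
    inter = min(cdist(a, b).min()
                for i, a in enumerate(clusters) for b in clusters[i + 1:])
    intra = max(cdist(c, c).max() for c in clusters)
    return inter / intra

# Placeholder for the preprocessed user-session feature matrix.
X_sessions, _ = make_blobs(n_samples=500, centers=5, random_state=0)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_sessions)
print("Silhouette     :", silhouette_score(X_sessions, labels))
print("Davies-Bouldin :", davies_bouldin_score(X_sessions, labels))
print("Dunn           :", dunn_index(X_sessions, labels))
```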
Ordered Decompositional DAG Kernels Enhancements | cs.LG | In this paper, we show how the Ordered Decomposition DAGs (ODD) kernel
framework, a framework that allows the definition of graph kernels from tree
kernels, allows to easily define new state-of-the-art graph kernels. Here we
consider a fast graph kernel based on the Subtree kernel (ST), and we propose
various enhancements to increase its expressiveness. The proposed DAG kernel
has the same worst-case complexity as the one based on ST, but an improved
expressivity due to an augmented set of features. Moreover, we propose a novel
weighting scheme for the features, which can be applied to other kernels of the
ODD framework. These improvements allow the proposed kernels to improve on the
classification performances of the ST-based kernel for several real-world
datasets, reaching state-of-the-art performances.
| Giovanni Da San Martino, Nicol\`o Navarin, Alessandro Sperduti | 10.1016/j.neucom.2015.12.110 | 1507.03372 | null | null |
Projected Wirtinger Gradient Descent for Low-Rank Hankel Matrix
Completion in Spectral Compressed Sensing | cs.IT cs.LG math.IT math.OC | This paper considers reconstructing a spectrally sparse signal from a small
number of randomly observed time-domain samples. The signal of interest is a
linear combination of complex sinusoids at $R$ distinct frequencies. The
frequencies can assume any continuous values in the normalized frequency domain
$[0,1)$. After converting the spectrally sparse signal recovery into a low rank
structured matrix completion problem, we propose an efficient feasible point
approach, named projected Wirtinger gradient descent (PWGD) algorithm, to
efficiently solve this structured matrix completion problem. We further
accelerate our proposed algorithm by a scheme inspired by FISTA. We give the
convergence analysis of our proposed algorithms. Extensive numerical
experiments are provided to illustrate the efficiency of our proposed
algorithm. Different from earlier approaches, our algorithm can solve problems
of very large dimensions very efficiently.
| Jian-Feng Cai, Suhui Liu, and Weiyu Xu | null | 1507.03707 | null | null |
A New Framework for Distributed Submodular Maximization | cs.DS cs.AI cs.DC cs.LG | A wide variety of problems in machine learning, including exemplar
clustering, document summarization, and sensor placement, can be cast as
constrained submodular maximization problems. A lot of recent effort has been
devoted to developing distributed algorithms for these problems. However, these
results suffer from a high number of rounds, suboptimal approximation ratios, or
both. We develop a framework for bringing existing algorithms in the sequential
setting to the distributed setting, achieving near optimal approximation ratios
for many settings in only a constant number of MapReduce rounds. Our techniques
also give a fast sequential algorithm for non-monotone maximization subject to
a matroid constraint.
| Rafael da Ponte Barbosa, Alina Ene, Huy L. Nguyen, Justin Ward | null | 1507.03719 | null | null |
Closed Curves and Elementary Visual Object Identification | cs.CV cs.LG q-bio.NC | For two closed curves on a plane (discrete version) and local criteria for
similarity of points on the curves one gets a potential, which describes the
similarity between curve points. This is the base for a global similarity
measure of closed curves (Fr\'echet distance). I use borderlines of handwritten
digits to demonstrate an area of application. I imagine, measuring the
similarity of closed curves is an essential and elementary task performed by a
visual system. This approach to similarity measures may be used by visual
systems.
| Manfred Harringer | null | 1507.03751 | null | null |
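For reference, a sketch of the standard dynamic program for the discrete Fr\'echet distance between two polygonal curves, which is the global matching step the abstract builds on; the local point-similarity potential proposed in the paper is not reproduced here, plain Euclidean distances are used instead.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polygonal curves (arrays of 2-D points)."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)   # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            prev = min(ca[i - 1, j] if i > 0 else np.inf,
                       ca[i, j - 1] if j > 0 else np.inf,
                       ca[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            ca[i, j] = max(prev, d[i, j])
    return ca[-1, -1]
```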
Rich Component Analysis | cs.LG stat.ML | In many settings, we have multiple data sets (also called views) that capture
different and overlapping aspects of the same phenomenon. We are often
interested in finding patterns that are unique to one or to a subset of the
views. For example, we might have one set of molecular observations and one set
of physiological observations on the same group of individuals, and we want to
quantify molecular patterns that are uncorrelated with physiology. Despite
being a common problem, this is highly challenging when the correlations come
from complex distributions. In this paper, we develop the general framework of
Rich Component Analysis (RCA) to model settings where the observations from
different views are driven by different sets of latent components, and each
component can be a complex, high-dimensional distribution. We introduce
algorithms based on cumulant extraction that provably learn each of the
components without having to model the other components. We show how to
integrate RCA with stochastic gradient descent into a meta-algorithm for
learning general models, and demonstrate substantial improvement in accuracy on
several synthetic and real datasets in both supervised and unsupervised tasks.
Our method makes it possible to learn latent variable models when we don't have
samples from the true model but only samples after complex perturbations.
| Rong Ge and James Zou | null | 1507.03867 | null | null |
Training artificial neural networks to learn a nondeterministic game | cs.LG | It is well known that artificial neural networks (ANNs) can learn
deterministic automata. Learning nondeterministic automata is another matter.
This is important because much of the world is nondeterministic, taking the
form of unpredictable or probabilistic events that must be acted upon. If ANNs
are to engage such phenomena, then they must be able to learn how to deal with
nondeterminism. In this project the game of Pong poses a nondeterministic
environment. The learner is given an incomplete view of the game state and
underlying deterministic physics, resulting in a nondeterministic game. Three
models were trained and tested on the game: Mona, Elman, and Numenta's NuPIC.
| Thomas E. Portegys | null | 1507.04029 | null | null |
Solomonoff Induction Violates Nicod's Criterion | cs.LG cs.AI math.ST stat.TH | Nicod's criterion states that observing a black raven is evidence for the
hypothesis H that all ravens are black. We show that Solomonoff induction does
not satisfy Nicod's criterion: there are time steps in which observing black
ravens decreases the belief in H. Moreover, while observing any computable
infinite string compatible with H, the belief in H decreases infinitely often
when using the unnormalized Solomonoff prior, but only finitely often when
using the normalized Solomonoff prior. We argue that the fault is not with
Solomonoff induction; instead we should reject Nicod's criterion.
| Jan Leike and Marcus Hutter | null | 1507.04121 | null | null |
On the Computability of Solomonoff Induction and Knowledge-Seeking | cs.AI cs.LG | Solomonoff induction is held as a gold standard for learning, but it is known
to be incomputable. We quantify its incomputability by placing various flavors
of Solomonoff's prior M in the arithmetical hierarchy. We also derive
computability bounds for knowledge-seeking agents, and give a limit-computable
weakly asymptotically optimal reinforcement learning agent.
| Jan Leike and Marcus Hutter | null | 1507.04124 | null | null |
Untangling AdaBoost-based Cost-Sensitive Classification. Part I:
Theoretical Perspective | cs.CV cs.AI cs.LG | Boosting algorithms have been widely used to tackle a plethora of problems.
In the last few years, a lot of approaches have been proposed to provide
standard AdaBoost with cost-sensitive capabilities, each with a different
focus. However, for the researcher, these algorithms shape a tangled set with
diffuse differences and properties, lacking a unifying analysis to jointly
compare, classify, evaluate and discuss those approaches on a common basis. In
this series of two papers we aim to revisit the various proposals, both from
theoretical (Part I) and practical (Part II) perspectives, in order to analyze
their specific properties and behavior, with the final goal of identifying the
algorithm providing the best and soundest results.
| Iago Landesa-V\'azquez, Jos\'e Luis Alba-Castro | null | 1507.04125 | null | null |
Untangling AdaBoost-based Cost-Sensitive Classification. Part II:
Empirical Analysis | cs.CV cs.AI cs.LG | A lot of approaches, each following a different strategy, have been proposed
in the literature to provide AdaBoost with cost-sensitive properties. In the
first part of this series of two papers, we have presented these algorithms in
a homogeneous notational framework, proposed a clustering scheme for them and
performed a thorough theoretical analysis of those approaches with a fully
theoretical foundation. The present paper, in order to complete our analysis,
is focused on the empirical study of all the algorithms previously presented
over a wide range of heterogeneous classification problems. The results of our
experiments, confirming the theoretical conclusions, seem to reveal that the
simplest approach, just based on cost-sensitive weight initialization, is the
one showing the best and soundest results, despite having been recurrently
overlooked in the literature.
| Iago Landesa-V\'azquez, Jos\'e Luis Alba-Castro | null | 1507.04126 | null | null |
ALEVS: Active Learning by Statistical Leverage Sampling | cs.LG stat.ML | Active learning aims to obtain a classifier of high accuracy by using fewer
label requests in comparison to passive learning by selecting effective
queries. Many active learning methods have been developed in the past two
decades, which sample queries based on informativeness or representativeness of
unlabeled data points. In this work, we explore a novel querying criterion
based on statistical leverage scores. The statistical leverage score of a row
in a matrix is the squared row-norm of the matrix containing its (top) left
singular vectors, and is a measure of the influence of the row on the matrix.
Leverage scores have been used for detecting highly influential points in
regression diagnostics and have been recently shown to be useful for data
analysis and randomized low-rank matrix approximation algorithms. We explore
how sampling data instances with high statistical leverage scores perform in
active learning. Our empirical comparison on several binary classification
datasets indicate that querying high leverage points is an effective strategy.
| Cem Orhan and \"Oznur Ta\c{s}tan | null | 1507.04155 | null | null |
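A minimal sketch of the leverage-score querying criterion described above: compute the top-k left singular vectors of the unlabeled data matrix and query the rows with the largest squared row norms. The rank k and label budget are assumed inputs; the full ALEVS protocol is not reproduced.

```python
import numpy as np

def leverage_scores(X, k):
    """Statistical leverage scores: squared row norms of the top-k left singular vectors."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sum(U[:, :k] ** 2, axis=1)

def select_queries(X, k, budget):
    """Return indices of the `budget` unlabeled points with the highest leverage."""
    scores = leverage_scores(X, k)
    return np.argsort(scores)[::-1][:budget]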
Minimum Density Hyperplanes | stat.ML cs.LG | Associating distinct groups of objects (clusters) with contiguous regions of
high probability density (high-density clusters), is central to many
statistical and machine learning approaches to the classification of unlabelled
data. We propose a novel hyperplane classifier for clustering and
semi-supervised classification which is motivated by this objective. The
proposed minimum density hyperplane minimises the integral of the empirical
probability density function along it, thereby avoiding intersection with high
density clusters. We show that the minimum density and the maximum margin
hyperplanes are asymptotically equivalent, thus linking this approach to
maximum margin clustering and semi-supervised support vector classifiers. We
propose a projection pursuit formulation of the associated optimisation problem
which allows us to find minimum density hyperplanes efficiently in practice,
and evaluate its performance on a range of benchmark datasets. The proposed
approach is found to be very competitive with state of the art methods for
clustering and semi-supervised classification.
| Nicos G. Pavlidis, David P. Hofmeyr, Sotiris K. Tasoulis | null | 1507.04201 | null | null |
Combinatorial Cascading Bandits | cs.LG stat.ML | We propose combinatorial cascading bandits, a class of partial monitoring
problems where at each step a learning agent chooses a tuple of ground items
subject to constraints and receives a reward if and only if the weights of all
chosen items are one. The weights of the items are binary, stochastic, and
drawn independently of each other. The agent observes the index of the first
chosen item whose weight is zero. This observation model arises in network
routing, for instance, where the learning agent may only observe the first link
in the routing path which is down, and blocks the path. We propose a UCB-like
algorithm for solving our problems, CombCascade; and prove gap-dependent and
gap-free upper bounds on its $n$-step regret. Our proofs build on recent work
in stochastic combinatorial semi-bandits but also address two novel challenges
of our setting, a non-linear reward function and partial observability. We
evaluate CombCascade on two real-world problems and show that it performs well
even when our modeling assumptions are violated. We also demonstrate that our
setting requires a new learning algorithm.
| Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari | null | 1507.04208 | null | null |
The Role of Principal Angles in Subspace Classification | stat.ML cs.LG | Subspace models play an important role in a wide range of signal processing
tasks, and this paper explores how the pairwise geometry of subspaces
influences the probability of misclassification. When the mismatch between the
signal and the model is vanishingly small, the probability of misclassification
is determined by the product of the sines of the principal angles between
subspaces. When the mismatch is more significant, the probability of
misclassification is determined by the sum of the squares of the sines of the
principal angles. Reliability of classification is derived in terms of the
distribution of signal energy across principal vectors. Larger principal angles
lead to smaller classification error, motivating a linear transform that
optimizes principal angles. The transform presented here (TRAIT) preserves some
specific characteristic of each individual class, and this approach is shown to
be complementary to a previously developed transform (LRT) that enlarges
inter-class distance while suppressing intra-class dispersion. Theoretical
results are supported by demonstration of superior classification accuracy on
synthetic and measured data even in the presence of significant model mismatch.
| Jiaji Huang and Qiang Qiu and Robert Calderbank | 10.1109/TSP.2015.2500889 | 1507.04230 | null | null |
Learning Action Models: Qualitative Approach | cs.LG cs.AI cs.LO | In dynamic epistemic logic, actions are described using action models. In
this paper we introduce a framework for studying learnability of action models
from observations. We present first results concerning propositional action
models. First we check two basic learnability criteria: finite identifiability
(conclusively inferring the appropriate action model in finite time) and
identifiability in the limit (inconclusive convergence to the right action
model). We show that deterministic actions are finitely identifiable, while
non-deterministic actions require more learning power; they are identifiable in
the limit. We then move on to a particular learning method, which proceeds via
restriction of a space of events within a learning-specific action model. This
way of learning closely resembles the well-known update method from dynamic
epistemic logic. We introduce several different learning methods suited for
finite identifiability of particular types of deterministic actions.
| Thomas Bolander and Nina Gierasimczuk | null | 1507.04285 | null | null |
Massively Parallel Methods for Deep Reinforcement Learning | cs.LG cs.AI cs.DC cs.NE | We present the first massively distributed architecture for deep
reinforcement learning. This architecture uses four main components: parallel
actors that generate new behaviour; parallel learners that are trained from
stored experience; a distributed neural network to represent the value function
or behaviour policy; and a distributed store of experience. We used our
architecture to implement the Deep Q-Network algorithm (DQN). Our distributed
algorithm was applied to 49 Atari 2600 games from the Arcade
Learning Environment, using identical hyperparameters. Our performance
surpassed non-distributed DQN in 41 of the 49 games and also reduced the
wall-time required to achieve these results by an order of magnitude on most
games.
| Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory
Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman,
Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray
Kavukcuoglu, David Silver | null | 1507.04296 | null | null |
Learning Boolean functions with concentrated spectra | cs.LG cs.IT math.FA math.IT | This paper discusses the theory and application of learning Boolean functions
that are concentrated in the Fourier domain. We first estimate the VC dimension
of this function class in order to establish a small sample complexity of
learning in this case. Next, we propose a computationally efficient method of
empirical risk minimization, and we apply this method to the MNIST database of
handwritten digits. These results demonstrate the effectiveness of our model
for modern classification tasks. We conclude with a list of open problems for
future investigation.
| Dustin G. Mixon, Jesse Peterson | 10.1117/12.2189112 | 1507.04319 | null | null |
Parallel MMF: a Multiresolution Approach to Matrix Computation | cs.NA cs.LG stat.ML | Multiresolution Matrix Factorization (MMF) was recently introduced as a
method for finding multiscale structure and defining wavelets on
graphs/matrices. In this paper we derive pMMF, a parallel algorithm for
computing the MMF factorization. Empirically, the running time of pMMF scales
linearly in the dimension for sparse matrices. We argue that this makes pMMF a
valuable new computational primitive in its own right, and present experiments
on using pMMF for two distinct purposes: compressing matrices and
preconditioning large sparse linear systems.
| Risi Kondor, Nedelina Teneva, Pramod K. Mudrakarta | null | 1507.04396 | null | null |
Preference Completion: Large-scale Collaborative Ranking from Pairwise
Comparisons | stat.ML cs.LG | In this paper we consider the collaborative ranking setting: a pool of users
each provides a small number of pairwise preferences between $d$ possible
items; from these we need to predict preferences of the users for items they
have not yet seen. We do so by fitting a rank $r$ score matrix to the pairwise
data, and provide two main contributions: (a) we show that an algorithm based
on convex optimization provides good generalization guarantees once each user
provides as few as $O(r\log^2 d)$ pairwise comparisons -- essentially matching
the sample complexity required in the related matrix completion setting (which
uses actual numerical as opposed to pairwise information), and (b) we develop a
large-scale non-convex implementation, which we call AltSVM, that trains a
factored form of the matrix via alternating minimization (which we show reduces
to alternating SVM problems), and scales and parallelizes very well to large
problem settings. It also outperforms common baselines on many moderately large
popular collaborative filtering datasets in both NDCG and in other measures of
ranking performance.
| Dohyung Park, Joe Neeman, Jin Zhang, Sujay Sanghavi, Inderjit S.
Dhillon | null | 1507.04457 | null | null |
Towards Predicting First Daily Departure Times: a Gaussian Modeling
Approach for Load Shift Forecasting | cs.LG | This work provides two statistical Gaussian forecasting methods for
predicting First Daily Departure Times (FDDTs) of everyday use electric
vehicles. This is important in smart grid applications to understand
disconnection times of such mobile storage units, for instance to forecast
storage of non-dispatchable loads (e.g. wind and solar power). We provide a
review of the relevant state-of-the-art driving behavior features towards FDDT
prediction, to then propose an approximated Gaussian method which qualitatively
forecasts how many vehicles will depart within a given time frame, by assuming
that departure times follow a normal distribution. This method considers
sampling sessions as Poisson distributions which are superimposed to obtain a
single approximated Gaussian model. Given the Gaussian distribution assumption
of the departure times, we also model the problem with Gaussian Mixture Models
(GMM), in which the priorly set number of clusters represents the desired time
granularity. Evaluation has shown that, for the dataset tested, low error and
high confidence ($\approx 95\%$) are achievable for 15- and 10-minute intervals,
and that GMM outperforms traditional modeling but is less generalizable across
datasets, as it is a closer fit to the sampling data. We conclude by discussing
future possibilities and practical applications of the discussed model.
| Nicholas H. Kirk and Ilya Dianov | null | 1507.04502 | null | null |
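A small sketch, under the abstract's normality assumption, of fitting a Gaussian Mixture Model to first daily departure times and estimating how many vehicles depart in a given window; the number of components and the time encoding (minutes after midnight) are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fit_departure_gmm(departure_minutes, n_components=6, random_state=0):
    """Fit a 1-D Gaussian mixture to observed first daily departure times."""
    t = np.asarray(departure_minutes, dtype=float).reshape(-1, 1)
    return GaussianMixture(n_components=n_components,
                           random_state=random_state).fit(t)

def expected_departures(gmm, t_start, t_end, n_vehicles):
    """Expected number of vehicles departing in [t_start, t_end) minutes."""
    p = 0.0
    for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(),
                          gmm.covariances_.ravel()):
        sd = np.sqrt(var)
        p += w * (norm.cdf(t_end, mu, sd) - norm.cdf(t_start, mu, sd))
    return n_vehicles * p
```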
Upper-Confidence-Bound Algorithms for Active Learning in Multi-Armed
Bandits | cs.LG | In this paper, we study the problem of estimating uniformly well the mean
values of several distributions given a finite budget of samples. If the
variance of the distributions were known, one could design an optimal sampling
strategy by collecting a number of independent samples per distribution that is
proportional to their variance. However, in the more realistic case where the
distributions are not known in advance, one needs to design adaptive sampling
strategies in order to select which distribution to sample from according to
the previously observed samples. We describe two strategies based on pulling
the distributions a number of times that is proportional to a high-probability
upper-confidence-bound on their variance (built from previous observed samples)
and report a finite-sample performance analysis on the excess estimation error
compared to the optimal allocation. We show that the performance of these
allocation strategies depends not only on the variances but also on the full
shape of the distributions.
| Alexandra Carpentier, Alessandro Lazaric, Mohammad Ghavamzadeh, R\'emi
Munos, Peter Auer, Andr\'as Antos | null | 1507.04523 | null | null |
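A rough sketch of the allocation idea described above: keep sampling the distribution whose upper confidence bound on the variance is largest relative to how often it has already been sampled. The particular confidence width is a placeholder, not the bound analyzed in the paper.

```python
import numpy as np

def ucb_allocation(arms, budget, delta=0.05):
    """Adaptive allocation for uniformly good mean estimation across `arms`.

    `arms` is a list of callables, each returning one sample when called.
    Returns the final mean estimates after spending the sampling budget.
    """
    K = len(arms)
    samples = [[arm(), arm()] for arm in arms]   # initialize with 2 pulls each
    for _ in range(max(budget - 2 * K, 0)):
        scores = []
        for s in samples:
            n = len(s)
            var_ucb = np.var(s, ddof=1) + np.sqrt(2.0 * np.log(1.0 / delta) / n)
            scores.append(var_ucb / n)           # largest UCB relative to pull count
        k = int(np.argmax(scores))
        samples[k].append(arms[k]())
    return [np.mean(s) for s in samples]
```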
Learning to classify with possible sensor failures | cs.LG cs.IT math.IT stat.ML | In this paper, we propose a general framework to learn a robust large-margin
binary classifier when corrupt measurements, called anomalies, caused by sensor
failure might be present in the training set. The goal is to minimize the
generalization error of the classifier on non-corrupted measurements while
controlling the false alarm rate associated with anomalous samples. By
incorporating a non-parametric regularizer based on an empirical entropy
estimator, we propose a Geometric-Entropy-Minimization regularized Maximum
Entropy Discrimination (GEM-MED) method to learn to classify and detect
anomalies in a joint manner. We demonstrate our approach using simulated data and a real
multimodal data set. Our GEM-MED method can yield improved performance over
previous robust classification methods in terms of both classification accuracy
and anomaly detection rate.
| Tianpei Xie, Nasser M. Nasrabadi and Alfred O. Hero | 10.1109/ICASSP.2014.6854029 | 1507.04540 | null | null |
A Dependency-Based Neural Network for Relation Classification | cs.CL cs.LG cs.NE | Previous research on relation classification has verified the effectiveness
of using dependency shortest paths or subtrees. In this paper, we further
explore how to make full use of the combination of these dependency
information. We first propose a new structure, termed augmented dependency path
(ADP), which is composed of the shortest dependency path between two entities
and the subtrees attached to the shortest path. To exploit the semantic
representation behind the ADP structure, we develop dependency-based neural
networks (DepNN): a recursive neural network designed to model the subtrees,
and a convolutional neural network to capture the most important features on
the shortest path. Experiments on the SemEval-2010 dataset show that our
proposed method achieves state-of-the-art results.
| Yang Liu, Furu Wei, Sujian Li, Heng Ji, Ming Zhou, Houfeng Wang | null | 1507.04646 | null | null |
Less is More: Nystr\"om Computational Regularization | stat.ML cs.LG | We study Nystr\"om type subsampling approaches to large scale kernel methods,
and prove learning bounds in the statistical learning setting, where random
sampling and high probability estimates are considered. In particular, we prove
that these approaches can achieve optimal learning bounds, provided the
subsampling level is suitably chosen. These results suggest a simple
incremental variant of Nystr\"om Kernel Regularized Least Squares, where the
subsampling level implements a form of computational regularization, in the
sense that it controls at the same time regularization and computations.
Extensive experimental analysis shows that the considered approach achieves
state of the art performances on benchmark large scale datasets.
| Alessandro Rudi, Raffaello Camoriano, Lorenzo Rosasco | null | 1507.04717 | null | null |
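A minimal sketch of Nystr\"om kernel regularized least squares with uniformly subsampled landmarks, where the subsampling level m plays the computational-regularization role described above; the RBF kernel choice and the small jitter term are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_krls_fit(X, y, m, lam, gamma=1.0, rng=None):
    """Nystrom KRLS: solve (Knm^T Knm + n * lam * Kmm) alpha = Knm^T y."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.choice(len(X), size=m, replace=False)   # uniform landmark subsampling
    Xm = X[idx]
    Knm = rbf_kernel(X, Xm, gamma=gamma)              # n x m
    Kmm = rbf_kernel(Xm, Xm, gamma=gamma)             # m x m
    A = Knm.T @ Knm + len(X) * lam * Kmm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(m), Knm.T @ y)
    return Xm, alpha

def nystrom_krls_predict(Xm, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, Xm, gamma=gamma) @ alpha
```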
Variational Gram Functions: Convex Analysis and Optimization | math.OC cs.LG stat.ML | We propose a new class of convex penalty functions, called \emph{variational
Gram functions} (VGFs), that can promote pairwise relations, such as
orthogonality, among a set of vectors in a vector space. These functions can
serve as regularizers in convex optimization problems arising from hierarchical
classification, multitask learning, and estimating vectors with disjoint
supports, among other applications. We study convexity for VGFs, and give
efficient characterizations for their convex conjugates, subdifferentials, and
proximal operators. We discuss efficient optimization algorithms for
regularized loss minimization problems where the loss admits a common, yet
simple, variational representation and the regularizer is a VGF. These
algorithms enjoy a simple kernel trick, an efficient line search, as well as
computational advantages over first order methods based on the subdifferential
or proximal maps. We also establish a general representer theorem for such
learning problems. Lastly, numerical experiments on a hierarchical
classification problem are presented to demonstrate the effectiveness of VGFs
and the associated optimization algorithms.
| Amin Jalali, Maryam Fazel, Lin Xiao | null | 1507.04734 | null | null |
Deep Learning and Music Adversaries | cs.LG cs.NE cs.SD | An adversary is essentially an algorithm intent on making a classification
system perform in some particular way given an input, e.g., increase the
probability of a false negative. Recent work builds adversaries for deep
learning systems applied to image object recognition, which exploits the
parameters of the system to find the minimal perturbation of the input image
such that the network misclassifies it with high confidence. We adapt this
approach to construct and deploy an adversary of deep learning systems applied
to music content analysis. In our case, however, the input to the systems is
magnitude spectral frames, which requires special care in order to produce
valid input audio signals from network-derived perturbations. For two different
train-test partitionings of two benchmark datasets, and two different deep
architectures, we find that this adversary is very effective in defeating the
resulting systems. We find the convolutional networks are more robust, however,
compared with systems based on a majority vote over individually classified
audio frames. Furthermore, we integrate the adversary into the training of new
deep systems, but do not find that this improves their resilience against the
same adversary.
| Corey Kereliuk and Bob L. Sturm and Jan Larsen | null | 1507.04761 | null | null |
Sparse Probit Linear Mixed Model | stat.ML cs.LG | Linear Mixed Models (LMMs) are important tools in statistical genetics. When
used for feature selection, they allow to find a sparse set of genetic traits
that best predict a continuous phenotype of interest, while simultaneously
correcting for various confounding factors such as age, ethnicity and
population structure. Formulated as models for linear regression, LMMs have
been restricted to continuous phenotypes. We introduce the Sparse Probit Linear
Mixed Model (Probit-LMM), where we generalize the LMM modeling paradigm to
binary phenotypes. As a technical challenge, the model no longer possesses a
closed-form likelihood function. In this paper, we present a scalable
approximate inference algorithm that lets us fit the model to high-dimensional
data sets. We show on three real-world examples from different domains that in
the setup of binary labels, our algorithm leads to better prediction accuracies
and also selects features which show less correlation with the confounding
factors.
| Stephan Mandt, Florian Wenzel, Shinichi Nakajima, John P. Cunningham,
Christoph Lippert, and Marius Kloft | 10.1007/s10994-017-5652-6 | 1507.04777 | null | null |
Sharp Time--Data Tradeoffs for Linear Inverse Problems | cs.IT cs.LG math.IT math.OC math.ST stat.TH | In this paper we characterize sharp time-data tradeoffs for optimization
problems used for solving linear inverse problems. We focus on the minimization
of a least-squares objective subject to a constraint defined as the sub-level
set of a penalty function. We present a unified convergence analysis of the
gradient projection algorithm applied to such problems. We sharply characterize
the convergence rate associated with a wide variety of random measurement
ensembles in terms of the number of measurements and structural complexity of
the signal with respect to the chosen penalty function. The results apply to
both convex and nonconvex constraints, demonstrating that a linear convergence
rate is attainable even though the least squares objective is not strongly
convex in these settings. When specialized to Gaussian measurements our results
show that such linear convergence occurs when the number of measurements is
merely 4 times the minimal number required to recover the desired signal at all
(a.k.a. the phase transition). We also achieve a slower but geometric rate of
convergence precisely above the phase transition point. Extensive numerical
results suggest that the derived rates exactly match the empirical performance.
| Samet Oymak, Benjamin Recht, and Mahdi Soltanolkotabi | null | 1507.04793 | null | null |
Exploratory topic modeling with distributional semantics | cs.IR cs.CL cs.LG | As we continue to collect and store textual data in a multitude of domains,
we are regularly confronted with material whose largely unknown thematic
structure we want to uncover. With unsupervised, exploratory analysis, no prior
knowledge about the content is required and highly open-ended tasks can be
supported. In the past few years, probabilistic topic modeling has emerged as a
popular approach to this problem. Nevertheless, the representation of the
latent topics as aggregations of semi-coherent terms limits their
interpretability and level of detail.
This paper presents an alternative approach to topic modeling that maps
topics as a network for exploration, based on distributional semantics using
learned word vectors. From the granular level of terms and their semantic
similarity relations, global topic structures emerge as clustered regions and
gradients of concepts. Moreover, the paper discusses the visual interactive
representation of the topic map, which plays an important role in supporting
its exploration.
| Samuel R\"onnqvist | null | 1507.04798 | null | null |
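A rough sketch of the underlying idea: given learned word vectors, clusters of semantically similar terms play the role of topics in the map. The embedding source and the particular clustering algorithm (k-means here) are our assumptions, not necessarily the paper's choices.

```python
# Illustrative sketch: cluster length-normalised word vectors so that each
# cluster of terms acts as a topic region; `vectors` maps term -> embedding
# and may come from any word-embedding model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def topic_map(vectors, n_topics=20):
    terms = list(vectors)
    X = normalize(np.vstack([vectors[t] for t in terms]))  # unit-length vectors
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(X)
    topics = {}
    for term, lab in zip(terms, labels):
        topics.setdefault(lab, []).append(term)
    return topics
```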
Building End-To-End Dialogue Systems Using Generative Hierarchical
Neural Network Models | cs.CL cs.AI cs.LG cs.NE | We investigate the task of building open domain, conversational dialogue
systems based on large dialogue corpora using generative models. Generative
models produce system responses that are autonomously generated word-by-word,
opening up the possibility for realistic, flexible interactions. In support of
this goal, we extend the recently proposed hierarchical recurrent
encoder-decoder neural network to the dialogue domain, and demonstrate that
this model is competitive with state-of-the-art neural language models and
back-off n-gram models. We investigate the limitations of this and similar
approaches, and show how its performance can be improved by bootstrapping the
learning from a larger question-answer pair corpus and from pretrained word
embeddings.
| Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville
and Joelle Pineau | null | 1507.04808 | null | null |
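The hierarchical encoder-decoder idea can be sketched compactly: an utterance-level encoder reads each turn, a context-level encoder summarises the conversation so far, and a decoder generates the response. The module sizes and single-layer GRUs below are placeholder choices, not the paper's configuration.

```python
# Compact sketch of a hierarchical recurrent encoder-decoder for dialogue.
import torch
import torch.nn as nn

class HRED(nn.Module):
    def __init__(self, vocab, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.utt_enc = nn.GRU(emb, hid, batch_first=True)   # encodes one turn
        self.ctx_enc = nn.GRU(hid, hid, batch_first=True)   # encodes the dialogue
        self.decoder = nn.GRU(emb, hid, batch_first=True)   # generates the reply
        self.out = nn.Linear(hid, vocab)

    def forward(self, turns, response):
        # turns: (batch, n_turns, turn_len); response: (batch, resp_len)
        b, n, t = turns.shape
        _, h_utt = self.utt_enc(self.embed(turns.reshape(b * n, t)))
        _, h_ctx = self.ctx_enc(h_utt.reshape(b, n, -1))     # dialogue-level state
        dec_out, _ = self.decoder(self.embed(response), h_ctx)
        return self.out(dec_out)                             # next-word logits
```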
Deep Multimodal Speaker Naming | cs.CV cs.LG cs.MM cs.SD | Automatic speaker naming is the problem of localizing as well as identifying
each speaking character in a TV/movie/live show video. This problem is
challenging mainly due to its multimodal nature: the face cue alone is
insufficient to achieve good performance. Previous multimodal approaches to
this problem usually process the data of different modalities individually and
merge them using handcrafted heuristics. Such approaches work well for simple
scenes, but fail to achieve high performance for speakers with large appearance
variations. In this paper, we propose a novel convolutional neural networks
(CNN) based learning framework to automatically learn the fusion function of
both face and audio cues. We show that without using face tracking, facial
landmark localization or subtitle/transcript, our system with robust multimodal
feature extraction is able to achieve state-of-the-art speaker naming
performance evaluated on two diverse TV series. The dataset and implementation
of our algorithm are publicly available online.
| Yongtao Hu, Jimmy Ren, Jingwen Dai, Chang Yuan, Li Xu, and Wenping
Wang | 10.1145/2733373.2806293 | 1507.04831 | null | null |
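A toy sketch of learned audio-visual fusion for speaker naming: two branches embed precomputed face and audio features, and a joint classifier is trained end to end, so the fusion function is learned rather than handcrafted. The feature dimensions and layer sizes are placeholders, not the paper's network.

```python
# Illustrative two-branch fusion network; inputs are assumed to be precomputed
# face and audio feature vectors.
import torch
import torch.nn as nn

class SpeakerNamer(nn.Module):
    def __init__(self, n_speakers, face_dim=512, audio_dim=128, hid=256):
        super().__init__()
        self.face_branch = nn.Sequential(nn.Linear(face_dim, hid), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hid), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(2 * hid, hid), nn.ReLU(), nn.Linear(hid, n_speakers))

    def forward(self, face_feat, audio_feat):
        # Concatenate the branch outputs so the fusion weights are learned jointly.
        fused = torch.cat([self.face_branch(face_feat),
                           self.audio_branch(audio_feat)], dim=-1)
        return self.classifier(fused)   # per-speaker logits
```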
Maximum Entropy Deep Inverse Reinforcement Learning | cs.LG | This paper presents a general framework for exploiting the representational
capacity of neural networks to approximate complex, nonlinear reward functions
in the context of solving the inverse reinforcement learning (IRL) problem. We
show in this context that the Maximum Entropy paradigm for IRL lends itself
naturally to the efficient training of deep architectures. At test time, the
approach leads to a computational complexity independent of the number of
demonstrations, which makes it especially well-suited for applications in
life-long learning scenarios. Our approach achieves performance commensurate
with the state-of-the-art on existing benchmarks, while exceeding it on an alternative
benchmark based on highly varying reward structures. Finally, we extend the
basic architecture - which is equivalent to a simplified subclass of Fully
Convolutional Neural Networks (FCNNs) with width one - to include larger
convolutions in order to eliminate dependency on precomputed spatial features
and work on raw input representations.
| Markus Wulfmeier, Peter Ondruska, Ingmar Posner | null | 1507.04888 | null | null |
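For reference, the standard Maximum Entropy IRL gradient that makes a neural reward parameterization convenient can be written as follows (our notation, with the per-state reward given by the network output):

```latex
% Sketch of the MaxEnt IRL likelihood gradient w.r.t. the network weights
% \theta; it factors through the reward, so backpropagation handles the rest.
\[
  \frac{\partial \mathcal{L}}{\partial \theta}
  = \sum_{s} \bigl(\mu_{D}(s) - \mathbb{E}_{\pi_{r_\theta}}[\mu(s)]\bigr)\,
    \frac{\partial r_\theta(s)}{\partial \theta},
\]
% where $\mu_D$ counts state visitations in the demonstrations and
% $\mathbb{E}_{\pi_{r_\theta}}[\mu]$ is the expected visitation under the
% soft-optimal policy for the current reward $r_\theta$.
```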
Lower Bounds for Multi-armed Bandit with Non-equivalent Multiple Plays | cs.LG | We study the stochastic multi-armed bandit problem with non-equivalent
multiple plays where, at each step, an agent chooses not only a set of arms,
but also their order, which influences reward distribution. In several problem
formulations with different assumptions, we provide lower bounds on the regret
with the standard $O(\log{t})$ asymptotics but novel coefficients, and we
provide matching optimal algorithms, thus proving that these bounds cannot be
improved.
| Aleksandr Vorobev and Gleb Gusev | null | 1507.04910 | null | null |
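For context only, the classical single-play Lai-Robbins lower bound that these $O(\log t)$ asymptotics generalise reads:

```latex
% Background, not the paper's result: the single-play Lai-Robbins bound.
% For any uniformly good policy,
\[
  \liminf_{t \to \infty} \frac{\mathbb{E}[R_t]}{\log t}
  \;\ge\; \sum_{i:\,\Delta_i > 0} \frac{\Delta_i}{\mathrm{KL}(\nu_i, \nu^\ast)},
\]
% where $\Delta_i$ is the gap of arm $i$, $\nu_i$ its reward distribution and
% $\nu^\ast$ that of an optimal arm; the paper derives analogous bounds, with
% new coefficients, for ordered multiple plays.
```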