title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Spectral Analysis of Symmetric and Anti-Symmetric Pairwise Kernels | cs.LG stat.ML | We consider the problem of learning regression functions from pairwise data
when there exists prior knowledge that the relation to be learned is symmetric
or anti-symmetric. Such prior knowledge is commonly enforced by symmetrizing or
anti-symmetrizing pairwise kernel functions. Through spectral analysis, we show
that these transformations reduce the kernel's effective dimension. Further, we
provide an analysis of the approximation properties of the resulting kernels,
and bound the regularization bias of the kernels in terms of the corresponding
bias of the original kernel.
| Tapio Pahikkala, Markus Viljanen, Antti Airola, Willem Waegeman | null | 1506.05950 | null | null |
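As a concrete illustration of the symmetrization the abstract above refers to, the sketch below builds symmetric and anti-symmetric pairwise kernels from an RBF base kernel; the base kernel, data, and parameters are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' code): symmetrized and anti-symmetrized
# pairwise Kronecker kernels built from an RBF base kernel.
import numpy as np

def rbf(a, b, gamma=1.0):
    """Base kernel k0 between two single data points."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def pairwise_kernels(p, q, gamma=1.0):
    """Kernels between pairs p=(a,b) and q=(c,d).

    k_sym  = k0(a,c)k0(b,d) + k0(a,d)k0(b,c)   (symmetric relations)
    k_anti = k0(a,c)k0(b,d) - k0(a,d)k0(b,c)   (anti-symmetric relations)
    """
    (a, b), (c, d) = p, q
    k_ac, k_bd = rbf(a, c, gamma), rbf(b, d, gamma)
    k_ad, k_bc = rbf(a, d, gamma), rbf(b, c, gamma)
    return k_ac * k_bd + k_ad * k_bc, k_ac * k_bd - k_ad * k_bc

rng = np.random.default_rng(0)
x = [rng.normal(size=3) for _ in range(4)]
k_sym, k_anti = pairwise_kernels((x[0], x[1]), (x[2], x[3]))
# Swapping the order inside one pair leaves k_sym unchanged and flips k_anti's sign.
k_sym_sw, k_anti_sw = pairwise_kernels((x[1], x[0]), (x[2], x[3]))
assert np.isclose(k_sym, k_sym_sw) and np.isclose(k_anti, -k_anti_sw)
```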
Enhanced Lasso Recovery on Graph | cs.LG stat.ML | This work aims at recovering signals that are sparse on graphs. Compressed
sensing offers techniques for signal recovery from a few linear measurements
and graph Fourier analysis provides a signal representation on graph. In this
paper, we leverage these two frameworks to introduce a new Lasso recovery
algorithm on graphs. More precisely, we present a non-convex, non-smooth
algorithm that outperforms the standard convex Lasso technique. We carry out
numerical experiments on three benchmark graph datasets.
| Xavier Bresson and Thomas Laurent and James von Brecht | null | 1506.05985 | null | null |
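A minimal sketch of the setup described above, using only the standard convex Lasso baseline (the paper's own algorithm is non-convex and non-smooth): a signal sparse in the graph Fourier basis is recovered from a few random linear measurements. The random graph, sparsity level, and regularization weight are illustrative assumptions.

```python
# Convex baseline only: recover a signal sparse in the graph Fourier basis from a
# few random linear measurements (the paper proposes a non-convex improvement).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 60, 25, 3                                # nodes, measurements, sparsity

# Random graph Laplacian and its eigenvectors (the graph Fourier basis).
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(1)) - A
_, U = np.linalg.eigh(L)

# Signal sparse in the graph Fourier domain, observed through random measurements.
coef = np.zeros(n); coef[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_true = U @ coef
M = rng.normal(size=(m, n)) / np.sqrt(m)
y = M @ x_true + 0.01 * rng.normal(size=m)

# Lasso in the graph Fourier basis, then map back to the node domain.
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(M @ U, y)
x_hat = U @ lasso.coef_
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```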
Measuring Emotional Contagion in Social Media | cs.SI cs.LG physics.soc-ph | Social media are used as main discussion channels by millions of individuals
every day. The content individuals produce in daily social-media-based
micro-communications, and the emotions therein expressed, may impact the
emotional states of others. A recent experiment performed on Facebook
hypothesized that emotions spread online, even in the absence of non-verbal cues
typical of in-person interactions, and that individuals are more likely to
adopt positive or negative emotions if these are over-expressed in their social
network. Experiments of this type, however, raise ethical concerns, as they
require massive-scale content manipulation with unknown consequences for the
individuals therein involved. Here, we study the dynamics of emotional
contagion using Twitter. Rather than manipulating content, we devise a null
model that discounts some confounding factors (including the effect of
emotional contagion). We measure the emotional valence of content the users are
exposed to before posting their own tweets. We determine that on average a
negative post follows an over-exposure to 4.34% more negative content than
baseline, while positive posts occur after an average over-exposure to 4.50%
more positive contents. We highlight the presence of a linear relationship
between the average emotional valence of the stimuli users are exposed to, and
that of the responses they produce. We also identify two different classes of
individuals: highly and scarcely susceptible to emotional contagion. Highly
susceptible users are significantly less inclined to adopt negative emotions
than the scarcely susceptible ones, but equally likely to adopt positive
emotions. In general, the likelihood of adopting positive emotions is much
greater than that of negative emotions.
| Emilio Ferrara, Zeyao Yang | 10.1371/journal.pone.0142390 | 1506.06021 | null | null |
A general framework for the IT-based clustering methods | cs.CV cs.LG stat.ML | Previously, we proposed a physically inspired rule to organize the data
points in a sparse yet effective structure, called the in-tree (IT) graph,
which is able to capture a wide class of underlying cluster structures in the
datasets, especially for the density-based datasets. Although there are some
redundant edges or lines between clusters that need to be removed automatically,
this IT graph has a big advantage compared with the k-nearest-neighborhood
(k-NN) or the minimal spanning tree (MST) graph, in that the redundant edges in
the IT graph are much more distinguishable and thus can be easily determined by
several methods previously proposed by us.
In this paper, we propose a general framework to re-construct the IT graph,
based on an initial neighborhood graph, such as the k-NN or MST, etc, and the
corresponding graph distances. For this general framework, our previous way of
constructing the IT graph turns out to be a special case of it. This general
framework 1) can make the IT graph capture a wider class of underlying cluster
structures in the datasets, especially for the manifolds, and 2) should be more
effective to cluster the sparse or graph-based datasets.
| Teng Qiu, Yongjie Li | null | 1506.06068 | null | null |
Quantifying the Effect of Sentiment on Information Diffusion in Social
Media | cs.SI cs.LG physics.soc-ph | Social media have become the main vehicle of information production and
consumption online. Millions of users every day log on their Facebook or
Twitter accounts to get updates and news, read about their topics of interest,
and become exposed to new opportunities and interactions. Although recent
studies suggest that the contents users produce will affect the emotions of
their readers, we still lack a rigorous understanding of the role and effects
of contents sentiment on the dynamics of information diffusion. This work aims
at quantifying the effect of sentiment on information diffusion, to understand:
(i) whether positive conversations spread faster and/or broader than negative
ones (or vice-versa); (ii) what kind of emotions are more typical of popular
conversations on social media; and, (iii) what type of sentiment is expressed
in conversations characterized by different temporal dynamics. Our findings
show that, at the level of contents, negative messages spread faster than
positive ones, but positive ones reach larger audiences, suggesting that people
are more inclined to share and favorite positive contents, the so-called
positive bias. As for the entire conversations, we highlight how different
temporal dynamics exhibit different sentiment patterns: for example, positive
sentiment builds up for highly-anticipated events, while unexpected events are
mainly characterized by negative sentiment. Our contribution is a milestone to
understand how the emotions expressed in short texts affect their spreading in
online social ecosystems, and may help to craft effective policies and
strategies for content generation and diffusion.
| Emilio Ferrara, Zeyao Yang | 10.7717/peerj-cs.26 | 1506.06072 | null | null |
A Convergent Gradient Descent Algorithm for Rank Minimization and
Semidefinite Programming from Random Linear Measurements | stat.ML cs.LG | We propose a simple, scalable, and fast gradient descent algorithm to
optimize a nonconvex objective for the rank minimization problem and a closely
related family of semidefinite programs. With $O(r^3 \kappa^2 n \log n)$ random
measurements of a positive semidefinite $n \times n$ matrix of rank $r$ and
condition number $\kappa$, our method is guaranteed to converge linearly to the
global optimum.
| Qinqing Zheng, John Lafferty | null | 1506.06081 | null | null |
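A hedged sketch of the kind of factored gradient descent the abstract describes: recover a rank-r positive semidefinite matrix X = UU^T from random linear measurements by spectral initialization followed by plain gradient steps on U. Step size, iteration count, and problem sizes below are ad hoc choices, not the paper's.

```python
# Hedged sketch of factored gradient descent for recovering a rank-r PSD matrix
# X = U U^T from measurements y_i = <A_i, X>.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 30, 2, 600
U_true = rng.normal(size=(n, r))
X_true = U_true @ U_true.T
A = rng.normal(size=(m, n, n))
A = (A + A.transpose(0, 2, 1)) / 2                 # symmetric sensing matrices
y = np.einsum('mij,ij->m', A, X_true)

# Spectral initialization: top-r eigenpairs of (1/m) sum_i y_i A_i.
M0 = np.einsum('m,mij->ij', y, A) / m
w, V = np.linalg.eigh(M0)
U = V[:, -r:] * np.sqrt(np.clip(w[-r:], 0, None))

eta = 0.2 / np.linalg.norm(X_true, 2)              # ad hoc step size
for _ in range(300):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    grad = (4.0 / m) * np.einsum('m,mij->ij', resid, A) @ U
    U -= eta * grad
print("relative error:", np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true))
```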
Approximate Inference with the Variational Holder Bound | stat.ML cs.LG math.FA | We introduce the Variational Holder (VH) bound as an alternative to
Variational Bayes (VB) for approximate Bayesian inference. Unlike VB which
typically involves maximization of a non-convex lower bound with respect to the
variational parameters, the VH bound involves minimization of a convex upper
bound to the intractable integral with respect to the variational parameters.
Minimization of the VH bound is a convex optimization problem; hence the VH
method can be applied using off-the-shelf convex optimization algorithms and
the approximation error of the VH bound can also be analyzed using tools from
convex optimization literature. We present experiments on the task of
integrating a truncated multivariate Gaussian distribution and compare our
method to VB, EP and a state-of-the-art numerical integration method for this
problem.
| Guillaume Bouchard, Balaji Lakshminarayanan | null | 1506.06100 | null | null |
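As I read the abstract, the bound rests on Hölder's inequality applied to a factorization of the intractable integrand; the LaTeX fragment below states that inequality in generic notation, where the split into f_1, f_2 and the exponents are my own placeholders rather than the paper's.

```latex
% Sketch of the inequality behind a Holder-type upper bound (notation is mine).
% For a factorized integrand f(x) = f_1(x) f_2(x) and conjugate exponents
% 1/p + 1/q = 1 with p, q > 1,
\[
  \int f_1(x)\, f_2(x)\, dx
  \;\le\;
  \left( \int f_1(x)^{p}\, dx \right)^{1/p}
  \left( \int f_2(x)^{q}\, dx \right)^{1/q},
\]
% so the intractable integral is bounded by two (hopefully tractable) integrals.
% Minimizing the right-hand side over the exponents and over how f is split
% gives the variational upper bound whose convexity the abstract emphasizes.
```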
The Extreme Value Machine | cs.LG | It is often desirable to be able to recognize when inputs to a recognition
function learned in a supervised manner correspond to classes unseen at
training time. With this ability, new class labels could be assigned to these
inputs by a human operator, allowing them to be incorporated into the
recognition function --- ideally under an efficient incremental update
mechanism. While good algorithms that assume inputs from a fixed set of classes
exist, e.g., artificial neural networks and kernel machines, it is not
immediately obvious how to extend them to perform incremental learning in the
presence of unknown query classes. Existing algorithms take little to no
distributional information into account when learning recognition functions and
lack a strong theoretical foundation. We address this gap by formulating a
novel, theoretically sound classifier --- the Extreme Value Machine (EVM). The
EVM has a well-grounded interpretation derived from statistical Extreme Value
Theory (EVT), and is the first classifier to be able to perform nonlinear
kernel-free variable bandwidth incremental learning. Compared to other
classifiers in the same deep network derived feature space, the EVM is accurate
and efficient on an established benchmark partition of the ImageNet dataset.
| Ethan M. Rudd, Lalit P. Jain, Walter J. Scheirer, Terrance E. Boult | 10.1109/TPAMI.2017.2707495 | 1506.06112 | null | null |
A simple application of FIC to model selection | physics.data-an cs.LG stat.ML | We have recently proposed a new information-based approach to model
selection, the Frequentist Information Criterion (FIC), that reconciles
information-based and frequentist inference. The purpose of this current paper
is to provide a simple example of the application of this criterion and a
demonstration of the natural emergence of model complexities with both AIC-like
($N^0$) and BIC-like ($\log N$) scaling with observation number $N$. The
application developed is deliberately simplified to make the analysis
analytically tractable.
| Paul A. Wiggins | null | 1506.06129 | null | null |
CO2 Forest: Improved Random Forest by Continuous Optimization of Oblique
Splits | cs.LG cs.CV | We propose a novel algorithm for optimizing multivariate linear threshold
functions as split functions of decision trees to create improved Random Forest
classifiers. Standard tree induction methods resort to sampling and exhaustive
search to find good univariate split functions. In contrast, our method
computes a linear combination of the features at each node, and optimizes the
parameters of the linear combination (oblique) split functions by adopting a
variant of latent variable SVM formulation. We develop a convex-concave upper
bound on the classification loss for a one-level decision tree, and optimize
the bound by stochastic gradient descent at each internal node of the tree.
Forests of up to 1000 Continuously Optimized Oblique (CO2) decision trees are
created, which significantly outperform Random Forest with univariate splits
and previous techniques for constructing oblique trees. Experimental results
are reported on multi-class classification benchmarks and on Labeled Faces in
the Wild (LFW) dataset.
| Mohammad Norouzi, Maxwell D. Collins, David J. Fleet, Pushmeet Kohli | null | 1506.06155 | null | null |
Detectability thresholds and optimal algorithms for community structure
in dynamic networks | stat.ML cond-mat.dis-nn cs.LG cs.SI physics.data-an | We study the fundamental limits on learning latent community structure in
dynamic networks. Specifically, we study dynamic stochastic block models where
nodes change their community membership over time, but where edges are
generated independently at each time step. In this setting (which is a special
case of several existing models), we are able to derive the detectability
threshold exactly, as a function of the rate of change and the strength of the
communities. Below this threshold, we claim that no algorithm can identify the
communities better than chance. We then give two algorithms that are optimal in
the sense that they succeed all the way down to this limit. The first uses
belief propagation (BP), which gives asymptotically optimal accuracy, and the
second is a fast spectral clustering algorithm, based on linearizing the BP
equations. We verify our analytic and algorithmic results via numerical
simulation, and close with a brief discussion of extensions and open questions.
| Amir Ghasemian and Pan Zhang and Aaron Clauset and Cristopher Moore
and Leto Peel | 10.1103/PhysRevX.6.031005 | 1506.06179 | null | null |
Collective Mind, Part II: Towards Performance- and Cost-Aware Software
Engineering as a Natural Science | cs.SE cs.LG cs.PF | Nowadays, engineers have to develop software often without even knowing which
hardware it will eventually run on in numerous mobile phones, tablets,
desktops, laptops, data centers, supercomputers and cloud services.
Unfortunately, optimizing compilers are no longer keeping pace with the
ever-increasing complexity of computer systems and may produce severely
underperforming executable code while wasting expensive resources and energy.
We present our practical and collaborative solution to this problem via
light-weight wrappers around any software piece when more than one
implementation or optimization choice is available. These wrappers are connected
with a public Collective Mind autotuning infrastructure and repository of
knowledge (c-mind.org/repo) to continuously monitor various important
characteristics of these pieces (computational species) across numerous
existing hardware configurations together with randomly selected optimizations.
Similar to natural sciences, we can now continuously track winning solutions
(optimizations for a given hardware) that minimize all costs of a computation
(execution time, energy spent, code size, failures, memory and storage
footprint, optimization time, faults, contentions, inaccuracy and so on) of a
given species on a Pareto frontier along with any unexpected behavior. The
community can then collaboratively classify solutions, prune redundant ones,
and correlate them with various features of software, its inputs (data sets)
and used hardware either manually or using powerful predictive analytics
techniques. Our approach can then help create a large, realistic, diverse,
representative, and continuously evolving benchmark with related optimization
knowledge while gradually covering all possible software and hardware to be
able to predict best optimizations and improve compilers and hardware depending
on usage scenarios and requirements.
| Grigori Fursin and Abdul Memon and Christophe Guillon and Anton
Lokhmotov | null | 1506.06256 | null | null |
Aligning where to see and what to tell: image caption with region-based
attention and scene factorization | cs.CV cs.LG stat.ML | Recent progress on automatic generation of image captions has shown that it
is possible to describe the most salient information conveyed by images with
accurate and meaningful sentences. In this paper, we propose an image caption
system that exploits the parallel structures between images and sentences. In
our model, the process of generating the next word, given the previously
generated ones, is aligned with the visual perception experience where the
attention shifting among the visual regions imposes a thread of visual
ordering. This alignment characterizes the flow of "abstract meaning", encoding
what is semantically shared by both the visual scene and the text description.
Our system also makes another novel modeling contribution by introducing
scene-specific contexts that capture higher-level semantic information encoded
in an image. The contexts adapt language models for word generation to specific
scene types. We benchmark our system and contrast to published results on
several popular datasets. We show that using either region-based attention or
scene-specific contexts improves systems without those components. Furthermore,
combining these two modeling ingredients attains the state-of-the-art
performance.
| Junqi Jin, Kun Fu, Runpeng Cui, Fei Sha and Changshui Zhang | null | 1506.06272 | null | null |
Pose Estimation Based on 3D Models | cs.CV cs.LG cs.RO | In this paper, we propose a pose estimation system based on a rendered-image
training set, which predicts the pose of objects in real images, with knowledge
of object category and tight bounding box. We developed a patch-based
multi-class classification algorithm, and an iterative approach to improve the
accuracy. We achieved state-of-the-art performance on pose estimation task.
| Chuiwen Ma, Hao Su, Liang Shi | null | 1506.06274 | null | null |
Communication Efficient Distributed Agnostic Boosting | cs.LG stat.ML | We consider the problem of learning from distributed data in the agnostic
setting, i.e., in the presence of arbitrary forms of noise. Our main
contribution is a general distributed boosting-based procedure for learning an
arbitrary concept space, that is simultaneously noise tolerant, communication
efficient, and computationally efficient. This improves significantly over
prior works that were either communication efficient only in noise-free
scenarios or computationally prohibitive. Empirical results on large synthetic
and real-world datasets demonstrate the effectiveness and scalability of the
proposed approach.
| Shang-Tse Chen, Maria-Florina Balcan, Duen Horng Chau | null | 1506.06318 | null | null |
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms | cs.LG math.OC stat.ML | Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of
machine learning problems. Researchers and industry have developed several
techniques to optimize SGD's runtime performance, including asynchronous
execution and reduced precision. Our main result is a martingale-based analysis
that enables us to capture the rich noise models that may arise from such
techniques. Specifically, we use our new analysis in three ways: (1) we derive
convergence rates for the convex case (Hogwild!) with relaxed assumptions on
the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for
non-convex matrix problems including matrix completion; and (3) we design and
analyze an asynchronous SGD algorithm, called Buckwild!, that uses
lower-precision arithmetic. We show experimentally that our algorithms run
efficiently for a variety of problems on modern hardware.
| Christopher De Sa, Ce Zhang, Kunle Olukotun, Christopher R\'e | null | 1506.06438 | null | null |
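One of the noise sources the analysis covers, reduced-precision arithmetic, can be illustrated with a sequential (not asynchronous) SGD run whose iterates and gradients are rounded to a fixed-point grid; the quantization scheme and step size below are illustrative stand-ins, not Buckwild! itself.

```python
# Illustrative sketch of one noise source the analysis covers: SGD with
# low-precision (rounded) updates, run sequentially rather than asynchronously.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def quantize(v, scale=1 / 256):
    """Round to a fixed-point grid, a stand-in for 8-bit arithmetic."""
    return np.round(v / scale) * scale

w = np.zeros(d)
eta = 0.002
for t in range(20000):
    i = rng.integers(n)
    g = (X[i] @ w - y[i]) * X[i]          # stochastic gradient of one squared error
    w = quantize(w - eta * quantize(g))   # rounded gradient, rounded iterate
print("error:", np.linalg.norm(w - w_true))
```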
A Deep Memory-based Architecture for Sequence-to-Sequence Learning | cs.CL cs.LG cs.NE | We propose DEEPMEMORY, a novel deep architecture for sequence-to-sequence
learning, which performs the task through a series of nonlinear transformations
from the representation of the input sequence (e.g., a Chinese sentence) to the
final output sequence (e.g., translation to English). Inspired by the recently
proposed Neural Turing Machine (Graves et al., 2014), we store the intermediate
representations in stacked layers of memories, and use read-write operations on
the memories to realize the nonlinear transformations between the
representations. The types of transformations are designed in advance but the
parameters are learned from data. Through layer-by-layer transformations,
DEEPMEMORY can model complicated relations between sequences necessary for
applications such as machine translation between distant languages. The
architecture can be trained with normal back-propagation on sequenceto-sequence
data, and the learning can be easily scaled up to a large corpus. DEEPMEMORY is
broad enough to subsume the state-of-the-art neural translation model in
(Bahdanau et al., 2015) as its special case, while significantly improving upon
the model with its deeper architecture. Remarkably, DEEPMEMORY, being purely
neural network-based, can achieve performance comparable to the traditional
phrase-based machine translation system Moses with a small vocabulary and a
modest parameter size.
| Fandong Meng, Zhengdong Lu, Zhaopeng Tu, Hang Li, and Qun Liu | null | 1506.06442 | null | null |
A Theory of Local Learning, the Learning Channel, and the Optimality of
Backpropagation | cs.LG cs.NE stat.ML | In a physical neural system, where storage and processing are intimately
intertwined, the rules for adjusting the synaptic weights can only depend on
variables that are available locally, such as the activity of the pre- and
post-synaptic neurons, resulting in local learning rules. A systematic
framework for studying the space of local learning rules is obtained by first
specifying the nature of the local variables, and then the functional form that
ties them together into each learning rule. Such a framework enables also the
systematic discovery of new learning rules and exploration of relationships
between learning rules and group symmetries. We study polynomial local learning
rules stratified by their degree and analyze their behavior and capabilities in
both linear and non-linear units and networks. Stacking local learning rules in
deep feedforward networks leads to deep local learning. While deep local
learning can learn interesting representations, it cannot learn complex
input-output functions, even when targets are available for the top layer.
Learning complex input-output functions requires local deep learning where
target information is communicated to the deep layers through a backward
learning channel. The nature of the communicated information about the targets
and the structure of the learning channel partition the space of learning
algorithms. We estimate the learning channel capacity associated with several
algorithms and show that backpropagation outperforms them by simultaneously
maximizing the information rate and minimizing the computational cost, even in
recurrent networks. The theory clarifies the concept of Hebbian learning,
establishes the power and limitations of local learning rules, introduces the
learning channel which enables a formal analysis of the optimality of
backpropagation, and explains the sparsity of the space of learning rules
discovered so far.
| Pierre Baldi and Peter Sadowski | 10.1016/j.neunet.2016.07.006 | 1506.06472 | null | null |
Answer Sequence Learning with Neural Networks for Answer Selection in
Community Question Answering | cs.CL cs.IR cs.LG | In this paper, the answer selection problem in community question answering
(CQA) is regarded as an answer sequence labeling task, and a novel approach is
proposed based on the recurrent architecture for this problem. Our approach
applies convolution neural networks (CNNs) to learning the joint representation
of question-answer pair firstly, and then uses the joint representation as
input of the long short-term memory (LSTM) to learn the answer sequence of a
question for labeling the matching quality of each answer. Experiments
conducted on the SemEval 2015 CQA dataset show the effectiveness of our
approach.
| Xiaoqiang Zhou, Baotian Hu, Qingcai Chen, Buzhou Tang, Xiaolong Wang | null | 1506.06490 | null | null |
PAC-Bayes Iterated Logarithm Bounds for Martingale Mixtures | cs.LG math.PR stat.ML | We give tight concentration bounds for mixtures of martingales that are
simultaneously uniform over (a) mixture distributions, in a PAC-Bayes sense;
and (b) all finite times. These bounds are proved in terms of the martingale
variance, extending classical Bernstein inequalities, and sharpening and
simplifying prior work.
| Akshay Balsubramani | null | 1506.06573 | null | null |
Understanding Neural Networks Through Deep Visualization | cs.CV cs.LG cs.NE | Recent years have produced great advances in training large, deep neural
networks (DNNs), including notable successes in training convolutional neural
networks (convnets) to recognize natural images. However, our understanding of
how these models work, especially what computations they perform at
intermediate layers, has lagged behind. Progress in the field will be further
accelerated by the development of better tools for visualizing and interpreting
neural nets. We introduce two such tools here. The first is a tool that
visualizes the activations produced on each layer of a trained convnet as it
processes an image or video (e.g. a live webcam stream). We have found that
looking at live activations that change in response to user input helps build
valuable intuitions about how convnets work. The second tool enables
visualizing features at each layer of a DNN via regularized optimization in
image space. Because previous versions of this idea produced less recognizable
images, here we introduce several new regularization methods that combine to
produce qualitatively clearer, more interpretable visualizations. Both tools
are open source and work on a pre-trained convnet with minimal setup.
| Jason Yosinski and Jeff Clune and Anh Nguyen and Thomas Fuchs and Hod
Lipson | null | 1506.06579 | null | null |
Modality-dependent Cross-media Retrieval | cs.CV cs.IR cs.LG | In this paper, we investigate the cross-media retrieval between images and
text, i.e., using image to search text (I2T) and using text to search images
(T2I). Existing cross-media retrieval methods usually learn one couple of
projections, by which the original features of images and text can be projected
into a common latent space to measure the content similarity. However, using
the same projections for the two different retrieval tasks (I2T and T2I) may
lead to a tradeoff between their respective performances, rather than their
best performances. Different from previous works, we propose a
modality-dependent cross-media retrieval (MDCR) model, where two couples of
projections are learned for different cross-media retrieval tasks instead of
one couple of projections. Specifically, by jointly optimizing the correlation
between images and text and the linear regression from one modal space (image
or text) to the semantic space, two couples of mappings are learned to project
images and text from their original feature spaces into two common latent
subspaces (one for I2T and the other for T2I). Extensive experiments show the
superiority of the proposed MDCR compared with other methods. In particular,
based on the 4,096-dimensional convolutional neural network (CNN) visual feature
and the 100-dimensional LDA textual feature, the mAP of the proposed method
achieves 41.5\%, which is a new state-of-the-art performance on the Wikipedia
dataset.
| Yunchao Wei, Yao Zhao, Zhenfeng Zhu, Shikui Wei, Yanhui Xiao, Jiashi
Feng and Shuicheng Yan | null | 1506.06628 | null | null |
Nonparametric Bayesian Double Articulation Analyzer for Direct Language
Acquisition from Continuous Speech Signals | cs.AI cs.CL cs.LG stat.ML | Human infants can discover words directly from unsegmented speech signals
without any explicitly labeled data. In this paper, we develop a novel machine
learning method called nonparametric Bayesian double articulation analyzer
(NPB-DAA) that can directly acquire language and acoustic models from observed
continuous speech signals. For this purpose, we propose an integrative
generative model that combines a language model and an acoustic model into a
single generative model called the "hierarchical Dirichlet process hidden
language model" (HDP-HLM). The HDP-HLM is obtained by extending the
hierarchical Dirichlet process hidden semi-Markov model (HDP-HSMM) proposed by
Johnson et al. An inference procedure for the HDP-HLM is derived using the
blocked Gibbs sampler originally proposed for the HDP-HSMM. This procedure
enables the simultaneous and direct inference of language and acoustic models
from continuous speech signals. Based on the HDP-HLM and its inference
procedure, we developed a novel double articulation analyzer. By assuming
HDP-HLM as a generative model of observed time series data, and by inferring
latent variables of the model, the method can analyze latent double
articulation structure, i.e., hierarchically organized latent words and
phonemes, of the data in an unsupervised manner. The novel unsupervised double
articulation analyzer is called NPB-DAA.
The NPB-DAA can automatically estimate double articulation structure embedded
in speech signals. We also carried out two evaluation experiments using
synthetic data and actual human continuous speech signals representing Japanese
vowel sequences. In the word acquisition and phoneme categorization tasks, the
NPB-DAA outperformed a conventional double articulation analyzer (DAA) and
baseline automatic speech recognition system whose acoustic model was trained
in a supervised manner.
| Tadahiro Taniguchi, Ryo Nakashima, and Shogo Nagasaka | 10.1109/TCDS.2016.2550591 | 1506.06646 | null | null |
Non-Normal Mixtures of Experts | stat.ME cs.LG stat.ML | Mixture of Experts (MoE) is a popular framework for modeling heterogeneity in
data for regression, classification and clustering. For continuous data which
we consider here in the context of regression and cluster analysis, MoE usually
use normal experts, that is, expert components following the Gaussian
distribution. However, for a set of data containing a group or groups of
observations with asymmetric behavior, heavy tails or atypical observations,
the use of normal experts may be unsuitable and can unduly affect the fit of
the MoE model. In this paper, we introduce new non-normal mixture of experts
(NNMoE) which can deal with these issues regarding possibly skewed,
heavy-tailed data and with outliers. The proposed models are the skew-normal
MoE and the robust $t$ MoE and skew $t$ MoE, respectively named SNMoE, TMoE and
STMoE. We develop dedicated expectation-maximization (EM) and expectation
conditional maximization (ECM) algorithms to estimate the parameters of the
proposed models by monotonically maximizing the observed data log-likelihood.
We describe how the presented models can be used in prediction and in
model-based clustering of regression data. Numerical experiments carried out on
simulated data show the effectiveness and the robustness of the proposed models
in terms of modeling non-linear regression functions as well as in model-based
clustering. Then, to show their usefulness for practical applications, the
proposed models are applied to the real-world data of tone perception for
musical data analysis, and the one of temperature anomalies for the analysis of
climate change data.
| Faicel Chamroukhi | null | 1506.06707 | null | null |
A Neural Network Approach to Context-Sensitive Generation of
Conversational Responses | cs.CL cs.AI cs.LG cs.NE | We present a novel response generation system that can be trained end to end
on large quantities of unstructured Twitter conversations. A neural network
architecture is used to address sparsity issues that arise when integrating
contextual information into classic statistical models, allowing the system to
take into account previous dialog utterances. Our dynamic-context generative
models show consistent gains over both context-sensitive and
non-context-sensitive Machine Translation and Information Retrieval baselines.
| Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett,
Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, Bill Dolan | null | 1506.06714 | null | null |
Skip-Thought Vectors | cs.CL cs.LG | We describe an approach for unsupervised learning of a generic, distributed
sentence encoder. Using the continuity of text from books, we train an
encoder-decoder model that tries to reconstruct the surrounding sentences of an
encoded passage. Sentences that share semantic and syntactic properties are
thus mapped to similar vector representations. We next introduce a simple
vocabulary expansion method to encode words that were not seen as part of
training, allowing us to expand our vocabulary to a million words. After
training our model, we extract and evaluate our vectors with linear models on 8
tasks: semantic relatedness, paraphrase detection, image-sentence ranking,
question-type classification and 4 benchmark sentiment and subjectivity
datasets. The end result is an off-the-shelf encoder that can produce highly
generic sentence representations that are robust and perform well in practice.
We will make our encoder publicly available.
| Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio
Torralba, Raquel Urtasun, Sanja Fidler | null | 1506.06726 | null | null |
On Variance Reduction in Stochastic Gradient Descent and its
Asynchronous Variants | cs.LG stat.ML | We study optimization algorithms based on variance reduction for stochastic
gradient descent (SGD). Remarkable recent progress has been made in this
direction through development of algorithms like SAG, SVRG, SAGA. These
algorithms have been shown to outperform SGD, both theoretically and
empirically. However, asynchronous versions of these algorithms---a crucial
requirement for modern large-scale applications---have not been studied. We
bridge this gap by presenting a unifying framework for many variance reduction
techniques. Subsequently, we propose an asynchronous algorithm grounded in our
framework, and prove its fast convergence. An important consequence of our
general approach is that it yields asynchronous versions of variance reduction
algorithms such as SVRG and SAGA as a byproduct. Our method achieves near
linear speedup in sparse settings common to machine learning. We demonstrate
the empirical performance of our method through a concrete realization of
asynchronous SVRG.
| Sashank J. Reddi, Ahmed Hefny, Suvrit Sra, Barnab\'as P\'oczos, Alex
Smola | null | 1506.06840 | null | null |
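For reference, a minimal sequential SVRG loop on a least-squares problem (the paper's contribution is the asynchronous analysis, which this sketch does not attempt); step size and epoch length are ad hoc.

```python
# Minimal sequential SVRG sketch for least-squares regression.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def grad_i(w, i):
    """Gradient of the i-th squared-error term."""
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
eta, epochs, inner = 0.005, 20, n
for _ in range(epochs):
    snapshot = w.copy()
    full_grad = (X.T @ (X @ snapshot - y)) / n        # full gradient at the snapshot
    for _ in range(inner):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient.
        g = grad_i(w, i) - grad_i(snapshot, i) + full_grad
        w -= eta * g
print("error:", np.linalg.norm(w - w_true))
```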
Learning Discriminative Bayesian Networks from High-dimensional
Continuous Neuroimaging Data | cs.CV cs.LG | Due to their causal semantics, Bayesian networks (BNs) have been widely employed
to discover the underlying data relationships in exploratory studies, such as
brain research. Despite its success in modeling the probability distribution of
variables, a BN is naturally a generative model, which is not necessarily
discriminative. This may cause subtle but critical network changes that are of
investigative value across populations to be overlooked. In this paper, we
propose to improve the discriminative power of BN models for continuous
variables from two different perspectives. This brings two general
discriminative learning frameworks for Gaussian Bayesian networks (GBN). In the
first framework, we employ Fisher kernel to bridge the generative models of GBN
and the discriminative classifiers of SVMs, and convert the GBN parameter
learning to Fisher kernel learning via minimizing a generalization error bound
of SVMs. In the second framework, we employ the max-margin criterion and build
it directly upon GBN models to explicitly optimize the classification
performance of the GBNs. The advantages and disadvantages of the two frameworks
are discussed and experimentally compared. Both of them demonstrate strong
power in learning discriminative parameters of GBNs for neuroimaging based
brain network analysis, as well as maintaining reasonable representation
capacity. The contributions of this paper also include a new Directed Acyclic
Graph (DAG) constraint with theoretical guarantee to ensure the graph validity
of GBN.
| Luping Zhou, Lei Wang, Lingqiao Liu, Philip Ogunbona, Dinggang Shen | null | 1506.06868 | null | null |
Graphs in machine learning: an introduction | stat.ML cs.LG cs.SI physics.soc-ph | Graphs are commonly used to characterise interactions between objects of
interest. Because they are based on a straightforward formalism, they are used
in many scientific fields from computer science to historical sciences. In this
paper, we give an introduction to some methods relying on graphs for learning.
This includes both unsupervised and supervised methods. Unsupervised learning
algorithms usually aim at visualising graphs in latent spaces and/or clustering
the nodes. Both focus on extracting knowledge from graph topologies. While most
existing techniques are only applicable to static graphs, where edges do not
evolve through time, recent developments have shown that they could be extended
to deal with evolving networks. In a supervised context, one generally aims at
inferring labels or numerical values attached to nodes using both the graph
and, when they are available, node characteristics. Balancing the two sources
of information can be challenging, especially as they can disagree locally or
globally. In both contexts, supervised and un-supervised, data can be
relational (augmented with one or several global graphs) as described above, or
graph valued. In this latter case, each object of interest is given as a full
graph (possibly completed by other characteristics). In this context, natural
tasks include graph clustering (as in producing clusters of graphs rather than
clusters of nodes in a single graph), graph classification, etc.
1 Real networks
One of the first practical studies on graphs can be dated back to the
original work of Moreno [51] in the 30s. Since then, there has been a growing
interest in graph analysis associated with strong developments in the modelling
and the processing of these data. Graphs are now used in many scientific
fields. In Biology [54, 2, 7], for instance, metabolic networks can describe
pathways of biochemical reactions [41], while in social sciences networks are
used to represent relation ties between actors [66, 56, 36, 34]. Other examples
include powergrids [71] and the web [75]. Recently, networks have also been
considered in other areas such as geography [22] and history [59, 39]. In
machine learning, networks are seen as powerful tools to model problems in
order to extract information from data and for prediction purposes. This is the
object of this paper. For more complete surveys, we refer to [28, 62, 49, 45].
In this section, we introduce notations and highlight properties shared by most
real networks. In Section 2, we then consider methods aiming at extracting
information from a unique network. We will particularly focus on clustering
methods where the goal is to find clusters of vertices. Finally, in Section 3,
techniques that take a series of networks into account, where each network is
| Pierre Latouche (SAMM), Fabrice Rossi (SAMM) | null | 1506.06962 | null | null |
GEFCOM 2014 - Probabilistic Electricity Price Forecasting | stat.ML cs.CE cs.LG stat.AP | Energy price forecasting is a relevant yet hard task in the field of
multi-step time series forecasting. In this paper we compare a well-known and
established method, ARMA with exogenous variables with a relatively new
technique Gradient Boosting Regression. The method was tested on data from
Global Energy Forecasting Competition 2014 with a year long rolling window
forecast. The results from the experiment reveal that a multi-model approach is
significantly better performing in terms of error metrics. Gradient Boosting
can deal with seasonality and auto-correlation out-of-the box and achieve lower
rate of normalized mean absolute error on real-world data.
| Gergo Barta, Gyula Borbely, Gabor Nagy, Sandor Kazi, Tamas Henk | 10.1007/978-3-319-19857-6_7 | 1506.06972 | null | null |
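A hedged sketch of the gradient-boosting side of the comparison: lagged prices and calendar features feed a scikit-learn GradientBoostingRegressor, and normalized MAE is computed on a held-out day. The synthetic series, lag set, and hyperparameters are illustrative, not the competition pipeline.

```python
# Hedged sketch: gradient boosting on lagged prices plus calendar features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
T = 24 * 200                                    # hourly series, ~200 days
hours = np.arange(T)
price = 40 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=2.0, size=T)

lags = [1, 2, 24, 168]                          # previous hour / day / week prices
start = max(lags)
X = np.column_stack([price[start - l:T - l] for l in lags] +
                    [hours[start:] % 24, (hours[start:] // 24) % 7])
y = price[start:]

split = len(y) - 24                             # hold out the last day
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
nmae = np.mean(np.abs(pred - y[split:])) / (y.max() - y.min())
print("normalized MAE:", nmae)
```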
Strategic Classification | cs.LG | Machine learning relies on the assumption that unseen test instances of a
classification problem follow the same distribution as observed training data.
However, this principle can break down when machine learning is used to make
important decisions about the welfare (employment, education, health) of
strategic individuals. Knowing information about the classifier, such
individuals may manipulate their attributes in order to obtain a better
classification outcome. As a result of this behavior---often referred to as
gaming---the performance of the classifier may deteriorate sharply. Indeed,
gaming is a well-known obstacle for using machine learning methods in practice;
in financial policy-making, the problem is widely known as Goodhart's law. In
this paper, we formalize the problem, and pursue algorithms for learning
classifiers that are robust to gaming.
We model classification as a sequential game between a player named "Jury"
and a player named "Contestant." Jury designs a classifier, and Contestant
receives an input to the classifier, which he may change at some cost. Jury's
goal is to achieve high classification accuracy with respect to Contestant's
original input and some underlying target classification function. Contestant's
goal is to achieve a favorable classification outcome while taking into account
the cost of achieving it.
For a natural class of cost functions, we obtain computationally efficient
learning algorithms which are near-optimal. Surprisingly, our algorithms are
efficient even on concept classes that are computationally hard to learn. For
general cost functions, designing an approximately optimal strategy-proof
classifier, for inverse-polynomial approximation, is NP-hard.
| Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, Mary Wootters | null | 1506.06980 | null | null |
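A toy simulation of the gaming phenomenon the abstract formalizes (not the paper's learning algorithm): a contestant who knows a published threshold shifts a single attribute whenever the shift costs less than the value of a positive label, and the jury's accuracy on the original, ungamed quality drops accordingly. All numbers are illustrative.

```python
# Toy gaming simulation: best response to a published linear threshold classifier.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
quality = rng.normal(size=n)                     # true underlying quality
label = (quality >= 0).astype(int)               # target classification
feature = quality + 0.3 * rng.normal(size=n)     # attribute the jury observes

threshold, gain, unit_cost = 0.0, 2.0, 1.0       # value of a positive outcome vs. cost

def best_response(x, thr):
    """Move up to the threshold iff the cost of the shift is below the gain."""
    shift = np.maximum(thr - x, 0.0)
    return np.where(unit_cost * shift < gain, np.maximum(x, thr), x)

gamed = best_response(feature, threshold)
acc_before = np.mean((feature >= threshold).astype(int) == label)
acc_after = np.mean((gamed >= threshold).astype(int) == label)
print("accuracy without gaming:", acc_before, "with gaming:", acc_after)
```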
Multi-domain Dialog State Tracking using Recurrent Neural Networks | cs.CL cs.LG | Dialog state tracking is a key component of many modern dialog systems, most
of which are designed with a single, well-defined domain in mind. This paper
shows that dialog data drawn from different dialog domains can be used to train
a general belief tracking model which can operate across all of these domains,
exhibiting superior performance to each of the domain-specific models. We
propose a training procedure which uses out-of-domain data to initialise belief
tracking models for entirely new domains. This procedure leads to improvements
in belief tracking performance regardless of the amount of in-domain data
available for training the model.
| Nikola Mrk\v{s}i\'c, Diarmuid \'O S\'eaghdha, Blaise Thomson, Milica
Ga\v{s}i\'c, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen and Steve Young | null | 1506.07190 | null | null |
Elicitation Complexity of Statistical Properties | cs.LG math.OC math.ST q-fin.MF stat.TH | A property, or statistical functional, is said to be elicitable if it
minimizes expected loss for some loss function. The study of which properties
are elicitable sheds light on the capabilities and limitations of point
estimation and empirical risk minimization. While recent work asks which
properties are elicitable, we instead advocate for a more nuanced question: how
many dimensions are required to indirectly elicit a given property? This number
is called the elicitation complexity of the property. We lay the foundation for
a general theory of elicitation complexity, including several basic results
about how elicitation complexity behaves, and the complexity of standard
properties of interest. Building on this foundation, our main result gives
tight complexity bounds for the broad class of Bayes risks. We apply these
results to several properties of interest, including variance, entropy, norms,
and several classes of financial risk measures. We conclude with discussion and
open directions.
| Rafael Frongillo, Ian A. Kash | null | 1506.07212 | null | null |
Communication Lower Bounds for Statistical Estimation Problems via a
Distributed Data Processing Inequality | cs.LG cs.CC cs.IT math.IT stat.ML | We study the tradeoff between the statistical error and communication cost of
distributed statistical estimation problems in high dimensions. In the
distributed sparse Gaussian mean estimation problem, each of the $m$ machines
receives $n$ data points from a $d$-dimensional Gaussian distribution with
unknown mean $\theta$ which is promised to be $k$-sparse. The machines
communicate by message passing and aim to estimate the mean $\theta$. We
provide a tight (up to logarithmic factors) tradeoff between the estimation
error and the number of bits communicated between the machines. This directly
leads to a lower bound for the distributed \textit{sparse linear regression}
problem: to achieve the statistical minimax error, the total communication is
at least $\Omega(\min\{n,d\}m)$, where $n$ is the number of observations that
each machine receives and $d$ is the ambient dimension. These lower bounds
improve upon [Sha14, SD'14] by allowing a multi-round, iterative communication
model. We also give the first optimal simultaneous protocol in the dense case
for mean estimation.
As our main technique, we prove a \textit{distributed data processing
inequality}, as a generalization of usual data processing inequalities, which
might be of independent interest and useful for other problems.
| Mark Braverman, Ankit Garg, Tengyu Ma, Huy L. Nguyen, and David P.
Woodruff | null | 1506.07216 | null | null |
Benchmark of structured machine learning methods for microbial
identification from mass-spectrometry data | stat.ML cs.LG q-bio.QM | Microbial identification is a central issue in microbiology, in particular in
the fields of infectious diseases diagnosis and industrial quality control. The
concept of species is tightly linked to the concept of biological and clinical
classification where the proximity between species is generally measured in
terms of evolutionary distances and/or clinical phenotypes. Surprisingly, the
information provided by this well-known hierarchical structure is rarely used
by machine learning-based automatic microbial identification systems.
Structured machine learning methods were recently proposed for taking into
account the structure embedded in a hierarchy and using it as additional a
priori information, and could therefore allow to improve microbial
identification systems. We test and compare several state-of-the-art machine
learning methods for microbial identification on a new Matrix-Assisted Laser
Desorption/Ionization Time-of-Flight mass spectrometry (MALDI-TOF MS) dataset.
We include in the benchmark standard and structured methods, that leverage the
knowledge of the underlying hierarchical structure in the learning process. Our
results show that although some methods perform better than others, structured
methods do not consistently perform better than their "flat" counterparts. We
postulate that this is partly due to the fact that standard methods already
reach a high level of accuracy in this context, and that they mainly confuse
species close to each other in the tree, a case where using the known hierarchy
is not helpful.
| K\'evin Vervier (CBIO), Pierre Mah\'e, Jean-Baptiste Veyrieras,
Jean-Philippe Vert (CBIO) | null | 1506.07251 | null | null |
Unconfused ultraconservative multiclass algorithms | cs.LG | We tackle the problem of learning linear classifiers from noisy datasets in a
multiclass setting. The two-class version of this problem was studied a few
years ago where the proposed approaches to combat the noise revolve around a
Perceptron learning scheme fed with peculiar examples computed through a
weighted average of points from the noisy training set. We propose to build
upon these approaches and we introduce a new algorithm called UMA (for
Unconfused Multiclass additive Algorithm) which may be seen as a generalization
to the multiclass setting of the previous approaches. In order to characterize
the noise we use the confusion matrix as a multiclass extension of the
classification noise studied in the aforementioned literature. Theoretically
well-founded, UMA furthermore displays very good empirical noise robustness, as
evidenced by numerical simulations conducted on both synthetic and real data.
| Ugo Louche, Liva Ralaivola | 10.1007/s10994-015-5490-3 | 1506.07254 | null | null |
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing | cs.CL cs.LG cs.NE | Most tasks in natural language processing can be cast into question answering
(QA) problems over language input. We introduce the dynamic memory network
(DMN), a neural network architecture which processes input sequences and
questions, forms episodic memories, and generates relevant answers. Questions
trigger an iterative attention process which allows the model to condition its
attention on the inputs and the result of previous iterations. These results
are then reasoned over in a hierarchical recurrent sequence model to generate
answers. The DMN can be trained end-to-end and obtains state-of-the-art results
on several types of tasks and datasets: question answering (Facebook's bAbI
dataset), text classification for sentiment analysis (Stanford Sentiment
Treebank) and sequence modeling for part-of-speech tagging (WSJ-PTB). The
training for these different tasks relies exclusively on trained word vector
representations and input-question-answer triplets.
| Ankit Kumar and Ozan Irsoy and Peter Ondruska and Mohit Iyyer and
James Bradbury and Ishaan Gulrajani and Victor Zhong and Romain Paulus and
Richard Socher | null | 1506.07285 | null | null |
Flexible Multi-layer Sparse Approximations of Matrices and Applications | cs.LG | The computational cost of many signal processing and machine learning
techniques is often dominated by the cost of applying certain linear operators
to high-dimensional vectors. This paper introduces an algorithm aimed at
reducing the complexity of applying linear operators in high dimension by
approximately factorizing the corresponding matrix into few sparse factors. The
approach relies on recent advances in non-convex optimization. It is first
explained and analyzed in detail and then demonstrated experimentally on
various problems including dictionary learning for image denoising, and the
approximation of large matrices arising in inverse problems.
| Luc Le Magoarou and R\'emi Gribonval | 10.1109/JSTSP.2016.2543461 | 1506.07300 | null | null |
Embed to Control: A Locally Linear Latent Dynamics Model for Control
from Raw Images | cs.LG cs.CV stat.ML | We introduce Embed to Control (E2C), a method for model learning and control
of non-linear dynamical systems from raw pixel images. E2C consists of a deep
generative model, belonging to the family of variational autoencoders, that
learns to generate image trajectories from a latent space in which the dynamics
is constrained to be locally linear. Our model is derived directly from an
optimal control formulation in latent space, supports long-term prediction of
image sequences and exhibits strong performance on a variety of complex control
problems.
| Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, Martin
Riedmiller | null | 1506.07365 | null | null |
Parallel Multi-Dimensional LSTM, With Application to Fast Biomedical
Volumetric Image Segmentation | cs.CV cs.LG | Convolutional Neural Networks (CNNs) can be shifted across 2D images or 3D
videos to segment them. They have a fixed input size and typically perceive
only small local contexts of the pixels to be classified as foreground or
background. In contrast, Multi-Dimensional Recurrent NNs (MD-RNNs) can perceive
the entire spatio-temporal context of each pixel in a few sweeps through all
pixels, especially when the RNN is a Long Short-Term Memory (LSTM). Despite
these theoretical advantages, however, unlike CNNs, previous MD-LSTM variants
were hard to parallelize on GPUs. Here we re-arrange the traditional cuboid
order of computations in MD-LSTM in pyramidal fashion. The resulting
PyraMiD-LSTM is easy to parallelize, especially for 3D data such as stacks of
brain slice images. PyraMiD-LSTM achieved best known pixel-wise brain image
segmentation results on MRBrainS13 (and competitive results on EM-ISBI12).
| Marijn F. Stollenga, Wonmin Byeon, Marcus Liwicki, Juergen Schmidhuber | null | 1506.07452 | null | null |
Efficient Learning for Undirected Topic Models | cs.LG cs.CL cs.IR stat.ML | Replicated Softmax model, a well-known undirected topic model, is powerful in
extracting semantic representations of documents. Traditional learning
strategies such as Contrastive Divergence are very inefficient. This paper
provides a novel estimator to speed up the learning based on Noise Contrastive
Estimate, extended for documents of variant lengths and weighted inputs.
Experiments on two benchmarks show that the new estimator achieves great
learning efficiency and high accuracy on document retrieval and classification.
| Jiatao Gu and Victor O.K. Li | null | 1506.07477 | null | null |
Attention-Based Models for Speech Recognition | cs.CL cs.LG cs.NE stat.ML | Recurrent sequence generators conditioned on input data through an attention
mechanism have recently shown very good performance on a range of tasks,
including machine translation, handwriting synthesis and image caption
generation. We extend the attention mechanism with features needed for speech
recognition. We show that while an adaptation of the model used for machine
translation reaches a competitive 18.7% phoneme error rate (PER) on the
TIMIT phoneme recognition task, it can only be applied to utterances which are
roughly as long as the ones it was trained on. We offer a qualitative
explanation of this failure and propose a novel and generic method of adding
location-awareness to the attention mechanism to alleviate this issue. The new
method yields a model that is robust to long inputs and achieves 18% PER in
single utterances and 20% in 10-times longer (repeated) utterances. Finally, we
propose a change to the attention mechanism that prevents it from
concentrating too much on single frames, which further reduces PER to 17.6%
level.
| Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho,
Yoshua Bengio | null | 1506.07503 | null | null |
Objective Variables for Probabilistic Revenue Maximization in
Second-Price Auctions with Reserve | stat.ML cs.AI cs.GT cs.LG stat.AP | Many online companies sell advertisement space in second-price auctions with
reserve. In this paper, we develop a probabilistic method to learn a profitable
strategy to set the reserve price. We use historical auction data with features
to fit a predictor of the best reserve price. This problem is delicate - the
structure of the auction is such that a reserve price set too high is much
worse than a reserve price set too low. To address this we develop objective
variables, a new framework for combining probabilistic modeling with optimal
decision-making. Objective variables are "hallucinated observations" that
transform the revenue maximization task into a regularized maximum likelihood
estimation problem, which we solve with an EM algorithm. This framework enables
a variety of prediction mechanisms to set the reserve price. As examples, we
study objective variable methods with regression, kernelized regression, and
neural networks on simulated and real data. Our methods outperform previous
approaches both in terms of scalability and profit.
| Maja R. Rudolph, Joseph G. Ellis, and David M. Blei | null | 1506.07504 | null | null |
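The revenue function being optimized is worth spelling out, since its asymmetry (a too-high reserve loses the whole sale) is exactly what the abstract calls delicate; the small worked example below uses illustrative numbers.

```python
# Worked example of seller revenue in a second-price auction with reserve price.
def revenue(reserve, bids):
    """Seller revenue in a second-price auction with a reserve price."""
    bids = sorted(bids, reverse=True)
    if not bids or bids[0] < reserve:
        return 0.0                      # highest bid below reserve: item unsold
    if len(bids) == 1 or bids[1] < reserve:
        return reserve                  # winner pays the reserve
    return bids[1]                      # winner pays the second-highest bid

print(revenue(2.0, [5.0, 3.0]))   # 3.0 -> reserve not binding
print(revenue(4.0, [5.0, 3.0]))   # 4.0 -> reserve lifts the price
print(revenue(6.0, [5.0, 3.0]))   # 0.0 -> reserve too high, no sale
```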
Un-regularizing: approximate proximal point and faster stochastic
algorithms for empirical risk minimization | stat.ML cs.DS cs.LG | We develop a family of accelerated stochastic algorithms that minimize sums
of convex functions. Our algorithms improve upon the fastest running time for
empirical risk minimization (ERM), and in particular linear least-squares
regression, across a wide range of problem settings. To achieve this, we
establish a framework based on the classical proximal point algorithm. Namely,
we provide several algorithms that reduce the minimization of a strongly convex
function to approximate minimizations of regularizations of the function. Using
these results, we accelerate recent fast stochastic algorithms in a black-box
fashion. Empirically, we demonstrate that the resulting algorithms exhibit
notions of stability that are advantageous in practice. Both in theory and in
practice, the provided algorithms reap the computational benefits of adding a
large strongly convex regularization term, without incurring a corresponding
bias to the original problem.
| Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford | null | 1506.07512 | null | null |
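A hedged sketch of the outer proximal-point loop the abstract alludes to: each round approximately minimizes the regularized surrogate f(w) + (lam/2)||w - w_k||^2 with a cheap stochastic inner solver, then recenters at the new iterate. The inner solver, lam, and step sizes are ad hoc choices, not the paper's accelerated scheme.

```python
# Hedged sketch of an approximate proximal point outer loop around a stochastic
# inner solver, for a least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 15
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

def sgd_on_surrogate(w0, center, lam, steps=2000, eta=0.005):
    """Approximately minimize (1/2n)||Xw - y||^2 + (lam/2)||w - center||^2 by SGD."""
    w = w0.copy()
    for _ in range(steps):
        i = rng.integers(n)
        g = (X[i] @ w - y[i]) * X[i] + lam * (w - center)
        w -= eta * g
    return w

w, lam = np.zeros(d), 1.0
for _ in range(10):                     # outer proximal-point iterations
    w = sgd_on_surrogate(w, center=w, lam=lam)
print("error:", np.linalg.norm(w - w_true))
```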
Global Optimality in Tensor Factorization, Deep Learning, and Beyond | cs.NA cs.LG stat.ML | Techniques involving factorization are found in a wide range of applications
and have enjoyed significant empirical success in many fields. However, common
to a vast majority of these problems is the significant disadvantage that the
associated optimization problems are typically non-convex due to a multilinear
form or other convexity-destroying transformation. Here we build on ideas from
convex relaxations of matrix factorizations and present a very general
framework which allows for the analysis of a wide range of non-convex
factorization problems - including matrix factorization, tensor factorization,
and deep neural network training formulations. We derive sufficient conditions
to guarantee that a local minimum of the non-convex optimization problem is a
global minimum and show that if the size of the factorized variables is large
enough then from any initialization it is possible to find a global minimizer
using a purely local descent algorithm. Our framework also provides a partial
theoretical justification for the increasingly common use of Rectified Linear
Units (ReLUs) in deep neural networks and offers guidance on deep network
architectures and regularization strategies to facilitate efficient
optimization.
| Benjamin D. Haeffele and Rene Vidal | null | 1506.07540 | null | null |
Splash: User-friendly Programming Interface for Parallelizing Stochastic
Algorithms | cs.LG | Stochastic algorithms are efficient approaches to solving machine learning
and optimization problems. In this paper, we propose a general framework called
Splash for parallelizing stochastic algorithms on multi-node distributed
systems. Splash consists of a programming interface and an execution engine.
Using the programming interface, the user develops sequential stochastic
algorithms without being concerned with any details of distributed computing. The
algorithm is then automatically parallelized by a communication-efficient
execution engine. We provide theoretical justifications on the optimal rate of
convergence for parallelizing stochastic gradient descent. Splash is built on
top of Apache Spark. The real-data experiments on logistic regression,
collaborative filtering and topic modeling verify that Splash yields
order-of-magnitude speedup over single-thread stochastic algorithms and over
state-of-the-art implementations on Spark.
| Yuchen Zhang and Michael I. Jordan | null | 1506.07552 | null | null |
CRAFT: ClusteR-specific Assorted Feature selecTion | cs.LG stat.ML | We present a framework for clustering with cluster-specific feature
selection. The framework, CRAFT, is derived from asymptotic log posterior
formulations of nonparametric MAP-based clustering models. CRAFT handles
assorted data, i.e., both numeric and categorical data, and the underlying
objective functions are intuitively appealing. The resulting algorithm is
simple to implement and scales nicely, requires minimal parameter tuning,
obviates the need to specify the number of clusters a priori, and compares
favorably with other methods on real datasets.
| Vikas K. Garg, Cynthia Rudin, and Tommi Jaakkola | null | 1506.07609 | null | null |
Generalized Majorization-Minimization | cs.CV cs.IT cs.LG math.IT stat.ML | Non-convex optimization is ubiquitous in machine learning.
Majorization-Minimization (MM) is a powerful iterative procedure for optimizing
non-convex functions that works by optimizing a sequence of bounds on the
function. In MM, the bound at each iteration is required to \emph{touch} the
objective function at the optimizer of the previous bound. We show that this
touching constraint is unnecessary and overly restrictive. We generalize MM by
relaxing this constraint, and propose a new optimization framework, named
Generalized Majorization-Minimization (G-MM), that is more flexible. For
instance, G-MM can incorporate application-specific biases into the
optimization procedure without changing the objective function. We derive G-MM
algorithms for several latent variable models and show empirically that they
consistently outperform their MM counterparts in optimizing non-convex
objectives. In particular, G-MM algorithms appear to be less sensitive to
initialization.
| Sobhan Naderi Parizi, Kun He, Reza Aghajani, Stan Sclaroff, Pedro
Felzenszwalb | null | 1506.07613 | null | null |
Completing Low-Rank Matrices with Corrupted Samples from Few
Coefficients in General Basis | cs.IT cs.LG cs.NA math.IT math.NA stat.ML | Subspace recovery from corrupted and missing data is crucial for various
applications in signal processing and information theory. To complete missing
values and detect column corruptions, existing robust Matrix Completion (MC)
methods mostly concentrate on recovering a low-rank matrix from few corrupted
coefficients w.r.t. standard basis, which, however, does not apply to more
general basis, e.g., Fourier basis. In this paper, we prove that the range
space of an $m\times n$ matrix with rank $r$ can be exactly recovered from few
coefficients w.r.t. general basis, though $r$ and the number of corrupted
samples are both as high as $O(\min\{m,n\}/\log^3 (m+n))$. Our model covers
previous ones as special cases, and robust MC can recover the intrinsic matrix
with a higher rank. Moreover, we suggest a universal choice of the
regularization parameter, which is $\lambda=1/\sqrt{\log n}$. By our
$\ell_{2,1}$ filtering algorithm, which has theoretical guarantees, we can
further reduce the computational cost of our model. As an application, we also
find that the solutions to extended robust Low-Rank Representation and to our
extended robust MC are mutually expressible, so both our theory and algorithm
can be applied to the subspace clustering problem with missing values under
certain conditions. Experiments verify our theories.
| Hongyang Zhang, Zhouchen Lin, Chao Zhang | 10.1109/TIT.2016.2573311 | 1506.07615 | null | null |
Conservativeness of untied auto-encoders | cs.LG | We discuss necessary and sufficient conditions for an auto-encoder to define
a conservative vector field, in which case it is associated with an energy
function akin to the unnormalized log-probability of the data. We show that the
conditions for conservativeness are more general than for encoder and decoder
weights to be the same ("tied weights"), and that they also depend on the form
of the hidden unit activation function, but that contractive training criteria,
such as denoising, will enforce these conditions locally. Based on these
observations, we show how we can use auto-encoders to extract the conservative
component of a vector field.
| Daniel Jiwoong Im, Mohamed Ishmael Diwan Belghazi, Roland Memisevic | null | 1506.07643 | null | null |
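The conservativeness condition discussed in this abstract (1506.07643) can be checked numerically: the reconstruction r(x) of an auto-encoder defines a conservative vector field only if its Jacobian is symmetric. Below is a minimal sketch, not taken from the paper, that tests this with finite differences for a small untied sigmoid auto-encoder with hypothetical random weights, and then repeats the test with tied weights, where the Jacobian W^T diag(s') W is symmetric by construction.
```python
import numpy as np

# Hypothetical untied autoencoder: r(x) = W_d * sigmoid(W_e x + b_e) + b_d
rng = np.random.default_rng(0)
d, h = 5, 8
W_e, b_e = rng.normal(size=(h, d)), rng.normal(size=h)
W_d, b_d = rng.normal(size=(d, h)), rng.normal(size=d)

def reconstruct(x):
    hidden = 1.0 / (1.0 + np.exp(-(W_e @ x + b_e)))   # sigmoid hidden units
    return W_d @ hidden + b_d

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of f at x."""
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n); e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x0 = rng.normal(size=d)
J = jacobian(reconstruct, x0)
# The associated field is conservative at x0 only if the Jacobian of r is symmetric.
print(f"untied weights: max |J - J^T| = {np.max(np.abs(J - J.T)):.4f}")

# With tied weights (W_d = W_e^T) the Jacobian becomes W_e^T diag(s') W_e, which is symmetric.
W_d = W_e.T
J_tied = jacobian(reconstruct, x0)
print(f"tied weights:   max |J - J^T| = {np.max(np.abs(J_tied - J_tied.T)):.2e}")
```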
Semantic Relation Classification via Convolutional Neural Networks with
Simple Negative Sampling | cs.CL cs.LG | Syntactic features play an essential role in identifying relationship in a
sentence. Previous neural network models often suffer from irrelevant
information introduced when subjects and objects are far apart. In
this paper, we propose to learn more robust relation representations from the
shortest dependency path through a convolutional neural network. We further
propose a straightforward negative sampling strategy to improve the assignment
of subjects and objects. Experimental results show that our method outperforms
the state-of-the-art methods on the SemEval-2010 Task 8 dataset.
| Kun Xu, Yansong Feng, Songfang Huang, Dongyan Zhao | null | 1506.07650 | null | null |
Manifold Optimization for Gaussian Mixture Models | stat.ML cs.LG math.OC | We take a new look at parameter estimation for Gaussian Mixture Models
(GMMs). In particular, we propose using \emph{Riemannian manifold optimization}
as a powerful counterpart to Expectation Maximization (EM). An out-of-the-box
invocation of manifold optimization, however, fails spectacularly: it converges
to the same solution but vastly slower. Driven by intuition from manifold
convexity, we then propose a reparametrization that has remarkable empirical
consequences. It makes manifold optimization not only match EM---a highly
encouraging result in itself given the poor record nonlinear programming
methods have had against EM so far---but also outperform EM in many practical
settings, while displaying much less variability in running times. We further
highlight the strengths of manifold optimization by developing a somewhat tuned
manifold LBFGS method that proves even more competitive and reliable than
existing manifold optimization tools. We hope that our results encourage a
wider consideration of manifold optimization for parameter estimation problems.
| Reshad Hosseini and Suvrit Sra | null | 1506.07677 | null | null |
AttentionNet: Aggregating Weak Directions for Accurate Object Detection | cs.CV cs.LG | We present a novel detection method using a deep convolutional neural network
(CNN), named AttentionNet. We cast an object detection problem as an iterative
classification problem, the form most suitable for a CNN. AttentionNet
provides quantized weak directions pointing to a target object, and the ensemble of
iterative predictions from AttentionNet converges to an accurate object
bounding box. Since AttentionNet is a unified network for object detection, it
detects objects without any separate models, from object proposal to
post bounding-box regression. We evaluate AttentionNet on a human detection
task and achieve the state-of-the-art performance of 65% (AP) on PASCAL VOC
2007/2012 with an 8-layered architecture only.
| Donggeun Yoo, Sunggyun Park, Joon-Young Lee, Anthony S. Paek, In So
Kweon | null | 1506.07704 | null | null |
Fairness-Aware Learning with Restriction of Universal Dependency using
f-Divergences | stat.ML cs.LG | Fairness-aware learning is a novel framework for classification tasks. Like
regular empirical risk minimization (ERM), it aims to learn a classifier with a
low error rate, and at the same time, for the predictions of the classifier to
be independent of sensitive features, such as gender, religion, race, and
ethnicity. Existing methods can achieve low dependencies on given samples, but
this is not guaranteed on unseen samples. The existing fairness-aware learning
algorithms employ different dependency measures, and each algorithm is
specifically designed for a particular one. Such diversity makes it difficult
to theoretically analyze and compare them. In this paper, we propose a general
framework for fairness-aware learning that uses f-divergences and that covers
most of the dependency measures employed in the existing methods. We introduce
a way to estimate the f-divergences that allows us to give a unified analysis
for the upper bound of the estimation error; this bound is tighter than that of
the existing convergence rate analysis of the divergence estimation. With our
divergence estimate, we propose a fairness-aware learning algorithm, and
perform a theoretical analysis of its generalization error. Our analysis
reveals that, under mild assumptions and even with enforcement of fairness, the
generalization error of our method is $O(\sqrt{1/n})$, which is the same as
that of the regular ERM. In addition, and more importantly, we show that, for
any f-divergence, the upper bound of the estimation error of the divergence is
$O(\sqrt{1/n})$. This indicates that our fairness-aware learning algorithm
guarantees low dependencies on unseen samples for any dependency measure
represented by an f-divergence.
| Kazuto Fukuchi and Jun Sakuma | null | 1506.07721 | null | null |
Diffusion Nets | stat.ML cs.LG math.CA | Non-linear manifold learning enables high-dimensional data analysis, but
requires out-of-sample-extension methods to process new data points. In this
paper, we propose a manifold learning algorithm based on deep learning to
create an encoder, which maps a high-dimensional dataset to its
low-dimensional embedding, and a decoder, which takes the embedded data back to
the high-dimensional space. Stacking the encoder and decoder together
constructs an autoencoder, which we term a diffusion net, that performs
out-of-sample-extension as well as outlier detection. We introduce new neural
net constraints for the encoder, which preserves the local geometry of the
points, and we prove rates of convergence for the encoder. Also, our approach
is efficient in both computational complexity and memory requirements, as
opposed to previous methods that require storage of all training points in both
the high-dimensional and the low-dimensional spaces to calculate the
out-of-sample-extension and the pre-image.
| Gal Mishne, Uri Shaham, Alexander Cloninger and Israel Cohen | null | 1506.07840 | null | null |
Decentralized Q-Learning for Stochastic Teams and Games | math.OC cs.GT cs.LG | There are only a few learning algorithms applicable to stochastic dynamic
teams and games which generalize Markov decision processes to decentralized
stochastic control problems involving possibly self-interested decision makers.
Learning in games is generally difficult because of the non-stationary
environment in which each decision maker aims to learn its optimal decisions
with minimal information in the presence of the other decision makers who are
also learning. In stochastic dynamic games, learning is more challenging
because, while learning, the decision makers alter the state of the system and
hence the future cost. In this paper, we present decentralized Q-learning
algorithms for stochastic games, and study their convergence for the weakly
acyclic case which includes team problems as an important special case. The
algorithm is decentralized in that each decision maker has access to only its
local information, the state information, and the local cost realizations;
furthermore, it is completely oblivious to the presence of other decision
makers. We show that these algorithms converge to equilibrium policies almost
surely in large classes of stochastic games.
| G\"urdal Arslan and Serdar Y\"uksel | null | 1506.07924 | null | null |
Collaboratively Learning Preferences from Ordinal Data | cs.LG cs.IT math.IT stat.ML | In applications such as recommendation systems and revenue management, it is
important to predict preferences on items that have not been seen by a user or
predict outcomes of comparisons among those that have never been compared. A
popular discrete choice model, the multinomial logit model, captures the structure
of the hidden preferences with a low-rank matrix. In order to predict the
preferences, we want to learn the underlying model from noisy observations of
the low-rank matrix, collected as revealed preferences in various forms of
ordinal data. A natural approach to learn such a model is to solve a convex
relaxation of nuclear norm minimization. We present the convex relaxation
approach in two contexts of interest: collaborative ranking and bundled choice
modeling. In both cases, we show that the convex relaxation is minimax optimal.
We prove an upper bound on the resulting error with finite samples, and provide
a matching information-theoretic lower bound.
| Sewoong Oh, Kiran K. Thekumparampil, and Jiaming Xu | null | 1506.07947 | null | null |
Skopus: Mining top-k sequential patterns under leverage | cs.AI cs.LG stat.ML | This paper presents a framework for exact discovery of the top-k sequential
patterns under Leverage. It combines (1) a novel definition of the expected
support for a sequential pattern - a concept on which most interestingness
measures directly rely - with (2) SkOPUS: a new branch-and-bound algorithm for
the exact discovery of top-k sequential patterns under a given measure of
interest. Our interestingness measure employs the partition approach. A pattern
is interesting to the extent that it is more frequent than can be explained by
assuming independence between any of the pairs of patterns from which it can be
composed. The larger the support compared to the expectation under
independence, the more interesting is the pattern. We build on these two
elements to exactly extract the k sequential patterns with highest leverage,
consistent with our definition of expected support. We conduct experiments on
both synthetic data with known patterns and real-world datasets; both
experiments confirm the consistency and relevance of our approach with regard
to the state of the art. This article was published in Data Mining and
Knowledge Discovery and is accessible at
http://dx.doi.org/10.1007/s10618-016-0467-9.
| Francois Petitjean, Tao Li, Nikolaj Tatti, Geoffrey I. Webb | 10.1007/s10618-016-0467-9 | 1506.08009 | null | null |
Modelling of directional data using Kent distributions | cs.LG stat.ML | The modelling of data on a spherical surface requires the consideration of
directional probability distributions. To model asymmetrically distributed data
on a three-dimensional sphere, Kent distributions are often used. The moment
estimates of the parameters are typically used in modelling tasks involving
Kent distributions. However, these lack a rigorous statistical treatment. The
focus of the paper is to introduce a Bayesian estimation of the parameters of
the Kent distribution which has not been carried out in the literature, partly
because of its complex mathematical form. We employ the Bayesian
information-theoretic paradigm of Minimum Message Length (MML) to bridge this
gap and derive reliable estimators. The inferred parameters are subsequently
used in mixture modelling of Kent distributions. The problem of inferring the
suitable number of mixture components is also addressed using the MML
criterion. We demonstrate the superior performance of the derived MML-based
parameter estimates against the traditional estimators. We apply the MML
principle to infer mixtures of Kent distributions to model empirical data
corresponding to protein conformations. We demonstrate the effectiveness of
Kent models to act as improved descriptors of protein structural data as
compared to commonly used von Mises-Fisher distributions.
| Parthan Kasarapu | null | 1506.08105 | null | null |
An Empirical Study of Stochastic Variational Algorithms for the Beta
Bernoulli Process | stat.ML cs.LG stat.AP stat.CO stat.ME | Stochastic variational inference (SVI) is emerging as the most promising
candidate for scaling inference in Bayesian probabilistic models to large
datasets. However, the performance of these methods has been assessed primarily
in the context of Bayesian topic models, particularly latent Dirichlet
allocation (LDA). Deriving several new algorithms, and using synthetic, image
and genomic datasets, we investigate whether the understanding gleaned from LDA
applies in the setting of sparse latent factor models, specifically beta
process factor analysis (BPFA). We demonstrate that the big picture is
consistent: using Gibbs sampling within SVI to maintain certain posterior
dependencies is extremely effective. However, we find that different posterior
dependencies are important in BPFA relative to LDA. Particularly,
approximations able to model intra-local variable dependence perform best.
| Amar Shah and David A. Knowles and Zoubin Ghahramani | null | 1506.08180 | null | null |
A geometric alternative to Nesterov's accelerated gradient descent | math.OC cs.DS cs.LG cs.NA | We propose a new method for unconstrained optimization of a smooth and
strongly convex function, which attains the optimal rate of convergence of
Nesterov's accelerated gradient descent. The new algorithm has a simple
geometric interpretation, loosely inspired by the ellipsoid method. We provide
some numerical evidence that the new method can be superior to Nesterov's
accelerated gradient descent.
| S\'ebastien Bubeck, Yin Tat Lee, Mohit Singh | null | 1506.08187 | null | null |
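The abstract above (1506.08187) does not describe the new method in enough detail to reproduce it, but the rate it matches is that of Nesterov's accelerated gradient descent. The following sketch implements the standard constant-momentum form of that baseline on a strongly convex quadratic; all problem data are hypothetical.
```python
import numpy as np

# Nesterov's accelerated gradient descent on a strongly convex quadratic
# f(x) = 0.5 x^T A x - b^T x (the baseline rate the paper's method attains).
rng = np.random.default_rng(0)
d = 50
M = rng.normal(size=(d, d))
A = M @ M.T + np.eye(d)           # ensures strong convexity
b = rng.normal(size=d)

eigs = np.linalg.eigvalsh(A)
L, mu = eigs.max(), eigs.min()    # smoothness and strong-convexity constants
kappa = L / mu
q = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

grad = lambda x: A @ x - b
x_star = np.linalg.solve(A, b)

x = y = np.zeros(d)
for t in range(200):
    x_new = y - grad(y) / L       # gradient step from the extrapolated point
    y = x_new + q * (x_new - x)   # momentum / extrapolation step
    x = x_new

print("distance to optimum:", np.linalg.norm(x - x_star))
```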
Correlation Clustering and Biclustering with Locally Bounded Errors | cs.DS cs.LG | We consider a generalized version of the correlation clustering problem,
defined as follows. Given a complete graph $G$ whose edges are labeled with $+$
or $-$, we wish to partition the graph into clusters while trying to avoid
errors: $+$ edges between clusters or $-$ edges within clusters. Classically,
one seeks to minimize the total number of such errors. We introduce a new
framework that allows the objective to be a more general function of the number
of errors at each vertex (for example, we may wish to minimize the number of
errors at the worst vertex) and provide a rounding algorithm which converts
"fractional clusterings" into discrete clusterings while causing only a
constant-factor blowup in the number of errors at each vertex. This rounding
algorithm yields constant-factor approximation algorithms for the discrete
problem under a wide variety of objective functions.
| Gregory J. Puleo, Olgica Milenkovic | null | 1506.08189 | null | null |
Convolutional networks and learning invariant to homogeneous
multiplicative scalings | cs.LG cs.NE | The conventional classification schemes -- notably multinomial logistic
regression -- used in conjunction with convolutional networks (convnets) are
classical in statistics, designed without consideration for the usual coupling
with convnets, stochastic gradient descent, and backpropagation. In the
specific application to supervised learning for convnets, a simple
scale-invariant classification stage turns out to be more robust than
multinomial logistic regression, appears to result in slightly lower errors on
several standard test sets, has similar computational costs, and features
precise control over the actual rate of learning. "Scale-invariant" means that
multiplying the input values by any nonzero scalar leaves the output unchanged.
| Mark Tygert, Arthur Szlam, Soumith Chintala, Marc'Aurelio Ranzato,
Yuandong Tian, and Wojciech Zaremba | null | 1506.08230 | null | null |
Occam's Gates | cs.LG | We present a complementary objective for training recurrent neural networks
(RNN) with gating units that helps with regularization and interpretability of
the trained model. Attention-based RNN models have shown success in many
difficult sequence to sequence classification problems with long and short term
dependencies, however these models are prone to overfitting. In this paper, we
describe how to regularize these models through an L1 penalty on the activation
of the gating units, and show that this technique reduces overfitting on a
variety of tasks while also providing a human-interpretable visualization
of the inputs used by the network. These tasks include sentiment analysis,
paraphrase recognition, and question answering.
| Jonathan Raiman and Szymon Sidor | null | 1506.08251 | null | null |
A Novel Approach for Stable Selection of Informative Redundant Features
from High Dimensional fMRI Data | cs.CV cs.LG stat.ML | Feature selection is among the most important components because it not only
helps enhance classification accuracy, but also, and even more importantly,
enables potential biomarker discovery. However, traditional multivariate
methods are likely to obtain unstable and unreliable results in the case of an
extremely high-dimensional feature space and very limited training samples,
where the features are often correlated or redundant. In order to improve the
stability, generalization and interpretability of the discovered potential
biomarkers and enhance the robustness of the resultant classifier, the redundant
but informative features also need to be selected. We therefore introduce a
novel feature selection method which combines a recent implementation of the
stability selection approach and the elastic net approach. The advantages of our
approach, in terms of better control of false discoveries and missed discoveries,
and the resulting better interpretability of the obtained potential
biomarkers, are verified in both synthetic and real fMRI experiments. In addition,
we are among the first to demonstrate the robustness of feature selection
benefiting from the incorporation of stability selection, and also among the
first to demonstrate the possible lack of robustness of the classical univariate
two-sample t-test method. Specifically, we show the robustness of our feature
selection results in the presence of noisy (wrong) training labels, as well as the
robustness of the resulting classifier, based on our feature selection results, in
the presence of data variation, demonstrated on multi-center
attention-deficit/hyperactivity disorder (ADHD) fMRI data.
| Yilun Wang, Zhiqiang Li, Yifeng Wang, Xiaona Wang, Junjie Zheng,
Xujuan Duan, Huafu Chen | null | 1506.08301 | null | null |
Improved Deep Speaker Feature Learning for Text-Dependent Speaker
Recognition | cs.CL cs.LG cs.NE | A deep learning approach has been proposed recently to derive speaker
identities (d-vectors) with a deep neural network (DNN). This approach has been
applied to text-dependent speaker recognition tasks and shows reasonable
performance gains when combined with the conventional i-vector approach.
Although promising, the existing d-vector implementation still cannot compete
with the i-vector baseline. This paper presents two improvements for the deep
learning approach: a phone-dependent DNN structure to normalize phone variation,
and a new scoring approach based on dynamic time warping (DTW). Experiments on
a text-dependent speaker recognition task demonstrated that the proposed
methods can provide considerable performance improvement over the existing
d-vector implementation.
| Lantian Li and Yiye Lin and Zhiyong Zhang and Dong Wang | null | 1506.08349 | null | null |
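The second improvement in this abstract (1506.08349) is a scoring approach based on dynamic time warping between frame-level speaker features. The paper's exact scoring is not specified here, so the sketch below shows a generic DTW with cosine local cost over hypothetical d-vector sequences, as one plausible way such variable-length sequences could be compared.
```python
import numpy as np

def dtw_distance(X, Y):
    """Dynamic time warping between two sequences of frame vectors (rows),
    using cosine distance as the local cost. Returns a length-normalised score."""
    def cos_dist(a, b):
        return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cos_dist(X[i - 1], Y[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)

# Hypothetical frame-level d-vectors for an enrolment and a test utterance.
rng = np.random.default_rng(0)
enrol = rng.normal(size=(40, 64))
test = rng.normal(size=(55, 64))
print("DTW score (lower = more similar):", dtw_distance(enrol, test))
```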
Stochastic Gradient Made Stable: A Manifold Propagation Approach for
Large-Scale Optimization | cs.LG cs.NA | Stochastic gradient descent (SGD) holds as a classical method to build large
scale machine learning models over big data. A stochastic gradient is typically
calculated from a limited number of samples (known as mini-batch), so it
potentially incurs a high variance and causes the estimated parameters bounce
around the optimal solution. To improve the stability of stochastic gradient,
recent years have witnessed the proposal of several semi-stochastic gradient
descent algorithms, which distinguish themselves from standard SGD by
incorporating global information into gradient computation. In this paper we
contribute a novel stratified semi-stochastic gradient descent (S3GD) algorithm
to this nascent research area, accelerating the optimization of a large family
of composite convex functions. Though theoretically converging faster, prior
semi-stochastic algorithms are found to suffer from high iteration complexity,
which makes them even slower than SGD in practice on many datasets. In our
proposed S3GD, the semi-stochastic gradient is calculated based on efficient
manifold propagation, which can be numerically accomplished by sparse matrix
multiplications. This way S3GD is able to generate a highly-accurate estimate
of the exact gradient from each mini-batch with largely-reduced computational
complexity. Theoretic analysis reveals that the proposed S3GD elegantly
balances the geometric algorithmic convergence rate against the space and time
complexities during the optimization. The efficacy of S3GD is also
experimentally corroborated on several large-scale benchmark datasets.
| Yadong Mu and Wei Liu and Wei Fan | null | 1506.08350 | null | null |
Topic2Vec: Learning Distributed Representations of Topics | cs.CL cs.LG | Latent Dirichlet Allocation (LDA), which mines the thematic structure of documents,
plays an important role in natural language processing and machine learning
areas. However, the probability distribution from LDA only describes the
statistical relationship of occurrences in the corpus, and in practice
probability is usually not the best choice for feature representations. Recently,
embedding methods have been proposed to represent words and documents by
learning essential concepts and representations, such as Word2Vec and Doc2Vec.
The embedded representations have shown more effectiveness than LDA-style
representations in many tasks. In this paper, we propose the Topic2Vec approach
which can learn topic representations in the same semantic vector space with
words, as an alternative to probability. The experimental results show that
Topic2Vec achieves interesting and meaningful results.
| Li-Qiang Niu and Xin-Yu Dai | null | 1506.08422 | null | null |
Neural Simpletrons - Minimalistic Directed Generative Networks for
Learning with Few Labels | stat.ML cs.LG | Classifiers for the semi-supervised setting often combine strong supervised
models with additional learning objectives to make use of unlabeled data. This
results in powerful though very complex models that are hard to train and that
demand additional labels for optimal parameter tuning, which are often not
given when labeled data is very sparse. We here study a minimalistic
multi-layer generative neural network for semi-supervised learning in a form
and setting as similar to standard discriminative networks as possible. Based
on normalized Poisson mixtures, we derive compact and local learning and neural
activation rules. Learning and inference in the network can be scaled using
standard deep learning tools for parallelized GPU implementation. With the
single objective of likelihood optimization, both labeled and unlabeled data
are naturally incorporated into learning. Empirical evaluations on standard
benchmarks show that, for datasets with few labels, the derived minimalistic
network improves on all classical deep learning approaches and is competitive
with their recent variants without the need of additional labels for parameter
tuning. Furthermore, we find that the studied network is the best performing
monolithic (`non-hybrid') system for few labels, and that it can be applied in
the limit of very few labels, where no other system has been reported to
operate so far.
| Dennis Forster, Abdul-Saboor Sheikh and J\"org L\"ucke | 10.1162/neco_a_01100 | 1506.08448 | null | null |
Beating the Perils of Non-Convexity: Guaranteed Training of Neural
Networks using Tensor Methods | cs.LG cs.NE stat.ML | Training neural networks is a challenging non-convex optimization problem,
and backpropagation or gradient descent can get stuck in spurious local optima.
We propose a novel algorithm based on tensor decomposition for guaranteed
training of two-layer neural networks. We provide risk bounds for our proposed
method, with a polynomial sample complexity in the relevant parameters, such as
input dimension and number of neurons. While learning arbitrary target
functions is NP-hard, we provide transparent conditions on the function and the
input for learnability. Our training method is based on tensor decomposition,
which provably converges to the global optimum, under a set of mild
non-degeneracy conditions. It consists of simple embarrassingly parallel linear
and multi-linear operations, and is competitive with standard stochastic
gradient descent (SGD), in terms of computational complexity. Thus, we propose
a computationally efficient method with guaranteed risk bounds for training
neural networks with one hidden layer.
| Majid Janzamin and Hanie Sedghi and Anima Anandkumar | null | 1506.08473 | null | null |
A simple yet efficient algorithm for multiple kernel learning under
elastic-net constraints | stat.ML cs.LG | This paper introduces an algorithm for the solution of multiple kernel
learning (MKL) problems with elastic-net constraints on the kernel weights. The
algorithm compares very favourably in terms of time and space complexity to
existing approaches and can be implemented with simple code that does not rely
on external libraries (except a conventional SVM solver).
| Luca Citi | null | 1506.08536 | null | null |
Exact and approximate inference in graphical models: variable
elimination and beyond | stat.ML cs.AI cs.LG | Probabilistic graphical models offer a powerful framework to account for the
dependence structure between variables, which is represented as a graph.
However, the dependence between variables may render inference tasks
intractable. In this paper we review techniques exploiting the graph structure
for exact inference, borrowed from optimisation and computer science. They are
built on the principle of variable elimination whose complexity is dictated in
an intricate way by the order in which variables are eliminated. The so-called
treewidth of the graph characterises this algorithmic complexity: low-treewidth
graphs can be processed efficiently. The first message that we illustrate is
therefore the idea that, for inference in graphical models, the number of
variables is not the limiting factor, and it is worth checking for the
treewidth before turning to approximate methods. We show how algorithms
providing an upper bound on the treewidth can be exploited to derive a 'good'
elimination order that enables exact inference to be performed. The second message is
that when the treewidth is too large, algorithms for approximate inference
linked to the principle of variable elimination, such as loopy belief
propagation and variational approaches, can lead to accurate results while
being much less time consuming than Monte-Carlo approaches. We illustrate the
techniques reviewed in this article on benchmarks of inference problems in
genetic linkage analysis and computer vision, as well as on hidden variables
restoration in coupled Hidden Markov Models.
| Nathalie Peyrard and Marie-Jos\'ee Cros and Simon de Givry and Alain
Franc and St\'ephane Robin and R\'egis Sabbadin and Thomas Schiex and
Matthieu Vignes | null | 1506.08544 | null | null |
Variational Inference for Background Subtraction in Infrared Imagery | cs.CV cs.LG | We propose a Gaussian mixture model for background subtraction in infrared
imagery. Following a Bayesian approach, our method automatically estimates the
number of Gaussian components as well as their parameters, while simultaneously
avoiding over/under-fitting. The equations for estimating model parameters are
analytically derived, and thus our method does not require any computationally
and memory-inefficient sampling algorithm. The pixel density
estimate is followed by an efficient and highly accurate updating mechanism,
which permits our system to adapt automatically to dynamically changing
operating conditions. Experimental results and comparisons with other methods
show that our method outperforms them in terms of precision and recall, while at
the same time keeping the computational cost suitable for real-time applications.
| Konstantinos Makantasis, Anastasios Doulamis, Nikolaos Doulamis | null | 1506.08581 | null | null |
A spectral method for community detection in moderately-sparse
degree-corrected stochastic block models | math.PR cs.LG cs.SI stat.ML | We consider community detection in Degree-Corrected Stochastic Block Models
(DC-SBM). We propose a spectral clustering algorithm based on a suitably
normalized adjacency matrix. We show that this algorithm consistently recovers
the block-membership of all but a vanishing fraction of nodes, in the regime
where the lowest degree is of order $\log(n)$ or higher. Recovery succeeds even
for very heterogeneous degree distributions. The algorithm does not require
any parameters as input. In particular, it does not need to know the number of
communities.
| Lennart Gulikers, Marc Lelarge, Laurent Massouli\'e | null | 1506.08621 | null | null |
Efficient and Parsimonious Agnostic Active Learning | cs.LG stat.ML | We develop a new active learning algorithm for the streaming setting
satisfying three important properties: 1) It provably works for any classifier
representation and classification problem including those with severe noise. 2)
It is efficiently implementable with an ERM oracle. 3) It is more aggressive
than all previous approaches satisfying 1 and 2. To do this we create an
algorithm based on a newly defined optimization problem and analyze it. We also
conduct the first experimental analysis of all efficient agnostic active
learning algorithms, evaluating their strengths and weaknesses in different
settings.
| Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, Robert E.
Schapire | null | 1506.08669 | null | null |
Portfolio optimization using local linear regression ensembles in
RapidMiner | q-fin.PM cs.LG stat.ML | In this paper we implement a Local Linear Regression Ensemble Committee
(LOLREC) to predict 1-day-ahead returns of 453 assets from the S&P 500. The
estimates and the historical returns of the committees are used to compute the
weights of the portfolio over the 453 stocks. The proposed method outperforms
benchmark portfolio selection strategies that optimize the growth rate of the
capital. We investigate the effect of the algorithm parameter m (the number of
selected stocks) on the achieved average annual yields. Results suggest the
algorithm's practical usefulness in everyday trading.
| Gabor Nagy and Gergo Barta and Tamas Henk | null | 1506.08690 | null | null |
Dropout as data augmentation | stat.ML cs.LG | Dropout is typically interpreted as bagging a large number of models sharing
parameters. We show that using dropout in a network can also be interpreted as
a kind of data augmentation in the input space without domain knowledge. We
present an approach to projecting the dropout noise within a network back into
the input space, thereby generating augmented versions of the training data,
and we show that training a deterministic network on the augmented samples
yields similar results. Finally, we propose a new dropout noise scheme based on
our observations and show that it improves dropout results without adding
significant computational cost.
| Xavier Bouthillier, Kishore Konda, Pascal Vincent, Roland Memisevic | null | 1506.08700 | null | null |
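To make the input-space view in this abstract (1506.08700) concrete, the sketch below simply applies independent dropout masks to an input vector to generate augmented copies. The paper's actual procedure projects the dropout noise of hidden layers back into the input space through a trained network, which is not reproduced in this toy illustration.
```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_augment(x, n_copies=4, p_drop=0.2):
    """Generate augmented copies of an input by applying independent dropout
    masks directly in the input space (inverted-dropout scaling keeps the
    expected value of each feature unchanged)."""
    keep = 1.0 - p_drop
    masks = rng.random((n_copies, x.size)) < keep
    return (masks * x) / keep

x = rng.normal(size=8)                 # a hypothetical training example
augmented = dropout_augment(x)
print("original:", np.round(x, 2))
print("augmented copies:\n", np.round(augmented, 2))
# A deterministic network trained on such augmented samples plays the role of
# the dropout-trained network in the input-space interpretation.
```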
S2: An Efficient Graph Based Active Learning Algorithm with Application
to Nonparametric Classification | cs.LG stat.ML | This paper investigates the problem of active learning for binary label
prediction on a graph. We introduce a simple and label-efficient algorithm
called S2 for this task. At each step, S2 selects the vertex to be labeled
based on the structure of the graph and all previously gathered labels.
Specifically, S2 queries for the label of the vertex that bisects the *shortest
shortest* path between any pair of oppositely labeled vertices. We present a
theoretical estimate of the number of queries S2 needs in terms of a novel
parametrization of the complexity of binary functions on graphs. We also
present experimental results demonstrating the performance of S2 on both real
and synthetic data. While other graph-based active learning algorithms have
shown promise in practice, our algorithm is the first with both good
performance and theoretical guarantees. Finally, we demonstrate the
implications of the S2 algorithm to the theory of nonparametric active
learning. In particular, we show that S2 achieves near minimax optimal excess
risk for an important class of nonparametric classification problems.
| Gautam Dasarathy, Robert Nowak, Xiaojin Zhu | null | 1506.08760 | null | null |
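The query rule stated in this abstract (1506.08760), labeling the vertex that bisects the shortest shortest path between any pair of oppositely labeled vertices, can be sketched directly with networkx, as below. The stopping criterion, the label-completion step and all theoretical bookkeeping of the actual S2 algorithm are omitted; the toy graph and oracle are hypothetical.
```python
import networkx as nx

def s2_query(graph, labels):
    """Return the unlabeled vertex that (approximately) bisects the shortest
    shortest path between any pair of oppositely labeled vertices, or None."""
    pos = [v for v, y in labels.items() if y == +1]
    neg = [v for v, y in labels.items() if y == -1]
    best_path = None
    for u in pos:
        _, paths = nx.single_source_dijkstra(graph, u)
        for v in neg:
            if v in paths and (best_path is None or len(paths[v]) < len(best_path)):
                best_path = paths[v]
    if best_path is None or len(best_path) <= 2:
        return None                            # no vertex left to bisect
    mid = best_path[len(best_path) // 2]       # midpoint of the cut-crossing path
    return None if mid in labels else mid

# Toy example: a path graph whose true labels switch sign in the middle.
G = nx.path_graph(9)
oracle = {v: (+1 if v < 5 else -1) for v in G}
labels = {0: +1, 8: -1}                        # two seed labels
while (q := s2_query(G, labels)) is not None:
    labels[q] = oracle[q]                      # query the oracle and record the label
    print("queried vertex", q, "-> label", labels[q])
```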
On Design Mining: Coevolution and Surrogate Models | cs.NE cs.AI cs.CE cs.LG | Design mining is the use of computational intelligence techniques to
iteratively search and model the attribute space of physical objects evaluated
directly through rapid prototyping to meet given objectives. It enables the
exploitation of novel materials and processes without formal models or complex
simulation. In this paper, we focus upon the coevolutionary nature of the
design process when it is decomposed into concurrent sub-design threads due to
the overall complexity of the task. Using an abstract, tuneable model of
coevolution we consider strategies to sample sub-thread designs for whole
system testing and how best to construct and use surrogate models within the
coevolutionary scenario. Drawing on our findings, the paper then describes the
effective design of an array of six heterogeneous vertical-axis wind turbines.
| Richard J. Preen and Larry Bull | 10.1162/ARTL_a_00225 | 1506.08781 | null | null |
The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured
Multi-Turn Dialogue Systems | cs.CL cs.AI cs.LG cs.NE | This paper introduces the Ubuntu Dialogue Corpus, a dataset containing almost
1 million multi-turn dialogues, with a total of over 7 million utterances and
100 million words. This provides a unique resource for research into building
dialogue managers based on neural language models that can make use of large
amounts of unlabeled data. The dataset has both the multi-turn property of
conversations in the Dialog State Tracking Challenge datasets, and the
unstructured nature of interactions from microblog services such as Twitter. We
also describe two neural learning architectures suitable for analyzing this
dataset, and provide benchmark performance on the task of selecting the best
next response.
| Ryan Lowe, Nissan Pow, Iulian Serban, Joelle Pineau | null | 1506.08909 | null | null |
Learning Single Index Models in High Dimensions | stat.ML cs.LG stat.ME | Single Index Models (SIMs) are simple yet flexible semi-parametric models for
classification and regression. Response variables are modeled as a nonlinear,
monotonic function of a linear combination of features. Estimation in this
context requires learning both the feature weights, and the nonlinear function.
While methods have been described to learn SIMs in the low dimensional regime,
a method that can efficiently learn SIMs in high dimensions has not been
forthcoming. We propose three variants of a computationally and statistically
efficient algorithm for SIM inference in high dimensions. We establish excess
risk bounds for the proposed algorithms and experimentally validate the
advantages that our SIM learning methods provide relative to Generalized Linear
Model (GLM) and low dimensional SIM based learning methods.
| Ravi Ganti, Nikhil Rao, Rebecca M. Willett and Robert Nowak | null | 1506.08910 | null | null |
Fast ADMM Algorithm for Distributed Optimization with Adaptive Penalty | cs.LG cs.CV math.OC | We propose new methods to speed up convergence of the Alternating Direction
Method of Multipliers (ADMM), a common optimization tool in the context of
large scale and distributed learning. The proposed method accelerates the speed
of convergence by automatically deciding the constraint penalty needed for
parameter consensus in each iteration. In addition, we also propose an
extension of the method that adaptively determines the maximum number of
iterations to update the penalty. We show that this approach effectively leads
to an adaptive, dynamic network topology underlying the distributed
optimization. The utility of the new penalty update schemes is demonstrated on
both synthetic and real data, including a computer vision application of
distributed structure from motion.
| Changkyu Song and Sejong Yoon and Vladimir Pavlovic | null | 1506.08928 | null | null |
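The abstract above (1506.08928) proposes deciding the ADMM constraint penalty automatically in each iteration. Its exact rule is not given here, so the sketch below uses the classic residual-balancing heuristic for the penalty, applied to an ADMM solver for the lasso on hypothetical data, only to illustrate what an adaptive penalty update looks like in code.
```python
import numpy as np

def lasso_admm_adaptive(A, b, lam, rho=1.0, n_iter=100, mu=10.0, tau=2.0):
    """ADMM for the lasso: min 0.5||Ax - b||^2 + lam||z||_1 s.t. x = z,
    with the residual-balancing heuristic for adapting the penalty rho
    (not the adaptive scheme of the paper above)."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        z_old = z
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z
        r = np.linalg.norm(x - z)               # primal residual
        s = np.linalg.norm(rho * (z - z_old))   # dual residual
        if r > mu * s:                          # primal residual dominates: increase rho
            rho *= tau; u /= tau
        elif s > mu * r:                        # dual residual dominates: decrease rho
            rho /= tau; u *= tau
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
b = A @ x_true + 0.01 * rng.normal(size=60)
x_hat = lasso_admm_adaptive(A, b, lam=1.0)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```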
Online Learning to Sample | cs.LG cs.CV cs.NA math.OC stat.ML | Stochastic Gradient Descent (SGD) is one of the most widely used techniques
for online optimization in machine learning. In this work, we accelerate SGD by
adaptively learning how to sample the most useful training examples at each
time step. First, we show that SGD can be used to learn the best possible
sampling distribution of an importance sampling estimator. Second, we show that
the sampling distribution of an SGD algorithm can be estimated online by
incrementally minimizing the variance of the gradient. The resulting algorithm
- called Adaptive Weighted SGD (AW-SGD) - maintains a set of parameters to
optimize, as well as a set of parameters to sample learning examples. We show
that AW-SGD yields faster convergence in three different applications: (i) image
classification with deep features, where the sampling of images depends on
their labels, (ii) matrix factorization, where rows and columns are not sampled
uniformly, and (iii) reinforcement learning, where the optimized and
exploration policies are estimated at the same time; in this case our approach
corresponds to an off-policy gradient algorithm.
| Guillaume Bouchard, Th\'eo Trouillon, Julien Perez, Adrien Gaidon | null | 1506.09016 | null | null |
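The idea in this abstract (1506.09016) of learning the sampling distribution alongside the model can be illustrated with importance-sampled SGD. The sketch below keeps the gradient estimator unbiased via importance weights and adapts the per-example sampling probabilities with a simple gradient-norm heuristic; this is an assumption-laden stand-in, not the paper's AW-SGD parameterization, which learns the sampling parameters by minimizing the estimator's variance.
```python
import numpy as np

# Least-squares toy problem: f(theta) = (1/n) * sum_i 0.5 * (a_i^T theta - b_i)^2.
rng = np.random.default_rng(0)
n, d = 500, 10
A = rng.normal(size=(n, d))
A[:25] *= 10.0                         # a few examples carry much larger gradients
theta_true = rng.normal(size=d)
b = A @ theta_true

theta = np.zeros(d)
scores = np.ones(n)                    # running per-example gradient-norm estimates
p = np.full(n, 1.0 / n)                # sampling distribution, adapted online
step, p_mix = 1e-4, 0.1                # p_mix keeps a uniform floor so weights stay bounded

for t in range(20_000):
    i = rng.choice(n, p=p)
    g_i = (A[i] @ theta - b[i]) * A[i]
    theta -= step * g_i / (n * p[i])   # importance weighting keeps the estimator unbiased
    # Heuristic adaptation: drift each example's score toward its gradient norm and
    # sample roughly in proportion to it.
    scores[i] = 0.9 * scores[i] + 0.1 * np.linalg.norm(g_i)
    p = p_mix / n + (1 - p_mix) * scores / scores.sum()

print("parameter error:", np.linalg.norm(theta - theta_true))
```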
Scalable Discrete Sampling as a Multi-Armed Bandit Problem | stat.ML cs.LG | Drawing a sample from a discrete distribution is one of the building
components for Monte Carlo methods. Like other sampling algorithms, discrete
sampling suffers from the high computational burden in large-scale inference
problems. We study the problem of sampling a discrete random variable with a
high degree of dependency that is typical in large-scale Bayesian inference and
graphical models, and propose an efficient approximate solution with a
subsampling approach. We make a novel connection between the discrete sampling
and Multi-Armed Bandits problems with a finite reward population and provide
three algorithms with theoretical guarantees. Empirical evaluations show the
robustness and efficiency of the approximate algorithms in both synthetic and
real-world large-scale problems.
| Yutian Chen, Zoubin Ghahramani | null | 1506.09039 | null | null |
Framework for Multi-task Multiple Kernel Learning and Applications in
Genome Analysis | stat.ML cs.CE cs.LG | We present a general regularization-based framework for Multi-task learning
(MTL), in which the similarity between tasks can be learned or refined using
$\ell_p$-norm Multiple Kernel learning (MKL). Based on this very general
formulation (including a general loss function), we derive the corresponding
dual formulation using Fenchel duality applied to Hermitian matrices. We show
that numerous established MTL methods can be derived as special cases from
both the primal and the dual of our formulation. Furthermore, we derive a modern
dual coordinate descent optimization strategy for the hinge-loss variant of our
formulation and provide convergence bounds for our algorithm. As a special
case, we implement in C++ a fast LibLinear-style solver for $\ell_p$-norm MKL.
In the experimental section, we analyze various aspects of our algorithm such
as predictive performance and ability to reconstruct task relationships on
biologically inspired synthetic data, where we have full control over the
underlying ground truth. We also experiment on a new dataset from the domain of
computational biology that we collected for the purpose of this paper. It
concerns the prediction of transcription start sites (TSS) over nine organisms,
which is a crucial task in gene finding. Our solvers including all discussed
special cases are made available as open-source software as part of the SHOGUN
machine learning toolbox (available at \url{http://shogun.ml}).
| Christian Widmer, Marius Kloft, Vipin T Sreedharan, Gunnar R\"atsch | null | 1506.09153 | null | null |
Unsupervised Learning from Narrated Instruction Videos | cs.CV cs.LG | We address the problem of automatically learning the main steps to complete a
certain task, such as changing a car tire, from a set of narrated instruction
videos. The contributions of this paper are three-fold. First, we develop a new
unsupervised learning approach that takes advantage of the complementary nature
of the input video and the associated narration. The method solves two
clustering problems, one in text and one in video, applied one after the other
and linked by joint constraints to obtain a single coherent sequence of steps
in both modalities. Second, we collect and annotate a new challenging dataset
of real-world instruction videos from the Internet. The dataset contains about
800,000 frames for five different tasks that include complex interactions
between people and objects, and are captured in a variety of indoor and outdoor
settings. Third, we experimentally demonstrate that the proposed method can
automatically discover, in an unsupervised manner, the main steps to achieve
the task and locate the steps in the input videos.
| Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic,
Ivan Laptev, Simon Lacoste-Julien | null | 1506.09215 | null | null |
Selective Inference and Learning Mixed Graphical Models | stat.ML cs.LG | This thesis studies two problems in modern statistics. First, we study
selective inference, or inference for hypotheses that are chosen after looking
at the data. The motivating application is inference for regression coefficients
selected by the lasso. We present the Condition-on-Selection method that allows
for valid selective inference, and study its application to the lasso, and
several other selection algorithms.
In the second part, we consider the problem of learning the structure of a
pairwise graphical model over continuous and discrete variables. We present a
new pairwise model for graphical models with both continuous and discrete
variables that is amenable to structure learning. In previous work, authors
have considered structure learning of Gaussian graphical models and structure
learning of discrete models. Our approach is a natural generalization of these
two lines of work to the mixed case. The penalization scheme involves a novel
symmetric use of the group-lasso norm and follows naturally from a particular
parametrization of the model. We provide conditions under which our estimator
is model selection consistent in the high-dimensional regime.
| Jason D. Lee | null | 1507.00039 | null | null |
Fast Cross-Validation for Incremental Learning | stat.ML cs.AI cs.LG | Cross-validation (CV) is one of the main tools for performance estimation and
parameter tuning in machine learning. The general recipe for computing a CV
estimate is to run a learning algorithm separately for each CV fold, a
computationally expensive process. In this paper, we propose a new approach to
reduce the computational burden of CV-based performance estimation. As opposed
to all previous attempts, which are specific to a particular learning model or
problem domain, we propose a general method applicable to a large class of
incremental learning algorithms, which are uniquely fitted to big data
problems. In particular, our method applies to a wide range of supervised and
unsupervised learning tasks with different performance criteria, as long as the
base learning algorithm is incremental. We show that the running time of the
algorithm scales logarithmically, rather than linearly, in the number of CV
folds. Furthermore, the algorithm has favorable properties for parallel and
distributed implementation. Experiments with state-of-the-art incremental
learning algorithms confirm the practicality of the proposed method.
| Pooria Joulani, Andr\'as Gy\"orgy, Csaba Szepesv\'ari | null | 1507.00066 | null | null |
A Study of Gradient Descent Schemes for General-Sum Stochastic Games | cs.LG cs.GT | Zero-sum stochastic games are easy to solve as they can be cast as simple
Markov decision processes. This is however not the case with general-sum
stochastic games. A fairly general optimization problem formulation is
available for general-sum stochastic games by Filar and Vrieze [2004]. However,
the optimization problem there has a non-linear objective and non-linear
constraints with special structure. Since gradients of both the objective as
well as constraints of this optimization problem are well defined, gradient
based schemes seem to be a natural choice. We discuss a gradient scheme tuned
for two-player stochastic games. We show in simulations that this scheme indeed
converges to a Nash equilibrium, for a simple terrain exploration problem
modelled as a general-sum stochastic game. However, it turns out that only
global minima of the optimization problem correspond to Nash equilibria of the
underlying general-sum stochastic game, while gradient schemes only guarantee
convergence to local minima. We then provide important necessary conditions for
gradient schemes to converge to Nash equilibria in general-sum stochastic
games.
| H. L. Prasad and Shalabh Bhatnagar | null | 1507.00093 | null | null |
Natural Neural Networks | stat.ML cs.LG cs.NE | We introduce Natural Neural Networks, a novel family of algorithms that speed
up convergence by adapting their internal representation during training to
improve conditioning of the Fisher matrix. In particular, we show a specific
example that employs a simple and efficient reparametrization of the neural
network weights by implicitly whitening the representation obtained at each
layer, while preserving the feed-forward computation of the network. Such
networks can be trained efficiently via the proposed Projected Natural Gradient
Descent algorithm (PRONG), which amortizes the cost of these reparametrizations
over many parameter updates and is closely related to the Mirror Descent online
learning algorithm. We highlight the benefits of our method on both
unsupervised and supervised learning tasks, and showcase its scalability by
training on the large-scale ImageNet Challenge dataset.
| Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Koray
Kavukcuoglu | null | 1507.00210 | null | null |
Bigeometric Organization of Deep Nets | stat.ML cs.LG | In this paper, we build an organization of high-dimensional datasets that
cannot be cleanly embedded into a low-dimensional representation due to missing
entries and a subset of the features being irrelevant to modeling functions of
interest. Our algorithm begins by defining coarse neighborhoods of the points
and defining an expected empirical function value on these neighborhoods. We
then generate new non-linear features with deep net representations tuned to
model the approximate function, and re-organize the geometry of the points with
respect to the new representation. Finally, the points are locally z-scored to
create an intrinsic geometric organization which is independent of the
parameters of the deep net, a geometry designed to assure smoothness with
respect to the empirical function. We examine this approach on data from the
Center for Medicare and Medicaid Services Hospital Quality Initiative, and
generate an intrinsic low-dimensional organization of the hospitals that is
smooth with respect to an expert driven function of quality.
| Alexander Cloninger, Ronald R. Coifman, Nicholas Downing, Harlan M.
Krumholz | null | 1507.00220 | null | null |
Bootstrapped Thompson Sampling and Deep Exploration | stat.ML cs.LG | This technical note presents a new approach to carrying out the kind of
exploration achieved by Thompson sampling, but without explicitly maintaining
or sampling from posterior distributions. The approach is based on a bootstrap
technique that uses a combination of observed and artificially generated data.
The latter serves to induce a prior distribution which, as we will demonstrate,
is critical to effective exploration. We explain how the approach can be
applied to multi-armed bandit and reinforcement learning problems and how it
relates to Thompson sampling. The approach is particularly well-suited for
contexts in which exploration is coupled with deep learning, since in these
settings, maintaining or generating samples from a posterior distribution
becomes computationally infeasible.
| Ian Osband and Benjamin Van Roy | null | 1507.00300 | null | null |
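A minimal bandit version of the idea in this abstract (1507.00300): each arm's history is seeded with a few artificially generated prior observations, and at every round a bootstrap resample of each history produces the estimate that is greedily maximized. The constants and the Bernoulli environment below are hypothetical, and the sketch omits the deep-learning and reinforcement-learning settings discussed in the note.
```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])      # hypothetical Bernoulli arms
n_arms, horizon = len(true_means), 2000

# Each arm's history starts with artificially generated "prior" observations
# (one success and one failure here); these induce the prior-like randomness
# that drives exploration in the bootstrap approach.
history = [[1.0, 0.0] for _ in range(n_arms)]

pulls = np.zeros(n_arms, dtype=int)
for t in range(horizon):
    estimates = []
    for a in range(n_arms):
        data = np.array(history[a])
        boot = rng.choice(data, size=len(data), replace=True)   # bootstrap resample
        estimates.append(boot.mean())
    arm = int(np.argmax(estimates))
    reward = float(rng.random() < true_means[arm])
    history[arm].append(reward)
    pulls[arm] += 1

print("pull counts per arm:", pulls)        # most pulls should go to the best arm
```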
Notes on Low-rank Matrix Factorization | cs.NA cs.IR cs.LG | Low-rank matrix factorization (MF) is an important technique in data science.
The key idea of MF is that there exists latent structures in the data, by
uncovering which we could obtain a compressed representation of the data. By
factorizing an original matrix to low-rank matrices, MF provides a unified
method for dimension reduction, clustering, and matrix completion. In this
article we review several important variants of MF, including: Basic MF,
Non-negative MF, and Orthogonal non-negative MF. As their names suggest,
non-negative MF and orthogonal non-negative MF are variants of basic MF with
non-negativity and/or orthogonality constraints. Such constraints are useful in
specific scenarios. In the first part of this article, we introduce, for each of
these models, the application scenarios, the distinctive properties, and the
optimization method. By properly adapting MF, we can go beyond the problems of
clustering and matrix completion. In the second part of this article, we will
extend MF to sparse matrix completion, enhance matrix completion using
various regularization methods, and make use of MF for (semi-)supervised
learning by introducing latent space reinforcement and transformation. We will
see that MF is not only a useful model but also a flexible framework that is
applicable to various prediction problems.
| Yuan Lu and Jie Yang | null | 1507.00333 | null | null |
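Basic MF, the first variant reviewed in this article (1507.00333), can be sketched as stochastic gradient descent on the observed entries with L2 regularization, which also performs matrix completion on the missing entries. The data, rank and hyperparameters below are hypothetical.
```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 30, 40, 3

# Synthetic low-rank ratings with 60% of entries observed.
U_true = rng.normal(size=(n_users, rank))
V_true = rng.normal(size=(n_items, rank))
R = U_true @ V_true.T
mask = rng.random(R.shape) < 0.6
obs = np.argwhere(mask)

# Basic MF by stochastic gradient descent on the observed entries, with L2 regularization.
U = 0.1 * rng.normal(size=(n_users, rank))
V = 0.1 * rng.normal(size=(n_items, rank))
lr, reg = 0.02, 0.01
for epoch in range(50):
    rng.shuffle(obs)
    for i, j in obs:
        err = R[i, j] - U[i] @ V[j]
        u_i = U[i].copy()                        # snapshot before updating
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * u_i - reg * V[j])

rmse_missing = np.sqrt(np.mean((R[~mask] - (U @ V.T)[~mask]) ** 2))
print(f"RMSE on the held-out (missing) entries: {rmse_missing:.3f}")
```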
An Empirical Evaluation of True Online TD({\lambda}) | cs.AI cs.LG stat.ML | The true online TD({\lambda}) algorithm has recently been proposed (van
Seijen and Sutton, 2014) as a universal replacement for the popular
TD({\lambda}) algorithm, in temporal-difference learning and reinforcement
learning. True online TD({\lambda}) has better theoretical properties than
conventional TD({\lambda}), and the expectation is that it also results in
faster learning. In this paper, we put this hypothesis to the test.
Specifically, we compare the performance of true online TD({\lambda}) with that
of TD({\lambda}) on challenging examples, random Markov reward processes, and a
real-world myoelectric prosthetic arm. We use linear function approximation
with tabular, binary, and non-binary features. We assess the algorithms along
three dimensions: computational cost, learning speed, and ease of use. Our
results confirm the strength of true online TD({\lambda}): 1) for sparse
feature vectors, the computational overhead with respect to TD({\lambda}) is
minimal; for non-sparse features the computation time is at most twice that of
TD({\lambda}), 2) across all domains/representations the learning speed of true
online TD({\lambda}) is often better, but never worse than that of
TD({\lambda}), and 3) true online TD({\lambda}) is easier to use, because it
does not require choosing between trace types, and it is generally more stable
with respect to the step-size. Overall, our results suggest that true online
TD({\lambda}) should be the first choice when looking for an efficient,
general-purpose TD method.
| Harm van Seijen, A. Rupam Mahmood, Patrick M. Pilarski, Richard S.
Sutton | null | 1507.00353 | null | null |
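The following sketch shows the true online TD(lambda) update for policy evaluation with linear function approximation, following the published pseudocode. The episode interface and hyperparameters are illustrative assumptions, and presentations of the algorithm differ slightly in how the step size enters the dutch-style trace.

```python
import numpy as np

def true_online_td_lambda(episodes, n_features, alpha=0.05, gamma=0.99, lam=0.8):
    """Policy evaluation with true online TD(lambda) and linear features.

    `episodes` is an iterable of trajectories, each a list of
    (phi, reward, phi_next, done) tuples where phi / phi_next are NumPy
    feature vectors of length n_features.
    """
    theta = np.zeros(n_features)
    for episode in episodes:
        e = np.zeros(n_features)   # eligibility trace
        v_old = 0.0
        for phi, reward, phi_next, done in episode:
            v = theta @ phi
            v_next = 0.0 if done else theta @ phi_next
            delta = reward + gamma * v_next - v
            # Dutch-trace update with the extra correction term of true online TD.
            e = gamma * lam * e + phi - alpha * gamma * lam * (e @ phi) * phi
            theta += alpha * (delta + v - v_old) * e - alpha * (v - v_old) * phi
            v_old = v_next
    return theta
```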
Fast Convergence of Regularized Learning in Games | cs.GT cs.AI cs.LG | We show that natural classes of regularized learning algorithms with a form
of recency bias achieve faster convergence rates to approximate efficiency and
to coarse correlated equilibria in multiplayer normal form games. When each
player in a game uses an algorithm from our class, their individual regret
decays at $O(T^{-3/4})$, while the sum of utilities converges to an approximate
optimum at $O(T^{-1})$--an improvement upon the worst case $O(T^{-1/2})$ rates.
We show a black-box reduction for any algorithm in the class to achieve
$\tilde{O}(T^{-1/2})$ rates against an adversary, while maintaining the faster
rates against algorithms in the class. Our results extend those of [Rakhlin and
Shridharan 2013] and [Daskalakis et al. 2014], who only analyzed two-player
zero-sum games for specific algorithms.
| Vasilis Syrgkanis, Alekh Agarwal, Haipeng Luo, Robert E. Schapire | null | 1507.00407 | null | null |
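One simple member of the recency-biased family the abstract refers to is Optimistic Hedge, which counts the most recent loss vector twice when forming its weights. The sketch below is a generic implementation of that single player's update, not the paper's full multiplayer analysis; the step size is an illustrative assumption.

```python
import numpy as np

def optimistic_hedge(loss_stream, n_actions, eta=0.1):
    """Optimistic Hedge: exponential weights with a recency bias.

    `loss_stream` yields loss vectors in [0, 1]^n_actions.  The weights are
    proportional to exp(-eta * (cumulative loss + last loss)), i.e. the most
    recent loss is used as a prediction of the next one.
    """
    cum_loss = np.zeros(n_actions)
    last_loss = np.zeros(n_actions)
    plays = []
    for loss in loss_stream:
        logits = -eta * (cum_loss + last_loss)   # recency bias: last loss counted twice
        logits -= logits.max()                   # numerical stability
        p = np.exp(logits)
        p /= p.sum()
        plays.append(p)
        cum_loss += loss
        last_loss = loss
    return plays
```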
No-Regret Learning in Bayesian Games | cs.GT cs.LG | Recent price-of-anarchy analyses of games of complete information suggest
that coarse correlated equilibria, which characterize outcomes resulting from
no-regret learning dynamics, have near-optimal welfare. This work provides two
main technical results that lift this conclusion to games of incomplete
information, a.k.a., Bayesian games. First, near-optimal welfare in Bayesian
games follows directly from the smoothness-based proof of near-optimal welfare
in the same game when the private information is public. Second, no-regret
learning dynamics converge to Bayesian coarse correlated equilibrium in these
incomplete information games. These results are enabled by interpretation of a
Bayesian game as a stochastic game of complete information.
| Jason Hartline, Vasilis Syrgkanis, Eva Tardos | null | 1507.00418 | null | null |
Categorical Matrix Completion | cs.NA cs.LG math.ST stat.ML stat.TH | We consider the problem of completing a matrix with categorical-valued
entries from partial observations. This is achieved by extending the
formulation and theory of one-bit matrix completion. We recover a low-rank
matrix $X$ by maximizing the likelihood ratio with a constraint on the nuclear
norm of $X$, and the observations are mapped from entries of $X$ through
multiple link functions. We establish theoretical upper and lower bounds on the
recovery error, which meet up to a constant factor $\mathcal{O}(K^{3/2})$ where
$K$ is the fixed number of categories. The upper bound in our case depends on
the number of categories implicitly through a maximization of terms that
involve the smoothness of the link functions. In contrast to one-bit matrix
completion, our bounds for categorical matrix completion are optimal up to a
factor on the order of the square root of the number of categories, which is
consistent with an intuition that the problem becomes harder when the number of
categories increases. By comparing the performance of our method with the
conventional matrix completion method on the MovieLens dataset, we demonstrate
the advantage of our method.
| Yang Cao, Yao Xie | null | 1507.00421 | null | null |
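A heavily simplified sketch of the likelihood-based completion idea in the preceding abstract, restricted to the binary (one-bit) case with a single logistic link and an explicit rank-r factorization in place of the nuclear-norm constraint and multiple categories; it is purely illustrative, not the paper's estimator.

```python
import numpy as np

def one_bit_completion(Y, mask, rank=5, lr=0.05, n_iters=3000, seed=0):
    """Maximize a logistic log-likelihood over a rank-r parameterization.

    Y holds observations in {0, 1} on the entries where mask is True.  The
    underlying matrix is modeled as U @ V.T and fitted by gradient ascent;
    regularization is omitted for brevity.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(n_iters):
        X = U @ V.T
        P = 1.0 / (1.0 + np.exp(-X))      # logistic link
        G = mask * (Y - P)                # gradient of the log-likelihood w.r.t. X
        U += lr * (G @ V)
        V += lr * (G.T @ U)
    return U @ V.T
```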
Online Transfer Learning in Reinforcement Learning Domains | cs.AI cs.LG | This paper proposes an online transfer framework to capture the interaction
among agents and shows that current transfer learning in reinforcement learning
is a special case of online transfer. Furthermore, this paper re-characterizes
existing agents-teaching-agents methods as online transfer and analyzes one such
teaching method in three ways. First, the convergence of Q-learning and Sarsa
with tabular representation with a finite budget is proven. Second, the
convergence of Q-learning and Sarsa with linear function approximation is
established. Third, we show that the asymptotic performance cannot be hurt
through teaching. Additionally, all theoretical results are empirically
validated.
| Yusen Zhan and Matthew E. Taylor | null | 1507.00436 | null | null |
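To make the agents-teaching-agents setting concrete, here is a sketch of tabular Q-learning in which the student follows a teacher's greedy action for a limited advice budget and then acts on its own. The environment interface, budget semantics, and hyperparameters are assumptions for illustration and not the exact protocol analyzed in the paper.

```python
import numpy as np

def q_learning_with_teaching(env, teacher_q, budget=100, episodes=500,
                             alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with a finite budget of teacher advice.

    Assumes `env` exposes reset() -> state and step(a) -> (state, reward, done),
    with integer states/actions, and that `teacher_q` is a pre-trained Q-table
    of shape (n_states, n_actions).
    """
    rng = np.random.default_rng(seed)
    n_states, n_actions = teacher_q.shape
    q = np.zeros((n_states, n_actions))
    advice_left = budget
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            if advice_left > 0:
                a = int(np.argmax(teacher_q[s]))   # follow the teacher's advice
                advice_left -= 1
            elif rng.random() < eps:
                a = int(rng.integers(n_actions))   # explore
            else:
                a = int(np.argmax(q[s]))           # exploit own estimates
            s_next, r, done = env.step(a)
            target = r if done else r + gamma * q[s_next].max()
            q[s, a] += alpha * (target - q[s, a])
            s = s_next
    return q
```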
DC Proximal Newton for Non-Convex Optimization Problems | cs.LG cs.NA stat.ML | We introduce a novel algorithm for solving learning problems where both the
loss function and the regularizer are non-convex but belong to the class of
difference of convex (DC) functions. Our contribution is a new general purpose
proximal Newton algorithm that is able to deal with such a situation. The
algorithm consists in obtaining a descent direction from an approximation of
the loss function and then in performing a line search to ensure sufficient
descent. A theoretical analysis is provided showing that the iterates of the
proposed algorithm admit as limit points stationary points of the DC
objective function. Numerical experiments show that our approach is more
efficient than the current state of the art for a problem with a convex loss
function and a non-convex regularizer. We have also illustrated the benefit of
our algorithm on a high-dimensional transductive learning problem where both
the loss function and the regularizer are non-convex.
| Alain Rakotomamonjy (LITIS), Remi Flamary (LAGRANGE, OCA), Gilles
Gasso (LITIS) | null | 1507.00438 | null | null |
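The algorithm in the preceding abstract builds on the difference-of-convex (DC) structure f = g - h with both g and h convex. The toy sketch below runs the plain DCA iteration (linearize h at the current point, then solve the resulting convex subproblem) on a one-dimensional example; it is not the proposed proximal Newton method, only an illustration of the DC decomposition such methods exploit.

```python
import numpy as np

def dca(x0, n_iters=50):
    """DCA sketch for f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = 3*x**2.

    At each step h is linearized at x_k and the convex subproblem
    min_x g(x) - h'(x_k) * x is solved in closed form (4 x^3 = h'(x_k)).
    """
    x = x0
    for _ in range(n_iters):
        grad_h = 6.0 * x              # h'(x_k)
        x = np.cbrt(grad_h / 4.0)     # minimizer of x**4 - grad_h * x
    return x

print(dca(2.0))   # converges to sqrt(1.5), a stationary point of x**4 - 3*x**2
```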
The Optimal Sample Complexity of PAC Learning | cs.LG stat.ML | This work establishes a new upper bound on the number of samples sufficient
for PAC learning in the realizable case. The bound matches known lower bounds
up to numerical constant factors. This solves a long-standing open problem on
the sample complexity of PAC learning. The technique and analysis build on a
recent breakthrough by Hans Simon.
| Steve Hanneke | null | 1507.00473 | null | null |
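For reference, the realizable-case sample complexity in question has the well-known optimal form $\Theta\!\left(\frac{1}{\varepsilon}\left(d + \log\frac{1}{\delta}\right)\right)$, where $d$ is the VC dimension, $\varepsilon$ the target error, and $\delta$ the confidence parameter; the improvement over the classical upper bound $O\!\left(\frac{1}{\varepsilon}\left(d\log\frac{1}{\varepsilon} + \log\frac{1}{\delta}\right)\right)$ is the removal of the $\log\frac{1}{\varepsilon}$ factor.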