title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Tight Bounds on $\ell_1$ Approximation and Learning of Self-Bounding
Functions | cs.LG cs.DS | We study the complexity of learning and approximation of self-bounding
functions over the uniform distribution on the Boolean hypercube $\{0,1\}^n$.
Informally, a function $f:\{0,1\}^n \rightarrow \mathbb{R}$ is self-bounding if
for every $x \in \{0,1\}^n$, $f(x)$ upper bounds the sum of all the $n$ marginal
decreases in the value of the function at $x$. Self-bounding functions include
such well-known classes of functions as submodular and fractionally-subadditive
(XOS) functions. They were introduced by Boucheron et al. (2000) in the context
of concentration of measure inequalities. Our main result is a nearly tight
$\ell_1$-approximation of self-bounding functions by low-degree juntas.
Specifically, all self-bounding functions can be $\epsilon$-approximated in
$\ell_1$ by a polynomial of degree $\tilde{O}(1/\epsilon)$ over
$2^{\tilde{O}(1/\epsilon)}$ variables. We show that both the degree and
junta-size are optimal up to logarithmic terms. Previous techniques considered
stronger $\ell_2$ approximation and proved nearly tight bounds of
$\Theta(1/\epsilon^{2})$ on the degree and $2^{\Theta(1/\epsilon^2)}$ on the
number of variables. Our bounds rely on the analysis of noise stability of
self-bounding functions together with a stronger connection between noise
stability and $\ell_1$ approximation by low-degree polynomials. This technique
can also be used to get tighter bounds on $\ell_1$ approximation by low-degree
polynomials and faster learning algorithms for halfspaces.
These results lead to improved and in several cases almost tight bounds for
PAC and agnostic learning of self-bounding functions relative to the uniform
distribution. In particular, assuming hardness of learning juntas, we show that
PAC and agnostic learning of self-bounding functions have complexity of
$n^{\tilde{\Theta}(1/\epsilon)}$.
| Vitaly Feldman, Pravesh Kothari and Jan Vondr\'ak | null | 1404.4702 | null | null |
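As a point of reference, the self-bounding property described informally above is commonly formalized (following Boucheron et al.) by requiring each coordinate-wise marginal decrease to lie in $[0,1]$ and, for every $x \in \{0,1\}^n$,
$$\sum_{i=1}^{n}\Bigl(f(x) - \min_{x_i' \in \{0,1\}} f(x_1,\dots,x_i',\dots,x_n)\Bigr) \;\le\; f(x).$$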
Supervised detection of anomalous light-curves in massive astronomical
catalogs | cs.CE astro-ph.IM cs.LG | The development of synoptic sky surveys has led to a massive amount of data
for which resources needed for analysis are beyond human capabilities. To
process this information and to extract all possible knowledge, machine
learning techniques become necessary. Here we present a new method to
automatically discover unknown variable objects in large astronomical catalogs.
With the aim of taking full advantage of all the information we have about
known objects, our method is based on a supervised algorithm. In particular, we
train a random forest classifier using known variability classes of objects and
obtain votes for each of the objects in the training set. We then model this
voting distribution with a Bayesian network and obtain the joint voting
distribution among the training objects. Consequently, an unknown object is
considered an outlier insofar as it has a low joint probability. Our method is
suitable for exploring massive datasets given that the training process is
performed offline. We tested our algorithm on 20 million light-curves from the
MACHO catalog and generated a list of anomalous candidates. We divided the
candidates into two main classes of outliers: artifacts and intrinsic outliers.
Artifacts were principally due to air mass variation, seasonal variation, bad
calibration or instrumental errors and were consequently removed from our
outlier list and added to the training set. After retraining, we selected about
4000 objects, which we passed to a post-analysis stage by performing a
cross-match with all publicly available catalogs. Within these candidates we
identified certain known but rare objects such as eclipsing Cepheids, blue
variables, cataclysmic variables and X-ray sources. For some outliers there
was no additional information available. Among them we identified three unknown
variability types and a few individual outliers that will be followed up for a
deeper analysis.
| Isadora Nun, Karim Pichara, Pavlos Protopapas, Dae-Won Kim | 10.1088/0004-637X/793/1/23 | 1404.4888 | null | null |
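A minimal sketch of the vote-based outlier idea this entry describes, assuming scikit-learn is available; a kernel density estimate stands in for the paper's Bayesian network over the vote distribution, and the function names are illustrative rather than taken from the authors' code:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KernelDensity

def fit_vote_outlier_model(X_train, y_train, n_trees=500):
    # Train on known variability classes and collect per-class vote fractions.
    rf = RandomForestClassifier(n_estimators=n_trees).fit(X_train, y_train)
    votes = rf.predict_proba(X_train)
    # Stand-in for the paper's Bayesian network: a density model over vote vectors.
    density = KernelDensity(bandwidth=0.05).fit(votes)
    return rf, density

def outlier_scores(rf, density, X_new):
    votes = rf.predict_proba(X_new)
    # High score = low joint probability of the vote vector = outlier candidate.
    return -density.score_samples(votes)
```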
CTBNCToolkit: Continuous Time Bayesian Network Classifier Toolkit | cs.AI cs.LG cs.MS | Continuous time Bayesian network classifiers are designed for temporal
classification of multivariate streaming data when time duration of events
matters and the class does not change over time. This paper introduces the
CTBNCToolkit: an open source Java toolkit which provides a stand-alone
application for temporal classification and a library for continuous time
Bayesian network classifiers. CTBNCToolkit implements the inference algorithm,
the parameter learning algorithm, and the structural learning algorithm for
continuous time Bayesian network classifiers. The structural learning algorithm
is based on scoring functions: the marginal log-likelihood score and the
conditional log-likelihood score are provided. CTBNCToolkit also provides an
implementation of the expectation maximization algorithm for clustering
purposes. The paper introduces continuous time Bayesian network classifiers. How
to use CTBNCToolkit from the command line is described in a dedicated
section. Tutorial examples are included to help users understand how
the toolkit should be used. A section dedicated to the Java library is provided
to support further code extensions.
| Daniele Codecasa and Fabio Stella | null | 1404.4893 | null | null |
Agent Behavior Prediction and Its Generalization Analysis | cs.LG | Machine learning algorithms have been applied to predict agent behaviors in
real-world dynamic systems, such as advertiser behaviors in sponsored search
and worker behaviors in crowdsourcing. The behavior data in these systems are
generated by live agents: once the systems change due to the adoption of the
prediction models learnt from the behavior data, agents will observe and
respond to these changes by changing their own behaviors accordingly. As a
result, the behavior data will evolve and will not be identically and
independently distributed, posing great challenges to the theoretical analysis
on the machine learning algorithms for behavior prediction. To tackle this
challenge, in this paper, we propose to use a Markov Chain in Random Environments
(MCRE) to describe the behavior data, and perform generalization analysis of
the machine learning algorithms on its basis. Since the one-step transition
probability matrix of MCRE depends on both previous states and the random
environment, conventional techniques for generalization analysis cannot be
directly applied. To address this issue, we propose a novel technique that
transforms the original MCRE into a higher-dimensional time-homogeneous Markov
chain. The new Markov chain involves more variables but is more regular, and
thus easier to deal with. We prove the convergence of the new Markov chain when
time approaches infinity. Then we prove a generalization bound for the machine
learning algorithms on the behavior data generated by the new Markov chain,
which depends on both the Markovian parameters and the covering number of the
function class compounded by the loss function for behavior prediction and the
behavior prediction model. To the best of our knowledge, this is the first work
that performs the generalization analysis on data generated by complex
processes in real-world dynamic systems.
| Fei Tian, Haifang Li, Wei Chen, Tao Qin, Enhong Chen, Tie-Yan Liu | null | 1404.4960 | null | null |
Tight bounds for learning a mixture of two gaussians | cs.LG cs.DS stat.ML | We consider the problem of identifying the parameters of an unknown mixture
of two arbitrary $d$-dimensional gaussians from a sequence of independent
random samples. Our main results are upper and lower bounds giving a
computationally efficient moment-based estimator with an optimal convergence
rate, thus resolving a problem introduced by Pearson (1894). Denoting by
$\sigma^2$ the variance of the unknown mixture, we prove that
$\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each
parameter up to constant additive error when $d=1.$ Our upper bound extends to
arbitrary dimension $d>1$ up to a (provably necessary) logarithmic loss in $d$
using a novel---yet simple---dimensionality reduction technique. We further
identify several interesting special cases where the sample complexity is
notably smaller than our optimal worst-case bound. For instance, if the means
of the two components are separated by $\Omega(\sigma)$ the sample complexity
reduces to $O(\sigma^2)$ and this is again optimal.
Our results also apply to learning each component of the mixture up to small
error in total variation distance, where our algorithm gives strong
improvements in sample complexity over previous work. We also extend our lower
bound to mixtures of $k$ Gaussians, showing that $\Omega(\sigma^{6k-2})$
samples are necessary to estimate each parameter up to constant additive error.
| Moritz Hardt and Eric Price | null | 1404.4997 | null | null |
Efficient Semidefinite Branch-and-Cut for MAP-MRF Inference | cs.CV cs.LG cs.NA | We propose a Branch-and-Cut (B&C) method for solving general MAP-MRF
inference problems. The core of our method is a very efficient bounding
procedure, which combines scalable semidefinite programming (SDP) and a
cutting-plane method for seeking violated constraints. In order to further
speed up the computation, several strategies have been exploited, including
model reduction, warm start and removal of inactive constraints.
We analyze the performance of the proposed method under different settings,
and demonstrate that our method either outperforms or performs on par with
state-of-the-art approaches. Especially when the connectivities are dense or
when the relative magnitudes of the unary costs are low, we achieve the best
reported results. Experiments show that the proposed algorithm achieves better
approximation than the state-of-the-art methods within a variety of time
budgets on challenging non-submodular MAP-MRF inference problems.
| Peng Wang, Chunhua Shen, Anton van den Hengel, Philip Torr | null | 1404.5009 | null | null |
Multi-Target Regression via Random Linear Target Combinations | cs.LG | Multi-target regression is concerned with the simultaneous prediction of
multiple continuous target variables based on the same set of input variables.
It arises in several interesting industrial and environmental application
domains, such as ecological modelling and energy forecasting. This paper
presents an ensemble method for multi-target regression that constructs new
target variables via random linear combinations of existing targets. We discuss
the connection of our approach with multi-label classification algorithms, in
particular RA$k$EL, which originally inspired this work, and a family of recent
multi-label classification algorithms that involve output coding. Experimental
results on 12 multi-target datasets show that it performs significantly better
than a strong baseline that learns a single model for each target using
gradient boosting, and compares favourably to the multi-objective random forest
approach, a state-of-the-art method. The experiments further show
that our approach improves more when stronger unconditional dependencies exist
among the targets.
| Grigorios Tsoumakas, Eleftherios Spyromitros-Xioufis, Aikaterini
Vrekou, Ioannis Vlahavas | 10.1007/978-3-662-44845-8_15 | 1404.5065 | null | null |
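A minimal sketch of the random-linear-target-combination construction described above, under the assumption that gradient-boosted trees serve as the single-target learner; the recovery step uses ordinary least squares, and the helper names are hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_rlc_ensemble(X, Y, n_combinations=100, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_combinations, Y.shape[1]))   # random combination matrix
    Z = Y @ A.T                                           # new targets: random linear combinations
    models = [GradientBoostingRegressor().fit(X, Z[:, j]) for j in range(n_combinations)]
    return A, models

def predict_rlc(A, models, X_new):
    Z_hat = np.column_stack([m.predict(X_new) for m in models])
    # Recover the original targets by least squares: find y with A @ y ≈ z per example.
    Y_hat, *_ = np.linalg.lstsq(A, Z_hat.T, rcond=None)
    return Y_hat.T
```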
Spatiotemporal Sparse Bayesian Learning with Applications to Compressed
Sensing of Multichannel Physiological Signals | cs.IT cs.LG math.IT stat.ML | Energy consumption is an important issue in continuous wireless
telemonitoring of physiological signals. Compressed sensing (CS) is a promising
framework to address it, due to its energy-efficient data compression
procedure. However, most CS algorithms have difficulty in data recovery due to
the non-sparse nature of many physiological signals. Block sparse
Bayesian learning (BSBL) is an effective approach to recover such signals with
satisfactory recovery quality. However, it is time-consuming in recovering
multichannel signals, since its computational load almost linearly increases
with the number of channels.
This work proposes a spatiotemporal sparse Bayesian learning algorithm to
recover multichannel signals simultaneously. It not only exploits temporal
correlation within each channel signal, but also exploits inter-channel
correlation among different channel signals. Furthermore, its computational
load is not significantly affected by the number of channels. The proposed
algorithm was applied to brain computer interface (BCI) and EEG-based driver's
drowsiness estimation. Results showed that the algorithm had both better
recovery performance and much higher speed than BSBL. Particularly, the
proposed algorithm ensured that the BCI classification and the drowsiness
estimation had little degradation even when data were compressed by 80%, making
it very suitable for continuous wireless telemonitoring of multichannel
signals.
| Zhilin Zhang, Tzyy-Ping Jung, Scott Makeig, Zhouyue Pi, Bhaskar D. Rao | 10.1109/TNSRE.2014.2319334 | 1404.5122 | null | null |
GP-Localize: Persistent Mobile Robot Localization using Online Sparse
Gaussian Process Observation Model | cs.RO cs.LG stat.ML | Central to robot exploration and mapping is the task of persistent
localization in environmental fields characterized by spatially correlated
measurements. This paper presents a Gaussian process localization (GP-Localize)
algorithm that, in contrast to existing works, can exploit the spatially
correlated field measurements taken during a robot's exploration (instead of
relying on prior training data) for efficiently and scalably learning the GP
observation model online through our proposed novel online sparse GP. As a
result, GP-Localize is capable of achieving constant time and memory (i.e.,
independent of the size of the data) per filtering step, which demonstrates the
practical feasibility of using GPs for persistent robot localization and
autonomy. Empirical evaluation via simulated experiments with real-world
datasets and a real robot experiment shows that GP-Localize outperforms
existing GP localization algorithms.
| Nuo Xu, Kian Hsiang Low, Jie Chen, Keng Kiat Lim, Etkin Baris Ozgul | null | 1404.5165 | null | null |
Graph Kernels via Functional Embedding | cs.LG cs.AI stat.ML | We propose a representation of a graph as a functional object derived from the
power iteration of the underlying adjacency matrix. The proposed functional
representation is a graph invariant, i.e., the functional remains unchanged
under any reordering of the vertices. This property eliminates the difficulty
of handling exponentially many isomorphic forms. Bhattacharyya kernel
constructed between these functionals significantly outperforms the
state-of-the-art graph kernels on 3 out of the 4 standard benchmark graph
classification datasets, demonstrating the superiority of our approach. The
proposed methodology is simple and runs in time linear in the number of edges,
which makes our kernel more efficient and scalable compared to many widely
adopted graph kernels with running time cubic in the number of vertices.
| Anshumali Shrivastava and Ping Li | null | 1404.5214 | null | null |
Sum-of-squares proofs and the quest toward optimal algorithms | cs.DS cs.CC cs.LG math.OC | In order to obtain the best-known guarantees, algorithms are traditionally
tailored to the particular problem we want to solve. Two recent developments,
the Unique Games Conjecture (UGC) and the Sum-of-Squares (SOS) method,
surprisingly suggest that this tailoring is not necessary and that a single
efficient algorithm could achieve best possible guarantees for a wide range of
different problems.
The Unique Games Conjecture (UGC) is a tantalizing conjecture in
computational complexity, which, if true, will shed light on the complexity of
a great many problems. In particular this conjecture predicts that a single
concrete algorithm provides optimal guarantees among all efficient algorithms
for a large class of computational problems.
The Sum-of-Squares (SOS) method is a general approach for solving systems of
polynomial constraints. This approach is studied in several scientific
disciplines, including real algebraic geometry, proof complexity, control
theory, and mathematical programming, and has found applications in fields as
diverse as quantum information theory, formal verification, game theory and
many others.
We survey some connections that were recently uncovered between the Unique
Games Conjecture and the Sum-of-Squares method. In particular, we discuss new
tools to rigorously bound the running time of the SOS method for obtaining
approximate solutions to hard optimization problems, and how these tools give
the potential for the sum-of-squares method to provide new guarantees for many
problems of interest, and possibly to even refute the UGC.
| Boaz Barak and David Steurer | null | 1404.5236 | null | null |
Concurrent bandits and cognitive radio networks | cs.LG cs.MA | We consider the problem of multiple users targeting the arms of a single
multi-armed stochastic bandit. The motivation for this problem comes from
cognitive radio networks, where selfish users need to coexist without any side
communication between them, implicit cooperation or common control. Even the
number of users may be unknown and can vary as users join or leave the network.
We propose an algorithm that combines an $\epsilon$-greedy learning rule with a
collision avoidance mechanism. We analyze its regret with respect to the
system-wide optimum and show that sub-linear regret can be obtained in this
setting. Experiments show dramatic improvement compared to other algorithms for
this setting.
| Orly Avner and Shie Mannor | null | 1404.5421 | null | null |
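The entry above combines an $\epsilon$-greedy rule with a collision avoidance mechanism; the sketch below is one plausible, hypothetical instantiation rather than the authors' algorithm: each user estimates arm means, plays $\epsilon$-greedily, and backs off to a random arm after a collision.

```python
import numpy as np

class GreedyCollisionUser:
    """One user: epsilon-greedy arm choice plus a crude collision back-off."""
    def __init__(self, n_arms, epsilon=0.05, rng=None):
        self.rng = np.random.default_rng() if rng is None else rng
        self.eps = epsilon
        self.counts = np.zeros(n_arms)
        self.means = np.zeros(n_arms)
        self.next_arm = int(self.rng.integers(n_arms))

    def select(self):
        if self.rng.random() < self.eps:                        # explore
            self.next_arm = int(self.rng.integers(len(self.means)))
        return self.next_arm

    def update(self, arm, reward, collided):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]
        if collided:                                            # another user picked the same arm
            self.next_arm = int(self.rng.integers(len(self.means)))
        else:                                                   # exploit current estimates
            self.next_arm = int(np.argmax(self.means))
```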
Combining pattern-based CRFs and weighted context-free grammars | cs.FL cs.DS cs.LG | We consider two models for the sequence labeling (tagging) problem. The first
one is a {\em Pattern-Based Conditional Random Field }(\PB), in which the
energy of a string (chain labeling) $x=x_1\ldots x_n\in D^n$ is a sum of terms
over intervals $[i,j]$ where each term is non-zero only if the substring
$x_i\ldots x_j$ equals a prespecified word $w\in \Lambda$. The second model is
a {\em Weighted Context-Free Grammar }(\WCFG) frequently used for natural
language processing. \PB and \WCFG encode local and non-local interactions
respectively, and thus can be viewed as complementary.
We propose a {\em Grammatical Pattern-Based CRF model }(\GPB) that combines
the two in a natural way. We argue that it has certain advantages over existing
approaches such as the {\em Hybrid model} of Bened{\'i} and Sanchez that
combines {\em $\mbox{$N$-grams}$} and \WCFGs. The focus of this paper is to
analyze the complexity of inference tasks in a \GPB such as computing MAP. We
present a polynomial-time algorithm for general \GPBs and a faster version for
a special case that we call {\em Interaction Grammars}.
| Rustem Takhanov and Vladimir Kolmogorov | null | 1404.5475 | null | null |
Coactive Learning for Locally Optimal Problem Solving | cs.LG | Coactive learning is an online problem solving setting where the solutions
provided by a solver are interactively improved by a domain expert, which in
turn drives learning. In this paper we extend the study of coactive learning to
problems where obtaining a globally optimal or near-optimal solution may be
intractable or where an expert can only be expected to make small, local
improvements to a candidate solution. The goal of learning in this new setting
is to minimize the cost as measured by the expert effort over time. We first
establish theoretical bounds on the average cost of the existing coactive
Perceptron algorithm. In addition, we consider new online algorithms that use
cost-sensitive and Passive-Aggressive (PA) updates, showing similar or improved
theoretical bounds. We provide an empirical evaluation of the learners in
various domains, which show that the Perceptron based algorithms are quite
effective and that unlike the case for online classification, the PA algorithms
do not yield significant performance gains.
| Robby Goetschalckx, Alan Fern, Prasad Tadepalli | null | 1404.5511 | null | null |
Together we stand, Together we fall, Together we win: Dynamic Team
Formation in Massive Open Online Courses | cs.SI cs.CY cs.LG cs.MA | Massive Open Online Courses (MOOCs) offer a new scalable paradigm for
e-learning by providing students with global exposure and opportunities for
connecting and interacting with millions of people all around the world. Very
often, students work as teams to effectively accomplish course related tasks.
However, due to lack of face to face interaction, it becomes difficult for MOOC
students to collaborate. Additionally, the instructor also faces challenges in
manually organizing students into teams because students flock to these MOOCs
in huge numbers. Thus, the proposed research is aimed at developing a robust
methodology for dynamic team formation in MOOCs, the theoretical framework for
which is grounded at the confluence of organizational team theory, social
network analysis and machine learning. A prerequisite for such an undertaking
is that we understand the fact that, each and every informal tie established
among students offers the opportunities to influence and be influenced.
Therefore, we aim to extract value from the inherent connectedness of students
in the MOOC. These connections carry with them radical implications for the way
students understand each other in the networked learning community. Our
approach will enable course instructors to automatically group students in
teams that have fairly balanced social connections with their peers, well
defined in terms of appropriately selected qualitative and quantitative network
metrics.
| Tanmay Sinha | 10.1109/ICADIWT.2014.6814694 | 1404.5521 | null | null |
Forward - Backward Greedy Algorithms for Atomic Norm Regularization | cs.DS cs.LG math.OC stat.ML | In many signal processing applications, the aim is to reconstruct a signal
that has a simple representation with respect to a certain basis or frame.
Fundamental elements of the basis known as "atoms" allow us to define "atomic
norms" that can be used to formulate convex regularizations for the
reconstruction problem. Efficient algorithms are available to solve these
formulations in certain special cases, but an approach that works well for
general atomic norms, both in terms of speed and reconstruction accuracy,
remains to be found. This paper describes an optimization algorithm called
CoGEnT that produces solutions with succinct atomic representations for
reconstruction problems, generally formulated with atomic-norm constraints.
CoGEnT combines a greedy selection scheme based on the conditional gradient
approach with a backward (or "truncation") step that exploits the quadratic
nature of the objective to reduce the basis size. We establish convergence
properties and validate the algorithm via extensive numerical experiments on a
suite of signal processing applications. Our algorithm and analysis also allow
for inexact forward steps and for occasional enhancements of the current
representation to be performed. CoGEnT can outperform the basic conditional
gradient method, and indeed many methods that are tailored to specific
applications, when the enhancement and truncation steps are defined
appropriately. We also introduce several novel applications that are enabled by
the atomic-norm framework, including tensor completion, moment problems in
signal processing, and graph deconvolution.
| Nikhil Rao, Parikshit Shah, Stephen Wright | 10.1109/TSP.2015.2461515 | 1404.5692 | null | null |
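A minimal sketch of the forward-backward structure described above, specialized to the simplest atomic set (signed coordinate vectors, so the atomic norm is the $\ell_1$ norm) and a least-squares objective; it illustrates a conditional-gradient forward step and a truncation-style backward step, not the authors' CoGEnT implementation:

```python
import numpy as np

def cogent_l1_sketch(A, y, tau, n_iter=50, drop_tol=1e-6):
    n = A.shape[1]
    x = np.zeros(n)
    for t in range(n_iter):
        grad = A.T @ (A @ x - y)
        # Forward step: atom of the l1 ball of radius tau most correlated with -grad.
        j = int(np.argmax(np.abs(grad)))
        atom = np.zeros(n)
        atom[j] = -tau * np.sign(grad[j])
        gamma = 2.0 / (t + 2.0)                      # standard conditional-gradient step size
        x = (1 - gamma) * x + gamma * atom
        # Backward ("truncation") step: drop coordinates whose removal barely hurts.
        for k in np.flatnonzero(x):
            trial = x.copy()
            trial[k] = 0.0
            if np.sum((A @ trial - y) ** 2) <= np.sum((A @ x - y) ** 2) + drop_tol:
                x = trial
    return x
```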
Sequential Click Prediction for Sponsored Search with Recurrent Neural
Networks | cs.IR cs.LG cs.NE | Click prediction is one of the fundamental problems in sponsored search. Most
existing studies have taken advantage of machine learning approaches to predict ad
clicks for each ad view event independently. However, as observed in
real-world sponsored search systems, a user's behavior on ads depends strongly
on how the user behaved in the past, especially in
terms of what queries she submitted, what ads she clicked or ignored, and how
long she spent on the landing pages of clicked ads, etc. Inspired by these
observations, we introduce a novel framework based on Recurrent Neural Networks
(RNN). Compared to traditional methods, this framework directly models the
dependency on user's sequential behaviors into the click prediction process
through the recurrent structure in RNN. Large scale evaluations on the
click-through logs from a commercial search engine demonstrate that our
approach can significantly improve the click prediction accuracy, compared to
sequence-independent approaches.
| Yuyu Zhang, Hanjun Dai, Chang Xu, Jun Feng, Taifeng Wang, Jiang Bian,
Bin Wang and Tie-Yan Liu | null | 1404.5772 | null | null |
A Comparison of Clustering and Missing Data Methods for Health Sciences | math.NA cs.LG | In this paper, we compare and analyze clustering methods with missing data in
health behavior research. In particular, we propose and analyze the use of
matrix completion from compressive sensing along with spectral clustering to
cluster health related data. The empirical tests and real data results show
that these methods can outperform standard methods like LPA and FIML, in terms
of lower misclassification rates in clustering and better matrix completion
performance in missing data problems. According to our examination, a possible
explanation of these improvements is that spectral clustering takes advantage
of high data dimension and compressive sensing methods utilize the
near-to-low-rank property of health data.
| Ran Zhao, Deanna Needell, Christopher Johansen, Jerry L. Grenard | null | 1404.5899 | null | null |
Most Correlated Arms Identification | stat.ML cs.LG | We study the problem of finding the most mutually correlated arms among many
arms. We show that adaptive arms sampling strategies can have significant
advantages over the non-adaptive uniform sampling strategy. Our proposed
algorithms rely on a novel correlation estimator. The use of this accurate
estimator allows us to get improved results for a wide range of problem
instances.
| Che-Yu Liu, S\'ebastien Bubeck | null | 1404.5903 | null | null |
One weird trick for parallelizing convolutional neural networks | cs.NE cs.DC cs.LG | I present a new way to parallelize the training of convolutional neural
networks across multiple GPUs. The method scales significantly better than all
alternatives when applied to modern convolutional neural networks.
| Alex Krizhevsky | null | 1404.5997 | null | null |
Classifying pairs with trees for supervised biological network inference | cs.LG stat.ML | Networks are ubiquitous in biology and computational approaches have been
largely investigated for their inference. In particular, supervised machine
learning methods can be used to complete a partially known network by
integrating various measurements. Two main supervised frameworks have been
proposed: the local approach, which trains a separate model for each network
node, and the global approach, which trains a single model over pairs of nodes.
Here, we systematically investigate, theoretically and empirically, the
exploitation of tree-based ensemble methods in the context of these two
approaches for biological network inference. We first formalize the problem of
network inference as classification of pairs, unifying in the process
homogeneous and bipartite graphs and discussing two main sampling schemes. We
then present the global and the local approaches, extending the latter for the
prediction of interactions between two unseen network nodes, and discuss their
specializations to tree-based ensemble methods, highlighting their
interpretability and drawing links with clustering techniques. Extensive
computational experiments are carried out with these methods on various
biological networks that clearly highlight that these methods are competitive
with existing methods.
| Marie Schrynemackers, Louis Wehenkel, M. Madan Babu and Pierre Geurts | null | 1404.6074 | null | null |
Overlapping Trace Norms in Multi-View Learning | cs.LG | Multi-view learning leverages correlations between different sources of data
to make predictions in one view based on observations in another view. A
popular approach is to assume that, both, the correlations between the views
and the view-specific covariances have a low-rank structure, leading to
inter-battery factor analysis, a model closely related to canonical correlation
analysis. We propose a convex relaxation of this model using structured norm
regularization. Further, we extend the convex formulation to a robust version
by adding an l1-penalized matrix to our estimator, similarly to convex robust
PCA. We develop and compare scalable algorithms for several convex multi-view
models. We show experimentally that the view-specific correlations
improve data imputation performance, as well as labeling accuracy in
real-world multi-label prediction tasks.
| Behrouz Behmardi, Cedric Archambeau, Guillaume Bouchard | null | 1404.6163 | null | null |
CoRE Kernels | stat.ML cs.DS cs.LG stat.ME | The term "CoRE kernel" stands for correlation-resemblance kernel. In many
applications (e.g., vision), the data are often high-dimensional, sparse, and
non-binary. We propose two types of (nonlinear) CoRE kernels for non-binary
sparse data and demonstrate the effectiveness of the new kernels through a
classification experiment. CoRE kernels are simple with no tuning parameters.
However, training nonlinear kernel SVM can be (very) costly in time and memory
and may not be suitable for truly large-scale industrial applications (e.g.
search). In order to make the proposed CoRE kernels more practical, we develop
basic probabilistic hashing algorithms which transform nonlinear kernels into
linear kernels.
| Ping Li | null | 1404.6216 | null | null |
Scalable Similarity Learning using Large Margin Neighborhood Embedding | cs.CV cs.LG | Classifying large-scale image data into object categories is an important
problem that has received increasing research attention. Given the huge amount
of data, non-parametric approaches such as nearest neighbor classifiers have
shown promising results, especially when they are underpinned by a learned
distance or similarity measurement. Although metric learning has been well
studied in the past decades, most existing algorithms are impractical to handle
large-scale data sets. In this paper, we present an image similarity learning
method that can scale well in both the number of images and the dimensionality
of image descriptors. To this end, similarity comparison is restricted to each
sample's local neighbors and a discriminative similarity measure is induced
from large margin neighborhood embedding. We also exploit the ensemble of
projections so that high-dimensional features can be processed in a set of
lower-dimensional subspaces in parallel without much performance compromise.
The similarity function is learned online using a stochastic gradient descent
algorithm in which the triplet sampling strategy is customized for quick
convergence of classification performance. The effectiveness of our proposed
model is validated on several data sets with scales varying from tens of
thousands to one million images. Recognition accuracies competitive with the
state-of-the-art performance are achieved with much higher efficiency and
scalability.
| Zhaowen Wang, Jianchao Yang, Zhe Lin, Jonathan Brandt, Shiyu Chang,
Thomas Huang | null | 1404.6272 | null | null |
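A minimal sketch of online triplet-based similarity learning in the spirit of this entry, using a bilinear similarity $s(x,y)=x^\top W y$ and a hinge loss over sampled triplets; it omits the paper's neighborhood restriction and ensemble of projections, and the function name is hypothetical:

```python
import numpy as np

def triplet_sgd(X, labels, n_steps=10000, margin=1.0, lr=0.01, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    W = np.eye(X.shape[1])                             # bilinear similarity matrix
    for _ in range(n_steps):
        a = rng.integers(len(X))
        pos = rng.choice(np.flatnonzero(labels == labels[a]))
        neg = rng.choice(np.flatnonzero(labels != labels[a]))
        xa, xp, xn = X[a], X[pos], X[neg]
        loss = margin - xa @ W @ xp + xa @ W @ xn      # large-margin triplet hinge
        if loss > 0:                                   # violated triplet: push positive up, negative down
            W += lr * (np.outer(xa, xp) - np.outer(xa, xn))
    return W
```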
Applying machine learning to the problem of choosing a heuristic to
select the variable ordering for cylindrical algebraic decomposition | cs.SC cs.LG | Cylindrical algebraic decomposition (CAD) is a key tool in computational
algebraic geometry, particularly for quantifier elimination over real-closed
fields. When using CAD, there is often a choice for the ordering placed on the
variables. This can be important, with some problems infeasible with one
variable ordering but easy with another. Machine learning is the process of
fitting a computer model to a complex function based on properties learned from
measured data. In this paper we use machine learning (specifically a support
vector machine) to select between heuristics for choosing a variable ordering,
outperforming each of the separate heuristics.
| Zongyan Huang, Matthew England, David Wilson, James H. Davenport,
Lawrence C. Paulson and James Bridge | 10.1007/978-3-319-08434-3_8 | 1404.6369 | null | null |
Multitask Learning for Sequence Labeling Tasks | cs.LG | In this paper, we present a learning method for sequence labeling tasks in
which each example sequence has multiple label sequences. Our method learns
multiple models, one model for each label sequence. Each model computes the
joint probability of all label sequences given the example sequence. Although
each model considers all label sequences, its primary focus is only one label
sequence, and therefore, each model becomes a task-specific model, for the task
belonging to that primary label. Such multiple models are learned {\it
simultaneously} by facilitating the learning transfer among models through {\it
explicit parameter sharing}. We evaluate the proposed method on two
applications and show that our method significantly outperforms the
state-of-the-art method.
| Arvind Agarwal, Saurabh Kataria | null | 1404.6580 | null | null |
A Comparison of First-order Algorithms for Machine Learning | cs.LG | Using an optimization algorithm to solve a machine learning problem is one of
the mainstream approaches in the field. In this work, we present a
comprehensive comparison of some state-of-the-art first-order optimization
algorithms for convex optimization problems in machine learning. We concentrate
on several smooth and non-smooth machine learning problems with a loss function
plus a regularizer. The overall experimental results show the superiority of
primal-dual algorithms in solving machine learning problems in terms of
ease of construction, running time, and accuracy.
| Yu Wei and Pock Thomas | null | 1404.6674 | null | null |
A Constrained Matrix-Variate Gaussian Process for Transposable Data | stat.ML cs.LG | Transposable data represent interactions between two sets of entities, and are
typically represented as a matrix containing the known interaction values.
Additional side information may consist of feature vectors specific to entities
corresponding to the rows and/or columns of such a matrix. Further information
may also be available in the form of interactions or hierarchies among entities
along the same mode (axis). We propose a novel approach for modeling
transposable data with missing interactions given additional side information.
The interactions are modeled as noisy observations from a latent noise free
matrix generated from a matrix-variate Gaussian process. The construction of
row and column covariances using side information provides a flexible mechanism
for specifying a-priori knowledge of the row and column correlations in the
data. Further, the use of such a prior combined with the side information
enables predictions for new rows and columns not observed in the training data.
In this work, we combine the matrix-variate Gaussian process model with low
rank constraints. The constrained Gaussian process approach is applied to the
prediction of hidden associations between genes and diseases using a small set
of observed associations as well as prior covariances induced by gene-gene
interaction networks and disease ontologies. The proposed approach is also
applied to recommender systems data which involves predicting the item ratings
of users using known associations as well as prior covariances induced by
social networks. We present experimental results that highlight the performance
of constrained matrix-variate Gaussian process as compared to state of the art
approaches in each domain.
| Oluwasanmi Koyejo, Cheng Lee, Joydeep Ghosh | null | 1404.6702 | null | null |
Conditional Density Estimation with Dimensionality Reduction via
Squared-Loss Conditional Entropy Minimization | cs.LG stat.ML | Regression aims at estimating the conditional mean of output given input.
However, regression is not informative enough if the conditional density is
multimodal, heteroscedastic, and asymmetric. In such a case, estimating the
conditional density itself is preferable, but conditional density estimation
(CDE) is challenging in high-dimensional space. A naive approach to coping with
high-dimensionality is to first perform dimensionality reduction (DR) and then
execute CDE. However, such a two-step process does not perform well in practice
because the error incurred in the first DR step can be magnified in the second
CDE step. In this paper, we propose a novel single-shot procedure that performs
CDE and DR simultaneously in an integrated way. Our key idea is to formulate DR
as the problem of minimizing a squared-loss variant of conditional entropy, and
this is solved via CDE. Thus, an additional CDE step is not needed after DR. We
demonstrate the usefulness of the proposed method through extensive experiments
on various datasets including humanoid robot transition and computer art.
| Voot Tangkaratt, Ning Xie, and Masashi Sugiyama | null | 1404.6876 | null | null |
Probabilistic graphs using coupled random variables | cs.LG cs.IT cs.NE math.IT | Neural network design has utilized flexible nonlinear processes which can
mimic biological systems, but has suffered from a lack of traceability in the
resulting network. Graphical probabilistic models ground network design in
probabilistic reasoning, but the restrictions reduce the expressive capability
of each node making network designs complex. The ability to model coupled
random variables using the calculus of nonextensive statistical mechanics
provides a neural node design incorporating nonlinear coupling between input
states while maintaining the rigor of probabilistic reasoning. A generalization
of Bayes rule using the coupled product enables a single node to model
correlation between hundreds of random variables. A coupled Markov random field
is designed for the inferencing and classification of UCI's MLR 'Multiple
Features Data Set' such that thousands of linear correlation parameters can be
replaced with a single coupling parameter with just a (3%, 4%)
reduction in (classification, inference) performance.
| Kenric P. Nelson, Madalina Barbu, Brian J. Scannell | 10.1117/12.2050759 | 1404.6955 | null | null |
Multiscale Event Detection in Social Media | cs.SI cs.LG physics.soc-ph stat.ML | Event detection has been one of the most important research topics in social
media analysis. Most of the traditional approaches detect events based on fixed
temporal and spatial resolutions, while in reality events of different scales
usually occur simultaneously, namely, they span different intervals in time and
space. In this paper, we propose a novel approach towards multiscale event
detection using social media data, which takes into account different temporal
and spatial scales of events in the data. Specifically, we explore the
properties of the wavelet transform, which is a well-developed multiscale
transform in signal processing, to enable automatic handling of the interaction
between temporal and spatial scales. We then propose a novel algorithm to
compute a data similarity graph at appropriate scales and detect events of
different scales simultaneously by a single graph-based clustering process.
Furthermore, we present spatiotemporal statistical analysis of the noisy
information present in the data stream, which allows us to define a novel
term-filtering procedure for the proposed event detection algorithm and helps
us study its behavior using simulated noisy data. Experimental results on both
synthetically generated data and real world data collected from Twitter
demonstrate the meaningfulness and effectiveness of the proposed approach. Our
framework further extends to numerous application domains that involve
multiscale and multiresolution data analysis.
| Xiaowen Dong, Dimitrios Mavroeidis, Francesco Calabrese, Pascal
Frossard | 10.1007/s10618-015-0421-2 | 1404.7048 | null | null |
Probably Approximately Correct MDP Learning and Control With Temporal
Logic Constraints | cs.SY cs.LG cs.LO cs.RO | We consider synthesis of control policies that maximize the probability of
satisfying given temporal logic specifications in unknown, stochastic
environments. We model the interaction between the system and its environment
as a Markov decision process (MDP) with initially unknown transition
probabilities. The solution we develop builds on the so-called model-based
probably approximately correct Markov decision process (PAC-MDP) methodology.
The algorithm attains an $\varepsilon$-approximately optimal policy with
probability $1-\delta$ using samples (i.e. observations), time and space that
grow polynomially with the size of the MDP, the size of the automaton
expressing the temporal logic specification, $\frac{1}{\varepsilon}$,
$\frac{1}{\delta}$ and a finite time horizon. In this approach, the system
maintains a model of the initially unknown MDP, and constructs a product MDP
based on its learned model and the specification automaton that expresses the
temporal logic constraints. During execution, the policy is iteratively updated
using observation of the transitions taken by the system. The iteration
terminates in finitely many steps. With high probability, the resulting policy
is such that, for any state, the difference between the probability of
satisfying the specification under this policy and the optimal one is within a
predefined bound.
| Jie Fu and Ufuk Topcu | null | 1404.7073 | null | null |
Fast Approximation of Rotations and Hessians matrices | cs.LG | A new method to represent and approximate rotation matrices is introduced.
The method represents approximations of a rotation matrix $Q$ with linearithmic
complexity, i.e. with $\frac{1}{2}n\lg(n)$ rotations over pairs of coordinates,
arranged in an FFT-like fashion. The approximation is "learned" using gradient
descent. It allows symmetric matrices $H$ to be represented as $QDQ^T$, where $D$ is
a diagonal matrix. It can be used to approximate covariance matrix of Gaussian
models in order to speed up inference, or to estimate and track the inverse
Hessian of an objective function by relating changes in parameters to changes
in gradient along the trajectory followed by the optimization procedure.
Experiments were conducted to approximate synthetic matrices, covariance
matrices of real data, and Hessian matrices of objective functions involved in
machine learning problems.
| Michael Mathieu and Yann LeCun | null | 1404.7195 | null | null |
Meteorological time series forecasting based on MLP modelling using
heterogeneous transfer functions | cs.LG | In this paper, we propose to study four meteorological and seasonal time
series coupled with a multi-layer perceptron (MLP) modeling. We chose to
combine two transfer functions for the nodes of the hidden layer, and to use a
temporal indicator (time index as input) in order to take into account the
seasonal aspect of the studied time series. The results of the prediction
concern two years of measurements and the learning step, eight independent
years. We show that this methodology can improve the accuracy of meteorological
data estimation compared to classical MLP modelling with a homogeneous
transfer function.
| Cyril Voyant (SPE), Marie Laure Nivet (SPE), Christophe Paoli (SPE),
Marc Muselli (SPE), Gilles Notton (SPE) | 10.1088/1742-6596/574/1/012064 | 1404.7255 | null | null |
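A minimal sketch of the architecture this entry describes, assuming one hidden layer split between tanh and Gaussian units and a time index appended to the lagged inputs; only the forward pass is shown, and the weight shapes are left to the caller:

```python
import numpy as np

def mlp_forward(x_lagged, time_index, W1, b1, w2, b2):
    # Time index appended to the lagged meteorological inputs (seasonal cue).
    x = np.concatenate([np.asarray(x_lagged, dtype=float), [float(time_index)]])
    pre = W1 @ x + b1
    h = pre.shape[0] // 2
    hidden = np.concatenate([np.tanh(pre[:h]),           # sigmoidal hidden units
                             np.exp(-pre[h:] ** 2)])      # Gaussian hidden units
    return w2 @ hidden + b2                               # linear output unit
```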
Generalized Nonconvex Nonsmooth Low-Rank Minimization | cs.CV cs.LG stat.ML | As surrogate functions of $L_0$-norm, many nonconvex penalty functions have
been proposed to enhance the sparse vector recovery. It is easy to extend these
nonconvex penalty functions on singular values of a matrix to enhance low-rank
matrix recovery. However, different from convex optimization, solving the
nonconvex low-rank minimization problem is much more challenging than the
nonconvex sparse minimization problem. We observe that all the existing
nonconvex penalty functions are concave and monotonically increasing on
$[0,\infty)$. Thus their gradients are decreasing functions. Based on this
property, we propose an Iteratively Reweighted Nuclear Norm (IRNN) algorithm to
solve the nonconvex nonsmooth low-rank minimization problem. IRNN iteratively
solves a Weighted Singular Value Thresholding (WSVT) problem. By setting the
weight vector as the gradient of the concave penalty function, the WSVT problem
has a closed form solution. In theory, we prove that IRNN decreases the
objective function value monotonically, and any limit point is a stationary
point. Extensive experiments on both synthetic data and real images demonstrate
that IRNN enhances the low-rank matrix recovery compared with state-of-the-art
convex algorithms.
| Canyi Lu, Jinhui Tang, Shuicheng Yan, Zhouchen Lin | 10.1109/CVPR.2014.526 | 1404.7306 | null | null |
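A minimal sketch of one IRNN-style update as described above: a weighted singular value thresholding (WSVT) step whose weights come from the gradient of a concave penalty, here assuming $g(s)=\log(s+\epsilon)$ for illustration, so larger singular values are shrunk less:

```python
import numpy as np

def wsvt(M, weights, step):
    # Closed-form solution of the weighted singular value thresholding problem.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - step * weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

def irnn_step(X, grad_f, step=1.0, eps=1e-3):
    # Weights = gradient of the concave penalty log(s + eps) at the previous
    # singular values: decreasing in s, so large singular values are kept.
    _, s_prev, _ = np.linalg.svd(X, full_matrices=False)
    weights = 1.0 / (s_prev + eps)
    return wsvt(X - step * grad_f(X), weights, step)
```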
Automatic Differentiation of Algorithms for Machine Learning | cs.LG cs.SC stat.ML | Automatic differentiation---the mechanical transformation of numeric computer
programs to calculate derivatives efficiently and accurately---dates to the
origin of the computer age. Reverse mode automatic differentiation both
antedates and generalizes the method of backwards propagation of errors used in
machine learning. Despite this, practitioners in a variety of fields, including
machine learning, have been little influenced by automatic differentiation, and
make scant use of available tools. Here we review the technique of automatic
differentiation, describe its two main modes, and explain how it can benefit
machine learning practitioners. To reach the widest possible audience our
treatment assumes only elementary differential calculus, and does not assume
any knowledge of linear algebra.
| Atilim Gunes Baydin, Barak A. Pearlmutter | null | 1404.7456 | null | null |
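A self-contained toy illustration of reverse-mode automatic differentiation, the mode this entry identifies with backpropagation: each node stores its parents and local partial derivatives, and a reverse sweep applies the chain rule. It is a didactic sketch, not a reflection of any particular tool:

```python
class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value, [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the gradient and propagate it to parents via the chain rule.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Example: f(x, y) = x * y + x, so df/dx = y + 1 and df/dy = x.
x, y = Var(3.0), Var(4.0)
f = x * y + x
f.backward()
print(f.value, x.grad, y.grad)   # 15.0 5.0 3.0
```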
Implementing spectral methods for hidden Markov models with real-valued
emissions | cs.LG | Hidden Markov models (HMMs) are widely used statistical models for modeling
sequential data. The parameter estimation for HMMs from time series data is an
important learning problem. The predominant methods for parameter estimation
are based on local search heuristics, most notably the expectation-maximization
(EM) algorithm. These methods are prone to local optima and oftentimes suffer
from high computational and sample complexity. Recent years saw the emergence
of spectral methods for the parameter estimation of HMMs, based on a method of
moments approach. Two spectral learning algorithms as proposed by Hsu, Kakade
and Zhang 2012 (arXiv:0811.4413) and Anandkumar, Hsu and Kakade 2012
(arXiv:1203.0683) are assessed in this work. Using experiments with synthetic
data, the algorithms are compared with each other. Furthermore, the spectral
methods are compared to the Baum-Welch algorithm, a well-established method
applying the EM algorithm to HMMs. The spectral algorithms are found to have a
much more favorable computational and sample complexity. Even though the
algorithms readily handle high dimensional observation spaces, instability
issues are encountered in this regime. In view of learning from real-world
experimental data, the representation of real-valued observations for the use
in spectral methods is discussed, presenting possible methods to represent data
for the use in the learning algorithms.
| Carl Mattfeld | null | 1404.7472 | null | null |
A Map of Update Constraints in Inductive Inference | cs.LG | We investigate how different learning restrictions reduce learning power and
how the different restrictions relate to one another. We give a complete map
for nine different restrictions both for the cases of complete information
learning and set-driven learning. This completes the picture for these
well-studied \emph{delayable} learning restrictions. A further insight is
gained by different characterizations of \emph{conservative} learning in terms
of variants of \emph{cautious} learning.
Our analyses greatly benefit from general theorems we give, for example
showing that learners with exclusively delayable restrictions can always be
assumed total.
| Timo K\"otzing and Raphaela Palenta | null | 1404.7527 | null | null |
Majority Vote of Diverse Classifiers for Late Fusion | stat.ML cs.LG cs.MM | In the past few years, a lot of attention has been devoted to multimedia
indexing by fusing multimodal informations. Two kinds of fusion schemes are
generally considered: The early fusion and the late fusion. We focus on late
classifier fusion, where one combines the scores of each modality at the
decision level. To tackle this problem, we investigate a recent and elegant
well-founded quadratic program named MinCq coming from the machine learning
PAC-Bayesian theory. MinCq looks for the weighted combination, over a set of
real-valued functions seen as voters, leading to the lowest misclassification
rate, while maximizing the voters' diversity. We propose an extension of MinCq
tailored to multimedia indexing. Our method is based on an order-preserving
pairwise loss adapted to ranking that allows us to improve the Mean Average
Precision measure while taking into account the diversity of the voters that we
want to fuse. We provide evidence that this method is naturally adapted to late
fusion procedures and confirm the good behavior of our approach on the
challenging PASCAL VOC'07 benchmark.
| Emilie Morvant (IST Austria), Amaury Habrard (LHC), St\'ephane Ayache
(LIF) | null | 1404.7796 | null | null |
Deep Learning in Neural Networks: An Overview | cs.NE cs.LG | In recent years, deep artificial neural networks (including recurrent ones)
have won numerous contests in pattern recognition and machine learning. This
historical survey compactly summarises relevant work, much of it from the
previous millennium. Shallow and deep learners are distinguished by the depth
of their credit assignment paths, which are chains of possibly learnable,
causal links between actions and effects. I review deep supervised learning
(also recapitulating the history of backpropagation), unsupervised learning,
reinforcement learning & evolutionary computation, and indirect search for
short programs encoding deep and large networks.
| Juergen Schmidhuber | 10.1016/j.neunet.2014.09.003 | 1404.7828 | null | null |
Learning with incremental iterative regularization | stat.ML cs.LG math.OC math.PR | Within a statistical learning setting, we propose and study an iterative
regularization algorithm for least squares defined by an incremental gradient
method. In particular, we show that, if all other parameters are fixed a
priori, the number of passes over the data (epochs) acts as a regularization
parameter, and prove strong universal consistency, i.e. almost sure convergence
of the risk, as well as sharp finite sample bounds for the iterates. Our
results are a step towards understanding the effect of multiple epochs in
stochastic gradient techniques in machine learning and rely on integrating
statistical and optimization results.
| Lorenzo Rosasco, Silvia Villa | null | 1405.0042 | null | null |
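A minimal sketch of the procedure studied in this entry: incremental (one example at a time) gradient descent for least squares, in which the number of passes over the data acts as the regularization parameter; the step-size choice below is a conservative assumption, not the paper's:

```python
import numpy as np

def incremental_least_squares(X, y, n_epochs, step=None):
    n, d = X.shape
    if step is None:
        step = 1.0 / np.max(np.sum(X ** 2, axis=1))    # conservative per-example step size
    w = np.zeros(d)
    for _ in range(n_epochs):                          # epochs play the role of regularization
        for i in range(n):                             # cycle through examples one at a time
            residual = X[i] @ w - y[i]
            w -= step * residual * X[i]
    return w
```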
Fast MLE Computation for the Dirichlet Multinomial | stat.ML cs.LG | Given a collection of categorical data, we want to find the parameters of a
Dirichlet distribution which maximizes the likelihood of that data. Newton's
method is typically used for this purpose but current implementations require
reading through the entire dataset on each iteration. In this paper, we propose
a modification which requires only a single pass through the dataset and
substantially decreases running time. Furthermore we analyze both theoretically
and empirically the performance of the proposed algorithm, and provide an open
source implementation.
| Max Sklar | null | 1405.0099 | null | null |
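For context, the quantity estimated in this entry can be computed with the classical fixed-point iteration for the Dirichlet-multinomial MLE (Minka); the sketch below shows that baseline, not the paper's single-pass Newton variant:

```python
import numpy as np
from scipy.special import psi   # digamma

def dirichlet_multinomial_mle(counts, n_iter=200):
    counts = np.asarray(counts, dtype=float)        # shape (n_documents, n_categories)
    totals = counts.sum(axis=1)
    alpha = counts.mean(axis=0) + 1e-2               # crude initialization
    for _ in range(n_iter):
        a0 = alpha.sum()
        numer = (psi(counts + alpha) - psi(alpha)).sum(axis=0)
        denom = (psi(totals + a0) - psi(a0)).sum()
        alpha = alpha * numer / denom                 # Minka's fixed-point update
    return alpha
```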
Geodesic Distance Function Learning via Heat Flow on Vector Fields | cs.LG math.DG stat.ML | Learning a distance function or metric on a given data manifold is of great
importance in machine learning and pattern recognition. Many of the previous
works first embed the manifold to Euclidean space and then learn the distance
function. However, such a scheme might not faithfully preserve the distance
function if the original manifold is not Euclidean. Note that the distance
function on a manifold can always be well-defined. In this paper, we propose to
learn the distance function directly on the manifold without embedding. We
first provide a theoretical characterization of the distance function by its
gradient field. Based on our theoretical analysis, we propose to first learn
the gradient field of the distance function and then learn the distance
function itself. Specifically, we set the gradient field of a local distance
function as an initial vector field. Then we transport it to the whole manifold
via heat flow on vector fields. Finally, the geodesic distance function can be
obtained by requiring its gradient field to be close to the normalized vector
field. Experimental results on both synthetic and real data demonstrate the
effectiveness of our proposed algorithm.
| Binbin Lin, Ji Yang, Xiaofei He and Jieping Ye | null | 1405.0133 | null | null |
Exchangeable Variable Models | cs.LG cs.AI | A sequence of random variables is exchangeable if its joint distribution is
invariant under variable permutations. We introduce exchangeable variable
models (EVMs) as a novel class of probabilistic models whose basic building
blocks are partially exchangeable sequences, a generalization of exchangeable
sequences. We prove that a family of tractable EVMs is optimal under zero-one
loss for a large class of functions, including parity and threshold functions,
and strictly subsumes existing tractable independence-based model families.
Extensive experiments show that EVMs outperform state of the art classifiers
such as SVMs and probabilistic models which are solely based on independence
assumptions.
| Mathias Niepert and Pedro Domingos | null | 1405.0501 | null | null |
Complexity of Equivalence and Learning for Multiplicity Tree Automata | cs.LG cs.FL | We consider the complexity of equivalence and learning for multiplicity tree
automata, i.e., weighted tree automata over a field. We first show that the
equivalence problem is logspace equivalent to polynomial identity testing, the
complexity of which is a longstanding open problem. Secondly, we derive lower
bounds on the number of queries needed to learn multiplicity tree automata in
Angluin's exact learning model, over both arbitrary and fixed fields.
Habrard and Oncina (2006) give an exact learning algorithm for multiplicity
tree automata, in which the number of queries is proportional to the size of
the target automaton and the size of a largest counterexample, represented as a
tree, that is returned by the Teacher. However, the smallest
tree-counterexample may be exponential in the size of the target automaton.
Thus the above algorithm does not run in time polynomial in the size of the
target automaton, and has query complexity exponential in the lower bound.
Assuming a Teacher that returns minimal DAG representations of
counterexamples, we give a new exact learning algorithm whose query complexity
is quadratic in the target automaton size, almost matching the lower bound, and
improving the best previously-known algorithm by an exponential factor.
| Ines Marusic and James Worrell | null | 1405.0514 | null | null |
On Lipschitz Continuity and Smoothness of Loss Functions in Learning to
Rank | cs.LG stat.ML | In binary classification and regression problems, it is well understood that
Lipschitz continuity and smoothness of the loss function play key roles in
governing generalization error bounds for empirical risk minimization
algorithms. In this paper, we show how these two properties affect
generalization error bounds in the learning to rank problem. The learning to
rank problem involves vector valued predictions and therefore the choice of the
norm with respect to which Lipschitz continuity and smoothness are defined
becomes crucial. Choosing the $\ell_\infty$ norm in our definition of Lipschitz
continuity allows us to improve existing bounds. Furthermore, under smoothness
assumptions, our choice enables us to prove rates that interpolate between
$1/\sqrt{n}$ and $1/n$ rates. Application of our results to ListNet, a popular
learning to rank method, gives state-of-the-art performance guarantees.
| Ambuj Tewari and Sougata Chaudhuri | null | 1405.0586 | null | null |
Perceptron-like Algorithms and Generalization Bounds for Learning to
Rank | cs.LG stat.ML | Learning to rank is a supervised learning problem where the output space is
the space of rankings but the supervision space is the space of relevance
scores. We make theoretical contributions to the learning to rank problem both
in the online and batch settings. First, we propose a perceptron-like algorithm
for learning a ranking function in an online setting. Our algorithm is an
extension of the classic perceptron algorithm for the classification problem.
Second, in the setting of batch learning, we introduce a sufficient condition
for convex ranking surrogates to ensure a generalization bound that is
independent of the number of objects per query. Our bound holds when linear ranking
functions are used: a common practice in many learning to rank algorithms. En
route to developing the online algorithm and generalization bound, we propose a
novel family of listwise large margin ranking surrogates. Our novel surrogate
family is obtained by modifying a well-known pairwise large margin ranking
surrogate and is distinct from the listwise large margin surrogates developed
using the structured prediction framework. Using the proposed family, we
provide a guaranteed upper bound on the cumulative NDCG (or MAP) induced loss
under the perceptron-like algorithm. We also show that the novel surrogates
satisfy the generalization bound condition.
| Sougata Chaudhuri and Ambuj Tewari | null | 1405.0591 | null | null |
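As a rough illustration of the online part of the abstract above, the following is a minimal numpy sketch of a perceptron-style update for learning a linear ranking function from relevance-labelled documents. The pairwise update rule, learning rate and toy data are assumptions made for the sketch; the paper's actual algorithm is driven by a listwise large-margin surrogate and comes with NDCG/MAP loss bounds that this simplification does not capture.

```python
import numpy as np

def ranking_perceptron(queries, n_features, epochs=10, lr=1.0):
    """Perceptron-style training of a linear scoring function w.

    `queries` is a list of (X, rel) pairs: X is an (n_docs, n_features) array
    of document features for one query, rel an (n_docs,) array of graded
    relevance labels.  Whenever a less relevant document is scored at least
    as high as a more relevant one, w is nudged to correct the ordering.
    """
    w = np.zeros(n_features)
    for _ in range(epochs):
        for X, rel in queries:
            for i in range(len(rel)):
                for j in range(len(rel)):
                    if rel[i] > rel[j] and X[i] @ w <= X[j] @ w:
                        w += lr * (X[i] - X[j])
    return w

# Toy example: two queries, three documents each, four features.
rng = np.random.default_rng(0)
queries = [(rng.normal(size=(3, 4)), np.array([2, 1, 0])) for _ in range(2)]
print("learned weights:", ranking_perceptron(queries, n_features=4))
```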
Optimality guarantees for distributed statistical estimation | cs.IT cs.LG math.IT math.ST stat.TH | Large data sets often require performing distributed statistical estimation,
with a full data set split across multiple machines and limited communication
between machines. To study such scenarios, we define and study some refinements
of the classical minimax risk that apply to distributed settings, comparing to
the performance of estimators with access to the entire data. Lower bounds on
these quantities provide a precise characterization of the minimum amount of
communication required to achieve the centralized minimax risk. We study two
classes of distributed protocols: one in which machines send messages
independently over channels without feedback, and a second allowing for
interactive communication, in which a central server broadcasts the messages
from a given machine to all other machines. We establish lower bounds for a
variety of problems, including location estimation in several families and
parameter estimation in different types of regression models. Our results
include a novel class of quantitative data-processing inequalities used to
characterize the effects of limited communication.
| John C. Duchi and Michael I. Jordan and Martin J. Wainwright and
Yuchen Zhang | null | 1405.0782 | null | null |
On Exact Learning Monotone DNF from Membership Queries | cs.LG | In this paper, we study the problem of learning a monotone DNF with at most
$s$ terms of size (number of variables in each term) at most $r$ ($s$ term
$r$-MDNF) from membership queries. This problem is equivalent to the problem of
learning a general hypergraph using hyperedge-detecting queries, a problem
motivated by applications arising in chemical reactions and genome sequencing.
We first present new lower bounds for this problem and then present
deterministic and randomized adaptive algorithms with query complexities that
are almost optimal. All the algorithms we present in this paper run in time
linear in the query complexity and the number of variables $n$. In addition,
all of the algorithms we present in this paper are asymptotically tight for
fixed $r$ and/or $s$.
| Hasan Abasi and Nader H. Bshouty and Hanna Mazzawi | null | 1405.0792 | null | null |
Generalized Risk-Aversion in Stochastic Multi-Armed Bandits | cs.LG stat.ML | We consider the problem of minimizing the regret in stochastic multi-armed
bandit, when the measure of goodness of an arm is not the mean return, but some
general function of the mean and the variance. We characterize the conditions
under which learning is possible and present examples for which no natural
algorithm can achieve sublinear regret.
| Alexander Zimin and Rasmus Ibsen-Jensen and Krishnendu Chatterjee | null | 1405.0833 | null | null |
Robust Subspace Outlier Detection in High Dimensional Space | cs.AI cs.LG stat.ML | Rare data in a large-scale database are called outliers, and they reveal
significant information about the real world. Subspace-based outlier detection
is regarded as a feasible approach in very high dimensional space. However, the
outliers found in subspaces are in fact only part of the true outliers in high
dimensional space. Outliers hidden among normally clustered points are
sometimes neglected in the projected low-dimensional subspaces. In this paper,
we propose a robust subspace method for detecting such inner outliers in a
given dataset, which uses two projection steps: detecting outliers in subspaces
with a local density ratio in the first projected dimensions, and finding
outliers by comparing neighbors' positions in the second projected dimensions.
Each point's weight is calculated by summing up all related values obtained in
the two projection steps, and then the points scoring the largest weight values
are taken as outliers. Through a series of experiments with the number of
dimensions ranging from 10 to 10000, the results show that our proposed method
achieves high precision in the case of extremely high dimensional space, and
works well in low dimensional space.
| Zhana Bao | null | 1405.0869 | null | null |
Comparing apples to apples in the evaluation of binary coding methods | cs.CV cs.LG | We discuss methodological issues related to the evaluation of unsupervised
binary code construction methods for nearest neighbor search. These issues have
been widely ignored in the literature. These coding methods attempt to preserve
either Euclidean distance or angular (cosine) distance in the binary embedding
space. We explain why when comparing a method whose goal is preserving cosine
similarity to one designed for preserving Euclidean distance, the original
features should be normalized by mapping them to the unit hypersphere before
learning the binary mapping functions. To compare a method whose goal is to
preserve Euclidean distance to one that preserves cosine similarity, the
original feature data must be mapped to a higher dimension by including a bias
term in binary mapping functions. These conditions ensure the fair comparison
between different binary code methods for the task of nearest neighbor search.
Our experiments show under these conditions the very simple methods (e.g. LSH
and ITQ) often outperform recent state-of-the-art methods (e.g. MDSH and
OK-means).
| Mohammad Rastegari, Shobeir Fakhraei, Jonghyun Choi, David Jacobs,
Larry S. Davis | null | 1405.1005 | null | null |
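The two preprocessing steps argued for above are simple to state in code. Below is a small numpy sketch showing both transformations and a check that, once the data are mapped to the unit hypersphere, Euclidean and cosine nearest neighbours coincide. The toy data, the nearest-neighbour check and the choice of appending a constant 1 as the bias coordinate are assumptions for illustration, not the exact evaluation protocol from the paper.

```python
import numpy as np

def to_unit_hypersphere(X):
    """L2-normalize rows; on the unit sphere, Euclidean and cosine
    nearest neighbours are the same."""
    return X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)

def append_bias(X):
    """Lift to d+1 dimensions by adding a constant coordinate, so that a
    binary mapping with a bias term can be written as a purely linear map."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)) * rng.uniform(0.1, 5.0, size=(200, 1))  # varied norms
q = X[0]

def nn_euclid(Z, z):
    return int(np.argmin(np.linalg.norm(Z[1:] - z, axis=1)) + 1)

def nn_cosine(Z, z):
    sims = (Z[1:] @ z) / (np.linalg.norm(Z[1:], axis=1) * np.linalg.norm(z))
    return int(np.argmax(sims) + 1)

Xn = to_unit_hypersphere(X)
print("raw data:   Euclidean NN", nn_euclid(X, q), " cosine NN", nn_cosine(X, q))
print("normalized: Euclidean NN", nn_euclid(Xn, Xn[0]), " cosine NN", nn_cosine(Xn, Xn[0]))
print("with bias dimension:", append_bias(X).shape)
```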
K-NS: Section-Based Outlier Detection in High Dimensional Space | cs.AI cs.LG stat.ML | Finding rare information hidden in a huge amount of data from the Internet is
a necessary but complex issue. Many researchers have studied this issue and
have found effective methods to detect anomaly data in low dimensional space.
However, as the dimension increases, most of these existing methods perform
poorly in detecting outliers because of the "high dimensional curse". Even though
some approaches aim to solve this problem in high dimensional space, they can
only detect some anomalous data appearing in low dimensional space and cannot
detect all of the anomalous data which appear differently in high dimensional space.
To cope with this problem, we propose a new k-nearest section-based method
(k-NS) in a section-based space. Our proposed approach not only detects
outliers in low dimensional space with section-density ratio but also detects
outliers in high dimensional space with the ratio of k-nearest section against
average value. After taking a series of experiments with the dimension from 10
to 10000, the experimental results show that our proposed method achieves 100%
precision and 100% recall in the case of extremely high dimensional space, and
a clear improvement in low dimensional space compared to our previously
proposed method.
| Zhana Bao | null | 1405.1027 | null | null |
Feature selection for classification with class-separability strategy
and data envelopment analysis | cs.LG cs.IT math.IT stat.ML | In this paper, a novel feature selection method is presented, which is based
on Class-Separability (CS) strategy and Data Envelopment Analysis (DEA). To
better capture the relationship between features and the class, class labels
are separated into individual variables and relevance and redundancy are
explicitly handled on each class label. Super-efficiency DEA is employed to
evaluate and rank features via their conditional dependence scores on all class
labels, and the feature with maximum super-efficiency score is then added in
the conditioning set for conditional dependence estimation in the next
iteration, in such a way as to iteratively select features and get the final
selected features. Finally, experiments are conducted to evaluate the
effectiveness of the proposed method compared with four state-of-the-art
methods in terms of classification accuracy. Empirical results verify the
feasibility and superiority of the proposed feature selection method.
| Yishi Zhang, Chao Yang, Anrong Yang, Chan Xiong, Xingchi Zhou, Zigang
Zhang | null | 1405.1119 | null | null |
Combining Multiple Clusterings via Crowd Agreement Estimation and
Multi-Granularity Link Analysis | stat.ML cs.LG | The clustering ensemble technique aims to combine multiple clusterings into a
probably better and more robust clustering, and has been receiving increasing
attention in recent years. Existing clustering ensemble approaches have two
main limitations. Firstly, many approaches lack the ability to weight the base
clusterings without access to the original data and can be affected
significantly by low-quality, or even ill-formed, clusterings.
Secondly, they generally focus on the instance level or cluster level in the
ensemble system and fail to integrate multi-granularity cues into a unified
model. To address these two limitations, this paper proposes to solve the
clustering ensemble problem via crowd agreement estimation and
multi-granularity link analysis. We present the normalized crowd agreement
index (NCAI) to evaluate the quality of base clusterings in an unsupervised
manner and thus weight the base clusterings in accordance with their clustering
validity. To explore the relationship between clusters, the source aware
connected triple (SACT) similarity is introduced with regard to their common
neighbors and the source reliability. Based on NCAI and multi-granularity
information collected among base clusterings, clusters, and data instances, we
further propose two novel consensus functions, termed weighted evidence
accumulation clustering (WEAC) and graph partitioning with multi-granularity
link analysis (GP-MGLA) respectively. The experiments are conducted on eight
real-world datasets. The experimental results demonstrate the effectiveness and
robustness of the proposed methods.
| Dong Huang and Jian-Huang Lai and Chang-Dong Wang | 10.1016/j.neucom.2014.05.094 | 1405.1297 | null | null |
Application of Machine Learning Techniques in Aquaculture | cs.CE cs.LG | In this paper we present applications of different machine learning
algorithms in aquaculture. Machine learning algorithms learn models from
historical data. In aquaculture historical data are obtained from farm
practices, yields, and environmental data sources. Associations between these
different variables can be obtained by applying machine learning algorithms to
historical data. This paper surveys these applications of machine learning
algorithms in aquaculture.
| Akhlaqur Rahman and Sumaira Tasnim | 10.14445/22312803/IJCTT-V10P137 | 1405.1304 | null | null |
Is Joint Training Better for Deep Auto-Encoders? | stat.ML cs.LG cs.NE | Traditionally, when generative models of data are developed via deep
architectures, greedy layer-wise pre-training is employed. In a well-trained
model, the lower layer of the architecture models the data distribution
conditional upon the hidden variables, while the higher layers model the hidden
distribution prior. But due to the greedy scheme of the layerwise training
technique, the parameters of lower layers are fixed when training higher
layers. This makes it extremely challenging for the model to learn the hidden
distribution prior, which in turn leads to a suboptimal model for the data
distribution. We therefore investigate joint training of deep autoencoders,
where the architecture is viewed as one stack of two or more single-layer
autoencoders. A single global reconstruction objective is jointly optimized,
such that the objective for the single autoencoders at each layer acts as a
local, layer-level regularizer. We empirically evaluate the performance of this
joint training scheme and observe that it not only learns a better data model,
but also learns better higher layer representations, which highlights its
potential for unsupervised feature learning. In addition, we find that the
usage of regularizations in the joint training scheme is crucial in achieving
good performance. In the supervised setting, joint training also shows superior
performance when training deeper models. The joint training framework can thus
provide a platform for investigating more efficient usage of different types of
regularizers, especially in light of the growing volumes of available unlabeled
data.
| Yingbo Zhou, Devansh Arpit, Ifeoma Nwogu, Venu Govindaraju | null | 1405.1380 | null | null |
Training Restricted Boltzmann Machine by Perturbation | cs.NE cs.LG stat.ML | A new approach to maximum likelihood learning of discrete graphical models
and RBM in particular is introduced. Our method, Perturb and Descend (PD), is
inspired by two ideas: (I) the perturb-and-MAP method for sampling, and (II)
learning by Contrastive Divergence minimization. In contrast to perturb-and-MAP,
PD leverages training data to learn models that do not allow efficient MAP
estimation. During learning, to produce a sample from the current model, we
start from a training example and descend in the energy landscape of the
"perturbed model" for a fixed number of steps, or until a local optimum is
reached. For RBM, this involves linear calculations and thresholding, which can
be very fast. Furthermore we show that the amount of perturbation is closely
related to the temperature parameter and it can regularize the model by
producing robust features resulting in sparse hidden layer activation.
| Siamak Ravanbakhsh, Russell Greiner, Brendan Frey | null | 1405.1436 | null | null |
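The sampling step described above is easy to sketch for a binary RBM. In the sketch below, the "perturbation" is i.i.d. logistic noise added to the visible and hidden biases and the "descent" is block coordinate minimization of the RBM energy started from a training example; the resulting samples are then plugged into a Contrastive-Divergence-style parameter update. The noise model, step counts and learning rate are illustrative assumptions, not the exact construction analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_and_descend_sample(v0, W, b, c, n_steps=5):
    """Start from training example v0 and do block coordinate descent on the
    energy of a bias-perturbed RBM for a fixed number of steps (only linear
    algebra and thresholding)."""
    b_p = b + rng.logistic(size=b.shape)   # perturbed visible biases
    c_p = c + rng.logistic(size=c.shape)   # perturbed hidden biases
    v = v0.copy()
    for _ in range(n_steps):
        h = (c_p + v @ W > 0).astype(float)   # energy-minimizing hidden state
        v = (b_p + W @ h > 0).astype(float)   # energy-minimizing visible state
    return v

def pd_update(V, W, b, c, lr=0.05):
    """One Perturb-and-Descend parameter update: positive statistics from the
    data, negative statistics from the descended samples (CD-style)."""
    H_data = 1.0 / (1.0 + np.exp(-(c + V @ W)))
    V_neg = np.stack([perturb_and_descend_sample(v, W, b, c) for v in V])
    H_neg = 1.0 / (1.0 + np.exp(-(c + V_neg @ W)))
    W += lr * (V.T @ H_data - V_neg.T @ H_neg) / len(V)
    b += lr * (V - V_neg).mean(axis=0)
    c += lr * (H_data - H_neg).mean(axis=0)
    return W, b, c

# Toy run on random binary data.
V = (rng.random((32, 6)) > 0.5).astype(float)
W, b, c = 0.01 * rng.normal(size=(6, 4)), np.zeros(6), np.zeros(4)
for _ in range(20):
    W, b, c = pd_update(V, W, b, c)
print("learned weight matrix:\n", W)
```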
Adaptation Algorithm and Theory Based on Generalized Discrepancy | cs.LG | We present a new algorithm for domain adaptation improving upon a discrepancy
minimization algorithm previously shown to outperform a number of algorithms
for this task. Unlike many previous algorithms for domain adaptation, our
algorithm does not consist of a fixed reweighting of the losses over the
training sample. We show that our algorithm benefits from a solid theoretical
foundation and more favorable learning bounds than discrepancy minimization. We
present a detailed description of our algorithm and give several efficient
solutions for solving its optimization problem. We also report the results of
several experiments showing that it outperforms discrepancy minimization.
| Corinna Cortes and Mehryar Mohri and Andres Mu\~noz Medina | null | 1405.1503 | null | null |
A Mathematical Theory of Learning | cs.LG cs.AI cs.IT math.IT | In this paper, a mathematical theory of learning is proposed that has many
parallels with information theory. We consider Vapnik's General Setting of
Learning in which the learning process is defined to be the act of selecting a
hypothesis in response to a given training set. Such hypothesis can, for
example, be a decision boundary in classification, a set of centroids in
clustering, or a set of frequent item-sets in association rule mining.
Depending on the hypothesis space and how the final hypothesis is selected, we
show that a learning process can be assigned a numeric score, called learning
capacity, which is analogous to Shannon's channel capacity and satisfies
similar interesting properties as well such as the data-processing inequality
and the information-cannot-hurt inequality. In addition, learning capacity
provides the tightest possible bound on the difference between true risk and
empirical risk of the learning process for all loss functions that are
parametrized by the chosen hypothesis. It is also shown that the notion of
learning capacity equivalently quantifies how sensitive the choice of the final
hypothesis is to a small perturbation in the training set. Consequently,
algorithmic stability is both necessary and sufficient for generalization.
While the theory does not rely on concentration inequalities, we finally show
that analogs to classical results in learning theory using the Probably
Approximately Correct (PAC) model can be immediately deduced using this theory,
and conclude with information-theoretic bounds to learning capacity.
| Ibrahim Alabdulmohsin | null | 1405.1513 | null | null |
A consistent deterministic regression tree for non-parametric prediction
of time series | math.ST cs.LG stat.ML stat.TH | We study online prediction of bounded stationary ergodic processes. To do so,
we consider the setting of prediction of individual sequences and build a
deterministic regression tree that performs asymptotically as well as the best
L-Lipschitz constant predictors. Then, we show why the obtained regret bound
entails asymptotic optimality with respect to the class of bounded
stationary ergodic processes.
| Pierre Gaillard (GREGH), Paul Baudin (INRIA Rocquencourt) | null | 1405.1533 | null | null |
On Communication Cost of Distributed Statistical Estimation and
Dimensionality | cs.LG cs.IT math.IT | We explore the connection between dimensionality and communication cost in
distributed learning problems. Specifically we study the problem of estimating
the mean $\vec{\theta}$ of an unknown $d$ dimensional gaussian distribution in
the distributed setting. In this problem, the samples from the unknown
distribution are distributed among $m$ different machines. The goal is to
estimate the mean $\vec{\theta}$ at the optimal minimax rate while
communicating as few bits as possible. We show that in this setting, the
communication cost scales linearly in the number of dimensions, i.e., one needs
to deal with different dimensions individually. Applying this result to
previous lower bounds for one dimension in the interactive setting
\cite{ZDJW13} and to our improved bounds for the simultaneous setting, we prove
new lower bounds of $\Omega(md/\log(m))$ and $\Omega(md)$ for the bits of
communication needed to achieve the minimax squared loss, in the interactive
and simultaneous settings respectively. To complement these results, we also demonstrate an
interactive protocol achieving the minimax squared loss with $O(md)$ bits of
communication, which improves upon the simple simultaneous protocol by a
logarithmic factor. Given the strong lower bounds in the general setting, we
initiate the study of the distributed parameter estimation problems with
structured parameters. Specifically, when the parameter is promised to be
$s$-sparse, we show a simple thresholding based protocol that achieves the same
squared loss while saving a $d/s$ factor of communication. We conjecture that
the tradeoff between communication and squared loss demonstrated by this
protocol is essentially optimal up to a logarithmic factor.
| Ankit Garg and Tengyu Ma and Huy L. Nguyen | null | 1405.1665 | null | null |
Texture Based Image Segmentation of Chili Pepper X-Ray Images Using
Gabor Filter | cs.CV cs.LG | Texture segmentation is the process of partitioning an image into regions
with different textures containing a similar group of pixels. Detecting the
discontinuities in the filter outputs and their statistical properties helps in
segmenting and classifying a given image into different texture regions. In
this paper, texture segmentation of chili pepper X-ray images is performed
using a Gabor filter. The texture-segmented result obtained from the Gabor
filter is fed into three texture filters, namely the Entropy, Standard
Deviation and Range filters. After performing texture analysis, features are
extracted using statistical methods. In this paper, Gray Level Co-occurrence
Matrices and first-order statistics are used as the feature extraction methods.
The extracted features are given to a Support Vector Machine (SVM) classifier.
Using this methodology, it is found that texture segmentation followed by Gray
Level Co-occurrence Matrix feature extraction gives a higher accuracy rate of
84% compared with first-order feature extraction.
Key Words: Texture segmentation, Texture filter, Gabor filter, Feature
extraction methods, SVM classifier.
| M.Rajalakshmi and Dr. P.Subashini | null | 1405.1966 | null | null |
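A compact, self-contained version of this pipeline can be sketched with numpy, scipy and scikit-learn: build a small Gabor filter bank, summarize each image patch by first-order statistics of the filter responses, and classify the patches with an SVM. The kernel parameters, patch size and synthetic striped textures below are placeholder assumptions, and the GLCM step from the paper is replaced here by first-order statistics to keep the sketch short.

```python
import numpy as np
from scipy.signal import convolve2d
from sklearn.svm import SVC

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, wavelength=6.0, gamma=0.5):
    """Real part of a Gabor kernel: Gaussian envelope times a cosine wave."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / wavelength)

def patch_features(image, patch=16, thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Filter with a 4-orientation Gabor bank, then describe each patch by the
    mean, standard deviation and range of every filter response."""
    responses = [convolve2d(image, gabor_kernel(theta=t), mode="same") for t in thetas]
    feats = []
    for i in range(0, image.shape[0] - patch + 1, patch):
        for j in range(0, image.shape[1] - patch + 1, patch):
            row = []
            for r in responses:
                block = r[i:i + patch, j:j + patch]
                row += [block.mean(), block.std(), block.max() - block.min()]
            feats.append(row)
    return np.array(feats)

# Toy data: two stripe orientations standing in for two texture classes.
rng = np.random.default_rng(0)
def stripes(orientation):
    wave = np.sin(np.arange(64) / 3.0)
    img = np.tile(wave, (64, 1)) if orientation == 0 else np.tile(wave[:, None], (1, 64))
    return img + 0.1 * rng.normal(size=(64, 64))

X0 = patch_features(stripes(0))
X1 = patch_features(stripes(1))
X = np.vstack([X0, X1])
y = np.array([0] * len(X0) + [1] * len(X1))
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```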
Improving Image Clustering using Sparse Text and the Wisdom of the
Crowds | cs.LG cs.CV | We propose a method to improve image clustering using sparse text and the
wisdom of the crowds. In particular, we present a method to fuse two different
kinds of document features, image and text features, and use a common
dictionary or "wisdom of the crowds" as the connection between the two
different kinds of documents. With the proposed fusion matrix, we use topic
modeling via non-negative matrix factorization to cluster documents.
| Anna Ma, Arjuna Flenner, Deanna Needell, Allon G. Percus | null | 1405.2102 | null | null |
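A bare-bones version of the fusion-and-cluster step can be sketched with scikit-learn's NMF. The random placeholder features, the simple sum-normalized concatenation used as the "fusion matrix", and the assignment of each document to its dominant topic are all assumptions for illustration; the paper's common-dictionary ("wisdom of the crowds") construction is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_docs, d_img, d_txt, k = 40, 30, 50, 3

# Placeholder nonnegative features: dense image descriptors and sparse text counts.
img_feats = rng.random((n_docs, d_img))
txt_feats = (rng.random((n_docs, d_txt)) < 0.05) * rng.integers(1, 4, (n_docs, d_txt))

def scale(M):
    """Normalize a feature block so neither modality dominates the fusion."""
    return M / (M.sum() + 1e-12)

# Simple fusion: scaling followed by horizontal concatenation.
fused = np.hstack([scale(img_feats), scale(txt_feats)])

# Topic modelling via NMF; each document is assigned to its dominant topic.
W = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0).fit_transform(fused)
labels = W.argmax(axis=1)
print("cluster sizes:", np.bincount(labels, minlength=k))
```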
Training Deep Fourier Neural Networks To Fit Time-Series Data | cs.NE cs.LG | We present a method for training a deep neural network containing sinusoidal
activation functions to fit to time-series data. Weights are initialized using
a fast Fourier transform, then trained with regularization to improve
generalization. A simple dynamic parameter tuning method is employed to adjust
both the learning rate and regularization term, such that stability and
efficient training are both achieved. We show how deeper layers can be utilized
to model the observed sequence using a sparser set of sinusoid units, and how
non-uniform regularization can improve generalization by promoting the shifting
of weight toward simpler units. The method is demonstrated with time-series
problems to show that it leads to effective extrapolation of nonlinear trends.
| Michael S. Gashler and Stephen C. Ashmore | null | 1405.2262 | null | null |
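A stripped-down version of the initialization idea can be sketched in numpy: pick the strongest frequencies from the FFT of the training series, then fit the sinusoid weights with ridge-regularized least squares and extrapolate. The number of units, the ridge penalty, and the fact that only the output weights are refined (rather than training a deep network with dynamic learning-rate tuning) are simplifying assumptions of this sketch.

```python
import numpy as np

def fft_init_frequencies(y, n_units=8):
    """Pick the n_units strongest frequencies from the FFT of the series."""
    spectrum = np.fft.rfft(y - y.mean())
    freqs = np.fft.rfftfreq(len(y), d=1.0)
    top = np.argsort(np.abs(spectrum))[::-1][:n_units]
    return freqs[top]

def fit_sinusoid_layer(t, y, freqs, ridge=1e-3):
    """With frequencies fixed by the FFT, fit sin/cos weights (amplitudes and
    phases) by ridge-regularized least squares; return a prediction function."""
    def features(tq):
        return np.hstack([np.sin(2 * np.pi * np.outer(tq, freqs)),
                          np.cos(2 * np.pi * np.outer(tq, freqs)),
                          np.ones((len(tq), 1))])
    Phi = features(t)
    w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ y)
    return lambda tq: features(tq) @ w

# Toy series: two sinusoids plus noise; fit on the first 200 points,
# then extrapolate 50 steps ahead.
rng = np.random.default_rng(0)
t = np.arange(250.0)
y = np.sin(2 * np.pi * t / 23) + 0.5 * np.sin(2 * np.pi * t / 7) + 0.1 * rng.normal(size=t.size)
freqs = fft_init_frequencies(y[:200])
model = fit_sinusoid_layer(t[:200], y[:200], freqs)
print("extrapolation RMSE:", np.sqrt(np.mean((model(t[200:]) - y[200:]) ** 2)))
```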
Hellinger Distance Trees for Imbalanced Streams | cs.LG astro-ph.IM stat.ML | Classifiers trained on data sets possessing an imbalanced class distribution
are known to exhibit poor generalisation performance. This is known as the
imbalanced learning problem. The problem becomes particularly acute when we
consider incremental classifiers operating on imbalanced data streams,
especially when the learning objective is rare class identification. As
accuracy may provide a misleading impression of performance on imbalanced data,
existing stream classifiers based on accuracy can suffer poor minority class
performance on imbalanced streams, with the result being low minority class
recall rates. In this paper we address this deficiency by proposing the use of
the Hellinger distance measure, as a very fast decision tree split criterion.
We demonstrate that by using the Hellinger distance a statistically significant improvement
in recall rates on imbalanced data streams can be achieved, with an acceptable
increase in the false positive rate.
| R. J. Lyon, J. M. Brooke, J. D. Knowles, B. W. Stappers | null | 1405.2278 | null | null |
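The split criterion itself is only a few lines. The sketch below, following the standard Hellinger-distance decision tree formulation, scores a candidate threshold by the Hellinger distance between the class-conditional distributions induced by the two branches; unlike accuracy or information gain, the score does not depend on the class priors. The streaming (incremental) tree machinery from the paper is not shown, and the exhaustive threshold search and toy imbalanced data are assumptions for the demo.

```python
import numpy as np

def hellinger_split_score(x, y, threshold):
    """Hellinger distance between the class-conditional distributions of the
    indicator 'x <= threshold', for binary labels y in {0, 1}."""
    left = x <= threshold
    pos, neg = (y == 1), (y == 0)
    # Fraction of each class falling on the left side of the split.
    p_left = left[pos].mean() if pos.any() else 0.0
    n_left = left[neg].mean() if neg.any() else 0.0
    return np.sqrt((np.sqrt(p_left) - np.sqrt(n_left)) ** 2
                   + (np.sqrt(1 - p_left) - np.sqrt(1 - n_left)) ** 2)

def best_threshold(x, y):
    """Exhaustive search over candidate thresholds for one feature."""
    candidates = np.unique(x)
    scores = [hellinger_split_score(x, y, t) for t in candidates]
    return candidates[int(np.argmax(scores))], max(scores)

# Imbalanced toy data: 5% positives, shifted in feature space.
rng = np.random.default_rng(0)
y = (rng.random(2000) < 0.05).astype(int)
x = rng.normal(loc=2.0 * y, scale=1.0)
print(best_threshold(x, y))
```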
Nonparametric Detection of Anomalous Data Streams | cs.LG stat.ML | A nonparametric anomalous hypothesis testing problem is investigated, in
which there are n sequences in total, among which s anomalous sequences are to be detected.
Each typical sequence contains m independent and identically distributed
(i.i.d.) samples drawn from a distribution p, whereas each anomalous sequence
contains m i.i.d. samples drawn from a distribution q that is distinct from p.
The distributions p and q are assumed to be unknown in advance.
Distribution-free tests are constructed using maximum mean discrepancy as the
metric, which is based on mean embeddings of distributions into a reproducing
kernel Hilbert space. The probability of error is bounded as a function of the
sample size m, the number s of anomalous sequences and the number n of
sequences. It is then shown that with s known, the constructed test is
exponentially consistent if m is greater than a constant factor of log n, for
any p and q, whereas with s unknown, m should have an order strictly greater
than log n. Furthermore, it is shown that no test can be consistent for
arbitrary p and q if m is less than a constant factor of log n, thus the
order-level optimality of the proposed test is established. Numerical results
are provided to demonstrate that our tests outperform (or perform as well as)
the tests based on other competitive approaches under various cases.
| Shaofeng Zou, Yingbin Liang, H. Vincent Poor, Xinghua Shi | null | 1405.2294 | null | null |
A Hybrid Monte Carlo Architecture for Parameter Optimization | stat.ML cs.LG stat.ME | Much recent research has been conducted in the area of Bayesian learning,
particularly with regard to the optimization of hyper-parameters via Gaussian
process regression. The methodologies rely chiefly on the method of maximizing
the expected improvement of a score function with respect to adjustments in the
hyper-parameters. In this work, we present a novel algorithm that exploits
notions of confidence intervals and uncertainties to enable the discovery of
the best optimum within a targeted region of the parameter space. We
demonstrate the efficacy of our algorithm with respect to machine learning
problems and show cases where our algorithm is competitive with the method of
maximizing expected improvement.
| James Brofos | null | 1405.2377 | null | null |
Optimal Learners for Multiclass Problems | cs.LG | The fundamental theorem of statistical learning states that for binary
classification problems, any Empirical Risk Minimization (ERM) learning rule
has close to optimal sample complexity. In this paper we seek for a generic
optimal learner for multiclass prediction. We start by proving a surprising
result: a generic optimal multiclass learner must be improper, namely, it must
have the ability to output hypotheses which do not belong to the hypothesis
class, even though it knows that all the labels are generated by some
hypothesis from the class. In particular, no ERM learner is optimal. This
brings back the fundamental question of "how to learn"? We give a complete
answer to this question by giving a new analysis of the one-inclusion
multiclass learner of Rubinstein et al (2006) showing that its sample
complexity is essentially optimal. Then, we turn to study the popular
hypothesis class of generalized linear classifiers. We derive optimal learners
that, unlike the one-inclusion algorithm, are computationally efficient.
Furthermore, we show that the sample complexity of these learners is better
than the sample complexity of the ERM rule, thus settling in the negative an open
question due to Collins (2005).
| Amit Daniely and Shai Shalev-Shwartz | null | 1405.2420 | null | null |
Functional Bandits | stat.ML cs.LG | We introduce the functional bandit problem, where the objective is to find an
arm that optimises a known functional of the unknown arm-reward distributions.
These problems arise in many settings such as maximum entropy methods in
natural language processing, and risk-averse decision-making, but current
best-arm identification techniques fail in these domains. We propose a new
approach, that combines functional estimation and arm elimination, to tackle
this problem. This method achieves provably efficient performance guarantees.
In addition, we illustrate this method on a number of important functionals in
risk management and information theory, and refine our generic theoretical
results in those cases.
| Long Tran-Thanh and Jia Yuan Yu | null | 1405.2432 | null | null |
A Canonical Semi-Deterministic Transducer | cs.LG | We prove the existence of a canonical form for semi-deterministic transducers
with incomparable sets of output strings. Based on this, we develop an
algorithm which learns semi-deterministic transducers given access to
translation queries. We also prove that there is no learning algorithm for
semi-deterministic transducers that uses only domain knowledge.
| Achilles Beros, Colin de la Higuera | null | 1405.2476 | null | null |
Learning from networked examples | cs.AI cs.LG stat.ML | Many machine learning algorithms are based on the assumption that training
examples are drawn independently. However, this assumption does not hold
anymore when learning from a networked sample because two or more training
examples may share some common objects, and hence share the features of these
shared objects. We show that the classic approach of ignoring this problem
potentially can have a harmful effect on the accuracy of statistics, and then
consider alternatives. One of these is to only use independent examples,
discarding other information. However, this is clearly suboptimal. We analyze
sample error bounds in this networked setting, providing significantly improved
results. An important component of our approach is formed by efficient sample
weighting schemes, which leads to novel concentration inequalities.
| Yuyi Wang and Jan Ramon and Zheng-Chu Guo | null | 1405.2600 | null | null |
Structural Return Maximization for Reinforcement Learning | stat.ML cs.LG | Batch Reinforcement Learning (RL) algorithms attempt to choose a policy from
a designer-provided class of policies given a fixed set of training data.
Choosing the policy which maximizes an estimate of return often leads to
over-fitting when only limited data is available, due to the size of the policy
class in relation to the amount of data available. In this work, we focus on
learning policy classes that are appropriately sized to the amount of data
available. We accomplish this by using the principle of Structural Risk
Minimization, from Statistical Learning Theory, which uses Rademacher
complexity to identify a policy class that maximizes a bound on the return of
the best policy in the chosen policy class, given the available data. Unlike
similar batch RL approaches, our bound on return requires only extremely weak
assumptions on the true system.
| Joshua Joseph, Javier Velez, Nicholas Roy | null | 1405.2606 | null | null |
Sharp Finite-Time Iterated-Logarithm Martingale Concentration | math.PR cs.LG stat.ML | We give concentration bounds for martingales that are uniform over finite
times and extend classical Hoeffding and Bernstein inequalities. We also
demonstrate our concentration bounds to be optimal with a matching
anti-concentration inequality, proved using the same method. Together these
constitute a finite-time version of the law of the iterated logarithm, and shed
light on the relationship between it and the central limit theorem.
| Akshay Balsubramani | null | 1405.2639 | null | null |
Selecting Near-Optimal Approximate State Representations in
Reinforcement Learning | cs.LG | We consider a reinforcement learning setting introduced in (Maillard et al.,
NIPS 2011) where the learner does not have explicit access to the states of the
underlying Markov decision process (MDP). Instead, she has access to several
models that map histories of past interactions to states. Here we improve over
known regret bounds in this setting, and more importantly generalize to the
case where the models given to the learner do not contain a true model
resulting in an MDP representation but only approximations of it. We also give
improved error bounds for state aggregation.
| Ronald Ortner, Odalric-Ambrym Maillard, Daniil Ryabko | null | 1405.2652 | null | null |
FastMMD: Ensemble of Circular Discrepancy for Efficient Two-Sample Test | cs.AI cs.LG stat.ML | The maximum mean discrepancy (MMD) is a recently proposed test statistic for
two-sample test. Its quadratic time complexity, however, greatly hampers its
applicability to large-scale applications. To accelerate the MMD calculation, in
this study we propose an efficient method called FastMMD. The core idea of
FastMMD is to equivalently transform the MMD with shift-invariant kernels into
the amplitude expectation of a linear combination of sinusoid components based
on Bochner's theorem and Fourier transform (Rahimi & Recht, 2007). Taking
advantage of sampling of Fourier transform, FastMMD decreases the time
complexity for MMD calculation from $O(N^2 d)$ to $O(L N d)$, where $N$ and $d$
are the size and dimension of the sample set, respectively. Here $L$ is the
number of basis functions for approximating kernels which determines the
approximation accuracy. For kernels that are spherically invariant, the
computation can be further accelerated to $O(L N \log d)$ by using the Fastfood
technique (Le et al., 2013). The uniform convergence of our method has also
been theoretically proved for both unbiased and biased estimates. We have
further provided a geometric explanation of our method, namely as an ensemble
of circular discrepancies, which helps in understanding the insight behind MMD
and may motivate further metrics for assessing two-sample tests. Experimental
results substantiate that FastMMD achieves accuracy similar to that of the
exact MMD, while offering faster computation and lower variance than existing
MMD approximation methods.
| Ji Zhao, Deyu Meng | 10.1162/NECO_a_00732 | 1405.2664 | null | null |
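In the same spirit, a linear-time MMD approximation with random Fourier features for the Gaussian kernel takes only a few lines of numpy. This generic sketch uses plain Gaussian sampling of the spectral frequencies; the number of basis functions L, the kernel bandwidth and the toy data are assumptions, and the Fastfood acceleration and the exact sampling scheme from the paper are not reproduced.

```python
import numpy as np

def rff_features(X, W, b):
    """Random Fourier features: z(x) = sqrt(2/L) * cos(Wx + b)."""
    L = W.shape[0]
    return np.sqrt(2.0 / L) * np.cos(X @ W.T + b)

def fast_mmd(X, Y, gamma=1.0, L=256, seed=0):
    """Approximate (biased) MMD for the Gaussian kernel k(x,y)=exp(-gamma||x-y||^2)
    in O(L n d) time by comparing mean random Fourier feature maps."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(L, d))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=L)
    return np.linalg.norm(rff_features(X, W, b).mean(axis=0)
                          - rff_features(Y, W, b).mean(axis=0))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
Y = rng.normal(loc=0.5, size=(500, 5))   # shifted distribution
Z = rng.normal(size=(500, 5))            # same distribution as X
print("MMD(X, Y) =", fast_mmd(X, Y), " MMD(X, Z) =", fast_mmd(X, Z))
```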
Policy Gradients for CVaR-Constrained MDPs | stat.ML cs.LG math.OC | We study a risk-constrained version of the stochastic shortest path (SSP)
problem, where the risk measure considered is Conditional Value-at-Risk (CVaR).
We propose two algorithms that obtain a locally risk-optimal policy by
employing four tools: stochastic approximation, mini batches, policy gradients
and importance sampling. Both the algorithms incorporate a CVaR estimation
procedure, along the lines of Bardou et al. [2009], which in turn is based on
Rockafellar-Uryasev's representation for CVaR and utilize the likelihood ratio
principle for estimating the gradient of the sum of one cost function
(objective of the SSP) and the gradient of the CVaR of the sum of another cost
function (in the constraint of SSP). The algorithms differ in the manner in
which they approximate the CVaR estimates/necessary gradients - the first
algorithm uses stochastic approximation, while the second employs mini-batches
in the spirit of Monte Carlo methods. We establish asymptotic convergence of
both the algorithms. Further, since estimating CVaR is related to rare-event
simulation, we incorporate an importance sampling based variance reduction
scheme into our proposed algorithms.
| Prashanth L.A. | null | 1405.2690 | null | null |
Two-Stage Metric Learning | cs.LG cs.AI stat.ML | In this paper, we present a novel two-stage metric learning algorithm. We
first map each learning instance to a probability distribution by computing its
similarities to a set of fixed anchor points. Then, we define the distance in
the input data space as the Fisher information distance on the associated
statistical manifold. This induces in the input data space a new family of
distance metrics with unique properties. Unlike kernelized metric learning, we
do not require the similarity measure to be positive semi-definite. Moreover,
it can also be interpreted as a local metric learning algorithm with well
defined distance approximation. We evaluate its performance on a number of
datasets. It significantly outperforms other metric learning methods and SVM.
| Jun Wang, Ke Sun, Fei Sha, Stephane Marchand-Maillet, Alexandros
Kalousis | null | 1405.2798 | null | null |
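A schematic version of the two stages can be sketched with numpy and scikit-learn: represent each point by a softmax distribution over its similarities to a set of anchor points, then measure distances with the closed-form Fisher-Rao distance between multinomials. Using k-means centroids as anchors, a Gaussian similarity with a fixed bandwidth, and this particular closed form are assumptions of the sketch rather than the exact construction in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def to_simplex(X, anchors, bandwidth=1.0):
    """Stage 1: represent each point as a probability distribution over a set
    of anchor points, using a softmax of negative squared distances."""
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    logits = -d2 / (2 * bandwidth ** 2)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)

def fisher_rao_distance(p, q):
    """Stage 2: Fisher information (Fisher-Rao) distance between two
    multinomial distributions on the probability simplex."""
    return 2.0 * np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
anchors = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X).cluster_centers_
P = to_simplex(X, anchors)
print("within-cluster distance :", fisher_rao_distance(P[0], P[1]))
print("between-cluster distance:", fisher_rao_distance(P[0], P[35]))
```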
Adaptive Contract Design for Crowdsourcing Markets: Bandit Algorithms
for Repeated Principal-Agent Problems | cs.DS cs.GT cs.LG | Crowdsourcing markets have emerged as a popular platform for matching
available workers with tasks to complete. The payment for a particular task is
typically set by the task's requester, and may be adjusted based on the quality
of the completed work, for example, through the use of "bonus" payments. In
this paper, we study the requester's problem of dynamically adjusting
quality-contingent payments for tasks. We consider a multi-round version of the
well-known principal-agent model, whereby in each round a worker makes a
strategic choice of the effort level which is not directly observable by the
requester. In particular, our formulation significantly generalizes the
budget-free online task pricing problems studied in prior work.
We treat this problem as a multi-armed bandit problem, with each "arm"
representing a potential contract. To cope with the large (and in fact,
infinite) number of arms, we propose a new algorithm, AgnosticZooming, which
discretizes the contract space into a finite number of regions, effectively
treating each region as a single arm. This discretization is adaptively
refined, so that more promising regions of the contract space are eventually
discretized more finely. We analyze this algorithm, showing that it achieves
regret sublinear in the time horizon and substantially improves over
non-adaptive discretization (which is the only competing approach in the
literature).
Our results advance the state of art on several different topics: the theory
of crowdsourcing markets, principal-agent problems, multi-armed bandits, and
dynamic pricing.
| Chien-Ju Ho, Aleksandrs Slivkins, Jennifer Wortman Vaughan | null | 1405.2875 | null | null |
Approximate Policy Iteration Schemes: A Comparison | cs.AI cs.LG stat.ML | We consider the infinite-horizon discounted optimal control problem
formalized by Markov Decision Processes. We focus on several approximate
variations of the Policy Iteration algorithm: Approximate Policy Iteration,
Conservative Policy Iteration (CPI), a natural adaptation of the Policy Search
by Dynamic Programming algorithm to the infinite-horizon case (PSDP$_\infty$),
and the recently proposed Non-Stationary Policy iteration (NSPI(m)). For all
algorithms, we describe performance bounds, and make a comparison by paying a
particular attention to the concentrability constants involved, the number of
iterations and the memory required. Our analysis highlights the following
points: 1) The performance guarantee of CPI can be arbitrarily better than that
of API/API($\alpha$), but this comes at the cost of a relative---exponential in
$\frac{1}{\epsilon}$---increase of the number of iterations. 2) PSDP$_\infty$
enjoys the best of both worlds: its performance guarantee is similar to that of
CPI, but within a number of iterations similar to that of API. 3) Contrary to
API that requires a constant memory, the memory needed by CPI and PSDP$_\infty$
is proportional to their number of iterations, which may be problematic when
the discount factor $\gamma$ is close to 1 or the approximation error
$\epsilon$ is close to $0$; we show that the NSPI(m) algorithm allows to make
an overall trade-off between memory and performance. Simulations with these
schemes confirm our analysis.
| Bruno Scherrer (INRIA Nancy - Grand Est / LORIA) | null | 1405.2878 | null | null |
Accelerating Minibatch Stochastic Gradient Descent using Stratified
Sampling | stat.ML cs.LG math.OC | Stochastic Gradient Descent (SGD) is a popular optimization method which has
been applied to many important machine learning tasks such as Support Vector
Machines and Deep Neural Networks. In order to parallelize SGD, minibatch
training is often employed. The standard approach is to uniformly sample a
minibatch at each step, which often leads to high variance. In this paper we
propose a stratified sampling strategy, which divides the whole dataset into
clusters with low within-cluster variance; we then take examples from these
clusters using a stratified sampling technique. It is shown that the
convergence rate can be significantly improved by the algorithm. Encouraging
experimental results confirm the effectiveness of the proposed method.
| Peilin Zhao, Tong Zhang | null | 1405.3080 | null | null |
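The sampling scheme is straightforward to sketch: cluster the data once up front (k-means here), then draw each minibatch by sampling from every cluster with per-stratum counts proportional to the stratum sizes. The clustering method, number of clusters, step size and the toy least-squares objective below are illustrative assumptions; the paper's analysis concerns the variance reduction this brings to the stochastic gradients.

```python
import numpy as np
from sklearn.cluster import KMeans

def stratified_minibatch_indices(labels, batch_size, rng):
    """Draw a minibatch by sampling from every cluster (stratum), with the
    per-stratum count proportional to the stratum size."""
    clusters, counts = np.unique(labels, return_counts=True)
    quota = np.maximum(1, np.round(batch_size * counts / counts.sum()).astype(int))
    idx = [rng.choice(np.where(labels == c)[0], size=q, replace=True)
           for c, q in zip(clusters, quota)]
    return np.concatenate(idx)

# Toy least-squares problem optimized by minibatch SGD with stratified sampling.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (500, 5)) for m in (-2, 0, 2)])
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=len(X))

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
w = np.zeros(5)
for step in range(1000):
    batch = stratified_minibatch_indices(labels, batch_size=30, rng=rng)
    grad = 2 * X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
    w -= 0.05 * grad
print("parameter error:", np.linalg.norm(w - w_true))
```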
Circulant Binary Embedding | stat.ML cs.LG | Binary embedding of high-dimensional data requires long codes to preserve the
discriminative power of the input space. Traditional binary coding methods
often suffer from very high computation and storage costs in such a scenario.
To address this problem, we propose Circulant Binary Embedding (CBE) which
generates binary codes by projecting the data with a circulant matrix. The
circulant structure enables the use of Fast Fourier Transformation to speed up
the computation. Compared to methods that use unstructured matrices, the
proposed method improves the time complexity from $\mathcal{O}(d^2)$ to
$\mathcal{O}(d\log{d})$, and the space complexity from $\mathcal{O}(d^2)$ to
$\mathcal{O}(d)$ where $d$ is the input dimensionality. We also propose a novel
time-frequency alternating optimization to learn data-dependent circulant
projections, which alternatively minimizes the objective in original and
Fourier domains. We show by extensive experiments that the proposed approach
gives much better performance than the state-of-the-art approaches for fixed
time, and provides much faster computation with no performance degradation for
fixed number of bits.
| Felix X. Yu, Sanjiv Kumar, Yunchao Gong, Shih-Fu Chang | null | 1405.3162 | null | null |
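The randomized (data-independent) variant of this embedding is essentially a one-liner with the FFT. The sketch below draws a random circulant defining vector and random sign flips and takes the sign of the circular convolution; the learned, time-frequency-optimized projections proposed in the paper are not reproduced, and the code dimensions and toy points are assumptions.

```python
import numpy as np

def circulant_binary_embedding(X, seed=0):
    """Binary codes via projection with a (sign-randomized) circulant matrix,
    computed in O(d log d) per point with the FFT:  h(x) = sign(r * (D x)),
    where * denotes circular convolution."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    r = rng.normal(size=d)                 # defines the circulant projection
    D = rng.choice([-1.0, 1.0], size=d)    # random sign flips (Bernoulli diagonal)
    fr = np.fft.fft(r)
    fx = np.fft.fft(X * D, axis=1)
    proj = np.fft.ifft(fr[None, :] * fx, axis=1).real
    return (proj >= 0).astype(np.uint8)

# Nearby points should receive similar codes (small Hamming distance).
rng = np.random.default_rng(1)
x = rng.normal(size=(1, 64))
x_near = x + 0.05 * rng.normal(size=(1, 64))
x_far = rng.normal(size=(1, 64))
codes = circulant_binary_embedding(np.vstack([x, x_near, x_far]))
print("Hamming(x, near):", int((codes[0] != codes[1]).sum()),
      " Hamming(x, far):", int((codes[0] != codes[2]).sum()))
```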
Clustering, Hamming Embedding, Generalized LSH and the Max Norm | cs.LG | We study the convex relaxation of clustering and hamming embedding, focusing
on the asymmetric case (co-clustering and asymmetric hamming embedding),
understanding their relationship to LSH as studied by (Charikar 2002) and to
the max-norm ball, and the differences between their symmetric and asymmetric
versions.
| Behnam Neyshabur, Yury Makarychev, Nathan Srebro | null | 1405.3167 | null | null |
Locally Boosted Graph Aggregation for Community Detection | cs.LG cs.SI physics.soc-ph | Learning the right graph representation from noisy, multi-source data has
garnered significant interest in recent years. A central tenet of this problem
is relational learning. Here the objective is to incorporate the partial
information each data source gives us in a way that captures the true
underlying relationships. To address this challenge, we present a general,
boosting-inspired framework for combining weak evidence of entity associations
into a robust similarity metric. Building on previous work, we explore the
extent to which different local quality measurements yield graph
representations that are suitable for community detection. We present empirical
results on a variety of datasets demonstrating the utility of this framework,
especially with respect to real datasets where noise and scale present serious
challenges. Finally, we prove a convergence theorem in an ideal setting and
outline future research into other application domains.
| Jeremy Kun, Rajmonda Caceres, Kevin Carter | null | 1405.3210 | null | null |
Efficient Implementations of the Generalized Lasso Dual Path Algorithm | stat.CO cs.LG stat.ML | We consider efficient implementations of the generalized lasso dual path
algorithm of Tibshirani and Taylor (2011). We first describe a generic approach
that covers any penalty matrix D and any (full column rank) matrix X of
predictor variables. We then describe fast implementations for the special
cases of trend filtering problems, fused lasso problems, and sparse fused lasso
problems, both with X=I and a general matrix X. These specialized
implementations offer a considerable improvement over the generic
implementation, both in terms of numerical stability and efficiency of the
solution path computation. These algorithms are all available for use in the
genlasso R package, which can be found in the CRAN repository.
| Taylor Arnold and Ryan Tibshirani | null | 1405.3222 | null | null |
On the Complexity of A/B Testing | math.ST cs.LG stat.ML stat.TH | A/B testing refers to the task of determining the best option among two
alternatives that yield random outcomes. We provide distribution-dependent
lower bounds for the performance of A/B testing that improve over the results
currently available both in the fixed-confidence (or delta-PAC) and
fixed-budget settings. When the distributions of the outcomes are Gaussian, we
prove that the complexities of the fixed-confidence and fixed-budget settings
are equivalent, and that uniform sampling of both alternatives is optimal only in
the case of equal variances. In the common variance case, we also provide a
stopping rule that terminates faster than existing fixed-confidence algorithms.
In the case of Bernoulli distributions, we show that the complexity of
fixed-budget setting is smaller than that of fixed-confidence setting and that
uniform sampling of both alternatives, though not optimal, is advisable in
practice when combined with an appropriate stopping criterion.
| Emilie Kaufmann (LTCI), Olivier Capp\'e (LTCI), Aur\'elien Garivier
(IMT) | null | 1405.3224 | null | null |
Rate of Convergence and Error Bounds for LSTD($\lambda$) | cs.LG cs.AI math.OC math.ST stat.TH | We consider LSTD($\lambda$), the least-squares temporal-difference algorithm
with eligibility traces algorithm proposed by Boyan (2002). It computes a
linear approximation of the value function of a fixed policy in a large Markov
Decision Process. Under a $\beta$-mixing assumption, we derive, for any value
of $\lambda \in (0,1)$, a high-probability estimate of the rate of convergence
of this algorithm to its limit. We deduce a high-probability bound on the error
of this algorithm, that extends (and slightly improves) that derived by Lazaric
et al. (2012) in the specific case where $\lambda=0$. In particular, our
analysis sheds some light on the choice of $\lambda$ with respect to the
quality of the chosen linear space and the number of samples, which is
consistent with simulations.
| Manel Tagorti (INRIA Nancy - Grand Est / LORIA), Bruno Scherrer (INRIA
Nancy - Grand Est / LORIA) | null | 1405.3229 | null | null |
Learning with many experts: model selection and sparsity | stat.ME cs.LG | Experts classifying data are often imprecise. Recently, several models have
been proposed to train classifiers using the noisy labels generated by these
experts. How to choose between these models? In such situations, the true
labels are unavailable. Thus, one cannot perform model selection using the
standard versions of methods such as empirical risk minimization and cross
validation. In order to allow model selection, we present a surrogate loss and
provide theoretical guarantees that assure its consistency. Next, we discuss
how this loss can be used to tune a penalization which introduces sparsity in
the parameters of a traditional class of models. Sparsity provides more
parsimonious models and can avoid overfitting. Nevertheless, it has seldom been
discussed in the context of noisy labels due to the difficulty in model
selection and, therefore, in choosing tuning parameters. We apply these
techniques to several sets of simulated and real data.
| Rafael Izbicki, Rafael Bassi Stern | 10.1002/sam.11206 | 1405.3292 | null | null |
Effects of Sampling Methods on Prediction Quality. The Case of
Classifying Land Cover Using Decision Trees | stat.ML cs.LG stat.AP | Clever sampling methods can be used to improve the handling of big data and
increase its usefulness. The subject of this study is remote sensing,
specifically airborne laser scanning point clouds representing different
classes of ground cover. The aim is to derive a supervised learning model for
the classification using CARTs. In order to measure the effect of different
sampling methods on the classification accuracy, various experiments with
varying types of sampling methods, sample sizes, and accuracy metrics have been
designed. Numerical results for a subset of a large surveying project covering
the lower Rhine area in Germany are shown. General conclusions regarding
sampling design are drawn and presented.
| Ronald Hochreiter and Christoph Waldhauser | null | 1405.3295 | null | null |
Optimal Exploration-Exploitation in a Multi-Armed-Bandit Problem with
Non-stationary Rewards | cs.LG math.OC math.PR stat.ML | In a multi-armed bandit (MAB) problem a gambler needs to choose at each round
of play one of K arms, each characterized by an unknown reward distribution.
Reward realizations are only observed when an arm is selected, and the
gambler's objective is to maximize his cumulative expected earnings over some
given horizon of play T. To do this, the gambler needs to acquire information
about arms (exploration) while simultaneously optimizing immediate rewards
(exploitation); the price paid due to this trade off is often referred to as
the regret, and the main question is how small can this price be as a function
of the horizon length T. This problem has been studied extensively when the
reward distributions do not change over time; an assumption that supports a
sharp characterization of the regret, yet is often violated in practical
settings. In this paper, we focus on a MAB formulation which allows for a broad
range of temporal uncertainties in the rewards, while still maintaining
mathematical tractability. We fully characterize the (regret) complexity of
this class of MAB problems by establishing a direct link between the extent of
allowable reward "variation" and the minimal achievable regret. Our analysis
draws some connections between two rather disparate strands of literature: the
adversarial and the stochastic MAB frameworks.
| Omar Besbes, Yonatan Gur, Assaf Zeevi | null | 1405.3316 | null | null |
Adaptive Monte Carlo via Bandit Allocation | cs.AI cs.LG | We consider the problem of sequentially choosing between a set of unbiased
Monte Carlo estimators to minimize the mean-squared-error (MSE) of a final
combined estimate. By reducing this task to a stochastic multi-armed bandit
problem, we show that well developed allocation strategies can be used to
achieve an MSE that approaches that of the best estimator chosen in retrospect.
We then extend these developments to a scenario where alternative estimators
have different, possibly stochastic costs. The outcome is a new set of adaptive
Monte Carlo strategies that provide stronger guarantees than previous
approaches while offering practical advantages.
| James Neufeld, Andr\'as Gy\"orgy, Dale Schuurmans, Csaba Szepesv\'ari | null | 1405.3318 | null | null |
Active Mining of Parallel Video Streams | cs.CV cs.LG | The practicality of a video surveillance system is adversely limited by the
amount of queries that can be placed on human resources and their vigilance in
response. To transcend this limitation, a major effort under way is to include
software that (fully or at least semi) automatically mines video footage,
reducing the burden imposed on the system. Herein, we propose a semi-supervised
incremental learning framework for evolving visual streams in order to develop
a robust and flexible track classification system. Our proposed method learns
from consecutive batches by updating an ensemble each time. It tries to
strike a balance between the performance of the system and the amount of data
that needs to be labelled. As no restriction is imposed, the system can address
many practical problems in an evolving multi-camera scenario, such as concept
drift, class evolution and various length of video streams which have not been
addressed before. Experiments were performed on synthetic as well as real-world
visual data in non-stationary environments, showing high accuracy with fairly
little human collaboration.
| Samaneh Khoshrou, Jaime S. Cardoso, Luis F. Teixeira | null | 1405.3382 | null | null |
Reducing Dueling Bandits to Cardinal Bandits | cs.LG | We present algorithms for reducing the Dueling Bandits problem to the
conventional (stochastic) Multi-Armed Bandits problem. The Dueling Bandits
problem is an online model of learning with ordinal feedback of the form "A is
preferred to B" (as opposed to cardinal feedback like "A has value 2.5"),
giving it wide applicability in learning from implicit user feedback and
revealed and stated preferences. In contrast to existing algorithms for the
Dueling Bandits problem, our reductions -- named $\Doubler$, $\MultiSbm$ and
$\DoubleSbm$ -- provide a generic schema for translating the extensive body of
known results about conventional Multi-Armed Bandit algorithms to the Dueling
Bandits setting. For $\Doubler$ and $\MultiSbm$ we prove regret upper bounds in
both finite and infinite settings, and conjecture about the performance of
$\DoubleSbm$ which empirically outperforms the other two as well as previous
algorithms in our experiments. In addition, we provide the first almost optimal
regret bound in terms of second order terms, such as the differences between
the values of the arms.
| Nir Ailon and Thorsten Joachims and Zohar Karnin | null | 1405.3396 | null | null |
Efficient classification using parallel and scalable compressed model
and Its application on intrusion detection | cs.LG cs.CR | In order to achieve high efficiency of classification in intrusion detection,
a compressed model is proposed in this paper which combines horizontal
compression with vertical compression. OneR is utilized as horizontal
com-pression for attribute reduction, and affinity propagation is employed as
vertical compression to select small representative exemplars from large
training data. As to be able to computationally compress the larger volume of
training data with scalability, MapReduce based parallelization approach is
then implemented and evaluated for each step of the model compression process
abovementioned, on which common but efficient classification methods can be
directly used. Experimental application study on two publicly available
datasets of intrusion detection, KDD99 and CMDC2012, demonstrates that the
classification using the compressed model proposed can effectively speed up the
detection procedure by up to 184 times, most importantly at the cost of a
minimal accuracy difference of less than 1% on average.
| Tieming Chen, Xu Zhang, Shichao Jin, Okhee Kim | 10.1016/j.eswa.2014.04.009 | 1405.3410 | null | null |
Improving offline evaluation of contextual bandit algorithms via
bootstrapping techniques | stat.ML cs.LG | In many recommendation applications such as news recommendation, the items
that can be recommended come and go at a very fast pace. This is a challenging
setting for recommender systems (RS) to face. Online learning algorithms seem
to be the most straightforward solution. The contextual bandit framework was
introduced for that very purpose. In general the evaluation of a RS is a
critical issue. Live evaluation is often avoided due to the potential loss of
revenue, hence the need for offline evaluation methods. Two options are
available. Model-based methods are biased by nature and are thus difficult to
trust when used alone. Data-driven methods are therefore what we consider here.
Evaluating online learning algorithms with past data is not simple but some
methods exist in the literature. Nonetheless their accuracy is not
satisfactory, mainly due to their mechanism of data rejection that only allows
the exploitation of a small fraction of the data. We precisely address this
issue in this paper. After highlighting the limitations of the previous
methods, we present a new method based on bootstrapping techniques. This new
method comes with two important improvements: it is much more accurate and it
provides a measure of the quality of its estimation. The latter is a highly
desirable property in order to minimize the risks entailed by putting online a
RS for the first time. We provide both theoretical and experimental proofs of
its superiority compared to state-of-the-art methods, as well as an analysis
of the convergence of the measure of quality.
| Olivier Nicol (INRIA Lille - Nord Europe, LIFL), J\'er\'emie Mary
(INRIA Lille - Nord Europe, LIFL), Philippe Preux (INRIA Lille - Nord Europe,
LIFL) | null | 1405.3536 | null | null |
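A minimal sketch of bootstrapped replay evaluation, in the spirit of the abstract above but under simplifying assumptions: the log is assumed to come from a uniformly random policy over K arms, and the evaluated policy is a fixed stand-in rather than a learning algorithm. Resampling the log and replaying each resample gives both a mean estimate and a spread (the quality measure the abstract emphasizes).

```python
import numpy as np

# Minimal sketch: classical "replay" keeps a logged record only when the
# evaluated policy picks the logged arm; bootstrapping the log yields a
# distribution of replay estimates (estimate + quality measure).
rng = np.random.default_rng(1)
K, N, d = 5, 5000, 4
contexts = rng.normal(size=(N, d))
logged_arms = rng.integers(0, K, size=N)              # uniform logging policy
true_w = rng.normal(size=(K, d))
click_prob = 1.0 / (1.0 + np.exp(-np.einsum('nd,nd->n', true_w[logged_arms], contexts)))
rewards = (rng.random(N) < click_prob).astype(float)

# Stand-in for the policy being evaluated: a fixed, slightly perturbed greedy
# rule; in practice this would be the online learning algorithm's choice.
eval_w = true_w + rng.normal(0.0, 0.5, size=true_w.shape)
policy_arms = np.argmax(contexts @ eval_w.T, axis=1)

def replay_estimate(idx):
    keep = idx[policy_arms[idx] == logged_arms[idx]]  # data-rejection step
    return rewards[keep].mean() if len(keep) else np.nan

B = 200  # number of bootstrap resamples (illustrative)
boot = np.array([replay_estimate(rng.integers(0, N, size=N)) for _ in range(B)])
print("bootstrap estimate: %.3f +/- %.3f" % (np.nanmean(boot), np.nanstd(boot)))
```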
Global disease monitoring and forecasting with Wikipedia | cs.SI cs.LG physics.soc-ph | Infectious disease is a leading threat to public health, economic stability,
and other key social structures. Efforts to mitigate these impacts depend on
accurate and timely monitoring to measure the risk and progress of disease.
Traditional, biologically-focused monitoring techniques are accurate but costly
and slow; in response, new techniques based on social internet data such as
social media and search queries are emerging. These efforts are promising, but
important challenges in the areas of scientific peer review, breadth of
diseases and countries, and forecasting hamper their operational usefulness.
We examine a freely available, open data source for this use: access logs
from the online encyclopedia Wikipedia. Using linear models, language as a
proxy for location, and a systematic yet simple article selection procedure, we
tested 14 location-disease combinations and demonstrate that these data
feasibly support an approach that overcomes these challenges. Specifically, our
proof-of-concept yields models with $r^2$ up to 0.92, forecasting value up to
the 28 days tested, and several pairs of models similar enough to suggest that
transferring models from one location to another without re-training is
feasible.
Based on these preliminary results, we close with a research agenda designed
to overcome these challenges and produce a disease monitoring and forecasting
system that is significantly more effective, robust, and globally comprehensive
than the current state of the art.
| Nicholas Generous (1), Geoffrey Fairchild (1), Alina Deshpande (1),
Sara Y. Del Valle (1), Reid Priedhorsky (1) ((1) Los Alamos National
Laboratory, Los Alamos, NM) | 10.1371/journal.pcbi.1003892 | 1405.3612 | null | null |
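A hedged sketch of the kind of linear model the abstract above describes: regress official case counts on lagged Wikipedia article access counts and report held-out $r^2$. The arrays below are synthetic stand-ins, and the 4-week lag, article count, and train/test split are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Synthetic stand-in: weekly access counts for a few disease-related Wikipedia
# articles, and the official case counts they are meant to proxy.
rng = np.random.default_rng(2)
weeks, n_articles = 156, 6
page_views = rng.poisson(lam=200, size=(weeks, n_articles)).astype(float)
cases = 0.05 * page_views[:, 0] + 0.02 * page_views[:, 3] + rng.normal(0, 3, weeks)

lag = 4  # forecast horizon in weeks (the paper tests horizons up to 28 days)
X, y = page_views[:-lag], cases[lag:]          # views now -> cases `lag` weeks later
split = int(0.8 * len(X))
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("held-out r^2: %.2f" % r2_score(y[split:], pred))
```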
Topic words analysis based on LDA model | cs.SI cs.DC cs.IR cs.LG stat.ML | Social network analysis (SNA), a research field that describes and
models the social connections of a group of people, is widely used in network
services. Our topic words analysis project is an SNA method for visualizing the
topic words in emails sent from Obama.com to accounts registered in Columbus,
Ohio. Based on the Latent Dirichlet Allocation (LDA) model, a popular topic
model in SNA, our project characterizes the preferences of senders for a target
group of recipients. Gibbs sampling is used to estimate the topic and word
distributions. Our training and testing data are emails from the carbon-free
server Datagreening.com. We use the parallel computing tool BashReduce for word
processing and generate related words under each latent topic to discover
typical information in political news sent specifically to local Columbus
recipients. Running on two instances with BashReduce, our project achieves
almost a 30% speedup in processing the raw contents compared with processing
them on one local instance. The experimental results also show that the LDA
model used in our project finds target words with a precision rate 53.96%
higher than a TF-IDF model, provided an appropriate size of the topic words
list is selected.
| Xi Qiu and Christopher Stewart | null | 1405.3726 | null | null |
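A small sketch of extracting topic words with LDA, as in the abstract above, using gensim on a toy stand-in corpus. Note that gensim's `LdaModel` infers topics with online variational Bayes rather than the Gibbs sampler mentioned in the abstract, and the BashReduce parallelization step is omitted; the document list and hyperparameters are illustrative.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy stand-in for the email corpus; real input would be tokenized email bodies.
docs = [
    ["campaign", "donate", "volunteer", "ohio", "columbus"],
    ["healthcare", "policy", "vote", "senate", "bill"],
    ["donate", "grassroots", "event", "columbus", "rally"],
    ["economy", "jobs", "policy", "vote", "ohio"],
]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# LdaModel uses online variational Bayes by default, not collapsed Gibbs
# sampling; the topic-word output is analogous either way.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=20, random_state=0)
for topic_id in range(2):
    print(topic_id, lda.show_topic(topic_id, topn=5))
```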
Logistic Regression: Tight Bounds for Stochastic and Online Optimization | cs.LG | The logistic loss function is often advocated in machine learning and
statistics as a smooth and strictly convex surrogate for the 0-1 loss. In this
paper we investigate the question of whether these smoothness and convexity
properties make the logistic loss preferable to other widely considered options
such as the hinge loss. We show that in contrast to known asymptotic bounds, as
long as the number of prediction/optimization iterations is sub-exponential,
the logistic loss provides no improvement over a generic non-smooth loss
function such as the hinge loss. In particular we show that the convergence
rate of stochastic logistic optimization is bounded from below by a polynomial
in the diameter of the decision set and the number of prediction iterations,
and provide a matching tight upper bound. This resolves the COLT open problem
of McMahan and Streeter (2012).
| Elad Hazan, Tomer Koren, Kfir Y. Levy | null | 1405.3843 | null | null |
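For reference, the two surrogate losses compared in the abstract above, in their standard textbook form (this is only the definitions, not the paper's bounds); the margin notation $z = y\,\langle w, x\rangle$ is an assumed convention.

```latex
% Standard definitions, for a margin z = y \langle w, x \rangle with y in {-1,+1}:
\[
  \ell_{\mathrm{log}}(z) \;=\; \log\!\left(1 + e^{-z}\right),
  \qquad
  \ell_{\mathrm{hinge}}(z) \;=\; \max\{0,\; 1 - z\}.
\]
% The logistic loss is smooth and strictly convex, the hinge loss is neither;
% the abstract's lower bound says this smoothness buys no improvement in the
% sub-exponential iteration regime.
```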
Methods and Models for Interpretable Linear Classification | stat.ME cs.LG stat.ML | We present an integer programming framework to build accurate and
interpretable discrete linear classification models. Unlike existing
approaches, our framework is designed to provide practitioners with the control
and flexibility they need to tailor accurate and interpretable models for a
domain of choice. To this end, our framework can produce models that are fully
optimized for accuracy, by minimizing the 0--1 classification loss, and that
address multiple aspects of interpretability, by incorporating a range of
discrete constraints and penalty functions. We use our framework to produce
models that are difficult to create with existing methods, such as scoring
systems and M-of-N rule tables. In addition, we propose specially designed
optimization methods to improve the scalability of our framework through
decomposition and data reduction. We show that discrete linear classifiers can
attain the training accuracy of any other linear classifier, and provide an
Occam's Razor type argument as to why the use of small discrete coefficients
can provide better generalization. We demonstrate the performance and
flexibility of our framework through numerical experiments and a case study in
which we construct a highly tailored clinical tool for sleep apnea diagnosis.
| Berk Ustun and Cynthia Rudin | null | 1405.4047 | null | null |
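A hedged sketch of the small-integer-coefficient idea behind the scoring systems discussed above: each feature gets an integer point value and the prediction is the sign of the total score. The paper builds such models with integer programming; the code below simply brute-forces a tiny coefficient range on a toy dataset, so the data, coefficient range, and sparsity penalty are all illustrative assumptions.

```python
import itertools
import numpy as np

# Minimal sketch of a scoring system: small integer points per feature, predict
# by the sign of the total score, minimize 0-1 loss plus a small sparsity penalty.
rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(200, 4)).astype(float)      # 4 binary risk factors
y = np.where(X[:, 0] + X[:, 1] - X[:, 3] >= 1, 1, -1)    # hidden "true" rule

best_loss, best_model = np.inf, None
for coeffs in itertools.product(range(-2, 3), repeat=X.shape[1]):
    for intercept in range(-3, 4):
        scores = X @ np.array(coeffs) + intercept
        loss = np.mean(np.sign(scores) != y) + 0.01 * np.count_nonzero(coeffs)
        if loss < best_loss:
            best_loss, best_model = loss, (coeffs, intercept)

coeffs, intercept = best_model
train_err = np.mean(np.sign(X @ np.array(coeffs) + intercept) != y)
print("points per feature:", coeffs, "intercept:", intercept, "training error:", train_err)
```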
Distributed Representations of Sentences and Documents | cs.CL cs.AI cs.LG | Many machine learning algorithms require the input to be represented as a
fixed-length feature vector. When it comes to texts, one of the most common
fixed-length features is bag-of-words. Despite their popularity, bag-of-words
features have two major weaknesses: they lose the ordering of the words and
they also ignore semantics of the words. For example, "powerful," "strong" and
"Paris" are equally distant. In this paper, we propose Paragraph Vector, an
unsupervised algorithm that learns fixed-length feature representations from
variable-length pieces of texts, such as sentences, paragraphs, and documents.
Our algorithm represents each document by a dense vector which is trained to
predict words in the document. Its construction gives our algorithm the
potential to overcome the weaknesses of bag-of-words models. Empirical results
show that Paragraph Vectors outperform bag-of-words models as well as other
techniques for text representations. Finally, we achieve new state-of-the-art
results on several text classification and sentiment analysis tasks.
| Quoc V. Le and Tomas Mikolov | null | 1405.4053 | null | null |
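A short sketch of learning fixed-length document vectors in the spirit of the Paragraph Vector abstract above, using gensim's Doc2Vec (a widely used implementation of this family of models). The corpus and hyperparameters below are toy stand-ins, not the paper's experimental setup.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Tiny stand-in corpus; real use would tokenize full documents.
texts = [
    "the movie was powerful and moving",
    "a strong and memorable performance",
    "paris is the capital of france",
    "the film felt weak and forgettable",
]
corpus = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(texts)]

# Illustrative hyperparameters only; real settings depend on the task.
model = Doc2Vec(corpus, vector_size=32, window=3, min_count=1, epochs=100)

# Each training document now has a learned fixed-length vector, and new text
# can be embedded by inference without retraining.
new_vec = model.infer_vector("a powerful strong movie".split())
print(model.dv.most_similar([new_vec], topn=2))
```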