title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Monte-Carlo utility estimates for Bayesian reinforcement learning | cs.LG stat.ML | This paper introduces a set of algorithms for Monte-Carlo Bayesian
reinforcement learning. Firstly, Monte-Carlo estimation of upper bounds on the
Bayes-optimal value function is employed to construct an optimistic policy.
Secondly, gradient-based algorithms for approximate upper and lower bounds are
introduced. Finally, we introduce a new class of gradient algorithms for
Bayesian Bellman error minimisation. We theoretically show that the gradient
methods are sound. Experimentally, we demonstrate the superiority of the upper
bound method in terms of reward obtained. However, we also show that the
Bayesian Bellman error method is a close second, despite its significant
computational simplicity.
| Christos Dimitrakakis | 10.1109/CDC.2013.6761048 | 1303.2506 | null | null |
Revealing Cluster Structure of Graph by Path Following Replicator
Dynamic | cs.LG cs.GT | In this paper, we propose a path following replicator dynamic, and
investigate its potentials in uncovering the underlying cluster structure of a
graph. The proposed dynamic is a generalization of the discrete replicator
dynamic. The replicator dynamic has been successfully used to extract dense
clusters of graphs; however, it is often sensitive to the degree distribution
of a graph, and usually biased by vertices with large degrees, thus may fail to
detect the densest cluster. To overcome this problem, we introduce a dynamic
parameter, called path parameter, into the evolution process. The path
parameter can be interpreted as the maximal possible probability of a current
cluster containing a vertex, and it monotonically increases as evolution
process proceeds. By limiting the maximal probability, the phenomenon of some
vertices dominating the early stage of evolution process is suppressed, thus
making evolution process more robust. To solve the optimization problem with a
fixed path parameter, we propose an efficient fixed point algorithm. The time
complexity of the path following replicator dynamic is only linear in the
number of edges of a graph, thus it can analyze graphs with millions of
vertices and tens of millions of edges on a common PC in a few minutes.
Moreover, it naturally generalizes to hypergraphs and to graphs with edges of
different orders. We apply it to four important problems: the maximum clique
problem, the densest k-subgraph problem, structure fitting, and discovery of
high-density regions. Extensive experimental results clearly demonstrate
its advantages in terms of robustness, scalability and flexibility.
| Hairong Liu, Longin Jan Latecki, Shuicheng Yan | null | 1303.2643 | null | null |
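For orientation, here is a minimal sketch of the classical discrete replicator dynamic that the paper generalizes; the path parameter and the fixed-point solver proposed above are not reproduced, and the toy graph is illustrative.

```python
import numpy as np

def replicator_dynamic(A, num_iters=200, tol=1e-8):
    """Discrete replicator dynamic on an adjacency matrix A.

    Iterates x <- x * (A x) / (x' A x) on the probability simplex; the
    support of the fixed point indicates a dense cluster (cf. the
    Motzkin-Straus connection between cliques and simplex maximizers).
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)            # start from the simplex barycenter
    for _ in range(num_iters):
        Ax = A @ x
        denom = x @ Ax
        if denom <= 0:                 # no internal edges left to exploit
            break
        x_new = x * Ax / denom
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new
    return x

# toy graph: a triangle (nodes 0-2) with a pendant node 3
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(np.round(replicator_dynamic(A), 3))   # mass concentrates on the triangle
```

The degree bias this plain dynamic exhibits is exactly what the paper's path parameter is designed to suppress.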
Hybrid Q-Learning Applied to Ubiquitous recommender system | cs.LG cs.IR | Ubiquitous information access is becoming increasingly important, and
research aims to adapt it to users. Our work applies
machine learning techniques in order to address some of the
problems concerning users' acceptance of the system. To achieve this, we
propose a fundamental shift in how we model the learning of a
recommender system: inspired by models of human reasoning developed in robotics,
we combine reinforcement learning and case-based reasoning to define a
recommendation process that uses these two approaches for generating
recommendations on different context dimensions (social, temporal, geographic).
We describe an implementation of the recommender system based on this
framework. We also present preliminary results from experiments with the system
and show how our approach improves recommendation quality.
| Djallel Bouneffouf | null | 1303.2651 | null | null |
Spectral Clustering with Epidemic Diffusion | cs.SI cs.LG physics.soc-ph stat.ML | Spectral clustering is widely used to partition graphs into distinct modules
or communities. Existing methods for spectral clustering use the eigenvalues
and eigenvectors of the graph Laplacian, an operator that is closely associated
with random walks on graphs. We propose a new spectral partitioning method that
exploits the properties of epidemic diffusion. An epidemic is a dynamic process
that, unlike the random walk, simultaneously transitions to all the neighbors
of a given node. We show that the replicator, an operator describing epidemic
diffusion, is equivalent to the symmetric normalized Laplacian of a reweighted
graph with edges reweighted by the eigenvector centralities of their incident
nodes. Thus, more weight is given to edges connecting more central nodes. We
describe a method that partitions the nodes based on the componentwise ratio of
the replicator's second eigenvector to the first, and compare its performance
to traditional spectral clustering techniques on synthetic graphs with known
community structure. We demonstrate that the replicator gives preference to
dense, clique-like structures, enabling it to more effectively discover
communities that may be obscured by dense intercommunity linking.
| Laura M. Smith, Kristina Lerman, Cristina Garcia-Cardona, Allon G.
Percus, Rumi Ghosh | 10.1103/PhysRevE.88.042813 | 1303.2663 | null | null |
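A rough sketch of the partitioning step described in the abstract, under the stated equivalence between the replicator operator and the symmetric normalized Laplacian of a centrality-reweighted graph; the zero threshold on the eigenvector ratio is one simple choice, not necessarily the paper's.

```python
import numpy as np

def epidemic_partition(A):
    """Bipartition nodes via the componentwise ratio of the second to the
    first eigenvector of the centrality-reweighted normalized Laplacian."""
    _, vecs = np.linalg.eigh(A)
    v = np.abs(vecs[:, -1])                 # eigenvector centrality of each node
    W = A * np.outer(v, v)                  # reweight edges by incident centralities
    d = W.sum(axis=1)
    d[d == 0] = 1.0                         # guard isolated nodes
    Dinv = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(d)) - Dinv @ W @ Dinv
    _, lvecs = np.linalg.eigh(L_sym)        # eigenvalues in ascending order
    ratio = lvecs[:, 1] / lvecs[:, 0]       # second-to-first eigenvector ratio
    return (ratio > 0).astype(int)
```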
Machine Learning for Bioclimatic Modelling | cs.LG stat.AP | Many machine learning (ML) approaches are widely used to generate bioclimatic
models that predict the geographic range of organisms as a function of climate.
Applications such as predicting range shifts in organisms and the range of
invasive species under climate change are important in understanding
the impact of climate change. However, the success of machine learning-based
approaches depends on a number of factors. While it can be safely said that no
particular ML technique can be effective in all applications and success of a
technique is predominantly dependent on the application or the type of the
problem, it is useful to understand their behavior to ensure informed choice of
techniques. This paper presents a comprehensive review of machine
learning-based bioclimatic model generation and analyses the factors
influencing success of such models. Considering the wide use of statistical
techniques, in our discussion we also include conventional statistical
techniques used in bioclimatic modelling.
| Maumita Bhattacharya | null | 1303.2739 | null | null |
A Cooperative Q-learning Approach for Real-time Power Allocation in
Femtocell Networks | cs.MA cs.LG | In this paper, we address the problem of distributed interference management
of cognitive femtocells that share the same frequency range with macrocells
(primary user) using distributed multi-agent Q-learning. We formulate and solve
three problems representing three different Q-learning algorithms: namely,
centralized, distributed and partially distributed power control using
Q-learning (CPC-Q, DPC-Q and PDPC-Q). CPC-Q, although not of practical interest,
characterizes the global optimum. Each of DPC-Q and PDPC-Q works in two
different learning paradigms: Independent (IL) and Cooperative (CL). The former
is considered the simplest form for applying Q-learning in multi-agent
scenarios, where all the femtocells learn independently. The latter is the
proposed scheme in which femtocells share partial information during the
learning process in order to strike a balance between practical relevance and
performance. In terms of performance, the simulation results showed that the CL
paradigm outperforms the IL paradigm and achieves an aggregate femtocells
capacity that is very close to the optimal one. For the practical relevance
issue, we evaluate the robustness and scalability of DPC-Q, in real time, by
deploying new femtocells in the system during the learning process, where we
showed that DPC-Q in the CL paradigm is scalable to a large number of femtocells
and more robust to the network dynamics compared to the IL paradigm.
| Hussein Saad, Amr Mohamed and Tamer ElBatt | null | 1303.2789 | null | null |
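A schematic of the tabular Q-learning step underlying the algorithms above; the cooperative blending rule is a stand-in for the paper's partial-information sharing, whose exact form is not reproduced here.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step (the IL paradigm: each femtocell
    runs this independently on its own table)."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

def cooperative_update(Q_local, Q_shared, s, a, r, s_next,
                       alpha=0.1, gamma=0.9, w=0.5):
    """Illustrative CL-style step: blend the local table with partial
    information shared by neighbouring femtocells (placeholder rule)."""
    Q_local = q_update(Q_local, s, a, r, s_next, alpha, gamma)
    Q_local[s] = w * Q_local[s] + (1 - w) * Q_shared[s]
    return Q_local
```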
Gaussian Processes for Nonlinear Signal Processing | cs.LG cs.IT math.IT stat.ML | Gaussian processes (GPs) are versatile tools that have been successfully
employed to solve nonlinear estimation problems in machine learning, but that
are rarely used in signal processing. In this tutorial, we present GPs for
regression as a natural nonlinear extension to optimal Wiener filtering. After
establishing their basic formulation, we discuss several important aspects and
extensions, including recursive and adaptive algorithms for dealing with
non-stationarity, low-complexity solutions, non-Gaussian noise models and
classification scenarios. Furthermore, we provide a selection of relevant
applications to wireless digital communications.
| Fernando P\'erez-Cruz, Steven Van Vaerenbergh, Juan Jos\'e
Murillo-Fuentes, Miguel L\'azaro-Gredilla and Ignacio Santamaria | 10.1109/MSP.2013.2250352 | 1303.2823 | null | null |
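A minimal GP regression sketch of the kind the tutorial builds on, i.e. the posterior mean and variance under an RBF kernel; the hyperparameters and toy signal are illustrative.

```python
import numpy as np

def gp_posterior(X, y, X_star, ell=0.2, sf=1.0, noise=0.1):
    """GP regression posterior mean/variance with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + noise**2 * np.eye(len(X))   # noisy training covariance
    Ks = k(X_star, X)
    mean = Ks @ np.linalg.solve(K, y)         # posterior mean at test inputs
    var = sf**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# denoise a toy nonlinear signal (a nonlinear analogue of Wiener smoothing)
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 40)[:, None]
y = np.sin(6 * X[:, 0]) + 0.1 * rng.standard_normal(40)
mean, var = gp_posterior(X, y, X)
```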
Online Learning in Markov Decision Processes with Adversarially Chosen
Transition Probability Distributions | cs.LG stat.ML | We study the problem of learning Markov decision processes with finite state
and action spaces when the transition probability distributions and loss
functions are chosen adversarially and are allowed to change with time. We
introduce an algorithm whose regret with respect to any policy in a comparison
class grows as the square root of the number of rounds of the game, provided
the transition probabilities satisfy a uniform mixing condition. Our approach
is efficient as long as the comparison class is polynomial and we can compute
expectations over sample paths for each policy. Designing an efficient
algorithm with small regret for the general case remains an open problem.
| Yasin Abbasi-Yadkori and Peter L. Bartlett and Csaba Szepesvari | null | 1303.3055 | null | null |
A Greedy Approximation of Bayesian Reinforcement Learning with Probably
Optimistic Transition Model | cs.AI cs.LG stat.ML | Bayesian Reinforcement Learning (RL) is capable of not only incorporating
domain knowledge, but also solving the exploration-exploitation dilemma in a
natural way. As Bayesian RL is intractable except for special cases, previous
work has proposed several approximation methods. However, these methods are
usually too sensitive to parameter values, and finding an acceptable parameter
setting is practically impossible in many applications. In this paper, we
propose a new algorithm that greedily approximates Bayesian RL to achieve
robustness in parameter space. We show that for a desired learning behavior,
our proposed algorithm has a polynomial sample complexity that is lower than
those of existing algorithms. We also demonstrate that the proposed algorithm
naturally outperforms other existing algorithms when the prior distributions
are not significantly misleading. On the other hand, the proposed algorithm
cannot handle greatly misspecified priors as well as the other algorithms can.
This is a natural consequence of the fact that the proposed algorithm is
greedier than the other algorithms. Accordingly, we discuss a way to select an
appropriate algorithm for different tasks based on the algorithms' greediness.
We also introduce a new way of simplifying Bayesian planning, based on which
future work would be able to derive new algorithms.
| Kenji Kawaguchi and Mauricio Araya | null | 1303.3163 | null | null |
Toggling a Genetic Switch Using Reinforcement Learning | cs.SY cs.CE cs.LG q-bio.MN | In this paper, we consider the problem of optimal exogenous control of gene
regulatory networks. Our approach consists of adapting an established
reinforcement learning algorithm called the fitted Q iteration. This algorithm
infers the control law directly from the measurements of the system's response
to external control inputs without the use of a mathematical model of the
system. The measurement data set can either be collected from wet-lab
experiments or artificially created by computer simulations of dynamical models
of the system. The algorithm is applicable to a wide range of biological
systems due to its ability to deal with nonlinear and stochastic system
dynamics. To illustrate the application of the algorithm to a gene regulatory
network, the regulation of the toggle switch system is considered. The control
objective of this problem is to drive the concentrations of two specific
proteins to a target region in the state space.
| Aivar Sootla, Natalja Strelkowa, Damien Ernst, Mauricio Barahona,
Guy-Bart Stan | null | 1303.3183 | null | null |
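A compact sketch of the fitted Q iteration algorithm that the paper adapts, using an extra-trees regressor as is common for this method; the toggle-switch state encoding and experimental details are not reproduced.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(S, A, R, S_next, n_actions, n_iters=20, gamma=0.95):
    """Batch-mode fitted Q iteration from (state, action, reward, next
    state) tuples; refits a regressor on bootstrapped targets each pass."""
    X = np.column_stack([S, A])
    q = None
    for _ in range(n_iters):
        if q is None:
            targets = R                       # first pass: immediate rewards
        else:
            q_next = np.column_stack([
                q.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                for a in range(n_actions)])
            targets = R + gamma * q_next.max(axis=1)
        q = ExtraTreesRegressor(n_estimators=50, random_state=0).fit(X, targets)
    return q                                   # greedy control: argmax_a q([s, a])
```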
Group-Sparse Model Selection: Hardness and Relaxations | cs.LG cs.IT math.IT stat.ML | Group-based sparsity models have proven instrumental in linear regression
problems for recovering signals from much fewer measurements than standard
compressive sensing. The main promise of these models is the recovery of
"interpretable" signals through the identification of their constituent groups.
In this paper, we establish a combinatorial framework for group-model selection
problems and highlight the underlying tractability issues. In particular, we
show that the group-model selection problem is equivalent to the well-known
NP-hard weighted maximum coverage problem (WMC). Leveraging a graph-based
understanding of group models, we describe group structures which enable
correct model selection in polynomial time via dynamic programming.
Furthermore, group structures that lead to totally unimodular constraints have
tractable discrete as well as convex relaxations. We also present a
generalization of the group model that allows for within-group sparsity, which
can be used to model hierarchical sparsity. Finally, we study the Pareto
frontier of group-sparse approximations for two tractable models, including
the tree sparsity model, and illustrate selection and computation trade-offs
between our framework and the existing convex relaxations.
| Luca Baldassarre and Nirav Bhan and Volkan Cevher and Anastasios
Kyrillidis and Siddhartha Satpathi | null | 1303.3207 | null | null |
A Unified Framework for Probabilistic Component Analysis | cs.LG cs.CV stat.ML | We present a unifying framework which reduces the construction of
probabilistic component analysis techniques to a mere selection of the latent
neighbourhood, thus providing an elegant and principled framework for creating
novel component analysis models as well as constructing probabilistic
equivalents of deterministic component analysis methods. Under our framework,
we unify many very popular and well-studied component analysis algorithms, such
as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA),
Locality Preserving Projections (LPP) and Slow Feature Analysis (SFA), some of
which so far have no probabilistic equivalents in the literature. We first
define the Markov Random Fields (MRFs) which encapsulate the latent
connectivity of the aforementioned component analysis techniques; subsequently,
we show that the projection directions produced by all of PCA, LDA, LPP and SFA
are also produced by the Maximum Likelihood (ML) solution of a single joint
probability density function, composed by selecting one of the defined MRF
priors while utilising a simple observation model. Furthermore, we propose
novel Expectation Maximization (EM) algorithms, exploiting the proposed joint
PDF, while we generalize the proposed methodologies to arbitrary connectivities
via parameterizable MRF products. Theoretical analysis and experiments on both
simulated and real-world data show the usefulness of the proposed framework, by
deriving methods which clearly outperform state-of-the-art equivalents.
| Mihalis A. Nicolaou, Stefanos Zafeiriou and Maja Pantic | null | 1303.3240 | null | null |
Ranking and combining multiple predictors without labeled data | stat.ML cs.LG | In a broad range of classification and decision making problems, one is given
the advice or predictions of several classifiers, of unknown reliability, over
multiple questions or queries. This scenario is different from the standard
supervised setting, where each classifier accuracy can be assessed using
available labeled data, and raises two questions: given only the predictions of
several classifiers over a large set of unlabeled test data, is it possible to
a) reliably rank them; and b) construct a meta-classifier more accurate than
most classifiers in the ensemble? Here we present a novel spectral approach to
address these questions. First, assuming conditional independence between
classifiers, we show that the off-diagonal entries of their covariance matrix
correspond to a rank-one matrix. Moreover, the classifiers can be ranked using
the leading eigenvector of this covariance matrix, as its entries are
proportional to their balanced accuracies. Second, via a linear approximation
to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML),
a novel ensemble classifier whose weights are given by the entries of this
eigenvector. On both simulated and real data, SML typically achieves a higher
accuracy than most classifiers in the ensemble and can provide a better
starting point than majority voting, for estimating the maximum likelihood
solution. Furthermore, SML is robust to the presence of small malicious groups
of classifiers designed to veer the ensemble prediction away from the (unknown)
ground truth.
| Fabio Parisi, Francesco Strino, Boaz Nadler and Yuval Kluger | 10.1073/pnas.1219097111 | 1303.3257 | null | null |
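A simplified sketch of the Spectral Meta-Learner described above; it zeroes the covariance diagonal instead of completing the rank-one structure, which is a cruder approximation than the paper's estimator.

```python
import numpy as np

def spectral_meta_learner(preds):
    """preds: (m classifiers x n samples) matrix of +/-1 predictions on
    unlabeled data; returns SML labels and classifier weights."""
    Q = np.cov(preds)
    R = Q.copy()
    np.fill_diagonal(R, 0.0)        # off-diagonal is approximately rank one
    vals, vecs = np.linalg.eigh(R)
    v = vecs[:, np.argmax(vals)]
    v *= np.sign(v.sum())           # assume most classifiers beat chance
    return np.sign(v @ preds), v    # eigenvector-weighted vote

# demo under the conditional-independence assumption
rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=500)
accs = rng.uniform(0.55, 0.9, size=10)              # hidden balanced accuracies
preds = np.where(rng.random((10, 500)) < accs[:, None], truth, -truth)
labels, weights = spectral_meta_learner(preds)
print((labels == truth).mean())                     # typically beats most members
```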
Iterative MapReduce for Large Scale Machine Learning | cs.DC cs.DB cs.LG | Large datasets ("Big Data") are becoming ubiquitous because the potential
value in deriving insights from data, across a wide range of business and
scientific applications, is increasingly recognized. In particular, machine
learning - one of the foundational disciplines for data analysis, summarization
and inference - on Big Data has become routine at most organizations that
operate large clouds, usually based on systems such as Hadoop that support the
MapReduce programming paradigm. It is now widely recognized that while
MapReduce is highly scalable, it suffers from a critical weakness for machine
learning: it does not support iteration. Consequently, one has to program
around this limitation, leading to fragile, inefficient code. Further, reliance
on the programmer is inherently flawed in a multi-tenanted cloud environment,
since the programmer does not have visibility into the state of the system when
his or her program executes. Prior work has sought to address this problem by
either developing specialized systems aimed at stylized applications, or by
augmenting MapReduce with ad hoc support for saving state across iterations
(driven by an external loop). In this paper, we advocate support for looping as
a first-class construct, and propose an extension of the MapReduce programming
paradigm called {\em Iterative MapReduce}. We then develop an optimizer for a
class of Iterative MapReduce programs that cover most machine learning
techniques, provide theoretical justifications for the key optimization steps,
and empirically demonstrate that system-optimized programs for significant
machine learning tasks are competitive with state-of-the-art specialized
solutions.
| Joshua Rosen, Neoklis Polyzotis, Vinayak Borkar, Yingyi Bu, Michael J.
Carey, Markus Weimer, Tyson Condie, Raghu Ramakrishnan | null | 1303.3517 | null | null |
A survey on sensing methods and feature extraction algorithms for SLAM
problem | cs.RO cs.CV cs.LG | This paper is a survey conducted for a larger project on designing a Visual SLAM
robot to generate a dense 3D map of an unknown unstructured environment. Many
factors have to be considered while designing a SLAM robot. The sensing method of
the SLAM robot should be determined by considering the kind of environment to
be modeled. Similarly, the type of environment determines the suitable feature
extraction method. This paper reviews the sensing methods used in some
recently published papers. The main objective of this survey is to conduct a
comparative study of current sensing methods and feature extraction
algorithms and to identify the best choices for our work.
| Adheen Ajay and D. Venkataraman | null | 1303.3605 | null | null |
Statistical Regression to Predict Total Cumulative CPU Usage of
MapReduce Jobs | cs.DC cs.LG cs.PF | Recently, businesses have started using MapReduce as a popular computation
framework for processing large amounts of data, such as spam detection, and
different data mining tasks, in both public and private clouds. Two of the
challenging questions in such environments are (1) choosing suitable values for
MapReduce configuration parameters, e.g., the number of mappers, the number of
reducers, and the DFS block size, and (2) predicting the amount of resources that a user
should lease from the service provider. Currently, the tasks of both choosing
configuration parameters and estimating required resources are solely the user's
responsibilities. In this paper, we present an approach to provision the total
CPU usage in clock cycles of jobs in a MapReduce environment. For a MapReduce
job, a profile of total CPU usage in clock cycles is built from the job's past
executions with different values of two configuration parameters, namely the number
of mappers and the number of reducers. Then, a polynomial regression is used to
model the relation between these configuration parameters and total CPU usage
in clock cycles of the job. We also briefly study the influence of input data
scaling on measured total CPU usage in clock cycles. This derived model along
with the scaling result can then be used to provision the total CPU usage in
clock cycles of the same jobs with different input data size. We validate the
accuracy of our models using three realistic applications (WordCount, Exim
MainLog parsing, and TeraSort). Results show that the predicted total CPU usage
in clock cycles of the generated resource provisioning options deviates by less
than 8% from the measured total CPU usage in clock cycles in our 20-node virtual
Hadoop cluster.
| Nikzad Babaii Rizvandi, Javid Taheri, Reza Moraveji, Albert Y. Zomaya | null | 1303.3632 | null | null |
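An illustrative sketch of the profile-then-regress idea described above; the profile numbers are invented for demonstration, and the quadratic feature set is an assumption consistent with the general description rather than the paper's fitted model.

```python
import numpy as np

# hypothetical profile: (num_mappers, num_reducers) -> total CPU clock cycles
params = np.array([[4, 2], [8, 2], [8, 4], [16, 4], [16, 8], [32, 8]])
cycles = np.array([9.1e11, 8.2e11, 7.6e11, 7.0e11, 6.8e11, 6.9e11])

m, r = params[:, 0], params[:, 1]
Phi = np.column_stack([np.ones_like(m), m, r, m * r, m**2, r**2])
coef, *_ = np.linalg.lstsq(Phi, cycles, rcond=None)   # fit the polynomial

def predict_cycles(num_mappers, num_reducers):
    x = np.array([1, num_mappers, num_reducers,
                  num_mappers * num_reducers, num_mappers**2, num_reducers**2])
    return x @ coef

print(f"{predict_cycles(24, 6):.3e}")   # provision an untried configuration
```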
Topic Discovery through Data Dependent and Random Projections | stat.ML cs.LG | We present algorithms for topic modeling based on the geometry of
cross-document word-frequency patterns. This perspective gains significance
under the so-called separability condition. This is a condition on the existence
of novel words that are unique to each topic. We present a suite of highly
efficient algorithms based on data-dependent and random projections of
word-frequency patterns to identify novel words and associated topics. We will
also discuss the statistical guarantees of the data-dependent projections
method based on two mild assumptions on the prior density of the topic-document
matrix. Our key insight here is that the maximum and minimum values of
cross-document frequency patterns projected along any direction are associated
with novel words. While our sample complexity bounds for topic recovery are
similar to the state-of-the-art, the computational complexity of our random
projection scheme scales linearly with the number of documents and the number
of words per document. We present several experiments on synthetic and
real-world datasets to demonstrate qualitative and quantitative merits of our
scheme.
| Weicong Ding, Mohammad H. Rohban, Prakash Ishwar, Venkatesh Saligrama | null | 1303.3664 | null | null |
Subspace Clustering via Thresholding and Spectral Clustering | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We consider the problem of clustering a set of high-dimensional data points
into sets of low-dimensional linear subspaces. The number of subspaces, their
dimensions, and their orientations are unknown. We propose a simple and
low-complexity clustering algorithm based on thresholding the correlations
between the data points followed by spectral clustering. A probabilistic
performance analysis shows that this algorithm succeeds even when the subspaces
intersect, and when the dimensions of the subspaces scale (up to a log-factor)
linearly in the ambient dimension. Moreover, we prove that the algorithm also
succeeds for data points that are subject to erasures with the number of
erasures scaling (up to a log-factor) linearly in the ambient dimension.
Finally, we propose a simple scheme that provably detects outliers.
| Reinhard Heckel and Helmut B\"olcskei | null | 1303.3716 | null | null |
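A minimal sketch of the threshold-then-cluster pipeline described above: keep each point's q strongest absolute correlations, symmetrize into an affinity, and spectrally cluster; the exact affinity weighting in the paper may differ.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def threshold_subspace_clustering(X, n_clusters, q=5):
    """X: (n_points, ambient_dim); returns a cluster label per point."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = np.abs(Xn @ Xn.T)                      # pairwise |correlations|
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    idx = np.argsort(C, axis=1)[:, -q:]        # q strongest neighbours per point
    rows = np.repeat(np.arange(len(X)), q)
    A[rows, idx.ravel()] = C[rows, idx.ravel()]
    A = A + A.T                                # symmetric affinity matrix
    return SpectralClustering(n_clusters,
                              affinity='precomputed').fit_predict(A)
```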
A Last-Step Regression Algorithm for Non-Stationary Online Learning | cs.LG | The goal of a learner in standard online learning is to maintain an average
loss close to the loss of the best-performing single function in some class. In
many real-world problems, such as rating or ranking items, there is no single
best target function during the runtime of the algorithm, instead the best
(local) target function is drifting over time. We develop a novel last-step
minmax optimal algorithm in context of a drift. We analyze the algorithm in the
worst-case regret framework and show that it maintains an average loss close to
that of the best slowly changing sequence of linear functions, as long as the
total of drift is sublinear. In some situations, our bound improves over
existing bounds, and additionally the algorithm suffers logarithmic regret when
there is no drift. We also build on the H_infinity filter and its bound, and
develop and analyze a second algorithm for drifting setting. Synthetic
simulations demonstrate the advantages of our algorithms in a worst-case
constant drift setting.
| Edward Moroshko, Koby Crammer | null | 1303.3754 | null | null |
A Quorum Sensing Inspired Algorithm for Dynamic Clustering | cs.LG | Quorum sensing is a decentralized biological process, through which a
community of cells with no global awareness coordinate their functional
behaviors based solely on cell-medium interactions and local decisions. This
paper draws inspirations from quorum sensing and colony competition to derive a
new algorithm for data clustering. The algorithm treats each data point as a single
cell, and uses knowledge of local connectivity to cluster cells into multiple
colonies simultaneously. It simulates auto-inducer secretion in quorum sensing
to tune the influence radius for each cell. At the same time, sparsely
distributed core cells spread their influences to form colonies, and
interactions between colonies eventually determine each cell's identity. The
algorithm has the flexibility to analyze not only static but also time-varying
data, which surpasses the capacity of many existing algorithms. Its stability
and convergence properties are established. The algorithm is tested on several
applications, including synthetic and real benchmark data sets, allele
clustering, community detection, and image segmentation. In particular, the
algorithm's distinctive capability to deal with time-varying data allows us to
test it on novel applications such as robotic swarm grouping and
switching model identification. We believe that the algorithm's promising
performance would stimulate many more exciting applications.
| Feng Tan and Jean-Jacques Slotine | null | 1303.3934 | null | null |
On multi-class learning through the minimization of the confusion matrix
norm | cs.LG | In imbalanced multi-class classification problems, the misclassification rate
as an error measure may not be a relevant choice. Several methods have been
developed in which the performance measure retains richer information than the
mere misclassification rate: misclassification costs, ROC-based information,
etc. Following this idea of dealing with alternate measures of performance, we
propose to address imbalanced classification problems by using a new measure to
be optimized: the norm of the confusion matrix. Indeed, recent results show
that using the norm of the confusion matrix as an error measure can be quite
interesting due to the fine-grained information contained in the matrix,
especially in the case of imbalanced classes. Our first contribution then
consists of showing that optimizing a criterion based on the confusion matrix
gives rise to a common background for cost-sensitive methods aimed at dealing
with imbalanced-class learning problems. As our second contribution, we
propose an extension of a recent multi-class boosting method --- namely
AdaBoost.MM --- to the imbalanced class problem, by greedily minimizing the
empirical norm of the confusion matrix. A theoretical analysis of the
properties of the proposed method is presented, while experimental results
illustrate the behavior of the algorithm and show the relevance of the approach
compared to other methods.
| Sokol Ko\c{c}o (LIF), C\'ecile Capponi (LIF) | null | 1303.4015 | null | null |
Markov Chain Monte Carlo for Arrangement of Hyperplanes in
Locality-Sensitive Hashing | cs.LG | Since Hamming distances can be calculated by bitwise computations, they can
be calculated with less computational load than L2 distances. Similarity
searches can therefore be performed faster in Hamming distance space. The
elements of Hamming distance space are bit strings. On the other hand, the
arrangement of hyperplanes induces the transformation from feature vectors
into feature bit strings. This transformation method is a type of
locality-sensitive hashing that has been attracting attention as a way of
performing approximate similarity searches at high speed. Supervised learning
of hyperplane arrangements yields a method that transforms feature vectors
into feature bit strings reflecting the information of labels applied to the
higher-dimensional feature vectors. In this paper, we propose a supervised
learning method for hyperplane arrangements in feature space that uses a Markov
chain Monte Carlo (MCMC) method. We consider the probability density functions
used during learning, and evaluate their performance. We also consider the
sampling method for learning data pairs needed in learning, and we evaluate its
performance. We confirm that the accuracy of this learning method when using a
suitable probability density function and sampling method is greater than the
accuracy of existing learning methods.
| Yui Noma, Makiko Konoshima | null | 1303.4169 | null | null |
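A small sketch of the hyperplane-arrangement hashing the paper starts from; here the hyperplanes are drawn at random, whereas the paper's contribution is to learn the arrangement with MCMC from labeled data.

```python
import numpy as np

def hash_bits(X, W, b):
    """Map feature vectors to bit strings via a hyperplane arrangement."""
    return (X @ W.T + b > 0).astype(np.uint8)

def hamming(u, v):
    """Hamming distance; on packed bit strings this is a bitwise popcount."""
    return int(np.count_nonzero(u != v))

rng = np.random.default_rng(0)
d, n_bits = 16, 32
W = rng.standard_normal((n_bits, d))   # random hyperplanes (the paper learns these)
b = np.zeros(n_bits)
x, y = rng.standard_normal((1, d)), rng.standard_normal((1, d))
print(hamming(hash_bits(x, W, b), hash_bits(y, W, b)))
```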
Margins, Shrinkage, and Boosting | cs.LG stat.ML | This manuscript shows that AdaBoost and its immediate variants can produce
approximate maximum margin classifiers simply by scaling step size choices with
a fixed small constant. In this way, when the unscaled step size is an optimal
choice, these results provide guarantees for Friedman's empirically successful
"shrinkage" procedure for gradient boosting (Friedman, 2000). Guarantees are
also provided for a variety of other step sizes, affirming the intuition that
increasingly regularized line searches provide improved margin guarantees. The
results hold for the exponential loss and similar losses, most notably the
logistic loss.
| Matus Telgarsky | null | 1303.4172 | null | null |
Improving CUR Matrix Decomposition and the Nystr\"{o}m Approximation via
Adaptive Sampling | cs.LG cs.NA | The CUR matrix decomposition and the Nystr\"{o}m approximation are two
important low-rank matrix approximation techniques. The Nystr\"{o}m method
approximates a symmetric positive semidefinite matrix in terms of a small
number of its columns, while CUR approximates an arbitrary data matrix by a
small number of its columns and rows. Thus, CUR decomposition can be regarded
as an extension of the Nystr\"{o}m approximation.
In this paper we establish a more general error bound for the adaptive
column/row sampling algorithm, based on which we propose more accurate CUR and
Nystr\"{o}m algorithms with expected relative-error bounds. The proposed CUR
and Nystr\"{o}m algorithms also have low time complexity and can avoid
maintaining the whole data matrix in RAM. In addition, we give theoretical
analysis for the lower error bounds of the standard Nystr\"{o}m method and the
ensemble Nystr\"{o}m method. The main theoretical results established in this
paper are novel, and our analysis makes no special assumption on the data
matrices.
| Shusen Wang, Zhihua Zhang | null | 1303.4207 | null | null |
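For orientation, a minimal Nyström approximation with uniform column sampling; the adaptive sampling scheme and relative-error analysis contributed by the paper are not reproduced.

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation K ~ C W^+ C^T of a PSD matrix from sampled
    columns (K is formed fully here only for exposition)."""
    C = K[:, idx]                        # n x c sampled columns
    W = K[np.ix_(idx, idx)]              # c x c intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
G = rng.standard_normal((100, 5))
K = G @ G.T                              # rank-5 PSD test matrix
idx = rng.choice(100, size=10, replace=False)
err = np.linalg.norm(K - nystrom(K, idx)) / np.linalg.norm(K)
print(f"relative error: {err:.2e}")
```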
A General Iterative Shrinkage and Thresholding Algorithm for Non-convex
Regularized Optimization Problems | cs.LG cs.NA stat.CO stat.ML | Non-convex sparsity-inducing penalties have recently received considerable
attention in sparse learning. Recent theoretical investigations have
demonstrated their superiority over the convex counterparts in several sparse
learning settings. However, solving the non-convex optimization problems
associated with non-convex penalties remains a big challenge. A commonly used
approach is the Multi-Stage (MS) convex relaxation (or DC programming), which
relaxes the original non-convex problem to a sequence of convex problems. This
approach is usually not very practical for large-scale problems because its
computational cost is a multiple of solving a single convex problem. In this
paper, we propose a General Iterative Shrinkage and Thresholding (GIST)
algorithm to solve the non-convex optimization problem for a large class of
non-convex penalties. The GIST algorithm iteratively solves a proximal operator
problem, which in turn has a closed-form solution for many commonly used
penalties. At each outer iteration of the algorithm, we use a line search
initialized by the Barzilai-Borwein (BB) rule that allows finding an
appropriate step size quickly. The paper also presents a detailed convergence
analysis of the GIST algorithm. The efficiency of the proposed algorithm is
demonstrated by extensive experiments on large-scale data sets.
| Pinghua Gong, Changshui Zhang, Zhaosong Lu, Jianhua Huang, Jieping Ye | null | 1303.4434 | null | null |
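A simplified GIST-style iteration: a proximal step with a Barzilai-Borwein initial step size and a monotone backtracking line search; the acceptance rule below is a generic sufficient-decrease condition, not necessarily the paper's exact criterion, and the L1 example stands in for the non-convex penalties (e.g. capped-L1) whose proximal maps the paper exploits.

```python
import numpy as np

def gist(obj, grad_f, prox_g, x0, max_iter=100, eta=2.0, sigma=1e-4):
    """Simplified GIST-style solver for min_x f(x) + g(x).

    obj: full objective; grad_f: gradient of the smooth part f;
    prox_g(v, t): proximal operator of the penalty g with step t.
    """
    x, x_prev, g_prev = x0.copy(), None, None
    for _ in range(max_iter):
        g = grad_f(x)
        if x_prev is None:
            t = 1.0
        else:                                   # Barzilai-Borwein initial step
            s, ydiff = x - x_prev, g - g_prev
            t = (s @ s) / max(s @ ydiff, 1e-12)
        while True:                             # generic sufficient-decrease search
            x_new = prox_g(x - t * g, t)
            if (obj(x) - obj(x_new) >= (sigma / (2 * t)) * np.sum((x_new - x) ** 2)
                    or t < 1e-10):
                break
            t /= eta
        x_prev, g_prev, x = x, g, x_new
    return x

# illustrative use with an L1 penalty (soft-thresholding prox)
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.abs(x).sum()
grad = lambda x: A.T @ (A @ x - b)
prox = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - lam * t, 0.0)
x_hat = gist(obj, grad, prox, np.zeros(50))
```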
On Improving Energy Efficiency within Green Femtocell Networks: A
Hierarchical Reinforcement Learning Approach | cs.LG cs.GT | One of the efficient solutions of improving coverage and increasing capacity
in cellular networks is the deployment of femtocells. As the cellular networks
are becoming more complex, the energy consumption of the whole network infrastructure
is becoming important in terms of both operational costs and environmental
impacts. This paper investigates energy efficiency of two-tier femtocell
networks through combining game theory and stochastic learning. With the
Stackelberg game formulation, a hierarchical reinforcement learning framework
is applied for studying the joint expected utility maximization of macrocells
and femtocells subject to the minimum signal-to-interference-plus-noise-ratio
requirements. In the learning procedure, the macrocells act as leaders and the
femtocells are followers. At each time step, the leaders commit to dynamic
strategies based on the best responses of the followers, while the followers
compete against each other with no further information but the leaders'
transmission parameters. In this paper, we propose two reinforcement learning
based intelligent algorithms to schedule each cell's stochastic power levels.
Numerical experiments are presented to validate the investigations. The results
show that the two learning algorithms substantially improve the energy
efficiency of the femtocell networks.
| Xianfu Chen, Honggang Zhang, Tao Chen, Mika Lasanen, and Jacques
Palicot | null | 1303.4638 | null | null |
Large-Scale Learning with Less RAM via Randomization | cs.LG | We reduce the memory footprint of popular large-scale online learning methods
by projecting our weight vector onto a coarse discrete set using randomized
rounding. Compared to standard 32-bit float encodings, this reduces RAM usage
by more than 50% during training and by up to 95% when making predictions from
a fixed model, with almost no loss in accuracy. We also show that randomized
counting can be used to implement per-coordinate learning rates, improving
model quality with little additional RAM. We prove these memory-saving methods
achieve regret guarantees similar to their exact variants. Empirical evaluation
confirms excellent performance, dominating standard approaches across memory
versus accuracy tradeoffs.
| Daniel Golovin, D. Sculley, H. Brendan McMahan, Michael Young | null | 1303.4664 | null | null |
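A sketch of the unbiased randomized-rounding step behind the reported memory savings; the fixed-point encodings and randomized per-coordinate counters of the paper are not shown.

```python
import numpy as np

def randomized_round(w, precision=0.01, rng=None):
    """Round each weight to one of its two neighbouring grid points with
    probabilities that make the rounding unbiased in expectation."""
    rng = rng or np.random.default_rng()
    lo = np.floor(w / precision) * precision
    p_up = (w - lo) / precision            # chance of rounding up
    return lo + precision * (rng.random(w.shape) < p_up)

w = np.array([0.123, -0.456, 0.789])
print(randomized_round(w))                 # e.g. [ 0.12 -0.46  0.79]
```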
Recovering Non-negative and Combined Sparse Representations | math.NA cs.LG stat.ML | The non-negative solution to an underdetermined linear system can sometimes be
uniquely recovered, even without imposing any additional sparsity constraints.
In this paper, we derive conditions under which a unique non-negative solution
for such a system can exist, based on the theory of polytopes. Furthermore, we
develop the paradigm of combined sparse representations, where only a part of
the coefficient vector is constrained to be non-negative, and the rest is
unconstrained (general). We analyze the recovery of the unique, sparsest
solution, for combined representations, under three different cases of
coefficient support knowledge: (a) the non-zero supports of non-negative and
general coefficients are known, (b) the non-zero support of general
coefficients alone is known, and (c) both the non-zero supports are unknown.
For case (c), we propose the combined orthogonal matching pursuit algorithm for
coefficient recovery and derive the deterministic sparsity threshold under
which recovery of the unique, sparsest coefficient vector is possible. We
quantify the order complexity of the algorithms, and examine their performance
in exact and approximate recovery of coefficients under various conditions of
noise. Furthermore, we also obtain their empirical phase transition
characteristics. We show that the basis pursuit algorithm, with partial
non-negative constraints, and the proposed greedy algorithm perform better in
recovering the unique sparse representation when compared to their
unconstrained counterparts. Finally, we demonstrate the utility of the proposed
methods in recovering images corrupted by saturation noise.
| Karthikeyan Natesan Ramamurthy, Jayaraman J. Thiagarajan and Andreas
Spanias | null | 1303.4694 | null | null |
Marginal Likelihoods for Distributed Parameter Estimation of Gaussian
Graphical Models | stat.ML cs.LG | We consider distributed estimation of the inverse covariance matrix, also
called the concentration or precision matrix, in Gaussian graphical models.
Traditional centralized estimation often requires global inference of the
covariance matrix, which can be computationally intensive in large dimensions.
Approximate inference based on message-passing algorithms, on the other hand,
can lead to unstable and biased estimation in loopy graphical models. In this
paper, we propose a general framework for distributed estimation based on a
maximum marginal likelihood (MML) approach. This approach computes local
parameter estimates by maximizing marginal likelihoods defined with respect to
data collected from local neighborhoods. Due to the non-convexity of the MML
problem, we introduce and solve a convex relaxation. The local estimates are
then combined into a global estimate without the need for iterative
message-passing between neighborhoods. The proposed algorithm is naturally
parallelizable and computationally efficient, thereby making it suitable for
high-dimensional problems. In the classical regime where the number of
variables $p$ is fixed and the number of samples $T$ increases to infinity, the
proposed estimator is shown to be asymptotically consistent and to improve
monotonically as the local neighborhood size increases. In the high-dimensional
scaling regime where both $p$ and $T$ increase to infinity, the convergence
rate to the true parameters is derived and is seen to be comparable to
centralized maximum likelihood estimation. Extensive numerical experiments
demonstrate the improved performance of the two-hop version of the proposed
estimator, which suffices to almost close the gap to the centralized maximum
likelihood estimator at a reduced computational cost.
| Zhaoshi Meng, Dennis Wei, Ami Wiesel, Alfred O. Hero III | 10.1109/TSP.2014.2350956 | 1303.4756 | null | null |
Greedy Feature Selection for Subspace Clustering | cs.LG math.NA stat.ML | Unions of subspaces provide a powerful generalization to linear subspace
models for collections of high-dimensional data. To learn a union of subspaces
from a collection of data, sets of signals in the collection that belong to the
same subspace must be identified in order to obtain accurate estimates of the
subspace structures present in the data. Recently, sparse recovery methods have
been shown to provide a provable and robust strategy for exact feature
selection (EFS)--recovering subsets of points from the ensemble that live in
the same subspace. In parallel with recent studies of EFS with L1-minimization,
in this paper, we develop sufficient conditions for EFS with a greedy method
for sparse signal recovery known as orthogonal matching pursuit (OMP).
Following our analysis, we provide an empirical study of feature selection
strategies for signals living on unions of subspaces and characterize the gap
between sparse recovery methods and nearest neighbor (NN)-based approaches. In
particular, we demonstrate that sparse recovery methods provide significant
advantages over NN methods and the gap between the two approaches is
particularly pronounced when the sampling of subspaces in the dataset is
sparse. Our results suggest that OMP may be employed to reliably recover exact
feature sets in a number of regimes where NN approaches fail to reveal the
subspace membership of points in the ensemble.
| Eva L. Dyer, Aswin C. Sankaranarayanan, Richard G. Baraniuk | null | 1303.4778 | null | null |
Node-Based Learning of Multiple Gaussian Graphical Models | stat.ML cs.LG math.OC | We consider the problem of estimating high-dimensional Gaussian graphical
models corresponding to a single set of variables under several distinct
conditions. This problem is motivated by the task of recovering transcriptional
regulatory networks on the basis of gene expression data containing
heterogeneous samples, such as different disease states, multiple species, or
different developmental stages. We assume that most aspects of the conditional
dependence networks are shared, but that there are some structured differences
between them. Rather than assuming that similarities and differences between
networks are driven by individual edges, we take a node-based approach, which
in many cases provides a more intuitive interpretation of the network
differences. We consider estimation under two distinct assumptions: (1)
differences between the K networks are due to individual nodes that are
perturbed across conditions, or (2) similarities among the K networks are due
to the presence of common hub nodes that are shared across all K networks.
Using a row-column overlap norm penalty function, we formulate two convex
optimization problems that correspond to these two assumptions. We solve these
problems using an alternating direction method of multipliers algorithm, and we
derive a set of necessary and sufficient conditions that allows us to decompose
the problem into independent subproblems so that our algorithm can be scaled to
high-dimensional settings. Our proposal is illustrated on synthetic data, a
webpage data set, and a brain cancer gene expression data set.
| Karthik Mohan, Palma London, Maryam Fazel, Daniela Witten, Su-In Lee | null | 1303.5145 | null | null |
Estimating Confusions in the ASR Channel for Improved Topic-based
Language Model Adaptation | cs.CL cs.LG | Human language is a combination of elemental languages/domains/styles that
change across and sometimes within discourses. Language models, which play a
crucial role in speech recognizers and machine translation systems, are
particularly sensitive to such changes, unless some form of adaptation takes
place. One approach to speech language model adaptation is self-training, in
which a language model's parameters are tuned based on automatically
transcribed audio. However, transcription errors can misguide self-training,
particularly in challenging settings such as conversational speech. In this
work, we propose a model that considers the confusions (errors) of the ASR
channel. By modeling the likely confusions in the ASR output instead of using
just the 1-best, we improve self-training efficacy by obtaining a more reliable
reference transcription estimate. We demonstrate improved topic-based language
modeling adaptation results over both 1-best and lattice self-training using
our ASR channel confusion estimates on telephone conversations.
| Damianos Karakos and Mark Dredze and Sanjeev Khudanpur | null | 1303.5148 | null | null |
Separable Dictionary Learning | cs.CV cs.LG stat.ML | Many techniques in computer vision, machine learning, and statistics rely on
the fact that a signal of interest admits a sparse representation over some
dictionary. Dictionaries are either available analytically, or can be learned
from a suitable training set. While analytic dictionaries make it possible to capture the
global structure of a signal and allow a fast implementation, learned
dictionaries often perform better in applications as they are more adapted to
the considered class of signals. In imagery, unfortunately, the numerical
burden for (i) learning a dictionary and for (ii) employing the dictionary for
reconstruction tasks only allows dealing with relatively small image patches
that only capture local image information. The approach presented in this paper
aims at overcoming these drawbacks by allowing a separable structure on the
dictionary throughout the learning process. On the one hand, this permits
larger patch-sizes for the learning phase, on the other hand, the dictionary is
applied efficiently in reconstruction tasks. The learning procedure is based on
optimizing over a product of spheres which updates the dictionary as a whole,
thus enforcing basic dictionary properties such as mutual coherence explicitly
during the learning procedure. In the special case where no separable structure
is enforced, our method competes with state-of-the-art dictionary learning
methods like K-SVD.
| Simon Hawe, Matthias Seibert, and Martin Kleinsteuber | null | 1303.5244 | null | null |
An Entropy-based Learning Algorithm of Bayesian Conditional Trees | cs.LG cs.AI cs.CV | This article offers a modification of Chow and Liu's learning algorithm in
the context of handwritten digit recognition. The modified algorithm directs
the user to group digits into several classes consisting of digits that are
hard to distinguish, and then to construct an optimal conditional tree
representation for each class of digits instead of for each single digit as
done by Chow and Liu (1968). Advantages and extensions of the new method are
discussed. Related works of Wong and Wang (1977) and Wong and Poon (1989) which
offer a different entropy-based learning algorithm are shown to rest on
inappropriate assumptions.
| Dan Geiger | null | 1303.5403 | null | null |
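For orientation, a compact implementation of the Chow-Liu procedure that the article modifies: pairwise mutual information followed by a maximum spanning tree (computed here by negating weights).

```python
import numpy as np
from itertools import combinations
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X, n_states=2):
    """Chow-Liu (1968): maximum-likelihood tree over discrete variables."""
    n, d = X.shape
    MI = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        joint = np.zeros((n_states, n_states))
        for a, b in zip(X[:, i], X[:, j]):
            joint[a, b] += 1
        joint /= n
        pi, pj = joint.sum(axis=1), joint.sum(axis=0)
        nz = joint > 0
        MI[i, j] = (joint[nz] * np.log(joint[nz] / np.outer(pi, pj)[nz])).sum()
    W = -(MI + MI.T)
    W[W == 0] = -1e-12              # keep zero-MI pairs so the result is a tree
    np.fill_diagonal(W, 0.0)
    mst = minimum_spanning_tree(W)  # min tree of negated MI = max-MI tree
    return np.transpose(np.nonzero(mst.toarray()))

X = np.random.default_rng(2).integers(0, 2, size=(500, 5))
print(chow_liu_tree(X))             # (d - 1) tree edges
```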
Sparse Projections of Medical Images onto Manifolds | cs.CV cs.LG stat.ML | Manifold learning has been successfully applied to a variety of medical
imaging problems. Its use in real-time applications requires fast projection
onto the low-dimensional space. To this end, out-of-sample extensions are
applied by constructing an interpolation function that maps from the input
space to the low-dimensional manifold. Commonly used approaches such as the
Nystr\"{o}m extension and kernel ridge regression require using all training
points. We propose an interpolation function that only depends on a small
subset of the input training data. Consequently, in the testing phase each new
point only needs to be compared against a small number of input training data
in order to project the point onto the low-dimensional space. We interpret our
method as an out-of-sample extension that approximates kernel ridge regression.
Our method involves solving a simple convex optimization problem and has the
attractive property of guaranteeing an upper bound on the approximation error,
which is crucial for medical applications. Tuning this error bound controls the
sparsity of the resulting interpolation function. We illustrate our method in
two clinical applications that require fast mapping of input images onto a
low-dimensional space.
| George H. Chen, Christian Wachinger, Polina Golland | null | 1303.5508 | null | null |
Network Detection Theory and Performance | cs.SI cs.LG math.ST physics.soc-ph stat.ML stat.TH | Network detection is an important capability in many areas of applied
research in which data can be represented as a graph of entities and
relationships. Oftentimes the object of interest is a relatively small subgraph
in an enormous, potentially uninteresting background. This aspect characterizes
network detection as a "big data" problem. Graph partitioning and network
discovery have been major research areas over the last ten years, driven by
interest in internet search, cyber security, social networks, and criminal or
terrorist activities. The specific problem of network discovery is addressed as
a special case of graph partitioning in which membership in a small subgraph of
interest must be determined. Algebraic graph theory is used as the basis to
analyze and compare different network detection methods. A new Bayesian network
detection framework is introduced that partitions the graph based on prior
information and direct observations. The new approach, called space-time threat
propagation, is proved to maximize the probability of detection and is
therefore optimum in the Neyman-Pearson sense. This optimality criterion is
compared to spectral community detection approaches which divide the global
graph into subsets or communities with optimal connectivity properties. We also
explore a new generative stochastic model for covert networks and analyze using
receiver operating characteristics the detection performance of both classes of
optimal detection techniques.
| Steven T. Smith, Kenneth D. Senne, Scott Philips, Edward K. Kao, and
Garrett Bernstein | 10.1109/TSP.2014.2336613 | 1303.5613 | null | null |
Sparse Factor Analysis for Learning and Content Analytics | stat.ML cs.LG math.OC stat.AP | We develop a new model and algorithms for machine learning-based learning
analytics, which estimate a learner's knowledge of the concepts underlying a
domain, and content analytics, which estimate the relationships among a
collection of questions and those concepts. Our model represents the
probability that a learner provides the correct response to a question in terms
of three factors: their understanding of a set of underlying concepts, the
concepts involved in each question, and each question's intrinsic difficulty.
We estimate these factors given the graded responses to a collection of
questions. The underlying estimation problem is ill-posed in general,
especially when only a subset of the questions are answered. The key
observation that enables a well-posed solution is the fact that typical
educational domains of interest involve only a small number of key concepts.
Leveraging this observation, we develop both a bi-convex maximum-likelihood and
a Bayesian solution to the resulting SPARse Factor Analysis (SPARFA) problem.
We also incorporate user-defined tags on questions to facilitate the
interpretability of the estimated factors. Experiments with synthetic and
real-world data demonstrate the efficacy of our approach. Finally, we make a
connection between SPARFA and noisy, binary-valued (1-bit) dictionary learning
that is of independent interest.
| Andrew S. Lan, Andrew E. Waters, Christoph Studer and Richard G.
Baraniuk | null | 1303.5685 | null | null |
A Diffusion Process on Riemannian Manifold for Visual Tracking | cs.CV cs.LG cs.RO stat.ML | Robust visual tracking for long video sequences is a research area that has
many important applications. The main challenges include how the target image
can be modeled and how this model can be updated. In this paper, we model the
target using a covariance descriptor, as this descriptor is robust to problems
such as pixel-pixel misalignment, pose and illumination changes, that commonly
occur in visual tracking. We model the changes in the template using a
generative process. We introduce a new dynamical model for the template update
using a random walk on the Riemannian manifold in which the covariance descriptors
lie. This is done using the log-transformed space of the manifold to free the
constraints imposed inherently by positive semidefinite matrices. Modeling
template variations and pose kinetics together in the state space enables us
to jointly quantify the uncertainties relating to the kinematic states and the
template in a principled way. Finally, the sequential inference of the
posterior distribution of the kinematic states and the template is done using a
particle filter. Our results show that this principled approach can be robust
to changes in illumination, poses and spatial affine transformation. In the
experiments, our method outperformed the current state-of-the-art algorithm -
the incremental Principal Component Analysis method, particularly when a target
underwent fast pose changes, and also maintained a comparable performance in
stable target tracking cases.
| Marcus Chen, Cham Tat Jen, Pang Sze Kim, Alvina Goh | null | 1303.5913 | null | null |
On Learnability, Complexity and Stability | stat.ML cs.LG | We consider the fundamental question of learnability of a hypothesis class in
the supervised learning setting and in the general learning setting introduced
by Vladimir Vapnik. We survey classic results characterizing learnability in
terms of suitable notions of complexity, as well as more recent results that
establish the connection between learnability and stability of a learning
algorithm.
| Silvia Villa, Lorenzo Rosasco and Tomaso Poggio | null | 1303.5976 | null | null |
Efficient Reinforcement Learning for High Dimensional Linear Quadratic
Systems | stat.ML cs.LG math.OC | We study the problem of adaptive control of a high dimensional linear
quadratic (LQ) system. Previous work established the asymptotic convergence to
an optimal controller for various adaptive control schemes. More recently, for
the average cost LQ problem, a regret bound of ${O}(\sqrt{T})$ was shown, apart
from logarithmic factors. However, this bound scales exponentially with $p$,
the dimension of the state space. In this work we consider the case where the
matrices describing the dynamic of the LQ system are sparse and their
dimensions are large. We present an adaptive control scheme that achieves a
regret bound of ${O}(p \sqrt{T})$, apart from logarithmic factors. In
particular, our algorithm has an average cost of $(1+\epsilon)$ times the optimum
cost after $T = \mathrm{polylog}(p)\, O(1/\epsilon^2)$. This is in comparison to previous
work on the dense dynamics where the algorithm requires time that scales
exponentially with dimension in order to achieve regret of $\epsilon$ times the
optimal cost.
We believe that our result has prominent applications in the emerging area of
computational advertising, in particular targeted online advertising and
advertising in social networks.
| Morteza Ibrahimi and Adel Javanmard and Benjamin Van Roy | null | 1303.5984 | null | null |
Generalizing k-means for an arbitrary distance matrix | cs.LG cs.CV stat.ML | The original k-means clustering method works only if the exact vectors
representing the data points are known. Calculating the distances
from the centroids therefore requires vector operations, since the average of abstract data
points is undefined. Existing algorithms can be extended to those cases when
the sole input is the distance matrix, and the exact representing vectors are
unknown. This extension may be named relational k-means after a notation for a
similar algorithm invented for fuzzy clustering. A method is then proposed for
generalizing k-means for scenarios when the data points have absolutely no
connection with a Euclidean space.
| Bal\'azs Szalkai | null | 1303.6001 | null | null |
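A sketch of relational k-means as described: distances to the implicit centroids are computed from the distance matrix alone via the standard identity below, which is exact when the matrix is Euclidean.

```python
import numpy as np

def relational_kmeans(D, k, n_iter=50, rng=None):
    """k-means given only a pairwise distance matrix D.

    Uses d2(i, centroid of C) = mean_{j in C} D2[i, j]
                                - 0.5 * mean_{j, l in C} D2[j, l].
    """
    rng = rng or np.random.default_rng()
    D2 = D ** 2
    labels = rng.integers(0, k, size=len(D))
    for _ in range(n_iter):
        cost = np.empty((len(D), k))
        for c in range(k):
            members = labels == c
            if not members.any():
                cost[:, c] = np.inf          # empty cluster: never assigned
                continue
            within = D2[np.ix_(members, members)].mean()
            cost[:, c] = D2[:, members].mean(axis=1) - 0.5 * within
        new_labels = cost.argmin(axis=1)
        if (new_labels == labels).all():
            break
        labels = new_labels
    return labels

# two well-separated blobs, passed in as distances only
rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
print(relational_kmeans(D, 2, rng=rng))
```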
On Sparsity Inducing Regularization Methods for Machine Learning | cs.LG stat.ML | During the past years there has been an explosion of interest in learning
methods based on sparsity regularization. In this paper, we discuss a general
class of such methods, in which the regularizer can be expressed as the
composition of a convex function $\omega$ with a linear function. This setting
includes several methods such as the group Lasso, the Fused Lasso, multi-task
learning and many more. We present a general approach for solving
regularization problems of this kind, under the assumption that the proximity
operator of the function $\omega$ is available. Furthermore, we comment on the
application of this approach to support vector machines, a technique pioneered
by the groundbreaking work of Vladimir Vapnik.
| Andreas Argyriou, Luca Baldassarre, Charles A. Micchelli, Massimiliano
Pontil | null | 1303.6086 | null | null |
Adaptivity of averaged stochastic gradient descent to local strong
convexity for logistic regression | math.ST cs.LG math.OC stat.TH | In this paper, we consider supervised learning problems such as logistic
regression and study the stochastic gradient method with averaging, in the
usual stochastic approximation setting where observations are used only once.
We show that after $N$ iterations, with a constant step-size proportional to
$1/(R^2 \sqrt{N})$ where $N$ is the number of observations and $R$ is the maximum
norm of the observations, the convergence rate is always of order
$O(1/\sqrt{N})$, and improves to $O(R^2 / \mu N)$ where $\mu$ is the lowest
eigenvalue of the Hessian at the global optimum (when this eigenvalue is
greater than $R^2/\sqrt{N}$). Since $\mu$ does not need to be known in advance,
this shows that averaged stochastic gradient is adaptive to \emph{unknown
local} strong convexity of the objective function. Our proof relies on the
generalized self-concordance properties of the logistic loss and thus extends
to all generalized linear models with uniformly bounded features.
| Francis Bach (INRIA Paris - Rocquencourt, LIENS) | null | 1303.6149 | null | null |
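A minimal single-pass sketch of the averaged stochastic gradient method analyzed above, assuming labels in {-1, +1}; the constant step size proportional to 1/(R^2 sqrt(N)) follows the abstract, while the function name and interface are hypothetical.

```python
import numpy as np

def averaged_sgd_logistic(X, y, step_scale=1.0):
    """Single-pass averaged SGD for logistic regression, y in {-1, +1}."""
    N, d = X.shape
    R = np.max(np.linalg.norm(X, axis=1))        # maximum feature norm
    gamma = step_scale / (R ** 2 * np.sqrt(N))   # constant step size
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for n in range(N):                           # each observation used once
        margin = y[n] * X[n].dot(w)
        grad = -y[n] * X[n] / (1.0 + np.exp(margin))  # logistic loss gradient
        w -= gamma * grad
        w_bar += (w - w_bar) / (n + 1)           # running average of iterates
    return w_bar
```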
Machine learning of hierarchical clustering to segment 2D and 3D images | cs.CV cs.LG | We aim to improve segmentation through the use of machine learning tools
during region agglomeration. We propose an active learning approach for
performing hierarchical agglomerative segmentation from superpixels. Our method
combines multiple features at all scales of the agglomerative process, works
for data with an arbitrary number of dimensions, and scales to very large
datasets. We advocate the use of variation of information to measure
segmentation accuracy, particularly in 3D electron microscopy (EM) images of
neural tissue, and using this metric demonstrate an improvement over competing
algorithms in EM and natural images.
| Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, Dmitri B.
Chklovskii | 10.1371/journal.pone.0071715 | 1303.6163 | null | null |
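Since the paper advocates variation of information (VI) as the accuracy measure, a short sketch of VI between two segmentations (flattened into label arrays) may help; it uses the identity VI = 2 H(X,Y) - H(X) - H(Y). The helper name is hypothetical.

```python
import numpy as np

def variation_of_information(a, b):
    """VI between two labelings given as equal-length arrays of non-negative
    integer labels; uses VI = 2 H(X,Y) - H(X) - H(Y)."""
    a, b = np.asarray(a), np.asarray(b)
    joint = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(joint, (a, b), 1.0)          # contingency table of the labelings
    p = joint / a.size
    px, py = p.sum(axis=1), p.sum(axis=0)  # marginals
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return 2.0 * h(p) - h(px) - h(py)
```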
Convex Tensor Decomposition via Structured Schatten Norm Regularization | stat.ML cs.LG cs.NA | We discuss structured Schatten norms for tensor decomposition that includes
two recently proposed norms ("overlapped" and "latent") for
convex-optimization-based tensor decomposition, and connect tensor
decomposition with wider literature on structured sparsity. Based on the
properties of the structured Schatten norms, we mathematically analyze the
performance of "latent" approach for tensor decomposition, which was
empirically found to perform better than the "overlapped" approach in some
settings. We show theoretically that this is indeed the case. In particular,
when the unknown true tensor is low-rank in a specific mode, this approach
performs as good as knowing the mode with the smallest rank. Along the way, we
show a novel duality result for structured Schatten norms, establish the
consistency, and discuss the identifiability of this approach. We confirm
through numerical simulations that our theoretical prediction can precisely
predict the scaling behavior of the mean squared error.
| Ryota Tomioka, Taiji Suzuki | null | 1303.6370 | null | null |
A Note on k-support Norm Regularized Risk Minimization | cs.LG | The k-support norm has been recently introduced to perform correlated
sparsity regularization. Although Argyriou et al. only reported experiments
using squared loss, here we apply it to several other commonly used settings
resulting in novel machine learning algorithms with interesting and familiar
limit cases. Source code for the algorithms described here is available.
| Matthew Blaschko (INRIA Saclay - Ile de France, CVN) | null | 1303.6390 | null | null |
Exploiting correlation and budget constraints in Bayesian multi-armed
bandit optimization | stat.ML cs.LG | We address the problem of finding the maximizer of a nonlinear smooth
function, that can only be evaluated point-wise, subject to constraints on the
number of permitted function evaluations. This problem is also known as
fixed-budget best arm identification in the multi-armed bandit literature. We
introduce a Bayesian approach for this problem and show that it empirically
outperforms both the existing frequentist counterpart and other Bayesian
optimization methods. The Bayesian approach places emphasis on detailed
modelling, including the modelling of correlations among the arms. As a result,
it can perform well in situations where the number of arms is much larger than
the number of allowed function evaluations, whereas the frequentist counterpart
is inapplicable. This feature enables us to develop and deploy practical
applications, such as automatic machine learning toolboxes. The paper presents
comprehensive comparisons of the proposed approach, Thompson sampling,
classical Bayesian optimization techniques, more recent Bayesian bandit
approaches, and state-of-the-art best arm identification methods. This is the
first comparison of many of these methods in the literature and allows us to
examine the relative merits of their different features.
| Matthew W. Hoffman, Bobak Shahriari, Nando de Freitas | null | 1303.6746 | null | null |
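Thompson sampling, one of the baselines compared above, is easy to sketch in the simplest independent Bernoulli-arm, fixed-budget setting (the paper's emphasis on correlated, continuous-armed models is not captured here; all names are hypothetical).

```python
import numpy as np

def thompson_bernoulli(pull, n_arms, budget, seed=0):
    """Thompson sampling for Bernoulli arms under a fixed budget of pulls.

    `pull(arm)` returns a {0, 1} reward; Beta(1, 1) priors on each arm.
    """
    rng = np.random.default_rng(seed)
    wins = np.ones(n_arms)     # Beta alpha parameters
    losses = np.ones(n_arms)   # Beta beta parameters
    for _ in range(budget):
        arm = np.argmax(rng.beta(wins, losses))  # sample each arm's posterior
        r = pull(arm)
        wins[arm] += r
        losses[arm] += 1 - r
    return np.argmax(wins / (wins + losses))     # recommend posterior-mean best
```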
Sequential testing over multiple stages and performance analysis of data
fusion | stat.ML cs.LG | We describe a methodology for modeling the performance of decision-level data
fusion between different sensor configurations, implemented as part of the
JIEDDO Analytic Decision Engine (JADE). We first discuss a Bayesian network
formulation of classical probabilistic data fusion, which allows elementary
fusion structures to be stacked and analyzed efficiently. We then present an
extension of the Wald sequential test for combining the outputs of the Bayesian
network over time. We discuss an algorithm to compute its performance
statistics and illustrate the approach on some examples. This variant of the
sequential test involves multiple, distinct stages, where the evidence
accumulated from each stage is carried over into the next one, and is motivated
by a need to keep certain sensors in the network inactive unless triggered by
other sensors.
| Gaurav Thakur | 10.1117/12.2017754 | 1303.6750 | null | null |
Efficiently Using Second Order Information in Large l1 Regularization
Problems | stat.ML cs.LG | We propose a novel general algorithm LHAC that efficiently uses second-order
information to train a class of large-scale l1-regularized problems. Our method
executes cheap iterations while achieving fast local convergence rate by
exploiting the special structure of a low-rank matrix, constructed via
quasi-Newton approximation of the Hessian of the smooth loss function. A greedy
active-set strategy, based on the largest violations in the dual constraints,
is employed to maintain a working set that iteratively estimates the complement
of the optimal active set. This allows for smaller size of subproblems and
eventually identifies the optimal active set. Empirical comparisons confirm
that LHAC is highly competitive with several recently proposed state-of-the-art
specialized solvers for sparse logistic regression and sparse inverse
covariance matrix selection.
| Xiaocheng Tang and Katya Scheinberg | null | 1303.6935 | null | null |
ABC Reinforcement Learning | stat.ML cs.LG | This paper introduces a simple, general framework for likelihood-free
Bayesian reinforcement learning, through Approximate Bayesian Computation
(ABC). The main advantage is that we only require a prior distribution on a
class of simulators (generative models). This is useful in domains where an
analytical probabilistic model of the underlying process is too complex to
formulate, but where detailed simulation models are available. ABC-RL allows
the use of any Bayesian reinforcement learning technique, even in this case. In
addition, it can be seen as an extension of rollout algorithms to the case
where the correct model from which to draw rollouts is unknown. We
experimentally demonstrate the potential of this approach in a comparison with
LSPI. Finally, we introduce a theorem showing that ABC is a sound methodology
in principle, even when non-sufficient statistics are used.
| Christos Dimitrakakis, Nikolaos Tziortziotis | null | 1303.6977 | null | null |
Inductive Hashing on Manifolds | cs.LG | Learning based hashing methods have attracted considerable attention due to
their ability to greatly increase the scale at which existing algorithms may
operate. Most of these methods are designed to generate binary codes that
preserve the Euclidean distance in the original space. Manifold learning
techniques, in contrast, are better able to model the intrinsic structure
embedded in the original high-dimensional data. The complexity of these models,
and the problems with out-of-sample data, have previously rendered them
unsuitable for application to large-scale embedding, however. In this work, we
consider how to learn compact binary embeddings on their intrinsic manifolds.
In order to address the above-mentioned difficulties, we describe an efficient,
inductive solution to the out-of-sample data problem, and a process by which
non-parametric manifold learning may be used as the basis of a hashing method.
Our proposed approach thus allows the development of a range of new hashing
techniques exploiting the flexibility of the wide variety of manifold learning
approaches available. In particular, we demonstrate the approach using t-SNE
as the underlying manifold learning method.
| Fumin Shen, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, Zhenmin
Tang | 10.1109/CVPR.2013.205 | 1303.7043 | null | null |
Relevance As a Metric for Evaluating Machine Learning Algorithms | stat.ML cs.LG | In machine learning, the choice of a learning algorithm that is suitable for
the application domain is critical. The performance metric used to compare
different algorithms must also reflect the concerns of users in the application
domain under consideration. In this work, we propose a novel probability-based
performance metric called Relevance Score for evaluating supervised learning
algorithms. We evaluate the proposed metric through empirical analysis on a
dataset gathered from an intelligent lighting pilot installation. In comparison
to the commonly used Classification Accuracy metric, the Relevance Score proves
to be more appropriate for a certain class of applications.
| Aravind Kota Gopalakrishna, Tanir Ozcelebi, Antonio Liotta, Johan J.
Lukkien | 10.1007/978-3-642-39712-7_15 | 1303.7093 | null | null |
Confidence sets for persistence diagrams | math.ST cs.CG cs.LG stat.TH | Persistent homology is a method for probing topological properties of point
clouds and functions. The method involves tracking the birth and death of
topological features as one varies a tuning parameter. Features with
short lifetimes are informally considered to be "topological noise," and those
with a long lifetime are considered to be "topological signal." In this paper,
we bring some statistical ideas to persistent homology. In particular, we
derive confidence sets that allow us to separate topological signal from
topological noise.
| Brittany Terese Fasy, Fabrizio Lecci, Alessandro Rinaldo, Larry
Wasserman, Sivaraman Balakrishnan, Aarti Singh | 10.1214/14-AOS1252 | 1303.7117 | null | null |
Detecting Overlapping Temporal Community Structure in Time-Evolving
Networks | cs.SI cs.LG physics.soc-ph stat.ML | We present a principled approach for detecting overlapping temporal community
structure in dynamic networks. Our method is based on the following framework:
find the overlapping temporal community structure that maximizes a quality
function associated with each snapshot of the network subject to a temporal
smoothness constraint. A novel quality function and a smoothness constraint are
proposed to handle overlaps, and a new convex relaxation is used to solve the
resulting combinatorial optimization problem. We provide theoretical guarantees
as well as experimental results that reveal community structure in real and
synthetic networks. Our main insight is that certain structures can be
identified only when temporal correlation is considered and when communities
are allowed to overlap. In general, discovering such overlapping temporal
community structure can enhance our understanding of real-world complex
networks by revealing the underlying stability behind their seemingly chaotic
evolution.
| Yudong Chen, Vikas Kawadia, Rahul Urgaonkar | null | 1303.7226 | null | null |
Scalable Text and Link Analysis with Mixed-Topic Link Models | cs.LG cs.IR cs.SI physics.data-an stat.ML | Many data sets contain rich information about objects, as well as pairwise
relations between them. For instance, in networks of websites, scientific
papers, and other documents, each node has content consisting of a collection
of words, as well as hyperlinks or citations to other nodes. In order to
perform inference on such data sets, and make predictions and recommendations,
it is useful to have models that are able to capture the processes which
generate the text at each node and the links between them. In this paper, we
combine classic ideas in topic modeling with a variant of the mixed-membership
block model recently developed in the statistical physics community. The
resulting model has the advantage that its parameters, including the mixture of
topics of each document and the resulting overlapping communities, can be
inferred with a simple and scalable expectation-maximization algorithm. We test
our model on three data sets, performing unsupervised topic classification and
link prediction. For both tasks, our model outperforms several existing
state-of-the-art methods, achieving higher accuracy with significantly less
computation, analyzing a data set with 1.3 million words and 44 thousand links
in a few minutes.
| Yaojia Zhu, Xiaoran Yan, Lise Getoor and Cristopher Moore | 10.1145/2487575.2487693 | 1303.7264 | null | null |
On the symmetrical Kullback-Leibler Jeffreys centroids | cs.IT cs.LG math.IT stat.ML | Due to the success of the bag-of-word modeling paradigm, clustering
histograms has become an important ingredient of modern information processing.
Clustering histograms can be performed using the celebrated $k$-means
centroid-based algorithm. From the viewpoint of applications, it is usually
required to deal with symmetric distances. In this letter, we consider the
Jeffreys divergence that symmetrizes the Kullback-Leibler divergence, and
investigate the computation of Jeffreys centroids. We first prove that the
Jeffreys centroid can be expressed analytically using the Lambert $W$ function
for positive histograms. We then show how to obtain a fast guaranteed
approximation when dealing with frequency histograms. Finally, we conclude with
some remarks on the $k$-means histogram clustering.
| Frank Nielsen | 10.1109/LSP.2013.2260538 | 1303.7286 | null | null |
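For positive histograms, the closed form stated in the abstract can be evaluated directly with SciPy's Lambert W function; a minimal sketch (hypothetical helper name; the paper's fast guaranteed approximation for frequency histograms is not shown):

```python
import numpy as np
from scipy.special import lambertw

def jeffreys_centroid_positive(H):
    """Jeffreys centroid of positive histograms (rows of H, all entries > 0).

    Coordinate-wise closed form: c_i = a_i / W(e * a_i / g_i), with a and g
    the arithmetic and geometric means and W the principal Lambert W branch.
    """
    a = H.mean(axis=0)
    g = np.exp(np.log(H).mean(axis=0))   # geometric mean per coordinate
    return a / lambertw(np.e * a / g).real
```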
Universal Approximation Depth and Errors of Narrow Belief Networks with
Discrete Units | stat.ML cs.LG math.PR | We generalize recent theoretical work on the minimal number of layers of
narrow deep belief networks that can approximate any probability distribution
on the states of their visible units arbitrarily well. We relax the setting of
binary units (Sutskever and Hinton, 2008; Le Roux and Bengio, 2008, 2010;
Mont\'ufar and Ay, 2011) to units with arbitrary finite state spaces, and the
vanishing approximation error to an arbitrary approximation error tolerance.
For example, we show that a $q$-ary deep belief network with $L\geq
2+\frac{q^{\lceil m-\delta \rceil}-1}{q-1}$ layers of width $n \leq m +
\log_q(m) + 1$ for some $m\in \mathbb{N}$ can approximate any probability
distribution on $\{0,1,\ldots,q-1\}^n$ without exceeding a Kullback-Leibler
divergence of $\delta$. Our analysis covers discrete restricted Boltzmann
machines and na\"ive Bayes models as special cases.
| Guido F. Mont\'ufar | null | 1303.7461 | null | null |
Independent Vector Analysis: Identification Conditions and Performance
Bounds | cs.LG cs.IT math.IT stat.ML | Recently, an extension of independent component analysis (ICA) from one to
multiple datasets, termed independent vector analysis (IVA), has been the
subject of significant research interest. IVA has also been shown to be a
generalization of Hotelling's canonical correlation analysis. In this paper, we
provide the identification conditions for a general IVA formulation, which
accounts for linear, nonlinear, and sample-to-sample dependencies. The
identification conditions are a generalization of previous results for ICA and
for IVA when samples are independently and identically distributed.
Furthermore, a principal aim of IVA is the identification of dependent sources
between datasets. Thus, we provide the additional conditions for when the
arbitrary ordering of the sources within each dataset is common. Performance
bounds in terms of the Cramer-Rao lower bound are also provided for the
demixing matrices and interference to source ratio. The performance of two IVA
algorithms is compared to the theoretical bounds.
| Matthew Anderson, Geng-Shen Fu, Ronald Phlypo, and T\"ulay Adal{\i} | 10.1109/TSP.2014.2333554 | 1303.7474 | null | null |
Translation-Invariant Shrinkage/Thresholding of Group Sparse Signals | cs.CV cs.LG cs.SD | This paper addresses signal denoising when large-amplitude coefficients form
clusters (groups). The L1-norm and other separable sparsity models do not
capture the tendency of coefficients to cluster (group sparsity). This work
develops an algorithm, called 'overlapping group shrinkage' (OGS), based on the
minimization of a convex cost function involving a group-sparsity promoting
penalty function. The groups are fully overlapping so the denoising method is
translation-invariant and blocking artifacts are avoided. Based on the
principle of majorization-minimization (MM), we derive a simple iterative
minimization algorithm that reduces the cost function monotonically. A
procedure for setting the regularization parameter, based on attenuating the
noise to a specified level, is also described. The proposed approach is
illustrated on speech enhancement, wherein the OGS approach is applied in the
short-time Fourier transform (STFT) domain. The denoised speech produced by OGS
does not suffer from musical noise.
| Po-Yu Chen and Ivan W. Selesnick | 10.1016/j.sigpro.2013.06 | 1304.0035 | null | null |
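A compact sketch of the OGS majorization-minimization iteration for 1-D signals, with fully overlapping, zero-padded groups of size K (the interface is hypothetical, and the paper's procedure for setting the regularization parameter from the noise level is not shown):

```python
import numpy as np

def ogs(y, lam, K, n_iter=25, eps=1e-12):
    """Overlapping group shrinkage via majorization-minimization (MM).

    Each MM step re-weights every sample by the norms of all K overlapping
    (zero-padded) groups that contain it, then shrinks y accordingly.
    """
    x = y.astype(float)
    ones = np.ones(K)
    for _ in range(n_iter):
        # r[m]: l2 norm of each length-K group of x (all placements)
        r = np.sqrt(np.convolve(x * x, ones, mode='full')) + eps
        # sum of 1/r over the K groups covering each sample
        w = np.convolve(1.0 / r, ones, mode='full')[K - 1:K - 1 + y.size]
        x = y / (1.0 + lam * w)
    return x
```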
Parallel Computation Is ESS | cs.LG cs.AI cs.GT | There is an enormous number of examples of computation in nature, exemplified
across multiple species in biology. One crucial aim of these computations
across all life forms is the ability to learn and thereby increase the chance of
their survival. In the current paper a formal definition of autonomous learning
is proposed. From that definition we establish a Turing Machine model for
learning, where rule tables can be added or deleted, but can not be modified.
Sequential and parallel implementations of this model are discussed. It is
found that for general purpose learning based on this model, the
implementations capable of parallel execution would be evolutionarily stable.
This is proposed to be one of the reasons why parallelism in computation is
found in abundance in Nature.
| Nabarun Mondal and Partha P. Ghosh | null | 1304.0160 | null | null |
Sparse Signal Processing with Linear and Nonlinear Observations: A
Unified Shannon-Theoretic Approach | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We derive fundamental sample complexity bounds for recovering sparse and
structured signals for linear and nonlinear observation models including sparse
regression, group testing, multivariate regression and problems with missing
features. In general, sparse signal processing problems can be characterized in
terms of the following Markovian property. We are given a set of $N$ variables
$X_1,X_2,\ldots,X_N$, and there is an unknown subset of variables $S \subset
\{1,\ldots,N\}$ that are relevant for predicting outcomes $Y$. More
specifically, when $Y$ is conditioned on $\{X_n\}_{n\in S}$ it is conditionally
independent of the other variables, $\{X_n\}_{n \not \in S}$. Our goal is to
identify the set $S$ from samples of the variables $X$ and the associated
outcomes $Y$. We characterize this problem as a version of the noisy channel
coding problem. Using asymptotic information theoretic analyses, we establish
mutual information formulas that provide sufficient and necessary conditions on
the number of samples required to successfully recover the salient variables.
These mutual information expressions unify conditions for both linear and
nonlinear observations. We then compute sample complexity bounds for the
aforementioned models, based on the mutual information expressions in order to
demonstrate the applicability and flexibility of our results in general sparse
signal processing models.
| Cem Aksoylar, George Atia, Venkatesh Saligrama | 10.1109/TIT.2016.2605122 | 1304.0682 | null | null |
Improved Performance of Unsupervised Method by Renovated K-Means | cs.LG cs.CV stat.ML | Clustering is a separation of data into groups of similar objects. Every
group called cluster consists of objects that are similar to one another and
dissimilar to objects of other groups. In this paper, the K-Means algorithm is
implemented with three distance functions in order to identify the optimal
distance function for clustering. The proposed K-Means algorithm is compared
with K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted K-Means
(DWK-Means) algorithm by using the Davies-Bouldin index, Execution Time and
Iteration count methods. Experimental results show that the proposed K-Means
algorithm performed better on Iris and Wine dataset when compared with other
three clustering methods.
| P. Ashok, G.M Kadhar Nawaz, E. Elayaraja, V. Vadivel | null | 1304.0725 | null | null |
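The evaluation protocol above (clustering Iris and scoring with the Davies-Bouldin index) can be reproduced with standard tools; this sketch uses plain k-means rather than the paper's weighted variants or alternative distance functions.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score

X = load_iris().data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Davies-Bouldin index:", davies_bouldin_score(X, labels))  # lower is better
```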
Representation, Approximation and Learning of Submodular Functions Using
Low-rank Decision Trees | cs.LG cs.CC cs.DS | We study the complexity of approximate representation and learning of
submodular functions over the uniform distribution on the Boolean hypercube
$\{0,1\}^n$. Our main result is the following structural theorem: any
submodular function is $\epsilon$-close in $\ell_2$ to a real-valued decision
tree (DT) of depth $O(1/\epsilon^2)$. This immediately implies that any
submodular function is $\epsilon$-close to a function of at most
$2^{O(1/\epsilon^2)}$ variables and has a spectral $\ell_1$ norm of
$2^{O(1/\epsilon^2)}$. It also implies the closest previous result that states
that submodular functions can be approximated by polynomials of degree
$O(1/\epsilon^2)$ (Cheraghchi et al., 2012). Our result is proved by
constructing an approximation of a submodular function by a DT of rank
$4/\epsilon^2$ and a proof that any rank-$r$ DT can be $\epsilon$-approximated
by a DT of depth $\frac{5}{2}(r+\log(1/\epsilon))$.
We show that these structural results can be exploited to give an
attribute-efficient PAC learning algorithm for submodular functions running in
time $\tilde{O}(n^2) \cdot 2^{O(1/\epsilon^{4})}$. The best previous algorithm
for the problem requires $n^{O(1/\epsilon^{2})}$ time and examples (Cheraghchi
et al., 2012) but works also in the agnostic setting. In addition, we give
improved learning algorithms for a number of related settings.
We also prove that our PAC and agnostic learning algorithms are essentially
optimal via two lower bounds: (1) an information-theoretic lower bound of
$2^{\Omega(1/\epsilon^{2/3})}$ on the complexity of learning monotone
submodular functions in any reasonable model; (2) computational lower bound of
$n^{\Omega(1/\epsilon^{2/3})}$ based on a reduction to learning of sparse
parities with noise, widely-believed to be intractable. These are the first
lower bounds for learning of submodular functions over the uniform
distribution.
| Vitaly Feldman and Pravesh Kothari and Jan Vondrak | null | 1304.0730 | null | null |
O(logT) Projections for Stochastic Optimization of Smooth and Strongly
Convex Functions | cs.LG | Traditional algorithms for stochastic optimization require projecting the
solution at each iteration into a given domain to ensure its feasibility. When
facing complex domains, such as positive semi-definite cones, the projection
operation can be expensive, leading to a high computational cost per iteration.
In this paper, we present a novel algorithm that aims to reduce the number of
projections for stochastic optimization. The proposed algorithm combines the
strength of several recent developments in stochastic optimization, including
mini-batch, extra-gradient, and epoch gradient descent, in order to effectively
explore the smoothness and strong convexity. We show, both in expectation and
with a high probability, that when the objective function is both smooth and
strongly convex, the proposed algorithm achieves the optimal $O(1/T)$ rate of
convergence with only $O(\log T)$ projections. Our empirical study verifies the
theoretical result.
| Lijun Zhang, Tianbao Yang, Rong Jin, Xiaofei He | null | 1304.0740 | null | null |
A Fast Semidefinite Approach to Solving Binary Quadratic Problems | cs.CV cs.LG | Many computer vision problems can be formulated as binary quadratic programs
(BQPs). Two classic relaxation methods are widely used for solving BQPs,
namely, spectral methods and semidefinite programming (SDP), each with their
own advantages and disadvantages. Spectral relaxation is simple and easy to
implement, but its bound is loose. Semidefinite relaxation has a tighter bound,
but its computational complexity is high for large scale problems. We present a
new SDP formulation for BQPs, with two desirable properties. First, it has a
similar relaxation bound to conventional SDP formulations. Second, compared
with conventional SDP methods, the new SDP formulation leads to a significantly
more efficient and scalable dual optimization approach, which has the same
degree of complexity as spectral methods. Extensive experiments on various
applications including clustering, image segmentation, co-segmentation and
registration demonstrate the usefulness of our SDP formulation for solving
large-scale BQPs.
| Peng Wang, Chunhua Shen, Anton van den Hengel | 10.1109/CVPR.2013.173 | 1304.0840 | null | null |
A Novel Frank-Wolfe Algorithm. Analysis and Applications to Large-Scale
SVM Training | cs.CV cs.AI cs.LG math.OC stat.ML | Recently, there has been a renewed interest in the machine learning community
for variants of a sparse greedy approximation procedure for concave
optimization known as the Frank-Wolfe (FW) method. In particular, this
procedure has been successfully applied to train large-scale instances of
non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has
made it possible to obtain not only efficient algorithms but also important theoretical results,
including convergence analysis of training algorithms and new characterizations
of model sparsity.
In this paper, we present and analyze a novel variant of the FW method based
on a new way to perform away steps, a classic strategy used to accelerate the
convergence of the basic FW procedure. Our formulation and analysis is focused
on a general concave maximization problem on the simplex. However, the
specialization of our algorithm to quadratic forms is strongly related to some
classic methods in computational geometry, namely the Gilbert and MDM
algorithms.
On the theoretical side, we demonstrate that the method matches the
guarantees in terms of convergence rate and number of iterations obtained by
using classic away steps. In particular, the method enjoys a linear rate of
convergence, a result that has been recently proved for MDM on quadratic forms.
On the practical side, we provide experiments on several classification
datasets, and evaluate the results using statistical tests. Experiments show
that our method is faster than the FW method with classic away steps, and works
well even in the cases in which classic away steps slow down the algorithm.
Furthermore, these improvements are obtained without sacrificing the predictive
accuracy of the obtained SVM model.
| Hector Allende, Emanuele Frandi, Ricardo Nanculef, Claudio Sartori | null | 1304.1014 | null | null |
Estimating Phoneme Class Conditional Probabilities from Raw Speech
Signal using Convolutional Neural Networks | cs.LG cs.CL cs.NE | In hybrid hidden Markov model/artificial neural networks (HMM/ANN) automatic
speech recognition (ASR) system, the phoneme class conditional probabilities
are estimated by first extracting acoustic features from the speech signal
based on prior knowledge such as, speech perception or/and speech production
knowledge, and, then modeling the acoustic features with an ANN. Recent
advances in machine learning techniques, more specifically in the field of
image processing and text processing, have shown that such divide and conquer
strategy (i.e., separating feature extraction and modeling steps) may not be
necessary. Motivated from these studies, in the framework of convolutional
neural networks (CNNs), this paper investigates a novel approach, where the
input to the ANN is raw speech signal and the output is phoneme class
conditional probability estimates. On the TIMIT phoneme recognition task, we
study different ANN architectures to show the benefit of CNNs, and compare the
proposed approach against the conventional approach, where spectral-based MFCC
features are extracted and modeled by a multilayer perceptron. Our studies show that
the proposed approach can yield comparable or better phoneme recognition
performance when compared to the conventional approach. It indicates that CNNs
can learn features relevant for phoneme classification automatically from the
raw speech signal.
| Dimitri Palaz, Ronan Collobert, Mathew Magimai.-Doss | null | 1304.1018 | null | null |
Efficient Distance Metric Learning by Adaptive Sampling and Mini-Batch
Stochastic Gradient Descent (SGD) | cs.LG | Distance metric learning (DML) is an important task that has found
applications in many domains. The high computational cost of DML arises from
the large number of variables to be determined and the constraint that a
distance metric has to be a positive semi-definite (PSD) matrix. Although
stochastic gradient descent (SGD) has been successfully applied to improve the
efficiency of DML, it can still be computationally expensive because in order
to ensure that the solution is a PSD matrix, it has to, at every iteration,
project the updated distance metric onto the PSD cone, an expensive operation.
We address this challenge by developing two strategies within SGD, i.e.
mini-batch and adaptive sampling, to effectively reduce the number of updates
(i.e., projections onto the PSD cone) in SGD. We also develop hybrid approaches
that combine the strength of adaptive sampling with that of mini-batch online
learning techniques to further improve the computational efficiency of SGD for
DML. We prove the theoretical guarantees for both adaptive sampling and
mini-batch based approaches for DML. We also conduct an extensive empirical
study to verify the effectiveness of the proposed algorithms for DML.
| Qi Qian, Rong Jin, Jinfeng Yi, Lijun Zhang, Shenghuo Zhu | null | 1304.1192 | null | null |
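A sketch of the generic idea of amortizing PSD projections in stochastic metric learning: mini-batch triplet updates with a projection only every few batches. This illustrates the strategy, not the paper's exact adaptive-sampling algorithm; all names and the triplet loss are hypothetical choices.

```python
import numpy as np

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone (the expensive step)."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.maximum(w, 0.0)) @ V.T

def dml_minibatch_sgd(triplets, d, lr=0.01, batch=32, project_every=100):
    """Mahalanobis metric learning from a list of (anchor, positive, negative)
    triplets of d-dimensional vectors. Mini-batching amortizes the PSD
    projection: project only every `project_every` batches and once at the end.
    """
    M = np.eye(d)
    for t in range(0, len(triplets), batch):
        G = np.zeros((d, d))
        for a, p, n in triplets[t:t + batch]:
            dp, dn = a - p, a - n
            # hinge constraint: dist(a, n) - dist(a, p) should exceed 1
            if dn @ M @ dn - dp @ M @ dp < 1.0:
                G += np.outer(dp, dp) - np.outer(dn, dn)
        M -= lr * G / batch
        if (t // batch) % project_every == 0:
            M = project_psd(M)
    return project_psd(M)
```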
Fast SVM training using approximate extreme points | cs.LG | Applications of non-linear kernel Support Vector Machines (SVMs) to large
datasets is seriously hampered by its excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset, called the
representative set, of the training dataset. We present analytical results that
indicate the similarity of AESVM and SVM solutions. A linear time algorithm
based on convex hulls and extreme points is used to compute the representative
set in kernel space. Extensive computational experiments on nine datasets
compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM
\citep{Tsang07}, LASVM \citep{Bordes05}, $\text{SVM}^{\text{perf}}$
\citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM
implementation was found to train much faster than the other methods, while its
classification accuracy was similar to that of LIBSVM in all cases. In
particular, for a seizure detection dataset, AESVM training was almost $10^3$
times faster than LIBSVM and LASVM and more than forty times faster than CVM
and BVM. Additionally, AESVM also gave competitively fast classification times.
| Manu Nandan, Pramod P. Khargonekar, Sachin S. Talathi | null | 1304.1391 | null | null |
Generalization Bounds for Domain Adaptation | cs.LG math.PR | In this paper, we provide a new framework to obtain the generalization bounds
of the learning process for domain adaptation, and then apply the derived
bounds to analyze the asymptotic convergence of the learning process. Without
loss of generality, we consider two kinds of representative domain adaptation:
one is with multiple sources and the other is combining source and target data.
In particular, we use the integral probability metric to measure the
difference between two domains. For either kind of domain adaptation, we
develop a related Hoeffding-type deviation inequality and a symmetrization
inequality to achieve the corresponding generalization bound based on the
uniform entropy number. We also generalize the classical McDiarmid's
inequality to a more general setting where independent random variables can
take values from different domains. By using this inequality, we then obtain
generalization bounds based on the Rademacher complexity. Afterwards, we
analyze the asymptotic convergence and the rate of convergence of the learning
process for such kind of domain adaptation. Meanwhile, we discuss the factors
that affect the asymptotic behavior of the learning process and the numerical
experiments support our theoretical findings as well.
| Chao Zhang, Lei Zhang, Jieping Ye | null | 1304.1574 | null | null |
Bug Classification: Feature Extraction and Comparison of Event Model
using Na\"ive Bayes Approach | cs.SE cs.IR cs.LG | In software industries, individuals at different levels from customer to an
engineer apply diverse mechanisms to detect to which class a particular bug
should be allocated. Sometimes while a simple search in Internet might help, in
many other cases a lot of effort is spent in analyzing the bug report to
classify the bug. So there is a great need of a structured mining algorithm -
where given a crash log, the existing bug database could be mined to find out
the class to which the bug should be allocated. This would involve Mining
patterns and applying different classification algorithms. This paper focuses
on the feature extraction, noise reduction in data and classification of
network bugs using probabilistic Na\"ive Bayes approach. Different event models
like Bernoulli and Multinomial are applied on the extracted features. When new,
unseen bugs are given as input to the algorithms, the performance comparison of
different algorithms is done on the basis of accuracy and recall parameters.
| Sunil Joy Dommati, Ruchi Agrawal, Ram Mohana Reddy G. and S. Sowmya
Kamath | null | 1304.1677 | null | null |
Image Retrieval using Histogram Factorization and Contextual Similarity
Learning | cs.CV cs.DB cs.LG | Image retrieval has long been a central topic in both computer vision and
machine learning. Content-based image retrieval, which tries to
retrieve images from a database visually similar to a query image, has
attracted much attention. Two most important issues of image retrieval are the
representation and ranking of the images. Recently, bag-of-words based method
has shown its power as a representation method. Moreover, nonnegative matrix
factorization is also a popular way to represent the data samples. In addition,
contextual similarity learning has also been studied and proven to be an
effective method for the ranking problem. However, these technologies have
never been used together. In this paper, we developed an effective image
retrieval system by representing each image using the bag-of-words method as
histograms, and then apply the nonnegative matrix factorization to factorize
the histograms, and finally learn the ranking score using the contextual
similarity learning method. The proposed system is evaluated on a large-scale
image database and its effectiveness is demonstrated.
| Liu Liang | null | 1304.1995 | null | null |
A General Framework for Interacting Bayes-Optimally with Self-Interested
Agents using Arbitrary Parametric Model and Model Prior | cs.LG cs.AI cs.MA stat.ML | Recent advances in Bayesian reinforcement learning (BRL) have shown that
Bayes-optimality is theoretically achievable by modeling the environment's
latent dynamics using Flat-Dirichlet-Multinomial (FDM) prior. In
self-interested multi-agent environments, the transition dynamics are mainly
controlled by the other agent's stochastic behavior for which FDM's
independence and modeling assumptions do not hold. As a result, FDM does not
allow the other agent's behavior to be generalized across different states nor
specified using prior domain knowledge. To overcome these practical limitations
of FDM, we propose a generalization of BRL to integrate the general class of
parametric models and model priors, thus allowing practitioners' domain
knowledge to be exploited to produce a fine-grained and compact representation
of the other agent's behavior. Empirical evaluation shows that our approach
outperforms existing multi-agent reinforcement learning algorithms.
| Trong Nghia Hoang and Kian Hsiang Low | null | 1304.2024 | null | null |
ClusterCluster: Parallel Markov Chain Monte Carlo for Dirichlet Process
Mixtures | stat.ML cs.DC cs.LG | The Dirichlet process (DP) is a fundamental mathematical tool for Bayesian
nonparametric modeling, and is widely used in tasks such as density estimation,
natural language processing, and time series modeling. Although MCMC inference
methods for the DP often provide a gold standard in terms of asymptotic accuracy,
they can be computationally expensive and are not obviously parallelizable. We
propose a reparameterization of the Dirichlet process that induces conditional
independencies between the atoms that form the random measure. This conditional
independence enables many of the Markov chain transition operators for DP
inference to be simulated in parallel across multiple cores. Applied to mixture
modeling, our approach enables the Dirichlet process to simultaneously learn
clusters that describe the data and superclusters that define the granularity
of parallelization. Unlike previous approaches, our technique does not require
alteration of the model and leaves the true posterior distribution invariant.
It also naturally lends itself to a distributed software implementation in
terms of Map-Reduce, which we test in cluster configurations of over 50
machines and 100 cores. We present experiments exploring the parallel
efficiency and convergence properties of our approach on both synthetic and
real-world data, including runs on 1MM data vectors in 256 dimensions.
| Dan Lovell, Jonathan Malmaud, Ryan P. Adams, Vikash K. Mansinghka | null | 1304.2302 | null | null |
The PAV algorithm optimizes binary proper scoring rules | stat.AP cs.LG stat.ML | There has been much recent interest in the application of the
pool-adjacent-violators (PAV) algorithm for the purpose of calibrating the
probabilistic outputs of automatic pattern recognition and machine learning
algorithms. Special cost functions, known as proper scoring rules form natural
objective functions to judge the goodness of such calibration. We show that for
binary pattern classifiers, the non-parametric optimization of calibration,
subject to a monotonicity constraint, can be solved by PAV and that this
solution is optimal for all regular binary proper scoring rules. This extends
previous results which were limited to convex binary proper scoring rules. We
further show that this result holds not only for calibration of probabilities,
but also for calibration of log-likelihood-ratios, in which case optimality
holds independently of the prior probabilities of the pattern classes.
| Niko Brummer and Johan du Preez | null | 1304.2331 | null | null |
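A self-contained sketch of the PAV algorithm itself (weighted, non-decreasing fit); applied to the 0/1 labels sorted by classifier score, its output is the monotone calibration map the paper analyzes. The helper name is hypothetical.

```python
import numpy as np

def pav(y, w=None):
    """Pool-adjacent-violators: weighted non-decreasing fit to a sequence y."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    vals, wts, cnts = [], [], []                      # merged blocks
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:  # pool adjacent violators
            wt = wts[-2] + wts[-1]
            v = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / wt
            vals[-2:] = [v]; wts[-2:] = [wt]; cnts[-2:] = [cnts[-2] + cnts[-1]]
    return np.repeat(vals, cnts)

# Calibration use: sort the 0/1 labels by classifier score and run pav() on
# them; the result is the monotone posterior-probability map.
```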
Multiple decision trees | cs.LG cs.AI stat.ML | This paper describes experiments, on two domains, to investigate the effect
of averaging over predictions of multiple decision trees, instead of using a
single tree. Other authors have pointed out theoretical and commonsense reasons
for preferring the multiple tree approach. Ideally, we would like to consider
predictions from all trees, weighted by their probability. However, there is a
vast number of different trees, and it is difficult to estimate the probability
of each tree. We sidestep the estimation problem by using a modified version of
the ID3 algorithm to build good trees, and average over only these trees. Our
results are encouraging. For each domain, we managed to produce a small number
of good trees. We find that it is best to average across sets of trees with
different structure; this usually gives better performance than any of the
constituent trees, including the ID3 tree.
| Suk Wah Kwok, Chris Carter | null | 1304.2363 | null | null |
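A modern stand-in for the paper's procedure, averaging class-probability predictions over several structurally different trees, can be sketched with scikit-learn; here diversity comes from randomized split candidates rather than the paper's modified ID3.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_ensemble_proba(X_train, y_train, X_test, n_trees=10):
    """Average class-probability predictions over several different trees."""
    probas = []
    for seed in range(n_trees):
        # max_features randomizes candidate splits, yielding varied structures
        t = DecisionTreeClassifier(max_features='sqrt', random_state=seed)
        t.fit(X_train, y_train)
        probas.append(t.predict_proba(X_test))
    return np.mean(probas, axis=0)   # uniform average over trees
```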
Kernel Reconstruction ICA for Sparse Representation | cs.CV cs.LG | Independent Component Analysis (ICA) is an effective unsupervised tool to
learn statistically independent representation. However, ICA is not only
sensitive to whitening but also difficult to learn an over-complete basis.
Consequently, ICA with a soft reconstruction cost (RICA) was presented to learn
sparse representations with over-complete basis even on unwhitened data.
However, RICA cannot represent data with nonlinear structure due
to its intrinsic linearity. In addition, RICA is essentially an unsupervised
method and can not utilize the class information. In this paper, we propose a
kernel ICA model with reconstruction constraint (kRICA) to capture the
nonlinear features. To bring in the class information, we further extend the
unsupervised kRICA to a supervised one by introducing a discrimination
constraint, namely d-kRICA. This constraint leads to learning a structured basis
consisting of basis vectors from different basis subsets corresponding to
different class labels. Each subset will then sparsely represent its own class
well, but not the others. Furthermore, data samples belonging to the
same class will have similar representations, and thereby the learned sparse
representations carry more discriminative power. Experimental results
validate the effectiveness of kRICA and d-kRICA for image classification.
| Yanhui Xiao, Zhenfeng Zhu, Yao Zhao | null | 1304.2490 | null | null |
Entropy landscape of solutions in the binary perceptron problem | cond-mat.dis-nn cond-mat.stat-mech cs.LG | The statistical picture of the solution space for a binary perceptron is
studied. The binary perceptron learns a random classification of input random
patterns by a set of binary synaptic weights. The learning of this network is
difficult especially when the pattern (constraint) density is close to the
capacity, which is supposed to be intimately related to the structure of the
solution space. The geometrical organization is elucidated by the entropy
landscape from a reference configuration and of solution-pairs separated by a
given Hamming distance in the solution space. We evaluate the entropy at the
annealed level as well as replica symmetric level and the mean field result is
confirmed by the numerical simulations on single instances using the proposed
message passing algorithms. From the first landscape (a random configuration as
a reference), we see clearly how the solution space shrinks as more constraints
are added. From the second landscape of solution-pairs, we deduce the
coexistence of clustering and freezing in the solution space.
| Haiping Huang, K. Y. Michael Wong and Yoshiyuki Kabashima | 10.1088/1751-8113/46/37/375002 | 1304.2850 | null | null |
The BOSARIS Toolkit: Theory, Algorithms and Code for Surviving the New
DCF | stat.AP cs.LG stat.ML | The change of two orders of magnitude in the 'new DCF' of NIST's SRE'10,
relative to the 'old DCF' evaluation criterion, posed a difficult challenge for
participants and evaluator alike. Initially, participants were at a loss as to
how to calibrate their systems, while the evaluator underestimated the required
number of evaluation trials. After the fact, it is now obvious that both
calibration and evaluation require very large sets of trials. This poses the
challenges of (i) how to decide what number of trials is enough, and (ii) how
to process such large data sets with reasonable memory and CPU requirements.
After SRE'10, at the BOSARIS Workshop, we built solutions to these problems
into the freely available BOSARIS Toolkit. This paper explains the principles
and algorithms behind this toolkit. The main contributions of the toolkit are:
1. The Normalized Bayes Error-Rate Plot, which analyses likelihood-ratio
calibration over a wide range of DCF operating points. These plots also help in
judging the adequacy of the sizes of calibration and evaluation databases. 2.
Efficient algorithms to compute DCF and minDCF for large score files, over the
range of operating points required by these plots. 3. A new score file format,
which facilitates working with very large trial lists. 4. A faster logistic
regression optimizer for fusion and calibration. 5. A principled way to define
EER (equal error rate), which is of practical interest when the absolute error
count is small.
| Niko Br\"ummer and Edward de Villiers | null | 1304.2865 | null | null |
A Generalized Online Mirror Descent with Applications to Classification
and Regression | cs.LG | Online learning algorithms are fast, memory-efficient, easy to implement, and
applicable to many prediction problems, including classification, regression,
and ranking. Several online algorithms were proposed in the past few decades,
some based on additive updates, like the Perceptron, and some on multiplicative
updates, like Winnow. A unifying perspective on the design and the analysis of
online algorithms is provided by online mirror descent, a general prediction
strategy from which most first-order algorithms can be obtained as special
cases. We generalize online mirror descent to time-varying regularizers with
generic updates. Unlike standard mirror descent, our more general formulation
also captures second order algorithms, algorithms for composite losses and
algorithms for adaptive filtering. Moreover, we recover, and sometimes improve,
known regret bounds as special cases of our analysis using specific
regularizers. Finally, we show the power of our approach by deriving a new
second order algorithm with a regret bound invariant with respect to arbitrary
rescalings of individual features.
| Francesco Orabona, Koby Crammer, Nicol\`o Cesa-Bianchi | null | 1304.2994 | null | null |
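To make the "special cases" claim concrete: with the negative-entropy regularizer, online mirror descent on the simplex reduces to the multiplicative-update (Winnow/Hedge-style) rule sketched below, while a squared-norm regularizer would give additive, Perceptron-style updates. The interface is hypothetical.

```python
import numpy as np

def exponentiated_gradient(grads, eta=0.1):
    """Online mirror descent on the probability simplex with the entropic
    regularizer R(w) = sum_i w_i log w_i: the update becomes multiplicative.
    `grads` is a (T, d) array holding one loss gradient per round."""
    d = grads.shape[1]
    w = np.full(d, 1.0 / d)        # uniform start on the simplex
    for g in grads:
        w = w * np.exp(-eta * g)   # mirror (multiplicative) step
        w /= w.sum()               # Bregman projection back onto the simplex
    return w
```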
Scaling the Indian Buffet Process via Submodular Maximization | stat.ML cs.LG | Inference for latent feature models is inherently difficult as the inference
space grows exponentially with the size of the input data and number of latent
features. In this work, we use Kurihara & Welling (2008)'s
maximization-expectation framework to perform approximate MAP inference for
linear-Gaussian latent feature models with an Indian Buffet Process (IBP)
prior. This formulation yields a submodular function of the features that
corresponds to a lower bound on the model evidence. By adding a constant to
this function, we obtain a nonnegative submodular function that can be
maximized via a greedy algorithm that obtains at least a one-third
approximation to the optimal solution. Our inference method scales linearly
with the size of the input data, and we show the efficacy of our method on the
largest datasets currently analyzed using an IBP model.
| Colorado Reed and Zoubin Ghahramani | null | 1304.3285 | null | null |
Probabilistic Classification using Fuzzy Support Vector Machines | cs.LG math.ST stat.TH | In medical applications such as recognizing the type of a tumor as Malignant
or Benign, a wrong diagnosis can be devastating. Methods like Fuzzy Support
Vector Machines (FSVM) try to reduce the effect of misplaced training points by
assigning a lower weight to the outliers. However, there are still uncertain
points which are similar to both classes, and assigning a class based on the given
information will cause errors. In this paper, we propose a two-phase
classification method which probabilistically assigns the uncertain points to
each of the classes. The proposed method is applied to the Breast Cancer
Wisconsin (Diagnostic) Dataset which consists of 569 instances in 2 classes of
Malignant and Benign. This method assigns certain instances to their
appropriate classes with probability of one, and the uncertain instances to
each of the classes with associated probabilities. Therefore, based on the
degree of uncertainty, doctors can suggest further examinations before making
the final diagnosis.
| Marzieh Parandehgheibi | null | 1304.3345 | null | null |
Machine Learning, Clustering, and Polymorphy | cs.AI cs.CL cs.LG | This paper describes a machine induction program (WITT) that attempts to
model human categorization. Properties of categories to which human subjects
are sensitive include best or prototypical members, relative contrasts between
putative categories, and polymorphy (neither necessary or sufficient features).
This approach represents an alternative to usual Artificial Intelligence
approaches to generalization and conceptual clustering which tend to focus on
necessary and sufficient feature rules, equivalence classes, and simple search
and match schemes. WITT is shown to be more consistent with human
categorization while potentially including results produced by more traditional
clustering schemes. Applications of this approach in the domains of expert
systems and information retrieval are also discussed.
| Stephen Jose Hanson, Malcolm Bauer | null | 1304.3432 | null | null |
Distributed dictionary learning over a sensor network | stat.ML cs.LG stat.AP | We consider the problem of distributed dictionary learning, where a set of
nodes is required to collectively learn a common dictionary from noisy
measurements. This approach may be useful in several contexts including sensor
networks. Diffusion cooperation schemes have been proposed to solve the
distributed linear regression problem. In this work we focus on a
diffusion-based adaptive dictionary learning strategy: each node records
observations and cooperates with its neighbors by sharing its local dictionary.
The resulting algorithm corresponds to a distributed block coordinate descent
(alternate optimization). Beyond dictionary learning, this strategy could be
adapted to many matrix factorization problems and generalized to various
settings. This article presents our approach and illustrates its efficiency on
some numerical examples.
| Pierre Chainais and C\'edric Richard | null | 1304.3568 | null | null |
Advice-Efficient Prediction with Expert Advice | cs.LG stat.ML | Advice-efficient prediction with expert advice (in analogy to label-efficient
prediction) is a variant of prediction with expert advice game, where on each
round of the game we are allowed to ask for advice of a limited number $M$ out
of $N$ experts. This setting is especially interesting when asking for advice
of every expert on every round is expensive. We present an algorithm for
advice-efficient prediction with expert advice that achieves
$O(\sqrt{\frac{N}{M}T\ln N})$ regret on $T$ rounds of the game.
| Yevgeny Seldin and Peter Bartlett and Koby Crammer | null | 1304.3708 | null | null |
Towards more accurate clustering method by using dynamic time warping | cs.LG stat.ML | An intrinsic problem of classifiers based on machine learning (ML) methods is
that their learning time grows as the size and complexity of the training
dataset increases. For this reason, it is important to have efficient
computational methods and algorithms that can be applied on large datasets,
such that it is still possible to complete the machine learning tasks in
reasonable time. In this context, we present in this paper a more accurate
simple process to speed up ML methods. An unsupervised clustering algorithm is
combined with Expectation, Maximization (EM) algorithm to develop an efficient
Hidden Markov Model (HMM) training. The idea of the proposed process consists
of two steps. In the first step, training instances with similar inputs are
clustered and a weight factor which represents the frequency of these instances
is assigned to each representative cluster. Dynamic Time Warping technique is
used as a dissimilarity function to cluster similar examples. In the second
step, all formulas in the classical HMM training algorithm (EM) associated with
the number of training instances are modified to include the weight factor in
appropriate terms. This process significantly accelerates HMM training while
maintaining the same initial, transition and emission probability matrices as
those obtained with the classical HMM training algorithm. Accordingly, the
classification accuracy is preserved. Depending on the size of the training
set, speedups of up to 2200 times are possible when the size is about 100,000
instances. The proposed approach is not limited to training HMMs, but it can be
employed for a large variety of ML methods.
| Khadoudja Ghanem | 10.5121/ijdkp.2013.3207 | 1304.3745 | null | null |
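The dissimilarity used above for clustering training instances is classical dynamic time warping; a textbook sketch of the DP recursion follows (the weighted EM modifications of the paper are not shown).

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best alignment extends a match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```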
Identification of relevant subtypes via preweighted sparse clustering | stat.ME cs.LG q-bio.QM stat.AP stat.ML | Cluster analysis methods are used to identify homogeneous subgroups in a data
set. In biomedical applications, one frequently applies cluster analysis in
order to identify biologically interesting subgroups. In particular, one may
wish to identify subgroups that are associated with a particular outcome of
interest. Conventional clustering methods generally do not identify such
subgroups, particularly when there are a large number of high-variance features
in the data set. Conventional methods may identify clusters associated with
these high-variance features when one wishes to obtain secondary clusters that
are more interesting biologically or more strongly associated with a particular
outcome of interest. A modification of sparse clustering can be used to
identify such secondary clusters or clusters associated with an outcome of
interest. This method correctly identifies such clusters of interest in several
simulation scenarios. The method is also applied to a large prospective cohort
study of temporomandibular disorders and a leukemia microarray data set.
| Sheila Gaynor and Eric Bair | null | 1304.3760 | null | null |
A New Homogeneity Inter-Clusters Measure in SemiSupervised Clustering | cs.LG | Many studies in data mining have proposed a new type of learning called
semi-supervised learning. This type of learning combines labeled data, which
are hard to obtain, with unlabeled data; in purely unsupervised methods, only
unlabeled data are used. The significance and effectiveness of semi-supervised
clustering results are becoming of major importance. This paper pursues the
thesis that much greater accuracy can be achieved in such clustering by
improving the similarity computation. Hence, we introduce a new approach to
semi-supervised clustering using an innovative homogeneity measure of
generated clusters. Our experimental results demonstrate significantly improved
accuracy as a result.
| Badreddine Meftahi, Ourida Ben Boubaker Saidi | 10.5120/11267-6526 | 1304.3840 | null | null |
A new Bayesian ensemble of trees classifier for identifying multi-class
labels in satellite images | stat.ME cs.CV cs.LG | Classification of satellite images is a key component of many remote sensing
applications. One of the most important products of a raw satellite image is
the classified map which labels the image pixels into meaningful classes.
Though several parametric and non-parametric classifiers have been developed
thus far, accurate labeling of the pixels still remains a challenge. In this
paper, we propose a new reliable multiclass-classifier for identifying class
labels of a satellite image in remote sensing applications. The proposed
multiclass-classifier is a generalization of a binary classifier based on the
flexible ensemble of regression trees model called Bayesian Additive Regression
Trees (BART). We used three small areas from the LANDSAT 5 TM image, acquired
on August 15, 2009 (path/row: 08/29, L1T product, UTM map projection) over
Kings County, Nova Scotia, Canada to classify the land-use. Several prediction
accuracy and uncertainty measures have been used to compare the reliability of
the proposed classifier with the state-of-the-art classifiers in remote
sensing.
| Reshu Agarwal, Pritam Ranjan, Hugh Chipman | null | 1304.4077 | null | null |
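To illustrate the one-vs-rest strategy for turning a binary probabilistic tree ensemble into a multiclass pixel labeler, here is a hedged sketch; sklearn's GradientBoostingClassifier merely stands in for BART (an assumption made for self-containedness, since BART is Bayesian and would yield posterior class probabilities rather than point estimates).

```python
# Sketch of multiclass labeling from binary tree-ensemble classifiers.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def fit_one_vs_rest(X, y, classes):
    models = {}
    for c in classes:
        # One binary ensemble per class: "is this pixel of class c?"
        models[c] = GradientBoostingClassifier().fit(X, (y == c).astype(int))
    return models

def predict_pixels(models, X):
    # Each pixel gets the class whose binary model assigns it the highest
    # probability; the winning probability doubles as an uncertainty measure.
    probs = np.column_stack([m.predict_proba(X)[:, 1] for m in models.values()])
    labels = np.array(list(models.keys()))
    return labels[probs.argmax(axis=1)], probs.max(axis=1)
```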
Sparse Coding and Dictionary Learning for Symmetric Positive Definite
Matrices: A Kernel Approach | cs.LG cs.CV stat.ML | Recent advances suggest that a wide range of computer vision problems can be
addressed more appropriately by considering non-Euclidean geometry. This paper
tackles the problem of sparse coding and dictionary learning in the space of
symmetric positive definite matrices, which form a Riemannian manifold. With
the aid of the recently introduced Stein kernel (related to a symmetric version
of Bregman matrix divergence), we propose to perform sparse coding by embedding
Riemannian manifolds into reproducing kernel Hilbert spaces. This leads to a
convex and kernel version of the Lasso problem, which can be solved
efficiently. We furthermore propose an algorithm for learning a Riemannian
dictionary (used for sparse coding), closely tied to the Stein kernel.
Experiments on several classification tasks (face recognition, texture
classification, person re-identification) show that the proposed sparse coding
approach achieves notable improvements in discrimination accuracy, in
comparison to state-of-the-art methods such as tensor sparse coding, Riemannian
locality preserving projection, and symmetry-driven accumulation of local
features.
| Mehrtash T. Harandi, Conrad Sanderson, Richard Hartley, Brian C.
Lovell | 10.1007/978-3-642-33709-3_16 | 1304.4344 | null | null |
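A hedged sketch of the sparse-coding half of this approach (the dictionary-learning step is omitted): the Stein kernel used here follows k(X, Y) = exp(-sigma * S(X, Y)) with the symmetric Stein divergence S(X, Y) = log det((X+Y)/2) - 0.5 log det(XY), and the kernelized Lasso is reduced to a standard Lasso via a Cholesky factorization of the Gram matrix; sigma, the jitter, and the regularization scaling are illustrative assumptions.

```python
# Sketch: sparse coding of an SPD matrix over an SPD dictionary via the
# Stein kernel, solved as a standard Lasso after Cholesky factorization.
import numpy as np
from numpy.linalg import slogdet, cholesky, solve
from sklearn.linear_model import Lasso

def stein_kernel(X, Y, sigma=1.0):
    s = slogdet((X + Y) / 2.0)[1] - 0.5 * (slogdet(X)[1] + slogdet(Y)[1])
    return np.exp(-sigma * s)

def kernel_sparse_code(x, dictionary, lam=0.1, sigma=1.0):
    """Solve min_a  a'Ka - 2 a'k(x) + lam * ||a||_1  over dictionary atoms."""
    m = len(dictionary)
    K = np.array([[stein_kernel(a, b, sigma) for b in dictionary]
                  for a in dictionary])
    kx = np.array([stein_kernel(x, d, sigma) for d in dictionary])
    L = cholesky(K + 1e-8 * np.eye(m))   # K = L L'
    y = solve(L, kx)                     # then ||L'a - y||^2 = a'Ka - 2a'kx + c
    # sklearn's Lasso minimizes (1/2m)||y - Xw||^2 + alpha||w||_1, hence lam/(2m)
    lasso = Lasso(alpha=lam / (2 * m), fit_intercept=False)
    return lasso.fit(L.T, y).coef_       # sparse codes over the dictionary
```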
Spectral Compressed Sensing via Structured Matrix Completion | cs.IT cs.LG math.IT math.NA stat.ML | The paper studies the problem of recovering a spectrally sparse object from a
small number of time domain samples. Specifically, the object of interest with
ambient dimension $n$ is assumed to be a mixture of $r$ complex
multi-dimensional sinusoids, while the underlying frequencies can assume any
value in the unit disk. Conventional compressed sensing paradigms suffer from
the {\em basis mismatch} issue when imposing a discrete dictionary on the
Fourier representation. To address this problem, we develop a novel
nonparametric algorithm, called enhanced matrix completion (EMaC), based on
structured matrix completion. The algorithm starts by arranging the data into a
low-rank enhanced form with multi-fold Hankel structure, then attempts recovery
via nuclear norm minimization. Under mild incoherence conditions, EMaC allows
perfect recovery as soon as the number of samples exceeds the order of
$\mathcal{O}(r\log^{2} n)$. We also show that, in many instances, accurate
completion of a low-rank multi-fold Hankel matrix is possible when the number
of observed entries is proportional to the information theoretical limits
(except for a logarithmic gap). The robustness of EMaC against bounded noise
and its applicability to super resolution are further demonstrated by numerical
experiments.
| Yuxin Chen, Yuejie Chi | null | 1304.4610 | null | null |
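A toy sketch of the EMaC idea in one dimension (a hedged illustration, not the paper's solver): arrange the partially observed signal into a Hankel matrix and minimize its nuclear norm subject to the observed time-domain samples. The pencil parameter choice and the real-valued signal are simplifying assumptions; a cvxpy installation with an SDP-capable solver is assumed.

```python
import numpy as np
import cvxpy as cp

def emac_1d(n, observed_idx, observed_vals, k=None):
    k = k or n // 2 + 1            # pencil parameter: Hankel matrix is k x (n-k+1)
    x = cp.Variable(n)
    rows = [cp.reshape(x[i:i + (n - k + 1)], (1, n - k + 1)) for i in range(k)]
    H = cp.vstack(rows)            # Hankel structure (1-D case of multi-fold)
    constraints = [x[observed_idx] == observed_vals]
    cp.Problem(cp.Minimize(cp.normNuc(H)), constraints).solve()
    return x.value                 # completed spectrally sparse signal

# Example: recover a mixture of two sinusoids from ~40% random samples.
n = 64
t = np.arange(n)
sig = np.cos(2 * np.pi * 0.11 * t) + 0.5 * np.cos(2 * np.pi * 0.31 * t)
idx = np.sort(np.random.choice(n, size=26, replace=False))
rec = emac_1d(n, idx, sig[idx])
```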
PAC Quasi-automatizability of Resolution over Restricted Distributions | cs.DS cs.LG cs.LO | We consider principled alternatives to unsupervised learning in data mining
by situating the learning task in the context of the subsequent analysis task.
Specifically, we consider a query-answering (hypothesis-testing) task: In the
combined task, we decide whether an input query formula is satisfied over a
background distribution by using input examples directly, rather than invoking
a two-stage process in which (i) rules over the distribution are learned by an
unsupervised learning algorithm and (ii) a reasoning algorithm decides whether
or not the query formula follows from the learned rules. In a previous work
(2013), we observed that the learning task could satisfy numerous desirable
criteria in this combined context -- effectively matching what could be
achieved by agnostic learning of CNFs from partial information -- that are not
known to be achievable directly. In this work, we show that likewise, there are
reasoning tasks that are achievable in such a combined context that are not
known to be achievable directly (and indeed, have been seriously conjectured to
be impossible, cf. (Alekhnovich and Razborov, 2008)). Namely, we test for a
resolution proof of the query formula of a given size in quasipolynomial time
(that is, "quasi-automatizing" resolution). The learning setting we consider is
a partial-information, restricted-distribution setting that generalizes
learning parities over the uniform distribution from partial information,
another task that is known not to be achievable directly in various models (cf.
(Ben-David and Dichterman, 1998) and (Michael, 2010)).
| Brendan Juba | null | 1304.4633 | null | null |
Easy and hard functions for the Boolean hidden shift problem | quant-ph cs.CC cs.LG | We study the quantum query complexity of the Boolean hidden shift problem.
Given oracle access to f(x+s) for a known Boolean function f, the task is to
determine the n-bit string s. The quantum query complexity of this problem
depends strongly on f. We demonstrate that the easiest instances of this
problem correspond to bent functions, in the sense that an exact one-query
algorithm exists if and only if the function is bent. We partially characterize
the hardest instances, which include delta functions. Moreover, we show that
the problem is easy for random functions, since two queries suffice. Our
algorithm for random functions is based on performing the pretty good
measurement on several copies of a certain state; its analysis relies on the
Fourier transform. We also use this approach to improve the quantum rejection
sampling approach to the Boolean hidden shift problem.
| Andrew M. Childs, Robin Kothari, Maris Ozols, Martin Roetteler | 10.4230/LIPIcs.TQC.2013.50 | 1304.4642 | null | null |
Unsupervised model-free representation learning | cs.LG q-bio.QM stat.ML | Numerous control and learning problems face the situation where sequences of
high-dimensional highly dependent data are available but no or little feedback
is provided to the learner, which makes any inference rather challenging. To
address this challenge, we formulate the following problem. Given a series of
observations $X_0,\dots,X_n$ coming from a large (high-dimensional) space
$\mathcal X$, find a representation function $f$ mapping $\mathcal X$ to a
finite space $\mathcal Y$ such that the series $f(X_0),\dots,f(X_n)$ preserves
as much information as possible about the original time-series dependence in
$X_0,\dots,X_n$. We show that, for stationary time series, the function $f$ can
be selected as the one maximizing a certain information criterion that we call
time-series information. Some properties of this function are investigated,
including its uniqueness and consistency of its empirical estimates.
Implications for the problem of optimal control are presented.
| Daniil Ryabko | 10.1109/TIT.2019.2961814 | 1304.4806 | null | null |
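As a toy illustration of the selection principle (an assumed simplification, not the paper's estimator), one can score candidate quantizers by a plug-in estimate of the one-step mutual information of their symbol sequences, a crude stand-in for the time-series information criterion.

```python
import numpy as np

def one_step_mutual_info(symbols, k):
    """Plug-in mutual information between consecutive symbols in {0,...,k-1}."""
    s0, s1 = symbols[:-1], symbols[1:]
    joint = np.zeros((k, k))
    for a, b in zip(s0, s1):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def best_representation(X, candidate_fs, k):
    """Pick the candidate f maximizing the estimated dependence it preserves."""
    scores = [one_step_mutual_info(np.array([f(x) for x in X]), k)
              for f in candidate_fs]
    return candidate_fs[int(np.argmax(scores))]
```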
Combinaison d'information visuelle, conceptuelle, et contextuelle pour
la construction automatique de hierarchies semantiques adaptees a
l'annotation d'images | cs.CV cs.LG cs.MM | This paper proposes a new methodology to automatically build semantic
hierarchies suitable for image annotation and classification. The building of
the hierarchy is based on a new measure of semantic similarity. The proposed
measure incorporates several sources of information: visual, conceptual and
contextual, as defined in this paper. The aim is to provide a measure that
closely reflects image semantics. We then propose rules, based on this measure,
for building the final hierarchy, which explicitly encode hierarchical
relationships between the different concepts. The built hierarchy is then used
in a semantic hierarchical classification framework for image annotation. Our
experiments and results show that the built hierarchy improves classification
results.
| Hichem Bannour and C\'eline Hudelot | null | 1304.5063 | null | null |
Image Retrieval based on Bag-of-Words model | cs.IR cs.LG | This article surveys the bag-of-words (BoW), or bag-of-features, model
in image retrieval systems. In recent years, large-scale image retrieval has shown
significant potential in both industry applications and research problems. As
local descriptors like SIFT demonstrate great discriminative power in solving
vision problems like object recognition, image classification and annotation,
more and more state-of-the-art large scale image retrieval systems are trying
to rely on them. A common way to achieve this is to first quantize local
descriptors into visual words and then apply scalable textual indexing and
retrieval schemes. We call this the bag-of-words or bag-of-features model.
The goal of this survey is to give an overview of this model and introduce
different strategies when building the system based on this model.
| Jialu Liu | null | 1304.5168 | null | null |
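A schematic version of the pipeline just described, as a hedged sketch rather than any particular system: SIFT local descriptors, a k-means visual vocabulary, tf-idf weighted histograms, and cosine-similarity ranking. An OpenCV build with SIFT is assumed, and the vocabulary size is an arbitrary choice.

```python
import numpy as np
import cv2
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

sift = cv2.SIFT_create()

def extract_descriptors(image_paths):
    descs = []
    for p in image_paths:
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, d = sift.detectAndCompute(img, None)
        descs.append(d if d is not None else np.zeros((0, 128), np.float32))
    return descs

def build_index(image_paths, vocab_size=1000):
    descs = extract_descriptors(image_paths)
    # Quantize all local descriptors into a visual vocabulary.
    kmeans = MiniBatchKMeans(n_clusters=vocab_size).fit(np.vstack(descs))
    hists = np.array([np.bincount(kmeans.predict(d), minlength=vocab_size)
                      if len(d) else np.zeros(vocab_size) for d in descs])
    tfidf = TfidfTransformer().fit(hists)   # textual-style tf-idf weighting
    return kmeans, tfidf, tfidf.transform(hists)

def query(img_path, kmeans, tfidf, index):
    d = extract_descriptors([img_path])[0]
    h = np.bincount(kmeans.predict(d), minlength=kmeans.n_clusters)
    q = tfidf.transform(h.reshape(1, -1))
    return np.argsort(-cosine_similarity(q, index).ravel())  # ranked image ids
```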
Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget | cs.LG stat.ML | Can we make Bayesian posterior MCMC sampling more efficient when faced with
very large datasets? We argue that computing the likelihood for N datapoints in
the Metropolis-Hastings (MH) test to reach a single binary decision is
computationally inefficient. We introduce an approximate MH rule based on a
sequential hypothesis test that allows us to accept or reject samples with high
confidence using only a fraction of the data required for the exact MH rule.
While this method introduces an asymptotic bias, we show that this bias can be
controlled and is more than offset by a decrease in variance due to our ability
to draw more samples per unit of time.
| Anoop Korattikara, Yutian Chen, Max Welling | null | 1304.5299 | null | null |
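The sketch below illustrates the sequential test in the spirit of this approximate MH rule; the batch size, the confidence level, and the caller-supplied threshold mu0 (derived from the MH uniform draw and the prior/proposal ratios) are assumptions of this illustration.

```python
# Decide accept/reject from a growing subsample of log-likelihood ratios,
# stopping early once a t-test is confident, instead of scanning all N points.
import numpy as np
from scipy import stats

def approx_mh_accept(loglik_ratio, data, mu0, eps=0.05, batch=100):
    """loglik_ratio(x) = log p(x|theta') - log p(x|theta), vectorized.
    Accept iff the mean ratio over all data exceeds mu0."""
    perm = np.random.permutation(len(data))
    diffs = []
    for start in range(0, len(data), batch):
        diffs.extend(loglik_ratio(data[perm[start:start + batch]]))
        n = len(diffs)
        if n >= len(data):
            return np.mean(diffs) > mu0        # all data seen: exact decision
        se = np.std(diffs, ddof=1) / np.sqrt(n)
        se *= np.sqrt(1 - (n - 1) / (len(data) - 1))  # finite-population correction
        t = (np.mean(diffs) - mu0) / se
        p = 1 - stats.t.cdf(np.abs(t), df=n - 1)
        if p < eps:                            # confident in the sign of mean - mu0
            return np.mean(diffs) > mu0
```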
Parallel Gaussian Process Optimization with Upper Confidence Bound and
Pure Exploration | cs.LG stat.ML | In this paper, we consider the challenge of maximizing an unknown function f
for which evaluations are noisy and are acquired with high cost. An iterative
procedure uses the previous measures to actively select the next estimation of
f which is predicted to be the most useful. We focus on the case where the
function can be evaluated in parallel with batches of fixed size and analyze
the benefit compared to the purely sequential procedure in terms of cumulative
regret. We introduce the Gaussian Process Upper Confidence Bound and Pure
Exploration algorithm (GP-UCB-PE) which combines the UCB strategy and Pure
Exploration in the same batch of evaluations along the parallel iterations. We
prove theoretical upper bounds on the regret with batches of size K for this
procedure which show the improvement of the order of sqrt{K} for fixed
iteration cost over purely sequential versions. Moreover, the multiplicative
constants involved have the property of being dimension-free. We also confirm
empirically the efficiency of GP-UCB-PE on real and synthetic problems compared
to state-of-the-art competitors.
| Emile Contal and David Buffoni and Alexandre Robicquet and Nicolas
Vayatis | 10.1007/978-3-642-40988-2_15 | 1304.5350 | null | null |
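A hedged sketch of one GP-UCB-PE round on a finite candidate grid follows. The first batch point maximizes the UCB; the remaining K-1 maximize posterior variance (pure exploration), fantasizing each picked point at its posterior mean so the variance update is well defined (GP posterior variance does not depend on the observed values). The restriction of exploration to the paper's relevant region is omitted here for simplicity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def gp_ucb_pe_batch(gp, X_obs, y_obs, candidates, K, beta=2.0):
    gp.fit(X_obs, y_obs)
    mu, sd = gp.predict(candidates, return_std=True)
    batch = [candidates[np.argmax(mu + np.sqrt(beta) * sd)]]   # UCB point
    Xf, yf = list(X_obs), list(y_obs)
    for _ in range(K - 1):
        m, _ = gp.predict(np.array([batch[-1]]), return_std=True)
        Xf.append(batch[-1]); yf.append(m[0])                  # fantasy observation
        gp.fit(np.array(Xf), np.array(yf))
        _, sd = gp.predict(candidates, return_std=True)
        batch.append(candidates[np.argmax(sd)])                # pure exploration
    return np.array(batch)  # evaluate f on these K points in parallel
```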
Personalized Academic Research Paper Recommendation System | cs.IR cs.DL cs.LG | A huge number of academic papers come out of numerous conferences and
journals these days. In these circumstances, most researchers rely on
keyword-based search or on browsing the proceedings of top conferences and
journals to find related work. To ease this difficulty, we propose a
Personalized Academic Research Paper Recommendation System, which recommends
to each researcher related articles that may be of interest. In this paper, we first
introduce our web crawler to retrieve research papers from the web. Then, we
define similarity between two research papers based on the text similarity
between them. Finally, we propose our recommender system developed using
collaborative filtering methods. Our evaluation results demonstrate that our
system recommends good quality research papers.
| Joonseok Lee, Kisung Lee, Jennifer G. Kim | null | 1304.5457 | null | null |
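A minimal sketch of the content-based half of such a system (an illustration under simplifying assumptions, not the authors' implementation): TF-IDF cosine similarity between paper texts, recommending the papers closest to a researcher's own publications; the collaborative-filtering part is omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend(corpus_texts, user_paper_ids, top_k=10):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus_texts)
    # For each paper, take its best similarity to any of the user's papers.
    sims = cosine_similarity(tfidf[user_paper_ids], tfidf).max(axis=0)
    sims[user_paper_ids] = -1.0          # never recommend the user's own papers
    return sims.argsort()[::-1][:top_k]  # indices of recommended papers
```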
Optimal Stochastic Strongly Convex Optimization with a Logarithmic
Number of Projections | cs.LG stat.ML | We consider stochastic strongly convex optimization with a complex inequality
constraint. This complex inequality constraint may lead to computationally
expensive projections in algorithmic iterations of the stochastic gradient
descent~(SGD) methods. To reduce the computation costs pertaining to the
projections, we propose an Epoch-Projection Stochastic Gradient
Descent~(Epro-SGD) method. The proposed Epro-SGD method consists of a sequence
of epochs; it applies SGD to an augmented objective function at each iteration
within the epoch, and then performs a projection at the end of each epoch.
For a strongly convex objective and a total of $T$ iterations,
Epro-SGD requires only $\log(T)$ projections, and meanwhile attains an optimal
convergence rate of $O(1/T)$, both in expectation and with a high probability.
To exploit the structure of the optimization problem, we propose a proximal
variant of Epro-SGD, namely Epro-ORDA, based on the optimal regularized dual
averaging method. We apply the proposed methods on real-world applications; the
empirical results demonstrate the effectiveness of our methods.
| Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang | null | 1304.5504 | null | null |
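A hedged sketch of the epoch-projection idea: run plain SGD on an augmented objective f(x) + gamma * max(0, g(x)) within each epoch, and apply the expensive exact projection onto {x : g(x) <= 0} only once per epoch. The step-size schedule and epoch length are illustrative; with an epoch length of roughly T/log(T), only about log(T) projections are performed.

```python
import numpy as np

def epro_sgd(grad_f, g, grad_g, project, x0, gamma, T, epoch_len, lr0):
    x = x0.copy()
    for t in range(1, T + 1):
        step = lr0 / t                              # schedule for strong convexity
        pen = grad_g(x) if g(x) > 0 else 0.0        # subgradient of the hinge term
        x = x - step * (grad_f(x) + gamma * pen)    # cheap, unprojected SGD step
        if t % epoch_len == 0:
            x = project(x)                          # exact projection, once per epoch
    return project(x)
```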