categories
string | doi
string | id
string | year
float64 | venue
string | link
string | updated
string | published
string | title
string | abstract
string | authors
list |
---|---|---|---|---|---|---|---|---|---|---|
stat.ML cs.LG cs.SI math.ST stat.TH | 10.1109/LSP.2014.2381553 | 1407.5978 | null | null | http://arxiv.org/abs/1407.5978v3 | 2014-07-24T06:27:05Z | 2014-07-22T19:16:01Z | Sequential Changepoint Approach for Online Community Detection | We present new algorithms for detecting the emergence of a community in large
networks from sequential observations. The networks are modeled using
Erdos-Renyi random graphs with edges forming between nodes in the community
with higher probability. Based on statistical changepoint detection
methodology, we develop three algorithms: the Exhaustive Search (ES), the
mixture, and the Hierarchical Mixture (H-Mix) methods. Performance of these
methods is evaluated by the average run length (ARL), which captures the
frequency of false alarms, and the detection delay. Numerical comparisons show
that the ES method performs the best; however, it is exponentially complex. The
mixture method is polynomially complex by exploiting the fact that the size of
the community is typically small in a large network. However, it may react to a
group of active edges that do not form a community. This issue is resolved by
the H-Mix method, which is based on a dendrogram decomposition of the network.
We present an asymptotic analytical expression for ARL of the mixture method
when the threshold is large. Numerical simulation verifies that our
approximation is accurate even in the non-asymptotic regime. Hence, it can be
used to determine a desired threshold efficiently. Finally, numerical examples
show that the mixture and the H-Mix methods can both detect a community quickly
with a lower complexity than the ES method.
| [
"David Marangoni-Simonsen and Yao Xie",
"['David Marangoni-Simonsen' 'Yao Xie']"
]
|
cs.LG cs.CV | null | 1407.6067 | null | null | http://arxiv.org/pdf/1407.6067v1 | 2014-07-22T23:18:08Z | 2014-07-22T23:18:08Z | The U-curve optimization problem: improvements on the original algorithm
and time complexity analysis | The U-curve optimization problem is characterized by a decomposable in
U-shaped curves cost function over the chains of a Boolean lattice. This
problem can be applied to model the classical feature selection problem in
Machine Learning. Recently, the U-Curve algorithm was proposed to give optimal
solutions to the U-curve problem. In this article, we point out that the
U-Curve algorithm is in fact suboptimal, and introduce the U-Curve-Search (UCS)
algorithm, which is actually optimal. We also present the results of optimal
and suboptimal experiments, in which UCS is compared with the UBB optimal
branch-and-bound algorithm and the SFFS heuristic, respectively. We show that,
in both experiments, $\proc{UCS}$ had a better performance than its competitor.
Finally, we analyze the obtained results and point out improvements on UCS that
might enhance the performance of this algorithm.
| [
"Marcelo S. Reis, Carlos E. Ferreira, and Junior Barrera",
"['Marcelo S. Reis' 'Carlos E. Ferreira' 'Junior Barrera']"
]
|
cs.IR cs.LG stat.ML | null | 1407.6089 | null | null | http://arxiv.org/pdf/1407.6089v2 | 2015-02-07T23:50:43Z | 2014-07-23T01:54:31Z | Learning Rank Functionals: An Empirical Study | Ranking is a key aspect of many applications, such as information retrieval,
question answering, ad placement and recommender systems. Learning to rank has
the goal of estimating a ranking model automatically from training data. In
practical settings, the task often reduces to estimating a rank functional of
an object with respect to a query. In this paper, we investigate key issues in
designing an effective learning to rank algorithm. These include data
representation, the choice of rank functionals, the design of the loss function
so that it is correlated with the rank metrics used in evaluation. For the loss
function, we study three techniques: approximating the rank metric by a smooth
function, decomposition of the loss into a weighted sum of element-wise losses
and into a weighted sum of pairwise losses. We then present derivations of
piecewise losses using the theory of high-order Markov chains and Markov random
fields. In experiments, we evaluate these design aspects on two tasks: answer
ranking in a Social Question Answering site, and Web Information Retrieval.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
stat.ML cs.LG | null | 1407.6094 | null | null | http://arxiv.org/pdf/1407.6094v1 | 2014-07-23T02:47:47Z | 2014-07-23T02:47:47Z | Stabilizing Sparse Cox Model using Clinical Structures in Electronic
Medical Records | Stability in clinical prediction models is crucial for transferability
between studies, yet has received little attention. The problem is paramount in
high dimensional data which invites sparse models with feature selection
capability. We introduce an effective method to stabilize sparse Cox model of
time-to-events using clinical structures inherent in Electronic Medical
Records. Model estimation is stabilized using a feature graph derived from two
types of EMR structures: temporal structure of disease and intervention
recurrences, and hierarchical structure of medical knowledge and practices. We
demonstrate the efficacy of the method in predicting time-to-readmission of
heart failure patients. On two stability measures - the Jaccard index and the
Consistency index - the use of clinical structures significantly increased
feature stability without hurting discriminative power. Our model reported a
competitive AUC of 0.64 (95% CIs: [0.58,0.69]) for 6 months prediction.
| [
"Shivapratap Gopakumar, Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Shivapratap Gopakumar' 'Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
cs.IR cs.LG stat.ML | null | 1407.6128 | null | null | http://arxiv.org/pdf/1407.6128v1 | 2014-07-23T08:20:09Z | 2014-07-23T08:20:09Z | Permutation Models for Collaborative Ranking | We study the problem of collaborative filtering where ranking information is
available. Focusing on the core of the collaborative ranking process, the user
and their community, we propose new models for representation of the underlying
permutations and prediction of ranks. The first approach is based on the
assumption that the user makes successive choice of items in a stage-wise
manner. In particular, we extend the Plackett-Luce model in two ways -
introducing parameter factoring to account for user-specific contribution, and
modelling the latent community in a generative setting. The second approach
relies on log-linear parameterisation, which relaxes the discrete-choice
assumption, but makes learning and inference much more involved. We propose
MCMC-based learning and inference methods and derive linear-time prediction
algorithms.
| [
"Truyen Tran and Svetha Venkatesh",
"['Truyen Tran' 'Svetha Venkatesh']"
]
|
cs.IT cs.LG math.IT | null | 1407.6154 | null | null | http://arxiv.org/pdf/1407.6154v1 | 2014-07-23T10:01:17Z | 2014-07-23T10:01:17Z | Content-Level Selective Offloading in Heterogeneous Networks:
Multi-armed Bandit Optimization and Regret Bounds | We consider content-level selective offloading of cellular downlink traffic
to a wireless infostation terminal which stores high data-rate content in its
cache memory. Cellular users in the vicinity of the infostation can directly
download the stored content from the infostation through a broadband connection
(e.g., WiFi), reducing the latency and load on the cellular network. The goal
of the infostation cache controller (CC) is to store the most popular content
in the cache memory such that the maximum amount of traffic is offloaded to the
infostation. In practice, the popularity profile of the files is not known by
the CC, which observes only the instantaneous demands for those contents stored
in the cache. Hence, the cache content placement is optimised based on the
demand history and on the cost associated to placing each content in the cache.
By refreshing the cache content at regular time intervals, the CC gradually
learns the popularity profile, while at the same time exploiting the limited
cache capacity in the best way possible. This is formulated as a multi-armed
bandit (MAB) problem with switching cost. Several algorithms are presented to
decide on the cache content over time. The performance is measured in terms of
cache efficiency, defined as the amount of net traffic that is offloaded to the
infostation. In addition to theoretical regret bounds, the proposed algorithms
are analysed through numerical simulations. In particular, the impact of system
parameters, such as the number of files, number of users, cache size, and
skewness of the popularity profile, on the performance is studied numerically.
It is shown that the proposed algorithms learn the popularity profile quickly
for a wide range of system parameters.
| [
"['Pol Blasco' 'Deniz Gündüz']",
"Pol Blasco and Deniz G\\\"und\\\"uz"
]
|
math.OC cs.GT cs.LG | null | 1407.6267 | null | null | http://arxiv.org/pdf/1407.6267v2 | 2016-02-08T23:29:36Z | 2014-07-23T15:37:38Z | Learning in games via reinforcement and regularization | We investigate a class of reinforcement learning dynamics where players
adjust their strategies based on their actions' cumulative payoffs over time -
specifically, by playing mixed strategies that maximize their expected
cumulative payoff minus a regularization term. A widely studied example is
exponential reinforcement learning, a process induced by an entropic
regularization term which leads mixed strategies to evolve according to the
replicator dynamics. However, in contrast to the class of regularization
functions used to define smooth best responses in models of stochastic
fictitious play, the functions used in this paper need not be infinitely steep
at the boundary of the simplex; in fact, dropping this requirement gives rise
to an important dichotomy between steep and nonsteep cases. In this general
framework, we extend several properties of exponential learning, including the
elimination of dominated strategies, the asymptotic stability of strict Nash
equilibria, and the convergence of time-averaged trajectories in zero-sum games
with an interior Nash equilibrium.
| [
"Panayotis Mertikopoulos and William H. Sandholm",
"['Panayotis Mertikopoulos' 'William H. Sandholm']"
]
|
cs.AI cs.LG cs.NE math.OC | null | 1407.6315 | null | null | http://arxiv.org/pdf/1407.6315v1 | 2014-07-23T18:04:23Z | 2014-07-23T18:04:23Z | Quadratically constrained quadratic programming for classification using
particle swarms and applications | Particle swarm optimization is used in several combinatorial optimization
problems. In this work, particle swarms are used to solve quadratic programming
problems with quadratic constraints. The approach of particle swarms is an
example for interior point methods in optimization as an iterative technique.
This approach is novel and deals with classification problems without the use
of a traditional classifier. Our method determines the optimal hyperplane or
classification boundary for a data set. In a binary classification problem, we
constrain each class as a cluster, which is enclosed by an ellipsoid. The
estimation of the optimal hyperplane between the two clusters is posed as a
quadratically constrained quadratic problem. The optimization problem is solved
in distributed format using modified particle swarms. Our method has the
advantage of using the direction towards optimal solution rather than searching
the entire feasible region. Our results on the Iris, Pima, Wine, and Thyroid
datasets show that the proposed method works better than a neural network and
the performance is close to that of SVM.
| [
"['Deepak Kumar' 'A G Ramakrishnan']",
"Deepak Kumar, A G Ramakrishnan"
]
|
stat.ML cs.CV cs.LG | null | 1407.6432 | null | null | http://arxiv.org/pdf/1407.6432v1 | 2014-07-24T02:53:52Z | 2014-07-24T02:53:52Z | Learning Structured Outputs from Partial Labels using Forest Ensemble | Learning structured outputs with general structures is computationally
challenging, except for tree-structured models. Thus we propose an efficient
boosting-based algorithm AdaBoost.MRF for this task. The idea is based on the
realization that a graph is a superimposition of trees. Different from most
existing work, our algorithm can handle partial labelling, and thus is
particularly attractive in practice where reliable labels are often sparsely
observed. In addition, our method works exclusively on trees and thus is
guaranteed to converge. We apply the AdaBoost.MRF algorithm to an indoor video
surveillance scenario, where activities are modelled at multiple levels.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
cs.DB cs.CL cs.LG | null | 1407.6439 | null | null | http://arxiv.org/pdf/1407.6439v3 | 2014-09-18T14:38:06Z | 2014-07-24T03:34:41Z | Feature Engineering for Knowledge Base Construction | Knowledge base construction (KBC) is the process of populating a knowledge
base, i.e., a relational database together with inference rules, with
information extracted from documents and structured sources. KBC blurs the
distinction between two traditional database problems, information extraction
and information integration. For the last several years, our group has been
building knowledge bases with scientific collaborators. Using our approach, we
have built knowledge bases that have comparable and sometimes better quality
than those constructed by human volunteers. In contrast to these knowledge
bases, which took experts a decade or more human years to construct, many of
our projects are constructed by a single graduate student.
Our approach to KBC is based on joint probabilistic inference and learning,
but we do not see inference as either a panacea or a magic bullet: inference is
a tool that allows us to be systematic in how we construct, debug, and improve
the quality of such systems. In addition, inference allows us to construct
these systems in a more loosely coupled way than traditional approaches. To
support this idea, we have built the DeepDive system, which has the design goal
of letting the user "think about features---not algorithms." We think of
DeepDive as declarative in that one specifies what they want but not how to get
it. We describe our approach with a focus on feature engineering, which we
argue is an understudied problem relative to its importance to end-to-end
quality.
| [
"['Christopher Ré' 'Amir Abbas Sadeghian' 'Zifei Shan' 'Jaeho Shin'\n 'Feiran Wang' 'Sen Wu' 'Ce Zhang']",
"Christopher R\\'e, Amir Abbas Sadeghian, Zifei Shan, Jaeho Shin, Feiran\n Wang, Sen Wu, Ce Zhang"
]
|
cs.LG stat.ML | null | 1407.6810 | null | null | http://arxiv.org/pdf/1407.6810v2 | 2016-04-09T03:09:18Z | 2014-07-25T08:30:04Z | Dissimilarity-based Sparse Subset Selection | Finding an informative subset of a large collection of data points or models
is at the center of many problems in computer vision, recommender systems,
bio/health informatics as well as image and natural language processing. Given
pairwise dissimilarities between the elements of a `source set' and a `target
set,' we consider the problem of finding a subset of the source set, called
representatives or exemplars, that can efficiently describe the target set. We
formulate the problem as a row-sparsity regularized trace minimization problem.
Since the proposed formulation is, in general, NP-hard, we consider a convex
relaxation. The solution of our optimization finds representatives and the
assignment of each element of the target set to each representative, hence,
obtaining a clustering. We analyze the solution of our proposed optimization as
a function of the regularization parameter. We show that when the two sets
jointly partition into multiple groups, our algorithm finds representatives
from all groups and reveals clustering of the sets. In addition, we show that
the proposed framework can effectively deal with outliers. Our algorithm works
with arbitrary dissimilarities, which can be asymmetric or violate the triangle
inequality. To efficiently implement our algorithm, we consider an Alternating
Direction Method of Multipliers (ADMM) framework, which results in quadratic
complexity in the problem size. We show that the ADMM implementation allows to
parallelize the algorithm, hence further reducing the computational time.
Finally, by experiments on real-world datasets, we show that our proposed
algorithm improves the state of the art on the two problems of scene
categorization using representative images and time-series modeling and
segmentation using representative~models.
| [
"Ehsan Elhamifar, Guillermo Sapiro and S. Shankar Sastry",
"['Ehsan Elhamifar' 'Guillermo Sapiro' 'S. Shankar Sastry']"
]
|
cs.CL cs.IR cs.LG | null | 1407.6872 | null | null | http://arxiv.org/pdf/1407.6872v1 | 2014-07-25T12:46:18Z | 2014-07-25T12:46:18Z | Interpretable Low-Rank Document Representations with Label-Dependent
Sparsity Patterns | In context of document classification, where in a corpus of documents their
label tags are readily known, an opportunity lies in utilizing label
information to learn document representation spaces with better discriminative
properties. To this end, in this paper application of a Variational Bayesian
Supervised Nonnegative Matrix Factorization (supervised vbNMF) with
label-driven sparsity structure of coefficients is proposed for learning of
discriminative nonsubtractive latent semantic components occuring in TF-IDF
document representations. Constraints are such that the components pursued are
made to be frequently occuring in a small set of labels only, making it
possible to yield document representations with distinctive label-specific
sparse activation patterns. A simple measure of quality of this kind of
sparsity structure, dubbed inter-label sparsity, is introduced and
experimentally brought into tight connection with classification performance.
Representing a great practical convenience, inter-label sparsity is shown to be
easily controlled in supervised vbNMF by a single parameter.
| [
"Ivan Ivek",
"['Ivan Ivek']"
]
|
cs.HC cs.LG | null | 1407.7131 | null | null | http://arxiv.org/pdf/1407.7131v2 | 2014-09-16T23:41:58Z | 2014-07-26T14:18:00Z | Your click decides your fate: Inferring Information Processing and
Attrition Behavior from MOOC Video Clickstream Interactions | In this work, we explore video lecture interaction in Massive Open Online
Courses (MOOCs), which is central to student learning experience on these
educational platforms. As a research contribution, we operationalize video
lecture clickstreams of students into cognitively plausible higher level
behaviors, and construct a quantitative information processing index, which can
aid instructors to better understand MOOC hurdles and reason about
unsatisfactory learning outcomes. Our results illustrate how such a metric
inspired by cognitive psychology can help answer critical questions regarding
students' engagement, their future click interactions and participation
trajectories that lead to in-video & course dropouts. Implications for research
and practice are discussed
| [
"['Tanmay Sinha' 'Patrick Jermann' 'Nan Li' 'Pierre Dillenbourg']",
"Tanmay Sinha, Patrick Jermann, Nan Li, Pierre Dillenbourg"
]
|
cond-mat.mtrl-sci cs.LG | null | 1407.7159 | null | null | http://arxiv.org/pdf/1407.7159v1 | 2014-07-26T20:36:08Z | 2014-07-26T20:36:08Z | Pairwise Correlations in Layered Close-Packed Structures | Given a description of the stacking statistics of layered close-packed
structures in the form of a hidden Markov model, we develop analytical
expressions for the pairwise correlation functions between the layers. These
may be calculated analytically as explicit functions of model parameters or the
expressions may be used as a fast, accurate, and efficient way to obtain
numerical values. We present several examples, finding agreement with previous
work as well as deriving new relations.
| [
"['P. M. Riechers' 'D. P. Varn' 'J. P. Crutchfield']",
"P. M. Riechers and D. P. Varn and J. P. Crutchfield"
]
|
cs.CY cs.LG | null | 1407.7260 | null | null | http://arxiv.org/pdf/1407.7260v1 | 2014-07-27T17:24:14Z | 2014-07-27T17:24:14Z | Leveraging user profile attributes for improving pedagogical accuracy of
learning pathways | In recent years, with the enormous explosion of web based learning resources,
personalization has become a critical factor for the success of services that
wish to leverage the power of Web 2.0. However, the relevance, significance and
impact of tailored content delivery in the learning domain is still
questionable. Apart from considering only interaction based features like
ratings and inferring learner preferences from them, if these services were to
incorporate innate user profile attributes which affect learning activities,
the quality of recommendations produced could be vastly improved. Recognizing
the crucial role of effective guidance in informal educational settings, we
provide a principled way of utilizing multiple sources of information from the
user profile itself for the recommendation task. We explore factors that affect
the choice of learning resources and explain in what way are they helpful to
improve the pedagogical accuracy of learning objects recommended. Through a
systematical application of machine learning techniques, we further provide a
technological solution to convert these indirectly mapped learner specific
attributes into a direct mapping with the learning resources. This mapping has
a distinct advantage of tagging learning resources to make their metadata more
informative. The results of our empirical study depict the similarity of
nominal learning attributes with respect to each other. We further succeed in
capturing the learner subset, whose preferences are most likely to be an
indication of learning resource usage. Our novel system filters learner profile
attributes to discover a tag that links them with learning resources.
| [
"['Tanmay Sinha' 'Ankit Banka' 'Dae Ki Kang']",
"Tanmay Sinha, Ankit Banka, Dae Ki Kang"
]
|
cs.DS cs.GT cs.LG | null | 1407.7294 | null | null | http://arxiv.org/pdf/1407.7294v2 | 2014-11-28T21:45:08Z | 2014-07-27T23:38:09Z | Online Learning and Profit Maximization from Revealed Preferences | We consider the problem of learning from revealed preferences in an online
setting. In our framework, each period a consumer buys an optimal bundle of
goods from a merchant according to her (linear) utility function and current
prices, subject to a budget constraint. The merchant observes only the
purchased goods, and seeks to adapt prices to optimize his profits. We give an
efficient algorithm for the merchant's problem that consists of a learning
phase in which the consumer's utility function is (perhaps partially) inferred,
followed by a price optimization step. We also consider an alternative online
learning algorithm for the setting where prices are set exogenously, but the
merchant would still like to predict the bundle that will be bought by the
consumer for purposes of inventory or supply chain management. In contrast with
most prior work on the revealed preferences problem, we demonstrate that by
making stronger assumptions on the form of utility functions, efficient
algorithms for both learning and profit maximization are possible, even in
adaptive, online settings.
| [
"['Kareem Amin' 'Rachel Cummings' 'Lili Dworkin' 'Michael Kearns'\n 'Aaron Roth']",
"Kareem Amin, Rachel Cummings, Lili Dworkin, Michael Kearns, Aaron Roth"
]
|
cs.NA cs.LG stat.ML | null | 1407.7299 | null | null | http://arxiv.org/pdf/1407.7299v1 | 2014-07-28T00:41:12Z | 2014-07-28T00:41:12Z | Algorithms, Initializations, and Convergence for the Nonnegative Matrix
Factorization | It is well known that good initializations can improve the speed and accuracy
of the solutions of many nonnegative matrix factorization (NMF) algorithms.
Many NMF algorithms are sensitive with respect to the initialization of W or H
or both. This is especially true of algorithms of the alternating least squares
(ALS) type, including the two new ALS algorithms that we present in this paper.
We compare the results of six initialization procedures (two standard and four
new) on our ALS algorithms. Lastly, we discuss the practical issue of choosing
an appropriate convergence criterion.
| [
"['Amy N. Langville' 'Carl D. Meyer' 'Russell Albright' 'James Cox'\n 'David Duling']",
"Amy N. Langville, Carl D. Meyer, Russell Albright, James Cox, David\n Duling"
]
|
cs.LG cs.AI | null | 1407.7417 | null | null | http://arxiv.org/pdf/1407.7417v1 | 2014-07-28T13:44:25Z | 2014-07-28T13:44:25Z | 'Almost Sure' Chaotic Properties of Machine Learning Methods | It has been demonstrated earlier that universal computation is 'almost
surely' chaotic. Machine learning is a form of computational fixed point
iteration, iterating over the computable function space. We showcase some
properties of this iteration, and establish in general that the iteration is
'almost surely' of chaotic nature. This theory explains the observation in the
counter intuitive properties of deep learning methods. This paper demonstrates
that these properties are going to be universal to any learning method.
| [
"Nabarun Mondal, Partha P. Ghosh",
"['Nabarun Mondal' 'Partha P. Ghosh']"
]
|
cs.LG | null | 1407.7449 | null | null | http://arxiv.org/pdf/1407.7449v1 | 2014-07-23T09:14:49Z | 2014-07-23T09:14:49Z | A Fast Synchronization Clustering Algorithm | This paper presents a Fast Synchronization Clustering algorithm (FSynC),
which is an improved version of SynC algorithm. In order to decrease the time
complexity of the original SynC algorithm, we combine grid cell partitioning
method and Red-Black tree to construct the near neighbor point set of every
point. By simulated experiments of some artificial data sets and several real
data sets, we observe that FSynC algorithm can often get less time than SynC
algorithm for many kinds of data sets. At last, it gives some research
expectations to popularize this algorithm.
| [
"['Xinquan Chen']",
"Xinquan Chen"
]
|
cs.LG stat.ML | null | 1407.7508 | null | null | http://arxiv.org/pdf/1407.7508v1 | 2014-07-28T19:28:26Z | 2014-07-28T19:28:26Z | Efficient Regularized Regression for Variable Selection with L0 Penalty | Variable (feature, gene, model, which we use interchangeably) selections for
regression with high-dimensional BIGDATA have found many applications in
bioinformatics, computational biology, image processing, and engineering. One
appealing approach is the L0 regularized regression which penalizes the number
of nonzero features in the model directly. L0 is known as the most essential
sparsity measure and has nice theoretical properties, while the popular L1
regularization is only a best convex relaxation of L0. Therefore, it is natural
to expect that L0 regularized regression performs better than LASSO. However,
it is well-known that L0 optimization is NP-hard and computationally
challenging. Instead of solving the L0 problems directly, most publications so
far have tried to solve an approximation problem that closely resembles L0
regularization.
In this paper, we propose an efficient EM algorithm (L0EM) that directly
solves the L0 optimization problem. $L_0$EM is efficient with high dimensional
data. It also provides a natural solution to all Lp p in [0,2] problems. The
regularized parameter can be either determined through cross-validation or AIC
and BIC. Theoretical properties of the L0-regularized estimator are given under
mild conditions that permit the number of variables to be much larger than the
sample size. We demonstrate our methods through simulation and high-dimensional
genomic data. The results indicate that L0 has better performance than LASSO
and L0 with AIC or BIC has similar performance as computationally intensive
cross-validation. The proposed algorithms are efficient in identifying the
non-zero variables with less-bias and selecting biologically important genes
and pathways with high dimensional BIGDATA.
| [
"['Zhenqiu Liu' 'Gang Li']",
"Zhenqiu Liu and Gang Li"
]
|
cs.CV cs.LG stat.ML | 10.1109/TNNLS.2015.2418332 | 1407.7556 | null | null | http://arxiv.org/abs/1407.7556v3 | 2015-01-11T16:27:23Z | 2014-07-28T20:26:24Z | Entropic one-class classifiers | The one-class classification problem is a well-known research endeavor in
pattern recognition. The problem is also known under different names, such as
outlier and novelty/anomaly detection. The core of the problem consists in
modeling and recognizing patterns belonging only to a so-called target class.
All other patterns are termed non-target, and therefore they should be
recognized as such. In this paper, we propose a novel one-class classification
system that is based on an interplay of different techniques. Primarily, we
follow a dissimilarity representation based approach; we embed the input data
into the dissimilarity space by means of an appropriate parametric
dissimilarity measure. This step allows us to process virtually any type of
data. The dissimilarity vectors are then represented through a weighted
Euclidean graphs, which we use to (i) determine the entropy of the data
distribution in the dissimilarity space, and at the same time (ii) derive
effective decision regions that are modeled as clusters of vertices. Since the
dissimilarity measure for the input data is parametric, we optimize its
parameters by means of a global optimization scheme, which considers both
mesoscopic and structural characteristics of the data represented through the
graphs. The proposed one-class classifier is designed to provide both hard
(Boolean) and soft decisions about the recognition of test patterns, allowing
an accurate description of the classification process. We evaluate the
performance of the system on different benchmarking datasets, containing either
feature-based or structured patterns. Experimental results demonstrate the
effectiveness of the proposed technique.
| [
"Lorenzo Livi, Alireza Sadeghian, Witold Pedrycz",
"['Lorenzo Livi' 'Alireza Sadeghian' 'Witold Pedrycz']"
]
|
cs.CE cs.LG q-bio.BM q-bio.MN | 10.1016/j.ins.2015.07.043 | 1407.7559 | null | null | http://arxiv.org/abs/1407.7559v3 | 2015-04-30T00:06:14Z | 2014-07-28T20:29:52Z | Toward a multilevel representation of protein molecules: comparative
approaches to the aggregation/folding propensity problem | This paper builds upon the fundamental work of Niwa et al. [34], which
provides the unique possibility to analyze the relative aggregation/folding
propensity of the elements of the entire Escherichia coli (E. coli) proteome in
a cell-free standardized microenvironment. The hardness of the problem comes
from the superposition between the driving forces of intra- and inter-molecule
interactions and it is mirrored by the evidences of shift from folding to
aggregation phenotypes by single-point mutations [10]. Here we apply several
state-of-the-art classification methods coming from the field of structural
pattern recognition, with the aim to compare different representations of the
same proteins gathered from the Niwa et al. data base; such representations
include sequences and labeled (contact) graphs enriched with chemico-physical
attributes. By this comparison, we are able to identify also some interesting
general properties of proteins. Notably, (i) we suggest a threshold around 250
residues discriminating "easily foldable" from "hardly foldable" molecules
consistent with other independent experiments, and (ii) we highlight the
relevance of contact graph spectra for folding behavior discrimination and
characterization of the E. coli solubility data. The soundness of the
experimental results presented in this paper is proved by the statistically
relevant relationships discovered among the chemico-physical description of
proteins and the developed cost matrix of substitution used in the various
discrimination systems.
| [
"['Lorenzo Livi' 'Alessandro Giuliani' 'Antonello Rizzi']",
"Lorenzo Livi, Alessandro Giuliani, Antonello Rizzi"
]
|
q-bio.QM cs.LG stat.ML | null | 1407.7566 | null | null | http://arxiv.org/pdf/1407.7566v1 | 2014-07-28T20:52:18Z | 2014-07-28T20:52:18Z | Dependence versus Conditional Dependence in Local Causal Discovery from
Gene Expression Data | Motivation: Algorithms that discover variables which are causally related to
a target may inform the design of experiments. With observational gene
expression data, many methods discover causal variables by measuring each
variable's degree of statistical dependence with the target using dependence
measures (DMs). However, other methods measure each variable's ability to
explain the statistical dependence between the target and the remaining
variables in the data using conditional dependence measures (CDMs), since this
strategy is guaranteed to find the target's direct causes, direct effects, and
direct causes of the direct effects in the infinite sample limit. In this
paper, we design a new algorithm in order to systematically compare the
relative abilities of DMs and CDMs in discovering causal variables from gene
expression data.
Results: The proposed algorithm using a CDM is sample efficient, since it
consistently outperforms other state-of-the-art local causal discovery
algorithms when samples sizes are small. However, the proposed algorithm using
a CDM outperforms the proposed algorithm using a DM only when sample sizes are
above several hundred. These results suggest that accurate causal discovery
from gene expression data using current CDM-based algorithms requires datasets
with at least several hundred samples.
Availability: The proposed algorithm is freely available at
https://github.com/ericstrobl/DvCD.
| [
"['Eric V. Strobl' 'Shyam Visweswaran']",
"Eric V. Strobl, Shyam Visweswaran"
]
|
cs.LG stat.ML | null | 1407.7584 | null | null | http://arxiv.org/pdf/1407.7584v1 | 2014-07-28T21:59:06Z | 2014-07-28T21:59:06Z | Dynamic Feature Scaling for Online Learning of Binary Classifiers | Scaling feature values is an important step in numerous machine learning
tasks. Different features can have different value ranges and some form of a
feature scaling is often required in order to learn an accurate classifier.
However, feature scaling is conducted as a preprocessing task prior to
learning. This is problematic in an online setting because of two reasons.
First, it might not be possible to accurately determine the value range of a
feature at the initial stages of learning when we have observed only a few
number of training instances. Second, the distribution of data can change over
the time, which render obsolete any feature scaling that we perform in a
pre-processing step. We propose a simple but an effective method to dynamically
scale features at train time, thereby quickly adapting to any changes in the
data stream. We compare the proposed dynamic feature scaling method against
more complex methods for estimating scaling parameters using several benchmark
datasets for binary classification. Our proposed feature scaling method
consistently outperforms more complex methods on all of the benchmark datasets
and improves classification accuracy of a state-of-the-art online binary
classifier algorithm.
| [
"Danushka Bollegala",
"['Danushka Bollegala']"
]
|
cs.LG | null | 1407.7635 | null | null | http://arxiv.org/pdf/1407.7635v1 | 2014-07-29T06:17:49Z | 2014-07-29T06:17:49Z | Chasing Ghosts: Competing with Stateful Policies | We consider sequential decision making in a setting where regret is measured
with respect to a set of stateful reference policies, and feedback is limited
to observing the rewards of the actions performed (the so called "bandit"
setting). If either the reference policies are stateless rather than stateful,
or the feedback includes the rewards of all actions (the so called "expert"
setting), previous work shows that the optimal regret grows like
$\Theta(\sqrt{T})$ in terms of the number of decision rounds $T$.
The difficulty in our setting is that the decision maker unavoidably loses
track of the internal states of the reference policies, and thus cannot
reliably attribute rewards observed in a certain round to any of the reference
policies. In fact, in this setting it is impossible for the algorithm to
estimate which policy gives the highest (or even approximately highest) total
reward. Nevertheless, we design an algorithm that achieves expected regret that
is sublinear in $T$, of the form $O( T/\log^{1/4}{T})$. Our algorithm is based
on a certain local repetition lemma that may be of independent interest. We
also show that no algorithm can guarantee expected regret better than $O(
T/\log^{3/2} T)$.
| [
"Uriel Feige, Tomer Koren, Moshe Tennenholtz",
"['Uriel Feige' 'Tomer Koren' 'Moshe Tennenholtz']"
]
|
stat.ML cs.LG | null | 1407.7644 | null | null | http://arxiv.org/pdf/1407.7644v2 | 2014-10-30T11:23:37Z | 2014-07-29T07:19:08Z | Estimating the Accuracies of Multiple Classifiers Without Labeled Data | In various situations one is given only the predictions of multiple
classifiers over a large unlabeled test data. This scenario raises the
following questions: Without any labeled data and without any a-priori
knowledge about the reliability of these different classifiers, is it possible
to consistently and computationally efficiently estimate their accuracies?
Furthermore, also in a completely unsupervised manner, can one construct a more
accurate unsupervised ensemble classifier? In this paper, focusing on the
binary case, we present simple, computationally efficient algorithms to solve
these questions. Furthermore, under standard classifier independence
assumptions, we prove our methods are consistent and study their asymptotic
error. Our approach is spectral, based on the fact that the off-diagonal
entries of the classifiers' covariance matrix and 3-d tensor are rank-one. We
illustrate the competitive performance of our algorithms via extensive
experiments on both artificial and real datasets.
| [
"['Ariel Jaffe' 'Boaz Nadler' 'Yuval Kluger']",
"Ariel Jaffe, Boaz Nadler and Yuval Kluger"
]
|
stat.ML cs.LG | 10.1137/140952314 | 1407.7691 | null | null | http://arxiv.org/abs/1407.7691v1 | 2014-07-29T11:09:59Z | 2014-07-29T11:09:59Z | NMF with Sparse Regularizations in Transformed Domains | Non-negative blind source separation (non-negative BSS), which is also
referred to as non-negative matrix factorization (NMF), is a very active field
in domains as different as astrophysics, audio processing or biomedical signal
processing. In this context, the efficient retrieval of the sources requires
the use of signal priors such as sparsity. If NMF has now been well studied
with sparse constraints in the direct domain, only very few algorithms can
encompass non-negativity together with sparsity in a transformed domain since
simultaneously dealing with two priors in two different domains is challenging.
In this article, we show how a sparse NMF algorithm coined non-negative
generalized morphological component analysis (nGMCA) can be extended to impose
non-negativity in the direct domain along with sparsity in a transformed
domain, with both analysis and synthesis formulations. To our knowledge, this
work presents the first comparison of analysis and synthesis priors ---as well
as their reweighted versions--- in the context of blind source separation.
Comparisons with state-of-the-art NMF algorithms on realistic data show the
efficiency as well as the robustness of the proposed algorithms.
| [
"J\\'er\\'emy Rapin and J\\'er\\^ome Bobin and Anthony Larue and Jean-Luc\n Starck",
"['Jérémy Rapin' 'Jérôme Bobin' 'Anthony Larue' 'Jean-Luc Starck']"
]
|
cs.LG cs.CY | 10.1145/2641190.2641198 | 1407.7722 | null | null | http://arxiv.org/abs/1407.7722v2 | 2014-08-01T13:03:28Z | 2014-07-29T13:32:44Z | OpenML: networked science in machine learning | Many sciences have made significant breakthroughs by adopting online tools
that help organize, structure and mine information that is too detailed to be
printed in journals. In this paper, we introduce OpenML, a place for machine
learning researchers to share and organize data in fine detail, so that they
can work more effectively, be more visible, and collaborate with others to
tackle harder problems. We discuss how OpenML relates to other examples of
networked science and what benefits it brings for machine learning research,
individual scientists, as well as students and practitioners.
| [
"['Joaquin Vanschoren' 'Jan N. van Rijn' 'Bernd Bischl' 'Luis Torgo']",
"Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo"
]
|
cs.LG | null | 1407.7753 | null | null | http://arxiv.org/pdf/1407.7753v1 | 2014-07-29T15:23:11Z | 2014-07-29T15:23:11Z | A Hash-based Co-Clustering Algorithm for Categorical Data | Many real-life data are described by categorical attributes without a
pre-classification. A common data mining method used to extract information
from this type of data is clustering. This method group together the samples
from the data that are more similar than all other samples. But, categorical
data pose a challenge when extracting information because: the calculation of
two objects similarity is usually done by measuring the number of common
features, but ignore a possible importance weighting; if the data may be
divided differently according to different subsets of the features, the
algorithm may find clusters with different meanings from each other,
difficulting the post analysis. Data Co-Clustering of categorical data is the
technique that tries to find subsets of samples that share a subset of features
in common. By doing so, not only a sample may belong to more than one cluster
but, the feature selection of each cluster describe its own characteristics. In
this paper a novel Co-Clustering technique for categorical data is proposed by
using Locality Sensitive Hashing technique in order to preprocess a list of
Co-Clusters seeds based on a previous research. Results indicate this technique
is capable of finding high quality Co-Clusters in many different categorical
data sets and scales linearly with the data set size.
| [
"Fabricio Olivetti de Fran\\c{c}a",
"['Fabricio Olivetti de França']"
]
|
stat.ML cs.LG | null | 1407.7819 | null | null | http://arxiv.org/pdf/1407.7819v1 | 2014-07-29T18:37:15Z | 2014-07-29T18:37:15Z | Sure Screening for Gaussian Graphical Models | We propose {graphical sure screening}, or GRASS, a very simple and
computationally-efficient screening procedure for recovering the structure of a
Gaussian graphical model in the high-dimensional setting. The GRASS estimate of
the conditional dependence graph is obtained by thresholding the elements of
the sample covariance matrix. The proposed approach possesses the sure
screening property: with very high probability, the GRASS estimated edge set
contains the true edge set. Furthermore, with high probability, the size of the
estimated edge set is controlled. We provide a choice of threshold for GRASS
that can control the expected false positive rate. We illustrate the
performance of GRASS in a simulation study and on a gene expression data set,
and show that in practice it performs quite competitively with more complex and
computationally-demanding techniques for graph estimation.
| [
"Shikai Luo, Rui Song, Daniela Witten",
"['Shikai Luo' 'Rui Song' 'Daniela Witten']"
]
|
cs.LG | null | 1407.7906 | null | null | http://arxiv.org/pdf/1407.7906v3 | 2014-09-18T13:30:31Z | 2014-07-29T23:32:44Z | How Auto-Encoders Could Provide Credit Assignment in Deep Networks via
Target Propagation | We propose to exploit {\em reconstruction} as a layer-local training signal
for deep learning. Reconstructions can be propagated in a form of target
propagation playing a role similar to back-propagation but helping to reduce
the reliance on derivatives in order to perform credit assignment across many
levels of possibly strong non-linearities (which is difficult for
back-propagation). A regularized auto-encoder tends produce a reconstruction
that is a more likely version of its input, i.e., a small move in the direction
of higher likelihood. By generalizing gradients, target propagation may also
allow to train deep networks with discrete hidden units. If the auto-encoder
takes both a representation of input and target (or of any side information) in
input, then its reconstruction of input representation provides a target
towards a representation that is more likely, conditioned on all the side
information. A deep auto-encoder decoding path generalizes gradient propagation
in a learned way that can could thus handle not just infinitesimal changes but
larger, discrete changes, hopefully allowing credit assignment through a long
chain of non-linear operations. In addition to each layer being a good
auto-encoder, the encoder also learns to please the upper layers by
transforming the data into a space where it is easier to model by them,
flattening manifolds and disentangling factors. The motivations and theoretical
justifications for this approach are laid down in this paper, along with
conjectures that will have to be verified either mathematically or
experimentally, including a hypothesis stating that such auto-encoder mediated
target propagation could play in brains the role of credit assignment through
many non-linear, noisy and discrete transformations.
| [
"['Yoshua Bengio']",
"Yoshua Bengio"
]
|
cs.GT cs.LG | null | 1407.7937 | null | null | http://arxiv.org/pdf/1407.7937v1 | 2014-07-30T04:00:29Z | 2014-07-30T04:00:29Z | Learning Economic Parameters from Revealed Preferences | A recent line of work, starting with Beigman and Vohra (2006) and
Zadimoghaddam and Roth (2012), has addressed the problem of {\em learning} a
utility function from revealed preference data. The goal here is to make use of
past data describing the purchases of a utility maximizing agent when faced
with certain prices and budget constraints in order to produce a hypothesis
function that can accurately forecast the {\em future} behavior of the agent.
In this work we advance this line of work by providing sample complexity
guarantees and efficient algorithms for a number of important classes. By
drawing a connection to recent advances in multi-class learning, we provide a
computationally efficient algorithm with tight sample complexity guarantees
($\Theta(d/\epsilon)$ for the case of $d$ goods) for learning linear utility
functions under a linear price model. This solves an open question in
Zadimoghaddam and Roth (2012). Our technique yields numerous generalizations
including the ability to learn other well-studied classes of utility functions,
to deal with a misspecified model, and with non-linear prices.
| [
"['Maria-Florina Balcan' 'Amit Daniely' 'Ruta Mehta' 'Ruth Urner'\n 'Vijay V. Vazirani']",
"Maria-Florina Balcan, Amit Daniely, Ruta Mehta, Ruth Urner, and Vijay\n V. Vazirani"
]
|
stat.ML cs.LG | null | 1407.8042 | null | null | http://arxiv.org/pdf/1407.8042v1 | 2014-07-30T13:54:58Z | 2014-07-30T13:54:58Z | Targeting Optimal Active Learning via Example Quality | In many classification problems unlabelled data is abundant and a subset can
be chosen for labelling. This defines the context of active learning (AL),
where methods systematically select that subset, to improve a classifier by
retraining. Given a classification problem, and a classifier trained on a small
number of labelled examples, consider the selection of a single further
example. This example will be labelled by the oracle and then used to retrain
the classifier. This example selection raises a central question: given a fully
specified stochastic description of the classification problem, which example
is the optimal selection? If optimality is defined in terms of loss, this
definition directly produces expected loss reduction (ELR), a central quantity
whose maximum yields the optimal example selection. This work presents a new
theoretical approach to AL, example quality, which defines optimal AL behaviour
in terms of ELR. Once optimal AL behaviour is defined mathematically, reasoning
about this abstraction provides insights into AL. In a theoretical context the
optimal selection is compared to existing AL methods, showing that heuristics
can make sub-optimal selections. Algorithms are constructed to estimate example
quality directly. A large-scale experimental study shows these algorithms to be
competitive with standard AL methods.
| [
"['Lewis P. G. Evans' 'Niall M. Adams' 'Christoforos Anagnostopoulos']",
"Lewis P. G. Evans and Niall M. Adams and Christoforos Anagnostopoulos"
]
|
stat.ML cs.LG stat.AP | null | 1407.8067 | null | null | http://arxiv.org/pdf/1407.8067v1 | 2014-07-30T14:51:19Z | 2014-07-30T14:51:19Z | Differentially-Private Logistic Regression for Detecting Multiple-SNP
Association in GWAS Databases | Following the publication of an attack on genome-wide association studies
(GWAS) data proposed by Homer et al., considerable attention has been given to
developing methods for releasing GWAS data in a privacy-preserving way. Here,
we develop an end-to-end differentially private method for solving regression
problems with convex penalty functions and selecting the penalty parameters by
cross-validation. In particular, we focus on penalized logistic regression with
elastic-net regularization, a method widely used to in GWAS analyses to
identify disease-causing genes. We show how a differentially private procedure
for penalized logistic regression with elastic-net regularization can be
applied to the analysis of GWAS data and evaluate our method's performance.
| [
"['Fei Yu' 'Michal Rybar' 'Caroline Uhler' 'Stephen E. Fienberg']",
"Fei Yu, Michal Rybar, Caroline Uhler, Stephen E. Fienberg"
]
|
cs.LG cs.DS | null | 1407.8088 | null | null | http://arxiv.org/pdf/1407.8088v1 | 2014-07-30T15:24:46Z | 2014-07-30T15:24:46Z | The Grow-Shrink strategy for learning Markov network structures
constrained by context-specific independences | Markov networks are models for compactly representing complex probability
distributions. They are composed by a structure and a set of numerical weights.
The structure qualitatively describes independences in the distribution, which
can be exploited to factorize the distribution into a set of compact functions.
A key application for learning structures from data is to automatically
discover knowledge. In practice, structure learning algorithms focused on
"knowledge discovery" present a limitation: they use a coarse-grained
representation of the structure. As a result, this representation cannot
describe context-specific independences. Very recently, an algorithm called
CSPC was designed to overcome this limitation, but it has a high computational
complexity. This work tries to mitigate this downside presenting CSGS, an
algorithm that uses the Grow-Shrink strategy for reducing unnecessary
computations. On an empirical evaluation, the structures learned by CSGS
achieve competitive accuracies and lower computational complexity with respect
to those obtained by CSPC.
| [
"['Alejandro Edera' 'Yanela Strappa' 'Facundo Bromberg']",
"Alejandro Edera, Yanela Strappa and Facundo Bromberg"
]
|
cs.LG cs.CE | null | 1407.8147 | null | null | http://arxiv.org/pdf/1407.8147v2 | 2014-12-09T07:03:36Z | 2014-07-30T18:04:20Z | Stochastic Coordinate Coding and Its Application for Drosophila Gene
Expression Pattern Annotation | \textit{Drosophila melanogaster} has been established as a model organism for
investigating the fundamental principles of developmental gene interactions.
The gene expression patterns of \textit{Drosophila melanogaster} can be
documented as digital images, which are annotated with anatomical ontology
terms to facilitate pattern discovery and comparison. The automated annotation
of gene expression pattern images has received increasing attention due to the
recent expansion of the image database. The effectiveness of gene expression
pattern annotation relies on the quality of feature representation. Previous
studies have demonstrated that sparse coding is effective for extracting
features from gene expression images. However, solving sparse coding remains a
computationally challenging problem, especially when dealing with large-scale
data sets and learning large size dictionaries. In this paper, we propose a
novel algorithm to solve the sparse coding problem, called Stochastic
Coordinate Coding (SCC). The proposed algorithm alternatively updates the
sparse codes via just a few steps of coordinate descent and updates the
dictionary via second order stochastic gradient descent. The computational cost
is further reduced by focusing on the non-zero components of the sparse codes
and the corresponding columns of the dictionary only in the updating procedure.
Thus, the proposed algorithm significantly improves the efficiency and the
scalability, making sparse coding applicable for large-scale data sets and
large dictionary sizes. Our experiments on Drosophila gene expression data sets
demonstrate the efficiency and the effectiveness of the proposed algorithm.
| [
"['Binbin Lin' 'Qingyang Li' 'Qian Sun' 'Ming-Jun Lai' 'Ian Davidson'\n 'Wei Fan' 'Jieping Ye']",
"Binbin Lin, Qingyang Li, Qian Sun, Ming-Jun Lai, Ian Davidson, Wei\n Fan, Jieping Ye"
]
|
q-bio.QM cs.LG stat.ML | null | 1407.8187 | null | null | http://arxiv.org/pdf/1407.8187v1 | 2014-07-30T20:00:14Z | 2014-07-30T20:00:14Z | Fast Bayesian Feature Selection for High Dimensional Linear Regression
in Genomics via the Ising Approximation | Feature selection, identifying a subset of variables that are relevant for
predicting a response, is an important and challenging component of many
methods in statistics and machine learning. Feature selection is especially
difficult and computationally intensive when the number of variables approaches
or exceeds the number of samples, as is often the case for many genomic
datasets. Here, we introduce a new approach -- the Bayesian Ising Approximation
(BIA) -- to rapidly calculate posterior probabilities for feature relevance in
L2 penalized linear regression. In the regime where the regression problem is
strongly regularized by the prior, we show that computing the marginal
posterior probabilities for features is equivalent to computing the
magnetizations of an Ising model. Using a mean field approximation, we show it
is possible to rapidly compute the feature selection path described by the
posterior probabilities as a function of the L2 penalty. We present simulations
and analytical results illustrating the accuracy of the BIA on some simple
regression problems. Finally, we demonstrate the applicability of the BIA to
high dimensional regression by analyzing a gene expression dataset with nearly
30,000 features.
| [
"['Charles K. Fisher' 'Pankaj Mehta']",
"Charles K. Fisher, Pankaj Mehta"
]
|
cs.LG | null | 1407.8289 | null | null | http://arxiv.org/pdf/1407.8289v2 | 2014-08-05T07:43:54Z | 2014-07-31T06:33:42Z | DuSK: A Dual Structure-preserving Kernel for Supervised Tensor Learning
with Applications to Neuroimages | With advances in data collection technologies, tensor data is assuming
increasing prominence in many applications and the problem of supervised tensor
learning has emerged as a topic of critical significance in the data mining and
machine learning community. Conventional methods for supervised tensor learning
mainly focus on learning kernels by flattening the tensor into vectors or
matrices, however structural information within the tensors will be lost. In
this paper, we introduce a new scheme to design structure-preserving kernels
for supervised tensor learning. Specifically, we demonstrate how to leverage
the naturally available structure within the tensorial representation to encode
prior knowledge in the kernel. We proposed a tensor kernel that can preserve
tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping
function can map each tensor instance in the input space to another tensor in
the feature space while preserving the tensorial structure. Theoretically, our
approach is an extension of the conventional kernels in the vector space to
tensor space. We applied our novel kernel in conjunction with SVM to real-world
tensor classification problems including brain fMRI classification for three
different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV).
Extensive empirical studies demonstrate that our proposed approach can
effectively boost tensor classification performances, particularly with small
sample sizes.
| [
"['Lifang He' 'Xiangnan Kong' 'Philip S. Yu' 'Ann B. Ragin' 'Zhifeng Hao'\n 'Xiaowei Yang']",
"Lifang He, Xiangnan Kong, Philip S. Yu, Ann B. Ragin, Zhifeng Hao,\n Xiaowei Yang"
]
|
cs.LG | null | 1407.8339 | null | null | http://arxiv.org/pdf/1407.8339v6 | 2016-03-29T01:00:59Z | 2014-07-31T10:09:11Z | Combinatorial Multi-Armed Bandit and Its Extension to Probabilistically
Triggered Arms | We define a general framework for a large class of combinatorial multi-armed
bandit (CMAB) problems, where subsets of base arms with unknown distributions
form super arms. In each round, a super arm is played and the base arms
contained in the super arm are played and their outcomes are observed. We
further consider the extension in which more based arms could be
probabilistically triggered based on the outcomes of already triggered arms.
The reward of the super arm depends on the outcomes of all played arms, and it
only needs to satisfy two mild assumptions, which allow a large class of
nonlinear reward instances. We assume the availability of an offline
(\alpha,\beta)-approximation oracle that takes the means of the outcome
distributions of arms and outputs a super arm that with probability {\beta}
generates an {\alpha} fraction of the optimal expected reward. The objective of
an online learning algorithm for CMAB is to minimize
(\alpha,\beta)-approximation regret, which is the difference between the
\alpha{\beta} fraction of the expected reward when always playing the optimal
super arm, and the expected reward of playing super arms according to the
algorithm. We provide CUCB algorithm that achieves O(log n)
distribution-dependent regret, where n is the number of rounds played, and we
further provide distribution-independent bounds for a large class of reward
functions. Our regret analysis is tight in that it matches the bound of UCB1
algorithm (up to a constant factor) for the classical MAB problem, and it
significantly improves the regret bound in a earlier paper on combinatorial
bandits with linear rewards. We apply our CMAB framework to two new
applications, probabilistic maximum coverage and social influence maximization,
both having nonlinear reward structures. In particular, application to social
influence maximization requires our extension on probabilistically triggered
arms.
| [
"Wei Chen, Yajun Wang, Yang Yuan, Qinshi Wang",
"['Wei Chen' 'Yajun Wang' 'Yang Yuan' 'Qinshi Wang']"
]
|
cs.CV cs.LG | null | 1407.8518 | null | null | http://arxiv.org/pdf/1407.8518v1 | 2014-07-28T09:07:03Z | 2014-07-28T09:07:03Z | Beyond KernelBoost | In this Technical Report we propose a set of improvements with respect to the
KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with
a scheme inspired by Auto-Context, but that is suitable in situations where the
lack of large training sets poses a potential problem of overfitting. The aim
is to capture the interactions between neighboring image pixels to better
regularize the boundaries of segmented regions. As in Auto-Context [Tu et al.,
PAMI 2009] the segmentation process is iterative and, at each iteration, the
segmentation results for the previous iterations are taken into account in
conjunction with the image itself. However, unlike in [Tu et al., PAMI 2009],
we organize our recursion so that the classifiers can progressively focus on
difficult-to-classify locations. This lets us exploit the power of the
decision-tree paradigm while avoiding over-fitting. In the context of this
architecture, KernelBoost represents a powerful building block due to its
ability to learn on the score maps coming from previous iterations. We first
introduce two important mechanisms to empower the KernelBoost classifier,
namely pooling and the clustering of positive samples based on the appearance
of the corresponding ground-truth. These operations significantly contribute to
increase the effectiveness of the system on biomedical images, where texture
plays a major role in the recognition of the different image components. We
then present some other techniques that can be easily integrated in the
KernelBoost framework to further improve the accuracy of the final
segmentation. We show extensive results on different medical image datasets,
including some multi-label tasks, on which our method is shown to outperform
state-of-the-art approaches. The resulting segmentations display high accuracy,
neat contours, and reduced noise.
| [
"['Roberto Rigamonti' 'Vincent Lepetit' 'Pascal Fua']",
"Roberto Rigamonti, Vincent Lepetit, Pascal Fua"
]
|
cs.LG cs.GT | null | 1408.0017 | null | null | http://arxiv.org/pdf/1408.0017v1 | 2014-07-31T20:10:14Z | 2014-07-31T20:10:14Z | Learning Nash Equilibria in Congestion Games | We study the repeated congestion game, in which multiple populations of
players share resources, and make, at each iteration, a decentralized decision
on which resources to utilize. We investigate the following question: given a
model of how individual players update their strategies, does the resulting
dynamics of strategy profiles converge to the set of Nash equilibria of the
one-shot game? We consider in particular a model in which players update their
strategies using algorithms with sublinear discounted regret. We show that the
resulting sequence of strategy profiles converges to the set of Nash equilibria
in the sense of Ces\`aro means. However, strong convergence is not guaranteed
in general. We show that strong convergence can be guaranteed for a class of
algorithms with a vanishing upper bound on discounted regret, and which satisfy
an additional condition. We call such algorithms AREP algorithms, for
Approximate REPlicator, as they can be interpreted as a discrete-time
approximation of the replicator equation, which models the continuous-time
evolution of population strategies, and which is known to converge for the
class of congestion games. In particular, we show that the discounted Hedge
algorithm belongs to the AREP class, which guarantees its strong convergence.
| [
"Walid Krichene, Benjamin Drigh\\`es and Alexandre M. Bayen",
"['Walid Krichene' 'Benjamin Drighès' 'Alexandre M. Bayen']"
]
|
cs.LG cs.IR stat.ML | null | 1408.0043 | null | null | http://arxiv.org/pdf/1408.0043v1 | 2014-07-31T23:30:37Z | 2014-07-31T23:30:37Z | Learning From Ordered Sets and Applications in Collaborative Ranking | Ranking over sets arise when users choose between groups of items. For
example, a group may be of those movies deemed $5$ stars to them, or a
customized tour package. It turns out, to model this data type properly, we
need to investigate the general combinatorics problem of partitioning a set and
ordering the subsets. Here we construct a probabilistic log-linear model over a
set of ordered subsets. Inference in this combinatorial space is highly
challenging: The space size approaches $(N!/2)6.93145^{N+1}$ as $N$ approaches
infinity. We propose a \texttt{split-and-merge} Metropolis-Hastings procedure
that can explore the state-space efficiently. For discovering hidden aspects in
the data, we enrich the model with latent binary variables so that the
posteriors can be efficiently evaluated. Finally, we evaluate the proposed
model on large-scale collaborative filtering tasks and demonstrate that it is
competitive against state-of-the-art methods.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
stat.ML cs.IR cs.LG stat.AP stat.ME | null | 1408.0047 | null | null | http://arxiv.org/pdf/1408.0047v1 | 2014-07-31T23:54:16Z | 2014-07-31T23:54:16Z | Cumulative Restricted Boltzmann Machines for Ordinal Matrix Data
Analysis | Ordinal data is omnipresent in almost all multiuser-generated feedback -
questionnaires, preferences etc. This paper investigates modelling of ordinal
data with Gaussian restricted Boltzmann machines (RBMs). In particular, we
present the model architecture, learning and inference procedures for both
vector-variate and matrix-variate ordinal data. We show that our model is able
to capture latent opinion profile of citizens around the world, and is
competitive against state-of-art collaborative filtering techniques on
large-scale public datasets. The model thus has the potential to extend
application of RBMs to diverse domains such as recommendation systems, product
reviews and expert assessments.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
stat.ML cs.LG stat.ME | null | 1408.0055 | null | null | http://arxiv.org/pdf/1408.0055v1 | 2014-08-01T00:32:32Z | 2014-08-01T00:32:32Z | Thurstonian Boltzmann Machines: Learning from Multiple Inequalities | We introduce Thurstonian Boltzmann Machines (TBM), a unified architecture
that can naturally incorporate a wide range of data inputs at the same time.
Our motivation rests in the Thurstonian view that many discrete data types can
be considered as being generated from a subset of underlying latent continuous
variables, and in the observation that each realisation of a discrete type
imposes certain inequalities on those variables. Thus learning and inference in
TBM reduce to making sense of a set of inequalities. Our proposed TBM naturally
supports the following types: Gaussian, intervals, censored, binary,
categorical, muticategorical, ordinal, (in)-complete rank with and without
ties. We demonstrate the versatility and capacity of the proposed model on
three applications of very different natures; namely handwritten digit
recognition, collaborative filtering and complex social survey analysis.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
cs.RO cs.LG cs.MA | null | 1408.0058 | null | null | http://arxiv.org/pdf/1408.0058v1 | 2014-08-01T01:29:08Z | 2014-08-01T01:29:08Z | A Framework for learning multi-agent dynamic formation strategy in
real-time applications | Formation strategy is one of the most important parts of many multi-agent
systems with many applications in real world problems. In this paper, a
framework for learning this task in a limited domain (restricted environment)
is proposed. In this framework, agents learn either directly by observing an
expert behavior or indirectly by observing other agents or objects behavior.
First, a group of algorithms for learning formation strategy based on limited
features will be presented. Due to distributed and complex nature of many
multi-agent systems, it is impossible to include all features directly in the
learning process; thus, a modular scheme is proposed in order to reduce the
number of features. In this method, some important features have indirect
influence in learning instead of directly involving them as input features.
This framework has the ability to dynamically assign a group of positions to a
group of agents to improve system performance. In addition, it can change the
formation strategy when the context changes. Finally, this framework is able to
automatically produce many complex and flexible formation strategy algorithms
without directly involving an expert to present and implement such complex
algorithms.
| [
"Mehrab Norouzitallab, Valiallah Monajjemi, Saeed Shiry Ghidary and\n Mohammad Bagher Menhaj",
"['Mehrab Norouzitallab' 'Valiallah Monajjemi' 'Saeed Shiry Ghidary'\n 'Mohammad Bagher Menhaj']"
]
|
cs.IR cs.LG stat.ML | null | 1408.0096 | null | null | http://arxiv.org/pdf/1408.0096v1 | 2014-08-01T07:51:37Z | 2014-08-01T07:51:37Z | Conditional Restricted Boltzmann Machines for Cold Start Recommendations | Restricted Boltzman Machines (RBMs) have been successfully used in
recommender systems. However, as with most of other collaborative filtering
techniques, it cannot solve cold start problems for there is no rating for a
new item. In this paper, we first apply conditional RBM (CRBM) which could take
extra information into account and show that CRBM could solve cold start
problem very well, especially for rating prediction task. CRBM naturally
combine the content and collaborative data under a single framework which could
be fitted effectively. Experiments show that CRBM can be compared favourably
with matrix factorization models, while hidden features learned from the former
models are more easy to be interpreted.
| [
"Jiankou Li and Wei Zhang",
"['Jiankou Li' 'Wei Zhang']"
]
|
cs.LG cs.SD | null | 1408.0193 | null | null | http://arxiv.org/pdf/1408.0193v1 | 2014-08-01T14:47:33Z | 2014-08-01T14:47:33Z | A RobustICA Based Algorithm for Blind Separation of Convolutive Mixtures | We propose a frequency domain method based on robust independent component
analysis (RICA) to address the multichannel Blind Source Separation (BSS)
problem of convolutive speech mixtures in highly reverberant environments. We
impose regularization processes to tackle the ill-conditioning problem of the
covariance matrix and to mitigate the performance degradation in the frequency
domain. We apply an algorithm to separate the source signals in adverse
conditions, i.e. high reverberation conditions when short observation signals
are available. Furthermore, we study the impact of several parameters on the
performance of separation, e.g. overlapping ratio and window type of the
frequency domain method. We also compare different techniques to solve the
frequency-domain permutation ambiguity. Through simulations and real world
experiments, we verify the superiority of the presented convolutive algorithm
among other BSS algorithms, including recursive regularized ICA (RR ICA),
independent vector analysis (IVA).
| [
"Zaid Albataineh and Fathi M. Salem",
"['Zaid Albataineh' 'Fathi M. Salem']"
]
|
cs.IT cs.LG math.IT | null | 1408.0196 | null | null | http://arxiv.org/pdf/1408.0196v2 | 2016-01-14T22:26:25Z | 2014-08-01T14:52:47Z | A Blind Adaptive CDMA Receiver Based on State Space Structures | Code Division Multiple Access (CDMA) is a channel access method, based on
spread-spectrum technology, used by various radio technologies world-wide. In
general, CDMA is used as an access method in many mobile standards such as
CDMA2000 and WCDMA. We address the problem of blind multiuser equalization in
the wideband CDMA system, in the noisy multipath propagation environment.
Herein, we propose three new blind receiver schemes, which are based on state
space structures and Independent Component Analysis (ICA). These blind
state-space receivers (BSSR) do not require knowledge of the propagation
parameters or spreading code sequences of the users they primarily exploit the
natural assumption of statistical independence among the source signals. We
also develop three semi blind adaptive detectors by incorporating the new
adaptive methods into the standard RAKE receiver structure. Extensive
comparative case study, based on Bit error rate (BER) performance of these
methods, is carried out for different number of users, symbols per user, and
signal to noise ratio (SNR) in comparison with conventional detectors,
including the Blind Multiuser Detectors (BMUD) and Linear Minimum mean squared
error (LMMSE). The results show that the proposed methods outperform the other
detectors in estimating the symbol signals from the received mixed CDMA
signals. Moreover, the new blind detectors mitigate the multi access
interference (MAI) in CDMA.
| [
"Zaid Albataineh and Fathi M. Salem",
"['Zaid Albataineh' 'Fathi M. Salem']"
]
|
stat.ML cs.AI cs.CV cs.LG | 10.1371/journal.pone.0132945 | 1408.0204 | null | null | http://arxiv.org/abs/1408.0204v1 | 2014-08-01T15:15:48Z | 2014-08-01T15:15:48Z | Functional Principal Component Analysis and Randomized Sparse Clustering
Algorithm for Medical Image Analysis | Due to advances in sensors, growing large and complex medical image data have
the ability to visualize the pathological change in the cellular or even the
molecular level or anatomical changes in tissues and organs. As a consequence,
the medical images have the potential to enhance diagnosis of disease,
prediction of clinical outcomes, characterization of disease progression,
management of health care and development of treatments, but also pose great
methodological and computational challenges for representation and selection of
features in image cluster analysis. To address these challenges, we first
extend one dimensional functional principal component analysis to the two
dimensional functional principle component analyses (2DFPCA) to fully capture
space variation of image signals. Image signals contain a large number of
redundant and irrelevant features which provide no additional or no useful
information for cluster analysis. Widely used methods for removing redundant
and irrelevant features are sparse clustering algorithms using a lasso-type
penalty to select the features. However, the accuracy of clustering using a
lasso-type penalty depends on how to select penalty parameters and a threshold
for selecting features. In practice, they are difficult to determine. Recently,
randomized algorithms have received a great deal of attention in big data
analysis. This paper presents a randomized algorithm for accurate feature
selection in image cluster analysis. The proposed method is applied to ovarian
and kidney cancer histology image data from the TCGA database. The results
demonstrate that the randomized feature selection method coupled with
functional principal component analysis substantially outperforms the current
sparse clustering algorithms in image cluster analysis.
| [
"['Nan Lin' 'Junhai Jiang' 'Shicheng Guo' 'Momiao Xiong']",
"Nan Lin, Junhai Jiang, Shicheng Guo and Momiao Xiong"
]
|
cs.SI cs.IR cs.LG | null | 1408.0325 | null | null | http://arxiv.org/pdf/1408.0325v1 | 2014-08-02T01:56:10Z | 2014-08-02T01:56:10Z | Matrix Factorization with Explicit Trust and Distrust Relationships | With the advent of online social networks, recommender systems have became
crucial for the success of many online applications/services due to their
significance role in tailoring these applications to user-specific needs or
preferences. Despite their increasing popularity, in general recommender
systems suffer from the data sparsity and the cold-start problems. To alleviate
these issues, in recent years there has been an upsurge of interest in
exploiting social information such as trust relations among users along with
the rating data to improve the performance of recommender systems. The main
motivation for exploiting trust information in recommendation process stems
from the observation that the ideas we are exposed to and the choices we make
are significantly influenced by our social context. However, in large user
communities, in addition to trust relations, the distrust relations also exist
between users. For instance, in Epinions the concepts of personal "web of
trust" and personal "block list" allow users to categorize their friends based
on the quality of reviews into trusted and distrusted friends, respectively. In
this paper, we propose a matrix factorization based model for recommendation in
social rating networks that properly incorporates both trust and distrust
relationships aiming to improve the quality of recommendations and mitigate the
data sparsity and the cold-start users issues. Through experiments on the
Epinions data set, we show that our new algorithm outperforms its standard
trust-enhanced or distrust-enhanced counterparts with respect to accuracy,
thereby demonstrating the positive effect that incorporation of explicit
distrust information can have on recommender systems.
| [
"Rana Forsati, Mehrdad Mahdavi, Mehrnoush Shamsfard, Mohamed Sarwat",
"['Rana Forsati' 'Mehrdad Mahdavi' 'Mehrnoush Shamsfard' 'Mohamed Sarwat']"
]
|
cs.LG math.PR stat.ML | null | 1408.0553 | null | null | http://arxiv.org/pdf/1408.0553v2 | 2014-12-16T22:21:23Z | 2014-08-03T23:21:33Z | Sample Complexity Analysis for Learning Overcomplete Latent Variable
Models through Tensor Methods | We provide guarantees for learning latent variable models emphasizing on the
overcomplete regime, where the dimensionality of the latent space can exceed
the observed dimensionality. In particular, we consider multiview mixtures,
spherical Gaussian mixtures, ICA, and sparse coding models. We provide tight
concentration bounds for empirical moments through novel covering arguments. We
analyze parameter recovery through a simple tensor power update algorithm. In
the semi-supervised setting, we exploit the label or prior information to get a
rough estimate of the model parameters, and then refine it using the tensor
method on unlabeled samples. We establish that learning is possible when the
number of components scales as $k=o(d^{p/2})$, where $d$ is the observed
dimension, and $p$ is the order of the observed moment employed in the tensor
method. Our concentration bound analysis also leads to minimax sample
complexity for semi-supervised learning of spherical Gaussian mixtures. In the
unsupervised setting, we use a simple initialization algorithm based on SVD of
the tensor slices, and provide guarantees under the stricter condition that
$k\le \beta d$ (where constant $\beta$ can be larger than $1$), where the
tensor method recovers the components under a polynomial running time (and
exponential in $\beta$). Our analysis establishes that a wide range of
overcomplete latent variable models can be learned efficiently with low
computational and sample complexity through tensor decomposition methods.
| [
"Animashree Anandkumar and Rong Ge and Majid Janzamin",
"['Animashree Anandkumar' 'Rong Ge' 'Majid Janzamin']"
]
|
cs.LG cs.NA math.OC stat.ML | null | 1408.0838 | null | null | http://arxiv.org/pdf/1408.0838v1 | 2014-08-04T23:30:20Z | 2014-08-04T23:30:20Z | Estimating Maximally Probable Constrained Relations by Mathematical
Programming | Estimating a constrained relation is a fundamental problem in machine
learning. Special cases are classification (the problem of estimating a map
from a set of to-be-classified elements to a set of labels), clustering (the
problem of estimating an equivalence relation on a set) and ranking (the
problem of estimating a linear order on a set). We contribute a family of
probability measures on the set of all relations between two finite, non-empty
sets, which offers a joint abstraction of multi-label classification,
correlation clustering and ranking by linear ordering. Estimating (learning) a
maximally probable measure, given (a training set of) related and unrelated
pairs, is a convex optimization problem. Estimating (inferring) a maximally
probable relation, given a measure, is a 01-linear program. It is solved in
linear time for maps. It is NP-hard for equivalence relations and linear
orders. Practical solutions for all three cases are shown in experiments with
real data. Finally, estimating a maximally probable measure and relation
jointly is posed as a mixed-integer nonlinear program. This formulation
suggests a mathematical programming approach to semi-supervised learning.
| [
"Lizhen Qu and Bjoern Andres",
"['Lizhen Qu' 'Bjoern Andres']"
]
|
cs.LG cs.NE stat.ML | null | 1408.0848 | null | null | http://arxiv.org/pdf/1408.0848v8 | 2018-03-06T15:59:10Z | 2014-08-05T02:13:50Z | Multilayer bootstrap networks | Multilayer bootstrap network builds a gradually narrowed multilayer nonlinear
network from bottom up for unsupervised nonlinear dimensionality reduction.
Each layer of the network is a nonparametric density estimator. It consists of
a group of k-centroids clusterings. Each clustering randomly selects data
points with randomly selected features as its centroids, and learns a one-hot
encoder by one-nearest-neighbor optimization. Geometrically, the nonparametric
density estimator at each layer projects the input data space to a
uniformly-distributed discrete feature space, where the similarity of two data
points in the discrete feature space is measured by the number of the nearest
centroids they share in common. The multilayer network gradually reduces the
nonlinear variations of data from bottom up by building a vast number of
hierarchical trees implicitly on the original data space. Theoretically, the
estimation error caused by the nonparametric density estimator is proportional
to the correlation between the clusterings, both of which are reduced by the
randomization steps.
| [
"Xiao-Lei Zhang",
"['Xiao-Lei Zhang']"
]
|
cs.LG stat.ML | 10.1109/TSP.2015.2463261 | 1408.0853 | null | null | http://arxiv.org/abs/1408.0853v2 | 2014-11-05T04:53:17Z | 2014-08-05T02:31:27Z | Adaptive Learning in Cartesian Product of Reproducing Kernel Hilbert
Spaces | We propose a novel adaptive learning algorithm based on iterative orthogonal
projections in the Cartesian product of multiple reproducing kernel Hilbert
spaces (RKHSs). The task is estimating/tracking nonlinear functions which are
supposed to contain multiple components such as (i) linear and nonlinear
components, (ii) high- and low- frequency components etc. In this case, the use
of multiple RKHSs permits a compact representation of multicomponent functions.
The proposed algorithm is where two different methods of the author meet:
multikernel adaptive filtering and the algorithm of hyperplane projection along
affine subspace (HYPASS). In a certain particular case, the sum space of the
RKHSs is isomorphic to the product space and hence the proposed algorithm can
also be regarded as an iterative projection method in the sum space. The
efficacy of the proposed algorithm is shown by numerical examples.
| [
"Masahiro Yukawa",
"['Masahiro Yukawa']"
]
|
stat.ML cs.CV cs.LG | 10.1137/1.9781611972832.11 | 1408.0967 | null | null | http://arxiv.org/abs/1408.0967v1 | 2014-08-05T13:40:03Z | 2014-08-05T13:40:03Z | Determining the Number of Clusters via Iterative Consensus Clustering | We use a cluster ensemble to determine the number of clusters, k, in a group
of data. A consensus similarity matrix is formed from the ensemble using
multiple algorithms and several values for k. A random walk is induced on the
graph defined by the consensus matrix and the eigenvalues of the associated
transition probability matrix are used to determine the number of clusters. For
noisy or high-dimensional data, an iterative technique is presented to refine
this consensus matrix in way that encourages a block-diagonal form. It is shown
that the resulting consensus matrix is generally superior to existing
similarity matrices for this type of spectral analysis.
| [
"['Shaina Race' 'Carl Meyer' 'Kevin Valakuzhy']",
"Shaina Race, Carl Meyer, Kevin Valakuzhy"
]
|
stat.ML cs.CV cs.LG | null | 1408.0972 | null | null | http://arxiv.org/pdf/1408.0972v1 | 2014-08-05T13:54:01Z | 2014-08-05T13:54:01Z | A Flexible Iterative Framework for Consensus Clustering | A novel framework for consensus clustering is presented which has the ability
to determine both the number of clusters and a final solution using multiple
algorithms. A consensus similarity matrix is formed from an ensemble using
multiple algorithms and several values for k. A variety of dimension reduction
techniques and clustering algorithms are considered for analysis. For noisy or
high-dimensional data, an iterative technique is presented to refine this
consensus matrix in way that encourages algorithms to agree upon a common
solution. We utilize the theory of nearly uncoupled Markov chains to determine
the number, k , of clusters in a dataset by considering a random walk on the
graph defined by the consensus matrix. The eigenvalues of the associated
transition probability matrix are used to determine the number of clusters.
This method succeeds at determining the number of clusters in many datasets
where previous methods fail. On every considered dataset, our consensus method
provides a final result with accuracy well above the average of the individual
algorithms.
| [
"['Shaina Race' 'Carl Meyer']",
"Shaina Race and Carl Meyer"
]
|
cs.IT cs.DS cs.LG math.IT | null | 1408.1000 | null | null | http://arxiv.org/pdf/1408.1000v3 | 2016-03-10T08:35:51Z | 2014-08-02T18:52:52Z | Estimating Renyi Entropy of Discrete Distributions | It was recently shown that estimating the Shannon entropy $H({\rm p})$ of a
discrete $k$-symbol distribution ${\rm p}$ requires $\Theta(k/\log k)$ samples,
a number that grows near-linearly in the support size. In many applications
$H({\rm p})$ can be replaced by the more general R\'enyi entropy of order
$\alpha$, $H_\alpha({\rm p})$. We determine the number of samples needed to
estimate $H_\alpha({\rm p})$ for all $\alpha$, showing that $\alpha < 1$
requires a super-linear, roughly $k^{1/\alpha}$ samples, noninteger $\alpha>1$
requires a near-linear $k$ samples, but, perhaps surprisingly, integer
$\alpha>1$ requires only $\Theta(k^{1-1/\alpha})$ samples. Furthermore,
developing on a recently established connection between polynomial
approximation and estimation of additive functions of the form $\sum_{x} f({\rm
p}_x)$, we reduce the sample complexity for noninteger values of $\alpha$ by a
factor of $\log k$ compared to the empirical estimator. The estimators
achieving these bounds are simple and run in time linear in the number of
samples. Our lower bounds provide explicit constructions of distributions with
different R\'enyi entropies that are hard to distinguish.
| [
"Jayadev Acharya, Alon Orlitsky, Ananda Theertha Suresh, and Himanshu\n Tyagi",
"['Jayadev Acharya' 'Alon Orlitsky' 'Ananda Theertha Suresh'\n 'Himanshu Tyagi']"
]
|
cs.LG stat.ML | 10.1016/j.eswa.2015.03.007 | 1408.1054 | null | null | http://arxiv.org/abs/1408.1054v1 | 2014-08-04T18:01:29Z | 2014-08-04T18:01:29Z | Multithreshold Entropy Linear Classifier | Linear classifiers separate the data with a hyperplane. In this paper we
focus on the novel method of construction of multithreshold linear classifier,
which separates the data with multiple parallel hyperplanes. Proposed model is
based on the information theory concepts -- namely Renyi's quadratic entropy
and Cauchy-Schwarz divergence.
We begin with some general properties, including data scale invariance. Then
we prove that our method is a multithreshold large margin classifier, which
shows the analogy to the SVM, while in the same time works with much broader
class of hypotheses. What is also interesting, proposed method is aimed at the
maximization of the balanced quality measure (such as Matthew's Correlation
Coefficient) as opposed to very common maximization of the accuracy. This
feature comes directly from the optimization problem statement and is further
confirmed by the experiments on the UCI datasets.
It appears, that our Multithreshold Entropy Linear Classifier (MELC) obtaines
similar or higher scores than the ones given by SVM on both synthetic and real
data. We show how proposed approach can be benefitial for the cheminformatics
in the task of ligands activity prediction, where despite better classification
results, MELC gives some additional insight into the data structure (classes of
underrepresented chemical compunds).
| [
"Wojciech Marian Czarnecki, Jacek Tabor",
"['Wojciech Marian Czarnecki' 'Jacek Tabor']"
]
|
stat.ML cs.LG stat.ME | null | 1408.1160 | null | null | http://arxiv.org/pdf/1408.1160v1 | 2014-08-06T01:43:05Z | 2014-08-06T01:43:05Z | Mixed-Variate Restricted Boltzmann Machines | Modern datasets are becoming heterogeneous. To this end, we present in this
paper Mixed-Variate Restricted Boltzmann Machines for simultaneously modelling
variables of multiple types and modalities, including binary and continuous
responses, categorical options, multicategorical choices, ordinal assessment
and category-ranked preferences. Dependency among variables is modeled using
latent binary variables, each of which can be interpreted as a particular
hidden aspect of the data. The proposed model, similar to the standard RBMs,
allows fast evaluation of the posterior for the latent variables. Hence, it is
naturally suitable for many common tasks including, but not limited to, (a) as
a pre-processing step to convert complex input data into a more convenient
vectorial representation through the latent posteriors, thereby offering a
dimensionality reduction capacity, (b) as a classifier supporting binary,
multiclass, multilabel, and label-ranking outputs, or a regression tool for
continuous outputs and (c) as a data completion tool for multimodal and
heterogeneous data. We evaluate the proposed model on a large-scale dataset
using the world opinion survey results on three tasks: feature extraction and
visualization, data completion and prediction.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh']"
]
|
stat.ML cs.LG stat.ME | null | 1408.1162 | null | null | http://arxiv.org/pdf/1408.1162v1 | 2014-08-06T02:04:43Z | 2014-08-06T02:04:43Z | MCMC for Hierarchical Semi-Markov Conditional Random Fields | Deep architecture such as hierarchical semi-Markov models is an important
class of models for nested sequential data. Current exact inference schemes
either cost cubic time in sequence length, or exponential time in model depth.
These costs are prohibitive for large-scale problems with arbitrary length and
depth. In this contribution, we propose a new approximation technique that may
have the potential to achieve sub-cubic time complexity in length and linear
time depth, at the cost of some loss of quality. The idea is based on two
well-known methods: Gibbs sampling and Rao-Blackwellisation. We provide some
simulation-based evaluation of the quality of the RGBS with respect to run time
and sequence length.
| [
"Truyen Tran, Dinh Phung, Svetha Venkatesh, Hung H. Bui",
"['Truyen Tran' 'Dinh Phung' 'Svetha Venkatesh' 'Hung H. Bui']"
]
|
cs.LG cs.CV stat.ML | null | 1408.1167 | null | null | http://arxiv.org/pdf/1408.1167v1 | 2014-08-06T02:45:51Z | 2014-08-06T02:45:51Z | Boosted Markov Networks for Activity Recognition | We explore a framework called boosted Markov networks to combine the learning
capacity of boosting and the rich modeling semantics of Markov networks and
applying the framework for video-based activity recognition. Importantly, we
extend the framework to incorporate hidden variables. We show how the framework
can be applied for both model learning and feature selection. We demonstrate
that boosted Markov networks with hidden variables perform comparably with the
standard maximum likelihood estimation. However, our framework is able to learn
sparse models, and therefore can provide computational savings when the learned
models are used for classification.
| [
"['Truyen Tran' 'Hung Bui' 'Svetha Venkatesh']",
"Truyen Tran, Hung Bui, Svetha Venkatesh"
]
|
cs.CV cs.LG | 10.1016/j.cviu.2016.09.003 | 1408.1292 | null | null | http://arxiv.org/abs/1408.1292v4 | 2016-06-18T00:17:50Z | 2014-08-06T14:27:57Z | Scalable Greedy Algorithms for Transfer Learning | In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Constraining our scenario to real world, we do not assume the
direct access to the source data, but rather we employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with the computational cost independent from the number of source
hypotheses and feature dimensions. Also, we theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
| [
"['Ilja Kuzborskij' 'Francesco Orabona' 'Barbara Caputo']",
"Ilja Kuzborskij, Francesco Orabona, Barbara Caputo"
]
|
stat.ML cs.LG | null | 1408.1319 | null | null | http://arxiv.org/pdf/1408.1319v1 | 2014-08-06T15:27:20Z | 2014-08-06T15:27:20Z | When does Active Learning Work? | Active Learning (AL) methods seek to improve classifier performance when
labels are expensive or scarce. We consider two central questions: Where does
AL work? How much does it help? To address these questions, a comprehensive
experimental simulation study of Active Learning is presented. We consider a
variety of tasks, classifiers and other AL factors, to present a broad
exploration of AL performance in various settings. A precise way to quantify
performance is needed in order to know when AL works. Thus we also present a
detailed methodology for tackling the complexities of assessing AL performance
in the context of this experimental study.
| [
"Lewis Evans and Niall M. Adams and Christoforos Anagnostopoulos",
"['Lewis Evans' 'Niall M. Adams' 'Christoforos Anagnostopoulos']"
]
|
cs.GT cs.HC cs.LG | null | 1408.1387 | null | null | http://arxiv.org/pdf/1408.1387v3 | 2015-12-16T19:53:47Z | 2014-08-06T19:52:28Z | Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing | Crowdsourcing has gained immense popularity in machine learning applications
for obtaining large amounts of labeled data. Crowdsourcing is cheap and fast,
but suffers from the problem of low-quality data. To address this fundamental
challenge in crowdsourcing, we propose a simple payment mechanism to
incentivize workers to answer only the questions that they are sure of and skip
the rest. We show that surprisingly, under a mild and natural "no-free-lunch"
requirement, this mechanism is the one and only incentive-compatible payment
mechanism possible. We also show that among all possible incentive-compatible
mechanisms (that may or may not satisfy no-free-lunch), our mechanism makes the
smallest possible payment to spammers. We further extend our results to a more
general setting in which workers are required to provide a quantized confidence
for each question. Interestingly, this unique mechanism takes a
"multiplicative" form. The simplicity of the mechanism is an added benefit. In
preliminary experiments involving over 900 worker-task pairs, we observe a
significant drop in the error rates under this unique mechanism for the same or
lower monetary expenditure.
| [
"['Nihar B. Shah' 'Dengyong Zhou']",
"Nihar B. Shah and Dengyong Zhou"
]
|
cs.LG cs.CC cs.DS | null | 1408.1655 | null | null | http://arxiv.org/pdf/1408.1655v1 | 2014-08-06T17:39:56Z | 2014-08-06T17:39:56Z | Preventing False Discovery in Interactive Data Analysis is Hard | We show that, under a standard hardness assumption, there is no
computationally efficient algorithm that given $n$ samples from an unknown
distribution can give valid answers to $n^{3+o(1)}$ adaptively chosen
statistical queries. A statistical query asks for the expectation of a
predicate over the underlying distribution, and an answer to a statistical
query is valid if it is "close" to the correct expectation over the
distribution.
Our result stands in stark contrast to the well known fact that exponentially
many statistical queries can be answered validly and efficiently if the queries
are chosen non-adaptively (no query may depend on the answers to previous
queries). Moreover, a recent work by Dwork et al. shows how to accurately
answer exponentially many adaptively chosen statistical queries via a
computationally inefficient algorithm; and how to answer a quadratic number of
adaptive queries via a computationally efficient algorithm. The latter result
implies that our result is tight up to a linear factor in $n.$
Conceptually, our result demonstrates that achieving statistical validity
alone can be a source of computational intractability in adaptive settings. For
example, in the modern large collaborative research environment, data analysts
typically choose a particular approach based on previous findings. False
discovery occurs if a research finding is supported by the data but not by the
underlying distribution. While the study of preventing false discovery in
Statistics is decades old, to the best of our knowledge our result is the first
to demonstrate a computational barrier. In particular, our result suggests that
the perceived difficulty of preventing false discovery in today's collaborative
research environment may be inherent.
| [
"Moritz Hardt and Jonathan Ullman",
"['Moritz Hardt' 'Jonathan Ullman']"
]
|
cs.AI cs.DC cs.LG | null | 1408.1664 | null | null | http://arxiv.org/pdf/1408.1664v3 | 2016-08-13T04:25:55Z | 2014-08-07T17:40:36Z | A Parallel Algorithm for Exact Bayesian Structure Discovery in Bayesian
Networks | Exact Bayesian structure discovery in Bayesian networks requires exponential
time and space. Using dynamic programming (DP), the fastest known sequential
algorithm computes the exact posterior probabilities of structural features in
$O(2(d+1)n2^n)$ time and space, if the number of nodes (variables) in the
Bayesian network is $n$ and the in-degree (the number of parents) per node is
bounded by a constant $d$. Here we present a parallel algorithm capable of
computing the exact posterior probabilities for all $n(n-1)$ edges with optimal
parallel space efficiency and nearly optimal parallel time efficiency. That is,
if $p=2^k$ processors are used, the run-time reduces to
$O(5(d+1)n2^{n-k}+k(n-k)^d)$ and the space usage becomes $O(n2^{n-k})$ per
processor. Our algorithm is based the observation that the subproblems in the
sequential DP algorithm constitute a $n$-$D$ hypercube. We take a delicate way
to coordinate the computation of correlated DP procedures such that large
amount of data exchange is suppressed. Further, we develop parallel techniques
for two variants of the well-known \emph{zeta transform}, which have
applications outside the context of Bayesian networks. We demonstrate the
capability of our algorithm on datasets with up to 33 variables and its
scalability on up to 2048 processors. We apply our algorithm to a biological
data set for discovering the yeast pheromone response pathways.
| [
"['Yetian Chen' 'Jin Tian' 'Olga Nikolova' 'Srinivas Aluru']",
"Yetian Chen, Jin Tian, Olga Nikolova and Srinivas Aluru"
]
|
cs.LG stat.ML | null | 1408.1717 | null | null | http://arxiv.org/pdf/1408.1717v3 | 2014-11-27T11:12:27Z | 2014-08-07T21:33:51Z | Matrix Completion on Graphs | The problem of finding the missing values of a matrix given a few of its
entries, called matrix completion, has gathered a lot of attention in the
recent years. Although the problem under the standard low rank assumption is
NP-hard, Cand\`es and Recht showed that it can be exactly relaxed if the number
of observed entries is sufficiently large. In this work, we introduce a novel
matrix completion model that makes use of proximity information about rows and
columns by assuming they form communities. This assumption makes sense in
several real-world problems like in recommender systems, where there are
communities of people sharing preferences, while products form clusters that
receive similar ratings. Our main goal is thus to find a low-rank solution that
is structured by the proximities of rows and columns encoded by graphs. We
borrow ideas from manifold learning to constrain our solution to be smooth on
these graphs, in order to implicitly force row and column proximities. Our
matrix recovery model is formulated as a convex non-smooth optimization
problem, for which a well-posed iterative scheme is provided. We study and
evaluate the proposed matrix completion on synthetic and real data, showing
that the proposed structured low-rank recovery model outperforms the standard
matrix completion model in many situations.
| [
"Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, Pierre\n Vandergheynst",
"['Vassilis Kalofolias' 'Xavier Bresson' 'Michael Bronstein'\n 'Pierre Vandergheynst']"
]
|
cond-mat.dis-nn cond-mat.stat-mech cs.LG q-bio.NC | 10.1103/PhysRevE.90.052813 | 1408.1784 | null | null | http://arxiv.org/abs/1408.1784v1 | 2014-08-08T08:13:52Z | 2014-08-08T08:13:52Z | Origin of the computational hardness for learning with binary synapses | Supervised learning in a binary perceptron is able to classify an extensive
number of random patterns by a proper assignment of binary synaptic weights.
However, to find such assignments in practice, is quite a nontrivial task. The
relation between the weight space structure and the algorithmic hardness has
not yet been fully understood. To this end, we analytically derive the
Franz-Parisi potential for the binary preceptron problem, by starting from an
equilibrium solution of weights and exploring the weight space structure around
it. Our result reveals the geometrical organization of the weight
space\textemdash the weight space is composed of isolated solutions, rather
than clusters of exponentially many close-by solutions. The point-like clusters
far apart from each other in the weight space explain the previously observed
glassy behavior of stochastic local search heuristics.
| [
"['Haiping Huang' 'Yoshiyuki Kabashima']",
"Haiping Huang and Yoshiyuki Kabashima"
]
|
cs.AI cs.HC cs.LG cs.RO | null | 1408.1913 | null | null | http://arxiv.org/pdf/1408.1913v1 | 2014-08-08T16:57:22Z | 2014-08-08T16:57:22Z | Using Learned Predictions as Feedback to Improve Control and
Communication with an Artificial Limb: Preliminary Findings | Many people suffer from the loss of a limb. Learning to get by without an arm
or hand can be very challenging, and existing prostheses do not yet fulfil the
needs of individuals with amputations. One promising solution is to provide
greater communication between a prosthesis and its user. Towards this end, we
present a simple machine learning interface to supplement the control of a
robotic limb with feedback to the user about what the limb will be experiencing
in the near future. A real-time prediction learner was implemented to predict
impact-related electrical load experienced by a robot limb; the learning
system's predictions were then communicated to the device's user to aid in
their interactions with a workspace. We tested this system with five
able-bodied subjects. Each subject manipulated the robot arm while receiving
different forms of vibrotactile feedback regarding the arm's contact with its
workspace. Our trials showed that communicable predictions could be learned
quickly during human control of the robot arm. Using these predictions as a
basis for feedback led to a statistically significant improvement in task
performance when compared to purely reactive feedback from the device. Our
study therefore contributes initial evidence that prediction learning and
machine intelligence can benefit not just control, but also feedback from an
artificial limb. We expect that a greater level of acceptance and ownership can
be achieved if the prosthesis itself takes an active role in transmitting
learned knowledge about its state and its situation of use.
| [
"['Adam S. R. Parker' 'Ann L. Edwards' 'Patrick M. Pilarski']",
"Adam S. R. Parker, Ann L. Edwards, and Patrick M. Pilarski"
]
|
cs.LG stat.ML | 10.1016/j.neucom.2014.01.069 | 1408.2003 | null | null | http://arxiv.org/abs/1408.2003v2 | 2014-08-27T02:54:54Z | 2014-08-09T01:31:02Z | LARSEN-ELM: Selective Ensemble of Extreme Learning Machines using LARS
for Blended Data | Extreme learning machine (ELM) as a neural network algorithm has shown its
good performance, such as fast speed, simple structure etc, but also, weak
robustness is an unavoidable defect in original ELM for blended data. We
present a new machine learning framework called LARSEN-ELM for overcoming this
problem. In our paper, we would like to show two key steps in LARSEN-ELM. In
the first step, preprocessing, we select the input variables highly related to
the output using least angle regression (LARS). In the second step, training,
we employ Genetic Algorithm (GA) based selective ensemble and original ELM. In
the experiments, we apply a sum of two sines and four datasets from UCI
repository to verify the robustness of our approach. The experimental results
show that compared with original ELM and other methods such as OP-ELM,
GASEN-ELM and LSBoost, LARSEN-ELM significantly improve robustness performance
while keeping a relatively high speed.
| [
"Bo Han, Bo He, Rui Nian, Mengmeng Ma, Shujing Zhang, Minghui Li and\n Amaury Lendasse",
"['Bo Han' 'Bo He' 'Rui Nian' 'Mengmeng Ma' 'Shujing Zhang' 'Minghui Li'\n 'Amaury Lendasse']"
]
|
cs.LG cs.NE | null | 1408.2004 | null | null | http://arxiv.org/pdf/1408.2004v3 | 2014-09-23T07:48:35Z | 2014-08-09T01:36:03Z | RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning
Machines for Robustness Improvement | Extreme learning machine (ELM) as an emerging branch of shallow networks has
shown its excellent generalization and fast learning speed. However, for
blended data, the robustness of ELM is weak because its weights and biases of
hidden nodes are set randomly. Moreover, the noisy data exert a negative
effect. To solve this problem, a new framework called RMSE-ELM is proposed in
this paper. It is a two-layer recursive model. In the first layer, the
framework trains lots of ELMs in different groups concurrently, then employs
selective ensemble to pick out an optimal set of ELMs in each group, which can
be merged into a large group of ELMs called candidate pool. In the second
layer, selective ensemble is recursively used on candidate pool to acquire the
final ensemble. In the experiments, we apply UCI blended datasets to confirm
the robustness of our new approach in two key aspects (mean square error and
standard deviation). The space complexity of our method is increased to some
degree, but the results have shown that RMSE-ELM significantly improves
robustness with slightly computational time compared with representative
methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It becomes a potential
framework to solve robustness issue of ELM for high-dimensional blended data in
the future.
| [
"['Bo Han' 'Bo He' 'Mengmeng Ma' 'Tingting Sun' 'Tianhong Yan'\n 'Amaury Lendasse']",
"Bo Han, Bo He, Mengmeng Ma, Tingting Sun, Tianhong Yan, Amaury\n Lendasse"
]
|
cs.LG stat.ML | null | 1408.2025 | null | null | http://arxiv.org/pdf/1408.2025v1 | 2014-08-09T05:18:20Z | 2014-08-09T05:18:20Z | Blind Construction of Optimal Nonlinear Recursive Predictors for
Discrete Sequences | We present a new method for nonlinear prediction of discrete random sequences
under minimal structural assumptions. We give a mathematical construction for
optimal predictors of such processes, in the form of hidden Markov models. We
then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which
approximates the ideal predictor from data. We discuss the reliability of CSSR,
its data requirements, and its performance in simulations. Finally, we compare
our approach to existing methods using variablelength Markov models and
cross-validated hidden Markov models, and show theoretically and experimentally
that our method delivers results superior to the former and at least comparable
to the latter.
| [
"Cosma Shalizi, Kristina Lisa Klinkner",
"['Cosma Shalizi' 'Kristina Lisa Klinkner']"
]
|
null | null | 1408.2031 | null | null | http://arxiv.org/pdf/1408.2031v1 | 2014-08-09T05:25:07Z | 2014-08-09T05:25:07Z | Conditional Probability Tree Estimation Analysis and Algorithms | We consider the problem of estimating the conditional probability of a label in time O(log n), where n is the number of possible labels. We analyze a natural reduction of this problem to a set of binary regression problems organized in a tree structure, proving a regret bound that scales with the depth of the tree. Motivated by this analysis, we propose the first online algorithm which provably constructs a logarithmic depth tree on the set of labels to solve this problem. We test the algorithm empirically, showing that it works succesfully on a dataset with roughly 106 labels. | [
"['Alina Beygelzimer' 'John Langford' 'Yuri Lifshits' 'Gregory Sorkin'\n 'Alexander L. Strehl']"
]
|
null | null | 1408.2032 | null | null | http://arxiv.org/pdf/1408.2032v1 | 2014-08-09T05:26:02Z | 2014-08-09T05:26:02Z | Bayesian Multitask Learning with Latent Hierarchies | We learn multiple hypotheses for related tasks under a latent hierarchical relationship between tasks. We exploit the intuition that for domain adaptation, we wish to share classifier structure, but for multitask learning, we wish to share covariance structure. Our hierarchical model is seen to subsume several previously proposed multitask learning models and performs well on three distinct real-world data sets. | [
"['Hal Daume III']"
]
|
cs.LG stat.ML | null | 1408.2033 | null | null | http://arxiv.org/pdf/1408.2033v1 | 2014-08-09T05:26:59Z | 2014-08-09T05:26:59Z | Robust Graphical Modeling with t-Distributions | Graphical Gaussian models have proven to be useful tools for exploring
network structures based on multivariate data. Applications to studies of gene
expression have generated substantial interest in these models, and resulting
recent progress includes the development of fitting methodology involving
penalization of the likelihood function. In this paper we advocate the use of
the multivariate t and related distributions for more robust inference of
graphs. In particular, we demonstrate that penalized likelihood inference
combined with an application of the EM algorithm provides a simple and
computationally efficient approach to model selection in the t-distribution
case.
| [
"Michael A. Finegold, Mathias Drton",
"['Michael A. Finegold' 'Mathias Drton']"
]
|
null | null | 1408.2035 | null | null | http://arxiv.org/pdf/1408.2035v1 | 2014-08-09T05:31:06Z | 2014-08-09T05:31:06Z | Quantum Annealing for Clustering | This paper studies quantum annealing (QA) for clustering, which can be seen as an extension of simulated annealing (SA). We derive a QA algorithm for clustering and propose an annealing schedule, which is crucial in practice. Experiments show the proposed QA algorithm finds better clustering assignments than SA. Furthermore, QA is as easy as SA to implement. | [
"['Kenichi Kurihara' 'Shu Tanaka' 'Seiji Miyashita']"
]
|
cs.LG stat.ML | null | 1408.2036 | null | null | http://arxiv.org/pdf/1408.2036v2 | 2015-10-16T16:08:12Z | 2014-08-09T05:32:03Z | Characterizing predictable classes of processes | The problem is sequence prediction in the following setting. A sequence
x1,..., xn,... of discrete-valued observations is generated according to some
unknown probabilistic law (measure) mu. After observing each outcome, it is
required to give the conditional probabilities of the next observation. The
measure mu belongs to an arbitrary class C of stochastic processes. We are
interested in predictors ? whose conditional probabilities converge to the
'true' mu-conditional probabilities if any mu { C is chosen to generate the
data. We show that if such a predictor exists, then a predictor can also be
obtained as a convex combination of a countably many elements of C. In other
words, it can be obtained as a Bayesian predictor whose prior is concentrated
on a countable set. This result is established for two very different measures
of performance of prediction, one of which is very strong, namely, total
variation, and the other is very weak, namely, prediction in expected average
Kullback-Leibler divergence.
| [
"Daniil Ryabko",
"['Daniil Ryabko']"
]
|
null | null | 1408.2037 | null | null | http://arxiv.org/pdf/1408.2037v1 | 2014-08-09T05:33:21Z | 2014-08-09T05:33:21Z | Quantum Annealing for Variational Bayes Inference | This paper presents studies on a deterministic annealing algorithm based on quantum annealing for variational Bayes (QAVB) inference, which can be seen as an extension of the simulated annealing for variational Bayes (SAVB) inference. QAVB is as easy as SAVB to implement. Experiments revealed QAVB finds a better local optimum than SAVB in terms of the variational free energy in latent Dirichlet allocation (LDA). | [
"['Issei Sato' 'Kenichi Kurihara' 'Shu Tanaka' 'Hiroshi Nakagawa'\n 'Seiji Miyashita']"
]
|
cs.LG stat.ML | null | 1408.2038 | null | null | http://arxiv.org/pdf/1408.2038v1 | 2014-08-09T05:34:21Z | 2014-08-09T05:34:21Z | A direct method for estimating a causal ordering in a linear
non-Gaussian acyclic model | Structural equation models and Bayesian networks have been widely used to
analyze causal relations between continuous variables. In such frameworks,
linear acyclic models are typically used to model the datagenerating process of
variables. Recently, it was shown that use of non-Gaussianity identifies a
causal ordering of variables in a linear acyclic model without using any prior
knowledge on the network structure, which is not the case with conventional
methods. However, existing estimation methods are based on iterative search
algorithms and may not converge to a correct solution in a finite number of
steps. In this paper, we propose a new direct method to estimate a causal
ordering based on non-Gaussianity. In contrast to the previous methods, our
algorithm requires no algorithmic parameters and is guaranteed to converge to
the right solution within a small fixed number of steps if the data strictly
follows the model.
| [
"Shohei Shimizu, Aapo Hyvarinen, Yoshinobu Kawahara",
"['Shohei Shimizu' 'Aapo Hyvarinen' 'Yoshinobu Kawahara']"
]
|
null | null | 1408.2039 | null | null | http://arxiv.org/pdf/1408.2039v1 | 2014-08-09T05:35:48Z | 2014-08-09T05:35:48Z | Incorporating Side Information in Probabilistic Matrix Factorization
with Gaussian Processes | Probabilistic matrix factorization (PMF) is a powerful method for modeling data associ- ated with pairwise relationships, Finding use in collaborative Filtering, computational bi- ology, and document analysis, among other areas. In many domains, there are additional covariates that can assist in prediction. For example, when modeling movie ratings, we might know when the rating occurred, where the user lives, or what actors appear in the movie. It is difficult, however, to incorporate this side information into the PMF model. We propose a framework for incorporating side information by coupling together multi- ple PMF problems via Gaussian process priors. We replace scalar latent features with func- tions that vary over the covariate space. The GP priors on these functions require them to vary smoothly and share information. We apply this new method to predict the scores of professional basketball games, where side information about the venue and date of the game are relevant for the outcome. | [
"['Ryan Prescott Adams' 'George E. Dahl' 'Iain Murray']"
]
|
null | null | 1408.2040 | null | null | http://arxiv.org/pdf/1408.2040v1 | 2014-08-09T05:36:41Z | 2014-08-09T05:36:41Z | Prediction with Advice of Unknown Number of Experts | In the framework of prediction with expert advice, we consider a recently introduced kind of regret bounds: the bounds that depend on the effective instead of nominal number of experts. In contrast to the Normal- Hedge bound, which mainly depends on the effective number of experts but also weakly depends on the nominal one, we obtain a bound that does not contain the nominal number of experts at all. We use the defensive forecasting method and introduce an application of defensive forecasting to multivalued supermartingales. | [
"['Alexey Chernov' 'Vladimir Vovk']"
]
|
cs.LG cs.DC | null | 1408.2041 | null | null | http://arxiv.org/pdf/1408.2041v1 | 2014-08-09T05:38:37Z | 2014-08-09T05:38:37Z | GraphLab: A New Framework For Parallel Machine Learning | Designing and implementing efficient, provably correct parallel machine
learning (ML) algorithms is challenging. Existing high-level parallel
abstractions like MapReduce are insufficiently expressive while low-level tools
like MPI and Pthreads leave ML experts repeatedly solving the same design
challenges. By targeting common patterns in ML, we developed GraphLab, which
improves upon abstractions like MapReduce by compactly expressing asynchronous
iterative algorithms with sparse computational dependencies while ensuring data
consistency and achieving a high degree of parallel performance. We demonstrate
the expressiveness of the GraphLab framework by designing and implementing
parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso and
Compressed Sensing. We show that using GraphLab we can achieve excellent
parallel performance on large scale real-world problems.
| [
"Yucheng Low, Joseph E. Gonzalez, Aapo Kyrola, Danny Bickson, Carlos E.\n Guestrin, Joseph Hellerstein",
"['Yucheng Low' 'Joseph E. Gonzalez' 'Aapo Kyrola' 'Danny Bickson'\n 'Carlos E. Guestrin' 'Joseph Hellerstein']"
]
|
null | null | 1408.2042 | null | null | http://arxiv.org/pdf/1408.2042v1 | 2014-08-09T05:39:50Z | 2014-08-09T05:39:50Z | Gaussian Process Structural Equation Models with Latent Variables | In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well-studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. The sparse parameterization is given a full Bayesian treatment without compromising Markov chain Monte Carlo efficiency. We compare the stability of the sampling procedure and the predictive ability of the model against the current practice. | [
"['Ricardo Silva' 'Robert B. Gramacy']"
]
|
cs.LG stat.ML | null | 1408.2044 | null | null | http://arxiv.org/pdf/1408.2044v1 | 2014-08-09T05:40:28Z | 2014-08-09T05:40:28Z | Matrix Coherence and the Nystrom Method | The Nystrom method is an efficient technique used to speed up large-scale
learning applications by generating low-rank approximations. Crucial to the
performance of this technique is the assumption that a matrix can be well
approximated by working exclusively with a subset of its columns. In this work
we relate this assumption to the concept of matrix coherence, connecting
coherence to the performance of the Nystrom method. Making use of related work
in the compressed sensing and the matrix completion literature, we derive novel
coherence-based bounds for the Nystrom method in the low-rank setting. We then
present empirical results that corroborate these theoretical bounds. Finally,
we present more general empirical results for the full-rank setting that
convincingly demonstrate the ability of matrix coherence to measure the degree
to which information can be extracted from a subset of columns.
| [
"Ameet Talwalkar, Afshin Rostamizadeh",
"['Ameet Talwalkar' 'Afshin Rostamizadeh']"
]
|
cs.LG cs.AI | null | 1408.2045 | null | null | http://arxiv.org/pdf/1408.2045v1 | 2014-08-09T05:41:26Z | 2014-08-09T05:41:26Z | Efficient Clustering with Limited Distance Information | Given a point set S and an unknown metric d on S, we study the problem of
efficiently partitioning S into k clusters while querying few distances between
the points. In our model we assume that we have access to one versus all
queries that given a point s 2 S return the distances between s and all other
points. We show that given a natural assumption about the structure of the
instance, we can efficiently find an accurate clustering using only O(k)
distance queries. We use our algorithm to cluster proteins by sequence
similarity. This setting nicely fits our model because we can use a fast
sequence database search program to query a sequence against an entire dataset.
We conduct an empirical study that shows that even though we query a small
fraction of the distances between the points, we produce clusterings that are
close to a desired clustering given by manual classification.
| [
"['Konstantin Voevodski' 'Maria-Florina Balcan' 'Heiko Roglin'\n 'Shang-Hua Teng' 'Yu Xia']",
"Konstantin Voevodski, Maria-Florina Balcan, Heiko Roglin, Shang-Hua\n Teng, Yu Xia"
]
|
null | null | 1408.2047 | null | null | http://arxiv.org/pdf/1408.2047v1 | 2014-08-09T05:45:11Z | 2014-08-09T05:45:11Z | Bayesian Structure Learning for Markov Random Fields with a Spike and
Slab Prior | In recent years a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields. These methods are mostly based on L1-regularized optimization which has a number of disadvantages such as the inability to assess model uncertainty and expensive crossvalidation to find the optimal regularization parameter. Moreover, the model's predictive performance may degrade dramatically with a suboptimal value of the regularization parameter (which is sometimes desirable to induce sparseness). We propose a fully Bayesian approach based on a "spike and slab" prior (similar to L0 regularization) that does not suffer from these shortcomings. We develop an approximate MCMC method combining Langevin dynamics and reversible jump MCMC to conduct inference in this model. Experiments show that the proposed model learns a good combination of the structure and parameter values without the need for separate hyper-parameter tuning. Moreover, the model's predictive performance is much more robust than L1-based methods with hyper-parameter settings that induce highly sparse model structures. | [
"['Yutian Chen' 'Max Welling']"
]
|
cs.LG stat.ML | null | 1408.2049 | null | null | http://arxiv.org/pdf/1408.2049v2 | 2016-07-13T21:01:52Z | 2014-08-09T05:47:25Z | Optimally-Weighted Herding is Bayesian Quadrature | Herding and kernel herding are deterministic methods of choosing samples
which summarise a probability distribution. A related task is choosing samples
for estimating integrals using Bayesian quadrature. We show that the criterion
minimised when selecting samples in kernel herding is equivalent to the
posterior variance in Bayesian quadrature. We then show that sequential
Bayesian quadrature can be viewed as a weighted version of kernel herding which
achieves performance superior to any other weighted herding method. We
demonstrate empirically a rate of convergence faster than O(1/N). Our results
also imply an upper bound on the empirical error of the Bayesian quadrature
estimate.
| [
"Ferenc Huszar, David Duvenaud",
"['Ferenc Huszar' 'David Duvenaud']"
]
|
null | null | 1408.2051 | null | null | http://arxiv.org/pdf/1408.2051v1 | 2014-08-09T05:48:31Z | 2014-08-09T05:48:31Z | Algorithms for Approximate Minimization of the Difference Between
Submodular Functions, with Applications | We extend the work of Narasimhan and Bilmes [30] for minimizing set functions representable as a dierence between submodular functions. Similar to [30], our new algorithms are guaranteed to monotonically reduce the objective function at every step. We empirically and theoretically show that the per-iteration cost of our algorithms is much less than [30], and our algorithms can be used to efficiently minimize a dierence between submodular functions under various combinatorial constraints, a problem not previously addressed. We provide computational bounds and a hardness result on the multiplicative inapproximability of minimizing the dierence between submodular functions. We show, however, that it is possible to give worst-case additive bounds by providing a polynomial time computable lower-bound on the minima. Finally we show how a number of machine learning problems can be modeled as minimizing the dierence between submodular functions. We experimentally show the validity of our algorithms by testing them on the problem of feature selection with submodular cost features. | [
"['Rishabh Iyer' 'Jeff A. Bilmes']"
]
|
cs.LG cs.NA stat.ML | null | 1408.2054 | null | null | http://arxiv.org/pdf/1408.2054v1 | 2014-08-09T05:52:02Z | 2014-08-09T05:52:02Z | Non-Convex Rank Minimization via an Empirical Bayesian Approach | In many applications that require matrix solutions of minimal rank, the
underlying cost function is non-convex leading to an intractable, NP-hard
optimization problem. Consequently, the convex nuclear norm is frequently used
as a surrogate penalty term for matrix rank. The problem is that in many
practical scenarios there is no longer any guarantee that we can correctly
estimate generative low-rank matrices of interest, theoretical special cases
notwithstanding. Consequently, this paper proposes an alternative empirical
Bayesian procedure build upon a variational approximation that, unlike the
nuclear norm, retains the same globally minimizing point estimate as the rank
function under many useful constraints. However, locally minimizing solutions
are largely smoothed away via marginalization, allowing the algorithm to
succeed when standard convex relaxations completely fail. While the proposed
methodology is generally applicable to a wide range of low-rank applications,
we focus our attention on the robust principal component analysis problem
(RPCA), which involves estimating an unknown low-rank matrix with unknown
sparse corruptions. Theoretical and empirical evidence are presented to show
that our method is potentially superior to related MAP-based approaches, for
which the convex principle component pursuit (PCP) algorithm (Candes et al.,
2011) can be viewed as a special case.
| [
"['David Wipf']",
"David Wipf"
]
|
null | null | 1408.2055 | null | null | http://arxiv.org/pdf/1408.2055v1 | 2014-08-09T05:54:49Z | 2014-08-09T05:54:49Z | Guess Who Rated This Movie: Identifying Users Through Subspace
Clustering | It is often the case that, within an online recommender system, multiple users share a common account. Can such shared accounts be identified solely on the basis of the userprovided ratings? Once a shared account is identified, can the different users sharing it be identified as well? Whenever such user identification is feasible, it opens the way to possible improvements in personalized recommendations, but also raises privacy concerns. We develop a model for composite accounts based on unions of linear subspaces, and use subspace clustering for carrying out the identification task. We show that a significant fraction of such accounts is identifiable in a reliable manner, and illustrate potential uses for personalized recommendation. | [
"['Amy Zhang' 'Nadia Fawaz' 'Stratis Ioannidis' 'Andrea Montanari']"
]
|
null | null | 1408.2060 | null | null | http://arxiv.org/pdf/1408.2060v1 | 2014-08-09T05:58:33Z | 2014-08-09T05:58:33Z | Parallel Gaussian Process Regression with Low-Rank Covariance Matrix
Approximations | Gaussian processes (GP) are Bayesian non-parametric models that are widely used for probabilistic regression. Unfortunately, it cannot scale well with large data nor perform real-time predictions due to its cubic time cost in the data size. This paper presents two parallel GP regression methods that exploit low-rank covariance matrix approximations for distributing the computational load among parallel machines to achieve time efficiency and scalability. We theoretically guarantee the predictive performances of our proposed parallel GPs to be equivalent to that of some centralized approximate GP regression methods: The computation of their centralized counterparts can be distributed among parallel machines, hence achieving greater time efficiency and scalability. We analytically compare the properties of our parallel GPs such as time, space, and communication complexity. Empirical evaluation on two real-world datasets in a cluster of 20 computing nodes shows that our parallel GPs are significantly more time-efficient and scalable than their centralized counterparts and exact/full GP while achieving predictive performances comparable to full GP. | [
"['Jie Chen' 'Nannan Cao' 'Kian Hsiang Low' 'Ruofei Ouyang'\n 'Colin Keng-Yan Tan' 'Patrick Jaillet']"
]
|
cs.LG stat.ML | null | 1408.2061 | null | null | http://arxiv.org/pdf/1408.2061v1 | 2014-08-09T06:00:05Z | 2014-08-09T06:00:05Z | Warped Mixtures for Nonparametric Cluster Shapes | A mixture of Gaussians fit to a single curved or heavy-tailed cluster will
report that the data contains many clusters. To produce more appropriate
clusterings, we introduce a model which warps a latent mixture of Gaussians to
produce nonparametric cluster shapes. The possibly low-dimensional latent
mixture model allows us to summarize the properties of the high-dimensional
clusters (or density manifolds) describing the data. The number of manifolds,
as well as the shape and dimension of each manifold is automatically inferred.
We derive a simple inference scheme for this model which analytically
integrates out both the mixture parameters and the warping function. We show
that our model is effective for density estimation, performs better than
infinite Gaussian mixture models at recovering the true number of clusters, and
produces interpretable summaries of high-dimensional datasets.
| [
"Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani",
"['Tomoharu Iwata' 'David Duvenaud' 'Zoubin Ghahramani']"
]
|
null | null | 1408.2062 | null | null | http://arxiv.org/pdf/1408.2062v1 | 2014-08-09T06:01:37Z | 2014-08-09T06:01:37Z | The Lovasz-Bregman Divergence and connections to rank aggregation,
clustering, and web ranking | We extend the recently introduced theory of Lovasz-Bregman (LB) divergences (Iyer & Bilmes 2012) in several ways. We show that they represent a distortion between a "score" and an "ordering", thus providing a new view of rank aggregation and order based clustering with interesting connections to web ranking. We show how the LB divergences have a number of properties akin to many permutation based metrics, and in fact have as special cases forms very similar to the Kendall-tau metric. We also show how the LB divergences subsume a number of commonly used ranking measures in information retrieval, like NDCG and AUC. Unlike the traditional permutation based metrics, however, the LB divergence naturally captures a notion of "confidence" in the orderings, thus providing a new representation to applications involving aggregating scores as opposed to just orderings. We show how a number of recently used web ranking models are forms of Lovasz-Bregman rank aggregation and also observe that a natural form of Mallow's model using the LB divergence has been used as conditional ranking models for the "Learning to Rank" problem. | [
"['Rishabh Iyer' 'Jeff A. Bilmes']"
]
|
null | null | 1408.2064 | null | null | http://arxiv.org/pdf/1408.2064v1 | 2014-08-09T06:04:33Z | 2014-08-09T06:04:33Z | One-Class Support Measure Machines for Group Anomaly Detection | We propose one-class support measure machines (OCSMMs) for group anomaly detection which aims at recognizing anomalous aggregate behaviors of data points. The OCSMMs generalize well-known one-class support vector machines (OCSVMs) to a space of probability measures. By formulating the problem as quantile estimation on distributions, we can establish an interesting connection to the OCSVMs and variable kernel density estimators (VKDEs) over the input space on which the distributions are defined, bridging the gap between large-margin methods and kernel density estimators. In particular, we show that various types of VKDEs can be considered as solutions to a class of regularization problems studied in this paper. Experiments on Sloan Digital Sky Survey dataset and High Energy Particle Physics dataset demonstrate the benefits of the proposed framework in real-world applications. | [
"['Krikamol Muandet' 'Bernhard Schoelkopf']"
]
|
null | null | 1408.2065 | null | null | http://arxiv.org/pdf/1408.2065v1 | 2014-08-09T06:05:51Z | 2014-08-09T06:05:51Z | Normalized Online Learning | We introduce online learning algorithms which are independent of feature scales, proving regret bounds dependent on the ratio of scales existent in the data rather than the absolute scale. This has several useful effects: there is no need to pre-normalize data, the test-time and test-space complexity are reduced, and the algorithms are more robust. | [
"['Stephane Ross' 'Paul Mineiro' 'John Langford']"
]
|
null | null | 1408.2066 | null | null | http://arxiv.org/pdf/1408.2066v1 | 2014-08-09T06:06:49Z | 2014-08-09T06:06:49Z | Scalable Matrix-valued Kernel Learning for High-dimensional Nonlinear
Multivariate Regression and Granger Causality | We propose a general matrix-valued multiple kernel learning framework for high-dimensional nonlinear multivariate regression problems. This framework allows a broad class of mixed norm regularizers, including those that induce sparsity, to be imposed on a dictionary of vector-valued Reproducing Kernel Hilbert Spaces. We develop a highly scalable and eigendecomposition-free algorithm that orchestrates two inexact solvers for simultaneously learning both the input and output components of separable matrix-valued kernels. As a key application enabled by our framework, we show how high-dimensional causal inference tasks can be naturally cast as sparse function estimation problems, leading to novel nonlinear extensions of a class of Graphical Granger Causality techniques. Our algorithmic developments and extensive empirical studies are complemented by theoretical analyses in terms of Rademacher generalization bounds. | [
"['Vikas Sindhwani' 'Ha Quang Minh' 'Aurelie Lozano']"
]
|
null | null | 1408.2067 | null | null | http://arxiv.org/pdf/1408.2067v1 | 2014-08-09T06:07:52Z | 2014-08-09T06:07:52Z | Probabilistic inverse reinforcement learning in unknown environments | We consider the problem of learning by demonstration from agents acting in unknown stochastic Markov environments or games. Our aim is to estimate agent preferences in order to construct improved policies for the same task that the agents are trying to solve. To do so, we extend previous probabilistic approaches for inverse reinforcement learning in known MDPs to the case of unknown dynamics or opponents. We do this by deriving two simplified probabilistic models of the demonstrator's policy and utility. For tractability, we use maximum a posteriori estimation rather than full Bayesian inference. Under a flat prior, this results in a convex optimisation problem. We find that the resulting algorithms are highly competitive against a variety of other methods for inverse reinforcement learning that do have knowledge of the dynamics. | [
"['Aristide Tossou' 'Christos Dimitrakakis']"
]
|
math.ST cs.LG stat.ML stat.TH | null | 1408.2156 | null | null | http://arxiv.org/pdf/1408.2156v1 | 2014-08-09T21:40:15Z | 2014-08-09T21:40:15Z | Statistical guarantees for the EM algorithm: From population to
sample-based analysis | We develop a general framework for proving rigorous guarantees on the
performance of the EM algorithm and a variant known as gradient EM. Our
analysis is divided into two parts: a treatment of these algorithms at the
population level (in the limit of infinite data), followed by results that
apply to updates based on a finite set of samples. First, we characterize the
domain of attraction of any global maximizer of the population likelihood. This
characterization is based on a novel view of the EM updates as a perturbed form
of likelihood ascent, or in parallel, of the gradient EM updates as a perturbed
form of standard gradient ascent. Leveraging this characterization, we then
provide non-asymptotic guarantees on the EM and gradient EM algorithms when
applied to a finite set of samples. We develop consequences of our general
theory for three canonical examples of incomplete-data problems: mixture of
Gaussians, mixture of regressions, and linear regression with covariates
missing completely at random. In each case, our theory guarantees that with a
suitable initialization, a relatively small number of EM (or gradient EM) steps
will yield (with high probability) an estimate that is within statistical error
of the MLE. We provide simulations to confirm this theoretically predicted
behavior.
| [
"Sivaraman Balakrishnan, Martin J. Wainwright, Bin Yu",
"['Sivaraman Balakrishnan' 'Martin J. Wainwright' 'Bin Yu']"
]
|
cs.IR cs.LG | null | 1408.2195 | null | null | http://arxiv.org/pdf/1408.2195v1 | 2014-08-10T07:28:20Z | 2014-08-10T07:28:20Z | R-UCB: a Contextual Bandit Algorithm for Risk-Aware Recommender Systems | Mobile Context-Aware Recommender Systems can be naturally modelled as an
exploration/exploitation trade-off (exr/exp) problem, where the system has to
choose between maximizing its expected rewards dealing with its current
knowledge (exploitation) and learning more about the unknown user's preferences
to improve its knowledge (exploration). This problem has been addressed by the
reinforcement learning community but they do not consider the risk level of the
current user's situation, where it may be dangerous to recommend items the user
may not desire in her current situation if the risk level is high. We introduce
in this paper an algorithm named R-UCB that considers the risk level of the
user's situation to adaptively balance between exr and exp. The detailed
analysis of the experimental results reveals several important discoveries in
the exr/exp behaviour.
| [
"['Djallel Bouneffouf']",
"Djallel Bouneffouf"
]
|
cs.LG cs.AI | null | 1408.2196 | null | null | http://arxiv.org/pdf/1408.2196v1 | 2014-08-10T07:47:50Z | 2014-08-10T07:47:50Z | Exponentiated Gradient Exploration for Active Learning | Active learning strategies respond to the costly labelling task in a
supervised classification by selecting the most useful unlabelled examples in
training a predictive model. Many conventional active learning algorithms focus
on refining the decision boundary, rather than exploring new regions that can
be more informative. In this setting, we propose a sequential algorithm named
EG-Active that can improve any Active learning algorithm by an optimal random
exploration. Experimental results show a statistically significant and
appreciable improvement in the performance of our new approach over the
existing active feedback methods.
| [
"['Djallel Bouneffouf']",
"Djallel Bouneffouf"
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.