title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
On GROUSE and Incremental SVD | cs.NA cs.LG stat.ML | GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental
algorithm for identifying a subspace of $\mathbb{R}^n$ from a sequence of vectors in this
subspace, where only a subset of components of each vector is revealed at each
iteration. Recent analysis has shown that GROUSE converges locally at an
expected linear rate, under certain assumptions. GROUSE has a similar flavor to
the incremental singular value decomposition algorithm, which updates the SVD
of a matrix following addition of a single column. In this paper, we modify the
incremental SVD approach to handle missing data, and demonstrate that this
modified approach is equivalent to GROUSE, for a certain choice of an
algorithmic parameter.
| Laura Balzano and Stephen J. Wright | null | 1307.5494 | null | null |
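A minimal NumPy sketch of a GROUSE-style rank-one update from partially observed vectors may help fix ideas; the step size `eta`, the least-squares weight solve, and the toy dimensions are illustrative assumptions, not the paper's exact algorithmic parameter choice:

```python
import numpy as np

def grouse_step(U, v, omega, eta=0.1):
    """One GROUSE-style update of an n x d orthonormal basis U from a
    vector v revealed only on the index set omega (a sketch)."""
    # Least-squares weights using only the revealed components.
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)
    p = U @ w                        # prediction of the full vector
    r = np.zeros(len(v))
    r[omega] = v[omega] - p[omega]   # residual on the observed entries
    rn, pn, wn = map(np.linalg.norm, (r, p, w))
    if rn < 1e-12 or pn < 1e-12:
        return U
    t = eta * rn * pn                # rotation angle of the rank-one update
    return U + np.outer((np.cos(t) - 1) * p / pn + np.sin(t) * r / rn, w / wn)

# Toy run: track a 3-dimensional subspace of R^50 from half-observed vectors.
rng = np.random.default_rng(0)
U_true, _ = np.linalg.qr(rng.standard_normal((50, 3)))
U_hat, _ = np.linalg.qr(rng.standard_normal((50, 3)))
for _ in range(3000):
    v = U_true @ rng.standard_normal(3)
    omega = rng.choice(50, size=25, replace=False)
    U_hat = grouse_step(U_hat, v, omega)
```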
A scalable stage-wise approach to large-margin multi-class loss based
boosting | cs.LG | We present a scalable and effective classification model to train multi-class
boosting for multi-class classification problems. Shen and Hao introduced a
direct formulation of multi-class boosting in the sense that it directly
maximizes the multi-class margin [C. Shen and Z. Hao, "A direct formulation
for totally-corrective multi-class boosting", in Proc. IEEE Conf. Comp. Vis.
Patt. Recogn., 2011]. The major problem of their approach is its high
computational complexity for training, which hampers its application on
real-world problems. In this work, we propose a scalable and simple stage-wise
multi-class boosting method, which also directly maximizes the multi-class
margin. Our approach offers a few advantages: 1) it is simple and
computationally efficient to train. The approach can speed up the training time
by more than two orders of magnitude without sacrificing the classification
accuracy. 2) Like traditional AdaBoost, it is less sensitive to the choice of
parameters and empirically demonstrates excellent generalization performance.
Experimental results on challenging multi-class machine learning and vision
tasks demonstrate that the proposed approach substantially improves the
convergence rate and accuracy of the final visual detector at no additional
computational cost compared to existing multi-class boosting.
| Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel | null | 1307.5497 | null | null |
Performance comparison of State-of-the-art Missing Value Imputation
Algorithms on Some Benchmark Datasets | cs.LG stat.ML | Decision making from data involves identifying a set of attributes that
contribute to effective decision making through computational intelligence. The
presence of missing values greatly influences the selection of the right set of
attributes, and this degrades the classification accuracies of the classifiers.
As missing values are quite common in the data collection phase of field
experiments or clinical trials, appropriate handling would improve classifier
performance. In this paper we present a review of recently developed missing
value imputation algorithms and compare their performance on some benchmark
datasets.
| M. Naresh Kumar | null | 1307.5599 | null | null |
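For a flavor of the kind of comparison such a review performs, here is a hedged scikit-learn sketch that scores two common imputers on one benchmark dataset with entries removed at random; the missingness rate, classifier, and imputer settings are arbitrary illustrative choices, not the paper's protocol:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulate values going missing during the data collection phase.
X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan   # 20% missing at random

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    clf = make_pipeline(imputer, StandardScaler(),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X_missing, y, cv=5)
    print(f"{name:>4} imputation: mean accuracy {scores.mean():.3f}")
```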
Dimension Reduction via Colour Refinement | cs.DS cs.DM cs.LG math.OC | Colour refinement is a basic algorithmic routine for graph isomorphism
testing, appearing as a subroutine in almost all practical isomorphism solvers.
It partitions the vertices of a graph into "colour classes" in such a way that
all vertices in the same colour class have the same number of neighbours in
every colour class. Tinhofer (Disc. App. Math., 1991), Ramana, Scheinerman, and
Ullman (Disc. Math., 1994) and Godsil (Lin. Alg. and its App., 1997)
established a tight correspondence between colour refinement and fractional
isomorphisms of graphs, which are solutions to the LP relaxation of a natural
ILP formulation of graph isomorphism.
We introduce a version of colour refinement for matrices and extend existing
quasilinear algorithms for computing the colour classes. Then we generalise the
correspondence between colour refinement and fractional automorphisms and
develop a theory of fractional automorphisms and isomorphisms of matrices.
We apply our results to reduce the dimensions of systems of linear equations
and linear programs. Specifically, we show that any given LP L can efficiently
be transformed into a (potentially) smaller LP L' whose number of variables and
constraints is the number of colour classes of the colour refinement algorithm,
applied to a matrix associated with the LP. The transformation is such that we
can easily (by a linear mapping) map both feasible and optimal solutions back
and forth between the two LPs. We demonstrate empirically that colour
refinement can indeed greatly reduce the cost of solving linear programs.
| Martin Grohe, Kristian Kersting, Martin Mladenov, Erkal Selman | null | 1307.5697 | null | null |
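Colour refinement on a graph is short to write down. The sketch below is a plain, unoptimized fixed-point iteration (the paper's algorithms are quasilinear; this one is not) that refines colours until vertices with the same colour have identically coloured neighbourhoods:

```python
def colour_refinement(adj):
    """Refine vertex colours until stable: two vertices share a colour
    iff they have the same number of neighbours in every colour class
    (1-dimensional Weisfeiler-Leman; an unoptimized sketch)."""
    colour = {v: 0 for v in adj}                 # start monochromatic
    while True:
        # Signature = own colour plus multiset of neighbour colours.
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_colour = {v: palette[sig[v]] for v in adj}
        if new_colour == colour:
            return colour                        # stable partition
        colour = new_colour

# A 4-cycle with a pendant vertex attached to vertex 2.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3, 4], 3: [0, 2], 4: [2]}
print(colour_refinement(adj))   # {0: 1, 1: 2, 2: 3, 3: 2, 4: 0}
```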
A New Strategy of Cost-Free Learning in the Class Imbalance Problem | cs.LG | In this work, we define cost-free learning (CFL) formally in comparison with
cost-sensitive learning (CSL). The main difference between them is that a CFL
approach seeks optimal classification results without requiring any cost
information, even in the class imbalance problem. In fact, several CFL
approaches exist in the related studies, such as sampling and some
criteria-based approaches. However, to the best of our knowledge, none of the existing
CFL and CSL approaches are able to process the abstaining classifications
properly when no information is given about errors and rejects. Based on
information theory, we propose a novel CFL which seeks to maximize normalized
mutual information of the targets and the decision outputs of classifiers.
Using the strategy, we can deal with binary/multi-class classifications
with/without abstaining. Significant features are observed from the new
strategy. While the degree of class imbalance is changing, the proposed
strategy is able to balance the errors and rejects accordingly and
automatically. Another advantage of the strategy is its ability to derive
optimal rejection thresholds for abstaining classifications and the
"equivalent" costs in binary classifications. The connection between rejection
thresholds and ROC curve is explored. Empirical investigation is made on
several benchmark data sets in comparison with other existing approaches. The
classification results demonstrate a promising perspective of the strategy in
machine learning.
| Xiaowan Zhang and Bao-Gang Hu | null | 1307.5730 | null | null |
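For intuition, the objective being maximized can be evaluated directly from a count (confusion) matrix whose extra column holds rejects. A small sketch using one common normalization of mutual information, by the target entropy; the paper's exact normalization may differ:

```python
import numpy as np

def normalized_mutual_information(counts):
    """NMI between targets T (rows) and decisions Y (columns) computed
    from a count matrix; an abstain option is just an extra column."""
    P = counts / counts.sum()
    pt, py = P.sum(axis=1), P.sum(axis=0)
    nz = P > 0
    mi = np.sum(P[nz] * np.log(P[nz] / np.outer(pt, py)[nz]))
    ht = -np.sum(pt[pt > 0] * np.log(pt[pt > 0]))   # target entropy H(T)
    return mi / ht

# Imbalanced binary problem (90 vs 10 examples) with a reject column.
counts = np.array([[85, 3, 2],    # class 0: predicted 0 / 1 / reject
                   [1, 8, 1]])    # class 1
print(round(normalized_mutual_information(counts), 3))
```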
Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery | stat.ML cs.LG | Recovering a low-rank tensor from incomplete information is a recurring
problem in signal processing and machine learning. The most popular convex
relaxation of this problem minimizes the sum of the nuclear norms of the
unfoldings of the tensor. We show that this approach can be substantially
suboptimal: reliably recovering a $K$-way tensor of length $n$ and Tucker rank
$r$ from Gaussian measurements requires $\Omega(r n^{K-1})$ observations. In
contrast, a certain (intractable) nonconvex formulation needs only $O(r^K +
nrK)$ observations. We introduce a very simple, new convex relaxation, which
partially bridges this gap. Our new formulation succeeds with $O(r^{\lfloor K/2
\rfloor}n^{\lceil K/2 \rceil})$ observations. While these results pertain to
Gaussian measurements, simulations strongly suggest that the new norm also
outperforms the sum of nuclear norms for tensor completion from a random subset
of entries.
Our lower bound for the sum-of-nuclear-norms model follows from a new result
on recovering signals with multiple sparse structures (e.g. sparse, low rank),
which perhaps surprisingly demonstrates the significant suboptimality of the
commonly used recovery approach via minimizing the sum of individual sparsity
inducing norms (e.g. $l_1$, nuclear norm). Our new formulation for low-rank
tensor recovery, however, opens the possibility of reducing the sample
complexity by exploiting several structures jointly.
| Cun Mu, Bo Huang, John Wright, Donald Goldfarb | null | 1307.5870 | null | null |
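The contrast between the two relaxations is easiest to see through the matricizations they penalize. Below is a small NumPy sketch of the penalties themselves (not the full recovery programs) for a 4-way tensor, where the "square" reshaping groups the first $\lfloor K/2 \rfloor$ modes into rows:

```python
import numpy as np

def nuclear_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

# A 4-way rank-1 tensor with n = 8 in each mode.
n, K = 8, 4
rng = np.random.default_rng(0)
T = np.einsum('i,j,k,l->ijkl', *(rng.standard_normal(n) for _ in range(K)))

# Sum of nuclear norms of the K mode unfoldings (each n x n^{K-1}).
snn = sum(nuclear_norm(np.moveaxis(T, k, 0).reshape(n, -1)) for k in range(K))

# Square matricization: the first K//2 modes become rows (n^2 x n^2 here);
# its nuclear norm is the proposed penalty.
square = nuclear_norm(T.reshape(n ** (K // 2), -1))
print(snn, square)
```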
A Near-Optimal Dynamic Learning Algorithm for Online Matching Problems
with Concave Returns | cs.DS cs.LG math.OC | We consider an online matching problem with concave returns. This problem is
a significant generalization of the Adwords allocation problem and has vast
applications in online advertising. In this problem, items arrive
sequentially and each has to be allocated to one of the bidders, who bid a
certain value for each item. At each time, the decision maker has to allocate
the current item to one of the bidders without knowing the future bids and the
objective is to maximize the sum of some concave functions of each bidder's
aggregate value. In this work, we propose an algorithm that achieves
near-optimal performance for this problem when the bids arrive in a random
order and the input data satisfies certain conditions. The key idea of our
algorithm is to learn the input data pattern dynamically: we solve a sequence
of carefully chosen partial allocation problems and use their optimal solutions
to assist with the future decision. Our analysis belongs to the primal-dual
paradigm; however, the absence of linearity of the objective function and the
dynamic nature of the algorithm make our analysis quite unique.
| Xiao Alison Chen, Zizhuo Wang | null | 1307.5934 | null | null |
Online Optimization in Dynamic Environments | stat.ML cs.LG math.OC | High-velocity streams of high-dimensional data pose significant "big data"
analysis challenges across a range of applications and settings. Online
learning and online convex programming play a significant role in the rapid
recovery of important or anomalous information from these large datastreams.
While recent advances in online learning have led to novel and rapidly
converging algorithms, these methods are unable to adapt to nonstationary
environments arising in real-world problems. This paper describes a dynamic
mirror descent framework which addresses this challenge, yielding low
theoretical regret bounds and accurate, adaptive, and computationally efficient
algorithms which are applicable to broad classes of problems. The methods are
capable of learning and adapting to an underlying and possibly time-varying
dynamical model. Empirical results in the context of dynamic texture analysis,
solar flare detection, sequential compressed sensing of a dynamic scene,
traffic surveillance, tracking self-exciting point processes and network
behavior in the Enron email corpus support the core theoretical findings.
| Eric C. Hall and Rebecca M. Willett | null | 1307.5944 | null | null |
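A minimal sketch of the dynamic mirror descent idea with the Euclidean mirror map: take a gradient step on the current loss, then push the iterate through a (possibly time-varying) dynamical model. The rotation dynamics, step size, and squared loss below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def dmd_step(theta, grad, dynamics, eta=0.1):
    """One dynamic-mirror-descent-style step (Euclidean mirror map):
    a mirror descent update followed by the dynamical model."""
    return dynamics(theta - eta * grad(theta))

# Track a slowly rotating target in the plane under squared loss.
angle = 0.01
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
target = np.array([1.0, 0.0])
theta = np.zeros(2)
for _ in range(500):
    g = lambda th, tgt=target: 2.0 * (th - tgt)   # grad of ||th - tgt||^2
    theta = dmd_step(theta, g, lambda th: R @ th)
    target = R @ target                           # the environment drifts
print(np.linalg.norm(theta - target))             # small tracking error
```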
Modeling Human Decision-making in Generalized Gaussian Multi-armed
Bandits | cs.LG math.OC stat.ML | We present a formal model of human decision-making in explore-exploit tasks
using the context of multi-armed bandit problems, where the decision-maker must
choose among multiple options with uncertain rewards. We address the standard
multi-armed bandit problem, the multi-armed bandit problem with transition
costs, and the multi-armed bandit problem on graphs. We focus on the case of
Gaussian rewards in a setting where the decision-maker uses Bayesian inference
to estimate the reward values. We model the decision-maker's prior knowledge
with the Bayesian prior on the mean reward. We develop the upper credible limit
(UCL) algorithm for the standard multi-armed bandit problem and show that this
deterministic algorithm achieves logarithmic cumulative expected regret, which
is optimal performance for uninformative priors. We show how good priors and
good assumptions on the correlation structure among arms can greatly enhance
decision-making performance, even over short time horizons. We extend to the
stochastic UCL algorithm and draw several connections to human decision-making
behavior. We present empirical data from human experiments and show that human
performance is efficiently captured by the stochastic UCL algorithm with
appropriate parameters. For the multi-armed bandit problem with transition
costs and the multi-armed bandit problem on graphs, we generalize the UCL
algorithm to the block UCL algorithm and the graphical block UCL algorithm,
respectively. We show that these algorithms also achieve logarithmic cumulative
expected regret and require a sub-logarithmic expected number of transitions
among arms. We further illustrate the performance of these algorithms with
numerical examples. NB: Appendix G included in this version details minor
modifications that correct for an oversight in the previously-published proofs.
The remainder of the text reflects the published work.
| Paul Reverdy, Vaibhav Srivastava, Naomi E. Leonard | null | 1307.6134 | null | null |
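A hedged sketch of the deterministic UCL idea for the standard Gaussian bandit: keep a conjugate Gaussian posterior per arm and pull the arm with the largest upper credible limit. The credible-level schedule (1/(e·t) here) and the prior values are illustrative stand-ins for the paper's exact choices:

```python
import numpy as np
from scipy.stats import norm

def ucl_bandit(means, horizon, prior_mean=0.0, prior_var=100.0,
               noise_var=1.0, seed=0):
    """Upper-credible-limit sketch for Gaussian rewards."""
    rng = np.random.default_rng(seed)
    n = len(means)
    post_mean, post_var = np.full(n, prior_mean), np.full(n, prior_var)
    pulls = np.zeros(n, dtype=int)
    for t in range(1, horizon + 1):
        alpha = 1.0 / (np.e * t)          # credible level shrinks with t
        ucl = post_mean + np.sqrt(post_var) * norm.ppf(1.0 - alpha)
        arm = int(np.argmax(ucl))
        reward = means[arm] + np.sqrt(noise_var) * rng.standard_normal()
        # Conjugate Gaussian posterior update for the chosen arm.
        prec = 1.0 / post_var[arm] + 1.0 / noise_var
        post_mean[arm] = (post_mean[arm] / post_var[arm]
                          + reward / noise_var) / prec
        post_var[arm] = 1.0 / prec
        pulls[arm] += 1
    return pulls

print(ucl_bandit(np.array([0.0, 0.5, 1.0]), horizon=2000))  # best arm dominates
```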
Generative, Fully Bayesian, Gaussian, Openset Pattern Classifier | stat.ML cs.LG | This report works out the details of a closed-form, fully Bayesian,
multiclass, openset, generative pattern classifier using multivariate Gaussian
likelihoods, with conjugate priors. The generative model has a common
within-class covariance, which is proportional to the between-class covariance
in the conjugate prior. The scalar proportionality constant is the only plugin
parameter. All other model parameters are integrated out in closed form. An
expression is given for the model evidence, which can be used to make plugin
estimates for the proportionality constant. Pattern recognition is done via the
predictive likelihoods of classes for which training data is available, as well
as a predictive likelihood for any as-yet-unseen class.
| Niko Brummer | null | 1307.6143 | null | null |
Time-Series Classification Through Histograms of Symbolic Polynomials | cs.AI cs.DB cs.LG | Time-series classification has attracted considerable research attention due
to the various domains where time-series data are observed, ranging from
medicine to econometrics. Traditionally, the focus of time-series
classification has been on short time-series data composed of a unique pattern
with intraclass pattern distortions and variations, while recently there have
been attempts to focus on longer series composed of various local patterns.
This study presents a novel method which can detect local patterns in long
time-series via fitting local polynomial functions of arbitrary degrees. The
coefficients of the polynomial functions are converted to symbolic words via
equivolume discretizations of the coefficients' distributions. The symbolic
polynomial words enable the detection of similar local patterns by assigning
the same words to similar polynomials. Moreover, a histogram of the frequencies
of the words is constructed from each time-series' bag of words. Each row of
the histogram provides a new representation of the series and symbolizes the
existence of local patterns and their frequencies. Experimental evidence
demonstrates outstanding results of our method compared to the state-of-the-art
baselines, by exhibiting the best classification accuracies in all the datasets
and having statistically significant improvements in the absolute majority of
experiments.
| Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme | 10.1109/TKDE.2014.2377746 | 1307.6365 | null | null |
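A compact sketch of the pipeline the abstract describes: slide a window over the series, fit a polynomial, discretize the coefficients into equal-frequency ("equivolume") bins to obtain symbolic words, and histogram the words. The window length, degree, overlap, and bin count are illustrative, not the paper's settings:

```python
import numpy as np

def polynomial_word_histogram(series, window=20, degree=3, bins=4):
    """Bag-of-words sketch: polynomial coefficients of sliding windows,
    quantile-discretized per coefficient, counted as words."""
    x = np.arange(window)
    starts = range(0, len(series) - window + 1, window // 2)
    coeffs = np.array([np.polyfit(x, series[s:s + window], degree)
                       for s in starts])
    # Equivolume (equal-frequency) bin edges, one set per coefficient.
    edges = [np.quantile(coeffs[:, j], np.linspace(0, 1, bins + 1)[1:-1])
             for j in range(coeffs.shape[1])]
    hist = {}
    for row in coeffs:
        word = tuple(int(np.searchsorted(edges[j], c))
                     for j, c in enumerate(row))
        hist[word] = hist.get(word, 0) + 1
    return hist

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 400)
print(polynomial_word_histogram(np.sin(t) + 0.1 * rng.standard_normal(400)))
```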
Cluster Trees on Manifolds | stat.ML cs.LG | In this paper we investigate the problem of estimating the cluster tree for a
density $f$ supported on or near a smooth $d$-dimensional manifold $M$
isometrically embedded in $\mathbb{R}^D$. We analyze a modified version of a
$k$-nearest neighbor based algorithm recently proposed by Chaudhuri and
Dasgupta. The main results of this paper show that under mild assumptions on
$f$ and $M$, we obtain rates of convergence that depend on $d$ only but not on
the ambient dimension $D$. We also show that similar (albeit non-algorithmic)
results can be obtained for kernel density estimators. We sketch a construction
of a sample complexity lower bound instance for a natural class of manifold
oblivious clustering algorithms. We further briefly consider the known manifold
case and show that in this case a spatially adaptive algorithm achieves better
rates.
| Sivaraman Balakrishnan, Srivatsan Narayanan, Alessandro Rinaldo, Aarti
Singh, Larry Wasserman | null | 1307.6515 | null | null |
Does generalization performance of $l^q$ regularization learning depend
on $q$? A negative example | cs.LG stat.ML | $l^q$-regularization has been demonstrated to be an attractive technique in
machine learning and statistical modeling. It attempts to improve the
generalization (prediction) capability of a machine (model) through
appropriately shrinking its coefficients. The shape of a $l^q$ estimator
differs in varying choices of the regularization order $q$. In particular,
$l^1$ leads to the LASSO estimate, while $l^{2}$ corresponds to the smooth
ridge regression. This makes the order $q$ a potential tuning parameter in
applications. To facilitate the use of $l^{q}$-regularization, we intend to
seek a modeling strategy where an elaborate selection of $q$ is avoidable. In
this spirit, we place our investigation within a general
framework of $l^{q}$-regularized kernel learning under a sample dependent
hypothesis space (SDHS). For a designated class of kernel functions, we show
that all $l^{q}$ estimators for $0< q < \infty$ attain similar generalization
error bounds. These estimated bounds are almost optimal in the sense that up to
a logarithmic factor, the upper and lower bounds are asymptotically identical.
This finding tentatively reveals that, in some modeling contexts, the choice of
$q$ might not have a strong impact in terms of the generalization capability.
From this perspective, $q$ can be arbitrarily specified, or specified merely by
other non-generalization criteria such as smoothness, computational complexity,
sparsity, etc.
| Shaobo Lin, Chen Xu, Jingshan Zeng, Jian Fang | null | 1307.6616 | null | null |
Streaming Variational Bayes | stat.ML cs.LG | We present SDA-Bayes, a framework for (S)treaming, (D)istributed,
(A)synchronous computation of a Bayesian posterior. The framework makes
streaming updates to the estimated posterior according to a user-specified
approximation batch primitive. We demonstrate the usefulness of our framework,
with variational Bayes (VB) as the primitive, by fitting the latent Dirichlet
allocation model to two large-scale document collections. We demonstrate the
advantages of our algorithm over stochastic variational inference (SVI) by
comparing the two after a single pass through a known amount of data---a case
where SVI may be applied---and in the streaming setting, where SVI does not
apply.
| Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson,
Michael I. Jordan | null | 1307.6769 | null | null |
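The streaming recipe itself is generic: keep the posterior in an additive (natural-parameter-like) form and fold in each minibatch's approximate update, which also makes distributed, asynchronous merging a simple sum. A toy Beta-Bernoulli sketch in which the "approximation primitive" happens to be exact:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 1.0, 1.0                 # Beta(1, 1) prior as counts

def batch_update(batch):
    """Per-batch posterior increment; with VB as the primitive this
    would be the batch's variational parameters minus the prior's."""
    return batch.sum(), len(batch) - batch.sum()

for _ in range(100):                   # 100 minibatches stream past
    batch = rng.random(50) < 0.3       # Bernoulli(0.3) observations
    da, db = batch_update(batch)
    alpha, beta = alpha + da, beta + db

print(alpha / (alpha + beta))          # posterior mean, close to 0.3
```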
A Propound Method for the Improvement of Cluster Quality | cs.LG | In this paper, the Knockout Refinement Algorithm (KRA) is proposed to refine
original clusters obtained by applying the SOM and K-Means clustering
algorithms. KRA is based on contingency table concepts. Metrics are computed
for the original and refined clusters, and the quality of the original and
refined clusters is compared in terms of these metrics. The proposed algorithm
is tested in the
educational domain and results show that it generates better quality clusters
in terms of improved metric values.
| Shveta Kundra Bhatia, V.S. Dixit | null | 1307.6814 | null | null |
Sequential Transfer in Multi-armed Bandit with Finite Set of Models | stat.ML cs.LG | Learning from prior tasks and transferring that experience to improve future
performance is critical for building lifelong learning agents. Although results
in supervised and reinforcement learning show that transfer may significantly
improve the learning performance, most of the literature on transfer is focused
on batch learning tasks. In this paper we study the problem of
\textit{sequential transfer in online learning}, notably in the multi-armed
bandit framework, where the objective is to minimize the cumulative regret over
a sequence of tasks by incrementally transferring knowledge from prior tasks.
We introduce a novel bandit algorithm based on a method-of-moments approach for
the estimation of the possible tasks and derive regret bounds for it.
| Mohammad Gheshlaghi Azar and Alessandro Lazaric and Emma Brunskill | null | 1307.6887 | null | null |
Multi-view Laplacian Support Vector Machines | cs.LG stat.ML | We propose a new approach, multi-view Laplacian support vector machines
(SVMs), for semi-supervised learning under the multi-view scenario. It
integrates manifold regularization and multi-view regularization into the usual
formulation of SVMs and is a natural extension of SVMs from supervised learning
to multi-view semi-supervised learning. The function optimization problem in a
reproducing kernel Hilbert space is converted to an optimization in a
finite-dimensional Euclidean space. After providing a theoretical bound for the
generalization performance of the proposed method, we further give a
formulation of the empirical Rademacher complexity which affects the bound
significantly. From this bound and the empirical Rademacher complexity, we can
gain insights into the roles played by different regularization terms in the
generalization performance. Experimental results on synthetic and real-world
data sets are presented, which validate the effectiveness of the proposed
multi-view Laplacian SVMs approach.
| Shiliang Sun | null | 1307.7024 | null | null |
Infinite Mixtures of Multivariate Gaussian Processes | cs.LG stat.ML | This paper presents a new model called infinite mixtures of multivariate
Gaussian processes, which can be used to learn vector-valued functions and
applied to multitask learning. As an extension of the single multivariate
Gaussian process, the mixture model has the advantages of modeling multimodal
data and alleviating the computationally cubic complexity of the multivariate
Gaussian process. A Dirichlet process prior is adopted to allow the (possibly
infinite) number of mixture components to be automatically inferred from
training data, and Markov chain Monte Carlo sampling techniques are used for
parameter and latent variable inference. Preliminary experimental results on
multivariate regression show the feasibility of the proposed model.
| Shiliang Sun | null | 1307.7028 | null | null |
A Comprehensive Evaluation of Machine Learning Techniques for Cancer
Class Prediction Based on Microarray Data | cs.LG cs.CE | Prostate cancer is among the most common cancers in males and its
heterogeneity is well known. Its early detection helps in making therapeutic
decisions. There is no standard technique or procedure yet which is foolproof
in predicting cancer class. The genomic level changes can be detected in gene
expression data and those changes may serve as a standard model for any random
cancer data for class prediction. Various techniques, including machine
learning techniques, have been applied to the prostate cancer dataset in order
to accurately predict the cancer class. A huge number of attributes and a small
number of samples in microarray data lead to poor machine learning; therefore,
the most challenging part is attribute reduction, i.e. non-significant gene
reduction. In this work we
have compared several machine learning techniques for their accuracy in
predicting the cancer class. Machine learning is effective when the number of
samples is larger than the number of attributes (genes), which is rarely the
case with gene expression data. Attribute reduction or gene filtering is
absolutely required in order to make the data more meaningful as most of the
genes do not participate in tumor development and are irrelevant for cancer
prediction. Here we have applied a combination of statistical techniques, such
as the inter-quartile range and the t-test, which has been effective in
filtering significant genes and minimizing noise from the data. Further, we
have done a
comprehensive evaluation of ten state-of-the-art machine learning techniques
for their accuracy in class prediction of prostate cancer. Out of these
techniques, Bayes Network performed best with an accuracy of 94.11%, followed by
Naive Bayes with an accuracy of 91.17%. To cross-validate our results, we
modified our training dataset in six different ways and found that the average
sensitivity, specificity, precision and accuracy of Bayes Network are the
highest among all the techniques used.
| Khalid Raza, Atif N Hasan | 10.1504/IJBRA.2015.071940 | 1307.7050 | null | null |
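A hedged sketch of the two-stage statistical filter described above, combining an interquartile-range cut with a two-sample t-test; both thresholds are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.stats import ttest_ind

def filter_genes(X, y, iqr_quantile=0.5, alpha=0.01):
    """Keep genes whose IQR is above the median IQR (drops flat,
    uninformative genes), then keep those significant under a
    two-sample t-test between the classes."""
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    variable = (q3 - q1) > np.quantile(q3 - q1, iqr_quantile)
    _, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return variable & (pvals < alpha)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 500))          # 40 samples, 500 "genes"
y = np.repeat([0, 1], 20)
X[y == 1, :10] += 2.0                       # 10 truly differential genes
print(np.flatnonzero(filter_genes(X, y)))   # mostly indices 0..9
```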
MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth
Optimization | cs.LG math.OC | It is well known that the optimal convergence rate for stochastic
optimization of smooth functions is $O(1/\sqrt{T})$, which is the same as
stochastic optimization of Lipschitz continuous convex functions. This is in
contrast to optimizing smooth functions using full gradients, which yields a
convergence rate of $O(1/T^2)$. In this work, we consider a new setup for
optimizing smooth functions, termed as {\bf Mixed Optimization}, which allows
to access both a stochastic oracle and a full gradient oracle. Our goal is to
significantly improve the convergence rate of stochastic optimization of smooth
functions by having an additional small number of accesses to the full gradient
oracle. We show that, with $O(\ln T)$ calls to the full gradient oracle and
$O(T)$ calls to the stochastic oracle, the proposed mixed optimization
algorithm is able to achieve an optimization error of $O(1/T)$.
| Mehrdad Mahdavi and Rong Jin | null | 1307.7192 | null | null |
A Review of Machine Learning based Anomaly Detection Techniques | cs.LG cs.CR | Intrusion detection has been popular for the last two decades; an intrusion is
an attempt to break into or misuse a system. Detection is mainly of two types:
misuse or signature based detection, and anomaly detection. In this paper,
machine learning based methods, which form one type of anomaly detection
technique, are discussed.
| Harjinder Kaur, Gurpreet Singh, Jaspreet Minhas | null | 1307.7286 | null | null |
Learning to Understand by Evolving Theories | cs.LG cs.AI | In this paper, we describe an approach that enables an autonomous system to
infer the semantics of a command (i.e. a symbol sequence representing an
action) in terms of the relations between changes in the observations and the
action instances. We present a method of how to induce a theory (i.e. a
semantic description) of the meaning of a command in terms of a minimal set of
background knowledge. The only thing we have is a sequence of observations from
which we extract what kinds of effects were caused by performing the command.
In this way, we obtain a description of the semantics of the action and, hence, a
definition.
| Martin E. Mueller and Madhura D. Thosar | null | 1307.7303 | null | null |
Participation anticipating in elections using data mining methods | cs.CY cs.LG | Anticipating the political behavior of people will be of considerable help to
election candidates for assessing the possibility of their success and for
learning about the public's motivations for selecting them. In this paper, we
provide a general schematic of the architecture of a participation-anticipating
system for presidential elections using KNN, Classification Tree and Na\"ive
Bayes with the Orange tool, based on CRISP, which had promising output. To test
and assess the proposed model, we conduct a case study by selecting 100
qualified persons who attended the 11th presidential election of the Islamic
Republic of Iran and anticipate their participation in Kohkiloye & Boyerahmad.
We show that KNN performs the anticipation and classification processes with
higher accuracy than the two other algorithms for anticipating participation.
| Amin Babazadeh Sangar, Seyyed Reza Khaze, Laya Ebrahimi | null | 1307.7429 | null | null |
Data mining application for cyber space users tendency in blog writing:
a case study | cs.CY cs.LG | Blogs are a recently emerging medium which relies on information technology
and technological advances. Since the mass media in some less-developed and
developing countries are in government service and their policies are developed
based on governmental interests, blogs provide a space for ideas and exchanging
opinions. In this paper, we highlight simulations performed on information
obtained from 100 users and bloggers in Kohkiloye and Boyer Ahmad Province,
using the Weka 3.6 tool and the C4.5 algorithm, applying a decision tree with
more than 82% precision to anticipate users' future tendency toward blogging
and for use in strategic areas.
| Farhad Soleimanian Gharehchopogh, Seyyed Reza Khaze | null | 1307.7432 | null | null |
Safe Screening With Variational Inequalities and Its Application to
LASSO | cs.LG stat.ML | Sparse learning techniques have been routinely used for feature selection as
the resulting model usually has a small number of non-zero entries. Safe
screening, which eliminates the features that are guaranteed to have zero
coefficients for a certain value of the regularization parameter, is a
technique for improving the computational efficiency. Safe screening is gaining
increasing attention since 1) solving sparse learning formulations usually has
a high computational cost especially when the number of features is large and
2) one needs to try several regularization parameters to select a suitable
model. In this paper, we propose an approach called "Sasvi" (Safe screening
with variational inequalities). Sasvi makes use of the variational inequality
that provides the sufficient and necessary optimality condition for the dual
problem. Several existing approaches for Lasso screening can be cast as
relaxed versions of the proposed Sasvi; thus Sasvi provides a stronger safe
screening rule. We further study the monotone properties of Sasvi for Lasso,
based on which a sure removal regularization parameter can be identified for
each feature. Experimental results on both synthetic and real data sets are
reported to demonstrate the effectiveness of the proposed Sasvi for Lasso
screening.
| Jun Liu, Zheng Zhao, Jie Wang, Jieping Ye | null | 1307.7577 | null | null |
Multi-dimensional Parametric Mincuts for Constrained MAP Inference | cs.LG cs.AI | In this paper, we propose novel algorithms for inferring the Maximum a
Posteriori (MAP) solution of discrete pairwise random field models under
multiple constraints. We show how this constrained discrete optimization
problem can be formulated as a multi-dimensional parametric mincut problem via
its Lagrangian dual, and prove that our algorithm isolates all constraint
instances for which the problem can be solved exactly. These multiple solutions
enable us to even deal with `soft constraints' (higher order penalty
functions). Moreover, we propose two practical variants of our algorithm to
solve problems with hard constraints. We also show how our method can be
applied to solve various constrained discrete optimization problems such as
submodular minimization and shortest path computation. Experimental evaluation
using the foreground-background image segmentation problem with statistical
constraints reveals that our method is faster and its results are closer to the
ground truth labellings compared with the popular continuous relaxation based
methods.
| Yongsub Lim, Kyomin Jung, Pushmeet Kohli | null | 1307.7793 | null | null |
Protein (Multi-)Location Prediction: Using Location Inter-Dependencies
in a Probabilistic Framework | q-bio.QM cs.CE cs.LG q-bio.GN | Knowing the location of a protein within the cell is important for
understanding its function, role in biological processes, and potential use as
a drug target. Much progress has been made in developing computational methods
that predict single locations for proteins, assuming that proteins localize to
a single location. However, it has been shown that proteins localize to
multiple locations. While a few recent systems have attempted to predict
multiple locations of proteins, they typically treat locations as independent
or capture inter-dependencies by treating each locations-combination present in
the training set as an individual location-class. We present a new method and a
preliminary system we have developed that directly incorporates
inter-dependencies among locations into the multiple-location-prediction
process, using a collection of Bayesian network classifiers. We evaluate our
system on a dataset of single- and multi-localized proteins. Our results,
obtained by incorporating inter-dependencies, are significantly higher than
those obtained by classifiers that do not use inter-dependencies. The
performance of our system on multi-localized proteins is comparable to a top
performing system (YLoc+), without restricting predictions to be based only on
location-combinations present in the training set.
| Ramanuja Simha and Hagit Shatkay | null | 1307.7795 | null | null |
Scalable $k$-NN graph construction | cs.CV cs.LG stat.ML | The $k$-NN graph has played a central role in increasingly popular
data-driven techniques for various learning and vision tasks; yet, finding an
efficient and effective way to construct $k$-NN graphs remains a challenge,
especially for large-scale high-dimensional data. In this paper, we propose a
new approach to construct approximate $k$-NN graphs with emphasis on
efficiency and accuracy. We hierarchically and randomly divide the data points
into subsets and build an exact neighborhood graph over each subset, achieving
a base approximate neighborhood graph; we then repeat this process several
times to generate multiple neighborhood graphs, which are combined to yield a
more accurate approximate neighborhood graph. Furthermore, we propose a
neighborhood propagation scheme to further enhance the accuracy. We show both
theoretical and empirical accuracy and efficiency of our approach to $k$-NN
graph construction and demonstrate significant speed-up in dealing with large
scale visual data.
| Jingdong Wang, Jing Wang, Gang Zeng, Zhuowen Tu, Rui Gan, and Shipeng
Li | null | 1307.7852 | null | null |
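A simplified sketch of the divide-and-merge idea, using flat random splits instead of the paper's hierarchical division and omitting the neighborhood-propagation refinement: solve exact k-NN inside small random subsets, repeat, and keep the best neighbours seen:

```python
import numpy as np

def approx_knn_graph(X, k=5, repeats=8, leaf=50, seed=0):
    """Approximate k-NN graph: brute-force k-NN within random subsets,
    merged over several random repetitions (a sketch)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    best = {i: {} for i in range(n)}                 # i -> {j: distance}
    for _ in range(repeats):
        order = rng.permutation(n)
        for s in range(0, n, leaf):
            idx = order[s:s + leaf]
            D = np.linalg.norm(X[idx, None] - X[None, idx], axis=-1)
            for a, i in enumerate(idx):
                for b in np.argsort(D[a])[1:k + 1]:  # skip self at rank 0
                    best[i][idx[b]] = D[a, b]
    return {i: sorted(d, key=d.get)[:k] for i, d in best.items()}

X = np.random.default_rng(1).standard_normal((1000, 16))
print(approx_knn_graph(X)[0])   # approximate neighbours of point 0
```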
On the accuracy of the Viterbi alignment | stat.ME cs.LG stat.CO | In a hidden Markov model, the underlying Markov chain is usually hidden.
Often, the maximum likelihood alignment (Viterbi alignment) is used as its
estimate. Although it has the biggest likelihood, the Viterbi alignment can
behave very untypically by passing through states that are most unexpected. To
avoid such situations, the Viterbi alignment can be modified by forcing it not
to pass through these states. In this article, an iterative procedure for improving the
Viterbi alignment is proposed and studied. The iterative approach is compared
with a simple bunch approach where a number of states with low probability are
all replaced at the same time. It can be seen that the iterative way of
adjusting the Viterbi alignment is more efficient and it has several advantages
over the bunch approach. The same iterative algorithm for improving the Viterbi
alignment can be used in the case of peeping, that is when it is possible to
reveal hidden states. In addition, lower bounds for classification
probabilities of the Viterbi alignment under different conditions on the model
parameters are studied.
| Kristi Kuljus and J\"uri Lember | null | 1307.7948 | null | null |
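For reference, the object being adjusted is the standard Viterbi decoder; the proposed procedure then forbids the low-probability states the alignment passes through and re-decodes. A minimal log-space sketch of the baseline decoder, with toy model parameters:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Maximum likelihood state path of an HMM with initial
    distribution pi, transition matrix A and emission matrix B."""
    T, S = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)          # scores[i, j]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                    # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])                           # two hidden states
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])    # three output symbols
print(viterbi(pi, A, B, [0, 0, 2, 2, 1, 2]))
```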
Connecting Language and Knowledge Bases with Embedding Models for
Relation Extraction | cs.CL cs.IR cs.LG | This paper proposes a novel approach for relation extraction from free text
which is trained to jointly use information from the text and from existing
knowledge. Our model is based on two scoring functions that operate by learning
low-dimensional embeddings of words and of entities and relationships from a
knowledge base. We empirically show on New York Times articles aligned with
Freebase relations that our approach is able to efficiently use the extra
information provided by a large subset of Freebase data (4M entities, 23k
relationships) to improve over existing methods that rely on text features
alone.
| Jason Weston, Antoine Bordes, Oksana Yakhnenko, Nicolas Usunier | null | 1307.7973 | null | null |
Likelihood-ratio calibration using prior-weighted proper scoring rules | stat.ML cs.LG | Prior-weighted logistic regression has become a standard tool for calibration
in speaker recognition. Logistic regression is the optimization of the expected
value of the logarithmic scoring rule. We generalize this via a parametric
family of proper scoring rules. Our theoretical analysis shows how different
members of this family induce different relative weightings over a spectrum of
applications of which the decision thresholds range from low to high. Special
attention is given to the interaction between prior weighting and proper
scoring rule parameters. Experiments on NIST SRE'12 suggest that for
applications with low false-alarm rate requirements, scoring rules tailored to
emphasize higher score thresholds may give better accuracy than logistic
regression.
| Niko Br\"ummer and George Doddington | null | 1307.7981 | null | null |
Sharp Threshold for Multivariate Multi-Response Linear Regression via
Block Regularized Lasso | cs.LG stat.ML | In this paper, we investigate a multivariate multi-response (MVMR) linear
regression problem, which contains multiple linear regression models with
differently distributed design matrices, and different regression and output
vectors. The goal is to recover the support union of all regression vectors
using $l_1/l_2$-regularized Lasso. We characterize sufficient and necessary
conditions on sample complexity \emph{as a sharp threshold} to guarantee
successful recovery of the support union. Namely, if the sample size is above
the threshold, then $l_1/l_2$-regularized Lasso correctly recovers the support
union; and if the sample size is below the threshold, $l_1/l_2$-regularized
Lasso fails to recover the support union. In particular, the threshold
precisely captures the impact of the sparsity of regression vectors and the
statistical properties of the design matrices on sample complexity. Therefore,
the threshold function also captures the advantages of joint support union
recovery using multi-task Lasso over individual support recovery using
single-task Lasso.
| Weiguang Wang, Yingbin Liang, Eric P. Xing | null | 1307.7993 | null | null |
A Study on Classification in Imbalanced and Partially-Labelled Data
Streams | astro-ph.IM cs.LG | The domain of radio astronomy is currently facing significant computational
challenges, foremost amongst which are those posed by the development of the
world's largest radio telescope, the Square Kilometre Array (SKA). Preliminary
specifications for this instrument suggest that the final design will
incorporate between 2000 and 3000 individual 15 metre receiving dishes, which
together can be expected to produce a data rate of many TB/s. Given such a high
data rate, it becomes crucial to consider how this information will be
processed and stored to maximise its scientific utility. In this paper, we
consider one possible data processing scenario for the SKA, for the purposes of
an all-sky pulsar survey. In particular we treat the selection of promising
signals from the SKA processing pipeline as a data stream classification
problem. We consider the feasibility of classifying signals that arrive via an
unlabelled and heavily class imbalanced data stream, using currently available
algorithms and frameworks. Our results indicate that existing stream learners
exhibit unacceptably low recall on real astronomical data when used in standard
configuration; however, good false positive performance and accuracy comparable
to that of static learners suggest they have definite potential as an on-line
solution to this particular big data challenge.
| R. J. Lyon, J. M. Brooke, J. D. Knowles, B. W. Stappers | 10.1109/SMC.2013.260 | 1307.8012 | null | null |
Optimistic Concurrency Control for Distributed Unsupervised Learning | cs.LG cs.AI cs.DC | Research on distributed machine learning algorithms has focused primarily on
one of two extremes - algorithms that obey strict concurrency constraints or
algorithms that obey few or no such constraints. We consider an intermediate
alternative in which algorithms optimistically assume that conflicts are
unlikely and if conflicts do arise a conflict-resolution protocol is invoked.
We view this "optimistic concurrency control" paradigm as particularly
appropriate for large-scale machine learning algorithms, especially in the
unsupervised setting. We demonstrate our approach in three problem areas:
clustering, feature learning and online facility location. We evaluate our
methods via large-scale experiments in a cluster computing environment.
| Xinghao Pan, Joseph E. Gonzalez, Stefanie Jegelka, Tamara Broderick,
Michael I. Jordan | null | 1307.8049 | null | null |
DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering | stat.ME cs.LG stat.ML | The level set tree approach of Hartigan (1975) provides a probabilistically
based and highly interpretable encoding of the clustering behavior of a
dataset. By representing the hierarchy of data modes as a dendrogram of the
level sets of a density estimator, this approach offers many advantages for
exploratory analysis and clustering, especially for complex and
high-dimensional data. Several R packages exist for level set tree estimation,
but their practical usefulness is limited by computational inefficiency,
absence of interactive graphical capabilities and, from a theoretical
perspective, reliance on asymptotic approximations. To make it easier for
practitioners to capture the advantages of level set trees, we have written the
Python package DeBaCl for DEnsity-BAsed CLustering. In this article we
illustrate how DeBaCl's level set tree estimates can be used for difficult
clustering tasks and interactive graphical data analysis. The package is
intended to promote the practical use of level set trees through improvements
in computational efficiency and a high degree of user customization. In
addition, the flexible algorithms implemented in DeBaCl enjoy finite sample
accuracy, as demonstrated in recent literature on density clustering. Finally,
we show the level set tree framework can be easily extended to deal with
functional data.
| Brian P. Kent, Alessandro Rinaldo, Timothy Verstynen | null | 1307.8136 | null | null |
Towards Minimax Online Learning with Unknown Time Horizon | cs.LG | We consider online learning when the time horizon is unknown. We apply a
minimax analysis, beginning with the fixed horizon case, and then moving on to
two unknown-horizon settings, one that assumes the horizon is chosen randomly
according to some known distribution, and the other which allows the adversary
full control over the horizon. For the random horizon setting with restricted
losses, we derive a fully optimal minimax algorithm. And for the adversarial
horizon setting, we prove a nontrivial lower bound which shows that the
adversary obtains strictly more power than when the horizon is fixed and known.
Based on the minimax solution of the random horizon setting, we then propose a
new adaptive algorithm which "pretends" that the horizon is drawn from a
distribution from a special family, but no matter how the actual horizon is
chosen, the worst-case regret is of the optimal rate. Furthermore, our
algorithm can be combined and applied in many ways, for instance, to online
convex optimization, follow the perturbed leader, exponential weights algorithm
and first order bounds. Experiments show that our algorithm outperforms many
other existing algorithms in an online linear optimization setting.
| Haipeng Luo and Robert E. Schapire | null | 1307.8187 | null | null |
The Planning-ahead SMO Algorithm | cs.LG | The sequential minimal optimization (SMO) algorithm and variants thereof are
the de facto standard method for solving large quadratic programs for support
vector machine (SVM) training. In this paper we propose a simple yet powerful
modification. The main emphasis is on an algorithm improving the SMO step size
by planning-ahead. The theoretical analysis ensures its convergence to the
optimum. Experiments involving a large number of datasets were carried out to
demonstrate the superiority of the new algorithm.
| Tobias Glasmachers | null | 1307.8305 | null | null |
The Power of Localization for Efficiently Learning Linear Separators
with Noise | cs.LG cs.CC cs.DS stat.ML | We introduce a new approach for designing computationally efficient learning
algorithms that are tolerant to noise, and demonstrate its effectiveness by
designing algorithms with improved noise tolerance guarantees for learning
linear separators.
We consider both the malicious noise model and the adversarial label noise
model. For malicious noise, where the adversary can corrupt both the label and
the features, we provide a polynomial-time algorithm for learning linear
separators in $\Re^d$ under isotropic log-concave distributions that can
tolerate a nearly information-theoretically optimal noise rate of $\eta =
\Omega(\epsilon)$. For the adversarial label noise model, where the
distribution over the feature vectors is unchanged, and the overall probability
of a noisy label is constrained to be at most $\eta$, we also give a
polynomial-time algorithm for learning linear separators in $\Re^d$ under
isotropic log-concave distributions that can handle a noise rate of $\eta =
\Omega\left(\epsilon\right)$.
We show that, in the active learning model, our algorithms achieve a label
complexity whose dependence on the error parameter $\epsilon$ is
polylogarithmic. This provides the first polynomial-time active learning
algorithm for learning linear separators in the presence of malicious noise or
adversarial label noise.
| Pranjal Awasthi, Maria Florina Balcan, Philip M. Long | null | 1307.8371 | null | null |
Fast Simultaneous Training of Generalized Linear Models (FaSTGLZ) | cs.LG stat.ML | We present an efficient algorithm for simultaneously training sparse
generalized linear models across many related problems, which may arise from
bootstrapping, cross-validation and nonparametric permutation testing. Our
approach leverages the redundancies across problems to obtain significant
computational improvements relative to solving the problems sequentially by a
conventional algorithm. We demonstrate our fast simultaneous training of
generalized linear models (FaSTGLZ) algorithm on a number of real-world
datasets, and we run otherwise computationally intensive bootstrapping and
permutation test analyses that are typically necessary for obtaining
statistically rigorous classification results and meaningful interpretation.
Code is freely available at http://liinc.bme.columbia.edu/fastglz.
| Bryan R. Conroy, Jennifer M. Walz, Brian Cheung, Paul Sajda | null | 1307.8430 | null | null |
A Time and Space Efficient Junction Tree Architecture | cs.AI cs.LG | The junction tree algorithm is a way of computing marginals of boolean
multivariate probability distributions that factorise over sets of random
variables. The junction tree algorithm first constructs a tree called a
junction tree whose vertices are sets of random variables. The algorithm then
performs a generalised version of belief propagation on the junction tree. The
Shafer-Shenoy and Hugin architectures are two ways to perform this belief
propagation that trade off time and space complexities in different ways: Hugin
propagation is at least as fast as Shafer-Shenoy propagation and in the cases
that we have large vertices of high degree is significantly faster. However,
this speed increase comes at the cost of an increased space complexity. This
paper first introduces a simple novel architecture, ARCH-1, which has the best
of both worlds: the speed of Hugin propagation and the low space requirements
of Shafer-Shenoy propagation. A more complicated novel architecture, ARCH-2, is
then introduced which has, up to a factor only linear in the maximum
cardinality of any vertex, time and space complexities at least as good as
ARCH-1 and in the cases that we have large vertices of high degree is
significantly faster than ARCH-1.
| Stephen Pasteris | null | 1308.0187 | null | null |
An Enhanced Features Extractor for a Portfolio of Constraint Solvers | cs.AI cs.LG | Recent research has shown that a single arbitrarily efficient solver can be
significantly outperformed by a portfolio of possibly slower on-average
solvers. The solver selection is usually done by means of (un)supervised
learning techniques which exploit features extracted from the problem
specification. In this paper we present a useful and flexible framework that
is able to extract an extensive set of features from a Constraint
(Satisfaction/Optimization) Problem defined in possibly different modeling
languages: MiniZinc, FlatZinc or XCSP. We also report some empirical results
showing that the performance that can be obtained using these features is
effective and competitive with state-of-the-art CSP portfolio techniques.
| Roberto Amadini and Maurizio Gabbrielli and Jacopo Mauro | null | 1308.0227 | null | null |
Design and Development of an Expert System to Help Head of University
Departments | cs.AI cs.LG | One of the basic tasks for which the head of each university department is
responsible is assigning lecturers based on factors such as experience,
credentials, qualifications, etc. In this respect, to help the heads, some
automatic systems have been proposed using machine learning methods, decision
support systems (DSS), etc. Considering the advantages and disadvantages of
the previous methods, a fully automatic system is designed in this paper using
expert systems. The proposed system comprises two main steps. In the first,
the human experts' knowledge is encoded as decision trees. The second step is
an expert system which is evaluated using rules extracted from these decision
trees. Also, to improve the quality of the proposed system, a majority voting
algorithm is proposed as a post-processing step to choose, for each course,
the lecturer who satisfies the most experts' decision trees. The results show
that the designed system's average accuracy is 78.88%. Low computational
complexity and simplicity to program are some of the other advantages of the
proposed system.
| Shervan Fekri-Ershad, Hadi Tajalizadeh, Shahram Jafari | null | 1308.0356 | null | null |
Using Incomplete Information for Complete Weight Annotation of Road
Networks -- Extended Version | cs.LG cs.DB | We are witnessing increasing interest in the effective use of road networks.
For example, to enable effective vehicle routing, weighted-graph models of
transportation networks are used, where the weight of an edge captures some
cost associated with traversing the edge, e.g., greenhouse gas (GHG) emissions
or travel time. It is a precondition to using a graph model for routing that
all edges have weights. Weights that capture travel times and GHG emissions can
be extracted from GPS trajectory data collected from the network. However, GPS
trajectory data typically lack the coverage needed to assign weights to all
edges. This paper formulates and addresses the problem of annotating all edges
in a road network with travel cost based weights from a set of trips in the
network that cover only a small fraction of the edges, each with an associated
ground-truth travel cost. A general framework is proposed to solve the problem.
Specifically, the problem is modeled as a regression problem and solved by
minimizing a judiciously designed objective function that takes into account
the topology of the road network. In particular, the use of weighted PageRank
values of edges is explored for assigning appropriate weights to all edges, and
the property of directional adjacency of edges is also taken into account to
assign weights. Empirical studies with weights capturing travel time and GHG
emissions on two road networks (Skagen, Denmark, and North Jutland, Denmark)
offer insight into the design properties of the proposed techniques and offer
evidence that the techniques are effective.
| Bin Yang, Manohar Kaul, Christian S. Jensen | null | 1308.0484 | null | null |
Exploring The Contribution of Unlabeled Data in Financial Sentiment
Analysis | cs.CL cs.LG | With the proliferation of its applications in various industries, sentiment
analysis by using publicly available web data has become an active research
area in text classification in recent years. It is argued by researchers
that semi-supervised learning is an effective approach to this problem since it
is capable of mitigating the manual labeling effort, which is usually expensive
and time-consuming. However, there was a long-term debate on the effectiveness
of unlabeled data in text classification. This was partially caused by the fact
that many assumptions in theoretical analysis often do not hold in practice. We
argue that this problem may be further understood by adding an additional
dimension in the experiment. This allows us to address this problem in the
perspective of bias and variance in a broader view. We show that the well-known
performance degradation issue caused by unlabeled data can be reproduced as a
subset of the whole scenario. We argue that if the bias-variance trade-off is
to be better balanced by a more effective feature selection method, unlabeled
data is very likely to boost the classification performance. We then propose a
feature selection framework in which labeled and unlabeled training samples are
both considered. We discuss its potential in achieving such a balance. Besides,
the application in financial sentiment analysis is chosen because it not only
exemplifies an important application but the data also possesses better
illustrative power. The implications of this study for text classification and
financial sentiment analysis are both discussed.
| Jimmy SJ. Ren, Wei Wang, Jiawei Wang, Stephen Shaoyi Liao | null | 1308.0658 | null | null |
MonoStream: A Minimal-Hardware High Accuracy Device-free WLAN
Localization System | cs.NI cs.LG | Device-free (DF) localization is an emerging technology that allows the
detection and tracking of entities that do not carry any devices nor
participate actively in the localization process. Typically, DF systems require
a large number of transmitters and receivers to achieve acceptable accuracy,
which is not available in many scenarios such as homes and small businesses. In
this paper, we introduce MonoStream as an accurate single-stream DF
localization system that leverages the rich Channel State Information (CSI) as
well as MIMO information from the physical layer to provide accurate DF
localization with only one stream. To boost its accuracy and attain low
computational requirements, MonoStream models the DF localization problem as an
object recognition problem and uses a novel set of CSI-context features and
techniques with proven accuracy and efficiency. Experimental evaluation in two
typical testbeds, with a side-by-side comparison with the state-of-the-art,
shows that MonoStream can achieve an accuracy of 0.95m with at least 26%
enhancement in median distance error using a single stream only. This
enhancement in accuracy comes with an efficient execution of less than 23ms per
location update on a typical laptop. This highlights the potential of
MonoStream usage for real-time DF tracking applications.
| Ibrahim Sabek and Moustafa Youssef | null | 1308.0768 | null | null |
Trading USDCHF filtered by Gold dynamics via HMM coupling | stat.ML cs.LG | We devise a USDCHF trading strategy using the dynamics of gold as a filter.
Our strategy involves modelling both USDCHF and gold using a coupled hidden
Markov model (CHMM). The observations will be indicators, RSI and CCI, which
will be used as triggers for our trading signals. Upon decoding the model in
each iteration, we can get the next most probable state and the next most
probable observation. Hopefully by taking advantage of intermarket analysis and
the Markov property implicit in the model, trading with these most probable
values will produce profitable results.
| Donny Lee | null | 1308.0900 | null | null |
Fast Semidifferential-based Submodular Function Optimization | cs.DS cs.DM cs.LG | We present a practical and powerful new framework for both unconstrained and
constrained submodular function optimization based on discrete
semidifferentials (sub- and super-differentials). The resulting algorithms,
which repeatedly compute and then efficiently optimize submodular
semigradients, offer new methods and generalize many old ones for submodular
optimization. Our approach, moreover, takes steps towards providing a unifying
paradigm applicable to both submodular minimization and maximization,
problems that historically have been treated quite distinctly. The practicality
of our algorithms is important since interest in submodularity, owing to its
natural and wide applicability, has recently been in ascendance within machine
learning. We analyze theoretical properties of our algorithms for minimization
and maximization, and show that many state-of-the-art maximization algorithms
are special cases. Lastly, we complement our theoretical analyses with
supporting empirical experiments.
| Rishabh Iyer, Stefanie Jegelka and Jeff Bilmes | null | 1308.1006 | null | null |
Sign Stable Projections, Sign Cauchy Projections and Chi-Square Kernels | cs.LG cs.DS cs.IR | The method of stable random projections is popular for efficiently computing
the Lp distances in high dimension (where 0<p<=2), using small space. Because
it adopts nonadaptive linear projections, this method is naturally suitable
when the data are collected in a dynamic streaming fashion (i.e., turnstile
data streams). In this paper, we propose to use only the signs of the projected
data and analyze the probability of collision (i.e., when the two signs
differ). We derive a bound of the collision probability which is exact when p=2
and becomes less sharp when p moves away from 2. Interestingly, when p=1 (i.e.,
Cauchy random projections), we show that the probability of collision can be
accurately approximated as functions of the chi-square similarity. For example,
when the (un-normalized) data are binary, the maximum approximation error of
the collision probability is smaller than 0.0192. In text and vision
applications, the chi-square similarity is a popular measure for nonnegative
data when the features are generated from histograms. Our experiments confirm
that the proposed method is promising for large-scale learning applications.
| Ping Li, Gennady Samorodnitsky, John Hopcroft | null | 1308.1009 | null | null |
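As an illustration of the sign Cauchy projection idea above, the following
sketch compares the empirical sign-collision probability against the
chi-square similarity on synthetic nonnegative data; the data, dimensions,
and number of projections are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two nonnegative "histogram" vectors (hypothetical data).
u = rng.random(500)
v = 0.7 * u + 0.3 * rng.random(500)
u, v = u / u.sum(), v / v.sum()          # normalize to unit sum

# Chi-square similarity for nonnegative, unit-sum data.
chi2_sim = np.sum(2 * u * v / (u + v))

# Sign Cauchy projections: project with i.i.d. standard Cauchy entries
# and keep only the signs of the projected values.
k = 10_000                               # number of projections
R = rng.standard_cauchy((500, k))
collision = np.mean(np.sign(u @ R) != np.sign(v @ R))

print(f"chi-square similarity:               {chi2_sim:.4f}")
print(f"empirical sign-collision probability: {collision:.4f}")
```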
Coevolutionary networks of reinforcement-learning agents | cs.MA cs.LG nlin.AO | This paper presents a model of network formation in repeated games where the
players adapt their strategies and network ties simultaneously using a simple
reinforcement-learning scheme. It is demonstrated that the coevolutionary
dynamics of such systems can be described via coupled replicator equations. We
provide a comprehensive analysis for three-player two-action games, which is
the minimum system size with nontrivial structural dynamics. In particular, we
characterize the Nash equilibria (NE) in such games and examine the local
stability of the rest points corresponding to those equilibria. We also study
general n-player networks via both simulations and analytical methods and find
that in the absence of exploration, the stable equilibria consist of star
motifs as the main building blocks of the network. Furthermore, in all stable
equilibria the agents play pure strategies, even when the game allows mixed NE.
Finally, we study the impact of exploration on learning outcomes, and observe
that there is a critical exploration rate above which the symmetric and
uniformly connected network topology becomes stable.
| Ardeshir Kianercy and Aram Galstyan | 10.1103/PhysRevE.88.012815 | 1308.1049 | null | null |
Theoretical Issues for Global Cumulative Treatment Analysis (GCTA) | stat.AP cs.LG | Adaptive trials are now mainstream science. Recently, researchers have taken
the adaptive trial concept to its natural conclusion, proposing what we call
"Global Cumulative Treatment Analysis" (GCTA). Similar to the adaptive trial,
decision making and data collection and analysis in the GCTA are continuous and
integrated, and treatments are ranked in accord with the statistics of this
information, combined with what offers the most information gain. Where GCTA
differs from an adaptive trial, or, for that matter, from any trial design, is
that all patients are implicitly participants in the GCTA process, regardless
of whether they are formally enrolled in a trial. This paper discusses some of
the theoretical and practical issues that arise in the design of a GCTA, along
with some preliminary thoughts on how they might be approached.
| Jeff Shrager | null | 1308.1066 | null | null |
Empirical entropy, minimax regret and minimax risk | math.ST cs.LG stat.TH | We consider the random design regression model with square loss. We propose a
method that aggregates empirical minimizers (ERM) over appropriately chosen
random subsets and reduces to ERM in the extreme case, and we establish sharp
oracle inequalities for its risk. We show that, under the $\varepsilon^{-p}$
growth of the empirical $\varepsilon$-entropy, the excess risk of the proposed
method attains the rate $n^{-2/(2+p)}$ for $p\in(0,2)$ and $n^{-1/p}$ for $p>2$
where $n$ is the sample size. Furthermore, for $p\in(0,2)$, the excess risk
rate matches the behavior of the minimax risk of function estimation in
regression problems under the well-specified model. This yields a conclusion
that the rates of statistical estimation in well-specified models (minimax
risk) and in misspecified models (minimax regret) are equivalent in the regime
$p\in(0,2)$. In other words, for $p\in(0,2)$ the problem of statistical
learning enjoys the same minimax rate as the problem of statistical estimation.
On the contrary, for $p>2$ we show that the rates of the minimax regret are, in
general, slower than for the minimax risk. Our oracle inequalities also imply
the $v\log(n/v)/n$ rates for Vapnik-Chervonenkis type classes of dimension $v$
without the usual convexity assumption on the class; we show that these rates
are optimal. Finally, for a slightly modified method, we derive a bound on the
excess risk of $s$-sparse convex aggregation improving that of Lounici [Math.
Methods Statist. 16 (2007) 246-259] and providing the optimal rate.
| Alexander Rakhlin, Karthik Sridharan, Alexandre B. Tsybakov | 10.3150/14-BEJ679 | 1308.1147 | null | null |
Spatial-Aware Dictionary Learning for Hyperspectral Image Classification | cs.CV cs.LG | This paper presents a structured dictionary-based model for hyperspectral
data that incorporates both spectral and contextual characteristics of a
spectral sample, with the goal of hyperspectral image classification. The idea
is to partition the pixels of a hyperspectral image into a number of spatial
neighborhoods called contextual groups and to model each pixel with a linear
combination of a few dictionary elements learned from the data. Since pixels
inside a contextual group are often made up of the same materials, their linear
combinations are constrained to use common elements from the dictionary. To
this end, dictionary learning is carried out with a joint sparse regularizer to
induce a common sparsity pattern in the sparse coefficients of each contextual
group. The sparse coefficients are then used for classification using a linear
SVM. Experimental results on a number of real hyperspectral images confirm the
effectiveness of the proposed representation for hyperspectral image
classification. Moreover, experiments with simulated multispectral data show
that the proposed model is capable of finding representations that may
effectively be used for classification of multispectral-resolution samples.
| Ali Soltani-Farani, Hamid R. Rabiee, Seyyed Abbas Hosseini | null | 1308.1187 | null | null |
OFF-Set: One-pass Factorization of Feature Sets for Online
Recommendation in Persistent Cold Start Settings | cs.LG cs.IR | One of the most challenging recommendation tasks is recommending to a new,
previously unseen user. This is known as the 'user cold start' problem.
Assuming certain features or attributes of users are known, one approach for
handling new users is to initially model them based on their features.
Motivated by an ad targeting application, this paper describes an extreme
online recommendation setting where the cold start problem is perpetual. Every
user is encountered by the system just once, receives a recommendation, and
either consumes or ignores it, registering a binary reward.
We introduce One-pass Factorization of Feature Sets, OFF-Set, a novel
recommendation algorithm based on Latent Factor analysis, which models users by
mapping their features to a latent space. Furthermore, OFF-Set is able to model
non-linear interactions between pairs of features. OFF-Set is designed for
purely online recommendation, performing lightweight updates of its model per
each recommendation-reward observation. We evaluate OFF-Set against several
state of the art baselines, and demonstrate its superiority on real
ad-targeting data.
| Michal Aharon, Natalie Aizenberg, Edward Bortnikov, Ronny Lempel, Roi
Adadi, Tomer Benyamini, Liron Levin, Ran Roth, Ohad Serfaty | null | 1308.1792 | null | null |
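A rough sketch of the kind of online factorized model described above, in the
style of a factorization machine: a sigmoid score over first-order weights
plus pairwise interactions of latent feature vectors, updated once per
observed (features, reward) pair. The dimensions, learning rate, and update
rule are illustrative assumptions, not the paper's exact OFF-Set updates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, dim, lr = 1000, 8, 0.05

w = np.zeros(n_features)                           # first-order weights
V = 0.01 * rng.standard_normal((n_features, dim))  # latent feature factors

def predict(active):
    """Reward probability for a user described by its active feature ids."""
    score = w[active].sum()
    for a in range(len(active)):                   # pairwise interactions
        for b in range(a + 1, len(active)):
            score += V[active[a]] @ V[active[b]]
    return 1.0 / (1.0 + np.exp(-score))

def update(active, reward):
    """One lightweight SGD step on the log-loss for a binary reward."""
    g = predict(active) - reward                   # d(logloss)/d(score)
    old = V[active].copy()
    sums = old.sum(axis=0)
    for i, f in enumerate(active):
        w[f] -= lr * g
        V[f] -= lr * g * (sums - old[i])           # grad of pairwise terms

# One-pass stream: each user is seen exactly once (perpetual cold start).
for active, reward in [([3, 42, 7], 1), ([3, 99, 11], 0), ([42, 99, 5], 1)]:
    update(np.array(active), reward)
```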
Predicting protein contact map using evolutionary and physical
constraints by integer programming (extended version) | q-bio.QM cs.CE cs.LG math.OC q-bio.BM stat.ML | Motivation. Protein contact map describes the pairwise spatial and functional
relationship of residues in a protein and contains key information for protein
3D structure prediction. Although studied extensively, it remains very
challenging to predict contact map using only sequence information. Most
existing methods predict the contact map matrix element-by-element, ignoring
correlation among contacts and physical feasibility of the whole contact map. A
couple of recent methods predict contact map based upon residue co-evolution,
taking into consideration contact correlation and enforcing a sparsity
restraint, but these methods require a very large number of sequence homologs
for the protein under consideration and the resultant contact map may be still
physically unfavorable.
Results. This paper presents a novel method PhyCMAP for contact map
prediction, integrating both evolutionary and physical restraints by machine
learning and integer linear programming (ILP). The evolutionary restraints
include sequence profile, residue co-evolution and context-specific statistical
potential. The physical restraints specify more concrete relationship among
contacts than the sparsity restraint. As such, our method greatly reduces the
solution space of the contact map matrix and thus, significantly improves
prediction accuracy. Experimental results confirm that PhyCMAP outperforms
currently popular methods no matter how many sequence homologs are available
for the protein under consideration. PhyCMAP can predict contacts within
minutes after PSIBLAST search for sequence homologs is done, much faster than
the two recent methods PSICOV and EvFold.
See http://raptorx.uchicago.edu for the web server.
| Zhiyong Wang and Jinbo Xu | 10.1093/bioinformatics/btt211 | 1308.1975 | null | null |
Coding for Random Projections | cs.LG cs.DS cs.IT math.IT stat.CO | The method of random projections has become very popular for large-scale
applications in statistical learning, information retrieval, bio-informatics
and other applications. Using a well-designed coding scheme for the projected
data, which determines the number of bits needed for each projected value and
how to allocate these bits, can significantly improve the effectiveness of the
algorithm, in storage cost as well as computational speed. In this paper, we
study a number of simple coding schemes, focusing on the task of similarity
estimation and on an application to training linear classifiers. We demonstrate
that uniform quantization outperforms the standard existing influential method
(Datar et al., 2004). Indeed, we argue that in many cases coding with just a
small number of bits suffices. Furthermore, we also develop a non-uniform 2-bit
coding scheme that generally performs well in practice, as confirmed by our
experiments on training linear support vector machines (SVM).
| Ping Li, Michael Mitzenmacher, Anshumali Shrivastava | null | 1308.2218 | null | null |
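A sketch of the uniform quantization idea on Gaussian random projections; the
bin width w below is an arbitrary placeholder (choosing it well is exactly
what the paper studies), and the simple plug-in estimate is slightly biased
by the quantization noise:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, w = 500, 5000, 1.0                 # dimension, #projections, bin width

x = rng.standard_normal(d)
y = 0.8 * x + 0.6 * rng.standard_normal(d)
x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)

R = rng.standard_normal((d, k))          # Gaussian random projections
qx = w * (np.floor((x @ R) / w) + 0.5)   # uniform quantizer of width w
qy = w * (np.floor((y @ R) / w) + 0.5)   # (mid-point reconstruction)

print("true cosine similarity:       ", x @ y)
print("estimate from quantized codes:", (qx @ qy) / k)
```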
High-Dimensional Regression with Gaussian Mixtures and Partially-Latent
Response Variables | cs.LG stat.ML | In this work we address the problem of approximating high-dimensional data
with a low-dimensional representation. We make the following contributions. We
propose an inverse regression method which exchanges the roles of input and
response, such that the low-dimensional variable becomes the regressor, and
which is tractable. We introduce a mixture of locally-linear probabilistic
mapping model that starts with estimating the parameters of inverse regression,
and follows with inferring closed-form solutions for the forward parameters of
the high-dimensional regression problem of interest. Moreover, we introduce a
partially-latent paradigm, such that the vector-valued response variable is
composed of both observed and latent entries, thus being able to deal with data
contaminated by experimental artifacts that cannot be explained with noise
models. The proposed probabilistic formulation could be viewed as a
latent-variable augmentation of regression. We devise expectation-maximization
(EM) procedures based on a data augmentation strategy which facilitates the
maximum-likelihood search over the model parameters. We propose two
augmentation schemes and we describe in detail the associated EM inference
procedures that may well be viewed as generalizations of a number of EM
regression, dimension reduction, and factor analysis algorithms. The proposed
framework is validated with both synthetic and real data. We provide
experimental evidence that our method outperforms several existing regression
techniques.
| Antoine Deleforge and Florence Forbes and Radu Horaud | 10.1007/s11222-014-9461-5 | 1308.2302 | null | null |
Learning Features and their Transformations by Spatial and Temporal
Spherical Clustering | cs.NE cs.AI cs.CV cs.LG q-bio.NC | Learning features invariant to arbitrary transformations in the data is a
requirement for any recognition system, biological or artificial. It is now
widely accepted that simple cells in the primary visual cortex respond to
features while the complex cells respond to features invariant to different
transformations. We present a novel two-layered feedforward neural model that
learns features in the first layer by spatial spherical clustering and
invariance to transformations in the second layer by temporal spherical
clustering. Learning occurs in an online and unsupervised manner following the
Hebbian rule. When exposed to natural videos acquired by a camera mounted on a
cat's head, the first and second layer neurons in our model develop simple and
complex cell-like receptive field properties. The model learns lateral
connections among the first-layer neurons, which enable it to make
predictions. A topographic map of their spatial features emerges by
exponentially decaying the flow of activation with distance between
first-layer neurons that fire in close temporal proximity, thereby minimizing
the pooling length online, simultaneously with feature learning.
| Jayanta K. Dutta, Bonny Banerjee | null | 1308.2350 | null | null |
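A minimal sketch of the spatial spherical clustering rule described above:
online winner-take-all updates that keep each unit's feature vector on the
unit sphere. The layer size, learning rate, and random input stream are
placeholders, and the temporal clustering of the second layer is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim, lr = 16, 64, 0.1

# Feature vectors (weights) of first-layer units, kept on the unit sphere.
W = rng.standard_normal((n_units, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def present(x):
    """One online spherical-clustering step with a Hebbian-style rule:
    the unit most aligned with the normalized input moves toward it and
    is renormalized back onto the sphere."""
    x = x / np.linalg.norm(x)
    winner = int(np.argmax(W @ x))      # cosine similarity = dot product
    W[winner] += lr * x                 # Hebbian update toward the input
    W[winner] /= np.linalg.norm(W[winner])
    return winner

# Stream of (hypothetical) input patches, presented one at a time.
for _ in range(1000):
    present(rng.standard_normal(dim))
```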
KL-based Control of the Learning Schedule for Surrogate Black-Box
Optimization | cs.LG cs.AI stat.ML | This paper investigates the control of an ML component within the Covariance
Matrix Adaptation Evolution Strategy (CMA-ES) devoted to black-box
optimization. The known CMA-ES weakness is its sample complexity, the number of
evaluations of the objective function needed to approximate the global optimum.
This weakness is commonly addressed through surrogate optimization, learning an
estimate of the objective function a.k.a. surrogate model, and replacing most
evaluations of the true objective function with the (inexpensive) evaluation of
the surrogate model. This paper presents a principled control of the learning
schedule (when to relearn the surrogate model), based on the Kullback-Leibler
divergence of the current search distribution and the training distribution of
the former surrogate model. The experimental validation of the proposed
approach shows significant performance gains on a comprehensive set of
ill-conditioned benchmark problems, compared to the best state of the art
including the quasi-Newton high-precision BFGS method.
| Ilya Loshchilov (LIS), Marc Schoenauer (INRIA Saclay - Ile de France,
LRI), Mich\`ele Sebag (LRI) | null | 1308.2655 | null | null |
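The trigger described above needs the KL divergence between two Gaussian
search distributions, which is available in closed form; the threshold rule
below is a hypothetical placeholder, not the paper's exact criterion:

```python
import numpy as np

def kl_gaussians(m0, S0, m1, S1):
    """Closed-form KL( N(m0, S0) || N(m1, S1) )."""
    d = len(m0)
    S1_inv = np.linalg.inv(S1)
    diff = m1 - m0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff
                  - d + logdet1 - logdet0)

def should_relearn(mean_now, cov_now, mean_train, cov_train, threshold=1.0):
    """Hypothetical trigger: relearn the surrogate once the current CMA-ES
    search distribution drifts too far (in KL) from the distribution the
    surrogate was trained under. The threshold is a placeholder."""
    return kl_gaussians(mean_now, cov_now, mean_train, cov_train) > threshold
```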
When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor
Tucker Decompositions with Structured Sparsity | cs.LG cs.IR math.NA math.ST stat.ML stat.TH | Overcomplete latent representations have been very popular for unsupervised
feature learning in recent years. In this paper, we specify which overcomplete
models can be identified given observable moments of a certain order. We
consider probabilistic admixture or topic models in the overcomplete regime,
where the number of latent topics can greatly exceed the size of the observed
word vocabulary. While general overcomplete topic models are not identifiable,
we establish generic identifiability under a constraint, referred to as topic
persistence. Our sufficient conditions for identifiability involve a novel set
of "higher order" expansion conditions on the topic-word matrix or the
population structure of the model. This set of higher-order expansion
conditions allows for overcomplete models, and requires the existence of a
perfect matching from latent topics to higher order observed words. We
establish that random structured topic models are identifiable w.h.p. in the
overcomplete regime. Our identifiability results allow for general
(non-degenerate) distributions for modeling the topic proportions, and thus, we
can handle arbitrarily correlated topics in our framework. Our identifiability
results imply uniqueness of a class of tensor decompositions with structured
sparsity which is contained in the class of Tucker decompositions, but is more
general than the Candecomp/Parafac (CP) decomposition.
| Animashree Anandkumar, Daniel Hsu, Majid Janzamin, Sham Kakade | null | 1308.2853 | null | null |
Composite Self-Concordant Minimization | stat.ML cs.LG math.OC | We propose a variable metric framework for minimizing the sum of a
self-concordant function and a possibly non-smooth convex function, endowed
with an easily computable proximal operator. We theoretically establish the
convergence of our framework without relying on the usual Lipschitz gradient
assumption on the smooth part. An important highlight of our work is a new set
of analytic step-size selection and correction procedures based on the
structure of the problem. We describe concrete algorithmic instances of our
framework for several interesting applications and demonstrate them numerically
on both synthetic and real data.
| Quoc Tran-Dinh, Anastasios Kyrillidis and Volkan Cevher | null | 1308.2867 | null | null |
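For context, a standard proximal-gradient baseline for one instance of this
problem class, min f(x) + lambda*||x||_1; the paper's contribution replaces
the fixed step size below with analytic step-size selection and correction
exploiting self-concordance. The quadratic f here is not self-concordant in
general and is chosen only to keep the sketch runnable:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, x0, lam, step, iters=2000):
    """Minimize f(x) + lam*||x||_1 with constant-step proximal gradient."""
    x = x0.copy()
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

# Toy smooth part: f(x) = 0.5*||Ax - b||^2.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 20)), rng.standard_normal(50)
x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                          np.zeros(20), lam=0.5, step=5e-3)
```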
Multiclass learnability and the ERM principle | cs.LG | We study the sample complexity of multiclass prediction in several learning
settings. For the PAC setting our analysis reveals a surprising phenomenon: In
sharp contrast to binary classification, we show that there exist multiclass
hypothesis classes for which some Empirical Risk Minimizers (ERM learners) have
lower sample complexity than others. Furthermore, there are classes that are
learnable by some ERM learners, while other ERM learners will fail to learn
them. We propose a principle for designing good ERM learners, and use this
principle to prove tight bounds on the sample complexity of learning {\em
symmetric} multiclass hypothesis classes---classes that are invariant under
permutations of label names. We further provide a characterization of mistake
and regret bounds for multiclass learning in the online setting and the bandit
setting, using new generalizations of Littlestone's dimension.
| Amit Daniely and Sivan Sabato and Shai Ben-David and Shai
Shalev-Shwartz | null | 1308.2893 | null | null |
Compact Relaxations for MAP Inference in Pairwise MRFs with Piecewise
Linear Priors | cs.CV cs.LG stat.ML | Label assignment problems with large state spaces are important tasks
especially in computer vision. Often the pairwise interaction (or smoothness
prior) between labels assigned at adjacent nodes (or pixels) can be described
as a function of the label difference. Exact inference in such labeling tasks
is still difficult, and therefore approximate inference methods based on a
linear programming (LP) relaxation are commonly used in practice. In this work
we study how compact linear programs can be constructed for general piecewise
linear smoothness priors. The number of unknowns is $O(LK)$ per pairwise clique
in terms of the state space size $L$ and the number of linear segments $K$.
This compares to an $O(L^2)$ size complexity of the standard LP relaxation if the
piecewise linear structure is ignored. Our compact construction and the
standard LP relaxation are equivalent and lead to the same (approximate) label
assignment.
| Christopher Zach and Christian H\"ane | null | 1308.3101 | null | null |
Normalized Google Distance of Multisets with Applications | cs.IR cs.LG | Normalized Google distance (NGD) is a relative semantic distance based on the
World Wide Web (or any other large electronic database, for instance Wikipedia)
and a search engine that returns aggregate page counts. The earlier NGD between
pairs of search terms (including phrases) is not sufficient for all
applications. We propose an NGD of finite multisets of search terms that is
better for many applications. This gives a relative semantics shared by a
multiset of search terms. We give applications and compare the results with
those obtained using the pairwise NGD. The derivation of the NGD method is based on
Kolmogorov complexity.
| Andrew R. Cohen (Dept Electrical and Comput. Engin., Drexel Univ.),
P.M.B. Vitanyi (CWI and Comput. Sci., Univ. Amsterdam) | null | 1308.3177 | null | null |
The algorithm of noisy k-means | stat.ML cs.LG | In this note, we introduce a new algorithm to deal with finite dimensional
clustering with errors in variables. The design of this algorithm is based on
recent theoretical advances (see Loustau (2013a,b)) in statistical learning
with errors in variables. As in the previously mentioned papers, the algorithm
mixes tools from the inverse problem literature and the machine learning
community. Roughly, it is based on a two-step procedure: (1) a deconvolution
step to deal with noisy inputs, and (2) Newton-type iterations as in the
popular k-means.
| Camille Brunet (LAREMA), S\'ebastien Loustau (LAREMA) | null | 1308.3314 | null | null |
High dimensional Sparse Gaussian Graphical Mixture Model | stat.ML cs.LG | This paper considers the problem of networks reconstruction from
heterogeneous data using a Gaussian Graphical Mixture Model (GGMM). It is well
known that parameter estimation in this context is challenging due to large
numbers of variables coupled with the degeneracy of the likelihood. We propose
as a solution a penalized maximum likelihood technique by imposing an $l_{1}$
penalty on the precision matrix. Our approach shrinks the parameters thereby
resulting in better identifiability and variable selection. We use the
Expectation Maximization (EM) algorithm which involves the graphical LASSO to
estimate the mixing coefficients and the precision matrices. We show that under
certain regularity conditions the Penalized Maximum Likelihood (PML) estimates
are consistent. We demonstrate the performance of the PML estimator through
simulations and we show the utility of our method for high dimensional data
analysis in a genomic application.
| Anani Lotsi and Ernst Wit | null | 1308.3381 | null | null |
Axioms for graph clustering quality functions | cs.CV cs.LG stat.ML | We investigate properties that intuitively ought to be satisfied by graph
clustering quality functions, that is, functions that assign a score to a
clustering of a graph. Graph clustering, also known as network community
detection, is often performed by optimizing such a function. Two axioms
tailored for graph clustering quality functions are introduced, and the four
axioms introduced in previous work on distance based clustering are
reformulated and generalized for the graph setting. We show that modularity, a
standard quality function for graph clustering, does not satisfy all of these
six properties. This motivates the derivation of a new family of quality
functions, adaptive scale modularity, which does satisfy the proposed axioms.
Adaptive scale modularity has two parameters, which give greater flexibility in
the kinds of clusterings that can be found. Standard graph clustering quality
functions, such as normalized cut and unnormalized cut, are obtained as special
cases of adaptive scale modularity.
In general, the results of our investigation indicate that the considered
axiomatic framework covers existing `good' quality functions for graph
clustering, and can be used to derive an interesting new family of quality
functions.
| Twan van Laarhoven, Elena Marchiori | null | 1308.3383 | null | null |
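For reference, the standard (Newman-Girvan) modularity that the paper
examines can be computed directly from the adjacency matrix:

```python
import numpy as np

def modularity(A, labels):
    """Modularity of a clustering, from the adjacency matrix:
    Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * [c_i == c_j]."""
    k = A.sum(axis=1)                  # degrees
    two_m = k.sum()                    # 2 * number of edges (unweighted A)
    same = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single edge, with the natural 2-clustering.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # 5/14 ~= 0.357
```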
Estimating or Propagating Gradients Through Stochastic Neurons for
Conditional Computation | cs.LG | Stochastic neurons and hard non-linearities can be useful for a number of
reasons in deep learning models, but in many cases they pose a challenging
problem: how to estimate the gradient of a loss function with respect to the
input of such stochastic or non-smooth neurons? I.e., can we "back-propagate"
through these stochastic neurons? We examine this question, existing
approaches, and compare four families of solutions, applicable in different
settings. One of them is the minimum variance unbiased gradient estimator for
stochastic binary neurons (a special case of the REINFORCE algorithm). A second
approach, introduced here, decomposes the operation of a binary stochastic
neuron into a stochastic binary part and a smooth differentiable part, which
approximates the expected effect of the pure stochastic binary neuron to first
order. A third approach involves the injection of additive or multiplicative
noise in a computational graph that is otherwise differentiable. A fourth
approach heuristically copies the gradient with respect to the stochastic
output directly as an estimator of the gradient with respect to the sigmoid
argument (we call this the straight-through estimator). To explore a context
where these estimators are useful, we consider a small-scale version of {\em
conditional computation}, where sparse stochastic units form a distributed
representation of gaters that can turn off in combinatorially many ways large
chunks of the computation performed in the rest of the neural network. In this
case, it is important that the gating units produce an actual 0 most of the
time. The resulting sparsity can potentially be exploited to greatly reduce
the computational cost of large deep networks for which conditional computation
would be useful.
| Yoshua Bengio, Nicholas L\'eonard and Aaron Courville | null | 1308.3432 | null | null |
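The fourth (straight-through) estimator is easy to express in a modern
autodiff framework; a PyTorch sketch, with a toy placeholder loss. Note that
here the gradient is copied with respect to the sigmoid output and then
chained through the sigmoid automatically:

```python
import torch

def stochastic_binary_st(p):
    """Stochastic binary neuron with the straight-through estimator: the
    forward pass samples a hard 0/1 value; the backward pass copies the
    gradient through as if sampling were the identity in p."""
    hard = torch.bernoulli(p)
    # (hard - p).detach() + p equals `hard` in the forward pass, but its
    # gradient w.r.t. p is 1 -- the straight-through trick.
    return (hard - p).detach() + p

a = torch.randn(4, requires_grad=True)
p = torch.sigmoid(a)
h = stochastic_binary_st(p)          # hard 0/1 gating units
loss = ((h - 1.0) ** 2).sum()        # toy loss
loss.backward()                      # gradients flow "through" the sampling
print(h, a.grad)
```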
Computational Rationalization: The Inverse Equilibrium Problem | cs.GT cs.LG stat.ML | Modeling the purposeful behavior of imperfect agents from a small number of
observations is a challenging task. When restricted to the single-agent
decision-theoretic setting, inverse optimal control techniques assume that
observed behavior is an approximately optimal solution to an unknown decision
problem. These techniques learn a utility function that explains the example
behavior and can then be used to accurately predict or imitate future behavior
in similar observed or unobserved situations.
In this work, we consider similar tasks in competitive and cooperative
multi-agent domains. Here, unlike single-agent settings, a player cannot
myopically maximize its reward; it must speculate on how the other agents may
act to influence the game's outcome. Employing the game-theoretic notion of
regret and the principle of maximum entropy, we introduce a technique for
predicting and generalizing behavior.
| Kevin Waugh and Brian D. Ziebart and J. Andrew Bagnell | null | 1308.3506 | null | null |
Stochastic Optimization for Machine Learning | cs.LG | It has been found that stochastic algorithms often find good solutions much
more rapidly than inherently-batch approaches. Indeed, a very useful rule of
thumb is that, when solving a machine learning problem, an iterative
technique which relies on performing a very large number of
relatively-inexpensive updates will often outperform one which performs a
smaller number of much "smarter" but computationally-expensive updates.
In this thesis, we will consider the application of stochastic algorithms to
two of the most important machine learning problems. Part i is concerned with
the supervised problem of binary classification using kernelized linear
classifiers, for which the data have labels belonging to exactly two classes
(e.g. "has cancer" or "doesn't have cancer"), and the learning problem is to
find a linear classifier which is best at predicting the label. In Part ii, we
will consider the unsupervised problem of Principal Component Analysis, for
which the learning task is to find the directions which contain most of the
variance of the data distribution.
Our goal is to present stochastic algorithms for both problems which are,
above all, practical--they work well on real-world data, in some cases better
than all known competing algorithms. A secondary, but still very important,
goal is to derive theoretical bounds on the performance of these algorithms
which are at least competitive with, and often better than, those known for
other approaches.
| Andrew Cotter | null | 1308.3509 | null | null |
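A classic example of the "many cheap updates" strategy discussed above is a
Pegasos-style stochastic subgradient method for the linear SVM; the sketch
below is a generic textbook version, not necessarily the thesis's algorithm:

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=5, seed=0):
    """Stochastic subgradient descent for a linear SVM with hinge loss
    and l2 regularization. Labels y must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # decaying step size
            margin = y[i] * (w @ X[i])
            w *= (1 - eta * lam)             # regularizer gradient step
            if margin < 1:                   # hinge-loss subgradient
                w += eta * y[i] * X[i]
    return w

# Toy linearly separable data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]))
w = pegasos_svm(X, y)
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```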
Hidden Parameter Markov Decision Processes: A Semiparametric Regression
Approach for Discovering Latent Task Parametrizations | cs.LG cs.AI | Control applications often feature tasks with similar, but not identical,
dynamics. We introduce the Hidden Parameter Markov Decision Process (HiP-MDP),
a framework that parametrizes a family of related dynamical systems with a
low-dimensional set of latent factors, and introduce a semiparametric
regression approach for learning its structure from data. In the control
setting, we show that a learned HiP-MDP rapidly identifies the dynamics of a
new task instance, allowing an agent to flexibly adapt to task variations.
| Finale Doshi-Velez and George Konidaris | null | 1308.3513 | null | null |
Knapsack Constrained Contextual Submodular List Prediction with
Application to Multi-document Summarization | cs.LG | We study the problem of predicting a set or list of options under knapsack
constraint. The quality of such lists is evaluated by a submodular reward
function that measures both quality and diversity. Similar to DAgger (Ross et
al., 2010), by a reduction to online learning, we show how to adapt two
sequence prediction models to imitate greedy maximization under knapsack
constraint problems: CONSEQOPT (Dey et al., 2012) and SCP (Ross et al., 2013).
Experiments on extractive multi-document summarization show that our approach
outperforms existing state-of-the-art methods.
| Jiaji Zhou, Stephane Ross, Yisong Yue, Debadeepta Dey, J. Andrew
Bagnell | null | 1308.3541 | null | null |
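The greedy maximization being imitated can be sketched as cost-benefit greedy
under a knapsack budget; the coverage objective below is a toy stand-in for
the paper's submodular quality/diversity reward:

```python
def knapsack_greedy(items, cost, gain, budget):
    """Cost-benefit greedy for monotone submodular maximization under a
    knapsack constraint: repeatedly add the item with the best ratio of
    marginal gain to cost that still fits the budget."""
    chosen, spent = [], 0.0
    remaining = set(items)
    while remaining:
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + cost[i] <= budget:
                ratio = gain(chosen, i) / cost[i]
                if ratio > best_ratio:
                    best, best_ratio = i, ratio
        if best is None:
            break
        chosen.append(best)
        spent += cost[best]
        remaining.remove(best)
    return chosen

# Toy objective for extractive summarization: each sentence covers a set
# of concepts; the marginal gain is the number of newly covered concepts.
covers = {0: {1, 2}, 1: {2, 3, 4}, 2: {5}, 3: {1, 2, 3, 4, 5}}
cost = {0: 1.0, 1: 2.0, 2: 1.0, 3: 4.0}

def gain(chosen, i):
    covered = set().union(*(covers[j] for j in chosen)) if chosen else set()
    return len(covers[i] - covered)

print(knapsack_greedy(covers, cost, gain, budget=4.0))
```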
Standardizing Interestingness Measures for Association Rules | stat.AP cs.LG stat.ML | Interestingness measures provide information that can be used to prune or
select association rules. A given value of an interestingness measure is often
interpreted relative to the overall range of the values that the
interestingness measure can take. However, properties of individual association
rules restrict the values an interestingness measure can achieve. An
interestingness measure can be standardized to take this into account, but this has
only been done for one interestingness measure to date, i.e., the lift.
Standardization provides greater insight than the raw value and may even alter
researchers' perception of the data. We derive standardized analogues of three
interestingness measures and use real and simulated data to compare them to
their raw versions, each other, and the standardized lift.
| Mateen Shaikh, Paul D. McNicholas, M. Luiza Antonie and T. Brendan
Murphy | null | 1308.3740 | null | null |
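To illustrate the idea, here is a min-max standardization of lift using the
Frechet bounds on the joint probability; this follows the spirit of the
standardized lift described above, though the paper's exact definitions may
include further data-dependent constraints (e.g. minimum support):

```python
def lift(p_ab, p_a, p_b):
    """Raw lift of the rule A -> B."""
    return p_ab / (p_a * p_b)

def standardized_lift(p_ab, p_a, p_b):
    """Min-max standardization of lift: rescale the observed lift within
    the range allowed by the marginals (Frechet bounds on P(A and B))."""
    lo = max(p_a + p_b - 1.0, 0.0) / (p_a * p_b)   # lower Frechet bound
    hi = min(p_a, p_b) / (p_a * p_b)               # upper Frechet bound
    return (lift(p_ab, p_a, p_b) - lo) / (hi - lo)

# A raw lift of ~1.33 sits at a different point of its feasible range
# once the marginals are taken into account.
print(lift(0.2, 0.3, 0.5), standardized_lift(0.2, 0.3, 0.5))
```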
Comment on "robustness and regularization of support vector machines" by
H. Xu, et al., (Journal of Machine Learning Research, vol. 10, pp. 1485-1510,
2009, arXiv:0803.3490) | cs.LG | This paper comments on the published work dealing with robustness and
regularization of support vector machines (Journal of Machine Learning
Research, vol. 10, pp. 1485-1510, 2009) [arXiv:0803.3490] by H. Xu, etc. They
proposed a theorem to show that it is possible to relate robustness in the
feature space and robustness in the sample space directly. In this paper, we
propose a counter example that rejects their theorem.
| Yahya Forghani, Hadi Sadoghi Yazdi | null | 1308.3750 | null | null |
Reference Distance Estimator | cs.LG stat.ML | A theoretical study is presented for a simple linear classifier called
reference distance estimator (RDE), which assigns the weight of each feature j
as P(r|j)-P(r), where r is a reference feature relevant to the target class y.
The analysis shows that if r performs better than random guess in predicting y
and is conditionally independent with each feature j, the RDE will have the
same classification performance as that from P(y|j)-P(y), a classifier trained
with the gold standard y. Since the estimation of P(r|j)-P(r) does not require
labeled data, under the assumption above, RDE trained with a large number of
unlabeled examples would be close to that trained with infinite labeled
examples. For the case the assumption does not hold, we theoretically analyze
the factors that influence the closeness of the RDE to the perfect one under
the assumption, and present an algorithm to select reference features and
combine multiple RDEs from different reference features using both labeled and
unlabeled data. The experimental results on 10 text classification tasks show
that the semi-supervised learning method improves supervised methods using
5,000 labeled examples and 13 million unlabeled ones, and in many tasks, its
performance is even close to a classifier trained with 13 million labeled
examples. In addition, the bounds in the theorems provide good estimation of
the classification performance and can be useful for new algorithm design.
| Yanpeng Li | null | 1308.3818 | null | null |
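The RDE weights are simple to compute from unlabeled data alone; a sketch for
binary features follows. The reference-feature choice below is arbitrary,
whereas the paper gives an algorithm for selecting and combining references:

```python
import numpy as np

def rde_weights(X, r):
    """Reference distance estimator weights w_j = P(r=1 | x_j=1) - P(r=1),
    computed from unlabeled binary data. X is an (n, d) 0/1 feature matrix
    and r is a 0/1 reference feature relevant to the target class."""
    p_r = r.mean()
    present = X.sum(axis=0)
    p_r_given_j = (r @ X) / np.maximum(present, 1)   # avoid divide-by-zero
    return p_r_given_j - p_r

# Unlabeled documents: no labels y are needed to compute the weights.
rng = np.random.default_rng(0)
X = (rng.random((5000, 100)) < 0.1).astype(float)
r = X[:, 0].copy()                 # hypothetical reference feature
w = rde_weights(X[:, 1:], r)
scores = X[:, 1:] @ w              # classify by thresholding the scores
```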
Optimal Algorithms for Testing Closeness of Discrete Distributions | cs.DS cs.IT cs.LG math.IT | We study the question of closeness testing for two discrete distributions.
More precisely, given samples from two distributions $p$ and $q$ over an
$n$-element set, we wish to distinguish whether $p=q$ versus $p$ is at least
$\varepsilon$-far from $q$, in either $\ell_1$ or $\ell_2$ distance. Batu et al. gave
the first sub-linear time algorithms for these problems, which matched the
lower bounds of Valiant up to a logarithmic factor in $n$, and a polynomial
factor of $\varepsilon$.
In this work, we present simple (and new) testers for both the $\ell_1$ and
$\ell_2$ settings, with sample complexity that is information-theoretically
optimal, to constant factors, both in the dependence on $n$, and the dependence
on $\varepsilon$; for the $\ell_1$ testing problem we establish that the sample
complexity is $\Theta(\max\{n^{2/3}/\varepsilon^{4/3}, n^{1/2}/\varepsilon^2\})$.
| Siu-On Chan and Ilias Diakonikolas and Gregory Valiant and Paul
Valiant | null | 1308.3946 | null | null |
A balanced k-means algorithm for weighted point sets | math.OC cs.LG stat.ML | The classical $k$-means algorithm for partitioning $n$ points in
$\mathbb{R}^d$ into $k$ clusters is one of the most popular and widely spread
clustering methods. The need to respect prescribed lower bounds on the cluster
sizes has been observed in many scientific and business applications.
In this paper, we present and analyze a generalization of $k$-means that is
capable of handling weighted point sets and prescribed lower and upper bounds
on the cluster sizes. We call it weight-balanced $k$-means. The key difference
to existing models lies in the ability to handle the combination of weighted
point sets with prescribed bounds on the cluster sizes. This imposes the need
to perform partial membership clustering, and leads to significant differences.
For example, while finite termination of all $k$-means variants for
unweighted point sets is a simple consequence of the existence of only finitely
many partitions of a given set of points, the situation is more involved for
weighted point sets, as there are infinitely many partial membership
clusterings. Using polyhedral theory, we show that the number of iterations of
weight-balanced $k$-means is bounded above by $n^{O(dk)}$, so in particular it
is polynomial for fixed $k$ and $d$. This is similar to the known worst-case
upper bound for classical $k$-means for unweighted point sets and unrestricted
cluster sizes, despite the much more general framework. We conclude with the
discussion of some additional favorable properties of our method.
| Steffen Borgwardt, Andreas Brieden and Peter Gritzmann | null | 1308.4004 | null | null |
Support Recovery for the Drift Coefficient of High-Dimensional
Diffusions | cs.IT cs.LG math.IT math.PR math.ST stat.TH | Consider the problem of learning the drift coefficient of a $p$-dimensional
stochastic differential equation from a sample path of length $T$. We assume
that the drift is parametrized by a high-dimensional vector, and study the
support recovery problem when both $p$ and $T$ can tend to infinity. In
particular, we prove a general lower bound on the sample-complexity $T$ by
using a characterization of mutual information as a time integral of
conditional variance, due to Kadota, Zakai, and Ziv. For linear stochastic
differential equations, the drift coefficient is parametrized by a $p\times p$
matrix which describes which degrees of freedom interact under the dynamics. In
this case, we analyze an $\ell_1$-regularized least squares estimator and prove
an upper bound on $T$ that nearly matches the lower bound on specific classes
of sparse matrices.
| Jose Bento, and Morteza Ibrahimi | null | 1308.4077 | null | null |
A Likelihood Ratio Approach for Probabilistic Inequalities | math.PR cs.LG math.ST stat.TH | We propose a new approach for deriving probabilistic inequalities based on
bounding likelihood ratios. We demonstrate that this approach is more general
and powerful than the classical method frequently used for deriving
concentration inequalities such as Chernoff bounds. We discover that the
proposed approach is inherently related to statistical concepts such as
monotone likelihood ratio, maximum likelihood, and the method of moments for
parameter estimation. A connection between the proposed approach and the large
deviation theory is also established. We show that, without using moment
generating functions, tightest possible concentration inequalities may be
readily derived by the proposed approach. We have derived new concentration
inequalities using the proposed approach, which cannot be obtained by the
classical approach based on moment generating functions.
| Xinjia Chen | null | 1308.4123 | null | null |
Towards Adapting ImageNet to Reality: Scalable Domain Adaptation with
Implicit Low-rank Transformations | cs.CV cs.LG stat.ML | Images seen during test time are often not from the same distribution as
images used for learning. This problem, known as domain shift, occurs when
training classifiers from object-centric internet image databases and trying to
apply them directly to scene understanding tasks. The consequence is often
severe performance degradation and is one of the major barriers for the
application of classifiers in real-world systems. In this paper, we show how to
learn transform-based domain adaptation classifiers in a scalable manner. The
key idea is to exploit an implicit rank constraint, originated from a
max-margin domain adaptation formulation, to make optimization tractable.
Experiments show that the transformation between domains can be very
efficiently learned from data and easily applied to new categories. This begins
to bridge the gap between large-scale internet image collections and object
images captured in everyday life environments.
| Erik Rodner, Judy Hoffman, Jeff Donahue, Trevor Darrell, Kate Saenko | null | 1308.4200 | null | null |
Nested Nonnegative Cone Analysis | stat.ME cs.LG | Motivated by the analysis of nonnegative data objects, a novel Nested
Nonnegative Cone Analysis (NNCA) approach is proposed to overcome some
drawbacks of existing methods. The application of traditional PCA/SVD methods
to nonnegative data often causes the approximation matrix to leave the
nonnegative cone, which leads to non-interpretable and sometimes nonsensical
results. The
nonnegative matrix factorization (NMF) approach overcomes this issue, however
the NMF approximation matrices suffer several drawbacks: 1) the factorization
may not be unique, 2) the resulting approximation matrix at a specific rank may
not be unique, and 3) the subspaces spanned by the approximation matrices at
different ranks may not be nested. These drawbacks cause trouble in
determining the number of components and in multi-scale (in ranks)
interpretability. The NNCA approach proposed in this paper naturally generates
a nested structure, and is shown to be unique at each rank. Simulations are
used in this paper to illustrate the drawbacks of the traditional methods, and
the usefulness of the NNCA method.
| Lingsong Zhang and J. S. Marron and Shu Lu | null | 1308.4206 | null | null |
Pylearn2: a machine learning research library | stat.ML cs.LG cs.MS | Pylearn2 is a machine learning research library. This does not just mean that
it is a collection of machine learning algorithms that share a common API; it
means that it has been designed for flexibility and extensibility in order to
facilitate research projects that involve new or unusual use cases. In this
paper we give a brief history of the library, an overview of its basic
philosophy, a summary of the library's architecture, and a description of how
the Pylearn2 community functions socially.
| Ian J. Goodfellow, David Warde-Farley, Pascal Lamblin, Vincent
Dumoulin, Mehdi Mirza, Razvan Pascanu, James Bergstra, Fr\'ed\'eric Bastien,
Yoshua Bengio | null | 1308.4214 | null | null |
Decentralized Online Big Data Classification - a Bandit Framework | cs.LG cs.MA | Distributed, online data mining systems have emerged as a result of
applications requiring analysis of large amounts of correlated and
high-dimensional data produced by multiple distributed data sources. We propose
a distributed online data classification framework where data is gathered by
distributed data sources and processed by a heterogeneous set of distributed
learners which learn online, at run-time, how to classify the different data
streams either by using their locally available classification functions or by
helping each other by classifying each other's data. Importantly, since the
data is gathered at different locations, sending the data to another learner to
process incurs additional costs such as delays, and hence this will be only
beneficial if the benefits obtained from a better classification will exceed
the costs. We assume that the classification functions available to each
processing element are fixed, but their prediction accuracy for various types
of incoming data are unknown and can change dynamically over time, and thus
they need to be learned online. We model the problem of joint classification by
the distributed and heterogeneous learners from multiple data sources as a
distributed contextual bandit problem where each data is characterized by a
specific context. We develop distributed online learning algorithms for which
we can prove that they have sublinear regret. Compared to prior work in
distributed online data mining, our work is the first to provide analytic
regret results characterizing the performance of the proposed algorithms.
| Cem Tekin and Mihaela van der Schaar | null | 1308.4565 | null | null |
Distributed Online Learning via Cooperative Contextual Bandits | cs.LG stat.ML | In this paper we propose a novel framework for decentralized, online learning
by many learners. At each moment of time, an instance characterized by a
certain context may arrive to each learner; based on the context, the learner
can select one of its own actions (which gives a reward and provides
information) or request assistance from another learner. In the latter case,
the requester pays a cost and receives the reward but the provider learns the
information. In our framework, learners are modeled as cooperative contextual
bandits. Each learner seeks to maximize the expected reward from its arrivals,
which involves trading off the reward received from its own actions, the
information learned from its own actions, the reward received from the actions
requested of others and the cost paid for these actions - taking into account
what it has learned about the value of assistance from each other learner. We
develop distributed online learning algorithms and provide analytic bounds to
compare the efficiency of these algorithms with the complete knowledge
(oracle) benchmark (in which the expected reward of every action in every
context is known by every learner). Our estimates show that regret - the loss
incurred by the algorithm - is sublinear in time. Our theoretical framework can
be used in many practical applications including Big Data mining, event
detection in surveillance sensor networks and distributed online recommendation
systems.
| Cem Tekin and Mihaela van der Schaar | null | 1308.4568 | null | null |
Online and stochastic Douglas-Rachford splitting method for large scale
machine learning | cs.NA cs.LG stat.ML | Online and stochastic learning has emerged as powerful tool in large scale
optimization. In this work, we generalize the Douglas-Rachford splitting (DRs)
method for minimizing composite functions to online and stochastic settings (to
the best of our knowledge, this is the first time DRs has been generalized to a
sequential version). We first establish an $O(1/\sqrt{T})$ regret bound for the
batch DRs method. We then prove that the online DRs splitting method enjoys an
$O(1)$ regret bound and that stochastic DRs splitting has a convergence rate of
$O(1/\sqrt{T})$. The proofs are simple and intuitive, and the results and
techniques can serve as a starting point for research on large-scale machine
learning employing the DRs method. Numerical experiments with the proposed
methods demonstrate the effectiveness of the online and stochastic update
rules, and further confirm our regret and convergence analysis.
| Ziqiang Shi and Rujie Liu | null | 1308.4757 | null | null |
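For reference, a batch Douglas-Rachford splitting iteration for a standard
composite problem, the lasso; the online/stochastic variants introduced in
the paper change how the smooth-part updates use data, but the two-prox
structure below is the common core. The step parameter gamma and the problem
data are arbitrary choices:

```python
import numpy as np

def dr_lasso(A, b, lam, gamma=1.0, iters=300):
    """Batch Douglas-Rachford splitting for
        min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    an instance of minimizing a composite function f + g."""
    n = A.shape[1]
    M = np.linalg.inv(np.eye(n) + gamma * A.T @ A)   # prox of f is linear
    Atb = A.T @ b
    z = np.zeros(n)
    for _ in range(iters):
        x = M @ (z + gamma * Atb)                    # x = prox_{gamma*f}(z)
        u = 2 * x - z
        y = np.sign(u) * np.maximum(np.abs(u) - gamma * lam, 0)  # prox of g
        z = z + y - x
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x_true = np.zeros(30)
x_true[:3] = [2.0, -1.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(100)
print(np.round(dr_lasso(A, b, lam=1.0), 2))
```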
The Sample-Complexity of General Reinforcement Learning | cs.LG | We present a new algorithm for general reinforcement learning where the true
environment is known to belong to a finite class of N arbitrary models. The
algorithm is shown to be near-optimal for all but O(N log^2 N) time-steps with
high probability. Infinite classes are also considered where we show that
compactness is a key criterion for determining the existence of uniform
sample-complexity bounds. A matching lower bound is given for the finite case.
| Tor Lattimore and Marcus Hutter and Peter Sunehag | null | 1308.4828 | null | null |
Minimal Dirichlet energy partitions for graphs | math.OC cs.LG stat.ML | Motivated by a geometric problem, we introduce a new non-convex graph
partitioning objective where the optimality criterion is given by the sum of
the Dirichlet eigenvalues of the partition components. A relaxed formulation is
identified and a novel rearrangement algorithm is proposed, which we show is
strictly decreasing and converges in a finite number of iterations to a local
minimum of the relaxed objective function. Our method is applied to several
clustering problems on graphs constructed from synthetic data, MNIST
handwritten digits, and manifold discretizations. The model has a
semi-supervised extension and provides a natural representative for the
clusters as well.
| Braxton Osting, Chris D. White, Edouard Oudet | 10.1137/130934568 | 1308.4915 | null | null |
Learning Deep Representation Without Parameter Inference for Nonlinear
Dimensionality Reduction | cs.LG stat.ML | Unsupervised deep learning is one of the most powerful representation
learning techniques. Restricted Boltzmann machines, sparse coding, regularized
auto-encoders, and convolutional neural networks are pioneering building blocks
of deep learning. In this paper, we propose a new building block -- distributed
random models. The proposed method is a special full implementation of the
product of experts: (i) each expert owns multiple hidden units and different
experts have different numbers of hidden units; (ii) the model of each expert
is a k-center clustering, whose k-centers are simply uniformly sampled
examples, and whose output (i.e. the hidden units) is a sparse code in which
only the similarity values from a few nearest neighbors are retained. The relationship
between the pioneering building blocks, several notable research branches and
the proposed method is analyzed. Experimental results show that the proposed
deep model can learn better representations than deep belief networks and
meanwhile can train a much larger network with much less time than deep belief
networks.
| Xiao-Lei Zhang | null | 1308.4922 | null | null |
Group-Sparse Signal Denoising: Non-Convex Regularization, Convex
Optimization | cs.CV cs.LG stat.ML | Convex optimization with sparsity-promoting convex regularization is a
standard approach for estimating sparse signals in noise. In order to promote
sparsity more strongly than convex regularization, it is also standard practice
to employ non-convex optimization. In this paper, we take a third approach. We
utilize a non-convex regularization term chosen such that the total cost
function (consisting of data consistency and regularization terms) is convex.
Therefore, sparsity is more strongly promoted than in the standard convex
formulation, but without sacrificing the attractive aspects of convex
optimization (unique minimum, robust algorithms, etc.). We use this idea to
improve the recently developed 'overlapping group shrinkage' (OGS) algorithm
for the denoising of group-sparse signals. The algorithm is applied to the
problem of speech enhancement with favorable results in terms of both SNR and
perceptual quality.
| Po-Yu Chen, Ivan W. Selesnick | 10.1109/TSP.2014.2329274 | 1308.5038 | null | null |
Manopt, a Matlab toolbox for optimization on manifolds | cs.MS cs.LG math.OC stat.ML | Optimization on manifolds is a rapidly developing branch of nonlinear
optimization. Its focus is on problems where the smooth geometry of the search
space can be leveraged to design efficient numerical algorithms. In particular,
optimization on manifolds is well-suited to deal with rank and orthogonality
constraints. Such structured constraints appear pervasively in machine learning
applications, including low-rank matrix completion, sensor network
localization, camera network registration, independent component analysis,
metric learning, dimensionality reduction and so on. The Manopt toolbox,
available at www.manopt.org, is a user-friendly, documented piece of software
dedicated to simplify experimenting with state of the art Riemannian
optimization algorithms. We aim particularly at reaching practitioners outside
our field.
| Nicolas Boumal and Bamdev Mishra and P.-A. Absil and Rodolphe
Sepulchre | null | 1308.5200 | null | null |
The Lovasz-Bregman Divergence and connections to rank aggregation,
clustering, and web ranking | cs.LG cs.IR stat.ML | We extend the recently introduced theory of Lovasz-Bregman (LB) divergences
(Iyer & Bilmes, 2012) in several ways. We show that they represent a distortion
between a 'score' and an 'ordering', thus providing a new view of rank
aggregation and order based clustering with interesting connections to web
ranking. We show how the LB divergences have a number of properties akin to
many permutation based metrics, and in fact have as special cases forms very
similar to the Kendall-$\tau$ metric. We also show how the LB divergences
subsume a number of commonly used ranking measures in information retrieval,
like the NDCG and AUC. Unlike the traditional permutation based metrics,
however, the LB divergence naturally captures a notion of "confidence" in the
orderings, thus providing a new representation to applications involving
aggregating scores as opposed to just orderings. We show how a number of
recently used web ranking models are forms of Lovasz-Bregman rank aggregation
and also observe that a natural form of the Mallows model using the LB divergence
has been used as conditional ranking models for the 'Learning to Rank' problem.
| Rishabh Iyer and Jeff Bilmes | null | 1308.5275 | null | null |
Ensemble of Distributed Learners for Online Classification of Dynamic
Data Streams | cs.LG | We present an efficient distributed online learning scheme to classify data
captured from distributed, heterogeneous, and dynamic data sources. Our scheme
consists of multiple distributed local learners, that analyze different streams
of data that are correlated to a common event that needs to be classified. Each
learner uses a local classifier to make a local prediction. The local
predictions are then collected by each learner and combined using a weighted
majority rule to output the final prediction. We propose a novel online
ensemble learning algorithm to update the aggregation rule in order to adapt to
the underlying data dynamics. We rigorously determine a bound for the worst
case misclassification probability of our algorithm which depends on the
misclassification probabilities of the best static aggregation rule, and of the
best local classifier. Importantly, the worst case misclassification
probability of our algorithm tends asymptotically to 0 if the misclassification
probability of the best static aggregation rule or the misclassification
probability of the best local classifier tend to 0. Then we extend our
algorithm to address challenges specific to the distributed implementation and
we prove new bounds that apply to these settings. Finally, we test our scheme
by performing an evaluation study on several data sets. When applied to data
sets widely used by the literature dealing with dynamic data streams and
concept drift, our scheme exhibits performance gains ranging from 34% to 71%
with respect to state of the art solutions.
| Luca Canzian, Yu Zhang, and Mihaela van der Schaar | null | 1308.5281 | null | null |
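The aggregation step can be sketched as the classic weighted majority
algorithm; the paper's scheme adds the distributed considerations and the
bounds quoted above, so treat this only as the core update rule:

```python
import numpy as np

class WeightedMajorityEnsemble:
    """Aggregates binary predictions from local learners via a weighted
    majority rule, multiplicatively down-weighting learners that turn out
    to be wrong."""

    def __init__(self, n_learners, beta=0.8):
        self.w = np.ones(n_learners)
        self.beta = beta                      # penalty factor in (0, 1)

    def predict(self, local_preds):
        """local_preds: array of 0/1 predictions, one per learner."""
        local_preds = np.asarray(local_preds)
        yes = self.w[local_preds == 1].sum()
        no = self.w[local_preds == 0].sum()
        return int(yes >= no)

    def update(self, local_preds, label):
        """After the true label arrives, penalize the wrong learners."""
        wrong = np.asarray(local_preds) != label
        self.w[wrong] *= self.beta

ens = WeightedMajorityEnsemble(n_learners=3)
for preds, label in [([1, 0, 1], 1), ([0, 0, 1], 0), ([1, 1, 0], 1)]:
    y_hat = ens.predict(preds)
    ens.update(preds, label)
```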
Monitoring with uncertainty | cs.LO cs.LG cs.SY | We discuss the problem of runtime verification of an instrumented program
that fails to emit and to monitor some events. These gaps can occur when a
monitoring overhead control mechanism is introduced to disable the monitor of
an application with real-time constraints. We show how to use statistical
models to learn the application behavior and to "fill in" the introduced gaps.
Finally, we present and discuss some techniques developed in the last three
years to estimate the probability that a property of interest is violated in
the presence of an incomplete trace.
| Ezio Bartocci (TU Wien), Radu Grosu (TU Wien) | 10.4204/EPTCS.124.1 | 1308.5329 | null | null |
A stochastic hybrid model of a biological filter | cs.LG cs.CE q-bio.MN | We present a hybrid model of a biological filter, a genetic circuit which
removes fast fluctuations in the cell's internal representation of the extra
cellular environment. The model takes the classic feed-forward loop (FFL) motif
and represents it as a network of continuous protein concentrations and binary,
unobserved gene promoter states. We address the problem of statistical
inference and parameter learning for this class of models from partial,
discrete time observations. We show that the hybrid representation leads to an
efficient algorithm for approximate statistical inference in this circuit, and
show its effectiveness on a simulated data set.
| Andrea Ocone (School of Informatics, University of Edinburgh), Guido
Sanguinetti (School of Informatics, University of Edinburgh) | 10.4204/EPTCS.124.10 | 1308.5338 | null | null |
Sparse and Non-Negative BSS for Noisy Data | stat.ML cs.LG | Non-negative blind source separation (BSS) has raised interest in various
fields of research, as evidenced by the extensive literature on the topic of
non-negative matrix factorization (NMF). In this context, it is essential
that the sources to be estimated exhibit some diversity in order to be
efficiently retrieved. Sparsity is known to enhance such contrast between the
sources while yielding approaches that are highly robust, especially to noise.
In this paper we introduce a new algorithm to tackle the blind separation of
non-negative sparse sources from noisy measurements. We first show that
sparsity and non-negativity constraints have to be carefully applied to the
sought-after solution. In fact, improperly constrained solutions are unlikely
to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA
(non-negative Generalized Morphological Component Analysis), makes use of
proximal calculus techniques to provide properly constrained solutions. The
performance of nGMCA compared to other state-of-the-art algorithms is
demonstrated by numerical experiments encompassing a wide variety of settings,
with negligible parameter tuning. In particular, nGMCA is shown to provide
robustness to noise and performs well on synthetic mixtures of real NMR
spectra.
| J\'er\'emy Rapin, J\'er\^ome Bobin, Anthony Larue and Jean-Luc Starck | 10.1109/TSP.2013.2279358 | 1308.5546 | null | null |
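The heart of such proximal schemes is an update that takes a gradient step on the data-fidelity term and then applies the proximal operator of the sparsity-plus-non-negativity penalty, i.e., soft-thresholding followed by projection onto the non-negative orthant. A minimal sketch, with an illustrative step size, threshold, and synthetic data rather than the authors' parameter choices:

```python
import numpy as np

def prox_sparse_nonneg(S, threshold):
    """Soft-threshold, then clip negatives: prox of lam*||.||_1 + i_{S >= 0}."""
    return np.maximum(S - threshold, 0.0)

def update_sources(Y, A, S, lam):
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/Lipschitz constant of the gradient
    grad = A.T @ (A @ S - Y)
    return prox_sparse_nonneg(S - step * grad, lam * step)

rng = np.random.default_rng(2)
A = np.abs(rng.normal(size=(20, 3)))                      # non-negative mixing
S_true = np.maximum(rng.normal(size=(3, 50)) - 1.0, 0.0)  # sparse, non-negative
Y = A @ S_true + 0.01 * rng.normal(size=(20, 50))         # noisy measurements
S = np.zeros_like(S_true)
for _ in range(200):
    S = update_sources(Y, A, S, lam=0.1)
print(f"relative error: {np.linalg.norm(S - S_true) / np.linalg.norm(S_true):.3f}")
```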
Backhaul-Aware Interference Management in the Uplink of Wireless Small
Cell Networks | cs.NI cs.GT cs.LG | The design of distributed mechanisms for interference management is one of
the key challenges in emerging wireless small cell networks whose backhaul is
capacity limited and heterogeneous (wired, wireless and a mix thereof). In this
paper, a novel, backhaul-aware approach to interference management in wireless
small cell networks is proposed. The proposed approach enables macrocell user
equipments (MUEs) to optimize their uplink performance by exploiting the
presence of neighboring small cell base stations. The problem is formulated as
a noncooperative game among the MUEs that seek to optimize their delay-rate
tradeoff, given the conditions of both the radio access network and the --
possibly heterogeneous -- backhaul. To solve this game, a novel distributed
learning algorithm is proposed, through which the MUEs autonomously choose their
optimal uplink transmission strategies, given a limited amount of available
information. The convergence of the proposed algorithm is shown and its
properties are studied. Simulation results show that, under various types of
backhauls, the proposed approach yields significant performance gains, in terms
of both average throughput and delay for the MUEs, when compared to existing
benchmark algorithms.
| Sumudu Samarakoon and Mehdi Bennis and Walid Saad and Matti Latva-aho | 10.1109/TWC.2013.092413.130221 | 1308.5835 | null | null |
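While the paper's algorithm is tailored to the delay-rate tradeoff and the backhaul state, the general flavor of such distributed learning can be sketched as a single user selecting among a finite strategy set via an exponential-weights rule driven only by observed payoffs. The utility values, noise level, and temperature below are invented for illustration and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_strategies, T, temp = 4, 500, 5.0
true_utility = np.array([0.2, 0.5, 0.8, 0.4])    # assumed mean payoffs
scores = np.zeros(n_strategies)                  # running utility estimates
counts = np.zeros(n_strategies, dtype=int)
for t in range(T):
    probs = np.exp(temp * scores)                # exponential-weights rule
    probs /= probs.sum()
    a = rng.choice(n_strategies, p=probs)
    payoff = true_utility[a] + 0.1 * rng.normal()  # noisy, limited feedback
    counts[a] += 1
    scores[a] += (payoff - scores[a]) / counts[a]  # incremental mean update
print("strategy usage counts:", counts)
```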
Bayesian Conditional Gaussian Network Classifiers with Applications to
Mass Spectra Classification | cs.LG stat.ML | Classifiers based on probabilistic graphical models are very effective. In
continuous domains, maximum likelihood is usually used to assess the
predictions of those classifiers. When data is scarce, this can easily lead to
overfitting. In any probabilistic setting, Bayesian averaging (BA) provides
theoretically optimal predictions and is known to be robust to overfitting. In
this work we introduce Bayesian Conditional Gaussian Network Classifiers, which
efficiently perform exact Bayesian averaging over the parameters. We evaluate
the proposed classifiers against the maximum likelihood alternatives proposed
so far on standard UCI datasets, concluding that performing BA improves the
quality of the assessed probabilities (conditional log likelihood) whilst
maintaining the error rate.
Overfitting is more likely to occur in domains where the number of data items
is small and the number of variables is large. These two conditions are met in
the realm of bioinformatics, where the early diagnosis of cancer from mass
spectra is a relevant task. We provide an application of our classification
framework to that problem, comparing it with the standard maximum likelihood
alternative, where the improvement in the quality of the assessed probabilities is
confirmed.
| Victor Bellon and Jesus Cerquides and Ivo Grosse | null | 1308.6181 | null | null |
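For a single Gaussian feature, exact Bayesian averaging with a conjugate Normal-Inverse-Gamma prior replaces the maximum-likelihood Gaussian plug-in with a Student-t posterior predictive, which is what makes BA robust when data is scarce. A one-dimensional sketch with illustrative hyperparameters (the paper averages over full conditional Gaussian networks, not a single feature):

```python
import numpy as np
from scipy import stats

def posterior_predictive(x_new, data, m0=0.0, k0=1.0, a0=1.0, b0=1.0):
    """Student-t predictive density after conjugate Normal-Inverse-Gamma updating."""
    n, xbar = len(data), np.mean(data)
    kn = k0 + n
    mn = (k0 * m0 + n * xbar) / kn
    an = a0 + n / 2.0
    bn = (b0 + 0.5 * np.sum((data - xbar) ** 2)
          + k0 * n * (xbar - m0) ** 2 / (2.0 * kn))
    scale = np.sqrt(bn * (kn + 1) / (an * kn))
    return stats.t.pdf(x_new, df=2 * an, loc=mn, scale=scale)

data = np.array([1.1, 0.9, 1.3])   # scarce data: the t-tails hedge the estimate
print(posterior_predictive(1.0, data))
```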
New Algorithms for Learning Incoherent and Overcomplete Dictionaries | cs.DS cs.LG stat.ML | In sparse recovery we are given a matrix $A$ (the dictionary) and a vector of
the form $A X$ where $X$ is sparse, and the goal is to recover $X$. This is a
central notion in signal processing, statistics and machine learning. But in
applications such as sparse coding, edge detection, compression and super
resolution, the dictionary $A$ is unknown and has to be learned from random
examples of the form $Y = AX$ where $X$ is drawn from an appropriate
distribution --- this is the dictionary learning problem. In most settings, $A$
is overcomplete: it has more columns than rows. This paper presents a
polynomial-time algorithm for learning overcomplete dictionaries; the only
previously known algorithm with provable guarantees is the recent work of
Spielman, Wang and Wright, who gave an algorithm for the full-rank case, a
setting that rarely arises in applications. Our algorithm applies to incoherent
dictionaries, which have been a central object of study since they were
introduced in seminal work of Donoho and Huo. In particular, a dictionary is
$\mu$-incoherent if each pair of columns has inner product at most $\mu /
\sqrt{n}$.
The algorithm makes natural stochastic assumptions about the unknown sparse
vector $X$, which can contain $k \leq c \min(\sqrt{n}/\mu \log n, m^{1/2
-\eta})$ non-zero entries (for any $\eta > 0$). This is close to the largest $k$
handled by the best sparse recovery algorithms even if one knows the
dictionary $A$ exactly. Moreover, both the running time and sample complexity
depend on $\log 1/\epsilon$, where $\epsilon$ is the target accuracy, and so
our algorithm converges very quickly to the true dictionary. Our algorithm can
also tolerate substantial amounts of noise provided it is incoherent with
respect to the dictionary (e.g., Gaussian). In the noisy setting, our running
time and sample complexity depend polynomially on $1/\epsilon$, and this is
necessary.
| Sanjeev Arora and Rong Ge and Ankur Moitra | null | 1308.6273 | null | null |
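The incoherence condition is easy to check numerically: compute the maximum absolute inner product between distinct normalized columns, then compare the implied sparsity budget. The sketch below uses a random Gaussian dictionary as an assumed example and absorbs the unspecified constant $c$ into the printed bound:

```python
import numpy as np

def mutual_coherence(A):
    """Max absolute inner product between distinct normalized columns."""
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(4)
n, m = 64, 128                         # overcomplete: more columns than rows
A = rng.normal(size=(n, m))
coh = mutual_coherence(A)              # equals mu / sqrt(n) in the paper's scaling
mu = coh * np.sqrt(n)
k_bound = np.sqrt(n) / (mu * np.log(n))   # first term of the bound, modulo c
print(f"coherence {coh:.3f}, mu = {mu:.2f}, sparsity budget ~ c * {k_bound:.2f}")
```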
Prediction of breast cancer recurrence using Classification Restricted
Boltzmann Machine with Dropping | cs.LG | In this paper, we apply Classification Restricted Boltzmann Machine
(ClassRBM) to the problem of predicting breast cancer recurrence. According to
the Polish National Cancer Registry, in 2010 alone breast cancer accounted for
almost 25% of all diagnosed cancer cases in Poland. We show how to use
ClassRBM to predict breast cancer recurrence and to discover the inputs
(symptoms) relevant to illness reappearance. Next, we outline a general probabilistic
framework for learning Boltzmann machines with masks, which we refer to as
Dropping. The manner in which masks are generated leads to different learning
methods, e.g., DropOut and DropConnect. We propose a new method called
DropPart, which is a generalization of DropConnect: in DropPart, masks are
drawn from a Beta distribution instead of the Bernoulli distribution used in
DropConnect. Finally, we carry out an
experiment using a real-life dataset consisting of 949 cases, provided by the
Institute of Oncology Ljubljana.
| Jakub M. Tomczak | null | 1308.6324 | null | null |
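The three mask-generation schemes named in the abstract differ only in what is masked and how the mask is drawn: DropOut masks units with Bernoulli draws, DropConnect masks individual weights with Bernoulli draws, and DropPart (as described) draws soft weight masks from a Beta distribution. A minimal sketch with illustrative shapes and hyperparameters, not the paper's ClassRBM training code:

```python
import numpy as np

rng = np.random.default_rng(5)
W = rng.normal(size=(4, 6))          # an assumed weight matrix of one layer
h = rng.normal(size=6)               # assumed hidden activations

# DropOut: Bernoulli mask on units
unit_mask = rng.random(6) < 0.5
out_dropout = W @ (h * unit_mask)

# DropConnect: Bernoulli mask on each individual weight
weight_mask = rng.random(W.shape) < 0.5
out_dropconnect = (W * weight_mask) @ h

# DropPart: Beta-distributed mask on each weight (soft values in [0, 1])
beta_mask = rng.beta(a=2.0, b=2.0, size=W.shape)
out_droppart = (W * beta_mask) @ h
print(out_dropout, out_dropconnect, out_droppart, sep="\n")
```

With a=b=1 the Beta distribution is uniform, and as the mass concentrates at {0, 1} the DropPart mask approaches DropConnect's Bernoulli mask, which is the sense in which it generalizes DropConnect.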
Linear and Parallel Learning of Markov Random Fields | stat.ML cs.LG | We introduce a new embarrassingly parallel parameter learning algorithm for
Markov random fields with untied parameters, which is efficient for a large
class of practical models. Our algorithm parallelizes naturally over cliques
and, for graphs of bounded degree, its complexity is linear in the number of
cliques. Unlike its competitors, our algorithm is fully parallel and for
log-linear models it is also data efficient, requiring only the local
sufficient statistics of the data to estimate parameters.
| Yariv Dror Mizrahi, Misha Denil and Nando de Freitas | null | 1308.6342 | null | null |
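Because each clique's parameters depend only on local sufficient statistics, the estimation parallelizes trivially over cliques. The sketch below uses a crude moment-matching estimator for pairwise binary potentials as an assumed stand-in for the paper's per-clique objective; the data, cliques, and smoothing are invented:

```python
import numpy as np
from multiprocessing import Pool

def fit_clique(args):
    """Estimate one pairwise potential from local counts of (x_i, x_j)."""
    (i, j), data = args
    counts = np.ones((2, 2))                      # Laplace smoothing
    for x in data:
        counts[x[i], x[j]] += 1
    return (i, j), np.log(counts / counts.sum())  # log empirical potential

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    data = rng.integers(0, 2, size=(1000, 4))     # 4 binary variables
    cliques = [(0, 1), (1, 2), (2, 3)]            # a chain of pairwise cliques
    with Pool() as pool:                          # one task per clique
        params = dict(pool.map(fit_clique, [(c, data) for c in cliques]))
    print(params[(0, 1)])
```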
Learning-Based Procedural Content Generation | cs.AI cs.HC cs.LG cs.NE | Procedural content generation (PCG) has recently become one of the hottest
topics in computational intelligence and game AI research. Among a variety of
PCG techniques, search-based approaches (SBPCG) overwhelmingly dominate PCG
development at present. While SBPCG leads to promising results and successful applications,
it poses a number of challenges ranging from representation to evaluation of
the content being generated. In this paper, we present an alternative yet
generic PCG framework, named learning-based procedural content generation
(LBPCG), to provide potential solutions to several challenging problems in
existing PCG techniques. By exploring and exploiting information gained in game
development and public beta test via data-driven learning, our framework can
generate robust content adaptable to end-users or target players online with
minimal interruption to their experience. Furthermore, we develop enabling
techniques to implement the various models required in our framework. As a
proof of concept, we have developed a prototype based on the classic open
source first-person shooter game, Quake. Simulation results suggest that our
framework is promising in generating quality content.
| Jonathan Roberts and Ke Chen | null | 1308.6415 | null | null |