title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Altitude Training: Strong Bounds for Single-Layer Dropout | stat.ML cs.LG math.ST stat.TH | Dropout training, originally designed for deep neural networks, has been
successful on high-dimensional single-layer natural language tasks. This paper
proposes a theoretical explanation for this phenomenon: we show that, under a
generative Poisson topic model with long documents, dropout training improves
the exponent in the generalization bound for empirical risk minimization.
Dropout achieves this gain much like a marathon runner who practices at
altitude: once a classifier learns to perform reasonably well on training
examples that have been artificially corrupted by dropout, it will do very well
on the uncorrupted test set. We also show that, under similar conditions,
dropout preserves the Bayes decision boundary and should therefore induce
minimal bias in high dimensions.
| Stefan Wager, William Fithian, Sida Wang, and Percy Liang | null | 1407.3289 | null | null |
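A minimal numpy sketch of the single-layer setting this abstract studies: logistic regression trained on dropout-corrupted copies of the data ("altitude") and evaluated on clean data ("sea level"). The corruption rate, learning rate, and inverted rescaling are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_corrupt(X, delta):
    """Zero each feature independently with probability delta, rescaling so
    the corrupted features are unbiased for the originals."""
    mask = rng.random(X.shape) >= delta
    return X * mask / (1.0 - delta)

def train_logistic_dropout(X, y, delta=0.5, lr=0.1, epochs=300):
    """Fit logistic regression by gradient ascent on freshly corrupted data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        Xc = dropout_corrupt(X, delta)
        p = 1.0 / (1.0 + np.exp(-Xc @ w))
        w += lr * Xc.T @ (y - p) / len(y)   # log-likelihood gradient step
    return w

# Train at "altitude" (corrupted data), test at "sea level" (clean data).
X = rng.standard_normal((200, 50))
y = (X @ rng.standard_normal(50) > 0).astype(float)
w = train_logistic_dropout(X, y)
print("clean accuracy:", ((X @ w > 0) == (y > 0.5)).mean())
```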
Offline to Online Conversion | cs.LG cs.IT math.IT math.ST stat.CO stat.TH | We consider the problem of converting offline estimators into an online
predictor or estimator with small extra regret. Formally this is the problem of
merging a collection of probability measures over strings of length 1,2,3,...
into a single probability measure over infinite sequences. We describe various
approaches and their pros and cons on various examples. As a side-result we
give an elementary, non-heuristic, purely combinatorial derivation of Turing's
famous estimator. Our main technical contribution is to determine the
computational complexity of online estimators with good guarantees in general.
| Marcus Hutter | null | 1407.3334 | null | null |
Extreme State Aggregation Beyond MDPs | cs.AI cs.LG | We consider a Reinforcement Learning setup where an agent interacts with an
environment in observation-reward-action cycles without any (especially MDP)
assumptions on the environment. State aggregation and more generally feature
reinforcement learning is concerned with mapping histories/raw-states to
reduced/aggregated states. The idea behind both is that the resulting reduced
process (approximately) forms a small stationary finite-state MDP, which can
then be efficiently solved or learnt. We considerably generalize existing
aggregation results by showing that even if the reduced process is not an MDP,
the (q-)value functions and (optimal) policies of an associated MDP with same
state-space size solve the original problem, as long as the solution can
approximately be represented as a function of the reduced states. This implies
an upper bound on the required state space size that holds uniformly for all RL
problems. It may also explain why RL algorithms designed for MDPs sometimes
perform well beyond MDPs.
| Marcus Hutter | null | 1407.3341 | null | null |
A Spectral Algorithm for Inference in Hidden Semi-Markov Models | stat.ML cs.LG | Hidden semi-Markov models (HSMMs) are latent variable models which allow
latent state persistence and can be viewed as a generalization of the popular
hidden Markov models (HMMs). In this paper, we introduce a novel spectral
algorithm to perform inference in HSMMs. Unlike expectation maximization (EM),
our approach correctly estimates the probability of a given observation sequence
based on a set of training sequences. Our approach is based on estimating
moments from the sample, whose number of dimensions depends only
logarithmically on the maximum length of the hidden state persistence.
Moreover, the algorithm requires only a few matrix inversions and is therefore
computationally efficient. Empirical evaluations on synthetic and real data
demonstrate the advantage of the algorithm over EM in terms of speed and
accuracy, especially for large datasets.
| Igor Melnyk and Arindam Banerjee | null | 1407.3422 | null | null |
Robots that can adapt like animals | cs.RO cs.AI cs.LG cs.NE q-bio.NC | As robots leave the controlled environments of factories to autonomously
function in more complex, natural environments, they will have to respond to
the inevitable fact that they will become damaged. However, while animals can
quickly adapt to a wide variety of injuries, current robots cannot "think
outside the box" to find a compensatory behavior when damaged: they are limited
to their pre-specified self-sensing abilities, can diagnose only anticipated
failure modes, and require a pre-programmed contingency plan for every type of
potential damage, an impracticality for complex robots. Here we introduce an
intelligent trial and error algorithm that allows robots to adapt to damage in
less than two minutes, without requiring self-diagnosis or pre-specified
contingency plans. Before deployment, a robot exploits a novel algorithm to
create a detailed map of the space of high-performing behaviors: This map
represents the robot's intuitions about what behaviors it can perform and their
value. If the robot is damaged, it uses these intuitions to guide a
trial-and-error learning algorithm that conducts intelligent experiments to
rapidly discover a compensatory behavior that works in spite of the damage.
Experiments reveal successful adaptations for a legged robot injured in five
different ways, including damaged, broken, and missing legs, and for a robotic
arm with joints broken in 14 different ways. This new technique will enable
more robust, effective, autonomous robots, and suggests principles that animals
may use to adapt to injury.
| Antoine Cully, Jeff Clune, Danesh Tarapore, Jean-Baptiste Mouret | 10.1038/nature14422 | 1407.3501 | null | null |
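A toy sketch of the map-guided trial-and-error loop described above, under heavy simplification: the behavior-performance map is just a vector of prior predictions, and each disappointing trial overwrites the prediction it tested, a crude stand-in for the paper's Bayesian-optimization-style update.

```python
import numpy as np

def adapt_after_damage(prior_map, test_on_robot, good_enough=0.9):
    """Pick the behavior the map predicts to be best, test it on the damaged
    robot, stop if it performs well enough, otherwise correct the map and
    retry the next most promising behavior."""
    belief = np.asarray(prior_map, dtype=float).copy()
    for _ in range(len(belief)):
        b = int(np.argmax(belief))
        perf = test_on_robot(b)              # one expensive physical trial
        if perf >= good_enough * belief[b]:
            return b, perf                   # compensatory behavior found
        belief[b] = perf                     # the map was too optimistic here
    return b, perf

# Hypothetical example: damage breaks behavior 2, behavior 0 still works well.
prior = [0.9, 0.8, 1.0]
damaged_robot = lambda b: [0.85, 0.78, 0.1][b]
print(adapt_after_damage(prior, damaged_robot))
```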
On the Power of Adaptivity in Matrix Completion and Approximation | stat.ML cs.LG | We consider the related tasks of matrix completion and matrix approximation
from missing data and propose adaptive sampling procedures for both problems.
We show that adaptive sampling allows one to eliminate standard incoherence
assumptions on the matrix row space that are necessary for passive sampling
procedures. For exact recovery of a low-rank matrix, our algorithm judiciously
selects a few columns to observe in full and, with few additional measurements,
projects the remaining columns onto their span. This algorithm exactly recovers
an $n \times n$ rank $r$ matrix using $O(nr\mu_0 \log^2(r))$ observations,
where $\mu_0$ is a coherence parameter on the column space of the matrix. In
addition to completely eliminating any row space assumptions that have pervaded
the literature, this algorithm enjoys a better sample complexity than any
existing matrix completion algorithm. To certify that this improvement is due
to adaptive sampling, we establish that row space coherence is necessary for
passive sampling algorithms to achieve non-trivial sample complexity bounds.
For constructing a low-rank approximation to a high-rank input matrix, we
propose a simple algorithm that thresholds the singular values of a zero-filled
version of the input matrix. The algorithm computes an approximation that is
nearly as good as the best rank-$r$ approximation using $O(nr\mu \log^2(n))$
samples, where $\mu$ is a slightly different coherence parameter on the matrix
columns. Again we eliminate assumptions on the row space.
| Akshay Krishnamurthy and Aarti Singh | null | 1407.3619 | null | null |
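A simplified sketch of the exact-recovery scheme: observe a few columns in full, then reconstruct every other column from a handful of sampled entries by least squares against the observed span. For brevity the fully observed columns are fixed in advance here, whereas the paper's algorithm selects them adaptively.

```python
import numpy as np

rng = np.random.default_rng(1)

def complete_low_rank(M, r, m):
    """Observe the first r (assumed linearly independent) columns in full,
    then recover each remaining column from only m sampled entries by least
    squares against the observed basis."""
    n = M.shape[0]
    U, _ = np.linalg.qr(M[:, :r])                      # basis of the column space
    M_hat = np.zeros_like(M)
    M_hat[:, :r] = M[:, :r]
    for j in range(r, M.shape[1]):
        omega = rng.choice(n, size=m, replace=False)   # few measurements of column j
        c, *_ = np.linalg.lstsq(U[omega], M[omega, j], rcond=None)
        M_hat[:, j] = U @ c                            # project onto the span
    return M_hat

# Exact recovery of a synthetic rank-3 matrix from partial observations:
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
print("exact recovery:", np.allclose(complete_low_rank(A, r=3, m=10), A))
```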
Finding Motif Sets in Time Series | cs.LG cs.DB | Time-series motifs are representative subsequences that occur frequently in a
time series; a motif set is the set of subsequences deemed to be instances of a
given motif. We focus on finding motif sets. Our motivation is to detect motif
sets in household electricity-usage profiles, representing repeated patterns of
household usage.
We propose three algorithms for finding motif sets. Two are greedy algorithms
based on pairwise comparison, and the third uses a heuristic measure of set
quality to find the motif set directly. We compare these algorithms on
simulated datasets and on electricity-usage data. We show that Scan MK, the
simplest way of using the best-matching pair to find motif sets, is less
accurate on our synthetic data than Set Finder and Cluster MK, although the
latter is very sensitive to parameter settings. We qualitatively analyse the
outputs for the electricity-usage data and demonstrate that both Scan MK and
Set Finder can discover useful motif sets in such data.
| Anthony Bagnall, Jon Hills and Jason Lines | null | 1407.3685 | null | null |
Bayesian Network Structure Learning Using Quantum Annealing | quant-ph cs.LG | We introduce a method for the problem of learning the structure of a Bayesian
network using the quantum adiabatic algorithm. We do so by introducing an
efficient reformulation of a standard posterior-probability scoring function on
graphs as a pseudo-Boolean function, which is equivalent to a system of 2-body
Ising spins, as well as suitable penalty terms for enforcing the constraints
necessary for the reformulation; our proposed method requires $\mathcal O(n^2)$
qubits for $n$ Bayesian network variables. Furthermore, we prove lower bounds
on the necessary weighting of these penalty terms. The logical structure
resulting from the mapping has the appealing property that it is
instance-independent for a given number of Bayesian network variables, as well
as being independent of the number of data cases.
| Bryan O'Gorman, Alejandro Perdomo-Ortiz, Ryan Babbush, Alan
Aspuru-Guzik, and Vadim Smelyanskiy | 10.1140/epjst/e2015-02349-9 | 1407.3897 | null | null |
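The reformulation rests on the standard change of variables between Boolean values and Ising spins. The generic QUBO-to-Ising identity (not the paper's specific scoring function or penalty terms) can be checked in a few lines:

```python
import numpy as np

def qubo_to_ising(Q):
    """Rewrite a 2-body pseudo-Boolean objective x^T Q x (x_i in {0,1}) in
    Ising form s^T J s + h^T s + c with spins s_i = 2*x_i - 1."""
    Q = np.asarray(Q, dtype=float)
    J = Q / 4.0
    h = (Q.sum(axis=0) + Q.sum(axis=1)) / 4.0
    c = Q.sum() / 4.0
    return J, h, c

# Check the two forms agree on every assignment of a small instance.
Q = np.array([[1.0, -2.0], [0.5, 3.0]])
J, h, c = qubo_to_ising(Q)
for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    x = np.array(bits, dtype=float); s = 2 * x - 1
    assert np.isclose(x @ Q @ x, s @ J @ s + h @ s + c)
print("QUBO and Ising forms agree on all assignments")
```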
Analysis of purely random forests bias | math.ST cs.LG stat.ME stat.TH | Random forests are a very effective and commonly used statistical method, but
their full theoretical analysis is still an open problem. As a first step,
simplified models such as purely random forests have been introduced, in order
to shed light on the good performance of random forests. In this paper, we
study the approximation error (the bias) of some purely random forest models in
a regression framework, focusing in particular on the influence of the number
of trees in the forest. Under some regularity assumptions on the regression
function, we show that the bias of an infinite forest decreases at a faster
rate (with respect to the size of each tree) than a single tree. As a
consequence, infinite forests attain a strictly better risk rate (with respect
to the sample size) than single trees. Furthermore, our results allow us to derive
a minimum number of trees sufficient to reach the same rate as an infinite
forest. As a by-product of our analysis, we also show a link between the bias
of purely random forests and the bias of some kernel estimators.
| Sylvain Arlot (DI-ENS, INRIA Paris - Rocquencourt), Robin Genuer
(ISPED, INRIA Bordeaux - Sud-Ouest) | null | 1407.3939 | null | null |
Fast matrix completion without the condition number | cs.LG cs.DS stat.ML | We give the first algorithm for Matrix Completion whose running time and
sample complexity are polynomial in the rank of the unknown target matrix,
linear in the dimension of the matrix, and logarithmic in the condition number
of the matrix. To the best of our knowledge, all previous algorithms either
incurred a quadratic dependence on the condition number of the unknown matrix
or a quadratic dependence on the dimension of the matrix in the running time.
Our algorithm is based on a novel extension of Alternating Minimization which
we show has theoretical guarantees under standard assumptions even in the
presence of noise.
| Moritz Hardt and Mary Wootters | null | 1407.4070 | null | null |
Finding representative sets of optimizations for adaptive
multiversioning applications | cs.PL cs.LG | Iterative compilation is a widely adopted technique to optimize programs for
different constraints such as performance, code size and power consumption in
rapidly evolving hardware and software environments. However, in case of
statically compiled programs, it is often restricted to optimizations for a
specific dataset and may not be applicable to applications that exhibit
different run-time behavior across program phases, multiple datasets or when
executed in heterogeneous, reconfigurable and virtual environments. Several
frameworks have been recently introduced to tackle these problems and enable
run-time optimization and adaptation for statically compiled programs based on
static function multiversioning and monitoring of online program behavior. In
this article, we present a novel technique to select a minimal set of
representative optimization variants (function versions) for such frameworks
while avoiding performance loss across available datasets and code-size
explosion. We developed a novel mapping mechanism using popular decision tree
or rule induction based machine learning techniques to rapidly select best code
versions at run-time based on dataset features and minimize selection overhead.
These techniques enable the creation of self-tuning static binaries or
libraries that adapt to changing behavior and environments at run-time through
staged compilation, require no complex recompilation framework, and effectively
outperform traditional single-version non-adaptable code.
| Lianjie Luo and Yang Chen and Chengyong Wu and Shun Long and Grigori
Fursin | null | 1407.4075 | null | null |
In Defense of MinHash Over SimHash | stat.CO cs.DS cs.IR cs.LG stat.ML | MinHash and SimHash are the two widely adopted Locality Sensitive Hashing
(LSH) algorithms for large-scale data processing applications. Deciding which
LSH to use for a particular problem at hand is an important question, which has
no clear answer in the existing literature. In this study, we provide a
theoretical answer (validated by experiments) that MinHash virtually always
outperforms SimHash when the data are binary, as common in practice such as
search.
The collision probability of MinHash is a function of resemblance similarity
($\mathcal{R}$), while the collision probability of SimHash is a function of
cosine similarity ($\mathcal{S}$). To provide a common basis for comparison, we
evaluate retrieval results in terms of $\mathcal{S}$ for both MinHash and
SimHash. This evaluation is valid as we can prove that MinHash is a valid LSH
with respect to $\mathcal{S}$, by using a general inequality $\mathcal{S}^2\leq
\mathcal{R}\leq \frac{\mathcal{S}}{2-\mathcal{S}}$. Our worst-case analysis
shows that MinHash significantly outperforms SimHash in the high similarity
region.
Interestingly, our intensive experiments reveal that MinHash is also
substantially better than SimHash even in datasets where most of the data
points are not too similar to each other. This is partly because, in practical
data, often $\mathcal{R}\geq \frac{\mathcal{S}}{z-\mathcal{S}}$ holds where $z$
is only slightly larger than 2 (e.g., $z\leq 2.1$). Our restricted worst case
analysis by assuming $\frac{\mathcal{S}}{z-\mathcal{S}}\leq \mathcal{R}\leq
\frac{\mathcal{S}}{2-\mathcal{S}}$ shows that MinHash indeed significantly
outperforms SimHash even in the low similarity region.
We believe the results in this paper will provide valuable guidelines for
search in practice, especially when the data are sparse.
| Anshumali Shrivastava and Ping Li | null | 1407.4416 | null | null |
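The inequality $\mathcal{S}^2\leq \mathcal{R}\leq \frac{\mathcal{S}}{2-\mathcal{S}}$ is easy to sanity-check empirically on random sparse binary vectors (a quick verification of the stated bound, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# R is resemblance (Jaccard) similarity, S is cosine similarity.
for _ in range(5):
    x = rng.random(10_000) < 0.05
    y = rng.random(10_000) < 0.05
    a = np.sum(x & y)                      # size of the intersection
    if a == 0:
        continue
    R = a / np.sum(x | y)
    S = a / np.sqrt(x.sum() * y.sum())
    assert S**2 - 1e-12 <= R <= S / (2 - S) + 1e-12
    print(f"S^2 = {S**2:.4f} <= R = {R:.4f} <= S/(2-S) = {S/(2-S):.4f}")
```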
Kernel Nonnegative Matrix Factorization Without the Curse of the
Pre-image - Application to Unmixing Hyperspectral Images | cs.CV cs.IT cs.LG cs.NE math.IT stat.ML | The nonnegative matrix factorization (NMF) is widely used in signal and image
processing, including bio-informatics, blind source separation and
hyperspectral image analysis in remote sensing. A great challenge arises when
dealing with a nonlinear formulation of the NMF. Within the framework of kernel
machines, the models suggested in the literature do not allow the
representation of the factorization matrices, which is a fallout of the curse
of the pre-image. In this paper, we propose a novel kernel-based model for the
NMF that does not suffer from the pre-image problem, by investigating the
estimation of the factorization matrices directly in the input space. For
different kernel functions, we describe two schemes for iterative algorithms:
an additive update rule based on a gradient descent scheme and a multiplicative
update rule in the same spirit as in the Lee and Seung algorithm. Within the
proposed framework, we develop several extensions to incorporate constraints,
including sparseness, smoothness, and spatial regularization with a
total-variation-like penalty. The effectiveness of the proposed method is
demonstrated with the problem of unmixing hyperspectral images, using
well-known real images and comparing results with state-of-the-art techniques.
| Fei Zhu, Paul Honeine, Maya Kallas | null | 1407.4420 | null | null |
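For reference, the linear baseline that the multiplicative scheme above generalizes: Lee and Seung's multiplicative updates for Frobenius-norm NMF (a standard sketch, not the authors' kernel algorithm).

```python
import numpy as np

rng = np.random.default_rng(7)

def nmf_multiplicative(V, r, iters=200, eps=1e-12):
    """Lee-Seung multiplicative updates for V ~= W @ H under the Frobenius
    objective; nonnegativity is preserved because updates only multiply."""
    n, m = V.shape
    W, H = rng.random((n, r)), rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

V = rng.random((30, 20))
W, H = nmf_multiplicative(V, r=5)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```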
Subspace Restricted Boltzmann Machine | cs.LG | The subspace Restricted Boltzmann Machine (subspaceRBM) is a third-order
Boltzmann machine where multiplicative interactions are between one visible and
two hidden units. There are two kinds of hidden units, namely, gate units and
subspace units. The subspace units reflect variations of a pattern in data and
the gate unit is responsible for activating the subspace units. Additionally,
the gate unit can be seen as a pooling feature. We evaluate the behavior of
subspaceRBM through experiments on the MNIST digit recognition task, measuring
reconstruction error and classification error.
| Jakub M. Tomczak and Adam Gonczarek | null | 1407.4422 | null | null |
Sequential Logistic Principal Component Analysis (SLPCA): Dimensional
Reduction in Streaming Multivariate Binary-State System | stat.ML cs.LG stat.AP | Sequential or online dimensionality reduction is of interest due to the
explosion of streaming-data-based applications and the need for adaptive
statistical modeling in many emerging fields, such as the modeling of energy
end-use profiles. Principal Component Analysis (PCA) is the classical approach
to dimensionality reduction. However, traditional Singular Value Decomposition (SVD)
based PCA fails to model data which largely deviates from Gaussian
distribution. The Bregman Divergence was recently introduced to achieve a
generalized PCA framework. If the random variable under dimensional reduction
follows Bernoulli distribution, which occurs in many emerging fields, the
generalized PCA is called Logistic PCA (LPCA). In this paper, we extend the
batch LPCA to a sequential version (i.e. SLPCA), based on the sequential convex
optimization theory. The convergence property of this algorithm is discussed
compared to the batch version of LPCA (i.e. BLPCA), as well as its performance
in reducing the dimension for multivariate binary-state systems. Its
application in building energy end-use profile modeling is also investigated.
| Zhaoyi Kang and Costas J. Spanos | null | 1407.4430 | null | null |
On the Complexity of Best Arm Identification in Multi-Armed Bandit
Models | stat.ML cs.LG | The stochastic multi-armed bandit model is a simple abstraction that has
proven useful in many different contexts in statistics and machine learning.
Whereas the achievable limit in terms of regret minimization is now well known,
our aim is to contribute to a better understanding of the performance in terms
of identifying the m best arms. We introduce generic notions of complexity for
the two dominant frameworks considered in the literature: fixed-budget and
fixed-confidence settings. In the fixed-confidence setting, we provide the
first known distribution-dependent lower bound on the complexity that involves
information-theoretic quantities and holds when m is larger than 1 under
general assumptions. In the specific case of two-armed bandits, we derive
refined lower bounds in both the fixed-confidence and fixed-budget settings,
along with matching algorithms for Gaussian and Bernoulli bandit models. These
results show in particular that the complexity of the fixed-budget setting may
be smaller than the complexity of the fixed-confidence setting, contradicting
the familiar behavior observed when testing fully specified alternatives. In
addition, we also provide improved sequential stopping rules that have
guaranteed error probabilities and shorter average running times. The proofs
rely on two technical results that are of independent interest: a deviation
lemma for self-normalized sums (Lemma 19) and a novel change of measure
inequality for bandit models (Lemma 1).
| Emilie Kaufmann (SEQUEL, LTCI), Olivier Cappé (LTCI), Aurélien
Garivier (IMT) | null | 1407.4443 | null | null |
Probabilistic Group Testing under Sum Observations: A Parallelizable
2-Approximation for Entropy Loss | cs.IT cs.LG math.IT math.OC math.ST stat.ML stat.TH | We consider the problem of group testing with sum observations and noiseless
answers, in which we aim to locate multiple objects by querying the number of
objects in each of a sequence of chosen sets. We study a probabilistic setting
with entropy loss, in which we assume a joint Bayesian prior density on the
locations of the objects and seek to choose the sets queried to minimize the
expected entropy of the Bayesian posterior distribution after a fixed number of
questions. We present a new non-adaptive policy, called the dyadic policy, and
show that it is optimal among non-adaptive policies and within a factor of two
of optimal among adaptive policies. This policy is quick to compute, its
nonadaptive nature makes it easy to parallelize, and our bounds show it
performs well even when compared with adaptive policies. We also study an
adaptive greedy policy, which maximizes the one-step expected reduction in
entropy, and show that it performs at least as well as the dyadic policy,
offering greater query efficiency but reduced parallelism. Numerical
experiments demonstrate that both procedures outperform a divide-and-conquer
benchmark policy from the literature, called sequential bifurcation, and show
how these procedures may be applied in a stylized computer vision problem.
| Weidong Han, Purnima Rajan, Peter I. Frazier, Bruno M. Jedynak | null | 1407.4446 | null | null |
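One plausible 1-D reading of the dyadic policy, for illustration only: the k-th query set is the union of dyadic sub-intervals whose index has bit k set, so all questions are fixed in advance and can be asked in parallel.

```python
import numpy as np

def dyadic_query_sets(n_cells, n_questions):
    """Build non-adaptive query sets on a discretized domain: the k-th set
    contains the cells whose index has bit k set (a union of dyadic
    sub-intervals at scale 2**-(k+1))."""
    idx = np.arange(n_cells)
    return [((idx >> k) & 1).astype(bool) for k in range(n_questions)]

# Each answer is the *number* of hidden objects in the queried set.
objects = np.array([3, 11, 20])                 # hidden locations, 32 cells
for k, in_set in enumerate(dyadic_query_sets(32, 5)):
    print(f"question {k}: answer = {in_set[objects].sum()}")
```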
A feature construction framework based on outlier detection and
discriminative pattern mining | cs.LG | No matter the expressive power and sophistication of supervised learning
algorithms, their effectiveness is restricted by the features describing the
data. This is not a new insight in ML and many methods for feature selection,
transformation, and construction have been developed. But while work on
general techniques for feature selection and transformation, i.e.
dimensionality reduction, is ongoing, work on feature construction, i.e.
enriching the data, is by now mainly confined to image recognition
(particularly character recognition) and NLP.
In this work, we propose a new general framework for feature construction.
The need for feature construction in a data set is indicated by class outliers
and discriminative pattern mining used to derive features on their
k-neighborhoods. We instantiate the framework with LOF and C4.5-Rules, and
evaluate the usefulness of the derived features on a diverse collection of UCI
data sets. The derived features are more often useful than ones derived by
DC-Fringe, and our approach is much less likely to overfit. But while a weak
learner, Naive Bayes, benefits strongly from the feature construction, the
effect is less pronounced for C4.5, and almost vanishes for an SVM learner.
Keywords: feature construction, classification, outlier detection
| Albrecht Zimmermann | null | 1407.4668 | null | null |
Sparse Partially Linear Additive Models | stat.ME cs.LG stat.ML | The generalized partially linear additive model (GPLAM) is a flexible and
interpretable approach to building predictive models. It combines features in
an additive manner, allowing each to have either a linear or nonlinear effect
on the response. However, the choice of which features to treat as linear or
nonlinear is typically assumed known. Thus, to make a GPLAM a viable approach
in situations in which little is known a priori about the features, one must
overcome two primary model selection challenges: deciding which features to
include in the model and determining which of these features to treat
nonlinearly. We introduce the sparse partially linear additive model (SPLAM),
which combines model fitting and both of these model selection challenges
into a single convex optimization problem. SPLAM provides a bridge between the
lasso and sparse additive models. Through a statistical oracle inequality and
thorough simulation, we demonstrate that SPLAM can outperform other methods
across a broad spectrum of statistical regimes, including the high-dimensional
($p\gg N$) setting. We develop efficient algorithms that are applied to real
data sets with half a million samples and over 45,000 features with excellent
predictive performance.
| Yin Lou, Jacob Bien, Rich Caruana, Johannes Gehrke | null | 1407.4729 | null | null |
A landcover fuzzy logic classification by maximum likelihood | cs.CV cs.LG | Remote sensing is nowadays widely used across many sectors. It relies on
different image types, such as multispectral, hyperspectral, or ultraspectral
images, and image classification is one of the significant methods for
analyzing them. In this work, we combine maximum likelihood classification
with fuzzy logic, experimenting with fuzzy spatial and spectral texture
methods and their sub-methods for image classification.
| T.Sarath, G.Nagalakshmi | null | 1407.4739 | null | null |
Efficient On-the-fly Category Retrieval using ConvNets and GPUs | cs.CV cs.LG cs.NE | We investigate the gains in precision and speed, that can be obtained by
using Convolutional Networks (ConvNets) for on-the-fly retrieval - where
classifiers are learnt at run time for a textual query from downloaded images,
and used to rank large image or video datasets.
We make three contributions: (i) we present an evaluation of state-of-the-art
image representations for object category retrieval over standard benchmark
datasets containing 1M+ images; (ii) we show that ConvNets can be used to
obtain features which are incredibly performant, and yet much lower dimensional
than previous state-of-the-art image representations, and that their
dimensionality can be reduced further without loss in performance by
compression using product quantization or binarization. Consequently, features
with the state-of-the-art performance on large-scale datasets of millions of
images can fit in the memory of even a commodity GPU card; (iii) we show that
an SVM classifier can be learnt within a ConvNet framework on a GPU in parallel
with downloading the new training images, allowing for a continuous refinement
of the model as more images become available, and simultaneous training and
ranking. The outcome is an on-the-fly system that significantly outperforms its
predecessors in terms of: precision of retrieval, memory requirements, and
speed, facilitating accurate on-the-fly learning and ranking in under a second
on a single GPU.
| Ken Chatfield, Karen Simonyan and Andrew Zisserman | null | 1407.4764 | null | null |
Collaborative Filtering Ensemble for Personalized Name Recommendation | cs.IR cs.AI cs.LG | Out of thousands of names to choose from, picking the right one for your
child is a daunting task. In this work, our objective is to help parents make
an informed decision when choosing a name for their baby. We follow a
recommender system approach and combine, in an ensemble, the individual
rankings produced by simple collaborative filtering algorithms in order to
produce a personalized list of names that meets the individual parents' taste.
Our experiments were conducted using real-world data collected from the query
logs of 'nameling' (nameling.net), an online portal for searching and exploring
names, which corresponds to the dataset released in the context of the ECML
PKDD Discover Challenge 2013. Our approach is intuitive, easy to implement, and
features fast training and prediction steps.
| Bernat Coma-Puig and Ernesto Diaz-Aviles and Wolfgang Nejdl | null | 1407.4832 | null | null |
Deep Metric Learning for Practical Person Re-Identification | cs.CV cs.LG cs.NE | Various hand-crafted features and metric learning methods prevail in the
field of person re-identification. Compared to these methods, this paper
proposes a more general way that can learn a similarity metric from image
pixels directly. By using a "siamese" deep neural network, the proposed method
can jointly learn the color feature, texture feature and metric in a unified
framework. The network has a symmetry structure with two sub-networks which are
connected by a cosine function. To deal with the large variations of person images,
binomial deviance is used to evaluate the cost between similarities and labels,
which is proved to be robust to outliers.
Compared to existing research, a more practical setting is studied in the
experiments: training and testing on different datasets (cross-dataset
person re-identification). In both the "intra dataset" and "cross dataset"
settings, the superiority of the proposed method is illustrated on VIPeR and
PRID.
| Dong Yi and Zhen Lei and Stan Z. Li | null | 1407.4979 | null | null |
Classification of Passes in Football Matches using Spatiotemporal Data | cs.LG cs.CG | A knowledgeable observer of a game of football (soccer) can make a subjective
evaluation of the quality of passes made between players during the game. We
investigate the problem of producing an automated system to make the same
evaluation of passes. We present a model that constructs numerical predictor
variables from spatiotemporal match data using feature functions based on
methods from computational geometry, and then learns a classification function
from labelled examples of the predictor variables. Furthermore, the learned
classifiers are analysed to determine if there is a relationship between the
complexity of the algorithm that computed the predictor variable and the
importance of the variable to the classifier. Experimental results show that we
are able to produce a classifier with 85.8% accuracy on classifying passes as
Good, OK or Bad, and that the predictor variables computed using complex
methods from computational geometry are of moderate importance to the learned
classifiers. Finally, we show that the inter-rater agreement on pass
classification between the machine classifier and a human observer is of
similar magnitude to the agreement between two observers.
| Michael Horton, Joachim Gudmundsson, Sanjay Chawla, Joël Estephan | 10.1145/3105576 | 1407.5093 | null | null |
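A sketch of the kind of geometric predictor variables such a model might compute from spatiotemporal data; the feature names and choices here are ours, not the authors' exact set.

```python
import numpy as np

def pass_features(passer, receiver, opponents):
    """Compute a few geometric predictor variables for one pass: its length,
    the distance from the nearest opponent to the receiver, and the distance
    from the nearest opponent to the pass line segment."""
    p, r = np.asarray(passer, float), np.asarray(receiver, float)
    opp = np.asarray(opponents, float)
    d = r - p
    L2 = max(d @ d, 1e-12)
    t = np.clip((opp - p) @ d / L2, 0.0, 1.0)   # projection onto the segment
    closest = p + t[:, None] * d                # nearest point on the pass line
    return {
        "pass_length": float(np.sqrt(L2)),
        "nearest_opp_to_receiver": float(np.linalg.norm(opp - r, axis=1).min()),
        "nearest_opp_to_line": float(np.linalg.norm(opp - closest, axis=1).min()),
    }

print(pass_features((0, 0), (20, 5), [(10, 1), (15, 10)]))
```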
Sparse and spurious: dictionary learning with noise and outliers | cs.LG stat.ML | A popular approach within the signal processing and machine learning
communities consists in modelling signals as sparse linear combinations of
atoms selected from a learned dictionary. While this paradigm has led to
numerous empirical successes in various fields ranging from image to audio
processing, there have only been a few theoretical arguments supporting these
evidences. In particular, sparse coding, or sparse dictionary learning, relies
on a non-convex procedure whose local minima have not been fully analyzed yet.
In this paper, we consider a probabilistic model of sparse signals, and show
that, with high probability, sparse coding admits a local minimum around the
reference dictionary generating the signals. Our study takes into account the
case of over-complete dictionaries, noisy signals, and possible outliers, thus
extending previous work limited to noiseless settings and/or under-complete
dictionaries. The analysis we conduct is non-asymptotic and makes it possible
to understand how the key quantities of the problem, such as the coherence or
the level of noise, can scale with respect to the dimension of the signals, the
number of atoms, the sparsity and the number of observations.
| Rémi Gribonval (PANAMA), Rodolphe Jenatton (CMAP), Francis Bach
(SIERRA, LIENS) | null | 1407.5155 | null | null |
Tight convex relaxations for sparse matrix factorization | stat.ML cs.LG math.ST stat.TH | Based on a new atomic norm, we propose a new convex formulation for sparse
matrix factorization problems in which the number of nonzero elements of the
factors is assumed fixed and known. The formulation counts sparse PCA with
multiple factors, subspace clustering and low-rank sparse bilinear regression
as potential applications. We compute slow rates and an upper bound on the
statistical dimension of the suggested norm for rank 1 matrices, showing that
its statistical dimension is an order of magnitude smaller than the usual
$\ell_1$-norm, trace norm and their combinations. Even though our convex
formulation is in theory hard and does not lead to provably polynomial time
algorithmic schemes, we propose an active set algorithm leveraging the
structure of the convex problem to solve it and show promising numerical
results.
| Emile Richard, Guillaume Obozinski (LIGM), Jean-Philippe Vert (CBIO) | null | 1407.5158 | null | null |
Feature and Region Selection for Visual Learning | cs.CV cs.LG | Visual learning problems such as object classification and action recognition
are typically approached using extensions of the popular bag-of-words (BoW)
model. Despite its great success, it is unclear what visual features the BoW
model is learning: Which regions in the image or video are used to discriminate
among classes? Which are the most discriminative visual words? Answering these
questions is fundamental for understanding existing BoW models and inspiring
better models for visual recognition.
To answer these questions, this paper presents a method for feature selection
and region selection in the visual BoW model. This allows for an intermediate
visualization of the features and regions that are important for visual
learning. The main idea is to assign latent weights to the features or regions,
and jointly optimize these latent variables with the parameters of a classifier
(e.g., support vector machine). There are four main benefits of our approach:
(1) Our approach accommodates non-linear additive kernels such as the popular
$\chi^2$ and intersection kernel; (2) our approach is able to handle both
regions in images and spatio-temporal regions in videos in a unified way; (3)
the feature selection problem is convex, and both problems can be solved using
a scalable reduced gradient method; (4) we point out strong connections with
multiple kernel learning and multiple instance learning approaches.
Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube
illustrate the benefits of our approach.
| Ji Zhao, Liantao Wang, Ricardo Cabral, Fernando De la Torre | 10.1109/TIP.2016.2514503 | 1407.5245 | null | null |
Practical Kernel-Based Reinforcement Learning | cs.LG cs.AI stat.ML | Kernel-based reinforcement learning (KBRL) stands out among reinforcement
learning algorithms for its strong theoretical guarantees. By casting the
learning problem as a local kernel approximation, KBRL provides a way of
computing a decision policy which is statistically consistent and converges to
a unique solution. Unfortunately, the model constructed by KBRL grows with the
number of sample transitions, resulting in a computational cost that precludes
its application to large-scale or on-line domains. In this paper we introduce
an algorithm that turns KBRL into a practical reinforcement learning tool.
Kernel-based stochastic factorization (KBSF) builds on a simple idea: when a
transition matrix is represented as the product of two stochastic matrices, one
can swap the factors of the multiplication to obtain another transition matrix,
potentially much smaller, which retains some fundamental properties of its
precursor. KBSF exploits such an insight to compress the information contained
in KBRL's model into an approximator of fixed size. This makes it possible to
build an approximation that takes into account both the difficulty of the
problem and the associated computational cost. KBSF's computational complexity
is linear in the number of sample transitions, which is the best one can do
without discarding data. Moreover, the algorithm's simple mechanics allow for a
fully incremental implementation that makes the amount of memory used
independent of the number of sample transitions. The result is a kernel-based
reinforcement learning algorithm that can be applied to large-scale problems in
both off-line and on-line regimes. We derive upper bounds for the distance
between the value functions computed by KBRL and KBSF using the same data. We
also illustrate the potential of our algorithm in an extensive empirical study
in which KBSF is applied to difficult tasks based on real-world data.
| André M. S. Barreto, Doina Precup, and Joelle Pineau | null | 1407.5358 | null | null |
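The stochastic-factorization trick at the heart of KBSF can be illustrated in a few lines: swapping the factors of a product of stochastic matrices yields a much smaller matrix that is still a valid transition matrix (a minimal illustration of the property, not the full algorithm).

```python
import numpy as np

rng = np.random.default_rng(3)

def row_stochastic(A):
    return A / A.sum(axis=1, keepdims=True)

n, m = 1000, 20                                  # n states, m << n representatives
D = row_stochastic(rng.random((n, m)))           # n x m stochastic factor
K = row_stochastic(rng.random((m, n)))           # m x n stochastic factor
P_big = D @ K                                    # n x n transition matrix
P_small = K @ D                                  # m x m: factors swapped

# Both products are valid transition matrices (rows sum to one).
print(np.allclose(P_big.sum(axis=1), 1.0), np.allclose(P_small.sum(axis=1), 1.0))
```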
Are There Good Mistakes? A Theoretical Analysis of CEGIS | cs.LO cs.AI cs.LG cs.PL | Counterexample-guided inductive synthesis (CEGIS) is used to synthesize
programs from a candidate space of programs. The technique is guaranteed to
terminate and synthesize the correct program if the space of candidate programs
is finite. But the technique may or may not terminate with the correct program
if the candidate space of programs is infinite. In this paper, we perform a
theoretical analysis of counterexample-guided inductive synthesis technique. We
investigate whether the set of candidate spaces for which the correct program
can be synthesized using CEGIS depends on the counterexamples used in inductive
synthesis, that is, whether there are good mistakes which would increase the
synthesis power. We investigate whether the use of minimal counterexamples
instead of arbitrary counterexamples expands the set of candidate spaces of
programs for which inductive synthesis can successfully synthesize a correct
program. We consider two kinds of counterexamples: minimal counterexamples and
history bounded counterexamples. The history bounded counterexample used in any
iteration of CEGIS is bounded by the examples used in previous iterations of
inductive synthesis. We examine the relative change in power of inductive
synthesis in both cases. We show that the synthesis technique using minimal
counterexamples (MinCEGIS) has the same synthesis power as CEGIS, but the
synthesis technique using history-bounded counterexamples (HCEGIS) has
different power than CEGIS, with neither dominating the other.
| Susmit Jha (Strategic CAD Labs, Intel), Sanjit A. Seshia (EECS, UC
Berkeley) | 10.4204/EPTCS.157.10 | 1407.5397 | null | null |
Scalable Kernel Methods via Doubly Stochastic Gradients | cs.LG stat.ML | The general perception is that kernel methods are not scalable, and neural
nets are the methods of choice for nonlinear learning problems. Or have we
simply not tried hard enough for kernel methods? Here we propose an approach
that scales up kernel methods using a novel concept called "doubly stochastic
functional gradients". Our approach relies on the fact that many kernel methods
can be expressed as convex optimization problems, and we solve the problems by
making two unbiased stochastic approximations to the functional gradient, one
using random training points and another using random functions associated with
the kernel, and then descending using this noisy functional gradient. We show
that a function produced by this procedure after $t$ iterations converges to
the optimal function in the reproducing kernel Hilbert space at rate $O(1/t)$,
and achieves a generalization performance of $O(1/\sqrt{t})$. This double
stochasticity also allows us to avoid keeping the support vectors and to
implement the algorithm in a small memory footprint, which is linear in the
number of iterations and independent of the data dimension. Our approach can readily scale
kernel methods up to the regimes which are dominated by neural nets. We show
that our method can achieve competitive performance to neural nets in datasets
such as 8 million handwritten digits from MNIST, 2.3 million energy materials
from MolecularSpace, and 1 million photos from ImageNet.
| Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina
Balcan, Le Song | null | 1407.5599 | null | null |
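A minimal sketch of the doubly stochastic functional gradient for an RBF kernel: one source of randomness samples a training point, the other samples a random Fourier feature of the kernel. The step size and the omission of regularization are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)

d, T, gamma = 5, 2000, 1.0
X = rng.standard_normal((500, d))
y = np.sin(X[:, 0])                                  # toy regression target

W = rng.standard_normal((T, d)) * np.sqrt(2 * gamma) # random kernel directions
b = rng.uniform(0.0, 2.0 * np.pi, T)
alpha = np.zeros(T)                                  # coefficient of feature t

def f(x, t):
    """Evaluate the current function using the t random features drawn so far;
    memory grows linearly with the number of iterations."""
    return alpha[:t] @ (np.sqrt(2.0) * np.cos(W[:t] @ x + b[:t]))

for t in range(T):
    i = rng.integers(len(X))                         # random training point
    err = f(X[i], t) - y[i]                          # squared-loss residual
    phi = np.sqrt(2.0) * np.cos(W[t] @ X[i] + b[t])  # random Fourier feature
    alpha[t] = -(2.0 / (t + 1)) * err * phi          # O(1/t) step on new coordinate

print("train MSE:", np.mean([(f(x, T) - yi) ** 2 for x, yi in zip(X, y)]))
```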
PGMHD: A Scalable Probabilistic Graphical Model for Massive Hierarchical
Data Problems | cs.AI cs.LG | In the big data era, scalability has become a crucial requirement for any
useful computational model. Probabilistic graphical models are very useful for
mining and discovering data insights, but they are not scalable enough to be
suitable for big data problems. Bayesian Networks particularly demonstrate this
limitation when their data is represented using few random variables while each
random variable has a massive set of values. With hierarchical data - data that
is arranged in a treelike structure with several levels - one would expect to
see hundreds of thousands or millions of values distributed over even just a
small number of levels. When modeling this kind of hierarchical data across
large data sets, Bayesian networks become infeasible for representing the
probability distributions for the following reasons: i) Each level represents a
single random variable with hundreds of thousands of values, ii) The number of
levels is usually small, so there are also few random variables, and iii) The
structure of the network is predefined since the dependency is modeled top-down
from each parent to each of its child nodes, so the network would contain a
single linear path for the random variables from each parent to each child
node. In this paper we present a scalable probabilistic graphical model to
overcome these limitations for massive hierarchical data. We believe the
proposed model will lead to an easily-scalable, more readable, and expressive
implementation for problems that require probabilistic-based solutions for
massive amounts of hierarchical data. We successfully applied this model to
solve two different challenging probabilistic-based problems on massive
hierarchical data sets for different domains, namely, bioinformatics and latent
semantic discovery over search logs.
| Khalifeh AlJadda, Mohammed Korayem, Camilo Ortiz, Trey Grainger, John
A. Miller, William S. York | null | 1407.5656 | null | null |
Exploiting Smoothness in Statistical Learning, Sequential Prediction,
and Stochastic Optimization | cs.LG | In the last several years, the intimate connection between convex
optimization and learning problems, in both statistical and sequential
frameworks, has shifted the focus of algorithmic machine learning to examine
this interplay. In particular, on one hand, this intertwinement brings forward
new challenges in reassessment of the performance of learning algorithms
including generalization and regret bounds under the assumptions imposed by
convexity such as analytical properties of loss functions (e.g., Lipschitzness,
strong convexity, and smoothness). On the other hand, the emergence of datasets
of an unprecedented size demands the development of novel and more efficient
optimization algorithms to tackle large-scale learning problems.
The overarching goal of this thesis is to reassess the smoothness of loss
functions in statistical learning, sequential prediction/online learning, and
stochastic optimization and explicate its consequences. In particular we
examine how smoothness of loss function could be beneficial or detrimental in
these settings in terms of sample complexity, statistical consistency, regret
analysis, and convergence rate, and investigate how smoothness can be leveraged
to devise more efficient learning algorithms.
| Mehrdad Mahdavi | null | 1407.5908 | null | null |
Sequential Changepoint Approach for Online Community Detection | stat.ML cs.LG cs.SI math.ST stat.TH | We present new algorithms for detecting the emergence of a community in large
networks from sequential observations. The networks are modeled using
Erdos-Renyi random graphs with edges forming between nodes in the community
with higher probability. Based on statistical changepoint detection
methodology, we develop three algorithms: the Exhaustive Search (ES), the
mixture, and the Hierarchical Mixture (H-Mix) methods. Performance of these
methods is evaluated by the average run length (ARL), which captures the
frequency of false alarms, and the detection delay. Numerical comparisons show
that the ES method performs the best; however, it is exponentially complex. The
mixture method is polynomially complex by exploiting the fact that the size of
the community is typically small in a large network. However, it may react to a
group of active edges that do not form a community. This issue is resolved by
the H-Mix method, which is based on a dendrogram decomposition of the network.
We present an asymptotic analytical expression for ARL of the mixture method
when the threshold is large. Numerical simulation verifies that our
approximation is accurate even in the non-asymptotic regime. Hence, it can be
used to determine a desired threshold efficiently. Finally, numerical examples
show that the mixture and the H-Mix methods can both detect a community quickly
with a lower complexity than the ES method.
| David Marangoni-Simonsen and Yao Xie | 10.1109/LSP.2014.2381553 | 1407.5978 | null | null |
The U-curve optimization problem: improvements on the original algorithm
and time complexity analysis | cs.LG cs.CV | The U-curve optimization problem is characterized by a cost function over the
chains of a Boolean lattice that decomposes into U-shaped curves. This
problem can be applied to model the classical feature selection problem in
Machine Learning. Recently, the U-Curve algorithm was proposed to give optimal
solutions to the U-curve problem. In this article, we point out that the
U-Curve algorithm is in fact suboptimal, and introduce the U-Curve-Search (UCS)
algorithm, which is actually optimal. We also present the results of optimal
and suboptimal experiments, in which UCS is compared with the UBB optimal
branch-and-bound algorithm and the SFFS heuristic, respectively. We show that,
in both experiments, UCS performed better than its competitor.
Finally, we analyze the obtained results and point out improvements on UCS that
might enhance the performance of this algorithm.
| Marcelo S. Reis, Carlos E. Ferreira, and Junior Barrera | null | 1407.6067 | null | null |
Learning Rank Functionals: An Empirical Study | cs.IR cs.LG stat.ML | Ranking is a key aspect of many applications, such as information retrieval,
question answering, ad placement and recommender systems. Learning to rank has
the goal of estimating a ranking model automatically from training data. In
practical settings, the task often reduces to estimating a rank functional of
an object with respect to a query. In this paper, we investigate key issues in
designing an effective learning to rank algorithm. These include data
representation, the choice of rank functionals, the design of the loss function
so that it is correlated with the rank metrics used in evaluation. For the loss
function, we study three techniques: approximating the rank metric by a smooth
function, decomposition of the loss into a weighted sum of element-wise losses
and into a weighted sum of pairwise losses. We then present derivations of
piecewise losses using the theory of high-order Markov chains and Markov random
fields. In experiments, we evaluate these design aspects on two tasks: answer
ranking in a Social Question Answering site, and Web Information Retrieval.
| Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1407.6089 | null | null |
Stabilizing Sparse Cox Model using Clinical Structures in Electronic
Medical Records | stat.ML cs.LG | Stability in clinical prediction models is crucial for transferability
between studies, yet has received little attention. The problem is paramount in
high dimensional data which invites sparse models with feature selection
capability. We introduce an effective method to stabilize the sparse Cox model of
time-to-events using clinical structures inherent in Electronic Medical
Records. Model estimation is stabilized using a feature graph derived from two
types of EMR structures: temporal structure of disease and intervention
recurrences, and hierarchical structure of medical knowledge and practices. We
demonstrate the efficacy of the method in predicting time-to-readmission of
heart failure patients. On two stability measures - the Jaccard index and the
Consistency index - the use of clinical structures significantly increased
feature stability without hurting discriminative power. Our model reported a
competitive AUC of 0.64 (95% CIs: [0.58,0.69]) for 6 months prediction.
| Shivapratap Gopakumar, Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1407.6094 | null | null |
Permutation Models for Collaborative Ranking | cs.IR cs.LG stat.ML | We study the problem of collaborative filtering where ranking information is
available. Focusing on the core of the collaborative ranking process, the user
and their community, we propose new models for representation of the underlying
permutations and prediction of ranks. The first approach is based on the
assumption that the user makes successive choice of items in a stage-wise
manner. In particular, we extend the Plackett-Luce model in two ways -
introducing parameter factoring to account for user-specific contribution, and
modelling the latent community in a generative setting. The second approach
relies on log-linear parameterisation, which relaxes the discrete-choice
assumption, but makes learning and inference much more involved. We propose
MCMC-based learning and inference methods and derive linear-time prediction
algorithms.
| Truyen Tran and Svetha Venkatesh | null | 1407.6128 | null | null |
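For concreteness, the stage-wise choice probability of the basic Plackett-Luce model (without the paper's user-specific parameter factoring or latent communities) looks like this:

```python
import numpy as np

def plackett_luce_loglik(scores, ranking):
    """Log-likelihood of a full ranking under the Plackett-Luce model: items
    are chosen stage-wise, each with probability proportional to exp(score)
    among the items not yet chosen."""
    s = np.asarray(scores, dtype=float)[list(ranking)]   # scores in ranked order
    return sum(s[k] - np.log(np.exp(s[k:]).sum()) for k in range(len(s)))

# Example: 4 items ranked best-to-worst as (2, 0, 3, 1).
print(plackett_luce_loglik([0.5, -1.0, 2.0, 0.0], (2, 0, 3, 1)))
```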
Content-Level Selective Offloading in Heterogeneous Networks:
Multi-armed Bandit Optimization and Regret Bounds | cs.IT cs.LG math.IT | We consider content-level selective offloading of cellular downlink traffic
to a wireless infostation terminal which stores high data-rate content in its
cache memory. Cellular users in the vicinity of the infostation can directly
download the stored content from the infostation through a broadband connection
(e.g., WiFi), reducing the latency and load on the cellular network. The goal
of the infostation cache controller (CC) is to store the most popular content
in the cache memory such that the maximum amount of traffic is offloaded to the
infostation. In practice, the popularity profile of the files is not known by
the CC, which observes only the instantaneous demands for those contents stored
in the cache. Hence, the cache content placement is optimised based on the
demand history and on the cost associated to placing each content in the cache.
By refreshing the cache content at regular time intervals, the CC gradually
learns the popularity profile, while at the same time exploiting the limited
cache capacity in the best way possible. This is formulated as a multi-armed
bandit (MAB) problem with switching cost. Several algorithms are presented to
decide on the cache content over time. The performance is measured in terms of
cache efficiency, defined as the amount of net traffic that is offloaded to the
infostation. In addition to theoretical regret bounds, the proposed algorithms
are analysed through numerical simulations. In particular, the impact of system
parameters, such as the number of files, number of users, cache size, and
skewness of the popularity profile, on the performance is studied numerically.
It is shown that the proposed algorithms learn the popularity profile quickly
for a wide range of system parameters.
| Pol Blasco and Deniz Gündüz | null | 1407.6154 | null | null |
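A toy cache controller in this spirit, using a plain UCB index per file and ignoring the switching and placement costs that the paper's algorithms account for; the popularity profile and the Poisson demand model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def cache_ucb(popularity, cache_size, periods):
    """Each period, cache the files with the highest UCB index; demands are
    observed only for cached files, and their mean estimates are updated."""
    n = len(popularity)
    counts, means, offloaded = np.zeros(n), np.zeros(n), 0.0
    for t in range(1, periods + 1):
        ucb = means + np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1.0))
        ucb[counts == 0] = np.inf                  # cache every file at least once
        cache = np.argsort(-ucb)[:cache_size]
        demands = rng.poisson(popularity[cache])   # demands seen for cached files only
        offloaded += demands.sum()
        counts[cache] += 1
        means[cache] += (demands - means[cache]) / counts[cache]
    return offloaded

pop = 5.0 * np.arange(20, 0, -1) / 20.0            # skewed popularity profile
print("traffic offloaded:", cache_ucb(pop, cache_size=5, periods=500))
```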
Learning in games via reinforcement and regularization | math.OC cs.GT cs.LG | We investigate a class of reinforcement learning dynamics where players
adjust their strategies based on their actions' cumulative payoffs over time -
specifically, by playing mixed strategies that maximize their expected
cumulative payoff minus a regularization term. A widely studied example is
exponential reinforcement learning, a process induced by an entropic
regularization term which leads mixed strategies to evolve according to the
replicator dynamics. However, in contrast to the class of regularization
functions used to define smooth best responses in models of stochastic
fictitious play, the functions used in this paper need not be infinitely steep
at the boundary of the simplex; in fact, dropping this requirement gives rise
to an important dichotomy between steep and nonsteep cases. In this general
framework, we extend several properties of exponential learning, including the
elimination of dominated strategies, the asymptotic stability of strict Nash
equilibria, and the convergence of time-averaged trajectories in zero-sum games
with an interior Nash equilibrium.
| Panayotis Mertikopoulos and William H. Sandholm | null | 1407.6267 | null | null |
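A minimal sketch of the entropic special case, exponential reinforcement learning, on a zero-sum example; in line with the abstract, the time-averaged strategies converge to the interior equilibrium.

```python
import numpy as np

def exp_reinforcement(payoff, T=5000, eta=0.1):
    """Exponential reinforcement learning in a two-player zero-sum game: each
    player accumulates the payoffs of its pure strategies and plays the logit
    (softmax) mixed strategy, i.e. the choice map induced by an entropic
    regularizer. Returns the time-averaged strategies."""
    A = np.asarray(payoff, dtype=float)       # row player's payoff matrix
    u1, u2 = np.zeros(A.shape[0]), np.zeros(A.shape[1])
    x_bar, y_bar = np.zeros_like(u1), np.zeros_like(u2)
    for _ in range(T):
        x = np.exp(eta * (u1 - u1.max())); x /= x.sum()   # stabilized softmax
        y = np.exp(eta * (u2 - u2.max())); y /= y.sum()
        x_bar += x / T; y_bar += y / T
        u1 += A @ y                           # cumulative payoffs, row player
        u2 -= A.T @ x                         # column player receives -A
    return x_bar, y_bar

# Matching pennies has a unique interior equilibrium at (1/2, 1/2).
print(exp_reinforcement([[1.0, -1.0], [-1.0, 1.0]]))
```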
Quadratically constrained quadratic programming for classification using
particle swarms and applications | cs.AI cs.LG cs.NE math.OC | Particle swarm optimization is used in several combinatorial optimization
problems. In this work, particle swarms are used to solve quadratic programming
problems with quadratic constraints. The approach of particle swarms is an
example for interior point methods in optimization as an iterative technique.
This approach is novel and deals with classification problems without the use
of a traditional classifier. Our method determines the optimal hyperplane or
classification boundary for a data set. In a binary classification problem, we
constrain each class as a cluster, which is enclosed by an ellipsoid. The
estimation of the optimal hyperplane between the two clusters is posed as a
quadratically constrained quadratic problem. The optimization problem is solved
in a distributed format using modified particle swarms. Our method has the
advantage of using the direction towards optimal solution rather than searching
the entire feasible region. Our results on the Iris, Pima, Wine, and Thyroid
datasets show that the proposed method works better than a neural network and
the performance is close to that of SVM.
| Deepak Kumar, A G Ramakrishnan | null | 1407.6315 | null | null |
Learning Structured Outputs from Partial Labels using Forest Ensemble | stat.ML cs.CV cs.LG | Learning structured outputs with general structures is computationally
challenging, except for tree-structured models. Thus we propose an efficient
boosting-based algorithm AdaBoost.MRF for this task. The idea is based on the
realization that a graph is a superimposition of trees. Different from most
existing work, our algorithm can handle partial labelling, and thus is
particularly attractive in practice where reliable labels are often sparsely
observed. In addition, our method works exclusively on trees and thus is
guaranteed to converge. We apply the AdaBoost.MRF algorithm to an indoor video
surveillance scenario, where activities are modelled at multiple levels.
| Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1407.6432 | null | null |
Feature Engineering for Knowledge Base Construction | cs.DB cs.CL cs.LG | Knowledge base construction (KBC) is the process of populating a knowledge
base, i.e., a relational database together with inference rules, with
information extracted from documents and structured sources. KBC blurs the
distinction between two traditional database problems, information extraction
and information integration. For the last several years, our group has been
building knowledge bases with scientific collaborators. Using our approach, we
have built knowledge bases that have comparable and sometimes better quality
than those constructed by human volunteers. In contrast to these knowledge
bases, which took experts a decade or more of human-years to construct, many of
our projects are constructed by a single graduate student.
Our approach to KBC is based on joint probabilistic inference and learning,
but we do not see inference as either a panacea or a magic bullet: inference is
a tool that allows us to be systematic in how we construct, debug, and improve
the quality of such systems. In addition, inference allows us to construct
these systems in a more loosely coupled way than traditional approaches. To
support this idea, we have built the DeepDive system, which has the design goal
of letting the user "think about features, not algorithms." We think of
DeepDive as declarative in that one specifies what they want but not how to get
it. We describe our approach with a focus on feature engineering, which we
argue is an understudied problem relative to its importance to end-to-end
quality.
| Christopher R\'e, Amir Abbas Sadeghian, Zifei Shan, Jaeho Shin, Feiran
Wang, Sen Wu, Ce Zhang | null | 1407.6439 | null | null |
Dissimilarity-based Sparse Subset Selection | cs.LG stat.ML | Finding an informative subset of a large collection of data points or models
is at the center of many problems in computer vision, recommender systems,
bio/health informatics as well as image and natural language processing. Given
pairwise dissimilarities between the elements of a `source set' and a `target
set,' we consider the problem of finding a subset of the source set, called
representatives or exemplars, that can efficiently describe the target set. We
formulate the problem as a row-sparsity regularized trace minimization problem.
Since the proposed formulation is, in general, NP-hard, we consider a convex
relaxation. The solution of our optimization finds representatives and the
assignment of each element of the target set to each representative, hence,
obtaining a clustering. We analyze the solution of our proposed optimization as
a function of the regularization parameter. We show that when the two sets
jointly partition into multiple groups, our algorithm finds representatives
from all groups and reveals clustering of the sets. In addition, we show that
the proposed framework can effectively deal with outliers. Our algorithm works
with arbitrary dissimilarities, which can be asymmetric or violate the triangle
inequality. To efficiently implement our algorithm, we consider an Alternating
Direction Method of Multipliers (ADMM) framework, which results in quadratic
complexity in the problem size. We show that the ADMM implementation makes it
possible to parallelize the algorithm, further reducing the computational time.
Finally, by experiments on real-world datasets, we show that our proposed
algorithm improves the state of the art on the two problems of scene
categorization using representative images and time-series modeling and
segmentation using representative models.
| Ehsan Elhamifar, Guillermo Sapiro and S. Shankar Sastry | null | 1407.6810 | null | null |
Interpretable Low-Rank Document Representations with Label-Dependent
Sparsity Patterns | cs.CL cs.IR cs.LG | In the context of document classification, where the label tags of a corpus
of documents are readily known, an opportunity lies in utilizing label
information to learn document representation spaces with better discriminative
properties. To this end, in this paper application of a Variational Bayesian
Supervised Nonnegative Matrix Factorization (supervised vbNMF) with
label-driven sparsity structure of coefficients is proposed for learning
discriminative, nonsubtractive latent semantic components occurring in TF-IDF
document representations. The constraints force the pursued components to occur
frequently in only a small set of labels, making it
possible to yield document representations with distinctive label-specific
sparse activation patterns. A simple measure of quality of this kind of
sparsity structure, dubbed inter-label sparsity, is introduced and
experimentally brought into tight connection with classification performance.
Representing a great practical convenience, inter-label sparsity is shown to be
easily controlled in supervised vbNMF by a single parameter.
| Ivan Ivek | null | 1407.6872 | null | null |
Your click decides your fate: Inferring Information Processing and
Attrition Behavior from MOOC Video Clickstream Interactions | cs.HC cs.LG | In this work, we explore video lecture interaction in Massive Open Online
Courses (MOOCs), which is central to student learning experience on these
educational platforms. As a research contribution, we operationalize video
lecture clickstreams of students into cognitively plausible higher level
behaviors, and construct a quantitative information processing index, which can
aid instructors to better understand MOOC hurdles and reason about
unsatisfactory learning outcomes. Our results illustrate how such a metric
inspired by cognitive psychology can help answer critical questions regarding
students' engagement, their future click interactions and participation
trajectories that lead to in-video & course dropouts. Implications for research
and practice are discussed.
| Tanmay Sinha, Patrick Jermann, Nan Li, Pierre Dillenbourg | null | 1407.7131 | null | null |
Pairwise Correlations in Layered Close-Packed Structures | cond-mat.mtrl-sci cs.LG | Given a description of the stacking statistics of layered close-packed
structures in the form of a hidden Markov model, we develop analytical
expressions for the pairwise correlation functions between the layers. These
may be calculated analytically as explicit functions of model parameters or the
expressions may be used as a fast, accurate, and efficient way to obtain
numerical values. We present several examples, finding agreement with previous
work as well as deriving new relations.
| P. M. Riechers and D. P. Varn and J. P. Crutchfield | null | 1407.7159 | null | null |
Leveraging user profile attributes for improving pedagogical accuracy of
learning pathways | cs.CY cs.LG | In recent years, with the enormous explosion of web based learning resources,
personalization has become a critical factor for the success of services that
wish to leverage the power of Web 2.0. However, the relevance, significance and
impact of tailored content delivery in the learning domain are still
questionable. Apart from considering only interaction-based features like
ratings and inferring learner preferences from them, if these services were to
incorporate innate user profile attributes which affect learning activities,
the quality of recommendations produced could be vastly improved. Recognizing
the crucial role of effective guidance in informal educational settings, we
provide a principled way of utilizing multiple sources of information from the
user profile itself for the recommendation task. We explore factors that affect
the choice of learning resources and explain how they help improve the
pedagogical accuracy of the recommended learning objects. Through a systematic
application of machine learning techniques, we further provide a
technological solution to convert these indirectly mapped learner specific
attributes into a direct mapping with the learning resources. This mapping has
a distinct advantage of tagging learning resources to make their metadata more
informative. The results of our empirical study depict the similarity of
nominal learning attributes with respect to each other. We further succeed in
capturing the learner subset, whose preferences are most likely to be an
indication of learning resource usage. Our novel system filters learner profile
attributes to discover a tag that links them with learning resources.
| Tanmay Sinha, Ankit Banka, Dae Ki Kang | null | 1407.7260 | null | null |
Online Learning and Profit Maximization from Revealed Preferences | cs.DS cs.GT cs.LG | We consider the problem of learning from revealed preferences in an online
setting. In our framework, each period a consumer buys an optimal bundle of
goods from a merchant according to her (linear) utility function and current
prices, subject to a budget constraint. The merchant observes only the
purchased goods, and seeks to adapt prices to optimize his profits. We give an
efficient algorithm for the merchant's problem that consists of a learning
phase in which the consumer's utility function is (perhaps partially) inferred,
followed by a price optimization step. We also consider an alternative online
learning algorithm for the setting where prices are set exogenously, but the
merchant would still like to predict the bundle that will be bought by the
consumer for purposes of inventory or supply chain management. In contrast with
most prior work on the revealed preferences problem, we demonstrate that by
making stronger assumptions on the form of utility functions, efficient
algorithms for both learning and profit maximization are possible, even in
adaptive, online settings.
| Kareem Amin, Rachel Cummings, Lili Dworkin, Michael Kearns, Aaron Roth | null | 1407.7294 | null | null |
Algorithms, Initializations, and Convergence for the Nonnegative Matrix
Factorization | cs.NA cs.LG stat.ML | It is well known that good initializations can improve the speed and accuracy
of the solutions of many nonnegative matrix factorization (NMF) algorithms.
Many NMF algorithms are sensitive to the initialization of W or H
or both. This is especially true of algorithms of the alternating least squares
(ALS) type, including the two new ALS algorithms that we present in this paper.
We compare the results of six initialization procedures (two standard and four
new) on our ALS algorithms. Lastly, we discuss the practical issue of choosing
an appropriate convergence criterion.
| Amy N. Langville, Carl D. Meyer, Russell Albright, James Cox, David
Duling | null | 1407.7299 | null | null |
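For concreteness, here is a generic alternating-least-squares NMF sketch in Python, with nonnegativity enforced by projection after each exact least-squares solve. The random initialization shown is only one of the procedures one might compare; it is not claimed to match the paper's two new ALS algorithms.

```python
# Generic ALS NMF sketch: solve each least-squares subproblem exactly,
# then project onto the nonnegative orthant. The random initialization
# is just one of several procedures a practitioner might compare.
import numpy as np

def als_nmf(A, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, k))                        # initialize W
    for _ in range(iters):
        H = np.linalg.lstsq(W, A, rcond=None)[0]  # solve W H ~= A for H
        H = np.maximum(H, 0)                      # project H >= 0
        Wt = np.linalg.lstsq(H.T, A.T, rcond=None)[0]
        W = np.maximum(Wt.T, 0)                   # project W >= 0
    return W, H

A = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = als_nmf(A, k=4)
print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```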
'Almost Sure' Chaotic Properties of Machine Learning Methods | cs.LG cs.AI | It has been demonstrated earlier that universal computation is 'almost
surely' chaotic. Machine learning is a form of computational fixed point
iteration, iterating over the computable function space. We showcase some
properties of this iteration, and establish in general that the iteration is
'almost surely' of a chaotic nature. This theory explains the counter-intuitive
properties observed in deep learning methods. This paper demonstrates that
these properties will be universal to any learning method.
| Nabarun Mondal, Partha P. Ghosh | null | 1407.7417 | null | null |
A Fast Synchronization Clustering Algorithm | cs.LG | This paper presents a Fast Synchronization Clustering algorithm (FSynC),
which is an improved version of SynC algorithm. In order to decrease the time
complexity of the original SynC algorithm, we combine grid cell partitioning
method and Red-Black tree to construct the near neighbor point set of every
point. Through simulated experiments on several artificial and real data sets,
we observe that the FSynC algorithm often requires less running time than the
SynC algorithm for many kinds of data sets. Finally, we outline research
directions for popularizing this algorithm.
| Xinquan Chen | null | 1407.7449 | null | null |
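A hedged sketch of the underlying idea: SynC-style clustering iterates a Kuramoto-like update that moves each point toward its eps-neighbors, and FSynC accelerates the neighbor search with a spatial index. The sketch below uses a dictionary of grid cells in place of the paper's grid-partitioning-plus-Red-Black-tree construction; the two-dimensional setting and all parameters are illustrative.

```python
# SynC-style synchronization clustering with a grid-cell neighbor
# search (a dict of cells stands in for the paper's Red-Black-tree
# construction; two-dimensional data for brevity).
import numpy as np
from collections import defaultdict

def sync_cluster(X, eps=0.5, iters=30):
    X = X.astype(float).copy()
    for _ in range(iters):
        grid = defaultdict(list)                  # bucket points into cells
        for i, x in enumerate(X):
            grid[tuple(np.floor(x / eps).astype(int))].append(i)
        X_new = X.copy()
        for i, x in enumerate(X):
            cx, cy = np.floor(x / eps).astype(int)
            cand = [j for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    for j in grid[(cx + dx, cy + dy)]]  # adjacent cells only
            nbrs = [j for j in cand
                    if j != i and np.linalg.norm(X[j] - x) <= eps]
            if nbrs:                              # Kuramoto-like update
                X_new[i] = x + np.mean([np.sin(X[j] - x) for j in nbrs],
                                       axis=0)
        X = X_new
    return X  # points of one cluster end up (nearly) coincident

pts = np.vstack([np.random.default_rng(s).normal(c, 0.1, (20, 2))
                 for s, c in [(0, [0, 0]), (1, [3, 3])]])
print(np.round(sync_cluster(pts), 2)[:5])
```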
Efficient Regularized Regression for Variable Selection with L0 Penalty | cs.LG stat.ML | Variable (feature, gene, model; terms we use interchangeably) selection for
regression with high-dimensional big data has found many applications in
bioinformatics, computational biology, image processing, and engineering. One
appealing approach is the L0 regularized regression which penalizes the number
of nonzero features in the model directly. L0 is known as the most essential
sparsity measure and has nice theoretical properties, while the popular L1
regularization is only the best convex relaxation of L0. Therefore, it is natural
to expect that L0 regularized regression performs better than LASSO. However,
it is well-known that L0 optimization is NP-hard and computationally
challenging. Instead of solving the L0 problems directly, most publications so
far have tried to solve an approximation problem that closely resembles L0
regularization.
In this paper, we propose an efficient EM algorithm (L0EM) that directly
solves the L0 optimization problem. L0EM is efficient with high-dimensional
data. It also provides a natural solution to all Lp (p in [0,2]) problems. The
regularization parameter can be determined through either cross-validation or
AIC and BIC. Theoretical properties of the L0-regularized estimator are given under
mild conditions that permit the number of variables to be much larger than the
sample size. We demonstrate our methods through simulation and high-dimensional
genomic data. The results indicate that L0 performs better than LASSO, and
that L0 with AIC or BIC performs similarly to computationally intensive
cross-validation. The proposed algorithms efficiently identify the non-zero
variables with less bias, and select biologically important genes and pathways
from high-dimensional big data.
| Zhenqiu Liu and Gang Li | null | 1407.7508 | null | null |
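As a hedged illustration of the EM-style approach, the sketch below uses an adaptive-ridge surrogate for L0-penalized least squares, in which small coefficients receive ever larger ridge penalties and are driven to (numerically) zero. This surrogate is an assumption for illustration and is not claimed to reproduce the paper's exact L0EM updates.

```python
# Adaptive-ridge surrogate for L0-penalized least squares: each feature
# gets a ridge weight 1/beta_j^2, so small coefficients are crushed to
# (numerically) zero across iterations. Not claimed to be the paper's
# exact L0EM update; parameters are illustrative.
import numpy as np

def l0_adaptive_ridge(X, y, lam=1.0, iters=50, tol=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(iters):
        w = 1.0 / np.maximum(beta**2, tol)        # per-feature ridge weight
        beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
    beta[np.abs(beta) < 1e-4] = 0.0               # hard-zero tiny entries
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true = np.zeros(20); true[[1, 5, 7]] = [2.0, -3.0, 1.5]
y = X @ true + 0.1 * rng.normal(size=100)
print(np.nonzero(l0_adaptive_ridge(X, y))[0])     # expect ~ [1, 5, 7]
```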
Entropic one-class classifiers | cs.CV cs.LG stat.ML | The one-class classification problem is a well-known research endeavor in
pattern recognition. The problem is also known under different names, such as
outlier and novelty/anomaly detection. The core of the problem consists in
modeling and recognizing patterns belonging only to a so-called target class.
All other patterns are termed non-target, and therefore they should be
recognized as such. In this paper, we propose a novel one-class classification
system that is based on an interplay of different techniques. Primarily, we
follow a dissimilarity representation based approach; we embed the input data
into the dissimilarity space by means of an appropriate parametric
dissimilarity measure. This step allows us to process virtually any type of
data. The dissimilarity vectors are then represented through weighted
Euclidean graphs, which we use to (i) determine the entropy of the data
distribution in the dissimilarity space, and at the same time (ii) derive
effective decision regions that are modeled as clusters of vertices. Since the
dissimilarity measure for the input data is parametric, we optimize its
parameters by means of a global optimization scheme, which considers both
mesoscopic and structural characteristics of the data represented through the
graphs. The proposed one-class classifier is designed to provide both hard
(Boolean) and soft decisions about the recognition of test patterns, allowing
an accurate description of the classification process. We evaluate the
performance of the system on different benchmarking datasets, containing either
feature-based or structured patterns. Experimental results demonstrate the
effectiveness of the proposed technique.
| Lorenzo Livi, Alireza Sadeghian, Witold Pedrycz | 10.1109/TNNLS.2015.2418332 | 1407.7556 | null | null |
Toward a multilevel representation of protein molecules: comparative
approaches to the aggregation/folding propensity problem | cs.CE cs.LG q-bio.BM q-bio.MN | This paper builds upon the fundamental work of Niwa et al. [34], which
provides the unique possibility to analyze the relative aggregation/folding
propensity of the elements of the entire Escherichia coli (E. coli) proteome in
a cell-free standardized microenvironment. The hardness of the problem comes
from the superposition of the driving forces of intra- and inter-molecular
interactions, and it is mirrored by evidence of shifts from folding to
aggregation phenotypes caused by single-point mutations [10]. Here we apply several
state-of-the-art classification methods coming from the field of structural
pattern recognition, with the aim to compare different representations of the
same proteins gathered from the Niwa et al. database; such representations
include sequences and labeled (contact) graphs enriched with chemico-physical
attributes. By this comparison, we are able to identify also some interesting
general properties of proteins. Notably, (i) we suggest a threshold around 250
residues discriminating "easily foldable" from "hardly foldable" molecules
consistent with other independent experiments, and (ii) we highlight the
relevance of contact graph spectra for folding behavior discrimination and
characterization of the E. coli solubility data. The soundness of the
experimental results presented in this paper is proved by the statistically
relevant relationships discovered among the chemico-physical description of
proteins and the developed cost matrix of substitution used in the various
discrimination systems.
| Lorenzo Livi, Alessandro Giuliani, Antonello Rizzi | 10.1016/j.ins.2015.07.043 | 1407.7559 | null | null |
Dependence versus Conditional Dependence in Local Causal Discovery from
Gene Expression Data | q-bio.QM cs.LG stat.ML | Motivation: Algorithms that discover variables which are causally related to
a target may inform the design of experiments. With observational gene
expression data, many methods discover causal variables by measuring each
variable's degree of statistical dependence with the target using dependence
measures (DMs). However, other methods measure each variable's ability to
explain the statistical dependence between the target and the remaining
variables in the data using conditional dependence measures (CDMs), since this
strategy is guaranteed to find the target's direct causes, direct effects, and
direct causes of the direct effects in the infinite sample limit. In this
paper, we design a new algorithm in order to systematically compare the
relative abilities of DMs and CDMs in discovering causal variables from gene
expression data.
Results: The proposed algorithm using a CDM is sample efficient, since it
consistently outperforms other state-of-the-art local causal discovery
algorithms when sample sizes are small. However, the proposed algorithm using
a CDM outperforms the proposed algorithm using a DM only when sample sizes are
above several hundred. These results suggest that accurate causal discovery
from gene expression data using current CDM-based algorithms requires datasets
with at least several hundred samples.
Availability: The proposed algorithm is freely available at
https://github.com/ericstrobl/DvCD.
| Eric V. Strobl, Shyam Visweswaran | null | 1407.7566 | null | null |
Dynamic Feature Scaling for Online Learning of Binary Classifiers | cs.LG stat.ML | Scaling feature values is an important step in numerous machine learning
tasks. Different features can have different value ranges, and some form of
feature scaling is often required in order to learn an accurate classifier.
However, feature scaling is usually conducted as a preprocessing task prior to
learning. This is problematic in an online setting for two reasons. First, it
might not be possible to accurately determine the value range of a feature at
the initial stages of learning, when only a small number of training instances
have been observed. Second, the distribution of the data can change over time,
which renders obsolete any feature scaling performed in a preprocessing step.
We propose a simple but effective method to dynamically
scale features at train time, thereby quickly adapting to any changes in the
data stream. We compare the proposed dynamic feature scaling method against
more complex methods for estimating scaling parameters using several benchmark
datasets for binary classification. Our proposed feature scaling method
consistently outperforms more complex methods on all of the benchmark datasets
and improves classification accuracy of a state-of-the-art online binary
classifier algorithm.
| Danushka Bollegala | null | 1407.7584 | null | null |
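A minimal sketch of the idea, assuming per-feature running statistics: Welford's online algorithm updates each feature's mean and variance as instances stream in, so scaling adapts to drift without a preprocessing pass. The perceptron learner and the toy drifting stream are illustrative additions.

```python
# Dynamic (online) feature scaling: per-feature running mean/variance
# updated with Welford's method as each instance arrives, so scaling
# adapts to distribution drift. The perceptron is only there to make
# the sketch self-contained.
import numpy as np

class OnlineScaler:
    def __init__(self, dim):
        self.n, self.mean, self.m2 = 0, np.zeros(dim), np.zeros(dim)

    def update_and_scale(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n           # Welford mean update
        self.m2 += delta * (x - self.mean)    # Welford variance update
        std = np.sqrt(self.m2 / max(self.n - 1, 1)) + 1e-8
        return (x - self.mean) / std

rng = np.random.default_rng(0)
dim, w, scaler = 5, np.zeros(5), OnlineScaler(5)
for t in range(1000):
    x = rng.normal(loc=10.0 * (t > 500), scale=3.0, size=dim)  # drift at t=500
    y = 1 if x[0] > x[1] else -1                               # toy label
    z = scaler.update_and_scale(x)
    if y * (w @ z) <= 0:                                       # perceptron step
        w += y * z
print("learned weights:", np.round(w, 2))
```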
Chasing Ghosts: Competing with Stateful Policies | cs.LG | We consider sequential decision making in a setting where regret is measured
with respect to a set of stateful reference policies, and feedback is limited
to observing the rewards of the actions performed (the so called "bandit"
setting). If either the reference policies are stateless rather than stateful,
or the feedback includes the rewards of all actions (the so called "expert"
setting), previous work shows that the optimal regret grows like
$\Theta(\sqrt{T})$ in terms of the number of decision rounds $T$.
The difficulty in our setting is that the decision maker unavoidably loses
track of the internal states of the reference policies, and thus cannot
reliably attribute rewards observed in a certain round to any of the reference
policies. In fact, in this setting it is impossible for the algorithm to
estimate which policy gives the highest (or even approximately highest) total
reward. Nevertheless, we design an algorithm that achieves expected regret that
is sublinear in $T$, of the form $O( T/\log^{1/4}{T})$. Our algorithm is based
on a certain local repetition lemma that may be of independent interest. We
also show that no algorithm can guarantee expected regret better than $O(
T/\log^{3/2} T)$.
| Uriel Feige, Tomer Koren, Moshe Tennenholtz | null | 1407.7635 | null | null |
Estimating the Accuracies of Multiple Classifiers Without Labeled Data | stat.ML cs.LG | In various situations one is given only the predictions of multiple
classifiers over a large unlabeled test data. This scenario raises the
following questions: Without any labeled data and without any a-priori
knowledge about the reliability of these different classifiers, is it possible
to consistently and computationally efficiently estimate their accuracies?
Furthermore, also in a completely unsupervised manner, can one construct a more
accurate unsupervised ensemble classifier? In this paper, focusing on the
binary case, we present simple, computationally efficient algorithms to solve
these questions. Furthermore, under standard classifier independence
assumptions, we prove our methods are consistent and study their asymptotic
error. Our approach is spectral, based on the fact that the off-diagonal
entries of the classifiers' covariance matrix and 3-d tensor are rank-one. We
illustrate the competitive performance of our algorithms via extensive
experiments on both artificial and real datasets.
| Ariel Jaffe, Boaz Nadler and Yuval Kluger | null | 1407.7644 | null | null |
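The rank-one structure can be made concrete with a small simulation: for conditionally independent binary (+/-1) classifiers on balanced classes, the off-diagonal covariance entries factor as c_ij = v_i * v_j with v_i = 2*pi_i - 1, where pi_i is classifier i's accuracy, so any |v_i| is recoverable from a covariance triplet. The simulation parameters below are illustrative.

```python
# Rank-one covariance idea: with conditionally independent +/-1
# classifiers and balanced classes, cov(f_i, f_j) = v_i * v_j where
# v_i = 2*acc_i - 1, so a covariance triplet recovers each |v_i|.
import numpy as np

rng = np.random.default_rng(0)
n, accs = 20000, [0.9, 0.75, 0.6]
truth = rng.choice([-1, 1], size=n)                 # hidden labels
preds = np.array([np.where(rng.random(n) < a, truth, -truth)
                  for a in accs])                   # independent errors

C = np.cov(preds)                                   # 3x3 covariance
i, j, k = 0, 1, 2
v0 = np.sqrt(C[i, j] * C[i, k] / C[j, k])           # triplet identity
print("estimated accuracy of classifier 0:", (1 + v0) / 2)  # ~0.9
```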
NMF with Sparse Regularizations in Transformed Domains | stat.ML cs.LG | Non-negative blind source separation (non-negative BSS), which is also
referred to as non-negative matrix factorization (NMF), is a very active field
in domains as different as astrophysics, audio processing or biomedical signal
processing. In this context, the efficient retrieval of the sources requires
the use of signal priors such as sparsity. While NMF has now been well studied
with sparse constraints in the direct domain, only very few algorithms can
encompass non-negativity together with sparsity in a transformed domain since
simultaneously dealing with two priors in two different domains is challenging.
In this article, we show how a sparse NMF algorithm coined non-negative
generalized morphological component analysis (nGMCA) can be extended to impose
non-negativity in the direct domain along with sparsity in a transformed
domain, with both analysis and synthesis formulations. To our knowledge, this
work presents the first comparison of analysis and synthesis priors ---as well
as their reweighted versions--- in the context of blind source separation.
Comparisons with state-of-the-art NMF algorithms on realistic data show the
efficiency as well as the robustness of the proposed algorithms.
| J\'er\'emy Rapin and J\'er\^ome Bobin and Anthony Larue and Jean-Luc
Starck | 10.1137/140952314 | 1407.7691 | null | null |
OpenML: networked science in machine learning | cs.LG cs.CY | Many sciences have made significant breakthroughs by adopting online tools
that help organize, structure and mine information that is too detailed to be
printed in journals. In this paper, we introduce OpenML, a place for machine
learning researchers to share and organize data in fine detail, so that they
can work more effectively, be more visible, and collaborate with others to
tackle harder problems. We discuss how OpenML relates to other examples of
networked science and what benefits it brings for machine learning research,
individual scientists, as well as students and practitioners.
| Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo | 10.1145/2641190.2641198 | 1407.7722 | null | null |
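For orientation, a minimal interaction with OpenML through its Python client might look as follows; this assumes the client's current public API (pip install openml) and uses dataset id 61, the classic Iris set on the public server.

```python
# Hedged sketch of fetching a dataset from OpenML via its Python
# client; the calls reflect the client's public API at the time of
# writing, and dataset id 61 is Iris on the public server.
import openml

dataset = openml.datasets.get_dataset(61)          # fetch Iris by id
X, y, categorical, names = dataset.get_data(
    target=dataset.default_target_attribute)
print(dataset.name, X.shape)
```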
A Hash-based Co-Clustering Algorithm for Categorical Data | cs.LG | Many real-life data are described by categorical attributes without a
pre-classification. A common data mining method used to extract information
from this type of data is clustering, which groups together samples that are
more similar to each other than to all other samples. However, categorical data
pose two challenges for information extraction: the similarity of two objects
is usually computed by counting their common features, which ignores any
importance weighting; and if the data can be partitioned differently according
to different subsets of the features, the algorithm may find clusters with
meanings that differ from each other, complicating the subsequent analysis.
Co-clustering of categorical data is the technique that tries to find subsets
of samples that share a common subset of features. By doing so, not only may a
sample belong to more than one cluster, but the feature selection of each
cluster also describes its own characteristics. In this paper, a novel
co-clustering technique for categorical data is proposed that uses the
Locality-Sensitive Hashing technique to preprocess a list of co-cluster seeds,
building on previous research. Results indicate this technique
is capable of finding high quality Co-Clusters in many different categorical
data sets and scales linearly with the data set size.
| Fabricio Olivetti de Fran\c{c}a | null | 1407.7753 | null | null |
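A hedged sketch of the seeding step: minhash-based locality-sensitive hashing buckets samples by their sets of (attribute, value) pairs, so samples that collide in some band share many features with high probability and can serve as co-cluster seed candidates. The signature length and one-row banding are illustrative choices, not the paper's exact configuration.

```python
# Minhash LSH over categorical records: samples colliding in some band
# share many (attribute, value) pairs with high probability and form
# candidate co-cluster seeds. Signature length and banding are
# illustrative.
import random
from collections import defaultdict

random.seed(0)
masks = [random.getrandbits(32) for _ in range(4)]
hashers = [lambda item, m=m: hash(item) ^ m for m in masks]

samples = {                     # categorical records as (attr, value) sets
    "s1": {("color", "red"), ("size", "L"), ("shape", "round")},
    "s2": {("color", "red"), ("size", "L"), ("shape", "square")},
    "s3": {("color", "blue"), ("size", "S"), ("shape", "flat")},
}

buckets = defaultdict(set)
for name, feats in samples.items():
    sig = [min(h(f) for f in feats) for h in hashers]  # minhash signature
    for band, value in enumerate(sig):                 # one-row bands
        buckets[(band, value)].add(name)

seeds = {frozenset(g) for g in buckets.values() if len(g) > 1}
print("candidate co-cluster seeds:", [sorted(s) for s in seeds])
# s1 and s2 share half their features and are likely to collide.
```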
Sure Screening for Gaussian Graphical Models | stat.ML cs.LG | We propose graphical sure screening, or GRASS, a very simple and
computationally-efficient screening procedure for recovering the structure of a
Gaussian graphical model in the high-dimensional setting. The GRASS estimate of
the conditional dependence graph is obtained by thresholding the elements of
the sample covariance matrix. The proposed approach possesses the sure
screening property: with very high probability, the GRASS estimated edge set
contains the true edge set. Furthermore, with high probability, the size of the
estimated edge set is controlled. We provide a choice of threshold for GRASS
that can control the expected false positive rate. We illustrate the
performance of GRASS in a simulation study and on a gene expression data set,
and show that in practice it performs quite competitively with more complex and
computationally-demanding techniques for graph estimation.
| Shikai Luo, Rui Song, Daniela Witten | null | 1407.7819 | null | null |
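The screening step itself is a one-liner, as the following minimal sketch shows (on the correlation scale; the threshold value and toy chain-structured data are illustrative, not the paper's calibrated choice).

```python
# GRASS-style screening: threshold off-diagonal entries of the sample
# covariance (shown here on the correlation scale) to get a candidate
# edge set that, with high probability, contains the true edges.
import numpy as np

rng = np.random.default_rng(0)
n, p, tau = 200, 10, 0.3
X = rng.normal(size=(n, p))
X[:, 1] += X[:, 0]; X[:, 2] += X[:, 1]       # induce a chain of dependences

R = np.corrcoef(X, rowvar=False)             # sample correlation matrix
edges = [(i, j) for i in range(p) for j in range(i + 1, p)
         if abs(R[i, j]) > tau]              # keep strong off-diagonals
print("estimated edges:", edges)
```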
How Auto-Encoders Could Provide Credit Assignment in Deep Networks via
Target Propagation | cs.LG | We propose to exploit {\em reconstruction} as a layer-local training signal
for deep learning. Reconstructions can be propagated in a form of target
propagation playing a role similar to back-propagation but helping to reduce
the reliance on derivatives in order to perform credit assignment across many
levels of possibly strong non-linearities (which is difficult for
back-propagation). A regularized auto-encoder tends to produce a reconstruction
that is a more likely version of its input, i.e., a small move in the direction
of higher likelihood. By generalizing gradients, target propagation may also
make it possible to train deep networks with discrete hidden units. If the auto-encoder
takes both a representation of input and target (or of any side information) in
input, then its reconstruction of input representation provides a target
towards a representation that is more likely, conditioned on all the side
information. A deep auto-encoder decoding path generalizes gradient propagation
in a learned way that could thus handle not just infinitesimal changes but
larger, discrete changes, hopefully allowing credit assignment through a long
chain of non-linear operations. In addition to each layer being a good
auto-encoder, the encoder also learns to please the upper layers by
transforming the data into a space where it is easier to model by them,
flattening manifolds and disentangling factors. The motivations and theoretical
justifications for this approach are laid down in this paper, along with
conjectures that will have to be verified either mathematically or
experimentally, including a hypothesis stating that such auto-encoder mediated
target propagation could play in brains the role of credit assignment through
many non-linear, noisy and discrete transformations.
| Yoshua Bengio | null | 1407.7906 | null | null |
Learning Economic Parameters from Revealed Preferences | cs.GT cs.LG | A recent line of work, starting with Beigman and Vohra (2006) and
Zadimoghaddam and Roth (2012), has addressed the problem of {\em learning} a
utility function from revealed preference data. The goal here is to make use of
past data describing the purchases of a utility maximizing agent when faced
with certain prices and budget constraints in order to produce a hypothesis
function that can accurately forecast the {\em future} behavior of the agent.
In this work we advance this line of work by providing sample complexity
guarantees and efficient algorithms for a number of important classes. By
drawing a connection to recent advances in multi-class learning, we provide a
computationally efficient algorithm with tight sample complexity guarantees
($\Theta(d/\epsilon)$ for the case of $d$ goods) for learning linear utility
functions under a linear price model. This solves an open question in
Zadimoghaddam and Roth (2012). Our technique yields numerous generalizations
including the ability to learn other well-studied classes of utility functions,
to deal with a misspecified model, and with non-linear prices.
| Maria-Florina Balcan, Amit Daniely, Ruta Mehta, Ruth Urner, and Vijay
V. Vazirani | null | 1407.7937 | null | null |
Targeting Optimal Active Learning via Example Quality | stat.ML cs.LG | In many classification problems unlabelled data is abundant and a subset can
be chosen for labelling. This defines the context of active learning (AL),
where methods systematically select that subset, to improve a classifier by
retraining. Given a classification problem, and a classifier trained on a small
number of labelled examples, consider the selection of a single further
example. This example will be labelled by the oracle and then used to retrain
the classifier. This example selection raises a central question: given a fully
specified stochastic description of the classification problem, which example
is the optimal selection? If optimality is defined in terms of loss, this
definition directly produces expected loss reduction (ELR), a central quantity
whose maximum yields the optimal example selection. This work presents a new
theoretical approach to AL, example quality, which defines optimal AL behaviour
in terms of ELR. Once optimal AL behaviour is defined mathematically, reasoning
about this abstraction provides insights into AL. In a theoretical context the
optimal selection is compared to existing AL methods, showing that heuristics
can make sub-optimal selections. Algorithms are constructed to estimate example
quality directly. A large-scale experimental study shows these algorithms to be
competitive with standard AL methods.
| Lewis P. G. Evans and Niall M. Adams and Christoforos Anagnostopoulos | null | 1407.8042 | null | null |
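A minimal sketch of scoring candidates by expected loss reduction follows: for each pool example, hypothesise each possible label, retrain, and weight the resulting loss change by the current model's predicted label probability. scikit-learn's LogisticRegression is used purely as a convenient probabilistic classifier, and the confidence-based loss proxy is an illustrative assumption.

```python
# Expected loss reduction (ELR) scoring: hypothesise each label for a
# candidate, retrain, and weight the loss change by the current model's
# predicted label probability. The confidence-based loss proxy is an
# illustrative stand-in for a true expected 0-1 loss.
import numpy as np
from sklearn.linear_model import LogisticRegression

def expected_loss(clf, X_eval):
    return np.mean(1.0 - clf.predict_proba(X_eval).max(axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
lab = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(60) if i not in lab]

clf = LogisticRegression().fit(X[lab], y[lab])
base = expected_loss(clf, X)
scores = []
for i in pool:
    p = clf.predict_proba(X[i:i + 1])[0]
    elr = 0.0
    for label in (0, 1):                       # imagine each label
        clf2 = LogisticRegression().fit(
            np.vstack([X[lab], X[i]]), np.append(y[lab], label))
        elr += p[label] * (base - expected_loss(clf2, X))
    scores.append(elr)
print("best query:", pool[int(np.argmax(scores))])
```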
Differentially-Private Logistic Regression for Detecting Multiple-SNP
Association in GWAS Databases | stat.ML cs.LG stat.AP | Following the publication of an attack on genome-wide association studies
(GWAS) data proposed by Homer et al., considerable attention has been given to
developing methods for releasing GWAS data in a privacy-preserving way. Here,
we develop an end-to-end differentially private method for solving regression
problems with convex penalty functions and selecting the penalty parameters by
cross-validation. In particular, we focus on penalized logistic regression with
elastic-net regularization, a method widely used in GWAS analyses to
identify disease-causing genes. We show how a differentially private procedure
for penalized logistic regression with elastic-net regularization can be
applied to the analysis of GWAS data and evaluate our method's performance.
| Fei Yu, Michal Rybar, Caroline Uhler, Stephen E. Fienberg | null | 1407.8067 | null | null |
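As a hedged illustration of one standard mechanism in this space, the sketch below applies output perturbation to L2-regularized logistic regression, adding noise whose norm is Gamma-distributed and scaled to the sensitivity 2/(n*lambda) of the regularized minimizer (following Chaudhuri et al.). This generic mechanism is an assumption for illustration; it is not the paper's elastic-net plus cross-validation pipeline.

```python
# Output perturbation for L2-regularized logistic regression: add noise
# with Gamma-distributed norm scaled to the sensitivity 2/(n*lambda)
# (Chaudhuri et al.). Generic mechanism shown for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, eps, lam = 500, 5, 1.0, 0.1
X = rng.normal(size=(n, d))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # ||x|| <= 1
y = (X @ rng.normal(size=d) > 0).astype(int)

# sklearn's C maps to 1/(n*lambda) for the (1/n)*loss + (lam/2)||w||^2 form
clf = LogisticRegression(C=1.0 / (n * lam)).fit(X, y)
w = clf.coef_.ravel()

sensitivity = 2.0 / (n * lam)
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
w_private = w + rng.gamma(shape=d, scale=sensitivity / eps) * direction
print(np.round(w, 3)); print(np.round(w_private, 3))
```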
The Grow-Shrink strategy for learning Markov network structures
constrained by context-specific independences | cs.LG cs.DS | Markov networks are models for compactly representing complex probability
distributions. They are composed by a structure and a set of numerical weights.
The structure qualitatively describes independences in the distribution, which
can be exploited to factorize the distribution into a set of compact functions.
A key application for learning structures from data is to automatically
discover knowledge. In practice, structure learning algorithms focused on
"knowledge discovery" present a limitation: they use a coarse-grained
representation of the structure. As a result, this representation cannot
describe context-specific independences. Very recently, an algorithm called
CSPC was designed to overcome this limitation, but it has a high computational
complexity. This work tries to mitigate this downside by presenting CSGS, an
algorithm that uses the Grow-Shrink strategy for reducing unnecessary
computations. In an empirical evaluation, the structures learned by CSGS
achieve accuracies competitive with those obtained by CSPC, at a lower
computational cost.
| Alejandro Edera, Yanela Strappa and Facundo Bromberg | null | 1407.8088 | null | null |
Stochastic Coordinate Coding and Its Application for Drosophila Gene
Expression Pattern Annotation | cs.LG cs.CE | \textit{Drosophila melanogaster} has been established as a model organism for
investigating the fundamental principles of developmental gene interactions.
The gene expression patterns of \textit{Drosophila melanogaster} can be
documented as digital images, which are annotated with anatomical ontology
terms to facilitate pattern discovery and comparison. The automated annotation
of gene expression pattern images has received increasing attention due to the
recent expansion of the image database. The effectiveness of gene expression
pattern annotation relies on the quality of feature representation. Previous
studies have demonstrated that sparse coding is effective for extracting
features from gene expression images. However, solving sparse coding remains a
computationally challenging problem, especially when dealing with large-scale
data sets and learning large size dictionaries. In this paper, we propose a
novel algorithm to solve the sparse coding problem, called Stochastic
Coordinate Coding (SCC). The proposed algorithm alternately updates the
sparse codes via just a few steps of coordinate descent and updates the
dictionary via second order stochastic gradient descent. The computational cost
is further reduced by focusing on the non-zero components of the sparse codes
and the corresponding columns of the dictionary only in the updating procedure.
Thus, the proposed algorithm significantly improves the efficiency and the
scalability, making sparse coding applicable for large-scale data sets and
large dictionary sizes. Our experiments on Drosophila gene expression data sets
demonstrate the efficiency and the effectiveness of the proposed algorithm.
| Binbin Lin, Qingyang Li, Qian Sun, Ming-Jun Lai, Ian Davidson, Wei
Fan, Jieping Ye | null | 1407.8147 | null | null |
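A hedged sketch of one SCC round on a single sample: a few coordinate-descent passes update the sparse code, then a stochastic-gradient step touches only the dictionary columns with non-zero code entries. Step sizes, pass counts, and the projection used for normalization are illustrative assumptions.

```python
# One SCC-style round: coordinate descent on the sparse code, then an
# SGD step on only the dictionary columns with non-zero code entries.
import numpy as np

def soft(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def scc_step(D, x, z, lam=0.1, cd_passes=3, lr=0.05):
    for _ in range(cd_passes):                 # coordinate descent on code z
        for j in range(D.shape[1]):
            r = x - D @ z + D[:, j] * z[j]     # residual excluding atom j
            z[j] = soft(D[:, j] @ r, lam) / (D[:, j] @ D[:, j])
    nz = np.nonzero(z)[0]                      # only touched atoms update
    resid = D @ z - x
    D[:, nz] -= lr * np.outer(resid, z[nz])    # SGD step on active columns
    D[:, nz] /= np.maximum(np.linalg.norm(D[:, nz], axis=0), 1.0)
    return D, z

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 32)); D /= np.linalg.norm(D, axis=0)
x, z = rng.normal(size=16), np.zeros(32)
D, z = scc_step(D, x, z)
print("non-zeros in code:", np.count_nonzero(z))
```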
Fast Bayesian Feature Selection for High Dimensional Linear Regression
in Genomics via the Ising Approximation | q-bio.QM cs.LG stat.ML | Feature selection, identifying a subset of variables that are relevant for
predicting a response, is an important and challenging component of many
methods in statistics and machine learning. Feature selection is especially
difficult and computationally intensive when the number of variables approaches
or exceeds the number of samples, as is often the case for many genomic
datasets. Here, we introduce a new approach -- the Bayesian Ising Approximation
(BIA) -- to rapidly calculate posterior probabilities for feature relevance in
L2 penalized linear regression. In the regime where the regression problem is
strongly regularized by the prior, we show that computing the marginal
posterior probabilities for features is equivalent to computing the
magnetizations of an Ising model. Using a mean field approximation, we show it
is possible to rapidly compute the feature selection path described by the
posterior probabilities as a function of the L2 penalty. We present simulations
and analytical results illustrating the accuracy of the BIA on some simple
regression problems. Finally, we demonstrate the applicability of the BIA to
high dimensional regression by analyzing a gene expression dataset with nearly
30,000 features.
| Charles K. Fisher, Pankaj Mehta | null | 1407.8187 | null | null |
DuSK: A Dual Structure-preserving Kernel for Supervised Tensor Learning
with Applications to Neuroimages | cs.LG | With advances in data collection technologies, tensor data is assuming
increasing prominence in many applications and the problem of supervised tensor
learning has emerged as a topic of critical significance in the data mining and
machine learning community. Conventional methods for supervised tensor learning
mainly focus on learning kernels by flattening the tensor into vectors or
matrices; however, structural information within the tensors will be lost. In
this paper, we introduce a new scheme to design structure-preserving kernels
for supervised tensor learning. Specifically, we demonstrate how to leverage
the naturally available structure within the tensorial representation to encode
prior knowledge in the kernel. We propose a tensor kernel that can preserve
tensor structures based upon a dual-tensorial mapping. The dual-tensorial mapping
function can map each tensor instance in the input space to another tensor in
the feature space while preserving the tensorial structure. Theoretically, our
approach is an extension of the conventional kernels in the vector space to
tensor space. We applied our novel kernel in conjunction with SVM to real-world
tensor classification problems including brain fMRI classification for three
different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV).
Extensive empirical studies demonstrate that our proposed approach can
effectively boost tensor classification performances, particularly with small
sample sizes.
| Lifang He, Xiangnan Kong, Philip S. Yu, Ann B. Ragin, Zhifeng Hao,
Xiaowei Yang | null | 1407.8289 | null | null |
Combinatorial Multi-Armed Bandit and Its Extension to Probabilistically
Triggered Arms | cs.LG | We define a general framework for a large class of combinatorial multi-armed
bandit (CMAB) problems, where subsets of base arms with unknown distributions
form super arms. In each round, a super arm is played and the base arms
contained in the super arm are played and their outcomes are observed. We
further consider the extension in which more base arms could be
probabilistically triggered based on the outcomes of already triggered arms.
The reward of the super arm depends on the outcomes of all played arms, and it
only needs to satisfy two mild assumptions, which allow a large class of
nonlinear reward instances. We assume the availability of an offline
$(\alpha,\beta)$-approximation oracle that takes the means of the outcome
distributions of arms and outputs a super arm that, with probability $\beta$,
generates an $\alpha$ fraction of the optimal expected reward. The objective of
an online learning algorithm for CMAB is to minimize the
$(\alpha,\beta)$-approximation regret, which is the difference between the
$\alpha\beta$ fraction of the expected reward when always playing the optimal
super arm, and the expected reward of playing super arms according to the
algorithm. We provide the CUCB algorithm, which achieves $O(\log n)$
distribution-dependent regret, where n is the number of rounds played, and we
further provide distribution-independent bounds for a large class of reward
functions. Our regret analysis is tight in that it matches the bound of UCB1
algorithm (up to a constant factor) for the classical MAB problem, and it
significantly improves the regret bound of an earlier paper on combinatorial
bandits with linear rewards. We apply our CMAB framework to two new
applications, probabilistic maximum coverage and social influence maximization,
both having nonlinear reward structures. In particular, application to social
influence maximization requires our extension on probabilistically triggered
arms.
| Wei Chen, Yajun Wang, Yang Yuan, Qinshi Wang | null | 1407.8339 | null | null |
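The CUCB loop itself is short, as the following sketch shows: maintain an empirical mean and play count per base arm, inflate the means by a UCB bonus, and hand the inflated vector to the offline oracle. The toy top-k oracle stands in for an arbitrary (alpha, beta)-approximation oracle, and the exploration constant is illustrative.

```python
# CUCB sketch: per-base-arm empirical means inflated by a UCB bonus are
# handed to an offline oracle that picks the super arm. The top-k
# oracle below is a toy stand-in for any approximation oracle.
import numpy as np

rng = np.random.default_rng(0)
true_means, k, T = np.array([0.9, 0.8, 0.3, 0.2, 0.1]), 2, 5000
m = len(true_means)
counts, sums = np.zeros(m), np.zeros(m)

def oracle(mu_hat, k):
    return np.argsort(mu_hat)[-k:]              # toy oracle: best k arms

for i in range(m):                               # play each arm once
    counts[i] += 1; sums[i] += rng.random() < true_means[i]
for t in range(m, T):
    ucb = sums / counts + np.sqrt(1.5 * np.log(t) / counts)
    super_arm = oracle(ucb, k)
    for i in super_arm:                          # observe all played base arms
        counts[i] += 1; sums[i] += rng.random() < true_means[i]
print("play counts:", counts.astype(int))        # mass concentrates on arms 0,1
```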
Beyond KernelBoost | cs.CV cs.LG | In this Technical Report we propose a set of improvements with respect to the
KernelBoost classifier presented in [Becker et al., MICCAI 2013]. We start with
a scheme inspired by Auto-Context, but that is suitable in situations where the
lack of large training sets poses a potential problem of overfitting. The aim
is to capture the interactions between neighboring image pixels to better
regularize the boundaries of segmented regions. As in Auto-Context [Tu et al.,
PAMI 2009] the segmentation process is iterative and, at each iteration, the
segmentation results for the previous iterations are taken into account in
conjunction with the image itself. However, unlike in [Tu et al., PAMI 2009],
we organize our recursion so that the classifiers can progressively focus on
difficult-to-classify locations. This lets us exploit the power of the
decision-tree paradigm while avoiding over-fitting. In the context of this
architecture, KernelBoost represents a powerful building block due to its
ability to learn on the score maps coming from previous iterations. We first
introduce two important mechanisms to empower the KernelBoost classifier,
namely pooling and the clustering of positive samples based on the appearance
of the corresponding ground-truth. These operations significantly contribute to
increase the effectiveness of the system on biomedical images, where texture
plays a major role in the recognition of the different image components. We
then present some other techniques that can be easily integrated in the
KernelBoost framework to further improve the accuracy of the final
segmentation. We show extensive results on different medical image datasets,
including some multi-label tasks, on which our method is shown to outperform
state-of-the-art approaches. The resulting segmentations display high accuracy,
neat contours, and reduced noise.
| Roberto Rigamonti, Vincent Lepetit, Pascal Fua | null | 1407.8518 | null | null |
Learning Nash Equilibria in Congestion Games | cs.LG cs.GT | We study the repeated congestion game, in which multiple populations of
players share resources, and make, at each iteration, a decentralized decision
on which resources to utilize. We investigate the following question: given a
model of how individual players update their strategies, does the resulting
dynamics of strategy profiles converge to the set of Nash equilibria of the
one-shot game? We consider in particular a model in which players update their
strategies using algorithms with sublinear discounted regret. We show that the
resulting sequence of strategy profiles converges to the set of Nash equilibria
in the sense of Ces\`aro means. However, strong convergence is not guaranteed
in general. We show that strong convergence can be guaranteed for a class of
algorithms with a vanishing upper bound on discounted regret, and which satisfy
an additional condition. We call such algorithms AREP algorithms, for
Approximate REPlicator, as they can be interpreted as a discrete-time
approximation of the replicator equation, which models the continuous-time
evolution of population strategies, and which is known to converge for the
class of congestion games. In particular, we show that the discounted Hedge
algorithm belongs to the AREP class, which guarantees its strong convergence.
| Walid Krichene, Benjamin Drigh\`es and Alexandre M. Bayen | null | 1408.0017 | null | null |
Learning From Ordered Sets and Applications in Collaborative Ranking | cs.LG cs.IR stat.ML | Ranking over sets arises when users choose between groups of items. For
example, a group may consist of the movies a user deems $5$ stars, or a
customized tour package. It turns out that, to model this data type properly, we
need to investigate the general combinatorics problem of partitioning a set and
ordering the subsets. Here we construct a probabilistic log-linear model over a
set of ordered subsets. Inference in this combinatorial space is highly
challenging: The space size approaches $(N!/2)6.93145^{N+1}$ as $N$ approaches
infinity. We propose a \texttt{split-and-merge} Metropolis-Hastings procedure
that can explore the state-space efficiently. For discovering hidden aspects in
the data, we enrich the model with latent binary variables so that the
posteriors can be efficiently evaluated. Finally, we evaluate the proposed
model on large-scale collaborative filtering tasks and demonstrate that it is
competitive against state-of-the-art methods.
| Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1408.0043 | null | null |
Cumulative Restricted Boltzmann Machines for Ordinal Matrix Data
Analysis | stat.ML cs.IR cs.LG stat.AP stat.ME | Ordinal data is omnipresent in almost all multiuser-generated feedback -
questionnaires, preferences etc. This paper investigates modelling of ordinal
data with Gaussian restricted Boltzmann machines (RBMs). In particular, we
present the model architecture, learning and inference procedures for both
vector-variate and matrix-variate ordinal data. We show that our model is able
to capture latent opinion profile of citizens around the world, and is
competitive against state-of-the-art collaborative filtering techniques on
large-scale public datasets. The model thus has the potential to extend
application of RBMs to diverse domains such as recommendation systems, product
reviews and expert assessments.
| Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1408.0047 | null | null |
Thurstonian Boltzmann Machines: Learning from Multiple Inequalities | stat.ML cs.LG stat.ME | We introduce Thurstonian Boltzmann Machines (TBM), a unified architecture
that can naturally incorporate a wide range of data inputs at the same time.
Our motivation rests in the Thurstonian view that many discrete data types can
be considered as being generated from a subset of underlying latent continuous
variables, and in the observation that each realisation of a discrete type
imposes certain inequalities on those variables. Thus learning and inference in
TBM reduce to making sense of a set of inequalities. Our proposed TBM naturally
supports the following types: Gaussian, intervals, censored, binary,
categorical, multicategorical, ordinal, and (in)complete ranks with and without
ties. We demonstrate the versatility and capacity of the proposed model on
three applications of very different natures; namely handwritten digit
recognition, collaborative filtering and complex social survey analysis.
| Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1408.0055 | null | null |
A Framework for learning multi-agent dynamic formation strategy in
real-time applications | cs.RO cs.LG cs.MA | Formation strategy is one of the most important parts of many multi-agent
systems with many applications in real world problems. In this paper, a
framework for learning this task in a limited domain (restricted environment)
is proposed. In this framework, agents learn either directly by observing an
expert behavior or indirectly by observing other agents or objects behavior.
First, a group of algorithms for learning formation strategy based on limited
features will be presented. Due to distributed and complex nature of many
multi-agent systems, it is impossible to include all features directly in the
learning process; thus, a modular scheme is proposed in order to reduce the
number of features. In this method, some important features have indirect
influence in learning instead of directly involving them as input features.
This framework has the ability to dynamically assign a group of positions to a
group of agents to improve system performance. In addition, it can change the
formation strategy when the context changes. Finally, this framework is able to
automatically produce many complex and flexible formation strategy algorithms
without directly involving an expert to present and implement such complex
algorithms.
| Mehrab Norouzitallab, Valiallah Monajjemi, Saeed Shiry Ghidary and
Mohammad Bagher Menhaj | null | 1408.0058 | null | null |
Conditional Restricted Boltzmann Machines for Cold Start Recommendations | cs.IR cs.LG stat.ML | Restricted Boltzmann Machines (RBMs) have been successfully used in
recommender systems. However, as with most other collaborative filtering
techniques, they cannot solve cold-start problems, since there are no ratings
for a new item. In this paper, we first apply a conditional RBM (CRBM), which
can take extra information into account, and show that the CRBM solves the
cold-start problem very well, especially for the rating prediction task. The
CRBM naturally combines content and collaborative data under a single
framework that can be fitted effectively. Experiments show that the CRBM
compares favourably with matrix factorization models, while the hidden
features it learns are easier to interpret.
| Jiankou Li and Wei Zhang | null | 1408.0096 | null | null |
A RobustICA Based Algorithm for Blind Separation of Convolutive Mixtures | cs.LG cs.SD | We propose a frequency domain method based on robust independent component
analysis (RICA) to address the multichannel Blind Source Separation (BSS)
problem of convolutive speech mixtures in highly reverberant environments. We
impose regularization processes to tackle the ill-conditioning problem of the
covariance matrix and to mitigate the performance degradation in the frequency
domain. We apply an algorithm to separate the source signals in adverse
conditions, i.e. high reverberation conditions when short observation signals
are available. Furthermore, we study the impact of several parameters on the
performance of separation, e.g. overlapping ratio and window type of the
frequency domain method. We also compare different techniques to solve the
frequency-domain permutation ambiguity. Through simulations and real world
experiments, we verify the superiority of the presented convolutive algorithm
among other BSS algorithms, including recursive regularized ICA (RR ICA),
independent vector analysis (IVA).
| Zaid Albataineh and Fathi M. Salem | null | 1408.0193 | null | null |
A Blind Adaptive CDMA Receiver Based on State Space Structures | cs.IT cs.LG math.IT | Code Division Multiple Access (CDMA) is a channel access method, based on
spread-spectrum technology, used by various radio technologies world-wide. In
general, CDMA is used as an access method in many mobile standards such as
CDMA2000 and WCDMA. We address the problem of blind multiuser equalization in
the wideband CDMA system, in the noisy multipath propagation environment.
Herein, we propose three new blind receiver schemes, which are based on state
space structures and Independent Component Analysis (ICA). These blind
state-space receivers (BSSR) do not require knowledge of the propagation
parameters or spreading code sequences of the users they primarily exploit the
natural assumption of statistical independence among the source signals. We
also develop three semi blind adaptive detectors by incorporating the new
adaptive methods into the standard RAKE receiver structure. Extensive
comparative case study, based on Bit error rate (BER) performance of these
methods, is carried out for different number of users, symbols per user, and
signal to noise ratio (SNR) in comparison with conventional detectors,
including the Blind Multiuser Detectors (BMUD) and Linear Minimum mean squared
error (LMMSE). The results show that the proposed methods outperform the other
detectors in estimating the symbol signals from the received mixed CDMA
signals. Moreover, the new blind detectors mitigate the multi access
interference (MAI) in CDMA.
| Zaid Albataineh and Fathi M. Salem | null | 1408.0196 | null | null |
Functional Principal Component Analysis and Randomized Sparse Clustering
Algorithm for Medical Image Analysis | stat.ML cs.AI cs.CV cs.LG | Due to advances in sensors, growing large and complex medical image data have
the ability to visualize the pathological change in the cellular or even the
molecular level or anatomical changes in tissues and organs. As a consequence,
the medical images have the potential to enhance diagnosis of disease,
prediction of clinical outcomes, characterization of disease progression,
management of health care and development of treatments, but also pose great
methodological and computational challenges for representation and selection of
features in image cluster analysis. To address these challenges, we first
extend one-dimensional functional principal component analysis to
two-dimensional functional principal component analysis (2DFPCA) to fully capture
space variation of image signals. Image signals contain a large number of
redundant and irrelevant features which provide no additional or useful
information for cluster analysis. Widely used methods for removing redundant
and irrelevant features are sparse clustering algorithms using a lasso-type
penalty to select the features. However, the accuracy of clustering using a
lasso-type penalty depends on how to select penalty parameters and a threshold
for selecting features. In practice, they are difficult to determine. Recently,
randomized algorithms have received a great deal of attention in big data
analysis. This paper presents a randomized algorithm for accurate feature
selection in image cluster analysis. The proposed method is applied to ovarian
and kidney cancer histology image data from the TCGA database. The results
demonstrate that the randomized feature selection method coupled with
functional principal component analysis substantially outperforms the current
sparse clustering algorithms in image cluster analysis.
| Nan Lin, Junhai Jiang, Shicheng Guo and Momiao Xiong | 10.1371/journal.pone.0132945 | 1408.0204 | null | null |
Matrix Factorization with Explicit Trust and Distrust Relationships | cs.SI cs.IR cs.LG | With the advent of online social networks, recommender systems have become
crucial for the success of many online applications/services due to their
significant role in tailoring these applications to user-specific needs or
preferences. Despite their increasing popularity, recommender systems in
general suffer from data sparsity and cold-start problems. To alleviate
these issues, in recent years there has been an upsurge of interest in
exploiting social information such as trust relations among users along with
the rating data to improve the performance of recommender systems. The main
motivation for exploiting trust information in recommendation process stems
from the observation that the ideas we are exposed to and the choices we make
are significantly influenced by our social context. However, in large user
communities, in addition to trust relations, the distrust relations also exist
between users. For instance, in Epinions the concepts of personal "web of
trust" and personal "block list" allow users to categorize their friends based
on the quality of reviews into trusted and distrusted friends, respectively. In
this paper, we propose a matrix factorization based model for recommendation in
social rating networks that properly incorporates both trust and distrust
relationships aiming to improve the quality of recommendations and mitigate the
data sparsity and the cold-start users issues. Through experiments on the
Epinions data set, we show that our new algorithm outperforms its standard
trust-enhanced or distrust-enhanced counterparts with respect to accuracy,
thereby demonstrating the positive effect that incorporation of explicit
distrust information can have on recommender systems.
| Rana Forsati, Mehrdad Mahdavi, Mehrnoush Shamsfard, Mohamed Sarwat | null | 1408.0325 | null | null |
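A minimal sketch of the modelling idea, under illustrative weights and toy data: alongside the usual SGD updates for the rating loss, each user's latent vector is pulled toward trusted friends and pushed away from distrusted ones. The specific regularization form is an assumption, not the paper's exact objective.

```python
# SGD matrix factorization with trust/distrust terms: besides the
# rating loss, user factors are pulled toward trusted friends and
# pushed away from distrusted ones. Weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 5, 6, 3
ratings = [(0, 1, 4.0), (0, 2, 1.0), (1, 1, 5.0), (2, 3, 2.0)]
trust = [(0, 1)]            # user 0 trusts user 1
distrust = [(0, 2)]         # user 0 distrusts user 2

U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))
lr, reg, a_t, a_d = 0.02, 0.05, 0.5, 0.5

for epoch in range(200):
    for u, i, r in ratings:                       # rating loss term
        e = r - U[u] @ V[i]
        U[u] += lr * (e * V[i] - reg * U[u])
        V[i] += lr * (e * U[u] - reg * V[i])
    for u, v in trust:                            # pull toward trusted users
        U[u] -= lr * a_t * (U[u] - U[v])
    for u, v in distrust:                         # push away from distrusted
        U[u] += lr * a_d * (U[u] - U[v])
print("predicted r(0,1):", U[0] @ V[1])
```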
Sample Complexity Analysis for Learning Overcomplete Latent Variable
Models through Tensor Methods | cs.LG math.PR stat.ML | We provide guarantees for learning latent variable models emphasizing on the
overcomplete regime, where the dimensionality of the latent space can exceed
the observed dimensionality. In particular, we consider multiview mixtures,
spherical Gaussian mixtures, ICA, and sparse coding models. We provide tight
concentration bounds for empirical moments through novel covering arguments. We
analyze parameter recovery through a simple tensor power update algorithm. In
the semi-supervised setting, we exploit the label or prior information to get a
rough estimate of the model parameters, and then refine it using the tensor
method on unlabeled samples. We establish that learning is possible when the
number of components scales as $k=o(d^{p/2})$, where $d$ is the observed
dimension, and $p$ is the order of the observed moment employed in the tensor
method. Our concentration bound analysis also leads to minimax sample
complexity for semi-supervised learning of spherical Gaussian mixtures. In the
unsupervised setting, we use a simple initialization algorithm based on SVD of
the tensor slices, and provide guarantees under the stricter condition that
$k\le \beta d$ (where constant $\beta$ can be larger than $1$), where the
tensor method recovers the components under a polynomial running time (and
exponential in $\beta$). Our analysis establishes that a wide range of
overcomplete latent variable models can be learned efficiently with low
computational and sample complexity through tensor decomposition methods.
| Animashree Anandkumar and Rong Ge and Majid Janzamin | null | 1408.0553 | null | null |
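The "simple tensor power update" in the abstract has a concrete form; a minimal NumPy version for a symmetric third-order moment tensor is sketched below (initialization, whitening, and deflation between components are omitted).

```python
import numpy as np

def tensor_power_update(T, v, iters=100):
    """Tensor power iteration for a symmetric 3rd-order tensor T:
    repeatedly map v <- T(I, v, v) / ||T(I, v, v)||, where
    T(I, v, v)_i = sum_{j,k} T[i, j, k] v[j] v[k]."""
    for _ in range(iters):
        w = np.einsum('ijk,j,k->i', T, v, v)
        v = w / np.linalg.norm(w)
    return v
```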
Estimating Maximally Probable Constrained Relations by Mathematical
Programming | cs.LG cs.NA math.OC stat.ML | Estimating a constrained relation is a fundamental problem in machine
learning. Special cases are classification (the problem of estimating a map
from a set of to-be-classified elements to a set of labels), clustering (the
problem of estimating an equivalence relation on a set) and ranking (the
problem of estimating a linear order on a set). We contribute a family of
probability measures on the set of all relations between two finite, non-empty
sets, which offers a joint abstraction of multi-label classification,
correlation clustering and ranking by linear ordering. Estimating (learning) a
maximally probable measure, given (a training set of) related and unrelated
pairs, is a convex optimization problem. Estimating (inferring) a maximally
probable relation, given a measure, is a 0-1 linear program. It is solved in
linear time for maps. It is NP-hard for equivalence relations and linear
orders. Practical solutions for all three cases are shown in experiments with
real data. Finally, estimating a maximally probable measure and relation
jointly is posed as a mixed-integer nonlinear program. This formulation
suggests a mathematical programming approach to semi-supervised learning.
| Lizhen Qu and Bjoern Andres | null | 1408.0838 | null | null |
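The abstract does not spell out the 0-1 linear program; a generic form consistent with it, assuming $c_{ab}$ denotes the log-odds under the learned measure that the pair $(a,b)$ is related, is

```latex
\max_{x \in \{0,1\}^{A \times B}} \;
  \sum_{(a,b) \in A \times B} c_{ab}\, x_{ab}
\quad \text{s.t. } x \text{ encodes a relation of the given class.}
```

For maps, the constraints $\sum_b x_{ab} = 1$ decouple across $a \in A$, so the program is solved in linear time by setting $x_{ab^*}=1$ for $b^* = \arg\max_b c_{ab}$; for equivalence relations and linear orders, transitivity (and, for orders, antisymmetry and totality) couples the variables, matching the NP-hardness stated above.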
Multilayer bootstrap networks | cs.LG cs.NE stat.ML | A multilayer bootstrap network builds a gradually narrowed multilayer nonlinear
network from bottom up for unsupervised nonlinear dimensionality reduction.
Each layer of the network is a nonparametric density estimator. It consists of
a group of k-centroids clusterings. Each clustering randomly selects data
points with randomly selected features as its centroids, and learns a one-hot
encoder by one-nearest-neighbor optimization. Geometrically, the nonparametric
density estimator at each layer projects the input data space to a
uniformly-distributed discrete feature space, where the similarity of two data
points in the discrete feature space is measured by the number of the nearest
centroids they share in common. The multilayer network gradually reduces the
nonlinear variations of data from bottom up by building a vast number of
hierarchical trees implicitly on the original data space. Theoretically, the
estimation error caused by the nonparametric density estimator is proportional
to the correlation between the clusterings, both of which are reduced by the
randomization steps.
| Xiao-Lei Zhang | null | 1408.0848 | null | null |
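A minimal NumPy sketch of one layer as described above: each clustering in the layer samples its centroids from the data on a random feature subset and emits one-hot nearest-centroid codes; all sizes and fractions below are illustrative.

```python
import numpy as np

def mbn_layer(X, k, n_clusterings=100, feat_frac=0.5, rng=None):
    """One layer of a multilayer bootstrap network (sketch).

    Builds n_clusterings random k-centroids clusterings; each picks k
    random data points (on a random feature subset) as centroids and
    one-hot encodes every point by its nearest centroid. The layer
    output concatenates all one-hot codes."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    codes = []
    for _ in range(n_clusterings):
        feats = rng.choice(d, size=max(1, int(feat_frac * d)), replace=False)
        centroids = X[rng.choice(n, size=k, replace=False)][:, feats]
        # one-nearest-neighbor assignment on the sampled features
        dists = ((X[:, feats, None] - centroids.T[None]) ** 2).sum(axis=1)
        codes.append(np.eye(k)[dists.argmin(axis=1)])
    return np.hstack(codes)

# Stacking layers with shrinking k narrows the network bottom-up:
# H = X
# for k in (64, 32, 16, 8):
#     H = mbn_layer(H, k)
```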
Adaptive Learning in Cartesian Product of Reproducing Kernel Hilbert
Spaces | cs.LG stat.ML | We propose a novel adaptive learning algorithm based on iterative orthogonal
projections in the Cartesian product of multiple reproducing kernel Hilbert
spaces (RKHSs). The task is to estimate/track nonlinear functions that are
assumed to contain multiple components, such as (i) linear and nonlinear
components or (ii) high- and low-frequency components. In this case, the use
of multiple RKHSs permits a compact representation of multicomponent functions.
The proposed algorithm combines two of the author's earlier methods:
multikernel adaptive filtering and hyperplane projection along
affine subspace (HYPASS). In a particular case, the sum space of the
RKHSs is isomorphic to the product space and hence the proposed algorithm can
also be regarded as an iterative projection method in the sum space. The
efficacy of the proposed algorithm is shown by numerical examples.
| Masahiro Yukawa | 10.1109/TSP.2015.2463261 | 1408.0853 | null | null |
Determining the Number of Clusters via Iterative Consensus Clustering | stat.ML cs.CV cs.LG | We use a cluster ensemble to determine the number of clusters, k, in a group
of data. A consensus similarity matrix is formed from the ensemble using
multiple algorithms and several values for k. A random walk is induced on the
graph defined by the consensus matrix and the eigenvalues of the associated
transition probability matrix are used to determine the number of clusters. For
noisy or high-dimensional data, an iterative technique is presented to refine
this consensus matrix in a way that encourages a block-diagonal form. It is shown
that the resulting consensus matrix is generally superior to existing
similarity matrices for this type of spectral analysis.
| Shaina Race, Carl Meyer, Kevin Valakuzhy | 10.1137/1.9781611972832.11 | 1408.0967 | null | null |
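A minimal sketch of the pipeline described above, using only k-means runs to form the ensemble (the paper uses multiple algorithms) and the eigengap of the random-walk transition matrix to pick the number of clusters; the ranges and run counts are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def consensus_num_clusters(X, k_values=range(2, 11), n_runs=20):
    """Estimate the number of clusters from a consensus matrix (sketch)."""
    n = X.shape[0]
    C = np.zeros((n, n))
    runs = 0
    # Consensus matrix: C[i, j] = fraction of runs clustering i with j
    for k in k_values:
        for seed in range(n_runs):
            labels = KMeans(n_clusters=k, n_init=1,
                            random_state=seed).fit_predict(X)
            C += (labels[:, None] == labels[None, :])
            runs += 1
    C /= runs
    # Random walk on the consensus graph: row-stochastic transition matrix
    P = C / C.sum(axis=1, keepdims=True)
    eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    # A nearly uncoupled chain has a cluster of eigenvalues near 1,
    # then a gap; count the eigenvalues before the largest gap.
    gaps = eig[:-1] - eig[1:]
    return int(np.argmax(gaps[: len(eig) // 2]) + 1)
```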
A Flexible Iterative Framework for Consensus Clustering | stat.ML cs.CV cs.LG | A novel framework for consensus clustering is presented which has the ability
to determine both the number of clusters and a final solution using multiple
algorithms. A consensus similarity matrix is formed from an ensemble using
multiple algorithms and several values for k. A variety of dimension reduction
techniques and clustering algorithms are considered for analysis. For noisy or
high-dimensional data, an iterative technique is presented to refine this
consensus matrix in a way that encourages the algorithms to agree upon a common
solution. We utilize the theory of nearly uncoupled Markov chains to determine
the number, k, of clusters in a dataset by considering a random walk on the
graph defined by the consensus matrix. The eigenvalues of the associated
transition probability matrix are used to determine the number of clusters.
This method succeeds at determining the number of clusters in many datasets
where previous methods fail. On every considered dataset, our consensus method
provides a final result with accuracy well above the average of the individual
algorithms.
| Shaina Race and Carl Meyer | null | 1408.0972 | null | null |
Estimating Renyi Entropy of Discrete Distributions | cs.IT cs.DS cs.LG math.IT | It was recently shown that estimating the Shannon entropy $H({\rm p})$ of a
discrete $k$-symbol distribution ${\rm p}$ requires $\Theta(k/\log k)$ samples,
a number that grows near-linearly in the support size. In many applications
$H({\rm p})$ can be replaced by the more general R\'enyi entropy of order
$\alpha$, $H_\alpha({\rm p})$. We determine the number of samples needed to
estimate $H_\alpha({\rm p})$ for all $\alpha$, showing that $\alpha < 1$
requires a super-linear number of samples, roughly $k^{1/\alpha}$; noninteger
$\alpha>1$ requires a near-linear number, roughly $k$; but, perhaps
surprisingly, integer $\alpha>1$ requires only $\Theta(k^{1-1/\alpha})$
samples. Furthermore, building on a recently established connection between
polynomial
approximation and estimation of additive functions of the form $\sum_{x} f({\rm
p}_x)$, we reduce the sample complexity for noninteger values of $\alpha$ by a
factor of $\log k$ compared to the empirical estimator. The estimators
achieving these bounds are simple and run in time linear in the number of
samples. Our lower bounds provide explicit constructions of distributions with
different R\'enyi entropies that are hard to distinguish.
| Jayadev Acharya, Alon Orlitsky, Ananda Theertha Suresh, and Himanshu
Tyagi | null | 1408.1000 | null | null |
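The two estimator families discussed above can be sketched directly: the empirical plug-in estimator, and, for integer $\alpha \ge 2$, a bias-corrected estimator that replaces the power sum $\sum_x p_x^\alpha$ with its unbiased falling-factorial estimate. Whether the latter matches the paper's estimator exactly is not claimed here, and the logarithm can blow up on very small samples.

```python
import numpy as np
from collections import Counter

def renyi_plug_in(samples, alpha):
    """Empirical (plug-in) Renyi entropy estimator, alpha != 1."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def renyi_integer_alpha(samples, alpha):
    """Bias-corrected estimator for integer alpha >= 2: the power sum
    is estimated by sum_x N_x(N_x-1)...(N_x-alpha+1) / (n(n-1)...(n-alpha+1)),
    which is unbiased for sum_x p_x^alpha under multinomial sampling."""
    a = int(alpha)
    assert a == alpha and a >= 2
    counts = Counter(samples)
    n = sum(counts.values())
    def falling(m, r):
        out = 1
        for j in range(r):
            out *= (m - j)
        return out
    power_sum = sum(falling(c, a) for c in counts.values()) / falling(n, a)
    return np.log(power_sum) / (1.0 - alpha)
```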
Multithreshold Entropy Linear Classifier | cs.LG stat.ML | Linear classifiers separate the data with a hyperplane. In this paper we
focus on a novel method for constructing a multithreshold linear classifier,
which separates the data with multiple parallel hyperplanes. The proposed model
is based on information-theoretic concepts -- namely Renyi's quadratic entropy
and the Cauchy-Schwarz divergence.
We begin with some general properties, including data scale invariance. Then
we prove that our method is a multithreshold large-margin classifier, which
shows an analogy to the SVM while at the same time working with a much broader
class of hypotheses. Interestingly, the proposed method is aimed at maximizing
a balanced quality measure (such as the Matthews correlation coefficient) as
opposed to the very common maximization of accuracy. This feature comes
directly from the statement of the optimization problem and is further
confirmed by experiments on the UCI datasets.
It appears that our Multithreshold Entropy Linear Classifier (MELC) obtains
similar or higher scores than those given by the SVM on both synthetic and real
data. We show how the proposed approach can be beneficial for cheminformatics
in the task of ligand activity prediction, where, in addition to better
classification results, MELC gives some insight into the data structure
(classes of underrepresented chemical compounds).
| Wojciech Marian Czarnecki, Jacek Tabor | 10.1016/j.eswa.2015.03.007 | 1408.1054 | null | null |
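The abstract invokes two information-theoretic quantities without stating them; their standard definitions are below (the exact objective MELC optimizes over projections of the two classes is detailed in the paper, not here):

```latex
H_2(p) = -\log \int p(x)^2 \, dx,
\qquad
D_{CS}(p, q) = -\log
  \frac{\left( \int p(x)\, q(x)\, dx \right)^{2}}
       {\int p(x)^{2}\, dx \; \int q(x)^{2}\, dx}.
```

Since $D_{CS}$ decomposes as $-2\log\int pq + \log\int p^2 + \log\int q^2$, maximizing it favors a small overlap $\int pq$ between the projected class densities relative to their individual concentrations, which gives the margin-like behavior the abstract refers to.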
Mixed-Variate Restricted Boltzmann Machines | stat.ML cs.LG stat.ME | Modern datasets are becoming heterogeneous. To this end, we present in this
paper Mixed-Variate Restricted Boltzmann Machines for simultaneously modelling
variables of multiple types and modalities, including binary and continuous
responses, categorical options, multicategorical choices, ordinal assessment
and category-ranked preferences. Dependency among variables is modeled using
latent binary variables, each of which can be interpreted as a particular
hidden aspect of the data. The proposed model, similar to the standard RBMs,
allows fast evaluation of the posterior for the latent variables. Hence, it is
naturally suitable for many common tasks including, but not limited to, (a) as
a pre-processing step to convert complex input data into a more convenient
vectorial representation through the latent posteriors, thereby offering a
dimensionality reduction capacity, (b) as a classifier supporting binary,
multiclass, multilabel, and label-ranking outputs, or a regression tool for
continuous outputs and (c) as a data completion tool for multimodal and
heterogeneous data. We evaluate the proposed model on a large-scale dataset
using the world opinion survey results on three tasks: feature extraction and
visualization, data completion and prediction.
| Truyen Tran, Dinh Phung, Svetha Venkatesh | null | 1408.1160 | null | null |
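The fast posterior evaluation mentioned above is the factorized form familiar from standard RBMs; for binary hidden units $h$ with biases $b$ and weights $W$ (in the mixed-variate model, the visible-side conditionals change per type, not this factorization):

```latex
p(h_k = 1 \mid \mathbf{v})
  = \sigma\!\Big( b_k + \sum_i W_{ik}\, v_i \Big),
\qquad
\sigma(t) = \frac{1}{1 + e^{-t}},
```

so all hidden posteriors are computed independently in one matrix-vector product, which is what makes the latent representation cheap to use for dimensionality reduction, classification, and data completion.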
MCMC for Hierarchical Semi-Markov Conditional Random Fields | stat.ML cs.LG stat.ME | Deep architectures such as hierarchical semi-Markov models are an important
class of models for nested sequential data. Current exact inference schemes
either cost cubic time in sequence length, or exponential time in model depth.
These costs are prohibitive for large-scale problems with arbitrary length and
depth. In this contribution, we propose a new approximation technique that may
have the potential to achieve sub-cubic time complexity in length and linear
time in depth, at the cost of some loss of quality. The idea is based on two
well-known methods: Gibbs sampling and Rao-Blackwellisation. We provide some
simulation-based evaluation of the quality of the resulting Rao-Blackwellised Gibbs sampler (RGBS) with respect to run time
and sequence length.
| Truyen Tran, Dinh Phung, Svetha Venkatesh, Hung H. Bui | null | 1408.1162 | null | null |
Boosted Markov Networks for Activity Recognition | cs.LG cs.CV stat.ML | We explore a framework called boosted Markov networks to combine the learning
capacity of boosting and the rich modeling semantics of Markov networks and
apply the framework to video-based activity recognition. Importantly, we
extend the framework to incorporate hidden variables. We show how the framework
can be applied for both model learning and feature selection. We demonstrate
that boosted Markov networks with hidden variables perform comparably with the
standard maximum likelihood estimation. However, our framework is able to learn
sparse models, and therefore can provide computational savings when the learned
models are used for classification.
| Truyen Tran, Hung Bui, Svetha Venkatesh | null | 1408.1167 | null | null |
Scalable Greedy Algorithms for Transfer Learning | cs.CV cs.LG | In this paper we consider the binary transfer learning problem, focusing on
how to select and combine sources from a large pool to yield a good performance
on a target task. Constraining our scenario to the real world, we do not assume
direct access to the source data, but rather employ the source hypotheses
trained from them. We propose an efficient algorithm that selects relevant
source hypotheses and feature dimensions simultaneously, building on the
literature on the best subset selection problem. Our algorithm achieves
state-of-the-art results on three computer vision datasets, substantially
outperforming both transfer learning and popular feature selection baselines in
a small-sample setting. We also present a randomized variant that achieves the
same results with the computational cost independent from the number of source
hypotheses and feature dimensions. Also, we theoretically prove that, under
reasonable assumptions on the source hypotheses, our algorithm can learn
effectively from few examples.
| Ilja Kuzborskij, Francesco Orabona, Barbara Caputo | 10.1016/j.cviu.2016.09.003 | 1408.1292 | null | null |
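The abstract describes selecting source hypotheses from a large pool without touching the source data; below is a minimal greedy forward-selection sketch in that spirit. The paper's algorithm additionally selects feature dimensions and has a randomized variant; the RidgeClassifier, 3-fold CV, and stacking of source predictions as extra features are all illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

def greedy_source_selection(X, y, source_hyps, max_sources=5):
    """Greedy forward selection of source hypotheses (sketch).

    source_hyps: list of functions h(X) -> scores from pre-trained
    source models. Each selected hypothesis contributes its predictions
    as an extra feature for a target classifier trained on the few
    target examples (X, y)."""
    selected, pool = [], list(range(len(source_hyps)))
    best_score = -np.inf
    while pool and len(selected) < max_sources:
        scores = []
        for j in pool:
            cols = [source_hyps[i](X).reshape(-1, 1) for i in selected + [j]]
            Xa = np.hstack([X] + cols)
            s = cross_val_score(RidgeClassifier(), Xa, y, cv=3).mean()
            scores.append((s, j))
        s, j = max(scores)
        if s <= best_score:        # stop when no candidate improves CV score
            break
        best_score = s
        selected.append(j)
        pool.remove(j)
    return selected
```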
When does Active Learning Work? | stat.ML cs.LG | Active Learning (AL) methods seek to improve classifier performance when
labels are expensive or scarce. We consider two central questions: Where does
AL work? How much does it help? To address these questions, a comprehensive
experimental simulation study of Active Learning is presented. We consider a
variety of tasks, classifiers and other AL factors, to present a broad
exploration of AL performance in various settings. A precise way to quantify
performance is needed in order to know when AL works. Thus we also present a
detailed methodology for tackling the complexities of assessing AL performance
in the context of this experimental study.
| Lewis Evans and Niall M. Adams and Christoforos Anagnostopoulos | null | 1408.1319 | null | null |
Double or Nothing: Multiplicative Incentive Mechanisms for Crowdsourcing | cs.GT cs.HC cs.LG | Crowdsourcing has gained immense popularity in machine learning applications
for obtaining large amounts of labeled data. Crowdsourcing is cheap and fast,
but suffers from the problem of low-quality data. To address this fundamental
challenge in crowdsourcing, we propose a simple payment mechanism to
incentivize workers to answer only the questions that they are sure of and skip
the rest. We show that surprisingly, under a mild and natural "no-free-lunch"
requirement, this mechanism is the one and only incentive-compatible payment
mechanism possible. We also show that among all possible incentive-compatible
mechanisms (that may or may not satisfy no-free-lunch), our mechanism makes the
smallest possible payment to spammers. We further extend our results to a more
general setting in which workers are required to provide a quantized confidence
for each question. Interestingly, this unique mechanism takes a
"multiplicative" form. The simplicity of the mechanism is an added benefit. In
preliminary experiments involving over 900 worker-task pairs, we observe a
significant drop in the error rates under this unique mechanism for the same or
lower monetary expenditure.
| Nihar B. Shah and Dengyong Zhou | null | 1408.1387 | null | null |
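The abstract states the mechanism's multiplicative form without its formula. The toy rule below is one illustration consistent with the title and the skip incentive, not the paper's exact mechanism; evaluation against gold-standard questions is an assumption.

```python
def payment(answers, gold, base=1.0):
    """Toy multiplicative skip-based payment rule (NOT the paper's
    exact mechanism). answers[q] is the worker's answer to gold
    question q, or None if skipped.

    Each correct answer doubles the payment, each skip leaves it
    unchanged, and a single wrong answer zeroes it ("nothing")."""
    pay = base
    for q, truth in gold.items():
        a = answers.get(q)
        if a is None:          # skipped: stake unchanged
            continue
        if a == truth:         # correct: double the stake
            pay *= 2.0
        else:                  # wrong: no-free-lunch, pay nothing
            return 0.0
    return pay
```

Under this toy rule, a risk-neutral worker with confidence p in an answer gains in expectation only if 2p > 1, so she answers exactly when p > 1/2 and skips otherwise; the paper's quantized-confidence extension generalizes this kind of threshold.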
Preventing False Discovery in Interactive Data Analysis is Hard | cs.LG cs.CC cs.DS | We show that, under a standard hardness assumption, there is no
computationally efficient algorithm that given $n$ samples from an unknown
distribution can give valid answers to $n^{3+o(1)}$ adaptively chosen
statistical queries. A statistical query asks for the expectation of a
predicate over the underlying distribution, and an answer to a statistical
query is valid if it is "close" to the correct expectation over the
distribution.
Our result stands in stark contrast to the well known fact that exponentially
many statistical queries can be answered validly and efficiently if the queries
are chosen non-adaptively (no query may depend on the answers to previous
queries). Moreover, a recent work by Dwork et al. shows how to accurately
answer exponentially many adaptively chosen statistical queries via a
computationally inefficient algorithm; and how to answer a quadratic number of
adaptive queries via a computationally efficient algorithm. The latter result
implies that our result is tight up to a linear factor in $n$.
Conceptually, our result demonstrates that achieving statistical validity
alone can be a source of computational intractability in adaptive settings. For
example, in the modern large collaborative research environment, data analysts
typically choose a particular approach based on previous findings. False
discovery occurs if a research finding is supported by the data but not by the
underlying distribution. While the study of preventing false discovery in
Statistics is decades old, to the best of our knowledge our result is the first
to demonstrate a computational barrier. In particular, our result suggests that
the perceived difficulty of preventing false discovery in today's collaborative
research environment may be inherent.
| Moritz Hardt and Jonathan Ullman | null | 1408.1655 | null | null |
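For concreteness, a statistical query and the naive empirical oracle can be sketched as follows; the predicates and the two-query example are illustrative only.

```python
import numpy as np

def empirical_sq_oracle(sample):
    """Naive statistical-query oracle: answer each query predicate by
    its empirical mean over a fixed sample. This is valid for
    non-adaptively chosen queries, but adaptively chosen queries can
    overfit the sample -- the regime whose computational limits the
    paper studies."""
    def answer(predicate):
        return float(np.mean([predicate(x) for x in sample]))
    return answer

# An adaptively chosen query: the second query depends on the first answer.
# oracle = empirical_sq_oracle(data)
# a1 = oracle(lambda x: x[0] > 0)
# a2 = oracle(lambda x: (x[1] > 0) if a1 > 0.5 else (x[1] < 0))
```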
A Parallel Algorithm for Exact Bayesian Structure Discovery in Bayesian
Networks | cs.AI cs.DC cs.LG | Exact Bayesian structure discovery in Bayesian networks requires exponential
time and space. Using dynamic programming (DP), the fastest known sequential
algorithm computes the exact posterior probabilities of structural features in
$O(2(d+1)n2^n)$ time and space, if the number of nodes (variables) in the
Bayesian network is $n$ and the in-degree (the number of parents) per node is
bounded by a constant $d$. Here we present a parallel algorithm capable of
computing the exact posterior probabilities for all $n(n-1)$ edges with optimal
parallel space efficiency and nearly optimal parallel time efficiency. That is,
if $p=2^k$ processors are used, the run-time reduces to
$O(5(d+1)n2^{n-k}+k(n-k)^d)$ and the space usage becomes $O(n2^{n-k})$ per
processor. Our algorithm is based on the observation that the subproblems in
the sequential DP algorithm constitute an $n$-dimensional hypercube. We
carefully coordinate the computation of correlated DP procedures so that large
amounts of data exchange are suppressed. Further, we develop parallel techniques
for two variants of the well-known \emph{zeta transform}, which have
applications outside the context of Bayesian networks. We demonstrate the
capability of our algorithm on datasets with up to 33 variables and its
scalability on up to 2048 processors. We apply our algorithm to a biological
data set for discovering the yeast pheromone response pathways.
| Yetian Chen, Jin Tian, Olga Nikolova and Srinivas Aluru | null | 1408.1664 | null | null |
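The sequential kernel behind the zeta-transform variants mentioned above is the standard $O(n 2^n)$ subset-sum dynamic program; a minimal version follows (the paper's contribution is its parallelization, which is not reproduced here).

```python
def zeta_transform(f, n):
    """Fast zeta transform over subsets of an n-element ground set:
    given f indexed by bitmask, returns g with
    g[S] = sum of f[T] over all subsets T of S, in O(n * 2^n)."""
    g = list(f)
    for i in range(n):
        bit = 1 << i
        for S in range(1 << n):
            if S & bit:
                g[S] += g[S ^ bit]
    return g
```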
Matrix Completion on Graphs | cs.LG stat.ML | The problem of finding the missing values of a matrix given a few of its
entries, called matrix completion, has gathered a lot of attention in the
recent years. Although the problem under the standard low rank assumption is
NP-hard, Cand\`es and Recht showed that it can be solved exactly via a convex
relaxation if the number of observed entries is sufficiently large. In this
work, we introduce a novel
matrix completion model that makes use of proximity information about rows and
columns by assuming they form communities. This assumption makes sense in
several real-world problems like in recommender systems, where there are
communities of people sharing preferences, while products form clusters that
receive similar ratings. Our main goal is thus to find a low-rank solution that
is structured by the proximities of rows and columns encoded by graphs. We
borrow ideas from manifold learning to constrain our solution to be smooth on
these graphs, in order to implicitly force row and column proximities. Our
matrix recovery model is formulated as a convex non-smooth optimization
problem, for which a well-posed iterative scheme is provided. We study and
evaluate the proposed matrix completion on synthetic and real data, showing
that the proposed structured low-rank recovery model outperforms the standard
matrix completion model in many situations.
| Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, Pierre
Vandergheynst | null | 1408.1717 | null | null |
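The paper formulates a convex non-smooth problem with a well-posed iterative scheme; the sketch below is a generic proximal-gradient instantiation of the stated idea (nuclear norm plus smoothness on row and column graphs), with step sizes and weights chosen purely for illustration.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def mc_on_graphs(M, mask, Lr, Lc, tau=1.0, gr=0.1, gc=0.1,
                 step=0.1, iters=500):
    """Proximal-gradient sketch for graph-regularized matrix completion:
        min_X tau*||X||_* + 0.5*||mask*(X - M)||_F^2
              + 0.5*gr*tr(X' Lr X) + 0.5*gc*tr(X Lc X'),
    where Lr, Lc are symmetric row/column graph Laplacians and mask
    marks the observed entries of M (zeros elsewhere)."""
    X = np.zeros_like(M)
    for _ in range(iters):
        grad = mask * (X - M) + gr * (Lr @ X) + gc * (X @ Lc)
        X = svt(X - step * grad, step * tau)
    return X
```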
Origin of the computational hardness for learning with binary synapses | cond-mat.dis-nn cond-mat.stat-mech cs.LG q-bio.NC | Supervised learning in a binary perceptron is able to classify an extensive
number of random patterns by a proper assignment of binary synaptic weights.
However, finding such assignments in practice is quite a nontrivial task. The
relation between the weight space structure and the algorithmic hardness has
not yet been fully understood. To this end, we analytically derive the
Franz-Parisi potential for the binary perceptron problem, by starting from an
equilibrium solution of weights and exploring the weight space structure around
it. Our result reveals the geometrical organization of the weight
space -- the weight space is composed of isolated solutions, rather
than clusters of exponentially many close-by solutions. The point-like clusters
far apart from each other in the weight space explain the previously observed
glassy behavior of stochastic local search heuristics.
| Haiping Huang and Yoshiyuki Kabashima | 10.1103/PhysRevE.90.052813 | 1408.1784 | null | null |