title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Neural Network-Based Active Learning in Multivariate Calibration | cs.NE cs.CE cs.LG | In chemometrics, data from infrared or near-infrared (NIR) spectroscopy are
often used to identify a compound or to analyze the composition of a material.
This involves the calibration of models that predict the concentration
of material constituents from the measured NIR spectrum. An interesting aspect
of multivariate calibration is to achieve a particular accuracy level with a
minimum number of training samples, as this reduces the number of laboratory
tests and thus the cost of model building. In these chemometric models, the
input refers to a proper representation of the spectra and the output to the
concentrations of the sample constituents. The search for a most informative
new calibration sample thus has to be performed in the output space of the
model, rather than in the input space as in conventional modeling problems. In
this paper, we propose to solve the corresponding inversion problem by
utilizing the disagreements of an ensemble of neural networks to represent the
prediction error in the unexplored component space. The next calibration sample
is then chosen at a composition where the individual models of the ensemble
disagree most. The results obtained for a realistic chemometric calibration
example show that the proposed active learning can achieve a given calibration
accuracy with fewer training samples than random sampling.
| A. Ukil, J. Bernasconi | 10.1109/TSMCC.2012.2220963 | 1503.05831 | null | null |
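As a rough illustration of the ensemble-disagreement principle described in this abstract, the sketch below trains a small committee of neural networks on bootstrap resamples and queries the candidate point where their predictions vary most. The function names, the candidate-grid strategy, and the mapping direction are illustrative assumptions, not the paper's chemometric inversion procedure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def next_calibration_sample(X, y, candidates, n_models=5, seed=0):
    """Query-by-committee: train an ensemble on bootstrap resamples and
    query the candidate point where the members disagree most."""
    rng = np.random.RandomState(seed)
    preds = []
    for m in range(n_models):
        idx = rng.randint(len(X), size=len(X))      # bootstrap resample
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=m)
        net.fit(X[idx], y[idx])
        preds.append(net.predict(candidates))
    disagreement = np.var(np.stack(preds), axis=0)  # per-candidate variance
    return candidates[np.argmax(disagreement)]
```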
Deep Transform: Time-Domain Audio Error Correction via Probabilistic
Re-Synthesis | cs.SD cs.LG cs.NE | In the process of recording, storage and transmission of time-domain audio
signals, errors may be introduced that are difficult to correct in an
unsupervised way. Here, we train a convolutional deep neural network to
re-synthesize input time-domain speech signals at its output layer. We then use
this abstract transformation, which we call a deep transform (DT), to perform
probabilistic re-synthesis on further speech (of the same speaker) which has
been degraded. Using the convolutive DT, we demonstrate the recovery of speech
audio that has been subject to extreme degradation. This approach may be useful
for correction of errors in communications devices.
| Andrew J.R. Simpson | null | 1503.05849 | null | null |
On Invariance and Selectivity in Representation Learning | cs.LG | We discuss data representations that can be learned automatically from data,
are invariant to transformations, and at the same time selective, in the sense
that two points have the same representation only if one is a
transformation of the other. The mathematical results here sharpen some of the
key claims of i-theory -- a recent theory of feedforward processing in sensory
cortex.
| Fabio Anselmi, Lorenzo Rosasco, Tomaso Poggio | null | 1503.05938 | null | null |
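The invariance and selectivity properties stated in the abstract can be written compactly for a representation $\mu$ and a transformation group $G$ (a minimal formalization of the prose above, not necessarily the paper's exact definitions):

```latex
\text{invariance:}\;\; \mu(gx) = \mu(x) \;\;\forall g \in G,
\qquad
\text{selectivity:}\;\; \mu(x) = \mu(x') \;\Rightarrow\; x' = gx \text{ for some } g \in G.
```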
Rank Subspace Learning for Compact Hash Codes | cs.LG cs.IR | The era of Big Data has spawned unprecedented interest in developing hashing
algorithms for efficient storage and fast nearest neighbor search. Most
existing work learns hash functions that are numeric quantizations of feature
values in a projected feature space. In this work, we propose a novel hash
learning framework that encodes features' rank orders instead of numeric values
in a number of optimal low-dimensional ranking subspaces. We formulate the
ranking subspace learning problem as the optimization of a piece-wise linear
convex-concave function and present two versions of our algorithm: one with
independent optimization of each hash bit and the other exploiting a sequential
learning framework. Our work is a generalization of the Winner-Take-All (WTA)
hash family and naturally enjoys all the numeric stability benefits of rank
correlation measures while being optimized to achieve high precision at very
short code length. We compare with several state-of-the-art hashing algorithms
in both supervised and unsupervised domains, showing superior performance in a
number of data sets.
| Kai Li, Guojun Qi, Jun Ye, Kien A. Hua | null | 1503.05951 | null | null |
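The paper generalizes the Winner-Take-All hash family it names; for reference, here is a sketch of plain WTA hashing (Yagnik et al., 2011), whose rank-order codes the paper extends to learned ranking subspaces:

```python
import numpy as np

def wta_hash(x, n_codes=64, window=4, seed=0):
    """Winner-Take-All hashing: each code is the argmax position within a
    random window of features, so codes depend only on rank orders and are
    invariant to monotonic transformations of x."""
    rng = np.random.RandomState(seed)
    codes = np.empty(n_codes, dtype=np.int64)
    for i in range(n_codes):
        idx = rng.permutation(len(x))[:window]  # random window of features
        codes[i] = np.argmax(x[idx])            # rank order, not magnitude
    return codes
```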
Deep Transform: Cocktail Party Source Separation via Probabilistic
Re-Synthesis | cs.SD cs.LG cs.NE | In cocktail party listening scenarios, the human brain is able to separate
competing speech signals. However, the signal processing implemented by the
brain to perform cocktail party listening is not well understood. Here, we
trained two separate convolutive autoencoder deep neural networks (DNNs) to
separate monaural and binaural mixtures of two concurrent speech streams. We
then used these DNNs as convolutive deep transform (CDT) devices to perform
probabilistic re-synthesis. The CDTs operated directly in the time-domain. Our
simulations demonstrate that very simple neural networks are capable of
exploiting monaural and binaural information available in a cocktail party
listening scenario.
| Andrew J.R. Simpson | null | 1503.06046 | null | null |
Networked Stochastic Multi-Armed Bandits with Combinatorial Strategies | cs.LG | In this paper, we investigate a largely extended version of the classical MAB
problem, called the networked combinatorial bandit problem. In particular, we
consider the setting of a decision maker over networked bandits as follows:
at each time, a combinatorial strategy, e.g., a group of arms, is chosen, and the
decision maker receives a reward resulting from her strategy and also receives
a side bonus resulting from that strategy for each arm's neighbor. This is
motivated by many real applications such as on-line social networks where
friends can provide their feedback on shared content, therefore if we promote a
product to a user, we can also collect feedback from her friends on that
product. To this end, we consider two types of side bonus in this study: side
observation and side reward. Depending on the number of arms pulled at each time slot,
we study two cases: single-play and combinatorial-play. Consequently, this
leaves us four scenarios to investigate in the presence of side bonus:
Single-play with Side Observation, Combinatorial-play with Side Observation,
Single-play with Side Reward, and Combinatorial-play with Side Reward. For each
case, we present and analyze a series of \emph{zero regret} policies, whose
expected regret approaches zero as time goes to infinity. Extensive
simulations validate the effectiveness of our results.
| Shaojie Tang, Yaqin Zhou | null | 1503.06169 | null | null |
Block-Wise MAP Inference for Determinantal Point Processes with
Application to Change-Point Detection | cs.LG cs.AI stat.ME stat.ML | Existing MAP inference algorithms for determinantal point processes (DPPs)
need to calculate determinants or conduct eigenvalue decomposition generally at
the scale of the full kernel, which presents a great challenge for real-world
applications. In this paper, we introduce a class of DPPs, called BwDPPs, that
are characterized by an almost block diagonal kernel matrix and thus can allow
efficient block-wise MAP inference. Furthermore, BwDPPs are successfully
applied to address the difficulty of selecting change-points in the problem of
change-point detection (CPD), which results in a new BwDPP-based CPD method,
named BwDppCpd. In BwDppCpd, a preliminary set of change-point candidates is
first created based on existing well-studied metrics. Then, these change-point
candidates are treated as DPP items, and DPP-based subset selection is
conducted to give the final estimate of the change-points that favours both
quality and diversity. The effectiveness of BwDppCpd is demonstrated through
extensive experiments on five real-world datasets.
| Jinye Zhang, Zhijian Ou | null | 1503.06239 | null | null |
Fast Imbalanced Classification of Healthcare Data with Missing Values | stat.ML cs.LG | In the medical domain, data features often contain missing values. This can
create serious bias in predictive modeling, and typical standard data mining
methods then often produce poor performance measures. In this paper, we propose a
new method to simultaneously classify large datasets and reduce the effects of
missing values. The proposed method is based on a multilevel framework of the
cost-sensitive SVM and the expectation maximization imputation method for missing
values, which relies on iterated regression analyses. We compare classification
results of multilevel SVM-based algorithms on public benchmark datasets with
imbalanced classes and missing values as well as real data in health
applications, and show that our multilevel SVM-based method produces fast,
more accurate, and more robust classification results.
| Talayeh Razzaghi and Oleg Roderick and Ilya Safro and Nick Marko | null | 1503.06250 | null | null |
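The two core ingredients named in the abstract, iterated-regression imputation and a cost-sensitive SVM, can be combined from standard scikit-learn pieces as below; this is a minimal sketch that omits the paper's multilevel framework:

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Iterated regression analyses fill in missing values; class_weight
# makes the SVM cost-sensitive to the imbalanced classes.
model = make_pipeline(
    IterativeImputer(max_iter=10, random_state=0),
    SVC(kernel="rbf", class_weight="balanced"),
)
# model.fit(X, y)  # X may contain np.nan entries
```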
Boosting Convolutional Features for Robust Object Proposals | cs.CV cs.AI cs.LG | Deep Convolutional Neural Networks (CNNs) have demonstrated excellent
performance in image classification, but still show room for improvement in
object-detection tasks with many categories, in particular for cluttered scenes
and occlusion. Modern detection algorithms like Regions with CNNs (Girshick et
al., 2014) rely on Selective Search (Uijlings et al., 2013) to propose regions
which with high probability represent objects, where in turn CNNs are deployed
for classification. Selective Search represents a family of sophisticated
algorithms that are engineered with multiple segmentation, appearance and
saliency cues, typically coming with a significant run-time overhead.
Furthermore, (Hosang et al., 2014) have shown that most methods suffer from low
reproducibility due to unstable superpixels, even for slight image
perturbations. Although CNNs are subsequently used for classification in
top-performing object-detection pipelines, current proposal methods are
agnostic to how these models parse objects and their rich learned
representations. As a result they may propose regions which may not resemble
high-level objects or totally miss some of them. To overcome these drawbacks we
propose a boosting approach which directly takes advantage of hierarchical CNN
features for detecting regions of interest fast. We demonstrate its performance
on the ImageNet 2013 detection benchmark and compare it with state-of-the-art
methods.
| Nikolaos Karianakis, Thomas J. Fuchs and Stefano Soatto | null | 1503.06350 | null | null |
Relaxed Leverage Sampling for Low-rank Matrix Completion | cs.IT cs.LG math.IT stat.ML | We consider the problem of exact recovery of any $m\times n$ matrix of rank
$\varrho$ from a small number of observed entries via the standard nuclear norm
minimization framework. Such low-rank matrices have degrees of freedom
$(m+n)\varrho - \varrho^2$. We show that any such low-rank matrix can be
recovered exactly from $\Theta\left(((m+n)\varrho -
\varrho^2)\log^2(m+n)\right)$ randomly sampled entries, thus matching the lower
bound on the required number of entries (in terms of degrees of freedom), with
an additional factor of $O(\log^2(m+n))$. To achieve this bound on sample size
we observe each entry with probabilities proportional to the sum of
corresponding row and column leverage scores, minus their product. We show that
this relaxation in sampling probabilities (as opposed to the sum of leverage scores
in Chen et al., 2014) can give us an $O(\varrho^2\log^2(m+n))$ additive
improvement on the (best known) sample size obtained by Chen et al., 2014, for
the nuclear norm minimization. Experiments on real data corroborate the
theoretical improvement on sample size. Further, exact recovery of $(a)$
incoherent matrices (with restricted leverage scores), and $(b)$ matrices with
only one of the row or column spaces to be incoherent, can be performed using
our relaxed leverage score sampling, via nuclear norm minimization, without
knowing the leverage scores a priori. In such settings also we can achieve
improvement on sample size.
| Abhisek Kundu | null | 1503.06379 | null | null |
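The sampling recipe in the abstract (row plus column leverage scores, minus their product) could be computed as follows; the exact normalization constants in the paper may differ:

```python
import numpy as np

def relaxed_leverage_probs(M, rank):
    """Entry-wise sampling probabilities: mu_i + nu_j - mu_i * nu_j, where
    mu_i and nu_j are row and column leverage scores of the rank-r SVD."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    mu = np.sum(U[:, :rank] ** 2, axis=1)   # row leverage scores in [0, 1]
    nu = np.sum(Vt[:rank] ** 2, axis=0)     # column leverage scores in [0, 1]
    P = mu[:, None] + nu[None, :] - np.outer(mu, nu)
    return P / P.sum()                      # normalize to a distribution
```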
Costing Generated Runtime Execution Plans for Large-Scale Machine
Learning Programs | cs.DC cs.LG | Declarative large-scale machine learning (ML) aims at the specification of ML
algorithms in a high-level language and automatic generation of hybrid runtime
execution plans ranging from single node, in-memory computations to distributed
computations on MapReduce (MR) or similar frameworks like Spark. The
compilation of large-scale ML programs exhibits many opportunities for
automatic optimization. Advanced cost-based optimization techniques
require---as a fundamental precondition---an accurate cost model for evaluating
the impact of optimization decisions. In this paper, we share insights into a
simple and robust yet accurate technique for costing alternative runtime
execution plans of ML programs. Our cost model relies on generating and costing
runtime plans in order to automatically reflect all successive optimization
phases. Costing runtime plans also captures control flow structures such as
loops and branches, and a variety of cost factors like IO, latency, and
computation costs. Finally, we linearize all these cost factors into a single
measure of expected execution time. Within SystemML, this cost model is
leveraged by several advanced optimizers like resource optimization and global
data flow optimization. We share our lessons learned in order to provide
foundations for the optimization of ML programs.
| Matthias Boehm | null | 1503.06384 | null | null |
Large-scale Log-determinant Computation through Stochastic Chebyshev
Expansions | cs.DS cs.LG cs.NA | Logarithms of determinants of large positive definite matrices appear
ubiquitously in machine learning applications including Gaussian graphical and
Gaussian process models, partition functions of discrete graphical models,
minimum-volume ellipsoids, metric learning and kernel learning. Log-determinant
computation involves the Cholesky decomposition at a cost cubic in the number
of variables, i.e., the matrix dimension, which makes it prohibitive for
large-scale applications. We propose a linear-time randomized algorithm to
approximate log-determinants for very large-scale positive definite and general
non-singular matrices using a stochastic trace approximation, called the
Hutchinson method, coupled with Chebyshev polynomial expansions that both rely
on efficient matrix-vector multiplications. We establish rigorous additive and
multiplicative approximation error bounds depending on the condition number of
the input matrix. In our experiments, the proposed algorithm provides very
high accuracy solutions orders of magnitude faster than the Cholesky
decomposition and Schur completion, and enables us to compute log-determinants
of matrices involving tens of millions of variables.
| Insu Han, Dmitry Malioutov, Jinwoo Shin | null | 1503.06394 | null | null |
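A compact sketch of the Hutchinson-plus-Chebyshev idea follows; it assumes bounds on the extreme eigenvalues are available and omits the paper's refinements and error control:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def logdet_chebyshev(A, eig_min, eig_max, degree=20, n_probes=30, seed=0):
    """log det A = tr(log A): approximate tr(.) with Hutchinson's
    Rademacher probes and log(.) with a Chebyshev polynomial, so only
    matrix-vector products with A are required."""
    n = A.shape[0]
    # Chebyshev coefficients of t -> log(x(t)), where x maps [-1, 1]
    # onto the assumed eigenvalue interval [eig_min, eig_max].
    x = lambda t: 0.5 * ((eig_max - eig_min) * t + eig_max + eig_min)
    coeffs = C.chebinterpolate(lambda t: np.log(x(t)), degree)
    # B is an affine rescaling of A with spectrum inside [-1, 1].
    Bv = lambda u: (2.0 * (A @ u) - (eig_max + eig_min) * u) / (eig_max - eig_min)
    rng = np.random.RandomState(seed)
    est = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)        # Rademacher probe
        w0, w1 = v, Bv(v)                          # T_0(B)v, T_1(B)v
        acc = coeffs[0] * (v @ w0) + coeffs[1] * (v @ w1)
        for k in range(2, degree + 1):
            w0, w1 = w1, 2.0 * Bv(w1) - w0         # three-term recurrence
            acc += coeffs[k] * (v @ w1)
        est += acc
    return est / n_probes
```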
What the F-measure doesn't measure: Features, Flaws, Fallacies and Fixes | cs.IR cs.CL cs.LG cs.NE stat.CO stat.ML | The F-measure or F-score is one of the most commonly used single number
measures in Information Retrieval, Natural Language Processing and Machine
Learning, but it is based on a mistake, and the flawed assumptions render it
unsuitable for use in most contexts! Fortunately, there are better
alternatives.
| David M. W. Powers | null | 1503.06410 | null | null |
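For reference, the measure under discussion has the standard definition below, with precision $P$ and recall $R$:

```latex
P = \frac{TP}{TP+FP}, \qquad R = \frac{TP}{TP+FN}, \qquad
F_{\beta} = \frac{(1+\beta^{2})\,P\,R}{\beta^{2}P + R}, \qquad
F_{1} = \frac{2PR}{P+R}.
```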
Asymmetric Distributions from Constrained Mixtures | stat.ML cs.LG | This paper introduces constrained mixtures for continuous distributions,
characterized by a mixture of distributions in which each component has a shape
similar to the base distribution and the components have disjoint domains. This new concept is used
to create generalized asymmetric versions of the Laplace and normal
distributions, which are shown to define exponential families, with known
conjugate priors, and to have maximum likelihood estimates for the original
parameters, with known closed-form expressions. The asymmetric and symmetric
normal distributions are compared in a linear regression example, showing that
the asymmetric version performs at least as well as the symmetric one, and in a
real world time-series problem, where a hidden Markov model is used to fit a
stock index, indicating that the asymmetric version provides higher likelihood
and may learn distribution models over states and transition distributions with
considerably less entropy.
| Conrado S. Miranda and Fernando J. Von Zuben | null | 1503.06429 | null | null |
Unsupervised model compression for multilayer bootstrap networks | cs.LG cs.NE stat.ML | Recently, multilayer bootstrap network (MBN) has demonstrated promising
performance in unsupervised dimensionality reduction. It can learn compact
representations in standard data sets, i.e. MNIST and RCV1. However, as a
bootstrap method, the prediction complexity of MBN is high. In this paper, we
propose an unsupervised model compression framework for this general problem of
unsupervised bootstrap methods. The framework compresses a large unsupervised
bootstrap model into a small model by taking the bootstrap model and its
application together as a black box and learning a mapping function from the
input of the bootstrap model to the output of the application by a supervised
learner. To specialize the framework, we propose a new technique, named
compressive MBN. It takes MBN as the unsupervised bootstrap model and deep
neural network (DNN) as the supervised learner. Our initial result on MNIST
showed that compressive MBN not only maintains the high prediction accuracy of
MBN but also is thousands of times faster than MBN at the prediction
stage. Our result suggests that the new technique integrates the effectiveness
of MBN for unsupervised learning with the effectiveness and efficiency of DNN
for supervised learning, yielding an approach that is both effective and
efficient for unsupervised learning.
| Xiao-Lei Zhang | null | 1503.06452 | null | null |
Machine Learning Methods for Attack Detection in the Smart Grid | cs.LG cs.CR cs.SY | Attack detection problems in the smart grid are posed as statistical learning
problems for different attack scenarios in which the measurements are observed
in batch or online settings. In this approach, machine learning algorithms are
used to classify measurements as being either secure or attacked. An attack
detection framework is provided to exploit any available prior knowledge about
the system and surmount constraints arising from the sparse structure of the
problem in the proposed approach. Well-known batch and online learning
algorithms (supervised and semi-supervised) are employed with decision and
feature level fusion to model the attack detection problem. The relationships
between statistical and geometric properties of attack vectors employed in the
attack scenarios and learning algorithms are analyzed to detect unobservable
attacks using statistical learning methods. The proposed algorithms are
examined on various IEEE test systems. Experimental analyses show that machine
learning algorithms can detect attacks with higher performance than attack
detection algorithms that employ state vector estimation methods in the
proposed attack detection framework.
| Mete Ozay, Inaki Esnaola, Fatos T. Yarman Vural, Sanjeev R. Kulkarni,
H. Vincent Poor | 10.1109/TNNLS.2015.2404803 | 1503.06468 | null | null |
Construction of FuzzyFind Dictionary using Golay Coding Transformation
for Searching Applications | cs.DB cs.AI cs.DS cs.IR cs.LG | Searching through a large volume of data is critical for companies,
scientists, and search engine applications due to time and memory complexity.
In this paper, a new technique for generating a FuzzyFind Dictionary for text
mining is introduced. We map the English alphabet into 23-bit vectors that
reflect the presence or absence of particular letters, using one FuzzyFind
Dictionary for 23 bits, or several dictionaries for longer representations.
This representation preserves closeness of word distortions in terms of
closeness of the created binary vectors within a Hamming distance of 2
deviations. This paper describes the Golay Coding Transformation Hash Table
and how it can be used with a FuzzyFind Dictionary as a new technique for
searching through big data. The method offers linear time complexity for
generating the dictionary and constant time complexity for accessing and
updating the data; updating with new data sets takes time linear in the
number of new data points. The technique is based on searching over letters
of the English alphabet in 23-bit segments, and it can also work with more
segments as a reference table.
| Kamran Kowsari, Maryam Yammahi, Nima Bari, Roman Vichr, Faisal Alsaby,
Simon Y. Berkovich | 10.14569/IJACSA.2015.060313 | 1503.06483 | null | null |
Optimum Reject Options for Prototype-based Classification | cs.LG | We analyse optimum reject strategies for prototype-based classifiers and
real-valued rejection measures, using the distance of a data point to the
closest prototype or probabilistic counterparts. We compare reject schemes with
global thresholds, and local thresholds for the Voronoi cells of the
classifier. For the latter, we develop a polynomial-time algorithm to compute
optimum thresholds based on a dynamic programming scheme, and we propose an
intuitive linear time, memory efficient approximation thereof with competitive
accuracy. Evaluating the performance in various benchmarks, we conclude that
local reject options are beneficial in particular for simple prototype-based
classifiers, while the improvement is less pronounced for advanced models. For
the latter, an accuracy-reject curve which is comparable to support vector
machine classifiers with state-of-the-art reject options can be reached.
| Lydia Fischer, Barbara Hammer and Heiko Wersing | null | 1503.06549 | null | null |
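A global-threshold reject rule for a nearest-prototype classifier, as described in the abstract, is a one-liner; in the local variant the threshold would depend on the winning Voronoi cell (a sketch with illustrative names):

```python
import numpy as np

def classify_with_reject(x, prototypes, labels, threshold):
    """Nearest-prototype classification with a global reject option: the
    distance to the closest prototype serves as the rejection measure."""
    d = np.linalg.norm(prototypes - x, axis=1)
    j = np.argmin(d)
    return None if d[j] > threshold else labels[j]  # None means "reject"
```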
On some provably correct cases of variational inference for topic models | cs.LG cs.DS stat.ML | Variational inference is a very efficient and popular heuristic used in
various forms in the context of latent variable models. It is closely related to
Expectation Maximization (EM), and is applied when exact EM is computationally
infeasible. Despite being immensely popular, current theoretical understanding
of the effectiveness of variational inference based algorithms is very limited.
In this work we provide the first analysis of instances where variational
inference algorithms converge to the global optimum, in the setting of topic
models.
More specifically, we show that variational inference provably learns the
optimal parameters of a topic model under natural assumptions on the topic-word
matrix and the topic priors. The properties that the topic word matrix must
satisfy in our setting are related to the topic expansion assumption introduced
in (Anandkumar et al., 2013), as well as the anchor words assumption in (Arora
et al., 2012c). The assumptions on the topic priors are related to the well
known Dirichlet prior, introduced to the area of topic modeling by (Blei et
al., 2003).
It is well known that initialization plays a crucial role in how well
variational based algorithms perform in practice. The initializations that we
use are fairly natural. One of them is similar to what is currently used in
LDA-c, the most popular implementation of variational inference for topic
models. The other one is an overlapping clustering algorithm, inspired by a
work by (Arora et al., 2014) on dictionary learning, which is very simple and
efficient.
While our primary goal is to provide insights into when variational inference
might work in practice, the multiplicative, rather than the additive nature of
the variational inference updates forces us to use fairly non-standard proof
arguments, which we believe will be of general interest.
| Pranjal Awasthi and Andrej Risteski | null | 1503.06567 | null | null |
A Machine Learning Approach to Predicting the Smoothed Complexity of
Sorting Algorithms | cs.LG cs.AI cs.CC | Smoothed analysis is a framework for analyzing the complexity of an
algorithm, acting as a bridge between average and worst-case behaviour. For
example, Quicksort and the Simplex algorithm are widely used in practical
applications, despite their heavy worst-case complexity. Smoothed complexity
aims to better characterize such algorithms. Existing theoretical bounds for
the smoothed complexity of sorting algorithms are still quite weak.
Furthermore, empirically computing the smoothed complexity via its original
definition is computationally infeasible, even for modest input sizes. In this
paper, we focus on accurately predicting the smoothed complexity of sorting
algorithms, using machine learning techniques. We propose two regression models
that take into account various properties of sorting algorithms and some of the
known theoretical results in smoothed analysis to improve prediction quality.
We show experimental results for predicting the smoothed complexity of
Quicksort, Mergesort, and optimized Bubblesort for large input sizes, therefore
filling the gap between known theoretical and empirical results.
| Bichen Shi, Michel Schellekens, Georgiana Ifrim | null | 1503.06572 | null | null |
Proficiency Comparison of LADTree and REPTree Classifiers for Credit
Risk Forecast | cs.LG | Predicting credit defaulters is a perilous task for financial industries
like banks. Identifying non-payers before granting a loan is a significant and
difficult task for the banker. Classification techniques are a good
choice for such predictive analysis, i.e., determining whether a claimant is an
honest customer or a cheat. Identifying the best classifier is a
challenging task for any practitioner, such as a banker. This motivates computer
science researchers to conduct efficient research by evaluating
different classifiers and finding the best classifier for such predictive
problems. This research work investigates the performance of the LADTree
classifier and the REPTree classifier for credit risk prediction and compares
their fitness through various measures. The German credit dataset has been
used to predict the credit risk with the help of an open source machine learning
tool.
| Lakshmi Devasena C | null | 1503.06608 | null | null |
Fusing Continuous-valued Medical Labels using a Bayesian Model | cs.LG | With the rapid increase in volume of time series medical data available
through wearable devices, there is a need to employ automated algorithms to
label data. Examples of labels include interventions, changes in activity (e.g.
sleep) and changes in physiology (e.g. arrhythmias). However, automated
algorithms tend to be unreliable resulting in lower quality care. Expert
annotations are scarce, expensive, and prone to significant inter- and
intra-observer variance. To address these problems, a Bayesian
Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable
estimate of label aggregation while accurately inferring the precision and bias
of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic
indicator) estimation from the electrocardiogram using labels from the 2006
PhysioNet/Computing in Cardiology Challenge database. It was compared to the
mean, median, and a previously proposed Expectation Maximization (EM) label
aggregation approach. While accurately predicting each labelling algorithm's
bias and precision, the root-mean-square error of the BCLA was
11.78$\pm$0.63ms, significantly outperforming the best Challenge entry
(15.37$\pm$2.13ms) as well as the EM, mean, and median voting strategies
(14.76$\pm$0.52ms, 17.61$\pm$0.55ms, and 14.43$\pm$0.57ms respectively with
$p<0.0001$).
| Tingting Zhu, Nic Dunkley, Joachim Behar, David A. Clifton, Gari D.
Clifford | 10.1007/s10439-015-1344-1 | 1503.06619 | null | null |
A Probabilistic Interpretation of Sampling Theory of Graph Signals | cs.LG | We give a probabilistic interpretation of sampling theory of graph signals.
To do this, we first define a generative model for the data using a pairwise
Gaussian random field (GRF) which depends on the graph. We show that, under
certain conditions, reconstructing a graph signal from a subset of its samples
by least squares is equivalent to performing MAP inference on an approximation
of this GRF which has a low rank covariance matrix. We then show that a
sampling set of given size with the largest associated cut-off frequency, which
is optimal from a sampling theoretic point of view, minimizes the worst case
predictive covariance of the MAP estimate on the GRF. This interpretation also
gives an intuitive explanation for the superior performance of the sampling
theoretic approach to active semi-supervised classification.
| Akshay Gadde and Antonio Ortega | null | 1503.06629 | null | null |
Using Generic Summarization to Improve Music Information Retrieval Tasks | cs.IR cs.LG cs.SD | In order to satisfy processing time constraints, many MIR tasks process only
a segment of the whole music signal. This practice may lead to decreasing
performance, since the most important information for the tasks may not be in
those processed segments. In this paper, we leverage generic summarization
algorithms, previously applied to text and speech summarization, to summarize
items in music datasets. These algorithms build summaries, that are both
concise and diverse, by selecting appropriate segments from the input signal
which makes them good candidates to summarize music as well. We evaluate the
summarization process on binary and multiclass music genre classification
tasks, by comparing the performance obtained using summarized datasets against
the performances obtained using continuous segments (which is the traditional
method used for addressing the previously mentioned time constraints) and full
songs of the same original dataset. We show that GRASSHOPPER, LexRank, LSA,
MMR, and a Support Sets-based Centrality model improve classification
performance when compared to selected 30-second baselines. We also show that
summarized datasets lead to a classification performance whose difference is
not statistically significant from using full songs. Furthermore, we make an
argument stating the advantages of sharing summarized datasets for future MIR
research.
| Francisco Raposo, Ricardo Ribeiro, David Martins de Matos | 10.1109/TASLP.2016.2541299 | 1503.06666 | null | null |
Online classifier adaptation for cost-sensitive learning | cs.LG | In this paper, we propose the problem of online cost-sensitive classifier
adaptation and the first algorithm to solve it. We assume we have a base
classifier for a cost-sensitive classification problem, but it is trained with
respect to a cost setting different from the desired one. Moreover, we also have
some training data samples streaming to the algorithm one by one. The problem
is to adapt the given base classifier to the desired cost setting using the
streaming training samples online. To solve this problem, we propose to learn a
new classifier by adding an adaptation function to the base classifier, and to
update the adaptation function parameter according to the streaming data
samples. Given an input data sample and the cost of misclassifying it, we
update the adaptation function parameter by minimizing the cost-weighted hinge
loss while respecting the previously learned parameter. The proposed
algorithm is compared to both online and offline cost-sensitive algorithms on
two cost-sensitive classification problems, and the experiments show that it
not only outperforms them in classification performance, but also requires
significantly less running time.
| Junlin Zhang, Jose Garcia | null | 1503.06745 | null | null |
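A minimal sketch of one such update, assuming labels $y \in \{-1, +1\}$ and a linear adaptation function: a subgradient step on the cost-weighted hinge loss, where the small step size implicitly keeps the parameter close to its previous value (the paper's exact proximal formulation is not reproduced here):

```python
import numpy as np

def adapt_step(w, x, y, cost, f_base, lr=0.1):
    """One online step: the adapted score is f_base(x) + w @ x, and w takes
    a subgradient step on cost * max(0, 1 - y * (f_base(x) + w @ x))."""
    if y * (f_base(x) + w @ x) < 1:    # hinge loss is active
        w = w + lr * cost * y * x      # cost-weighted subgradient step
    return w
```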
On Lower and Upper Bounds for Smooth and Strongly Convex Optimization
Problems | math.OC cs.LG | We develop a novel framework to study smooth and strongly convex optimization
algorithms, both deterministic and stochastic. Focusing on quadratic functions
we are able to examine optimization algorithms as a recursive application of
linear operators. This, in turn, reveals a powerful connection between a class
of optimization algorithms and the analytic theory of polynomials whereby new
lower and upper bounds are derived. Whereas existing lower bounds for this
setting are only valid when the dimensionality scales with the number of
iterations, our lower bound holds in the natural regime where the
dimensionality is fixed. Lastly, expressing it as an optimal solution for the
corresponding optimization problem over polynomials, as formulated by our
framework, we present a novel systematic derivation of Nesterov's well-known
Accelerated Gradient Descent method. This rather natural interpretation of AGD
contrasts with earlier ones which lacked a simple, yet solid, motivation.
| Yossi Arjevani, Shai Shalev-Shwartz, Ohad Shamir | null | 1503.06833 | null | null |
Communication Efficient Distributed Kernel Principal Component Analysis | cs.LG | Kernel Principal Component Analysis (KPCA) is a key machine learning
algorithm for extracting nonlinear features from data. In the presence of a
large volume of high dimensional data collected in a distributed fashion, it
becomes very costly to communicate all of this data to a single data center and
then perform kernel PCA. Can we perform kernel PCA on the entire dataset in a
distributed and communication efficient fashion while maintaining provable and
strong guarantees in solution quality?
In this paper, we give an affirmative answer to the question by developing a
communication efficient algorithm to perform kernel PCA in the distributed
setting. The algorithm is a clever combination of subspace embedding and
adaptive sampling techniques, and we show that the algorithm can take as input
an arbitrary configuration of distributed datasets, and compute a set of global
kernel principal components with relative error guarantees independent of the
dimension of the feature space or the total number of data points. In
particular, computing $k$ principal components with relative error $\epsilon$
over $s$ workers has communication cost $\tilde{O}(s \rho k/\epsilon+s
k^2/\epsilon^3)$ words, where $\rho$ is the average number of nonzero entries
in each data point. Furthermore, we evaluated the algorithm on large-scale
real world datasets and showed that the algorithm produces a high quality
kernel PCA solution while using significantly less communication than
alternative approaches.
| Maria-Florina Balcan, Yingyu Liang, Le Song, David Woodruff, Bo Xie | null | 1503.06858 | null | null |
A Note on Information-Directed Sampling and Thompson Sampling | cs.LG cs.AI | This note introduces three Bayesian-style multi-armed bandit algorithms:
Information-directed sampling, Thompson Sampling and Generalized Thompson
Sampling. The goal is to give an intuitive explanation for these three
algorithms and their regret bounds, and provide some derivations that are
omitted in the original papers.
| Li Zhou | null | 1503.06902 | null | null |
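As a concrete reference point for the second of the three algorithms, here is Thompson sampling for Bernoulli bandits with Beta posteriors; information-directed sampling additionally weighs information gain against instantaneous regret, which this sketch omits:

```python
import numpy as np

def thompson_sampling(pull, n_arms, horizon, seed=0):
    """Keep a Beta(a, b) posterior per arm, sample one plausible mean per
    arm each round, and play the arm with the largest sample."""
    rng = np.random.RandomState(seed)
    a = np.ones(n_arms)   # prior successes + 1
    b = np.ones(n_arms)   # prior failures + 1
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(a, b)))  # one posterior draw per arm
        reward = pull(arm)                    # environment returns 0 or 1
        a[arm] += reward
        b[arm] += 1 - reward
    return a, b
```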
PAC-Bayesian Theorems for Domain Adaptation with Specialization to
Linear Classifiers | stat.ML cs.LG | In this paper, we provide two main contributions in PAC-Bayesian theory for
domain adaptation where the objective is to learn, from a source distribution,
a well-performing majority vote on a different target distribution. On the one
hand, we propose an improvement of the previous approach proposed by Germain et
al. (2013), that relies on a novel distribution pseudodistance based on a
disagreement averaging, allowing us to derive a new tighter PAC-Bayesian domain
adaptation bound for the stochastic Gibbs classifier. We specialize it to
linear classifiers, and design a learning algorithm which shows interesting
results on a synthetic problem and on a popular sentiment annotation task. On
the other hand, we generalize these results to multisource domain adaptation
allowing us to take into account different source domains. This study opens the
door to tackle domain adaptation tasks by making use of all the PAC-Bayesian
tools.
| Pascal Germain (SIERRA), Amaury Habrard (LHC), Fran\c{c}ois
Laviolette, Emilie Morvant (LHC) | null | 1503.06944 | null | null |
Comparing published multi-label classifier performance measures to the
ones obtained by a simple multi-label baseline classifier | cs.LG | In supervised learning, simple baseline classifiers can be constructed by
only looking at the class, i.e., ignoring any other information from the
dataset. The single-label learning community frequently uses as a reference the
one which always predicts the majority class. Although a classifier might
perform worse than this simple baseline classifier, this behaviour requires a
special explanation. Aiming to motivate the community to compare experimental
results with those provided by a multi-label baseline classifier, and to call
attention to the need for special explanations for classifiers
which perform worse than the baseline, in this work we propose the use of
General_B, a multi-label baseline classifier. General_B was evaluated in
contrast to results published in the literature which were carefully selected
using a systematic review process. It was found that a considerable number of
published results on 10 frequently used datasets are worse than or equal to the
ones obtained by General_B, and for one dataset this holds for up to 43% of the
published results. Moreover, although a simple baseline classifier was
not considered in these publications, it was observed that even for very poor
results no special explanations were provided in most of them. We hope that the
findings of this work will encourage the multi-label community to consider the
idea of using a simple baseline classifier, and to provide further explanations
when a classifier performs worse than the baseline.
| Jean Metz and Newton Spola\^or and Everton A. Cherman and Maria C.
Monard | null | 1503.06952 | null | null |
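For intuition, the obvious multi-label analogue of the single-label majority-class baseline ignores the features entirely and predicts every label whose training frequency exceeds one half; General_B's exact rule is defined in the paper, so the sketch below is only an assumption-laden stand-in:

```python
import numpy as np

def fit_majority_baseline(Y_train):
    """Per-label majority vote: predict label j for every test instance
    iff label j is present in more than half of the training instances."""
    return Y_train.mean(axis=0) > 0.5

def predict_baseline(mask, n_instances):
    return np.tile(mask, (n_instances, 1))  # same labelset for everyone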
Sample compression schemes for VC classes | cs.LG | Sample compression schemes were defined by Littlestone and Warmuth (1986) as
an abstraction of the structure underlying many learning algorithms. Roughly
speaking, a sample compression scheme of size $k$ means that given an arbitrary
list of labeled examples, one can retain only $k$ of them in a way that allows
one to recover the labels of all other examples in the list. They showed that
compression implies PAC learnability for binary-labeled classes, and asked
whether the other direction holds. We answer their question and show that every
concept class $C$ with VC dimension $d$ has a sample compression scheme of size
exponential in $d$. The proof uses an approximate minimax phenomenon for binary
matrices of low VC dimension, which may be of interest in the context of game
theory.
| Shay Moran, Amir Yehudayoff | null | 1503.06960 | null | null |
Probabilistic Binary-Mask Cocktail-Party Source Separation in a
Convolutional Deep Neural Network | cs.SD cs.LG cs.NE | Separation of competing speech is a key challenge in signal processing and a
feat routinely performed by the human auditory brain. A long-standing benchmark
of the spectrogram approach to source separation is known as the ideal binary
mask. Here, we train a convolutional deep neural network, on a two-speaker
cocktail party problem, to make probabilistic predictions about binary masks.
Our results approach ideal binary mask performance, illustrating that
relatively simple deep neural networks are capable of robust binary mask
prediction. We also illustrate the trade-off between prediction statistics and
separation quality.
| Andrew J.R. Simpson | null | 1503.06962 | null | null |
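The benchmark named in the abstract, the ideal binary mask, is simple to state; the paper's network predicts such masks probabilistically rather than with this oracle rule:

```python
import numpy as np

def ideal_binary_mask(spec_target, spec_interferer):
    """A time-frequency bin is assigned to the target iff the target's
    magnitude exceeds the interferer's in that bin."""
    return (np.abs(spec_target) > np.abs(spec_interferer)).astype(float)

# Applying the mask to the mixture spectrogram estimates the target:
# est_target = ideal_binary_mask(S1, S2) * (S1 + S2)
```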
Convergence radius and sample complexity of ITKM algorithms for
dictionary learning | cs.LG cs.IT math.IT | In this work we show that iterative thresholding and K-means (ITKM)
algorithms can recover a generating dictionary with K atoms from noisy $S$
sparse signals up to an error $\tilde \varepsilon$ as long as the
initialisation is within a convergence radius, that is up to a $\log K$ factor
inversely proportional to the dynamic range of the signals, and the sample size
is proportional to $K \log K \tilde \varepsilon^{-2}$. The results are valid
for arbitrary target errors if the sparsity level is of the order of the square
root of the signal dimension $d$ and for target errors down to $K^{-\ell}$ if
$S$ scales as $S \leq d/(\ell \log K)$.
| Karin Schnass | null | 1503.07027 | null | null |
Rotation-invariant convolutional neural networks for galaxy morphology
prediction | astro-ph.IM astro-ph.GA cs.CV cs.LG cs.NE stat.ML | Measuring the morphological parameters of galaxies is a key requirement for
studying their formation and evolution. Surveys such as the Sloan Digital Sky
Survey (SDSS) have resulted in the availability of very large collections of
images, which have permitted population-wide analyses of galaxy morphology.
Morphological analysis has traditionally been carried out mostly via visual
inspection by trained experts, which is time-consuming and does not scale to
large ($\gtrsim10^4$) numbers of images.
Although attempts have been made to build automated classification systems,
these have not been able to achieve the desired level of accuracy. The Galaxy
Zoo project successfully applied a crowdsourcing strategy, inviting online
users to classify images by answering a series of questions. Unfortunately,
even this approach does not scale well enough to keep up with the increasing
availability of galaxy images.
We present a deep neural network model for galaxy morphology classification
which exploits translational and rotational symmetry. It was developed in the
context of the Galaxy Challenge, an international competition to build the best
model for morphology classification based on annotated images from the Galaxy
Zoo project.
For images with high agreement among the Galaxy Zoo participants, our model
is able to reproduce their consensus with near-perfect accuracy ($> 99\%$) for
most questions. Confident model predictions are highly accurate, which makes
the model suitable for filtering large collections of images and forwarding
challenging images to experts for manual annotation. This approach greatly
reduces the experts' workload without affecting accuracy. The application of
these algorithms to larger sets of training data will be critical for analysing
results from future surveys such as the LSST.
| Sander Dieleman, Kyle W. Willett, Joni Dambre | 10.1093/mnras/stv632 | 1503.07077 | null | null |
Analysis of Spectrum Occupancy Using Machine Learning Algorithms | cs.NI cs.LG | In this paper, we analyze the spectrum occupancy using different machine
learning techniques. Both supervised techniques (naive Bayesian classifier
(NBC), decision trees (DT), support vector machine (SVM), linear regression
(LR)) and an unsupervised algorithm (hidden Markov model (HMM)) are studied to
find the best technique with the highest classification accuracy (CA). A
detailed comparison of the supervised and unsupervised algorithms in terms of
the computational time and classification accuracy is performed. The classified
occupancy status is further utilized to evaluate the probability of secondary
user outage for the future time slots, which can be used by system designers to
define spectrum allocation and spectrum sharing policies. Numerical results
show that SVM is the best algorithm among all the supervised and unsupervised
classifiers. Based on this, we propose a new SVM algorithm by combining it
with the firefly algorithm (FFA), which is shown to outperform all other
algorithms.
| Freeha Azmat, Yunfei Chen (Senior Member, IEEE) and Nigel Stocks | null | 1503.07104 | null | null |
Universal Approximation of Markov Kernels by Shallow Stochastic
Feedforward Networks | cs.LG stat.ML | We establish upper bounds for the minimal number of hidden units for which a
binary stochastic feedforward network with sigmoid activation probabilities and
a single hidden layer is a universal approximator of Markov kernels. We show
that each possible probabilistic assignment of the states of $n$ output units,
given the states of $k\geq1$ input units, can be approximated arbitrarily well
by a network with $2^{k-1}(2^{n-1}-1)$ hidden units.
| Guido Montufar | null | 1503.07211 | null | null |
Regularized Minimax Conditional Entropy for Crowdsourcing | cs.LG stat.ML | There is a rapidly increasing interest in crowdsourcing for data labeling. By
crowdsourcing, a large number of labels can often be gathered quickly at low
cost. However, the labels provided by the crowdsourcing workers are usually not
of high quality. In this paper, we propose a minimax conditional entropy
principle to infer ground truth from noisy crowdsourced labels. Under this
principle, we derive a unique probabilistic labeling model jointly
parameterized by worker ability and item difficulty. We also propose an
objective measurement principle, and show that our method is the only method
which satisfies this objective measurement principle. We validate our method
through a variety of real crowdsourcing datasets with binary, multiclass or
ordinal labels.
| Dengyong Zhou, Qiang Liu, John C. Platt, Christopher Meek, Nihar B.
Shah | null | 1503.07240 | null | null |
Initialization Strategies of Spatio-Temporal Convolutional Neural
Networks | cs.CV cs.LG | We propose a new way of incorporating temporal information present in videos
into Spatial Convolutional Neural Networks (ConvNets) trained on images, that
avoids training Spatio-Temporal ConvNets from scratch. We describe several
initializations of weights in 3D Convolutional Layers of Spatio-Temporal
ConvNet using 2D Convolutional Weights learned from ImageNet. We show that it
is important to initialize 3D Convolutional Weights judiciously in order to
learn temporal representations of videos. We evaluate our methods on the
UCF-101 dataset and demonstrate improvement over Spatial ConvNets.
| Elman Mansimov, Nitish Srivastava, Ruslan Salakhutdinov | null | 1503.07274 | null | null |
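One natural member of the family of initializations described above replicates a pretrained 2D kernel across the temporal axis and rescales it so that a temporally constant input reproduces the 2D response; the paper evaluates several variants, and this PyTorch sketch shows only this one assumed form:

```python
import torch

def inflate_2d_to_3d(w2d: torch.Tensor, time_dim: int = 3) -> torch.Tensor:
    """Turn (out_ch, in_ch, kH, kW) 2D weights into
    (out_ch, in_ch, T, kH, kW) 3D weights by temporal replication."""
    return w2d.unsqueeze(2).repeat(1, 1, time_dim, 1, 1) / time_dim
```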
A Survey of Classification Techniques in the Area of Big Data | cs.LG | Big Data concerns large-volume, growing data sets that are complex and have
multiple autonomous sources. Earlier technologies were not able to handle
the storage and processing of such huge data, and thus the Big Data concept came
into existence. Handling unstructured data is a tedious job for users, so there
should be some mechanism that classifies unstructured data into an organized
form and helps users easily access the required data. Classification techniques
over big transactional databases provide the required data to users from large
datasets in a simpler way. There are two main types of classification
techniques, supervised and unsupervised. In this paper we focus on the study of
different supervised classification techniques, and further discuss their
advantages and limitations.
| Praful Koturwar, Sheetal Girase, Debajyoti Mukhopadhyay | null | 1503.07477 | null | null |
Stable Feature Selection from Brain sMRI | cs.LG stat.ML | Neuroimage analysis usually involves learning thousands or even millions of
variables using only a limited number of samples. In this regard, sparse
models, e.g. the lasso, are applied to select the optimal features and achieve
high diagnosis accuracy. The lasso, however, usually results in independent
unstable features. Stability, a manifestation of reproducibility of statistical
results subject to reasonable perturbations to data and the model, is an
important focus in statistics, especially in the analysis of high dimensional
data. In this paper, we explore a nonnegative generalized fused lasso model for
stable feature selection in the diagnosis of Alzheimer's disease. In addition
to sparsity, our model incorporates two important pathological priors: the
spatial cohesion of lesion voxels and the positive correlation between the
features and the disease labels. To optimize the model, we propose an efficient
algorithm by proving a novel link between total variation and fast network flow
algorithms via conic duality. Experiments show that the proposed nonnegative
model performs much better in exploring the intrinsic structure of data via
selecting stable features compared with other state-of-the-art methods.
| Bo Xin, Lingjing Hu, Yizhou Wang and Wen Gao | null | 1503.07508 | null | null |
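A plausible written-out form of the objective sketched in the abstract, with $E$ the set of spatially adjacent voxel pairs (the paper's exact formulation, e.g. edge weights, may differ):

```latex
\min_{\beta \,\ge\, 0}\;\; \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
\;+\; \lambda_1 \lVert \beta \rVert_1
\;+\; \lambda_2 \sum_{(i,j) \in E} \lvert \beta_i - \beta_j \rvert .
```

Here the fusion term encodes the spatial cohesion of lesion voxels, and the nonnegativity constraint encodes the positive-correlation prior mentioned in the abstract.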
Transductive Multi-label Zero-shot Learning | cs.LG cs.CV | Zero-shot learning has received increasing interest as a means to alleviate
the often prohibitive expense of annotating training data for large scale
recognition problems. These methods have achieved great success via learning
intermediate semantic representations in the form of attributes and more
recently, semantic word vectors. However, they have thus far been constrained
to the single-label case, in contrast to the growing popularity and importance
of more realistic multi-label data. In this paper, for the first time, we
investigate and formalise a general framework for multi-label zero-shot
learning, addressing the unique challenge therein: how to exploit multi-label
correlation at test time with no training data for those classes? In
particular, we propose (1) a multi-output deep regression model to project an
image into a semantic word space, which explicitly exploits the correlations in
the intermediate semantic layer of word vectors; (2) a novel zero-shot learning
algorithm for multi-label data that exploits the unique compositionality
property of semantic word vector representations; and (3) a transductive
learning strategy to enable the regression model learned from seen classes to
generalise well to unseen classes. Our zero-shot learning experiments on a
number of standard multi-label datasets demonstrate that our method outperforms
a variety of baselines.
| Yanwei Fu, Yongxin Yang, Tim Hospedales, Tao Xiang and Shaogang Gong | null | 1503.07790 | null | null |
Multi-Labeled Classification of Demographic Attributes of Patients: a
case study of diabetics patients | cs.LG | Automated learning of patients' demographics can be seen as a multi-label
problem where a patient model is based on different race and gender groups. The
resulting model can be further integrated into Privacy-Preserving Data Mining,
where it can be used to assess risk of identification of different patient
groups. Our project considers relations between diabetes and demographics of
patients as a multi-labelled problem. Most research in this area has been done
as binary classification, where the target class is finding if a person has
diabetes or not. But very little, and maybe no, work has been done on multi-labeled
analysis of the demographics of patients who are likely to be diagnosed with
diabetes. To identify such groups, we applied ensembles of several multi-label
learning algorithms.
| Naveen Kumar Parachur Cotha and Marina Sokolova | null | 1503.07795 | null | null |
Transductive Multi-class and Multi-label Zero-shot Learning | cs.LG cs.CV | Recently, zero-shot learning (ZSL) has received increasing interest. The key
idea underpinning existing ZSL approaches is to exploit knowledge transfer via
an intermediate-level semantic representation which is assumed to be shared
between the auxiliary and target datasets, and is used to bridge between these
domains for knowledge transfer. The semantic representation used in existing
approaches varies from visual attributes to semantic word vectors and semantic
relatedness. However, the overall pipeline is similar: a projection mapping
low-level features to the semantic representation is learned from the auxiliary
dataset by either classification or regression models and applied directly to
map each instance into the same semantic representation space where a zero-shot
classifier is used to recognise the unseen target class instances with a single
known 'prototype' of each target class. In this paper we discuss two related
lines of work improving the conventional approach: exploiting transductive
learning ZSL, and generalising ZSL to the multi-label case.
| Yanwei Fu, Yongxin Yang, Timothy M. Hospedales, Tao Xiang and Shaogang
Gong | null | 1503.07884 | null | null |
Generalized K-fan Multimodal Deep Model with Shared Representations | cs.LG stat.ML | Multimodal learning with deep Boltzmann machines (DBMs) is a generative
approach to fuse multimodal inputs, and can learn the shared representation via
Contrastive Divergence (CD) for classification and information retrieval tasks.
However, it is a 2-fan DBM model, and cannot effectively handle multiple
prediction tasks. Moreover, this model cannot recover the hidden
representations well by sampling from the conditional distribution when more
than one modalities are missing. In this paper, we propose a K-fan deep
structure model, which can handle the multi-input and multi-output learning
problems effectively. In particular, the deep structure has K-branch for
different inputs where each branch can be composed of a multi-layer deep model,
and a shared representation is learned in an discriminative manner to tackle
multimodal tasks. Given the deep structure, we propose two objective functions
to handle two multi-input and multi-output tasks: joint visual restoration and
labeling, and the multi-view multi-class object recognition tasks. To estimate
the model parameters, we initialize the deep model parameters with CD to
maximize the joint distribution, and then we use backpropagation to update the
model according to specific objective function. The experimental results
demonstrate that the model can effectively leverage multi-source information
and predict multiple tasks well over competitive baselines.
| Gang Chen and Sargur N. Srihari | null | 1503.07906 | null | null |
Competitive Distribution Estimation | cs.IT cs.DS cs.LG math.IT math.ST stat.TH | Estimating an unknown distribution from its samples is a fundamental problem
in statistics. The common, min-max, formulation of this goal considers the
performance of the best estimator over all distributions in a class. It shows
that with $n$ samples, distributions over $k$ symbols can be learned to a KL
divergence that decreases to zero with the sample size $n$, but grows
unboundedly with the alphabet size $k$.
Min-max performance can be viewed as regret relative to an oracle that knows
the underlying distribution. We consider two natural and modest limits on the
oracle's power. One where it knows the underlying distribution only up to
symbol permutations, and the other where it knows the exact distribution but is
restricted to use natural estimators that assign the same probability to
symbols that appeared equally many times in the sample.
We show that in both cases the competitive regret reduces to
$\min(k/n,\tilde{\mathcal{O}}(1/\sqrt n))$, a quantity upper bounded uniformly
for every alphabet size. This shows that distributions can be estimated nearly
as well as when they are essentially known in advance, and nearly as well as
when they are completely known in advance but need to be estimated via a
natural estimator. We also provide an estimator that runs in linear time and
incurs competitive regret of $\tilde{\mathcal{O}}(\min(k/n,1/\sqrt n))$, and
show that for natural estimators this competitive regret is inevitable. We also
demonstrate the effectiveness of competitive estimators using simulations.
| Alon Orlitsky and Ananda Theertha Suresh | null | 1503.07940 | null | null |
Bayesian Cross Validation and WAIC for Predictive Prior Design in
Regular Asymptotic Theory | cs.LG stat.ML | Prior design is one of the most important problems in both statistics and
machine learning. The cross validation (CV) and the widely applicable
information criterion (WAIC) are predictive measures of the Bayesian
estimation, however, it has been difficult to apply them to find the optimal
prior because their mathematical properties in prior evaluation have been
unknown and the region of the hyperparameters is too wide to be examined. In
this paper, we derive a new formula by which the theoretical relation among CV,
WAIC, and the generalization loss is clarified and the optimal hyperparameter
can be directly found.
By the formula, three facts are clarified about predictive prior design.
Firstly, CV and WAIC have the same second order asymptotic expansion, hence
they are asymptotically equivalent to each other as the optimizer of the
hyperparameter. Secondly, the hyperparameter which minimizes CV or WAIC makes
the average generalization loss to be minimized asymptotically but does not the
random generalization loss. And lastly, by using the mathematical relation
between priors, the variances of the optimized hyperparameters by CV and WAIC
are made smaller with small computational costs. Also we show that the
optimized hyperparameter by DIC or the marginal likelihood does not minimize
the average or random generalization loss in general.
| Sumio Watanabe | null | 1503.07970 | null | null |
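A minimal sketch of how WAIC is typically computed from posterior samples,
using the standard decomposition into the log pointwise predictive density and
a functional-variance penalty; the matrix layout and variable names are
assumptions for illustration, not code from the paper.

    import numpy as np
    from scipy.special import logsumexp

    def waic(log_lik):
        """log_lik: (S, n) array of log p(y_i | theta_s) over S posterior draws."""
        S = log_lik.shape[0]
        lppd = logsumexp(log_lik, axis=0) - np.log(S)  # log pointwise predictive density
        p_waic = log_lik.var(axis=0, ddof=1)           # functional variance penalty
        return float(-(lppd - p_waic).sum())           # smaller is better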
Discriminative Bayesian Dictionary Learning for Classification | cs.CV cs.LG | We propose a Bayesian approach to learn discriminative dictionaries for
sparse representation of data. The proposed approach infers probability
distributions over the atoms of a discriminative dictionary using a Beta
Process. It also computes sets of Bernoulli distributions that associate class
labels to the learned dictionary atoms. This association signifies the
selection probabilities of the dictionary atoms in the expansion of
class-specific data. Furthermore, the non-parametric character of the proposed
approach allows it to infer the correct size of the dictionary. We exploit the
aforementioned Bernoulli distributions in separately learning a linear
classifier. The classifier uses the same hierarchical Bayesian model as the
dictionary, which we present along with the analytical inference solution for Gibbs
sampling. For classification, a test instance is first sparsely encoded over
the learned dictionary and the codes are fed to the classifier. We performed
experiments for face and action recognition; and object and scene-category
classification using five public datasets and compared the results with
state-of-the-art discriminative sparse representation approaches. Experiments
show that the proposed Bayesian approach consistently outperforms the existing
approaches.
| Naveed Akhtar, Faisal Shafait, Ajmal Mian | null | 1503.07989 | null | null |
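The classification stage described above (sparsely encode a test instance over
the learned dictionary, then feed the codes to a linear classifier) can be
sketched as below; the OMP coder and logistic-regression classifier are
illustrative stand-ins for the paper's Bayesian dictionary and classifier.

    import numpy as np
    from sklearn.decomposition import SparseCoder
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    D = rng.standard_normal((50, 20))               # 50 atoms in a 20-d feature space
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms for OMP
    X, y = rng.standard_normal((200, 20)), rng.integers(0, 2, 200)

    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=5)
    codes = coder.transform(X)                      # sparse codes over the dictionary
    clf = LogisticRegression(max_iter=1000).fit(codes, y)
    print(clf.predict(coder.transform(X[:3])))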
RankMap: A Platform-Aware Framework for Distributed Learning from Dense
Datasets | cs.DC cs.LG | This paper introduces RankMap, a platform-aware end-to-end framework for
efficient execution of a broad class of iterative learning algorithms for
massive and dense datasets. Our framework exploits the structure of the data
to factorize it into an ensemble of lower-rank subspaces. The factorization creates sparse
low-dimensional representations of the data, a property which is leveraged to
devise effective mapping and scheduling of iterative learning algorithms on the
distributed computing machines. We provide two APIs, one matrix-based and one
graph-based, which facilitate automated adoption of the framework for
performing several contemporary learning applications. To demonstrate the
utility of RankMap, we solve sparse recovery and power iteration problems on
various real-world datasets with up to 1.8 billion non-zeros. Our evaluations
are performed on Amazon EC2 and IBM iDataPlex servers using up to 244 cores.
The results demonstrate up to two orders of magnitude improvements in memory
usage, execution speed, and bandwidth compared with the best reported prior
work, while achieving the same level of learning accuracy.
| Azalia Mirhoseini, Eva L. Dyer, Ebrahim.M. Songhori, Richard G.
Baraniuk, Farinaz Koushanfar | null | 1503.08169 | null | null |
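The computational benefit of the factored representation can be sketched as
follows: once the data matrix is (approximately) factorized as A = D C with a
small dense D and a sparse C, an iterative method such as power iteration only
ever multiplies by the factors. The sizes and density below are illustrative.

    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(0)
    n, r = 20000, 30
    D = rng.standard_normal((n, r))                            # small dense factor
    C = sp.random(r, n, density=0.05, random_state=0).tocsr()  # sparse coefficients

    x = rng.standard_normal(n)
    for _ in range(50):                  # power iteration on A^T A with A = D @ C
        x = C.T @ (D.T @ (D @ (C @ x)))
        x /= np.linalg.norm(x)           # x converges to the top right singular vector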
A Variance Reduced Stochastic Newton Method | cs.LG | Quasi-Newton methods are widely used in practise for convex loss minimization
problems. These methods exhibit good empirical performance on a wide variety of
tasks and enjoy super-linear convergence to the optimal solution. For
large-scale learning problems, stochastic Quasi-Newton methods have been
recently proposed. However, these typically only achieve sub-linear convergence
rates and have not been shown to consistently perform well in practice since
noisy Hessian approximations can exacerbate the effect of high-variance
stochastic gradient estimates. In this work we propose Vite, a novel stochastic
Quasi-Newton algorithm that uses an existing first-order technique to reduce
this variance. Without exploiting the specific form of the approximate Hessian,
we show that Vite reaches the optimum at a geometric rate with a constant
step-size when dealing with smooth strongly convex functions. Empirically, we
demonstrate improvements over existing stochastic Quasi-Newton and variance
reduced stochastic gradient methods.
| Aurelien Lucchi, Brian McWilliams, Thomas Hofmann | null | 1503.08316 | null | null |
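A schematic of the variance-reduction idea for a ridge-regularized
least-squares objective: stochastic gradients are corrected by a snapshot full
gradient (as in SVRG) and preconditioned by a fixed, crude Hessian
approximation. This is an illustrative sketch of the mechanism, not the Vite
algorithm's actual curvature updates.

    import numpy as np

    def variance_reduced_newton(X, y, lam=0.1, eta=0.5, epochs=20, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        Hinv = np.linalg.inv(X.T @ X / n + lam * np.eye(d))  # fixed Hessian approximation
        w = np.zeros(d)
        for _ in range(epochs):
            w_snap = w.copy()
            g_full = X.T @ (X @ w_snap - y) / n + lam * w_snap  # snapshot full gradient
            for i in rng.permutation(n):
                gi = X[i] * (X[i] @ w - y[i]) + lam * w
                gs = X[i] * (X[i] @ w_snap - y[i]) + lam * w_snap
                w = w - eta * Hinv @ (gi - gs + g_full)         # variance-reduced step
        return w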
Risk Bounds for the Majority Vote: From a PAC-Bayesian Analysis to a
Learning Algorithm | stat.ML cs.LG | We propose an extensive analysis of the behavior of majority votes in binary
classification. In particular, we introduce a risk bound for majority votes,
called the C-bound, that takes into account the average quality of the voters
and their average disagreement. We also propose an extensive PAC-Bayesian
analysis that shows how the C-bound can be estimated from various observations
contained in the training data. The analysis intends to be self-contained and
can be used as introductory material to PAC-Bayesian statistical learning
theory. It starts from a general PAC-Bayesian perspective and ends with
uncommon PAC-Bayesian bounds. Some of these bounds contain no Kullback-Leibler
divergence and others allow kernel functions to be used as voters (via the
sample compression setting). Finally, out of the analysis, we propose the MinCq
learning algorithm that basically minimizes the C-bound. MinCq reduces to a
simple quadratic program. Aside from being theoretically grounded, MinCq
achieves state-of-the-art performance, as shown in our extensive empirical
comparison with both AdaBoost and the Support Vector Machine.
| Pascal Germain, Alexandre Lacasse, Fran\c{c}ois Laviolette, Mario
Marchand, Jean-Francis Roy | null | 1503.08329 | null | null |
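The C-bound mentioned in the preceding abstract can be computed from the first
two moments of the margin of the weighted majority vote; a minimal sketch,
assuming voters output plus/minus one and the first margin moment is positive
(variable names are illustrative):

    import numpy as np

    def c_bound(votes, y, q):
        """votes: (m, n) in {-1,+1}; y: (n,) in {-1,+1}; q: (m,) weights summing to 1."""
        margins = y * (q @ votes)            # margin of the weighted majority vote
        m1, m2 = margins.mean(), (margins ** 2).mean()
        assert m1 > 0, "the C-bound requires a positive first margin moment"
        return 1.0 - m1 ** 2 / m2            # upper bound on the majority-vote risk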
Sparse Linear Regression With Missing Data | stat.ML cs.LG stat.ME | This paper proposes a fast and accurate method for sparse regression in the
presence of missing data. The underlying statistical model encapsulates the
low-dimensional structure of the incomplete data matrix and the sparsity of the
regression coefficients, and the proposed algorithm jointly learns the
low-dimensional structure of the data and a linear regressor with sparse
coefficients. The proposed stochastic optimization method, Sparse Linear
Regression with Missing Data (SLRM), performs an alternating minimization
procedure and scales well with the problem size. Large deviation inequalities
shed light on the impact of the various problem-dependent parameters on the
expected squared loss of the learned regressor. Extensive simulations on both
synthetic and real datasets show that SLRM performs better than competing
algorithms in a variety of contexts.
| Ravi Ganti and Rebecca M. Willett | null | 1503.08348 | null | null |
Active Model Aggregation via Stochastic Mirror Descent | stat.ML cs.AI cs.LG | We consider the problem of learning convex aggregation of models, that is as
good as the best convex aggregation, for the binary classification problem.
Working in the stream based active learning setting, where the active learner
has to make a decision on-the-fly, if it wants to query for the label of the
point currently seen in the stream, we propose a stochastic-mirror descent
algorithm, called SMD-AMA, with entropy regularization. We establish an excess
risk bound for the loss of the convex aggregate returned by SMD-AMA of the
order of $O\left(\sqrt{\frac{\log(M)}{T^{1-\mu}}}\right)$, where $\mu\in
[0,1)$ is an algorithm-dependent parameter that trades off the number of
labels queried against the excess risk.
| Ravi Ganti | null | 1503.08363 | null | null |
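Stochastic mirror descent with entropy regularization on the simplex reduces
to a multiplicative (exponentiated-gradient) update, a plausible core of the
procedure above; the logistic-loss aggregation below is an illustrative
setting, not the paper's exact algorithm.

    import numpy as np

    def smd_step(q, grad, eta):
        """Entropy-regularized mirror descent step on the probability simplex."""
        q = q * np.exp(-eta * grad)
        return q / q.sum()

    # toy usage: aggregate M base models' scores under logistic loss
    rng = np.random.default_rng(0)
    M, T = 5, 1000
    q = np.ones(M) / M
    for t in range(T):
        preds = rng.standard_normal(M)             # base models' scores for one example
        y = rng.choice([-1.0, 1.0])
        s = q @ preds                              # convex aggregate's score
        grad = -y * preds / (1.0 + np.exp(y * s))  # d/dq of log(1 + exp(-y s))
        q = smd_step(q, grad, eta=0.1)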
Global Bandits | cs.LG | Multi-armed bandits (MAB) model sequential decision making problems, in which
a learner sequentially chooses arms with unknown reward distributions in order
to maximize its cumulative reward. Most of the prior work on MAB assumes that
the reward distributions of each arm are independent. But in a wide variety of
decision problems -- from drug dosage to dynamic pricing -- the expected
rewards of different arms are correlated, so that selecting one arm provides
information about the expected rewards of other arms as well. We propose and
analyze a class of models of such decision problems, which we call {\em global
bandits}. In the case in which rewards of all arms are deterministic functions
of a single unknown parameter, we construct a greedy policy that achieves {\em
bounded regret}, with a bound that depends on the single true parameter of the
problem. Hence, this policy selects suboptimal arms only finitely many times
with probability one. For this case we also obtain a bound on regret that is
{\em independent of the true parameter}; this bound is sub-linear, with an
exponent that depends on the informativeness of the arms. We also propose a
variant of the greedy policy that achieves $\tilde{\mathcal{O}}(\sqrt{T})$
worst-case and $\mathcal{O}(1)$ parameter dependent regret. Finally, we perform
experiments on dynamic pricing and show that the proposed algorithms achieve
significant gains with respect to the well-known benchmarks.
| Onur Atan, Cem Tekin, Mihaela van der Schaar | null | 1503.08370 | null | null |
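The greedy policy for the deterministic-functions case can be sketched as
follows: every round, invert the observed rewards through the known reward
curves to re-estimate the single parameter on a grid, then pull the arm whose
curve is highest at the estimate. The reward functions and noise level below
are illustrative.

    import numpy as np

    def greedy_global_bandit(mu_fns, theta_true, T=200, noise=0.1, seed=0):
        rng = np.random.default_rng(seed)
        grid = np.linspace(0.0, 1.0, 501)
        curves = np.array([[f(g) for g in grid] for f in mu_fns])  # (K, |grid|)
        theta_hat, sq_err, pulls = 0.5, np.zeros_like(grid), []
        for t in range(T):
            k = int(np.argmax([f(theta_hat) for f in mu_fns]))     # greedy arm
            r = mu_fns[k](theta_true) + noise * rng.standard_normal()
            sq_err += (curves[k] - r) ** 2        # least-squares inversion on the grid
            theta_hat = grid[int(np.argmin(sq_err))]
            pulls.append(k)
        return theta_hat, pulls

    mu_fns = [lambda th: th, lambda th: 1.0 - th, lambda th: 4 * th * (1 - th)]
    print(greedy_global_bandit(mu_fns, theta_true=0.8)[0])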
Towards Easier and Faster Sequence Labeling for Natural Language
Processing: A Search-based Probabilistic Online Learning Framework (SAPO) | cs.LG cs.AI | There are two major approaches for sequence labeling. One is the
probabilistic gradient-based methods such as conditional random fields (CRF)
and neural networks (e.g., RNN), which have high accuracy but drawbacks: slow
training, and no support of search-based optimization (which is important in
many cases). The other is the search-based learning methods such as structured
perceptron and margin infused relaxed algorithm (MIRA), which have fast
training but also drawbacks: low accuracy, no probabilistic information, and
non-convergence in real-world tasks. We propose a novel and "easy" solution, a
search-based probabilistic online learning method, to address most of those
issues. The method is "easy", because the optimization algorithm at the
training stage is as simple as the decoding algorithm at the test stage. This
method searches the output candidates, derives probabilities, and conducts
efficient online learning. We show that this method, which trains fast, has a
theoretical guarantee of convergence, and is easy to implement, supports
search-based optimization and obtains top accuracy. Experiments on well-known
tasks show that our method has better accuracy than CRF and BiLSTM\footnote{The
SAPO code is released at \url{https://github.com/lancopku/SAPO}.}.
| Xu Sun, Shuming Ma, Yi Zhang, Xuancheng Ren | null | 1503.08381 | null | null |
Towards More Efficient SPSD Matrix Approximation and CUR Matrix
Decomposition | cs.LG | Symmetric positive semi-definite (SPSD) matrix approximation methods have
been extensively used to speed up large-scale eigenvalue computation and kernel
learning methods. The standard sketch based method, which we call the prototype
model, produces relatively accurate approximations, but is inefficient on large
square matrices. The Nystr\"om method is highly efficient, but can only achieve
low accuracy. In this paper we propose a novel model that we call the {\it fast
SPSD matrix approximation model}. The fast model is nearly as efficient as the
Nystr\"om method and as accurate as the prototype model. We show that the fast
model can potentially solve eigenvalue problems and kernel learning problems in
linear time with respect to the matrix size $n$ to achieve $1+\epsilon$
relative-error, whereas both the prototype model and the Nystr\"om method cost
at least quadratic time to attain comparable error bound. Empirical comparisons
among the prototype model, the Nystr\"om method, and our fast model demonstrate
the superiority of the fast model. We also contribute new understandings of the
Nystr\"om method. The Nystr\"om method is a special instance of our fast model
and is approximation to the prototype model. Our technique can be
straightforwardly applied to make the CUR matrix decomposition more efficiently
computed without much affecting the accuracy.
| Shusen Wang and Zhihua Zhang and Tong Zhang | null | 1503.08395 | null | null |
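For reference, the standard Nystrom construction that the fast model
generalizes can be sketched in a few lines: sample m columns C of the SPSD
matrix K, take the corresponding m-by-m block W, and approximate K by
C W^+ C^T. The RBF kernel and uniform sampling below are illustrative.

    import numpy as np

    def nystrom(K, m, seed=0):
        """Rank-m Nystrom approximation of an n x n SPSD matrix K."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(K.shape[0], size=m, replace=False)  # uniform column sampling
        C = K[:, idx]
        W = K[np.ix_(idx, idx)]
        return C @ np.linalg.pinv(W) @ C.T                   # K ~ C W^+ C^T

    X = np.random.default_rng(1).standard_normal((500, 10))
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K = np.exp(-sq / 10.0)                                   # RBF kernel matrix
    K_hat = nystrom(K, m=50)
    print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))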
Cross-validation of matching correlation analysis by resampling matching
weights | stat.ML cs.LG | The strength of association between a pair of data vectors is represented by
a nonnegative real number, called matching weight. For dimensionality
reduction, we consider a linear transformation of data vectors, and define a
matching error as the weighted sum of squared distances between transformed
vectors with respect to the matching weights. Given data vectors and matching
weights, the optimal linear transformation minimizing the matching error is
solved by the spectral graph embedding of Yan et al. (2007). This method is a
generalization of canonical correlation analysis, and will be called matching
correlation analysis (MCA). In this paper, we consider a novel
sampling scheme where the observed matching weights are randomly sampled from
underlying true matching weights with small probability, whereas the data
vectors are treated as constants. We then investigate a cross-validation by
resampling the matching weights. Our asymptotic theory shows that the
cross-validation, if rescaled properly, computes an unbiased estimate of the
matching error with respect to the true matching weights. Existing ideas of
cross-validation for resampling data vectors, instead of resampling matching
weights, are not applicable here. MCA can be used for data vectors from
multiple domains with different dimensions via an embarrassingly simple idea of
coding the data vectors. This method will be called cross-domain matching
correlation analysis (CDMCA), and an interesting connection to the classical
associative memory model of neural networks is also discussed.
| Hidetoshi Shimodaira | null | 1503.08471 | null | null |
Average Distance Queries through Weighted Samples in Graphs and Metric
Spaces: High Scalability with Tight Statistical Guarantees | cs.SI cs.LG | The average distance from a node to all other nodes in a graph, or from a
query point in a metric space to a set of points, is a fundamental quantity in
data analysis. The inverse of the average distance, known as the (classic)
closeness centrality of a node, is a popular importance measure in the study of
social networks. We develop novel structural insights on the sparsifiability of
the distance relation via weighted sampling. Based on that, we present highly
practical algorithms with strong statistical guarantees for fundamental
problems. We show that the average distance (and hence the centrality) for all
nodes in a graph can be estimated using $O(\epsilon^{-2})$ single-source
distance computations. For a set $V$ of $n$ points in a metric space, we show
that after preprocessing which uses $O(n)$ distance computations we can compute
a weighted sample $S\subset V$ of size $O(\epsilon^{-2})$ such that the average
distance from any query point $v$ to $V$ can be estimated from the distances
from $v$ to $S$. Finally, we show that for a set of points $V$ in a metric
space, we can estimate the average pairwise distance using $O(n+\epsilon^{-2})$
distance computations. The estimate is based on a weighted sample of
$O(\epsilon^{-2})$ pairs of points, which is computed using $O(n)$ distance
computations. Our estimates are unbiased with normalized mean square error
(NRMSE) of at most $\epsilon$. Increasing the sample size by a $O(\log n)$
factor ensures that the probability that the relative error exceeds $\epsilon$
is polynomially small.
| Shiri Chechik and Edith Cohen and Haim Kaplan | null | 1503.08528 | null | null |
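The flavor of estimating an average distance from a weighted sample can be
illustrated with a generic Horvitz-Thompson estimator: if point u is included
in the sample independently with probability p_u, then summing d(v, u)/(n p_u)
over the sample is unbiased for the average distance from v. The inclusion
probabilities below are illustrative, not the paper's carefully constructed
weights.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 100000, 5
    points = rng.standard_normal((n, d))
    v = rng.standard_normal(d)                    # query point

    p = np.full(n, 0.01)                          # illustrative inclusion probabilities
    sampled = rng.random(n) < p
    dists = np.linalg.norm(points[sampled] - v, axis=1)
    estimate = np.sum(dists / (n * p[sampled]))   # Horvitz-Thompson estimate
    exact = np.linalg.norm(points - v, axis=1).mean()
    print(estimate, exact)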
Infinite Author Topic Model based on Mixed Gamma-Negative Binomial
Process | stat.ML cs.IR cs.LG | Incorporating the side information of text corpus, i.e., authors, time
stamps, and emotional tags, into the traditional text mining models has gained
significant interests in the area of information retrieval, statistical natural
language processing, and machine learning. One branch of these works is the
so-called Author Topic Model (ATM), which incorporates the authors' interests
as side information into the classical topic model. However, the existing ATM
needs to predefine the number of topics, which is difficult and inappropriate
in many real-world settings. In this paper, we propose an Infinite Author Topic
(IAT) model to resolve this issue. Instead of assigning a discrete probability
over a fixed number of topics, we use a stochastic process to determine the number
of topics from the data itself. To be specific, we extend a gamma-negative
binomial process to three levels in order to capture the
author-document-keyword hierarchical structure. Furthermore, each document is
assigned a mixed gamma process that accounts for the multiple authors'
contributions to this document. An efficient Gibbs sampling inference
algorithm with each conditional distribution being closed-form is developed for
the IAT model. Experiments on several real-world datasets show the capabilities
of our IAT model to learn the hidden topics, authors' interests on these topics
and the number of topics simultaneously.
| Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo | null | 1503.08535 | null | null |
Nonparametric Relational Topic Models through Dependent Gamma Processes | stat.ML cs.CL cs.IR cs.LG | Traditional Relational Topic Models provide a way to discover the hidden
topics from a document network. Many theoretical and practical tasks, such as
dimensionality reduction, document clustering, and link prediction, benefit from this
revealed knowledge. However, existing relational topic models are based on an
assumption that the number of hidden topics is known in advance, and this is
impractical in many real-world applications. Therefore, in order to relax this
assumption, we propose a nonparametric relational topic model in this paper.
Instead of using fixed-dimensional probability distributions in its generative
model, we use stochastic processes. Specifically, a gamma process is assigned
to each document, which represents the topic interest of this document.
Although this method provides an elegant solution, it brings additional
challenges when mathematically modeling the inherent network structure of
a typical document network, i.e., two spatially closer documents tend to have
more similar topics. Furthermore, we require that the topics are shared by all
the documents. In order to resolve these challenges, we use a subsampling
strategy to assign each document a different gamma process from the global
gamma process, and the subsampling probabilities of documents are assigned with
a Markov Random Field constraint that inherits the document network structure.
Through the designed posterior inference algorithm, we can discover the hidden
topics and its number simultaneously. Experimental results on both synthetic
and real-world network datasets demonstrate the capabilities of learning the
hidden topics and, more importantly, the number of topics.
| Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu, Xiangfeng Luo | null | 1503.08542 | null | null |
LSHTC: A Benchmark for Large-Scale Text Classification | cs.IR cs.CL cs.LG | LSHTC is a series of challenges which aims to assess the performance of
classification systems in large-scale classification involving a large number of
classes (up to hundreds of thousands). This paper describes the datasets that
have been released along with the LSHTC series. The paper details the construction
of the datasets and the design of the tracks as well as the evaluation measures
that we implemented and a quick overview of the results. All of these datasets
are available online and runs may still be submitted on the online server of
the challenges.
| Ioannis Partalas, Aris Kosmopoulos, Nicolas Baskiotis, Thierry
Artieres, George Paliouras, Eric Gaussier, Ion Androutsopoulos, Massih-Reza
Amini, Patrick Galinari | null | 1503.08581 | null | null |
Sparse plus low-rank autoregressive identification in neuroimaging time
series | cs.LG cs.SY | This paper considers the problem of identifying multivariate autoregressive
(AR) sparse plus low-rank graphical models. Based on the corresponding problem
formulation recently presented, we use the alternating direction method of
multipliers (ADMM) to efficiently solve it and scale it to sizes encountered in
neuroimaging applications. We apply this decomposition on synthetic and real
neuroimaging datasets with a specific focus on the information encoded in the
low-rank structure of our model. In particular, we illustrate that this
information captures the spatio-temporal structure of the original data,
generalizing classical component analysis approaches.
| Rapha\"el Li\'egeois, Bamdev Mishra, Mattia Zorzi, Rodolphe Sepulchre | null | 1503.08639 | null | null |
Comparison of Bayesian predictive methods for model selection | stat.ME cs.LG | The goal of this paper is to compare several widely used Bayesian model
selection methods in practical model selection problems, highlight their
differences and give recommendations about the preferred approaches. We focus
on the variable subset selection for regression and classification and perform
several numerical experiments using both simulated and real world data. The
results show that the optimization of a utility estimate such as the
cross-validation (CV) score is liable to finding overfitted models due to
relatively high variance in the utility estimates when the data is scarce. This
can also lead to substantial selection induced bias and optimism in the
performance evaluation for the selected model. From a predictive viewpoint,
best results are obtained by accounting for model uncertainty by forming the
full encompassing model, such as the Bayesian model averaging solution over the
candidate models. If the encompassing model is too complex, it can be robustly
simplified by the projection method, in which the information of the full model
is projected onto the submodels. This approach is substantially less prone to
overfitting than selection based on CV-score. Overall, the projection method
appears to outperform also the maximum a posteriori model and the selection of
the most probable variables. The study also demonstrates that the model
selection can greatly benefit from using cross-validation outside the searching
process both for guiding the model size selection and assessing the predictive
performance of the finally selected model.
| Juho Piironen, Aki Vehtari | 10.1007/s11222-016-9649-y | 1503.08650 | null | null |
Founding Digital Currency on Imprecise Commodity | cs.CY cs.LG | Current digital currency schemes provide instantaneous exchange on precise
commodity, in which "precise" means a buyer can possibly verify the function of
the commodity without error. However, imprecise commodities, e.g. statistical
data, with error existing are abundant in digital world. Existing digital
currency schemes do not offer a mechanism to help the buyer for payment
decision on precision of commodity, which may lead the buyer to a dilemma
between having to buy and being unconfident. In this paper, we design a
currency schemes IDCS for imprecise digital commodity. IDCS completes a trade
in three stages of handshake between a buyer and providers. We present an IDCS
prototype implementation that assigns weights on the trustworthy of the
providers, and calculates a confidence level for the buyer to decide the
quality of a imprecise commodity. In experiment, we characterize the
performance of IDCS prototype under varying impact factors.
| Zimu Yuan, Zhiwei Xu | null | 1503.08818 | null | null |
Decentralized learning for wireless communications and networking | math.OC cs.IT cs.LG cs.MA cs.SY math.IT stat.ML | This chapter deals with decentralized learning algorithms for in-network
processing of graph-valued data. A generic learning problem is formulated and
recast into a separable form, which is iteratively minimized using the
alternating-direction method of multipliers (ADMM) so as to gain the desired
degree of parallelization. Without exchanging elements from the distributed
training sets and keeping inter-node communications at affordable levels, the
local (per-node) learners consent to the desired quantity inferred globally,
meaning the one obtained if the entire training data set were centrally
available. Impact of the decentralized learning framework to contemporary
wireless communications and networking tasks is illustrated through case
studies including target tracking using wireless sensor networks, unveiling
Internet traffic anomalies, power system state estimation, as well as spectrum
cartography for wireless cognitive radio networks.
| Georgios B. Giannakis, Qing Ling, Gonzalo Mateos, Ioannis D. Schizas,
and Hao Zhu | null | 1503.08855 | null | null |
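The ADMM-based consensus recipe in the preceding abstract can be sketched for
distributed least squares: each node solves a small regularized local problem,
the nodes average to a consensus variable, and dual variables absorb the
disagreement. The plain least-squares objective and synchronous updates are
illustrative simplifications.

    import numpy as np

    def consensus_admm_ls(Xs, ys, rho=1.0, iters=100):
        """Node j holds (Xs[j], ys[j]); all nodes agree on a common regressor z."""
        J, d = len(Xs), Xs[0].shape[1]
        w = [np.zeros(d) for _ in range(J)]
        u = [np.zeros(d) for _ in range(J)]
        z = np.zeros(d)
        for _ in range(iters):
            for j in range(J):   # local primal updates (run in parallel per node)
                A = Xs[j].T @ Xs[j] + rho * np.eye(d)
                w[j] = np.linalg.solve(A, Xs[j].T @ ys[j] + rho * (z - u[j]))
            z = np.mean([w[j] + u[j] for j in range(J)], axis=0)  # consensus step
            for j in range(J):   # dual ascent on the disagreement
                u[j] += w[j] - z
        return z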
Fast Label Embeddings for Extremely Large Output Spaces | cs.LG | Many modern multiclass and multilabel problems are characterized by
increasingly large output spaces. For these problems, label embeddings have
been shown to be a useful primitive that can improve computational and
statistical efficiency. In this work we utilize a correspondence between rank
constrained estimation and low dimensional label embeddings that uncovers a
fast label embedding algorithm which works in both the multiclass and
multilabel settings. The result is a randomized algorithm for partial least
squares, whose running time is exponentially faster than naive algorithms. We
demonstrate our techniques on two large-scale public datasets, from the Large
Scale Hierarchical Text Challenge and the Open Directory Project, where we
obtain state of the art results.
| Paul Mineiro and Nikos Karampatziakis | null | 1503.08873 | null | null |
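The rank-constrained/label-embedding correspondence can be conveyed with a
much-simplified sketch: project the label matrix to r dimensions with a random
map and regress inputs onto the embedded labels. The paper's randomized
partial-least-squares algorithm is more refined; the Gaussian projection here
is only illustrative.

    import numpy as np

    def random_label_embedding(X, Y, r, seed=0):
        """X: (n, d) inputs; Y: (n, L) label indicators; r: embedding dimension."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((Y.shape[1], r)) / np.sqrt(r)  # random label map
        Z = Y @ Omega                                # r-dimensional embedded labels
        W, *_ = np.linalg.lstsq(X, Z, rcond=None)    # regress inputs onto embeddings
        return W, Omega                              # score labels via (X @ W) @ Omega.T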
Multi-label Classification using Labels as Hidden Nodes | stat.ML cs.LG | Competitive methods for multi-label classification typically invest in
learning labels together. To do so in a beneficial way, analysis of label
dependence is often seen as a fundamental step, separate and prior to
constructing a classifier. Some methods invest up to hundreds of times more
computational effort in building dependency models, than training the final
classifier itself. We extend some recent discussion in the literature and
provide a deeper analysis, namely, developing the view that label dependence is
often introduced by an inadequate base classifier, rather than being inherent
to the data or underlying concept; showing how even an exhaustive analysis of
label dependence may not lead to an optimal classification structure. Viewing
labels as additional features (a transformation of the input), we create novel
neural-network-inspired methods that remove the emphasis on a prior
dependency structure. Our methods have an important advantage particular to
multi-label data: they leverage labels to create effective units in middle
layers, rather than learning these units from scratch in an unsupervised
fashion with gradient-based methods. Results are promising. The methods we
propose perform competitively, and also have very important qualities of
scalability.
| Jesse Read and Jaakko Hollm\'en | null | 1503.09022 | null | null |
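One concrete instance of "labels as hidden nodes" is a two-layer stacking
scheme: per-label probabilistic classifiers form a label-valued middle layer,
whose activations are appended to the inputs for a second layer. This is an
illustrative reading of the idea, not the paper's exact architecture.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_label_hidden_layer(X, Y):
        """X: (n, d) inputs; Y: (n, L) binary label matrix."""
        L = Y.shape[1]
        layer1 = [LogisticRegression(max_iter=1000).fit(X, Y[:, j]) for j in range(L)]
        H = np.column_stack([c.predict_proba(X)[:, 1] for c in layer1])  # label units
        XH = np.hstack([X, H])                       # inputs plus label activations
        layer2 = [LogisticRegression(max_iter=1000).fit(XH, Y[:, j]) for j in range(L)]
        return layer1, layer2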
Learning Definite Horn Formulas from Closure Queries | cs.LG cs.LO | A definite Horn theory is a set of n-dimensional Boolean vectors whose
characteristic function is expressible as a definite Horn formula, that is, as
conjunction of definite Horn clauses. The class of definite Horn theories is
known to be learnable under different query learning settings, such as learning
from membership and equivalence queries or learning from entailment. We propose
yet a different type of query: the closure query. Closure queries are a natural
extension of membership queries and also a variant, appropriate in the context
of definite Horn formulas, of the so-called correction queries. We present an
algorithm that learns conjunctions of definite Horn clauses in polynomial time,
using closure and equivalence queries, and show how it relates to the canonical
Guigues-Duquenne basis for implicational systems. We also show how the
different query models mentioned relate to each other by either showing
full-fledged reductions by means of query simulation (where possible), or by
showing their connections in the context of particular algorithms that use them
for learning definite Horn formulas.
| Marta Arias, Jos\'e L. Balc\'azar, Cristina T\^irn\u{a}uc\u{a} | null | 1503.09025 | null | null |
Generalized Categorization Axioms | cs.LG | Categorization axioms have been proposed to axiomatizing clustering results,
which offers a hint of bridging the difference between human recognition system
and machine learning through an intuitive observation: an object should be
assigned to its most similar category. However, categorization axioms cannot be
generalized into a general machine learning system as categorization axioms
become trivial when the number of categories becomes one. In order to
generalize categorization axioms into general cases, categorization input and
categorization output are reinterpreted by inner and outer category
representation. According to the categorization reinterpretation, two category
representation axioms are presented. Category representation axioms and
categorization axioms can be combined into a generalized categorization
axiomatic framework, which accurately delimit the theoretical categorization
constraints and overcome the shortcoming of categorization axioms. The proposed
axiomatic framework not only discusses the categorization test issue but also
reinterprets many results in machine learning in a unified way, such as
dimensionality reduction, density estimation, regression, clustering and
classification.
| Jian Yu | null | 1503.09082 | null | null |
Real-World Font Recognition Using Deep Network and Domain Adaptation | cs.CV cs.LG | We address a challenging fine-grain classification problem: recognizing a
font style from an image of text. In this task, it is very easy to generate
lots of rendered font examples but very hard to obtain real-world labeled
images. This real-to-synthetic domain gap caused poor generalization to new
real data in previous methods (Chen et al. (2014)). In this paper, we turn to
Convolutional Neural Networks, and use an adaptation technique based on a
Stacked Convolutional Auto-Encoder that exploits unlabeled real-world images
combined with synthetic data. The proposed method achieves an accuracy of
higher than 80% (top-5) on a real-world dataset.
| Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem
Agarwala, Jonathan Brandt, Thomas S. Huang | null | 1504.00028 | null | null |
Improved Error Bounds Based on Worst Likely Assignments | stat.ML cs.IT cs.LG math.IT math.PR | Error bounds based on worst likely assignments use permutation tests to
validate classifiers. Worst likely assignments can produce effective bounds
even for data sets with 100 or fewer training examples. This paper introduces a
statistic for use in the permutation tests of worst likely assignments that
improves error bounds, especially for accurate classifiers, which are typically
the classifiers of interest.
| Eric Bax | null | 1504.00052 | null | null |
Crowdsourcing Feature Discovery via Adaptively Chosen Comparisons | stat.ML cs.LG | We introduce an unsupervised approach to efficiently discover the underlying
features in a data set via crowdsourcing. Our queries ask crowd members to
articulate a feature common to two out of three displayed examples. In addition
we also ask the crowd to provide binary labels to the remaining examples based
on the discovered features. The triples are chosen adaptively based on the
labels of the previously discovered features on the data set. In two natural
models of features, hierarchical and independent, we show that a simple
adaptive algorithm, using "two-out-of-three" similarity queries, recovers all
features with less labor than any nonadaptive algorithm. Experimental results
validate the theoretical findings.
| James Y. Zou, Kamalika Chaudhuri, Adam Tauman Kalai | null | 1504.00064 | null | null |
A Theory of Feature Learning | stat.ML cs.LG | Feature Learning aims to extract relevant information contained in data sets
in an automated fashion. It is the driving force behind the current deep learning
trend, a set of methods that have had widespread empirical success. What is
lacking is a theoretical understanding of different feature learning schemes.
This work provides a theoretical framework for feature learning and then
characterizes when features can be learnt in an unsupervised fashion. We also
provide means to judge the quality of features via rate-distortion theory and
its generalizations.
| Brendan van Rooyen, Robert C. Williamson | null | 1504.00083 | null | null |
Learning in the Presence of Corruption | stat.ML cs.LG | In supervised learning one wishes to identify a pattern present in a joint
distribution $P$, of instances, label pairs, by providing a function $f$ from
instances to labels that has low risk $\mathbb{E}_{P}\ell(y,f(x))$. To do so,
the learner is given access to $n$ iid samples drawn from $P$. In many real
world problems clean samples are not available. Rather, the learner is given
access to samples from a corrupted distribution $\tilde{P}$ from which to
learn, while the goal of predicting the clean pattern remains. There are many
different types of corruption one can consider, and as of yet there is no
general means to compare the relative ease of learning under these different
corruption processes. In this paper we develop a general framework for tackling
such problems as well as introducing upper and lower bounds on the risk for
learning in the presence of corruption. Our ultimate goal is to be able to make
informed economic decisions in regards to the acquisition of data sets. For a
certain subclass of corruption processes (those that are
\emph{reconstructible}) we achieve this goal in a particular sense. Our lower
bounds are in terms of the coefficient of ergodicity, a simple to calculate
property of stochastic matrices. Our upper bounds proceed via a generalization
of the method of unbiased estimators appearing in recent work of Natarajan et
al and implicit in the earlier work of Kearns.
| Brendan van Rooyen, Robert C. Williamson | null | 1504.00091 | null | null |
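The "method of unbiased estimators" referenced at the end of the preceding
abstract has a compact form for class-conditional label noise (Natarajan et
al.): the corrected loss below has, in expectation over the noisy labels, the
same value as the clean loss. The function signature is an illustrative
rendering.

    def corrected_loss(loss, y_noisy, score, rho_pos, rho_neg):
        """rho_pos = P(flip | y=+1), rho_neg = P(flip | y=-1); y_noisy in {-1,+1}."""
        rho_y = rho_pos if y_noisy == 1 else rho_neg
        rho_opp = rho_neg if y_noisy == 1 else rho_pos
        return ((1 - rho_opp) * loss(y_noisy, score)
                - rho_y * loss(-y_noisy, score)) / (1 - rho_pos - rho_neg)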
The Libra Toolkit for Probabilistic Models | cs.LG cs.AI | The Libra Toolkit is a collection of algorithms for learning and inference
with discrete probabilistic models, including Bayesian networks, Markov
networks, dependency networks, and sum-product networks. Compared to other
toolkits, Libra places a greater emphasis on learning the structure of
tractable models in which exact inference is efficient. It also includes a
variety of algorithms for learning graphical models in which inference is
potentially intractable, and for performing exact and approximate inference.
Libra is released under a 2-clause BSD license to encourage broad use in
academia and industry.
| Daniel Lowd, Amirmohammad Rooshenas | null | 1504.00110 | null | null |
A New Vision of Collaborative Active Learning | cs.LG stat.ML | Active learning (AL) is a learning paradigm where an active learner has to
train a model (e.g., a classifier) which is in principle trained in a
supervised way, but in AL it has to be done by means of a data set with
initially unlabeled samples. To get labels for these samples, the active
learner has to ask an oracle (e.g., a human expert) for labels. The goal is to
maximize the performance of the model and to minimize the number of queries at
the same time. In this article, we first briefly discuss the state of the art
and own, preliminary work in the field of AL. Then, we propose the concept of
collaborative active learning (CAL). With CAL, we will overcome some of the
harsh limitations of current AL. In particular, we envision scenarios where an
expert may be wrong for various reasons, there might be several or even many
experts with different expertise, the experts may label not only samples but
also knowledge at a higher level such as rules, and we consider that the
labeling costs depend on many conditions. Moreover, in a CAL process human
experts will profit by improving their own knowledge, too.
| Adrian Calma, Tobias Reitmaier, Bernhard Sick, Paul Lukowicz, Mark
Embrechts | null | 1504.00284 | null | null |
Bayesian Clustering of Shapes of Curves | stat.ML cs.LG | Unsupervised clustering of curves according to their shapes is an important
problem with broad scientific applications. The existing model-based clustering
techniques either rely on simple probability models (e.g., Gaussian) that are
not generally valid for shape analysis or assume that the number of clusters is known. We
develop an efficient Bayesian method to cluster curve data using an elastic
shape metric that is based on joint registration and comparison of shapes of
curves. The elastic-inner product matrix obtained from the data is modeled
using a Wishart distribution whose parameters are assigned carefully chosen
prior distributions to allow for automatic inference on the number of clusters.
Posterior is sampled through an efficient Markov chain Monte Carlo procedure
based on the Chinese restaurant process to infer (1) the posterior distribution
on the number of clusters, and (2) the clustering configuration of shapes. This
method is demonstrated on a variety of synthetic data and real data examples on
protein structure analysis, cell shape analysis in microscopy images, and
clustering of shapes from the MPEG-7 database.
| Zhengwu Zhang, Debdeep Pati, Anuj Srivastava | null | 1504.00377 | null | null |
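The Chinese restaurant process used for posterior sampling above also doubles
as a prior over partitions, which is what lets the number of clusters be
inferred; a minimal sketch of drawing a partition from a CRP prior with
concentration alpha:

    import numpy as np

    def crp_partition(n, alpha, seed=0):
        rng = np.random.default_rng(seed)
        counts, assign = [], []
        for i in range(n):
            probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
            k = rng.choice(len(probs), p=probs)   # join existing cluster or open a new one
            if k == len(counts):
                counts.append(1)
            else:
                counts[k] += 1
            assign.append(k)
        return assign

    print(crp_partition(20, alpha=1.0))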
Signatures of Infinity: Nonergodicity and Resource Scaling in
Prediction, Complexity, and Learning | cond-mat.stat-mech cs.IT cs.LG math.IT stat.ML | We introduce a simple analysis of the structural complexity of
infinite-memory processes built from random samples of stationary, ergodic
finite-memory component processes. Such processes are familiar from the well
known multi-arm Bandit problem. We contrast our analysis with
computation-theoretic and statistical inference approaches to understanding
their complexity. The result is an alternative view of the relationship between
predictability, complexity, and learning that highlights the distinct ways in
which informational and correlational divergences arise in complex ergodic and
nonergodic processes. We draw out consequences for the resource divergences
that delineate the structural hierarchy of ergodic processes and for processes
that are themselves hierarchical.
| James P. Crutchfield and Sarah Marzen | null | 1504.00386 | null | null |
Direct l_(2,p)-Norm Learning for Feature Selection | cs.LG cs.CV | In this paper, we propose a novel sparse learning based feature selection
method that directly optimizes the sparsity of a large-margin linear
classification model with the l_(2,p)-norm (0 < p < 1) subject to data-fitting
constraints, rather than using the sparsity as a regularization term. To solve
the direct sparsity optimization problem, which is non-smooth and non-convex
when 0 < p < 1, we
provide an efficient iterative algorithm with proven convergence by converting
it to a convex and smooth optimization problem at every iteration step. The
proposed algorithm has been evaluated based on publicly available datasets, and
extensive comparison experiments have demonstrated that our algorithm could
achieve feature selection performance competitive to state-of-the-art
algorithms.
| Hanyang Peng, Yong Fan | null | 1504.00430 | null | null |
Quantum image classification using principal component analysis | quant-ph cs.CV cs.LG | We present a novel quantum algorithm for classification of images. The
algorithm is constructed using principal component analysis and von Neumann
quantum measurements. In order to apply the algorithm we present a new quantum
representation of grayscale images.
| Mateusz Ostaszewski and Przemys{\l}aw Sadowski and Piotr Gawron | 10.20904/271001 | 1504.00580 | null | null |
A Probabilistic Theory of Deep Learning | stat.ML cs.CV cs.LG cs.NE | A grand challenge in machine learning is the development of computational
algorithms that match or outperform humans in perceptual inference tasks that
are complicated by nuisance variation. For instance, visual object recognition
involves the unknown object position, orientation, and scale in object
recognition while speech recognition involves the unknown voice pronunciation,
pitch, and speed. Recently, a new breed of deep learning algorithms have
emerged for high-nuisance inference tasks that routinely yield pattern
recognition systems with near- or super-human capabilities. But a fundamental
question remains: Why do they work? Intuitions abound, but a coherent framework
for understanding, analyzing, and synthesizing deep learning architectures has
remained elusive. We answer this question by developing a new probabilistic
framework for deep learning based on the Deep Rendering Model: a generative
probabilistic model that explicitly captures latent nuisance variation. By
relaxing the generative model to a discriminative one, we can recover two of
the current leading deep learning systems, deep convolutional neural networks
and random decision forests, providing insights into their successes and
shortcomings, as well as a principled route to their improvement.
| Ankit B. Patel, Tan Nguyen and Richard G. Baraniuk | null | 1504.00641 | null | null |
End-to-End Training of Deep Visuomotor Policies | cs.LG cs.CV cs.RO | Policy search methods can allow robots to learn control policies for a wide
range of tasks, but practical applications of policy search often require
hand-engineered components for perception, state estimation, and low-level
control. In this paper, we aim to answer the following question: does training
the perception and control systems jointly end-to-end provide better
performance than training each component separately? To this end, we develop a
method that can be used to learn policies that map raw image observations
directly to torques at the robot's motors. The policies are represented by deep
convolutional neural networks (CNNs) with 92,000 parameters, and are trained
using a partially observed guided policy search method, which transforms policy
search into supervised learning, with supervision provided by a simple
trajectory-centric reinforcement learning method. We evaluate our method on a
range of real-world manipulation tasks that require close coordination between
vision and control, such as screwing a cap onto a bottle, and present simulated
comparisons to a range of prior policy search methods.
| Sergey Levine, Chelsea Finn, Trevor Darrell, Pieter Abbeel | null | 1504.00702 | null | null |
Unsupervised Feature Selection with Adaptive Structure Learning | cs.LG | The problem of feature selection has raised considerable interests in the
past decade. Traditional unsupervised methods select the features which can
faithfully preserve the intrinsic structures of data, where the intrinsic
structures are estimated using all the input features of data. However, the
estimated intrinsic structures are unreliable/inaccurate when the redundant and
noisy features are not removed. Therefore, we face a dilemma here: one needs the
true structures of the data to identify the informative features, and one needs the
informative features to accurately estimate the true structures of the data. To
address this, we propose a unified learning framework which performs structure
learning and feature selection simultaneously. The structures are adaptively
learned from the results of feature selection, and the informative features are
reselected to preserve the refined structures of data. By leveraging the
interactions between these two essential tasks, we are able to capture accurate
structures and select more informative features. Experimental results on many
benchmark data sets demonstrate that the proposed method outperforms many state
of the art unsupervised feature selection methods.
| Liang Du, Yi-Dong Shen | null | 1504.00736 | null | null |
Learning Mixed Membership Mallows Models from Pairwise Comparisons | cs.LG stat.ML | We propose a novel parameterized family of Mixed Membership Mallows Models
(M4) to account for variability in pairwise comparisons generated by a
heterogeneous population of noisy and inconsistent users. M4 models individual
preferences as a user-specific probabilistic mixture of shared latent Mallows
components. Our key algorithmic insight for estimation is to establish a
statistical connection between M4 and topic models by viewing pairwise
comparisons as words, and users as documents. This key insight leads us to
explore Mallows components with a separable structure and leverage recent
advances in separable topic discovery. While separability appears to be overly
restrictive, we nevertheless show that it is an inevitable outcome of a
relatively small number of latent Mallows components in a world with a large
number of items. We then develop an algorithm based on robust extreme-point
identification of convex polygons to learn the reference rankings, which is
provably consistent with polynomial sample complexity guarantees. We
demonstrate that our new model is empirically competitive with the current
state-of-the-art approaches in predicting real-world preferences.
| Weicong Ding, Prakash Ishwar, Venkatesh Saligrama | null | 1504.00757 | null | null |
The Gram-Charlier A Series based Extended Rule-of-Thumb for Bandwidth
Selection in Univariate and Multivariate Kernel Density Estimations | cs.LG stat.CO stat.ME stat.ML | The article derives a novel Gram-Charlier A (GCA) Series based Extended
Rule-of-Thumb (ExROT) for bandwidth selection in Kernel Density Estimation
(KDE). There are existing various bandwidth selection rules achieving
minimization of the Asymptotic Mean Integrated Square Error (AMISE) between the
estimated probability density function (PDF) and the actual PDF. The rules
differ in a way to estimate the integration of the squared second order
derivative of an unknown PDF $(f(\cdot))$, identified as the roughness
$R(f''(\cdot))$. The simplest Rule-of-Thumb (ROT) estimates $R(f''(\cdot))$
with an assumption that the density being estimated is Gaussian. Intuitively,
better estimation of $R(f''(\cdot))$ and consequently better bandwidth
selection rules can be derived, if the unknown PDF is approximated through an
infinite series expansion based on a more generalized density assumption. As a
demonstration and verification to this concept, the ExROT derived in the
article uses an extended assumption that the density being estimated is near
Gaussian. This allows the use of the GCA expansion as an approximation to the
unknown near-Gaussian PDF. The ExROT for univariate KDE is extended to that for
multivariate KDE. The required multivariate AMISE criterion is re-derived using
elementary calculus of several variables, instead of Tensor calculus. The
derivation uses the Kronecker product and the vector differential operator to
achieve the AMISE expression in vector notation. An ExROT for the kernel-based
density derivative estimator is also derived.
| Dharmani Bhaveshkumar C | null | 1504.00781 | null | null |
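For context, the baseline Gaussian Rule-of-Thumb that ExROT extends selects
h = 1.06 sigma n^(-1/5) in the univariate case; a minimal sketch of that
baseline and the resulting Gaussian-kernel density estimate (ExROT itself adds
GCA-series correction terms not shown here):

    import numpy as np

    def rot_bandwidth(x):
        """Gaussian rule of thumb: h = 1.06 * sigma * n^(-1/5)."""
        return 1.06 * x.std(ddof=1) * len(x) ** (-0.2)

    def gaussian_kde(grid, data, h):
        z = (grid[:, None] - data[None, :]) / h
        return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

    data = np.random.default_rng(0).standard_normal(500)
    grid = np.linspace(-4, 4, 200)
    density = gaussian_kde(grid, data, rot_bandwidth(data))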
Robust Anomaly Detection Using Semidefinite Programming | math.OC cs.CV cs.LG cs.SY | This paper presents a new approach, based on polynomial optimization and the
method of moments, to the problem of anomaly detection. The proposed technique
only requires information about the statistical moments of the normal-state
distribution of the features of interest and compares favorably with existing
approaches (such as Parzen windows and 1-class SVM). In addition, it provides a
succinct description of the normal state. Thus, it leads to a substantial
simplification of the anomaly detection problem when working with higher
dimensional datasets.
| Jose A. Lopez, Octavia Camps, Mario Sznaier | null | 1504.00905 | null | null |
A Unified Deep Neural Network for Speaker and Language Recognition | cs.CL cs.CV cs.LG cs.NE stat.ML | Learned feature representations and sub-phoneme posteriors from Deep Neural
Networks (DNNs) have been used separately to produce significant performance
gains for speaker and language recognition tasks. In this work we show how
these gains are possible using a single DNN for both speaker and language
recognition. The unified DNN approach is shown to yield substantial performance
improvements on the 2013 Domain Adaptation Challenge speaker recognition
task (55% reduction in EER for the out-of-domain condition) and on the NIST
2011 Language Recognition Evaluation (48% reduction in EER for the 30s test
condition).
| Fred Richardson, Douglas Reynolds, Najim Dehak | null | 1504.00923 | null | null |
A Simple Way to Initialize Recurrent Networks of Rectified Linear Units | cs.NE cs.LG | Learning long term dependencies in recurrent networks is difficult due to
vanishing and exploding gradients. To overcome this difficulty, researchers
have developed sophisticated optimization techniques and network architectures.
In this paper, we propose a simpler solution that uses recurrent neural networks
composed of rectified linear units. Key to our solution is the use of the
identity matrix or its scaled version to initialize the recurrent weight
matrix. We find that our solution is comparable to LSTM on our four benchmarks:
two toy problems involving long-range temporal structures, a large language
modeling problem and a benchmark speech recognition problem.
| Quoc V. Le, Navdeep Jaitly, Geoffrey E. Hinton | null | 1504.00941 | null | null |
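The proposed initialization is a one-liner in practice: set the recurrent
weight matrix to the (possibly scaled) identity, biases to zero, and use ReLU
activations. A minimal numpy sketch of the forward pass (sizes are
illustrative):

    import numpy as np

    def irnn_init(d_in, d_hidden, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        W_xh = 0.001 * rng.standard_normal((d_hidden, d_in))  # small input weights
        W_hh = scale * np.eye(d_hidden)                       # identity recurrence
        b = np.zeros(d_hidden)
        return W_xh, W_hh, b

    def irnn_forward(xs, W_xh, W_hh, b):
        h = np.zeros(W_hh.shape[0])
        for x in xs:                   # ReLU + identity lets h carry information far
            h = np.maximum(0.0, W_xh @ x + W_hh @ h + b)
        return h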
The Child is Father of the Man: Foresee the Success at the Early Stage | cs.LG | Understanding the dynamic mechanisms that drive the high-impact scientific
work (e.g., research papers, patents) is a long-debated research topic and has
many important implications, ranging from personal career development and
recruitment search, to the jurisdiction of research resources. Recent advances
in characterizing and modeling scientific success have made it possible to
forecast the long-term impact of scientific work, where data mining techniques,
supervised learning in particular, play an essential role. Despite much
progress, several key algorithmic challenges in relation to predicting
long-term scientific impact have largely remained open. In this paper, we
propose a joint predictive model to forecast the long-term scientific impact at
the early stage, which simultaneously addresses a number of these open
challenges, including the scholarly feature design, the non-linearity, the
domain-heterogeneity and dynamics. In particular, we formulate it as a
regularized optimization problem and propose effective and scalable algorithms
to solve it. We perform extensive empirical evaluations on large, real
scholarly data sets to validate the effectiveness and the efficiency of our
method.
| Liangyue Li, Hanghang Tong | 10.1145/2783258.2783340 | 1504.00948 | null | null |
ELM-Based Distributed Cooperative Learning Over Networks | cs.LG math.OC | This paper investigates distributed cooperative learning algorithms for data
processing in a network setting. Specifically, the extreme learning machine
(ELM) is introduced to train on data distributed across several components,
where each component runs a program on a subset of the entire data. In this
scheme, no fusion center is required in the network, due to, e.g., practical
limitations, security, or privacy reasons. We first
reformulate the centralized ELM training problem into a separable form among
nodes with consensus constraints. Then, we solve the equivalent problem using
distributed optimization tools. A new distributed cooperative learning
algorithm based on ELM, called DC-ELM, is proposed. The architecture of this
algorithm differs from that of some existing parallel/distributed ELMs based on
MapReduce or cloud computing. We also present an online version of the proposed
algorithm that can learn data sequentially in a one-by-one or chunk-by-chunk
mode. The novel algorithm is well suited for potential applications such as
artificial intelligence, computational biology, finance, wireless sensor
networks, and so on, involving datasets that are often extremely large,
high-dimensional and located on distributed data sources. We show simulation
results on both synthetic and real-world data sets.
| Wu Ai and Weisheng Chen | null | 1504.00981 | null | null |
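The centralized ELM that the paper distributes trains in closed form: random,
untrained input weights map the data to a hidden layer, and the output weights
are the least-squares solution. A minimal sketch (sizes and activation are
illustrative):

    import numpy as np

    def elm_train(X, Y, n_hidden=200, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
        H = np.tanh(X @ W)                               # hidden layer activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # closed-form output weights
        return W, beta

    def elm_predict(X, W, beta):
        return np.tanh(X @ W) @ beta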
Watch and Learn: Optimizing from Revealed Preferences Feedback | cs.DS cs.GT cs.LG | A Stackelberg game is played between a leader and a follower. The leader
first chooses an action, then the follower plays his best response. The goal of
the leader is to pick the action that will maximize his payoff given the
follower's best response. In this paper we present an approach to solving for
the leader's optimal strategy in certain Stackelberg games where the follower's
utility function (and thus the subsequent best response of the follower) is
unknown.
Stackelberg games capture, for example, the following interaction between a
producer and a consumer. The producer chooses the prices of the goods he
produces, and then a consumer chooses to buy a utility maximizing bundle of
goods. The goal of the seller here is to set prices to maximize his
profit---his revenue, minus the production cost of the purchased bundle. It is
quite natural that the seller in this example should not know the buyer's
utility function. However, he does have access to revealed preference
feedback---he can set prices, and then observe the purchased bundle and his own
profit. We give algorithms for efficiently solving, in terms of both
computational and query complexity, a broad class of Stackelberg games in which
the follower's utility function is unknown, using only "revealed preference"
access to it. This class includes in particular the profit maximization
problem, as well as the optimal tolling problem in nonatomic congestion games,
when the latency functions are unknown. Surprisingly, we are able to solve
these problems even though the optimization problems are non-convex in the
leader's actions.
| Aaron Roth, Jonathan Ullman, Zhiwei Steven Wu | null | 1504.01033 | null | null |
Concept Drift Detection for Streaming Data | stat.ML cs.LG | Common statistical prediction models often require and assume stationarity in
the data. However, in many practical applications, changes in the relationship
between the response and predictor variables are regularly observed over time,
resulting in the deterioration of the predictive performance of these models.
This paper presents Linear Four Rates (LFR), a framework for detecting these
concept drifts and subsequently identifying the data points that belong to the
new concept (for relearning the model). Unlike conventional concept drift
detection approaches, LFR can be applied to both batch and stream data; is not
limited by the distribution properties of the response variable (e.g., datasets
with imbalanced labels); is independent of the underlying statistical model;
and uses user-specified parameters that are intuitively comprehensible. The
performance of LFR is compared to benchmark approaches using both simulated and
commonly used public datasets that span the gamut of concept drift types. The
results show LFR significantly outperforms benchmark approaches in terms of
recall, accuracy and delay in detection of concept drifts across datasets.
| Heng Wang and Zubin Abraham | null | 1504.01044 | null | null |
Graph Connectivity in Noisy Sparse Subspace Clustering | stat.ML cs.LG | Subspace clustering is the problem of clustering data points into a union of
low-dimensional linear/affine subspaces. It is the mathematical abstraction of
many important problems in computer vision, image processing and machine
learning. A line of recent work (4, 19, 24, 20) provided strong theoretical
guarantees for sparse subspace clustering (4), the state-of-the-art algorithm
for subspace clustering, on both noiseless and noisy data sets. It was shown
that under mild conditions, with high probability no two points from different
subspaces are clustered together. Such a guarantee, however, is not sufficient
for the clustering to be correct, due to the notorious "graph connectivity
problem" (15). In this paper, we investigate the graph connectivity problem for
noisy sparse subspace clustering and show that a simple post-processing
procedure is capable of delivering consistent clustering under certain "general
position" or "restricted eigenvalue" assumptions. We also show that our
condition is almost tight with adversarial noise perturbation by constructing a
counter-example. These results provide the first exact clustering guarantee of
noisy SSC for subspaces of dimension greater than 3.
| Yining Wang, Yu-Xiang Wang and Aarti Singh | null | 1504.01046 | null | null |
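For context, a minimal sketch of the lasso-based sparse subspace clustering pipeline that these guarantees concern: self-expression by sparse regression, a symmetrized similarity graph, then spectral clustering. The post-processing procedure proposed in the paper is omitted, and the regularization value is illustrative.

```python
# A minimal sketch of noisy sparse subspace clustering (lasso version),
# without the paper's post-processing step; lam is an illustrative value.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, lam=0.05):
    # Express each point as a sparse combination of the other points.
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        C[i, mask] = model.fit(X[mask].T, X[i]).coef_
    A = np.abs(C) + np.abs(C).T                 # symmetrized similarity graph
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)

rng = np.random.default_rng(2)
B1, B2 = rng.normal(size=(10, 2)), rng.normal(size=(10, 2))  # two 2-dim subspaces
X = np.vstack([(B1 @ rng.normal(size=(2, 30))).T,
               (B2 @ rng.normal(size=(2, 30))).T])
X += 0.01 * rng.normal(size=X.shape)                         # small noise
print(ssc(X, 2))
```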
An Online Approach to Dynamic Channel Access and Transmission Scheduling | cs.LG cs.SY | Making judicious channel access and transmission scheduling decisions is
essential for improving performance as well as energy and spectral efficiency
in multichannel wireless systems. This problem has been a subject of extensive
study in the past decade, and the resulting dynamic and opportunistic channel
access schemes can bring potentially significant improvement over traditional
schemes. However, a common and severe limitation of these dynamic schemes is
that they almost always require some form of a priori knowledge of the channel
statistics. A natural remedy is a learning framework, which has also been
extensively studied in the same context, but a typical learning algorithm in
this literature seeks only the best static policy, with performance measured by
weak regret, rather than learning a good dynamic channel access policy. There
is thus a clear disconnect between what an optimal channel access policy can
achieve with known channel statistics that actively exploits temporal, spatial
and spectral diversity, and what a typical existing learning algorithm aims
for, which is the static use of a single channel devoid of diversity gain. In
this paper we bridge this gap by designing learning algorithms that track known
optimal or sub-optimal dynamic channel access and transmission scheduling
policies, thereby yielding performance measured by a form of strong regret, the
accumulated difference between the reward returned by an optimal solution when
a priori information is available and that by our online algorithm. We do so in
the context of two specific algorithms that appeared in [1] and [2],
respectively, the former for a multiuser single-channel setting and the latter
for a single-user multichannel setting. In both cases we show that our
algorithms achieve sub-linear regret uniform in time and outperform the
standard weak-regret learning algorithms.
| Yang Liu and Mingyan Liu | null | 1504.01050 | null | null |
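As a point of reference, here is a sketch of the standard weak-regret baseline the paper improves upon: UCB1 over channels, which converges to the single best static channel and thus forgoes any diversity gain. Channel success probabilities and the horizon are toy values.

```python
# UCB1 over channels: the standard weak-regret baseline, which learns the
# best *static* channel rather than a dynamic access policy.
import numpy as np

def ucb1_channel_access(channel_means, horizon=5000, seed=3):
    rng = np.random.default_rng(seed)
    k = len(channel_means)
    counts, rewards = np.zeros(k), np.zeros(k)
    for t in range(horizon):
        if t < k:
            ch = t                                    # try each channel once
        else:
            ucb = rewards / counts + np.sqrt(2 * np.log(t) / counts)
            ch = int(np.argmax(ucb))
        r = float(rng.random() < channel_means[ch])   # Bernoulli channel reward
        counts[ch] += 1
        rewards[ch] += r
    return counts, rewards.sum()

print(ucb1_channel_access([0.2, 0.5, 0.8]))           # concentrates on channel 2
```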
Sync-Rank: Robust Ranking, Constrained Ranking and Rank Aggregation via
Eigenvector and Semidefinite Programming Synchronization | cs.LG cs.SI math.OC stat.ML | We consider the classic problem of establishing a statistical ranking of a
set of n items given a set of inconsistent and incomplete pairwise comparisons
between such items. Instantiations of this problem occur in numerous
applications in data analysis (e.g., ranking teams in sports data), computer
vision, and machine learning. We formulate the above problem of ranking with
incomplete noisy information as an instance of the group synchronization
problem over the group SO(2) of planar rotations, whose usefulness has been
demonstrated in numerous applications in recent years. Its least squares
solution can be approximated by either a spectral or a semidefinite programming
(SDP) relaxation, followed by a rounding procedure. We perform extensive
numerical simulations on both synthetic and real-world data sets, showing that
our proposed method compares favorably to other algorithms from the recent
literature. Existing theoretical guarantees on the group synchronization
problem imply lower bounds on the largest amount of noise permissible in the
ranking data while still achieving exact recovery. We propose a similar
synchronization-based algorithm for the rank-aggregation problem, which
integrates pairwise comparisons given by different rating systems on the same
set of items into a globally consistent ranking. We also discuss the problem of
semi-supervised ranking, where the ground-truth rank of a subset of players is
known, and propose an algorithm based on SDP which
recovers the ranks of the remaining players. Finally, synchronization-based
ranking, combined with a spectral technique for the densest subgraph problem,
allows one to extract locally-consistent partial rankings, in other words, to
identify the rank of a small subset of players whose pairwise comparisons are
less noisy than the rest of the data, which other methods are not able to
identify.
| Mihai Cucuringu | null | 1504.01070 | null | null |
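A minimal sketch of the spectral relaxation: each pairwise comparison is encoded as a small planar rotation in a Hermitian matrix, whose top eigenvector recovers angles, and hence a ranking, up to a global rotation and reflection. The angular offset below is an illustrative choice, not the paper's exact encoding, and no rounding or alignment step is included.

```python
# Spectral SO(2) synchronization for ranking: a sketch with an
# illustrative angle encoding (the recovered order is defined only up
# to a global circular shift and a flip).
import numpy as np

def sync_rank(comparisons, n):
    # comparisons: list of (i, j) meaning "item i ranks above item j"
    H = np.zeros((n, n), dtype=complex)
    delta = np.pi / n                        # illustrative angular offset
    for i, j in comparisons:
        H[i, j] = np.exp(1j * delta)
        H[j, i] = np.exp(-1j * delta)        # keep H Hermitian
    _, vecs = np.linalg.eigh(H)
    angles = np.angle(vecs[:, -1])           # angles from top eigenvector
    return np.argsort(angles)                # order items by angle

# ground truth 0 > 1 > 2 > 3, with one corrupted comparison
comps = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (3, 2)]
print(sync_rank(comps, 4))
```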
EM-Based Channel Estimation from Crowd-Sourced RSSI Samples Corrupted by
Noise and Interference | cs.LG | We propose a method for estimating channel parameters from RSSI measurements
and the lost packet count, which can work in the presence of losses due to both
interference and signal attenuation below the noise floor. This is especially
important in wireless networks, such as vehicular networks, where the
propagation model changes with the density of nodes. The method is based on Stochastic
Expectation Maximization, where the received data is modeled as a mixture of
distributions (no/low interference and strong interference), incomplete
(censored) due to packet losses. The PDFs in the mixture are Gamma, according
to the commonly accepted model for wireless signal and interference power. This
approach leverages the loss count as additional information, and thus
outperforms maximum likelihood estimation, which does not use this information
(denoted ML-), when only a small number of RSSI samples has been received. It
therefore allows inexpensive on-line channel estimation from ad-hoc collected
data. The
method also outperforms ML- on uncensored data mixtures, as ML- assumes that
samples are from a single-mode PDF.
| Silvija Kokalj-Filipovic and Larry Greenstein | null | 1504.01072 | null | null |
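A minimal sketch of the mixture-fitting core on uncensored samples, with a moment-matched M-step for the Gamma components; the stochastic imputation of the censored (lost-packet) portion, which is the paper's key ingredient, is omitted here.

```python
# EM for a two-component Gamma mixture on uncensored power samples;
# M-step by weighted moment matching (shape = m^2/v, scale = v/m).
import numpy as np
from scipy.stats import gamma

def moment_fit(x, r=None):
    r = np.ones_like(x) if r is None else r
    m = np.average(x, weights=r)
    v = np.average((x - m) ** 2, weights=r)
    return m * m / v, v / m                  # (shape, scale)

def gamma_mixture_em(x, iters=50):
    halves = np.sort(x)
    params = [moment_fit(halves[: len(x) // 2]),   # crude initialization
              moment_fit(halves[len(x) // 2 :])]
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        dens = np.stack([w[k] * gamma.pdf(x, a=a, scale=s)
                         for k, (a, s) in enumerate(params)])
        r = dens / dens.sum(axis=0)          # E-step: responsibilities
        w = r.mean(axis=1)                   # M-step: mixture weights
        params = [moment_fit(x, r[k]) for k in range(2)]
    return w, params

rng = np.random.default_rng(4)
x = np.concatenate([rng.gamma(2.0, 1.0, 500),    # no/low interference
                    rng.gamma(6.0, 2.0, 500)])   # strong interference
print(gamma_mixture_em(x))
```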
Discriminative Neural Sentence Modeling by Tree-Based Convolution | cs.CL cs.LG cs.NE | This paper proposes a tree-based convolutional neural network (TBCNN) for
discriminative sentence modeling. Our models leverage either constituency trees
or dependency trees of sentences. The tree-based convolution process extracts
sentences' structural features, and these features are aggregated by max
pooling. Such architecture allows short propagation paths between the output
layer and underlying feature detectors, which enables effective structural
feature learning and extraction. We evaluate our models on two tasks: sentiment
analysis and question classification. In both experiments, TBCNN outperforms
previous state-of-the-art results, including existing neural networks and
dedicated feature/rule engineering. We also make efforts to visualize the
tree-based convolution process, shedding light on how our models work.
| Lili Mou, Hao Peng, Ge Li, Yan Xu, Lu Zhang, Zhi Jin | null | 1504.01106 | null | null |
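A minimal sketch of one tree-based convolution layer followed by max pooling: each window is a node plus its children, passed through shared weights. The single shared child weight below is a simplification of the paper's position-dependent weighting, and all dimensions are toy values.

```python
# One tree-based convolution layer over a toy tree, then max pooling.
import numpy as np

rng = np.random.default_rng(5)
d_in, d_out = 8, 16
Wp = rng.normal(size=(d_out, d_in))           # weight for the window's node
Wc = rng.normal(size=(d_out, d_in))           # shared weight for children
b = np.zeros(d_out)

def tbconv(embeddings, children):
    # embeddings: (n_nodes, d_in); children[i]: child indices of node i
    feats = []
    for i, kids in enumerate(children):
        h = Wp @ embeddings[i] + b
        for j in kids:
            h = h + Wc @ embeddings[j]        # aggregate the window
        feats.append(np.tanh(h))
    return np.max(feats, axis=0)              # max pooling over windows

emb = rng.normal(size=(5, d_in))              # 5 toy nodes
kids = [[1, 2], [3, 4], [], [], []]           # small tree rooted at node 0
print(tbconv(emb, kids).shape)                # -> (16,)
```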
Ultra-large alignments using Phylogeny-aware Profiles | q-bio.GN cs.CE cs.LG | Many biological questions, including the estimation of deep evolutionary
histories and the detection of remote homology between protein sequences, rely
upon multiple sequence alignments (MSAs) and phylogenetic trees of large
datasets. However, accurate large-scale multiple sequence alignment is very
difficult, especially when the dataset contains fragmentary sequences. We
present UPP, an MSA method that uses a new machine learning technique - the
Ensemble of Hidden Markov Models - that we propose here. UPP produces highly
accurate alignments for both nucleotide and amino acid sequences, even on
ultra-large datasets or datasets containing fragmentary sequences. UPP is
available at https://github.com/smirarab/sepp.
| Nam-phuong Nguyen, Siavash Mirarab, Keerthana Kumar, Tandy Warnow | null | 1504.01142 | null | null |
Efficient Dictionary Learning via Very Sparse Random Projections | stat.ML cs.LG | Performing signal processing tasks on compressive measurements of data has
received great attention in recent years. In this paper, we extend previous
work on compressive dictionary learning by showing that more general random
projections may be used, including sparse ones. More precisely, we examine
compressive K-means clustering as a special case of compressive dictionary
learning and give theoretical guarantees for its performance for a very general
class of random projections. We then propose a memory- and computation-efficient
dictionary learning algorithm, specifically designed for analyzing large
volumes of high-dimensional data, which learns the dictionary from very sparse
random projections. Experimental results demonstrate that our approach allows
for reduction of computational complexity and memory/data access, with
controllable loss in accuracy.
| Farhad Pourkamali-Anaraki, Stephen Becker, Shannon M. Hughes | null | 1504.01169 | null | null |
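A minimal sketch of the special case examined above, compressive K-means with a very sparse random projection; the projection density, dimensions, and data are illustrative.

```python
# Compressive K-means: cluster very sparse random projections of the data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.random_projection import SparseRandomProjection

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(loc=+3, size=(200, 1000)),
               rng.normal(loc=-3, size=(200, 1000))])

# project 1000-dim data to 20 dims; only ~1% of entries are nonzero
proj = SparseRandomProjection(n_components=20, density=0.01, random_state=0)
Y = proj.fit_transform(X)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Y)
print(np.bincount(labels))                    # two balanced clusters
```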
Semi-supervised Convolutional Neural Networks for Text Categorization
via Region Embedding | stat.ML cs.CL cs.LG | This paper presents a new semi-supervised framework with convolutional neural
networks (CNNs) for text categorization. Unlike the previous approaches that
rely on word embeddings, our method learns embeddings of small text regions
from unlabeled data for integration into a supervised CNN. The proposed scheme
for embedding learning is based on the idea of two-view semi-supervised
learning, which is intended to be useful for the task of interest even though
the training is done on unlabeled data. Our models achieve better results than
previous approaches on sentiment classification and topic classification tasks.
| Rie Johnson and Tong Zhang | null | 1504.01255 | null | null |
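A minimal sketch of the text-region representation the embeddings are learned over: fixed-size token windows encoded as bag-of-words vectors. The two-view training on unlabeled data is omitted, and the tiny vocabulary is hypothetical.

```python
# Encode each size-3 token window of a document as a bag-of-words vector;
# these region vectors are what a region embedding is learned over.
import numpy as np

vocab = {"the": 0, "movie": 1, "was": 2, "not": 3, "great": 4}

def region_vectors(tokens, size=3):
    regions = []
    for start in range(len(tokens) - size + 1):
        v = np.zeros(len(vocab))
        for tok in tokens[start:start + size]:
            v[vocab[tok]] = 1.0               # presence of word in region
        regions.append(v)
    return np.stack(regions)                  # (n_regions, |V|)

print(region_vectors("the movie was not great".split()).shape)  # (3, 5)
```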
A Probabilistic $\ell_1$ Method for Clustering High Dimensional Data | math.ST cs.LG math.OC stat.ML stat.TH | In general, the clustering problem is NP-hard, and global optimality cannot
be established for non-trivial instances. For high-dimensional data,
distance-based methods for clustering or classification face an additional
difficulty, the unreliability of distances in very high-dimensional spaces. We
propose a distance-based iterative method for clustering data in very
high-dimensional space, using the $\ell_1$-metric that is less sensitive to
high dimensionality than the Euclidean distance. For $K$ clusters in
$\mathbb{R}^n$, the problem decomposes into $K$ problems coupled by
probabilities, and an iteration reduces to finding $Kn$ weighted medians of
points on a line. The complexity of the algorithm is linear in the dimension of
the data space, and its performance was observed to improve significantly as
the dimension increases.
| Tsvetan Asamov and Adi Ben-Israel | null | 1504.01294 | null | null |
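A minimal sketch of the coordinate-wise reduction described above: with soft memberships fixed, each iteration computes $Kn$ weighted medians of points on a line. The distance-based membership probabilities below are a simple stand-in for the paper's probabilistic assignments.

```python
# l1 clustering by K*n weighted medians per iteration (sketch).
import numpy as np

def weighted_median(values, weights):
    order = np.argsort(values)
    cum = np.cumsum(weights[order])
    return values[order][np.searchsorted(cum, 0.5 * cum[-1])]

def l1_cluster(X, K, iters=30, seed=7):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(iters):
        D = np.abs(X[:, None, :] - centers[None]).sum(axis=2)  # l1 distances
        P = np.exp(-(D - D.min(axis=1, keepdims=True)))        # soft memberships
        P /= P.sum(axis=1, keepdims=True)
        for k in range(K):
            for j in range(X.shape[1]):       # K*n one-dimensional medians
                centers[k, j] = weighted_median(X[:, j], P[:, k])
    D = np.abs(X[:, None, :] - centers[None]).sum(axis=2)
    return centers, D.argmin(axis=1)

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(-2, 1, size=(100, 50)),
               rng.normal(+2, 1, size=(100, 50))])
centers, labels = l1_cluster(X, 2)
print(labels[:10], labels[-10:])
```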