title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Variability of Behaviour in Electricity Load Profile Clustering; Who
Does Things at the Same Time Each Day? | cs.LG cs.CE | UK electricity market changes provide opportunities to alter households'
electricity usage patterns for the benefit of the overall electricity network.
Work on clustering similar households has concentrated on daily load profiles,
and the variability in regular household behaviours has not been considered.
Those households with the most variability in regular activities may be the most
receptive to incentives to change timing.
Whether using the variability of regular behaviour allows the creation of
more consistent groupings of households is investigated and compared with daily
load profile clustering. 204 UK households are analysed to find repeating
patterns (motifs). Variability in the time of the motif is used as the basis
for clustering households. Different clustering algorithms are assessed by the
consistency of the results.
Findings show that variability of behaviour, using motifs, provides more
consistent groupings of households across different clustering algorithms and
allows for more efficient targeting of behaviour change interventions.
| Ian Dent, Tony Craig, Uwe Aickelin and Tom Rodden | null | 1409.1043 | null | null |
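A minimal sketch of the clustering step described above, using synthetic stand-ins for the mined motif start times (the household count matches the paper's 204; everything else is an assumption):

```python
# Hypothetical sketch: cluster households by the variability of a daily
# motif's start time, rather than by the raw daily load profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# start_times[h, d]: hour at which household h's motif occurs on day d
# (synthetic stand-in for motifs mined from real smart-meter data).
per_household_spread = rng.uniform(0.2, 3.0, size=(204, 1))
start_times = rng.normal(loc=18.0, scale=per_household_spread, size=(204, 90))

# One variability feature per household: the spread of the motif's timing.
variability = start_times.std(axis=1, keepdims=True)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(variability)
print(np.bincount(labels))  # cluster sizes, low- to high-variability groups
```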
Tuning a Multiple Classifier System for Side Effect Discovery using
Genetic Algorithms | cs.LG cs.CE | In previous work, a novel supervised framework implementing a binary
classifier was presented that obtained excellent results for side effect
discovery. Interestingly, unique side effects were identified when different
binary classifiers were used within the framework, prompting the investigation
of applying a multiple classifier system. In this paper we investigate tuning a
multiple classifier system for side effect discovery using genetic algorithms.
The results of this research show that the novel framework implementing a
multiple classifier system trained using genetic algorithms can obtain a higher
partial area under the receiver operating characteristic curve than implementing a
single classifier. Furthermore, the framework is able to detect side effects
efficiently and obtains a low false positive rate.
| Jenna M. Reps, Uwe Aickelin and Jonathan M. Garibaldi | null | 1409.1053 | null | null |
Augmented Neural Networks for Modelling Consumer Indebtness | cs.CE cs.LG cs.NE | Consumer debt has become an important problem in modern societies,
generating a great deal of research aimed at understanding the nature of consumer
indebtedness, which so far has been modelled mainly with statistical
methods. In this work we show that Computational Intelligence can offer a more
holistic approach that is better suited to the complex relationships in an
indebtedness dataset, relationships that Linear Regression cannot uncover. In
particular, our results show that Neural Networks achieve the best performance
in modelling consumer indebtedness, especially when they incorporate the
significant and experimentally verified results of the Data Mining process
into the model, exploiting the flexibility Neural Networks offer in designing
their topology. This novel method forms an elaborate framework for modelling
consumer indebtedness that can be extended to other real-world applications.
| Alexandros Ladas, Jonathan M. Garibaldi, Rodrigo Scarpel and Uwe
Aickelin | null | 1409.1057 | null | null |
Domain Transfer Structured Output Learning | cs.LG | In this paper, we propose the problem of domain transfer structured output
learning and the first solution to it. The problem is defined on two
different data domains sharing the same input and output spaces, referred to as
the source domain and the target domain. The outputs are structured, and for
the data samples of the source domain the corresponding outputs are available,
while for most data samples of the target domain the corresponding outputs are
missing. The input distributions of the two domains are significantly
different. The problem is to learn a predictor for the target domain that
predicts the structured outputs from the input. Due to the limited number of
outputs available for the samples from the target domain, it is difficult to
learn the predictor from the target domain directly, so it is necessary to use
the output information available in the source domain. We propose to learn the
target domain predictor by adapting an auxiliary predictor trained on source
domain data to the target domain. The adaptation is implemented by adding a
delta function on top of the auxiliary predictor. An algorithm is
developed to learn the parameters of the delta function by minimizing loss
functions associated with the predicted outputs against the true outputs of
the data samples of the target domain with available outputs.
| Jim Jing-Yan Wang | null | 1409.1200 | null | null |
Overcoming the Curse of Sentence Length for Neural Machine Translation
using Automatic Segmentation | cs.CL cs.LG cs.NE stat.ML | The authors of (Cho et al., 2014a) have shown that the recently introduced
neural network translation systems suffer from a significant drop in
translation quality when translating long sentences, unlike existing
phrase-based translation systems. In this paper, we propose a way to address
this issue by automatically segmenting an input sentence into phrases that can
be easily translated by the neural network translation model. Once each segment
has been independently translated by the neural machine translation model, the
translated clauses are concatenated to form a final translation. Empirical
results show a significant improvement in translation quality for long
sentences.
| Jean Pouget-Abadie and Dzmitry Bahdanau and Bart van Merrienboer and
Kyunghyun Cho and Yoshua Bengio | null | 1409.1257 | null | null |
Marginal Structured SVM with Hidden Variables | stat.ML cs.LG | In this work, we propose the marginal structured SVM (MSSVM) for structured
prediction with hidden variables. MSSVM properly accounts for the uncertainty
of hidden variables, and can significantly outperform the previously proposed
latent structured SVM (LSSVM; Yu & Joachims (2009)) and other state-of-the-art
methods, especially when that uncertainty is large. Our method also results in
a smoother objective function, making gradient-based optimization of MSSVMs
converge significantly faster than for LSSVMs. We also show that our method
consistently outperforms hidden conditional random fields (HCRFs; Quattoni et
al. (2007)) on both simulated and real-world datasets. Furthermore, we propose
a unified framework that includes both our and several other existing methods
as special cases, and provides insights into the comparison of different models
in practice.
| Wei Ping, Qiang Liu, Alexander Ihler | null | 1409.1320 | null | null |
Communication-Efficient Distributed Dual Coordinate Ascent | cs.LG math.OC stat.ML | Communication remains the most significant bottleneck in the performance of
distributed optimization algorithms for large-scale machine learning. In this
paper, we propose a communication-efficient framework, CoCoA, that uses local
computation in a primal-dual setting to dramatically reduce the amount of
necessary communication. We provide a strong convergence rate analysis for this
class of algorithms, as well as experiments on real-world distributed datasets
with implementations in Spark. In our experiments, we find that as compared to
state-of-the-art mini-batch versions of SGD and SDCA algorithms, CoCoA
converges to the same .001-accurate solution quality on average 25x as quickly.
| Martin Jaggi, Virginia Smith, Martin Tak\'a\v{c}, Jonathan Terhorst,
Sanjay Krishnan, Thomas Hofmann, Michael I. Jordan | null | 1409.1458 | null | null |
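A single-process simulation of the communication pattern described, in the spirit of the averaging variant of CoCoA (hinge-loss SVM dual with SDCA as the local solver; all sizes and the number of local steps are illustrative, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, K, lam = 400, 10, 4, 0.1
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
parts = np.array_split(np.arange(n), K)      # data partitioned over K machines
alpha, w = np.zeros(n), np.zeros(d)

for _ in range(30):                          # outer (communication) rounds
    updates = []
    for p in parts:                          # would run in parallel, e.g. in Spark
        dw, local = np.zeros(d), []
        for i in rng.permutation(p)[:50]:    # a few local SDCA steps on stale w
            g = 1.0 - y[i] * (X[i] @ (w + dw))
            da = np.clip(alpha[i] + g * lam * n / (X[i] @ X[i]), 0, 1) - alpha[i]
            local.append((i, da))
            dw += da * y[i] * X[i] / (lam * n)
        updates.append((local, dw))
    for local, dw in updates:                # only the deltas are communicated
        for i, da in local:
            alpha[i] += da / K               # 1/K scaling: averaging variant
        w += dw / K

print("training accuracy:", (np.sign(X @ w) == y).mean())
```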
Machine Learning Etudes in Astrophysics: Selection Functions for Mock
Cluster Catalogs | astro-ph.CO astro-ph.IM cs.LG stat.ML | Making mock simulated catalogs is an important component of astrophysical
data analysis. Selection criteria for observed astronomical objects are often
too complicated to be derived from first principles. However, the existence of
an observed group of objects is a well-suited problem for machine learning
classification. In this paper we use one-class classifiers to learn the
properties of an observed catalog of clusters of galaxies from ROSAT and to
pick clusters from mock simulations that resemble the observed ROSAT catalog.
We show how this method can be used to study the cross-correlations of thermal
Sunyaev-Zel'dovich signals with number density maps of X-ray selected cluster
catalogs. The method reduces the bias due to hand-tuning the selection function
and is readily scalable to large catalogs with a high-dimensional space of
astrophysical features.
| Amir Hajian, Marcelo Alvarez, J. Richard Bond | 10.1088/1475-7516/2015/01/038 | 1409.1576 | null | null |
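A hedged sketch of the selection-function idea: fit a one-class classifier on features of the observed clusters, then keep only the mock clusters it labels as inliers. The features and all numbers below are synthetic placeholders, not the ROSAT catalog:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# e.g. (log flux, log mass) style features for observed and mock clusters
observed = rng.normal([1.0, 14.0], [0.2, 0.3], size=(300, 2))
mocks = rng.normal([0.7, 13.7], [0.5, 0.6], size=(5000, 2))

clf = OneClassSVM(nu=0.05, gamma="scale").fit(observed)
selected = mocks[clf.predict(mocks) == 1]   # mocks that resemble the catalog
print(f"kept {len(selected)} of {len(mocks)} mock clusters")
```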
Novel Methods for Activity Classification and Occupancy Prediction
Enabling Fine-grained HVAC Control | cs.LG | Much of the energy consumption in buildings is due to HVAC systems, which has
motivated several recent studies on making these systems more
energy-efficient. Occupancy and activity are two important aspects that need to
be correctly estimated for optimal HVAC control. However, state-of-the-art
methods to estimate occupancy and classify activity require infrastructure
and/or wearable sensors, which suffer from lower acceptability due to their
higher cost. Encouragingly, with the advancement of smartphones, these
estimates are becoming more achievable. Most existing occupancy estimation
techniques rest on the underlying assumption that the phone is always carried
by its user. However, phones are often left at the desk while attending
meetings or other events, which introduces estimation errors for existing
phone-based occupancy algorithms. Similarly, the emerging theory of the Sparse
Random Classifier (SRC) has recently been applied to activity classification
on smartphones, but there is still room to improve the on-phone processing. We
propose a novel sensor fusion method which offers almost 100% accuracy for
occupancy estimation. We also propose an activity classification algorithm
which offers accuracy similar to that of state-of-the-art SRC algorithms while
reducing processing by 50%.
| Rajib Rana, Brano Kusy, Josh Wall, Wen Hu | null | 1409.1917 | null | null |
A Reduction of the Elastic Net to Support Vector Machines with an
Application to GPU Computing | stat.ML cs.LG | The past years have witnessed many dedicated open-source projects that built
and maintain implementations of Support Vector Machines (SVM), parallelized for
GPU, multi-core CPUs and distributed systems. Up to this point, no comparable
effort has been made to parallelize the Elastic Net, despite its popularity in
many high impact applications, including genetics, neuroscience and systems
biology. The first contribution in this paper is of theoretical nature. We
establish a tight link between two seemingly different algorithms and prove
that Elastic Net regression can be reduced to SVM with squared hinge loss
classification. Our second contribution is to derive a practical algorithm
based on this reduction. The reduction enables us to utilize prior efforts in
speeding up and parallelizing SVMs to obtain a highly optimized and parallel
solver for the Elastic Net and Lasso. With a simple wrapper, consisting of only
11 lines of MATLAB code, we obtain an Elastic Net implementation that naturally
utilizes GPUs and multi-core CPUs. We demonstrate on twelve real-world data
sets that our algorithm yields results identical to those of the popular (and highly
optimized) glmnet implementation but is one or several orders of magnitude
faster.
| Quan Zhou, Wenlin Chen, Shiji Song, Jacob R. Gardner, Kilian Q.
Weinberger, Yixin Chen | null | 1409.1976 | null | null |
Global Convergence of Online Limited Memory BFGS | math.OC cs.LG stat.ML | Global convergence of an online (stochastic) limited memory version of the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method for solving
optimization problems with stochastic objectives that arise in large scale
machine learning is established. Lower and upper bounds on the Hessian
eigenvalues of the sample functions are shown to suffice to guarantee that the
curvature approximation matrices have bounded determinants and traces, which,
in turn, permits establishing convergence to optimal arguments with probability
1. Numerical experiments on support vector machines with synthetic data
showcase reductions in convergence time relative to stochastic gradient descent
algorithms as well as reductions in storage and computation relative to other
online quasi-Newton methods. Experimental evaluation on a search engine
advertising problem corroborates that these advantages also manifest in
practical applications.
| Aryan Mokhtari and Alejandro Ribeiro | null | 1409.2045 | null | null |
The Large Margin Mechanism for Differentially Private Maximization | cs.LG cs.DS cs.IT math.IT math.ST stat.TH | A basic problem in the design of privacy-preserving algorithms is the private
maximization problem: the goal is to pick an item from a universe that
(approximately) maximizes a data-dependent function, all under the constraint
of differential privacy. This problem has been used as a sub-routine in many
privacy-preserving algorithms for statistics and machine learning.
Previous algorithms for this problem are either range-dependent---i.e., their
utility diminishes with the size of the universe---or only apply to very
restricted function classes. This work provides the first general-purpose,
range-independent algorithm for private maximization that guarantees
approximate differential privacy. Its applicability is demonstrated on two
fundamental tasks in data mining and machine learning.
| Kamalika Chaudhuri and Daniel Hsu and Shuang Song | null | 1409.2177 | null | null |
When coding meets ranking: A joint framework based on local learning | cs.CV cs.LG stat.ML | Sparse coding, which represents a data point as a sparse reconstruction code
with regard to a dictionary, has been a popular data representation method.
Meanwhile, in database retrieval problems, learning the ranking scores from
data points plays an important role. Up to now, these two problems have always
been considered separately, assuming that data coding and ranking are two
independent and unrelated problems. However, is there any internal
relationship between sparse coding and ranking score learning? If yes, how to
explore and make use of this internal relationship? In this paper, we try to
answer these questions by developing the first joint sparse coding and ranking
score learning algorithm. To explore the local distribution in the sparse code
space, and also to bridge coding and ranking problems, we assume that in the
neighborhood of each data point, the ranking scores can be approximated from
the corresponding sparse codes by a local linear function. By considering the
local approximation error of ranking scores, the reconstruction error and
sparsity of sparse coding, and the query information provided by the user, we
construct a unified objective function for learning of sparse codes, the
dictionary and ranking scores. We further develop an iterative algorithm to
solve this optimization problem.
| Jim Jing-Yan Wang, Xuefeng Cui, Ge Yu, Lili Guo, Xin Gao | null | 1409.2232 | null | null |
Variational Inference for Uncertainty on the Inputs of Gaussian Process
Models | stat.ML cs.AI cs.CV cs.LG | The Gaussian process latent variable model (GP-LVM) provides a flexible
approach for non-linear dimensionality reduction that has been widely applied.
However, the current approach for training GP-LVMs is based on maximum
likelihood, where the latent projection variables are maximized over rather
than integrated out. In this paper we present a Bayesian method for training
GP-LVMs by introducing a non-standard variational inference framework that
allows us to approximately integrate out the latent variables and subsequently
train a GP-LVM by maximizing an analytic lower bound on the exact marginal
likelihood. We apply this method for learning a GP-LVM from iid observations
and for learning non-linear dynamical systems where the observations are
temporally correlated. We show that a benefit of the variational Bayesian
procedure is its robustness to overfitting and its ability to automatically
select the dimensionality of the nonlinear latent space. The resulting
framework is generic, flexible and easy to extend for other purposes, such as
Gaussian process regression with uncertain inputs and semi-supervised Gaussian
processes. We demonstrate our method on synthetic data and standard machine
learning benchmarks, as well as challenging real world datasets, including high
resolution video data.
| Andreas C. Damianou, Michalis K. Titsias, Neil D. Lawrence | null | 1409.2287 | null | null |
Sparse Additive Model using Symmetric Nonnegative Definite Smoothers | stat.ML cs.LG | We introduce a new algorithm, called adaptive sparse backfitting algorithm,
for solving high dimensional Sparse Additive Model (SpAM) utilizing symmetric,
non-negative definite smoothers. Unlike the previous sparse backfitting
algorithm, our method is essentially a block coordinate descent algorithm that
is guaranteed to converge to the optimal solution. It bridges the gap between the
population backfitting algorithm and its data version. We also prove
variable selection consistency under suitable conditions. Numerical studies on
both synthetic and real data show that the adaptive sparse
backfitting algorithm outperforms the previous sparse backfitting algorithm in
fitting and predicting high dimensional nonparametric models.
| Yan Li | null | 1409.2552 | null | null |
Deep Unfolding: Model-Based Inspiration of Novel Deep Architectures | cs.LG cs.NE stat.ML | Model-based methods and deep neural networks have both been tremendously
successful paradigms in machine learning. In model-based methods, problem
domain knowledge can be built into the constraints of the model, typically at
the expense of difficulties during inference. In contrast, deterministic deep
neural networks are constructed in such a way that inference is
straightforward, but their architectures are generic and it is unclear how to
incorporate knowledge. This work aims to obtain the advantages of both
approaches. To do so, we start with a model-based approach and an associated
inference algorithm, and \emph{unfold} the inference iterations as layers in a
deep network. Rather than optimizing the original model, we \emph{untie} the
model parameters across layers, in order to create a more powerful network. The
resulting architecture can be trained discriminatively to perform accurate
inference within a fixed network size. We show how this framework allows us to
interpret conventional networks as mean-field inference in Markov random
fields, and to obtain new architectures by instead using belief propagation as
the inference algorithm. We then show its application to a non-negative matrix
factorization model that incorporates the problem-domain knowledge that sound
sources are additive. Deep unfolding of this model yields a new kind of
non-negative deep neural network, that can be trained using a multiplicative
backpropagation-style update algorithm. We present speech enhancement
experiments showing that our approach is competitive with conventional neural
networks despite using far fewer parameters.
| John R. Hershey, Jonathan Le Roux, Felix Weninger | null | 1409.2574 | null | null |
A theoretical contribution to the fast implementation of null linear
discriminant analysis method using random matrix multiplication with scatter
matrices | cs.NA cs.CV cs.LG | The null linear discriminant analysis method is a competitive approach for
dimensionality reduction. The implementation of this method, however, is
computationally expensive. Recently, a fast implementation of null linear
discriminant analysis method using random matrix multiplication with scatter
matrices was proposed. However, if the random matrix is chosen arbitrarily, the
orientation matrix may be rank deficient, and some useful discriminant
information will be lost. In this paper, we investigate how to choose the
random matrix properly, such that the two criteria of the null LDA method are
satisfied theoretically. We give a necessary and sufficient condition to
guarantee full column rank of the orientation matrix. Moreover, the geometric
characterization of the condition is also described.
| Ting-ting Feng, Gang Wu | null | 1409.2579 | null | null |
Learning Machines Implemented on Non-Deterministic Hardware | cs.LG stat.ML | This paper highlights new opportunities for designing large-scale machine
learning systems as a consequence of blurring traditional boundaries that have
allowed algorithm designers and application-level practitioners to stay -- for
the most part -- oblivious to the details of the underlying hardware-level
implementations. The hardware/software co-design methodology advocated here
hinges on the deployment of compute-intensive machine learning kernels onto
compute platforms that trade-off determinism in the computation for improvement
in speed and/or energy efficiency. To achieve this, we revisit digital
stochastic circuits for approximating matrix computations that are ubiquitous
in machine learning algorithms. Theoretical and empirical evaluation is
undertaken to assess the impact of the hardware-induced computational noise on
algorithm performance. As a proof-of-concept, a stochastic hardware simulator
is employed for training deep neural networks for image recognition problems.
| Suyog Gupta, Vikas Sindhwani, Kailash Gopalakrishnan | null | 1409.2620 | null | null |
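A minimal numpy sketch of the idea: replace exact matrix products with a noisy surrogate to emulate stochastic arithmetic hardware and inspect how the perturbation propagates through one layer. The multiplicative Gaussian noise model is an assumption, not the paper's circuit model:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matmul(a, b, rel_sigma=0.02):
    """Exact product perturbed by element-wise multiplicative Gaussian noise."""
    exact = a @ b
    return exact * (1.0 + rel_sigma * rng.standard_normal(exact.shape))

x = rng.standard_normal((64, 128))         # a batch of activations
w = rng.standard_normal((128, 32)) * 0.1   # one layer's weights

y_exact = x @ w
y_noisy = noisy_matmul(x, w)
rel_err = np.linalg.norm(y_noisy - y_exact) / np.linalg.norm(y_exact)
print(f"relative error induced by hardware-style noise: {rel_err:.4f}")
```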
Weighted Classification Cascades for Optimizing Discovery Significance
in the HiggsML Challenge | stat.ML cs.LG | We introduce a minorization-maximization approach to optimizing common
measures of discovery significance in high energy physics. The approach
alternates between solving a weighted binary classification problem and
updating class weights in a simple, closed-form manner. Moreover, an argument
based on convex duality shows that an improvement in weighted classification
error on any round yields a commensurate improvement in discovery significance.
We complement our derivation with experimental results from the 2014 Higgs
boson machine learning challenge.
| Lester Mackey and Jordan Bryan and Man Yue Mo | null | 1409.2655 | null | null |
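The abstract does not spell out the closed-form weight update, so the sketch below shows only the surrounding loop: fit a weighted classifier, score it with the approximate median significance (AMS) used in the 2014 challenge, and adjust the class weights (the multiplicative update here is a placeholder, not the paper's derivation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def ams(s, b, b_reg=10.0):
    """Approximate median significance, as defined by the HiggsML challenge."""
    return np.sqrt(2.0 * ((s + b + b_reg) * np.log(1.0 + s / (b + b_reg)) - s))

def weighted_cascade(X, y, w, rounds=3, signal_weight=1.0):
    for _ in range(rounds):
        sample_w = np.where(y == 1, signal_weight * w, w)
        clf = GradientBoostingClassifier().fit(X, y, sample_weight=sample_w)
        pred = clf.predict(X)
        s = w[(pred == 1) & (y == 1)].sum()   # weighted true positives
        b = w[(pred == 1) & (y == 0)].sum()   # weighted false positives
        print("AMS:", ams(s, b))
        signal_weight *= 1.5                  # placeholder for the derived update
    return clf

X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
weighted_cascade(X, y, w=np.ones(len(y)))
```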
Winner-Take-All Autoencoders | cs.LG cs.NE | In this paper, we propose a winner-take-all method for learning hierarchical
sparse representations in an unsupervised fashion. We first introduce
fully-connected winner-take-all autoencoders which use mini-batch statistics to
directly enforce a lifetime sparsity in the activations of the hidden units. We
then propose the convolutional winner-take-all autoencoder which combines the
benefits of convolutional architectures and autoencoders for learning
shift-invariant sparse representations. We describe a way to train
convolutional autoencoders layer by layer, where in addition to lifetime
sparsity, a spatial sparsity within each feature map is achieved using
winner-take-all activation functions. We will show that winner-take-all
autoencoders can be used to learn deep sparse representations from the
MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets,
and achieve competitive classification performance.
| Alireza Makhzani, Brendan Frey | null | 1409.2752 | null | null |
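A minimal numpy sketch of the lifetime-sparsity constraint as described: for each hidden unit, keep only its k largest activations across the mini-batch and zero the rest, so every unit fires on a small fraction of examples:

```python
import numpy as np

def lifetime_sparsity(h, rate=0.05):
    """h: (batch, hidden) activations; keep the top `rate` fraction per unit."""
    batch = h.shape[0]
    k = max(1, int(rate * batch))
    # per-column threshold = k-th largest activation of that hidden unit
    thresh = np.partition(h, batch - k, axis=0)[batch - k]
    return np.where(h >= thresh, h, 0.0)

rng = np.random.default_rng(0)
h = rng.standard_normal((100, 16))
sparse_h = lifetime_sparsity(h)
print((sparse_h != 0).mean(axis=0))  # ~0.05 of the batch active per unit
```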
Far-Field Compression for Fast Kernel Summation Methods in High
Dimensions | cs.LG stat.ML | We consider fast kernel summations in high dimensions: given a large set of
points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the
{\em kernel} function), we compute a weighted sum of all pairwise kernel
interactions for each point in the set. Direct summation is equivalent to a
(dense) matrix-vector multiplication and scales quadratically with the number
of points. Fast kernel summation algorithms reduce this cost to log-linear or
linear complexity.
Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by
constructing approximate representations of interactions of points that are far
from each other. In algebraic terms, these representations correspond to
low-rank approximations of blocks of the overall interaction matrix. Existing
approaches require an excessive number of kernel evaluations with increasing
$d$ and number of points in the dataset.
To address this issue, we use a randomized algebraic approach in which we
first sample the rows of a block and then construct its approximate, low-rank
interpolative decomposition. We examine the feasibility of this approach
theoretically and experimentally. We provide a new theoretical result showing a
tighter bound on the reconstruction error from uniformly sampling rows than the
existing state-of-the-art. We demonstrate that our sampling approach is
competitive with existing (but prohibitively expensive) methods from the
literature. We also construct kernel matrices for the Laplacian, Gaussian, and
polynomial kernels -- all commonly used in physics and data analysis. We
explore the numerical properties of blocks of these matrices, and show that
they are amenable to our approach. Depending on the data set, our randomized
algorithm can successfully compute low rank approximations in high dimensions.
We report results for data sets with ambient dimensions from four to 1,000.
| William B. March and George Biros | null | 1409.2802 | null | null |
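A sketch of the row-sampling idea on one synthetic kernel block: uniformly sample rows, then reveal the block's numerical rank with SciPy's interpolative decomposition. The geometry, bandwidth, and sample size are assumptions for illustration:

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(0)
src = rng.standard_normal((1000, 20))         # source points, d = 20
trg = rng.standard_normal((200, 20)) + 3.0    # a well-separated target box

# Gaussian kernel block K[i, j] = exp(-||trg_i - src_j||^2 / (2 h^2)), h^2 = 25
d2 = ((trg[:, None, :] - src[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 25.0))

rows = rng.choice(len(trg), size=50, replace=False)  # uniform row sample
k, idx, proj = sli.interp_decomp(K[rows], 1e-6)      # ID of the sampled block
print("numerical rank of the sampled far-field block:", k)
```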
A Stochastic PCA and SVD Algorithm with an Exponential Convergence Rate | cs.LG cs.NA math.OC stat.ML | We describe and analyze a simple algorithm for principal component analysis
and singular value decomposition, VR-PCA, which uses computationally cheap
stochastic iterations, yet converges exponentially fast to the optimal
solution. In contrast, existing algorithms suffer either from slow convergence,
or computationally intensive iterations whose runtime scales with the data
size. The algorithm builds on a recent variance-reduced stochastic gradient
technique, which was previously analyzed for strongly convex optimization,
whereas here we apply it to an inherently non-convex problem, using a very
different analysis.
| Ohad Shamir | null | 1409.2848 | null | null |
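A sketch following the variance-reduced recipe described: each epoch computes one exact matrix-vector product, and the cheap stochastic inner iterations correct single-sample updates with it (step size and sizes are illustrative):

```python
import numpy as np

def vr_pca(X, eta=1e-3, epochs=20, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        w_tilde = w.copy()
        u = X.T @ (X @ w_tilde) / n          # one exact power-iteration step
        for _ in range(n):                   # cheap stochastic inner loop
            x = X[rng.integers(n)]
            w = w + eta * (x * (x @ w) - x * (x @ w_tilde) + u)
            w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20)) @ np.diag(np.linspace(3.0, 0.1, 20))
w = vr_pca(X)
print(abs(w[0]))  # close to 1: the first axis carries the largest variance
```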
Non-Convex Boosting Overcomes Random Label Noise | cs.LG | The sensitivity of Adaboost to random label noise is a well-studied problem.
LogitBoost, BrownBoost and RobustBoost are boosting algorithms claimed to be
less sensitive to noise than AdaBoost. We present the results of experiments
evaluating these algorithms on both synthetic and real datasets. We compare the
performance on each of the datasets when the labels are corrupted by different
levels of independent label noise. In the presence of random label noise, we found
that BrownBoost and RobustBoost perform significantly better than AdaBoost and
LogitBoost, while the difference within each pair of algorithms is
insignificant. We provide an explanation for the difference based on the margin
distributions of the algorithms.
| Sunsern Cheamanunkul, Evan Ettinger and Yoav Freund | null | 1409.2905 | null | null |
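A sketch of the kind of experiment described, using scikit-learn's AdaBoost as the noise-sensitive baseline (LogitBoost, BrownBoost, and RobustBoost are not in scikit-learn, so substitute whatever implementations are available):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for noise in (0.0, 0.1, 0.2, 0.3):
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise   # independent random label flips
    y_noisy[flip] = 1 - y_noisy[flip]
    acc = AdaBoostClassifier(n_estimators=200, random_state=0) \
        .fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"noise={noise:.1f}  test accuracy={acc:.3f}")
```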
Collaborative Deep Learning for Recommender Systems | cs.LG cs.CL cs.IR cs.NE stat.ML | Collaborative filtering (CF) is a successful approach commonly used by many
recommender systems. Conventional CF-based methods use the ratings given to
items by users as the sole source of information for learning to make
recommendations. However, the ratings are often very sparse in many
applications, causing CF-based methods to degrade significantly in their
recommendation performance. To address this sparsity problem, auxiliary
information such as item content information may be utilized. Collaborative
topic regression (CTR) is an appealing recent method taking this approach which
tightly couples the two components that learn from two different sources of
information. Nevertheless, the latent representation learned by CTR may not be
very effective when the auxiliary information is very sparse. To address this
problem, we generalize recent advances in deep learning from i.i.d. input to
non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian
model called collaborative deep learning (CDL), which jointly performs deep
representation learning for the content information and collaborative filtering
for the ratings (feedback) matrix. Extensive experiments on three real-world
datasets from different domains show that CDL can significantly advance the
state of the art.
| Hao Wang and Naiyan Wang and Dit-Yan Yeung | null | 1409.2944 | null | null |
"Look Ma, No Hands!" A Parameter-Free Topic Model | cs.LG cs.CL cs.IR | It has always been a burden to the users of statistical topic models to
predetermine the right number of topics, which is a key parameter of most topic
models. Conventionally, automatic selection of this parameter is done through
either statistical model selection (e.g., cross-validation, AIC, or BIC) or
Bayesian nonparametric models (e.g., hierarchical Dirichlet process). These
methods either rely on repeated runs of the inference algorithm to search
through a large range of parameter values which does not suit the mining of big
data, or replace this parameter with alternative parameters that are less
intuitive and still hard to determine. In this paper, we explore how to
"eliminate" this parameter from a new perspective. We first present a
nonparametric treatment of the PLSA model named nonparametric probabilistic
latent semantic analysis (nPLSA). The inference procedure of nPLSA allows for
the exploration and comparison of different numbers of topics within a single
execution, yet remains as simple as that of PLSA. This is achieved by
substituting the parameter of the number of topics with an alternative
parameter that is the minimal goodness of fit of a document. We show that the
new parameter can be further eliminated by two parameter-free treatments:
either by monitoring the diversity among the discovered topics or by a weak
supervision from users in the form of an exemplar topic. The parameter-free
topic model finds the appropriate number of topics when the diversity among the
discovered topics is maximized, or when the granularity of the discovered
topics matches the exemplar topic. Experiments on both synthetic and real data
prove that the parameter-free topic model extracts topics of quality comparable
to that of classical topic models with "manual transmission". The
quality of the topics outperforms those extracted through classical Bayesian
nonparametric models.
| Jian Tang, Ming Zhang, Qiaozhu Mei | null | 1409.2993 | null | null |
Towards Optimal Algorithms for Prediction with Expert Advice | cs.LG cs.GT math.PR | We study the classical problem of prediction with expert advice in the
adversarial setting with a geometric stopping time. In 1965, Cover gave the
optimal algorithm for the case of 2 experts. In this paper, we design the
optimal algorithm, adversary and regret for the case of 3 experts. Further, we
show that the optimal algorithm for $2$ and $3$ experts is a probability
matching algorithm (analogous to Thompson sampling) against a particular
randomized adversary. Remarkably, our proof shows that the probability matching
algorithm is not only optimal against this particular randomized adversary, but
also minimax optimal.
Our analysis develops upper and lower bounds simultaneously, analogous to the
primal-dual method. Our analysis of the optimal adversary goes through delicate
asymptotics of the random walk of a particle between multiple walls. We use the
connection we develop to random walks to derive an improved algorithm and
regret bound for the case of $4$ experts, and, provide a general framework for
designing the optimal algorithm and adversary for an arbitrary number of
experts.
| Nick Gravin, Yuval Peres and Balasubramanian Sivan | null | 1409.3040 | null | null |
Metric Learning for Temporal Sequence Alignment | cs.LG | In this paper, we propose to learn a Mahalanobis distance to perform
alignment of multivariate time series. The learning examples for this task are
time series for which the true alignment is known. We cast the alignment
problem as a structured prediction task, and propose realistic losses between
alignments for which the optimization is tractable. We provide experiments on
real data in the audio to audio context, where we show that the learning of a
similarity measure leads to improvements in the performance of the alignment
task. We also propose to use this metric learning framework to perform feature
selection and, from basic audio features, to build a combination of these with
better performance for the alignment task.
| Damien Garreau (INRIA Paris - Rocquencourt, DI-ENS), R\'emi Lajugie
(INRIA Paris - Rocquencourt, DI-ENS), Sylvain Arlot (INRIA Paris -
Rocquencourt, DI-ENS), Francis Bach (INRIA Paris - Rocquencourt, DI-ENS) | null | 1409.3136 | null | null |
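A minimal sketch of alignment under a learned metric: classical dynamic time warping with the frame-to-frame cost replaced by a Mahalanobis distance parameterized by a PSD matrix M (fixed here; the paper learns it from ground-truth alignments cast as structured prediction):

```python
import numpy as np

def dtw_mahalanobis(A, B, M):
    """A: (n, d) and B: (m, d) time series; M: (d, d) PSD metric matrix."""
    n, m = len(A), len(B)
    diff = A[:, None, :] - B[None, :, :]                  # (n, m, d)
    cost = np.sqrt(np.einsum('nmd,de,nme->nm', diff, M, diff))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j],
                                               D[i, j - 1],
                                               D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
A, B = rng.standard_normal((30, 4)), rng.standard_normal((40, 4))
print(dtw_mahalanobis(A, B, np.eye(4)))   # identity M recovers plain DTW
```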
Sequence to Sequence Learning with Neural Networks | cs.CL cs.LG | Deep Neural Networks (DNNs) are powerful models that have achieved excellent
performance on difficult learning tasks. Although DNNs work well whenever large
labeled training sets are available, they cannot be used to map sequences to
sequences. In this paper, we present a general end-to-end approach to sequence
learning that makes minimal assumptions on the sequence structure. Our method
uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to
a vector of a fixed dimensionality, and then another deep LSTM to decode the
target sequence from the vector. Our main result is that on an English to
French translation task from the WMT'14 dataset, the translations produced by
the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's
BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did
not have difficulty on long sentences. For comparison, a phrase-based SMT
system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM
to rerank the 1000 hypotheses produced by the aforementioned SMT system, its
BLEU score increases to 36.5, which is close to the previous best result on
this task. The LSTM also learned sensible phrase and sentence representations
that are sensitive to word order and are relatively invariant to the active and
the passive voice. Finally, we found that reversing the order of the words in
all source sentences (but not target sentences) improved the LSTM's performance
markedly, because doing so introduced many short term dependencies between the
source and the target sentence which made the optimization problem easier.
| Ilya Sutskever and Oriol Vinyals and Quoc V. Le | null | 1409.3215 | null | null |
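A compact PyTorch sketch of the architecture described: one LSTM encodes the (reversed) source into a fixed-size state, a second LSTM decodes the target from it. Dimensions and vocabulary sizes are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=256, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.LSTM(dim, dim, layers, batch_first=True)
        self.decoder = nn.LSTM(dim, dim, layers, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        # reverse the source order, which the paper found eases optimization
        _, state = self.encoder(self.src_emb(src.flip(1)))
        dec, _ = self.decoder(self.tgt_emb(tgt), state)   # teacher forcing
        return self.out(dec)

model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
src = torch.randint(0, 1000, (8, 15))
tgt = torch.randint(0, 1200, (8, 12))
print(model(src, tgt).shape)   # torch.Size([8, 12, 1200])
```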
Building Program Vector Representations for Deep Learning | cs.SE cs.LG cs.NE | Deep learning has made significant breakthroughs in various fields of
artificial intelligence. Advantages of deep learning include the ability to
capture highly complicated features, weak involvement of human engineering,
etc. However, it is still virtually impossible to use deep learning to analyze
programs since deep architectures cannot be trained effectively with pure back
propagation. In this pioneering paper, we propose the "coding criterion" to
build program vector representations, which are the premise of deep learning
for program analysis. Our representation learning approach directly makes deep
learning a reality in this new field. We evaluate the learned vector
representations both qualitatively and quantitatively. We conclude, based on
the experiments, the coding criterion is successful in building program
representations. To evaluate whether deep learning is beneficial for program
analysis, we feed the representations to deep neural networks, and achieve
higher accuracy in the program classification task than "shallow" methods, such
as logistic regression and the support vector machine. This result confirms the
feasibility of deep learning to analyze programs. It also gives primary
evidence of its success in this new field. We believe deep learning will become
an outstanding technique for program analysis in the near future.
| Lili Mou, Ge Li, Yuxuan Liu, Hao Peng, Zhi Jin, Yan Xu, Lu Zhang | null | 1409.3358 | null | null |
Consensus-Based Modelling using Distributed Feature Construction | cs.LG | A particularly successful role for Inductive Logic Programming (ILP) is as a
tool for discovering useful relational features for subsequent use in a
predictive model. Conceptually, the case for using ILP to construct relational
features rests on treating these features as functions, the automated discovery
of which necessarily requires some form of first-order learning. Practically,
there are now several reports in the literature that suggest that augmenting
any existing features with ILP-discovered relational features can substantially
improve the predictive power of a model. While the approach is straightforward
enough, much still needs to be done to scale it up to explore more fully the
space of possible features that can be constructed by an ILP system. This space
is, in principle, infinite and, in practice, extremely large. Applications have been
confined to heuristic or random selections from this space. In this paper, we
address this computational difficulty by allowing features to be constructed in
a distributed manner. That is, there is a network of computational units, each
of which employs an ILP engine to construct some small number of features and
then builds a (local) model. We then employ a consensus-based algorithm, in
which neighboring nodes share information to update local models. For a
category of models (those with convex loss functions), it can be shown that the
algorithm will result in all nodes converging to a consensus model. In
practice, it may be slow to achieve this convergence. Nevertheless, our results
on synthetic and real datasets suggest that in a relatively short time the
"best" node in the network reaches a model whose predictive accuracy is
comparable to that obtained using more computational effort in a
non-distributed setting (the best node is identified as the one whose weights
converge first).
| Haimonti Dutta and Ashwin Srinivasan | null | 1409.3446 | null | null |
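A toy numpy sketch of the consensus step described: each node holds a local linear model, takes a gradient step on its own data, then averages weights with its neighbours. The ring network and data are synthetic placeholders (and the local models here are least-squares rather than ILP-derived features):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 5, 10
ring = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

Xs = [rng.standard_normal((50, d)) for _ in range(n_nodes)]
w_true = rng.standard_normal(d)
ys = [X @ w_true + 0.1 * rng.standard_normal(50) for X in Xs]
W = [np.zeros(d) for _ in range(n_nodes)]

for _ in range(200):
    grads = [X.T @ (X @ w - y) / len(y) for X, y, w in zip(Xs, ys, W)]
    stepped = [w - 0.05 * g for w, g in zip(W, grads)]
    # consensus: average your step with your neighbours' (uniform weights)
    W = [np.mean([stepped[j] for j in ring[i] + [i]], axis=0)
         for i in range(n_nodes)]

print(max(np.linalg.norm(w - w_true) for w in W))  # every node near w_true
```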
Topic Modeling of Hierarchical Corpora | stat.ML cs.IR cs.LG | We study the problem of topic modeling in corpora whose documents are
organized in a multi-level hierarchy. We explore a parametric approach to this
problem, assuming that the number of topics is known or can be estimated by
cross-validation. The models we consider can be viewed as special
(finite-dimensional) instances of hierarchical Dirichlet processes (HDPs). For
these models we show that there exists a simple variational approximation for
probabilistic inference. The approximation relies on a previously unexploited
inequality that handles the conditional dependence between Dirichlet latent
variables in adjacent levels of the model's hierarchy. We compare our approach
to existing implementations of nonparametric HDPs. On several benchmarks we
find that our approach is faster than Gibbs sampling and able to learn more
predictive models than existing variational methods. Finally, we demonstrate
the large-scale viability of our approach on two newly available corpora from
researchers in computer security---one with 350,000 documents and over 6,000
internal subcategories, the other with a five-level deep hierarchy.
| Do-kyum Kim, Geoffrey M. Voelker, Lawrence K. Saul | null | 1409.3518 | null | null |
10,000+ Times Accelerated Robust Subset Selection (ARSS) | cs.LG cs.CV stat.ML | Subset selection from massive data with noisy information is increasingly
popular for various applications. This problem is still highly challenging as
current methods are generally slow in speed and sensitive to outliers. To
address the above two issues, we propose an accelerated robust subset selection
(ARSS) method. Specifically in the subset selection area, this is the first
attempt to employ the $\ell_{p}(0<p\leq1)$-norm based measure for the
representation loss, preventing large errors from dominating our objective. As
a result, the robustness against outlier elements is greatly enhanced.
In practice, the data size is generally much larger than the feature length, i.e. $N\gg
L$. Based on this observation, we propose a speedup solver (via ALM and
equivalent derivations) to greatly reduce the computational cost, theoretically
from $O(N^{4})$ to $O(N^{2}L)$. Extensive experiments on ten benchmark
datasets verify that our method not only outperforms state-of-the-art methods,
but also runs 10,000+ times faster than the most related method.
| Feiyun Zhu, Bin Fan, Xinliang Zhu, Ying Wang, Shiming Xiang and
Chunhong Pan | null | 1409.3660 | null | null |
Optimization Methods for Sparse Pseudo-Likelihood Graphical Model
Selection | stat.CO cs.LG stat.ML | Sparse high dimensional graphical model selection is a popular topic in
contemporary machine learning. To this end, various useful approaches have been
proposed in the context of $\ell_1$-penalized estimation in the Gaussian
framework. Though many of these inverse covariance estimation approaches are
demonstrably scalable and have leveraged recent advances in convex
optimization, they still depend on the Gaussian functional form. To address
this gap, a convex pseudo-likelihood based partial correlation graph estimation
method (CONCORD) has been recently proposed. This method uses coordinate-wise
minimization of a regression based pseudo-likelihood, and has been shown to
have robust model selection properties in comparison with the Gaussian
approach. In direct contrast to the parallel work in the Gaussian setting
however, this new convex pseudo-likelihood framework has not leveraged the
extensive array of methods that have been proposed in the machine learning
literature for convex optimization. In this paper, we address this crucial gap
by proposing two proximal gradient methods (CONCORD-ISTA and CONCORD-FISTA) for
performing $\ell_1$-regularized inverse covariance matrix estimation in the
pseudo-likelihood framework. We present timing comparisons with coordinate-wise
minimization and demonstrate that our approach yields tremendous payoffs for
$\ell_1$-penalized partial correlation graph estimation outside the Gaussian
setting, thus yielding the fastest and most scalable approach for such
problems. We undertake a theoretical analysis of our approach and rigorously
demonstrate convergence, and also derive rates thereof.
| Sang-Yun Oh, Onkar Dalal, Kshitij Khare, Bala Rajaratnam | null | 1409.3768 | null | null |
Computational Implications of Reducing Data to Sufficient Statistics | stat.CO cs.IT cs.LG math.IT | Given a large dataset and an estimation task, it is common to pre-process the
data by reducing them to a set of sufficient statistics. This step is often
regarded as straightforward and advantageous (in that it simplifies statistical
analysis). I show that, on the contrary, reducing data to sufficient statistics
can change a computationally tractable estimation problem into an intractable
one. I discuss connections with recent work in theoretical computer science,
and implications for some techniques to estimate graphical models.
| Andrea Montanari | null | 1409.3821 | null | null |
Linear, Deterministic, and Order-Invariant Initialization Methods for
the K-Means Clustering Algorithm | cs.LG cs.CV | Over the past five decades, k-means has become the clustering algorithm of
choice in many application domains primarily due to its simplicity, time/space
efficiency, and invariance to the ordering of the data points. Unfortunately,
the algorithm's sensitivity to the initial selection of the cluster centers
remains its most serious drawback. Numerous initialization methods have
been proposed to address this drawback. Many of these methods, however, have
time complexity superlinear in the number of data points, which makes them
impractical for large data sets. On the other hand, linear methods are often
random and/or sensitive to the ordering of the data points. These methods are
generally unreliable in that the quality of their results is unpredictable.
Therefore, it is common practice to perform multiple runs of such methods and
take the output of the run that produces the best results. Such a practice,
however, greatly increases the computational requirements of the otherwise
highly efficient k-means algorithm. In this chapter, we investigate the
empirical performance of six linear, deterministic (non-random), and
order-invariant k-means initialization methods on a large and diverse
collection of data sets from the UCI Machine Learning Repository. The results
demonstrate that two relatively unknown hierarchical initialization methods due
to Su and Dy outperform the remaining four methods with respect to two
objective effectiveness criteria. In addition, a recent method due to Erisoglu
et al. performs surprisingly poorly.
| M. Emre Celebi and Hassan A. Kingravi | null | 1409.3854 | null | null |
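An illustrative sketch of the properties discussed: a linear-time, deterministic, order-invariant initialization (maximin seeded at the point nearest the data mean). This is an example of the class of methods studied, not one of the six methods benchmarked in the chapter:

```python
import numpy as np
from sklearn.cluster import KMeans

def deterministic_maximin_init(X, k):
    # seed: the point nearest the grand mean (deterministic, order-invariant)
    centers = [X[np.argmin(((X - X.mean(axis=0)) ** 2).sum(axis=1))]]
    for _ in range(k - 1):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d2)])  # farthest point from chosen centers
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0, 3, 6)])
init = deterministic_maximin_init(X, k=3)
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print(km.inertia_)
```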
Unsupervised learning of clutter-resistant visual representations from
natural videos | cs.CV cs.LG | Populations of neurons in inferotemporal cortex (IT) maintain an explicit
code for object identity that also tolerates transformations of object
appearance e.g., position, scale, viewing angle [1, 2, 3]. Though the learning
rules are not known, recent results [4, 5, 6] suggest the operation of an
unsupervised temporal-association-based method e.g., Foldiak's trace rule [7].
Such methods exploit the temporal continuity of the visual world by assuming
that visual experience over short timescales will tend to have invariant
identity content. Thus, by associating representations of frames from nearby
times, a representation that tolerates whatever transformations occurred in the
video may be achieved. Many previous studies verified that such rules can work
in simple situations without background clutter, but the presence of visual
clutter has remained problematic for this approach. Here we show that temporal
association based on large class-specific filters (templates) avoids the
problem of clutter. Our system learns in an unsupervised way from natural
videos gathered from the internet, and is able to perform a difficult
unconstrained face recognition task on natural images: Labeled Faces in the
Wild [8].
| Qianli Liao, Joel Z. Leibo, Tomaso Poggio | null | 1409.3879 | null | null |
An Approach to Reducing Annotation Costs for BioNLP | cs.CL cs.LG stat.ML | There is a broad range of BioNLP tasks for which active learning (AL) can
significantly reduce annotation costs and a specific AL algorithm we have
developed is particularly effective in reducing annotation costs for these
tasks. We have previously developed an AL algorithm called ClosestInitPA that
works best with tasks that have the following characteristics: redundancy in
training material, burdensome annotation costs, Support Vector Machines (SVMs)
work well for the task, and imbalanced datasets (i.e. when set up as a binary
classification problem, one class is substantially rarer than the other). Many
BioNLP tasks have these characteristics and thus our AL algorithm is a natural
approach to apply to BioNLP tasks.
| Michael Bloodgood and K. Vijay-Shanker | null | 1409.3881 | null | null |
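A generic pool-based active learning sketch with an SVM, querying the unlabeled point closest to the current hyperplane; it illustrates the family of methods discussed, not the authors' ClosestInitPA algorithm:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
labeled = list(np.where(y == 1)[0][:10]) + list(np.where(y == 0)[0][:10])
pool = [i for i in range(len(y)) if i not in set(labeled)]

for _ in range(20):                           # 20 query rounds
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    margins = np.abs(clf.decision_function(X[pool]))
    pick = pool.pop(int(np.argmin(margins)))  # closest to the hyperplane
    labeled.append(pick)                      # the oracle supplies y[pick]

clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
print(clf.score(X, y))
```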
Parallel Distributed Block Coordinate Descent Methods based on Pairwise
Comparison Oracle | stat.ML cs.LG | This paper provides a block coordinate descent algorithm to solve
unconstrained optimization problems. In our algorithm, computation of function
values or gradients is not required. Instead, pairwise comparison of function
values is used. Our algorithm consists of two steps; one is the direction
estimate step and the other is the search step. Both steps require only
pairwise comparison of function values, which tells us only the order of
function values over two points. In the direction estimate step, a Newton-type
search direction is estimated using a computation scheme similar to block
coordinate descent methods, based on the pairwise comparison. In the search step, a
numerical solution is updated along the estimated direction. The computation in
the direction estimate step can be easily parallelized, and thus, the algorithm
works efficiently to find the minimizer of the objective function. Also, we
show an upper bound on the convergence rate. In numerical experiments, we show
that our method finds the optimal solution more efficiently than some
existing methods based on pairwise comparison.
| Kota Matsui, Wataru Kumagai, Takafumi Kanamori | null | 1409.3912 | null | null |
A study on effectiveness of extreme learning machine | cs.NE cs.LG | Extreme learning machine (ELM), proposed by Huang et al., has been shown a
promising learning algorithm for single-hidden layer feedforward neural
networks (SLFNs). Nevertheless, because of the random choice of input weights
and biases, the ELM algorithm sometimes leaves the hidden layer output matrix H
of the SLFN without full column rank, which lowers the effectiveness of ELM. This paper
discusses the effectiveness of ELM and proposes an improved algorithm called
EELM that makes a proper selection of the input weights and bias before
calculating the output weights, which ensures the full column rank of H in
theory. This improves to some extent the learning rate (testing accuracy,
prediction accuracy, learning time) and the robustness property of the
networks. The experimental results based on both the benchmark function
approximation and real-world problems including classification and regression
applications show the good performances of EELM.
| Yuguang Wang and Feilong Cao and Yubo Yuan | 10.1016/j.neucom.2010.11.030 | 1409.3924 | null | null |
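A minimal numpy ELM sketch matching the description: random input weights and biases, then output weights by least squares on the hidden layer output matrix H (EELM's refinement is to pick the random weights so that H provably has full column rank; plain ELM below leaves them arbitrary):

```python
import numpy as np

def elm_fit(X, y, hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(X.shape[1], hidden))  # random input weights
    b = rng.uniform(-1, 1, size=hidden)                # random biases
    H = np.tanh(X @ W + b)                             # hidden output matrix
    beta = np.linalg.pinv(H) @ y                       # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X[:, 0])
W, b, beta = elm_fit(X, y)
print(np.abs(elm_predict(X, W, b, beta) - y).mean())   # small training error
```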
A Deep and Autoregressive Approach for Topic Modeling of Multimodal Data | cs.CV cs.IR cs.LG cs.NE | Topic modeling based on latent Dirichlet allocation (LDA) has been a
framework of choice to deal with multimodal data, such as in image annotation
tasks. Another popular approach to model the multimodal data is through deep
neural networks, such as the deep Boltzmann machine (DBM). Recently, a new type
of topic model called the Document Neural Autoregressive Distribution Estimator
(DocNADE) was proposed and demonstrated state-of-the-art performance for text
document modeling. In this work, we show how to successfully apply and extend
this model to multimodal data, such as simultaneous image classification and
annotation. First, we propose SupDocNADE, a supervised extension of DocNADE
that increases the discriminative power of the learned hidden topic features,
and show how to employ it to learn a joint representation from image visual
words, annotation words and class label information. We test our model on the
LabelMe and UIUC-Sports data sets and show that it compares favorably to other
topic models. Second, we propose a deep extension of our model and provide an
efficient way of training the deep model. Experimental results show that our
deep model outperforms its shallow version and reaches state-of-the-art
performance on the Multimedia Information Retrieval (MIR) Flickr data set.
| Yin Zheng, Yu-Jin Zhang, Hugo Larochelle | 10.1109/TPAMI.2015.2476802 | 1409.3970 | null | null |
EquiNMF: Graph Regularized Multiview Nonnegative Matrix Factorization | cs.LG cs.NA | Nonnegative matrix factorization (NMF) methods have proved to be powerful
across a wide range of real-world clustering applications. Integrating multiple
types of measurements for the same objects/subjects allows us to gain a deeper
understanding of the data and refine the clustering. We have developed a novel
Graph-regularized multiview NMF-based method for data integration called
EquiNMF. The parameters for our method are set in a completely automated
data-specific unsupervised fashion, a highly desirable property in real-world
applications. We performed extensive and comprehensive experiments on multiview
imaging data. We show that EquiNMF consistently outperforms single-view
NMF methods applied to concatenated data as well as multi-view NMF methods with different
types of regularizations.
| Daniel Hidru and Anna Goldenberg | null | 1409.4018 | null | null |
A new approach in machine learning | stat.ML cs.LG | In this technical report we present a novel approach to machine learning.
Once the new framework is presented, we provide a simple and yet very
powerful learning algorithm, which is benchmarked on various datasets.
The framework we propose is based on boolean circuits; more specifically, the
classifiers produced by our algorithm have that form. Using bits and boolean
gates instead of real numbers and multiplication enables the learning
algorithm and the classifier to use very efficient boolean vector operations,
making both extremely efficient.
The classifiers we obtain with our framework compare very
favorably with those produced by conventional techniques, both in terms of
efficiency and accuracy.
| Alain Tapp | null | 1409.4044 | null | null |
Transfer Learning for Video Recognition with Scarce Training Data for
Deep Convolutional Neural Network | cs.CV cs.LG | Unconstrained video recognition and Deep Convolution Network (DCN) are two
active topics in computer vision recently. In this work, we apply DCNs as
frame-based recognizers for video recognition. Our preliminary studies,
however, show that video corpora with complete ground truth are usually not
large and diverse enough to learn a robust model. The networks trained directly
on the video data set suffer from significant overfitting and have a poor
recognition rate on the test set. The same lack-of-training-sample problem
limits the usage of deep models on a wide range of computer vision problems
where obtaining training data is difficult. To overcome the problem, we
perform transfer learning from images to videos to utilize the knowledge in the
weakly labeled image corpus for video recognition. The image corpus helps to
learn important visual patterns for natural images, while these patterns are
ignored by models trained only on the video corpus. Therefore, the resultant
networks have better generalizability and better recognition rate. We show that
by means of transfer learning from image to video, we can learn a frame-based
recognizer with only 4k videos. Because the image corpus is weakly labeled, the
entire learning process requires only 4k annotated instances, which is far less
than the million scale image data sets required by previous works. The same
approach may be applied to other visual recognition tasks where only scarce
training data is available, and it improves the applicability of DCNs in
various computer vision problems. Our experiments also reveal the correlation
between meta-parameters and the performance of DCNs, given the properties of
the target problem and data. These results lead to a heuristic for
meta-parameter selection for future research, one that does not rely on a
time-consuming meta-parameter search.
| Yu-Chuan Su, Tzu-Hsuan Chiu, Chun-Yen Yeh, Hsin-Fu Huang, Winston H.
Hsu | null | 1409.4127 | null | null |
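A hedged PyTorch sketch of image-to-video transfer in the spirit described: start from a network pretrained on an image corpus, replace the classifier head, and fine-tune on video frames. The model choice, class count, and freezing policy are assumptions (requires a recent torchvision):

```python
import torch
import torch.nn as nn
from torchvision import models

num_video_classes = 10   # placeholder for the target video label set

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_video_classes)

# Optionally freeze early layers so scarce video data only adapts the top.
for p in model.layer1.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(4, 3, 224, 224)    # a mini-batch of video frames
labels = torch.randint(0, num_video_classes, (4,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(float(loss))
```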
Active Metric Learning from Relative Comparisons | cs.LG | This work focuses on active learning of distance metrics from relative
comparison information. A relative comparison specifies, for a data point
triplet $(x_i,x_j,x_k)$, that instance $x_i$ is more similar to $x_j$ than to
$x_k$. Such constraints, when available, have been shown to be useful toward
defining appropriate distance metrics. In real-world applications, acquiring
constraints often requires considerable human effort. This motivates us to study
how to select and query the most useful relative comparisons to achieve
effective metric learning with minimum user effort. Given an underlying class
concept that is employed by the user to provide such constraints, we present an
information-theoretic criterion that selects the triplet whose answer leads to
the highest expected gain in information about the classes of a set of
examples. Directly applying the proposed criterion requires examining $O(n^3)$
triplets with $n$ instances, which is prohibitive even for datasets of moderate
size. We show that a randomized selection strategy can be used to reduce the
selection pool from $O(n^3)$ to $O(n)$, allowing us to scale up to larger-size
problems. Experiments show that the proposed method consistently outperforms
two baseline policies.
| Sicheng Xiong, R\'omer Rosales, Yuanli Pei, Xiaoli Z. Fern | null | 1409.4155 | null | null |
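The randomized pool reduction can be sketched as follows: instead of scoring all O(n^3) triplets, sample a linear number of random triplets and apply the information-theoretic criterion only to those. The scoring function below is a placeholder for that criterion; names are illustrative.

    import random

    def sample_triplet_pool(n, n_candidates):
        # Randomized reduction of the O(n^3) triplet pool to O(n) candidates.
        return [tuple(random.sample(range(n), 3)) for _ in range(n_candidates)]

    def select_query(n, info_gain, n_candidates):
        # info_gain is assumed to implement the expected-information criterion.
        pool = sample_triplet_pool(n, n_candidates)
        return max(pool, key=info_gain)  # query the most informative triplet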
Machine learning for ultrafast X-ray diffraction patterns on large-scale
GPU clusters | q-bio.BM cs.DC cs.LG physics.bio-ph q-bio.QM | The classical method of determining the atomic structure of complex molecules
by analyzing diffraction patterns is currently undergoing drastic developments.
Modern techniques for producing extremely bright and coherent X-ray lasers
allow a beam of streaming particles to be intercepted and hit by an ultrashort
high energy X-ray beam. Through machine learning methods the data thus
collected can be transformed into a three-dimensional volumetric intensity map
of the particle itself. The computational complexity associated with this
problem is so high that clusters of data-parallel accelerators are
required.
We have implemented a distributed and highly efficient algorithm for
inversion of large collections of diffraction patterns targeting clusters of
hundreds of GPUs. With the expected enormous amount of diffraction data to be
produced in the foreseeable future, this is the required scale to approach real
time processing of data at the beam site. Using both real and synthetic data we
look at the scaling properties of the application and discuss the overall
computational viability of this exciting and novel imaging technique.
| Tomas Ekeberg, Stefan Engblom, and Jing Liu | 10.1177/1094342015572030 | 1409.4256 | null | null |
The Ordered Weighted $\ell_1$ Norm: Atomic Formulation, Projections, and
Algorithms | cs.DS cs.CV cs.IT cs.LG math.IT | The ordered weighted $\ell_1$ norm (OWL) was recently proposed, with two
different motivations: its good statistical properties as a sparsity-promoting
regularizer, and the fact that it generalizes the so-called {\it octagonal
shrinkage and clustering algorithm for regression} (OSCAR), which has the
ability to cluster/group regression variables that are highly correlated. This
paper contains several contributions to the study and application of OWL
regularization: the derivation of the atomic formulation of the OWL norm; the
derivation of the dual of the OWL norm, based on its atomic formulation; a new
and simpler derivation of the proximity operator of the OWL norm; an efficient
scheme to compute the Euclidean projection onto an OWL ball; the instantiation
of the conditional gradient (CG, also known as Frank-Wolfe) algorithm for
linear regression problems under OWL regularization; the instantiation of
accelerated projected gradient algorithms for the same class of problems.
Finally, a set of experiments gives evidence that accelerated projected gradient
algorithms are considerably faster than CG, for the class of problems
considered.
| Xiangrong Zeng, and M\'ario A. T. Figueiredo | null | 1409.4271 | null | null |
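For flavor, one known construction of the OWL proximity operator (shared with the SLOPE literature) reduces to a sort followed by isotonic regression; the sketch below assumes the weights w are nonnegative and nonincreasing, and uses scikit-learn's pool-adjacent-violators implementation. It illustrates the kind of object derived in the paper rather than reproducing its derivations.

    import numpy as np
    from sklearn.isotonic import isotonic_regression

    def prox_owl(v, w):
        # Proximity operator of the OWL norm with weights w (nonincreasing,
        # nonnegative): sort |v| in decreasing order, shift by w, project onto
        # the nonincreasing nonnegative cone via PAV, then undo the sort.
        order = np.argsort(np.abs(v))[::-1]
        u = np.abs(v)[order]
        x = isotonic_regression(u - w, increasing=False)
        x = np.maximum(x, 0.0)
        out = np.empty_like(v)
        out[order] = x
        return np.sign(v) * out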
A Fast Quartet Tree Heuristic for Hierarchical Clustering | cs.LG cs.CE cs.DS | The Minimum Quartet Tree Cost problem is to construct an optimal weight tree
from the $3{n \choose 4}$ weighted quartet topologies on $n$ objects, where
optimality means that the summed weight of the embedded quartet topologies is
optimal (so it can be the case that the optimal tree embeds all quartets as
nonoptimal topologies). We present a Monte Carlo heuristic, based on randomized
hill climbing, for approximating the optimal weight tree, given the quartet
topology weights. The method repeatedly transforms a dendrogram, with all
objects involved as leaves, achieving a monotonic approximation to the exact
single globally optimal tree. The problem and the solution heuristic have been
extensively used for general hierarchical clustering of nontree-like
(non-phylogeny) data in various domains and across domains with heterogeneous
data. We also present a greatly improved heuristic, reducing the running time
by a factor of order a thousand to ten thousand. All this is implemented and
available, as part of the CompLearn package. We compare performance and running
time of the original and improved versions with those of UPGMA, BioNJ, and NJ,
as implemented in the SplitsTree package on genomic data for which the latter
are optimized.
Keywords: Data and knowledge visualization, Pattern
matching--Clustering--Algorithms/Similarity measures, Hierarchical clustering,
Global optimization, Quartet tree, Randomized hill-climbing,
| Rudi L. Cilibrasi (CWI, Amsterdam) and Paul M.B. Vitanyi (CWI and
University of Amsterdam) | null | 1409.4276 | null | null |
Computing the Stereo Matching Cost with a Convolutional Neural Network | cs.CV cs.LG cs.NE | We present a method for extracting depth information from a rectified image
pair. We train a convolutional neural network to predict how well two image
patches match and use it to compute the stereo matching cost. The cost is
refined by cross-based cost aggregation and semiglobal matching, followed by a
left-right consistency check to eliminate errors in the occluded regions. Our
stereo method achieves an error rate of 2.61% on the KITTI stereo dataset and
is currently (August 2014) the top performing method on this dataset.
| Jure \v{Z}bontar and Yann LeCun | 10.1109/CVPR.2015.7298767 | 1409.4326 | null | null |
Multivariate Comparison of Classification Algorithms | stat.ML cs.LG | Statistical tests that compare classification algorithms are univariate and
use a single performance measure, e.g., misclassification error, $F$ measure,
AUC, and so on. In multivariate tests, comparison is done using multiple
measures simultaneously. For example, error is the sum of false positives and
false negatives and a univariate test on error cannot make a distinction
between these two sources, but a 2-variate test can. Similarly, instead of
combining precision and recall in $F$ measure, we can have a 2-variate test on
(precision, recall). We use Hotelling's multivariate $T^2$ test for comparing
two algorithms, and when we have three or more algorithms we use the
multivariate analysis of variance (MANOVA) followed by pairwise post hoc tests.
In our experiments, we see that multivariate tests have higher power than
univariate tests, that is, they can detect differences that univariate tests
cannot. We also discuss how multivariate analysis allows us to automatically
extract performance measures that best distinguish the behavior of multiple
algorithms.
| Olcay Taner Yildiz, Ethem Alpaydin | null | 1409.4566 | null | null |
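As a concrete instance of the 2-variate comparison, a one-sample Hotelling T^2 test on paired per-fold (precision, recall) differences between two algorithms can be sketched as below; the per-fold pairing and the standard F approximation are assumptions about the protocol, and the fold count must exceed the number of measures.

    import numpy as np
    from scipy import stats

    def hotelling_t2_paired(a, b):
        # a, b: (n_folds, p) arrays of per-fold measures, e.g. (precision,
        # recall), for two algorithms; tests whether the mean paired
        # difference is the zero vector.
        d = a - b
        n, p = d.shape
        mean = d.mean(axis=0)
        cov = np.cov(d, rowvar=False)
        t2 = n * mean @ np.linalg.solve(cov, mean)
        f_stat = (n - p) / (p * (n - 1)) * t2
        return t2, stats.f.sf(f_stat, p, n - p)  # statistic and p-value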
Compute Less to Get More: Using ORC to Improve Sparse Filtering | cs.CV cs.LG | Sparse Filtering is a popular feature learning algorithm for image
classification pipelines. In this paper, we connect the performance of Sparse
Filtering with spectral properties of the corresponding feature matrices. This
connection provides new insights into Sparse Filtering; in particular, it
suggests early stopping of Sparse Filtering. We therefore introduce the Optimal
Roundness Criterion (ORC), a novel stopping criterion for Sparse Filtering. We
show that this stopping criterion is related to pre-processing procedures
such as Statistical Whitening and demonstrate that it can make image
classification with Sparse Filtering considerably faster and more accurate.
| Johannes Lederer and Sergio Guadarrama | null | 1409.4689 | null | null |
A Mixtures-of-Experts Framework for Multi-Label Classification | cs.LG | We develop a novel probabilistic approach for multi-label classification that
is based on the mixtures-of-experts architecture combined with recently
introduced conditional tree-structured Bayesian networks. Our approach captures
different input-output relations from multi-label data using the efficient
tree-structured classifiers, while the mixtures-of-experts architecture aims to
compensate for the tree-structured restrictions and build a more accurate
model. We develop and present algorithms for learning the model from data and
for performing multi-label predictions on future data instances. Experiments on
multiple benchmark datasets demonstrate that our approach achieves highly
competitive results and outperforms the existing state-of-the-art multi-label
classification methods.
| Charmgil Hong, Iyad Batal, Milos Hauskrecht | null | 1409.4698 | null | null |
Anomaly Detection Based on Indicators Aggregation | stat.ML cs.LG | Automatic anomaly detection is a major issue in various areas. Beyond mere
detection, the identification of the source of the problem that produced the
anomaly is also essential. This is particularly the case in aircraft engine
health monitoring where detecting early signs of failure (anomalies) and
helping the engine owner to implement efficiently the adapted maintenance
operations (fixing the source of the anomaly) are of crucial importance to
reduce the costs attached to unscheduled maintenance. This paper introduces a
general methodology that aims at classifying monitoring signals into normal
ones and several classes of abnormal ones. The main idea is to leverage expert
knowledge by generating a very large number of binary indicators. Each
indicator corresponds to a fully parametrized anomaly detector built from
parametric anomaly scores designed by experts. A feature selection method is
used to keep only the most discriminant indicators which are used at inputs of
a Naive Bayes classifier. This gives an interpretable classifier based on
interpretable anomaly detectors whose parameters have been optimized indirectly
by the selection process. The proposed methodology is evaluated on simulated
data designed to reproduce some of the anomaly types observed in real world
engines.
| Tsirizo Rabenoro (SAMM), J\'er\^ome Lacaille, Marie Cottrell (SAMM),
Fabrice Rossi (SAMM) | 10.1109/IJCNN.2014.6889841 | 1409.4747 | null | null |
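The indicators-then-classifier pipeline can be sketched with off-the-shelf components; the selector, the number of kept indicators and the Bernoulli likelihood are illustrative stand-ins for the paper's specific feature selection and Naive Bayes choices. X is assumed to hold the expert-derived binary anomaly indicators and y the signal class.

    from sklearn.pipeline import Pipeline
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.naive_bayes import BernoulliNB

    # Keep only the most discriminant binary indicators, then classify.
    model = Pipeline([
        ("select", SelectKBest(mutual_info_classif, k=50)),
        ("classify", BernoulliNB()),
    ])
    # model.fit(X, y) with X: binary indicator matrix, y: signal class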
Collapsed Variational Bayes Inference of Infinite Relational Model | cs.LG stat.ML | The Infinite Relational Model (IRM) is a probabilistic model for relational
data clustering that partitions objects into clusters based on observed
relationships. This paper presents Averaged CVB (ACVB) solutions for IRM,
convergence-guaranteed and practically useful fast Collapsed Variational Bayes
(CVB) inferences. We first derive ordinary CVB and CVB0 for IRM based on the
lower bound maximization. CVB solutions yield deterministic iterative
procedures for inferring IRM given the truncated number of clusters. Our
proposal includes CVB0 updates of hyperparameters including the concentration
parameter of the Dirichlet Process, which has not been studied in the
literature. To make the CVB more practically useful, we further study the CVB
inference in two aspects. First, we study the convergence issues and develop a
convergence-guaranteed algorithm for any CVB-based inference, called ACVB,
which enables automatic convergence detection and frees non-expert
practitioners from difficult and costly manual monitoring of inference
processes. Second, we present a few techniques for speeding up IRM inferences.
In particular, we describe the linear-time inference of CVB0, allowing the IRM
to be applied to larger relational data. The ACVB solutions of IRM showed comparable or
better performance compared to existing inference methods in experiments, and
provide deterministic, faster, and easier convergence detection.
| Katsuhiko Ishiguro, Issei Sato, Naonori Ueda | null | 1409.4757 | null | null |
Taking into Account the Differences between Actively and Passively
Acquired Data: The Case of Active Learning with Support Vector Machines for
Imbalanced Datasets | cs.LG cs.CL stat.ML | Actively sampled data can have very different characteristics than passively
sampled data. Therefore, it is promising to investigate using different
inference procedures during AL than are used during passive learning (PL). This
general idea is explored in detail for the focused case of AL with
cost-weighted SVMs for imbalanced data, a situation that arises for many HLT
tasks. The key idea behind the proposed InitPA method for addressing imbalance
is to base cost models during AL on an estimate of overall corpus imbalance
computed via a small unbiased sample rather than the imbalance in the labeled
training data, which is the leading method used during PL.
| Michael Bloodgood and K. Vijay-Shanker | null | 1409.4835 | null | null |
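The key idea, basing the cost model on an imbalance estimate from a small unbiased sample rather than on the actively sampled labels, might be sketched as follows; the binary labels, estimator and weighting rule are illustrative, not the exact InitPA specification.

    from sklearn.svm import SVC

    def fit_initpa_style(X_labeled, y_labeled, y_unbiased_sample):
        # Estimate the overall corpus positive rate from a small unbiased
        # random sample, not from the (biased) actively selected labels.
        p_hat = y_unbiased_sample.mean()
        clf = SVC(class_weight={1: (1.0 - p_hat) / p_hat, 0: 1.0})
        return clf.fit(X_labeled, y_labeled)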
Statistical inference with probabilistic graphical models | cs.LG stat.ML | These are notes from the lecture of Devavrat Shah given at the autumn school
"Statistical Physics, Optimization, Inference, and Message-Passing Algorithms",
that took place in Les Houches, France from Monday September 30th, 2013, till
Friday October 11th, 2013. The school was organized by Florent Krzakala from
UPMC & ENS Paris, Federico Ricci-Tersenghi from La Sapienza Roma, Lenka
Zdeborova from CEA Saclay & CNRS, and Riccardo Zecchina from Politecnico
Torino. This lecture of Devavrat Shah (MIT) covers the basics of inference and
learning. It explains how inference problems are represented within structures
known as graphical models. The theoretical basis of the belief propagation
algorithm is then explained and derived. This lecture sets the stage for
generalizations and applications of message passing algorithms.
| Ang\'elique Dr\'emeau, Christophe Sch\"ulke, Yingying Xu, Devavrat
Shah | null | 1409.4928 | null | null |
Ensembles of Random Sphere Cover Classifiers | cs.LG cs.AI stat.ML | We propose and evaluate alternative ensemble schemes for a new instance based
learning classifier, the Randomised Sphere Cover (RSC) classifier. RSC fuses
instances into spheres, then bases classification on distance to spheres rather
than distance to instances. The randomised nature of RSC makes it ideal for use
in ensembles. We propose two ensemble methods tailored to the RSC classifier;
$\alpha \beta$RSE, an ensemble based on instance resampling and $\alpha$RSSE, a
subspace ensemble. We compare $\alpha \beta$RSE and $\alpha$RSSE to tree-based
ensembles on a set of UCI datasets and demonstrate that RSC ensembles perform
significantly better than some of these ensembles, and not significantly worse
than the others. We demonstrate via a case study on six gene expression data
sets that $\alpha$RSSE can outperform other subspace ensemble methods on high
dimensional data when used in conjunction with an attribute filter. Finally, we
perform a set of Bias/Variance decomposition experiments to analyse the source
of improvement in comparison to a base classifier.
| Anthony Bagnall and Reda Younsi | null | 1409.4936 | null | null |
An Agent-Based Algorithm exploiting Multiple Local Dissimilarities for
Clusters Mining and Knowledge Discovery | cs.LG cs.DC cs.MA | We propose a multi-agent algorithm able to automatically discover relevant
regularities in a given dataset, determining at the same time the set of
configurations of the adopted parametric dissimilarity measure yielding compact
and separated clusters. Each agent operates independently by performing a
Markovian random walk on a suitable weighted graph representation of the input
dataset. Such a weighted graph representation is induced by the specific
parameter configuration of the dissimilarity measure adopted by the agent,
which searches and takes decisions autonomously for one cluster at a time.
Results show that the algorithm is able to discover parameter configurations
that yield a consistent and interpretable collection of clusters. Moreover, we
demonstrate that our algorithm achieves performance comparable to other similar
state-of-the-art algorithms when facing specific clustering problems.
| Filippo Maria Bianchi, Enrico Maiorino, Lorenzo Livi, Antonello Rizzi
and Alireza Sadeghian | 10.1007/s00500-015-1876-1 | 1409.4988 | null | null |
Predictive Capacity of Meteorological Data - Will it rain tomorrow | cs.LG | With the availability of high precision digital sensors and cheap storage
media, it is not uncommon to find large amounts of data collected on almost
all measurable attributes, both in nature and man-made habitats. Weather in
particular has been an area of keen interest for researchers to develop more
accurate and reliable prediction models. This paper presents a set of
experiments which involve the use of prevalent machine learning techniques to
build models to predict the day of the week given the weather data for that
particular day, i.e., temperature, wind, rain, etc., and test their reliability
across four cities in Australia {Brisbane, Adelaide, Perth, Hobart}. The
results provide a comparison of accuracy of these machine learning techniques
and their reliability to predict the day of the week by analysing the weather
data. We then apply the models to predict weather conditions based on the
available data.
| Bilal Ahmed | null | 1409.5079 | null | null |
A Method for Stopping Active Learning Based on Stabilizing Predictions
and the Need for User-Adjustable Stopping | cs.LG cs.CL stat.ML | A survey of existing methods for stopping active learning (AL) reveals the
needs for methods that are: more widely applicable; more aggressive in saving
annotations; and more stable across changing datasets. A new method for
stopping AL based on stabilizing predictions is presented that addresses these
needs. Furthermore, stopping methods are required to handle a broad range of
different annotation/performance tradeoff valuations. Despite this, the
existing body of work is dominated by conservative methods with little (if any)
attention paid to providing users with control over the behavior of stopping
methods. The proposed method is shown to fill a gap in the level of
aggressiveness available for stopping AL and supports providing users with
control over stopping behavior.
| Michael Bloodgood and K. Vijay-Shanker | null | 1409.5165 | null | null |
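One way to read the stabilizing-predictions idea is to monitor agreement between successive models' predictions on a fixed stop set and halt once agreement is high; the Kappa statistic and the threshold below are illustrative choices, with the threshold serving as the user-adjustable aggressiveness knob.

    from sklearn.metrics import cohen_kappa_score

    def should_stop(prev_preds, curr_preds, threshold=0.99):
        # prev_preds / curr_preds: successive models' labels on a fixed
        # stop set; a lower threshold stops earlier (more aggressively).
        return cohen_kappa_score(prev_preds, curr_preds) >= threshold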
Deeply-Supervised Nets | stat.ML cs.CV cs.LG cs.NE | Our proposed deeply-supervised nets (DSN) method simultaneously minimizes
classification error while making the learning process of hidden layers direct
and transparent. We make an attempt to boost the classification performance by
studying a new formulation in deep networks. Three aspects in convolutional
neural networks (CNN) style architectures are being looked at: (1) transparency
of the intermediate layers to the overall classification; (2)
discriminativeness and robustness of learned features, especially in the early
layers; (3) effectiveness of training in the presence of exploding and
vanishing gradients. We introduce a "companion objective" for the individual
hidden layers, in addition to the overall objective at the output layer (a
strategy different from layer-wise pre-training). We extend techniques from
stochastic gradient methods to analyze our algorithm. The advantage of our
method is evident and our experimental result on benchmark datasets shows
significant performance gain over existing methods (e.g. all state-of-the-art
results on MNIST, CIFAR-10, CIFAR-100, and SVHN).
| Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen
Tu | null | 1409.5185 | null | null |
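The companion-objective idea can be sketched as adding per-layer classification losses to the output loss; which layers are tapped, the weights and the criterion are illustrative assumptions rather than the exact DSN formulation.

    import torch

    criterion = torch.nn.CrossEntropyLoss()

    def deeply_supervised_loss(output, aux_outputs, target, alphas):
        # Output loss plus weighted companion losses at the hidden layers;
        # aux_outputs are class scores read off intermediate layers.
        loss = criterion(output, target)
        for alpha, aux in zip(alphas, aux_outputs):
            loss = loss + alpha * criterion(aux, target)
        return loss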
Pedestrian Detection with Spatially Pooled Features and Structured
Ensemble Learning | cs.CV cs.LG | Many typical applications of object detection operate within a prescribed
false-positive range. In this situation the performance of a detector should be
assessed on the basis of the area under the ROC curve over that range, rather
than over the full curve, as the performance outside the range is irrelevant.
This measure is labelled as the partial area under the ROC curve (pAUC). We
propose a novel ensemble learning method which achieves a maximal detection
rate at a user-defined range of false positive rates by directly optimizing the
partial AUC using structured learning.
In order to achieve a high object detection performance, we propose a new
approach to extract low-level visual features based on spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. Experimental results on both synthetic
and real-world data sets demonstrate the effectiveness of our approach, and we
show that it is possible to train state-of-the-art pedestrian detectors using
the proposed structured ensemble learning method with spatially pooled
features. The result is the current best reported performance on the
Caltech-USA pedestrian detection dataset.
| Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel | null | 1409.5209 | null | null |
Learning and approximation capability of orthogonal super greedy
algorithm | cs.LG | We consider the approximation capability of orthogonal super greedy
algorithms (OSGA) and its applications in supervised learning. OSGA is
concerned with selecting more than one atoms in each iteration step, which, of
course, greatly reduces the computational burden when compared with the
conventional orthogonal greedy algorithm (OGA). We prove that even for function
classes that are not the convex hull of the dictionary, OSGA does not degrade
the approximation capability of OGA provided the dictionary is incoherent.
Based on this, we deduce a tight generalization error bound for OSGA learning.
Our results show that in the realm of supervised learning, OSGA provides a
possibility to further reduce the computational burden of OGA while
maintaining its prominent generalization capability.
| Jian Fang, Shaobo Lin, Zongben Xu | null | 1409.5330 | null | null |
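A minimal sketch of the super greedy selection (s atoms per step, then an orthogonal refit); variable names are illustrative, and s = 1 recovers plain OGA/OMP.

    import numpy as np

    def osga(D, y, s, n_iter):
        # D: (n x m) dictionary with unit-norm columns; pick the s atoms most
        # correlated with the residual, refit by least squares, repeat.
        idx, r = [], y.copy()
        for _ in range(n_iter):
            corr = np.abs(D.T @ r)
            corr[idx] = -np.inf                 # exclude already chosen atoms
            idx.extend(np.argsort(corr)[-s:].tolist())
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            r = y - D[:, idx] @ coef
        return idx, coef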
SAME but Different: Fast and High-Quality Gibbs Parameter Estimation | cs.LG stat.ML | Gibbs sampling is a workhorse for Bayesian inference but has several
limitations when used for parameter estimation, and is often much slower than
non-sampling inference methods. SAME (State Augmentation for Marginal
Estimation) \cite{Doucet99,Doucet02} is an approach to MAP parameter estimation
which gives improved parameter estimates over direct Gibbs sampling. SAME can
be viewed as cooling the posterior parameter distribution and allows annealed
search for the MAP parameters, often yielding very high quality (lower loss)
estimates. But it does so at the expense of additional samples per iteration
and generally slower performance. On the other hand, SAME dramatically
increases the parallelism in the sampling schedule, and is an excellent match
for modern (SIMD) hardware. In this paper we explore the application of SAME to
graphical model inference on modern hardware. We show that combining SAME with
factored sample representation (or approximation) gives throughput competitive
with the fastest symbolic methods, but with potentially better quality. We
describe experiments on Latent Dirichlet Allocation, achieving speeds similar
to the fastest reported methods (online Variational Bayes) and lower
cross-validated loss than other LDA implementations. The method is simple to
implement and should be applicable to many other models.
| Huasha Zhao and Biye Jiang and John Canny | null | 1409.5402 | null | null |
Efficient Feature Group Sequencing for Anytime Linear Prediction | cs.LG | We consider \textit{anytime} linear prediction in the common machine learning
setting, where features are in groups that have costs. We achieve anytime (or
interruptible) predictions by sequencing the computation of feature groups and
reporting results using the computed features at interruption. We extend
Orthogonal Matching Pursuit (OMP) and Forward Regression (FR) to learn the
sequencing greedily under this group setting with costs. We theoretically
guarantee that our algorithms achieve near-optimal linear predictions at each
budget when a feature group is chosen. With a novel analysis of OMP, we improve
its theoretical bound to the same strength as that of FR. In addition, we
develop a novel algorithm that consumes cost $4B$ to approximate the optimal
performance of \textit{any} cost $B$, and prove that with cost less than $4B$,
such an approximation is impossible. To our knowledge, these are the first
anytime bounds at \textit{all} budgets. We test our algorithms on two
real-world data-sets and evaluate them in terms of anytime linear prediction
performance against cost-weighted Group Lasso and alternative greedy
algorithms.
| Hanzhang Hu, Alexander Grubb, J. Andrew Bagnell, Martial Hebert | null | 1409.5495 | null | null |
A Survey on Soft Subspace Clustering | cs.LG | Subspace clustering (SC) is a promising clustering technology to identify
clusters based on their associations with subspaces in high dimensional spaces.
SC can be classified into hard subspace clustering (HSC) and soft subspace
clustering (SSC). While HSC algorithms have been extensively studied and well
accepted by the scientific community, SSC algorithms are relatively new but
gaining more attention in recent years due to better adaptability. In this
paper, a comprehensive survey of existing SSC algorithms and recent
developments is presented. The SSC algorithms are classified systematically
into three main categories, namely, conventional SSC (CSSC), independent SSC
(ISSC) and extended SSC (XSSC). The characteristics of these algorithms are
highlighted and the potential future development of SSC is also discussed.
| Zhaohong Deng, Kup-Sze Choi, Yizhang Jiang, Jun Wang, Shitong Wang | 10.1016/j.ins.2016.01.101 | 1409.5616 | null | null |
A Formal Methods Approach to Pattern Synthesis in Reaction Diffusion
Systems | cs.AI cs.CE cs.LG cs.LO cs.SY | We propose a technique to detect and generate patterns in a network of
locally interacting dynamical systems. Central to our approach is a novel
spatial superposition logic, whose semantics is defined over the quad-tree of a
partitioned image. We show that formulas in this logic can be efficiently
learned from positive and negative examples of several types of patterns. We
also demonstrate that pattern detection, which is implemented as a model
checking algorithm, performs very well for test data sets different from the
learning sets. We define a quantitative semantics for the logic and integrate
the model checking algorithm with particle swarm optimization in a
computational framework for synthesis of parameters leading to desired patterns
in reaction-diffusion systems.
| Ebru Aydin Gol and Ezio Bartocci and Calin Belta | null | 1409.5671 | null | null |
Transfer Prototype-based Fuzzy Clustering | cs.LG | The traditional prototype based clustering methods, such as the well-known
fuzzy c-means (FCM) algorithm, usually need sufficient data to find a good
clustering partition. If the available data is limited or scarce, most of the
existing prototype based clustering algorithms will no longer be effective.
While the data for the current clustering task may be scarce, there is usually
some useful knowledge available in the related scenes/domains. In this study,
the concept of transfer learning is applied to prototype based fuzzy clustering
(PFC). Specifically, the idea of leveraging knowledge from the source domain is
exploited to develop a set of transfer prototype based fuzzy clustering (TPFC)
algorithms. Three prototype based fuzzy clustering algorithms, namely, FCM,
fuzzy k-plane clustering (FKPC) and fuzzy subspace clustering (FSC), have been
chosen to incorporate with knowledge leveraging mechanism to develop the
corresponding transfer clustering algorithms. Novel objective functions are
proposed to integrate the knowledge of source domain with the data of target
domain for clustering in the target domain. The proposed algorithms have been
validated on different synthetic and real-world datasets and the results
demonstrate their effectiveness when compared with both the original prototype
based fuzzy clustering algorithms and the related clustering algorithms like
multi-task clustering and co-clustering.
| Zhaohong Deng, Yizhang Jiang, Fu-Lai Chung, Hisao Ishibuchi, Kup-Sze
Choi, Shitong Wang | 10.1109/TFUZZ.2015.2505330 | 1409.5686 | null | null |
Distributed Machine Learning via Sufficient Factor Broadcasting | cs.LG cs.DC | Matrix-parametrized models, including multiclass logistic regression and
sparse coding, are used in machine learning (ML) applications ranging from
computer vision to computational biology. When these models are applied to
large-scale ML problems starting at millions of samples and tens of thousands
of classes, their parameter matrix can grow at an unexpected rate, resulting in
high parameter synchronization costs that greatly slow down distributed
learning. To address this issue, we propose a Sufficient Factor Broadcasting
(SFB) computation model for efficient distributed learning of a large family of
matrix-parameterized models, which share the following property: the parameter
update computed on each data sample is a rank-1 matrix, i.e., the outer product
of two "sufficient factors" (SFs). By broadcasting the SFs among worker
machines and reconstructing the update matrices locally at each worker, SFB
improves communication efficiency --- communication costs are linear in the
parameter matrix's dimensions, rather than quadratic --- without affecting
computational correctness. We present a theoretical convergence analysis of
SFB, and empirically corroborate its efficiency on four different
matrix-parametrized ML models.
| Pengtao Xie, Jin Kyu Kim, Yi Zhou, Qirong Ho, Abhimanu Kumar, Yaoliang
Yu, Eric Xing | null | 1409.5705 | null | null |
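For multiclass logistic regression the rank-1 structure is easy to see: the per-sample gradient is the outer product of the softmax error vector and the input. A sketch, with the broadcast step left abstract:

    import numpy as np

    def sufficient_factors(W, x, label):
        # Per-sample gradient of softmax cross-entropy w.r.t. W is
        # outer(u, v) with u the class-error vector and v the input, so a
        # worker can broadcast (u, v) instead of the full matrix.
        scores = W @ x
        p = np.exp(scores - scores.max())
        p /= p.sum()
        p[label] -= 1.0
        return p, x

    # receiver side: reconstruct the update locally from the factors
    # W -= lr * np.outer(u, v)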
Convolutional Neural Networks over Tree Structures for Programming
Language Processing | cs.LG cs.NE cs.SE | Programming language processing (similar to natural language processing) is a
hot research topic in the field of software engineering; it has also aroused
growing interest in the artificial intelligence community. However, different
from a natural language sentence, a program contains rich, explicit, and
complicated structural information. Hence, traditional NLP models may be
inappropriate for programs. In this paper, we propose a novel tree-based
convolutional neural network (TBCNN) for programming language processing, in
which a convolution kernel is designed over programs' abstract syntax trees to
capture structural information. TBCNN is a generic architecture for programming
language processing; our experiments show its effectiveness in two different
program analysis tasks: classifying programs according to functionality, and
detecting code snippets of certain patterns. TBCNN outperforms baseline
methods, including several neural models for NLP.
| Lili Mou, Ge Li, Lu Zhang, Tao Wang, Zhi Jin | null | 1409.5718 | null | null |
Neural Hypernetwork Approach for Pulmonary Embolism diagnosis | physics.med-ph cs.LG physics.data-an q-bio.QM stat.ML | This work introduces an integrative approach based on Q-analysis with machine
learning. The new approach, called Neural Hypernetwork, has been applied to a
case study of pulmonary embolism diagnosis. The objective of applying the
neural hypernetwork to pulmonary embolism (PE) is to improve diagnosis and
reduce the number of CT angiographies needed. Hypernetworks, based on
topological simplicial complexes, generalize the concept of a two-body relation
to many-body relations. Furthermore, hypernetworks provide a significant
generalization of network theory, enabling the integration of relational
structure, logic and analytic dynamics. Another important result is that
Q-analysis stays close to the data, while other approaches manipulate data,
projecting them into metric spaces or applying some filtering functions to
highlight the intrinsic relations. A pulmonary embolism (PE) is a blockage of
the main artery of the lung or one of its branches, frequently fatal. Our study
uses data on 28 diagnostic features of 1,427 people considered to be at risk of
PE. The resulting neural hypernetwork correctly recognized 94% of those
developing a PE. This is better than previous results that have been obtained
with other methods (statistical selection of features, partial least squares
regression, topological data analysis in a metric space).
| Matteo Rucco, David M. S. Rodrigues, Emanuela Merelli, Jeffrey H.
Johnson, Lorenzo Falsetti, Cinzia Nitti and Aldo Salvi | null | 1409.5743 | null | null |
Attributes for Causal Inference in Longitudinal Observational Databases | cs.CE cs.LG | The pharmaceutical industry is plagued by the problem of side effects that
can occur anytime a prescribed medication is ingested. There has been a recent
interest in using the vast quantities of medical data available in longitudinal
observational databases to identify causal relationships between drugs and
medical events. Unfortunately the majority of existing post marketing
surveillance algorithms measure how dependent or associated an event is on the
presence of a drug rather than measuring causality. In this paper we
investigate potential attributes that can be used in causal inference to
identify side effects based on the Bradford-Hill causality criteria. Potential
attributes are developed by considering five of the causality criteria and
feature selection is applied to identify the most suitable of these attributes
for detecting side effects. We found that attributes based on the specificity
criterion may improve side effect signalling algorithms but the experiment and
dosage criteria attributes investigated in this paper did not offer sufficient
additional information.
| Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin and Daniele Soria,
Jack E. Gibson and Richard B. Hubbard | null | 1409.5774 | null | null |
Tight Error Bounds for Structured Prediction | cs.LG cs.DS stat.ML | Structured prediction tasks in machine learning involve the simultaneous
prediction of multiple labels. This is typically done by maximizing a score
function on the space of labels, which decomposes as a sum of pairwise
elements, each depending on two specific labels. Intuitively, the more pairwise
terms are used, the better the expected accuracy. However, there is currently
no theoretical account of this intuition. This paper takes a significant step
in this direction.
We formulate the problem as classifying the vertices of a known graph
$G=(V,E)$, where the vertices and edges of the graph are labelled and correlate
semi-randomly with the ground truth. We show that the prospects for achieving
low expected Hamming error depend on the structure of the graph $G$ in
interesting ways. For example, if $G$ is a very poor expander, like a path,
then large expected Hamming error is inevitable. Our main positive result shows
that, for a wide class of graphs including 2D grid graphs common in machine
vision applications, there is a polynomial-time algorithm with small and
information-theoretically near-optimal expected error. Our results provide a
first step toward a theoretical justification for the empirical success of the
efficient approximate inference algorithms that are used for structured
prediction in models where exact inference is intractable.
| Amir Globerson and Tim Roughgarden and David Sontag and Cafer Yildirim | null | 1409.5834 | null | null |
Capturing "attrition intensifying" structural traits from didactic
interaction sequences of MOOC learners | cs.CY cs.LG cs.SI | This work is an attempt to discover hidden structural configurations in
learning activity sequences of students in Massive Open Online Courses (MOOCs).
Leveraging combined representations of video clickstream interactions and forum
activities, we seek to fundamentally understand traits that are predictive of
decreasing engagement over time. Grounded in the interdisciplinary field of
network science, we follow a graph-based approach to successfully extract
indicators of active and passive MOOC participation that reflect persistence
and regularity in the overall interaction footprint. Using these rich
educational semantics, we focus on the problem of predicting student attrition,
one of the major highlights of MOOC literature in the recent years. Our results
indicate an improvement over a baseline n-gram-based approach in capturing
"attrition intensifying" features from the learning activities that MOOC
learners engage in. Implications for some compelling future research are
discussed.
| Tanmay Sinha, Nan Li, Patrick Jermann, Pierre Dillenbourg | null | 1409.5887 | null | null |
Distributed Robust Learning | stat.ML cs.LG | We propose a framework for distributed robust statistical learning on {\em
big contaminated data}. The Distributed Robust Learning (DRL) framework can
reduce the computational time of traditional robust learning methods by several
orders of magnitude. We analyze the robustness property of DRL, showing that
DRL not only preserves the robustness of the base robust learning method, but
also tolerates contaminations on a constant fraction of results from computing
nodes (node failures). More precisely, even in presence of the most adversarial
outlier distribution over computing nodes, DRL still achieves a breakdown point
of at least $ \lambda^*/2 $, where $ \lambda^* $ is the breakdown point of the
corresponding centralized algorithm. This is in stark contrast with the naive
division-and-averaging implementation, which may reduce the breakdown point by
a factor of $ k $ when $ k $ computing nodes are used. We then specialize the
DRL framework for two concrete cases: distributed robust principal component
analysis and distributed robust regression. We demonstrate the efficiency and
the robustness advantages of DRL through comprehensive simulations and
predicting image tags on a large-scale image set.
| Jiashi Feng, Huan Xu, Shie Mannor | null | 1409.5937 | null | null |
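In this divide-then-aggregate spirit, one simple robust aggregation rule that tolerates contaminated node outputs is a coordinate-wise median; this is shown purely to illustrate why robust aggregation beats naive averaging, and may differ from the paper's exact DRL aggregation step.

    import numpy as np

    def robust_aggregate(node_estimates):
        # node_estimates: list of k parameter vectors, one per computing
        # node, each produced by a robust base learner on that node's data
        # shard; the coordinate-wise median tolerates a constant fraction
        # of corrupted node outputs, unlike the plain average.
        return np.median(np.stack(node_estimates), axis=0)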
Domain Adaptive Neural Networks for Object Recognition | cs.CV cs.AI cs.LG cs.NE stat.ML | We propose a simple neural network model to deal with the domain adaptation
problem in object recognition. Our model incorporates the Maximum Mean
Discrepancy (MMD) measure as a regularization in the supervised learning to
reduce the distribution mismatch between the source and target domains in the
latent space. From experiments, we demonstrate that the MMD regularization is
an effective tool to provide good domain adaptation models on both SURF
features and raw image pixels of a particular image data set. We also show that
our proposed model, preceded by the denoising auto-encoder pretraining,
achieves better performance than recent benchmark models on the same data sets.
This work represents the first study of MMD measure in the context of neural
networks.
| Muhammad Ghifary and W. Bastiaan Kleijn and Mengjie Zhang | null | 1409.6041 | null | null |
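A biased RBF-kernel estimate of squared MMD between source and target hidden representations, used as a regularizer alongside the task loss, can be sketched as below; the bandwidth and the regularization weight are illustrative assumptions.

    import torch

    def mmd_rbf(x, y, sigma=1.0):
        # Biased estimate of squared MMD with an RBF kernel of bandwidth
        # sigma, between two batches of representations x and y.
        def k(a, b):
            return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    # total_loss = task_loss + lam * mmd_rbf(h_source, h_target)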
Analyzing sparse dictionaries for online learning with kernels | stat.ML cs.CV cs.IT cs.LG math.IT | Many signal processing and machine learning methods share essentially the
same linear-in-the-parameter model, with as many parameters as available
samples as in kernel-based machines. Sparse approximation is essential in many
disciplines, with new challenges emerging in online learning with kernels. To
this end, several sparsity measures have been proposed in the literature to
quantify sparse dictionaries and construct relevant ones, the most prominent
being the distance, the approximation, the coherence and the Babel
measures. In this paper, we analyze sparse dictionaries based on these
measures. By conducting an eigenvalue analysis, we show that these sparsity
measures share many properties, including the linear independence condition and
inducing a well-posed optimization problem. Furthermore, we prove that there
exists a quasi-isometry between the parameter (i.e., dual) space and the
dictionary's induced feature space.
| Paul Honeine | 10.1109/TSP.2015.2457396 | 1409.6045 | null | null |
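For concreteness, the coherence criterion admits a new sample into the dictionary only when its largest kernel similarity to the current atoms stays below a threshold; the threshold value below is illustrative and a unit-norm kernel is assumed.

    def admit_by_coherence(x_new, dictionary, kernel, mu0=0.5):
        # Online coherence-based sparsification: accept x_new only if it is
        # not too similar (in kernel space) to any existing atom.
        if not dictionary:
            return True
        return max(abs(kernel(x_new, d)) for d in dictionary) <= mu0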
Approximation errors of online sparsification criteria | stat.ML cs.CV cs.IT cs.LG cs.NE math.IT | Many machine learning frameworks, such as resource-allocating networks,
kernel-based methods, Gaussian processes, and radial-basis-function networks,
require a sparsification scheme in order to address the online learning
paradigm. For this purpose, several online sparsification criteria have been
proposed to restrict the model definition on a subset of samples. The most
known criterion is the (linear) approximation criterion, which discards any
sample that can be well represented by the already contributing samples, an
operation with excessive computational complexity. Several computationally
efficient sparsification criteria have been introduced in the literature, such
as the distance, the coherence and the Babel criteria. In this paper, we
provide a framework that connects these sparsification criteria to the issue of
approximating samples, by deriving theoretical bounds on the approximation
errors. Moreover, we investigate the error of approximating any feature, by
proposing upper-bounds on the approximation error for each of the
aforementioned sparsification criteria. Two classes of features are described
in detail, the empirical mean and the principal axes in the kernel principal
component analysis.
| Paul Honeine | 10.1109/TSP.2015.2442960 | 1409.6046 | null | null |
The Information Theoretically Efficient Model (ITEM): A model for
computerized analysis of large datasets | cs.LG | This document discusses the Information Theoretically Efficient Model (ITEM),
a computerized system to generate an information theoretically efficient
multinomial logistic regression from a general dataset. More specifically, this
model is designed to succeed even where the logit transform of the dependent
variable is not necessarily linear in the independent variables. This research
shows that for large datasets, the resulting models can be produced on modern
computers in a tractable amount of time. These models are also resistant to
overfitting, and as such they tend to produce interpretable models with only a
limited number of features, all of which are designed to be well behaved.
| Tyler Ward | null | 1409.6075 | null | null |
Best-Arm Identification in Linear Bandits | cs.LG | We study the best-arm identification problem in linear bandit, where the
rewards of the arms depend linearly on an unknown parameter $\theta^*$ and the
objective is to return the arm with the largest reward. We characterize the
complexity of the problem and introduce sample allocation strategies that pull
arms to identify the best arm with a fixed confidence, while minimizing the
sample budget. In particular, we show the importance of exploiting the global
linear structure to improve the estimate of the reward of near-optimal arms. We
analyze the proposed strategies and compare their empirical performance.
Finally, as a by-product of our analysis, we point out the connection to the
$G$-optimality criterion used in optimal experimental design.
| Marta Soare, Alessandro Lazaric, R\'emi Munos | null | 1409.6110 | null | null |
Distributed Clustering and Learning Over Networks | math.OC cs.LG cs.MA cs.SY stat.ML | Distributed processing over networks relies on in-network processing and
cooperation among neighboring agents. Cooperation is beneficial when agents
share a common objective. However, in many applications agents may belong to
different clusters that pursue different objectives. Then, indiscriminate
cooperation will lead to undesired results. In this work, we propose an
adaptive clustering and learning scheme that allows agents to learn which
neighbors they should cooperate with and which other neighbors they should
ignore. In doing so, the resulting algorithm enables the agents to identify
their clusters and to attain improved learning and estimation accuracy over
networks. We carry out a detailed mean-square analysis and assess the error
probabilities of Types I and II, i.e., false alarm and mis-detection, for the
clustering mechanism. Among other results, we establish that these
probabilities decay exponentially with the step-sizes so that the probability
of correct clustering can be made arbitrarily close to one.
| Xiaochuan Zhao and Ali H. Sayed | 10.1109/TSP.2015.2415755 | 1409.6111 | null | null |
A non-linear learning & classification algorithm that achieves full
training accuracy with stellar classification accuracy | cs.CV cs.LG | A fast Non-linear and non-iterative learning and classification algorithm is
synthesized and validated. This algorithm named the "Reverse Ripple
Effect(R.R.E)", achieves 100% learning accuracy but is computationally
expensive upon classification. The R.R.E is a (deterministic) algorithm that
super imposes Gaussian weighted functions on training points. In this work, the
R.R.E algorithm is compared against known learning and classification
techniques/algorithms such as: the Perceptron Criterion algorithm, Linear
Support Vector machines, the Linear Fisher Discriminant and a simple Neural
Network. The classification accuracy of the R.R.E algorithm is evaluated using
simulations conducted in MATLAB. The R.R.E algorithm's behaviour is analyzed
under linearly and non-linearly separable data sets. For the comparison with
the Neural Network, the classical XOR problem is considered.
| Rashid Khogali | null | 1409.6440 | null | null |
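A Parzen-style reading of the description, with Gaussian weighted functions superimposed on training points and classes scored by summed weights, might look as follows; the bandwidth and the scoring rule are assumptions, not the paper's exact R.R.E construction.

    import numpy as np

    def rre_style_predict(X_train, y_train, x, sigma=1.0):
        # Superimpose a Gaussian bump on every training point and score each
        # class by the total weight it places on the query x.
        w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * sigma ** 2))
        classes = np.unique(y_train)
        scores = [w[y_train == c].sum() for c in classes]
        return classes[int(np.argmax(scores))]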
HSR: L1/2 Regularized Sparse Representation for Fast Face Recognition
using Hierarchical Feature Selection | cs.CV cs.LG | In this paper, we propose a novel method for fast face recognition called
L1/2 Regularized Sparse Representation using Hierarchical Feature Selection
(HSR). By employing hierarchical feature selection, we can compress the scale
and dimension of global dictionary, which directly contributes to the decrease
of computational cost in sparse representation that our approach is strongly
rooted in. It consists of Gabor wavelets and Extreme Learning Machine
Auto-Encoder (ELM-AE) hierarchically. For Gabor wavelets part, local features
can be extracted at multiple scales and orientations to form Gabor-feature
based image, which in turn improves the recognition rate. Besides, in the
presence of an occluded face image, the scale of the Gabor-feature based global
dictionary can be compressed accordingly because redundancies exist in
Gabor-feature based occlusion dictionary. For ELM-AE part, the dimension of
Gabor-feature based global dictionary can be compressed because
high-dimensional face images can be rapidly represented by low-dimensional
feature. By introducing L1/2 regularization, our approach can produce sparser
and more robust representation compared to regularized Sparse Representation
based Classification (SRC), which also contributes to the decrease of the
computational cost in sparse representation. In comparison with related work
such as SRC and Gabor-feature based SRC (GSRC), experimental results on a
variety of face databases demonstrate the great advantage of our method for
computational cost. Moreover, we also achieve approximate or even better
recognition rate.
| Bo Han, Bo He, Tingting Sun, Mengmeng Ma, Amaury Lendasse | 10.1007/s00521-015-1907-y | 1409.6448 | null | null |
Improving Cross-domain Recommendation through Probabilistic
Cluster-level Latent Factor Model--Extended Version | cs.IR cs.LG stat.ML | Cross-domain recommendation has been proposed to transfer user behavior
pattern by pooling together the rating data from multiple domains to alleviate
the sparsity problem appearing in single rating domains. However, previous
models only assume that multiple domains share a latent common rating pattern
based on the user-item co-clustering. To capture diversities among different
domains, we propose a novel Probabilistic Cluster-level Latent Factor (PCLF)
model to improve the cross-domain recommendation performance. Experiments on
several real world datasets demonstrate that our proposed model outperforms the
state-of-the-art methods for the cross-domain recommendation task.
| Siting Ren, Sheng Gao | null | 1409.6805 | null | null |
Unsupervised learning of regression mixture models with unknown number
of components | stat.ME cs.LG stat.ML | Regression mixture models are widely studied in statistics, machine learning
and data analysis. Fitting regression mixtures is challenging and is usually
performed by maximum likelihood by using the expectation-maximization (EM)
algorithm. However, it is well-known that the initialization is crucial for EM.
If the initialization is inappropriately performed, the EM algorithm may lead
to unsatisfactory results. The EM algorithm also requires the number of
clusters to be given a priori; the problem of selecting the number of mixture
components requires using model selection criteria to choose one from a set of
pre-estimated candidate models. We propose a new fully unsupervised algorithm
to learn regression mixture models with unknown number of components. The
developed unsupervised learning approach consists in a penalized maximum
likelihood estimation carried out by a robust expectation-maximization (EM)
algorithm for fitting polynomial, spline and B-spline regressions mixtures. The
proposed learning approach is fully unsupervised: 1) it simultaneously infers
the model parameters and the optimal number of the regression mixture
components from the data as the learning proceeds, rather than in a two-fold
scheme as in standard model-based clustering using afterward model selection
criteria, and 2) it does not require accurate initialization unlike the
standard EM for regression mixtures. The developed approach is applied to curve
clustering problems. Numerical experiments on simulated data show that the
proposed robust EM algorithm performs well and provides accurate results in
terms of robustness with regard to initialization and retrieving the optimal
partition with the actual number of clusters. An application to real data in
the framework of functional data clustering, confirms the benefit of the
proposed approach for practical applications.
| Faicel Chamroukhi | null | 1409.6981 | null | null |
Variational Pseudolikelihood for Regularized Ising Inference | cond-mat.stat-mech cs.LG stat.ML | I propose a variational approach to maximum pseudolikelihood inference of the
Ising model. The variational algorithm is more computationally efficient, and
does a better job predicting out-of-sample correlations than $L_2$ regularized
maximum pseudolikelihood inference as well as mean field and isolated spin pair
approximations with pseudocount regularization. The key to the approach is a
variational energy that regularizes the inference problem by shrinking the
couplings towards zero, while still allowing some large couplings to explain
strong correlations. The utility of the variational pseudolikelihood approach
is illustrated by training an Ising model to represent the letters A-J using
samples of letters from different computer fonts.
| Charles K. Fisher | null | 1409.7074 | null | null |
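For reference, the (negative log) pseudolikelihood has a simple closed form for +/-1 spins; the sketch below assumes a symmetric coupling matrix J with zero diagonal, and shows the standard objective that regularized variants such as the one above build on.

    import numpy as np

    def neg_log_pseudolikelihood(J, h, S):
        # S: (n_samples, n_spins) array of +/-1 spins; J symmetric with zero
        # diagonal. Uses log P(s_i | s_-i) = -log(1 + exp(-2 s_i f_i)) with
        # local fields f_i = h_i + sum_j J_ij s_j.
        fields = S @ J + h
        return np.mean(np.log1p(np.exp(-2.0 * S * fields)))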
Semantically-Informed Syntactic Machine Translation: A Tree-Grafting
Approach | cs.CL cs.LG stat.ML | We describe a unified and coherent syntactic framework for supporting a
semantically-informed syntactic approach to statistical machine translation.
Semantically enriched syntactic tags assigned to the target-language training
texts improved translation quality. The resulting system significantly
outperformed a linguistically naive baseline model (Hiero), and reached the
highest scores yet reported on the NIST 2009 Urdu-English translation task.
This finding supports the hypothesis (posed by many researchers in the MT
community, e.g., in DARPA GALE) that both syntactic and semantic information
are critical for improving translation quality---and further demonstrates that
large gains can be achieved for low-resource languages with different word
order than English.
| Kathryn Baker, Michael Bloodgood, Chris Callison-Burch, Bonnie J.
Dorr, Nathaniel W. Filardo, Lori Levin, Scott Miller and Christine Piatko | null | 1409.7085 | null | null |
Heterogeneous Metric Learning with Content-based Regularization for
Software Artifact Retrieval | cs.LG cs.IR cs.SE | The problem of software artifact retrieval has the goal to effectively locate
software artifacts, such as a piece of source code, in a large code repository.
This problem has been traditionally addressed through the textual query. In
other words, information retrieval techniques will be exploited based on the
textual similarity between queries and textual representation of software
artifacts, which is generated by collecting words from comments, identifiers,
and descriptions of programs. However, in addition to this semantic
information, there is rich information embedded in the source code itself.
Source code, if analyzed properly, can be a rich resource for enhancing
the efforts of software artifact retrieval. To this end, in this paper, we
develop a feature extraction method on source codes. Specifically, this method
can capture both the inherent information in the source codes and the semantic
information hidden in the comments, descriptions, and identifiers of the source
codes. Moreover, we design a heterogeneous metric learning approach, which
allows to integrate code features and text features into the same latent
semantic space. This, in turn, can help to measure the artifact similarity by
exploiting the joint power of both code and text features. Finally, extensive
experiments on real-world data show that the proposed method can help to
improve the performances of software artifact retrieval with a significant
margin.
| Liang Wu, Hui Xiong, Liang Du, Bo Liu, Guandong Xu, Yong Ge, Yanjie
Fu, Yuanchun Zhou, Jianhui Li | 10.1109/ICDM.2014.147 | 1409.7165 | null | null |
A Boosting Framework on Grounds of Online Learning | cs.LG | By exploiting the duality between boosting and online learning, we present a
boosting framework which proves to be extremely powerful thanks to employing
the vast knowledge available in the online learning area. Using this framework,
we develop various algorithms to address multiple practically and theoretically
interesting questions including sparse boosting, smooth-distribution boosting,
agnostic learning and some generalization to double-projection online learning
algorithms, as a by-product.
| Tofigh Naghibi, Beat Pfister | null | 1409.7202 | null | null |
A Semidefinite Programming Based Search Strategy for Feature Selection
with Mutual Information Measure | cs.LG | Feature subset selection, as a special case of the general subset selection
problem, has been the topic of a considerable number of studies due to the
growing importance of data-mining applications. In the feature subset selection
problem there are two main issues that need to be addressed: (i) Finding an
appropriate measure function that can be fairly fast and robustly computed for
high-dimensional data. (ii) A search strategy to optimize the measure over the
subset space in a reasonable amount of time. In this article mutual information
between features and class labels is considered to be the measure function. Two
series expansions for mutual information are proposed, and it is shown that
most heuristic criteria suggested in the literature are truncated
approximations of these expansions. It is well-known that searching the whole
subset space is an NP-hard problem. Here, instead of the conventional
sequential search algorithms, we suggest a parallel search strategy based on
semidefinite programming (SDP) that can search through the subset space in
polynomial time. By exploiting the similarities between the proposed algorithm
and an instance of the maximum-cut problem in graph theory, the approximation
ratio of this algorithm is derived and is compared with the approximation ratio
of the backward elimination method. The experiments show that it can be
misleading to judge the quality of a measure solely based on the classification
accuracy, without taking the effect of the non-optimum search strategy into
account.
| Tofigh Naghibi, Sarah Hoffmann and Beat Pfister | null | 1409.7384 | null | null |
Autoencoder Trees | cs.LG stat.ML | We discuss an autoencoder model in which the encoding and decoding functions
are implemented by decision trees. We use the soft decision tree where internal
nodes realize soft multivariate splits given by a gating function and the
overall output is the average of all leaves weighted by the gating values on
their path. The encoder tree takes the input and generates a lower dimensional
representation in the leaves and the decoder tree takes this and reconstructs
the original input. Exploiting the continuity of the trees, autoencoder trees
are trained with stochastic gradient descent. On handwritten digit and news
data, we see that the autoencoder trees yield good reconstruction error
compared to traditional autoencoder perceptrons. We also see that the
autoencoder tree captures hierarchical representations at different
granularities of the data on its different levels and the leaves capture the
localities in the input space.
| Ozan \.Irsoy, Ethem Alpayd{\i}n | null | 1409.7461 | null | null |
Short-term solar irradiance and irradiation forecasts via different time
series techniques: A preliminary study | cs.LG physics.ao-ph | This communication is devoted to solar irradiance and irradiation short-term
forecasts, which are useful for electricity production. Several different time
series approaches are employed. Our results and the corresponding numerical
simulations show that techniques that do not need a large amount of historical
data perform better than those that do, especially when the available data are
quite noisy.
| C\'edric Join (INRIA Lille - Nord Europe, CRAN, AL.I.E.N.), Cyril
Voyant (SPE), Michel Fliess (AL.I.E.N., LIX), Marc Muselli (SPE), Marie Laure
Nivet (SPE), Christophe Paoli, Fr\'ed\'eric Chaxel (CRAN) | null | 1409.7476 | null | null |
Generalized Twin Gaussian Processes using Sharma-Mittal Divergence | cs.LG cs.CV stat.ML | There has been a growing interest in mutual information measures due to their
wide range of applications in Machine Learning and Computer Vision. In this
paper, we present a generalized structured regression framework based on
Sharma-Mittal divergence, a relative entropy measure, which is introduced to the
Machine Learning community in this work. Sharma-Mittal (SM) divergence is a
generalized mutual information measure for the widely used R\'enyi, Tsallis,
Bhattacharyya, and Kullback-Leibler (KL) relative entropies. Specifically, we
study Sharma-Mittal divergence as a cost function in the context of the Twin
Gaussian Processes (TGP)~\citep{Bo:2010}, which generalizes over the
KL-divergence without computational penalty. We show interesting properties of
Sharma-Mittal TGP (SMTGP) through a theoretical analysis, which covers missing
insights in the traditional TGP formulation, and we generalize this theory to
SM-divergence, of which KL-divergence is a special case.
Experimentally, we evaluated the proposed SMTGP framework on several datasets.
The results show that SMTGP achieves better predictions than KL-based TGP,
since it offers a larger class of models through parameters that are learned
from the data.
| Mohamed Elhoseiny, Ahmed Elgammal | null | 1409.7480 | null | null |
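For reference, a commonly cited two-parameter form of the Sharma-Mittal divergence (reconstructed from the standard literature; the paper's own notation may differ):

```latex
D_{\alpha,\beta}(p \,\|\, q) =
  \frac{1}{\beta - 1}
  \left[ \left( \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx
         \right)^{\frac{1-\beta}{1-\alpha}} - 1 \right],
\qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1 .
```

In the limits, $\beta \to 1$ recovers the R\'enyi divergence, $\beta \to \alpha$ the Tsallis divergence, and $\alpha, \beta \to 1$ the KL divergence, which is how SM-divergence subsumes the measures listed in the abstract.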
Unsupervised Domain Adaptation by Backpropagation | stat.ML cs.LG cs.NE | Top-performing deep architectures are trained on massive amounts of labeled
data. In the absence of labeled data for a certain task, domain adaptation
often provides an attractive option given that labeled data of similar nature
but from a different domain (e.g. synthetic images) are available. Here, we
propose a new approach to domain adaptation in deep architectures that can be
trained on large amounts of labeled data from the source domain and large
amounts of unlabeled data from the target domain (no labeled target-domain data is
necessary).
As the training progresses, the approach promotes the emergence of "deep"
features that are (i) discriminative for the main learning task on the source
domain and (ii) invariant with respect to the shift between the domains. We
show that this adaptation behaviour can be achieved in almost any feed-forward
model by augmenting it with a few standard layers and a simple new gradient
reversal layer. The resulting augmented architecture can be trained using
standard backpropagation.
Overall, the approach can be implemented with little effort using any of the
deep-learning packages. The method performs very well in a series of image
classification experiments, achieving an adaptation effect in the presence of
large domain shifts and outperforming the previous state-of-the-art on Office
datasets.
| Yaroslav Ganin, Victor Lempitsky | null | 1409.7495 | null | null |
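A minimal PyTorch sketch of the gradient reversal idea (identity on the forward pass, negated and scaled gradient on the backward pass). PyTorch postdates this paper, so this is an illustrative reimplementation rather than the authors' code; `lambd` is the reversal strength.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity forward; backward multiplies the gradient by -lambd, so
    the feature extractor below is trained to *confuse* the domain head."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: the label head consumes features directly, while the
# domain head consumes grad_reverse(features); a single backward pass
# then trains features that predict labels but not domains.
```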
The Advantage of Cross Entropy over Entropy in Iterative Information
Gathering | stat.ML cs.LG | Gathering the most information by picking the least amount of data is a
common task in experimental design or when exploring an unknown environment in
reinforcement learning and robotics. A widely used measure for quantifying the
information contained in some distribution of interest is its entropy. Greedily
minimizing the expected entropy is therefore a standard method for choosing
samples in order to gain strong beliefs about the underlying random variables.
We show that this approach is prone to getting temporarily stuck in local optima
corresponding to wrongly biased beliefs. We suggest instead maximizing the
expected cross entropy between old and new belief, which aims at challenging
refutable beliefs and thereby avoids these local optima. We show that both
criteria are closely related and that their difference can be traced back to
the asymmetry of the Kullback-Leibler divergence. In illustrative examples as
well as simulated and real-world experiments we demonstrate the advantage of
cross entropy over simple entropy for practical applications.
| Johannes Kulick, Robert Lieck and Marc Toussaint | null | 1409.7552 | null | null |
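A hedged sketch of the comparison on a discrete belief: for each candidate binary test, greedily minimizing expected posterior entropy versus maximizing the expected cross entropy between the old and new belief. The Bayesian-update setting below is an illustrative instance, not the paper's experiments.

```python
import numpy as np

def select_test(belief, likelihoods, criterion="cross_entropy"):
    """Choose the next binary test for a belief over K hypotheses.

    belief: (K,) current posterior.  likelihoods: (T, K) array where
    likelihoods[t, k] = P(outcome = 1 | hypothesis k, test t).
    """
    eps = 1e-12
    scores = []
    for lik in likelihoods:
        score = 0.0
        for outcome_prob in (lik, 1.0 - lik):      # outcome 1, then 0
            p_outcome = float(belief @ outcome_prob)
            posterior = belief * outcome_prob / (p_outcome + eps)
            if criterion == "entropy":
                # greedy entropy: minimize expected H(new belief)
                h_new = -np.sum(posterior * np.log(posterior + eps))
                score -= p_outcome * h_new
            else:
                # maximize expected H(old, new) = H(old) + KL(old || new),
                # which rewards tests that can refute the current belief
                h_cross = -np.sum(belief * np.log(posterior + eps))
                score += p_outcome * h_cross
        scores.append(score)
    return int(np.argmax(scores))
```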
Semi-supervised Classification for Natural Language Processing | cs.CL cs.LG | Semi-supervised classification is an interesting idea where classification
models are learned from both labeled and unlabeled data. It has several
advantages over supervised classification in natural language processing
domain. For instance, supervised classification exploits only labeled data that
are expensive, often difficult to get, inadequate in quantity, and require
human experts for annotation. On the other hand, unlabeled data are inexpensive
and abundant. Despite the fact that many factors limit the widespread use of
semi-supervised classification, it has become popular since its level of
performance is empirically as good as supervised classification. This study
explores the possibilities and achievements as well as complexity and
limitations of semi-supervised classification for several natural language
processing tasks like parsing, biomedical information processing, text
classification, and summarization.
| Rushdi Shams | null | 1409.7612 | null | null |
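As a concrete instance of the paradigm this survey covers, a minimal self-training loop: fit on labeled data, pseudo-label the most confident unlabeled examples, and refit. The threshold and estimator are illustrative choices, not drawn from the survey.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_iters=10):
    """Self-training semi-supervised classification (numpy feature arrays,
    e.g. bag-of-words vectors for a text classification task)."""
    X_lab, y_lab, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_iters):
        clf.fit(X_lab, y_lab)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold
        if not keep.any():
            break                        # nothing confident enough; stop
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X_lab = np.vstack([X_lab, pool[keep]])
        y_lab = np.concatenate([y_lab, pseudo])
        pool = pool[~keep]
    return clf
```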
Maximum mutual information regularized classification | cs.LG | In this paper, a novel pattern classification approach is proposed by
regularizing the classifier learning to maximize mutual information between the
classification response and the true class label. We argue that, with the
learned classifier, the uncertainty about the true class label of a data sample
should be reduced as much as possible by knowing its classification response.
The reduced uncertainty is measured by the mutual information between the
classification response and the true class label. To this end, when learning a
linear classifier, we propose to maximize the mutual information between
classification responses and true class labels of training samples, besides
minimizing the classification error and reducing the classifier complexity.
An objective function is constructed by modeling mutual information with
entropy estimation, and it is optimized by a gradient descent method in an
iterative algorithm. Experiments on two real world pattern classification
problems show the significant improvements achieved by maximum mutual
information regularization.
| Jim Jing-Yan Wang, Yi Wang, Shiguang Zhao, Xin Gao | null | 1409.7780 | null | null |
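A hedged sketch of the quantity the regularizer rewards: a plug-in estimate of the mutual information between a classifier's discrete responses and the true labels, computed from their joint histogram. The paper optimizes a smoothed entropy estimate by gradient descent, which this sketch does not reproduce.

```python
import numpy as np

def empirical_mutual_information(responses, labels):
    """Plug-in estimate of I(response; label) from a joint histogram.
    High values mean knowing the response sharply reduces uncertainty
    about the true class label, which the regularizer encourages."""
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    l_vals, l_idx = np.unique(labels, return_inverse=True)
    joint = np.zeros((len(r_vals), len(l_vals)))
    for i, j in zip(r_idx, l_idx):
        joint[i, j] += 1.0
    joint /= joint.sum()
    pr = joint.sum(axis=1, keepdims=True)    # marginal over responses
    pl = joint.sum(axis=0, keepdims=True)    # marginal over labels
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / (pr @ pl)[mask])))
```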
Large-scale Online Feature Selection for Ultra-high Dimensional Sparse
Data | cs.LG cs.CV | Feature selection with large-scale high-dimensional data is important yet
very challenging in machine learning and data mining. Online feature selection
is a promising new paradigm that is more efficient and scalable than batch
feature selection methods, but existing online approaches usually fall short
of batch approaches in efficacy. In this paper, we
present a novel second-order online feature selection scheme that is simple yet
effective, very fast and extremely scalable to deal with large-scale ultra-high
dimensional sparse data streams. The basic idea is to improve the existing
first-order online feature selection methods by exploiting second-order
information for choosing the subset of important features with high confidence
weights. However, unlike many second-order learning methods that often suffer
from extra high computational cost, we devise a novel smart algorithm for
second-order online feature selection using a MaxHeap-based approach, which is
not only more effective than the existing first-order approaches, but also
significantly more efficient and scalable for large-scale feature selection
with ultra-high dimensional sparse data, as validated from our extensive
experiments. Impressively, on a billion-scale synthetic dataset (1-billion
dimensions, 1-billion nonzero features, and 1-million samples), our new
algorithm took only 8 minutes on a single PC, which is orders of magnitude
faster than traditional batch approaches. \url{http://arxiv.org/abs/1409.7794}
| Yue Wu, Steven C. H. Hoi, Tao Mei, Nenghai Yu | null | 1409.7794 | null | null |
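A much-simplified sketch of the core recipe: a confidence-weighted (second-order) online update followed by heap-based truncation to the B highest-scoring features. The update rule, scoring function, and hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import heapq
import numpy as np

def second_order_ofs_step(w, sigma, x_idx, x_val, y, B, r=0.1, eta=0.2):
    """One online step. w: dense weights; sigma: per-feature variances
    (initialize to 1); (x_idx, x_val): the sparse input's nonzeros;
    y in {-1, +1}; B: number of features to keep."""
    if y * np.dot(w[x_idx], x_val) < 1.0:          # hinge-loss violation
        # confident coordinates (small sigma) move less: a crude
        # diagonal second-order update in the AROW style
        w[x_idx] += eta * y * x_val * sigma[x_idx]
        sigma[x_idx] = 1.0 / (1.0 / sigma[x_idx] + x_val ** 2 / r)
    # keep the B features with the largest confidence-adjusted scores,
    # using a heap so truncation avoids a full sort of all d features
    score = np.abs(w) / np.sqrt(sigma)
    keep = heapq.nlargest(B, range(len(w)), key=score.__getitem__)
    mask = np.zeros(len(w), dtype=bool)
    mask[keep] = True
    w[~mask] = 0.0
    return w, sigma
```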
Cognitive Learning of Statistical Primary Patterns via Bayesian Network | cs.LG | In cognitive radio (CR) technology, the trend of sensing is no longer to only
detect the presence of active primary users. A large number of applications
demand for more comprehensive knowledge on primary user behaviors in spatial,
temporal, and frequency domains. To satisfy such requirements, we study the
statistical relationship among primary users by introducing a Bayesian network
(BN) based framework. How to learn such a BN structure is a long-standing
issue, not fully understood even in the statistical learning community.
Besides, another key problem in this learning scenario is that the CR has to
identify how many variables are in the BN, which is usually considered as prior
knowledge in statistical learning applications. To address these two issues
simultaneously, this paper proposes a BN structure learning scheme consisting
of an efficient structure learning algorithm and a blind variable
identification scheme. The proposed approach incurs significantly lower
computational complexity compared with previous ones, and is capable of
determining the structure without assuming much prior knowledge about
variables. With this result, cognitive users could efficiently understand the
statistical pattern of primary networks, such that more efficient cognitive
protocols could be designed across different network layers.
| Weijia Han, Huiyan Sang, Min Sheng, Jiandong Li, and Shuguang Cui | null | 1409.7930 | null | null |
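A hedged sketch of one standard score-based approach to the structure-learning half of the problem: greedy edge-addition hill climbing over DAGs with a decomposable BIC score on discrete data. This is a generic baseline, not the paper's algorithm, and it does not address the blind variable identification scheme.

```python
import itertools
import numpy as np

def local_bic(data, child, parents):
    """BIC score of one discrete node given a candidate parent set."""
    n = data.shape[0]
    child_vals = np.unique(data[:, child])
    if parents:
        _, inv = np.unique(data[:, parents], axis=0, return_inverse=True)
    else:
        inv = np.zeros(n, dtype=int)
    ll, n_configs = 0.0, len(np.unique(inv))
    for c in range(n_configs):
        rows = data[inv == c, child]
        for v in child_vals:
            cnt = np.sum(rows == v)
            if cnt > 0:
                ll += cnt * np.log(cnt / len(rows))
    return ll - 0.5 * np.log(n) * n_configs * (len(child_vals) - 1)

def _is_ancestor(parents, a, node):
    """True if a directed path a -> ... -> node already exists."""
    stack, seen = list(parents[node]), set()
    while stack:
        p = stack.pop()
        if p == a:
            return True
        if p not in seen:
            seen.add(p)
            stack.extend(parents[p])
    return False

def hill_climb_bn(data):
    """Greedy edge addition until no single edge improves the BIC score."""
    d = data.shape[1]
    parents = {j: [] for j in range(d)}
    while True:
        best_gain, best_edge = 1e-9, None
        for i, j in itertools.permutations(range(d), 2):
            if i in parents[j] or _is_ancestor(parents, j, i):
                continue               # existing edge, or would form a cycle
            gain = (local_bic(data, j, parents[j] + [i])
                    - local_bic(data, j, parents[j]))
            if gain > best_gain:
                best_gain, best_edge = gain, (i, j)
        if best_edge is None:
            return parents
        parents[best_edge[1]].append(best_edge[0])
```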
Combining human and machine learning for morphological analysis of
galaxy images | astro-ph.IM astro-ph.GA cs.CV cs.LG | The increasing importance of digital sky surveys collecting many millions of
galaxy images has reinforced the need for robust methods that can perform
morphological analysis of large galaxy image databases. Citizen science
initiatives such as Galaxy Zoo showed that large datasets of galaxy images can
be analyzed effectively by non-scientist volunteers, but since databases
generated by robotic telescopes grow much faster than the processing power of
any group of citizen scientists, it is clear that computer analysis is
required. Here we propose to use citizen science data for training machine
learning systems, and show experimental results demonstrating that machine
learning systems can be trained with citizen science data. Our findings show
that the performance of machine learning depends on the quality of the data,
which can be improved by using samples that have a high degree of agreement
between the citizen scientists. The source code of the method is publicly
available.
| Evan Kuminski, Joe George, John Wallin, Lior Shamir | 10.1086/678977 | 1409.7935 | null | null |
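A minimal sketch of the data-quality filtering this abstract reports: train only on galaxies whose citizen-science vote agreement exceeds a threshold. The vote-matrix format, threshold, and classifier are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_on_agreement(features, votes, threshold=0.8):
    """Keep only samples where the majority vote fraction is high.

    features: (n, d) image descriptors.  votes: (n, n_classes) fractions
    of citizen scientists voting for each morphological class.
    """
    agreement = votes.max(axis=1)
    keep = agreement >= threshold
    labels = votes.argmax(axis=1)
    clf = RandomForestClassifier(n_estimators=300)
    clf.fit(features[keep], labels[keep])
    return clf
```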
Lazier Than Lazy Greedy | cs.LG cs.DS cs.IR | Is it possible to maximize a monotone submodular function faster than the
widely used lazy greedy algorithm (also known as accelerated greedy), both in
theory and practice? In this paper, we develop the first linear-time algorithm
for maximizing a general monotone submodular function subject to a cardinality
constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can
achieve a $(1-1/e-\varepsilon)$ approximation guarantee, in expectation, to the
optimum solution in time linear in the size of the data and independent of the
cardinality constraint. We empirically demonstrate the effectiveness of our
algorithm on submodular functions arising in data summarization, including
training large-scale kernel methods, exemplar-based clustering, and sensor
placement. We observe that STOCHASTIC-GREEDY practically achieves the same
utility value as lazy greedy but runs much faster. More surprisingly, we
observe that in many practical scenarios STOCHASTIC-GREEDY does not evaluate
a sizable fraction of the data points even once and still achieves
indistinguishable results compared to lazy greedy.
| Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, Amin Karbasi, Jan
Vondrak, and Andreas Krause | null | 1409.7938 | null | null |
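A minimal sketch of the STOCHASTIC-GREEDY procedure as described: each of the k rounds evaluates marginal gains only on a random subsample of size about (n/k) log(1/eps). Here `f` is an assumed black-box monotone submodular set function.

```python
import math
import random

def stochastic_greedy(ground_set, f, k, eps=0.1):
    """Maximize a monotone submodular f under |S| <= k in linear time."""
    n = len(ground_set)
    s = max(1, math.ceil((n / k) * math.log(1.0 / eps)))
    S, remaining = set(), set(ground_set)
    for _ in range(k):
        sample = random.sample(sorted(remaining), min(s, len(remaining)))
        base = f(S)
        # best marginal gain within the subsample only
        best = max(sample, key=lambda e: f(S | {e}) - base)
        S.add(best)
        remaining.remove(best)
    return S
```

The total number of function evaluations is about k * s = n log(1/eps), independent of the cardinality constraint, which is the source of the speedup over lazy greedy.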