categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---
cs.CE cs.LG stat.AP | null | 1307.1380 | null | null | http://arxiv.org/pdf/1307.1380v1 | 2013-07-04T15:45:09Z | 2013-07-04T15:45:09Z | The Application of a Data Mining Framework to Energy Usage Profiling in
Domestic Residences using UK data | This paper describes a method for defining representative load profiles for
domestic electricity users in the UK. It considers bottom-up and clustering
methods and then details the research plans for implementing and improving
existing framework approaches based on the overall usage profile. The work
focuses on adapting and applying analysis framework approaches to UK energy
data in order to determine the effectiveness of creating a few (single figures)
archetypical users with the intention of improving on the current methods of
determining usage profiles. The work is currently in progress and the paper
details initial results using data collected in Milton Keynes around 1990.
Various possible enhancements to the work are considered including a split
based on temperature to reflect the varying UK weather conditions.
| [
"['Ian Dent' 'Uwe Aickelin' 'Tom Rodden']",
"Ian Dent, Uwe Aickelin, Tom Rodden"
] |
cs.CE cs.LG | null | 1307.1385 | null | null | http://arxiv.org/pdf/1307.1385v1 | 2013-07-04T15:55:33Z | 2013-07-04T15:55:33Z | Creating Personalised Energy Plans. From Groups to Individuals using
Fuzzy C Means Clustering | Changes in the UK electricity market mean that domestic users will be
required to modify their usage behaviour in order that supplies can be
maintained. Clustering allows usage profiles collected at the household level
to be clustered into groups and assigned a stereotypical profile which can be
used to target marketing campaigns. Fuzzy C Means clustering extends this by
allowing each household to be a member of many groups and hence provides the
opportunity to make personalised offers to the household dependent on their
degree of membership of each group. In addition, feedback can be provided on
how users' changing behaviour is moving them towards more "green" or
cost-effective stereotypical usage.
| [
"Ian Dent, Christian Wagner, Uwe Aickelin, Tom Rodden",
"['Ian Dent' 'Christian Wagner' 'Uwe Aickelin' 'Tom Rodden']"
] |
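
To make the Fuzzy C Means step concrete, here is a minimal sketch of the standard FCM update loop applied to synthetic load profiles. Everything below (array shapes, the 180 households, four groups, parameter values) is an illustrative assumption, not code or data from the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-Means: returns (centroids, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per row
    for _ in range(n_iter):
        W = U ** m                               # fuzzified memberships
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        # squared distances from each profile to each centroid
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))             # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centroids, U

# Synthetic stand-in for half-hourly household load profiles (values made up).
rng = np.random.default_rng(1)
profiles = rng.random((180, 48))
centroids, U = fuzzy_c_means(profiles)
print(U[0])   # degrees of membership of household 0 in each of the 4 groups
```

The row `U[0]` is exactly the quantity the abstract exploits: a household belongs to every group to some degree, so offers can be weighted by membership rather than assigned from a single hard cluster.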
cs.LG cs.CE | null | 1307.1387 | null | null | http://arxiv.org/pdf/1307.1387v1 | 2013-07-04T16:06:25Z | 2013-07-04T16:06:25Z | Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm | Gene expression data sets are used to classify and predict patient diagnostic
categories. It is extremely difficult and expensive to obtain labelled gene
expression examples. Moreover, conventional supervised approaches such as
Support Vector Machine (SVM) algorithms cannot function properly when labelled
data (training examples) are insufficient. Therefore, in this
paper, we suggest Transductive Support Vector Machines (TSVMs) as
semi-supervised learning algorithms, learning with both labelled samples data
and unlabelled samples to perform the classification of microarray data. To
prune the superfluous genes and samples we used a feature selection method
called Recursive Feature Elimination (RFE), which is supposed to enhance the
output of classification and avoid the local optimization problem. We examined
the classification prediction accuracy of the TSVM-RFE algorithm in comparison
with the Genetic Learning Across Datasets (GLAD) algorithm, as both are
semi-supervised learning methods. Comparing these two methods, we found that
the TSVM-RFE surpassed both a SVM using RFE and GLAD.
| [
"['Hala Helmi' 'Jon M. Garibaldi' 'Uwe Aickelin']",
"Hala Helmi, Jon M. Garibaldi and Uwe Aickelin"
] |
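
The RFE wrapper described above is easy to reproduce with standard tooling. The sketch below uses scikit-learn's RFE around an ordinary inductive linear SVM as a stand-in (scikit-learn has no transductive SVM), on a synthetic stand-in for a microarray matrix; all sizes are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# Synthetic stand-in for a microarray matrix: few samples, many "genes".
X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                           random_state=0)

# RFE repeatedly fits the SVM and drops the genes with the smallest |weights|.
svm = LinearSVC(C=1.0, dual=False, max_iter=10000)
selector = RFE(estimator=svm, n_features_to_select=20, step=0.1).fit(X, y)

print("kept gene indices:", np.where(selector.support_)[0])
```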
cs.LG cs.CR | null | 1307.1391 | null | null | http://arxiv.org/pdf/1307.1391v1 | 2013-07-04T16:19:21Z | 2013-07-04T16:19:21Z | Quiet in Class: Classification, Noise and the Dendritic Cell Algorithm | Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded
several criticisms about its underlying structure and operation. As a result,
several alterations and fixes have been suggested in the literature to correct
for these findings. A contribution of this work is to investigate the effects
of replacing the classification stage of the DCA (which is known to be flawed)
with a traditional machine learning technique. This work goes on to question
the merits of those unique properties of the DCA that are yet to be thoroughly
analysed. If none of these properties can be found to have a benefit over
traditional approaches, then "fixing" the DCA is arguably less efficient than
simply creating a new algorithm. This work examines the dynamic filtering
property of the DCA and questions the utility of this unique feature for the
anomaly detection problem. It is found that this feature, while advantageous
for noisy, time-ordered classification, is not as useful as a traditional
static filter for processing a synthetic dataset. It is concluded that there
are still unique features of the DCA left to investigate. Areas that may be of
benefit to the Artificial Immune Systems community are suggested.
| [
"Feng Gu, Jan Feyereisl, Robert Oates, Jenna Reps, Julie Greensmith,\n Uwe Aickelin",
"['Feng Gu' 'Jan Feyereisl' 'Robert Oates' 'Jenna Reps' 'Julie Greensmith'\n 'Uwe Aickelin']"
] |
cs.CE cs.LG | null | 1307.1394 | null | null | http://arxiv.org/pdf/1307.1394v1 | 2013-07-04T16:24:17Z | 2013-07-04T16:24:17Z | Detect adverse drug reactions for drug Alendronate | Adverse drug reactions (ADRs) are a major public health concern. In
this study we propose an original approach to detecting ADRs using a feature
matrix and feature selection. The experiments are carried out on the drug
Simvastatin. Major side effects of the drug are detected and better
performance is achieved compared to other computerized methods. As the detected
ADRs are based on a computerized method, further investigation is needed.
| [
"Yihui Liu, Uwe Aickelin",
"['Yihui Liu' 'Uwe Aickelin']"
] |
cs.LG cs.CE stat.AP | null | 1307.1411 | null | null | http://arxiv.org/pdf/1307.1411v1 | 2013-07-04T17:01:44Z | 2013-07-04T17:01:44Z | Discovering Sequential Patterns in a UK General Practice Database | The wealth of computerised medical information becoming readily available
presents the opportunity to examine patterns of illnesses, therapies and
responses. These patterns may be able to predict illnesses that a patient is
likely to develop, allowing the implementation of preventative actions. In this
paper sequential rule mining is applied to a General Practice database to find
rules involving a patient's age, gender and medical history. By incorporating
these rules into current health-care a patient can be highlighted as
susceptible to a future illness based on past or current illnesses, gender and
year of birth. This knowledge has the ability to greatly improve health-care
and reduce health-care costs.
| [
"['Jenna Reps' 'Jonathan M. Garibaldi' 'Uwe Aickelin' 'Daniele Soria'\n 'Jack E. Gibson' 'Richard B. Hubbard']",
"Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack\n E. Gibson, Richard B. Hubbard"
] |
stat.ML cs.LG stat.ME | null | 1307.1493 | null | null | http://arxiv.org/pdf/1307.1493v2 | 2013-11-01T17:56:35Z | 2013-07-04T21:33:56Z | Dropout Training as Adaptive Regularization | Dropout and other feature noising schemes control overfitting by artificially
corrupting the training data. For generalized linear models, dropout performs a
form of adaptive regularization. Using this viewpoint, we show that the dropout
regularizer is first-order equivalent to an L2 regularizer applied after
scaling the features by an estimate of the inverse diagonal Fisher information
matrix. We also establish a connection to AdaGrad, an online learning
algorithm, and find that a close relative of AdaGrad operates by repeatedly
solving linear dropout-regularized problems. By casting dropout as
regularization, we develop a natural semi-supervised algorithm that uses
unlabeled data to create a better adaptive regularizer. We apply this idea to
document classification tasks, and show that it consistently boosts the
performance of dropout training, improving on state-of-the-art results on the
IMDB reviews dataset.
| [
"Stefan Wager, Sida Wang, and Percy Liang",
"['Stefan Wager' 'Sida Wang' 'Percy Liang']"
] |
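
A rough numeric reading of the paper's first-order claim: the dropout regularizer for logistic regression behaves like an L2 penalty with each coordinate scaled by an estimate of the diagonal Fisher information. The constant below is my reconstruction from memory of the quadratic approximation, so treat it as a sketch rather than the paper's formula.

```python
import numpy as np

def dropout_quadratic_penalty(X, beta, delta=0.5):
    """Quadratic approximation of the dropout regularizer for logistic
    regression: an L2 penalty scaled per-coordinate by the diagonal Fisher
    information estimate (the delta/(1-delta) constant is an assumption)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))                  # model probabilities
    fisher_diag = ((p * (1 - p))[:, None] * X ** 2).sum(axis=0)
    return 0.5 * delta / (1 - delta) * np.sum(fisher_diag * beta ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta = rng.normal(size=5)
print(dropout_quadratic_penalty(X, beta))
```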
cs.LG cs.CE cs.DB | null | 1307.1584 | null | null | http://arxiv.org/pdf/1307.1584v1 | 2013-07-05T11:24:55Z | 2013-07-05T11:24:55Z | Comparing Data-mining Algorithms Developed for Longitudinal
Observational Databases | Longitudinal observational databases have become a recent interest in the
post-marketing drug surveillance community due to their ability to present a
new perspective for detecting negative side effects. Algorithms mining
longitudinal observation databases are not restricted by many of the
limitations associated with the more conventional methods that have been
developed for spontaneous reporting system databases. In this paper we
investigate the robustness of four recently developed algorithms that mine
longitudinal observational databases by applying them to The Health Improvement
Network (THIN) for six drugs with well-documented negative side effects.
Our results show that none of the existing algorithms was able to consistently
identify known adverse drug reactions above events related to the cause of the
drug and no algorithm was superior.
| [
"['Jenna Reps' 'Jonathan M. Garibaldi' 'Uwe Aickelin' 'Daniele Soria'\n 'Jack E. Gibson' 'Richard B. Hubbard']",
"Jenna Reps, Jonathan M. Garibaldi, Uwe Aickelin, Daniele Soria, Jack\n E. Gibson, Richard B. Hubbard"
] |
cs.LG cs.CE stat.ML | 10.1109/ICSMC.2012.6377825 | 1307.1599 | null | null | http://arxiv.org/abs/1307.1599v1 | 2013-07-05T12:53:28Z | 2013-07-05T12:53:28Z | Supervised Learning and Anti-learning of Colorectal Cancer Classes and
Survival Rates from Cellular Biology Parameters | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to learn relationships between attributes (physical and immunological) and the
resulting tumour stage and survival. Results for conventional machine learning
approaches can be considered poor, especially for predicting tumour stages for
the most important types of cancer. This poor performance is further
investigated and compared with a synthetic dataset based on the logical
exclusive-OR function and it is shown that there is a significant level of
'anti-learning' present in all supervised methods used and this can be
explained by the highly dimensional, complex and sparsely representative
dataset. For predicting the stage of cancer from the immunological attributes,
anti-learning approaches outperform a range of popular algorithms.
| [
"Chris Roadknight, Uwe Aickelin, Guoping Qiu, John Scholefield, Lindy\n Durrant",
"['Chris Roadknight' 'Uwe Aickelin' 'Guoping Qiu' 'John Scholefield'\n 'Lindy Durrant']"
] |
cs.LG cs.CE | null | 1307.1601 | null | null | http://arxiv.org/pdf/1307.1601v1 | 2013-07-05T12:56:24Z | 2013-07-05T12:56:24Z | Biomarker Clustering of Colorectal Cancer Data to Complement Clinical
Classification | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to cluster this dataset and important subsets of it in an effort to
characterize the data and validate existing standards for tumour
classification. It is apparent from optimal clustering that existing tumour
classification is largely unrelated to immunological factors within a patient
and that there may be scope for re-evaluating treatment options and survival
estimates based on a combination of tumour physiology and patient
histochemistry.
| [
"['Chris Roadknight' 'Uwe Aickelin' 'Alex Ladas' 'Daniele Soria'\n 'John Scholefield' 'Lindy Durrant']",
"Chris Roadknight, Uwe Aickelin, Alex Ladas, Daniele Soria, John\n Scholefield and Lindy Durrant"
] |
cs.CL cs.LG | null | 1307.1662 | null | null | http://arxiv.org/pdf/1307.1662v2 | 2014-06-27T17:31:33Z | 2013-07-05T16:52:09Z | Polyglot: Distributed Word Representations for Multilingual NLP | Distributed word representations (word embeddings) have recently contributed
to competitive performance in language modeling and several NLP tasks. In this
work, we train word embeddings for more than 100 languages using their
corresponding Wikipedias. We quantitatively demonstrate the utility of our word
embeddings by using them as the sole features for training a part of speech
tagger for a subset of these languages. We find their performance to be
competitive with near state-of-the-art methods in English, Danish and Swedish.
Moreover, we investigate the semantic features captured by these embeddings
through the proximity of word groupings. We will release these embeddings
publicly to help researchers in the development and enhancement of multilingual
applications.
| [
"['Rami Al-Rfou' 'Bryan Perozzi' 'Steven Skiena']",
"Rami Al-Rfou, Bryan Perozzi, Steven Skiena"
] |
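
One lightweight way to probe "the proximity of word groupings" the abstract mentions is cosine nearest neighbours in the embedding matrix. The sketch below uses a random toy matrix in place of the released Polyglot vectors; the vocabulary and dimensions are made up.

```python
import numpy as np

# Toy stand-ins: in practice the vocabulary and matrix would be loaded from
# released embedding files; values here are random placeholders.
rng = np.random.default_rng(0)
vocab = ["king", "queen", "paris", "london", "apple"]
E = rng.normal(size=(len(vocab), 64))
E /= np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows

def neighbors(word, k=3):
    """Rank the vocabulary by cosine similarity to `word`."""
    sims = E @ E[vocab.index(word)]
    order = np.argsort(-sims)
    return [(vocab[i], float(sims[i])) for i in order[1:k + 1]]

print(neighbors("king"))
```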
stat.ML cs.LG | null | 1307.1674 | null | null | http://arxiv.org/pdf/1307.1674v1 | 2013-07-05T17:39:40Z | 2013-07-05T17:39:40Z | Stochastic Optimization of PCA with Capped MSG | We study PCA as a stochastic optimization problem and propose a novel
stochastic approximation algorithm which we refer to as "Matrix Stochastic
Gradient" (MSG), as well as a practical variant, Capped MSG. We study the
method both theoretically and empirically.
| [
"Raman Arora, Andrew Cotter, and Nathan Srebro",
"['Raman Arora' 'Andrew Cotter' 'Nathan Srebro']"
] |
cs.LG math.OC | 10.1109/CDC.2009.5399685 | 1307.1759 | null | null | http://arxiv.org/abs/1307.1759v2 | 2013-07-09T05:57:37Z | 2013-07-06T07:25:35Z | Approximate dynamic programming using fluid and diffusion approximations
with applications to power management | Neuro-dynamic programming is a class of powerful techniques for approximating
the solution to dynamic programming equations. In their most computationally
attractive formulations, these techniques provide the approximate solution only
within a prescribed finite-dimensional function class. Thus, the question that
always arises is how should the function class be chosen? The goal of this
paper is to propose an approach using the solutions to associated fluid and
diffusion approximations. In order to illustrate this approach, the paper
focuses on an application to dynamic speed scaling for power management in
computer processors.
| [
"Wei Chen, Dayu Huang, Ankur A. Kulkarni, Jayakrishnan Unnikrishnan,\n Quanyan Zhu, Prashant Mehta, Sean Meyn, Adam Wierman",
"['Wei Chen' 'Dayu Huang' 'Ankur A. Kulkarni' 'Jayakrishnan Unnikrishnan'\n 'Quanyan Zhu' 'Prashant Mehta' 'Sean Meyn' 'Adam Wierman']"
] |
stat.ML cs.LG | null | 1307.1769 | null | null | http://arxiv.org/pdf/1307.1769v1 | 2013-07-06T10:17:44Z | 2013-07-06T10:17:44Z | Ensemble Methods for Multi-label Classification | Ensemble methods have been shown to be an effective tool for solving
multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm,
each member of the ensemble is associated with a small randomly-selected subset
of k labels. Then, a single label classifier is trained according to each
combination of elements in the subset. In this paper we adopt a similar
approach, however, instead of randomly choosing subsets, we select the minimum
required subsets of k labels that cover all labels and meet additional
constraints such as coverage of inter-label correlations. Construction of the
cover is achieved by formulating the subset selection as a minimum set covering
problem (SCP) and solving it by using approximation algorithms. Every cover
needs only to be prepared once by offline algorithms. Once prepared, a cover
may be applied to the classification of any given multi-label dataset whose
properties conform with those of the cover. The contribution of this paper is
two-fold. First, we introduce SCP as a general framework for constructing label
covers while allowing the user to incorporate cover construction constraints.
We demonstrate the effectiveness of this framework by proposing two
construction constraints whose enforcement produces covers that improve the
prediction performance of random selection. Second, we provide theoretical
bounds that quantify the probabilities of random selection to produce covers
that meet the proposed construction criteria. The experimental results indicate
that the proposed methods improve multi-label classification accuracy and
stability compared with the RAKEL algorithm and with other state-of-the-art
algorithms.
| [
"['Lior Rokach' 'Alon Schclar' 'Ehud Itach']",
"Lior Rokach, Alon Schclar, Ehud Itach"
] |
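
The core construction step, selecting k-labelsets that cover all labels, can be approximated greedily, which is the classic approximation algorithm for minimum set cover. A minimal sketch, without the paper's additional constraints such as inter-label correlation coverage:

```python
from itertools import combinations

def greedy_label_cover(n_labels, k):
    """Greedy minimum set cover over candidate k-labelsets: repeatedly pick
    the labelset covering the most still-uncovered labels."""
    candidates = list(combinations(range(n_labels), k))
    uncovered, cover = set(range(n_labels)), []
    while uncovered:
        best = max(candidates, key=lambda s: len(uncovered & set(s)))
        cover.append(best)
        uncovered -= set(best)
    return cover

print(greedy_label_cover(n_labels=7, k=3))
```

Each chosen labelset then gets its own single-label classifier over the label combinations, as in RAKEL.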
cs.LG stat.ML | null | 1307.1827 | null | null | http://arxiv.org/pdf/1307.1827v7 | 2016-04-18T09:05:38Z | 2013-07-07T01:38:16Z | Loss minimization and parameter estimation with heavy tails | This work studies applications and generalizations of a simple estimation
technique that provides exponential concentration under heavy-tailed
distributions, assuming only bounded low-order moments. We show that the
technique can be used for approximate minimization of smooth and strongly
convex losses, and specifically for least squares linear regression. For
instance, our $d$-dimensional estimator requires just
$\tilde{O}(d\log(1/\delta))$ random samples to obtain a constant factor
approximation to the optimal least squares loss with probability $1-\delta$,
without requiring the covariates or noise to be bounded or subgaussian. We
provide further applications to sparse linear regression and low-rank
covariance matrix estimation with similar allowances on the noise and covariate
distributions. The core technique is a generalization of the median-of-means
estimator to arbitrary metric spaces.
| [
"['Daniel Hsu' 'Sivan Sabato']",
"Daniel Hsu and Sivan Sabato"
] |
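
For intuition, here is the scalar median-of-means estimator that the paper generalizes to arbitrary metric spaces; the block count and the heavy-tailed test distribution below are arbitrary demo choices.

```python
import numpy as np

def median_of_means(x, n_blocks=10, seed=0):
    """Median-of-means: split the sample into blocks, average within each
    block, then take the median of the block means."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# Heavy-tailed sample (Student t with 2.5 degrees of freedom):
rng = np.random.default_rng(1)
sample = rng.standard_t(df=2.5, size=10_000)
print("plain mean:      ", sample.mean())
print("median of means: ", median_of_means(sample))
```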
cs.LG stat.ML | null | 1307.1954 | null | null | http://arxiv.org/pdf/1307.1954v3 | 2014-02-10T20:39:40Z | 2013-07-08T06:10:58Z | B-tests: Low Variance Kernel Two-Sample Tests | A family of maximum mean discrepancy (MMD) kernel two-sample tests is
introduced. Members of the test family are called Block-tests or B-tests, since
the test statistic is an average over MMDs computed on subsets of the samples.
The choice of block size allows control over the tradeoff between test power
and computation time. In this respect, the $B$-test family combines favorable
properties of previously proposed MMD two-sample tests: B-tests are more
powerful than a linear time test where blocks are just pairs of samples, yet
they are more computationally efficient than a quadratic time test where a
single large block incorporating all the samples is used to compute a
U-statistic. A further important advantage of the B-tests is their
asymptotically Normal null distribution: this is by contrast with the
U-statistic, which is degenerate under the null hypothesis, and for which
estimates of the null distribution are computationally demanding. Recent
results on kernel selection for hypothesis testing transfer seamlessly to the
B-tests, yielding a means to optimize test power via kernel choice.
| [
"['Wojciech Zaremba' 'Arthur Gretton' 'Matthew Blaschko']",
"Wojciech Zaremba (INRIA Saclay - Ile de France, CVN), Arthur Gretton,\n Matthew Blaschko (INRIA Saclay - Ile de France, CVN)"
] |
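
A minimal sketch of the block-averaged statistic: compute the unbiased quadratic-time MMD^2 on disjoint blocks of the two samples and average, so the statistic is asymptotically normal and its standard error can be read off the block values. Kernel, bandwidth and block size below are arbitrary demo choices, not the paper's.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased quadratic-time MMD^2 estimate on one block pair."""
    m = len(X)
    Kxx, Kyy, Kxy = rbf(X, X, gamma), rbf(Y, Y, gamma), rbf(X, Y, gamma)
    np.fill_diagonal(Kxx, 0.0); np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() + Kyy.sum()) / (m * (m - 1)) - 2 * Kxy.mean()

def b_test_statistic(X, Y, block_size=50, gamma=1.0):
    """B-test statistic: average of per-block MMD^2 estimates."""
    n_blocks = len(X) // block_size
    vals = [mmd2_unbiased(X[i * block_size:(i + 1) * block_size],
                          Y[i * block_size:(i + 1) * block_size], gamma)
            for i in range(n_blocks)]
    return float(np.mean(vals)), float(np.std(vals) / np.sqrt(n_blocks))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(1000, 2))
Y = rng.normal(0.5, 1.0, size=(1000, 2))
stat, se = b_test_statistic(X, Y)
print(f"B-test statistic {stat:.4f} (std. err. {se:.4f})")
```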
cs.LG cs.CE | null | 1307.1998 | null | null | http://arxiv.org/pdf/1307.1998v1 | 2013-07-08T09:25:07Z | 2013-07-08T09:25:07Z | Using Clustering to extract Personality Information from socio economic
data | It has become apparent that models that have been applied widely in
economics, including Machine Learning techniques and Data Mining methods,
should take into consideration principles that derive from the theories of
Personality Psychology in order to discover more comprehensive knowledge
regarding complicated economic behaviours. In this work, we present a method to
extract Behavioural Groups by using simple clustering techniques that can
potentially reveal aspects of the Personalities for their members. We believe
that this is very important because the psychological information regarding the
Personalities of individuals is limited in real world applications and because
it can become a useful tool in improving the traditional models of Knowledge
Economy.
| [
"['Alexandros Ladas' 'Uwe Aickelin' 'Jon Garibaldi' 'Eamonn Ferguson']",
"Alexandros Ladas, Uwe Aickelin, Jon Garibaldi, Eamonn Ferguson"
] |
cs.LG cs.CE | null | 1307.2111 | null | null | http://arxiv.org/pdf/1307.2111v1 | 2013-07-08T14:47:42Z | 2013-07-08T14:47:42Z | Finding the creatures of habit; Clustering households based on their
flexibility in using electricity | Changes in the UK electricity market, particularly with the roll out of smart
meters, will provide greatly increased opportunities for initiatives intended
to change households' electricity usage patterns for the benefit of the overall
system. Users show differences in their regular behaviours and clustering
households into similar groupings based on this variability provides for
efficient targeting of initiatives. Those people who are stuck in a regular
pattern of activity may be the least receptive to an initiative to change
behaviour. A sample of 180 households from the UK are clustered into four
groups as an initial test of the concept and useful, actionable groupings are
found.
| [
"['Ian Dent' 'Tony Craig' 'Uwe Aickelin' 'Tom Rodden']",
"Ian Dent, Tony Craig, Uwe Aickelin, Tom Rodden"
] |
cs.LG | null | 1307.2118 | null | null | http://arxiv.org/pdf/1307.2118v1 | 2013-07-08T15:03:03Z | 2013-07-08T15:03:03Z | A PAC-Bayesian Tutorial with A Dropout Bound | This tutorial gives a concise overview of existing PAC-Bayesian theory
focusing on three generalization bounds. The first is an Occam bound which
handles rules with finite precision parameters and which states that
generalization loss is near training loss when the number of bits needed to
write the rule is small compared to the sample size. The second is a
PAC-Bayesian bound providing a generalization guarantee for posterior
distributions rather than for individual rules. The PAC-Bayesian bound
naturally handles infinite precision rule parameters, $L_2$ regularization,
{\em provides a bound for dropout training}, and defines a natural notion of a
single distinguished PAC-Bayesian posterior distribution. The third bound is a
training-variance bound --- a kind of bias-variance analysis but with bias
replaced by expected training loss. The training-variance bound dominates the
other bounds but is more difficult to interpret. It seems to suggest variance
reduction methods such as bagging and may ultimately provide a more meaningful
analysis of dropouts.
| [
"David McAllester",
"['David McAllester']"
] |
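
For reference, one standard form of the Occam bound described above (my rendering of the usual union-bound-plus-Hoeffding statement, not a quotation from the tutorial): for a rule $h$ describable in $|h|$ bits, trained on $m$ samples, with probability at least $1-\delta$,

```latex
L(h) \;\le\; \hat{L}(h) + \sqrt{\frac{(\ln 2)\,|h| + \ln(1/\delta)}{2m}}
```

which makes precise the statement that generalization loss is near training loss whenever $|h|$ is small compared to $m$.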
q-bio.NC cs.LG q-bio.QM | null | 1307.2150 | null | null | http://arxiv.org/pdf/1307.2150v1 | 2013-07-08T16:30:29Z | 2013-07-08T16:30:29Z | Transmodal Analysis of Neural Signals | Localizing neuronal activity in the brain, both in time and in space, is a
central challenge to advance the understanding of brain function. Because of
the inability of any single neuroimaging technique to cover all aspects at
once, there is a growing interest in combining signals from multiple modalities
in order to benefit from the advantages of each acquisition method. Due to the
complexity and unknown parameterization of any suggested complete model of BOLD
response in functional magnetic resonance imaging (fMRI), the development of a
reliable ultimate fusion approach remains difficult. But besides the primary
goal of superior temporal and spatial resolution, conjoint analysis of data
from multiple imaging modalities can alternatively be used to segregate neural
information from physiological and acquisition noise. In this paper we suggest
a novel methodology which relies on constructing a quantifiable mapping of data
from one modality (electroencephalography; EEG) into another (fMRI), called
transmodal analysis of neural signals (TRANSfusion). TRANSfusion attempts to
map neural data embedded within the EEG signal into its reflection in fMRI
data. Assessing the mapping performance on unseen data allows us to localize
brain areas where a significant portion of the signal could be reliably
reconstructed, hence the areas whose neural activity is reflected in both EEG
and fMRI data. Subsequent analysis of the learnt model allows us to localize
areas associated with specific frequency bands of EEG, or areas functionally
related (connected or coherent) to any given EEG sensor. We demonstrate the
performance of TRANSfusion on artificial and real data from an auditory
experiment. We further speculate on possible alternative uses: cross-modal data
filtering and EEG-driven interpolation of fMRI signals to obtain arbitrarily
high temporal sampling of BOLD.
| [
"Yaroslav O. Halchenko, Michael Hanke, James V. Haxby, Stephen Jose\n Hanson, Christoph S. Herrmann",
"['Yaroslav O. Halchenko' 'Michael Hanke' 'James V. Haxby'\n 'Stephen Jose Hanson' 'Christoph S. Herrmann']"
] |
stat.ML cs.LG | null | 1307.2307 | null | null | http://arxiv.org/pdf/1307.2307v1 | 2013-07-08T23:52:55Z | 2013-07-08T23:52:55Z | Bridging Information Criteria and Parameter Shrinkage for Model
Selection | Model selection based on classical information criteria, such as BIC, is
generally computationally demanding, but its properties are well studied. On
the other hand, model selection based on parameter shrinkage by $\ell_1$-type
penalties is computationally efficient. In this paper we make an attempt to
combine their strengths, and propose a simple approach that penalizes the
likelihood with data-dependent $\ell_1$ penalties as in adaptive Lasso and
exploits a fixed penalization parameter. Even for finite samples, its model
selection results approximately coincide with those based on information
criteria; in particular, we show that in some special cases, this approach and
the corresponding information criterion produce exactly the same model. One can
also consider this approach as a way to directly determine the penalization
parameter in adaptive Lasso to achieve information criteria-like model
selection. As extensions, we apply this idea to complex models including
Gaussian mixture model and mixture of factor analyzers, whose model selection
is traditionally difficult to do; by adopting suitable penalties, we provide
continuous approximators to the corresponding information criteria, which are
easy to optimize and enable efficient model selection.
| [
"['Kun Zhang' 'Heng Peng' 'Laiwan Chan' 'Aapo Hyvarinen']",
"Kun Zhang, Heng Peng, Laiwan Chan, Aapo Hyvarinen"
] |
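
Written out, the penalized likelihood the abstract describes takes the usual adaptive-Lasso form, with data-dependent weights from a pilot estimate $\tilde{\beta}$ and, per the paper's proposal, a fixed penalization parameter $\lambda$ (this rendering is the standard adaptive Lasso, not copied from the paper):

```latex
\hat{\beta} \;=\; \arg\min_{\beta}\; -\log L(\beta)
  \;+\; \lambda \sum_{j} \frac{|\beta_j|}{|\tilde{\beta}_j|}
```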
stat.ML cs.LG | null | 1307.2312 | null | null | http://arxiv.org/pdf/1307.2312v1 | 2013-07-09T00:58:10Z | 2013-07-09T00:58:10Z | Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning | Bayesian network structure learning algorithms with limited data are being
used in domains such as systems biology and neuroscience to gain insight into
the underlying processes that produce observed data. Learning reliable networks
from limited data is difficult, therefore transfer learning can improve the
robustness of learned networks by leveraging data from related tasks. Existing
transfer learning algorithms for Bayesian network structure learning give a
single maximum a posteriori estimate of network models. Yet, many other models
may be equally likely, and so a more informative result is provided by Bayesian
structure discovery. Bayesian structure discovery algorithms estimate posterior
probabilities of structural features, such as edges. We present transfer
learning for Bayesian structure discovery which allows us to explore the shared
and unique structural features among related tasks. Efficient computation
requires that our transfer learning objective factors into local calculations,
which we prove is given by a broad class of transfer biases. Theoretically, we
show the efficiency of our approach. Empirically, we show that compared to
single task learning, transfer learning is better able to positively identify
true edges. We apply the method to whole-brain neuroimaging data.
| [
"['Diane Oyen' 'Terran Lane']",
"Diane Oyen and Terran Lane"
] |
cs.LG cs.AI cs.HC stat.AP stat.ML | null | 1307.2579 | null | null | http://arxiv.org/pdf/1307.2579v1 | 2013-07-09T20:03:51Z | 2013-07-09T20:03:51Z | Tuned Models of Peer Assessment in MOOCs | In massive open online courses (MOOCs), peer grading serves as a critical
tool for scaling the grading of complex, open-ended assignments to courses with
tens or hundreds of thousands of students. But despite promising initial
trials, it does not always deliver accurate results compared to human experts.
In this paper, we develop algorithms for estimating and correcting for grader
biases and reliabilities, showing significant improvement in peer grading
accuracy on real data with 63,199 peer grades from Coursera's HCI course
offerings --- the largest peer grading networks analysed to date. We relate
grader biases and reliabilities to other student factors such as student
engagement, performance, and commenting style. We also show that our
model can lead to more intelligent assignment of graders to gradees.
| [
"['Chris Piech' 'Jonathan Huang' 'Zhenghao Chen' 'Chuong Do' 'Andrew Ng'\n 'Daphne Koller']",
"Chris Piech, Jonathan Huang, Zhenghao Chen, Chuong Do, Andrew Ng,\n Daphne Koller"
] |
stat.ML cs.LG | null | 1307.2611 | null | null | http://arxiv.org/pdf/1307.2611v1 | 2013-07-09T22:07:55Z | 2013-07-09T22:07:55Z | Controlling the Precision-Recall Tradeoff in Differential Dependency
Network Analysis | Graphical models have gained a lot of attention recently as a tool for
learning and representing dependencies among variables in multivariate data.
Often, domain scientists are looking specifically for differences among the
dependency networks of different conditions or populations (e.g. differences
between regulatory networks of different species, or differences between
dependency networks of diseased versus healthy populations). The standard
method for finding these differences is to learn the dependency networks for
each condition independently and compare them. We show that this approach is
prone to high false discovery rates (low precision) that can render the
analysis useless. We then show that by imposing a bias towards learning similar
dependency networks for each condition the false discovery rates can be reduced
to acceptable levels, at the cost of finding a reduced number of differences.
Algorithms developed in the transfer learning literature can be used to vary
the strength of the imposed similarity bias and provide a natural mechanism to
smoothly adjust this differential precision-recall tradeoff to cater to the
requirements of the analysis conducted. We present real case studies
(oncological and neurological) where domain experts use the proposed technique
to extract useful differential networks that shed light on the biological
processes involved in cancer and brain function.
| [
"['Diane Oyen' 'Alexandru Niculescu-Mizil' 'Rachel Ostroff' 'Alex Stewart'\n 'Vincent P. Clark']",
"Diane Oyen, Alexandru Niculescu-Mizil, Rachel Ostroff, Alex Stewart,\n Vincent P. Clark"
] |
stat.ML cs.LG stat.AP | null | 1307.2674 | null | null | http://arxiv.org/pdf/1307.2674v1 | 2013-07-10T05:19:10Z | 2013-07-10T05:19:10Z | Error Rate Bounds in Crowdsourcing Models | Crowdsourcing is an effective tool for human-powered computation on many
tasks challenging for computers. In this paper, we provide finite-sample
exponential bounds on the error rate (in probability and in expectation) of
hyperplane binary labeling rules under the Dawid-Skene crowdsourcing model. The
bounds can be applied to analyze many common prediction methods, including the
majority voting and weighted majority voting. These bound results could be
useful for controlling the error rate and designing better algorithms. We show
that the oracle Maximum A Posteriori (MAP) rule approximately optimizes our
upper bound on the mean error rate for any hyperplane binary labeling rule, and
propose a simple data-driven weighted majority voting (WMV) rule (called
one-step WMV) that attempts to approximate the oracle MAP and has a provable
theoretical guarantee on the error rate. Moreover, we use simulated and real
data to demonstrate that the data-driven EM-MAP rule is a good approximation to
the oracle MAP rule, and to demonstrate that the mean error rate of the
data-driven EM-MAP rule is also bounded by the mean error rate bound of the
oracle MAP rule with estimated parameters plugged into the bound.
| [
"['Hongwei Li' 'Bin Yu' 'Dengyong Zhou']",
"Hongwei Li, Bin Yu and Dengyong Zhou"
] |
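
In the binary Dawid-Skene setting, the natural data-driven weights are per-worker log-odds of estimated accuracy. The sketch below is in the spirit of the paper's one-step WMV (estimate accuracies from a plain majority vote, then reweight); the exact estimator in the paper may differ.

```python
import numpy as np

def one_step_wmv(L):
    """Weighted majority voting with data-driven weights: labels in {-1, +1};
    rows are workers, columns are items."""
    mv = np.sign(L.sum(axis=0))                      # plain majority vote
    acc = (L == mv).mean(axis=1).clip(0.01, 0.99)    # estimated worker accuracy
    w = np.log(acc / (1 - acc))                      # log-odds weights
    return np.sign(w @ L)

rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=500)
skill = rng.uniform(0.55, 0.9, size=11)              # 11 workers of mixed skill
correct = rng.random((11, 500)) < skill[:, None]
L = np.where(correct, truth, -truth)                 # noisy worker labels
print("MV error: ", (np.sign(L.sum(axis=0)) != truth).mean())
print("WMV error:", (one_step_wmv(L) != truth).mean())
```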
cs.DS cs.LG stat.ML | 10.1137/1.9781611973402.94 | 1307.2855 | null | null | http://arxiv.org/abs/1307.2855v2 | 2013-10-13T19:44:03Z | 2013-07-10T17:04:35Z | Flow-Based Algorithms for Local Graph Clustering | Given a subset A of vertices of an undirected graph G, the cut-improvement
problem asks us to find a subset S that is similar to A but has smaller
conductance. A very elegant algorithm for this problem has been given by
Andersen and Lang [AL08] and requires solving a small number of
single-commodity maximum flow computations over the whole graph G. In this
paper, we introduce LocalImprove, the first cut-improvement algorithm that is
local, i.e. that runs in time dependent on the size of the input set A rather
than on the size of the entire graph. Moreover, LocalImprove achieves this
local behaviour while essentially matching the same theoretical guarantee as
the global algorithm of Andersen and Lang.
The main application of LocalImprove is to the design of better
local-graph-partitioning algorithms. All previously known local algorithms for
graph partitioning are random-walk based and can only guarantee an output
conductance of O(\sqrt{OPT}) when the target set has conductance OPT \in [0,1].
Very recently, Zhu, Lattanzi and Mirrokni [ZLM13] improved this to O(OPT /
\sqrt{CONN}) where the internal connectivity parameter CONN \in [0,1] is
defined as the reciprocal of the mixing time of the random walk over the graph
induced by the target set. In this work, we show how to use LocalImprove to
obtain a constant approximation O(OPT) as long as CONN/OPT = Omega(1). This
yields the first flow-based algorithm. Moreover, its performance strictly
outperforms the ones based on random walks and surprisingly matches that of the
best known global algorithm, which is SDP-based, in this parameter regime
[MMV12].
Finally, our results show that spectral methods are not the only viable
approach to the construction of local graph partitioning algorithms and open
the door to the study of algorithms with even better approximation and locality
guarantees.
| [
"['Lorenzo Orecchia' 'Zeyuan Allen Zhu']",
"Lorenzo Orecchia, Zeyuan Allen Zhu"
] |
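
Conductance, the quantity every algorithm above is judged by, is cheap to evaluate for a candidate set. A minimal sketch on a toy adjacency-list graph (my own example, not from the paper):

```python
def conductance(adj, S):
    """cut(S, V\\S) / min(vol(S), vol(V\\S)) for an undirected graph given
    as an adjacency list {vertex: set(neighbours)}."""
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj) - vol_S
    return cut / min(vol_S, vol_rest)

# Two triangles joined by a single edge: {0, 1, 2} is a natural cluster.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(conductance(adj, {0, 1, 2}))   # 1/7
```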
cs.CV cs.LG q-bio.TO stat.ML | 10.1007/978-3-319-05530-5_11 | 1307.2965 | null | null | http://arxiv.org/abs/1307.2965v2 | 2014-04-22T16:01:12Z | 2013-07-11T03:29:51Z | Semantic Context Forests for Learning-Based Knee Cartilage Segmentation
in 3D MR Images | The automatic segmentation of human knee cartilage from 3D MR images is a
useful yet challenging task due to the thin sheet structure of the cartilage
with diffuse boundaries and inhomogeneous intensities. In this paper, we
present an iterative multi-class learning method to segment the femoral, tibial
and patellar cartilage simultaneously, which effectively exploits the spatial
contextual constraints between bone and cartilage, and also between different
cartilages. First, based on the fact that the cartilage grows in only certain
areas of the corresponding bone surface, we extract distance features not only
to the surface of the bone but, more informatively, to the densely registered
anatomical landmarks on the bone surface. Second, we introduce a set of
iterative discriminative classifiers in which, at each iteration, probability
comparison features are constructed from the class confidence maps derived
from previously learned classifiers. These features automatically embed the semantic
context information between different cartilages of interest. Validated on a
total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the
proposed approach demonstrates high robustness and accuracy of segmentation in
comparison with existing state-of-the-art MR cartilage segmentation methods.
| [
"['Quan Wang' 'Dijia Wu' 'Le Lu' 'Meizhu Liu' 'Kim L. Boyer'\n 'Shaohua Kevin Zhou']",
"Quan Wang, Dijia Wu, Le Lu, Meizhu Liu, Kim L. Boyer and Shaohua Kevin\n Zhou"
] |
cs.LG cs.CV stat.ML | null | 1307.2971 | null | null | http://arxiv.org/pdf/1307.2971v1 | 2013-07-11T04:49:11Z | 2013-07-11T04:49:11Z | Accuracy of MAP segmentation with hidden Potts and Markov mesh prior
models via Path Constrained Viterbi Training, Iterated Conditional Modes and
Graph Cut based algorithms | In this paper, we study statistical classification accuracy of two different
Markov field environments for pixelwise image segmentation, considering the
labels of the image as hidden states and solving the estimation of such labels
as a solution of the MAP equation. The emission distribution is assumed the
same in all models, and the difference lies in the Markovian prior hypothesis
made over the labeling random field. The a priori labeling knowledge will be
modeled with a) a second order anisotropic Markov Mesh and b) a classical
isotropic Potts model. Under such models, we will consider three different
segmentation procedures, 2D Path Constrained Viterbi training for the Hidden
Markov Mesh, a Graph Cut based segmentation for the first order isotropic Potts
model, and ICM (Iterated Conditional Modes) for the second order isotropic
Potts model.
We provide a unified view of all three methods, and investigate goodness of
fit for classification, studying the influence of parameter estimation,
computational gain, and extent of automation in the statistical measures
Overall Accuracy, Relative Improvement and Kappa coefficient, allowing robust
and accurate statistical analysis on synthetic and real-life experimental data
coming from the field of Dental Diagnostic Radiography. All algorithms, using
the learned parameters, generate good segmentations with little interaction
when the images have a clear multimodal histogram. Suboptimal learning proves
to be frail in the case of non-distinctive modes, which limits the complexity
of usable models, and hence the achievable error rate as well.
All Matlab code written is provided in a toolbox available for download from
our website, following the Reproducible Research Paradigm.
| [
"['Ana Georgina Flesia' 'Josef Baumgartner' 'Javier Gimenez'\n 'Jorge Martinez']",
"Ana Georgina Flesia, Josef Baumgartner, Javier Gimenez, Jorge Martinez"
] |
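
Of the three procedures compared, ICM is the simplest to sketch: sweep the pixels and greedily re-label each one to minimize its local energy, the Gaussian emission term plus the Potts disagreement penalty. The sketch below is a generic first-order isotropic Potts ICM with made-up parameters, not the paper's toolbox code.

```python
import numpy as np

def icm_potts(img, means, beta=1.0, sigma=1.0, n_sweeps=5):
    """Iterated Conditional Modes for MAP labeling under an isotropic Potts
    prior with Gaussian emissions."""
    H, W = img.shape
    labels = np.abs(img[..., None] - means).argmin(-1)  # init: nearest mean
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    e = (img[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            e += beta * (labels[ni, nj] != k)  # Potts penalty
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0
noisy = clean + rng.normal(0, 0.4, clean.shape)
print((icm_potts(noisy, means=np.array([0.0, 1.0])) == clean).mean())
```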
cs.LG cs.DS stat.ML | null | 1307.3102 | null | null | http://arxiv.org/pdf/1307.3102v4 | 2014-11-05T06:41:07Z | 2013-07-11T13:31:21Z | Statistical Active Learning Algorithms for Noise Tolerance and
Differential Privacy | We describe a framework for designing efficient active learning algorithms
that are tolerant to random classification noise and are
differentially-private. The framework is based on active learning algorithms
that are statistical in the sense that they rely on estimates of expectations
of functions of filtered random examples. It builds on the powerful statistical
query framework of Kearns (1993).
We show that any efficient active statistical learning algorithm can be
automatically converted to an efficient active learning algorithm which is
tolerant to random classification noise as well as other forms of
"uncorrelated" noise. The complexity of the resulting algorithms has
information-theoretically optimal quadratic dependence on $1/(1-2\eta)$, where
$\eta$ is the noise rate.
We show that commonly studied concept classes including thresholds,
rectangles, and linear separators can be efficiently actively learned in our
framework. These results combined with our generic conversion lead to the first
computationally-efficient algorithms for actively learning some of these
concept classes in the presence of random classification noise that provide
exponential improvement in the dependence on the error $\epsilon$ over their
passive counterparts. In addition, we show that our algorithms can be
automatically converted to efficient active differentially-private algorithms.
This leads to the first differentially-private active learning algorithms with
exponential label savings over the passive case.
| [
"Maria Florina Balcan, Vitaly Feldman",
"['Maria Florina Balcan' 'Vitaly Feldman']"
] |
cs.LG stat.ML | null | 1307.3176 | null | null | http://arxiv.org/pdf/1307.3176v4 | 2014-11-20T12:40:48Z | 2013-07-11T16:36:29Z | Fast gradient descent for drifting least squares regression, with
application to bandits | Online learning algorithms require to often recompute least squares
regression estimates of parameters. We study improving the computational
complexity of such algorithms by using stochastic gradient descent (SGD) type
schemes in place of classic regression solvers. We show that SGD schemes
efficiently track the true solutions of the regression problems, even in the
presence of a drift. This finding, coupled with an $O(d)$ improvement in
complexity, where $d$ is the dimension of the data, makes them attractive for
implementation in the big data settings. In the case when strong convexity in
the regression problem is guaranteed, we provide bounds on the error both in
expectation and high probability (the latter is often needed to provide
theoretical guarantees for higher level algorithms), despite the drifting least
squares solution. As an example of this case we prove that the regret
performance of an SGD version of the PEGE linear bandit algorithm
[Rusmevichientong and Tsitsiklis 2010] is worse than that of PEGE itself only
by a factor of $O(\log^4 n)$. When strong convexity of the regression problem
cannot be guaranteed, we investigate using an adaptive regularisation. We make
an empirical study of an adaptively regularised, SGD version of LinUCB [Li et
al. 2010] in a news article recommendation application, which uses the large
scale news recommendation dataset from Yahoo! front page. These experiments
show a large gain in computational complexity, with a consistently low tracking
error and click-through-rate (CTR) performance that is $75\%$ close.
| [
"Nathaniel Korda, Prashanth L.A. and R\\'emi Munos",
"['Nathaniel Korda' 'Prashanth L. A.' 'Rémi Munos']"
] |
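
The core idea, replacing a full least-squares re-solve with one O(d) SGD step per round while the target drifts, fits in a few lines. Step size, drift rate and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, step = 5, 20_000, 0.05
theta_true = rng.normal(size=d)            # slowly drifting target parameter
theta_sgd = np.zeros(d)

for t in range(n):
    theta_true += 1e-4 * rng.normal(size=d)          # parameter drift
    x = rng.normal(size=d)
    y = x @ theta_true + 0.1 * rng.normal()
    # One O(d) SGD step on the squared loss, instead of re-solving the
    # d x d least-squares system at every round:
    theta_sgd += step * (y - x @ theta_sgd) * x

print("tracking error:", np.linalg.norm(theta_sgd - theta_true))
```

A constant (non-decaying) step size is what lets the iterate keep tracking the moving solution instead of freezing on a stale one.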
cs.DS cs.CC cs.LG | null | 1307.3301 | null | null | http://arxiv.org/pdf/1307.3301v3 | 2015-03-30T07:13:28Z | 2013-07-12T00:41:01Z | Optimal Bounds on Approximation of Submodular and XOS Functions by
Juntas | We investigate the approximability of several classes of real-valued
functions by functions of a small number of variables ({\em juntas}). Our main
results are tight bounds on the number of variables required to approximate a
function $f:\{0,1\}^n \rightarrow [0,1]$ within $\ell_2$-error $\epsilon$ over
the uniform distribution: 1. If $f$ is submodular, then it is $\epsilon$-close
to a function of $O(\frac{1}{\epsilon^2} \log \frac{1}{\epsilon})$ variables.
This is an exponential improvement over previously known results. We note that
$\Omega(\frac{1}{\epsilon^2})$ variables are necessary even for linear
functions. 2. If $f$ is fractionally subadditive (XOS) it is $\epsilon$-close
to a function of $2^{O(1/\epsilon^2)}$ variables. This result holds for all
functions with low total $\ell_1$-influence and is a real-valued analogue of
Friedgut's theorem for boolean functions. We show that $2^{\Omega(1/\epsilon)}$
variables are necessary even for XOS functions.
As applications of these results, we provide learning algorithms over the
uniform distribution. For XOS functions, we give a PAC learning algorithm that
runs in time $2^{poly(1/\epsilon)} poly(n)$. For submodular functions we give
an algorithm in the more demanding PMAC learning model (Balcan and Harvey,
2011) which requires a multiplicative $1+\gamma$ factor approximation with
probability at least $1-\epsilon$ over the target distribution. Our uniform
distribution algorithm runs in time $2^{poly(1/(\gamma\epsilon))} poly(n)$.
This is the first algorithm in the PMAC model that over the uniform
distribution can achieve a constant approximation factor arbitrarily close to 1
for all submodular functions. As follows from the lower bounds in (Feldman et
al., 2013) both of these algorithms are close to optimal. We also give
applications for proper learning, testing and agnostic learning with value
queries of these classes.
| [
"['Vitaly Feldman' 'Jan Vondrak']",
"Vitaly Feldman and Jan Vondrak"
] |
cs.CE cs.LG | 10.1109/ICE-CCN.2013.6528554 | 1307.3337 | null | null | http://arxiv.org/abs/1307.3337v1 | 2013-07-12T06:20:59Z | 2013-07-12T06:20:59Z | Unsupervised Gene Expression Data using Enhanced Clustering Method | Microarrays have made it possible to simultaneously monitor the expression
profiles of thousands of genes under various experimental conditions.
Identification of co-expressed genes and coherent patterns is the central goal
in microarray or gene expression data analysis and is an important task in
bioinformatics research. Feature selection is a process to select features
which are more informative. It is one of the important steps in knowledge
discovery. The problem is that not all features are important. Some of the
features may be redundant, and others may be irrelevant and noisy. In this work
the unsupervised Gene selection method and Enhanced Center Initialization
Algorithm (ECIA) with K-Means algorithms have been applied for clustering of
Gene Expression Data. This proposed clustering algorithm overcomes the
drawbacks in terms of specifying the optimal number of clusters and
initialization of good cluster centroids. Experiments on gene expression data
show that the proposed method identifies compact clusters and performs well in
terms of the Silhouette Coefficient cluster measure.
| [
"T.Chandrasekhar, K.Thangavel, E.Elayaraja, E.N.Sathishkumar",
"['T. Chandrasekhar' 'K. Thangavel' 'E. Elayaraja' 'E. N. Sathishkumar']"
] |
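
A minimal version of the evaluation loop the abstract implies, scoring candidate cluster counts by the Silhouette Coefficient rather than fixing the number of clusters in advance. This is generic K-Means on a synthetic expression matrix, not ECIA itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic stand-in for a genes x conditions expression matrix.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(50, 10)) for m in (0.0, 2.0, 4.0)])

# Score candidate cluster counts by the Silhouette Coefficient, the same
# measure the abstract reports.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```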
cs.LG cs.IT math.IT | null | 1307.3457 | null | null | http://arxiv.org/pdf/1307.3457v1 | 2013-07-12T13:49:14Z | 2013-07-12T13:49:14Z | Energy-aware adaptive bi-Lipschitz embeddings | We propose a dimensionality reducing matrix design based on training data
with constraints on its Frobenius norm and number of rows. Our design criterion
is aimed at preserving the distances between the data points in the
dimensionality reduced space as much as possible relative to their distances in
original data space. This approach can be considered as a deterministic
Bi-Lipschitz embedding of the data points. We introduce a scalable learning
algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by
leveraging game theoretic tools. We also provide a generalization
characterization of our matrix based on our sample data. We use compressive
sensing problems as an example application of our problem, where the Frobenius
norm design constraint translates into the sensing energy.
| [
"['Bubacarr Bah' 'Ali Sadeghian' 'Volkan Cevher']",
"Bubacarr Bah, Ali Sadeghian and Volkan Cevher"
] |
cs.CE cs.LG | null | 1307.3549 | null | null | http://arxiv.org/pdf/1307.3549v1 | 2013-07-12T06:43:27Z | 2013-07-12T06:43:27Z | Performance Analysis of Clustering Algorithms for Gene Expression Data | Microarray technology allows thousands of genes to be monitored
simultaneously under various experimental conditions. It is used to
identify co-expressed genes in specific cells or tissues that are actively
used to make proteins. This method is used to analyse gene expression, an
important task in bioinformatics research. Cluster analysis of gene expression
data has proved to be a useful tool for identifying co-expressed genes,
biologically relevant groupings of genes and samples. In this paper we analysed
K-Means with Automatic Generation of Merge Factor for ISODATA (AGMFI) to group
the microarray data sets on the basis of ISODATA. AGMFI generates initial
values for the merge and split factors and the maximum merge count, instead of selecting
efficient values as in ISODATA. The initial seeds for each cluster were
normally chosen either sequentially or randomly. The quality of the final
clusters was found to be influenced by these initial seeds. For the real life
problems, the suitable number of clusters cannot be predicted. To overcome the
above drawback the current research focused on developing the clustering
algorithms without giving the initial number of clusters.
| [
"T.Chandrasekhar, K.Thangavel, E.Elayaraja",
"['T. Chandrasekhar' 'K. Thangavel' 'E. Elayaraja']"
] |
cs.LG stat.ML | null | 1307.3617 | null | null | http://arxiv.org/pdf/1307.3617v2 | 2015-06-12T08:11:27Z | 2013-07-13T07:00:00Z | MCMC Learning | The theory of learning under the uniform distribution is rich and deep, with
connections to cryptography, computational complexity, and the analysis of
boolean functions, to name a few areas. This theory, however, is very limited due
to the fact that the uniform distribution and the corresponding Fourier basis
are rarely encountered as a statistical model.
A family of distributions that vastly generalizes the uniform distribution on
the Boolean cube is that of distributions represented by Markov Random Fields
(MRF). Markov Random Fields are one of the main tools for modeling high
dimensional data in many areas of statistics and machine learning.
In this paper we initiate the investigation of extending central ideas,
methods and algorithms from the theory of learning under the uniform
distribution to the setup of learning concepts given examples from MRF
distributions. In particular, our results establish a novel connection between
properties of MCMC sampling of MRFs and learning under the MRF distribution.
| [
"['Varun Kanade' 'Elchanan Mossel']",
"Varun Kanade, Elchanan Mossel"
] |
cs.LG cs.IR | null | 1307.3673 | null | null | http://arxiv.org/pdf/1307.3673v1 | 2013-07-13T19:29:33Z | 2013-07-13T19:29:33Z | A Data Management Approach for Dataset Selection Using Human Computation | As the number of applications that use machine learning algorithms increases,
the need for labeled data useful for training such algorithms intensifies.
Getting labels typically involves employing humans to do the annotation,
which directly translates to training and working costs. Crowdsourcing
platforms have made labeling cheaper and faster, but they still involve
significant costs, especially for the cases where the potential set of
candidate data to be labeled is large. In this paper we describe a methodology
and a prototype system aiming at addressing this challenge for Web-scale
problems in an industrial setting. We discuss ideas on how to efficiently
select the data to use for training of machine learning algorithms in an
attempt to reduce cost. We show results achieving good performance with reduced
cost by carefully selecting which instances to label. Our proposed algorithm is
presented as part of a framework for managing and generating training datasets,
which includes, among other components, a human computation element.
| [
"Alexandros Ntoulas, Omar Alonso, Vasilis Kandylas",
"['Alexandros Ntoulas' 'Omar Alonso' 'Vasilis Kandylas']"
] |
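
The paper does not spell out its selection rule here, so the sketch below uses margin-based uncertainty sampling as a representative way to "carefully select which instances to label": fit a cheap model on the already-labelled seed set and send the pool items it is least sure about to the annotators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pick_instances_to_label(model, X_pool, budget=10):
    """Select the pool instances the current model is least sure about,
    a common cost-reduction heuristic for building training datasets."""
    proba = model.predict_proba(X_pool)
    margin = np.abs(proba[:, 1] - proba[:, 0])   # small margin = uncertain
    return np.argsort(margin)[:budget]

rng = np.random.default_rng(0)
X_seed = rng.normal(size=(40, 4))
y_seed = (X_seed[:, 0] > 0).astype(int)          # toy seed labels
X_pool = rng.normal(size=(1000, 4))              # unlabeled candidates
model = LogisticRegression().fit(X_seed, y_seed)
print(pick_instances_to_label(model, X_pool))
```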
cs.LG | null | 1307.3675 | null | null | http://arxiv.org/pdf/1307.3675v1 | 2013-07-13T19:38:09Z | 2013-07-13T19:38:09Z | Minimum Error Rate Training and the Convex Hull Semiring | We describe the line search used in the minimum error rate training algorithm
MERT as the "inside score" of a weighted proof forest under a semiring defined
in terms of well-understood operations from computational geometry. This
conception leads to a straightforward complexity analysis of the dynamic
programming MERT algorithms of Macherey et al. (2008) and Kumar et al. (2009)
and practical approaches to implementation.
| [
"['Chris Dyer']",
"Chris Dyer"
] |
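
The computational-geometry operation underlying this semiring is the upper envelope (pointwise max) of lines: each hypothesis' score is linear in the line-search parameter, and the envelope records which hypothesis wins on each interval. A hedged, self-contained sketch of that step only:

```python
def upper_envelope(lines):
    """Upper envelope of lines y = a + b*x, given as (intercept, slope)
    pairs. Returns (start_x, (a, b)) pieces with start_x increasing."""
    lines = sorted(set(lines), key=lambda ab: (ab[1], ab[0]))  # by slope
    hull = []
    for a, b in lines:
        while hull:
            start_x, (a0, b0) = hull[-1]
            if b == b0:                  # same slope: keep higher intercept
                if a <= a0:
                    break
                hull.pop()
                continue
            x = (a0 - a) / (b - b0)      # where the new line overtakes the top
            if x <= start_x:
                hull.pop()               # old top piece is never maximal
            else:
                hull.append((x, (a, b)))
                break
        if not hull:
            hull.append((float("-inf"), (a, b)))
    return hull

print(upper_envelope([(0.0, 1.0), (1.0, 0.0), (-2.0, 2.0)]))
```

On this example the envelope pieces are x < 1 (y = 1), 1 <= x < 2 (y = x), and x >= 2 (y = 2x - 2); MERT's "inside score" combines such envelopes over the proof forest.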
cs.SI cs.LG | null | 1307.3687 | null | null | http://arxiv.org/pdf/1307.3687v1 | 2013-07-14T01:37:48Z | 2013-07-14T01:37:48Z | On Analyzing Estimation Errors due to Constrained Connections in Online
Review Systems | Constrained connection is the phenomenon that a reviewer can only review a
subset of products/services due to narrow range of interests or limited
attention capacity. In this work, we study how constrained connections can
affect estimation performance in online review systems (ORS). We find that
reviewers' constrained connections will cause poor estimation performance, as
measured by both estimation accuracy and the Bayesian Cramér-Rao lower
bound.
| [
"['Junzhou Zhao']",
"Junzhou Zhao"
] |
stat.ML cs.LG | null | 1307.3785 | null | null | http://arxiv.org/pdf/1307.3785v1 | 2013-07-14T22:06:12Z | 2013-07-14T22:06:12Z | Probabilistic inverse reinforcement learning in unknown environments | We consider the problem of learning by demonstration from agents acting in
unknown stochastic Markov environments or games. Our aim is to estimate agent
preferences in order to construct improved policies for the same task that the
agents are trying to solve. To do so, we extend previous probabilistic
approaches for inverse reinforcement learning in known MDPs to the case of
unknown dynamics or opponents. We do this by deriving two simplified
probabilistic models of the demonstrator's policy and utility. For
tractability, we use maximum a posteriori estimation rather than full Bayesian
inference. Under a flat prior, this results in a convex optimisation problem.
We find that the resulting algorithms are highly competitive against a variety
of other methods for inverse reinforcement learning that do have knowledge of
the dynamics.
| [
"['Aristide C. Y. Tossou' 'Christos Dimitrakakis']",
"Aristide C. Y. Tossou and Christos Dimitrakakis"
] |
cs.NE cs.AI cs.CC cs.DM cs.LG | null | 1307.3824 | null | null | http://arxiv.org/pdf/1307.3824v1 | 2013-07-15T06:32:52Z | 2013-07-15T06:32:52Z | The Fundamental Learning Problem that Genetic Algorithms with Uniform
Crossover Solve Efficiently and Repeatedly As Evolution Proceeds | This paper establishes theoretical bona fides for implicit concurrent
multivariate effect evaluation---implicit concurrency for short---a broad and
versatile computational learning efficiency thought to underlie
general-purpose, non-local, noise-tolerant optimization in genetic algorithms
with uniform crossover (UGAs). We demonstrate that implicit concurrency is
indeed a form of efficient learning by showing that it can be used to obtain
close-to-optimal bounds on the time and queries required to approximately
correctly solve a constrained version (k=7, \eta=1/5) of a recognizable
computational learning problem: learning parities with noisy membership
queries. We argue that a UGA that treats the noisy membership query oracle as a
fitness function can be straightforwardly used to approximately correctly learn
the essential attributes in O(log^1.585 n) queries and O(n log^1.585 n) time,
where n is the total number of attributes. Our proof relies on an accessible
symmetry argument and the use of statistical hypothesis testing to reject a
global null hypothesis at the 10^-100 level of significance. It is, to the best
of our knowledge, the first relatively rigorous identification of efficient
computational learning in an evolutionary algorithm on a non-trivial learning
problem.
| [
"Keki M. Burjorjee",
"['Keki M. Burjorjee']"
] |
stat.ML cs.LG | null | 1307.3846 | null | null | http://arxiv.org/pdf/1307.3846v1 | 2013-07-15T07:57:56Z | 2013-07-15T07:57:56Z | Bayesian Structured Prediction Using Gaussian Processes | We introduce a conceptually novel structured prediction model, GPstruct,
which is kernelized, non-parametric and Bayesian, by design. We motivate the
model with respect to existing approaches, among others, conditional random
fields (CRFs), maximum margin Markov networks (M3N), and structured support
vector machines (SVMstruct), which embody only a subset of its properties. We
present an inference procedure based on Markov Chain Monte Carlo. The framework
can be instantiated for a wide range of structured objects such as linear
chains, trees, grids, and other general graphs. As a proof of concept, the
model is benchmarked on several natural language processing tasks and a video
gesture segmentation task involving a linear chain structure. We show
prediction accuracies for GPstruct which are comparable to or exceeding those
of CRFs and SVMstruct.
| [
"Sebastien Bratieres, Novi Quadrianto, Zoubin Ghahramani",
"['Sebastien Bratieres' 'Novi Quadrianto' 'Zoubin Ghahramani']"
] |
cs.LG math.OC stat.ML | null | 1307.3949 | null | null | http://arxiv.org/pdf/1307.3949v2 | 2014-06-24T14:21:00Z | 2013-07-15T14:04:39Z | On Soft Power Diagrams | Many applications in data analysis begin with a set of points in a Euclidean
space that is partitioned into clusters. Common tasks then are to devise a
classifier deciding which of the clusters a new point is associated to, finding
outliers with respect to the clusters, or identifying the type of clustering
used for the partition.
A common kind of clustering is the (balanced) least-squares
assignment with respect to a given set of sites. For such assignments, there is a
'separating power diagram' for which each cluster lies in its own cell.
In the present paper, we aim for efficient algorithms for outlier detection
and the computation of thresholds that measure how similar a clustering is to a
least-squares assignment for fixed sites. For this purpose, we devise a new
model for the computation of a 'soft power diagram', which allows a soft
separation of the clusters with 'point counting properties'; e.g. we are able
to prescribe how many points we want to classify as outliers.
As our results hold for a more general non-convex model of free sites, we
describe it and our proofs in this more general way. Its locally optimal
solutions satisfy the aforementioned point counting properties. For our target
applications that use fixed sites, our algorithms are efficiently solvable to
global optimality by linear programming.
| [
"Steffen Borgwardt",
"['Steffen Borgwardt']"
] |
cs.AI cs.LG stat.ML | null | 1307.3964 | null | null | http://arxiv.org/pdf/1307.3964v1 | 2013-07-15T14:31:44Z | 2013-07-15T14:31:44Z | Learning Markov networks with context-specific independences | Learning the Markov network structure from data is a problem that has
received considerable attention in machine learning, and in many other
application fields. This work focuses on a particular approach for this purpose
called independence-based learning. Such approach guarantees the learning of
the correct structure efficiently, whenever data is sufficient for representing
the underlying distribution. However, an important issue of such approach is
that the learned structures are encoded in an undirected graph. The problem
with graphs is that they cannot encode some types of independence relations,
such as the context-specific independences. They are a particular case of
conditional independences that is true only for a certain assignment of its
conditioning set, in contrast to conditional independences that must hold for
all its assignments. In this work we present CSPC, an independence-based
algorithm for learning structures that encode context-specific independences,
and encoding them in a log-linear model, instead of a graph. The central idea
of CSPC is combining the theoretical guarantees provided by the
independence-based approach with the benefits of representing complex
structures by using features in a log-linear model. We present experiments in a
synthetic case, showing that CSPC is more accurate than state-of-the-art
independence-based algorithms when the underlying distribution contains CSIs.
| [
"Alejandro Edera, Federico Schl\\\"uter, Facundo Bromberg",
"['Alejandro Edera' 'Federico Schlüter' 'Facundo Bromberg']"
] |
cs.LG cs.CV stat.ML | 10.1109/ASRU.2013.6707725 | 1307.4048 | null | null | http://arxiv.org/abs/1307.4048v1 | 2013-07-15T18:39:10Z | 2013-07-15T18:39:10Z | Modified SPLICE and its Extension to Non-Stereo Data for Noise Robust
Speech Recognition | In this paper, a modification to the training process of the popular SPLICE
algorithm has been proposed for noise robust speech recognition. The
modification is based on feature correlations, and enables this stereo-based
algorithm to improve the performance in all noise conditions, especially in
unseen cases. Further, the modified framework is extended to work for
non-stereo datasets where clean and noisy training utterances, but not stereo
counterparts, are required. Finally, an MLLR-based computationally efficient
run-time noise adaptation method in SPLICE framework has been proposed. The
modified SPLICE shows 8.6% absolute improvement over SPLICE in Test C of
Aurora-2 database, and 2.93% overall. The non-stereo method shows 10.37% and 6.93%
absolute improvements over the Aurora-2 and Aurora-4 baseline models, respectively.
Run-time adaptation shows 9.89% absolute improvement in the modified framework as
compared to SPLICE for Test C, and 4.96% overall w.r.t. standard MLLR
adaptation on HMMs.
| [
"D. S. Pavan Kumar, N. Vishnu Prasad, Vikas Joshi, S. Umesh",
"['D. S. Pavan Kumar' 'N. Vishnu Prasad' 'Vikas Joshi' 'S. Umesh']"
] |
cs.LG stat.ML | null | 1307.4145 | null | null | http://arxiv.org/pdf/1307.4145v2 | 2013-07-18T22:57:16Z | 2013-07-16T02:03:51Z | A Safe Screening Rule for Sparse Logistic Regression | The l1-regularized logistic regression (or sparse logistic regression) is a
widely used method for simultaneous classification and feature selection.
Although many recent efforts have been devoted to its efficient implementation,
its application to high dimensional data still poses significant challenges. In
this paper, we present a fast and effective sparse logistic regression
screening rule (Slores) to identify the zero components in the solution vector,
which may lead to a substantial reduction in the number of features to be
entered into the optimization. An appealing feature of Slores is that the data
set needs to be scanned only once to run the screening and its computational
cost is negligible compared to that of solving the sparse logistic regression
problem. Moreover, Slores is independent of solvers for sparse logistic
regression, thus Slores can be integrated with any existing solver to improve
the efficiency. We have evaluated Slores using high-dimensional data sets from
different applications. Extensive experimental results demonstrate that Slores
outperforms the existing state-of-the-art screening rules, and the efficiency of
solving sparse logistic regression is improved by an order of magnitude in general.
| [
"Jie Wang, Jiayu Zhou, Jun Liu, Peter Wonka, Jieping Ye",
"['Jie Wang' 'Jiayu Zhou' 'Jun Liu' 'Peter Wonka' 'Jieping Ye']"
] |
cs.LG stat.ML | null | 1307.4156 | null | null | http://arxiv.org/pdf/1307.4156v1 | 2013-07-16T03:09:13Z | 2013-07-16T03:09:13Z | Efficient Mixed-Norm Regularization: Algorithms and Safe Screening
Methods | Sparse learning has recently received increasing attention in many areas
including machine learning, statistics, and applied mathematics. The mixed-norm
regularization based on the l1/lq norm with q>1 is attractive in many
applications of regression and classification in that it facilitates group
sparsity in the model. The resulting optimization problem is, however,
challenging to solve due to the inherent structure of the mixed-norm
regularization. Existing work deals with special cases with q=1, 2, infinity,
and they cannot be easily extended to the general case. In this paper, we
propose an efficient algorithm based on the accelerated gradient method for
solving the general l1/lq-regularized problem. One key building block of the
proposed algorithm is the l1/lq-regularized Euclidean projection (EP_1q). Our
theoretical analysis reveals the key properties of EP_1q and illustrates why
EP_1q for the general q is significantly more challenging to solve than the
special cases. Based on our theoretical analysis, we develop an efficient
algorithm for EP_1q by solving two zero finding problems. To further improve
the efficiency of solving large dimensional mixed-norm regularized problems, we
propose a screening method which is able to quickly identify the inactive
groups, i.e., groups that have zero components in the solution. This may lead to
a substantial reduction in the number of groups to be entered into the
optimization. An appealing feature of our screening method is that the data set
needs to be scanned only once to run the screening. Compared to that of solving
the mixed-norm regularized problems, the computational cost of our screening
test is negligible. The key of the proposed screening method is an accurate
sensitivity analysis of the dual optimal solution when the regularization
parameter varies. Experimental results demonstrate the efficiency of the
proposed algorithm.
| [
"['Jie Wang' 'Jun Liu' 'Jieping Ye']",
"Jie Wang, Jun Liu, Jieping Ye"
] |
cs.LG cs.AI stat.ML | null | 1307.4514 | null | null | http://arxiv.org/pdf/1307.4514v2 | 2013-07-23T17:42:26Z | 2013-07-17T06:42:00Z | Supervised Metric Learning with Generalization Guarantees | The crucial importance of metrics in machine learning algorithms has led to
an increasing interest in optimizing distance and similarity functions, an area
of research known as metric learning. When data consist of feature vectors, a
large body of work has focused on learning a Mahalanobis distance. Less work
has been devoted to metric learning from structured objects (such as strings or
trees), most of it focusing on optimizing a notion of edit distance. We
identify two important limitations of current metric learning approaches.
First, they make it possible to improve the performance of local algorithms such as
k-nearest neighbors, but metric learning for global algorithms (such as linear
classifiers) has not been studied so far. Second, the question of the
generalization ability of metric learning methods has been largely ignored. In
this thesis, we propose theoretical and algorithmic contributions that address
these limitations. Our first contribution is the derivation of a new kernel
function built from learned edit probabilities. Our second contribution is a
novel framework for learning string and tree edit similarities inspired by the
recent theory of $(\epsilon,\gamma,\tau)$-good similarity functions. Using uniform stability
arguments, we establish theoretical guarantees for the learned similarity that
give a bound on the generalization error of a linear classifier built from that
similarity. In our third contribution, we extend these ideas to metric learning
from feature vectors by proposing a bilinear similarity learning method that
efficiently optimizes the $(\epsilon,\gamma,\tau)$-goodness. Generalization guarantees are
derived for our approach, highlighting that our method minimizes a tighter
bound on the generalization error of the classifier. Our last contribution is a
framework for establishing generalization bounds for a large class of existing
metric learning algorithms based on a notion of algorithmic robustness.
| [
"['Aurélien Bellet']",
"Aur\\'elien Bellet"
] |
cs.LG stat.ML | null | 1307.4564 | null | null | http://arxiv.org/pdf/1307.4564v1 | 2013-07-17T10:24:00Z | 2013-07-17T10:24:00Z | From Bandits to Experts: A Tale of Domination and Independence | We consider the partial observability model for multi-armed bandits,
introduced by Mannor and Shamir. Our main result is a characterization of
regret in the directed observability model in terms of the dominating and
independence numbers of the observability graph. We also show that in the
undirected case, the learner can achieve optimal regret without even accessing
the observability graph before selecting an action. Both results are shown
using variants of the Exp3 algorithm operating on the observability graph in a
time-efficient manner.
| [
"['Noga Alon' 'Nicolò Cesa-Bianchi' 'Claudio Gentile' 'Yishay Mansour']",
"Noga Alon, Nicol\\`o Cesa-Bianchi, Claudio Gentile, Yishay Mansour"
] |
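The two results above are built on variants of Exp3 operating on the observability graph. For orientation, here is a minimal sketch of vanilla Exp3 itself (plain bandit feedback, no graph; the uniform-exploration mixing term of the original algorithm is omitted, and `n_arms`, `eta`, and the toy reward function are illustrative assumptions):

```python
import numpy as np

def exp3(pull, n_arms, T, eta=0.1, rng=np.random.default_rng(0)):
    """Exponential weights with importance-weighted reward estimates."""
    log_w = np.zeros(n_arms)              # log-weights for numerical stability
    total = 0.0
    for _ in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                      # sampling distribution over arms
        arm = rng.choice(n_arms, p=p)
        r = pull(arm)                     # observed reward in [0, 1]
        total += r
        log_w[arm] += eta * r / p[arm]    # unbiased importance-weighted update
    return total

# Toy usage: Bernoulli arms with unknown means.
means = [0.2, 0.5, 0.8]
rng = np.random.default_rng(1)
reward = exp3(lambda a: float(rng.random() < means[a]), n_arms=3, T=5000)
print(reward / 5000)  # average reward should approach the best mean, 0.8
```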
cs.LG math.OC stat.ML | null | 1307.4653 | null | null | http://arxiv.org/pdf/1307.4653v1 | 2013-07-17T14:38:47Z | 2013-07-17T14:38:47Z | A New Convex Relaxation for Tensor Completion | We study the problem of learning a tensor from a set of linear measurements.
A prominent methodology for this problem is based on a generalization of trace
norm regularization, which has been used extensively for learning low rank
matrices, to the tensor setting. In this paper, we highlight some limitations
of this approach and propose an alternative convex relaxation on the Euclidean
ball. We then describe a technique to solve the associated regularization
problem, which builds upon the alternating direction method of multipliers.
Experiments on one synthetic dataset and two real datasets indicate that the
proposed method improves significantly over tensor trace norm regularization in
terms of estimation error, while remaining computationally tractable.
| [
"['Bernardino Romera-Paredes' 'Massimiliano Pontil']",
"Bernardino Romera-Paredes and Massimiliano Pontil"
] |
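For context, the trace-norm baseline this abstract improves upon penalizes the sum of nuclear norms of the mode-k unfoldings of the tensor. A minimal numpy sketch of that baseline objective (not the paper's new relaxation), with a random rank-1 tensor standing in for real data:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def sum_of_nuclear_norms(T):
    """Sum of nuclear norms of all mode-k unfoldings (the standard relaxation)."""
    return sum(np.linalg.norm(unfold(T, k), ord='nuc') for k in range(T.ndim))

# Toy usage: a rank-1 tensor has a small nuclear norm in every unfolding.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(5), rng.standard_normal(6), rng.standard_normal(7)
T = np.einsum('i,j,k->ijk', a, b, c)
print(sum_of_nuclear_norms(T))
```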
cs.LG cs.AI cs.SY stat.ML | null | 1307.4847 | null | null | http://arxiv.org/pdf/1307.4847v4 | 2016-07-06T23:56:50Z | 2013-07-18T07:22:39Z | Efficient Reinforcement Learning in Deterministic Systems with Value
Function Generalization | We consider the problem of reinforcement learning over episodes of a
finite-horizon deterministic system and as a solution propose optimistic
constraint propagation (OCP), an algorithm designed to synthesize efficient
exploration and value function generalization. We establish that when the true
value function lies within a given hypothesis class, OCP selects optimal
actions over all but at most K episodes, where K is the eluder dimension of the
given hypothesis class. We establish further efficiency and asymptotic
performance guarantees that apply even if the true value function does not lie
in the given hypothesis class, for the special case where the hypothesis class
is the span of pre-specified indicator functions over disjoint sets. We also
discuss the computational complexity of OCP and present computational results
involving two illustrative examples.
| [
"Zheng Wen and Benjamin Van Roy",
"['Zheng Wen' 'Benjamin Van Roy']"
] |
stat.ML cs.IT cs.LG math.IT | null | 1307.4891 | null | null | http://arxiv.org/pdf/1307.4891v4 | 2015-08-21T13:53:51Z | 2013-07-18T10:08:47Z | Robust Subspace Clustering via Thresholding | The problem of clustering noisy and incompletely observed high-dimensional
data points into a union of low-dimensional subspaces and a set of outliers is
considered. The number of subspaces, their dimensions, and their orientations
are assumed unknown. We propose a simple low-complexity subspace clustering
algorithm, which applies spectral clustering to an adjacency matrix obtained by
thresholding the correlations between data points. In other words, the
adjacency matrix is constructed from the nearest neighbors of each data point
in spherical distance. A statistical performance analysis shows that the
algorithm exhibits robustness to additive noise and succeeds even when the
subspaces intersect. Specifically, our results reveal an explicit tradeoff
between the affinity of the subspaces and the tolerable noise level. We
furthermore prove that the algorithm succeeds even when the data points are
incompletely observed with the number of missing entries allowed to be (up to a
log-factor) linear in the ambient dimension. We also propose a simple scheme
that provably detects outliers, and we present numerical results on real and
synthetic data.
| [
"['Reinhard Heckel' 'Helmut Bölcskei']",
"Reinhard Heckel and Helmut B\\\"olcskei"
] |
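A minimal sketch of the thresholding pipeline this abstract describes: normalize the points, keep the q largest absolute correlations per point as an adjacency matrix, and apply spectral clustering. The choice q=5 and the toy data are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def threshold_subspace_cluster(X, n_clusters, q=5):
    """X: (dim, N) data matrix whose columns are the data points."""
    Xn = X / np.linalg.norm(X, axis=0, keepdims=True)   # project onto the sphere
    C = np.abs(Xn.T @ Xn)                               # pairwise |correlations|
    np.fill_diagonal(C, 0.0)
    A = np.zeros_like(C)
    for i in range(C.shape[0]):                         # keep q largest per point
        nn = np.argsort(C[i])[-q:]
        A[i, nn] = C[i, nn]
    A = np.maximum(A, A.T)                              # symmetrize adjacency
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(A)

# Toy usage: points drawn from two 2-dimensional subspaces of R^10.
rng = np.random.default_rng(0)
U1, U2 = rng.standard_normal((10, 2)), rng.standard_normal((10, 2))
X = np.hstack([U1 @ rng.standard_normal((2, 40)), U2 @ rng.standard_normal((2, 40))])
print(threshold_subspace_cluster(X, n_clusters=2)[:10])  # first 40 points share a label
```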
cs.LG | null | 1307.5101 | null | null | http://arxiv.org/pdf/1307.5101v3 | 2013-11-25T16:57:43Z | 2013-07-18T23:55:55Z | Large-scale Multi-label Learning with Missing Labels | The multi-label classification problem has generated significant interest in
recent years. However, existing approaches do not adequately address two key
challenges: (a) the ability to tackle problems with a large number (say
millions) of labels, and (b) the ability to handle data with missing labels. In
this paper, we directly address both these problems by studying the multi-label
problem in a generic empirical risk minimization (ERM) framework. Our
framework, despite being simple, is surprisingly able to encompass several
recent label-compression based methods which can be derived as special cases of
our method. To optimize the ERM problem, we develop techniques that exploit the
structure of specific loss functions - such as the squared loss function - to
offer efficient algorithms. We further show that our learning framework admits
formal excess risk bounds even in the presence of missing labels. Our risk
bounds are tight and demonstrate better generalization performance for low-rank
promoting trace-norm regularization when compared to (rank insensitive)
Frobenius norm regularization. Finally, we present extensive empirical results
on a variety of benchmark datasets and show that our methods perform
significantly better than existing label compression based methods and can
scale up to very large datasets such as the Wikipedia dataset.
| [
"Hsiang-Fu Yu and Prateek Jain and Purushottam Kar and Inderjit S.\n Dhillon",
"['Hsiang-Fu Yu' 'Prateek Jain' 'Purushottam Kar' 'Inderjit S. Dhillon']"
] |
stat.ML cs.LG | null | 1307.5118 | null | null | http://arxiv.org/pdf/1307.5118v1 | 2013-07-19T03:00:39Z | 2013-07-19T03:00:39Z | Model-Based Policy Gradients with Parameter-Based Exploration by
Least-Squares Conditional Density Estimation | The goal of reinforcement learning (RL) is to let an agent learn an optimal
control policy in an unknown environment so that future expected rewards are
maximized. The model-free RL approach directly learns the policy based on data
samples. Although using many samples tends to improve the accuracy of policy
learning, collecting a large number of samples is often expensive in practice.
On the other hand, the model-based RL approach first estimates the transition
model of the environment and then learns the policy based on the estimated
transition model. Thus, if the transition model is accurately learned from a
small amount of data, the model-based approach can perform better than the
model-free approach. In this paper, we propose a novel model-based RL method by
combining a recently proposed model-free policy search method called policy
gradients with parameter-based exploration and the state-of-the-art transition
model estimator called least-squares conditional density estimation. Through
experiments, we demonstrate the practical usefulness of the proposed method.
| [
"Syogo Mori, Voot Tangkaratt, Tingting Zhao, Jun Morimoto, and Masashi\n Sugiyama",
"['Syogo Mori' 'Voot Tangkaratt' 'Tingting Zhao' 'Jun Morimoto'\n 'Masashi Sugiyama']"
] |
cs.CV cs.LG stat.ML | null | 1307.5161 | null | null | http://arxiv.org/pdf/1307.5161v2 | 2014-03-28T08:49:17Z | 2013-07-19T08:47:32Z | Random Binary Mappings for Kernel Learning and Efficient SVM | Support Vector Machines (SVMs) are powerful learners that have led to
state-of-the-art results in various computer vision problems. SVMs suffer from
various drawbacks in terms of selecting the right kernel, which depends on the
image descriptors, as well as computational and memory efficiency. This paper
introduces a novel kernel that addresses these issues well. The kernel is learned
by exploiting a large number of low-complexity, randomized binary mappings of the
input features. This leads to an efficient SVM, while also alleviating the task
of kernel selection. We demonstrate the capabilities of our kernel on 6
standard vision benchmarks, in which we combine several common image
descriptors, namely histograms (Flowers17 and Daimler), attribute-like
descriptors (UCI, OSR, and a-VOC08), and Sparse Quantization (ImageNet).
Results show that our kernel learning adapts well to the different descriptor
types, achieving the performance of the kernels specifically tuned for each
image descriptor, and with similar evaluation cost as efficient SVM methods.
| [
"Gemma Roig, Xavier Boix, Luc Van Gool",
"['Gemma Roig' 'Xavier Boix' 'Luc Van Gool']"
] |
stat.ML cs.LG | null | 1307.5302 | null | null | http://arxiv.org/pdf/1307.5302v3 | 2014-06-12T22:30:05Z | 2013-07-19T18:26:34Z | Kernel Adaptive Metropolis-Hastings | A Kernel Adaptive Metropolis-Hastings algorithm is introduced, for the
purpose of sampling from a target distribution with strongly nonlinear support.
The algorithm embeds the trajectory of the Markov chain into a reproducing
kernel Hilbert space (RKHS), such that the feature space covariance of the
samples informs the choice of proposal. The procedure is computationally
efficient and straightforward to implement, since the RKHS moves can be
integrated out analytically: our proposal distribution in the original space is
a normal distribution whose mean and covariance depend on where the current
sample lies in the support of the target distribution, and adapts to its local
covariance structure. Furthermore, the procedure requires neither gradients nor
any other higher order information about the target, making it particularly
attractive for contexts such as Pseudo-Marginal MCMC. Kernel Adaptive
Metropolis-Hastings outperforms competing fixed and adaptive samplers on
multivariate, highly nonlinear target distributions, arising in both real-world
and synthetic examples. Code may be downloaded at
https://github.com/karlnapf/kameleon-mcmc.
| [
"Dino Sejdinovic, Heiko Strathmann, Maria Lomeli Garcia, Christophe\n Andrieu, Arthur Gretton",
"['Dino Sejdinovic' 'Heiko Strathmann' 'Maria Lomeli Garcia'\n 'Christophe Andrieu' 'Arthur Gretton']"
] |
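The paper's sampler adapts the proposal to the local covariance of the target via an RKHS embedding. As a simpler point of reference, the classic adaptive Metropolis sampler adapts a single global proposal covariance from the chain history; a minimal sketch of that baseline (not the kernel method itself) follows, with an illustrative refit schedule:

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_steps, rng=np.random.default_rng(0)):
    """Random-walk Metropolis whose proposal covariance tracks the chain."""
    d = len(x0)
    samples = [np.array(x0, dtype=float)]
    cov = np.eye(d)
    for t in range(n_steps):
        if t > 2 * d and t % 50 == 0:            # periodically refit covariance
            cov = np.cov(np.array(samples).T) + 1e-6 * np.eye(d)
        x = samples[-1]
        prop = rng.multivariate_normal(x, (2.38**2 / d) * cov)
        if np.log(rng.random()) < log_target(prop) - log_target(x):
            samples.append(prop)                 # accept
        else:
            samples.append(x.copy())             # reject: stay put
    return np.array(samples)

# Toy usage: a strongly correlated 2-D Gaussian target.
P = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
chain = adaptive_metropolis(lambda x: -0.5 * x @ P @ x, [3.0, -3.0], 5000)
print(chain.mean(axis=0))   # should be near [0, 0] after burn-in
```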
cs.LG | null | 1307.5438 | null | null | http://arxiv.org/pdf/1307.5438v3 | 2014-10-05T04:20:27Z | 2013-07-20T16:40:46Z | Towards Distribution-Free Multi-Armed Bandits with Combinatorial
Strategies | In this paper we study a generalized version of classical multi-armed bandits
(MABs) problem by allowing for arbitrary constraints on constituent bandits at
each decision point. The motivation of this study comes from many situations
that involve repeatedly making choices subject to arbitrary constraints in an
uncertain environment: for instance, regularly deciding which advertisements to
display online in order to gain high click-through-rate without knowing user
preferences, or what route to drive home each day under uncertain weather and
traffic conditions. Assume that there are $K$ unknown random variables (RVs),
i.e., arms, each evolving as an \emph{i.i.d} stochastic process over time. At
each decision epoch, we select a strategy, i.e., a subset of RVs, subject to
arbitrary constraints on constituent RVs.
We then gain a reward that is a linear combination of observations on
selected RVs.
The performance of prior results for this problem heavily depends on the
distribution of strategies generated by the corresponding learning policy. For
example, if the reward difference between the best and second-best strategy
approaches zero, prior results may lead to arbitrarily large regret.
Meanwhile, when there is an exponential number of possible strategies at each
decision point, a naive extension of a prior distribution-free policy would cause
poor performance in terms of regret, computation and space complexity.
To this end, we propose an efficient Distribution-Free Learning (DFL) policy
that achieves zero regret, regardless of the probability distribution of the
resultant strategies.
Our learning policy has both $O(K)$ time complexity and $O(K)$ space
complexity. Further, we show that even if finding the optimal
strategy at each decision point is NP-hard, our policy still allows for
approximate solutions while retaining near-zero regret.
| [
"['Xiang-yang Li' 'Shaojie Tang' 'Yaqin Zhou']",
"Xiang-yang Li, Shaojie Tang and Yaqin Zhou"
] |
math.PR cs.LG stat.ML | 10.1287/opre.2015.1408 | 1307.5449 | null | null | http://arxiv.org/abs/1307.5449v2 | 2014-12-22T22:45:18Z | 2013-07-20T18:46:01Z | Non-stationary Stochastic Optimization | We consider a non-stationary variant of a sequential stochastic optimization
problem, in which the underlying cost functions may change along the horizon.
We propose a measure, termed variation budget, that controls the extent of said
change, and study how restrictions on this budget impact achievable
performance. We identify sharp conditions under which it is possible to achieve
long-run-average optimality and more refined performance measures such as rate
optimality that fully characterize the complexity of such problems. In doing
so, we also establish a strong connection between two rather disparate strands
of literature: adversarial online convex optimization; and the more traditional
stochastic approximation paradigm (couched in a non-stationary setting). This
connection is the key to deriving well performing policies in the latter, by
leveraging structure of optimal policies in the former. Finally, tight bounds
on the minimax regret allow us to quantify the "price of non-stationarity,"
which mathematically captures the added complexity embedded in a temporally
changing environment versus a stationary one.
| [
"O. Besbes, Y. Gur, and A. Zeevi",
"['O. Besbes' 'Y. Gur' 'A. Zeevi']"
] |
cs.NA cs.LG stat.ML | null | 1307.5494 | null | null | http://arxiv.org/pdf/1307.5494v1 | 2013-07-21T03:47:16Z | 2013-07-21T03:47:16Z | On GROUSE and Incremental SVD | GROUSE (Grassmannian Rank-One Update Subspace Estimation) is an incremental
algorithm for identifying a subspace of Rn from a sequence of vectors in this
subspace, where only a subset of components of each vector is revealed at each
iteration. Recent analysis has shown that GROUSE converges locally at an
expected linear rate, under certain assumptions. GROUSE has a similar flavor to
the incremental singular value decomposition algorithm, which updates the SVD
of a matrix following addition of a single column. In this paper, we modify the
incremental SVD approach to handle missing data, and demonstrate that this
modified approach is equivalent to GROUSE, for a certain choice of an
algorithmic parameter.
| [
"['Laura Balzano' 'Stephen J. Wright']",
"Laura Balzano and Stephen J. Wright"
] |
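A minimal sketch of one GROUSE update as it is usually stated: solve a least-squares problem on the observed entries, form the residual, and rotate the subspace along a geodesic. The constant step size `eta` and the toy tracking loop are illustrative assumptions:

```python
import numpy as np

def grouse_step(U, v, omega, eta=0.1):
    """One GROUSE update. U: (n, d) orthonormal basis; v: (n,) vector;
    omega: indices of the observed entries of v."""
    w, *_ = np.linalg.lstsq(U[omega], v[omega], rcond=None)  # LS weights
    p = U @ w                                                # predicted full vector
    r = np.zeros(len(v))
    r[omega] = v[omega] - p[omega]                           # residual on observed set
    pn, rn, wn = np.linalg.norm(p), np.linalg.norm(r), np.linalg.norm(w)
    if rn < 1e-12 or wn < 1e-12:
        return U
    sigma = rn * pn
    # Rank-one geodesic rotation of the subspace basis.
    return U + np.outer((np.cos(sigma * eta) - 1) * p / pn
                        + np.sin(sigma * eta) * r / rn, w / wn)

# Toy usage: track a 2-D subspace of R^20 from 50%-observed vectors.
rng = np.random.default_rng(0)
Utrue = np.linalg.qr(rng.standard_normal((20, 2)))[0]
U = np.linalg.qr(rng.standard_normal((20, 2)))[0]
for _ in range(2000):
    v = Utrue @ rng.standard_normal(2)
    omega = rng.choice(20, size=10, replace=False)
    U = grouse_step(U, v, omega)
print(np.linalg.norm(Utrue @ Utrue.T - U @ U.T))  # small once the estimate aligns
```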
cs.LG | null | 1307.5497 | null | null | http://arxiv.org/pdf/1307.5497v1 | 2013-07-21T06:06:13Z | 2013-07-21T06:06:13Z | A scalable stage-wise approach to large-margin multi-class loss based
boosting | We present a scalable and effective classification model to train multi-class
boosting for multi-class classification problems. Shen and Hao introduced a
direct formulation of multi-class boosting in the sense that it directly
maximizes the multi-class margin [C. Shen and Z. Hao, "A direct formulation
for totally-corrective multi-class boosting", in Proc. IEEE Conf. Comp. Vis.
Patt. Recogn., 2011]. The major problem of their approach is its high
computational complexity for training, which hampers its application on
real-world problems. In this work, we propose a scalable and simple stage-wise
multi-class boosting method, which also directly maximizes the multi-class
margin. Our approach offers a few advantages: 1) it is simple and
computationally efficient to train. The approach can speed up the training time
by more than two orders of magnitude without sacrificing the classification
accuracy. 2) Like traditional AdaBoost, it is less sensitive to the choice of
parameters and empirically demonstrates excellent generalization performance.
Experimental results on challenging multi-class machine learning and vision
tasks demonstrate that the proposed approach substantially improves the
convergence rate and accuracy of the final visual detector at no additional
computational cost compared to existing multi-class boosting.
| [
"['Sakrapee Paisitkriangkrai' 'Chunhua Shen' 'Anton van den Hengel']",
"Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel"
] |
cs.LG stat.ML | null | 1307.5599 | null | null | http://arxiv.org/pdf/1307.5599v1 | 2013-07-22T06:50:21Z | 2013-07-22T06:50:21Z | Performance comparison of State-of-the-art Missing Value Imputation
Algorithms on Some Benchmark Datasets | Decision making from data involves identifying a set of attributes that
contribute to effective decision making through computational intelligence. The
presence of missing values greatly influences the selection of the right set of
attributes, and this degrades the classification accuracy of the
classifiers. As missing values are quite common in the data collection phase of
field experiments or clinical trials, appropriate handling would improve the
classifier performance. In this paper we present a review of recently developed
missing value imputation algorithms and compare their performance on some
benchmark datasets.
| [
"M. Naresh Kumar",
"['M. Naresh Kumar']"
] |
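In the spirit of the comparison described above, a minimal sketch that benchmarks two common imputers on artificially masked data; the Iris dataset, the 20% masking rate, and the particular imputers are illustrative assumptions, not the paper's protocol:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import SimpleImputer, KNNImputer

X = load_iris().data.copy()
rng = np.random.default_rng(0)
mask = rng.random(X.shape) < 0.2          # hide 20% of entries
X_missing = X.copy()
X_missing[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("kNN", KNNImputer(n_neighbors=5))]:
    X_hat = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))  # error on hidden entries
    print(f"{name} imputation RMSE: {rmse:.3f}")
```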
cs.DS cs.DM cs.LG math.OC | null | 1307.5697 | null | null | http://arxiv.org/pdf/1307.5697v2 | 2014-04-30T13:28:47Z | 2013-07-22T13:34:44Z | Dimension Reduction via Colour Refinement | Colour refinement is a basic algorithmic routine for graph isomorphism
testing, appearing as a subroutine in almost all practical isomorphism solvers.
It partitions the vertices of a graph into "colour classes" in such a way that
all vertices in the same colour class have the same number of neighbours in
every colour class. Tinhofer (Disc. App. Math., 1991), Ramana, Scheinerman, and
Ullman (Disc. Math., 1994) and Godsil (Lin. Alg. and its App., 1997)
established a tight correspondence between colour refinement and fractional
isomorphisms of graphs, which are solutions to the LP relaxation of a natural
ILP formulation of graph isomorphism.
We introduce a version of colour refinement for matrices and extend existing
quasilinear algorithms for computing the colour classes. Then we generalise the
correspondence between colour refinement and fractional automorphisms and
develop a theory of fractional automorphisms and isomorphisms of matrices.
We apply our results to reduce the dimensions of systems of linear equations
and linear programs. Specifically, we show that any given LP L can efficiently
be transformed into a (potentially) smaller LP L' whose number of variables and
constraints is the number of colour classes of the colour refinement algorithm,
applied to a matrix associated with the LP. The transformation is such that we
can easily (by a linear mapping) map both feasible and optimal solutions back
and forth between the two LPs. We demonstrate empirically that colour
refinement can indeed greatly reduce the cost of solving linear programs.
| [
"['Martin Grohe' 'Kristian Kersting' 'Martin Mladenov' 'Erkal Selman']",
"Martin Grohe, Kristian Kersting, Martin Mladenov, Erkal Selman"
] |
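A minimal sketch of the colour refinement routine itself, iterating until every vertex's colour determines the multiset of its neighbours' colours; the matrix generalization and the LP reduction from the paper are beyond this sketch:

```python
def colour_refinement(adj):
    """adj: dict node -> list of neighbours. Returns the stable colouring."""
    colour = {v: 0 for v in adj}                      # start monochromatic
    while True:
        # New signature = old colour + multiset of neighbour colours.
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        # Canonically renumber the signatures.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colour:                             # fixed point reached
            return colour
        colour = new

# Toy usage: a path on 4 vertices; endpoints get one class, inner vertices another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(colour_refinement(path))   # e.g. {0: 0, 1: 1, 2: 1, 3: 0}
```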
cs.LG | null | 1307.5730 | null | null | http://arxiv.org/pdf/1307.5730v1 | 2013-07-22T14:36:03Z | 2013-07-22T14:36:03Z | A New Strategy of Cost-Free Learning in the Class Imbalance Problem | In this work, we define cost-free learning (CFL) formally in comparison with
cost-sensitive learning (CSL). The main difference between them is that a CFL
approach seeks optimal classification results without requiring any cost
information, even in the class imbalance problem. In fact, several CFL
approaches exist in the related studies, such as sampling and some
criteria-based approaches. However, to the best of our knowledge, none of the existing
CFL and CSL approaches are able to process the abstaining classifications
properly when no information is given about errors and rejects. Based on
information theory, we propose a novel CFL which seeks to maximize normalized
mutual information between the targets and the decision outputs of classifiers.
Using the strategy, we can deal with binary/multi-class classifications
with/without abstaining. Significant features are observed from the new
strategy. While the degree of class imbalance is changing, the proposed
strategy is able to balance the errors and rejects accordingly and
automatically. Another advantage of the strategy is its ability of deriving
optimal rejection thresholds for abstaining classifications and the
"equivalent" costs in binary classifications. The connection between rejection
thresholds and ROC curve is explored. Empirical investigation is made on
several benchmark data sets in comparison with other existing approaches. The
classification results demonstrate a promising perspective of the strategy in
machine learning.
| [
"Xiaowan Zhang and Bao-Gang Hu",
"['Xiaowan Zhang' 'Bao-Gang Hu']"
] |
stat.ML cs.LG | null | 1307.5870 | null | null | http://arxiv.org/pdf/1307.5870v2 | 2013-08-15T05:59:52Z | 2013-07-22T20:23:29Z | Square Deal: Lower Bounds and Improved Relaxations for Tensor Recovery | Recovering a low-rank tensor from incomplete information is a recurring
problem in signal processing and machine learning. The most popular convex
relaxation of this problem minimizes the sum of the nuclear norms of the
unfoldings of the tensor. We show that this approach can be substantially
suboptimal: reliably recovering a $K$-way tensor of length $n$ and Tucker rank
$r$ from Gaussian measurements requires $\Omega(r n^{K-1})$ observations. In
contrast, a certain (intractable) nonconvex formulation needs only $O(r^K +
nrK)$ observations. We introduce a very simple, new convex relaxation, which
partially bridges this gap. Our new formulation succeeds with $O(r^{\lfloor K/2
\rfloor}n^{\lceil K/2 \rceil})$ observations. While these results pertain to
Gaussian measurements, simulations strongly suggest that the new norm also
outperforms the sum of nuclear norms for tensor completion from a random subset
of entries.
Our lower bound for the sum-of-nuclear-norms model follows from a new result
on recovering signals with multiple sparse structures (e.g. sparse, low rank),
which perhaps surprisingly demonstrates the significant suboptimality of the
commonly used recovery approach via minimizing the sum of individual sparsity
inducing norms (e.g. $l_1$, nuclear norm). Our new formulation for low-rank
tensor recovery however opens the possibility in reducing the sample complexity
by exploiting several structures jointly.
| [
"['Cun Mu' 'Bo Huang' 'John Wright' 'Donald Goldfarb']",
"Cun Mu, Bo Huang, John Wright, Donald Goldfarb"
] |
cs.DS cs.LG math.OC | null | 1307.5934 | null | null | http://arxiv.org/pdf/1307.5934v3 | 2015-06-06T09:43:57Z | 2013-07-23T03:24:28Z | A Near-Optimal Dynamic Learning Algorithm for Online Matching Problems
with Concave Returns | We consider an online matching problem with concave returns. This problem is
a significant generalization of the Adwords allocation problem and has vast
applications in online advertising. In this problem, a sequence of items arrive
sequentially and each has to be allocated to one of the bidders, who bid a
certain value for each item. At each time, the decision maker has to allocate
the current item to one of the bidders without knowing the future bids and the
objective is to maximize the sum of some concave functions of each bidder's
aggregate value. In this work, we propose an algorithm that achieves
near-optimal performance for this problem when the bids arrive in a random
order and the input data satisfies certain conditions. The key idea of our
algorithm is to learn the input data pattern dynamically: we solve a sequence
of carefully chosen partial allocation problems and use their optimal solutions
to assist with the future decision. Our analysis belongs to the primal-dual
paradigm, however, the absence of linearity of the objective function and the
dynamic feature of the algorithm makes our analysis quite unique.
| [
"Xiao Alison Chen, Zizhuo Wang",
"['Xiao Alison Chen' 'Zizhuo Wang']"
] |
stat.ML cs.LG math.OC | null | 1307.5944 | null | null | http://arxiv.org/pdf/1307.5944v3 | 2016-01-19T17:14:35Z | 2013-07-23T04:13:44Z | Online Optimization in Dynamic Environments | High-velocity streams of high-dimensional data pose significant "big data"
analysis challenges across a range of applications and settings. Online
learning and online convex programming play a significant role in the rapid
recovery of important or anomalous information from these large datastreams.
While recent advances in online learning have led to novel and rapidly
converging algorithms, these methods are unable to adapt to nonstationary
environments arising in real-world problems. This paper describes a dynamic
mirror descent framework which addresses this challenge, yielding low
theoretical regret bounds and accurate, adaptive, and computationally efficient
algorithms which are applicable to broad classes of problems. The methods are
capable of learning and adapting to an underlying and possibly time-varying
dynamical model. Empirical results in the context of dynamic texture analysis,
solar flare detection, sequential compressed sensing of a dynamic scene,
traffic surveillance, tracking self-exciting point processes and network
behavior in the Enron email corpus support the core theoretical findings.
| [
"['Eric C. Hall' 'Rebecca M. Willett']",
"Eric C. Hall and Rebecca M. Willett"
] |
cs.LG math.OC stat.ML | null | 1307.6134 | null | null | http://arxiv.org/pdf/1307.6134v5 | 2019-12-20T15:27:31Z | 2013-07-23T16:05:13Z | Modeling Human Decision-making in Generalized Gaussian Multi-armed
Bandits | We present a formal model of human decision-making in explore-exploit tasks
using the context of multi-armed bandit problems, where the decision-maker must
choose among multiple options with uncertain rewards. We address the standard
multi-armed bandit problem, the multi-armed bandit problem with transition
costs, and the multi-armed bandit problem on graphs. We focus on the case of
Gaussian rewards in a setting where the decision-maker uses Bayesian inference
to estimate the reward values. We model the decision-maker's prior knowledge
with the Bayesian prior on the mean reward. We develop the upper credible limit
(UCL) algorithm for the standard multi-armed bandit problem and show that this
deterministic algorithm achieves logarithmic cumulative expected regret, which
is optimal performance for uninformative priors. We show how good priors and
good assumptions on the correlation structure among arms can greatly enhance
decision-making performance, even over short time horizons. We extend to the
stochastic UCL algorithm and draw several connections to human decision-making
behavior. We present empirical data from human experiments and show that human
performance is efficiently captured by the stochastic UCL algorithm with
appropriate parameters. For the multi-armed bandit problem with transition
costs and the multi-armed bandit problem on graphs, we generalize the UCL
algorithm to the block UCL algorithm and the graphical block UCL algorithm,
respectively. We show that these algorithms also achieve logarithmic cumulative
expected regret and require a sub-logarithmic expected number of transitions
among arms. We further illustrate the performance of these algorithms with
numerical examples. NB: Appendix G included in this version details minor
modifications that correct for an oversight in the previously-published proofs.
The remainder of the text reflects the published work.
| [
"Paul Reverdy, Vaibhav Srivastava, Naomi E. Leonard",
"['Paul Reverdy' 'Vaibhav Srivastava' 'Naomi E. Leonard']"
] |
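A minimal sketch of the deterministic UCL rule for the standard Gaussian bandit described above: keep a conjugate normal posterior per arm and pull the arm with the largest upper credible limit. The prior parameters, known noise variance, and the 1/(e t) quantile schedule are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def ucl_bandit(pull, n_arms, T, mu0=0.0, var0=100.0, noise_var=1.0):
    """Deterministic upper-credible-limit policy with conjugate Gaussian updates."""
    prec = np.full(n_arms, 1.0 / var0)        # posterior precisions
    mean = np.full(n_arms, mu0)               # posterior means
    for t in range(1, T + 1):
        alpha = 1.0 / (np.e * t)              # shrinking tail probability
        ucl = mean + norm.ppf(1 - alpha) * np.sqrt(1.0 / prec)
        arm = int(np.argmax(ucl))
        r = pull(arm)
        # Conjugate normal update with known noise variance.
        mean[arm] = (prec[arm] * mean[arm] + r / noise_var) / (prec[arm] + 1 / noise_var)
        prec[arm] += 1.0 / noise_var
    return mean

# Toy usage: three Gaussian arms with unit noise.
rng = np.random.default_rng(0)
true_means = np.array([0.0, 0.5, 1.0])
post = ucl_bandit(lambda a: rng.normal(true_means[a], 1.0), 3, 3000)
print(post)   # posterior means approach the true means of frequently pulled arms
```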
stat.ML cs.LG | null | 1307.6143 | null | null | http://arxiv.org/pdf/1307.6143v2 | 2013-07-24T13:26:08Z | 2013-07-23T16:33:00Z | Generative, Fully Bayesian, Gaussian, Openset Pattern Classifier | This report works out the details of a closed-form, fully Bayesian,
multiclass, openset, generative pattern classifier using multivariate Gaussian
likelihoods, with conjugate priors. The generative model has a common
within-class covariance, which is proportional to the between-class covariance
in the conjugate prior. The scalar proportionality constant is the only plugin
parameter. All other model parameters are integrated out in closed form. An
expression is given for the model evidence, which can be used to make plugin
estimates for the proportionality constant. Pattern recognition is done via the
predictive likelihoods of classes for which training data is available, as well
as a predictive likelihood for any as-yet-unseen class.
| [
"['Niko Brummer']",
"Niko Brummer"
] |
cs.AI cs.DB cs.LG | 10.1109/TKDE.2014.2377746 | 1307.6365 | null | null | http://arxiv.org/abs/1307.6365v4 | 2013-12-23T22:26:35Z | 2013-07-24T10:07:50Z | Time-Series Classification Through Histograms of Symbolic Polynomials | Time-series classification has attracted considerable research attention due
to the various domains where time-series data are observed, ranging from
medicine to econometrics. Traditionally, the focus of time-series
classification has been on short time-series data composed of a unique pattern
with intraclass pattern distortions and variations, while recently there have
been attempts to focus on longer series composed of various local patterns.
This study presents a novel method which can detect local patterns in long
time-series via fitting local polynomial functions of arbitrary degrees. The
coefficients of the polynomial functions are converted to symbolic words via
equivolume discretizations of the coefficients' distributions. The symbolic
polynomial words enable the detection of similar local patterns by assigning
the same words to similar polynomials. Moreover, a histogram of the frequencies
of the words is constructed from each time-series' bag of words. Each row of
the histogram enables a new representation for the series and symbolizes the
existence of local patterns and their frequencies. Experimental evidence
demonstrates outstanding results of our method compared to the state-of-the-art
baselines, by exhibiting the best classification accuracies in all the datasets
and having statistically significant improvements in the absolute majority of
experiments.
| [
"Josif Grabocka, Martin Wistuba, Lars Schmidt-Thieme",
"['Josif Grabocka' 'Martin Wistuba' 'Lars Schmidt-Thieme']"
] |
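A minimal sketch of the pipeline this abstract describes: fit low-degree polynomials on sliding windows, discretize each coefficient into equal-frequency ("equivolume") bins to form symbolic words, and represent each series by its word histogram. Window length, step, degree, and alphabet size are illustrative assumptions:

```python
import numpy as np

def polynomial_words(series_list, win=20, step=10, degree=2, n_bins=4):
    """Turn each series into a histogram over symbolic polynomial words."""
    coeffs, owner = [], []
    for i, s in enumerate(series_list):                 # fit windowed polynomials
        for start in range(0, len(s) - win + 1, step):
            coeffs.append(np.polyfit(np.arange(win), s[start:start + win], degree))
            owner.append(i)
    coeffs = np.array(coeffs)
    # Equal-frequency bin edges per coefficient dimension.
    edges = [np.quantile(coeffs[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(coeffs.shape[1])]
    words = [tuple(np.digitize(c[j], edges[j]) for j in range(len(c)))
             for c in coeffs]
    vocab = {w: k for k, w in enumerate(sorted(set(words)))}
    hist = np.zeros((len(series_list), len(vocab)))
    for i, w in zip(owner, words):
        hist[i, vocab[w]] += 1                          # bag-of-words count
    return hist

# Toy usage: a sine and a linear ramp yield clearly different histograms.
t = np.linspace(0, 6 * np.pi, 200)
print(polynomial_words([np.sin(t), np.linspace(0, 1, 200)]))
```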
stat.ML cs.LG | null | 1307.6515 | null | null | http://arxiv.org/pdf/1307.6515v1 | 2013-07-24T18:17:53Z | 2013-07-24T18:17:53Z | Cluster Trees on Manifolds | In this paper we investigate the problem of estimating the cluster tree for a
density $f$ supported on or near a smooth $d$-dimensional manifold $M$
isometrically embedded in $\mathbb{R}^D$. We analyze a modified version of a
$k$-nearest neighbor based algorithm recently proposed by Chaudhuri and
Dasgupta. The main results of this paper show that under mild assumptions on
$f$ and $M$, we obtain rates of convergence that depend on $d$ only but not on
the ambient dimension $D$. We also show that similar (albeit non-algorithmic)
results can be obtained for kernel density estimators. We sketch a construction
of a sample complexity lower bound instance for a natural class of manifold
oblivious clustering algorithms. We further briefly consider the known manifold
case and show that in this case a spatially adaptive algorithm achieves better
rates.
| [
"Sivaraman Balakrishnan, Srivatsan Narayanan, Alessandro Rinaldo, Aarti\n Singh, Larry Wasserman",
"['Sivaraman Balakrishnan' 'Srivatsan Narayanan' 'Alessandro Rinaldo'\n 'Aarti Singh' 'Larry Wasserman']"
] |
cs.LG stat.ML | null | 1307.6616 | null | null | http://arxiv.org/pdf/1307.6616v2 | 2023-06-13T14:21:16Z | 2013-07-25T00:48:04Z | Does generalization performance of $l^q$ regularization learning depend
on $q$? A negative example | $l^q$-regularization has been demonstrated to be an attractive technique in
machine learning and statistical modeling. It attempts to improve the
generalization (prediction) capability of a machine (model) through
appropriately shrinking its coefficients. The shape of a $l^q$ estimator
differs in varying choices of the regularization order $q$. In particular,
$l^1$ leads to the LASSO estimate, while $l^{2}$ corresponds to the smooth
ridge regression. This makes the order $q$ a potential tuning parameter in
applications. To facilitate the use of $l^{q}$-regularization, we intend to
seek for a modeling strategy where an elaborative selection on $q$ is
avoidable. In this spirit, we place our investigation within a general
framework of $l^{q}$-regularized kernel learning under a sample dependent
hypothesis space (SDHS). For a designated class of kernel functions, we show
that all $l^{q}$ estimators for $0< q < \infty$ attain similar generalization
error bounds. These estimated bounds are almost optimal in the sense that up to
a logarithmic factor, the upper and lower bounds are asymptotically identical.
This finding tentatively reveals that, in some modeling contexts, the choice of
$q$ might not have a strong impact in terms of the generalization capability.
From this perspective, $q$ can be arbitrarily specified, or specified merely by
criteria other than generalization, such as smoothness, computational complexity,
or sparsity.
| [
"['Shaobo Lin' 'Chen Xu' 'Jingshan Zeng' 'Jian Fang']",
"Shaobo Lin, Chen Xu, Jingshan Zeng, Jian Fang"
] |
stat.ML cs.LG | null | 1307.6769 | null | null | http://arxiv.org/pdf/1307.6769v2 | 2013-11-20T23:29:01Z | 2013-07-25T15:03:40Z | Streaming Variational Bayes | We present SDA-Bayes, a framework for (S)treaming, (D)istributed,
(A)synchronous computation of a Bayesian posterior. The framework makes
streaming updates to the estimated posterior according to a user-specified
approximation batch primitive. We demonstrate the usefulness of our framework,
with variational Bayes (VB) as the primitive, by fitting the latent Dirichlet
allocation model to two large-scale document collections. We demonstrate the
advantages of our algorithm over stochastic variational inference (SVI) by
comparing the two after a single pass through a known amount of data---a case
where SVI may be applied---and in the streaming setting, where SVI does not
apply.
| [
"['Tamara Broderick' 'Nicholas Boyd' 'Andre Wibisono' 'Ashia C. Wilson'\n 'Michael I. Jordan']",
"Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson,\n Michael I. Jordan"
] |
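The idea underlying frameworks of this kind is that a Bayesian posterior factorizes over data batches, so each batch's (approximate) likelihood term can be folded in as it arrives; for a conjugate exponential-family model the streaming update is exact. A minimal sketch with a Beta-Bernoulli model (the model and batch sizes are illustrative, not the paper's LDA experiments):

```python
import numpy as np

class StreamingBetaBernoulli:
    """Exact streaming posterior for a coin's bias under a Beta prior."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b            # Beta(a, b) prior pseudo-counts

    def update(self, batch):
        # Posterior after a batch = prior + batch sufficient statistics.
        self.a += np.sum(batch)
        self.b += len(batch) - np.sum(batch)

    def posterior_mean(self):
        return self.a / (self.a + self.b)

# Toy usage: minibatches from a coin with bias 0.7 stream in one at a time.
rng = np.random.default_rng(0)
model = StreamingBetaBernoulli()
for _ in range(100):
    model.update(rng.random(50) < 0.7)   # one minibatch of 50 flips
print(model.posterior_mean())            # approaches 0.7
```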
cs.LG | null | 1307.6814 | null | null | http://arxiv.org/pdf/1307.6814v1 | 2013-07-25T17:07:39Z | 2013-07-25T17:07:39Z | A Propound Method for the Improvement of Cluster Quality | In this paper, the Knockout Refinement Algorithm (KRA) is proposed to refine
original clusters obtained by applying the SOM and K-Means clustering algorithms.
The KRA algorithm is based on contingency table concepts. Metrics are computed
for the original and refined clusters, and their quality is compared in terms of
these metrics. The proposed algorithm (KRA) is tested in the educational domain,
and results show that it generates better-quality clusters in terms of improved
metric values.
| [
"Shveta Kundra Bhatia, V.S. Dixit",
"['Shveta Kundra Bhatia' 'V. S. Dixit']"
] |
stat.ML cs.LG | null | 1307.6887 | null | null | http://arxiv.org/pdf/1307.6887v1 | 2013-07-25T22:17:12Z | 2013-07-25T22:17:12Z | Sequential Transfer in Multi-armed Bandit with Finite Set of Models | Learning from prior tasks and transferring that experience to improve future
performance is critical for building lifelong learning agents. Although results
in supervised and reinforcement learning show that transfer may significantly
improve the learning performance, most of the literature on transfer is focused
on batch learning tasks. In this paper we study the problem of
\textit{sequential transfer in online learning}, notably in the multi-armed
bandit framework, where the objective is to minimize the cumulative regret over
a sequence of tasks by incrementally transferring knowledge from prior tasks.
We introduce a novel bandit algorithm based on a method-of-moments approach for
the estimation of the possible tasks and derive regret bounds for it.
| [
"Mohammad Gheshlaghi Azar and Alessandro Lazaric and Emma Brunskill",
"['Mohammad Gheshlaghi Azar' 'Alessandro Lazaric' 'Emma Brunskill']"
] |
cs.LG stat.ML | null | 1307.7024 | null | null | http://arxiv.org/pdf/1307.7024v1 | 2013-07-26T13:02:14Z | 2013-07-26T13:02:14Z | Multi-view Laplacian Support Vector Machines | We propose a new approach, multi-view Laplacian support vector machines
(SVMs), for semi-supervised learning under the multi-view scenario. It
integrates manifold regularization and multi-view regularization into the usual
formulation of SVMs and is a natural extension of SVMs from supervised learning
to multi-view semi-supervised learning. The function optimization problem in a
reproducing kernel Hilbert space is converted to an optimization in a
finite-dimensional Euclidean space. After providing a theoretical bound for the
generalization performance of the proposed method, we further give a
formulation of the empirical Rademacher complexity which affects the bound
significantly. From this bound and the empirical Rademacher complexity, we can
gain insights into the roles played by different regularization terms in the
generalization performance. Experimental results on synthetic and real-world
data sets are presented, which validate the effectiveness of the proposed
multi-view Laplacian SVMs approach.
| [
"Shiliang Sun",
"['Shiliang Sun']"
] |
cs.LG stat.ML | null | 1307.7028 | null | null | http://arxiv.org/pdf/1307.7028v1 | 2013-07-26T13:24:31Z | 2013-07-26T13:24:31Z | Infinite Mixtures of Multivariate Gaussian Processes | This paper presents a new model called infinite mixtures of multivariate
Gaussian processes, which can be used to learn vector-valued functions and
applied to multitask learning. As an extension of the single multivariate
Gaussian process, the mixture model has the advantages of modeling multimodal
data and alleviating the computationally cubic complexity of the multivariate
Gaussian process. A Dirichlet process prior is adopted to allow the (possibly
infinite) number of mixture components to be automatically inferred from
training data, and Markov chain Monte Carlo sampling techniques are used for
parameter and latent variable inference. Preliminary experimental results on
multivariate regression show the feasibility of the proposed model.
| [
"Shiliang Sun",
"['Shiliang Sun']"
] |
cs.LG cs.CE | 10.1504/IJBRA.2015.071940 | 1307.7050 | null | null | http://arxiv.org/abs/1307.7050v1 | 2013-07-26T14:44:16Z | 2013-07-26T14:44:16Z | A Comprehensive Evaluation of Machine Learning Techniques for Cancer
Class Prediction Based on Microarray Data | Prostate cancer is among the most common cancer in males and its
heterogeneity is well known. Its early detection helps in making therapeutic
decisions. There is no standard technique or procedure yet which is foolproof
in predicting cancer class. The genomic-level changes can be detected in gene
expression data, and those changes may serve as a standard model for any random
cancer data for class prediction. Various techniques were applied to the prostate
cancer data set in order to accurately predict the cancer class, including machine
learning techniques. The huge number of attributes and small number of samples in
microarray data lead to poor machine learning; therefore, the most challenging
part is attribute reduction, i.e., removal of non-significant genes. In this work we
have compared several machine learning techniques for their accuracy in
predicting the cancer class. Machine learning is effective when the number of
samples is larger than the number of attributes (genes), which is rarely
the case with gene expression data. Attribute reduction or gene filtering is
absolutely required in order to make the data more meaningful as most of the
genes do not participate in tumor development and are irrelevant for cancer
prediction. Here we have applied combination of statistical techniques such as
inter-quartile range and t-test, which has been effective in filtering
significant genes and minimizing noise from data. Further we have done a
comprehensive evaluation of ten state-of-the-art machine learning techniques
for their accuracy in class prediction of prostate cancer. Out of these
techniques, Bayes Network performed best with an accuracy of 94.11%, followed by
Naive Bayes with an accuracy of 91.17%. To cross-validate our results, we
modified our training dataset in six different ways and found that the average
sensitivity, specificity, precision and accuracy of Bayes Network are the highest
among all the techniques used.
| [
"Khalid Raza, Atif N Hasan",
"['Khalid Raza' 'Atif N Hasan']"
] |
cs.LG math.OC | null | 1307.7192 | null | null | http://arxiv.org/pdf/1307.7192v1 | 2013-07-26T23:27:23Z | 2013-07-26T23:27:23Z | MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth
Optimization | It is well known that the optimal convergence rate for stochastic
optimization of smooth functions is $O(1/\sqrt{T})$, which is the same as
stochastic optimization of Lipschitz continuous convex functions. This is in
contrast to optimizing smooth functions using full gradients, which yields a
convergence rate of $O(1/T^2)$. In this work, we consider a new setup for
optimizing smooth functions, termed as {\bf Mixed Optimization}, which allows
to access both a stochastic oracle and a full gradient oracle. Our goal is to
significantly improve the convergence rate of stochastic optimization of smooth
functions by having an additional small number of accesses to the full gradient
oracle. We show that, with $O(\ln T)$ calls to the full gradient oracle and
$O(T)$ calls to the stochastic oracle, the proposed mixed optimization
algorithm is able to achieve an optimization error of $O(1/T)$.
| [
"['Mehrdad Mahdavi' 'Rong Jin']",
"Mehrdad Mahdavi and Rong Jin"
] |
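The mixed-oracle pattern (a few full gradients anchoring many cheap stochastic steps) is closely related to variance-reduced methods such as SVRG. A minimal SVRG-style sketch on least squares illustrates the oracle trade-off; this is an illustration of the idea, not the paper's MixedGrad algorithm, and the step size and epoch counts are assumptions:

```python
import numpy as np

def svrg_least_squares(A, b, epochs=20, inner=200, lr=0.05,
                       rng=np.random.default_rng(0)):
    """Minimize (1/2n)||Ax - b||^2 with occasional full gradients."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = A.T @ (A @ snapshot - b) / n       # one full-gradient call
        for _ in range(inner):                         # many cheap stochastic calls
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])              # stochastic gradient at x
            gi_snap = A[i] * (A[i] @ snapshot - b[i])  # same sample at the snapshot
            x -= lr * (gi - gi_snap + full_grad)       # variance-reduced step
    return x

# Toy usage: recover a planted solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 10))
x_star = rng.standard_normal(10)
x_hat = svrg_least_squares(A, A @ x_star)
print(np.linalg.norm(x_hat - x_star))   # should be small
```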
cs.LG cs.CR | null | 1307.7286 | null | null | http://arxiv.org/pdf/1307.7286v1 | 2013-07-27T18:00:43Z | 2013-07-27T18:00:43Z | A Review of Machine Learning based Anomaly Detection Techniques | Intrusion detection has been popular for the last two decades; an intrusion is
an attempt to break into or misuse a system. Detection is mainly of two
types: misuse (signature-based) detection and anomaly detection. In this paper,
machine learning based methods, which form one class of anomaly detection
techniques, are discussed.
| [
"['Harjinder Kaur' 'Gurpreet Singh' 'Jaspreet Minhas']",
"Harjinder Kaur, Gurpreet Singh, Jaspreet Minhas"
] |
cs.LG cs.AI | null | 1307.7303 | null | null | http://arxiv.org/pdf/1307.7303v1 | 2013-07-27T20:33:34Z | 2013-07-27T20:33:34Z | Learning to Understand by Evolving Theories | In this paper, we describe an approach that enables an autonomous system to
infer the semantics of a command (i.e. a symbol sequence representing an
action) in terms of the relations between changes in the observations and the
action instances. We present a method of how to induce a theory (i.e. a
semantic description) of the meaning of a command in terms of a minimal set of
background knowledge. The only thing we have is a sequence of observations from
which we extract what kinds of effects were caused by performing the command.
In this way, we obtain a description of the semantics of the action and, hence, a
definition.
| [
"Martin E. Mueller and Madhura D. Thosar",
"['Martin E. Mueller' 'Madhura D. Thosar']"
] |
cs.CY cs.LG | null | 1307.7429 | null | null | http://arxiv.org/pdf/1307.7429v1 | 2013-07-29T01:15:25Z | 2013-07-29T01:15:25Z | Participation anticipating in elections using data mining methods | Anticipating the political behavior of people will be of considerable help to
election candidates in assessing their chances of success and in becoming
aware of the public's motivations for selecting them. In this paper, we
provide a general schematic of the architecture of a participation-anticipating
system for presidential elections using KNN, Classification Tree and Na\"ive
Bayes with the Orange tool, based on the CRISP methodology, which gave promising
output. To test and assess the proposed model, we conduct a case study by
selecting 100 qualified persons who attended the 11th presidential election of
the Islamic Republic of Iran and anticipate their participation in Kohkiloye &
Boyerahmad. We show that KNN can perform the anticipation and classification
processes with high accuracy compared with the two other algorithms for
anticipating participation.
| [
"['Amin Babazadeh Sangar' 'Seyyed Reza Khaze' 'Laya Ebrahimi']",
"Amin Babazadeh Sangar, Seyyed Reza Khaze, Laya Ebrahimi"
] |
cs.CY cs.LG | null | 1307.7432 | null | null | http://arxiv.org/pdf/1307.7432v1 | 2013-07-29T01:29:48Z | 2013-07-29T01:29:48Z | Data mining application for cyber space users tendency in blog writing:
a case study | Blogs are a recently emerging medium which relies on information technology
and technological advances. Since the mass media in some less-developed and
developing countries are in government service and their policies are developed
based on governmental interests, blogs provide a space for ideas and exchanging
opinions. In this paper, we highlight simulations performed on information
obtained from 100 users and bloggers in Kohkiloye and Boyer Ahmad Province,
using the Weka 3.6 tool and the C4.5 decision tree algorithm, with more
than 82% precision in anticipating users' future tendency towards
blogging, for use in strategic areas.
| [
"Farhad Soleimanian Gharehchopogh, Seyyed Reza Khaze",
"['Farhad Soleimanian Gharehchopogh' 'Seyyed Reza Khaze']"
] |
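As a sketch of the splitting criterion behind C4.5, the snippet below computes the information gain of a candidate attribute (C4.5 normalizes this into its gain-ratio criterion); the survey column and labels are hypothetical.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    # how much splitting on this attribute reduces label entropy
    gain = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        gain -= mask.mean() * entropy(labels[mask])
    return gain

# hypothetical categorical survey column vs. a "tends to blog" label
uses_social_media = np.array([1, 1, 0, 0, 1, 0, 1, 1])
blogs = np.array([1, 1, 0, 0, 1, 0, 0, 1])
print("information gain:", information_gain(uses_social_media, blogs))
```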
cs.LG stat.ML | null | 1307.7577 | null | null | http://arxiv.org/pdf/1307.7577v3 | 2014-05-12T19:46:39Z | 2013-07-29T13:45:58Z | Safe Screening With Variational Inequalities and Its Application to
LASSO | Sparse learning techniques have been routinely used for feature selection as
the resulting model usually has a small number of non-zero entries. Safe
screening, which eliminates the features that are guaranteed to have zero
coefficients for a certain value of the regularization parameter, is a
technique for improving the computational efficiency. Safe screening is gaining
increasing attention since 1) solving sparse learning formulations usually has
a high computational cost especially when the number of features is large and
2) one needs to try several regularization parameters to select a suitable
model. In this paper, we propose an approach called "Sasvi" (Safe screening
with variational inequalities). Sasvi makes use of the variational inequality
that provides the sufficient and necessary optimality condition for the dual
problem. Several existing approaches for Lasso screening can be cast as
relaxed versions of the proposed Sasvi, thus Sasvi provides a stronger safe
screening rule. We further study the monotone properties of Sasvi for Lasso,
based on which a sure removal regularization parameter can be identified for
each feature. Experimental results on both synthetic and real data sets are
reported to demonstrate the effectiveness of the proposed Sasvi for Lasso
screening.
| [
"Jun Liu, Zheng Zhao, Jie Wang, Jieping Ye",
"['Jun Liu' 'Zheng Zhao' 'Jie Wang' 'Jieping Ye']"
] |
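To make the notion of a screening rule concrete, here is a sketch of the basic SAFE rule of El Ghaoui et al. for the Lasso $\min_w \frac{1}{2}\|y - Xw\|^2 + \lambda\|w\|_1$; Sasvi yields a strictly stronger test, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 1000))
y = rng.normal(size=200)

corr = np.abs(X.T @ y)       # |x_j^T y| for every feature
lam_max = corr.max()         # smallest lambda giving an all-zero solution
lam = 0.8 * lam_max

# SAFE: discard x_j if |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max
norms = np.linalg.norm(X, axis=0)
keep = corr >= lam - norms * np.linalg.norm(y) * (lam_max - lam) / lam_max
print(f"screened out {np.count_nonzero(~keep)} of {X.shape[1]} features")
```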
cs.LG cs.AI | null | 1307.7793 | null | null | http://arxiv.org/pdf/1307.7793v1 | 2013-07-30T03:02:44Z | 2013-07-30T03:02:44Z | Multi-dimensional Parametric Mincuts for Constrained MAP Inference | In this paper, we propose novel algorithms for inferring the Maximum a
Posteriori (MAP) solution of discrete pairwise random field models under
multiple constraints. We show how this constrained discrete optimization
problem can be formulated as a multi-dimensional parametric mincut problem via
its Lagrangian dual, and prove that our algorithm isolates all constraint
instances for which the problem can be solved exactly. These multiple solutions
enable us to even deal with `soft constraints' (higher order penalty
functions). Moreover, we propose two practical variants of our algorithm to
solve problems with hard constraints. We also show how our method can be
applied to solve various constrained discrete optimization problems such as
submodular minimization and shortest path computation. Experimental evaluation
using the foreground-background image segmentation problem with statistical
constraints reveals that our method is faster and its results are closer to the
ground truth labellings compared with the popular continuous relaxation based
methods.
| [
"['Yongsub Lim' 'Kyomin Jung' 'Pushmeet Kohli']",
"Yongsub Lim, Kyomin Jung, Pushmeet Kohli"
] |
q-bio.QM cs.CE cs.LG q-bio.GN | null | 1307.7795 | null | null | http://arxiv.org/pdf/1307.7795v1 | 2013-07-30T03:19:05Z | 2013-07-30T03:19:05Z | Protein (Multi-)Location Prediction: Using Location Inter-Dependencies
in a Probabilistic Framework | Knowing the location of a protein within the cell is important for
understanding its function, role in biological processes, and potential use as
a drug target. Much progress has been made in developing computational methods
that predict single locations for proteins, assuming that proteins localize to
a single location. However, it has been shown that proteins can localize to
multiple locations. While a few recent systems have attempted to predict
multiple locations of proteins, they typically treat locations as independent
or capture inter-dependencies by treating each locations-combination present in
the training set as an individual location-class. We present a new method and a
preliminary system we have developed that directly incorporates
inter-dependencies among locations into the multiple-location-prediction
process, using a collection of Bayesian network classifiers. We evaluate our
system on a dataset of single- and multi-localized proteins. Our results,
obtained by incorporating inter-dependencies, are significantly higher than
those obtained by classifiers that do not use inter-dependencies. The
performance of our system on multi-localized proteins is comparable to a top
performing system (YLoc+), without restricting predictions to be based only on
location-combinations present in the training set.
| [
"['Ramanuja Simha' 'Hagit Shatkay']",
"Ramanuja Simha and Hagit Shatkay"
] |
cs.CV cs.LG stat.ML | null | 1307.7852 | null | null | http://arxiv.org/pdf/1307.7852v1 | 2013-07-30T07:33:31Z | 2013-07-30T07:33:31Z | Scalable $k$-NN graph construction | The $k$-NN graph has played a central role in increasingly popular
data-driven techniques for various learning and vision tasks; yet, finding an
efficient and effective way to construct $k$-NN graphs remains a challenge,
especially for large-scale high-dimensional data. In this paper, we propose a
new approach to construct approximate $k$-NN graphs with emphasis on
efficiency and accuracy. We hierarchically and randomly divide the data points
into subsets and build an exact neighborhood graph over each subset, achieving
a base approximate neighborhood graph; we then repeat this process several
times to generate multiple neighborhood graphs, which are combined to yield a
more accurate approximate neighborhood graph. Furthermore, we propose a
neighborhood propagation scheme to further enhance the accuracy. We show both
theoretical and empirical accuracy and efficiency of our approach to $k$-NN
graph construction and demonstrate significant speed-up in dealing with large
scale visual data.
| [
"['Jingdong Wang' 'Jing Wang' 'Gang Zeng' 'Zhuowen Tu' 'Rui Gan'\n 'Shipeng Li']",
"Jingdong Wang, Jing Wang, Gang Zeng, Zhuowen Tu, Rui Gan, and Shipeng\n Li"
] |
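A minimal sketch of the divide-and-conquer step: random partitions give cheap exact subgraphs whose union approximates the $k$-NN graph. The paper's hierarchical splitting and neighborhood-propagation refinement are omitted, and all parameter values are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph_approx(X, k=5, n_partitions=8, subset_size=200, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    ids = np.full((n, k), -1)            # best-so-far neighbor ids
    dst = np.full((n, k), np.inf)        # and their distances
    for _ in range(n_partitions):
        order = rng.permutation(n)
        for start in range(0, n, subset_size):
            sub = order[start:start + subset_size]
            D = cdist(X[sub], X[sub])    # exact distances within the subset
            np.fill_diagonal(D, np.inf)
            for row, i in enumerate(sub):
                # merge this subset's candidates into point i's running top-k
                valid = ids[i] >= 0
                cand_ids = np.concatenate([ids[i][valid],
                                           sub[np.argsort(D[row])[:k]]])
                cand_dst = np.concatenate([dst[i][valid],
                                           np.sort(D[row])[:k]])
                uniq_first = np.unique(cand_ids, return_index=True)[1]
                best = uniq_first[np.argsort(cand_dst[uniq_first])][:k]
                m = len(best)
                ids[i, :m], dst[i, :m] = cand_ids[best], cand_dst[best]
                ids[i, m:], dst[i, m:] = -1, np.inf
    return ids

X = np.random.default_rng(4).normal(size=(1000, 16))
print(knn_graph_approx(X)[:3])           # neighbor ids of the first 3 points
```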
stat.ME cs.LG stat.CO | null | 1307.7948 | null | null | http://arxiv.org/pdf/1307.7948v1 | 2013-07-30T12:40:16Z | 2013-07-30T12:40:16Z | On the accuracy of the Viterbi alignment | In a hidden Markov model, the underlying Markov chain is usually hidden.
Often, the maximum likelihood alignment (Viterbi alignment) is used as its
estimate. Although having the biggest likelihood, the Viterbi alignment can
behave very untypically by passing states that are at most unexpected. To avoid
such situations, the Viterbi alignment can be modified by forcing it not to
pass these states. In this article, an iterative procedure for improving the
Viterbi alignment is proposed and studied. The iterative approach is compared
with a simple bunch approach where a number of states with low probability are
all replaced at the same time. It can be seen that the iterative way of
adjusting the Viterbi alignment is more efficient and it has several advantages
over the bunch approach. The same iterative algorithm for improving the Viterbi
alignment can be used in the case of peeping, that is when it is possible to
reveal hidden states. In addition, lower bounds for classification
probabilities of the Viterbi alignment under different conditions on the model
parameters are studied.
| [
"['Kristi Kuljus' 'Jüri Lember']",
"Kristi Kuljus and J\\\"uri Lember"
] |
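For reference, a minimal log-space Viterbi decoder on a toy HMM; the paper's iterative adjustment of low-probability states is not implemented here.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, obs):
    """log_init: (S,), log_trans: (S, S), log_emit: (S, O), obs: (T,) ints."""
    T, S = len(obs), len(log_init)
    score = log_init + log_emit[:, obs[0]]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # cand[i, j]: state i -> j
        back[t] = cand.argmax(axis=0)              # best predecessor of each j
        score = cand.max(axis=0) + log_emit[:, obs[t]]
    path = [int(score.argmax())]                   # best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))        # follow backpointers
    return path[::-1]

# toy 2-state model with 3 observation symbols
pi = np.log([0.6, 0.4])
A = np.log([[0.7, 0.3], [0.4, 0.6]])
B = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(pi, A, B, np.array([0, 1, 2, 2, 1])))
```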
cs.CL cs.IR cs.LG | null | 1307.7973 | null | null | http://arxiv.org/pdf/1307.7973v1 | 2013-07-30T13:37:09Z | 2013-07-30T13:37:09Z | Connecting Language and Knowledge Bases with Embedding Models for
Relation Extraction | This paper proposes a novel approach for relation extraction from free text
which is trained to jointly use information from the text and from existing
knowledge. Our model is based on two scoring functions that operate by learning
low-dimensional embeddings of words and of entities and relationships from a
knowledge base. We empirically show on New York Times articles aligned with
Freebase relations that our approach is able to efficiently use the extra
information provided by a large subset of Freebase data (4M entities, 23k
relationships) to improve over existing methods that rely on text features
alone.
| [
"['Jason Weston' 'Antoine Bordes' 'Oksana Yakhnenko' 'Nicolas Usunier']",
"Jason Weston, Antoine Bordes, Oksana Yakhnenko, Nicolas Usunier"
] |
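In the same spirit as the embedding models described above, a TransE-style scoring sketch that ranks candidate entities by distance in a shared low-dimensional space; the paper's actual text and Freebase scoring functions differ in detail, and the embeddings here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n_entities, n_relations = 50, 1000, 20
E = rng.normal(size=(n_entities, dim))     # entity embeddings
R = rng.normal(size=(n_relations, dim))    # relation embeddings

def score(h, r, t):
    # higher is better: -||e_h + e_r - e_t||
    return -np.linalg.norm(E[h] + R[r] - E[t])

# rank all candidate tail entities for a (head, relation) query
h, r = 3, 7
ranks = np.argsort([-score(h, r, t) for t in range(n_entities)])
print("top-5 predicted tails:", ranks[:5])
```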
stat.ML cs.LG | null | 1307.7981 | null | null | http://arxiv.org/pdf/1307.7981v1 | 2013-07-30T13:59:13Z | 2013-07-30T13:59:13Z | Likelihood-ratio calibration using prior-weighted proper scoring rules | Prior-weighted logistic regression has become a standard tool for calibration
in speaker recognition. Logistic regression is the optimization of the expected
value of the logarithmic scoring rule. We generalize this via a parametric
family of proper scoring rules. Our theoretical analysis shows how different
members of this family induce different relative weightings over a spectrum of
applications of which the decision thresholds range from low to high. Special
attention is given to the interaction between prior weighting and proper
scoring rule parameters. Experiments on NIST SRE'12 suggest that for
applications with low false-alarm rate requirements, scoring rules tailored to
emphasize higher score thresholds may give better accuracy than logistic
regression.
| [
"Niko Br\\\"ummer and George Doddington",
"['Niko Brümmer' 'George Doddington']"
] |
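A sketch of the prior-weighted logistic-regression objective that this family of proper scoring rules generalizes: an affine calibration $s \mapsto as + b$ trained with target and non-target trials reweighted to an effective prior $P$. The scores and the choice $P = 0.01$ are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_logistic_objective(params, tar, non, P):
    a, b = params
    offset = np.log(P / (1 - P))        # logit of the effective prior
    c_tar = np.mean(np.logaddexp(0.0, -(a * tar + b + offset)))
    c_non = np.mean(np.logaddexp(0.0, a * non + b + offset))
    return P * c_tar + (1 - P) * c_non  # prior-weighted expected log cost

rng = np.random.default_rng(6)
tar = rng.normal(2.0, 1.0, size=500)    # synthetic target-trial scores
non = rng.normal(-1.0, 1.0, size=5000)  # synthetic non-target scores
res = minimize(weighted_logistic_objective, x0=[1.0, 0.0],
               args=(tar, non, 0.01))
print("calibration parameters a, b:", res.x)
```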
cs.LG stat.ML | null | 1307.7993 | null | null | http://arxiv.org/pdf/1307.7993v1 | 2013-07-30T14:24:52Z | 2013-07-30T14:24:52Z | Sharp Threshold for Multivariate Multi-Response Linear Regression via
Block Regularized Lasso | In this paper, we investigate a multivariate multi-response (MVMR) linear
regression problem, which contains multiple linear regression models with
differently distributed design matrices, and different regression and output
vectors. The goal is to recover the support union of all regression vectors
using $l_1/l_2$-regularized Lasso. We characterize sufficient and necessary
conditions on sample complexity \emph{as a sharp threshold} to guarantee
successful recovery of the support union. Namely, if the sample size is above
the threshold, then $l_1/l_2$-regularized Lasso correctly recovers the support
union; and if the sample size is below the threshold, $l_1/l_2$-regularized
Lasso fails to recover the support union. In particular, the threshold
precisely captures the impact of the sparsity of regression vectors and the
statistical properties of the design matrices on sample complexity. Therefore,
the threshold function also captures the advantages of joint support union
recovery using multi-task Lasso over individual support recovery using
single-task Lasso.
| [
"['Weiguang Wang' 'Yingbin Liang' 'Eric P. Xing']",
"Weiguang Wang, Yingbin Liang, Eric P. Xing"
] |
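For the shared-design special case (one design matrix for all tasks), scikit-learn's MultiTaskLasso implements the $l_1/l_2$ block penalty; the paper's more general setting with per-task design matrices is not covered by this sketch.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(7)
n, d, tasks = 100, 50, 3
X = rng.normal(size=(n, d))
W = np.zeros((d, tasks))
W[:5] = rng.normal(size=(5, tasks))        # shared 5-feature support union
Y = X @ W + 0.1 * rng.normal(size=(n, tasks))

model = MultiTaskLasso(alpha=0.1).fit(X, Y)
support = np.any(model.coef_ != 0, axis=0) # coef_ has shape (tasks, d)
print("recovered support union:", np.flatnonzero(support))
```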
astro-ph.IM cs.LG | 10.1109/SMC.2013.260 | 1307.8012 | null | null | http://arxiv.org/abs/1307.8012v1 | 2013-07-30T15:11:59Z | 2013-07-30T15:11:59Z | A Study on Classification in Imbalanced and Partially-Labelled Data
Streams | The domain of radio astronomy is currently facing significant computational
challenges, foremost amongst which are those posed by the development of the
world's largest radio telescope, the Square Kilometre Array (SKA). Preliminary
specifications for this instrument suggest that the final design will
incorporate between 2000 and 3000 individual 15 metre receiving dishes, which
together can be expected to produce a data rate of many TB/s. Given such a high
data rate, it becomes crucial to consider how this information will be
processed and stored to maximise its scientific utility. In this paper, we
consider one possible data processing scenario for the SKA, for the purposes of
an all-sky pulsar survey. In particular we treat the selection of promising
signals from the SKA processing pipeline as a data stream classification
problem. We consider the feasibility of classifying signals that arrive via an
unlabelled and heavily class imbalanced data stream, using currently available
algorithms and frameworks. Our results indicate that existing stream learners
exhibit unacceptably low recall on real astronomical data when used in standard
configuration; however, their good false positive performance and accuracy
comparable to that of static learners suggest they have definite potential as
an on-line solution to this particular big data challenge.
| [
"R. J. Lyon, J. M. Brooke, J. D. Knowles, B. W. Stappers",
"['R. J. Lyon' 'J. M. Brooke' 'J. D. Knowles' 'B. W. Stappers']"
] |
cs.LG cs.AI cs.DC | null | 1307.8049 | null | null | http://arxiv.org/pdf/1307.8049v1 | 2013-07-30T17:07:58Z | 2013-07-30T17:07:58Z | Optimistic Concurrency Control for Distributed Unsupervised Learning | Research on distributed machine learning algorithms has focused primarily on
one of two extremes - algorithms that obey strict concurrency constraints or
algorithms that obey few or no such constraints. We consider an intermediate
alternative in which algorithms optimistically assume that conflicts are
unlikely and, if conflicts do arise, a conflict-resolution protocol is invoked.
We view this "optimistic concurrency control" paradigm as particularly
appropriate for large-scale machine learning algorithms, especially in the
unsupervised setting. We demonstrate our approach in three problem areas:
clustering, feature learning and online facility location. We evaluate our
methods via large-scale experiments in a cluster computing environment.
| [
"['Xinghao Pan' 'Joseph E. Gonzalez' 'Stefanie Jegelka' 'Tamara Broderick'\n 'Michael I. Jordan']",
"Xinghao Pan, Joseph E. Gonzalez, Stefanie Jegelka, Tamara Broderick,\n Michael I. Jordan"
] |
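A toy single-process illustration of the paradigm: workers propose clustering-style updates against a stale snapshot, and a serial validator accepts them unless the relevant state has changed, resolving only the rare conflicts. This mirrors the control flow only; it is not the authors' distributed implementation.

```python
import numpy as np

centers = {0: np.array([0.0, 0.0]), 1: np.array([5.0, 5.0])}
version = {0: 0, 1: 0}                      # per-center version counters

def propose(point, snapshot_versions):
    # optimistic phase: pick the nearest center as seen in the snapshot
    k = min(centers, key=lambda c: np.linalg.norm(point - centers[c]))
    return k, snapshot_versions[k]

def validate_and_apply(point, k, seen_version):
    if version[k] != seen_version:          # conflict: state moved underneath us
        k, seen_version = propose(point, dict(version))  # re-resolve serially
    centers[k] = 0.9 * centers[k] + 0.1 * point          # apply the update
    version[k] += 1
    return k

rng = np.random.default_rng(8)
points = rng.normal(size=(20, 2)) + np.array([5, 5])
snapshot = dict(version)                    # all workers read one stale snapshot
proposals = [propose(p, snapshot) for p in points]       # "parallel" phase
for p, (k, v) in zip(points, proposals):                 # serial validation
    validate_and_apply(p, k, v)
print("center 1 moved to:", centers[1])
```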
stat.ME cs.LG stat.ML | null | 1307.8136 | null | null | http://arxiv.org/pdf/1307.8136v1 | 2013-07-30T20:19:26Z | 2013-07-30T20:19:26Z | DeBaCl: A Python Package for Interactive DEnsity-BAsed CLustering | The level set tree approach of Hartigan (1975) provides a probabilistically
based and highly interpretable encoding of the clustering behavior of a
dataset. By representing the hierarchy of data modes as a dendrogram of the
level sets of a density estimator, this approach offers many advantages for
exploratory analysis and clustering, especially for complex and
high-dimensional data. Several R packages exist for level set tree estimation,
but their practical usefulness is limited by computational inefficiency,
absence of interactive graphical capabilities and, from a theoretical
perspective, reliance on asymptotic approximations. To make it easier for
practitioners to capture the advantages of level set trees, we have written the
Python package DeBaCl for DEnsity-BAsed CLustering. In this article we
illustrate how DeBaCl's level set tree estimates can be used for difficult
clustering tasks and interactive graphical data analysis. The package is
intended to promote the practical use of level set trees through improvements
in computational efficiency and a high degree of user customization. In
addition, the flexible algorithms implemented in DeBaCl enjoy finite sample
accuracy, as demonstrated in recent literature on density clustering. Finally,
we show the level set tree framework can be easily extended to deal with
functional data.
| [
"Brian P. Kent, Alessandro Rinaldo, Timothy Verstynen",
"['Brian P. Kent' 'Alessandro Rinaldo' 'Timothy Verstynen']"
] |
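A minimal sketch of the idea behind a single level of a level set tree (not DeBaCl's API): estimate density with $k$-NN, keep points above a level, and read the clusters at that level off the connected components of the $k$-NN graph.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.neighbors import NearestNeighbors

def level_set_clusters(X, lam_quantile=0.5, k=10):
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    dist, idx = nn.kneighbors(X)
    density = 1.0 / (dist[:, -1] + 1e-12)          # k-NN density proxy
    high = density >= np.quantile(density, lam_quantile)
    # k-NN graph restricted to high-density points
    rows = np.repeat(np.arange(len(X)), k)
    cols = idx.ravel()
    keep = high[rows] & high[cols]
    n = len(X)
    G = csr_matrix((np.ones(keep.sum()), (rows[keep], cols[keep])),
                   shape=(n, n))
    _, labels = connected_components(G, directed=False)
    labels[~high] = -1                             # below-level points: noise
    return labels

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
labels = level_set_clusters(X)
print("clusters at this level:", set(labels[labels >= 0]))
```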
cs.LG | null | 1307.8187 | null | null | http://arxiv.org/pdf/1307.8187v2 | 2013-10-06T18:49:58Z | 2013-07-31T01:49:50Z | Towards Minimax Online Learning with Unknown Time Horizon | We consider online learning when the time horizon is unknown. We apply a
minimax analysis, beginning with the fixed horizon case, and then moving on to
two unknown-horizon settings, one that assumes the horizon is chosen randomly
according to some known distribution, and the other which allows the adversary
full control over the horizon. For the random horizon setting with restricted
losses, we derive a fully optimal minimax algorithm. And for the adversarial
horizon setting, we prove a nontrivial lower bound which shows that the
adversary obtains strictly more power than when the horizon is fixed and known.
Based on the minimax solution of the random horizon setting, we then propose a
new adaptive algorithm which "pretends" that the horizon is drawn from a
distribution from a special family, but no matter how the actual horizon is
chosen, the worst-case regret is of the optimal rate. Furthermore, our
algorithm can be combined and applied in many ways, for instance, to online
convex optimization, follow the perturbed leader, exponential weights algorithm
and first order bounds. Experiments show that our algorithm outperforms many
other existing algorithms in an online linear optimization setting.
| [
"Haipeng Luo and Robert E. Schapire",
"['Haipeng Luo' 'Robert E. Schapire']"
] |
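For contrast, the classical doubling-trick baseline that adaptive algorithms of this kind aim to beat: exponential weights tuned for a guessed horizon, restarted with a doubled guess whenever the guess runs out. This is the standard baseline sketch, not the authors' algorithm.

```python
import numpy as np

def exp_weights_doubling(loss_stream, n_experts):
    total_loss, t, guess = 0.0, 0, 1
    w = np.ones(n_experts)
    eta = np.sqrt(8 * np.log(n_experts) / guess)   # tuned for the guess
    for losses in loss_stream:                     # losses: (n_experts,) in [0, 1]
        p = w / w.sum()
        total_loss += p @ losses                   # learner suffers mixture loss
        w = w * np.exp(-eta * losses)
        t += 1
        if t >= guess:                             # guess exhausted: double, restart
            guess, t = 2 * guess, 0
            w = np.ones(n_experts)
            eta = np.sqrt(8 * np.log(n_experts) / guess)
    return total_loss

rng = np.random.default_rng(11)
stream = (rng.random(4) for _ in range(1000))
print("cumulative learner loss:", exp_weights_doubling(stream, n_experts=4))
```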
cs.LG | null | 1307.8305 | null | null | http://arxiv.org/pdf/1307.8305v1 | 2013-07-31T12:38:20Z | 2013-07-31T12:38:20Z | The Planning-ahead SMO Algorithm | The sequential minimal optimization (SMO) algorithm and variants thereof are
the de facto standard method for solving large quadratic programs for support
vector machine (SVM) training. In this paper we propose a simple yet powerful
modification. The main emphasis is on an algorithm that improves the SMO step
size by planning ahead. The theoretical analysis ensures its convergence to the
optimum. Experiments involving a large number of datasets were carried out to
demonstrate the superiority of the new algorithm.
| [
"['Tobias Glasmachers']",
"Tobias Glasmachers"
] |
cs.LG cs.CC cs.DS stat.ML | null | 1307.8371 | null | null | http://arxiv.org/pdf/1307.8371v9 | 2018-06-03T18:22:37Z | 2013-07-31T16:11:26Z | The Power of Localization for Efficiently Learning Linear Separators
with Noise | We introduce a new approach for designing computationally efficient learning
algorithms that are tolerant to noise, and demonstrate its effectiveness by
designing algorithms with improved noise tolerance guarantees for learning
linear separators.
We consider both the malicious noise model and the adversarial label noise
model. For malicious noise, where the adversary can corrupt both the label and
the features, we provide a polynomial-time algorithm for learning linear
separators in $\Re^d$ under isotropic log-concave distributions that can
tolerate a nearly information-theoretically optimal noise rate of $\eta =
\Omega(\epsilon)$. For the adversarial label noise model, where the
distribution over the feature vectors is unchanged, and the overall probability
of a noisy label is constrained to be at most $\eta$, we also give a
polynomial-time algorithm for learning linear separators in $\Re^d$ under
isotropic log-concave distributions that can handle a noise rate of $\eta =
\Omega\left(\epsilon\right)$.
We show that, in the active learning model, our algorithms achieve a label
complexity whose dependence on the error parameter $\epsilon$ is
polylogarithmic. This provides the first polynomial-time active learning
algorithm for learning linear separators in the presence of malicious noise or
adversarial label noise.
| [
"['Pranjal Awasthi' 'Maria Florina Balcan' 'Philip M. Long']",
"Pranjal Awasthi, Maria Florina Balcan, Philip M. Long"
] |
cs.LG stat.ML | null | 1307.8430 | null | null | http://arxiv.org/pdf/1307.8430v1 | 2013-07-31T19:18:11Z | 2013-07-31T19:18:11Z | Fast Simultaneous Training of Generalized Linear Models (FaSTGLZ) | We present an efficient algorithm for simultaneously training sparse
generalized linear models across many related problems, which may arise from
bootstrapping, cross-validation and nonparametric permutation testing. Our
approach leverages the redundancies across problems to obtain significant
computational improvements relative to solving the problems sequentially by a
conventional algorithm. We demonstrate our fast simultaneous training of
generalized linear models (FaSTGLZ) algorithm on a number of real-world
datasets, and we run otherwise computationally intensive bootstrapping and
permutation test analyses that are typically necessary for obtaining
statistically rigorous classification results and meaningful interpretation.
Code is freely available at http://liinc.bme.columbia.edu/fastglz.
| [
"['Bryan R. Conroy' 'Jennifer M. Walz' 'Brian Cheung' 'Paul Sajda']",
"Bryan R. Conroy, Jennifer M. Walz, Brian Cheung, Paul Sajda"
] |
cs.AI cs.LG | null | 1308.0187 | null | null | http://arxiv.org/pdf/1308.0187v9 | 2014-12-23T20:21:52Z | 2013-07-31T16:56:59Z | A Time and Space Efficient Junction Tree Architecture | The junction tree algorithm is a way of computing marginals of boolean
multivariate probability distributions that factorise over sets of random
variables. The junction tree algorithm first constructs a tree called a
junction tree whose vertices are sets of random variables. The algorithm then
performs a generalised version of belief propagation on the junction tree. The
Shafer-Shenoy and Hugin architectures are two ways to perform this belief
propagation that tradeoff time and space complexities in different ways: Hugin
propagation is at least as fast as Shafer-Shenoy propagation and, in cases
where there are large vertices of high degree, is significantly faster. However,
this speed increase comes at the cost of an increased space complexity. This
paper first introduces a simple novel architecture, ARCH-1, which has the best
of both worlds: the speed of Hugin propagation and the low space requirements
of Shafer-Shenoy propagation. A more complicated novel architecture, ARCH-2, is
then introduced which has, up to a factor only linear in the maximum
cardinality of any vertex, time and space complexities at least as good as
ARCH-1 and, in cases where there are large vertices of high degree, is
significantly faster than ARCH-1.
| [
"Stephen Pasteris",
"['Stephen Pasteris']"
] |
cs.AI cs.LG | null | 1308.0227 | null | null | http://arxiv.org/pdf/1308.0227v7 | 2014-04-02T03:02:51Z | 2013-08-01T14:40:14Z | An Enhanced Features Extractor for a Portfolio of Constraint Solvers | Recent research has shown that a single arbitrarily efficient solver can be
significantly outperformed by a portfolio of possibly slower on-average
solvers. The solver selection is usually done by means of (un)supervised
learning techniques which exploit features extracted from the problem
specification. In this paper we present a useful and flexible framework that
is able to extract an extensive set of features from a Constraint
(Satisfaction/Optimization) Problem defined in possibly different modeling
languages: MiniZinc, FlatZinc or XCSP. We also report some empirical results
showing that the performance obtained using these features is effective and
competitive with state-of-the-art CSP portfolio techniques.
| [
"Roberto Amadini and Maurizio Gabbrielli and Jacopo Mauro",
"['Roberto Amadini' 'Maurizio Gabbrielli' 'Jacopo Mauro']"
] |
cs.AI cs.LG | null | 1308.0356 | null | null | http://arxiv.org/pdf/1308.0356v1 | 2013-08-01T21:04:07Z | 2013-08-01T21:04:07Z | Design and Development of an Expert System to Help Head of University
Departments | One of the basic tasks for which the head of each university department is
responsible is the assignment of lecturers, based on factors such as
experience, credentials, qualifications, etc. To help the heads with this
task, several automatic systems have been proposed, using machine learning
methods, decision support systems (DSS) and so on. Weighing the advantages and
disadvantages of the previous methods, a fully automatic system based on
expert systems is designed in this paper. The proposed system comprises two
main steps. In the first, the human experts' knowledge is encoded as decision
trees. The second step builds an expert system that is evaluated using the
rules extracted from these decision trees. In addition, to improve the quality
of the proposed system, a majority voting algorithm is proposed as a
post-processing step to choose, for each course, the lecturer who satisfies
the most experts' decision trees. The results show that the designed system's
average accuracy is 78.88%. Low computational complexity and ease of
programming are among the other advantages of the proposed system.
| [
"['Shervan Fekri-Ershad' 'Hadi Tajalizadeh' 'Shahram Jafari']",
"Shervan Fekri-Ershad, Hadi Tajalizadeh, Shahram Jafari"
] |
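A sketch of the majority-voting post-processing step: each expert's decision tree nominates a lecturer for the course, and the lecturer satisfying the most trees wins. The names and tree outputs are hypothetical.

```python
from collections import Counter

def vote_for_course(tree_predictions):
    """tree_predictions: list of lecturer names, one per expert decision tree."""
    winner, n_votes = Counter(tree_predictions).most_common(1)[0]
    return winner, n_votes

predictions = ["Dr. A", "Dr. B", "Dr. A", "Dr. A", "Dr. C"]
print(vote_for_course(predictions))   # ('Dr. A', 3)
```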
cs.LG cs.DB | null | 1308.0484 | null | null | http://arxiv.org/pdf/1308.0484v2 | 2013-08-15T20:00:22Z | 2013-08-02T12:56:19Z | Using Incomplete Information for Complete Weight Annotation of Road
Networks -- Extended Version | We are witnessing increasing interest in the effective use of road networks.
For example, to enable effective vehicle routing, weighted-graph models of
transportation networks are used, where the weight of an edge captures some
cost associated with traversing the edge, e.g., greenhouse gas (GHG) emissions
or travel time. It is a precondition to using a graph model for routing that
all edges have weights. Weights that capture travel times and GHG emissions can
be extracted from GPS trajectory data collected from the network. However, GPS
trajectory data typically lack the coverage needed to assign weights to all
edges. This paper formulates and addresses the problem of annotating all edges
in a road network with travel cost based weights from a set of trips in the
network that cover only a small fraction of the edges, each with an associated
ground-truth travel cost. A general framework is proposed to solve the problem.
Specifically, the problem is modeled as a regression problem and solved by
minimizing a judiciously designed objective function that takes into account
the topology of the road network. In particular, the use of weighted PageRank
values of edges is explored for assigning appropriate weights to all edges, and
the property of directional adjacency of edges is also taken into account to
assign weights. Empirical studies with weights capturing travel time and GHG
emissions on two road networks (Skagen, Denmark, and North Jutland, Denmark)
offer insight into the design properties of the proposed techniques and offer
evidence that the techniques are effective.
| [
"Bin Yang, Manohar Kaul, Christian S. Jensen",
"['Bin Yang' 'Manohar Kaul' 'Christian S. Jensen']"
] |
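As a loose sketch of the regression idea (not the paper's full objective, which also encodes directional adjacency and a topology-aware penalty), the snippet below computes PageRank scores on a toy directed graph and regresses the travel costs observed on a small covered subset of edges onto them, then predicts costs for the uncovered edges. The graph, costs and coverage are all synthetic.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LinearRegression

G = nx.gnm_random_graph(50, 150, seed=12, directed=True)   # toy road graph
pr = nx.pagerank(G)
edges = list(G.edges())
feats = np.array([[pr[u], pr[v]] for u, v in edges])       # PageRank features

rng = np.random.default_rng(13)
covered = rng.random(len(edges)) < 0.2        # trips cover ~20% of the edges
# synthetic ground-truth costs, correlated with topology for this toy example
cost = 10 + 100 * feats.sum(axis=1) + rng.normal(0, 0.5, len(edges))

reg = LinearRegression().fit(feats[covered], cost[covered])
pred = reg.predict(feats[~covered])           # annotate the uncovered edges
print("predicted costs for 5 uncovered edges:", pred[:5].round(1))
```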