categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
cs.LG cs.CY | null | 1311.6802 | null | null | http://arxiv.org/pdf/1311.6802v2 | 2014-07-30T23:08:54Z | 2013-11-26T20:48:59Z | Recommending with an Agenda: Active Learning of Private Attributes using
Matrix Factorization | Recommender systems leverage user demographic information, such as age,
gender, etc., to personalize recommendations and better place their targeted
ads. Oftentimes, users do not volunteer this information due to privacy
concerns, or due to a lack of initiative in filling out their online profiles.
We illustrate a new threat in which a recommender learns private attributes of
users who do not voluntarily disclose them. We design both passive and active
attacks that solicit ratings for strategically selected items, and could thus
be used by a recommender system to pursue this hidden agenda. Our methods are
based on a novel usage of Bayesian matrix factorization in an active learning
setting. Evaluations on multiple datasets illustrate that such attacks are
indeed feasible and use significantly fewer rated items than static inference
methods. Importantly, they succeed without sacrificing the quality of
recommendations to users.
| [
"Smriti Bhagat, Udi Weinsberg, Stratis Ioannidis, Nina Taft",
"['Smriti Bhagat' 'Udi Weinsberg' 'Stratis Ioannidis' 'Nina Taft']"
] |
cs.LG | 10.1109/TSP.2014.2333559 | 1311.6809 | null | null | http://arxiv.org/abs/1311.6809v1 | 2013-11-26T10:02:20Z | 2013-11-26T10:02:20Z | A Novel Family of Adaptive Filtering Algorithms Based on The Logarithmic
Cost | We introduce a novel family of adaptive filtering algorithms based on a
relative logarithmic cost. The new family intrinsically combines the higher and
lower order measures of the error into a single continuous update based on the
error amount. We introduce important members of this family of algorithms such
as the least mean logarithmic square (LMLS) and least logarithmic absolute
difference (LLAD) algorithms that improve the convergence performance of the
conventional algorithms. However, our approach and analysis are generic such
that they cover other well-known cost functions as described in the paper. The
LMLS algorithm achieves comparable convergence performance with the least mean
fourth (LMF) algorithm and extends the stability bound on the step size. The
LLAD and least mean square (LMS) algorithms demonstrate similar convergence
performance in impulse-free noise environments while the LLAD algorithm is
robust against impulsive interferences and outperforms the sign algorithm (SA).
We analyze the transient, steady state and tracking performance of the
introduced algorithms and demonstrate the agreement between the theoretical analyses and
simulation results. We show the extended stability bound of the LMLS algorithm
and analyze the robustness of the LLAD algorithm against impulsive
interferences. Finally, we demonstrate the performance of our algorithms in
different scenarios through numerical examples.
| [
"Muhammed O. Sayin, N. Denizcan Vanli, Suleyman S. Kozat",
"['Muhammed O. Sayin' 'N. Denizcan Vanli' 'Suleyman S. Kozat']"
] |
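The adaptive-filtering abstract above describes updates that blend higher- and lower-order error measures through a logarithmic cost. The sketch below is only a minimal illustration of that idea: a plain LMS-style update whose error term is rescaled by a logarithmic-cost-like factor. The exact LMLS/LLAD update rules are in the paper; the weighting, step size, and toy data here are assumptions, not the authors' code.

```python
import numpy as np

def lmls_like_update(w, x, d, mu=0.01):
    """One adaptive-filter step with a logarithmic-cost-style error weighting.

    Illustrative sketch: the error of a plain LMS update is scaled by
    e^2 / (1 + e^2), which behaves like LMS for small errors and limits the
    influence of large (possibly impulsive) errors.
    """
    e = d - w @ x                      # a priori estimation error
    scale = e**2 / (1.0 + e**2)        # assumed logarithmic-cost weighting
    return w + mu * scale * e * x

# Toy system-identification example with synthetic data (assumed setup).
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8])
w = np.zeros(3)
for _ in range(5000):
    x = rng.normal(size=3)
    d = w_true @ x + 0.01 * rng.normal()   # noisy desired signal
    w = lmls_like_update(w, x, d)
print("estimated weights:", w)
```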
stat.ML cs.LG | 10.1109/IJCNN.2014.6889449 | 1311.6834 | null | null | http://arxiv.org/abs/1311.6834v2 | 2015-01-16T18:37:00Z | 2013-11-26T22:13:37Z | Semi-Supervised Sparse Coding | Sparse coding approximates the data sample as a sparse linear combination of
some basic codewords and uses the sparse codes as new representations. In this
paper, we investigate learning discriminative sparse codes by sparse coding in
a semi-supervised manner, where only a few training samples are labeled. By
using the manifold structure spanned by the data set of both labeled and
unlabeled samples and the constraints provided by the labels of the labeled
samples, we learn the variable class labels for all the samples. Furthermore,
to improve the discriminative ability of the learned sparse codes, we assume
that the class labels could be predicted from the sparse codes directly using a
linear classifier. By solving the codebook, sparse codes, class labels and
classifier parameters simultaneously in a unified objective function, we
develop a semi-supervised sparse coding algorithm. Experiments on two
real-world pattern recognition problems demonstrate the advantage of the
proposed methods over supervised sparse coding methods on partially labeled
data sets.
| [
"Jim Jing-Yan Wang and Xin Gao",
"['Jim Jing-Yan Wang' 'Xin Gao']"
] |
cs.LG cs.GT | null | 1311.6838 | null | null | http://arxiv.org/pdf/1311.6838v1 | 2013-11-26T22:53:13Z | 2013-11-26T22:53:13Z | Learning Prices for Repeated Auctions with Strategic Buyers | Inspired by real-time ad exchanges for online display advertising, we
consider the problem of inferring a buyer's value distribution for a good when
the buyer is repeatedly interacting with a seller through a posted-price
mechanism. We model the buyer as a strategic agent, whose goal is to maximize
her long-term surplus, and we are interested in mechanisms that maximize the
seller's long-term revenue. We define the natural notion of strategic regret
--- the lost revenue as measured against a truthful (non-strategic) buyer. We
present seller algorithms that are no-(strategic)-regret when the buyer
discounts her future surplus --- i.e. the buyer prefers showing advertisements
to users sooner rather than later. We also give a lower bound on strategic
regret that increases as the buyer's discounting weakens and shows, in
particular, that any seller algorithm will suffer linear strategic regret if
there is no discounting.
| [
"['Kareem Amin' 'Afshin Rostamizadeh' 'Umar Syed']",
"Kareem Amin, Afshin Rostamizadeh, Umar Syed"
] |
cs.CV cs.LG cs.NE | null | 1311.6881 | null | null | http://arxiv.org/pdf/1311.6881v1 | 2013-11-27T07:14:25Z | 2013-11-27T07:14:25Z | Color and Shape Content Based Image Classification using RBF Network and
PSO Technique: A Survey | Image classification is a well-known supervised learning technique, and
improving it increases the accuracy and efficiency of image query retrieval.
To improve the classification step, we use an RBF neural network for better
prediction of the features used in image retrieval. Colour content is
represented by pixel values in image classification using the radial basis
function (RBF) technique; this approach gives better results than the SVM
technique for image representation. An image is represented as a matrix
through the RBF using the pixel values of its colour intensities. We first use
the RGB colour model, in which the red, green and blue colour intensity values
form the matrix. SVM with particle swarm optimization for image classification
is also implemented on image content. Results based on the proposed approach
are encouraging in terms of colour image classification accuracy.
| [
"['Abhishek Pandey' 'Anjna Jayant Deen' 'Rajeev Pandey']",
"Abhishek Pandey, Anjna Jayant Deen and Rajeev Pandey (Dept. of CSE,\n UIT-RGPV)"
] |
stat.ML cs.LG stat.AP stat.ME | null | 1311.6976 | null | null | http://arxiv.org/pdf/1311.6976v2 | 2014-05-13T09:21:28Z | 2013-11-27T14:19:21Z | Dimensionality reduction for click-through rate prediction: Dense versus
sparse representation | In online advertising, display ads are increasingly being placed based on
real-time auctions where the advertiser who wins gets to serve the ad. This is
called real-time bidding (RTB). In RTB, auctions have very tight time
constraints on the order of 100ms. Therefore mechanisms for bidding
intelligently such as clickthrough rate prediction need to be sufficiently
fast. In this work, we propose to use dimensionality reduction of the
user-website interaction graph in order to produce simplified features of users
and websites that can be used as predictors of clickthrough rate. We
demonstrate that the Infinite Relational Model (IRM) as a dimensionality
reduction offers comparable predictive performance to conventional
dimensionality reduction schemes, while achieving the most economical usage of
features and fastest computations at run-time. For applications such as
real-time bidding, where fast database I/O and few computations are key to
success, we thus recommend using IRM based features as predictors to exploit
the recommender effects from bipartite graphs.
| [
"['Bjarne Ørum Fruergaard' 'Toke Jansen Hansen' 'Lars Kai Hansen']",
"Bjarne {\\O}rum Fruergaard, Toke Jansen Hansen, Lars Kai Hansen"
] |
cs.AI cs.LG stat.ML | null | 1311.7071 | null | null | http://arxiv.org/pdf/1311.7071v2 | 2013-12-03T20:08:28Z | 2013-11-27T18:58:07Z | Sparse Linear Dynamical System with Its Application in Multivariate
Clinical Time Series | Linear Dynamical System (LDS) is an elegant mathematical framework for
modeling and learning multivariate time series. However, in general, it is
difficult to set the dimension of its hidden state space. A small number of
hidden states may not be able to model the complexities of a time series, while
a large number of hidden states can lead to overfitting. In this paper, we
study methods that impose an $\ell_1$ regularization on the transition matrix
of an LDS model to alleviate the problem of choosing the optimal number of
hidden states. We incorporate a generalized gradient descent method into the
Maximum a Posteriori (MAP) framework and use Expectation Maximization (EM) to
iteratively achieve sparsity on the transition matrix of an LDS model. We show
that our Sparse Linear Dynamical System (SLDS) improves the predictive
performance when compared to ordinary LDS on a multivariate clinical time
series dataset.
| [
"Zitao Liu and Milos Hauskrecht",
"['Zitao Liu' 'Milos Hauskrecht']"
] |
stat.ML cs.LG | null | 1311.7184 | null | null | http://arxiv.org/pdf/1311.7184v1 | 2013-11-28T01:36:49Z | 2013-11-28T01:36:49Z | Using Multiple Samples to Learn Mixture Models | In the mixture models problem it is assumed that there are $K$ distributions
$\theta_{1},\ldots,\theta_{K}$ and one gets to observe a sample from a mixture
of these distributions with unknown coefficients. The goal is to associate
instances with their generating distributions, or to identify the parameters of
the hidden distributions. In this work we make the assumption that we have
access to several samples drawn from the same $K$ underlying distributions, but
with different mixing weights. As with topic modeling, having multiple samples
is often a reasonable assumption. Instead of pooling the data into one sample,
we prove that it is possible to use the differences between the samples to
better recover the underlying structure. We present algorithms that recover the
underlying structure under milder assumptions than the current state of the art
when either the dimensionality or the separation is high. The methods, when
applied to topic modeling, allow generalization to words not present in the
training data.
| [
"['Jason D Lee' 'Ran Gilad-Bachrach' 'Rich Caruana']",
"Jason D Lee, Ran Gilad-Bachrach, and Rich Caruana"
] |
cs.LG math.OC stat.ML | null | 1311.7198 | null | null | http://arxiv.org/pdf/1311.7198v1 | 2013-11-28T03:59:31Z | 2013-11-28T03:59:31Z | ADMM Algorithm for Graphical Lasso with an $\ell_{\infty}$ Element-wise
Norm Constraint | We consider the problem of Graphical lasso with an additional $\ell_{\infty}$
element-wise norm constraint on the precision matrix. This problem has
applications in high-dimensional covariance decomposition such as in
\citep{Janzamin-12}. We propose an ADMM algorithm to solve this problem. We
also use a continuation strategy on the penalty parameter to have a fast
implementation of the algorithm.
| [
"Karthik Mohan",
"['Karthik Mohan']"
] |
cs.CV cs.LG cs.NE | null | 1311.7251 | null | null | http://arxiv.org/pdf/1311.7251v1 | 2013-11-28T09:44:45Z | 2013-11-28T09:44:45Z | Spatially-Adaptive Reconstruction in Computed Tomography using Neural
Networks | We propose a supervised machine learning approach for boosting existing
signal and image recovery methods and demonstrate its efficacy on the example of
image reconstruction in computed tomography. Our technique is based on a local
nonlinear fusion of several image estimates, all obtained by applying a chosen
reconstruction algorithm with different values of its control parameters.
Usually such output images have different bias/variance trade-offs. The fusion
of the images is performed by a feed-forward neural network trained on a set of
known examples. Numerical experiments show an improvement in reconstruction
quality relative to existing direct and iterative reconstruction methods.
| [
"['Joseph Shtok' 'Michael Zibulevsky' 'Michael Elad']",
"Joseph Shtok, Michael Zibulevsky and Michael Elad"
] |
cs.LG | null | 1311.7385 | null | null | http://arxiv.org/pdf/1311.7385v3 | 2014-07-11T17:10:27Z | 2013-11-28T17:44:45Z | Algorithmic Identification of Probabilities | The problem is to identify a probability associated with a set of natural
numbers, given an infinite data sequence of elements from the set. If the given
sequence is drawn i.i.d. and the probability mass function involved (the
target) belongs to a computably enumerable (c.e.) or co-computably enumerable
(co-c.e.) set of computable probability mass functions, then there is an
algorithm to almost surely identify the target in the limit. The technical tool
is the strong law of large numbers. If the set is finite and the elements of
the sequence are dependent while the sequence is typical in the sense of
Martin-L\"of for at least one measure belonging to a c.e. or co-c.e. set of
computable measures, then there is an algorithm to identify in the limit a
computable measure for which the sequence is typical (there may be more than
one such measure). The technical tool is the theory of Kolmogorov complexity.
We give the algorithms and consider the associated predictions.
| [
"Paul M.B. Vitanyi (CWI and University of Amsterdam, NL), Nick Chater\n (University of Warwick, UK)",
"['Paul M. B. Vitanyi' 'Nick Chater']"
] |
cs.LG cs.CV cs.IR | null | 1311.7662 | null | null | http://arxiv.org/pdf/1311.7662v1 | 2013-11-29T18:53:32Z | 2013-11-29T18:53:32Z | The Power of Asymmetry in Binary Hashing | When approximating binary similarity using the hamming distance between short
binary hashes, we show that even if the similarity is symmetric, we can have
shorter and more accurate hashes by using two distinct code maps, i.e., by
approximating the similarity between $x$ and $x'$ as the hamming distance
between $f(x)$ and $g(x')$, for two distinct binary codes $f,g$, rather than as
the hamming distance between $f(x)$ and $f(x')$.
| [
"['Behnam Neyshabur' 'Payman Yadollahpour' 'Yury Makarychev'\n 'Ruslan Salakhutdinov' 'Nathan Srebro']",
"Behnam Neyshabur, Payman Yadollahpour, Yury Makarychev, Ruslan\n Salakhutdinov, Nathan Srebro"
] |
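The asymmetric-hashing abstract above approximates a symmetric similarity using two distinct code maps f and g. The toy sketch below only illustrates the evaluation side of that idea: random linear hash functions stand in for learned maps, so the dimensions and thresholds are assumptions rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
d, b = 16, 8                       # input dimension, number of hash bits (assumed)

# Two distinct random linear code maps standing in for learned f and g.
Wf, Wg = rng.normal(size=(b, d)), rng.normal(size=(b, d))
f = lambda x: (Wf @ x > 0).astype(int)
g = lambda x: (Wg @ x > 0).astype(int)

def asymmetric_hamming(x, x_prime):
    """Hamming distance between f(x) and g(x'), as in asymmetric binary hashing."""
    return int(np.sum(f(x) != g(x_prime)))

x, x_prime = rng.normal(size=d), rng.normal(size=d)
print("symmetric  d_H(f(x), f(x')):", int(np.sum(f(x) != f(x_prime))))
print("asymmetric d_H(f(x), g(x')):", asymmetric_hamming(x, x_prime))
```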
cs.LG | null | 1311.7679 | null | null | http://arxiv.org/pdf/1311.7679v1 | 2013-11-29T20:01:10Z | 2013-11-29T20:01:10Z | Combination of Diverse Ranking Models for Personalized Expedia Hotel
Searches | The ICDM Challenge 2013 is to apply machine learning to the problem of hotel
ranking, aiming to maximize purchases according to given hotel characteristics,
location attractiveness of hotels, user's aggregated purchase history and
competitive online travel agency information for each potential hotel choice.
This paper describes the solution of team "binghsu & MLRush & BrickMover". We
conduct simple feature engineering work and train different models by each
individual team member. Afterwards, we use listwise ensemble method to combine
each model's output. Besides describing effective models and features, we
discuss the lessons we learned while using deep learning in this
competition.
| [
"Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li,\n Hanxiao Sun, Bin Wang",
"['Xudong Liu' 'Bing Xu' 'Yuyu Zhang' 'Qiang Yan' 'Liang Pang' 'Qiang Li'\n 'Hanxiao Sun' 'Bin Wang']"
] |
cs.LG | null | 1312.0048 | null | null | http://arxiv.org/pdf/1312.0048v1 | 2013-11-30T01:07:25Z | 2013-11-30T01:07:25Z | Stochastic Optimization of Smooth Loss | In this paper, we first prove a high probability bound rather than an
expectation bound for stochastic optimization with smooth loss. Furthermore,
the existing analysis requires knowledge of the optimal classifier for tuning
the step size in order to achieve the desired bound. However, this information
is usually not accessible in advance. We also propose a strategy to address
the limitation.
| [
"['Rong Jin']",
"Rong Jin"
] |
cs.LG cs.AI | 10.1017/S026988891300043X | 1312.0049 | null | null | http://arxiv.org/abs/1312.0049v1 | 2013-11-30T01:52:36Z | 2013-11-30T01:52:36Z | One-Class Classification: Taxonomy of Study and Review of Techniques | One-class classification (OCC) algorithms aim to build classification models
when the negative class is either absent, poorly sampled or not well defined.
This unique situation constrains the learning of efficient classifiers by
defining class boundary just with the knowledge of positive class. The OCC
problem has been considered and applied under many research themes, such as
outlier/novelty detection and concept learning. In this paper we present a
unified view of the general problem of OCC by presenting a taxonomy of study
for OCC problems, which is based on the availability of training data,
algorithms used and the application domains. We further delve into each
of the categories of the proposed taxonomy and present a comprehensive
literature review of the OCC algorithms, techniques and methodologies with a
focus on their significance, limitations and applications. We conclude our
paper by discussing some open research problems in the field of OCC and present
our vision for future research.
| [
"Shehroz S.Khan, Michael G.Madden",
"['Shehroz S. Khan' 'Michael G. Madden']"
] |
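The one-class classification survey above covers settings where only the positive class is available at training time. As a concrete illustration (not taken from the paper), the sketch below fits a standard one-class SVM from scikit-learn to positive-only data; the kernel, `nu`, and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
# Training data: positive class only (no negatives available), as in the OCC setting.
X_pos = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

# nu bounds the fraction of training points treated as outliers (assumed value).
clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(X_pos)

# At test time, points are scored against the learned boundary of the positive class.
X_test = np.vstack([rng.normal(size=(5, 2)),            # likely positives
                    rng.normal(loc=6.0, size=(5, 2))])  # likely outliers / novelties
print(clf.predict(X_test))   # +1 = inside the positive-class boundary, -1 = outlier
```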
null | null | 1312.0232 | null | null | http://arxiv.org/pdf/1312.0232v4 | 2017-05-30T13:19:17Z | 2013-12-01T15:16:25Z | Stochastic continuum armed bandit problem of few linear parameters in
high dimensions | We consider a stochastic continuum armed bandit problem where the arms are indexed by the $\ell_2$ ball $B_{d}(1+\nu)$ of radius $1+\nu$ in $\mathbb{R}^d$. The reward functions $r : B_{d}(1+\nu) \rightarrow \mathbb{R}$ are considered to intrinsically depend on $k \ll d$ unknown linear parameters so that $r(\mathbf{x}) = g(\mathbf{A} \mathbf{x})$ where $\mathbf{A}$ is a full rank $k \times d$ matrix. Assuming the mean reward function to be smooth we make use of results from low-rank matrix recovery literature and derive an efficient randomized algorithm which achieves a regret bound of $O(C(k,d) n^{\frac{1+k}{2+k}} (\log n)^{\frac{1}{2+k}})$ with high probability. Here $C(k,d)$ is at most polynomial in $d$ and $k$ and $n$ is the number of rounds or the sampling budget which is assumed to be known beforehand. | [
"['Hemant Tyagi' 'Sebastian Stich' 'Bernd Gärtner']"
] |
cs.LG stat.ML | null | 1312.0286 | null | null | http://arxiv.org/pdf/1312.0286v2 | 2014-07-20T20:16:44Z | 2013-12-01T23:17:06Z | Efficient Learning and Planning with Compressed Predictive States | Predictive state representations (PSRs) offer an expressive framework for
modelling partially observable systems. By compactly representing systems as
functions of observable quantities, the PSR learning approach avoids using
local-minima prone expectation-maximization and instead employs a globally
optimal moment-based algorithm. Moreover, since PSRs do not require a
predetermined latent state structure as an input, they offer an attractive
framework for model-based reinforcement learning when agents must plan without
a priori access to a system model. Unfortunately, the expressiveness of PSRs
comes with significant computational cost, and this cost is a major factor
inhibiting the use of PSRs in applications. In order to alleviate this
shortcoming, we introduce the notion of compressed PSRs (CPSRs). The CPSR
learning approach combines recent advancements in dimensionality reduction,
incremental matrix decomposition, and compressed sensing. We show how this
approach provides a principled avenue for learning accurate approximations of
PSRs, drastically reducing the computational costs associated with learning
while also providing effective regularization. Going further, we propose a
planning framework which exploits these learned models. And we show that this
approach facilitates model-learning and planning in large complex partially
observable domains, a task that is infeasible without the principled use of
compression.
| [
"William L. Hamilton, Mahdi Milani Fard, and Joelle Pineau",
"['William L. Hamilton' 'Mahdi Milani Fard' 'Joelle Pineau']"
] |
cs.LG | null | 1312.0412 | null | null | http://arxiv.org/pdf/1312.0412v1 | 2013-12-02T10:58:01Z | 2013-12-02T10:58:01Z | Practical Collapsed Stochastic Variational Inference for the HDP | Recent advances have made it feasible to apply the stochastic variational
paradigm to a collapsed representation of latent Dirichlet allocation (LDA).
While the stochastic variational paradigm has successfully been applied to an
uncollapsed representation of the hierarchical Dirichlet process (HDP), no
attempts to apply this type of inference in a collapsed setting of
non-parametric topic modeling have been put forward so far. In this paper we
explore such a collapsed stochastic variational Bayes inference for the HDP.
The proposed online algorithm is easy to implement and accounts for the
inference of hyper-parameters. First experiments show a promising improvement
in predictive performance.
| [
"['Arnim Bleier']",
"Arnim Bleier"
] |
math.PR cs.LG stat.ML | null | 1312.0451 | null | null | http://arxiv.org/pdf/1312.0451v5 | 2014-01-21T08:24:07Z | 2013-12-02T13:41:44Z | Consistency of weighted majority votes | We revisit the classical decision-theoretic problem of weighted expert voting
from a statistical learning perspective. In particular, we examine the
consistency (both asymptotic and finitary) of the optimal Nitzan-Paroush
weighted majority and related rules. In the case of known expert competence
levels, we give sharp error estimates for the optimal rule. When the competence
levels are unknown, they must be empirically estimated. We provide frequentist
and Bayesian analyses for this situation. Some of our proof techniques are
non-standard and may be of independent interest. The bounds we derive are
nearly optimal, and several challenging open problems are posed. Experimental
results are provided to illustrate the theory.
| [
"['Daniel Berend' 'Aryeh Kontorovich']",
"Daniel Berend and Aryeh Kontorovich"
] |
cs.LG cs.CL stat.ML | null | 1312.0493 | null | null | http://arxiv.org/pdf/1312.0493v1 | 2013-12-02T15:54:40Z | 2013-12-02T15:54:40Z | Bidirectional Recursive Neural Networks for Token-Level Labeling with
Structure | Recently, deep architectures, such as recurrent and recursive neural networks
have been successfully applied to various natural language processing tasks.
Inspired by bidirectional recurrent neural networks which use representations
that summarize the past and future around an instance, we propose a novel
architecture that aims to capture the structural information around an input,
and use it to label instances. We apply our method to the task of opinion
expression extraction, where we employ the binary parse tree of a sentence as
the structure, and word vector representations as the initial representation of
a single token. We conduct preliminary experiments to investigate its
performance and compare it to the sequential approach.
| [
"Ozan \\.Irsoy, Claire Cardie",
"['Ozan İrsoy' 'Claire Cardie']"
] |
cs.LG | null | 1312.0512 | null | null | http://arxiv.org/pdf/1312.0512v2 | 2014-03-13T12:02:10Z | 2013-12-02T16:47:10Z | Sensing-Aware Kernel SVM | We propose a novel approach for designing kernels for support vector machines
(SVMs) when the class label is linked to the observation through a latent state
and the likelihood function of the observation given the state (the sensing
model) is available. We show that the Bayes-optimum decision boundary is a
hyperplane under a mapping defined by the likelihood function. Combining this
with the maximum margin principle yields kernels for SVMs that leverage
knowledge of the sensing model in an optimal way. We derive the optimum kernel
for the bag-of-words (BoWs) sensing model and demonstrate its superior
performance over other kernels in document and image classification tasks.
These results indicate that such optimum sensing-aware kernel SVMs can match
the performance of rather sophisticated state-of-the-art approaches.
| [
"Weicong Ding, Prakash Ishwar, Venkatesh Saligrama, W. Clem Karl",
"['Weicong Ding' 'Prakash Ishwar' 'Venkatesh Saligrama' 'W. Clem Karl']"
] |
cs.LG cs.SY stat.AP stat.ML | null | 1312.0516 | null | null | http://arxiv.org/pdf/1312.0516v2 | 2014-02-14T00:35:43Z | 2013-12-02T16:58:10Z | Grid Topology Identification using Electricity Prices | The potential of recovering the topology of a grid using solely publicly
available market data is explored here. In contemporary wholesale electricity
markets, real-time prices are typically determined by solving the
network-constrained economic dispatch problem. Under a linear DC model,
locational marginal prices (LMPs) correspond to the Lagrange multipliers of the
linear program involved. The interesting observation here is that the matrix of
spatiotemporally varying LMPs exhibits the following property: Once
premultiplied by the weighted grid Laplacian, it yields a low-rank and sparse
matrix. Leveraging this rich structure, a regularized maximum likelihood
estimator (MLE) is developed to recover the grid Laplacian from the LMPs. The
convex optimization problem formulated includes low rank- and
sparsity-promoting regularizers, and it is solved using a scalable algorithm.
Numerical tests on prices generated for the IEEE 14-bus benchmark provide
encouraging topology recovery results.
| [
"['Vassilis Kekatos' 'Georgios B. Giannakis' 'Ross Baldick']",
"Vassilis Kekatos, Georgios B. Giannakis, Ross Baldick"
] |
cs.LG | null | 1312.0579 | null | null | http://arxiv.org/pdf/1312.0579v1 | 2013-12-02T20:26:41Z | 2013-12-02T20:26:41Z | SpeedMachines: Anytime Structured Prediction | Structured prediction plays a central role in machine learning applications
from computational biology to computer vision. These models require
significantly more computation than unstructured models, and, in many
applications, algorithms may need to make predictions within a computational
budget or in an anytime fashion. In this work we propose an anytime technique
for learning structured prediction that, at training time, incorporates both
structural elements and feature computation trade-offs that affect test-time
inference. We apply our technique to the challenging problem of scene
understanding in computer vision and demonstrate efficient and anytime
predictions that gradually improve towards state-of-the-art classification
performance as the allotted time increases.
| [
"Alexander Grubb, Daniel Munoz, J. Andrew Bagnell, Martial Hebert",
"['Alexander Grubb' 'Daniel Munoz' 'J. Andrew Bagnell' 'Martial Hebert']"
] |
cs.LG stat.ML | null | 1312.0624 | null | null | http://arxiv.org/pdf/1312.0624v2 | 2013-12-13T18:47:20Z | 2013-12-02T21:09:40Z | Efficient coordinate-descent for orthogonal matrices through Givens
rotations | Optimizing over the set of orthogonal matrices is a central component in
problems like sparse-PCA or tensor decomposition. Unfortunately, such
optimization is hard since simple operations on orthogonal matrices easily
break orthogonality, and correcting orthogonality usually costs a large amount
of computation. Here we propose a framework for optimizing orthogonal matrices,
that is the parallel of coordinate-descent in Euclidean spaces. It is based on
{\em Givens-rotations}, a fast-to-compute operation that affects a small number
of entries in the learned matrix, and preserves orthogonality. We show two
applications of this approach: an algorithm for tensor decomposition that is
used in learning mixture models, and an algorithm for sparse-PCA. We study the
parameter regime where a Givens rotation approach converges faster and achieves
a superior model on a genome-wide brain-wide mRNA expression dataset.
| [
"Uri Shalit and Gal Chechik",
"['Uri Shalit' 'Gal Chechik']"
] |
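The Givens-rotation abstract above optimizes over orthogonal matrices by repeatedly applying rotations that touch only two coordinates, so orthogonality is preserved exactly at every step. The sketch below shows that mechanic for a generic smooth objective; the objective, the grid search over angles, and the sweep schedule are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def apply_givens(U, i, j, theta):
    """Return U rotated in the (i, j) coordinate plane; orthogonality is preserved."""
    c, s = np.cos(theta), np.sin(theta)
    V = U.copy()
    V[:, [i, j]] = U[:, [i, j]] @ np.array([[c, -s], [s, c]])
    return V

def coordinate_descent_orthogonal(objective, n, sweeps=20,
                                  angles=np.linspace(-np.pi, np.pi, 65)):
    """Greedy Givens-rotation descent: for each coordinate pair, pick the best angle."""
    U = np.eye(n)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                best = min(angles, key=lambda t: objective(apply_givens(U, i, j, t)))
                U = apply_givens(U, i, j, best)
    return U

# Toy objective: align U with a random orthogonal target Q (assumed example).
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
obj = lambda U: np.linalg.norm(U - Q) ** 2
U = coordinate_descent_orthogonal(obj, 4)
print("final objective:", obj(U),
      " orthogonality error:", np.linalg.norm(U.T @ U - np.eye(4)))
```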
cs.LG | null | 1312.0786 | null | null | http://arxiv.org/pdf/1312.0786v2 | 2014-02-19T11:13:57Z | 2013-12-03T11:59:57Z | Image Representation Learning Using Graph Regularized Auto-Encoders | We consider the problem of image representation for the tasks of unsupervised
learning and semi-supervised learning. In those learning tasks, the raw image
vectors may not provide enough representation for their intrinsic structures
due to their highly dense feature space. To overcome this problem, the raw
image vectors should be mapped to a proper representation space which can
capture the latent structure of the original data and represent the data
explicitly for further learning tasks such as clustering.
Inspired by the recent research works on deep neural network and
representation learning, in this paper, we introduce the multiple-layer
auto-encoder into image representation. We also apply the locally invariant
idea to our image representation with auto-encoders and propose a novel
method, called Graph regularized Auto-Encoder (GAE). GAE can provide a compact
representation which uncovers the hidden semantics and simultaneously respects
the intrinsic geometric structure.
Extensive experiments on image clustering show encouraging results of the
proposed algorithm in comparison to the state-of-the-art algorithms on
real-world cases.
| [
"['Yiyi Liao' 'Yue Wang' 'Yong Liu']",
"Yiyi Liao, Yue Wang, Yong Liu"
] |
cs.AI cs.LG stat.ML | null | 1312.0790 | null | null | http://arxiv.org/pdf/1312.0790v2 | 2014-03-14T16:36:36Z | 2013-12-03T12:12:23Z | Test Set Selection using Active Information Acquisition for Predictive
Models | In this paper, we consider active information acquisition when the prediction
model is meant to be applied on a targeted subset of the population. The goal
is to label a pre-specified fraction of customers in the target or test set by
iteratively querying for information from the non-target or training set. The
number of queries is limited by an overall budget. Arising in the context of
two rather disparate applications, banking and medical diagnosis, we pose the
active information acquisition problem as a constrained optimization problem.
We propose two greedy iterative algorithms for solving the above problem. We
conduct experiments with synthetic data and compare results of our proposed
algorithms with a few other baseline approaches. The experimental results show
that our proposed approaches perform better than the baseline schemes.
| [
"Sneha Chaudhari, Pankaj Dayama, Vinayaka Pandit, Indrajit Bhattacharya",
"['Sneha Chaudhari' 'Pankaj Dayama' 'Vinayaka Pandit'\n 'Indrajit Bhattacharya']"
] |
cs.LG cs.DS stat.ML | null | 1312.0925 | null | null | http://arxiv.org/pdf/1312.0925v3 | 2014-05-14T19:54:58Z | 2013-12-03T20:37:28Z | Understanding Alternating Minimization for Matrix Completion | Alternating Minimization is a widely used and empirically successful
heuristic for matrix completion and related low-rank optimization problems.
Theoretical guarantees for Alternating Minimization have been hard to come by
and are still poorly understood. This is in part because the heuristic is
iterative and non-convex in nature. We give a new algorithm based on
Alternating Minimization that provably recovers an unknown low-rank matrix from
a random subsample of its entries under a standard incoherence assumption. Our
results reduce the sample size requirements of the Alternating Minimization
approach by at least a quartic factor in the rank and the condition number of
the unknown matrix. These improvements apply even if the matrix is only close
to low-rank in the Frobenius norm. Our algorithm runs in nearly linear time in
the dimension of the matrix and, in a broad range of parameters, gives the
strongest sample bounds among all subquadratic time algorithms that we are
aware of.
Underlying our work is a new robust convergence analysis of the well-known
Power Method for computing the dominant singular vectors of a matrix. This
viewpoint leads to a conceptually simple understanding of Alternating
Minimization. In addition, we contribute a new technique for controlling the
coherence of intermediate solutions arising in iterative algorithms based on a
smoothed analysis of the QR factorization. These techniques may be of interest
beyond their application here.
| [
"['Moritz Hardt']",
"Moritz Hardt"
] |
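The abstract above analyzes Alternating Minimization for matrix completion. The sketch below shows the basic alternating least-squares loop it refers to, without the paper's additional ingredients (SVD-based initialization, smoothed QR, coherence control); the rank, regularization, and toy data are assumptions.

```python
import numpy as np

def altmin_complete(M, mask, rank, iters=50, lam=1e-3):
    """Basic alternating least squares for matrix completion.

    Holds V fixed and solves a small ridge regression for each row of U, then
    swaps roles; only entries where mask is True contribute to the fit.
    """
    m, n = M.shape
    rng = np.random.default_rng(0)
    U, V = rng.normal(size=(m, rank)), rng.normal(size=(n, rank))
    for _ in range(iters):
        for i in range(m):
            obs = mask[i]
            A = V[obs].T @ V[obs] + lam * np.eye(rank)
            U[i] = np.linalg.solve(A, V[obs].T @ M[i, obs])
        for j in range(n):
            obs = mask[:, j]
            A = U[obs].T @ U[obs] + lam * np.eye(rank)
            V[j] = np.linalg.solve(A, U[obs].T @ M[obs, j])
    return U @ V.T

# Toy example: recover a random rank-2 matrix from ~50% of its entries (assumed setup).
rng = np.random.default_rng(3)
M = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
mask = rng.random(M.shape) < 0.5
M_hat = altmin_complete(M, mask, rank=2)
print("relative error on unobserved entries:",
      np.linalg.norm((M - M_hat)[~mask]) / np.linalg.norm(M[~mask]))
```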
cs.DC cs.LG | null | 1312.1031 | null | null | http://arxiv.org/pdf/1312.1031v2 | 2014-03-23T22:13:17Z | 2013-12-04T05:48:30Z | Analysis of Distributed Stochastic Dual Coordinate Ascent | In \citep{Yangnips13}, the author presented distributed stochastic dual
coordinate ascent (DisDCA) algorithms for solving large-scale regularized loss
minimization. Extraordinary performances have been observed and reported for
the well-motivated updates, referred to as the practical updates, compared to
the naive updates. However, no serious analysis has been provided to understand
the updates and therefore the convergence rates. In the paper, we bridge the
gap by providing a theoretical analysis of the convergence rates of the
practical DisDCA algorithm. Our analysis, supported by empirical studies, shows
that it can yield an exponential speed-up in convergence by increasing
the number of dual updates at each iteration. This result justifies the
superior performances of the practical DisDCA as compared to the naive variant.
As a byproduct, our analysis also reveals the convergence behavior of the
one-communication DisDCA.
| [
"Tianbao Yang, Shenghuo Zhu, Rong Jin, Yuanqing Lin",
"['Tianbao Yang' 'Shenghuo Zhu' 'Rong Jin' 'Yuanqing Lin']"
] |
cs.DS cs.LG math.PR math.ST stat.TH | null | 1312.1054 | null | null | http://arxiv.org/pdf/1312.1054v3 | 2014-05-19T13:26:05Z | 2013-12-04T08:31:58Z | Faster and Sample Near-Optimal Algorithms for Proper Learning Mixtures
of Gaussians | We provide an algorithm for properly learning mixtures of two
single-dimensional Gaussians without any separability assumptions. Given
$\tilde{O}(1/\varepsilon^2)$ samples from an unknown mixture, our algorithm
outputs a mixture that is $\varepsilon$-close in total variation distance, in
time $\tilde{O}(1/\varepsilon^5)$. Our sample complexity is optimal up to
logarithmic factors, and significantly improves upon both Kalai et al., whose
algorithm has a prohibitive dependence on $1/\varepsilon$, and Feldman et al.,
whose algorithm requires bounds on the mixture parameters and depends
pseudo-polynomially in these parameters.
One of our main contributions is an improved and generalized algorithm for
selecting a good candidate distribution from among competing hypotheses.
Namely, given a collection of $N$ hypotheses containing at least one candidate
that is $\varepsilon$-close to an unknown distribution, our algorithm outputs a
candidate which is $O(\varepsilon)$-close to the distribution. The algorithm
requires ${O}(\log{N}/\varepsilon^2)$ samples from the unknown distribution and
${O}(N \log N/\varepsilon^2)$ time, which improves previous such results (such
as the Scheff\'e estimator) from a quadratic dependence of the running time on
$N$ to quasilinear. Given the wide use of such results for the purpose of
hypothesis selection, our improved algorithm implies immediate improvements to
any such use.
| [
"Constantinos Daskalakis, Gautam Kamath",
"['Constantinos Daskalakis' 'Gautam Kamath']"
] |
stat.ML cs.LG | null | 1312.1099 | null | null | http://arxiv.org/pdf/1312.1099v1 | 2013-12-04T10:44:01Z | 2013-12-04T10:44:01Z | Multiscale Dictionary Learning for Estimating Conditional Distributions | Nonparametric estimation of the conditional distribution of a response given
high-dimensional features is a challenging problem. It is important to allow
not only the mean but also the variance and shape of the response density to
change flexibly with features, which are massive-dimensional. We propose a
multiscale dictionary learning model, which expresses the conditional response
density as a convex combination of dictionary densities, with the densities
used and their weights dependent on the path through a tree decomposition of
the feature space. A fast graph partitioning algorithm is applied to obtain the
tree decomposition, with Bayesian methods then used to adaptively prune and
average over different sub-trees in a soft probabilistic manner. The algorithm
scales efficiently to approximately one million features. State of the art
predictive performance is demonstrated for toy examples and two neuroscience
applications including up to a million features.
| [
"['Francesca Petralia' 'Joshua Vogelstein' 'David B. Dunson']",
"Francesca Petralia, Joshua Vogelstein and David B. Dunson"
] |
cs.LG | null | 1312.1121 | null | null | http://arxiv.org/pdf/1312.1121v1 | 2013-12-04T11:57:53Z | 2013-12-04T11:57:53Z | Interpreting random forest classification models using a feature
contribution method | Model interpretation is one of the key aspects of the model evaluation
process. The explanation of the relationship between model variables and
outputs is relatively easy for statistical models, such as linear regressions,
thanks to the availability of model parameters and their statistical
significance. For "black box" models, such as random forest, this information
is hidden inside the model structure. This work presents an approach for
computing feature contributions for random forest classification models. It
allows for the determination of the influence of each variable on the model
prediction for an individual instance. By analysing feature contributions for a
training dataset, the most significant variables can be determined and their
typical contribution towards predictions made for individual classes, i.e.,
class-specific feature contribution "patterns", are discovered. These patterns
represent a standard behaviour of the model and allow for an additional
assessment of the model reliability for new data. Interpretation of feature
contributions for two UCI benchmark datasets shows the potential of the
proposed methodology. The robustness of results is demonstrated through an
extensive analysis of feature contributions calculated for a large number of
generated random forest models.
| [
"['Anna Palczewska' 'Jan Palczewski' 'Richard Marchese Robinson'\n 'Daniel Neagu']",
"Anna Palczewska and Jan Palczewski and Richard Marchese Robinson and\n Daniel Neagu"
] |
cs.DS cs.LG | null | 1312.1277 | null | null | http://arxiv.org/pdf/1312.1277v4 | 2019-04-15T14:49:36Z | 2013-12-04T18:48:00Z | Bandits and Experts in Metric Spaces | In a multi-armed bandit problem, an online algorithm chooses from a set of
strategies in a sequence of trials so as to maximize the total payoff of the
chosen strategies. While the performance of bandit algorithms with a small
finite strategy set is quite well understood, bandit problems with large
strategy sets are still a topic of very active investigation, motivated by
practical applications such as online auctions and web advertisement. The goal
of such research is to identify broad and natural classes of strategy sets and
payoff functions which enable the design of efficient solutions.
In this work we study a very general setting for the multi-armed bandit
problem in which the strategies form a metric space, and the payoff function
satisfies a Lipschitz condition with respect to the metric. We refer to this
problem as the "Lipschitz MAB problem". We present a solution for the
multi-armed bandit problem in this setting. That is, for every metric space we
define an isometry invariant which bounds from below the performance of
Lipschitz MAB algorithms for this metric space, and we present an algorithm
which comes arbitrarily close to meeting this bound. Furthermore, our technique
gives even better results for benign payoff functions. We also address the
full-feedback ("best expert") version of the problem, where after every round
the payoffs from all arms are revealed.
| [
"Robert Kleinberg, Aleksandrs Slivkins and Eli Upfal",
"['Robert Kleinberg' 'Aleksandrs Slivkins' 'Eli Upfal']"
] |
cs.LG | null | 1312.1530 | null | null | http://arxiv.org/pdf/1312.1530v2 | 2014-07-06T12:47:36Z | 2013-12-05T13:00:23Z | Bandit Online Optimization Over the Permutahedron | The permutahedron is the convex polytope with vertex set consisting of the
vectors $(\pi(1),\dots, \pi(n))$ for all permutations (bijections) $\pi$ over
$\{1,\dots, n\}$. We study a bandit game in which, at each step $t$, an
adversary chooses a hidden weight vector $s_t$, a player chooses a
vertex $\pi_t$ of the permutahedron and suffers an observed loss of
$\sum_{i=1}^n \pi(i) s_t(i)$.
A previous algorithm CombBand of Cesa-Bianchi et al (2009) guarantees a
regret of $O(n\sqrt{T \log n})$ for a time horizon of $T$. Unfortunately,
CombBand requires at each step an $n$-by-$n$ matrix permanent approximation to
within improved accuracy as $T$ grows, resulting in a total running time that
is super linear in $T$, making it impractical for large time horizons.
We provide an algorithm of regret $O(n^{3/2}\sqrt{T})$ with total time
complexity $O(n^3T)$. The ideas are a combination of CombBand and a recent
algorithm by Ailon (2013) for online optimization over the permutahedron in the
full information setting. The technical core is a bound on the variance of the
Plackett-Luce noisy sorting process's "pseudo loss". The bound is obtained by
establishing positive semi-definiteness of a family of 3-by-3 matrices
generated from rational functions of exponentials of 3 parameters.
| [
"['Nir Ailon' 'Kohei Hatano' 'Eiji Takimoto']",
"Nir Ailon and Kohei Hatano and Eiji Takimoto"
] |
stat.ML cs.LG cs.NA | null | 1312.1613 | null | null | http://arxiv.org/pdf/1312.1613v1 | 2013-12-05T16:49:05Z | 2013-12-05T16:49:05Z | Max-Min Distance Nonnegative Matrix Factorization | Nonnegative Matrix Factorization (NMF) has been a popular representation
method for pattern classification problem. It tries to decompose a nonnegative
matrix of data samples as the product of a nonnegative basic matrix and a
nonnegative coefficient matrix, and the coefficient matrix is used as the new
representation. However, traditional NMF methods ignore the class labels of the
data samples. In this paper, we propose a novel supervised NMF algorithm to
improve the discriminative ability of the new representation. Using the class
labels, we separate all the data sample pairs into within-class pairs and
between-class pairs. To improve the discriminative ability of the new NMF
representations, we hope that the maximum distance of the within-class pairs in
the new NMF space could be minimized, while the minimum distance of the
between-class pairs could be maximized. With this criterion, we construct
an objective function and optimize it with regard to basic and coefficient
matrices and slack variables alternately, resulting in an iterative algorithm.
| [
"['Jim Jing-Yan Wang']",
"Jim Jing-Yan Wang"
] |
stat.ML cs.LG cs.NA math.NA math.OC | null | 1312.1666 | null | null | http://arxiv.org/pdf/1312.1666v2 | 2015-06-16T05:05:40Z | 2013-12-05T20:04:52Z | Semi-Stochastic Gradient Descent Methods | In this paper we study the problem of minimizing the average of a large
number ($n$) of smooth convex loss functions. We propose a new method, S2GD
(Semi-Stochastic Gradient Descent), which runs for one or several epochs in
each of which a single full gradient and a random number of stochastic
gradients is computed, following a geometric law. The total work needed for the
method to output an $\varepsilon$-accurate solution in expectation, measured in
the number of passes over data, or equivalently, in units equivalent to the
computation of a single gradient of the loss, is
$O((\kappa/n)\log(1/\varepsilon))$, where $\kappa$ is the condition number.
This is achieved by running the method for $O(\log(1/\varepsilon))$ epochs,
with a single gradient evaluation and $O(\kappa)$ stochastic gradient
evaluations in each. The SVRG method of Johnson and Zhang arises as a special
case. If our method is limited to a single epoch only, it needs to evaluate at
most $O((\kappa/\varepsilon)\log(1/\varepsilon))$ stochastic gradients. In
contrast, SVRG requires $O(\kappa/\varepsilon^2)$ stochastic gradients. To
illustrate our theoretical results, S2GD only needs the workload equivalent to
about 2.1 full gradient evaluations to find an $10^{-6}$-accurate solution for
a problem with $n=10^9$ and $\kappa=10^3$.
| [
"['Jakub Konečný' 'Peter Richtárik']",
"Jakub Kone\\v{c}n\\'y and Peter Richt\\'arik"
] |
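The S2GD abstract above describes epochs that combine one full gradient with a random number of variance-reduced stochastic gradient corrections. The sketch below implements that style of update for ridge-regularized least squares; the geometric choice of the inner-loop length, the step size, and the data are simplified assumptions (the paper's sampling law and constants differ).

```python
import numpy as np

def s2gd_like(X, y, lam=0.1, step=0.1, epochs=10, rng=np.random.default_rng(0)):
    """Semi-stochastic gradient sketch for f(w) = mean((Xw - y)^2)/2 + lam/2 ||w||^2."""
    n, d = X.shape
    grad_i = lambda w, i: (X[i] @ w - y[i]) * X[i] + lam * w   # gradient of one loss term
    w = np.zeros(d)
    for _ in range(epochs):
        mu = (X.T @ (X @ w - y)) / n + lam * w       # one full gradient per epoch
        w_ref, m = w.copy(), rng.geometric(p=0.02)   # random inner-loop length (assumed law)
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic step: correct the stale gradient with mu
            w = w - step * (grad_i(w, i) - grad_i(w_ref, i) + mu)
    return w

# Toy data (assumed): noisy linear model.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=500)
print("distance to true weights:", np.linalg.norm(s2gd_like(X, y, lam=0.01) - w_true))
```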
cs.LG | null | 1312.1737 | null | null | http://arxiv.org/pdf/1312.1737v1 | 2013-12-05T23:53:45Z | 2013-12-05T23:53:45Z | Curriculum Learning for Handwritten Text Line Recognition | Recurrent Neural Networks (RNN) have recently achieved the best performance
in off-line Handwriting Text Recognition. At the same time, learning RNN by
gradient descent leads to slow convergence, and training times are particularly
long when the training database consists of full lines of text. In this paper,
we propose an easy way to accelerate stochastic gradient descent in this
set-up, and in the general context of learning to recognize sequences. The
principle is called Curriculum Learning, or shaping. The idea is to first learn
to recognize short sequences before training on all available training
sequences. Experiments on three different handwritten text databases (Rimes,
IAM, OpenHaRT) show that a simple implementation of this strategy can
significantly speed up the training of RNN for Text Recognition, and even
significantly improve performance in some cases.
| [
"J\\'er\\^ome Louradour and Christopher Kermorvant",
"['Jérôme Louradour' 'Christopher Kermorvant']"
] |
cs.LG cs.CV | null | 1312.1743 | null | null | http://arxiv.org/pdf/1312.1743v2 | 2014-06-13T04:10:06Z | 2013-12-06T00:55:51Z | Dual coordinate solvers for large-scale structural SVMs | This manuscript describes a method for training linear SVMs (including binary
SVMs, SVM regression, and structural SVMs) from large, out-of-core training
datasets. Current strategies for large-scale learning fall into one of two
camps: batch algorithms, which solve the learning problem given a finite
dataset, and online algorithms, which can process out-of-core datasets. The
former typically requires datasets small enough to fit in memory. The latter is
often phrased as a stochastic optimization problem; such algorithms enjoy
strong theoretical properties but often require manually tuned annealing
schedules, and may converge slowly for problems with large output spaces (e.g.,
structural SVMs). We discuss an algorithm for an "intermediate" regime in which
the data is too large to fit in memory, but the active constraints (support
vectors) are small enough to remain in memory. In this case, one can design
rather efficient learning algorithms that are as stable as batch algorithms,
but capable of processing out-of-core datasets. We have developed such a
MATLAB-based solver and used it to train a collection of recognition systems
for articulated pose estimation, facial analysis, 3D object recognition, and
action classification, all with publicly-available code. This writeup describes
the solver in detail.
| [
"['Deva Ramanan']",
"Deva Ramanan"
] |
cs.LG | null | 1312.1847 | null | null | http://arxiv.org/pdf/1312.1847v2 | 2014-02-19T17:55:37Z | 2013-12-06T12:55:05Z | Understanding Deep Architectures using a Recursive Convolutional Network | A key challenge in designing convolutional network models is sizing them
appropriately. Many factors are involved in these decisions, including number
of layers, feature maps, kernel sizes, etc. Complicating this further is the
fact that each of these influences not only the numbers and dimensions of the
activation units, but also the total number of parameters. In this paper we
focus on assessing the independent contributions of three of these linked
variables: The numbers of layers, feature maps, and parameters. To accomplish
this, we employ a recursive convolutional network whose weights are tied
between layers; this allows us to vary each of the three factors in a
controlled setting. We find that while increasing the numbers of layers and
parameters each have clear benefit, the number of feature maps (and hence
dimensionality of the representation) appears ancillary, and finds most of its
benefit through the introduction of more weights. Our results (i) empirically
confirm the notion that adding layers alone increases computational power,
within the context of convolutional layers, and (ii) suggest that precise
sizing of convolutional feature map dimensions is itself of little concern;
more attention should be paid to the number of parameters in these layers
instead.
| [
"David Eigen, Jason Rolfe, Rob Fergus, Yann LeCun",
"['David Eigen' 'Jason Rolfe' 'Rob Fergus' 'Yann LeCun']"
] |
cs.NE cs.CV cs.LG stat.ML | null | 1312.1909 | null | null | http://arxiv.org/pdf/1312.1909v1 | 2013-11-18T17:56:11Z | 2013-11-18T17:56:11Z | From Maxout to Channel-Out: Encoding Information on Sparse Pathways | Motivated by an important insight from neural science, we propose a new
framework for understanding the success of the recently proposed "maxout"
networks. The framework is based on encoding information on sparse pathways and
recognizing the correct pathway at inference time. Elaborating further on this
insight, we propose a novel deep network architecture, called "channel-out"
network, which takes a much better advantage of sparse pathway encoding. In
channel-out networks, pathways are not only formed a posteriori, but they are
also actively selected according to the inference outputs from the lower
layers. From a mathematical perspective, channel-out networks can represent a
wider class of piece-wise continuous functions, thereby endowing the network
with more expressive power than that of maxout networks. We test our
channel-out networks on several well-known image classification benchmarks,
setting new state-of-the-art performance on CIFAR-100 and STL-10, which
represent some of the "harder" image classification benchmarks.
| [
"['Qi Wang' 'Joseph JaJa']",
"Qi Wang and Joseph JaJa"
] |
cs.SY cs.LG stat.ML | null | 1312.2132 | null | null | http://arxiv.org/pdf/1312.2132v1 | 2013-12-07T19:19:03Z | 2013-12-07T19:19:03Z | Robust Subspace System Identification via Weighted Nuclear Norm
Optimization | Subspace identification is a classical and very well studied problem in
system identification. The problem was recently posed as a convex optimization
problem via the nuclear norm relaxation. Inspired by robust PCA, we extend this
framework to handle outliers. The proposed framework takes the form of a convex
optimization problem with an objective that trades off fit, rank and sparsity.
As in robust PCA, it can be problematic to find a suitable regularization
parameter. We show how the space in which a suitable parameter should be sought
can be limited to a bounded open set of the two dimensional parameter space. In
practice, this is very useful since it restricts the parameter space that is
needed to be surveyed.
| [
"Dorsa Sadigh, Henrik Ohlsson, S. Shankar Sastry, Sanjit A. Seshia",
"['Dorsa Sadigh' 'Henrik Ohlsson' 'S. Shankar Sastry' 'Sanjit A. Seshia']"
] |
cs.LG cs.CL cs.NE | null | 1312.2137 | null | null | http://arxiv.org/pdf/1312.2137v1 | 2013-12-07T19:55:02Z | 2013-12-07T19:55:02Z | End-to-end Phoneme Sequence Recognition using Convolutional Neural
Networks | Most state-of-the-art phoneme recognition systems rely on classical neural
network classifiers, fed with highly tuned features, such as MFCC or PLP
features. Recent advances in ``deep learning'' approaches questioned such
systems, but while some attempts were made with simpler features such as
spectrograms, state-of-the-art systems still rely on MFCCs. This might be
viewed as a kind of failure from deep learning approaches, which are often
claimed to have the ability to train with raw signals, alleviating the need of
hand-crafted features. In this paper, we investigate a convolutional neural
network approach for raw speech signals. While convolutional architectures got
tremendous success in computer vision or text processing, they seem to have
been neglected in recent years in the speech processing field. We show
that it is possible to learn an end-to-end phoneme sequence classifier system
directly from the raw signal, with performance on the TIMIT and WSJ datasets
similar to that of existing systems based on MFCCs, questioning the need for
complex hand-crafted features on large datasets.
| [
"Dimitri Palaz, Ronan Collobert, Mathew Magimai.-Doss",
"['Dimitri Palaz' 'Ronan Collobert' 'Mathew Magimai. -Doss']"
] |
cs.SI cs.LG stat.ML | null | 1312.2154 | null | null | http://arxiv.org/pdf/1312.2154v1 | 2013-12-07T23:42:55Z | 2013-12-07T23:42:55Z | Sequential Monte Carlo Inference of Mixed Membership Stochastic
Blockmodels for Dynamic Social Networks | Many kinds of data can be represented as a network or graph. It is crucial to
infer the latent structure underlying such a network and to predict unobserved
links in the network. Mixed Membership Stochastic Blockmodel (MMSB) is a
promising model for network data. Latent variables and unknown parameters in
MMSB have been estimated through Bayesian inference with the entire network;
however, it is important to estimate them online for evolving networks. In this
paper, we first develop online inference methods for MMSB through sequential
Monte Carlo methods, also known as particle filters. We then extend them for
time-evolving networks, taking into account the temporal dependency of the
network structure. We demonstrate through experiments that the time-dependent
particle filter outperformed several baselines in terms of prediction
performance in an online condition.
| [
"['Tomoki Kobayashi' 'Koji Eguchi']",
"Tomoki Kobayashi, Koji Eguchi"
] |
cs.LG cs.SI stat.ML | null | 1312.2164 | null | null | http://arxiv.org/pdf/1312.2164v2 | 2014-04-16T03:53:04Z | 2013-12-08T01:58:39Z | Budgeted Influence Maximization for Multiple Products | The typical algorithmic problem in viral marketing aims to identify a set of
influential users in a social network, who, when convinced to adopt a product,
shall influence other users in the network and trigger a large cascade of
adoptions. However, the host (the owner of an online social platform) often
faces more constraints than a single product, endless user attentions,
unlimited budget and unbounded time; in reality, multiple products need to be
advertised, each user can tolerate only a small number of recommendations,
influencing user has a cost and advertisers have only limited budgets, and the
adoptions need to be maximized within a short time window.
Given these myriad user, monetary, and timing constraints, it is
extremely challenging for the host to design principled and efficient viral
marketing algorithms with provable guarantees. In this paper, we provide a novel
solution by formulating the problem as a submodular maximization in a
continuous-time diffusion model under an intersection of a matroid and multiple
knapsack constraints. We also propose an adaptive threshold greedy algorithm
which can be faster than the traditional greedy algorithm with lazy evaluation,
and scalable to networks with million of nodes. Furthermore, our mathematical
formulation allows us to prove that the algorithm can achieve an approximation
factor of $k_a/(2+2 k)$ when $k_a$ out of the $k$ knapsack constraints are
active, which also improves over previous guarantees from combinatorial
optimization literature. In the case when influencing each user has uniform
cost, the approximation becomes even better to a factor of $1/3$. Extensive
synthetic and real world experiments demonstrate that our budgeted influence
maximization algorithm achieves state-of-the-art results in terms of both
effectiveness and scalability, often beating the next best by significant
margins.
| [
"Nan Du, Yingyu Liang, Maria Florina Balcan, Le Song",
"['Nan Du' 'Yingyu Liang' 'Maria Florina Balcan' 'Le Song']"
] |
stat.ML cs.LG | null | 1312.2171 | null | null | http://arxiv.org/pdf/1312.2171v3 | 2014-11-24T19:21:22Z | 2013-12-08T03:40:47Z | bartMachine: Machine Learning with Bayesian Additive Regression Trees | We present a new package in R implementing Bayesian additive regression trees
(BART). The package introduces many new features for data analysis using BART
such as variable selection, interaction detection, model diagnostic plots,
incorporation of missing data and the ability to save trees for future
prediction. It is significantly faster than the current R implementation,
parallelized, and capable of handling both large sample sizes and
high-dimensional data.
| [
"['Adam Kapelner' 'Justin Bleich']",
"Adam Kapelner and Justin Bleich"
] |
cs.CR cs.LG cs.NI | null | 1312.2177 | null | null | http://arxiv.org/pdf/1312.2177v2 | 2015-05-09T06:07:35Z | 2013-12-08T06:56:21Z | Machine Learning Techniques for Intrusion Detection | An Intrusion Detection System (IDS) is software that monitors a single computer or a
network of computers for malicious activities (attacks) that are aimed at
stealing or censoring information or corrupting network protocols. Most
techniques used in today's IDS are not able to deal with the dynamic and
complex nature of cyber attacks on computer networks. Hence, efficient adaptive
methods like various techniques of machine learning can result in higher
detection rates, lower false alarm rates and reasonable computation and
communication costs. In this paper, we study several such schemes and compare
their performance. We divide the schemes into methods based on classical
artificial intelligence (AI) and methods based on computational intelligence
(CI). We explain how various characteristics of CI techniques can be used to
build efficient IDS.
| [
"['Mahdi Zamani' 'Mahnush Movahedi']",
"Mahdi Zamani and Mahnush Movahedi"
] |
cs.LG | null | 1312.2451 | null | null | http://arxiv.org/pdf/1312.2451v1 | 2013-12-06T18:25:15Z | 2013-12-06T18:25:15Z | CEAI: CCM based Email Authorship Identification Model | In this paper we present a model for email authorship identification (EAI) by
employing a Cluster-based Classification (CCM) technique. Traditionally,
stylometric features have been successfully employed in various authorship
analysis tasks; we extend the traditional feature-set to include some more
interesting and effective features for email authorship identification (e.g.
the last punctuation mark used in an email, the tendency of an author to use
capitalization at the start of an email, or the punctuation after a greeting or
farewell). We also include content features selected using Information Gain.
It is observed that the use of such features in the authorship identification
process has a positive impact on the accuracy of the authorship identification
task. We performed experiments to justify our arguments and compared the
results with other baseline models. Experimental results reveal that the
proposed CCM-based email authorship identification model, along with the
proposed feature set, outperforms the state-of-the-art support vector machine
(SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The
proposed model attains an accuracy rate of 94% for 10 authors, 89% for 25
authors, and 81% for 50 authors, respectively, on the Enron dataset, while 89.5%
accuracy has been achieved on a real email dataset constructed by the authors. The
results on the Enron dataset have been achieved on quite a large number of authors
as compared to the models proposed by Iqbal et al. [1, 2].
| [
"Sarwat Nizamani, Nasrullah Memon",
"['Sarwat Nizamani' 'Nasrullah Memon']"
] |
cs.CG cs.LG math.DS nlin.CD physics.data-an | null | 1312.2482 | null | null | http://arxiv.org/pdf/1312.2482v2 | 2014-03-24T14:33:37Z | 2013-12-09T16:02:23Z | Automatic recognition and tagging of topologically different regimes in
dynamical systems | Complex systems are commonly modeled using nonlinear dynamical systems. These
models are often high-dimensional and chaotic. An important goal in studying
physical systems through the lens of mathematical models is to determine when
the system undergoes changes in qualitative behavior. A detailed description of
the dynamics can be difficult or impossible to obtain for high-dimensional and
chaotic systems. Therefore, a more sensible goal is to recognize and mark
transitions of a system between qualitatively different regimes of behavior. In
practice, one is interested in developing techniques for detection of such
transitions from sparse observations, possibly contaminated by noise. In this
paper we develop a framework to accurately tag different regimes of complex
systems based on topological features. In particular, our framework works with
a high degree of success in picking out a cyclically orbiting regime from a
stationary equilibrium regime in high-dimensional stochastic dynamical systems.
| [
"Jesse Berwald, Marian Gidea and Mikael Vejdemo-Johansson",
"['Jesse Berwald' 'Marian Gidea' 'Mikael Vejdemo-Johansson']"
] |
cs.LG | 10.1109/IJCNN.2013.6706862 | 1312.2578 | null | null | http://arxiv.org/abs/1312.2578v2 | 2014-04-28T20:08:47Z | 2013-12-09T20:58:16Z | Kernel-based Distance Metric Learning in the Output Space | In this paper we present two related, kernel-based Distance Metric Learning
(DML) methods. Their respective models non-linearly map data from their
original space to an output space, and subsequent distance measurements are
performed in the output space via a Mahalanobis metric. The dimensionality of
the output space can be directly controlled to facilitate the learning of a
low-rank metric. Both methods allow for simultaneous inference of the
associated metric and the mapping to the output space, which can be used to
visualize the data, when the output space is 2- or 3-dimensional. Experimental
results for a collection of classification tasks illustrate the advantages of
the proposed methods over other traditional and kernel-based DML approaches.
| [
"Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos",
"['Cong Li' 'Michael Georgiopoulos' 'Georgios C. Anagnostopoulos']"
] |
cs.LG | null | 1312.2606 | null | null | http://arxiv.org/pdf/1312.2606v1 | 2013-12-09T21:27:23Z | 2013-12-09T21:27:23Z | Multi-Task Classification Hypothesis Space with Improved Generalization
Bounds | This paper presents a RKHS, in general, of vector-valued functions intended
to be used as hypothesis space for multi-task classification. It extends
similar hypothesis spaces that have previously been considered in the literature.
Assuming this space, an improved Empirical Rademacher Complexity-based
generalization bound is derived. The analysis is itself extended to an MKL
setting. The connection between the proposed hypothesis space and a Group-Lasso
type regularizer is discussed. Finally, experimental results, with some
SVM-based Multi-Task Learning problems, underline the quality of the derived
bounds and validate the paper's analysis.
| [
"Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos",
"['Cong Li' 'Michael Georgiopoulos' 'Georgios C. Anagnostopoulos']"
] |
cs.LG cs.AI | null | 1312.2710 | null | null | http://arxiv.org/pdf/1312.2710v1 | 2013-12-10T08:11:14Z | 2013-12-10T08:11:14Z | Improving circuit miniaturization and its efficiency using Rough Set
Theory | High speed, accuracy, meticulousness and quick response are vital
necessities for the modern digital world. An efficient electronic circuit
directly affects the operation of the whole system. Different tools are
required to solve different types of engineering problems. Improving the
efficiency and accuracy of an electronic circuit while lowering its power
consumption has always been a bottleneck, so the need for circuit
miniaturization is ever present. It saves much of the time and power wasted in
the switching of gates, reduces the wiring crisis and the cross-sectional area
of the chip, and multiplies many fold the number of transistors that can be
implemented on a chip. To overcome this problem we propose an artificial
intelligence (AI) based approach that makes use of Rough Set Theory for its
implementation. Rough set theory, proposed by Z. Pawlak in 1982, is a
mathematical tool that deals with uncertainty and vagueness; decisions can be
generated with it by reducing unwanted and superfluous data. We condense the
number of gates without affecting the functionality of the given circuit. This
paper proposes a rough-set-theory-based approach that reduces the number of
gates in a circuit, based on decision rules.
| [
"['Sarvesh SS Rawat' 'Dheeraj Dilip Mor' 'Anugrah Kumar'\n 'Sanjiban Shekar Roy' 'Rohit kumar']",
"Sarvesh SS Rawat, Dheeraj Dilip Mor, Anugrah Kumar, Sanjiban Shekar\n Roy, Rohit kumar"
] |
cs.LG | 10.5121/ijcsity.2013.1408 | 1312.2789 | null | null | http://arxiv.org/abs/1312.2789v1 | 2013-12-10T13:16:02Z | 2013-12-10T13:16:02Z | Performance Analysis Of Regularized Linear Regression Models For
Oxazolines And Oxazoles Derivitive Descriptor Dataset | Regularized regression techniques for linear regression have been developed
over the last few decades to address the shortcomings of ordinary least squares
regression with regard to prediction accuracy. In this paper, new methods for
using regularized regression in model choice are introduced, and we identify
the conditions in which regularized regression improves our ability to
discriminate between models. We applied all five methods that use penalty-based
(regularization) shrinkage to an Oxazolines and Oxazoles derivatives descriptor
dataset with far more predictors than observations. The lasso, ridge, elastic
net, LARS and relaxed lasso further possess the desirable property that they
simultaneously select relevant predictive descriptors and optimally estimate
their effects. Here, we comparatively evaluate the performance of these five
regularized linear regression methods. Assessing the performance of each model
by means of benchmark experiments is an established exercise; cross-validation
and resampling methods are generally used to arrive at point estimates of the
efficiencies, which are then compared to identify methods with acceptable
properties. Predictive accuracy was evaluated using the root mean squared error
(RMSE) and the square of the usual correlation between predicted and observed
mean inhibitory concentration of antitubercular activity (R square). We found
that all five regularized regression methods were able to produce feasible
models that efficiently capture the linearity in the data. The elastic net and
LARS had similar accuracies, as did the lasso and relaxed lasso, and all
outperformed ridge regression in terms of the RMSE and R square metrics.
| [
"Doreswamy and Chanabasayya .M. Vastrad",
"['Doreswamy' 'Chanabasayya . M. Vastrad']"
] |
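A minimal sketch of the kind of comparison the abstract above describes, assuming a generic descriptor matrix `X` and activity vector `y` (a synthetic stand-in here, not the Oxazolines/Oxazoles data): it cross-validates ridge, lasso, elastic net and LARS from scikit-learn and reports RMSE and the squared correlation.

```python
# Hedged sketch: cross-validated comparison of regularized linear models.
# X and y are synthetic placeholders, not the paper's descriptor dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet, Lars
from sklearn.model_selection import KFold, cross_val_predict

X, y = make_regression(n_samples=60, n_features=200, noise=5.0, random_state=0)

models = {
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5),
    "lars": Lars(n_nonzero_coefs=10),
}

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=cv)   # resampled point estimates
    rmse = np.sqrt(np.mean((pred - y) ** 2))       # root mean squared error
    r2 = np.corrcoef(pred, y)[0, 1] ** 2           # squared correlation (R square)
    print(f"{name:12s}  RMSE={rmse:8.2f}  Rsq={r2:.3f}")
```

The relaxed lasso is omitted because it is not part of scikit-learn; the hyperparameters above are illustrative defaults rather than tuned values.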
cs.LG | null | 1312.2936 | null | null | http://arxiv.org/pdf/1312.2936v1 | 2013-12-10T20:36:04Z | 2013-12-10T20:36:04Z | Active Player Modelling | We argue for the use of active learning methods for player modelling. In
active learning, the learning algorithm chooses where to sample the search
space so as to optimise learning progress. We hypothesise that player modelling
based on active learning could result in vastly more efficient learning, but
will require big changes in how data is collected. Some example active player
modelling scenarios are described. A particular form of active learning is also
equivalent to an influential formalisation of (human and machine) curiosity,
and games with active learning could therefore be seen as being curious about
the player. We further hypothesise that this form of curiosity is symmetric,
and therefore that games that explore their players based on the principles of
active learning will turn out to select game configurations that are
interesting to the player that is being explored.
| [
"Julian Togelius, Noor Shaker, Georgios N. Yannakakis",
"['Julian Togelius' 'Noor Shaker' 'Georgios N. Yannakakis']"
] |
q-bio.QM cs.LG math.OC q-bio.BM stat.ML | null | 1312.2988 | null | null | http://arxiv.org/pdf/1312.2988v5 | 2015-04-08T14:21:09Z | 2013-12-10T22:45:06Z | Protein Contact Prediction by Integrating Joint Evolutionary Coupling
Analysis and Supervised Learning | Protein contacts contain important information for protein structure and
functional study, but contact prediction from sequence remains very
challenging. Both evolutionary coupling (EC) analysis and supervised machine
learning methods are developed to predict contacts, making use of different
types of information, respectively. This paper presents a group graphical lasso
(GGL) method for contact prediction that integrates joint multi-family EC
analysis and supervised learning. Different from existing single-family EC
analysis that uses residue co-evolution information in only the target protein
family, our joint EC analysis uses residue co-evolution in both the target
family and its related families, which may have divergent sequences but similar
folds. To implement joint EC analysis, we model a set of related protein
families using Gaussian graphical models (GGM) and then co-estimate their
precision matrices by maximum-likelihood, subject to the constraint that the
precision matrices shall share similar residue co-evolution patterns. To
further improve the accuracy of the estimated precision matrices, we employ a
supervised learning method to predict contact probability from a variety of
evolutionary and non-evolutionary information and then incorporate the
predicted probability as prior into our GGL framework. Experiments show that
our method can predict contacts much more accurately than existing methods, and
that our method performs better on both conserved and family-specific contacts.
| [
"['Jianzhu Ma' 'Sheng Wang' 'Zhiyong Wang' 'Jinbo Xu']",
"Jianzhu Ma, Sheng Wang, Zhiyong Wang and Jinbo Xu"
] |
stat.ML cs.LG | null | 1312.3386 | null | null | http://arxiv.org/pdf/1312.3386v2 | 2013-12-25T13:10:44Z | 2013-12-12T02:37:36Z | Clustering for high-dimension, low-sample size data using distance
vectors | In high-dimension, low-sample size (HDLSS) data, it is not always true that
closeness of two objects reflects a hidden cluster structure. We point out the
important fact that it is not the closeness, but the "values" of distance that
contain information of the cluster structure in high-dimensional space. Based
on this fact, we propose an efficient and simple clustering approach, called
distance vector clustering, for HDLSS data. Under the assumptions given in the
work of Hall et al. (2005), we show the proposed approach provides a true
cluster label under milder conditions when the dimension tends to infinity with
the sample size fixed. The effectiveness of the distance vector clustering
approach is illustrated through a numerical experiment and real data analysis.
| [
"['Yoshikazu Terada']",
"Yoshikazu Terada"
] |
cs.LG | null | 1312.3388 | null | null | http://arxiv.org/pdf/1312.3388v1 | 2013-12-12T02:46:07Z | 2013-12-12T02:46:07Z | Online Bayesian Passive-Aggressive Learning | Online Passive-Aggressive (PA) learning is an effective framework for
performing max-margin online learning. But the deterministic formulation and
estimated single large-margin model could limit its capability in discovering
descriptive structures underlying complex data. This paper presents online
Bayesian Passive-Aggressive (BayesPA) learning, which subsumes the online PA
and extends naturally to incorporate latent variables and perform nonparametric
Bayesian inference, thus providing great flexibility for explorative analysis.
We apply BayesPA to topic modeling and derive efficient online learning
algorithms for max-margin topic models. We further develop nonparametric
methods to resolve the number of topics. Experimental results on real datasets
show that our approaches significantly improve time efficiency while
maintaining comparable results with the batch counterparts.
| [
"Tianlin Shi and Jun Zhu",
"['Tianlin Shi' 'Jun Zhu']"
] |
cs.LG | null | 1312.3393 | null | null | http://arxiv.org/pdf/1312.3393v2 | 2013-12-17T10:30:42Z | 2013-12-12T03:08:46Z | Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem | This paper proposes a new method for the K-armed dueling bandit problem, a
variation on the regular K-armed bandit problem that offers only relative
feedback about pairs of arms. Our approach extends the Upper Confidence Bound
algorithm to the relative setting by using estimates of the pairwise
probabilities to select a promising arm and applying Upper Confidence Bound
with the winner as a benchmark. We prove a finite-time regret bound of order
O(log t). In addition, our empirical results using real data from an
information retrieval application show that it greatly outperforms the state of
the art.
| [
"['Masrour Zoghi' 'Shimon Whiteson' 'Remi Munos' 'Maarten de Rijke']",
"Masrour Zoghi, Shimon Whiteson, Remi Munos, Maarten de Rijke"
] |
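A rough numpy sketch of the relative upper-confidence-bound idea described above: optimistic estimates of the pairwise win probabilities pick a candidate arm, and the strongest optimistic challenger is dueled against it. The preference matrix, the confidence scaling `alpha`, and the `duel` simulator are illustrative assumptions; the published RUCB algorithm has additional details.

```python
# Hedged sketch: a relative-UCB selection rule for the K-armed dueling bandit.
# wins[i, j] counts how often arm i has beaten arm j so far.
import numpy as np

rng = np.random.default_rng(0)
K, alpha, T = 5, 0.51, 2000

# Hypothetical pairwise preference matrix with p[i, j] + p[j, i] = 1.
upper = np.triu(rng.uniform(0.2, 0.8, size=(K, K)), 1)
true_p = upper + (1.0 - upper.T) * (upper.T > 0)
np.fill_diagonal(true_p, 0.5)

def duel(i, j):
    """Simulated relative feedback: True if arm i beats arm j."""
    return rng.random() < true_p[i, j]

wins = np.zeros((K, K))
for t in range(1, T + 1):
    n = wins + wins.T
    with np.errstate(divide="ignore", invalid="ignore"):
        ucb = wins / n + np.sqrt(alpha * np.log(t) / n)
    ucb[n == 0] = 1.0                      # optimism for untried pairs
    np.fill_diagonal(ucb, 0.5)
    # Candidate: an arm that is optimistically unbeaten by every other arm.
    cands = np.where((ucb >= 0.5).all(axis=1))[0]
    c = int(cands[0]) if len(cands) else int(rng.integers(K))
    d = int(np.argmax(ucb[:, c]))          # strongest optimistic challenger to c
    if duel(c, d):
        wins[c, d] += 1
    else:
        wins[d, c] += 1

win_rates = wins / np.maximum(wins + wins.T, 1)
print("estimated best arm:", int(np.argmax(win_rates.mean(axis=1))))
```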
cs.CV cs.LG stat.ML | null | 1312.3429 | null | null | http://arxiv.org/pdf/1312.3429v2 | 2013-12-16T16:11:52Z | 2013-12-12T10:03:47Z | Unsupervised learning of depth and motion | We present a model for the joint estimation of disparity and motion. The
model is based on learning about the interrelations between images from
multiple cameras, multiple frames in a video, or the combination of both. We
show that learning depth and motion cues, as well as their combinations, from
data is possible within a single type of architecture and a single type of
learning algorithm, by using biologically inspired "complex cell" like units,
which encode correlations between the pixels across image pairs. Our
experimental results show that the learning of depth and motion makes it
possible to achieve state-of-the-art performance in 3-D activity analysis, and
to outperform existing hand-engineered 3-D motion features by a very large
margin.
| [
"Kishore Konda, Roland Memisevic",
"['Kishore Konda' 'Roland Memisevic']"
] |
cs.LG cs.CV stat.ML | null | 1312.3522 | null | null | http://arxiv.org/pdf/1312.3522v3 | 2014-10-12T22:10:13Z | 2013-12-12T15:26:57Z | Sparse Matrix-based Random Projection for Classification | As a typical dimensionality reduction technique, random projection can be
simply implemented with linear projection, while maintaining the pairwise
distances of high-dimensional data with high probability. Considering this
technique is mainly exploited for the task of classification, this paper
studies the construction of the random matrix from the viewpoint of
feature selection, rather than of traditional distance preservation. This
yields a somewhat surprising theoretical result, that is, the sparse random
matrix with exactly one nonzero element per column, can present better feature
selection performance than other more dense matrices, if the projection
dimension is sufficiently large (namely, not much smaller than the number of
feature elements); otherwise, it will perform comparably to others. For random
projection, this theoretical result implies considerable improvement on both
complexity and performance, which is widely confirmed with the classification
experiments on both synthetic data and real data.
| [
"['Weizhi Lu' 'Weiyu Li' 'Kidiyo Kpalma' 'Joseph Ronsin']",
"Weizhi Lu and Weiyu Li and Kidiyo Kpalma and Joseph Ronsin"
] |
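A small numpy sketch of the sparse construction highlighted in the abstract above: a random matrix with exactly one nonzero (±1) entry per column, used to project high-dimensional feature vectors before classification. The dimensions and data are illustrative placeholders.

```python
# Hedged sketch: random projection with exactly one nonzero element per column.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 10000, 1024, 50            # original dim, projection dim, sample count

# R has shape (k, d): each column receives a single +/-1 in a random row.
rows = rng.integers(0, k, size=d)
signs = rng.choice([-1.0, 1.0], size=d)
R = np.zeros((k, d))
R[rows, np.arange(d)] = signs

X = rng.standard_normal((n, d))      # placeholder high-dimensional data
X_proj = X @ R.T                     # projected features, shape (n, k)
print(X_proj.shape)
```

In practice the projection can also be applied with a sparse-matrix type so the cost scales with the number of nonzeros rather than with k times d.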
cs.LG | 10.1007/978-3-642-40728-4_17 | 1312.3811 | null | null | http://arxiv.org/abs/1312.3811v1 | 2013-12-13T14:10:30Z | 2013-12-13T14:10:30Z | Efficient Baseline-free Sampling in Parameter Exploring Policy
Gradients: Super Symmetric PGPE | Policy Gradient methods that explore directly in parameter space are among
the most effective and robust direct policy search methods and have drawn a lot
of attention lately. The basic method from this field, Policy Gradients with
Parameter-based Exploration, uses two samples that are symmetric around the
current hypothesis to circumvent misleading reward in \emph{asymmetrical}
reward distributed problems gathered with the usual baseline approach. The
exploration parameters are still updated by a baseline approach - leaving the
exploration prone to asymmetric reward distributions. In this paper we will
show how the exploration parameters can be sampled quasi symmetric despite
having limited instead of free parameters for exploration. We give a
transformation approximation to get quasi symmetric samples with respect to the
exploration without changing the overall sampling distribution. Finally we will
demonstrate that sampling symmetrically also for the exploration parameters
requires fewer samples and is more robust than the original sampling
approach.
| [
"['Frank Sehnke']",
"Frank Sehnke"
] |
cs.AI cs.LG | null | 1312.3903 | null | null | http://arxiv.org/pdf/1312.3903v1 | 2013-12-13T18:32:51Z | 2013-12-13T18:32:51Z | A Methodology for Player Modeling based on Machine Learning | AI is gradually receiving more attention as a fundamental feature to increase
the immersion in digital games. Among the several AI approaches, player
modeling is becoming an important one. The main idea is to understand and model
the player characteristics and behaviors in order to develop a better AI. In
this work, we discuss several aspects of this new field. We proposed a taxonomy
to organize the area, discussing several facets of this topic, ranging from
implementation decisions up to what a model attempts to describe. We then
classify, in our taxonomy, some of the most important works in this field. We
also presented a generic approach to deal with player modeling using ML, and we
instantiated this approach to model players' preferences in the game
Civilization IV. The instantiation of this approach has several steps. We first
discuss a generic representation, regardless of what is being modeled, and
evaluate it performing experiments with the strategy game Civilization IV.
Continuing the instantiation of the proposed approach we evaluated the
applicability of using game score information to distinguish different
preferences. We presented a characterization of virtual agents in the game,
comparing their behavior with their stated preferences. Once we have
characterized these agents, we were able to observe that different preferences
generate different behaviors, measured by several game indicators. We then
tackled the preference modeling problem as a binary classification task, with a
supervised learning approach. We compared four different methods, based on
different paradigms (SVM, AdaBoost, NaiveBayes and JRip), evaluating them on a
set of matches played by different virtual agents. We conclude our work using
the learned models to infer human players' preferences. Using some of the
evaluated classifiers we obtained accuracies over 60% for most of the inferred
preferences.
| [
"Marlos C. Machado",
"['Marlos C. Machado']"
] |
cs.LG stat.ML | null | 1312.3970 | null | null | http://arxiv.org/pdf/1312.3970v1 | 2013-12-13T21:59:00Z | 2013-12-13T21:59:00Z | An Extensive Evaluation of Filtering Misclassified Instances in
Supervised Classification Tasks | Removing or filtering outliers and mislabeled instances prior to training a
learning algorithm has been shown to increase classification accuracy. A
popular approach for handling outliers and mislabeled instances is to remove
any instance that is misclassified by a learning algorithm. However, an
examination of which learning algorithms to use for filtering as well as their
effects on multiple learning algorithms over a large set of data sets has not
been done. Previous work has generally been limited due to the large
computational requirements to run such an experiment, and, thus, the
examination has generally been limited to learning algorithms that are
computationally inexpensive and using a small number of data sets. In this
paper, we examine 9 learning algorithms as filtering algorithms as well as
examining the effects of filtering in the 9 chosen learning algorithms on a set
of 54 data sets. In addition to using each learning algorithm individually as a
filter, we also use the set of learning algorithms as an ensemble filter and
use an adaptive algorithm that selects a subset of the learning algorithms for
filtering for a specific task and learning algorithm. We find that for most
cases, using an ensemble of learning algorithms for filtering produces the
greatest increase in classification accuracy. We also compare filtering with a
majority voting ensemble. The voting ensemble significantly outperforms
filtering unless there are high amounts of noise present in the data set.
Additionally, we find that a majority voting ensemble is robust to noise as
filtering with a voting ensemble does not increase the classification accuracy
of the voting ensemble.
| [
"['Michael R. Smith' 'Tony Martinez']",
"Michael R. Smith and Tony Martinez"
] |
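A compact scikit-learn sketch of the ensemble-filtering idea described above: instances misclassified under cross-validation by a majority of several learning algorithms are removed before training the final model. The three filter algorithms, the noise level, and the final SVM are illustrative choices, not the paper's nine-algorithm, 54-dataset setup.

```python
# Hedged sketch: ensemble filter that removes instances misclassified by a
# majority of base learners under cross-validation, then trains a final model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, flip_y=0.15, random_state=0)

filters = [DecisionTreeClassifier(random_state=0), GaussianNB(),
           KNeighborsClassifier(n_neighbors=5)]
miscls = np.zeros(len(y))
for clf in filters:
    pred = cross_val_predict(clf, X, y, cv=5)   # out-of-fold predictions
    miscls += (pred != y)

keep = miscls < len(filters) / 2.0              # keep instances most filters got right
final = SVC().fit(X[keep], y[keep])
print(f"kept {int(keep.sum())} of {len(y)} instances")
```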
cs.CV cs.LG | null | 1312.3989 | null | null | http://arxiv.org/pdf/1312.3989v1 | 2013-12-14T00:28:32Z | 2013-12-14T00:28:32Z | Classifiers With a Reject Option for Early Time-Series Classification | Early classification of time-series data in a dynamic environment is a
challenging problem of great importance in signal processing. This paper
proposes a classifier architecture with a reject option capable of online
decision making without the need to wait for the entire time series signal to
be present. The main idea is to classify an odor/gas signal with an acceptable
accuracy as early as possible. Instead of using posterior probability of a
classifier, the proposed method uses the "agreement" of an ensemble to decide
whether to accept or reject the candidate label. The introduced algorithm is
applied to the bio-chemistry problem of odor classification to build a novel
Electronic-Nose called Forefront-Nose. Experimental results on a wind tunnel
test-bed facility confirm the robustness of the Forefront-Nose compared to the
standard classifiers from both earliness and recognition perspectives.
| [
"Nima Hatami and Camelia Chira",
"['Nima Hatami' 'Camelia Chira']"
] |
cs.CV cs.LG | 10.1109/ICCIS.2008.4670763 | 1312.3990 | null | null | http://arxiv.org/abs/1312.3990v1 | 2013-12-14T00:29:36Z | 2013-12-14T00:29:36Z | ECOC-Based Training of Neural Networks for Face Recognition | Error Correcting Output Codes, ECOC, is an output representation method
capable of discovering some of the errors produced in classification tasks.
This paper describes the application of ECOC to the training of feed forward
neural networks, FFNN, for improving the overall accuracy of classification
systems. Indeed, to improve the generalization of FFNN classifiers, this paper
proposes an ECOC-Based training method for Neural Networks that use ECOC as the
output representation, and adopts the traditional Back-Propagation algorithm,
BP, to adjust weights of the network. Experimental results for face recognition
problem on Yale database demonstrate the effectiveness of our method. With a
rejection scheme defined by a simple robustness rate, high reliability is
achieved in this application.
| [
"Nima Hatami, Reza Ebrahimpour, Reza Ghaderi",
"['Nima Hatami' 'Reza Ebrahimpour' 'Reza Ghaderi']"
] |
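A brief sketch of ECOC-style output coding as described above: each class is assigned a binary codeword, one back-propagation-trained network is fit per code bit, and prediction decodes by Hamming distance. A small scikit-learn MLP and the digits dataset stand in for the paper's network and the Yale face data; the code length is an illustrative assumption.

```python
# Hedged sketch: ECOC training with one back-propagation network per code bit
# and Hamming-distance decoding (digits data stands in for face images).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
n_classes, n_bits = 10, 20
rng = np.random.default_rng(0)

codebook = rng.integers(0, 2, size=(n_classes, n_bits))  # random code matrix
codebook[0, :] = 0
codebook[1, :] = 1            # guarantees every bit column splits the classes

bit_models = []
for b in range(n_bits):       # one small feed-forward net per output bit
    target = codebook[y, b]
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    bit_models.append(net.fit(X, target))

bits = np.column_stack([m.predict(X) for m in bit_models])
hamming = np.abs(bits[:, None, :] - codebook[None, :, :]).sum(axis=2)
pred = hamming.argmin(axis=1)          # decode to the nearest codeword
print("training accuracy:", round(float((pred == y).mean()), 3))
```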
cs.CL cs.LG | null | 1312.4092 | null | null | http://arxiv.org/pdf/1312.4092v1 | 2013-12-14T21:48:49Z | 2013-12-14T21:48:49Z | Domain adaptation for sequence labeling using hidden Markov models | Most natural language processing systems based on machine learning are not
robust to domain shift. For example, a state-of-the-art syntactic dependency
parser trained on Wall Street Journal sentences has an absolute drop in
performance of more than ten points when tested on textual data from the Web.
An efficient solution to make these methods more robust to domain shift is to
first learn a word representation using large amounts of unlabeled data from
both domains, and then use this representation as features in a supervised
learning algorithm. In this paper, we propose to use hidden Markov models to
learn word representations for part-of-speech tagging. In particular, we study
the influence of using data from the source, the target or both domains to
learn the representation and the different ways to represent words using an
HMM.
| [
"Edouard Grave (LIENS, INRIA Paris - Rocquencourt), Guillaume Obozinski\n (LIGM), Francis Bach (LIENS, INRIA Paris - Rocquencourt)",
"['Edouard Grave' 'Guillaume Obozinski' 'Francis Bach']"
] |
cs.LG cs.DC | null | 1312.4108 | null | null | http://arxiv.org/pdf/1312.4108v1 | 2013-12-15T05:42:51Z | 2013-12-15T05:42:51Z | A MapReduce based distributed SVM algorithm for binary classification | Although the Support Vector Machine (SVM) algorithm has a high generalization
ability to classify unseen examples after the training phase and a small
loss value, the algorithm is not suitable for large real-life classification and
regression problems: SVMs cannot handle hundreds of thousands of examples in a
training dataset. In previous studies on distributed machine learning
algorithms, SVM is trained over a costly and preconfigured computer
environment. In this research, we present a MapReduce based distributed
parallel SVM training algorithm for binary classification problems. This work
shows how to distribute optimization problem over cloud computing systems with
MapReduce technique. In the second step of this work, we used statistical
learning theory to find the predictive hypothesis that minimize our empirical
risks from hypothesis spaces that created with reduce function of MapReduce.
The results of this research are important for training of big datasets for SVM
algorithm based classification problems. We show that with iterative training of
the split dataset using the MapReduce technique, the accuracy of the classifier
function converges to the accuracy of the globally optimal classifier in a finite
number of iterations. The algorithm performance was measured on samples from letter
recognition and pen-based recognition of handwritten digits dataset.
| [
"Ferhat \\\"Ozg\\\"ur \\c{C}atak, Mehmet Erdal Balaban",
"['Ferhat Özgür Çatak' 'Mehmet Erdal Balaban']"
] |
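A toy single-process simulation of the map/reduce training loop sketched in the abstract above: the data are split into chunks, each "mapper" trains a linear SVM on its chunk plus the current global support vectors, and the "reducer" unions the resulting support vectors until the set stabilizes. The chunk count, dataset, and stopping rule are illustrative assumptions, not the paper's cloud implementation.

```python
# Hedged sketch: iterative MapReduce-style SVM training simulated in one process.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
chunks = np.array_split(np.arange(len(y)), 4)         # "map" partitions

sv_idx = np.array([], dtype=int)                      # global support-vector indices
for it in range(5):
    new_sv = []
    for chunk in chunks:                              # map step
        idx = np.union1d(chunk, sv_idx)
        clf = SVC(kernel="linear").fit(X[idx], y[idx])
        new_sv.append(idx[clf.support_])              # map local SV indices to global
    merged = np.unique(np.concatenate(new_sv))        # reduce step
    if np.array_equal(merged, sv_idx):                # support-vector set stabilized
        break
    sv_idx = merged

final = SVC(kernel="linear").fit(X[sv_idx], y[sv_idx])
print("support vectors kept:", len(sv_idx))
```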
cs.LG cs.DC | null | 1312.4176 | null | null | http://arxiv.org/pdf/1312.4176v3 | 2014-11-10T13:36:34Z | 2013-12-15T18:08:27Z | Distributed k-means algorithm | In this paper we provide a fully distributed implementation of the k-means
clustering algorithm, intended for wireless sensor networks where each agent is
endowed with a possibly high-dimensional observation (e.g., position, humidity,
temperature, etc.) The proposed algorithm, by means of one-hop communication,
partitions the agents into measure-dependent groups that have small in-group
and large out-group "distances". Since the partitions may not have a relation
with the topology of the network--members of the same clusters may not be
spatially close--the algorithm is provided with a mechanism to compute the
clusters' centroids even when the clusters are disconnected in several
sub-clusters. The results of the proposed distributed algorithm coincide, in
terms of minimization of the objective function, with the centralized k-means
algorithm. Some numerical examples illustrate the capabilities of the proposed
solution.
| [
"Gabriele Oliva, Roberto Setola, and Christoforos N. Hadjicostis",
"['Gabriele Oliva' 'Roberto Setola' 'Christoforos N. Hadjicostis']"
] |
cs.LG | null | 1312.4209 | null | null | http://arxiv.org/pdf/1312.4209v1 | 2013-12-15T23:40:49Z | 2013-12-15T23:40:49Z | Feature Graph Architectures | In this article we propose feature graph architectures (FGA), which are deep
learning systems employing a structured initialisation and training method
based on a feature graph which facilitates improved generalisation performance
compared with a standard shallow architecture. The goal is to explore
alternative perspectives on the problem of deep network training. We evaluate
FGA performance for deep SVMs on some experimental datasets, and show how
generalisation and stability results may be derived for these models. We
describe the effect of permutations on the model accuracy, and give a criterion
for the optimal permutation in terms of feature correlations. The experimental
results show that the algorithm produces robust and significant test set
improvements over a standard shallow SVM training method for a range of
datasets. These gains are achieved with a moderate increase in time complexity.
| [
"['Richard Davis' 'Sanjay Chawla' 'Philip Leong']",
"Richard Davis, Sanjay Chawla, Philip Leong"
] |
cs.LG | null | 1312.4314 | null | null | http://arxiv.org/pdf/1312.4314v3 | 2014-03-09T20:15:03Z | 2013-12-16T11:15:10Z | Learning Factored Representations in a Deep Mixture of Experts | Mixtures of Experts combine the outputs of several "expert" networks, each of
which specializes in a different part of the input space. This is achieved by
training a "gating" network that maps each input to a distribution over the
experts. Such models show promise for building larger networks that are still
cheap to compute at test time, and more parallelizable at training time. In
this work, we extend the Mixture of Experts to a stacked model, the Deep
Mixture of Experts, with multiple sets of gating and experts. This
exponentially increases the number of effective experts by associating each
input with a combination of experts at each layer, yet maintains a modest model
size. On a randomly translated version of the MNIST dataset, we find that the
Deep Mixture of Experts automatically learns to develop location-dependent
("where") experts at the first layer, and class-specific ("what") experts at
the second layer. In addition, we see that the different combinations are in
use when the model is applied to a dataset of speech monophones. These
demonstrate effective use of all expert combinations.
| [
"['David Eigen' \"Marc'Aurelio Ranzato\" 'Ilya Sutskever']",
"David Eigen, Marc'Aurelio Ranzato, Ilya Sutskever"
] |
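A minimal PyTorch sketch of one mixture-of-experts layer with a learned gating network; stacking two such layers gives the "deep" variant described in the abstract above. The layer sizes and number of experts are illustrative assumptions.

```python
# Hedged sketch: a single Mixture-of-Experts layer (stack two for a deep MoE).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_in, d_out, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(n_experts))
        self.gate = nn.Linear(d_in, n_experts)        # gating network

    def forward(self, x):
        g = F.softmax(self.gate(x), dim=-1)           # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, d_out)
        return (g.unsqueeze(-1) * outs).sum(dim=1)    # gate-weighted combination

model = nn.Sequential(MoELayer(784, 128), nn.ReLU(), MoELayer(128, 10))
x = torch.randn(32, 784)
print(model(x).shape)                                 # torch.Size([32, 10])
```

The gate here mixes all experts densely; conditional (sparse) routing would pick only the top-scoring experts per input, which is what makes such models cheap at test time.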
cs.CV cs.LG cs.NE | null | 1312.4384 | null | null | http://arxiv.org/pdf/1312.4384v1 | 2013-12-16T14:51:00Z | 2013-12-16T14:51:00Z | Rectifying Self Organizing Maps for Automatic Concept Learning from Web
Images | We attack the problem of learning concepts automatically from noisy web image
search results. Going beyond low level attributes, such as colour and texture,
we explore weakly-labelled datasets for the learning of higher level concepts,
such as scene categories. The idea is based on discovering common
characteristics shared among subsets of images by posing a method that is able
to organise the data while eliminating irrelevant instances. We propose a novel
clustering and outlier detection method, namely Rectifying Self Organizing Maps
(RSOM). Given an image collection returned for a concept query, RSOM provides
clusters pruned from outliers. Each cluster is used to train a model
representing a different characteristics of the concept. The proposed method
outperforms the state-of-the-art studies on the task of learning low-level
concepts, and it is competitive in learning higher level concepts as well. It
is capable to work at large scale with no supervision through exploiting the
available sources.
| [
"['Eren Golge' 'Pinar Duygulu']",
"Eren Golge and Pinar Duygulu"
] |
cs.NE cs.CV cs.LG | null | 1312.4400 | null | null | http://arxiv.org/pdf/1312.4400v3 | 2014-03-04T05:15:42Z | 2013-12-16T15:34:13Z | Network In Network | We propose a novel deep network structure called "Network In Network" (NIN)
to enhance model discriminability for local patches within the receptive field.
The conventional convolutional layer uses linear filters followed by a
nonlinear activation function to scan the input. Instead, we build micro neural
networks with more complex structures to abstract the data within the receptive
field. We instantiate the micro neural network with a multilayer perceptron,
which is a potent function approximator. The feature maps are obtained by
sliding the micro networks over the input in a similar manner as CNN; they are
then fed into the next layer. Deep NIN can be implemented by stacking multiple
of the above described structures. With enhanced local modeling via the micro
network, we are able to utilize global average pooling over feature maps in the
classification layer, which is easier to interpret and less prone to
overfitting than traditional fully connected layers. We demonstrated
state-of-the-art classification performance with NIN on CIFAR-10 and
CIFAR-100, and reasonable performance on the SVHN and MNIST datasets.
| [
"['Min Lin' 'Qiang Chen' 'Shuicheng Yan']",
"Min Lin, Qiang Chen, Shuicheng Yan"
] |
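A small PyTorch sketch of an NIN-style "mlpconv" block (a convolution followed by 1x1 convolutions acting as a per-patch multilayer perceptron) and the global-average-pooling classification head described above. Channel counts and depth are illustrative, not the paper's exact CIFAR architecture.

```python
# Hedged sketch: one mlpconv block plus global average pooling, NIN-style.
import torch
import torch.nn as nn

def mlpconv(in_ch, mid_ch, out_ch, k=3):
    # 1x1 convolutions realize a small MLP applied at every spatial location.
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=k, padding=k // 2), nn.ReLU(),
        nn.Conv2d(mid_ch, mid_ch, kernel_size=1), nn.ReLU(),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1), nn.ReLU(),
    )

net = nn.Sequential(
    mlpconv(3, 96, 96), nn.MaxPool2d(2),
    mlpconv(96, 96, 10),                   # last block outputs one map per class
    nn.AdaptiveAvgPool2d(1), nn.Flatten()  # global average pooling -> class scores
)

x = torch.randn(8, 3, 32, 32)
print(net(x).shape)                        # torch.Size([8, 10])
```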
cs.LG | null | 1312.4405 | null | null | http://arxiv.org/pdf/1312.4405v1 | 2013-12-16T15:40:05Z | 2013-12-16T15:40:05Z | Learning Deep Representations By Distributed Random Samplings | In this paper, we propose an extremely simple deep model for the unsupervised
nonlinear dimensionality reduction -- deep distributed random samplings, which
performs like a stack of unsupervised bootstrap aggregating. First, its network
structure is novel: each layer of the network is a group of mutually
independent $k$-centers clusterings. Second, its learning method is extremely
simple: the $k$ centers of each clustering are only $k$ randomly selected
examples from the training data; for small-scale data sets, the $k$ centers are
further randomly reconstructed by a simple cyclic-shift operation. Experimental
results on nonlinear dimensionality reduction show that the proposed method can
learn abstract representations on both large-scale and small-scale problems,
and meanwhile is much faster than deep neural networks on large-scale problems.
| [
"Xiao-Lei Zhang",
"['Xiao-Lei Zhang']"
] |
stat.ML cs.LG | null | 1312.4426 | null | null | http://arxiv.org/pdf/1312.4426v1 | 2013-12-16T16:51:51Z | 2013-12-16T16:51:51Z | Optimization for Compressed Sensing: the Simplex Method and Kronecker
Sparsification | In this paper we present two new approaches to efficiently solve large-scale
compressed sensing problems. These two ideas are independent of each other and
can therefore be used either separately or together. We consider all
possibilities.
For the first approach, we note that the zero vector can be taken as the
initial basic (infeasible) solution for the linear programming problem and
therefore, if the true signal is very sparse, some variants of the simplex
method can be expected to take only a small number of pivots to arrive at a
solution. We implemented one such variant and demonstrate a dramatic
improvement in computation time on very sparse signals.
The second approach requires a redesigned sensing mechanism in which the
vector signal is stacked into a matrix. This allows us to exploit the Kronecker
compressed sensing (KCS) mechanism. We show that the Kronecker sensing requires
stronger conditions for perfect recovery compared to the original vector
problem. However, the Kronecker sensing, modeled correctly, is a much sparser
linear optimization problem. Hence, algorithms that benefit from sparse problem
representation, such as interior-point methods, can solve the Kronecker sensing
problems much faster than the corresponding vector problem. In our numerical
studies, we demonstrate a ten-fold improvement in the computation time.
| [
"Robert Vanderbei and Han Liu and Lie Wang and Kevin Lin",
"['Robert Vanderbei' 'Han Liu' 'Lie Wang' 'Kevin Lin']"
] |
cs.LG | null | 1312.4461 | null | null | http://arxiv.org/pdf/1312.4461v4 | 2014-01-28T22:29:55Z | 2013-12-16T18:58:34Z | Low-Rank Approximations for Conditional Feedforward Computation in Deep
Neural Networks | Scalability properties of deep neural networks raise key research questions,
particularly as the problems considered become larger and more challenging.
This paper expands on the idea of conditional computation introduced by Bengio,
et. al., where the nodes of a deep network are augmented by a set of gating
units that determine when a node should be calculated. By factorizing the
weight matrix into a low-rank approximation, an estimation of the sign of the
pre-nonlinearity activation can be efficiently obtained. For networks using
rectified-linear hidden units, this implies that the computation of a hidden
unit with an estimated negative pre-nonlinearity can be omitted altogether, as
its value will become zero when nonlinearity is applied. For sparse neural
networks, this can result in considerable speed gains. Experimental results
using the MNIST and SVHN data sets with a fully-connected deep neural network
demonstrate the performance robustness of the proposed scheme with respect to
the error introduced by the conditional computation process.
| [
"['Andrew Davis' 'Itamar Arel']",
"Andrew Davis, Itamar Arel"
] |
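A numpy sketch of the estimation trick described in the abstract above: the weight matrix is approximated by a low-rank factorization, the cheap factorized product predicts the sign of each pre-activation, and rectified-linear units predicted to be negative are skipped. The layer sizes and rank are illustrative assumptions.

```python
# Hedged sketch: skipping ReLU units whose pre-activation sign is predicted
# negative by a low-rank approximation of the weight matrix.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 512, 1024, 16
W = rng.standard_normal((d_out, d_in)) * 0.05
b = np.zeros(d_out)

# Low-rank factors from a truncated SVD: W ~ U @ V.
U_, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_[:, :rank] * s[:rank]            # (d_out, rank)
V = Vt[:rank]                          # (rank, d_in)

x = rng.standard_normal(d_in)
approx_pre = U @ (V @ x) + b           # cheap: O(rank * (d_in + d_out))
active = approx_pre > 0                # units estimated to survive the ReLU

h = np.zeros(d_out)
h[active] = np.maximum(W[active] @ x + b[active], 0)   # exact compute only where needed
print("computed", int(active.sum()), "of", d_out, "hidden units")
```

For a dense random matrix the estimate will sometimes disagree with the true sign; the savings come when the hidden layer is sparse enough that most skipped units really would have been zeroed by the ReLU.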
stat.ML cs.LG stat.ME | null | 1312.4479 | null | null | http://arxiv.org/pdf/1312.4479v1 | 2013-12-16T19:38:35Z | 2013-12-16T19:38:35Z | Parametric Modelling of Multivariate Count Data Using Probabilistic
Graphical Models | Multivariate count data are defined as the number of items of different
categories issued from sampling within a population whose individuals are
grouped into categories. The analysis of multivariate count data is a recurrent
and crucial issue in numerous modelling problems, particularly in the fields of
biology and ecology (where the data can represent, for example, children counts
associated with multitype branching processes), sociology and econometrics. We
focus on I) Identifying categories that appear simultaneously, or on the
contrary that are mutually exclusive. This is achieved by identifying
conditional independence relationships between the variables; II)Building
parsimonious parametric models consistent with these relationships; III)
Characterising and testing the effects of covariates on the joint distribution
of the counts. To achieve these goals, we propose an approach based on
graphical probabilistic models, and more specifically partially directed
acyclic graphs.
| [
"Pierre Fernique (VP, AGAP), Jean-Baptiste Durand (VP, INRIA Grenoble\n Rh\\^one-Alpes / LJK Laboratoire Jean Kuntzmann), Yann Gu\\'edon (VP, AGAP)",
"['Pierre Fernique' 'Jean-Baptiste Durand' 'Yann Guédon']"
] |
cs.LG stat.ML | null | 1312.4527 | null | null | http://arxiv.org/pdf/1312.4527v1 | 2013-12-16T09:34:43Z | 2013-12-16T09:34:43Z | Probable convexity and its application to Correlated Topic Models | Non-convex optimization problems often arise from probabilistic modeling,
such as estimation of posterior distributions. Non-convexity makes the problems
intractable, and poses various obstacles for us to design efficient algorithms.
In this work, we attack non-convexity by first introducing the concept of
\emph{probable convexity} for analyzing convexity of real functions in
practice. We then use the new concept to analyze an inference problem in the
\emph{Correlated Topic Model} (CTM) and related nonconjugate models. Contrary
to the existing belief of intractability, we show that this inference problem
is concave under certain conditions. One consequence of our analyses is a novel
algorithm for learning CTM which is significantly more scalable than existing
methods and yields higher-quality results. Finally, we highlight that stochastic
gradient algorithms might be a practical choice for resolving non-convex problems
efficiently. This finding might prove beneficial in many contexts which are beyond
probabilistic modeling.
| [
"['Khoat Than' 'Tu Bao Ho']",
"Khoat Than and Tu Bao Ho"
] |
stat.ML cs.LG | null | 1312.4551 | null | null | http://arxiv.org/pdf/1312.4551v1 | 2013-12-16T21:03:28Z | 2013-12-16T21:03:28Z | Comparative Analysis of Viterbi Training and Maximum Likelihood
Estimation for HMMs | We present an asymptotic analysis of Viterbi Training (VT) and contrast it
with a more conventional Maximum Likelihood (ML) approach to parameter
estimation in Hidden Markov Models. While ML estimator works by (locally)
maximizing the likelihood of the observed data, VT seeks to maximize the
probability of the most likely hidden state sequence. We develop an analytical
framework based on a generating function formalism and illustrate it on an
exactly solvable model of HMM with one unambiguous symbol. For this particular
model the ML objective function is continuously degenerate. VT objective, in
contrast, is shown to have only finite degeneracy. Furthermore, VT converges
faster and results in sparser (simpler) models, thus realizing an automatic
Occam's razor for HMM learning. In a more general scenario, VT can be worse
than ML but is still capable of correctly recovering most of the
parameters.
| [
"['Armen E. Allahverdyan' 'Aram Galstyan']",
"Armen E. Allahverdyan and Aram Galstyan"
] |
stat.ML cs.LG | null | 1312.4564 | null | null | http://arxiv.org/pdf/1312.4564v4 | 2014-06-09T09:31:13Z | 2013-12-16T21:22:46Z | Adaptive Stochastic Alternating Direction Method of Multipliers | The Alternating Direction Method of Multipliers (ADMM) has been studied for
years. The traditional ADMM algorithm needs to compute, at each iteration, an
(empirical) expected loss function on all training examples, resulting in a
computational complexity proportional to the number of training examples. To
reduce the time complexity, stochastic ADMM algorithms were proposed to replace
the expected function with a random loss function associated with one uniformly
drawn example plus a Bregman divergence. The Bregman divergence, however, is
derived from a simple second order proximal function, the half squared norm,
which could be a suboptimal choice.
In this paper, we present a new family of stochastic ADMM algorithms with
optimal second order proximal functions, which produce a new family of adaptive
subgradient methods. We theoretically prove that their regret bounds are as
good as the bounds which could be achieved by the best proximal function that
can be chosen in hindsight. Encouraging empirical results on a variety of
real-world datasets confirm the effectiveness and efficiency of the proposed
algorithms.
| [
"['Peilin Zhao' 'Jinwei Yang' 'Tong Zhang' 'Ping Li']",
"Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li"
] |
cs.CV cs.LG cs.NE | null | 1312.4569 | null | null | http://arxiv.org/pdf/1312.4569v2 | 2014-03-10T15:34:55Z | 2013-11-05T10:45:48Z | Dropout improves Recurrent Neural Networks for Handwriting Recognition | Recurrent neural networks (RNNs) with Long Short-Term memory cells currently
hold the best known results in unconstrained handwriting recognition. We show
that their performance can be greatly improved using dropout - a recently
proposed regularization method for deep architectures. While previous works
showed that dropout gave superior performance in the context of convolutional
networks, it had never been applied to RNNs. In our approach, dropout is
carefully used in the network so that it does not affect the recurrent
connections, hence the power of RNNs in modeling sequences is preserved.
Extensive experiments on a broad range of handwritten databases confirm the
effectiveness of dropout on deep architectures even when the network mainly
consists of recurrent and shared connections.
| [
"['Vu Pham' 'Théodore Bluche' 'Christopher Kermorvant' 'Jérôme Louradour']",
"Vu Pham, Th\\'eodore Bluche, Christopher Kermorvant, J\\'er\\^ome\n Louradour"
] |
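A short PyTorch illustration of the dropout placement described above: dropout is applied to the feed-forward (between-layer) connections of a stacked LSTM but not to the recurrent connections; `nn.LSTM`'s `dropout` argument behaves this way, dropping only the outputs passed between layers. The sizes and class count are illustrative assumptions, not the handwriting systems in the paper.

```python
# Hedged sketch: dropout on the non-recurrent connections of a stacked LSTM only.
import torch
import torch.nn as nn

class HandwritingRNN(nn.Module):
    def __init__(self, n_features=64, hidden=128, n_classes=80):
        super().__init__()
        # dropout here affects only outputs passed between LSTM layers,
        # leaving the recurrent (time-step to time-step) connections intact.
        self.rnn = nn.LSTM(n_features, hidden, num_layers=3,
                           dropout=0.5, batch_first=True)
        self.drop = nn.Dropout(0.5)       # also before the output layer
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        h, _ = self.rnn(x)
        return self.out(self.drop(h))     # per-frame class scores

model = HandwritingRNN()
x = torch.randn(4, 100, 64)
print(model(x).shape)                     # torch.Size([4, 100, 80])
```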
cs.LG | null | 1312.4599 | null | null | http://arxiv.org/pdf/1312.4599v1 | 2013-12-17T00:32:43Z | 2013-12-17T00:32:43Z | Evolution and Computational Learning Theory: A survey on Valiant's paper | Darwin's theory of evolution is considered to be one of the greatest
scientific gems in modern science. It not only gives us a description of how
living things evolve, but also shows how a population evolves through time and
also, why only the fittest individuals continue the generation forward. The
paper gives a high-level analysis of the work of Valiant [1]. Though
we know the mechanisms of evolution, it seems that there does not exist any
strong quantitative and mathematical theory of the evolution of certain
mechanisms. What exactly defines the fitness of an individual, why only
certain individuals in a population tend to mutate, how computation
is done in finite time when we have exponentially many examples: there seem to
be many questions which need to be answered. [1] treats Darwinian
theory as a form of computational learning theory, which calculates the net
fitness of the hypotheses and thus distinguishes functions and their classes
which could be evolvable using polynomial amount of resources. Evolution is
considered as a function of the environment and the previous evolutionary
stages that chooses the best hypothesis using learning techniques that makes
mutation possible and hence, gives a quantitative idea that why only the
fittest individuals tend to survive and have the power to mutate.
| [
"['Arka Bhattacharya']",
"Arka Bhattacharya"
] |
stat.ML cs.LG | null | 1312.4626 | null | null | http://arxiv.org/pdf/1312.4626v1 | 2013-12-17T03:33:08Z | 2013-12-17T03:33:08Z | Compact Random Feature Maps | Kernel approximation using randomized feature maps has recently gained a lot
of interest. In this work, we identify that previous approaches for polynomial
kernel approximation create maps that are rank deficient, and therefore do not
utilize the capacity of the projected feature space effectively. To address
this challenge, we propose compact random feature maps (CRAFTMaps) to
approximate polynomial kernels more concisely and accurately. We prove the
error bounds of CRAFTMaps demonstrating their superior kernel reconstruction
performance compared to the previous approximation schemes. We show how
structured random matrices can be used to efficiently generate CRAFTMaps, and
present a single-pass algorithm using CRAFTMaps to learn non-linear multi-class
classifiers. We present experiments on multiple standard data-sets with
performance competitive with state-of-the-art results.
| [
"Raffay Hamid and Ying Xiao and Alex Gittens and Dennis DeCoste",
"['Raffay Hamid' 'Ying Xiao' 'Alex Gittens' 'Dennis DeCoste']"
] |
cs.LG cs.SD q-bio.NC | null | 1312.4695 | null | null | http://arxiv.org/pdf/1312.4695v3 | 2014-02-18T10:20:25Z | 2013-12-17T09:12:55Z | Sparse, complex-valued representations of natural sounds learned with
phase and amplitude continuity priors | Complex-valued sparse coding is a data representation which employs a
dictionary of two-dimensional subspaces, while imposing a sparse, factorial
prior on complex amplitudes. When trained on a dataset of natural image
patches, it learns phase invariant features which closely resemble receptive
fields of complex cells in the visual cortex. Features trained on natural
sounds however, rarely reveal phase invariance and capture other aspects of the
data. This observation is a starting point of the present work. As its first
contribution, it provides an analysis of natural sound statistics by means of
learning sparse, complex representations of short speech intervals. Secondly,
it proposes priors over the basis function set, which bias them towards
phase-invariant solutions. In this way, a dictionary of complex basis functions
can be learned from the data statistics, while preserving the phase invariance
property. Finally, representations trained on speech sounds with and without
priors are compared. Prior-based basis functions reveal performance comparable
to unconstrained sparse coding, while explicitly representing phase as a
temporal shift. Such representations can find applications in many perceptual
and machine learning tasks.
| [
"['Wiktor Mlynarski']",
"Wiktor Mlynarski"
] |
cs.LG | null | 1312.4986 | null | null | http://arxiv.org/pdf/1312.4986v1 | 2013-12-17T22:12:52Z | 2013-12-17T22:12:52Z | A Comparative Evaluation of Curriculum Learning with Filtering and
Boosting | Not all instances in a data set are equally beneficial for inferring a model
of the data. Some instances (such as outliers) are detrimental to inferring a
model of the data. Several machine learning techniques treat instances in a
data set differently during training such as curriculum learning, filtering,
and boosting. However, an automated method for determining how beneficial an
instance is for inferring a model of the data does not exist. In this paper, we
present an automated method that orders the instances in a data set by
complexity based on their likelihood of being misclassified (instance
hardness). The underlying assumption of this method is that instances with a
high likelihood of being misclassified represent more complex concepts in a
data set. Ordering the instances in a data set allows a learning algorithm to
focus on the most beneficial instances and ignore the detrimental ones. We
compare ordering the instances in a data set in curriculum learning, filtering
and boosting. We find that ordering the instances significantly increases
classification accuracy and that filtering has the largest impact on
classification accuracy. On a set of 52 data sets, ordering the instances
increases the average accuracy from 81% to 84%.
| [
"['Michael R. Smith' 'Tony Martinez']",
"Michael R. Smith and Tony Martinez"
] |
cs.LG | null | 1312.5021 | null | null | http://arxiv.org/pdf/1312.5021v1 | 2013-12-18T02:10:21Z | 2013-12-18T02:10:21Z | Efficient Online Bootstrapping for Large Scale Learning | Bootstrapping is a useful technique for estimating the uncertainty of a
predictor, for example, confidence intervals for prediction. It is typically
used on small to moderate sized datasets, due to its high computation cost.
This work describes a highly scalable online bootstrapping strategy,
implemented inside Vowpal Wabbit, that is several times faster than traditional
strategies. Our experiments indicate that, in addition to providing a black
box-like method for estimating uncertainty, our implementation of online
bootstrapping may also help to train models with better prediction performance
due to model averaging.
| [
"['Zhen Qin' 'Vaclav Petricek' 'Nikos Karampatziakis' 'Lihong Li'\n 'John Langford']",
"Zhen Qin, Vaclav Petricek, Nikos Karampatziakis, Lihong Li, John\n Langford"
] |
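A short sketch of the online-bootstrap idea sketched above: each incoming example receives an independent Poisson(1) weight for each of B bootstrap replicates, so resampling with replacement is approximated in a single streaming pass. `SGDClassifier.partial_fit` stands in for the online learner; this is not the Vowpal Wabbit implementation referenced in the abstract.

```python
# Hedged sketch: online bootstrapping with per-example Poisson(1) weights,
# simulated over a stream of mini-batches with a scikit-learn online learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, random_state=0)
classes = np.unique(y)

B = 10                                                  # bootstrap replicates
models = [SGDClassifier(random_state=b) for b in range(B)]

for start in range(0, len(y), 100):                     # simulate a data stream
    xb, yb = X[start:start + 100], y[start:start + 100]
    for m in models:
        # Poisson(1) weights approximate sampling with replacement online.
        w = rng.poisson(1.0, size=len(yb)).astype(float)
        m.partial_fit(xb, yb, classes=classes, sample_weight=w)

# Spread across the replicates gives a rough uncertainty estimate per example.
scores = np.stack([m.decision_function(X[:5]) for m in models])
print("mean:", scores.mean(axis=0), "std:", scores.std(axis=0))
```

Averaging the replicates' predictions also acts as a simple form of model averaging, which is the secondary benefit the abstract mentions.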
stat.ML cs.LG math.OC | null | 1312.5023 | null | null | http://arxiv.org/pdf/1312.5023v1 | 2013-12-18T02:21:01Z | 2013-12-18T02:21:01Z | Contextually Supervised Source Separation with Application to Energy
Disaggregation | We propose a new framework for single-channel source separation that lies
between the fully supervised and unsupervised setting. Instead of supervision,
we provide input features for each source signal and use convex methods to
estimate the correlations between these features and the unobserved signal
decomposition. We analyze the case of $\ell_2$ loss theoretically and show that
recovery of the signal components depends only on cross-correlation between
features for different signals, not on correlations between features for the
same signal. Contextually supervised source separation is a natural fit for
domains with large amounts of data but no explicit supervision; our motivating
application is energy disaggregation of hourly smart meter data (the separation
of whole-home power signals into different energy uses). Here we apply
contextual supervision to disaggregate the energy usage of thousands of homes over
four years, a significantly larger scale than previously published efforts, and
demonstrate on synthetic data that our method outperforms the unsupervised
approach.
| [
"['Matt Wytock' 'J. Zico Kolter']",
"Matt Wytock and J. Zico Kolter"
] |
stat.AP cs.LG stat.ML | null | 1312.5124 | null | null | http://arxiv.org/pdf/1312.5124v1 | 2013-12-18T13:13:39Z | 2013-12-18T13:13:39Z | Permuted NMF: A Simple Algorithm Intended to Minimize the Volume of the
Score Matrix | Non-Negative Matrix Factorization, NMF, attempts to find a number of
archetypal response profiles, or parts, such that any sample profile in the
dataset can be approximated by a close profile among these archetypes or a
linear combination of these profiles. The non-negativity constraint is imposed
while estimating archetypal profiles, due to the non-negative nature of the
observed signal. Apart from non-negativity, a volume constraint can be applied
on the Score matrix W to enhance the ability of learning parts of NMF. In this
report, we describe a very simple algorithm, which in effect achieves volume
minimization, although indirectly.
| [
"Paul Fogel",
"['Paul Fogel']"
] |
stat.ML cs.LG math.OC | null | 1312.5179 | null | null | http://arxiv.org/pdf/1312.5179v1 | 2013-12-18T15:35:32Z | 2013-12-18T15:35:32Z | The Total Variation on Hypergraphs - Learning on Hypergraphs Revisited | Hypergraphs allow one to encode higher-order relationships in data and are
thus a very flexible modeling tool. Current learning methods are either based
on approximations of the hypergraphs via graphs or on tensor methods which are
only applicable under special conditions. In this paper, we present a new
learning framework on hypergraphs which fully uses the hypergraph structure.
The key element is a family of regularization functionals based on the total
variation on hypergraphs.
| [
"['Matthias Hein' 'Simon Setzer' 'Leonardo Jost' 'Syama Sundar Rangapuram']",
"Matthias Hein, Simon Setzer, Leonardo Jost, Syama Sundar Rangapuram"
] |
stat.ML cs.LG math.OC | null | 1312.5192 | null | null | http://arxiv.org/pdf/1312.5192v2 | 2014-03-24T10:10:42Z | 2013-12-18T16:01:25Z | Nonlinear Eigenproblems in Data Analysis - Balanced Graph Cuts and the
RatioDCA-Prox | It has been recently shown that a large class of balanced graph cuts allows
for an exact relaxation into a nonlinear eigenproblem. We review briefly some
of these results and propose a family of algorithms to compute nonlinear
eigenvectors which encompasses previous work as special cases. We provide a
detailed analysis of the properties and the convergence behavior of these
algorithms and then discuss their application in the area of balanced graph
cuts.
| [
"Leonardo Jost, Simon Setzer, Matthias Hein",
"['Leonardo Jost' 'Simon Setzer' 'Matthias Hein']"
] |
cs.LG cs.AI cs.CL stat.ML | null | 1312.5198 | null | null | http://arxiv.org/pdf/1312.5198v4 | 2014-04-25T13:31:53Z | 2013-12-18T16:13:08Z | Learning Semantic Script Knowledge with Event Embeddings | Induction of common sense knowledge about prototypical sequences of events
has recently received much attention. Instead of inducing this knowledge in the
form of graphs, as in much of the previous work, our method computes distributed
representations of event realizations based on distributed representations of
predicates and their arguments, and then uses these representations to predict
prototypical event orderings. The
parameters of the compositional process for computing the event representations
and the ranking component of the model are jointly estimated from texts. We
show that this approach results in a substantial boost in ordering performance
with respect to previous methods.
| [
"Ashutosh Modi and Ivan Titov",
"['Ashutosh Modi' 'Ivan Titov']"
] |
cs.CV cs.LG cs.NE | null | 1312.5242 | null | null | http://arxiv.org/pdf/1312.5242v3 | 2014-02-16T13:07:23Z | 2013-12-18T17:44:17Z | Unsupervised feature learning by augmenting single images | When deep learning is applied to visual object recognition, data augmentation
is often used to generate additional training data without extra labeling cost.
It helps to reduce overfitting and increase the performance of the algorithm.
In this paper we investigate whether it is possible to use data augmentation as the
main component of an unsupervised feature learning architecture. To that end we
sample a set of random image patches and declare each of them to be a separate
single-image surrogate class. We then extend these trivial one-element classes
by applying a variety of transformations to the initial 'seed' patches. Finally
we train a convolutional neural network to discriminate between these surrogate
classes. The feature representation learned by the network can then be used in
various vision tasks. We find that this simple feature learning algorithm is
surprisingly successful, achieving competitive classification results on
several popular vision datasets (STL-10, CIFAR-10, Caltech-101).
| [
"['Alexey Dosovitskiy' 'Jost Tobias Springenberg' 'Thomas Brox']",
"Alexey Dosovitskiy, Jost Tobias Springenberg and Thomas Brox"
] |
stat.ML cs.LG | null | 1312.5258 | null | null | http://arxiv.org/pdf/1312.5258v2 | 2014-10-24T19:16:14Z | 2013-12-18T18:30:51Z | On the Challenges of Physical Implementations of RBMs | Restricted Boltzmann machines (RBMs) are powerful machine learning models,
but learning and some kinds of inference in the model require sampling-based
approximations, which, in classical digital computers, are implemented using
expensive MCMC. Physical computation offers the opportunity to reduce the cost
of sampling by building physical systems whose natural dynamics correspond to
drawing samples from the desired RBM distribution. Such a system avoids the
burn-in and mixing cost of a Markov chain. However, hardware implementations of
this variety usually entail limitations such as low-precision and limited range
of the parameters and restrictions on the size and topology of the RBM. We
conduct software simulations to determine how harmful each of these
restrictions is. Our simulations are designed to reproduce aspects of the
D-Wave quantum computer, but the issues we investigate arise in most forms of
physical computation.
| [
"Vincent Dumoulin, Ian J. Goodfellow, Aaron Courville, Yoshua Bengio",
"['Vincent Dumoulin' 'Ian J. Goodfellow' 'Aaron Courville' 'Yoshua Bengio']"
] |
cs.CE cs.LG | null | 1312.5354 | null | null | http://arxiv.org/pdf/1312.5354v2 | 2014-07-28T09:44:01Z | 2013-12-18T22:08:07Z | Classification of Human Ventricular Arrhythmia in High Dimensional
Representation Spaces | We studied classification of human ECGs labelled as normal sinus rhythm,
ventricular fibrillation and ventricular tachycardia by means of support vector
machines in different representation spaces, using different observation
lengths. ECG waveform segments of duration 0.5-4 s, their Fourier magnitude
spectra, and lower dimensional projections of Fourier magnitude spectra were
used for classification. All considered representations were of much higher
dimension than in published studies. Classification accuracy improved with
segment duration up to 2 s, with 4 s providing little improvement. We found
that it is possible to discriminate between ventricular tachycardia and
ventricular fibrillation by the present approach with much shorter runs of ECG
(2 s, minimum 86% sensitivity per class) than previously imagined. Ensembles of
classifiers acting on 1 s segments taken over 5 s observation windows gave the
best results, with detection sensitivities exceeding 93% for all classes.
| [
"['Yaqub Alwan' 'Zoran Cvetkovic' 'Michael Curtis']",
"Yaqub Alwan, Zoran Cvetkovic, Michael Curtis"
] |
cs.NE cs.LG stat.ML | null | 1312.5394 | null | null | http://arxiv.org/pdf/1312.5394v1 | 2013-12-19T02:38:40Z | 2013-12-19T02:38:40Z | Missing Value Imputation With Unsupervised Backpropagation | Many data mining and data analysis techniques operate on dense matrices or
complete tables of data. Real-world data sets, however, often contain unknown
values. Even many classification algorithms that are designed to operate with
missing values still exhibit deteriorated accuracy. One approach to handling
missing values is to fill in (impute) the missing values. In this paper, we
present a technique for unsupervised learning called Unsupervised
Backpropagation (UBP), which trains a multi-layer perceptron to fit to the
manifold sampled by a set of observed point-vectors. We evaluate UBP with the
task of imputing missing values in datasets, and show that UBP is able to
predict missing values with significantly lower sum-squared error than other
collaborative filtering and imputation techniques. We also demonstrate with 24
datasets and 9 supervised learning algorithms that classification accuracy is
usually higher when randomly-withheld values are imputed using UBP, rather than
with other methods.
| [
"['Michael S. Gashler' 'Michael R. Smith' 'Richard Morris' 'Tony Martinez']",
"Michael S. Gashler, Michael R. Smith, Richard Morris, Tony Martinez"
] |
cs.LG stat.ML | null | 1312.5398 | null | null | http://arxiv.org/pdf/1312.5398v2 | 2014-02-17T20:32:00Z | 2013-12-19T03:24:58Z | Continuous Learning: Engineering Super Features With Feature Algebras | In this paper we consider a problem of searching a space of predictive models
for a given training data set. We propose an iterative procedure for deriving a
sequence of improving models and a corresponding sequence of sets of non-linear
features on the original input space. After a finite number of iterations N,
the non-linear features become $2^N$-degree polynomials on the original space.
We show that in the limit of an infinite number of iterations the derived
non-linear features must form an associative algebra: a product of two features
is equal
to a linear combination of features from the same feature space for any given
input point. Because each iteration consists of solving a series of convex
problems that contain all previous solutions, the likelihood of the models in
the sequence increases with each iteration, while the dimension of the model
parameter space is kept at a limited, controlled value.
| [
"Michael Tetelman",
"['Michael Tetelman']"
] |
stat.ML cs.LG | null | 1312.5412 | null | null | http://arxiv.org/pdf/1312.5412v3 | 2014-01-06T20:07:09Z | 2013-12-19T05:37:50Z | Approximated Infomax Early Stopping: Revisiting Gaussian RBMs on Natural
Images | We pursue an early stopping technique that helps Gaussian Restricted
Boltzmann Machines (GRBMs) to gain good natural image representations in terms
of overcompleteness and data fitting. GRBMs are widely considered an unsuitable
model for natural images because they gain non-overcomplete
representations which include uniform filters that do not represent useful
image features. We have recently found that GRBMs initially gain and
subsequently lose useful filters during training, contrary to this common
perspective.
We attribute this phenomenon to a tradeoff between overcompleteness of GRBM
representations and data fitting. To gain GRBM representations that are
overcomplete and fit data well, we propose a measure for GRBM representation
quality, approximated mutual information, and an early stopping technique based
on this measure. The proposed method boosts performance of classifiers trained
on GRBM representations.
| [
"Taichi Kiwaki, Takaki Makino, Kazuyuki Aihara",
"['Taichi Kiwaki' 'Takaki Makino' 'Kazuyuki Aihara']"
] |
cs.LG | 10.1007/978-3-662-44851-9_28 | 1312.5419 | null | null | http://arxiv.org/abs/1312.5419v3 | 2014-05-15T11:32:03Z | 2013-12-19T06:53:24Z | Large-scale Multi-label Text Classification - Revisiting Neural Networks | Neural networks have recently been proposed for multi-label classification
because they are able to capture and model label dependencies in the output
layer. In this work, we investigate limitations of BP-MLL, a neural network
(NN) architecture that aims at minimizing pairwise ranking error. Instead, we
propose to use a comparably simple NN approach with recently proposed learning
techniques for large-scale multi-label text classification tasks. In
particular, we show that BP-MLL's ranking loss minimization can be efficiently
and effectively replaced with the commonly used cross entropy error function,
and demonstrate that several advances in neural network training that have been
developed in the realm of deep learning can be effectively employed in this
setting. Our experimental results show that simple NN models equipped with
advanced techniques such as rectified linear units, dropout, and AdaGrad
perform as well as or even outperform state-of-the-art approaches on six
large-scale textual datasets with diverse characteristics.
| [
"['Jinseok Nam' 'Jungi Kim' 'Eneldo Loza Mencía' 'Iryna Gurevych'\n 'Johannes Fürnkranz']",
"Jinseok Nam, Jungi Kim, Eneldo Loza Menc\\'ia, Iryna Gurevych, Johannes\n F\\\"urnkranz"
] |
cs.SY cs.IT cs.LG math.IT math.OC | null | 1312.5434 | null | null | http://arxiv.org/pdf/1312.5434v3 | 2014-12-16T08:16:46Z | 2013-12-19T08:29:57Z | Asynchronous Adaptation and Learning over Networks --- Part I: Modeling
and Stability Analysis | In this work and the supporting Parts II [2] and III [3], we provide a rather
detailed analysis of the stability and performance of asynchronous strategies
for solving distributed optimization and adaptation problems over networks. We
examine asynchronous networks that are subject to fairly general sources of
uncertainties, such as changing topologies, random link failures, random data
arrival times, and agents turning on and off randomly. Under this model, agents
in the network may stop updating their solutions or may stop sending or
receiving information in a random manner and without coordination with other
agents. We establish in Part I conditions on the first and second-order moments
of the relevant parameter distributions to ensure mean-square stable behavior.
We derive in Part II expressions that reveal how the various parameters of the
asynchronous behavior influence network performance. We compare in Part III the
performance of asynchronous networks to the performance of both centralized
solutions and synchronous networks. One notable conclusion is that the
mean-square-error performance of asynchronous networks shows a degradation only
of the order of $O(\nu)$, where $\nu$ is a small step-size parameter, while the
convergence rate remains largely unaltered. The results provide a solid
justification for the remarkable resilience of cooperative networks in the face
of random failures at multiple levels: agents, links, data arrivals, and
topology.
| [
"['Xiaochuan Zhao' 'Ali H. Sayed']",
"Xiaochuan Zhao and Ali H. Sayed"
] |
cs.SY cs.IT cs.LG math.IT math.OC | null | 1312.5438 | null | null | http://arxiv.org/pdf/1312.5438v3 | 2014-12-16T08:28:11Z | 2013-12-19T08:39:58Z | Asynchronous Adaptation and Learning over Networks - Part II:
Performance Analysis | In Part I \cite{Zhao13TSPasync1}, we introduced a fairly general model for
asynchronous events over adaptive networks including random topologies, random
link failures, random data arrival times, and agents turning on and off
randomly. We performed a stability analysis and established the notable fact
that the network is still able to converge in the mean-square-error sense to
the desired solution. Once stable behavior is guaranteed, it becomes important
to evaluate how fast the iterates converge and how close they get to the
optimal solution. This is a demanding task due to the various asynchronous
events and due to the fact that agents influence each other. In this Part II,
we carry out a detailed analysis of the mean-square-error performance of
asynchronous strategies for solving distributed optimization and adaptation
problems over networks. We derive analytical expressions for the mean-square
convergence rate and the steady-state mean-square-deviation. The expressions
reveal how the various parameters of the asynchronous behavior influence
network performance. In the process, we establish the interesting conclusion
that even under the influence of asynchronous events, all agents in the
adaptive network can still reach an $O(\nu^{1 + \gamma_o'})$ near-agreement
with some $\gamma_o' > 0$ while approaching the desired solution within
$O(\nu)$ accuracy, where $\nu$ is proportional to the small step-size parameter
for adaptation.
| [
"['Xiaochuan Zhao' 'Ali H. Sayed']",
"Xiaochuan Zhao and Ali H. Sayed"
] |
cs.SY cs.IT cs.LG math.IT math.OC | null | 1312.5439 | null | null | http://arxiv.org/pdf/1312.5439v3 | 2014-12-16T08:31:57Z | 2013-12-19T08:45:42Z | Asynchronous Adaptation and Learning over Networks - Part III:
Comparison Analysis | In Part II [3] we carried out a detailed mean-square-error analysis of the
performance of asynchronous adaptation and learning over networks under a
fairly general model for asynchronous events including random topologies,
random link failures, random data arrival times, and agents turning on and off
randomly. In this Part III, we compare the performance of synchronous and
asynchronous networks. We also compare the performance of decentralized
adaptation against centralized stochastic-gradient (batch) solutions. Two
interesting conclusions stand out. First, the results establish that the
performance of adaptive networks is largely immune to the effect of
asynchronous events: the mean and mean-square convergence rates and the
asymptotic bias values are not degraded relative to synchronous or centralized
implementations. Only the steady-state mean-square-deviation suffers a
degradation in the order of $\nu$, which represents the small step-size
parameters used for adaptation. Second, the results show that the adaptive
distributed network matches the performance of the centralized solution. These
conclusions highlight another critical benefit of cooperation by networked
agents: cooperation not only enhances performance in comparison to stand-alone
single-agent processing, but also endows the network with remarkable resilience
to various forms of random failure events, enabling it to deliver performance
as powerful as that of batch solutions.
| [
"['Xiaochuan Zhao' 'Ali H. Sayed']",
"Xiaochuan Zhao and Ali H. Sayed"
] |
cs.IR cs.LG cs.MM | 10.1109/TASLP.2014.2337842 | 1312.5457 | null | null | http://arxiv.org/abs/1312.5457v1 | 2013-12-19T09:40:03Z | 2013-12-19T09:40:03Z | Codebook based Audio Feature Representation for Music Information
Retrieval | Digital music has become prolific in the web in recent decades. Automated
recommendation systems are essential for users to discover music they love and
for artists to reach appropriate audience. When manual annotations and user
preference data is lacking (e.g. for new artists) these systems must rely on
\emph{content based} methods. Besides powerful machine learning tools for
classification and retrieval, a key component for successful recommendation is
the \emph{audio content representation}.
Good representations should capture informative musical patterns in the audio
signal of songs. These representations should be concise, to enable efficient
(low storage, easy indexing, fast search) management of huge music
repositories, and should also be easy and fast to compute, to enable real-time
interaction with a user supplying new songs to the system.
Before designing new audio features, we explore the usage of traditional
local features, while adding a stage of encoding with a pre-computed
\emph{codebook} and a stage of pooling to get compact vectorial
representations. We experiment with different encoding methods, namely
\emph{the LASSO}, \emph{vector quantization (VQ)} and \emph{cosine similarity
(CS)}. We evaluate the representations' quality in two music information
retrieval applications: query-by-tag and query-by-example. Our results show
that concise representations can be used for successful performance in both
applications. We recommend using top-$\tau$ VQ encoding, which consistently
performs well in both applications, and requires much less computation time
than the LASSO.
| [
"['Yonatan Vaizman' 'Brian McFee' 'Gert Lanckriet']",
"Yonatan Vaizman, Brian McFee and Gert Lanckriet"
] |
cs.LG stat.ML | null | 1312.5465 | null | null | http://arxiv.org/pdf/1312.5465v3 | 2014-09-25T02:31:30Z | 2013-12-19T10:10:02Z | Learning rates of $l^q$ coefficient regularization learning with
Gaussian kernel | Regularization is a well-recognized, powerful strategy for improving the
performance of a learning machine, and $l^q$ regularization schemes with
$0<q<\infty$ are in common use. It is known that different values of $q$ lead to
different properties of the deduced estimators, say, $l^2$ regularization leads
to smooth estimators while $l^1$ regularization leads to sparse estimators.
Then, how do the generalization capabilities of $l^q$ regularization learning
vary with $q$? In this paper, we study this problem in the framework of
statistical learning theory and show that implementing $l^q$ coefficient
regularization schemes in the sample-dependent hypothesis space associated with
the Gaussian kernel can attain the same almost optimal learning rates for all
$0<q<\infty$. That is, the upper and lower bounds of learning rates for $l^q$
regularization learning are asymptotically identical for all $0<q<\infty$. Our
finding tentatively reveals that, in some modeling contexts, the choice of $q$
might not have a strong impact with respect to the generalization capability.
From this perspective, $q$ can be specified arbitrarily, or chosen merely
according to criteria other than generalization, such as smoothness,
computational complexity, and sparsity.
| [
"Shaobo Lin, Jinshan Zeng, Jian Fang and Zongben Xu",
"['Shaobo Lin' 'Jinshan Zeng' 'Jian Fang' 'Zongben Xu']"
] |