categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
cs.LG | 10.1613/jair.3077 | 1406.3270 | null | null | http://arxiv.org/abs/1406.3270v1 | 2014-01-16T05:02:28Z | 2014-01-16T05:02:28Z | Kalman Temporal Differences | Because reinforcement learning suffers from a lack of scalability, online
value (and Q-) function approximation has received increasing interest over
the last decade. This contribution introduces a novel approximation scheme,
namely the Kalman Temporal Differences (KTD) framework, which exhibits the following
features: sample-efficiency, non-linear approximation, non-stationarity
handling and uncertainty management. A first KTD-based algorithm is provided
for deterministic Markov Decision Processes (MDP) which produces biased
estimates in the case of stochastic transitions. Then the eXtended KTD
framework (XKTD), which handles stochastic MDPs, is described. Convergence is
analyzed for special cases for both deterministic and stochastic transitions.
The related algorithms are evaluated on classical benchmarks. They compare
favorably to the state of the art while exhibiting the announced features.
| [
"Matthieu Geist, Olivier Pietquin",
"['Matthieu Geist' 'Olivier Pietquin']"
]
|
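At its core, the KTD framework above is a Kalman filter over the value-function parameters. A minimal sketch of that idea for the special case of linear features and deterministic transitions; the feature map, noise levels, and random-walk evolution model are illustrative assumptions, and the paper's full framework also handles non-linear approximation (via sigma-point methods), which is not reproduced here.

```python
import numpy as np

def ktd_linear(transitions, phi, dim, gamma=0.95, q=1e-3, r_noise=1.0):
    """transitions: iterable of (s, reward, s_next); phi: state -> feature vector."""
    theta = np.zeros(dim)                  # parameter mean
    P = np.eye(dim)                        # parameter covariance (uncertainty)
    for s, reward, s_next in transitions:
        P = P + q * np.eye(dim)            # prediction step: random-walk evolution
        h = phi(s) - gamma * phi(s_next)   # linear observation vector
        innovation = reward - h @ theta    # temporal-difference error
        S = h @ P @ h + r_noise            # innovation variance (scalar)
        K = P @ h / S                      # Kalman gain
        theta = theta + K * innovation     # correction step
        P = P - np.outer(K, h @ P)         # covariance update
    return theta, P

# toy chain: states 0..4, reward 1 on reaching state 4, tabular features
phi = lambda s: np.eye(5)[s]
episode = [(i, 1.0 if i == 3 else 0.0, i + 1) for i in range(4)]
theta, P = ktd_linear(episode * 200, phi, dim=5)
print(np.round(theta, 2))                  # rough value estimates for states 0..4
```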
cs.CV cs.LG stat.ML | null | 1406.3332 | null | null | http://arxiv.org/pdf/1406.3332v2 | 2014-11-14T16:58:48Z | 2014-06-12T19:41:03Z | Convolutional Kernel Networks | An important goal in visual recognition is to devise image representations
that are invariant to particular transformations. In this paper, we address
this goal with a new type of convolutional neural network (CNN) whose
invariance is encoded by a reproducing kernel. Unlike traditional approaches
where neural networks are learned either to represent data or for solving a
classification task, our network learns to approximate the kernel feature map
on training data. Such an approach enjoys several benefits over classical ones.
First, by teaching CNNs to be invariant, we obtain simple network architectures
that achieve a similar accuracy to more complex ones, while being easy to train
and robust to overfitting. Second, we bridge a gap between the neural network
literature and kernels, which are natural tools to model invariance. We
evaluate our methodology on visual recognition tasks where CNNs have proven to
perform well, e.g., digit recognition with the MNIST dataset, and the more
challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive
with the state of the art.
| [
"Julien Mairal (INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire Jean\n Kuntzmann), Piotr Koniusz (INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire\n Jean Kuntzmann), Zaid Harchaoui (INRIA Grenoble Rh\\^one-Alpes / LJK\n Laboratoire Jean Kuntzmann), Cordelia Schmid (INRIA Grenoble Rh\\^one-Alpes /\n LJK Laboratoire Jean Kuntzmann)",
"['Julien Mairal' 'Piotr Koniusz' 'Zaid Harchaoui' 'Cordelia Schmid']"
]
|
cs.LG | null | 1406.3407 | null | null | http://arxiv.org/pdf/1406.3407v2 | 2015-04-20T18:39:18Z | 2014-06-13T02:19:26Z | Restricted Boltzmann Machine for Classification with Hierarchical
Correlated Prior | Restricted Boltzmann machines (RBMs) and their variants have recently become
hot research topics and have been widely applied to many classification
problems, such as character recognition and document categorization. Often,
the classification RBM ignores the interclass relationship or prior knowledge
of sharing information among classes. In this paper, we are interested in the
RBM with a hierarchical prior over classes. We assume that parameters for
nearby nodes are correlated in the hierarchical tree, and further that the
parameters at each node of the tree are orthogonal to those at its ancestors.
We propose a hierarchical correlated RBM for classification, which
generalizes the classification RBM by sharing information among different
classes. In order to reduce the redundancy between node parameters in the
hierarchy, we also introduce orthogonal restrictions to our objective
function. We test our method on challenging datasets, and show promising
results compared to competitive baselines.
| [
"Gang Chen and Sargur H. Srihari",
"['Gang Chen' 'Sargur H. Srihari']"
]
|
cs.CV cs.LG cs.NE | null | 1406.3474 | null | null | http://arxiv.org/pdf/1406.3474v1 | 2014-06-13T10:11:18Z | 2014-06-13T10:11:18Z | Heterogeneous Multi-task Learning for Human Pose Estimation with Deep
Convolutional Neural Network | We propose a heterogeneous multi-task learning framework for human pose
estimation from monocular images with a deep convolutional neural network. In
particular, we simultaneously learn a pose-joint regressor and a sliding-window
body-part detector in a deep network architecture. We show that including the
body-part detection task helps to regularize the network, directing it to
converge to a good solution. We report competitive and state-of-the-art results on
several data sets. We also empirically show that the learned neurons in the
middle layer of our network are tuned to localized body parts.
| [
"Sijin Li, Zhi-Qiang Liu, Antoni B. Chan",
"['Sijin Li' 'Zhi-Qiang Liu' 'Antoni B. Chan']"
]
|
cs.AI cs.LG stat.AP | 10.3233/IDA-150734 | 1406.3496 | null | null | http://arxiv.org/abs/1406.3496v1 | 2014-06-13T10:38:09Z | 2014-06-13T10:38:09Z | EigenEvent: An Algorithm for Event Detection from Complex Data Streams
in Syndromic Surveillance | Syndromic surveillance systems continuously monitor multiple pre-diagnostic
daily streams of indicators from different regions with the aim of early
detection of disease outbreaks. The main objective of these systems is to
detect outbreaks hours or days before the clinical and laboratory confirmation.
The type of data that is being generated via these systems is usually
multivariate and seasonal with spatial and temporal dimensions. The algorithm
What's Strange About Recent Events (WSARE) is the state-of-the-art method for
such problems. It exhaustively searches for contrast sets in the multivariate
data and signals an alarm when it finds statistically significant rules. This
bottom-up approach achieves a much lower detection delay than the existing
top-down approaches. However, WSARE is very sensitive to small-scale changes
and consequently comes with a relatively high rate of false alarms. We
propose a new approach called EigenEvent that is neither fully top-down nor
bottom-up. Instead of a top-down or bottom-up search, this method tracks
changes in the data correlation structure via eigenspace techniques. This new
methodology enables us to detect both overall changes (via eigenvalues) and
dimension-level changes (via eigenvectors). Experimental results on a hundred
benchmark data sets reveal that EigenEvent achieves better overall
performance than the state of the art, in particular in terms of the false
alarm rate.
| [
"Hadi Fanaee-T and Jo\\~ao Gama",
"['Hadi Fanaee-T' 'João Gama']"
]
|
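The core of EigenEvent is tracking the eigenstructure of the data correlation matrix. A minimal sketch of that idea, comparing a sliding window against a baseline window via the leading eigenvalue (overall change) and eigenvector (dimension-level change); the window sizes, thresholds, and synthetic outbreak below are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def leading_eig(X):
    C = np.corrcoef(X, rowvar=False)        # correlation matrix of the window
    w, V = np.linalg.eigh(C)
    return w[-1], V[:, -1]                  # largest eigenvalue and its eigenvector

def eigen_alarm(stream, win=50, val_tol=0.5, ang_tol=0.3):
    base_val, base_vec = leading_eig(stream[:win])
    alarms = []
    for t in range(win, len(stream) - win):
        val, vec = leading_eig(stream[t:t + win])
        rot = 1.0 - abs(base_vec @ vec)     # 1 - |cos angle| between eigenvectors
        if abs(val - base_val) > val_tol or rot > ang_tol:
            alarms.append(t)
    return alarms

rng = np.random.default_rng(0)
normal = rng.normal(size=(300, 4))
shifted = rng.normal(size=(100, 4))
shifted[:, 1] = shifted[:, 0] + 0.1 * rng.normal(size=100)  # outbreak: two streams couple
data = np.vstack([normal, shifted])
print(eigen_alarm(data)[:5])                # first few alarm time indices
```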
cs.AI cs.LG | null | 1406.3497 | null | null | http://arxiv.org/pdf/1406.3497v2 | 2014-11-18T21:31:32Z | 2014-06-13T10:49:38Z | Multi-objective Reinforcement Learning with Continuous Pareto Frontier
Approximation Supplementary Material | This document contains supplementary material for the paper "Multi-objective
Reinforcement Learning with Continuous Pareto Frontier Approximation",
published at the Twenty-Ninth AAAI Conference on Artificial Intelligence
(AAAI-15). The paper is about learning a continuous approximation of the Pareto
frontier in Multi-Objective Markov Decision Problems (MOMDPs). We propose a
policy-based approach that exploits gradient information to generate solutions
close to the Pareto ones. Unlike previous policy-gradient
multi-objective algorithms, where n optimization routines are used to obtain n
solutions, our approach performs a single gradient-ascent run that at each step
generates an improved continuous approximation of the Pareto frontier. The idea
is to exploit a gradient-based approach to optimize the parameters of a
function that defines a manifold in the policy parameter space so that the
corresponding image in the objective space gets as close as possible to the
Pareto frontier. Besides deriving how to compute and estimate such gradient, we
will also discuss the non-trivial issue of defining a metric to assess the
quality of the candidate Pareto frontiers. Finally, the properties of the
proposed approach are empirically evaluated on two interesting MOMDPs.
| [
"Matteo Pirotta, Simone Parisi and Marcello Restelli",
"['Matteo Pirotta' 'Simone Parisi' 'Marcello Restelli']"
]
|
math.NA cs.LG | 10.1109/TNNLS.2015.2440473 | 1406.3587 | null | null | http://arxiv.org/abs/1406.3587v1 | 2014-06-13T16:33:55Z | 2014-06-13T16:33:55Z | Quaternion Gradient and Hessian | The optimization of real scalar functions of quaternion variables, such as
the mean square error or array output power, underpins many practical
applications. Solutions often require the calculation of the gradient and
Hessian; however, real functions of quaternion variables are essentially
non-analytic. To address this issue, we propose new definitions of quaternion
gradient and Hessian, based on the novel generalized HR (GHR) calculus, thus
making possible efficient derivation of optimization algorithms directly in the
quaternion field, rather than transforming the problem to the real domain, as
is current practice. In addition, unlike the existing quaternion gradients, the
GHR calculus allows for the product and chain rule, and for a one-to-one
correspondence of the proposed quaternion gradient and Hessian with their real
counterparts. Properties of the quaternion gradient and Hessian relevant to
numerical applications are elaborated, and the results illuminate the
usefulness of the GHR calculus in greatly simplifying the derivation of the
quaternion least mean squares algorithm, and of the quaternion least squares
and Newton algorithms. The proposed gradient and Hessian are also shown to enable the same
generic forms as the corresponding real- and complex-valued algorithms, further
illustrating the advantages in algorithm design and evaluation.
| [
"Dongpo Xu, Danilo P. Mandic",
"['Dongpo Xu' 'Danilo P. Mandic']"
]
|
stat.ML cs.LG | null | 1406.3650 | null | null | http://arxiv.org/pdf/1406.3650v2 | 2014-11-18T03:12:37Z | 2014-06-13T21:19:09Z | Smoothed Gradients for Stochastic Variational Inference | Stochastic variational inference (SVI) lets us scale up Bayesian computation
to massive data. It uses stochastic optimization to fit a variational
distribution, following easy-to-compute noisy natural gradients. As with most
traditional stochastic optimization methods, SVI takes precautions to use
unbiased stochastic gradients whose expectations are equal to the true
gradients. In this paper, we explore the idea of following biased stochastic
gradients in SVI. Our method replaces the natural gradient with a similarly
constructed vector that uses a fixed-window moving average of some of its
previous terms. We will demonstrate the many advantages of this technique.
First, its computational cost is the same as for SVI and storage requirements
only multiply by a constant factor. Second, it enjoys significant variance
reduction over the unbiased estimates, smaller bias than averaged gradients,
and leads to smaller mean-squared error against the full gradient. We test our
method on latent Dirichlet allocation with three large corpora.
| [
"Stephan Mandt and David Blei",
"['Stephan Mandt' 'David Blei']"
]
|
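The smoothing scheme above amounts to replacing the current noisy gradient with a fixed-window moving average of recent ones. A minimal sketch on a generic stochastic quadratic objective (not SVI's natural gradients); the objective, noise scale, window length, and step size are illustrative assumptions.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])

def noisy_grad(x):
    # unbiased but high-variance gradient of ||x - target||^2
    return 2 * (x - target) + rng.normal(scale=5.0, size=2)

def smoothed_sgd(L=10, steps=2000, lr=0.01):
    x = np.zeros(2)
    window = deque(maxlen=L)            # fixed window of recent noisy gradients
    for _ in range(steps):
        window.append(noisy_grad(x))
        g = np.mean(window, axis=0)     # smoothed (slightly biased) gradient
        x = x - lr * g
    return x

print(smoothed_sgd(L=1))    # plain SGD: noisier final iterate
print(smoothed_sgd(L=10))   # smoothed: closer to the optimum [2, -1]
```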
cs.CY cs.LG cs.SI | null | 1406.3692 | null | null | http://arxiv.org/pdf/1406.3692v1 | 2014-06-14T07:01:03Z | 2014-06-14T07:01:03Z | Analyzing Social and Stylometric Features to Identify Spear phishing
Emails | Spear phishing is a complex targeted attack in which an attacker harvests
information about the victim prior to the attack. This information is then used
to create sophisticated, genuine-looking attack vectors, drawing the victim to
compromise confidential information. What makes spear phishing different, and
more powerful than normal phishing, is this contextual information about the
victim. Online social media services can be one such source for gathering vital
information about an individual. In this paper, we characterize and examine a
true positive dataset of spear phishing, spam, and normal phishing emails from
Symantec's enterprise email scanning service. We then present a model to detect
spear phishing emails sent to employees of 14 international organizations, by
using social features extracted from LinkedIn. Our dataset consists of 4,742
targeted attack emails sent to 2,434 victims, and 9,353 non-targeted attack
emails sent to 5,912 non-victims; and publicly available information from their
LinkedIn profiles. We applied various machine learning algorithms to this
labeled data, and achieved an overall maximum accuracy of 97.76% in identifying
spear phishing emails. We used a combination of social features from LinkedIn
profiles, and stylometric features extracted from email subjects, bodies, and
attachments. However, we achieved a slightly better accuracy of 98.28% without
the social features. Our analysis revealed that social features extracted from
LinkedIn do not help in identifying spear phishing emails. To the best of our
knowledge, this is one of the first attempts to make use of a combination of
stylometric features extracted from emails, and social features extracted from
an online social network to detect targeted spear phishing emails.
| [
"Prateek Dewan and Anand Kashyap and Ponnurangam Kumaraguru",
"['Prateek Dewan' 'Anand Kashyap' 'Ponnurangam Kumaraguru']"
]
|
cs.LG | null | 1406.3726 | null | null | http://arxiv.org/pdf/1406.3726v1 | 2014-06-14T13:08:30Z | 2014-06-14T13:08:30Z | Evaluation of Machine Learning Techniques for Green Energy Prediction | We evaluate the following Machine Learning techniques for Green Energy (Wind,
Solar) Prediction: Bayesian Inference, Neural Networks, Support Vector
Machines, and clustering techniques (PCA). Our objectives are to predict
green energy using weather forecasts, predict deviations from forecast green
energy, find correlations among different weather parameters and green energy
availability, and recover lost or missing energy (or weather) data. We use
historical weather data and weather forecasts for these tasks.
| [
"Ankur Sahai",
"['Ankur Sahai']"
]
|
cs.LG stat.ML | null | 1406.3781 | null | null | http://arxiv.org/pdf/1406.3781v2 | 2014-11-22T11:16:28Z | 2014-06-14T23:25:05Z | From Stochastic Mixability to Fast Rates | Empirical risk minimization (ERM) is a fundamental learning rule for
statistical learning problems where the data is generated according to some
unknown distribution $\mathsf{P}$ and returns a hypothesis $f$ chosen from a
fixed class $\mathcal{F}$ with small loss $\ell$. In the parametric setting,
depending upon $(\ell, \mathcal{F},\mathsf{P})$ ERM can have slow
$(1/\sqrt{n})$ or fast $(1/n)$ rates of convergence of the excess risk as a
function of the sample size $n$. There exist several results that give
sufficient conditions for fast rates in terms of joint properties of $\ell$,
$\mathcal{F}$, and $\mathsf{P}$, such as the margin condition and the Bernstein
condition. In the non-statistical prediction with expert advice setting, there
is an analogous slow and fast rate phenomenon, and it is entirely characterized
in terms of the mixability of the loss $\ell$ (there being no role there for
$\mathcal{F}$ or $\mathsf{P}$). The notion of stochastic mixability builds a
bridge between these two models of learning, reducing to classical mixability
in a special case. The present paper presents a direct proof of fast rates for
ERM in terms of stochastic mixability of $(\ell,\mathcal{F}, \mathsf{P})$, and
in so doing provides new insight into the fast-rates phenomenon. The proof
exploits an old result of Kemperman on the solution to the general moment
problem. We also show a partial converse that suggests a characterization of
fast rates for ERM in terms of stochastic mixability is possible.
| [
"['Nishant A. Mehta' 'Robert C. Williamson']",
"Nishant A. Mehta and Robert C. Williamson"
]
|
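For reference, one common formulation of the stochastic mixability condition discussed above, in the abstract's notation ($f^*$ denotes the risk minimizer in $\mathcal{F}$); the precise definition used in the paper may differ in details.

```latex
% Eta-stochastic mixability of (loss, class, distribution): a hedged sketch of
% the standard condition; f^* minimizes the risk E[l(f,Z)] over F.
\[
  (\ell,\mathcal{F},\mathsf{P}) \text{ is } \eta\text{-stochastically mixable if}
  \quad
  \mathbb{E}_{Z\sim\mathsf{P}}\!\left[\exp\!\big(\eta\,(\ell(f^*,Z)-\ell(f,Z))\big)\right]\le 1
  \quad \text{for all } f\in\mathcal{F}.
\]
```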
cs.LG stat.AP | 10.1016/j.ijepes.2014.06.010 | 1406.3792 | null | null | http://arxiv.org/abs/1406.3792v1 | 2014-06-15T02:39:37Z | 2014-06-15T02:39:37Z | Interval Forecasting of Electricity Demand: A Novel Bivariate EMD-based
Support Vector Regression Modeling Framework | Highly accurate interval forecasting of electricity demand is fundamental to
the success of reducing the risk when making power system planning and
operational decisions by providing a range rather than point estimation. In
this study, a novel modeling framework integrating bivariate empirical mode
decomposition (BEMD) and support vector regression (SVR), extended from the
well-established empirical mode decomposition (EMD) based time series modeling
framework in the energy demand forecasting literature, is proposed for interval
forecasting of electricity demand. The novelty of this study lies in the use
of BEMD, a new extension of classical empirical mode decomposition (EMD)
designed to handle bivariate time series treated as complex-valued time
series, as the decomposition method, instead of classical EMD, which can only
decompose one-dimensional single-valued time series. BEMD allows the proposed
framework to simultaneously decompose the lower- and upper-bound time series
of monthly per-hour electricity demand, constructed as a complex-valued time
series, thereby capturing the potential interrelationship between the lower
and upper bounds. The proposed framework is validated on monthly per-hour
interval-valued electricity demand data from the Pennsylvania-New
Jersey-Maryland Interconnection, indicating that it is a promising method for
interval-valued electricity demand forecasting.
| [
"Tao Xiong, Yukun Bao, Zhongyi Hu",
"['Tao Xiong' 'Yukun Bao' 'Zhongyi Hu']"
]
|
cs.LG stat.ML | null | 1406.3816 | null | null | http://arxiv.org/pdf/1406.3816v1 | 2014-06-15T13:34:27Z | 2014-06-15T13:34:27Z | Simultaneous Model Selection and Optimization through Parameter-free
Stochastic Learning | Stochastic gradient descent algorithms for training linear and kernel
predictors are gaining more and more importance, thanks to their scalability.
While various methods have been proposed to speed up their convergence, the
model selection phase is often ignored. In fact, theoretical works often make
assumptions, for example, on prior knowledge of the norm of the optimal
solution, while in practice validation methods remain
the only viable approach. In this paper, we propose a new kernel-based
stochastic gradient descent algorithm that performs model selection while
training, with no parameters to tune, nor any form of cross-validation. The
algorithm builds on recent advances in online learning theory for
unconstrained settings to estimate over time the right regularization in a
data-dependent way. Optimal rates of convergence are proved under standard
smoothness assumptions on the target function, using the range space of the
fractional integral operator associated with the kernel.
| [
"Francesco Orabona",
"['Francesco Orabona']"
]
|
cs.CL cs.LG stat.ML | null | 1406.3830 | null | null | http://arxiv.org/pdf/1406.3830v1 | 2014-06-15T17:15:32Z | 2014-06-15T17:15:32Z | Modelling, Visualising and Summarising Documents with a Single
Convolutional Neural Network | Capturing the compositional process which maps the meaning of words to that
of documents is a central challenge for researchers in Natural Language
Processing and Information Retrieval. We introduce a model that is able to
represent the meaning of documents by embedding them in a low dimensional
vector space, while preserving distinctions of word and sentence order crucial
for capturing nuanced semantics. Our model is based on an extended Dynamic
Convolutional Neural Network, which learns convolution filters at both the
sentence and document level, hierarchically learning to capture and compose low
level lexical features into high level semantic concepts. We demonstrate the
effectiveness of this model on a range of document modelling tasks, achieving
strong results with no feature engineering and with a more compact model.
Inspired by recent advances in visualising deep convolution networks for
computer vision, we present a novel visualisation technique for our document
networks which not only provides insight into their learning process, but also
can be interpreted to produce a compelling automatic summarisation system for
texts.
| [
"Misha Denil and Alban Demiraj and Nal Kalchbrenner and Phil Blunsom\n and Nando de Freitas",
"['Misha Denil' 'Alban Demiraj' 'Nal Kalchbrenner' 'Phil Blunsom'\n 'Nando de Freitas']"
]
|
stat.ML cs.LG | null | 1406.3837 | null | null | http://arxiv.org/pdf/1406.3837v1 | 2014-06-15T18:30:51Z | 2014-06-15T18:30:51Z | An Incremental Reseeding Strategy for Clustering | In this work we propose a simple and easily parallelizable algorithm for
multiway graph partitioning. The algorithm alternates between three basic
components: diffusing seed vertices over the graph, thresholding the diffused
seeds, and then randomly reseeding the thresholded clusters. We demonstrate
experimentally that the proper combination of these ingredients leads to an
algorithm that achieves state-of-the-art performance in terms of cluster purity
on standard benchmark datasets. Moreover, the algorithm runs an order of
magnitude faster than the other algorithms that achieve comparable results in
terms of accuracy. We also describe a coarsen, cluster and refine approach
similar to GRACLUS and METIS that removes an additional order of magnitude from
the runtime of our algorithm while still maintaining competitive accuracy.
| [
"['Xavier Bresson' 'Huiyi Hu' 'Thomas Laurent' 'Arthur Szlam'\n 'James von Brecht']",
"Xavier Bresson, Huiyi Hu, Thomas Laurent, Arthur Szlam, and James von\n Brecht"
]
|
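The three ingredients named above (diffuse, threshold, reseed) compose a short loop. A minimal sketch on a toy two-block graph; the random-walk diffusion, per-cluster seed counts, and iteration budget are illustrative assumptions, and the paper's coarsen-cluster-refine variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def incres(W, k, seeds_per_cluster=2, diff_steps=3, iters=30):
    """Incremental reseeding sketch: plant seeds, diffuse, threshold, repeat."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)      # random-walk transition matrix
    labels = rng.integers(k, size=n)          # arbitrary initial clustering
    for _ in range(iters):
        F = np.zeros((n, k))
        for c in range(k):                    # reseed: random members of each cluster
            members = np.flatnonzero(labels == c)
            if len(members):
                seeds = rng.choice(members, size=min(seeds_per_cluster, len(members)),
                                   replace=False)
                F[seeds, c] = 1.0
        for _ in range(diff_steps):           # diffuse the seed indicators over the graph
            F = P.T @ F
        labels = F.argmax(axis=1)             # threshold: strongest diffused signal wins
    return labels

# toy graph: two dense blocks weakly connected
W = np.kron(np.eye(2), np.ones((5, 5))) + 0.01
np.fill_diagonal(W, 0)
print(incres(W, k=2))                         # expected: two contiguous groups of labels
```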
cs.LG | null | 1406.3840 | null | null | http://arxiv.org/pdf/1406.3840v1 | 2014-06-15T18:41:47Z | 2014-06-15T18:41:47Z | Optimal Resource Allocation with Semi-Bandit Feedback | We study a sequential resource allocation problem involving a fixed number of
recurring jobs. At each time-step the manager should distribute available
resources among the jobs in order to maximise the expected number of completed
jobs. Allocating more resources to a given job increases the probability that
it completes, but with a cut-off. Specifically, we assume a linear model where
the probability increases linearly until it equals one, after which allocating
additional resources is wasteful. We assume the difficulty of each job is
unknown, present the first algorithm for this problem, and prove upper and
lower bounds on its regret. Despite its apparent simplicity, the problem has a
rich structure: we show that an appropriate optimistic algorithm can improve
its learning speed dramatically beyond the results one normally expects for
similar problems as the problem becomes resource-laden.
| [
"Tor Lattimore and Koby Crammer and Csaba Szepesv\\'ari",
"['Tor Lattimore' 'Koby Crammer' 'Csaba Szepesvári']"
]
|
stat.CO cs.AI cs.LG | null | 1406.3843 | null | null | http://arxiv.org/pdf/1406.3843v1 | 2014-06-15T19:03:46Z | 2014-06-15T19:03:46Z | Semi-Separable Hamiltonian Monte Carlo for Inference in Bayesian
Hierarchical Models | Sampling from hierarchical Bayesian models is often difficult for MCMC
methods, because of the strong correlations between the model parameters and
the hyperparameters. Recent Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
methods have significant potential advantages in this setting, but are
computationally expensive. We introduce a new RMHMC method, which we call
semi-separable Hamiltonian Monte Carlo, which uses a specially designed mass
matrix that allows the joint Hamiltonian over model parameters and
hyperparameters to decompose into two simpler Hamiltonians. This structure is
exploited by a new integrator which we call the alternating blockwise leapfrog
algorithm. The resulting method can mix faster than simpler Gibbs sampling
while being simpler and more efficient than previous instances of RMHMC.
| [
"Yichuan Zhang, Charles Sutton",
"['Yichuan Zhang' 'Charles Sutton']"
]
|
stat.ML cs.LG stat.CO | null | 1406.3852 | null | null | http://arxiv.org/pdf/1406.3852v3 | 2015-05-27T08:25:19Z | 2014-06-15T19:23:11Z | A low variance consistent test of relative dependency | We describe a novel non-parametric statistical hypothesis test of relative
dependence between a source variable and two candidate target variables. Such a
test enables us to determine whether one source variable is significantly more
dependent on a first target variable or a second. Dependence is measured via
the Hilbert-Schmidt Independence Criterion (HSIC), resulting in a pair of
empirical dependence measures (source-target 1, source-target 2). We test
whether the first dependence measure is significantly larger than the second.
Modeling the covariance between these HSIC statistics leads to a provably more
powerful test than the construction of independent HSIC statistics by
sub-sampling. The resulting test is consistent and unbiased, and (being based
on U-statistics) has favorable convergence properties. The test can be computed
in quadratic time, matching the computational complexity of standard empirical
HSIC estimators. The effectiveness of the test is demonstrated on several
real-world problems: we identify language groups from a multilingual corpus,
and we prove that tumor location is more dependent on gene expression than
chromosomal imbalances. Source code is available for download at
https://github.com/wbounliphone/reldep.
| [
"Wacha Bounliphone, Arthur Gretton, Arthur Tenenhaus (E3S), Matthew\n Blaschko (INRIA Saclay - Ile de France, CVN)",
"['Wacha Bounliphone' 'Arthur Gretton' 'Arthur Tenenhaus'\n 'Matthew Blaschko']"
]
|
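The test statistic above is a difference of two empirical HSIC values sharing the same source variable. A minimal sketch of those quantities using biased V-statistic estimates and Gaussian kernels of fixed bandwidth (illustrative assumptions); the paper's actual test uses U-statistics and models the covariance between the two estimates to calibrate the null distribution.

```python
import numpy as np

def gauss_kernel(A, sigma=1.0):
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(K, L):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 1))
Y = X + 0.3 * rng.normal(size=(n, 1))          # strongly dependent on X
Z = rng.normal(size=(n, 1))                    # independent of X

Kx, Ky, Kz = map(gauss_kernel, (X, Y, Z))
stat = hsic(Kx, Ky) - hsic(Kx, Kz)             # positive: X more dependent on Y than Z
print(round(stat, 4))
```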
cs.SD cs.LG | null | 1406.3884 | null | null | http://arxiv.org/pdf/1406.3884v1 | 2014-06-16T02:03:29Z | 2014-06-16T02:03:29Z | Learning An Invariant Speech Representation | Recognition of speech, and in particular the ability to generalize and learn
from small sets of labelled examples like humans do, depends on an appropriate
representation of the acoustic input. We formulate the problem of finding
robust speech features for supervised learning with small sample complexity as
a problem of learning representations of the signal that are maximally
invariant to intraclass transformations and deformations. We propose an
extension of a theory for unsupervised learning of invariant visual
representations to the auditory domain and empirically evaluate its validity
for voiced speech sound classification. Our version of the theory requires the
memory-based, unsupervised storage of acoustic templates -- such as specific
phones or words -- together with all the transformations of each that normally
occur. A quasi-invariant representation for a speech segment can be obtained by
projecting it to each template orbit, i.e., the set of transformed signals, and
computing the associated one-dimensional empirical probability distributions.
The computations can be performed by modules of filtering and pooling, and
extended to hierarchical architectures. In this paper, we apply a single-layer,
multicomponent representation for phonemes and demonstrate improved accuracy
and decreased sample complexity for vowel classification compared to standard
spectral, cepstral and perceptual features.
| [
"['Georgios Evangelopoulos' 'Stephen Voinea' 'Chiyuan Zhang'\n 'Lorenzo Rosasco' 'Tomaso Poggio']",
"Georgios Evangelopoulos, Stephen Voinea, Chiyuan Zhang, Lorenzo\n Rosasco, Tomaso Poggio"
]
|
cs.LG stat.ME stat.ML | null | 1406.3895 | null | null | http://arxiv.org/pdf/1406.3895v1 | 2014-06-16T03:29:48Z | 2014-06-16T03:29:48Z | The Laplacian K-modes algorithm for clustering | In addition to finding meaningful clusters, centroid-based clustering
algorithms such as K-means or mean-shift should ideally find centroids that are
valid patterns in the input space, representative of data in their cluster.
This is challenging with data having a nonconvex or manifold structure, as with
images or text. We introduce a new algorithm, Laplacian K-modes, which
naturally combines three powerful ideas in clustering: the explicit use of
assignment variables (as in K-means); the estimation of cluster centroids which
are modes of each cluster's density estimate (as in mean-shift); and the
regularizing effect of the graph Laplacian, which encourages similar
assignments for nearby points (as in spectral clustering). The optimization
algorithm alternates an assignment step, which is a convex quadratic program,
and a mean-shift step, which separates for each cluster centroid. The algorithm
finds meaningful density estimates for each cluster, even with challenging
problems where the clusters have manifold structure, are highly nonconvex or in
high dimension. It also provides centroids that are valid patterns, truly
representative of their cluster (unlike K-means), and an out-of-sample mapping
that predicts soft assignments for a new point.
| [
"['Weiran Wang' 'Miguel Á. Carreira-Perpiñán']",
"Weiran Wang and Miguel \\'A. Carreira-Perpi\\~n\\'an"
]
|
stat.ML cs.LG | null | 1406.3896 | null | null | http://arxiv.org/pdf/1406.3896v1 | 2014-06-16T03:43:20Z | 2014-06-16T03:43:20Z | Freeze-Thaw Bayesian Optimization | In this paper we develop a dynamic form of Bayesian optimization for machine
learning models with the goal of rapidly finding good hyperparameter settings.
Our method uses the partial information gained during the training of a machine
learning model in order to decide whether to pause training and start a new
model, or resume the training of a previously-considered model. We specifically
tailor our method to machine learning problems by developing a novel
positive-definite covariance kernel to capture a variety of training curves.
Furthermore, we develop a Gaussian process prior that scales gracefully with
additional temporal observations. Finally, we provide an information-theoretic
framework to automate the decision process. Experiments on several common
machine learning models show that our approach is extremely effective in
practice.
| [
"['Kevin Swersky' 'Jasper Snoek' 'Ryan Prescott Adams']",
"Kevin Swersky and Jasper Snoek and Ryan Prescott Adams"
]
|
cs.LG stat.ML | null | 1406.3922 | null | null | http://arxiv.org/pdf/1406.3922v2 | 2014-06-30T08:29:19Z | 2014-06-16T07:14:26Z | Personalized Medical Treatments Using Novel Reinforcement Learning
Algorithms | In both the fields of computer science and medicine there is very strong
interest in developing personalized treatment policies for patients who have
variable responses to treatments. In particular, I aim to find an optimal
personalized treatment policy which is a non-deterministic function of the
patient specific covariate data that maximizes the expected survival time or
clinical outcome. I developed an algorithmic framework to solve multistage
decision problems with a varying number of stages that are subject to
censoring, in which the "rewards" are expected survival times. Specifically, I
developed a novel Q-learning algorithm that dynamically adjusts for these
parameters. Furthermore, I found finite upper bounds on the generalization
error of the
treatment paths constructed by this algorithm. I have also shown that when the
optimal Q-function is an element of the approximation space, the anticipated
survival times for the treatment regime constructed by the algorithm will
converge to the optimal treatment path. I demonstrated the performance of the
proposed algorithmic framework via simulation studies and through the analysis
of chronic depression data and a hypothetical clinical trial. The censored
Q-learning algorithm I developed is more effective than state-of-the-art
clinical decision support systems and is able to operate in environments where
many covariate parameters may be unobtainable or censored.
| [
"['Yousuf M. Soliman']",
"Yousuf M. Soliman"
]
|
cs.LG stat.ML | null | 1406.3926 | null | null | http://arxiv.org/pdf/1406.3926v1 | 2014-06-16T08:04:42Z | 2014-06-16T08:04:42Z | Bayesian Optimal Control of Smoothly Parameterized Systems: The Lazy
Posterior Sampling Algorithm | We study Bayesian optimal control of a general class of smoothly
parameterized Markov decision problems. Since computing the optimal control is
computationally expensive, we design an algorithm that trades off performance
for computational efficiency. The algorithm is a lazy posterior sampling method
that maintains a distribution over the unknown parameter. The algorithm changes
its policy only when the variance of the distribution is reduced sufficiently.
Importantly, we analyze the algorithm and show the precise nature of the
performance vs. computation tradeoff. Finally, we show the effectiveness of the
method on a web server control application.
| [
"['Yasin Abbasi-Yadkori' 'Csaba Szepesvari']",
"Yasin Abbasi-Yadkori and Csaba Szepesvari"
]
|
cs.CV cs.LG | null | 1406.4112 | null | null | http://arxiv.org/pdf/1406.4112v2 | 2015-06-03T09:53:18Z | 2014-06-16T19:40:52Z | Semantic Graph for Zero-Shot Learning | Zero-shot learning aims to classify visual objects without any training data
via knowledge transfer between seen and unseen classes. This is typically
achieved by exploring a semantic embedding space where the seen and unseen
classes can be related. Previous works differ in what embedding space is used
and how different classes and a test image can be related. In this paper, we
utilize the annotation-free semantic word space for the former and focus on
solving the latter issue of modeling relatedness. Specifically, in contrast to
previous work, which ignores the semantic relationships between seen classes
and focuses merely on those between seen and unseen classes, in this paper a
novel approach based on a semantic graph is proposed to represent the
relationships between all the seen and unseen classes in a semantic word
space. Based on this
semantic graph, we design a special absorbing Markov chain process, in which
each unseen class is viewed as an absorbing state. After incorporating one test
image into the semantic graph, the absorbing probabilities from the test data
to each unseen class can be effectively computed; and zero-shot classification
can be achieved by finding the class label with the highest absorbing
probability. The proposed model has a closed-form solution which is linear with
respect to the number of test images. We demonstrate the effectiveness and
computational efficiency of the proposed method over the state of the art on
the AwA (animals with attributes) dataset.
| [
"['Zhen-Yong Fu' 'Tao Xiang' 'Shaogang Gong']",
"Zhen-Yong Fu, Tao Xiang, Shaogang Gong"
]
|
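The classification rule above reduces to a standard absorbing-Markov-chain computation: with transient states (seen classes plus the test image) governed by a sub-matrix Q and absorbing states (unseen classes) reached via R, the absorbing probabilities are B = (I - Q)^{-1} R. A minimal sketch on a tiny hand-built graph; the numbers are illustrative assumptions, not the paper's word-vector construction.

```python
import numpy as np

# transient: [seen_1, seen_2, test_image]; absorbing: [unseen_1, unseen_2]
Q = np.array([[0.0, 0.3, 0.2],     # transitions among transient states
              [0.3, 0.0, 0.2],
              [0.4, 0.1, 0.0]])
R = np.array([[0.4, 0.1],          # transitions into the absorbing (unseen) classes
              [0.1, 0.4],
              [0.3, 0.2]])

B = np.linalg.solve(np.eye(3) - Q, R)   # absorbing probabilities, one row per transient state
pred = B[-1].argmax()                   # classify the test image
print(np.round(B[-1], 3), "-> unseen class", pred)
```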
cs.LG quant-ph | null | 1406.4203 | null | null | http://arxiv.org/pdf/1406.4203v1 | 2014-06-17T00:53:59Z | 2014-06-17T00:53:59Z | Construction of non-convex polynomial loss functions for training a
binary classifier with quantum annealing | Quantum annealing is a heuristic quantum algorithm which exploits quantum
resources to minimize an objective function embedded as the energy levels of a
programmable physical system. To take advantage of a potential quantum
advantage, one needs to be able to map the problem of interest to the native
hardware with reasonably low overhead. Because experimental considerations
constrain our objective function to take the form of a low degree PUBO
(polynomial unconstrained binary optimization), we employ non-convex loss
functions which are polynomial functions of the margin. We show that these loss
functions are robust to label noise and provide a clear advantage over convex
methods. These loss functions may also be useful for classical approaches as
they compile to regularized risk expressions which can be evaluated in constant
time with respect to the number of training examples.
| [
"['Ryan Babbush' 'Vasil Denchev' 'Nan Ding' 'Sergei Isakov' 'Hartmut Neven']",
"Ryan Babbush, Vasil Denchev, Nan Ding, Sergei Isakov and Hartmut Neven"
]
|
cs.CV cs.LG | null | 1406.4296 | null | null | http://arxiv.org/pdf/1406.4296v2 | 2014-06-18T12:33:22Z | 2014-06-17T09:51:18Z | Self-Learning Camera: Autonomous Adaptation of Object Detectors to
Unlabeled Video Streams | Learning object detectors requires massive amounts of labeled training
samples from the specific data source of interest. This is impractical when
dealing with many different sources (e.g., in camera networks), or constantly
changing ones such as mobile cameras (e.g., in robotics or driving assistant
systems). In this paper, we address the problem of self-learning detectors in
an autonomous manner, i.e. (i) detectors continuously updating themselves to
efficiently adapt to streaming data sources (contrary to transductive
algorithms), (ii) without any labeled data strongly related to the target data
stream (contrary to self-paced learning), and (iii) without manual intervention
to set and update hyper-parameters. To that end, we propose an unsupervised,
on-line, and self-tuning learning algorithm to optimize a multi-task learning
convex objective. Our method uses confident but laconic oracles (high-precision
but low-recall off-the-shelf generic detectors), and exploits the structure of
the problem to jointly learn on-line an ensemble of instance-level trackers,
from which we derive an adapted category-level object detector. Our approach is
validated on real-world publicly available video object datasets.
| [
"Adrien Gaidon (Xerox Research Center Europe, France), Gloria Zen\n (University of Trento, Italy), Jose A. Rodriguez-Serrano (Xerox Research\n Center Europe, France)",
"['Adrien Gaidon' 'Gloria Zen' 'Jose A. Rodriguez-Serrano']"
]
|
stat.ML cs.LG | null | 1406.4363 | null | null | http://arxiv.org/abs/1406.4363v2 | 2015-06-09T09:15:47Z | 2014-06-17T13:38:49Z | Distributed Stochastic Optimization of the Regularized Risk | Many machine learning algorithms minimize a regularized risk, and stochastic
optimization is widely used for this task. When working with massive data, it
is desirable to perform stochastic optimization in parallel. Unfortunately,
many existing stochastic optimization algorithms cannot be parallelized
efficiently. In this paper we show that one can rewrite the regularized risk
minimization problem as an equivalent saddle-point problem, and propose an
efficient distributed stochastic optimization (DSO) algorithm. We prove the
algorithm's rate of convergence; remarkably, our analysis shows that the
algorithm scales almost linearly with the number of processors. We also verify
with empirical evaluations that the proposed algorithm is competitive with
other parallel, general purpose stochastic and batch optimization algorithms
for regularized risk minimization.
| [
"Shin Matsushima, Hyokun Yun, Xinhua Zhang, S.V.N. Vishwanathan",
"['Shin Matsushima' 'Hyokun Yun' 'Xinhua Zhang' 'S. V. N. Vishwanathan']"
]
|
cs.CV cs.LG stat.ML | null | 1406.4444 | null | null | http://arxiv.org/pdf/1406.4444v4 | 2015-05-08T01:55:13Z | 2014-06-13T20:07:27Z | PRISM: Person Re-Identification via Structured Matching | Person re-identification (re-id), an emerging problem in visual surveillance,
deals with maintaining entities of individuals whilst they traverse various
locations surveilled by a camera network. From a visual perspective re-id is
challenging due to significant changes in visual appearance of individuals in
cameras with different pose, illumination and calibration. Globally the
challenge arises from the need to maintain structurally consistent matches
among all the individual entities across different camera views. We propose
PRISM, a structured matching method to jointly account for these challenges. We
view the global problem as a weighted graph matching problem and estimate edge
weights by learning to predict them based on the co-occurrences of visual
patterns in the training examples. These co-occurrence based scores in turn
account for appearance changes by inferring likely and unlikely visual
co-occurrences appearing in training instances. We implement PRISM on single
shot and multi-shot scenarios. PRISM uniformly outperforms state-of-the-art in
terms of matching rate while being computationally efficient.
| [
"Ziming Zhang and Venkatesh Saligrama",
"['Ziming Zhang' 'Venkatesh Saligrama']"
]
|
stat.ML cs.LG math.OC | null | 1406.4445 | null | null | http://arxiv.org/pdf/1406.4445v2 | 2014-06-18T10:05:43Z | 2014-06-13T20:08:58Z | RAPID: Rapidly Accelerated Proximal Gradient Algorithms for Convex
Minimization | In this paper, we propose a new algorithm to speed-up the convergence of
accelerated proximal gradient (APG) methods. In order to minimize a convex
function $f(\mathbf{x})$, our algorithm introduces a simple line search step
after each proximal gradient step in APG so that a biconvex function
$f(\theta\mathbf{x})$ is minimized over scalar variable $\theta>0$ while fixing
variable $\mathbf{x}$. We propose two new ways of constructing the auxiliary
variables in APG based on the intermediate solutions of the proximal gradient
and the line search steps. We prove that at arbitrary iteration step $t
(t\geq1)$, our algorithm can achieve a smaller upper-bound for the gap between
the current and optimal objective values than those in the traditional APG
methods such as FISTA, making it converge faster in practice. In fact, our
algorithm can be potentially applied to many important convex optimization
problems, such as sparse linear regression and kernel SVMs. Our experimental
results clearly demonstrate that our algorithm converges faster than APG in all
of the applications above, even comparable to some sophisticated solvers.
| [
"Ziming Zhang and Venkatesh Saligrama",
"['Ziming Zhang' 'Venkatesh Saligrama']"
]
|
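The line-search idea above can be grafted onto any proximal gradient loop. A minimal sketch using plain ISTA for the lasso rather than the paper's accelerated APG variants, with a grid search standing in for the exact scalar minimization over $\theta$; the problem data, step size, and $\theta$ grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 60))
x_true = np.zeros(60)
x_true[:5] = 3.0
y = A @ x_true + 0.01 * rng.normal(size=40)
lam = 0.1

def obj(x):
    return 0.5 * np.sum((y - A @ x) ** 2) + lam * np.abs(x).sum()

def soft(v, t):                                  # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(60)
for _ in range(200):
    x = soft(x - step * A.T @ (A @ x - y), lam * step)   # proximal gradient step
    thetas = np.linspace(0.5, 1.5, 21)                   # scalar line search over theta
    x = x * thetas[np.argmin([obj(t * x) for t in thetas])]
print(round(obj(x), 3), int((np.abs(x) > 1e-6).sum()))   # objective value, support size
```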
null | null | 1406.4465 | null | null | http://arxiv.org/pdf/1406.4465v2 | 2015-06-02T19:47:37Z | 2014-06-16T12:47:37Z | Multi-stage Multi-task feature learning via adaptive threshold | Multi-task feature learning aims to identify the shared features among tasks
to improve generalization. It has been shown that by minimizing non-convex
learning models, a better solution than the convex alternatives can be
obtained. Therefore, a non-convex model based on the
capped-$\ell_{1},\ell_{1}$ regularization was proposed in \cite{Gong2013},
and a corresponding efficient multi-stage multi-task feature learning
algorithm (MSMTFL) was presented. However, this algorithm harnesses a
prescribed fixed threshold in the definition of the
capped-$\ell_{1},\ell_{1}$ regularization and the lack of adaptivity might
result in suboptimal performance. In this paper we propose to employ an
adaptive threshold in the capped-$\ell_{1},\ell_{1}$ regularized formulation,
where the corresponding variant of MSMTFL will incorporate an additional
component to adaptively determine the threshold value. This variant is
expected to achieve a better feature selection performance over the original
MSMTFL algorithm. In particular, the embedded adaptive threshold component
comes from our previously proposed iterative support detection (ISD) method
\cite{Wang2010}. Empirical studies on both synthetic and real-world data sets
demonstrate the effectiveness of this new variant over the original MSMTFL. | [
"['Yaru Fan' 'Yilun Wang']"
]
|
cs.CL cs.LG stat.ML | 10.1109/TSP.2015.2451111 | 1406.4469 | null | null | http://arxiv.org/abs/1406.4469v1 | 2014-06-17T18:32:18Z | 2014-06-17T18:32:18Z | Authorship Attribution through Function Word Adjacency Networks | A method for authorship attribution based on function word adjacency networks
(WANs) is introduced. Function words are parts of speech that express
grammatical relationships between other words but do not carry lexical meaning
on their own. In the WANs in this paper, nodes are function words and directed
edges stand in for the likelihood of finding the sink word in the ordered
vicinity of the source word. WANs of different authors can be interpreted as
transition probabilities of a Markov chain and are therefore compared in terms
of their relative entropies. Optimal selection of WAN parameters is studied and
attribution accuracy is benchmarked across a diverse pool of authors and
varying text lengths. This analysis shows that, since function words are
independent of content, their use tends to be specific to an author and that
the relational data captured by function WANs is a good summary of stylometric
fingerprints. Attribution accuracy is observed to exceed that achieved by
methods that rely on word frequencies alone. Further combining WANs with
methods that rely on word frequencies alone results in larger attribution
accuracy, indicating that both sources of information encode different aspects
of authorial styles.
| [
"Santiago Segarra, Mark Eisen, Alejandro Ribeiro",
"['Santiago Segarra' 'Mark Eisen' 'Alejandro Ribeiro']"
]
|
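A function word adjacency network is a directed graph of co-occurrence counts normalized into Markov transition probabilities. A minimal sketch with a toy function word list, window size, smoothing, and an averaged row-wise KL divergence as the relative entropy; all of these choices are illustrative assumptions that the paper studies carefully.

```python
import numpy as np

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "is"]
IDX = {w: i for i, w in enumerate(FUNCTION_WORDS)}

def wan(text, window=4, eps=1e-3):
    tokens = text.lower().split()
    C = np.full((len(IDX), len(IDX)), eps)           # smoothed adjacency counts
    for i, src in enumerate(tokens):
        if src not in IDX:
            continue
        for tgt in tokens[i + 1:i + 1 + window]:     # ordered vicinity of the source word
            if tgt in IDX:
                C[IDX[src], IDX[tgt]] += 1
    return C / C.sum(axis=1, keepdims=True)          # rows as transition probabilities

def rel_entropy(P, Q):
    return float((P * np.log(P / Q)).sum(axis=1).mean())   # averaged row-wise KL

a = wan("the cat sat on the mat and the dog is in a corner of the room")
b = wan("a storm is coming to the coast and the town that is in its path waits")
print(rel_entropy(a, a), rel_entropy(a, b))   # 0.0 vs. a positive divergence
```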
cs.AI cs.LG stat.ML | null | 1406.4472 | null | null | http://arxiv.org/pdf/1406.4472v2 | 2014-06-18T10:38:35Z | 2014-06-17T18:41:19Z | Notes on hierarchical ensemble methods for DAG-structured taxonomies | Several real problems ranging from text classification to computational
biology are characterized by hierarchical multi-label classification tasks.
Most of the methods presented in the literature have focused on
tree-structured taxonomies, but only a few on taxonomies structured according
to a Directed Acyclic Graph (DAG). In this contribution, novel classification
ensemble algorithms for DAG-structured taxonomies are introduced. In
particular,
Hierarchical Top-Down (HTD-DAG) and True Path Rule (TPR-DAG) for DAGs are
presented and discussed.
| [
"Giorgio Valentini",
"['Giorgio Valentini']"
]
|
cs.LG stat.ML | null | 1406.4566 | null | null | http://arxiv.org/pdf/1406.4566v4 | 2019-12-17T19:49:48Z | 2014-06-18T01:17:27Z | Guaranteed Scalable Learning of Latent Tree Models | We present an integrated approach for structure and parameter estimation in
latent tree graphical models. Our overall approach follows a
"divide-and-conquer" strategy that learns models over small groups of variables
and iteratively merges onto a global solution. The structure learning involves
combinatorial operations such as minimum spanning tree construction and local
recursive grouping; the parameter learning is based on the method of moments
and on tensor decompositions. Our method is guaranteed to correctly recover the
unknown tree structure and the model parameters with low sample complexity for
the class of linear multivariate latent tree models which includes discrete and
Gaussian distributions, and Gaussian mixtures. Our bulk asynchronous parallel
algorithm is implemented so that the parallel computation complexity
increases only logarithmically with the number of variables and linearly with
the dimensionality of each variable.
| [
"Furong Huang, Niranjan U.N., Ioakeim Perros, Robert Chen, Jimeng Sun,\n Anima Anandkumar",
"['Furong Huang' 'Niranjan U. N.' 'Ioakeim Perros' 'Robert Chen'\n 'Jimeng Sun' 'Anima Anandkumar']"
]
|
stat.ML cs.DC cs.LG | null | 1406.4580 | null | null | http://arxiv.org/pdf/1406.4580v1 | 2014-06-18T03:06:52Z | 2014-06-18T03:06:52Z | Primitives for Dynamic Big Model Parallelism | When training large machine learning models with many variables or
parameters, a single machine is often inadequate since the model may be too
large to fit in memory, while training can take a long time even with
stochastic updates. A natural recourse is to turn to distributed cluster
computing, in order to harness additional memory and processors. However,
naive, unstructured parallelization of ML algorithms can make inefficient use
of distributed memory, while failing to obtain proportional convergence
speedups - or can even result in divergence. We develop a framework of
primitives for dynamic model-parallelism, STRADS, in order to explore
partitioning and update scheduling of model variables in distributed ML
algorithms - thus improving their memory efficiency while presenting new
opportunities to speed up convergence without compromising inference
correctness. We demonstrate the efficacy of model-parallel algorithms
implemented in STRADS versus popular implementations for Topic Modeling, Matrix
Factorization and Lasso.
| [
"Seunghak Lee, Jin Kyu Kim, Xun Zheng, Qirong Ho, Garth A. Gibson, Eric\n P. Xing",
"['Seunghak Lee' 'Jin Kyu Kim' 'Xun Zheng' 'Qirong Ho' 'Garth A. Gibson'\n 'Eric P. Xing']"
]
|
cs.NA cs.LG cs.NE | null | 1406.4619 | null | null | http://arxiv.org/pdf/1406.4619v1 | 2014-06-18T06:58:38Z | 2014-06-18T06:58:38Z | A Generalized Markov-Chain Modelling Approach to $(1,\lambda)$-ES Linear
Optimization: Technical Report | Several recent publications investigated Markov-chain modelling of linear
optimization by a $(1,\lambda)$-ES, considering both unconstrained and linearly
constrained optimization, and both constant and varying step size. All of them
assume normality of the involved random steps, and while this is consistent
with a black-box scenario, information on the function to be optimized (e.g.
separability) may be exploited by the use of another distribution. The
objective of our contribution is to complement previous studies realized with
normal steps, and to give sufficient conditions on the distribution of the
random steps for the success of a constant step-size $(1,\lambda)$-ES on the
simple problem of a linear function with a linear constraint. The decomposition
of a multidimensional distribution into its marginals and the copula combining
them is applied to the new distributional assumptions, particular attention
being paid to distributions with Archimedean copulas.
| [
"Alexandre Chotard (INRIA Saclay - Ile de France, LRI), Martin Holena",
"['Alexandre Chotard' 'Martin Holena']"
]
|
stat.ML cs.LG | null | 1406.4625 | null | null | http://arxiv.org/pdf/1406.4625v4 | 2015-03-04T21:25:31Z | 2014-06-18T07:26:08Z | An Entropy Search Portfolio for Bayesian Optimization | Bayesian optimization is a sample-efficient method for black-box global
optimization. However, the performance of a Bayesian optimization method very
much depends on its exploration strategy, i.e. the choice of acquisition
function, and it is not clear a priori which choice will result in superior
performance. While portfolio methods provide an effective, principled way of
combining a collection of acquisition functions, they are often based on
measures of past performance which can be misleading. To address this issue, we
introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio
construction which is motivated by information theoretic considerations. We
show that ESP outperforms existing portfolio methods on several real and
synthetic problems, including geostatistical datasets and simulated control
tasks. We show not only that ESP is able to offer performance as good as the
best, but unknown, acquisition function, but that, surprisingly, it often
gives better performance. Finally, over a wide range of conditions we find that ESP is
robust to the inclusion of poor acquisition functions.
| [
"['Bobak Shahriari' 'Ziyu Wang' 'Matthew W. Hoffman'\n 'Alexandre Bouchard-Côté' 'Nando de Freitas']",
"Bobak Shahriari and Ziyu Wang and Matthew W. Hoffman and Alexandre\n Bouchard-C\\^ot\\'e and Nando de Freitas"
]
|
cs.LG | null | 1406.4631 | null | null | http://arxiv.org/pdf/1406.4631v1 | 2014-06-18T08:25:03Z | 2014-06-18T08:25:03Z | A Sober Look at Spectral Learning | Spectral learning recently generated lots of excitement in machine learning,
largely because it is the first known method to produce consistent estimates
(under suitable conditions) for several latent variable models. In contrast,
maximum likelihood estimates may get trapped in local optima due to the
non-convex nature of the likelihood function of latent variable models. In this
paper, we do an empirical evaluation of spectral learning (SL) and expectation
maximization (EM), which reveals an important gap between the theory and the
practice. First, SL often leads to negative probabilities. Second, EM often
yields better estimates than spectral learning and it does not seem to get
stuck in local optima. We discuss how the rank of the model parameters and the
amount of training data can yield negative probabilities. We also question the
common belief that maximum likelihood estimators are necessarily inconsistent.
| [
"['Han Zhao' 'Pascal Poupart']",
"Han Zhao, Pascal Poupart"
]
|
cs.AI cs.CC cs.LG | null | 1406.4682 | null | null | http://arxiv.org/pdf/1406.4682v1 | 2014-06-18T11:17:58Z | 2014-06-18T11:17:58Z | Exact Decoding on Latent Variable Conditional Models is NP-Hard | Latent variable conditional models, including the latent conditional random
fields as a special case, are popular models for many natural language
processing and vision processing tasks. The computational complexity of the
exact decoding/inference in latent conditional random fields is unclear. In
this paper, we try to clarify the computational complexity of the exact
decoding. We analyze the complexity and demonstrate that it is an NP-hard
problem even on a sequential labeling setting. Furthermore, we propose the
latent-dynamic inference (LDI-Naive) method and its bounded version
(LDI-Bounded), which are able to perform exact inference or
almost-exact inference by using top-$n$ search and dynamic programming.
| [
"['Xu Sun']",
"Xu Sun"
]
|
cs.LG | null | 1406.4757 | null | null | http://arxiv.org/pdf/1406.4757v1 | 2014-06-18T15:09:21Z | 2014-06-18T15:09:21Z | An Experimental Evaluation of Nearest Neighbour Time Series
Classification | Data mining research into time series classification (TSC) has focussed on
alternative distance measures for nearest neighbour classifiers. It is standard
practice to use 1-NN with Euclidean or dynamic time warping (DTW) distance as a
straw man for comparison. As part of a wider investigation into elastic
distance measures for TSC~\cite{lines14elastic}, we perform a series of
experiments to test whether this standard practice is valid.
Specifically, we compare 1-NN classifiers with Euclidean and DTW distance to
standard classifiers, examine whether the performance of 1-NN Euclidean
approaches that of 1-NN DTW as the number of cases increases, assess whether
there is any benefit of setting $k$ for $k$-NN through cross validation,
whether it is worth setting the warping path for DTW through cross
validation, and finally whether it is better to use a window or weighting for
DTW. Based on experiments
on 77 problems, we conclude that 1-NN with Euclidean distance is fairly easy to
beat but 1-NN with DTW is not, if window size is set through cross validation.
| [
"Anthony Bagnall and Jason Lines",
"['Anthony Bagnall' 'Jason Lines']"
]
|
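The 1-NN/DTW straw man discussed above is compact enough to sketch in full. A minimal version with an optional Sakoe-Chiba-style warping window; setting the window through cross validation, which the paper finds essential, is left out for brevity.

```python
import numpy as np

def dtw(a, b, window=None):
    n, m = len(a), len(b)
    w = max(window or max(n, m), abs(n - m))         # warping window constraint
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def one_nn(train_X, train_y, x, window=None):
    dists = [dtw(x, t, window) for t in train_X]     # nearest neighbour under DTW
    return train_y[int(np.argmin(dists))]

t = np.linspace(0, 2 * np.pi, 40)
train_X = [np.sin(t), np.sin(t + 0.3), np.cos(t), np.cos(t + 0.3)]
train_y = ["sine", "sine", "cosine", "cosine"]
print(one_nn(train_X, train_y, np.sin(t + 0.15), window=5))   # -> "sine"
```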
cs.LG physics.med-ph | null | 1406.4781 | null | null | http://arxiv.org/pdf/1406.4781v1 | 2014-06-18T16:13:10Z | 2014-06-18T16:13:10Z | Predictive Modelling of Bone Age through Classification and Regression
of Bone Shapes | Bone age assessment is a task performed daily in hospitals worldwide. This
involves a clinician estimating the age of a patient from a radiograph of the
non-dominant hand.
Our approach to automated bone age assessment is to modularise the algorithm
into the following three stages: segment and verify hand outline; segment and
verify bones; use the bone outlines to construct models of age. In this paper
we address the final question: given outlines of bones, can we learn how to
predict the bone age of the patient? We examine two alternative approaches.
Firstly, we attempt to train classifiers on individual bones to predict the
bone stage categories commonly used in bone ageing. Secondly, we construct
regression models to directly predict patient age.
We demonstrate that models built on summary features of the bone outline
perform better than those built using the one dimensional representation of the
outline, and also do at least as well as other automated systems. We show that
models constructed on just three bones are as accurate at predicting age as
expert human assessors using the standard technique. We also demonstrate the
utility of the model by quantifying the importance of ethnicity and sex on age
development. Our conclusion is that the feature based system of separating the
image processing from the age modelling is the best approach for automated bone
ageing, since it offers flexibility and transparency and produces accurate
estimates.
| [
"Anthony Bagnall and Luke Davis",
"['Anthony Bagnall' 'Luke Davis']"
]
|
stat.ME cs.DS cs.IR cs.LG | null | 1406.4784 | null | null | http://arxiv.org/pdf/1406.4784v1 | 2014-06-18T16:16:22Z | 2014-06-18T16:16:22Z | Improved Densification of One Permutation Hashing | The existing work on densification of one permutation hashing reduces the
query processing cost of the $(K,L)$-parameterized Locality Sensitive Hashing
(LSH) algorithm with minwise hashing, from $O(dKL)$ to merely $O(d + KL)$,
where $d$ is the number of nonzeros of the data vector, $K$ is the number of
hashes in each hash table, and $L$ is the number of hash tables. While that is
a substantial improvement, our analysis reveals that the existing densification
scheme is sub-optimal. In particular, there is not enough randomness in that
procedure, which affects its accuracy on very sparse datasets.
In this paper, we provide a new densification procedure which is provably
better than the existing scheme. This improvement is more significant for very
sparse datasets which are common over the web. The improved technique has the
same cost of $O(d + KL)$ for query processing, thereby making it strictly
preferable over the existing procedure. Experimental evaluations on public
datasets, in the task of hashing based near neighbor search, support our
theoretical findings.
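To make the setting concrete, here is a hedged sketch of one permutation hashing with a rotation-style densification; the bin layout is standard, while the per-bin random choice of borrowing direction only conveys the flavour of the extra randomness the paper injects (offsets that disambiguate borrowed values are omitted):

```python
import random

def one_permutation_hash(nonzeros, d, k, perm):
    """Permute the universe [0, d) once, split it into k equal bins,
    and keep the minimum permuted value in each bin (None = empty bin).
    Assumes k divides d and nonzeros is non-empty."""
    width = d // k
    bins = [None] * k
    for idx in nonzeros:
        v = perm[idx]
        b, offset = v // width, v % width
        if bins[b] is None or offset < bins[b]:
            bins[b] = offset
    return bins

def densify(bins, seed=0):
    """Fill each empty bin from a non-empty neighbour, choosing the
    probing direction by an independent random bit per bin."""
    rng, k, out = random.Random(seed), len(bins), list(bins)
    for b in range(k):
        if out[b] is not None:
            continue
        direction = 1 if rng.random() < 0.5 else -1
        step = 1
        while bins[(b + direction * step) % k] is None:
            step += 1
        out[b] = bins[(b + direction * step) % k]
    return out

# perm = list(range(d)); random.shuffle(perm)
# Bin-wise collisions of two densified sketches estimate the Jaccard
# similarity of the underlying sets, as in minwise hashing.
```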
| [
"['Anshumali Shrivastava' 'Ping Li']",
"Anshumali Shrivastava and Ping Li"
]
|
cs.NA cs.LG | 10.1109/TSP.2015.2421476 | 1406.4802 | null | null | http://arxiv.org/abs/1406.4802v2 | 2015-03-18T16:37:16Z | 2014-01-31T22:26:17Z | Homotopy based algorithms for $\ell_0$-regularized least-squares | Sparse signal restoration is usually formulated as the minimization of a
quadratic cost function $\|y-Ax\|_2^2$, where $A$ is a dictionary and $x$ is an
unknown sparse vector. It is well-known that imposing an $\ell_0$ constraint
leads to an NP-hard minimization problem. The convex relaxation approach has
received considerable attention, where the $\ell_0$-norm is replaced by the
$\ell_1$-norm. Among the many efficient $\ell_1$ solvers, the homotopy
algorithm minimizes $\|y-Ax\|_2^2+\lambda\|x\|_1$ with respect to $x$ for a
continuum of $\lambda$'s. It is inspired by the piecewise regularity of the
$\ell_1$-regularization path, also referred to as the homotopy path. In this
paper, we address the minimization problem $\|y-Ax\|_2^2+\lambda\|x\|_0$ for a
continuum of $\lambda$'s and propose two heuristic search algorithms for
$\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward
greedy strategy extending the Single Best Replacement algorithm, previously
proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of
the $\lambda$-values is inspired by $\ell_1$-homotopy. $\ell_0$ Regularization
Path Descent is a more complex algorithm exploiting the structural properties
of the $\ell_0$-regularization path, which is piecewise constant with respect
to $\lambda$. Both algorithms are empirically evaluated for difficult inverse
problems involving ill-conditioned dictionaries. Finally, we show that they can
be easily coupled with usual methods of model order selection.
| [
"['Charles Soussen' 'Jérôme Idier' 'Junbo Duan' 'David Brie']",
"Charles Soussen, J\\'er\\^ome Idier, Junbo Duan, David Brie"
]
|
cs.IR cs.LG cs.SD | 10.1109/LSP.2014.2347582 | 1406.4877 | null | null | http://arxiv.org/abs/1406.4877v1 | 2014-06-18T20:10:22Z | 2014-06-18T20:10:22Z | On the Application of Generic Summarization Algorithms to Music | Several generic summarization algorithms were developed in the past and
successfully applied in fields such as text and speech summarization. In this
paper, we review and apply these algorithms to music. To evaluate this
summarization's performance, we adopt an extrinsic approach: we compare a Fado
Genre Classifier's performance using truncated contiguous clips against the
summaries extracted with those algorithms on two different datasets. We show that
Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA)
all improve classification performance in both datasets used for testing.
| [
"Francisco Raposo, Ricardo Ribeiro, David Martins de Matos",
"['Francisco Raposo' 'Ricardo Ribeiro' 'David Martins de Matos']"
]
|
cs.LG cs.RO cs.SY stat.ML | null | 1406.4905 | null | null | http://arxiv.org/pdf/1406.4905v2 | 2014-11-03T08:17:59Z | 2014-06-18T22:16:27Z | Variational Gaussian Process State-Space Models | State-space models have been successfully used for more than fifty years in
different areas of science and engineering. We present a procedure for
efficient variational Bayesian learning of nonlinear state-space models based
on sparse Gaussian processes. The result of learning is a tractable posterior
over nonlinear dynamical systems. In comparison to conventional parametric
models, we offer the possibility to straightforwardly trade off model capacity
and computational cost whilst avoiding overfitting. Our main algorithm uses a
hybrid inference approach combining variational Bayes and sequential Monte
Carlo. We also present stochastic variational inference and online learning
approaches for fast learning with long time series.
| [
"Roger Frigola and Yutian Chen and Carl E. Rasmussen",
"['Roger Frigola' 'Yutian Chen' 'Carl E. Rasmussen']"
]
|
cs.NE cond-mat.mtrl-sci cs.LG | 10.3389/fnins.2014.00205 | 1406.4951 | null | null | http://arxiv.org/abs/1406.4951v4 | 2014-07-14T00:12:49Z | 2014-06-19T05:49:02Z | Brain-like associative learning using a nanoscale non-volatile phase
change synaptic device array | Recent advances in neuroscience together with nanoscale electronic device
technology have resulted in a surge of interest in realizing brain-like computing
hardware using emerging nanoscale memory devices as synaptic elements.
Although there has been experimental work demonstrating the operation of
nanoscale synaptic elements at the single-device level, network-level studies
have been limited to simulations. In this work, we experimentally demonstrate
array-level associative learning using phase-change synaptic
devices connected in a grid-like configuration similar to the organization of
the biological brain. Implementing Hebbian learning with phase change memory
cells, the synaptic grid was able to store presented patterns and recall
missing patterns in an associative brain-like fashion. We found that the system
is robust to device variations, and large variations in cell resistance states
can be accommodated by increasing the number of training epochs. We illustrated
the tradeoff between variation tolerance of the network and the overall energy
consumption, and found that energy consumption is decreased significantly for
lower variation tolerance.
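As a software analogue of the experiment (our illustration, not the device-level programming protocol), Hebbian storage and recall on a grid of weights can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
W = np.zeros((n, n))                     # synaptic weights (conductances)

def hebbian_store(W, x, lr=1.0):
    """Hebbian rule: strengthen synapses between co-active neurons."""
    W += lr * np.outer(x, x)
    np.fill_diagonal(W, 0.0)             # no self-connections
    return W

def recall(W, cue, steps=5):
    """Recall from a partial cue by thresholding summed synaptic input."""
    x = cue.copy()
    for _ in range(steps):
        h = W @ x
        x = (h > 0.5 * h.max()).astype(float)
    return x

pattern = (rng.random(n) < 0.2).astype(float)
W = hebbian_store(W, pattern)
cue = pattern.copy()
cue[: n // 2] = 0.0                      # erase half of the pattern
print(np.allclose(recall(W, cue), pattern))   # True: pattern recalled
```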
| [
"['Sukru Burc Eryilmaz' 'Duygu Kuzum' 'Rakesh Jeyasingh' 'SangBum Kim'\n 'Matthew BrightSky' 'Chung Lam' 'H. -S. Philip Wong']",
"Sukru Burc Eryilmaz, Duygu Kuzum, Rakesh Jeyasingh, SangBum Kim,\n Matthew BrightSky, Chung Lam and H.-S. Philip Wong"
]
|
cs.CV cs.LG stat.ML | null | 1406.4966 | null | null | http://arxiv.org/pdf/1406.4966v2 | 2014-06-20T02:13:56Z | 2014-06-19T07:42:05Z | Inner Product Similarity Search using Compositional Codes | This paper addresses the nearest neighbor search problem under inner product
similarity and introduces a compact code-based approach. The idea is to
approximate a vector using the composition of several elements selected from a
source dictionary and to represent this vector by a short code composed of the
indices of the selected elements. The inner product between a query vector and
a database vector is efficiently estimated from the query vector and the short
code of the database vector. We show the superior performance of the proposed
group $M$-selection algorithm that selects $M$ elements from $M$ source
dictionaries for vector approximation in terms of search accuracy and
efficiency for compact codes of the same length via theoretical and empirical
analysis. Experimental results on large-scale datasets ($1M$ and $1B$ SIFT
features, $1M$ linear models and Netflix) demonstrate the superiority of the
proposed approach.
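A toy sketch of the representation and the query-time estimate (the encoder below is a crude greedy stand-in; the paper's group $M$-selection algorithm is the actual contribution):

```python
import numpy as np

rng = np.random.default_rng(1)
d, M, K = 64, 4, 256                     # dimension, dictionaries, elements
dicts = rng.normal(size=(M, K, d))       # M source dictionaries

def encode(x, dicts):
    """Pick one element per dictionary to approximate x as their sum."""
    code, residual = [], x.copy()
    for D in dicts:
        scores = D @ residual - 0.5 * np.sum(D * D, axis=1)
        j = int(np.argmax(scores))       # element nearest to the residual
        code.append(j)
        residual = residual - D[j]
    return code

def inner_product(q, code, dicts):
    """Estimate <q, x> from the short code: one (M, K) lookup table
    per query, then M table lookups per database vector."""
    tables = dicts @ q
    return sum(tables[m, code[m]] for m in range(M))

x, q = rng.normal(size=d), rng.normal(size=d)
print(inner_product(q, encode(x, dicts), dicts), q @ x)
```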
| [
"Chao Du, Jingdong Wang",
"['Chao Du' 'Jingdong Wang']"
]
|
quant-ph cs.LG gr-qc stat.ML | 10.1038/nphys3266 | 1406.5036 | null | null | http://arxiv.org/abs/1406.5036v1 | 2014-06-19T13:30:12Z | 2014-06-19T13:30:12Z | Inferring causal structure: a quantum advantage | The problem of using observed correlations to infer causal relations is
relevant to a wide variety of scientific disciplines. Yet given correlations
between just two classical variables, it is impossible to determine whether
they arose from a causal influence of one on the other or a common cause
influencing both, unless one can implement a randomized intervention. We here
consider the problem of causal inference for quantum variables. We introduce
causal tomography, which unifies and generalizes conventional quantum
tomography schemes to provide a complete solution to the causal inference
problem using a quantum analogue of a randomized trial. We furthermore show
that, in contrast to the classical case, observed quantum correlations alone
can sometimes provide a solution. We implement a quantum-optical experiment
that allows us to control the causal relation between two optical modes, and
two measurement schemes -- one with and one without randomization -- that
extract this relation from the observed correlations. Our results show that
entanglement and coherence, known to be central to quantum information
processing, also provide a quantum advantage for causal inference.
| [
"['Katja Ried' 'Megan Agnew' 'Lydia Vermeyden' 'Dominik Janzing'\n 'Robert W. Spekkens' 'Kevin J. Resch']",
"Katja Ried, Megan Agnew, Lydia Vermeyden, Dominik Janzing, Robert W.\n Spekkens and Kevin J. Resch"
]
|
cs.LG stat.ML | null | 1406.5143 | null | null | null | null | null | The Sample Complexity of Learning Linear Predictors with the Squared
Loss | In this short note, we provide a sample complexity lower bound for learning
linear predictors with respect to the squared loss. Our focus is on an agnostic
setting, where no assumptions are made on the data distribution. This contrasts
with standard results in the literature, which either make distributional
assumptions, refer to specific parameter settings, or use other performance
measures.
| [
"Ohad Shamir"
]
|
cs.DC cs.LG | null | 1406.5161 | null | null | http://arxiv.org/pdf/1406.5161v1 | 2014-06-19T19:22:28Z | 2014-06-19T19:22:28Z | Fast Support Vector Machines Using Parallel Adaptive Shrinking on
Distributed Systems | Support Vector Machines (SVMs), a popular machine learning technique, have been
applied to a wide range of domains such as science, finance, and social
networks for supervised learning. Whether it is identifying high-risk patients
by health-care professionals, or potential high-school students to enroll in
college by school districts, SVMs can play a major role for social good. This
paper undertakes the challenge of designing a scalable parallel SVM training
algorithm for large scale systems, which includes commodity multi-core
machines, tightly connected supercomputers and cloud computing systems.
Intuitive techniques for improving the time-space complexity, including adaptive
elimination of samples for faster convergence and sparse format representation,
are proposed. Under sample elimination, several heuristics for {\em earliest
possible} to {\em lazy} elimination of non-contributing samples are proposed.
In several cases, where an early sample elimination might result in a false
positive, low overhead mechanisms for reconstruction of key data structures are
proposed. The algorithm and heuristics are implemented and evaluated on various
publicly available datasets. Empirical evaluation shows up to 26x speed
improvement on some datasets against the sequential baseline, when evaluated on
multiple compute nodes, and an improvement in execution time of up to 30-60\% is
readily observed on a number of other datasets against our parallel baseline.
| [
"['Jeyanthi Narasimhan' 'Abhinav Vishnu' 'Lawrence Holder' 'Adolfy Hoisie']",
"Jeyanthi Narasimhan, Abhinav Vishnu, Lawrence Holder, Adolfy Hoisie"
]
|
stat.ML cs.LG math.NA math.OC | 10.1137/140994915 | 1406.5286 | null | null | http://arxiv.org/abs/1406.5286v1 | 2014-06-20T06:45:24Z | 2014-06-20T06:45:24Z | Enhancing Pure-Pixel Identification Performance via Preconditioning | In this paper, we analyze different preconditionings designed to enhance
robustness of pure-pixel search algorithms, which are used for blind
hyperspectral unmixing and which are equivalent to near-separable nonnegative
matrix factorization algorithms. Our analysis focuses on the successive
projection algorithm (SPA), a simple, efficient and provably robust algorithm
in the pure-pixel algorithm class. Recently, a provably robust preconditioning
was proposed by Gillis and Vavasis (arXiv:1310.2273) which requires the
resolution of a semidefinite program (SDP) to find a minimum-volume ellipsoid
enclosing the data points. Since solving the SDP to high precision can be time
consuming, we generalize the robustness analysis to approximate solutions of
the SDP, that is, solutions whose objective function values are some
multiplicative factors away from the optimal value. It is shown that a high
accuracy solution is not crucial for robustness, which paves the way for faster
preconditionings (e.g., based on first-order optimization methods). This first
contribution also allows us to provide a robustness analysis for two other
preconditionings. The first one is pre-whitening, which can be interpreted as
an optimal solution of the same SDP with additional constraints. We analyze
the robustness of pre-whitening, which allows us to characterize situations in which
it performs competitively with the SDP-based preconditioning. The second one is
based on SPA itself and can be interpreted as an optimal solution of a
relaxation of the SDP. It is extremely fast while competing with the SDP-based
preconditioning on several synthetic data sets.
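For reference, the SPA core that all of these preconditionings wrap is only a few lines; each preconditioning amounts to running it on $PX$ for a suitable matrix $P$ (a sketch with our own naming):

```python
import numpy as np

def spa(X, r):
    """Successive projection algorithm: pick the column with the
    largest norm, project all columns onto the orthogonal complement
    of it, and repeat r times.  Returns the selected column indices."""
    R = X.astype(float).copy()
    selected = []
    for _ in range(r):
        j = int(np.argmax(np.sum(R * R, axis=0)))
        selected.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)          # deflate the chosen direction
    return selected

# Preconditioned variants: spa(P @ X, r), with P from the SDP-based
# minimum-volume ellipsoid, from pre-whitening, or from SPA itself.
```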
| [
"Nicolas Gillis, Wing-Kin Ma",
"['Nicolas Gillis' 'Wing-Kin Ma']"
]
|
stat.ML cs.LG | null | 1406.5291 | null | null | http://arxiv.org/pdf/1406.5291v3 | 2015-02-02T17:54:14Z | 2014-06-20T07:11:44Z | Generalized Dantzig Selector: Application to the k-support norm | We propose a Generalized Dantzig Selector (GDS) for linear models, in which
any norm encoding the parameter structure can be leveraged for estimation. We
investigate both computational and statistical aspects of the GDS. Based on
the conjugate proximal operator, a flexible inexact ADMM framework is designed for
solving the GDS, and non-asymptotic high-probability bounds are established on the
estimation error; these bounds rely on the Gaussian width of the unit norm ball and of a
suitable set encompassing the estimation error. Further, we consider a non-trivial example
of the GDS using $k$-support norm. We derive an efficient method to compute the
proximal operator for $k$-support norm since existing methods are inapplicable
in this setting. For statistical analysis, we provide upper bounds for the
Gaussian widths needed in the GDS analysis, yielding the first statistical
recovery guarantee for estimation with the $k$-support norm. The experimental
results confirm our theoretical analysis.
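In the notation the abstract suggests, with $R(\cdot)$ the chosen norm and $R^{*}(\cdot)$ its dual, the estimator takes the usual Dantzig-selector form

\[
\hat{\theta} \in \operatorname*{arg\,min}_{\theta}\; R(\theta)
\quad \text{s.t.} \quad R^{*}\!\left(X^{\top}(y - X\theta)\right) \le \lambda_n,
\]

recovering the classical Dantzig selector for $R = \|\cdot\|_1$, $R^{*} = \|\cdot\|_\infty$; the $k$-support norm is plugged in for $R$ in the paper's main example.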
| [
"Soumyadeep Chatterjee and Sheng Chen and Arindam Banerjee",
"['Soumyadeep Chatterjee' 'Sheng Chen' 'Arindam Banerjee']"
]
|
math.OC cs.LG cs.NA math.NA stat.ML | null | 1406.5295 | null | null | http://arxiv.org/pdf/1406.5295v1 | 2014-06-20T07:21:31Z | 2014-06-20T07:21:31Z | Rows vs Columns for Linear Systems of Equations - Randomized Kaczmarz or
Coordinate Descent? | This paper is about randomized iterative algorithms for solving a linear
system of equations $X \beta = y$ in different settings. Recent interest in the
topic was reignited when Strohmer and Vershynin (2009) proved the linear
convergence rate of a Randomized Kaczmarz (RK) algorithm that works on the rows
of $X$ (data points). Following that, Leventhal and Lewis (2010) proved the
linear convergence of a Randomized Coordinate Descent (RCD) algorithm that
works on the columns of $X$ (features). The aim of this paper is to simplify
our understanding of these two algorithms, establish the direct relationships
between them (though RK is often compared to Stochastic Gradient Descent), and
examine the algorithmic commonalities or tradeoffs involved with working on
rows or columns. We also discuss Kernel Ridge Regression and present a
Kaczmarz-style algorithm that works on data points and has the advantage of
solving the problem without ever storing or forming the Gram matrix, one of the
recognized problems encountered when scaling kernelized methods.
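The two updates being related are short enough to state in code (a sketch; sampling probabilities proportional to squared row and column norms follow Strohmer-Vershynin and Leventhal-Lewis):

```python
import numpy as np

def randomized_kaczmarz(X, y, iters, seed=0):
    """Row action: sample a data point and project the iterate onto
    the hyperplane <x_i, beta> = y_i that it defines."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    probs = np.sum(X * X, axis=1) / np.sum(X * X)
    beta = np.zeros(p)
    for _ in range(iters):
        i = rng.choice(n, p=probs)
        beta += (y[i] - X[i] @ beta) / (X[i] @ X[i]) * X[i]
    return beta

def randomized_coordinate_descent(X, y, iters, seed=0):
    """Column action: sample a feature and exactly minimize the least
    squares objective along that coordinate."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    probs = np.sum(X * X, axis=0) / np.sum(X * X)
    beta, r = np.zeros(p), y.astype(float).copy()   # r tracks y - X @ beta
    for _ in range(iters):
        j = rng.choice(p, p=probs)
        step = (X[:, j] @ r) / (X[:, j] @ X[:, j])
        beta[j] += step
        r -= step * X[:, j]
    return beta
```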
| [
"Aaditya Ramdas",
"['Aaditya Ramdas']"
]
|
cs.LG stat.ML | null | 1406.5298 | null | null | http://arxiv.org/pdf/1406.5298v2 | 2014-10-31T22:43:31Z | 2014-06-20T07:52:18Z | Semi-Supervised Learning with Deep Generative Models | The ever-increasing size of modern data sets combined with the difficulty of
obtaining label information has made semi-supervised learning a problem of
significant practical importance in modern data analysis. We
revisit the approach to semi-supervised learning with generative models and
develop new models that allow for effective generalisation from small labelled
data sets to large unlabelled ones. Generative approaches have thus far been
either inflexible, inefficient or non-scalable. We show that deep generative
models and approximate Bayesian inference exploiting recent advances in
variational methods can be used to provide significant improvements, making
generative approaches highly competitive for semi-supervised learning.
| [
"['Diederik P. Kingma' 'Danilo J. Rezende' 'Shakir Mohamed' 'Max Welling']",
"Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling"
]
|
math.OC cs.AI cs.LG math.NA stat.ML | 10.1080/10556788.2015.1099652 | 1406.5311 | null | null | http://arxiv.org/abs/1406.5311v2 | 2016-01-29T05:53:45Z | 2014-06-20T08:35:15Z | Towards A Deeper Geometric, Analytic and Algorithmic Understanding of
Margins | Given a matrix $A$, a linear feasibility problem (of which linear
classification is a special case) aims to find a solution to a primal problem
$w: A^Tw > \textbf{0}$ or a certificate for the dual problem which is a
probability distribution $p: Ap = \textbf{0}$. Inspired by the continued
importance of "large-margin classifiers" in machine learning, this paper
studies a condition measure of $A$ called its \textit{margin} that determines
the difficulty of both the above problems. To aid geometrical intuition, we
first establish new characterizations of the margin in terms of relevant balls,
cones and hulls. Our second contribution is analytical, where we present
generalizations of Gordan's theorem, and variants of Hoffman's theorems, both
using margins. We end by proving some new results on a classical iterative
scheme, the Perceptron, whose convergence rates famously depend on the margin.
Our results are relevant for a deeper understanding of margin-based learning
and proving convergence rates of iterative schemes, apart from providing a
unifying perspective on this vast topic.
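For concreteness, the classical scheme whose rate the margin controls is only a few lines (a sketch; columns are normalized so the classical $1/\text{margin}^2$ mistake bound applies):

```python
import numpy as np

def perceptron(A, max_iters=100000):
    """Find w with A^T w > 0 by repeatedly adding the most violated
    column; on feasible instances the number of updates is bounded by
    1 / margin^2, which is why the margin measures difficulty."""
    A = A / np.linalg.norm(A, axis=0)    # normalize columns
    w = np.zeros(A.shape[0])
    for _ in range(max_iters):
        scores = A.T @ w
        j = int(np.argmin(scores))
        if scores[j] > 0:
            return w                     # feasible: A^T w > 0
        w += A[:, j]                     # Perceptron update
    return None                          # tiny margin or infeasible
```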
| [
"Aaditya Ramdas and Javier Pe\\~na",
"['Aaditya Ramdas' 'Javier Peña']"
]
|
stat.ML cs.LG | null | 1406.5362 | null | null | http://arxiv.org/pdf/1406.5362v2 | 2014-11-20T17:21:19Z | 2014-06-20T12:14:45Z | Predicting the Future Behavior of a Time-Varying Probability
Distribution | We study the problem of predicting the future, though only in the
probabilistic sense of estimating a future state of a time-varying probability
distribution. This is not only an interesting academic problem, but solving
this extrapolation problem also has many practical applications, e.g. for
training classifiers that have to operate under time-varying conditions. Our
main contribution is a method for predicting the next step of the time-varying
distribution from a given sequence of sample sets from earlier time steps. For
this we rely on two recent machine learning techniques: embedding probability
distributions into a reproducing kernel Hilbert space, and learning operators
by vector-valued regression. We illustrate the working principles and the
practical usefulness of our method by experiments on synthetic and real data.
We also highlight an exemplary application: training a classifier in a domain
adaptation setting without having access to examples from the test time
distribution at training time.
| [
"Christoph H. Lampert",
"['Christoph H. Lampert']"
]
|
cs.LG cs.AI stat.ML | null | 1406.5370 | null | null | http://arxiv.org/pdf/1406.5370v4 | 2016-03-10T18:15:19Z | 2014-06-20T12:58:46Z | Spectral Ranking using Seriation | We describe a seriation algorithm for ranking a set of items given pairwise
comparisons between these items. Intuitively, the algorithm assigns similar
rankings to items that compare similarly with all others. It does so by
constructing a similarity matrix from pairwise comparisons, using seriation
methods to reorder this matrix and construct a ranking. We first show that this
spectral seriation algorithm recovers the true ranking when all pairwise
comparisons are observed and consistent with a total order. We then show that
ranking reconstruction is still exact when some pairwise comparisons are
corrupted or missing, and that seriation based spectral ranking is more robust
to noise than classical scoring methods. Finally, we bound the ranking error
when only a random subset of the comparisons is observed. An additional benefit
of the seriation formulation is that it allows us to solve semi-supervised
ranking problems. Experiments on both synthetic and real datasets demonstrate
that seriation based spectral ranking achieves competitive and in some cases
superior performance compared to classical ranking methods.
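One concrete instantiation consistent with the construction described (comparison matrix, agreement-based similarity, then spectral seriation) is:

```python
import numpy as np

def spectral_rank(C):
    """C[i, j] = 1 if i beat j, -1 if i lost, 0 if unobserved.
    Items that compare similarly with all others get similar rows of
    S, and sorting the Fiedler vector of the Laplacian recovers the
    seriation order (up to a global reversal)."""
    n = C.shape[0]
    S = (n + C @ C.T) / 2.0              # pairwise agreement counts
    L = np.diag(S.sum(axis=1)) - S       # graph Laplacian of S
    _, vecs = np.linalg.eigh(L)
    return np.argsort(vecs[:, 1])        # order by the Fiedler vector
```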
| [
"['Fajwel Fogel' \"Alexandre d'Aspremont\" 'Milan Vojnovic']",
"Fajwel Fogel, Alexandre d'Aspremont, Milan Vojnovic"
]
|
stat.ML cs.LG | null | 1406.5383 | null | null | http://arxiv.org/pdf/1406.5383v3 | 2015-11-23T23:09:47Z | 2014-06-20T13:42:30Z | Noise-adaptive Margin-based Active Learning and Lower Bounds under
Tsybakov Noise Condition | We present a simple noise-robust margin-based active learning algorithm to
find homogeneous (passing through the origin) linear separators and analyze its error
convergence when labels are corrupted by noise. We show that when the imposed
noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others
1999; Tsybakov 2004) the algorithm is able to adapt to unknown level of noise
and achieves optimal statistical rate up to poly-logarithmic factors. We also
derive lower bounds for margin-based active learning algorithms under Tsybakov
noise conditions (TNC) for the membership query synthesis scenario (Angluin
1988). Our result implies lower bounds for the stream based selective sampling
scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite
surprisingly, we show that the sample complexity cannot be improved even if the
underlying data distribution is as simple as the uniform distribution on the
unit ball. Our proof involves the construction of a well separated hypothesis
set on the d-dimensional unit ball along with carefully designed label
distributions for the Tsybakov noise condition. Our analysis might provide
insights for other forms of lower bounds as well.
| [
"['Yining Wang' 'Aarti Singh']",
"Yining Wang, Aarti Singh"
]
|
cs.LG | null | 1406.5388 | null | null | http://arxiv.org/pdf/1406.5388v3 | 2015-02-26T19:53:03Z | 2014-06-20T13:52:36Z | Learning computationally efficient dictionaries and their implementation
as fast transforms | Dictionary learning is a branch of signal processing and machine learning
that aims at finding a frame (called dictionary) in which some training data
admits a sparse representation. The sparser the representation, the better the
dictionary. The resulting dictionary is in general a dense matrix, and its
manipulation can be computationally costly both at the learning stage and later
in the usage of this dictionary, for tasks such as sparse coding. Dictionary
learning is thus limited to relatively small-scale problems. In this paper,
inspired by usual fast transforms, we consider a general dictionary structure
that allows cheaper manipulation, and propose an algorithm to learn such
dictionaries --and their fast implementation-- over training data. The approach
is demonstrated experimentally with the factorization of the Hadamard matrix
and with synthetic dictionary learning experiments.
| [
"['Luc Le Magoarou' 'Rémi Gribonval']",
"Luc Le Magoarou (INRIA - IRISA), R\\'emi Gribonval (INRIA - IRISA)"
]
|
cs.NA cs.CV cs.LG math.OC | null | 1406.5429 | null | null | http://arxiv.org/pdf/1406.5429v2 | 2014-12-03T20:59:42Z | 2014-06-20T15:33:00Z | Playing with Duality: An Overview of Recent Primal-Dual Approaches for
Solving Large-Scale Optimization Problems | Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies which jointly bring into
play the primal and the dual problems is, however, a more recent idea which has
generated many important new contributions in recent years. These novel
developments are grounded on recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization with emphasis on
sparsity issues. In this paper, we aim at presenting the principles of
primal-dual approaches, while giving an overview of numerical methods which
have been proposed in different contexts. We show the benefits which can be
drawn from primal-dual algorithms both for solving large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
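A canonical example of the schemes surveyed is the primal-dual hybrid gradient (Chambolle-Pock) iteration for $\min_x f(x) + g(Kx)$, which alternates proximal steps on the two problems:

\[
y^{k+1} = \mathrm{prox}_{\sigma g^{*}}\big(y^{k} + \sigma K \bar{x}^{k}\big), \quad
x^{k+1} = \mathrm{prox}_{\tau f}\big(x^{k} - \tau K^{\top} y^{k+1}\big), \quad
\bar{x}^{k+1} = x^{k+1} + \theta\,(x^{k+1} - x^{k}),
\]

with convergence guaranteed for $\theta = 1$ and step sizes satisfying $\sigma \tau \|K\|^2 < 1$.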
| [
"['Nikos Komodakis' 'Jean-Christophe Pesquet']",
"Nikos Komodakis and Jean-Christophe Pesquet"
]
|
stat.ML cs.CV cs.LG cs.MS | null | 1406.5565 | null | null | http://arxiv.org/pdf/1406.5565v1 | 2014-06-21T01:50:54Z | 2014-06-21T01:50:54Z | An Open Source Pattern Recognition Toolbox for MATLAB | Pattern recognition and machine learning are becoming integral parts of
algorithms in a wide range of applications. Different algorithms and approaches
for machine learning include different tradeoffs between performance and
computation, so during algorithm development it is often necessary to explore a
variety of different approaches to a given task. A toolbox with a unified
framework across multiple pattern recognition techniques gives algorithm
developers the ability to rapidly evaluate different choices prior to
deployment. MATLAB is a widely used environment for algorithm development and
prototyping, and although several MATLAB toolboxes for pattern recognition are
currently available these are either incomplete, expensive, or restrictively
licensed. In this work we describe a MATLAB toolbox for pattern recognition and
machine learning known as the PRT (Pattern Recognition Toolbox), licensed under
the permissive MIT license. The PRT includes many popular techniques for data
preprocessing, supervised learning, clustering, regression and feature
selection, as well as a methodology for combining these components using a
simple, uniform syntax. The resulting algorithms can be evaluated using
cross-validation and a variety of scoring metrics to ensure robust performance
when the algorithm is deployed. This paper presents an overview of the PRT as
well as an example of usage on Fisher's Iris dataset.
| [
"['Kenneth D. Morton Jr.' 'Peter Torrione' 'Leslie Collins' 'Sam Keene']",
"Kenneth D. Morton Jr., Peter Torrione, Leslie Collins, Sam Keene"
]
|
cs.LG | null | 1406.5600 | null | null | http://arxiv.org/pdf/1406.5600v1 | 2014-06-21T11:47:21Z | 2014-06-21T11:47:21Z | From conformal to probabilistic prediction | This paper proposes a new method of probabilistic prediction, which is based
on conformal prediction. The method is applied to the standard USPS data set
and gives encouraging results.
| [
"Vladimir Vovk, Ivan Petej, and Valentina Fedorova",
"['Vladimir Vovk' 'Ivan Petej' 'Valentina Fedorova']"
]
|
cs.LG cs.AI stat.ML | null | 1406.5614 | null | null | http://arxiv.org/pdf/1406.5614v2 | 2016-06-04T06:55:57Z | 2014-06-21T14:25:35Z | PAC-Bayes Analysis of Multi-view Learning | This paper presents eight PAC-Bayes bounds to analyze the generalization
performance of multi-view classifiers. These bounds adopt data dependent
Gaussian priors which emphasize classifiers with high view agreements. The
center of the prior for the first two bounds is the origin, while the center of
the prior for the third and fourth bounds is given by a data dependent vector.
An important technique to obtain these bounds is two derived logarithmic
determinant inequalities whose difference lies in whether the dimensionality of
data is involved. The centers of the fifth and sixth bounds are calculated on a
separate subset of the training set. The last two bounds use unlabeled data to
represent view agreements and are thus applicable to semi-supervised multi-view
learning. We evaluate all the presented multi-view PAC-Bayes bounds on
benchmark data and compare them with previous single-view PAC-Bayes bounds. The
usefulness and performance of the multi-view bounds are discussed.
| [
"Shiliang Sun, John Shawe-Taylor, Liang Mao",
"['Shiliang Sun' 'John Shawe-Taylor' 'Liang Mao']"
]
|
cs.LG cs.SI stat.ML | null | 1406.5647 | null | null | http://arxiv.org/pdf/1406.5647v3 | 2016-03-16T14:24:05Z | 2014-06-21T20:08:38Z | On semidefinite relaxations for the block model | The stochastic block model (SBM) is a popular tool for community detection in
networks, but fitting it by maximum likelihood (MLE) involves a computationally
infeasible optimization problem. We propose a new semidefinite programming
(SDP) solution to the problem of fitting the SBM, derived as a relaxation of
the MLE. We put ours and previously proposed SDPs in a unified framework, as
relaxations of the MLE over various sub-classes of the SBM, revealing a
connection to sparse PCA. Our main relaxation, which we call SDP-1, is tighter
than other recently proposed SDP relaxations, and thus previously established
theoretical guarantees carry over. However, we show that SDP-1 exactly recovers
true communities over a wider class of SBMs than those covered by current
results. In particular, the assumption of strong assortativity of the SBM,
implicit in consistency conditions for previously proposed SDPs, can be relaxed
to weak assortativity for our approach, thus significantly broadening the class
of SBMs covered by the consistency results. We also show that strong
assortativity is indeed a necessary condition for exact recovery for previously
proposed SDP approaches and not an artifact of the proofs. Our analysis of SDPs
is based on primal-dual witness constructions, which provides some insight into
the nature of the solutions of various SDPs. We show how to combine features
from SDP-1 and already available SDPs to achieve the most flexibility in terms
of both assortativity and block-size constraints, as our relaxation has the
tendency to produce communities of similar sizes. This tendency makes it the
ideal tool for fitting network histograms, a method gaining popularity in the
graphon estimation literature, as we illustrate on an example of a social
network of dolphins. We also provide empirical evidence that SDPs outperform
spectral methods for fitting SBMs with a large number of blocks.
| [
"['Arash A. Amini' 'Elizaveta Levina']",
"Arash A. Amini, Elizaveta Levina"
]
|
cs.DS cs.LG | null | 1406.5665 | null | null | http://arxiv.org/pdf/1406.5665v1 | 2014-06-22T03:00:32Z | 2014-06-22T03:00:32Z | Constant Factor Approximation for Balanced Cut in the PIE model | We propose and study a new semi-random semi-adversarial model for Balanced
Cut, a planted model with permutation-invariant random edges (PIE). Our model
is much more general than planted models considered previously. Consider a set
of vertices V partitioned into two clusters $L$ and $R$ of equal size. Let $G$
be an arbitrary graph on $V$ with no edges between $L$ and $R$. Let
$E_{random}$ be a set of edges sampled from an arbitrary permutation-invariant
distribution (a distribution that is invariant under permutation of vertices in
$L$ and in $R$). Then we say that $G + E_{random}$ is a graph with
permutation-invariant random edges.
We present an approximation algorithm for the Balanced Cut problem that finds
a balanced cut of cost $O(|E_{random}|) + n \text{polylog}(n)$ in this model.
In the regime when $|E_{random}| = \Omega(n \text{polylog}(n))$, this is a
constant factor approximation with respect to the cost of the planted cut.
| [
"Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan",
"['Konstantin Makarychev' 'Yury Makarychev' 'Aravindan Vijayaraghavan']"
]
|
cs.DS cs.LG | null | 1406.5667 | null | null | http://arxiv.org/pdf/1406.5667v2 | 2015-05-12T19:33:12Z | 2014-06-22T03:07:55Z | Correlation Clustering with Noisy Partial Information | In this paper, we propose and study a semi-random model for the Correlation
Clustering problem on arbitrary graphs G. We give two approximation algorithms
for Correlation Clustering instances from this model. The first algorithm finds
a solution of value $(1+ \delta) optcost + O_{\delta}(n\log^3 n)$ with high
probability, where $optcost$ is the value of the optimal solution (for every
$\delta > 0$). The second algorithm finds the ground truth clustering with an
arbitrarily small classification error $\eta$ (under some additional
assumptions on the instance).
| [
"Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan",
"['Konstantin Makarychev' 'Yury Makarychev' 'Aravindan Vijayaraghavan']"
]
|
cs.LG | null | 1406.5675 | null | null | http://arxiv.org/pdf/1406.5675v6 | 2016-05-20T06:32:12Z | 2014-06-22T05:02:06Z | SPSD Matrix Approximation via Column Selection: Theories, Algorithms,
and Extensions | Symmetric positive semidefinite (SPSD) matrix approximation is an important
problem with applications in kernel methods. However, existing SPSD matrix
approximation methods such as the Nystr\"om method only have weak error bounds.
In this paper we conduct in-depth studies of an SPSD matrix approximation model
and establish strong relative-error bounds. We call it the prototype model for
it has more efficient and effective extensions, and some of its extensions have
high scalability. Though the prototype model itself is not suitable for
large-scale data, it is still useful to study its properties, on which the
analysis of its extensions relies.
This paper offers novel theoretical analysis, efficient algorithms, and a
highly accurate extension. First, we establish a lower error bound for the
prototype model and improve the error bound of an existing column selection
algorithm to match the lower bound. In this way, we obtain the first optimal
column selection algorithm for the prototype model. We also prove that the
prototype model is exact under certain conditions. Second, we develop a simple
column selection algorithm with a provable error bound. Third, we propose a
so-called spectral shifting model to make the approximation more accurate when
the eigenvalues of the matrix decay slowly, and the improvement is
theoretically quantified. The spectral shifting method can also be applied to
improve other SPSD matrix approximation models.
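The prototype model itself is compact enough to state in full (a sketch; `idx` would come from the paper's optimal column selection algorithm):

```python
import numpy as np

def prototype_approx(K, idx):
    """Prototype SPSD approximation: with C = K[:, idx], use the
    Frobenius-optimal intersection matrix U = pinv(C) K pinv(C)^T,
    giving K ~= C U C^T.  (The Nystrom method instead takes
    U = pinv(K[idx][:, idx]), which is cheaper but less accurate.)"""
    C = K[:, idx]
    Cp = np.linalg.pinv(C)
    return C @ (Cp @ K @ Cp.T) @ C.T
```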
| [
"['Shusen Wang' 'Luo Luo' 'Zhihua Zhang']",
"Shusen Wang, Luo Luo, Zhihua Zhang"
]
|
cs.CV cs.CL cs.LG | null | 1406.5679 | null | null | http://arxiv.org/pdf/1406.5679v1 | 2014-06-22T06:22:50Z | 2014-06-22T06:22:50Z | Deep Fragment Embeddings for Bidirectional Image Sentence Mapping | We introduce a model for bidirectional retrieval of images and sentences
through a multi-modal embedding of visual and natural language data. Unlike
previous models that directly map images or sentences into a common embedding
space, our model works on a finer level and embeds fragments of images
(objects) and fragments of sentences (typed dependency tree relations) into a
common space. In addition to a ranking objective seen in previous work, this
allows us to add a new fragment alignment objective that learns to directly
associate these fragments across modalities. Extensive experimental evaluation
shows that reasoning on both the global level of images and sentences and the
finer level of their respective fragments significantly improves performance on
image-sentence retrieval tasks. Additionally, our model provides interpretable
predictions since the inferred inter-modal fragment alignment is explicit.
| [
"Andrej Karpathy, Armand Joulin and Li Fei-Fei",
"['Andrej Karpathy' 'Armand Joulin' 'Li Fei-Fei']"
]
|
math.ST cs.LG stat.ML stat.TH | 10.1109/CCA.2014.6981380 | 1406.5706 | null | null | http://arxiv.org/abs/1406.5706v2 | 2014-09-21T22:03:06Z | 2014-06-22T11:38:59Z | On the Maximum Entropy Property of the First-Order Stable Spline Kernel
and its Implications | A new nonparametric approach for system identification has been recently
proposed where the impulse response is seen as the realization of a zero-mean
Gaussian process whose covariance, the so-called stable spline kernel,
guarantees that the impulse response is almost surely stable. Maximum entropy
properties of the stable spline kernel have been pointed out in the literature.
In this paper we provide an independent proof that relies on the theory of
matrix extension problems in the graphical model literature and leads to a
closed form expression for the inverse of the first order stable spline kernel
as well as to a new factorization in the form $UWU^\top$ with $U$ upper
triangular and $W$ diagonal. Interestingly, all first-order stable spline
kernels share the same factor $U$ and $W$ admits a closed form representation
in terms of the kernel hyperparameter, making the factorization computationally
inexpensive. Maximum likelihood properties of the stable spline kernel are also
highlighted. These results can be applied both to improve the stability and to
reduce the computational complexity associated with the computation of stable
spline estimators.
| [
"Francesca Paola Carli",
"['Francesca Paola Carli']"
]
|
stat.ML cs.LG math.OC | null | 1406.5736 | null | null | http://arxiv.org/pdf/1406.5736v1 | 2014-06-22T15:34:15Z | 2014-06-22T15:34:15Z | Convex Optimization Learning of Faithful Euclidean Distance
Representations in Nonlinear Dimensionality Reduction | Classical multidimensional scaling only works well when the noisy distances
observed in a high dimensional space can be faithfully represented by Euclidean
distances in a low dimensional space. Advanced models such as Maximum Variance
Unfolding (MVU) and Minimum Volume Embedding (MVE) use Semi-Definite
Programming (SDP) to reconstruct such faithful representations. While those SDP
models are capable of producing high quality configurations numerically, they
suffer from two major drawbacks. One is that there exist no theoretically guaranteed
bounds on the quality of the configuration. The other is that they are slow in
computation when the data points are beyond moderate size. In this paper, we
propose a convex optimization model of Euclidean distance matrices. We
establish a non-asymptotic error bound for the random graph model with
sub-Gaussian noise, and prove that our model produces a matrix estimator of
high accuracy when the order of the uniform sample size is roughly the degree
of freedom of a low-rank matrix up to a logarithmic factor. Our results
partially explain why MVU and MVE often work well. Moreover, we develop a fast
inexact accelerated proximal gradient method. Numerical experiments show that
the model can produce configurations of high quality on large data points that
the SDP approach would struggle to cope with.
| [
"Chao Ding and Hou-Duo Qi",
"['Chao Ding' 'Hou-Duo Qi']"
]
|
stat.ML cs.LG | null | 1406.5752 | null | null | http://arxiv.org/pdf/1406.5752v1 | 2014-06-22T19:16:20Z | 2014-06-22T19:16:20Z | Divide-and-Conquer Learning by Anchoring a Conical Hull | We reduce a broad class of machine learning problems, usually addressed by EM
or sampling, to the problem of finding the $k$ extremal rays spanning the
conical hull of a data point set. These $k$ "anchors" lead to a global solution
and a more interpretable model that can even outperform EM and sampling on
generalization error. To find the $k$ anchors, we propose a novel
divide-and-conquer learning scheme "DCA" that distributes the problem to
$\mathcal O(k\log k)$ same-type sub-problems on different low-D random
hyperplanes, each of which can be solved by any solver. For the 2D sub-problem, we
present a non-iterative solver that only needs to compute an array of cosine
values and its max/min entries. DCA also provides a faster subroutine for other
methods to check whether a point is covered in a conical hull, which improves
algorithm design in multiple dimensions and brings significant speedup to
learning. We apply our method to GMM, HMM, LDA, NMF and subspace clustering,
then show its competitive performance and scalability over other methods on
rich datasets.
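The 2D sub-problem is especially simple; a sketch (using `atan2` for brevity, and assuming the projected cone lies within a half-plane, whereas the paper's non-iterative solver works from an array of cosine values and its max/min entries):

```python
import numpy as np

def anchors_2d(P):
    """After projecting data onto a random plane, the two anchors
    (extreme rays of the conical hull of the rows of P) are the points
    of minimal and maximal polar angle."""
    angles = np.arctan2(P[:, 1], P[:, 0])
    return int(np.argmin(angles)), int(np.argmax(angles))
```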
| [
"['Tianyi Zhou' 'Jeff Bilmes' 'Carlos Guestrin']",
"Tianyi Zhou and Jeff Bilmes and Carlos Guestrin"
]
|
cs.CV cs.LG | null | 1406.5910 | null | null | http://arxiv.org/pdf/1406.5910v1 | 2014-06-23T14:06:24Z | 2014-06-23T14:06:24Z | Multi-utility Learning: Structured-output Learning with Multiple
Annotation-specific Loss Functions | Structured-output learning is a challenging problem; particularly so because
of the difficulty in obtaining large datasets of fully labelled instances for
training. In this paper we try to overcome this difficulty by presenting a
multi-utility learning framework for structured prediction that can learn from
training instances with different forms of supervision. We propose a unified
technique for inferring the loss functions most suitable for quantifying the
consistency of solutions with the given weak annotation. We demonstrate the
effectiveness of our framework on the challenging semantic image segmentation
problem for which a wide variety of annotations can be used. For instance, the
popular training datasets for semantic segmentation are composed of images with
hard-to-generate full pixel labellings, as well as images with easy-to-obtain
weak annotations, such as bounding boxes around objects, or image-level labels
that specify which object categories are present in an image. Experimental
evaluation shows that the use of annotation-specific loss functions
dramatically improves segmentation accuracy compared to the baseline system
where only one type of weak annotation is used.
| [
"Roman Shapovalov, Dmitry Vetrov, Anton Osokin, Pushmeet Kohli",
"['Roman Shapovalov' 'Dmitry Vetrov' 'Anton Osokin' 'Pushmeet Kohli']"
]
|
cs.LG stat.ML | null | 1406.5979 | null | null | http://arxiv.org/pdf/1406.5979v1 | 2014-06-23T17:00:28Z | 2014-06-23T17:00:28Z | Reinforcement and Imitation Learning via Interactive No-Regret Learning | Recent work has demonstrated that problems (particularly imitation learning
and structured prediction) where a learner's predictions influence the
input distribution it is tested on can be naturally addressed by an interactive
approach and analyzed using no-regret online learning. These approaches to
imitation learning, however, neither require nor benefit from information about
the cost of actions. We extend existing results in two directions: first, we
develop an interactive imitation learning approach that leverages cost
information; second, we extend the technique to address reinforcement learning.
The results provide theoretical support to the commonly observed successes of
online approximate policy iteration. Our approach suggests a broad new family
of algorithms and provides a unifying view of existing techniques for imitation
and reinforcement learning.
| [
"Stephane Ross, J. Andrew Bagnell",
"['Stephane Ross' 'J. Andrew Bagnell']"
]
|
cs.LG | null | 1406.6020 | null | null | http://arxiv.org/pdf/1406.6020v1 | 2014-06-23T18:48:59Z | 2014-06-23T18:48:59Z | Stationary Mixing Bandits | We study the bandit problem where arms are associated with stationary
phi-mixing processes and where rewards are therefore dependent: the question
that arises from this setting is that of recovering some independence by
ignoring the value of some rewards. As we shall see, the bandit problem we
tackle requires us to address the exploration/exploitation/independence
trade-off. To do so, we provide a UCB strategy together with a general regret
analysis for the case where the size of the independence blocks (the ignored
rewards) is fixed and we go a step beyond by providing an algorithm that is
able to compute the size of the independence blocks from the data. Finally, we
give an analysis of our bandit problem in the restless case, i.e., in the
situation where the time counters for all mixing processes simultaneously
evolve.
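One plausible reading of the fixed-block strategy, sketched with hypothetical names (the exact index and which within-block rewards are ignored are our choices; the paper additionally learns the block size from data):

```python
import math

def ucb_with_blocks(arms, horizon, block_size):
    """UCB over dependent reward streams: each play occupies a whole
    block, and only the block's first reward is retained so that the
    samples used for estimation are approximately independent once
    block_size exceeds the mixing time."""
    k = len(arms)
    counts, sums, t = [0] * k, [0.0] * k, 0
    while t < horizon:
        if 0 in counts:
            a = counts.index(0)          # play every arm once first
        else:
            a = max(range(k), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        for s in range(block_size):
            r = arms[a]()                # next reward of a mixing process
            if s == 0:                   # ignore the rest of the block
                sums[a] += r
                counts[a] += 1
            t += 1
    return max(range(k), key=lambda i: sums[i] / max(counts[i], 1))
```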
| [
"['Julien Audiffren' 'Liva Ralaivola']",
"Julien Audiffren (CMLA), Liva Ralaivola (LIF)"
]
|
cs.DS cs.LG q-fin.PR | null | 1406.6084 | null | null | http://arxiv.org/pdf/1406.6084v1 | 2014-06-23T20:40:14Z | 2014-06-23T20:40:14Z | From Black-Scholes to Online Learning: Dynamic Hedging under Adversarial
Environments | We consider a non-stochastic online learning approach to price financial
options by modeling the market dynamic as a repeated game between the nature
(adversary) and the investor. We demonstrate that such a framework yields
a structure analogous to that of the Black-Scholes model, the widely popular option
pricing model in stochastic finance, for both European and American options
with convex payoffs. In the case of non-convex options, we construct
approximate pricing algorithms, and demonstrate that their efficiency can be
analyzed through the introduction of an artificial probability measure, in
parallel to the so-called risk-neutral measure in the finance literature, even
though our framework is completely adversarial. Continuous-time convergence
results and extensions to incorporate price jumps are also presented.
| [
"Henry Lam and Zhenming Liu",
"['Henry Lam' 'Zhenming Liu']"
]
|
cs.CL cs.LG | 10.5815/ijigsp.2013.09.02 | 1406.6101 | null | null | http://arxiv.org/abs/1406.6101v1 | 2014-06-23T22:21:17Z | 2014-06-23T22:21:17Z | Improved Frame Level Features and SVM Supervectors Approach for the
Recognition of Emotional States from Speech: Application to categorical and
dimensional states | The purpose of a speech emotion recognition system is to classify speakers'
utterances into different emotional states such as disgust, boredom, sadness,
neutral and happiness. Speech features that are commonly used in speech emotion
recognition rely on global utterance-level prosodic features. In our work, we
evaluate the impact of frame-level feature extraction. The speech samples are
from the Berlin emotional database and the features extracted from these utterances
are energy, different variants of mel frequency cepstrum coefficients, velocity
and acceleration features.
| [
"['Imen Trabelsi' 'Dorra Ben Ayed' 'Noureddine Ellouze']",
"Imen Trabelsi, Dorra Ben Ayed, Noureddine Ellouze"
]
|
cs.LG | null | 1406.6114 | null | null | http://arxiv.org/pdf/1406.6114v1 | 2014-06-24T00:48:23Z | 2014-06-24T00:48:23Z | Mining Recurrent Concepts in Data Streams using the Discrete Fourier
Transform | In this research we address the problem of capturing recurring concepts in a
data stream environment. Recurrence capture enables the re-use of previously
learned classifiers without the need for re-learning while providing for better
accuracy during the concept recurrence interval. We capture concepts by
applying the Discrete Fourier Transform (DFT) to Decision Tree classifiers to
obtain highly compressed versions of the trees at concept drift points in the
stream and store such trees in a repository for future use. Our empirical
results on real world and synthetic data exhibiting varying degrees of
recurrence show that the Fourier compressed trees are more robust to noise and
are able to capture recurring concepts with higher precision than a meta
learning approach that chooses to re-use classifiers in their originally
occurring form.
| [
"Sakthithasan Sripirakas and Russel Pears",
"['Sakthithasan Sripirakas' 'Russel Pears']"
]
|
cs.LG | null | 1406.6130 | null | null | http://arxiv.org/pdf/1406.6130v1 | 2014-06-24T03:31:16Z | 2014-06-24T03:31:16Z | Generalized Mixability via Entropic Duality | Mixability is a property of a loss which characterizes when fast convergence
is possible in the game of prediction with expert advice. We show that a key
property of mixability generalizes, and the exp and log operations present in
the usual theory are not as special as one might have thought. In doing this we
introduce a more general notion of $\Phi$-mixability where $\Phi$ is a general
entropy (i.e., any convex function on probabilities). We show how a property
shared by the convex dual of any such entropy yields a natural algorithm (the
minimizer of a regret bound) which, analogous to the classical aggregating
algorithm, is guaranteed a constant regret when used with $\Phi$-mixable
losses. We characterize precisely which $\Phi$ have $\Phi$-mixable losses and
put forward a number of conjectures about the optimality and relationships
between different choices of entropy.
| [
"Mark D. Reid and Rafael M. Frongillo and Robert C. Williamson and\n Nishant Mehta",
"['Mark D. Reid' 'Rafael M. Frongillo' 'Robert C. Williamson'\n 'Nishant Mehta']"
]
|
cs.LG cs.CV stat.AP stat.ML | 10.1093/imaiai/iax012 | 1406.6145 | null | null | http://arxiv.org/abs/1406.6145v2 | 2016-06-09T22:58:10Z | 2014-06-24T06:15:07Z | Fast, Robust and Non-convex Subspace Recovery | This work presents a fast and non-convex algorithm for robust subspace
recovery. The data sets considered include inliers drawn around a
low-dimensional subspace of a higher dimensional ambient space, and a possibly
large portion of outliers that do not lie near this subspace. The proposed
algorithm, which we refer to as Fast Median Subspace (FMS), is designed to
robustly determine the underlying subspace of such data sets, while having
lower computational complexity than existing methods. We prove convergence of
the FMS iterates to a stationary point. Further, under a special model of data,
FMS converges to a point which is near to the global minimum with overwhelming
probability. Under this model, we show that the iteration complexity is
globally bounded and locally $r$-linear. The latter theorem holds for any fixed
fraction of outliers (less than 1) and any fixed positive distance between the
limit point and the global minimum. Numerical experiments on synthetic and real
data demonstrate its competitive speed and accuracy.
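In the spirit of the description (a plausible IRLS-style reading, not necessarily the authors' exact update), reweighted PCA that downweights points far from the current subspace looks like:

```python
import numpy as np

def robust_subspace(X, dim, iters=50, eps=1e-10):
    """Iterate PCA with weights inversely proportional to each point's
    distance to the current subspace, so outliers far from the
    subspace contribute little to the next fit."""
    V = np.linalg.svd(X, full_matrices=False)[2][:dim].T  # warm start
    for _ in range(iters):
        resid = X - (X @ V) @ V.T
        w = 1.0 / np.maximum(np.linalg.norm(resid, axis=1), eps)
        Xw = X * np.sqrt(w)[:, None]
        V = np.linalg.svd(Xw, full_matrices=False)[2][:dim].T
    return V                              # columns span the subspace
```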
| [
"Gilad Lerman and Tyler Maunu",
"['Gilad Lerman' 'Tyler Maunu']"
]
|
cs.LG | null | 1406.6176 | null | null | http://arxiv.org/pdf/1406.6176v1 | 2014-06-24T09:32:10Z | 2014-06-24T09:32:10Z | Composite Likelihood Estimation for Restricted Boltzmann machines | Learning the parameters of graphical models by maximum likelihood
estimation is generally hard and requires an approximation. Maximum composite
likelihood estimation is a statistical approximation of maximum likelihood
estimation that generalizes maximum pseudo-likelihood estimation to
higher order. In this paper, we propose a composite likelihood
method and investigate its property. Furthermore, we apply our composite
likelihood method to restricted Boltzmann machines.
| [
"Muneki Yasuda, Shun Kataoka, Yuji Waizumi, Kazuyuki Tanaka",
"['Muneki Yasuda' 'Shun Kataoka' 'Yuji Waizumi' 'Kazuyuki Tanaka']"
]
|
stat.ME cs.LG stat.ML | null | 1406.6200 | null | null | http://arxiv.org/pdf/1406.6200v1 | 2014-06-24T10:56:13Z | 2014-06-24T10:56:13Z | Combining predictions from linear models when training and test inputs
differ | Methods for combining predictions from different models in a supervised
learning setting must somehow estimate/predict the quality of a model's
predictions at unknown future inputs. Many of these methods (often implicitly)
make the assumption that the test inputs are identical to the training inputs,
which is seldom reasonable. By failing to take into account that prediction
will generally be harder for test inputs that did not occur in the training
set, this leads to the selection of overly complex models. Based on a novel,
unbiased expression for KL divergence, we propose XAIC and its special case
FAIC as versions of AIC intended for prediction that use different degrees of
knowledge of the test inputs. Both methods substantially differ from and may
outperform all the known versions of AIC even when the training and test inputs
are iid, and are especially useful for deterministic inputs and under covariate
shift. Our experiments on linear models suggest that if the test and training
inputs differ substantially, then XAIC and FAIC predictively outperform AIC,
BIC and several other methods including Bayesian model averaging.
| [
"Thijs van Ommen",
"['Thijs van Ommen']"
]
|
cs.LG cs.CV stat.ML | null | 1406.6247 | null | null | http://arxiv.org/pdf/1406.6247v1 | 2014-06-24T14:16:56Z | 2014-06-24T14:16:56Z | Recurrent Models of Visual Attention | Applying convolutional neural networks to large images is computationally
expensive because the amount of computation scales linearly with the number of
image pixels. We present a novel recurrent neural network model that is capable
of extracting information from an image or video by adaptively selecting a
sequence of regions or locations and only processing the selected regions at
high resolution. Like convolutional neural networks, the proposed model has a
degree of translation invariance built-in, but the amount of computation it
performs can be controlled independently of the input image size. While the
model is non-differentiable, it can be trained using reinforcement learning
methods to learn task-specific policies. We evaluate our model on several image
classification tasks, where it significantly outperforms a convolutional neural
network baseline on cluttered images, and on a dynamic visual control problem,
where it learns to track a simple object without an explicit training signal
for doing so.
| [
"Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu",
"['Volodymyr Mnih' 'Nicolas Heess' 'Alex Graves' 'Koray Kavukcuoglu']"
]
|
cs.CL cs.IR cs.LG | null | 1406.6312 | null | null | http://arxiv.org/pdf/1406.6312v2 | 2014-11-19T00:18:06Z | 2014-06-24T17:10:29Z | Scalable Topical Phrase Mining from Text Corpora | While most topic modeling algorithms model text corpora with unigrams, human
interpretation often relies on inherent grouping of terms into phrases. As
such, we consider the problem of discovering topical phrases of mixed lengths.
Existing work either applies post-processing to the inference results of
unigram-based topic models, or utilizes complex n-gram-discovery topic models.
These methods generally produce low-quality topical phrases or suffer from poor
scalability on even moderately-sized datasets. We propose a different approach
that is both computationally efficient and effective. Our solution combines a
novel phrase mining framework to segment a document into single and multi-word
phrases, and a new topic model that operates on the induced document partition.
Our approach discovers high quality topical phrases with negligible extra cost
to the bag-of-words topic model in a variety of datasets including research
publication titles, abstracts, reviews, and news articles.
| [
"['Ahmed El-Kishky' 'Yanglei Song' 'Chi Wang' 'Clare Voss' 'Jiawei Han']",
"Ahmed El-Kishky, Yanglei Song, Chi Wang, Clare Voss, Jiawei Han"
]
|
cs.LG cs.CV cs.IR stat.ML | null | 1406.6314 | null | null | http://arxiv.org/pdf/1406.6314v1 | 2014-06-23T02:34:34Z | 2014-06-23T02:34:34Z | Further heuristics for $k$-means: The merge-and-split heuristic and the
$(k,l)$-means | Finding the optimal $k$-means clustering is NP-hard in general and many
heuristics have been designed for minimizing monotonically the $k$-means
objective. We first show how to extend Lloyd's batched relocation heuristic and
Hartigan's single-point relocation heuristic to take into account empty-cluster
and single-point cluster events, respectively. Those events tend to occur
increasingly often as $k$ or $d$ increases, or when performing several
restarts. First, we show that those special events are a blessing because they
allow us to partially re-seed some cluster centers while further minimizing the
$k$-means objective function. Second, we describe a novel heuristic,
merge-and-split $k$-means, that consists in merging two clusters and splitting
this merged cluster again with two new centers provided it improves the
$k$-means objective. This novel heuristic can improve Hartigan's $k$-means when
it has converged to a local minimum. We show empirically that this
merge-and-split $k$-means improves over Hartigan's heuristic, which is the
{\em de facto} method of choice. Finally, we propose the $(k,l)$-means
objective that generalizes the $k$-means objective by associating the data
points to their $l$ closest cluster centers, and show how to either directly
convert or iteratively relax the $(k,l)$-means into a $k$-means in order to
reach better local minima.
| [
"Frank Nielsen and Richard Nock",
"['Frank Nielsen' 'Richard Nock']"
]
|
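A hedged sketch of one merge-and-split move in the spirit of the entry above, assuming NumPy arrays; the choice of which clusters to merge and split, and the random re-seeding, are simplifications of the authors' heuristic:

```python
import numpy as np

def assign(X, centers):
    return ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)

def objective(X, centers):
    labels = assign(X, centers)
    return ((X - centers[labels]) ** 2).sum(), labels

def merge_and_split(X, centers, rng):
    """One candidate move: merge the two closest centers, re-split the
    highest-scatter cluster with two randomly drawn seeds, and keep the
    move only if the k-means objective improves. Corner cases (e.g. the
    split cluster coinciding with a merged one) are ignored in this sketch."""
    old_cost, labels = objective(X, centers)
    d = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    np.fill_diagonal(d, np.inf)
    i, j = map(int, np.unravel_index(d.argmin(), d.shape))
    cand = centers.copy()
    cand[i] = X[(labels == i) | (labels == j)].mean(0)      # merged center
    scatter = [((X[labels == c] - centers[c]) ** 2).sum()
               for c in range(len(centers))]
    big = int(np.argmax(scatter))
    pts = X[labels == big]                                  # cluster to split
    cand[j], cand[big] = pts[rng.choice(len(pts), 2, replace=False)]
    new_cost, _ = objective(X, cand)
    return (cand, new_cost) if new_cost < old_cost else (centers, old_cost)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (50, 2)) for m in (0.0, 3.0, 6.0)])
centers = X[rng.choice(len(X), 3, replace=False)]
centers, cost = merge_and_split(X, centers, rng)
print(cost)
```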
cs.LG | null | 1406.6398 | null | null | http://arxiv.org/pdf/1406.6398v1 | 2014-06-24T21:41:03Z | 2014-06-24T21:41:03Z | Incremental Clustering: The Case for Extra Clusters | The explosion in the amount of data available for analysis often necessitates
a transition from batch to incremental clustering methods, which process one
element at a time and typically store only a small subset of the data. In this
paper, we initiate the formal analysis of incremental clustering methods
focusing on the types of cluster structure that they are able to detect. We
find that the incremental setting is strictly weaker than the batch model,
proving that a fundamental class of cluster structures that can readily be
detected in the batch setting is impossible to identify using any incremental
method. Furthermore, we show how the limitations of incremental clustering can
be overcome by allowing additional clusters.
| [
"['Margareta Ackerman' 'Sanjoy Dasgupta']",
"Margareta Ackerman and Sanjoy Dasgupta"
]
|
math.OC cs.DM cs.DS cs.LG cs.NA | null | 1406.6474 | null | null | http://arxiv.org/pdf/1406.6474v3 | 2014-11-05T07:19:00Z | 2014-06-25T06:52:33Z | On the Convergence Rate of Decomposable Submodular Function Minimization | Submodular functions describe a variety of discrete problems in machine
learning, signal processing, and computer vision. However, minimizing
submodular functions poses a number of algorithmic challenges. Recent work
introduced an easy-to-use, parallelizable algorithm for minimizing submodular
functions that decompose as the sum of "simple" submodular functions.
Empirically, this algorithm performs extremely well, but no theoretical
analysis was given. In this paper, we show that the algorithm converges
linearly, and we provide upper and lower bounds on the rate of convergence. Our
proof relies on the geometry of submodular polyhedra and draws on results from
spectral graph theory.
| [
"['Robert Nishihara' 'Stefanie Jegelka' 'Michael I. Jordan']",
"Robert Nishihara, Stefanie Jegelka, Michael I. Jordan"
]
|
cs.CV cs.LG | null | 1406.6507 | null | null | http://arxiv.org/pdf/1406.6507v1 | 2014-06-25T09:35:40Z | 2014-06-25T09:35:40Z | Weakly-supervised Discovery of Visual Pattern Configurations | The increasing prominence of weakly labeled data nurtures a growing demand
for object detection methods that can cope with minimal supervision. We propose
an approach that automatically identifies discriminative configurations of
visual patterns that are characteristic of a given object class. We formulate
the problem as a constrained submodular optimization problem and demonstrate
the benefits of the discovered configurations in remedying mislocalizations and
finding informative positive and negative training examples. Together, these
lead to state-of-the-art weakly-supervised detection results on the challenging
PASCAL VOC dataset.
| [
"['Hyun Oh Song' 'Yong Jae Lee' 'Stefanie Jegelka' 'Trevor Darrell']",
"Hyun Oh Song, Yong Jae Lee, Stefanie Jegelka, Trevor Darrell"
]
|
cs.CV cs.LG physics.med-ph | null | 1406.6568 | null | null | http://arxiv.org/pdf/1406.6568v1 | 2014-06-25T13:50:18Z | 2014-06-25T13:50:18Z | Support vector machine classification of dimensionally reduced
structural MRI images for dementia | We classify very-mild to moderate dementia in patients (CDR ranging from 0 to
2) using a support vector machine classifier acting on a dimensionally reduced
feature set derived from MRI brain scans of the 416 subjects available in the
OASIS-Brains dataset. We use image segmentation and principal component
analysis to reduce the dimensionality of the data. Our resulting feature set
contains 11 features for each subject. Performance of the classifiers is
evaluated using 10-fold cross-validation. Using linear and Gaussian kernels
(Gaussian-kernel results in parentheses),
we obtain a training classification accuracy of 86.4% (90.1%), test accuracy of
85.0% (85.7%), test precision of 68.7% (68.5%), test recall of 68.0% (74.0%),
and test Matthews correlation coefficient of 0.594 (0.616).
| [
"V. A. Miller, S. Erlien, J. Piersol",
"['V. A. Miller' 'S. Erlien' 'J. Piersol']"
]
|
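The pipeline described in the entry above (PCA down to a small feature set, then an SVM evaluated with 10-fold cross-validation) maps directly onto scikit-learn; the sketch below uses synthetic stand-in data since the OASIS-derived features are not reproduced here, and `rbf` stands in for the paper's Gaussian kernel:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the OASIS features: 416 subjects,
# high-dimensional inputs reduced to 11 components, binary label.
rng = np.random.default_rng(0)
X = rng.normal(size=(416, 500))
y = rng.integers(0, 2, size=416)

for kernel in ("linear", "rbf"):           # rbf ~ the paper's Gaussian kernel
    model = make_pipeline(PCA(n_components=11), SVC(kernel=kernel))
    scores = cross_val_score(model, X, y, cv=10)
    print(kernel, scores.mean())
```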
math.NA cs.LG stat.ML | 10.1137/140973529 | 1406.6603 | null | null | http://arxiv.org/abs/1406.6603v3 | 2015-02-02T11:25:41Z | 2014-06-25T15:12:48Z | A scaled gradient projection method for Bayesian learning in dynamical
systems | A crucial task in system identification problems is the selection of the most
appropriate model class; this is classically addressed by resorting to
cross-validation or by asymptotic arguments. As recently suggested in the
literature, this can be addressed in a Bayesian framework, where model
complexity is regulated by a few hyperparameters, which can be estimated via
marginal likelihood maximization. It is thus of primary importance to design
effective optimization methods to solve the corresponding optimization problem.
If the unknown impulse response is modeled as a Gaussian process with a
suitable kernel, the maximization of the marginal likelihood leads to a
challenging nonconvex optimization problem, which requires a stable and
effective solution strategy. In this paper we address this problem by means of
a scaled gradient projection algorithm, in which the scaling matrix and the
steplength parameter play a crucial role in providing a meaningful solution in a
computational time comparable with second order methods. In particular, we
propose both a generalization of the split gradient approach to design the
scaling matrix in the presence of box constraints, and an effective
implementation of the gradient and objective function. The extensive numerical
experiments carried out on several test problems show that our method is very
effective, providing solutions in a few tenths of a second with accuracy
comparable to state-of-the-art approaches. Moreover, the flexibility
of the proposed strategy makes it easily adaptable to a wider range of problems
arising in different areas of machine learning, signal processing and system
identification.
| [
"['Silvia Bonettini' 'Alessandro Chiuso' 'Marco Prato']",
"Silvia Bonettini and Alessandro Chiuso and Marco Prato"
]
|
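A minimal sketch of a scaled gradient projection iteration with box constraints, the core update the paper above builds on; the fixed diagonal scaling and constant steplength here are placeholders for the adaptive choices the authors design:

```python
import numpy as np

def sgp_box(grad, x0, lower, upper, scale, step=1e-2, iters=200):
    """Iterate x <- proj_box(x - step * D * grad(x)). The scaling matrix D
    (here a fixed diagonal) and the steplength are what the paper designs
    carefully; this sketch keeps both fixed."""
    x = x0.copy()
    for _ in range(iters):
        x = x - step * scale * grad(x)
        x = np.clip(x, lower, upper)       # projection onto the box
    return x

# Toy problem: minimize ||x - t||^2 subject to 0 <= x <= 1.
t = np.array([0.2, 1.5, -0.3])
x = sgp_box(lambda x: 2 * (x - t), np.zeros(3), 0.0, 1.0, scale=np.ones(3))
print(x)                                   # approx [0.2, 1.0, 0.0]
```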
stat.ML cs.LG | null | 1406.6618 | null | null | http://arxiv.org/pdf/1406.6618v1 | 2014-06-25T15:48:41Z | 2014-06-25T15:48:41Z | When is it Better to Compare than to Score? | When eliciting judgements from humans for an unknown quantity, one often has
the choice of making direct-scoring (cardinal) or comparative (ordinal)
measurements. In this paper we study the relative merits of either choice,
providing empirical and theoretical guidelines for the selection of a
measurement scheme. We provide empirical evidence based on experiments on
Amazon Mechanical Turk that in a variety of tasks, (pairwise-comparative)
ordinal measurements have lower per-sample noise and are typically faster to
elicit than cardinal ones. Ordinal measurements, however, typically provide less
information. We then consider the popular Thurstone and Bradley-Terry-Luce
(BTL) models for ordinal measurements and characterize the minimax error rates
for estimating the unknown quantity. We compare these minimax error rates to
those under cardinal measurement models and quantify for what noise levels
ordinal measurements are better. Finally, we revisit the data collected from
our experiments and show that fitting these models confirms this prediction:
for tasks where the noise in ordinal measurements is sufficiently low, the
ordinal approach results in smaller estimation errors.
| [
"Nihar B. Shah, Sivaraman Balakrishnan, Joseph Bradley, Abhay Parekh,\n Kannan Ramchandran, Martin Wainwright",
"['Nihar B. Shah' 'Sivaraman Balakrishnan' 'Joseph Bradley' 'Abhay Parekh'\n 'Kannan Ramchandran' 'Martin Wainwright']"
]
|
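Fitting the BTL model mentioned in the entry above from pairwise comparisons can be done with Hunter's classical minorization-maximization updates; the sketch below is a standard implementation of that estimator, not the paper's minimax analysis:

```python
import numpy as np

def btl_mm(wins, iters=100):
    """Fit Bradley-Terry-Luce scores via minorization-maximization.
    wins[i, j] = number of times item i beat item j."""
    n = wins.shape[0]
    comps = wins + wins.T                  # comparisons per pair
    w = np.ones(n)
    for _ in range(iters):
        denom = comps / (w[:, None] + w[None, :])
        np.fill_diagonal(denom, 0.0)
        w = wins.sum(1) / denom.sum(1)     # w_i = W_i / sum_j n_ij/(w_i+w_j)
        w /= w.sum()                       # fix the scale
    return w

wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])               # item 0 is clearly the strongest
print(btl_mm(wins))
```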
cs.LG cs.GT | null | 1406.6633 | null | null | http://arxiv.org/pdf/1406.6633v1 | 2014-06-25T16:34:35Z | 2014-06-25T16:34:35Z | Active Learning and Best-Response Dynamics | We examine an important setting for engineered systems in which low-power
distributed sensors are each making highly noisy measurements of some unknown
target function. A center wants to accurately learn this function by querying a
small number of sensors, which ordinarily would be impossible due to the high
noise rate. The question we address is whether local communication among
sensors, together with natural best-response dynamics in an
appropriately-defined game, can denoise the system without destroying the true
signal and allow the center to succeed from only a small number of active
queries. By using techniques from game theory and empirical processes, we prove
positive (and negative) results on the denoising power of several natural
dynamics. We then show experimentally that when combined with recent agnostic
active learning algorithms, this process can achieve low error from very few
queries, performing substantially better than active or passive learning
without these denoising dynamics as well as passive learning with denoising.
| [
"['Maria-Florina Balcan' 'Chris Berlind' 'Avrim Blum' 'Emma Cohen'\n 'Kaushik Patnaik' 'Le Song']",
"Maria-Florina Balcan, Chris Berlind, Avrim Blum, Emma Cohen, Kaushik\n Patnaik, and Le Song"
]
|
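A toy sketch of the denoising intuition from the entry above: sensors repeatedly best-respond by adopting the local majority label. The synchronous update rule and random graph below are illustrative assumptions, not the specific dynamics analyzed in the paper:

```python
import numpy as np

def best_response_denoise(labels, adjacency, rounds=10):
    """Each sensor adopts the majority 0/1 label in its neighborhood
    (including itself); synchronous updates for simplicity."""
    x = labels.astype(float).copy()
    for _ in range(rounds):
        votes = adjacency @ x + x          # neighbors' labels plus own
        degree = adjacency.sum(1) + 1
        x = (votes > degree / 2).astype(float)
    return x

# Sensors observing a constant true signal of 1 with ~30% label noise.
rng = np.random.default_rng(1)
n = 100
noisy = (rng.random(n) > 0.3).astype(float)
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T); np.fill_diagonal(A, 0)
print(noisy.mean(), best_response_denoise(noisy, A).mean())
```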
cs.LG cs.IT math.IT q-fin.ST stat.ML | null | 1406.6651 | null | null | http://arxiv.org/pdf/1406.6651v1 | 2014-06-25T17:46:32Z | 2014-06-25T17:46:32Z | Causality Networks | While correlation measures are used to discern statistical relationships
between observed variables in almost all branches of data-driven scientific
inquiry, what we are really interested in is the existence of causal
dependence. Designing an efficient causality test, one that may be carried out in
the absence of restrictive pre-suppositions on the underlying dynamical
structure of the data at hand, is non-trivial. Nevertheless, the ability to
computationally infer statistical prima facie evidence of causal dependence may
yield a far more discriminative tool for data analysis compared to the
calculation of simple correlations. In the present work, we present a new
non-parametric test of Granger causality for quantized or symbolic data streams
generated by ergodic stationary sources. In contrast to state-of-the-art binary
tests, our approach makes precise and computes the degree of causal dependence
between data streams, without making any restrictive assumptions, linearity or
otherwise. Additionally, without any a priori imposition of specific dynamical
structure, we infer explicit generative models of causal cross-dependence,
which may be then used for prediction. These explicit models are represented as
generalized probabilistic automata, referred to as crossed automata, and are shown
to be sufficient to capture a fairly general class of causal dependence. The
proposed algorithms are computationally efficient in the PAC sense; $i.e.$, we
find good models of cross-dependence with high probability, with polynomial
run-times and sample complexities. The theoretical results are applied to
weekly search-frequency data from Google Trends API for a chosen set of
socially "charged" keywords. The causality network inferred from this dataset
reveals, quite expectedly, the causal importance of certain keywords. It is
also illustrated that correlation analysis fails to gather such insight.
| [
"Ishanu Chattopadhyay",
"['Ishanu Chattopadhyay']"
]
|
stat.ML cs.LG math.ST stat.TH | null | 1406.6720 | null | null | http://arxiv.org/pdf/1406.6720v1 | 2014-06-25T22:01:56Z | 2014-06-25T22:01:56Z | Mass-Univariate Hypothesis Testing on MEEG Data using Cross-Validation | Recent advances in statistical theory, together with advances in the
computational power of computers, provide alternative methods for
mass-univariate hypothesis testing, in which a large number of univariate tests
can be properly used to compare MEEG data at a large number of time-frequency
points and scalp locations. One of the major problems with this kind of
mass-univariate analysis is the sheer number of hypothesis tests performed.
Hence, procedures that remove or alleviate the increased probability of false
discoveries are crucial for this type of analysis. Here, I propose a new method
for mass-univariate analysis of MEEG data based on a cross-validation scheme.
In this method, I suggest a hierarchical classification procedure under k-fold
cross-validation to detect which sensors, at which time bins and frequency
bins, contribute to discriminating between two different stimuli or tasks. To
achieve this goal, a new feature extraction method based on the discrete
cosine transform (DCT) is employed to take maximum advantage of all three data
dimensions. Employing cross-validation and a hierarchical architecture
alongside the DCT feature space makes this method more reliable and, at the
same time, sensitive enough to detect narrow effects in brain activities.
| [
"Seyed Mostafa Kia",
"['Seyed Mostafa Kia']"
]
|
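A hedged sketch of the DCT feature idea from the entry above, assuming a (sensor x time x frequency) NumPy array; the number of retained coefficients is an illustrative choice, not the paper's setting:

```python
import numpy as np
from scipy.fft import dctn

def dct_features(epoch, keep=4):
    """Apply a multidimensional DCT over the (sensor, time, frequency)
    array and keep only the low-order coefficients, compressing all
    three data dimensions at once."""
    coeffs = dctn(epoch, norm="ortho")
    return coeffs[:keep, :keep, :keep].ravel()

epoch = np.random.randn(32, 100, 20)       # sensors x time bins x freq bins
print(dct_features(epoch).shape)           # (64,)
```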
cs.LG stat.ML | null | 1406.6812 | null | null | http://arxiv.org/pdf/1406.6812v1 | 2014-06-26T08:57:05Z | 2014-06-26T08:57:05Z | Online learning in MDPs with side information | We study online learning of finite Markov decision process (MDP) problems
when a side information vector is available. The problem is motivated by
applications such as clinical trials, recommendation systems, etc. Such
applications have an episodic structure, where each episode corresponds to a
patient/customer. Our objective is to compete with the optimal dynamic policy
that can take side information into account.
We propose a computationally efficient algorithm and show that its regret is
at most $O(\sqrt{T})$, where $T$ is the number of rounds. To the best of our
knowledge, this is the first regret bound for this setting.
| [
"['Yasin Abbasi-Yadkori' 'Gergely Neu']",
"Yasin Abbasi-Yadkori and Gergely Neu"
]
|
cs.LG cs.CV cs.NE | null | 1406.6909 | null | null | http://arxiv.org/pdf/1406.6909v2 | 2015-06-19T11:43:36Z | 2014-06-26T15:07:14Z | Discriminative Unsupervised Feature Learning with Exemplar Convolutional
Neural Networks | Deep convolutional networks have proven to be very successful in learning
task-specific features that allow for unprecedented performance on various
computer vision tasks. Training of such networks mostly follows the supervised
learning paradigm, where sufficiently many input-output pairs are required for
training. Acquisition of large training sets is one of the key challenges when
approaching a new task. In this paper, we aim for generic feature learning and
present an approach for training a convolutional network using only unlabeled
data. To this end, we train the network to discriminate between a set of
surrogate classes. Each surrogate class is formed by applying a variety of
transformations to a randomly sampled 'seed' image patch. In contrast to
supervised network training, the resulting feature representation is not class
specific. It rather provides robustness to the transformations that have been
applied during training. This generic feature representation allows for
classification results that outperform the state of the art for unsupervised
learning on several popular datasets (STL-10, CIFAR-10, Caltech-101,
Caltech-256). While such generic features cannot compete with class specific
features from supervised training on a classification task, we show that they
are advantageous on geometric matching problems, where they also outperform the
SIFT descriptor.
| [
"Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin\n Riedmiller and Thomas Brox",
"['Alexey Dosovitskiy' 'Philipp Fischer' 'Jost Tobias Springenberg'\n 'Martin Riedmiller' 'Thomas Brox']"
]
|
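A minimal sketch of surrogate-class construction as described in the entry above: many transformed copies of one seed patch share a single label. Only rotation and translation are shown here; the paper applies a richer transformation family:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def make_surrogate_class(seed_patch, n_samples, rng):
    """Generate one surrogate class by applying random transformations
    to a single 'seed' patch; all outputs share one label."""
    samples = []
    for _ in range(n_samples):
        t = rotate(seed_patch, angle=rng.uniform(-20, 20), reshape=False)
        t = shift(t, shift=rng.uniform(-3, 3, size=2))
        samples.append(t)
    return np.stack(samples)

rng = np.random.default_rng(0)
seed = np.random.rand(32, 32)
surrogate = make_surrogate_class(seed, n_samples=16, rng=rng)
print(surrogate.shape)                     # (16, 32, 32)
```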
cs.IT cs.LG math.IT | null | 1406.7002 | null | null | http://arxiv.org/pdf/1406.7002v1 | 2014-06-24T09:09:29Z | 2014-06-24T09:09:29Z | A Concise Information-Theoretic Derivation of the Baum-Welch algorithm | We derive the Baum-Welch algorithm for hidden Markov models (HMMs) through an
information-theoretical approach using cross-entropy instead of the Lagrange
multiplier approach that is universal in the machine learning literature. The
proposed approach provides a more concise derivation of the Baum-Welch method
and naturally generalizes to multiple observations.
| [
"['Alireza Nejati' 'Charles Unsworth']",
"Alireza Nejati, Charles Unsworth"
]
|
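For reference, a compact NumPy implementation of the classical Baum-Welch algorithm for a discrete HMM, i.e., the procedure the paper above re-derives (the cross-entropy derivation itself is not reproduced here):

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, iters=50, seed=0):
    """EM re-estimation for a discrete HMM; obs is a 1-D array of
    symbol indices. Uses the standard scaled forward-backward pass."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)
    for _ in range(iters):
        # Forward pass with per-step normalization for stability.
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        # Backward pass reusing the same scaling constants.
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(1, keepdims=True)
        # Expected transition counts.
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / c[t + 1]
        # Re-estimation step.
        pi = gamma[0]
        A = xi / xi.sum(1, keepdims=True)
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(0)
        B /= B.sum(1, keepdims=True)
    return pi, A, B

obs = np.array([0, 1, 0, 0, 1, 1, 1, 0, 0, 0])
pi, A, B = baum_welch(obs, n_states=2, n_symbols=2)
```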
cs.GT cs.LG | null | 1406.7157 | null | null | http://arxiv.org/pdf/1406.7157v3 | 2015-06-17T15:18:31Z | 2014-06-27T11:59:47Z | An Incentive Compatible Multi-Armed-Bandit Crowdsourcing Mechanism with
Quality Assurance | Consider a requester who wishes to crowdsource a series of identical binary
labeling tasks to a pool of workers so as to achieve an assured accuracy for
each task, in a cost-optimal way. The workers are heterogeneous with unknown
but fixed qualities and their costs are private. The problem is to select for
each task an optimal subset of workers so that the outcome obtained from the
selected workers guarantees a target accuracy level. The problem is a
challenging one even in a non-strategic setting, since the accuracy of the
aggregated label depends on unknown qualities. We develop a novel multi-armed
bandit (MAB) mechanism for solving this problem. First, we propose a framework,
Assured Accuracy Bandit (AAB), which leads to an MAB algorithm, Constrained
Confidence Bound for a Non Strategic setting (CCB-NS). We derive an upper bound
on the number of time steps the algorithm chooses a sub-optimal set that
depends on the target accuracy level and true qualities. A more challenging
situation arises when the requester not only has to learn the qualities of the
workers but also to elicit their true costs. We modify the CCB-NS algorithm to
obtain an adaptive, exploration-separated algorithm which we call {\em
Constrained Confidence Bound for a Strategic setting (CCB-S)}. The CCB-S algorithm
produces an ex-post monotone allocation rule and thus can be transformed into
an ex-post incentive compatible and ex-post individually rational mechanism
that learns the qualities of the workers and guarantees a given target accuracy
level in a cost-optimal way. We provide a lower bound on the number of times
any algorithm should select a sub-optimal set, and we see that the lower bound
matches our upper bound up to a constant factor. We provide insights into the
practical implementation of this framework through an illustrative example and
we show the efficacy of our algorithms through simulations.
| [
"['Shweta Jain' 'Sujit Gujar' 'Satyanath Bhat' 'Onno Zoeter' 'Y. Narahari']",
"Shweta Jain, Sujit Gujar, Satyanath Bhat, Onno Zoeter, Y. Narahari"
]
|
q-bio.PE cs.LG stat.ML | null | 1406.7250 | null | null | http://arxiv.org/pdf/1406.7250v3 | 2015-01-06T22:05:57Z | 2014-06-27T18:01:20Z | Reconstructing subclonal composition and evolution from whole genome
sequencing of tumors | Tumors often contain multiple subpopulations of cancerous cells defined by
distinct somatic mutations. We describe a new method, PhyloWGS, that can be
applied to WGS data from one or more tumor samples to reconstruct complete
genotypes of these subpopulations based on variant allele frequencies (VAFs) of
point mutations and population frequencies of structural variations. We
introduce a principled phylogenetic correction for VAFs in loci affected by copy
number alterations and we show that this correction greatly improves subclonal
reconstruction compared to existing methods.
| [
"['Amit G. Deshwar' 'Shankar Vembu' 'Christina K. Yung' 'Gun Ho Jang'\n 'Lincoln Stein' 'Quaid Morris']",
"Amit G. Deshwar, Shankar Vembu, Christina K. Yung, Gun Ho Jang,\n Lincoln Stein, Quaid Morris"
]
|
cs.CL cs.LG | null | 1406.7314 | null | null | http://arxiv.org/pdf/1406.7314v1 | 2014-06-27T20:56:00Z | 2014-06-27T20:56:00Z | On the Use of Different Feature Extraction Methods for Linear and Non
Linear kernels | Speech feature extraction has been a key focus in robust speech
recognition research; it significantly affects recognition performance. In this
paper, we first study a set of different feature extraction methods, such as
linear predictive coding (LPC), mel-frequency cepstral coefficients (MFCC), and
perceptual linear prediction (PLP), together with several feature normalization
techniques such as RASTA filtering and cepstral mean subtraction (CMS). Based on
this, a comparative evaluation of these features is performed on the task of
text-independent speaker identification using a combination of Gaussian mixture
models (GMM) and support vector machines (SVM) with linear and non-linear
kernels.
| [
"Imen Trabelsi and Dorra Ben Ayed",
"['Imen Trabelsi' 'Dorra Ben Ayed']"
]
|
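One configuration from the comparison above, MFCC features followed by cepstral mean subtraction (CMS), is easy to sketch with librosa; the sampling rate, coefficient count, and random test signal are illustrative assumptions:

```python
import numpy as np
import librosa

def mfcc_with_cms(y, sr, n_mfcc=13):
    """MFCC features followed by cepstral mean subtraction, which
    removes the per-utterance channel offset."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc - mfcc.mean(axis=1, keepdims=True)          # CMS

y = np.random.randn(16000).astype(np.float32)   # 1 s of fake audio at 16 kHz
feats = mfcc_with_cms(y, sr=16000)
print(feats.shape)
```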
cs.LG q-fin.ST | null | 1406.7330 | null | null | http://arxiv.org/pdf/1406.7330v1 | 2014-06-27T22:34:47Z | 2014-06-27T22:34:47Z | Stock Market Prediction from WSJ: Text Mining via Sparse Matrix
Factorization | We revisit the problem of predicting directional movements of stock prices
based on news articles: here our algorithm uses daily articles from The Wall
Street Journal to predict the closing stock prices on the same day. We propose
a unified latent space model to characterize the "co-movements" between stock
prices and news articles. Unlike many existing approaches, our new model is
able to simultaneously leverage the correlations: (a) among stock prices, (b)
among news articles, and (c) between stock prices and news articles. Thus, our
model is able to make daily predictions on more than 500 stocks (most of which
are not even mentioned in any news article) while having low complexity. We
carry out extensive backtesting on trading strategies based on our algorithm.
The results show that our model has a substantially better accuracy rate (55.7%)
compared to many widely used algorithms. The return (56%) and Sharpe ratio of
a trading strategy based on our model are also much higher than those of
baseline indices.
| [
"['Felix Ming Fai Wong' 'Zhenming Liu' 'Mung Chiang']",
"Felix Ming Fai Wong, Zhenming Liu, Mung Chiang"
]
|
stat.ML cs.LG cs.NE | null | 1406.7362 | null | null | http://arxiv.org/pdf/1406.7362v1 | 2014-06-28T06:45:51Z | 2014-06-28T06:45:51Z | Exponentially Increasing the Capacity-to-Computation Ratio for
Conditional Computation in Deep Learning | Many state-of-the-art results obtained with deep networks are achieved with
the largest models that could be trained, and if more computation power was
available, we might be able to exploit much larger datasets in order to improve
generalization ability. Whereas in learning algorithms such as decision trees
the ratio of capacity (e.g., the number of parameters) to computation is very
favorable (up to exponentially more parameters than computation), the ratio is
essentially 1 for deep neural networks. Conditional computation has been
proposed as a way to increase the capacity of a deep neural network without
increasing the amount of computation required, by activating some parameters
and computation "on-demand", on a per-example basis. In this note, we propose a
novel parametrization of weight matrices in neural networks which has the
potential to increase the ratio of the number of parameters to computation by
up to an exponential factor. The proposed approach is based on turning on some
parameters
(weight matrices) when specific bit patterns of hidden unit activations are
obtained. In order to better control for the overfitting that might result, we
propose a parametrization that is tree-structured, where each node of the tree
corresponds to a prefix of a sequence of sign bits, or gating units, associated
with hidden units.
| [
"Kyunghyun Cho and Yoshua Bengio",
"['Kyunghyun Cho' 'Yoshua Bengio']"
]
|
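A toy sketch of the sign-bit gating idea from the entry above: the sign pattern of a few hidden units selects which weight matrix is activated for the example. The flat lookup below collapses the note's tree-structured prefix parametrization and is purely illustrative:

```python
import numpy as np

def gated_layer(h, weight_bank, prefix_bits=3):
    """The sign pattern of the first few hidden units indexes into a bank
    of weight matrices, so only one matrix out of 2^prefix_bits is touched
    per example; capacity grows with the bank size, computation does not."""
    bits = (h[:prefix_bits] > 0).astype(int)
    index = int("".join(map(str, bits)), 2)    # which weight matrix to activate
    return np.tanh(weight_bank[index] @ h)

rng = np.random.default_rng(0)
bank = rng.normal(size=(8, 16, 16)) * 0.1      # 2^3 candidate weight matrices
h = rng.normal(size=16)
out = gated_layer(h, bank)
print(out.shape)                               # (16,)
```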