categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
cs.RO cs.AI cs.LG cs.SY | 10.1145/2696454.2696455 | 1405.6341 | null | null | http://arxiv.org/abs/1405.6341v1 | 2014-05-24T20:44:26Z | 2014-05-24T20:44:26Z | Efficient Model Learning for Human-Robot Collaborative Tasks | We present a framework for learning human user models from joint-action
demonstrations that enables the robot to compute a robust policy for a
collaborative task with a human. The learning takes place completely
automatically, without any human intervention. First, we describe the
clustering of demonstrated action sequences into different human types using an
unsupervised learning algorithm. These demonstrated sequences are also used by
the robot to learn a reward function that is representative for each type,
through the employment of an inverse reinforcement learning algorithm. The
learned model is then used as part of a Mixed Observability Markov Decision
Process formulation, wherein the human type is a partially observable variable.
With this framework, we can infer, either offline or online, the human type of
a new user that was not included in the training set, and can compute a policy
for the robot that will be aligned to the preference of this new user and will
be robust to deviations of the human actions from prior demonstrations. Finally
we validate the approach using data collected in human subject experiments, and
conduct proof-of-concept demonstrations in which a person performs a
collaborative task with a small industrial robot.
| [
"Stefanos Nikolaidis, Keren Gu, Ramya Ramakrishnan, and Julie Shah",
"['Stefanos Nikolaidis' 'Keren Gu' 'Ramya Ramakrishnan' 'Julie Shah']"
]
|
cs.CV cs.LG cs.MM | null | 1405.6434 | null | null | http://arxiv.org/pdf/1405.6434v2 | 2015-11-25T22:56:21Z | 2014-05-25T22:35:19Z | Multi-view Metric Learning for Multi-view Video Summarization | Traditional methods on video summarization are designed to generate summaries
for single-view video records; and thus they cannot fully exploit the
redundancy in multi-view video records. In this paper, we present a multi-view
metric learning framework for multi-view video summarization that combines the
advantages of maximum margin clustering with the disagreement minimization
criterion. The learning framework thus has the ability to find a metric that
best separates the data while forcing the learned metric to preserve the
original intrinsic information between data points, for example geometric
information. Facilitated by such a framework, a systematic solution to the
multi-view video summarization problem is developed. To the best of our
knowledge, this is the first work to address multi-view video summarization
from the viewpoint of metric learning. The effectiveness of the proposed method is
demonstrated by experiments.
| [
"Yanwei Fu, Lingbo Wang, Yanwen Guo",
"['Yanwei Fu' 'Lingbo Wang' 'Yanwen Guo']"
]
|
cs.LG math.OC stat.ML | null | 1405.6444 | null | null | http://arxiv.org/pdf/1405.6444v1 | 2014-05-26T01:15:44Z | 2014-05-26T01:15:44Z | The role of dimensionality reduction in linear classification | Dimensionality reduction (DR) is often used as a preprocessing step in
classification, but usually one first fixes the DR mapping, possibly using
label information, and then learns a classifier (a filter approach). Best
performance would be obtained by optimizing the classification error jointly
over DR mapping and classifier (a wrapper approach), but this is a difficult
nonconvex problem, particularly with nonlinear DR. Using the method of
auxiliary coordinates, we give a simple, efficient algorithm to train a
combination of nonlinear DR and a classifier, and apply it to an RBF mapping
with a linear SVM. This alternates steps where we train the RBF mapping and a
linear SVM as usual regression and classification, respectively, with a
closed-form step that coordinates both. The resulting nonlinear low-dimensional
classifier achieves classification errors competitive with the state-of-the-art
but is fast at training and testing, and allows the user to trade off runtime
for classification accuracy easily. We then study the role of nonlinear DR in
linear classification, and the interplay between the DR mapping, the number of
latent dimensions and the number of classes. When trained jointly, the DR
mapping takes an extreme role in eliminating variation: it tends to collapse
classes in latent space, erasing all manifold structure, and lay out class
centroids so they are linearly separable with maximum margin.
| [
"['Weiran Wang' 'Miguel Á. Carreira-Perpiñán']",
"Weiran Wang and Miguel \\'A. Carreira-Perpi\\~n\\'an"
]
|
cs.CV cs.LG stat.ML | null | 1405.6472 | null | null | http://arxiv.org/pdf/1405.6472v1 | 2014-05-26T06:25:18Z | 2014-05-26T06:25:18Z | Fast and Robust Archetypal Analysis for Representation Learning | We revisit a pioneer unsupervised learning technique called archetypal
analysis, which is related to successful data analysis methods such as sparse
coding and non-negative matrix factorization. Since it was proposed, archetypal
analysis has not gained much popularity, even though it produces more
interpretable models than other alternatives. Because no efficient
implementation has ever been made publicly available, its application to
important scientific problems may have been severely limited. Our goal is to
bring archetypal analysis back into favour. We propose a fast optimization
scheme using an active-set strategy, and provide an efficient open-source
implementation interfaced with Matlab, R, and Python. Then, we demonstrate the
usefulness of archetypal analysis for computer vision tasks, such as codebook
learning, signal classification, and large image collection visualization.
| [
"Yuansi Chen (EECS, INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire Jean\n Kuntzmann), Julien Mairal (INRIA Grenoble Rh\\^one-Alpes / LJK Laboratoire\n Jean Kuntzmann), Zaid Harchaoui (INRIA Grenoble Rh\\^one-Alpes / LJK\n Laboratoire Jean Kuntzmann)",
"['Yuansi Chen' 'Julien Mairal' 'Zaid Harchaoui']"
]
|
cs.SD cs.LG | 10.7717/peerj.488 | 1405.6524 | null | null | http://arxiv.org/abs/1405.6524v1 | 2014-05-26T09:58:20Z | 2014-05-26T09:58:20Z | Automatic large-scale classification of bird sounds is strongly improved
by unsupervised feature learning | Automatic species classification of birds from their sound is a computational
tool of increasing importance in ecology, conservation monitoring and vocal
communication studies. To make classification useful in practice, it is crucial
to improve its accuracy while ensuring that it can run at big data scales. Many
approaches use acoustic measures based on spectrogram-type data, such as the
Mel-frequency cepstral coefficient (MFCC) features which represent a
manually-designed summary of spectral information. However, recent work in
machine learning has demonstrated that features learnt automatically from data
can often outperform manually-designed feature transforms. Feature learning can
be performed at large scale and "unsupervised", meaning it requires no manual
data labelling, yet it can improve performance on "supervised" tasks such as
classification. In this work we introduce a technique for feature learning from
large volumes of bird sound recordings, inspired by techniques that have proven
useful in other domains. We experimentally compare twelve different feature
representations derived from the Mel spectrum (of which six use this
technique), using four large and diverse databases of bird vocalisations, with
a random forest classifier. We demonstrate that MFCCs are of limited power in
this context, leading to worse performance than the raw Mel spectral data.
Conversely, we demonstrate that unsupervised feature learning provides a
substantial boost over MFCCs and Mel spectra without adding computational
complexity after the model has been trained. The boost is particularly notable
for single-label classification tasks at large scale. The spectro-temporal
activations learned through our procedure resemble spectro-temporal receptive
fields calculated from avian primary auditory forebrain.
| [
"['Dan Stowell' 'Mark D. Plumbley']",
"Dan Stowell and Mark D. Plumbley"
]
|
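The record above describes unsupervised feature learning on Mel-spectral data. Below is a minimal illustrative sketch of that general idea, not the paper's pipeline: a dictionary is learned from unlabeled spectrogram frames and each recording is encoded against it. The libraries (librosa, scikit-learn), the k-means-style dictionary, and all parameter values are assumptions.
```python
# Minimal sketch: unsupervised feature learning on Mel spectra, in the spirit of
# the abstract above. Library choice (librosa, scikit-learn), the k-means-style
# dictionary, and all parameter values are illustrative assumptions, not the
# paper's exact pipeline.
import numpy as np
import librosa
from sklearn.cluster import MiniBatchKMeans

def mel_frames(path, n_mels=40):
    """Load audio and return log-Mel frames, one row per time frame."""
    y, sr = librosa.load(path, sr=None, mono=True)
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log1p(m).T  # shape (n_frames, n_mels)

def learn_dictionary(frames, n_atoms=500):
    """Learn a feature dictionary from unlabeled frames (no labels needed)."""
    frames = frames / (np.linalg.norm(frames, axis=1, keepdims=True) + 1e-8)
    km = MiniBatchKMeans(n_clusters=n_atoms, random_state=0).fit(frames)
    return km.cluster_centers_

def encode(frames, dictionary):
    """Encode a recording as max-pooled activations against the dictionary."""
    acts = frames @ dictionary.T   # similarity to each learned atom
    return acts.max(axis=0)        # one fixed-length feature vector per clip

# Usage ("clip.wav" is a placeholder path):
# D = learn_dictionary(mel_frames("clip.wav")); features = encode(mel_frames("clip.wav"), D)
```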
cs.CV cs.GR cs.LG | 10.1007/s11263-014-0754-0 | 1405.6563 | null | null | http://arxiv.org/abs/1405.6563v1 | 2014-05-26T13:12:05Z | 2014-05-26T13:12:05Z | Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D
Articulated Bodies | In motion analysis and understanding it is important to be able to fit a
suitable model or structure to the temporal series of observed data, in order
to describe motion patterns in a compact way, and to discriminate between them.
In an unsupervised context, i.e., no prior model of the moving object(s) is
available, such a structure has to be learned from the data in a bottom-up
fashion. In recent times, volumetric approaches in which the motion is captured
from a number of cameras and a voxel-set representation of the body is built
from the camera views, have gained ground due to attractive features such as
inherent view-invariance and robustness to occlusions. Automatic, unsupervised
segmentation of moving bodies along entire sequences, in a temporally-coherent
and robust way, has the potential to provide a means of constructing a
bottom-up model of the moving body, and track motion cues that may be later
exploited for motion classification. Spectral methods such as locally linear
embedding (LLE) can be useful in this context, as they preserve "protrusions",
i.e., high-curvature regions of the 3D volume, of articulated shapes, while
improving their separation in a lower dimensional space, making them in this
way easier to cluster. In this paper we therefore propose a spectral approach
to unsupervised and temporally-coherent body-protrusion segmentation along time
sequences. Volumetric shapes are clustered in an embedding space, clusters are
propagated in time to ensure coherence, and merged or split to accommodate
changes in the body's topology. Experiments on both synthetic and real
sequences of dense voxel-set data are shown. This supports the ability of the
proposed method to cluster body-parts consistently over time in a totally
unsupervised fashion, its robustness to sampling density and shape quality, and
its potential for bottom-up model construction.
| [
"['Fabio Cuzzolin' 'Diana Mateus' 'Radu Horaud']",
"Fabio Cuzzolin, Diana Mateus and Radu Horaud"
]
|
stat.ML cs.LG | null | 1405.6642 | null | null | http://arxiv.org/pdf/1405.6642v2 | 2015-08-30T18:56:05Z | 2014-05-26T17:07:10Z | Stabilized Nearest Neighbor Classifier and Its Statistical Properties | The stability of statistical analysis is an important indicator for
reproducibility, which is a main principle of the scientific method. It entails
that similar statistical conclusions can be reached based on independent
samples from the same underlying population. In this paper, we introduce a
general measure of classification instability (CIS) to quantify the sampling
variability of the prediction made by a classification method. Interestingly,
the asymptotic CIS of any weighted nearest neighbor classifier turns out to be
proportional to the Euclidean norm of its weight vector. Based on this concise
form, we propose a stabilized nearest neighbor (SNN) classifier, which
distinguishes itself from other nearest neighbor classifiers by taking
stability into consideration. In theory, we prove that SNN attains the minimax
optimal convergence rate in risk, and a sharp convergence rate in CIS. The
latter rate result is established for general plug-in classifiers under a
low-noise condition. Extensive simulated and real examples demonstrate that SNN
achieves a considerable improvement in CIS over existing nearest neighbor
classifiers, with comparable classification accuracy. We implement the
algorithm in a publicly available R package snn.
| [
"['Wei Sun' 'Xingye Qiao' 'Guang Cheng']",
"Wei Sun (Yahoo Labs), Xingye Qiao (Binghamton) and Guang Cheng\n (Purdue)"
]
|
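The abstract above defines classification instability (CIS) as the sampling variability of a classification method's predictions. The sketch below estimates that quantity empirically as the disagreement rate between two copies of the same nearest-neighbor procedure trained on independent samples; the data set, classifier, and sample sizes are illustrative assumptions, and the paper's own SNN method (available in its R package snn) is not reproduced.
```python
# Minimal sketch of the classification-instability (CIS) idea: instability is
# estimated as the disagreement rate between predictions of the same learning
# procedure trained on two independent samples from the same population.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)

# Two independent training samples, plus an evaluation sample on which
# prediction disagreement is measured.
idx = rng.permutation(len(X))
train1, train2, test = idx[:1000], idx[1000:2000], idx[2000:]

def estimate_cis(k):
    f1 = KNeighborsClassifier(n_neighbors=k).fit(X[train1], y[train1])
    f2 = KNeighborsClassifier(n_neighbors=k).fit(X[train2], y[train2])
    return np.mean(f1.predict(X[test]) != f2.predict(X[test]))

for k in (1, 5, 25, 125):
    print(f"k={k:3d}  estimated CIS={estimate_cis(k):.3f}")
# Larger k (more even neighbor weights, smaller weight-vector norm) typically
# yields lower instability, consistent with the proportionality result above.
```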
cs.IT cs.LG math.IT | 10.1109/LSP.2014.2345761 | 1405.6664 | null | null | http://arxiv.org/abs/1405.6664v2 | 2014-08-03T23:00:22Z | 2014-05-26T18:05:18Z | On the Computational Intractability of Exact and Approximate Dictionary
Learning | The efficient sparse coding and reconstruction of signal vectors via linear
observations has received a tremendous amount of attention over the last
decade. In this context, the automated learning of a suitable basis or
overcomplete dictionary from training data sets of certain signal classes for
use in sparse representations has turned out to be of particular importance
regarding practical signal processing applications. Most popular dictionary
learning algorithms involve NP-hard sparse recovery problems in each iteration,
which may give some indication about the complexity of dictionary learning but
does not constitute an actual proof of computational intractability. In this
technical note, we show that learning a dictionary with which a given set of
training signals can be represented as sparsely as possible is indeed NP-hard.
Moreover, we also establish hardness of approximating the solution to within
large factors of the optimal sparsity level. Furthermore, we give NP-hardness
and non-approximability results for a recent dictionary learning variation
called the sensor permutation problem. Along the way, we also obtain a new
non-approximability result for the classical sparse recovery problem from
compressed sensing.
| [
"['Andreas M. Tillmann']",
"Andreas M. Tillmann"
]
|
stat.OT cs.LG math.ST stat.TH | null | 1405.6676 | null | null | http://arxiv.org/pdf/1405.6676v2 | 2014-10-05T06:28:45Z | 2014-05-26T18:44:11Z | Statistique et Big Data Analytics; Volum\'etrie, L'Attaque des Clones | This article assumes that the reader has acquired the skills and expertise of a
statistician in unsupervised (NMF, k-means, SVD) and supervised learning
(regression, CART, random forest). What skills and knowledge must a statistician
acquire to reach the "Volume" scale of big data? After a quick overview of the
different strategies available, especially those imposed by Hadoop, the
algorithms of some available learning methods are outlined in order to
understand how they are adapted to the strong constraints of the Map-Reduce
functionalities.
| [
"Philippe Besse (IMT), Nathalie Villa-Vialaneix (MIAT INRA)",
"['Philippe Besse' 'Nathalie Villa-Vialaneix']"
]
|
cs.LG | 10.1007/978-3-319-07176-3_6 | 1405.6684 | null | null | http://arxiv.org/abs/1405.6684v1 | 2014-05-26T19:00:15Z | 2014-05-26T19:00:15Z | Visualizing Random Forest with Self-Organising Map | Random Forest (RF) is a powerful ensemble method for classification and
regression tasks. It consists of a set of decision trees. Although a single
tree is readily interpretable by a human, the ensemble of trees is a black-box
model. A popular technique for looking inside the RF model is to visualize the
RF proximity matrix obtained on data samples with the Multidimensional Scaling
(MDS) method. Herein, we present a novel method based on Self-Organising Maps
(SOM) for revealing intrinsic relationships in data that lie inside the RF used
for classification tasks. We propose an algorithm to learn the SOM with the
proximity matrix obtained from the RF. The visualizations of the RF proximity
matrix with MDS and SOM are compared. Moreover, the SOM learned with the RF
proximity matrix has better classification accuracy than a SOM learned with the
Euclidean distance. The presented approach enables a better understanding of
the RF and additionally improves the accuracy of the SOM.
| [
"['Piotr Płoński' 'Krzysztof Zaremba']",
"Piotr P{\\l}o\\'nski and Krzysztof Zaremba"
]
|
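The abstract above visualizes the random-forest proximity matrix. The sketch below shows one way to compute that matrix and embed it with MDS, the baseline the paper compares against; the SOM-based visualization itself is not reproduced, and the data set and parameters are illustrative assumptions.
```python
# Minimal sketch of the random-forest proximity matrix: two samples are "close"
# when many trees route them to the same leaf. The 2-D embedding uses MDS on
# the precomputed dissimilarity 1 - proximity.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Leaf index of every sample in every tree: shape (n_samples, n_trees).
leaves = rf.apply(X)
# proximity[i, j] = fraction of trees in which samples i and j share a leaf.
proximity = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)

embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(1.0 - proximity)
print(embedding.shape)  # (150, 2): one 2-D point per sample
```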
cs.LG | null | 1405.6757 | null | null | http://arxiv.org/pdf/1405.6757v1 | 2014-05-26T23:11:40Z | 2014-05-26T23:11:40Z | Proximal Reinforcement Learning: A New Theory of Sequential Decision
Making in Primal-Dual Spaces | In this paper, we set forth a new vision of reinforcement learning developed
by us over the past few years, one that yields mathematically rigorous
solutions to longstanding important questions that have remained unresolved:
(i) how to design reliable, convergent, and robust reinforcement learning
algorithms; (ii) how to guarantee that reinforcement learning satisfies
pre-specified "safety" guarantees and remains in a stable region of the
parameter space; (iii) how to design "off-policy" temporal difference learning
algorithms in a reliable and stable manner; and finally (iv) how to integrate
the study of reinforcement learning into the rich theory of stochastic
optimization. In this paper, we provide detailed answers to all these questions
using the powerful framework of proximal operators.
The key idea that emerges is the use of primal dual spaces connected through
the use of a Legendre transform. This allows temporal difference updates to
occur in dual spaces, allowing a variety of important technical advantages. The
Legendre transform elegantly generalizes past algorithms for solving
reinforcement learning problems, such as natural gradient methods, which we
show relate closely to the previously unconnected framework of mirror descent
methods. Equally importantly, proximal operator theory enables the systematic
development of operator splitting methods that show how to safely and reliably
decompose complex products of gradients that occur in recent variants of
gradient-based temporal difference learning. This key technical innovation
makes it possible to finally design "true" stochastic gradient methods for
reinforcement learning. Finally, Legendre transforms enable a variety of other
benefits, including modeling sparsity and domain geometry. Our work builds
extensively on recent work on the convergence of saddle-point algorithms, and
on the theory of monotone operators.
| [
"['Sridhar Mahadevan' 'Bo Liu' 'Philip Thomas' 'Will Dabney'\n 'Steve Giguere' 'Nicholas Jacek' 'Ian Gemp' 'Ji Liu']",
"Sridhar Mahadevan, Bo Liu, Philip Thomas, Will Dabney, Steve Giguere,\n Nicholas Jacek, Ian Gemp, Ji Liu"
]
|
null | null | 1405.6791 | null | null | http://arxiv.org/pdf/1405.6791v2 | 2015-05-25T21:58:56Z | 2014-05-27T05:33:19Z | Agnostic Learning of Disjunctions on Symmetric Distributions | We consider the problem of approximating and learning disjunctions (or equivalently, conjunctions) on symmetric distributions over $\{0,1\}^n$. Symmetric distributions are distributions whose PDF is invariant under any permutation of the variables. We give a simple proof that for every symmetric distribution $\mathcal{D}$, there exists a set of $n^{O(\log{(1/\epsilon)})}$ functions $\mathcal{S}$, such that for every disjunction $c$, there is a function $p$, expressible as a linear combination of functions in $\mathcal{S}$, such that $p$ $\epsilon$-approximates $c$ in $\ell_1$ distance on $\mathcal{D}$ or $\mathbf{E}_{x \sim \mathcal{D}}[ |c(x)-p(x)| ] \leq \epsilon$. This directly gives an agnostic learning algorithm for disjunctions on symmetric distributions that runs in time $n^{O(\log{(1/\epsilon)})}$. The best known previous bound is $n^{O(1/\epsilon^4)}$ and follows from approximation of the more general class of halfspaces (Wimmer, 2010). We also show that there exists a symmetric distribution $\mathcal{D}$, such that the minimum degree of a polynomial that $1/3$-approximates the disjunction of all $n$ variables in $\ell_1$ distance on $\mathcal{D}$ is $\Omega(\sqrt{n})$. Therefore, the learning result above cannot be achieved via $\ell_1$-regression with a polynomial basis used in most other agnostic learning algorithms. Our technique also gives a simple proof that for any product distribution $\mathcal{D}$ and every disjunction $c$, there exists a polynomial $p$ of degree $O(\log{(1/\epsilon)})$ such that $p$ $\epsilon$-approximates $c$ in $\ell_1$ distance on $\mathcal{D}$. This was first proved by Blais et al. (2008) via a more involved argument. | [
"['Vitaly Feldman' 'Pravesh Kothari']"
]
|
stat.ML cs.LG | null | 1405.6804 | null | null | http://arxiv.org/pdf/1405.6804v2 | 2014-05-28T00:51:08Z | 2014-05-27T06:29:01Z | Layered Logic Classifiers: Exploring the `And' and `Or' Relations | Designing effective and efficient classifiers for pattern analysis is a key
problem in machine learning and computer vision. Many of the solutions to the
problem require performing logic operations such as `and', `or', and `not'.
Classification and regression trees (CART) include these operations explicitly.
Other methods such as neural networks, SVM, and boosting learn/compute a
weighted sum on features (weak classifiers), which weakly perform the 'and' and
'or' operations. However, it is hard for these classifiers to deal with the
'xor' pattern directly. In this paper, we propose layered logic classifiers for
patterns of complicated distributions by combining the `and', `or', and `not'
operations. The proposed algorithm is very general and easy to implement. We
test the classifiers on several typical datasets from the Irvine repository and
two challenging vision applications, object segmentation and pedestrian
detection. We observe significant improvements on all the datasets over the
widely used decision stump based AdaBoost algorithm. The resulting classifiers
have much less training complexity than decision tree based AdaBoost, and can
be applied in a wide range of domains.
| [
"Zhuowen Tu and Piotr Dollar and Yingnian Wu",
"['Zhuowen Tu' 'Piotr Dollar' 'Yingnian Wu']"
]
|
cs.CV cs.LG stat.ML | null | 1405.6914 | null | null | http://arxiv.org/pdf/1405.6914v1 | 2014-05-27T14:02:45Z | 2014-05-27T14:02:45Z | Supervised Dictionary Learning by a Variational Bayesian Group Sparse
Nonnegative Matrix Factorization | Nonnegative matrix factorization (NMF) with group sparsity constraints is
formulated as a probabilistic graphical model and, assuming some observed data
have been generated by the model, a feasible variational Bayesian algorithm is
derived for learning model parameters. When used in a supervised learning
scenario, NMF is most often utilized as an unsupervised feature extractor
followed by classification in the obtained feature subspace. Having mapped the
class labels to a more general concept of groups which underlie sparsity of the
coefficients, what the proposed group sparse NMF model allows is incorporating
class label information to find low dimensional label-driven dictionaries which
not only aim to represent the data faithfully, but are also suitable for class
discrimination. Experiments performed in face recognition and facial expression
recognition domains point to advantages of classification in such label-driven
feature subspaces over classification in feature subspaces obtained in an
unsupervised manner.
| [
"Ivan Ivek",
"['Ivan Ivek']"
]
|
cs.LG cs.CV stat.ML | null | 1405.6922 | null | null | http://arxiv.org/pdf/1405.6922v1 | 2014-05-27T14:18:26Z | 2014-05-27T14:18:26Z | Large Scale, Large Margin Classification using Indefinite Similarity
Measures | Despite the success of the popular kernelized support vector machines, they
have two major limitations: they are restricted to Positive Semi-Definite (PSD)
kernels, and their training complexity scales at least quadratically with the
size of the data. Many natural measures of similarity between pairs of samples
are not PSD, e.g., invariant kernels, and those that are implicitly or explicitly
defined by latent variable models. In this paper, we investigate scalable
approaches for using indefinite similarity measures in large margin frameworks.
In particular we show that a normalization of similarity to a subset of the
data points constitutes a representation suitable for linear classifiers. The
result is a classifier which is competitive with kernelized SVM in terms of
accuracy, despite having better training and test time complexities.
Experimental results demonstrate that on CIFAR-10 dataset, the model equipped
with similarity measures invariant to rigid and non-rigid deformations, can be
made more than 5 times sparser while being more accurate than kernelized SVM
using RBF kernels.
| [
"['Omid Aghazadeh' 'Stefan Carlsson']",
"Omid Aghazadeh and Stefan Carlsson"
]
|
stat.ML cs.LG | null | 1405.6974 | null | null | http://arxiv.org/pdf/1405.6974v1 | 2014-05-27T16:52:49Z | 2014-05-27T16:52:49Z | Futility Analysis in the Cross-Validation of Machine Learning Models | Many machine learning models have important structural tuning parameters that
cannot be directly estimated from the data. The common tactic for setting these
parameters is to use resampling methods, such as cross-validation or the
bootstrap, to evaluate a candidate set of values and choose the best based on
some pre-defined criterion. Unfortunately, this process can be time consuming.
However, the model tuning process can be streamlined by adaptively resampling
candidate values so that settings that are clearly sub-optimal can be
discarded. The notion of futility analysis is introduced in this context. An
example is shown that illustrates how adaptive resampling can be used to reduce
training time. Simulation studies are used to understand how the potential
speed-up is affected by parallel processing techniques.
| [
"['Max Kuhn']",
"Max Kuhn"
]
|
cs.LG cs.CR stat.ML | null | 1405.7085 | null | null | http://arxiv.org/pdf/1405.7085v2 | 2014-10-17T23:49:13Z | 2014-05-27T22:58:26Z | Differentially Private Empirical Risk Minimization: Efficient Algorithms
and Tight Error Bounds | In this paper, we initiate a systematic investigation of differentially
private algorithms for convex empirical risk minimization. Various
instantiations of this problem have been studied before. We provide new
algorithms and matching lower bounds for private ERM assuming only that each
data point's contribution to the loss function is Lipschitz bounded and that
the domain of optimization is bounded. We provide a separate set of algorithms
and matching lower bounds for the setting in which the loss functions are known
to also be strongly convex.
Our algorithms run in polynomial time, and in some cases even match the
optimal non-private running time (as measured by oracle complexity). We give
separate algorithms (and lower bounds) for $(\epsilon,0)$- and
$(\epsilon,\delta)$-differential privacy; perhaps surprisingly, the techniques
used for designing optimal algorithms in the two cases are completely
different.
Our lower bounds apply even to very simple, smooth function families, such as
linear and quadratic functions. This implies that algorithms from previous work
can be used to obtain optimal error rates, under the additional assumption that
the contribution of each data point to the loss function is smooth. We show
that simple approaches to smoothing arbitrary loss functions (in order to apply
previous techniques) do not yield optimal error rates. In particular, optimal
algorithms were not previously known for problems such as training support
vector machines and the high-dimensional median.
| [
"['Raef Bassily' 'Adam Smith' 'Abhradeep Thakurta']",
"Raef Bassily, Adam Smith, Abhradeep Thakurta"
]
|
stat.ML cs.LG | null | 1405.7292 | null | null | http://arxiv.org/pdf/1405.7292v2 | 2014-06-05T15:47:26Z | 2014-05-28T16:08:32Z | An Easy to Use Repository for Comparing and Improving Machine Learning
Algorithm Usage | The results from most machine learning experiments are used for a specific
purpose and then discarded. This results in a significant loss of information
and requires rerunning experiments to compare learning algorithms. This also
requires implementation of another algorithm for comparison, which may not
always be correctly implemented. By storing the results from previous
experiments, machine learning algorithms can be compared easily and the
knowledge gained from them can be used to improve their performance. The
purpose of this work is to provide easy access to previous experimental results
for learning and comparison. These stored results are comprehensive -- storing
the prediction for each test instance as well as the learning algorithm,
hyperparameters, and training set that were used. Previous results are
particularly important for meta-learning, which, in a broad sense, is the
process of learning from previous machine learning results such that the
learning process is improved. While other experiment databases do exist, one of
our focuses is on easy access to the data. We provide meta-learning data sets
that are ready to be downloaded for meta-learning experiments. In addition,
queries to the underlying database can be made if specific information is
desired. We also differ from previous experiment databases in that our
database is designed at the instance level, where an instance is an example in
a data set. We store the predictions of a learning algorithm trained on a
specific training set for each instance in the test set. Data set level
information can then be obtained by aggregating the results from the instances.
The instance level information can be used for many tasks such as determining
the diversity of a classifier or algorithmically determining the optimal subset
of training instances for a learning algorithm.
| [
"Michael R. Smith and Andrew White and Christophe Giraud-Carrier and\n Tony Martinez",
"['Michael R. Smith' 'Andrew White' 'Christophe Giraud-Carrier'\n 'Tony Martinez']"
]
|
cs.LG | null | 1405.7430 | null | null | http://arxiv.org/pdf/1405.7430v1 | 2014-05-29T00:37:28Z | 2014-05-29T00:37:28Z | BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization,
Experimental Design and Bandits | BayesOpt is a library with state-of-the-art Bayesian optimization methods to
solve nonlinear optimization, stochastic bandits or sequential experimental
design problems. Bayesian optimization is sample efficient by building a
posterior distribution to capture the evidence and prior knowledge for the
target function. Built in standard C++, the library is extremely efficient
while being portable and flexible. It includes a common interface for C, C++,
Python, Matlab and Octave.
| [
"Ruben Martinez-Cantin",
"['Ruben Martinez-Cantin']"
]
|
cs.IT cs.LG math.IT | null | 1405.7460 | null | null | http://arxiv.org/pdf/1405.7460v1 | 2014-05-29T04:35:51Z | 2014-05-29T04:35:51Z | Universal Compression of Envelope Classes: Tight Characterization via
Poisson Sampling | The Poisson-sampling technique eliminates dependencies among symbol
appearances in a random sequence. It has been used to simplify the analysis and
strengthen the performance guarantees of randomized algorithms. Applying this
method to universal compression, we relate the redundancies of fixed-length and
Poisson-sampled sequences, and use the relation to derive a simple single-letter
formula that approximates the redundancy of any envelope class to within an
additive logarithmic term. As a first application, we consider i.i.d.
distributions over a small alphabet as a step-envelope class, and provide a
short proof that determines the redundancy of discrete distributions over a
small alphabet up to the first-order terms. We then show the strength of our
method by applying the formula to tighten the existing bounds on the redundancy
of exponential and power-law classes, in particular answering a question posed
by Boucheron, Garivier and Gassiat.
| [
"['Jayadev Acharya' 'Ashkan Jafarpour' 'Alon Orlitsky'\n 'Ananda Theertha Suresh']",
"Jayadev Acharya and Ashkan Jafarpour and Alon Orlitsky and Ananda\n Theertha Suresh"
]
|
cs.LG | null | 1405.7471 | null | null | http://arxiv.org/pdf/1405.7471v1 | 2014-05-29T05:59:26Z | 2014-05-29T05:59:26Z | Effect of Different Distance Measures on the Performance of K-Means
Algorithm: An Experimental Study in Matlab | The K-means algorithm is a very popular
clustering algorithm, famous for its simplicity. The distance measure plays a
very important role in the performance of this algorithm. Different distance
measure techniques are available, but choosing a proper technique for distance
calculation depends entirely on the type of data to be clustered. In this
paper, an experimental study is carried out in Matlab to cluster the iris and
wine data sets with different distance measures, and the resulting variation in
performance is observed.
| [
"Mr. Dibya Jyoti Bora, Dr. Anil Kumar Gupta",
"['Mr. Dibya Jyoti Bora' 'Dr. Anil Kumar Gupta']"
]
|
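The abstract above studies how the choice of distance measure affects k-means. The sketch below is a minimal version of such an experiment in Python rather than Matlab: a Lloyd-style loop with a pluggable distance, run on the iris data with several metrics. The metric list, data set, and purity score are illustrative assumptions; see the comment on the centroid update under non-Euclidean metrics.
```python
# Minimal k-means experiment with a pluggable distance measure. Note that for
# non-Euclidean metrics the mean update below is only a heuristic (the true
# minimizer, e.g. the median for cityblock distance, differs).
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import load_iris

def kmeans(X, k, metric="euclidean", n_iter=100, seed=0):
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = cdist(X, centers, metric=metric).argmin(axis=1)
        new_centers = np.array(
            [X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
             for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

X, y = load_iris(return_X_y=True)
for metric in ("euclidean", "cityblock", "cosine", "chebyshev"):
    labels, _ = kmeans(X, k=3, metric=metric)
    # Crude quality proxy: fraction of points sharing their cluster's majority class.
    purity = sum((np.bincount(y[labels == j]).max() if np.any(labels == j) else 0)
                 for j in range(3)) / len(y)
    print(f"{metric:10s} purity={purity:.3f}")
```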
cs.LG | null | 1405.7624 | null | null | http://arxiv.org/pdf/1405.7624v1 | 2014-05-29T17:32:29Z | 2014-05-29T17:32:29Z | Simultaneous Feature and Expert Selection within Mixture of Experts | A useful strategy to deal with complex classification scenarios is the
"divide and conquer" approach. The mixture of experts (MOE) technique makes use
of this strategy by jointly training a set of classifiers, or experts, that are
specialized in different regions of the input space. A global model, or gate
function, complements the experts by learning a function that weights their
relevance in different parts of the input space. Local feature selection
appears as an attractive alternative to improve the specialization of experts
and gate function, particularly, for the case of high dimensional data. Our
main intuition is that particular subsets of dimensions, or subspaces, are
usually more appropriate to classify instances located in different regions of
the input space. Accordingly, this work contributes a regularized variant
of MoE that incorporates an embedded process for local feature selection using
$L_1$ regularization, together with simultaneous expert selection. The experiments are
still pending.
| [
"Billy Peralta",
"['Billy Peralta']"
]
|
cs.CL cs.IR cs.LG | 10.1613/jair.2964 | 1405.7713 | null | null | http://arxiv.org/abs/1405.7713v1 | 2014-01-16T04:51:47Z | 2014-01-16T04:51:47Z | Using Local Alignments for Relation Recognition | This paper discusses the problem of marrying structural similarity with
semantic relatedness for Information Extraction from text. Aiming at accurate
recognition of relations, we introduce local alignment kernels and explore
various possibilities of using them for this task. We give a definition of a
local alignment (LA) kernel based on the Smith-Waterman score as a sequence
similarity measure and proceed with a range of possibilities for computing
similarity between elements of sequences. We show how distributional similarity
measures obtained from unlabeled data can be incorporated into the learning
task as semantic knowledge. Our experiments suggest that the LA kernel yields
promising results on various biomedical corpora outperforming two baselines by
a large margin. Additional series of experiments have been conducted on the
data sets of seven general relation types, where the performance of the LA
kernel is comparable to the current state-of-the-art results.
| [
"['Sophia Katrenko' 'Pieter Adriaans' 'Maarten van Someren']",
"Sophia Katrenko, Pieter Adriaans, Maarten van Someren"
]
|
cs.LG cs.AI stat.ML | null | 1405.7752 | null | null | http://arxiv.org/pdf/1405.7752v3 | 2014-11-21T10:13:34Z | 2014-05-30T00:35:34Z | Learning to Act Greedily: Polymatroid Semi-Bandits | Many important optimization problems, such as the minimum spanning tree and
minimum-cost flow, can be solved optimally by a greedy method. In this work, we
study a learning variant of these problems, where the model of the problem is
unknown and has to be learned by interacting repeatedly with the environment in
the bandit setting. We formalize our learning problem quite generally, as
learning how to maximize an unknown modular function on a known polymatroid. We
propose a computationally efficient algorithm for solving our problem and bound
its expected cumulative regret. Our gap-dependent upper bound is tight up to a
constant and our gap-free upper bound is tight up to polylogarithmic factors.
Finally, we evaluate our method on three problems and demonstrate that it is
practical.
| [
"Branislav Kveton, Zheng Wen, Azin Ashkan, and Michal Valko",
"['Branislav Kveton' 'Zheng Wen' 'Azin Ashkan' 'Michal Valko']"
]
|
stat.ML cs.LG | null | 1405.7764 | null | null | http://arxiv.org/pdf/1405.7764v3 | 2014-10-07T16:45:06Z | 2014-05-30T02:05:37Z | Generalization Bounds for Learning with Linear, Polygonal, Quadratic and
Conic Side Knowledge | In this paper, we consider a supervised learning setting where side knowledge
is provided about the labels of unlabeled examples. The side knowledge has the
effect of reducing the hypothesis space, leading to tighter generalization
bounds, and thus possibly better generalization. We consider several types of
side knowledge, the first leading to linear and polygonal constraints on the
hypothesis space, the second leading to quadratic constraints, and the last
leading to conic constraints. We show how different types of domain knowledge
can lead directly to these kinds of side knowledge. We prove bounds on
complexity measures of the hypothesis space for quadratic and conic side
knowledge, and show that these bounds are tight in a specific sense for the
quadratic case.
| [
"Theja Tulabandhula and Cynthia Rudin",
"['Theja Tulabandhula' 'Cynthia Rudin']"
]
|
cs.LG | null | 1405.7897 | null | null | http://arxiv.org/pdf/1405.7897v1 | 2014-05-30T15:50:28Z | 2014-05-30T15:50:28Z | Flip-Flop Sublinear Models for Graphs: Proof of Theorem 1 | We prove that there is no class-dual for almost all sublinear models on
graphs.
| [
"['Brijnesh Jain']",
"Brijnesh Jain"
]
|
cs.CL cs.AI cs.LG | null | 1405.7908 | null | null | http://arxiv.org/pdf/1405.7908v1 | 2014-05-30T16:36:07Z | 2014-05-30T16:36:07Z | Semantic Composition and Decomposition: From Recognition to Generation | Semantic composition is the task of understanding the meaning of text by
composing the meanings of the individual words in the text. Semantic
decomposition is the task of understanding the meaning of an individual word by
decomposing it into various aspects (factors, constituents, components) that
are latent in the meaning of the word. We take a distributional approach to
semantics, in which a word is represented by a context vector. Much recent work
has considered the problem of recognizing compositions and decompositions, but
we tackle the more difficult generation problem. For simplicity, we focus on
noun-modifier bigrams and noun unigrams. A test for semantic composition is,
given context vectors for the noun and modifier in a noun-modifier bigram ("red
salmon"), generate a noun unigram that is synonymous with the given bigram
("sockeye"). A test for semantic decomposition is, given a context vector for a
noun unigram ("snifter"), generate a noun-modifier bigram that is synonymous
with the given unigram ("brandy glass"). With a vocabulary of about 73,000
unigrams from WordNet, there are 73,000 candidate unigram compositions for a
bigram and 5,300,000,000 (73,000 squared) candidate bigram decompositions for a
unigram. We generate ranked lists of potential solutions in two passes. A fast
unsupervised learning algorithm generates an initial list of candidates and
then a slower supervised learning algorithm refines the list. We evaluate the
candidate solutions by comparing them to WordNet synonym sets. For
decomposition (unigram to bigram), the top 100 most highly ranked bigrams
include a WordNet synonym of the given unigram 50.7% of the time. For
composition (bigram to unigram), the top 100 most highly ranked unigrams
include a WordNet synonym of the given bigram 77.8% of the time.
| [
"Peter D. Turney",
"['Peter D. Turney']"
]
|
cs.DS cs.LG math.NA | null | 1405.7910 | null | null | http://arxiv.org/pdf/1405.7910v2 | 2014-07-16T14:53:44Z | 2014-05-30T16:44:06Z | Optimal CUR Matrix Decompositions | The CUR decomposition of an $m \times n$ matrix $A$ finds an $m \times c$
matrix $C$ with a subset of $c < n$ columns of $A,$ together with an $r \times
n$ matrix $R$ with a subset of $r < m$ rows of $A,$ as well as a $c \times r$
low-rank matrix $U$ such that the matrix $C U R$ approximates the matrix $A,$
that is, $ || A - CUR ||_F^2 \le (1+\epsilon) || A - A_k||_F^2$, where
$||.||_F$ denotes the Frobenius norm and $A_k$ is the best $m \times n$ matrix
of rank $k$ constructed via the SVD. We present input-sparsity-time and
deterministic algorithms for constructing such a CUR decomposition where
$c=O(k/\epsilon)$ and $r=O(k/\epsilon)$ and rank$(U) = k$. Up to constant
factors, our algorithms are simultaneously optimal in $c, r,$ and rank$(U)$.
| [
"['Christos Boutsidis' 'David P. Woodruff']",
"Christos Boutsidis and David P. Woodruff"
]
|
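The abstract above defines the CUR decomposition $A \approx CUR$ built from actual columns $C$ and rows $R$ of $A$. The sketch below constructs a generic CUR factorization by norm-based sampling and checks the Frobenius error against the best rank-$k$ approximation; it is an illustrative baseline, not the paper's optimal input-sparsity-time algorithm.
```python
# Generic CUR sketch: pick actual columns C and rows R of A, then set U so that
# C @ U @ R best approximates A in Frobenius norm (U = pinv(C) A pinv(R)).
import numpy as np

def cur(A, c, r, seed=0):
    rng = np.random.RandomState(seed)
    p_col = (A ** 2).sum(axis=0)
    p_row = (A ** 2).sum(axis=1)
    cols = rng.choice(A.shape[1], size=c, replace=False, p=p_col / p_col.sum())
    rows = rng.choice(A.shape[0], size=r, replace=False, p=p_row / p_row.sum())
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)  # optimal U given C and R
    return C, U, R

# Noisy low-rank test matrix.
rng = np.random.RandomState(1)
A = rng.randn(200, 30) @ rng.randn(30, 150) + 0.01 * rng.randn(200, 150)
C, U, R = cur(A, c=60, r=60)

k = 30
_, s, _ = np.linalg.svd(A, full_matrices=False)
best_rank_k_err = (s[k:] ** 2).sum()                 # ||A - A_k||_F^2
cur_err = np.linalg.norm(A - C @ U @ R, "fro") ** 2  # ||A - CUR||_F^2
print(cur_err / best_rank_k_err)                     # ratio near 1 is good
```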
stat.ML cs.LG | null | 1406.0013 | null | null | http://arxiv.org/pdf/1406.0013v1 | 2014-05-30T20:45:50Z | 2014-05-30T20:45:50Z | Estimating Vector Fields on Manifolds and the Embedding of Directed
Graphs | This paper considers the problem of embedding directed graphs in Euclidean
space while retaining directional information. We model a directed graph as a
finite set of observations from a diffusion on a manifold endowed with a vector
field. This is the first generative model of its kind for directed graphs. We
introduce a graph embedding algorithm that estimates all three features of this
model: the low-dimensional embedding of the manifold, the data density and the
vector field. In the process, we also obtain new theoretical results on the
limits of "Laplacian type" matrices derived from directed graphs. The
application of our method to both artificially constructed and real data
highlights its strengths.
| [
"Dominique Perrault-Joncas and Marina Meila",
"['Dominique Perrault-Joncas' 'Marina Meila']"
]
|
stat.ML cs.LG | null | 1406.0118 | null | null | http://arxiv.org/pdf/1406.0118v1 | 2014-05-31T23:00:36Z | 2014-05-31T23:00:36Z | Improved graph Laplacian via geometric self-consistency | We address the problem of setting the kernel bandwidth used by Manifold
Learning algorithms to construct the graph Laplacian. Exploiting the connection
between manifold geometry, represented by the Riemannian metric, and the
Laplace-Beltrami operator, we set the bandwidth by optimizing the Laplacian's
ability to preserve the geometry of the data. Experiments show that this
principled approach is effective and robust.
| [
"Dominique Perrault-Joncas and Marina Meila",
"['Dominique Perrault-Joncas' 'Marina Meila']"
]
|
cs.CV cs.LG stat.ML | null | 1406.0156 | null | null | http://arxiv.org/pdf/1406.0156v2 | 2014-11-20T08:58:09Z | 2014-06-01T11:52:19Z | $l_1$-regularized Outlier Isolation and Regression | This paper proposed a new regression model called $l_1$-regularized outlier
isolation and regression (LOIRE) and a fast algorithm based on block coordinate
descent to solve this model. In addition, assuming outliers are gross errors
following a Bernoulli process, this paper also presented a Bernoulli estimate
model which, in theory, should be very accurate and robust due to its complete
elimination of the effects caused by outliers. Though this Bernoulli estimate is
hard to solve, it could be approximately achieved through a process which takes
LOIRE as an important intermediate step. As a result, the approximate Bernoulli
estimate is a good combination of Bernoulli estimate's accuracy and LOIRE
regression's efficiency with several simulations conducted to strongly verify
this point. Moreover, LOIRE can be further extended to realize robust rank
factorization which is powerful in recovering low-rank component from massive
corruptions. Extensive experimental results showed that the proposed method
outperforms state-of-the-art methods like RPCA and GoDec in the aspect of
computation speed with a competitive performance.
| [
"Sheng Han, Suzhen Wang, Xinyu Wu",
"['Sheng Han' 'Suzhen Wang' 'Xinyu Wu']"
]
|
stat.ML cs.LG | null | 1406.0167 | null | null | http://arxiv.org/pdf/1406.0167v3 | 2015-02-06T13:43:54Z | 2014-06-01T14:37:54Z | Feature Selection for Linear SVM with Provable Guarantees | We give two provably accurate feature-selection techniques for the linear
SVM. The algorithms run in deterministic and randomized time respectively. Our
algorithms can be used in an unsupervised or supervised setting. The supervised
approach is based on sampling features from support vectors. We prove that the
margin in the feature space is preserved to within $\epsilon$-relative error of
the margin in the full feature space in the worst-case. In the unsupervised
setting, we also provide worst-case guarantees of the radius of the minimum
enclosing ball, thereby ensuring comparable generalization as in the full
feature space and resolving an open problem posed in Dasgupta et al. We present
extensive experiments on real-world datasets to support our theory and to
demonstrate that our method is competitive and often better than prior
state-of-the-art, for which there are no known provable guarantees.
| [
"['Saurabh Paul' 'Malik Magdon-Ismail' 'Petros Drineas']",
"Saurabh Paul, Malik Magdon-Ismail and Petros Drineas"
]
|
stat.ML cs.LG q-bio.GN q-bio.QM stat.AP | null | 1406.0189 | null | null | http://arxiv.org/pdf/1406.0189v1 | 2014-06-01T18:13:08Z | 2014-06-01T18:13:08Z | Convex Total Least Squares | We study the total least squares (TLS) problem that generalizes least squares
regression by allowing measurement errors in both dependent and independent
variables. TLS is widely used in applied fields including computer vision,
system identification and econometrics. The special case when all dependent and
independent variables have the same level of uncorrelated Gaussian noise, known
as ordinary TLS, can be solved by singular value decomposition (SVD). However,
SVD cannot solve many important practical TLS problems with realistic noise
structure, such as having varying measurement noise, known structure on the
errors, or large outliers requiring robust error-norms. To solve such problems,
we develop convex relaxation approaches for a general class of structured TLS
(STLS). We show both theoretically and experimentally, that while the plain
nuclear norm relaxation incurs large approximation errors for STLS, the
re-weighted nuclear norm approach is very effective, and achieves better
accuracy on challenging STLS problems than popular non-convex solvers. We
describe a fast solution based on augmented Lagrangian formulation, and apply
our approach to an important class of biological problems that use population
average measurements to infer cell-type and physiological-state specific
expression levels that are very hard to measure directly.
| [
"Dmitry Malioutov and Nikolai Slavov",
"['Dmitry Malioutov' 'Nikolai Slavov']"
]
|
stat.ML cs.LG q-bio.MN q-bio.QM stat.AP | null | 1406.0193 | null | null | http://arxiv.org/pdf/1406.0193v1 | 2014-06-01T19:09:14Z | 2014-06-01T19:09:14Z | Inference of Sparse Networks with Unobserved Variables. Application to
Gene Regulatory Networks | Networks are a unifying framework for modeling complex systems and network
inference problems are frequently encountered in many fields. Here, I develop
and apply a generative approach to network inference (RCweb) for the case when
the network is sparse and the latent (not observed) variables affect the
observed ones. From all possible factor analysis (FA) decompositions explaining
the variance in the data, RCweb selects the FA decomposition that is consistent
with a sparse underlying network. The sparsity constraint is imposed by a novel
method that significantly outperforms (in terms of accuracy, robustness to
noise, complexity scaling, and computational efficiency) Bayesian methods and
MLE methods using l1 norm relaxation such as K-SVD and l1--based sparse
principle component analysis (PCA). Results from simulated models demonstrate
that RCweb recovers exactly the model structures for sparsity as low (as
non-sparse) as 50% and with ratio of unobserved to observed variables as high
as 2. RCweb is robust to noise, with gradual decrease in the parameter ranges
as the noise level increases.
| [
"Nikolai Slavov",
"['Nikolai Slavov']"
]
|
cs.LG | 10.1109/TKDE.2014.2327022 | 1406.0223 | null | null | http://arxiv.org/abs/1406.0223v1 | 2014-06-02T00:34:24Z | 2014-06-02T00:34:24Z | Holistic Measures for Evaluating Prediction Models in Smart Grids | The performance of prediction models is often based on "abstract metrics"
that estimate the model's ability to limit residual errors between the observed
and predicted values. However, meaningful evaluation and selection of
prediction models for end-user domains requires holistic and
application-sensitive performance measures. Inspired by energy consumption
prediction models used in the emerging "big data" domain of Smart Power Grids,
we propose a suite of performance measures to rationally compare models along
the dimensions of scale independence, reliability, volatility and cost. We
include both application independent and dependent measures, the latter
parameterized to allow customization by domain experts to fit their scenario.
While our measures are generalizable to other domains, we offer an empirical
analysis using real energy use data for three Smart Grid applications:
planning, customer education and demand response, which are relevant for energy
sustainability. Our results underscore the value of the proposed measures to
offer a deeper insight into models' behavior and their impact on real
applications, which benefit both data mining researchers and practitioners.
| [
"Saima Aman, Yogesh Simmhan, Viktor K. Prasanna",
"['Saima Aman' 'Yogesh Simmhan' 'Viktor K. Prasanna']"
]
|
stat.ML cs.CV cs.LG | 10.1016/j.patrec.2015.03.008 | 1406.0281 | null | null | http://arxiv.org/abs/1406.0281v2 | 2014-10-07T14:55:44Z | 2014-06-02T08:06:12Z | On Classification with Bags, Groups and Sets | Many classification problems can be difficult to formulate directly in terms
of the traditional supervised setting, where both training and test samples are
individual feature vectors. There are cases in which samples are better
described by sets of feature vectors, that labels are only available for sets
rather than individual samples, or, if individual labels are available, that
these are not independent. To better deal with such problems, several
extensions of supervised learning have been proposed, where either training
and/or test objects are sets of feature vectors. However, having been proposed
rather independently of each other, their mutual similarities and differences
have hitherto not been mapped out. In this work, we provide an overview of such
learning scenarios, propose a taxonomy to illustrate the relationships between
them, and discuss directions for further research in these areas.
| [
"Veronika Cheplygina, David M. J. Tax, Marco Loog",
"['Veronika Cheplygina' 'David M. J. Tax' 'Marco Loog']"
]
|
cs.LG stat.ML | null | 1406.0304 | null | null | http://arxiv.org/pdf/1406.0304v1 | 2014-06-02T09:22:49Z | 2014-06-02T09:22:49Z | Transductive Learning for Multi-Task Copula Processes | We tackle the problem of multi-task learning with copula process.
Multivariable prediction in spatial and spatial-temporal processes such as
natural resource estimation and pollution monitoring has typically been
addressed using techniques based on Gaussian processes and co-Kriging. While
the Gaussian prior assumption is convenient from analytical and computational
perspectives, nature is dominated by non-Gaussian likelihoods. Copula processes
are an elegant and flexible solution to handle various non-Gaussian likelihoods
by capturing the dependence structure of random variables with cumulative
distribution functions rather than their marginals. We show how multi-task
learning for copula processes can be used to improve multivariable prediction
for problems where the simple Gaussianity prior assumption does not hold. Then,
we present a transductive approximation for multi-task learning and derive
analytical expressions for the copula process model. The approach is evaluated
and compared to other techniques in one artificial dataset and two publicly
available datasets for natural resource estimation and concrete slump
prediction.
| [
"Markus Schneider and Fabio Ramos",
"['Markus Schneider' 'Fabio Ramos']"
]
|
cs.SY cs.LG math.OC | null | 1406.0554 | null | null | http://arxiv.org/pdf/1406.0554v1 | 2014-06-03T00:00:38Z | 2014-06-03T00:00:38Z | Universal Convexification via Risk-Aversion | We develop a framework for convexifying a fairly general class of
optimization problems. Under additional assumptions, we analyze the
suboptimality of the solution to the convexified problem relative to the
original nonconvex problem and prove additive approximation guarantees. We then
develop algorithms based on stochastic gradient methods to solve the resulting
optimization problems and show bounds on convergence rates. We show a simple
application of this framework to supervised learning, where one can perform
integration explicitly and can use standard (non-stochastic) optimization
algorithms with better convergence guarantees. We then extend this framework to
apply to a general class of discrete-time dynamical systems. In this context,
our convexification approach falls under the well-studied paradigm of
risk-sensitive Markov Decision Processes. We derive the first known model-based
and model-free policy gradient optimization algorithms with guaranteed
convergence to the optimal solution. Finally, we present numerical results
validating our formulation in different applications.
| [
"Krishnamurthy Dvijotham, Maryam Fazel and Emanuel Todorov",
"['Krishnamurthy Dvijotham' 'Maryam Fazel' 'Emanuel Todorov']"
]
|
cs.GT cs.LG | null | 1406.0728 | null | null | http://arxiv.org/pdf/1406.0728v2 | 2014-06-04T07:11:20Z | 2014-06-03T14:41:56Z | A Game-theoretic Machine Learning Approach for Revenue Maximization in
Sponsored Search | Sponsored search is an important monetization channel for search engines, in
which an auction mechanism is used to select the ads shown to users and
determine the prices charged from advertisers. There have been several pieces
of work in the literature that investigate how to design an auction mechanism
in order to optimize the revenue of the search engine. However, due to some
unrealistic assumptions used, the practical values of these studies are not
very clear. In this paper, we propose a novel \emph{game-theoretic machine
learning} approach, which naturally combines machine learning and game theory,
and learns the auction mechanism using a bilevel optimization framework. In
particular, we first learn a Markov model from historical data to describe how
advertisers change their bids in response to an auction mechanism, and then for
any given auction mechanism, we use the learnt model to predict its
corresponding future bid sequences. Next we learn the auction mechanism through
empirical revenue maximization on the predicted bid sequences. We show that the
empirical revenue will converge when the prediction period approaches infinity,
and a Genetic Programming algorithm can effectively optimize this empirical
revenue. Our experiments indicate that the proposed approach is able to produce
a much more effective auction mechanism than several baselines.
| [
"['Di He' 'Wei Chen' 'Liwei Wang' 'Tie-Yan Liu']",
"Di He, Wei Chen, Liwei Wang, Tie-Yan Liu"
]
|
q-fin.ST cs.CE cs.LG q-fin.PM stat.ML | null | 1406.0824 | null | null | http://arxiv.org/pdf/1406.0824v1 | 2014-06-03T19:32:09Z | 2014-06-03T19:32:09Z | Supervised classification-based stock prediction and portfolio
optimization | As the number of publicly traded companies as well as the amount of their
financial data grows rapidly, it is highly desired to have tracking, analysis,
and eventually stock selections automated. There have been few works focusing
on estimating the stock prices of individual companies. However, many of those
have worked with very small number of financial parameters. In this work, we
apply machine learning techniques to address automated stock picking, while
using a larger number of financial parameters for individual companies than the
previous studies. Our approaches are based on the supervision of prediction
parameters using company fundamentals, time-series properties, and correlation
information between different stocks. We examine a variety of supervised
learning techniques and found that using stock fundamentals is a useful
approach for the classification problem, when combined with the high
dimensional data handling capabilities of support vector machine. The portfolio
our system suggests by predicting the behavior of stocks results in a 3% larger
growth on average than the overall market within a 3-month time period, as the
out-of-sample test suggests.
| [
"Sercan Arik, Sukru Burc Eryilmaz, Adam Goldberg",
"['Sercan Arik' 'Sukru Burc Eryilmaz' 'Adam Goldberg']"
]
|
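A minimal sketch of the supervised stock-classification setup described in the abstract above (arXiv:1406.0824): company fundamentals as features, a binary "beat the market" label, and an SVM classifier. The synthetic data, feature count and thresholds are illustrative assumptions, not the paper's dataset or settings.

```python
# Sketch: classify stocks as market out/under-performers from fundamentals
# with an RBF-kernel SVM. All data below is synthetic and illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_stocks, n_fundamentals = 500, 20          # e.g. P/E, ROE, debt ratio, ...
X = rng.normal(size=(n_stocks, n_fundamentals))
# Placeholder label: did the stock beat the market over the next quarter?
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_stocks) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("out-of-sample accuracy:", model.score(X_te, y_te))
```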
cs.CL cs.LG cs.NE stat.ML | null | 1406.1078 | null | null | http://arxiv.org/pdf/1406.1078v3 | 2014-09-03T00:25:02Z | 2014-06-03T17:47:08Z | Learning Phrase Representations using RNN Encoder-Decoder for
Statistical Machine Translation | In this paper, we propose a novel neural network model called RNN
Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN
encodes a sequence of symbols into a fixed-length vector representation, and
the other decodes the representation into another sequence of symbols. The
encoder and decoder of the proposed model are jointly trained to maximize the
conditional probability of a target sequence given a source sequence. The
performance of a statistical machine translation system is empirically found to
improve by using the conditional probabilities of phrase pairs computed by the
RNN Encoder-Decoder as an additional feature in the existing log-linear model.
Qualitatively, we show that the proposed model learns a semantically and
syntactically meaningful representation of linguistic phrases.
| [
"['Kyunghyun Cho' 'Bart van Merrienboer' 'Caglar Gulcehre'\n 'Dzmitry Bahdanau' 'Fethi Bougares' 'Holger Schwenk' 'Yoshua Bengio']",
"Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry\n Bahdanau, Fethi Bougares, Holger Schwenk and Yoshua Bengio"
]
|
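A minimal sketch of the RNN encoder-decoder idea from the abstract above (arXiv:1406.1078): one recurrent network encodes the source sequence into a fixed-length vector, a second decodes the target sequence from it, and both are trained jointly on the target's conditional likelihood. The GRU sizes, vocabularies and toy batch are assumptions for illustration, not the paper's architecture details.

```python
# Sketch: encoder GRU -> fixed-length vector -> decoder GRU with teacher forcing.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.src_emb(src))               # fixed-length summary
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)   # teacher forcing
        return self.out(dec_out)                             # per-step vocabulary logits

src = torch.randint(0, 1000, (8, 12))   # batch of source token ids (toy data)
tgt = torch.randint(0, 1200, (8, 10))   # batch of target token ids (toy data)
model = EncoderDecoder(1000, 1200)
logits = model(src, tgt[:, :-1])
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 1200), tgt[:, 1:].reshape(-1))
loss.backward()
```

In the paper the scores of such a model are used as an extra feature in a log-linear SMT system; the sketch stops at the sequence-level training loss.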
cs.NA cs.LG stat.CO stat.ML | null | 1406.1102 | null | null | http://arxiv.org/pdf/1406.1102v2 | 2015-07-10T14:44:37Z | 2014-06-04T16:37:33Z | Linear Convergence of Variance-Reduced Stochastic Gradient without
Strong Convexity | Stochastic gradient algorithms estimate the gradient based on only one or a
few samples and enjoy low computational cost per iteration. They have been
widely used in large-scale optimization problems. However, stochastic gradient
algorithms are usually slow to converge and achieve sub-linear convergence
rates, due to the inherent variance in the gradient computation. To accelerate
the convergence, some variance-reduced stochastic gradient algorithms, e.g.,
proximal stochastic variance-reduced gradient (Prox-SVRG) algorithm, have
recently been proposed to solve strongly convex problems. Under the strongly
convex condition, these variance-reduced stochastic gradient algorithms achieve
a linear convergence rate. However, many machine learning problems are convex
but not strongly convex. In this paper, we introduce Prox-SVRG and its
projected variant called Variance-Reduced Projected Stochastic Gradient (VRPSG)
to solve a class of non-strongly convex optimization problems widely used in
machine learning. As the main technical contribution of this paper, we show
that both VRPSG and Prox-SVRG achieve a linear convergence rate without strong
convexity. A key ingredient in our proof is a Semi-Strongly Convex (SSC)
inequality which is the first to be rigorously proved for a class of
non-strongly convex problems in both constrained and regularized settings.
Moreover, the SSC inequality is independent of algorithms and may be applied to
analyze other stochastic gradient algorithms besides VRPSG and Prox-SVRG, which
may be of independent interest. To the best of our knowledge, this is the first
work that establishes the linear convergence rate for the variance-reduced
stochastic gradient algorithms on solving both constrained and regularized
problems without strong convexity.
| [
"['Pinghua Gong' 'Jieping Ye']",
"Pinghua Gong and Jieping Ye"
]
|
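A minimal sketch of the Prox-SVRG update analyzed in the abstract above (arXiv:1406.1102), applied to a lasso problem, which is convex but not strongly convex. The step size, epoch count and synthetic data are illustrative assumptions rather than the paper's experimental setup.

```python
# Sketch: proximal stochastic variance-reduced gradient (Prox-SVRG) for lasso.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, d, lam, eta = 200, 50, 0.1, 0.01
A = rng.normal(size=(n, d))
x_true = np.zeros(d); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=n)

w = np.zeros(d)
for epoch in range(30):
    w_snap = w.copy()
    full_grad = A.T @ (A @ w_snap - b) / n           # full gradient at the snapshot
    for _ in range(2 * n):                           # inner stochastic loop
        i = rng.integers(n)
        g_i = A[i] * (A[i] @ w - b[i])               # stochastic gradient at w
        g_i_snap = A[i] * (A[i] @ w_snap - b[i])     # ... and at the snapshot
        v = g_i - g_i_snap + full_grad               # variance-reduced gradient
        w = soft_threshold(w - eta * v, eta * lam)   # proximal (l1) step
    obj = 0.5 * np.mean((A @ w - b) ** 2) + lam * np.abs(w).sum()
    print(f"epoch {epoch:2d}  objective {obj:.4f}")
```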
math.LO cs.LG cs.LO | null | 1406.1111 | null | null | http://arxiv.org/pdf/1406.1111v1 | 2014-06-04T16:59:33Z | 2014-06-04T16:59:33Z | PAC Learning, VC Dimension, and the Arithmetic Hierarchy | We compute that the index set of PAC-learnable concept classes is
$m$-complete $\Sigma^0_3$ within the set of indices for all concept classes of
a reasonable form. All concept classes considered are computable enumerations
of computable $\Pi^0_1$ classes, in a sense made precise here. This family of
concept classes is sufficient to cover all standard examples, and also has the
property that PAC learnability is equivalent to finite VC dimension.
| [
"Wesley Calvert",
"['Wesley Calvert']"
]
|
cs.LG cs.CV | null | 1406.1167 | null | null | http://arxiv.org/pdf/1406.1167v1 | 2014-06-04T09:16:42Z | 2014-06-04T09:16:42Z | Learning to Diversify via Weighted Kernels for Classifier Ensemble | Classifier ensemble generally should combine diverse component classifiers.
However, it is difficult to give a definitive connection between diversity
measure and ensemble accuracy. Given a list of available component classifiers,
how to adaptively and diversely combine classifiers remains a major challenge in
the literature. In this paper, we argue that diversity, not direct diversity on
samples but adaptive diversity with data, is highly correlated to ensemble
accuracy, and we propose a novel technology for classifier ensemble, learning
to diversify, which learns to adaptively combine classifiers by considering
both accuracy and diversity. Specifically, our approach, Learning TO Diversify
via Weighted Kernels (L2DWK), performs classifier combination by optimizing a
direct but simple criterion: maximizing ensemble accuracy and adaptive
diversity simultaneously by minimizing a convex loss function. Given a measure
formulation, the diversity is calculated with weighted kernels (i.e., the
diversity is measured on the component classifiers' outputs which are kernelled
and weighted), and the kernel weights are automatically learned. We minimize
this loss function by estimating the kernel weights in conjunction with the
classifier weights, and propose a self-training algorithm for conducting this
convex optimization procedure iteratively. Extensive experiments on a variety
of 32 UCI classification benchmark datasets show that the proposed approach
consistently outperforms state-of-the-art ensembles such as Bagging, AdaBoost,
Random Forests, Gasen, Regularized Selective Ensemble, and Ensemble Pruning via
Semi-Definite Programming.
| [
"Xu-Cheng Yin and Chun Yang and Hong-Wei Hao",
"['Xu-Cheng Yin' 'Chun Yang' 'Hong-Wei Hao']"
]
|
cs.LG cs.AI stat.ML | null | 1406.1222 | null | null | http://arxiv.org/pdf/1406.1222v2 | 2014-10-31T02:43:28Z | 2014-06-04T21:46:30Z | Discovering Structure in High-Dimensional Data Through Correlation
Explanation | We introduce a method to learn a hierarchy of successively more abstract
representations of complex data based on optimizing an information-theoretic
objective. Intuitively, the optimization searches for a set of latent factors
that best explain the correlations in the data as measured by multivariate
mutual information. The method is unsupervised, requires no model assumptions,
and scales linearly with the number of variables which makes it an attractive
approach for very high dimensional systems. We demonstrate that Correlation
Explanation (CorEx) automatically discovers meaningful structure for data from
diverse sources including personality tests, DNA, and human language.
| [
"Greg Ver Steeg and Aram Galstyan",
"['Greg Ver Steeg' 'Aram Galstyan']"
]
|
stat.ML cs.LG cs.NE | null | 1406.1231 | null | null | http://arxiv.org/pdf/1406.1231v1 | 2014-06-04T23:00:05Z | 2014-06-04T23:00:05Z | Multi-task Neural Networks for QSAR Predictions | Although artificial neural networks have occasionally been used for
Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) studies in
the past, the literature has of late been dominated by other machine learning
techniques such as random forests. However, a variety of new neural net
techniques along with successful applications in other domains have renewed
interest in network approaches. In this work, inspired by the winning team's
use of neural networks in a recent QSAR competition, we used an artificial
neural network to learn a function that predicts activities of compounds for
multiple assays at the same time. We conducted experiments leveraging recent
methods for dealing with overfitting in neural networks as well as other tricks
from the neural networks literature. We compared our methods to alternative
methods reported to perform well on these tasks and found that our neural net
methods provided superior performance.
| [
"George E. Dahl and Navdeep Jaitly and Ruslan Salakhutdinov",
"['George E. Dahl' 'Navdeep Jaitly' 'Ruslan Salakhutdinov']"
]
|
math.OC cs.LG | null | 1406.1305 | null | null | http://arxiv.org/pdf/1406.1305v2 | 2015-08-14T18:15:14Z | 2014-06-05T09:25:22Z | Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets | The Frank-Wolfe method (a.k.a. conditional gradient algorithm) for smooth
optimization has regained much interest in recent years in the context of large
scale optimization and machine learning. A key advantage of the method is that
it avoids projections - the computational bottleneck in many applications -
replacing them with a linear optimization step. Despite this advantage, the known
convergence rates of the FW method fall behind standard first order methods for
most settings of interest. It is an active line of research to derive faster
linear optimization-based algorithms for various settings of convex
optimization.
In this paper we consider the special case of optimization over strongly
convex sets, for which we prove that the vanilla FW method converges at a rate
of $\frac{1}{t^2}$. This gives a quadratic improvement in convergence rate
compared to the general case, in which convergence is of the order
$\frac{1}{t}$, and known to be tight. We show that various balls induced by
$\ell_p$ norms, Schatten norms and group norms are strongly convex on one hand
and on the other hand, linear optimization over these sets is straightforward
and admits a closed-form solution. We further show how several previous
fast-rate results for the FW method follow easily from our analysis.
| [
"['Dan Garber' 'Elad Hazan']",
"Dan Garber, Elad Hazan"
]
|
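A minimal sketch of the Frank-Wolfe (conditional gradient) method over a strongly convex set, the setting of the abstract above (arXiv:1406.1305); an l2 ball is used here because its linear minimization oracle has a closed form. The radius, data and step-size rule are illustrative assumptions.

```python
# Sketch: Frank-Wolfe over the l2 ball {w : ||w|| <= r} for a least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
n, d, radius = 100, 20, 1.0
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad(w):                       # gradient of f(w) = 0.5 * ||A w - b||^2
    return A.T @ (A @ w - b)

w = np.zeros(d)
for t in range(200):
    g = grad(w)
    s = -radius * g / np.linalg.norm(g)   # LMO: argmin over the ball of <g, s>
    gamma = 2.0 / (t + 2.0)               # standard open-loop step size
    w = (1 - gamma) * w + gamma * s
print("final objective:", 0.5 * np.linalg.norm(A @ w - b) ** 2)
```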
cs.LG | null | 1406.1385 | null | null | http://arxiv.org/pdf/1406.1385v1 | 2014-06-05T13:44:25Z | 2014-06-05T13:44:25Z | Learning the Information Divergence | Information divergence that measures the difference between two nonnegative
matrices or tensors has found its use in a variety of machine learning
problems. Examples are Nonnegative Matrix/Tensor Factorization, Stochastic
Neighbor Embedding, topic models, and Bayesian network optimization. The
success of such a learning task depends heavily on a suitable divergence. A
large variety of divergences have been suggested and analyzed, but very few
results are available for an objective choice of the optimal divergence for a
given task. Here we present a framework that facilitates automatic selection of
the best divergence among a given family, based on standard maximum likelihood
estimation. We first propose an approximated Tweedie distribution for the
beta-divergence family. Selecting the best beta then becomes a machine learning
problem solved by maximum likelihood. Next, we reformulate alpha-divergence in
terms of beta-divergence, which enables automatic selection of alpha by maximum
likelihood with reuse of the learning principle for beta-divergence.
Furthermore, we show the connections between gamma and beta-divergences as well
as R\'enyi and alpha-divergences, such that our automatic selection framework
is extended to non-separable divergences. Experiments on both synthetic and
real-world data demonstrate that our method can quite accurately select
information divergence across different learning problems and various
divergence families.
| [
"['Onur Dikmen' 'Zhirong Yang' 'Erkki Oja']",
"Onur Dikmen and Zhirong Yang and Erkki Oja"
]
|
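A minimal sketch of the beta-divergence family discussed in the abstract above (arXiv:1406.1385): beta = 0 recovers Itakura-Saito, beta = 1 the generalized KL divergence, and beta = 2 the squared Euclidean distance. The inputs are assumed strictly positive; the toy vectors are illustrative.

```python
# Sketch: the beta-divergence d_beta(x, y), summed over elements.
import numpy as np

def beta_divergence(x, y, beta):
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 0:                                   # Itakura-Saito
        r = x / y
        return np.sum(r - np.log(r) - 1.0)
    if beta == 1:                                   # generalized KL
        return np.sum(x * np.log(x / y) - x + y)
    return np.sum((x ** beta + (beta - 1) * y ** beta
                   - beta * x * y ** (beta - 1)) / (beta * (beta - 1)))

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.5, 1.5, 2.5])
for beta in (0, 0.5, 1, 2):
    print(beta, beta_divergence(x, y, beta))
```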
cs.AI cs.LG stat.ML | null | 1406.1411 | null | null | http://arxiv.org/pdf/1406.1411v2 | 2014-06-06T19:51:07Z | 2014-06-05T15:10:40Z | Advances in Learning Bayesian Networks of Bounded Treewidth | This work presents novel algorithms for learning Bayesian network structures
with bounded treewidth. Both exact and approximate methods are developed. The
exact method combines mixed-integer linear programming formulations for
structure learning and treewidth computation. The approximate method consists
in uniformly sampling $k$-trees (maximal graphs of treewidth $k$), and
subsequently selecting, exactly or approximately, the best structure whose
moral graph is a subgraph of that $k$-tree. Some properties of these methods
are discussed and proven. The approaches are empirically compared to each other
and to a state-of-the-art method for learning bounded treewidth structures on a
collection of public data sets with up to 100 variables. The experiments show
that our exact algorithm outperforms the state of the art, and that the
approximate approach is fairly accurate.
| [
"['Siqi Nie' 'Denis Deratani Maua' 'Cassio Polpo de Campos' 'Qiang Ji']",
"Siqi Nie, Denis Deratani Maua, Cassio Polpo de Campos, Qiang Ji"
]
|
stat.ML cs.LG | null | 1406.1485 | null | null | http://arxiv.org/pdf/1406.1485v3 | 2014-12-06T00:22:00Z | 2014-06-05T19:13:51Z | Iterative Neural Autoregressive Distribution Estimator (NADE-k) | Training of the neural autoregressive density estimator (NADE) can be viewed
as doing one step of probabilistic inference on missing values in data. We
propose a new model that extends this inference scheme to multiple steps,
arguing that it is easier to learn to improve a reconstruction in $k$ steps
rather than to learn to reconstruct in a single inference step. The proposed
model is an unsupervised building block for deep learning that combines the
desirable properties of NADE and multi-predictive training: (1) Its test
likelihood can be computed analytically, (2) it is easy to generate independent
samples from it, and (3) it uses an inference engine that is a superset of
variational inference for Boltzmann machines. The proposed NADE-k is
competitive with the state-of-the-art in density estimation on the two datasets
tested.
| [
"['Tapani Raiko' 'Li Yao' 'Kyunghyun Cho' 'Yoshua Bengio']",
"Tapani Raiko, Li Yao, Kyunghyun Cho and Yoshua Bengio"
]
|
cs.NE cs.AI cs.LG | null | 1406.1509 | null | null | http://arxiv.org/pdf/1406.1509v3 | 2014-06-25T20:12:19Z | 2014-06-05T20:10:48Z | Systematic N-tuple Networks for Position Evaluation: Exceeding 90% in
the Othello League | N-tuple networks have been successfully used as position evaluation functions
for board games such as Othello or Connect Four. The effectiveness of such
networks depends on their architecture, which is determined by the placement of
constituent n-tuples, sequences of board locations, providing input to the
network. The most popular method of placing n-tuples consists in randomly
generating a small number of long, snake-shaped board location sequences. In
comparison, we show that learning n-tuple networks is significantly more
effective if they involve a large number of systematically placed, short,
straight n-tuples. Moreover, we demonstrate that in order to obtain the best
performance and the steepest learning curve for Othello it is enough to use
n-tuples of size just 2, yielding a network consisting of only 288 weights. The
best such network evolved in this study has been evaluated in the online
Othello League, obtaining the performance of nearly 96% --- more than any other
player to date.
| [
"['Wojciech Jaśkowski']",
"Wojciech Ja\\'skowski"
]
|
cs.IR cs.LG | null | 1406.1580 | null | null | http://arxiv.org/pdf/1406.1580v1 | 2014-06-06T04:37:19Z | 2014-06-06T04:37:19Z | Machine learning approach for text and document mining | Text Categorization (TC), also known as Text Classification, is the task of
automatically classifying a set of text documents into different categories
from a predefined set. If a document belongs to exactly one of the categories,
it is a single-label classification task; otherwise, it is a multi-label
classification task. TC uses several tools from Information Retrieval (IR) and
Machine Learning (ML) and has received much attention in recent years from
both academic researchers and industry developers. In this paper, we
first categorize the documents using a KNN-based machine learning approach and
then return the most relevant documents.
| [
"Vishwanath Bijalwan, Pinki Kumari, Jordan Pascual and Vijay Bhaskar\n Semwal",
"['Vishwanath Bijalwan' 'Pinki Kumari' 'Jordan Pascual'\n 'Vijay Bhaskar Semwal']"
]
|
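A minimal sketch of the KNN-based text categorization pipeline described in the abstract above: TF-IDF features plus a k-nearest-neighbour classifier. The toy corpus, labels and k value are placeholder assumptions, not the paper's data or settings.

```python
# Sketch: TF-IDF + k-nearest-neighbour text categorization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

docs = [
    "the striker scored a late goal in the final",
    "the election results were announced by parliament",
    "the midfielder was transferred for a record fee",
    "the senate passed the new budget bill",
]
labels = ["sport", "politics", "sport", "politics"]

clf = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
clf.fit(docs, labels)
print(clf.predict(["a new bill on taxes reached parliament"]))
```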
cs.LG | null | 1406.1584 | null | null | http://arxiv.org/pdf/1406.1584v3 | 2014-11-06T02:56:34Z | 2014-06-06T05:28:48Z | Learning to Discover Efficient Mathematical Identities | In this paper we explore how machine learning techniques can be applied to
the discovery of efficient mathematical identities. We introduce an attribute
grammar framework for representing symbolic expressions. Given a set of grammar
rules we build trees that combine different rules, looking for branches which
yield compositions that are analytically equivalent to a target expression, but
of lower computational complexity. However, as the size of the trees grows
exponentially with the complexity of the target expression, brute force search
is impractical for all but the simplest of expressions. Consequently, we
introduce two novel learning approaches that are able to learn from simpler
expressions to guide the tree search. The first of these is a simple n-gram
model, the other being a recursive neural-network. We show how these approaches
enable us to derive complex identities beyond the reach of brute-force search or
human derivation.
| [
"['Wojciech Zaremba' 'Karol Kurach' 'Rob Fergus']",
"Wojciech Zaremba, Karol Kurach, Rob Fergus"
]
|
cs.LG stat.ML | null | 1406.1621 | null | null | http://arxiv.org/pdf/1406.1621v1 | 2014-06-06T09:33:59Z | 2014-06-06T09:33:59Z | Separable Cosparse Analysis Operator Learning | The ability of having a sparse representation for a certain class of signals
has many applications in data analysis, image processing, and other research
fields. Among sparse representations, the cosparse analysis model has recently
gained increasing interest. Many signals exhibit a multidimensional structure,
e.g. images or three-dimensional MRI scans. Most data analysis and learning
algorithms use vectorized signals and thereby do not account for this
underlying structure. The drawback of not taking the inherent structure into
account is a dramatic increase in computational cost. We propose an algorithm
for learning a cosparse Analysis Operator that adheres to the preexisting
structure of the data, and thus allows for a very efficient implementation.
This is achieved by enforcing a separable structure on the learned operator.
Our learning algorithm is able to deal with multidimensional data of arbitrary
order. We evaluate our method on volumetric data, using the example of
three-dimensional MRI scans.
| [
"Matthias Seibert, Julian W\\\"ormann, R\\'emi Gribonval, Martin\n Kleinsteuber",
"['Matthias Seibert' 'Julian Wörmann' 'Rémi Gribonval'\n 'Martin Kleinsteuber']"
]
|
stat.ML cs.LG | null | 1406.1655 | null | null | http://arxiv.org/pdf/1406.1655v2 | 2014-09-30T08:04:58Z | 2014-06-06T11:53:46Z | Variational inference of latent state sequences using Recurrent Networks | Recent advances in the estimation of deep directed graphical models and
recurrent networks let us contribute to the removal of a blind spot in the area
of probabilistic modelling of time series. The proposed methods i) can infer
distributed latent state-space trajectories with nonlinear transitions, ii)
scale to large data sets thanks to the use of a stochastic objective and fast,
approximate inference, iii) enable the design of rich emission models which iv)
will naturally lead to structured outputs. Two different paths of introducing
latent state sequences are pursued, leading to the variational recurrent auto
encoder (VRAE) and the variational one step predictor (VOSP). The use of
independent Wiener processes as priors on the latent state sequence is a viable
compromise between efficient computation of the Kullback-Leibler divergence
from the variational approximation of the posterior and maintaining a
reasonable belief in the dynamics. We verify our methods empirically, obtaining
results close or superior to the state of the art. We also show qualitative
results for denoising and missing value imputation.
| [
"['Justin Bayer' 'Christian Osendorfer']",
"Justin Bayer, Christian Osendorfer"
]
|
cs.LG q-bio.NC | null | 1406.1770 | null | null | http://arxiv.org/pdf/1406.1770v1 | 2014-06-06T18:49:56Z | 2014-06-06T18:49:56Z | Computational role of eccentricity dependent cortical magnification | We develop a sampling extension of M-theory focused on invariance to scale
and translation. Quite surprisingly, the theory predicts an architecture of
early vision with increasing receptive field sizes and a high resolution fovea
-- in agreement with data about the cortical magnification factor, V1 and the
retina. From the slope of the inverse of the magnification factor, M-theory
predicts a cortical "fovea" in V1 in the order of $40$ by $40$ basic units at
each receptive field size -- corresponding to a foveola of size around $26$
minutes of arc at the highest resolution, $\approx 6$ degrees at the lowest
resolution. It also predicts uniform scale invariance over a fixed range of
scales independently of eccentricity, while translation invariance should
depend linearly on spatial frequency. Bouma's law of crowding follows in the
theory as an effect of cortical area-by-cortical area pooling; the Bouma
constant is the value expected if the signature responsible for recognition in
the crowding experiments originates in V2. From a broader perspective, the
emerging picture suggests that visual recognition under natural conditions
takes place by composing information from a set of fixations, with each
fixation providing recognition from a space-scale image fragment -- that is an
image patch represented at a set of increasing sizes and decreasing
resolutions.
| [
"['Tomaso Poggio' 'Jim Mutch' 'Leyla Isik']",
"Tomaso Poggio, Jim Mutch, Leyla Isik"
]
|
cs.LG | null | 1406.1822 | null | null | http://arxiv.org/pdf/1406.1822v13 | 2015-11-14T23:02:33Z | 2014-06-06T21:52:25Z | Logarithmic Time Online Multiclass prediction | We study the problem of multiclass classification with an extremely large
number of classes (k), with the goal of obtaining train and test time
complexity logarithmic in the number of classes. We develop top-down tree
construction approaches for constructing logarithmic depth trees. On the
theoretical front, we formulate a new objective function, which is optimized at
each node of the tree and creates dynamic partitions of the data which are both
pure (in terms of class labels) and balanced. We demonstrate that under
favorable conditions, we can construct logarithmic depth trees that have leaves
with low label entropy. However, the objective function at the nodes is
challenging to optimize computationally. We address the empirical problem with
a new online decision tree construction procedure. Experiments demonstrate that
this online algorithm quickly achieves improvement in test error compared to
more common logarithmic training time approaches, which makes it a plausible
method in computationally constrained large-k applications.
| [
"Anna Choromanska and John Langford"
]
|
cs.CL cs.LG cs.NE | null | 1406.1827 | null | null | http://arxiv.org/pdf/1406.1827v4 | 2015-05-14T19:37:38Z | 2014-06-06T22:09:27Z | Recursive Neural Networks Can Learn Logical Semantics | Tree-structured recursive neural networks (TreeRNNs) for sentence meaning
have been successful for many applications, but it remains an open question
whether the fixed-length representations that they learn can support tasks as
demanding as logical deduction. We pursue this question by evaluating whether
two such models---plain TreeRNNs and tree-structured neural tensor networks
(TreeRNTNs)---can correctly learn to identify logical relationships such as
entailment and contradiction using these representations. In our first set of
experiments, we generate artificial data from a logical grammar and use it to
evaluate the models' ability to learn to handle basic relational reasoning,
recursive structures, and quantification. We then evaluate the models on the
more natural SICK challenge data. Both models perform competitively on the SICK
data and generalize well in all three experiments on simulated data, suggesting
that they can learn suitable representations for logical inference in natural
language.
| [
"['Samuel R. Bowman' 'Christopher Potts' 'Christopher D. Manning']",
"Samuel R. Bowman, Christopher Potts, Christopher D. Manning"
]
|
cs.NE cs.LG | null | 1406.1831 | null | null | http://arxiv.org/pdf/1406.1831v1 | 2014-06-06T22:49:11Z | 2014-06-06T22:49:11Z | Analyzing noise in autoencoders and deep networks | Autoencoders have emerged as a useful framework for unsupervised learning of
internal representations, and a wide variety of apparently conceptually
disparate regularization techniques have been proposed to generate useful
features. Here we extend existing denoising autoencoders to additionally inject
noise before the nonlinearity, and at the hidden unit activations. We show that
a wide variety of previous methods, including denoising, contractive, and
sparse autoencoders, as well as dropout can be interpreted using this
framework. This noise injection framework reaps practical benefits by providing
a unified strategy to develop new internal representations by designing the
nature of the injected noise. We show that noisy autoencoders outperform
denoising autoencoders at the very task of denoising, and are competitive with
other single-layer techniques on MNIST and CIFAR-10. We also show that types
of noise other than dropout improve performance in a deep network through
sparsifying, decorrelating, and spreading information across representations.
| [
"Ben Poole, Jascha Sohl-Dickstein, Surya Ganguli",
"['Ben Poole' 'Jascha Sohl-Dickstein' 'Surya Ganguli']"
]
|
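A minimal sketch of the noise-injection points described in the abstract above (arXiv:1406.1831): corrupt the input, add noise before the nonlinearity, and add noise to the hidden activations. Only the forward pass and reconstruction loss are shown; the noise scales, layer sizes and stand-in mini-batch are illustrative assumptions, and the function would be plugged into any standard autoencoder trainer.

```python
# Sketch: forward pass of a "noisy" autoencoder with three noise-injection points.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def noisy_autoencoder_forward(x, W, b, W_out, b_out,
                              s_in=0.3, s_pre=0.2, s_hid=0.2):
    x_tilde = x + s_in * rng.normal(size=x.shape)     # input corruption
    pre = x_tilde @ W + b
    pre = pre + s_pre * rng.normal(size=pre.shape)    # noise before the nonlinearity
    h = sigmoid(pre)
    h = h + s_hid * rng.normal(size=h.shape)          # noise on hidden activations
    x_hat = sigmoid(h @ W_out + b_out)
    loss = np.mean((x_hat - x) ** 2)                  # reconstruct the clean input
    return x_hat, loss

d_in, d_hid = 784, 256
W = 0.01 * rng.normal(size=(d_in, d_hid)); b = np.zeros(d_hid)
W_out = 0.01 * rng.normal(size=(d_hid, d_in)); b_out = np.zeros(d_in)
x = rng.uniform(size=(32, d_in))                      # stand-in mini-batch
_, loss = noisy_autoencoder_forward(x, W, b, W_out, b_out)
print("reconstruction loss:", loss)
```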
cs.NE cs.LG | null | 1406.1833 | null | null | http://arxiv.org/pdf/1406.1833v2 | 2014-06-10T03:37:45Z | 2014-06-06T23:45:03Z | Unsupervised Feature Learning through Divergent Discriminative Feature
Accumulation | Unlike unsupervised approaches such as autoencoders that learn to reconstruct
their inputs, this paper introduces an alternative approach to unsupervised
feature learning called divergent discriminative feature accumulation (DDFA)
that instead continually accumulates features that make novel discriminations
among the training set. Thus DDFA features are inherently discriminative from
the start even though they are trained without knowledge of the ultimate
classification problem. Interestingly, DDFA also continues to add new features
indefinitely (so it does not depend on a hidden layer size), is not based on
minimizing error, and is inherently divergent instead of convergent, thereby
providing a unique direction of research for unsupervised feature learning. In
this paper the quality of its learned features is demonstrated on the MNIST
dataset, where its performance confirms that indeed DDFA is a viable technique
for learning useful features.
| [
"['Paul A. Szerlip' 'Gregory Morse' 'Justin K. Pugh' 'Kenneth O. Stanley']",
"Paul A. Szerlip, Gregory Morse, Justin K. Pugh, and Kenneth O. Stanley"
]
|
cs.LG | null | 1406.1837 | null | null | http://arxiv.org/pdf/1406.1837v5 | 2016-06-01T05:35:31Z | 2014-06-07T00:24:42Z | A Credit Assignment Compiler for Joint Prediction | Many machine learning applications involve jointly predicting multiple
mutually dependent output variables. Learning to search is a family of methods
where the complex decision problem is cast into a sequence of decisions via a
search space. Although these methods have shown promise both in theory and in
practice, implementing them has been burdensomely awkward. In this paper, we
show the search space can be defined by an arbitrary imperative program,
turning learning to search into a credit assignment compiler. Altogether with
the algorithmic improvements for the compiler, we radically reduce the
complexity of programming and the running time. We demonstrate the feasibility
of our approach on multiple joint prediction tasks. In all cases, we obtain
accuracies as high as alternative approaches, at drastically reduced execution
and programming time.
| [
"['Kai-Wei Chang' 'He He' 'Hal Daumé III' 'John Langford' 'Stephane Ross']",
"Kai-Wei Chang, He He, Hal Daum\\'e III, John Langford, Stephane Ross"
]
|
stat.ML cs.LG | null | 1406.1853 | null | null | http://arxiv.org/pdf/1406.1853v2 | 2014-10-31T23:36:00Z | 2014-06-07T03:02:09Z | Model-based Reinforcement Learning and the Eluder Dimension | We consider the problem of learning to optimize an unknown Markov decision
process (MDP). We show that, if the MDP can be parameterized within some known
function class, we can obtain regret bounds that scale with the dimensionality,
rather than cardinality, of the system. We characterize this dependence
explicitly as $\tilde{O}(\sqrt{d_K d_E T})$ where $T$ is time elapsed, $d_K$ is
the Kolmogorov dimension and $d_E$ is the \emph{eluder dimension}. These
represent the first unified regret bounds for model-based reinforcement
learning and provide state of the art guarantees in several important settings.
Moreover, we present a simple and computationally efficient algorithm
\emph{posterior sampling for reinforcement learning} (PSRL) that satisfies
these bounds.
| [
"Ian Osband, Benjamin Van Roy",
"['Ian Osband' 'Benjamin Van Roy']"
]
|
cs.LG | null | 1406.1856 | null | null | http://arxiv.org/pdf/1406.1856v2 | 2014-10-30T17:40:59Z | 2014-06-07T03:11:05Z | A Drifting-Games Analysis for Online Learning and Applications to
Boosting | We provide a general mechanism to design online learning algorithms based on
a minimax analysis within a drifting-games framework. Different online learning
settings (Hedge, multi-armed bandit problems and online convex optimization)
are studied by converting into various kinds of drifting games. The original
minimax analysis for drifting games is then used and generalized by applying a
series of relaxations, starting from choosing a convex surrogate of the 0-1
loss function. With different choices of surrogates, we not only recover
existing algorithms, but also propose new algorithms that are totally
parameter-free and enjoy other useful properties. Moreover, our drifting-games
framework naturally allows us to study high probability bounds without
resorting to any concentration results, and also a generalized notion of regret
that measures how good the algorithm is compared to all but the top small
fraction of candidates. Finally, we translate our new Hedge algorithm into a
new adaptive boosting algorithm that is computationally faster as shown in
experiments, since it ignores a large number of examples on each round.
| [
"Haipeng Luo and Robert E. Schapire",
"['Haipeng Luo' 'Robert E. Schapire']"
]
|
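A minimal sketch of the classical Hedge (exponential weights) update that the drifting-games analysis in the abstract above (arXiv:1406.1856) recovers and generalizes: a distribution over experts proportional to the exponentiated negative cumulative losses. The learning rate and the random losses are illustrative assumptions; the paper's new variants are parameter-free, which this sketch is not.

```python
# Sketch: Hedge over n experts with losses in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
n_experts, T = 10, 500
eta = np.sqrt(2.0 * np.log(n_experts) / T)      # standard tuning for Hedge

cum_loss = np.zeros(n_experts)
learner_loss = 0.0
for t in range(T):
    w = np.exp(-eta * cum_loss)
    p = w / w.sum()                              # distribution over experts
    losses = rng.uniform(size=n_experts)         # this round's losses (toy adversary)
    learner_loss += p @ losses
    cum_loss += losses

regret = learner_loss - cum_loss.min()
print(f"regret after {T} rounds: {regret:.2f}")
```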
cs.CL cs.LG stat.ML | null | 1406.2035 | null | null | http://arxiv.org/pdf/1406.2035v2 | 2014-11-06T14:26:21Z | 2014-06-08T22:35:09Z | Learning Word Representations with Hierarchical Sparse Coding | We propose a new method for learning word representations using hierarchical
regularization in sparse coding inspired by the linguistic study of word
meanings. We show an efficient learning algorithm based on stochastic proximal
methods that is significantly faster than previous approaches, making it
possible to perform hierarchical sparse coding on a corpus of billions of word
tokens. Experiments on various benchmark tasks---word similarity ranking,
analogies, sentence completion, and sentiment analysis---demonstrate that the
method outperforms or is competitive with state-of-the-art methods. Our word
representations are available at
\url{http://www.ark.cs.cmu.edu/dyogatam/wordvecs/}.
| [
"['Dani Yogatama' 'Manaal Faruqui' 'Chris Dyer' 'Noah A. Smith']",
"Dani Yogatama and Manaal Faruqui and Chris Dyer and Noah A. Smith"
]
|
cs.CV cs.LG cs.NE | null | 1406.2080 | null | null | http://arxiv.org/pdf/1406.2080v4 | 2015-04-10T16:44:00Z | 2014-06-09T05:45:12Z | Training Convolutional Networks with Noisy Labels | The availability of large labeled datasets has allowed Convolutional Network
models to achieve impressive recognition results. However, in many settings
manual annotation of the data is impractical; instead our data has noisy
labels, i.e. there is some freely available label for each image which may or
may not be accurate. In this paper, we explore the performance of
discriminatively-trained Convnets when trained on such noisy data. We introduce
an extra noise layer into the network which adapts the network outputs to match
the noisy label distribution. The parameters of this noise layer can be
estimated as part of the training process and involve simple modifications to
current training infrastructures for deep networks. We demonstrate the
approaches on several datasets, including large scale experiments on the
ImageNet classification benchmark.
| [
"Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev and\n Rob Fergus",
"['Sainbayar Sukhbaatar' 'Joan Bruna' 'Manohar Paluri' 'Lubomir Bourdev'\n 'Rob Fergus']"
]
|
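A minimal sketch of the extra noise layer described in the abstract above (arXiv:1406.2080): the network's class posterior is multiplied by a label-confusion matrix Q, with Q[i, j] the probability of observing noisy label j given true label i, so that the training target becomes the noisy label distribution. The confusion matrix and logits below are illustrative; in the paper Q is estimated during training.

```python
# Sketch: adapting clean class probabilities to a noisy-label distribution.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n_classes = 3
logits = np.array([[2.0, 0.5, -1.0],
                   [0.1, 0.2, 1.5]])             # base network outputs (toy values)
p_clean = softmax(logits)                        # P(true label | x)

# Row-stochastic confusion matrix: 80% kept, 10% flipped to each other class.
Q = np.full((n_classes, n_classes), 0.1) + 0.7 * np.eye(n_classes)

p_noisy = p_clean @ Q                            # P(observed noisy label | x)
print(p_noisy, p_noisy.sum(axis=1))              # rows still sum to 1
```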
stat.ML cs.LG cs.NA math.OC stat.AP | 10.1080/10618600.2015.1054033 | 1406.2082 | null | null | http://arxiv.org/abs/1406.2082v4 | 2015-08-29T00:46:34Z | 2014-06-09T05:50:20Z | Fast and Flexible ADMM Algorithms for Trend Filtering | This paper presents a fast and robust algorithm for trend filtering, a
recently developed nonparametric regression tool. It has been shown that, for
estimating functions whose derivatives are of bounded variation, trend
filtering achieves the minimax optimal error rate, while other popular methods
like smoothing splines and kernels do not. Standing in the way of a more
widespread practical adoption, however, is a lack of scalable and numerically
stable algorithms for fitting trend filtering estimates. This paper presents a
highly efficient, specialized ADMM routine for trend filtering. Our algorithm
is competitive with the specialized interior point methods that are currently
in use, and yet is far more numerically robust. Furthermore, the proposed ADMM
implementation is very simple, and importantly, it is flexible enough to extend
to many interesting related problems, such as sparse trend filtering and
isotonic trend filtering. Software for our method is freely available, in both
the C and R languages.
| [
"['Aaditya Ramdas' 'Ryan J. Tibshirani']",
"Aaditya Ramdas and Ryan J. Tibshirani"
]
|
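A minimal sketch of an ADMM routine for l1 trend filtering, the problem class of the abstract above (arXiv:1406.2082): minimize 0.5*||y - x||^2 + lam*||D x||_1 with D the second-difference operator, after splitting z = D x. This uses a dense solve for clarity; the rho, lam values and synthetic signal are illustrative assumptions, not the paper's specialized solver.

```python
# Sketch: scaled-dual ADMM for l1 trend filtering on a small signal.
import numpy as np

def trend_filter(y, lam=10.0, rho=1.0, n_iter=200):
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)               # (n-2) x n second differences
    M = np.eye(n) + rho * D.T @ D                      # x-update system matrix
    x, z, u = y.copy(), np.zeros(n - 2), np.zeros(n - 2)
    for _ in range(n_iter):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u = u + Dx - z                                  # dual update
    return x

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
y = np.piecewise(t, [t < 0.5, t >= 0.5], [lambda s: 2 * s, lambda s: 2 - 2 * s])
y_noisy = y + 0.05 * rng.normal(size=t.shape)
x_hat = trend_filter(y_noisy)
print("fit error:", np.linalg.norm(x_hat - y))
```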
stat.ML cs.IT cs.LG math.IT math.ST stat.ME stat.TH | null | 1406.2083 | null | null | http://arxiv.org/pdf/1406.2083v2 | 2014-11-24T00:23:35Z | 2014-06-09T05:59:21Z | On the Decreasing Power of Kernel and Distance based Nonparametric
Hypothesis Tests in High Dimensions | This paper is about two related decision theoretic problems, nonparametric
two-sample testing and independence testing. There is a belief that two
recently proposed solutions, based on kernels and distances between pairs of
points, behave well in high-dimensional settings. We identify different sources
of misconception that give rise to the above belief. Specifically, we
differentiate the hardness of estimation of test statistics from the hardness
of testing whether these statistics are zero or not, and explicitly discuss a
notion of "fair" alternative hypotheses for these problems as dimension
increases. We then demonstrate that the power of these tests actually drops
polynomially with increasing dimension against fair alternatives. We end with
some theoretical insights and shed light on the \textit{median heuristic} for
kernel bandwidth selection. Our work advances the current understanding of the
power of modern nonparametric hypothesis tests in high dimensions.
| [
"['Sashank J. Reddi' 'Aaditya Ramdas' 'Barnabás Póczos' 'Aarti Singh'\n 'Larry Wasserman']",
"Sashank J. Reddi, Aaditya Ramdas, Barnab\\'as P\\'oczos, Aarti Singh and\n Larry Wasserman"
]
|
cs.LG cond-mat.mtrl-sci | 10.1162/NECO_a_00694 | 1406.2210 | null | null | http://arxiv.org/abs/1406.2210v2 | 2014-07-14T14:54:22Z | 2014-06-09T15:16:21Z | Memristor models for machine learning | In the quest for alternatives to traditional CMOS, it is being suggested that
digital computing efficiency and power can be improved by matching the
precision to the application. Many applications do not need the high precision
that is being used today. In particular, large gains in area- and power
efficiency could be achieved by dedicated analog realizations of approximate
computing engines. In this work, we explore the use of memristor networks for
analog approximate computation, based on a machine learning framework called
reservoir computing. Most experimental investigations on the dynamics of
memristors focus on their nonvolatile behavior. Hence, the volatility that is
present in the developed technologies is usually unwanted and it is not
included in simulation models. In contrast, in reservoir computing, volatility
is not only desirable but necessary. Therefore, in this work, we propose two
different ways to incorporate it into memristor simulation models. The first is
an extension of Strukov's model and the second is an equivalent Wiener model
approximation. We analyze and compare the dynamical properties of these models
and discuss their implications for the memory and the nonlinear processing
capacity of memristor networks. Our results indicate that device variability,
increasingly causing problems in traditional computer design, is an asset in
the context of reservoir computing. We conclude that, although both models
could lead to useful memristor based reservoir computing systems, their
computational performance will differ. Therefore, experimental modeling
research is required for the development of accurate volatile memristor models.
| [
"Juan Pablo Carbajal and Joni Dambre and Michiel Hermans and Benjamin\n Schrauwen",
"['Juan Pablo Carbajal' 'Joni Dambre' 'Michiel Hermans'\n 'Benjamin Schrauwen']"
]
|
cs.LG cs.IR cs.NE stat.ML | null | 1406.2235 | null | null | http://arxiv.org/pdf/1406.2235v1 | 2014-06-09T16:21:11Z | 2014-06-09T16:21:11Z | A Hybrid Latent Variable Neural Network Model for Item Recommendation | Collaborative filtering is used to recommend items to a user without
requiring a knowledge of the item itself and tends to outperform other
techniques. However, collaborative filtering suffers from the cold-start
problem, which occurs when an item has not yet been rated or a user has not
rated any items. Incorporating additional information, such as item or user
descriptions, into collaborative filtering can address the cold-start problem.
In this paper, we present a neural network model with latent input variables
(latent neural network or LNN) as a hybrid collaborative filtering technique
that addresses the cold-start problem. LNN outperforms a broad selection of
content-based filters (which make recommendations based on item descriptions)
and other hybrid approaches while maintaining the accuracy of state-of-the-art
collaborative filtering techniques.
| [
"['Michael R. Smith' 'Tony Martinez' 'Michael Gashler']",
"Michael R. Smith, Tony Martinez, Michael Gashler"
]
|
stat.ML cs.LG | null | 1406.2237 | null | null | http://arxiv.org/pdf/1406.2237v2 | 2014-10-14T22:31:36Z | 2014-06-09T16:34:51Z | Reducing the Effects of Detrimental Instances | Not all instances in a data set are equally beneficial for inducing a model
of the data. Some instances (such as outliers or noise) can be detrimental.
However, at least initially, the instances in a data set are generally
considered equally in machine learning algorithms. Many current approaches for
handling noisy and detrimental instances make a binary decision about whether
an instance is detrimental or not. In this paper, we 1) extend this paradigm by
weighting the instances on a continuous scale and 2) present a methodology for
measuring how detrimental an instance may be for inducing a model of the data.
We call our method of identifying and weighting detrimental instances reduced
detrimental instance learning (RDIL). We examine RDIL on a set of 54 data sets
and 5 learning algorithms and compare RDIL with other weighting and filtering
approaches. RDIL is especially useful for learning algorithms where every
instance can affect the classification boundary and the training instances are
considered individually, such as multilayer perceptrons trained with
backpropagation (MLPs). Our results also suggest that a more accurate estimate
of which instances are detrimental can have a significant positive impact for
handling them.
| [
"Michael R. Smith, Tony Martinez",
"['Michael R. Smith' 'Tony Martinez']"
]
|
cs.LG cs.CV | null | 1406.2390 | null | null | http://arxiv.org/pdf/1406.2390v2 | 2014-11-03T15:25:16Z | 2014-06-09T23:51:30Z | Unsupervised Deep Haar Scattering on Graphs | The classification of high-dimensional data defined on graphs is particularly
difficult when the graph geometry is unknown. We introduce a Haar scattering
transform on graphs, which computes invariant signal descriptors. It is
implemented with a deep cascade of additions, subtractions and absolute values,
which iteratively compute orthogonal Haar wavelet transforms. Multiscale
neighborhoods of unknown graphs are estimated by minimizing an average total
variation, with a pair matching algorithm of polynomial complexity. Supervised
classification with dimension reduction is tested on databases of scrambled
images, and for signals sampled on unknown irregular grids on a sphere.
| [
"Xu Chen, Xiuyuan Cheng and St\\'ephane Mallat",
"['Xu Chen' 'Xiuyuan Cheng' 'Stéphane Mallat']"
]
|
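A minimal sketch of one Haar scattering layer from the abstract above (arXiv:1406.2390): given a pairing of graph nodes, each pair (p, q) is mapped to the sum x_p + x_q and the absolute difference |x_p - x_q|, and layers are cascaded with new pairings. The node pairings below are illustrative; in the paper they are estimated from the data by pair matching.

```python
# Sketch: one layer of a Haar scattering cascade on a signal over graph nodes.
import numpy as np

def haar_scattering_layer(x, pairs):
    """x: (n_nodes, n_features); pairs: list of (p, q) node index pairs."""
    p = np.array([a for a, _ in pairs])
    q = np.array([b for _, b in pairs])
    sums = x[p] + x[q]
    diffs = np.abs(x[p] - x[q])
    # Each pair becomes one new node carrying twice as many features.
    return np.concatenate([sums, diffs], axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 1))                    # a signal on 8 graph nodes
pairs_layer1 = [(0, 1), (2, 3), (4, 5), (6, 7)]
pairs_layer2 = [(0, 1), (2, 3)]
h1 = haar_scattering_layer(x, pairs_layer1)    # shape (4, 2)
h2 = haar_scattering_layer(h1, pairs_layer2)   # shape (2, 4)
print(h1.shape, h2.shape)
```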
cs.AI cs.LG stat.ML | null | 1406.2395 | null | null | http://arxiv.org/pdf/1406.2395v1 | 2014-06-10T00:50:05Z | 2014-06-10T00:50:05Z | ExpertBayes: Automatically refining manually built Bayesian networks | Bayesian network structures are usually built using only the data and
starting from an empty network or from a naive Bayes structure. Very often, in
some domains, like medicine, prior structural knowledge is already known. This
structure can be automatically or manually refined in search of better
performance models. In this work, we take Bayesian networks built by
specialists and show that minor perturbations to this original network can
yield better classifiers with a very small computational cost, while
maintaining most of the intended meaning of the original model.
| [
"Ezilda Almeida, Pedro Ferreira, Tiago Vinhoza, In\\^es Dutra, Jingwei\n Li, Yirong Wu, Elizabeth Burnside",
"['Ezilda Almeida' 'Pedro Ferreira' 'Tiago Vinhoza' 'Inês Dutra'\n 'Jingwei Li' 'Yirong Wu' 'Elizabeth Burnside']"
]
|
cs.CV cs.LG | null | 1406.2419 | null | null | http://arxiv.org/pdf/1406.2419v1 | 2014-06-10T04:34:43Z | 2014-06-10T04:34:43Z | Why do linear SVMs trained on HOG features perform so well? | Linear Support Vector Machines trained on HOG features are now a de facto
standard across many visual perception tasks. Their popularisation can largely
be attributed to the step-change in performance they brought to pedestrian
detection, and their subsequent successes in deformable parts models. This
paper explores the interactions that make the HOG-SVM symbiosis perform so
well. By connecting the feature extraction and learning processes rather than
treating them as disparate plugins, we show that HOG features can be viewed as
doing two things: (i) inducing capacity in, and (ii) adding prior to a linear
SVM trained on pixels. From this perspective, preserving second-order
statistics and locality of interactions are key to good performance. We
demonstrate surprising accuracy on expression recognition and pedestrian
detection tasks, by assuming only the importance of preserving such local
second-order interactions.
| [
"['Hilton Bristow' 'Simon Lucey']",
"Hilton Bristow, Simon Lucey"
]
|
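A minimal sketch of the HOG + linear SVM pipeline discussed in the abstract above (arXiv:1406.2419), run here on synthetic 64x128 grayscale windows standing in for pedestrian and background crops; the HOG parameters shown are common defaults and the data generator is a crude placeholder, not the paper's configuration.

```python
# Sketch: HOG features + linear SVM on synthetic detection windows.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def make_window(positive):
    img = rng.uniform(size=(128, 64))
    if positive:                       # crude vertical "pedestrian-like" blob
        img[32:96, 24:40] += 1.0
    return np.clip(img, 0, 2)

X, y = [], []
for label in (0, 1):
    for _ in range(50):
        img = make_window(bool(label))
        feats = hog(img, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2), block_norm="L2-Hys")
        X.append(feats)
        y.append(label)

clf = LinearSVC(C=0.01).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```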
cs.IR cs.LG | null | 1406.2431 | null | null | http://arxiv.org/pdf/1406.2431v3 | 2016-09-20T09:51:02Z | 2014-06-10T06:17:23Z | Budget-Constrained Item Cold-Start Handling in Collaborative Filtering
Recommenders via Optimal Design | It is well known that collaborative filtering (CF) based recommender systems
provide better modeling of users and items associated with considerable rating
history. The lack of historical ratings results in the user and the item
cold-start problems. The latter is the main focus of this work. Most of the
current literature addresses this problem by integrating content-based
recommendation techniques to model the new item. However, in many cases such
content is not available, and the question arises whether this problem can
be mitigated using CF techniques only. We formalize this problem as an
optimization problem: given a new item, a pool of available users, and a budget
constraint, select which users to assign with the task of rating the new item
in order to minimize the prediction error of our model. We show that the
objective function is monotone-supermodular, and propose efficient optimal
design based algorithms that attain an approximation to its optimum. Our
findings are verified by an empirical study using the Netflix dataset, where
the proposed algorithms outperform several baselines for the problem at hand.
| [
"Oren Anava, Shahar Golan, Nadav Golbandi, Zohar Karnin, Ronny Lempel,\n Oleg Rokhlenko, Oren Somekh",
"['Oren Anava' 'Shahar Golan' 'Nadav Golbandi' 'Zohar Karnin'\n 'Ronny Lempel' 'Oleg Rokhlenko' 'Oren Somekh']"
]
|
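A minimal sketch in the spirit of the budget-constrained user selection described in the abstract above (arXiv:1406.2431): given latent user factors from an existing CF model and a budget B, greedily pick the user whose rating of the new item would most reduce a prediction-error surrogate, here the A-optimality criterion trace((lam*I + U_S^T U_S)^{-1}). The factors, budget and criterion are illustrative assumptions, not the paper's exact objective or algorithm.

```python
# Sketch: greedy optimal-design style selection of raters for a cold-start item.
import numpy as np

rng = np.random.default_rng(0)
n_users, k, budget, lam = 200, 10, 5, 0.1
U = rng.normal(size=(n_users, k))               # user latent factors (toy values)

selected = []
for _ in range(budget):
    best_user, best_score = None, np.inf
    for u in range(n_users):
        if u in selected:
            continue
        S = U[selected + [u]]                   # candidate design matrix
        score = np.trace(np.linalg.inv(lam * np.eye(k) + S.T @ S))
        if score < best_score:
            best_user, best_score = u, score
    selected.append(best_user)
print("users asked to rate the new item:", selected)
```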
cs.LG stat.ML | null | 1406.2504 | null | null | http://arxiv.org/pdf/1406.2504v3 | 2015-07-01T19:05:56Z | 2014-06-10T10:53:20Z | Exploring Algorithmic Limits of Matrix Rank Minimization under Affine
Constraints | Many applications require recovering a matrix of minimal rank within an
affine constraint set, with matrix completion a notable special case. Because
the problem is NP-hard in general, it is common to replace the matrix rank with
the nuclear norm, which acts as a convenient convex surrogate. While elegant
theoretical conditions elucidate when this replacement is likely to be
successful, they are highly restrictive and convex algorithms fail when the
ambient rank is too high or when the constraint set is poorly structured.
Non-convex alternatives fare somewhat better when carefully tuned; however,
convergence to locally optimal solutions remains a continuing source of
failure. Against this backdrop we derive a deceptively simple and
parameter-free probabilistic PCA-like algorithm that is capable, over a wide
battery of empirical tests, of successful recovery even at the theoretical
limit where the number of measurements equals the degrees of freedom in the
unknown low-rank matrix. Somewhat surprisingly, this is possible even when the
affine constraint set is highly ill-conditioned. While proving general recovery
guarantees remains elusive for non-convex algorithms, Bayesian-inspired or
otherwise, we nonetheless show conditions whereby the underlying cost function
has a unique stationary point located at the global optimum; no existing cost
function we are aware of satisfies this same property. We conclude with a
simple computer vision application involving image rectification and a standard
collaborative filtering benchmark.
| [
"['Bo Xin' 'David Wipf']",
"Bo Xin and David Wipf"
]
|
cs.CL cs.AI cs.IR cs.LG | null | 1406.2538 | null | null | http://arxiv.org/pdf/1406.2538v1 | 2014-06-10T13:16:36Z | 2014-06-10T13:16:36Z | FrameNet CNL: a Knowledge Representation and Information Extraction
Language | The paper presents a FrameNet-based information extraction and knowledge
representation framework, called FrameNet-CNL. The framework is used on natural
language documents and represents the extracted knowledge in a tailor-made
Frame-ontology from which unambiguous FrameNet-CNL paraphrase text can be
generated automatically in multiple languages. This approach brings together
the fields of information extraction and CNL, because a source text can be
considered belonging to FrameNet-CNL, if information extraction parser produces
the correct knowledge representation as a result. We describe a
state-of-the-art information extraction parser used by a national news agency
and speculate that FrameNet-CNL eventually could shape the natural language
subset used for writing the newswire articles.
| [
"['Guntis Barzdins']",
"Guntis Barzdins"
]
|
stat.ML cs.LG | null | 1406.2541 | null | null | http://arxiv.org/pdf/1406.2541v1 | 2014-06-10T13:29:09Z | 2014-06-10T13:29:09Z | Predictive Entropy Search for Efficient Global Optimization of Black-box
Functions | We propose a novel information-theoretic approach for Bayesian optimization
called Predictive Entropy Search (PES). At each iteration, PES selects the next
evaluation point that maximizes the expected information gained with respect to
the global maximum. PES codifies this intractable acquisition function in terms
of the expected reduction in the differential entropy of the predictive
distribution. This reformulation allows PES to obtain approximations that are
both more accurate and efficient than other alternatives such as Entropy Search
(ES). Furthermore, PES can easily perform a fully Bayesian treatment of the
model hyperparameters while ES cannot. We evaluate PES in both synthetic and
real-world applications, including optimization problems in machine learning,
finance, biotechnology, and robotics. We show that the increased accuracy of
PES leads to significant gains in optimization performance.
| [
"Jos\\'e Miguel Hern\\'andez-Lobato, Matthew W. Hoffman, Zoubin\n Ghahramani",
"['José Miguel Hernández-Lobato' 'Matthew W. Hoffman' 'Zoubin Ghahramani']"
]
|
cs.LG math.OC stat.ML | null | 1406.2572 | null | null | http://arxiv.org/pdf/1406.2572v1 | 2014-06-10T14:52:14Z | 2014-06-10T14:52:14Z | Identifying and attacking the saddle point problem in high-dimensional
non-convex optimization | A central challenge to many fields of science and engineering involves
minimizing non-convex error functions over continuous, high dimensional spaces.
Gradient descent or quasi-Newton methods are almost ubiquitously used to
perform such minimizations, and it is often thought that a main source of
difficulty for these local methods to find the global minimum is the
proliferation of local minima with much higher error than the global minimum.
Here we argue, based on results from statistical physics, random matrix theory,
neural network theory, and empirical evidence, that a deeper and more profound
difficulty originates from the proliferation of saddle points, not local
minima, especially in high dimensional problems of practical interest. Such
saddle points are surrounded by high error plateaus that can dramatically slow
down learning, and give the illusory impression of the existence of a local
minimum. Motivated by these arguments, we propose a new approach to
second-order optimization, the saddle-free Newton method, that can rapidly
escape high dimensional saddle points, unlike gradient descent and quasi-Newton
methods. We apply this algorithm to deep or recurrent neural network training,
and provide numerical evidence for its superior optimization performance.
| [
"['Yann Dauphin' 'Razvan Pascanu' 'Caglar Gulcehre' 'Kyunghyun Cho'\n 'Surya Ganguli' 'Yoshua Bengio']",
"Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya\n Ganguli and Yoshua Bengio"
]
|
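A minimal sketch of the saddle-free Newton step proposed in the abstract above (arXiv:1406.2572): rescale the gradient by the inverse of |H|, the Hessian with its eigenvalues replaced by their absolute values, so that saddle points repel rather than attract the iterates. The toy quadratic saddle below is an illustrative assumption; the paper applies the idea to deep and recurrent networks with approximate curvature.

```python
# Sketch: saddle-free Newton on f(w) = 0.5 * w^T A w with mixed-sign curvature.
import numpy as np

A = np.diag([2.0, -1.0])                 # Hessian with one negative eigenvalue

def grad(w):
    return A @ w

def saddle_free_newton_step(w, hessian, lr=1.0):
    eigval, eigvec = np.linalg.eigh(hessian)
    H_abs_inv = eigvec @ np.diag(1.0 / np.abs(eigval)) @ eigvec.T
    return w - lr * H_abs_inv @ grad(w)

w = np.array([1.0, 1.0])                 # start near the saddle at the origin
print("plain Newton lands at:", w - np.linalg.inv(A) @ grad(w))  # the saddle itself
for _ in range(5):
    w = saddle_free_newton_step(w, A)
print("saddle-free Newton iterate:", w)  # moves away along the negative-curvature axis
```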
cs.CV cs.IR cs.LG | 10.1109/IC3INA.2013.6819148 | 1406.2580 | null | null | http://arxiv.org/abs/1406.2580v1 | 2014-06-10T15:11:27Z | 2014-06-10T15:11:27Z | Identification of Orchid Species Using Content-Based Flower Image
Retrieval | In this paper, we developed a system for recognizing orchid species from
images of flowers. We used the MSRM (Maximal Similarity based on Region
Merging) method for segmenting the flower object from the background and
extracting shape features such as the distance from the edge to the centroid
point of the flower, aspect ratio, roundness, moment invariants, and fractal
dimension, as well as color features. We used the HSV color feature while
ignoring the V value. To retrieve the image, we used the Support Vector Machine
(SVM) method. Orchid is a unique flower. It has a part of flower called lip
(labellum) that distinguishes it from other flowers even from other types of
orchids. Thus, in this paper, we proposed to do feature extraction not only on
the flower region but also on the lip (labellum) region. The results show that our
proposed method can increase the accuracy of content-based flower image
retrieval for orchid species by up to $\pm$ 14%. The most dominant features are
Centroid Contour Distance, Moment Invariant and HSV Color. The system accuracy
is 85.33% in the validation phase and 79.33% in the testing phase.
| [
"D. H. Apriyanti, A.A. Arymurthy, L.T. Handoko",
"['D. H. Apriyanti' 'A. A. Arymurthy' 'L. T. Handoko']"
]
|
stat.ML cs.LG cs.NA math.NA | null | 1406.2582 | null | null | http://arxiv.org/pdf/1406.2582v2 | 2014-10-24T11:45:49Z | 2014-06-10T15:13:24Z | Probabilistic ODE Solvers with Runge-Kutta Means | Runge-Kutta methods are the classic family of solvers for ordinary
differential equations (ODEs), and the basis for the state of the art. Like
most numerical methods, they return point estimates. We construct a family of
probabilistic numerical methods that instead return a Gauss-Markov process
defining a probability distribution over the ODE solution. In contrast to prior
work, we construct this family such that posterior means match the outputs of
the Runge-Kutta family exactly, thus inheriting their proven good properties.
Remaining degrees of freedom not identified by the match to Runge-Kutta are
chosen such that the posterior probability measure fits the observed structure
of the ODE. Our results shed light on the structure of Runge-Kutta solvers from
a new direction, provide a richer, probabilistic output, have low computational
cost, and raise new research questions.
| [
"['Michael Schober' 'David Duvenaud' 'Philipp Hennig']",
"Michael Schober, David Duvenaud, Philipp Hennig"
]
|
stat.ML cs.AI cs.CV cs.LG | null | 1406.2602 | null | null | http://arxiv.org/pdf/1406.2602v1 | 2014-06-10T15:49:05Z | 2014-06-10T15:49:05Z | Graph Approximation and Clustering on a Budget | We consider the problem of learning from a similarity matrix (such as
spectral clustering and low-dimensional embedding), when computing pairwise
similarities is costly and only a limited number of entries can be observed.
We provide a theoretical analysis using standard notions of graph
approximation, significantly generalizing previous results (which focused on
spectral clustering with two clusters). We also propose a new algorithmic
approach based on adaptive sampling, which experimentally matches or improves
on previous methods, while being considerably more general and computationally
cheaper.
| [
"['Ethan Fetaya' 'Ohad Shamir' 'Shimon Ullman']",
"Ethan Fetaya, Ohad Shamir and Shimon Ullman"
]
|
cs.RO cs.AI cs.LG | null | 1406.2616 | null | null | http://arxiv.org/pdf/1406.2616v3 | 2016-01-05T05:35:21Z | 2014-06-10T16:23:52Z | PlanIt: A Crowdsourcing Approach for Learning to Plan Paths from Large
Scale Preference Feedback | We consider the problem of learning user preferences over robot trajectories
for environments rich in objects and humans. This is challenging because the
criterion defining a good trajectory varies with users, tasks and interactions
in the environment. We represent trajectory preferences using a cost function
that the robot learns and uses it to generate good trajectories in new
environments. We design a crowdsourcing system - PlanIt, where non-expert users
label segments of the robot's trajectory. PlanIt allows us to collect a large
amount of user feedback, and using the weak and noisy labels from PlanIt we
learn the parameters of our model. We test our approach on 122 different
environments for robotic navigation and manipulation tasks. Our extensive
experiments show that the learned cost function generates preferred
trajectories in human environments. Our crowdsourcing system is publicly
available for the visualization of the learned costs and for providing
preference feedback: \url{http://planit.cs.cornell.edu}
| [
"Ashesh Jain, Debarghya Das, Jayesh K Gupta, Ashutosh Saxena",
"['Ashesh Jain' 'Debarghya Das' 'Jayesh K Gupta' 'Ashutosh Saxena']"
]
|
cs.LG stat.ML | null | 1406.2622 | null | null | http://arxiv.org/pdf/1406.2622v1 | 2014-06-10T16:40:56Z | 2014-06-10T16:40:56Z | Equivalence of Learning Algorithms | The purpose of this paper is to introduce a concept of equivalence between
machine learning algorithms. We define two notions of algorithmic equivalence,
namely, weak and strong equivalence. These notions are of paramount importance
for identifying when learning properties from one learning algorithm can be
transferred to another. Using regularized kernel machines as a case study, we
illustrate the importance of the introduced equivalence concept by analyzing
the relation between kernel ridge regression (KRR) and m-power regularized
least squares regression (M-RLSR) algorithms.
| [
"Julien Audiffren (CMLA), Hachem Kadri (LIF)",
"['Julien Audiffren' 'Hachem Kadri']"
]
|
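As a concrete reference point for the KRR side of the comparison in the entry above, the following is the standard kernel ridge regression solution. The regularization scaling shown is one common convention and is an assumption here, not necessarily the one used in the paper.

```latex
% Standard KRR (one common convention; the paper's exact scaling may differ):
% K is the kernel matrix with K_{ij} = k(x_i, x_j), y the targets, lambda > 0.
\min_{f \in \mathcal{H}} \ \sum_{i=1}^{n} \big(y_i - f(x_i)\big)^2 + \lambda \|f\|_{\mathcal{H}}^2
\quad\Longrightarrow\quad
f(x) = \sum_{i=1}^{n} \alpha_i\, k(x_i, x), \qquad
\alpha = (K + \lambda I)^{-1} y .
```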
cs.CV cs.LG cs.NE | 10.1007/978-3-319-10404-1_65 | 1406.2639 | null | null | http://arxiv.org/abs/1406.2639v1 | 2014-06-06T22:43:42Z | 2014-06-06T22:43:42Z | A New 2.5D Representation for Lymph Node Detection using Random Sets of
Deep Convolutional Neural Network Observations | Automated Lymph Node (LN) detection is an important clinical diagnostic task
but very challenging due to the low contrast of surrounding structures in
Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely
distributed locations. State-of-the-art studies show the performance range of
52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1
FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this
paper, we first operate a preliminary candidate generation stage, towards 100%
sensitivity at the cost of high FP levels (40 per patient), to harvest volumes
of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by
resampling 2D reformatted orthogonal views N times, via scale, random
translations, and rotations with respect to the VOI centroid coordinates. These
random views are then used to train a deep Convolutional Neural Network (CNN)
classifier. In testing, the CNN is employed to assign LN probabilities for all
N random views that can be simply averaged (as a set) to compute the final
classification probability per VOI. We validate the approach on two datasets:
90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs.
We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in
mediastinum and abdomen respectively, which drastically improves over the
previous state-of-the-art work.
| [
"Holger R. Roth and Le Lu and Ari Seff and Kevin M. Cherry and Joanne\n Hoffman and Shijun Wang and Jiamin Liu and Evrim Turkbey and Ronald M.\n Summers",
"['Holger R. Roth' 'Le Lu' 'Ari Seff' 'Kevin M. Cherry' 'Joanne Hoffman'\n 'Shijun Wang' 'Jiamin Liu' 'Evrim Turkbey' 'Ronald M. Summers']"
]
|
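The aggregation step described in the abstract above (a CNN assigns probabilities to N random 2D views and these are averaged per VOI) is simple enough to sketch. The code below is a minimal illustration with a random placeholder in place of a trained model, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the aggregation step: a (trained) CNN assigns a
# lymph-node probability to each of N random 2D views of a volume of
# interest (VOI), and the per-VOI score is the plain average of those
# probabilities.  `cnn_probability` below is a random placeholder.
rng = np.random.default_rng(0)

def cnn_probability(view):
    """Placeholder for a trained CNN's probability output for one 2D view."""
    return float(rng.random())

def voi_score(views):
    """Average the per-view probabilities to get one score per VOI."""
    return float(np.mean([cnn_probability(v) for v in views]))

# N = 100 random 32x32 views standing in for reformatted 2D slices.
views = [rng.normal(size=(32, 32)) for _ in range(100)]
print("VOI probability:", voi_score(views))
```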
cs.LG math.AC stat.ML | null | 1406.2646 | null | null | http://arxiv.org/pdf/1406.2646v1 | 2014-06-10T17:48:58Z | 2014-06-10T17:48:58Z | Learning with Cross-Kernels and Ideal PCA | We describe how cross-kernel matrices, that is, kernel matrices between the
data and a custom chosen set of `feature spanning points' can be used for
learning. The main potential of cross-kernels lies in the fact that (a) only
one side of the matrix scales with the number of data points, and (b)
cross-kernels, as opposed to the usual kernel matrices, can be used to certify
for the data manifold. Our theoretical framework, which is based on a duality
involving the feature space and vanishing ideals, indicates that cross-kernels
have the potential to be used for any kind of kernel learning. We present a
novel algorithm, Ideal PCA (IPCA), which cross-kernelizes PCA. We demonstrate
on real and synthetic data that IPCA allows us to (a) obtain PCA-like features
faster and (b) extract novel and empirically validated features certifying
for the data manifold.
| [
"['Franz J Király' 'Martin Kreuzer' 'Louis Theran']",
"Franz J Kir\\'aly, Martin Kreuzer, Louis Theran"
]
|
stat.ML cs.LG | null | 1406.2661 | null | null | http://arxiv.org/pdf/1406.2661v1 | 2014-06-10T18:58:17Z | 2014-06-10T18:58:17Z | Generative Adversarial Networks | We propose a new framework for estimating generative models via an
adversarial process, in which we simultaneously train two models: a generative
model G that captures the data distribution, and a discriminative model D that
estimates the probability that a sample came from the training data rather than
G. The training procedure for G is to maximize the probability of D making a
mistake. This framework corresponds to a minimax two-player game. In the space
of arbitrary functions G and D, a unique solution exists, with G recovering the
training data distribution and D equal to 1/2 everywhere. In the case where G
and D are defined by multilayer perceptrons, the entire system can be trained
with backpropagation. There is no need for any Markov chains or unrolled
approximate inference networks during either training or generation of samples.
Experiments demonstrate the potential of the framework through qualitative and
quantitative evaluation of the generated samples.
| [
"Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David\n Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio",
"['Ian J. Goodfellow' 'Jean Pouget-Abadie' 'Mehdi Mirza' 'Bing Xu'\n 'David Warde-Farley' 'Sherjil Ozair' 'Aaron Courville' 'Yoshua Bengio']"
]
|
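The minimax game described in the abstract above corresponds to the following value function, which is the standard formulation from the paper: D is trained to maximize it and G to minimize it.

```latex
% GAN minimax objective: D maximizes, G minimizes.
\min_{G} \max_{D} \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big] .
```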
stat.ML cs.LG | null | 1406.2673 | null | null | http://arxiv.org/pdf/1406.2673v2 | 2015-02-16T14:57:52Z | 2014-06-10T19:34:51Z | Mondrian Forests: Efficient Online Random Forests | Ensembles of randomized decision trees, usually referred to as random
forests, are widely used for classification and regression tasks in machine
learning and statistics. Random forests achieve competitive predictive
performance and are computationally efficient to train and test, making them
excellent candidates for real-world prediction tasks. The most popular random
forest variants (such as Breiman's random forest and extremely randomized
trees) operate on batches of training data. Online methods are now in greater
demand. Existing online random forests, however, require more training data
than their batch counterpart to achieve comparable predictive performance. In
this work, we use Mondrian processes (Roy and Teh, 2009) to construct ensembles
of random decision trees we call Mondrian forests. Mondrian forests can be
grown in an incremental/online fashion and remarkably, the distribution of
online Mondrian forests is the same as that of batch Mondrian forests. Mondrian
forests achieve competitive predictive performance comparable with existing
online random forests and periodically re-trained batch random forests, while
being more than an order of magnitude faster, thus representing a better
computation vs accuracy tradeoff.
| [
"Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh",
"['Balaji Lakshminarayanan' 'Daniel M. Roy' 'Yee Whye Teh']"
]
|
cs.LG cs.CL | null | 1406.2710 | null | null | http://arxiv.org/pdf/1406.2710v1 | 2014-06-10T20:29:10Z | 2014-06-10T20:29:10Z | A Multiplicative Model for Learning Distributed Text-Based Attribute
Representations | In this paper we propose a general framework for learning distributed
representations of attributes: characteristics of text whose representations
can be jointly learned with word embeddings. Attributes can correspond to
document indicators (to learn sentence vectors), language indicators (to learn
distributed language representations), meta-data and side information (such as
the age, gender and industry of a blogger) or representations of authors. We
describe a third-order model where word context and attribute vectors interact
multiplicatively to predict the next word in a sequence. This leads to the
notion of conditional word similarity: how meanings of words change when
conditioned on different attributes. We perform several experimental tasks
including sentiment classification, cross-lingual document classification, and
blog authorship attribution. We also qualitatively evaluate conditional word
neighbours and attribute-conditioned text generation.
| [
"Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov",
"['Ryan Kiros' 'Richard S. Zemel' 'Ruslan Salakhutdinov']"
]
|
stat.ML cs.LG math.ST stat.TH | null | 1406.2721 | null | null | http://arxiv.org/pdf/1406.2721v1 | 2014-06-10T21:03:22Z | 2014-06-10T21:03:22Z | Learning Latent Variable Gaussian Graphical Models | Gaussian graphical models (GGM) have been widely used in many
high-dimensional applications ranging from biological and financial data to
recommender systems. Sparsity in GGM plays a central role both statistically
and computationally. Unfortunately, real-world data often does not fit well to
sparse graphical models. In this paper, we focus on a family of latent variable
Gaussian graphical models (LVGGM), where the model is conditionally sparse
given latent variables, but marginally non-sparse. In LVGGM, the inverse
covariance matrix has a low-rank plus sparse structure, and can be learned in a
regularized maximum likelihood framework. We derive novel parameter estimation
error bounds for LVGGM under mild conditions in the high-dimensional setting.
These results complement the existing theory on the structural learning, and
open up new possibilities of using LVGGM for statistical inference.
| [
"Zhaoshi Meng, Brian Eriksson, Alfred O. Hero III",
"['Zhaoshi Meng' 'Brian Eriksson' 'Alfred O. Hero III']"
]
|
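To make the "low-rank plus sparse" structure mentioned above concrete, a regularized maximum-likelihood formulation of this general kind is sketched below. The exact penalties, signs, and constraints used in the paper may differ, so treat this purely as an assumption-laden illustration.

```latex
% One generic convex formulation for an LVGGM-type estimator, with sample
% covariance \hat{\Sigma}, sparse part S, and low-rank part L
% (signs and constraints are illustrative, not necessarily the paper's):
\min_{S,\,L}\;\; -\log\det(S + L) + \operatorname{tr}\big(\hat{\Sigma}\,(S + L)\big)
+ \alpha \|S\|_{1} + \beta \|L\|_{*}
\quad \text{s.t.} \quad S + L \succ 0 .
```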
cs.CV cs.LG | null | 1406.2732 | null | null | http://arxiv.org/pdf/1406.2732v1 | 2014-06-10T22:07:01Z | 2014-06-10T22:07:01Z | Deep Epitomic Convolutional Neural Networks | Deep convolutional neural networks have recently proven extremely competitive
in challenging image recognition tasks. This paper proposes the epitomic
convolution as a new building block for deep neural networks. An epitomic
convolution layer replaces a pair of consecutive convolution and max-pooling
layers found in standard deep convolutional neural networks. The main version
of the proposed model uses mini-epitomes in place of filters and computes
responses invariant to small translations by epitomic search instead of
max-pooling over image positions. The topographic version of the proposed model
uses large epitomes to learn filter maps organized in translational
topographies. We show that error back-propagation can successfully learn
multiple epitomic layers in a supervised fashion. The effectiveness of the
proposed method is assessed in image classification tasks on standard
benchmarks. Our experiments on Imagenet indicate improved recognition
performance compared to standard convolutional neural networks of similar
architecture. Our models pre-trained on Imagenet perform excellently on
Caltech-101. We also obtain competitive image classification results on the
small-image MNIST and CIFAR-10 datasets.
| [
"['George Papandreou']",
"George Papandreou"
]
|
cs.LG | null | 1406.2751 | null | null | http://arxiv.org/pdf/1406.2751v4 | 2015-04-16T17:22:58Z | 2014-06-11T00:44:31Z | Reweighted Wake-Sleep | Training deep directed graphical models with many hidden variables and
performing inference remains a major challenge. Helmholtz machines and deep
belief networks are such models, and the wake-sleep algorithm has been proposed
to train them. The wake-sleep algorithm relies on training not just the
directed generative model but also a conditional generative model (the
inference network) that runs backward from visible to latent, estimating the
posterior distribution of latent given visible. We propose a novel
interpretation of the wake-sleep algorithm which suggests that better
estimators of the gradient can be obtained by sampling latent variables
multiple times from the inference network. This view is based on importance
sampling as an estimator of the likelihood, with the approximate inference
network as a proposal distribution. This interpretation is confirmed
experimentally, showing that better likelihood can be achieved with this
reweighted wake-sleep procedure. Based on this interpretation, we propose that
a sigmoidal belief network is not sufficiently powerful for the layers of the
inference network in order to recover a good estimator of the posterior
distribution of latent variables. Our experiments show that using a more
powerful layer model, such as NADE, yields substantially better generative
models.
| [
"J\\\"org Bornschein and Yoshua Bengio",
"['Jörg Bornschein' 'Yoshua Bengio']"
]
|
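The importance-sampling view described in the abstract above has a compact form worth writing out. This is the generic importance-weighted likelihood estimator, stated here as background rather than as the paper's exact estimator.

```latex
% Importance-sampling estimate of the likelihood, using the inference
% network q(h|x) as the proposal distribution and K samples:
p(x) \;\approx\; \frac{1}{K} \sum_{k=1}^{K} \frac{p\big(x, h^{(k)}\big)}{q\big(h^{(k)} \mid x\big)},
\qquad h^{(k)} \sim q(h \mid x) .
```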
cs.DB cs.CL cs.LG q-bio.PE | null | 1406.2963 | null | null | http://arxiv.org/pdf/1406.2963v2 | 2014-07-19T16:40:47Z | 2014-06-11T17:02:14Z | A machine-compiled macroevolutionary history of Phanerozoic life | Many aspects of macroevolutionary theory and our understanding of biotic
responses to global environmental change derive from literature-based
compilations of palaeontological data. Existing manually assembled databases
are, however, incomplete and difficult to assess and enhance. Here, we develop
and validate the quality of a machine reading system, PaleoDeepDive, that
automatically locates and extracts data from heterogeneous text, tables, and
figures in publications. PaleoDeepDive performs comparably to humans in complex
data extraction and inference tasks and generates congruent synthetic
macroevolutionary results. Unlike traditional databases, PaleoDeepDive produces
a probabilistic database that systematically improves as information is added.
We also show that the system can readily accommodate sophisticated data types,
such as morphological data in biological illustrations and associated textual
descriptions. Our machine reading approach to scientific data integration and
synthesis brings within reach many questions that are currently underdetermined
and does so in ways that may stimulate entirely new modes of inquiry.
| [
"Shanan E. Peters, Ce Zhang, Miron Livny, Christopher R\\'e",
"['Shanan E. Peters' 'Ce Zhang' 'Miron Livny' 'Christopher Ré']"
]
|
cs.CV cs.LG stat.ML | null | 1406.2969 | null | null | http://arxiv.org/pdf/1406.2969v1 | 2014-06-11T17:18:25Z | 2014-06-11T17:18:25Z | Truncated Nuclear Norm Minimization for Image Restoration Based On
Iterative Support Detection | Recovering a large matrix from limited measurements is a challenging task
arising in many real applications, such as image inpainting, compressive
sensing and medical imaging, and such problems are mostly formulated as
low-rank matrix approximation problems. Due to the rank operator being
non-convex and discontinuous, most of the recent theoretical studies use the
nuclear norm as a convex relaxation and the low-rank matrix recovery problem is
solved through minimization of the nuclear norm regularized problem. However, a
major limitation of nuclear norm minimization is that all the singular values
are simultaneously minimized and the rank may not be well approximated
\cite{hu2012fast}. Correspondingly, in this paper, we propose a new multi-stage
algorithm, which makes use of the concept of Truncated Nuclear Norm
Regularization (TNNR) proposed in \citep{hu2012fast} and Iterative Support
Detection (ISD) proposed in \citep{wang2010sparse} to overcome the above
limitation. Besides matrix completion problems considered in
\citep{hu2012fast}, the proposed method can be also extended to the general
low-rank matrix recovery problems. Extensive experiments well validate the
superiority of our new algorithms over other state-of-the-art methods.
| [
"['Yilun Wang' 'Xinhua Su']",
"Yilun Wang and Xinhua Su"
]
|
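For readers unfamiliar with the truncated nuclear norm referenced above, the usual definition (following hu2012fast) is given below; stating it here is an editorial addition, not part of the abstract.

```latex
% Truncated nuclear norm of X in R^{m x n} with truncation rank r:
% only the smallest singular values are penalized, so the r largest
% remain unconstrained.
\|X\|_{r} \;=\; \sum_{i = r+1}^{\min(m, n)} \sigma_i(X) .
```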
stat.ML cs.LG cs.NE | null | 1406.2989 | null | null | http://arxiv.org/pdf/1406.2989v3 | 2015-04-09T12:58:06Z | 2014-06-11T18:29:27Z | Techniques for Learning Binary Stochastic Feedforward Neural Networks | Stochastic binary hidden units in a multi-layer perceptron (MLP) network give
at least three potential benefits when compared to deterministic MLP networks.
(1) They allow to learn one-to-many type of mappings. (2) They can be used in
structured prediction problems, where modeling the internal structure of the
output is important. (3) Stochasticity has been shown to be an excellent
regularizer, which makes generalization performance potentially better in
general. However, training stochastic networks is considerably more difficult.
We study training using M samples of hidden activations per input. We show that
the case M=1 leads to a fundamentally different behavior where the network
tries to avoid stochasticity. We propose two new estimators for the training
gradient and propose benchmark tests for comparing training algorithms. Our
experiments confirm that training stochastic networks is difficult and show
that the proposed two estimators perform favorably among all the five known
estimators.
| [
"['Tapani Raiko' 'Mathias Berglund' 'Guillaume Alain' 'Laurent Dinh']",
"Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh"
]
|
cs.LG cs.CV | null | 1406.3010 | null | null | http://arxiv.org/pdf/1406.3010v2 | 2014-12-05T17:55:09Z | 2014-06-11T19:38:05Z | "Mental Rotation" by Optimizing Transforming Distance | The human visual system is able to recognize objects despite transformations
that can drastically alter their appearance. To this end, much effort has been
devoted to the invariance properties of recognition systems. Invariance can be
engineered (e.g. convolutional nets), or learned from data explicitly (e.g.
temporal coherence) or implicitly (e.g. by data augmentation). One idea that
has not, to date, been explored is the integration of latent variables which
permit a search over a learned space of transformations. Motivated by evidence
that people mentally simulate transformations in space while comparing
examples, so-called "mental rotation", we propose a transforming distance.
Here, a trained relational model actively transforms pairs of examples so that
they are maximally similar in some feature space yet respect the learned
transformational constraints. We apply our method to nearest-neighbour problems
on the Toronto Face Database and NORB.
| [
"Weiguang Ding, Graham W. Taylor",
"['Weiguang Ding' 'Graham W. Taylor']"
]
|
cs.NE cs.LG stat.ML | null | 1406.3100 | null | null | http://arxiv.org/pdf/1406.3100v1 | 2014-06-12T02:08:31Z | 2014-06-12T02:08:31Z | Learning ELM network weights using linear discriminant analysis | We present an alternative to the pseudo-inverse method for determining the
hidden to output weight values for Extreme Learning Machines performing
classification tasks. The method is based on linear discriminant analysis and
provides Bayes optimal single point estimates for the weight values.
| [
"['Philip de Chazal' 'Jonathan Tapson' 'André van Schaik']",
"Philip de Chazal, Jonathan Tapson and Andr\\'e van Schaik"
]
|
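For context on the baseline that the abstract above sets out to replace, here is a minimal sketch of a standard ELM classifier trained with the pseudo-inverse. The toy data, shapes, and activation are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of a standard ELM classifier trained with the
# pseudo-inverse (the baseline an LDA-based estimate would replace).
rng = np.random.default_rng(0)

X = rng.normal(size=(200, 10))            # 200 samples, 10 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy binary labels
T = np.eye(2)[y]                          # one-hot targets

n_hidden = 50
W_in = rng.normal(size=(10, n_hidden))    # random, untrained input weights
b = rng.normal(size=n_hidden)

H = np.tanh(X @ W_in + b)                 # hidden-layer activations
W_out = np.linalg.pinv(H) @ T             # least-squares output weights

pred = (H @ W_out).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```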
cs.NE cond-mat.mes-hall cond-mat.mtrl-sci cs.DC cs.LG | null | 1406.3149 | null | null | http://arxiv.org/pdf/1406.3149v1 | 2014-06-12T08:40:04Z | 2014-06-12T08:40:04Z | A Cascade Neural Network Architecture investigating Surface Plasmon
Polaritons propagation for thin metals in OpenMP | Surface plasmon polaritons (SPPs) confined along metal-dielectric interface
have attracted a relevant interest in the area of ultracompact photonic
circuits, photovoltaic devices and other applications due to their strong field
confinement and enhancement. This paper investigates a novel cascade neural
network (NN) architecture to find the dependence of SPP propagation on the metal
thickness. Additionally, a novel training procedure for the proposed cascade
NN has been developed using an OpenMP-based framework, thus greatly reducing
training time. The performed experiments confirm the effectiveness of the
proposed NN architecture for the problem at hand.
| [
"Francesco Bonanno, Giacomo Capizzi, Grazia Lo Sciuto, Christian\n Napoli, Giuseppe Pappalardo, Emiliano Tramontana",
"['Francesco Bonanno' 'Giacomo Capizzi' 'Grazia Lo Sciuto'\n 'Christian Napoli' 'Giuseppe Pappalardo' 'Emiliano Tramontana']"
]
|
stat.ML cs.LG | null | 1406.3190 | null | null | http://arxiv.org/pdf/1406.3190v4 | 2016-05-14T03:53:59Z | 2014-06-12T10:49:50Z | Online Optimization for Large-Scale Max-Norm Regularization | Max-norm regularizer has been extensively studied in the last decade as it
promotes an effective low-rank estimation for the underlying data. However,
such max-norm regularized problems are typically formulated and solved in a
batch manner, which prevents them from processing big data due to memory
constraints. In this paper, we therefore propose an online algorithm that scales
to the large-scale setting. In particular, we consider the matrix decomposition
problem as an example, although a simple variant of the algorithm and analysis
can be adapted to other important problems such as matrix completion. The
crucial technique in our implementation is to reformulate the max-norm into an
equivalent matrix factorization form, where the factors consist of a (possibly
overcomplete) basis component and a coefficient component. In this way, we may
maintain the basis component in the memory and optimize over it and the
coefficients for each sample alternatively. Since the memory footprint of the
basis component is independent of the sample size, our algorithm is appealing
when manipulating a large collection of samples. We prove that the sequence of
the solutions (i.e., the basis component) produced by our algorithm converges
to a stationary point of the expected loss function asymptotically. Numerical
study demonstrates encouraging results for the efficacy and robustness of our
algorithm compared to the widely used nuclear norm solvers.
| [
"Jie Shen and Huan Xu and Ping Li",
"['Jie Shen' 'Huan Xu' 'Ping Li']"
]
|
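The factorization form of the max-norm alluded to above is, in its usual statement, the identity below; the paper's online reformulation builds on a factorization of this kind, but the precise variant it optimizes is not reproduced here.

```latex
% Max-norm via matrix factorization: \|U\|_{2,\infty} denotes the largest
% row \ell_2 norm of U (and likewise for V).
\|X\|_{\max} \;=\; \min_{U, V \,:\, X = U V^{\top}} \|U\|_{2,\infty}\, \|V\|_{2,\infty} .
```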
cs.LG stat.ML | null | 1406.3269 | null | null | http://arxiv.org/pdf/1406.3269v3 | 2015-04-10T21:05:32Z | 2014-06-12T15:40:18Z | Scheduled denoising autoencoders | We present a representation learning method that learns features at multiple
different levels of scale. Working within the unsupervised framework of
denoising autoencoders, we observe that when the input is heavily corrupted
during training, the network tends to learn coarse-grained features, whereas
when the input is only slightly corrupted, the network tends to learn
fine-grained features. This motivates the scheduled denoising autoencoder,
which starts with a high level of noise that lowers as training progresses. We
find that the resulting representation yields a significant boost on a later
supervised task compared to the original input, or to a standard denoising
autoencoder trained at a single noise level. After supervised fine-tuning our
best model achieves the lowest ever reported error on the CIFAR-10 data set
among permutation-invariant methods.
| [
"Krzysztof J. Geras and Charles Sutton",
"['Krzysztof J. Geras' 'Charles Sutton']"
]
|
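The scheduling idea described in the abstract above reduces to a corruption level that decreases over training. The sketch below uses a linear schedule and masking noise purely as illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Minimal sketch of a scheduled denoising autoencoder's corruption
# schedule: start with heavy input noise and lower it during training.
rng = np.random.default_rng(0)

def corrupt(x, level):
    """Masking noise: zero each input dimension independently with prob `level`."""
    mask = rng.random(x.shape) > level
    return x * mask

x_batch = rng.normal(size=(8, 16))   # stand-in minibatch of 8 examples
n_epochs = 100
start_noise, end_noise = 0.7, 0.1

for epoch in range(n_epochs):
    # Linearly anneal the corruption level from start_noise to end_noise.
    level = start_noise + (end_noise - start_noise) * epoch / (n_epochs - 1)
    x_noisy = corrupt(x_batch, level)
    # ... train the autoencoder to reconstruct x_batch from x_noisy here ...
    if epoch % 25 == 0:
        print(f"epoch {epoch:3d}: corruption level {level:.2f}")
```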