title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
On the Decreasing Power of Kernel and Distance based Nonparametric
Hypothesis Tests in High Dimensions | stat.ML cs.IT cs.LG math.IT math.ST stat.ME stat.TH | This paper is about two related decision theoretic problems, nonparametric
two-sample testing and independence testing. There is a belief that two
recently proposed solutions, based on kernels and distances between pairs of
points, behave well in high-dimensional settings. We identify different sources
of misconception that give rise to the above belief. Specifically, we
differentiate the hardness of estimation of test statistics from the hardness
of testing whether these statistics are zero or not, and explicitly discuss a
notion of "fair" alternative hypotheses for these problems as dimension
increases. We then demonstrate that the power of these tests actually drops
polynomially with increasing dimension against fair alternatives. We end with
some theoretical insights and shed light on the \textit{median heuristic} for
kernel bandwidth selection. Our work advances the current understanding of the
power of modern nonparametric hypothesis tests in high dimensions.
| Sashank J. Reddi, Aaditya Ramdas, Barnab\'as P\'oczos, Aarti Singh and
Larry Wasserman | null | 1406.2083 | null | null |
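For concreteness, the kernel test statistics discussed above are typically instances of the maximum mean discrepancy (MMD), and the median heuristic mentioned in the abstract sets the RBF bandwidth to the median pairwise distance. Below is a minimal NumPy sketch of the unbiased MMD^2 estimator with a median-heuristic bandwidth; the synthetic mean-shift data and all names are illustrative, not taken from the paper.

```python
import numpy as np

def median_heuristic_bandwidth(Z):
    """Median of pairwise Euclidean distances (the 'median heuristic')."""
    d = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
    return np.median(d[np.triu_indices_from(d, k=1)])

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma):
    """Unbiased estimator of MMD^2 between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, sigma)
    Kyy = rbf_kernel(Y, Y, sigma)
    Kxy = rbf_kernel(X, Y, sigma)
    # Drop diagonal terms for unbiasedness.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 50))   # high-dimensional sample 1
Y = rng.normal(0.1, 1.0, size=(100, 50))   # mean-shifted sample 2
sigma = median_heuristic_bandwidth(np.vstack([X, Y]))
print(mmd2_unbiased(X, Y, sigma))
```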
Memristor models for machine learning | cs.LG cond-mat.mtrl-sci | In the quest for alternatives to traditional CMOS, it is being suggested that
digital computing efficiency and power can be improved by matching the
precision to the application. Many applications do not need the high precision
that is being used today. In particular, large gains in area- and power
efficiency could be achieved by dedicated analog realizations of approximate
computing engines. In this work, we explore the use of memristor networks for
analog approximate computation, based on a machine learning framework called
reservoir computing. Most experimental investigations on the dynamics of
memristors focus on their nonvolatile behavior. Hence, the volatility that is
present in the developed technologies is usually unwanted and it is not
included in simulation models. In contrast, in reservoir computing, volatility
is not only desirable but necessary. Therefore, in this work, we propose two
different ways to incorporate it into memristor simulation models. The first is
an extension of Strukov's model and the second is an equivalent Wiener model
approximation. We analyze and compare the dynamical properties of these models
and discuss their implications for the memory and the nonlinear processing
capacity of memristor networks. Our results indicate that device variability,
increasingly causing problems in traditional computer design, is an asset in
the context of reservoir computing. We conclude that, although both models
could lead to useful memristor based reservoir computing systems, their
computational performance will differ. Therefore, experimental modeling
research is required for the development of accurate volatile memristor models.
| Juan Pablo Carbajal and Joni Dambre and Michiel Hermans and Benjamin
Schrauwen | 10.1162/NECO_a_00694 | 1406.2210 | null | null |
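For context, Strukov's model mentioned above couples a state variable w in [0, D] to the device current, with v(t) = (R_on w/D + R_off (1 - w/D)) i(t) and dw/dt = mu_v (R_on/D) i(t). The sketch below integrates this with forward Euler and adds a simple exponential decay of the state as one plausible way to inject volatility; the decay form and all parameter values are our assumptions, not the paper's calibrated models.

```python
import numpy as np

# Illustrative parameters (assumed values, not from the paper).
D = 10e-9                   # device thickness (m)
mu_v = 1e-14                # ion mobility (m^2 s^-1 V^-1)
R_on, R_off = 100.0, 16e3   # bounding resistances (ohm)
tau = 1e-3                  # assumed volatility time constant (s)

def simulate(i_drive, dt, volatile=True):
    """Euler integration of Strukov's model, optionally with state decay."""
    w = 0.5 * D
    voltages = []
    for i in i_drive:
        R = R_on * (w / D) + R_off * (1 - w / D)
        voltages.append(R * i)
        dw = mu_v * (R_on / D) * i          # Strukov's drift term
        if volatile:
            dw -= w / tau                   # assumed exponential decay (volatility)
        w = np.clip(w + dw * dt, 0.0, D)
    return np.array(voltages)

t = np.arange(0, 0.02, 1e-5)
v = simulate(1e-4 * np.sin(2 * np.pi * 100 * t), dt=1e-5)
```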
A Hybrid Latent Variable Neural Network Model for Item Recommendation | cs.LG cs.IR cs.NE stat.ML | Collaborative filtering is used to recommend items to a user without
requiring a knowledge of the item itself and tends to outperform other
techniques. However, collaborative filtering suffers from the cold-start
problem, which occurs when an item has not yet been rated or a user has not
rated any items. Incorporating additional information, such as item or user
descriptions, into collaborative filtering can address the cold-start problem.
In this paper, we present a neural network model with latent input variables
(latent neural network or LNN) as a hybrid collaborative filtering technique
that addresses the cold-start problem. LNN outperforms a broad selection of
content-based filters (which make recommendations based on item descriptions)
and other hybrid approaches while maintaining the accuracy of state-of-the-art
collaborative filtering techniques.
| Michael R. Smith, Tony Martinez, Michael Gashler | null | 1406.2235 | null | null |
Reducing the Effects of Detrimental Instances | stat.ML cs.LG | Not all instances in a data set are equally beneficial for inducing a model
of the data. Some instances (such as outliers or noise) can be detrimental.
However, at least initially, the instances in a data set are generally
considered equally in machine learning algorithms. Many current approaches for
handling noisy and detrimental instances make a binary decision about whether
an instance is detrimental or not. In this paper, we 1) extend this paradigm by
weighting the instances on a continuous scale and 2) present a methodology for
measuring how detrimental an instance may be for inducing a model of the data.
We call our method of identifying and weighting detrimental instances reduced
detrimental instance learning (RDIL). We examine RDIL on a set of 54 data sets
and 5 learning algorithms and compare RDIL with other weighting and filtering
approaches. RDIL is especially useful for learning algorithms where every
instance can affect the classification boundary and the training instances are
considered individually, such as multilayer perceptrons trained with
backpropagation (MLPs). Our results also suggest that a more accurate estimate
of which instances are detrimental can have a significant positive impact for
handling them.
| Michael R. Smith, Tony Martinez | null | 1406.2237 | null | null |
Unsupervised Deep Haar Scattering on Graphs | cs.LG cs.CV | The classification of high-dimensional data defined on graphs is particularly
difficult when the graph geometry is unknown. We introduce a Haar scattering
transform on graphs, which computes invariant signal descriptors. It is
implemented with a deep cascade of additions, subtractions and absolute values,
which iteratively compute orthogonal Haar wavelet transforms. Multiscale
neighborhoods of unknown graphs are estimated by minimizing an average total
variation, with a pair matching algorithm of polynomial complexity. Supervised
classification with dimension reduction is tested on databases of scrambled
images, and for signals sampled on unknown irregular grids on a sphere.
| Xu Chen, Xiuyuan Cheng and St\'ephane Mallat | null | 1406.2390 | null | null |
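The building block of the cascade is small enough to state directly: given a pairing of nodes at each layer, the transform maps each pair of coefficients (a, b) to (a + b, |a - b|). A minimal NumPy sketch with a fixed consecutive pairing (the paper instead estimates pairings by minimizing an average total variation):

```python
import numpy as np

def haar_scattering_layer(x, pairs):
    """Map each pair (a, b) of coefficients to (a + b, |a - b|)."""
    out = []
    for i, j in pairs:
        out.append(x[i] + x[j])
        out.append(np.abs(x[i] - x[j]))
    return np.array(out)

# Toy example: a signal on 8 graph nodes, paired consecutively (assumed
# pairing; the paper learns pairings from multiscale neighborhood estimates).
x = np.random.randn(8)
for _ in range(3):  # deep cascade of additions, subtractions, absolute values
    pairs = [(2 * k, 2 * k + 1) for k in range(len(x) // 2)]
    x = haar_scattering_layer(x, pairs)
print(x)
```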
ExpertBayes: Automatically refining manually built Bayesian networks | cs.AI cs.LG stat.ML | Bayesian network structures are usually built using only the data and
starting from an empty network or from a naive Bayes structure. Very often, in
some domains, such as medicine, prior structural knowledge is already available. This
structure can be automatically or manually refined in search for better
performance models. In this work, we take Bayesian networks built by
specialists and show that minor perturbations to this original network can
yield better classifiers with a very small computational cost, while
maintaining most of the intended meaning of the original model.
| Ezilda Almeida, Pedro Ferreira, Tiago Vinhoza, In\^es Dutra, Jingwei
Li, Yirong Wu, Elizabeth Burnside | null | 1406.2395 | null | null |
Why do linear SVMs trained on HOG features perform so well? | cs.CV cs.LG | Linear Support Vector Machines trained on HOG features are now a de facto
standard across many visual perception tasks. Their popularisation can largely
be attributed to the step-change in performance they brought to pedestrian
detection, and their subsequent successes in deformable parts models. This
paper explores the interactions that make the HOG-SVM symbiosis perform so
well. By connecting the feature extraction and learning processes rather than
treating them as disparate plugins, we show that HOG features can be viewed as
doing two things: (i) inducing capacity in, and (ii) adding prior to a linear
SVM trained on pixels. From this perspective, preserving second-order
statistics and locality of interactions are key to good performance. We
demonstrate surprising accuracy on expression recognition and pedestrian
detection tasks, by assuming only the importance of preserving such local
second-order interactions.
| Hilton Bristow, Simon Lucey | null | 1406.2419 | null | null |
Budget-Constrained Item Cold-Start Handling in Collaborative Filtering
Recommenders via Optimal Design | cs.IR cs.LG | It is well known that collaborative filtering (CF) based recommender systems
provide better modeling of users and items associated with considerable rating
history. The lack of historical ratings results in the user and the item
cold-start problems. The latter is the main focus of this work. Most of the
current literature addresses this problem by integrating content-based
recommendation techniques to model the new item. However, in many cases such
content is not available, and the question arises whether this problem can
be mitigated using CF techniques only. We formalize this problem as an
optimization problem: given a new item, a pool of available users, and a budget
constraint, select which users to assign with the task of rating the new item
in order to minimize the prediction error of our model. We show that the
objective function is monotone-supermodular, and propose efficient optimal
design based algorithms that attain an approximation to its optimum. Our
findings are verified by an empirical study using the Netflix dataset, where
the proposed algorithms outperform several baselines for the problem at hand.
| Oren Anava, Shahar Golan, Nadav Golbandi, Zohar Karnin, Ronny Lempel,
Oleg Rokhlenko, Oren Somekh | null | 1406.2431 | null | null |
Exploring Algorithmic Limits of Matrix Rank Minimization under Affine
Constraints | cs.LG stat.ML | Many applications require recovering a matrix of minimal rank within an
affine constraint set, with matrix completion a notable special case. Because
the problem is NP-hard in general, it is common to replace the matrix rank with
the nuclear norm, which acts as a convenient convex surrogate. While elegant
theoretical conditions elucidate when this replacement is likely to be
successful, they are highly restrictive and convex algorithms fail when the
ambient rank is too high or when the constraint set is poorly structured.
Non-convex alternatives fare somewhat better when carefully tuned; however,
convergence to locally optimal solutions remains a continuing source of
failure. Against this backdrop we derive a deceptively simple and
parameter-free probabilistic PCA-like algorithm that is capable, over a wide
battery of empirical tests, of successful recovery even at the theoretical
limit where the number of measurements equals the degrees of freedom in the
unknown low-rank matrix. Somewhat surprisingly, this is possible even when the
affine constraint set is highly ill-conditioned. While proving general recovery
guarantees remains elusive for non-convex algorithms, Bayesian-inspired or
otherwise, we nonetheless show conditions whereby the underlying cost function
has a unique stationary point located at the global optimum; no existing cost
function we are aware of satisfies this same property. We conclude with a
simple computer vision application involving image rectification and a standard
collaborative filtering benchmark.
| Bo Xin and David Wipf | null | 1406.2504 | null | null |
FrameNet CNL: a Knowledge Representation and Information Extraction
Language | cs.CL cs.AI cs.IR cs.LG | The paper presents a FrameNet-based information extraction and knowledge
representation framework, called FrameNet-CNL. The framework is used on natural
language documents and represents the extracted knowledge in a tailor-made
Frame-ontology from which unambiguous FrameNet-CNL paraphrase text can be
generated automatically in multiple languages. This approach brings together
the fields of information extraction and CNL, because a source text can be
considered to belong to FrameNet-CNL if the information extraction parser produces
the correct knowledge representation as a result. We describe a
state-of-the-art information extraction parser used by a national news agency
and speculate that FrameNet-CNL eventually could shape the natural language
subset used for writing the newswire articles.
| Guntis Barzdins | null | 1406.2538 | null | null |
Predictive Entropy Search for Efficient Global Optimization of Black-box
Functions | stat.ML cs.LG | We propose a novel information-theoretic approach for Bayesian optimization
called Predictive Entropy Search (PES). At each iteration, PES selects the next
evaluation point that maximizes the expected information gained with respect to
the global maximum. PES codifies this intractable acquisition function in terms
of the expected reduction in the differential entropy of the predictive
distribution. This reformulation allows PES to obtain approximations that are
both more accurate and efficient than other alternatives such as Entropy Search
(ES). Furthermore, PES can easily perform a fully Bayesian treatment of the
model hyperparameters while ES cannot. We evaluate PES in both synthetic and
real-world applications, including optimization problems in machine learning,
finance, biotechnology, and robotics. We show that the increased accuracy of
PES leads to significant gains in optimization performance.
| Jos\'e Miguel Hern\'andez-Lobato, Matthew W. Hoffman, Zoubin
Ghahramani | null | 1406.2541 | null | null |
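As a reference point, the entropy-reduction reformulation described above can be written as follows, where x_* denotes the location of the global maximum and D the data observed so far (our transcription):

```latex
\alpha_{\mathrm{PES}}(\mathbf{x}) =
  H\big[ p(y \mid \mathcal{D}, \mathbf{x}) \big]
  - \mathbb{E}_{p(\mathbf{x}_{*} \mid \mathcal{D})}
    \Big[ H\big[ p(y \mid \mathcal{D}, \mathbf{x}, \mathbf{x}_{*}) \big] \Big]
```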
Identifying and attacking the saddle point problem in high-dimensional
non-convex optimization | cs.LG math.OC stat.ML | A central challenge to many fields of science and engineering involves
minimizing non-convex error functions over continuous, high dimensional spaces.
Gradient descent or quasi-Newton methods are almost ubiquitously used to
perform such minimizations, and it is often thought that a main source of
difficulty for these local methods to find the global minimum is the
proliferation of local minima with much higher error than the global minimum.
Here we argue, based on results from statistical physics, random matrix theory,
neural network theory, and empirical evidence, that a deeper and more profound
difficulty originates from the proliferation of saddle points, not local
minima, especially in high dimensional problems of practical interest. Such
saddle points are surrounded by high error plateaus that can dramatically slow
down learning, and give the illusory impression of the existence of a local
minimum. Motivated by these arguments, we propose a new approach to
second-order optimization, the saddle-free Newton method, that can rapidly
escape high dimensional saddle points, unlike gradient descent and quasi-Newton
methods. We apply this algorithm to deep or recurrent neural network training,
and provide numerical evidence for its superior optimization performance.
| Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya
Ganguli and Yoshua Bengio | null | 1406.2572 | null | null |
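The modification relative to a plain Newton step is to rescale the gradient by the inverse of |H|, the Hessian with its eigenvalues replaced by their absolute values, so that negative-curvature directions are descended rather than ascended. A minimal dense-matrix sketch on a toy saddle (the paper uses Krylov-subspace machinery for large networks; the toy function and damping constant are ours):

```python
import numpy as np

def saddle_free_newton_step(grad, hess, damping=1e-4):
    """Newton step rescaled by |H|: flips negative-curvature directions."""
    eigvals, eigvecs = np.linalg.eigh(hess)
    abs_h = eigvecs @ np.diag(np.abs(eigvals) + damping) @ eigvecs.T
    return -np.linalg.solve(abs_h, grad)

# Toy saddle: f(x, y) = x^2 - y^2 has a saddle point at the origin.
def grad_f(p):
    return np.array([2 * p[0], -2 * p[1]])

def hess_f(p):
    return np.array([[2.0, 0.0], [0.0, -2.0]])

p = np.array([0.1, 0.1])
for _ in range(5):
    p = p + saddle_free_newton_step(grad_f(p), hess_f(p))
print(p)  # |y| grows: the iterate escapes the saddle along the -y curvature
```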
Identification of Orchid Species Using Content-Based Flower Image
Retrieval | cs.CV cs.IR cs.LG | In this paper, we developed a system for recognizing orchid species from
flower images. We used the MSRM (Maximal Similarity based on Region
Merging) method for segmenting the flower object from the background and
extracting the shape feature such as the distance from the edge to the centroid
point of the flower, aspect ratio, roundness, moment invariant, fractal
dimension and also extract color feature. We used HSV color feature with
ignoring the V value. To retrieve the image, we used Support Vector Machine
(SVM) method. Orchid is a unique flower. It has a part of flower called lip
(labellum) that distinguishes it from other flowers even from other types of
orchids. Thus, in this paper, we proposed to do feature extraction not only on
flower region but also on lip (labellum) region. The result shows that our
proposed method can increase the accuracy value of content based flower image
retrieval for orchid species up to $\pm$ 14%. The most dominant feature is
Centroid Contour Distance, Moment Invariant and HSV Color. The system accuracy
is 85.33% in the validation phase and 79.33% in the testing phase.
| D. H. Apriyanti, A.A. Arymurthy, L.T. Handoko | 10.1109/IC3INA.2013.6819148 | 1406.2580 | null | null |
Probabilistic ODE Solvers with Runge-Kutta Means | stat.ML cs.LG cs.NA math.NA | Runge-Kutta methods are the classic family of solvers for ordinary
differential equations (ODEs), and the basis for the state of the art. Like
most numerical methods, they return point estimates. We construct a family of
probabilistic numerical methods that instead return a Gauss-Markov process
defining a probability distribution over the ODE solution. In contrast to prior
work, we construct this family such that posterior means match the outputs of
the Runge-Kutta family exactly, thus inheriting their proven good properties.
Remaining degrees of freedom not identified by the match to Runge-Kutta are
chosen such that the posterior probability measure fits the observed structure
of the ODE. Our results shed light on the structure of Runge-Kutta solvers from
a new direction, provide a richer, probabilistic output, have low computational
cost, and raise new research questions.
| Michael Schober, David Duvenaud, Philipp Hennig | null | 1406.2582 | null | null |
Graph Approximation and Clustering on a Budget | stat.ML cs.AI cs.CV cs.LG | We consider the problem of learning from a similarity matrix (such as
spectral clustering and low-dimensional embedding), when computing pairwise
similarities is costly, and only a limited number of entries can be observed.
We provide a theoretical analysis using standard notions of graph
approximation, significantly generalizing previous results (which focused on
spectral clustering with two clusters). We also propose a new algorithmic
approach based on adaptive sampling, which experimentally matches or improves
on previous methods, while being considerably more general and computationally
cheaper.
| Ethan Fetaya, Ohad Shamir and Shimon Ullman | null | 1406.2602 | null | null |
PlanIt: A Crowdsourcing Approach for Learning to Plan Paths from Large
Scale Preference Feedback | cs.RO cs.AI cs.LG | We consider the problem of learning user preferences over robot trajectories
for environments rich in objects and humans. This is challenging because the
criterion defining a good trajectory varies with users, tasks and interactions
in the environment. We represent trajectory preferences using a cost function
that the robot learns and uses to generate good trajectories in new
environments. We design a crowdsourcing system - PlanIt, where non-expert users
label segments of the robot's trajectory. PlanIt allows us to collect a large
amount of user feedback, and using the weak and noisy labels from PlanIt we
learn the parameters of our model. We test our approach on 122 different
environments for robotic navigation and manipulation tasks. Our extensive
experiments show that the learned cost function generates preferred
trajectories in human environments. Our crowdsourcing system is publicly
available for the visualization of the learned costs and for providing
preference feedback: \url{http://planit.cs.cornell.edu}
| Ashesh Jain, Debarghya Das, Jayesh K Gupta, Ashutosh Saxena | null | 1406.2616 | null | null |
Equivalence of Learning Algorithms | cs.LG stat.ML | The purpose of this paper is to introduce a concept of equivalence between
machine learning algorithms. We define two notions of algorithmic equivalence,
namely, weak and strong equivalence. These notions are of paramount importance
for identifying when learning properties from one learning algorithm can be
transferred to another. Using regularized kernel machines as a case study, we
illustrate the importance of the introduced equivalence concept by analyzing
the relation between kernel ridge regression (KRR) and m-power regularized
least squares regression (M-RLSR) algorithms.
| Julien Audiffren (CMLA), Hachem Kadri (LIF) | null | 1406.2622 | null | null |
A New 2.5D Representation for Lymph Node Detection using Random Sets of
Deep Convolutional Neural Network Observations | cs.CV cs.LG cs.NE | Automated Lymph Node (LN) detection is an important clinical diagnostic task
but very challenging due to the low contrast of surrounding structures in
Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely
distributed locations. State-of-the-art studies show the performance range of
52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1
FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this
paper, we first operate a preliminary candidate generation stage, towards 100%
sensitivity at the cost of high FP levels (40 per patient), to harvest volumes
of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by
resampling 2D reformatted orthogonal views N times, via scale, random
translations, and rotations with respect to the VOI centroid coordinates. These
random views are then used to train a deep Convolutional Neural Network (CNN)
classifier. In testing, the CNN is employed to assign LN probabilities for all
N random views that can be simply averaged (as a set) to compute the final
classification probability per VOI. We validate the approach on two datasets:
90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs.
We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in
mediastinum and abdomen respectively, which drastically improves over the
previous state-of-the-art work.
| Holger R. Roth and Le Lu and Ari Seff and Kevin M. Cherry and Joanne
Hoffman and Shijun Wang and Jiamin Liu and Evrim Turkbey and Ronald M.
Summers | 10.1007/978-3-319-10404-1_65 | 1406.2639 | null | null |
Learning with Cross-Kernels and Ideal PCA | cs.LG math.AC stat.ML | We describe how cross-kernel matrices, that is, kernel matrices between the
data and a custom chosen set of `feature spanning points' can be used for
learning. The main potential of cross-kernels lies in the fact that (a) only
one side of the matrix scales with the number of data points, and (b)
cross-kernels, as opposed to the usual kernel matrices, can be used to certify
for the data manifold. Our theoretical framework, which is based on a duality
involving the feature space and vanishing ideals, indicates that cross-kernels
have the potential to be used for any kind of kernel learning. We present a
novel algorithm, Ideal PCA (IPCA), which cross-kernelizes PCA. We demonstrate
on real and synthetic data that IPCA allows us to (a) obtain PCA-like features
faster and (b) to extract novel and empirically validated features certifying
for the data manifold.
| Franz J Kir\'aly, Martin Kreuzer, Louis Theran | null | 1406.2646 | null | null |
Generative Adversarial Networks | stat.ML cs.LG | We propose a new framework for estimating generative models via an
adversarial process, in which we simultaneously train two models: a generative
model G that captures the data distribution, and a discriminative model D that
estimates the probability that a sample came from the training data rather than
G. The training procedure for G is to maximize the probability of D making a
mistake. This framework corresponds to a minimax two-player game. In the space
of arbitrary functions G and D, a unique solution exists, with G recovering the
training data distribution and D equal to 1/2 everywhere. In the case where G
and D are defined by multilayer perceptrons, the entire system can be trained
with backpropagation. There is no need for any Markov chains or unrolled
approximate inference networks during either training or generation of samples.
Experiments demonstrate the potential of the framework through qualitative and
quantitative evaluation of the generated samples.
| Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David
Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio | null | 1406.2661 | null | null |
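The minimax game summarized above corresponds to the value function below, where p_data is the data distribution and p_z the noise prior from which G draws its inputs:

```latex
\min_{G} \max_{D} \; V(D, G) =
  \mathbb{E}_{\mathbf{x} \sim p_{\mathrm{data}}(\mathbf{x})}\big[ \log D(\mathbf{x}) \big]
  + \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}(\mathbf{z})}\big[ \log\big(1 - D(G(\mathbf{z}))\big) \big]
```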
Mondrian Forests: Efficient Online Random Forests | stat.ML cs.LG | Ensembles of randomized decision trees, usually referred to as random
forests, are widely used for classification and regression tasks in machine
learning and statistics. Random forests achieve competitive predictive
performance and are computationally efficient to train and test, making them
excellent candidates for real-world prediction tasks. The most popular random
forest variants (such as Breiman's random forest and extremely randomized
trees) operate on batches of training data. Online methods are now in greater
demand. Existing online random forests, however, require more training data
than their batch counterpart to achieve comparable predictive performance. In
this work, we use Mondrian processes (Roy and Teh, 2009) to construct ensembles
of random decision trees we call Mondrian forests. Mondrian forests can be
grown in an incremental/online fashion and remarkably, the distribution of
online Mondrian forests is the same as that of batch Mondrian forests. Mondrian
forests achieve competitive predictive performance comparable with existing
online random forests and periodically re-trained batch random forests, while
being more than an order of magnitude faster, thus representing a better
computation vs accuracy tradeoff.
| Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh | null | 1406.2673 | null | null |
A Multiplicative Model for Learning Distributed Text-Based Attribute
Representations | cs.LG cs.CL | In this paper we propose a general framework for learning distributed
representations of attributes: characteristics of text whose representations
can be jointly learned with word embeddings. Attributes can correspond to
document indicators (to learn sentence vectors), language indicators (to learn
distributed language representations), meta-data and side information (such as
the age, gender and industry of a blogger) or representations of authors. We
describe a third-order model where word context and attribute vectors interact
multiplicatively to predict the next word in a sequence. This leads to the
notion of conditional word similarity: how meanings of words change when
conditioned on different attributes. We perform several experimental tasks
including sentiment classification, cross-lingual document classification, and
blog authorship attribution. We also qualitatively evaluate conditional word
neighbours and attribute-conditioned text generation.
| Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov | null | 1406.2710 | null | null |
Learning Latent Variable Gaussian Graphical Models | stat.ML cs.LG math.ST stat.TH | Gaussian graphical models (GGM) have been widely used in many
high-dimensional applications ranging from biological and financial data to
recommender systems. Sparsity in GGM plays a central role both statistically
and computationally. Unfortunately, real-world data often does not fit well to
sparse graphical models. In this paper, we focus on a family of latent variable
Gaussian graphical models (LVGGM), where the model is conditionally sparse
given latent variables, but marginally non-sparse. In LVGGM, the inverse
covariance matrix has a low-rank plus sparse structure, and can be learned in a
regularized maximum likelihood framework. We derive novel parameter estimation
error bounds for LVGGM under mild conditions in the high-dimensional setting.
These results complement the existing theory on the structural learning, and
open up new possibilities of using LVGGM for statistical inference.
| Zhaoshi Meng, Brian Eriksson, Alfred O. Hero III | null | 1406.2721 | null | null |
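To spell out the structure referred to above: marginalizing the latent variables gives an observed-variable precision matrix of sparse-minus-low-rank form, and the regularized maximum likelihood program takes roughly the following shape, where \(\hat{\Sigma}\) is the sample covariance and \(\lambda, \gamma\) are regularization weights (our transcription of the standard LVGGM formulation):

```latex
\tilde{\Omega} = S - L, \qquad
(\hat{S}, \hat{L}) \in \arg\min_{S - L \succ 0,\; L \succeq 0}
  \; \operatorname{tr}\big( \hat{\Sigma} (S - L) \big) - \log\det(S - L)
  + \lambda \lVert S \rVert_{1} + \gamma \, \operatorname{tr}(L)
```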
Deep Epitomic Convolutional Neural Networks | cs.CV cs.LG | Deep convolutional neural networks have recently proven extremely competitive
in challenging image recognition tasks. This paper proposes the epitomic
convolution as a new building block for deep neural networks. An epitomic
convolution layer replaces a pair of consecutive convolution and max-pooling
layers found in standard deep convolutional neural networks. The main version
of the proposed model uses mini-epitomes in place of filters and computes
responses invariant to small translations by epitomic search instead of
max-pooling over image positions. The topographic version of the proposed model
uses large epitomes to learn filter maps organized in translational
topographies. We show that error back-propagation can successfully learn
multiple epitomic layers in a supervised fashion. The effectiveness of the
proposed method is assessed in image classification tasks on standard
benchmarks. Our experiments on Imagenet indicate improved recognition
performance compared to standard convolutional neural networks of similar
architecture. Our models pre-trained on Imagenet perform excellently on
Caltech-101. We also obtain competitive image classification results on the
small-image MNIST and CIFAR-10 datasets.
| George Papandreou | null | 1406.2732 | null | null |
Reweighted Wake-Sleep | cs.LG | Training deep directed graphical models with many hidden variables and
performing inference remains a major challenge. Helmholtz machines and deep
belief networks are such models, and the wake-sleep algorithm has been proposed
to train them. The wake-sleep algorithm relies on training not just the
directed generative model but also a conditional generative model (the
inference network) that runs backward from visible to latent, estimating the
posterior distribution of latent given visible. We propose a novel
interpretation of the wake-sleep algorithm which suggests that better
estimators of the gradient can be obtained by sampling latent variables
multiple times from the inference network. This view is based on importance
sampling as an estimator of the likelihood, with the approximate inference
network as a proposal distribution. This interpretation is confirmed
experimentally, showing that better likelihood can be achieved with this
reweighted wake-sleep procedure. Based on this interpretation, we propose that
a sigmoidal belief network is not sufficiently powerful for the layers of the
inference network in order to recover a good estimator of the posterior
distribution of latent variables. Our experiments show that using a more
powerful layer model, such as NADE, yields substantially better generative
models.
| J\"org Bornschein and Yoshua Bengio | null | 1406.2751 | null | null |
A machine-compiled macroevolutionary history of Phanerozoic life | cs.DB cs.CL cs.LG q-bio.PE | Many aspects of macroevolutionary theory and our understanding of biotic
responses to global environmental change derive from literature-based
compilations of palaeontological data. Existing manually assembled databases
are, however, incomplete and difficult to assess and enhance. Here, we develop
and validate the quality of a machine reading system, PaleoDeepDive, that
automatically locates and extracts data from heterogeneous text, tables, and
figures in publications. PaleoDeepDive performs comparably to humans in complex
data extraction and inference tasks and generates congruent synthetic
macroevolutionary results. Unlike traditional databases, PaleoDeepDive produces
a probabilistic database that systematically improves as information is added.
We also show that the system can readily accommodate sophisticated data types,
such as morphological data in biological illustrations and associated textual
descriptions. Our machine reading approach to scientific data integration and
synthesis brings within reach many questions that are currently underdetermined
and does so in ways that may stimulate entirely new modes of inquiry.
| Shanan E. Peters, Ce Zhang, Miron Livny, Christopher R\'e | null | 1406.2963 | null | null |
Truncated Nuclear Norm Minimization for Image Restoration Based On
Iterative Support Detection | cs.CV cs.LG stat.ML | Recovering a large matrix from limited measurements is a challenging task
arising in many real applications, such as image inpainting, compressive
sensing and medical imaging, and such problems are mostly formulated as
low-rank matrix approximation problems. Due to the rank operator being
non-convex and discontinuous, most of the recent theoretical studies use the
nuclear norm as a convex relaxation and the low-rank matrix recovery problem is
solved through minimization of the nuclear norm regularized problem. However, a
major limitation of nuclear norm minimization is that all the singular values
are simultaneously minimized and the rank may not be well approximated
\cite{hu2012fast}. Correspondingly, in this paper, we propose a new multi-stage
algorithm, which makes use of the concept of Truncated Nuclear Norm
Regularization (TNNR) proposed in \citep{hu2012fast} and Iterative Support
Detection (ISD) proposed in \citep{wang2010sparse} to overcome the above
limitation. Besides matrix completion problems considered in
\citep{hu2012fast}, the proposed method can be also extended to the general
low-rank matrix recovery problems. Extensive experiments well validate the
superiority of our new algorithms over other state-of-the-art methods.
| Yilun Wang and Xinhua Su | null | 1406.2969 | null | null |
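For reference, the truncated nuclear norm of \citep{hu2012fast} leaves the r largest singular values out of the penalty, so that only the tail of the spectrum is minimized:

```latex
\lVert X \rVert_{r} = \sum_{i = r + 1}^{\min(m, n)} \sigma_{i}(X)
```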
Techniques for Learning Binary Stochastic Feedforward Neural Networks | stat.ML cs.LG cs.NE | Stochastic binary hidden units in a multi-layer perceptron (MLP) network give
at least three potential benefits when compared to deterministic MLP networks.
(1) They make it possible to learn one-to-many mappings. (2) They can be used in
structured prediction problems, where modeling the internal structure of the
output is important. (3) Stochasticity has been shown to be an excellent
regularizer, which makes generalization performance potentially better in
general. However, training stochastic networks is considerably more difficult.
We study training using M samples of hidden activations per input. We show that
the case M=1 leads to a fundamentally different behavior where the network
tries to avoid stochasticity. We propose two new estimators for the training
gradient and propose benchmark tests for comparing training algorithms. Our
experiments confirm that training stochastic networks is difficult and show
that the proposed two estimators perform favorably among all the five known
estimators.
| Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh | null | 1406.2989 | null | null |
"Mental Rotation" by Optimizing Transforming Distance | cs.LG cs.CV | The human visual system is able to recognize objects despite transformations
that can drastically alter their appearance. To this end, much effort has been
devoted to the invariance properties of recognition systems. Invariance can be
engineered (e.g. convolutional nets), or learned from data explicitly (e.g.
temporal coherence) or implicitly (e.g. by data augmentation). One idea that
has not, to date, been explored is the integration of latent variables which
permit a search over a learned space of transformations. Motivated by evidence
that people mentally simulate transformations in space while comparing
examples, so-called "mental rotation", we propose a transforming distance.
Here, a trained relational model actively transforms pairs of examples so that
they are maximally similar in some feature space yet respect the learned
transformational constraints. We apply our method to nearest-neighbour problems
on the Toronto Face Database and NORB.
| Weiguang Ding, Graham W. Taylor | null | 1406.3010 | null | null |
Learning ELM network weights using linear discriminant analysis | cs.NE cs.LG stat.ML | We present an alternative to the pseudo-inverse method for determining the
hidden to output weight values for Extreme Learning Machines performing
classification tasks. The method is based on linear discriminant analysis and
provides Bayes optimal single point estimates for the weight values.
| Philip de Chazal, Jonathan Tapson and Andr\'e van Schaik | null | 1406.3100 | null | null |
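A minimal sketch of the idea: build a standard ELM hidden layer with random input weights and a sigmoid nonlinearity, then fit the hidden-to-output mapping by linear discriminant analysis instead of the usual pseudo-inverse. The scikit-learn LDA below is a stand-in for the paper's closed-form estimates, and the toy data is ours.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # toy inputs
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy labels

# Random input-to-hidden weights, as in a standard ELM.
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden activations

# Hidden-to-output mapping fitted by LDA instead of the pseudo-inverse.
readout = LinearDiscriminantAnalysis().fit(H, y)
print(readout.score(H, y))
```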
A Cascade Neural Network Architecture investigating Surface Plasmon
Polaritons propagation for thin metals in OpenMP | cs.NE cond-mat.mes-hall cond-mat.mtrl-sci cs.DC cs.LG | Surface plasmon polaritons (SPPs) confined along metal-dielectric interface
have attracted a relevant interest in the area of ultracompact photonic
circuits, photovoltaic devices and other applications due to their strong field
confinement and enhancement. This paper investigates a novel cascade neural
network (NN) architecture to find the dependence of the SPP propagation on the
metal thickness. Additionally, a novel training procedure for the proposed cascade
NN has been developed using an OpenMP-based framework, thus greatly reducing
training time. The performed experiments confirm the effectiveness of the
proposed NN architecture for the problem at hand.
| Francesco Bonanno, Giacomo Capizzi, Grazia Lo Sciuto, Christian
Napoli, Giuseppe Pappalardo, Emiliano Tramontana | null | 1406.3149 | null | null |
Online Optimization for Large-Scale Max-Norm Regularization | stat.ML cs.LG | Max-norm regularizer has been extensively studied in the last decade as it
promotes an effective low-rank estimation for the underlying data. However,
such max-norm regularized problems are typically formulated and solved in a
batch manner, which prevents them from processing big data due to memory
constraints. Hence, in this paper, we propose an online algorithm that is
scalable to the large-scale setting. In particular, we consider the matrix decomposition
problem as an example, although a simple variant of the algorithm and analysis
can be adapted to other important problems such as matrix completion. The
crucial technique in our implementation is reformulating the max-norm into an
equivalent matrix factorization form, where the factors consist of a (possibly
overcomplete) basis component and a coefficient component. In this way, we may
maintain the basis component in memory and alternately optimize over it and the
coefficients for each sample. Since the memory footprint of the
basis component is independent of the sample size, our algorithm is appealing
when manipulating a large collection of samples. We prove that the sequence of
the solutions (i.e., the basis component) produced by our algorithm converges
to a stationary point of the expected loss function asymptotically. Numerical
study demonstrates encouraging results for the efficacy and robustness of our
algorithm compared to the widely used nuclear norm solvers.
| Jie Shen and Huan Xu and Ping Li | null | 1406.3190 | null | null |
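The reformulation referenced above rests on the factorization characterization of the max-norm, where \(\lVert \cdot \rVert_{2,\infty}\) denotes the largest row \(\ell_2\)-norm (our transcription; the basis and coefficient components correspond to the two factors):

```latex
\lVert X \rVert_{\max} = \min_{U V^{\top} = X} \; \lVert U \rVert_{2,\infty} \, \lVert V \rVert_{2,\infty}
```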
Scheduled denoising autoencoders | cs.LG stat.ML | We present a representation learning method that learns features at multiple
different levels of scale. Working within the unsupervised framework of
denoising autoencoders, we observe that when the input is heavily corrupted
during training, the network tends to learn coarse-grained features, whereas
when the input is only slightly corrupted, the network tends to learn
fine-grained features. This motivates the scheduled denoising autoencoder,
which starts with a high level of noise that lowers as training progresses. We
find that the resulting representation yields a significant boost on a later
supervised task compared to the original input, or to a standard denoising
autoencoder trained at a single noise level. After supervised fine-tuning our
best model achieves the lowest ever reported error on the CIFAR-10 data set
among permutation-invariant methods.
| Krzysztof J. Geras and Charles Sutton | null | 1406.3269 | null | null |
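The scheduling itself is a one-line change to ordinary denoising-autoencoder training: corrupt the inputs at a noise level that decays as training progresses. A minimal NumPy sketch with masking noise, a tied-weight autoencoder, and a linear schedule; the architecture, schedule shape, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((512, 64))                 # toy data in [0, 1]
W = rng.normal(0, 0.01, (64, 32))         # encoder weights (tied decoder: W.T)
b_h, b_v = np.zeros(32), np.zeros(64)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

n_epochs, lr = 20, 0.1
for epoch in range(n_epochs):
    # Linear schedule: corruption probability decays from 0.7 to 0.1.
    p_corrupt = 0.7 - (0.7 - 0.1) * epoch / (n_epochs - 1)
    mask = rng.random(X.shape) > p_corrupt
    X_tilde = X * mask                    # masking noise
    h = sigmoid(X_tilde @ W + b_h)        # encode the corrupted input
    X_hat = sigmoid(h @ W.T + b_v)        # decode with tied weights
    # Squared-error gradient, backpropagated through the tied autoencoder.
    d_out = (X_hat - X) * X_hat * (1 - X_hat)
    d_hid = (d_out @ W) * h * (1 - h)
    W -= lr * (X_tilde.T @ d_hid + d_out.T @ h) / len(X)
    b_v -= lr * d_out.mean(0)
    b_h -= lr * d_hid.mean(0)
```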
Kalman Temporal Differences | cs.LG | Because reinforcement learning suffers from a lack of scalability, online
value (and Q-) function approximation has received increasing interest over the
last decade. This contribution introduces a novel approximation scheme, namely
the Kalman Temporal Differences (KTD) framework, that exhibits the following
features: sample-efficiency, non-linear approximation, non-stationarity
handling and uncertainty management. A first KTD-based algorithm is provided
for deterministic Markov Decision Processes (MDP) which produces biased
estimates in the case of stochastic transitions. Then the eXtended KTD
framework (XKTD), which solves stochastic MDPs, is described. Convergence is analyzed
for special cases for both deterministic and stochastic transitions. Related
algorithms are experimented on classical benchmarks. They compare favorably to
the state of the art while exhibiting the announced features.
| Matthieu Geist, Olivier Pietquin | 10.1613/jair.3077 | 1406.3270 | null | null |
Convolutional Kernel Networks | cs.CV cs.LG stat.ML | An important goal in visual recognition is to devise image representations
that are invariant to particular transformations. In this paper, we address
this goal with a new type of convolutional neural network (CNN) whose
invariance is encoded by a reproducing kernel. Unlike traditional approaches
where neural networks are learned either to represent data or for solving a
classification task, our network learns to approximate the kernel feature map
on training data. Such an approach enjoys several benefits over classical ones.
First, by teaching CNNs to be invariant, we obtain simple network architectures
that achieve a similar accuracy to more complex ones, while being easy to train
and robust to overfitting. Second, we bridge a gap between the neural network
literature and kernels, which are natural tools to model invariance. We
evaluate our methodology on visual recognition tasks where CNNs have proven to
perform well, e.g., digit recognition with the MNIST dataset, and the more
challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive
with the state of the art.
| Julien Mairal (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire Jean
Kuntzmann), Piotr Koniusz (INRIA Grenoble Rh\^one-Alpes / LJK Laboratoire
Jean Kuntzmann), Zaid Harchaoui (INRIA Grenoble Rh\^one-Alpes / LJK
Laboratoire Jean Kuntzmann), Cordelia Schmid (INRIA Grenoble Rh\^one-Alpes /
LJK Laboratoire Jean Kuntzmann) | null | 1406.3332 | null | null |
Restricted Boltzmann Machine for Classification with Hierarchical
Correlated Prior | cs.LG | Restricted Boltzmann machines (RBM) and their variants have recently become
popular research topics, and have been widely applied to many classification problems, such as
character recognition and document categorization. Often, classification RBM
ignores the interclass relationship or prior knowledge of sharing information
among classes. In this paper, we are interested in RBM with the hierarchical
prior over classes. We assume parameters for nearby nodes are correlated in the
hierarchical tree, and further require the parameters at each node of the tree
to be orthogonal to those at its ancestors. We propose a hierarchical correlated RBM
for classification problem, which generalizes the classification RBM with
sharing information among different classes. In order to reduce the redundancy
between node parameters in the hierarchy, we also introduce orthogonal
restrictions to our objective function. We test our method on challenging
datasets, and show promising results compared to competitive baselines.
| Gang Chen and Sargur H. Srihari | null | 1406.3407 | null | null |
Heterogeneous Multi-task Learning for Human Pose Estimation with Deep
Convolutional Neural Network | cs.CV cs.LG cs.NE | We propose a heterogeneous multi-task learning framework for human pose
estimation from monocular image with deep convolutional neural network. In
particular, we simultaneously learn a pose-joint regressor and a sliding-window
body-part detector in a deep network architecture. We show that including the
body-part detection task helps to regularize the network, directing it to
converge to a good solution. We report competitive and state-of-the-art results on
several data sets. We also empirically show that the learned neurons in the
middle layer of our network are tuned to localized body parts.
| Sijin Li, Zhi-Qiang Liu, Antoni B. Chan | null | 1406.3474 | null | null |
EigenEvent: An Algorithm for Event Detection from Complex Data Streams
in Syndromic Surveillance | cs.AI cs.LG stat.AP | Syndromic surveillance systems continuously monitor multiple pre-diagnostic
daily streams of indicators from different regions with the aim of early
detection of disease outbreaks. The main objective of these systems is to
detect outbreaks hours or days before the clinical and laboratory confirmation.
The type of data that is being generated via these systems is usually
multivariate and seasonal with spatial and temporal dimensions. The algorithm
What's Strange About Recent Events (WSARE) is the state-of-the-art method for
such problems. It exhaustively searches for contrast sets in the multivariate
data and signals an alarm when it finds statistically significant rules. This
bottom-up approach presents a much lower detection delay compared to the existing
top-down approaches. However, WSARE is very sensitive to the small-scale
changes and subsequently comes with a relatively high rate of false alarms. We
propose a new approach called EigenEvent that is neither fully top-down nor
bottom-up. In this method, instead of a top-down or bottom-up search, we track
changes in the data correlation structure via eigenspace techniques. This new
methodology enables us to detect both overall changes (via eigenvalue) and
dimension-level changes (via eigenvectors). Experimental results on a hundred
sets of benchmark data reveal that EigenEvent achieves better overall
performance compared to the state of the art, in particular in terms of the false
alarm rate.
| Hadi Fanaee-T and Jo\~ao Gama | 10.3233/IDA-150734 | 1406.3496 | null | null |
Multi-objective Reinforcement Learning with Continuous Pareto Frontier
Approximation Supplementary Material | cs.AI cs.LG | This document contains supplementary material for the paper "Multi-objective
Reinforcement Learning with Continuous Pareto Frontier Approximation",
published at the Twenty-Ninth AAAI Conference on Artificial Intelligence
(AAAI-15). The paper is about learning a continuous approximation of the Pareto
frontier in Multi-Objective Markov Decision Problems (MOMDPs). We propose a
policy-based approach that exploits gradient information to generate solutions
close to the Pareto ones. Differently from previous policy-gradient
multi-objective algorithms, where n optimization routines are used to obtain n
solutions, our approach performs a single gradient-ascent run that at each step
generates an improved continuous approximation of the Pareto frontier. The idea
is to exploit a gradient-based approach to optimize the parameters of a
function that defines a manifold in the policy parameter space so that the
corresponding image in the objective space gets as close as possible to the
Pareto frontier. Besides deriving how to compute and estimate such gradient, we
will also discuss the non-trivial issue of defining a metric to assess the
quality of the candidate Pareto frontiers. Finally, the properties of the
proposed approach are empirically evaluated on two interesting MOMDPs.
| Matteo Pirotta, Simone Parisi and Marcello Restelli | null | 1406.3497 | null | null |
Quaternion Gradient and Hessian | math.NA cs.LG | The optimization of real scalar functions of quaternion variables, such as
the mean square error or array output power, underpins many practical
applications. Solutions often require the calculation of the gradient and
Hessian, however, real functions of quaternion variables are essentially
non-analytic. To address this issue, we propose new definitions of quaternion
gradient and Hessian, based on the novel generalized HR (GHR) calculus, thus
making possible efficient derivation of optimization algorithms directly in the
quaternion field, rather than transforming the problem to the real domain, as
is current practice. In addition, unlike the existing quaternion gradients, the
GHR calculus allows for the product and chain rule, and for a one-to-one
correspondence of the proposed quaternion gradient and Hessian with their real
counterparts. Properties of the quaternion gradient and Hessian relevant to
numerical applications are elaborated, and the results illuminate the
usefulness of the GHR calculus in greatly simplifying the derivation of the
quaternion least mean squares, and in quaternion least square and Newton
algorithm. The proposed gradient and Hessian are also shown to enable the same
generic forms as the corresponding real- and complex-valued algorithms, further
illustrating the advantages in algorithm design and evaluation.
| Dongpo Xu, Danilo P. Mandic | 10.1109/TNNLS.2015.2440473 | 1406.3587 | null | null |
Smoothed Gradients for Stochastic Variational Inference | stat.ML cs.LG | Stochastic variational inference (SVI) lets us scale up Bayesian computation
to massive data. It uses stochastic optimization to fit a variational
distribution, following easy-to-compute noisy natural gradients. As with most
traditional stochastic optimization methods, SVI takes precautions to use
unbiased stochastic gradients whose expectations are equal to the true
gradients. In this paper, we explore the idea of following biased stochastic
gradients in SVI. Our method replaces the natural gradient with a similarly
constructed vector that uses a fixed-window moving average of some of its
previous terms. We will demonstrate the many advantages of this technique.
First, its computational cost is the same as for SVI and storage requirements
only multiply by a constant factor. Second, it enjoys significant variance
reduction over the unbiased estimates, smaller bias than averaged gradients,
and leads to smaller mean-squared error against the full gradient. We test our
method on latent Dirichlet allocation with three large corpora.
| Stephan Mandt and David Blei | null | 1406.3650 | null | null |
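The change to standard SVI is localized: keep a fixed-window buffer of the most recent noisy natural-gradient estimates and step along their average, which is biased but has lower variance. A minimal sketch of the update loop; the natural-gradient computation is model-specific and abstracted here as a user-supplied function, and the step-size schedule is an assumed default.

```python
from collections import deque
import numpy as np

def smoothed_svi(lam0, natural_gradient, n_steps,
                 window=10, rho=lambda t: (t + 10) ** -0.7):
    """SVI following a fixed-window moving average of noisy natural gradients.

    `natural_gradient(lam, t)` must return the usual noisy natural-gradient
    estimate for the variational parameters `lam` at step `t` (model-specific).
    """
    lam = lam0.copy()
    buf = deque(maxlen=window)            # storage grows only by a constant factor
    for t in range(n_steps):
        buf.append(natural_gradient(lam, t))
        g_bar = np.mean(list(buf), axis=0)  # biased, variance-reduced direction
        lam = lam + rho(t) * g_bar
    return lam
```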
Analyzing Social and Stylometric Features to Identify Spear phishing
Emails | cs.CY cs.LG cs.SI | Spear phishing is a complex targeted attack in which an attacker harvests
information about the victim prior to the attack. This information is then used
to create sophisticated, genuine-looking attack vectors, drawing the victim to
compromise confidential information. What makes spear phishing different, and
more powerful than normal phishing, is this contextual information about the
victim. Online social media services can be one such source for gathering vital
information about an individual. In this paper, we characterize and examine a
true positive dataset of spear phishing, spam, and normal phishing emails from
Symantec's enterprise email scanning service. We then present a model to detect
spear phishing emails sent to employees of 14 international organizations, by
using social features extracted from LinkedIn. Our dataset consists of 4,742
targeted attack emails sent to 2,434 victims, and 9,353 non targeted attack
emails sent to 5,912 non victims; and publicly available information from their
LinkedIn profiles. We applied various machine learning algorithms to this
labeled data, and achieved an overall maximum accuracy of 97.76% in identifying
spear phishing emails. We used a combination of social features from LinkedIn
profiles, and stylometric features extracted from email subjects, bodies, and
attachments. However, we achieved a slightly better accuracy of 98.28% without
the social features. Our analysis revealed that social features extracted from
LinkedIn do not help in identifying spear phishing emails. To the best of our
knowledge, this is one of the first attempts to make use of a combination of
stylometric features extracted from emails, and social features extracted from
an online social network to detect targeted spear phishing emails.
| Prateek Dewan and Anand Kashyap and Ponnurangam Kumaraguru | null | 1406.3692 | null | null |
Evaluation of Machine Learning Techniques for Green Energy Prediction | cs.LG | We evaluate the following Machine Learning techniques for Green Energy (Wind,
Solar) Prediction: Bayesian Inference, Neural Networks, Support Vector
Machines, Clustering techniques (PCA). Our objective is to predict green energy
using weather forecasts, predict deviations from forecast green energy, find
correlation amongst different weather parameters and green energy availability,
recover lost or missing energy (or weather) data. We use historical weather
data and weather forecasts throughout.
| Ankur Sahai | null | 1406.3726 | null | null |
From Stochastic Mixability to Fast Rates | cs.LG stat.ML | Empirical risk minimization (ERM) is a fundamental learning rule for
statistical learning problems where the data is generated according to some
unknown distribution $\mathsf{P}$ and returns a hypothesis $f$ chosen from a
fixed class $\mathcal{F}$ with small loss $\ell$. In the parametric setting,
depending upon $(\ell, \mathcal{F},\mathsf{P})$ ERM can have slow
$(1/\sqrt{n})$ or fast $(1/n)$ rates of convergence of the excess risk as a
function of the sample size $n$. There exist several results that give
sufficient conditions for fast rates in terms of joint properties of $\ell$,
$\mathcal{F}$, and $\mathsf{P}$, such as the margin condition and the Bernstein
condition. In the non-statistical prediction with expert advice setting, there
is an analogous slow and fast rate phenomenon, and it is entirely characterized
in terms of the mixability of the loss $\ell$ (there being no role there for
$\mathcal{F}$ or $\mathsf{P}$). The notion of stochastic mixability builds a
bridge between these two models of learning, reducing to classical mixability
in a special case. The present paper presents a direct proof of fast rates for
ERM in terms of stochastic mixability of $(\ell,\mathcal{F}, \mathsf{P})$, and
in so doing provides new insight into the fast-rates phenomenon. The proof
exploits an old result of Kemperman on the solution to the general moment
problem. We also show a partial converse that suggests a characterization of
fast rates for ERM in terms of stochastic mixability is possible.
| Nishant A. Mehta and Robert C. Williamson | null | 1406.3781 | null | null |
Interval Forecasting of Electricity Demand: A Novel Bivariate EMD-based
Support Vector Regression Modeling Framework | cs.LG stat.AP | Highly accurate interval forecasting of electricity demand is fundamental to
the success of reducing the risk when making power system planning and
operational decisions by providing a range rather than point estimation. In
this study, a novel modeling framework integrating bivariate empirical mode
decomposition (BEMD) and support vector regression (SVR), extended from the
well-established empirical mode decomposition (EMD) based time series modeling
framework in the energy demand forecasting literature, is proposed for interval
forecasting of electricity demand. The novelty of this study arises from the
employment of BEMD, a new extension of classical empirical mode decomposition
(EMD) designed to handle bivariate time series treated as complex-valued time
series, as decomposition method instead of classical EMD only capable of
decomposing one-dimensional single-valued time series. This proposed modeling
framework is endowed with BEMD to decompose simultaneously both the lower and
upper bounds time series, constructed in forms of complex-valued time series,
of electricity demand on a monthly per hour basis, resulting in capturing the
potential interrelationship between lower and upper bounds. The proposed
modeling framework is justified with monthly interval-valued electricity demand
data per hour in Pennsylvania-New Jersey-Maryland Interconnection, indicating
it as a promising method for interval-valued electricity demand forecasting.
| Tao Xiong, Yukun Bao, Zhongyi Hu | 10.1016/j.ijepes.2014.06.010 | 1406.3792 | null | null |
Simultaneous Model Selection and Optimization through Parameter-free
Stochastic Learning | cs.LG stat.ML | Stochastic gradient descent algorithms for training linear and kernel
predictors are gaining more and more importance, thanks to their scalability.
While various methods have been proposed to speed up their convergence, the
model selection phase is often ignored. In fact, theoretical works typically
make assumptions, for example, on prior knowledge of the norm
of the optimal solution, while in practice validation methods remain
the only viable approach. In this paper, we propose a new kernel-based
stochastic gradient descent algorithm that performs model selection while
training, with no parameters to tune, nor any form of cross-validation. The
algorithm builds on recent advances in online learning theory for
unconstrained settings to estimate, over time, the right regularization in a
data-dependent way. Optimal rates of convergence are proved under standard
smoothness assumptions on the target function, using the range space of the
fractional integral operator associated with the kernel.
| Francesco Orabona | null | 1406.3816 | null | null |
Modelling, Visualising and Summarising Documents with a Single
Convolutional Neural Network | cs.CL cs.LG stat.ML | Capturing the compositional process which maps the meaning of words to that
of documents is a central challenge for researchers in Natural Language
Processing and Information Retrieval. We introduce a model that is able to
represent the meaning of documents by embedding them in a low dimensional
vector space, while preserving distinctions of word and sentence order crucial
for capturing nuanced semantics. Our model is based on an extended Dynamic
Convolutional Neural Network, which learns convolution filters at both the
sentence and document level, hierarchically learning to capture and compose
low-level lexical features into high-level semantic concepts. We demonstrate the
effectiveness of this model on a range of document modelling tasks, achieving
strong results with no feature engineering and with a more compact model.
Inspired by recent advances in visualising deep convolutional networks for
computer vision, we present a novel visualisation technique for our document
networks which not only provides insight into their learning process, but also
can be interpreted to produce a compelling automatic summarisation system for
texts.
| Misha Denil and Alban Demiraj and Nal Kalchbrenner and Phil Blunsom
and Nando de Freitas | null | 1406.3830 | null | null |
An Incremental Reseeding Strategy for Clustering | stat.ML cs.LG | In this work we propose a simple and easily parallelizable algorithm for
multiway graph partitioning. The algorithm alternates between three basic
components: diffusing seed vertices over the graph, thresholding the diffused
seeds, and then randomly reseeding the thresholded clusters. We demonstrate
experimentally that the proper combination of these ingredients leads to an
algorithm that achieves state-of-the-art performance in terms of cluster purity
on standard benchmark datasets. Moreover, the algorithm runs an order of
magnitude faster than the other algorithms that achieve comparable results in
terms of accuracy. We also describe a coarsen, cluster and refine approach
similar to GRACLUS and METIS that removes an additional order of magnitude from
the runtime of our algorithm while still maintaining competitive accuracy.
| Xavier Bresson, Huiyi Hu, Thomas Laurent, Arthur Szlam, and James von
Brecht | null | 1406.3837 | null | null |
Optimal Resource Allocation with Semi-Bandit Feedback | cs.LG | We study a sequential resource allocation problem involving a fixed number of
recurring jobs. At each time-step the manager should distribute available
resources among the jobs in order to maximise the expected number of completed
jobs. Allocating more resources to a given job increases the probability that
it completes, but with a cut-off. Specifically, we assume a linear model where
the probability increases linearly until it equals one, after which allocating
additional resources is wasteful. We assume the difficulty of each job is
unknown, present the first algorithm for this problem, and prove upper and
lower bounds on its regret. Despite its apparent simplicity, the problem has a
rich structure: we show that an appropriate optimistic algorithm can improve
its learning speed dramatically beyond the results one normally expects for
similar problems as the problem becomes resource-laden.
| Tor Lattimore and Koby Crammer and Csaba Szepesv\'ari | null | 1406.3840 | null | null |
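As a quick illustration of the completion model described in the abstract above, here is a minimal Python sketch of one allocation round; the difficulties, budget, and uniform split are hypothetical placeholders, not the paper's optimistic algorithm.

import numpy as np

rng = np.random.default_rng(0)

def simulate_step(allocation, difficulty):
    # Each job completes with probability min(1, allocation / difficulty):
    # linear in the allocated resources, with a cut-off at one.
    p = np.minimum(1.0, allocation / difficulty)
    return rng.random(len(allocation)) < p

difficulty = np.array([0.5, 1.0, 2.0])   # hypothetical, unknown to the learner
budget = 2.0
allocation = np.full(3, budget / 3)      # naive uniform split of the budget
print(simulate_step(allocation, difficulty))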
Semi-Separable Hamiltonian Monte Carlo for Inference in Bayesian
Hierarchical Models | stat.CO cs.AI cs.LG | Sampling from hierarchical Bayesian models is often difficult for MCMC
methods, because of the strong correlations between the model parameters and
the hyperparameters. Recent Riemannian manifold Hamiltonian Monte Carlo (RMHMC)
methods have significant potential advantages in this setting, but are
computationally expensive. We introduce a new RMHMC method, which we call
semi-separable Hamiltonian Monte Carlo, which uses a specially designed mass
matrix that allows the joint Hamiltonian over model parameters and
hyperparameters to decompose into two simpler Hamiltonians. This structure is
exploited by a new integrator which we call the alternating blockwise leapfrog
algorithm. The resulting method can mix faster than simple Gibbs sampling
while being simpler and more efficient than previous instances of RMHMC.
| Yichuan Zhang, Charles Sutton | null | 1406.3843 | null | null |
A low variance consistent test of relative dependency | stat.ML cs.LG stat.CO | We describe a novel non-parametric statistical hypothesis test of relative
dependence between a source variable and two candidate target variables. Such a
test enables us to determine whether one source variable is significantly more
dependent on a first target variable or a second. Dependence is measured via
the Hilbert-Schmidt Independence Criterion (HSIC), resulting in a pair of
empirical dependence measures (source-target 1, source-target 2). We test
whether the first dependence measure is significantly larger than the second.
Modeling the covariance between these HSIC statistics leads to a provably more
powerful test than the construction of independent HSIC statistics by
sub-sampling. The resulting test is consistent and unbiased, and (being based
on U-statistics) has favorable convergence properties. The test can be computed
in quadratic time, matching the computational complexity of standard empirical
HSIC estimators. The effectiveness of the test is demonstrated on several
real-world problems: we identify language groups from a multilingual corpus,
and we prove that tumor location is more dependent on gene expression than on
chromosomal imbalances. Source code is available for download at
https://github.com/wbounliphone/reldep.
| Wacha Bounliphone, Arthur Gretton, Arthur Tenenhaus (E3S), Matthew
Blaschko (INRIA Saclay - Ile de France, CVN) | null | 1406.3852 | null | null |
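For readers unfamiliar with HSIC, the following minimal Python sketch computes the biased empirical estimator underlying the pair of dependence measures, assuming Gaussian kernels with a fixed bandwidth; it illustrates the raw statistics only, not the covariance-modeled test itself.

import numpy as np

def gaussian_gram(Z, sigma=1.0):
    # Gaussian kernel Gram matrix of the rows of Z.
    sq = np.sum(Z**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC estimate: trace(K H L H) / m^2.
    m = X.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    K, L = gaussian_gram(X, sigma), gaussian_gram(Y, sigma)
    return np.trace(K @ H @ L @ H) / m**2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y1 = X + 0.1 * rng.normal(size=X.shape)   # strongly dependent target
Y2 = rng.normal(size=X.shape)             # independent target
print(hsic(X, Y1), hsic(X, Y2))           # first should exceed second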
Learning An Invariant Speech Representation | cs.SD cs.LG | Recognition of speech, and in particular the ability to generalize and learn
from small sets of labelled examples like humans do, depends on an appropriate
representation of the acoustic input. We formulate the problem of finding
robust speech features for supervised learning with small sample complexity as
a problem of learning representations of the signal that are maximally
invariant to intraclass transformations and deformations. We propose an
extension of a theory for unsupervised learning of invariant visual
representations to the auditory domain and empirically evaluate its validity
for voiced speech sound classification. Our version of the theory requires the
memory-based, unsupervised storage of acoustic templates -- such as specific
phones or words -- together with all the transformations of each that normally
occur. A quasi-invariant representation for a speech segment can be obtained by
projecting it to each template orbit, i.e., the set of transformed signals, and
computing the associated one-dimensional empirical probability distributions.
The computations can be performed by modules of filtering and pooling, and
extended to hierarchical architectures. In this paper, we apply a single-layer,
multicomponent representation for phonemes and demonstrate improved accuracy
and decreased sample complexity for vowel classification compared to standard
spectral, cepstral and perceptual features.
| Georgios Evangelopoulos, Stephen Voinea, Chiyuan Zhang, Lorenzo
Rosasco, Tomaso Poggio | null | 1406.3884 | null | null |
The Laplacian K-modes algorithm for clustering | cs.LG stat.ME stat.ML | In addition to finding meaningful clusters, centroid-based clustering
algorithms such as K-means or mean-shift should ideally find centroids that are
valid patterns in the input space, representative of data in their cluster.
This is challenging with data having a nonconvex or manifold structure, as with
images or text. We introduce a new algorithm, Laplacian K-modes, which
naturally combines three powerful ideas in clustering: the explicit use of
assignment variables (as in K-means); the estimation of cluster centroids which
are modes of each cluster's density estimate (as in mean-shift); and the
regularizing effect of the graph Laplacian, which encourages similar
assignments for nearby points (as in spectral clustering). The optimization
algorithm alternates an assignment step, which is a convex quadratic program,
and a mean-shift step, which decouples across cluster centroids. The algorithm
finds meaningful density estimates for each cluster, even with challenging
problems where the clusters have manifold structure, are highly nonconvex or in
high dimension. It also provides centroids that are valid patterns, truly
representative of their cluster (unlike K-means), and an out-of-sample mapping
that predicts soft assignments for a new point.
| Weiran Wang and Miguel \'A. Carreira-Perpi\~n\'an | null | 1406.3895 | null | null |
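A minimal sketch of the mean-shift component of the alternating scheme described above, assuming a Gaussian kernel and fixed toy assignments; the convex quadratic assignment step and the Laplacian term are omitted.

import numpy as np

def mean_shift_step(X, weights, c, sigma=0.5):
    # One weighted mean-shift update for a cluster mode, using a Gaussian
    # kernel; weights play the role of the (soft) assignments to this cluster.
    d2 = np.sum((X - c)**2, axis=1)
    k = weights * np.exp(-d2 / (2 * sigma**2))
    return (k[:, None] * X).sum(axis=0) / k.sum()

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
w = np.concatenate([np.ones(100), np.zeros(100)])  # toy hard assignments
c = X[w == 1].mean(axis=0)
for _ in range(20):
    c = mean_shift_step(X, w, c)
print(c)  # converges near the first cluster's mode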
Freeze-Thaw Bayesian Optimization | stat.ML cs.LG | In this paper we develop a dynamic form of Bayesian optimization for machine
learning models with the goal of rapidly finding good hyperparameter settings.
Our method uses the partial information gained during the training of a machine
learning model in order to decide whether to pause training and start a new
model, or resume the training of a previously-considered model. We specifically
tailor our method to machine learning problems by developing a novel
positive-definite covariance kernel to capture a variety of training curves.
Furthermore, we develop a Gaussian process prior that scales gracefully with
additional temporal observations. Finally, we provide an information-theoretic
framework to automate the decision process. Experiments on several common
machine learning models show that our approach is extremely effective in
practice.
| Kevin Swersky and Jasper Snoek and Ryan Prescott Adams | null | 1406.3896 | null | null |
Personalized Medical Treatments Using Novel Reinforcement Learning
Algorithms | cs.LG stat.ML | In both the fields of computer science and medicine there is very strong
interest in developing personalized treatment policies for patients who have
variable responses to treatments. In particular, I aim to find an optimal
personalized treatment policy which is a non-deterministic function of the
patient-specific covariate data that maximizes the expected survival time or
clinical outcome. I developed an algorithmic framework to solve multistage
decision problems with a varying number of stages that are subject to
censoring, in which the "rewards" are expected survival times. Specifically, I
developed a novel Q-learning algorithm that dynamically adjusts for these
parameters. Furthermore, I found finite upper bounds on the generalization
error of the
treatment paths constructed by this algorithm. I have also shown that when the
optimal Q-function is an element of the approximation space, the anticipated
survival times for the treatment regime constructed by the algorithm will
converge to the optimal treatment path. I demonstrated the performance of the
proposed algorithmic framework via simulation studies and through the analysis
of chronic depression data and a hypothetical clinical trial. The censored
Q-learning algorithm I developed is more effective than the state of the art
clinical decision support systems and is able to operate in environments when
many covariate parameters may be unobtainable or censored.
| Yousuf M. Soliman | null | 1406.3922 | null | null |
Bayesian Optimal Control of Smoothly Parameterized Systems: The Lazy
Posterior Sampling Algorithm | cs.LG stat.ML | We study Bayesian optimal control of a general class of smoothly
parameterized Markov decision problems. Since computing the optimal control is
computationally expensive, we design an algorithm that trades off performance
for computational efficiency. The algorithm is a lazy posterior sampling method
that maintains a distribution over the unknown parameter. The algorithm changes
its policy only when the variance of the distribution is reduced sufficiently.
Importantly, we analyze the algorithm and show the precise nature of the
performance vs. computation tradeoff. Finally, we show the effectiveness of the
method on a web server control application.
| Yasin Abbasi-Yadkori and Csaba Szepesvari | null | 1406.3926 | null | null |
Semantic Graph for Zero-Shot Learning | cs.CV cs.LG | Zero-shot learning aims to classify visual objects without any training data
via knowledge transfer between seen and unseen classes. This is typically
achieved by exploring a semantic embedding space where the seen and unseen
classes can be related. Previous works differ in what embedding space is used
and how different classes and a test image can be related. In this paper, we
utilize the annotation-free semantic word space for the former and focus on
solving the latter issue of modeling relatedness. Specifically, in contrast to
previous work, which ignores the semantic relationships between seen classes and
focuses merely on those between seen and unseen classes, in this paper a novel
approach based on a semantic graph is proposed to represent the relationships
between all the seen and unseen classes in a semantic word space. Based on this
semantic graph, we design a special absorbing Markov chain process, in which
each unseen class is viewed as an absorbing state. After incorporating one test
image into the semantic graph, the absorbing probabilities from the test data
to each unseen class can be effectively computed; and zero-shot classification
can be achieved by finding the class label with the highest absorbing
probability. The proposed model has a closed-form solution which is linear with
respect to the number of test images. We demonstrate the effectiveness and
computational efficiency of the proposed method over the state-of-the-arts on
the AwA (animals with attributes) dataset.
| Zhen-Yong Fu, Tao Xiang, Shaogang Gong | null | 1406.4112 | null | null |
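The absorbing-probability computation at the heart of the method is the standard absorbing Markov chain calculation sketched below; the toy transition matrix is a hypothetical stand-in for the semantic graph augmented with a test image.

import numpy as np

def absorption_probs(P, absorbing):
    # Absorption probabilities of a Markov chain: B = (I - Q)^{-1} R, where Q
    # holds transitions among transient states and R holds transitions from
    # transient to absorbing states.
    idx = np.arange(P.shape[0])
    t = np.setdiff1d(idx, absorbing)          # transient states
    Q = P[np.ix_(t, t)]
    R = P[np.ix_(t, absorbing)]
    return np.linalg.solve(np.eye(len(t)) - Q, R)

# Toy 4-state chain: states 2 and 3 are absorbing (the unseen classes).
P = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.4, 0.0, 0.1, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(absorption_probs(P, absorbing=np.array([2, 3])))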
Construction of non-convex polynomial loss functions for training a
binary classifier with quantum annealing | cs.LG quant-ph | Quantum annealing is a heuristic quantum algorithm which exploits quantum
resources to minimize an objective function embedded as the energy levels of a
programmable physical system. To exploit a potential quantum advantage, one
needs to be able to map the problem of interest to the native
hardware with reasonably low overhead. Because experimental considerations
constrain our objective function to take the form of a low degree PUBO
(polynomial unconstrained binary optimization), we employ non-convex loss
functions which are polynomial functions of the margin. We show that these loss
functions are robust to label noise and provide a clear advantage over convex
methods. These loss functions may also be useful for classical approaches as
they compile to regularized risk expressions which can be evaluated in constant
time with respect to the number of training examples.
| Ryan Babbush, Vasil Denchev, Nan Ding, Sergei Isakov and Hartmut Neven | null | 1406.4203 | null | null |
Self-Learning Camera: Autonomous Adaptation of Object Detectors to
Unlabeled Video Streams | cs.CV cs.LG | Learning object detectors requires massive amounts of labeled training
samples from the specific data source of interest. This is impractical when
dealing with many different sources (e.g., in camera networks), or constantly
changing ones such as mobile cameras (e.g., in robotics or driving assistant
systems). In this paper, we address the problem of self-learning detectors in
an autonomous manner, i.e. (i) detectors continuously updating themselves to
efficiently adapt to streaming data sources (contrary to transductive
algorithms), (ii) without any labeled data strongly related to the target data
stream (contrary to self-paced learning), and (iii) without manual intervention
to set and update hyper-parameters. To that end, we propose an unsupervised,
on-line, and self-tuning learning algorithm to optimize a multi-task learning
convex objective. Our method uses confident but laconic oracles (high-precision
but low-recall off-the-shelf generic detectors), and exploits the structure of
the problem to jointly learn on-line an ensemble of instance-level trackers,
from which we derive an adapted category-level object detector. Our approach is
validated on real-world publicly available video object datasets.
| Adrien Gaidon (Xerox Research Center Europe, France), Gloria Zen
(University of Trento, Italy), Jose A. Rodriguez-Serrano (Xerox Research
Center Europe, France) | null | 1406.4296 | null | null |
Distributed Stochastic Optimization of the Regularized Risk | stat.ML cs.LG | Many machine learning algorithms minimize a regularized risk, and stochastic
optimization is widely used for this task. When working with massive data, it
is desirable to perform stochastic optimization in parallel. Unfortunately,
many existing stochastic optimization algorithms cannot be parallelized
efficiently. In this paper we show that one can rewrite the regularized risk
minimization problem as an equivalent saddle-point problem, and propose an
efficient distributed stochastic optimization (DSO) algorithm. We prove the
algorithm's rate of convergence; remarkably, our analysis shows that the
algorithm scales almost linearly with the number of processors. We also verify
with empirical evaluations that the proposed algorithm is competitive with
other parallel, general purpose stochastic and batch optimization algorithms
for regularized risk minimization.
| Shin Matsushima, Hyokun Yun, Xinhua Zhang, S.V.N. Vishwanathan | null | 1406.4363 | null | null |
PRISM: Person Re-Identification via Structured Matching | cs.CV cs.LG stat.ML | Person re-identification (re-id), an emerging problem in visual surveillance,
deals with maintaining identities of individuals whilst they traverse various
locations surveilled by a camera network. From a visual perspective re-id is
challenging due to significant changes in visual appearance of individuals in
cameras with different pose, illumination and calibration. Globally the
challenge arises from the need to maintain structurally consistent matches
among all the individual entities across different camera views. We propose
PRISM, a structured matching method to jointly account for these challenges. We
view the global problem as a weighted graph matching problem and estimate edge
weights by learning to predict them based on the co-occurrences of visual
patterns in the training examples. These co-occurrence based scores in turn
account for appearance changes by inferring likely and unlikely visual
co-occurrences appearing in training instances. We implement PRISM on single
shot and multi-shot scenarios. PRISM uniformly outperforms state-of-the-art in
terms of matching rate while being computationally efficient.
| Ziming Zhang and Venkatesh Saligrama | null | 1406.4444 | null | null |
RAPID: Rapidly Accelerated Proximal Gradient Algorithms for Convex
Minimization | stat.ML cs.LG math.OC | In this paper, we propose a new algorithm to speed up the convergence of
accelerated proximal gradient (APG) methods. In order to minimize a convex
function $f(\mathbf{x})$, our algorithm introduces a simple line search step
after each proximal gradient step in APG so that a biconvex function
$f(\theta\mathbf{x})$ is minimized over scalar variable $\theta>0$ while fixing
variable $\mathbf{x}$. We propose two new ways of constructing the auxiliary
variables in APG based on the intermediate solutions of the proximal gradient
and the line search steps. We prove that at arbitrary iteration step $t
(t\geq1)$, our algorithm can achieve a smaller upper-bound for the gap between
the current and optimal objective values than those in the traditional APG
methods such as FISTA, making it converge faster in practice. In fact, our
algorithm can be potentially applied to many important convex optimization
problems, such as sparse linear regression and kernel SVMs. Our experimental
results clearly demonstrate that our algorithm converges faster than APG in all
of the applications above, even comparable to some sophisticated solvers.
| Ziming Zhang and Venkatesh Saligrama | null | 1406.4445 | null | null |
Authorship Attribution through Function Word Adjacency Networks | cs.CL cs.LG stat.ML | A method for authorship attribution based on function word adjacency networks
(WANs) is introduced. Function words are parts of speech that express
grammatical relationships between other words but do not carry lexical meaning
on their own. In the WANs in this paper, nodes are function words and directed
edges encode the likelihood of finding the sink word in the ordered
vicinity of the source word. WANs of different authors can be interpreted as
transition probabilities of a Markov chain and are therefore compared in terms
of their relative entropies. Optimal selection of WAN parameters is studied and
attribution accuracy is benchmarked across a diverse pool of authors and
varying text lengths. This analysis shows that, since function words are
independent of content, their use tends to be specific to an author and that
the relational data captured by function WANs is a good summary of stylometric
fingerprints. Attribution accuracy is observed to exceed the one achieved by
methods that rely on word frequencies alone. Further combining WANs with
such frequency-based methods results in greater attribution
accuracy, indicating that both sources of information encode different aspects
of authorial styles.
| Santiago Segarra, Mark Eisen, Alejandro Ribeiro | 10.1109/TSP.2015.2451111 | 1406.4469 | null | null |
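A simplified sketch of the WAN construction and comparison follows; the tiny function-word list, the additive smoothing, and the unweighted relative entropy are illustrative simplifications rather than the paper's exact choices.

import numpy as np

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a"]  # tiny illustrative set

def wan(tokens, window=5):
    # Row-stochastic transition matrix: entry (u, v) reflects how often
    # function word v appears within `window` tokens after function word u.
    idx = {w: i for i, w in enumerate(FUNCTION_WORDS)}
    C = np.full((len(idx), len(idx)), 1e-3)   # additive smoothing
    for i, tok in enumerate(tokens):
        if tok in idx:
            for nxt in tokens[i + 1:i + 1 + window]:
                if nxt in idx:
                    C[idx[tok], idx[nxt]] += 1
    return C / C.sum(axis=1, keepdims=True)

def rel_entropy(P, Q):
    # Unweighted relative entropy between two transition matrices.
    return float(np.sum(P * np.log(P / Q)))

a = "the cat sat on the mat and the dog lay in a corner of the room".split()
b = "a storm rose in the east and the sailors of the fleet turned to port".split()
print(rel_entropy(wan(a), wan(b)))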
Notes on hierarchical ensemble methods for DAG-structured taxonomies | cs.AI cs.LG stat.ML | Several real problems ranging from text classification to computational
biology are characterized by hierarchical multi-label classification tasks.
Most of the methods presented in the literature focus on tree-structured
taxonomies, but only a few address taxonomies structured according to a
Directed Acyclic Graph (DAG). In this contribution, novel classification ensemble
algorithms for DAG-structured taxonomies are introduced. In particular
Hierarchical Top-Down (HTD-DAG) and True Path Rule (TPR-DAG) for DAGs are
presented and discussed.
| Giorgio Valentini | null | 1406.4472 | null | null |
Guaranteed Scalable Learning of Latent Tree Models | cs.LG stat.ML | We present an integrated approach for structure and parameter estimation in
latent tree graphical models. Our overall approach follows a
"divide-and-conquer" strategy that learns models over small groups of variables
and iteratively merges onto a global solution. The structure learning involves
combinatorial operations such as minimum spanning tree construction and local
recursive grouping; the parameter learning is based on the method of moments
and on tensor decompositions. Our method is guaranteed to correctly recover the
unknown tree structure and the model parameters with low sample complexity for
the class of linear multivariate latent tree models which includes discrete and
Gaussian distributions, and Gaussian mixtures. Our bulk asynchronous parallel
algorithm's computational complexity increases only logarithmically with the
number of variables and linearly with the dimensionality of each variable.
| Furong Huang, Niranjan U.N., Ioakeim Perros, Robert Chen, Jimeng Sun,
Anima Anandkumar | null | 1406.4566 | null | null |
Primitives for Dynamic Big Model Parallelism | stat.ML cs.DC cs.LG | When training large machine learning models with many variables or
parameters, a single machine is often inadequate since the model may be too
large to fit in memory, while training can take a long time even with
stochastic updates. A natural recourse is to turn to distributed cluster
computing, in order to harness additional memory and processors. However,
naive, unstructured parallelization of ML algorithms can make inefficient use
of distributed memory, while failing to obtain proportional convergence
speedups - or can even result in divergence. We develop a framework of
primitives for dynamic model-parallelism, STRADS, in order to explore
partitioning and update scheduling of model variables in distributed ML
algorithms - thus improving their memory efficiency while presenting new
opportunities to speed up convergence without compromising inference
correctness. We demonstrate the efficacy of model-parallel algorithms
implemented in STRADS versus popular implementations for Topic Modeling, Matrix
Factorization and Lasso.
| Seunghak Lee, Jin Kyu Kim, Xun Zheng, Qirong Ho, Garth A. Gibson, Eric
P. Xing | null | 1406.4580 | null | null |
A Generalized Markov-Chain Modelling Approach to $(1,\lambda)$-ES Linear
Optimization: Technical Report | cs.NA cs.LG cs.NE | Several recent publications investigated Markov-chain modelling of linear
optimization by a $(1,\lambda)$-ES, considering both unconstrained and linearly
constrained optimization, and both constant and varying step size. All of them
assume normality of the involved random steps, and while this is consistent
with a black-box scenario, information on the function to be optimized (e.g.
separability) may be exploited by the use of another distribution. The
objective of our contribution is to complement previous studies realized with
normal steps, and to give sufficient conditions on the distribution of the
random steps for the success of a constant step-size $(1,\lambda)$-ES on the
simple problem of a linear function with a linear constraint. The decomposition
of a multidimensional distribution into its marginals and the copula combining
them is applied to the new distributional assumptions, particular attention
being paid to distributions with Archimedean copulas.
| Alexandre Chotard (INRIA Saclay - Ile de France, LRI), Martin Holena | null | 1406.4619 | null | null |
An Entropy Search Portfolio for Bayesian Optimization | stat.ML cs.LG | Bayesian optimization is a sample-efficient method for black-box global
optimization. However, the performance of a Bayesian optimization method very
much depends on its exploration strategy, i.e. the choice of acquisition
function, and it is not clear a priori which choice will result in superior
performance. While portfolio methods provide an effective, principled way of
combining a collection of acquisition functions, they are often based on
measures of past performance which can be misleading. To address this issue, we
introduce the Entropy Search Portfolio (ESP): a novel approach to portfolio
construction which is motivated by information theoretic considerations. We
show that ESP outperforms existing portfolio methods on several real and
synthetic problems, including geostatistical datasets and simulated control
tasks. We show not only that ESP is able to offer performance as good as the
best, but unknown, acquisition function; surprisingly, it often gives better
performance. Finally, over a wide range of conditions we find that ESP is
robust to the inclusion of poor acquisition functions.
| Bobak Shahriari and Ziyu Wang and Matthew W. Hoffman and Alexandre
Bouchard-C\^ot\'e and Nando de Freitas | null | 1406.4625 | null | null |
A Sober Look at Spectral Learning | cs.LG | Spectral learning recently generated lots of excitement in machine learning,
largely because it is the first known method to produce consistent estimates
(under suitable conditions) for several latent variable models. In contrast,
maximum likelihood estimates may get trapped in local optima due to the
non-convex nature of the likelihood function of latent variable models. In this
paper, we do an empirical evaluation of spectral learning (SL) and expectation
maximization (EM), which reveals an important gap between the theory and the
practice. First, SL often leads to negative probabilities. Second, EM often
yields better estimates than spectral learning and it does not seem to get
stuck in local optima. We discuss how the rank of the model parameters and the
amount of training data can yield negative probabilities. We also question the
common belief that maximum likelihood estimators are necessarily inconsistent.
| Han Zhao, Pascal Poupart | null | 1406.4631 | null | null |
Exact Decoding on Latent Variable Conditional Models is NP-Hard | cs.AI cs.CC cs.LG | Latent variable conditional models, including the latent conditional random
fields as a special case, are popular models for many natural language
processing and vision processing tasks. The computational complexity of the
exact decoding/inference in latent conditional random fields is unclear. In
this paper, we try to clarify the computational complexity of the exact
decoding. We analyze the complexity and demonstrate that it is an NP-hard
problem even in a sequential labeling setting. Furthermore, we propose the
latent-dynamic inference (LDI-Naive) method and its bounded version
(LDI-Bounded), which are able to perform exact-inference or
almost-exact-inference by using top-$n$ search and dynamic programming.
| Xu Sun | null | 1406.4682 | null | null |
An Experimental Evaluation of Nearest Neighbour Time Series
Classification | cs.LG | Data mining research into time series classification (TSC) has focussed on
alternative distance measures for nearest neighbour classifiers. It is standard
practice to use 1-NN with Euclidean or dynamic time warping (DTW) distance as a
straw man for comparison. As part of a wider investigation into elastic
distance measures for TSC~\cite{lines14elastic}, we perform a series of
experiments to test whether this standard practice is valid.
Specifically, we compare 1-NN classifiers with Euclidean and DTW distance to
standard classifiers, examine whether the performance of 1-NN Euclidean
approaches that of 1-NN DTW as the number of cases increases, assess whether
there is any benefit of setting $k$ for $k$-NN through cross validation, whether
it is worth setting the warping window for DTW through cross validation, and
finally whether it is better to use a window or a weighting for DTW. Based on experiments
on 77 problems, we conclude that 1-NN with Euclidean distance is fairly easy to
beat but 1-NN with DTW is not, if window size is set through cross validation.
| Anthony Bagnall and Jason Lines | null | 1406.4757 | null | null |
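For reference, the 1-NN DTW baseline discussed above can be sketched in a few lines of Python; the window argument plays the role of the warping constraint that the experiments suggest setting through cross validation.

import numpy as np

def dtw(a, b, window=None):
    # Dynamic time warping distance with an optional Sakoe-Chiba window.
    n, m = len(a), len(b)
    w = max(window or max(n, m), abs(n - m))
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nn1_dtw(query, train_X, train_y, window=None):
    # 1-NN classification under the DTW distance.
    d = [dtw(query, x, window) for x in train_X]
    return train_y[int(np.argmin(d))]

X = [np.sin(np.linspace(0, 6, 50)), np.cos(np.linspace(0, 6, 50))]
y = np.array([0, 1])
print(nn1_dtw(np.sin(np.linspace(0.1, 6.1, 50)), X, y, window=5))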
Predictive Modelling of Bone Age through Classification and Regression
of Bone Shapes | cs.LG physics.med-ph | Bone age assessment is a task performed daily in hospitals worldwide. This
involves a clinician estimating the age of a patient from a radiograph of the
non-dominant hand.
Our approach to automated bone age assessment is to modularise the algorithm
into the following three stages: segment and verify hand outline; segment and
verify bones; use the bone outlines to construct models of age. In this paper
we address the final question: given outlines of bones, can we learn how to
predict the bone age of the patient? We examine two alternative approaches.
Firstly, we attempt to train classifiers on individual bones to predict the
bone stage categories commonly used in bone ageing. Secondly, we construct
regression models to directly predict patient age.
We demonstrate that models built on summary features of the bone outline
perform better than those built using the one-dimensional representation of the
outline, and also do at least as well as other automated systems. We show that
models constructed on just three bones are as accurate at predicting age as
expert human assessors using the standard technique. We also demonstrate the
utility of the model by quantifying the importance of ethnicity and sex on age
development. Our conclusion is that the feature based system of separating the
image processing from the age modelling is the best approach for automated bone
ageing, since it offers flexibility and transparency and produces accurate
estimates.
| Anthony Bagnall and Luke Davis | null | 1406.4781 | null | null |
Improved Densification of One Permutation Hashing | stat.ME cs.DS cs.IR cs.LG | The existing work on densification of one permutation hashing reduces the
query processing cost of the $(K,L)$-parameterized Locality Sensitive Hashing
(LSH) algorithm with minwise hashing, from $O(dKL)$ to merely $O(d + KL)$,
where $d$ is the number of nonzeros of the data vector, $K$ is the number of
hashes in each hash table, and $L$ is the number of hash tables. While that is
a substantial improvement, our analysis reveals that the existing densification
scheme is sub-optimal. In particular, there is not enough randomness in that
procedure, which affects its accuracy on very sparse datasets.
In this paper, we provide a new densification procedure which is provably
better than the existing scheme. This improvement is more significant for very
sparse datasets which are common over the web. The improved technique has the
same cost of $O(d + KL)$ for query processing, thereby making it strictly
preferable over the existing procedure. Experimental evaluations on public
datasets, in the task of hashing based near neighbor search, support our
theoretical findings.
| Anshumali Shrivastava and Ping Li | null | 1406.4784 | null | null |
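A schematic sketch of one permutation hashing with a rotation-style densification follows; the random-direction fill rule is a simplified stand-in for the improved scheme, which additionally perturbs borrowed values to preserve collision probabilities.

import numpy as np

def one_perm_hash(nonzeros, d, k, seed=0):
    # Permute [0, d), split into k contiguous bins, and keep the smallest
    # permuted index of the data vector in each bin (-1 marks an empty bin).
    rng = np.random.default_rng(seed)
    perm = rng.permutation(d)
    bins = np.full(k, -1)
    size = d // k
    for idx in nonzeros:
        p = perm[idx]
        b = min(p // size, k - 1)
        if bins[b] == -1 or p < bins[b]:
            bins[b] = p
    return bins

def densify(bins, seed=0):
    # Fill each empty bin by borrowing from the nearest nonempty bin in a
    # randomly chosen direction -- a simplified illustration only.
    rng = np.random.default_rng(seed)
    k, out = len(bins), bins.copy()
    for b in np.where(bins == -1)[0]:
        step = 1 if rng.random() < 0.5 else -1
        j = (b + step) % k
        while bins[j] == -1:
            j = (j + step) % k
        out[b] = bins[j]
    return out

print(densify(one_perm_hash([3, 17, 42], d=128, k=16)))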
Homotopy based algorithms for $\ell_0$-regularized least-squares | cs.NA cs.LG | Sparse signal restoration is usually formulated as the minimization of a
quadratic cost function $\|y-Ax\|_2^2$, where A is a dictionary and x is an
unknown sparse vector. It is well-known that imposing an $\ell_0$ constraint
leads to an NP-hard minimization problem. The convex relaxation approach has
received considerable attention, where the $\ell_0$-norm is replaced by the
$\ell_1$-norm. Among the many efficient $\ell_1$ solvers, the homotopy
algorithm minimizes $\|y-Ax\|_2^2+\lambda\|x\|_1$ with respect to x for a
continuum of $\lambda$'s. It is inspired by the piecewise regularity of the
$\ell_1$-regularization path, also referred to as the homotopy path. In this
paper, we address the minimization problem $\|y-Ax\|_2^2+\lambda\|x\|_0$ for a
continuum of $\lambda$'s and propose two heuristic search algorithms for
$\ell_0$-homotopy. Continuation Single Best Replacement is a forward-backward
greedy strategy extending the Single Best Replacement algorithm, previously
proposed for $\ell_0$-minimization at a given $\lambda$. The adaptive search of
the $\lambda$-values is inspired by $\ell_1$-homotopy. $\ell_0$ Regularization
Path Descent is a more complex algorithm exploiting the structural properties
of the $\ell_0$-regularization path, which is piecewise constant with respect
to $\lambda$. Both algorithms are empirically evaluated for difficult inverse
problems involving ill-conditioned dictionaries. Finally, we show that they can
be easily coupled with usual methods of model order selection.
| Charles Soussen, J\'er\^ome Idier, Junbo Duan, David Brie | 10.1109/TSP.2015.2421476 | 1406.4802 | null | null |
On the Application of Generic Summarization Algorithms to Music | cs.IR cs.LG cs.SD | Several generic summarization algorithms were developed in the past and
successfully applied in fields such as text and speech summarization. In this
paper, we review and apply these algorithms to music. To evaluate this
summarization's performance, we adopt an extrinsic approach: we compare a Fado
Genre Classifier's performance using truncated contiguous clips against the
summaries extracted with those algorithms on 2 different datasets. We show that
Maximal Marginal Relevance (MMR), LexRank and Latent Semantic Analysis (LSA)
all improve classification performance in both datasets used for testing.
| Francisco Raposo, Ricardo Ribeiro, David Martins de Matos | 10.1109/LSP.2014.2347582 | 1406.4877 | null | null |
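Of the reviewed algorithms, MMR is the simplest to state; the sketch below applies it to hypothetical per-segment audio feature vectors, greedily trading relevance to the whole piece against redundancy with already-selected segments.

import numpy as np

def mmr(sim_to_centroid, sim_matrix, k, lam=0.7):
    # Greedily pick k segments, trading off relevance to the whole piece
    # against redundancy with the segments already selected.
    selected, candidates = [], list(range(len(sim_to_centroid)))
    while len(selected) < k and candidates:
        def score(i):
            redundancy = max((sim_matrix[i, j] for j in selected), default=0.0)
            return lam * sim_to_centroid[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
F = rng.normal(size=(20, 12))                  # hypothetical segment features
F /= np.linalg.norm(F, axis=1, keepdims=True)
centroid = F.mean(axis=0)
centroid /= np.linalg.norm(centroid)
print(mmr(F @ centroid, F @ F.T, k=3))         # indices of summary segments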
Variational Gaussian Process State-Space Models | cs.LG cs.RO cs.SY stat.ML | State-space models have been successfully used for more than fifty years in
different areas of science and engineering. We present a procedure for
efficient variational Bayesian learning of nonlinear state-space models based
on sparse Gaussian processes. The result of learning is a tractable posterior
over nonlinear dynamical systems. In comparison to conventional parametric
models, we offer the possibility to straightforwardly trade off model capacity
and computational cost whilst avoiding overfitting. Our main algorithm uses a
hybrid inference approach combining variational Bayes and sequential Monte
Carlo. We also present stochastic variational inference and online learning
approaches for fast learning with long time series.
| Roger Frigola and Yutian Chen and Carl E. Rasmussen | null | 1406.4905 | null | null |
Brain-like associative learning using a nanoscale non-volatile phase
change synaptic device array | cs.NE cond-mat.mtrl-sci cs.LG | Recent advances in neuroscience together with nanoscale electronic device
technology have resulted in huge interest in realizing brain-like computing
hardware using emerging nanoscale memory devices as synaptic elements.
Although there has been experimental work demonstrating the operation of
nanoscale synaptic elements at the single-device level, network-level studies
have been limited to simulations. In this work, we demonstrate, using
experiments, array-level associative learning using phase change synaptic
devices connected in a grid-like configuration similar to the organization of
the biological brain. Implementing Hebbian learning with phase change memory
cells, the synaptic grid was able to store presented patterns and recall
missing patterns in an associative brain-like fashion. We found that the system
is robust to device variations, and large variations in cell resistance states
can be accommodated by increasing the number of training epochs. We illustrated
the tradeoff between variation tolerance of the network and the overall energy
consumption, and found that energy consumption is decreased significantly for
lower variation tolerance.
| Sukru Burc Eryilmaz, Duygu Kuzum, Rakesh Jeyasingh, SangBum Kim,
Matthew BrightSky, Chung Lam and H.-S. Philip Wong | 10.3389/fnins.2014.00205 | 1406.4951 | null | null |
Inner Product Similarity Search using Compositional Codes | cs.CV cs.LG stat.ML | This paper addresses the nearest neighbor search problem under inner product
similarity and introduces a compact code-based approach. The idea is to
approximate a vector using the composition of several elements selected from a
source dictionary and to represent this vector by a short code composed of the
indices of the selected elements. The inner product between a query vector and
a database vector is efficiently estimated from the query vector and the short
code of the database vector. We show the superior performance of the proposed
group $M$-selection algorithm that selects $M$ elements from $M$ source
dictionaries for vector approximation in terms of search accuracy and
efficiency for compact codes of the same length via theoretical and empirical
analysis. Experimental results on large-scale datasets ($1M$ and $1B$ SIFT
features, $1M$ linear models and Netflix) demonstrate the superiority of the
proposed approach.
| Chao Du, Jingdong Wang | null | 1406.4966 | null | null |
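The estimation step can be sketched as follows; the random dictionaries and the greedy encoder are illustrative placeholders for the learned dictionaries and the group M-selection algorithm of the paper.

import numpy as np

rng = np.random.default_rng(0)
d, M, K = 32, 4, 256                      # dim, #dictionaries, elements each
dicts = rng.normal(size=(M, K, d))        # M source dictionaries

def encode(x):
    # Greedy compositional encoding: pick one element per dictionary so the
    # running sum approximates x (a simple stand-in for group M-selection).
    code, residual = [], x.copy()
    for m in range(M):
        j = int(np.argmin(np.sum((dicts[m] - residual) ** 2, axis=1)))
        code.append(j)
        residual = residual - dicts[m, j]
    return code

def inner_product(query, code):
    # Estimate <query, x> from the short code via per-dictionary lookups.
    tables = dicts @ query                # shape (M, K), precomputable
    return sum(tables[m, j] for m, j in enumerate(code))

x = rng.normal(size=d)
q = rng.normal(size=d)
print(inner_product(q, encode(x)), q @ x)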
Inferring causal structure: a quantum advantage | quant-ph cs.LG gr-qc stat.ML | The problem of using observed correlations to infer causal relations is
relevant to a wide variety of scientific disciplines. Yet given correlations
between just two classical variables, it is impossible to determine whether
they arose from a causal influence of one on the other or a common cause
influencing both, unless one can implement a randomized intervention. We here
consider the problem of causal inference for quantum variables. We introduce
causal tomography, which unifies and generalizes conventional quantum
tomography schemes to provide a complete solution to the causal inference
problem using a quantum analogue of a randomized trial. We furthermore show
that, in contrast to the classical case, observed quantum correlations alone
can sometimes provide a solution. We implement a quantum-optical experiment
that allows us to control the causal relation between two optical modes, and
two measurement schemes -- one with and one without randomization -- that
extract this relation from the observed correlations. Our results show that
entanglement and coherence, known to be central to quantum information
processing, also provide a quantum advantage for causal inference.
| Katja Ried, Megan Agnew, Lydia Vermeyden, Dominik Janzing, Robert W.
Spekkens and Kevin J. Resch | 10.1038/nphys3266 | 1406.5036 | null | null |
The Sample Complexity of Learning Linear Predictors with the Squared
Loss | cs.LG stat.ML | In this short note, we provide a sample complexity lower bound for learning
linear predictors with respect to the squared loss. Our focus is on an agnostic
setting, where no assumptions are made on the data distribution. This contrasts
with standard results in the literature, which either make distributional
assumptions, refer to specific parameter settings, or use other performance
measures.
| Ohad Shamir | null | 1406.5143 | null | null |
Fast Support Vector Machines Using Parallel Adaptive Shrinking on
Distributed Systems | cs.DC cs.LG | Support Vector Machines (SVMs), a popular machine learning technique, have been
applied to a wide range of domains such as science, finance, and social
networks for supervised learning. Whether it is identifying high-risk patients
by health-care professionals, or potential high-school students to enroll in
college by school districts, SVMs can play a major role for social good. This
paper undertakes the challenge of designing a scalable parallel SVM training
algorithm for large scale systems, which includes commodity multi-core
machines, tightly connected supercomputers and cloud computing systems.
Intuitive techniques for improving the time-space complexity, including adaptive
elimination of samples for faster convergence and sparse format representation,
are proposed. Under sample elimination, several heuristics ranging from {\em
earliest possible} to {\em lazy} elimination of non-contributing samples are
proposed.
In several cases, where an early sample elimination might result in a false
positive, low overhead mechanisms for reconstruction of key data structures are
proposed. The algorithm and heuristics are implemented and evaluated on various
publicly available datasets. Empirical evaluation shows up to 26x speed
improvement on some datasets against the sequential baseline, when evaluated on
multiple compute nodes, and an improvement in execution time up to 30-60\% is
readily observed on a number of other datasets against our parallel baseline.
| Jeyanthi Narasimhan, Abhinav Vishnu, Lawrence Holder, Adolfy Hoisie | null | 1406.5161 | null | null |
Enhancing Pure-Pixel Identification Performance via Preconditioning | stat.ML cs.LG math.NA math.OC | In this paper, we analyze different preconditionings designed to enhance
robustness of pure-pixel search algorithms, which are used for blind
hyperspectral unmixing and which are equivalent to near-separable nonnegative
matrix factorization algorithms. Our analysis focuses on the successive
projection algorithm (SPA), a simple, efficient and provably robust algorithm
in the pure-pixel algorithm class. Recently, a provably robust preconditioning
was proposed by Gillis and Vavasis (arXiv:1310.2273) which requires the
resolution of a semidefinite program (SDP) to find a data points-enclosing
minimum volume ellipsoid. Since solving the SDP to high precision can be
time-consuming, we generalize the robustness analysis to approximate solutions of
the SDP, that is, solutions whose objective function values are some
multiplicative factors away from the optimal value. It is shown that a high
accuracy solution is not crucial for robustness, which paves the way for faster
preconditionings (e.g., based on first-order optimization methods). This first
contribution also allows us to provide a robustness analysis for two other
preconditionings. The first one is pre-whitening, which can be interpreted as
an optimal solution of the same SDP with additional constraints. We analyze
robustness of pre-whitening which allows us to characterize situations in which
it performs competitively with the SDP-based preconditioning. The second one is
based on SPA itself and can be interpreted as an optimal solution of a
relaxation of the SDP. It is extremely fast while competing with the SDP-based
preconditioning on several synthetic data sets.
| Nicolas Gillis, Wing-Kin Ma | 10.1137/140994915 | 1406.5286 | null | null |
Generalized Dantzig Selector: Application to the k-support norm | stat.ML cs.LG | We propose a Generalized Dantzig Selector (GDS) for linear models, in which
any norm encoding the parameter structure can be leveraged for estimation. We
investigate both computational and statistical aspects of the GDS. Based on
conjugate proximal operator, a flexible inexact ADMM framework is designed for
solving GDS, and non-asymptotic high-probability bounds are established on the
estimation error, which rely on the Gaussian width of the unit norm ball and of
a suitable set encompassing the estimation error. Further, we consider a
non-trivial example of the GDS using the $k$-support norm. We derive an
efficient method to compute the proximal operator for the $k$-support norm,
since existing methods are inapplicable
in this setting. For statistical analysis, we provide upper bounds for the
Gaussian widths needed in the GDS analysis, yielding the first statistical
recovery guarantee for estimation with the $k$-support norm. The experimental
results confirm our theoretical analysis.
| Soumyadeep Chatterjee and Sheng Chen and Arindam Banerjee | null | 1406.5291 | null | null |
Rows vs Columns for Linear Systems of Equations - Randomized Kaczmarz or
Coordinate Descent? | math.OC cs.LG cs.NA math.NA stat.ML | This paper is about randomized iterative algorithms for solving a linear
system of equations $X \beta = y$ in different settings. Recent interest in the
topic was reignited when Strohmer and Vershynin (2009) proved the linear
convergence rate of a Randomized Kaczmarz (RK) algorithm that works on the rows
of $X$ (data points). Following that, Leventhal and Lewis (2010) proved the
linear convergence of a Randomized Coordinate Descent (RCD) algorithm that
works on the columns of $X$ (features). The aim of this paper is to simplify
our understanding of these two algorithms, establish the direct relationships
between them (though RK is often compared to Stochastic Gradient Descent), and
examine the algorithmic commonalities or tradeoffs involved with working on
rows or columns. We also discuss Kernel Ridge Regression and present a
Kaczmarz-style algorithm that works on data points and has the advantage of
solving the problem without ever storing or forming the Gram matrix, one of the
recognized problems encountered when scaling kernelized methods.
| Aaditya Ramdas | null | 1406.5295 | null | null |
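To make the row/column contrast concrete, here are minimal sketches of the two updates on a consistent synthetic system, with rows and columns sampled proportionally to their squared norms as in the cited analyses.

import numpy as np

def randomized_kaczmarz(X, y, iters=5000, seed=0):
    # RK: project the iterate onto the solution hyperplane of one row,
    # sampled with probability proportional to its squared norm.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    p = np.sum(X**2, axis=1)
    p = p / p.sum()
    beta = np.zeros(d)
    for _ in range(iters):
        i = rng.choice(n, p=p)
        beta += (y[i] - X[i] @ beta) / (X[i] @ X[i]) * X[i]
    return beta

def randomized_cd(X, y, iters=5000, seed=0):
    # RCD: exact minimization of ||X beta - y||^2 over one column at a time,
    # sampled with probability proportional to its squared norm.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    p = np.sum(X**2, axis=0)
    p = p / p.sum()
    beta, r = np.zeros(d), -y.astype(float)   # r = X beta - y
    for _ in range(iters):
        j = rng.choice(d, p=p)
        step = -(X[:, j] @ r) / (X[:, j] @ X[:, j])
        beta[j] += step
        r += step * X[:, j]
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
beta_true = rng.normal(size=10)
y = X @ beta_true                             # consistent system
print(np.linalg.norm(randomized_kaczmarz(X, y) - beta_true),
      np.linalg.norm(randomized_cd(X, y) - beta_true))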
Semi-Supervised Learning with Deep Generative Models | cs.LG stat.ML | The ever-increasing size of modern data sets combined with the difficulty of
obtaining label information has made semi-supervised learning one of the
problems of significant practical importance in modern data analysis. We
revisit the approach to semi-supervised learning with generative models and
develop new models that allow for effective generalisation from small labelled
data sets to large unlabelled ones. Generative approaches have thus far been
either inflexible, inefficient or non-scalable. We show that deep generative
models and approximate Bayesian inference exploiting recent advances in
variational methods can be used to provide significant improvements, making
generative approaches highly competitive for semi-supervised learning.
| Diederik P. Kingma, Danilo J. Rezende, Shakir Mohamed, Max Welling | null | 1406.5298 | null | null |
Towards A Deeper Geometric, Analytic and Algorithmic Understanding of
Margins | math.OC cs.AI cs.LG math.NA stat.ML | Given a matrix $A$, a linear feasibility problem (of which linear
classification is a special case) aims to find a solution to a primal problem
$w: A^Tw > \textbf{0}$ or a certificate for the dual problem which is a
probability distribution $p: Ap = \textbf{0}$. Inspired by the continued
importance of "large-margin classifiers" in machine learning, this paper
studies a condition measure of $A$ called its \textit{margin} that determines
the difficulty of both the above problems. To aid geometrical intuition, we
first establish new characterizations of the margin in terms of relevant balls,
cones and hulls. Our second contribution is analytical, where we present
generalizations of Gordan's theorem, and variants of Hoffman's theorems, both
using margins. We end by proving some new results on a classical iterative
scheme, the Perceptron, whose convergence rate famously depends on the margin.
Our results are relevant for a deeper understanding of margin-based learning
and proving convergence rates of iterative schemes, apart from providing a
unifying perspective on this vast topic.
| Aaditya Ramdas and Javier Pe\~na | 10.1080/10556788.2015.1099652 | 1406.5311 | null | null |
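For concreteness, the classical Perceptron for the primal feasibility problem $A^Tw > \textbf{0}$ is sketched below on a synthetic feasible instance; the number of updates it makes is governed by the margin of $A$.

import numpy as np

def perceptron(A, max_iters=100000):
    # Classical Perceptron for A^T w > 0: while some column a_i has
    # a_i^T w <= 0, add it to w. The update count is bounded in terms of
    # 1 / margin^2 when the problem is feasible.
    d, n = A.shape
    w = np.zeros(d)
    for _ in range(max_iters):
        viol = np.where(A.T @ w <= 0)[0]
        if len(viol) == 0:
            return w
        w += A[:, viol[0]]
    return None  # no separator found; the dual may be (near-)feasible

rng = np.random.default_rng(0)
w_star = rng.normal(size=5)
A = rng.normal(size=(5, 400))
A = A[:, np.abs(A.T @ w_star) > 1.0]   # keep well-separated columns
A *= np.sign(A.T @ w_star)             # make the instance feasible
w = perceptron(A)
print(w is not None and np.all(A.T @ w > 0))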
Predicting the Future Behavior of a Time-Varying Probability
Distribution | stat.ML cs.LG | We study the problem of predicting the future, though only in the
probabilistic sense of estimating a future state of a time-varying probability
distribution. This is not only an interesting academic problem, but solving
this extrapolation problem also has many practical application, e.g. for
training classifiers that have to operate under time-varying conditions. Our
main contribution is a method for predicting the next step of the time-varying
distribution from a given sequence of sample sets from earlier time steps. For
this we rely on two recent machine learning techniques: embedding probability
distributions into a reproducing kernel Hilbert space, and learning operators
by vector-valued regression. We illustrate the working principles and the
practical usefulness of our method by experiments on synthetic and real data.
We also highlight an exemplary application: training a classifier in a domain
adaptation setting without having access to examples from the test time
distribution at training time.
| Christoph H. Lampert | null | 1406.5362 | null | null |
Spectral Ranking using Seriation | cs.LG cs.AI stat.ML | We describe a seriation algorithm for ranking a set of items given pairwise
comparisons between these items. Intuitively, the algorithm assigns similar
rankings to items that compare similarly with all others. It does so by
constructing a similarity matrix from pairwise comparisons, using seriation
methods to reorder this matrix and construct a ranking. We first show that this
spectral seriation algorithm recovers the true ranking when all pairwise
comparisons are observed and consistent with a total order. We then show that
ranking reconstruction is still exact when some pairwise comparisons are
corrupted or missing, and that seriation based spectral ranking is more robust
to noise than classical scoring methods. Finally, we bound the ranking error
when only a random subset of the comparisons is observed. An additional benefit
of the seriation formulation is that it allows us to solve semi-supervised
ranking problems. Experiments on both synthetic and real datasets demonstrate
that seriation based spectral ranking achieves competitive and in some cases
superior performance compared to classical ranking methods.
| Fajwel Fogel, Alexandre d'Aspremont, Milan Vojnovic | null | 1406.5370 | null | null |
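A minimal sketch of the spectral step follows; the similarity construction from a dense, consistent comparison matrix is a simplified stand-in for the paper's, and the ranking is read off the Fiedler vector of the graph Laplacian.

import numpy as np

def spectral_rank(C):
    # Rank items from a comparison matrix C (C[i, j] = 1 if i beats j,
    # -1 if j beats i, 0 if unobserved) by sorting the Fiedler vector of the
    # Laplacian of a pairwise-similarity matrix.
    S = C @ C.T                      # items that compare similarly are similar
    S = S - S.min()                  # shift to nonnegative similarities
    L = np.diag(S.sum(axis=1)) - S
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]             # eigenvector of second-smallest eigenvalue
    return np.argsort(fiedler)

n = 8
true = np.arange(n)                  # true order 0 < 1 < ... < 7
C = np.sign(true[:, None] - true[None, :]).astype(float)
print(spectral_rank(C))              # recovers the true order or its reverse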
Noise-adaptive Margin-based Active Learning and Lower Bounds under
Tsybakov Noise Condition | stat.ML cs.LG | We present a simple noise-robust margin-based active learning algorithm to
find homogeneous (passing the origin) linear separators and analyze its error
convergence when labels are corrupted by noise. We show that when the imposed
noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, and others
1999; Tsybakov 2004), the algorithm is able to adapt to the unknown noise level
and achieves the optimal statistical rate up to poly-logarithmic factors. We also
derive lower bounds for margin based active learning algorithms under Tsybakov
noise conditions (TNC) for the membership query synthesis scenario (Angluin
1988). Our result implies lower bounds for the stream based selective sampling
scenario (Cohn 1990) under TNC for some fairly simple data distributions. Quite
surprisingly, we show that the sample complexity cannot be improved even if the
underlying data distribution is as simple as the uniform distribution on the
unit ball. Our proof involves the construction of a well separated hypothesis
set on the d-dimensional unit ball along with carefully designed label
distributions for the Tsybakov noise condition. Our analysis might provide
insights for other forms of lower bounds as well.
| Yining Wang, Aarti Singh | null | 1406.5383 | null | null |
Learning computationally efficient dictionaries and their implementation
as fast transforms | cs.LG | Dictionary learning is a branch of signal processing and machine learning
that aims at finding a frame (called dictionary) in which some training data
admits a sparse representation. The sparser the representation, the better the
dictionary. The resulting dictionary is in general a dense matrix, and its
manipulation can be computationally costly both at the learning stage and later
in the usage of this dictionary, for tasks such as sparse coding. Dictionary
learning is thus limited to relatively small-scale problems. In this paper,
inspired by usual fast transforms, we consider a general dictionary structure
that allows cheaper manipulation, and propose an algorithm to learn such
dictionaries --and their fast implementation-- over training data. The approach
is demonstrated experimentally with the factorization of the Hadamard matrix
and with synthetic dictionary learning experiments.
| Luc Le Magoarou (INRIA - IRISA), R\'emi Gribonval (INRIA - IRISA) | null | 1406.5388 | null | null |
Playing with Duality: An Overview of Recent Primal-Dual Approaches for
Solving Large-Scale Optimization Problems | cs.NA cs.CV cs.LG math.OC | Optimization methods are at the core of many problems in signal/image
processing, computer vision, and machine learning. For a long time, it has been
recognized that looking at the dual of an optimization problem may drastically
simplify its solution. Deriving efficient strategies which jointly brings into
play the primal and the dual problems is however a more recent idea which has
generated many important new contributions in the last years. These novel
developments are grounded on recent advances in convex analysis, discrete
optimization, parallel processing, and non-smooth optimization with emphasis on
sparsity issues. In this paper, we aim at presenting the principles of
primal-dual approaches, while giving an overview of numerical methods which
have been proposed in different contexts. We show the benefits which can be
drawn from primal-dual algorithms both for solving large-scale convex
optimization problems and discrete ones, and we provide various application
examples to illustrate their usefulness.
| Nikos Komodakis and Jean-Christophe Pesquet | null | 1406.5429 | null | null |
An Open Source Pattern Recognition Toolbox for MATLAB | stat.ML cs.CV cs.LG cs.MS | Pattern recognition and machine learning are becoming integral parts of
algorithms in a wide range of applications. Different algorithms and approaches
for machine learning include different tradeoffs between performance and
computation, so during algorithm development it is often necessary to explore a
variety of different approaches to a given task. A toolbox with a unified
framework across multiple pattern recognition techniques gives algorithm
developers the ability to rapidly evaluate different choices prior to
deployment. MATLAB is a widely used environment for algorithm development and
prototyping, and although several MATLAB toolboxes for pattern recognition are
currently available these are either incomplete, expensive, or restrictively
licensed. In this work we describe a MATLAB toolbox for pattern recognition and
machine learning known as the PRT (Pattern Recognition Toolbox), licensed under
the permissive MIT license. The PRT includes many popular techniques for data
preprocessing, supervised learning, clustering, regression and feature
selection, as well as a methodology for combining these components using a
simple, uniform syntax. The resulting algorithms can be evaluated using
cross-validation and a variety of scoring metrics to ensure robust performance
when the algorithm is deployed. This paper presents an overview of the PRT as
well as an example of usage on Fisher's Iris dataset.
| Kenneth D. Morton Jr., Peter Torrione, Leslie Collins, Sam Keene | null | 1406.5565 | null | null |
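PRT's MATLAB syntax is not reproduced here; purely as an analogy, the preprocessing + classifier + cross-validation pattern that the abstract describes can be sketched in Python with scikit-learn (an assumed stand-in for illustration, not PRT itself):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)  # Fisher's Iris dataset, as in the paper's example

# Combine preprocessing and supervised learning with one uniform syntax,
# then score the composite algorithm with cross-validation.
algo = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(algo, X, y, cv=5)
print(scores.mean())
```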
From conformal to probabilistic prediction | cs.LG | This paper proposes a new method of probabilistic prediction, which is based
on conformal prediction. The method is applied to the standard USPS data set
and gives encouraging results.
| Vladimir Vovk, Ivan Petej, and Valentina Fedorova | null | 1406.5600 | null | null |
PAC-Bayes Analysis of Multi-view Learning | cs.LG cs.AI stat.ML | This paper presents eight PAC-Bayes bounds to analyze the generalization
performance of multi-view classifiers. These bounds adopt data dependent
Gaussian priors which emphasize classifiers with high view agreements. The
center of the prior for the first two bounds is the origin, while the center of
the prior for the third and fourth bounds is given by a data dependent vector.
An important technical ingredient in obtaining these bounds is a pair of derived
logarithmic determinant inequalities, which differ in whether the dimensionality
of the data is involved. The centers of the fifth and sixth bounds are calculated on a
separate subset of the training set. The last two bounds use unlabeled data to
represent view agreements and are thus applicable to semi-supervised multi-view
learning. We evaluate all the presented multi-view PAC-Bayes bounds on
benchmark data and compare them with previous single-view PAC-Bayes bounds. The
usefulness and performance of the multi-view bounds are discussed.
| Shiliang Sun, John Shawe-Taylor, Liang Mao | null | 1406.5614 | null | null |
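For background, the eight bounds above instantiate the generic PAC-Bayes template; one standard single-view form (Seeger's bound, given here for orientation and not as one of the paper's bounds) states that, with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$, simultaneously for all posteriors $Q$,

$$\mathrm{kl}\bigl(\hat{R}_S(Q) \,\big\|\, R(Q)\bigr) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},$$

where $P$ is the prior, $\hat{R}_S(Q)$ and $R(Q)$ are the empirical and true Gibbs risks, and $\mathrm{kl}(\cdot\,\|\,\cdot)$ is the binary relative entropy. The multi-view bounds replace $P$ with data-dependent Gaussian priors that reward view agreement.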
On semidefinite relaxations for the block model | cs.LG cs.SI stat.ML | The stochastic block model (SBM) is a popular tool for community detection in
networks, but fitting it by maximum likelihood (MLE) involves a computationally
infeasible optimization problem. We propose a new semidefinite programming
(SDP) solution to the problem of fitting the SBM, derived as a relaxation of
the MLE. We put ours and previously proposed SDPs in a unified framework, as
relaxations of the MLE over various sub-classes of the SBM, revealing a
connection to sparse PCA. Our main relaxation, which we call SDP-1, is tighter
than other recently proposed SDP relaxations, and thus previously established
theoretical guarantees carry over. However, we show that SDP-1 exactly recovers
true communities over a wider class of SBMs than those covered by current
results. In particular, the assumption of strong assortativity of the SBM,
implicit in consistency conditions for previously proposed SDPs, can be relaxed
to weak assortativity for our approach, thus significantly broadening the class
of SBMs covered by the consistency results. We also show that strong
assortativity is indeed a necessary condition for exact recovery for previously
proposed SDP approaches and not an artifact of the proofs. Our analysis of SDPs
is based on primal-dual witness constructions, which provides some insight into
the nature of the solutions of various SDPs. We show how to combine features
from SDP-1 and already available SDPs to achieve the most flexibility in terms
of both assortativity and block-size constraints, as our relaxation has the
tendency to produce communities of similar sizes. This tendency makes it the
ideal tool for fitting network histograms, a method gaining popularity in the
graphon estimation literature, as we illustrate on an example of a social
network of dolphins. We also provide empirical evidence that SDPs outperform
spectral methods for fitting SBMs with a large number of blocks.
| Arash A. Amini, Elizaveta Levina | null | 1406.5647 | null | null |
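A representative relaxation in this family, given here to fix ideas rather than as a verbatim statement of SDP-1, replaces the combinatorial cluster matrix in the MLE with a convex feasible set: for adjacency matrix $A$ and $K$ equal-sized blocks,

$$\max_{X \in \mathbb{R}^{n\times n}} \; \langle A, X\rangle \quad \text{s.t.} \quad X \succeq 0, \;\; X \ge 0, \;\; X_{ii} = 1, \;\; \sum_j X_{ij} = n/K \;\; \text{for all } i,$$

where the true cluster matrix $X_{ij} = \mathbf{1}\{i \text{ and } j \text{ lie in the same block}\}$ is feasible, and exact recovery means this $X$ is the unique optimum.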
Constant Factor Approximation for Balanced Cut in the PIE model | cs.DS cs.LG | We propose and study a new semi-random semi-adversarial model for Balanced
Cut, a planted model with permutation-invariant random edges (PIE). Our model
is much more general than planted models considered previously. Consider a set
of vertices $V$ partitioned into two clusters $L$ and $R$ of equal size. Let $G$
be an arbitrary graph on $V$ with no edges between $L$ and $R$. Let
$E_{random}$ be a set of edges sampled from an arbitrary permutation-invariant
distribution (a distribution that is invariant under permutation of vertices in
$L$ and in $R$). Then we say that $G + E_{random}$ is a graph with
permutation-invariant random edges.
We present an approximation algorithm for the Balanced Cut problem that finds
a balanced cut of cost $O(|E_{random}|) + n \text{polylog}(n)$ in this model.
In the regime when $|E_{random}| = \Omega(n \text{polylog}(n))$, this is a
constant factor approximation with respect to the cost of the planted cut.
| Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan | null | 1406.5665 | null | null |
Correlation Clustering with Noisy Partial Information | cs.DS cs.LG | In this paper, we propose and study a semi-random model for the Correlation
Clustering problem on arbitrary graphs G. We give two approximation algorithms
for Correlation Clustering instances from this model. The first algorithm finds
a solution of value $(1+\delta)\,\mathrm{optcost} + O_{\delta}(n\log^3 n)$ with high
probability, where $\mathrm{optcost}$ is the value of the optimal solution (for every
$\delta > 0$). The second algorithm finds the ground truth clustering with an
arbitrarily small classification error $\eta$ (under some additional
assumptions on the instance).
| Konstantin Makarychev, Yury Makarychev, Aravindan Vijayaraghavan | null | 1406.5667 | null | null |
SPSD Matrix Approximation via Column Selection: Theories, Algorithms,
and Extensions | cs.LG | Symmetric positive semidefinite (SPSD) matrix approximation is an important
problem with applications in kernel methods. However, existing SPSD matrix
approximation methods such as the Nystr\"om method only have weak error bounds.
In this paper we conduct in-depth studies of an SPSD matrix approximation model
and establish strong relative-error bounds. We call it the prototype model
because it has more efficient and effective extensions, and some of its extensions have
high scalability. Though the prototype model itself is not suitable for
large-scale data, it is still useful to study its properties, on which the
analysis of its extensions relies.
This paper offers novel theoretical analysis, efficient algorithms, and a
highly accurate extension. First, we establish a lower error bound for the
prototype model and improve the error bound of an existing column selection
algorithm to match the lower bound. In this way, we obtain the first optimal
column selection algorithm for the prototype model. We also prove that the
prototype model is exact under certain conditions. Second, we develop a simple
column selection algorithm with a provable error bound. Third, we propose a
so-called spectral shifting model to make the approximation more accurate when
the eigenvalues of the matrix decay slowly, and the improvement is
theoretically quantified. The spectral shifting method can also be applied to
improve other SPSD matrix approximation models.
| Shusen Wang, Luo Luo, Zhihua Zhang | null | 1406.5675 | null | null |
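A minimal sketch of the prototype model described above, assuming the standard construction in which a sampled column subset $C$ of the SPSD matrix $A$ is fixed and the core matrix is taken to be Frobenius-optimal; uniform sampling below is a placeholder for the paper's column selection algorithms:

```python
import numpy as np

def prototype_spsd_approx(A, k, rng=None):
    """Approximate an SPSD matrix A by C @ U @ C.T built from k sampled columns."""
    rng = np.random.default_rng(0) if rng is None else rng
    idx = rng.choice(A.shape[0], size=k, replace=False)  # placeholder selection rule
    C = A[:, idx]
    Cp = np.linalg.pinv(C)       # Moore-Penrose pseudoinverse of C
    U = Cp @ A @ Cp.T            # Frobenius-optimal core for the fixed C
    return C @ U @ C.T

# Tiny usage example on a random SPSD matrix.
B = np.random.default_rng(1).standard_normal((50, 8))
A = B @ B.T
A_hat = prototype_spsd_approx(A, k=10)
print(np.linalg.norm(A - A_hat, "fro") / np.linalg.norm(A, "fro"))
```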
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping | cs.CV cs.CL cs.LG | We introduce a model for bidirectional retrieval of images and sentences
through a multi-modal embedding of visual and natural language data. Unlike
previous models that directly map images or sentences into a common embedding
space, our model works on a finer level and embeds fragments of images
(objects) and fragments of sentences (typed dependency tree relations) into a
common space. In addition to a ranking objective seen in previous work, this
allows us to add a new fragment alignment objective that learns to directly
associate these fragments across modalities. Extensive experimental evaluation
shows that reasoning on both the global level of images and sentences and the
finer level of their respective fragments significantly improves performance on
image-sentence retrieval tasks. Additionally, our model provides interpretable
predictions since the inferred inter-modal fragment alignment is explicit.
| Andrej Karpathy, Armand Joulin and Li Fei-Fei | null | 1406.5679 | null | null |
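The ranking objective mentioned above is typically a bidirectional max-margin loss over an image-sentence similarity matrix; the generic form below indicates the family, without claiming to transcribe the paper's exact objective. Writing $S_{kl}$ for the score of image $k$ against sentence $l$ and $\Delta$ for a margin,

$$\mathcal{C} \;=\; \sum_{k} \sum_{l \ne k} \Bigl[ \max\bigl(0,\, S_{kl} - S_{kk} + \Delta\bigr) + \max\bigl(0,\, S_{lk} - S_{kk} + \Delta\bigr) \Bigr],$$

so that each correct pair $S_{kk}$ is pushed above mismatched pairs along both its row and its column.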
On the Maximum Entropy Property of the First-Order Stable Spline Kernel
and its Implications | math.ST cs.LG stat.ML stat.TH | A new nonparametric approach for system identification has been recently
proposed where the impulse response is seen as the realization of a zero-mean
Gaussian process whose covariance, the so-called stable spline kernel,
guarantees that the impulse response is almost surely stable. Maximum entropy
properties of the stable spline kernel have been pointed out in the literature.
In this paper we provide an independent proof that relies on the theory of
matrix extension problems in the graphical model literature and leads to a
closed-form expression for the inverse of the first-order stable spline kernel
as well as to a new factorization in the form $UWU^\top$ with $U$ upper
triangular and $W$ diagonal. Interestingly, all first-order stable spline
kernels share the same factor $U$ and $W$ admits a closed form representation
in terms of the kernel hyperparameter, making the factorization computationally
inexpensive. Maximum likelihood properties of the stable spline kernel are also
highlighted. These results can be applied both to improve the stability and to
reduce the computational complexity associated with the computation of stable
spline estimators.
| Francesca Paola Carli | 10.1109/CCA.2014.6981380 | 1406.5706 | null | null |
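For reference, the first-order stable spline kernel discussed above takes, in its discrete-time form and up to a scale factor (the paper's notation may differ),

$$K(i,j) \;=\; \lambda^{\max(i,j)}, \qquad 0 \le \lambda < 1, \qquad i,j = 1,\dots,n;$$

the geometric decay in $\max(i,j)$ is what makes realizations of the associated zero-mean Gaussian process, and hence the estimated impulse responses, almost surely stable.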