title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Towards stability and optimality in stochastic gradient descent | stat.ME cs.LG stat.CO stat.ML | Iterative procedures for parameter estimation based on stochastic gradient
descent allow the estimation to scale to massive data sets. However, in both
theory and practice, they suffer from numerical instability. Moreover, they are
statistically inefficient as estimators of the true parameter value. To address
these two issues, we propose a new iterative procedure termed averaged implicit
SGD (AI-SGD). For statistical efficiency, AI-SGD employs averaging of the
iterates, which achieves the optimal Cram\'{e}r-Rao bound under strong
convexity, i.e., it is an optimal unbiased estimator of the true parameter
value. For numerical stability, AI-SGD employs an implicit update at each
iteration, which is related to proximal operators in optimization. In practice,
AI-SGD achieves competitive performance with other state-of-the-art procedures.
Furthermore, it is more stable than averaging procedures that do not employ
proximal updates, and is simple to implement as it requires fewer tunable
hyperparameters than procedures that do employ proximal updates.
| Panos Toulis, Dustin Tran, Edoardo M. Airoldi | null | 1505.02417 | null | null |
Improved Relation Extraction with Feature-Rich Compositional Embedding
Models | cs.CL cs.AI cs.LG | Compositional embedding models build a representation (or embedding) for a
linguistic structure based on its component word embeddings. We propose a
Feature-rich Compositional Embedding Model (FCM) for relation extraction that
is expressive, generalizes to new domains, and is easy to implement. The key
idea is to combine (unlexicalized) hand-crafted features with learned word
embeddings. The model is able to directly tackle the difficulties met by
traditional compositional embedding models, such as handling arbitrary types
of sentence annotations and utilizing global information for composition. We
test the proposed model on two relation extraction tasks, and demonstrate that
our model outperforms both previous compositional models and traditional
feature-rich models on the ACE 2005 relation extraction task, and the SemEval
2010 relation classification task. The combination of our model and a
log-linear classifier with hand-crafted features gives state-of-the-art
results.
| Matthew R. Gormley and Mo Yu and Mark Dredze | null | 1505.02419 | null | null |
Spike and Slab Gaussian Process Latent Variable Models | stat.ML cs.LG | The Gaussian process latent variable model (GP-LVM) is a popular approach to
non-linear probabilistic dimensionality reduction. One design choice for the
model is the number of latent variables. We present a spike and slab prior for
the GP-LVM and propose an efficient variational inference procedure that gives
a lower bound of the log marginal likelihood. The new model provides a more
principled approach for selecting latent dimensions than the standard way of
thresholding the length-scale parameters. The effectiveness of our approach is
demonstrated through experiments on real and simulated data. Further, we extend
multi-view Gaussian processes that rely on sharing latent dimensions (known as
manifold relevance determination) with spike and slab priors. This allows a
more principled approach for selecting a subset of the latent space for each
view of data. The extended model outperforms the previous state-of-the-art when
applied to a cross-modal multimedia retrieval task.
| Zhenwen Dai and James Hensman and Neil Lawrence | null | 1505.02434 | null | null |
Soft-Deep Boltzmann Machines | cs.NE cs.LG stat.ML | We present a layered Boltzmann machine (BM) that can better exploit the
advantages of a distributed representation. It is widely believed that deep BMs
(DBMs) have far greater representational power than their shallow counterpart,
restricted Boltzmann machines (RBMs). However, this expectation of the
supremacy of DBMs over RBMs has never been validated theoretically. In this
paper, we provide both theoretical and empirical evidence that the
representational power of DBMs can actually be rather limited in taking
advantage of distributed representations. We propose an approximate measure for
the representational power of a BM with respect to the efficiency of a
distributed representation. With this measure, we show a surprising fact that
DBMs can make inefficient use of distributed representations. Based on these
observations, we propose an alternative BM architecture, which we dub soft-deep
BMs (sDBMs). We show that sDBMs can more efficiently exploit the distributed
representations in terms of the measure. Experiments demonstrate that sDBMs
outperform several state-of-the-art models, including DBMs, in generative tasks
on binarized MNIST and Caltech-101 silhouettes.
| Taichi Kiwaki | null | 1505.02462 | null | null |
Improving neural networks with bunches of neurons modeled by Kumaraswamy
units: Preliminary study | cs.LG cs.NE | Deep neural networks have recently achieved state-of-the-art results in many
machine learning problems, e.g., speech recognition or object recognition.
Hitherto, work on rectified linear units (ReLU) has provided empirical and
theoretical evidence of a performance increase of neural networks compared to
the typically used sigmoid activation function. In this paper, we investigate a
new manner of improving neural networks by introducing a bunch of copies of the
same neuron modeled by the generalized Kumaraswamy distribution. As a result, we
propose a novel non-linear activation function, which we refer to as the
Kumaraswamy unit, that is closely related to the ReLU. In an experimental study
on the MNIST image corpus we evaluate the Kumaraswamy unit applied to a
single-layer (shallow) neural network and report a significant drop in test
classification error and test cross-entropy in comparison to the sigmoid unit,
ReLU and Noisy ReLU.
| Jakub Mikolaj Tomczak | null | 1505.02581 | null | null |
Sample complexity of learning Mahalanobis distance metrics | cs.LG cs.AI stat.ML | Metric learning seeks a transformation of the feature space that enhances
prediction quality for the task at hand. In this work we provide
PAC-style sample complexity rates for supervised metric learning. We give
matching lower- and upper-bounds showing that the sample complexity scales with
the representation dimension when no assumptions are made about the underlying
data distribution. However, by leveraging the structure of the data
distribution, we show that one can achieve rates that are fine-tuned to a
specific notion of intrinsic complexity for a given dataset. Our analysis
reveals that augmenting the metric learning optimization criterion with a
simple norm-based regularization can help adapt to a dataset's intrinsic
complexity, yielding better generalization. Experiments on benchmark datasets
validate our analysis and show that regularizing the metric can help discern
the signal even when the data contains high amounts of noise.
| Nakul Verma and Kristin Branson | null | 1505.02729 | null | null |
Asymptotic Behavior of Minimal-Exploration Allocation Policies: Almost
Sure, Arbitrarily Slow Growing Regret | stat.ML cs.LG | The purpose of this paper is to provide further understanding into the
structure of the sequential allocation ("stochastic multi-armed bandit", or
MAB) problem by establishing probability one finite horizon bounds and
convergence rates for the sample (or "pseudo") regret associated with two
simple classes of allocation policies $\pi$.
For any slowly increasing function $g$, subject to mild regularity
constraints, we construct two policies (the $g$-Forcing, and the $g$-Inflated
Sample Mean) that achieve a measure of regret of order $O(g(n))$ almost surely
as $n \to \infty$, bounded from above and below. Additionally, almost sure upper
and lower bounds on the remainder term are established. In the constructions
herein, the function $g$ effectively controls the "exploration" of the
classical "exploration/exploitation" tradeoff.
| Wesley Cowan and Michael N. Katehakis | null | 1505.02865 | null | null |
The Boundary Forest Algorithm for Online Supervised and Unsupervised
Learning | cs.LG cs.DS cs.IR stat.ML | We describe a new instance-based learning algorithm called the Boundary
Forest (BF) algorithm, that can be used for supervised and unsupervised
learning. The algorithm builds a forest of trees whose nodes store previously
seen examples. It can be shown data points one at a time and updates itself
incrementally, hence it is naturally online. Few instance-based algorithms have
this property while also being fast, as the BF is. This is crucial
for applications where one needs to respond to input data in real time. The
number of children of each node is not set beforehand but obtained from the
training procedure, which makes the algorithm very flexible with regards to
what data manifolds it can learn. We test its generalization performance and
speed on a range of benchmark datasets and detail in which settings it
outperforms the state of the art. Empirically we find that training time scales
as O(DN log(N)) and testing as O(D log(N)), where D is the dimensionality and N
the amount of data.
| Charles Mathy, Nate Derbinsky, Jos\'e Bento, Jonathan Rosenthal and
Jonathan Yedidia | null | 1505.02867 | null | null |
Incorporating Type II Error Probabilities from Independence Tests into
Score-Based Learning of Bayesian Network Structure | cs.LG stat.ML | We give a new consistent scoring function for structure learning of Bayesian
networks. In contrast to traditional approaches to score-based structure
learning, such as BDeu or MDL, the complexity penalty that we propose is
data-dependent and is given by the probability that a conditional independence
test correctly shows that an edge cannot exist. What really distinguishes this
new scoring function from earlier work is that it has the property of becoming
computationally easier to maximize as the amount of data increases. We prove a
polynomial sample complexity result, showing that maximizing this score is
guaranteed to correctly learn a structure with no false edges and a
distribution close to the generating distribution, whenever there exists a
Bayesian network which is a perfect map for the data generating distribution.
Although the new score can be used with any search algorithm, in our related
UAI 2013 paper [BS13], we have given empirical results showing that it is
particularly effective when used together with a linear programming relaxation
approach to Bayesian network structure learning. The present paper contains all
details of the proofs of the finite-sample complexity results in [BS13] as well
as a detailed explanation of the computation of certain error probabilities
called beta-values, whose precomputation and tabulation is necessary for the
implementation of the algorithm in [BS13].
| Eliot Brenner, David Sontag | null | 1505.02870 | null | null |
Permutational Rademacher Complexity: a New Complexity Measure for
Transductive Learning | stat.ML cs.LG | Transductive learning considers situations when a learner observes $m$
labelled training points and $u$ unlabelled test points with the final goal of
giving correct answers for the test points. This paper introduces a new
complexity measure for transductive learning called Permutational Rademacher
Complexity (PRC) and studies its properties. A novel symmetrization inequality
is proved, which shows that PRC provides a tighter control over expected
suprema of empirical processes compared to what happens in the standard i.i.d.
setting. A number of comparison results are also provided, which show the
relation between PRC and other popular complexity measures used in statistical
learning theory, including Rademacher complexity and Transductive Rademacher
Complexity (TRC). We argue that PRC is a more suitable complexity measure for
transductive learning. Finally, these results are combined with a standard
concentration argument to provide novel data-dependent risk bounds for
transductive learning.
| Ilya Tolstikhin, Nikita Zhivotovskiy, Gilles Blanchard | null | 1505.02910 | null | null |
Detecting the large entries of a sparse covariance matrix in
sub-quadratic time | stat.CO cs.LG stat.ML | The covariance matrix of a $p$-dimensional random variable is a fundamental
quantity in data analysis. Given $n$ i.i.d. observations, it is typically
estimated by the sample covariance matrix, at a computational cost of
$O(np^{2})$ operations. When $n,p$ are large, this computation may be
prohibitively slow. Moreover, in several contemporary applications, the
population matrix is approximately sparse, and only its few large entries are
of interest. This raises the following question, at the focus of our work:
Assuming approximate sparsity of the covariance matrix, can its large entries
be detected much faster, say in sub-quadratic time, without explicitly
computing all its $p^{2}$ entries? In this paper, we present and theoretically
analyze two randomized algorithms that detect the large entries of an
approximately sparse sample covariance matrix using only $O(np\text{ poly log }
p)$ operations. Furthermore, assuming sparsity of the population matrix, we
derive sufficient conditions on the underlying random variable and on the
number of samples $n$, for the sample covariance matrix to satisfy our
approximate sparsity requirements. Finally, we illustrate the performance of
our algorithms via several simulations.
| Ofer Shwartz and Boaz Nadler | 10.1093/imaiai/iaw004 | 1505.03001 | null | null |
Removing systematic errors for exoplanet search via latent causes | stat.ML astro-ph.EP astro-ph.IM cs.LG | We describe a method for removing the effect of confounders in order to
reconstruct a latent quantity of interest. The method, referred to as
half-sibling regression, is inspired by recent work in causal inference using
additive noise models. We provide a theoretical justification and illustrate
the potential of the method in a challenging astronomy application.
| Bernhard Sch\"olkopf, David W. Hogg, Dun Wang, Daniel Foreman-Mackey,
Dominik Janzing, Carl-Johann Simon-Gabriel, Jonas Peters | null | 1505.03036 | null | null |
Hybrid data clustering approach using K-Means and Flower Pollination
Algorithm | cs.LG cs.IR cs.NE | Data clustering is a technique for clustering a set of objects into a known
number of groups. Several approaches are widely applied to data clustering so
that objects within the clusters are similar and objects in different clusters
are far away from each other. K-Means is one of the most familiar center-based
clustering algorithms, since it is easy to implement and converges quickly.
However, the K-Means algorithm is sensitive to initialization and hence can get
trapped in local optima. The Flower Pollination Algorithm (FPA) is a global
optimization technique that avoids becoming trapped in locally optimal
solutions. In this paper, a
novel hybrid data clustering approach using Flower Pollination Algorithm and
K-Means (FPAKM) is proposed. The proposed algorithm results are compared with
K-Means and FPA on eight datasets. From the experimental results, FPAKM is
better than FPA and K-Means.
| R. Jensi and G. Wiselin Jiji | null | 1505.03236 | null | null |
Mind the duality gap: safer rules for the Lasso | stat.ML cs.LG math.OC stat.CO | Screening rules allow one to discard irrelevant variables early from the
optimization in Lasso problems, or their derivatives, making solvers faster. In
this paper, we propose new versions of the so-called $\textit{safe rules}$ for
the Lasso. Based on duality gap considerations, our new rules create safe test
regions whose diameters converge to zero, provided that one relies on a
converging solver. This property helps screen out more variables, for a
wider range of regularization parameter values. In addition to faster
convergence, we prove that we correctly identify the active sets (supports) of
the solutions in finite time. While our proposed strategy can cope with any
solver, its performance is demonstrated using a coordinate descent algorithm
particularly adapted to machine learning use cases. Significant computing time
reductions are obtained with respect to previous safe rules.
| Olivier Fercoq, Alexandre Gramfort, Joseph Salmon | null | 1505.03410 | null | null |
Neural Network with Unbounded Activation Functions is Universal
Approximator | cs.NE cs.LG math.FA | This paper presents an investigation of the approximation property of neural
networks with unbounded activation functions, such as the rectified linear unit
(ReLU), which is the new de-facto standard of deep learning. The ReLU network
can be analyzed by the ridgelet transform with respect to Lizorkin
distributions. By showing three reconstruction formulas by using the Fourier
slice theorem, the Radon transform, and Parseval's relation, it is shown that a
neural network with unbounded activation functions still satisfies the
universal approximation property. As an additional consequence, the ridgelet
transform, or the backprojection filter in the Radon domain, is what the
network learns after backpropagation. Subject to a constructive admissibility
condition, the trained network can be obtained by simply discretizing the
ridgelet transform, without backpropagation. Numerical examples not only
support the consistency of the admissibility condition but also imply that some
non-admissible cases result in low-pass filtering.
| Sho Sonoda and Noboru Murata | 10.1016/j.acha.2015.12.005 | 1505.03654 | null | null |
A PCA-Based Convolutional Network | cs.LG cs.CV cs.NE | In this paper, we propose a novel unsupervised deep learning model, called
PCA-based Convolutional Network (PCN). The architecture of PCN is composed of
several feature extraction stages and a nonlinear output stage. Particularly,
each feature extraction stage includes two layers: a convolutional layer and a
feature pooling layer. In the convolutional layer, the filter banks are simply
learned by PCA. In the nonlinear output stage, binary hashing is applied. For
the higher convolutional layers, the filter banks are learned from the feature
maps that were obtained in the previous stage. To test PCN, we conducted
extensive experiments on some challenging tasks, including handwritten digits
recognition, face recognition and texture classification. The results show that
PCN performs competitively with or even better than state-of-the-art deep
learning models. More importantly, since there is no backpropagation for
supervised fine-tuning, PCN is much more efficient than existing deep networks.
| Yanhai Gan, Jun Liu, Junyu Dong, Guoqiang Zhong | null | 1505.03703 | null | null |
Training generative neural networks via Maximum Mean Discrepancy
optimization | stat.ML cs.LG | We consider training a deep neural network to generate samples from an
unknown distribution given i.i.d. data. We frame learning as an optimization
minimizing a two-sample test statistic---informally speaking, a good generator
network produces samples that cause a two-sample test to fail to reject the
null hypothesis. As our two-sample test statistic, we use an unbiased estimate
of the maximum mean discrepancy, which is the centerpiece of the nonparametric
kernel two-sample test proposed by Gretton et al. (2012). We compare to the
adversarial nets framework introduced by Goodfellow et al. (2014), in which
learning is a two-player game between a generator network and an adversarial
discriminator network, both trained to outwit the other. From this perspective,
the MMD statistic plays the role of the discriminator. In addition to empirical
comparisons, we prove bounds on the generalization error incurred by optimizing
the empirical MMD.
| Gintare Karolina Dziugaite and Daniel M. Roy and Zoubin Ghahramani | null | 1505.03906 | null | null |
$k$-center Clustering under Perturbation Resilience | cs.DS cs.LG | The $k$-center problem is a canonical and long-studied facility location and
clustering problem with many applications in both its symmetric and asymmetric
forms. Both versions of the problem have tight approximation factors on worst
case instances. Therefore to improve on these ratios, one must go beyond the
worst case.
In this work, we take this approach and provide strong positive results both
for the asymmetric and symmetric $k$-center problems under a natural input
stability (promise) condition called $\alpha$-perturbation resilience [Bilu and
Linial 2012], which states that the optimal solution does not change under any
$\alpha$-factor perturbation to the input distances. We provide algorithms that
give strong guarantees simultaneously for stable and non-stable instances: our
algorithms always inherit the worst-case guarantees of clustering approximation
algorithms, and output the optimal solution if the input is $2$-perturbation
resilient. Furthermore, we prove our result is tight by showing symmetric
$k$-center under $(2-\epsilon)$-perturbation resilience is hard unless $NP=RP$.
The impact of our results is multifaceted. This is the first tight result
for any problem under perturbation resilience. Furthermore, our results
illustrate a surprising relationship between symmetric and asymmetric
$k$-center instances under perturbation resilience. Unlike approximation ratio,
for which symmetric $k$-center is easily solved to a factor of 2 but asymmetric
$k$-center cannot be approximated to any constant factor, both symmetric and
asymmetric $k$-center can be solved optimally under resilience to
2-perturbations. Finally, our guarantees in the setting where only part of the
data satisfies perturbation resilience makes these algorithms more applicable
to real-life instances.
| Maria-Florina Balcan, Nika Haghtalab, Colin White | null | 1505.03924 | null | null |
Using Ensemble Models in the Histological Examination of Tissue
Abnormalities | cs.CV cs.CE cs.LG | Classification models for the automatic detection of abnormalities on
histological samples do exist, with an active debate on the cost associated
with false negative diagnosis (underdiagnosis) and false positive diagnosis
(overdiagnosis). Current models tend to underdiagnose, failing to recognize a
potentially fatal disease.
The objective of this study is to investigate the possibility of
automatically identifying abnormalities in tissue samples through the use of an
ensemble model on data generated by histological examination and to minimize
the number of false negative cases.
| Giancarlo Crocetti, Michael Coakley, Phil Dressner, Wanda Kellum,
Tamba Lamin | null | 1505.03932 | null | null |
Safe Screening for Multi-Task Feature Learning with Multiple Data
Matrices | cs.LG | Multi-task feature learning (MTFL) is a powerful technique in boosting the
predictive performance by learning multiple related
classification/regression/clustering tasks simultaneously. However, solving the
MTFL problem remains challenging when the feature dimension is extremely large.
In this paper, we propose a novel screening rule---that is based on the dual
projection onto convex sets (DPC)---to quickly identify the inactive
features---that have zero coefficients in the solution vectors across all
tasks. One of the appealing features of DPC is that it is safe in the sense
that the detected inactive features are guaranteed to have zero coefficients in
the solution vectors across all tasks. Thus, by removing the inactive features
from the training phase, we may have substantial savings in the computational
cost and memory usage without sacrificing accuracy. To the best of our
knowledge, it is the first screening rule that is applicable to sparse models
with multiple data matrices. A key challenge in deriving DPC is to solve a
nonconvex problem. We show that we can solve for the global optimum efficiently
via a properly chosen parametrization of the constraint set. Moreover, DPC has
very low computational cost and can be integrated with any existing solvers. We
have evaluated the proposed DPC rule on both synthetic and real data sets. The
experiments indicate that DPC is very effective in identifying the inactive
features---especially for high-dimensional data---which leads to a speedup of up
to several orders of magnitude.
| Jie Wang and Jieping Ye | null | 1505.04073 | null | null |
Optimal Low-Rank Tensor Recovery from Separable Measurements: Four
Contractions Suffice | stat.ML cs.IT cs.LG math.IT math.OC | Tensors play a central role in many modern machine learning and signal
processing applications. In such applications, the target tensor is usually of
low rank, i.e., can be expressed as a sum of a small number of rank one
tensors. This motivates us to consider the problem of low rank tensor recovery
from a class of linear measurements called separable measurements. As specific
examples, we focus on two distinct types of separable measurement mechanisms:
(a) random projections, where each measurement corresponds to an inner product
of the tensor with a suitable random tensor, and (b) the completion problem
where measurements constitute revelation of a random set of entries. We present
a computationally efficient algorithm, with rigorous and order-optimal sample
complexity results (up to logarithmic factors) for tensor recovery. Our method
is based on reduction to matrix completion sub-problems and adaptation of
Leurgans' method for tensor decomposition. We extend the methodology and sample
complexity results to higher order tensors, and experimentally validate our
theoretical results.
| Parikshit Shah, Nikhil Rao, Gongguo Tang | null | 1505.04085 | null | null |
MCODE: Multivariate Conditional Outlier Detection | cs.AI cs.LG stat.ML | Outlier detection aims to identify unusual data instances that deviate from
expected patterns. Outlier detection is particularly challenging when
outliers are context dependent and when they are defined by unusual
combinations of multiple outcome variable values. In this paper, we develop and
study a new conditional outlier detection approach for multivariate outcome
spaces that works by (1) transforming the conditional detection to the outlier
detection problem in a new (unconditional) space and (2) defining outlier
scores by analyzing the data in the new space. Our approach relies on the
classifier chain decomposition of the multi-dimensional classification problem
that lets us transform the output space into a probability vector, one
probability for each dimension of the output space. Outlier scores applied to
these transformed vectors are then used to detect the outliers. Experiments on
multiple multi-dimensional classification problems with the different outlier
injection rates show that our methodology is robust and able to successfully
identify outliers when outliers are either sparse (manifested in one or very
few dimensions) or dense (affecting multiple dimensions).
| Charmgil Hong, Milos Hauskrecht | null | 1505.04097 | null | null |
Margins, Kernels and Non-linear Smoothed Perceptrons | cs.LG cs.AI cs.NA math.OC | We focus on the problem of finding a non-linear classification function that
lies in a Reproducing Kernel Hilbert Space (RKHS) both from the primal point of
view (finding a perfect separator when one exists) and the dual point of view
(giving a certificate of non-existence), with special focus on generalizations
of two classical schemes - the Perceptron (primal) and Von-Neumann (dual)
algorithms.
We cast our problem as one of maximizing the regularized normalized
hard-margin ($\rho$) in an RKHS and use the Representer Theorem to rephrase it
in terms of a Mahalanobis dot-product/semi-norm associated with the kernel's
(normalized and signed) Gram matrix. We derive an accelerated smoothed
algorithm with a convergence rate of $\tfrac{\sqrt {\log n}}{\rho}$ given $n$
separable points, which is strikingly similar to the classical kernelized
Perceptron algorithm whose rate is $\tfrac1{\rho^2}$. When no such classifier
exists, we prove a version of Gordan's separation theorem for RKHSs, and give a
reinterpretation of negative margins. This allows us to give guarantees for a
primal-dual algorithm that halts in $\min\{\tfrac{\sqrt n}{|\rho|},
\tfrac{\sqrt n}{\epsilon}\}$ iterations with a perfect separator in the RKHS if
the primal is feasible or a dual $\epsilon$-certificate of near-infeasibility.
| Aaditya Ramdas, Javier Pe\~na | null | 1505.04123 | null | null |
Consistent Algorithms for Multiclass Classification with a Reject Option | cs.LG stat.ML | We consider the problem of $n$-class classification ($n\geq 2$), where the
classifier can choose to abstain from making predictions at a given cost, say,
a factor $\alpha$ of the cost of misclassification. Designing consistent
algorithms for such $n$-class classification problems with a `reject option' is
the main goal of this paper, thereby extending and generalizing previously
known results for $n=2$. We show that the Crammer-Singer surrogate and the one
vs all hinge loss, albeit with a different predictor than the standard argmax,
yield consistent algorithms for this problem when $\alpha=\frac{1}{2}$. More
interestingly, we design a new convex surrogate that is also consistent for
this problem when $\alpha=\frac{1}{2}$ and operates on a much lower dimensional
space ($\log(n)$ as opposed to $n$). We also generalize all three surrogates to
be consistent for any $\alpha\in[0, \frac{1}{2}]$.
| Harish G. Ramaswamy and Ambuj Tewari and Shivani Agarwal | null | 1505.04137 | null | null |
Algorithmic Connections Between Active Learning and Stochastic Convex
Optimization | cs.LG cs.AI math.OC stat.ML | Interesting theoretical associations have been established by recent papers
between the fields of active learning and stochastic convex optimization due to
the common role of feedback in sequential querying mechanisms. In this paper,
we continue this thread in two parts by exploiting these relations for the
first time to yield novel algorithms in both fields, further motivating the
study of their intersection. First, inspired by a recent optimization algorithm
that was adaptive to unknown uniform convexity parameters, we present a new
active learning algorithm for one-dimensional thresholds that can yield minimax
rates by adapting to unknown noise parameters. Next, we show that one can
perform $d$-dimensional stochastic minimization of smooth uniformly convex
functions when only granted oracle access to noisy gradient signs along any
coordinate instead of real-valued gradients, by using a simple randomized
coordinate descent procedure where each line search can be solved by
$1$-dimensional active learning, provably achieving the same error convergence
rate as having the entire real-valued gradient. Combining these two parts
yields an algorithm that solves stochastic convex optimization of uniformly
convex and smooth functions using only noisy gradient signs by repeatedly
performing active learning, achieves optimal rates and is adaptive to all
unknown convexity and smoothness parameters.
| Aaditya Ramdas, Aarti Singh | null | 1505.04214 | null | null |
An Analysis of Active Learning With Uniform Feature Noise | stat.ML cs.AI cs.LG math.ST stat.TH | In active learning, the user sequentially chooses values for feature $X$ and
an oracle returns the corresponding label $Y$. In this paper, we consider the
effect of feature noise in active learning, which could arise either because
$X$ itself is being measured, or it is corrupted in transmission to the oracle,
or the oracle returns the label of a noisy version of the query point. In
statistics, feature noise is known as "errors in variables" and has been
studied extensively in non-active settings. However, the effect of feature
noise in active learning has not been studied before. We consider the
well-known Berkson errors-in-variables model with additive uniform noise of
width $\sigma$.
Our simple but revealing setting is that of one-dimensional binary
classification, where the goal is to learn a threshold (the point where the
probability of a $+$ label crosses half). We deal with regression functions
that are antisymmetric in a region of size $\sigma$ around the threshold and
also satisfy Tsybakov's margin condition around the threshold. We prove minimax
lower and upper bounds which demonstrate that when $\sigma$ is smaller than the
minimax active/passive noiseless error derived in \cite{CN07}, then noise has
no effect on the rates and one achieves the same noiseless rates. For larger
$\sigma$, the \textit{unflattening} of the regression function on convolution
with uniform noise, along with its local antisymmetry around the threshold,
together yield a behaviour where noise \textit{appears} to be beneficial. Our
key result is that active learning can buy significant improvement over a
passive strategy even in the presence of feature noise.
| Aaditya Ramdas, Barnabas Poczos, Aarti Singh, Larry Wasserman | null | 1505.04215 | null | null |
A New Perspective on Boosting in Linear Regression via Subgradient
Optimization and Relatives | math.ST cs.LG math.OC stat.ML stat.TH | In this paper we analyze boosting algorithms in linear regression from a new
perspective: that of modern first-order methods in convex optimization. We show
that classic boosting algorithms in linear regression, namely the incremental
forward stagewise algorithm (FS$_\varepsilon$) and least squares boosting
(LS-Boost($\varepsilon$)), can be viewed as subgradient descent to minimize the
loss function defined as the maximum absolute correlation between the features
and residuals. We also propose a modification of FS$_\varepsilon$ that yields
an algorithm for the Lasso, and that may be easily extended to an algorithm
that computes the Lasso path for different values of the regularization
parameter. Furthermore, we show that these new algorithms for the Lasso may
also be interpreted as the same master algorithm (subgradient descent), applied
to a regularized version of the maximum absolute correlation loss function. We
derive novel, comprehensive computational guarantees for several boosting
algorithms in linear regression (including LS-Boost($\varepsilon$) and
FS$_\varepsilon$) by using techniques of modern first-order methods in convex
optimization. Our computational guarantees inform us about the statistical
properties of boosting algorithms. In particular they provide, for the first
time, a precise theoretical description of the amount of data-fidelity and
regularization imparted by running a boosting algorithm with a prespecified
learning rate for a fixed but arbitrary number of iterations, for any dataset.
| Robert M. Freund, Paul Grigas, Rahul Mazumder | null | 1505.04243 | null | null |
Global Convergence of Unmodified 3-Block ADMM for a Class of Convex
Minimization Problems | math.OC cs.LG stat.ML | The alternating direction method of multipliers (ADMM) has been successfully
applied to solve structured convex optimization problems due to its superior
practical performance. The convergence properties of the 2-block ADMM have been
studied extensively in the literature. Specifically, it has been proven that
the 2-block ADMM globally converges for any penalty parameter $\gamma>0$. In
this sense, the 2-block ADMM allows the parameter to be free, i.e., there is no
need to restrict the value for the parameter when implementing this algorithm
in order to ensure convergence. However, for the 3-block ADMM, Chen et al.
\cite{Chen-admm-failure-2013} recently constructed a counter-example showing
that it can diverge if no further condition is imposed. The existing results on
studying further sufficient conditions on guaranteeing the convergence of the
3-block ADMM usually require $\gamma$ to be smaller than a certain bound, which
is usually either difficult to compute or too small to make it a practical
algorithm. In this paper, we show that the 3-block ADMM still globally
converges with any penalty parameter $\gamma>0$ if the third function $f_3$ in
the objective is smooth and strongly convex, and its condition number is in
$[1,1.0798)$, besides some other mild conditions. This requirement covers an
important class of problems to be called regularized least squares
decomposition (RLSD) in this paper.
| Tianyi Lin, Shiqian Ma, Shuzhong Zhang | null | 1505.04252 | null | null |
Provably Correct Algorithms for Matrix Column Subset Selection with
Selectively Sampled Data | stat.ML cs.LG | We consider the problem of matrix column subset selection, which selects a
subset of columns from an input matrix such that the input can be well
approximated by the span of the selected columns. Column subset selection has
been applied to numerous real-world data applications such as population
genetics summarization, electronic circuits testing and recommendation systems.
In many applications the complete data matrix is unavailable and one needs to
select representative columns by inspecting only a small portion of the input
matrix. In this paper we propose the first provably correct column subset
selection algorithms for partially observed data matrices. Our proposed
algorithms exhibit different merits and limitations in terms of statistical
accuracy, computational efficiency, sample complexity and sampling schemes,
which provides a nice exploration of the tradeoff between these desired
properties for column subset selection. The proposed methods employ the idea of
feedback driven sampling and are inspired by several sampling schemes
previously introduced for low-rank matrix approximation tasks (Drineas et al.,
2008; Frieze et al., 2004; Deshpande and Vempala, 2006; Krishnamurthy and
Singh, 2014). Our analysis shows that, under the assumption that the input data
matrix has incoherent rows but possibly coherent columns, all algorithms
provably converge to the best low-rank approximation of the original data as
number of selected columns increases. Furthermore, two of the proposed
algorithms enjoy a relative error bound, which is preferred for column subset
selection and matrix approximation purposes. We also demonstrate through both
theoretical and empirical analysis the power of feedback driven sampling
compared to uniform random sampling on input matrices with highly correlated
columns.
| Yining Wang, Aarti Singh | null | 1505.04343 | null | null |
Shrinkage degree in $L_2$-re-scale boosting for regression | cs.LG | Re-scale boosting (RBoosting) is a variant of boosting which can essentially
improve the generalization performance of boosting learning. The key feature of
RBoosting lies in introducing a shrinkage degree to re-scale the ensemble
estimate in each gradient-descent step. Thus, the shrinkage degree determines
the performance of RBoosting.
The aim of this paper is to develop a concrete analysis concerning how to
determine the shrinkage degree in $L_2$-RBoosting. We propose two feasible ways
to select the shrinkage degree. The first one is to parameterize the shrinkage
degree and the other is to develop a data-driven approach to it. After
rigorously analyzing the importance of the shrinkage degree in $L_2$-RBoosting
learning, we compare the pros and cons of the proposed methods. We find that
although these approaches can reach the same learning rates, the structure of
the final estimate of the parameterized approach is better, which sometimes
yields a better generalization capability when the number of samples is finite.
With this, we recommend parameterizing the shrinkage degree of
$L_2$-RBoosting. To this end, we present an adaptive parameter-selection
strategy for shrinkage degree and verify its feasibility through both
theoretical analysis and numerical verification.
The obtained results enhance the understanding of RBoosting and further give
guidance on how to use $L_2$-RBoosting for regression tasks.
| Lin Xu, Shaobo Lin, Yao Wang and Zongben Xu | null | 1505.04369 | null | null |
Hinge-Loss Markov Random Fields and Probabilistic Soft Logic | cs.LG cs.AI stat.ML | A fundamental challenge in developing high-impact machine learning
technologies is balancing the need to model rich, structured domains with the
ability to scale to big data. Many important problem areas are both richly
structured and large scale, from social and biological networks, to knowledge
graphs and the Web, to images, video, and natural language. In this paper, we
introduce two new formalisms for modeling structured data, and show that they
can both capture rich structure and scale to big data. The first, hinge-loss
Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model
that generalizes different approaches to convex inference. We unite three
approaches from the randomized algorithms, probabilistic graphical models, and
fuzzy logic communities, showing that all three lead to the same inference
objective. We then define HL-MRFs by generalizing this unified objective. The
second new formalism, probabilistic soft logic (PSL), is a probabilistic
programming language that makes HL-MRFs easy to define using a syntax based on
first-order logic. We introduce an algorithm for inferring most-probable
variable assignments (MAP inference) that is much more scalable than
general-purpose convex optimization methods, because it uses message passing to
take advantage of sparse dependency structures. We then show how to learn the
parameters of HL-MRFs. The learned HL-MRFs are as accurate as analogous
discrete models, but much more scalable. Together, these algorithms enable
HL-MRFs and PSL to model rich, structured data at scales not previously
possible.
| Stephen H. Bach, Matthias Broecheler, Bert Huang, Lise Getoor | null | 1505.04406 | null | null |
Simple regret for infinitely many armed bandits | cs.LG stat.ML | We consider a stochastic bandit problem with infinitely many arms. In this
setting, the learner has no chance of trying all the arms even once and has to
dedicate its limited number of samples only to a certain number of arms. All
previous algorithms for this setting were designed for minimizing the
cumulative regret of the learner. In this paper, we propose an algorithm aiming
at minimizing the simple regret. As in the cumulative regret setting of
infinitely many armed bandits, the rate of the simple regret will depend on a
parameter $\beta$ characterizing the distribution of the near-optimal arms. We
prove that depending on $\beta$, our algorithm is minimax optimal either up to
a multiplicative constant or up to a $\log(n)$ factor. We also provide
extensions to several important cases: when $\beta$ is unknown, in a natural
setting where the near-optimal arms have a small variance, and in the case of
unknown time horizon.
| Alexandra Carpentier, Michal Valko | null | 1505.04627 | null | null |
Recurrent Neural Network Training with Dark Knowledge Transfer | stat.ML cs.CL cs.LG cs.NE | Recurrent neural networks (RNNs), particularly long short-term memory (LSTM),
have gained much attention in automatic speech recognition (ASR). Although some
success stories have been reported, training RNNs remains highly
challenging, especially with limited training data. Recent research found that
a well-trained model can be used as a teacher to train other child models, by
using the predictions generated by the teacher model as supervision. This
knowledge transfer learning has been employed to train simple neural nets with
a complex one, so that the final performance can reach a level that is
infeasible to obtain by regular training. In this paper, we employ the
knowledge transfer learning approach to train RNNs (precisely LSTM) using a
deep neural network (DNN) model as the teacher. This is different from most of
the existing research on knowledge transfer learning, since the teacher (DNN)
is assumed to be weaker than the child (RNN); however, our experiments on an
ASR task showed that it works fairly well: without applying any tricks on the
learning scheme, this approach can train RNNs successfully even with limited
training data.
| Zhiyuan Tang, Dong Wang and Zhiyong Zhang | 10.1109/ICASSP.2016.7472809 | 1505.04630 | null | null |
Graph Partitioning via Parallel Submodular Approximation to Accelerate
Distributed Machine Learning | cs.DC cs.AI cs.LG | Distributed computing excels at processing large scale data, but the
communication cost for synchronizing the shared parameters may slow down the
overall performance. Fortunately, the interactions between parameter and data
in many problems are sparse, which admits efficient partition in order to
reduce the communication overhead.
In this paper, we formulate data placement as a graph partitioning problem.
We propose a distributed partitioning algorithm with theoretical guarantees and
provide a highly efficient implementation, demonstrating its promising results
on both text datasets and social networks. We show that the proposed algorithm
leads to a 1.6x speedup of a state-of-the-art distributed machine learning
system by eliminating 90\% of the network communication.
| Mu Li, Dave G. Andersen, Alexander J. Smola | null | 1505.04636 | null | null |
Ensemble of Example-Dependent Cost-Sensitive Decision Trees | cs.LG | Several real-world classification problems are example-dependent
cost-sensitive in nature, where the costs due to misclassification vary between
examples and not only within classes. However, standard classification methods
do not take these costs into account, and assume a constant cost of
misclassification errors. In previous works, some methods that take into
account the financial costs into the training of different algorithms have been
proposed, with the example-dependent cost-sensitive decision tree algorithm
being the one that gives the highest savings. In this paper we propose a new
framework of ensembles of example-dependent cost-sensitive decision-trees. The
framework consists of creating different example-dependent cost-sensitive
decision trees on random subsamples of the training set, and then combining
them using three different combination approaches. Moreover, we propose two new
cost-sensitive combination approaches: cost-sensitive weighted voting and
cost-sensitive stacking, the latter being based on the cost-sensitive logistic
regression method. Finally, using five different databases, from four
real-world applications: credit card fraud detection, churn modeling, credit
scoring and direct marketing, we evaluate the proposed method against
state-of-the-art example-dependent cost-sensitive techniques, namely,
cost-proportionate sampling, Bayes minimum risk and cost-sensitive decision
trees. The results show that the proposed algorithms have better results for
all databases, in the sense of higher savings.
| Alejandro Correa Bahnsen, Djamila Aouada, Bjorn Ottersten | null | 1505.04637 | null | null |
Compressed Nonnegative Matrix Factorization is Fast and Accurate | cs.LG stat.ML | Nonnegative matrix factorization (NMF) has an established reputation as a
useful data analysis technique in numerous applications. However, its use in
practical situations has faced challenges in recent years, the fundamental
factor being the ever-growing size of the datasets available and needed in the
information sciences. To address this, in this work we propose to
use structured random compression, that is, random projections that exploit the
data structure, for two NMF variants: classical and separable. In separable NMF
(SNMF) the left factors are a subset of the columns of the input matrix. We
present suitable formulations for each problem, dealing with different
representative algorithms within each one. We show that the resulting
compressed techniques are faster than their uncompressed variants, vastly
reduce memory demands, and do not encompass any significant deterioration in
performance. The proposed structured random projections for SNMF allow us to deal
with arbitrarily shaped large matrices, beyond the standard limit of
tall-and-skinny matrices, granting access to very efficient computations in
this general setting. We accompany the algorithmic presentation with
theoretical foundations and numerous and diverse examples, showing the
suitability of the proposed approaches.
| Mariano Tepper and Guillermo Sapiro | 10.1109/TSP.2016.2516971 | 1505.04650 | null | null |
Layered Adaptive Importance Sampling | stat.CO cs.LG stat.ML | Monte Carlo methods represent the "de facto" standard for approximating
complicated integrals involving multidimensional target distributions. In order
to generate random realizations from the target distribution, Monte Carlo
techniques use simpler proposal probability densities to draw candidate
samples. The performance of any such method is strictly related to the
specification of the proposal distribution, such that unfortunate choices
easily wreak havoc on the resulting estimators. In this work, we introduce a
layered (i.e., hierarchical) procedure to generate samples employed within a
Monte Carlo scheme. This approach ensures that an appropriate equivalent
proposal density is always obtained automatically (thus eliminating the risk of
a catastrophic performance), although at the expense of a moderate increase in
the complexity. Furthermore, we provide a general unified importance sampling
(IS) framework, where multiple proposal densities are employed and several IS
schemes are introduced by applying the so-called deterministic mixture
approach. Finally, given these schemes, we also propose a novel class of
adaptive importance samplers using a population of proposals, where the
adaptation is driven by independent parallel or interacting Markov Chain Monte
Carlo (MCMC) chains. The resulting algorithms efficiently combine the benefits
of both IS and MCMC methods.
| L. Martino, V. Elvira, D. Luengo, J. Corander | 10.1007/s11222-016-9642-5 | 1505.04732 | null | null |
DopeLearning: A Computational Approach to Rap Lyrics Generation | cs.LG cs.AI cs.CL cs.NE | Writing rap lyrics requires both creativity to construct a meaningful,
interesting story and lyrical skills to produce complex rhyme patterns, which
form the cornerstone of good flow. We present a rap lyrics generation method
that captures both of these aspects. First, we develop a prediction model to
identify the next line of existing lyrics from a set of candidate next lines.
This model is based on two machine-learning techniques: the RankSVM algorithm
and a deep neural network model with a novel structure. Results show that the
prediction model can identify the true next line among 299 randomly selected
lines with an accuracy of 17%, i.e., over 50 times more likely than by random.
Second, we employ the prediction model to combine lines from existing songs,
producing lyrics with rhyme and meaning. An evaluation of the produced lyrics
shows that in terms of quantitative rhyme density, the method outperforms the
best human rappers by 21%. The rap lyrics generator has been deployed as an
online tool called DeepBeat, and the performance of the tool has been assessed
by analyzing its usage logs. This analysis shows that machine-learned rankings
correlate with user preferences.
| Eric Malmi, Pyry Takala, Hannu Toivonen, Tapani Raiko, Aristides
Gionis | 10.1145/2939672.2939679 | 1505.04771 | null | null |
On the tightness of an SDP relaxation of k-means | cs.IT cs.DS cs.LG math.IT math.ST stat.ML stat.TH | Recently, Awasthi et al. introduced an SDP relaxation of the $k$-means
problem in $\mathbb R^m$. In this work, we consider a random model for the data
points in which $k$ balls of unit radius are deterministically distributed
throughout $\mathbb R^m$, and then in each ball, $n$ points are drawn according
to a common rotationally invariant probability distribution. For any fixed ball
configuration and probability distribution, we prove that the SDP relaxation of
the $k$-means problem exactly recovers these planted clusters with probability
$1-e^{-\Omega(n)}$ provided the distance between any two of the ball centers is
$>2+\epsilon$, where $\epsilon$ is an explicit function of the configuration of
the ball centers, and can be arbitrarily small when $m$ is large.
| Takayuki Iguchi, Dustin G. Mixon, Jesse Peterson, Soledad Villar | null | 1505.04778 | null | null |
Multi-task additive models with shared transfer functions based on
dictionary learning | stat.ML cs.LG | Additive models form a widely popular class of regression models which
represent the relation between covariates and response variables as the sum of
low-dimensional transfer functions. Besides flexibility and accuracy, a key
benefit of these models is their interpretability: the transfer functions
provide visual means for inspecting the models and identifying domain-specific
relations between inputs and outputs. However, in large-scale problems
involving the prediction of many related tasks, learning additive models
independently results in a loss of model interpretability, and can cause overfitting
when training data is scarce. We introduce a novel multi-task learning approach
which provides a corpus of accurate and interpretable additive models for a
large number of related forecasting tasks. Our key idea is to share transfer
functions across models in order to reduce the model complexity and ease the
exploration of the corpus. We establish a connection with sparse dictionary
learning and propose a new efficient fitting algorithm which alternates between
sparse coding and transfer function updates. The former step is solved via an
extension of Orthogonal Matching Pursuit, whose properties are analyzed using a
novel recovery condition which extends existing results in the literature. The
latter step is addressed using a traditional dictionary update rule.
Experiments on real-world data demonstrate that our approach compares favorably
to baseline methods while yielding an interpretable corpus of models, revealing
structure among the individual tasks and being more robust when training data
is scarce. Our framework therefore extends the well-known benefits of additive
models to common regression settings possibly involving thousands of tasks.
| Alhussein Fawzi, Mathieu Sinn, Pascal Frossard | null | 1505.04966 | null | null |
Risk and Regret of Hierarchical Bayesian Learners | cs.LG stat.ML | Common statistical practice has shown that the full power of Bayesian methods
is not realized until hierarchical priors are used, as these allow for greater
"robustness" and the ability to "share statistical strength." Yet it is an
ongoing challenge to provide a learning-theoretically sound formalism of such
notions that: offers practical guidance concerning when and how best to utilize
hierarchical models; provides insights into what makes for a good hierarchical
prior; and, when the form of the prior has been chosen, can guide the choice of
hyperparameter settings. We present a set of analytical tools for understanding
hierarchical priors in both the online and batch learning settings. We provide
regret bounds under log-loss, which show how certain hierarchical models
compare, in retrospect, to the best single model in the model class. We also
show how to convert a Bayesian log-loss regret bound into a Bayesian risk bound
for any bounded loss, a result which may be of independent interest. Risk and
regret bounds for Student's $t$ and hierarchical Gaussian priors allow us to
formalize the concepts of "robustness" and "sharing statistical strength."
Priors for feature selection are investigated as well. Our results suggest that
the learning-theoretic benefits of using hierarchical priors can often come at
little cost on practical problems.
| Jonathan H. Huggins and Joshua B. Tenenbaum | null | 1505.04984 | null | null |
An Experimental Comparison of Hybrid Algorithms for Bayesian Network
Structure Learning | stat.ML cs.AI cs.LG | We present a novel hybrid algorithm for Bayesian network structure learning,
called Hybrid HPC (H2PC). It first reconstructs the skeleton of a Bayesian
network and then performs a Bayesian-scoring greedy hill-climbing search to
orient the edges. It is based on a subroutine called HPC, that combines ideas
from incremental and divide-and-conquer constraint-based methods to learn the
parents and children of a target variable. We conduct an experimental
comparison of H2PC against Max-Min Hill-Climbing (MMHC), which is currently the
most powerful state-of-the-art algorithm for Bayesian network structure
learning, on several benchmarks with various data sizes. Our extensive
experiments show that H2PC outperforms MMHC both in terms of goodness of fit to
new data and in terms of the quality of the network structure itself, which is
closer to the true dependence structure of the data. The source code (in R) of
H2PC as well as all data sets used for the empirical tests are publicly
available.
| Maxime Gasse (DM2L), Alex Aussem (DM2L), Haytham Elghazel (DM2L) | 10.1007/978-3-642-33460-3_9 | 1505.05004 | null | null |
Modelling-based experiment retrieval: A case study with gene expression
clustering | stat.ML cs.IR cs.LG | Motivation: Public and private repositories of experimental data are growing
to sizes that require dedicated methods for finding relevant data. To improve
on the state of the art of keyword searches from annotations, methods for
content-based retrieval have been proposed. In the context of gene expression
experiments, most methods retrieve gene expression profiles, requiring each
experiment to be expressed as a single profile, typically of case vs. control.
A more general, recently suggested alternative is to retrieve experiments whose
models are good for modelling the query dataset. However, for very noisy and
high-dimensional query data, this retrieval criterion turns out to be very
noisy as well.
Results: We propose doing retrieval using a denoised model of the query
dataset, instead of the original noisy dataset itself. To this end, we
introduce a general probabilistic framework, where each experiment is modelled
separately and the retrieval is done by finding related models. For retrieval
of gene expression experiments, we use a probabilistic model called product
partition model, which induces a clustering of genes that show similar
expression patterns across a number of samples. The suggested metric for
retrieval using clusterings is the normalized information distance. Empirical
results finally suggest that inference for the full probabilistic model can be
approximated with good performance using computationally faster heuristic
clustering approaches (e.g. $k$-means). The method is highly scalable and
straightforward to apply to construct a general-purpose gene expression
experiment retrieval method.
Availability: The method can be implemented using standard clustering
algorithms and normalized information distance, available in many statistical
software packages.
| Paul Blomstedt, Ritabrata Dutta, Sohan Seth, Alvis Brazma and Samuel
Kaski | 10.1093/bioinformatics/btv762 | 1505.05007 | null | null |
Solving Random Quadratic Systems of Equations Is Nearly as Easy as
Solving Linear Systems | cs.IT cs.LG math.IT math.NA math.ST stat.ML stat.TH | We consider the fundamental problem of solving quadratic systems of equations
in $n$ variables, where $y_i = |\langle \boldsymbol{a}_i, \boldsymbol{x}
\rangle|^2$, $i = 1, \ldots, m$ and $\boldsymbol{x} \in \mathbb{R}^n$ is
unknown. We propose a novel method, which starting with an initial guess
computed by means of a spectral method, proceeds by minimizing a nonconvex
functional as in the Wirtinger flow approach. There are several key
distinguishing features, most notably, a distinct objective functional and
novel update rules, which operate in an adaptive fashion and drop terms bearing
too much influence on the search direction. These careful selection rules
provide a tighter initial guess, better descent directions, and thus enhanced
practical performance. On the theoretical side, we prove that for certain
unstructured models of quadratic systems, our algorithms return the correct
solution in linear time, i.e. in time proportional to reading the data
$\{\boldsymbol{a}_i\}$ and $\{y_i\}$ as soon as the ratio $m/n$ between the
number of equations and unknowns exceeds a fixed numerical constant. We extend
the theory to deal with noisy systems in which we only have $y_i \approx
|\langle \boldsymbol{a}_i, \boldsymbol{x} \rangle|^2$ and prove that our
algorithms achieve a statistical accuracy, which is nearly un-improvable. We
complement our theoretical study with numerical examples showing that solving
random quadratic systems is both computationally and statistically not much
harder than solving linear systems of the same size---hence the title of this
paper. For instance, we demonstrate empirically that the computational cost of
our algorithm is about four times that of solving a least-squares problem of
the same size.
| Yuxin Chen, Emmanuel J. Candes | null | 1505.05114 | null | null |
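Illustrative sketch (not from the paper above): a minimal NumPy version of the general recipe the abstract describes -- spectral initialization followed by gradient descent on a nonconvex least-squares objective for measurements y_i = <a_i, x>^2. The truncation rules that distinguish the actual method are omitted, and all parameter choices below are assumptions made for illustration.

import numpy as np

def phase_retrieval(A, y, iters=500):
    # Recover x (up to sign) from y_i = <a_i, x>^2 via spectral initialization
    # followed by gradient descent on f(x) = (1/4m) sum_i (<a_i, x>^2 - y_i)^2.
    # The truncation rules that make the published method robust are omitted.
    m, n = A.shape
    Y = (A * y[:, None]).T @ A / m            # (1/m) sum_i y_i a_i a_i^T
    _, V = np.linalg.eigh(Y)
    x = V[:, -1] * np.sqrt(y.mean())          # leading eigenvector, rescaled to the estimated norm
    step = 0.2 / y.mean()                     # conservative step size
    for _ in range(iters):
        Ax = A @ x
        x = x - step * (A.T @ ((Ax**2 - y) * Ax) / m)
    return x

rng = np.random.default_rng(0)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = (A @ x_true)**2
x_hat = phase_retrieval(A, y)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))  # sign ambiguity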
oASIS: Adaptive Column Sampling for Kernel Matrix Approximation | stat.ML cs.LG | Kernel matrices (e.g. Gram or similarity matrices) are essential for many
state-of-the-art approaches to classification, clustering, and dimensionality
reduction. For large datasets, the cost of forming and factoring such kernel
matrices becomes intractable. To address this challenge, we introduce a new
adaptive sampling algorithm called Accelerated Sequential Incoherence Selection
(oASIS) that samples columns without explicitly computing the entire kernel
matrix. We provide conditions under which oASIS is guaranteed to exactly
recover the kernel matrix with an optimal number of columns selected. Numerical
experiments on both synthetic and real-world datasets demonstrate that oASIS
achieves performance comparable to state-of-the-art adaptive sampling methods
at a fraction of the computational cost. The low runtime complexity of oASIS
and its low memory footprint enable the solution of large problems that are
simply intractable using other adaptive methods.
| Raajen Patel, Thomas A. Goldstein, Eva L. Dyer, Azalia Mirhoseini, and
Richard G. Baraniuk | null | 1505.05208 | null | null |
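For orientation only: a generic Nystrom approximation built from a subset of kernel columns, with uniform random column selection standing in for the adaptive, incoherence-based rule of oASIS described above. Function names and parameters are illustrative assumptions, not the authors' implementation.

import numpy as np

def rbf(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, k, kernel=rbf, rng=None):
    # Generic Nystrom approximation K ~ C W^+ C^T from k sampled columns.
    # Columns are chosen uniformly at random here; oASIS replaces this with
    # an adaptive selection rule that avoids forming the full kernel matrix.
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(len(X), size=k, replace=False)
    C = kernel(X, X[idx])            # n x k block of kernel columns
    W = C[idx]                       # k x k block on the sampled points
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).standard_normal((300, 5))
K_approx = nystrom(X, k=40)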
Learning with a Drifting Target Concept | cs.LG | We study the problem of learning in the presence of a drifting target
concept. Specifically, we provide bounds on the error rate at a given time,
given a learner with access to a history of independent samples labeled
according to a target concept that can change on each round. One of our main
contributions is a refinement of the best previous results for polynomial-time
algorithms for the space of linear separators under a uniform distribution. We
also provide general results for an algorithm capable of adapting to a variable
rate of drift of the target concept. Some of the results also describe an
active learning variant of this setting, and provide bounds on the number of
queries for the labels of points in the sequence sufficient to obtain the
stated bounds on the error rates.
| Steve Hanneke, Varun Kanade, Liu Yang | null | 1505.05215 | null | null |
Bounds on the Minimax Rate for Estimating a Prior over a VC Class from
Independent Learning Tasks | cs.LG | We study the optimal rates of convergence for estimating a prior distribution
over a VC class from a sequence of independent data sets respectively labeled
by independent target functions sampled from the prior. We specifically derive
upper and lower bounds on the optimal rates under a smoothness condition on the
correct prior, with the number of samples per data set equal the VC dimension.
These results have implications for the improvements achievable via transfer
learning. We additionally extend this setting to real-valued functions, where we

establish consistency of an estimator for the prior, and discuss an additional
application to a preference elicitation problem in algorithmic economics.
| Liu Yang, Steve Hanneke, Jaime Carbonell | null | 1505.05231 | null | null |
Visual Understanding via Multi-Feature Shared Learning with Global
Consistency | cs.CV cs.LG | Image/video data is usually represented with multiple visual features. The
benefit of fusing multi-source information for establishing attributes has been widely
recognized. Multi-feature visual recognition has recently received much
attention in multimedia applications. This paper studies visual understanding
via a newly proposed l_2-norm based multi-feature shared learning framework,
which can simultaneously learn a global label matrix and multiple
sub-classifiers with the labeled multi-feature data. Additionally, a group
graph manifold regularizer composed of the Laplacian and Hessian graph is
proposed for better preserving the manifold structure of each feature, such
that the label prediction power is much improved through the semi-supervised
learning with global label consistency. For convenience, we call the proposed
approach Global-Label-Consistent Classifier (GLCC). The merits of the proposed
method include: 1) the manifold structure information of each feature is
exploited in learning, resulting in a more faithful classification owing to the
global label consistency; 2) a group graph manifold regularizer based on the
Laplacian and Hessian regularization is constructed; 3) an efficient
alternative optimization method is introduced as a fast solver owing to the
convex sub-problems. Experiments on several benchmark visual datasets for
multimedia understanding, such as the 17-category Oxford Flower dataset, the
challenging 101-category Caltech dataset, the YouTube & Consumer Videos dataset
and the large-scale NUS-WIDE dataset, demonstrate that the proposed approach
compares favorably with the state-of-the-art algorithms. An extensive
experiment on deep convolutional activation features also shows the
effectiveness of the proposed approach. The code is available on
http://www.escience.cn/people/lei/index.html
| Lei Zhang and David Zhang | 10.1109/TMM.2015.2510509 | 1505.05233 | null | null |
Supervised Learning for Dynamical System Learning | stat.ML cs.LG | Recently there has been substantial interest in spectral methods for learning
dynamical systems. These methods are popular since they often offer a good
tradeoff between computational and statistical efficiency. Unfortunately, they
can be difficult to use and extend in practice: e.g., they can make it
difficult to incorporate prior information such as sparsity or structure. To
address this problem, we present a new view of dynamical system learning: we
show how to learn dynamical systems by solving a sequence of ordinary
supervised learning problems, thereby allowing users to incorporate prior
knowledge via standard techniques such as L1 regularization. Many existing
spectral methods are special cases of this new framework, using linear
regression as the supervised learner. We demonstrate the effectiveness of our
framework by showing examples where nonlinear regression or lasso let us learn
better state representations than plain linear regression does; the correctness
of these instances follows directly from our general analysis.
| Ahmed Hefny, Carlton Downey and Geoffrey Gordon | null | 1505.05310 | null | null |
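A toy illustration, under simplifying assumptions, of reducing dynamical system learning to ordinary supervised regression: the next state of a linear system is regressed directly on the current state. The paper's actual reduction operates on predictive states with a two-stage (instrumental) regression, which this sketch omits; all names and constants are illustrative.

import numpy as np
from sklearn.linear_model import Ridge

def fit_transition(traj, model=None):
    # Regress the next state on the current one; swapping Ridge for Lasso
    # injects a sparsity prior purely through the choice of supervised learner.
    model = model or Ridge(alpha=1e-2)
    model.fit(traj[:-1], traj[1:])
    return model

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [-0.1, 0.9]])
x, traj = np.zeros(2), []
for _ in range(500):
    x = A @ x + 0.05 * rng.standard_normal(2)   # simulate x_{t+1} = A x_t + noise
    traj.append(x)
A_hat = fit_transition(np.array(traj)).coef_    # estimated transition matrix (approximately A)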
A Max-Sum algorithm for training discrete neural networks | cond-mat.dis-nn cs.LG cs.NE | We present an efficient learning algorithm for the problem of training neural
networks with discrete synapses, a well-known hard (NP-complete) discrete
optimization problem. The algorithm is a variant of the so-called Max-Sum (MS)
algorithm. In particular, we show how, for bounded integer weights with $q$
distinct states and independent concave a priori distribution (e.g. $l_{1}$
regularization), the algorithm's time complexity can be made to scale as
$O\left(N\log N\right)$ per node update, thus putting it on par with
alternative schemes, such as Belief Propagation (BP), without resorting to
approximations. Two special cases are of particular interest: binary synapses
$W\in\{-1,1\}$ and ternary synapses $W\in\{-1,0,1\}$ with $l_{0}$
regularization. The algorithm we present performs as well as BP on binary
perceptron learning problems, and may be better suited to address the problem
on fully-connected two-layer networks, since inherent symmetries in two layer
networks are naturally broken using the MS approach.
| Carlo Baldassi and Alfredo Braunstein | 10.1088/1742-5468/2015/08/P08008 | 1505.05401 | null | null |
Weight Uncertainty in Neural Networks | stat.ML cs.LG | We introduce a new, efficient, principled and backpropagation-compatible
algorithm for learning a probability distribution on the weights of a neural
network, called Bayes by Backprop. It regularises the weights by minimising a
compression cost, known as the variational free energy or the expected lower
bound on the marginal likelihood. We show that this principled kind of
regularisation yields comparable performance to dropout on MNIST
classification. We then demonstrate how the learnt uncertainty in the weights
can be used to improve generalisation in non-linear regression problems, and
how this weight uncertainty can be used to drive the exploration-exploitation
trade-off in reinforcement learning.
| Charles Blundell and Julien Cornebise and Koray Kavukcuoglu and Daan
Wierstra | null | 1505.05424 | null | null |
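A minimal NumPy sketch of the core mechanism the abstract describes -- a factorised Gaussian posterior over weights trained by sampling weights with the reparameterisation trick and descending a free energy consisting of a closed-form KL term plus the data term -- shown for a toy Bayesian linear regression. The problem, step sizes and priors below are assumptions for illustration, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
X = np.hstack([np.linspace(-1, 1, 100)[:, None], np.ones((100, 1))])   # slope + bias features
y = 2.0 * X[:, 0] - 0.5 + 0.1 * rng.standard_normal(100)

D = X.shape[1]
mu, rho = np.zeros(D), -3.0 * np.ones(D)        # q(w) = N(mu, softplus(rho)^2), factorised
lr, s_prior, s_noise = 1e-4, 1.0, 0.1

for _ in range(2000):
    sigma = np.log1p(np.exp(rho))
    eps = rng.standard_normal(D)
    w = mu + sigma * eps                        # reparameterised weight sample
    g_w = X.T @ (X @ w - y) / s_noise**2        # gradient of -log p(D | w)
    g_mu = g_w + mu / s_prior**2                # plus closed-form KL gradient w.r.t. mu
    g_sig = g_w * eps + sigma / s_prior**2 - 1.0 / sigma   # KL gradient w.r.t. sigma
    mu -= lr * g_mu
    rho -= lr * g_sig / (1.0 + np.exp(-rho))    # chain rule through the softplus

w_mean, w_std = mu, np.log1p(np.exp(rho))       # approximate posterior mean and uncertainty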
Fuzzy Least Squares Twin Support Vector Machines | cs.AI cs.LG | Least Squares Twin Support Vector Machine (LST-SVM) has been shown to be an
efficient and fast algorithm for binary classification. It combines the
operating principles of Least Squares SVM (LS-SVM) and Twin SVM (T-SVM); it
constructs two non-parallel hyperplanes (as in T-SVM) by solving two systems of
linear equations (as in LS-SVM). Despite its efficiency, LST-SVM is still
unable to cope with two features of real-world problems. First, in many
real-world applications, labels of samples are not deterministic; they come
naturally with their associated membership degrees. Second, samples in
real-world applications may not be equally important and their importance
degrees affect the classification. In this paper, we propose Fuzzy LST-SVM
(FLST-SVM) to deal with these two characteristics of real-world data. Two
models are introduced for FLST-SVM: the first model builds up crisp hyperplanes
using training samples and their corresponding membership degrees. The second
model, on the other hand, constructs fuzzy hyperplanes using training samples
and their membership degrees. Numerical evaluation of the proposed method with
synthetic and real datasets demonstrates a significant improvement in the
classification accuracy of FLST-SVM when compared to well-known existing
versions of SVM.
| Javad Salimi Sartakhti, Homayun Afrabandpey, Nasser Ghadiri | null | 1505.05451 | null | null |
Why Regularized Auto-Encoders learn Sparse Representation? | stat.ML cs.CV cs.LG | While the authors of Batch Normalization (BN) identify and address an
important problem involved in training deep networks-- \textit{Internal
Covariate Shift}-- the current solution has certain drawbacks. For instance, BN
depends on batch statistics for layerwise input normalization during training
which makes the estimates of mean and standard deviation of input
(distribution) to hidden layers inaccurate due to shifting parameter values
(especially during initial training epochs). Another fundamental problem with
BN is that it cannot be used with batch-size $ 1 $ during training. We address
these drawbacks of BN by proposing a non-adaptive normalization technique for
removing covariate shift, that we call \textit{Normalization Propagation}. Our
approach does not depend on batch statistics, but rather uses a
data-independent parametric estimate of mean and standard-deviation in every
layer thus being computationally faster compared with BN. We exploit the
observation that the pre-activation before Rectified Linear Units follow
Gaussian distribution in deep networks, and that once the first and second
order statistics of any given dataset are normalized, we can forward propagate
this normalization without the need for recalculating the approximate
statistics for hidden layers.
| Devansh Arpit, Yingbo Zhou, Hung Ngo, Venu Govindaraju | null | 1505.05561 | null | null |
The development of an information criterion for Change-Point Analysis | physics.data-an cs.LG stat.ML | Change-point analysis is a flexible and computationally tractable tool for
the analysis of times series data from systems that transition between discrete
states and whose observables are corrupted by noise. The change-point algorithm
is used to identify the time indices (change points) at which the system
transitions between these discrete states. We present a unified
information-based approach to testing for the existence of change points. This
new approach reconciles two previously disparate approaches to Change-Point
Analysis (frequentist and information-based) for testing transitions between
states. The resulting method is statistically principled, parameter- and
prior-free, and applicable to a wide range of change-point problems.
| Paul A. Wiggins, Colin H. LaMont | null | 1505.05572 | null | null |
Are You Talking to a Machine? Dataset and Methods for Multilingual Image
Question Answering | cs.CV cs.CL cs.LG | In this paper, we present the mQA model, which is able to answer questions
about the content of an image. The answer can be a sentence, a phrase or a
single word. Our model contains four components: a Long Short-Term Memory
(LSTM) to extract the question representation, a Convolutional Neural Network
(CNN) to extract the visual representation, an LSTM for storing the linguistic
context in an answer, and a fusing component to combine the information from
the first three components and generate the answer. We construct a Freestyle
Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate
our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese
question-answer pairs and their English translations. The quality of the
generated answers of our mQA model on this dataset is evaluated by human judges
through a Turing Test. Specifically, we mix the answers provided by humans and
our model. The human judges need to distinguish our model from the human. They
will also provide a score (i.e. 0, 1, 2, the larger the better) indicating the
quality of the answer. We propose strategies to monitor the quality of this
evaluation process. The experiments show that in 64.7% of cases, the human
judges cannot distinguish our model from humans. The average score is 1.454
(1.918 for human). The details of this work, including the FM-IQA dataset, can
be found on the project page: http://idl.baidu.com/FM-IQA.html
| Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu | null | 1505.05612 | null | null |
Regulating Greed Over Time in Multi-Armed Bandits | stat.ML cs.LG | In retail, there are predictable yet dramatic time-dependent patterns in
customer behavior, such as periodic changes in the number of visitors, or
increases in customers just before major holidays. The current paradigm of
multi-armed bandit analysis does not take these known patterns into account.
This means that for applications in retail, where prices are fixed for periods
of time, current bandit algorithms will not suffice. This work provides a
remedy that takes the time-dependent patterns into account, and we show how
this remedy is implemented for the UCB, $\varepsilon$-greedy, and UCB-L
algorithms, and also through a new policy called the variable arm pool
algorithm. In the corrected methods, exploitation (greed) is regulated over
time, so that more exploitation occurs during higher reward periods, and more
exploration occurs in periods of low reward. In order to understand why regret
is reduced with the corrected methods, we present a set of bounds that provide
insight into why we would want to exploit during periods of high reward, and
discuss the impact on regret. Our proposed methods perform well in experiments,
and were inspired by a high-scoring entry in the Exploration and Exploitation 3
contest using data from Yahoo! Front Page. That entry heavily used
time-series methods to regulate greed over time, which was substantially more
effective than other contextual bandit methods.
| Stefano Trac\`a, Cynthia Rudin, and Weiyu Yan | null | 1505.05629 | null | null |
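A rough sketch of the idea of regulating greed over time with a simple seasonal epsilon-greedy policy: exploration shrinks during known high-reward periods. The corrected UCB and epsilon-greedy policies and the variable arm pool algorithm of the paper differ in detail; every name and constant below is an illustrative assumption.

import numpy as np

def seasonal_eps_greedy(mean_rewards, season, horizon=5000, eps_hi=0.2, eps_lo=0.02, rng=None):
    # Epsilon-greedy in which exploration drops when `season(t)` signals a
    # predictable high-reward period, so greed is concentrated where it pays.
    rng = rng or np.random.default_rng(0)
    k = len(mean_rewards)
    counts, values = np.zeros(k), np.zeros(k)
    total = 0.0
    for t in range(horizon):
        boost = season(t)
        eps = eps_lo if boost > 1.0 else eps_hi      # exploit more when rewards are high
        arm = rng.integers(k) if rng.random() < eps else int(np.argmax(values))
        r = boost * (mean_rewards[arm] + 0.1 * rng.standard_normal())
        counts[arm] += 1
        values[arm] += (r / boost - values[arm]) / counts[arm]   # de-seasonalised estimate
        total += r
    return total

season = lambda t: 2.0 if (t % 500) > 400 else 1.0   # periodic high-reward window
reward = seasonal_eps_greedy(np.array([0.3, 0.5, 0.7]), season)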
Inferring Graphs from Cascades: A Sparse Recovery Framework | cs.SI cs.LG stat.ML | In the Network Inference problem, one seeks to recover the edges of an
unknown graph from the observations of cascades propagating over this graph. In
this paper, we approach this problem from the sparse recovery perspective. We
introduce a general model of cascades, including the voter model and the
independent cascade model, for which we provide the first algorithm which
recovers the graph's edges with high probability and $O(s\log m)$ measurements
where $s$ is the maximum degree of the graph and $m$ is the number of nodes.
Furthermore, we show that our algorithm also recovers the edge weights (the
parameters of the diffusion process) and is robust in the context of
approximate sparsity. Finally we prove an almost matching lower bound of
$\Omega(s\log\frac{m}{s})$ and validate our approach empirically on synthetic
graphs.
| Jean Pouget-Abadie, Thibaut Horel | null | 1505.05663 | null | null |
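An illustrative toy version of graph inference from cascades cast as sparse recovery: for a single node, an L1-regularised logistic regression of its new activations on the previous activation pattern recovers candidate parents. This is a simplified stand-in for the paper's estimator and guarantees; the cascade model and all constants are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
m, s, p_edge = 30, 3, 0.4                              # nodes, in-degree, edge probability
par = np.array([rng.choice(m, s, replace=False) for _ in range(m)])   # true parent sets

def cascade(steps=5):
    active = rng.random(m) < 0.1                       # random seed set
    out = []
    for _ in range(steps):
        # a node activates if any currently active parent fires (independent cascade step)
        p_act = 1 - np.prod(np.where(active[par], 1 - p_edge, 1.0), axis=1)
        new = (rng.random(m) < p_act) & ~active
        out.append((active.copy(), new.copy()))
        active = active | new
    return out

rows = [r for _ in range(500) for r in cascade()]
j = 0
mask = np.array([not a[j] for a, _ in rows])           # keep steps where node j is still inactive
X = np.array([a for a, _ in rows], float)[mask]
y = np.array([n[j] for _, n in rows], int)[mask]
coef = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y).coef_[0]
recovered = set(np.argsort(-coef)[:s])                 # compare with set(par[j])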
A Re-ranking Model for Dependency Parser with Recursive Convolutional
Neural Network | cs.CL cs.LG cs.NE | In this work, we address the problem to model all the nodes (words or
phrases) in a dependency tree with the dense representations. We propose a
recursive convolutional neural network (RCNN) architecture to capture syntactic
and compositional-semantic representations of phrases and words in a dependency
tree. Unlike the original recursive neural network, we introduce the
convolution and pooling layers, which can model a variety of compositions by
the feature maps and choose the most informative compositions by the pooling
layers. Based on RCNN, we use a discriminative model to re-rank a $k$-best list
of candidate dependency parsing trees. The experiments show that RCNN is very
effective at improving state-of-the-art dependency parsing on both English
and Chinese datasets.
| Chenxi Zhu, Xipeng Qiu, Xinchi Chen, Xuanjing Huang | null | 1505.05667 | null | null |
On the relation between accuracy and fairness in binary classification | cs.LG cs.AI | Our study revisits the problem of accuracy-fairness tradeoff in binary
classification. We argue that comparison of non-discriminatory classifiers
needs to account for different rates of positive predictions, otherwise
conclusions about performance may be misleading, because accuracy and
discrimination of naive baselines on the same dataset vary with different rates
of positive predictions. We provide methodological recommendations for sound
comparison of non-discriminatory classifiers, and present a brief theoretical
and empirical analysis of tradeoffs between accuracy and non-discrimination.
| Indre Zliobaite | null | 1505.05723 | null | null |
Variational Inference with Normalizing Flows | stat.ML cs.AI cs.LG stat.CO stat.ME | The choice of approximate posterior distribution is one of the core problems
in variational inference. Most applications of variational inference employ
simple families of posterior approximations in order to allow for efficient
inference, focusing on mean-field or other simple structured approximations.
This restriction has a significant impact on the quality of inferences made
using variational methods. We introduce a new approach for specifying flexible,
arbitrarily complex and scalable approximate posterior distributions. Our
approximations are distributions constructed through a normalizing flow,
whereby a simple initial density is transformed into a more complex one by
applying a sequence of invertible transformations until a desired level of
complexity is attained. We use this view of normalizing flows to develop
categories of finite and infinitesimal flows and provide a unified view of
approaches for constructing rich posterior approximations. We demonstrate that
the theoretical advantages of having posteriors that better match the true
posterior, combined with the scalability of amortized variational approaches,
provides a clear improvement in performance and applicability of variational
inference.
| Danilo Jimenez Rezende and Shakir Mohamed | null | 1505.05770 | null | null |
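A small NumPy sketch of one planar normalizing-flow layer, an example of the invertible transformations the abstract describes: base samples are mapped through f(z) = z + u * tanh(w.z + b) while the log-determinant needed to update the density is tracked. The parameters are fixed here for illustration rather than learned, and the choice of planar flow is one instance of the general construction.

import numpy as np

def planar_flow(z, u, w, b):
    # One planar flow layer and its log |det Jacobian|; stacking several such
    # invertible layers turns a simple base density into a richer one.
    a = z @ w + b
    f = z + np.outer(np.tanh(a), u)
    psi = (1.0 - np.tanh(a)**2)[:, None] * w
    logdet = np.log(np.abs(1.0 + psi @ u))
    return f, logdet

rng = np.random.default_rng(0)
z0 = rng.standard_normal((1000, 2))                       # samples from the base N(0, I)
log_q = -0.5 * (z0**2).sum(1) - np.log(2.0 * np.pi)       # base log-density
zk, ld = planar_flow(z0, u=np.array([2.0, 0.0]), w=np.array([1.0, 1.0]), b=0.0)
log_q = log_q - ld                                        # log-density after the flow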
Safe Policy Search for Lifelong Reinforcement Learning with Sublinear
Regret | cs.LG | Lifelong reinforcement learning provides a promising framework for developing
versatile agents that can accumulate knowledge over a lifetime of experience
and rapidly learn new tasks by building upon prior knowledge. However, current
lifelong learning methods exhibit non-vanishing regret as the amount of
experience increases and include limitations that can lead to suboptimal or
unsafe control policies. To address these issues, we develop a lifelong policy
gradient learner that operates in an adversarial setting to learn multiple
tasks online while enforcing safety constraints on the learned policies. We
demonstrate, for the first time, sublinear regret for lifelong policy search,
and validate our algorithm on several benchmark dynamical systems and an
application to quadrotor control.
| Haitham Bou Ammar, Rasul Tutunov, Eric Eaton | null | 1505.05798 | null | null |
Complexity Theoretic Limitations on Learning Halfspaces | cs.CC cs.LG | We study the problem of agnostically learning halfspaces which is defined by
a fixed but unknown distribution $\mathcal{D}$ on $\mathbb{Q}^n\times \{\pm
1\}$. We define $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$ as the least error
of a halfspace classifier for $\mathcal{D}$. A learner who can access
$\mathcal{D}$ has to return a hypothesis whose error is small compared to
$\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D})$.
Using the recently developed method of the author, Linial and Shalev-Shwartz
we prove hardness of learning results under a natural assumption on the
complexity of refuting random $K$-$\mathrm{XOR}$ formulas. We show that no
efficient learning algorithm has non-trivial worst-case performance even under
the guarantees that $\mathrm{Err}_{\mathrm{HALF}}(\mathcal{D}) \le \eta$ for
arbitrarily small constant $\eta>0$, and that $\mathcal{D}$ is supported in
$\{\pm 1\}^n\times \{\pm 1\}$. Namely, even under these favorable conditions
its error must be $\ge \frac{1}{2}-\frac{1}{n^c}$ for every $c>0$. In
particular, no efficient algorithm can achieve a constant approximation ratio.
Under a stronger version of the assumption (where $K$ can be poly-logarithmic
in $n$), we can take $\eta = 2^{-\log^{1-\nu}(n)}$ for arbitrarily small
$\nu>0$. Interestingly, this is even stronger than the best known lower bounds
(Arora et al. 1993, Feldman et al. 2006, Guruswami and Raghavendra 2006) for
the case that the learner is restricted to return a halfspace classifier (i.e.
proper learning).
| Amit Daniely | null | 1505.05800 | null | null |
Learning Program Embeddings to Propagate Feedback on Student Code | cs.LG cs.NE cs.SE | Providing feedback, both assessing final work and giving hints to stuck
students, is difficult for open-ended assignments in massive online classes
which can range from thousands to millions of students. We introduce a neural
network method to encode programs as a linear mapping from an embedded
precondition space to an embedded postcondition space and propose an algorithm
for feedback at scale using these linear maps as features. We apply our
algorithm to assessments from the Code.org Hour of Code and Stanford
University's CS1 course, where we propagate human comments on student
assignments to orders of magnitude more submissions.
| Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran
Sahami, Leonidas Guibas | null | 1505.05969 | null | null |
Instant Learning: Parallel Deep Neural Networks and Convolutional
Bootstrapping | cs.LG | Although deep neural networks (DNN) are able to scale with direct advances in
computational power (e.g., memory and processing speed), they are not well
suited to exploit the recent trends for parallel architectures. In particular,
gradient descent is a sequential process and the resulting serial dependencies
mean that DNN training cannot be parallelized effectively. Here, we show that a
DNN may be replicated over a massive parallel architecture and used to provide
a cumulative sampling of local solution space which results in rapid and robust
learning. We introduce a complementary convolutional bootstrapping approach
that enhances performance of the parallel architecture further. Our
parallelized convolutional bootstrapping DNN outperforms an identical
fully-trained traditional DNN after only a single iteration of training.
| Andrew J.R. Simpson | null | 1505.05972 | null | null |
Machine Learning for Indoor Localization Using Mobile Phone-Based
Sensors | cs.LG cs.NI | In this paper we investigate the problem of localizing a mobile device based
on readings from its embedded sensors utilizing machine learning methodologies.
We consider a real-world environment, collect a large dataset of 3110
datapoints, and examine the performance of a substantial number of machine
learning algorithms in localizing a mobile device. We have found algorithms
that give a mean error as accurate as 0.76 meters, outperforming other indoor
localization systems reported in the literature. We also propose a hybrid
instance-based approach that results in a speed increase by a factor of ten
with no loss of accuracy in a live deployment over standard instance-based
methods, allowing for fast and accurate localization. Further, we determine how
smaller datasets collected with less density affect accuracy of localization,
important for use in real-world environments. Finally, we demonstrate that
these approaches are appropriate for real-world deployment by evaluating their
performance in an online, in-motion experiment.
| David Mascharka and Eric Manley | 10.1109/CCNC.2016.7444919 | 1505.06125 | null | null |
Learning Dynamic Feature Selection for Fast Sequential Prediction | cs.CL cs.LG | We present paired learning and inference algorithms for significantly
reducing computation and increasing speed of the vector dot products in the
classifiers that are at the heart of many NLP components. This is accomplished
by partitioning the features into a sequence of templates which are ordered
such that high confidence can often be reached using only a small fraction of
all features. Parameter estimation is arranged to maximize accuracy and early
confidence in this sequence. Our approach is simpler and better suited to NLP
than other related cascade methods. We present experiments in left-to-right
part-of-speech tagging, named entity recognition, and transition-based
dependency parsing. On the typical benchmarking datasets we can preserve POS
tagging accuracy above 97% and parsing LAS above 88.5% both with over a
five-fold reduction in run-time, and NER F1 above 88 with more than 2x increase
in speed.
| Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum | null | 1505.06169 | null | null |
Greedy Biomarker Discovery in the Genome with Applications to
Antimicrobial Resistance | q-bio.GN cs.LG stat.ML | The Set Covering Machine (SCM) is a greedy learning algorithm that produces
sparse classifiers. We extend the SCM for datasets that contain a huge number
of features. The whole genetic material of living organisms is an example of
such a case, where the number of feature exceeds 10^7. Three human pathogens
were used to evaluate the performance of the SCM at predicting antimicrobial
resistance. Our results show that the SCM compares favorably in terms of
sparsity and accuracy against L1 and L2 regularized Support Vector Machines and
CART decision trees. Moreover, the SCM was the only algorithm that could
consider the full feature space. For all other algorithms, the latter had to be
filtered as a preprocessing step.
| Alexandre Drouin, S\'ebastien Gigu\`ere, Maxime D\'eraspe,
Fran\c{c}ois Laviolette, Mario Marchand, Jacques Corbeil | null | 1505.06249 | null | null |
The Benefit of Multitask Representation Learning | stat.ML cs.LG | We discuss a general method to learn data representations from multiple
tasks. We provide a justification for this method in both settings of multitask
learning and learning-to-learn. The method is illustrated in detail in the
special case of linear feature learning. Conditions on the theoretical
advantage offered by multitask representation learning over independent task
learning are established. In particular, focusing on the important example of
half-space learning, we derive the regime in which multitask representation
learning is beneficial over independent task learning, as a function of the
sample size, the number of tasks and the intrinsic data dimensionality. Other
potential applications of our results include multitask feature learning in
reproducing kernel Hilbert spaces and multilayer, deep networks.
| Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes | null | 1505.06279 | null | null |
Low-Rank Matrix Recovery from Row-and-Column Affine Measurements | cs.LG cs.IT math.IT math.ST stat.CO stat.ML stat.TH | We propose and study a row-and-column affine measurement scheme for low-rank
matrix recovery. Each measurement is a linear combination of elements in one
row or one column of a matrix $X$. This setting arises naturally in
applications from different domains. However, current algorithms developed for
standard matrix recovery problems do not perform well in our case, hence the
need for developing new algorithms and theory for our problem. We propose a
simple algorithm for the problem based on Singular Value Decomposition ($SVD$)
and least-squares ($LS$), which we term \alg. We prove that (a simplified
version of) our algorithm can recover $X$ exactly with the minimum possible
number of measurements in the noiseless case. In the general noisy case, we
prove performance guarantees on the reconstruction accuracy under the Frobenius
norm. In simulations, our row-and-column design and \alg algorithm show
improved speed, and comparable and in some cases better accuracy compared to
standard measurements designs and algorithms. Our theoretical and experimental
results suggest that the proposed row-and-column affine measurements scheme,
together with our recovery algorithm, may provide a powerful framework for
affine matrix reconstruction.
| Avishai Wagner and Or Zuk | null | 1505.06292 | null | null |
Monotonic Calibrated Interpolated Look-Up Tables | cs.LG | Real-world machine learning applications may require functions that are
fast-to-evaluate and interpretable. In particular, guaranteed monotonicity of
the learned function can be critical to user trust. We propose meeting these
goals for low-dimensional machine learning problems by learning flexible,
monotonic functions using calibrated interpolated look-up tables. We extend the
structural risk minimization framework of lattice regression to train monotonic
look-up tables by solving a convex problem with appropriate linear inequality
constraints. In addition, we propose jointly learning interpretable
calibrations of each feature to normalize continuous features and handle
categorical or missing data, at the cost of making the objective non-convex. We
address large-scale learning through parallelization, mini-batching, and
propose random sampling of additive regularizer terms. Case studies with
real-world problems with five to sixteen features and thousands to millions of
training samples demonstrate the proposed monotonic functions can achieve
state-of-the-art accuracy on practical problems while providing greater
transparency to users.
| Maya Gupta, Andrew Cotter, Jan Pfeifer, Konstantin Voevodski, Kevin
Canini, Alexander Mangylov, Wojtek Moczydlowski and Alex van Esbroeck | null | 1505.06378 | null | null |
Domain Adaptation Extreme Learning Machines for Drift Compensation in
E-nose Systems | cs.LG | This paper addresses an important issue, known as sensor drift, which exhibits
nonlinear dynamic behavior in electronic nose (E-nose) systems, from the viewpoint of
machine learning. Traditional methods for drift compensation are laborious and
costly due to the frequent acquisition and labeling process for gases samples
recalibration. Extreme learning machines (ELMs) have been confirmed to be
efficient and effective learning techniques for pattern recognition and
regression. However, ELMs primarily focus on the supervised, semi-supervised
and unsupervised learning problems in single domain (i.e. source domain). To
our best knowledge, ELM with cross-domain learning capability has never been
studied. This paper proposes a unified framework, referred to as Domain
Adaptation Extreme Learning Machine (DAELM), which learns a robust classifier
by leveraging a limited number of labeled data from target domain for drift
compensation as well as gases recognition in E-nose systems, without loss of
the computational efficiency and learning ability of traditional ELM. In the
unified framework, two algorithms called DAELM-S and DAELM-T are proposed for
the purpose of this paper, respectively. To highlight the differences
among ELM, DAELM-S and DAELM-T, two remarks are provided. Experiments on the
popular sensor drift data with multiple batches collected by E-nose system
clearly demonstrate that the proposed DAELM significantly outperforms existing
drift compensation methods without cumbersome measures, and also brings new
perspectives for ELM.
| Lei Zhang and David Zhang | 10.1109/TIM.2014.2367775 | 1505.06405 | null | null |
Deep Speaker Vectors for Semi Text-independent Speaker Verification | cs.CL cs.LG cs.NE | Recent research shows that deep neural networks (DNNs) can be used to extract
deep speaker vectors (d-vectors) that preserve speaker characteristics and can
be used in speaker verification. This new method has been tested on
text-dependent speaker verification tasks, and improvement was reported when
combined with the conventional i-vector method.
This paper extends the d-vector approach to semi text-independent speaker
verification tasks, i.e., the text of the speech is in a limited set of short
phrases. We explore various settings of the DNN structure used for d-vector
extraction, and present a phone-dependent training which employs the posterior
features obtained from an ASR system. The experimental results show that it is
possible to apply d-vectors on semi text-independent speaker recognition, and
the phone-dependent training improves system performance.
| Lantian Li and Dong Wang and Zhiyong Zhang and Thomas Fang Zheng | null | 1505.06427 | null | null |
Detecting bird sound in unknown acoustic background using crowdsourced
training data | stat.ML cs.LG cs.SD | Biodiversity monitoring using audio recordings is achievable at a truly
global scale via large-scale deployment of inexpensive, unattended recording
stations or by large-scale crowdsourcing using recording and species
recognition on mobile devices. The ability, however, to reliably identify
vocalising animal species is limited by the fact that acoustic signatures of
interest in such recordings are typically embedded in a diverse and complex
acoustic background. To avoid the problems associated with modelling such
backgrounds, we build generative models of bird sounds and use the concept of
novelty detection to screen recordings to detect sections of data which are
likely bird vocalisations. We present detection results against various
acoustic environments and different signal-to-noise ratios. We discuss the
issues related to selecting the cost function and setting detection thresholds
in such algorithms. Our methods are designed to be scalable and automatically
applicable to arbitrary selections of species depending on the specific
geographic region and time period of deployment.
| Timos Papadopoulos, Stephen Roberts and Kathy Willis | null | 1505.06443 | null | null |
Tight Continuous Relaxation of the Balanced $k$-Cut Problem | stat.ML cs.LG | Spectral Clustering as a relaxation of the normalized/ratio cut has become
one of the standard graph-based clustering methods. Existing methods for the
computation of multiple clusters, corresponding to a balanced $k$-cut of the
graph, are either based on greedy techniques or heuristics which have weak
connection to the original motivation of minimizing the normalized cut. In this
paper we propose a new tight continuous relaxation for any balanced $k$-cut
problem and show that a related recently proposed relaxation is in most cases
loose leading to poor performance in practice. For the optimization of our
tight continuous relaxation we propose a new algorithm for the difficult
sum-of-ratios minimization problem which achieves monotonic descent. Extensive
comparisons show that our method outperforms all existing approaches for ratio
cut and other balanced $k$-cut criteria.
| Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta and Matthias Hein | null | 1505.06478 | null | null |
Constrained 1-Spectral Clustering | stat.ML cs.LG | An important form of prior information in clustering comes in form of
cannot-link and must-link constraints. We present a generalization of the
popular spectral clustering technique which integrates such constraints.
Motivated by the recently proposed $1$-spectral clustering for the
unconstrained problem, our method is based on a tight relaxation of the
constrained normalized cut into a continuous optimization problem. In contrast
to all other methods that have been suggested for constrained spectral
clustering, ours is guaranteed to satisfy all constraints. Moreover, our
soft formulation allows optimizing a trade-off between the normalized cut and the
number of violated constraints. An efficient implementation is provided which
scales to large datasets. We outperform consistently all other proposed methods
in the experiments.
| Syama Sundar Rangapuram and Matthias Hein | null | 1505.06485 | null | null |
Affine and Regional Dynamic Time Warping | cs.CV cs.CE cs.LG | Pointwise matches between two time series are of great importance in time
series analysis, and dynamic time warping (DTW) is known to provide generally
reasonable matches. There are situations where time series alignment should be
invariant to scaling and offset in amplitude or where local regions of the
considered time series should be strongly reflected in pointwise matches. Two
different variants of DTW, affine DTW (ADTW) and regional DTW (RDTW), are
proposed to handle scaling and offset in amplitude and provide regional
emphasis respectively. Furthermore, ADTW and RDTW can be combined in two
different ways to generate alignments that incorporate advantages from both
methods, where the affine model can be applied either globally to the entire
time series or locally to each region. The proposed alignment methods
outperform DTW on specific simulated datasets, and one-nearest-neighbor
classifiers using their associated difference measures are competitive with the
difference measures associated with state-of-the-art alignment methods on real
datasets.
| Tsu-Wei Chen, Meena Abdelmaseeh, Daniel Stashuk | null | 1505.06531 | null | null |
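For reference, a plain dynamic time warping implementation; the affine (ADTW) and regional (RDTW) variants described above change the local matching cost while keeping the same recursion. The example series below are arbitrary.

import numpy as np

def dtw(x, y):
    # Classical dynamic time warping distance between 1-D series x and y.
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.sin(np.linspace(0, 3 * np.pi, 60))
b = np.sin(np.linspace(0, 3 * np.pi, 80) + 0.3)
print(dtw(a, b))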
Clustering via Content-Augmented Stochastic Blockmodels | stat.ML cs.LG cs.SI | Much of the data being created on the web contains interactions between users
and items. Stochastic blockmodels, and other methods for community detection
and clustering of bipartite graphs, can infer latent user communities and
latent item clusters from this interaction data. These methods, however,
typically ignore the items' contents and the information they provide about
item clusters, despite the tendency of items in the same latent cluster to
share commonalities in content. We introduce content-augmented stochastic
blockmodels (CASB), which use item content together with user-item interaction
data to enhance the user communities and item clusters learned. Comparisons to
several state-of-the-art benchmark methods, on datasets arising from scientists
interacting with scientific articles, show that content-augmented stochastic
blockmodels provide highly accurate clusters with respect to metrics
representative of the underlying community structure.
| J. Massey Cashore, Xiaoting Zhao, Alexander A. Alemi, Yujia Liu, Peter
I. Frazier | null | 1505.06538 | null | null |
Differentially Private Distributed Online Learning | cs.LG | Online learning has long been in the spotlight of the machine learning
community. To handle the massive data of the Big Data era, a single learner
cannot efficiently complete this heavy task alone. Hence, in this paper, we
propose a novel distributed online learning algorithm to solve the problem.
Comparing to typical centralized online learner, the distributed learners
optimize their own learning parameters based on local data sources and timely
communicate with neighbors. However, communication may lead to a privacy
breach. Thus, we use differential privacy to preserve the privacy of learners,
and study the influence of guaranteeing differential privacy on the utility of
the distributed online learning algorithm. Furthermore, by using the results
from Kakade and Tewari (2009), we use the regret bounds of online learning to
achieve fast convergence rates for offline learning algorithms in distributed
scenarios, which provides tighter utility performance than the existing
state-of-the-art results. In simulation, we demonstrate that the differentially
private offline learning algorithm has high variance, but we can use mini-batch
to improve the performance. Finally, the simulations show that the analytical
results of our proposed theorems hold and that our private distributed online
learning algorithm is a general framework.
| Chencheng Li and Pan Zhou | null | 1505.06556 | null | null |
Electre Tri-Machine Learning Approach to the Record Linkage Problem | stat.ML cs.LG | In this short paper, the Electre Tri-Machine Learning Method, generally used
to solve ordinal classification problems, is proposed for solving the Record
Linkage problem. Preliminary experimental results show that, using the Electre
Tri method, high accuracy can be achieved and more than 99% of the matches and
nonmatches were correctly identified by the procedure.
| Renato De Leone, Valentina Minnetti | null | 1505.06614 | null | null |
Sketching for Sequential Change-Point Detection | cs.LG stat.ML | We study sequential change-point detection procedures based on linear
sketches of high-dimensional signal vectors using generalized likelihood ratio
(GLR) statistics. The GLR statistics allow for an unknown post-change mean that
represents an anomaly or novelty. We consider both fixed and time-varying
projections, derive theoretical approximations to two fundamental performance
metrics: the average run length (ARL) and the expected detection delay (EDD);
these approximations are shown to be highly accurate by numerical simulations.
We further characterize the relative performance measure of the sketching
procedure compared to that without sketching and show that there can be little
performance loss when the signal strength is sufficiently large and a
sufficient number of sketches is used. Finally, we demonstrate the good performance of
sketching procedures using simulation and real-data examples on solar flare
detection and failure detection in power networks.
| Yang Cao, Andrew Thompson, Meng Wang, Yao Xie | null | 1505.06770 | null | null |
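A simplified sketch of the fixed-projection case described above: observations are compressed by a random linear map and a windowed GLR statistic for an unknown mean shift is computed on the sketches, treating sketch coordinates as roughly unit-variance. The paper's time-varying projections and its ARL/EDD calibration are not reproduced; dimensions and constants below are illustrative assumptions.

import numpy as np

def sketched_glr(X, d_sketch=10, window=50, rng=None):
    # GLR change-point statistic on linear sketches A x_t, scanning candidate
    # change points k within a sliding window (unknown post-change mean).
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    A = rng.standard_normal((d_sketch, p)) / np.sqrt(p)
    Y = X @ A.T                                   # sketched observations
    stats = np.zeros(n)
    for t in range(1, n):
        lo = max(0, t - window)
        best = 0.0
        for k in range(lo, t):
            s = Y[k:t + 1].sum(0)
            best = max(best, s @ s / (2.0 * (t + 1 - k)))
        stats[t] = best
    return stats

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 500))
X[200:] += 0.3                                    # small mean shift in every coordinate
stat = sketched_glr(X)                            # spikes shortly after index 200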
An Empirical Evaluation of Current Convolutional Architectures' Ability
to Manage Nuisance Location and Scale Variability | cs.CV cs.LG cs.NE | We conduct an empirical study to test the ability of Convolutional Neural
Networks (CNNs) to reduce the effects of nuisance transformations of the input
data, such as location, scale and aspect ratio. We isolate factors by adopting
a common convolutional architecture either deployed globally on the image to
compute class posterior distributions, or restricted locally to compute class
conditional distributions given location, scale and aspect ratios of bounding
boxes determined by proposal heuristics. In theory, averaging the latter should
yield inferior performance compared to proper marginalization. Yet empirical
evidence suggests the converse, leading us to conclude that - at the current
level of complexity of convolutional architectures and scale of the data sets
used to train them - CNNs are not very effective at marginalizing nuisance
variability. We also quantify the effects of context on the overall
classification task and its impact on the performance of CNNs, and propose
improved sampling techniques for heuristic proposal schemes that improve
end-to-end performance to state-of-the-art levels. We test our hypothesis on a
classification task using the ImageNet Challenge benchmark and on a
wide-baseline matching task using the Oxford and Fischer's datasets.
| Nikolaos Karianakis, Jingming Dong and Stefano Soatto | null | 1505.06795 | null | null |
Accelerating Very Deep Convolutional Networks for Classification and
Detection | cs.CV cs.LG cs.NE | This paper aims to accelerate the test-time computation of convolutional
neural networks (CNNs), especially very deep CNNs that have substantially
impacted the computer vision community. Unlike previous methods that are
designed for approximating linear filters or linear responses, our method takes
the nonlinear units into account. We develop an effective solution to the
resulting nonlinear optimization problem without the need of stochastic
gradient descent (SGD). More importantly, while previous methods mainly focus
on optimizing one or two layers, our nonlinear method enables an asymmetric
reconstruction that reduces the rapidly accumulated error when multiple (e.g.,
>=10) layers are approximated. For the widely used very deep VGG-16 model, our
method achieves a whole-model speedup of 4x with merely a 0.3% increase of
top-5 error in ImageNet classification. Our 4x accelerated VGG-16 model also
shows a graceful accuracy degradation for object detection when plugged into
the Fast R-CNN detector.
| Xiangyu Zhang, Jianhua Zou, Kaiming He, Jian Sun | null | 1505.06798 | null | null |
Boosting-like Deep Learning For Pedestrian Detection | cs.CV cs.LG cs.NE | This paper proposes boosting-like deep learning (BDL) framework for
pedestrian detection. Due to overtraining on the limited training samples,
overfitting is a major problem of deep learning. We incorporate a boosting-like
technique into deep learning to weigh the training samples, and thus prevent
overtraining in the iterative process. We theoretically give the details of
derivation of our algorithm, and report the experimental results on open data
sets showing that BDL achieves better and more stable performance than the
state of the art. Our approach achieves 15.85% and 3.81% reductions in the
average miss rate compared with ACF and JointDeep on the largest Caltech
benchmark dataset, respectively.
| Lei Wang, Baochang Zhang | null | 1505.06800 | null | null |
MLlib: Machine Learning in Apache Spark | cs.LG cs.DC cs.MS stat.ML | Apache Spark is a popular open-source platform for large-scale data
processing that is well-suited for iterative machine learning tasks. In this
paper we present MLlib, Spark's open-source distributed machine learning
library. MLlib provides efficient functionality for a wide range of learning
settings and includes several underlying statistical, optimization, and linear
algebra primitives. Shipped with Spark, MLlib supports several languages and
provides a high-level API that leverages Spark's rich ecosystem to simplify the
development of end-to-end machine learning pipelines. MLlib has experienced a
rapid growth due to its vibrant open-source community of over 140 contributors,
and includes extensive documentation to support further growth and to let users
quickly get up to speed.
| Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram
Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, Sean Owen,
Doris Xin, Reynold Xin, Michael J. Franklin, Reza Zadeh, Matei Zaharia, Ameet
Talwalkar | null | 1505.06807 | null | null |
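A minimal spark.ml pipeline in Python illustrating the kind of high-level API the abstract refers to; the input file and column names are placeholders, not part of MLlib itself.

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("mllib-example").getOrCreate()
df = spark.read.csv("data.csv", header=True, inferSchema=True)   # hypothetical input

assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="raw")
scaler = StandardScaler(inputCol="raw", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label", maxIter=20)

model = Pipeline(stages=[assembler, scaler, lr]).fit(df)   # end-to-end pipeline fit
predictions = model.transform(df)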
Optimizing Non-decomposable Performance Measures: A Tale of Two Classes | stat.ML cs.LG | Modern classification problems frequently present mild to severe label
imbalance as well as specific requirements on classification characteristics,
and require optimizing performance measures that are non-decomposable over the
dataset, such as F-measure. Such measures have spurred much interest and pose
specific challenges to learning algorithms since their non-additive nature
precludes a direct application of well-studied large scale optimization methods
such as stochastic gradient descent.
In this paper we reveal that for two large families of performance measures
that can be expressed as functions of true positive/negative rates, it is
indeed possible to implement point stochastic updates. The families we consider
are concave and pseudo-linear functions of TPR, TNR which cover several
popularly used performance measures such as F-measure, G-mean and H-mean.
Our core contribution is an adaptive linearization scheme for these families,
using which we develop optimization techniques that enable truly point-based
stochastic updates. For concave performance measures we propose SPADE, a
stochastic primal dual solver; for pseudo-linear measures we propose STAMP, a
stochastic alternate maximization procedure. Both methods have crisp
convergence guarantees, demonstrate significant speedups over existing methods
- often by an order of magnitude or more, and give similar or more accurate
predictions on test data.
| Harikrishna Narasimhan and Purushottam Kar and Prateek Jain | null | 1505.06812 | null | null |
Surrogate Functions for Maximizing Precision at the Top | stat.ML cs.LG | The problem of maximizing precision at the top of a ranked list, often dubbed
Precision@k (prec@k), finds relevance in myriad learning applications such as
ranking, multi-label classification, and learning with severe label imbalance.
However, despite its popularity, there exist significant gaps in our
understanding of this problem and its associated performance measure.
The most notable of these is the lack of a convex upper bounding surrogate
for prec@k. We also lack scalable perceptron and stochastic gradient descent
algorithms for optimizing this performance measure. In this paper we make key
contributions in these directions. At the heart of our results is a family of
truly upper bounding surrogates for prec@k. These surrogates are motivated in a
principled manner and enjoy attractive properties such as consistency to prec@k
under various natural margin/noise conditions.
These surrogates are then used to design a class of novel perceptron
algorithms for optimizing prec@k with provable mistake bounds. We also devise
scalable stochastic gradient descent style methods for this problem with
provable convergence bounds. Our proofs rely on novel uniform convergence
bounds which require an in-depth analysis of the structural properties of
prec@k and its surrogates. We conclude with experimental results comparing our
algorithms with state-of-the-art cutting plane and stochastic gradient
algorithms for maximizing prec@k.
| Purushottam Kar and Harikrishna Narasimhan and Prateek Jain | null | 1505.06813 | null | null |
Discrete Independent Component Analysis (DICA) with Belief Propagation | cs.CV cs.LG stat.ML | We apply belief propagation to a Bayesian bipartite graph composed of
discrete independent hidden variables and discrete visible variables. The
network is the Discrete counterpart of Independent Component Analysis (DICA)
and it is manipulated in a factor graph form for inference and learning. A full
set of simulations is reported for character images from the MNIST dataset. The
results show that the factorial code implemented by the sources contributes to
build a good generative model for the data that can be used in various
inference modes.
| Francesco A. N. Palmieri and Amedeo Buonanno | null | 1505.06814 | null | null |
Time series averaging from a probabilistic interpretation of
time-elastic kernel | cs.LG cs.DS | In the light of regularized dynamic time warping kernels, this paper
reconsiders the concept of time elastic centroid (TEC) for a set of time series.
From this perspective, we show first how TEC can easily be addressed as a
preimage problem. Unfortunately, this preimage problem is ill-posed, may suffer
from over-fitting, especially for long time series, and obtaining even a sub-optimal
solution involves heavy computational costs. We then derive two new algorithms
based on a probabilistic interpretation of kernel alignment matrices that
is expressed in terms of probability distributions over sets of alignment paths.
The first algorithm is an iterative agglomerative heuristics inspired from the
state of the art DTW barycenter averaging (DBA) algorithm proposed specifically
for the Dynamic Time Warping measure. The second proposed algorithm achieves a
classical averaging of the aligned samples but also implements an averaging of
the time of occurrences of the aligned samples. It exploits a straightforward
progressive agglomerative heuristics. An experimentation that compares for 45
time series datasets classification error rates obtained by first near
neighbors classifiers exploiting a single medoid or centroid estimate to
represent each categories show that: i) centroids based approaches
significantly outperform medoids based approaches, ii) on the considered
experience, the two proposed algorithms outperform the state of the art DBA
algorithm, and iii) the second proposed algorithm that implements an averaging
jointly in the sample space and along the time axes emerges as the most
significantly robust time elastic averaging heuristic with an interesting noise
reduction capability. Index Terms-Time series averaging Time elastic kernel
Dynamic Time Warping Time series clustering and classification.
| Pierre-Fran\c{c}ois Marteau (IRISA) | 10.2478/amcs-2019-0028 | 1505.06897 | null | null |
Using Dimension Reduction to Improve the Classification of
High-dimensional Data | cs.LG cs.CV | In this work we show that the classification performance of high-dimensional
structural MRI data with only a small set of training examples is improved by
the usage of dimension reduction methods. We assessed two different dimension
reduction variants: feature selection by ANOVA F-test and feature
transformation by PCA. On the reduced datasets, we applied common learning
algorithms using 5-fold cross-validation. Training, tuning of the
hyperparameters, as well as the performance evaluation of the classifiers was
conducted using two different performance measures: Accuracy, and Receiver
Operating Characteristic curve (AUC). Our hypothesis is supported by
experimental results.
| Andreas Gr\"unauer and Markus Vincze | null | 1505.06907 | null | null |
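A minimal scikit-learn sketch of the two reduction variants described above (ANOVA F-test feature selection and PCA, each followed by a classifier under 5-fold cross-validation); the synthetic data, classifier choice and dimensionalities are illustrative assumptions, not the authors' settings.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Few samples, many features: the regime where dimension reduction helps.
X, y = make_classification(n_samples=100, n_features=5000, n_informative=50, random_state=0)

anova_pipe = make_pipeline(SelectKBest(f_classif, k=100), SVC(kernel="linear"))  # feature selection
pca_pipe = make_pipeline(PCA(n_components=50), SVC(kernel="linear"))             # feature transformation

for name, pipe in [("ANOVA F-test", anova_pipe), ("PCA", pca_pipe)]:
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: accuracy={acc:.2f}, AUC={auc:.2f}")
```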
Large-scale Machine Learning for Metagenomics Sequence Classification | q-bio.QM cs.CE cs.LG q-bio.GN stat.ML | Metagenomics characterizes the taxonomic diversity of microbial communities
by sequencing DNA directly from an environmental sample. One of the main
challenges in metagenomics data analysis is the binning step, where each
sequenced read is assigned to a taxonomic clade. Due to the large volume of
metagenomics datasets, binning methods need fast and accurate algorithms that
can operate with reasonable computing requirements. While standard
alignment-based methods provide state-of-the-art performance, compositional
approaches that assign a taxonomic class to a DNA read based on the k-mers it
contains have the potential to provide faster solutions. In this work, we
investigate the potential of modern, large-scale machine learning
implementations for the taxonomic assignment of next-generation sequencing reads
based on their k-mer profiles. We show that machine learning-based
compositional approaches benefit from increasing the number of fragments
sampled from the reference genomes to tune their parameters, up to a coverage of
about 10, and from increasing the k-mer size to about 12. Tuning these models
involves training a machine learning model on about 10^8 samples in 10^7
dimensions, which is out of reach of standard software but can be done
efficiently with modern implementations for large-scale machine learning. The
resulting models are competitive in terms of accuracy with well-established
alignment tools for problems involving a small to moderate number of candidate
species, and for reasonable amounts of sequencing errors. We show, however,
that compositional approaches are still limited in their ability to deal with
problems involving a greater number of species, and are more sensitive to
sequencing errors. We finally confirm that compositional approaches achieve
faster prediction times, with a gain of 3 to 15 times with respect to the
BWA-MEM short read mapper, depending on the number of candidate species and the
level of sequencing noise.
| K\'evin Vervier (CBIO), Pierre Mah\'e, Maud Tournoud, Jean-Baptiste
Veyrieras, Jean-Philippe Vert (CBIO) | null | 1505.06915 | null | null |
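A minimal sketch of the compositional idea described above: each read is represented by its k-mer profile and fed to a scalable linear learner. The reads, labels and the small k are toy assumptions; the paper operates at k-mer sizes around 12 on roughly 10^8 training fragments with dedicated large-scale solvers.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier

def kmer_profile(read, k=4):
    """Turn a DNA read into a whitespace-separated list of its k-mers."""
    return " ".join(read[i:i + k] for i in range(len(read) - k + 1))

reads = ["ACGTACGTGG", "TTGACCAGTA", "ACGTACGAGG", "TTGACCTGTA"]
labels = ["species_A", "species_B", "species_A", "species_B"]

vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(kmer_profile(r) for r in reads)  # sparse k-mer count matrix
clf = SGDClassifier().fit(X, labels)                          # scalable linear model
print(clf.predict(vectorizer.transform([kmer_profile("ACGTACGTGA")])))
```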
Fantasy Football Prediction | cs.LG | The ubiquity of professional sports, and specifically the NFL, has led to an
increase in the popularity of Fantasy Football. Users have many tools at their
disposal: statistics, predictions, rankings of experts and even recommendations
of peers. There are issues with all of these, though. Especially since many
people pay money to play, the prediction tools should be enhanced as they
provide unbiased and easy-to-use assistance for users. This paper provides and
discusses approaches to predict Fantasy Football scores of Quarterbacks with
relatively limited data. In addition to that, it includes several suggestions
on how the data could be enhanced to achieve better results. The dataset
consists only of game data from the last six NFL seasons. I used two different
methods to predict the Fantasy Football scores of NFL players: Support Vector
Regression (SVR) and Neural Networks. The results of both are promising given
the limited data that was used.
| Roman Lutz | null | 1505.06918 | null | null |
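A minimal sketch of the SVR variant described above, fit on made-up quarterback game statistics; the feature columns and values are purely illustrative assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# columns: passing yards, passing TDs, interceptions (recent per-game averages)
X_train = np.array([[310, 3, 0], [220, 1, 2], [275, 2, 1], [190, 0, 1]])
y_train = np.array([24.4, 10.8, 17.0, 6.6])   # fantasy points actually scored

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_train, y_train)
print(model.predict([[290, 2, 1]]))           # predicted fantasy points for the next game
```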
Sequential Dimensionality Reduction for Extracting Localized Features | cs.CV cs.LG cs.NA math.NA stat.ML | Linear dimensionality reduction techniques are powerful tools for image
analysis as they allow the identification of important features in a data set.
In particular, nonnegative matrix factorization (NMF) has become very popular
as it is able to extract sparse, localized and easily interpretable features by
imposing an additive combination of nonnegative basis elements. Nonnegative
matrix underapproximation (NMU) is a closely related technique that has the
advantage of identifying features sequentially. In this paper, we propose a
variant of NMU that is particularly well suited for image analysis as it
incorporates the spatial information, that is, it takes into account the fact
that neighboring pixels are more likely to be contained in the same features,
and favors the extraction of localized features by looking for sparse basis
elements. We show that our new approach competes favorably with comparable
state-of-the-art techniques on synthetic, facial and hyperspectral image data
sets.
| Gabriella Casalino, Nicolas Gillis | 10.1016/j.patcog.2016.09.006 | 1505.06957 | null | null |
Some Open Problems in Optimal AdaBoost and Decision Stumps | cs.LG stat.ML | The significance of the study of the theoretical and practical properties of
AdaBoost is unquestionable, given its simplicity, wide practical use, and
effectiveness on real-world datasets. Here we present a few open problems
regarding the behavior of "Optimal AdaBoost," a term coined by Rudin,
Daubechies, and Schapire in 2004 to label the simple version of the standard
AdaBoost algorithm in which the weak learner that AdaBoost uses always outputs
the weak classifier with lowest weighted error among the respective hypothesis
class of weak classifiers implicit in the weak learner. We concentrate on the
standard, "vanilla" version of Optimal AdaBoost for binary classification that
results from using an exponential-loss upper bound on the misclassification
training error. We present two types of open problems. One deals with general
weak hypotheses. The other deals with the particular case of decision stumps,
as often and commonly used in practice. Answers to the open problems can have
immediate significant impact to (1) cementing previously established results on
asymptotic convergence properties of Optimal AdaBoost, for finite datasets,
which in turn can be the start to any convergence-rate analysis; (2)
understanding the weak-hypotheses class of effective decision stumps generated
from data, which we have empirically observed to be significantly smaller than
the typically obtained class, as well as the effect on the weak learner's
running time and previously established improved bounds on the generalization
performance of Optimal AdaBoost classifiers; and (3) shedding some light on the
"self control" that AdaBoost tends to exhibit in practice.
| Joshua Belanich and Luis E. Ortiz | null | 1505.06999 | null | null |
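A minimal sketch of the "vanilla" Optimal AdaBoost loop with decision stumps: at every round the weak learner exhaustively returns the stump with lowest weighted error, and weights are updated according to the exponential-loss bound. The data handling and helper names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def best_stump(X, y, w):
    """Exhaustively pick the stump (feature, threshold, sign) with lowest weighted error."""
    best = (None, None, 1, np.inf)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] <= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, sign, err)
    return best

def optimal_adaboost(X, y, rounds=10):
    """y must be in {-1, +1}; returns a list of (alpha, feature, threshold, sign) stumps."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        j, thr, sign, err = best_stump(X, y, w)
        err = max(err, 1e-12)                    # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] <= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)           # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

# Usage: predict with the sign of sum(alpha * sign * (x[j] <= thr ? 1 : -1)) over the ensemble.
```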
An Overview of the Asymptotic Performance of the Family of the FastICA
Algorithms | stat.ML cs.LG | This contribution summarizes the results on the asymptotic performance of
several variants of the FastICA algorithm. A number of new closed-form
expressions are presented.
| Tianwen Wei | null | 1505.07008 | null | null |
Belief Flows of Robust Online Learning | stat.ML cs.LG | This paper introduces a new probabilistic model for online learning which
dynamically incorporates information from stochastic gradients of an arbitrary
loss function. Similar to probabilistic filtering, the model maintains a
Gaussian belief over the optimal weight parameters. Unlike traditional Bayesian
updates, the model incorporates a small number of gradient evaluations at
locations chosen using Thompson sampling, making it computationally tractable.
The belief is then transformed via a linear flow field which optimally updates
the belief distribution using rules derived from information theoretic
principles. Several versions of the algorithm are shown using different
constraints on the flow field and compared with conventional online learning
algorithms. Results are given for several classification tasks including
logistic regression and multilayer neural networks.
| Pedro A. Ortega and Koby Crammer and Daniel D. Lee | null | 1505.07067 | null | null |
Training a Convolutional Neural Network for Appearance-Invariant Place
Recognition | cs.CV cs.LG cs.RO | Place recognition is one of the most challenging problems in computer vision,
and has become a key part of mobile robotics and autonomous driving
applications for performing loop closure in visual SLAM systems. Moreover, the
difficulty of recognizing a revisited location increases with appearance
changes caused, for instance, by weather or illumination variations, which
hinders the long-term application of such algorithms in real environments. In
this paper we present a convolutional neural network (CNN), trained for the
first time with the purpose of recognizing revisited locations under severe
appearance changes, which maps images to a low dimensional space where
Euclidean distances represent place dissimilarity. In order for the network to
learn the desired invariances, we train it with triplets of images selected
from datasets which present a challenging variability in visual appearance. The
triplets are selected in such a way that two samples are from the same location
and the third one is taken from a different place. We validate our system
through extensive experimentation, where we demonstrate better performance than
state-of-the-art algorithms on a number of popular datasets.
| Ruben Gomez-Ojeda, Manuel Lopez-Antequera, Nicolai Petkov, Javier
Gonzalez-Jimenez | null | 1505.07428 | null | null |
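A minimal sketch of a standard triplet margin objective of the kind described above: the anchor and positive are embeddings of the same place under different appearance conditions, the negative comes from a different place, and the loss pushes the negative at least a margin further away. The vectors and margin are toy assumptions; the authors' exact loss may differ.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    d_pos = np.linalg.norm(anchor - positive)   # same location, different appearance
    d_neg = np.linalg.norm(anchor - negative)   # different location
    return max(0.0, d_pos - d_neg + margin)

anchor   = np.array([0.1, 0.9])
positive = np.array([0.2, 0.8])
negative = np.array([0.9, 0.1])
print(triplet_loss(anchor, positive, negative))  # 0.0: this triplet already satisfies the margin
```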
A Practical Guide to Randomized Matrix Computations with MATLAB
Implementations | cs.MS cs.LG | Matrix operations such as matrix inversion, eigenvalue decomposition,
singular value decomposition are ubiquitous in real-world applications.
Unfortunately, many of these matrix operations are so time- and memory-expensive
that they become prohibitive when the scale of the data is large. In real-world
applications, since the data themselves are noisy, machine-precision matrix
operations are not necessary at all, and one can sacrifice a reasonable amount
of accuracy for computational efficiency.
In recent years, a number of randomized algorithms have been devised to make
matrix computations more scalable. Mahoney (2011) and Woodruff (2014) have
written excellent but very technical reviews of the randomized algorithms.
Differently, the focus of this manuscript is on intuition, algorithm
derivation, and implementation. This manuscript should be accessible to people
with knowledge in elementary matrix algebra but unfamiliar with randomized
matrix computations. The algorithms introduced in this manuscript are all
summarized in a user-friendly way, and they can be implemented in a few lines of
MATLAB code. Readers can easily follow the implementations even if they do
not understand the maths and algorithms.
| Shusen Wang | null | 1505.07570 | null | null |
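The manuscript's implementations are in MATLAB; as a language-neutral illustration, here is a minimal NumPy sketch of the basic randomized SVD prototype (Gaussian test matrix, orthonormal range basis, small exact SVD), a representative example of the kind of randomized matrix computation the manuscript covers.

```python
import numpy as np

def randomized_svd(A, k, p=10):
    """Approximate rank-k SVD of A using a random Gaussian test matrix with oversampling p."""
    m, n = A.shape
    omega = np.random.randn(n, k + p)      # random test matrix
    Q, _ = np.linalg.qr(A @ omega)         # orthonormal basis for the range of A @ omega
    B = Q.T @ A                            # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

A = np.random.randn(1000, 300)
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A))  # relative approximation error
```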
Learning with Symmetric Label Noise: The Importance of Being Unhinged | cs.LG | Convex potential minimisation is the de facto approach to binary
classification. However, Long and Servedio [2010] proved that under symmetric
label noise (SLN), minimisation of any convex potential over a linear function
class can result in classification performance equivalent to random guessing.
This ostensibly shows that convex losses are not SLN-robust. In this paper, we
propose a convex, classification-calibrated loss and prove that it is
SLN-robust. The loss avoids the Long and Servedio [2010] result by virtue of
being negatively unbounded. The loss is a modification of the hinge loss, where
one does not clamp at zero; hence, we call it the unhinged loss. We show that
the optimal unhinged solution is equivalent to that of a strongly regularised
SVM, and is the limiting solution for any convex potential; this implies that
strong l2 regularisation makes most standard learners SLN-robust. Experiments
confirm the SLN-robustness of the unhinged loss.
| Brendan van Rooyen and Aditya Krishna Menon and Robert C. Williamson | null | 1505.07634 | null | null |
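A minimal sketch contrasting the hinge loss with the unhinged loss described above: the unhinged loss simply drops the clamp at zero, so it is linear (and negatively unbounded) in the margin y * f(x). The example margins are arbitrary.

```python
import numpy as np

def hinge_loss(margin):
    return np.maximum(0.0, 1.0 - margin)

def unhinged_loss(margin):
    return 1.0 - margin          # no clamping at zero

margins = np.array([-2.0, 0.0, 0.5, 3.0])
print(hinge_loss(margins))       # [3.  1.  0.5 0. ]
print(unhinged_loss(margins))    # [ 3.   1.   0.5 -2. ]
```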
Domain-Adversarial Training of Neural Networks | stat.ML cs.LG cs.NE | We introduce a new representation learning approach for domain adaptation, in
which data at training and test time come from similar but different
distributions. Our approach is directly inspired by the theory on domain
adaptation suggesting that, for effective domain transfer to be achieved,
predictions must be made based on features that cannot discriminate between the
training (source) and test (target) domains. The approach implements this idea
in the context of neural network architectures that are trained on labeled data
from the source domain and unlabeled data from the target domain (no labeled
target-domain data is necessary). As the training progresses, the approach
promotes the emergence of features that are (i) discriminative for the main
learning task on the source domain and (ii) indiscriminate with respect to the
shift between the domains. We show that this adaptation behaviour can be
achieved in almost any feed-forward model by augmenting it with a few standard
layers and a new gradient reversal layer. The resulting augmented architecture
can be trained using standard backpropagation and stochastic gradient descent,
and can thus be implemented with little effort using any of the deep learning
packages. We demonstrate the success of our approach for two distinct
classification problems (document sentiment analysis and image classification),
where state-of-the-art domain adaptation performance on standard benchmarks is
achieved. We also validate the approach for descriptor learning task in the
context of person re-identification application.
| Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo
Larochelle, Fran\c{c}ois Laviolette, Mario Marchand, Victor Lempitsky | null | 1505.07818 | null | null |
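A minimal sketch (assuming PyTorch, which the abstract does not mention) of a gradient reversal layer of the kind described above: identity on the forward pass, sign-flipped and scaled gradient on the backward pass, so standard backpropagation trains the feature extractor adversarially against the domain classifier.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)              # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # reversed, scaled gradient; no grad for lambd

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: features = extractor(x)
#               domain_logits = domain_head(grad_reverse(features))
#               label_logits  = label_head(features)
```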
Solving Verbal Comprehension Questions in IQ Test by Knowledge-Powered
Word Embedding | cs.CL cs.IR cs.LG | Intelligence Quotient (IQ) Test is a set of standardized questions designed
to evaluate human intelligence. Verbal comprehension questions appear very
frequently in IQ tests and measure a person's verbal ability, including the
understanding of words with multiple senses, synonyms and antonyms, and
analogies among words. In this work, we explore whether such tests can be
solved automatically by artificial intelligence technologies, especially the
deep learning technologies that are recently developed and successfully applied
in a number of fields. However, we found that the task was quite challenging,
and simply applying existing technologies (e.g., word embedding) could not
achieve good performance, mainly due to the multiple senses of words and the
complex relations among words. To tackle these challenges, we propose a novel
framework consisting of three components. First, we build a classifier to
recognize the specific type of a verbal question (e.g., analogy,
classification, synonym, or antonym). Second, we obtain distributed
representations of words and relations by leveraging a novel word embedding
method that considers the multi-sense nature of words and the relational
knowledge among words (or their senses) contained in dictionaries. Third, for
each type of questions, we propose a specific solver based on the obtained
distributed word representations and relation representations. Experimental
results have shown that the proposed framework can not only outperform existing
methods for solving verbal comprehension questions but also exceed the average
performance of the Amazon Mechanical Turk workers involved in the study. The
results indicate that, with appropriate use of deep learning technologies,
we might be a step closer to human intelligence.
| Huazheng Wang, Fei Tian, Bin Gao, Jiang Bian, Tie-Yan Liu | null | 1505.07909 | null | null |
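For the analogy-type questions mentioned above, a minimal sketch of the classic vector-offset solver ("a is to b as c is to ?"); the tiny embeddings here are made-up assumptions, not the knowledge-powered multi-sense embeddings proposed in the paper.

```python
import numpy as np

emb = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.5, 0.9, 0.0]),
    "woman":  np.array([0.5, 0.1, 0.9]),
    "prince": np.array([0.7, 0.85, 0.1]),
}

def analogy(a, b, c, emb):
    """Return the word whose vector is closest (by cosine) to vec(b) - vec(a) + vec(c)."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in emb if w not in (a, b, c)), key=lambda w: cos(emb[w], target))

print(analogy("man", "king", "woman", emb))   # expected: "queen"
```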