title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Inductive Bias of Deep Convolutional Networks through Pooling Geometry | cs.NE cs.LG | Our formal understanding of the inductive bias that drives the success of
convolutional networks on computer vision tasks is limited. In particular, it
is unclear what makes hypothesis spaces born from convolution and pooling
operations so suitable for natural images. In this paper we study the ability
of convolutional networks to model correlations among regions of their input.
We theoretically analyze convolutional arithmetic circuits, and empirically
validate our findings on other types of convolutional networks as well.
Correlations are formalized through the notion of separation rank, which for a
given partition of the input, measures how far a function is from being
separable. We show that a polynomially sized deep network supports
exponentially high separation ranks for certain input partitions, while being
limited to polynomial separation ranks for others. The network's pooling
geometry effectively determines which input partitions are favored, thus serving
as a means for controlling the inductive bias. Contiguous pooling windows as
commonly employed in practice favor interleaved partitions over coarse ones,
orienting the inductive bias towards the statistics of natural images. Other
pooling schemes lead to different preferences, and this allows tailoring the
network to data that departs from the usual domain of natural imagery. In
addition to analyzing deep networks, we show that shallow ones support only
linear separation ranks, and by this gain insight into the benefit of functions
brought forth by depth - they are able to efficiently model strong correlation
under favored partitions of the input.
| Nadav Cohen and Amnon Shashua | null | 1605.06743 | null | null |
Active Nearest-Neighbor Learning in Metric Spaces | cs.LG math.ST stat.TH | We propose a pool-based non-parametric active learning algorithm for general
metric spaces, called MArgin Regularized Metric Active Nearest Neighbor
(MARMANN), which outputs a nearest-neighbor classifier. We give prediction
error guarantees that depend on the noisy-margin properties of the input
sample, and are competitive with those obtained by previously proposed passive
learners. We prove that the label complexity of MARMANN is significantly lower
than that of any passive learner with similar error guarantees. MARMANN is
based on a generalized sample compression scheme, and a new label-efficient
active model-selection procedure.
| Aryeh Kontorovich, Sivan Sabato, Ruth Urner | null | 1605.06792 | null | null |
Interpretable Distribution Features with Maximum Testing Power | stat.ML cs.LG | Two semimetrics on probability distributions are proposed, given as the sum
of differences of expectations of analytic functions evaluated at spatial or
frequency locations (i.e., features). The features are chosen so as to maximize
the distinguishability of the distributions, by optimizing a lower bound on
test power for a statistical test using these features. The result is a
parsimonious and interpretable indication of how and where two distributions
differ locally. An empirical estimate of the test power criterion converges
with increasing sample size, ensuring the quality of the returned features. In
real-world benchmarks on high-dimensional text and image data, linear-time
tests using the proposed semimetrics achieve comparable performance to the
state-of-the-art quadratic-time maximum mean discrepancy test, while returning
human-interpretable features that explain the test results.
| Wittawat Jitkrittum, Zoltan Szabo, Kacper Chwialkowski, Arthur Gretton | null | 1605.06796 | null | null |
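A rough numerical illustration of the idea in the entry above: compare Gaussian-kernel mean embeddings of two samples at a few spatial test locations. This sketch is only indicative; it omits the covariance normalization in the paper's actual statistic and the optimization of the locations for test power, and every name in it is invented for the example.

```python
import numpy as np

def mean_embedding(S, locations, gamma=1.0):
    # average Gaussian-kernel feature of sample S at each test location
    d2 = ((S[:, None, :] - locations[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2).mean(axis=0)

def embedding_difference(X, Y, locations, gamma=1.0):
    # squared difference of the two embeddings; large values suggest the
    # distributions differ in the neighbourhood of the chosen locations
    diff = mean_embedding(X, locations, gamma) - mean_embedding(Y, locations, gamma)
    return float((diff ** 2).sum())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
locations = rng.normal(0.0, 1.0, size=(3, 2))   # three interpretable features
print(embedding_difference(X, Y, locations))
```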
Nonnegative Matrix Factorization Requires Irrationality | cs.CC cs.LG math.NA | Nonnegative matrix factorization (NMF) is the problem of decomposing a given
nonnegative $n \times m$ matrix $M$ into a product of a nonnegative $n \times
d$ matrix $W$ and a nonnegative $d \times m$ matrix $H$. A longstanding open
question, posed by Cohen and Rothblum in 1993, is whether a rational matrix $M$
always has an NMF of minimal inner dimension $d$ whose factors $W$ and $H$ are
also rational. We answer this question negatively, by exhibiting a matrix for
which $W$ and $H$ require irrational entries.
| Dmitry Chistikov, Stefan Kiefer, Ines Maru\v{s}i\'c, Mahsa
Shirmohammadi, James Worrell | null | 1605.06848 | null | null |
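As a point of reference for the entry above, the snippet below computes an (unrestricted, approximate) NMF with the classic multiplicative updates of Lee and Seung. It merely illustrates the decomposition $M \approx WH$ for a chosen inner dimension $d$; it says nothing about minimal dimension or about the rationality question the paper resolves.

```python
import numpy as np

def nmf(M, d, iters=500, eps=1e-9, seed=0):
    # multiplicative updates keep W and H entrywise nonnegative throughout
    rng = np.random.default_rng(seed)
    n, m = M.shape
    W, H = rng.random((n, d)), rng.random((d, m))
    for _ in range(iters):
        H *= (W.T @ M) / (W.T @ W @ H + eps)
        W *= (M @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(1)
M = rng.random((6, 5))                   # a nonnegative 6 x 5 matrix
W, H = nmf(M, d=3)
print(np.linalg.norm(M - W @ H))         # reconstruction error
```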
Smart broadcasting: Do you want to be seen? | cs.SI cs.LG stat.ML | Many users in online social networks are constantly trying to gain attention
from their followers by broadcasting posts to them. These broadcasters are
likely to gain greater attention if their posts can remain visible for a longer
period of time among their followers' most recent feeds. Then when to post? In
this paper, we study the problem of smart broadcasting using the framework of
temporal point processes, where we model users' feeds and posts as discrete
events occurring in continuous time. Based on such a continuous-time model,
choosing a broadcasting strategy for a user becomes a problem of designing the
conditional intensity of her posting events. We derive a novel formula which
links this conditional intensity with the visibility of the user in her
followers' feeds. Furthermore, by exploiting this formula, we develop an
efficient convex optimization framework for the when-to-post problem. Our
method can find broadcasting strategies that reach a desired visibility level
with provable guarantees. We experimented with data gathered from Twitter, and
show that our framework can consistently make broadcasters' posts more visible
than alternatives.
| Mohammad Reza Karimi and Erfan Tavakoli and Mehrdad Farajtabar and Le
Song and Manuel Gomez-Rodriguez | null | 1605.06855 | null | null |
DLAU: A Scalable Deep Learning Accelerator Unit on FPGA | cs.LG cs.DC cs.NE | As an emerging field of machine learning, deep learning shows an excellent
ability to solve complex learning problems. However, the size of the networks
becomes increasingly large due to the demands of practical applications, which
poses a significant challenge to constructing high-performance implementations
of deep learning neural networks. In order to improve the performance as well
as to maintain low power cost, in this paper we design DLAU, a scalable
accelerator architecture for large-scale deep learning networks using FPGA as
the hardware prototype. The DLAU accelerator employs three pipelined processing
units to improve the throughput and utilizes tiling techniques to exploit
locality for deep learning applications. Experimental results on a
state-of-the-art Xilinx FPGA board demonstrate that the DLAU accelerator
achieves up to 36.1x speedup compared to the Intel Core2 processor, with a
power consumption of 234mW.
| Chao Wang, Qi Yu, Lei Gong, Xi Li, Yuan Xie, Xuehai Zhou | null | 1605.06894 | null | null |
Fast Stochastic Methods for Nonsmooth Nonconvex Optimization | math.OC cs.LG stat.ML | We analyze stochastic algorithms for optimizing nonconvex, nonsmooth
finite-sum problems, where the nonconvex part is smooth and the nonsmooth part
is convex. Surprisingly, unlike the smooth case, our knowledge of this
fundamental problem is very limited. For example, it is not known whether the
proximal stochastic gradient method with constant minibatch converges to a
stationary point. To tackle this issue, we develop fast stochastic algorithms
that provably converge to a stationary point for constant minibatches.
Furthermore, using a variant of these algorithms, we show provably faster
convergence than batch proximal gradient descent. Finally, we prove a global
linear convergence rate for an interesting subclass of nonsmooth nonconvex
functions that subsumes several recent works. This paper builds upon our
recent series of papers on fast stochastic methods for smooth nonconvex
optimization [22, 23], with a novel analysis for nonconvex and nonsmooth
functions.
| Sashank J. Reddi, Suvrit Sra, Barnabas Poczos, Alex Smola | null | 1605.06900 | null | null |
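For readers new to the setting of the entry above, a single proximal stochastic gradient step for a composite objective $f(x) + \lambda \|x\|_1$ looks as follows. This is a generic illustration of the proximal step (soft-thresholding for the $\ell_1$ term), not the variance-reduced algorithms the paper analyzes; the step size and example data are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_step(x, minibatch_grad, step, lam):
    # gradient step on the smooth part, proximal step on the nonsmooth part
    return soft_threshold(x - step * minibatch_grad, step * lam)

# toy usage: one step on a least-squares minibatch with l1 regularization
rng = np.random.default_rng(0)
A, b = rng.standard_normal((8, 5)), rng.standard_normal(8)
x = np.zeros(5)
grad = A.T @ (A @ x - b) / len(b)
print(prox_sgd_step(x, grad, step=0.1, lam=0.05))
```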
Generative Choreography using Deep Learning | cs.AI cs.LG cs.MM cs.NE | Recent advances in deep learning have enabled the extraction of high-level
features from raw sensor data which has opened up new possibilities in many
different fields, including computer generated choreography. In this paper we
present a system chor-rnn for generating novel choreographic material in the
nuanced choreographic language and style of an individual choreographer. It
also shows promising results in producing a higher level compositional
cohesion, rather than just generating sequences of movement. At the core of
chor-rnn is a deep recurrent neural network, trained on raw motion capture
data, that can generate new dance sequences for a solo dancer. Chor-rnn can be
used for collaborative human-machine choreography or as a creative catalyst,
serving as inspiration for a choreographer.
| Luka Crnkovic-Friis, Louise Crnkovic-Friis | null | 1605.06921 | null | null |
An Information Criterion for Inferring Coupling in Distributed Dynamical
Systems | cs.LG cs.IT math.IT stat.ML | The behaviour of many real-world phenomena can be modelled by nonlinear
dynamical systems whereby a latent system state is observed through a filter.
We are interested in interacting subsystems of this form, which we model by a
set of coupled maps as a synchronous update graph dynamical system.
Specifically, we study the structure learning problem for spatially distributed
dynamical systems coupled via a directed acyclic graph. Unlike established
structure learning procedures that find locally maximum posterior probabilities
of a network structure containing latent variables, our work exploits the
properties of dynamical systems to compute globally optimal approximations of
these distributions. We arrive at this result by the use of time delay
embedding theorems. Taking an information-theoretic perspective, we show that
the log-likelihood has an intuitive interpretation in terms of information
transfer.
| Oliver M. Cliff, Mikhail Prokopenko and Robert Fitch | 10.3389/frobt.2016.00071 | 1605.06931 | null | null |
A Sub-Quadratic Exact Medoid Algorithm | stat.ML cs.DS cs.LG | We present a new algorithm, trimed, for obtaining the medoid of a set, that
is the element of the set which minimises the mean distance to all other
elements. The algorithm is shown to have, under certain assumptions, expected
run time O(N^(3/2)) in R^d where N is the set size, making it the first
sub-quadratic exact medoid algorithm for d>1. Experiments show that it performs
very well on spatial network data, frequently requiring two orders of magnitude
fewer distance calculations than state-of-the-art approximate algorithms. As an
application, we show how trimed can be used as a component in an accelerated
K-medoids algorithm, and then how it can be relaxed to obtain further
computational gains with only a minor loss in cluster quality.
| James Newling, Fran\c{c}ois Fleuret | null | 1605.06950 | null | null |
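For context on the entry above, the quadratic-cost baseline that trimed improves on is the exhaustive computation sketched below; it is only the naive reference, not the trimed algorithm itself.

```python
import numpy as np

def medoid_bruteforce(X):
    # exact medoid via all pairwise distances: O(N^2) distance calculations
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return int(np.argmin(D.mean(axis=1)))    # index of the minimising element

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
print(medoid_bruteforce(X))
```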
Semi-Supervised Classification Based on Classification from Positive and
Unlabeled Data | cs.LG | Most of the semi-supervised classification methods developed so far use
unlabeled data for regularization purposes under particular distributional
assumptions such as the cluster assumption. In contrast, recently developed
methods of classification from positive and unlabeled data (PU classification)
use unlabeled data for risk evaluation, i.e., label information is directly
extracted from unlabeled data. In this paper, we extend PU classification to
also incorporate negative data and propose a novel semi-supervised
classification approach. We establish generalization error bounds for our novel
methods and show that the bounds decrease with respect to the number of
unlabeled data without the distributional assumptions that are required in
existing semi-supervised classification methods. Through experiments, we
demonstrate the usefulness of the proposed methods.
| Tomoya Sakai, Marthinus Christoffel du Plessis, Gang Niu, Masashi
Sugiyama | null | 1605.06955 | null | null |
A Riemannian gossip approach to decentralized matrix completion | cs.NA cs.LG math.OC | In this paper, we propose novel gossip algorithms for the low-rank
decentralized matrix completion problem. The proposed approach is on the
Riemannian Grassmann manifold that allows local matrix completion by different
agents while achieving asymptotic consensus on the global low-rank factors. The
resulting approach is scalable and parallelizable. Our numerical experiments
show the good performance of the proposed algorithms on various benchmarks.
| Bamdev Mishra, Hiroyuki Kasai, and Atul Saroop | null | 1605.06968 | null | null |
DP-EM: Differentially Private Expectation Maximization | cs.LG cs.AI cs.CR stat.ME stat.ML | The iterative nature of the expectation maximization (EM) algorithm presents
a challenge for privacy-preserving estimation, as each iteration increases the
amount of noise needed. We propose a practical private EM algorithm that
overcomes this challenge using two innovations: (1) a novel moment perturbation
formulation for differentially private EM (DP-EM), and (2) the use of two
recently developed composition methods to bound the privacy "cost" of multiple
EM iterations: the moments accountant (MA) and zero-mean concentrated
differential privacy (zCDP). Both MA and zCDP bound the moment generating
function of the privacy loss random variable and achieve a refined tail bound,
which effectively decreases the amount of additive noise. We present empirical
results showing the benefits of our approach, as well as similar performance
between these two composition methods in the DP-EM setting for Gaussian mixture
models. Our approach can be readily extended to many iterative learning
algorithms, opening up various exciting future directions.
| Mijung Park, Jimmy Foulds, Kamalika Chaudhuri, Max Welling | null | 1605.06995 | null | null |
Online Learning with Feedback Graphs Without the Graphs | cs.LG stat.ML | We study an online learning framework introduced by Mannor and Shamir (2011)
in which the feedback is specified by a graph, in a setting where the graph may
vary from round to round and is \emph{never fully revealed} to the learner. We
show a large gap between the adversarial and the stochastic cases. In the
adversarial case, we prove that even for dense feedback graphs, the learner
cannot improve upon a trivial regret bound obtained by ignoring any additional
feedback besides her own loss. In contrast, in the stochastic case we give an
algorithm that achieves $\widetilde \Theta(\sqrt{\alpha T})$ regret over $T$
rounds, provided that the independence numbers of the hidden feedback graphs
are at most $\alpha$. We also extend our results to a more general feedback
model, in which the learner does not necessarily observe her own loss, and show
that, even in simple cases, concealing the feedback graphs might render a
learnable problem unlearnable.
| Alon Cohen, Tamir Hazan, Tomer Koren | null | 1605.07018 | null | null |
Collaborative Filtering with Side Information: a Gaussian Process
Perspective | stat.ML cs.IR cs.LG | We tackle the problem of collaborative filtering (CF) with side information,
through the lens of Gaussian Process (GP) regression. Driven by the idea of
using the kernel to explicitly model user-item similarities, we formulate the
GP in a way that allows the incorporation of low-rank matrix factorisation,
arriving at our model, the Tucker Gaussian Process (TGP). Consequently, TGP
generalises classical Bayesian matrix factorisation models, and goes beyond
them to give a natural and elegant method for incorporating side information,
giving enhanced predictive performance for CF problems. Moreover we show that
it is a novel model for regression, especially well-suited to grid-structured
data and problems where the dependence on covariates is close to being
separable.
| Hyunjik Kim, Xiaoyu Lu, Seth Flaxman, Yee Whye Teh | null | 1605.07025 | null | null |
Convergence Analysis for Rectangular Matrix Completion Using
Burer-Monteiro Factorization and Gradient Descent | stat.ML cs.LG | We address the rectangular matrix completion problem by lifting the unknown
matrix to a positive semidefinite matrix in higher dimension, and optimizing a
nonconvex objective over the semidefinite factor using a simple gradient
descent scheme. With $O( \mu r^2 \kappa^2 n \max(\mu, \log n))$ random
observations of an $n_1 \times n_2$ $\mu$-incoherent matrix of rank $r$ and
condition number $\kappa$, where $n = \max(n_1, n_2)$, the algorithm linearly
converges to the global optimum with high probability.
| Qinqing Zheng, John Lafferty | null | 1605.07051 | null | null |
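To make the setting of the entry above concrete, here is a minimal sketch of gradient descent on a factored matrix-completion objective. The paper actually analyzes a lifted positive semidefinite (Burer-Monteiro) formulation with specific step sizes and sample counts; the plain rectangular two-factor version below, with arbitrary constants, is illustration only.

```python
import numpy as np

def complete(M_obs, mask, r, step=0.01, iters=3000, seed=0):
    # gradient descent on f(U, V) = 0.5 * || mask * (U V^T - M_obs) ||_F^2
    rng = np.random.default_rng(seed)
    n1, n2 = M_obs.shape
    U = 0.1 * rng.standard_normal((n1, r))
    V = 0.1 * rng.standard_normal((n2, r))
    for _ in range(iters):
        R = mask * (U @ V.T - M_obs)          # residual on observed entries only
        U, V = U - step * (R @ V), V - step * (R.T @ U)
    return U @ V.T

rng = np.random.default_rng(1)
M = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))  # rank-4 truth
mask = (rng.random(M.shape) < 0.5).astype(float)                 # observed entries
M_hat = complete(mask * M, mask, r=4)
print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))             # relative error
```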
Bayesian Model Selection of Stochastic Block Models | stat.ML cs.LG cs.SI | A central problem in analyzing networks is partitioning them into modules or
communities. One of the best tools for this is the stochastic block model,
which clusters vertices into blocks with statistically homogeneous pattern of
links. Despite its flexibility and popularity, there has been a lack of
principled statistical model selection criteria for the stochastic block model.
Here we propose a Bayesian framework for choosing the number of blocks as well
as comparing it to the more elaborate degree-corrected block models,
ultimately leading to a universal model selection framework capable of
comparing multiple modeling combinations. We will also investigate its
connection to the minimum description length principle.
| Xiaoran Yan | null | 1605.07057 | null | null |
On Restricted Nonnegative Matrix Factorization | cs.FL cs.CC cs.LG | Nonnegative matrix factorization (NMF) is the problem of decomposing a given
nonnegative $n \times m$ matrix $M$ into a product of a nonnegative $n \times
d$ matrix $W$ and a nonnegative $d \times m$ matrix $H$. Restricted NMF
requires in addition that the column spaces of $M$ and $W$ coincide. Finding
the minimal inner dimension $d$ is known to be NP-hard, both for NMF and
restricted NMF. We show that restricted NMF is closely related to a question
about the nature of minimal probabilistic automata, posed by Paz in his seminal
1971 textbook. We use this connection to answer Paz's question negatively, thus
falsifying a positive answer claimed in 1974. Furthermore, we investigate
whether a rational matrix $M$ always has a restricted NMF of minimal inner
dimension whose factors $W$ and $H$ are also rational. We show that this holds
for matrices $M$ of rank at most $3$ and we exhibit a rank-$4$ matrix for which
$W$ and $H$ require irrational entries.
| Dmitry Chistikov, Stefan Kiefer, Ines Maru\v{s}i\'c, Mahsa
Shirmohammadi, James Worrell | null | 1605.07061 | null | null |
A Unifying Framework for Gaussian Process Pseudo-Point Approximations
using Power Expectation Propagation | stat.ML cs.LG | Gaussian processes (GPs) are flexible distributions over functions that
enable high-level assumptions about unknown functions to be encoded in a
parsimonious, flexible and general way. Although elegant, the application of
GPs is limited by computational and analytical intractabilities that arise when
data are sufficiently numerous or when employing non-Gaussian models.
Consequently, a wealth of GP approximation schemes have been developed over the
last 15 years to address these key limitations. Many of these schemes employ a
small set of pseudo data points to summarise the actual data. In this paper, we
develop a new pseudo-point approximation framework using Power Expectation
Propagation (Power EP) that unifies a large number of these pseudo-point
approximations. Unlike much of the previous venerable work in this area, the
new framework is built on standard methods for approximate inference
(variational free-energy, EP and Power EP methods) rather than employing
approximations to the probabilistic generative model itself. In this way, all
of the approximation is performed at `inference time' rather than at `modelling
time', resolving awkward philosophical and empirical questions that trouble
previous approaches. Crucially, we demonstrate that the new framework includes
new pseudo-point approximation methods that outperform current approaches on
regression and classification tasks.
| Thang D. Bui, Josiah Yan, Richard E. Turner | null | 1605.07066 | null | null |
Learning Sensor Multiplexing Design through Back-propagation | cs.LG stat.ML | Recent progress on many imaging and vision tasks has been driven by the use
of deep feed-forward neural networks, which are trained by propagating
gradients of a loss defined on the final output, back through the network up to
the first layer that operates directly on the image. We propose
back-propagating one step further---to learn camera sensor designs jointly with
networks that carry out inference on the images they capture. In this paper, we
specifically consider the design and inference problems in a typical color
camera---where the sensor is able to measure only one color channel at each
pixel location, and computational inference is required to reconstruct a full
color image. We learn the camera sensor's color multiplexing pattern by
encoding it as a layer whose learnable weights determine which color channel,
from among a fixed set, will be measured at each location. These weights are
jointly trained with those of a reconstruction network that operates on the
corresponding sensor measurements to produce a full color image. Our network
achieves significant improvements in accuracy over the traditional Bayer
pattern used in most color cameras. It automatically learns to employ a sparse
color measurement approach similar to that of a recent design, and moreover,
improves upon that design by learning an optimal layout for these measurements.
| Ayan Chakrabarti | null | 1605.07078 | null | null |
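As a rough, invented illustration of the idea in the entry above (not the author's architecture), a color-multiplexing layer can be written as a per-pixel softmax over channels whose weights are trained jointly with the downstream reconstruction network; the class name and sizes here are made up.

```python
import torch
import torch.nn as nn

class LearnedMosaic(nn.Module):
    # per-pixel soft selection over color channels; as training sharpens the
    # softmax, each pixel tends towards measuring a single channel
    def __init__(self, channels=3, height=32, width=32):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(channels, height, width))

    def forward(self, img):                    # img: (batch, channels, H, W)
        weights = torch.softmax(self.logits, dim=0)
        return (img * weights).sum(dim=1, keepdim=True)   # one value per pixel

sensor = LearnedMosaic()
measurement = sensor(torch.rand(4, 3, 32, 32))             # shape (4, 1, 32, 32)
print(measurement.shape)
```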
Fast Bayesian Optimization of Machine Learning Hyperparameters on Large
Datasets | cs.LG cs.AI stat.ML | Bayesian optimization has become a successful tool for hyperparameter
optimization of machine learning algorithms, such as support vector machines or
deep neural networks. Despite its success, for large datasets, training and
validating a single configuration often takes hours, days, or even weeks, which
limits the achievable performance. To accelerate hyperparameter optimization,
we propose a generative model for the validation error as a function of
training set size, which is learned during the optimization process and allows
exploration of preliminary configurations on small subsets, by extrapolating to
the full dataset. We construct a Bayesian optimization procedure, dubbed
Fabolas, which models loss and training time as a function of dataset size and
automatically trades off high information gain about the global optimum against
computational cost. Experiments optimizing support vector machines and deep
neural networks show that Fabolas often finds high-quality solutions 10 to 100
times faster than other state-of-the-art Bayesian optimization methods or the
recently proposed bandit strategy Hyperband.
| Aaron Klein, Stefan Falkner, Simon Bartels, Philipp Hennig, Frank
Hutter | null | 1605.07079 | null | null |
A note on the expected minimum error probability in equientropic
channels | q-bio.NC cs.IT cs.LG math.IT stat.ML | While the channel capacity reflects a theoretical upper bound on the
achievable information transmission rate in the limit of infinitely many bits,
it does not characterise the information transfer of a given encoding routine
with finitely many bits. In this note, we characterise the quality of a code
(i.e., a given encoding routine) by an upper bound on the expected minimum
error probability that can be achieved when using this code. We show that for
equientropic channels this upper bound is minimal for codes with maximal
marginal entropy. As an instructive example we show for the additive white
Gaussian noise (AWGN) channel that random coding---also a capacity achieving
code---indeed maximises the marginal entropy in the limit of infinite messages.
| Sebastian Weichwald, Tatiana Fomina, Bernhard Sch\"olkopf, Moritz
Grosse-Wentrup | null | 1605.07094 | null | null |
Deep Learning without Poor Local Minima | stat.ML cs.LG math.OC | In this paper, we prove a conjecture published in 1989 and also partially
address an open problem announced at the Conference on Learning Theory (COLT)
2015. With no unrealistic assumption, we first prove the following statements
for the squared loss function of deep linear neural networks with any depth and
any widths: 1) the function is non-convex and non-concave, 2) every local
minimum is a global minimum, 3) every critical point that is not a global
minimum is a saddle point, and 4) there exist "bad" saddle points (where the
Hessian has no negative eigenvalue) for the deeper networks (with more than
three layers), whereas there is no bad saddle point for the shallow networks
(with three layers). Moreover, for deep nonlinear neural networks, we prove the
same four statements via a reduction to a deep linear model under the
independence assumption adopted from recent work. As a result, we present an
instance, for which we can answer the following question: how difficult is it
to directly train a deep model in theory? It is more difficult than the
classical machine learning models (because of the non-convexity), but not too
difficult (because of the nonexistence of poor local minima). Furthermore, the
mathematically proven existence of bad saddle points for deeper models would
suggest a possible open problem. We note that even though we have advanced the
theoretical foundations of deep learning and non-convex optimization, there is
still a gap between theory and practice.
| Kenji Kawaguchi | null | 1605.07110 | null | null |
Learning and Policy Search in Stochastic Dynamical Systems with Bayesian
Neural Networks | stat.ML cs.LG | We present an algorithm for model-based reinforcement learning that combines
Bayesian neural networks (BNNs) with random roll-outs and stochastic
optimization for policy learning. The BNNs are trained by minimizing
$\alpha$-divergences, allowing us to capture complicated statistical patterns
in the transition dynamics, e.g. multi-modality and heteroskedasticity, which
are usually missed by other common modeling approaches. We illustrate the
performance of our method by solving a challenging benchmark where model-based
approaches usually fail and by obtaining promising results in a real-world
scenario for controlling a gas turbine.
| Stefan Depeweg, Jos\'e Miguel Hern\'andez-Lobato, Finale Doshi-Velez,
Steffen Udluft | null | 1605.07127 | null | null |
Towards Multi-Agent Communication-Based Language Learning | cs.CL cs.CV cs.LG | We propose an interactive multimodal framework for language learning. Instead
of being passively exposed to large amounts of natural text, our learners
(implemented as feed-forward neural networks) engage in cooperative referential
games starting from a tabula rasa setup, and thus develop their own language
from the need to communicate in order to succeed at the game. Preliminary
experiments provide promising results, but also suggest that it is important to
ensure that agents trained in this way do not develop an ad hoc communication
code that is only effective for the game they are playing.
| Angeliki Lazaridou, Nghia The Pham and Marco Baroni | null | 1605.07133 | null | null |
Fairness in Learning: Classic and Contextual Bandits | cs.LG stat.ML | We introduce the study of fairness in multi-armed bandit problems. Our
fairness definition can be interpreted as demanding that given a pool of
applicants (say, for college admission or mortgages), a worse applicant is
never favored over a better one, despite a learning algorithm's uncertainty
over the true payoffs. We prove results of two types.
First, in the important special case of the classic stochastic bandits
problem (i.e., in which there are no contexts), we provide a provably fair
algorithm based on "chained" confidence intervals, and provide a cumulative
regret bound with a cubic dependence on the number of arms. We further show
that any fair algorithm must have such a dependence. When combined with regret
bounds for standard non-fair algorithms such as UCB, this proves a strong
separation between fair and unfair learning, which extends to the general
contextual case.
In the general contextual case, we prove a tight connection between fairness
and the KWIK (Knows What It Knows) learning model: a KWIK algorithm for a class
of functions can be transformed into a provably fair contextual bandit
algorithm, and conversely any fair contextual bandit algorithm can be
transformed into a KWIK learning algorithm. This tight connection allows us to
provide a provably fair algorithm for the linear contextual bandit problem with
a polynomial dependence on the dimension, and to show (for a different class of
functions) a worst-case exponential gap in regret between fair and non-fair
learning algorithms.
| Matthew Joseph and Michael Kearns and Jamie Morgenstern and Aaron Roth | null | 1605.07139 | null | null |
Actively Learning Hemimetrics with Applications to Eliciting User
Preferences | stat.ML cs.LG | Motivated by an application of eliciting users' preferences, we investigate
the problem of learning hemimetrics, i.e., pairwise distances among a set of
$n$ items that satisfy triangle inequalities and non-negativity constraints. In
our application, the (asymmetric) distances quantify private costs a user
incurs when substituting one item by another. We aim to learn these distances
(costs) by asking the users whether they are willing to switch from one item to
another for a given incentive offer. Without exploiting structural constraints
of the hemimetric polytope, learning the distances between each pair of items
requires $\Theta(n^2)$ queries. We propose an active learning algorithm that
substantially reduces this sample complexity by exploiting the structural
constraints on the version space of hemimetrics. Our proposed algorithm
achieves provably-optimal sample complexity for various instances of the task.
For example, when the items are embedded into $K$ tight clusters, the sample
complexity of our algorithm reduces to $O(n K)$. Extensive experiments on a
restaurant recommendation data set support the conclusions of our theoretical
analysis.
| Adish Singla, Sebastian Tschiatschek, Andreas Krause | null | 1605.07144 | null | null |
On Optimality Conditions for Auto-Encoder Signal Recovery | stat.ML cs.LG cs.NE | Auto-Encoders are unsupervised models that aim to learn patterns from
observed data by minimizing a reconstruction cost. The useful representations
learned are often found to be sparse and distributed. On the other hand,
compressed sensing and sparse coding assume a data generating process, where
the observed data is generated from some true latent signal source, and try to
recover the corresponding signal from measurements. Looking at auto-encoders
from this \textit{signal recovery perspective} enables us to have a more
coherent view of these techniques. In this paper, in particular, we show that
the \textit{true} hidden representation can be approximately recovered if the
weight matrices are highly incoherent with unit $ \ell^{2} $ row length and the
bias vectors take values (approximately) equal to the negative of the data
mean. The recovery also becomes more and more accurate as the sparsity in
hidden signals increases. Additionally, we empirically demonstrate that
auto-encoders are capable of recovering the data generating dictionary when
only data samples are given.
| Devansh Arpit, Yingbo Zhou, Hung Q. Ngo, Nils Napp, Venu Govindaraju | null | 1605.07145 | null | null |
Wide Residual Networks | cs.CV cs.LG cs.NE | Deep residual networks were shown to be able to scale up to thousands of
layers and still have improving performance. However, each fraction of a
percent of improved accuracy costs nearly doubling the number of layers, and so
training very deep residual networks has a problem of diminishing feature
reuse, which makes these networks very slow to train. To tackle these problems,
in this paper we conduct a detailed experimental study on the architecture of
ResNet blocks, based on which we propose a novel architecture where we decrease
depth and increase width of residual networks. We call the resulting network
structures wide residual networks (WRNs) and show that these are far superior
to their commonly used thin and very deep counterparts. For example, we
demonstrate that even a simple 16-layer-deep wide residual network outperforms
in accuracy and efficiency all previous deep residual networks, including
thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR,
SVHN, COCO, and significant improvements on ImageNet. Our code and models are
available at https://github.com/szagoruyko/wide-residual-networks
| Sergey Zagoruyko, Nikos Komodakis | null | 1605.07146 | null | null |
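A minimal PyTorch sketch of the kind of widened residual block the entry above describes; the pre-activation ordering and the widening factor follow the general WRN recipe, but this is not the authors' released code (that is linked in the abstract), and the channel counts are arbitrary.

```python
import torch
import torch.nn as nn

class WideBasicBlock(nn.Module):
    # pre-activation residual block; width is controlled by the channel count
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.shortcut = (nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)
                         if stride != 1 or in_ch != out_ch else nn.Identity())

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return out + self.shortcut(x)

block = WideBasicBlock(16, 16 * 10)        # widening factor k = 10
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)                             # (1, 160, 32, 32)
```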
Riemannian SVRG: Fast Stochastic Optimization on Riemannian Manifolds | math.OC cs.LG | We study optimization of finite sums of geodesically smooth functions on
Riemannian manifolds. Although variance reduction techniques for optimizing
finite-sums have witnessed tremendous attention in the recent years, existing
work is limited to vector space problems. We introduce Riemannian SVRG (RSVRG),
a new variance reduced Riemannian optimization method. We analyze RSVRG for
both geodesically convex and nonconvex (smooth) functions. Our analysis reveals
that RSVRG inherits advantages of the usual SVRG method, but with factors
depending on curvature of the manifold that influence its convergence. To our
knowledge, RSVRG is the first provably fast stochastic Riemannian method.
Moreover, our paper presents the first non-asymptotic complexity analysis
(novel even for the batch setting) for nonconvex Riemannian optimization. Our
results have several implications; for instance, they offer a Riemannian
perspective on variance reduced PCA, which promises a short, transparent
convergence analysis.
| Hongyi Zhang, Sashank J. Reddi, Suvrit Sra | null | 1605.07147 | null | null |
Backprop KF: Learning Discriminative Deterministic State Estimators | cs.LG cs.AI | Generative state estimators based on probabilistic filters and smoothers are
one of the most popular classes of state estimators for robots and autonomous
vehicles. However, generative models have limited capacity to handle rich
sensory observations, such as camera images, since they must model the entire
distribution over sensor readings. Discriminative models do not suffer from
this limitation, but are typically more complex to train as latent variable
models for state estimation. We present an alternative approach where the
parameters of the latent state distribution are directly optimized as a
deterministic computation graph, resulting in a simple and effective gradient
descent algorithm for training discriminative state estimators. We show that
this procedure can be used to train state estimators that use complex input,
such as raw camera images, which must be processed using expressive nonlinear
function approximators such as convolutional neural networks. Our model can be
viewed as a type of recurrent neural network, and the connection to
probabilistic filtering allows us to design a network architecture that is
particularly well suited for state estimation. We evaluate our approach on
a synthetic tracking task with raw image inputs and on the visual odometry task
in the KITTI dataset. The results show significant improvement over both
standard generative approaches and regular recurrent neural networks.
| Tuomas Haarnoja, Anurag Ajay, Sergey Levine, Pieter Abbeel | null | 1605.07148 | null | null |
Path-Normalized Optimization of Recurrent Neural Networks with ReLU
Activations | cs.LG cs.NE | We investigate the parameter-space geometry of recurrent neural networks
(RNNs), and develop an adaptation of path-SGD optimization method, attuned to
this geometry, that can learn plain RNNs with ReLU activations. On several
datasets that require capturing long-term dependency structure, we show that
path-SGD can significantly improve trainability of ReLU RNNs compared to RNNs
trained with SGD, even with various recently suggested initialization schemes.
| Behnam Neyshabur, Yuhuai Wu, Ruslan Salakhutdinov, Nathan Srebro | null | 1605.07154 | null | null |
Genetic Architect: Discovering Genomic Structure with Learned Neural
Architectures | cs.LG cs.AI cs.NE stat.ML | Each human genome is a 3 billion base pair set of encoding instructions.
Decoding the genome using deep learning fundamentally differs from most tasks,
as we do not know the full structure of the data and therefore cannot design
architectures to suit it. As such, architectures that fit the structure of
genomics should be learned not prescribed. Here, we develop a novel search
algorithm, applicable across domains, that discovers an optimal architecture
which simultaneously learns general genomic patterns and identifies the most
important sequence motifs in predicting functional genomic outcomes. The
architectures we find using this algorithm succeed at using only RNA expression
data to predict gene regulatory structure, learn human-interpretable
visualizations of key sequence motifs, and surpass state-of-the-art results on
benchmark genomics challenges.
| Laura Deming, Sasha Targ, Nate Sauder, Diogo Almeida, Chun Jimmie Ye | null | 1605.07156 | null | null |
Unsupervised Learning for Physical Interaction through Video Prediction | cs.LG cs.AI cs.CV cs.RO | A core challenge for an agent learning to interact with the world is to
predict how its actions affect objects in its environment. Many existing
methods for learning the dynamics of physical interactions require labeled
object information. However, to scale real-world interaction learning to a
variety of scenes and objects, acquiring labeled data becomes increasingly
impractical. To learn about physical object motion without labels, we develop
an action-conditioned video prediction model that explicitly models pixel
motion, by predicting a distribution over pixel motion from previous frames.
Because our model explicitly predicts motion, it is partially invariant to
object appearance, enabling it to generalize to previously unseen objects. To
explore video prediction for real-world interactive agents, we also introduce a
dataset of 59,000 robot interactions involving pushing motions, including a
test set with novel objects. In this dataset, accurate prediction of videos
conditioned on the robot's future actions amounts to learning a "visual
imagination" of different futures based on different courses of action. Our
experiments show that our proposed method produces more accurate video
predictions both quantitatively and qualitatively, when compared to prior
methods.
| Chelsea Finn, Ian Goodfellow, Sergey Levine | null | 1605.07157 | null | null |
Pure Exploration of Multi-armed Bandit Under Matroid Constraints | cs.LG cs.DS | We study the pure exploration problem subject to a matroid constraint
(Best-Basis) in a stochastic multi-armed bandit game. In a Best-Basis instance,
we are given $n$ stochastic arms with unknown reward distributions, as well as
a matroid $\mathcal{M}$ over the arms. Let the weight of an arm be the mean of
its reward distribution. Our goal is to identify a basis of $\mathcal{M}$ with
the maximum total weight, using as few samples as possible.
The problem is a significant generalization of the best arm identification
problem and the top-$k$ arm identification problem, which have attracted
significant attention in recent years. We study both the exact and PAC
versions of Best-Basis, and provide algorithms with nearly-optimal sample
complexities for these versions. Our results generalize and/or improve on
several previous results for the top-$k$ arm identification problem and the
combinatorial pure exploration problem when the combinatorial constraint is a
matroid.
| Lijie Chen, Anupam Gupta, Jian Li | null | 1605.07162 | null | null |
Kernel-based Reconstruction of Graph Signals | stat.ML cs.LG | A number of applications in engineering, social sciences, physics, and
biology involve inference over networks. In this context, graph signals are
widely encountered as descriptors of vertex attributes or features in
graph-structured data. Estimating such signals in all vertices given noisy
observations of their values on a subset of vertices has been extensively
analyzed in the literature of signal processing on graphs (SPoG). This paper
advocates kernel regression as a framework generalizing popular SPoG modeling
and reconstruction and expanding their capabilities. Formulating signal
reconstruction as a regression task on reproducing kernel Hilbert spaces of
graph signals permeates benefits from statistical learning, offers fresh
insights, and allows for estimators to leverage richer forms of prior
information than existing alternatives. A number of SPoG notions such as
bandlimitedness, graph filters, and the graph Fourier transform are naturally
accommodated in the kernel framework. Additionally, this paper capitalizes on
the so-called representer theorem to devise simpler versions of existing
Tikhonov-regularized estimators, and offers a novel probabilistic
interpretation of kernel methods on graphs based on graphical models. Motivated
by the challenges of selecting the bandwidth parameter in SPoG estimators or
the kernel map in kernel-based methods, the present paper further proposes two
multi-kernel approaches with complementary strengths. Whereas the first enables
estimation of the unknown bandwidth of bandlimited signals, the second allows
for efficient graph filter selection. Numerical tests with synthetic as well as
real data demonstrate the merits of the proposed methods relative to
state-of-the-art alternatives.
| Daniel Romero, Meng Ma, Georgios B. Giannakis | 10.1109/TSP.2016.2620116 | 1605.07174 | null | null |
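To ground the entry above, here is one concrete instance of kernel-based reconstruction of a graph signal: a diffusion kernel built from the graph Laplacian followed by kernel ridge regression on the sampled vertices. The kernel choice and regularization constant are arbitrary, and the paper's multi-kernel and bandwidth-selection machinery is not shown.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(A, beta=1.0):
    L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
    return expm(-beta * L)

def reconstruct(A, sampled, y_obs, beta=1.0, reg=1e-2):
    # kernel ridge regression: estimate the signal on every vertex from
    # noisy observations y_obs on the vertex subset `sampled`
    K = diffusion_kernel(A, beta)
    K_ss = K[np.ix_(sampled, sampled)]
    alpha = np.linalg.solve(K_ss + reg * np.eye(len(sampled)), y_obs)
    return K[:, sampled] @ alpha

# tiny example: a 4-cycle graph with the signal observed on two vertices
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(reconstruct(A, sampled=[0, 2], y_obs=np.array([1.0, -1.0])))
```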
Global Optimality of Local Search for Low Rank Matrix Recovery | stat.ML cs.LG math.OC | We show that there are no spurious local minima in the non-convex factorized
parametrization of low-rank matrix recovery from incoherent linear
measurements. With noisy measurements we show all local minima are very close
to a global optimum. Together with a curvature bound at saddle points, this
yields a polynomial time global convergence guarantee for stochastic gradient
descent {\em from random initialization}.
| Srinadh Bhojanapalli, Behnam Neyshabur, Nathan Srebro | null | 1605.07221 | null | null |
Deep Portfolio Theory | q-fin.PM cs.LG | We construct a deep portfolio theory. By building on Markowitz's classic
risk-return trade-off, we develop a self-contained four-step routine of encode,
calibrate, validate and verify to formulate an automated and general portfolio
selection process. At the heart of our algorithm are deep hierarchical
compositions of portfolios constructed in the encoding step. The calibration
step then provides multivariate payouts in the form of deep hierarchical
portfolios that are designed to target a variety of objective functions. The
validate step trades off the amount of regularization used in the encode and
calibrate steps. The verification step uses a cross validation approach to
trace out an ex post deep portfolio efficient frontier. We demonstrate all four
steps of our portfolio theory numerically.
| J. B. Heaton, N. G. Polson, J. H. Witte | null | 1605.07230 | null | null |
Adaptive ADMM with Spectral Penalty Parameter Selection | cs.LG cs.AI cs.NA | The alternating direction method of multipliers (ADMM) is a versatile tool
for solving a wide range of constrained optimization problems, with
differentiable or non-differentiable objective functions. Unfortunately, its
performance is highly sensitive to a penalty parameter, which makes ADMM often
unreliable and hard to automate for a non-expert user. We tackle this weakness
of ADMM by proposing a method to adaptively tune the penalty parameters to
achieve fast convergence. The resulting adaptive ADMM (AADMM) algorithm,
inspired by the successful Barzilai-Borwein spectral method for gradient
descent, yields fast convergence and relative insensitivity to the initial
stepsize and problem scaling.
| Zheng Xu, Mario A. T. Figueiredo, Tom Goldstein | null | 1605.07246 | null | null |
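For context on the entry above, a bare-bones ADMM loop for the lasso shows where the penalty parameter $\rho$ enters; here $\rho$ is fixed by hand, which is exactly the sensitivity the adaptive scheme targets. The spectral (Barzilai-Borwein style) update itself is not reproduced here.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # ADMM for min_x 0.5 * ||A x - b||^2 + lam * ||z||_1  subject to  x = z
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))  # x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # z-update
        u = u + x - z                                                    # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(admm_lasso(A, b, lam=0.1))
```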
Interaction Screening: Efficient and Sample-Optimal Learning of Ising
Models | cs.LG cond-mat.stat-mech cs.IT math.IT math.ST stat.ML stat.TH | We consider the problem of learning the underlying graph of an unknown Ising
model on p spins from a collection of i.i.d. samples generated from the model.
We suggest a new estimator that is computationally efficient and requires a
number of samples that is near-optimal with respect to previously established
information-theoretic lower-bound. Our statistical estimator has a physical
interpretation in terms of "interaction screening". The estimator is consistent
and is efficiently implemented using convex optimization. We prove that with
appropriate regularization, the estimator recovers the underlying graph using a
number of samples that is logarithmic in the system size p and exponential in
the maximum coupling-intensity and maximum node-degree.
| Marc Vuffray, Sidhant Misra, Andrey Y. Lokhov and Michael Chertkov | null | 1605.07252 | null | null |
Measuring Neural Net Robustness with Constraints | cs.LG cs.CV cs.NE | Despite having high accuracy, neural nets have been shown to be susceptible
to adversarial examples, where a small perturbation to an input can cause it to
become mislabeled. We propose metrics for measuring the robustness of a neural
net and devise a novel algorithm for approximating these metrics based on an
encoding of robustness as a linear program. We show how our metrics can be used
to evaluate the robustness of deep neural nets with experiments on the MNIST
and CIFAR-10 datasets. Our algorithm generates more informative estimates of
robustness metrics compared to estimates based on existing algorithms.
Furthermore, we show how existing approaches to improving robustness "overfit"
to adversarial examples generated using a specific algorithm. Finally, we show
that our techniques can be used to additionally improve neural net robustness
not only according to the metrics that we propose, but also according to previously
proposed metrics.
| Osbert Bastani, Yani Ioannou, Leonidas Lampropoulos, Dimitrios
Vytiniotis, Aditya Nori, Antonio Criminisi | null | 1605.07262 | null | null |
Matrix Completion has No Spurious Local Minimum | cs.LG cs.DS stat.ML | Matrix completion is a basic machine learning problem that has wide
applications, especially in collaborative filtering and recommender systems.
Simple non-convex optimization algorithms are popular and effective in
practice. Despite recent progress in proving various non-convex algorithms
converge from a good initial point, it remains unclear why random or arbitrary
initialization suffices in practice. We prove that the commonly used non-convex
objective function for \textit{positive semidefinite} matrix completion has no
spurious local minima --- all local minima must also be global. Therefore, many
popular optimization algorithms such as (stochastic) gradient descent can
provably solve positive semidefinite matrix completion with \textit{arbitrary}
initialization in polynomial time. The result can be generalized to the setting
when the observed entries contain noise. We believe that our main proof
strategy can be useful for understanding geometric properties of other
statistical problems involving partial or noisy observations.
| Rong Ge, Jason D. Lee, Tengyu Ma | null | 1605.07272 | null | null |
Transferability in Machine Learning: from Phenomena to Black-Box Attacks
using Adversarial Samples | cs.CR cs.LG | Many machine learning models are vulnerable to adversarial examples: inputs
that are specially crafted to cause a machine learning model to produce an
incorrect output. Adversarial examples that affect one model often affect
another model, even if the two models have different architectures or were
trained on different training sets, so long as both models were trained to
perform the same task. An attacker may therefore train their own substitute
model, craft adversarial examples against the substitute, and transfer them to
a victim model, with very little information about the victim. Recent work has
further developed a technique that uses the victim model as an oracle to label
a synthetic training set for the substitute, so the attacker need not even
collect a training set to mount the attack. We extend these recent techniques
using reservoir sampling to greatly enhance the efficiency of the training
procedure for the substitute model. We introduce new transferability attacks
between previously unexplored (substitute, victim) pairs of machine learning
model classes, most notably SVMs and decision trees. We demonstrate our attacks
on two commercial machine learning classification systems from Amazon (96.19%
misclassification rate) and Google (88.94%) using only 800 queries of the
victim model, thereby showing that existing machine learning approaches are in
general vulnerable to systematic black-box attacks regardless of their
structure.
| Nicolas Papernot and Patrick McDaniel and Ian Goodfellow | null | 1605.07277 | null | null |
Near-optimal Bayesian Active Learning with Correlated and Noisy Tests | cs.LG cs.AI | We consider the Bayesian active learning and experimental design problem,
where the goal is to learn the value of some unknown target variable through a
sequence of informative, noisy tests. In contrast to prior work, we focus on
the challenging, yet practically relevant setting where test outcomes can be
conditionally dependent given the hidden target variable. Under such
assumptions, common heuristics, such as greedily performing tests that maximize
the reduction in uncertainty of the target, often perform poorly. In this
paper, we propose ECED, a novel, computationally efficient active learning
algorithm, and prove strong theoretical guarantees that hold with correlated,
noisy tests. Rather than directly optimizing the prediction error, at each
step, ECED picks the test that maximizes the gain in a surrogate objective,
which takes into account the dependencies between tests. Our analysis relies on
an information-theoretic auxiliary function to track the progress of ECED, and
utilizes adaptive submodularity to attain the near-optimal bound. We
demonstrate strong empirical performance of ECED on two problem instances,
including a Bayesian experimental design task intended to distinguish among
economic theories of how people make risky decisions, and an active preference
learning task via pairwise comparisons.
| Yuxin Chen, S. Hamed Hassani, Andreas Krause | null | 1605.07334 | null | null |
Riemannian stochastic variance reduced gradient on Grassmann manifold | cs.LG cs.NA math.OC stat.ML | Stochastic variance reduction algorithms have recently become popular for
minimizing the average of a large, but finite, number of loss functions. In
this paper, we propose a novel Riemannian extension of the Euclidean stochastic
variance reduced gradient algorithm (R-SVRG) to a compact manifold search
space. To this end, we show the developments on the Grassmann manifold. The key
challenges of averaging, addition, and subtraction of multiple gradients are
addressed with notions like logarithm mapping and parallel translation of
vectors on the Grassmann manifold. We present a global convergence analysis of
the proposed algorithm with decay step-sizes and a local convergence rate
analysis under fixed step-size with some natural assumptions. The proposed
algorithm is applied on a number of problems on the Grassmann manifold like
principal components analysis, low-rank matrix completion, and the Karcher mean
computation. In all these cases, the proposed algorithm outperforms the
standard Riemannian stochastic gradient descent algorithm.
| Hiroyuki Kasai, Hiroyuki Sato, and Bamdev Mishra | null | 1605.07367 | null | null |
Refined Lower Bounds for Adversarial Bandits | math.ST cs.LG stat.ML stat.TH | We provide new lower bounds on the regret that must be suffered by
adversarial bandit algorithms. The new results show that recent upper bounds
that either (a) hold with high probability or (b) depend on the total loss of
the best arm or (c) depend on the quadratic variation of the losses, are close
to tight. Besides this we prove two impossibility results. First, the existence
of a single arm that is optimal in every round cannot improve the regret in the
worst case. Second, the regret cannot scale with the effective range of the
losses. In contrast, both results are possible in the full-information setting.
| S\'ebastien Gerchinovitz (IMT, AOC), Tor Lattimore | null | 1605.07416 | null | null |
Computing Web-scale Topic Models using an Asynchronous Parameter Server | cs.DC cs.IR cs.LG stat.ML | Topic models such as Latent Dirichlet Allocation (LDA) have been widely used
in information retrieval for tasks ranging from smoothing and feedback methods
to tools for exploratory search and discovery. However, classical methods for
inferring topic models do not scale up to the massive size of today's publicly
available Web-scale data sets. The state-of-the-art approaches rely on custom
strategies, implementations and hardware to facilitate their asynchronous,
communication-intensive workloads.
We present APS-LDA, which integrates state-of-the-art topic modeling with
cluster computing frameworks such as Spark using a novel asynchronous parameter
server. Advantages of this integration include convenient usage of existing
data processing pipelines and eliminating the need for disk writes as data can
be kept in memory from start to finish. Our goal is not to outperform highly
customized implementations, but to propose a general high-performance topic
modeling framework that can easily be used in today's data processing
pipelines. We compare APS-LDA to the existing Spark LDA implementations and
show that our system can, on a 480-core cluster, process up to 135 times more
data and 10 times more topics without sacrificing model quality.
| Rolf Jagerman, Carsten Eickhoff and Maarten de Rijke | 10.1145/3077136.3084135 | 1605.07422 | null | null |
Hierarchical Memory Networks | stat.ML cs.CL cs.LG cs.NE | Memory networks are neural networks with an explicit memory component that
can be both read and written to by the network. The memory is often addressed
in a soft way using a softmax function, making end-to-end training with
backpropagation possible. However, this is not computationally scalable for
applications which require the network to read from extremely large memories.
On the other hand, it is well known that hard attention mechanisms based on
reinforcement learning are challenging to train successfully. In this paper, we
explore a form of hierarchical memory network, which can be considered as a
hybrid between hard and soft attention memory networks. The memory is organized
in a hierarchical structure such that reading from it is done with less
computation than soft attention over a flat memory, while also being easier to
train than hard attention over a flat memory. Specifically, we propose to
incorporate Maximum Inner Product Search (MIPS) in the training and inference
procedures for our hierarchical memory network. We explore the use of various
state-of-the-art approximate MIPS techniques and report results on
SimpleQuestions, a challenging large scale factoid question answering task.
| Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald
Tesauro, Yoshua Bengio | null | 1605.07427 | null | null |
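The bottleneck the entry above targets is visible in the exact form of maximum inner product search below: every memory slot is scored against the query. The hierarchical and approximate MIPS structures explored in the paper replace this exhaustive scan; the snippet is only the naive baseline.

```python
import numpy as np

def exact_mips(memory, query, k=5):
    # exact top-k maximum inner product search over all memory slots
    scores = memory @ query                    # one dot product per slot
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])]       # slot indices sorted by score

rng = np.random.default_rng(0)
memory = rng.standard_normal((50000, 64))      # large flat memory
query = rng.standard_normal(64)
print(exact_mips(memory, query))
```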
Alternating Optimisation and Quadrature for Robust Control | cs.LG cs.AI stat.ML | Bayesian optimisation has been successfully applied to a variety of
reinforcement learning problems. However, the traditional approach for learning
optimal policies in simulators does not utilise the opportunity to improve
learning by adjusting certain environment variables: state features that are
unobservable and randomly determined by the environment in a physical setting
but are controllable in a simulator. This paper considers the problem of
finding a robust policy while taking into account the impact of environment
variables. We present Alternating Optimisation and Quadrature (ALOQ), which
uses Bayesian optimisation and Bayesian quadrature to address such settings.
ALOQ is robust to the presence of significant rare events, which may not be
observable under random sampling, but play a substantial role in determining
the optimal policy. Experimental results across different domains show that
ALOQ can learn more efficiently and robustly than existing methods.
| Supratik Paul, Konstantinos Chatzilygeroudis, Kamil Ciosek,
Jean-Baptiste Mouret, Michael A. Osborne, Shimon Whiteson | null | 1605.07496 | null | null |
Leveraging Over Priors for Boosting Control of Prosthetic Hands | cs.LG | The Electromyography (EMG) signal is the electrical activity produced by
cells of skeletal muscles in order to produce a movement. The non-invasive
prosthetic hand works with several electrodes, placed on the stump of an
amputee, that record this signal. In order to facilitate control of the
prosthesis, the EMG signal is analyzed with machine learning algorithms to
determine the movement that the subject intends to perform. In order to obtain
significant control of the prosthesis and avoid mismatches between desired and
performed movements, a long training period is needed when traditional machine
learning algorithms (e.g., Support Vector Machines) are used. A current
challenge in this field is reducing the time necessary for an amputee to learn
how to use the prosthesis. Recently, several algorithms that exploit a form of
prior knowledge have been proposed. In general, we refer to prior knowledge as
past experience available in the form of models. In our case, an amputee who
attempts to perform some movements with the prosthesis could use experience
from different subjects who are already able to perform those movements. The
aim of this work is to verify, with a computational investigation, whether
this kind of previous experience is useful for an amputee in order to reduce
the training time and boost prosthetic control. Furthermore, we want to
understand if and how the final results change when the previous knowledge of
intact or amputated subjects is used for a new amputee. Our experiments
indicate that: (1) using experience from other subjects already trained to
perform a task allows the training time to be reduced by about an order of
magnitude; (2) an amputee who learns to use the prosthesis appears to reach
similar results whether he/she exploits previous experience from amputees or
from intact subjects.
| Valentina Gregori | null | 1605.07498 | null | null |
Inductive supervised quantum learning | cs.LG quant-ph stat.ML | In supervised learning, an inductive learning algorithm extracts general
rules from observed training instances, then the rules are applied to test
instances. We show that this splitting of training and application arises
naturally, in the classical setting, from a simple independence requirement
with a physical interpretation of being non-signalling. Thus, two seemingly
different definitions of inductive learning happen to coincide. This follows
from the properties of classical information that break down in the quantum
setup. We prove a quantum de Finetti theorem for quantum channels, which shows
that in the quantum case, the equivalence holds in the asymptotic setting, that
is, for a large number of test instances. This reveals a natural analogy between
classical learning protocols and their quantum counterparts, justifying a
similar treatment, and allowing one to inquire about standard elements in
computational learning theory, such as structural risk minimization and sample
complexity.
| Alex Monr\`as, Gael Sent\'is, Peter Wittek | 10.1103/PhysRevLett.118.190503 | 1605.07541 | null | null |
Sequential Neural Models with Stochastic Layers | stat.ML cs.LG | How can we efficiently propagate uncertainty in a latent state representation
with recurrent neural networks? This paper introduces stochastic recurrent
neural networks which glue a deterministic recurrent neural network and a state
space model together to form a stochastic and sequential neural generative
model. The clear separation of deterministic and stochastic layers allows a
structured variational inference network to track the factorization of the
model's posterior distribution. By retaining both the nonlinear recursive
structure of a recurrent neural network and averaging over the uncertainty in a
latent path, like a state space model, we improve the state-of-the-art results
on the Blizzard and TIMIT speech modeling data sets by a large margin, while
achieving performance comparable to competing methods on polyphonic music
modeling.
| Marco Fraccaro, S{\o}ren Kaae S{\o}nderby, Ulrich Paquet, Ole Winther | null | 1605.07571 | null | null |
Recursive Sampling for the Nystr\"om Method | cs.LG cs.DS stat.ML | We give the first algorithm for kernel Nystr\"om approximation that runs in
*linear time in the number of training points* and is provably accurate for all
kernel matrices, without dependence on regularity or incoherence conditions.
The algorithm projects the kernel onto a set of $s$ landmark points sampled by
their *ridge leverage scores*, requiring just $O(ns)$ kernel evaluations and
$O(ns^2)$ additional runtime. While leverage score sampling has long been known
to give strong theoretical guarantees for Nystr\"om approximation, by employing
a fast recursive sampling scheme, our algorithm is the first to make the
approach scalable. Empirically we show that it finds more accurate, lower rank
kernel approximations in less time than popular techniques such as uniformly
sampled Nystr\"om approximation and the random Fourier features method.
| Cameron Musco and Christopher Musco | null | 1605.07583 | null | null |
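[Illustrative sketch for the Nystr\"om entry above: a minimal, non-recursive version of ridge-leverage-score sampling. The paper's contribution is the fast recursive sampler; here the scores are computed exactly from the full kernel matrix, which is only feasible for small n, and the kernel, ridge parameter, and landmark count are arbitrary choices.]

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
K = rbf_kernel(X, X)

lam = 1.0                                   # ridge parameter
s = 50                                      # number of landmarks
n = len(K)
# Exact ridge leverage scores: diag(K (K + lam I)^-1).
scores = np.diag(K @ np.linalg.solve(K + lam * np.eye(n), np.eye(n)))
probs = scores / scores.sum()
landmarks = rng.choice(n, size=s, replace=False, p=probs)

# Standard Nystrom approximation K ~ C W^+ C^T built from the landmarks.
C = K[:, landmarks]
W = K[np.ix_(landmarks, landmarks)]
K_approx = C @ np.linalg.pinv(W) @ C.T
print(np.linalg.norm(K - K_approx) / np.linalg.norm(K))
```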
A Consistent Regularization Approach for Structured Prediction | cs.LG stat.ML | We propose and analyze a regularization approach for structured prediction
problems. We characterize a large class of loss functions that allows
structured outputs to be naturally embedded in a linear space. We exploit this fact to
design learning algorithms using a surrogate loss approach and regularization
techniques. We prove universal consistency and finite sample bounds
characterizing the generalization properties of the proposed methods.
Experimental results are provided to demonstrate the practical usefulness of
the proposed approach.
| Carlo Ciliberto, Alessandro Rudi, Lorenzo Rosasco | null | 1605.07588 | null | null |
Adaptive Newton Method for Empirical Risk Minimization to Statistical
Accuracy | cs.LG math.OC | We consider empirical risk minimization for large-scale datasets. We
introduce Ada Newton as an adaptive algorithm that uses Newton's method with
adaptive sample sizes. The main idea of Ada Newton is to increase the size of
the training set by a factor larger than one in a way that the minimization
variable for the current training set is in the local neighborhood of the
optimal argument of the next training set. This allows us to exploit the quadratic
convergence property of Newton's method and reach the statistical accuracy of
each training set with only one iteration of Newton's method. We show
theoretically and empirically that Ada Newton can double the size of the
training set in each iteration to achieve the statistical accuracy of the full
training set with about two passes over the dataset.
| Aryan Mokhtari and Alejandro Ribeiro | null | 1605.07659 | null | null |
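[A toy sketch of the adaptive-sample-size idea in the Ada Newton entry above, using regularized logistic regression: one Newton step per stage, doubling the training subset each time. The initial size, doubling factor, regularization, and stopping rule are illustrative, and the paper's conditions for when doubling preserves the statistical-accuracy guarantee are omitted.]

```python
import numpy as np

def logistic_newton_step(w, X, y, reg=1e-3):
    """One regularized Newton step for logistic regression on (X, y)."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    grad = X.T @ (p - y) / len(y) + reg * w
    W = p * (1 - p)
    hess = (X * W[:, None]).T @ X / len(y) + reg * np.eye(X.shape[1])
    return w - np.linalg.solve(hess, grad)

rng = np.random.default_rng(0)
n, d = 20_000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

w = np.zeros(d)
m = 250                                        # initial training-set size
while True:
    w = logistic_newton_step(w, X[:m], y[:m])  # one Newton step per stage
    if m == n:
        break
    m = min(2 * m, n)                          # double the sample size, capped at n
print(w)
```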
On-line Active Reward Learning for Policy Optimisation in Spoken
Dialogue Systems | cs.CL cs.LG | The ability to compute an accurate reward function is essential for
optimising a dialogue policy via reinforcement learning. In real-world
applications, using explicit user feedback as the reward signal is often
unreliable and costly to collect. This problem can be mitigated if the user's
intent is known in advance or data is available to pre-train a task success
predictor off-line. In practice, neither of these applies to most real-world
applications. Here we propose an on-line learning framework whereby the
dialogue policy is jointly trained alongside the reward model via active
learning with a Gaussian process model. This Gaussian process operates on a
continuous space dialogue representation generated in an unsupervised fashion
using a recurrent neural network encoder-decoder. The experimental results
demonstrate that the proposed framework is able to significantly reduce data
annotation costs and mitigate noisy user feedback in dialogue policy learning.
| Pei-Hao Su and Milica Gasic and Nikola Mrksic and Lina Rojas-Barahona
and Stefan Ultes and David Vandyke and Tsung-Hsien Wen and Steve Young | null | 1605.07669 | null | null |
Communication-Efficient Distributed Statistical Inference | stat.ML cs.IT cs.LG math.IT math.OC stat.ME | We present a Communication-efficient Surrogate Likelihood (CSL) framework for
solving distributed statistical inference problems. CSL provides a
communication-efficient surrogate to the global likelihood that can be used for
low-dimensional estimation, high-dimensional regularized estimation and
Bayesian inference. For low-dimensional estimation, CSL provably improves upon
naive averaging schemes and facilitates the construction of confidence
intervals. For high-dimensional regularized estimation, CSL leads to a
minimax-optimal estimator with controlled communication cost. For Bayesian
inference, CSL can be used to form a communication-efficient quasi-posterior
distribution that converges to the true posterior. This quasi-posterior
procedure significantly improves the computational efficiency of MCMC
algorithms even in a non-distributed setting. We present both theoretical
analysis and experiments to explore the properties of the CSL approximation.
| Michael I. Jordan, Jason D. Lee, Yun Yang | null | 1605.07689 | null | null |
Learning Purposeful Behaviour in the Absence of Rewards | cs.LG cs.AI | Artificial intelligence is commonly defined as the ability to achieve goals
in the world. In the reinforcement learning framework, goals are encoded as
reward functions that guide agent behaviour, and the sum of observed rewards
provides a notion of progress. However, some domains have no such reward signal,
or have a reward signal so sparse as to appear absent. Without reward feedback,
agent behaviour is typically random, often dithering aimlessly and lacking
intentionality. In this paper we present an algorithm capable of learning
purposeful behaviour in the absence of rewards. The algorithm proceeds by
constructing temporally extended actions (options), through the identification
of purposes that are "just out of reach" of the agent's current behaviour.
These purposes establish intrinsic goals for the agent to learn, ultimately
resulting in a suite of behaviours that encourage the agent to visit different
parts of the state space. Moreover, the approach is particularly suited for
settings where rewards are very sparse, and such behaviours can help in the
exploration of the environment until reward is observed.
| Marlos C. Machado and Michael Bowling | null | 1605.07700 | null | null |
Deep Structured Energy Based Models for Anomaly Detection | cs.LG stat.ML | In this paper, we attack the anomaly detection problem by directly modeling
the data distribution with deep architectures. We propose deep structured
energy based models (DSEBMs), where the energy function is the output of a
deterministic deep neural network with structure. We develop novel model
architectures to integrate EBMs with different types of data such as static
data, sequential data, and spatial data, and apply appropriate model
architectures to adapt to the data structure. Our training algorithm is built
upon the recent development of score matching \cite{sm}, which connects an EBM
with a regularized autoencoder, eliminating the need for complicated sampling
methods. A statistically sound decision criterion can be derived for anomaly
detection purpose from the perspective of the energy landscape of the data
distribution. We investigate two decision criteria for performing anomaly
detection: the energy score and the reconstruction error. Extensive empirical
studies on benchmark tasks demonstrate that our proposed model consistently
matches or outperforms all the competing methods.
| Shuangfei Zhai, Yu Cheng, Weining Lu, Zhongfei Zhang | null | 1605.07717 | null | null |
Reshaped Wirtinger Flow and Incremental Algorithm for Solving Quadratic
System of Equations | stat.ML cs.LG | We study the phase retrieval problem, which solves a quadratic system of
equations, i.e., recovers a vector $\boldsymbol{x}\in \mathbb{R}^n$ from its
magnitude measurements $y_i=|\langle \boldsymbol{a}_i, \boldsymbol{x}\rangle|,
i=1,..., m$. We develop a gradient-like algorithm (referred to as RWF
representing reshaped Wirtinger flow) by minimizing a nonconvex nonsmooth loss
function. In comparison with existing nonconvex Wirtinger flow (WF) algorithm
\cite{candes2015phase}, although the loss function becomes nonsmooth, it
involves only the second power of variable and hence reduces the complexity. We
show that for random Gaussian measurements, RWF enjoys geometric convergence to
a global optimal point as long as the number $m$ of measurements is on the
order of $n$, the dimension of the unknown $\boldsymbol{x}$. This improves the
sample complexity of WF, and achieves the same sample complexity as truncated
Wirtinger flow (TWF) \cite{chen2015solving}, but without truncation in gradient
loop. Furthermore, RWF costs less computationally than WF, and runs faster
numerically than both WF and TWF. We further develop the incremental
(stochastic) reshaped Wirtinger flow (IRWF) and show that IRWF converges
linearly to the true signal. We further establish performance guarantee of an
existing Kaczmarz method for the phase retrieval problem based on its
connection to IRWF. We also empirically demonstrate that IRWF outperforms
existing ITWF algorithm (stochastic version of TWF) as well as other batch
algorithms.
| Huishuai Zhang, Yi Zhou, Yingbin Liang, Yuejie Chi | null | 1605.07719 | null | null |
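[A minimal NumPy sketch for the reshaped Wirtinger flow entry above, using the loss $\frac{1}{2m}\sum_i(|a_i^\top z| - y_i)^2$ and its (sub)gradient. The spectral-type initialization and step-size rule from the paper are replaced by crude placeholders, so convergence here is illustrative only.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 400
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
y = np.abs(A @ x_true)                         # magnitude-only measurements

# Crude initialization (the paper uses a carefully designed initializer).
z = rng.standard_normal(n)
z *= np.sqrt(np.mean(y ** 2)) / np.linalg.norm(z)

step = 0.5
for _ in range(500):
    Az = A @ z
    # (Sub)gradient of (1/2m) sum_i (|a_i^T z| - y_i)^2, using sign(a_i^T z).
    grad = A.T @ (Az - y * np.sign(Az)) / m
    z -= step * grad

# Recovery is only possible up to a global sign flip.
err = min(np.linalg.norm(z - x_true), np.linalg.norm(z + x_true))
print(err / np.linalg.norm(x_true))
```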
Data Programming: Creating Large Training Sets, Quickly | stat.ML cs.AI cs.LG | Large labeled training sets are the critical building blocks of supervised
learning methods and are key enablers of deep learning techniques. For some
applications, creating labeled training sets is the most time-consuming and
expensive part of applying machine learning. We therefore propose a paradigm
for the programmatic creation of training sets called data programming in which
users express weak supervision strategies or domain heuristics as labeling
functions, which are programs that label subsets of the data, but that are
noisy and may conflict. We show that by explicitly representing this training
set labeling process as a generative model, we can "denoise" the generated
training set, and establish theoretically that we can recover the parameters of
these generative models in a handful of settings. We then show how to modify a
discriminative loss function to make it noise-aware, and demonstrate our method
over a range of discriminative models including logistic regression and LSTMs.
Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data
programming would have led to a new winning score, and also show that applying
data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points
over a state-of-the-art LSTM baseline (and into second place in the
competition). Additionally, in initial user studies we observed that data
programming may be an easier way for non-experts to create machine learning
models when training data is limited or unavailable.
| Alexander Ratner, Christopher De Sa, Sen Wu, Daniel Selsam,
Christopher R\'e | null | 1605.07723 | null | null |
Adversarial Training Methods for Semi-Supervised Text Classification | stat.ML cs.LG | Adversarial training provides a means of regularizing supervised learning
algorithms while virtual adversarial training is able to extend supervised
learning algorithms to the semi-supervised setting. However, both methods
require making small perturbations to numerous entries of the input vector,
which is inappropriate for sparse high-dimensional inputs such as one-hot word
representations. We extend adversarial and virtual adversarial training to the
text domain by applying perturbations to the word embeddings in a recurrent
neural network rather than to the original input itself. The proposed method
achieves state of the art results on multiple benchmark semi-supervised and
purely supervised tasks. We provide visualizations and analysis showing that
the learned word embeddings have improved in quality and that while training,
the model is less prone to overfitting. Code is available at
https://github.com/tensorflow/models/tree/master/research/adversarial_text.
| Takeru Miyato, Andrew M. Dai, Ian Goodfellow | null | 1605.07725 | null | null |
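[A toy sketch of perturbing word embeddings rather than raw inputs, for the adversarial text-classification entry above. The model here is a deliberately simple averaged-embedding logistic classifier (not the recurrent network of the paper), chosen so the gradient with respect to the embeddings has a closed form; epsilon and all names are illustrative.]

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
vocab, dim, T = 100, 16, 8
E = rng.standard_normal((vocab, dim)) * 0.1     # embedding table
w = rng.standard_normal(dim) * 0.1              # classifier weights
tokens = rng.integers(0, vocab, size=T)         # one toy "sentence"
y = 1.0                                         # its label in {0, 1}

def loss_and_embed_grad(embeds, w, y):
    """Cross-entropy of a logistic model on the mean embedding,
    plus its gradient with respect to each token embedding."""
    mean = embeds.mean(axis=0)
    p = sigmoid(w @ mean)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_mean = (p - y) * w
    grad_embeds = np.tile(grad_mean / len(embeds), (len(embeds), 1))
    return loss, grad_embeds

embeds = E[tokens]
clean_loss, g = loss_and_embed_grad(embeds, w, y)

eps = 0.05
r_adv = eps * g / (np.linalg.norm(g) + 1e-12)   # normalized worst-case direction
adv_loss, _ = loss_and_embed_grad(embeds + r_adv, w, y)

# Adversarial training would minimize clean_loss + adv_loss w.r.t. E and w.
print(clean_loss, adv_loss)
```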
Learning Multiagent Communication with Backpropagation | cs.LG cs.AI | Many tasks in AI require the collaboration of multiple agents. Typically, the
communication protocol between agents is manually specified and not altered
during training. In this paper we explore a simple neural model, called
CommNet, that uses continuous communication for fully cooperative tasks. The
model consists of multiple agents and the communication between them is learned
alongside their policy. We apply this model to a diverse set of tasks,
demonstrating the ability of the agents to learn to communicate amongst
themselves, yielding improved performance over non-communicative agents and
baselines. In some cases, it is possible to interpret the language devised by
the agents, revealing simple but effective strategies for solving the task at
hand.
| Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus | null | 1605.07736 | null | null |
NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and
Stochastic Optimization | math.OC cs.LG stat.ML | We study a stochastic and distributed algorithm for nonconvex problems whose
objective consists of a sum of $N$ nonconvex $L_i/N$-smooth functions, plus a
nonsmooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT)
algorithm splits the problem into $N$ subproblems, and utilizes an augmented
Lagrangian based primal-dual scheme to solve it in a distributed and stochastic
manner. With a special non-uniform sampling, a version of NESTT achieves
an $\epsilon$-stationary solution using
$\mathcal{O}((\sum_{i=1}^N\sqrt{L_i/N})^2/\epsilon)$ gradient evaluations,
which can be up to $\mathcal{O}(N)$ times better than the (proximal) gradient
descent methods. It also achieves Q-linear convergence rate for nonconvex
$\ell_1$ penalized quadratic problems with polyhedral constraints. Further, we
reveal a fundamental connection between primal-dual based methods and a few
primal only methods such as IAG/SAG/SAGA.
| Davood Hajinezhad, Mingyi Hong, Tuo Zhao, Zhaoran Wang | null | 1605.07747 | null | null |
Generalized Mirror Descents in Congestion Games | cs.GT cs.LG | Different types of dynamics have been studied in repeated game play, and one
of them which has received much attention recently consists of those based on
"no-regret" algorithms from the area of machine learning. It is known that
dynamics based on generic no-regret algorithms may not converge to Nash
equilibria in general, but to a larger set of outcomes, namely coarse
correlated equilibria. Moreover, convergence results based on generic no-regret
algorithms typically use a weaker notion of convergence: the convergence of the
average plays instead of the actual plays. Some work has been done showing that
when using a specific no-regret algorithm, the well-known multiplicative
updates algorithm, convergence of actual plays to equilibria can be shown and
better quality of outcomes in terms of the price of anarchy can be reached for
atomic congestion games and load balancing games. Are there more cases of
natural no-regret dynamics that perform well in suitable classes of games in
terms of convergence and quality of outcomes that the dynamics converge to?
We answer this question positively in the bulletin-board model by showing
that when employing the mirror-descent algorithm, a well-known generic
no-regret algorithm, the actual plays converge quickly to equilibria in
nonatomic congestion games. Furthermore, the bandit model considers a probably
more realistic and prevalent setting with only partial information, in which at
each time step each player only knows the cost of her own currently played
strategy, but not any costs of unplayed strategies. For the class of atomic
congestion games, we propose a family of bandit algorithms based on the
mirror-descent algorithms previously presented, and show that when each player
individually adopts such a bandit algorithm, their joint (mixed) strategy
profile quickly converges, with corresponding implications for the quality of the resulting outcomes.
| Po-An Chen, Chi-Jen Lu | null | 1605.07774 | null | null |
Neural Universal Discrete Denoiser | cs.LG | We present a new framework of applying deep neural networks (DNN) to devise a
universal discrete denoiser. Unlike other approaches that utilize supervised
learning for denoising, we do not require any additional training data. In such
a setting, while the ground-truth label, i.e., the clean data, is not available,
we devise "pseudo-labels" and a novel objective function such that a DNN can be
trained in the same way as in supervised learning to become a discrete denoiser. We
experimentally show that our resulting algorithm, dubbed as Neural DUDE,
significantly outperforms the previous state-of-the-art in several applications
with a systematic rule of choosing the hyperparameter, which is an attractive
feature in practice.
| Taesup Moon, Seonwoo Min, Byunghan Lee, Sungroh Yoon | null | 1605.07779 | null | null |
Fast Algorithms for Robust PCA via Gradient Descent | cs.IT cs.LG math.IT math.ST stat.ML stat.TH | We consider the problem of Robust PCA in the fully and partially observed
settings. Without corruptions, this is the well-known matrix completion
problem. From a statistical standpoint this problem has been recently
well-studied, and conditions on when recovery is possible (how many
observations do we need, how many corruptions can we tolerate) via
polynomial-time algorithms are by now understood. This paper presents and
analyzes a non-convex optimization approach that greatly reduces the
computational complexity of the above problems, compared to the best available
algorithms. In particular, in the fully observed case, with $r$ denoting rank
and $d$ dimension, we reduce the complexity from
$\mathcal{O}(r^2d^2\log(1/\varepsilon))$ to
$\mathcal{O}(rd^2\log(1/\varepsilon))$ -- a big savings when the rank is big.
For the partially observed case, we show the complexity of our algorithm is no
more than $\mathcal{O}(r^4d \log d \log(1/\varepsilon))$. Not only is this the
best-known run-time for a provable algorithm under partial observation, but in
the setting where $r$ is small compared to $d$, it also allows for
near-linear-in-$d$ run-time that can be exploited in the fully-observed case as
well, by simply running our algorithm on a subset of the observations.
| Xinyang Yi, Dohyung Park, Yudong Chen, Constantine Caramanis | null | 1605.07784 | null | null |
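[A minimal non-convex sketch in the spirit of the fully observed case in the Robust PCA entry above: alternate a hard-thresholding estimate of the sparse part with gradient steps on the low-rank factors. The threshold, step size, iteration count, and the absence of the paper's careful corruption-control and initialization steps make this purely illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 100, 5
L_true = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
S_true = np.zeros((d, d))
mask = rng.random((d, d)) < 0.05                 # 5% corrupted entries
S_true[mask] = rng.standard_normal(mask.sum()) * 5
M = L_true + S_true

# Initialize factors from the top-r SVD of the observed matrix.
U0, s0, V0t = np.linalg.svd(M, full_matrices=False)
U = U0[:, :r] * np.sqrt(s0[:r])
V = V0t[:r, :].T * np.sqrt(s0[:r])

step, thresh = 2e-3, 2.5
for _ in range(500):
    R = M - U @ V.T
    S = np.where(np.abs(R) > thresh, R, 0.0)     # hard-threshold the residual
    E = M - S - U @ V.T                          # low-rank fitting residual
    U, V = U + step * E @ V, V + step * E.T @ U  # gradient steps on both factors

print(np.linalg.norm(U @ V.T - L_true) / np.linalg.norm(L_true))
```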
Geometry-aware stationary subspace analysis | cs.LG | In many real-world applications data exhibits non-stationarity, i.e., its
distribution changes over time. One approach to handling non-stationarity is to
remove or minimize it before attempting to analyze the data. In the context of
brain computer interface (BCI) data analysis this may be done by means of
stationary subspace analysis (SSA). The classic SSA method finds a matrix that
projects the data onto a stationary subspace by optimizing a cost function
based on a matrix divergence. In this work we present an alternative method for
SSA based on a symmetrized version of this matrix divergence. We show that this
frames the problem in terms of distances between symmetric positive definite
(SPD) matrices, suggesting a geometric interpretation of the problem. Stemming
from this geometric viewpoint, we introduce and analyze a method which utilizes
the geometry of the SPD matrix manifold and the invariance properties of its
metrics. Most notably we show that these invariances alleviate the need to
whiten the input matrices, a common step in many SSA methods which often
introduces errors. We demonstrate the usefulness of our technique in
experiments on both synthesized and real-world data.
| Inbal Horev and Florian Yger and Masashi Sugiyama | null | 1605.07785 | null | null |
Learning Moore Machines from Input-Output Traces | cs.FL cs.LG | The problem of learning automata from example traces (but no equivalence or
membership queries) is fundamental in automata learning theory and practice. In
this paper we study this problem for finite state machines with inputs and
outputs, and in particular for Moore machines. We develop three algorithms for
solving this problem: (1) the PTAP algorithm, which transforms a set of
input-output traces into an incomplete Moore machine and then completes the
machine with self-loops; (2) the PRPNI algorithm, which uses the well-known
RPNI algorithm for automata learning to learn a product of automata encoding a
Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore
machine using PTAP extended with state merging. We prove that MooreMI has the
fundamental identification in the limit property. We also compare the
algorithms experimentally in terms of the size of the learned machine and
several notions of accuracy, introduced in this paper. Finally, we compare with
OSTIA, an algorithm that learns a more general class of transducers, and find
that OSTIA generally does not learn a Moore machine, even when fed with a
characteristic sample.
| Georgios Giantamidis and Stavros Tripakis | null | 1605.07805 | null | null |
Action Classification via Concepts and Attributes | cs.CV cs.LG | Classes in natural images tend to follow long tail distributions. This is
problematic when there are insufficient training examples for rare classes.
This effect is emphasized in compound classes, involving the conjunction of
several concepts, such as those appearing in action-recognition datasets. In
this paper, we propose to address this issue by learning how to utilize common
visual concepts which are readily available. We detect the presence of
prominent concepts in images and use them to infer the target labels instead of
using visual features directly, combining tools from vision and
natural-language processing. We validate our method on the recently introduced
HICO dataset reaching a mAP of 31.54\% and on the Stanford-40 Actions dataset,
where the proposed method outperforms that obtained by direct visual features,
obtaining an accuracy of 83.12\%. Moreover, the method provides for each class a
semantically meaningful list of keywords and relevant image regions relating it
to its constituent concepts.
| Amir Rosenfeld, Shimon Ullman | null | 1605.07824 | null | null |
Effective Blind Source Separation Based on the Adam Algorithm | cs.LG | In this paper, we derive a modified InfoMax algorithm for the solution of
Blind Signal Separation (BSS) problems by using advanced stochastic methods.
The proposed approach is based on a novel stochastic optimization method
known as the Adaptive Moment Estimation (Adam) algorithm. The proposed BSS
solution can benefit from the excellent properties of the Adam approach. In
order to derive the new learning rule, the Adam algorithm is introduced in the
derivation of the cost function maximization in the standard InfoMax algorithm.
The natural gradient adaptation is also considered. Finally, some experimental
results show the effectiveness of the proposed approach.
| Michele Scarpiniti, Simone Scardapane, Danilo Comminiello, Raffaele
Parisi, Aurelio Uncini | null | 1605.07833 | null | null |
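[For the Adam-based InfoMax entry above, the reusable building block is the Adam moment-estimation update itself. The sketch below shows that update on a placeholder quadratic objective; the InfoMax / natural-gradient BSS cost that the paper actually plugs into Adam is not reproduced here.]

```python
import numpy as np

def adam_update(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam step: exponential moment estimates with bias correction."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Placeholder objective: minimize ||theta - target||^2 (standing in for the
# negative InfoMax cost whose gradient would be fed to Adam in the paper).
rng = np.random.default_rng(0)
target = rng.standard_normal(10)
theta = np.zeros(10)
m = np.zeros(10)
v = np.zeros(10)
for t in range(1, 501):
    grad = 2 * (theta - target)
    theta, m, v = adam_update(theta, grad, m, v, t)
print(np.linalg.norm(theta - target))
```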
Review Networks for Caption Generation | cs.LG cs.CL cs.CV | We propose a novel extension of the encoder-decoder framework, called a
review network. The review network is generic and can enhance any existing
encoder-decoder model: in this paper, we consider RNN decoders with both CNN
and RNN encoders. The review network performs a number of review steps with
attention mechanism on the encoder hidden states, and outputs a thought vector
after each review step; the thought vectors are used as the input of the
attention mechanism in the decoder. We show that conventional encoder-decoders
are a special case of our framework. Empirically, we show that our framework
improves over state-of-the-art encoder-decoder systems on the tasks of image
captioning and source code captioning.
| Zhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, William W.
Cohen | null | 1605.07912 | null | null |
On Fast Convergence of Proximal Algorithms for SQRT-Lasso Optimization:
Don't Worry About Its Nonsmooth Loss Function | cs.LG math.OC stat.ML | Many machine learning techniques sacrifice convenient computational
structures to gain estimation robustness and modeling flexibility. However, by
exploring the modeling structures, we find these "sacrifices" do not always
require more computational effort. To shed light on such a "free-lunch"
phenomenon, we study the square-root-Lasso (SQRT-Lasso) type regression
problem. Specifically, we show that the nonsmooth loss functions of SQRT-Lasso
type regression ease tuning effort and gain adaptivity to inhomogeneous noise,
but are not necessarily more challenging than Lasso in computation. We can
directly apply proximal algorithms (e.g. proximal gradient descent, proximal
Newton, and proximal Quasi-Newton algorithms) without worrying about the
nonsmoothness of the loss function. Theoretically, we prove that the proximal
algorithms combined with the pathwise optimization scheme enjoy fast
convergence guarantees with high probability. Numerical results are provided to
support our theory.
| Xingguo Li, Haoming Jiang, Jarvis Haupt, Raman Arora, Han Liu, Mingyi
Hong, and Tuo Zhao | null | 1605.07950 | null | null |
Adaptive Neural Compilation | cs.AI cs.LG | This paper proposes an adaptive neural-compilation framework to address the
problem of efficient program learning. Traditional code optimisation strategies
used in compilers are based on applying pre-specified set of transformations
that make the code faster to execute without changing its semantics. In
contrast, our work involves adapting programs to make them more efficient while
considering correctness only on a target input distribution. Our approach is
inspired by the recent works on differentiable representations of programs. We
show that it is possible to compile programs written in a low-level language to
a differentiable representation. We also show how programs in this
representation can be optimised to make them efficient on a target distribution
of inputs. Experimental results demonstrate that our approach enables learning
specifically-tuned algorithms for given data distributions with a high success
rate.
| Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip H.S. Torr and M.
Pawan Kumar | null | 1605.07969 | null | null |
Efficient Distributed Learning with Sparsity | stat.ML cs.LG | We propose a novel, efficient approach for distributed sparse learning in
high-dimensions, where observations are randomly partitioned across machines.
Computationally, at each round our method only requires the master machine to
solve a shifted $\ell_1$ regularized M-estimation problem, and other workers to
compute the gradient. With respect to communication, the proposed approach
provably matches the estimation error bound of centralized methods within
constant rounds of communications (ignoring logarithmic factors). We conduct
extensive experiments on both simulated and real world datasets, and
demonstrate encouraging performances on high-dimensional regression and
classification tasks.
| Jialei Wang, Mladen Kolar, Nathan Srebro, Tong Zhang | null | 1605.07991 | null | null |
Toward a general, scaleable framework for Bayesian teaching with
applications to topic models | cs.LG cs.AI stat.ML | Machines, not humans, are the world's dominant knowledge accumulators but
humans remain the dominant decision makers. Interpreting and disseminating the
knowledge accumulated by machines requires expertise, time, and is prone to
failure. The problem of how best to convey accumulated knowledge from computers
to humans is a critical bottleneck in the broader application of machine
learning. We propose an approach based on human teaching where the problem is
formalized as selecting a small subset of the data that will, with high
probability, lead the human user to the correct inference. This approach,
though successful for modeling human learning in simple laboratory experiments,
has failed to achieve broader relevance due to challenges in formulating
general and scalable algorithms. We propose general-purpose teaching via
pseudo-marginal sampling and demonstrate the algorithm by teaching topic
models. Simulation results show our sampling-based approach: effectively
approximates the probability in cases where the ground truth is available via enumeration,
results in data that are markedly different from those expected by random
sampling, and speeds learning especially for small amounts of data. Application
to movie synopsis data illustrates differences between teaching and random
sampling for teaching distributions and specific topics, and demonstrates gains
in scalability and applicability to real-world problems.
| Baxter S. Eaves Jr and Patrick Shafto | null | 1605.07999 | null | null |
Tight Complexity Bounds for Optimizing Composite Objectives | math.OC cs.LG stat.ML | We provide tight upper and lower bounds on the complexity of minimizing the
average of $m$ convex functions using gradient and prox oracles of the
component functions. We show a significant gap between the complexity of
deterministic vs randomized optimization. For smooth functions, we show that
accelerated gradient descent (AGD) and an accelerated variant of SVRG are
optimal in the deterministic and randomized settings respectively, and that a
gradient oracle is sufficient for the optimal rate. For non-smooth functions,
having access to prox oracles reduces the complexity and we present optimal
methods based on smoothing that improve over methods using just gradient
accesses.
| Blake Woodworth and Nathan Srebro | null | 1605.08003 | null | null |
A PAC RL Algorithm for Episodic POMDPs | cs.LG cs.AI stat.ML | Many interesting real world domains involve reinforcement learning (RL) in
partially observable environments. Efficient learning in such domains is
important, but existing sample complexity bounds for partially observable RL
are at least exponential in the episode length. We give, to our knowledge, the
first partially observable RL algorithm with a polynomial bound on the number
of episodes on which the algorithm may not achieve near-optimal performance.
Our algorithm is suitable for an important class of episodic POMDPs. Our
approach builds on recent advances in method of moments for latent variable
model estimation.
| Zhaohan Daniel Guo, Shayan Doroudi, Emma Brunskill | null | 1605.08062 | null | null |
Deep Predictive Coding Networks for Video Prediction and Unsupervised
Learning | cs.LG cs.AI cs.CV cs.NE q-bio.NC | While great strides have been made in using deep learning algorithms to solve
supervised learning tasks, the problem of unsupervised learning - leveraging
unlabeled examples to learn about the structure of a domain - remains a
difficult unsolved challenge. Here, we explore prediction of future frames in a
video sequence as an unsupervised learning rule for learning about the
structure of the visual world. We describe a predictive neural network
("PredNet") architecture that is inspired by the concept of "predictive coding"
from the neuroscience literature. These networks learn to predict future frames
in a video sequence, with each layer in the network making local predictions
and only forwarding deviations from those predictions to subsequent network
layers. We show that these networks are able to robustly learn to predict the
movement of synthetic (rendered) objects, and that in doing so, the networks
learn internal representations that are useful for decoding latent object
parameters (e.g. pose) that support object recognition with fewer training
views. We also show that these networks can scale to complex natural image
streams (car-mounted camera videos), capturing key aspects of both egocentric
movement and the movement of objects in the visual scene, and the
representation learned in this setting is useful for estimating the steering
angle. Altogether, these results suggest that prediction represents a powerful
framework for unsupervised learning, allowing for implicit learning of object
and scene structure.
| William Lotter, Gabriel Kreiman, David Cox | null | 1605.08104 | null | null |
FLAG n' FLARE: Fast Linearly-Coupled Adaptive Gradient Methods | math.OC cs.LG stat.ML | We consider first order gradient methods for effectively optimizing a
composite objective in the form of a sum of smooth and, potentially, non-smooth
functions. We present accelerated and adaptive gradient methods, called FLAG
and FLARE, which can offer the best of both worlds. They can achieve the
optimal convergence rate by attaining the optimal first-order oracle complexity
for smooth convex optimization. Additionally, they can adaptively and
non-uniformly re-scale the gradient direction to adapt to the limited curvature
available and conform to the geometry of the domain. We show theoretically and
empirically that, through the compounding effects of acceleration and
adaptivity, FLAG and FLARE can be highly effective for many data fitting and
machine learning applications.
| Xiang Cheng, Farbod Roosta-Khorasani, Stefan Palombo, Peter L.
Bartlett and Michael W. Mahoney | null | 1605.08108 | null | null |
Video Summarization with Long Short-term Memory | cs.CV cs.LG | We propose a novel supervised learning technique for summarizing videos by
automatically selecting keyframes or key subshots. Casting the problem as a
structured prediction problem on sequential data, our main idea is to use Long
Short-Term Memory (LSTM), a special type of recurrent neural networks to model
the variable-range dependencies entailed in the task of video summarization.
Our learning models attain the state-of-the-art results on two benchmark video
datasets. Detailed analysis justifies the design of the models. In particular,
we show that it is crucial to take into consideration the sequential structures
in videos and model them. Besides advances in modeling techniques, we introduce
techniques to address the need for a large amount of annotated data for training
complex learning models. There, our main idea is to exploit the existence of
auxiliary annotated video datasets, albeit heterogeneous in visual styles and
contents. Specifically, we show domain adaptation techniques can improve
summarization by reducing the discrepancies in statistical properties across
those datasets.
| Ke Zhang, Wei-Lun Chao, Fei Sha, Kristen Grauman | null | 1605.08110 | null | null |
Highly-Smooth Zero-th Order Online Optimization | cs.LG math.OC | The minimization of convex functions which are only available through partial
and noisy information is a key methodological problem in many disciplines. In
this paper we consider convex optimization with noisy zero-th order
information, that is noisy function evaluations at any desired point. We focus
on problems with high degrees of smoothness, such as logistic regression. We
show that as opposed to gradient-based algorithms, high-order smoothness may be
used to improve estimation rates, with a precise dependence of our upper-bounds
on the degree of smoothness. In particular, we show that for infinitely
differentiable functions, we recover the same dependence on sample size as
gradient-based algorithms, with an extra dimension-dependent factor. This is
done for both convex and strongly-convex functions, with finite horizon and
anytime algorithms. Finally, we also recover similar results in the online
optimization setting.
| Francis Bach (SIERRA, LIENS), Vianney Perchet (CREST) | null | 1605.08165 | null | null |
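[A toy sketch of the kind of two-point zero-th order gradient estimator underlying the entry above. The paper's estimators exploit higher-order smoothness with carefully chosen randomization and rates; this is only the plain finite-difference baseline with arbitrary constants and a synthetic quadratic objective.]

```python
import numpy as np

def zeroth_order_sgd(f, x0, steps=2000, delta=1e-2, lr=1e-2, noise=1e-3, seed=0):
    """Minimize f using only noisy function evaluations (two-point estimator)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    d = len(x)
    for _ in range(steps):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        # Noisy zero-th order oracle: function value plus Gaussian noise.
        f_plus = f(x + delta * u) + noise * rng.standard_normal()
        f_minus = f(x - delta * u) + noise * rng.standard_normal()
        g = d * (f_plus - f_minus) / (2 * delta) * u   # gradient estimate
        x -= lr * g
    return x

# Smooth test function: a simple quadratic with minimum at the origin.
A = np.diag(np.linspace(1.0, 5.0, 10))
f = lambda x: 0.5 * x @ A @ x
print(np.linalg.norm(zeroth_order_sgd(f, np.ones(10))))
```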
Adiabatic Persistent Contrastive Divergence Learning | cs.LG stat.ML | This paper studies the problem of parameter learning in probabilistic
graphical models having latent variables, where the standard approach is the
expectation maximization algorithm alternating expectation (E) and maximization
(M) steps. However, both E and M steps are computationally intractable for high
dimensional data, while the substitution of one step to a faster surrogate for
combating against intractability can often cause failure in convergence. We
propose a new learning algorithm which is computationally efficient and
provably ensures convergence to a correct optimum. Its key idea is to run only
a few cycles of Markov Chains (MC) in both E and M steps. Such an idea of
running incomplete MC has been well studied only for M step in the literature,
called Contrastive Divergence (CD) learning. While such known CD-based schemes
find approximated gradients of the log-likelihood via the mean-field approach
in E step, our proposed algorithm does exact ones via MC algorithms in both
steps due to the multi-time-scale stochastic approximation theory. Despite its
theoretical guarantee in convergence, the proposed scheme might suffer from the
slow mixing of MC in E step. To tackle it, we also propose a hybrid approach
applying both mean-field and MC approximation in E step, where the hybrid
approach outperforms the bare mean-field CD scheme in our experiments on
real-world datasets.
| Hyeryung Jang, Hyungwon Choi, Yung Yi, Jinwoo Shin | null | 1605.08174 | null | null |
Learning Multivariate Log-concave Distributions | cs.LG cs.IT math.IT math.ST stat.TH | We study the problem of estimating multivariate log-concave probability
density functions. We prove the first sample complexity upper bound for
learning log-concave densities on $\mathbb{R}^d$, for all $d \geq 1$. Prior to
our work, no upper bound on the sample complexity of this learning problem was
known for the case of $d>3$. In more detail, we give an estimator that, for any
$d \ge 1$ and $\epsilon>0$, draws $\tilde{O}_d \left( (1/\epsilon)^{(d+5)/2}
\right)$ samples from an unknown target log-concave density on $\mathbb{R}^d$,
and outputs a hypothesis that (with high probability) is $\epsilon$-close to
the target, in total variation distance. Our upper bound on the sample
complexity comes close to the known lower bound of $\Omega_d \left(
(1/\epsilon)^{(d+1)/2} \right)$ for this problem.
| Ilias Diakonikolas, Daniel M. Kane, Alistair Stewart | null | 1605.08188 | null | null |
Stochastic Variance Reduced Riemannian Eigensolver | cs.LG stat.ML | We study the stochastic Riemannian gradient algorithm for matrix
eigen-decomposition. The state-of-the-art stochastic Riemannian algorithm
requires the learning rate to decay to zero and thus suffers from slow
convergence and sub-optimal solutions. In this paper, we address this issue by
deploying the variance reduction (VR) technique of stochastic gradient descent
(SGD). The technique was originally developed to solve convex problems in the
Euclidean space. We generalize it to Riemannian manifolds and realize it to
solve the non-convex eigen-decomposition problem. We are the first to propose
and analyze the generalization of SVRG to Riemannian manifolds. Specifically,
we propose the general variance reduction form, SVRRG, in the framework of the
stochastic Riemannian gradient optimization. It is then specialized to the
problem with eigensolvers and induces the SVRRG-EIGS algorithm. We provide a
novel and elegant theoretical analysis on this algorithm. The theory shows that
a fixed learning rate can be used in the Riemannian setting with an exponential
global convergence rate guaranteed. The theoretical results make a significant
improvement over existing studies, with the effectiveness empirically verified.
| Zhiqiang Xu and Yiping Ke | null | 1605.08233 | null | null |
Neighborhood Sensitive Mapping for Zero-Shot Classification using
Independently Learned Semantic Embeddings | cs.LG | In a traditional setting, classifiers are trained to approximate a target
function $f:X \rightarrow Y$ where at least one sample for each $y \in Y$ is
presented to the training algorithm. In a zero-shot setting we have a subset of
the labels $\hat{Y} \subset Y$ for which we do not observe any corresponding
training instance. Still, the function $f$ that we train must be able to
correctly assign labels also on $\hat{Y}$. In practice, zero-shot problems are
very important especially when the label set is large and the cost of
editorially label samples for all possible values in the label set might be
prohibitively high. Most recent approaches to zero-shot learning are based on
finding and exploiting relationships between labels using semantic embeddings.
We show in this paper that semantic embeddings, despite being very good at
capturing relationships between labels, are not very good at capturing the
relationships among labels in a data-dependent manner. For this reason, we
propose a novel two-step process for learning a zero-shot classifier. In the
first step, we learn what we call a \emph{property embedding space} capturing
the "\emph{learnable}" features of the label set. Then, we exploit the learned
properties in order to reduce the generalization error for a linear nearest
neighbor-based classifier.
| Gaurav Singh, Fabrizio Silvestri, John Shawe-Taylor | null | 1605.08242 | null | null |
cvpaper.challenge in 2015 - A review of CVPR2015 and DeepSurvey | cs.CV cs.LG cs.MM cs.RO | The "cvpaper.challenge" is a group composed of members from AIST, Tokyo Denki
Univ. (TDU), and Univ. of Tsukuba that aims to systematically summarize papers
on computer vision, pattern recognition, and related fields. For this
particular review, we focused on reading all 602 conference papers
presented at CVPR2015, the premier annual computer vision event held in
June 2015, in order to grasp the trends in the field. Further, we propose
"DeepSurvey" as a mechanism embodying the entire process, from reading
through all the papers and generating ideas to the writing of papers.
| Hirokatsu Kataoka and Yudai Miyashita and Tomoaki Yamabe and Soma
Shirakabe and Shin'ichi Sato and Hironori Hoshino and Ryo Kato and Kaori Abe
and Takaaki Imanari and Naomichi Kobayashi and Shinichiro Morita and Akio
Nakamura | null | 1605.08247 | null | null |
Robust Large Margin Deep Neural Networks | stat.ML cs.LG cs.NE | The generalization error of deep neural networks via their classification
margin is studied in this work. Our approach is based on the Jacobian matrix of
a deep neural network and can be applied to networks with arbitrary
non-linearities and pooling layers, and to networks with different
architectures such as feed forward networks and residual networks. Our analysis
leads to the conclusion that a bounded spectral norm of the network's Jacobian
matrix in the neighbourhood of the training samples is crucial for a deep
neural network of arbitrary depth and width to generalize well. This is a
significant improvement over the current bounds in the literature, which imply
that the generalization error grows with either the width or the depth of the
network. Moreover, it shows that the recently proposed batch normalization and
weight normalization re-parametrizations enjoy good generalization properties,
and leads to a novel network regularizer based on the network's Jacobian
matrix. The analysis is supported with experimental results on the MNIST,
CIFAR-10, LaRED and ImageNet datasets.
| Jure Sokolic, Raja Giryes, Guillermo Sapiro, Miguel R. D. Rodrigues | 10.1109/TSP.2017.2708039 | 1605.08254 | null | null |
Low-rank tensor completion: a Riemannian manifold preconditioning
approach | cs.LG cs.NA math.OC stat.ML | We propose a novel Riemannian manifold preconditioning approach for the
tensor completion problem with rank constraint. A novel Riemannian metric or
inner product is proposed that exploits the least-squares structure of the cost
function and takes into account the structured symmetry that exists in Tucker
decomposition. The specific metric allows us to use the versatile framework of
Riemannian optimization on quotient manifolds to develop preconditioned
nonlinear conjugate gradient and stochastic gradient descent algorithms for
batch and online setups, respectively. Concrete matrix representations of
various optimization-related ingredients are listed. Numerical comparisons
suggest that our proposed algorithms robustly outperform state-of-the-art
algorithms across different synthetic and real-world datasets.
| Hiroyuki Kasai and Bamdev Mishra | null | 1605.08257 | null | null |
Discrete Deep Feature Extraction: A Theory and New Architectures | cs.LG cs.CV cs.IT cs.NE math.IT stat.ML | First steps towards a mathematical theory of deep convolutional neural
networks for feature extraction were made---for the continuous-time case---in
Mallat, 2012, and Wiatowski and B\"olcskei, 2015. This paper considers the
discrete case, introduces new convolutional neural network architectures, and
proposes a mathematical framework for their analysis. Specifically, we
establish deformation and translation sensitivity results of local and global
nature, and we investigate how certain structural properties of the input
signal are reflected in the corresponding feature vectors. Our theory applies
to general filters and general Lipschitz-continuous non-linearities and pooling
operators. Experiments on handwritten digit classification and facial landmark
detection---including feature importance evaluation---complement the
theoretical findings.
| Thomas Wiatowski and Michael Tschannen and Aleksandar Stani\'c and
Philipp Grohs and Helmut B\"olcskei | null | 1605.08283 | null | null |
Theano-MPI: a Theano-based Distributed Training Framework | cs.LG cs.DC | We develop a scalable and extendable training framework that can utilize GPUs
across nodes in a cluster and accelerate the training of deep learning models
based on data parallelism. Both synchronous and asynchronous training are
implemented in our framework, where parameter exchange among GPUs is based on
CUDA-aware MPI. In this report, we analyze the convergence and capability of
the framework to reduce training time when scaling the synchronous training of
AlexNet and GoogLeNet from 2 GPUs to 8 GPUs. In addition, we explore novel ways
to reduce the communication overhead caused by exchanging parameters. Finally,
we release the framework as open-source for further research on distributed
deep learning.
| He Ma, Fei Mao, and Graham W. Taylor | null | 1605.08325 | null | null |
No bad local minima: Data independent training error guarantees for
multilayer neural networks | stat.ML cs.LG cs.NE | We use smoothed analysis techniques to provide guarantees on the training
loss of Multilayer Neural Networks (MNNs) at differentiable local minima.
Specifically, we examine MNNs with piecewise linear activation functions,
quadratic loss and a single output, under mild over-parametrization. We prove
that for a MNN with one hidden layer, the training error is zero at every
differentiable local minimum, for almost every dataset and dropout-like noise
realization. We then extend these results to the case of more than one hidden
layer. Our theoretical guarantees assume essentially nothing on the training
data, and are verified numerically. These results suggest why the highly
non-convex loss of such MNNs can be easily optimized using local updates (e.g.,
stochastic gradient descent), as observed empirically.
| Daniel Soudry, Yair Carmon | null | 1605.08361 | null | null |
Provable Efficient Online Matrix Completion via Non-convex Stochastic
Gradient Descent | cs.LG math.OC stat.ML | Matrix completion, where we wish to recover a low rank matrix by observing a
few entries from it, is a widely studied problem in both theory and practice
with wide applications. Most of the provable algorithms so far on this problem
have been restricted to the offline setting where they provide an estimate of
the unknown matrix using all observations simultaneously. However, in many
applications, the online version, where we observe one entry at a time and
dynamically update our estimate, is more appealing. While existing algorithms
are efficient for the offline setting, they could be highly inefficient for the
online setting.
In this paper, we propose the first provable, efficient online algorithm for
matrix completion. Our algorithm starts from an initial estimate of the matrix
and then performs non-convex stochastic gradient descent (SGD). After every
observation, it performs a fast update involving only one row of two tall
matrices, giving near linear total runtime. Our algorithm can be naturally used
in the offline setting as well, where it gives sample complexity and runtime
competitive with state-of-the-art algorithms. Our proofs introduce a general
framework to show that SGD updates tend to stay away from saddle surfaces and
could be of broader interest for proving tight rates for other non-convex
problems.
| Chi Jin, Sham M. Kakade, Praneeth Netrapalli | null | 1605.08370 | null | null |
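[A toy sketch of the online update described in the matrix completion entry above: after observing a single entry, only one row of each factor is touched. Step size, rank, iteration budget, and the absence of the paper's careful initialization are illustrative simplifications.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 200, 3
U_true = rng.standard_normal((n, r))
V_true = rng.standard_normal((n, r))
M = U_true @ V_true.T

U = rng.standard_normal((n, r)) * 0.1
V = rng.standard_normal((n, r)) * 0.1
step = 0.05

for _ in range(400_000):
    i, j = rng.integers(n), rng.integers(n)       # observe one random entry
    err = U[i] @ V[j] - M[i, j]
    # SGD on (U_i . V_j - M_ij)^2 touches only row i of U and row j of V.
    U[i], V[j] = U[i] - step * err * V[j], V[j] - step * err * U[i]

rel_err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(rel_err)
```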
Kronecker Determinantal Point Processes | cs.LG cs.AI stat.ML | Determinantal Point Processes (DPPs) are probabilistic models over all
subsets of a ground set of $N$ items. They have recently gained prominence in
several applications that rely on "diverse" subsets. However, their
applicability to large problems is still limited due to the $\mathcal O(N^3)$
complexity of core tasks such as sampling and learning. We enable efficient
sampling and learning for DPPs by introducing KronDPP, a DPP model whose kernel
matrix decomposes as a tensor product of multiple smaller kernel matrices. This
decomposition immediately enables fast exact sampling. But contrary to what one
may expect, leveraging the Kronecker product structure for speeding up DPP
learning turns out to be more difficult. We overcome this challenge, and derive
batch and stochastic optimization algorithms for efficiently learning the
parameters of a KronDPP.
| Zelda Mariet and Suvrit Sra | null | 1605.08374 | null | null |
Generalization Properties and Implicit Regularization for Multiple
Passes SGM | cs.LG stat.ML | We study the generalization properties of stochastic gradient methods for
learning with convex loss functions and linearly parameterized functions. We
show that, in the absence of penalizations or constraints, the stability and
approximation properties of the algorithm can be controlled by tuning either
the step-size or the number of passes over the data. In this view, these
parameters can be seen to control a form of implicit regularization. Numerical
results complement the theoretical findings.
| Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco | null | 1605.08375 | null | null |
Suppressing Background Radiation Using Poisson Principal Component
Analysis | cs.LG physics.data-an stat.ML | Performance of nuclear threat detection systems based on gamma-ray
spectrometry often strongly depends on the ability to identify the part of
measured signal that can be attributed to background radiation. We have
successfully applied a method based on Principal Component Analysis (PCA) to
obtain a compact null-space model of background spectra using PCA projection
residuals to derive a source detection score. We have shown the method's
utility in a threat detection system using mobile spectrometers in urban scenes
(Tandon et al., 2012). While it is commonly assumed that measured photon counts
follow a Poisson process, standard PCA makes a Gaussian assumption about the
data distribution, which may be a poor approximation when photon counts are
low. This paper studies whether and in what conditions PCA with a Poisson-based
loss function (Poisson PCA) can outperform standard Gaussian PCA in modeling
background radiation to enable more sensitive and specific nuclear threat
detection.
| P. Tandon (1), P. Huggins (1), A. Dubrawski (1), S. Labov (2), K.
Nelson (2) ((1) Auton Lab, Carnegie Mellon University, (2) Lawrence Livermore
National Laboratory) | null | 1605.08455 | null | null |
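[A small sketch of the Gaussian-PCA baseline described in the background-suppression entry above: fit a low-dimensional background model and score new spectra by their projection residual. The Poisson-loss variant studied in the paper would replace the squared-error fit; the counts, dimensions, and spectra below are synthetic.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_bkg = 64, 500

# Synthetic Poisson-count background spectra around a smooth mean shape.
mean_shape = 50.0 * np.exp(-np.linspace(0.0, 4.0, n_bins))
background = rng.poisson(mean_shape, size=(n_bkg, n_bins)).astype(float)

# Gaussian PCA null-space model of the background (the paper's baseline).
mu = background.mean(axis=0)
_, _, Vt = np.linalg.svd(background - mu, full_matrices=False)
P = Vt[:10]                                    # top-10 principal directions

def residual_score(spectrum):
    """Source-detection score: norm of the PCA projection residual."""
    centered = spectrum - mu
    return np.linalg.norm(centered - P.T @ (P @ centered))

# A "threat" spectrum: background plus a localized photo-peak.
signal = mean_shape.copy()
signal[30:34] += 40.0
threat = rng.poisson(signal).astype(float)
clean = rng.poisson(mean_shape).astype(float)
print(residual_score(clean), residual_score(threat))
```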
Model-Free Imitation Learning with Policy Optimization | cs.LG cs.AI | In imitation learning, an agent learns how to behave in an environment with
an unknown cost function by mimicking expert demonstrations. Existing imitation
learning algorithms typically involve solving a sequence of planning or
reinforcement learning problems. Such algorithms are therefore not directly
applicable to large, high-dimensional environments, and their performance can
significantly degrade if the planning problems are not solved to optimality.
Under the apprenticeship learning formalism, we develop alternative model-free
algorithms for finding a parameterized stochastic policy that performs at least
as well as an expert policy on an unknown cost function, based on sample
trajectories from the expert. Our approach, based on policy gradients, scales
to large continuous environments with guaranteed convergence to local minima.
| Jonathan Ho, Jayesh K. Gupta, Stefano Ermon | null | 1605.08478 | null | null |
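A deliberately tiny sketch of the apprenticeship-learning game the abstract refers to, in a stateless toy problem: the adversarial linear cost is the (normalized) gap between learner and expert feature expectations, and the policy is improved with a sampled REINFORCE gradient rather than by solving planning or RL subproblems. This is only meant to convey the structure of such model-free updates, not the algorithms or guarantees from the paper; every name here is hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)

    # Stateless toy: 5 actions, each with a 3-dimensional feature vector.
    num_actions, d = 5, 3
    phi = rng.standard_normal((num_actions, d))

    # "Expert" demonstrations: a fixed expert policy, observed only through samples.
    expert_policy = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
    expert_actions = rng.choice(num_actions, size=2000, p=expert_policy)
    mu_expert = phi[expert_actions].mean(axis=0)   # empirical expert feature expectation

    theta = np.zeros(num_actions)                  # softmax policy parameters

    def policy(theta):
        z = np.exp(theta - theta.max())
        return z / z.sum()

    for _ in range(300):
        pi = policy(theta)
        mu_pi = pi @ phi                           # learner feature expectation
        gap = mu_pi - mu_expert
        w = gap / (np.linalg.norm(gap) + 1e-12)    # worst-case linear cost in the unit ball

        # REINFORCE estimate of the gradient of the expected cost E_{a~pi}[w . phi(a)].
        actions = rng.choice(num_actions, size=64, p=pi)
        costs = phi[actions] @ w
        grad = np.zeros(num_actions)
        for a, c in zip(actions, costs):
            g = -pi.copy()
            g[a] += 1.0                            # gradient of log pi(a) w.r.t. theta
            grad += (c - costs.mean()) * g
        theta -= 0.5 * grad / len(actions)

    print("expert feature expectation :", np.round(mu_expert, 3))
    print("learner feature expectation:", np.round(policy(theta) @ phi, 3))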
Open Problem: Best Arm Identification: Almost Instance-Wise Optimality
and the Gap Entropy Conjecture | cs.LG | The best arm identification problem (BEST-1-ARM) is the most basic pure
exploration problem in stochastic multi-armed bandits. The problem has a long
history and has attracted significant attention over the last decade. However, we do
not yet have a complete understanding of the optimal sample complexity of the
problem: The state-of-the-art algorithms achieve a sample complexity of
$O(\sum_{i=2}^{n} \Delta_{i}^{-2}(\ln\delta^{-1} + \ln\ln\Delta_i^{-1}))$
($\Delta_{i}$ is the difference between the largest mean and the $i^{th}$
mean), while the best known lower bound is $\Omega(\sum_{i=2}^{n}
\Delta_{i}^{-2}\ln\delta^{-1})$ for general instances and $\Omega(\Delta^{-2}
\ln\ln \Delta^{-1})$ for the two-arm instances. We propose to study the
instance-wise optimality for the BEST-1-ARM problem. Previous work has proved
that it is impossible to have an instance optimal algorithm for the 2-arm
problem. However, we conjecture that modulo the additive term
$\Omega(\Delta_2^{-2} \ln\ln \Delta_2^{-1})$ (which is an upper bound and worst
case lower bound for the 2-arm problem), there is an instance optimal algorithm
for BEST-1-ARM. Moreover, we introduce a new quantity, called the gap entropy
for a best-arm problem instance, and conjecture that it is the instance-wise
lower bound. Hence, resolving this conjecture would provide a final answer to
the old and basic problem.
| Lijie Chen and Jian Li | null | 1605.08481 | null | null |
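To make the two displayed bounds concrete, here is the arithmetic for one small instance, ignoring the constants hidden by the $O(\cdot)$ and $\Omega(\cdot)$ notation; the arm means and $\delta$ below are made up purely for illustration.

    import numpy as np

    means = np.array([0.9, 0.85, 0.8, 0.7, 0.6])   # arm means, arm 1 is best
    delta = 0.05                                    # allowed failure probability

    gaps = means[0] - means[1:]                     # Delta_i for i = 2..n

    # Upper bound (constants dropped): sum_i Delta_i^-2 (ln 1/delta + ln ln 1/Delta_i)
    upper = np.sum(gaps ** -2 * (np.log(1 / delta) + np.log(np.log(1 / gaps))))

    # General lower bound (constants dropped): sum_i Delta_i^-2 ln 1/delta
    lower = np.sum(gaps ** -2 * np.log(1 / delta))

    print("gaps            :", gaps)
    print("upper-bound term:", round(upper, 1))
    print("lower-bound term:", round(lower, 1))
    # The extra ln ln Delta_i^-1 factor is exactly the slack that the
    # gap-entropy conjecture aims to pin down.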
Provable Algorithms for Inference in Topic Models | cs.LG stat.ML | Recently, there has been considerable progress on designing algorithms with
provable guarantees -- typically using linear algebraic methods -- for
parameter learning in latent variable models. But designing provable algorithms
for inference has proven to be more challenging. Here we take a first step
towards provable inference in topic models. We leverage a property of topic
models that enables us to construct simple linear estimators for the unknown
topic proportions that have small variance, and consequently can work with
short documents. Our estimators also correspond to finding an estimate around
which the posterior is well-concentrated. We also show lower bounds implying that
for shorter documents it can be information-theoretically impossible to recover the
hidden topics. Finally, we give empirical results demonstrating that our algorithm
works on realistic topic models. It yields good solutions on synthetic data and
runs in time comparable to a {\em single} iteration of Gibbs sampling.
| Sanjeev Arora, Rong Ge, Frederic Koehler, Tengyu Ma, Ankur Moitra | null | 1605.08491 | null | null |
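A generic sketch of what a "simple linear estimator for the unknown topic proportions" can look like: apply a fixed matrix (here the pseudoinverse of the topic-word matrix) to the document's empirical word distribution and project back onto the simplex. The paper constructs its own low-variance estimators; this pseudoinverse version is only a stand-in, and the names and synthetic data are hypothetical.

    import numpy as np

    rng = np.random.default_rng(5)

    V, K, doc_len = 1000, 5, 50                       # vocabulary, topics, words per document
    A = rng.dirichlet(np.full(V, 0.05), size=K).T     # V x K topic-word matrix

    # Generate one short document from true proportions theta.
    theta_true = rng.dirichlet(np.full(K, 0.3))
    words = rng.choice(V, size=doc_len, p=A @ theta_true)
    counts = np.bincount(words, minlength=V) / doc_len   # empirical word distribution

    # Linear estimator: a fixed matrix B (here the pseudoinverse of A) applied to counts.
    B = np.linalg.pinv(A)
    theta_hat = B @ counts

    # Crude projection back onto the probability simplex.
    theta_hat = np.clip(theta_hat, 0, None)
    theta_hat /= theta_hat.sum()

    print("true proportions     :", np.round(theta_true, 2))
    print("estimated proportions:", np.round(theta_hat, 2))
    # With only 50 words the estimate is noisy -- exactly the short-document
    # regime that the abstract's lower bounds concern.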
Universum Learning for SVM Regression | cs.LG | This paper extends the idea of Universum learning [18, 19] to regression
problems. We propose a new Universum-SVM formulation for regression problems that
incorporates a priori knowledge in the form of additional data samples. These
additional data samples or Universum belong to the same application domain as
the training samples, but they follow a different distribution. Several
empirical comparisons are presented to illustrate the utility of the proposed
approach.
| Sauptik Dhar, Vladimir Cherkassky | null | 1605.08497 | null | null |