title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Graphical Model Sketch | cs.DS cs.LG stat.ML | Structured high-cardinality data arises in many domains, and poses a major
challenge for both modeling and inference. Graphical models are a popular
approach to modeling structured data but they are unsuitable for
high-cardinality variables. The count-min (CM) sketch is a popular approach to
estimating probabilities in high-cardinality data but it does not scale well
beyond a few variables. In this work, we bring together the ideas of graphical
models and count sketches, and propose and analyze several approaches to
estimating probabilities in structured high-cardinality streams of data. The
key idea of our approximations is to use the structure of a graphical model and
approximately estimate its factors by "sketches", which hash high-cardinality
variables using random projections. Our approximations are computationally
efficient and their space complexity is independent of the cardinality of
variables. Our error bounds are multiplicative and significantly improve upon
those of the CM sketch, a state-of-the-art approach to estimating probabilities
in streams. We evaluate our approximations on synthetic and real-world
problems, and report order-of-magnitude improvements over the CM sketch.
| Branislav Kveton, Hung Bui, Mohammad Ghavamzadeh, Georgios
Theocharous, S. Muthukrishnan, and Siqi Sun | null | 1602.03105 | null | null |
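The count-min sketch that this abstract uses as its baseline is simple to implement. Below is a minimal Python sketch of the standard data structure (an illustration, not the authors' code; the width, depth, and salting scheme are arbitrary choices) showing why its point estimates never undercount:

```python
import random

class CountMinSketch:
    """Minimal count-min (CM) sketch: `depth` hash rows of size `width`.

    Estimates are biased upward by hash collisions; widening the rows
    shrinks the error, adding rows shrinks the failure probability.
    """
    def __init__(self, width=2048, depth=5, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        # One random salt per row stands in for an independent hash function.
        self.salts = [rng.getrandbits(64) for _ in range(depth)]

    def _index(self, row, item):
        return hash((self.salts[row], item)) % self.width

    def update(self, item, count=1):
        for r in range(self.depth):
            self.tables[r][self._index(r, item)] += count

    def estimate(self, item):
        # Taking the minimum over rows discards most collision inflation.
        return min(self.tables[r][self._index(r, item)]
                   for r in range(self.depth))

cms = CountMinSketch()
for token in ["a", "b", "a", "c", "a"]:
    cms.update(token)
print(cms.estimate("a"))  # never below the true count of 3
```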
DCM Bandits: Learning to Rank with Multiple Clicks | cs.LG stat.ML | A search engine recommends to the user a list of web pages. The user examines
this list, from the first page to the last, and clicks on all attractive pages
until the user is satisfied. This behavior of the user can be described by the
dependent click model (DCM). We propose DCM bandits, an online learning variant
of the DCM where the goal is to maximize the probability of recommending
satisfactory items, such as web pages. The main challenge of our learning
problem is that we do not observe which attractive item is satisfactory. We
propose a computationally efficient learning algorithm for solving our problem,
dcmKL-UCB; derive gap-dependent upper bounds on its regret under reasonable
assumptions; and also prove a matching lower bound up to logarithmic factors.
We evaluate our algorithm on synthetic and real-world problems, and show that
it performs well even when our model is misspecified. This work presents the
first practical and regret-optimal online algorithm for learning to rank with
multiple clicks in a cascade-like click model.
| Sumeet Katariya, Branislav Kveton, Csaba Szepesvári, Zheng Wen | null | 1602.03146 | null | null |
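The algorithm's name suggests it builds on KL-UCB-style optimistic indices. As background, here is a sketch of the standard Bernoulli KL-UCB index computation (a generic illustration, not the authors' dcmKL-UCB; the exploration constant `c` is a common default, not taken from the paper):

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, c=3.0, iters=30):
    """Largest q >= mean with pulls * KL(mean, q) <= log t + c log log t."""
    if pulls == 0:
        return 1.0
    budget = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / pulls
    lo, hi = mean, 1.0
    for _ in range(iters):  # bisection works because KL(mean, q) grows in q
        mid = (lo + hi) / 2.0
        if kl_bernoulli(mean, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# An item clicked 10 times in 25 recommendations, evaluated at round t = 1000.
print(kl_ucb_index(mean=0.4, pulls=25, t=1000))
```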
Learning Efficient Algorithms with Hierarchical Attentive Memory | cs.LG | In this paper, we propose and investigate a novel memory architecture for
neural networks called Hierarchical Attentive Memory (HAM). It is based on a
binary tree with leaves corresponding to memory cells. This allows HAM to
perform memory access in O(log n) complexity, which is a significant
improvement over the standard attention mechanism that requires O(n)
operations, where n is the size of the memory.
We show that an LSTM network augmented with HAM can learn algorithms for
problems like merging, sorting or binary searching from pure input-output
examples. In particular, it learns to sort n numbers in time O(n log n) and
generalizes well to input sequences much longer than the ones seen during the
training. We also show that HAM can be trained to act like classic data
structures: a stack, a FIFO queue and a priority queue.
| Marcin Andrychowicz and Karol Kurach | null | 1602.03218 | null | null |
Discriminative Regularization for Generative Models | stat.ML cs.LG | We explore the question of whether the representations learned by classifiers
can be used to enhance the quality of generative models. Our conjecture is that
labels correspond to characteristics of natural data which are most salient to
humans: identity in faces, objects in images, and utterances in speech. We
propose to take advantage of this by using the representations from
discriminative classifiers to augment the objective function corresponding to a
generative model. In particular we enhance the objective function of the
variational autoencoder, a popular generative model, with a discriminative
regularization term. We show that enhancing the objective function in this way
leads to samples that are clearer and have higher visual quality than the
samples from the standard variational autoencoders.
| Alex Lamb, Vincent Dumoulin and Aaron Courville | null | 1602.03220 | null | null |
Interactive Bayesian Hierarchical Clustering | cs.LG | Clustering is a powerful tool in data analysis, but it is often difficult to
find a grouping that aligns with a user's needs. To address this, several
methods incorporate constraints obtained from users into clustering algorithms,
but unfortunately do not apply to hierarchical clustering. We design an
interactive Bayesian algorithm that incorporates user interaction into
hierarchical clustering while still utilizing the geometry of the data by
sampling a constrained posterior distribution over hierarchies. We also suggest
several ways to intelligently query a user. The algorithm, along with the
querying schemes, shows promising results on real data.
| Sharad Vikram, Sanjoy Dasgupta | null | 1602.03258 | null | null |
A Theory of Generative ConvNet | stat.ML cs.LG | We show that a generative random field model, which we call generative
ConvNet, can be derived from the commonly used discriminative ConvNet, by
assuming a ConvNet for multi-category classification and assuming one of the
categories is a base category generated by a reference distribution. If we
further assume that the non-linearity in the ConvNet is Rectified Linear Unit
(ReLU) and the reference distribution is Gaussian white noise, then we obtain a
generative ConvNet model that is unique among energy-based models: The model is
piecewise Gaussian, and the means of the Gaussian pieces are defined by an
auto-encoder, where the filters in the bottom-up encoding become the basis
functions in the top-down decoding, and the binary activation variables
detected by the filters in the bottom-up convolution process become the
coefficients of the basis functions in the top-down deconvolution process. The
Langevin dynamics for sampling the generative ConvNet is driven by the
reconstruction error of this auto-encoder. The contrastive divergence learning
of the generative ConvNet reconstructs the training images by the auto-encoder.
The maximum likelihood learning algorithm can synthesize realistic natural
image patterns.
| Jianwen Xie, Yang Lu, Song-Chun Zhu, Ying Nian Wu | null | 1602.03264 | null | null |
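The Langevin dynamics mentioned in this abstract is easy to state generically. Below is a sketch of unadjusted Langevin sampling for an arbitrary energy gradient, demonstrated on a toy quadratic energy rather than a ConvNet (the step size and iteration counts are arbitrary illustrative choices):

```python
import numpy as np

def langevin_sample(grad_energy, x0, step=0.01, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics targeting exp(-E(x)).

    Each step: x <- x - (step/2) * dE/dx + sqrt(step) * standard normal noise.
    """
    rng = rng or np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - 0.5 * step * grad_energy(x) + np.sqrt(step) * noise
    return x

# Toy check on E(x) = ||x||^2 / 2, whose Gibbs distribution is N(0, I).
chains = [langevin_sample(lambda x: x, np.zeros(2),
                          rng=np.random.default_rng(s)) for s in range(200)]
print(np.std(np.array(chains), axis=0))  # roughly 1.0 per coordinate
```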
Iterative Hierarchical Optimization for Misspecified Problems (IHOMP) | cs.LG cs.AI | For complex, high-dimensional Markov Decision Processes (MDPs), it may be
necessary to represent the policy with function approximation. A problem is
misspecified whenever the representation cannot express any policy with
acceptable performance. We introduce IHOMP: an approach for solving
misspecified problems. IHOMP iteratively learns a set of context-specialized
options and combines these options to solve an otherwise misspecified problem.
Our main contribution is proving that IHOMP enjoys theoretical convergence
guarantees. In addition, we extend IHOMP to exploit Option Interruption (OI)
enabling it to decide where the learned options can be reused. Our experiments
demonstrate that IHOMP can find near-optimal solutions to otherwise
misspecified problems and that OI can further improve the solutions.
| Daniel J. Mankowitz, Timothy A. Mann, Shie Mannor | null | 1602.03348 | null | null |
Adaptive Skills, Adaptive Partitions (ASAP) | cs.LG cs.AI stat.ML | We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that
(1) learns skills (i.e., temporally extended actions or options) as well as (2)
where to apply them. We believe that both (1) and (2) are necessary for a truly
general skill learning framework, which is a key building block needed to scale
up to lifelong learning agents. The ASAP framework can also solve related new
tasks simply by adapting where it applies its existing learned skills. We prove
that ASAP converges to a local optimum under natural conditions. Finally, our
experimental results, which include a RoboCup domain, demonstrate the ability
of ASAP to learn where to reuse skills as well as solve multiple tasks with
considerably less experience than solving each task from scratch.
| Daniel J. Mankowitz, Timothy A. Mann, Shie Mannor | null | 1602.03351 | null | null |
Fast model selection by limiting SVM training times | stat.ML cs.LG | Kernelized Support Vector Machines (SVMs) are among the best performing
supervised learning methods. But for optimal predictive performance,
time-consuming parameter tuning is crucial, which impedes application. To
tackle this problem, the classic model selection procedure based on grid-search
and cross-validation was refined, e.g. by data subsampling and direct search
heuristics. Here we focus on a different aspect, the stopping criterion for SVM
training. We show that by limiting the training time given to the SVM solver
during parameter tuning we can reduce model selection times by an order of
magnitude.
| Aydin Demircioglu, Daniel Horn, Tobias Glasmachers, Bernd Bischl,
Claus Weihs | null | 1602.03368 | null | null |
Conditional Dependence via Shannon Capacity: Axioms, Estimators and
Applications | cs.IT cs.LG math.IT stat.ML | We conduct an axiomatic study of the problem of estimating the strength of a
known causal relationship between a pair of variables. We propose that an
estimate of causal strength should be based on the conditional distribution of
the effect given the cause (and not on the driving distribution of the cause),
and study dependence measures on conditional distributions. Shannon capacity,
appropriately regularized, emerges as a natural measure under these axioms. We
examine the problem of calculating Shannon capacity from the observed samples
and propose a novel fixed-$k$ nearest neighbor estimator, and demonstrate its
consistency. Finally, we demonstrate an application to single-cell
flow-cytometry, where the proposed estimators significantly reduce sample
complexity.
| Weihao Gao, Sreeram Kannan, Sewoong Oh, Pramod Viswanath | null | 1602.03476 | null | null |
Achieving Budget-optimality with Adaptive Schemes in Crowdsourcing | cs.LG cs.HC cs.SI stat.ML | Crowdsourcing platforms provide marketplaces where task requesters can pay to
get labels on their data. Such markets have emerged recently as popular venues
for collecting annotations that are crucial in training machine learning models
in various applications. However, as jobs are tedious and payments are low,
errors are common in such crowdsourced labels. A common strategy to overcome
such noise in the answers is to add redundancy by getting multiple answers for
each task and aggregating them using some methods such as majority voting. For
such a system, there is a fundamental question of interest: how can we maximize
the accuracy given a fixed budget on how many responses we can collect on the
crowdsourcing system? We characterize this fundamental trade-off between the
budget (how many answers the requester can collect in total) and the accuracy
in the estimated labels. In particular, we ask whether adaptive task assignment
schemes lead to a more efficient trade-off between the accuracy and the budget.
Adaptive schemes, where tasks are assigned adaptively based on the data
collected thus far, are widely used in practical crowdsourcing systems to
efficiently use a given fixed budget. However, existing theoretical analyses of
crowdsourcing systems suggest that the gain of adaptive task assignments is
minimal. To bridge this gap, we investigate this question under a strictly more
general probabilistic model, which has been recently introduced to model
practical crowdsourced annotations. Under this generalized Dawid-Skene model,
we characterize the fundamental trade-off between budget and accuracy. We
introduce a novel adaptive scheme that matches this fundamental limit. We
further quantify the fundamental gap between adaptive and non-adaptive schemes,
by comparing the trade-off with the one for non-adaptive schemes. Our analyses
confirm that the gap is significant.
| Ashish Khetan and Sewoong Oh | null | 1602.03481 | null | null |
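The non-adaptive baseline this abstract mentions, redundant labels aggregated by majority vote, fits in a few lines. A minimal sketch with made-up task labels:

```python
from collections import Counter

def majority_vote(responses):
    """Aggregate redundant worker labels per task by plurality vote.

    responses: dict mapping task_id -> list of labels from different workers.
    This is the simple non-adaptive aggregation the paper improves upon.
    """
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in responses.items()}

print(majority_vote({"t1": [1, 1, 0], "t2": [0, 0, 1, 0]}))  # {'t1': 1, 't2': 0}
```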
Learning Distributed Representations of Sentences from Unlabelled Data | cs.CL cs.LG | Unsupervised methods for learning distributed representations of words are
ubiquitous in today's NLP research, but far less is known about the best ways
to learn distributed phrase or sentence representations from unlabelled data.
This paper is a systematic comparison of models that learn such
representations. We find that the optimal approach depends critically on the
intended application. Deeper, more complex models are preferable for
representations to be used in supervised systems, but shallow log-linear models
work best for building representation spaces that can be decoded with simple
spatial distance metrics. We also propose two new unsupervised
representation-learning objectives designed to optimise the trade-off between
training time, domain portability and performance.
| Felix Hill, Kyunghyun Cho, Anna Korhonen | null | 1602.03483 | null | null |
Unsupervised Transductive Domain Adaptation | stat.ML cs.LG | Supervised learning with large scale labeled datasets and deep layered models
has made a paradigm shift in diverse areas in learning and recognition.
However, this approach still suffers from generalization issues in the presence
of a domain shift between the training and the test data distributions. In this
regard, unsupervised domain adaptation algorithms have been proposed to
directly address the domain shift problem. In this paper, we approach the
problem from a transductive perspective. We incorporate the domain shift and
the transductive target inference into our framework by jointly solving for an
asymmetric similarity metric and the optimal transductive target label
assignment. We also show that our model can easily be extended for deep feature
learning in order to learn features which are discriminative in the target
domain. Our experiments show that the proposed method significantly outperforms
state-of-the-art algorithms in both object recognition and digit classification
experiments by a large margin.
| Ozan Sener, Hyun Oh Song, Ashutosh Saxena, Silvio Savarese | null | 1602.03534 | null | null |
Learning Privately from Multiparty Data | cs.LG cs.CR | Learning a classifier from private data collected by multiple parties is an
important problem that has many potential applications. How can we build an
accurate and differentially private global classifier by combining
locally-trained classifiers from different parties, without access to any
party's private data? We propose to transfer the 'knowledge' of the local
classifier ensemble by first creating labeled data from auxiliary unlabeled
data, and then training a global $\epsilon$-differentially private classifier. We
show that majority voting is too sensitive and therefore propose a new risk
weighted by class probabilities estimated from the ensemble. Relative to a
non-private solution, our private solution has a generalization error bounded
by $O(\epsilon^{-2}M^{-2})$ where $M$ is the number of parties. This allows
strong privacy without performance loss when $M$ is large, such as in
crowdsensing applications. We demonstrate the performance of our method with
realistic tasks of activity recognition, network intrusion detection, and
malicious URL detection.
| Jihun Hamm, Paul Cao, Mikhail Belkin | null | 1602.03552 | null | null |
High Dimensional Inference with Random Maximum A-Posteriori
Perturbations | cs.LG cs.IT math.IT stat.ML | This paper presents a new approach, called perturb-max, for high-dimensional
statistical inference that is based on applying random perturbations followed
by optimization. This framework injects randomness to maximum a-posteriori
(MAP) predictors by randomly perturbing the potential function for the input. A
classic result from extreme value statistics asserts that perturb-max
operations generate unbiased samples from the Gibbs distribution using
high-dimensional perturbations. Unfortunately, the computational cost of
generating so many high-dimensional random variables can be prohibitive.
However, when the perturbations are of low dimension, sampling the perturb-max
prediction is as efficient as MAP optimization. This paper shows that the
expected value of perturb-max inference with low dimensional perturbations can
be used sequentially to generate unbiased samples from the Gibbs distribution.
Furthermore, the expected value of the maximal perturbations is a natural bound
on the entropy of such perturb-max models. A measure concentration result for
perturb-max values shows that the deviation of their sampled average from its
expectation decays exponentially in the number of samples, allowing effective
approximation of the expectation.
| Tamir Hazan, Francesco Orabona, Anand D. Sarwate, Subhransu Maji,
Tommi Jaakkola | null | 1602.03571 | null | null |
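The extreme-value result this abstract refers to is the Gumbel-max trick: perturbing each potential with i.i.d. Gumbel noise and taking the argmax yields an exact Gibbs sample. A minimal illustration of the full (high-dimensional) perturbation regime, not the paper's cheaper low-dimensional scheme:

```python
import numpy as np

def perturb_max_sample(potentials, rng):
    """Full perturb-max: argmax_i (theta_i + g_i) with g_i i.i.d. Gumbel(0, 1).

    This returns an exact sample from p_i proportional to exp(theta_i); the
    paper studies low-dimensional perturbations that trade away this exactness
    for computational tractability.
    """
    return int(np.argmax(potentials + rng.gumbel(size=potentials.shape)))

rng = np.random.default_rng(0)
theta = np.array([1.0, 2.0, 0.5])
draws = [perturb_max_sample(theta, rng) for _ in range(10000)]
print(np.bincount(draws, minlength=3) / 10000)  # empirical frequencies
print(np.exp(theta) / np.exp(theta).sum())      # Gibbs probabilities
```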
Data-Driven Online Decision Making with Costly Information Acquisition | stat.ML cs.LG | In most real-world settings such as recommender systems, finance, and
healthcare, collecting useful information is costly and requires an active
choice on the part of the decision-maker, who needs to learn
simultaneously what observations to make and what actions to take. This paper
incorporates the information acquisition decision into an online learning
framework. We propose two different algorithms for this dual learning problem:
Sim-OOS and Seq-OOS where observations are made simultaneously and
sequentially, respectively. We prove that both algorithms achieve a regret that
is sublinear in time. The developed framework and algorithms can be used in
many applications including medical informatics, recommender systems and
actionable intelligence in transportation, finance, cyber-security etc., in
which collecting information prior to making decisions is costly. We validate
our algorithms in a breast cancer example setting in which we show substantial
performance gains for our proposed algorithms.
| Onur Atan, Mihaela van der Schaar | null | 1602.03600 | null | null |
Attentive Pooling Networks | cs.CL cs.LG | In this work, we propose Attentive Pooling (AP), a two-way attention
mechanism for discriminative model training. In the context of pair-wise
ranking or classification with neural networks, AP enables the pooling layer to
be aware of the current input pair, in a way that information from the two
input items can directly influence the computation of each other's
representations. Along with such representations of the paired inputs, AP
jointly learns a similarity measure over projected segments (e.g. trigrams) of
the pair, and subsequently, derives the corresponding attention vector for each
input to guide the pooling. Our two-way attention mechanism is a general
framework independent of the underlying representation learning, and it has
been applied to both convolutional neural networks (CNNs) and recurrent neural
networks (RNNs) in our studies. The empirical results, from three very
different benchmark tasks of question answering/answer selection, demonstrate
that our proposed models outperform a variety of strong baselines and achieve
state-of-the-art performance in all the benchmarks.
| Cicero dos Santos, Ming Tan, Bing Xiang, Bowen Zhou | null | 1602.03609 | null | null |
Optimal Inference in Crowdsourced Classification via Belief Propagation | cs.LG stat.ML | Crowdsourcing systems are popular for solving large-scale labelling tasks
with low-paid workers. We study the problem of recovering the true labels from
the possibly erroneous crowdsourced labels under the popular Dawid-Skene model.
To address this inference problem, several algorithms have recently been
proposed, but the best known guarantee is still significantly larger than the
fundamental limit. We close this gap by introducing a tighter lower bound on
the fundamental limit and proving that Belief Propagation (BP) exactly matches
this lower bound. The guaranteed optimality of BP is the strongest in the sense
that it is information-theoretically impossible for any other algorithm to
correctly label a larger fraction of the tasks. Experimental results suggest
that BP is close to optimal for all regimes considered and improves upon
competing state-of-the-art algorithms.
| Jungseul Ok, Sewoong Oh, Jinwoo Shin and Yung Yi | null | 1602.03619 | null | null |
On the Difficulty of Selecting Ising Models with Approximate Recovery | cs.IT cs.LG cs.SI math.IT stat.ML | In this paper, we consider the problem of estimating the underlying graph
associated with an Ising model given a number of independent and identically
distributed samples. We adopt an \emph{approximate recovery} criterion that
allows for a number of missed edges or incorrectly-included edges, in contrast
with the widely-studied exact recovery problem. Our main results provide
information-theoretic lower bounds on the sample complexity for graph classes
imposing constraints on the number of edges, maximal degree, and other
properties. We identify a broad range of scenarios where, either up to constant
factors or logarithmic factors, our lower bounds match the best known lower
bounds for the exact recovery criterion, several of which are known to be tight
or near-tight. Hence, in these cases, approximate recovery has a similar
difficulty to exact recovery in the minimax sense.
Our bounds are obtained via a modification of Fano's inequality for handling
the approximate recovery criterion, along with suitably-designed ensembles of
graphs that can broadly be classed into two categories: (i) those containing
graphs with several isolated edges or cliques, which are thus difficult to
distinguish from the empty graph; (ii) those containing graphs in which
certain groups of nodes are highly correlated, making it difficult to
determine precisely which edges connect them. We support our theoretical
results on these ensembles with numerical experiments.
| Jonathan Scarlett and Volkan Cevher | null | 1602.03647 | null | null |
Medical Concept Representation Learning from Electronic Health Records
and its Application on Heart Failure Prediction | cs.LG cs.NE | Objective: To transform heterogeneous clinical data from electronic health
records into clinically meaningful constructed features using data-driven
methods that rely, in part, on temporal relations among the data. Materials and
Methods: The clinically meaningful representations of medical concepts and
patients are key for health analytic applications. Most existing
approaches directly construct features mapped to raw data (e.g., ICD or CPT
codes), or utilize some ontology mapping such as SNOMED codes. However, none of
the existing approaches leverage EHR data directly for learning such concept
representation. We propose a new way to represent heterogeneous medical
concepts (e.g., diagnoses, medications and procedures) based on co-occurrence
patterns in longitudinal electronic health records. The intuition behind the
method is to map medical concepts that co-occur closely in time to
similar concept vectors so that their distance will be small. We also derive a
simple method to construct patient vectors from the related medical concept
vectors. Results: For qualitative evaluation, we study similar medical concepts
across diagnosis, medication and procedure. In quantitative evaluation, our
proposed representation significantly improves the predictive modeling
performance for onset of heart failure (HF), where classification methods (e.g.
logistic regression, neural network, support vector machine and K-nearest
neighbors) achieve up to 23% improvement in area under the ROC curve (AUC)
using this proposed representation. Conclusion: We proposed an effective method
for patient and medical concept representation learning. The resulting
representation can map relevant concepts together and also improves predictive
modeling performance.
| Edward Choi, Andy Schuetz, Walter F. Stewart, Jimeng Sun | null | 1602.03686 | null | null |
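One plausible realization of this co-occurrence idea, not necessarily the authors' exact model, is to run a skip-gram model over each patient's time-ordered code sequence. The sketch below assumes the gensim library; all medical codes shown are hypothetical placeholders:

```python
from gensim.models import Word2Vec

# Each "sentence" is one patient's time-ordered sequence of medical codes, so
# codes that co-occur closely in time receive nearby vectors.
patient_sequences = [
    ["ICD:428.0", "RX:furosemide", "CPT:93306"],
    ["ICD:428.0", "RX:lisinopril", "ICD:401.9"],
    ["ICD:401.9", "RX:lisinopril", "CPT:93000"],
]
model = Word2Vec(patient_sequences, vector_size=16, window=5,
                 min_count=1, sg=1, epochs=50, seed=0)
print(model.wv.most_similar("ICD:428.0", topn=2))
```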
Network of Bandits insure Privacy of end-users | cs.AI cs.DC cs.LG | In order to distribute the best arm identification task as close as possible
to the user's devices, on the edge of the Radio Access Network, we propose a
new problem setting, where distributed players collaborate to find the best
arm. This architecture guarantees privacy to end-users since no events are
stored. The only thing that can be observed by an adversary through the core
network is aggregated information across users. We provide a first algorithm,
Distributed Median Elimination, which is optimal in terms of the number of
transmitted bits and near optimal in terms of the speed-up factor with respect to an
optimal algorithm run independently on each player. In practice, this first
algorithm cannot handle the trade-off between the communication cost and the
speed-up factor, and requires some knowledge about the distribution of players.
Extended Distributed Median Elimination overcomes these limitations, by playing
in parallel different instances of Distributed Median Elimination and selecting
the best one. Experiments illustrate and complete the analysis. According to
the analysis, in comparison to Median Elimination performed on each player, the
proposed algorithm shows significant practical improvements.
| Rapha\"el F\'eraud | null | 1602.03779 | null | null |
Semi-supervised Learning with Explicit Relationship Regularization | cs.CV cs.LG | In many learning tasks, the structure of the target space of a function holds
rich information about the relationships between evaluations of functions on
different data points. Existing approaches attempt to exploit this relationship
information implicitly by enforcing smoothness on function evaluations only.
However, what happens if we explicitly regularize the relationships between
function evaluations? Inspired by homophily, we regularize based on a smooth
relationship function, either defined from the data or with labels. In
experiments, we demonstrate that this significantly improves the performance of
state-of-the-art algorithms in semi-supervised classification and in spectral
data embedding for constrained clustering and dimensionality reduction.
| Kwang In Kim and James Tompkin and Hanspeter Pfister and Christian
Theobalt | 10.1109/CVPR.2015.7298831 | 1602.03808 | null | null |
A Critical Connectivity Radius for Segmenting Randomly-Generated, High
Dimensional Data Points | cs.LG | Motivated by a $2$-dimensional (unsupervised) image segmentation task whereby
local regions of pixels are clustered via edge detection methods, a more
general probabilistic mathematical framework is devised. Critical thresholds
are calculated that indicate strong correlation between randomly-generated,
high dimensional data points that have been projected into structures in a
partition of a bounded, $2$-dimensional area, of which an image is a special
case. A neighbor concept for structures in the partition is defined and a
critical radius is uncovered. Measured from a central structure in localized
regions of the partition, the radius indicates strong, long and short range
correlation in the count of occupied structures. The size of a short interval
of radii is estimated upon which the transition from short-to-long range
correlation is virtually assured, which defines a demarcation of when an image
ceases to be "interesting".
| Robert A. Murphy | null | 1602.03822 | null | null |
Community Recovery in Graphs with Locality | cs.IT cs.LG cs.SI math.IT math.ST q-bio.GN stat.TH | Motivated by applications in domains such as social networks and
computational biology, we study the problem of community recovery in graphs
with locality. In this problem, pairwise noisy measurements of whether two
nodes are in the same community or different communities come mainly or
exclusively from nearby nodes, rather than being uniformly sampled across all node
pairs as in most existing models. We present an algorithm that runs in nearly
linear time in the number of measurements and achieves the
information-theoretic limit for exact recovery.
| Yuxin Chen, Govinda Kamath, Changho Suh, David Tse | null | 1602.03828 | null | null |
Wavelet-Based Semantic Features for Hyperspectral Signature
Discrimination | cs.CV cs.LG | Hyperspectral signature classification is a quantitative analysis approach
for hyperspectral imagery which performs detection and classification of the
constituent materials at the pixel level in the scene. The classification
procedure can be operated directly on hyperspectral data or performed by using
some features extracted from the corresponding hyperspectral signatures
containing information like the signature's energy or shape. In this paper, we
describe a technique that applies non-homogeneous hidden Markov chain (NHMC)
models to hyperspectral signature classification. The basic idea is to use
statistical models (such as NHMC) to characterize wavelet coefficients which
capture the spectrum semantics (i.e., structural information) at multiple
levels. Experimental results show that the approach based on NHMC models can
outperform existing relevant approaches on classification tasks.
| Siwei Feng, Yuki Itoh, Mario Parente, and Marco F. Duarte | null | 1602.03903 | null | null |
Second-Order Stochastic Optimization for Machine Learning in Linear Time | stat.ML cs.LG | First-order stochastic methods are the state-of-the-art in large-scale
machine learning optimization owing to efficient per-iteration complexity.
Second-order methods, while able to provide faster convergence, have been much
less explored due to the high cost of computing the second-order information.
In this paper we develop second-order stochastic methods for optimization
problems in machine learning that match the per-iteration cost of gradient
based methods, and in certain settings improve upon the overall running time
over popular first-order methods. Furthermore, our algorithm has the desirable
property of being implementable in time linear in the sparsity of the input
data.
| Naman Agarwal, Brian Bullins, Elad Hazan | null | 1602.03943 | null | null |
General Vector Machine | stat.ML cs.LG | The support vector machine (SVM) is an important class of learning machines
for function approximation, pattern recognition, and time-series prediction.
It maps samples into the feature space by so-called support vectors of selected
samples, and the feature vectors are then separated by a maximum-margin
hyperplane. The present paper presents the general vector machine (GVM) to
replace the SVM. The support vectors are replaced by general projection vectors
selected from the usual vector space, and a Monte Carlo (MC) algorithm is
developed to find the general vectors. The general projection vectors improve
the feature-extraction ability, and the MC algorithm can control the width of
the separation margin of the hyperplane. By controlling the separation margin,
we show that the maximum-margin hyperplane can usually induce overlearning, and
that the best learning machine is achieved with a proper separation margin.
Applications in function approximation, pattern recognition, and classification
indicate that the developed method is very successful, particularly for
small-set training problems. Additionally, our algorithm may enable particular
applications, such as transductive inference.
| Hong Zhao | null | 1602.03950 | null | null |
Orthogonal Sparse PCA and Covariance Estimation via Procrustes
Reformulation | stat.ML cs.LG math.OC stat.AP | The problem of estimating sparse eigenvectors of a symmetric matrix attracts
a lot of attention in many applications, especially those with high-dimensional
data sets. While classical eigenvectors can be obtained as the solution of a
maximization problem, existing approaches formulate this problem by adding a
penalty term into the objective function that encourages a sparse solution.
However, the resulting methods achieve sparsity at the expense of sacrificing
the orthogonality property. In this paper, we develop a new method to estimate
dominant sparse eigenvectors without trading off their orthogonality. The
problem is highly non-convex and hard to handle. We apply the MM framework
where we iteratively maximize a tight lower bound (surrogate function) of the
objective function over the Stiefel manifold. The inner maximization problem
turns out to be a rectangular Procrustes problem, which has a closed form
solution. In addition, we propose a method to improve the covariance estimation
problem when its underlying eigenvectors are known to be sparse. We use the
eigenvalue decomposition of the covariance matrix to formulate an optimization
problem where we impose sparsity on the corresponding eigenvectors. Numerical
experiments show that the proposed eigenvector extraction algorithm matches or
outperforms existing algorithms in terms of support recovery and explained
variance, while the covariance estimation algorithms significantly improve on the
sample covariance estimator.
| Konstantinos Benidis, Ying Sun, Prabhu Babu, and Daniel P. Palomar | 10.1109/TSP.2016.2605073 | 1602.03992 | null | null |
Using Deep Q-Learning to Control Optimization Hyperparameters | math.OC cs.LG | We present a novel definition of the reinforcement learning state, actions
and reward function that allows a deep Q-network (DQN) to learn to control an
optimization hyperparameter. Using Q-learning with experience replay, we train
two DQNs to accept a state representation of an objective function as input and
output the expected discounted return of rewards, or q-values, connected to the
actions of either adjusting the learning rate or leaving it unchanged. The two
DQNs learn a policy similar to a line search, but differ in the number of
allowed actions. The trained DQNs in combination with a gradient-based update
routine form the basis of the Q-gradient descent algorithms. To demonstrate the
viability of this framework, we show that the DQN's q-values associated with
the optimal action converge and that the Q-gradient descent algorithms outperform
gradient descent with an Armijo or nonmonotone line search. Unlike traditional
optimization methods, Q-gradient descent can incorporate any objective
statistic and by varying the actions we gain insight into the type of learning
rate adjustment strategies that are successful for neural network optimization.
| Samantha Hansen | null | 1602.04062 | null | null |
Convolutional Radio Modulation Recognition Networks | cs.LG cs.CV | We study the adaptation of convolutional neural networks to the complex
temporal radio signal domain. We compare the efficacy of radio modulation
classification using naively learned features against using expert features
which are widely used in the field today and we show significant performance
improvements. We show that blind temporal learning on large and densely encoded
time series using deep convolutional neural networks is viable and a strong
candidate approach for this task, especially at low signal-to-noise ratios.
| Timothy J O'Shea, Johnathan Corgan, T. Charles Clancy | null | 1602.04105 | null | null |
Coin Betting and Parameter-Free Online Learning | cs.LG | In recent years, a number of parameter-free algorithms have been
developed for online linear optimization over Hilbert spaces and for learning
with expert advice. These algorithms achieve optimal regret bounds that depend
on the unknown competitors, without having to tune the learning rates with
oracle choices.
We present a new intuitive framework to design parameter-free algorithms for
\emph{both} online linear optimization over Hilbert spaces and for learning
with expert advice, based on reductions to betting on outcomes of adversarial
coins. We instantiate it using a betting algorithm based on the
Krichevsky-Trofimov estimator. The resulting algorithms are simple, with no
parameters to be tuned, and they improve or match previous results in terms of
regret guarantee and per-round complexity.
| Francesco Orabona, Dávid Pál | null | 1602.04128 | null | null |
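The Krichevsky-Trofimov betting scheme at the heart of this reduction is short enough to state directly. A sketch on a one-dimensional coin sequence (the outcome sequence below is made up):

```python
def kt_coin_betting(outcomes, initial_wealth=1.0):
    """Krichevsky-Trofimov (KT) betting on coin outcomes c_t in [-1, 1].

    Before round t the bettor wagers the signed fraction
    beta_t = (sum of past outcomes) / t of current wealth; no learning
    rate is ever tuned, which is the point of the reduction.
    """
    wealth, past_sum = initial_wealth, 0.0
    for t, c in enumerate(outcomes, start=1):
        beta = past_sum / t        # KT estimate of the coin's bias
        wealth *= 1.0 + beta * c   # win or lose beta * wealth on this outcome
        past_sum += c
    return wealth

print(kt_coin_betting([1, 1, -1, 1, 1, 1]))  # > 1 on a biased coin sequence
```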
Deep Gaussian Processes for Regression using Approximate Expectation
Propagation | stat.ML cs.LG | Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations
of Gaussian processes (GPs) and are formally equivalent to neural networks with
multiple, infinitely wide hidden layers. DGPs are nonparametric probabilistic
models and as such are arguably more flexible, have a greater capacity to
generalise, and provide better calibrated uncertainty estimates than
alternative deep models. This paper develops a new approximate Bayesian
learning scheme that enables DGPs to be applied to a range of medium to large
scale regression problems for the first time. The new method uses an
approximate Expectation Propagation procedure and a novel and efficient
extension of the probabilistic backpropagation algorithm for learning. We
evaluate the new method for non-linear regression on eleven real-world
datasets, showing that it always outperforms GP regression and is almost always
better than state-of-the-art deterministic and sampling-based approximate
inference methods for Bayesian neural networks. As a by-product, this work
provides a comprehensive analysis of six approximate Bayesian methods for
training neural networks.
| Thang D. Bui and Daniel Hernández-Lobato and Yingzhen Li and José
Miguel Hernández-Lobato and Richard E. Turner | null | 1602.04133 | null | null |
Pursuits in Structured Non-Convex Matrix Factorizations | cs.LG stat.ML | Efficiently representing real world data in a succinct and parsimonious
manner is of central importance in many fields. We present a generalized greedy
pursuit framework, allowing us to efficiently solve structured matrix
factorization problems, where the factors are allowed to be from arbitrary sets
of structured vectors. Such structure may include sparsity, non-negativeness,
order, or a combination thereof. The algorithm approximates a given matrix by a
linear combination of few rank-1 matrices, each factorized into an outer
product of two vector atoms of the desired structure. For the non-convex
subproblems of obtaining good rank-1 structured matrix atoms, we employ and
analyze a general atomic power method. In addition to the above applications,
we prove linear convergence for generalized pursuit variants in Hilbert spaces
- for the task of approximation over the linear span of arbitrary dictionaries
- which generalizes OMP and is useful beyond matrix problems. Our experiments
on real datasets confirm both the efficiency and also the broad applicability
of our framework in practice.
| Rajiv Khanna, Michael Tschannen, Martin Jaggi | null | 1602.04208 | null | null |
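A sketch of the greedy pursuit loop in the unconstrained case, where the rank-1 atom oracle is simply the leading singular pair of the residual (the structured variants studied in the paper replace this oracle with constrained atom selection):

```python
import numpy as np

def greedy_rank1_pursuit(M, n_atoms=5):
    """Greedily approximate M by a sum of rank-1 atoms (unconstrained case).

    Each step extracts the leading singular pair of the residual; structured
    variants would restrict the atoms to sparse or non-negative vectors.
    """
    residual = M.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(n_atoms):
        U, s, Vt = np.linalg.svd(residual, full_matrices=False)
        atom = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-1 fit to residual
        approx += atom
        residual -= atom
    return approx

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 8)) @ rng.standard_normal((8, 15))  # rank <= 8
print(np.linalg.norm(M - greedy_rank1_pursuit(M, n_atoms=8)))    # near 0
```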
A Minimalistic Approach to Sum-Product Network Learning for Real
Applications | cs.AI cs.LG stat.ML | Sum-Product Networks (SPNs) are a class of expressive yet tractable
hierarchical graphical models. LearnSPN is a structure learning algorithm for
SPNs that uses hierarchical co-clustering to simultaneously identify similar
entities and similar features. The original LearnSPN algorithm assumes that all
the variables are discrete and there is no missing data. We introduce a
practical, simplified version of LearnSPN, MiniSPN, that runs faster and can
handle missing data and heterogeneous features common in real applications. We
demonstrate the performance of MiniSPN on standard benchmark datasets and on
two datasets from Google's Knowledge Graph exhibiting high missingness rates
and a mix of discrete and continuous features.
| Viktoriya Krakovna, Moshe Looks | null | 1602.04259 | null | null |
Lasso Guarantees for Time Series Estimation Under Subgaussian Tails and
$\beta$-Mixing | stat.ML cs.LG | Many theoretical results on estimation of high dimensional time series
require specifying an underlying data generating model (DGM). Instead, along
the footsteps of~\cite{wong2017lasso}, this paper relies only on (strict)
stationarity and a $\beta$-mixing condition to establish consistency of the lasso
when data comes from a $\beta$-mixing process with marginals having subgaussian
tails. Because of the general assumptions, the data can come from DGMs
different than standard time series models such as VAR or ARCH. When the true
DGM is not VAR, the lasso estimates correspond to those of the best linear
predictors using the past observations. We establish non-asymptotic
inequalities for estimation and prediction errors of the lasso estimates.
Together with~\cite{wong2017lasso}, we provide lasso guarantees that cover the full
spectrum of the parameters in specifications of $\beta$-mixing subgaussian
time series. Applications of these results potentially extend to non-Gaussian,
non-Markovian, and non-linear time series models, as the examples we provide
demonstrate. In order to prove our results, we derive a novel Hanson-Wright
type concentration inequality for $\beta$-mixing subgaussian random vectors
that may be of independent interest.
| Kam Chung Wong, Zifan Li and Ambuj Tewari | null | 1602.04265 | null | null |
Evaluation of Protein Structural Models Using Random Forests | cs.LG q-bio.BM q-bio.QM stat.ML | Protein structure prediction has been a grand challenge problem in
structural biology over the last few decades. Protein quality assessment plays a
very important role in protein structure prediction. In this paper, we propose a
new protein quality assessment method that can predict both the local and global
quality of protein 3D structural models. Our method uses both multi-model and
single-model quality assessment for global quality assessment, and uses
chemical, physical, and geometrical features together with the global quality
score for local quality assessment. CASP9 targets are used to generate the
features for local quality assessment. We evaluate the performance of our local
quality assessment method on CASP10, where it is comparable with two
state-of-the-art QA methods based on the average absolute distance between the
real and predicted distance. In addition, we blindly tested our method on
CASP11, and its good performance shows that combining single- and multi-model
quality assessment methods could be a good way to improve the accuracy of model
quality assessment, and that the random forest technique can be used to train a
good local quality assessment model.
| Renzhi Cao, Taeho Jo, Jianlin Cheng | null | 1602.04277 | null | null |
Conservative Bandits | stat.ML cs.LG | We study a novel multi-armed bandit problem that models the challenge faced
by a company wishing to explore new strategies to maximize revenue whilst
simultaneously maintaining their revenue above a fixed baseline, uniformly over
time. While previous work addressed the problem under the weaker requirement of
maintaining the revenue constraint only at a given fixed time in the future,
the algorithms previously proposed are unsuitable due to their design under the
more stringent constraints. We consider both the stochastic and the adversarial
settings, where we propose natural yet novel strategies and analyze the price
for maintaining the constraints. Amongst other things, we prove both
high-probability and expectation bounds on the regret, and we consider
maintaining the constraints either with high probability or in expectation.
For the adversarial setting, the price of maintaining the
constraint appears to be higher, at least for the algorithm considered. A lower
bound is given showing that the algorithm for the stochastic setting is almost
optimal. Empirical results obtained in synthetic environments complement our
theoretical findings.
| Yifan Wu, Roshan Shariff, Tor Lattimore and Csaba Szepesvári | null | 1602.04282 | null | null |
Deep Learning on FPGAs: Past, Present, and Future | cs.DC cs.LG stat.ML | The rapid growth of data size and accessibility in recent years has
instigated a shift of philosophy in algorithm design for artificial
intelligence. Instead of engineering algorithms by hand, the ability to learn
composable systems automatically from massive amounts of data has led to
ground-breaking performance in important domains such as computer vision,
speech recognition, and natural language processing. The most popular class of
techniques used in these domains is called deep learning, and is seeing
significant attention from industry. However, these models require incredible
amounts of data and compute power to train, and are limited by the need for
better hardware acceleration to accommodate scaling beyond current data and
model sizes. While the current solution has been to use clusters of graphics
processing units (GPU) as general purpose processors (GPGPU), the use of field
programmable gate arrays (FPGA) provide an interesting alternative. Current
trends in design tools for FPGAs have made them more compatible with the
high-level software practices typical of the deep learning
community, making FPGAs more accessible to those who build and deploy models.
Since FPGA architectures are flexible, this could also allow researchers the
ability to explore model-level optimizations beyond what is possible on fixed
architectures such as GPUs. FPGAs also tend to provide high performance per
watt of power consumption, which is of particular importance for application
scientists interested in large scale server-based deployment or
resource-limited embedded applications. This review takes a look at deep
learning and FPGAs from a hardware acceleration perspective, identifying trends
and innovations that make these technologies a natural fit, and motivates a
discussion on how FPGAs may best serve the needs of the deep learning community
moving forward.
| Griffin Lacey, Graham W. Taylor, Shawki Areibi | null | 1602.04283 | null | null |
A Minimax Theory for Adaptive Data Analysis | stat.ML cs.LG | In adaptive data analysis, the user makes a sequence of queries on the data,
where at each step the choice of query may depend on the results in previous
steps. The releases are often randomized in order to reduce overfitting for
such adaptively chosen queries. In this paper, we propose a minimax framework
for adaptive data analysis. Assuming Gaussianity of queries, we establish the
first sharp minimax lower bound on the squared error in the order of
$O(\frac{\sqrt{k}\sigma^2}{n})$, where $k$ is the number of queries asked, and
$\sigma^2/n$ is the ordinary signal-to-noise ratio for a single query. Our
lower bound is based on the construction of an approximately least favorable
adversary who picks a sequence of queries that are most likely to be affected
by overfitting. This approximately least favorable adversary uses only one
level of adaptivity, suggesting that the minimax risk for 1-step adaptivity
with $k-1$ initial releases and that for $k$-step adaptivity are of the same
order. The key technical component of the lower bound proof is a reduction to
finding the convoluting distribution that optimally obfuscates the sign of a
Gaussian signal. Our lower bound construction also reveals a transparent and
elementary proof of the matching upper bound as an alternative approach to
Russo and Zou (2015), who used information-theoretic tools to provide the same
upper bound. We believe that the proposed framework opens up opportunities to
obtain theoretical insights for many other settings of adaptive data analysis,
which would extend the idea to more practical realms.
| Yu-Xiang Wang, Jing Lei, Stephen E. Fienberg | null | 1602.04287 | null | null |
Convex Optimization for Linear Query Processing under Approximate
Differential Privacy | cs.DB cs.LG stat.ML | Differential privacy enables organizations to collect accurate aggregates
over sensitive data with strong, rigorous guarantees on individuals' privacy.
Previous work has found that under differential privacy, computing multiple
correlated aggregates as a batch, using an appropriate \emph{strategy}, may
yield higher accuracy than computing each of them independently. However,
finding the best strategy that maximizes result accuracy is non-trivial, as it
involves solving a complex constrained optimization program that appears to be
non-linear and non-convex. Hence, in the past much effort has been devoted to
solving this non-convex optimization program. Existing approaches include
various sophisticated heuristics and expensive numerical solutions. None of
them, however, guarantees to find the optimal solution of this optimization
problem.
This paper points out that under ($\epsilon$, $\delta$)-differential privacy,
the optimal solution of the above constrained optimization problem in search of
a suitable strategy can be found, rather surprisingly, by solving a simple and
elegant convex optimization program. Then, we propose an efficient algorithm
based on Newton's method, which we prove to always converge to the optimal
solution with linear global convergence rate and quadratic local convergence
rate. Empirical evaluations demonstrate the accuracy and efficiency of the
proposed solution.
| Ganzhao Yuan, Yin Yang, Zhenjie Zhang, Zhifeng Hao | null | 1602.04302 | null | null |
Look, Listen and Learn - A Multimodal LSTM for Speaker Identification | cs.LG | Speaker identification refers to the task of localizing the face of a person
who has the same identity as the ongoing voice in a video. This task not only
requires collective perception over both visual and auditory signals; robustness
to severe quality degradations and unconstrained content variations is also
indispensable. In this paper, we describe a novel
multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies
both visual and auditory modalities from the beginning of each sequence input.
The key idea is to extend the conventional LSTM by not only sharing weights
across time steps, but also sharing weights across modalities. We show that
modeling the temporal dependency across face and voice can significantly
improve the robustness to content quality degradations and variations. We also
found that our multimodal LSTM is robust to distractors, namely the
non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory
dataset and showed that our system outperforms the state-of-the-art systems in
speaker identification with lower false alarm rate and higher recognition
accuracy.
| Jimmy Ren, Yongtao Hu, Yu-Wing Tai, Chuan Wang, Li Xu, Wenxiu Sun,
Qiong Yan | null | 1602.04364 | null | null |
Science Question Answering using Instructional Materials | cs.CL cs.AI cs.IR cs.LG | We provide a solution for elementary science tests using instructional
materials. We posit that there is a hidden structure that explains the
correctness of an answer given the question and instructional materials and
present a unified max-margin framework that learns to find these hidden
structures (given a corpus of question-answer pairs and instructional
materials), and uses what it learns to answer novel elementary science
questions. Our evaluation shows that our framework outperforms several strong
baselines.
| Mrinmaya Sachan, Avinava Dubey, Eric P. Xing | null | 1602.04375 | null | null |
Joint Dimensionality Reduction for Two Feature Vectors | stat.ML cs.IT cs.LG math.IT | Many machine learning problems, especially multi-modal learning problems,
have two sets of distinct features (e.g., image and text features in news story
classification, or neuroimaging data and neurocognitive data in cognitive
science research). This paper addresses the joint dimensionality reduction of
two feature vectors in supervised learning problems. In particular, we assume a
discriminative model where low-dimensional linear embeddings of the two feature
vectors are sufficient statistics for predicting a dependent variable. We show
that a simple algorithm involving singular value decomposition can accurately
estimate the embeddings provided that certain sample complexities are
satisfied, without specifying the nonlinear link function (regressor or
classifier). The main results establish sample complexities under multiple
settings. Sample complexities for different link functions only differ by
constant factors.
| Yanjun Li, Yoram Bresler | null | 1602.04398 | null | null |
Convex Optimization For Non-Convex Problems via Column Generation | cs.LG | We apply column generation to approximating complex structured objects via a
set of primitive structured objects under either the cross entropy or L2 loss.
We use L1 regularization to encourage the use of few structured primitive
objects. We attack approximation using convex optimization over an infinite
number of variables, each corresponding to a primitive structured object,
generated on demand by easy inference in the Lagrangian dual. We apply our
approach to producing low rank approximations to large 3-way tensors.
| Julian Yarkony, Kamalika Chaudhuri | null | 1602.04409 | null | null |
Identifiability Assumptions and Algorithm for Directed Graphical Models
with Feedback | stat.ML cs.LG | Directed graphical models provide a useful framework for modeling causal or
directional relationships for multivariate data. Prior work has largely focused
on identifiability and search algorithms for directed acyclic graphical (DAG)
models. In many applications, feedback naturally arises and directed graphical
models that permit cycles occur. In this paper we address the issue of
identifiability for general directed cyclic graphical (DCG) models satisfying
the Markov assumption. In particular, in addition to the faithfulness
assumption which has already been introduced for cyclic models, we introduce
two new identifiability assumptions, one based on selecting the model with the
fewest edges and the other based on selecting the DCG model that entails the
maximum number of d-separation rules. We provide theoretical results comparing
these assumptions which show that: (1) selecting models with the largest number
of d-separation rules is strictly weaker than the faithfulness assumption; (2)
unlike for DAG models, selecting models with the fewest edges does not
necessarily result in a milder assumption than the faithfulness assumption. We
also provide connections between our two new principles and minimality
assumptions. We use our identifiability assumptions to develop search
algorithms for small-scale DCG models. Our simulation study supports our
theoretical results, showing that the algorithms based on our two new
principles generally outperform algorithms based on the faithfulness
assumption in terms of selecting the true skeleton for DCG models.
| Gunwoong Park and Garvesh Raskutti | null | 1602.04418 | null | null |
Unsupervised Domain Adaptation with Residual Transfer Networks | cs.LG | The recent success of deep neural networks relies on massive amounts of
labeled data. For a target task where labeled data is unavailable, domain
adaptation can transfer a learner from a different source domain. In this
paper, we propose a new approach to domain adaptation in deep networks that can
jointly learn adaptive classifiers and transferable features from labeled data
in the source domain and unlabeled data in the target domain. We relax a
shared-classifier assumption made by previous methods and assume that the
source classifier and target classifier differ by a residual function. We
enable classifier adaptation by plugging several layers into the deep network to
explicitly learn the residual function with reference to the target classifier.
We fuse features of multiple layers with tensor product and embed them into
reproducing kernel Hilbert spaces to match distributions for feature
adaptation. The adaptation can be achieved in most feed-forward models by
extending them with new residual layers and loss functions, which can be
trained efficiently via back-propagation. Empirical evidence shows that the new
approach outperforms state of the art methods on standard domain adaptation
benchmarks.
| Mingsheng Long, Han Zhu, Jianmin Wang, Michael I. Jordan | null | 1602.04433 | null | null |
Frequency Analysis of Temporal Graph Signals | cs.LG cs.SY stat.ML | This letter extends the concept of graph-frequency to graph signals that
evolve with time. Our goal is to generalize and, in fact, unify the familiar
concepts from time- and graph-frequency analysis. To this end, we study a joint
temporal and graph Fourier transform (JFT) and demonstrate its attractive
properties. We build on our results to create filters which act on the joint
(temporal and graph) frequency domain, and show how these can be used to
perform interference cancellation. The proposed algorithms are distributed,
have linear complexity, and can approximate any desired joint filtering
objective.
| Andreas Loukas and Damien Foucard | null | 1602.04434 | null | null |
Random Forest Based Approach for Concept Drift Handling | cs.AI cs.LG math.ST stat.TH | Concept drift arises in smart grid analysis because the socio-economic
behaviour of consumers is not governed by the laws of physics. Likewise, there
are applications in wind power forecasting. In this paper we present a
decision tree ensemble classification method based on the Random Forest
algorithm for concept drift. The weighted majority voting ensemble aggregation
rule is employed, based on the ideas of the Accuracy Weighted Ensemble (AWE)
method. In our case, the base learner weight is computed for each sample
evaluation using the base learners' accuracy and the intrinsic proximity
measure of Random Forest. Our algorithm exploits both temporal weighting of
samples and ensemble pruning as a forgetting strategy. We present results of an
empirical comparison of our method with the original Random Forest with
incorporated "replace-the-loser" forgetting and other state-of-the-art
concept-drift classifiers such as AWE2.
| A. Zhukov, D. Sidorov and A. Foley | null | 1602.04435 | null | null |
Autoregressive Moving Average Graph Filtering | cs.LG cs.SY stat.ML | One of the cornerstones of the field of signal processing on graphs is the
graph filter, a direct analogue of classical filters but intended for signals
defined on graphs. This work brings forth new insights on the distributed graph
filtering problem. We design a family of autoregressive moving average (ARMA)
recursions, which (i) are able to approximate any desired graph frequency
response, and (ii) give exact solutions for tasks such as graph signal
denoising and interpolation. The design philosophy, which allows us to design
the ARMA coefficients independently from the underlying graph, renders the ARMA
graph filters suitable in static and, particularly, time-varying settings. The
latter occur when the graph signal and/or graph are changing over time. We show
that in case of a time-varying graph signal our approach extends naturally to a
two-dimensional filter, operating concurrently in the graph and regular time
domains. We also derive sufficient conditions for filter stability when the
graph and signal are time-varying. The analytical and numerical results
presented in this paper illustrate that ARMA graph filters are practically
appealing for static and time-varying settings, as predicted by theoretical
derivations.
| Elvin Isufi and Andreas Loukas and Andrea Simonetto and Geert Leus | 10.1109/TSP.2016.2614793 | 1602.04436 | null | null |
Bayesian Optimization with Safety Constraints: Safe and Automatic
Parameter Tuning in Robotics | cs.RO cs.LG cs.SY | Robotic algorithms typically depend on various parameters, the choice of
which significantly affects the robot's performance. While an initial guess for
the parameters may be obtained from dynamic models of the robot, parameters are
usually tuned manually on the real system to achieve the best performance.
Optimization algorithms, such as Bayesian optimization, have been used to
automate this process. However, these methods may evaluate unsafe parameters
during the optimization process that lead to safety-critical system failures.
Recently, a safe Bayesian optimization algorithm, called SafeOpt, has been
developed, which guarantees that the performance of the system never falls
below a critical value; that is, safety is defined based on the performance
function. However, coupling performance and safety is often not desirable in
robotics. For example, high-gain controllers might achieve low average tracking
error (performance), but can overshoot and violate input constraints. In this
paper, we present a generalized algorithm that allows for multiple safety
constraints separate from the objective. Given an initial set of safe
parameters, the algorithm maximizes performance but only evaluates parameters
that satisfy safety for all constraints with high probability. To this end, it
carefully explores the parameter space by exploiting regularity assumptions in
terms of a Gaussian process prior. Moreover, we show how context variables can
be used to safely transfer knowledge to new situations and tasks. We provide a
theoretical analysis and demonstrate that the proposed algorithm enables fast,
automatic, and safe optimization of tuning parameters in experiments on a
quadrotor vehicle.
| Felix Berkenkamp, Andreas Krause, Angela P. Schoellig | null | 1602.04450 | null | null |
Generalization Properties of Learning with Random Features | stat.ML cs.LG | We study the generalization properties of ridge regression with random
features in the statistical learning framework. We show for the first time that
$O(1/\sqrt{n})$ learning bounds can be achieved with only $O(\sqrt{n}\log n)$
random features rather than $O({n})$ as suggested by previous results. Further,
we prove faster learning rates and show that they might require more random
features, unless they are sampled according to a possibly problem dependent
distribution. Our results shed light on the statistical computational
trade-offs in large scale kernelized learning, showing the potential
effectiveness of random features in reducing the computational complexity while
keeping optimal generalization properties.
| Alessandro Rudi and Lorenzo Rosasco | null | 1602.04474 | null | null |
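The scaling in this abstract can be made concrete with a short sketch, not taken from the paper: ridge regression on random Fourier features for a Gaussian kernel, with the number of features set to roughly $\sqrt{n}\log n$. The bandwidth `gamma`, the ridge scaling `lam`, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, m, gamma, rng):
    """Map X (n x d) to m random Fourier features approximating the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

m = int(np.sqrt(n) * np.log(n))      # O(sqrt(n) log n) features, per the bound
Z = random_fourier_features(X, m, gamma=0.5, rng=rng)

lam = 1.0 / np.sqrt(n)               # an illustrative ridge level, an assumption
w = np.linalg.solve(Z.T @ Z + n * lam * np.eye(m), Z.T @ y)
print("train MSE:", np.mean((Z @ w - y) ** 2))
```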
Surprising properties of dropout in deep networks | cs.LG cs.AI cs.NE math.ST stat.ML stat.TH | We analyze dropout in deep networks with rectified linear units and the
quadratic loss. Our results expose surprising differences between the behavior
of dropout and more traditional regularizers like weight decay. For example, on
some simple data sets dropout training produces negative weights even though
the output is the sum of the inputs. This provides a counterpoint to the
suggestion that dropout discourages co-adaptation of weights. We also show that
the dropout penalty can grow exponentially in the depth of the network while
the weight-decay penalty remains essentially linear, and that dropout is
insensitive to various re-scalings of the input features, outputs, and network
weights. This last insensitivity implies that there are no isolated local
minima of the dropout training criterion. Our work uncovers new properties of
dropout, extends our understanding of why dropout succeeds, and lays the
foundation for further progress.
| David P. Helmbold and Philip M. Long | null | 1602.04484 | null | null |
Benefits of depth in neural networks | cs.LG cs.NE stat.ML | For any positive integer $k$, there exist neural networks with $\Theta(k^3)$
layers, $\Theta(1)$ nodes per layer, and $\Theta(1)$ distinct parameters which
cannot be approximated by networks with $\mathcal{O}(k)$ layers unless they
are exponentially large --- they must possess $\Omega(2^k)$ nodes. This result
is proved here for a class of nodes termed "semi-algebraic gates" which
includes the common choices of ReLU, maximum, indicator, and piecewise
polynomial functions, therefore establishing benefits of depth against not just
standard networks with ReLU gates, but also convolutional networks with ReLU
and maximization gates, sum-product networks, and boosted decision trees (in
this last case with a stronger separation: $\Omega(2^{k^3})$ total tree nodes
are required).
| Matus Telgarsky | null | 1602.04485 | null | null |
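The flavor of such separations can be seen in the classic construction behind this line of work: composing a fixed "tent" map, which a constant-width ReLU layer can express exactly, doubles the number of linear pieces at every level. The sketch below is an illustration of that phenomenon, not the paper's proof.

```python
import numpy as np

def tent(x):
    # One "tent": 2x on [0, 1/2], 2(1 - x) on [1/2, 1]; expressible by a
    # single constant-width ReLU layer.
    return np.where(x < 0.5, 2.0 * x, 2.0 * (1.0 - x))

def deep_tent(x, k):
    # k-fold composition: a depth-O(k), constant-width network.
    for _ in range(k):
        x = tent(x)
    return x

xs = np.linspace(0.0, 1.0, 1 << 16)
for k in [1, 3, 6, 9]:
    ys = deep_tent(xs, k)
    turns = np.sum(np.abs(np.diff(np.sign(np.diff(ys)))) > 0)
    print(f"depth k={k}: ~{turns + 1} monotone pieces (expect 2^k = {2 ** k})")
```

A shallow network must pay for each of these $2^k$ oscillations with width, which is the intuition behind the exponential lower bound.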
Convolutional Tables Ensemble: classification in microseconds | cs.CV cs.LG | We study classifiers operating under severe classification time constraints,
corresponding to 1-1000 CPU microseconds, using Convolutional Tables Ensemble
(CTE), an inherently fast architecture for object category recognition. The
architecture is based on convolutionally-applied sparse feature extraction,
using trees or ferns, and a linear voting layer. Several structure and
optimization variants are considered, including novel decision functions, a
tree-learning algorithm, and distillation from a CNN to the CTE architecture. Accuracy
improvements of 24-45% over related art of similar speed are demonstrated on
standard object recognition benchmarks. Using Pareto speed-accuracy curves, we
show that CTE can provide better accuracy than Convolutional Neural Networks
(CNN) for a certain range of classification time constraints, or alternatively
provide similar error rates with 5-200X speedup.
| Aharon Bar-Hillel and Eyal Krupka and Noam Bloom | null | 1602.04489 | null | null |
Learning Granger Causality for Hawkes Processes | cs.LG stat.ML | Learning Granger causality for general point processes is a very challenging
task. In this paper, we propose an effective method, learning Granger
causality, for a special but significant type of point process, the Hawkes
process. We reveal the relationship between a Hawkes process's impact function
and its Granger causality graph. Specifically, our model represents impact
functions using a series of basis functions and recovers the Granger causality
graph via group sparsity of the impact functions' coefficients. We propose an
effective learning algorithm combining a maximum likelihood estimator (MLE)
with a sparse-group-lasso (SGL) regularizer. Additionally, the flexibility of
our model allows us to incorporate the clustering structure of event types into
the learning framework. We analyze our learning algorithm and propose an adaptive
procedure to select basis functions. Experiments on both synthetic and
real-world data show that our method can learn the Granger causality graph and
the triggering patterns of the Hawkes processes simultaneously.
| Hongteng Xu and Mehrdad Farajtabar and Hongyuan Zha | null | 1602.04511 | null | null |
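A minimal sketch of the model structure described above, with all names (`gaussian_basis`, the Gaussian choice of basis, the toy parameters) being illustrative assumptions rather than the paper's implementation; the MLE-plus-SGL fitting step is omitted.

```python
import numpy as np

def gaussian_basis(dt, centers, width):
    """Basis functions kappa_m evaluated at time lags dt; shape (len(dt), M)."""
    return np.exp(-((dt[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def intensity(t, events, mu, A, centers, width):
    """lambda_u(t) = mu_u + sum over past events (t_i, v_i) of
    sum_m A[v_i, u, m] * kappa_m(t - t_i).

    Granger non-causality v -/-> u corresponds to the whole coefficient
    group A[v, u, :] being zero, which is what the SGL penalty encourages."""
    lam = mu.astype(float).copy()
    past = [(ti, vi) for (ti, vi) in events if ti < t]
    if past:
        dts = np.array([t - ti for ti, _ in past])
        phi = gaussian_basis(dts, centers, width)
        for (ti, vi), row in zip(past, phi):
            lam += A[vi] @ row          # impact of event type vi on every u
    return lam

U, M = 2, 4
centers = np.linspace(0.5, 3.5, M)
mu = np.array([0.2, 0.1])
A = np.zeros((U, U, M))
A[0, 1] = [0.3, 0.2, 0.1, 0.0]          # only type 0 excites type 1
events = [(0.4, 0), (1.1, 0), (2.0, 1)]
print(intensity(2.5, events, mu, A, centers, width=0.5))
```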
Adversarial Top-$K$ Ranking | cs.IR cs.IT cs.LG math.IT stat.ML | We study the top-$K$ ranking problem where the goal is to recover the set of
top-$K$ ranked items out of a large collection of items based on partially
revealed preferences. We consider an adversarial crowdsourced setting where
there are two population sets, and pairwise comparison samples drawn from one
of the populations follow the standard Bradley-Terry-Luce model (i.e., the
chance of item $i$ beating item $j$ is proportional to the relative score of
item $i$ to item $j$), while in the other population, the corresponding chance
is inversely proportional to the relative score. When the relative size of the
two populations is known, we characterize the minimax limit on the sample size
required (up to a constant) for reliably identifying the top-$K$ items, and
demonstrate how it scales with the relative size. Moreover, by leveraging a
tensor decomposition method for disambiguating mixture distributions, we extend
our result to the more realistic scenario in which the relative population size
is unknown, thus establishing an upper bound on the fundamental limit of the
sample size for recovering the top-$K$ set.
| Changho Suh, Vincent Y. F. Tan, Renbo Zhao | null | 1602.04567 | null | null |
Personalized Expertise Search at LinkedIn | cs.IR cs.LG cs.SI | LinkedIn is the largest professional network with more than 350 million
members. As the member base increases, searching for experts becomes more and
more challenging. In this paper, we propose an approach to address the problem
of personalized expertise search on LinkedIn, particularly for exploratory
search queries containing {\it skills}. In the offline phase, we introduce a
collaborative filtering approach based on matrix factorization. Our approach
estimates expertise scores for both the skills that members list on their
profiles as well as the skills they are likely to have but do not explicitly
list. In the online phase (at query time) we use expertise scores on these
skills as a feature in combination with other features to rank the results. To
learn the personalized ranking function, we propose a heuristic to extract
training data from search logs while handling position and sample selection
biases. We tested our models on two products - LinkedIn homepage and LinkedIn
recruiter. A/B tests showed significant improvements in click through rates -
31% for CTR@1 for recruiter (18% for homepage) as well as downstream messages
sent from search - 37% for recruiter (20% for homepage). As of writing this
paper, these models serve nearly all live traffic for skills search on LinkedIn
homepage as well as LinkedIn recruiter.
| Viet Ha-Thuc and Ganesh Venkataraman and Mario Rodriguez and Shakti
Sinha and Senthil Sundaram and Lin Guo | null | 1602.04572 | null | null |
Secure Approximation Guarantee for Cryptographically Private Empirical
Risk Minimization | stat.ML cs.CR cs.LG | Privacy concern has been increasingly important in many machine learning (ML)
problems. We study empirical risk minimization (ERM) problems under secure
multi-party computation (MPC) frameworks. Main technical tools for MPC have
been developed based on cryptography. One of the limitations of current
cryptographically private ML is that it is computationally intractable to
evaluate non-linear functions such as logarithmic functions or exponential
functions. Therefore, for a class of ERM problems such as logistic regression
in which non-linear function evaluations are required, one can only obtain
approximate solutions. In this paper, we introduce a novel cryptographically
private tool called secure approximation guarantee (SAG) method. The key
property of SAG method is that, given an arbitrary approximate solution, it can
provide a non-probabilistic assumption-free bound on the approximation quality
under cryptographically secure computation framework. We demonstrate the
benefit of the SAG method by applying it to several problems including a
practical privacy-preserving data analysis task on genomic and clinical
information.
| Toshiyuki Takada, Hiroyuki Hanada, Yoshiji Yamada, Jun Sakuma, Ichiro
Takeuchi | null | 1602.04579 | null | null |
Optimal Best Arm Identification with Fixed Confidence | math.ST cs.LG stat.ML stat.TH | We give a complete characterization of the complexity of best-arm
identification in one-parameter bandit problems. We prove a new, tight lower
bound on the sample complexity. We propose the `Track-and-Stop' strategy, which
we prove to be asymptotically optimal. It consists of a new sampling rule
(which tracks the optimal proportions of arm draws highlighted by the lower
bound) and in a stopping rule named after Chernoff, for which we give a new
analysis.
| Aur\'elien Garivier (IMT), Emilie Kaufmann (CRIStAL, SEQUEL) | null | 1602.04589 | null | null |
Distributed Information-Theoretic Clustering | cs.IT cs.LG math.IT | We study a novel multi-terminal source coding setup motivated by the
biclustering problem. Two separate encoders observe two i.i.d. sequences $X^n$
and $Y^n$, respectively. The goal is to find rate-limited encodings $f(x^n)$
and $g(y^n)$ that maximize the mutual information $I(f(X^n); g(Y^n))/n$. We
discuss connections of this problem with hypothesis testing against
independence, pattern recognition, and the information bottleneck method.
Improving previous cardinality bounds for the inner and outer bounds allows us
to thoroughly study the special case of a binary symmetric source and to
quantify the gap between the inner and the outer bound in this special case.
Furthermore, we investigate a multiple description (MD) extension of the Chief
Executive Officer (CEO) problem with a mutual information constraint.
Surprisingly, this MD-CEO problem permits a tight single-letter
characterization of the achievable region.
| Georg Pichler, Pablo Piantanida and Gerald Matz | 10.1093/imaiai/iaab007 | 1602.04605 | null | null |
Deep Exploration via Bootstrapped DQN | cs.LG cs.AI cs.SY stat.ML | Efficient exploration in complex environments remains a major challenge for
reinforcement learning. We propose bootstrapped DQN, a simple algorithm that
explores in a computationally and statistically efficient manner through use of
randomized value functions. Unlike dithering strategies such as epsilon-greedy
exploration, bootstrapped DQN carries out temporally-extended (or deep)
exploration; this can lead to exponentially faster learning. We demonstrate
these benefits in complex stochastic MDPs and in the large-scale Arcade
Learning Environment. Bootstrapped DQN substantially improves learning times
and performance across most Atari games.
| Ian Osband, Charles Blundell, Alexander Pritzel, Benjamin Van Roy | null | 1602.04621 | null | null |
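A tabular sketch of the idea on a toy chain MDP, assuming Q-tables instead of deep networks; the chain environment, head count, and mask probability are illustrative choices, not the paper's setup. Each episode follows one randomly sampled head, and each head trains only on its own bootstrapped subset of transitions; random initialization gives the heads the diversity that drives deep exploration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, gamma, alpha, p_mask = 10, 8, 0.99, 0.5, 0.8
Q = rng.normal(scale=0.1, size=(K, N, 2))   # one randomly-initialized Q-table per head

def step(s, a):
    """Chain MDP: action 0 moves left, 1 moves right; reward at the far end."""
    s2 = max(s - 1, 0) if a == 0 else min(s + 1, N - 1)
    return s2, float(s2 == N - 1), s2 == N - 1

for episode in range(300):
    head = rng.integers(K)                  # sample one head, follow it all episode
    s, done, t = 0, False, 0
    while not done and t < 4 * N:
        a = int(np.argmax(Q[head, s]))      # act greedily w.r.t. the sampled head
        s2, r, done = step(s, a)
        targets = r + (0.0 if done else gamma) * Q[:, s2].max(axis=1)
        mask = rng.random(K) < p_mask       # bootstrap: each head sees its own data
        Q[mask, s, a] += alpha * (targets[mask] - Q[mask, s, a])
        s, t = s2, t + 1

print("greedy actions per state (head 0):", Q[0].argmax(axis=1))
```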
Efficient Representation of Low-Dimensional Manifolds using Deep
Networks | cs.NE cs.LG stat.ML | We consider the ability of deep neural networks to represent data that lies
near a low-dimensional manifold in a high-dimensional space. We show that deep
networks can efficiently extract the intrinsic, low-dimensional coordinates of
such data. We first show that the first two layers of a deep network can
exactly embed points lying on a monotonic chain, a special type of piecewise
linear manifold, mapping them to a low-dimensional Euclidean space. Remarkably,
the network can do this using an almost optimal number of parameters. We also
show that this network projects nearby points onto the manifold and then embeds
them with little error. We then extend these results to more general manifolds.
| Ronen Basri and David Jacobs | null | 1602.04723 | null | null |
Delay and Cooperation in Nonstochastic Bandits | cs.LG | We study networks of communicating learning agents that cooperate to solve a
common nonstochastic bandit problem. Agents use an underlying communication
network to get messages about actions selected by other agents, and drop
messages that took more than $d$ hops to arrive, where $d$ is a delay
parameter. We introduce \textsc{Exp3-Coop}, a cooperative version of the {\sc
Exp3} algorithm and prove that with $K$ actions and $N$ agents the average
per-agent regret after $T$ rounds is at most of order $\sqrt{\bigl(d+1 +
\tfrac{K}{N}\alpha_{\le d}\bigr)(T\ln K)}$, where $\alpha_{\le d}$ is the
independence number of the $d$-th power of the connected communication graph
$G$. We then show that for any connected graph, for $d=\sqrt{K}$ the regret
bound is $K^{1/4}\sqrt{T}$, strictly better than the minimax regret $\sqrt{KT}$
for noncooperating agents. More informed choices of $d$ lead to bounds which
are arbitrarily close to the full information minimax regret $\sqrt{T\ln K}$
when $G$ is dense. When $G$ has sparse components, we show that a variant of
\textsc{Exp3-Coop}, allowing agents to choose their parameters according to
their centrality in $G$, strictly improves the regret. Finally, as a by-product
of our analysis, we provide the first characterization of the minimax regret
for bandit learning with delay.
| Nicolo' Cesa-Bianchi and Claudio Gentile and Yishay Mansour and
Alberto Minora | null | 1602.04741 | null | null |
Quantum Perceptron Models | quant-ph cs.LG stat.ML | We demonstrate how quantum computation can provide non-trivial improvements
in the computational and statistical complexity of the perceptron model. We
develop two quantum algorithms for perceptron learning. The first algorithm
exploits quantum information processing to determine a separating hyperplane
using a number of steps sublinear in the number of data points $N$, namely
$O(\sqrt{N})$. The second algorithm illustrates how the classical mistake bound
of $O(\frac{1}{\gamma^2})$ can be further improved to
$O(\frac{1}{\sqrt{\gamma}})$ through quantum means, where $\gamma$ denotes the
margin. Such improvements are achieved through the application of quantum
amplitude amplification to the version space interpretation of the perceptron
model.
| Nathan Wiebe, Ashish Kapoor, Krysta M Svore | null | 1602.04799 | null | null |
DR-ABC: Approximate Bayesian Computation with Kernel-Based Distribution
Regression | stat.ML cs.LG stat.CO stat.ME | Performing exact posterior inference in complex generative models is often
difficult or impossible due to an expensive-to-evaluate or intractable
likelihood function. Approximate Bayesian computation (ABC) is an inference
framework that constructs an approximation to the true likelihood based on the
similarity between the observed and simulated data as measured by a predefined
set of summary statistics. Although the choice of appropriate problem-specific
summary statistics crucially influences the quality of the likelihood
approximation and hence also the quality of the posterior sample in ABC, there
are only few principled general-purpose approaches to the selection or
construction of such summary statistics. In this paper, we develop a novel
framework for this task using kernel-based distribution regression. We model
the functional relationship between data distributions and the optimal choice
(with respect to a loss function) of summary statistics using kernel-based
distribution regression. We show that our approach can be implemented in a
computationally and statistically efficient way using the random Fourier
features framework for large-scale kernel learning. In addition to that, our
framework shows superior performance when compared to related methods on toy
and real-world problems.
| Jovana Mitrovic, Dino Sejdinovic, Yee Whye Teh | null | 1602.04805 | null | null |
Black-box optimization with a politician | math.OC cs.DS cs.LG cs.NA | We propose a new framework for black-box convex optimization which is
well-suited for situations where gradient computations are expensive. We derive
a new method for this framework which leverages several concepts from convex
optimization, from standard first-order methods (e.g. gradient descent or
quasi-Newton methods) to analytical centers (i.e. minimizers of self-concordant
barriers). We demonstrate empirically that our new technique compares favorably
with state of the art algorithms (such as BFGS).
| S\'ebastien Bubeck and Yin-Tat Lee | null | 1602.04847 | null | null |
Bi-directional LSTM Recurrent Neural Network for Chinese Word
Segmentation | cs.LG cs.CL | Recurrent neural networks (RNNs) have been broadly applied to natural language
processing (NLP) problems. This kind of neural network is designed for modeling
sequential data and has been shown to be quite effective in sequential
tagging tasks. In this paper, we propose to use a bi-directional RNN with long
short-term memory (LSTM) units for Chinese word segmentation, which is a crucial
preprocessing task for modeling Chinese sentences and articles. Classical methods
focus on designing and combining hand-crafted features from context, whereas
a bi-directional LSTM network (BLSTM) does not need any prior knowledge or
pre-designing, and it excels at keeping the contextual information in both
directions. Experimental results show that our approach achieves state-of-the-art
performance in word segmentation on both traditional Chinese and
simplified Chinese datasets.
| Yushi Yao, Zheng Huang | null | 1602.04874 | null | null |
Unsupervised Domain Adaptation Using Approximate Label Matching | cs.LG cs.AI | Domain adaptation addresses the problem created when training data is
generated by a so-called source distribution, but test data is generated by a
significantly different target distribution. In this work, we present
approximate label matching (ALM), a new unsupervised domain adaptation
technique that creates and leverages a rough labeling on the test samples, then
uses these noisy labels to learn a transformation that aligns the source and
target samples. We show that the transformation estimated by ALM has favorable
properties compared to transformations estimated by other methods, which do not
use any kind of target labeling. Our model is regularized by requiring that a
classifier trained to discriminate source from transformed target samples
cannot distinguish between the two. We experiment with ALM on simulated and
real data, and show that it outperforms techniques commonly used in the field.
| Jordan T. Ash, Robert E. Schapire and Barbara E. Engelhardt | null | 1602.04889 | null | null |
Segmentation Rectification for Video Cutout via One-Class Structured
Learning | cs.CV cs.GR cs.LG | Recent works on interactive video object cutout mainly focus on designing
dynamic foreground-background (FB) classifiers for segmentation propagation.
However, the research on optimally removing errors from the FB classification
is sparse, and the errors often accumulate rapidly, causing significant errors
in the propagated frames. In this work, we take the initial steps to addressing
this problem, and we call this new task \emph{segmentation rectification}. Our
key observation is that the possibly asymmetrically distributed false positive
and false negative errors were handled equally in the conventional methods. We,
alternatively, propose to optimally remove these two types of errors. To this
effect, we propose a novel bilayer Markov Random Field (MRF) model for this new
task. We also adopt the well-established structured learning framework to learn
the optimal model from data. Additionally, we propose a novel one-class
structured SVM (OSSVM) which greatly speeds up the structured learning process.
Our method naturally extends to RGB-D videos as well. Comprehensive experiments
on both RGB and RGB-D data demonstrate that our simple and effective method
significantly outperforms the segmentation propagation methods adopted in the
state-of-the-art video cutout systems, and the results also suggest the
potential usefulness of our method in image cutout systems.
| Junyan Wang, Sai-kit Yeung, Jue Wang and Kun Zhou | null | 1602.04906 | null | null |
Gradient Descent Converges to Minimizers | stat.ML cs.LG math.OC | We show that gradient descent converges to a local minimizer, almost surely
with random initialization. This is proved by applying the Stable Manifold
Theorem from dynamical systems theory.
| Jason D. Lee, Max Simchowitz, Michael I. Jordan, Benjamin Recht | null | 1602.04915 | null | null |
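The statement is easy to visualize on a toy strict-saddle function (an illustration chosen here, not from the paper): $f(x, y) = (x^2 - 1)^2 + y^2$ has a saddle at the origin and minimizers at $(\pm 1, 0)$, and gradient descent from random initialization lands at a minimizer.

```python
import numpy as np

# f(x, y) = (x^2 - 1)^2 + y^2: strict saddle at (0, 0), minima at (+-1, 0).
def grad(p):
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

rng = np.random.default_rng(1)
for trial in range(5):
    p = rng.normal(scale=0.5, size=2)       # random initialization
    for _ in range(2000):
        p = p - 0.05 * grad(p)              # plain gradient descent
    print(trial, "->", np.round(p, 4))      # lands at (+-1, 0), never the saddle
```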
Personalized Federated Search at LinkedIn | cs.IR cs.LG | LinkedIn has grown to become a platform hosting diverse sources of
information ranging from member profiles, jobs, professional groups, slideshows
etc. Given the existence of multiple sources, when a member issues a query like
"software engineer", the member could look for software engineer profiles, jobs
or professional groups. To tackle this problem, we exploit a data-driven
approach that extracts searcher intents from their profile data and recent
activities at a large scale. The intents such as job seeking, hiring, content
consuming are used to construct features to personalize federated search
experience. We tested the approach on the LinkedIn homepage and A/B tests show
significant improvements in member engagement. As of writing this paper, the
approach powers all of federated search on LinkedIn homepage.
| Dhruv Arya and Viet Ha-Thuc and Shakti Sinha | 10.1145/2806416.2806615 | 1602.04924 | null | null |
"Why Should I Trust You?": Explaining the Predictions of Any Classifier | cs.LG cs.AI stat.ML | Despite widespread adoption, machine learning models remain mostly black
boxes. Understanding the reasons behind predictions is, however, quite
important in assessing trust, which is fundamental if one plans to take action
based on a prediction, or when choosing whether to deploy a new model. Such
understanding also provides insights into the model, which can be used to
transform an untrustworthy model or prediction into a trustworthy one. In this
work, we propose LIME, a novel explanation technique that explains the
predictions of any classifier in an interpretable and faithful manner, by
learning an interpretable model locally around the prediction. We also propose
a method to explain models by presenting representative individual predictions
and their explanations in a non-redundant way, framing the task as a submodular
optimization problem. We demonstrate the flexibility of these methods by
explaining different models for text (e.g. random forests) and image
classification (e.g. neural networks). We show the utility of explanations via
novel experiments, both simulated and with human subjects, on various scenarios
that require trust: deciding if one should trust a prediction, choosing between
models, improving an untrustworthy classifier, and identifying why a classifier
should not be trusted.
| Marco Tulio Ribeiro and Sameer Singh and Carlos Guestrin | null | 1602.04938 | null | null |
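The core of the local-surrogate idea can be sketched for tabular inputs as follows; this is a simplified illustration, not the authors' released implementation (which works on interpretable representations and uses a sparse K-Lasso fit). The perturbation scale, proximity kernel, and ridge level are assumptions made here.

```python
import numpy as np

def lime_explain(f, x, n_samples=5000, sigma=1.0, reg=1e-3, rng=None):
    """Fit a locally weighted linear surrogate to a black-box f around x."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))       # local perturbations
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))  # proximity kernel
    y = f(Z)
    Zb = np.hstack([Z, np.ones((n_samples, 1))])             # intercept column
    sw = np.sqrt(w)
    A, b = Zb * sw[:, None], y * sw                          # weighted least squares
    coef = np.linalg.solve(A.T @ A + reg * np.eye(d + 1), A.T @ b)
    return coef[:-1]                                         # per-feature weights

black_box = lambda Z: np.tanh(3 * Z[:, 0]) - 2 * Z[:, 2]     # only features 0, 2 matter
print(np.round(lime_explain(black_box, np.zeros(5)), 3))
```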
Q($\lambda$) with Off-Policy Corrections | cs.AI cs.LG stat.ML | We propose and analyze an alternate approach to off-policy multi-step
temporal difference learning, in which off-policy returns are corrected with
the current Q-function in terms of rewards, rather than with the target policy
in terms of transition probabilities. We prove that such approximate
corrections are sufficient for off-policy convergence both in policy evaluation
and control, provided certain conditions. These conditions relate the distance
between the target and behavior policies, the eligibility trace parameter and
the discount factor, and formalize an underlying tradeoff in off-policy
TD($\lambda$). We illustrate this theoretical relationship empirically on a
continuous-state control task.
| Anna Harutyunyan and Marc G. Bellemare and Tom Stepleton and Remi
Munos | null | 1602.04951 | null | null |
Stochastic Process Bandits: Upper Confidence Bounds Algorithms via
Generic Chaining | stat.ML cs.LG | The paper considers the problem of global optimization in the setup of
stochastic process bandits. We introduce an UCB algorithm which builds a
cascade of discretization trees based on generic chaining in order to make it
operable over a continuous domain. The theoretical framework
applies to functions under weak probabilistic smoothness assumptions and also
significantly extends the spectrum of application of UCB strategies. Moreover,
generic regret bounds are derived which are then specialized to Gaussian
processes indexed on infinite-dimensional spaces as well as to quadratic forms
of Gaussian processes. Lower bounds are also proved in the case of Gaussian
processes to assess the optimality of the proposed algorithm.
| Emile Contal and Nicolas Vayatis | null | 1602.04976 | null | null |
A Subsequence Interleaving Model for Sequential Pattern Mining | stat.ML cs.AI cs.LG | Recent sequential pattern mining methods have used the minimum description
length (MDL) principle to define an encoding scheme which describes an
algorithm for mining the most compressing patterns in a database. We present a
novel subsequence interleaving model based on a probabilistic model of the
sequence database, which allows us to search for the most compressing set of
patterns without designing a specific encoding scheme. Our proposed algorithm
is able to efficiently mine the most relevant sequential patterns and rank them
using an associated measure of interestingness. The efficient inference in our
model is a direct result of our use of a structural expectation-maximization
framework, in which the expectation-step takes the form of a submodular
optimization problem subject to a coverage constraint. We show on both
synthetic and real world datasets that our model mines a set of sequential
patterns with low spuriousness and redundancy, high interpretability and
usefulness in real-world applications. Furthermore, we demonstrate that the
quality of the patterns from our approach is comparable to, if not better than,
existing state of the art sequential pattern mining algorithms.
| Jaroslav Fowkes and Charles Sutton | 10.1145/2939672.2939787 | 1602.05012 | null | null |
Generating images with recurrent adversarial networks | cs.LG cs.CV | Gatys et al. (2015) showed that optimizing pixels to match features in a
convolutional network with respect to reference image features is a way to render
images of high visual quality. We show that unrolling this gradient-based
optimization yields a recurrent computation that creates images by
incrementally adding onto a visual "canvas". We propose a recurrent generative
model inspired by this view, and show that it can be trained using adversarial
training to generate very good image samples. We also propose a way to
quantitatively compare adversarial networks by having the generators and
discriminators of these networks compete against each other.
| Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, Roland Memisevic | null | 1602.05110 | null | null |
Patient Flow Prediction via Discriminative Learning of
Mutually-Correcting Processes | cs.LG | Over the past decade the rate of care unit (CU) use in the United States has
been increasing. With an aging population and ever-growing demand for medical
care, effective management of patients' transitions among different care
facilities will prove indispensable for shortening the length of hospital
stays, improving patient outcomes, allocating critical care resources, and
reducing preventable re-admissions. In this paper, we focus on an important
problem of predicting the so-called "patient flow" from longitudinal electronic
health records (EHRs), which has not been explored via existing machine
learning techniques. By treating a sequence of transition events as a point
process, we develop a novel framework for modeling patient flow through various
CUs and jointly predicting patients' destination CUs and duration days. Instead
of learning a generative point process model via maximum likelihood estimation,
we propose a novel discriminative learning algorithm aiming at improving the
prediction of transition events in the case of sparse data. By parameterizing
the proposed model as a mutually-correcting process, we formulate the
estimation problem via generalized linear models, which lends itself to
efficient learning based on alternating direction method of multipliers (ADMM).
Furthermore, we achieve simultaneous feature selection and learning by adding a
group-lasso regularizer to the ADMM algorithm. Additionally, for suppressing
the negative influence of data imbalance on the learning of the model, we
synthesize auxiliary training data for the classes with extremely few samples,
and improve the robustness of our learning method accordingly. Testing on
real-world data, we show that our method obtains superior performance in terms
of accuracy of predicting the destination CU transition and duration of each CU
occupancy.
| Hongteng Xu and Weichang Wu and Shamim Nemati and Hongyuan Zha | null | 1602.05112 | null | null |
Practical Introduction to Clustering Data | physics.data-an astro-ph.IM cond-mat.stat-mech cs.LG | Data clustering is an approach to seeking structure in sets of complex data,
i.e., sets of "objects". The main objective is to identify groups of objects
which are similar to each other, e.g., for classification. Here, an
introduction to clustering is given and three basic approaches are introduced:
the k-means algorithm, neighbour-based clustering, and an agglomerative
clustering method. For all cases, C source code examples are given, allowing
for an easy implementation.
| Alexander K. Hartmann | null | 1602.05124 | null | null |
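The paper provides C source code; purely as an illustration of the first of the three approaches it introduces, here is a compact k-means (Lloyd's algorithm) sketch in Python, with toy Gaussian blobs standing in for real data.

```python
import numpy as np

def kmeans(X, k, iters=100, rng=None):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in ([0, 0], [3, 0], [0, 3])])
labels, centers = kmeans(X, 3)
print(np.round(centers, 2))
```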
A Harmonic Extension Approach for Collaborative Ranking | cs.LG | We present a new perspective on graph-based methods for collaborative ranking
for recommender systems. Unlike user-based or item-based methods that compute a
weighted average of ratings given by the nearest neighbors, or low-rank
approximation methods using convex optimization and the nuclear norm, we
formulate matrix completion as a series of semi-supervised learning problems,
and propagate the known ratings to the missing ones on the user-user or
item-item graph globally. The semi-supervised learning problems are expressed
as Laplace-Beltrami equations on a manifold, or namely, harmonic extension, and
can be discretized by a point integral method. We show that our approach does
not impose a low-rank Euclidean subspace on the data points, but instead
minimizes the dimension of the underlying manifold. Our method, named LDM (low
dimensional manifold), turns out to be particularly effective in generating
rankings of items, showing decent computational efficiency and robust ranking
quality compared to state-of-the-art methods.
| Da Kuang, Zuoqiang Shi, Stanley Osher, Andrea Bertozzi | null | 1602.05127 | null | null |
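The harmonic-extension step can be sketched directly: on the known/unknown split of graph nodes, $Lf = 0$ on the unknown block gives $f_U = L_{UU}^{-1} W_{UK} f_K$. A minimal illustration on a path graph follows; the dense linear solve here is a toy stand-in for the paper's point integral method, not its implementation.

```python
import numpy as np

def harmonic_extension(W, f_known, known):
    """Propagate known values to unknown nodes by solving the discrete
    Laplace equation on the unknown block: f_U = L_UU^{-1} W_UK f_K."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W            # graph Laplacian
    U = np.setdiff1d(np.arange(n), known)
    f_U = np.linalg.solve(L[np.ix_(U, U)], W[np.ix_(U, known)] @ f_known)
    f = np.zeros(n)
    f[known], f[U] = f_known, f_U
    return f

# A path graph with the two endpoint "ratings" known; interior nodes are filled in.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(harmonic_extension(W, np.array([1.0, 5.0]), np.array([0, n - 1])))
```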
Fast Learning Requires Good Memory: A Time-Space Lower Bound for Parity
Learning | cs.LG cs.CC cs.CR | We prove that any algorithm for learning parities requires either a memory of
quadratic size or an exponential number of samples. This proves a recent
conjecture of Steinhardt, Valiant and Wager and shows that for some learning
problems a large storage space is crucial.
More formally, in the problem of parity learning, an unknown string $x \in
\{0,1\}^n$ was chosen uniformly at random. A learner tries to learn $x$ from a
stream of samples $(a_1, b_1), (a_2, b_2) \ldots$, where each~$a_t$ is
uniformly distributed over $\{0,1\}^n$ and $b_t$ is the inner product of $a_t$
and $x$, modulo~2. We show that any algorithm for parity learning, that uses
less than $\frac{n^2}{25}$ bits of memory, requires an exponential number of
samples.
Previously, there was no non-trivial lower bound on the number of samples
needed, for any learning problem, even if the allowed memory size is $O(n)$
(where $n$ is the space needed to store one sample).
We also give an application of our result in the field of bounded-storage
cryptography. We show an encryption scheme that requires a private key of
length $n$, as well as time complexity of $n$ per encryption/decryption of each
bit, and is provably and unconditionally secure as long as the attacker uses
less than $\frac{n^2}{25}$ memory bits and the scheme is used at most an
exponential number of times. Previous works on bounded-storage cryptography
assumed that the memory size used by the attacker is at most linear in the time
needed for encryption/decryption.
| Ran Raz | null | 1602.05161 | null | null |
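For contrast with the lower bound, the standard quadratic-memory learner is one-pass Gaussian elimination over GF(2), which stores at most $n$ rows of $n+1$ bits, roughly $n^2$ memory. The sketch below is an illustration of that upper-bound side of the tradeoff, not anything from the paper.

```python
import numpy as np

def learn_parity(stream, n):
    """Online Gaussian elimination over GF(2): one pass over the samples,
    storing at most n rows of n+1 bits (~n^2 memory)."""
    rows = {}                                   # pivot index -> row (a | b)
    for a, b in stream:
        row = np.append(a, b) % 2
        while True:
            nz = np.flatnonzero(row[:n])
            if nz.size == 0:
                break                           # row is dependent; discard
            p = int(nz[0])
            if p not in rows:
                rows[p] = row
                break
            row = (row + rows[p]) % 2           # reduce by the stored pivot row
        if len(rows) == n:
            break                               # full rank reached
    x = np.zeros(n, dtype=int)                  # back-substitute to read off x
    for p in sorted(rows, reverse=True):
        r = rows[p]
        x[p] = (r[n] + r[p + 1:n] @ x[p + 1:]) % 2
    return x

rng = np.random.default_rng(0)
n = 16
x_true = rng.integers(0, 2, n)

def samples():
    while True:
        a = rng.integers(0, 2, n)
        yield a, int(a @ x_true % 2)

print("recovered:", np.array_equal(learn_parity(samples(), n), x_true))
```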
Equilibrium Propagation: Bridging the Gap Between Energy-Based Models
and Backpropagation | cs.LG | We introduce Equilibrium Propagation, a learning framework for energy-based
models. It involves only one kind of neural computation, performed in both the
first phase (when the prediction is made) and the second phase of training
(after the target or prediction error is revealed). Although this algorithm
computes the gradient of an objective function just like Backpropagation, it
does not need a special computation or circuit for the second phase, where
errors are implicitly propagated. Equilibrium Propagation shares similarities
with Contrastive Hebbian Learning and Contrastive Divergence while solving the
theoretical issues of both algorithms: our algorithm computes the gradient of a
well defined objective function. Because the objective function is defined in
terms of local perturbations, the second phase of Equilibrium Propagation
corresponds to only nudging the prediction (fixed point, or stationary
distribution) towards a configuration that reduces prediction error. In the
case of a recurrent multi-layer supervised network, the output units are
slightly nudged towards their target in the second phase, and the perturbation
introduced at the output layer propagates backward in the hidden layers. We
show that the signal 'back-propagated' during this second phase corresponds to
the propagation of error derivatives and encodes the gradient of the objective
function, when the synaptic update corresponds to a standard form of
spike-timing dependent plasticity. This work makes it more plausible that a
mechanism similar to Backpropagation could be implemented by brains, since
leaky integrator neural computation performs both inference and error
back-propagation in our model. The only local difference between the two phases
is whether synaptic changes are allowed or not.
| Benjamin Scellier and Yoshua Bengio | null | 1602.05179 | null | null |
Primal-Dual Rates and Certificates | cs.LG math.OC | We propose an algorithm-independent framework to equip existing optimization
methods with primal-dual certificates. Such certificates and corresponding rate
of convergence guarantees are important for practitioners to diagnose progress,
in particular in machine learning applications. We obtain new primal-dual
convergence rates, e.g., for the Lasso as well as many L1, Elastic Net, group
Lasso and TV-regularized problems. The theory applies to any norm-regularized
generalized linear model. Our approach provides efficiently computable duality
gaps which are globally defined, without modifying the original problems in the
region of interest.
| Celestine D\"unner and Simone Forte and Martin Tak\'a\v{c} and Martin
Jaggi | null | 1602.05205 | null | null |
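For the Lasso case mentioned above, a globally defined duality gap can be computed by rescaling the residual into the dual-feasible region; the sketch below pairs it with plain ISTA iterations. This is a generic textbook-style construction under assumptions made here, not necessarily the paper's exact certificate.

```python
import numpy as np

def lasso_duality_gap(X, y, w, lam):
    """Gap for min_w 0.5*||y - Xw||^2 + lam*||w||_1: rescale the residual
    so ||X^T theta||_inf <= lam (dual feasibility); the gap then
    upper-bounds the suboptimality of w, anywhere in the domain."""
    r = y - X @ w
    primal = 0.5 * r @ r + lam * np.abs(w).sum()
    theta = r * min(1.0, lam / np.max(np.abs(X.T @ r)))
    dual = 0.5 * y @ y - 0.5 * (y - theta) @ (y - theta)
    return primal - dual

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = rng.normal(size=50) * (rng.random(50) < 0.1)
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.zeros(50)
lam = 0.1 * np.max(np.abs(X.T @ y))
L = np.linalg.norm(X, 2) ** 2                # Lipschitz constant of the gradient
for it in range(301):
    v = w - X.T @ (X @ w - y) / L            # gradient step ...
    w = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)   # ... then soft-threshold
    if it % 100 == 0:
        print(it, lasso_duality_gap(X, y, w, lam))
```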
Monte Carlo Markov Chain Algorithms for Sampling Strongly Rayleigh
Distributions and Determinantal Point Processes | cs.LG cs.DS math.PR | Strongly Rayleigh distributions are natural generalizations of product and
determinantal probability distributions and satisfy the strongest form of negative
dependence properties. We show that the "natural" Monte Carlo Markov Chain
(MCMC) is rapidly mixing in the support of a {\em homogeneous} strongly
Rayleigh distribution. As a byproduct, our proof implies Markov chains can be
used to efficiently generate approximate samples of a $k$-determinantal point
process. This answers an open question raised by Deshpande and Rademacher.
| Nima Anari, Shayan Oveis Gharan, Alireza Rezaei | null | 1602.05242 | null | null |
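For the homogeneous (fixed-size) case, the natural chain is a base-exchange walk: swap one element in for one out and accept with a determinant ratio. Below is a Metropolis-style sketch for a $k$-DPP, where $\Pr(S) \propto \det(L_S)$; the PSD kernel and step count are illustrative, and mixing-time constants are of course not reproduced here.

```python
import numpy as np

def kdpp_exchange_chain(L, S, steps, rng):
    """Base-exchange MCMC on size-k subsets of a k-DPP with kernel L."""
    S = set(S)
    ground = set(range(L.shape[0]))
    detS = np.linalg.det(L[np.ix_(sorted(S), sorted(S))])
    for _ in range(steps):
        i = rng.choice(sorted(S))               # element to swap out
        j = rng.choice(sorted(ground - S))      # element to swap in
        T = (S - {i}) | {j}
        detT = np.linalg.det(L[np.ix_(sorted(T), sorted(T))])
        if rng.random() < min(1.0, detT / max(detS, 1e-300)):
            S, detS = T, detT                   # accept the exchange
    return sorted(S)

rng = np.random.default_rng(0)
n, k = 8, 3
B = rng.normal(size=(n, n))
L = B @ B.T                                     # an arbitrary PSD kernel
print(kdpp_exchange_chain(L, list(range(k)), steps=2000, rng=rng))
```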
Peak Criterion for Choosing Gaussian Kernel Bandwidth in Support Vector
Data Description | cs.LG stat.AP stat.ML | Support Vector Data Description (SVDD) is a machine-learning technique used
for single class classification and outlier detection. SVDD formulation with
kernel function provides a flexible boundary around data. The value of kernel
function parameters affects the nature of the data boundary. For example, it is
observed that with a Gaussian kernel, as the value of kernel bandwidth is
lowered, the data boundary changes from spherical to wiggly. The spherical data
boundary leads to underfitting, and an extremely wiggly data boundary leads to
overfitting. In this paper, we propose an empirical criterion to obtain good
values of the Gaussian kernel bandwidth parameter. This criterion provides a
smooth boundary that captures the essential geometric features of the data.
| Deovrat Kakde, Arin Chaudhuri, Seunghyun Kong, Maria Jahja, Hansi
Jiang, Jorge Silva | 10.1109/ICPHM.2017.7998302 | 1602.05257 | null | null |
Anomaly Detection in Clutter using Spectrally Enhanced Ladar | physics.optics cs.LG physics.ins-det stat.AP stat.ML | Discrete return (DR) Laser Detection and Ranging (Ladar) systems provide a
series of echoes that reflect from objects in a scene. These can be first, last
or multi-echo returns. In contrast, Full-Waveform (FW)-Ladar systems measure
the intensity of light reflected from objects continuously over a period of
time. In a camouflaged scenario, e.g., objects hidden behind dense foliage, a
FW-Ladar penetrates such foliage and returns a sequence of echoes including
buried faint echoes. The aim of this paper is to learn local-patterns of
co-occurring echoes characterised by their measured spectra. A deviation from
such patterns defines an abnormal event in a forest/tree depth profile. As far
as the authors know, neither DR nor FW-Ladar, combined with several spectral
measurements, has been applied to anomaly detection. This work presents an
algorithm that allows detection of spectral and temporal anomalies in FW-Multi
Spectral Ladar (FW-MSL) data samples. An anomaly is defined as a full waveform
temporal and spectral signature that does not conform to a prior expectation,
represented using a learnt subspace (dictionary) and set of coefficients that
capture co-occurring local-patterns using an overlapping temporal window. A
modified optimization scheme is proposed for subspace learning based on
stochastic approximations. The objective function is augmented with a
discriminative term that represents the subspace's separability properties and
supports anomaly characterisation. The algorithm detects several man-made
objects and anomalous spectra hidden in a dense clutter of vegetation and also
allows tree species classification.
| Puneet S Chhabra, Andrew M Wallace and James R Hopgood | null | 1602.05264 | null | null |
Choice by Elimination via Deep Neural Networks | stat.ML cs.IR cs.LG | We introduce Neural Choice by Elimination, a new framework that integrates
deep neural networks into probabilistic sequential choice models for learning
to rank. Given a set of items to choose from, the elimination strategy starts
with the whole item set and iteratively eliminates the least worthy item in the
remaining subset. We prove that the choice by elimination is equivalent to
marginalizing out the random Gompertz latent utilities. Coupled with the choice
model are the recently introduced Neural Highway Networks for approximating
arbitrarily complex rank functions. We evaluate the proposed framework on a
large-scale public dataset with over 425K items, drawn from the Yahoo! learning
to rank challenge. It is demonstrated that the proposed method is competitive
against state-of-the-art learning to rank methods.
| Truyen Tran, Dinh Phung and Svetha Venkatesh | null | 1602.05285 | null | null |
Label Noise Reduction in Entity Typing by Heterogeneous Partial-Label
Embedding | cs.CL cs.LG | Current systems of fine-grained entity typing use distant supervision in
conjunction with existing knowledge bases to assign categories (type labels) to
entity mentions. However, the type labels so obtained from knowledge bases are
often noisy (i.e., incorrect for the entity mention's local context). We define
a new task, Label Noise Reduction in Entity Typing (LNR), to be the automatic
identification of correct type labels (type-paths) for training examples, given
the set of candidate type labels obtained by distant supervision with a given
type hierarchy. The unknown type labels for individual entity mentions and the
semantic similarity between entity types pose unique challenges for solving the
LNR task. We propose a general framework, called PLE, to jointly embed entity
mentions, text features and entity types into the same low-dimensional space
where, in that space, objects whose types are semantically close have similar
representations. Then we estimate the type-path for each training example in a
top-down manner using the learned embeddings. We formulate a global objective
for learning the embeddings from text corpora and knowledge bases, which adopts
a novel margin-based loss that is robust to noisy labels and faithfully models
type correlation derived from knowledge bases. Our experiments on three public
typing datasets demonstrate the effectiveness and robustness of PLE, with an
average of 25% improvement in accuracy compared to next best method.
| Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, Jiawei Han | null | 1602.05307 | null | null |
Large Scale Kernel Learning using Block Coordinate Descent | cs.LG math.OC stat.ML | We demonstrate that distributed block coordinate descent can quickly solve
kernel regression and classification problems with millions of data points.
Armed with this capability, we conduct a thorough comparison between the full
kernel, the Nystr\"om method, and random features on three large classification
tasks from various domains. Our results suggest that the Nystr\"om method
generally achieves better statistical accuracy than random features, but can
require significantly more iterations of optimization. Lastly, we derive new
rates for block coordinate descent which support our experimental findings when
specialized to kernel methods.
| Stephen Tu and Rebecca Roelofs and Shivaram Venkataraman and Benjamin
Recht | null | 1602.05310 | null | null |
Relative Error Embeddings for the Gaussian Kernel Distance | cs.LG | A reproducing kernel can define an embedding of a data point into an infinite
dimensional reproducing kernel Hilbert space (RKHS). The norm in this space
describes a distance, which we call the kernel distance. The random Fourier
features (of Rahimi and Recht) describe an oblivious approximate mapping into
finite dimensional Euclidean space that behaves similar to the RKHS. We show in
this paper that for the Gaussian kernel the Euclidean norm between these mapped
features has $(1+\epsilon)$-relative error with respect to the kernel
distance. When there are $n$ data points, we show that $O((1/\epsilon^2)
\log(n))$ dimensions of the approximate feature space are sufficient and
necessary.
Without a bound on $n$, but when the original points lie in $\mathbb{R}^d$
and have diameter bounded by $\mathcal{M}$, then we show that $O((d/\epsilon^2)
\log(\mathcal{M}))$ dimensions are sufficient, and that this many are required,
up to $\log(1/\epsilon)$ factors.
| Di Chen and Jeff M. Phillips | null | 1602.05350 | null | null |
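The claim is easy to check numerically: for the Gaussian kernel the RKHS distance is $\sqrt{2 - 2k(x,y)}$, and the Euclidean distance between random Fourier feature maps should approach it as the number of features grows. A small sketch follows; the bandwidth and points are arbitrary choices made here.

```python
import numpy as np

def rff(X, m, gamma, rng):
    """Random Fourier features for k(x, y) = exp(-gamma * ||x - y||^2)."""
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 4))
gamma = 0.5
# RKHS (kernel) distance: ||phi(x) - phi(y)|| = sqrt(2 - 2 k(x, y)).
kd = np.sqrt(2.0 - 2.0 * np.exp(-gamma * np.sum((x - y) ** 2)))

for m in [64, 256, 1024, 4096]:
    Z = rff(np.vstack([x, y]), m, gamma, np.random.default_rng(1))
    approx = np.linalg.norm(Z[0] - Z[1])
    print(f"m={m:5d}  approx={approx:.4f}  relative error={abs(approx - kd) / kd:.4f}")
```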
Recommendations as Treatments: Debiasing Learning and Evaluation | cs.LG cs.AI cs.IR | Most data for evaluating and training recommender systems is subject to
selection biases, either through self-selection by the users or through the
actions of the recommendation system itself. In this paper, we provide a
principled approach to handling selection biases, adapting models and
estimation techniques from causal inference. The approach leads to unbiased
performance estimators despite biased data, and to a matrix factorization
method that provides substantially improved prediction performance on
real-world data. We theoretically and empirically characterize the robustness
of the approach, finding that it is highly practical and scalable.
| Tobias Schnabel, Adith Swaminathan, Ashudeep Singh, Navin Chandak and
Thorsten Joachims | null | 1602.05352 | null | null |
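The causal-inference idea behind such estimators can be sketched with inverse propensity scoring: reweight each observed entry by one over its probability of being observed. Everything below (the propensity model, the loss matrix) is a toy illustration, not the paper's estimator verbatim.

```python
import numpy as np

def ips_estimate(errors, observed, propensities):
    """Inverse-propensity-scored estimate of the average loss over ALL
    user-item pairs, from selection-biased observations alone. Unbiased
    whenever the propensities are correct and bounded away from zero."""
    return np.sum(observed * errors / propensities) / errors.size

rng = np.random.default_rng(0)
true_err = rng.random((100, 200))                  # per-entry loss of some model
prop = np.clip(0.6 - 0.5 * true_err, 0.05, 0.95)   # low-error entries observed more
obs = (rng.random(true_err.shape) < prop).astype(float)

print("true mean loss :", true_err.mean())
print("naive (biased) :", true_err[obs == 1].mean())
print("IPS (debiased) :", ips_estimate(true_err, obs, prop))
```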
Online optimization and regret guarantees for non-additive long-term
constraints | stat.ML cs.LG math.OC math.ST stat.TH | We consider online optimization in the 1-lookahead setting, where the
objective does not decompose additively over the rounds of the online game. The
resulting formulation enables us to deal with non-stationary and/or long-term
constraints, which arise, for example, in online display advertising problems.
We propose an online primal-dual algorithm for which we obtain dynamic
cumulative regret guarantees. They depend on the convexity and the smoothness
of the non-additive penalty, as well as terms capturing the smoothness with
which the residuals of the non-stationary and long-term constraints vary over
the rounds. We conduct experiments on synthetic data to illustrate the benefits
of the non-additive penalty and show vanishing regret convergence on live
traffic data collected by a display advertising platform in production.
| Rodolphe Jenatton, Jim Huang, Dominik Csiba, Cedric Archambeau | null | 1602.05394 | null | null |
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares
Regression | math.OC cs.LG stat.ML | We consider the optimization of a quadratic objective function whose
gradients are only accessible through a stochastic oracle that returns the
gradient at any given point plus a zero-mean finite variance random error. We
present the first algorithm that achieves jointly the optimal prediction error
rates for least-squares regression, both in terms of forgetting of initial
conditions in $O(1/n^2)$, and in terms of dependence on the noise and dimension $d$
of the problem, as $O(d/n)$. Our new algorithm is based on averaged accelerated
regularized gradient descent, and may also be analyzed through finer
assumptions on initial conditions and the Hessian matrix, leading to
dimension-free quantities that may still be small while the "optimal" terms
above are large. In order to characterize the tightness of these new bounds, we
consider an application to non-parametric regression and use the known lower
bounds on the statistical performance (without computational limits), which
happen to match our bounds obtained from a single pass on the data and thus
show optimality of our algorithm in a wide variety of particular trade-offs
between bias and variance.
| Aymeric Dieuleveut (SIERRA, LIENS), Nicolas Flammarion (LIENS,
SIERRA), Francis Bach (SIERRA, LIENS) | null | 1602.05419 | null | null |
Low-Rank Factorization of Determinantal Point Processes for
Recommendation | stat.ML cs.LG | Determinantal point processes (DPPs) have garnered attention as an elegant
probabilistic model of set diversity. They are useful for a number of subset
selection tasks, including product recommendation. DPPs are parametrized by a
positive semi-definite kernel matrix. In this work we present a new method for
learning the DPP kernel from observed data using a low-rank factorization of
this kernel. We show that this low-rank factorization enables a learning
algorithm that is nearly an order of magnitude faster than previous approaches,
while also providing for a method for computing product recommendation
predictions that is far faster (up to 20x faster or more for large item
catalogs) than previous techniques that involve a full-rank DPP kernel.
Furthermore, we show that our method provides equivalent or sometimes better
predictive performance than prior full-rank DPP approaches, and better
performance than several other competing recommendation methods in many cases.
We conduct an extensive experimental evaluation using several real-world
datasets in the domain of product recommendation to demonstrate the utility of
our method, along with its limitations.
| Mike Gartrell, Ulrich Paquet, Noam Koenigstein | null | 1602.05436 | null | null |
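The computational win of the low-rank factorization is visible in the likelihood itself: with $L = VV^\top$ and rank-$k$ factor $V$, the normalizer $\det(L + I) = \det(I_k + V^\top V)$ costs $O(nk^2)$ rather than $O(n^3)$. A sketch follows (the learning loop is omitted; shapes and data are illustrative assumptions).

```python
import numpy as np

def lowrank_dpp_loglik(V, baskets):
    """Log-likelihood of observed subsets under a DPP with kernel L = V V^T.

    Uses det(I_n + V V^T) = det(I_k + V^T V) so the normalizer never touches
    an n x n matrix."""
    k = V.shape[1]
    log_norm = np.linalg.slogdet(np.eye(k) + V.T @ V)[1]
    ll = 0.0
    for A in baskets:
        VA = V[list(A)]                       # factor rows for the items in A
        ll += np.linalg.slogdet(VA @ VA.T)[1] - log_norm
    return ll

rng = np.random.default_rng(0)
V = rng.normal(size=(50, 5))                  # 50 items, rank-5 factor
print(lowrank_dpp_loglik(V, [[0, 3, 7], [2, 9]]))
```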
Cell segmentation with random ferns and graph-cuts | cs.CV cs.LG | The progress in imaging techniques has allowed the study of various aspects
of cellular mechanisms. To isolate individual cells in live imaging data, we
introduce an elegant image segmentation framework that effectively extracts
cell boundaries, even in the presence of poor edge details. Our approach works
in two stages. First, we estimate pixel interior/border/exterior class
probabilities using random ferns. Then, we use an energy minimization framework
to compute boundaries whose localization is compliant with the pixel class
probabilities. We validate our approach on a manually annotated dataset.
| Arnaud Browet, Christophe De Vleeschouwer, Laurent Jacques, Navrita
Mathiah, Bechara Saykali, Isabelle Migeotte | null | 1602.05439 | null | null |
Auxiliary Deep Generative Models | stat.ML cs.AI cs.LG | Deep generative models parameterized by neural networks have recently
achieved state-of-the-art performance in unsupervised and semi-supervised
learning. We extend deep generative models with auxiliary variables which
improves the variational approximation. The auxiliary variables leave the
generative model unchanged but make the variational distribution more
expressive. Inspired by the structure of the auxiliary variable we also propose
a model with two stochastic layers and skip connections. Our findings suggest
that more expressive and properly specified deep generative models converge
faster with better results. We show state-of-the-art performance within
semi-supervised learning on MNIST, SVHN and NORB datasets.
| Lars Maal{\o}e, Casper Kaae S{\o}nderby, S{\o}ren Kaae S{\o}nderby,
Ole Winther | null | 1602.05473 | null | null |
Multi-layer Representation Learning for Medical Concepts | cs.LG | Learning efficient representations for concepts has been proven to be an
important basis for many applications such as machine translation or document
classification. Proper representations of medical concepts such as diagnosis,
medication, procedure codes and visits will have broad applications in
healthcare analytics. However, in Electronic Health Records (EHR) the visit
sequences of patients include multiple concepts (diagnosis, procedure, and
medication codes) per visit. This structure provides two types of relational
information, namely sequential order of visits and co-occurrence of the codes
within each visit. In this work, we propose Med2Vec, which not only learns
distributed representations for both medical codes and visits from a large EHR
dataset with over 3 million visits, but also allows us to interpret the learned
representations confirmed positively by clinical experts. In the experiments,
Med2Vec displays significant improvement in key medical applications compared
to popular baselines such as Skip-gram, GloVe and stacked autoencoder, while
providing clinically meaningful interpretation.
| Edward Choi, Mohammad Taha Bahadori, Elizabeth Searles, Catherine
Coffey, Jimeng Sun | null | 1602.05568 | null | null |
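A forward-pass sketch of a Med2Vec-style two-level representation, assuming a visit is a multi-hot vector of medical codes mapped through a code-level layer and a visit-level layer, with a softmax over codes in nearby visits as the training signal; all dimensions, weights, and code ids below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_codes, code_dim, visit_dim = 500, 64, 32
W_code = rng.normal(scale=0.1, size=(code_dim, n_codes))    # code-level layer
W_visit = rng.normal(scale=0.1, size=(visit_dim, code_dim)) # visit-level layer
W_out = rng.normal(scale=0.1, size=(n_codes, visit_dim))    # predicts neighbor codes

def visit_embedding(codes):
    x = np.zeros(n_codes)
    x[codes] = 1.0                                  # multi-hot visit vector
    u = np.maximum(W_code @ x, 0.0)                 # intermediate code representation
    return np.maximum(W_visit @ u, 0.0)             # visit representation

def neighbor_code_probs(codes):
    logits = W_out @ visit_embedding(codes)
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # softmax over codes in nearby visits

print(neighbor_code_probs([3, 42, 99]).shape)       # (n_codes,)
```

The two layers reflect the two types of relational information in the abstract: code co-occurrence within a visit and the sequential order of visits.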
Communication-Efficient Learning of Deep Networks from Decentralized
Data | cs.LG | Modern mobile devices have access to a wealth of data suitable for learning
models, which in turn can greatly improve the user experience on the device.
For example, language models can improve speech recognition and text entry, and
image models can automatically select good photos. However, this rich data is
often privacy sensitive, large in quantity, or both, which may preclude logging
to the data center and training there using conventional approaches. We
advocate an alternative that leaves the training data distributed on the mobile
devices, and learns a shared model by aggregating locally-computed updates. We
term this decentralized approach Federated Learning.
We present a practical method for the federated learning of deep networks
based on iterative model averaging, and conduct an extensive empirical
evaluation, considering five different model architectures and four datasets.
These experiments demonstrate the approach is robust to the unbalanced and
non-IID data distributions that are a defining characteristic of this setting.
Communication costs are the principal constraint, and we show a reduction in
required communication rounds by 10-100x as compared to synchronized stochastic
gradient descent.
| H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, Blaise
Ag\"uera y Arcas | null | 1602.05629 | null | null |
Boost Picking: A Universal Method on Converting Supervised
Classification to Semi-supervised Classification | cs.CV cs.LG | This paper proposes a universal method, Boost Picking, for training supervised
classification models mainly with unlabeled data. Boost Picking adopts only two
weak classifiers to estimate and correct the error. We prove theoretically that
Boost Picking can train a supervised model mainly with unlabeled data as
effectively as the same model trained with 100% labeled data, provided that the
recalls of the two weak classifiers are both greater than zero and the sum of
their precisions is greater than one. Based on Boost Picking, we present "Test
along with Training (TawT)" to improve the generalization of supervised models.
Both Boost Picking and TawT are successfully tested on several small datasets.
| Fuqiang Liu, Fukun Bi, Yiding Yang, Liang Chen | null | 1602.05659 | null | null |
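The abstract does not spell out the procedure, so the following is not Boost Picking itself, only a generic sketch of the pattern it builds on: two weak classifiers pseudo-label unlabeled data, and a supervised model is trained where they agree. The classifier choices and data below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def pseudo_label_train(X_small, y_small, X_unlabeled):
    """Train a supervised model mostly from unlabeled data via two weak learners."""
    weak1 = LogisticRegression().fit(X_small, y_small)
    weak2 = DecisionTreeClassifier(max_depth=3).fit(X_small, y_small)
    p1 = weak1.predict(X_unlabeled)
    p2 = weak2.predict(X_unlabeled)
    agree = p1 == p2                              # keep only agreed pseudo-labels
    X_train = np.vstack([X_small, X_unlabeled[agree]])
    y_train = np.concatenate([y_small, p1[agree]])
    return LogisticRegression().fit(X_train, y_train)

# Tiny example: 20 labeled points, the rest treated as unlabeled.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = pseudo_label_train(X[:20], y[:20], X[20:])
print(model.score(X, y))
```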
Audio Recording Device Identification Based on Deep Learning | cs.SD cs.LG | In this paper we present research on identifying audio recording devices from
their background noise, thus providing a method for forensics. An audio signal
is the sum of a speech signal and a noise signal. Usually, attention focuses on
the speech signal, because it carries the information to be delivered, so a
great deal of research has been devoted to achieving a higher signal-to-noise
ratio (SNR); many speech enhancement algorithms improve speech quality by
reducing the noise. However, the noise can be regarded as an intrinsic
fingerprint of the audio recording device. These digital traces can be
characterized and identified by new machine learning techniques. Therefore, in
our research, we use the noise as the intrinsic feature. For identification,
multiple deep learning classifiers are used and compared. The results show that
extracting a feature vector from the noise of each device and identifying
devices with deep learning techniques is viable and performs well.
| Simeng Qi, Zheng Huang, Yan Li, Shaopei Shi | null | 1602.05682 | null | null |
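A rough sketch of the feature idea in the abstract: estimate a device's noise floor from the recording's lowest-energy frames and use the average noise magnitude spectrum as a fingerprint. The frame length, hop, and energy quantile below are arbitrary choices, and the deep classifier on top is omitted.

```python
import numpy as np

def noise_fingerprint(signal, frame_len=512, hop=256, quantile=0.1):
    """Average magnitude spectrum of the quietest frames as a device fingerprint."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame_len)[::hop]
    energy = (frames ** 2).mean(axis=1)
    quiet = frames[energy <= np.quantile(energy, quantile)]   # near-silent frames
    spectra = np.abs(np.fft.rfft(quiet * np.hanning(frame_len), axis=1))
    return spectra.mean(axis=0)                 # feature vector for a classifier

# One second of placeholder audio at 16 kHz.
sig = np.random.default_rng(0).normal(size=16000)
print(noise_fingerprint(sig).shape)             # (frame_len // 2 + 1,)
```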
Adaptive Least Mean Squares Estimation of Graph Signals | cs.LG cs.SY | The aim of this paper is to propose a least mean squares (LMS) strategy for
adaptive estimation of signals defined over graphs. Assuming the graph signal
to be band-limited, over a known bandwidth, the method enables reconstruction,
with guaranteed performance in terms of mean-square error, and tracking from a
limited number of observations over a subset of vertices. A detailed mean
square analysis provides the performance of the proposed method, and leads to
several insights for designing useful sampling strategies for graph signals.
Numerical results validate our theoretical findings, and illustrate the
performance of the proposed method. Furthermore, to cope with the case where
the bandwidth is not known beforehand, we propose a method that performs a
sparse online estimation of the signal support in the (graph) frequency domain,
which enables online adaptation of the graph sampling strategy. Finally, we
apply the proposed method to build the power spatial density cartography of a
given operational region in a cognitive network environment.
| Paolo Di Lorenzo, Sergio Barbarossa, Paolo Banelli, and Stefania
Sardellitti | 10.1109/TSIPN.2016.2613687 | 1602.05703 | null | null |
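A sketch of the LMS recursion in the abstract's known-bandwidth setting: project the error observed on the sampled vertices onto the band-limited subspace and take a small step, x <- x + mu * B D (y - x), where B projects onto the first F graph frequencies and D masks the sampled vertices. The random graph, bandwidth, and sampling set below are illustrative; reconstruction additionally requires that the sampled vertices identify the band-limited subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric random adjacency
L = np.diag(A.sum(axis=1)) - A                  # combinatorial graph Laplacian
_, U = np.linalg.eigh(L)                        # graph Fourier basis

F = 5                                           # known bandwidth: first F frequencies
B = U[:, :F] @ U[:, :F].T                       # projector onto band-limited signals
S = rng.choice(n, size=10, replace=False)       # sampled vertices
D = np.zeros((n, n))
D[S, S] = 1.0                                   # vertex-sampling operator

x_true = U[:, :F] @ rng.normal(size=F)          # band-limited ground truth
x = np.zeros(n)
mu = 0.5
for _ in range(500):
    y = x_true + 0.05 * rng.normal(size=n)      # noisy streaming observations
    x = x + mu * B @ D @ (y - x)                # LMS update on the sampled error
print(np.linalg.norm(x - x_true))               # small steady-state error
```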