title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
An Information Theoretic Feature Selection Framework for Big Data under
Apache Spark | cs.AI cs.DC cs.LG | With the advent of extremely high dimensional datasets, dimensionality
reduction techniques are becoming mandatory. Among many techniques, feature
selection has been growing in interest as an important tool to identify
relevant features on huge datasets --both in number of instances and
features--. The purpose of this work is to demonstrate that standard feature
selection methods can be parallelized in Big Data platforms like Apache Spark,
boosting both performance and accuracy. We thus propose a distributed
implementation of a generic feature selection framework which includes a wide
group of well-known Information Theoretic methods. Experimental results on a
wide set of real-world datasets show that our distributed framework is capable
of dealing with ultra-high dimensional datasets as well as those with a huge
number of samples in a short period of time, outperforming the sequential
version in all the cases studied.
| Sergio Ram\'irez-Gallego, H\'ector Mouri\~no-Tal\'in, David
Mart\'inez-Rego, Ver\'onica Bol\'on-Canedo, Jos\'e Manuel Ben\'itez, Amparo
Alonso-Betanzos, Francisco Herrera | null | 1610.04154 | null | null |
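
As a rough illustration of the kind of criterion such frameworks parallelize, here is a minimal single-machine sketch of greedy mutual-information-based feature selection (an mRMR-style relevance-minus-redundancy score). All function names are ours; the paper's contribution is distributing computations of this sort across Spark partitions.

```python
import numpy as np

def mutual_info(x, y):
    """Empirical mutual information between two discrete-valued vectors."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(x == a) * np.mean(y == b)))
    return mi

def greedy_mi_selection(X, y, k):
    """Greedily pick k features maximizing relevance minus mean redundancy."""
    selected = []
    remaining = list(range(X.shape[1]))
    relevance = {j: mutual_info(X[:, j], y) for j in remaining}
    for _ in range(k):
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```
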
Why Deep Neural Networks for Function Approximation? | cs.LG cs.NE | Recently there has been much interest in understanding why deep neural
networks are preferred to shallow networks. We show that, for a large class of
piecewise smooth functions, the number of neurons needed by a shallow network
to approximate a function is exponentially larger than the corresponding number
of neurons needed by a deep network for a given degree of function
approximation. First, we consider univariate functions on a bounded interval
and require a neural network to achieve an approximation error of $\varepsilon$
uniformly over the interval. We show that shallow networks (i.e., networks
whose depth does not depend on $\varepsilon$) require
$\Omega(\text{poly}(1/\varepsilon))$ neurons while deep networks (i.e.,
networks whose depth grows with $1/\varepsilon$) require
$\mathcal{O}(\text{polylog}(1/\varepsilon))$ neurons. We then extend these
results to certain classes of important multivariate functions. Our results are
derived for neural networks which use a combination of rectifier linear units
(ReLUs) and binary step units, two of the most popular types of activation
functions. Our analysis builds on a simple observation: the multiplication of
two bits can be represented by a ReLU.
| Shiyu Liang and R. Srikant | null | 1610.04161 | null | null |
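
The closing observation admits a one-line check: for bits $x, y \in \{0, 1\}$,
$$xy = \mathrm{ReLU}(x + y - 1) = \max(0,\; x + y - 1),$$
which equals $1$ exactly when $x = y = 1$ and $0$ in the other three cases.
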
Tensorial Mixture Models | cs.LG cs.NE stat.ML | Casting neural networks in generative frameworks is a highly sought-after
endeavor these days. Contemporary methods, such as Generative Adversarial
Networks, capture some of the generative capabilities, but not all. In
particular, they lack the ability of tractable marginalization, and thus are
not suitable for many tasks. Other methods, based on arithmetic circuits and
sum-product networks, do allow tractable marginalization, but their performance
is challenged by the need to learn the structure of a circuit. Building on the
tractability of arithmetic circuits, we leverage concepts from tensor analysis,
and derive a family of generative models we call Tensorial Mixture Models
(TMMs). TMMs assume a simple convolutional network structure, and in addition,
lend themselves to theoretical analyses that allow comprehensive understanding
of the relation between their structure and their expressive properties. We
thus obtain a generative model that is tractable on one hand, and on the other
hand, allows effective representation of rich distributions in an easily
controlled manner. These two capabilities are brought together in the task of
classification under missing data, where TMMs deliver state of the art
accuracies with seamless implementation and design.
| Or Sharir, Ronen Tamari, Nadav Cohen and Amnon Shashua | null | 1610.04167 | null | null |
Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex
Relaxation | cs.IT cs.LG math.FA math.IT math.OC stat.ML | We propose a flexible convex relaxation for the phase retrieval problem that
operates in the natural domain of the signal. Therefore, we avoid the
prohibitive computational cost associated with "lifting" and semidefinite
programming (SDP) in methods such as PhaseLift and compete with recently
developed non-convex techniques for phase retrieval. We relax the quadratic
equations for phaseless measurements to inequality constraints, each of which
represents a symmetric "slab". Through a simple convex program, our proposed
estimator finds an extreme point of the intersection of these slabs that is
best aligned with a given anchor vector. We characterize geometric conditions
that certify success of the proposed estimator. Furthermore, using classic
results in statistical learning theory, we show that for random measurements
the geometric certificates hold with high probability at an optimal sample
complexity. Phase transition of our estimator is evaluated through simulations.
Our numerical experiments also suggest that the proposed method can solve phase
retrieval problems with coded diffraction measurements as well.
| Sohail Bahmani and Justin Romberg | null | 1610.04210 | null | null |
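
Written out (a sketch in our notation, with $a_0$ the anchor vector and $(a_i, b_i)$ the phaseless measurements), the estimator described above solves a convex program of the form
$$\hat{x} = \arg\max_{x} \; \langle a_0, x \rangle \quad \text{subject to} \quad |\langle a_i, x \rangle|^2 \le b_i, \quad i = 1, \dots, m,$$
where each constraint $|\langle a_i, x \rangle| \le \sqrt{b_i}$ is a symmetric slab; the paper's precise formulation may differ in details.
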
Sim-to-Real Robot Learning from Pixels with Progressive Nets | cs.RO cs.LG | Applying end-to-end learning to solve complex, interactive, pixel-driven
control tasks on a robot is an unsolved problem. Deep Reinforcement Learning
algorithms are too slow to achieve performance on a real robot, but their
potential has been demonstrated in simulated environments. We propose using
progressive networks to bridge the reality gap and transfer learned policies
from simulation to the real world. The progressive net approach is a general
framework that enables reuse of everything from low-level visual features to
high-level policies for transfer to new tasks, enabling a compositional, yet
simple, approach to building complex skills. We present an early demonstration
of this approach with a number of experiments in the domain of robot
manipulation that focus on bridging the reality gap. Unlike other proposed
approaches, our real-world experiments demonstrate successful task learning
from raw visual input on a fully actuated robot manipulator. Moreover, rather
than relying on model-based trajectory optimisation, the task learning is
accomplished using only deep reinforcement learning and sparse rewards.
| Andrei A. Rusu, Mel Vecerik, Thomas Roth\"orl, Nicolas Heess, Razvan
Pascanu, Raia Hadsell | null | 1610.04286 | null | null |
Approximate Counting, the Lovasz Local Lemma and Inference in Graphical
Models | cs.DS cs.CC cs.LG | In this paper we introduce a new approach for approximately counting in
bounded degree systems with higher-order constraints. Our main result is an
algorithm to approximately count the number of solutions to a CNF formula
$\Phi$ when the width is logarithmic in the maximum degree. This closes an
exponential gap between the known upper and lower bounds.
Moreover our algorithm extends straightforwardly to approximate sampling,
which shows that under Lov\'asz Local Lemma-like conditions it is not only
possible to find a satisfying assignment, it is also possible to generate one
approximately uniformly at random from the set of all satisfying assignments.
Our approach is a significant departure from earlier techniques in approximate
counting, and is based on a framework to bootstrap an oracle for computing
marginal probabilities on individual variables. Finally, we give an application
of our results to show that it is algorithmically possible to sample from the
posterior distribution in an interesting class of graphical models.
| Ankur Moitra | null | 1610.04317 | null | null |
MML is not consistent for Neyman-Scott | stat.ML cs.LG math.ST stat.TH | Strict Minimum Message Length (SMML) is an information-theoretic statistical
inference method widely cited (but only with informal arguments) as providing
estimations that are consistent for general estimation problems. It is,
however, almost invariably intractable to compute, for which reason only
approximations of it (known as MML algorithms) are ever used in practice. Using
novel techniques that allow for the first time direct, non-approximated
analysis of SMML solutions, we investigate the Neyman-Scott estimation problem,
an oft-cited showcase for the consistency of MML, and show that even with a
natural choice of prior neither SMML nor its popular approximations are
consistent for it, thereby providing a counterexample to the general claim.
This is the first known explicit construction of an SMML solution for a
natural, high-dimensional problem.
| Michael Brand | 10.1109/TIT.2019.2943464 | 1610.04336 | null | null |
Spectral Inference Methods on Sparse Graphs: Theory and Applications | cond-mat.dis-nn cs.IT cs.LG math.IT | In an era of unprecedented deluge of (mostly unstructured) data, graphs are
proving more and more useful, across the sciences, as a flexible abstraction to
capture complex relationships between complex objects. One of the main
challenges arising in the study of such networks is the inference of
macroscopic, large-scale properties affecting a large number of objects, based
solely on the microscopic interactions between their elementary constituents.
Statistical physics, precisely created to recover the macroscopic laws of
thermodynamics from an idealized model of interacting particles, provides
significant insight to tackle such complex networks.
In this dissertation, we use methods derived from the statistical physics of
disordered systems to design and study new algorithms for inference on graphs.
Our focus is on spectral methods, based on certain eigenvectors of carefully
chosen matrices, and sparse graphs, containing only a small amount of
information. We develop an original theory of spectral inference based on a
relaxation of various mean-field free energy optimizations. Our approach is
therefore fully probabilistic, and contrasts with more traditional motivations
based on the optimization of a cost function. We illustrate the efficiency of
our approach on various problems, including community detection, randomized
similarity-based clustering, and matrix completion.
| Alaa Saade | null | 1610.04337 | null | null |
Semi-supervised Graph Embedding Approach to Dynamic Link Prediction | stat.ML cs.LG cs.SI physics.soc-ph | We propose a simple discrete time semi-supervised graph embedding approach to
link prediction in dynamic networks. The learned embedding reflects information
from both the temporal and cross-sectional network structures, which is
achieved by defining the loss function as a weighted sum of the supervised
loss from past dynamics and the unsupervised loss of predicting the
neighborhood context in the current network. Our model is also capable of
learning different embeddings for both formation and dissolution dynamics.
These key aspects contribute to the predictive performance of our model, and we
provide experiments with three real-world dynamic networks showing that our
method is comparable to state-of-the-art methods in link formation prediction
and outperforms state-of-the-art baseline methods in link dissolution
prediction.
| Ryohei Hisano | null | 1610.04351 | null | null |
Theoretical Analysis of Domain Adaptation with Optimal Transport | stat.ML cs.LG | Domain adaptation (DA) is an important and emerging field of machine learning
that tackles the problem occurring when the distributions of training (source
domain) and test (target domain) data are similar but different. Current
theoretical results show that the efficiency of DA algorithms depends on their
capacity of minimizing the divergence between source and target probability
distributions. In this paper, we provide a theoretical study on the advantages
that concepts borrowed from optimal transportation theory can bring to DA. In
particular, we show that the Wasserstein metric can be used as a divergence
measure between distributions to obtain generalization guarantees for three
different learning settings: (i) classic DA with unsupervised target data, (ii)
DA combining source and target labeled data, (iii) multiple source DA. Based on
the obtained results, we provide some insights showing when this analysis can
be tighter than other existing frameworks.
| Ievgen Redko, Amaury Habrard and Marc Sebban | null | 1610.04420 | null | null |
Amortised MAP Inference for Image Super-resolution | cs.CV cs.LG stat.ML | Image super-resolution (SR) is an underdetermined inverse problem, where a
large number of plausible high-resolution images can explain the same
downsampled image. Most current single image SR methods use empirical risk
minimisation, often with a pixel-wise mean squared error (MSE) loss. However,
the outputs from such methods tend to be blurry, over-smoothed and generally
appear implausible. A more desirable approach would employ Maximum a Posteriori
(MAP) inference, preferring solutions that always have a high probability under
the image prior, and thus appear more plausible. Direct MAP estimation for SR
is non-trivial, as it requires us to build a model for the image prior from
samples. Furthermore, MAP inference is often performed via optimisation-based
iterative algorithms which don't compare well with the efficiency of
neural-network-based alternatives. Here we introduce new methods for amortised
MAP inference whereby we calculate the MAP estimate directly using a
convolutional neural network. We first introduce a novel neural network
architecture that performs a projection to the affine subspace of valid SR
solutions ensuring that the high resolution output of the network is always
consistent with the low resolution input. We show that, using this
architecture, the amortised MAP inference problem reduces to minimising the
cross-entropy between two distributions, similar to training generative models.
We propose three methods to solve this optimisation problem: (1) Generative
Adversarial Networks (GAN) (2) denoiser-guided SR which backpropagates
gradient-estimates from denoising to train the network, and (3) a baseline
method using a maximum-likelihood-trained image prior. Our experiments show
that the GAN based approach performs best on real image data. Lastly, we
establish a connection between GANs and amortised variational inference as in
e.g. variational autoencoders.
| Casper Kaae S{\o}nderby, Jose Caballero, Lucas Theis, Wenzhe Shi,
Ferenc Husz\'ar | null | 1610.04490 | null | null |
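
The affine-subspace projection mentioned in the abstract has a simple linear-algebra core: given a downsampling operator $A$ and low-resolution input $y$, any candidate output can be corrected so that it downsamples exactly to $y$. A minimal numpy sketch (the paper realizes this as a differentiable network layer; names and the toy operator are ours):

```python
import numpy as np

def project_to_consistent(x_hat, y, A):
    """Project x_hat onto the affine subspace {x : A x = y} of valid
    super-resolution outputs, via the pseudo-inverse correction."""
    return x_hat + np.linalg.pinv(A) @ (y - A @ x_hat)

# Sanity check: the projected output is consistent with the low-res input.
A = np.kron(np.eye(4), np.full((1, 2), 0.5))   # 2x average-pooling, 8 -> 4
x_hat = np.random.randn(8)
y = np.random.randn(4)
assert np.allclose(A @ project_to_consistent(x_hat, y, A), y)
```
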
The End of Optimism? An Asymptotic Analysis of Finite-Armed Linear
Bandits | stat.ML cs.LG | Stochastic linear bandits are a natural and simple generalisation of
finite-armed bandits with numerous practical applications. Current approaches
focus on generalising existing techniques for finite-armed bandits, notably the
optimism principle and Thompson sampling. While prior work has mostly been in
the worst-case setting, we analyse the asymptotic instance-dependent regret and
show matching upper and lower bounds on what is achievable. Surprisingly, our
results show that no algorithm based on optimism or Thompson sampling will ever
achieve the optimal rate, and indeed, can be arbitrarily far from optimal, even
in very simple cases. This is a disturbing result because these techniques are
standard tools that are widely used for sequential optimisation, for example
in generalised linear bandits and reinforcement learning.
| Tor Lattimore and Csaba Szepesvari | null | 1610.04491 | null | null |
Generalization Error of Invariant Classifiers | stat.ML cs.AI cs.CV cs.LG | This paper studies the generalization error of invariant classifiers. In
particular, we consider the common scenario where the classification task is
invariant to certain transformations of the input, and that the classifier is
constructed (or learned) to be invariant to these transformations. Our approach
relies on factoring the input space into a product of a base space and a set of
transformations. We show that whereas the generalization error of a
non-invariant classifier is proportional to the complexity of the input space,
the generalization error of an invariant classifier is proportional to the
complexity of the base space. We also derive a set of sufficient conditions on
the geometry of the base space and the set of transformations that ensure that
the complexity of the base space is much smaller than the complexity of the
input space. Our analysis applies to general classifiers such as convolutional
neural networks. We demonstrate the implications of the developed theory for
such classifiers with experiments on the MNIST and CIFAR-10 datasets.
| Jure Sokolic, Raja Giryes, Guillermo Sapiro, Miguel R. D. Rodrigues | null | 1610.04574 | null | null |
Kernel Alignment Inspired Linear Discriminant Analysis | cs.LG stat.ML | Kernel alignment measures the degree of similarity between two kernels. In
this paper, inspired from kernel alignment, we propose a new Linear
Discriminant Analysis (LDA) formulation, kernel alignment LDA (kaLDA). We first
define two kernels, data kernel and class indicator kernel. The problem is to
find a subspace to maximize the alignment between subspace-transformed data
kernel and class indicator kernel. Surprisingly, the kernel alignment induced
kaLDA objective function is very similar to classical LDA and can be expressed
using between-class and total scatter matrices. This can be extended to
multi-label data. We use a Stiefel-manifold gradient descent algorithm to solve
this problem. We perform experiments on 8 single-label and 6 multi-label data
sets. Results show that kaLDA has very good performance on many single-label
and multi-label problems.
| Shuai Zheng, Chris Ding | null | 1610.04576 | null | null |
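
For reference, kernel alignment between two kernel matrices is the normalized Frobenius inner product (Cristianini et al.); below is a small sketch with a data kernel and a one-hot class-indicator kernel, constructed as we read the abstract (the construction details are our assumption):

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Alignment <K1, K2>_F / (||K1||_F ||K2||_F)."""
    return np.sum(K1 * K2) / (np.linalg.norm(K1) * np.linalg.norm(K2))

X = np.random.randn(20, 5)                 # 20 samples, 5 features
labels = np.random.randint(0, 3, size=20)  # 3 classes
K_data = X @ X.T                           # linear data kernel
Y = np.eye(3)[labels]                      # one-hot class indicators
K_class = Y @ Y.T                          # 1 iff same class, else 0
print(kernel_alignment(K_data, K_class))
```
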
Improved Strongly Adaptive Online Learning using Coin Betting | stat.ML cs.LG | This paper describes a new parameter-free online learning algorithm for
changing environments. In comparing against algorithms with the same time
complexity as ours, we obtain a strongly adaptive regret bound that is a factor
of at least $\sqrt{\log(T)}$ better, where $T$ is the time horizon. Empirical
results show that our algorithm outperforms state-of-the-art methods in
learning with expert advice and metric learning scenarios.
| Kwang-Sung Jun, Francesco Orabona, Rebecca Willett, Stephen Wright | null | 1610.04578 | null | null |
Data-Driven Threshold Machine: Scan Statistics, Change-Point Detection,
and Extreme Bandits | cs.LG math.ST stat.ML stat.TH | We present a novel distribution-free approach, the data-driven threshold
machine (DTM), for a fundamental problem at the core of many learning tasks:
choose a threshold for a given pre-specified level that bounds the tail
probability of the maximum of a (possibly dependent but stationary) random
sequence. We do not assume a data distribution, but rather rely on the
asymptotic distribution of extremal values, and reduce the problem to estimating
three parameters of the extreme value distributions and the extremal index. We
take special care of data dependence by estimating the extremal index, since in
many settings, such as scan statistics, change-point detection, and extreme
bandits, dependence in the sequence of statistics can be significant. Key
features of our DTM include robustness and computational efficiency: it
requires only one sample path to form a reliable estimate of the
threshold, in contrast to the Monte Carlo sampling approach which requires
drawing a large number of sample paths. We demonstrate the good performance of
DTM via numerical examples in various dependent settings.
| Shuang Li, Yao Xie, and Le Song | null | 1610.04599 | null | null |
Simultaneous Learning of Trees and Representations for Extreme
Classification and Density Estimation | stat.ML cs.CL cs.LG | We consider multi-class classification where the predictor has a hierarchical
structure that allows for a very large number of labels both at train and test
time. The predictive power of such models can heavily depend on the structure
of the tree, and although past work showed how to learn the tree structure, it
assumed that the feature vectors remained static. We provide a novel algorithm
to simultaneously perform representation learning for the input data and
learning of the hierarchical predictor. Our approach optimizes an objective
function which favors balanced and easily-separable multi-way node partitions.
We theoretically analyze this objective, showing that it gives rise to a
boosting-style property and a bound on classification error. We next show how
to extend the algorithm to conditional density estimation. We empirically
validate both variants of the algorithm on text classification and language
modeling, respectively, and show that they compare favorably to common
baselines in terms of accuracy and running time.
| Yacine Jernite, Anna Choromanska and David Sontag | null | 1610.04658 | null | null |
A Closed Form Solution to Multi-View Low-Rank Regression | cs.LG | Real life data often includes information from different channels. For
example, in computer vision, we can describe an image using different image
features, such as pixel intensity, color, HOG, GIST feature, SIFT features,
etc. These different aspects of the same objects are often called multi-view
(or multi-modal) data. The low-rank regression model has been proved to be an
effective learning mechanism by exploiting the low-rank structure of real-life
data. But previous low-rank regression models only work on single-view data. In
this paper, we propose a multi-view low-rank regression model by imposing
low-rank constraints on multi-view regression model. Most importantly, we
provide a closed-form solution to the multi-view low-rank regression model.
Extensive experiments on 4 multi-view datasets show that the multi-view
low-rank regression model outperforms the single-view regression model, and
reveal that the multi-view low-rank structure is very helpful.
| Shuai Zheng, Xiao Cai, Chris Ding, Feiping Nie, Heng Huang | null | 1610.04668 | null | null |
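
For context, the classical single-view low-rank (reduced-rank) regression already admits a closed form: take the ordinary least-squares solution and project it onto the top singular directions of the fitted values. A sketch of that textbook single-view case (the paper's closed form for the multi-view model generalizes this; the code below is not the paper's method):

```python
import numpy as np

def reduced_rank_regression(X, Y, r):
    """Closed-form rank-r regression: min ||Y - X B||_F^2 s.t. rank(B) <= r."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)      # full-rank OLS solution
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:r].T @ Vt[:r]                  # projector onto top-r directions
    return B_ols @ P
```
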
Generalization of metric classification algorithms for sequences
classification and labelling | cs.LG cs.CL | The article deals with the
modification of metric classification algorithms. In particular, it studies
the k-Nearest Neighbours algorithm and its application to sequential data. A
method for generalizing metric classification algorithms is proposed, and as
part of it, an algorithm for classifying and labelling sequential data is
developed. The advantages of the developed classification algorithm over the
existing one are also discussed. The effectiveness of the proposed algorithm
is compared with that of CRF on the chunking task of the open CoNLL-2000
dataset.
| Roman Samarev, Andrey Vasnetsov, Elizaveta Smelkova | null | 1610.04718 | null | null |
An Adaptive Test of Independence with Analytic Kernel Embeddings | stat.ML cs.LG | A new computationally efficient dependence measure, and an adaptive
statistical test of independence, are proposed. The dependence measure is the
difference between analytic embeddings of the joint distribution and the
product of the marginals, evaluated at a finite set of locations (features).
These features are chosen so as to maximize a lower bound on the test power,
resulting in a test that is data-efficient, and that runs in linear time (with
respect to the sample size n). The optimized features can be interpreted as
evidence to reject the null hypothesis, indicating regions in the joint domain
where the joint distribution and the product of the marginals differ most.
Consistency of the independence test is established, for an appropriate choice
of features. In real-world benchmarks, independence tests using the optimized
features perform comparably to the state-of-the-art quadratic-time HSIC test,
and outperform competing O(n) and O(n log n) tests.
| Wittawat Jitkrittum, Zoltan Szabo, Arthur Gretton | null | 1610.04782 | null | null |
Similarity Learning for Time Series Classification | cs.LG | Multivariate time series naturally exist in many fields, like energy,
bioinformatics, signal processing, and finance. Most of these applications need
to be able to compare these structured data. In this context, dynamic time
warping (DTW) is probably the most common comparison measure. However, not much
research effort has been put into improving it by learning. In this paper, we
propose a novel method for learning similarities based on DTW, in order to
improve time series classification. Making use of the uniform stability
framework, we provide the first theoretical guarantees in the form of a
generalization bound for linear classification. The experimental study shows
that the proposed approach is efficient, while yielding sparse classifiers.
| Maria-Irina Nicolae, \'Eric Gaussier, Amaury Habrard, Marc Sebban | null | 1610.04783 | null | null |
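
Since the paper builds on DTW, a minimal univariate implementation of the underlying dynamic program may help fix ideas (standard textbook DTW, not the paper's learned similarity):

```python
import numpy as np

def dtw(a, b):
    """Classic O(|a||b|) dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw([1, 2, 3, 4], [1, 1, 2, 3, 4]))  # 0.0: warping absorbs the repeat
```
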
Towards K-means-friendly Spaces: Simultaneous Deep Learning and
Clustering | cs.LG | Most learning approaches treat dimensionality reduction (DR) and clustering
separately (i.e., sequentially), but recent research has shown that optimizing
the two tasks jointly can substantially improve the performance of both. The
premise behind the latter genre is that the data samples are obtained via
linear transformation of latent representations that are easy to cluster; but
in practice, the transformation from the latent space to the data can be more
complicated. In this work, we assume that this transformation is an unknown and
possibly nonlinear function. To recover the `clustering-friendly' latent
representations and to better cluster the data, we propose a joint DR and
K-means clustering approach in which DR is accomplished via learning a deep
neural network (DNN). The motivation is to keep the advantages of jointly
optimizing the two tasks, while exploiting the deep neural network's ability to
approximate any nonlinear function. This way, the proposed approach can work
well for a broad class of generative models. Towards this end, we carefully
design the DNN structure and the associated joint optimization criterion, and
propose an effective and scalable algorithm to handle the formulated
optimization problem. Experiments using different real datasets are employed to
showcase the effectiveness of the proposed approach.
| Bo Yang, Xiao Fu, Nicholas D. Sidiropoulos, Mingyi Hong | null | 1610.04794 | null | null |
Sample Efficient Optimization for Learning Controllers for Bipedal
Locomotion | cs.RO cs.LG | Learning policies for bipedal locomotion can be difficult, as experiments are
expensive and simulation does not usually transfer well to hardware. To counter
this, we need algorithms that are sample efficient and inherently safe.
Bayesian Optimization is a powerful sample-efficient tool for optimizing
non-convex black-box functions. However, its performance can degrade in higher
dimensions. We develop a distance metric for bipedal locomotion that enhances
the sample-efficiency of Bayesian Optimization and use it to train a 16
dimensional neuromuscular model for planar walking. This distance metric
reflects some basic gait features of healthy walking and helps us quickly
eliminate a majority of unstable controllers. With our approach we can learn
policies for walking in less than 100 trials for a range of challenging
settings. In simulation, we show results on two different costs and on various
terrains including rough ground and ramps, sloping upwards and downwards. We
also perturb our models with unknown inertial disturbances analogous with
differences between simulation and hardware. These results are promising, as
they indicate that this method can potentially be used to learn control
policies on hardware.
| Rika Antonova, Akshara Rai, Christopher G. Atkeson | 10.1109/HUMANOIDS.2016.7803249 | 1610.04795 | null | null |
Dynamic Stacked Generalization for Node Classification on Networks | stat.ML cs.LG cs.SI stat.AP | We propose a novel stacked generalization (stacking) method as a dynamic
ensemble technique using a pool of heterogeneous classifiers for node label
classification on networks. The proposed method assigns component models a set
of functional coefficients, which can vary smoothly with certain topological
features of a node. Compared to the traditional stacking model, the proposed
method can dynamically adjust the weights of individual models as we move
across the graph and provide a more versatile and significantly more accurate
stacking model for label prediction on a network. We demonstrate the benefits
of the proposed model using both a simulation study and real data analysis.
| Zhen Han and Alyson Wilson | null | 1610.04804 | null | null |
Convergence rate of stochastic k-means | cs.LG | We analyze online and mini-batch k-means variants. Both scale up the widely
used Lloyd's algorithm via stochastic approximation, and have become popular
for large-scale clustering and unsupervised feature learning. We show, for the
first time, that they have global convergence towards local optima at
$O(\frac{1}{t})$ rate under general conditions. In addition, we show if the
dataset is clusterable, with suitable initialization, mini-batch k-means
converges to an optimal k-means solution with $O(\frac{1}{t})$ convergence rate
with high probability. The k-means objective is non-convex and
non-differentiable: we exploit ideas from non-convex gradient-based
optimization by providing a novel characterization of the trajectory of k-means
algorithm on its solution space, and circumvent its non-differentiability via
geometric insights about k-means update.
| Cheng Tang, Claire Monteleoni | null | 1610.04900 | null | null |
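
The online variant analyzed here follows MacQueen-style updates: each arriving point pulls its nearest center with a step size that decays like $1/t$, which is where the $O(1/t)$ rate enters. A minimal sketch (initialization and data order matter in practice; this is the standard update, not the paper's code):

```python
import numpy as np

def online_kmeans(X, k, seed=0):
    """Online k-means: per-center counts give the O(1/t) step size."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    counts = np.zeros(k)
    for x in X:
        j = int(np.argmin(((centers - x) ** 2).sum(axis=1)))
        counts[j] += 1
        centers[j] += (x - centers[j]) / counts[j]  # step size 1/counts[j]
    return centers
```
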
Probabilistic Dimensionality Reduction via Structure Learning | stat.ML cs.LG | We propose a novel probabilistic dimensionality reduction framework that can
naturally integrate the generative model and the locality information of data.
Based on this framework, we present a new model, which is able to learn a
smooth skeleton of embedding points in a low-dimensional space from
high-dimensional noisy data. The formulation of the new model can be
equivalently interpreted as two coupled learning problems, i.e., structure
learning and the learning of a projection matrix. This interpretation motivates
the learning of the embedding points that can directly form an explicit graph
structure. We develop a new method to learn the embedding points that form a
spanning tree, which is further extended to obtain a discriminative and compact
feature representation for clustering problems. Unlike traditional clustering
methods, we assume that centers of clusters should be close to each other if
they are connected in a learned graph, and other cluster centers should be
distant. This can greatly facilitate data visualization and scientific
discovery in downstream analysis. Extensive experiments are performed that
demonstrate that the proposed framework is able to obtain discriminative
feature representations, and correctly recover the intrinsic structures of
various real-world datasets.
| Li Wang | null | 1610.04929 | null | null |
Wind ramp event prediction with parallelized Gradient Boosted Regression
Trees | cs.LG cs.AI | Accurate prediction of wind ramp events is critical for ensuring the
reliability and stability of the power systems with high penetration of wind
energy. This paper proposes a classification based approach for estimating the
future class of wind ramp event based on certain thresholds. A parallelized
gradient boosted regression tree based technique has been proposed to
accurately classify the normal as well as rare extreme wind power ramp events.
The model has been validated using wind power data obtained from the National
Renewable Energy Laboratory database. Performance comparison with several
benchmark techniques indicates the superiority of the proposed technique in
terms of superior classification accuracy.
| Saurav Gupta, Nitin Anand Shrivastava, Abbas Khosravi, Bijaya Ketan
Panigrahi | null | 1610.05009 | null | null |
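
A hedged sketch of the classification setup as we read the abstract: label time steps by thresholding the change in power, build lagged features, and fit a gradient boosted classifier. scikit-learn's sequential GBRT stands in here; the paper's technique is parallelized, and the thresholds, lags, and synthetic series below are entirely our assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

power = np.random.rand(1000)                   # stand-in for a wind power series
delta = np.diff(power, prepend=power[0])       # change between consecutive steps
labels = np.digitize(delta, bins=[-0.2, 0.2])  # 0: down-ramp, 1: normal, 2: up-ramp

lags = 5
X = np.column_stack([np.roll(power, i) for i in range(1, lags + 1)])
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X[lags:], labels[lags:])               # drop rows with wrapped-around lags
```
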
Encoding the Local Connectivity Patterns of fMRI for Cognitive State
Classification | cs.CV cs.LG | In this work, we propose a novel framework to encode the local connectivity
patterns of brain, using Fisher Vectors (FV), Vector of Locally Aggregated
Descriptors (VLAD) and Bag-of-Words (BoW) methods. We first obtain local
descriptors, called Mesh Arc Descriptors (MADs) from fMRI data, by forming
local meshes around anatomical regions, and estimating their relationship
within a neighborhood. Then, we extract a dictionary of relationships, called
\textit{brain connectivity dictionary} by fitting a generative Gaussian mixture
model (GMM) to a set of MADs, and selecting the codewords at the mean of each
component of the mixture. Codewords represent the connectivity patterns among
anatomical regions. We also encode MADs by VLAD and BoW methods using the
k-Means clustering.
We classify the cognitive states of Human Connectome Project (HCP) task fMRI
dataset, where we train support vector machines (SVM) with the encoded MADs.
Results demonstrate that FV encoding of MADs can be successfully employed for
classification of cognitive tasks, and outperforms the VLAD and BoW
representations. Moreover, we identify the significant Gaussians in mixture
models by computing energy of their corresponding FV parts, and analyze their
effect on classification accuracy. Finally, we suggest a new method to
visualize the codewords of brain connectivity dictionary.
| Itir Onal Ertugrul and Mete Ozay and Fatos T. Yarman Vural | null | 1610.05036 | null | null |
Efficient Metric Learning for the Analysis of Motion Data | cs.LG stat.ML | We investigate metric learning in the context of dynamic time warping (DTW),
the by far most popular dissimilarity measure used for the comparison and
analysis of motion capture data. While metric learning enables a
problem-adapted representation of data, the majority of methods have been
proposed for vectorial data only. In this contribution, we extend the popular
principle offered by the large margin nearest neighbors learner (LMNN) to DTW
by treating the resulting component-wise dissimilarity values as features. We
demonstrate that this principle greatly enhances the classification accuracy in
several benchmarks. Further, we show that recent auxiliary concepts such as
metric regularization can be transferred from the vectorial case to
component-wise DTW in a similar way. We illustrate that metric regularization
constitutes a crucial prerequisite for the interpretation of the resulting
relevance profiles.
| Babak Hosseini, and Barbara Hammer | 10.1109/DSAA.2015.7344819 | 1610.05083 | null | null |
Lazifying Conditional Gradient Algorithms | cs.DS cs.LG | Conditional gradient algorithms (also often called Frank-Wolfe algorithms)
are popular due to their simplicity, requiring only a linear optimization
oracle, and they have recently also gained significant traction for online
learning. While simple in principle, in many cases the actual implementation of
the linear optimization oracle is costly. We show a general method to lazify
various conditional gradient algorithms, which in actual computations leads to
several orders of magnitude of speedup in wall-clock time. This is achieved by
using a faster separation oracle instead of a linear optimization oracle,
relying only on few linear optimization oracle calls.
| G\'abor Braun, Sebastian Pokutta, Daniel Zink | null | 1610.05120 | null | null |
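
For orientation, vanilla conditional gradient (Frank-Wolfe) with the standard $2/(t+2)$ step size looks as follows; the "lazification" in the paper replaces the `lmo` call with a cheaper separation oracle that can reuse cached vertices (sketch and names are ours):

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, T=100):
    """Vanilla conditional gradient. lmo(g) returns argmin_{v in C} <g, v>."""
    x = np.array(x0, dtype=float)
    for t in range(T):
        v = lmo(grad(x))                 # the expensive linear optimization oracle
        gamma = 2.0 / (t + 2)
        x = (1 - gamma) * x + gamma * v  # move toward the oracle's vertex
    return x

# Example: minimize ||x - c||^2 over the probability simplex.
c = np.array([0.7, 0.2, 0.1])
lmo = lambda g: np.eye(len(g))[np.argmin(g)]   # simplex LMO returns a vertex
x = frank_wolfe(lambda x: 2 * (x - c), lmo, np.ones(3) / 3)
```
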
Risk-Aware Algorithms for Adversarial Contextual Bandits | cs.LG stat.ML | In this work we consider adversarial contextual bandits with risk
constraints. At each round, nature prepares a context, a cost for each arm, and
additionally a risk for each arm. The learner leverages the context to pull an
arm and then receives the corresponding cost and risk associated with the
pulled arm. In addition to minimizing the cumulative cost, the learner also
needs to satisfy long-term risk constraints -- the average of the cumulative
risk from all pulled arms should not be larger than a pre-defined threshold. To
address this problem, we first study the full information setting where in each
round the learner receives an adversarial convex loss and a convex constraint.
We develop a meta algorithm leveraging online mirror descent for the full
information setting and extend it to contextual bandit with risk constraints
setting using expert advice. Our algorithms can achieve near-optimal regret in
terms of minimizing the total cost, while successfully maintaining a sublinear
growth of cumulative risk constraint violation.
| Wen Sun, Debadeepta Dey, and Ashish Kapoor | null | 1610.05129 | null | null |
The Peaking Phenomenon in Semi-supervised Learning | stat.ML cs.LG | For the supervised least squares classifier, when the number of training
objects is smaller than the dimensionality of the data, adding more data to the
training set may first increase the error rate before decreasing it. This,
possibly counterintuitive, phenomenon is known as peaking. In this work, we
observe that a similar but more pronounced version of this phenomenon also
occurs in the semi-supervised setting, where instead of labeled objects,
unlabeled objects are added to the training set. We explain why the learning
curve has a more steep incline and a more gradual decline in this setting
through simulation studies and by applying an approximation of the learning
curve based on the work by Raudys & Duin.
| Jesse H. Krijthe and Marco Loog | null | 1610.05160 | null | null |
Decentralized Collaborative Learning of Personalized Models over
Networks | cs.LG cs.AI cs.DC cs.SY stat.ML | We consider a set of learning agents in a collaborative peer-to-peer network,
where each agent learns a personalized model according to its own learning
objective. The question addressed in this paper is: how can agents improve upon
their locally trained model by communicating with other agents that have
similar objectives? We introduce and analyze two asynchronous gossip algorithms
running in a fully decentralized manner. Our first approach, inspired from
label propagation, aims to smooth pre-trained local models over the network
while accounting for the confidence that each agent has in its initial model.
In our second approach, agents jointly learn and propagate their model by
making iterative updates based on both their local dataset and the behavior of
their neighbors. To optimize this challenging objective, our decentralized
algorithm is based on ADMM.
| Paul Vanhaesebrouck, Aur\'elien Bellet, Marc Tommasi | null | 1610.05202 | null | null |
BET on Independence | math.ST cs.LG stat.CO stat.ME stat.ML stat.TH | We study the problem of nonparametric dependence detection. Many existing
methods may suffer severe power loss due to non-uniform consistency, which we
illustrate with a paradox. To avoid such power loss, we approach the
nonparametric test of independence through the new framework of binary
expansion statistics (BEStat) and binary expansion testing (BET), which examine
dependence through a novel binary expansion filtration approximation of the
copula. Through a Hadamard transform, we find that the symmetry statistics in
the filtration are complete sufficient statistics for dependence. These
statistics are also uncorrelated under the null. By utilizing symmetry
statistics, the BET avoids the problem of non-uniform consistency and improves
upon a wide class of commonly used methods (a) by achieving the minimax rate in
sample size requirement for reliable power and (b) by providing clear
interpretations of global relationships upon rejection of independence. The
binary expansion approach also connects the symmetry statistics with the
current computing system to facilitate efficient bitwise implementation. We
illustrate the BET with a study of the distribution of stars in the night sky
and with an exploratory data analysis of the TCGA breast cancer data.
| Kai Zhang | 10.1080/01621459.2018.1537921 | 1610.05246 | null | null |
A probabilistic model for the numerical solution of initial value
problems | math.NA cs.LG stat.ML | Like many numerical methods, solvers for initial value problems (IVPs) on
ordinary differential equations estimate an analytically intractable quantity,
using the results of tractable computations as inputs. This structure is
closely connected to the notion of inference on latent variables in statistics.
We describe a class of algorithms that formulate the solution to an IVP as
inference on a latent path that is a draw from a Gaussian process probability
measure (or equivalently, the solution of a linear stochastic differential
equation). We then show that certain members of this class are connected
precisely to generalized linear methods for ODEs, a number of Runge--Kutta
methods, and Nordsieck methods. This probabilistic formulation of classic
methods is valuable in two ways: analytically, it highlights implicit prior
assumptions favoring certain approximate solutions to the IVP over others, and
gives a precise meaning to the old observation that these methods act like
filters. Practically, it endows the classic solvers with `docking points' for
notions of uncertainty and prior information about the initial value, the value
of the ODE itself, and the solution of the problem.
| Michael Schober, Simo S\"arkk\"a, Philipp Hennig | null | 1610.05261 | null | null |
Sequential Learning without Feedback | cs.LG | In many security and healthcare systems a sequence of features/sensors/tests
are used for detection and diagnosis. Each test outputs a prediction of the
latent state, and carries with it inherent costs. Our objective is to {\it
learn} strategies for selecting tests to optimize accuracy \& costs.
Unfortunately it is often impossible to acquire in-situ ground truth
annotations and we are left with the problem of unsupervised sensor selection
(USS). We pose USS as a version of the stochastic partial monitoring problem with
an {\it unusual} reward structure (even noisy annotations are unavailable).
Unsurprisingly no learner can achieve sublinear regret without further
assumptions. To this end we propose the notion of weak-dominance. This is a
condition on the joint probability distribution of test outputs and latent
state and says that whenever a test is accurate on an example, a later test in
the sequence is likely to be accurate as well. We empirically verify that weak
dominance holds on real datasets and prove that it is a maximal condition for
achieving sublinear regret. We reduce USS to a special case of multi-armed
bandit problem with side information and develop polynomial time algorithms
that achieve sublinear regret.
| Manjesh Hanawal and Csaba Szepesvari and Venkatesh Saligrama | null | 1610.05394 | null | null |
A Joint Indoor WLAN Localization and Outlier Detection Scheme Using
LASSO and Elastic-Net Optimization Techniques | cs.NI cs.LG | In this paper, we introduce two indoor Wireless Local Area Network (WLAN)
positioning methods using augmented sparse recovery algorithms. These schemes
render a sparse user's position vector, and in parallel, minimize the distance
between the online measurement and radio map. The overall localization scheme
for both methods consists of three steps: 1) coarse localization, obtained from
comparing the online measurements with clustered radio map. A novel graph-based
method is proposed to cluster the offline fingerprints. In the online phase, a
Region Of Interest (ROI) is selected within which we search for the user's
location; 2) Access Point (AP) selection; and 3) fine localization through the
novel sparse recovery algorithms. Since the online measurements are subject to
inordinate measurement readings, called outliers, the sparse recovery methods
are modified in order to jointly estimate the outliers and user's position
vector. The outlier detection procedure identifies the APs whose readings are
either not available or erroneous. The proposed localization methods have been
tested with Received Signal Strength (RSS) measurements in a typical office
environment and the results show that they can localize the user with
significantly high accuracy and resolution which is superior to the results
from competing WLAN fingerprinting localization methods.
| Ali Khalajmehrabadi, Nikolaos Gatsis, Daniel Pack and David Akopian | 10.1109/TMC.2016.2616465 | 1610.05419 | null | null |
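
The core sparse-recovery step can be pictured in a few lines: treat the radio map as a dictionary whose columns are reference-point fingerprints, and recover a sparse indicator over reference points from the online RSS reading. This is a stripped-down sketch with made-up data and parameters; the papers add clustering, AP selection, and joint outlier estimation on top.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
radio_map = rng.random((10, 50))     # RSS of 10 APs at 50 reference points
true_rp = 17
online_rss = radio_map[:, true_rp] + 0.05 * rng.standard_normal(10)

lasso = Lasso(alpha=0.01, positive=True)   # sparse, nonnegative weights over RPs
lasso.fit(radio_map, online_rss)
print("estimated RP:", np.argmax(lasso.coef_))
```
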
Structured Group Sparsity: A Novel Indoor WLAN Localization, Outlier
Detection, and Radio Map Interpolation Scheme | cs.NI cs.LG | This paper introduces novel schemes for indoor localization, outlier
detection, and radio map interpolation using Wireless Local Area Networks
(WLANs). The localization method consists of a novel multicomponent
optimization technique that minimizes the squared $\ell_{2}$-norm of the
residuals between the radio map and the online Received Signal Strength (RSS)
measurements, the $\ell_{1}$-norm of the user's location vector, and weighted
$\ell_{2}$-norms of layered groups of Reference Points (RPs). RPs are grouped
using a new criterion based on the similarity between the so-called Access
Point (AP) coverage vectors. In addition, since AP readings are prone to
containing inordinate readings, called outliers, an augmented optimization
problem is proposed to detect the outliers and localize the user with cleaned
online measurements. Moreover, a novel scheme to record fingerprints from a
smaller number of RPs and estimate the radio map at RPs without recorded
fingerprints is developed using sparse recovery techniques. All localization
schemes are tested on RSS fingerprints collected from a real environment. The
overall scheme has comparable complexity with competing approaches, while it
performs with high accuracy under a small number of APs and finer granularity
of RPs.
| Ali Khalajmehrabadi, Nikolaos Gatsis, and David Akopian | null | 1610.05421 | null | null |
Modern WLAN Fingerprinting Indoor Positioning Methods and Deployment
Challenges | cs.NI cs.LG | Wireless Local Area Network (WLAN) has become a promising choice for indoor
positioning as the only existing and established infrastructure available to
localize mobile and stationary users indoors. However, since WLAN was initially
designed for wireless networking and not positioning, the localization task
based on WLAN signals has several challenges. Amongst the WLAN positioning
methods, WLAN fingerprinting localization has recently achieved great attention
due to its promising results. WLAN fingerprinting faces several challenges and
hence, in this paper, our goal is to overview these challenges and the
state-of-the-art solutions. This paper consists of three main parts: 1)
Conventional localization schemes; 2) State-of-the-art approaches; 3) Practical
deployment challenges. Since all the methods proposed in the WLAN literature
have been developed and tested in different settings, the reported results are
not directly comparable. We therefore compare some of the main localization
schemes in a
single real environment and assess their localization accuracy, positioning
error statistics, and complexity. Our results provide an illustrative
evaluation of WLAN localization systems and point to future improvement
opportunities.
| Ali Khalajmehrabadi, Nikolaos Gatsis, and David Akopian | null | 1610.05424 | null | null |
Improving Covariance-Regularized Discriminant Analysis for EHR-based
Predictive Analytics of Diseases | cs.LG | Linear Discriminant Analysis (LDA) is a well-known technique for feature
extraction and dimension reduction. The performance of classical LDA, however,
significantly degrades on High Dimension Low Sample Size (HDLSS) data due to
the ill-posed inverse problem. Existing approaches for HDLSS data
classification typically assume that the data in question follow a Gaussian
distribution and address the HDLSS classification problem with regularization.
However, these assumptions are too strict to hold in many emerging real-life
applications, such as enabling personalized predictive analysis using
Electronic Health Records (EHRs) data collected from an extremely limited
number of patients who have been diagnosed with or without the target disease
for prediction. In this paper, we revisit the problem of predictive analysis
of disease using personal EHR data and an LDA classifier. To fill the gap, we
first study an analytical model that characterizes the accuracy of
LDA for classifying data with arbitrary distribution. The model gives a
theoretical upper bound of LDA error rate that is controlled by two factors:
(1) the statistical convergence rate of (inverse) covariance matrix estimators
and (2) the divergence of the training/testing datasets to fitted
distributions. To this end, we can lower the error rate by balancing the two
factors for better classification performance. We further propose a novel LDA
classifier, De-Sparse, which leverages De-sparsified Graphical Lasso to
improve the estimation of LDA and outperforms state-of-the-art LDA
approaches developed for HDLSS data. Such advances and effectiveness are
further demonstrated by both theoretical analysis and extensive experiments on
EHR datasets.
| Sijia Yang, Haoyi Xiong, Kaibo Xu, Licheng Wang, Jiang Bian, Zeyi Sun | null | 1610.05446 | null | null |
An Interactive Machine Learning Framework | cs.HC cs.LG | Machine learning (ML) is believed to be an effective and efficient tool to
build reliable prediction models or extract useful structure from an avalanche
of data. However, ML is also criticized by its difficulty in interpretation and
complicated parameter tuning. In contrast, visualization is able to well
organize and visually encode the entangled information in data and guide
audiences to simpler perceptual inferences and analytic thinking. But large
scale and high dimensional data will usually lead to the failure of many
visualization methods. In this paper, we close a loop between ML and
visualization via interaction between ML algorithm and users, so machine
intelligence and human intelligence can cooperate and improve each other in a
mutually rewarding way. In particular, we propose "transparent boosting tree
(TBT)", which visualizes both the model structure and prediction statistics of
each step in the learning process of gradient boosting tree to user, and
incorporates the user's feedback operations on trees into the learning process. In TBT,
ML is in charge of updating weights in learning model and filtering information
shown to user from the big data, while visualization is in charge of providing
a visual understanding of ML model to facilitate user exploration. It combines
the advantages of both ML in big data statistics and human in decision making
based on domain knowledge. We develop a user friendly interface for this novel
learning method, and apply it to two datasets collected from real applications.
Our study shows that making ML transparent by using interactive visualization
can significantly improve the exploration of ML algorithms, give rise to novel
insights into ML models, and integrate both machine and human intelligence.
| Teng Lee, James Johnson, Steve Cheng | null | 1610.05463 | null | null |
Federated Learning: Strategies for Improving Communication Efficiency | cs.LG | Federated Learning is a machine learning setting where the goal is to train a
high-quality centralized model while training data remains distributed over a
large number of clients each with unreliable and relatively slow network
connections. We consider learning algorithms for this setting where on each
round, each client independently computes an update to the current model based
on its local data, and communicates this update to a central server, where the
client-side updates are aggregated to compute a new global model. The typical
clients in this setting are mobile phones, and communication efficiency is of
the utmost importance.
In this paper, we propose two ways to reduce the uplink communication costs:
structured updates, where we directly learn an update from a restricted space
parametrized using a smaller number of variables, e.g. either low-rank or a
random mask; and sketched updates, where we learn a full model update and then
compress it using a combination of quantization, random rotations, and
subsampling before sending it to the server. Experiments on both convolutional
and recurrent networks show that the proposed methods can reduce the
communication cost by two orders of magnitude.
| Jakub Kone\v{c}n\'y, H. Brendan McMahan, Felix X. Yu, Peter
Richt\'arik, Ananda Theertha Suresh, Dave Bacon | null | 1610.05492 | null | null |
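
A toy version of a "sketched update" (subsampling plus uniform quantization; the paper additionally applies random rotations before quantizing, and its quantizer is designed to be unbiased) might look like this, with all parameter choices ours:

```python
import numpy as np

def sketch_update(delta, keep_frac=0.1, num_bits=4, seed=0):
    """Subsample a fraction of entries, then quantize them to num_bits levels."""
    rng = np.random.default_rng(seed)
    flat = delta.ravel()
    idx = rng.choice(flat.size, size=max(1, int(keep_frac * flat.size)),
                     replace=False)
    vals = flat[idx]
    lo, hi = vals.min(), vals.max()
    levels = 2 ** num_bits - 1
    q = np.round((vals - lo) / (hi - lo + 1e-12) * levels)   # integer codes
    sketched = np.zeros_like(flat)
    sketched[idx] = lo + q / levels * (hi - lo)              # dequantized values
    return sketched.reshape(delta.shape)
```
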
Analysis and Implementation of an Asynchronous Optimization Algorithm
for the Parameter Server | math.OC cs.DC cs.LG stat.ML | This paper presents an asynchronous incremental aggregated gradient algorithm
and its implementation in a parameter server framework for solving regularized
optimization problems. The algorithm can handle both general convex (possibly
non-smooth) regularizers and general convex constraints. When the empirical
data loss is strongly convex, we establish linear convergence rate, give
explicit expressions for step-size choices that guarantee convergence to the
optimum, and bound the associated convergence factors. The expressions have an
explicit dependence on the degree of asynchrony and recover classical results
under synchronous operation. Simulations and implementations on commercial
compute clouds validate our findings.
| Arda Aytekin, Hamid Reza Feyzmahdavian, Mikael Johansson | null | 1610.05507 | null | null |
Online Contrastive Divergence with Generative Replay: Experience Replay
without Storing Data | cs.LG cs.NE | Conceived in the early 1990s, Experience Replay (ER) has been shown to be a
successful mechanism to allow online learning algorithms to reuse past
experiences. Traditionally, ER can be applied to all machine learning paradigms
(i.e., unsupervised, supervised, and reinforcement learning). Recently, ER has
contributed to improving the performance of deep reinforcement learning. Yet,
its application to many practical settings is still limited by the memory
requirements of ER, necessary to explicitly store previous observations. To
remedy this issue, we explore a novel approach, Online Contrastive Divergence
with Generative Replay (OCD_GR), which uses the generative capability of
Restricted Boltzmann Machines (RBMs) instead of recorded past experiences. The
RBM is trained online, and does not require the system to store any of the
observed data points. We compare OCD_GR to ER on 9 real-world datasets,
considering a worst-case scenario (data points arriving in sorted order) as
well as a more realistic one (sequential random-order data points). Our results
show that in 64.28% of the cases OCD_GR outperforms ER and in the remaining
35.72% it has an almost equal performance, while having a considerably reduced
space complexity (i.e., memory usage) at a comparable time complexity.
| Decebal Constantin Mocanu and Maria Torres Vega and Eric Eaton and
Peter Stone and Antonio Liotta | null | 1610.05555 | null | null |
Stylometric Analysis of Early Modern Period English Plays | cs.CL cs.LG | Function word adjacency networks (WANs) are used to study the authorship of
plays from the Early Modern English period. In these networks, nodes are
function words and directed edges between two nodes represent the relative
frequency of directed co-appearance of the two words. For every analyzed play,
a WAN is constructed and these are aggregated to generate author profile
networks. We first study the similarity of writing styles between Early English
playwrights by comparing the profile WANs. The accuracy of using WANs for
authorship attribution is then demonstrated by attributing known plays among
six popular playwrights. Moreover, the WAN method is shown to outperform other
frequency-based methods on attributing Early English plays. In addition, WANs
are shown to be reliable classifiers even when attributing collaborative plays.
For several plays of disputed co-authorship, a deeper analysis is performed by
attributing every act and scene separately, in which we both corroborate
existing breakdowns and provide evidence of new assignments.
| Mark Eisen, Santiago Segarra, Gabriel Egan, Alejandro Ribeiro | null | 1610.05670 | null | null |
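A toy Python construction of a function word adjacency network. The function-word list, co-occurrence window, and per-word normalization below are placeholder assumptions; the paper's exact word list and weighting may differ:

```python
from collections import defaultdict

FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "with"]  # toy list

def build_wan(tokens, window=5):
    """Directed WAN: counts[u][v] ~ how often function word v follows u
    within `window` tokens, normalized per source word into relative
    frequencies (each node's outgoing edges sum to one)."""
    vocab = set(FUNCTION_WORDS)
    counts = defaultdict(lambda: defaultdict(float))
    for i, u in enumerate(tokens):
        if u not in vocab:
            continue
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            if tokens[j] in vocab:
                counts[u][tokens[j]] += 1.0
    wan = {}
    for u, row in counts.items():
        total = sum(row.values())
        wan[u] = {v: w / total for v, w in row.items()}
    return wan

text = "the quality of mercy is not strained it droppeth as the gentle rain"
print(build_wan(text.split()))
```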
Markov Chain Truncation for Doubly-Intractable Inference | stat.ML cs.LG | Computing partition functions, the normalizing constants of probability
distributions, is often hard. Variants of importance sampling give unbiased
estimates of a normalizer Z; however, unbiased estimates of the reciprocal 1/Z
are harder to obtain. Unbiased estimates of 1/Z allow Markov chain Monte Carlo
sampling of "doubly-intractable" distributions, such as the parameter posterior
for Markov Random Fields or Exponential Random Graphs. We demonstrate how to
construct unbiased estimates for 1/Z given access to black-box importance
sampling estimators for Z. We adapt recent work on random series truncation and
Markov chain coupling, producing estimators with lower variance and a higher
percentage of positive estimates than before. Our debiasing algorithms are
simple to implement, and have some theoretical and empirical advantages over
existing methods.
| Colin Wei and Iain Murray | null | 1610.05672 | null | null |
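The general recipe behind debiased reciprocal estimators of this kind can be sketched with the classic geometric-series and random-truncation ("Russian roulette") argument. The sketch below is that textbook baseline, not the paper's lower-variance coupled-chain construction; the constants c and q are tuning assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def debiased_reciprocal(z_hat, c, q=0.7):
    """Unbiased-in-expectation estimate of 1/Z from a black-box unbiased
    estimator z_hat() of Z, via the geometric series
        1/Z = (1/c) * sum_k (1 - Z/c)^k,
    truncated at a random length with survival probability q per term and
    importance-weighted accordingly.  Requires Z/c in (0, 2) for the
    series to converge; individual estimates may still be negative."""
    total, weight = 1.0, 1.0
    while rng.random() < q:                 # continue the series with prob. q
        weight *= (1.0 - z_hat() / c) / q   # fresh unbiased factor per term
        total += weight
    return total / c

# Toy target: Z = 4, estimated by noisy unbiased draws.
z_hat = lambda: 4.0 + rng.normal(0, 0.5)
ests = [debiased_reciprocal(z_hat, c=6.0) for _ in range(20000)]
print(np.mean(ests), "vs true", 1 / 4.0)
```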
Low-rank and Sparse Soft Targets to Learn Better DNN Acoustic Models | cs.CL cs.AI cs.HC cs.LG | Conventional deep neural networks (DNN) for speech acoustic modeling rely on
Gaussian mixture models (GMM) and hidden Markov model (HMM) to obtain binary
class labels as the targets for DNN training. Subword classes in speech
recognition systems correspond to context-dependent tied states or senones. The
present work addresses some limitations of GMM-HMM senone alignments for DNN
training. We hypothesize that the senone probabilities obtained from a DNN
trained with binary labels can provide more accurate targets to learn better
acoustic models. However, DNN outputs bear inaccuracies which are exhibited as
high dimensional unstructured noise, whereas the informative components are
structured and low-dimensional. We exploit principal component analysis (PCA)
and sparse coding to characterize the senone subspaces. Enhanced probabilities
obtained from low-rank and sparse reconstructions are used as soft-targets for
DNN acoustic modeling, that also enables training with untranscribed data.
Experiments conducted on the AMI corpus show a 4.6% relative reduction in word error
rate.
| Pranay Dighe, Afsaneh Asaei and Herve Bourlard | 10.1109/ICASSP.2017.7953161 | 1610.05688 | null | null |
Feasibility Based Large Margin Nearest Neighbor Metric Learning | cs.DS cs.LG | Large margin nearest neighbor (LMNN) is a metric learner which optimizes the
performance of the popular $k$NN classifier. However, its resulting metric
relies on pre-selected target neighbors. In this paper, we address the
feasibility of LMNN's optimization constraints regarding these target points,
and introduce a mathematical measure to evaluate the size of the feasible
region of the optimization problem. We enhance the optimization framework of
LMNN by a weighting scheme which prefers data triplets which yield a larger
feasible region. This increases the chances to obtain a good metric as the
solution of LMNN's problem. We evaluate the performance of the resulting
feasibility-based LMNN algorithm using synthetic and real datasets. The
empirical results show an improved accuracy for different types of datasets in
comparison to regular LMNN.
| Babak Hosseini, Barbara Hammer | null | 1610.05710 | null | null |
Fast L1-NMF for Multiple Parametric Model Estimation | cs.CV cs.LG | In this work we introduce a comprehensive algorithmic pipeline for multiple
parametric model estimation. The proposed approach analyzes the information
produced by a random sampling algorithm (e.g., RANSAC) from a machine
learning/optimization perspective, using a \textit{parameterless} biclustering
algorithm based on L1 nonnegative matrix factorization (L1-NMF). The proposed
framework exploits consistent patterns that naturally arise during the RANSAC
execution, while explicitly avoiding spurious inconsistencies. Contrarily to
the main trends in the literature, the proposed technique does not impose
non-intersecting parametric models. A new accelerated algorithm to compute
L1-NMFs makes it possible to handle medium-sized problems faster while also extending the
usability of the algorithm to much larger datasets. This accelerated algorithm
has applications in any other context where an L1-NMF is needed, beyond the
biclustering approach to parameter estimation here addressed. We accompany the
algorithmic presentation with theoretical foundations and numerous and diverse
examples.
| Mariano Tepper and Guillermo Sapiro | null | 1610.05712 | null | null |
Deep Amortized Inference for Probabilistic Programs | cs.AI cs.LG stat.ML | Probabilistic programming languages (PPLs) are a powerful modeling tool, able
to represent any computable probability distribution. Unfortunately,
probabilistic program inference is often intractable, and existing PPLs mostly
rely on expensive, approximate sampling-based methods. To alleviate this
problem, one could try to learn from past inferences, so that future inferences
run faster. This strategy is known as amortized inference; it has recently been
applied to Bayesian networks and deep generative models. This paper proposes a
system for amortized inference in PPLs. In our system, amortization comes in
the form of a parameterized guide program. Guide programs have similar
structure to the original program, but can have richer data flow, including
neural network components. These networks can be optimized so that the guide
approximately samples from the posterior distribution defined by the original
program. We present a flexible interface for defining guide programs and a
stochastic gradient-based scheme for optimizing guide parameters, as well as
some preliminary results on automatically deriving guide programs. We explore
in detail the common machine learning pattern in which a 'local' model is
specified by 'global' random values and used to generate independent observed
data points; this gives rise to amortized local inference supporting global
model learning.
| Daniel Ritchie, Paul Horsfall, Noah D. Goodman | null | 1610.05735 | null | null |
Semi-supervised Knowledge Transfer for Deep Learning from Private
Training Data | stat.ML cs.CR cs.LG | Some machine learning applications involve training data that is sensitive,
such as the medical histories of patients in a clinical trial. A model may
inadvertently and implicitly store some of its training data; careful analysis
of the model may therefore reveal sensitive information.
To address this problem, we demonstrate a generally applicable approach to
providing strong privacy guarantees for training data: Private Aggregation of
Teacher Ensembles (PATE). The approach combines, in a black-box fashion,
multiple models trained with disjoint datasets, such as records from different
subsets of users. Because they rely directly on sensitive data, these models
are not published, but instead used as "teachers" for a "student" model. The
student learns to predict an output chosen by noisy voting among all of the
teachers, and cannot directly access an individual teacher or the underlying
data or parameters. The student's privacy properties can be understood both
intuitively (since no single teacher and thus no single dataset dictates the
student's training) and formally, in terms of differential privacy. These
properties hold even if an adversary can not only query the student but also
inspect its internal workings.
Compared with previous work, the approach imposes only weak assumptions on
how teachers are trained: it applies to any model, including non-convex models
like DNNs. We achieve state-of-the-art privacy/utility trade-offs on MNIST and
SVHN thanks to an improved privacy analysis and semi-supervised learning.
| Nicolas Papernot, Mart\'in Abadi, \'Ulfar Erlingsson, Ian Goodfellow,
Kunal Talwar | null | 1610.05755 | null | null |
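The noisy-voting aggregation step described above is compact enough to sketch directly. Laplace noise on the per-class vote counts is the mechanism the abstract describes; the noise scale gamma, teacher count, and toy predictions below are illustrative, and teacher training, privacy accounting, and the semi-supervised student are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def noisy_aggregate(teacher_preds, n_classes, gamma=0.05):
    """Label a query by noisy plurality vote over the teachers' predictions.
    Laplace noise of scale 1/gamma on each vote count provides the
    differential-privacy protection; gamma trades privacy for accuracy."""
    counts = np.bincount(teacher_preds, minlength=n_classes).astype(float)
    counts += rng.laplace(scale=1.0 / gamma, size=n_classes)
    return int(np.argmax(counts))

# 250 teachers voting on one student query (toy predictions).
teacher_preds = rng.integers(0, 10, size=250)
teacher_preds[:150] = 7                      # strong consensus on class 7
print(noisy_aggregate(teacher_preds, n_classes=10))
```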
RedQueen: An Online Algorithm for Smart Broadcasting in Social Networks | stat.ML cs.DS cs.LG cs.SI | Users in social networks whose posts stay at the top of their followers'
feeds the longest time are more likely to be noticed. Can we design an online
algorithm to help them decide when to post to stay at the top? In this paper,
we address this question as a novel optimal control problem for jump stochastic
differential equations. For a wide variety of feed dynamics, we show that the
optimal broadcasting intensity for any user is surprisingly simple -- it is
given by the position of her most recent post on each of her follower's feeds.
As a consequence, we are able to develop a simple and highly efficient online
algorithm, RedQueen, to sample the optimal times for the user to post.
Experiments on both synthetic and real data gathered from Twitter show that our
algorithm is able to consistently make a user's posts more visible over time,
is robust to volume changes on her followers' feeds, and significantly
outperforms the state of the art.
| Ali Zarezade and Utkarsh Upadhyay and Hamid Rabiee and Manuel Gomez
Rodriguez | null | 1610.05773 | null | null |
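A discrete-time Python sketch of the feedback policy described above: post with intensity proportional to the current position of one's latest post. The proportionality constant and the single-follower feed model are simplifying assumptions; the paper derives the exact optimal constant from the cost trade-off:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_redqueen(T=100.0, dt=0.01, scale=0.5, rival_rate=1.0):
    """Post with intensity proportional to r(t), the current position of
    our latest post on a single follower's feed (1 = top).  Rival posts
    arrive as a Poisson process and push our post down one slot."""
    r, posts = 1, []
    for step in range(int(T / dt)):
        t = step * dt
        if rng.random() < rival_rate * dt:   # a rival post pushes ours down
            r += 1
        if rng.random() < scale * r * dt:    # our intensity ~ position
            posts.append(t)
            r = 1                            # our new post is now on top
    return posts

times = simulate_redqueen()
print(len(times), "posts; first few:", np.round(times[:5], 2))
```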
Modeling the Dynamics of Online Learning Activity | stat.ML cs.LG cs.SI | People are increasingly relying on the Web and social media to find solutions
to their problems in a wide range of domains. In this online setting, closely
related problems often lead to the same characteristic learning pattern, in
which people sharing these problems visit related pieces of information,
perform almost identical queries or, more generally, take a series of similar
actions. In this paper, we introduce a novel modeling framework for clustering
continuous-time grouped streaming data, the hierarchical Dirichlet Hawkes
process (HDHP), which allows us to automatically uncover a wide variety of
learning patterns from detailed traces of learning activity. Our model allows
for efficient inference, scaling to millions of actions taken by thousands of
users. Experiments on real data gathered from Stack Overflow reveal that our
framework can recover meaningful learning patterns in terms of both content and
temporal dynamics, as well as accurately track users' interests and goals over
time.
| Charalampos Mavroforakis and Isabel Valera and Manuel Gomez Rodriguez | null | 1610.05775 | null | null |
Big Batch SGD: Automated Inference using Adaptive Batch Sizes | cs.LG math.NA math.OC stat.ML | Classical stochastic gradient methods for optimization rely on noisy gradient
approximations that become progressively less accurate as iterates approach a
solution. The large noise and small signal in the resulting gradients make it
difficult to use them for adaptive stepsize selection and automatic stopping.
We propose alternative "big batch" SGD schemes that adaptively grow the batch
size over time to maintain a nearly constant signal-to-noise ratio in the
gradient approximation. The resulting methods have similar convergence rates to
classical SGD, and do not require convexity of the objective. The high-fidelity
gradients enable automated learning rate selection and do not require stepsize
decay. Big batch methods are thus easily automated and can run with little or
no oversight.
| Soham De, Abhay Yadav, David Jacobs and Tom Goldstein | null | 1610.05792 | null | null |
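A minimal sketch of the adaptive batch-growth idea: double the batch until the sampled gradient's noise is a fixed fraction of its signal, then take a step. The specific test statistic, threshold theta, and fresh re-sampling scheme below are illustrative, not the paper's exact criteria:

```python
import numpy as np

rng = np.random.default_rng(5)

def big_batch_sgd(grad_fn, x0, n, theta=0.5, lr=0.1, b0=8, iters=50):
    """Grow the batch until the variance term is a `theta`-fraction of the
    squared mean gradient, i.e. until the gradient's signal dominates
    its noise, keeping the signal-to-noise ratio roughly constant."""
    x, b = x0.copy(), b0
    for _ in range(iters):
        while True:
            idx = rng.choice(n, size=min(b, n), replace=False)
            grads = np.stack([grad_fn(i, x) for i in idx])
            g = grads.mean(axis=0)
            var = grads.var(axis=0).sum() / len(idx)  # variance of the mean
            if var <= theta * np.dot(g, g) or b >= n:
                break
            b *= 2                    # noise still too large: bigger batch
        x = x - lr * g
    return x, b

A, y = rng.normal(size=(400, 5)), rng.normal(size=400)
g = lambda i, x: (A[i] @ x - y[i]) * A[i]
x, final_b = big_batch_sgd(g, np.zeros(5), n=400)
print("final batch size:", final_b)
```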
Decision Tree Classification on Outsourced Data | cs.LG cs.CR cs.DB | This paper proposes a client-server decision tree learning method for
outsourced private data. The privacy model is anatomization/fragmentation: the
server sees data values, but the link between sensitive and identifying
information is encrypted with a key known only to clients. Clients have limited
processing and storage capability. Both sensitive and identifying information
are thus stored on the server. The approach presented also retains most
processing at the server, and client-side processing is amortized over
predictions made by the clients. Experiments on various datasets show that the
method produces decision trees approaching the accuracy of a non-private
decision tree, while substantially reducing the client's computing resource
requirements.
| Koray Mancuhan and Chris Clifton | null | 1610.05796 | null | null |
Small-footprint Highway Deep Neural Networks for Speech Recognition | cs.CL cs.LG | State-of-the-art speech recognition systems typically employ neural network
acoustic models. However, compared to Gaussian mixture models, deep neural
network (DNN) based acoustic models often have many more model parameters,
making it challenging for them to be deployed on resource-constrained
platforms, such as mobile devices. In this paper, we study the application of
the recently proposed highway deep neural network (HDNN) for training
small-footprint acoustic models. HDNNs are depth-gated feedforward neural
networks, which include two types of gate functions to facilitate the
information flow through different layers. Our study demonstrates that HDNNs
are more compact than regular DNNs for acoustic modeling, i.e., they can
achieve comparable recognition accuracy with many fewer model parameters.
Furthermore, HDNNs are more controllable than DNNs: the gate functions of an
HDNN can control the behavior of the whole network using a very small number of
model parameters. Finally, we show that HDNNs are more adaptable than DNNs. For
example, simply updating the gate functions using adaptation data can result in
considerable gains in accuracy. We demonstrate these aspects by experiments
using the publicly available AMI corpus, which has around 80 hours of training
data.
| Liang Lu and Steve Renals | 10.1109/TASLP.2017.2698723 | 1610.05812 | null | null |
Statistical Learning Theory Approach for Data Classification with
l-diversity | cs.LG cs.CR cs.DB | Corporations are retaining ever-larger corpuses of personal data; the
frequency of breaches and the corresponding privacy impact have been rising
accordingly. One way to mitigate this risk is through use of anonymized data,
limiting the exposure of individual data to only where it is absolutely needed.
This would seem particularly appropriate for data mining, where the goal is
generalizable knowledge rather than data on specific individuals. In practice,
corporate data miners often insist on original data, for fear that they might
"miss something" with anonymized or differentially private approaches. This
paper provides a theoretical justification for the use of anonymized data.
Specifically, we show that a support vector classifier trained on anatomized
data satisfying l-diversity should be expected to do as well as on the original
data. Anatomy preserves all data values, but introduces uncertainty in the
mapping between identifying and sensitive values, thus satisfying l-diversity.
The theoretical effectiveness of the proposed approach is validated using
several publicly available datasets, showing that we outperform the state of
the art for support vector classification using training data protected by
k-anonymity, and are comparable to learning on the original data.
| Koray Mancuhan and Chris Clifton | null | 1610.05815 | null | null |
Membership Inference Attacks against Machine Learning Models | cs.CR cs.LG stat.ML | We quantitatively investigate how machine learning models leak information
about the individual data records on which they were trained. We focus on the
basic membership inference attack: given a data record and black-box access to
a model, determine if the record was in the model's training dataset. To
perform membership inference against a target model, we make adversarial use of
machine learning and train our own inference model to recognize differences in
the target model's predictions on the inputs that it trained on versus the
inputs that it did not train on.
We empirically evaluate our inference techniques on classification models
trained by commercial "machine learning as a service" providers such as Google
and Amazon. Using realistic datasets and classification tasks, including a
hospital discharge dataset whose membership is sensitive from the privacy
perspective, we show that these models can be vulnerable to membership
inference attacks. We then investigate the factors that influence this leakage
and evaluate mitigation strategies.
| Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov | null | 1610.05820 | null | null |
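A toy, single-shadow-model version of the attack described above, using scikit-learn. The paper trains many shadow models and per-class attack models on sorted confidence vectors; the synthetic data, model sizes, and single binary attack classifier here are simplifying assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)

# Toy data: a simple two-class problem.
def make_data(n):
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
    return X, y

# Shadow model: trained on data we control, so membership is known.
Xs_in, ys_in = make_data(200)
Xs_out, _ = make_data(200)
shadow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800).fit(Xs_in, ys_in)

# Attack features: the shadow model's confidence vector on each record.
feat = np.vstack([shadow.predict_proba(Xs_in), shadow.predict_proba(Xs_out)])
member = np.concatenate([np.ones(200), np.zeros(200)])
attack = LogisticRegression().fit(feat, member)

# Apply the attack to a (here, identically built) black-box "target" model.
Xt_in, yt_in = make_data(200)
target = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800).fit(Xt_in, yt_in)
Xt_out, _ = make_data(200)
in_flags = attack.predict(target.predict_proba(Xt_in))
out_flags = attack.predict(target.predict_proba(Xt_out))
print("flagged as members: in =", in_flags.mean(), " out =", out_flags.mean())
```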
CuMF_SGD: Fast and Scalable Matrix Factorization | cs.LG cs.NA | Matrix factorization (MF) has been widely used in e.g., recommender systems,
topic modeling and word embedding. Stochastic gradient descent (SGD) is popular
in solving MF problems because it can deal with large data sets and is easy to
do incremental learning. We observed that SGD for MF is memory bound.
Meanwhile, single-node CPU systems with caching perform well only for small
data sets; distributed systems have higher aggregated memory bandwidth but
suffer from relatively slow network connections. This observation inspires us to
accelerate MF by utilizing GPUs' high memory bandwidth and fast intra-node
connection. We present cuMF_SGD, a CUDA-based SGD solution for large-scale MF
problems. On a single GPU, we design two workload scheduling schemes, i.e.,
batch-Hogwild! and wavefront-update, that fully exploit the massive number of
cores. In particular, batch-Hogwild!, a vectorized version of Hogwild!, overcomes
the issue of memory discontinuity. We also develop highly-optimized kernels for
SGD update, leveraging cache, warp-shuffle instructions and half-precision
floats. We also design a partition scheme to utilize multiple GPUs while
addressing the well-known convergence issue when parallelizing SGD. On three
data sets with only one Maxwell or Pascal GPU, cuMF_SGD runs 3.1X-28.2X as fast
as state-of-the-art CPU solutions on 1-64 CPU nodes. Evaluations also
show that cuMF_SGD scales well on multiple GPUs in large data sets.
| Xiaolong Xie, Wei Tan, Liana L. Fong, Yun Liang | null | 1610.05838 | null | null |
Learning Determinantal Point Processes in Sublinear Time | stat.ML cs.LG | We propose a new class of determinantal point processes (DPPs) which can be
manipulated for inference and parameter learning in potentially sublinear time
in the number of items. This class, based on a specific low-rank factorization
of the marginal kernel, is particularly suited to a subclass of continuous DPPs
and DPPs defined on exponentially many items. We apply this new class to
modelling text documents as sampling a DPP of sentences, and propose a
conditional maximum likelihood formulation to model topic proportions, which is
made possible with no approximation for our class of DPPs. We present an
application to document summarization with a DPP on $2^{500}$ items.
| Christophe Dupuy (SIERRA), Francis Bach (SIERRA, LIENS) | null | 1610.05925 | null | null |
A multi-task learning model for malware classification with useful file
access pattern from API call sequence | cs.SD cs.CR cs.LG | Based on API call sequences, semantic-aware and machine learning (ML) based
malware classifiers can be built for malware detection or classification.
Previous works concentrate on crafting and extracting various features from
malware binaries, disassembled binaries or API calls via static or dynamic
analysis and resorting to ML to build classifiers. However, they tend to
involve too much feature engineering and fail to provide interpretability. We
solve these two problems with the recent advances in deep learning: 1)
RNN-based autoencoders (RNN-AEs) can automatically learn low-dimensional
representation of a malware from its raw API call sequence. 2) Multiple
decoders can be trained under different supervisions to give more information,
other than the class or family label of a malware. Inspired by the works of
document classification and automatic sentence summarization, each API call
sequence can be regarded as a sentence. In this paper, we make the first
attempt to build a multi-task malware learning model based on API call
sequences. The model consists of two decoders, one for malware classification
and one for \emph{file access pattern} (FAP) generation given the API call
sequence of a malware. We base our model on the general seq2seq framework.
Experiments show that our model can give competitive classification results as
well as insightful FAP information.
| Xin Wang and Siu Ming Yiu | null | 1610.05945 | null | null |
Particle Swarm Optimization for Generating Interpretable Fuzzy
Reinforcement Learning Policies | cs.NE cs.AI cs.LG cs.SY | Fuzzy controllers are efficient and interpretable system controllers for
continuous state and action spaces. To date, such controllers have been
constructed manually or trained automatically either using expert-generated
problem-specific cost functions or incorporating detailed knowledge about the
optimal control strategy. Neither requirement for automatic training processes
is met in most real-world reinforcement learning (RL) problems. In such
applications, online learning is often prohibited for safety reasons because
online learning requires exploration of the problem's dynamics during policy
training. We introduce a fuzzy particle swarm reinforcement learning (FPSRL)
approach that can construct fuzzy RL policies solely by training parameters on
world models that simulate real system dynamics. These world models are created
by employing an autonomous machine learning technique that uses previously
generated transition samples of a real system. To the best of our knowledge,
this approach is the first to relate self-organizing fuzzy controllers to
model-based batch RL. Therefore, FPSRL is intended to solve problems in domains
where online learning is prohibited, system dynamics are relatively easy to
model from previously generated default policy transition samples, and it is
expected that a relatively easily interpretable control policy exists. The
efficiency of the proposed approach with problems from such domains is
demonstrated using three standard RL benchmarks, i.e., mountain car, cart-pole
balancing, and cart-pole swing-up. Our experimental results demonstrate
high-performing, interpretable fuzzy policies.
| Daniel Hein, Alexander Hentschel, Thomas Runkler, Steffen Udluft | 10.1016/j.engappai.2017.07.005 | 1610.05984 | null | null |
K-Nearest Neighbor Classification Using Anatomized Data | cs.LG cs.CR cs.DB | This paper analyzes k nearest neighbor classification with training data
anonymized using anatomy. Anatomy preserves all data values, but introduces
uncertainty in the mapping between identifying and sensitive values. We first
study the theoretical effect of the anatomized training data on the k nearest
neighbor error rate bounds, nearest neighbor convergence rate, and Bayesian
error. We then validate the derived bounds empirically. We show that 1)
learning from anatomized data approaches the limits of learning on the
unprotected data (although requiring more training data), and 2) nearest
neighbor using anatomized data outperforms nearest neighbor on
generalization-based anonymization.
| Koray Mancuhan and Chris Clifton | null | 1610.06048 | null | null |
Learning to Learn Neural Networks | cs.LG stat.ML | Meta-learning consists in learning learning algorithms. We use a Long Short
Term Memory (LSTM) based network to learn to compute on-line updates of the
parameters of another neural network. These parameters are stored in the cell
state of the LSTM. Our framework allows to compare learned algorithms to
hand-made algorithms within the traditional train and test methodology. In an
experiment, we learn a learning algorithm for a one-hidden layer Multi-Layer
Perceptron (MLP) on non-linearly separable datasets. The learned algorithm is
able to update parameters of both layers and generalise well on similar
datasets.
| Tom Bosc | null | 1610.06072 | null | null |
Efficiency of active learning for the allocation of workers on
crowdsourced classification tasks | cs.HC cs.LG | Crowdsourcing has been successfully employed in the past as an effective and
cheap way to execute classification tasks and has therefore attracted the
attention of the research community. However, we still lack a theoretical
understanding of how to collect the labels from the crowd in an optimal way. In
this paper we focus on the problem of worker allocation and compare two active
learning policies proposed in the empirical literature with a uniform
allocation of the available budget. To this end we make a thorough mathematical
analysis of the problem and derive a new bound on the performance of the
system. Furthermore we run extensive simulations in a more realistic scenario
and show that our theoretical results hold in practice.
| Edoardo Manino, Long Tran-Thanh, Nicholas R. Jennings | null | 1610.06106 | null | null |
Streaming Normalization: Towards Simpler and More Biologically-plausible
Normalizations for Online and Recurrent Learning | cs.LG cs.NE | We systematically explored a spectrum of normalization algorithms related to
Batch Normalization (BN) and propose a generalized formulation that
simultaneously solves two major limitations of BN: (1) online learning and (2)
recurrent learning. Our proposal is simpler and more biologically-plausible.
Unlike previous approaches, our technique can be applied out of the box to all
learning scenarios (e.g., online learning, batch learning, fully-connected,
convolutional, feedforward, recurrent and mixed --- recurrent and
convolutional) and compare favorably with existing approaches. We also propose
Lp Normalization for normalizing by different orders of statistical moments. In
particular, L1 normalization is well-performing, simple to implement, fast to
compute, more biologically-plausible and thus ideal for GPU or hardware
implementations.
| Qianli Liao, Kenji Kawaguchi, Tomaso Poggio | null | 1610.06160 | null | null |
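The L1 normalization highlighted above is easy to state concretely: normalize by the mean absolute deviation instead of the standard deviation. A minimal batch version follows; the learnable gain/bias and the running statistics needed for streaming operation are omitted here:

```python
import numpy as np

def l1_normalize(x, eps=1e-5):
    """Normalize activations by the first absolute moment: subtract the
    mean, then divide by the mean absolute deviation.  Cheap (no square
    or sqrt over the batch) compared with variance-based normalization."""
    mu = x.mean(axis=0, keepdims=True)
    centered = x - mu
    scale = np.abs(centered).mean(axis=0, keepdims=True) + eps
    return centered / scale

batch = np.random.default_rng(7).normal(3.0, 5.0, size=(64, 8))
out = l1_normalize(batch)
# Per-feature mean ~0 and mean absolute deviation ~1 after normalization.
print(out.mean(axis=0).round(3), np.abs(out).mean(axis=0).round(3))
```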
Structured adaptive and random spinners for fast machine learning
computations | cs.LG | We consider an efficient computational framework for speeding up several
machine learning algorithms with almost no loss of accuracy. The proposed
framework relies on projections via structured matrices that we call Structured
Spinners, which are formed as products of three structured matrix-blocks that
incorporate rotations. The approach is highly generic, i.e. i) structured
matrices under consideration can either be fully-randomized or learned, ii) our
structured family contains as special cases all previously considered
structured schemes, iii) the setting extends to the non-linear case where the
projections are followed by non-linear functions, and iv) the method finds
numerous applications including kernel approximations via random feature maps,
dimensionality reduction algorithms, new fast cross-polytope LSH techniques,
deep learning, convex optimization algorithms via Newton sketches, quantization
with random projection trees, and more. The proposed framework comes with
theoretical guarantees characterizing the capacity of the structured model in
reference to its unstructured counterpart and is based on a general theoretical
principle that we describe in the paper. As a consequence of our theoretical
analysis, we provide the first theoretical guarantees for one of the most
efficient existing LSH algorithms based on the HD3HD2HD1 structured matrix
[Andoni et al., 2015]. The exhaustive experimental evaluation confirms the
accuracy and efficiency of structured spinners for a variety of different
applications.
| Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Francois
Fagan, Cedric Gouy-Pailler, Anne Morvan, Nourhan Sakr, Tamas Sarlos, Jamal
Atif | null | 1610.06209 | null | null |
Multilevel Anomaly Detection for Mixed Data | cs.LG cs.DB | Anomalies are those deviating from the norm. Unsupervised anomaly detection
often translates to identifying low density regions. Major problems arise when
data is high-dimensional and mixes discrete and continuous attributes. We
propose MIXMAD, which stands for MIXed data Multilevel Anomaly Detection, an
ensemble method that estimates the sparse regions across multiple levels of
abstraction of mixed data. The hypothesis is for domains where multiple data
abstractions exist, a data point may be anomalous with respect to the raw
representation or more abstract representations. To this end, our method
sequentially constructs an ensemble of Deep Belief Nets (DBNs) with varying
depths. Each DBN is an energy-based detector at a predefined abstraction level.
At the bottom level of each DBN, there is a Mixed-variate Restricted Boltzmann
Machine that models the density of mixed data. Predictions across the ensemble
are finally combined via rank aggregation. The proposed MIXMAD is evaluated on
high-dimensional real-world datasets of different characteristics. The results
demonstrate that for anomaly detection, (a) multilevel abstraction of
high-dimensional and mixed data is a sensible strategy, and (b) empirically,
MIXMAD is superior to popular unsupervised detection methods for both
homogeneous and mixed data.
| Kien Do and Truyen Tran and Svetha Venkatesh | null | 1610.06249 | null | null |
DeepGraph: Graph Structure Predicts Network Growth | cs.SI cs.LG | The topological (or graph) structures of real-world networks are known to be
predictive of multiple dynamic properties of the networks. Conventionally, a
graph structure is represented using an adjacency matrix or a set of
hand-crafted structural features. These representations either fail to
highlight local and global properties of the graph or suffer from a severe loss
of structural information. An effective graph representation is still lacking, which
hinders the realization of the predictive power of network structures.
In this study, we propose to learn the representation of a graph, or the
topological structure of a network, through a deep learning model. This
end-to-end prediction model, named DeepGraph, takes the input of the raw
adjacency matrix of a real-world network and outputs a prediction of the growth
of the network. The adjacency matrix is first represented using a graph
descriptor based on the heat kernel signature, which is then passed through a
multi-column, multi-resolution convolutional neural network. Extensive
experiments on five large collections of real-world networks demonstrate that
the proposed prediction model significantly improves the effectiveness of
existing methods, including linear or nonlinear regressors that use
hand-crafted features, graph kernels, and competing deep learning methods.
| Cheng Li, Xiaoxiao Guo and Qiaozhu Mei | null | 1610.06251 | null | null |
Using Fast Weights to Attend to the Recent Past | stat.ML cs.LG cs.NE | Until recently, research on artificial neural networks was largely restricted
to systems with only two types of variable: Neural activities that represent
the current or recent input and weights that learn to capture regularities
among inputs, outputs and payoffs. There is no good reason for this
restriction. Synapses have dynamics at many different time-scales and this
suggests that artificial neural networks might benefit from variables that
change slower than activities but much faster than the standard weights. These
"fast weights" can be used to store temporary memories of the recent past and
they provide a neurally plausible way of implementing the type of attention to
the past that has recently proved very helpful in sequence-to-sequence models.
By using fast weights we can avoid the need to store copies of neural activity
patterns.
| Jimmy Ba, Geoffrey Hinton, Volodymyr Mnih, Joel Z. Leibo, Catalin
Ionescu | null | 1610.06258 | null | null |
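A minimal Python sketch of the fast-weight mechanism: a matrix A is decayed and updated with the outer product of the hidden state, and the hidden state then settles for a few inner iterations under A's influence. Dimensions and constants are illustrative, and the layer normalization the paper uses for stability is omitted:

```python
import numpy as np

rng = np.random.default_rng(8)

def fast_weight_step(h_prev, x, W, C, A, lam=0.95, eta=0.5, inner=3):
    """One recurrent step with a fast-weight memory.
    A decays and accumulates the outer product of the hidden state
    (a Hebbian fast update); the new hidden state settles for a few
    inner iterations, 'attending' to the recent past stored in A."""
    A = lam * A + eta * np.outer(h_prev, h_prev)
    pre = W @ x + C @ h_prev                 # slow-weight drive
    h = np.tanh(pre)
    for _ in range(inner):
        h = np.tanh(pre + A @ h)
    return h, A

d_in, d_h = 4, 16
W = rng.normal(0, 0.3, (d_h, d_in))
C = rng.normal(0, 0.3, (d_h, d_h))
h, A = np.zeros(d_h), np.zeros((d_h, d_h))
for t in range(10):
    h, A = fast_weight_step(h, rng.normal(size=d_in), W, C, A)
print(h[:4].round(3))
```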
Modeling Scalability of Distributed Machine Learning | cs.LG cs.DC | Present-day machine learning is computationally intensive and processes large
amounts of data. It is implemented in a distributed fashion in order to address
these scalability issues. The work is parallelized across a number of computing
nodes. It is usually hard to estimate in advance how many nodes to use for a
particular workload. We propose a simple framework for estimating the
scalability of distributed machine learning algorithms. We measure the
scalability by means of the speedup an algorithm achieves with more nodes. We
propose time complexity models for gradient descent and graphical model
inference. We validate our models with experiments on deep learning training
and belief propagation. This framework was used to study the scalability of
machine learning algorithms in Apache Spark.
| Alexander Ulanov, Andrey Simanovsky, Manish Marwah | null | 1610.06276 | null | null |
Deep Neural Networks for Improved, Impromptu Trajectory Tracking of
Quadrotors | cs.RO cs.LG cs.NE cs.SY | Trajectory tracking control for quadrotors is important for applications
ranging from surveying and inspection, to film making. However, designing and
tuning classical controllers, such as proportional-integral-derivative (PID)
controllers, to achieve high tracking precision can be time-consuming and
difficult, due to hidden dynamics and other non-idealities. The Deep Neural
Network (DNN), with its superior capability of approximating abstract,
nonlinear functions, offers a novel approach for enhancing trajectory
tracking control. This paper presents a DNN-based algorithm as an add-on module
that improves the tracking performance of a classical feedback controller.
Given a desired trajectory, the DNNs provide a tailored reference input to the
controller based on their gained experience. The input aims to achieve a unity
map between the desired and the output trajectory. The motivation for this work
is an interactive "fly-as-you-draw" application, in which a user draws a
trajectory on a mobile device, and a quadrotor instantly flies that trajectory
with the DNN-enhanced control system. Experimental results demonstrate that the
proposed approach improves the tracking precision for user-drawn trajectories
after the DNNs are trained on selected periodic trajectories, suggesting the
method's potential in real-world applications. Tracking errors are reduced by
around 40-50% for both training and testing trajectories from users,
highlighting the DNNs' capability of generalizing knowledge.
| Qiyang Li, Jingxing Qian, Zining Zhu, Xuchan Bao, Mohamed K. Helwa,
Angela P. Schoellig | null | 1610.06283 | null | null |
A Growing Long-term Episodic & Semantic Memory | cs.AI cs.LG cs.NE | The long-term memory of most connectionist systems lies entirely in the
weights of the system. Since the number of weights is typically fixed, this
bounds the total amount of knowledge that can be learned and stored. Though
this is not normally a problem for a neural network designed for a specific
task, such a bound is undesirable for a system that continually learns over an
open range of domains. To address this, we describe a lifelong learning system
that leverages a fast, though non-differentiable, content-addressable memory
which can be exploited to encode both a long history of sequential episodic
knowledge and semantic knowledge over many episodes for an unbounded number of
domains. This opens the door for investigation into transfer learning, and
leveraging prior knowledge that has been learned over a lifetime of experiences
to new domains.
| Marc Pickett and Rami Al-Rfou and Louis Shao and Chris Tar | null | 1610.06402 | null | null |
Mixed Neural Network Approach for Temporal Sleep Stage Classification | q-bio.NC cs.CV cs.LG cs.NE | This paper proposes a practical approach to addressing limitations posed by
the use of single active electrodes in applications for sleep stage classification.
Electroencephalography (EEG)-based characterizations of sleep stage progression
contribute to the diagnosis and monitoring of the many pathologies of sleep.
Several prior reports have explored ways of automating the analysis of sleep
EEG and of reducing the complexity of the data needed for reliable
discrimination of sleep stages in order to make it possible to perform sleep
studies at lower cost in the home (rather than only in specialized clinical
facilities). However, these reports have involved recordings from electrodes
placed on the cranial vertex or occiput, which can be uncomfortable or
difficult for subjects to position. Those that have utilized single EEG
channels, which contain less sleep information, have shown poor classification
performance. We have taken advantage of a Rectifier Neural Network for feature
detection and Long Short-Term Memory (LSTM) network for sequential data
learning to optimize classification performance with single electrode
recordings. After exploring alternative electrode placements, we found a
comfortable configuration of a single-channel EEG on the forehead and have
shown that it can be integrated with additional electrodes for simultaneous
recording of the electrooculogram (EOG). Evaluation of data from 62 people
(with 494 hours sleep) demonstrated better performance of our analytical
algorithm for automated sleep classification than existing approaches using
vertex or occipital electrode placements. Use of this recording configuration
with neural network deconvolution promises to make clinically indicated home
sleep studies practical.
| Hao Dong, Akara Supratak, Wei Pan, Chao Wu, Paul M. Matthews and Yike
Guo | 10.1109/TNSRE.2017.2733220 | 1610.06421 | null | null |
Kernel Alignment for Unsupervised Transfer Learning | stat.ML cs.LG | The ability of a human being to extrapolate previously gained knowledge to
other domains inspired a new family of methods in machine learning called
transfer learning. Transfer learning is often based on the assumption that
objects in both target and source domains share some common feature and/or data
space. In this paper, we propose a simple and intuitive approach that minimizes
iteratively the distance between source and target task distributions by
optimizing the kernel target alignment (KTA). We show that this procedure is
suitable for transfer learning by relating it to Hilbert-Schmidt Independence
Criterion (HSIC) and Quadratic Mutual Information (QMI) maximization. We run
our method on benchmark computer vision data sets and show that it can
outperform some state-of-art methods.
| Ievgen Redko, Youn\`es Bennani | null | 1610.06434 | null | null |
Regularized Optimal Transport and the Rot Mover's Distance | stat.ML cs.LG | This paper presents a unified framework for smooth convex regularization of
discrete optimal transport problems. In this context, the regularized optimal
transport turns out to be equivalent to a matrix nearness problem with respect
to Bregman divergences. Our framework thus naturally generalizes a previously
proposed regularization based on the Boltzmann-Shannon entropy related to the
Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We
call the regularized optimal transport distance the rot mover's distance in
reference to the classical earth mover's distance. We develop two generic
schemes that we respectively call the alternate scaling algorithm and the
non-negative alternate scaling algorithm, to compute efficiently the
regularized optimal plans depending on whether the domain of the regularizer
lies within the non-negative orthant or not. These schemes are based on
Dykstra's algorithm with alternate Bregman projections, and further exploit the
Newton-Raphson method when applied to separable divergences. We enhance the
separable case with a sparse extension to deal with high data dimensions. We
also instantiate our proposed framework and discuss the inherent specificities
for well-known regularizers and statistical divergences in the machine learning
and information geometry communities. Finally, we demonstrate the merits of our
methods with experiments using synthetic data to illustrate the effect of
different regularizers and penalties on the solutions, as well as real-world
data for a pattern recognition application to audio scene classification.
| Arnaud Dessein and Nicolas Papadakis and Jean-Luc Rouas | null | 1610.06447 | null | null |
Change-point Detection Methods for Body-Worn Video | cs.CV cs.LG stat.ML | Body-worn video (BWV) cameras are increasingly utilized by police departments
to provide a record of police-public interactions. However, large-scale BWV
deployment produces terabytes of data per week, necessitating the development
of effective computational methods to identify salient changes in video. In
work carried out at the 2016 RIPS program at IPAM, UCLA, we present a novel
two-stage framework for video change-point detection. First, we employ
state-of-the-art machine learning methods including convolutional neural
networks and support vector machines for scene classification. We then develop
and compare change-point detection algorithms utilizing mean squared-error
minimization, forecasting methods, hidden Markov models, and maximum likelihood
estimation to identify noteworthy changes. We test our framework on detection
of vehicle exits and entrances in a BWV data set provided by the Los Angeles
Police Department and achieve over 90% recall and nearly 70% precision --
demonstrating robustness to rapid scene changes, extreme luminance differences,
and frequent camera occlusions.
| Stephanie Allen, David Madras, Ye Ye, Greg Zanotti | null | 1610.06453 | null | null |
Utilization of Deep Reinforcement Learning for saccadic-based object
visual search | cs.CV cs.LG | The paper focuses on the problem of learning saccades enabling visual object
search. The developed system combines reinforcement learning with a neural
network for learning to predict the possible outcomes of its actions. We
validated the solution in three types of environment consisting of
(pseudo)-randomly generated matrices of digits. The experimental verification
is followed by a discussion of the elements required by systems mimicking
fovea movement, and of possible further research directions.
| Tomasz Kornuta and Kamil Rocki | null | 1610.06492 | null | null |
ChoiceRank: Identifying Preferences from Node Traffic in Networks | stat.ML cs.LG cs.SI | Understanding how users navigate in a network is of high interest in many
applications. We consider a setting where only aggregate node-level traffic is
observed and tackle the task of learning edge transition probabilities. We cast
it as a preference learning problem, and we study a model where choices follow
Luce's axiom. In this case, the $O(n)$ marginal counts of node visits are a
sufficient statistic for the $O(n^2)$ transition probabilities. We show how to
make the inference problem well-posed regardless of the network's structure,
and we present ChoiceRank, an iterative algorithm that scales to networks that
contain billions of nodes and edges. We apply the model to two clickstream
datasets and show that it successfully recovers the transition probabilities
using only the network structure and marginal (node-level) traffic data.
Finally, we also consider an application to mobility networks and apply the
model to one year of rides on New York City's bicycle-sharing system.
| Lucas Maystre, Matthias Grossglauser | null | 1610.06525 | null | null |
Autonomous Racing using Learning Model Predictive Control | cs.LG math.OC | A novel learning Model Predictive Control technique is applied to the
autonomous racing problem. The goal of the controller is to minimize the time
to complete a lap. The proposed control strategy uses the data from previous
laps to improve its performance while satisfying safety requirements. Moreover,
a system identification technique is proposed to estimate the vehicle dynamics.
Simulation results with the high fidelity simulator software CarSim show the
effectiveness of the proposed control scheme.
| Ugo Rosolia, Ashwin Carvalho and Francesco Borrelli | null | 1610.06534 | null | null |
Combinatorial Multi-Armed Bandit with General Reward Functions | cs.LG cs.DS stat.ML | In this paper, we study the stochastic combinatorial multi-armed bandit
(CMAB) framework that allows a general nonlinear reward function, whose
expected value may not depend only on the means of the input random variables
but possibly on the entire distributions of these variables. Our framework
enables a much larger class of reward functions such as the $\max()$ function
and nonlinear utility functions. Existing techniques relying on accurate
estimations of the means of random variables, such as the upper confidence
bound (UCB) technique, do not work directly on these functions. We propose a
new algorithm called stochastically dominant confidence bound (SDCB), which
estimates the distributions of underlying random variables and their
stochastically dominant confidence bounds. We prove that SDCB can achieve
$O(\log{T})$ distribution-dependent regret and $\tilde{O}(\sqrt{T})$
distribution-independent regret, where $T$ is the time horizon. We apply our
results to the $K$-MAX problem and expected utility maximization problems. In
particular, for $K$-MAX, we provide the first polynomial-time approximation
scheme (PTAS) for its offline problem, and give the first $\tilde{O}(\sqrt T)$
bound on the $(1-\epsilon)$-approximation regret of its online problem, for any
$\epsilon>0$.
| Wei Chen, Wei Hu, Fu Li, Jian Li, Yu Liu, Pinyan Lu | null | 1610.06603 | null | null |
Novelty Learning via Collaborative Proximity Filtering | cs.HC cs.LG | The vast majority of recommender systems model preferences as static or
slowly changing due to observable user experience. However, spontaneous changes
in user preferences are ubiquitous in many domains like media consumption and
key factors that drive changes in preferences are not directly observable.
These latent sources of preference change pose new challenges. When systems do
not track and adapt to users' tastes, users lose confidence and trust,
increasing the risk of user churn. We meet these challenges by developing a
model of novelty preferences that learns and tracks latent user tastes. We
combine three innovations: a new measure of item similarity based on patterns
of consumption co-occurrence; a model for {\em spontaneous} changes in
preferences; and a learning agent that tracks each user's dynamic preferences
and learns individualized policies for variety. The resulting framework
adaptively provides users with novelty tailored to their preferences for change
per se.
| Arun Kumar, Paul Schrater | null | 1610.06633 | null | null |
Single Pass PCA of Matrix Products | stat.ML cs.DS cs.IT cs.LG math.IT | In this paper we present a new algorithm for computing a low rank
approximation of the product $A^TB$ by taking only a single pass of the two
matrices $A$ and $B$. The straightforward way to do this is to (a) first sketch
$A$ and $B$ individually, and then (b) find the top components using PCA on the
sketch. Our algorithm in contrast retains additional summary information about
$A,B$ (e.g., row and column norms) and uses this additional information to
obtain an improved approximation from the sketches. Our main analytical result
establishes a comparable spectral norm guarantee to existing two-pass methods;
in addition we also provide results from an Apache Spark implementation that
shows better computational and statistical performance on real-world and
synthetic evaluation datasets.
| Shanshan Wu, Srinadh Bhojanapalli, Sujay Sanghavi, Alexandros G.
Dimakis | null | 1610.06656 | null | null |
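The baseline "sketch both matrices with a shared random projection, then PCA" scheme that the abstract contrasts against can be written in a few lines; the paper's improvement additionally retains summaries such as row and column norms, which this toy version does not. The function name, sketch size k, and synthetic data are assumptions, and k controls the accuracy:

```python
import numpy as np

rng = np.random.default_rng(9)

def one_pass_lowrank(row_stream, d1, d2, k=500, rank=5):
    """Single pass over aligned rows (a_i, b_i): accumulate shared random
    projections S_A = G A and S_B = G B, then take the top-`rank` SVD of
    S_A^T S_B / k as the estimate of A^T B.  E[S_A^T S_B / k] = A^T B."""
    S_A, S_B = np.zeros((k, d1)), np.zeros((k, d2))
    for a_i, b_i in row_stream:
        g = rng.normal(size=k)          # fresh Gaussian weights per row
        S_A += np.outer(g, a_i)
        S_B += np.outer(g, b_i)
    U, s, Vt = np.linalg.svd(S_A.T @ S_B / k, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

n, d1, d2 = 2000, 30, 20
A = rng.normal(size=(n, d1))
B = A[:, :5] @ rng.normal(size=(5, d2)) + 0.1 * rng.normal(size=(n, d2))
est = one_pass_lowrank(zip(A, B), d1, d2)
print("relative error:",
      np.linalg.norm(est - A.T @ B) / np.linalg.norm(A.T @ B))
```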
Stochastic Gradient MCMC with Stale Gradients | stat.ML cs.LG | Stochastic gradient MCMC (SG-MCMC) has played an important role in
large-scale Bayesian learning, with well-developed theoretical convergence
properties. In such applications of SG-MCMC, it is becoming increasingly
popular to employ distributed systems, where stochastic gradients are computed
based on some outdated parameters, yielding what are termed stale gradients.
While stale gradients could be directly used in SG-MCMC, their impact on
convergence properties has not been well studied. In this paper we develop
theory to show that while the bias and MSE of an SG-MCMC algorithm depend on
the staleness of stochastic gradients, its estimation variance (relative to the
expected estimate, based on a prescribed number of samples) is independent of
it. In a simple Bayesian distributed system with SG-MCMC, where stale gradients
are computed asynchronously by a set of workers, our theory indicates a linear
speedup on the decrease of estimation variance w.r.t. the number of workers.
Experiments on synthetic data and deep neural networks validate our theory,
demonstrating the effectiveness and scalability of SG-MCMC with stale
gradients.
| Changyou Chen and Nan Ding and Chunyuan Li and Yizhe Zhang and
Lawrence Carin | null | 1610.06664 | null | null |
End-to-End Training Approaches for Discriminative Segmental Models | cs.CL cs.LG stat.ML | Recent work on discriminative segmental models has shown that they can
achieve competitive speech recognition performance, using features based on
deep neural frame classifiers. However, segmental models can be more
challenging to train than standard frame-based approaches. While some segmental
models have been successfully trained end to end, there is a lack of
understanding of their training under different settings and with different
losses.
We investigate a model class based on recent successful approaches,
consisting of a linear model that combines segmental features based on an LSTM
frame classifier. Similarly to hybrid HMM-neural network models, segmental
models of this class can be trained in two stages (frame classifier training
followed by linear segmental model weight training), end to end (joint training
of both frame classifier and linear weights), or with end-to-end fine-tuning
after two-stage training.
We study segmental models trained end to end with hinge loss, log loss,
latent hinge loss, and marginal log loss. We consider several losses for the
case where training alignments are available as well as where they are not.
We find that in general, marginal log loss provides the most consistent
strong performance without requiring ground-truth alignments. We also find that
training with dropout is very important in obtaining good performance with
end-to-end training. Finally, the best results are typically obtained by a
combination of two-stage training and fine-tuning.
| Hao Tang, Weiran Wang, Kevin Gimpel, Karen Livescu | null | 1610.06700 | null | null |
Maximally Divergent Intervals for Anomaly Detection | stat.ML cs.LG | We present new methods for batch anomaly detection in multivariate time
series. Our methods are based on maximizing the Kullback-Leibler divergence
between the data distribution within and outside an interval of the time
series. An empirical analysis shows the benefits of our algorithms compared to
methods that treat each time step independently from each other without
optimizing with respect to all possible intervals.
| Erik Rodner, Bj\"orn Barz, Yanira Guanche, Milan Flach, Miguel
Mahecha, Paul Bodesheim, Markus Reichstein, Joachim Denzler | 10.17871/BACI_ICML2016_Rodner | 1610.06761 | null | null |
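A brute-force univariate sketch of the interval-scoring idea: fit a Gaussian inside and outside each candidate interval and rank intervals by their KL divergence. The Gaussian model, interval bounds, and exhaustive search are simplifications of the paper's multivariate algorithms:

```python
import numpy as np

def interval_scores(x, min_len=10, max_len=50):
    """Score each interval [a, b) of a univariate series by the KL
    divergence between Gaussians fitted inside and outside the interval;
    the highest-scoring interval is the anomaly candidate."""
    n, best, best_iv = len(x), -np.inf, None
    for a in range(n - min_len):
        for b in range(a + min_len, min(a + max_len, n) + 1):
            inside = x[a:b]
            outside = np.concatenate([x[:a], x[b:]])
            m1, v1 = inside.mean(), inside.var() + 1e-8
            m0, v0 = outside.mean(), outside.var() + 1e-8
            # KL( N(m1, v1) || N(m0, v0) ) in closed form.
            kl = 0.5 * (np.log(v0 / v1) + (v1 + (m1 - m0) ** 2) / v0 - 1)
            if kl > best:
                best, best_iv = kl, (a, b)
    return best_iv, best

rng = np.random.default_rng(10)
series = rng.normal(size=300)
series[120:150] += 3.0                       # injected anomalous interval
print(interval_scores(series))
```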
Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies | cs.RO cs.AI cs.CV cs.LG cs.SY | While deep learning has had significant successes in computer vision thanks
to the abundance of visual data, collecting sufficiently large real-world
datasets for robot learning can be costly. To increase the practicality of
these techniques on real robots, we propose a modular deep reinforcement
learning method capable of transferring models trained in simulation to a
real-world robotic task. We introduce a bottleneck between perception and
control, enabling the networks to be trained independently, but then merged and
fine-tuned in an end-to-end manner to further improve hand-eye coordination. On
a canonical, planar visually-guided robot reaching task a fine-tuned accuracy
of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5
pixels), showing the potential for more complicated and broader applications.
Our method provides a technique for more efficient learning and transfer of
visuo-motor policies for real robotic systems without relying entirely on large
real-world robot datasets.
| Fangyi Zhang, J\"urgen Leitner, Michael Milford, Peter Corke | null | 1610.06781 | null | null |
Robust training on approximated minimal-entropy set | cs.LG stat.ML | In this paper, we propose a general framework to learn a robust large-margin
binary classifier when corrupt measurements, called anomalies, caused by sensor
failure might be present in the training set. The goal is to minimize the
generalization error of the classifier on non-corrupted measurements while
controlling the false alarm rate associated with anomalous samples. By
incorporating a non-parametric regularizer based on an empirical entropy
estimator, we propose a Geometric-Entropy-Minimization regularized Maximum
Entropy Discrimination (GEM-MED) method to learn to classify and detect
anomalies in a joint manner. We demonstrate our approach using simulated data and a real
multimodal data set. Our GEM-MED method can yield improved performance over
previous robust classification methods in terms of both classification accuracy
and anomaly detection rate.
| Tianpei Xie, Nasser. M. Narabadi and Alfred O. Hero | null | 1610.06806 | null | null |
Convex Formulation for Kernel PCA and its Use in Semi-Supervised
Learning | cs.LG stat.ML | In this paper, Kernel PCA is reinterpreted as the solution to a convex
optimization problem. Actually, there is a constrained convex problem for each
principal component, so that the constraints guarantee that the principal
component is indeed a solution, and not a mere saddle point. Although these
insights do not imply any algorithmic improvement, they can be used to further
understand the method, formulate possible extensions and properly address them.
As an example, a new convex optimization problem for semi-supervised
classification is proposed, which seems particularly well-suited whenever the
number of known labels is small. Our formulation resembles a Least Squares SVM
problem with a regularization parameter multiplied by a negative sign, combined
with a variational principle for Kernel PCA. Our primal optimization principle
for semi-supervised learning is solved in terms of the Lagrange multipliers.
Numerical experiments in several classification tasks illustrate the
performance of the proposed model in problems with only a few labeled data.
| Carlos M. Ala\'iz, Micha\"el Fanuel, Johan A. K. Suykens | 10.1109/TNNLS.2017.2709838 | 1610.06811 | null | null |
An Efficient Minibatch Acceptance Test for Metropolis-Hastings | cs.LG stat.ML | We present a novel Metropolis-Hastings method for large datasets that uses
small expected-size minibatches of data. Previous work on reducing the cost of
Metropolis-Hastings tests yields variable data consumed per sample, with only
constant factor reductions versus using the full dataset for each sample. Here
we present a method that can be tuned to provide arbitrarily small batch sizes,
by adjusting either proposal step size or temperature. Our test uses the
noise-tolerant Barker acceptance test with a novel additive correction
variable. The resulting test has similar cost to a normal SGD update. Our
experiments demonstrate several order-of-magnitude speedups over previous work.
| Daniel Seita, Xinlei Pan, Haoyu Chen, John Canny | null | 1610.06848 | null | null |
Learning to Protect Communications with Adversarial Neural Cryptography | cs.CR cs.LG | We ask whether neural networks can learn to use secret keys to protect
information from other neural networks. Specifically, we focus on ensuring
confidentiality properties in a multiagent system, and we specify those
properties in terms of an adversary. Thus, a system may consist of neural
networks named Alice and Bob, and we aim to limit what a third neural network
named Eve learns from eavesdropping on the communication between Alice and Bob.
We do not prescribe specific cryptographic algorithms to these neural networks;
instead, we train end-to-end, adversarially. We demonstrate that the neural
networks can learn how to perform forms of encryption and decryption, and also
how to apply these operations selectively in order to meet confidentiality
goals.
| Mart\'in Abadi and David G. Andersen (Google Brain) | null | 1610.06918 | null | null |
Bit-pragmatic Deep Neural Network Computing | cs.LG cs.AI cs.AR cs.CV | We quantify a source of ineffectual computations when processing the
multiplications of the convolutional layers in Deep Neural Networks (DNNs) and
propose Pragmatic (PRA), an architecture that exploits it improving performance
and energy efficiency. The source of these ineffectual computations is best
understood in the context of conventional multipliers, which internally generate
multiple terms, that is, products of the multiplicand and powers of two, which
added together produce the final product [1]. At runtime, many of these terms
are zero as they are generated when the multiplicand is combined with the
zero-bits of the multiplicator. While conventional bit-parallel multipliers
calculate all terms in parallel to reduce individual product latency, PRA
calculates only the non-zero terms using a) on-the-fly conversion of the
multiplicator representation into an explicit list of powers of two, and b)
hybrid bit-parallel multiplicand/bit-serial multiplicator processing units. PRA
exploits two sources of ineffectual computations: 1) the aforementioned zero
product terms which are the result of the lack of explicitness in the
multiplicator representation, and 2) the excess in the representation precision
used for both multiplicands and multiplicators, e.g., [2]. Measurements
demonstrate that for the convolutional layers, a straightforward variant of PRA
improves performance by 2.6x over the DaDianNao (DaDN) accelerator [3] and by
1.4x over STR [4]. Similarly, PRA improves energy efficiency by 28% and 10% on
average compared to DaDN and STR. An improved cross-lane synchronization scheme
boosts the performance improvement to 3.1x over DaDN. Finally, Pragmatic's benefits
persist even with an 8-bit quantized representation [5].
| J. Albericio, P. Judd, A. Delm\'as, S. Sharify, A. Moshovos | null | 1610.06920 | null | null |
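A toy illustration, in plain Python, of the observation PRA builds on: a multiplication needs only one shift-and-add per set bit of the multiplicator, so the zero bits contribute no work. The hardware details (on-the-fly representation conversion, bit-serial units) are of course absent here.

```python
def pragmatic_multiply(multiplicand: int, multiplicator: int) -> int:
    """Sum only the non-zero terms: multiplicand shifted by each set bit."""
    total, position = 0, 0
    while multiplicator:
        if multiplicator & 1:                  # a non-zero term
            total += multiplicand << position  # multiplicand * 2**position
        multiplicator >>= 1
        position += 1
    return total

# Only two of the seven bit positions are set, so only two terms are added.
assert pragmatic_multiply(37, 0b1001000) == 37 * 0b1001000
```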
Safety Verification of Deep Neural Networks | cs.AI cs.LG stat.ML | Deep neural networks have achieved impressive experimental results in image
classification, but can surprisingly be unstable with respect to adversarial
perturbations, that is, minimal changes to the input image that cause the
network to misclassify it. With potential applications including perception
modules and end-to-end controllers for self-driving cars, this raises concerns
about their safety. We develop a novel automated verification framework for
feed-forward multi-layer neural networks based on Satisfiability Modulo Theory
(SMT). We focus on safety of image classification decisions with respect to
image manipulations, such as scratches or changes to camera angle or lighting
conditions that would result in the same class being assigned by a human, and
define safety for an individual decision in terms of invariance of the
classification within a small neighbourhood of the original image. We enable
exhaustive search of the region by employing discretisation, and propagate the
analysis layer by layer. Our method works directly with the network code and,
in contrast to existing methods, can guarantee that adversarial examples, if
they exist, are found for the given region and family of manipulations. If
found, adversarial examples can be shown to human testers and/or used to
fine-tune the network. We implement the techniques using Z3 and evaluate them
on state-of-the-art networks, including regularised and deep learning networks.
We also compare against existing techniques to search for adversarial examples
and estimate network robustness.
| Xiaowei Huang and Marta Kwiatkowska and Sen Wang and Min Wu | null | 1610.06940 | null | null |
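A minimal sketch of phrasing "does any point in the region change the decision?" as an SMT query with Z3's Python API, for a tiny hand-coded ReLU network. The paper's framework adds discretisation of the manipulation space and layer-by-layer propagation for real trained networks; the weights and region below are invented.

```python
from z3 import Real, Solver, If, sat

def relu(e):
    return If(e > 0, e, 0)

x1, x2 = Real('x1'), Real('x2')
x1_0, x2_0, eps = 0.5, -0.2, 0.1           # original input and region size

# Fixed toy network: 2 inputs -> 2 hidden ReLU units -> 1 score.
h1 = relu(1.0 * x1 - 2.0 * x2 + 0.1)
h2 = relu(-1.5 * x1 + 0.5 * x2 + 0.3)
score = 2.0 * h1 - 1.0 * h2 - 0.2          # class = sign(score)

s = Solver()
s.add(x1 >= x1_0 - eps, x1 <= x1_0 + eps)  # stay in the neighbourhood
s.add(x2 >= x2_0 - eps, x2 <= x2_0 + eps)
s.add(score < 0)                           # the original point scores > 0
if s.check() == sat:
    print("adversarial point:", s.model())
else:
    print("no misclassification in this region")
```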
Learning Cost-Effective Treatment Regimes using Markov Decision
Processes | cs.AI cs.LG stat.ML | Decision makers, such as doctors and judges, make crucial decisions such as
recommending treatments to patients and granting bail to defendants, on a
daily basis. Such decisions typically involve weighing the potential benefits
of taking an action against the costs involved. In this work, we aim to
automate this task of learning \emph{cost-effective, interpretable and
actionable treatment regimes}. We formulate this as a problem of learning a
decision list -- a sequence of if-then-else rules -- which maps characteristics
of subjects (e.g., diagnostic test results of patients) to treatments. We
propose a novel objective to construct a decision list which maximizes outcomes
for the population, and minimizes overall costs. We model the problem of
learning such a list as a Markov Decision Process (MDP) and employ a variant of
the Upper Confidence Bound for Trees (UCT) strategy which leverages customized
checks for pruning the search space effectively. Experimental results on real
world observational data capturing judicial bail decisions and treatment
recommendations for asthma patients demonstrate the effectiveness of our
approach.
| Himabindu Lakkaraju, Cynthia Rudin | null | 1610.06972 | null | null |
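For reference, the UCB-style selection rule at the heart of the UCT strategy mentioned above, as a standalone Python helper. The node representation (dicts with "visits" and "value" keys) and the exploration constant are illustrative assumptions; the paper's customized pruning checks are not shown.

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximising mean value plus an exploration bonus."""
    total = sum(child["visits"] for child in children)
    def ucb(child):
        if child["visits"] == 0:
            return float("inf")          # expand unvisited children first
        mean = child["value"] / child["visits"]
        return mean + c * math.sqrt(math.log(total) / child["visits"])
    return max(children, key=ucb)

children = [{"visits": 10, "value": 6.0},
            {"visits": 3, "value": 2.4},
            {"visits": 0, "value": 0.0}]
print(uct_select(children))              # the unvisited child is chosen first
```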
Ranking of classification algorithms in terms of mean-standard deviation
using A-TOPSIS | cs.LG | In classification problems, when multiple algorithms are applied to different
benchmarks, a difficult issue arises: how can we rank the algorithms? In
machine learning it is common to run the algorithms several times and then
calculate a statistic in terms of means and standard deviations. In order to
compare the performance of the algorithms, it is very common to employ
statistical tests. However, these tests may also present limitations, since
they consider only the means and not the standard deviations of the obtained
results. In this paper, we present the so-called A-TOPSIS, based on TOPSIS
(Technique for Order Preference by Similarity to Ideal Solution), to solve the
problem of ranking and comparing classification algorithms in terms of means
and standard deviations. We use two case studies to illustrate A-TOPSIS for
ranking classification algorithms, and the results show its suitability for
this task. The presented approach is general and can be
applied to compare the performance of stochastic algorithms in machine
learning. Finally, to encourage researchers to use A-TOPSIS for ranking
algorithms, we also present an easy-to-use A-TOPSIS web framework.
| Andre G. C. Pacheco and Renato A. Krohling | null | 1610.06998 | null | null |
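A NumPy sketch of the TOPSIS core that A-TOPSIS builds on: rows are algorithms, the two columns are mean accuracy (a benefit criterion) and standard deviation (a cost criterion), and the weights are illustrative.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    M = np.asarray(matrix, dtype=float)
    V = weights * M / np.linalg.norm(M, axis=0)   # normalise, then weight
    # Ideal and anti-ideal points, direction depending on criterion type.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)           # closeness: higher is better

scores = topsis([[0.91, 0.03], [0.89, 0.01], [0.93, 0.06]],
                weights=np.array([0.7, 0.3]),
                benefit=np.array([True, False]))  # accuracy up, std dev down
print(np.argsort(scores)[::-1])                   # ranking of the algorithms
```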
Exercise Motion Classification from Large-Scale Wearable Sensor Data
Using Convolutional Neural Networks | cs.CV cs.LG | The ability to accurately identify human activities is essential for
developing automatic rehabilitation and sports training systems. In this paper,
large-scale exercise motion data obtained from a forearm-worn wearable sensor
are classified with a convolutional neural network (CNN). Time-series data
consisting of accelerometer and orientation measurements are formatted as
images, allowing the CNN to automatically extract discriminative features. A
comparative study on the effects of image formatting and different CNN
architectures is also presented. The best performing configuration classifies
50 gym exercises with 92.1% accuracy.
| Terry Taewoong Um, Vahid Babakeshizadeh and Dana Kuli\'c | null | 1610.07031 | null | null |
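A PyTorch sketch of the "time series as image" idea from the abstract: a window of sensor readings, here an invented 6 channels by 128 samples, is treated as a one-channel image for a small 2-D CNN. The paper's exact architecture and image-formatting variants differ.

```python
import torch
import torch.nn as nn

class ExerciseCNN(nn.Module):
    def __init__(self, n_classes=50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                      # pool along time only
            nn.Conv2d(16, 32, kernel_size=(3, 5), padding=(1, 2)), nn.ReLU(),
            nn.MaxPool2d((2, 2)),
        )
        self.classifier = nn.Linear(32 * 3 * 32, n_classes)

    def forward(self, x):              # x: (batch, 1, 6 channels, 128 samples)
        return self.classifier(self.features(x).flatten(1))

logits = ExerciseCNN()(torch.randn(8, 1, 6, 128))
print(logits.shape)                    # torch.Size([8, 50])
```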
Online Classification with Complex Metrics | stat.ML cs.LG | We present a framework and analysis of consistent binary classification for
complex and non-decomposable performance metrics such as the F-measure and the
Jaccard measure. The proposed framework is general, as it applies to both batch
and online learning, and to both linear and non-linear models. Our work follows
recent results showing that the Bayes optimal classifier for many complex
metrics is given by a thresholding of the conditional probability of the
positive class. This manuscript extends this thresholding characterization --
showing that the utility is strictly locally quasi-concave with respect to the
threshold for a wide range of models and performance metrics. This, in turn,
motivates simple normalized gradient ascent updates for threshold estimation.
We present a finite-sample regret analysis for the resulting procedure. In
particular, the risk for the batch case converges to the Bayes risk at the same
rate as that of the underlying conditional probability estimation, and the risk
of the proposed online algorithm converges at a rate that depends on the
conditional probability estimation risk. For instance, in the special case
where the conditional probability model is logistic regression, our procedure
achieves $O(\frac{1}{\sqrt{n}})$ sample complexity, both for batch and online
training. Empirical evaluation shows that the proposed algorithms outperform
alternatives in practice, with comparable or better prediction performance and
reduced run time for various metrics and datasets.
| Bowei Yan, Oluwasanmi Koyejo, Kai Zhong, Pradeep Ravikumar | null | 1610.07116 | null | null |
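A small NumPy illustration of the thresholding characterization above: because the utility (here F1) is quasi-concave in the threshold, a simple ternary search over [0, 1] can locate the optimal cut-off for a plug-in classifier. The paper's procedure is instead an online normalized gradient ascent; this batch stand-in with synthetic scores only conveys the quasi-concavity point.

```python
import numpy as np

def f1_at(threshold, prob, y):
    pred = (prob >= threshold).astype(int)
    tp = np.sum((pred == 1) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    return 2 * tp / max(2 * tp + fp + fn, 1)

def best_threshold(prob, y, iters=60):
    lo, hi = 0.0, 1.0
    for _ in range(iters):             # ternary search on a quasi-concave utility
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f1_at(m1, prob, y) < f1_at(m2, prob, y):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=2000)
prob = y * 0.3 + rng.uniform(size=2000) * 0.7   # noisy probability scores
print(round(best_threshold(prob, y), 3))
```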
Cross Device Matching for Online Advertising with Neural Feature
Ensembles : First Place Solution at CIKM Cup 2016 | cs.LG cs.IR stat.ML | We describe the 1st place winning approach for the CIKM Cup 2016 Challenge.
In this paper, we present an approach to reliably identify the same users
across multiple devices based on browsing logs. Our approach regards the candidate
ranking problem as pairwise classification and utilizes an unsupervised neural
feature ensemble approach to learn latent features of users. Combined with
traditional hand crafted features, each user pair feature is fed into a
supervised classifier in order to perform pairwise classification. Lastly, we
propose supervised and unsupervised inference techniques.
| Minh C. Phan, Yi Tay, Tuan-Anh Nguyen Pham | null | 1610.07119 | null | null |
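A schematic of the pairwise set-up described above: each candidate user pair gets a feature vector combining learned embeddings (random stand-ins here for the unsupervised neural features) with a hand-crafted overlap feature, and a binary classifier scores the pair. All labels and features below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, d = 1000, 32
emb_a = rng.normal(size=(n_pairs, d))           # device-A user embeddings
emb_b = rng.normal(size=(n_pairs, d))           # device-B user embeddings
overlap = rng.uniform(size=(n_pairs, 1))        # e.g. shared-domain ratio

cos = np.sum(emb_a * emb_b, axis=1, keepdims=True) / (
    np.linalg.norm(emb_a, axis=1, keepdims=True)
    * np.linalg.norm(emb_b, axis=1, keepdims=True))
X = np.hstack([np.abs(emb_a - emb_b), cos, overlap])
y = rng.integers(0, 2, size=n_pairs)            # same-user labels (dummy)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])           # match scores for 3 pairs
```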
How to be Fair and Diverse? | cs.LG | Due to the recent cases of algorithmic bias in data-driven decision-making,
machine learning methods are being put under the microscope in order to
understand the root cause of these biases and how to correct them. Here, we
consider a basic algorithmic task that is central in machine learning:
subsampling from a large data set. Subsamples are used both as an end-goal in
data summarization (where fairness could either be a legal, political or moral
requirement) and to train algorithms (where biases in the samples are often a
source of bias in the resulting model). Consequently, there is a growing effort
to modify either the subsampling methods or the algorithms themselves in order
to ensure fairness. However, in doing so, a question that seems to be
overlooked is whether it is possible to produce fair subsamples that are also
adequately representative of the feature space of the data set -- an important
and classic requirement in machine learning. Can diversity and fairness be
simultaneously ensured? We start by noting that, in some applications,
guaranteeing one does not necessarily guarantee the other, and a new approach
is required. Subsequently, we present an algorithmic framework which allows us
to produce both fair and diverse samples. Our experimental results on an image
summarization task show marked improvements in fairness without compromising
feature diversity by much, giving us the best of both worlds.
| L. Elisa Celis, Amit Deshpande, Tarun Kathuria, Nisheeth K. Vishnoi | null | 1610.07183 | null | null |
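An illustrative greedy sampler for the "fair and diverse subsample" goal: farthest-first selection for diversity, subject to a per-group quota for fairness. This is a simple stand-in, not the paper's framework, and the quota scheme is an assumption.

```python
import numpy as np

def fair_diverse_sample(X, groups, k, quota):
    chosen = [0]                                  # seed with the first point
    counts = {g: 0 for g in quota}
    counts[groups[0]] += 1
    dmin = np.full(len(X), np.inf)
    while len(chosen) < k:
        # Distance of every point to the sample chosen so far.
        dmin = np.minimum(dmin, np.linalg.norm(X - X[chosen[-1]], axis=1))
        for i in np.argsort(-dmin):               # farthest-first candidates
            if i not in chosen and counts[groups[i]] < quota[groups[i]]:
                chosen.append(int(i))
                counts[groups[i]] += 1
                break
        else:
            break                                 # no feasible point remains
    return chosen

X = np.random.randn(200, 2)
groups = ["a" if i < 120 else "b" for i in range(200)]
print(fair_diverse_sample(X, groups, k=10, quota={"a": 5, "b": 5}))
```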
Learning Deep Architectures for Interaction Prediction in
Structure-based Virtual Screening | stat.ML cs.LG | We introduce a deep learning architecture for structure-based virtual
screening that generates fixed-sized fingerprints of proteins and small
molecules by applying learnable atom convolution and softmax operations to each
compound separately. These fingerprints are further transformed non-linearly,
their inner-product is calculated and used to predict the binding potential.
Moreover, we show that widely used benchmark datasets may be insufficient for
testing structure-based virtual screening methods that utilize machine
learning. Therefore, we introduce a new benchmark dataset, which we constructed
based on DUD-E and PDBBind databases.
| Adam Gonczarek, Jakub M. Tomczak, Szymon Zar\k{e}ba, Joanna Kaczmar,
Piotr D\k{a}browski, Micha{\l} J. Walczak | 10.1016/j.compbiomed.2017.09.007 | 1610.07187 | null | null |
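A bare-bones PyTorch analogue of the scoring pipeline in the abstract: per-atom features are pooled with a learned softmax over atoms into fixed-size fingerprints for the protein and the ligand separately, transformed non-linearly, and their inner product yields the binding potential. All dimensions and the gating scheme are invented.

```python
import torch
import torch.nn as nn

class Fingerprint(nn.Module):
    def __init__(self, atom_dim, fp_dim):
        super().__init__()
        self.embed = nn.Linear(atom_dim, fp_dim)
        self.gate = nn.Linear(atom_dim, 1)         # attention logit per atom
        self.transform = nn.Sequential(nn.Linear(fp_dim, fp_dim), nn.Tanh())

    def forward(self, atoms):                      # (batch, n_atoms, atom_dim)
        w = torch.softmax(self.gate(atoms), dim=1)             # over atoms
        fp = (w * torch.tanh(self.embed(atoms))).sum(dim=1)    # fixed size
        return self.transform(fp)

protein_net, ligand_net = Fingerprint(24, 64), Fingerprint(24, 64)
protein = torch.randn(4, 300, 24)                  # 4 proteins, 300 atoms each
ligand = torch.randn(4, 30, 24)                    # 4 ligands, 30 atoms each
score = (protein_net(protein) * ligand_net(ligand)).sum(dim=1)
print(torch.sigmoid(score))                        # binding potential per pair
```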