title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Differentially Private Identity and Closeness Testing of Discrete
Distributions | cs.LG cs.DS cs.IT math.IT stat.ML | We investigate the problems of identity and closeness testing over a discrete
population from random samples. Our goal is to develop efficient testers while
guaranteeing Differential Privacy to the individuals of the population. We
describe an approach that yields sample-efficient differentially private
testers for these problems. Our theoretical results show that there exist
private identity and closeness testers that are nearly as sample-efficient as
their non-private counterparts. We perform an experimental evaluation of our
algorithms on synthetic data. Our experiments illustrate that our private
testers achieve small type I and type II errors with sample size sublinear in
the domain size of the underlying distributions.
| Maryam Aliakbarpour, Ilias Diakonikolas, Ronitt Rubinfeld | null | 1707.05497 | null | null |
A Machine Learning Approach for Evaluating Creative Artifacts | cs.LG cs.AI stat.ML | Much work has been done in understanding human creativity and defining
measures to evaluate it, mainly to obtain an objective and automatic way of
quantifying creative artifacts. In this work, we propose a regression-based
learning framework that quantitatively accounts for essential criteria of
creativity such as novelty, influence, value, and unexpectedness. As is often
the case in creative
domains, there is no clear ground truth available for creativity. Our proposed
learning framework is applicable to all creative domains; yet we evaluate it on
a dataset of movies created from IMDb and Rotten Tomatoes due to availability
of audience and critic scores, which can be used as proxy ground truth labels
for creativity. We report promising results and observations from our
experiments in three ways: 1) correlation of creative criteria with critic
scores, 2) improvement in movie rating prediction with the inclusion of
various creative criteria, and 3) identification of creative movies.
| Disha Shrivastava, Saneem Ahmed CG, Anirban Laha, Karthik
Sankaranarayanan | null | 1707.05499 | null | null |
Bayesian Nonlinear Support Vector Machines for Big Data | stat.ML cs.LG | We propose a fast inference method for Bayesian nonlinear support vector
machines that leverages stochastic variational inference and inducing points.
Our experiments show that the proposed method is faster than competing Bayesian
approaches and scales easily to millions of data points. It provides additional
features over frequentist competitors such as accurate predictive uncertainty
estimates and automatic hyperparameter search.
| Florian Wenzel, Theo Galy-Fajou, Matthaeus Deutsch, Marius Kloft | null | 1707.05532 | null | null |
Global optimization for low-dimensional switching linear regression and
bounded-error estimation | cs.LG stat.ML | The paper provides global optimization algorithms for two particularly
difficult nonconvex problems raised by hybrid system identification: switching
linear regression and bounded-error estimation. While most works focus on local
optimization heuristics without global optimality guarantees or with guarantees
valid only under restrictive conditions, the proposed approach always yields a
solution with a certificate of global optimality. This approach relies on a
branch-and-bound strategy for which we devise lower bounds that can be
efficiently computed. To obtain algorithms that scale with the number of data
points, we directly optimize the model parameters in a continuous
optimization setting without involving integer variables. Numerical experiments
show that the proposed algorithms offer a higher accuracy than convex
relaxations with a reasonable computational burden for hybrid system
identification. In addition, we discuss how bounded-error estimation is related
to robust estimation in the presence of outliers and exact recovery under
sparse noise, for which we also obtain promising numerical results.
| Fabien Lauer (ABC) | null | 1707.05533 | null | null |
Latent Gaussian Process Regression | stat.ML cs.LG | We introduce Latent Gaussian Process Regression, a latent variable
extension that allows modelling of non-stationary, multi-modal processes using GPs.
The approach is built on extending the input space of a regression problem with
a latent variable that is used to modulate the covariance function over the
training data. We show how our approach can be used to model multi-modal and
non-stationary processes. We exemplify the approach on a set of synthetic data
and provide results on real data from motion capture and geostatistics.
| Erik Bodin, Neill D. F. Campbell, Carl Henrik Ek | null | 1707.05534 | null | null |
One-Shot Learning in Discriminative Neural Networks | stat.ML cs.LG | We consider the task of one-shot learning of visual categories. In this paper
we explore a Bayesian procedure for updating a pretrained convnet to classify a
novel image category for which data is limited. We decompose this convnet into
a fixed feature extractor and softmax classifier. We assume that the target
weights for the new task come from the same distribution as the pretrained
softmax weights, which we model as a multivariate Gaussian. By using this as a
prior for the new weights, we demonstrate competitive performance with
state-of-the-art methods whilst also being consistent with 'normal' methods for
training deep networks on large data.
| Jordan Burgess, James Robert Lloyd, Zoubin Ghahramani | null | 1707.05562 | null | null |
Graph learning under sparsity priors | cs.LG cs.SI stat.ML | Graph signals offer a very generic and natural representation for data that
lives on networks or irregular structures. The actual data structure is however
often unknown a priori but can sometimes be estimated from the knowledge of the
application domain. If this is not possible, the data structure has to be
inferred from the mere signal observations. This is exactly the problem that we
address in this paper, under the assumption that the graph signals can be
represented as a sparse linear combination of a few atoms of a structured graph
dictionary. The dictionary is constructed on polynomials of the graph
Laplacian, which can sparsely represent a general class of graph signals
composed of localized patterns on the graph. We formulate a graph learning
problem, whose solution provides an ideal fit between the signal observations
and the sparse graph signal model. As the problem is non-convex, we propose to
solve it by alternating between a signal sparse coding and a graph update step.
We provide experimental results that outline the good graph recovery
performance of our method, which generally compares favourably to other recent
network inference algorithms.
| Hermina Petric Maretic, Dorina Thanou, Pascal Frossard | null | 1707.05587 | null | null |
VSE++: Improving Visual-Semantic Embeddings with Hard Negatives | cs.LG cs.CL cs.CV | We present a new technique for learning visual-semantic embeddings for
cross-modal retrieval. Inspired by hard negative mining, the use of hard
negatives in structured prediction, and ranking loss functions, we introduce a
simple change to common loss functions used for multi-modal embeddings. That,
combined with fine-tuning and use of augmented data, yields significant gains
in retrieval performance. We showcase our approach, VSE++, on MS-COCO and
Flickr30K datasets, using ablation studies and comparisons with existing
methods. On MS-COCO our approach outperforms state-of-the-art methods by 8.8%
in caption retrieval and 11.3% in image retrieval (at R@1).
| Fartash Faghri, David J. Fleet, Jamie Ryan Kiros and Sanja Fidler | null | 1707.05612 | null | null |
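As an illustration of the loss change described above, here is a minimal NumPy sketch of a hinge-based ranking loss that keeps only the hardest negative in the batch rather than summing over all negatives. The margin value and the plain dot-product similarity are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def hardest_negative_loss(img_emb, cap_emb, margin=0.2):
    """Max-of-hinges ranking loss over a batch of paired embeddings.

    img_emb, cap_emb: (batch, dim) arrays where row i of each matrix
    forms a positive (image, caption) pair; the margin is assumed.
    """
    scores = img_emb @ cap_emb.T          # (batch, batch) similarities
    pos = np.diag(scores)                 # similarities of true pairs
    # Hinge cost of ranking a wrong caption above the true caption for
    # each image (rows), and a wrong image above the true image for
    # each caption (columns).
    cost_cap = np.maximum(0.0, margin + scores - pos[:, None])
    cost_img = np.maximum(0.0, margin + scores - pos[None, :])
    np.fill_diagonal(cost_cap, 0.0)       # ignore the positive pairs
    np.fill_diagonal(cost_img, 0.0)
    # Keep only the hardest negative per positive pair.
    return (cost_cap.max(axis=1) + cost_img.max(axis=0)).mean()
```

Summing the two hinge terms instead of taking their maxima recovers the common sum-of-hinges baseline that the hard-negative variant improves upon.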
Learning Powers of Poisson Binomial Distributions | cs.DS cs.LG math.ST stat.TH | We introduce the problem of simultaneously learning all powers of a Poisson
Binomial Distribution (PBD). A PBD of order $n$ is the distribution of a sum of
$n$ mutually independent Bernoulli random variables $X_i$, where
$\mathbb{E}[X_i] = p_i$. The $k$'th power of this distribution, for $k$ in a
range $[m]$, is the distribution of $P_k = \sum_{i=1}^n X_i^{(k)}$, where each
Bernoulli random variable $X_i^{(k)}$ has $\mathbb{E}[X_i^{(k)}] = (p_i)^k$.
The learning algorithm can query any power $P_k$ several times and succeeds in
learning all powers in the range, if with probability at least $1- \delta$:
given any $k \in [m]$, it returns a probability distribution $Q_k$ with total
variation distance from $P_k$ at most $\epsilon$. We provide almost matching
lower and upper bounds on query complexity for this problem. We first show a
lower bound on the query complexity for PBD power instances with many distinct,
well-separated parameters $p_i$, and we almost match this lower bound by
examining the query complexity of simultaneously learning all the powers of a
special class of PBD's resembling the PBD's of our lower bound. We study the
fundamental setting of a Binomial distribution, and provide an optimal
algorithm which uses $O(1/\epsilon^2)$ samples. Diakonikolas, Kane and Stewart
[COLT'16] showed a lower bound of $\Omega(2^{1/\epsilon})$ samples to learn the
$p_i$'s within error $\epsilon$. The question of whether sampling from powers of
PBDs can reduce this sample complexity has a negative answer: we show that an
exponential number of samples is inevitable. Having sampling access to the
powers of a PBD, we then give a nearly optimal algorithm that learns its
$p_i$'s. To prove our last two lower bounds, we extend the classical minimax
risk definition from statistics to estimating functions of sequences of
distributions.
| Dimitris Fotakis, Vasilis Kontonis, Piotr Krysta, and Paul Spirakis | null | 1707.05662 | null | null |
Empirical evaluation of a Q-Learning Algorithm for Model-free Autonomous
Soaring | cs.LG | Autonomous unpowered flight is a challenge for control and guidance systems:
all the energy the aircraft might use during flight has to be harvested
directly from the atmosphere. We investigate the design of an algorithm that
optimizes the closed-loop control of a glider's bank and sideslip angles, while
flying in the lower convective layer of the atmosphere in order to increase its
mission endurance. Using a Reinforcement Learning approach, we demonstrate the
possibility for real-time adaptation of the glider's behaviour to the
time-varying and noisy conditions associated with thermal soaring flight. Our
approach is online, data-based and model-free, hence avoids the pitfalls of
aerological and aircraft modelling and allows us to deal with uncertainties and
non-stationarity. Additionally, we put a particular emphasis on keeping low
computational requirements in order to make on-board execution feasible. This
article presents the stochastic, time-dependent aerological model used for
simulation, together with a standard aircraft model. Then we introduce an
adaptation of a Q-learning algorithm and demonstrate its ability to control the
aircraft and improve its endurance by exploiting updrafts in non-stationary
scenarios.
| Erwan Lecarpentier, Sebastian Rapp, Marc Melo, Emmanuel Rachelson | null | 1707.05668 | null | null |
Submodular Mini-Batch Training in Generative Moment Matching Networks | cs.LG | This article was withdrawn because (1) it was uploaded without the
co-authors' knowledge or consent, and (2) there are allegations of plagiarism.
| Jun Qi | null | 1707.05721 | null | null |
Robust Bayesian Optimization with Student-t Likelihood | cs.LG cs.AI stat.ML | Bayesian optimization (BO) has recently attracted the attention of the automatic
machine learning community for its excellent results in hyperparameter tuning.
BO is characterized by the sample efficiency with which it can optimize
expensive black-box functions. The efficiency is achieved in a similar fashion
to the learning to learn methods: surrogate models (typically in the form of
Gaussian processes) learn the target function and perform intelligent sampling.
This surrogate model can be applied even in the presence of noise; however, as
with most regression methods, it is very sensitive to outlier data. This can
result in erroneous predictions and, in the case of BO, biased and inefficient
exploration. In this work, we present a GP model that is robust to outliers,
using a Student-t likelihood to segregate outliers and conduct Bayesian
optimization robustly. We present numerical results evaluating the proposed
method on both artificial functions and real problems.
| Ruben Martinez-Cantin, Michael McCourt, Kevin Tee | null | 1707.05729 | null | null |
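For reference, the Student-t observation model that replaces the usual Gaussian likelihood in such a robust GP has the textbook form (notation ours, with degrees of freedom $\nu$ and scale $\sigma$):

$$p(y \mid f) = \frac{\Gamma\!\left(\tfrac{\nu+1}{2}\right)}{\Gamma\!\left(\tfrac{\nu}{2}\right)\sqrt{\nu\pi}\,\sigma}\left(1 + \frac{(y-f)^2}{\nu\sigma^2}\right)^{-\frac{\nu+1}{2}}.$$

Its heavy tails assign non-negligible probability to large residuals, so an outlying observation shifts the posterior far less than it would under a Gaussian likelihood.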
Choosing Smartly: Adaptive Multimodal Fusion for Object Detection in
Changing Environments | cs.RO cs.AI cs.CV cs.LG | Object detection is an essential task for autonomous robots operating in
dynamic and changing environments. A robot should be able to detect objects in
the presence of sensor noise that can be induced by changing lighting
conditions for cameras and false depth readings for range sensors, especially
RGB-D cameras. To tackle these challenges, we propose a novel adaptive fusion
approach for object detection that learns to weight the predictions of
different sensor modalities in an online manner. Our approach is based on a
mixture of convolutional neural network (CNN) experts and incorporates multiple
modalities including appearance, depth and motion. We test our method in
extensive robot experiments, in which we detect people in a combined indoor and
outdoor scenario from RGB-D data, and we demonstrate that our method can adapt
to harsh lighting changes and severe camera motion blur. Furthermore, we
present a new RGB-D dataset for people detection in mixed in- and outdoor
environments, recorded with a mobile robot. Code, pretrained models and dataset
are available at http://adaptivefusion.cs.uni-freiburg.de
| Oier Mees, Andreas Eitel, Wolfram Burgard | 10.1109/IROS.2016.7759048 | 1707.05733 | null | null |
Optimizing the Latent Space of Generative Networks | stat.ML cs.CV cs.LG | Generative Adversarial Networks (GANs) have achieved remarkable results in
the task of generating realistic natural images. In most successful
applications, GAN models share two common aspects: solving a challenging saddle
point optimization problem, interpreted as an adversarial game between
generator and discriminator functions; and parameterizing the generator and
the discriminator as deep convolutional neural networks. The goal of this paper
is to disentangle the contribution of these two factors to the success of GANs.
In particular, we introduce Generative Latent Optimization (GLO), a framework
to train deep convolutional generators using simple reconstruction losses.
Throughout a variety of experiments, we show that GLO enjoys many of the
desirable properties of GANs: synthesizing visually-appealing samples,
interpolating meaningfully between samples, and performing linear arithmetic
with noise vectors; all of this without the adversarial optimization scheme.
| Piotr Bojanowski, Armand Joulin, David Lopez-Paz, Arthur Szlam | null | 1707.05776 | null | null |
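A toy sketch of the latent-optimization idea, assuming a squared reconstruction loss and a linear "generator" so the gradients can be written by hand; GLO itself uses deep convolutional generators, and the constraint on the codes (here a simple unit-ball projection) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, data_dim, latent_dim, lr = 200, 32, 4, 1e-2

X = rng.normal(size=(n, data_dim))           # toy "dataset"
Z = rng.normal(size=(n, latent_dim))         # one learnable code per sample
W = rng.normal(size=(latent_dim, data_dim))  # toy linear "generator"

for step in range(500):
    R = Z @ W - X          # reconstruction residuals
    grad_W = Z.T @ R / n   # gradient of the mean squared error w.r.t. W
    grad_Z = R @ W.T / n   # ... and w.r.t. the per-sample latent codes
    W -= lr * grad_W       # update generator and codes jointly,
    Z -= lr * grad_Z       # with no discriminator anywhere
    # Keep codes bounded by projecting onto the unit l2 ball (assumed).
    Z /= np.maximum(1.0, np.linalg.norm(Z, axis=1, keepdims=True))
```

The essential point survives the simplification: both the generator parameters and one free latent vector per training image are optimized against a plain reconstruction loss, with no adversarial game.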
Improving Gibbs Sampler Scan Quality with DoGS | stat.ML cs.LG math.PR stat.ME | The pairwise influence matrix of Dobrushin has long been used as an
analytical tool to bound the rate of convergence of Gibbs sampling. In this
work, we use Dobrushin influence as the basis of a practical tool to certify
and efficiently improve the quality of a discrete Gibbs sampler. Our
Dobrushin-optimized Gibbs samplers (DoGS) offer customized variable selection
orders for a given sampling budget and variable subset of interest, explicit
bounds on total variation distance to stationarity, and certifiable
improvements over the standard systematic and uniform random scan Gibbs
samplers. In our experiments with joint image segmentation and object
recognition, Markov chain Monte Carlo maximum likelihood estimation, and Ising
model inference, DoGS consistently deliver higher-quality inferences with
significantly smaller sampling budgets than standard Gibbs samplers.
| Ioannis Mitliagkas and Lester Mackey | null | 1707.05807 | null | null |
A deep learning approach to diabetic blood glucose prediction | cs.LG math.NA | We consider the question of 30-minute prediction of blood glucose levels
measured by continuous glucose monitoring devices, using clinical data. While
most studies of this nature deal with one patient at a time, we take a certain
percentage of patients in the data set as training data, and test on the
remainder of the patients; i.e., the machine need not re-calibrate on the new
patients in the data set. We demonstrate how deep learning can outperform
shallow networks in this example. One novelty is to demonstrate how a
parsimonious deep representation can be constructed using domain knowledge.
| H.N. Mhaskar, S.V. Pereverzyev and M.D. van der Walt | 10.3389/fams.2017.00014 | 1707.05828 | null | null |
Multiscale Residual Mixture of PCA: Dynamic Dictionaries for Optimal
Basis Learning | stat.ML cs.LG | In this paper we are interested in the problem of learning an over-complete
basis and a methodology such that the reconstruction or inverse problem does
not need optimization. We analyze the optimality of the presented approaches
and their links to popular known techniques such as artificial neural
networks, k-means, and Oja's learning rule. We show that one way to reach the
optimal dictionary is a factorial and hierarchical approach; the derived
approach leads to the formulation of a Deep Oja Network. We present results on
different tasks along with the resulting very efficient learning algorithm,
which brings a new vision to the training of deep networks. Finally, the
theoretical work shows that deep frameworks are one way to efficiently obtain
an over-complete (combinatorially large) dictionary while still allowing easy
reconstruction. We thus present the Deep Residual Oja Network (DRON) and
demonstrate that a recursive deep approach working on the residuals allows an
exponential decrease of the error with respect to the depth.
| Randall Balestriero | null | 1707.0584 | null | null |
Linear Time Complexity Deep Fourier Scattering Network and Extension to
Nonlinear Invariants | stat.ML cs.LG | In this paper we propose a scalable version of a state-of-the-art
deterministic time-invariant feature extraction approach based on consecutive
changes of basis and nonlinearities, namely, the scattering network. The first
focus of the paper is to extend the scattering network to allow the use of
higher order nonlinearities as well as extracting nonlinear and Fourier based
statistics leading to the required invariants of any inherently structured
input. In order to reach fast convolutions and to leverage the intrinsic
structure of wavelets, we derive our complete model in the Fourier domain. In
addition to providing fast computations, we are able to exploit sparse
matrices, since the representations are extremely sparse and well localized in
the Fourier domain.
As a result, we are able to reach a true linear time complexity with inputs in
the Fourier domain allowing fast and energy efficient solutions to machine
learning tasks. Validation of the features and computational results will be
presented through the use of these invariant coefficients to perform
classification on audio recordings of bird songs captured in multiple different
soundscapes. In the end, the applicability of the presented solutions to deep
artificial neural networks is discussed.
| Randall Balestriero, Herve Glotin | null | 1707.05841 | null | null |
On-line Building Energy Optimization using Deep Reinforcement Learning | cs.LG cs.AI math.OC | Unprecedented high volumes of data are becoming available with the growth of
the advanced metering infrastructure. These are expected to benefit planning
and operation of the future power system, and to help the customers transition
from a passive to an active role. In this paper, we explore for the first time
in the smart grid context the benefits of using Deep Reinforcement Learning, a
hybrid type of methods that combines Reinforcement Learning with Deep Learning,
to perform on-line optimization of schedules for building energy management
systems. The learning procedure was explored using two methods, Deep Q-learning
and Deep Policy Gradient, both of them being extended to perform multiple
actions simultaneously. The proposed approach was validated on the large-scale
Pecan Street Inc. database. This highly-dimensional database includes
information about photovoltaic power generation, electric vehicles, and
building appliances. Moreover, these on-line energy scheduling strategies
could be used to provide real-time feedback to consumers to encourage more
efficient use of electricity.
| Elena Mocanu, Decebal Constantin Mocanu, Phuong H. Nguyen, Antonio
Liotta, Michael E. Webber, Madeleine Gibescu, J.G. Slootweg | null | 1707.05878 | null | null |
Recovering Latent Signals from a Mixture of Measurements using a
Gaussian Process Prior | stat.ML cs.LG | In sensing applications, sensors cannot always measure the latent quantity of
interest at the required resolution, sometimes they can only acquire a blurred
version of it due to the sensor's transfer function. To recover latent signals
when only noisy mixed measurements of the signal are available, we propose the
Gaussian process mixture of measurements (GPMM), which models the latent signal
as a Gaussian process (GP) and allows us to perform Bayesian inference on such
a signal conditioned on a set of noisy mixtures of measurements. We describe how
to train GPMM, that is, to find the hyperparameters of the GP and the mixing
weights, and how to perform inference on the latent signal under GPMM;
additionally, we identify the solution to the underdetermined linear system
resulting from a sensing application as a particular case of GPMM. The proposed
model is validated in the recovery of three signals: a smooth synthetic signal,
a real-world heart-rate time series and a step function, where GPMM
outperformed the standard GP in terms of estimation error, uncertainty
representation and recovery of the spectral content of the latent signal.
| Felipe Tobar, Gonzalo Rios, Tom\'as Valdivia, Pablo Guerrero | 10.1109/LSP.2016.2637312 | 1707.05909 | null | null |
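A hedged sketch of the structure the abstract describes: if the latent signal is a GP and each observation is a noisy linear mixture of its values, everything stays Gaussian and inference is in closed form. With assumed mixing weights $W$ (our notation):

$$f \sim \mathcal{GP}(0, k), \qquad y = Wf + \varepsilon, \quad \varepsilon \sim \mathcal{N}(0, \sigma^2 I),$$

so that $y \sim \mathcal{N}(0, WKW^\top + \sigma^2 I)$ for the kernel matrix $K$, and the posterior mean of $f$ at the observed inputs is $KW^\top(WKW^\top + \sigma^2 I)^{-1}y$. An underdetermined linear system corresponds to $W$ having fewer rows than columns.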
Equivalence between LINE and Matrix Factorization | cs.LG | LINE [1], as an efficient network embedding method, has shown its
effectiveness in dealing with large-scale undirected, directed, and/or weighted
networks. Particularly, it proposes to preserve both the local structure
(represented by First-order Proximity) and global structure (represented by
Second-order Proximity) of the network. In this study, we prove that LINE with
these two proximities (LINE(1st) and LINE(2nd)) is actually factoring two
different matrices separately. Specifically, LINE(1st) is factoring a matrix
$M^{(1)}$, whose entries are the doubled Pointwise Mutual Information (PMI) of
vertex pairs in undirected networks, shifted by a constant. LINE(2nd) is
factoring a matrix $M^{(2)}$, whose entries are the PMI of vertex and context
pairs in directed networks, shifted by a constant. We hope this finding
provides a basis for
further extensions and generalizations of LINE.
| Qiao Wang, Zheng Wang, Xiaojun Ye | null | 1707.05926 | null | null |
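A hedged restatement of the two factorizations described above, with $\tau_1$ and $\tau_2$ standing in for the unspecified shift constants (the abstract states that such constants exist but does not give their values):

$$M^{(1)}_{ij} = 2\,\mathrm{PMI}(v_i, v_j) - \tau_1, \qquad M^{(2)}_{ij} = \mathrm{PMI}(v_i, c_j) - \tau_2,$$

where $v_i, v_j$ are vertices and $c_j$ denotes a context.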
Generalization Bounds of SGLD for Non-convex Learning: Two Theoretical
Viewpoints | cs.LG math.OC stat.ML | Algorithm-dependent generalization error bounds are central to statistical
learning theory. A learning algorithm may use a large hypothesis space, but the
limited number of iterations controls its model capacity and generalization
error. The impacts of stochastic gradient methods on generalization error for
non-convex learning problems not only have important theoretical consequences,
but are also critical to generalization errors of deep learning.
In this paper, we study the generalization errors of Stochastic Gradient
Langevin Dynamics (SGLD) with non-convex objectives. Two theories are proposed
with non-asymptotic discrete-time analysis, using Stability and PAC-Bayesian
results respectively. The stability-based theory obtains a bound of
$O\left(\frac{1}{n}L\sqrt{\beta T_k}\right)$, where $L$ is the uniform Lipschitz
parameter, $\beta$ is the inverse temperature, and $T_k$ is the aggregated step size.
For PAC-Bayesian theory, though the bound has a slower $O(1/\sqrt{n})$ rate,
the contribution of each step is shown with an exponentially decaying factor by
imposing $\ell^2$ regularization, and the uniform Lipschitz constant is also
replaced by actual norms of gradients along trajectory. Our bounds have no
implicit dependence on dimensions, norms or other capacity measures of
parameter, which elegantly characterizes the phenomenon of "Fast Training
Guarantees Generalization" in non-convex settings. This is the first
algorithm-dependent result with reasonable dependence on aggregated step sizes
for non-convex learning, and has important implications to statistical learning
aspects of stochastic gradient methods in complicated models such as deep
learning.
| Wenlong Mou, Liwei Wang, Xiyu Zhai, Kai Zheng | null | 1707.05947 | null | null |
Generic Black-Box End-to-End Attack Against State of the Art API Call
Based Malware Classifiers | cs.CR cs.LG cs.NE | In this paper, we present a black-box attack against API call based machine
learning malware classifiers, focusing on generating adversarial sequences
combining API calls and static features (e.g., printable strings) that will be
misclassified by the classifier without affecting the malware functionality. We
show that this attack is effective against many classifiers due to the
transferability principle between RNN variants, feed forward DNNs, and
traditional machine learning classifiers such as SVM. We also implement GADGET,
a software framework to convert any malware binary to a binary undetected by
malware classifiers, using the proposed attack, without access to the malware
source code.
| Ishai Rosenberg, Asaf Shabtai, Lior Rokach, and Yuval Elovici | null | 1707.0597 | null | null |
Probably approximate Bayesian computation: nonasymptotic convergence of
ABC under misspecification | math.ST cs.LG stat.CO stat.TH | Approximate Bayesian computation (ABC) is a widely used inference method in
Bayesian statistics to bypass the point-wise computation of the likelihood. In
this paper we develop theoretical bounds for the distance between the
statistics used in ABC. We show that some versions of ABC are inherently robust
to misspecification. The bounds are given in the form of oracle inequalities
for a finite sample size. The dependence on the dimension of the parameter
space and the number of statistics is made explicit. The results are shown to
be amenable to oracle inequalities in parameter space. We apply our theoretical
results to given prior distributions and data generating processes, including a
non-parametric regression model. In the second part of the paper, we propose a
sequential Monte Carlo (SMC) sampler to draw from the pseudo-posterior,
improving upon state-of-the-art samplers.
| James Ridgway | null | 1707.05987 | null | null |
Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in
Speech Recognition | cs.CL cs.LG | Layer normalization is a recently introduced technique for normalizing the
activities of neurons in deep neural networks to improve the training speed and
stability. In this paper, we introduce a new layer normalization technique
called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling
in speech recognition. By dynamically generating the scaling and shifting
parameters in layer normalization, DLN adapts neural acoustic models to the
acoustic variability arising from various factors such as speakers, channel
noises, and environments. Unlike other adaptive acoustic models, our proposed
approach does not require additional adaptation data or speaker information
such as i-vectors. Moreover, the model size is fixed as it dynamically
generates adaptation parameters. We apply our proposed DLN to deep
bidirectional LSTM acoustic models and evaluate them on two benchmark datasets
for large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The
experimental results show that our DLN improves neural acoustic models in terms
of transcription accuracy by dynamically adapting to various speakers and
environments.
| Taesup Kim, Inchul Song, Yoshua Bengio | null | 1707.06065 | null | null |
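In symbols, where standard layer normalization applies fixed learned scale and shift vectors $\alpha$ and $\beta$ to the standardized activations, DLN as described above makes them input-dependent (notation ours):

$$\mathrm{DLN}(h) = \frac{h - \mu(h)}{\sigma(h)} \odot \alpha(x) + \beta(x),$$

where $\mu(h)$ and $\sigma(h)$ are the mean and standard deviation of the layer's activations and $\alpha(x), \beta(x)$ are generated dynamically from a summary $x$ of the current utterance, which is how the model adapts without i-vectors or extra adaptation data.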
Naive Bayes Classification for Subset Selection | cs.LG | This article focuses on the question of learning how to automatically select
a subset of items from a larger set. We introduce a methodology for the
inference of ensembles of discrete values, based on the Naive Bayes assumption.
Our motivation stems from practical use cases where one wishes to predict an
unordered set of (possibly interdependent) values from a set of observed
features. This problem can be considered in the context of Multi-label
Classification (MLC) where such values are seen as labels associated to
continuous or discrete features. We introduce the \nbx algorithm, an extension
of Naive Bayes classification into the multi-label domain, discuss its
properties and evaluate our approach on real-world problems.
| Luca Mossina, Emmanuel Rachelson | null | 1707.06142 | null | null |
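The \nbx algorithm itself is not specified in the abstract; as a point of reference, a minimal binary-relevance sketch in NumPy conveys the setting it extends: one Bernoulli Naive Bayes model per label, predicting an unordered subset of labels from binary features (all names and the smoothing constant here are ours).

```python
import numpy as np

def fit_nb_per_label(X, Y, alpha=1.0):
    """Fit one Bernoulli Naive Bayes model per label (binary relevance).

    X: (n, d) binary features; Y: (n, L) binary label matrix. Returns
    log-priors of shape (L, 2) and feature log-probs of shape (L, 2, d).
    """
    n, d = X.shape
    L = Y.shape[1]
    log_prior = np.empty((L, 2))
    log_theta = np.empty((L, 2, d))  # log P(feature = 1 | label class)
    for l in range(L):
        for c in (0, 1):
            rows = X[Y[:, l] == c]
            log_prior[l, c] = np.log((len(rows) + alpha) / (n + 2 * alpha))
            theta = (rows.sum(axis=0) + alpha) / (len(rows) + 2 * alpha)
            log_theta[l, c] = np.log(theta)
    return log_prior, log_theta

def predict_subset(x, log_prior, log_theta):
    """Predict the set of label indices for one binary feature vector."""
    ll = (log_theta * x + np.log1p(-np.exp(log_theta)) * (1 - x)).sum(axis=2)
    return np.flatnonzero((log_prior + ll).argmax(axis=1))
```

Binary relevance treats labels independently; the appeal of a dedicated subset-selection method is precisely to capture the interdependence between labels that this baseline ignores.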
Self-paced Convolutional Neural Network for Computer Aided Detection in
Medical Imaging Analysis | cs.CV cs.LG stat.ML | Tissue characterization has long been an important component of Computer
Aided Diagnosis (CAD) systems for automatic lesion detection and further
clinical planning. Motivated by the superior performance of deep learning
methods on various computer vision problems, there has been increasing work
applying deep learning to medical image analysis. However, the development of a
robust and reliable deep learning model for computer-aided diagnosis is still
highly challenging due to the combination of the high heterogeneity in the
medical images and the relative lack of training samples. Specifically,
annotation and labeling of the medical images is much more expensive and
time-consuming than other applications and often involves manual labor from
multiple domain experts. In this work, we propose a multi-stage, self-paced
learning framework utilizing a convolutional neural network (CNN) to classify
Computed Tomography (CT) image patches. The key contribution of this approach
is that we augment the size of training samples by refining the unlabeled
instances with a self-paced learning CNN. By implementing the framework on high
performance computing servers, including the NVIDIA DGX-1 machine, we obtained
experimental results showing that the self-paced boosted network
consistently outperformed the original network even with very scarce manual
labels. The performance gain indicates that applications with limited training
samples such as medical image analysis can benefit from using the proposed
framework.
| Xiang Li, Aoxiao Zhong, Ming Lin, Ning Guo, Mu Sun, Arkadiusz Sitek,
Jieping Ye, James Thrall, Quanzheng Li | 10.1007/978-3-319-67389-9_25 | 1707.06145 | null | null |
Learning model-based planning from scratch | cs.AI cs.LG cs.NE stat.ML | Conventional wisdom holds that model-based planning is a powerful approach to
sequential decision-making. It is often very challenging in practice, however,
because while a model can be used to evaluate a plan, it does not prescribe how
to construct a plan. Here we introduce the "Imagination-based Planner", the
first model-based, sequential decision-making agent that can learn to
construct, evaluate, and execute plans. Before any action, it can perform a
variable number of imagination steps, which involve proposing an imagined
action and evaluating it with its model-based imagination. All imagined actions
and outcomes are aggregated, iteratively, into a "plan context" which
conditions future real and imagined actions. The agent can even decide how to
imagine: testing out alternative imagined actions, chaining sequences of
actions together, or building a more complex "imagination tree" by navigating
flexibly among the previously imagined states using a learned policy. And our
agent can learn to plan economically, jointly optimizing for external rewards
and computational costs associated with using its imagination. We show that our
architecture can learn to solve a challenging continuous control problem, and
also learn elaborate planning strategies in a discrete maze-solving task. Our
work opens a new direction toward learning the components of a model-based
planning system and how to use them.
| Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing,
Sebastien Racani\`ere, David Reichert, Th\'eophane Weber, Daan Wierstra,
Peter Battaglia | null | 1707.0617 | null | null |
Deformable Part-based Fully Convolutional Network for Object Detection | cs.CV cs.AI cs.LG | Existing region-based object detectors are limited to regions with fixed box
geometry to represent objects, even if those are highly non-rectangular. In
this paper we introduce DP-FCN, a deep model for object detection which
explicitly adapts to shapes of objects with deformable parts. Without
additional annotations, it learns to focus on discriminative elements and to
align them, and simultaneously brings more invariance for classification and
geometric information to refine localization. DP-FCN is composed of three main
modules: a Fully Convolutional Network to efficiently maintain spatial
resolution, a deformable part-based RoI pooling layer to optimize positions of
parts and build invariance, and a deformation-aware localization module
explicitly exploiting displacements of parts to improve accuracy of bounding
box regression. We experimentally validate our model and show significant
gains. DP-FCN achieves state-of-the-art performance of 83.1% and 80.9% on
PASCAL VOC 2007 and 2012 with VOC data only.
| Taylor Mordan, Nicolas Thome, Matthieu Cord and Gilles Henaff | null | 1707.06175 | null | null |
Can GAN Learn Topological Features of a Graph? | cs.LG stat.ML | This paper presents a first line of research expanding GANs into graph topology
analysis. By leveraging the hierarchical connectivity structure of a graph, we
have demonstrated that generative adversarial networks (GANs) can successfully
capture topological features of any arbitrary graph, and rank edge sets by
different stages according to their contribution to topology reconstruction.
Moreover, in addition to acting as an indicator of graph reconstruction, we
find that these stages can also preserve important topological features in a
graph.
| Weiyi Liu and Pin-Yu Chen and Hal Cooper and Min Hwan Oh and Sailung
Yeung and Toyotaro Suzumura | null | 1707.06197 | null | null |
Imagination-Augmented Agents for Deep Reinforcement Learning | cs.LG cs.AI stat.ML | We introduce Imagination-Augmented Agents (I2As), a novel architecture for
deep reinforcement learning combining model-free and model-based aspects. In
contrast to most existing model-based reinforcement learning and planning
methods, which prescribe how a model should be used to arrive at a policy, I2As
learn to interpret predictions from a learned environment model to construct
implicit plans in arbitrary ways, by using the predictions as additional
context in deep policy networks. I2As show improved data efficiency,
performance, and robustness to model misspecification compared to several
baselines.
| Th\'eophane Weber, S\'ebastien Racani\`ere, David P. Reichert, Lars
Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdom\`enech Badia,
Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia,
Demis Hassabis, David Silver, Daan Wierstra | null | 1707.06203 | null | null |
Analysis of $p$-Laplacian Regularization in Semi-Supervised Learning | math.ST cs.LG stat.ML stat.TH | We investigate a family of regression problems in a semi-supervised setting.
The task is to assign real-valued labels to a set of $n$ sample points,
provided a small training subset of $N$ labeled points. A goal of
semi-supervised learning is to take advantage of the (geometric) structure
provided by the large number of unlabeled data when assigning labels. We
consider random geometric graphs, with connection radius $\epsilon(n)$, to
represent the geometry of the data set. Functionals which model the task reward
the regularity of the estimator function and impose or reward the agreement
with the training data. Here we consider the discrete $p$-Laplacian
regularization.
We investigate asymptotic behavior when the number of unlabeled points
increases, while the number of training points remains fixed. We uncover a
delicate interplay between the regularizing nature of the functionals
considered and the nonlocality inherent to the graph constructions. We
rigorously obtain almost optimal ranges on the scaling of $\epsilon(n)$ for the
asymptotic consistency to hold. We prove that the minimizers of the discrete
functionals in random setting converge uniformly to the desired continuum
limit. Furthermore we discover that for the standard model used there is a
restrictive upper bound on how quickly $\epsilon(n)$ must converge to zero as
$n \to \infty$. We introduce a new model which is as simple as the original
model, but overcomes this restriction.
| Dejan Slep\v{c}ev and Matthew Thorpe | null | 1707.06213 | null | null |
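For concreteness, the discrete $p$-Laplacian regularizer on a weighted graph with edge weights $w_{ij}$ takes the standard form (notation ours)

$$J_p(u) = \frac{1}{p}\sum_{i,j} w_{ij}\,|u_i - u_j|^p,$$

minimized over label functions $u$ constrained to (or penalized toward) the given values on the $N$ training points; the results above concern the behavior of its minimizers as $n \to \infty$ with $N$ fixed.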
Worst-case vs Average-case Design for Estimation from Fixed Pairwise
Comparisons | cs.LG cs.AI cs.IT math.IT stat.ML | Pairwise comparison data arises in many domains, including tournament
rankings, web search, and preference elicitation. Given noisy comparisons of a
fixed subset of pairs of items, we study the problem of estimating the
underlying comparison probabilities under the assumption of strong stochastic
transitivity (SST). We also consider the noisy sorting subclass of the SST
model. We show that when the assignment of items to the topology is arbitrary,
these permutation-based models, unlike their parametric counterparts, do not
admit consistent estimation for most comparison topologies used in practice. We
then demonstrate that consistent estimation is possible when the assignment of
items to the topology is randomized, thus establishing a dichotomy between
worst-case and average-case designs. We propose two estimators in the
average-case setting and analyze their risk, showing that it depends on the
comparison topology only through the degree sequence of the topology. The rates
achieved by these estimators are shown to be optimal for a large class of
graphs. Our results are corroborated by simulations on multiple comparison
topologies.
| Ashwin Pananjady, Cheng Mao, Vidya Muthukumar, Martin J. Wainwright,
Thomas A. Courtade | null | 1707.06217 | null | null |
The Role of Conversation Context for Sarcasm Detection in Online
Interactions | cs.CL cs.AI cs.LG | Computational models for sarcasm detection have often relied on the content
of utterances in isolation. However, a speaker's sarcastic intent is not always
obvious without additional context. Focusing on social media discussions, we
investigate two issues: (1) does modeling of conversation context help in
sarcasm detection and (2) can we understand what part of conversation context
triggered the sarcastic reply. To address the first issue, we investigate
several types of Long Short-Term Memory (LSTM) networks that can model both the
conversation context and the sarcastic response. We show that the conditional
LSTM network (Rocktaschel et al., 2015) and LSTM networks with sentence level
attention on context and response outperform the LSTM model that reads only the
response. To address the second issue, we present a qualitative analysis of
attention weights produced by the LSTM models with attention and discuss the
results compared with human performance on the task.
| Debanjan Ghosh, Alexander Richard Fabbri, Smaranda Muresan | null | 1707.06226 | null | null |
From Bach to the Beatles: The simulation of human tonal expectation
using ecologically-trained predictive models | cs.SD cs.LG | Tonal structure is in part conveyed by statistical regularities between
musical events, and research has shown that computational models reflect tonal
structure in music by capturing these regularities in schematic constructs like
pitch histograms. Of the few studies that model the acquisition of perceptual
learning from musical data, most have employed self-organizing models that
learn a topology of static descriptions of musical contexts. Also, the stimuli
used to train these models are often symbolic rather than acoustically faithful
representations of musical material. In this work we investigate whether
sequential predictive models of musical memory (specifically, recurrent neural
networks), trained on audio from commercial CD recordings, induce tonal
knowledge in a similar manner to listeners (as shown in behavioral studies in
music perception). Our experiments indicate that various types of recurrent
neural networks produce musical expectations that clearly convey tonal
structure. Furthermore, the results imply that although implicit knowledge of
tonal structure is a necessary condition for accurate musical expectation, the
most accurate predictive models also use other cues beyond the tonal structure
of the musical context.
| Carlos Cancino-Chac\'on, Maarten Grachten, Kat Agres | null | 1707.06231 | null | null |
Learning Approximate Neural Estimators for Wireless Channel State
Information | cs.LG cs.NE | Estimation is a critical component of synchronization in wireless and signal
processing systems. There is a rich body of work on estimator derivation,
optimization, and statistical characterization from analytic system models
which are used pervasively today. We explore an alternative approach to
building estimators that relies principally on approximate regression over
large datasets, using computationally efficient artificial neural network
models capable of learning non-linear function mappings that provide compact
and accurate estimates. For single-carrier PSK modulation, we explore the
accuracy and computational complexity of such estimators compared with the
current gold-standard analytically derived alternatives. We compare performance
in various wireless operating conditions and consider the trade-offs between
the two different classes of systems. Our results show the learned estimators
can provide improvements in areas such as short-time estimation and estimation
under non-trivial real world channel conditions such as fading or other
non-linear hardware or propagation effects.
| Timothy J. O'Shea, Kiran Karra, T. Charles Clancy | null | 1707.0626 | null | null |
Non-Asymptotic Uniform Rates of Consistency for k-NN Regression | stat.ML cs.LG | We derive high-probability finite-sample uniform rates of consistency for
$k$-NN regression that are optimal up to logarithmic factors under mild
assumptions. We moreover show that $k$-NN regression adapts to an unknown lower
intrinsic dimension automatically. We then apply the $k$-NN regression rates to
establish new results about estimating the level sets and global maxima of a
function from noisy observations.
| Heinrich Jiang | null | 1707.06261 | null | null |
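A minimal NumPy sketch of the estimator these rates concern, in its textbook form (uniform averaging over the $k$ nearest neighbours; the choice of $k$ is left to the user):

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, k=5):
    """k-NN regression: average the targets of the k nearest points.

    X_train: (n, d), y_train: (n,), X_query: (m, d); returns (m,).
    """
    # Pairwise squared Euclidean distances, shape (m, n).
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    nn = np.argpartition(d2, k, axis=1)[:, :k]  # k nearest per query
    return y_train[nn].mean(axis=1)
```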
Deformable Registration through Learning of Context-Specific Metric
Aggregation | cs.CV cs.LG | We propose a novel weakly supervised discriminative algorithm for learning
context specific registration metrics as a linear combination of conventional
similarity measures. Conventional metrics have been extensively used over the
past two decades and therefore both their strengths and limitations are known.
The challenge is to find the optimal relative weighting (or parameters) of
different metrics forming the similarity measure of the registration algorithm.
Hand-tuning these parameters would result in suboptimal solutions and quickly
become infeasible as the number of metrics increases. Furthermore, such
hand-crafted combination can only happen at global scale (entire volume) and
therefore will not be able to account for the different tissue properties. We
propose a learning algorithm for estimating these parameters locally,
conditioned on the data semantic classes. The objective function of our
formulation is a special case of non-convex function, a difference of convex
functions, which we optimize using the concave-convex procedure. As a proof of
concept, we show the impact of our approach on three challenging datasets for
different anatomical structures and modalities.
| Enzo Ferrante and Puneet K Dokania and Rafael Marini and Nikos
Paragios | null | 1707.06263 | null | null |
Unsupervised Domain Adaptation for Robust Speech Recognition via
Variational Autoencoder-Based Data Augmentation | cs.CL cs.LG | Domain mismatch between training and testing can lead to significant
degradation in performance in many machine learning scenarios. Unfortunately,
this is not a rare situation for automatic speech recognition deployments in
real-world applications. Research on robust speech recognition can be regarded
as trying to overcome this domain mismatch issue. In this paper, we address the
unsupervised domain adaptation problem for robust speech recognition, where
both source and target domain speech are presented, but word transcripts are
only available for the source domain speech. We present novel
augmentation-based methods that transform speech in a way that does not change
the transcripts. Specifically, we first train a variational autoencoder on both
source and target domain data (without supervision) to learn a latent
representation of speech. We then transform nuisance attributes of speech that
are irrelevant to recognition by modifying the latent representations, in order
to augment labeled training data with additional data whose distribution is
more similar to the target domain. The proposed method is evaluated on the
CHiME-4 dataset and reduces the absolute word error rate (WER) by as much as
35% compared to the non-adapted baseline.
| Wei-Ning Hsu, Yu Zhang, James Glass | null | 1707.06265 | null | null |
Proximal Policy Optimization Algorithms | cs.LG | We propose a new family of policy gradient methods for reinforcement
learning, which alternate between sampling data through interaction with the
environment, and optimizing a "surrogate" objective function using stochastic
gradient ascent. Whereas standard policy gradient methods perform one gradient
update per data sample, we propose a novel objective function that enables
multiple epochs of minibatch updates. The new methods, which we call proximal
policy optimization (PPO), have some of the benefits of trust region policy
optimization (TRPO), but they are much simpler to implement, more general, and
have better sample complexity (empirically). Our experiments test PPO on a
collection of benchmark tasks, including simulated robotic locomotion and Atari
game playing, and we show that PPO outperforms other online policy gradient
methods, and overall strikes a favorable balance between sample complexity,
simplicity, and wall-time.
| John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg
Klimov | null | 1707.06347 | null | null |
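The clipped surrogate at the heart of PPO is compact enough to state directly; here is a NumPy sketch of the per-batch objective (to be maximized over multiple minibatch epochs), with the clip range eps = 0.2 as a typical default:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective of PPO (a quantity to maximize).

    logp_new / logp_old: log-probabilities of the taken actions under
    the current policy and the data-collecting policy; advantages:
    advantage estimates for those actions. All arrays share one shape.
    """
    ratio = np.exp(logp_new - logp_old)            # probability ratio
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    # Pessimistic bound: take the elementwise minimum of the unclipped
    # and clipped surrogates, which removes the incentive to move the
    # ratio far outside [1 - eps, 1 + eps].
    return np.minimum(ratio * advantages, clipped * advantages).mean()
```

Because the objective only bounds how far the new policy may drift from the old one, the same batch of samples can safely be reused for several gradient steps, which is the source of the improved sample complexity mentioned above.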
Pragmatic-Pedagogic Value Alignment | cs.AI cs.HC cs.LG cs.RO | As intelligent systems gain autonomy and capability, it becomes vital to
ensure that their objectives match those of their human users; this is known as
the value-alignment problem. In robotics, value alignment is key to the design
of collaborative robots that can integrate into human workflows, successfully
inferring and adapting to their users' objectives as they go. We argue that a
meaningful solution to value alignment must combine multi-agent decision theory
with rich mathematical models of human cognition, enabling robots to tap into
people's natural collaborative capabilities. We present a solution to the
cooperative inverse reinforcement learning (CIRL) dynamic game based on
well-established cognitive models of decision making and theory of mind. The
solution captures a key reciprocity relation: the human will not plan her
actions in isolation, but rather reason pedagogically about how the robot might
learn from them; the robot, in turn, can anticipate this and interpret the
human's actions pragmatically. To our knowledge, this work constitutes the
first formal analysis of value alignment grounded in empirically validated
cognitive models.
| Jaime F. Fisac, Monica A. Gates, Jessica B. Hamrick, Chang Liu, Dylan
Hadfield-Menell, Malayandi Palaniappan, Dhruv Malik, S. Shankar Sastry,
Thomas L. Griffiths, and Anca D. Dragan | null | 1707.06354 | null | null |
Domain Adaptation by Using Causal Inference to Predict Invariant
Conditional Distributions | cs.LG stat.ML | An important goal common to domain adaptation and causal inference is to make
accurate predictions when the distributions for the source (or training)
domain(s) and target (or test) domain(s) differ. In many cases, these different
distributions can be modeled as different contexts of a single underlying
system, in which each distribution corresponds to a different perturbation of
the system, or in causal terms, an intervention. We focus on a class of such
causal domain adaptation problems, where data for one or more source domains
are given, and the task is to predict the distribution of a certain target
variable from measurements of other variables in one or more target domains. We
propose an approach for solving these problems that exploits causal inference
and does not rely on prior knowledge of the causal graph, the type of
interventions or the intervention targets. We demonstrate our approach by
evaluating a possible implementation on simulated and real world data.
| Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers,
Philip Versteeg, Joris M. Mooij | null | 1707.06422 | null | null |
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite
Optimization | math.OC cs.LG stat.ML | Due to their simplicity and excellent performance, parallel asynchronous
variants of stochastic gradient descent have become popular methods to solve a
wide range of large-scale optimization problems on multi-core architectures.
Yet, despite their practical success, support for nonsmooth objectives is still
lacking, making them unsuitable for many problems of interest in machine
learning, such as the Lasso, group Lasso or empirical risk minimization with
convex constraints.
In this work, we propose and analyze ProxASAGA, a fully asynchronous sparse
method inspired by SAGA, a variance reduced incremental gradient algorithm. The
proposed method is easy to implement and significantly outperforms the state of
the art on several nonsmooth, large-scale problems. We prove that our method
achieves a theoretical linear speedup with respect to the sequential version
under assumptions on the sparsity of gradients and block-separability of the
proximal term. Empirical benchmarks on a multi-core architecture illustrate
practical speedups of up to 12x on a 20-core machine.
| Fabian Pedregosa, R\'emi Leblond, Simon Lacoste-Julien | null | 1707.06468 | null | null |
Deep Layer Aggregation | cs.CV cs.LG | Visual recognition requires rich representations that span levels from low to
high, scales from small to large, and resolutions from fine to coarse. Even
with the depth of features in a convolutional network, a layer in isolation is
not enough: compounding and aggregating these representations improves
inference of what and where. Architectural efforts are exploring many
dimensions for network backbones, designing deeper or wider architectures, but
how to best aggregate layers and blocks across a network deserves further
attention. Although skip connections have been incorporated to combine layers,
these connections have been "shallow" themselves, and only fuse by simple,
one-step operations. We augment standard architectures with deeper aggregation
to better fuse information across layers. Our deep layer aggregation structures
iteratively and hierarchically merge the feature hierarchy to make networks
with better accuracy and fewer parameters. Experiments across architectures and
tasks show that deep layer aggregation improves recognition and resolution
compared to existing branching and merging schemes. The code is at
https://github.com/ucbdrive/dla.
| Fisher Yu, Dequan Wang, Evan Shelhamer, Trevor Darrell | null | 1707.06484 | null | null |
A Nonlinear Kernel Support Matrix Machine for Matrix Learning | stat.ML cs.LG | In many problems of supervised tensor learning (STL), real world data such as
face images or MRI scans are naturally represented as matrices, which are also
called second-order tensors. Most existing classifiers based on tensor
representation, such as the support tensor machine (STM), must be solved
iteratively, which takes much time and may suffer from local minima. In this paper, we
present a kernel support matrix machine (KSMM) to perform supervised learning
when data are represented as matrices. KSMM is a general framework for the
construction of matrix-based hyperplane to exploit structural information. We
analyze a unifying optimization problem for which we propose an asymptotically
convergent algorithm. Theoretical analysis for the generalization bounds is
derived based on Rademacher complexity with respect to a probability
distribution. We demonstrate the merits of the proposed method through
exhaustive experiments on both a simulation study and a number of real-world
datasets from a variety of application domains.
| Yunfei Ye | 10.1007/s13042-018-0896-4 | 1707.06487 | null | null |
Language Transfer of Audio Word2Vec: Learning Audio Segment
Representations without Target Language Data | cs.CL cs.LG | Audio Word2Vec offers vector representations of fixed dimensionality for
variable-length audio segments using Sequence-to-sequence Autoencoder (SA).
These vector representations are shown to describe the sequential phonetic
structures of the audio segments to a good degree, with real world applications
such as query-by-example Spoken Term Detection (STD). This paper examines the
capability of language transfer of Audio Word2Vec. We train SA from one
language (source language) and use it to extract the vector representation of
the audio segments of another language (target language). We found that SA can
still capture the phonetic structure of the audio segments of the target
language if the source and target languages are similar. In query-by-example
STD, we obtain the vector representations from the SA learned from a large
amount of source language data, and found that they surpass the
representations from a naive encoder and from an SA directly learned from a
small amount of target language data. The result shows that it is possible to
learn an Audio Word2Vec model from
high-resource languages and use it on low-resource languages. This further
expands the usability of Audio Word2Vec.
| Chia-Hao Shen, Janet Y. Sung, Hung-Yi Lee | null | 1707.06519 | null | null |
Single-Channel Multi-talker Speech Recognition with Permutation
Invariant Training | cs.SD cs.CL cs.LG eess.AS | Although great progress has been made in automatic speech recognition
(ASR), significant performance degradation is still observed when recognizing
multi-talker mixed speech. In this paper, we propose and evaluate several
architectures to address this problem under the assumption that only a single
channel of mixed signal is available. Our technique extends permutation
invariant training (PIT) by introducing the front-end feature separation module
with the minimum mean square error (MSE) criterion and the back-end recognition
module with the minimum cross entropy (CE) criterion. More specifically, during
training we compute the average MSE or CE over the whole utterance for each
possible utterance-level output-target assignment, pick the one with the
minimum MSE or CE, and optimize for that assignment. This strategy elegantly
solves the label permutation problem observed in the deep learning based
multi-talker mixed speech separation and recognition systems. The proposed
architectures are evaluated and compared on an artificially mixed AMI dataset
with both two- and three-talker mixed speech. The experimental results indicate
that our proposed architectures can cut the word error rate (WER) by 45.0% and
25.0% relatively against the state-of-the-art single-talker speech recognition
system across all speakers when their energies are comparable, for two- and
three-talker mixed speech, respectively. To our knowledge, this is the first
work on the multi-talker mixed speech recognition on the challenging
speaker-independent spontaneous large vocabulary continuous speech task.
| Yanmin Qian, Xuankai Chang and Dong Yu | null | 1707.06527 | null | null |
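As an illustration of the utterance-level assignment search described in the PIT abstract above, here is a minimal NumPy sketch; the array shapes and the use of a plain MSE criterion are illustrative assumptions, not the authors' implementation.

```python
from itertools import permutations
import numpy as np

def pit_mse(outputs, targets):
    """outputs, targets: (num_speakers, frames, feat_dim) arrays."""
    n = outputs.shape[0]
    best_err, best_perm = np.inf, None
    for perm in permutations(range(n)):
        # utterance-level average MSE for this output-target assignment
        err = np.mean((outputs[list(perm)] - targets) ** 2)
        if err < best_err:
            best_err, best_perm = err, perm
    return best_err, best_perm

# toy two-talker example
out = np.random.randn(2, 100, 40)
tgt = np.random.randn(2, 100, 40)
loss, assignment = pit_mse(out, tgt)  # train on the minimizing assignment
```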
Discretization-free Knowledge Gradient Methods for Bayesian Optimization | stat.ML cs.AI cs.LG math.OC math.PR | This paper studies Bayesian ranking and selection (R&S) problems with
correlated prior beliefs and continuous domains, i.e. Bayesian optimization
(BO). Knowledge gradient methods [Frazier et al., 2008, 2009] have been widely
studied for discrete R&S problems, which sample the one-step Bayes-optimal
point. When used over continuous domains, previous work on the knowledge
gradient [Scott et al., 2011, Wu and Frazier, 2016, Wu et al., 2017] often relies
on a discretized finite approximation. However, the discretization introduces
error and scales poorly as the dimension of the domain grows. In this paper, we
develop a fast discretization-free knowledge gradient method for Bayesian
optimization. Our method is not restricted to the fully sequential setting, but
is useful in all settings where the knowledge gradient can be used over continuous
domains. We show how our method can be generalized to handle (i) suggesting
batches of points (parallel knowledge gradient); and (ii) the setting where
derivative information is available in the optimization process
(derivative-enabled knowledge gradient). In numerical experiments, we
demonstrate that the discretization-free knowledge gradient method finds global
optima significantly faster than previous Bayesian optimization algorithms on
both synthetic test functions and real-world applications, especially when
function evaluations are noisy; and derivative-enabled knowledge gradient can
further improve performance, even outperforming gradient-based
optimizers such as BFGS when derivative information is available.
| Jian Wu and Peter I. Frazier | null | 1707.06541 | null | null |
High-risk learning: acquiring new word vectors from tiny data | cs.CL cs.LG | Distributional semantics models are known to struggle with small data. It is
generally accepted that in order to learn 'a good vector' for a word, a model
must have sufficient examples of its usage. This contrasts with the fact that
humans can guess the meaning of a word from only a few occurrences. In this
paper, we show that a neural language model such as Word2Vec requires only
minor modifications to its standard architecture to learn new terms from tiny
data, using background knowledge from a previously learnt semantic space. We
test our model on word definitions and on a nonce task involving 2-6 sentences'
worth of context, showing a large increase in performance over state-of-the-art
models on the definitional task.
| Aurelie Herbelot and Marco Baroni | null | 1707.06556 | null | null |
VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop | cs.LG cs.CL cs.SD | We present a new neural text to speech (TTS) method that is able to transform
text to speech in voices that are sampled in the wild. Unlike other systems,
our solution is able to deal with unconstrained voice samples and without
requiring aligned phonemes or linguistic features. The network architecture is
simpler than those in the existing literature and is based on a novel shifting
buffer working memory. The same buffer is used for estimating the attention,
computing the output audio, and for updating the buffer itself. The input
sentence is encoded using a context-free lookup table that contains one entry
per character or phoneme. The speakers are similarly represented by a short
vector that can also be fitted to new identities, even with only a few samples.
Variability in the generated speech is achieved by priming the buffer prior to
generating the audio. Experimental results on several datasets demonstrate
convincing capabilities, making TTS accessible to a wider range of
applications. In order to promote reproducibility, we release our source code
and models.
| Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani | null | 1707.06588 | null | null |
Decoupled classifiers for fair and efficient machine learning | cs.LG cs.CY | When it is ethical and legal to use a sensitive attribute (such as gender or
race) in machine learning systems, the question remains how to do so. We show
that the naive application of machine learning algorithms using sensitive
features leads to an inherent tradeoff in accuracy between groups. We provide a
simple and efficient decoupling technique, that can be added on top of any
black-box machine learning algorithm, to learn different classifiers for
different groups. Transfer learning is used to mitigate the problem of having
too little data on any one group.
The method can apply to a range of fairness criteria. In particular, we
require the application designer to specify a joint loss function that makes
the trade-off between fairness and accuracy explicit. Our reduction is shown to
efficiently find the minimum loss as long as the objective has a certain
natural monotonicity property which may be of independent interest in the study
of fairness in algorithms.
| Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, Max Leiserson | null | 1707.06613 | null | null |
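A minimal sketch of the decoupling step described above, assuming scikit-learn and a logistic-regression base learner; the joint-loss minimization and transfer-learning components of the paper are omitted, and the group attribute and data are synthetic stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_decoupled(X, y, groups):
    """Fit one classifier per value of the (hypothetical) sensitive attribute."""
    return {g: LogisticRegression(max_iter=1000).fit(X[groups == g], y[groups == g])
            for g in np.unique(groups)}

def predict_decoupled(models, X, groups):
    yhat = np.empty(len(X), dtype=int)
    for g, model in models.items():
        mask = groups == g
        yhat[mask] = model.predict(X[mask])   # route each row to its group model
    return yhat

# synthetic example with two groups
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
groups = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * groups > 0).astype(int)
models = train_decoupled(X, y, groups)
```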
Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex
Optimization | stat.ML cs.LG math.OC | We present a unified framework to analyze the global convergence of Langevin
dynamics based algorithms for nonconvex finite-sum optimization with $n$
component functions. At the core of our analysis is a direct analysis of the
ergodicity of the numerical approximations to Langevin dynamics, which leads to
faster convergence rates. Specifically, we show that gradient Langevin dynamics
(GLD) and stochastic gradient Langevin dynamics (SGLD) converge to the almost
minimizer within $\tilde O\big(nd/(\lambda\epsilon) \big)$ and $\tilde
O\big(d^7/(\lambda^5\epsilon^5) \big)$ stochastic gradient evaluations
respectively, where $d$ is the problem dimension, and $\lambda$ is the spectral
gap of the Markov chain generated by GLD. Both results improve upon the best
known gradient complexity results (Raginsky et al., 2017). Furthermore, for the
first time we prove the global convergence guarantee for variance reduced
stochastic gradient Langevin dynamics (SVRG-LD) to the almost minimizer within
$\tilde O\big(\sqrt{n}d^5/(\lambda^4\epsilon^{5/2})\big)$ stochastic gradient
evaluations, which outperforms the gradient complexities of GLD and SGLD in a
wide regime. Our theoretical analyses shed some light on using Langevin
dynamics based algorithms for nonconvex optimization with provable guarantees.
| Pan Xu and Jinghui Chen and Difan Zou and Quanquan Gu | null | 1707.06618 | null | null |
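For reference, a minimal sketch of a single SGLD update of the kind analyzed above, on a toy quadratic objective; the unit inverse temperature and the fixed step size are illustrative assumptions.

```python
import numpy as np

def sgld_step(theta, grad, step_size, rng):
    # plain (stochastic) gradient step plus Gaussian noise tied to the step size
    noise = rng.normal(0.0, np.sqrt(2.0 * step_size), size=theta.shape)
    return theta - step_size * grad(theta) + noise

rng = np.random.default_rng(0)
theta = np.array([3.0, -3.0])
grad = lambda t: 2.0 * t              # gradient of f(theta) = ||theta||^2
for _ in range(2000):
    theta = sgld_step(theta, grad, 1e-2, rng)
# theta is now roughly a sample from exp(-f), concentrated near the minimizer
```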
Acting Thoughts: Towards a Mobile Robotic Service Assistant for Users
with Limited Communication Skills | cs.AI cs.CV cs.HC cs.LG cs.RO | As autonomous service robots become more affordable and thus available also
for the general public, there is a growing need for user-friendly interfaces to
control the robotic system. Currently available control modalities typically
expect users to be able to express their desire through either touch, speech or
gesture commands. While this requirement is fulfilled for the majority of
users, paralyzed users may not be able to use such systems. In this paper, we
present a novel framework that allows these users to interact with a robotic
service assistant in a closed-loop fashion, using only thoughts. The
brain-computer interface (BCI) system is composed of several interacting
components, i.e., non-invasive neuronal signal recording and decoding,
high-level task planning, motion and manipulation planning as well as
environment perception. In various experiments, we demonstrate its
applicability and robustness in real world scenarios, considering
fetch-and-carry tasks and tasks involving human-robot interaction. As our
results demonstrate, our system is capable of adapting to frequent changes in
the environment and reliably completing given tasks within a reasonable amount
of time. Combined with high-level planning and autonomous robotic systems,
interesting new perspectives open up for non-invasive BCI-based human-robot
interactions.
| Felix Burget, Lukas Dominique Josef Fiederer, Daniel Kuhner, Martin
V\"olker, Johannes Aldinger, Robin Tibor Schirrmeister, Chau Do, Joschka
Boedecker, Bernhard Nebel, Tonio Ball, Wolfram Burgard | 10.1109/ECMR.2017.8098658 | 1707.06633 | null | null |
RAIL: Risk-Averse Imitation Learning | cs.LG cs.AI | Imitation learning algorithms learn viable policies by imitating an expert's
behavior when reward signals are not available. Generative Adversarial
Imitation Learning (GAIL) is a state-of-the-art algorithm for learning policies
when the expert's behavior is available as a fixed set of trajectories. We
evaluate the learned policies in terms of the expert's cost function and observe
that the distribution of trajectory-costs is often more heavy-tailed for
GAIL-agents than for the expert on a number of benchmark continuous-control
tasks. Thus,
high-cost trajectories, corresponding to tail-end events of catastrophic
failure, are more likely to be encountered by the GAIL-agents than the expert.
This makes the reliability of GAIL-agents questionable when it comes to
deployment in risk-sensitive applications like robotic surgery and autonomous
driving. In this work, we aim to minimize the occurrence of tail-end events by
minimizing tail risk within the GAIL framework. We quantify tail risk by the
Conditional-Value-at-Risk (CVaR) of trajectories and develop the Risk-Averse
Imitation Learning (RAIL) algorithm. We observe that the policies learned with
RAIL show lower tail-end risk than those of vanilla GAIL. Thus the proposed
RAIL algorithm appears as a potent alternative to GAIL for improved reliability
in risk-sensitive applications.
| Anirban Santara, Abhishek Naik, Balaraman Ravindran, Dipankar Das,
Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul | null | 1707.06658 | null | null |
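A minimal sketch of the CVaR tail-risk quantity that RAIL penalizes, evaluated on a synthetic heavy-tailed cost sample; the level alpha and the cost distribution are illustrative.

```python
import numpy as np

def cvar(costs, alpha=0.9):
    var = np.quantile(costs, alpha)    # value-at-risk threshold at level alpha
    return costs[costs >= var].mean()  # mean of the worst (1 - alpha) tail

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=0.0, sigma=1.0, size=10000)  # heavy-tailed costs
print(cvar(costs, alpha=0.9))          # the tail risk a RAIL-style agent minimizes
```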
Resting state fMRI functional connectivity-based classification using a
convolutional neural network architecture | stat.ML cs.CV cs.LG | Machine learning techniques have become increasingly popular in the field of
resting state fMRI (functional magnetic resonance imaging) network based
classification. However, the application of convolutional networks has been
proposed only very recently and has remained largely unexplored. In this paper
we describe a convolutional neural network architecture for functional
connectome classification called connectome-convolutional neural network
(CCNN). Our results on simulated datasets and a publicly available dataset for
amnestic mild cognitive impairment classification demonstrate that our CCNN
model can efficiently distinguish between subject groups. We also show that the
connectome-convolutional network is capable of combining information from diverse
functional connectivity metrics and that models using a combination of
different connectivity descriptors are able to outperform classifiers using
only one metric. This flexibility means that our proposed CCNN model can
be easily adapted to a wide range of connectome-based classification or
regression tasks, by varying which connectivity descriptor combinations are
used to train the network.
| Regina Meszl\'enyi, Krisztian Buza and Zolt\'an Vidny\'anszky | null | 1707.06682 | null | null |
Efficient Defenses Against Adversarial Attacks | cs.LG | Following the recent adoption of deep neural networks (DNNs) across a wide
range of applications, adversarial attacks against these models have proven to
be an indisputable threat. Adversarial samples are crafted with a deliberate
intention of undermining a system. In the case of DNNs, the lack of a better
understanding of their inner workings has prevented the development of efficient
defenses. In this paper, we propose a new defense method based on practical
observations which is easy to integrate into models and performs better than
state-of-the-art defenses. Our proposed solution is meant to reinforce the
structure of a DNN, making its prediction more stable and less likely to be
fooled by adversarial samples. We conduct an extensive experimental study
proving the efficiency of our method against multiple attacks, comparing it to
numerous defenses, both in white-box and black-box setups. Additionally, the
implementation of our method brings almost no overhead to the training
procedure, while maintaining the prediction performance of the original model
on clean samples.
| Valentina Zantedeschi, Maria-Irina Nicolae and Ambrish Rawat | null | 1707.06728 | null | null |
Machine Teaching: A New Paradigm for Building Machine Learning Systems | cs.LG cs.AI cs.HC cs.SE stat.ML | The current processes for building machine learning systems require
practitioners with deep knowledge of machine learning. This significantly
limits the number of machine learning systems that can be created and has led
to a mismatch between the demand for machine learning systems and the ability
for organizations to build them. We believe that in order to meet this growing
demand for machine learning systems we must significantly increase the number
of individuals that can teach machines. We postulate that we can achieve this
goal by making the process of teaching machines easy, fast and above all,
universally accessible.
While machine learning focuses on creating new algorithms and improving the
accuracy of "learners", the machine teaching discipline focuses on the efficacy
of the "teachers". Machine teaching as a discipline is a paradigm shift that
follows and extends principles of software engineering and programming
languages. We put a strong emphasis on the teacher and the teacher's
interaction with data, as well as crucial components such as techniques and
design principles of interaction and visualization.
In this paper, we present our position regarding the discipline of machine
teaching and articulate fundamental machine teaching principles. We also
describe how, by decoupling knowledge about machine learning algorithms from
the process of teaching, we can accelerate innovation and empower millions of
new uses for machine learning models.
| Patrice Y. Simard, Saleema Amershi, David M. Chickering, Alicia
Edelman Pelton, Soroush Ghorashi, Christopher Meek, Gonzalo Ramos, Jina Suh,
Johan Verwey, Mo Wang, and John Wernsing | null | 1707.06742 | null | null |
An Infinite Hidden Markov Model With Similarity-Biased Transitions | stat.ML cs.AI cs.LG stat.ME | We describe a generalization of the Hierarchical Dirichlet Process Hidden
Markov Model (HDP-HMM) which is able to encode prior information that state
transitions are more likely between "nearby" states. This is accomplished by
defining a similarity function on the state space and scaling transition
probabilities by pair-wise similarities, thereby inducing correlations among
the transition distributions. We present an augmented data representation of
the model as a Markov Jump Process in which: (1) some jump attempts fail, and
(2) the probability of success is proportional to the similarity between the
source and destination states. This augmentation restores conditional conjugacy
and admits a simple Gibbs sampler. We evaluate the model and inference method
on a speaker diarization task and a "harmonic parsing" task using four-part
chorale data, as well as on several synthetic datasets, achieving favorable
comparisons to existing models.
| Colin Reimer Dawson, Chaofan Huang, Clayton T. Morrison | null | 1707.06756 | null | null |
A Nonlinear Dimensionality Reduction Framework Using Smooth Geodesics | stat.ML cs.CV cs.LG math.DS | Existing dimensionality reduction methods are adept at revealing hidden
underlying manifolds arising from high-dimensional data and thereby producing a
low-dimensional representation. However, the smoothness of the manifolds
produced by classic techniques over sparse and noisy data is not guaranteed. In
fact, the embedding generated using such data may distort the geometry of the
manifold and thereby produce an unfaithful embedding. Herein, we propose a
framework for nonlinear dimensionality reduction that generates a manifold in
terms of smooth geodesics that is designed to treat problems in which manifold
measurements are either sparse or corrupted by noise. Our method generates a
network structure for given high-dimensional data using a nearest neighbors
search and then produces piecewise linear shortest paths that are defined as
geodesics. Then, we fit points in each geodesic by a smoothing spline to
emphasize the smoothness. The robustness of this approach for sparse and noisy
datasets is demonstrated by the implementation of the method on synthetic and
real-world datasets.
| Kelum Gajamannage, Randy Paffenroth, Erik M. Bollt | null | 1707.06757 | null | null |
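A rough sketch of the described pipeline (k-NN graph, shortest-path geodesics, spline smoothing) using SciPy and scikit-learn; the neighbor count, smoothing factor, and toy data are assumptions, and the paper's exact fitting procedure may differ.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from scipy.interpolate import splprep, splev

rng = np.random.default_rng(0)
X = rng.random((200, 3))                             # noisy samples
G = kneighbors_graph(X, n_neighbors=8, mode='distance')
_, pred = shortest_path(G, method='D', return_predecessors=True)

def geodesic(i, j):
    """Piecewise-linear shortest path from node i to node j."""
    path = [j]
    while path[-1] != i:
        path.append(pred[i, path[-1]])
    return X[path[::-1]]

pts = geodesic(0, 50)
tck, _ = splprep(pts.T, s=1.0)                       # smoothing spline through the path
smooth = np.array(splev(np.linspace(0, 1, 100), tck)).T
```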
An Error-Oriented Approach to Word Embedding Pre-Training | cs.CL cs.LG cs.NE | We propose a novel word embedding pre-training approach that exploits writing
errors in learners' scripts. We compare our method to previous models that tune
the embeddings based on script scores and the discrimination between correct
and corrupt word contexts in addition to the generic commonly-used embeddings
pre-trained on large corpora. The comparison is achieved by using the
aforementioned models to bootstrap a neural network that learns to predict a
holistic score for scripts. Furthermore, we investigate augmenting our model
with error corrections and monitor the impact on performance. Our results show
that our error-oriented approach outperforms other comparable ones, which is
further demonstrated when training on more data. Additionally, extending the
model with corrections provides further performance gains when data sparsity is
an issue.
| Youmna Farag, Marek Rei, Ted Briscoe | null | 1707.06841 | null | null |
A Distributional Perspective on Reinforcement Learning | cs.LG cs.AI stat.ML | In this paper we argue for the fundamental importance of the value
distribution: the distribution of the random return received by a reinforcement
learning agent. This is in contrast to the common approach to reinforcement
learning which models the expectation of this return, or value. Although there
is an established body of literature studying the value distribution, thus far
it has always been used for a specific purpose such as implementing risk-aware
behaviour. We begin with theoretical results in both the policy evaluation and
control settings, exposing a significant distributional instability in the
latter. We then use the distributional perspective to design a new algorithm
which applies Bellman's equation to the learning of approximate value
distributions. We evaluate our algorithm using the suite of games from the
Arcade Learning Environment. We obtain both state-of-the-art results and
anecdotal evidence demonstrating the importance of the value distribution in
approximate reinforcement learning. Finally, we combine theoretical and
empirical evidence to highlight the ways in which the value distribution
impacts learning in the approximate setting.
| Marc G. Bellemare, Will Dabney, R\'emi Munos | null | 1707.06887 | null | null |
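One way to make the distributional Bellman update above concrete is a categorical backup on a fixed support of atoms; the atom grid and clipping bounds below are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def categorical_backup(probs_next, reward, gamma, v_min=-10.0, v_max=10.0):
    """Project the Bellman target r + gamma * Z onto a fixed atom support."""
    n = len(probs_next)
    z = np.linspace(v_min, v_max, n)          # support atoms
    dz = z[1] - z[0]
    target = np.zeros(n)
    for p, atom in zip(probs_next, z):
        tz = np.clip(reward + gamma * atom, v_min, v_max)
        b = (tz - v_min) / dz                 # fractional atom index
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                          # target landed exactly on an atom
            target[lo] += p
        else:                                 # split mass between neighbors
            target[lo] += p * (hi - b)
            target[hi] += p * (b - lo)
    return target

probs = np.full(51, 1.0 / 51)                 # uniform next-state distribution
print(categorical_backup(probs, reward=1.0, gamma=0.99).sum())  # mass preserved: 1.0
```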
A New Family of Near-metrics for Universal Similarity | stat.ML cs.LG | We propose a family of near-metrics based on local graph diffusion to capture
similarity for a wide class of data sets. These quasi-metametrics, as their
names suggest, dispense with one or two standard axioms of metric spaces,
specifically distinguishability and symmetry, so that similarity between data
points of arbitrary type and form can be measured broadly and effectively.
The proposed near-metric family includes the forward k-step diffusion and its
reverse, typically on the graph consisting of data objects and their features.
By construction, this family of near-metrics is particularly appropriate for
categorical data, continuous data, and vector representations of images and
text extracted via deep learning approaches. We conduct extensive experiments
to evaluate the performance of this family of similarity measures and compare
and contrast with traditional measures of similarity used for each specific
application and with the ground truth when available. We show that for
structured data including categorical and continuous data, the near-metrics
corresponding to normalized forward k-step diffusion (k small) work as one of
the best performing similarity measures; for vector representations of text and
images including those extracted from deep learning, the near-metrics derived
from normalized and reverse k-step graph diffusion (k very small) exhibit
outstanding ability to distinguish data points from different classes.
| Chu Wang, Iraj Saniee, William S. Kennedy, Chris A. White | null | 1707.06903 | null | null |
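A minimal sketch of a forward k-step diffusion similarity of the kind described above; the row normalization and the toy adjacency matrix are assumptions. Note that the resulting matrix is generally asymmetric, which is exactly the relaxation these near-metrics allow.

```python
import numpy as np

def k_step_diffusion(A, k):
    """A: nonnegative adjacency of the object/feature graph; returns P^k."""
    P = A / A.sum(axis=1, keepdims=True)   # row-normalized random-walk matrix
    return np.linalg.matrix_power(P, k)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Pk = k_step_diffusion(A, k=2)              # Pk[i, j]: mass reaching j from i
dissim = 1.0 - Pk                          # more diffused mass -> more similar
# Pk is generally asymmetric: the symmetry axiom is deliberately dropped
```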
Dictionary Learning and Sparse Coding-based Denoising for
High-Resolution Task Functional Connectivity MRI Analysis | cs.LG stat.ML | We propose a novel denoising framework for task functional Magnetic Resonance
Imaging (tfMRI) data to delineate the high-resolution spatial pattern of the
brain functional connectivity via dictionary learning and sparse coding (DLSC).
In order to address the limitations of the unsupervised DLSC-based fMRI
studies, we utilize prior knowledge of the task paradigm in the learning step
to train a data-driven dictionary and to model the sparse representation. We
apply the proposed DLSC-based method to Human Connectome Project (HCP) motor
tfMRI dataset. Studies on the functional connectivity of cerebrocerebellar
circuits in somatomotor networks show that the DLSC-based denoising framework
can significantly improve the prominent connectivity patterns, in comparison to
the temporal non-local means (tNLM)-based denoising method as well as the case
without denoising; the improved patterns are consistent and neuroscientifically
meaningful within the motor area. The promising results show that the proposed
method can provide an important foundation for high-resolution functional
connectivity analysis, and a better approach for fMRI preprocessing.
| Seongah Jeong, Xiang Li, Jiarui Yang, Quanzheng Li, Vahid Tarokh | null | 1707.06962 | null | null |
Ideological Sublations: Resolution of Dialectic in Population-based
Optimization | cs.LG cs.AI cs.CC cs.NE | A population-based optimization algorithm was designed, inspired by two main
thinking modes in philosophy, both based on the dialectic concept and the
thesis-antithesis paradigm. They impose two different kinds of dialectics.
Idealistic and materialistic antitheses are formulated as optimization models.
Based on the models, the population is coordinated for dialectical
interactions. At the population-based context, the formulated optimization
models are reduced to a simple detection problem for each thinker (particle).
According to the thinking mode assigned to each thinker and her/his
measurements of corresponding dialectic with other candidate particles, they
deterministically decide to interact with a thinker in maximum dialectic with
their theses. The position of a thinker at maximum dialectic is known as an
available antithesis among the existing solutions. The dialectical interactions
at each ideological community are distinguished by meaningful distributions of
step-sizes for each thinking mode. In fact, the thinking modes are regarded as
exploration and exploitation elements of the proposed algorithm. The result is
a delicate balance without any requirement for adjustment of step-size
coefficients. The main parameter of the proposed algorithm is the number of
particles appointed to each thinking mode, or equivalently, to each kind of
motion. An additional integer parameter is defined to boost the stability of
the final algorithm in some particular problems. The proposed algorithm is
evaluated by a testbed of 12 single-objective continuous benchmark functions.
Moreover, its performance and speed were highlighted in sparse reconstruction
and antenna selection problems, at the context of compressed sensing and
massive MIMO, respectively. The results indicate fast and efficient performance
in comparison with well-known evolutionary algorithms and dedicated
state-of-the-art algorithms.
| S. Hossein Hosseini and Afshin Ebrahimi | null | 1707.06992 | null | null |
Machine Learning for Structured Clinical Data | cs.LG | Research is a tertiary priority in the EHR, where the priorities are patient
care and billing. Because of this, the data is not standardized or formatted in
a manner easily adapted to machine learning approaches. Data may be missing for
a large variety of reasons ranging from individual input styles to differences
in clinical decision making, for example, which lab tests to issue. Few
patients are annotated at research quality, limiting sample size and
presenting a moving gold standard. Patient progression over time is key to
understanding many diseases but many machine learning algorithms require a
snapshot, at a single time point, to create a usable vector form. Furthermore,
algorithms that produce black box results do not provide the interpretability
required for clinical adoption. This chapter discusses these challenges and
others in applying machine learning techniques to the structured EHR (i.e.
Patient Demographics, Family History, Medication Information, Vital Signs,
Laboratory Tests, Genetic Testing). It does not cover feature extraction from
additional sources such as imaging data or free text patient notes but the
approaches discussed can include features extracted from these sources.
| Brett K. Beaulieu-Jones | null | 1707.06997 | null | null |
Learning Transferable Architectures for Scalable Image Recognition | cs.CV cs.LG stat.ML | Developing neural network image classification models often requires
significant architecture engineering. In this paper, we study a method to learn
the model architectures directly on the dataset of interest. As this approach
is expensive when the dataset is large, we propose to search for an
architectural building block on a small dataset and then transfer the block to
a larger dataset. The key contribution of this work is the design of a new
search space (the "NASNet search space") which enables transferability. In our
experiments, we search for the best convolutional layer (or "cell") on the
CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking
together more copies of this cell, each with their own parameters to design a
convolutional architecture, named "NASNet architecture". We also introduce a
new regularization technique called ScheduledDropPath that significantly
improves generalization in the NASNet models. On CIFAR-10 itself, NASNet
achieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet
achieves, among the published works, state-of-the-art accuracy of 82.7% top-1
and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than
the best human-invented architectures while having 9 billion fewer FLOPS - a
reduction of 28% in computational demand from the previous state-of-the-art
model. When evaluated at different levels of computational cost, accuracies of
NASNets exceed those of the state-of-the-art human-designed models. For
instance, a small version of NASNet also achieves 74% top-1 accuracy, which is
3.1% better than equivalently-sized, state-of-the-art models for mobile
platforms. Finally, the learned features by NASNet used with the Faster-RCNN
framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO
dataset.
| Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le | null | 1707.07012 | null | null |
Adversarial Variational Optimization of Non-Differentiable Simulators | stat.ML cs.LG | Complex computer simulators are increasingly used across fields of science as
generative models tying parameters of an underlying theory to experimental
observations. Inference in this setup is often difficult, as simulators rarely
admit a tractable density or likelihood function. We introduce Adversarial
Variational Optimization (AVO), a likelihood-free inference algorithm for
fitting a non-differentiable generative model incorporating ideas from
generative adversarial networks, variational optimization and empirical Bayes.
We adapt the training procedure of generative adversarial networks by replacing
the differentiable generative network with a domain-specific simulator. We
solve the resulting non-differentiable minimax problem by minimizing
variational upper bounds of the two adversarial objectives. Effectively, the
procedure results in learning a proposal distribution over simulator
parameters, such that the JS divergence between the marginal distribution of
the synthetic data and the empirical distribution of observed data is
minimized. We evaluate and compare the method with simulators producing both
discrete and continuous data.
| Gilles Louppe, Joeri Hermans, Kyle Cranmer | null | 1707.07113 | null | null |
Sketched Subspace Clustering | stat.ML cs.LG | The immense amount of daily generated and communicated data presents unique
challenges in their processing. Clustering, the grouping of data without the
presence of ground-truth labels, is an important tool for drawing inferences
from data. Subspace clustering (SC) is a relatively recent method that is able
to successfully classify nonlinearly separable data in a multitude of settings.
In spite of their high clustering accuracy, SC methods incur prohibitively high
computational complexity when processing large volumes of high-dimensional
data. Inspired by random sketching approaches for dimensionality reduction, the
present paper introduces a randomized scheme for SC, termed Sketch-SC, tailored
for large volumes of high-dimensional data. Sketch-SC accelerates the
computationally heavy parts of state-of-the-art SC approaches by compressing
the data matrix across both dimensions using random projections, thus enabling
fast and accurate large-scale SC. Performance analysis as well as extensive
numerical tests on real data corroborate the potential of Sketch-SC and its
competitive performance relative to state-of-the-art scalable SC approaches.
| Panagiotis A. Traganitis and Georgios B. Giannakis | 10.1109/TSP.2017.2781649 | 1707.07196 | null | null |
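A minimal sketch of the two-sided random projection underlying Sketch-SC, with Gaussian sketching matrices; the sketch sizes are illustrative, and the downstream subspace-clustering solver is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 1000))          # samples x features

r, c = 200, 100                                # target sketch dimensions
S_left = rng.standard_normal((r, X.shape[0])) / np.sqrt(r)
S_right = rng.standard_normal((X.shape[1], c)) / np.sqrt(c)
X_sketch = S_left @ X @ S_right                # compressed r x c matrix
# an SC solver would now operate on X_sketch instead of X
```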
Language modeling with Neural trans-dimensional random fields | cs.CL cs.LG stat.ML | Trans-dimensional random field language models (TRF LMs) have recently been
introduced, where sentences are modeled as a collection of random fields. The
TRF approach has been shown to have the advantages of being computationally
more efficient in inference than LSTM LMs with close performance and being able
to flexibly integrate rich features. In this paper we propose neural TRFs,
going beyond the previous discrete TRFs that only use linear potentials with
discrete features. The idea is to use nonlinear potentials with continuous
features, implemented by neural networks (NNs), in the TRF framework. Neural
TRFs combine the advantages of both NNs and TRFs. The benefits of word
embedding, nonlinear feature learning and larger context modeling are inherited
from the use of NNs. At the same time, the strength of efficient inference by
avoiding expensive softmax is preserved. A number of technical contributions,
including employing deep convolutional neural networks (CNNs) to define the
potentials and incorporating the joint stochastic approximation (JSA) strategy
in the training algorithm, are developed in this work, which enable us to
successfully train neural TRF LMs. Various LMs are evaluated in terms of speech
recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The
results show that neural TRF LMs not only improve over discrete TRF LMs, but
also perform slightly better than LSTM LMs with only one fifth of the parameters
and 16x faster inference.
| Bin Wang and Zhijian Ou | null | 1707.0724 | null | null |
Pairing an arbitrary regressor with an artificial neural network
estimating aleatoric uncertainty | stat.ML cs.LG | We suggest a general approach to quantification of different forms of
aleatoric uncertainty in regression tasks performed by artificial neural
networks. It is based on the simultaneous training of two neural networks with
a joint loss function and a specific hyperparameter $\lambda>0$ that allows for
automatically detecting noisy and clean regions in the input space and
controlling their {\em relative contribution} to the loss and its gradients.
After the model has been trained, one of the networks performs predictions and
the other quantifies the uncertainty of these predictions by estimating the
locally averaged loss of the first one. Unlike in many classical uncertainty
quantification methods, we do not assume any a priori knowledge of the ground
truth probability distribution, nor do we, in general, maximize the
likelihood of a chosen parametric family of distributions. We analyze the
learning process and the influence of clean and noisy regions of the input
space on the loss surface, depending on $\lambda$. In particular, we show that
small values of $\lambda$ increase the relative contribution of clean regions
to the loss and its gradients. This explains why choosing small $\lambda$
allows for better predictions compared with neural networks without uncertainty
counterparts and those based on classical likelihood maximization. Finally, we
demonstrate that one can naturally form ensembles of pairs of our networks and
thus capture both aleatoric and epistemic uncertainty and avoid overfitting.
| Pavel Gurevich, Hannes Stuke | null | 1707.07287 | null | null |
Joint DOA Estimation and Array Calibration Using Multiple Parametric
Dictionary Learning | cs.LG | This letter proposes a multiple parametric dictionary learning algorithm for
direction of arrival (DOA) estimation in the presence of array gain-phase errors and
mutual coupling. It jointly solves the DOA estimation and array
imperfection problems to yield robust DOA estimates in the presence of array
imperfection errors and off-grid effects. In the proposed method, a multiple parametric
dictionary learning-based algorithm with a steepest-descent iteration is used
for learning the parametric perturbation matrices and the steering matrix
simultaneously. It also exploits multiple-snapshot information to enhance
the performance of DOA estimation. Simulation results show the efficiency of
the proposed algorithm when both the off-grid problem and array imperfections exist.
| H. Ghanbari, H. Zayyani, E. Yazdian | null | 1707.07299 | null | null |
Adversarial Examples for Evaluating Reading Comprehension Systems | cs.CL cs.LG | Standard accuracy metrics indicate that reading comprehension systems are
making rapid progress, but the extent to which these systems truly understand
language remains unclear. To reward systems with real language understanding
abilities, we propose an adversarial evaluation scheme for the Stanford
Question Answering Dataset (SQuAD). Our method tests whether systems can answer
questions about paragraphs that contain adversarially inserted sentences, which
are automatically generated to distract computer systems without changing the
correct answer or misleading humans. In this adversarial setting, the accuracy
of sixteen published models drops from an average of $75\%$ F1 score to $36\%$;
when the adversary is allowed to add ungrammatical sequences of words, average
accuracy on four models decreases further to $7\%$. We hope our insights will
motivate the development of new models that understand language more precisely.
| Robin Jia and Percy Liang | null | 1707.07328 | null | null |
Prediction-Constrained Training for Semi-Supervised Mixture and Topic
Models | stat.ML cs.AI cs.LG | Supervisory signals have the potential to make low-dimensional data
representations, like those learned by mixture and topic models, more
interpretable and useful. We propose a framework for training latent variable
models that explicitly balances two goals: recovery of faithful generative
explanations of high-dimensional data, and accurate prediction of associated
semantic labels. Existing approaches fail to achieve these goals due to an
incomplete treatment of a fundamental asymmetry: the intended application is
always predicting labels from data, not data from labels. Our
prediction-constrained objective for training generative models coherently
integrates loss-based supervisory signals while enabling effective
semi-supervised learning from partially labeled data. We derive learning
algorithms for semi-supervised mixture and topic models using stochastic
gradient descent with automatic differentiation. We demonstrate improved
prediction quality compared to several previous supervised topic models,
achieving predictions competitive with high-dimensional logistic regression on
text sentiment analysis and electronic health records tasks while
simultaneously learning interpretable topics.
| Michael C. Hughes and Leah Weiner and Gabriel Hope and Thomas H. McCoy
Jr. and Roy H. Perlis and Erik B. Sudderth and Finale Doshi-Velez | null | 1707.07341 | null | null |
An Online Learning Approach to Buying and Selling Demand Response | cs.SY cs.LG | We adopt the perspective of an aggregator, which seeks to coordinate its
purchase of demand reductions from a fixed group of residential electricity
customers, with its sale of the aggregate demand reduction in a two-settlement
wholesale energy market. The aggregator procures reductions in demand by
offering its customers a uniform price for reductions in consumption relative
to their predetermined baselines. Prior to its realization of the aggregate
demand reduction, the aggregator must also determine how much energy to sell
into the two-settlement energy market. In the day-ahead market, the aggregator
commits to a forward contract, which calls for the delivery of energy in the
real-time market. The underlying aggregate demand curve, which relates the
aggregate demand reduction to the aggregator's offered price, is assumed to be
affine and subject to unobservable, random shocks. Assuming that both the
parameters of the demand curve and the distribution of the random shocks are
initially unknown to the aggregator, we investigate the extent to which the
aggregator might dynamically adapt its offered prices and forward contracts to
maximize its expected profit over a time window of $T$ days. Specifically, we
design a dynamic pricing and contract offering policy that reconciles the
aggregator's need to learn the unknown demand model with its desire to maximize
its cumulative expected profit over time. In particular, the proposed pricing
policy is proven to incur a regret over $T$ days that is no greater than
$O(\log(T)\sqrt{T})$.
| Kia Khezeli and Eilyan Bitar | null | 1707.07342 | null | null |
Wavelet Convolutional Neural Networks for Texture Classification | cs.CV cs.LG | Texture classification is an important and challenging problem in many image
processing applications. While convolutional neural networks (CNNs) achieved
significant successes for image classification, texture classification remains
a difficult problem since textures usually do not contain enough information
regarding the shape of objects. In image processing, texture classification has
traditionally been studied with spectral analyses which exploit repeated
structures in many textures. Since CNNs process images as-is in the spatial
domain whereas spectral analyses process images in the frequency domain, these
models have different characteristics in terms of performance. We propose a
novel CNN architecture, wavelet CNNs, which integrates a spectral analysis into
CNNs. Our insight is that the pooling layer and the convolution layer can be
viewed as a limited form of a spectral analysis. Based on this insight, we
generalize both layers to perform a spectral analysis with wavelet transform.
Wavelet CNNs allow us to utilize spectral information which is lost in
conventional CNNs but useful in texture classification. The experiments
demonstrate that our model achieves better accuracy in texture classification
than existing models. We also show that our model has significantly fewer
parameters than CNNs, making our model easier to train with less memory.
| Shin Fujieda, Kohei Takayama and Toshiya Hachisuka | null | 1707.07394 | null | null |
Learning for Multi-robot Cooperation in Partially Observable Stochastic
Environments with Macro-actions | cs.MA cs.LG cs.RO | This paper presents a data-driven approach for multi-robot coordination in
partially-observable domains based on Decentralized Partially Observable Markov
Decision Processes (Dec-POMDPs) and macro-actions (MAs). Dec-POMDPs provide a
general framework for cooperative sequential decision making under uncertainty
and MAs allow temporally extended and asynchronous action execution. To date,
most methods assume the underlying Dec-POMDP model is known a priori or a full
simulator is available during planning time. Previous methods which aim to
address these issues suffer from local optimality and sensitivity to initial
conditions. Additionally, there are few hardware demonstrations involving a large
team of heterogeneous robots and long planning horizons. This work addresses
these gaps by proposing an iterative sampling based Expectation-Maximization
algorithm (iSEM) to learn policies using only trajectory data containing
observations, MAs, and rewards. Our experiments show the algorithm is able to
achieve better solution quality than the state-of-the-art learning-based
methods. We implement two variants of multi-robot Search and Rescue (SAR)
domains (with and without obstacles) on hardware to demonstrate the learned
policies can effectively control a team of distributed robots to cooperate in a
partially observable stochastic environment.
| Miao Liu, Kavinayan Sivakumar, Shayegan Omidshafiei, Christopher
Amato, Jonathan P. How | null | 1707.07399 | null | null |
Reinforcement Learning for Bandit Neural Machine Translation with
Simulated Human Feedback | cs.CL cs.AI cs.HC cs.LG | Machine translation is a natural candidate problem for reinforcement learning
from human feedback: users provide quick, dirty ratings on candidate
translations to guide a system to improve. Yet, current neural machine
translation training focuses on expensive human-generated reference
translations. We describe a reinforcement learning algorithm that improves
neural machine translation systems from simulated human feedback. Our algorithm
combines the advantage actor-critic algorithm (Mnih et al., 2016) with the
attention-based neural encoder-decoder architecture (Luong et al., 2015). This
algorithm (a) is well-designed for problems with a large action space and
delayed rewards, (b) effectively optimizes traditional corpus-level machine
translation metrics, and (c) is robust to skewed, high-variance, granular
feedback modeled after actual human behaviors.
| Khanh Nguyen, Hal Daum\'e III and Jordan Boyd-Graber | null | 1707.07402 | null | null |
Big Data Regression Using Tree Based Segmentation | stat.ML cs.LG | Scaling regression to large datasets is a common problem in many application
areas. We propose a two-step approach to scaling regression to large datasets.
Using a regression tree (CART) to segment the large dataset constitutes the
first step of this approach. The second step of this approach is to develop a
suitable regression model for each segment. Since segment sizes are not very
large, we have the ability to apply sophisticated regression techniques if
required. A nice feature of this two-step approach is that it can yield models
that have good explanatory power as well as good predictive performance.
Ensemble methods like Gradient Boosted Trees can offer excellent predictive
performance but may not provide interpretable models. In the experiments
reported in this study, we found that the predictive performance of the
proposed approach matched the predictive performance of Gradient Boosted Trees.
| Rajiv Sambasivan, Sourish Das | null | 1707.07409 | null | null |
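A minimal scikit-learn sketch of the two-step approach above: CART segmentation followed by a per-segment model; the synthetic data, leaf count, and the choice of linear regression for each segment are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((5000, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1 * rng.normal(size=5000)

tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)   # step 1: segment
leaf_ids = tree.apply(X)                                   # leaf id per sample

segment_models = {leaf: LinearRegression().fit(X[leaf_ids == leaf],
                                               y[leaf_ids == leaf])
                  for leaf in np.unique(leaf_ids)}         # step 2: per-segment fit

def predict(X_new):
    ids = tree.apply(X_new)
    return np.array([segment_models[i].predict(row[None, :])[0]
                     for i, row in zip(ids, X_new)])
```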
Combinatorial Multi-armed Bandit with Probabilistically Triggered Arms:
A Case with Bounded Regret | cs.LG stat.ML | In this paper, we study the combinatorial multi-armed bandit problem (CMAB)
with probabilistically triggered arms (PTAs). Under the assumption that the arm
triggering probabilities (ATPs) are positive for all arms, we prove that a
class of upper confidence bound (UCB) policies, named Combinatorial UCB with
exploration rate $\kappa$ (CUCB-$\kappa$), and Combinatorial Thompson Sampling
(CTS), which estimates the expected states of the arms via Thompson sampling,
achieve bounded regret. In addition, we prove that CUCB-$0$ and CTS incur
$O(\sqrt{T})$ gap-independent regret. These results improve the results in
previous works, which show $O(\log T)$ gap-dependent and $O(\sqrt{T\log T})$
gap-independent regrets, respectively, under no assumptions on the ATPs. Then,
we numerically evaluate the performance of CUCB-$\kappa$ and CTS in a
real-world movie recommendation problem, where the actions correspond to
recommending a set of movies, the arms correspond to the edges between the
movies and the users, and the goal is to maximize the total number of users
that are attracted by at least one movie. Our numerical results complement our
theoretical findings on bounded regret. Apart from this problem, our results
also directly apply to the online influence maximization (OIM) problem studied
in numerous prior works.
| A. \"Omer Sar{\i}ta\c{c} and Cem Tekin | null | 1707.07443 | null | null |
Character-level Intra Attention Network for Natural Language Inference | cs.CL cs.LG | Natural language inference (NLI) is a central problem in language
understanding. End-to-end artificial neural networks have recently reached
state-of-the-art performance in the NLI field.
In this paper, we propose Character-level Intra Attention Network (CIAN) for
the NLI task. In our model, we use the character-level convolutional network to
replace the standard word embedding layer, and we use the intra attention to
capture intra-sentence semantics. The proposed CIAN model provides improved
results on the newly published MNLI corpus.
| Han Yang, Marta R. Costa-juss\`a and Jos\'e A. R. Fonollosa | null | 1707.07469 | null | null |
Likelihood Estimation for Generative Adversarial Networks | cs.LG cs.AI | We present a simple method for assessing the quality of generated images in
Generative Adversarial Networks (GANs). The method can be applied in any kind
of GAN without interfering with the learning procedure or affecting the
learning objective. The central idea is to define a likelihood function that
correlates with the quality of the generated images. In particular, we derive a
Gaussian likelihood function from the distribution of the embeddings (hidden
activations) of the real images in the discriminator, and based on this, define
two simple measures of how likely it is that the embeddings of generated images
are from the distribution of the embeddings of the real images. This yields a
simple measure of fitness for generated images, for all varieties of GANs.
Empirical results on CIFAR-10 demonstrate a strong correlation between the
proposed measures and the perceived quality of the generated images.
| Hamid Eghbal-zadeh, Gerhard Widmer | null | 1707.0753 | null | null |
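A minimal sketch of the proposed fitness measure: fit a Gaussian to discriminator embeddings of real images and score generated images by their log-likelihood. The random arrays stand in for actual embeddings, and the covariance regularizer is an assumption.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
real_emb = rng.normal(size=(1000, 64))        # discriminator embeddings (real)
fake_emb = rng.normal(size=(500, 64)) + 0.5   # discriminator embeddings (fake)

mu = real_emb.mean(axis=0)
cov = np.cov(real_emb, rowvar=False) + 1e-6 * np.eye(64)  # regularized covariance

gauss = multivariate_normal(mean=mu, cov=cov)
scores = gauss.logpdf(fake_emb)               # higher -> closer to real embeddings
print(scores.mean())
```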
Exploring Outliers in Crowdsourced Ranking for QoE | stat.ML cs.LG | Outlier detection is a crucial part of robust evaluation for crowdsourceable
assessment of Quality of Experience (QoE) and has attracted much attention in
recent years. In this paper, we propose some simple and fast algorithms for
outlier detection and robust QoE evaluation based on the nonconvex optimization
principle. Several iterative procedures are designed with or without knowing
the number of outliers in samples. Theoretical analysis is given to show that
such procedures can reach statistically good estimates under mild conditions.
Finally, experimental results with simulated and real-world crowdsourcing
datasets show that the proposed algorithms produce performance similar to
the Huber-LASSO approach in robust ranking, yet with nearly 8 or 90 times speed-up,
without or with prior knowledge of the sparsity size of outliers,
respectively. The proposed methodology therefore provides a set of helpful
tools for robust QoE evaluation with crowdsourcing data.
| Qianqian Xu, Ming Yan, Chendi Huang, Jiechao Xiong, Qingming Huang,
Yuan Yao | null | 1707.07539 | null | null |
Automatic breast cancer grading in lymph nodes using a deep neural
network | cs.CV cs.LG | The progression of breast cancer can be quantified in lymph node whole-slide
images (WSIs). We describe a novel method for effectively performing
classification of whole-slide images and patient-level breast cancer grading.
Our method utilises a deep neural network. The method performs classification
on small patches and uses model averaging for boosting. In the first step,
region of interest patches are determined and cropped automatically by color
thresholding and then classified by the deep neural network. The classification
results are used to determine a slide level class and for further aggregation
to predict a patient-level grade. The fast processing speed of our method enables
high throughput image analysis.
| Thomas Wollmann, Karl Rohr | null | 1707.07565 | null | null |
Interpreting Classifiers through Attribute Interactions in Datasets | stat.ML cs.LG | In this work we present the novel ASTRID method for investigating which
attribute interactions classifiers exploit when making predictions. Attribute
interactions in classification tasks mean that two or more attributes together
provide stronger evidence for a particular class label. Knowledge of such
interactions makes models more interpretable by revealing associations between
attributes. This has applications, e.g., in pharmacovigilance to identify
interactions between drugs or in bioinformatics to investigate associations
between single nucleotide polymorphisms. We also show how the found attribute
partitioning is related to a factorisation of the data generating distribution
and empirically demonstrate the utility of the proposed method.
| Andreas Henelius, Kai Puolam\"aki, Antti Ukkonen | null | 1707.07576 | null | null |
Stock Prediction: a method based on extraction of news features and
recurrent neural networks | q-fin.ST cs.CL cs.IR cs.LG | This paper proposes a method for stock prediction. In terms of feature
extraction, we extract features from stock-related news in addition to stock prices.
We first select some seed words based on experience which are indicative of
good news and bad news. Then we propose an optimization method and calculate
the positive polarity of all words. After that, we construct news features
based on the positive polarity of their words. In consideration of sequential
stock prices and continuous news effects, we propose a recurrent neural network
model to help predict stock prices. Compared to an SVM classifier with price
features, we find in experiments that our proposed method achieves an over 5%
improvement in stock prediction accuracy.
| Zeya Zhang, Weizheng Chen and Hongfei Yan | null | 1707.07585 | null | null |
Share your Model instead of your Data: Privacy Preserving Mimic Learning
for Ranking | cs.IR cs.AI cs.CL cs.LG | Deep neural networks have become a primary tool for solving problems in many
fields. They are also used for addressing information retrieval problems and
show strong performance in several tasks. Training these models requires large,
representative datasets and for most IR tasks, such data contains sensitive
information from users. Privacy and confidentiality concerns prevent many data
owners from sharing the data, thus today the research community can only
benefit from research on large-scale datasets in a limited manner. In this
paper, we discuss privacy preserving mimic learning, i.e., using predictions
from a privacy preserving trained model instead of labels from the original
sensitive training data as a supervision signal. We present the results of
preliminary experiments in which we apply the idea of mimic learning and
privacy preserving mimic learning for the task of document re-ranking as one of
the core IR tasks. This research is a step toward laying the ground for
enabling researchers from data-rich environments to share knowledge learned
from actual users' data, which should facilitate research collaborations.
| Mostafa Dehghani, Hosein Azarbonyad, Jaap Kamps, Maarten de Rijke | null | 1707.07605 | null | null |
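A minimal sketch of the mimic-learning loop described above, with scikit-learn models standing in for the teacher and student; the privacy-preserving training of the teacher itself is omitted, and all data here is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_private = rng.random((2000, 10))            # sensitive user data (never shared)
y_private = (X_private.sum(axis=1) > 5).astype(int)
X_public = rng.random((2000, 10))             # non-sensitive, shareable inputs

teacher = RandomForestClassifier(random_state=0).fit(X_private, y_private)
y_mimic = teacher.predict(X_public)           # predictions as supervision signal

student = LogisticRegression().fit(X_public, y_mimic)   # the model that is shared
```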
A Deep Learning Approach to Digitally Stain Optical Coherence Tomography
Images of the Optic Nerve Head | cs.LG | Purpose: To develop a deep learning approach to digitally stain optical
coherence tomography (OCT) images of the optic nerve head (ONH).
Methods: A horizontal B-scan was acquired through the center of the ONH using
OCT (Spectralis) for 1 eye of each of 100 subjects (40 normal & 60 glaucoma).
All images were enhanced using adaptive compensation. A custom deep learning
network was then designed and trained with the compensated images to digitally
stain (i.e. highlight) 6 tissue layers of the ONH. The accuracy of our
algorithm was assessed (against manual segmentations) using the Dice
coefficient, sensitivity, and specificity. We further studied how compensation
and the number of training images affected the performance of our algorithm.
Results: For images it had not yet assessed, our algorithm was able to
digitally stain the retinal nerve fiber layer + prelamina, the retinal pigment
epithelium, all other retinal layers, the choroid, and the peripapillary sclera
and lamina cribrosa. For all tissues, the mean Dice coefficient was $0.84 \pm
0.03$, the mean sensitivity $0.92 \pm 0.03$, and the mean specificity $0.99 \pm
0.00$. Our algorithm performed significantly better when compensated images
were used for training. Increasing the number of images (from 10 to 40) to
train our algorithm did not significantly improve performance, except for the
RPE.
Conclusion: Our deep learning algorithm can simultaneously stain neural and
connective tissues in ONH images. Our approach offers a framework to
automatically measure multiple key structural parameters of the ONH that may be
critical to improve glaucoma management.
| Sripad Krishna Devalla, Jean-Martial Mari, Tin A. Tun, Nicholas G.
Strouthidis, Tin Aung, Alexandre H. Thiery, Michael J. A. Girard | null | 1707.07609 | null | null |
Comparison of Decision Tree Based Classification Strategies to Detect
External Chemical Stimuli from Raw and Filtered Plant Electrical Response | physics.bio-ph cs.LG physics.data-an stat.AP stat.ML | Plants monitor their surrounding environment and control their physiological
functions by producing an electrical response. We recorded electrical signals
from different plants by exposing them to Sodium Chloride (NaCl), Ozone (O3)
and Sulfuric Acid (H2SO4) under laboratory conditions. After applying
pre-processing techniques such as filtering and drift removal, we extracted few
statistical features from the acquired plant electrical signals. Using these
features, combined with different classification algorithms, we used a decision
tree based multi-class classification strategy to identify the three different
external chemical stimuli. Here we present our exploration to obtain the
optimal combination of ranked features and classifier that can separate a
particular chemical stimulus from the incoming stream of plant electrical
signals. The paper also reports an exhaustive comparison of similar feature
based classification using the filtered and the raw plant signals, containing
the high frequency stochastic part and also the low frequency trends present in
it, as two different cases for feature extraction. The work presented in this
paper opens up new possibilities for using plant electrical signals to monitor
and detect other environmental stimuli apart from NaCl, O3 and H2SO4 in future.
| Shre Kumar Chatterjee, Saptarshi Das, Koushik Maharatna, Elisa Masi,
Luisa Santopolo, Ilaria Colzi, Stefano Mancuso and Andrea Vitaletti | 10.1016/j.snb.2017.04.071 | 1707.0762 | null | null |
Engineering fast multilevel support vector machines | cs.LG cs.DS stat.CO stat.ML | The computational complexity of solving nonlinear support vector machine
(SVM) is prohibitive on large-scale data. In particular, this issue becomes
especially acute when the data presents additional difficulties such as highly
imbalanced class sizes. Typically, nonlinear kernels produce significantly
higher classification quality than linear kernels but introduce extra kernel and
model parameters which require computationally expensive fitting. This
increases the quality but also reduces the performance dramatically. We
introduce a generalized fast multilevel framework for regular and weighted SVM
and discuss several versions of its algorithmic components that lead to a good
trade-off between quality and time. Our framework is implemented using PETSc
which allows an easy integration with scientific computing tasks. The
experimental results demonstrate significant speed up compared to the
state-of-the-art nonlinear SVM libraries.
Reproducibility: our source code, documentation and parameters are available
at https://github.com/esadr/mlsvm.
| E. Sadrfaridpour, T. Razzaghi, I. Safro | null | 1707.07657 | null | null |
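
The sketch below illustrates the coarsen-train-refine idea behind multilevel
SVM frameworks under strong simplifications: coarsening here is plain k-means
per class and only a single refinement level is shown, whereas the paper's
framework builds a full multilevel hierarchy (and is implemented in PETSc,
not scikit-learn).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

def coarsen(X, y, k):
    """Replace each class by k cluster centroids (a crude coarsening)."""
    Xc, yc = [], []
    for label in np.unique(y):
        km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(X[y == label])
        Xc.append(km.cluster_centers_)
        yc.extend([label] * k)
    return np.vstack(Xc), np.array(yc)

Xc, yc = coarsen(X, y, k=100)
coarse_svm = SVC(kernel="rbf").fit(Xc, yc)   # cheap fit on the coarse level

# Refine: retrain only on fine-level points near the coarse decision boundary.
margin = np.abs(coarse_svm.decision_function(X))
near = margin < 1.0
fine_svm = SVC(kernel="rbf").fit(X[near], y[near])
print(near.sum(), "of", len(X), "points used at the fine level")
print("fine-level accuracy:", fine_svm.score(X, y))
```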
Per-instance Differential Privacy | stat.ML cs.LG | We consider a refinement of differential privacy --- per instance
differential privacy (pDP), which captures the privacy of a specific individual
with respect to a fixed data set. We show that this is a strict generalization
of the standard DP and inherits all its desirable properties, e.g.,
composition, invariance to side information, and closure under postprocessing,
except that they all hold for every instance separately. When the data is drawn
from a distribution, we show that per-instance DP implies generalization.
Moreover, we provide explicit calculations of the per-instance DP for the
output perturbation on a class of smooth learning problems. The result reveals
an interesting and intuitive fact that an individual has stronger privacy if
he/she has small "leverage score" with respect to the data set and if he/she
can be predicted more accurately using the leave-one-out data set. Our
simulation shows several orders-of-magnitude more favorable privacy and utility
trade-off when we consider the privacy of only the users in the data set. In a
case study on differentially private linear regression, we provide a novel
analysis of the One-Posterior-Sample (OPS) estimator and show that when the
data set is well-conditioned it provides $(\epsilon,\delta)$-pDP for any target
individuals and matches the exact lower bound up to a
$1+\tilde{O}(n^{-1}\epsilon^{-2})$ multiplicative factor. We also demonstrate
how we can use a "pDP to DP conversion" step to design AdaOPS which uses
adaptive regularization to achieve the same results with
$(\epsilon,\delta)$-DP.
| Yu-Xiang Wang | null | 1707.07708 | null | null |
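
To make the "leverage score" intuition concrete, here is a small sketch
computing ridge-regularized leverage scores
$h_i = x_i^\top (X^\top X + \lambda I)^{-1} x_i$; the regularization and the
toy data are assumptions, and the paper's exact pDP calculations involve more
than this single quantity.

```python
import numpy as np

def leverage_scores(X, lam=1e-6):
    """h_i = x_i^T (X^T X + lam I)^{-1} x_i; a small h_i suggests stronger
    per-instance privacy for output-perturbation mechanisms."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    Ginv = np.linalg.inv(G)
    return np.einsum("ij,jk,ik->i", X, Ginv, X)

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
X[0] *= 10                    # an atypical, high-leverage individual
h = leverage_scores(X)
print(h[0], h[1:].mean())     # the outlier has a much larger score
```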
Stochastic Gradient Descent for Relational Logistic Regression via
Partial Network Crawls | stat.ML cs.LG | Research in statistical relational learning has produced a number of methods
for learning relational models from large-scale network data. While these
methods have been successfully applied in various domains, they have been
developed under the unrealistic assumption of full data access. In practice,
however, the data are often collected by crawling the network, due to
proprietary access, limited resources, and privacy concerns. Recently, we
showed that the parameter estimates for relational Bayes classifiers computed
from network samples collected by existing network crawlers can be quite
inaccurate, and developed a crawl-aware estimation method for such models
(Yang, Ribeiro, and Neville, 2017). In this work, we extend the methodology to
learning relational logistic regression models via stochastic gradient descent
from partial network crawls, and show that the proposed method yields accurate
parameter estimates and confidence intervals.
| Jiasen Yang, Bruno Ribeiro, Jennifer Neville | null | 1707.07716 | null | null |
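
A minimal sketch of stochastic gradient descent for logistic regression with
inverse-inclusion-probability weights, the usual Horvitz-Thompson-style
correction for non-uniform crawl-based samples; the exact crawl-aware
correction used by the authors may differ.

```python
import numpy as np

def sgd_logistic(X, y, incl_prob, lr=0.1, epochs=20, seed=0):
    """SGD for logistic regression; each example's gradient is reweighted
    by 1/incl_prob to correct for crawl-based sampling (an assumed scheme)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))
            grad = (p - y[i]) * X[i] / incl_prob[i]
            w -= lr * grad
    return w

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.standard_normal(500) > 0).astype(float)
incl = rng.uniform(0.2, 1.0, size=500)  # hypothetical crawl inclusion probabilities
print(sgd_logistic(X, y, incl))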
Bellman Gradient Iteration for Inverse Reinforcement Learning | cs.LG cs.RO | This paper develops an inverse reinforcement learning algorithm aimed at
recovering a reward function from the observed actions of an agent. We
introduce a strategy to flexibly handle different types of actions with two
approximations of the Bellman Optimality Equation, and a Bellman Gradient
Iteration method to compute the gradient of the Q-value with respect to the
reward function. These methods allow us to build a differentiable relation
between the Q-value and the reward function and learn an approximately optimal
reward function with gradient methods. We test the proposed method in two
simulated environments by evaluating the accuracy of different approximations
and comparing the proposed method with existing solutions. The results show
that even with a linear reward function, the proposed method achieves
accuracy comparable to that of the state-of-the-art method adopting a
non-linear reward
function, and the proposed method is more flexible because it is defined on
observed actions instead of trajectories.
| Kun Li, Yanan Sui, Joel W. Burdick | null | 1707.07767 | null | null |
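
One standard way to obtain a differentiable Bellman operator, in the spirit
of the approximations above, is to replace the max over actions with a
log-sum-exp soft maximum; the sketch below uses that substitution, which is an
assumption about the paper's specific approximations rather than a
reproduction of them.

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(P, r, gamma=0.9, k=10.0, iters=200):
    """Value iteration with max_a replaced by the smooth approximation
    V(s) = (1/k) log sum_a exp(k Q(s,a)), so Q is differentiable in r.
    P: (A, S, S) transition tensor, r: (S,) state reward."""
    V = np.zeros(len(r))
    for _ in range(iters):
        Q = r[:, None] + gamma * np.einsum("ast,t->sa", P, V)
        V = logsumexp(k * Q, axis=1) / k
    return Q, V

# Tiny random MDP: 2 actions, 4 states, reward only in the last state.
rng = np.random.default_rng(4)
P = rng.random((2, 4, 4)); P /= P.sum(axis=2, keepdims=True)
Q, V = soft_value_iteration(P, r=np.array([0.0, 0.0, 0.0, 1.0]))
print(V)
```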
Exact Identification of a Quantum Change Point | quant-ph cs.LG | The detection of change points is a pivotal task in statistical analysis. In
the quantum realm, it is a new primitive where one aims at identifying the
point where a source that supposedly prepares a sequence of particles in
identical quantum states starts preparing a mutated one. We obtain the optimal
procedure to identify the change point with certainty---naturally at the price
of having a certain probability of getting an inconclusive answer. We obtain
the analytical form of the optimal probability of successful identification for
any length of the particle sequence. We show that the conditional success
probabilities of identifying each possible change point show an unexpected
oscillatory behaviour. We also discuss local (online) protocols and compare
them with the optimal procedure.
| Gael Sent\'is, John Calsamiglia, Ramon Munoz-Tapia | 10.1103/PhysRevLett.119.140506 | 1707.07769 | null | null |
Desensitized RDCA Subspaces for Compressive Privacy in Machine Learning | cs.CR cs.LG | The quest for better data analysis and artificial intelligence has led to
more and more data being collected and stored. As a consequence, more data are
exposed to malicious entities. This paper examines the problem of privacy in
machine learning for classification. We utilize the Ridge Discriminant
Component Analysis (RDCA) to desensitize data with respect to a privacy label.
Based on five experiments, we show that desensitization by RDCA can effectively
protect privacy (i.e. low accuracy on the privacy label) with small loss in
utility. On the HAR and CMU Faces datasets, the use of desensitized data
results in random-guess-level accuracies for privacy, at an average cost of
5.14% and 0.04% drops in the utility accuracies, respectively. For the
Semeion Handwritten Digit dataset,
accuracies of the privacy-sensitive digits are almost zero, while the
accuracies for the utility-relevant digits drop by 7.53% on average. This
presents a promising solution to the problem of privacy in machine learning for
classification.
| Artur Filipowicz, Thee Chanyaswad, S. Y. Kung | null | 1707.0777 | null | null |
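
A sketch of the generalized-eigenproblem view of ridge discriminant analysis
used here for desensitization: directions with small eigenvalues are nearly
uninformative about the privacy label, so projecting onto them removes
privacy-relevant structure. The exact RDCA formulation in the paper may
differ from this simplification.

```python
import numpy as np
from scipy.linalg import eigh

def ridge_discriminant_components(X, y, rho=1.0):
    """Solve S_b v = lambda (S_w + rho I) v. Low-eigenvalue directions carry
    little information about y (a sketch, not the paper's exact algorithm)."""
    mu = X.mean(axis=0)
    Sb = np.zeros((X.shape[1],) * 2)
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        d = (Xc.mean(axis=0) - mu)[:, None]
        Sb += len(Xc) * d @ d.T
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    vals, vecs = eigh(Sb, Sw + rho * np.eye(X.shape[1]))
    return vals, vecs   # eigenvalues ascending; low ones = privacy-safe subspace

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 6))
y = (X[:, 0] > 0).astype(int)        # hypothetical privacy label
vals, vecs = ridge_discriminant_components(X, y)
X_desensitized = X @ vecs[:, :-1]    # drop the most privacy-discriminative direction
print(vals)
```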
Comparing Aggregators for Relational Probabilistic Models | stat.ML cs.LG | Relational probabilistic models have the challenge of aggregation, where one
variable depends on a population of other variables. Consider the problem of
predicting gender from movie ratings; this is challenging because the number of
movies per user and users per movie can vary greatly. Surprisingly, aggregation
is not well understood. In this paper, we show that existing relational models
(implicitly or explicitly) either use simple numerical aggregators that
discard large amounts of information, or correspond to naive Bayes, logistic
regression, or noisy-OR that suffer from overconfidence. We propose new simple
aggregators and simple modifications of existing models that empirically
outperform the existing ones. The intuition we provide on different (existing
or new) models and their shortcomings plus our empirical findings promise to
form the foundation for future representations.
| Seyed Mehran Kazemi, Bahare Fatemi, Alexandra Kim, Zilun Peng, Moumita
Roy Tora, Xing Zeng, Matthew Dirks, David Poole | null | 1707.07785 | null | null |
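
To illustrate the aggregation problem, the snippet below compares a few
standard aggregators on populations of different sizes; note how the
count-sensitive noisy-OR saturates as the population grows, the
overconfidence effect the abstract describes. These are the standard
aggregators, not the authors' new proposals.

```python
import numpy as np

def aggregate(values, how):
    """Aggregate a variable-length population of parent values into one
    feature for a relational model."""
    v = np.asarray(values, dtype=float)
    if how == "mean":
        return v.mean()
    if how == "count":
        return float(len(v))
    if how == "max":
        return v.max()
    if how == "noisy_or":             # P(at least one cause fires)
        return 1.0 - np.prod(1.0 - v)
    raise ValueError(how)

# A user who rated 3 movies vs. one who rated 200.
few = [0.6, 0.7, 0.5]
many = [0.6] * 200
for how in ["mean", "count", "max", "noisy_or"]:
    print(how, aggregate(few, how), aggregate(many, how))
```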
Concept Drift Detection and Adaptation with Hierarchical Hypothesis
Testing | stat.ML cs.LG | A fundamental issue for statistical classification models in a streaming
environment is that the joint distribution between predictor and response
variables changes over time (a phenomenon also known as concept drifts), such
that their classification performance deteriorates dramatically. In this paper,
we first present a hierarchical hypothesis testing (HHT) framework that can
detect and also adapt to various concept drift types (e.g., recurrent or
irregular, gradual or abrupt), even in the presence of imbalanced data labels.
A novel concept drift detector, namely Hierarchical Linear Four Rates (HLFR),
is implemented under the HHT framework thereafter. By replacing a widely
acknowledged retraining scheme with an adaptive training strategy, we
further demonstrate that the concept drift adaptation capability of HLFR can be
significantly boosted. A theoretical analysis of the Type-I and Type-II
errors of HLFR is also performed. Experiments on both simulated and real-world
datasets illustrate that our methods outperform state-of-the-art methods in
terms of detection precision, detection delay as well as the adaptability
across different concept drift types.
| Shujian Yu, Zubin Abraham, Heng Wang, Mohak Shah, Yantao Wei and
Jos\'e C. Pr\'incipe | null | 1707.07821 | null | null |
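
A toy two-layer detector in the spirit of hierarchical hypothesis testing: a
cheap window-comparison test flags candidate drift points, and a permutation
test confirms them to suppress false alarms. HLFR's actual layer-1 statistic
tracks four classification rates, which this sketch does not reproduce.

```python
import numpy as np

def layer1_flag(errs, win=100, thresh=0.1):
    """Cheap detector: compare error rates in two adjacent windows."""
    if len(errs) < 2 * win:
        return False
    return abs(np.mean(errs[-win:]) - np.mean(errs[-2*win:-win])) > thresh

def layer2_confirm(a, b, n_perm=1000, alpha=0.01, seed=0):
    """Permutation test on the two windows to control false alarms."""
    rng = np.random.default_rng(seed)
    obs = abs(np.mean(a) - np.mean(b))
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(np.mean(pooled[:len(a)]) - np.mean(pooled[len(a):])) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1) < alpha

# Stream with an abrupt drift at t=500 (error rate 0.1 -> 0.4).
rng = np.random.default_rng(6)
errs = np.concatenate([rng.random(500) < 0.1, rng.random(500) < 0.4]).astype(float)
for t in range(200, 1000, 50):
    if layer1_flag(errs[:t]) and layer2_confirm(errs[t-200:t-100], errs[t-100:t]):
        print("drift confirmed at t =", t)
        break
```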
Linear Discriminant Generative Adversarial Networks | stat.ML cs.LG | We develop a novel method for training GANs for unsupervised and
class-conditional generation of images, called Linear Discriminant GAN
(LD-GAN). The
discriminator of an LD-GAN is trained to maximize the linear separability
between distributions of hidden representations of generated and targeted
samples, while the generator is updated based on the decision hyper-planes
computed by performing LDA over the hidden representations. LD-GAN provides a
concrete metric of separation capacity for the discriminator, and we
experimentally show that it is possible to stabilize the training of LD-GAN
simply by calibrating the update frequencies between generators and
discriminators in the unsupervised case, without employing normalization
methods or constraints on weights. In class-conditional generation tasks,
the proposed method shows improved training stability together with better
generalization performance compared to WGAN that employs an auxiliary
classifier.
| Zhun Sun, Mete Ozay, Takayuki Okatani | null | 1707.07831 | null | null |
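
A sketch of the linear-separability measure an LD-GAN discriminator could
maximize over hidden representations: the two-class Fisher criterion along
the LDA direction. The full adversarial training loop and the generator
update based on the LDA hyperplanes are omitted.

```python
import numpy as np

def fisher_separability(h_real, h_fake, eps=1e-6):
    """Two-class Fisher criterion on hidden features: ratio of between-class
    to within-class scatter along the LDA direction (a sketch of the
    separability measure; LD-GAN's exact objective is in the paper)."""
    m_r, m_f = h_real.mean(axis=0), h_fake.mean(axis=0)
    Sw = np.cov(h_real, rowvar=False) + np.cov(h_fake, rowvar=False)
    w = np.linalg.solve(Sw + eps * np.eye(Sw.shape[0]), m_r - m_f)  # LDA direction
    between = (w @ (m_r - m_f)) ** 2
    within = w @ Sw @ w
    return between / within

rng = np.random.default_rng(7)
h_real = rng.standard_normal((256, 16)) + 0.5   # hypothetical hidden features
h_fake = rng.standard_normal((256, 16))
print(fisher_separability(h_real, h_fake))
```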
Partial Transfer Learning with Selective Adversarial Networks | cs.LG | Adversarial learning has been successfully embedded into deep networks to
learn transferable features, which reduce distribution discrepancy between the
source and target domains. Existing domain adversarial networks assume fully
shared label space across domains. In the presence of big data, there is strong
motivation for transferring both classification and representation models from
existing big domains to unknown small domains. This paper introduces partial
transfer learning, which relaxes the shared label space assumption by
requiring only that the target label space be a subspace of the source label
space. Previous
methods typically match the whole source domain to the target domain, which are
prone to negative transfer for the partial transfer problem. We present
Selective Adversarial Network (SAN), which simultaneously circumvents negative
transfer by selecting out the outlier source classes and promotes positive
transfer by maximally matching the data distributions in the shared label
space. Experiments demonstrate that our models exceed state-of-the-art results
for partial transfer learning tasks on several benchmark datasets.
| Zhangjie Cao, Mingsheng Long, Jianmin Wang, Michael I. Jordan | null | 1707.07901 | null | null |
Error Bounds for Piecewise Smooth and Switching Regression | stat.ML cs.LG | The paper deals with regression problems, in which the nonsmooth target is
assumed to switch between different operating modes. Specifically, piecewise
smooth (PWS) regression considers target functions switching deterministically
via a partition of the input space, while switching regression considers
arbitrary switching laws. The paper derives generalization error bounds in
these two settings by following the approach based on Rademacher complexities.
For PWS regression, our derivation involves a chaining argument and a
decomposition of the covering numbers of PWS classes in terms of the ones of
their component functions and the capacity of the classifier partitioning the
input space. This yields error bounds with a radical dependency on the number
of modes. For switching regression, the decomposition can be performed directly
at the level of the Rademacher complexities, which yields bounds with a linear
dependency on the number of modes. By using once more chaining and a
decomposition at the level of covering numbers, we show how to recover a
radical dependency. Examples of applications are given in particular for PWS
and switching regression with linear and kernel-based component functions.
| Fabien Lauer (ABC) | null | 1707.07938 | null | null |
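
For context, the Rademacher-complexity template such derivations typically
start from is, for losses bounded in $[0,1]$ and with probability at least
$1-\delta$ over an i.i.d. sample of size $n$ (the paper's contribution lies
in bounding the complexity term for PWS and switching classes, yielding the
radical or linear dependency on the number of modes):

```latex
% Standard Rademacher generalization bound for a loss class l o F in [0,1].
\[
  \mathbb{E}\,\ell(f(X),Y)
  \;\le\; \frac{1}{n}\sum_{i=1}^{n}\ell(f(x_i),y_i)
  \;+\; 2\,\mathcal{R}_n(\ell\circ\mathcal{F})
  \;+\; \sqrt{\frac{\log(1/\delta)}{2n}} .
\]
```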
Towards Evolutional Compression | stat.ML cs.LG | Compressing convolutional neural networks (CNNs) is essential for
transferring the success of CNNs to a wide variety of applications on mobile
devices. In contrast to directly recognizing subtle weights or filters as
redundant in a given CNN, this paper presents an evolutionary method to
automatically eliminate redundant convolution filters. We represent each
compressed network as a binary individual of specific fitness. Then, the
population is upgraded at each evolutionary iteration using genetic operations.
As a result, an extremely compact CNN is generated using the fittest
individual. In this approach, either large or small convolution filters can be
redundant, and filters in the compressed network are more distinct. In
addition, since the number of filters in each convolutional layer is reduced,
the number of filter channels and the size of feature maps are also decreased,
naturally improving both the compression and speed-up ratios. Experiments on
benchmark deep CNN models suggest the superiority of the proposed algorithm
over the state-of-the-art compression methods.
| Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, Dacheng Tao | null | 1707.08005 | null | null |
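
A toy sketch of the evolutionary search over binary filter masks described
above; fitness here is a synthetic proxy (kept "importance" minus a size
penalty), whereas the actual method evaluates real compressed CNNs.

```python
import numpy as np

rng = np.random.default_rng(8)
N_FILTERS, POP, GENS = 64, 20, 30
importance = rng.random(N_FILTERS)   # stand-in for each filter's utility

def fitness(mask, lam=0.5):
    """Proxy accuracy (kept importance) minus a compression penalty."""
    return importance[mask.astype(bool)].sum() / importance.sum() - lam * mask.mean()

pop = rng.integers(0, 2, size=(POP, N_FILTERS))
for _ in range(GENS):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]            # selection
    cut = rng.integers(1, N_FILTERS, size=POP // 2)
    kids = np.array([np.concatenate([parents[i % len(parents)][:c],
                                     parents[(i + 1) % len(parents)][c:]])
                     for i, c in enumerate(cut)])            # one-point crossover
    flip = rng.random(kids.shape) < 0.02                     # mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("kept", best.sum(), "of", N_FILTERS, "filters")
```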