title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
A Meta-Learning Approach to One-Step Active Learning | cs.LG | We consider the problem of learning when obtaining the training labels is
costly, which is usually tackled in the literature using active-learning
techniques. These approaches provide strategies to choose the examples to label
before or during training. These strategies are usually based on heuristics or
theoretical measures; they are not themselves learned but applied directly
during training. We design a model which aims at \textit{learning active-learning
strategies} using a meta-learning setting. More specifically, we consider a
pool-based setting, where the system observes all the examples of the dataset
of a problem and has to choose the subset of examples to label in a single
shot. Experiments show encouraging results.
| Gabriella Contardo, Ludovic Denoyer, Thierry Artieres | null | 1706.08334 | null | null |
GPU-acceleration for Large-scale Tree Boosting | stat.ML cs.DC cs.LG | In this paper, we present a novel massively parallel algorithm for
accelerating the decision tree building procedure on GPUs (Graphics Processing
Units), which is a crucial step in Gradient Boosted Decision Tree (GBDT) and
random forests training. Previous GPU based tree building algorithms are based
on parallel multi-scan or radix sort to find the exact tree split, and thus
suffer from scalability and performance issues. We show that using a histogram
based algorithm to approximately find the best split is more efficient and
scalable on GPU. By identifying the difference between classical GPU-based
image histogram construction and the feature histogram construction in decision
tree training, we develop a fast feature histogram building kernel on GPU with
carefully designed computational and memory access sequence to reduce atomic
update conflict and maximize GPU utilization. Our algorithm can be used as a
drop-in replacement for histogram construction in popular tree boosting systems
to improve their scalability. As an example, to train GBDT on epsilon dataset,
our method using a mainstream GPU is 7-8 times faster than the histogram-based
algorithm on CPU in LightGBM and 25 times faster than the exact-split finding
algorithm in XGBoost on a dual-socket 28-core Xeon server, while achieving
similar prediction accuracy.
| Huan Zhang, Si Si, Cho-Jui Hsieh | null | 1706.08359 | null | null |
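
A minimal NumPy sketch of histogram-based split finding, the core idea the paper accelerates on GPU: bin a feature, accumulate gradient/hessian histograms, and scan bin boundaries for the best split. Bin count and the gain criterion here are common illustrative choices, not the authors' exact kernel.

```python
import numpy as np

def best_histogram_split(feature, gradients, hessians, n_bins=32):
    """Bin one feature and scan bin boundaries for the best split."""
    edges = np.quantile(feature, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.searchsorted(edges, feature)            # bin index per sample
    g_hist = np.bincount(bins, weights=gradients, minlength=n_bins)
    h_hist = np.bincount(bins, weights=hessians, minlength=n_bins)

    g_total, h_total = g_hist.sum(), h_hist.sum()
    best_gain, best_bin = 0.0, None
    g_left = h_left = 0.0
    for b in range(n_bins - 1):                       # prefix scan: left vs right
        g_left += g_hist[b]; h_left += h_hist[b]
        g_right, h_right = g_total - g_left, h_total - h_left
        if h_left < 1e-12 or h_right < 1e-12:
            continue
        gain = g_left**2 / h_left + g_right**2 / h_right - g_total**2 / h_total
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_gain, best_bin

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
g = np.where(x > 0.3, -1.0, 1.0) + rng.normal(scale=0.1, size=x.size)
print(best_histogram_split(x, g, np.ones_like(g)))
```
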
Approximate Steepest Coordinate Descent | cs.LG math.OC | We propose a new selection rule for the coordinate selection in coordinate
descent methods for huge-scale optimization. The efficiency of this novel
scheme is provably better than the efficiency of uniformly random selection,
and can reach the efficiency of steepest coordinate descent (SCD), enabling an
acceleration of a factor of up to $n$, the number of coordinates. In many
practical applications, our scheme can be implemented at no extra cost, with
computational efficiency very close to that of the faster uniform selection. Numerical
experiments with Lasso and Ridge regression show promising improvements, in
line with our theoretical guarantees.
| Sebastian U. Stich, Anant Raj, Martin Jaggi | null | 1706.08427 | null | null |
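
A toy NumPy contrast between uniform and steepest (Gauss-Southwell) coordinate selection on ridge regression. Exact steepest selection below costs a full gradient per step; the paper's contribution is an *approximate* steepest rule at roughly the cost of uniform sampling, which this simplified sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 50, 0.1
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
L = (A**2).sum(axis=0) + lam              # per-coordinate smoothness constants
G = A.T @ A                               # precomputed for cheap gradient updates

def coordinate_descent(rule, steps=2000):
    x = np.zeros(d)
    grad = A.T @ (A @ x - b) + lam * x
    for _ in range(steps):
        j = int(np.argmax(np.abs(grad))) if rule == "steepest" \
            else int(rng.integers(d))
        delta = -grad[j] / L[j]           # exact minimization along coordinate j
        x[j] += delta
        grad += delta * G[:, j]           # rank-one update of the data term
        grad[j] += lam * delta            # ... and of the ridge term
    return 0.5 * np.sum((A @ x - b)**2) + 0.5 * lam * np.sum(x**2)

for rule in ("uniform", "steepest"):
    print(rule, round(coordinate_descent(rule), 4))
```
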
Efficiency of quantum versus classical annealing in non-convex learning
problems | quant-ph cond-mat.dis-nn cs.LG stat.ML | Quantum annealers aim at solving non-convex optimization problems by
exploiting cooperative tunneling effects to escape local minima. The underlying
idea consists in designing a classical energy function whose ground states are
the sought optimal solutions of the original optimization problem and add a
controllable quantum transverse field to generate tunneling processes. A key
challenge is to identify classes of non-convex optimization problems for which
quantum annealing remains efficient while thermal annealing fails. We show that
this happens for a wide class of problems which are central to machine
learning. Their energy landscapes are dominated by local minima that cause an
exponential slowdown of classical thermal annealers, while simulated quantum
annealing converges efficiently to rare dense regions of optimal solutions.
| Carlo Baldassi, Riccardo Zecchina | 10.1073/pnas.1711456115 | 1706.08470 | null | null |
Cognitive Subscore Trajectory Prediction in Alzheimer's Disease | stat.ML cs.LG | Accurate diagnosis of Alzheimer's Disease (AD) entails clinical evaluation of
multiple cognition metrics and biomarkers. Metrics such as the Alzheimer's
Disease Assessment Scale - Cognitive test (ADAS-cog) comprise multiple
subscores that quantify different aspects of a patient's cognitive state such
as learning, memory, and language production/comprehension. Although
computer-aided diagnostic techniques for classification of a patient's current
disease state exist, they provide little insight into the relationship between
changes in brain structure and different aspects of a patient's cognitive state
that occur over time in AD. We have developed a Convolutional Neural Network
architecture that can concurrently predict the trajectories of the 13 subscores
that make up a subject's ADAS-cog examination results from a current minimally
preprocessed structural MRI scan up to 36 months from image acquisition time
without resorting to manual feature extraction. Mean performance metrics are
within range of those of existing techniques that require manual feature
selection and are limited to predicting aggregate scores.
| Lev E. Givon (1), Laura J. Mariano (1), David O'Dowd (1), John M.
Irvine (1), Abraham R. Schneider (1) ((1) The Charles Stark Draper
Laboratory, Inc.) | null | 1706.08491 | null | null |
Spectrally-normalized margin bounds for neural networks | cs.LG cs.NE stat.ML | This paper presents a margin-based multiclass generalization bound for neural
networks that scales with their margin-normalized "spectral complexity": their
Lipschitz constant, meaning the product of the spectral norms of the weight
matrices, times a certain correction factor. This bound is empirically
investigated for a standard AlexNet network trained with SGD on the mnist and
cifar10 datasets, with both original and random labels; the bound, the
Lipschitz constants, and the excess risks are all in direct correlation,
suggesting both that SGD selects predictors whose complexity scales with the
difficulty of the learning task, and secondly that the presented bound is
sensitive to this complexity.
| Peter Bartlett, Dylan J. Foster, Matus Telgarsky | null | 1706.08498 | null | null |
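
A hedged sketch of the margin-normalized spectral complexity described above: the product of the spectral norms of the weight matrices divided by the margin. The correction factor from the paper is omitted here, and the random weights are stand-ins for a trained network.

```python
import numpy as np

def spectral_complexity(weights, margin):
    # Lipschitz constant of the linear maps: product of largest singular values
    lipschitz = np.prod([np.linalg.norm(W, ord=2) for W in weights])
    return lipschitz / margin

rng = np.random.default_rng(0)
layers = [rng.normal(size=(64, 32)), rng.normal(size=(32, 10))]
print(spectral_complexity(layers, margin=0.5))
```
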
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash
Equilibrium | cs.LG stat.ML | Generative Adversarial Networks (GANs) excel at creating realistic images
with complex models for which maximum likelihood is infeasible. However, the
convergence of GAN training has still not been proved. We propose a two
time-scale update rule (TTUR) for training GANs with stochastic gradient
descent on arbitrary GAN loss functions. TTUR has an individual learning rate
for both the discriminator and the generator. Using the theory of stochastic
approximation, we prove that the TTUR converges under mild assumptions to a
stationary local Nash equilibrium. The convergence carries over to the popular
Adam optimization, for which we prove that it follows the dynamics of a heavy
ball with friction and thus prefers flat minima in the objective landscape. For
the evaluation of the performance of GANs at image generation, we introduce the
"Fr\'echet Inception Distance" (FID) which captures the similarity of generated
images to real ones better than the Inception Score. In experiments, TTUR
improves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP)
outperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN
Bedrooms, and the One Billion Word Benchmark.
| Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler,
Sepp Hochreiter | null | 1706.08500 | null | null |
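
The FID introduced above has a closed form for Gaussians fitted to feature sets: FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2}). A minimal NumPy/SciPy sketch, using random vectors in place of the Inception activations that would normally be used:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):          # discard tiny imaginary residue
        covmean = covmean.real
    return float(np.sum((mu1 - mu2)**2) + np.trace(c1 + c2 - 2 * covmean))

rng = np.random.default_rng(0)
print(fid(rng.normal(0, 1, (500, 16)), rng.normal(0.5, 1, (500, 16))))
```
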
On conditional parity as a notion of non-discrimination in machine
learning | stat.ML cs.CY cs.LG | We identify conditional parity as a general notion of non-discrimination in
machine learning. In fact, several recently proposed notions of
non-discrimination, including a few counterfactual notions, are instances of
conditional parity. We show that conditional parity is amenable to statistical
analysis by studying randomization as a general mechanism for achieving
conditional parity and a kernel-based test of conditional parity.
| Ya'acov Ritov, Yuekai Sun, Ruofei Zhao | null | 1706.08519 | null | null |
Learning Local Feature Aggregation Functions with Backpropagation | cs.LG stat.ML | This paper introduces a family of local feature aggregation functions and a
novel method to estimate their parameters, such that they generate optimal
representations for classification (or any task that can be expressed as a cost
function minimization problem). To achieve that, we compose the local feature
aggregation function with the classifier cost function and we backpropagate the
gradient of this cost function in order to update the local feature aggregation
function parameters. Experiments on synthetic datasets indicate that our method
discovers parameters that model the class-relevant information in addition to
the local feature space. Further experiments on a variety of motion and visual
descriptors, both on image and video datasets, show that our method outperforms
other state-of-the-art local feature aggregation functions, such as Bag of
Words, Fisher Vectors and VLAD, by a large margin.
| Angelos Katharopoulos, Despoina Paschalidou, Christos Diou and
Anastasios Delopoulos | null | 1706.08580 | null | null |
Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study | stat.ML cs.CV cs.LG | Deep neural networks (DNNs) have achieved unprecedented performance on a wide
range of complex tasks, rapidly outpacing our understanding of the nature of
their solutions. This has caused a recent surge of interest in methods for
rendering modern neural systems more interpretable. In this work, we propose to
address the interpretability problem in modern DNNs using the rich history of
problem descriptions, theories and experimental methods developed by cognitive
psychologists to study the human mind. To explore the potential value of these
tools, we chose a well-established analysis from developmental psychology that
explains how children learn word labels for objects, and applied that analysis
to DNNs. Using datasets of stimuli inspired by the original cognitive
psychology experiments, we find that state-of-the-art one-shot learning models
trained on ImageNet exhibit a similar bias to that observed in humans: they
prefer to categorize objects according to shape rather than color. The
magnitude of this shape bias varies greatly among architecturally identical,
but differently seeded models, and even fluctuates within seeds throughout
training, despite nearly equivalent classification performance. These results
demonstrate the capability of tools from cognitive psychology for exposing
hidden computational properties of DNNs, while concurrently providing us with a
computational model for human word learning.
| Samuel Ritter, David G.T. Barrett, Adam Santoro and Matt M. Botvinick | null | 1706.08606 | null | null |
Fast and robust tensor decomposition with applications to dictionary
learning | cs.LG cs.DS stat.ML | We develop fast spectral algorithms for tensor decomposition that match the
robustness guarantees of the best known polynomial-time algorithms for this
problem based on the sum-of-squares (SOS) semidefinite programming hierarchy.
Our algorithms can decompose a 4-tensor with $n$-dimensional orthonormal
components in the presence of error with constant spectral norm (when viewed as
an $n^2$-by-$n^2$ matrix). The running time is $n^5$ which is close to linear
in the input size $n^4$.
We also obtain algorithms with similar running time to learn sparsely-used
orthogonal dictionaries even when feature representations have constant
relative sparsity and non-independent coordinates.
The only previous polynomial-time algorithms to solve these problems are based
on solving large semidefinite programs. In contrast, our algorithms are easy to
implement directly and are based on spectral projections and tensor-mode
rearrangements.
Our work is inspired by recent work of Hopkins, Schramm, Shi, and Steurer (STOC'16)
that shows how fast spectral algorithms can achieve the guarantees of SOS for
average-case problems. In this work, we introduce general techniques to capture
the guarantees of SOS for worst-case problems.
| Tselil Schramm and David Steurer | null | 1706.08672 | null | null |
Proceedings of the First International Workshop on Deep Learning and
Music | cs.NE cs.LG cs.MM cs.SD | Proceedings of the First International Workshop on Deep Learning and Music,
joint with IJCNN, Anchorage, US, May 17-18, 2017
| Dorien Herremans, Ching-Hua Chuan | 10.13140/RG.2.2.22227.99364/1 | 1706.08675 | null | null |
Controlled Tactile Exploration and Haptic Object Recognition | cs.RO cs.LG | In this paper we propose a novel method for in-hand object recognition. The
method is composed of a grasp stabilization controller and two exploratory
behaviours to capture the shape and the softness of an object. Grasp
stabilization plays an important role in recognizing objects. First, it
prevents the object from slipping and facilitates the exploration of the
object. Second, reaching a stable and repeatable position adds robustness to
the learning algorithm and increases invariance with respect to the way in
which the robot grasps the object. The stable poses are estimated using a
Gaussian mixture model (GMM). We present experimental results showing that
using our method the classifier can successfully distinguish 30 objects. We also
compare our method with a benchmark experiment, in which the grasp
stabilization is disabled. We show, with statistical significance, that our
method outperforms the benchmark method.
| Massimo Regoli, Nawid Jamali, Giorgio Metta and Lorenzo Natale | 10.1109/ICAR.2017.8023495 | 1706.08697 | null | null |
Forecasting and Granger Modelling with Non-linear Dynamical Dependencies | cs.LG stat.ML | Traditional linear methods for forecasting multivariate time series are not
able to satisfactorily model the non-linear dependencies that may exist in
non-Gaussian series. We build on the theory of learning vector-valued functions
in the reproducing kernel Hilbert space and develop a method for learning
prediction functions that accommodate such non-linearities. The method not only
learns the predictive function but also the matrix-valued kernel underlying the
function search space directly from the data. Our approach is based on learning
multiple matrix-valued kernels, each of those composed of a set of input
kernels and a set of output kernels learned in the cone of positive
semi-definite matrices. In addition to superior predictive performance in the
presence of strong non-linearities, our method also recovers the hidden dynamic
relationships between the series and thus is a new alternative to existing
graphical Granger techniques.
| Magda Gregorov\'a, Alexandros Kalousis, and St\'ephane
Marchand-Maillet | null | 1706.08811 | null | null |
Gabor frames and deep scattering networks in audio processing | cs.SD cs.LG | This paper introduces Gabor scattering, a feature extractor based on Gabor
frames and Mallat's scattering transform. By using a simple signal model for
audio signals, specific properties of Gabor scattering are studied. It is shown
that for each layer, specific invariances to certain signal characteristics
occur. Furthermore, deformation stability of the coefficient vector generated
by the feature extractor is derived by using a decoupling technique which
exploits the contractivity of general scattering networks. Deformations are
introduced as changes in spectral shape and frequency modulation. The
theoretical results are illustrated by numerical examples and experiments.
Evaluation on a synthetic and a "real" data set gives numerical evidence that
the invariances encoded by the Gabor scattering transform lead to higher
performance than using the Gabor transform alone, especially when few
training samples are available.
| Roswitha Bammer, Monika D\"orfler and Pavol Harar | 10.3390/axioms8040106 | 1706.08818 | null | null |
TimeNet: Pre-trained deep recurrent neural network for time series
classification | cs.LG | Inspired by the tremendous success of deep Convolutional Neural Networks as
generic feature extractors for images, we propose TimeNet: a deep recurrent
neural network (RNN) trained on diverse time series in an unsupervised manner
using sequence to sequence (seq2seq) models to extract features from time
series. Rather than relying on data from the problem domain, TimeNet attempts
to generalize time series representation across domains by ingesting time
series from several domains simultaneously. Once trained, TimeNet can be used
as a generic off-the-shelf feature extractor for time series. The
representations or embeddings given by a pre-trained TimeNet are found to be
useful for time series classification (TSC). For several publicly available
datasets from the UCR TSC Archive and industrial telematics sensor data from
vehicles, we observe that a classifier learned over the TimeNet embeddings
yields significantly better performance compared to (i) a classifier learned
over the embeddings given by a domain-specific RNN, as well as (ii) a nearest
neighbor classifier based on Dynamic Time Warping.
| Pankaj Malhotra, Vishnu TV, Lovekesh Vig, Puneet Agarwal, Gautam
Shroff | null | 1706.08838 | null | null |
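
A hedged PyTorch sketch of the seq2seq autoencoder idea: a GRU encoder compresses a time series into a fixed vector (the embedding), and a GRU decoder is trained to reconstruct the reversed series from it. The single-layer network, hidden size, and reversed-reconstruction target are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqAE(nn.Module):
    def __init__(self, hidden=60):
        super().__init__()
        self.enc = nn.GRU(1, hidden, batch_first=True)
        self.dec = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def embed(self, x):                    # x: (batch, time, 1)
        _, h = self.enc(x)
        return h[-1]                       # fixed-length embedding

    def forward(self, x):
        h = self.embed(x).unsqueeze(0)     # initial decoder state
        zeros = torch.zeros_like(x)        # decoder driven only by the state
        dec_out, _ = self.dec(zeros, h)
        return self.out(dec_out)

model = Seq2SeqAE()
x = torch.randn(8, 50, 1)
recon = model(x)
loss = nn.functional.mse_loss(recon, torch.flip(x, dims=[1]))
loss.backward()
print(model.embed(x).shape)                # torch.Size([8, 60])
```

Once trained on many series, `embed` would serve as the off-the-shelf feature extractor for a downstream classifier.
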
Preserving Differential Privacy in Convolutional Deep Belief Networks | cs.LG stat.ML | The remarkable development of deep learning in medicine and healthcare domain
presents obvious privacy issues, when deep neural networks are built on users'
personal and highly sensitive data, e.g., clinical records, user profiles,
biomedical images, etc. However, only a few scientific studies on preserving
privacy in deep learning have been conducted. In this paper, we focus on
developing a private convolutional deep belief network (pCDBN), which
essentially is a convolutional deep belief network (CDBN) under differential
privacy. Our main idea of enforcing epsilon-differential privacy is to leverage
the functional mechanism to perturb the energy-based objective functions of
traditional CDBNs, rather than their results. One key contribution of this work
is that we propose the use of Chebyshev expansion to derive the approximate
polynomial representation of objective functions. Our theoretical analysis
shows that we can further derive the sensitivity and error bounds of the
approximate polynomial representation. As a result, preserving differential
privacy in CDBNs is feasible. We applied our model in a health social network,
i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for
human behavior prediction, human behavior classification, and handwriting digit
recognition tasks. Theoretical analysis and rigorous experimental evaluations
show that the pCDBN is highly effective. It significantly outperforms existing
solutions.
| NhatHai Phan, Xintao Wu, and Dejing Dou | null | 1706.08839 | null | null |
Gradient Episodic Memory for Continual Learning | cs.LG cs.AI | One major obstacle towards AI is the poor ability of models to solve new
problems quickly without forgetting previously acquired knowledge. To
better understand this issue, we study the problem of continual learning, where
the model observes, once and one by one, examples concerning a sequence of
tasks. First, we propose a set of metrics to evaluate models learning over a
continuum of data. These metrics characterize models not only by their test
accuracy, but also in terms of their ability to transfer knowledge across
tasks. Second, we propose a model for continual learning, called Gradient
Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial
transfer of knowledge to previous tasks. Our experiments on variants of the
MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when
compared to the state-of-the-art.
| David Lopez-Paz and Marc'Aurelio Ranzato | null | 1706.08840 | null | null |
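
GEM constrains each update so the loss on episodic memories of previous tasks does not increase, solving a small QP with one constraint per previous task. A simplified NumPy sketch of the single-constraint special case, which has a closed-form projection (the general multi-task QP is not reproduced here):

```python
import numpy as np

def project_gradient(g, g_mem):
    """Project g so it no longer conflicts with the memory gradient g_mem."""
    dot = g @ g_mem
    if dot >= 0:                   # no interference with the memory task
        return g
    return g - (dot / (g_mem @ g_mem)) * g_mem

g = np.array([1.0, -2.0])
g_mem = np.array([1.0, 1.0])
print(project_gradient(g, g_mem))  # [1.5, -1.5]: orthogonal to g_mem
```
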
Rate-Distortion Classification for Self-Tuning IoT Networks | cs.NI cs.LG | Many future wireless sensor networks and the Internet of Things are expected
to follow a software defined paradigm, where protocol parameters and behaviors
will be dynamically tuned as a function of the signal statistics. New protocols
will then be injected as software as certain events occur. For instance, new
data compressors could be (re)programmed on-the-fly as the monitored signal
type or its statistical properties change. We consider a lossy compression
scenario, where the application tolerates some distortion of the gathered
signal in return for improved energy efficiency. To reap the full benefits of
this paradigm, we discuss an automatic sensor profiling approach where the
signal class, and in particular the corresponding rate-distortion curve, is
automatically assessed using machine learning tools (namely, support vector
machines and neural networks). We show that this curve can be reliably
estimated on-the-fly through the computation of a small number (from ten to
twenty) of statistical features on time windows of a few hundred samples.
| Davide Zordan, Michele Rossi, Michele Zorzi | null | 1706.08877 | null | null |
Unsupervised Feature Selection Based on Space Filling Concept | stat.ML cs.LG stat.ME | The paper deals with the adaptation of a new measure for the unsupervised
feature selection problems. The proposed measure is based on space filling
concept and is called the coverage measure. This measure was used for judging
the quality of an experimental space filling design. In the present work, the
coverage measure is adapted for selecting the smallest informative subset of
variables by reducing redundancy in data. This paper proposes a simple analogy
to apply this measure. It is implemented in a filter algorithm for unsupervised
feature selection problems.
The proposed filter algorithm is robust with high dimensional data and can be
implemented without extra parameters. Further, it is tested with simulated data
and real world case studies including environmental data and hyperspectral
image. Finally, the results are evaluated by using random forest algorithm.
| Mohamed Laib and Mikhail Kanevski | null | 1706.08894 | null | null |
The Fog of War: A Machine Learning Approach to Forecasting Weather on
Mars | astro-ph.IM cs.LG cs.NE | For over a decade, scientists at NASA's Jet Propulsion Laboratory (JPL) have
been recording measurements from the Martian surface as a part of the Mars
Exploration Rovers mission. One quantity of interest has been the opacity of
Mars's atmosphere for its importance in day-to-day estimations of the amount of
power available to the rover from its solar arrays. This paper proposes the use
of neural networks as a method for forecasting Martian atmospheric opacity that
is more effective than the current empirical model. The more accurate
prediction provided by these networks would allow operators at JPL to make more
accurate predictions of the amount of energy available to the rover when they
plan activities for coming sols.
| Daniele Bellutta | null | 1706.08915 | null | null |
Classical Music Clustering Based on Acoustic Features | cs.IR cs.LG cs.SD | In this paper we cluster 330 classical music pieces collected from MusicNet
database based on their musical note sequences. We use shingling and chord
trajectory matrices to create a signature for each music piece and perform
spectral clustering to find the clusters. Depending on the resolution, the
output clusters distinctly indicate compositions from different classical
music eras and different composing styles of the musicians.
| Xindi Wang and Syed Arefinul Haque | null | 1706.08928 | null | null |
Reexamining Low Rank Matrix Factorization for Trace Norm Regularization | cs.LG stat.ML | Trace norm regularization is a widely used approach for learning low rank
matrices. A standard optimization strategy is based on formulating the problem
as one of low rank matrix factorization which, however, leads to a non-convex
problem. In practice this approach works well, and it is often computationally
faster than standard convex solvers such as proximal gradient methods.
Nevertheless, it is not guaranteed to converge to a global optimum, and the
optimization can be trapped at poor stationary points. In this paper we show
that it is possible to characterize all critical points of the non-convex
problem. This allows us to provide an efficient criterion to determine whether
a critical point is also a global minimizer. Our analysis suggests an iterative
meta-algorithm that dynamically expands the parameter space and allows the
optimization to escape any non-global critical point, thereby converging to a
global minimizer. The algorithm can be applied to problems such as matrix
completion or multitask learning, and our analysis holds for any random
initialization of the factor matrices. Finally, we confirm the good performance
of the algorithm on synthetic and real datasets.
| Carlo Ciliberto, Dimitris Stamos and Massimiliano Pontil | null | 1706.08934 | null | null |
Exploring Generalization in Deep Learning | cs.LG | With a goal of understanding what drives generalization in deep networks, we
consider several recently suggested explanations, including norm-based control,
sharpness and robustness. We study how these measures can ensure
generalization, highlighting the importance of scale normalization, and making
a connection between sharpness and PAC-Bayes theory. We then investigate how
well the measures explain different observed phenomena.
| Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, Nathan
Srebro | null | 1706.08947 | null | null |
Training a Fully Convolutional Neural Network to Route Integrated
Circuits | cs.CV cs.AI cs.LG | We present a deep, fully convolutional neural network that learns to route a
circuit layout net with appropriate choice of metal tracks and wire class
combinations. Inputs to the network are the encoded layouts containing spatial
location of pins to be routed. After 15 fully convolutional stages followed by
a score comparator, the network outputs 8 layout layers (corresponding to 4
route layers, 3 via layers and an identity-mapped pin layer) which are then
decoded to obtain the routed layouts. We formulate this as a binary
segmentation problem on a per-pixel per-layer basis, where the network is
trained to correctly classify pixels in each layout layer to be 'on' or 'off'.
To demonstrate learnability of layout design rules, we train the network on a
dataset of 50,000 train and 10,000 validation samples that we generate based on
certain pre-defined layout constraints. Precision, recall and $F_1$ score
metrics are used to track the training progress. Our network achieves
$F_1\approx97\%$ on the train set and $F_1\approx92\%$ on the validation set.
We use PyTorch for implementing our model. Code is made publicly available at
https://github.com/sjain-stanford/deep-route .
| Sambhav R. Jain, Kye Okabe | null | 1706.08948 | null | null |
The k-means-u* algorithm: non-local jumps and greedy retries improve
k-means++ clustering | cs.LG | We present a new clustering algorithm called k-means-u* which in many cases
is able to significantly improve the clusterings found by k-means++, the
current de-facto standard for clustering in Euclidean spaces. First we
introduce the k-means-u algorithm which starts from a result of k-means++ and
attempts to improve it with a sequence of non-local "jumps" alternated by runs
of standard k-means. Each jump transfers the "least useful" center towards the
center with the largest local error, offset by a small random vector. This is
continued as long as the error decreases and often leads to an improved
solution. Occasionally k-means-u terminates despite obvious remaining
optimization possibilities. By allowing a limited number of retries for the
last jump it is frequently possible to reach better local minima. The resulting
algorithm is called k-means-u* and dominates k-means++ wrt. solution quality
which is demonstrated empirically using various data sets. By construction the
logarithmic quality bound established for k-means++ holds for k-means-u* as
well.
| Bernd Fritzke | null | 1706.09059 | null | null |
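
A hedged sketch of one k-means-u "jump" on top of scikit-learn's k-means++: the "least useful" center is approximated here as the one with the smallest local error, it is moved next to the center with the largest local error (offset by a small random vector), and k-means is re-run, keeping the result only if the error decreased. The paper's exact utility criterion and the retry mechanism of k-means-u* are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def one_jump(X, km, rng):
    labels, centers = km.labels_, km.cluster_centers_.copy()
    per_center_err = np.array([
        np.sum((X[labels == j] - centers[j])**2) for j in range(len(centers))
    ])
    worst = np.argmax(per_center_err)      # center with largest local error
    least = np.argmin(per_center_err)      # crude proxy for "least useful"
    centers[least] = centers[worst] + rng.normal(scale=1e-3, size=X.shape[1])
    km2 = KMeans(n_clusters=len(centers), init=centers, n_init=1).fit(X)
    return km2 if km2.inertia_ < km.inertia_ else km

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (0, 5, 10, 15)])
km = KMeans(n_clusters=4, init="k-means++", n_init=1, random_state=0).fit(X)
print(km.inertia_, one_jump(X, km, rng).inertia_)
```
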
An Actor-Critic Contextual Bandit Algorithm for Personalized Mobile
Health Interventions | stat.ML cs.LG | Increasing technological sophistication and widespread use of smartphones and
wearable devices provide opportunities for innovative and highly personalized
health interventions. A Just-In-Time Adaptive Intervention (JITAI) uses
real-time data collection and communication capabilities of modern mobile
devices to deliver interventions in real-time that are adapted to the
in-the-moment needs of the user. The lack of methodological guidance in
constructing data-based JITAIs remains a hurdle in advancing JITAI research
despite the increasing popularity of JITAIs among clinical scientists. In this
article, we make a first attempt to bridge this methodological gap by
formulating the task of tailoring interventions in real-time as a contextual
bandit problem. Interpretability requirements in the domain of mobile health
lead us to formulate the problem differently from existing formulations
intended for web applications such as ad or news article placement. Under the
assumption of linear reward function, we choose the reward function (the
"critic") parameterization separately from a lower dimensional parameterization
of stochastic policies (the "actor"). We provide an online actor-critic
algorithm that guides the construction and refinement of a JITAI. Asymptotic
properties of the actor-critic algorithm are developed and backed up by
numerical experiments. Additional numerical experiments are conducted to test
the robustness of the algorithm when idealized assumptions used in the analysis
of contextual bandit algorithm are breached.
| Huitian Lei, Yangyi Lu, Ambuj Tewari, Susan A. Murphy | null | 1706.09090 | null | null |
Generative Bridging Network in Neural Sequence Prediction | cs.AI cs.LG stat.ML | In order to alleviate data sparsity and overfitting problems in maximum
likelihood estimation (MLE) for sequence prediction tasks, we propose the
Generative Bridging Network (GBN), in which a novel bridge module is introduced
to assist the training of the sequence prediction model (the generator
network). Unlike MLE directly maximizing the conditional likelihood, the bridge
extends the point-wise ground truth to a bridge distribution conditioned on it,
and the generator is optimized to minimize their KL-divergence. Three different
GBNs, namely uniform GBN, language-model GBN and coaching GBN, are proposed to
penalize confidence, enhance language smoothness and relieve learning burden.
Experiments conducted on two recognized sequence prediction tasks (machine
translation and abstractive text summarization) show that our proposed GBNs can
yield significant improvements over strong baselines. Furthermore, by analyzing
samples drawn from different bridges, expected influences on the generator are
verified.
| Wenhu Chen, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li,
Ming Zhou | null | 1706.09152 | null | null |
Stochastic Bandit Models for Delayed Conversions | cs.LG | Online advertising and product recommendation are important domains of
applications for multi-armed bandit methods. In these fields, the reward that
is immediately available is most often only a proxy for the actual outcome of
interest, which we refer to as a conversion. For instance, in web advertising,
clicks can be observed within a few seconds after an ad display but the
corresponding sale --if any-- will take hours, if not days to happen. This
paper proposes and investigates a new stochastic multi-armed bandit model in
the framework proposed by Chapelle (2014) --based on empirical studies in the
field of web advertising-- in which each action may trigger a future reward
that will then happen with a stochastic delay. We assume that the probability
of conversion associated with each action is unknown while the distribution of
the conversion delay is known, distinguishing between the (idealized) case
where the conversion events may be observed whatever their delay and the more
realistic setting in which late conversions are censored. We provide
performance lower bounds as well as two simple but efficient algorithms based
on the UCB and KLUCB frameworks. The latter algorithm, which is preferable when
conversion rates are low, is based on a Poissonization argument, of independent
interest in other settings where aggregation of Bernoulli observations with
different success probabilities is required.
| Claire Vernade, Olivier Capp\'e, Vianney Perchet | null | 1706.09186 | null | null |
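
A toy simulation of the censored-delay setting, in the spirit of the model: each pull may trigger a conversion after a random (here geometric, known) delay, and only conversions that have already materialized are observed. The estimator divides observed conversions by the expected observable fraction under the known delay distribution; the exact UCB/KLUCB index construction of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
p_true = [0.10, 0.15]            # unknown conversion probabilities
q = 0.2                          # known geometric delay parameter
T = 5000
pull_times = [[] for _ in p_true]
conv_times = [[] for _ in p_true]

for t in range(T):
    ucb = []
    for a in range(len(p_true)):
        pulls = np.array(pull_times[a])
        if pulls.size == 0:
            ucb.append(np.inf)
            continue
        # expected number of conversions already observable by time t
        observable = np.sum(1 - (1 - q) ** (t - pulls))
        n_obs = np.sum(np.array(conv_times[a]) <= t)
        mean = n_obs / max(observable, 1e-9)
        ucb.append(mean + np.sqrt(2 * np.log(t + 1) / pulls.size))
    a = int(np.argmax(ucb))
    pull_times[a].append(t)
    if rng.random() < p_true[a]:               # conversion arrives later
        conv_times[a].append(t + rng.geometric(q))

print([len(p) for p in pull_times])  # the better arm should get most pulls
```
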
Energy-Based Sequence GANs for Recommendation and Their Connection to
Imitation Learning | cs.IR cs.LG stat.ML | Recommender systems aim to find an accurate and efficient mapping from
historic data of user-preferred items to a new item that is to be liked by a
user. Towards this goal, energy-based sequence generative adversarial nets
(EB-SeqGANs) are adopted for recommendation by learning a generative model for
the time series of user-preferred items. By recasting the energy function as
the feature function, the proposed EB-SeqGAN is interpreted as an instance of
maximum-entropy imitation learning.
| Jaeyoon Yoo, Heonseok Ha, Jihun Yi, Jongha Ryu, Chanju Kim, Jung-Woo
Ha, Young-Han Kim, and Sungroh Yoon | null | 1706.09200 | null | null |
Logics and practices of transparency and opacity in real-world
applications of public sector machine learning | cs.CY cs.LG | Machine learning systems are increasingly used to support public sector
decision-making across a variety of sectors. Given concerns around
accountability in these domains, and amidst accusations of intentional or
unintentional bias, there have been increased calls for transparency of these
technologies. Few, however, have considered how logics and practices concerning
transparency have been understood by those involved in the machine learning
systems already being piloted and deployed in public bodies today. This short
paper distils insights about transparency on the ground from interviews with 27
such actors, largely public servants and relevant contractors, across 5 OECD
countries. Considering transparency and opacity in relation to trust and
buy-in, better decision-making, and the avoidance of gaming, it seeks to
provide useful insights for those hoping to develop socio-technical approaches
to transparency that might be useful to practitioners on-the-ground.
An extended, archival version of this paper is available as Veale M., Van
Kleek M., & Binns R. (2018). `Fairness and accountability design needs for
algorithmic support in high-stakes public sector decision-making' Proceedings
of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18),
http://doi.org/10.1145/3173574.3174014.
| Michael Veale | null | 1706.09249 | null | null |
Concentration of tempered posteriors and of their variational
approximations | math.ST cs.LG stat.TH | While Bayesian methods are extremely popular in statistics and machine
learning, their application to massive datasets is often challenging, when
possible at all. Indeed, the classical MCMC algorithms are prohibitively slow
when both the model dimension and the sample size are large. Variational
Bayesian methods aim at approximating the posterior by a distribution in a
tractable family. Thus, MCMC are replaced by an optimization algorithm which is
orders of magnitude faster. VB methods have been applied in computationally
demanding applications including collaborative filtering, image and video
processing, NLP, and text processing. However, despite very
nice results in practice, the theoretical properties of these approximations
are usually not known. In this paper, we propose a general approach to prove
the concentration of variational approximations of fractional posteriors. We
apply our theory to two examples: matrix completion, and Gaussian VB.
| Pierre Alquier and James Ridgway | null | 1706.09293 | null | null |
Alternative Semantic Representations for Zero-Shot Human Action
Recognition | cs.CV cs.IR cs.LG cs.MM | A proper semantic representation for encoding side information is key to the
success of zero-shot learning. In this paper, we explore two alternative
semantic representations especially for zero-shot human action recognition:
textual descriptions of human actions and deep features extracted from still
images relevant to human actions. Such side information is accessible on the Web
at little cost, which paves a new way of gaining side information for
large-scale zero-shot human action recognition. We investigate different
encoding methods to generate semantic representations for human actions from
such side information. Based on our zero-shot visual recognition method, we
conducted experiments on UCF101 and HMDB51 to evaluate two proposed semantic
representations. The results suggest that our proposed text- and image-based
semantic representations outperform traditional attributes and word vectors
considerably for zero-shot human action recognition. In particular, the
image-based semantic representations yield the favourable performance even
though the representation is extracted from a small number of images per class.
| Qian Wang and Ke Chen | null | 1706.09317 | null | null |
Retinal Vessel Segmentation in Fundoscopic Images with Generative
Adversarial Networks | cs.CV cs.LG | Retinal vessel segmentation is an indispensable step for automatic detection
of retinal diseases with fundoscopic images. Though many approaches have been
proposed, existing methods tend to miss fine vessels or allow false positives
at terminal branches. Beyond under-segmentation, over-segmentation is also
problematic when quantitative studies need to measure the precise width of
vessels. In this paper, we present a method that generates the precise map of
retinal vessels using generative adversarial training. Our method achieves a
dice coefficient of 0.829 on the DRIVE dataset and 0.834 on the STARE dataset,
which is state-of-the-art performance on both datasets.
| Jaemin Son, Sang Jun Park, and Kyu-Hwan Jung | null | 1706.09318 | null | null |
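
For reference, the dice coefficient used to score the segmentations above, as a quick NumPy implementation for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    pred, target = pred.astype(bool), target.astype(bool)
    return 2.0 * np.sum(pred & target) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))   # 2*2 / (3 + 3) = 0.667
```
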
autoBagging: Learning to Rank Bagging Workflows with Metalearning | stat.ML cs.LG | Machine Learning (ML) has been successfully applied to a wide range of
domains and applications. One of the techniques behind most of these successful
applications is Ensemble Learning (EL), the field of ML that gave birth to
methods such as Random Forests or Boosting. The complexity of applying these
techniques together with the market scarcity on ML experts, has created the
need for systems that enable a fast and easy drop-in replacement for ML
libraries. Automated machine learning (autoML) is the field of ML that attempts
to answer these needs. Typically, these systems rely on optimization
techniques such as Bayesian optimization to lead the search for the best model.
Our approach differs from these systems by making use of the most recent
advances on metalearning and a learning to rank approach to learn from
metadata. We propose autoBagging, an autoML system that automatically ranks 63
bagging workflows by exploiting past performance and dataset characterization.
Results on 140 classification datasets from the OpenML platform show that
autoBagging can yield better performance than the Average Rank method and
achieve results that are not statistically different from an ideal model that
systematically selects the best workflow for each dataset. For the purpose of
reproducibility and generalizability, autoBagging is publicly available as an R
package on CRAN.
| F\'abio Pinto, V\'itor Cerqueira, Carlos Soares, Jo\~ao Mendes-Moreira | null | 1706.09367 | null | null |
The difference between memory and prediction in linear recurrent
networks | cs.LG | Recurrent networks are trained to memorize their input better, often in the
hopes that such training will increase the ability of the network to predict.
We show that networks designed to memorize input can be arbitrarily bad at
prediction. We also find, for several types of inputs, that one-node networks
optimized for prediction are nearly at upper bounds on predictive capacity
given by Wiener filters, and are roughly equivalent in performance to randomly
generated five-node networks. Our results suggest that maximizing memory
capacity leads to very different networks than maximizing predictive capacity,
and that optimizing recurrent weights can decrease reservoir size by half an
order of magnitude.
| Sarah Marzen | 10.1103/PhysRevE.96.032308 | 1706.09382 | null | null |
Recovery of Missing Samples Using Sparse Approximation via a Convex
Similarity Measure | stat.ML cs.LG | In this paper, we study the missing sample recovery problem using methods
based on sparse approximation. In this regard, we investigate the algorithms
used for solving the inverse problem associated with the restoration of missing
samples of an image signal. This problem is also known as inpainting in the
context of image processing and for this purpose, we suggest an iterative
sparse recovery algorithm based on constrained $l_1$-norm minimization with a
new fidelity metric. The proposed metric called Convex SIMilarity (CSIM) index,
is a simplified version of the Structural SIMilarity (SSIM) index, which is
convex and error-sensitive. The optimization problem incorporating this
criterion, is then solved via Alternating Direction Method of Multipliers
(ADMM). Simulation results show the efficiency of the proposed method for
missing sample recovery of 1D patch vectors and inpainting of 2D image signals.
| Amirhossein Javaheri, Hadi Zayyani, Farokh Marvasti | null | 1706.09395 | null | null |
CatBoost: unbiased boosting with categorical features | cs.LG | This paper presents the key algorithmic techniques behind CatBoost, a new
gradient boosting toolkit. Their combination leads to CatBoost outperforming
other publicly available boosting implementations in terms of quality on a
variety of datasets. Two critical algorithmic advances introduced in CatBoost
are the implementation of ordered boosting, a permutation-driven alternative to
the classic algorithm, and an innovative algorithm for processing categorical
features. Both techniques were created to fight a prediction shift caused by a
special kind of target leakage present in all currently existing
implementations of gradient boosting algorithms. In this paper, we provide a
detailed analysis of this problem and demonstrate that proposed algorithms
solve it effectively, leading to excellent empirical results.
| Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika
Dorogush, Andrey Gulin | null | 1706.09516 | null | null |
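
A hedged sketch of ordered target statistics, the categorical-feature treatment described above: each example is encoded using only the targets of examples that precede it in a random permutation, avoiding the target leakage of a plain mean encoding. The smoothing weight `a` is a common choice, not necessarily CatBoost's default, and ordered boosting itself is not shown.

```python
import numpy as np

def ordered_target_stats(cats, y, a=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    prior = y.mean()
    order = rng.permutation(len(y))
    sums, counts = {}, {}
    enc = np.empty(len(y))
    for i in order:                    # history grows along the permutation
        c = cats[i]
        s, n = sums.get(c, 0.0), counts.get(c, 0)
        enc[i] = (s + a * prior) / (n + a)
        sums[c] = s + y[i]
        counts[c] = n + 1
    return enc

cats = np.array(["a", "b", "a", "a", "b"])
y = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
print(ordered_target_stats(cats, y))
```
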
Neural SLAM: Learning to Explore with External Memory | cs.LG cs.AI cs.RO | We present an approach for agents to learn representations of a global map
from sensor data, to aid their exploration in new environments. To achieve
this, we embed procedures mimicking that of traditional Simultaneous
Localization and Mapping (SLAM) into the soft attention based addressing of
external memory architectures, in which the external memory acts as an internal
representation of the environment. This structure encourages the evolution of
SLAM-like behaviors inside a completely differentiable deep neural network. We
show that this approach can help reinforcement learning agents to successfully
explore new environments where long-term memory is essential. We validate our
approach in both challenging grid-world environments and preliminary Gazebo
experiments. A video of our experiments can be found at: https://goo.gl/G2Vu5y.
| Jingwei Zhang, Lei Tai, Ming Liu, Joschka Boedecker, Wolfram Burgard | null | 1706.09520 | null | null |
Learning to Learn: Meta-Critic Networks for Sample Efficient Learning | cs.LG | We propose a novel and flexible approach to meta-learning for
learning-to-learn from only a few examples. Our framework is motivated by
actor-critic reinforcement learning, but can be applied to both reinforcement
and supervised learning. The key idea is to learn a meta-critic: an
action-value function neural network that learns to criticise any actor trying
to solve any specified task. For supervised learning, this corresponds to the
novel idea of a trainable task-parametrised loss generator. This meta-critic
approach provides a route to knowledge transfer that can flexibly deal with
few-shot and semi-supervised conditions for both reinforcement and supervised
learning. Promising results are shown on both reinforcement and supervised
learning problems.
| Flood Sung, Li Zhang, Tao Xiang, Timothy Hospedales, Yongxin Yang | null | 1706.09529 | null | null |
Distributional Adversarial Networks | cs.LG | We propose a framework for adversarial training that relies on a sample
rather than a single sample point as the fundamental unit of discrimination.
Inspired by discrepancy measures and two-sample tests between probability
distributions, we propose two such distributional adversaries that operate and
predict on samples, and show how they can be easily implemented on top of
existing models. Various experimental results show that generators trained with
our distributional adversaries are much more stable and are remarkably less
prone to mode collapse than traditional models trained with pointwise
prediction discriminators. The application of our framework to domain
adaptation also results in considerable improvement over recent
state-of-the-art.
| Chengtao Li, David Alvarez-Melis, Keyulu Xu, Stefanie Jegelka, Suvrit
Sra | null | 1706.09549 | null | null |
Transforming Musical Signals through a Genre Classifying Convolutional
Neural Network | cs.SD cs.LG cs.MM cs.NE | Convolutional neural networks (CNNs) have been successfully applied on both
discriminative and generative modeling for music-related tasks. For a
particular task, the trained CNN contains information representing the decision
making or the abstracting process. One can hope to manipulate existing music
based on this 'informed' network and create music with new features
corresponding to the knowledge obtained by the network. In this paper, we
propose a method to utilize the stored information from a CNN trained on
musical genre classification task. The network was composed of three
convolutional layers, and was trained to classify five-second song clips into
five different genres. After training, randomly selected clips were modified by
maximizing the sum of outputs from the network layers. In addition to the
potential of such CNNs to produce interesting audio transformation, more
information about the network and the original music could be obtained from the
analysis of the generated features since these features indicate how the
network 'understands' the music.
| S. Geng, G. Ren, M. Ogihara | null | 1706.09553 | null | null |
Music Signal Processing Using Vector Product Neural Networks | cs.SD cs.LG cs.MM cs.NE | We propose a novel neural network model for music signal processing using
vector product neurons and dimensionality transformations. Here, the inputs are
first mapped from real values into three-dimensional vectors then fed into a
three-dimensional vector product neural network where the inputs, outputs, and
weights are all three-dimensional values. Next, the final outputs are mapped
back to the reals. Two methods for dimensionality transformation are proposed,
one via context windows and the other via spectral coloring. Experimental
results on the iKala dataset for blind singing voice separation confirm the
efficacy of our model.
| Z.C. Fan, T.S. Chan, Y.H. Yang, and J.S. R. Jang | null | 1706.09555 | null | null |
Machine listening intelligence | cs.SD cs.LG | This manifesto paper will introduce machine listening intelligence, an
integrated research framework for acoustic and musical signals modelling, based
on signal processing, deep learning and computational musicology.
| C.E. Cella | null | 1706.09557 | null | null |
Audio Spectrogram Representations for Processing with Convolutional
Neural Networks | cs.SD cs.LG cs.MM cs.NE | One of the decisions that arise when designing a neural network for any
application is how the data should be represented in order to be presented to,
and possibly generated by, a neural network. For audio, the choice is less
obvious than it seems to be for visual images, and a variety of representations
have been used for different applications including the raw digitized sample
stream, hand-crafted features, machine discovered features, MFCCs and variants
that include deltas, and a variety of spectral representations. This paper
reviews some of these representations and issues that arise, focusing
particularly on spectrograms for generating audio using neural networks for
style transfer.
| L. Wyse | null | 1706.09559 | null | null |
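
A minimal example of one representation discussed above: a log-magnitude spectrogram computed with SciPy's STFT. The window and hop sizes are arbitrary illustrative choices.

```python
import numpy as np
from scipy.signal import stft

fs = 16_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

f, times, Z = stft(audio, fs=fs, nperseg=1024, noverlap=768)
log_mag = 20 * np.log10(np.abs(Z) + 1e-10)   # dB-scaled magnitude
print(log_mag.shape)                          # (freq bins, time frames)
```
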
Online Convolutional Dictionary Learning | cs.LG cs.CV eess.IV | While a number of different algorithms have recently been proposed for
convolutional dictionary learning, this remains an expensive problem. The
single biggest impediment to learning from large training sets is the memory
requirements, which grow at least linearly with the size of the training set
since all existing methods are batch algorithms. The work reported here
addresses this limitation by extending online dictionary learning ideas to the
convolutional context.
| Jialin Liu and Cristina Garcia-Cardona and Brendt Wohlberg and Wotao
Yin | 10.1109/ICIP.2017.8296573 | 1706.09563 | null | null |
Online Reweighted Least Squares Algorithm for Sparse Recovery and
Application to Short-Wave Infrared Imaging | cs.LG | We address the problem of sparse recovery in an online setting, where random
linear measurements of a sparse signal are revealed sequentially and the
objective is to recover the underlying signal. We propose a reweighted least
squares (RLS) algorithm to solve the problem of online sparse reconstruction,
wherein a system of linear equations is solved using conjugate gradient with
the arrival of every new measurement. The proposed online algorithm is useful
in a setting where one seeks to design a progressive decoding strategy to
reconstruct a sparse signal from linear measurements so that one does not have
to wait until all measurements are acquired. Moreover, the proposed algorithm
is also useful in applications where it is infeasible to process all the
measurements using a batch algorithm, owing to computational and storage
constraints. The number of measurements need not be fixed a priori; rather, one
can keep collecting measurements until the quality of reconstruction is
satisfactory and stop once the reconstruction is sufficiently accurate.
We provide a proof-of-concept by
comparing the performance of our algorithm with the RLS-based batch
reconstruction strategy, known as iteratively reweighted least squares (IRLS),
on natural images. Experiments on a recently proposed focal plane array-based
imaging setup show up to 1 dB improvement in output peak signal-to-noise ratio
as compared with the total variation-based reconstruction.
| Subhadip Mukherjee, Deepak R., Huaijin Chen, Ashok Veeraraghavan, and
Chandra Sekhar Seelamantula | null | 1706.09585 | null | null |
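
A hedged sketch of the batch IRLS baseline mentioned above: minimize ||x||_1 subject to Ax = b by solving a sequence of reweighted least-squares problems. The online, conjugate-gradient variant proposed in the paper is its contribution and is not reproduced here.

```python
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-6):
    x = A.T @ np.linalg.solve(A @ A.T, b)       # least-norm starting point
    for _ in range(iters):
        W = np.diag(1.0 / (np.abs(x) + eps))    # weights ~ 1/|x_i|
        # minimize x^T W x s.t. Ax = b:  x = W^{-1} A^T (A W^{-1} A^T)^{-1} b
        Winv_At = np.linalg.solve(W, A.T)
        x = Winv_At @ np.linalg.solve(A @ Winv_At, b)
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))
x_hat = irls_l1(A, A @ x_true)
print(np.round(np.max(np.abs(x_hat - x_true)), 6))  # should be near zero
```
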
Deep learning bank distress from news and numerical financial data | stat.ML cs.LG | In this paper we focus our attention on the exploitation of the information
contained in financial news to enhance the performance of a classifier of bank
distress. Such information should be analyzed and inserted into the predictive
model in the most efficient way and this task deals with all the issues related
to text analysis and specifically analysis of news media. Among the different
models proposed for such purpose, we investigate one of the possible deep
learning approaches, based on a doc2vec representation of the textual data, a
kind of neural network able to map the sequential and symbolic text input onto
a reduced latent semantic space. Afterwards, a second supervised neural network
is trained combining news data with standard financial figures to classify
banks whether in distressed or tranquil states, based on a small set of known
distress events. Then the final aim is not only the improvement of the
predictive performance of the classifier but also to assess the importance of
news data in the classification process. Does news data really bring more
useful information not contained in standard financial variables? Our results
seem to confirm this hypothesis.
| Paola Cerchiello, Giancarlo Nicola, Samuel Ronnqvist, Peter Sarlin | null | 1706.09627 | null | null |
Image classification using local tensor singular value decompositions | stat.ML cs.LG stat.CO | From linear classifiers to neural networks, image classification has been a
widely explored topic in mathematics, and many algorithms have proven to be
effective classifiers. However, the most accurate classifiers typically have
significantly high storage costs, or require complicated procedures that may be
computationally expensive. We present a novel (nonlinear) classification
approach using truncation of local tensor singular value decompositions (tSVD)
that robustly offers accurate results, while maintaining manageable storage
costs. Our approach takes advantage of the optimality of the representation
under the tensor algebra described to determine to which class an image
belongs. We extend our approach to a method that can determine specific
pairwise match scores, which could be useful in, for example, object
recognition problems where pose/position are different. We demonstrate the
promise of our new techniques on the MNIST data set.
| Elizabeth Newman, Misha Kilmer, and Lior Horesh | null | 1706.09693 | null | null |
A Deep Multimodal Approach for Cold-start Music Recommendation | cs.IR cs.LG | An increasing amount of digital music is being published daily. Music
streaming services often ingest all available music, but this poses a
challenge: how to recommend new artists for which prior knowledge is scarce? In
this work we aim to address this so-called cold-start problem by combining text
and audio information with user feedback data using deep network architectures.
Our method is divided into three steps. First, artist embeddings are learned
from biographies by combining semantics, text features, and aggregated usage
data. Second, track embeddings are learned from the audio signal and available
feedback data. Finally, artist and track embeddings are combined in a
multimodal network. Results suggest that both splitting the recommendation
problem between feature levels (i.e., artist metadata and audio track), and
merging feature embeddings in a multimodal approach improve the accuracy of the
recommendations.
| Sergio Oramas, Oriol Nieto, Mohamed Sordo, Xavier Serra | 10.1145/3125486.3125492 | 1706.09739 | null | null |
Dynamical selection of Nash equilibria using Experience Weighted
Attraction Learning: emergence of heterogeneous mixed equilibria | cs.GT cond-mat.stat-mech cs.LG q-fin.EC | We study the distribution of strategies in a large game that models how
agents choose among different double auction markets. We classify the possible
mean field Nash equilibria, which include potentially segregated states where
an agent population can split into subpopulations adopting different
strategies. As the game is aggregative, the actual equilibrium strategy
distributions remain undetermined, however. We therefore compare with the
results of Experience-Weighted Attraction (EWA) learning, which at long times
leads to Nash equilibria in the appropriate limits of large intensity of
choice, low noise (long agent memory) and perfect imputation of missing scores
(fictitious play). The learning dynamics breaks the indeterminacy of the Nash
equilibria. Non-trivially, depending on how the relevant limits are taken, more
than one type of equilibrium can be selected. These include the standard
homogeneous mixed and heterogeneous pure states, but also \emph{heterogeneous
mixed} states where different agents play different strategies that are not all
pure. The analysis of the EWA learning involves Fokker-Planck modeling combined
with large deviation methods. The theoretical results are confirmed by
multi-agent simulations.
| Robin Nicole and Peter Sollich | 10.1371/journal.pone.0196577 | 1706.09763 | null | null |
Interpretability via Model Extraction | cs.LG cs.CY stat.ML | The ability to interpret machine learning models has become increasingly
important now that machine learning is used to inform consequential decisions.
We propose an approach called model extraction for interpreting complex,
blackbox models. Our approach approximates the complex model using a much more
interpretable model; as long as the approximation quality is good, then
statistical properties of the complex model are reflected in the interpretable
model. We show how model extraction can be used to understand and debug random
forests and neural nets trained on several datasets from the UCI Machine
Learning Repository, as well as control policies learned for several classical
reinforcement learning problems.
| Osbert Bastani and Carolyn Kim and Hamsa Bastani | null | 1706.09773 | null | null |
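A minimal sketch of the extraction step, assuming a scikit-learn random forest
as the black box and a shallow decision tree as the interpretable surrogate;
the paper's own tree-construction and input-distribution sampling are more
refined than this jitter-based stand-in:

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True)
    blackbox = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    # Label (lightly jittered) inputs with the black box, then fit a small tree.
    rng = np.random.default_rng(0)
    X_q = X + rng.normal(scale=0.01, size=X.shape)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_q, blackbox.predict(X_q))

    # Fidelity: how often the surrogate agrees with the black box it mimics.
    print("fidelity:", (surrogate.predict(X) == blackbox.predict(X)).mean())
    print(export_text(surrogate))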
Feature uncertainty bounding schemes for large robust nonlinear SVM
classifiers | stat.ML cs.LG | We consider the binary classification problem when data are large and subject
to unknown but bounded uncertainties. We address the problem by formulating the
nonlinear support vector machine training problem with robust optimization. To
do so, we analyze and propose two bounding schemes for uncertainties associated
to random approximate features in low dimensional spaces. The proposed
techniques are based on Random Fourier Features and the Nystr\"om methods. The
resulting formulations can be solved with efficient stochastic approximation
techniques such as stochastic (sub)-gradient, stochastic proximal gradient
techniques or their variants.
| Nicolas Couellan and Sophie Jan | null | 1706.09795 | null | null |
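For concreteness, a sketch of the Random Fourier Features map that the first
bounding scheme builds on (the uncertainty-bounding terms themselves are
omitted; D, gamma and the seed are illustrative choices):

    import numpy as np

    def random_fourier_features(X, D=200, gamma=1.0, seed=0):
        # z(x) = sqrt(2/D) cos(Wx + b), so z(x).z(y) ~ exp(-gamma ||x - y||^2),
        # the RBF kernel, with W drawn from the kernel's spectral density.
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    # A linear SVM trained on Z = random_fourier_features(X) with a stochastic
    # (sub)gradient method then approximates the nonlinear RBF-kernel SVM.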
Data-dependent Generalization Bounds for Multi-class Classification | cs.LG | In this paper, we study data-dependent generalization error bounds exhibiting
a mild dependency on the number of classes, making them suitable for
multi-class learning with a large number of label classes. The bounds generally
hold for empirical multi-class risk minimization algorithms using an arbitrary
norm as regularizer. Key to our analysis are new structural results for
multi-class Gaussian complexities and empirical $\ell_\infty$-norm covering
numbers, which exploit the Lipschitz continuity of the loss function with
respect to the $\ell_2$- and $\ell_\infty$-norm, respectively. We establish
data-dependent error bounds in terms of complexities of a linear function class
defined on a finite set induced by training examples, for which we show tight
lower and upper bounds. We apply the results to several prominent multi-class
learning machines, exhibiting a tighter dependency on the number of classes
than the state of the art. For instance, for the multi-class SVM by Crammer and
Singer (2002), we obtain a data-dependent bound with a logarithmic dependency
which significantly improves the previous square-root dependency. Experimental
results are reported to verify the effectiveness of our theoretical findings.
| Yunwen Lei, Urun Dogan, Ding-Xuan Zhou, Marius Kloft | null | 1706.09814 | null | null |
Generalising Random Forest Parameter Optimisation to Include Stability
and Cost | stat.ML cs.CY cs.LG | Random forests are among the most popular classification and regression
methods used in industrial applications. To be effective, the parameters of
random forests must be carefully tuned. This is usually done by choosing values
that minimize the prediction error on a held out dataset. We argue that error
reduction is only one of several metrics that must be considered when
optimizing random forest parameters for commercial applications. We propose a
novel metric that captures the stability of random forests predictions, which
we argue is key for scenarios that require successive predictions. We motivate
the need for multi-criteria optimization by showing that in practical
applications, simply choosing the parameters that lead to the lowest error can
introduce unnecessary costs and produce predictions that are not stable across
independent runs. To optimize this multi-criteria trade-off, we present a new
framework that efficiently finds a principled balance between these three
considerations using Bayesian optimisation. The pitfalls of optimising forest
parameters purely for error reduction are demonstrated using two publicly
available real world datasets. We show that our framework leads to parameter
settings that are markedly different from the values discovered by error
reduction metrics.
| C.H. Bryan Liu, Benjamin Paul Chamberlain, Duncan A. Little, Angelo
Cardoso | 10.1007/978-3-319-71273-4_9 | 1706.09865 | null | null |
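One way to make the stability notion concrete (an illustrative metric, not
necessarily the exact one proposed above): measure how often independently
seeded forests with identical parameters agree on held-out points, and trade
this off against error and cost inside the optimisation loop:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def stability(n_estimators, n_runs=5):
        # Mean pairwise agreement of predictions across independently seeded runs.
        preds = [RandomForestClassifier(n_estimators=n_estimators, random_state=s)
                 .fit(X_tr, y_tr).predict(X_te) for s in range(n_runs)]
        return np.mean([(preds[i] == preds[j]).mean()
                        for i in range(n_runs) for j in range(i + 1, n_runs)])

    print("10 trees :", stability(10))    # typically less stable
    print("200 trees:", stability(200))   # typically more stable, but costlier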
On the Limitations of First-Order Approximation in GAN Dynamics | cs.LG cs.DS | While Generative Adversarial Networks (GANs) have demonstrated promising
performance on multiple vision tasks, their learning dynamics are not yet well
understood, both in theory and in practice. To address this issue, we study GAN
dynamics in a simple yet rich parametric model that exhibits several of the
common problematic convergence behaviors such as vanishing gradients, mode
collapse, and diverging or oscillatory behavior. In spite of the non-convex
nature of our model, we are able to perform a rigorous theoretical analysis of
its convergence behavior. Our analysis reveals an interesting dichotomy: a GAN
with an optimal discriminator provably converges, while first order
approximations of the discriminator steps lead to unstable GAN dynamics and
mode collapse. Our result suggests that using first order discriminator steps
(the de-facto standard in most existing GAN setups) might be one of the factors
that makes GAN training challenging in practice.
| Jerry Li and Aleksander Madry and John Peebles and Ludwig Schmidt | null | 1706.09884 | null | null |
Graph Convolution: A High-Order and Adaptive Approach | cs.LG stat.ML | In this paper, we present a novel convolutional neural network framework
for graph modeling, with the introduction of two new modules specially designed
for graph-structured data: the $k$-th order convolution operator and the
adaptive filtering module. Importantly, our framework of High-order and
Adaptive Graph Convolutional Network (HA-GCN) is a general-purpose
architecture that fits various node-centric and graph-centric applications, as
well as graph generative models. We conducted extensive experiments
demonstrating the advantages of our framework. Particularly, our HA-GCN
outperforms the state-of-the-art models on node classification and molecule
property prediction tasks. It also generates 32% more real molecules on the
molecule generation task, both of which will significantly benefit real-world
applications such as material design and drug screening.
| Zhenpeng Zhou, and Xiaocheng Li | null | 1706.09916 | null | null |
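A bare-bones reading of the $k$-th order operator (our own simplification: no
adjacency normalisation and no learned adaptive filter, both of which HA-GCN
adds): features are aggregated over powers of the adjacency matrix, with one
weight matrix per order:

    import numpy as np

    def high_order_gconv(A, X, Ws, activation=np.tanh):
        # A: (n, n) adjacency, X: (n, d) node features,
        # Ws: list of k weight matrices, Ws[j] paired with A^(j+1).
        out = np.zeros((A.shape[0], Ws[0].shape[1]))
        Ak = np.eye(A.shape[0])
        for W in Ws:
            Ak = Ak @ A                  # next power of A: (j+1)-hop neighbours
            out += Ak @ X @ W
        return activation(out)

    rng = np.random.default_rng(0)
    A = (rng.random((5, 5)) < 0.4).astype(float)        # toy random graph
    H = high_order_gconv(A, rng.normal(size=(5, 8)),
                         [rng.normal(size=(8, 4)) for _ in range(2)])  # k = 2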
Phase Retrieval via Randomized Kaczmarz: Theoretical Guarantees | math.NA cs.IT cs.LG math.IT math.PR math.ST stat.TH | We consider the problem of phase retrieval, i.e. that of solving systems of
quadratic equations. A simple variant of the randomized Kaczmarz method was
recently proposed for phase retrieval, and it was shown numerically to have a
computational edge over state-of-the-art Wirtinger flow methods. In this paper,
we provide the first theoretical guarantee for the convergence of the
randomized Kaczmarz method for phase retrieval. We show that it is sufficient
to have as many Gaussian measurements as the dimension, up to a constant
factor. Along the way, we introduce a sufficient condition on measurement sets
for which the randomized Kaczmarz method is guaranteed to work. We show that
Gaussian sampling vectors satisfy this property with high probability; this is
proved using a chaining argument coupled with bounds on VC dimension and metric
entropy.
| Yan Shuo Tan, Roman Vershynin | null | 1706.09993 | null | null |
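The update itself is a one-liner; the sketch below (illustrative sizes, and a
random start standing in for the spectral-type initialisation the analysis
assumes) steps toward the hyperplane consistent with the current sign guess:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 8 * 64                     # dimension, number of measurements
    x_true = rng.normal(size=n)
    A = rng.normal(size=(m, n))           # Gaussian measurement vectors
    b = np.abs(A @ x_true)                # magnitude-only measurements

    x = rng.normal(size=n)                # a real run would use a spectral init
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(20 * m):
        i = rng.integers(m)
        r = A[i] @ x
        # Kaczmarz step toward the hyperplane <a_i, x> = sign(<a_i, x>) * b_i
        x += (np.sign(r) * b[i] - r) / row_norms[i] * A[i]

    err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
    print("relative error (up to global sign):", err / np.linalg.norm(x_true))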
Hypothesis Testing For Densities and High-Dimensional Multinomials:
Sharp Local Minimax Rates | math.ST cs.IT cs.LG math.IT stat.ML stat.TH | We consider the goodness-of-fit testing problem of distinguishing whether the
data are drawn from a specified distribution, versus a composite alternative
separated from the null in the total variation metric. In the discrete case, we
consider goodness-of-fit testing when the null distribution has a possibly
growing or unbounded number of categories. In the continuous case, we consider
testing a Lipschitz density, with possibly unbounded support, in the
low-smoothness regime where the Lipschitz parameter is not assumed to be
constant. In contrast to existing results, we show that the minimax rate and
critical testing radius in these settings depend strongly, and in a precise
way, on the null distribution being tested and this motivates the study of the
(local) minimax rate as a function of the null distribution. For multinomials
the local minimax rate was recently studied in the work of Valiant and Valiant.
We revisit and extend their results and develop two modifications to the
chi-squared test whose performance we characterize. For testing Lipschitz
densities, we show that the usual binning tests are inadequate in the
low-smoothness regime and we design a spatially adaptive partitioning scheme
that forms the basis for our locally minimax optimal tests. Furthermore, we
provide the first local minimax lower bounds for this problem which yield a
sharp characterization of the dependence of the critical radius on the null
hypothesis being tested. In the low-smoothness regime we also provide adaptive
tests, that adapt to the unknown smoothness parameter. We illustrate our
results with a variety of simulations that demonstrate the practical utility of
our proposed tests.
| Sivaraman Balakrishnan and Larry Wasserman | null | 1706.10003 | null | null |
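As a point of reference, the classical chi-squared goodness-of-fit test in the
many-categories regime that the modified tests above are designed to improve
on (the uniform null and sample size are illustrative):

    import numpy as np
    from scipy.stats import chisquare

    rng = np.random.default_rng(0)
    k, n = 500, 2000                       # many categories relative to n
    null = np.full(k, 1.0 / k)             # uniform multinomial null
    counts = np.bincount(rng.integers(k, size=n), minlength=k)
    stat, p = chisquare(counts, f_exp=n * null)
    print(f"chi2 = {stat:.1f}, p = {p:.3f}")
    # With k comparable to n, many expected counts are small and the classical
    # chi-squared approximation degrades -- the regime the paper's tests target.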
Automated Audio Captioning with Recurrent Neural Networks | cs.SD cs.CL cs.LG | We present the first approach to automated audio captioning. We employ an
encoder-decoder scheme with an alignment model in between. The input to the
encoder is a sequence of log mel-band energies calculated from an audio file,
while the output is a sequence of words, i.e. a caption. The encoder is a
multi-layered, bi-directional gated recurrent unit (GRU) and the decoder a
multi-layered GRU with a classification layer connected to the last GRU of the
decoder. The classification layer and the alignment model are fully connected
layers with shared weights between timesteps. The proposed method is evaluated
using data drawn from a commercial sound effects library, ProSound Effects. The
resulting captions were rated through metrics utilized in machine translation
and image captioning fields. Results from metrics show that the proposed method
can predict words appearing in the original caption, but not always correctly
ordered.
| Konstantinos Drossos, Sharath Adavanne, Tuomas Virtanen | null | 1706.10006 | null | null |
Improvement of training set structure in fusion data cleaning using
Time-Domain Global Similarity method | cs.LG physics.plasm-ph | Traditional data cleaning identifies dirty data by classifying original data
sequences, which is a class-imbalanced problem since the proportion of
incorrect data is much less than the proportion of correct ones for most
diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using
machine learning algorithms to classify diagnostic data based on
a class-imbalanced training set, most classifiers are biased towards the major
class and show very poor classification rates on the minor class. By
transforming the direct classification problem about original data sequences
into a classification problem about the physical similarity between data
sequences, the class-balanced effect of the Time-Domain Global Similarity
(TDGS) method on training set structure is investigated in this paper.
Meanwhile, the impact of improved training set structure on data cleaning
performance of the TDGS method is demonstrated with an application example in
the EAST POlarimetry-INTerferometry (POINT) system.
| Jian Liu, Ting Lan, Hong Qin | 10.1088/1748-0221/12/10/C10004 | 1706.10018 | null | null |
Preference-based performance measures for Time-Domain Global Similarity
method | cs.LG physics.plasm-ph | For the Time-Domain Global Similarity (TDGS) method, which transforms the data
cleaning problem into a binary classification problem about the physical
similarity between channels, directly adopting common performance measures
could only guarantee the performance for physical similarity. Nevertheless,
practical data cleaning tasks have preferences for the correctness of original
data sequences. To obtain the general expressions of performance measures based
on the preferences of tasks, the mapping relations between the performance of
the TDGS method on physical similarity and the correctness of data sequences
are investigated by probability theory in this paper. Performance measures for
the TDGS method in several common data cleaning tasks are then specified.
Cases where these
preference-based performance measures could be simplified are introduced.
 | Ting Lan, Jian Liu, Hong Qin | 10.1088/1748-0221/12/12/C12008 | 1706.10020 | null | null
Neural Sequence Model Training via $\alpha$-divergence Minimization | stat.ML cs.LG | We propose a new neural sequence model training method in which the objective
function is defined by $\alpha$-divergence. We demonstrate that the objective
function generalizes the maximum-likelihood (ML)-based and reinforcement
learning (RL)-based objective functions as special cases (i.e., ML corresponds
to $\alpha \to 0$ and RL to $\alpha \to 1$). We also show that the gradient of
the objective function can be considered a mixture of ML- and RL-based
objective gradients. The experimental results of a machine translation task
show that minimizing the objective function with $\alpha > 0$ outperforms
$\alpha \to 0$, which corresponds to ML-based methods.
| Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda,
Shin Ishii | null | 1706.10031 | null | null |
Providing Effective Real-time Feedback in Simulation-based Surgical
Training | cs.AI cs.LG | Virtual reality simulation is becoming popular as a training platform in
surgical education. However, one important aspect of simulation-based surgical
training that has not received much attention is the provision of automated
real-time performance feedback to support the learning process. Performance
feedback is actionable advice that improves novice behaviour. In simulation,
automated feedback is typically extracted from prediction models trained using
data mining techniques. Existing techniques suffer from either low
effectiveness or low efficiency resulting in their inability to be used in
real-time. In this paper, we propose a random forest based method that finds a
balance between effectiveness and efficiency. Experimental results in a
temporal bone surgery simulation show that the proposed method is able to
extract highly effective feedback at a high level of efficiency.
| Xingjun Ma, Sudanthi Wijewickrema, Yun Zhou, Shuo Zhou, Stephen
O'Leary, James Bailey | null | 1706.10036 | null | null |
Persistence Diagrams with Linear Machine Learning Models | math.AT cs.CV cs.LG | Persistence diagrams have been widely recognized as a compact descriptor for
characterizing multiscale topological features in data. When many datasets are
available, statistical features embedded in those persistence diagrams can be
extracted by applying machine learnings. In particular, the ability for
explicitly analyzing the inverse in the original data space from those
statistical features of persistence diagrams is significantly important for
practical applications. In this paper, we propose a unified method for the
inverse analysis by combining linear machine learning models with persistence
images. The method is applied to point clouds and cubical sets, showing the
ability of the statistical inverse analysis and its advantages.
| Ippei Obayashi and Yasuaki Hiraoka | null | 1706.10082 | null | null |
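A compact sketch of the persistence-image vectorisation commonly used as the
linear models' input (grid size, bandwidth and the persistence weighting are
illustrative choices; the paper's exact settings may differ):

    import numpy as np

    def persistence_image(diagram, grid=20, sigma=0.1, span=(0.0, 1.0)):
        # diagram: (birth, death) pairs; move to (birth, persistence) space,
        # then sum persistence-weighted Gaussians on a fixed grid. The result
        # is a vector a linear model can consume, and its learned weights map
        # back onto birth-persistence space for the inverse analysis.
        pts = np.asarray(diagram, dtype=float)
        birth, pers = pts[:, 0], pts[:, 1] - pts[:, 0]
        xs = np.linspace(*span, grid)
        gx, gy = np.meshgrid(xs, xs)
        img = np.zeros_like(gx)
        for b, p in zip(birth, pers):
            img += p * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
        return img.ravel()

    vec = persistence_image([(0.1, 0.4), (0.2, 0.9), (0.5, 0.6)])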
Prepaid or Postpaid? That is the question. Novel Methods of Subscription
Type Prediction in Mobile Phone Services | cs.SI cs.LG physics.soc-ph stat.ML | In this paper we investigate the behavioural differences between mobile phone
customers with prepaid and postpaid subscriptions. Our study reveals that (a)
postpaid customers are more active in terms of service usage and (b) there are
strong structural correlations in the mobile phone call network as connections
between customers of the same subscription type are much more frequent than
those between customers of different subscription types. Based on these
observations we provide methods to detect the subscription type of customers by
using information about their personal call statistics, and also their
egocentric networks simultaneously. The key of our first approach is to cast
this classification problem as a problem of graph labelling, which can be
solved by max-flow min-cut algorithms. Our experiments show that, by using both
user attributes and relationships, the proposed graph labelling approach is
able to achieve a classification accuracy of $\sim 87\%$, which outperforms by
$\sim 7\%$ supervised learning methods using only user attributes. In our
second problem we aim to infer the subscription type of customers of external
operators. We propose approximate methods to solve this problem using
node attributes, and a two-way indirect inference method based on observed
homophilic structural correlations. Our results have straightforward
applications in behavioural prediction and personal marketing.
| Yongjun Liao, Wei Du, M\'arton Karsai, Carlos Sarraute, Martin Minnoni
and Eric Fleury | null | 1706.10172 | null | null |
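A toy version of the graph-labelling step, assuming hypothetical per-user
scores (e.g. log-likelihood ratios from call statistics, positive favouring
"postpaid") and tie strengths; a minimum s-t cut then assigns each user the
label that balances individual evidence against homophily:

    import networkx as nx

    scores = {"u1": 2.0, "u2": 1.5, "u3": -1.0, "u4": -2.5}  # hypothetical users
    ties = [("u1", "u2", 1.0), ("u2", "u3", 0.5), ("u3", "u4", 1.0)]

    G = nx.DiGraph()
    for u, s in scores.items():
        # Source edge = cost of labelling u prepaid; sink edge = cost of postpaid.
        G.add_edge("SRC", u, capacity=max(s, 0.0))
        G.add_edge(u, "SNK", capacity=max(-s, 0.0))
    for u, v, w in ties:
        # Cutting a social tie (neighbours labelled differently) costs w.
        G.add_edge(u, v, capacity=w)
        G.add_edge(v, u, capacity=w)

    cut_value, (src_side, snk_side) = nx.minimum_cut(G, "SRC", "SNK")
    print("postpaid:", sorted(src_side - {"SRC"}))
    print("prepaid: ", sorted(snk_side - {"SNK"}))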
Rule-Mining based classification: a benchmark study | cs.LG | This study proposes an exhaustive stable/reproducible rule-mining algorithm
combined with a classifier to generate both accurate and interpretable models.
Our method first extracts rules (i.e., a conjunction of conditions about the
values of a small number of input features) with our exhaustive rule-mining
algorithm, then constructs a new feature space based on the most relevant rules
called "local features" and finally, builds a local predictive model by
training a standard classifier on the new local feature space. This local
feature space is easily interpretable, providing a human-understandable
explanation under the explicit form of rules. Furthermore, our local predictive
approach is as powerful as global classical ones like logistic regression (LR),
support vector machine (SVM) and rules based methods like random forest (RF)
and gradient boosted tree (GBT).
| Margaux Luck, Nicolas Pallet, Cecilia Damon | null | 1706.10199 | null | null |
Optimization Methods for Supervised Machine Learning: From Linear Models
to Deep Learning | stat.ML cs.LG | The goal of this tutorial is to introduce key models, algorithms, and open
questions related to the use of optimization methods for solving problems
arising in machine learning. It is written with an INFORMS audience in mind,
specifically those readers who are familiar with the basics of optimization
algorithms, but less familiar with machine learning. We begin by deriving a
formulation of a supervised learning problem and show how it leads to various
optimization problems, depending on the context and underlying assumptions. We
then discuss some of the distinctive features of these optimization problems,
focusing on the examples of logistic regression and the training of deep neural
networks. The latter half of the tutorial focuses on optimization algorithms,
first for convex logistic regression, for which we discuss the use of
first-order methods, the stochastic gradient method, variance reducing
stochastic methods, and second-order methods. Finally, we discuss how these
approaches can be employed to the training of deep neural networks, emphasizing
the difficulties that arise from the complex, nonconvex structure of these
models.
| Frank E. Curtis and Katya Scheinberg | null | 1706.10207 | null | null |
On Fairness, Diversity and Randomness in Algorithmic Decision Making | stat.ML cs.LG | Consider a binary decision making process where a single machine learning
classifier replaces a multitude of humans. We raise questions about the
resulting loss of diversity in the decision making process. We study the
potential benefits of using random classifier ensembles instead of a single
classifier in the context of fairness-aware learning and demonstrate various
attractive properties: (i) an ensemble of fair classifiers is guaranteed to be
fair, for several different measures of fairness, (ii) an ensemble of unfair
classifiers can still achieve fair outcomes, and (iii) an ensemble of
classifiers can achieve better accuracy-fairness trade-offs than a single
classifier. Finally, we introduce notions of distributional fairness to
characterize further potential benefits of random classifier ensembles.
| Nina Grgi\'c-Hla\v{c}a, Muhammad Bilal Zafar, Krishna P. Gummadi,
Adrian Weller | null | 1706.10208 | null | null |
Improving Session Recommendation with Recurrent Neural Networks by
Exploiting Dwell Time | cs.IR cs.LG | Recently, Recurrent Neural Networks (RNNs) have been applied to the task of
session-based recommendation. These approaches use RNNs to predict the next
item in a user session based on the previously visited items. While some
approaches consider additional item properties, we argue that item dwell time
can be used as an implicit measure of user interest to improve session-based
item recommendations. We propose an extension to existing RNN approaches that
captures user dwell time in addition to the visited items and show that
recommendation performance can be improved. Additionally, we investigate the
usefulness of a single validation split for model selection in the case of
minor improvements and find that in our case the best model is not selected and
a fold-like study with different validation sets is necessary to ensure the
selection of the best model.
| Alexander Dallmann (1), Alexander Grimm (1), Christian P\"olitz (1),
Daniel Zoller (1), Andreas Hotho (1 and 2) ((1) University of W\"urzburg, (2)
L3S Research Center) | null | 1706.10231 | null | null |
Probabilistic Active Learning of Functions in Structural Causal Models | stat.ML cs.AI cs.LG | We consider the problem of learning the functions computing children from
parents in a Structural Causal Model once the underlying causal graph has been
identified. This is in some sense the second step after causal discovery.
Taking a probabilistic approach to estimating these functions, we derive a
natural myopic active learning scheme that identifies the intervention which is
optimally informative about all of the unknown functions jointly, given
previously observed data. We test the derived algorithms on simple examples, to
demonstrate that they produce a structured exploration policy that
significantly improves on unstructured baselines.
| Paul K. Rubenstein, Ilya Tolstikhin, Philipp Hennig, Bernhard
Schoelkopf | null | 1706.10234 | null | null |
Towards Understanding Generalization of Deep Learning: Perspective of
Loss Landscapes | cs.LG cs.AI stat.ML | It is widely observed that deep learning models with learned parameters
generalize well, even with much more model parameters than the number of
training samples. We systematically investigate the underlying reasons why deep
neural networks often generalize well, and reveal the difference between the
minima (with the same training error) that generalize well and those they
don't. We show that it is the characteristics the landscape of the loss
function that explains the good generalization capability. For the landscape of
loss function for deep networks, the volume of basin of attraction of good
minima dominates over that of poor minima, which guarantees optimization
methods with random initialization to converge to good minima. We theoretically
justify our findings through analyzing 2-layer neural networks; and show that
the low-complexity solutions have a small norm of Hessian matrix with respect
to model parameters. For deeper networks, extensive numerical evidence helps to
support our arguments.
| Lei Wu, Zhanxing Zhu, Weinan E | null | 1706.10239 | null | null |
Bridging the Gap between Probabilistic and Deterministic Models: A
Simulation Study on a Variational Bayes Predictive Coding Recurrent Neural
Network Model | cs.AI cs.LG | The current paper proposes a novel variational Bayes predictive coding RNN
model, which can learn to generate fluctuated temporal patterns from exemplars.
The model learns to maximize the lower bound of the weighted sum of the
regularization and reconstruction error terms. We examined how this weighting
can affect development of different types of information processing while
learning fluctuated temporal patterns. Simulation results show that strong
weighting of the reconstruction term causes the development of deterministic
chaos for imitating the randomness observed in target sequences, while strong
weighting of the regularization term causes the development of stochastic
dynamics imitating probabilistic processes observed in targets. Moreover,
results indicate that the most generalized learning emerges between these two
extremes. The paper concludes with implications in terms of the underlying
neuronal mechanisms for autism spectrum disorder and for free action.
 | Ahmadreza Ahmadi and Jun Tani | null | 1706.10240 | null | null
SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted
Cloud | cs.LG cs.CR | Inference using deep neural networks is often outsourced to the cloud since
it is a computationally demanding task. However, this raises a fundamental
issue of trust. How can a client be sure that the cloud has performed inference
correctly? A lazy cloud provider might use a simpler but less accurate model to
reduce its own computational load, or worse, maliciously modify the inference
results sent to the client. We propose SafetyNets, a framework that enables an
untrusted server (the cloud) to provide a client with a short mathematical
proof of the correctness of inference tasks that they perform on behalf of the
client. Specifically, SafetyNets develops and implements a specialized
interactive proof (IP) protocol for verifiable execution of a class of deep
neural networks, i.e., those that can be represented as arithmetic circuits.
Our empirical results on three- and four-layer deep neural networks demonstrate
the run-time costs of SafetyNets for both the client and server are low.
SafetyNets detects any incorrect computations of the neural network by the
untrusted server with high probability, while achieving state-of-the-art
accuracy on the MNIST digit recognition (99.4%) and TIMIT speech recognition
tasks (75.22%).
| Zahra Ghodsi, Tianyu Gu, Siddharth Garg | null | 1706.10268 | null | null |
Lifelong Learning in Costly Feature Spaces | cs.LG | An important long-term goal in machine learning systems is to build learning
agents that, like humans, can learn many tasks over their lifetime, and
moreover use information from these tasks to improve their ability to do so
efficiently. In this work, our goal is to provide new theoretical insights into
the potential of this paradigm. In particular, we propose a lifelong learning
framework that adheres to a novel notion of resource efficiency that is
critical in many real-world domains where feature evaluations are costly. That
is, our learner aims to reuse information from previously learned related tasks
to learn future tasks in a feature-efficient manner. Furthermore, we consider
novel combinatorial ways in which learning tasks can relate. Specifically, we
design lifelong learning algorithms for two structurally different and widely
used families of target functions: decision trees/lists and
monomials/polynomials. We also provide strong feature-efficiency guarantees for
these algorithms; in fact, we show that in order to learn future targets, we
need only slightly more feature evaluations per training example than what is
needed to predict on an arbitrary example using those targets. We also provide
algorithms with guarantees in an agnostic model where not all the targets are
related to each other. Finally, we also provide lower bounds on the performance
of a lifelong learner in these models, which are in fact tight under some
conditions.
| Maria-Florina Balcan, Avrim Blum, Vaishnavh Nagarajan | null | 1706.10271 | null | null |
Noisy Networks for Exploration | cs.LG stat.ML | We introduce NoisyNet, a deep reinforcement learning agent with parametric
noise added to its weights, and show that the induced stochasticity of the
agent's policy can be used to aid efficient exploration. The parameters of the
noise are learned with gradient descent along with the remaining network
weights. NoisyNet is straightforward to implement and adds little computational
overhead. We find that replacing the conventional exploration heuristics for
A3C, DQN and dueling agents (entropy reward and $\epsilon$-greedy respectively)
with NoisyNet yields substantially higher scores for a wide range of Atari
games, in some cases advancing the agent from sub to super-human performance.
| Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick,
Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier
Pietquin, Charles Blundell, Shane Legg | null | 1706.10295 | null | null |
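A minimal PyTorch sketch of a noisy layer with independent Gaussian noise per
weight (the paper also describes a cheaper factorised-noise variant; the
initial scales below are illustrative, not the published values):

    import math
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NoisyLinear(nn.Module):
        # Linear layer whose weights are w = mu + sigma * eps, with mu and
        # sigma learned by gradient descent and eps resampled each forward pass.
        def __init__(self, in_features, out_features, sigma0=0.5):
            super().__init__()
            bound = 1.0 / math.sqrt(in_features)
            self.w_mu = nn.Parameter(
                torch.empty(out_features, in_features).uniform_(-bound, bound))
            self.b_mu = nn.Parameter(
                torch.empty(out_features).uniform_(-bound, bound))
            self.w_sigma = nn.Parameter(
                torch.full((out_features, in_features), sigma0 * bound))
            self.b_sigma = nn.Parameter(
                torch.full((out_features,), sigma0 * bound))

        def forward(self, x):
            # Fresh noise each call; gradients flow into both mu and sigma,
            # so the agent learns how much exploration noise to inject.
            w = self.w_mu + self.w_sigma * torch.randn_like(self.w_sigma)
            b = self.b_mu + self.b_sigma * torch.randn_like(self.b_sigma)
            return F.linear(x, w, b)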
From Parity to Preference-based Notions of Fairness in Classification | stat.ML cs.LG | The adoption of automated, data-driven decision making in an ever expanding
range of applications has raised concerns about its potential unfairness
towards certain social groups. In this context, a number of recent studies have
focused on defining, detecting, and removing unfairness from data-driven
decision systems. However, the existing notions of fairness, based on parity
(equality) in treatment or outcomes for different social groups, tend to be
quite stringent, limiting the overall decision making accuracy. In this paper,
we draw inspiration from the fair-division and envy-freeness literature in
economics and game theory and propose preference-based notions of fairness --
given the choice between various sets of decision treatments or outcomes, any
group of users would collectively prefer its treatment or outcomes, regardless
of the (dis)parity as compared to the other groups. Then, we introduce
tractable proxies to design margin-based classifiers that satisfy these
preference-based notions of fairness. Finally, we experiment with a variety of
synthetic and real-world datasets and show that preference-based fairness
allows for greater decision accuracy than parity-based fairness.
| Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna
P. Gummadi, Adrian Weller | null | 1707.00010 | null | null
Penalizing Unfairness in Binary Classification | cs.LG stat.ML | We present a new approach for mitigating unfairness in learned classifiers.
In particular, we focus on binary classification tasks over individuals from
two populations, where, as our criterion for fairness, we wish to achieve
similar false positive rates in both populations, and similar false negative
rates in both populations. As a proof of concept, we implement our approach and
empirically evaluate its ability to achieve both fairness and accuracy, using
datasets from the fields of criminal risk assessment, credit, lending, and
college admissions.
| Yahav Bechavod and Katrina Ligett | null | 1707.00044 | null | null |
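The two quantities being equalised are easy to compute; a small helper
(binary labels and binary group membership assumed) that could feed a penalty
term of the kind proposed:

    import numpy as np

    def fpr_fnr_gaps(y_true, y_pred, group):
        # |FPR_0 - FPR_1| and |FNR_0 - FNR_1| across the two populations.
        gaps = []
        for true_label in (0, 1):          # 0 -> FPR gap, 1 -> FNR gap
            rates = [np.mean(y_pred[(group == g) & (y_true == true_label)]
                             != true_label) for g in (0, 1)]
            gaps.append(abs(rates[0] - rates[1]))
        return tuple(gaps)

    # e.g. augment a training objective as loss + lam * sum(fpr_fnr_gaps(...)),
    # using a differentiable relaxation, since these indicator-based rates are
    # not smooth in the classifier parameters.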
Data Decisions and Theoretical Implications when Adversarially Learning
Fair Representations | cs.LG cs.CY | How can we learn a classifier that is "fair" for a protected or sensitive
group, when we do not know if the input to the classifier belongs to the
protected group? How can we train such a classifier when data on the protected
group is difficult to attain? In many settings, finding out the sensitive input
attribute can be prohibitively expensive even during model training, and
sometimes impossible during model serving. For example, in recommender systems,
if we want to predict if a user will click on a given recommendation, we often
do not know many attributes of the user, e.g., race or age, and many attributes
of the content are hard to determine, e.g., the language or topic. Thus, it is
not feasible to use a different classifier calibrated based on knowledge of the
sensitive attribute.
Here, we use an adversarial training procedure to remove information about
the sensitive attribute from the latent representation learned by a neural
network. In particular, we study how the choice of data for the adversarial
training affects the resulting fairness properties. We find two interesting
results: a small amount of data is needed to train these adversarial models,
and the data distribution empirically drives the adversary's notion of
fairness.
| Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi | null | 1707.00075 | null | null |
Exploring the Imposition of Synaptic Precision Restrictions For
Evolutionary Synthesis of Deep Neural Networks | cs.NE cs.CV cs.LG | A key contributing factor to the incredible success of deep neural networks
has been the rise of massively parallel computing devices, allowing
researchers to greatly increase the size and depth of deep neural networks,
leading to significant improvements in modeling accuracy. Although deeper,
larger, or complex deep neural networks have shown considerable promise, the
computational complexity of such networks is a major barrier to utilization in
resource-starved scenarios. We explore the synaptogenesis of deep neural
networks in the formation of efficient deep neural network architectures within
an evolutionary deep intelligence framework, where a probabilistic generative
modeling strategy is introduced to stochastically synthesize increasingly
efficient yet effective offspring deep neural networks over generations,
mimicking evolutionary processes such as heredity, random mutation, and natural
selection in a probabilistic manner. In this study, we primarily explore the
imposition of synaptic precision restrictions and its impact on the
evolutionary synthesis of deep neural networks to synthesize more efficient
network architectures tailored for resource-starved scenarios. Experimental
results show significant improvements in synaptic efficiency (~10X decrease for
GoogLeNet-based DetectNet) and inference speed (>5X increase for
GoogLeNet-based DetectNet) while preserving modeling accuracy.
| Mohammad Javad Shafiee, Francis Li, and Alexander Wong | null | 1707.00095 | null | null |
SAM: Semantic Attribute Modulation for Language Modeling and Style
Variation | cs.CL cs.LG stat.ML | This paper presents a Semantic Attribute Modulation (SAM) for language
modeling and style variation. The semantic attribute modulation includes
various document attributes, such as titles, authors, and document categories.
We consider two types of attributes (title attributes and category
attributes), and a flexible attribute selection scheme by automatically scoring
them via an attribute attention mechanism. The semantic attributes are embedded
into the hidden semantic space as the generation inputs. With the attributes
properly harnessed, our proposed SAM can generate interpretable texts with
regard to the input attributes. Qualitative analysis, including word semantic
analysis and attention values, shows the interpretability of SAM. On several
typical text datasets, we empirically demonstrate the superiority of the
Semantic Attribute Modulated language model with different combinations of
document attributes. Moreover, we present a style variation for the lyric
generation using SAM, which shows a strong connection between the style
variation and the semantic attributes.
| Wenbo Hu, Lifeng Hua, Lei Li, Hang Su, Tian Wang, Ning Chen, Bo Zhang | null | 1707.00117 | null | null |
Sample-efficient Actor-Critic Reinforcement Learning with Supervised
Data for Dialogue Management | cs.CL cs.AI cs.LG | Deep reinforcement learning (RL) methods have significant potential for
dialogue policy optimisation. However, they suffer from a poor performance in
the early stages of learning. This is especially problematic for on-line
learning with real users. Two approaches are introduced to tackle this problem.
Firstly, to speed up the learning process, two sample-efficient neural networks
algorithms: trust region actor-critic with experience replay (TRACER) and
episodic natural actor-critic with experience replay (eNACER) are presented.
For TRACER, the trust region helps to control the learning step size and avoid
catastrophic model changes. For eNACER, the natural gradient identifies the
steepest ascent direction in policy space to speed up the convergence. Both
models employ off-policy learning with experience replay to improve
sample-efficiency. Secondly, to mitigate the cold start issue, a corpus of
demonstration data is utilised to pre-train the models prior to on-line
reinforcement learning. Combining these two approaches, we demonstrate a
practical approach to learn deep RL-based dialogue policies and demonstrate
their effectiveness in a task-oriented information seeking domain.
| Pei-Hao Su, Pawel Budzianowski, Stefan Ultes, Milica Gasic, and Steve
Young | null | 1707.00130 | null | null
Fast Approximate Nearest Neighbor Search With The Navigating
Spreading-out Graph | cs.LG | Approximate nearest neighbor search (ANNS) is a fundamental problem in
databases and data mining. A scalable ANNS algorithm should be both
memory-efficient and fast. Some early graph-based approaches have shown
attractive theoretical guarantees on search time complexity, but they all
suffer from the problem of high indexing time complexity. Recently, some
graph-based methods have been proposed to reduce indexing complexity by
approximating the traditional graphs; these methods have achieved revolutionary
performance on million-scale datasets. Yet, they still cannot scale to
billion-node databases. In this paper, to further improve the search-efficiency
and scalability of graph-based methods, we start by introducing four aspects:
(1) ensuring the connectivity of the graph; (2) lowering the average out-degree
of the graph for fast traversal; (3) shortening the search path; and (4)
reducing the index size. Then, we propose a novel graph structure called
Monotonic Relative Neighborhood Graph (MRNG) which guarantees very low search
complexity (close to logarithmic time). To further lower the indexing
complexity and make it practical for billion-node ANNS problems, we propose a
novel graph structure named Navigating Spreading-out Graph (NSG) by
approximating the MRNG. The NSG takes the four aspects into account
simultaneously. Extensive experiments show that NSG outperforms all the
existing algorithms significantly. In addition, NSG shows superior performance
in the E-commercial search scenario of Taobao (Alibaba Group) and has been
integrated into their search engine at billion-node scale.
| Cong Fu, Chao Xiang, Changxu Wang, Deng Cai | null | 1707.00143 | null | null |
Teacher-Student Curriculum Learning | cs.LG cs.AI | We propose Teacher-Student Curriculum Learning (TSCL), a framework for
automatic curriculum learning, where the Student tries to learn a complex task
and the Teacher automatically chooses subtasks from a given set for the Student
to train on. We describe a family of Teacher algorithms that rely on the
intuition that the Student should practice more those tasks on which it makes
the fastest progress, i.e. where the slope of the learning curve is highest. In
addition, the Teacher algorithms address the problem of forgetting by also
choosing tasks where the Student's performance is getting worse. We demonstrate
that TSCL matches or surpasses the results of carefully hand-crafted curricula
in two tasks: addition of decimal numbers with LSTM and navigation in
Minecraft. Using our automatically generated curriculum enabled to solve a
Minecraft maze that could not be solved at all when training directly on
solving the maze, and the learning was an order of magnitude faster than
uniform sampling of subtasks.
| Tambet Matiisen, Avital Oliver, Taco Cohen, John Schulman | null | 1707.00183 | null | null |
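A condensed sketch of one Teacher from the family described (closest in spirit
to an epsilon-greedy, windowed variant; the window length and epsilon are
illustrative):

    import random

    class WindowedSlopeTeacher:
        # Epsilon-greedy Teacher: favour the subtask whose recent score
        # history has the largest absolute slope (absolute value, so tasks
        # being forgotten -- negative slope -- are also revisited).
        def __init__(self, n_tasks, eps=0.1, window=5):
            self.history = [[] for _ in range(n_tasks)]
            self.eps, self.window = eps, window

        def choose(self):
            if random.random() < self.eps:
                return random.randrange(len(self.history))
            def slope(h):
                return abs(h[-1] - h[0]) if len(h) >= 2 else float("inf")
            return max(range(len(self.history)),
                       key=lambda t: slope(self.history[t][-self.window:]))

        def update(self, task, score):
            self.history[task].append(score)

    # loop: task = teacher.choose(); train the Student on that subtask;
    # teacher.update(task, evaluation_score)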
On Scalable Inference with Stochastic Gradient Descent | stat.ML cs.LG | In many applications involving large dataset or online updating, stochastic
gradient descent (SGD) provides a scalable way to compute parameter estimates
and has gained increasing popularity due to its numerical convenience and
memory efficiency. While the asymptotic properties of SGD-based estimators have
been established decades ago, statistical inference such as interval estimation
remains much unexplored. The traditional resampling method such as the
bootstrap is not computationally feasible since it requires repeatedly drawing
independent samples from the entire dataset. The plug-in method is not
applicable when there are no explicit formulas for the covariance matrix of the
estimator. In this paper, we propose a scalable inferential procedure for
stochastic gradient descent, which, upon the arrival of each observation,
updates the SGD estimate as well as a large number of randomly perturbed SGD
estimates. The proposed method is easy to implement in practice. We establish
its theoretical properties for a general class of models that includes
generalized linear models and quantile regression models as special cases. The
finite-sample performance and numerical utility are evaluated by simulation
studies and two real data applications.
| Yixin Fang, Jinfeng Xu, Lei Yang | null | 1707.00192 | null | null |
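A schematic of the procedure for least-squares regression, assuming (as one
concrete choice) exponential random weights for the perturbed chains; the
paper's exact perturbation scheme and step-size schedule may differ:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, B = 5000, 3, 200                   # samples, dimension, perturbed chains
    beta = np.array([1.0, -2.0, 0.5])
    X = rng.normal(size=(n, d))
    y = X @ beta + rng.normal(size=n)

    theta = np.zeros(d)
    thetas = np.zeros((B, d))                # randomly perturbed SGD estimates
    for t in range(n):
        lr = 0.5 / (t + 1) ** 0.6
        g = (X[t] @ theta - y[t]) * X[t]     # one-sample least-squares gradient
        theta -= lr * g
        # Each chain re-weights the same observation by a positive random weight,
        # updated online -- no need to revisit the dataset as the bootstrap would.
        w = rng.exponential(size=B)[:, None]
        thetas -= lr * w * ((thetas @ X[t] - y[t])[:, None] * X[t])

    se = thetas.std(axis=0)
    print("estimate:", theta, " approx 95% CI half-widths:", 1.96 * se)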
Efficient Correlated Topic Modeling with Topic Embedding | cs.LG cs.CL stat.ML | Correlated topic modeling has been limited to small model and problem sizes
due to its high computational cost and poor scaling. In this paper, we
propose a new model which learns compact topic embeddings and captures topic
correlations through the closeness between the topic vectors. Our method
enables efficient inference in the low-dimensional embedding space, reducing
previous cubic or quadratic time complexity to linear w.r.t the topic size. We
further speedup variational inference with a fast sampler to exploit sparsity
of topic occurrence. Extensive experiments show that our approach is capable of
handling model and data scales which are several orders of magnitude larger
than existing correlation results, without sacrificing modeling quality by
providing competitive or superior performance in document classification and
retrieval.
| Junxian He, Zhiting Hu, Taylor Berg-Kirkpatrick, Ying Huang, Eric P.
Xing | null | 1707.00206 | null | null |
Location Dependent Dirichlet Processes | stat.ML cs.LG | Dirichlet processes (DP) are widely applied in Bayesian nonparametric
modeling. However, in their basic form they do not directly integrate
dependency information among data arising from space and time. In this paper,
we propose location dependent Dirichlet processes (LDDP) which incorporate
nonparametric Gaussian processes in the DP modeling framework to model such
dependencies. We develop the LDDP in the context of mixture modeling, and
develop a mean field variational inference algorithm for this mixture model.
The effectiveness of the proposed modeling framework is shown on an image
segmentation task.
 | Shiliang Sun, John Paisley, Qiuyang Liu | null | 1707.00260 | null | null
Unsupervised Classification of Large-Scale Heterogeneous Data (Classification
non supervisée des données hétérogènes à large échelle) | cs.DB cs.LG | When it comes to clustering massive data, response time, disk access and the
quality of the formed classes become major issues for companies. It is in this
context that we define a clustering framework for large-scale heterogeneous
data that contributes to resolving these issues. The proposed framework is
based, firstly, on descriptive analysis using MCA and, secondly, on the
MapReduce paradigm in a large-scale environment. The results are encouraging
and demonstrate the efficiency of the hybrid deployment in terms of both
response quality and time, on qualitative as well as quantitative data.
| Mohamed Ali Zoghlami, Olfa Arfaoui, Minyar Sassi Hidri, Rahma Ben Ayed | null | 1707.00297 | null | null |
Stochastic Configuration Networks Ensemble for Large-Scale Data
Analytics | cs.LG cs.NE | This paper presents a fast decorrelated neuro-ensemble with heterogeneous
features for large-scale data analytics, where stochastic configuration
networks (SCNs) are employed as base learner models and the well-known negative
correlation learning (NCL) strategy is adopted to evaluate the output weights.
By feeding a large number of samples into the SCN base models, we obtain a huge
sized linear equation system which is difficult to be solved by means of
computing a pseudo-inverse used in the least squares method. Based on the group
of heterogeneous features, the block Jacobi and Gauss-Seidel methods are
employed to iteratively evaluate the output weights, and a convergence analysis
is given with a demonstration on the uniqueness of these iterative solutions.
Experiments with comparisons on two large-scale datasets are carried out, and
the system robustness with respect to the regularizing factor used in NCL is
given. Results indicate that the proposed ensemble learning techniques have
good potential for resolving large-scale data modelling problems.
 | Dianhui Wang, Caihao Cui | 10.1016/j.ins.2017.07.003 | 1707.00300 | null | null
Variance Regularizing Adversarial Learning | stat.ML cs.LG | We introduce a novel approach for training adversarial models by replacing
the discriminator score with a bi-modal Gaussian distribution over the
real/fake indicator variables. In order to do this, we train the Gaussian
classifier to match the target bi-modal distribution implicitly through
meta-adversarial training. We hypothesize that this approach ensures a non-zero
gradient to the generator, even in the limit of a perfect classifier. We test
our method against standard benchmark image datasets as well as show the
classifier output distribution is smooth and has overlap between the real and
fake modes.
| Karan Grewal and R Devon Hjelm and Yoshua Bengio | null | 1707.00309 | null | null |
Dimensionality reduction with missing values imputation | cs.LG cs.DS stat.ML | In this study, we propose a new statistical approach for high-dimensionality
reduction of heterogeneous data that limits the curse of dimensionality and
deals with missing values. To handle the latter, we propose to use the Random
Forest imputation method. The main purpose here is to extract useful
information and so reducing the search space to facilitate the data exploration
process. Several illustrative numeric examples, using data coming from publicly
available machine learning repositories are also included. The experimental
component of the study shows the efficiency of the proposed analytical
approach.
| Rania Mkhinini Gahar, Olfa Arfaoui, Minyar Sassi Hidri, Nejib Ben-Hadj
Alouane | null | 1707.00351 | null | null |
Deep Convolutional Framelets: A General Deep Learning Framework for
Inverse Problems | stat.ML cs.CV cs.IT cs.LG math.IT | Recently, deep learning approaches with various network architectures have
achieved significant performance improvement over existing iterative
reconstruction methods in various imaging problems. However, it is still
unclear why these deep learning architectures work for specific inverse
problems. To address these issues, here we show that the long-searched-for
missing link is the convolution framelets for representing a signal by
convolving local and non-local bases. The convolution framelets was originally
developed to generalize the theory of low-rank Hankel matrix approaches for
inverse problems, and this paper further extends the idea so that we can obtain
a deep neural network using multilayer convolution framelets with perfect
reconstruction (PR) under the rectified linear unit (ReLU) nonlinearity. Our
analysis also shows that the popular deep network components such as residual
block, redundant filter channels, and concatenated ReLU (CReLU) do indeed help
to achieve the PR, while the pooling and unpooling layers should be augmented
with high-pass branches to meet the PR condition. Moreover, by changing the
number of filter channels and bias, we can control the shrinkage behaviors of
the neural network. This discovery leads us to propose a novel theory for deep
convolutional framelets neural network. Using numerical experiments with
various inverse problems, we demonstrated that our deep convolution framelets
network shows consistent improvement over existing deep architectures. This
discovery suggests that the success of deep learning is not from a magical
power of a black-box, but rather comes from the power of a novel signal
representation using non-local basis combined with data-driven local basis,
which is indeed a natural extension of classical signal processing theory.
| Jong Chul Ye, Yoseob Han, Eunju Cha | null | 1707.00372 | null | null |
Convolutional Dictionary Learning: Acceleration and Convergence | cs.LG math.OC | Convolutional dictionary learning (CDL or sparsifying CDL) has many
applications in image processing and computer vision. There has been growing
interest in developing efficient algorithms for CDL, mostly relying on the
augmented Lagrangian (AL) method or the variant alternating direction method of
multipliers (ADMM). When their parameters are properly tuned, AL methods have
shown fast convergence in CDL. However, the parameter tuning process is not
trivial due to its data dependence and, in practice, the convergence of AL
methods depends on the AL parameters for nonconvex CDL problems. To moderate
these problems, this paper proposes a new practically feasible and convergent
Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The
BPG-M-based CDL is investigated with different block updating schemes and
majorization matrix designs, and further accelerated by incorporating some
momentum coefficient formulas and restarting techniques. All of the methods
investigated incorporate a boundary artifacts removal (or, more generally,
sampling) operator in the learning model. Numerical experiments show that,
without needing any parameter tuning process, the proposed BPG-M approach
converges more stably to desirable solutions of lower objective values than the
existing state-of-the-art ADMM algorithm and its memory-efficient variant do.
Compared to the ADMM approaches, the BPG-M method using a multi-block updating
scheme is particularly useful in single-threaded CDL algorithm handling large
datasets, due to its lower memory requirement and no polynomial computational
complexity. Image denoising experiments show that, for relatively strong
additive white Gaussian noise, the filters learned by BPG-M-based CDL
outperform those trained by the ADMM approach.
| Il Yong Chun, Jeffrey A. Fessler | 10.1109/TIP.2017.2761545 | 1707.00389 | null | null |
Fair Pipelines | cs.CY cs.LG stat.ML | This work facilitates ensuring fairness of machine learning in the real world
by decoupling fairness considerations in compound decisions. In particular,
this work studies how fairness propagates through a compound decision-making
processes, which we call a pipeline. Prior work in algorithmic fairness only
focuses on fairness with respect to one decision. However, many decision-making
processes require more than one decision. For instance, hiring is at least a
two stage model: deciding who to interview from the applicant pool and then
deciding who to hire from the interview pool. Perhaps surprisingly, we show
that the composition of fair components may not guarantee a fair pipeline under
a $(1+\varepsilon)$-equal opportunity definition of fair. However, we identify
circumstances that do provide that guarantee. We also propose numerous
directions for future work on more general compound machine learning decisions.
| Amanda Bower and Sarah N. Kitchen and Laura Niss and Martin J. Strauss
and Alexander Vargas and Suresh Venkatasubramanian | null | 1707.00391 | null | null |
Dual Supervised Learning | cs.LG stat.ML | Many supervised learning tasks emerge in dual forms, e.g.,
English-to-French translation vs. French-to-English translation, speech
recognition vs. text to speech, and image classification vs. image generation.
Two dual tasks have intrinsic connections with each other due to the
probabilistic correlation between their models. This connection is, however,
not effectively utilized today, since people usually train the models of two
dual tasks separately and independently. In this work, we propose training the
models of two dual tasks simultaneously, and explicitly exploiting the
probabilistic correlation between them to regularize the training process. For
ease of reference, we call the proposed approach \emph{dual supervised
learning}. We demonstrate that dual supervised learning can improve the
practical performances of both tasks, for various applications including
machine translation, image processing, and sentiment analysis.
| Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, Tie-Yan Liu | null | 1707.00415 | null | null |
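The regularizer amounts to penalising violations of the identity
P(x)P(y|x) = P(y)P(x|y); in log space, a sketch of the per-pair penalty
(lam and the loss names below are placeholders, not the paper's notation):

    def duality_gap_penalty(log_px, log_py_given_x, log_py, log_px_given_y):
        # Squared mismatch of log P(x) + log P(y|x) and log P(y) + log P(x|y),
        # computed with the two task models plus (pretrained) marginal estimates.
        return (log_px + log_py_given_x - log_py - log_px_given_y) ** 2

    # total_loss = loss_xy + loss_yx + lam * duality_gap_penalty(...)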
Learning Deep Latent Spaces for Multi-Label Classification | cs.LG | Multi-label classification is a practical yet challenging task in machine
learning related fields, since it requires the prediction of more than one
label category for each input instance. We propose a novel deep neural networks
(DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this
task. Aiming at better relating feature and label domain data for improved
classification, we uniquely perform joint feature and label embedding by
deriving a deep latent space, followed by the introduction of label-correlation
sensitive loss function for recovering the predicted label outputs. Our C2AE is
achieved by integrating the DNN architectures of canonical correlation analysis
and autoencoder, which allows end-to-end learning and prediction with the
ability to exploit label dependency. Moreover, our C2AE can be easily extended
to address the learning problem with missing labels. Our experiments on
multiple datasets with different scales confirm the effectiveness and
robustness of our proposed method, which is shown to perform favorably against
state-of-the-art methods for multi-label classification.
| Chih-Kuan Yeh, Wei-Chieh Wu, Wei-Jen Ko, Yu-Chiang Frank Wang | null | 1707.00418 | null | null |
Parle: parallelizing stochastic gradient descent | cs.LG cs.DC stat.ML | We propose a new algorithm called Parle for parallel training of deep
networks that converges 2-4x faster than a data-parallel implementation of SGD,
while achieving significantly improved error rates that are nearly
state-of-the-art on several benchmarks including CIFAR-10 and CIFAR-100,
without introducing any additional hyper-parameters. We exploit the phenomenon
of flat minima that has been shown to lead to improved generalization error for
deep networks. Parle requires very infrequent communication with the parameter
server and instead performs more computation on each client, which makes it
well-suited to both single-machine, multi-GPU settings and distributed
implementations.
| Pratik Chaudhari, Carlo Baldassi, Riccardo Zecchina, Stefano Soatto,
Ameet Talwalkar, Adam Oberman | null | 1707.00424 | null | null |
Recommender System for News Articles using Supervised Learning | cs.IR cs.LG | In the last decade we have observed a massive increase of information, in
particular information shared through smartphones. Consequently, the amount of
available information does not allow the average user to be aware of all their
options. In this context, recommender systems use a number of techniques to
help a user find the desired product, and hence play an important role
nowadays. Recommender systems aim to identify the products that best fit user
preferences. These techniques are advantageous to both users and vendors, as
they enable the user to rapidly find what they need and the vendors to promote
their products and sales. As the industry became aware of the gains these
algorithms could bring, and as they posed an interesting problem for many
researchers, recommender systems have been a very active area since the
mid-90s. With this ongoing problem in mind, the present thesis examines the
value of a recommender algorithm that infers users' likes from their domain
preferences. Using a balanced probabilistic method, this thesis shows how news
topics can be used to recommend news articles. We use different machine
learning methods to determine the user ratings for an article: supervised
learning methods such as linear regression, Naive Bayes and logistic
regression. All of these models have a different nature, which has an impact
on the solution of the given problem. Furthermore, a number of experiments are
presented and discussed to identify the feature set that best fits the
problem.
| Akshay Kumar Chaturvedi, Filipa Peleja, Ana Freire | null | 1707.00506 | null | null |
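A hedged sketch of the model comparison described above, using scikit-learn:
the three named methods predict a user's rating for an article from its
topic features. The synthetic topic counts and the binary like/dislike
target are invented stand-ins for the thesis's news data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(1000, 20))      # hypothetical topic counts
y = (X[:, :5].sum(axis=1) > 10).astype(int)  # hypothetical like/dislike
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Classifiers score held-out accuracy directly.
for model in (LogisticRegression(max_iter=1000), MultinomialNB()):
    print(type(model).__name__, model.fit(X_tr, y_tr).score(X_te, y_te))

# Linear regression treats the rating as continuous; threshold to classify.
lin = LinearRegression().fit(X_tr, y_tr)
print("LinearRegression", ((lin.predict(X_te) > 0.5) == y_te).mean())
```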
Hashing over Predicted Future Frames for Informed Exploration of Deep
Reinforcement Learning | cs.LG cs.AI stat.ML | In deep reinforcement learning (RL) tasks, an efficient exploration mechanism
should be able to encourage an agent to take actions that lead to less
frequently visited states, which may yield a higher cumulative future
return. However, both knowing about the future and evaluating the visitation
frequency of states are non-trivial
tasks, especially for deep RL domains, where a state is represented by
high-dimensional image frames. In this paper, we propose a novel informed
exploration framework for deep RL, where we build the capability for an RL
agent to predict future transitions and evaluate the visitation frequency of
the predicted future frames in a meaningful manner. To this end, we train a
deep prediction model to predict future frames given a state-action pair, and a
convolutional autoencoder model to hash over the seen frames. In addition, to
utilize the counts derived from the seen frames to evaluate the visitation
frequency of the predicted frames, we tackle the challenge of matching the predicted
future frames and their corresponding seen frames at the latent feature level.
In this way, we derive a reliable metric for evaluating the novelty of the
future direction pointed to by each action, and hence inform the agent to
explore the least frequently visited one.
| Haiyan Yin, Jianda Chen, Sinno Jialin Pan | null | 1707.00524 | null | null |
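The counting half of the framework can be sketched in Python as follows:
latent codes from the autoencoder are binarized into hash keys, visit counts
accumulate per key, and each action is scored by the novelty of its
predicted next frame. The predict_next and encode callables stand in for the
paper's deep prediction model and convolutional autoencoder, and the
beta / sqrt(n + 1) bonus is an assumed functional form.

```python
import collections
import numpy as np

class FrameHashCounter:
    # Visit counts over binarized latent codes of frames.
    def __init__(self, beta=0.1):
        self.counts = collections.Counter()
        self.beta = beta

    def _key(self, z):
        # Binarize the (assumed given) latent code into a hashable key.
        return (np.asarray(z) > 0).astype(np.uint8).tobytes()

    def update(self, z):
        self.counts[self._key(z)] += 1

    def bonus(self, z):
        # Count-based novelty: rarer hash buckets get a larger bonus.
        return self.beta / np.sqrt(self.counts[self._key(z)] + 1)

def informed_action(state, actions, predict_next, encode, counter):
    # Score each action by the novelty of its predicted next frame and
    # explore in the direction of the least frequently visited one.
    scores = [counter.bonus(encode(predict_next(state, a))) for a in actions]
    return actions[int(np.argmax(scores))]
```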
Robust Cost-Sensitive Learning for Recommendation with Implicit Feedback | cs.LG cs.IR stat.ML | Recommendation is the task of improving the customer experience through
personalized recommendations based on users' past feedback. In this paper, we
investigate the most common scenario: the user-item (U-I) matrix of implicit
feedback. Even though many recommendation approaches are designed based on
implicit feedback, they attempt to project the U-I matrix into a low-rank
latent space, which is a strict restriction that rarely holds in practice. In
addition, although misclassification costs from imbalanced classes are
significantly different, few methods take the cost of classification error into
account. To address aforementioned issues, we propose a robust framework by
decomposing the U-I matrix into two components: (1) a low-rank matrix that
captures the common preference, and (2) a sparse matrix that detects the
user-specific preference of individuals. A cost-sensitive learning model is
embedded into the framework. Specifically, this model exploits different costs
in the loss function for the observed and unobserved instances. We show that
the resulting non-smooth convex objective can be optimized efficiently by an
accelerated projected gradient method with closed-form solutions. Moreover,
the proposed algorithm can be scaled up to large datasets after a relaxation.
The theoretical result shows that even with a small fraction of 1's in the U-I
matrix $M\in\mathbb{R}^{n\times m}$, the cost-sensitive error of the proposed
model is upper bounded by $O(\frac{\alpha}{\sqrt{mn}})$, where $\alpha$ is a
bias over imbalanced classes. Finally, empirical experiments are extensively
carried out to evaluate the effectiveness of our proposed algorithm.
Encouraging experimental results show that our algorithm outperforms several
state-of-the-art algorithms on benchmark recommendation datasets.
| Peng Yang, Peilin Zhao, Xin Gao, Yong Liu | null | 1707.00536 | null | null |
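The low-rank-plus-sparse decomposition and its closed-form proximal steps
can be sketched in NumPy as below. The cost weights, step size, and the use
of plain (unaccelerated) proximal gradient are simplifying assumptions
relative to the paper's accelerated projected gradient method.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal map of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # Elementwise soft-thresholding: proximal map of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def cost_sensitive_decompose(M, c_pos=1.0, c_neg=0.2,
                             lam_L=1.0, lam_S=0.1, lr=0.5, iters=100):
    # Decompose the implicit-feedback matrix M into a low-rank part L
    # (common preference) and a sparse part S (user-specific preference),
    # with different costs on observed (1) vs unobserved (0) entries.
    W = np.where(M > 0, c_pos, c_neg)
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(M, dtype=float)
    for _ in range(iters):
        G = W * (L + S - M)            # gradient of the cost-weighted loss
        L = svt(L - lr * G, lr * lam_L)
        S = soft(S - lr * G, lr * lam_S)
    return L, S
```

Each iteration is cheap because both proximal maps have closed forms, which
is what makes this style of solver practical at recommendation scale.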