title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
The duality structure gradient descent algorithm: analysis and
applications to neural networks | cs.LG math.OC | The training of deep neural networks is typically carried out using some form
of gradient descent, often with great success. However, existing non-asymptotic
analyses of first-order optimization algorithms typically employ a gradient
smoothness assumption that is too strong to be applicable in the case of deep
neural networks. To address this, we propose an algorithm named duality
structure gradient descent (DSGD) that is amenable to non-asymptotic
performance analysis, under mild assumptions on the training set and network
architecture. The algorithm can be viewed as a form of layer-wise coordinate
descent, where at each iteration the algorithm chooses one layer of the network
to update. The decision of what layer to update is done in a greedy fashion,
based on a rigorous lower bound on the improvement of the objective function
for each choice of layer. In the analysis, we bound the time required to reach
approximate stationary points, in both the deterministic and stochastic
settings. The convergence is measured in terms of a parameter-dependent family
of norms that is derived from the network architecture and designed to ensure
a smoothness-like property of the gradient of the training loss function. We
empirically demonstrate the effectiveness of DSGD in several neural network
training scenarios.
| Thomas Flynn | null | 1708.00523 | null | null |
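Below is a minimal toy sketch of the layer-wise greedy selection idea described above: each parameter block is scored by a guaranteed-improvement estimate and only the best block is updated. The quadratic objective, the per-block smoothness constants, and the generic gain estimate $\|g_l\|^2/(2L_l)$ are illustrative assumptions of this sketch, not the paper's duality-structure bound.

```python
import numpy as np

# Toy sketch of layer-wise greedy (block-coordinate) descent. The gain
# estimate ||g_l||^2 / (2 L_l) is a generic quadratic-model lower bound on
# improvement, standing in for the paper's rigorous duality-structure bound.
rng = np.random.default_rng(0)

A = [np.diag([1.0, 2.0]), np.diag([5.0, 10.0])]   # two parameter blocks ("layers")
w = [rng.normal(size=2), rng.normal(size=2)]
L = [np.linalg.norm(Al, 2) for Al in A]           # per-block smoothness constants

def loss(w):
    return sum(0.5 * wl @ Al @ wl for wl, Al in zip(w, A))

for t in range(25):
    g = [Al @ wl for wl, Al in zip(w, A)]         # per-block gradients
    gains = [gl @ gl / (2.0 * Ll) for gl, Ll in zip(g, L)]
    l = int(np.argmax(gains))                     # greedily pick the best layer
    w[l] = w[l] - (1.0 / L[l]) * g[l]             # update only that layer
print("final loss:", loss(w))
```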
Using millions of emoji occurrences to learn any-domain representations
for detecting sentiment, emotion and sarcasm | stat.ML cs.LG | NLP tasks are often limited by scarcity of manually annotated data. In social
media sentiment analysis and related tasks, researchers have therefore used
binarized emoticons and specific hashtags as forms of distant supervision. Our
paper shows that by extending the distant supervision to a more diverse set of
noisy labels, the models can learn richer representations. Through emoji
prediction on a dataset of 1246 million tweets containing one of 64 common
emojis we obtain state-of-the-art performance on 8 benchmark datasets within
sentiment, emotion and sarcasm detection using a single pretrained model. Our
analyses confirm that the diversity of our emotional labels yields a performance
improvement over previous distant supervision approaches.
| Bjarke Felbo, Alan Mislove, Anders S{\o}gaard, Iyad Rahwan and Sune
Lehmann | 10.18653/v1/D17-1169 | 1708.00524 | null | null |
End-to-End Neural Segmental Models for Speech Recognition | cs.CL cs.LG cs.SD | Segmental models are an alternative to frame-based models for sequence
prediction, where hypothesized path weights are based on entire segment scores
rather than a single frame at a time. Neural segmental models are segmental
models that use neural network-based weight functions. Neural segmental models
have achieved competitive results for speech recognition, and their end-to-end
training has been explored in several studies. In this work, we review neural
segmental models, which can be viewed as consisting of a neural network-based
acoustic encoder and a finite-state transducer decoder. We study end-to-end
segmental models with different weight functions, including ones based on
frame-level neural classifiers and on segmental recurrent neural networks. We
study how reducing the search space size impacts performance under different
weight functions. We also compare several loss functions for end-to-end
training. Finally, we explore training approaches, including multi-stage vs.
end-to-end training and multitask training that combines segmental and
frame-level losses.
| Hao Tang, Liang Lu, Lingpeng Kong, Kevin Gimpel, Karen Livescu, Chris
Dyer, Noah A. Smith, Steve Renals | 10.1109/JSTSP.2017.2752462 | 1708.00531 | null | null |
On $w$-mixtures: Finite convex combinations of prescribed component
distributions | cs.LG | We consider the space of $w$-mixtures which is defined as the set of finite
statistical mixtures sharing the same prescribed component distributions closed
under convex combinations. The information geometry induced by the Bregman
generator set to the Shannon negentropy on this space yields a dually flat
space called the mixture family manifold. We show how the Kullback-Leibler (KL)
divergence can be recovered from the corresponding Bregman divergence for the
negentropy generator: That is, the KL divergence between two $w$-mixtures
amounts to a Bregman Divergence (BD) induced by the Shannon negentropy
generator. Thus the KL divergence between two Gaussian Mixture Models (GMMs)
sharing the same Gaussian components is equivalent to a Bregman divergence.
This KL-BD equivalence on a mixture family manifold implies that we can perform
optimal KL-averaging aggregation of $w$-mixtures without information loss. More
generally, we prove that the statistical skew Jensen-Shannon divergence between
$w$-mixtures is equivalent to a skew Jensen divergence between their
corresponding parameters. Finally, we state several properties, divergence
identities, and inequalities relating to $w$-mixtures.
| Frank Nielsen and Richard Nock | null | 1708.00568 | null | null |
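The KL-equals-Bregman identity above can be checked numerically on a finite alphabet, where the Shannon negentropy of a mixture is an exact finite sum. The alphabet size, the Dirichlet-sampled components, and the finite-difference gradient are assumptions of this sketch only:

```python
import numpy as np

rng = np.random.default_rng(1)

# k prescribed component distributions over a finite alphabet of size n.
k, n = 3, 6
P = rng.dirichlet(np.ones(n), size=k)           # rows are the fixed components

def mix(theta_free):
    theta = np.append(theta_free, 1.0 - theta_free.sum())
    return theta @ P

def F(theta_free):                               # Shannon negentropy of the mixture
    m = mix(theta_free)
    return np.sum(m * np.log(m))

def gradF(theta_free, eps=1e-7):                 # central-difference gradient
    g = np.zeros_like(theta_free)
    for i in range(len(theta_free)):
        e = np.zeros_like(theta_free); e[i] = eps
        g[i] = (F(theta_free + e) - F(theta_free - e)) / (2 * eps)
    return g

t1 = np.array([0.2, 0.5]); t2 = np.array([0.6, 0.1])
m1, m2 = mix(t1), mix(t2)
kl = np.sum(m1 * np.log(m1 / m2))                        # KL between the mixtures
bregman = F(t1) - F(t2) - gradF(t2) @ (t1 - t2)          # Bregman divergence of F
print(kl, bregman)
```

Up to the finite-difference error, the two printed numbers agree, illustrating that the KL divergence between $w$-mixtures reduces to a Bregman divergence on the weight simplex.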
Geometric Convolutional Neural Network for Analyzing Surface-Based
Neuroimaging Data | cs.NE cs.LG | The conventional CNN, widely used for two-dimensional images, is not
directly applicable to data on non-regular geometric surfaces, such as cortical
thickness. We propose Geometric CNN (gCNN), which deals with data representation
over a spherical surface and renders pattern recognition in a multi-shell mesh
structure. The classification accuracy for sex was significantly higher than
that of an SVM and an image-based CNN. The method only uses MRI thickness data
to classify sex, but it can be extended to classify disease from other MRI or
fMRI data.
| Si-Baek Seong, Chongwon Pae, and Hae-Jeong Park | null | 1708.00587 | null | null |
Hidden Physics Models: Machine Learning of Nonlinear Partial
Differential Equations | cs.AI cs.LG math.AP stat.ML | While there is currently a lot of enthusiasm about "big data", useful data is
usually "small" and expensive to acquire. In this paper, we present a new
paradigm of learning partial differential equations from {\em small} data. In
particular, we introduce \emph{hidden physics models}, which are essentially
data-efficient learning machines capable of leveraging the underlying laws of
physics, expressed by time dependent and nonlinear partial differential
equations, to extract patterns from high-dimensional data generated from
experiments. The proposed methodology may be applied to the problem of
learning, system identification, or data-driven discovery of partial
differential equations. Our framework relies on Gaussian processes, a powerful
tool for probabilistic inference over functions, that enables us to strike a
balance between model complexity and data fitting. The effectiveness of the
proposed approach is demonstrated through a variety of canonical problems,
spanning a number of scientific domains, including the Navier-Stokes,
Schr\"odinger, Kuramoto-Sivashinsky, and time dependent linear fractional
equations. The methodology provides a promising new direction for harnessing
the long-standing developments of classical methods in applied mathematics and
mathematical physics to design learning machines with the ability to operate in
complex domains without requiring large quantities of data.
| Maziar Raissi and George Em Karniadakis | 10.1016/j.jcp.2017.11.039 | 1708.00588 | null | null |
Controllable Generative Adversarial Network | cs.LG cs.CV stat.ML | The recently introduced generative adversarial network (GAN) has shown
numerous promising results in generating realistic samples. The essential task of
GAN is to control the features of samples generated from a random distribution.
While the current GAN structures, such as conditional GAN, successfully
generate samples with desired major features, they often fail to produce
detailed features that bring specific differences among samples. To overcome
this limitation, here we propose a controllable GAN (ControlGAN) structure. By
separating a feature classifier from a discriminator, the generator of
ControlGAN is designed to learn to generate synthetic samples with specific
detailed features. Evaluated with multiple image datasets, ControlGAN shows a
power to generate improved samples with well-controlled features. Furthermore,
we demonstrate that ControlGAN can generate intermediate features and opposite
features for interpolated and extrapolated input labels that are not used in
the training process. This implies that ControlGAN can significantly contribute
to the variety of generated samples.
| Minhyeok Lee and Junhee Seok | null | 1708.00598 | null | null |
Exact Tensor Completion from Sparsely Corrupted Observations via Convex
Optimization | cs.LG cs.CV cs.IT math.IT stat.ML | This paper conducts a rigorous analysis for provable estimation of
multidimensional arrays, in particular third-order tensors, from a random
subset of its corrupted entries. Our study rests heavily on a recently proposed
tensor algebraic framework in which we can obtain tensor singular value
decomposition (t-SVD) that is similar to the SVD for matrices, and define a new
notion of tensor rank referred to as the tubal rank. We prove that by simply
solving a convex program, which minimizes a weighted combination of tubal
nuclear norm, a convex surrogate for the tubal rank, and the $\ell_1$-norm, one
can recover an incoherent tensor exactly with overwhelming probability,
provided that its tubal rank is not too large and that the corruptions are
reasonably sparse. Interestingly, our result includes the recovery guarantees
for the problems of tensor completion (TC) and tensor robust principal
component analysis (TRPCA) under the same algebraic setup as special cases. An
alternating direction method of multipliers (ADMM) algorithm is presented to
solve this optimization problem. Numerical experiments verify our theory and
real-world applications demonstrate the effectiveness of our algorithm.
| Jonathan Q. Jiang, Michael K. Ng | null | 1708.00601 | null | null |
ProjectionNet: Learning Efficient On-Device Deep Networks Using Neural
Projections | cs.LG cs.AI cs.NE | Deep neural networks have become ubiquitous for applications related to
visual recognition and language understanding tasks. However, it is often
prohibitive to use typical neural networks on devices like mobile phones or
smart watches since the model sizes are huge and cannot fit in the limited
memory available on such devices. While these devices could make use of machine
learning models running on high-performance data centers with CPUs or GPUs,
this is not feasible for many applications because data can be privacy
sensitive and inference needs to be performed directly "on" device.
We introduce a new architecture for training compact neural networks using a
joint optimization framework. At its core lies a novel objective that jointly
trains using two different types of networks--a full trainer neural network
(using existing architectures like Feed-forward NNs or LSTM RNNs) combined with
a simpler "projection" network that leverages random projections to transform
inputs or intermediate representations into bits. The simpler network encodes
lightweight and efficient-to-compute operations in bit space with a low memory
footprint. The two networks are trained jointly using backpropagation, where
the projection network learns from the full network similar to apprenticeship
learning. Once trained, the smaller network can be used directly for inference
at low memory and computation cost. We demonstrate the effectiveness of the new
approach at significantly shrinking the memory requirements of different types
of neural networks while preserving good accuracy on visual recognition and
text classification tasks. We also study the question "how many neural bits are
required to solve a given task?" using the new framework and show empirical
results contrasting model predictive capacity (in bits) versus accuracy on
several datasets.
| Sujith Ravi | null | 1708.00630 | null | null |
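A minimal sketch of the projection side of this idea: inputs are hashed into bits via fixed random hyperplane projections (LSH-style), and a small layer operates on the bits. The dimensions, the sign-bit projection, and the single output layer (left untrained here) are assumptions of this sketch; the paper's joint training against a full trainer network is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

D, T = 64, 32                        # input dim, number of projection bits
W_proj = rng.normal(size=(T, D))     # fixed random projection (never trained)

def project_bits(x):
    """LSH-style projection: T sign bits per input."""
    return (W_proj @ x > 0).astype(np.float32)

# The on-device model is just a small trainable layer over the bit vector.
n_classes = 10
W_out = rng.normal(scale=0.01, size=(n_classes, T))

def projection_net(x):
    return W_out @ project_bits(x)   # logits computed from bits only

x = rng.normal(size=D)
print(project_bits(x))               # compact T-bit representation
print(projection_net(x).shape)
```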
On the Importance of Consistency in Training Deep Neural Networks | cs.LG cs.AI cs.CV cs.NE | We explain that the difficulties of training deep neural networks come from a
syndrome of three consistency issues. This paper describes our efforts in their
analysis and treatment. The first issue is the training speed inconsistency in
different layers. We propose to address it with an intuitive,
simple-to-implement, low footprint second-order method. The second issue is the
scale inconsistency between the layer inputs and the layer residuals. We
explain how second-order information provides favorable convenience in removing
this roadblock. The third and most challenging issue is the inconsistency in
residual propagation. Based on the fundamental theorem of linear algebra, we
provide a mathematical characterization of the famous vanishing gradient
problem. Thus, an important design principle for future optimization and neural
network design is derived. We conclude this paper with the construction of a
novel contractive neural network.
| Chengxi Ye, Yezhou Yang, Cornelia Fermuller, Yiannis Aloimonos | null | 1708.00631 | null | null |
A Multi-Objective Learning to re-Rank Approach to Optimize Online
Marketplaces for Multiple Stakeholders | cs.IR cs.LG | Multi-objective recommender systems address the difficult task of
recommending items that are relevant to multiple, possibly conflicting,
criteria. However, these systems are most often designed to address the
objective of one single stakeholder, typically, in online commerce, the
consumers whose input and purchasing decisions ultimately determine the success
of the recommendation systems. In this work, we address the multi-objective,
multi-stakeholder, recommendation problem involving one or more objective(s)
per stakeholder. In addition to the consumer stakeholder, we also consider two
other stakeholders: the suppliers who provide the goods and services for sale
and the intermediary who is responsible for helping connect consumers to
suppliers via its recommendation algorithms. We analyze the multi-objective,
multi-stakeholder, problem from the point of view of the online marketplace
intermediary whose objective is to maximize its commission through its
recommender system. We define a multi-objective problem relating all our three
stakeholders which we solve with a novel learning-to-re-rank approach that
makes use of a novel regularization function based on the Kendall tau
correlation metric and its kernel version; given an initial ranking of item
recommendations built for the consumer, we aim to re-rank it such that the new
ranking is also optimized for the secondary objectives while staying close to
the initial ranking. We evaluate our approach on a real-world dataset of hotel
recommendations provided by Expedia where we show the effectiveness of our
approach against a business-rules oriented baseline model.
| Phong Nguyen, John Dines and Jan Krasnodebski | null | 1708.00651 | null | null |
Fairness-aware machine learning: a perspective | cs.AI cs.CY cs.LG stat.ML | Algorithms learned from data are increasingly used for deciding many aspects
of our lives: from the movies we see, to the prices we pay, or the medicine we
get. Yet there is growing evidence that decision making by inappropriately
trained algorithms may unintentionally discriminate against people. For
example, in automated matching of candidate CVs with job descriptions,
algorithms may capture and propagate ethnicity-related biases. Several repairs
for selected algorithms have already been proposed, but the underlying
mechanisms of how such discrimination happens are not yet scientifically
understood from the computational perspective. We need to develop a
theoretical understanding of how algorithms may become discriminatory, and
establish fundamental machine learning principles for prevention. We need to
analyze the machine learning process as a whole to systematically explain the
roots of discrimination, which will allow us to devise global machine learning
optimization criteria for guaranteed prevention, as opposed to pushing
empirical constraints into existing algorithms case-by-case. As a result, the
state-of-the-art will advance from heuristic repairing to proactive and
theoretically supported prevention. This is needed not only because the law
requires the protection of vulnerable people. The penetration of big data
initiatives will only increase, and computer science needs to provide solid
explanations and accountability to the public before public concerns lead to
unnecessarily restrictive regulations against machine learning.
| Indre Zliobaite | null | 1708.00754 | null | null |
Streaming kernel regression with provably adaptive mean, variance, and
regularization | stat.ML cs.LG | We consider the problem of streaming kernel regression, when the observations
arrive sequentially and the goal is to recover the underlying mean function,
assumed to belong to an RKHS. The variance of the noise is not assumed to be
known. In this context, we tackle the problem of tuning the regularization
parameter adaptively at each time step, while maintaining tight confidence
bound estimates on the value of the mean function at each point. To this end,
we first generalize existing results for finite-dimensional linear regression
with fixed regularization and known variance to the kernel setup with a
regularization parameter allowed to be a measurable function of past
observations. Then, using appropriate self-normalized inequalities we build
upper and lower bound estimates for the variance, leading to Bernstein-like
concentration bounds. The latter is then used to define the adaptive
regularization. The bounds resulting from our technique are valid uniformly
over all observation points and all time steps, and are compared against the
literature with numerical experiments. Finally, the potential of these tools is
illustrated by an application to kernelized bandits, where we revisit the
Kernel UCB and Kernel Thompson Sampling procedures, and show the benefits of
the novel adaptive kernel tuning strategy.
| Audrey Durand, Odalric-Ambrym Maillard, Joelle Pineau | null | 1708.00768 | null | null |
Dynamic Entity Representations in Neural Language Models | cs.CL cs.LG | Understanding a long document requires tracking how entities are introduced
and evolve over time. We present a new type of language model, EntityNLM, that
can explicitly model entities, dynamically update their representations, and
contextually generate their mentions. Our model is generative and flexible; it
can model an arbitrary number of entities in context while generating each
entity mention at an arbitrary length. In addition, it can be used for several
different tasks such as language modeling, coreference resolution, and entity
prediction. Experimental results with all these tasks demonstrate that our
model consistently outperforms strong baselines and prior work.
| Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, Noah A.
Smith | null | 1708.00781 | null | null |
Variational Generative Stochastic Networks with Collaborative Shaping | cs.LG | We develop an approach to training generative models based on unrolling a
variational auto-encoder into a Markov chain, and shaping the chain's
trajectories using a technique inspired by recent work in Approximate Bayesian
computation. We show that the global minimizer of the resulting objective is
achieved when the generative model reproduces the target distribution. To allow
finer control over the behavior of the models, we add a regularization term
inspired by techniques used for regularizing certain types of policy search in
reinforcement learning. We present empirical results on the MNIST and TFD
datasets which show that our approach offers state-of-the-art performance, both
quantitatively and from a qualitative point of view.
| Philip Bachman and Doina Precup | null | 1708.00805 | null | null |
Adversarial-Playground: A Visualization Suite Showing How Adversarial
Examples Fool Deep Learning | cs.CR cs.AI cs.LG | Recent studies have shown that attackers can force deep learning models to
misclassify so-called "adversarial examples": maliciously generated images
formed by making imperceptible modifications to pixel values. With growing
interest in deep learning for security applications, it is important for
security experts and users of machine learning to recognize how learning
systems may be attacked. Due to the complex nature of deep learning, it is
challenging to understand how deep models can be fooled by adversarial
examples. Thus, we present a web-based visualization tool,
Adversarial-Playground, to demonstrate the efficacy of common adversarial
methods against a convolutional neural network (CNN) system.
Adversarial-Playground is educational, modular and interactive. (1) It enables
non-experts to compare examples visually and to understand why an adversarial
example can fool a CNN-based image classifier. (2) It can help security experts
explore further vulnerabilities of deep learning as a software module. (3) Building
an interactive visualization is challenging in this domain due to the large
feature space of image classification (generating adversarial examples is slow
in general and visualizing images is costly). Through multiple novel design
choices, our tool can provide fast and accurate responses to user requests.
Empirically, we find that our client-server division strategy reduced the
response time by an average of 1.5 seconds per sample. Our other innovation, a
faster variant of the JSMA evasion algorithm, empirically performs twice as fast
as JSMA while maintaining a comparable evasion rate.
Project source code and data from our experiments are available at:
https://github.com/QData/AdversarialDNN-Playground
| Andrew P. Norton, Yanjun Qi | null | 1708.00807 | null | null |
Audio Super Resolution using Neural Networks | cs.SD cs.LG | We introduce a new audio processing technique that increases the sampling
rate of signals such as speech or music using deep convolutional neural
networks. Our model is trained on pairs of low and high-quality audio examples;
at test-time, it predicts missing samples within a low-resolution signal in an
interpolation process similar to image super-resolution. Our method is simple
and does not involve specialized audio processing techniques; in our
experiments, it outperforms baselines on standard speech and music benchmarks
at upscaling ratios of 2x, 4x, and 6x. The method has practical applications in
telephony, compression, and text-to-speech generation; it demonstrates the
effectiveness of feed-forward convolutional architectures on an audio
generation task.
| Volodymyr Kuleshov, S. Zayd Enam, Stefano Ermon | null | 1708.00853 | null | null |
Machine learning for neural decoding | q-bio.NC cs.LG stat.ML | Despite rapid advances in machine learning tools, the majority of neural
decoding approaches still use traditional methods. Modern machine learning
tools, which are versatile and easy to use, have the potential to significantly
improve decoding performance. This tutorial describes how to effectively apply
these algorithms for typical decoding problems. We provide descriptions, best
practices, and code for applying common machine learning methods, including
neural networks and gradient boosting. We also provide detailed comparisons of
the performance of various methods at the task of decoding spiking activity in
motor cortex, somatosensory cortex, and hippocampus. Modern methods,
particularly neural networks and ensembles, significantly outperform
traditional approaches, such as Wiener and Kalman filters. Improving the
performance of neural decoding algorithms allows neuroscientists to better
understand the information contained in a neural population and can help
advance engineering applications such as brain machine interfaces.
| Joshua I. Glaser, Ari S. Benjamin, Raeed H. Chowdhury, Matthew G.
Perich, Lee E. Miller, Konrad P. Kording | null | 1708.00909 | null | null |
On the convergence properties of a $K$-step averaging stochastic
gradient descent algorithm for nonconvex optimization | cs.LG cs.DC stat.ML | Despite their popularity, the practical performance of asynchronous
stochastic gradient descent methods (ASGD) for solving large-scale machine
learning problems is not as good as theoretical results indicate. We adopt and
analyze a synchronous K-step averaging stochastic gradient descent algorithm
which we call K-AVG. We establish the convergence results of K-AVG for
nonconvex objectives and explain why the K-step delay is necessary and leads to
better performance than traditional parallel stochastic gradient descent which
is a special case of K-AVG with $K=1$. We also show that K-AVG scales better
than ASGD. Another advantage of K-AVG over ASGD is that it allows larger
stepsizes. On a cluster of $128$ GPUs, K-AVG is faster than ASGD
implementations and achieves better accuracies and faster convergence for
the CIFAR dataset.
| Fan Zhou and Guojing Cong | 10.24963/ijcai.2018/447 | 1708.01012 | null | null |
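A serial simulation of the K-AVG scheme described above, under toy assumptions (quadratic loss, synthetic gradient noise, illustrative $P$, $K$, and stepsize): each of $P$ workers takes $K$ local SGD steps starting from the current average, then parameters are averaged; $K=1$ recovers synchronous parallel SGD.

```python
import numpy as np

rng = np.random.default_rng(0)
P, K, eta, dim = 4, 8, 0.05, 10
w = np.zeros(dim)
target = rng.normal(size=dim)

def stoch_grad(w):
    # gradient of 0.5*||w - target||^2 plus noise, standing in for minibatch noise
    return (w - target) + 0.1 * rng.normal(size=dim)

for rnd in range(50):
    local_models = []
    for p in range(P):                 # each worker starts from the average
        wp = w.copy()
        for _ in range(K):             # K local SGD steps before syncing
            wp -= eta * stoch_grad(wp)
        local_models.append(wp)
    w = np.mean(local_models, axis=0)  # synchronous averaging step
print("distance to optimum:", np.linalg.norm(w - target))
```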
Sensor Transformation Attention Networks | cs.LG cs.CV | Recent work on encoder-decoder models for sequence-to-sequence mapping has
shown that integrating both temporal and spatial attention mechanisms into
neural networks increases the performance of the system substantially. In this
work, we report on the application of an attentional signal not on temporal and
spatial regions of the input, but instead as a method of switching among inputs
themselves. We evaluate the particular role of attentional switching in the
presence of dynamic noise in the sensors, and demonstrate how the attentional
signal responds dynamically to changing noise levels in the environment to
achieve increased performance on both audio and visual tasks in three
commonly-used datasets: TIDIGITS, Wall Street Journal, and GRID. Moreover, the
proposed sensor transformation network architecture naturally introduces a
number of advantages that merit exploration, including ease of adding new
sensors to existing architectures, attentional interpretability, and increased
robustness in a variety of noisy environments not seen during training.
Finally, we demonstrate that the sensor selection attention mechanism of a
model trained only on the small TIDIGITS dataset can be transferred directly to
a pre-existing larger network trained on the Wall Street Journal dataset,
maintaining functionality of switching between sensors to yield a dramatic
reduction of error in the presence of noise.
| Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii
Liu | null | 1708.01015 | null | null |
Applying advanced machine learning models to classify
electro-physiological activity of human brain for use in biometric
identification | cs.LG | In this article we present the results of our research related to the study
of correlations between specific visual stimulation and the elicited
electro-physiological response of the brain, collected by EEG sensors from a group of
participants. We will look at how the various characteristics of visual
stimulation affect the measured electro-physiological response of the brain and
describe the optimal parameters found that elicit a steady-state visually
evoked potential (SSVEP) in certain parts of the cerebral cortex where it can
be reliably perceived by the electrode of the EEG device. After that, we
continue with a description of the advanced machine learning pipeline model
that can perform confident classification of the collected EEG data in order to
(a) reliably distinguish signal from noise (about 85% validation score) and (b)
reliably distinguish between EEG records collected from different human
participants (about 80% validation score). Finally, we demonstrate that the
proposed method works reliably even with an inexpensive (less than $100)
consumer-grade EEG sensing device and with participants who do not have
previous experience with EEG technology (EEG illiterate). All this in
combination opens up broad prospects for the development of new types of
consumer devices, e.g., based on virtual reality helmets or augmented reality
glasses, where an EEG sensor can be easily integrated. The proposed method can be
used to improve the online user experience by providing, e.g., password-less
user identification for VR / AR applications. It can also find a more advanced
application in intensive care units where collected EEG data can be used to
classify the level of conscious awareness of patients during anesthesia or to
automatically detect hardware failures by classifying the input signal as
noise.
| Iaroslav Omelianenko | null | 1708.01167 | null | null |
DSOD: Learning Deeply Supervised Object Detectors from Scratch | cs.CV cs.LG | We present Deeply Supervised Object Detector (DSOD), a framework that can
learn object detectors from scratch. State-of-the-art object detectors rely
heavily on the off-the-shelf networks pre-trained on large-scale classification
datasets like ImageNet, which incurs learning bias due to the difference in
both the loss functions and the category distributions between classification
and detection tasks. Model fine-tuning for the detection task could alleviate
this bias to some extent but not fundamentally. Besides, transferring
pre-trained models from classification to detection between discrepant domains
is even more difficult (e.g. RGB to depth images). A better solution to tackle
these two critical problems is to train object detectors from scratch, which
motivates our proposed DSOD. Previous efforts in this direction mostly failed
due to much more complicated loss functions and limited training data in object
detection. In DSOD, we contribute a set of design principles for training
object detectors from scratch. One of the key findings is that deep
supervision, enabled by dense layer-wise connections, plays a critical role in
learning a good detector. Combining with several other principles, we develop
DSOD following the single-shot detection (SSD) framework. Experiments on PASCAL
VOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better
results than the state-of-the-art solutions with much more compact models. For
instance, DSOD outperforms SSD on all three benchmarks with real-time detection
speed, while requiring only 1/2 the parameters of SSD and 1/10 the parameters of Faster
RCNN. Our code and models are available at: https://github.com/szq0214/DSOD .
| Zhiqiang Shen and Zhuang Liu and Jianguo Li and Yu-Gang Jiang and
Yurong Chen and Xiangyang Xue | null | 1708.01241 | null | null |
Independently Controllable Factors | cs.LG cs.AI stat.ML | It has been postulated that a good representation is one that disentangles
the underlying explanatory factors of variation. However, it remains an open
question what kind of training framework could potentially achieve that.
Whereas most previous work focuses on the static setting (e.g., with images),
we postulate that some of the causal factors could be discovered if the learner
is allowed to interact with its environment. The agent can experiment with
different actions and observe their effects. More specifically, we hypothesize
that some of these factors correspond to aspects of the environment which are
independently controllable, i.e., that there exists a policy and a learnable
feature for each such aspect of the environment, such that this policy can
yield changes in that feature with minimal changes to other features that
explain the statistical variations in the observed data. We propose a specific
objective function to find such factors and verify experimentally that it can
indeed disentangle independently controllable aspects of the environment
without any extrinsic reward signal.
| Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati,
Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, Yoshua
Bengio | null | 1708.01289 | null | null |
Effective sketching methods for value function approximation | cs.LG | High-dimensional representations, such as radial basis function networks or
tile coding, are common choices for policy evaluation in reinforcement
learning. Learning with such high-dimensional representations, however, can be
expensive, particularly for matrix methods, such as least-squares temporal
difference learning or quasi-Newton methods that approximate matrix step-sizes.
In this work, we explore the utility of sketching for these two classes of
algorithms. We highlight issues with sketching the high-dimensional features
directly, which can incur significant bias. As a remedy, we demonstrate how to
use sketching more sparingly, with only a left-sided sketch, that can still
enable significant computational gains and the use of these matrix-based
learning algorithms that are less sensitive to parameters. We empirically
investigate these algorithms, in four domains with a variety of
representations. Our aim is to provide insights into effective use of sketching
in practice.
| Yangchen Pan, Erfan Sadeqi Azer and Martha White | null | 1708.01298 | null | null |
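As a rough illustration of a left-sided sketch applied to a matrix method, the snippet below forms the LSTD system $Aw=b$ from synthetic features and sketches only the left factor, so the solution stays in the original $d$-dimensional space. The random Gaussian sketch, the synthetic trajectory, and the sizes are assumptions of this sketch, and solution quality improves as the sketch size $m$ grows toward $d$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, T, gamma = 200, 50, 1000, 0.9

# Synthetic trajectory of features, next-state features, and rewards (toy stand-ins).
Phi  = rng.normal(size=(T, d))
Phi2 = rng.normal(size=(T, d))
r    = rng.normal(size=T)

# Full LSTD system: A w = b, with A built from all feature outer products.
A = Phi.T @ (Phi - gamma * Phi2)
b = Phi.T @ r

# Left-sided sketch: apply S only to the left features; w stays in R^d.
S = rng.normal(size=(m, d)) / np.sqrt(m)
A_s = (Phi @ S.T).T @ (Phi - gamma * Phi2)   # equals S @ A, built incrementally
b_s = (Phi @ S.T).T @ r                      # equals S @ b

w_full   = np.linalg.solve(A, b)
w_sketch = np.linalg.lstsq(A_s, b_s, rcond=None)[0]
# The sketched solve is cheaper (m x d instead of d x d); its residual on the
# full system shrinks as the sketch size m grows toward d.
print("sketched relative residual:", np.linalg.norm(A @ w_sketch - b) / np.linalg.norm(b))
print("full relative residual:    ", np.linalg.norm(A @ w_full - b) / np.linalg.norm(b))
```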
CASSL: Curriculum Accelerated Self-Supervised Learning | cs.RO cs.CV cs.LG | Recent self-supervised learning approaches focus on using a few thousand data
points to learn policies for high-level, low-dimensional action spaces.
However, scaling this framework to high-dimensional control requires either
scaling up the data collection efforts or using a clever sampling strategy for
training. We present a novel approach - Curriculum Accelerated Self-Supervised
Learning (CASSL) - to train policies that map visual information to high-level,
higher-dimensional action spaces. CASSL orders the sampling of training data
based on control dimensions: the learning and sampling are focused on a few
control parameters before other parameters. The right curriculum for learning
is suggested by variance-based global sensitivity analysis of the control
space. We apply our CASSL framework to learning how to grasp using an adaptive,
underactuated multi-fingered gripper, a challenging system to control. Our
experimental results indicate that CASSL provides significant improvement and
generalization compared to baseline methods such as staged curriculum learning
(8% increase) and complete end-to-end learning with random exploration (14%
improvement) tested on a set of novel objects.
| Adithyavairavan Murali, Lerrel Pinto, Dhiraj Gandhi, Abhinav Gupta | null | 1708.01354 | null | null |
Variance-Reduced Stochastic Learning under Random Reshuffling | cs.LG math.OC stat.ML | Several useful variance-reduced stochastic gradient algorithms, such as SVRG,
SAGA, Finito, and SAG, have been proposed to minimize empirical risks with
linear convergence properties to the exact minimizer. The existing convergence
results assume uniform data sampling with replacement. However, it has been
observed in related works that random reshuffling can deliver superior
performance over uniform sampling and, yet, no formal proofs or guarantees of
exact convergence exist for variance-reduced algorithms under random
reshuffling. This paper makes two contributions. First, it resolves this open
issue and provides the first theoretical guarantee of linear convergence under
random reshuffling for SAGA; the argument is also adaptable to other
variance-reduced algorithms. Second, under random reshuffling, the paper
proposes a new amortized variance-reduced gradient (AVRG) algorithm with
constant storage requirements compared to SAGA and with balanced gradient
computations compared to SVRG. AVRG is also shown analytically to converge
linearly.
| Bicheng Ying and Kun Yuan and Ali H. Sayed | null | 1708.01383 | null | null |
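A minimal sketch of SAGA run under random reshuffling, the sampling scheme analyzed above: each epoch sweeps a fresh permutation of the data instead of sampling uniformly with replacement. The least-squares problem, sizes, and stepsize rule are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 100, 5
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d)                   # consistent least-squares problem

def grad_i(w, i):                            # gradient of 0.5*(x_i.w - y_i)^2
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
table = np.array([grad_i(w, i) for i in range(N)])   # stored past gradients
g_avg = table.mean(axis=0)
mu = 1.0 / (3.0 * np.max(np.sum(X**2, axis=1)))      # conservative stepsize

for epoch in range(30):
    for i in rng.permutation(N):             # random reshuffling: one pass, no repeats
        g_new = grad_i(w, i)
        w -= mu * (g_new - table[i] + g_avg)          # SAGA update
        g_avg += (g_new - table[i]) / N               # maintain running average
        table[i] = g_new
print("training error:", np.linalg.norm(X @ w - y))
```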
Variance-Reduced Stochastic Learning by Networked Agents under Random
Reshuffling | cs.LG math.OC stat.ML | A new amortized variance-reduced gradient (AVRG) algorithm was developed in
\cite{ying2017convergence}, which has constant storage requirement in
comparison to SAGA and balanced gradient computations in comparison to SVRG.
One key advantage of the AVRG strategy is its amenability to decentralized
implementations. In this work, we show how AVRG can be extended to the network
case where multiple learning agents are assumed to be connected by a graph
topology. In this scenario, each agent observes data that is spatially
distributed and all agents are only allowed to communicate with direct
neighbors. Moreover, the amount of data observed by the individual agents may
differ drastically. For such situations, the balanced gradient computation
property of AVRG becomes a real advantage in reducing idle time caused by
unbalanced local data storage requirements, which is characteristic of other
reduced-variance gradient algorithms. The resulting diffusion-AVRG algorithm is
shown to have linear convergence to the exact solution, and is much more memory
efficient than other alternative algorithms. In addition, we propose a
mini-batch strategy to balance the communication and computation efficiency for
diffusion-AVRG. When a proper batch size is employed, it is observed in
simulations that diffusion-AVRG is more computationally efficient than exact
diffusion or EXTRA while maintaining almost the same communication efficiency.
| Kun Yuan, Bicheng Ying, Jiageng Liu, and Ali H. Sayed | null | 1708.01384 | null | null |
The All-Paths and Cycles Graph Kernel | cs.LG | With the recent rise in the amount of structured data available, there has
been considerable interest in methods for machine learning with graphs. Many of
these approaches have been kernel methods, which focus on measuring the
similarity between graphs. These generally involve measuring the similarity
of structural elements such as walks or paths. Borgwardt and Kriegel proposed
the all-paths kernel but emphasized that it is NP-hard to compute and
infeasible in practice, favouring instead the shortest-path kernel. In this
paper, we introduce a new algorithm for computing the all-paths kernel which is
very efficient and enrich it further by including the simple cycles as well. We
demonstrate how it is feasible even on large datasets to compute all the paths
and simple cycles up to a moderate length. We show how to count labelled
paths/simple cycles between vertices of a graph and evaluate a labelled path
and simple cycles kernel. Extensive evaluations on a variety of graph datasets
demonstrate that the all-paths and cycles kernel has superior performance to
the shortest-path kernel and state-of-the-art performance overall.
| P.-L. Giscard and R. C. Wilson | null | 1708.01410 | null | null |
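To make the quantity concrete, here is a brute-force sketch of an all-paths-style kernel on small vertex-labelled graphs: count the label sequences of all simple paths up to a length cap, then take the inner product of the two count vectors. The graph encoding, the length cap, and the exhaustive DFS are assumptions of this sketch; the paper's contribution is precisely an efficient algorithm (extended with simple cycles) that avoids this exponential enumeration.

```python
from collections import Counter

def path_counts(adj, labels, max_len):
    """Count label sequences of all simple paths with up to max_len vertices."""
    counts = Counter()
    def dfs(v, visited, seq):
        counts[tuple(seq)] += 1
        if len(seq) < max_len:
            for u in adj[v]:
                if u not in visited:
                    dfs(u, visited | {u}, seq + [labels[u]])
    for v in adj:
        dfs(v, {v}, [labels[v]])
    return counts

def all_paths_kernel(g1, g2, max_len=4):
    c1 = path_counts(*g1, max_len)
    c2 = path_counts(*g2, max_len)
    return sum(c1[s] * c2[s] for s in c1.keys() & c2.keys())

# Two small labelled graphs: a labelled triangle and a labelled path graph.
g_tri  = ({0: [1, 2], 1: [0, 2], 2: [0, 1]}, {0: "A", 1: "B", 2: "A"})
g_path = ({0: [1], 1: [0, 2], 2: [1]},       {0: "A", 1: "B", 2: "A"})
print(all_paths_kernel(g_tri, g_path))
```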
Distributed Solution of Large-Scale Linear Systems via Accelerated
Projection-Based Consensus | cs.LG cs.DC math.NA | Solving a large-scale system of linear equations is a key step at the heart
of many algorithms in machine learning, scientific computing, and beyond. When
the problem dimension is large, computational and/or memory constraints make it
desirable, or even necessary, to perform the task in a distributed fashion. In
this paper, we consider a common scenario in which a taskmaster intends to
solve a large-scale system of linear equations by distributing subsets of the
equations among a number of computing machines/cores. We propose an accelerated
distributed consensus algorithm, in which at each iteration every machine
updates its solution by adding a scaled version of the projection of an error
signal onto the nullspace of its system of equations, and where the taskmaster
conducts an averaging over the solutions with momentum. The convergence
behavior of the proposed algorithm is analyzed in detail and analytically shown
to compare favorably with the convergence rate of alternative distributed
methods, namely distributed gradient descent, distributed versions of
Nesterov's accelerated gradient descent and heavy-ball method, the block
Cimmino method, and ADMM. On randomly chosen linear systems, as well as on
real-world data sets, the proposed method offers significant speed-up relative
to all the aforementioned methods. Finally, our analysis suggests a novel
variation of the distributed heavy-ball method, which employs a particular
distributed preconditioning, and which achieves the same theoretical
convergence rate as the proposed consensus-based method.
| Navid Azizan-Ruhi, Farshad Lahouti, Salman Avestimehr, Babak Hassibi | null | 1708.01413 | null | null |
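The per-machine update described above can be sketched directly: each machine keeps an exact solution of its own block $A_p x = b_p$ and moves toward the master's average only within the nullspace of $A_p$, while the master averages with a simple momentum term. The problem sizes and the $\gamma$, $\eta$ values are toy assumptions, and the momentum form here is a simplified heavy-ball-style variant, not the paper's exact accelerated recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, machines = 40, 4
A = rng.normal(size=(n, n)); b = rng.normal(size=n)
blocks = np.array_split(np.arange(n), machines)

projs, xs = [], []
for idx in blocks:
    Ap, bp = A[idx], b[idx]
    Ap_pinv = np.linalg.pinv(Ap)
    projs.append(np.eye(n) - Ap_pinv @ Ap)   # projector onto the nullspace of A_p
    xs.append(Ap_pinv @ bp)                  # exact (min-norm) local solution

gamma, eta = 1.0, 0.1                        # toy step and momentum parameters
xbar = np.mean(xs, axis=0); xbar_prev = xbar.copy()
for t in range(2000):
    for p in range(machines):
        # local step: stays feasible for A_p x = b_p, moves toward the average
        xs[p] = xs[p] + gamma * projs[p] @ (xbar - xs[p])
    avg = np.mean(xs, axis=0)
    xbar, xbar_prev = avg + eta * (xbar - xbar_prev), xbar   # momentum averaging
# The residual decreases toward zero as the local solutions reach consensus.
print("residual:", np.linalg.norm(A @ xbar - b))
```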
Exploring the Function Space of Deep-Learning Machines | cond-mat.dis-nn cs.LG | The function space of deep-learning machines is investigated by studying
growth in the entropy of functions of a given error with respect to a reference
function, realized by a deep-learning machine. Using physics-inspired methods
we study both sparsely and densely-connected architectures to discover a
layer-wise convergence of candidate functions, marked by a corresponding
reduction in entropy when approaching the reference function, gain insight into
the importance of having a large number of layers, and observe phase
transitions as the error increases.
| Bo Li and David Saad | 10.1103/PhysRevLett.120.248301 | 1708.01422 | null | null |
Brain Responses During Robot-Error Observation | cs.HC cs.LG cs.RO | Brain-controlled robots are a promising new type of assistive device for
severely impaired persons. Little is however known about how to optimize the
interaction of humans and brain-controlled robots. Information about the
human's perceived correctness of robot performance might provide a useful
teaching signal for adaptive control algorithms and thus help enhancing robot
control. Here, we studied whether watching robots perform erroneous vs. correct
action elicits differential brain responses that can be decoded from single
trials of electroencephalographic (EEG) recordings, and whether brain activity
during human-robot interaction is modulated by the robot's visual similarity to
a human. To address these topics, we designed two experiments. In experiment I,
participants watched a robot arm pour liquid into a cup. The robot performed
the action either erroneously or correctly, i.e. it either spilled some liquid
or not. In experiment II, participants observed two different types of robots,
humanoid and non-humanoid, grabbing a ball. The robots either managed to grab
the ball or not. We recorded high-resolution EEG during the observation tasks
in both experiments to train a Filter Bank Common Spatial Pattern (FBCSP)
pipeline on the multivariate EEG signal and decode for the correctness of the
observed action, and for the type of the observed robot. Our findings show that
it was possible to decode both correctness and robot type significantly above
chance level for the majority of participants, although often only slightly.
Our findings suggest that non-invasive recordings of brain responses elicited
when observing robots indeed contain decodable information about the
correctness of the robot's action and the type of observed robot.
| Dominik Welke, Joos Behncke, Marina Hader, Robin Tibor Schirrmeister,
Andreas Sch\"onau, Boris E{\ss}mann, Oliver M\"uller, Wolfram Burgard, Tonio
Ball | 10.17185/duepublico/44533 | 1708.01465 | null | null |
A Latent Variable Model for Two-Dimensional Canonical Correlation
Analysis and its Variational Inference | cs.CV cs.LG stat.ML | Describing the dimension reduction (DR) techniques by means of probabilistic
models has recently been given special attention. Probabilistic models, in
addition to a better interpretability of the DR methods, provide a framework
for further extensions of such algorithms. One of the new approaches to
probabilistic DR methods is to preserve the internal structure of the data,
meaning that it is not necessary for the data first to be converted from matrix
or tensor format to vector format in the process of dimensionality reduction.
In this paper, a latent variable model for matrix-variate data for canonical
correlation analysis (CCA) is proposed. Since in general there is no analytical
maximum likelihood solution for this model, we present two approaches for
learning the parameters. The proposed methods are evaluated on synthetic data
in terms of convergence and quality of mappings. A real data set is also
employed to assess the proposed methods against several probabilistic and
non-probabilistic CCA-based approaches. The results confirm
the superiority of the proposed methods with respect to the competing
algorithms. Moreover, this model can be considered as a framework for further
extensions.
| Mehran Safayani and Saeid Momenzadeh | 10.1007/s00500-020-04906-8 | 1708.01519 | null | null |
Lifelong Learning with Dynamically Expandable Networks | cs.LG | We propose a novel deep network architecture for lifelong learning which we
refer to as Dynamically Expandable Network (DEN), that can dynamically decide
its network capacity as it trains on a sequence of tasks, to learn a compact
overlapping knowledge sharing structure among tasks. DEN is efficiently trained
in an online manner by performing selective retraining, dynamically expands
network capacity upon arrival of each task with only the necessary number of
units, and effectively prevents semantic drift by splitting/duplicating units
and timestamping them. We validate DEN on multiple public datasets under
lifelong learning scenarios, on which it not only significantly outperforms
existing lifelong learning methods for deep networks, but also achieves the
same level of performance as the batch counterparts with substantially fewer
parameters. Further, the network fine-tuned on all tasks achieved
significantly better performance than the batch models, which shows
that it can be used to estimate the optimal network structure even when all
tasks are available in the first place.
| Jaehong Yoon, Eunho Yang, Jeongtae Lee, Sung Ju Hwang | null | 1708.01547 | null | null |
Identification of Probabilities | cs.LG cs.AI | Within psychology, neuroscience and artificial intelligence, there has been
increasing interest in the proposal that the brain builds probabilistic models
of sensory and linguistic input: that is, to infer a probabilistic model from a
sample. The practical problems of such inference are substantial: the brain has
limited data and restricted computational resources. But there is a more
fundamental question: is the problem of inferring a probabilistic model from a
sample possible even in principle? We explore this question and find some
surprisingly positive and general results. First, for a broad class of
probability distributions characterised by computability restrictions, we
specify a learning algorithm that will almost surely identify a probability
distribution in the limit given a finite i.i.d. sample of sufficient but
unknown length. This is similarly shown to hold for sequences generated by a
broad class of Markov chains, subject to computability assumptions. The
technical tool is the strong law of large numbers. Second, for a large class of
dependent sequences, we specify an algorithm which identifies in the limit a
computable measure for which the sequence is typical, in the sense of
Martin-Lof (there may be more than one such measure). The technical tool is the
theory of Kolmogorov complexity. We analyse the associated predictions in both
cases. We also briefly consider special cases, including language learning, and
wider theoretical implications for psychology.
| Paul M.B. Vitanyi (CWI and University of Amsterdam) and Nick Chater
(Behavioural Science Group, Warwick Business School, University of Warwick,
Coventry, UK) | 10.1016/j.jmp.2006.10.002 | 1708.01611 | null | null |
3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks | cs.CV cs.AI cs.LG stat.ML | The success of various applications including robotics, digital content
creation, and visualization demands a structured and abstract representation of
the 3D world from limited sensor data. Inspired by the nature of human
perception of 3D shapes as a collection of simple parts, we explore such an
abstract shape representation based on primitives. Given a single depth image
of an object, we present 3D-PRNN, a generative recurrent neural network that
synthesizes multiple plausible shapes composed of a set of primitives. Our
generative model encodes symmetry characteristics of common man-made objects,
preserves long-range structural coherence, and describes objects of varying
complexity with a compact representation. We also propose a method based on
Gaussian Fields to generate a large scale dataset of primitive-based shape
representations to train our network. We evaluate our approach on a wide range
of examples and show that it outperforms nearest-neighbor based shape retrieval
methods and is on-par with voxel-based generative models while using a
significantly reduced parameter space.
| Chuhang Zou, Ersin Yumer, Jimei Yang, Duygu Ceylan, Derek Hoiem | null | 1708.01648 | null | null |
HTM-MAT: An online prediction software toolbox based on cortical machine
learning algorithm | cs.NE cs.LG | HTM-MAT is a MATLAB-based toolbox for implementing cortical learning
algorithms (CLA), including related cortical-like algorithms that possess
spatiotemporal properties. CLA is a suite of predictive machine learning
algorithms developed by Numenta Inc. and is based on the hierarchical temporal
memory (HTM). This paper presents an implementation of HTM-MAT with several
illustrative examples including several toy datasets and compared with two
sequence learning applications employing state-of-the-art algorithms - the
recurrentjs based on the Long Short-Term Memory (LSTM) algorithm and OS-ELM
which is based on an online sequential version of the Extreme Learning Machine.
The performance of HTM-MAT using two historical benchmark datasets and one real
world dataset is also compared with one of the existing sequence learning
applications, the OS-ELM. The results indicate that HTM-MAT predictions are
indeed competitive and can outperform OS-ELM in sequential prediction tasks.
| V.I. Anireh and EN Osegi | null | 1708.01659 | null | null |
An Effective Training Method For Deep Convolutional Neural Network | cs.LG cs.AI stat.ML | In this paper, we propose the nonlinearity generation method to speed up and
stabilize the training of deep convolutional neural networks. The proposed
method modifies a family of activation functions as nonlinearity generators
(NGs). NGs make the activation functions linear symmetric for their inputs to
lower model capacity, and automatically introduce nonlinearity to enhance the
capacity of the model during training. The proposed method can be considered an
unusual form of regularization: the model parameters are obtained by training a
relatively low-capacity model, that is relatively easy to optimize at the
beginning, with only a few iterations, and these parameters are reused for the
initialization of a higher-capacity model. We derive the upper and lower bounds
of variance of the weight variation, and show that the initial symmetric
structure of NGs helps stabilize training. We evaluate the proposed method on
different frameworks of convolutional neural networks over two object
recognition benchmark tasks (CIFAR-10 and CIFAR-100). Experimental results
showed that the proposed method allows us to (1) speed up the convergence of
training, (2) allow for less careful weight initialization, (3) improve or at
least maintain the performance of the model at negligible extra computational
cost, and (4) easily train a very deep model.
| Yang Jiang, Zeyang Dou, Qun Hao, Jie Cao, Kun Gao, Xi Chen | null | 1708.01666 | null | null |
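As one plausible instantiation of a nonlinearity generator (our reading of the idea, not necessarily the authors' exact parameterization): an activation that starts as the identity, so the model is initially linear and low-capacity, and is annealed toward ReLU as training progresses. The shape parameter `a` and the linear annealing schedule are assumptions of this sketch.

```python
import numpy as np

def ng_activation(x, a):
    """a = 1: identity (linear, symmetric); a = 0: ReLU (full nonlinearity)."""
    return np.maximum(x, a * x)

x = np.linspace(-2, 2, 5)
for step, total in [(0, 100), (50, 100), (100, 100)]:
    a = max(0.0, 1.0 - step / total)     # simple linear annealing schedule
    print(f"a={a:.1f}:", ng_activation(x, a))
```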
Training Deep AutoEncoders for Collaborative Filtering | stat.ML cs.LG | This paper proposes a novel model for the rating prediction task in
recommender systems which significantly outperforms previous state-of-the-art
models on a time-split Netflix data set. Our model is based on a deep autoencoder
with 6 layers and is trained end-to-end without any layer-wise pre-training. We
empirically demonstrate that: a) deep autoencoder models generalize much better
than the shallow ones, b) non-linear activation functions with negative parts
are crucial for training deep models, and c) heavy use of regularization
techniques such as dropout is necessary to prevent overfitting. We also propose
a new training algorithm based on iterative output re-feeding to overcome the
natural sparseness of collaborative filtering. The new algorithm significantly
speeds up training and improves model performance. Our code is available at
https://github.com/NVIDIA/DeepRecommender
| Oleksii Kuchaiev, Boris Ginsburg | null | 1708.01715 | null | null |
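A minimal sketch of the iterative output re-feeding step on a one-hidden-layer linear autoencoder: after a standard update on the masked loss over observed entries, the dense reconstruction $f(x)$ is treated as a new training example with itself as the target. The tiny dimensions, linear layers, and synthetic "ratings" are illustrative assumptions, not the paper's 6-layer model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, hidden, lr = 20, 5, 0.05
W1 = rng.normal(scale=0.1, size=(hidden, n_items))
W2 = rng.normal(scale=0.1, size=(n_items, hidden))

def forward(x):
    h = W1 @ x
    return W2 @ h, h

def sgd_step(x, target, mask):
    global W1, W2
    y, h = forward(x)
    err = (y - target) * mask              # masked MSE gradient
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer(W2.T @ err, x)

for step in range(200):
    x = rng.normal(size=n_items)                       # full "rating" vector
    mask = (rng.random(n_items) < 0.3).astype(float)   # only some entries observed
    x_obs = x * mask
    sgd_step(x_obs, x, mask)                           # standard masked update
    y_dense, _ = forward(x_obs)
    sgd_step(y_dense, y_dense, np.ones(n_items))       # re-feeding: dense update
```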
Inception Score, Label Smoothing, Gradient Vanishing and -log(D(x))
Alternative | cs.LG cs.AI cs.CV stat.ML | In this article, we mathematically study several GAN related topics,
including Inception score, label smoothing, gradient vanishing and the
-log(D(x)) alternative.
---
An advanced version is included in arXiv:1703.02000 "Activation Maximization
Generative Adversarial Nets".
Please refer to Section 6 of 1703.02000 for a detailed analysis of Inception
Score, and to its appendix for the discussions on Label Smoothing, Gradient
Vanishing and -log(D(x)) Alternative.
| Zhiming Zhou, Weinan Zhang, Jun Wang | null | 1708.01729 | null | null |
Boosting Variational Inference: an Optimization Perspective | cs.LG cs.AI stat.ML | Variational inference is a popular technique to approximate a possibly
intractable Bayesian posterior with a more tractable one. Recently, boosting
variational inference has been proposed as a new paradigm to approximate the
posterior by a mixture of densities by greedily adding components to the
mixture. However, as is the case with many other variational inference
algorithms, its theoretical properties have not been studied. In the present
work, we study the convergence properties of this approach from a modern
optimization viewpoint by establishing connections to the classic Frank-Wolfe
algorithm. Our analysis yields novel theoretical insights regarding the
sufficient conditions for convergence, explicit rates, and algorithmic
simplifications. Since a lot of focus in previous works for variational
inference has been on tractability, our work is especially important as a much
needed attempt to bridge the gap between probabilistic models and their
corresponding theoretical properties.
| Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar R\"atsch | null | 1708.01733 | null | null |
An aggregating strategy for shifting experts in discrete sequence
prediction | cs.LG | We study how we can adapt a predictor to a non-stationary environment with
advice from multiple experts. We study the problem under complete feedback,
when the best expert changes over time, from a decision-theoretic point of view.
The proposed algorithm is based on the popular exponential weighting method with
exponential discounting. We provide theoretical results bounding regret under
the exponential discounting setting. An upper bound on regret is derived for
the finite time horizon problem. Numerical verification on different real-life
datasets is provided to show the utility of the proposed algorithm.
| Vishnu Raj and Sheetal Kalyani | null | 1708.01744 | null | null |
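A minimal sketch of exponential weighting with exponential discounting for shifting experts: geometrically down-weighting past losses lets the aggregator re-concentrate on a new best expert after a change. The loss model, the switch point, and the $\eta$, $\alpha$ values are illustrative assumptions, not the paper's tuned constants.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, T = 3, 300
eta, alpha = 2.0, 0.9          # learning rate, discount factor
cum = np.zeros(n_experts)      # discounted cumulative losses

for t in range(T):
    w = np.exp(-eta * cum)
    p = w / w.sum()                        # aggregator's distribution over experts
    best = 0 if t < T // 2 else 2          # the best expert switches halfway through
    losses = np.where(np.arange(n_experts) == best, 0.1, 0.9)
    losses = losses + 0.05 * rng.random(n_experts)   # observation noise
    cum = alpha * cum + losses             # exponential discounting of the past
print("final weights:", np.round(w / w.sum(), 3))    # concentrated on expert 2
```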
e-QRAQ: A Multi-turn Reasoning Dataset and Simulator with Explanations | cs.LG cs.AI cs.CL | In this paper we present a new dataset and user simulator e-QRAQ (explainable
Query, Reason, and Answer Question) which tests an Agent's ability to read an
ambiguous text; ask questions until it can answer a challenge question; and
explain the reasoning behind its questions and answer. The User simulator
provides the Agent with a short, ambiguous story and a challenge question about
the story. The story is ambiguous because some of the entities have been
replaced by variables. At each turn the Agent may ask for the value of a
variable or try to answer the challenge question. In response the User
simulator provides a natural language explanation of why the Agent's query or
answer was useful in narrowing down the set of possible answers, or not. To
demonstrate one potential application of the e-QRAQ dataset, we train a new
neural architecture based on End-to-End Memory Networks to successfully
generate both predictions and partial explanations of its current understanding
of the problem. We observe a strong correlation between the quality of the
prediction and explanation.
| Clemens Rosenbaum, Tian Gao, Tim Klinger | null | 1708.01776 | null | null |
Efficient Contextual Bandits in Non-stationary Worlds | cs.LG stat.ML | Most contextual bandit algorithms minimize regret against the best fixed
policy, a questionable benchmark for non-stationary environments that are
ubiquitous in applications. In this work, we develop several efficient
contextual bandit algorithms for non-stationary environments by equipping
existing methods for i.i.d. problems with sophisticated statistical tests so as
to dynamically adapt to a change in distribution.
We analyze various standard notions of regret suited to non-stationary
environments for these algorithms, including interval regret, switching regret,
and dynamic regret. When competing with the best policy at each time, one of
our algorithms achieves regret $\mathcal{O}(\sqrt{ST})$ if there are $T$ rounds
with $S$ stationary periods, or more generally
$\mathcal{O}(\Delta^{1/3}T^{2/3})$ where $\Delta$ is some non-stationarity
measure. These results almost match the optimal guarantees achieved by an
inefficient baseline that is a variant of the classic Exp4 algorithm. The
dynamic regret result is also the first for efficient, fully adversarial
contextual bandits.
Furthermore, while the results above require tuning a parameter based on the
unknown quantity $S$ or $\Delta$, we also develop a parameter free algorithm
achieving regret $\min\{S^{1/4}T^{3/4}, \Delta^{1/5}T^{4/5}\}$. This improves
and generalizes the best existing result $\Delta^{0.18}T^{0.82}$ by Karnin and
Anava (2016) which only holds for the two-armed bandit problem.
| Haipeng Luo and Chen-Yu Wei and Alekh Agarwal and John Langford | null | 1708.01799 | null | null |
An Information-Theoretic Optimality Principle for Deep Reinforcement
Learning | cs.AI cs.LG stat.ML | We methodologically address the problem of Q-value overestimation in deep
reinforcement learning to handle high-dimensional state spaces efficiently. By
adapting concepts from information theory, we introduce an intrinsic penalty
signal encouraging reduced Q-value estimates. The resultant algorithm
encompasses a wide range of learning outcomes containing deep Q-networks as a
special case. Different learning outcomes can be demonstrated by tuning a
Lagrange multiplier accordingly. We furthermore propose a novel scheduling
scheme for this Lagrange multiplier to ensure efficient and robust learning. In
experiments on Atari, our algorithm outperforms other algorithms (e.g. deep and
double deep Q-networks) in terms of both game-play performance and sample
complexity. These results remain valid under the recently proposed dueling
architecture.
| Felix Leibfried, Jordi Grau-Moya and Haitham Bou-Ammar | null | 1708.01867 | null | null |
Probabilistic Generative Adversarial Networks | cs.LG stat.ML | We introduce the Probabilistic Generative Adversarial Network (PGAN), a new
GAN variant based on a new kind of objective function. The central idea is to
integrate a probabilistic model (a Gaussian Mixture Model, in our case) into
the GAN framework which supports a new kind of loss function (based on
likelihood rather than classification loss), and at the same time gives a
meaningful measure of the quality of the outputs generated by the network.
Experiments with MNIST show that the model learns to generate realistic images,
and at the same time computes likelihoods that are correlated with the quality
of the generated images. We show that PGAN is better able to cope with
instability problems that are usually observed in the GAN training procedure.
We investigate this from three aspects: the probability landscape of the
discriminator, gradients of the generator, and the perfect discriminator
problem.
| Hamid Eghbal-zadeh, Gerhard Widmer | null | 1708.01886 | null | null |
Universally consistent predictive distributions | cs.LG | This paper describes simple universally consistent procedures of probability
forecasting that satisfy a natural property of small-sample validity, under the
assumption that the observations are produced in an IID fashion.
| Vladimir Vovk | null | 1708.01902 | null | null |
Empathy in Bimatrix Games | cs.GT cs.LG | Although the definition of what empathetic preferences exactly are is still
evolving, there is a general consensus in the psychology, science and
engineering communities that the evolution toward players' behaviors in
interactive decision-making problems will be accompanied by the exploitation of
their empathy, sympathy, compassion, antipathy, spitefulness, selfishness,
altruism, and self-abnegating states in the payoffs. In this article, we study
one-shot bimatrix games from a psychological game theory viewpoint. A new
empathetic payoff model is calculated to fit empirical observations and both
pure and mixed equilibria are investigated. For a realized empathy structure,
the bimatrix game falls into one of four generic classes of games. A number of
interesting results are derived. A notable level of involvement can be
observed in the empathetic one-shot game compared to the non-empathetic one,
and this holds even for games with dominated strategies. Partial altruism can
help in breaking symmetry, in reducing payoff inequality, and in selecting
social welfare and more efficient outcomes. By contrast, partial spite and
self-abnegation may worsen payoff equity. Empathetic evolutionary game
dynamics are introduced to capture the resulting empathetic evolutionarily
stable strategies under a wide range of revision protocols, including
Brown-von Neumann-Nash, Smith, imitation, replicator, and hybrid dynamics.
Finally, mutual support and the Berge solution are investigated, and their
connection with empathetic preferences is established. We show that pure
altruism is logically inconsistent; only by balancing it with some partial
selfishness does it yield a consistent psychology.
| Brian Powers and Michalis Smyrnakis and Hamidou Tembine | null | 1708.01910 | null | null |
Training of Deep Neural Networks based on Distance Measures using
RMSProp | cs.LG cs.AI stat.ML | The vanishing gradient problem was a major obstacle for the success of deep
learning. In recent years it was gradually alleviated through multiple
different techniques. However, the problem was not really overcome in a
fundamental way, since it is inherent to neural networks with activation
functions based on dot products. In a series of papers, we are going to analyze
alternative neural network structures which are not based on dot products. In
this first paper, we revisit neural networks built up of layers based on
distance measures and Gaussian activation functions. These kinds of networks
were only sparsely used in the past since they are hard to train when using
plain stochastic gradient descent methods. We show that by using Root Mean
Square Propagation (RMSProp) it is possible to efficiently learn multi-layer
neural networks. Furthermore we show that when appropriately initialized these
kinds of neural networks suffer much less from the vanishing and exploding
gradient problem than traditional neural networks even for deep networks.
| Thomas Kurbiel and Shahrzad Khaleghian | null | 1708.01911 | null | null |
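A minimal PyTorch sketch of the kind of network this abstract describes:
layers built from distance measures with Gaussian activations, trained with
RMSProp. The specific parameterization (learned centers plus a per-unit
log-bandwidth) and all sizes are illustrative assumptions, not the paper's
exact construction.

```python
import torch
import torch.nn as nn

class DistanceLayer(nn.Module):
    """Units respond to the distance between the input and a learned center,
    squashed by a Gaussian: exp(-||x - c||^2 / (2 * sigma^2))."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        d2 = torch.cdist(x, self.centers).pow(2)   # (batch, out_features)
        return torch.exp(-d2 / (2 * self.log_sigma.exp().pow(2)))

model = nn.Sequential(DistanceLayer(784, 64), DistanceLayer(64, 10))
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)   # RMSProp, as studied

# one training step on dummy data (activations used directly as logits)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward(); opt.step(); opt.zero_grad()
```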
A Bootstrap Method for Error Estimation in Randomized Matrix
Multiplication | stat.ML cs.LG cs.NA | In recent years, randomized methods for numerical linear algebra have
received growing interest as a general approach to large-scale problems.
Typically, the essential ingredient of these methods is some form of randomized
dimension reduction, which accelerates computations, but also creates random
approximation error. In this way, the dimension reduction step encodes a
tradeoff between cost and accuracy. However, the exact numerical relationship
between cost and accuracy is typically unknown, and consequently, it may be
difficult for the user to precisely know (1) how accurate a given solution is,
or (2) how much computation is needed to achieve a given level of accuracy. In
the current paper, we study randomized matrix multiplication (sketching) as a
prototype setting for addressing these general problems. As a solution, we
develop a bootstrap method for \emph{directly estimating} the accuracy as a
function of the reduced dimension (as opposed to deriving worst-case bounds on
the accuracy in terms of the reduced dimension). From a computational
standpoint, the proposed method does not substantially increase the cost of
standard sketching methods, and this is made possible by an "extrapolation"
technique. In addition, we provide both theoretical and empirical results to
demonstrate the effectiveness of the proposed method.
| Miles E. Lopes and Shusen Wang and Michael W. Mahoney | null | 1708.01945 | null | null |
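The core idea — sketch the product by sampling column/row pairs, then
bootstrap the sampled pairs to directly estimate the sketch's own error —
admits a short NumPy sketch. Uniform sampling, the Frobenius error metric, and
the quantile choice are illustrative assumptions; the paper's extrapolation
technique is omitted here.

```python
import numpy as np

def sketched_matmul_with_error(A, B, s, n_boot=50, q=0.95, seed=0):
    """Approximate A @ B from s uniformly sampled column/row pairs, then
    bootstrap those pairs to estimate the approximation error directly."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    idx = rng.integers(0, n, size=s)
    est = (n / s) * A[:, idx] @ B[idx, :]          # the sketched product
    errs = []
    for _ in range(n_boot):
        b = rng.choice(idx, size=s, replace=True)  # resample sampled pairs
        est_b = (n / s) * A[:, b] @ B[b, :]
        errs.append(np.linalg.norm(est_b - est, 'fro'))
    return est, float(np.quantile(errs, q))        # estimate + error quantile
```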
Learning Theory of Distributed Regression with Bias Corrected
Regularization Kernel Network | cs.LG stat.ML | Distributed learning is an effective way to analyze big data. In distributed
regression, a typical approach is to divide the big data into multiple blocks,
apply a base regression algorithm on each of them, and then simply average the
output functions learnt from these blocks. Since the averaging process
decreases the variance but not the bias, bias correction is expected to
improve the
learning performance if the base regression algorithm is a biased one.
Regularization kernel network is an effective and widely used method for
nonlinear regression analysis. In this paper we will investigate a bias
corrected version of regularization kernel network. We derive the error bounds
when it is applied to a single data set and when it is applied as a base
algorithm in distributed regression. We show that, under certain appropriate
conditions, the optimal learning rates can be reached in both situations.
| Zhengchu Guo, Lei Shi and Qiang Wu | null | 1708.01960 | null | null |
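For concreteness, here is a NumPy sketch of divide-and-average distributed
regression with a kernel network base learner, using one classical
bias-correction device: refit the residuals with the same algorithm and add
the two fits. This residual-refit correction, the Gaussian kernel, and all
hyperparameters are our own assumptions and may well differ from the paper's
exact scheme.

```python
import numpy as np

def krr_fit(X, y, lam=0.1, gamma=1.0):
    """Regularization kernel network (kernel ridge regression) with a
    Gaussian kernel; returns a predictor function."""
    K = np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    return lambda Z: np.exp(
        -gamma * ((Z[:, None] - X[None, :]) ** 2).sum(-1)) @ alpha

def distributed_bias_corrected(blocks, lam=0.1, gamma=1.0):
    """Fit each data block, refit its residuals as a bias correction, then
    average the corrected predictors across blocks."""
    fitted = []
    for X, y in blocks:
        f = krr_fit(X, y, lam, gamma)
        g = krr_fit(X, y - f(X), lam, gamma)   # residual refit (correction)
        fitted.append((f, g))
    return lambda Z: np.mean([f(Z) + g(Z) for f, g in fitted], axis=0)
```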
Why Adaptively Collected Data Have Negative Bias and How to Correct for
It | stat.ML cs.LG | From scientific experiments to online A/B testing, the previously observed
data often affects how future experiments are performed, which in turn affects
which data will be collected. Such adaptivity introduces complex correlations
between the data and the collection procedure. In this paper, we prove that
when the data collection procedure satisfies natural conditions, then sample
means of the data have systematic \emph{negative} biases. As an example,
consider an adaptive clinical trial where additional data points are more
likely to be tested for treatments that show initial promise. Our surprising
result implies that the average observed treatment effects would underestimate
the true effects of each treatment. We quantitatively analyze the magnitude and
behavior of this negative bias in a variety of settings. We also propose a
novel debiasing algorithm based on selective inference techniques. In
experiments, our method can effectively reduce bias and estimation error.
| Xinkun Nie, Xiaoying Tian, Jonathan Taylor, James Zou | null | 1708.01977 | null | null |
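A toy simulation of the phenomenon (not the paper's debiasing algorithm): two
arms with identical true means, where whichever arm looks better so far
receives the next sample. Each arm's final sample mean comes out negatively
biased on average. All constants below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, horizon, mu = 2000, 50, 0.5
bias = []
for _ in range(n_trials):
    obs = [[rng.normal(mu)], [rng.normal(mu)]]      # one sample per arm
    for _ in range(horizon - 2):
        a = int(np.mean(obs[1]) > np.mean(obs[0]))  # sample the leader
        obs[a].append(rng.normal(mu))
    bias.append(np.mean(obs[0]) - mu)               # arm 0's mean error
print(np.mean(bias))   # negative: the adaptive rule starves unlucky arms
```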
Unconstrained Fashion Landmark Detection via Hierarchical Recurrent
Transformer Networks | cs.CV cs.LG | Fashion landmarks are functional key points defined on clothes, such as
corners of neckline, hemline, and cuff. They have been recently introduced as
an effective visual representation for fashion image understanding. However,
detecting fashion landmarks is challenging due to background clutters, human
poses, and scales. To remove the above variations, previous works usually
assume that bounding boxes of clothes are provided in training and test as
additional annotations, which are expensive to obtain and inapplicable in
practice. This work addresses unconstrained fashion landmark detection, where
clothing bounding boxes are not provided in both training and test. To this
end, we present a novel Deep LAndmark Network (DLAN), where bounding boxes and
landmarks are jointly estimated and trained iteratively in an end-to-end
manner. DLAN contains two dedicated modules, including a Selective Dilated
Convolution for handling scale discrepancies, and a Hierarchical Recurrent
Spatial Transformer for handling background clutters. To evaluate DLAN, we
present a large-scale fashion landmark dataset, namely Unconstrained Landmark
Database (ULD), consisting of 30K images. Statistics show that ULD is more
challenging than existing datasets in terms of image scales, background
clutters, and human poses. Extensive experiments demonstrate the effectiveness
of DLAN over the state-of-the-art methods. DLAN also exhibits excellent
generalization across different clothing categories and modalities, making it
extremely suitable for real-world fashion analysis.
| Sijie Yan, Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, Xiaoou Tang | null | 1708.02044 | null | null |
Nonconvex Sparse Logistic Regression with Weakly Convex Regularization | cs.LG stat.ML | In this work we propose to fit a sparse logistic regression model by a weakly
convex regularized nonconvex optimization problem. The idea is based on the
finding that a weakly convex function as an approximation of the $\ell_0$
pseudo norm is able to better induce sparsity than the commonly used $\ell_1$
norm. For a class of weakly convex sparsity inducing functions, we prove the
nonconvexity of the corresponding sparse logistic regression problem, and study
its local optimality conditions and the choice of the regularization parameter
to exclude trivial solutions. Despite the nonconvexity, a method based on
proximal gradient descent is used to solve the general weakly convex sparse
logistic regression, and its convergence behavior is studied theoretically.
Then the general framework is applied to a specific weakly convex function, and
a necessary and sufficient local optimality condition is provided. The solution
method is instantiated in this case as an iterative firm-shrinkage algorithm,
and its effectiveness is demonstrated in numerical experiments by both randomly
generated and real datasets.
| Xinyue Shen, Yuantao Gu | 10.1109/TSP.2018.2824289 | 1708.02059 | null | null |
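A NumPy sketch of the iterative firm-shrinkage instantiation mentioned above:
a gradient step on the logistic loss followed by firm thresholding. The firm
operator's parameterization below (which requires mu > lam), the step size,
and the stopping rule are common illustrative choices, not necessarily the
paper's.

```python
import numpy as np

def firm_shrink(z, lam, mu):
    """Firm thresholding (requires mu > lam): zeroes small entries,
    soft-thresholds intermediate ones, leaves large entries untouched."""
    a = np.abs(z)
    return np.where(a <= lam, 0.0,
           np.where(a <= mu, np.sign(z) * (a - lam) * mu / (mu - lam), z))

def sparse_logreg(X, y, lam=0.1, mu=0.5, step=0.1, iters=500):
    """Proximal gradient descent on logistic loss with a weakly convex
    sparsity penalty, realized as iterative firm shrinkage. y is in {0,1}."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y)      # logistic-loss gradient
        w = firm_shrink(w - step * grad, step * lam, mu)
    return w
```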
Measuring Catastrophic Forgetting in Neural Networks | cs.AI cs.CV cs.LG | Deep neural networks are used in many state-of-the-art systems for machine
perception. Once a network is trained to do a specific task, e.g., bird
classification, it cannot easily be trained to do new tasks, e.g.,
incrementally learning to recognize additional bird species or learning an
entirely different task such as flower recognition. When new tasks are added,
typical deep neural networks are prone to catastrophically forgetting previous
tasks. Networks that are capable of assimilating new information incrementally,
much like how humans form new memories over time, will be more efficient than
re-training the model from scratch each time a new task needs to be learned.
There have been multiple attempts to develop schemes that mitigate catastrophic
forgetting, but these methods have not been directly compared, the tests used
to evaluate them vary considerably, and these methods have only been evaluated
on small-scale problems (e.g., MNIST). In this paper, we introduce new metrics
and benchmarks for directly comparing five different mechanisms designed to
mitigate catastrophic forgetting in neural networks: regularization,
ensembling, rehearsal, dual-memory, and sparse-coding. Our experiments on
real-world images and sounds show that the mechanism(s) that are critical for
optimal performance vary based on the incremental training paradigm and type of
data being used, but they all demonstrate that the catastrophic forgetting
problem has yet to be solved.
| Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and
Christopher Kanan | null | 1708.02072 | null | null |
Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls | cs.LG cs.DS math.OC stat.ML | We propose a rank-$k$ variant of the classical Frank-Wolfe algorithm to solve
convex optimization over a trace-norm ball. Our algorithm replaces the top
singular-vector computation ($1$-SVD) in Frank-Wolfe with a top-$k$
singular-vector computation ($k$-SVD), which can be done by repeatedly applying
$1$-SVD $k$ times. Alternatively, our algorithm can be viewed as a rank-$k$
restricted version of projected gradient descent. We show that our algorithm
has a linear convergence rate when the objective function is smooth and
strongly convex, and the optimal solution has rank at most $k$. This improves
the convergence rate and the total time complexity of the Frank-Wolfe method
and its variants.
| Zeyuan Allen-Zhu, Elad Hazan, Wei Hu, Yuanzhi Li | null | 1708.02105 | null | null |
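Under the "rank-k restricted projected gradient descent" view this abstract
offers, one step admits a short NumPy sketch: take a gradient step, keep only
the top-k singular directions (the k-SVD), and pull the result back toward the
trace-norm ball. The uniform rescaling below is a crude stand-in for the exact
trace-norm projection, and the step size is arbitrary; this is not the paper's
precise update.

```python
import numpy as np

def rank_k_pgd_step(W, grad, k, radius, eta=0.1):
    """One rank-k restricted projected-gradient step over the trace-norm
    ball {W : ||W||_* <= radius}."""
    U, s, Vt = np.linalg.svd(W - eta * grad, full_matrices=False)
    s_k = s[:k].copy()                 # keep the top-k singular directions
    if s_k.sum() > radius:             # crude pull-back onto the ball
        s_k *= radius / s_k.sum()
    return (U[:, :k] * s_k) @ Vt[:k, :]
```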
Regularizing and Optimizing LSTM Language Models | cs.CL cs.LG cs.NE | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2.
| Stephen Merity, Nitish Shirish Keskar, Richard Socher | null | 1708.02182 | null | null |
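The weight-dropped idea — DropConnect applied to the hidden-to-hidden weight
matrix itself rather than to activations — can be sketched in a few lines of
PyTorch. Note that the paper samples the DropConnect mask once per forward
pass over a sequence, while for brevity this toy cell resamples it per step;
the initialization and sizes are also illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightDropLSTMCell(nn.Module):
    """LSTM cell with DropConnect on the recurrent (hidden-to-hidden)
    weight matrix: the dropout mask hits the weights, not the activations."""
    def __init__(self, input_size, hidden_size, weight_drop=0.5):
        super().__init__()
        self.w_ih = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(4 * hidden_size))
        self.p = weight_drop

    def forward(self, x, state):
        h, c = state
        w_hh = F.dropout(self.w_hh, self.p, self.training)  # DropConnect
        gates = x @ self.w_ih.T + h @ w_hh.T + self.bias
        i, f, g, o = gates.chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```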
PowerAI DDL | cs.DC cs.AI cs.LG | As deep neural networks become more complex and input datasets grow larger,
it can take days or even weeks to train a deep neural network to the desired
accuracy. Therefore, distributed Deep Learning at a massive scale is a critical
capability, since it offers the potential to reduce the training time from
weeks to hours. In this paper, we present a software-hardware co-optimized
distributed Deep Learning system that can achieve near-linear scaling up to
hundreds of GPUs. The core algorithm is a multi-ring communication pattern that
provides a good tradeoff between latency and bandwidth and adapts to a variety
of system configurations. The communication algorithm is implemented as a
library for easy use. This library has been integrated into Tensorflow, Caffe,
and Torch. We train Resnet-101 on Imagenet 22K with 64 IBM Power8 S822LC
servers (256 GPUs) in about 7 hours to a validation accuracy of 33.8%.
Microsoft's ADAM and Google's DistBelief results did not reach 30%
validation accuracy for Imagenet 22K. Compared to Facebook AI Research's recent
paper on 256 GPU training, we use a different communication algorithm, and our
combined software and hardware system offers better communication overhead for
Resnet-50. A PowerAI DDL enabled version of Torch completed 90 epochs of
training on Resnet 50 for 1K classes in 50 minutes using 64 IBM Power8 S822LC
servers (256 GPUs).
| Minsik Cho, Ulrich Finkler, Sameer Kumar, David Kung, Vaibhav Saxena,
Dheeraj Sreedhar | null | 1708.02188 | null | null |
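The core communication pattern is a ring-based all-reduce; below is a toy
single-process NumPy simulation of one ring (a reduce-scatter phase followed
by an all-gather phase). The actual DDL library runs a multi-ring variant over
real interconnects to trade latency against bandwidth, and nothing in this
sketch reflects IBM's implementation details.

```python
import numpy as np

def ring_allreduce(node_chunks):
    """Simulated single-ring all-reduce over n nodes: each node holds its
    gradient split into n chunks. After n-1 reduce-scatter steps node i holds
    the fully reduced chunk (i+1) mod n; n-1 all-gather steps then circulate
    the reduced chunks so every node ends with the element-wise sum."""
    n = len(node_chunks)
    data = [[np.array(c, dtype=float) for c in node] for node in node_chunks]
    for step in range(n - 1):                       # reduce-scatter phase
        for src in range(n):
            c = (src - step) % n                    # chunk src sends this step
            data[(src + 1) % n][c] = data[(src + 1) % n][c] + data[src][c]
    for step in range(n - 1):                       # all-gather phase
        for src in range(n):
            c = (src + 1 - step) % n                # reduced chunk to forward
            data[(src + 1) % n][c] = data[src][c]
    return data

# two nodes, vectors split into two chunks each; every chunk sums to 3
out = ring_allreduce([[np.ones(4), np.ones(4)],
                      [2 * np.ones(4), 2 * np.ones(4)]])
```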
Intrinsically Motivated Goal Exploration Processes with Automatic
Curriculum Learning | cs.AI cs.LG | Intrinsically motivated spontaneous exploration is a key enabler of
autonomous developmental learning in human children. It enables the discovery
of skill repertoires through autotelic learning, i.e. the self-generation,
self-selection, self-ordering and self-experimentation of learning goals. We
present an algorithmic approach called Intrinsically Motivated Goal Exploration
Processes (IMGEP) to enable similar properties of autonomous learning in
machines. The IMGEP architecture relies on several principles: 1)
self-generation of goals, generalized as parameterized fitness functions; 2)
selection of goals based on intrinsic rewards; 3) exploration with incremental
goal-parameterized policy search and exploitation with a batch learning
algorithm; 4) systematic reuse of information acquired when targeting a goal
for improving towards other goals. We present a particularly efficient form of
IMGEP, called AMB, that uses a population-based policy and an object-centered
spatio-temporal modularity. We provide several implementations of this
architecture and demonstrate their ability to automatically generate a learning
curriculum within several experimental setups. One of these experiments
includes a real humanoid robot exploring multiple spaces of goals with several
hundred continuous dimensions and with distractors. While no particular target
goal is provided to these autotelic agents, this curriculum allows the
discovery of diverse skills that act as stepping stones for learning more
complex skills, e.g. nested tool use.
| S\'ebastien Forestier, R\'emy Portelas, Yoan Mollard, Pierre-Yves
Oudeyer | null | 1708.02190 | null | null |
Image Quality Assessment Techniques Show Improved Training and
Evaluation of Autoencoder Generative Adversarial Networks | cs.CV cs.LG | We propose a training and evaluation approach for autoencoder Generative
Adversarial Networks (GANs), specifically the Boundary Equilibrium Generative
Adversarial Network (BEGAN), based on methods from the image quality assessment
literature. Our approach explores a multidimensional evaluation criterion that
utilizes three distance functions: an $l_1$ score, the Gradient Magnitude
Similarity Mean (GMSM) score, and a chrominance score. We show that each of the
different distance functions captures a slightly different set of properties in
image space and, consequently, requires its own evaluation criterion to
properly assess whether the relevant property has been adequately learned. We
show that models using the new distance functions are able to produce better
images than the original BEGAN model in predicted ways.
| Michael O. Vertolli and Jim Davies | null | 1708.02237 | null | null |
Parallelizing Over Artificial Neural Network Training Runs with
Multigrid | cs.NA cs.LG | Artificial neural networks are a popular and effective machine learning
technique. Great progress has been made parallelizing the expensive training
phase of an individual network, leading to highly specialized pieces of
hardware, many based on GPU-type architectures, and more concurrent algorithms
such as synthetic gradients. However, the training phase continues to be a
bottleneck, where the training data must be processed serially over thousands
of individual training runs. This work considers a multigrid reduction in time
(MGRIT) algorithm that is able to parallelize over the thousands of training
runs and converge to the exact same solution as traditional training would
provide. MGRIT was originally developed to provide parallelism for time
evolution problems that serially step through a finite number of time-steps.
This work recasts the training of a neural network similarly, treating neural
network training as an evolution equation that evolves the network weights from
one step to the next. Thus, this work concerns distributed computing approaches
for neural networks, but is distinct from other approaches which seek to
parallelize only over individual training runs. The work concludes with
supporting numerical results for two model problems.
| Jacob B. Schroder | null | 1708.02276 | null | null |
Jointly Attentive Spatial-Temporal Pooling Networks for Video-based
Person Re-Identification (person re-id) is a crucial task due to its
applications in visual surveillance and human-computer interaction. In this
work, we present
a novel joint Spatial and Temporal Attention Pooling Network (ASTPN) for
video-based person re-identification, which enables the feature extractor to be
aware of the current input video sequences, in a way that interdependency from
the matching items can directly influence the computation of each other's
representation. Specifically, the spatial pooling layer is able to select
regions from each frame, while the attention temporal pooling performed can
select informative frames over the sequence, both pooling guided by the
information from distance matching. Experiments are conducted on the
iLIDS-VID, PRID-2011 and MARS datasets, and the results demonstrate that this
approach outperforms existing state-of-the-art methods. We also analyze how
the joint
pooling in both dimensions can boost the person re-id performance more
effectively than using either of them separately.
| Shuangjie Xu, Yu Cheng, Kang Gu, Yang Yang, Shiyu Chang, Pan Zhou | null | 1708.02286 | null | null |
Reinforced Video Captioning with Entailment Rewards | cs.CL cs.AI cs.CV cs.LG | Sequence-to-sequence models have shown promising improvements on the temporal
task of video captioning, but they optimize word-level cross-entropy loss
during training. First, using policy gradient and mixed-loss methods for
reinforcement learning, we directly optimize sentence-level task-based metrics
(as rewards), achieving significant improvements over the baseline, based on
both automatic metrics and human evaluation on multiple datasets. Next, we
propose a novel entailment-enhanced reward (CIDEnt) that corrects
phrase-matching based metrics (such as CIDEr) to only allow for
logically-implied partial matches and avoid contradictions, achieving further
significant improvements over the CIDEr-reward model. Overall, our
CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.
| Ramakanth Pasunuru, Mohit Bansal | null | 1708.02300 | null | null |
Shortcut-Stacked Sentence Encoders for Multi-Domain Inference | cs.CL cs.AI cs.LG | We present a simple sequential sentence encoder for multi-domain natural
language inference. Our encoder is based on stacked bidirectional LSTM-RNNs
with shortcut connections and fine-tuning of word embeddings. The overall
supervised model uses the above encoder to encode two input sentences into two
vectors, and then uses a classifier over the vector combination to label the
relationship between these two sentences as that of entailment, contradiction,
or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements
over existing encoders on matched and mismatched multi-domain natural language
inference (top non-ensemble single-model result in the EMNLP RepEval 2017
Shared Task (Nangia et al., 2017)). Moreover, they achieve the new
state-of-the-art encoding result on the original SNLI dataset (Bowman et al.,
2015).
| Yixin Nie, Mohit Bansal | null | 1708.02312 | null | null |
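A minimal PyTorch sketch of the encoder shape this abstract describes: stacked
bidirectional LSTMs where each layer's input is the word embeddings
concatenated with all previous layers' outputs (the shortcut connections),
followed by max-pooling over time. The hidden size and layer count are
illustrative assumptions; the actual model also fine-tunes the embeddings and
grows the hidden size per layer.

```python
import torch
import torch.nn as nn

class ShortcutStackedEncoder(nn.Module):
    """Stacked BiLSTM sentence encoder with shortcut connections and
    row-max pooling over time."""
    def __init__(self, emb_dim=300, hidden=512, layers=3):
        super().__init__()
        self.lstms = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(layers):
            self.lstms.append(nn.LSTM(in_dim, hidden, bidirectional=True,
                                      batch_first=True))
            in_dim += 2 * hidden               # shortcut: inputs accumulate

    def forward(self, emb):                    # emb: (batch, seq, emb_dim)
        inp = emb
        for lstm in self.lstms:
            out, _ = lstm(inp)
            inp = torch.cat([inp, out], dim=-1)   # append shortcut features
        return out.max(dim=1).values           # sentence vector (max pool)
```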
GPLAC: Generalizing Vision-Based Robotic Skills using Weakly Labeled
Images | cs.LG cs.CV cs.RO | We tackle the problem of learning robotic sensorimotor control policies that
can generalize to visually diverse and unseen environments. Achieving broad
generalization typically requires large datasets, which are difficult to obtain
for task-specific interactive processes such as reinforcement learning or
learning from demonstration. However, much of the visual diversity in the world
can be captured through passively collected datasets of images or videos. In
our method, which we refer to as GPLAC (Generalized Policy Learning with
Attentional Classifier), we use both interaction data and weakly labeled image
data to augment the generalization capacity of sensorimotor policies. Our
method combines multitask learning on action selection and an auxiliary binary
classification objective, together with a convolutional neural network
architecture that uses an attentional mechanism to avoid distractors. We show
that pairing interaction data from just a single environment with a diverse
dataset of weakly labeled data results in greatly improved generalization to
unseen environments, and show that this generalization depends on both the
auxiliary objective and the attentional architecture that we propose. We
demonstrate our results in both simulation and on a real robotic manipulator,
and demonstrate substantial improvement over standard convolutional
architectures and domain adaptation methods.
| Avi Singh, Larry Yang, Sergey Levine | null | 1708.02313 | null | null |
EnLLVM: Ensemble Based Nonlinear Bayesian Filtering Using Linear Latent
Variable Models | stat.CO cs.LG | Real-time nonlinear Bayesian filtering algorithms are overwhelmed by data
volume, velocity and increasing complexity of computational models. In this
paper, we propose a novel ensemble based nonlinear Bayesian filtering approach
which only requires a small number of simulations and can be applied to
high-dimensional systems in the presence of intractable likelihood functions.
The proposed approach uses linear latent projections to estimate the joint
probability distribution between states, parameters, and observables using a
mixture of Gaussian components generated by the reconstruction error for each
ensemble member. Since it leverages the computational machinery behind linear
latent variable models, it can achieve fast implementations without the need to
compute high-dimensional sample covariance matrices. The performance of the
proposed approach is compared with the performance of ensemble Kalman filter on
a high-dimensional Lorenz nonlinear dynamical system.
| Xiao Lin, Gabriel Terejanu | null | 1708.02340 | null | null |
Learning how to Active Learn: A Deep Reinforcement Learning Approach | cs.CL cs.AI cs.LG | Active learning aims to select a small subset of data for annotation such
that a classifier learned on the data is highly accurate. This is usually done
using heuristic selection methods; however, the effectiveness of such methods
is limited and, moreover, the performance of heuristics varies between
datasets. To address these shortcomings, we introduce a novel formulation by
reframing active learning as a reinforcement learning problem and explicitly
learning a
data selection policy, where the policy takes the role of the active learning
heuristic. Importantly, our method allows the selection policy learned using
simulation on one language to be transferred to other languages. We demonstrate
our method using cross-lingual named entity recognition, observing uniform
improvements over traditional active learning.
| Meng Fang, Yuan Li and Trevor Cohn | null | 1708.02383 | null | null |
Robust Conditional Probabilities | cs.LG | Conditional probabilities are a core concept in machine learning. For
example, optimal prediction of a label $Y$ given an input $X$ corresponds to
maximizing the conditional probability of $Y$ given $X$. A common approach to
inference tasks is learning a model of conditional probabilities. However,
these models are often based on strong assumptions (e.g., log-linear models),
and hence their estimate of conditional probabilities is not robust and is
highly dependent on the validity of their assumptions.
Here we propose a framework for reasoning about conditional probabilities
without assuming anything about the underlying distributions, except knowledge
of their second order marginals, which can be estimated from data. We show how
this setting leads to guaranteed bounds on conditional probabilities, which can
be calculated efficiently in a variety of settings, including
structured-prediction. Finally, we apply them to semi-supervised deep learning,
obtaining results competitive with variational autoencoders.
| Yoav Wald, Amir Globerson | null | 1708.02406 | null | null |
Fast Low-Rank Bayesian Matrix Completion with Hierarchical Gaussian
Prior Models | cs.LG stat.ML | The problem of low rank matrix completion is considered in this paper. To
exploit the underlying low-rank structure of the data matrix, we propose a
hierarchical Gaussian prior model, where columns of the low-rank matrix are
assumed to follow a Gaussian distribution with zero mean and a common precision
matrix, and a Wishart distribution is specified as a hyperprior over the
precision matrix. We show that such a hierarchical Gaussian prior has the
potential to encourage a low-rank solution. Based on the proposed hierarchical
prior model, a variational Bayesian method is developed for matrix completion,
where the generalized approximate message passing (GAMP) technique is embedded
into the variational Bayesian inference in order to circumvent cumbersome
matrix inverse operations. Simulation results show that our proposed method
demonstrates superiority over existing state-of-the-art matrix completion
methods.
| Linxiao Yang, Jun Fang, Huiping Duan, Hongbin Li and Bing Zeng | 10.1109/TSP.2018.2816575 | 1708.02455 | null | null |
Multiscale Strategies for Computing Optimal Transport | cs.LG | This paper presents a multiscale approach to efficiently compute approximate
optimal transport plans between point sets. It is particularly well-suited for
point sets that are in high-dimensions, but are close to being intrinsically
low-dimensional. The approach is based on an adaptive multiscale decomposition
of the point sets. The multiscale decomposition yields a sequence of optimal
transport problems, that are solved in a top-to-bottom fashion from the
coarsest to the finest scale. We provide numerical evidence that this
multiscale approach scales approximately linearly, in time and memory, in the
number of nodes, instead of quadratically or worse for a direct solution.
Empirically, the multiscale approach results in less than one percent relative
error in the objective function. Furthermore, the multiscale plans constructed
are of interest by themselves as they may be used to introduce novel features
and notions of distances between point sets. An analysis of sets of brain MRI
based on optimal transport distances illustrates the effectiveness of the
proposed method on a real world data set. The application demonstrates that
multiscale optimal transport distances have the potential to improve on
state-of-the-art metrics currently used in computational anatomy.
| Samuel Gerber and Mauro Maggioni | null | 1708.02469 | null | null |
Learning non-parametric Markov networks with mutual information | cs.LG cs.IT math.IT stat.ML | We propose a method for learning Markov network structures for continuous
data without invoking any assumptions about the distribution of the variables.
The method makes use of previous work on a non-parametric estimator for mutual
information which is used to create a non-parametric test for multivariate
conditional independence. This independence test is then combined with an
efficient constraint-based algorithm for learning the graph structure. The
performance of the method is evaluated on several synthetic data sets and it is
shown to learn considerably more accurate structures than competing methods
when the dependencies between the variables involve non-linearities.
| Janne Lepp\"a-aho, Santeri R\"ais\"anen, Xiao Yang, Teemu Roos | null | 1708.02497 | null | null |
Parametric Adversarial Divergences are Good Losses for Generative
Modeling | cs.LG stat.ML | Parametric adversarial divergences, which are a generalization of the losses
used to train generative adversarial networks (GANs), have often been described
as being approximations of their nonparametric counterparts, such as the
Jensen-Shannon divergence, which can be derived under the so-called optimal
discriminator assumption. In this position paper, we argue that despite being
"non-optimal", parametric divergences have distinct properties from their
nonparametric counterparts which can make them more suitable for learning
high-dimensional distributions. A key property is that parametric divergences
are only sensitive to certain aspects/moments of the distribution, which depend
on the architecture of the discriminator and the loss it was trained with. In
contrast, nonparametric divergences such as the Kullback-Leibler divergence are
sensitive to moments ignored by the discriminator, but they do not necessarily
correlate with sample quality (Theis et al., 2016). Similarly, we show that
mutual information can lead to unintuitive interpretations, and explore more
intuitive alternatives based on parametric divergences. We conclude that
parametric divergences are a flexible framework for defining statistical
quantities relevant to a specific modeling task.
| Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal
Vincent, Simon Lacoste-Julien | null | 1708.02511 | null | null |
Stochastic Optimization with Bandit Sampling | cs.LG cs.AI math.OC stat.ML | Many stochastic optimization algorithms work by estimating the gradient of
the cost function on the fly by sampling datapoints uniformly at random from a
training set. However, the estimator might have a large variance, which
inadvertently slows down the convergence rate of the algorithms. One way to
reduce this variance is to sample the datapoints from a carefully selected
non-uniform distribution. In this work, we propose a novel non-uniform sampling
approach that uses the multi-armed bandit framework. Theoretically, we show
that our algorithm asymptotically approximates the optimal variance within a
factor of 3. Empirically, we show that using this datapoint-selection technique
results in a significant reduction in the convergence time and variance of
several stochastic optimization algorithms such as SGD, SVRG and SAGA. This
approach for sampling datapoints is general, and can be used in conjunction
with any algorithm that uses an unbiased gradient estimation -- we expect it to
have broad applicability beyond the specific examples explored in this work.
| Farnood Salehi, L. Elisa Celis and Patrick Thiran | null | 1708.02544 | null | null |
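A deliberately simplified NumPy caricature of the idea for least-squares SGD:
datapoints are drawn from a non-uniform distribution tracking running
gradient-norm estimates, mixed with uniform, and the gradient is
importance-weighted so the update stays unbiased. The exponential-moving-
average score update is a stand-in for the paper's actual multi-armed bandit
machinery and carries none of its guarantees.

```python
import numpy as np

def bandit_sampled_sgd(X, y, steps=1000, lr=0.1, mix=0.5, seed=0):
    """SGD for least squares with non-uniform datapoint sampling."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    score = np.ones(n)                        # running gradient-norm estimates
    for _ in range(steps):
        p = mix / n + (1 - mix) * score / score.sum()
        i = rng.choice(n, p=p)
        g = (X[i] @ w - y[i]) * X[i]          # per-example gradient
        w -= lr * g / (n * p[i])              # importance weight => unbiased
        score[i] = 0.9 * score[i] + 0.1 * np.linalg.norm(g)
    return w
```

Unbiasedness follows since E[g_i / (n p_i)] = (1/n) * sum_i g_i, whatever the
sampling distribution p, so only the variance is affected by the choice of p.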
Multi-Generator Generative Adversarial Nets | cs.LG cs.AI stat.ML | We propose a new approach to train the Generative Adversarial Nets (GANs)
with a mixture of generators to overcome the mode collapse problem. The main
intuition is to employ multiple generators, instead of using a single one as in
the original GAN. The idea is simple, yet proven to be extremely effective at
covering diverse data modes, easily overcoming the mode collapse and delivering
state-of-the-art results. A minimax formulation is established among a
classifier, a discriminator, and a set of generators, in a similar spirit to
GAN. Generators create samples that are intended to come from the same
distribution as the training data, whilst the discriminator determines whether
samples are true data or generated by generators, and the classifier specifies
which generator a sample comes from. The distinguishing feature is that
internal samples are created from multiple generators, and then one of them
will be randomly selected as the final output, similar to the mechanism of a
probabilistic mixture model. We term our method Mixture GAN (MGAN). We develop
theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon
divergence (JSD) between the mixture of generators' distributions and the
empirical data distribution is minimal, whilst the JSD among generators'
distributions is maximal, hence effectively avoiding the mode collapse. By
utilizing parameter sharing, our proposed model adds minimal computational cost
to the standard GAN, and thus can also efficiently scale to large-scale
datasets. We conduct extensive experiments on synthetic 2D data and natural
image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior
performance of our MGAN in achieving state-of-the-art Inception scores over
latest baselines, generating diverse and appealing recognizable objects at
different resolutions, and specializing in capturing different types of objects
by generators.
| Quan Hoang, Tu Dinh Nguyen, Trung Le and Dinh Phung | null | 1708.02556 | null | null |
Belief Propagation, Bethe Approximation and Polynomials | cs.LG cs.DS cs.IT math.IT stat.ML | Factor graphs are important models for succinctly representing probability
distributions in machine learning, coding theory, and statistical physics.
Several computational problems, such as computing marginals and partition
functions, arise naturally when working with factor graphs. Belief propagation
is a widely deployed iterative method for solving these problems. However,
despite its significant empirical success, not much is known about the
correctness and efficiency of belief propagation.
Bethe approximation is an optimization-based framework for approximating
partition functions. While it is known that the stationary points of the Bethe
approximation coincide with the fixed points of belief propagation, in general,
the relation between the Bethe approximation and the partition function is not
well understood. It has been observed that for a few classes of factor graphs,
the Bethe approximation always gives a lower bound to the partition function,
which distinguishes them from the general case, where neither a lower bound,
nor an upper bound holds universally. This has been rigorously proved for
permanents and for attractive graphical models.
Here we consider bipartite normal factor graphs and show that if the local
constraints satisfy a certain analytic property, the Bethe approximation is a
lower bound to the partition function. We arrive at this result by viewing
factor graphs through the lens of polynomials. In this process, we reformulate
the Bethe approximation as a polynomial optimization problem. Our sufficient
condition for the lower bound property to hold is inspired by recent
developments in the theory of real stable polynomials. We believe that this way
of viewing factor graphs and its connection to real stability might lead to a
better understanding of belief propagation and factor graphs in general.
| Damian Straszak and Nisheeth K. Vishnoi | null | 1708.02581 | null | null |
Cascade Adversarial Machine Learning Regularized with a Unified
Embedding | stat.ML cs.LG | Injecting adversarial examples during training, known as adversarial
training, can improve robustness against one-step attacks, but not for unknown
iterative attacks. To address this challenge, we first show iteratively
generated adversarial images easily transfer between networks trained with the
same strategy. Inspired by this observation, we propose cascade adversarial
training, which transfers the knowledge of the end results of adversarial
training. We train a network from scratch by injecting iteratively generated
adversarial images crafted from already defended networks in addition to
one-step adversarial images from the network being trained. We also propose to
utilize embedding space for both classification and low-level (pixel-level)
similarity learning to ignore unknown pixel level perturbation. During
training, we inject adversarial images without replacing their corresponding
clean images and penalize the distance between the two embeddings (clean and
adversarial). Experimental results show that cascade adversarial training
together with our proposed low-level similarity learning efficiently enhances
the robustness against iterative attacks, but at the expense of decreased
robustness against one-step attacks. We show that combining those two
techniques can also improve robustness under the worst case black box attack
scenario.
| Taesik Na, Jong Hwan Ko, and Saibal Mukhopadhyay | null | 1708.02582 | null | null |
Neural Network Dynamics for Model-Based Deep Reinforcement Learning with
Model-Free Fine-Tuning | cs.LG cs.AI cs.RO | Model-free deep reinforcement learning algorithms have been shown to be
capable of learning a wide range of robotic skills, but typically require a
very large number of samples to achieve good performance. Model-based
algorithms, in principle, can provide for much more efficient learning, but
have proven difficult to extend to expressive, high-capacity models such as
deep neural networks. In this work, we demonstrate that medium-sized neural
network models can in fact be combined with model predictive control (MPC) to
achieve excellent sample complexity in a model-based reinforcement learning
algorithm, producing stable and plausible gaits to accomplish various complex
locomotion tasks. We also propose using deep neural network dynamics models to
initialize a model-free learner, in order to combine the sample efficiency of
model-based approaches with the high task-specific performance of model-free
methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure
model-based approach trained on just random action data can follow arbitrary
trajectories with excellent sample efficiency, and that our hybrid algorithm
can accelerate model-free learning on high-speed benchmark tasks, achieving
sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents.
Videos can be found at https://sites.google.com/view/mbmf
| Anusha Nagabandi, Gregory Kahn, Ronald S. Fearing, Sergey Levine | null | 1708.02596 | null | null |
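The MPC component pairs naturally with a random-shooting planner, sketched
below: sample candidate action sequences, roll each through the learned
dynamics model, and execute the first action of the best sequence. Here
`dyn_model` and `reward_fn` are assumed batched callables; the paper's exact
planner, action bounds, and horizon may differ.

```python
import numpy as np

def mpc_action(dyn_model, reward_fn, state, horizon=10, n_cand=1000,
               act_dim=2, rng=None):
    """Random-shooting MPC with a learned dynamics model s' = f_theta(s, a)."""
    rng = rng or np.random.default_rng()
    actions = rng.uniform(-1, 1, size=(n_cand, horizon, act_dim))
    states = np.repeat(state[None, :], n_cand, axis=0)
    returns = np.zeros(n_cand)
    for t in range(horizon):
        nxt = dyn_model(states, actions[:, t])        # batched model rollout
        returns += reward_fn(states, actions[:, t], nxt)
        states = nxt
    return actions[np.argmax(returns), 0]   # first action of the best sequence
```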
Real Time Analytics: Algorithms and Systems | cs.DB cs.LG | Velocity is one of the 4 Vs commonly used to characterize Big Data. In this
regard, Forrester remarked the following in Q3 2014: "The high velocity,
white-water flow of data from innumerable real-time data sources such as market
data, Internet of Things, mobile, sensors, click-stream, and even transactions
remain largely unnavigated by most firms. The opportunity to leverage streaming
analytics has never been greater." Example use cases of streaming analytics
include, but are not limited to: (a) visualization of business metrics in real
time
(b) facilitating highly personalized experiences (c) expediting response during
emergencies. Streaming analytics is extensively used in a wide variety of
domains such as healthcare, e-commerce, financial services, telecommunications,
energy and utilities, manufacturing, government and transportation.
In this tutorial, we shall present an in-depth overview of streaming
analytics - applications, algorithms and platforms - landscape. We shall walk
through how the field has evolved over the last decade and then discuss the
current challenges - the impact of the other three Vs, viz., Volume, Variety
and Veracity, on Big Data streaming analytics. The tutorial is intended for
both researchers and practitioners in industry. We shall also present the
state of affairs of streaming analytics at Twitter.
| Arun Kejariwal, Sanjeev Kulkarni and Karthik Ramasamy | null | 1708.02621 | null | null |
Protecting Genomic Privacy by a Sequence-Similarity Based Obfuscation
Method | cs.CR cs.LG | In the post-genomic era, large-scale personal DNA sequences are produced and
collected for genetic medical diagnoses and new drug discovery, which, however,
simultaneously poses serious challenges to the protection of personal genomic
privacy. Existing genomic privacy-protection methods are either time-consuming
or have low accuracy. To tackle these problems, this paper proposes a sequence
similarity-based obfuscation method, namely IterMegaBLAST, for fast and
reliable protection of personal genomic privacy. Specifically, given a randomly
selected sequence from a dataset of DNA sequences, we first use MegaBLAST to
find its most similar sequence from the dataset. These two aligned sequences
form a cluster, for which an obfuscated sequence is generated via a DNA
generalization lattice scheme. These procedures are performed iteratively until
all of the sequences in the dataset are clustered and their obfuscated
sequences are generated. Experimental results on two benchmark datasets
demonstrate that under the same degree of anonymity, IterMegaBLAST
significantly outperforms existing state-of-the-art approaches in terms of both
utility accuracy and time complexity.
| Shibiao Wan, Man-Wai Mak and Sun-Yuan Kung | null | 1708.02629 | null | null |
Anomaly Detection in Multivariate Non-stationary Time Series for
Automatic DBMS Diagnosis | stat.ML cs.LG stat.AP | Anomaly detection in database management systems (DBMSs) is difficult because
of the increasing number of statistics (stat) and event metrics in big data systems.
In this paper, I propose an automatic DBMS diagnosis system that detects
anomaly periods with abnormal DB stat metrics and finds causal events in the
periods. Reconstruction error from a deep autoencoder and a statistical
process control approach are applied to detect time periods with anomalies.
Related events are found using time-series similarity measures between events
and abnormal stat metrics. After training the deep autoencoder with DBMS
metric data, the efficacy of anomaly detection is evaluated on other DBMSs
containing anomalies. Experimental results show the effectiveness of the
proposed model, especially the batch temporal normalization layer. The
proposed model is used to publish automatic DBMS diagnosis reports that inform
DBMS configuration and SQL tuning.
| Doyup Lee | 10.1109/ICMLA.2017.0-126 | 1708.02635 | null | null |
TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level
Machine Learning Frameworks | cs.DC cs.LG | We present a framework for specifying, training, evaluating, and deploying
machine learning models. Our focus is on simplifying cutting edge machine
learning for practitioners in order to bring such technologies into production.
Recognizing the fast evolution of the field of deep learning, we make no
attempt to capture the design space of all possible model architectures in a
domain- specific language (DSL) or similar configuration language. We allow
users to write code to define their models, but provide abstractions that guide
developers to write models in ways conducive to productionization. We also
provide a unifying Estimator interface, making it possible to write downstream
infrastructure (e.g. distributed training, hyperparameter tuning) independent
of the model implementation. We balance the competing demands for flexibility
and simplicity by offering APIs at different levels of abstraction, making
common model architectures available out of the box, while providing a library
of utilities designed to speed up experimentation with model architectures. To
make out of the box models flexible and usable across a wide range of problems,
these canned Estimators are parameterized not only over traditional
hyperparameters, but also using feature columns, a declarative specification
describing how to interpret input data. We discuss our experience in using this
framework in research and production environments, and show the impact on
code health, maintainability, and development speed.
| Heng-Tze Cheng, Zakaria Haque, Lichan Hong, Mustafa Ispir, Clemens
Mewald, Illia Polosukhin, Georgios Roumpos, D Sculley, Jamie Smith, David
Soergel, Yuan Tang, Philipp Tucker, Martin Wicke, Cassandra Xia, Jianwei Xie | 10.1145/3097983.3098171 | 1708.02637 | null | null |
Extractor-Based Time-Space Lower Bounds for Learning | cs.LG cs.CC | A matrix $M: A \times X \rightarrow \{-1,1\}$ corresponds to the following
learning problem: An unknown element $x \in X$ is chosen uniformly at random. A
learner tries to learn $x$ from a stream of samples, $(a_1, b_1), (a_2, b_2)
\ldots$, where for every $i$, $a_i \in A$ is chosen uniformly at random and
$b_i = M(a_i,x)$.
Assume that $k,\ell, r$ are such that any submatrix of $M$ of at least
$2^{-k} \cdot |A|$ rows and at least $2^{-\ell} \cdot |X|$ columns, has a bias
of at most $2^{-r}$. We show that any learning algorithm for the learning
problem corresponding to $M$ requires either a memory of size at least
$\Omega\left(k \cdot \ell \right)$, or at least $2^{\Omega(r)}$ samples. The
result holds even if the learner has an exponentially small success probability
(of $2^{-\Omega(r)}$).
In particular, this shows that for a large class of learning problems, any
learning algorithm requires either a memory of size at least $\Omega\left((\log
|X|) \cdot (\log |A|)\right)$ or an exponential number of samples, achieving a
tight $\Omega\left((\log |X|) \cdot (\log |A|)\right)$ lower bound on the size
of the memory, rather than a bound of $\Omega\left(\min\left\{(\log
|X|)^2,(\log |A|)^2\right\}\right)$ obtained in previous works [R17,MM17b].
Moreover, our result implies all previous memory-samples lower bounds, as
well as a number of new applications.
Our proof builds on [R17] that gave a general technique for proving
memory-samples lower bounds.
| Sumegha Garg, Ran Raz, Avishay Tal | null | 1708.02639 | null | null |
Time-Space Tradeoffs for Learning from Small Test Spaces: Learning Low
Degree Polynomial Functions | cs.LG cs.CC | We develop an extension of recently developed methods for obtaining
time-space tradeoff lower bounds for problems of learning from random test
samples to handle the situation where the space of tests is significantly
smaller than the space of inputs, a class of learning problems that is not
handled by prior work. This extension is based on a measure of how matrices
amplify the 2-norms of probability distributions that is more refined than the
2-norms of these matrices.
As applications that follow from our new technique, we show that any
algorithm that learns $m$-variate homogeneous polynomial functions of degree at
most $d$ over $\mathbb{F}_2$ from evaluations on randomly chosen inputs either
requires space $\Omega(mn)$ or $2^{\Omega(m)}$ time where $n=m^{\Theta(d)}$ is
the dimension of the space of such functions. These bounds are asymptotically
optimal since they match the tradeoffs achieved by natural learning algorithms
for the problems.
| Paul Beame, Shayan Oveis Gharan and Xin Yang | null | 1708.02640 | null | null |
Which Encoding is the Best for Text Classification in Chinese, English,
Japanese and Korean? | cs.CL cs.LG | This article offers an empirical study on the different ways of encoding
Chinese, Japanese, Korean (CJK) and English languages for text classification.
Different encoding levels are studied, including UTF-8 bytes, characters,
words, romanized characters and romanized words. For all encoding levels,
whenever applicable, we provide comparisons with linear models, fastText and
convolutional networks. For convolutional networks, we compare between encoding
mechanisms using character glyph images, one-hot (or one-of-n) encoding, and
embedding. In total there are 473 models, using 14 large-scale text
classification datasets in 4 languages including Chinese, English, Japanese and
Korean. Some conclusions from these results include that byte-level one-hot
encoding based on UTF-8 consistently produces competitive results for
convolutional networks, that word-level n-grams linear models are competitive
even without perfect word segmentation, and that fastText provides the best
result using character-level n-gram encoding but can overfit when the features
are overly rich.
| Xiang Zhang, Yann LeCun | null | 1708.02657 | null | null |
Gradient-enhanced kriging for high-dimensional problems | cs.LG stat.ML | Surrogate models provide a low computational cost alternative to evaluating
expensive functions. The construction of accurate surrogate models with large
numbers of independent variables is currently prohibitive because it requires a
large number of function evaluations. Gradient-enhanced kriging has the
potential to reduce the number of function evaluations for the desired accuracy
when efficient gradient computation, such as an adjoint method, is available.
However, current gradient-enhanced kriging methods do not scale well with the
number of sampling points due to the rapid growth in the size of the
correlation matrix where new information is added for each sampling point in
each direction of the design space. They do not scale well with the number of
independent variables either due to the increase in the number of
hyperparameters that need to be estimated. To address this issue, we develop a
new gradient-enhanced surrogate model approach that drastically reduces the
number of hyperparameters through the use of the partial least squares method
while maintaining accuracy. In addition, this method is able to control the
size of the correlation matrix by adding only relevant points defined through
the information provided by the partial least squares method. To validate our
method, we compare the global accuracy of the proposed method with conventional
kriging surrogate models on two analytic functions with up to 100 dimensions,
as well as engineering problems of varied complexity with up to 15 dimensions.
We show that the proposed method requires fewer sampling points than
conventional methods to obtain the desired accuracy, or provides more accuracy
for a fixed budget of sampling points. In some cases, we obtain models over 3
times more accurate than a benchmark of surrogate models from the literature,
and also
over 3200 times faster than standard gradient-enhanced kriging models.
| Mohamed Amine Bouhlel and Joaquim R. R. A. Martins | null | 1708.02663 | null | null |
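The following sketch illustrates the general idea under stated assumptions, not the authors' exact construction: partial least squares supplies per-dimension relevance weights that seed the anisotropic length-scales of a kriging (Gaussian process) model, so the optimizer starts from far fewer effective hyperparameters. The toy function, `n_components`, and the weight-to-length-scale mapping are all assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(80, 20))   # 20 inputs, few of them relevant
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1]

# PLS finds a few latent directions explaining the response; the magnitude of
# each input's rotation is used as a per-dimension relevance score.
pls = PLSRegression(n_components=2).fit(X, y)
relevance = np.abs(pls.x_rotations_).sum(axis=1)
relevance /= relevance.max()

# Irrelevant dimensions get long length-scales, so the kriging optimizer has
# far fewer hyperparameters it must meaningfully resolve.
length_scale = 1.0 / np.clip(relevance, 1e-3, None)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=length_scale),
                               normalize_y=True).fit(X, y)
print(np.round(gpr.kernel_.length_scale[:4], 2))
```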
Proceedings of the 2017 ICML Workshop on Human Interpretability in
Machine Learning (WHI 2017) | stat.ML cs.LG | This is the Proceedings of the 2017 ICML Workshop on Human Interpretability
in Machine Learning (WHI 2017), which was held in Sydney, Australia, August 10,
2017. Invited speakers were Tony Jebara, Pang Wei Koh, and David Sontag.
| Been Kim, Dmitry M. Malioutov, Kush R. Varshney, Adrian Weller | null | 1708.02666 | null | null |
Universal Function Approximation by Deep Neural Nets with Bounded Width
and ReLU Activations | stat.ML cs.CG cs.LG math.FA math.ST stat.TH | This article concerns the expressive power of depth in neural nets with ReLU
activations and bounded width. We are particularly interested in the following
questions: what is the minimal width $w_{\text{min}}(d)$ so that ReLU nets of
width $w_{\text{min}}(d)$ (and arbitrary depth) can approximate any continuous
function on the unit cube $[0,1]^d$ arbitrarily well? For ReLU nets near this
minimal width, what can one say about the depth necessary to approximate a
given function? Our approach is based on the observation that,
due to the convexity of the ReLU activation, ReLU nets are particularly
well-suited for representing convex functions. In particular, we prove that
ReLU nets with width $d+1$ can approximate any continuous convex function of
$d$ variables arbitrarily well. These results then give quantitative depth
estimates for the rate of approximation of any continuous scalar function on
the $d$-dimensional cube $[0,1]^d$ by ReLU nets with width $d+3.$
| Boris Hanin | 10.3390/math7100992 | 1708.02691 | null | null |
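The link between ReLU and convexity can be made concrete with a worked toy example (ours, not the paper's): every convex piecewise-linear function is a maximum of affine pieces, and a running maximum is computable with ReLU via the identity max(u, v) = u + relu(v - u), so a narrow-but-deep net suffices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A convex piecewise-linear target: f(x) = max_k (a_k * x + b_k).
a = np.array([-2.0, -0.5, 1.0, 3.0])
b = np.array([0.0, 0.3, 0.1, -1.0])

def narrow_relu_net(x):
    """Compute max_k(a_k x + b_k) as a running max, one affine piece per
    'layer', using max(u, v) = u + relu(v - u). Only (x, running_max) is
    carried forward, mirroring the narrow-width construction."""
    m = a[0] * x + b[0]
    for k in range(1, len(a)):
        m = m + relu(a[k] * x + b[k] - m)
    return m

xs = np.linspace(-2.0, 2.0, 5)
exact = np.max(a[:, None] * xs[None, :] + b[:, None], axis=0)
print(np.allclose(narrow_relu_net(xs), exact))  # True
```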
Optimal Identity Testing with High Probability | cs.DS cs.IT cs.LG math.IT math.ST stat.TH | We study the problem of testing identity against a given distribution with a
focus on the high confidence regime. More precisely, given samples from an
unknown distribution $p$ over $n$ elements, an explicitly given distribution
$q$, and parameters $0< \epsilon, \delta < 1$, we wish to distinguish, {\em
with probability at least $1-\delta$}, whether the distributions are identical
versus $\epsilon$-far in total variation distance. Most prior work focused
on the case that $\delta = \Omega(1)$, for which the sample complexity of
identity testing is known to be $\Theta(\sqrt{n}/\epsilon^2)$. Given such an
algorithm, one can achieve arbitrarily small values of $\delta$ via black-box
amplification, which multiplies the required number of samples by
$\Theta(\log(1/\delta))$.
We show that black-box amplification is suboptimal for any $\delta = o(1)$,
and give a new identity tester that achieves the optimal sample complexity. Our
new upper and lower bounds show that the optimal sample complexity of identity
testing is \[
\Theta\left( \frac{1}{\epsilon^2}\left(\sqrt{n \log(1/\delta)} +
\log(1/\delta) \right)\right) \] for any $n$, $\epsilon$, and $\delta$. For
the special case of uniformity testing, where the given distribution is the
uniform distribution $U_n$ over the domain, our new tester is surprisingly
simple: to test whether $p = U_n$ versus $d_{\mathrm{TV}}(p, U_n) \geq
\epsilon$, we simply threshold $d_{\mathrm{TV}}(\widehat{p}, U_n)$, where
$\widehat{p}$ is the empirical probability distribution. The fact that this
simple "plug-in" estimator is sample-optimal is surprising, even in the
constant $\delta$ case. Indeed, it was believed that such a tester would not
attain sublinear sample complexity even for constant values of $\epsilon$
and $\delta$.
| Ilias Diakonikolas, Themis Gouleakis, John Peebles, Eric Price | null | 1708.02728 | null | null |
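A minimal sketch of the plug-in uniformity tester described above (the acceptance threshold below is an illustrative placeholder; the paper derives the precise cutoff and the resulting sample complexity):

```python
import numpy as np

def uniformity_test(samples, n, threshold):
    """Plug-in tester: threshold the TV distance between the empirical
    distribution and the uniform U_n. Choosing the threshold is the delicate
    part -- with few samples the empirical TV is large even under U_n -- and
    the paper's analysis pins down the sample-optimal cutoff."""
    p_hat = np.bincount(samples, minlength=n) / len(samples)
    tv = 0.5 * np.abs(p_hat - 1.0 / n).sum()
    return tv, ("accept uniform" if tv <= threshold else "reject")

rng = np.random.default_rng(1)
n = 1000
print(uniformity_test(rng.integers(0, n, size=200_000), n, threshold=0.1))
print(uniformity_test(rng.choice(n // 2, size=200_000), n, threshold=0.1))
```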
Gaussian Prototypical Networks for Few-Shot Learning on Omniglot | cs.LG cs.CV cs.NE stat.ML | We propose a novel architecture for $k$-shot classification on the Omniglot
dataset. Building on prototypical networks, we extend their architecture to
what we call Gaussian prototypical networks. Prototypical networks learn a map
between images and embedding vectors, and use their clustering for
classification. In our model, a part of the encoder output is interpreted as a
confidence region estimate about the embedding point, and expressed as a
Gaussian covariance matrix. Our network then constructs a direction and class
dependent distance metric on the embedding space, using uncertainties of
individual data points as weights. We show that Gaussian prototypical networks
are a preferred architecture over vanilla prototypical networks with an
equivalent number of parameters. We report state-of-the-art performance in
1-shot and 5-shot classification in both the 5-way and 20-way regimes (for 5-shot
5-way, we are comparable to previous state-of-the-art) on the Omniglot dataset.
We explore artificially down-sampling a fraction of images in the training set,
which improves our performance even further. We therefore hypothesize that
Gaussian prototypical networks might perform better in less homogeneous,
noisier datasets, which are commonplace in real world applications.
| Stanislav Fort | null | 1708.02735 | null | null |
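A minimal sketch of the core computation, under the diagonal-covariance reading of the abstract (function names and shapes are our illustrative choices): each support embedding carries a predicted precision, prototypes are precision-weighted means, and distances are precision-scaled.

```python
import numpy as np

def gaussian_prototype(embeddings, precisions):
    """Combine support embeddings into a class prototype, weighting each
    point by its predicted precision (inverse variance) so that uncertain
    points contribute less. Diagonal-covariance case for simplicity."""
    w = precisions / precisions.sum(axis=0, keepdims=True)
    proto = (w * embeddings).sum(axis=0)
    proto_prec = precisions.sum(axis=0)  # precisions add for a Gaussian product
    return proto, proto_prec

def mahalanobis_sq(query, proto, proto_prec):
    d = query - proto
    return (d * proto_prec * d).sum(axis=-1)

rng = np.random.default_rng(0)
emb = rng.normal(size=(5, 16))               # 5 support points, 16-d embeddings
prec = rng.uniform(0.5, 2.0, size=(5, 16))   # per-point diagonal precisions
proto, proto_prec = gaussian_prototype(emb, prec)
print(mahalanobis_sq(rng.normal(size=16), proto, proto_prec))
```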
A Data Prism: Semi-Verified Learning in the Small-Alpha Regime | cs.LG cs.IT math.IT | We consider a model of unreliable or crowdsourced data where there is an
underlying set of $n$ binary variables, each evaluator contributes a (possibly
unreliable or adversarial) estimate of the values of some subset of $r$ of the
variables, and the learner is given the true value of a constant number of
variables. We show that, provided an $\alpha$-fraction of the evaluators are
"good" (either correct, or with independent noise rate $p < 1/2$), then the
true values of a $(1-\epsilon)$ fraction of the $n$ underlying variables can be
deduced as long as $\alpha > 1/(2-2p)^r$. This setting can be viewed as an
instance of the semi-verified learning model introduced in [CSV17], which
explores the tradeoff between the number of items evaluated by each worker and
the fraction of good evaluators. Our results require the number of evaluators
to be extremely large, $>n^r$, although our algorithm runs in linear time,
$O_{r,\epsilon}(n)$, given query access to the large dataset of evaluations.
This setting and results can also be viewed as examining a general class of
semi-adversarial CSPs with a planted assignment.
This parameter regime, where the fraction of reliable data is small, is
relevant to a number of practical settings. Consider, for example, settings where one has
a large dataset of customer preferences, with each customer specifying
preferences for a small (constant) number of items, and the goal is to
ascertain the preferences of a specific demographic of interest. Our results
show that this large dataset (which lacks demographic information) can be
leveraged together with the preferences of the demographic of interest for a
constant number of randomly selected items, to recover an accurate estimate of
the entire set of preferences. In this sense, our results can be viewed as a
"data prism" allowing one to extract the behavior of specific cohorts from a
large, mixed, dataset.
| Michela Meister and Gregory Valiant | null | 1708.02740 | null | null
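To make the setting concrete, here is a naive illustrative baseline, explicitly not the paper's linear-time algorithm: upweight evaluators that agree with the few verified variables, then take weighted majority votes. All constants below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, n_eval = 50, 5, 20000
truth = rng.integers(0, 2, size=n)

# Each evaluator reports r variables; "good" ones report the truth with
# independent noise rate p < 1/2, the rest answer at random (adversarial-ish).
p, alpha = 0.1, 0.2
reports = []
for _ in range(n_eval):
    idx = rng.choice(n, size=r, replace=False)
    if rng.random() < alpha:
        vals = np.where(rng.random(r) < p, 1 - truth[idx], truth[idx])
    else:
        vals = rng.integers(0, 2, size=r)
    reports.append((idx, vals))

verified = rng.choice(n, size=3, replace=False)  # constant number of known values

# Naive baseline: upweight evaluators who agree with the verified variables,
# then take weighted majority votes for every variable.
votes = np.zeros((n, 2))
for idx, vals in reports:
    overlap = np.isin(idx, verified)
    agree = (vals[overlap] == truth[idx[overlap]]).sum()
    weight = 1.0 + 2.0 * agree
    for i, v in zip(idx, vals):
        votes[i, v] += weight

estimate = votes.argmax(axis=1)
print("accuracy:", (estimate == truth).mean())
```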
Non-Adaptive Randomized Algorithm for Group Testing | cs.LG | We study the problem of group testing with a non-adaptive randomized
algorithm in the random incidence design (RID) model where each entry in the
test is chosen independently at random from $\{0,1\}$ with a fixed probability
$p$.
The property that is necessary and sufficient for unique decoding is the
separability of the tests, but unfortunately no linear-time decoding algorithm is known
for such tests. In order to achieve linear-time decodable tests, the algorithms
in the literature use the disjunction property that gives almost optimal number
of tests.
We define a new property for the tests, which we call the semi-disjunction
property. We show that there is a linear-time decoding algorithm for such tests and that for
$d\to \infty$ the number of tests converges to the number of tests with the
separability property and is therefore optimal (in the RID model). Our analysis
shows that, in the RID model, the number of tests in our algorithm is better
than the one with the disjunction property even for small $d$.
| Nader H. Bshouty, Nuha Diab, Shada R. Kawar, Robert J. Shahla | null | 1708.02787 | null | null |
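A minimal simulation of the RID model with the classic disjunction-style (COMP) linear-time decoder, for orientation only; the paper's semi-disjunct tests improve on this construction. The choice p = 1/(d+1) is a common heuristic, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_tests = 500, 5, 200
defectives = rng.choice(n, size=d, replace=False)
x = np.zeros(n, dtype=bool)
x[defectives] = True

# RID model: each entry of the test matrix is 1 independently with probability p.
p = 1.0 / (d + 1)                      # heuristic choice, not from the paper
A = rng.random((n_tests, n)) < p
outcomes = (A & x).any(axis=1)         # a test is positive iff it hits a defective

# COMP-style linear-time decoding: any item in a negative test is declared clean.
declared = np.ones(n, dtype=bool)
for t in np.flatnonzero(~outcomes):
    declared &= ~A[t]

print(sorted(np.flatnonzero(declared)) == sorted(defectives))  # True w.h.p.
```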
Simulated Annealing with Levy Distribution for Fast Matrix
Factorization-Based Collaborative Filtering | cs.LG cs.IR stat.ML | Matrix factorization is one of the best approaches for collaborative
filtering because of its high accuracy in representing users' and items' latent
factors. The main disadvantages of matrix factorization are its complexity and
the difficulty of parallelizing it, especially with very large matrices. In this
paper, we introduce a new method for collaborative filtering based on matrix
factorization, combining simulated annealing with the L\'evy distribution. Using
this method, good solutions are achieved in acceptable time with low
computational cost, compared to other methods such as stochastic gradient descent,
alternating least squares, and weighted non-negative matrix factorization.
| Mostafa A. Shehata, Mohammad Nassef and Amr A. Badr | null | 1708.02867 | null | null |
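A minimal sketch of the idea, with every schedule and constant a hypothetical choice rather than the paper's: the latent factor matrices are perturbed with heavy-tailed Lévy-flight steps (Mantegna's algorithm) and accepted under a standard simulated-annealing rule.

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(shape, beta=1.5):
    """Heavy-tailed Levy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size=shape)
    v = rng.normal(0.0, 1.0, size=shape)
    return u / np.abs(v) ** (1 / beta)

def rmse(R, mask, U, V):
    err = (R - U @ V.T)[mask]
    return np.sqrt((err ** 2).mean())

n_users, n_items, k = 30, 40, 4
R = rng.normal(size=(n_users, k)) @ rng.normal(size=(n_items, k)).T
mask = rng.random(R.shape) < 0.3       # observed ratings

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
cur, T = rmse(R, mask, U, V), 1.0
for _ in range(5000):
    U2 = U + 0.02 * levy_step(U.shape)
    V2 = V + 0.02 * levy_step(V.shape)
    cand = rmse(R, mask, U2, V2)
    # Accept downhill moves always, uphill moves with a Boltzmann probability.
    if cand < cur or rng.random() < np.exp((cur - cand) / T):
        U, V, cur = U2, V2, cand
    T *= 0.999                          # geometric cooling (hypothetical schedule)
print("final training RMSE:", round(cur, 3))
```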
Spectral Dynamics of Learning Restricted Boltzmann Machines | cond-mat.dis-nn cond-mat.stat-mech cs.LG | The Restricted Boltzmann Machine (RBM), an important tool used in machine
learning, in particular for unsupervised learning tasks, is investigated from
the perspective of its spectral properties. Starting from empirical
observations, we propose a generic statistical ensemble for the weight matrix
of the RBM and characterize its mean evolution. This lets us show how, in the
linear regime in which the RBM is found to operate at the beginning of
training, the statistical properties of the data drive the selection of the
unstable modes of the weight matrix. A set of equations characterizing the
non-linear regime is then derived, unveiling in some way how the selected modes
interact in later stages of the learning procedure and defining a deterministic
learning curve for the RBM.
| Aur\'elien Decelle, Giancarlo Fissore and Cyril Furtlehner | 10.1209/0295-5075/119/60001 | 1708.02917 | null | null |
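A toy experiment in the spirit of the abstract (our construction, not the paper's analysis): train a small RBM with CD-1 on data with a planted low-rank structure and watch the singular values of W, whose growth reflects the selection of unstable modes in the linear regime.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data with a planted one-bit latent structure for the RBM to pick up.
n_vis, n_hid, n_samples = 40, 20, 500
u = rng.integers(0, 2, size=(n_samples, 1))
X = (rng.random((n_samples, n_vis)) < np.where(u == 1, 0.8, 0.2)).astype(float)

W = 0.01 * rng.normal(size=(n_vis, n_hid))
a, b, lr = np.zeros(n_vis), np.zeros(n_hid), 0.05

for epoch in range(200):
    # Contrastive divergence with a single Gibbs step (CD-1).
    ph = sigmoid(X @ W + b)
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + a)
    ph2 = sigmoid(pv @ W + b)
    W += lr * (X.T @ ph - pv.T @ ph2) / n_samples
    a += lr * (X - pv).mean(axis=0)
    b += lr * (ph - ph2).mean(axis=0)
    if epoch % 50 == 0:
        s = np.linalg.svd(W, compute_uv=False)
        print(f"epoch {epoch}: top singular values {np.round(s[:3], 3)}")
```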
The Tensor Memory Hypothesis | cs.AI cs.LG q-bio.NC stat.ML | We discuss memory models which are based on tensor decompositions using
latent representations of entities and events. We show how episodic memory and
semantic memory can be realized and discuss how new memory traces can be
generated from sensory input: Existing memories are the basis for perception
and new memories are generated via perception. We relate our mathematical
approach to the hippocampal memory indexing theory. We describe the first
detailed mathematical models for the complete processing pipeline from sensory
input and its semantic decoding, i.e., perception, to the formation of episodic
and semantic memories and their declarative semantic decodings. Our main
hypothesis is that perception includes an active semantic decoding process,
which relies on latent representations of entities and predicates, and that
episodic and semantic memories depend on the same decoding process. We
contribute to the debate between the leading memory consolidation theories,
i.e., the standard consolidation theory (SCT) and the multiple trace theory
(MTT). The latter is closely related to the complementary learning systems
(CLS) framework. In particular, we show explicitly how episodic memory can
teach the neocortex to form a semantic memory, which is a core issue in MTT and
CLS.
| Volker Tresp and Yunpu Ma | null | 1708.02918 | null | null |
Enabling Massive Deep Neural Networks with the GraphBLAS | cs.DC cs.LG | Deep Neural Networks (DNNs) have emerged as a core tool for machine learning.
The computations performed during DNN training and inference are dominated by
operations on the weight matrices describing the DNN. As DNNs incorporate more
stages and more nodes per stage, these weight matrices may be required to be
sparse because of memory limitations. The GraphBLAS.org math library standard
was developed to provide high performance manipulation of sparse weight
matrices and input/output vectors. For sufficiently sparse matrices, a sparse
matrix library requires significantly less memory than the corresponding dense
matrix implementation. This paper provides a brief description of the
mathematics underlying the GraphBLAS. In addition, the equations of a typical
DNN are rewritten in a form designed to use the GraphBLAS. An implementation of
the DNN is given using a preliminary GraphBLAS C library. The performance of
the GraphBLAS implementation is measured relative to a standard dense linear
algebra library implementation. For various sizes of DNN weight matrices, it is
shown that the GraphBLAS sparse implementation outperforms a BLAS dense
implementation as the weight matrix becomes sparser.
| Jeremy Kepner, Manoj Kumar, Jos\'e Moreira, Pratap Pattnaik, Mauricio
Serrano, Henry Tufo | 10.1109/HPEC.2017.8091098 | 1708.02937 | null | null |
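The core computational pattern can be sketched with scipy.sparse standing in for the GraphBLAS C library (the API below is scipy's, not GraphBLAS's): each DNN stage is a sparse matrix-vector product followed by a bias and ReLU, and the memory comparison shows why sparsity matters.

```python
import numpy as np
import scipy.sparse as sp

def sparse_relu_layer(W, x, bias):
    """One DNN stage as a sparse matrix-vector product: y = ReLU(W x + b).
    In GraphBLAS terms this is an mxv over a plus-times semiring followed by
    an elementwise apply; scipy CSR stands in for the sparse kernel here."""
    return np.maximum(W @ x + bias, 0.0)

n, density, n_layers = 10_000, 0.001, 4
layers = [sp.random(n, n, density=density, format="csr", random_state=i)
          for i in range(n_layers)]
x = np.random.default_rng(0).random(n)
for W in layers:
    x = sparse_relu_layer(W, x, bias=-0.01)

dense_mb = n * n * 8 / 1e6
W0 = layers[0]
sparse_mb = (W0.data.nbytes + W0.indices.nbytes + W0.indptr.nbytes) / 1e6
print(f"dense layer: {dense_mb:.0f} MB, sparse layer: {sparse_mb:.1f} MB")
```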
Convergence of Unregularized Online Learning Algorithms | cs.LG | In this paper we study the convergence of online gradient descent algorithms
in reproducing kernel Hilbert spaces (RKHSs) without regularization. We
establish a sufficient condition and a necessary condition for the convergence
of excess generalization errors in expectation. A sufficient condition for the
almost sure convergence is also given. With high probability, we provide
explicit convergence rates of the excess generalization errors for both
averaged iterates and the last iterate, which in turn also imply convergence
rates with probability one. To the best of our knowledge, this is the first
high-probability convergence rate for the last iterate of online gradient
descent algorithms without strong convexity. Without any boundedness
assumptions on iterates, our results are derived by a novel use of two measures
of the algorithm's one-step progress, respectively by generalization errors and
by distances in RKHSs, where the variances of the involved martingales are
cancelled out by the descent property of the algorithm.
| Yunwen Lei, Lei Shi and Zheng-Chu Guo | null | 1708.02939 | null | null |
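For squared loss, the unregularized online gradient descent iterate in an RKHS has the closed form f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, .), i.e. a growing kernel expansion. A minimal sketch (the Gaussian kernel bandwidth and the decaying step size are our assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
kernel = lambda u, v: np.exp(-0.5 * (u - v) ** 2 / 0.25)  # Gaussian kernel (assumed)

centers, coefs = [], []          # the iterate f_t as a kernel expansion

def f(x):
    return sum(c * kernel(xc, x) for xc, c in zip(centers, coefs))

for t in range(1, 2001):
    x = rng.uniform(-1.0, 1.0)
    y = np.sin(3.0 * x) + 0.1 * rng.normal()
    eta = 0.5 / np.sqrt(t)       # decaying step size (our choice)
    # Stochastic functional gradient step for the squared loss at (x, y):
    #   f_{t+1} = f_t - eta * (f_t(x) - y) * K(x, .)
    residual = f(x) - y
    centers.append(x)
    coefs.append(-eta * residual)

xs = np.linspace(-1.0, 1.0, 5)
print(np.round([f(v) for v in xs], 2))   # learned values
print(np.round(np.sin(3.0 * xs), 2))     # noiseless targets
```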
Anomaly Detection on Graph Time Series | cs.LG cs.NE stat.ML | In this paper, we use a variational recurrent neural network to investigate the
anomaly detection problem on graph time series. The temporal correlation is
modeled by the combination of recurrent neural network (RNN) and variational
inference (VI), while the spatial information is captured by the graph
convolutional network. In order to incorporate external factors, we use feature
extractor to augment the transition of the latent variables, which can learn
the influence of external factors. With the accumulated ELBO as the objective,
the model is easily extended to an online method. An experimental study on
traffic flow data shows the detection capability of the proposed method.
| Daniel Hsu | null | 1708.02975 | null | null |
Hierarchically-Attentive RNN for Album Summarization and Storytelling | cs.CL cs.AI cs.CV cs.LG | We address the problem of end-to-end visual storytelling. Given a photo
album, our model first selects the most representative (summary) photos, and
then composes a natural language story for the album. For this task, we make
use of the Visual Storytelling dataset and a model composed of three
hierarchically-attentive Recurrent Neural Nets (RNNs) to: encode the album
photos, select representative (summary) photos, and compose the story.
Automatic and human evaluations show our model achieves better performance on
selection, generation, and retrieval than baselines.
| Licheng Yu and Mohit Bansal and Tamara L. Berg | null | 1708.02977 | null | null |
Tikhonov Regularization for Long Short-Term Memory Networks | cs.LG cs.NE stat.ML | It is a well-known fact that adding noise to the input data often improves
network performance. While the dropout technique may cause memory loss when it
is applied to recurrent connections, Tikhonov regularization, which can be
regarded as training with additive noise, avoids this issue naturally, though
it requires deriving the regularizer for each architecture. In the case
of feedforward neural networks this is straightforward, while for networks with
recurrent connections and complicated layers it leads to some difficulties. In
this paper, a Tikhonov regularizer is derived for Long Short-Term Memory (LSTM)
networks. Although it is independent of time for simplicity, it accounts for the
interaction between the weights of the LSTM unit, which in theory makes it possible
to regularize a unit with complicated dependencies by using only one parameter
that measures the input data perturbation. The regularizer that is proposed in
this paper has three parameters: one to control the regularization process, and
the other two to maintain computational stability while the network is being trained.
The theory developed in this paper can be applied to get such regularizers for
different recurrent neural networks with Hadamard products and Lipschitz
continuous functions.
| Andrei Turkin | null | 1708.02979 | null | null |
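The equivalence behind "training with additive noise" can be checked directly in the simplest linear least-squares case, where E_eps ||(X + eps) w - y||^2 = ||X w - y||^2 + n sigma^2 ||w||^2, a Tikhonov penalty; the LSTM derivation in the paper generalizes this idea. A numerical check (our toy setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 200, 8, 0.3
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)
w = rng.normal(size=d)                      # any fixed weight vector

# Identity being checked: E_eps ||(X + eps) w - y||^2
#                       = ||X w - y||^2 + n * sigma^2 * ||w||^2
def noisy_loss(w, n_draws=5000):
    total = 0.0
    for _ in range(n_draws):
        eps = sigma * rng.normal(size=X.shape)
        total += np.sum(((X + eps) @ w - y) ** 2)
    return total / n_draws

penalized = np.sum((X @ w - y) ** 2) + n * sigma ** 2 * np.sum(w ** 2)
print(round(noisy_loss(w), 1), round(penalized, 1))  # agree up to MC error
```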
Non-stationary Stochastic Optimization under $L_{p,q}$-Variation
Measures | stat.ML cs.LG | We consider a non-stationary sequential stochastic optimization problem, in
which the underlying cost functions change over time under a variation budget
constraint. We propose an $L_{p,q}$-variation functional to quantify the
change, which yields less variation for dynamic function sequences whose
changes are constrained to short time periods or small subsets of input domain.
Under the $L_{p,q}$-variation constraint, we derive both upper and matching
lower regret bounds for smooth and strongly convex function sequences, which
generalize previous results in Besbes et al. (2015). Furthermore, we provide an
upper bound for general convex function sequences with noisy gradient feedback,
which matches the optimal rate as $p\to\infty$. Our results reveal some
surprising phenomena under this general variation functional, such as the curse
of dimensionality of the function domain. The key technical novelties in our
analysis include affinity lemmas that characterize the distance of the
minimizers of two convex functions with bounded $L_p$ difference, and a
cubic-spline-based construction that attains matching lower bounds.
| Xi Chen, Yining Wang, Yu-Xiang Wang | null | 1708.03020 | null | null
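The abstract does not spell out the functional; one natural reading, offered here purely as an assumption consistent with the description, aggregates the per-round change in an $L_q$ norm over the domain and an $\ell_p$ norm over rounds:

```latex
% One plausible form (an assumption; the abstract does not state the
% definition) for cost functions f_1, ..., f_T on a domain \mathcal{X}:
\[
  \mathrm{Var}_{p,q}(f_1,\dots,f_T)
  = \Big( \sum_{t=1}^{T-1} \big\| f_{t+1} - f_t \big\|_{L_q(\mathcal{X})}^{p} \Big)^{1/p},
  \qquad
  \| g \|_{L_q(\mathcal{X})} = \Big( \int_{\mathcal{X}} |g(x)|^{q} \, dx \Big)^{1/q},
\]
% so that q controls how changes localized to small subsets of the domain are
% weighted, while p controls how changes are aggregated across rounds.
```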
Using Deep Neural Networks to Automate Large Scale Statistical Analysis
for Big Data Applications | stat.ML cs.LG stat.CO | Statistical analysis (SA) is a complex process for deducing
population properties from the analysis of data. It usually takes a well-trained analyst to
successfully perform SA, and it becomes extremely challenging to apply SA to
big data applications. We propose to use deep neural networks to automate the
SA process. In particular, we propose to construct convolutional neural
networks (CNNs) to perform automatic model selection and parameter estimation,
two most important SA tasks. We refer to the resulting CNNs as the neural model
selector and the neural model estimator, respectively, which can be properly
trained using labeled data systematically generated from candidate models.
A simulation study shows that both the selector and the estimator demonstrate
excellent performance. The idea and proposed framework can be further extended
to automate the entire SA process and have the potential to revolutionize how
SA is performed in big data analytics.
| Rongrong Zhang, Wei Deng, Michael Yu Zhu | null | 1708.03027 | null | null |
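A drastically simplified sketch of the pipeline (our stand-in, with an MLP instead of the paper's CNN; the candidate models, featurization, and all sizes below are hypothetical): labeled training data are generated systematically from candidate models, and a classifier is trained to select the model that produced a sample.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Candidate models: label 0 = Gaussian, 1 = exponential, 2 = uniform.
def simulate(label, m=100):
    if label == 0:
        return rng.normal(1.0, 1.0, m)
    if label == 1:
        return rng.exponential(1.0, m)
    return rng.uniform(0.0, 2.0, m)

def featurize(x):
    # Sorted, standardized samples as a fixed-length input vector.
    x = np.sort(x)
    return (x - x.mean()) / (x.std() + 1e-9)

# Systematically generate labeled training data from the candidate models,
# then train a neural model selector (an MLP stands in for the paper's CNN).
labels = rng.integers(0, 3, size=3000)
features = np.stack([featurize(simulate(l)) for l in labels])
selector = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                         random_state=0).fit(features, labels)

test_labels = rng.integers(0, 3, size=500)
test_features = np.stack([featurize(simulate(l)) for l in test_labels])
print("selector accuracy:", selector.score(test_features, test_labels))
```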