title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Towards meaningful physics from generative models | hep-lat cond-mat.stat-mech cs.LG | In several physical systems, important properties characterizing the system
itself are theoretically related with specific degrees of freedom. Although
standard Monte Carlo simulations provide an effective tool to accurately
reconstruct the physical configurations of the system, they are unable to
isolate the different contributions corresponding to different degrees of
freedom. Here we show that unsupervised deep learning can become a valid
support to MC simulation, coupling useful insights in the phase detection task
with good reconstruction performance. As a testbed we consider the 2D XY model,
showing that a deep neural network based on variational autoencoders can detect
the continuous Kosterlitz-Thouless (KT) transitions, and that, if endowed with
the appropriate constraints, it generates configurations with meaningful
physical content.
| Marco Cristoforetti, Giuseppe Jurman, Andrea I. Nardelli, Cesare
Furlanello | null | 1705.09524 | null | null |
Classification regions of deep neural networks | cs.CV cs.AI cs.LG stat.ML | The goal of this paper is to analyze the geometric properties of deep neural
network classifiers in the input space. We specifically study the topology of
classification regions created by deep networks, as well as their associated
decision boundary. Through a systematic empirical investigation, we show that
state-of-the-art deep nets learn connected classification regions, and that the
decision boundary in the vicinity of datapoints is flat along most directions.
We further draw an essential connection between two seemingly unrelated
properties of deep networks: their sensitivity to additive perturbations in the
inputs, and the curvature of their decision boundary. The directions where the
decision boundary is curved in fact remarkably characterize the directions to
which the classifier is the most vulnerable. We finally leverage a fundamental
asymmetry in the curvature of the decision boundary of deep nets, and propose a
method to discriminate between original images, and images perturbed with small
adversarial examples. We show the effectiveness of this purely geometric
approach for detecting small adversarial perturbations in images, and for
recovering the labels of perturbed images.
| Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard,
Stefano Soatto | null | 1705.09552 | null | null |
Robustness of classifiers to universal perturbations: a geometric
perspective | cs.CV cs.AI cs.LG stat.ML | Deep networks have recently been shown to be vulnerable to universal
perturbations: there exist very small image-agnostic perturbations that cause
most natural images to be misclassified by such classifiers. In this paper, we
propose the first quantitative analysis of the robustness of classifiers to
universal perturbations, and draw a formal link between the robustness to
universal perturbations, and the geometry of the decision boundary.
Specifically, we establish theoretical bounds on the robustness of classifiers
under two decision boundary models (flat and curved models). We show in
particular that the robustness of deep networks to universal perturbations is
driven by a key property of their curvature: there exists shared directions
along which the decision boundary of deep networks is systematically positively
curved. Under such conditions, we prove the existence of small universal
perturbations. Our analysis further provides a novel geometric method for
computing universal perturbations, in addition to explaining their properties.
| Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, Pascal
Frossard, Stefano Soatto | null | 1705.09554 | null | null |
Bayesian GAN | stat.ML cs.AI cs.CV cs.LG | Generative adversarial networks (GANs) can implicitly learn rich
distributions over images, audio, and data which are hard to model with an
explicit likelihood. We present a practical Bayesian formulation for
unsupervised and semi-supervised learning with GANs. Within this framework, we
use stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of
the generator and discriminator networks. The resulting approach is
straightforward and obtains good performance without any standard interventions
such as feature matching, or mini-batch discrimination. By exploring an
expressive posterior over the parameters of the generator, the Bayesian GAN
avoids mode-collapse, produces interpretable and diverse candidate samples, and
provides state-of-the-art quantitative results for semi-supervised learning on
benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN,
Wasserstein GANs, and DCGAN ensembles.
| Yunus Saatchi, Andrew Gordon Wilson | null | 1705.09558 | null | null |
Combinatorial Multi-Armed Bandits with Filtered Feedback | cs.LG stat.ML | Motivated by problems in search and detection we present a solution to a
Combinatorial Multi-Armed Bandit (CMAB) problem with both heavy-tailed reward
distributions and a new class of feedback, filtered semibandit feedback. In a
CMAB problem an agent pulls a combination of arms from a set $\{1,...,k\}$ in
each round, generating random outcomes from probability distributions
associated with these arms and receiving an overall reward. Under semibandit
feedback it is assumed that the random outcomes generated are all observed.
Filtered semibandit feedback allows the outcomes that are observed to be
sampled from a second distribution conditioned on the initial random outcomes.
This feedback mechanism is valuable as it allows CMAB methods to be applied to
sequential search and detection problems where combinatorial actions are made,
but the true rewards (number of objects of interest appearing in the round) are
not observed, rather a filtered reward (the number of objects the searcher
successfully finds, which must by definition be less than the number that
appear). We present an upper confidence bound type algorithm, Robust-F-CUCB,
and associated regret bound of order $\mathcal{O}(\ln(n))$ to balance
exploration and exploitation in the face of both filtering of reward and heavy
tailed reward distributions.
| James A. Grant, David S. Leslie, Kevin Glazebrook, Roberto Szechtman | null | 1705.09605 | null | null |
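The entry above describes pulling a subset of arms each round and observing per-arm outcomes. As a rough illustration of the plain combinatorial-UCB index underlying this family of methods (not the paper's Robust-F-CUCB, which additionally handles heavy-tailed rewards and filtered observations), here is a minimal Python sketch; the exploration constant, the Bernoulli environment, and names such as `cucb_round` and `budget` are placeholder choices:

```python
import numpy as np

def cucb_round(means, counts, t, budget):
    """Pick `budget` arms by upper confidence bound (plain CUCB, bounded rewards)."""
    bonus = np.sqrt(1.5 * np.log(t) / np.maximum(counts, 1))   # shrinks with observations
    ucb = means + bonus
    ucb[counts == 0] = np.inf            # force every arm to be tried at least once
    return np.argsort(ucb)[-budget:]     # play the `budget` highest-index arms

# Toy semibandit simulation with Bernoulli outcomes (illustrative only).
rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.8, 0.4])
means, counts = np.zeros(4), np.zeros(4)
for t in range(1, 201):
    arms = cucb_round(means, counts, t, budget=2)
    outcomes = rng.random(len(arms)) < true_p[arms]      # one outcome observed per pulled arm
    counts[arms] += 1
    means[arms] += (outcomes - means[arms]) / counts[arms]
print(np.round(means, 2))                # empirical means concentrate on the best arms
```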
Discriminative Metric Learning with Deep Forest | stat.ML cs.LG | A Discriminative Deep Forest (DisDF) as a metric learning algorithm is
proposed in the paper. It is based on the Deep Forest or gcForest proposed by
Zhou and Feng and can be viewed as a gcForest modification. The case of the
fully supervised learning is studied when the class labels of individual
training examples are known. The main idea underlying the algorithm is to
assign weights to decision trees in random forest in order to reduce distances
between objects from the same class and to increase them between objects from
different classes. The weights are training parameters. A specific objective
function which combines Euclidean and Manhattan distances and simplifies the
optimization problem for training the DisDF is proposed. The numerical
experiments illustrate the proposed distance metric algorithm.
| Lev V. Utkin and Mikhail A. Ryabinin | null | 1705.09620 | null | null |
Learning Causal Structures Using Regression Invariance | cs.LG cs.AI stat.ML | We study causal inference in a multi-environment setting, in which the
functional relations for producing the variables from their direct causes
remain the same across environments, while the distribution of exogenous noises
may vary. We introduce the idea of using the invariance of the functional
relations of the variables to their causes across a set of environments. We
define a notion of completeness for a causal inference algorithm in this
setting and prove the existence of such an algorithm by proposing the baseline
algorithm. Additionally, we present an alternate algorithm that has
significantly improved computational and sample complexity compared to the
baseline algorithm. The experimental results show that the proposed algorithm
outperforms the other existing algorithms.
| AmirEmad Ghassami, Saber Salehkaleybar, Negar Kiyavash, Kun Zhang | null | 1705.09644 | null | null |
Anomaly Detection in a Digital Video Broadcasting System Using Timed
Automata | cs.LG cs.AI cs.FL cs.LO | This paper focuses on detecting anomalies in a digital video broadcasting
(DVB) system from providers' perspective. We learn a probabilistic
deterministic real timed automaton profiling benign behavior of encryption
control in the DVB control access system. This profile is used as a one-class
classifier. Anomalous items in a testing sequence are detected when the
sequence is not accepted by the learned model.
| Xiaoran Liu and Qin Lin and Sicco Verwer and Dmitri Jarnikov | null | 1705.09650 | null | null |
Style Transfer from Non-Parallel Text by Cross-Alignment | cs.CL cs.LG | This paper focuses on style transfer on the basis of non-parallel text. This
is an instance of a broad family of problems including machine translation,
decipherment, and sentiment modification. The key challenge is to separate the
content from other aspects such as style. We assume a shared latent content
distribution across different text corpora, and propose a method that leverages
refined alignment of latent representations to perform style transfer. The
transferred sentences from one style should match example sentences from the
other style as a population. We demonstrate the effectiveness of this
cross-alignment method on three tasks: sentiment modification, decipherment of
word substitution ciphers, and recovery of word order.
| Tianxiao Shen, Tao Lei, Regina Barzilay, Tommi Jaakkola | null | 1705.09655 | null | null |
Fisher GAN | cs.LG stat.ML | Generative Adversarial Networks (GANs) are powerful models for learning
complex distributions. Stable training of GANs has been addressed in many
recent works which explore different metrics between distributions. In this
paper we introduce Fisher GAN which fits within the Integral Probability
Metrics (IPM) framework for training GANs. Fisher GAN defines a critic with a
data dependent constraint on its second order moments. We show in this paper
that Fisher GAN allows for stable and time efficient training that does not
compromise the capacity of the critic, and does not need data independent
constraints such as weight clipping. We analyze our Fisher IPM theoretically
and provide an algorithm based on Augmented Lagrangian for Fisher GAN. We
validate our claims on both image sample generation and semi-supervised
classification using Fisher GAN.
| Youssef Mroueh, Tom Sercu | null | 1705.09675 | null | null |
Multiple Source Domain Adaptation with Adversarial Training of Neural
Networks | cs.LG cs.AI stat.ML | While domain adaptation has been actively researched in recent years, most
theoretical results and algorithms focus on the single-source-single-target
adaptation setting. Naive application of such algorithms on multiple source
domain adaptation problem may lead to suboptimal solutions. As a step toward
bridging the gap, we propose a new generalization bound for domain adaptation
when there are multiple source domains with labeled instances and one target
domain with unlabeled instances. Compared with existing bounds, the new bound
does not require expert knowledge about the target distribution, nor the
optimal combination rule for multisource domains. Interestingly, our theory
also leads to an efficient learning strategy using adversarial neural networks:
we show how to interpret it as learning feature representations that are
invariant to the multiple domain shifts while still being discriminative for
the learning task. To this end, we propose two models, both of which we call
multisource domain adversarial networks (MDANs): the first model optimizes
directly our bound, while the second model is a smoothed approximation of the
first one, leading to a more data-efficient and task-adaptive model. The
optimization tasks of both models are minimax saddle point problems that can be
optimized by adversarial training. To demonstrate the effectiveness of MDANs,
we conduct extensive experiments showing superior adaptation performance on
three real-world datasets: sentiment analysis, digit classification, and
vehicle counting.
| Han Zhao, Shanghang Zhang, Guanhang Wu, Jo\~ao P. Costeira, Jos\'e M.
F. Moura, Geoffrey J. Gordon | null | 1705.09684 | null | null |
Multi-scale Online Learning and its Applications to Online Auctions | cs.GT cs.DS cs.LG stat.ML | We consider revenue maximization in online auction/pricing problems. A seller
sells an identical item in each period to a new buyer, or a new set of buyers.
For the online posted pricing problem, we show regret bounds that scale with
the best fixed price, rather than the range of the values. We also show regret
bounds that are almost scale free, and match the offline sample complexity,
when comparing to a benchmark that requires a lower bound on the market share.
These results are obtained by generalizing the classical learning from experts
and multi-armed bandit problems to their multi-scale versions. In this version,
the reward of each action is in a different range, and the regret w.r.t. a
given action scales with its own range, rather than the maximum range.
| S\'ebastien Bubeck, Nikhil R. Devanur, Zhiyi Huang, Rad Niazadeh | null | 1705.09700 | null | null |
Stochastic Feedback Control of Systems with Unknown Nonlinear Dynamics | cs.SY cs.LG | This paper studies the stochastic optimal control problem for systems with
unknown dynamics. First, an open-loop deterministic trajectory optimization
problem is solved without knowing the explicit form of the dynamical system.
Next, a Linear Quadratic Gaussian (LQG) controller is designed for the nominal
trajectory-dependent linearized system, such that under a small noise
assumption, the actual states remain close to the optimal trajectory. The
trajectory-dependent linearized system is identified using input-output
experimental data consisting of the impulse responses of the nominal system. A
computational example is given to illustrate the performance of the proposed
approach.
| Dan Yu, Mohammadhussein Rafieisakhaei and Suman Chakravorty | null | 1705.09761 | null | null |
MAT: A Multi-strength Adversarial Training Method to Mitigate
Adversarial Attacks | cs.LG cs.CV | Some recent works revealed that deep neural networks (DNNs) are vulnerable to
so-called adversarial attacks where input examples are intentionally perturbed
to fool DNNs. In this work, we revisit the DNN training process that includes
adversarial examples into the training dataset so as to improve DNN's
resilience to adversarial attacks, namely, adversarial training. Our
experiments show that different adversarial strengths, i.e., perturbation
levels of adversarial examples, have different working zones to resist the
attack. Based on the observation, we propose a multi-strength adversarial
training method (MAT) that combines the adversarial training examples with
different adversarial strengths to defend against adversarial attacks. Two training
structures - mixed MAT and parallel MAT - are developed to facilitate the
tradeoffs between training time and memory occupation. Our results show that
MAT can substantially minimize the accuracy degradation of deep learning
systems to adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
| Chang Song, Hsin-Pai Cheng, Huanrui Yang, Sicheng Li, Chunpeng Wu,
Qing Wu, Hai Li, Yiran Chen | null | 1705.09764 | null | null |
Good Semi-supervised Learning that Requires a Bad GAN | cs.LG cs.AI | Semi-supervised learning methods based on generative adversarial networks
(GANs) obtained strong empirical results, but it is not clear 1) how the
discriminator benefits from joint training with a generator, and 2) why good
semi-supervised classification performance and a good generator cannot be
obtained at the same time. Theoretically, we show that given the discriminator
objective, good semisupervised learning indeed requires a bad generator, and
propose the definition of a preferred generator. Empirically, we derive a novel
formulation based on our analysis that substantially improves over feature
matching GANs, obtaining state-of-the-art results on multiple benchmark
datasets.
| Zihang Dai, Zhilin Yang, Fan Yang, William W. Cohen, Ruslan
Salakhutdinov | null | 1705.09783 | null | null |
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks | cs.LG cs.AI cs.DC stat.ML | New types of machine learning hardware in development and entering the market
hold the promise of revolutionizing deep learning in a manner as profound as
GPUs. However, existing software frameworks and training algorithms for deep
learning have yet to evolve to fully leverage the capability of the new wave of
silicon. We already see the limitations of existing algorithms for models that
exploit structured input via complex and instance-dependent control flow, which
prohibits minibatching. We present an asynchronous model-parallel (AMP)
training algorithm that is specifically motivated by training on networks of
interconnected devices. Through an implementation on multi-core CPUs, we show
that AMP training converges to the same accuracy as conventional synchronous
training algorithms in a similar number of epochs, but utilizes the available
hardware more efficiently even for small minibatch sizes, resulting in
significantly shorter overall training times. Our framework opens the door for
scaling up a new class of deep learning models that cannot be efficiently
trained today.
| Alexander L. Gaunt, Matthew A. Johnson, Maik Riechert, Daniel Tarlow,
Ryota Tomioka, Dimitrios Vytiniotis, Sam Webster | null | 1705.09786 | null | null |
Deep Complex Networks | cs.NE cs.LG | At present, the vast majority of building blocks, techniques, and
architectures for deep learning are based on real-valued operations and
representations. However, recent work on recurrent neural networks and older
fundamental theoretical analysis suggests that complex numbers could have a
richer representational capacity and could also facilitate noise-robust memory
retrieval mechanisms. Despite their attractive properties and potential for
opening up entirely new neural architectures, complex-valued deep neural
networks have been marginalized due to the absence of the building blocks
required to design such models. In this work, we provide the key atomic
components for complex-valued deep neural networks and apply them to
convolutional feed-forward networks and convolutional LSTMs. More precisely, we
rely on complex convolutions and present algorithms for complex
batch-normalization, complex weight initialization strategies for
complex-valued neural nets and we use them in experiments with end-to-end
training schemes. We demonstrate that such complex-valued models are
competitive with their real-valued counterparts. We test deep complex models on
several computer vision tasks, on music transcription using the MusicNet
dataset and on Speech Spectrum Prediction using the TIMIT dataset. We achieve
state-of-the-art performance on these audio-related tasks.
| Chiheb Trabelsi, Olexa Bilaniuk, Ying Zhang, Dmitriy Serdyuk, Sandeep
Subramanian, Jo\~ao Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua
Bengio, Christopher J Pal | null | 1705.09792 | null | null |
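The complex building blocks described above ultimately rest on the identity $(A + iB)(x + iy) = (Ax - By) + i(Bx + Ay)$, applied with matrix products or convolutions in place of scalar products. A minimal numpy sketch of a complex-valued dense layer in this split real/imaginary form (illustrative only, not the authors' implementation; names like `complex_dense` are made up):

```python
import numpy as np

def complex_dense(x_re, x_im, W_re, W_im):
    """Complex affine map stored as two real matrices.

    real part = W_re @ x_re - W_im @ x_im
    imag part = W_im @ x_re + W_re @ x_im
    The same decomposition carries over to convolutions by replacing @ with conv.
    """
    return W_re @ x_re - W_im @ x_im, W_im @ x_re + W_re @ x_im

# Sanity check against numpy's native complex arithmetic.
rng = np.random.default_rng(0)
W_re, W_im = rng.standard_normal((3, 4)), rng.standard_normal((3, 4))
x_re, x_im = rng.standard_normal(4), rng.standard_normal(4)
re, im = complex_dense(x_re, x_im, W_re, W_im)
assert np.allclose(re + 1j * im, (W_re + 1j * W_im) @ (x_re + 1j * x_im))
```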
Growth-Optimal Portfolio Selection under CVaR Constraints | q-fin.MF cs.LG | Online portfolio selection research has so far focused mainly on minimizing
regret defined in terms of wealth growth. Practical financial decision making,
however, is deeply concerned with both wealth and risk. We consider online
learning of portfolios of stocks whose prices are governed by arbitrary
(unknown) stationary and ergodic processes, where the goal is to maximize
wealth while keeping the conditional value at risk (CVaR) below a desired
threshold. We characterize the asymptotically optimal risk-adjusted
performance and present an investment strategy whose portfolios are guaranteed
to achieve the asymptotic optimal solution while fulfilling the desired risk
constraint. We also numerically demonstrate and validate the viability of our
method on standard datasets.
| Guy Uziel and Ran El-Yaniv | null | 1705.09800 | null | null |
PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured
State Representations | cs.RO cs.CV cs.LG | We propose position-velocity encoders (PVEs) which learn---without
supervision---to encode images to positions and velocities of task-relevant
objects. PVEs encode a single image into a low-dimensional position state and
compute the velocity state from finite differences in position. In contrast to
autoencoders, position-velocity encoders are not trained by image
reconstruction, but by making the position-velocity representation consistent
with priors about interacting with the physical world. We applied PVEs to
several simulated control tasks from pixels and achieved promising preliminary
results.
| Rico Jonschkowski, Roland Hafner, Jonathan Scholz, and Martin
Riedmiller | null | 1705.09805 | null | null |
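At its simplest, the position-velocity split above amounts to encoding each frame to a low-dimensional position and taking finite differences for velocity. A minimal sketch with a stand-in linear encoder (the actual PVE encoder and its physics-prior training objective are not reproduced here; `encode_position` and the shapes are illustrative):

```python
import numpy as np

def encode_position(image, projection):
    """Stand-in for a learned encoder mapping an image to a low-dim position."""
    return projection @ image.ravel()

def position_velocity_state(frames, projection, dt=1.0):
    """Velocity is not predicted; it is the finite difference of encoded positions."""
    positions = np.stack([encode_position(f, projection) for f in frames])
    velocities = np.diff(positions, axis=0) / dt
    return positions[1:], velocities     # state at frame t: (pos_t, (pos_t - pos_{t-1}) / dt)

rng = np.random.default_rng(0)
frames = rng.random((5, 16, 16))                 # five toy 16x16 "images"
projection = rng.standard_normal((2, 256))       # 2-D position state
pos, vel = position_velocity_state(frames, projection)
print(pos.shape, vel.shape)                      # (4, 2) (4, 2)
```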
Global hard thresholding algorithms for joint sparse image
representation and denoising | cs.CV cs.LG | Sparse coding of images is traditionally done by cutting them into small
patches and representing each patch individually over some dictionary given a
pre-determined number of nonzero coefficients to use for each patch. In the absence of
a way to effectively distribute a total number (or global budget) of nonzero
coefficients across all patches, current sparse recovery algorithms distribute
the global budget equally across all patches despite the wide range of
differences in structural complexity among them. In this work we propose a new
framework for joint sparse representation and recovery of all image patches
simultaneously. We also present two novel global hard thresholding algorithms,
based on the notion of variable splitting, for solving the joint sparse model.
Experimentation using both synthetic and real data shows effectiveness of the
proposed framework for sparse image representation and denoising tasks.
Additionally, time complexity analysis of the proposed algorithms indicates high
scalability of both algorithms, making them favorable to use on large megapixel
images.
| Reza Borhani, Jeremy Watt, Aggelos Katsaggelos | null | 1705.09816 | null | null |
Lifelong Generative Modeling | stat.ML cs.LG | Lifelong learning is the problem of learning multiple consecutive tasks in a
sequential manner, where knowledge gained from previous tasks is retained and
used to aid future learning over the lifetime of the learner. It is essential
towards the development of intelligent machines that can adapt to their
surroundings. In this work we focus on a lifelong learning approach to
unsupervised generative modeling, where we continuously incorporate newly
observed distributions into a learned model. We do so through a student-teacher
Variational Autoencoder architecture which allows us to learn and preserve all
the distributions seen so far, without the need to retain the past data nor the
past models. Through the introduction of a novel cross-model regularizer,
inspired by a Bayesian update rule, the student model leverages the information
learned by the teacher, which acts as a probabilistic knowledge store. The
regularizer reduces the effect of catastrophic interference that appears when
we learn over sequences of distributions. We validate our model's performance
on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A
and demonstrate that our model mitigates the effects of catastrophic
interference faced by neural networks in sequential learning scenarios.
| Jason Ramapuram, Magda Gregorova, Alexandros Kalousis | 10.1016/j.neucom.2020.02.115 | 1705.09847 | null | null |
Efficient Modeling of Latent Information in Supervised Learning using
Gaussian Processes | stat.ML cs.LG | Often in machine learning, data are collected as a combination of multiple
conditions, e.g., the voice recordings of multiple persons, each labeled with
an ID. How could we build a model that captures the latent information related
to these conditions and generalize to a new one with few data? We present a new
model called Latent Variable Multiple Output Gaussian Processes (LVMOGP) that
allows us to jointly model multiple conditions for regression and to generalize
to a new condition with a few data points at test time. LVMOGP infers the
posteriors of Gaussian processes together with a latent space representing the
information about different conditions. We derive an efficient variational
inference method for LVMOGP, of which the computational complexity is as low as
sparse Gaussian processes. We show that LVMOGP significantly outperforms
related Gaussian process methods on various tasks with both synthetic and real
data.
| Zhenwen Dai, Mauricio A. \'Alvarez, Neil D. Lawrence | null | 1705.09862 | null | null |
BMXNet: An Open-Source Binary Neural Network Implementation Based on
MXNet | cs.LG cs.CV cs.NE | Binary Neural Networks (BNNs) can drastically reduce memory size and accesses
by applying bit-wise operations instead of standard arithmetic operations.
Therefore, they can significantly improve efficiency and lower energy
consumption at runtime, which enables the application of state-of-the-art deep
learning models on low power devices. BMXNet is an open-source BNN library
based on MXNet, which supports both XNOR-Networks and Quantized Neural
Networks. The developed BNN layers can be seamlessly applied with other
standard library components and work in both GPU and CPU mode. BMXNet is
maintained and developed by the multimedia research group at Hasso Plattner
Institute and released under Apache license. Extensive experiments validate the
efficiency and effectiveness of our implementation. The BMXNet library, several
sample projects, and a collection of pre-trained binary deep models are
available for download at https://github.com/hpi-xnor
| Haojin Yang, Martin Fritzsche, Christian Bartz, Christoph Meinel | null | 1705.09864 | null | null |
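The bit-wise arithmetic such BNN libraries exploit rests on a simple identity: for two $\{-1, +1\}$ vectors of length n that agree in m positions, the dot product equals 2m - n, so a multiply-accumulate becomes an XNOR plus a popcount over packed bits. A small numpy check of that identity (conceptual only; it does not use BMXNet's layers or bit-packing):

```python
import numpy as np

def binarize(x):
    """Binarize real values to {-1, +1} by sign (zeros mapped to +1)."""
    return np.where(x >= 0, 1, -1)

def binary_dot(a_bits, b_bits):
    """Dot product of two {-1, +1} vectors via the agreement-count identity."""
    agreements = np.count_nonzero(a_bits == b_bits)   # the 'popcount of the XNOR'
    return 2 * agreements - a_bits.size

rng = np.random.default_rng(0)
a, b = binarize(rng.standard_normal(64)), binarize(rng.standard_normal(64))
assert binary_dot(a, b) == int(np.dot(a, b))          # matches ordinary arithmetic
```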
Dimensionality reduction for acoustic vehicle classification with
spectral embedding | stat.ML cs.LG physics.data-an | We propose a method for recognizing moving vehicles, using data from roadside
audio sensors. This problem has applications ranging widely, from traffic
analysis to surveillance. We extract a frequency signature from the audio
signal using a short-time Fourier transform, and treat each time window as an
individual data point to be classified. By applying a spectral embedding, we
decrease the dimensionality of the data sufficiently for K-nearest neighbors to
provide accurate vehicle identification.
| Justin Sunu, Allon G. Percus | null | 1705.09869 | null | null |
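The pipeline sketched in this abstract (short-time Fourier features, spectral embedding, K-nearest-neighbour classification) maps directly onto standard scipy/scikit-learn components. A hedged end-to-end sketch with synthetic tones standing in for the roadside recordings (window length, embedding dimension and neighbour count are placeholder choices, not the paper's settings):

```python
import numpy as np
from scipy.signal import stft
from sklearn.manifold import SpectralEmbedding
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
fs = 8000
signals, labels = [], []
for label, freq in enumerate([120.0, 300.0]):            # two stand-in "vehicle types"
    for _ in range(20):
        t = np.arange(fs) / fs                            # one second of audio
        signals.append(np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(fs))
        labels.append(label)

# Frequency signature: each STFT time window becomes one labelled data point.
X, y = [], []
for sig, label in zip(signals, labels):
    _, _, Z = stft(sig, fs=fs, nperseg=256)
    X.append(np.abs(Z).T)                                 # (windows, frequency bins)
    y.append(np.full(Z.shape[1], label))
X, y = np.vstack(X), np.concatenate(y)

# Spectral embedding lowers the dimensionality before K-nearest neighbours.
X_emb = SpectralEmbedding(n_components=4, random_state=0).fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_emb, y, test_size=0.3, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("window-level accuracy:", knn.score(X_te, y_te))
```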
Convergence Analysis of Two-layer Neural Networks with ReLU Activation | cs.LG | In recent years, stochastic gradient descent (SGD) based techniques have
become the standard tools for training neural networks. However, formal
theoretical understanding of why SGD can train neural networks in practice is
largely missing.
In this paper, we make progress on understanding this mystery by providing a
convergence analysis for SGD on a rich subset of two-layer feedforward networks
with ReLU activations. This subset is characterized by a special structure
called "identity mapping". We prove that, if input follows from Gaussian
distribution, with standard $O(1/\sqrt{d})$ initialization of the weights, SGD
converges to the global minimum in a polynomial number of steps. Unlike normal
vanilla networks, the "identity mapping" makes our network asymmetric and thus
the global minimum is unique. To complement our theory, we are also able to
show experimentally that multi-layer networks with this mapping have better
performance compared with normal vanilla networks.
Our convergence theorem differs from traditional non-convex optimization
techniques. We show that SGD converges to optimal in "two phases": In phase I,
the gradient points in the wrong direction; however, a potential function $g$
gradually decreases. Then in phase II, SGD enters a nice one point convex
region and converges. We also show that the identity mapping is necessary for
convergence, as it moves the initial point to a better place for optimization.
Experiment verifies our claims.
| Yuanzhi Li, Yang Yuan | null | 1705.09886 | null | null |
Bayesian Unification of Gradient and Bandit-based Learning for
Accelerated Global Optimisation | cs.AI cs.LG | Bandit based optimisation has a remarkable advantage over gradient based
approaches due to their global perspective, which eliminates the danger of
getting stuck at local optima. However, for continuous optimisation problems or
problems with a large number of actions, bandit based approaches can be
hindered by slow learning. Gradient based approaches, on the other hand,
navigate quickly in high-dimensional continuous spaces through local
optimisation, following the gradient in fine grained steps. Yet, apart from
being susceptible to local optima, these schemes are less suited for online
learning due to their reliance on extensive trial-and-error before the optimum
can be identified. In this paper, we propose a Bayesian approach that unifies
the above two paradigms in one single framework, with the aim of combining
their advantages. At the heart of our approach we find a stochastic linear
approximation of the function to be optimised, where both the gradient and
values of the function are explicitly captured. This allows us to learn from
both noisy function and gradient observations, and predict these properties
across the action space to support optimisation. We further propose an
accompanying bandit driven exploration scheme that uses Bayesian credible
bounds to trade off exploration against exploitation. Our empirical results
demonstrate that by unifying bandit and gradient based learning, one obtains
consistently improved performance across a wide spectrum of problem
environments. Furthermore, even when gradient feedback is unavailable, the
flexibility of our model, including gradient prediction, still allows us to
outperform competing approaches, although with a smaller margin. Due to the
pervasiveness of bandit based optimisation, our scheme opens up for improved
performance both in meta-optimisation and in applications where gradient
related information is readily available.
| Ole-Christoffer Granmo | 10.1109/ICMLA.2016.0044 | 1705.09922 | null | null |
Learning Data Manifolds with a Cutting Plane Method | cs.LG stat.ML | We consider the problem of classifying data manifolds where each manifold
represents invariances that are parameterized by continuous degrees of freedom.
Conventional data augmentation methods rely upon sampling large numbers of
training examples from these manifolds; instead, we propose an iterative
algorithm called $M_{CP}$ based upon a cutting-plane approach that efficiently
solves a quadratic semi-infinite programming problem to find the maximum margin
solution. We provide a proof of convergence as well as a polynomial bound on
the number of iterations required for a desired tolerance in the objective
function. The efficiency and performance of $M_{CP}$ are demonstrated in
high-dimensional simulations and on image manifolds generated from the ImageNet
dataset. Our results indicate that $M_{CP}$ is able to rapidly learn good
classifiers and shows superior generalization performance compared with
conventional maximum margin methods using data augmentation methods.
| SueYeon Chung, Uri Cohen, Haim Sompolinsky, Daniel D. Lee | 10.1162/neco_a_01119 | 1705.09944 | null | null |
Attribute-Guided Face Generation Using Conditional CycleGAN | cs.CV cs.LG stat.ML | We are interested in attribute-guided face generation: given a low-res face
input image, an attribute vector that can be extracted from a high-res image
(attribute image), our new method generates a high-res face image for the
low-res input that satisfies the given attributes. To address this problem, we
condition the CycleGAN and propose conditional CycleGAN, which is designed to
1) handle unpaired training data because the training low/high-res and high-res
attribute images may not necessarily align with each other, and to 2) allow
easy control of the appearance of the generated face via the input attributes.
We demonstrate impressive results on the attribute-guided conditional CycleGAN,
which can synthesize realistic face images with appearance easily controlled by
user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using
the attribute image as identity to produce the corresponding conditional vector
and by incorporating a face verification network, the attribute-guided network
becomes the identity-guided conditional CycleGAN which produces impressive and
interesting results on identity transfer. We demonstrate three applications on
identity-guided conditional CycleGAN: identity-preserving face superresolution,
face swapping, and frontal face generation, which consistently show the
advantage of our new method.
| Yongyi Lu, Yu-Wing Tai, Chi-Keung Tang | null | 1705.09966 | null | null |
Deep Learning for User Comment Moderation | cs.CL cs.LG | Experimenting with a new dataset of 1.6M user comments from a Greek news
portal and existing datasets of English Wikipedia comments, we show that an RNN
outperforms the previous state of the art in moderation. A deep,
classification-specific attention mechanism improves further the overall
performance of the RNN. We also compare against a CNN and a word-list baseline,
considering both fully automatic and semi-automatic moderation.
| John Pavlopoulos and Prodromos Malakasiotis and Ion Androutsopoulos | null | 1705.09993 | null | null |
Improving the Expected Improvement Algorithm | cs.LG stat.ML | The expected improvement (EI) algorithm is a popular strategy for information
collection in optimization under uncertainty. The algorithm is widely known to
be too greedy, but nevertheless enjoys wide use due to its simplicity and
ability to handle uncertainty and noise in a coherent decision theoretic
framework. To provide rigorous insight into EI, we study its properties in a
simple setting of Bayesian optimization where the domain consists of a finite
grid of points. This is the so-called best-arm identification problem, where
the goal is to allocate measurement effort wisely to confidently identify the
best arm using a small number of measurements. In this framework, one can show
formally that EI is far from optimal. To overcome this shortcoming, we
introduce a simple modification of the expected improvement algorithm.
Surprisingly, this simple change results in an algorithm that is asymptotically
optimal for Gaussian best-arm identification problems, and provably outperforms
standard EI by an order of magnitude.
| Chao Qin, Diego Klabjan, and Daniel Russo | null | 1705.10033 | null | null |
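For reference, the standard (unmodified) EI acquisition that the abstract critiques has a closed form under a Gaussian posterior: $\mathrm{EI} = (\mu - f^*)\Phi(z) + \sigma\phi(z)$ with $z = (\mu - f^*)/\sigma$, where $f^*$ is the incumbent best value. A minimal finite-arm sketch of plain EI (the arm posteriors below are made up, and this is not the asymptotically optimal modification the paper proposes):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    """Closed-form EI for each arm under independent Gaussian posteriors."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero variance
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical posterior means/stds over five arms after some measurements.
mu = np.array([0.10, 0.30, 0.55, 0.50, 0.20])
sigma = np.array([0.05, 0.20, 0.05, 0.25, 0.10])
ei = expected_improvement(mu, sigma, best=mu.max())
print(np.round(ei, 4), "-> next measurement goes to arm", int(np.argmax(ei)))
```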
Learning Network Structures from Contagion | cs.LG cs.SI | In 2014, Amin, Heidari, and Kearns proved that tree networks can be learned
by observing only the infected set of vertices of the contagion process under
the independent cascade model, in both the active and passive query models.
They also showed empirically that simple extensions of their algorithms work on
sparse networks. In this work, we focus on the active model. We prove that a
simple modification of Amin et al.'s algorithm works on more general classes of
networks, namely (i) networks with large girth and low path growth rate, and
(ii) networks with bounded degree. This also provides partial theoretical
explanation for Amin et al.'s experiments on sparse networks.
| Adisak Supeesun (Kasetsart University, Bangkok, Thailand) and Jittat
Fakcharoenphol (Kasetsart University, Bangkok, Thailand) | 10.1016/j.ipl.2017.01.005 | 1705.10051 | null | null |
Temporal anomaly detection: calibrating the surprise | cs.CR cs.LG | We propose a hybrid approach to temporal anomaly detection in access data of
users to databases --- or more generally, any kind of subject-object
co-occurrence data. We consider a high-dimensional setting that also requires
fast computation at test time. Our methodology identifies anomalies based on a
single stationary model, instead of requiring a full temporal one, which would
be prohibitive in this setting. We learn a low-rank stationary model from the
training data, and then fit a regression model for predicting the expected
likelihood score of normal access patterns in the future. The disparity between
the predicted likelihood score and the observed one is used to assess the
`surprise' at test time. This approach enables calibration of the anomaly
score, so that time-varying normal behavior patterns are not considered
anomalous. We provide a detailed description of the algorithm, including a
convergence analysis, and report encouraging empirical results. One of the data
sets that we tested, TDA, is new for the public domain. It consists of two
months' worth of database access records from a live system. Our code is
publicly available at https://github.com/eyalgut/TLR_anomaly_detection.git. The
TDA data set is available at
https://www.kaggle.com/eyalgut/binary-traffic-matrices.
| Eyal Gutflaish, Aryeh Kontorovich, Sivan Sabato, Ofer Biller, Oded
Sofer | null | 1705.10085 | null | null |
DICOD: Distributed Convolutional Sparse Coding | cs.LG stat.ML | In this paper, we introduce DICOD, a convolutional sparse coding algorithm
which builds shift invariant representations for long signals. This algorithm
is designed to run in a distributed setting, with local message passing, making
it communication efficient. It is based on coordinate descent and uses locally
greedy updates which accelerate the resolution compared to greedy coordinate
selection. We prove the convergence of this algorithm and highlight its
computational speed-up which is super-linear in the number of cores used. We
also provide empirical evidence for the acceleration properties of our
algorithm compared to state-of-the-art methods.
| Thomas Moreau, Laurent Oudre, Nicolas Vayatis | null | 1705.10087 | null | null |
Structural Conditions for Projection-Cost Preservation via Randomized
Matrix Multiplication | stat.ML cs.LG | Projection-cost preservation is a low-rank approximation guarantee which
ensures that the cost of any rank-$k$ projection can be preserved using a
smaller sketch of the original data matrix. We present a general structural
result outlining four sufficient conditions to achieve projection-cost
preservation. These conditions can be satisfied using tools from the Randomized
Linear Algebra literature.
| Agniva Chowdhury, Jiasen Yang, Petros Drineas | null | 1705.10102 | null | null |
Kernel Implicit Variational Inference | stat.ML cs.AI cs.LG cs.NE | Recent progress in variational inference has paid much attention to the
flexibility of variational posteriors. One promising direction is to use
implicit distributions, i.e., distributions without tractable densities as the
variational posterior. However, existing methods on implicit posteriors still
face challenges of noisy estimation and computational infeasibility when
applied to models with high-dimensional latent variables. In this paper, we
present a new approach named Kernel Implicit Variational Inference that
addresses these challenges. As far as we know, for the first time implicit
variational inference is successfully applied to Bayesian neural networks,
which shows promising results on both regression and classification tasks.
| Jiaxin Shi, Shengyang Sun, Jun Zhu | null | 1705.10119 | null | null |
On Residual CNN in text-dependent speaker verification task | cs.SD cs.LG | Deep learning approaches are still not very common in the speaker
verification field. We investigate the possibility of using deep residual
convolutional neural network with spectrograms as an input features in the
text-dependent speaker verification task. Despite the fact that we were not
able to surpass the baseline system in quality, we achieved quite good
results for such a new approach, obtaining a 5.23% EER on the RSR2015 evaluation
part. Fusion of the baseline and proposed systems outperformed the best
individual system by 18% relatively.
| Egor Malykh, Sergey Novoselov, Oleg Kudashev | null | 1705.10134 | null | null |
Kronecker Recurrent Units | cs.LG | Our work addresses two important issues with recurrent neural networks: (1)
they are over-parameterized, and (2) the recurrence matrix is ill-conditioned.
The former increases the sample complexity of learning and the training time.
The latter causes the vanishing and exploding gradient problem. We present a
flexible recurrent neural network model called Kronecker Recurrent Units (KRU).
KRU achieves parameter efficiency in RNNs through a Kronecker factored
recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by
enforcing soft unitary constraints on the factors. Thanks to the small
dimensionality of the factors, maintaining these constraints is computationally
efficient. Our experimental results on seven standard data-sets reveal that KRU
can reduce the number of parameters by three orders of magnitude in the
recurrent weight matrix compared to the existing recurrent models, without
trading off statistical performance. These results in particular show that
while there are advantages in having a high dimensional recurrent space, the
capacity of the recurrent part of the model can be dramatically reduced.
| Cijo Jose, Moustpaha Cisse and Francois Fleuret | null | 1705.10142 | null | null |
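The parameter saving claimed above comes from storing the recurrent matrix as a Kronecker product of much smaller factors, which also lets the recurrent update be applied without materializing the full matrix. A numpy sketch of the counting argument and the vec-trick (the 16 x 16 factor shapes are illustrative, not the paper's configurations, and the soft unitary constraints are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
dense_params = n * n                          # a full 256 x 256 recurrent matrix

# Kronecker-factored recurrent matrix: W = kron(A, B) with two 16 x 16 factors.
A = rng.standard_normal((16, 16))
B = rng.standard_normal((16, 16))
W = np.kron(A, B)

print(W.shape)                                # (256, 256)
print(dense_params, "->", A.size + B.size)    # 65536 -> 512 free parameters

# The recurrent update never needs the full W: with the hidden state reshaped
# (row-major) to 16 x 16, kron(A, B) @ h equals (A @ H @ B.T) flattened.
h = rng.standard_normal(n)
fast = (A @ h.reshape(16, 16) @ B.T).reshape(n)
assert np.allclose(W @ h, fast)
```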
Tangent Cones to TT Varieties | math.OC cs.LG math.AG math.NA | As already done for the matrix case for example in [Joe Harris, Algebraic
Geometry - A first course, p.256] we give a parametrization of the Bouligand
tangent cone of the variety of tensors of bounded TT rank. We discuss how the
proof generalizes to any binary hierarchical format. The parametrization can be
rewritten as an orthogonal sum of TT tensors. Its retraction onto the variety
is particularly easy to compose. We also give an implicit description of the
tangent cone as the solution of a system of polynomial equations.
| Benjamin Kutschan | null | 1705.10152 | null | null |
Fast learning rate of deep learning via a kernel perspective | math.ST cs.LG stat.ML stat.TH | We develop a new theoretical framework to analyze the generalization error of
deep learning, and derive a new fast learning rate for two representative
algorithms: empirical risk minimization and Bayesian deep learning. The series
of theoretical analyses of deep learning has revealed its high expressive power
and universal approximation capability. Although these analyses are highly
nonparametric, existing generalization error analyses have been developed
mainly in a fixed dimensional parametric model. To compensate for this gap, we
develop an infinite dimensional model that is based on an integral form as
performed in the analysis of the universal approximation capability. This
allows us to define a reproducing kernel Hilbert space corresponding to each
layer. Our point of view is to deal with the ordinary finite dimensional deep
neural network as a finite approximation of the infinite dimensional one. The
approximation error is evaluated by the degree of freedom of the reproducing
kernel Hilbert space in each layer. To estimate a good finite dimensional
model, we consider both of empirical risk minimization and Bayesian deep
learning. We derive its generalization error bound and show that a
bias-variance trade-off appears in terms of the number of parameters of the
finite dimensional approximation. We show that the optimal width of the
internal layers can be determined through the degree of freedom and the
convergence rate can be faster than $O(1/\sqrt{n})$ rate which has been shown
in the existing studies.
| Taiji Suzuki | null | 1705.10182 | null | null |
Adaptive Classification for Prediction Under a Budget | stat.ML cs.LG | We propose a novel adaptive approximation approach for test-time
resource-constrained prediction. Given an input instance at test-time, a gating
function identifies a prediction model for the input among a collection of
models. Our objective is to minimize overall average cost without sacrificing
accuracy. We learn gating and prediction models on fully labeled training data
by means of a bottom-up strategy. Our novel bottom-up method first trains a
high-accuracy complex model. Then low-complexity gating and prediction models
are subsequently learned to adaptively approximate the high-accuracy model in
regions where low-cost models are capable of making highly accurate
predictions. We pose an empirical loss minimization problem with cost
constraints to jointly train gating and prediction models. On a number of
benchmark datasets our method outperforms state-of-the-art achieving higher
accuracy for the same cost.
| Feng Nan, Venkatesh Saligrama | null | 1705.10194 | null | null |
Mining Process Model Descriptions of Daily Life through Event
Abstraction | cs.LG cs.AI cs.DB | Process mining techniques focus on extracting insight in processes from event
logs. Process mining has the potential to provide valuable insights in
(un)healthy habits and to contribute to ambient assisted living solutions when
applied on data from smart home environments. However, events recorded in smart
home environments are on the level of sensor triggers, at which process
discovery algorithms produce overgeneralizing process models that allow for too
much behavior and that are difficult to interpret for human experts. We show
that abstracting the events to a higher-level interpretation can enable
discovery of more precise and more comprehensible models. We present a
framework for the extraction of features that can be used for abstraction with
supervised learning methods that is based on the XES IEEE standard for event
logs. This framework can automatically abstract sensor-level events to their
interpretation at the human activity level, after training it on training data
for which both the sensor and human activity events are known. We demonstrate
our abstraction framework on three real-life smart home event logs and show
that the process models that can be discovered after abstraction are more
precise indeed.
| Niek Tax, Natalia Sidorova, Reinder Haakma, Wil M.P. van der Aalst | 10.1007/978-3-319-69266-1_5 | 1705.10202 | null | null |
On Multilingual Training of Neural Dependency Parsers | cs.CL cs.LG cs.NE | We show that a recently proposed neural dependency parser can be improved by
joint training on multiple languages from the same family. The parser is
implemented as a deep neural network whose only input is orthographic
representations of words. In order to successfully parse, the network has to
discover how linguistically relevant concepts can be inferred from word
spellings. We analyze the representations of characters and words that are
learned by the network to establish which properties of languages were
accounted for. In particular we show that the parser has approximately learned
to associate Latin characters with their Cyrillic counterparts and that it can
group Polish and Russian words that have a similar grammatical function.
Finally, we evaluate the parser on selected languages from the Universal
Dependencies dataset and show that it is competitive with other recently
proposed state-of-the art methods, while having a simple structure.
| Micha{\l} Zapotoczny, Pawe{\l} Rychlikowski, and Jan Chorowski | null | 1705.10209 | null | null |
Latent Intention Dialogue Models | cs.CL cs.LG cs.NE stat.ML | Developing a dialogue agent that is capable of making autonomous decisions
and communicating by natural language is one of the long-term goals of machine
learning research. Traditional approaches either rely on hand-crafting a small
state-action set for applying reinforcement learning that is not scalable or
constructing deterministic models for learning dialogue sentences that fail to
capture natural conversational variability. In this paper, we propose a Latent
Intention Dialogue Model (LIDM) that employs a discrete latent variable to
learn underlying dialogue intentions in the framework of neural variational
inference. In a goal-oriented dialogue scenario, these latent intentions can be
interpreted as actions guiding the generation of machine responses, which can
be further refined autonomously by reinforcement learning. The experimental
evaluation of LIDM shows that the model out-performs published benchmarks for
both corpus-based and human evaluation, demonstrating the effectiveness of
discrete latent variable models for learning goal-oriented dialogues.
| Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, Steve Young | null | 1705.10229 | null | null |
Deep Learning for Patient-Specific Kidney Graft Survival Analysis | cs.LG stat.ML | An accurate model of patient-specific kidney graft survival distributions can
help to improve shared-decision making in the treatment and care of patients.
In this paper, we propose a deep learning method that directly models the
survival function instead of estimating the hazard function to predict survival
times for graft patients based on the principle of multi-task learning. By
learning to jointly predict the time of the event, and its rank in the cox
partial log likelihood framework, our deep learning approach outperforms, in
terms of survival time prediction quality and concordance index, other common
methods for survival analysis, including the Cox Proportional Hazards model and
a network trained on the cox partial log-likelihood.
| Margaux Luck, Tristan Sylvain, H\'elo\"ise Cardinal, Andrea Lodi,
Yoshua Bengio | null | 1705.10245 | null | null |
Fast Single-Class Classification and the Principle of Logit Separation | stat.ML cs.LG | We consider neural network training, in applications in which there are many
possible classes, but at test-time, the task is a binary classification task of
determining whether the given example belongs to a specific class, where the
class of interest can be different each time the classifier is applied. For
instance, this is the case for real-time image search. We define the Single
Logit Classification (SLC) task: training the network so that at test-time, it
would be possible to accurately identify whether the example belongs to a given
class in a computationally efficient manner, based only on the output logit for
this class. We propose a natural principle, the Principle of Logit Separation,
as a guideline for choosing and designing losses suitable for the SLC. We show
that the cross-entropy loss function is not aligned with the Principle of Logit
Separation. In contrast, there are known loss functions, as well as novel batch
loss functions that we propose, which are aligned with this principle. In
total, we study seven loss functions. Our experiments show that indeed in
almost all cases, losses that are aligned with the Principle of Logit
Separation obtain at least 20% relative accuracy improvement in the SLC task
compared to losses that are not aligned with it, and sometimes considerably
more. Furthermore, we show that fast SLC does not cause any drop in binary
classification accuracy, compared to standard classification in which all
logits are computed, and yields a speedup which grows with the number of
classes. For instance, we demonstrate a 10x speedup when the number of classes
is 400,000. Tensorflow code for optimizing the new batch losses is publicly
available at https://github.com/cruvadom/Logit_Separation.
| Gil Keren, Sivan Sabato, Bj\"orn Schuller | null | 1705.10246 | null | null |
Boltzmann Exploration Done Right | cs.LG stat.ML | Boltzmann exploration is a classic strategy for sequential decision-making
under uncertainty, and is one of the most standard tools in Reinforcement
Learning (RL). Despite its widespread use, there is virtually no theoretical
understanding about the limitations or the actual benefits of this exploration
scheme. Does it drive exploration in a meaningful way? Is it prone to
misidentifying the optimal actions or spending too much time exploring the
suboptimal ones? What is the right tuning for the learning rate? In this paper,
we address several of these questions in the classic setup of stochastic
multi-armed bandits. One of our main results is showing that the Boltzmann
exploration strategy with any monotone learning-rate sequence will induce
suboptimal behavior. As a remedy, we offer a simple non-monotone schedule that
guarantees near-optimal performance, albeit only when given prior access to key
problem parameters that are typically not available in practical situations
(like the time horizon $T$ and the suboptimality gap $\Delta$). More
importantly, we propose a novel variant that uses different learning rates for
different arms, and achieves a distribution-dependent regret bound of order
$\frac{K\log^2 T}{\Delta}$ and a distribution-independent bound of order
$\sqrt{KT}\log K$ without requiring such prior knowledge. To demonstrate the
flexibility of our technique, we also propose a variant that guarantees the
same performance bounds even if the rewards are heavy-tailed.
| Nicol\`o Cesa-Bianchi and Claudio Gentile and G\'abor Lugosi and
Gergely Neu | null | 1705.10257 | null | null |
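The classic scheme analyzed above draws each arm with probability proportional to $\exp(\eta_t \hat{\mu}_a)$, a softmax over empirical means. A minimal sketch of that baseline with a single monotone learning-rate schedule, i.e. the kind of schedule the paper shows can induce suboptimal behavior, not the per-arm remedy it proposes (the schedule and the Bernoulli arms below are placeholder choices):

```python
import numpy as np

def boltzmann_policy(means, eta):
    """Softmax over empirical means with inverse temperature (learning rate) eta."""
    logits = eta * means
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])
means, counts = np.zeros(3), np.zeros(3)
for t in range(1, 2001):
    eta = np.log(t + 1)                       # one monotone schedule (illustrative only)
    p = boltzmann_policy(means, eta)
    arm = rng.choice(3, p=p)
    reward = float(rng.random() < true_means[arm])     # Bernoulli reward
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
print(counts)                                 # pull counts under this schedule
```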
Contextual Explanation Networks | cs.LG cs.AI stat.ML | Modern learning algorithms excel at producing accurate but complex models of
the data. However, deploying such models in the real-world requires extra care:
we must ensure their reliability, robustness, and absence of undesired biases.
This motivates the development of models that are equally accurate but can be
also easily inspected and assessed beyond their predictive performance. To this
end, we introduce contextual explanation networks (CEN)---a class of
architectures that learn to predict by generating and utilizing intermediate,
simplified probabilistic models. Specifically, CENs generate parameters for
intermediate graphical models which are further used for prediction and play
the role of explanations. Contrary to the existing post-hoc model-explanation
tools, CENs learn to predict and to explain simultaneously. Our approach offers
two major advantages: (i) for each prediction, a valid, instance-specific
explanation is generated with no computational overhead and (ii) prediction via
explanation acts as a regularizer and boosts performance in data-scarce
settings. We analyze the proposed framework theoretically and experimentally.
Our results on image and text classification and survival analysis tasks
demonstrate that CENs are not only competitive with the state-of-the-art
methods but also offer additional insights behind each prediction, that can be
valuable for decision support. We also show that while post-hoc methods may
produce misleading explanations in certain cases, CENs are consistent and allow
to detect such cases systematically.
| Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing | null | 1705.10301 | null | null |
Classification of Major Depressive Disorder via Multi-Site Weighted
LASSO Model | cs.LG cs.CE stat.AP | Large-scale collaborative analysis of brain imaging data, in psychiatry and
neurology, offers a new source of statistical power to discover features that
boost accuracy in disease classification, differential diagnosis, and outcome
prediction. However, due to data privacy regulations or limited accessibility
to large datasets across the world, it is challenging to efficiently integrate
distributed information. Here we propose a novel classification framework
through multi-site weighted LASSO: each site performs an iterative weighted
LASSO for feature selection separately. Within each iteration, the
classification result and the selected features are collected to update the
weighting parameters for each feature. This new weight is used to guide the
LASSO process at the next iteration. Only the features that help to improve
the classification accuracy are preserved. In tests on data from five sites
(299 patients with major depressive disorder (MDD) and 258 normal controls),
our method boosted classification accuracy for MDD by 4.9% on average. This
result shows the potential of the proposed new strategy as an effective and
practical collaborative platform for machine learning on large scale
distributed imaging and biobank data.
| Dajiang Zhu, Brandalyn C. Riedel, Neda Jahanshad, Nynke A. Groenewold,
Dan J. Stein, Ian H. Gotlib, Matthew D. Sacchet, Danai Dima, James H. Cole,
Cynthia H.Y. Fu, Henrik Walter, Ilya M. Veer, Thomas Frodl, Lianne Schmaal,
Dick J. Veltman, Paul M. Thompson | null | 1705.10312 | null | null |
Deep Learning for Ontology Reasoning | cs.AI cs.LG | In this work, we present a novel approach to ontology reasoning that is based
on deep learning rather than logic-based formal reasoning. To this end, we
introduce a new model for statistical relational learning that is built upon
deep recursive neural networks, and give experimental evidence that it can
easily compete with, or even outperform, existing logic-based reasoners on the
task of ontology reasoning. More precisely, we compared our implemented system
with one of the best logic-based ontology reasoners at present, RDFox, on a
number of large standard benchmark datasets, and found that our system attained
high reasoning quality, while being up to two orders of magnitude faster.
| Patrick Hohenecker and Thomas Lukasiewicz | null | 1705.10342 | null | null |
Neural Embeddings of Graphs in Hyperbolic Space | stat.ML cs.LG | Neural embeddings have been used with great success in Natural Language
Processing (NLP). They provide compact representations that encapsulate word
similarity and attain state-of-the-art performance in a range of linguistic
tasks. The success of neural embeddings has prompted significant amounts of
research into applications in domains other than language. One such domain is
graph-structured data, where embeddings of vertices can be learned that
encapsulate vertex similarity and improve performance on tasks including edge
prediction and vertex labelling. For both NLP and graph based tasks, embeddings
have been learned in high-dimensional Euclidean spaces. However, recent work
has shown that the appropriate isometric space for embedding complex networks
is not the flat Euclidean space, but negatively curved, hyperbolic space. We
present a new concept that exploits these recent insights and propose learning
neural embeddings of graphs in hyperbolic space. We provide experimental
evidence that embedding graphs in their natural geometry significantly improves
performance on downstream tasks for several real-world public datasets.
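For concreteness, the hyperbolic (Poincare disk) distance commonly used for such embeddings can be computed as below; this is a generic illustration of the geometry, not the authors' training code.

    # Poincare-disk distance between embedding vectors (illustrative sketch).
    import numpy as np

    def poincare_distance(u, v, eps=1e-9):
        """Hyperbolic distance between points u, v with ||u||, ||v|| < 1."""
        sq = np.sum((u - v) ** 2)
        denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
        return np.arccosh(1.0 + 2.0 * sq / (denom + eps))

    u = np.array([0.1, 0.2])
    v = np.array([0.5, -0.3])
    print(poincare_distance(u, v))  # grows rapidly as points approach the boundary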
| Benjamin Paul Chamberlain, James Clough, Marc Peter Deisenroth | null | 1705.10359 | null | null |
Emergent Communication in a Multi-Modal, Multi-Step Referential Game | cs.LG cs.CL cs.CV cs.IT cs.MA math.IT | Inspired by previous work on emergent communication in referential games, we
propose a novel multi-modal, multi-step referential game, where the sender and
receiver have access to distinct modalities of an object, and their information
exchange is bidirectional and of arbitrary duration. The multi-modal multi-step
setting allows agents to develop an internal communication significantly closer
to natural language, in that they share a single set of messages, and that the
length of the conversation may vary according to the difficulty of the task. We
examine these properties empirically using a dataset consisting of images and
textual descriptions of mammals, where the agents are tasked with identifying
the correct object. Our experiments indicate that a robust and efficient
communication protocol emerges, where gradual information exchange informs
better predictions and higher communication bandwidth improves generalization.
| Katrina Evtimova, Andrew Drozdov, Douwe Kiela, Kyunghyun Cho | null | 1705.10369 | null | null |
Collaborative Deep Learning for Speech Enhancement: A Run-Time Model
Selection Method Using Autoencoders | cs.SD cs.LG | We show that a Modular Neural Network (MNN) can combine various speech
enhancement modules, each of which is a Deep Neural Network (DNN) specialized
on a particular enhancement job. Differently from an ordinary ensemble
technique that averages variations in models, the proposed MNN selects the best
module for the unseen test signal to produce a greedy ensemble. We see this as
Collaborative Deep Learning (CDL), because it can reuse various already-trained
DNN models without any further refining. In the proposed MNN, selecting the best
module at run time is challenging. To this end, we employ a speech
AutoEncoder (AE) as an arbitrator, whose input and output are trained to be as
similar as possible if its input is clean speech. Therefore, the AE can gauge
the quality of the module-specific denoised result by seeing its AE
reconstruction error, e.g. low error means that the module output is similar to
clean speech. We propose an MNN structure with various modules that are
specialized on dealing with a specific noise type, gender, and input
Signal-to-Noise Ratio (SNR) value, and empirically prove that it almost always
works better than an arbitrarily chosen DNN module and sometimes as good as an
oracle result.
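A minimal sketch of the run-time arbitration idea: each specialized module denoises the input, and the module whose output the clean-speech autoencoder reconstructs best is selected. The module and autoencoder callables below are hypothetical stand-ins, not the paper's trained networks.

    # Run-time module selection via autoencoder reconstruction error (sketch).
    import numpy as np

    def select_best_module(noisy, modules, autoencoder):
        """modules: callables mapping a noisy signal to a denoised signal.
        autoencoder: callable trained to reconstruct clean speech."""
        best_out, best_err = None, np.inf
        for denoise in modules:
            out = denoise(noisy)
            err = np.mean((autoencoder(out) - out) ** 2)  # low error ~ close to clean speech
            if err < best_err:
                best_out, best_err = out, err
        return best_out, best_err

    # Hypothetical stand-ins for the specialized DNN modules and the speech AE.
    modules = [lambda x: 0.9 * x, lambda x: x - np.mean(x)]
    autoencoder = lambda x: 0.95 * x
    noisy = np.random.default_rng(1).normal(size=128)
    denoised, err = select_best_module(noisy, modules, autoencoder)
    print(err)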
| Minje Kim | null | 1705.10385 | null | null |
Distributed SAGA: Maintaining linear convergence rate with limited
communication | math.OC cs.LG | In recent years, variance-reducing stochastic methods have shown great
practical performance, exhibiting linear convergence rate when other stochastic
methods offered a sub-linear rate. However, as datasets grow ever bigger and
clusters become widespread, the need for fast distribution methods is pressing.
We propose here a distribution scheme for SAGA which maintains a linear
convergence rate, even when communication between nodes is limited.
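For reference, a single-machine SAGA step on a least-squares objective looks like the sketch below; the paper's distributed scheme partitions these updates and the gradient memory across nodes, which is not shown here.

    # Plain (single-node) SAGA on least squares, for reference only.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 10
    X, y = rng.normal(size=(n, d)), rng.normal(size=n)

    w = np.zeros(d)
    grad_memory = np.zeros((n, d))          # last stored gradient per example
    grad_mean = grad_memory.mean(axis=0)
    step = 0.01

    for _ in range(5000):
        i = rng.integers(n)
        g_i = (X[i] @ w - y[i]) * X[i]      # current gradient of example i
        w -= step * (g_i - grad_memory[i] + grad_mean)
        grad_mean += (g_i - grad_memory[i]) / n   # keep the running mean in sync
        grad_memory[i] = g_i

    print(np.linalg.norm(X.T @ (X @ w - y)) / n)  # residual gradient norm shrinks as SAGA converges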
| Cl\'ement Calauz\`enes and Nicolas Le Roux | null | 1705.10405 | null | null |
Gradient Descent Can Take Exponential Time to Escape Saddle Points | math.OC cs.LG stat.ML | Although gradient descent (GD) almost always escapes saddle points
asymptotically [Lee et al., 2016], this paper shows that even with fairly
natural random initialization schemes and non-pathological functions, GD can be
significantly slowed down by saddle points, taking exponential time to escape.
On the other hand, gradient descent with perturbations [Ge et al., 2015, Jin et
al., 2017] is not slowed down by saddle points - it can find an approximate
local minimizer in polynomial time. This result implies that GD is inherently
slower than perturbed GD, and justifies the importance of adding perturbations
for efficient non-convex optimization. While our focus is theoretical, we also
present experiments that illustrate our theoretical findings.
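The perturbed gradient descent idea referenced above can be sketched as follows: when the gradient is small (a possible saddle region), add a small random perturbation. This is a simplified illustration with arbitrary parameters, not the exact algorithm of Ge et al. or Jin et al.

    # Simplified perturbed gradient descent (illustrative sketch).
    import numpy as np

    def perturbed_gd(grad, x0, step=0.1, g_thresh=1e-3, radius=1e-2, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) < g_thresh:
                # Near a critical point: perturb to escape potential saddles.
                x = x + radius * rng.normal(size=x.shape)
            else:
                x = x - step * g
        return x

    # f(x, y) = x^2 - y^2 has a saddle at the origin; plain GD started there never moves.
    grad = lambda p: np.array([2 * p[0], -2 * p[1]])
    print(perturbed_gd(grad, [0.0, 0.0]))  # escapes along the negative-curvature (y) direction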
| Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabas
Poczos, Aarti Singh | null | 1705.10412 | null | null |
Solving the Conjugacy Decision Problem via Machine Learning | math.GR cs.LG | Machine learning and pattern recognition techniques have been successfully
applied to algorithmic problems in free groups. In this paper, we seek to
extend these techniques to finitely presented non-free groups, with a
particular emphasis on polycyclic and metabelian groups that are of interest to
non-commutative cryptography.
As a prototypical example, we utilize supervised learning methods to
construct classifiers that can solve the conjugacy decision problem, i.e.,
determine whether or not a pair of elements from a specified group are
conjugate. The accuracies of classifiers created using decision trees, random
forests, and N-tuple neural network models are evaluated for several non-free
groups. The very high accuracy of these classifiers suggests an underlying
mathematical relationship with respect to conjugacy in the tested groups.
| Jonathan Gryak, Robert M. Haralick, Delaram Kahrobaei | null | 1705.10417 | null | null |
Learning End-to-end Multimodal Sensor Policies for Autonomous Navigation | cs.RO cs.AI cs.LG | Multisensory policies are known to enhance both state estimation and target
tracking. However, in the space of end-to-end sensorimotor control, this
multi-sensor outlook has received limited attention. Moreover, systematic ways
to make policies robust to partial sensor failure are not well explored. In
this work, we propose a specific customization of Dropout, called
\textit{Sensor Dropout}, to improve multisensory policy robustness and handle
partial failure in the sensor-set. We also introduce an additional auxiliary
loss on the policy network to reduce variance across the band of potential
multi- and uni-sensory policies, which limits jerks during policy switching
triggered by an abrupt sensor failure or deactivation/activation. Finally,
through the visualization of gradients, we show that the learned policies are
conditioned on the same latent state representation despite having diverse
observation spaces - a hallmark of true sensor-fusion. Simulation results of
the multisensory policy, as visualized in TORCS racing game, can be seen here:
https://youtu.be/QAK2lcXjNZc.
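A minimal sketch of the Sensor Dropout idea: during training, whole sensor modalities are randomly zeroed (and the remainder rescaled) before being fed to the policy network. The modality layout and rescaling rule below are illustrative assumptions.

    # Sensor Dropout sketch: randomly drop whole sensor modalities during training.
    import numpy as np

    def sensor_dropout(obs_by_sensor, keep_prob=0.7, rng=None):
        """obs_by_sensor: list of per-sensor observation vectors."""
        rng = rng or np.random.default_rng()
        mask = rng.random(len(obs_by_sensor)) < keep_prob
        if not mask.any():
            mask[rng.integers(len(obs_by_sensor))] = True   # keep at least one sensor
        scale = len(obs_by_sensor) / mask.sum()             # rescale, dropout-style
        dropped = [o * scale if keep else np.zeros_like(o)
                   for o, keep in zip(obs_by_sensor, mask)]
        return np.concatenate(dropped)

    # Hypothetical modalities: camera features, laser ranges, odometry.
    rng = np.random.default_rng(0)
    obs = [rng.normal(size=32), rng.normal(size=16), rng.normal(size=4)]
    print(sensor_dropout(obs, rng=rng).shape)   # (52,)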
| Guan-Horng Liu, Avinash Siravuru, Sai Prabhakar, Manuela Veloso,
George Kantor | null | 1705.10422 | null | null |
The Numerics of GANs | cs.LG | In this paper, we analyze the numerics of common algorithms for training
Generative Adversarial Networks (GANs). Using the formalism of smooth
two-player games we analyze the associated gradient vector field of GAN
training objectives. Our findings suggest that the convergence of current
algorithms suffers due to two factors: i) presence of eigenvalues of the
Jacobian of the gradient vector field with zero real-part, and ii) eigenvalues
with big imaginary part. Using these findings, we design a new algorithm that
overcomes some of these limitations and has better convergence properties.
Experimentally, we demonstrate its superiority on training common GAN
architectures and show convergence on GAN architectures that are known to be
notoriously hard to train.
| Lars Mescheder, Sebastian Nowozin, Andreas Geiger | null | 1705.10461 | null | null |
Federated Multi-Task Learning | cs.LG stat.ML | Federated learning poses new statistical and systems challenges in training
machine learning models over distributed networks of devices. In this work, we
show that multi-task learning is naturally suited to handle the statistical
challenges of this setting, and propose a novel systems-aware optimization
method, MOCHA, that is robust to practical systems issues. Our method and
theory for the first time consider issues of high communication cost,
stragglers, and fault tolerance for distributed multi-task learning. The
resulting method achieves significant speedups compared to alternatives in the
federated setting, as we demonstrate through simulations on real-world
federated datasets.
| Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, Ameet Talwalkar | null | 1705.10467 | null | null |
Iterative Machine Teaching | stat.ML cs.LG | In this paper, we consider the problem of machine teaching, the inverse
problem of machine learning. Different from traditional machine teaching which
views the learners as batch algorithms, we study a new paradigm where the
learner uses an iterative algorithm and a teacher can feed examples
sequentially and intelligently based on the current performance of the learner.
We show that the teaching complexity in the iterative case is very different
from that in the batch case. Instead of constructing a minimal training set for
learners, our iterative machine teaching focuses on achieving fast convergence
in the learner model. Depending on the level of information the teacher has
from the learner model, we design teaching algorithms which can provably reduce
the number of teaching examples and achieve faster convergence than learning
without teachers. We also validate our theoretical findings with extensive
experiments on different data distributions and real image datasets.
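To make the teaching setting concrete, the sketch below shows an omniscient teacher for a least-squares learner: at each step it feeds the training example whose gradient step moves the learner closest to the target parameters. This is a simplified illustration of the paradigm, not the paper's full set of algorithms.

    # Omniscient teacher sketch for a linear least-squares learner.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_pool = 5, 200
    w_star = rng.normal(size=d)                 # target model known to the teacher
    X = rng.normal(size=(n_pool, d))
    y = X @ w_star
    w = np.zeros(d)
    eta = 0.1

    for t in range(30):
        grads = (X @ w - y)[:, None] * X        # per-example gradients
        candidates = w[None, :] - eta * grads   # one-step updates for every example
        dists = np.linalg.norm(candidates - w_star[None, :], axis=1)
        i = np.argmin(dists)                    # teacher picks the most useful example
        w = candidates[i]

    print(np.linalg.norm(w - w_star))           # converges much faster than a random order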
| Weiyang Liu, Bo Dai, Ahmad Humayun, Charlene Tay, Chen Yu, Linda B.
Smith, James M. Rehg, Le Song | null | 1705.1047 | null | null |
Multi-Modal Imitation Learning from Unstructured Demonstrations using
Generative Adversarial Nets | cs.RO cs.LG | Imitation learning has traditionally been applied to learn a single task from
demonstrations thereof. The requirement of structured and isolated
demonstrations limits the scalability of imitation learning approaches as they
are difficult to apply to real-world scenarios, where robots have to be able to
execute a multitude of tasks. In this paper, we propose a multi-modal imitation
learning framework that is able to segment and imitate skills from unlabelled
and unstructured demonstrations by learning skill segmentation and imitation
learning jointly. The extensive simulation results indicate that our method can
efficiently separate the demonstrations into individual skills and learn to
imitate them using a single multi-modal policy. The video of our experiments is
available at http://sites.google.com/view/nips17intentiongan
| Karol Hausman, Yevgen Chebotar, Stefan Schaal, Gaurav Sukhatme, Joseph
Lim | null | 1705.10479 | null | null |
Joint auto-encoders: a flexible multi-task learning framework | stat.ML cs.LG | The incorporation of prior knowledge into learning is essential in achieving
good performance based on small noisy samples. Such knowledge is often
incorporated through the availability of related data arising from domains and
tasks similar to the one of current interest. Ideally one would like to allow
both the data for the current task and for previous related tasks to
self-organize the learning system in such a way that commonalities and
differences between the tasks are learned in a data-driven fashion. We develop
a framework for learning multiple tasks simultaneously, based on sharing
features that are common to all tasks, achieved through the use of a modular
deep feedforward neural network consisting of shared branches, dealing with the
common features of all tasks, and private branches, learning the specific
unique aspects of each task. Once an appropriate weight sharing architecture
has been established, learning takes place through standard algorithms for
feedforward networks, e.g., stochastic gradient descent and its variations. The
method deals with domain adaptation and multi-task learning in a unified
fashion, and can easily deal with data arising from different types of sources.
Numerical experiments demonstrate the effectiveness of learning in domain
adaptation and transfer learning setups, and provide evidence for the flexible
and task-oriented representations arising in the network.
| Baruch Epstein, Ron Meir, Tomer Michaeli | null | 1705.10494 | null | null |
Zonotope hit-and-run for efficient sampling from projection DPPs | stat.ML cs.LG stat.CO | Determinantal point processes (DPPs) are distributions over sets of items
that model diversity using kernels. Their applications in machine learning
include summary extraction and recommendation systems. Yet, the cost of
sampling from a DPP is prohibitive in large-scale applications, which has
triggered an effort towards efficient approximate samplers. We build a novel
MCMC sampler that combines ideas from combinatorial geometry, linear
programming, and Monte Carlo methods to sample from DPPs with a fixed sample
cardinality, also called projection DPPs. Our sampler leverages the ability of
the hit-and-run MCMC kernel to efficiently move across convex bodies. Previous
theoretical results yield a fast mixing time of our chain when targeting a
distribution that is close to a projection DPP, but not a DPP in general. Our
empirical results demonstrate that this extends to sampling projection DPPs,
i.e., our sampler is more sample-efficient than previous approaches which in
turn translates to faster convergence when dealing with costly-to-evaluate
functions, such as summary extraction in our experiments.
| Guillaume Gautier, R\'emi Bardenet, Michal Valko | null | 1705.10498 | null | null |
Online to Offline Conversions, Universality and Adaptive Minibatch Sizes | cs.LG math.OC stat.ML | We present an approach towards convex optimization that relies on a novel
scheme which converts online adaptive algorithms into offline methods. In the
offline optimization setting, our derived methods are shown to obtain
favourable adaptive guarantees which depend on the harmonic sum of the queried
gradients. We further show that our methods implicitly adapt to the objective's
structure: in the smooth case fast convergence rates are ensured without any
prior knowledge of the smoothness parameter, while still maintaining guarantees
in the non-smooth setting. Our approach has a natural extension to the
stochastic setting, resulting in a lazy version of SGD (stochastic GD), where
minibatches are chosen \emph{adaptively} depending on the magnitude of the
gradients, thus providing a principled approach to choosing minibatch
sizes.
| Kfir Y. Levy | null | 1705.10499 | null | null |
Exploiting Restricted Boltzmann Machines and Deep Belief Networks in
Compressed Sensing | cs.LG | This paper proposes a compressed sensing (CS) scheme that exploits the representational power of
restricted Boltzmann machines and deep learning architectures to model the
prior distribution of the sparsity pattern of signals belonging to the same
class. The determined probability distribution is then used in a maximum a
posteriori (MAP) approach for the reconstruction. The parameters of the prior
distribution are learned from training data. The motivation behind this
approach is to model the higher-order statistical dependencies between the
coefficients of the sparse representation, with the final goal of improving the
reconstruction. The performance of the proposed method is validated on the
Berkeley Segmentation Dataset and the MNIST Database of handwritten digits.
| Luisa F. Polania, Kenneth E. Barner | 10.1109/TSP.2017.2712128 | 1705.105 | null | null |
Quantum Low Entropy based Associative Reasoning or QLEAR Learning | cs.LG cs.IT math.IT | In this paper, we propose a classification method based on a learning
paradigm we are going to call Quantum Low Entropy based Associative Reasoning
or QLEAR learning. The approach is based on the idea that classification can be
understood as supervised clustering, where a quantum entropy in the context of
the quantum probabilistic model, will be used as a "capturer" (measure, or
external index), of the "natural structure" of the data. By using quantum
entropy we do not make any assumption about linear separability of the data
that are going to be classified. The basic idea is to find close neighbors to a
query sample and then use relative change in the quantum entropy as a measure
of similarity of the newly arrived sample with the representatives of interest.
In other words, the method is based on calculating the quantum entropy of the
referent system and its relative change with the addition of the newly arrived
sample. The referent system consists of vectors that represent individual classes
and that are the most similar, in Euclidean distance sense, to the vector that
is analyzed. Here, we analyze the classification problem in the context of
measuring similarities to prototype examples of categories. While nearest
neighbor classifiers are natural in this setting, they suffer from the problem
of high variance (in bias-variance decomposition) in the case of limited
sampling. Alternatively, one could use machine learning techniques (like
support vector machines) but they involve time-consuming optimization. Here we
propose a hybrid of nearest neighbor and machine learning technique which deals
naturally with the multi-class setting, has reasonable computational complexity
both in training and at run time, and yields excellent results in practice.
| Marko V. Jankovic | null | 1705.10503 | null | null |
Implications of Decentralized Q-learning Resource Allocation in Wireless
Networks | cs.NI cs.LG | Reinforcement Learning is gaining attention by the wireless networking
community due to its potential to learn good-performing configurations only
from the observed results. In this work we propose a stateless variation of
Q-learning, which we apply to exploit spatial reuse in a wireless network. In
particular, we allow networks to modify both their transmission power and the
channel used solely based on the experienced throughput. We concentrate on a
completely decentralized scenario in which no information about neighbouring
nodes is available to the learners. Our results show that although the
algorithm is able to find the best-performing actions to enhance aggregate
throughput, there is high variability in the throughput experienced by the
individual networks. We identify the cause of this variability as the
adversarial setting of our setup, in which the most played actions provide
intermittent good/poor performance depending on the neighbouring decisions. We
also evaluate the effect of the intrinsic learning parameters of the algorithm
on this variability.
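A minimal sketch of the stateless Q-learning loop described above: each network keeps one Q-value per (channel, transmit-power) action and updates it from the observed throughput. The environment function below is a hypothetical stand-in for the wireless simulator.

    # Stateless Q-learning over (channel, tx_power) actions (sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    channels, powers = 3, 2
    n_actions = channels * powers
    Q = np.zeros(n_actions)
    alpha, eps = 0.1, 0.1

    def observed_throughput(action):
        # Hypothetical environment: in the real setup this is the experienced
        # throughput, which depends on the neighbouring networks' choices.
        base = np.array([0.2, 0.9, 0.5, 0.4, 0.7, 0.3])
        return base[action] + 0.05 * rng.normal()

    for t in range(2000):
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q))
        r = observed_throughput(a)
        Q[a] = (1 - alpha) * Q[a] + alpha * r   # stateless update: no next-state term

    print(Q.reshape(channels, powers))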
| Francesc Wilhelmi, Boris Bellalta, Cristina Cano, Anders Jonsson | null | 1705.10508 | null | null |
IRGAN: A Minimax Game for Unifying Generative and Discriminative
Information Retrieval Models | cs.IR cs.LG | This paper provides a unified account of two schools of thinking in
information retrieval modelling: the generative retrieval focusing on
predicting relevant documents given a query, and the discriminative retrieval
focusing on predicting relevancy given a query-document pair. We propose a game
theoretical minimax game to iteratively optimise both models. On one hand, the
discriminative model, aiming to mine signals from labelled and unlabelled data,
provides guidance to train the generative model towards fitting the underlying
relevance distribution over documents given the query. On the other hand, the
generative model, acting as an attacker to the current discriminative model,
generates difficult examples for the discriminative model in an adversarial way
by minimising its discrimination objective. With the competition between these
two models, we show that the unified framework takes advantage of both schools
of thinking: (i) the generative model learns to fit the relevance distribution
over documents via the signals from the discriminative model, and (ii) the
discriminative model is able to exploit the unlabelled data selected by the
generative model to achieve a better estimation for document ranking. Our
experimental results have demonstrated significant performance gains as much as
23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of
applications including web search, item recommendation, and question answering.
| Jun Wang, Lantao Yu, Weinan Zhang, Yu Gong, Yinghui Xu, Benyou Wang,
Peng Zhang, Dell Zhang | 10.1145/3077136.3080786 | 1705.10513 | null | null |
Constrained Policy Optimization | cs.LG | For many applications of reinforcement learning it can be more convenient to
specify both a reward function and constraints, rather than trying to design
behavior through the reward function. For example, systems that physically
interact with or around humans should satisfy safety constraints. Recent
advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015,
Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in
high-dimensional control, but do not consider the constrained setting.
We propose Constrained Policy Optimization (CPO), the first general-purpose
policy search algorithm for constrained reinforcement learning with guarantees
for near-constraint satisfaction at each iteration. Our method allows us to
train neural network policies for high-dimensional control while making
guarantees about policy behavior all throughout training. Our guarantees are
based on a new theoretical result, which is of independent interest: we prove a
bound relating the expected returns of two policies to an average divergence
between them. We demonstrate the effectiveness of our approach on simulated
robot locomotion tasks where the agent must satisfy constraints motivated by
safety.
| Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel | null | 1705.10528 | null | null |
Multi-Focus Image Fusion Using Sparse Representation and Coupled
Dictionary Learning | cs.CV cs.LG | We address the multi-focus image fusion problem, where multiple images
captured with different focal settings are to be fused into an all-in-focus
image of higher quality. Algorithms for this problem necessarily admit the
source image characteristics along with focused and blurred features. However,
most sparsity-based approaches use a single dictionary in focused feature space
to describe multi-focus images, and ignore the representations in blurred
feature space. We propose a multi-focus image fusion approach based on sparse
representation using a coupled dictionary. It exploits the observations that
the patches from a given training set can be sparsely represented by a couple
of overcomplete dictionaries related to the focused and blurred categories of
images and that a sparse approximation based on such coupled dictionary leads
to a more flexible and therefore better fusion strategy than the one based on
just selecting the sparsest representation in the original image estimate. In
addition, to improve the fusion performance, we employ a coupled dictionary
learning approach that enforces pairwise correlation between atoms of
dictionaries learned to represent the focused and blurred feature spaces. We
also discuss the advantages of the fusion approach based on coupled dictionary
learning, and present efficient algorithms for fusion based on coupled
dictionary learning. Extensive experimental comparisons with state-of-the-art
multi-focus image fusion algorithms validate the effectiveness of the proposed
approach.
| Farshad G. Veshki and Sergiy A. Vorobyov | null | 1705.10574 | null | null |
Character-Based Text Classification using Top Down Semantic Model for
Sentence Representation | cs.CL cs.LG | Despite the success of deep learning on many fronts especially image and
speech, its application in text classification often is still not as good as a
simple linear SVM on n-gram TF-IDF representation especially for smaller
datasets. Deep learning tends to emphasize on sentence level semantics when
learning a representation with models like recurrent neural network or
recursive neural network, however from the success of TF-IDF representation, it
seems a bag-of-words type of representation has its strength. Taking advantage
of both representations, we present a model known as TDSM (Top Down Semantic
Model) for extracting a sentence representation that considers both the
word-level semantics by linearly combining the words with attention weights and
the sentence-level semantics with BiLSTM and use it on text classification. We
apply the model on characters and our results show that our model is better
than all the other character-based and word-based convolutional neural network
models by \cite{zhang15} across seven different datasets with only 1\% of their
parameters. We also demonstrate that this model beats traditional linear models
on TF-IDF vectors on small and polished datasets like news article in which
typically deep learning models surrender.
| Zhenzhou Wu and Xin Zheng and Daniel Dahlmeier | null | 1705.10586 | null | null |
Optimizing Memory Efficiency for Convolution Kernels on Kepler GPUs | cs.DC cs.LG | Convolution is a fundamental operation in many applications, such as computer
vision, natural language processing, image processing, etc. Recent successes of
convolutional neural networks in various deep learning applications put even
higher demand on fast convolution. The high computation throughput and memory
bandwidth of graphics processing units (GPUs) make GPUs a natural choice for
accelerating convolution operations. However, maximally exploiting the
available memory bandwidth of GPUs for convolution is a challenging task. This
paper introduces a general model to address the mismatch between the memory
bank width of GPUs and computation data width of threads. Based on this model,
we develop two convolution kernels, one for the general case and the other for
a special case with one input channel. By carefully optimizing memory access
patterns and computation patterns, we design a communication-optimized kernel
for the special case and a communication-reduced kernel for the general case.
Experimental data based on implementations on Kepler GPUs show that our kernels
achieve 5.16X and 35.5% average performance improvement over the latest cuDNN
library, for the special case and the general case, respectively.
| Xiaoming Chen, Jianxu Chen, Danny Z. Chen, and Xiaobo Sharon Hu | null | 1705.10591 | null | null |
Approximation learning methods of Harmonic Mappings in relation to Hardy
Spaces | math.NA cs.LG | A new Hardy space approach to the Dirichlet type problem, based on
Tikhonov regularization and reproducing kernel Hilbert spaces, is discussed in
this paper; the problem turns out to be a typical extremal problem located on the
upper half complex plane. When considered in the Hardy space, the
optimization operator of this problem is highly simplified and an
efficient algorithm becomes possible. This is mainly realized with the help of
the reproducing properties of the functions in the Hardy space of the upper half
complex plane, and a detailed algorithm is proposed. Moreover, harmonic
mappings, which are significant geometric transformations, are commonly used in
many applications such as image processing, since they describe the
energy-minimizing mappings between individual manifolds. In particular, when we
focus on planar mappings between two Euclidean planar regions, the harmonic mappings
exist and are unique, which is guaranteed by the existence of harmonic
functions. This property is attractive, and simulation results are shown in this
paper to demonstrate the capability in applications such as planar shape distortion
and surface registration.
| Zhulin Liu and C. L. Philip Chen | 10.1109/ICCSS.2016.7586421 | 1705.10596 | null | null |
Grammatical Inference as a Satisfiability Modulo Theories Problem | cs.FL cs.LG cs.LO | The problem of learning a minimal consistent model from a set of labeled
sequences of symbols is addressed from a satisfiability modulo theories
perspective. We present two encodings for deterministic finite automata and
extend one of these for Moore and Mealy machines. Our experimental results show
that these encodings improve upon the state-of-the-art, and are useful in
practice for learning small models.
| Rick Smetsers | null | 1705.10639 | null | null |
Conditional Adversarial Domain Adaptation | cs.LG | Adversarial learning has been embedded into deep networks to learn
disentangled and transferable representations for domain adaptation. Existing
adversarial domain adaptation methods may not effectively align different
domains of multimodal distributions that are native to classification problems. In this
paper, we present conditional adversarial domain adaptation, a principled
framework that conditions the adversarial adaptation models on discriminative
information conveyed in the classifier predictions. Conditional domain
adversarial networks (CDANs) are designed with two novel conditioning
strategies: multilinear conditioning that captures the cross-covariance between
feature representations and classifier predictions to improve the
discriminability, and entropy conditioning that controls the uncertainty of
classifier predictions to guarantee the transferability. With theoretical
guarantees and a few lines of codes, the approach has exceeded state-of-the-art
results on five datasets.
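A minimal sketch of the multilinear conditioning step: the domain discriminator is fed the flattened outer product of the feature vector and the classifier's softmax prediction. The shapes below are illustrative assumptions.

    # Multilinear conditioning for a conditional domain discriminator (sketch).
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def multilinear_map(features, logits):
        """features: (batch, d_f); logits: (batch, n_classes).
        Returns (batch, d_f * n_classes) outer-product conditioning."""
        g = softmax(logits)
        return np.einsum('bi,bj->bij', features, g).reshape(features.shape[0], -1)

    rng = np.random.default_rng(0)
    f = rng.normal(size=(4, 8))          # feature representations
    logits = rng.normal(size=(4, 3))     # classifier predictions
    h = multilinear_map(f, logits)       # input to the domain discriminator
    print(h.shape)                       # (4, 24)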
| Mingsheng Long, Zhangjie Cao, Jianmin Wang, Michael I. Jordan | null | 1705.10667 | null | null |
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial
Examples | cs.CR cs.LG | Feature squeezing is a recently-introduced framework for mitigating and
detecting adversarial examples. In previous work, we showed that it is
effective against several earlier methods for generating adversarial examples.
In this short note, we report on recent results showing that simple feature
squeezing techniques also make deep learning models significantly more robust
against the Carlini/Wagner attacks, which are the best known adversarial
methods discovered to date.
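For concreteness, the two squeezers most associated with this framework, bit-depth reduction and median smoothing, can be sketched as below; the detector compares the model's predictions on original and squeezed inputs. The classifier and the detection threshold are hypothetical stand-ins.

    # Feature squeezing sketch: bit-depth reduction, median smoothing, and the
    # L1 prediction-difference detector (threshold is an assumed value).
    import numpy as np
    from scipy.ndimage import median_filter

    def reduce_bit_depth(x, bits=4):
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels        # x assumed in [0, 1]

    def median_smooth(x, size=2):
        return median_filter(x, size=size)

    def is_adversarial(x, predict, threshold=1.0):
        p0 = predict(x)
        scores = [np.abs(p0 - predict(reduce_bit_depth(x))).sum(),
                  np.abs(p0 - predict(median_smooth(x))).sum()]
        return max(scores) > threshold

    # Hypothetical classifier returning a probability vector.
    predict = lambda x: np.array([x.mean(), 1 - x.mean()])
    x = np.random.default_rng(0).random((28, 28))
    print(is_adversarial(x, predict))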
| Weilin Xu, David Evans, Yanjun Qi | null | 1705.10686 | null | null |
Deep Learning is Robust to Massive Label Noise | cs.LG cs.AI cs.CV cs.NE | Deep neural networks trained on large supervised datasets have led to
impressive results in image classification and other tasks. However,
well-annotated datasets can be time-consuming and expensive to collect, lending
increased interest to larger but noisy datasets that are more easily obtained.
In this paper, we show that deep neural networks are capable of generalizing
from training data for which true labels are massively outnumbered by incorrect
labels. We demonstrate remarkably high test performance after training on
corrupted data from MNIST, CIFAR, and ImageNet. For example, on MNIST we obtain
test accuracy above 90 percent even after each clean training example has been
diluted with 100 randomly-labeled examples. Such behavior holds across multiple
patterns of label noise, even when erroneous labels are biased towards
confusing classes. We show that training in this regime requires a significant
but manageable increase in dataset size that is related to the factor by which
correct labels have been diluted. Finally, we provide an analysis of our
results that shows how increasing noise decreases the effective batch size.
| David Rolnick, Andreas Veit, Serge Belongie, Nir Shavit | null | 1705.10694 | null | null |
Multi-Labelled Value Networks for Computer Go | cs.AI cs.LG | This paper proposes a new approach to a novel value network architecture for
the game Go, called a multi-labelled (ML) value network. In the ML value
network, different values (win rates) are trained simultaneously for different
settings of komi, a compensation given to balance the initiative of playing
first. The ML value network has three advantages: (a) it outputs values for
different komi, (b) it supports dynamic komi, and (c) it lowers the mean
squared error (MSE). This paper also proposes a new dynamic komi method to
improve game-playing strength. It also performs experiments to
demonstrate the merits of the architecture. First, the MSE of the ML value
network is generally lower than the value network alone. Second, the program
based on the ML value network wins by a rate of 67.6% against the program based
on the value network alone. Third, the program with the proposed dynamic komi
method significantly improves the playing strength over the baseline that does
not use dynamic komi, especially for handicap games. To our knowledge, up to
date, no handicap games have been played openly by programs using value
networks. This paper provides these programs with a useful approach to playing
handicap games.
| Ti-Rong Wu, I-Chen Wu, Guan-Wun Chen, Ting-han Wei, Tung-Yi Lai,
Hung-Chun Wu and Li-Cheng Lan | null | 1705.10701 | null | null |
Fast Regression with an $\ell_\infty$ Guarantee | cs.DS cs.LG | Sketching has emerged as a powerful technique for speeding up problems in
numerical linear algebra, such as regression. In the overconstrained regression
problem, one is given an $n \times d$ matrix $A$, with $n \gg d$, as well as an
$n \times 1$ vector $b$, and one wants to find a vector $\hat{x}$ so as to
minimize the residual error $\|A\hat{x}-b\|_2$. Using the sketch and solve paradigm,
one first computes $S \cdot A$ and $S \cdot b$ for a randomly chosen matrix
$S$, then outputs $x' = (SA)^{\dagger} Sb$ so as to minimize $\|SAx' - Sb\|_2$.
The sketch-and-solve paradigm gives a bound on $\|x'-x^*\|_2$ when $A$ is
well-conditioned. Our main result is that, when $S$ is the subsampled
randomized Fourier/Hadamard transform, the error $x' - x^*$ behaves as if it
lies in a "random" direction within this bound: for any fixed direction $a\in
\mathbb{R}^d$, we have with $1 - d^{-c}$ probability that
\[
\langle a, x'-x^*\rangle \lesssim
\frac{\|a\|_2\|x'-x^*\|_2}{d^{\frac{1}{2}-\gamma}}, \quad (1)
\]
where $c, \gamma > 0$ are arbitrary constants.
This implies $\|x'-x^*\|_{\infty}$ is a factor $d^{\frac{1}{2}-\gamma}$
smaller than $\|x'-x^*\|_2$. It also gives a better bound on the generalization
of $x'$ to new examples: if rows of $A$ correspond to examples and columns to
features, then our result gives a better bound for the error introduced by
sketch-and-solve when classifying fresh examples. We show that not all
oblivious subspace embeddings $S$ satisfy these properties. In particular, we
give counterexamples showing that matrices based on Count-Sketch or leverage
score sampling do not satisfy these properties.
We also provide lower bounds, both on how small $\|x'-x^*\|_2$ can be, and
for our new guarantee (1), showing that the subsampled randomized
Fourier/Hadamard transform is nearly optimal.
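The sketch-and-solve step itself is simple; the snippet below uses a Gaussian sketching matrix purely for illustration (the paper's guarantee concerns the subsampled randomized Fourier/Hadamard transform, which is not implemented here).

    # Sketch-and-solve least squares with a Gaussian sketch (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 5000, 50, 400                  # m sketch rows, with d << m << n
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

    S = rng.normal(size=(m, n)) / np.sqrt(m)
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

    print(np.linalg.norm(x_sketch - x_exact, np.inf))   # small coordinate-wise error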
| Eric Price, Zhao Song, David P. Woodruff | null | 1705.10723 | null | null |
The Cramer Distance as a Solution to Biased Wasserstein Gradients | cs.LG stat.ML | The Wasserstein probability metric has received much attention from the
machine learning community. Unlike the Kullback-Leibler divergence, which
strictly measures change in probability, the Wasserstein metric reflects the
underlying geometry between outcomes. The value of being sensitive to this
geometry has been demonstrated, among others, in ordinal regression and
generative modelling. In this paper we describe three natural properties of
probability divergences that reflect requirements from machine learning: sum
invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein
metric possesses the first two properties but, unlike the Kullback-Leibler
divergence, does not possess the third. We provide empirical evidence
suggesting that this is a serious issue in practice. Leveraging insights from
probabilistic forecasting we propose an alternative to the Wasserstein metric,
the Cram\'er distance. We show that the Cram\'er distance possesses all three
desired properties, combining the best of the Wasserstein and Kullback-Leibler
divergences. To illustrate the relevance of the Cram\'er distance in practice
we design a new algorithm, the Cram\'er Generative Adversarial Network (GAN),
and show that it performs significantly better than the related Wasserstein
GAN.
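For one-dimensional samples, the Cramér distance coincides, up to a constant factor, with the energy distance, which has the simple sample estimator sketched below; this is an illustration of the metric, not the Cramér GAN training objective.

    # Sample-based 1-D Cramer (energy) distance sketch.
    import numpy as np

    def cramer_distance_1d(x, y):
        """Estimates int (F_x - F_y)^2 du = E|X-Y| - 0.5 E|X-X'| - 0.5 E|Y-Y'|."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        d_xy = np.abs(x[:, None] - y[None, :]).mean()
        d_xx = np.abs(x[:, None] - x[None, :]).mean()
        d_yy = np.abs(y[:, None] - y[None, :]).mean()
        return d_xy - 0.5 * d_xx - 0.5 * d_yy

    rng = np.random.default_rng(0)
    a1 = rng.normal(0.0, 1.0, size=2000)
    a2 = rng.normal(0.0, 1.0, size=2000)
    b = rng.normal(0.5, 1.0, size=2000)
    print(cramer_distance_1d(a1, a2))   # close to 0 for samples from the same distribution
    print(cramer_distance_1d(a1, b))    # clearly positive when the distributions differ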
| Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji
Lakshminarayanan, Stephan Hoyer, R\'emi Munos | null | 1705.10743 | null | null |
Knowledge Base Completion: Baselines Strike Back | cs.LG cs.AI | Many papers have been published on the knowledge base completion task in the
past few years. Most of these introduce novel architectures for relation
learning that are evaluated on standard datasets such as FB15k and WN18. This
paper shows that the accuracy of almost all models published on the FB15k can
be outperformed by an appropriately tuned baseline - our reimplementation of
the DistMult model. Our findings cast doubt on the claim that the performance
improvements of recent models are due to architectural changes as opposed to
hyper-parameter tuning or different training objectives. This should prompt
future research to re-consider how the performance of models is evaluated and
reported.
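The tuned baseline referenced here, DistMult, scores a triple (head, relation, tail) with a simple trilinear product of embeddings, as sketched below; embedding sizes and entities are illustrative.

    # DistMult scoring sketch: score(h, r, t) = sum_k e_h[k] * w_r[k] * e_t[k].
    import numpy as np

    rng = np.random.default_rng(0)
    n_entities, n_relations, dim = 100, 10, 16
    entity_emb = rng.normal(scale=0.1, size=(n_entities, dim))
    relation_emb = rng.normal(scale=0.1, size=(n_relations, dim))

    def score(h, r, t):
        return float(np.sum(entity_emb[h] * relation_emb[r] * entity_emb[t]))

    def rank_tails(h, r, true_t):
        scores = entity_emb @ (entity_emb[h] * relation_emb[r])   # score all candidate tails
        return int((scores > scores[true_t]).sum()) + 1           # rank of the true tail

    print(score(3, 1, 7), rank_tails(3, 1, 7))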
| Rudolf Kadlec, Ondrej Bajgar and Jan Kleindienst | null | 1705.10744 | null | null |
Recurrent Estimation of Distributions | cs.LG stat.ML | This paper presents the recurrent estimation of distributions (RED) for
modeling real-valued data in a semiparametric fashion. RED models make two
novel uses of recurrent neural networks (RNNs) for density estimation of
general real-valued data. First, RNNs are used to transform input covariates
into a latent space to better capture conditional dependencies in inputs.
After, an RNN is used to compute the conditional distributions of the latent
covariates. The resulting model is efficient to train, compute, and sample
from, whilst producing normalized pdfs. The effectiveness of RED is shown via
several real-world data experiments. Our results show that RED models achieve a
lower held-out negative log-likelihood than other neural network approaches
across multiple dataset sizes and dimensionalities. Further context of the
efficacy of RED is provided by considering anomaly detection tasks, where we
also observe better performance over alternative models.
| Junier B. Oliva, Kumar Avinava Dubey, Barnabas Poczos, Eric Xing, Jeff
Schneider | null | 1705.1075 | null | null |
Generative Models of Visually Grounded Imagination | cs.LG cs.CV stat.ML | It is easy for people to imagine what a man with pink hair looks like, even
if they have never seen such a person before. We call the ability to create
images of novel semantic concepts visually grounded imagination. In this paper,
we show how we can modify variational auto-encoders to perform this task. Our
method uses a novel training objective, and a novel product-of-experts
inference network, which can handle partially specified (abstract) concepts in
a principled and efficient way. We also propose a set of easy-to-compute
evaluation metrics that capture our intuitive notions of what it means to have
good visual imagination, namely correctness, coverage, and compositionality
(the 3 C's). Finally, we perform a detailed comparison of our method with two
existing joint image-attribute VAE methods (the JMVAE method of Suzuki et.al.
and the BiVCCA method of Wang et.al.) by applying them to two datasets: the
MNIST-with-attributes dataset (which we introduce here), and the CelebA
dataset.
| Ramakrishna Vedantam, Ian Fischer, Jonathan Huang, Kevin Murphy | null | 1705.10762 | null | null |
Forward-Backward Selection with Early Dropping | cs.LG stat.ML | Forward-backward selection is one of the most basic and commonly-used feature
selection algorithms available. It is also general and conceptually applicable
to many different types of data. In this paper, we propose a heuristic that
significantly improves its running time, while preserving predictive accuracy.
The idea is to temporarily discard the variables that are conditionally
independent with the outcome given the selected variable set. Depending on how
those variables are reconsidered and reintroduced, this heuristic gives rise to
a family of algorithms with increasingly stronger theoretical guarantees. In
distributions that can be faithfully represented by Bayesian networks or
maximal ancestral graphs, members of this algorithmic family are able to
correctly identify the Markov blanket in the sample limit. In experiments we
show that the proposed heuristic increases computational efficiency by about
two orders of magnitude in high-dimensional problems, while selecting fewer
variables and retaining predictive performance. Furthermore, we show that the
proposed algorithm and feature selection with LASSO perform similarly when
restricted to select the same number of variables, making the proposed
algorithm an attractive alternative for problems where no (efficient) algorithm
for LASSO exists.
| Giorgos Borboudakis and Ioannis Tsamardinos | null | 1705.1077 | null | null |
Semi-Supervised Learning for Detecting Human Trafficking | cs.LG cs.AI | Human trafficking is one of the most atrocious crimes and among the
challenging problems facing law enforcement which demands attention of global
magnitude. In this study, we leverage textual data from the website "Backpage" -
used for classified advertisements - to discern potential patterns of human
trafficking activities which manifest online and identify advertisements of
high interest to law enforcement. Due to the lack of ground truth, we rely on a
human analyst from law enforcement to hand-label a small portion of the
crawled data. We extend the existing Laplacian SVM and present S3VM-R, by
adding a regularization term to exploit exogenous information embedded in our
feature space in favor of the task at hand. We train the proposed method using
labeled and unlabeled data and evaluate it on a fraction of the unlabeled data,
herein referred to as unseen data, with our expert's further verification.
Results from comparisons between our method and other semi-supervised and
supervised approaches on the labeled data demonstrate that our learner is
effective in identifying advertisements of high interest to law enforcement.
| Hamidreza Alvari, Paulo Shakarian, J.E. Kelly Snyder | null | 1705.10786 | null | null |
Surface Networks | stat.ML cs.GR cs.LG | We study data-driven representations for three-dimensional triangle meshes,
which are one of the prevalent objects used to represent 3D geometry. Recent
works have developed models that exploit the intrinsic geometry of manifolds
and graphs, namely the Graph Neural Networks (GNNs) and its spectral variants,
which learn from the local metric tensor via the Laplacian operator. Despite
offering excellent sample complexity and built-in invariances, intrinsic
geometry alone is invariant to isometric deformations, making it unsuitable for
many applications. To overcome this limitation, we propose several upgrades to
GNNs to leverage extrinsic differential geometry properties of
three-dimensional surfaces, increasing its modeling power.
In particular, we propose to exploit the Dirac operator, whose spectrum
detects principal curvature directions --- this is in stark contrast with the
classical Laplace operator, which directly measures mean curvature. We coin the
resulting models \emph{Surface Networks (SN)}. We prove that these models
define shape representations that are stable to deformation and to
discretization, and we demonstrate the efficiency and versatility of SNs on two
challenging tasks: temporal prediction of mesh deformations under non-linear
dynamics and generative models using a variational autoencoder framework with
encoders/decoders given by SNs.
| Ilya Kostrikov, Zhongshi Jiang, Daniele Panozzo, Denis Zorin, Joan
Bruna | null | 1705.10819 | null | null |
Accelerating Neural Architecture Search using Performance Prediction | cs.LG cs.CV cs.NE | Methods for neural network hyperparameter optimization and meta-modeling are
computationally expensive due to the need to train a large number of model
configurations. In this paper, we show that standard frequentist regression
models can predict the final performance of partially trained model
configurations using features based on network architectures, hyperparameters,
and time-series validation performance data. We empirically show that our
performance prediction models are much more effective than prominent Bayesian
counterparts, are simpler to implement, and are faster to train. Our models can
predict final performance in both visual classification and language modeling
domains, are effective for predicting performance of drastically varying model
architectures, and can even generalize between model classes. Using these
prediction models, we also propose an early stopping method for hyperparameter
optimization and meta-modeling, which obtains a speedup of a factor up to 6x in
both hyperparameter optimization and meta-modeling. Finally, we empirically
show that our early stopping method can be seamlessly incorporated into both
reinforcement learning-based architecture selection algorithms and bandit based
search methods. Through extensive experimentation, we empirically show our
performance prediction models and early stopping algorithm are state-of-the-art
in terms of prediction accuracy and speedup achieved while still identifying
the optimal model configurations.
| Bowen Baker, Otkrist Gupta, Ramesh Raskar and Nikhil Naik | null | 1705.10823 | null | null |
Accuracy First: Selecting a Differential Privacy Level for
Accuracy-Constrained ERM | cs.LG | Traditional approaches to differential privacy assume a fixed privacy
requirement $\epsilon$ for a computation, and attempt to maximize the accuracy
of the computation subject to the privacy constraint. As differential privacy
is increasingly deployed in practical settings, it may often be that there is
instead a fixed accuracy requirement for a given computation and the data
analyst would like to maximize the privacy of the computation subject to the
accuracy constraint. This raises the question of how to find and run a
maximally private empirical risk minimizer subject to a given accuracy
requirement. We propose a general "noise reduction" framework that can apply to
a variety of private empirical risk minimization (ERM) algorithms, using them
to "search" the space of privacy levels to find the empirically strongest one
that meets the accuracy constraint, incurring only logarithmic overhead in the
number of privacy levels searched. The privacy analysis of our algorithm leads
naturally to a version of differential privacy where the privacy parameters are
dependent on the data, which we term ex-post privacy, and which is related to
the recently introduced notion of privacy odometers. We also give an ex-post
privacy analysis of the classical AboveThreshold privacy tool, modifying it to
allow for queries chosen depending on the database. Finally, we apply our
approach to two common objectives, regularized linear and logistic regression,
and empirically compare our noise reduction methods to (i) inverting the
theoretical utility guarantees of standard private ERM algorithms and (ii) a
stronger, empirical baseline based on binary search.
| Katrina Ligett, Seth Neel, Aaron Roth, Bo Waggoner, Z. Steven Wu | null | 1705.10829 | null | null |
Objective-Reinforced Generative Adversarial Networks (ORGAN) for
Sequence Generation Models | stat.ML cs.LG | In unsupervised data generation tasks, besides the generation of a sample
based on previous observations, one would often like to give hints to the model
in order to bias the generation towards desirable metrics. We propose a method
that combines Generative Adversarial Networks (GANs) and reinforcement learning
(RL) in order to accomplish exactly that. While RL biases the data generation
process towards arbitrary metrics, the GAN component of the reward function
ensures that the model still remembers information learned from data. We build
upon previous results that incorporated GANs and RL in order to generate
sequence data and test this model in several settings for the generation of
molecules encoded as text sequences (SMILES) and in the context of music
generation, showing for each case that we can effectively bias the generation
process towards desired metrics.
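The key modification over a plain sequence GAN is the mixed reward: a convex combination of the task objective and the discriminator's score, as sketched below with hypothetical reward components standing in for the real objective and discriminator.

    # ORGAN-style mixed reward sketch (the components here are placeholders).
    import numpy as np

    def mixed_reward(sequence, discriminator, objective, lam=0.5):
        """lam = 1 -> pure objective (RL) reward; lam = 0 -> pure GAN reward."""
        return lam * objective(sequence) + (1.0 - lam) * discriminator(sequence)

    # Hypothetical components: a simple sequence objective and a discriminator
    # probability that the sequence looks like real data.
    objective = lambda s: min(len(set(s)) / 10.0, 1.0)      # e.g. token diversity
    discriminator = lambda s: 0.8                            # placeholder D(x) in [0, 1]
    print(mixed_reward("CCO", discriminator, objective, lam=0.3))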
| Gabriel Lima Guimaraes, Benjamin Sanchez-Lengeling, Carlos Outeiral,
Pedro Luis Cunha Farias, Al\'an Aspuru-Guzik | null | 1705.10843 | null | null |
Deep Learning for Environmentally Robust Speech Recognition: An Overview
of Recent Developments | cs.SD cs.CL cs.LG | Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
| Zixing Zhang, J\"urgen Geiger, Jouni Pohjalainen, Amr El-Desoky Mousa,
Wenyu Jin, Bj\"orn Schuller | null | 1705.10874 | null | null |
Optimization of Tree Ensembles | math.OC cs.LG stat.ML | Tree ensemble models such as random forests and boosted trees are among the
most widely used and practically successful predictive models in applied
machine learning and business analytics. Although such models have been used to
make predictions based on exogenous, uncontrollable independent variables, they
are increasingly being used to make predictions where the independent variables
are controllable and are also decision variables. In this paper, we study the
problem of tree ensemble optimization: given a tree ensemble that predicts some
dependent variable using controllable independent variables, how should we set
these variables so as to maximize the predicted value? We formulate the problem
as a mixed-integer optimization problem. We theoretically examine the strength
of our formulation, provide a hierarchy of approximate formulations with bounds
on approximation quality and exploit the structure of the problem to develop
two large-scale solution methods, one based on Benders decomposition and one
based on iteratively generating tree split constraints. We test our methodology
on real data sets, including two case studies in drug design and customized
pricing, and show that our methodology can efficiently solve large-scale
instances to near or full optimality, and outperforms solutions obtained by
heuristic approaches. In our drug design case, we show how our approach can
identify compounds that efficiently trade-off predicted performance and novelty
with respect to existing, known compounds. In our customized pricing case, we
show how our approach can efficiently determine optimal store-level prices
under a random forest model that delivers excellent predictive accuracy.
| Velibor V. Mi\v{s}i\'c | null | 1705.10883 | null | null |
High Dimensional Structured Superposition Models | cs.LG stat.ML | High dimensional superposition models characterize observations using
parameters which can be written as a sum of multiple component parameters, each
with its own structure, e.g., sum of low rank and sparse matrices, sum of
sparse and rotated sparse vectors, etc. In this paper, we consider general
superposition models which allow sum of any number of component parameters, and
each component structure can be characterized by any norm. We present a simple
estimator for such models, give a geometric condition under which the
components can be accurately estimated, characterize sample complexity of the
estimator, and give high probability non-asymptotic bounds on the componentwise
estimation error. We use tools from empirical processes and generic chaining
for the statistical analysis, and our results, which substantially generalize
prior work on superposition models, are in terms of Gaussian widths of suitable
sets.
| Qilong Gu and Arindam Banerjee | null | 1705.10886 | null | null |
Efficient, sparse representation of manifold distance matrices for
classical scaling | stat.ML cs.CV cs.LG cs.NA | Geodesic distance matrices can reveal shape properties that are largely
invariant to non-rigid deformations, and thus are often used to analyze and
represent 3-D shapes. However, these matrices grow quadratically with the
number of points. Thus for large point sets it is common to use a low-rank
approximation to the distance matrix, which fits in memory and can be
efficiently analyzed using methods such as multidimensional scaling (MDS). In
this paper we present a novel sparse method for efficiently representing
geodesic distance matrices using biharmonic interpolation. This method exploits
knowledge of the data manifold to learn a sparse interpolation operator that
approximates distances using a subset of points. We show that our method is 2x
faster and uses 20x less memory than current leading methods for solving MDS on
large point sets, with similar quality. This enables analyses of large point
sets that were previously infeasible.
| Javier S. Turek, Alexander Huth | null | 1705.10887 | null | null |
Unsupervised Learning of Disentangled Representations from Video | cs.LG cs.AI cs.CV stat.ML | We present a new model DrNET that learns disentangled image representations
from video. Our approach leverages the temporal coherence of video and a novel
adversarial loss to learn a representation that factorizes each frame into a
stationary part and a temporally varying component. The disentangled
representation can be used for a range of tasks. For example, applying a
standard LSTM to the time-varying components enables prediction of future frames.
We evaluate our approach on a range of synthetic and real videos, demonstrating
the ability to coherently generate hundreds of steps into the future.
| Emily Denton, Vighnesh Birodkar | null | 1705.10915 | null | null |
The ALAMO approach to machine learning | cs.LG stat.ML | ALAMO is a computational methodology for learning algebraic functions from
data. Given a data set, the approach begins by building a low-complexity,
linear model composed of explicit non-linear transformations of the independent
variables. Linear combinations of these non-linear transformations allow a
linear model to better approximate complex behavior observed in real processes.
The model is refined as additional data are obtained in an adaptive fashion
through error maximization sampling using derivative-free optimization. Models
built using ALAMO can enforce constraints on the response variables to
incorporate first-principles knowledge. The ability of ALAMO to generate simple
and accurate models for a number of reaction problems is demonstrated. The
error maximization sampling is compared with Latin hypercube designs to
demonstrate its sampling efficiency. ALAMO's constrained regression methodology
is used to further refine concentration models, resulting in models that
perform better on validation data and satisfy upper and lower bounds placed on
model outputs.
| Zachary T. Wilson and Nikolaos V. Sahinidis | 10.1016/j.compchemeng.2017.02.010 | 1705.10918 | null | null |
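A minimal sketch of the kind of model ALAMO produces, under loud assumptions: the candidate transformations, the data, and the use of a LASSO (standing in for ALAMO's mixed-integer basis selection and adaptive sampling) are all illustrative choices, not the ALAMO algorithm itself.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 3.0, 200)
y = 2.0 * x + 0.5 * np.log(x) + rng.normal(0, 0.05, 200)   # unknown true process

# Explicit nonlinear transformations of the independent variable.
basis = np.column_stack([x, x**2, np.sqrt(x), np.log(x), np.exp(-x)])
names = ["x", "x^2", "sqrt(x)", "log(x)", "exp(-x)"]

model = Lasso(alpha=0.01).fit(basis, y)      # sparse linear combination of the bases
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-3:
        print(f"{coef:+.3f} * {name}")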
Sequential Dynamic Decision Making with Deep Neural Nets on a Test-Time
Budget | stat.ML cs.LG | Deep neural network (DNN) based approaches hold significant potential for
reinforcement learning (RL) and have already shown remarkable gains over
state-of-art methods in a number of applications. The effectiveness of DNN
methods can be attributed to leveraging the abundance of supervised data to
learn value functions, Q-functions, and policy function approximations without
the need for feature engineering. Nevertheless, the deployment of DNN-based
predictors with very deep architectures can pose an issue due to computational
and other resource constraints at test-time in a number of applications. We
propose a novel approach for reducing the average latency by learning a
computationally efficient gating function that is capable of recognizing states
in a sequential decision process for which policy prescriptions of a shallow
network suffice and deeper layers of the DNN have little marginal utility. The
overall system is adaptive in that it dynamically switches control actions
based on state-estimates in order to reduce average latency without sacrificing
terminal performance. We experiment with a number of alternative loss-functions
to train gating functions and shallow policies and show that in a number of
applications a speed-up of up to almost 5X can be obtained with little loss in
performance.
| Henghui Zhu, Feng Nan, Ioannis Paschalidis, Venkatesh Saligrama | null | 1705.10924 | null | null |
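The adaptive switching can be pictured with a small, hedged sketch (hypothetical networks, gating threshold, and sizes; the paper learns the gate and shallow policy jointly under several candidate loss functions): a gating network inspects the state and decides whether the cheap shallow head suffices or the deeper head must be run.

import torch
import torch.nn as nn

D_STATE, N_ACTIONS = 12, 4
shallow = nn.Sequential(nn.Linear(D_STATE, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))
deep = nn.Sequential(nn.Linear(D_STATE, 128), nn.ReLU(),
                     nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
gate = nn.Sequential(nn.Linear(D_STATE, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

def act(state, threshold=0.5):
    # Cheap path when the gate judges the shallow policy sufficient for this state.
    logits = shallow(state) if gate(state).item() < threshold else deep(state)
    return int(torch.argmax(logits))

print(act(torch.randn(D_STATE)))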
Spectral Norm Regularization for Improving the Generalizability of Deep
Learning | stat.ML cs.LG | We investigate the generalizability of deep learning based on the sensitivity
to input perturbations. We hypothesize that high sensitivity to perturbations of
the data degrades performance on that data. To reduce the sensitivity
to perturbation, we propose a simple and effective regularization method,
referred to as spectral norm regularization, which penalizes the high spectral
norm of weight matrices in neural networks. We provide supportive evidence for
the abovementioned hypothesis by experimentally confirming that the models
trained using spectral norm regularization exhibit better generalizability than
other baseline methods.
| Yuichi Yoshida and Takeru Miyato | null | 1705.10941 | null | null |
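A minimal sketch of the penalty described above, assuming a PyTorch model: estimate the largest singular value of each 2-D weight matrix with a few power-iteration steps and add it, squared and scaled, to the task loss. The regularization coefficient and iteration count are hypothetical choices, not values from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

def spectral_norm_penalty(model, n_iter=3):
    penalty = torch.zeros(())
    for W in model.parameters():
        if W.ndim != 2:
            continue
        u = torch.randn(W.shape[0])
        for _ in range(n_iter):                 # power iteration
            v = F.normalize(W.t() @ u, dim=0)
            u = F.normalize(W @ v, dim=0)
        sigma = u @ W @ v                       # approx. largest singular value
        penalty = penalty + sigma ** 2
    return penalty

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = F.cross_entropy(model(x), y) + 0.01 * spectral_norm_penalty(model)
loss.backward()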
FALKON: An Optimal Large Scale Kernel Method | stat.ML cs.LG | Kernel methods provide a principled way to perform nonlinear, nonparametric
learning. They rely on solid functional analytic foundations and enjoy optimal
statistical properties. However, at least in their basic form, they have
limited applicability in large scale scenarios because of stringent
computational requirements in terms of time and especially memory. In this
paper, we take a substantial step in scaling up kernel methods, proposing
FALKON, a novel algorithm that can efficiently process millions of points.
FALKON is derived by combining several algorithmic principles, namely
stochastic subsampling, iterative solvers and preconditioning. Our theoretical
analysis shows that optimal statistical accuracy is achieved requiring
essentially $O(n)$ memory and $O(n\sqrt{n})$ time. An extensive experimental
analysis on large scale datasets shows that, even with a single machine, FALKON
outperforms previous state of the art solutions, which exploit
parallel/distributed architectures.
| Alessandro Rudi, Luigi Carratino and Lorenzo Rosasco | null | 1705.10958 | null | null |
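For orientation, the subsampling idea at the heart of FALKON can be seen in a plain Nystrom kernel ridge regression with a direct solve; the sketch below deliberately omits the preconditioning and conjugate-gradient iterations that give FALKON its stated complexity, and the kernel width, regularization, and data are hypothetical.

import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n, m = 2000, 100
X = rng.uniform(-3, 3, (n, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, n)

centers = X[rng.choice(n, m, replace=False)]   # Nystrom subsample of the data
Knm = gaussian_kernel(X, centers)
Kmm = gaussian_kernel(centers, centers)
lam = 1e-3
alpha = np.linalg.solve(Knm.T @ Knm + lam * n * Kmm, Knm.T @ y)

X_test = np.linspace(-3, 3, 5)[:, None]
print(gaussian_kernel(X_test, centers) @ alpha)   # predictions at test points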
Non-Markovian Control with Gated End-to-End Memory Policy Networks | stat.ML cs.AI cs.LG cs.NE | Partially observable environments present an important open challenge in the
domain of sequential control learning with delayed rewards. Despite numerous
attempts during the last two decades, the majority of reinforcement learning
algorithms and associated approximate models, applied to this context, still
assume Markovian state transitions. In this paper, we explore the use of a
recently proposed attention-based model, the Gated End-to-End Memory Network,
for sequential control. We call the resulting model the Gated End-to-End Memory
Policy Network. More precisely, we use a model-free value-based algorithm to
learn policies for partially observed domains using this memory-enhanced neural
network. This model is end-to-end learnable and it features unbounded memory.
Indeed, because of its attention mechanism and associated non-parametric
memory, the proposed model can attend over the entire observation stream,
unlike recurrent models. We show encouraging results that
illustrate the capability of our attention-based model in the context of the
continuous-state non-stationary control problem of stock trading. We also
present an OpenAI Gym environment for a simulated stock exchange and explain its
relevance as a benchmark for the field of non-Markovian decision process
learning.
| Julien Perez and Tomi Silander | null | 1705.10993 | null | null |
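A compact, hedged sketch of the attention-over-observations readout alluded to above (toy dimensions and linear embeddings; this is not the Gated End-to-End Memory Network, its gating, or its value-learning procedure): the current observation queries a memory of past observations and Q-values are read from the attended summary.

import torch
import torch.nn as nn

D_OBS, D_KEY, N_ACTIONS, T = 6, 16, 3, 20
embed_q = nn.Linear(D_OBS, D_KEY)   # query from the current observation
embed_k = nn.Linear(D_OBS, D_KEY)   # keys over the stored observations
embed_v = nn.Linear(D_OBS, D_KEY)   # values over the stored observations
q_head = nn.Linear(D_KEY, N_ACTIONS)

memory = torch.randn(T, D_OBS)      # observation stream kept in memory
obs = torch.randn(D_OBS)

attn = torch.softmax(embed_k(memory) @ embed_q(obs), dim=0)   # weights over time
summary = attn @ embed_v(memory)                              # attended readout
print(int(torch.argmax(q_head(summary))))                     # greedy action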
Adversarial Ranking for Language Generation | cs.CL cs.LG | Generative adversarial networks (GANs) have achieved great success in synthesizing
data. However, existing GANs restrict the discriminator to be a binary
classifier, and thus limit their learning capacity for tasks that need to
synthesize output with rich structures such as natural language descriptions.
In this paper, we propose a novel generative adversarial network, RankGAN, for
generating high-quality language descriptions. Rather than training the
discriminator to assign an absolute binary label to each individual data
sample, the proposed RankGAN analyzes and ranks a collection of human-written
and machine-written sentences given a reference group. By viewing a set of
data samples collectively and evaluating their quality through relative
ranking scores, the discriminator can make a better assessment, which in turn
helps to learn a better generator. The proposed RankGAN is
optimized through the policy gradient technique. Experimental results on
multiple public datasets clearly demonstrate the effectiveness of the proposed
approach.
| Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, Ming-Ting Sun | null | 1705.11001 | null | null |
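A small numeric sketch of the relative-ranking idea, with loudly hypothetical pieces (random stand-in sentence embeddings, cosine scoring, and a softmax over the comparison set; the paper learns the ranker adversarially and trains the generator with policy gradients):

import numpy as np

rng = np.random.default_rng(0)
d = 32
reference = rng.normal(size=(5, d))    # e.g., embeddings of human-written sentences
candidates = rng.normal(size=(4, d))   # sentences to be ranked against the reference

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

scores = cosine(candidates, reference).mean(axis=1)   # relevance to the reference group
rank_probs = np.exp(scores) / np.exp(scores).sum()    # relative ranking scores
print(rank_probs.round(3), "best:", int(np.argmax(rank_probs)))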
Criticality & Deep Learning II: Momentum Renormalisation Group | cond-mat.stat-mech cs.LG | Guided by critical systems found in nature, we develop a novel mechanism
consisting of inhomogeneous polynomial regularisation via which we can induce
scale invariance in deep learning systems. Technically, we map our deep
learning (DL) setup to a genuine field theory, on which we act with the
Renormalisation Group (RG) in momentum space and produce the flow equations of
the couplings; those are translated to constraints and consequently interpreted
as "critical regularisation" conditions in the optimiser; the resulting
equations hence prove to be sufficient conditions for scale invariance, and
serve as an elegant and simple mechanism to induce it in any deep learning setup.
| Dan Oprisa, Peter Toth | null | 1705.11023 | null | null |