title | categories | abstract | authors | doi | id | year | venue
---|---|---|---|---|---|---|---|
Least Square Variational Bayesian Autoencoder with Regularization | stat.ML cs.LG | In recent years, Variational Autoencoders have become one of the most popular
approaches to unsupervised learning of complicated distributions. The
Variational Autoencoder (VAE) provides more efficient reconstruction
performance than a traditional autoencoder, and variational autoencoders yield
better approximations than MCMC. The VAE defines a generative process in terms
of ancestral sampling through a cascade of hidden stochastic layers; it is a
directed graphical model. A variational autoencoder is trained to maximise the
variational lower bound: we maximise the likelihood while, at the same time,
obtaining a good approximation of the data, essentially trading off the data
log-likelihood against the KL divergence from the true posterior. This paper
describes the scenario in which we wish to find a point estimate of the
parameters $\theta$ of some parametric model in which we generate each
observation by first sampling a local latent variable and then sampling the
associated observation. Here we use a least-squares loss function with
regularization in the reconstruction of the image; this loss function was found
to give better reconstructed images and a faster training time.
| Gautam Ramachandra | null | 1707.03134 | null | null |
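
The objective described in the abstract above combines a least-squares (MSE) reconstruction term with the usual KL regularizer of a VAE. Below is a minimal, hedged PyTorch sketch of such a model and loss; the layer sizes, the flattened 784-dimensional input, and the weight-decay term are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: a VAE trained with a least-squares (MSE) reconstruction
# term plus the usual KL regularizer; sizes and the weight-decay term are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeastSquaresVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, params=None, weight_decay=1e-4):
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # least-squares term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    reg = weight_decay * sum(p.pow(2).sum() for p in params) if params else 0.0
    return recon + kl + reg

model = LeastSquaresVAE()
x = torch.rand(16, 784)
loss = loss_fn(x, *model(x), params=list(model.parameters()))
loss.backward()
```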
A Simple Neural Attentive Meta-Learner | cs.AI cs.LG cs.NE stat.ML | Deep neural networks excel in regimes with large amounts of data, but tend to
struggle when data is scarce or when they need to adapt quickly to changes in
the task. In response, recent work in meta-learning proposes training a
meta-learner on a distribution of similar tasks, in the hopes of generalization
to novel but related tasks by learning a high-level strategy that captures the
essence of the problem it is asked to solve. However, many recent meta-learning
approaches are extensively hand-designed, either using architectures
specialized to a particular application, or hard-coding algorithmic components
that constrain how the meta-learner solves the task. We propose a class of
simple and generic meta-learner architectures that use a novel combination of
temporal convolutions and soft attention; the former to aggregate information
from past experience and the latter to pinpoint specific pieces of information.
In the most extensive set of meta-learning experiments to date, we evaluate the
resulting Simple Neural AttentIve Learner (or SNAIL) on several
heavily-benchmarked tasks. On all tasks, in both supervised and reinforcement
learning, SNAIL attains state-of-the-art performance by significant margins.
| Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, Pieter Abbeel | null | 1707.03141 | null | null |
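
As a rough illustration of the two ingredients named in the abstract above (temporal convolutions to aggregate past experience and soft attention to pinpoint specific time steps), here is a hedged PyTorch sketch of a dilated causal convolution block and a causally masked attention block; the channel sizes and single-head design are assumptions, not the SNAIL architecture itself.

```python
# Illustrative sketch (not the SNAIL authors' code): a dilated causal temporal
# convolution block and a causally masked soft-attention block.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """Dilated 1D convolution that only sees past time steps, with a residual add."""
    def __init__(self, channels, dilation):
        super().__init__()
        self.pad = dilation  # left padding keeps the convolution causal
        self.conv = nn.Conv1d(channels, channels, kernel_size=2, dilation=dilation)

    def forward(self, x):                     # x: (batch, channels, time)
        return x + torch.relu(self.conv(F.pad(x, (self.pad, 0))))

class CausalAttention(nn.Module):
    """Soft attention over previous time steps with a causal mask."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                     # x: (batch, time, dim)
        t = x.size(1)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / math.sqrt(x.size(-1))
        mask = torch.triu(torch.ones(t, t, device=x.device, dtype=torch.bool), 1)
        scores = scores.masked_fill(mask, float("-inf"))
        return x + torch.softmax(scores, dim=-1) @ self.v(x)

x = torch.randn(8, 32, 50)                    # (batch, channels, time)
x = CausalConvBlock(32, dilation=2)(x)
x = CausalAttention(32)(x.transpose(1, 2))    # attention expects (batch, time, dim)
print(x.shape)                                # torch.Size([8, 50, 32])
```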
Efficient mixture model for clustering of sparse high dimensional binary
data | cs.LG stat.ML | In this paper we propose a mixture model, SparseMix, for clustering of sparse
high dimensional binary data, which connects model-based with centroid-based
clustering. Every group is described by a representative and a probability
distribution modeling dispersion from this representative. In contrast to
classical mixture models based on the EM algorithm, SparseMix:
-is especially designed for the processing of sparse data,
-can be efficiently realized by an on-line Hartigan optimization algorithm,
-is able to automatically reduce unnecessary clusters.
We perform extensive experimental studies on various types of data, which
confirm that SparseMix builds partitions with higher compatibility with
reference grouping than related methods. Moreover, constructed representatives
often better reveal the internal structure of data.
| Marek \'Smieja, Krzysztof Hajto and Jacek Tabor | null | 1707.03157 | null | null |
RegNet: Multimodal Sensor Registration Using Deep Neural Networks | cs.CV cs.AI cs.LG cs.RO | In this paper, we present RegNet, the first deep convolutional neural network
(CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between
multimodal sensors, exemplified using a scanning LiDAR and a monocular camera.
Compared to existing approaches, RegNet casts all three conventional
calibration steps (feature extraction, feature matching and global regression)
into a single real-time capable CNN. Our method does not require any human
interaction and bridges the gap between classical offline and target-less
online calibration approaches as it provides both a stable initial estimation
as well as a continuous online correction of the extrinsic parameters. During
training we randomly decalibrate our system in order to train RegNet to infer
the correspondence between projected depth measurements and RGB image and
finally regress the extrinsic calibration. Additionally, with an iterative
execution of multiple CNNs, that are trained on different magnitudes of
decalibration, our approach compares favorably to state-of-the-art methods in
terms of a mean calibration error of 0.28 degrees for the rotational and 6 cm
for the translation components even for large decalibrations up to 1.5 m and 20
degrees.
| Nick Schneider, Florian Piewak, Christoph Stiller, Uwe Franke | null | 1707.03167 | null | null |
A Survey on Resilient Machine Learning | cs.AI cs.CR cs.LG | Machine learning based systems are increasingly being used for sensitive tasks
such as security surveillance, guiding autonomous vehicles, taking investment
decisions, detecting and blocking network intrusions and malware, etc. However,
recent research has shown that machine learning models are vulnerable to attacks
by adversaries at all phases of machine learning (e.g., training data collection,
training, operation). All model classes of machine learning systems can be
misled by carefully crafted inputs that make them classify inputs wrongly.
Maliciously created input samples can affect the learning process of an
ML system by slowing down the learning process, degrading the
performance of the learned model, or causing the system to make errors only in
the attacker's planned scenario. Because of these developments, understanding the
security of machine learning algorithms and systems is emerging as an important
research area among computer security and machine learning researchers and
practitioners. We present a survey of this emerging area in machine learning.
| Atul Kumar, Sameep Mehta | null | 1707.03184 | null | null |
Accelerated Variance Reduced Stochastic ADMM | cs.LG stat.ML | Recently, many variance reduced stochastic alternating direction method of
multipliers (ADMM) methods (e.g.\ SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made
exciting progress such as linear convergence rates for strongly convex
problems. However, the best known convergence rate for general convex problems
is O(1/T) as opposed to O(1/T^2) of accelerated batch algorithms, where $T$ is
the number of iterations. Thus, there still remains a gap in convergence rates
between existing stochastic ADMM and batch algorithms. To bridge this gap, we
introduce the momentum acceleration trick for batch optimization into the
stochastic variance reduced gradient based ADMM (SVRG-ADMM), which leads to an
accelerated (ASVRG-ADMM) method. Then we design two different momentum term
update rules for strongly convex and general convex cases. We prove that
ASVRG-ADMM converges linearly for strongly convex problems. Besides having a
low per-iteration complexity as existing stochastic ADMM methods, ASVRG-ADMM
improves the convergence rate on general convex problems from O(1/T) to
O(1/T^2). Our experimental results show the effectiveness of ASVRG-ADMM.
| Yuanyuan Liu, Fanhua Shang, James Cheng | null | 1707.0319 | null | null |
Towards an automated method based on Iterated Local Search optimization
for tuning the parameters of Support Vector Machines | cs.AI cs.LG | We provide preliminary details and formulation of an optimization strategy
under current development that is able to automatically tune the parameters of
a Support Vector Machine over new datasets. The optimization strategy is a
heuristic based on Iterated Local Search, a modification of classic hill
climbing which iterates calls to a local search routine.
| Sergio Consoli, Jacek Kustra, Pieter Vos, Monique Hendriks, Dimitrios
Mavroeidis | null | 1707.03191 | null | null |
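
A hedged sketch of the strategy described above: Iterated Local Search over the (log10 C, log10 gamma) parameters of an RBF SVM, scored by cross-validation. The dataset, neighbourhood sizes, and restart counts are illustrative choices, not the authors' settings.

```python
# Iterated Local Search (ILS) over SVM hyperparameters, scored by 5-fold CV.
# Dataset, step sizes, and iteration counts are illustrative.
import random
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def score(p):
    return cross_val_score(SVC(C=10 ** p[0], gamma=10 ** p[1]), X, y, cv=5).mean()

def local_search(p, step=0.25, iters=20):
    best, best_s = p, score(p)
    for _ in range(iters):                       # simple hill climbing in log-space
        cand = [best[0] + random.uniform(-step, step),
                best[1] + random.uniform(-step, step)]
        s = score(cand)
        if s > best_s:
            best, best_s = cand, s
    return best, best_s

best, best_s = local_search([0.0, -1.0])         # start at C = 1, gamma = 0.1
for _ in range(10):                              # ILS: perturb, then local search again
    start = [best[0] + random.uniform(-1, 1), best[1] + random.uniform(-1, 1)]
    cand, s = local_search(start)
    if s > best_s:
        best, best_s = cand, s
print("best (log10 C, log10 gamma):", best, "CV accuracy:", round(best_s, 3))
```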
DeepTrend: A Deep Hierarchical Neural Network for Traffic Flow
Prediction | cs.LG | In this paper, we consider the temporal pattern in traffic flow time series,
and implement a deep learning model for traffic flow prediction. Detrending
based methods decompose original flow series into trend and residual series, in
which trend describes the fixed temporal pattern in traffic flow and residual
series is used for prediction. Inspired by the detrending method, we propose
DeepTrend, a deep hierarchical neural network used for traffic flow prediction
which considers and extracts the time-variant trend. DeepTrend has two stacked
layers: extraction layer and prediction layer. Extraction layer, a fully
connected layer, is used to extract the time-variant trend in traffic flow by
feeding the original flow series concatenated with corresponding simple average
trend series. Prediction layer, an LSTM layer, is used to make flow prediction
by feeding the obtained trend from the output of extraction layer and
calculated residual series. To make the model more effective, DeepTrend needs
to be first pre-trained layer-by-layer and then fine-tuned as an entire network.
Experiments show that DeepTrend can noticeably boost the prediction performance
compared with some traditional prediction models and LSTM with detrending based
methods.
| Xingyuan Dai, Rui Fu, Yilun Lin, Li Li, Fei-Yue Wang | null | 1707.03213 | null | null |
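
The following is a rough, hedged PyTorch sketch of the two stacked layers described in the abstract above: a fully connected extraction layer fed with the flow series concatenated with its simple average trend, and an LSTM prediction layer fed with the extracted trend and residual. The window length and hidden size are assumptions.

```python
# Rough sketch of the extraction layer + LSTM prediction layer described above;
# window length and hidden size are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class DeepTrendSketch(nn.Module):
    def __init__(self, window=12, hidden=64):
        super().__init__()
        # Extraction layer: fully connected, fed with the raw flow window
        # concatenated with a simple average-trend window.
        self.extract = nn.Linear(2 * window, window)
        # Prediction layer: LSTM over the (trend, residual) sequence.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, flow, avg_trend):                 # both: (batch, window)
        trend = self.extract(torch.cat([flow, avg_trend], dim=1))  # time-variant trend
        residual = flow - trend
        seq = torch.stack([trend, residual], dim=-1)    # (batch, window, 2)
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])                       # next-step flow prediction

model = DeepTrendSketch()
pred = model(torch.randn(4, 12), torch.randn(4, 12))
print(pred.shape)                                       # torch.Size([4, 1])
```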
Similarity Search Over Graphs Using Localized Spectral Analysis | cs.AI cs.LG | This paper provides a new similarity detection algorithm. Given an input set
of multi-dimensional data points and an additional reference data point for
similarity finding, the algorithm uses a kernel method that embeds the data
points into a low-dimensional manifold. Unlike other kernel methods, which consider the
entire data for the embedding, our method selects a specific set of kernel
eigenvectors. The eigenvectors are chosen to separate between the data points
and the reference data point so that similar data points can be easily
identified as being distinct from most of the members in the dataset.
| Yariv Aizenbud, Amir Averbuch, Gil Shabat and Guy Ziv | null | 1707.03311 | null | null |
Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization | math.OC cs.CC cs.LG stat.ML | In this paper, we consider multi-stage stochastic optimization problems with
convex objectives and conic constraints at each stage. We present a new
stochastic first-order method, namely the dynamic stochastic approximation
(DSA) algorithm, for solving these types of stochastic optimization problems.
We show that DSA can achieve an optimal ${\cal O}(1/\epsilon^4)$ rate of
convergence in terms of the total number of required scenarios when applied to
a three-stage stochastic optimization problem. We further show that this rate
of convergence can be improved to ${\cal O}(1/\epsilon^2)$ when the objective
function is strongly convex. We also discuss variants of DSA for solving more
general multi-stage stochastic optimization problems with the number of stages
$T > 3$. The developed DSA algorithms only need to go through the scenario tree
once in order to compute an $\epsilon$-solution of the multi-stage stochastic
optimization problem. As a result, the memory required by DSA only grows
linearly with respect to the number of stages. To the best of our knowledge,
this is the first time that stochastic approximation type methods are
generalized for multi-stage stochastic optimization with $T \ge 3$.
| Guanghui Lan and Zhiqiang Zhou | null | 1707.03324 | null | null |
Deep Learning for Real Time Crime Forecasting | math.NA cs.LG stat.ML | Accurate real time crime prediction is a fundamental issue for public safety,
but remains a challenging problem for the scientific community. Crime
occurrences depend on many complex factors. Compared to many predictable
events, crime is sparse. At different spatio-temporal scales, crime
distributions display dramatically different patterns. These distributions are
of very low regularity in both space and time. In this work, we adapt the
state-of-the-art deep learning spatio-temporal predictor, ST-ResNet [Zhang et
al, AAAI, 2017], to collectively predict crime distribution over the Los
Angeles area. Our models have two stages. First, we preprocess the raw crime
data. This includes regularization in both space and time to enhance
predictable signals. Second, we adapt hierarchical structures of residual
convolutional units to train multi-factor crime prediction models. Experiments
over a half year period in Los Angeles reveal highly accurate predictive power
of our models.
| Bao Wang, Duo Zhang, Duanhao Zhang, P.Jeffery Brantingham, Andrea L.
Bertozzi | null | 1707.0334 | null | null |
Fast Amortized Inference and Learning in Log-linear Models with Randomly
Perturbed Nearest Neighbor Search | cs.LG stat.ML | Inference in log-linear models scales linearly in the size of output space in
the worst-case. This is often a bottleneck in natural language processing and
computer vision tasks when the output space is feasibly enumerable but very
large. We propose a method to perform inference in log-linear models with
sublinear amortized cost. Our idea hinges on using Gumbel random variable
perturbations and a pre-computed Maximum Inner Product Search data structure to
access the most-likely elements in sublinear amortized time. Our method yields
provable runtime and accuracy guarantees. Further, we present empirical
experiments on ImageNet and Word Embeddings showing significant speedups for
sampling, inference, and learning in log-linear models.
| Stephen Mussmann, Daniel Levy, Stefano Ermon | null | 1707.03372 | null | null |
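
A toy NumPy sketch of the core idea in the abstract above: adding Gumbel noise to the unnormalised log-probabilities turns sampling from a log-linear model into an argmax, which the paper then answers in sublinear amortised time with a pre-computed Maximum Inner Product Search (MIPS) index. Here the MIPS query is replaced by a brute-force argmax for clarity, and the sizes are arbitrary.

```python
# Gumbel perturbations turn sampling from a log-linear model into an argmax;
# the paper serves that argmax with a pre-computed MIPS index, replaced here by
# a brute-force search. All sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100_000, 32                   # output space size, feature dimension
W = rng.normal(size=(n, d))          # per-output feature vectors
theta = rng.normal(size=d)           # model parameters

logits = W @ theta                   # unnormalised log-probabilities
gumbel = -np.log(-np.log(rng.uniform(size=n)))   # Gumbel(0, 1) noise

# argmax of (logits + gumbel) is an exact sample from softmax(logits); the
# paper answers this argmax with a Maximum Inner Product Search structure over
# the rows of W in sublinear amortised time.
sample = np.argmax(logits + gumbel)
print("sampled output index:", sample)
```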
Imitation from Observation: Learning to Imitate Behaviors from Raw Video
via Context Translation | cs.LG cs.AI cs.CV cs.NE cs.RO | Imitation learning is an effective approach for autonomous systems to acquire
control policies when an explicit reward function is unavailable, using
supervision provided as demonstrations from an expert, typically a human
operator. However, standard imitation learning methods assume that the agent
receives examples of observation-action tuples that could be provided, for
instance, to a supervised learning algorithm. This stands in contrast to how
humans and animals imitate: we observe another person performing some behavior
and then figure out which actions will realize that behavior, compensating for
changes in viewpoint, surroundings, object positions and types, and other
factors. We term this kind of imitation learning "imitation-from-observation,"
and propose an imitation learning method based on video prediction with context
translation and deep reinforcement learning. This lifts the assumption in
imitation learning that the demonstration should consist of observations in the
same environment configuration, and enables a variety of interesting
applications, including learning robotic skills that involve tool use simply by
observing videos of human tool use. Our experimental results show the
effectiveness of our approach in learning a wide range of real-world robotic
tasks modeled after common household chores from videos of a human
demonstrator, including sweeping, ladling almonds, pushing objects as well as a
number of tasks in simulation.
| YuXuan Liu, Abhishek Gupta, Pieter Abbeel, Sergey Levine | null | 1707.03374 | null | null |
Learning like humans with Deep Symbolic Networks | cs.AI cond-mat.dis-nn cs.LG | We introduce the Deep Symbolic Network (DSN) model, which aims at becoming
the white-box version of Deep Neural Networks (DNN). The DSN model provides a
simple, universal yet powerful structure, similar to DNN, to represent any
knowledge of the world, which is transparent to humans. The conjecture behind
the DSN model is that any type of real world objects sharing enough common
features are mapped into human brains as a symbol. Those symbols are connected
by links, representing the composition, correlation, causality, or other
relationships between them, forming a deep, hierarchical symbolic network
structure. Powered by such a structure, the DSN model is expected to learn like
humans, because of its unique characteristics. First, it is universal, using
the same structure to store any knowledge. Second, it can learn symbols from
the world and construct the deep symbolic networks automatically, by utilizing
the fact that real world objects have been naturally separated by
singularities. Third, it is symbolic, with the capacity of performing causal
deduction and generalization. Fourth, the symbols and the links between them
are transparent to us, and thus we will know what it has learned or not - which
is the key for the security of an AI system. Fifth, its transparency enables it
to learn with relatively small data. Sixth, its knowledge can be accumulated.
Last but not least, it is more friendly to unsupervised learning than DNN. We
present the details of the model, the algorithm powering its automatic learning
ability, and describe its usefulness in different use cases. The purpose of
this paper is to generate broad interest to develop it within an open source
project centered on the Deep Symbolic Network (DSN) model towards the
development of general AI.
| Qunzhi Zhang and Didier Sornette | null | 1707.03377 | null | null |
DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural
Networks | stat.ML cs.LG | In this paper we develop a novel computational sensing framework for sensing
and recovering structured signals. When trained on a set of representative
signals, our framework learns to take undersampled measurements and recover
signals from them using a deep convolutional neural network. In other words, it
learns a transformation from the original signals to a near-optimal number of
undersampled measurements and the inverse transformation from measurements to
signals. This is in contrast to traditional compressive sensing (CS) systems
that use random linear measurements and convex optimization or iterative
algorithms for signal recovery. We compare our new framework with
$\ell_1$-minimization from the phase transition point of view and demonstrate
that it outperforms $\ell_1$-minimization in the regions of phase transition
plot where $\ell_1$-minimization cannot recover the exact solution. In
addition, we experimentally demonstrate how learning measurements enhances the
overall recovery performance, speeds up training of recovery framework, and
leads to having fewer parameters to learn.
| Ali Mousavi, Gautam Dasarathy, Richard G. Baraniuk | null | 1707.03386 | null | null |
SCAN: Learning Hierarchical Compositional Visual Concepts | stat.ML cs.LG | The seemingly infinite diversity of the natural world arises from a
relatively small set of coherent rules, such as the laws of physics or
chemistry. We conjecture that these rules give rise to regularities that can be
discovered through primarily unsupervised experiences and represented as
abstract concepts. If such representations are compositional and hierarchical,
they can be recombined into an exponentially large set of new concepts. This
paper describes SCAN (Symbol-Concept Association Network), a new framework for
learning such abstractions in the visual domain. SCAN learns concepts through
fast symbol association, grounding them in disentangled visual primitives that
are discovered in an unsupervised manner. Unlike state of the art multimodal
generative model baselines, our approach requires very few pairings between
symbols and images and makes no assumptions about the form of symbol
representations. Once trained, SCAN is capable of multimodal bi-directional
inference, generating a diverse set of image samples from symbolic descriptions
and vice versa. It also allows for traversal and manipulation of the implicit
hierarchy of visual concepts through symbolic instructions and learnt logical
recombination operations. Such manipulations enable SCAN to break away from its
training data distribution and imagine novel visual concepts through
symbolically instructed recombination of previously learnt concepts.
| Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P
Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis,
Alexander Lerchner | null | 1707.03389 | null | null |
Multi-Task Learning Using Neighborhood Kernels | cs.LG stat.ML | This paper introduces a new and effective algorithm for learning kernels in a
Multi-Task Learning (MTL) setting. Although we consider an MTL scenario here,
our approach can be easily applied to standard single-task learning as well.
As shown by our empirical results, our algorithm consistently outperforms the
traditional kernel learning algorithms such as uniform combination solution,
convex combinations of base kernels as well as some kernel alignment-based
models, which have been proven to give promising results in the past. We
present a Rademacher complexity bound based on which a new Multi-Task Multiple
Kernel Learning (MT-MKL) model is derived. In particular, we propose a Support
Vector Machine-regularized model in which, for each task, an optimal kernel is
learned based on a neighborhood-defining kernel that is not restricted to be
positive semi-definite. Comparative experimental results are showcased that
underline the merits of our neighborhood-defining framework in both
classification and regression problems.
| Niloofar Yousefi, Cong Li, Mansooreh Mollaghasemi, Georgios
Anagnostopoulos and Michael Georgiopoulos | null | 1707.03426 | null | null |
Initialising Kernel Adaptive Filters via Probabilistic Inference | stat.ML cs.LG | We present a probabilistic framework for both (i) determining the initial
settings of kernel adaptive filters (KAFs) and (ii) constructing fully-adaptive
KAFs whereby in addition to weights and dictionaries, kernel parameters are
learnt sequentially. This is achieved by formulating the estimator as a
probabilistic model and defining dedicated prior distributions over the kernel
parameters, weights and dictionary, enforcing desired properties such as
sparsity. The model can then be trained using a subset of data to initialise
standard KAFs or updated sequentially each time a new observation becomes
available. Due to the nonlinear/non-Gaussian properties of the model, learning
and inference are achieved using gradient-based maximum-a-posteriori
optimisation and Markov chain Monte Carlo methods, and can be confidently used
to compute predictions. The proposed framework was validated on nonlinear time
series of both synthetic and real-world nature, where it outperformed standard
KAFs in terms of mean square error and the sparsity of the learnt dictionaries.
| Iv\'an Castro, Crist\'obal Silva, Felipe Tobar | null | 1707.0345 | null | null |
Machine Learning in Appearance-based Robot Self-localization | cs.CV cs.LG cs.RO stat.AP | An appearance-based robot self-localization problem is considered in the
machine learning framework. The appearance space is composed of all possible
images, which can be captured by a robot's visual system under all robot
localizations. Using recent manifold learning and deep learning techniques, we
propose a new geometrically motivated solution based on training data
consisting of a finite set of images captured in known locations of the robot.
The solution includes estimation of the robot localization mapping from the
appearance space to the robot localization space, as well as estimation of the
inverse mapping for modeling visual image features. The latter allows solving
the robot localization problem as the Kalman filtering problem.
| Alexander Kuleshov, Alexander Bernstein, Evgeny Burnaev, Yury Yanovich | null | 1707.03469 | null | null |
Value Prediction Network | cs.AI cs.LG | This paper proposes a novel deep reinforcement learning (RL) architecture,
called Value Prediction Network (VPN), which integrates model-free and
model-based RL methods into a single neural network. In contrast to typical
model-based RL methods, VPN learns a dynamics model whose abstract states are
trained to make option-conditional predictions of future values (discounted sum
of rewards) rather than of future observations. Our experimental results show
that VPN has several advantages over both model-free and model-based baselines
in a stochastic environment where careful planning is required but building an
accurate observation-prediction model is difficult. Furthermore, VPN
outperforms Deep Q-Network (DQN) on several Atari games even with
short-lookahead planning, demonstrating its potential as a new way of learning
a good state representation.
| Junhyuk Oh, Satinder Singh, Honglak Lee | null | 1707.03497 | null | null |
Deep Learning for Sensor-based Activity Recognition: A Survey | cs.CV cs.AI cs.LG cs.NE | Sensor-based activity recognition seeks the profound high-level knowledge
about human activities from multitudes of low-level sensor readings.
Conventional pattern recognition approaches have made tremendous progress in
the past years. However, those methods often heavily rely on heuristic
hand-crafted feature extraction, which could hinder their generalization
performance. Additionally, existing methods are ill-suited to unsupervised and
incremental learning tasks. Recently, the advancement of deep learning has made
it possible to perform automatic high-level feature extraction, thus achieving
promising performance in many areas. Since then, deep learning based
methods have been widely adopted for the sensor-based activity recognition
tasks. This paper surveys recent advances in deep learning based
sensor-based activity recognition. We summarize existing literature from three
aspects: sensor modality, deep model, and application. We also present detailed
insights on existing work and propose grand challenges for future research.
| Jindong Wang, Yiqiang Chen, Shuji Hao, Xiaohui Peng and Lisha Hu | 10.1016/j.patrec.2018.02.010 | 1707.03502 | null | null |
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex
Problems | math.OC cs.LG | In this paper, we introduce a stochastic projected subgradient method for
weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions---a
wide class of functions which includes the additive and convex composite
classes. At a high-level, the method is an inexact proximal point iteration in
which the strongly convex proximal subproblems are quickly solved with a
specialized stochastic projected subgradient method. The primary contribution
of this paper is a simple proof that the proposed algorithm converges at the
same rate as the stochastic gradient method for smooth nonconvex problems. This
result appears to be the first convergence rate analysis of a stochastic (or
even deterministic) subgradient method for the class of weakly convex
functions.
| Damek Davis, Benjamin Grimmer | null | 1707.03505 | null | null |
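
As a loose illustration of the scheme described above, the hedged NumPy sketch below runs an inexact proximal point loop whose strongly convex subproblems are approximately solved with a few stochastic subgradient steps. The objective (a least-absolute-deviation regression), step sizes, and iteration counts are illustrative assumptions, not the paper's analysis.

```python
# Inexact proximal point loop with stochastic subgradient inner steps on a
# nonsmooth least-absolute-deviation objective. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))
b = A @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

def stoch_subgrad(x, batch=8):
    idx = rng.integers(0, len(b), size=batch)
    r = A[idx] @ x - b[idx]
    return A[idx].T @ np.sign(r) / batch        # subgradient of mean |a_i . x - b_i|

x, rho = np.zeros(10), 1.0
for _ in range(50):                             # outer proximal-point iterations
    center, y = x.copy(), x.copy()
    for t in range(1, 21):                      # inner stochastic subgradient steps
        g = stoch_subgrad(y) + rho * (y - center)   # subgradient of the prox subproblem
        y -= g / (rho * t)                      # 1/(rho * t) step for rho-strong convexity
    x = y
print("mean absolute residual:", np.mean(np.abs(A @ x - b)))
```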
An Introduction to the Practical and Theoretical Aspects of
Mixture-of-Experts Modeling | stat.ML cs.LG | Mixture-of-experts (MoE) models are a powerful paradigm for modeling of data
arising from complex data generating processes (DGPs). In this article, we
demonstrate how different MoE models can be constructed to approximate the
underlying DGPs of arbitrary types of data. Due to the probabilistic nature of
MoE models, we propose the maximum quasi-likelihood (MQL) estimator as a method
for estimating MoE model parameters from data, and we provide conditions under
which MQL estimators are consistent and asymptotically normal. The blockwise
minorization-maximization (blockwise-MM) algorithm framework is proposed as an
all-purpose method for constructing algorithms for obtaining MQL estimators. An
example derivation of a blockwise-MM algorithm is provided. We then present a
method for constructing information criteria for estimating the number of
components in MoE models and provide justification for the classic Bayesian
information criterion (BIC). We explain how MoE models can be used to conduct
classification, clustering, and regression and we illustrate these applications
via a pair of worked examples.
| Hien D. Nguyen and Faicel Chamroukhi | null | 1707.03538 | null | null |
Discriminative Block-Diagonal Representation Learning for Image
Recognition | cs.CV cs.LG cs.MM | Existing research on block-diagonal representation mainly focuses on casting
block-diagonal regularization on training data, while little attention is
dedicated to concurrently learning both block-diagonal representations of
training and test data. In this paper, we propose a discriminative
block-diagonal low-rank representation (BDLRR) method for recognition. In
particular, the elaborate BDLRR is formulated as a joint optimization problem
of shrinking the unfavorable representation from off-block-diagonal elements
and strengthening the compact block-diagonal representation under the
semi-supervised framework of low-rank representation. To this end, we first
impose penalty constraints on the negative representation to eliminate the
correlation between different classes such that the incoherence criterion of
the extra-class representation is boosted. Moreover, a constructed subspace
model is developed to enhance the self-expressive power of training samples and
further build the representation bridge between the training and test samples,
such that the coherence of the learned intra-class representation is
consistently heightened. Finally, the resulting optimization problem is solved
elegantly by employing an alternative optimization strategy, and a simple
recognition algorithm on the learned representation is utilized for final
prediction. Extensive experimental results demonstrate that the proposed method
achieves superb recognition results on four face image datasets, three
character datasets, and the fifteen scene multi-categories dataset. It not only
shows superior potential on image recognition but also outperforms
state-of-the-art methods.
| Zheng Zhang, Yong Xu, Ling Shao, Jian Yang | 10.1109/TNNLS.2017.2712801 | 1707.03548 | null | null |
Multitask Learning for Fine-Grained Twitter Sentiment Analysis | cs.IR cs.CL cs.LG | Traditional sentiment analysis approaches tackle problems like ternary
(3-category) and fine-grained (5-category) classification by learning the tasks
separately. We argue that such classification tasks are correlated and we
propose a multitask approach based on a recurrent neural network that benefits
by jointly learning them. Our study demonstrates the potential of multitask
models on this type of problems and improves the state-of-the-art results in
the fine-grained sentiment classification problem.
| Georgios Balikas, Simon Moura, Massih-Reza Amini | 10.1145/3077136.3080702 | 1707.03569 | null | null |
Adversarial Dropout for Supervised and Semi-supervised Learning | cs.LG cs.CV | Recently, the training with adversarial examples, which are generated by
adding a small but worst-case perturbation on input examples, has been proved
to improve generalization performance of neural networks. In contrast to the
individually biased inputs to enhance the generality, this paper introduces
adversarial dropout, which is a minimal set of dropouts that maximize the
divergence between the outputs from the network with the dropouts and the
training supervisions. The identified adversarial dropout is used to
reconfigure the neural network for training, and we demonstrate that training on
the reconfigured sub-network improves the generalization performance of
supervised and semi-supervised learning tasks on MNIST and CIFAR-10. We
analyzed the trained model to reason the performance improvement, and we found
that adversarial dropout increases the sparsity of neural networks more than
the standard dropout does.
| Sungrae Park, Jun-Keon Park, Su-Jin Shin, Il-Chul Moon | null | 1707.03631 | null | null |
Speaker-independent Speech Separation with Deep Attractor Network | cs.SD cs.LG | Despite the recent success of deep learning for many speech processing tasks,
single-microphone, speaker-independent speech separation remains challenging
for two main reasons. The first reason is the arbitrary order of the target and
masker speakers in the mixture (the permutation problem), and the second is the
unknown number of speakers in the mixture (the output dimension problem). We propose
a novel deep learning framework for speech separation that addresses both of
these issues. We use a neural network to project the time-frequency
representation of the mixture signal into a high-dimensional embedding space. A
reference point (attractor) is created in the embedding space to represent each
speaker, defined as the centroid of that speaker in the embedding space.
The time-frequency embeddings of each speaker are then forced to cluster around
the corresponding attractor point which is used to determine the time-frequency
assignment of the speaker. We propose three methods for finding the attractors
for each source in the embedding space and compare their advantages and
limitations. The objective function for the network is standard signal
reconstruction error which enables end-to-end operation during both training
and test phases. We evaluated our system using the Wall Street Journal dataset
WSJ0 on two and three speaker mixtures and report comparable or better
performance than other state-of-the-art deep learning methods for speech
separation.
| Yi Luo, Zhuo Chen, Nima Mesgarani | 10.1109/TASLP.2018.2795749 | 1707.03634 | null | null |
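
A toy NumPy sketch of the attractor idea described in the abstract above: during training, each speaker's attractor is the centroid of the embeddings of the time-frequency bins it dominates, and soft masks follow from the similarity of every embedding to each attractor. The embeddings here are random stand-ins for a network's output, and all sizes are arbitrary.

```python
# Attractors as centroids of each speaker's time-frequency embeddings, with
# soft masks from embedding/attractor similarity. Embeddings are random
# stand-ins for a network's output; all sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
T, F, D, S = 100, 129, 20, 2             # frames, frequency bins, embedding dim, speakers

V = rng.normal(size=(T * F, D))          # embeddings of all T-F bins (network output)
Y = rng.integers(0, S, size=T * F)       # which speaker dominates each bin (training labels)
Y_onehot = np.eye(S)[Y]                  # (T*F, S)

# Attractor of each speaker: centroid of the embeddings of the bins it dominates.
attractors = (Y_onehot.T @ V) / Y_onehot.sum(axis=0)[:, None]        # (S, D)

# Soft masks: softmax over the similarity between each embedding and each attractor.
scores = V @ attractors.T                                            # (T*F, S)
scores -= scores.max(axis=1, keepdims=True)                          # numerical stability
masks = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
print("mask tensor shape:", masks.reshape(T, F, S).shape)
```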
Underdamped Langevin MCMC: A non-asymptotic analysis | stat.ML cs.LG stat.CO | We study the underdamped Langevin diffusion when the log of the target
distribution is smooth and strongly concave. We present an MCMC algorithm based
on its discretization and show that it achieves $\varepsilon$ error (in
2-Wasserstein distance) in $\mathcal{O}(\sqrt{d}/\varepsilon)$ steps. This is a
significant improvement over the best known rate for overdamped Langevin MCMC,
which is $\mathcal{O}(d/\varepsilon^2)$ steps under the same
smoothness/concavity assumptions.
The underdamped Langevin MCMC scheme can be viewed as a version of
Hamiltonian Monte Carlo (HMC) which has been observed to outperform overdamped
Langevin MCMC methods in a number of application areas. We provide quantitative
rates that support this empirical wisdom.
| Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett and Michael I.
Jordan | null | 1707.03663 | null | null |
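
For intuition, the hedged NumPy sketch below runs a naive Euler-Maruyama discretisation of the underdamped Langevin dynamics on a toy Gaussian target. The paper analyses a more careful discretisation, so this only shows the dynamics; the friction, step size, and iteration count are arbitrary choices.

```python
# Naive Euler-Maruyama discretisation of underdamped Langevin dynamics for a
# toy Gaussian target; constants are arbitrary and not the paper's scheme.
import numpy as np

rng = np.random.default_rng(0)
d, gamma, u, h, steps = 2, 2.0, 1.0, 0.01, 20_000

def grad_U(x):
    return x                              # target N(0, I): U(x) = ||x||^2 / 2

x, v = np.zeros(d), np.zeros(d)
samples = []
for _ in range(steps):
    noise = rng.normal(size=d)
    v += (-gamma * v - u * grad_U(x)) * h + np.sqrt(2 * gamma * u * h) * noise
    x += v * h
    samples.append(x.copy())

samples = np.array(samples[steps // 2:])  # discard burn-in
print("sample mean:", samples.mean(axis=0))
print("sample variances:", samples.var(axis=0))   # should be close to 1
```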
A Deep Learning Approach for Blind Drift Calibration of Sensor Networks | cs.LG cs.DC | Temporal drift of sensory data is a severe problem impacting the data quality
of wireless sensor networks (WSNs). With the proliferation of large-scale and
long-term WSNs, it is becoming more important to calibrate sensors when the
ground truth is unavailable. This problem is called "blind calibration". In
this paper, we propose a novel deep learning method named projection-recovery
network (PRNet) to blindly calibrate sensor measurements online. The PRNet
first projects the drifted data to a feature space, and uses a powerful deep
convolutional neural network to recover the estimated drift-free measurements.
We deploy a 24-sensor testbed and provide comprehensive empirical evidence
showing that the proposed method significantly improves the sensing accuracy
and drifted sensor detection. Compared with previous methods, PRNet can
calibrate 2x of drifted sensors at the recovery rate of 80% under the same
level of accuracy requirement. We also provide helpful insights for designing
deep neural networks for sensor calibration. We hope our proposed simple and
effective approach will serve as a solid baseline in blind drift calibration of
sensor networks.
| Yuzhi Wang and Anqi Yang and Xiaoming Chen and Pengjun Wang and Yu
Wang and Huazhong Yang | 10.1109/JSEN.2017.2703885 | 1707.03682 | null | null |
LinkNet: Exploiting Encoder Representations for Efficient Semantic
Segmentation | cs.CV cs.LG | Pixel-wise semantic segmentation for visual scene understanding not only
needs to be accurate, but also efficient in order to find any use in real-time
application. Existing algorithms, even though accurate, do not focus on
utilizing the parameters of the neural network efficiently. As a result, they are
huge in terms of parameters and number of operations, and hence slow. In this
paper, we propose a novel deep neural network architecture which allows it to
learn without any significant increase in number of parameters. Our network
uses only 11.5 million parameters and 21.2 GFLOPs for processing an image of
resolution 3x640x360. It gives state-of-the-art performance on CamVid and
comparable results on the Cityscapes dataset. We also compare our network's
processing time on an NVIDIA GPU and an embedded system device with existing
state-of-the-art architectures for different image resolutions.
| Abhishek Chaurasia and Eugenio Culurciello | 10.1109/VCIP.2017.8305148 | 1707.03718 | null | null |
Fastest Convergence for Q-learning | cs.SY cs.LG math.OC | The Zap Q-learning algorithm introduced in this paper is an improvement of
Watkins' original algorithm and recent competitors in several respects. It is a
matrix-gain algorithm designed so that its asymptotic variance is optimal.
Moreover, an ODE analysis suggests that the transient behavior is a close match
to a deterministic Newton-Raphson implementation. This is made possible by a
two time-scale update equation for the matrix gain sequence.
The analysis suggests that the approach will lead to stable and efficient
computation even for non-ideal parameterized settings. Numerical experiments
confirm the quick convergence, even in such non-ideal cases.
A secondary goal of this paper is tutorial. The first half of the paper
contains a survey on reinforcement learning algorithms, with a focus on minimum
variance algorithms.
| Adithya M. Devraj and Sean P. Meyn | null | 1707.0377 | null | null |
Source-Target Inference Models for Spatial Instruction Understanding | cs.CL cs.AI cs.LG cs.RO | Models that can execute natural language instructions for situated robotic
tasks such as assembly and navigation have several useful applications in
homes, offices, and remote scenarios. We study the semantics of
spatially-referred configuration and arrangement instructions, based on the
challenging Bisk-2016 blank-labeled block dataset. This task involves finding a
source block and moving it to the target position (mentioned via a reference
block and offset), where the blocks have no names or colors and are just
referred to via spatial location features. We present novel models for the
subtasks of source block classification and target position regression, based
on joint-loss language and spatial-world representation learning, as well as
CNN-based and dual attention models to compute the alignment between the world
blocks and the instruction phrases. For target position prediction, we compare
two inference approaches: annealed sampling via policy gradient versus
expectation inference via supervised regression. Our models achieve the new
state-of-the-art on this task, with an improvement of 47% on source block
accuracy and 22% on target position distance.
| Hao Tan, Mohit Bansal | null | 1707.03804 | null | null |
Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via
Ranking | stat.ML cs.LG cs.SI | Methods that learn representations of nodes in a graph play a critical role
in network analysis since they enable many downstream learning tasks. We
propose Graph2Gauss - an approach that can efficiently learn versatile node
embeddings on large scale (attributed) graphs that show strong performance on
tasks such as link prediction and node classification. Unlike most approaches
that represent nodes as point vectors in a low-dimensional continuous space, we
embed each node as a Gaussian distribution, allowing us to capture uncertainty
about the representation. Furthermore, we propose an unsupervised method that
handles inductive learning scenarios and is applicable to different types of
graphs: plain/attributed, directed/undirected. By leveraging both the network
structure and the associated node attributes, we are able to generalize to
unseen nodes without additional training. To learn the embeddings we adopt a
personalized ranking formulation w.r.t. the node distances that exploits the
natural ordering of the nodes imposed by the network structure. Experiments on
real world networks demonstrate the high performance of our approach,
outperforming state-of-the-art network embedding methods on several different
tasks. Additionally, we demonstrate the benefits of modeling uncertainty - by
analyzing it we can estimate neighborhood diversity and detect the intrinsic
latent dimensionality of a graph.
| Aleksandar Bojchevski, Stephan G\"unnemann | null | 1707.03815 | null | null |
Process Monitoring on Sequences of System Call Count Vectors | cs.CR cs.LG stat.ML | We introduce a methodology for efficient monitoring of processes running on
hosts in a corporate network. The methodology is based on collecting streams of
system calls produced by all or selected processes on the hosts, and sending
them over the network to a monitoring server, where machine learning algorithms
are used to identify changes in process behavior due to malicious activity,
hardware failures, or software errors. The methodology uses a sequence of
system call count vectors as the data format which can handle large and varying
volumes of data.
Unlike previous approaches, the methodology introduced in this paper is
suitable for distributed collection and processing of data in large corporate
networks. We evaluate the methodology both in a laboratory setting on a
real-life setup and provide statistics characterizing performance and accuracy
of the methodology.
| Michael Dymshits, Ben Myara, David Tolpin | null | 1707.03821 | null | null |
Reduced Electron Exposure for Energy-Dispersive Spectroscopy using
Dynamic Sampling | cs.LG cs.CV | Analytical electron microscopy and spectroscopy of biological specimens,
polymers, and other beam sensitive materials has been a challenging area due to
irradiation damage. There is a pressing need to develop novel imaging and
spectroscopic imaging methods that will minimize such sample damage as well as
reduce the data acquisition time. The latter is useful for high-throughput
analysis of materials structure and chemistry. In this work, we present a novel
machine learning based method for dynamic sparse sampling of EDS data using a
scanning electron microscope. Our method, based on the supervised learning
approach for dynamic sampling algorithm and neural networks based
classification of EDS data, allows a dramatic reduction in the total sampling
of up to 90%, while maintaining the fidelity of the reconstructed elemental
maps and spectroscopic data. We believe this approach will enable imaging and
elemental mapping of materials that would otherwise be inaccessible to these
analysis techniques.
| Yan Zhang, G. M. Dilshan Godaliyadda, Nicola Ferrier, Emine B. Gulsoy,
Charles A. Bouman, Charudatta Phatak | null | 1707.03848 | null | null |
Estimating the unseen from multiple populations | cs.LG stat.ML | Given samples from a distribution, how many new elements should we expect to
find if we continue sampling this distribution? This is an important and
actively studied problem, with many applications ranging from unseen species
estimation to genomics. We generalize this extrapolation and related unseen
estimation problems to the multiple population setting, where population $j$
has an unknown distribution $D_j$ from which we observe $n_j$ samples. We
derive an optimal estimator for the total number of elements we expect to find
among new samples across the populations. Surprisingly, we prove that our
estimator's accuracy is independent of the number of populations. We also
develop an efficient optimization algorithm to solve the more general problem
of estimating multi-population frequency distributions. We validate our methods
and theory through extensive experiments. Finally, on a real dataset of human
genomes across multiple ancestries, we demonstrate how our approach for unseen
estimation can enable cohort designs that can discover interesting mutations
with greater efficiency.
| Aditi Raghunathan, Greg Valiant, James Zou | null | 1707.03854 | null | null |
Quasar: Datasets for Question Answering by Search and Reading | cs.CL cs.IR cs.LG | We present two new large-scale datasets aimed at evaluating systems designed
to comprehend a natural language query and extract its answer from a large
corpus of text. The Quasar-S dataset consists of 37000 cloze-style
(fill-in-the-gap) queries constructed from definitions of software entity tags
on the popular website Stack Overflow. The posts and comments on the website
serve as the background corpus for answering the cloze questions. The Quasar-T
dataset consists of 43000 open-domain trivia questions and their answers
obtained from various internet sources. ClueWeb09 serves as the background
corpus for extracting these answers. We pose these datasets as a challenge for
two related subtasks of factoid Question Answering: (1) searching for relevant
pieces of text that include the correct answer to a query, and (2) reading the
retrieved text to answer the query. We also describe a retrieval system for
extracting relevant sentences and documents from the corpus given a query, and
include these in the release for researchers wishing to only focus on (2). We
evaluate several baselines on both datasets, ranging from simple heuristics to
powerful neural models, and show that these lag behind human performance by
16.4% and 32.1% for Quasar-S and -T respectively. The datasets are available at
https://github.com/bdhingra/quasar .
| Bhuwan Dhingra, Kathryn Mazaitis and William W. Cohen | null | 1707.03904 | null | null |
Influence of Resampling on Accuracy of Imbalanced Classification | stat.ML cs.LG stat.AP | In many real-world binary classification tasks (e.g. detection of certain
objects from images), an available dataset is imbalanced, i.e., it has many
fewer representatives of one class (the minor class) than of another.
Generally, accurate prediction of the minor class is crucial but hard to
achieve since there is not much information about the minor class. One approach
to deal with this problem is to preliminarily resample the dataset, i.e., add
new elements to the dataset or remove existing ones. Resampling can be done in
various ways which raises the problem of choosing the most appropriate one. In
this paper we experimentally investigate impact of resampling on classification
accuracy, compare resampling methods and highlight key points and difficulties
of resampling.
| Evgeny Burnaev, Pavel Erofeev, Artem Papanov | 10.1117/12.2228523 | 1707.03905 | null | null |
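
A simple illustration of the resampling step discussed in the abstract above: random oversampling of the minor class before fitting a classifier, compared against training on the original imbalanced data. The synthetic dataset and logistic-regression classifier are illustrative choices.

```python
# Random oversampling of the minor class versus training on the raw imbalanced
# data; dataset and classifier are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: duplicate minor-class examples until the classes balance.
rng = np.random.default_rng(0)
minor = np.where(y_tr == 1)[0]
extra = rng.choice(minor, size=(y_tr == 0).sum() - minor.size, replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

for name, (Xf, yf) in {"original": (X_tr, y_tr), "oversampled": (X_bal, y_bal)}.items():
    clf = LogisticRegression(max_iter=1000).fit(Xf, yf)
    bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"{name:12s} balanced accuracy: {bal_acc:.3f}")
```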
Model Selection for Anomaly Detection | stat.ML cs.LG stat.AP | Anomaly detection based on one-class classification algorithms is broadly
used in many applied domains like image processing (e.g. detection of whether a
patient is "cancerous" or "healthy" from mammography image), network intrusion
detection, etc. Performance of an anomaly detection algorithm crucially depends
on a kernel, used to measure similarity in a feature space. The standard
approaches (e.g. cross-validation) for kernel selection, used in two-class
classification problems, cannot be used directly due to the specific nature of
the data (absence of data from a second, abnormal class). In this paper we generalize
several kernel selection methods from binary-class case to the case of
one-class classification and perform extensive comparison of these approaches
using both synthetic and real-world data.
| Evgeny Burnaev, Pavel Erofeev, Dmitry Smolyakov | 10.1117/12.2228794 | 1707.03909 | null | null |
Representation Learning for Grounded Spatial Reasoning | cs.CL cs.AI cs.LG | The interpretation of spatial references is highly contextual, requiring
joint inference over both language and the environment. We consider the task of
spatial reasoning in a simulated environment, where an agent can act and
receive rewards. The proposed model learns a representation of the world
steered by instruction text. This design allows for precise alignment of local
neighborhoods with corresponding verbalizations, while also handling global
references in the instructions. We train our model with reinforcement learning
using a variant of generalized value iteration. The model outperforms
state-of-the-art approaches on several metrics, yielding a 45% reduction in
goal localization error.
| Michael Janner, Karthik Narasimhan, Regina Barzilay | null | 1707.03938 | null | null |
A Brief Study of In-Domain Transfer and Learning from Fewer Samples
using A Few Simple Priors | cs.AI cs.LG | Domain knowledge can often be encoded in the structure of a network, such as
convolutional layers for vision, which has been shown to increase
generalization and decrease sample complexity, or the number of samples
required for successful learning. In this study, we ask whether sample
complexity can be reduced for systems where the structure of the domain is
unknown beforehand, and the structure and parameters must both be learned from
the data. We show that sample complexity reduction through learning structure
is possible for at least two simple cases. In studying these cases, we also
gain insight into how this might be done for more complex domains.
| Marc Pickett, Ayush Sekhari, James Davidson | null | 1707.03979 | null | null |
Merge or Not? Learning to Group Faces via Imitation Learning | cs.CV cs.LG | Given a large number of unlabeled face images, face grouping aims at
clustering the images into individual identities present in the data. This task
remains a challenging problem despite the remarkable capability of deep
learning approaches in learning face representation. In particular, grouping
results can still be egregious given profile faces and a large number of
uninteresting faces and noisy detections. Often, a user needs to correct the
erroneous grouping manually. In this study, we formulate a novel face grouping
framework that learns clustering strategy from ground-truth simulated behavior.
This is achieved through imitation learning (a.k.a apprenticeship learning or
learning by watching) via inverse reinforcement learning (IRL). In contrast to
existing clustering approaches that group instances by similarity, our
framework makes sequential decision to dynamically decide when to merge two
face instances/groups driven by short- and long-term rewards. Extensive
experiments on three benchmark datasets show that our framework outperforms
unsupervised and supervised baselines.
| Yue He, Kaidi Cao, Cheng Li and Chen Change Loy | null | 1707.03986 | null | null |
On Measuring and Quantifying Performance: Error Rates, Surrogate Loss,
and an Example in SSL | cs.LG cs.CV stat.ML | In various approaches to learning, notably in domain adaptation, active
learning, learning under covariate shift, semi-supervised learning, learning
with concept drift, and the like, one often wants to compare a baseline
classifier to one or more advanced (or at least different) strategies. In this
chapter, we basically argue that if such classifiers, in their respective
training phases, optimize a so-called surrogate loss that it may also be
valuable to compare the behavior of this loss on the test set, next to the
regular classification error rate. It can provide us with an additional view on
the classifiers' relative performances that error rates cannot capture. As an
example, limited but convincing empirical results demonstrates that we may be
able to find semi-supervised learning strategies that can guarantee performance
improvements with increasing numbers of unlabeled data in terms of
log-likelihood. In contrast, the latter may be impossible to guarantee for the
classification error rate.
| Marco Loog, Jesse H. Krijthe, Are C. Jensen | null | 1707.04025 | null | null |
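
A minimal scikit-learn illustration of the point made in the abstract above: report both the test error rate and the test surrogate loss (here the log-loss), since the two can tell different stories about a classifier. The data and model are illustrative.

```python
# Report both the test error rate and the test surrogate loss (log-loss);
# data and model are illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss, zero_one_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test error rate:", round(zero_one_loss(y_te, clf.predict(X_te)), 3))
print("test log-loss  :", round(log_loss(y_te, clf.predict_proba(X_te)), 3))
```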
Kafnets: kernel-based non-parametric activation functions for neural
networks | stat.ML cs.AI cs.LG cs.NE | Neural networks are generally built by interleaving (adaptable) linear layers
with (fixed) nonlinear activation functions. To increase their flexibility,
several authors have proposed methods for adapting the activation functions
themselves, endowing them with varying degrees of flexibility. None of these
approaches, however, have gained wide acceptance in practice, and research in
this topic remains open. In this paper, we introduce a novel family of flexible
activation functions that are based on an inexpensive kernel expansion at every
neuron. Leveraging several properties of kernel-based models, we propose
multiple variations for designing and initializing these kernel activation
functions (KAFs), including a multidimensional scheme allowing to nonlinearly
combine information from different paths in the network. The resulting KAFs can
approximate any mapping defined over a subset of the real line, either convex
or nonconvex. Furthermore, they are smooth over their entire domain, linear in
their parameters, and they can be regularized using any known scheme, including
the use of $\ell_1$ penalties to enforce sparseness. To the best of our
knowledge, no other known model satisfies all these properties simultaneously.
In addition, we provide a relatively complete overview on alternative
techniques for adapting the activation functions, which is currently lacking in
the literature. A large set of experiments validates our proposal.
| Simone Scardapane, Steven Van Vaerenbergh, Simone Totaro, Aurelio
Uncini | null | 1707.04035 | null | null |
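
A hedged PyTorch sketch of a kernel activation function in the spirit of the abstract above: each neuron's activation is a Gaussian-kernel expansion over a fixed dictionary with learnable mixing coefficients, so it is linear in its parameters. The dictionary size, boundary, and bandwidth heuristic are assumptions, not the paper's exact choices.

```python
# Kernel activation function (KAF) sketch: Gaussian-kernel expansion over a
# fixed dictionary with learnable per-unit mixing coefficients.
import torch
import torch.nn as nn

class KAF(nn.Module):
    def __init__(self, num_units, dict_size=20, boundary=3.0):
        super().__init__()
        d = torch.linspace(-boundary, boundary, dict_size)
        self.register_buffer("dictionary", d)                   # fixed dictionary points
        self.gamma = 0.5 / float(d[1] - d[0]) ** 2               # kernel bandwidth heuristic
        self.alpha = nn.Parameter(0.1 * torch.randn(num_units, dict_size))

    def forward(self, x):                                        # x: (batch, num_units)
        k = torch.exp(-self.gamma * (x.unsqueeze(-1) - self.dictionary) ** 2)
        return (k * self.alpha).sum(dim=-1)                      # per-unit kernel expansion

net = nn.Sequential(nn.Linear(10, 32), KAF(32), nn.Linear(32, 1))
print(net(torch.randn(4, 10)).shape)                             # torch.Size([4, 1])
```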
Deep Learning with Topological Signatures | cs.CV cs.LG math.AT | Inferring topological and geometrical information from data can offer an
alternative perspective on machine learning problems. Methods from topological
data analysis, e.g., persistent homology, enable us to obtain such information,
typically in the form of summary representations of topological features.
However, such topological signatures often come with an unusual structure
(e.g., multisets of intervals) that is highly impractical for most machine
learning techniques. While many strategies have been proposed to map these
topological signatures into machine learning compatible representations, they
suffer from being agnostic to the target learning task. In contrast, we propose
a technique that enables us to input topological signatures to deep neural
networks and learn a task-optimal representation during training. Our approach
is realized as a novel input layer with favorable theoretical properties.
Classification experiments on 2D object shapes and social network graphs
demonstrate the versatility of the approach and, in case of the latter, we even
outperform the state-of-the-art by a large margin.
| Christoph Hofer and Roland Kwitt and Marc Niethammer and Andreas Uhl | null | 1707.04041 | null | null |
Stable Distribution Alignment Using the Dual of the Adversarial Distance | cs.LG cs.AI cs.CV | Methods that align distributions by minimizing an adversarial distance
between them have recently achieved impressive results. However, these
approaches are difficult to optimize with gradient descent and they often do
not converge well without careful hyperparameter tuning and proper
initialization. We investigate whether turning the adversarial min-max problem
into an optimization problem by replacing the maximization part with its dual
improves the quality of the resulting alignment and explore its connections to
Maximum Mean Discrepancy. Our empirical results suggest that using the dual
formulation for the restricted family of linear discriminators results in a
more stable convergence to a desirable solution when compared with the
performance of a primal min-max GAN-like objective and an MMD objective under
the same restrictions. We test our hypothesis on the problem of aligning two
synthetic point clouds on a plane and on a real-image domain adaptation problem
on digits. In both cases, the dual formulation yields an iterative procedure
that gives more stable and monotonic improvement over time.
| Ben Usman, Kate Saenko, Brian Kulis | null | 1707.04046 | null | null |
Foolbox: A Python toolbox to benchmark the robustness of machine
learning models | cs.LG cs.CR cs.CV stat.ML | Even today's most advanced machine learning models are easily fooled by almost
imperceptible perturbations of their inputs. Foolbox is a new Python package to
generate such adversarial perturbations and to quantify and compare the
robustness of machine learning models. It is built around the idea that the
most comparable robustness measure is the minimum perturbation needed to craft
an adversarial example. To this end, Foolbox provides reference implementations
of most published adversarial attack methods alongside some new ones, all of
which perform internal hyperparameter tuning to find the minimum adversarial
perturbation. Additionally, Foolbox interfaces with most popular deep learning
frameworks such as PyTorch, Keras, TensorFlow, Theano and MXNet and allows
different adversarial criteria such as targeted misclassification and top-k
misclassification as well as different distance measures. The code is licensed
under the MIT license and is openly available at
https://github.com/bethgelab/foolbox . The most up-to-date documentation can be
found at http://foolbox.readthedocs.io .
| Jonas Rauber, Wieland Brendel, Matthias Bethge | null | 1707.04131 | null | null |
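To make the robustness measure described in the Foolbox abstract above concrete, namely the
minimum perturbation needed to craft an adversarial example, the following framework-agnostic
sketch scans increasing perturbation sizes along a fixed attack direction. The `predict`
callable, the `direction`, and the grid search are illustrative placeholders; this is not the
Foolbox API itself.

```python
import numpy as np

def minimal_adversarial_scale(predict, x, label, direction, eps_grid):
    """Return the smallest scaling of `direction` that changes the prediction.

    predict   -- hypothetical callable mapping an input array to a class label
    direction -- a fixed perturbation direction, e.g. from a gradient-sign step
    eps_grid  -- candidate perturbation sizes, scanned in increasing order
    """
    for eps in sorted(eps_grid):
        x_adv = np.clip(x + eps * direction, 0.0, 1.0)  # stay in the valid input range
        if predict(x_adv) != label:                      # misclassified -> adversarial
            return eps, x_adv
    return None, None                                    # nothing adversarial on this grid
```

A real attack would also tune its own hyperparameters internally, as the abstract notes, rather
than relying on a fixed grid.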
Distral: Robust Multitask Reinforcement Learning | cs.LG stat.ML | Most deep reinforcement learning algorithms are data inefficient in complex
and rich environments, limiting their applicability to many scenarios. One
direction for improving data efficiency is multitask learning with shared
neural network parameters, where efficiency may be improved through transfer
across related tasks. In practice, however, this is not usually observed,
because gradients from different tasks can interfere negatively, making
learning unstable and sometimes even less data efficient. Another issue is the
different reward schemes between tasks, which can easily lead to one task
dominating the learning of a shared model. We propose a new approach for joint
training of multiple tasks, which we refer to as Distral (Distill & transfer
learning). Instead of sharing parameters between the different workers, we
propose to share a "distilled" policy that captures common behaviour across
tasks. Each worker is trained to solve its own task while constrained to stay
close to the shared policy, while the shared policy is trained by distillation
to be the centroid of all task policies. Both aspects of the learning process
are derived by optimizing a joint objective function. We show that our approach
supports efficient transfer on complex 3D environments, outperforming several
related methods. Moreover, the proposed learning process is more robust and
more stable---attributes that are critical in deep reinforcement learning.
| Yee Whye Teh, Victor Bapst, Wojciech Marian Czarnecki, John Quan,
James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu | null | 1707.04175 | null | null |
Be Careful What You Backpropagate: A Case For Linear Output Activations
& Gradient Boosting | cs.LG cs.CV | In this work, we show that saturating output activation functions, such as
the softmax, impede learning on a number of standard classification tasks.
Moreover, we present results showing that the utility of softmax does not stem
from the normalization, as some have speculated. In fact, the normalization
makes things worse. Rather, the advantage is in the exponentiation of error
gradients. This exponential gradient boosting is shown to speed up convergence
and improve generalization. To this end, we demonstrate faster convergence and
better performance on diverse classification tasks: image classification using
CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the
latter case, using the state-of-the-art neural network architecture, the model
converged 33% faster with our method (roughly two days of training less) than
with the standard softmax activation, and with a slightly better performance to
boot.
| Anders Oland and Aayush Bansal and Roger B. Dannenberg and Bhiksha Raj | null | 1707.04199 | null | null |
Learning Features from Co-occurrences: A Theoretical Analysis | cs.CL cs.LG math.ST stat.ML stat.TH | Representing a word by its co-occurrences with other words in context is an
effective way to capture the meaning of the word. However, the theory behind
remains a challenge. In this work, taking the example of a word classification
task, we give a theoretical analysis of the approaches that represent a word X
by a function f(P(C|X)), where C is a context feature, P(C|X) is the
conditional probability estimated from a text corpus, and the function f maps
the co-occurrence measure to a prediction score. We investigate the impact of
context feature C and the function f. We also explain the reasons why using the
co-occurrences with multiple context features may be better than just using a
single one. In addition, some of the results shed light on the theory of
feature learning and machine learning in general.
| Yanpeng Li | null | 1707.04218 | null | null |
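To make the representation f(P(C|X)) from the abstract above concrete, here is a small sketch
that estimates the conditional co-occurrence distribution P(C|X) from a tokenised corpus with a
fixed context window. The window size and vocabulary handling are illustrative choices and are
not taken from the paper.

```python
from collections import Counter, defaultdict

def conditional_cooccurrence(corpus, context_vocab, window=2):
    """Estimate P(C|X) for every word X as the relative frequency of context
    features C (restricted to `context_vocab`) within `window` tokens of X.
    `corpus` is a list of tokenised sentences (lists of strings)."""
    counts = defaultdict(Counter)
    for sent in corpus:
        for i, x in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in context_vocab:
                    counts[x][sent[j]] += 1
    return {x: {c: n / sum(cs.values()) for c, n in cs.items()}
            for x, cs in counts.items()}
```

A prediction score in the sense of the abstract would then apply some function f, for example a
learned linear map, to these conditional probabilities.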
Improving Sparsity in Kernel Adaptive Filters Using a Unit-Norm
Dictionary | stat.ML cs.LG | Kernel adaptive filters, a class of adaptive nonlinear time-series models,
are known for their ability to learn expressive autoregressive patterns from
sequential data. However, for trivial monotonic signals, they struggle to
perform accurate predictions and at the same time keep computational complexity
within desired boundaries. This is because new observations are incorporated
into the dictionary when they are far from what the algorithm has seen in the past.
We propose a novel approach to kernel adaptive filtering that compares new
observations against dictionary samples in terms of their unit-norm
(normalised) versions, meaning that new observations that look like previous
samples but have a different magnitude are not added to the dictionary. We
achieve this by proposing the unit-norm Gaussian kernel and defining a
sparsification criterion for this novel kernel. This new methodology is
validated on two real-world datasets against standard KAF in terms of the
normalised mean square error and the dictionary size.
| Felipe Tobar | null | 1707.04236 | null | null |
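A minimal sketch of the idea described above: evaluate a Gaussian kernel on the unit-norm
versions of the inputs, so that samples differing only in magnitude look identical, and use the
resulting similarity as a dictionary-admission test. The exact kernel definition and
sparsification threshold used in the paper may differ; these are assumptions consistent with the
abstract.

```python
import numpy as np

def unit_norm_gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel evaluated on the normalised (unit-norm) versions of its
    arguments, so two samples that differ only in magnitude are treated alike."""
    xn = x / (np.linalg.norm(x) + 1e-12)
    yn = y / (np.linalg.norm(y) + 1e-12)
    return np.exp(-np.sum((xn - yn) ** 2) / (2.0 * sigma ** 2))

def is_novel(x, dictionary, threshold=0.95, sigma=1.0):
    """Toy sparsification rule: add x to the dictionary only if its unit-norm
    kernel similarity to every stored sample falls below `threshold`."""
    return all(unit_norm_gaussian_kernel(x, d, sigma) < threshold for d in dictionary)
```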
Predicting Abandonment in Online Coding Tutorials | cs.LG cs.AI cs.HC | Learners regularly abandon online coding tutorials when they get bored or
frustrated, but there are few techniques for anticipating this abandonment in order to
intervene. In this paper, we examine the feasibility of predicting abandonment
with machine-learned classifiers. Using interaction logs from an online
programming game, we extracted a collection of features that are potentially
related to learner abandonment and engagement, then developed classifiers for
each level. Across the first five levels of the game, our classifiers
successfully predicted 61% to 76% of learners who did not complete the next
level, achieving an average AUC of 0.68. In these classifiers, features
negatively associated with abandonment included account activation and
help-seeking behaviors, whereas features positively associated with abandonment
included features indicating difficulty and disengagement. These findings
highlight the feasibility of providing timely intervention to learners likely
to quit.
| An Yan, Michael J. Lee, Andrew J. Ko | 10.1109/VLHCC.2017.8103467 | 1707.04291 | null | null |
Coalescent-based species tree estimation: a stochastic Farris transform | cs.LG math.PR math.ST q-bio.PE stat.TH | The reconstruction of a species phylogeny from genomic data faces two
significant hurdles: 1) the trees describing the evolution of each individual
gene--i.e., the gene trees--may differ from the species phylogeny and 2) the
molecular sequences corresponding to each gene often provide limited
information about the gene trees themselves. In this paper we consider an
approach to species tree reconstruction that addresses both these hurdles.
Specifically, we propose an algorithm for phylogeny reconstruction under the
multispecies coalescent model with a standard model of site substitution. The
multispecies coalescent is commonly used to model gene tree discordance due to
incomplete lineage sorting, a well-studied population-genetic effect.
In previous work, an information-theoretic trade-off was derived in this
context between the number of loci, $m$, needed for an accurate reconstruction
and the length of the locus sequences, $k$. It was shown that to reconstruct an
internal branch of length $f$, one needs $m$ to be of the order of $1/[f^{2}
\sqrt{k}]$. That previous result was obtained under the molecular clock
assumption, i.e., under the assumption that mutation rates (as well as
population sizes) are constant across the species phylogeny.
Here we generalize this result beyond the restrictive molecular clock
assumption, and obtain a new reconstruction algorithm that has the same data
requirement (up to log factors). Our main contribution is a novel reduction to
the molecular clock case under the multispecies coalescent. As a corollary, we
also obtain a new identifiability result of independent interest: for any
species tree with $n \geq 3$ species, the rooted species tree can be identified
from the distribution of its unrooted weighted gene trees even in the absence
of a molecular clock.
| Gautam Dasarathy, Elchanan Mossel, Robert Nowak, Sebastien Roch | null | 1707.043 | null | null |
Model compression as constrained optimization, with application to
neural nets. Part II: quantization | cs.LG cs.NE math.OC stat.ML | We consider the problem of deep neural net compression by quantization: given
a large, reference net, we want to quantize its real-valued weights using a
codebook with $K$ entries so that the training loss of the quantized net is
minimal. The codebook can be optimally learned jointly with the net, or fixed,
as for binarization or ternarization approaches. Previous work has quantized
the weights of the reference net, or incorporated rounding operations in the
backpropagation algorithm, but this has no guarantee of converging to a
loss-optimal, quantized net. We describe a new approach based on the recently
proposed framework of model compression as constrained optimization
\citep{Carreir17a}. This results in a simple iterative "learning-compression"
algorithm, which alternates a step that learns a net of continuous weights with
a step that quantizes (or binarizes/ternarizes) the weights, and is guaranteed
to converge to a local optimum of the loss for quantized nets. We develop
algorithms for an adaptive codebook or a (partially) fixed codebook. The latter
includes binarization, ternarization, powers-of-two and other important
particular cases. We show experimentally that we can achieve much higher
compression rates than previous quantization work (even using just 1 bit per
weight) with negligible loss degradation.
| Miguel \'A. Carreira-Perpi\~n\'an and Yerlan Idelbayev | null | 1707.04319 | null | null |
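A toy rendition of the "learning-compression" alternation described in the abstract above: an
L-step trains the continuous weights while pulling them toward the current quantized weights,
and a C-step re-quantizes. The quadratic coupling, its schedule, and the `grad_loss` gradient
oracle are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def c_step(w, codebook):
    """Compression step: snap each weight to its nearest codebook entry.
    For an adaptive codebook this would instead be a k-means update."""
    idx = np.argmin(np.abs(w[:, None] - codebook[None, :]), axis=1)
    return codebook[idx]

def lc_quantize(w, codebook, grad_loss, mu=1e-3, lr=1e-2, outer=10, inner=200):
    """Alternate an L-step (penalised gradient descent on the loss) with a
    C-step (re-quantization of the current continuous weights)."""
    for _ in range(outer):
        q = c_step(w, codebook)                       # C-step
        for _ in range(inner):                        # L-step
            w = w - lr * (grad_loss(w) + mu * (w - q))
        mu *= 1.5                                     # gradually tighten the coupling
    return c_step(w, codebook)
```

Binarization and ternarization correspond to fixed codebooks such as {-1, +1} or {-1, 0, +1}.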
Tensor-Based Backpropagation in Neural Networks with Non-Sequential
Input | cs.LG | Neural networks have been able to achieve groundbreaking accuracy at tasks
conventionally considered only doable by humans. Using stochastic gradient
descent, optimization in many dimensions is made possible, albeit at a
relatively high computational cost. By splitting training data into batches,
networks can be distributed and trained vastly more efficiently and with
minimal accuracy loss. We have explored the mathematics behind efficiently
implementing tensor-based batch backpropagation algorithms. A common approach
to batch training is iterating over batch items individually. Explicitly using
tensor operations to backpropagate allows training to be performed
non-linearly, increasing computational efficiency.
| Hirsh R. Agarwal, Andrew Huang | null | 1707.04324 | null | null |
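As a small illustration of the batched, tensor-based backpropagation the abstract above
discusses, the sketch below computes the gradients of a dense layer for a whole batch with a few
matrix products instead of a per-item loop. Shapes and names are illustrative.

```python
import numpy as np

def dense_layer_batch_backprop(X, W, delta_out):
    """Backpropagate through a dense layer for an entire batch using tensor
    operations rather than iterating over batch items one by one.
    Shapes: X (batch, n_in), W (n_in, n_out), delta_out (batch, n_out)."""
    grad_W = X.T @ delta_out           # (n_in, n_out): gradients summed over the batch
    grad_b = delta_out.sum(axis=0)     # (n_out,)
    delta_in = delta_out @ W.T         # (batch, n_in): passed to the previous layer
    return grad_W, grad_b, delta_in
```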
Human-Level Intelligence or Animal-Like Abilities? | cs.AI cs.CY cs.LG stat.ML | The vision systems of the eagle and the snake outperform everything that we
can make in the laboratory, but snakes and eagles cannot build an eyeglass or a
telescope or a microscope. (Judea Pearl)
| Adnan Darwiche | null | 1707.04327 | null | null |
Weakly Submodular Maximization Beyond Cardinality Constraints: Does
Randomization Help Greedy? | cs.DM cs.AI cs.DS cs.LG stat.ML | Submodular functions are a broad class of set functions, which naturally
arise in diverse areas. Many algorithms have been suggested for the
maximization of these functions. Unfortunately, once the function deviates from
submodularity, the known algorithms may perform arbitrarily poorly. Amending
this issue, by obtaining approximation results for set functions generalizing
submodular functions, has been the focus of recent works.
One such class, known as weakly submodular functions, has received a lot of
attention. A key result proved by Das and Kempe (2011) showed that the
approximation ratio of the greedy algorithm for weakly submodular maximization
subject to a cardinality constraint degrades smoothly with the distance from
submodularity. However, no results have been obtained for maximization subject
to constraints beyond cardinality. In particular, it is not known whether the
greedy algorithm achieves any non-trivial approximation ratio for such
constraints.
In this paper, we prove that a randomized version of the greedy algorithm
(previously used by Buchbinder et al. (2014) for a different problem) achieves
an approximation ratio of $(1 + 1/\gamma)^{-2}$ for the maximization of a
weakly submodular function subject to a general matroid constraint, where
$\gamma$ is a parameter measuring the distance of the function from
submodularity. Moreover, we also experimentally compare the performance of this
version of the greedy algorithm on real world problems against natural
benchmarks, and show that the algorithm we study performs well also in
practice. To the best of our knowledge, this is the first algorithm with a
non-trivial approximation guarantee for maximizing a weakly submodular function
subject to a constraint other than the simple cardinality constraint. In
particular, it is the first algorithm with such a guarantee for the important
and broad class of matroid constraints.
| Lin Chen, Moran Feldman, Amin Karbasi | null | 1707.04347 | null | null |
f-GANs in an Information Geometric Nutshell | cs.LG stat.ML | Nowozin \textit{et al} showed last year how to extend the GAN
\textit{principle} to all $f$-divergences. The approach is elegant but falls
short of a full description of the supervised game, and says little about the
key player, the generator: for example, what does the generator actually
converge to if solving the GAN game means convergence in some space of
parameters? How does that provide hints on the generator's design and compare
to the flourishing but almost exclusively experimental literature on the
subject?
In this paper, we unveil a broad class of distributions for which such
convergence happens --- namely, deformed exponential families, a wide superset
of exponential families --- and show tight connections with the three other key
GAN parameters: loss, game and architecture. In particular, we show that
current deep architectures are able to factorize a very large number of such
densities using an especially compact design, hence displaying the power of
deep architectures and their concinnity in the $f$-GAN game. This result holds
given a sufficient condition on \textit{activation functions} --- which turns
out to be satisfied by popular choices. The key to our results is a variational
generalization of an old theorem that relates the KL divergence between regular
exponential families and divergences between their natural parameters. We
complete this picture with additional results and experimental insights on how
these results may be used to ground further improvements of GAN architectures,
via (i) a principled design of the activation functions in the generator and
(ii) an explicit integration of proper composite losses' link function in the
discriminator.
| Richard Nock and Zac Cranko and Aditya Krishna Menon and Lizhen Qu and
Robert C. Williamson | null | 1707.04385 | null | null |
Lenient Multi-Agent Deep Reinforcement Learning | cs.MA cs.AI cs.LG | Much of the success of single agent deep reinforcement learning (DRL) in
recent years can be attributed to the use of experience replay memories (ERM),
which allow Deep Q-Networks (DQNs) to be trained efficiently through sampling
stored state transitions. However, care is required when using ERMs for
multi-agent deep reinforcement learning (MA-DRL), as stored transitions can
become outdated because agents update their policies in parallel [11]. In this
work we apply leniency [23] to MA-DRL. Lenient agents map state-action pairs to
decaying temperature values that control the amount of leniency applied towards
negative policy updates that are sampled from the ERM. This introduces optimism
in the value-function update, and has been shown to facilitate cooperation in
tabular fully-cooperative multi-agent reinforcement learning problems. We
evaluate our Lenient-DQN (LDQN) empirically against the related Hysteretic-DQN
(HDQN) algorithm [22] as well as a modified version we call scheduled-HDQN,
that uses average reward learning near terminal states. Evaluations take place
in extended variations of the Coordinated Multi-Agent Object Transportation
Problem (CMOTP) [8] which include fully-cooperative sub-tasks and stochastic
rewards. We find that LDQN agents are more likely to converge to the optimal
policy in a stochastic reward CMOTP compared to standard and scheduled-HDQN
agents.
| Gregory Palmer, Karl Tuyls, Daan Bloembergen, Rahul Savani | null | 1707.04402 | null | null |
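A tabular-style sketch of the leniency mechanism the LDQN abstract describes: each state-action
pair carries a decaying temperature, positive TD errors are always applied, and negative ones
are ignored with a probability that grows with that temperature. The specific leniency and decay
schedules below are illustrative assumptions, not the paper's.

```python
import numpy as np

def lenient_update(q, temp, s, a, r, s_next,
                   alpha=0.1, gamma=0.99, k=2.0, decay=0.995):
    """One lenient TD update on tabular Q-values `q` and temperatures `temp`."""
    td = r + gamma * np.max(q[s_next]) - q[s, a]
    leniency = 1.0 - np.exp(-k * temp[s, a])   # hotter (less visited) pairs are forgiven more
    if td >= 0 or np.random.rand() >= leniency:
        q[s, a] += alpha * td                  # apply positive errors, or unlucky negative ones
    temp[s, a] *= decay                        # cool down with every visit
    return q, temp
```

In the deep variant, the same idea gates which sampled transitions from the replay memory are
allowed to produce negative updates.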
Guiding InfoGAN with Semi-Supervision | cs.CV cs.LG | In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN)
for image synthesis that leverages information from few labels (as little as
0.22%, max. 10% of the dataset) to learn semantically meaningful and
controllable data representations where latent variables correspond to label
categories. The architecture builds on Information Maximizing Generative
Adversarial Networks (InfoGAN) and is shown to learn both continuous and
categorical codes and achieves higher quality of synthetic samples compared to
fully unsupervised settings. Furthermore, we show that using small amounts of
labeled data speeds-up training convergence. The architecture maintains the
ability to disentangle latent variables for which no labels are available.
Finally, we contribute an information-theoretic reasoning on how introducing
semi-supervision increases mutual information between synthetic and real data.
| Adrian Spurr, Emre Aksan, Otmar Hilliges | null | 1707.04487 | null | null |
Capturing the diversity of biological tuning curves using generative
adversarial networks | q-bio.QM cs.LG q-bio.NC | Tuning curves characterizing the response selectivities of biological neurons
often exhibit large degrees of irregularity and diversity across neurons.
Theoretical network models that feature heterogeneous cell populations or
random connectivity also give rise to diverse tuning curves. However, a general
framework for fitting such models to experimentally measured tuning curves is
lacking. We address this problem by proposing to view mechanistic network
models as generative models whose parameters can be optimized to fit the
distribution of experimentally measured tuning curves. A major obstacle for
fitting such models is that their likelihood function is not explicitly
available or is highly intractable to compute. Recent advances in machine
learning provide ways for fitting generative models without the need to
evaluate the likelihood and its gradient. Generative Adversarial Networks (GAN)
provide one such framework which has been successful in traditional machine
learning tasks. We apply this approach in two separate experiments, showing how
GANs can be used to fit commonly used mechanistic models in theoretical
neuroscience to datasets of measured tuning curves. This fitting procedure
avoids the computationally expensive step of inferring latent variables, e.g.
the biophysical parameters of individual cells or the particular realization of
the full synaptic connectivity matrix, and directly learns model parameters
which characterize the statistics of connectivity or of single-cell properties.
Another strength of this approach is that it fits the entire, joint
distribution of experimental tuning curves, instead of matching a few summary
statistics picked a priori by the user. More generally, this framework opens
the door to fitting theoretically motivated dynamical network models directly
to simultaneously or non-simultaneously recorded neural responses.
| Takafumi Arakaki, G. Barello, Yashar Ahmadian | null | 1707.04582 | null | null |
The Reversible Residual Network: Backpropagation Without Storing
Activations | cs.CV cs.LG | Deep residual networks (ResNets) have significantly pushed forward the
state-of-the-art on image classification, increasing in performance as networks
grow both deeper and wider. However, memory consumption becomes a bottleneck,
as one needs to store the activations in order to calculate gradients using
backpropagation. We present the Reversible Residual Network (RevNet), a variant
of ResNets where each layer's activations can be reconstructed exactly from the
next layer's. Therefore, the activations for most layers need not be stored in
memory during backpropagation. We demonstrate the effectiveness of RevNets on
CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification
accuracy to equally-sized ResNets, even though the activation storage
requirements are independent of depth.
| Aidan N. Gomez, Mengye Ren, Raquel Urtasun, Roger B. Grosse | null | 1707.04585 | null | null |
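The reversibility that lets RevNets avoid storing activations can be illustrated with the
additive coupling below: the block's outputs determine its inputs exactly, so the inputs can be
recomputed during the backward pass. `F` and `G` stand for arbitrary residual functions
(hypothetical callables); channel splitting and the memory-efficient backward pass are omitted.

```python
def rev_block_forward(x1, x2, F, G):
    """Reversible additive coupling over the two halves of a layer's input."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_block_inverse(y1, y2, F, G):
    """Reconstruct the inputs from the outputs by undoing the two additions,
    so intermediate activations need not be kept in memory."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2
```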
GLSR-VAE: Geodesic Latent Space Regularization for Variational
AutoEncoder Architectures | cs.LG cs.AI stat.ML | VAEs (Variational AutoEncoders) have proved to be powerful in the context of
density modeling and have been used in a variety of contexts for creative
purposes. In many settings, the data we model possesses continuous attributes
that we would like to take into account at generation time. We propose in this
paper GLSR-VAE, a Geodesic Latent Space Regularization for the Variational
AutoEncoder architecture and its generalizations, which allows fine control over
the embedding of the data into the latent space. When augmenting the VAE loss
with this regularization, changes in the learned latent space reflect changes
of the attributes of the data. This deeper understanding of the VAE latent
space structure offers the possibility to modulate the attributes of the
generated data in a continuous way. We demonstrate its efficiency on a
monophonic music generation task where we manage to generate variations of
discrete sequences in an intended and playful way.
| Ga\"etan Hadjeres and Frank Nielsen and Fran\c{c}ois Pachet | null | 1707.04588 | null | null |
Cloud-based or On-device: An Empirical Study of Mobile Deep Inference | cs.PF cs.CV cs.LG | Modern mobile applications are benefiting significantly from the advancement
in deep learning, e.g., implementing real-time image recognition and
conversational system. Given a trained deep learning model, applications
usually need to perform a series of matrix operations based on the input data,
in order to infer possible output values. Because of computational complexity
and size constraints, these trained models are often hosted in the cloud. To
utilize these cloud-based models, mobile apps will have to send input data over
the network. While cloud-based deep learning can provide reasonable response
time for mobile apps, it restricts the use case scenarios, e.g. mobile apps
need to have network access. With mobile specific deep learning optimizations,
it is now possible to employ on-device inference. However, because mobile
hardware, such as GPU and memory size, can be very limited when compared to its
desktop counterpart, it is important to understand the feasibility of this new
on-device deep learning inference architecture. In this paper, we empirically
evaluate the inference performance of three Convolutional Neural Networks
(CNNs) using a benchmark Android application we developed. Our measurement and
analysis suggest that on-device inference can cost up to two orders of
magnitude greater response time and energy when compared to cloud-based
inference, and that model loading and probability computation are two
performance bottlenecks for on-device deep inference.
| Tian Guo | null | 1707.0461 | null | null |
On the Complexity of Learning Neural Networks | cs.LG cs.CC | The stunning empirical successes of neural networks currently lack rigorous
theoretical explanation. What form would such an explanation take, in the face
of existing complexity-theoretic lower bounds? A first step might be to show
that data generated by neural networks with a single hidden layer, smooth
activation functions and benign input distributions can be learned efficiently.
We demonstrate here a comprehensive lower bound ruling out this possibility:
for a wide class of activation functions (including all currently used), and
inputs drawn from any logconcave distribution, there is a family of
one-hidden-layer functions whose output is a sum gate, that are hard to learn
in a precise sense: any statistical query algorithm (which includes all known
variants of stochastic gradient descent with any loss function) needs an
exponential number of queries even using tolerance inversely proportional to
the input dimensionality. Moreover, this hard family of functions is realizable
with a small (sublinear in dimension) number of activation units in the single
hidden layer. The lower bound is also robust to small perturbations of the true
weights. Systematic experiments illustrate a phase transition in the training
error as predicted by the analysis.
| Le Song, Santosh Vempala, John Wilmes, and Bo Xie | null | 1707.04615 | null | null |
Simplified Long Short-term Memory Recurrent Neural Networks: part I | cs.NE cs.LG | We present five variants of the standard Long Short-term Memory (LSTM)
recurrent neural networks by uniformly reducing blocks of adaptive parameters
in the gating mechanisms. For simplicity, we refer to these models as LSTM1,
LSTM2, LSTM3, LSTM4, and LSTM5, respectively. Such parameter-reduced variants
enable speeding up data training computations and would be more suitable for
implementations on constrained embedded platforms. We comparatively evaluate
and verify our five variant models on the classical MNIST dataset and
demonstrate that these variant models are comparable to a standard
implementation of the LSTM model while using fewer parameters. Moreover, we
observe that in some cases the standard LSTM's accuracy will drop after a
number of epochs when using the ReLU
nonlinearity; in contrast, however, LSTM3, LSTM4 and LSTM5 will retain their
performance.
| Atra Akandeh and Fathi M. Salem | null | 1707.04619 | null | null |
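As an illustration of what "reducing blocks of adaptive parameters in the gating mechanisms"
means in the abstract above, the sketch below writes a standard LSTM gate as three parameter
blocks (input weights, recurrent weights, bias) that can be dropped independently. Which blocks
each of LSTM1-LSTM5 actually removes is not stated in the abstract, so the specific reductions
are left to the caller and are purely illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gate(x, h, W=None, U=None, b=0.0):
    """Standard gate: sigmoid(W x + U h + b). Passing W=None or U=None drops
    that block of adaptive parameters, mimicking a simplified variant."""
    z = np.array(b, dtype=float)
    if W is not None:
        z = z + W @ x
    if U is not None:
        z = z + U @ h
    return sigmoid(z)
```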
Simplified Long Short-term Memory Recurrent Neural Networks: part II | cs.NE cs.LG | This is part II of three-part work. Here, we present a second set of
inter-related five variants of simplified Long Short-term Memory (LSTM)
recurrent neural networks by further reducing adaptive parameters. Two of these
models have been introduced in part I of this work. We evaluate and verify our
model variants on the benchmark MNIST dataset and assert that these models are
comparable to the base LSTM model while using progressively fewer parameters.
Moreover, we observe that when using the ReLU activation, the test accuracy of
the standard LSTM will drop after a number of epochs as the learning parameters
become larger. However, all of the new model variants sustain their
performance.
| Atra Akandeh and Fathi M. Salem | null | 1707.04623 | null | null |
Simplified Long Short-term Memory Recurrent Neural Networks: part III | cs.NE cs.LG | This is part III of three-part work. In parts I and II, we have presented
eight variants for simplified Long Short Term Memory (LSTM) recurrent neural
networks (RNNs). It is noted that fast computation, especially on constrained
computing resources, is an important factor in processing big time-sequence
data. In this part III paper, we present and evaluate two new LSTM model
variants which dramatically reduce the computational load while retaining
comparable performance to the base (standard) LSTM RNNs. In these new variants,
we impose (Hadamard) pointwise state multiplications in the cell-memory network
in addition to the gating signal networks.
| Atra Akandeh and Fathi M. Salem | null | 1707.04626 | null | null |
Predicting multicellular function through multi-layer tissue networks | cs.LG cs.SI q-bio.MN stat.ML | Motivation: Understanding functions of proteins in specific human tissues is
essential for insights into disease diagnostics and therapeutics, yet
prediction of tissue-specific cellular function remains a critical challenge
for biomedicine.
Results: Here we present OhmNet, a hierarchy-aware unsupervised node feature
learning approach for multi-layer networks. We build a multi-layer network,
where each layer represents molecular interactions in a different human tissue.
OhmNet then automatically learns a mapping of proteins, represented as nodes,
to a neural embedding based low-dimensional space of features. OhmNet
encourages sharing of similar features among proteins with similar network
neighborhoods and among proteins activated in similar tissues. The algorithm
generalizes prior work, which generally ignores relationships between tissues,
by modeling tissue organization with a rich multiscale tissue hierarchy. We use
OhmNet to study multicellular function in a multi-layer protein interaction
network of 107 human tissues. In 48 tissues with known tissue-specific cellular
functions, OhmNet provides more accurate predictions of cellular function than
alternative approaches, and also generates more accurate hypotheses about
tissue-specific protein actions. We show that taking into account the tissue
hierarchy leads to improved predictive power. Remarkably, we also demonstrate
that it is possible to leverage the tissue hierarchy in order to effectively
transfer cellular functions to a functionally uncharacterized tissue. Overall,
OhmNet moves from flat networks to multiscale models able to predict a range of
phenotypes spanning cellular subsystems.
| Marinka Zitnik and Jure Leskovec | 10.1093/bioinformatics/btx252 | 1707.04638 | null | null |
Predictive Liability Models and Visualizations of High Dimensional
Retail Employee Data | cs.LG | Employee theft and dishonesty is a major contributor to loss in the retail
industry. Retailers have reported the need for more automated analytic tools to
assess the liability of their employees. In this work, we train and optimize
several machine learning models for regression prediction and analysis on this
data, which will help retailers identify and manage risky employees. Since the
data we use is very high dimensional, we use feature selection techniques to
identify the factors that contribute most to an employee's assessed risk. We
also use dimension reduction and data embedding techniques to present this
dataset in an easy-to-interpret format.
| Richard R. Yang, Mike Borowczak | 10.1145/3194206.3195587 | 1707.04639 | null | null |
Learning linear structural equation models in polynomial time and sample
complexity | cs.LG stat.ML | The problem of learning structural equation models (SEMs) from data is a
fundamental problem in causal inference. We develop a new algorithm --- which
is computationally and statistically efficient and works in the
high-dimensional regime --- for learning linear SEMs from purely observational
data with arbitrary noise distribution. We consider three aspects of the
problem: identifiability, computational efficiency, and statistical efficiency.
We show that when data is generated from a linear SEM over $p$ nodes and
maximum degree $d$, our algorithm recovers the directed acyclic graph (DAG)
structure of the SEM under an identifiability condition that is more general
than those considered in the literature, and without faithfulness assumptions.
In the population setting, our algorithm recovers the DAG structure in
$\mathcal{O}(p(d^2 + \log p))$ operations. In the finite sample setting, if the
estimated precision matrix is sparse, our algorithm has a smoothed complexity
of $\widetilde{\mathcal{O}}(p^3 + pd^7)$, while if the estimated precision
matrix is dense, our algorithm has a smoothed complexity of
$\widetilde{\mathcal{O}}(p^5)$. For sub-Gaussian noise, we show that our
algorithm has a sample complexity of $\mathcal{O}(\frac{d^8}{\varepsilon^2}
\log (\frac{p}{\sqrt{\delta}}))$ to achieve $\varepsilon$ element-wise additive
error with respect to the true autoregression matrix with probability at least
$1 - \delta$, while for noise with bounded $(4m)$-th moment, with $m$ being a
positive integer, our algorithm has a sample complexity of
$\mathcal{O}(\frac{d^8}{\varepsilon^2} (\frac{p^2}{\delta})^{1/m})$.
| Asish Ghoshal and Jean Honorio | null | 1707.04673 | null | null |
Scalable Training of Artificial Neural Networks with Adaptive Sparse
Connectivity inspired by Network Science | cs.NE cs.AI cs.LG | Through the success of deep learning in various domains, artificial neural
networks are currently among the most used artificial intelligence methods.
Taking inspiration from the network properties of biological neural networks
(e.g. sparsity, scale-freeness), we argue that (contrary to general practice)
artificial neural networks, too, should not have fully-connected layers. Here
we propose sparse evolutionary training of artificial neural networks, an
algorithm which evolves an initial sparse topology (Erd\H{o}s-R\'enyi random
graph) of two consecutive layers of neurons into a scale-free topology, during
learning. Our method replaces the fully-connected layers of artificial neural
networks with sparse ones before training, quadratically reducing the number of
parameters, with no decrease in accuracy. We demonstrate our claims on
restricted Boltzmann machines, multi-layer perceptrons, and convolutional
neural networks for unsupervised and supervised learning on 15 datasets. Our
approach has the potential to enable artificial neural networks to scale up
beyond what is currently possible.
| Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H.
Nguyen, Madeleine Gibescu, Antonio Liotta | 10.1038/s41467-018-04316-3 | 1707.0478 | null | null |
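The sparse-topology evolution described above can be pictured as a prune-and-regrow step applied
to each sparse layer during training. The abstract only says that the topology is evolved toward
a scale-free one, so the specific rule below (drop the weakest connections, regrow the same
number at random positions) is an assumption used for illustration.

```python
import numpy as np

def evolve_sparse_connections(W, mask, zeta=0.3, init_scale=0.01):
    """One prune-and-regrow step over a sparse weight matrix W with binary mask."""
    w, m = W.ravel(), mask.ravel()                  # flat views into the same storage
    active = np.flatnonzero(m)
    n_change = int(zeta * active.size)
    weakest = active[np.argsort(np.abs(w[active]))[:n_change]]
    m[weakest], w[weakest] = 0, 0.0                 # prune the weakest connections
    inactive = np.flatnonzero(m == 0)
    regrow = np.random.choice(inactive, size=n_change, replace=False)
    m[regrow] = 1                                   # regrow at random positions
    w[regrow] = init_scale * np.random.randn(n_change)
    return W, mask
```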
Non-Asymptotic Analysis of Robust Control from Coarse-Grained
Identification | math.OC cs.LG | This work explores the trade-off between the number of samples required to
accurately build models of dynamical systems and the degradation of performance
in various control objectives due to a coarse approximation. In particular, we
show that simple models can be easily fit from input/output data and are
sufficient for achieving various control objectives. We derive bounds on the
number of noisy input/output samples from a stable linear time-invariant system
that are sufficient to guarantee that the corresponding finite impulse response
approximation is close to the true system in the $\mathcal{H}_\infty$-norm. We
demonstrate that these demands are lower than those derived in prior art which
aimed to accurately identify dynamical models. We also explore how different
physical input constraints, such as power constraints, affect the sample
complexity. Finally, we show how our analysis fits within the established
framework of robust control, by demonstrating how a controller designed for an
approximate system provably meets performance objectives on the true system.
| Stephen Tu, Ross Boczar, Andrew Packard, Benjamin Recht | null | 1707.04791 | null | null |
Block-Normalized Gradient Method: An Empirical Study for Training Deep
Neural Network | cs.LG cs.AI | In this paper, we propose a generic and simple strategy for utilizing
stochastic gradient information in optimization. The technique essentially
contains two consecutive steps in each iteration: 1) computing and normalizing
each block (layer) of the mini-batch stochastic gradient; 2) selecting
appropriate step size to update the decision variable (parameter) towards the
negative of the block-normalized gradient. We conduct extensive empirical
studies on various non-convex neural network optimization problems, including
multi-layer perceptron, convolution neural networks and recurrent neural
networks. The results indicate the block-normalized gradient can help
accelerate the training of neural networks. In particular, we observe that the
normalized gradient methods with a constant step size and occasional decay,
such as SGD with momentum, have better performance in the deep convolution
neural networks, while those with adaptive step sizes, such as Adam, perform
better in recurrent neural networks. Besides, we also observe this line of
methods can lead to solutions with better generalization properties, which is
confirmed by the performance improvement over strong baselines.
| Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, Jaime
Carbonell | null | 1707.04822 | null | null |
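The two-step procedure the abstract describes, normalize each layer's gradient and then take a
step, can be sketched in a few lines; the step-size schedule (constant with occasional decay,
momentum, or an adaptive rule such as Adam) is layered on top and not shown. The parameter loop
and epsilon are illustrative choices.

```python
import torch

def block_normalized_step(parameters, lr):
    """Divide each parameter block's (layer's) gradient by its norm, then
    update in the negative normalized-gradient direction."""
    with torch.no_grad():
        for p in parameters:
            if p.grad is not None:
                p -= lr * p.grad / (p.grad.norm() + 1e-10)

# usage sketch: loss.backward(); block_normalized_step(model.parameters(), lr=0.1)
```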
Minimax deviation strategies for machine learning and recognition with
short learning samples | cs.LG | The article is devoted to the problem of small learning samples in machine
learning. The flaws of maximum likelihood learning and minimax learning are
examined, and the concept of minimax deviation learning, which is free of those
flaws, is introduced.
| Michail Schlesinger, Evgeniy Vodolazskiy | null | 1707.04849 | null | null |
Overcoming Catastrophic Interference by Conceptors | cs.NE cs.LG | Catastrophic interference has been a major roadblock in the research of
continual learning. Here we propose a variant of the back-propagation
algorithm, "conceptor-aided back-prop" (CAB), in which gradients are shielded
by conceptors against degradation of previously learned tasks. Conceptors have
their origin in reservoir computing, where they have been previously shown to
overcome catastrophic forgetting. CAB extends these results to deep feedforward
networks. On the disjoint MNIST task CAB outperforms two other methods for
coping with catastrophic interference that have recently been proposed in the
deep learning field.
| Xu He and Herbert Jaeger | null | 1707.04853 | null | null |
Efficient Architecture Search by Network Transformation | cs.LG cs.AI | Techniques for automatically designing deep neural network architectures such
as reinforcement learning based approaches have recently shown promising
results. However, their success is based on vast computational resources (e.g.
hundreds of GPUs), making them difficult to use widely. A noticeable
limitation is that they still design and train each network from scratch during
the exploration of the architecture space, which is highly inefficient. In this
paper, we propose a new framework toward efficient architecture search by
exploring the architecture space based on the current network and reusing its
weights. We employ a reinforcement learning agent as the meta-controller, whose
action is to grow the network depth or layer width with function-preserving
transformations. As such, the previously validated networks can be reused for
further exploration, thus saving a large amount of computational cost. We apply
our method to explore the architecture space of the plain convolutional neural
networks (no skip-connections, branching etc.) on image benchmark datasets
(CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method
can design highly competitive networks that outperform existing networks using
the same design scheme. On CIFAR-10, our model without skip-connections
achieves 4.23\% test error rate, exceeding a vast majority of modern
architectures and approaching DenseNet. Furthermore, by applying our method to
explore the DenseNet architecture space, we are able to achieve more accurate
networks with fewer parameters.
| Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, Jun Wang | null | 1707.04873 | null | null |
Listening while Speaking: Speech Chain by Deep Learning | cs.CL cs.LG cs.SD | Despite the close relationship between speech perception and production,
research in automatic speech recognition (ASR) and text-to-speech synthesis
(TTS) has progressed more or less independently without exerting much mutual
influence on each other. In human communication, on the other hand, a
closed-loop speech chain mechanism with auditory feedback from the speaker's
mouth to her ear is crucial. In this paper, we take a step further and develop
a closed-loop speech chain model based on deep learning. The
sequence-to-sequence model in a closed-loop architecture allows us to train our
model on the concatenation of both labeled and unlabeled data. While ASR
transcribes the unlabeled speech features, TTS attempts to reconstruct the
original speech waveform based on the text from ASR. In the opposite direction,
ASR also attempts to reconstruct the original text transcription given the
synthesized speech. To the best of our knowledge, this is the first deep
learning model that integrates human speech perception and production
behaviors. Our experimental results show that the proposed approach
significantly improved performance compared to separate systems that were
trained only with labeled data.
| Andros Tjandra, Sakriani Sakti, Satoshi Nakamura | null | 1707.04879 | null | null |
Theoretical insights into the optimization landscape of
over-parameterized shallow neural networks | cs.LG cs.IT math.IT math.OC stat.ML | In this paper we study the problem of learning a shallow artificial neural
network that best fits a training data set. We study this problem in the
over-parameterized regime where the number of observations is smaller than the
number of parameters in the model. We show that with quadratic activations the
optimization landscape of training such shallow neural networks has certain
favorable characteristics that allow globally optimal models to be found
efficiently using a variety of local search heuristics. This result holds for
an arbitrary training data of input/output pairs. For differentiable activation
functions we also show that gradient descent, when suitably initialized,
converges at a linear rate to a globally optimal model. This result focuses on
a realizable model where the inputs are chosen i.i.d. from a Gaussian
distribution and the labels are generated according to planted weight
coefficients.
| Mahdi Soltanolkotabi and Adel Javanmard and Jason D. Lee | null | 1707.04926 | null | null |
Comparative Performance Analysis of Neural Networks Architectures on H2O
Platform for Various Activation Functions | cs.LG cs.CV cs.PF | Deep learning (deep structured learning, hierarchical learning or deep
machine learning) is a branch of machine learning based on a set of algorithms
that attempt to model high-level abstractions in data by using multiple
processing layers with complex structures or otherwise composed of multiple
non-linear transformations. In this paper, we present the results of testing
neural networks architectures on H2O platform for various activation functions,
stopping metrics, and other parameters of machine learning algorithm. It was
demonstrated for the use case of the MNIST database of handwritten digits in
single-threaded mode that blind selection of these parameters can hugely
increase the runtime (by 2-3 orders of magnitude) without a significant
increase in precision. This result can have a crucial influence on the
optimization of available and new machine learning methods, especially for
image recognition problems.
| Yuriy Kochura, Sergii Stirenko, Yuri Gordienko | 10.1109/YSF.2017.8126654 | 1707.0494 | null | null |
An Ensemble Boosting Model for Predicting Transfer to the Pediatric
Intensive Care Unit | cs.LG stat.AP stat.ML | Our work focuses on the problem of predicting the transfer of pediatric
patients from the general ward of a hospital to the pediatric intensive care
unit. Using data collected over 5.5 years from the electronic health records of
two medical facilities, we develop classifiers based on adaptive boosting and
gradient tree boosting. We further combine these learned classifiers into an
ensemble model and compare its performance to a modified pediatric early
warning score (PEWS) baseline that relies on expert defined guidelines. To
gauge model generalizability, we perform an inter-facility evaluation where we
train our algorithm on data from one facility and perform evaluation on a
hidden test dataset from a separate facility. We show that improvements are
witnessed over the PEWS baseline in accuracy (0.77 vs. 0.69), sensitivity (0.80
vs. 0.68), specificity (0.74 vs. 0.70) and AUROC (0.85 vs. 0.73).
| Jonathan Rubin, Cristhian Potes, Minnan Xu-Wilson, Junzi Dong, Asif
Rahman, Hiep Nguyen, David Moromisato | null | 1707.04958 | null | null |
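A generic stand-in for the kind of ensemble the abstract above describes: adaptive boosting and
gradient tree boosting combined into one model. How the paper actually combines the two
classifiers, and the EHR feature set, are not specified in the abstract, so the soft-voting
combination and hyperparameters below are illustrative only.

```python
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              VotingClassifier)

ensemble = VotingClassifier(
    estimators=[
        ("adaboost", AdaBoostClassifier(n_estimators=200)),
        ("gbt", GradientBoostingClassifier(n_estimators=200, max_depth=3)),
    ],
    voting="soft",   # average the predicted class probabilities
)
# ensemble.fit(X_train, y_train)
# transfer_risk = ensemble.predict_proba(X_test)[:, 1]
```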
Deep Learning to Attend to Risk in ICU | cs.LG stat.ML | Modeling physiological time-series in ICU is of high clinical importance.
However, data collected within ICU are irregular in time and often contain
missing measurements. Since absence of a measure would signify its lack of
importance, the missingness is indeed informative and might reflect the
decision making by the clinician. Here we propose a deep learning architecture
that can effectively handle these challenges for predicting ICU mortality
outcomes. The model is based on Long Short-Term Memory, and has layered
attention mechanisms. At the sensing layer, the model decides whether to
observe and incorporate parts of the current measurements. At the reasoning
layer, evidences across time steps are weighted and combined. The model is
evaluated on the PhysioNet 2012 dataset showing competitive and interpretable
results.
| Phuoc Nguyen, Truyen Tran, Svetha Venkatesh | null | 1707.0501 | null | null |
Optimization by gradient boosting | math.ST cs.LG stat.TH | Gradient boosting is a state-of-the-art prediction technique that
sequentially produces a model in the form of linear combinations of simple
predictors---typically decision trees---by solving an infinite-dimensional
convex optimization problem. We provide in the present paper a thorough
analysis of two widespread versions of gradient boosting, and introduce a
general framework for studying these algorithms from the point of view of
functional optimization. We prove their convergence as the number of iterations
tends to infinity and highlight the importance of having a strongly convex risk
functional to minimize. We also present a reasonable statistical context
ensuring consistency properties of the boosting predictors as the sample size
grows. In our approach, the optimization procedures are run forever (that is,
without resorting to an early stopping strategy), and statistical
regularization is basically achieved via an appropriate $L^2$ penalization of
the loss and strong convexity arguments.
| G\'erard Biau (LSTA, LPMA), Beno\^it Cadre (ENS Rennes, IRMAR) | null | 1707.05023 | null | null |
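For reference, the functional-gradient view analyzed above reduces, for the squared loss, to the
familiar residual-fitting loop sketched below; the tree depth, step size, and number of rounds
are illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def l2_gradient_boosting(X, y, n_rounds=100, nu=0.1, max_depth=3):
    """Bare-bones gradient boosting for the squared loss: each round fits a
    small regression tree to the current residuals (the negative functional
    gradient) and adds it with step size nu."""
    pred = np.full(len(y), float(np.mean(y)))
    trees = []
    for _ in range(n_rounds):
        residual = y - pred                 # negative gradient of 0.5 * (y - F)^2
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        pred = pred + nu * tree.predict(X)
        trees.append(tree)
    return trees, pred
```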
On consistency of optimal pricing algorithms in repeated posted-price
auctions with strategic buyer | cs.GT cs.AI cs.LG stat.ML | We study revenue optimization learning algorithms for repeated posted-price
auctions where a seller interacts with a single strategic buyer that holds a
fixed private valuation for a good and seeks to maximize his cumulative
discounted surplus. For this setting, first, we propose a novel algorithm that
never decreases offered prices and has a tight strategic regret bound in
$\Theta(\log\log T)$ under some mild assumptions on the buyer surplus
discounting. This result closes the open research question on the existence of
a no-regret horizon-independent weakly consistent pricing. The proposed
algorithm is inspired by our observation that a double decrease of offered
prices in a weakly consistent algorithm is enough to cause a linear regret.
This motivates us to construct a novel transformation that maps a
right-consistent algorithm to a weakly consistent one that never decreases
offered prices.
Second, we outperform the previously known strategic regret upper bound of
the algorithm PRRFES, where the improvement is achieved by means of a finer
constant factor $C$ of the principal term $C\log\log T$ in this upper bound.
Finally, we generalize results on strategic regret previously known for
geometric discounting of the buyer's surplus to discounting of other types,
namely: the optimality of the pricing PRRFES to the case of geometrically
concave decreasing discounting; and linear lower bound on the strategic regret
of a wide range of horizon-independent weakly consistent algorithms to the case
of arbitrary discounts.
| Alexey Drutsa | null | 1707.05101 | null | null |
Differentially Private Testing of Identity and Closeness of Discrete
Distributions | cs.LG cs.DS cs.IT math.IT | We study the fundamental problems of identity testing (goodness of fit), and
closeness testing (two sample test) of distributions over $k$ elements, under
differential privacy. While the problems have a long history in statistics,
finite sample bounds for these problems have only been established recently.
In this work, we derive upper and lower bounds on the sample complexity of
both the problems under $(\varepsilon, \delta)$-differential privacy. We
provide optimal sample complexity algorithms for identity testing problem for
all parameter ranges, and the first results for closeness testing. Our
closeness testing bounds are optimal in the sparse regime where the number of
samples is at most $k$.
Our upper bounds are obtained by privatizing non-private estimators for these
problems. The non-private estimators are chosen to have small sensitivity. We
propose a general framework to establish lower bounds on the sample complexity
of statistical tasks under differential privacy. We show a bound on
differentially private algorithms in terms of a coupling between the two
hypothesis classes we aim to test. By constructing carefully chosen priors over
the hypothesis classes, and using Le Cam's two point theorem we provide a
general mechanism for proving lower bounds. We believe that the framework can
be used to obtain strong lower bounds for other statistical tasks under
privacy.
| Jayadev Acharya, Ziteng Sun, Huanyu Zhang | null | 1707.05128 | null | null |
Comparative Study of Inference Methods for Bayesian Nonnegative Matrix
Factorisation | stat.ML cs.LG | In this paper, we study the trade-offs of different inference approaches for
Bayesian matrix factorisation methods, which are commonly used for predicting
missing values, and for finding patterns in the data. In particular, we
consider Bayesian nonnegative variants of matrix factorisation and
tri-factorisation, and compare non-probabilistic inference, Gibbs sampling,
variational Bayesian inference, and a maximum-a-posteriori approach. The
variational approach is new for the Bayesian nonnegative models. We compare
their convergence, and robustness to noise and sparsity of the data, on both
synthetic and real-world datasets. Furthermore, we extend the models with the
Bayesian automatic relevance determination prior, allowing the models to
perform automatic model selection, and demonstrate its efficiency.
| Thomas Brouwer, Jes Frellsen, Pietro Li\'o | null | 1707.05147 | null | null |
Trial without Error: Towards Safe Reinforcement Learning via Human
Intervention | cs.AI cs.LG cs.NE | AI systems are increasingly applied to complex tasks that involve interaction
with humans. During training, such systems are potentially dangerous, as they
haven't yet learned to avoid actions that could cause serious harm. How can an
AI system explore and learn without making a single mistake that harms humans
or otherwise causes serious damage? For model-free reinforcement learning,
having a human "in the loop" and ready to intervene is currently the only way
to prevent all catastrophes. We formalize human intervention for RL and show
how to reduce the human labor required by training a supervised learner to
imitate the human's intervention decisions. We evaluate this scheme on Atari
games, with a Deep RL agent being overseen by a human for four hours. When the
class of catastrophes is simple, we are able to prevent all catastrophes
without affecting the agent's learning (whereas an RL baseline fails due to
catastrophic forgetting). However, this scheme is less successful when
catastrophes are more complex: it reduces but does not eliminate catastrophes
and the supervised learner fails on adversarial examples found by the agent.
Extrapolating to more challenging environments, we show that our implementation
would not scale (due to the infeasible amount of human labor required). We
outline extensions of the scheme that are necessary if we are to train
model-free agents without a single catastrophe.
| William Saunders, Girish Sastry, Andreas Stuhlmueller, Owain Evans | null | 1707.05173 | null | null |
Auxiliary Objectives for Neural Error Detection Models | cs.CL cs.LG cs.NE | We investigate the utility of different auxiliary objectives and training
strategies within a neural sequence labeling approach to error detection in
learner writing. Auxiliary costs provide the model with additional linguistic
information, allowing it to learn general-purpose compositional features that
can then be exploited for other objectives. Our experiments show that a joint
learning approach trained with parallel labels on in-domain data improves
performance over the previous best error detection system. While the resulting
model has the same number of parameters, the additional objectives allow it to
be optimised more efficiently and achieve better performance.
| Marek Rei, Helen Yannakoudakis | null | 1707.05227 | null | null |
Detecting Off-topic Responses to Visual Prompts | cs.CL cs.LG cs.NE | Automated methods for essay scoring have made great progress in recent years,
achieving accuracies very close to human annotators. However, a known weakness
of such automated scorers is not taking into account the semantic relevance of
the submitted text. While there is existing work on detecting answer relevance
given a textual prompt, very little previous research has been done to
incorporate visual writing prompts. We propose a neural architecture and
several extensions for detecting off-topic responses to visual prompts and
evaluate it on a dataset of texts written by language learners.
| Marek Rei | null | 1707.05233 | null | null |
Artificial Error Generation with Machine Translation and Syntactic
Patterns | cs.CL cs.LG | Shortage of available training data is holding back progress in the area of
automated error detection. This paper investigates two alternative methods for
artificially generating writing errors, in order to create additional
resources. We propose treating error generation as a machine translation task,
where grammatically correct text is translated to contain errors. In addition,
we explore a system for extracting textual patterns from an annotated corpus,
which can then be used to insert errors into grammatically correct sentences.
Our experiments show that the inclusion of artificially generated errors
significantly improves error detection accuracy on both FCE and CoNLL 2014
datasets.
| Marek Rei, Mariano Felice, Zheng Yuan, Ted Briscoe | null | 1707.05236 | null | null |
Learning to select data for transfer learning with Bayesian Optimization | cs.CL cs.LG | Domain similarity measures can be used to gauge adaptability and select
suitable data for transfer learning, but existing approaches define ad hoc
measures that are deemed suitable for respective tasks. Inspired by work on
curriculum learning, we propose to \emph{learn} data selection measures using
Bayesian Optimization and evaluate them across models, domains and tasks. Our
learned measures outperform existing domain similarity measures significantly
on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We
show the importance of complementing similarity with diversity, and that
learned measures are -- to some degree -- transferable across models, domains,
and even tasks.
| Sebastian Ruder, Barbara Plank | null | 1707.05246 | null | null |
Reverse Curriculum Generation for Reinforcement Learning | cs.AI cs.LG cs.NE cs.RO | Many relevant tasks require an agent to reach a certain state, or to
manipulate objects into a desired configuration. For example, we might want a
robot to align and assemble a gear onto an axle or insert and turn a key in a
lock. These goal-oriented tasks present a considerable challenge for
reinforcement learning, since their natural reward function is sparse and
prohibitive amounts of exploration are required to reach the goal and receive
some learning signal. Past approaches tackle these problems by exploiting
expert demonstrations or by manually designing a task-specific reward shaping
function to guide the learning agent. Instead, we propose a method to learn
these tasks without requiring any prior knowledge other than obtaining a single
state in which the task is achieved. The robot is trained in reverse, gradually
learning to reach the goal from a set of start states increasingly far from the
goal. Our method automatically generates a curriculum of start states that
adapts to the agent's performance, leading to efficient training on
goal-oriented tasks. We demonstrate our approach on difficult simulated
navigation and fine-grained manipulation problems, not solvable by
state-of-the-art reinforcement learning methods.
| Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, Pieter
Abbeel | null | 1707.053 | null | null |
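A hedged sketch of the curriculum idea summarised above: start states are grown outward from the goal by short random perturbations, and only states that the current policy solves with intermediate probability are kept for further training. The function interface, thresholds, and the toy 1-D example are assumptions for illustration, not the authors' implementation; training is stubbed out and the success estimate stands in for policy rollouts.

```python
import random

def reverse_curriculum(goal, perturb, train_on, success_rate,
                       n_iters=20, n_new=10, r_min=0.1, r_max=0.9):
    """Grow a set of start states outward from the goal, keeping only those
    the current policy solves with intermediate probability."""
    starts = [goal]
    for _ in range(n_iters):
        # New candidates: short random walks from existing starts
        # (standing in for the paper's random-action rollouts).
        candidates = [perturb(random.choice(starts)) for _ in range(n_new)]
        train_on(starts + candidates)
        kept = [s for s in set(starts + candidates)
                if r_min <= success_rate(s) <= r_max]
        starts = kept or [goal]
    return starts

# Toy 1-D illustration: states are integers, the goal is 0, and a fake policy
# solves a start state with probability decaying in its distance from the goal.
solved = reverse_curriculum(
    goal=0,
    perturb=lambda s: s + random.choice([-1, 1]),
    train_on=lambda states: None,                      # training stubbed out
    success_rate=lambda s: max(0.0, 1.0 - 0.1 * abs(s)),
)
print(sorted(solved))
```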
Auto-Conditioned Recurrent Networks for Extended Complex Human Motion
Synthesis | cs.LG | We present a real-time method for synthesizing highly complex human motions
using a novel training regime we call the auto-conditioned Recurrent Neural
Network (acRNN). Recently, researchers have attempted to synthesize new motion
by using autoregressive techniques, but existing methods tend to freeze or
diverge after a couple of seconds due to an accumulation of errors that are fed
back into the network. Furthermore, such methods have only been shown to be
reliable for relatively simple human motions, such as walking or running. In
contrast, our approach can synthesize arbitrary motions with highly complex
styles, including dances or martial arts in addition to locomotion. The acRNN
is able to accomplish this by explicitly accounting for autoregressive noise
accumulation during training. To our knowledge, our work is the first to
demonstrate the ability to generate over 18,000 continuous frames (300
seconds) of new complex human motion across different styles.
| Zimo Li, Yi Zhou, Shuangjiu Xiao, Chong He, Zeng Huang, Hao Li | null | 1707.05363 | null | null |
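The mechanism the abstract describes, conditioning the network on its own outputs during training so it learns to cope with accumulated error, can be sketched as below. The module sizes, block lengths, loss, and toy data are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MotionRNN(nn.Module):
    def __init__(self, pose_dim=54, hidden_dim=512):
        super().__init__()
        self.cell = nn.LSTMCell(pose_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def step(self, x, state):
        state = self.cell(x, state)
        return self.out(state[0]), state

def auto_conditioned_loss(model, seq, gt_len=5, self_len=5):
    """Alternate blocks of ground-truth conditioning and self-conditioning so
    the network is trained on inputs containing its own accumulated error."""
    state, prev_pred = None, None
    use_self, steps_in_block = False, 0
    loss = 0.0
    for t in range(seq.size(0) - 1):
        x = prev_pred if (use_self and prev_pred is not None) else seq[t]
        pred, state = model.step(x, state)
        loss = loss + ((pred - seq[t + 1]) ** 2).mean()
        prev_pred = pred
        steps_in_block += 1
        if steps_in_block == (self_len if use_self else gt_len):
            use_self, steps_in_block = not use_self, 0
    return loss / (seq.size(0) - 1)

# Toy usage on random "motion" data: 40 frames, batch of 2, 54-dim poses.
model = MotionRNN()
seq = torch.randn(40, 2, 54)
loss = auto_conditioned_loss(model, seq)
loss.backward()
```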
Houdini: Fooling Deep Structured Prediction Models | stat.ML cs.AI cs.CR cs.CV cs.LG | Generating adversarial examples is a critical step for evaluating and
improving the robustness of learning machines. So far, most existing methods
only work for classification and are not designed to alter the true performance
measure of the problem at hand. We introduce a novel flexible approach named
Houdini for generating adversarial examples specifically tailored for the final
performance measure of the task considered, even when it is combinatorial and
non-decomposable. We successfully apply Houdini to a range of applications such
as speech recognition, pose estimation and semantic segmentation. In all cases,
the attacks based on Houdini achieve a higher success rate than those based on
the traditional surrogates used to train the models while using a less
perceptible adversarial perturbation.
| Moustapha Cisse, Yossi Adi, Natalia Neverova and Joseph Keshet | null | 1707.05373 | null | null |
TensorLog: Deep Learning Meets Probabilistic DBs | cs.AI cs.LG | We present an implementation of a probabilistic first-order logic called
TensorLog, in which classes of logical queries are compiled into differentiable
functions in a neural-network infrastructure such as Tensorflow or Theano. This
leads to a close integration of probabilistic logical reasoning with
deep-learning infrastructure: in particular, it enables high-performance deep
learning frameworks to be used for tuning the parameters of a probabilistic
logic. Experimental results show that TensorLog scales to problems involving
hundreds of thousands of knowledge-base triples and tens of thousands of
examples.
| William W. Cohen, Fan Yang, Kathryn Rivard Mazaitis | null | 1707.0539 | null | null |
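An illustrative toy of the general idea of compiling a logical chain rule into differentiable tensor operations, in the spirit of the abstract: entities become one-hot vectors, binary relations become matrices, and rule application becomes matrix products. The tiny knowledge base and rule are made-up assumptions, and this is not the TensorLog implementation.

```python
import numpy as np

entities = ["alice", "bob", "carol", "dave"]
idx = {e: i for i, e in enumerate(entities)}

def relation_matrix(pairs):
    """M[a, b] = 1 iff relation(a, b) holds."""
    M = np.zeros((len(entities), len(entities)))
    for a, b in pairs:
        M[idx[a], idx[b]] = 1.0
    return M

parent = relation_matrix([("alice", "bob")])      # parent(alice, bob)
sibling = relation_matrix([("alice", "carol")])   # sibling(alice, carol)

# Rule: aunt_or_uncle(X, Y) :- sibling(Z, X), parent(Z, Y)
x = np.zeros(len(entities))
x[idx["carol"]] = 1.0                             # query: whose aunt is carol?
scores = x @ sibling.T @ parent                   # differentiable composition
print([entities[i] for i in np.flatnonzero(scores)])  # -> ['bob']
```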
Freehand Ultrasound Image Simulation with Spatially-Conditioned
Generative Adversarial Networks | cs.LG cs.CV | Sonography synthesis has a wide range of applications, including medical
procedure simulation, clinical training and multimodality image registration.
In this paper, we propose a machine learning approach to simulate ultrasound
images at given 3D spatial locations (relative to the patient anatomy), based
on conditional generative adversarial networks (GANs). In particular, we
introduce a novel neural network architecture that can sample anatomically
accurate images conditionally on spatial position of the (real or mock)
freehand ultrasound probe. To ensure an effective and efficient spatial
information assimilation, the proposed spatially-conditioned GANs take
calibrated pixel coordinates in global physical space as conditioning input,
and utilise residual network units and shortcuts of conditioning data in the
GANs' discriminator and generator, respectively. Using optically tracked B-mode
ultrasound images, acquired by an experienced sonographer on a fetus phantom,
we demonstrate the feasibility of the proposed method by two sets of
quantitative results: distances were calculated between corresponding
anatomical landmarks identified in the held-out ultrasound images and the
simulated data at the same locations unseen to the networks; a usability study
was carried out to distinguish the simulated data from the real images. In
summary, we present what we believe are state-of-the-art visually realistic
ultrasound images, simulated by the proposed GAN architecture that is stable to
train and capable of generating plausibly diverse image samples.
| Yipeng Hu, Eli Gibson, Li-Lin Lee, Weidi Xie, Dean C. Barratt, Tom
Vercauteren, J. Alison Noble | 10.1007/978-3-319-67564-0_11 | 1707.05392 | null | null |
Cooperative Hierarchical Dirichlet Processes: Superposition vs.
Maximization | cs.LG stat.ML | The cooperative hierarchical structure is a common and significant data
structure observed in, or adopted by, many research areas, such as: text mining
(author-paper-word) and multi-label classification (label-instance-feature).
Renowned Bayesian approaches for cooperative hierarchical structure modeling
are mostly based on topic models. However, these approaches suffer from a
serious issue in that the number of hidden topics/factors needs to be fixed in
advance and an inappropriate number may lead to overfitting or underfitting.
One elegant way to resolve this issue is Bayesian nonparametric learning, but
existing work in this area still cannot be applied to cooperative hierarchical
structure modeling.
In this paper, we propose a cooperative hierarchical Dirichlet process (CHDP)
to fill this gap. Each node in a cooperative hierarchical structure is assigned
a Dirichlet process to model its weights on the infinite hidden factors/topics.
Together with measure inheritance from hierarchical Dirichlet process, two
kinds of measure cooperation, i.e., superposition and maximization, are defined
to capture the many-to-many relationships in the cooperative hierarchical
structure. Furthermore, two constructive representations for CHDP, i.e.,
stick-breaking and international restaurant process, are designed to facilitate
model inference. Experiments on synthetic and real-world data with
cooperative hierarchical structures demonstrate the properties and the ability
of CHDP for cooperative hierarchical structure modeling and its potential for
practical application scenarios.
| Junyu Xuan, Jie Lu, Guangquan Zhang, Richard Yi Da Xu | null | 1707.0542 | null | null |
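For context, a minimal sketch of the standard stick-breaking construction of a single Dirichlet process, the building block that the CHDP representations extend; the cooperative superposition and maximization operations themselves are not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

def stick_breaking_weights(alpha, truncation, rng=None):
    """Truncated stick-breaking weights beta_k = v_k * prod_{j<k}(1 - v_j),
    with v_k ~ Beta(1, alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

weights = stick_breaking_weights(alpha=2.0, truncation=50)
print(weights.sum())  # close to 1 for a large enough truncation
```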
Don't relax: early stopping for convex regularization | math.OC cs.LG | We consider the problem of designing efficient regularization algorithms when
regularization is encoded by a (strongly) convex functional. Unlike classical
penalization methods based on a relaxation approach, we propose an iterative
method where regularization is achieved via early stopping. Our results show
that the proposed procedure achieves the same recovery accuracy as penalization
methods, while naturally integrating computational considerations. An empirical
analysis on a number of problems provides promising results with respect to the
state of the art.
| Simon Matet, Lorenzo Rosasco, Silvia Villa and Bang Long Vu | null | 1707.05422 | null | null |
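A toy illustration, not the authors' algorithm, of early stopping acting as regularization: plain gradient descent on an ill-posed least-squares problem, where the number of iterations plays the role that the penalty parameter plays in Tikhonov-style methods. The problem sizes and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                        # ill-posed: more features than samples
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1 / largest singular value squared
w = np.zeros(d)
for t in range(1, 501):
    w -= step * X.T @ (X @ w - y)        # plain gradient descent, no penalty
    if t in (10, 100, 500):
        print(f"iteration {t:4d}: distance to w_true = "
              f"{np.linalg.norm(w - w_true):.3f}")
# Stopping earlier corresponds to stronger (implicit) regularization.
```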
DeepProbe: Information Directed Sequence Understanding and Chatbot
Design via Recurrent Neural Networks | stat.ML cs.LG | Information extraction and user intention identification are central topics
in modern query understanding and recommendation systems. In this paper, we
propose DeepProbe, a generic information-directed interaction framework which
is built around an attention-based sequence to sequence (seq2seq) recurrent
neural network. DeepProbe can rephrase, evaluate, and even actively ask
questions, leveraging the generative ability and likelihood estimation made
possible by seq2seq models. DeepProbe makes decisions based on a derived
uncertainty (entropy) measure conditioned on user inputs, possibly with
multiple rounds of interactions. Three applications, namely a rewriter, a
relevance scorer and a chatbot for ad recommendation, were built around
DeepProbe, with the first two serving as precursory building blocks for the
third. We first use the seq2seq model in DeepProbe to rewrite a user query into
a standard query form, which is then submitted to an ordinary recommendation
system. Secondly, we evaluate DeepProbe's seq2seq model-based relevance
scoring. Finally, we build a chatbot prototype capable of making active user
interactions, which can ask questions that maximize information gain, allowing
for a more efficient user intention identification process. We evaluate the first
two applications by 1) comparing with baselines using BLEU and AUC, and 2) human judge
evaluation. Both demonstrate significant improvements compared with current
state-of-the-art systems, proving their values as useful tools on their own,
and at the same time laying a good foundation for the ongoing chatbot
application.
| Zi Yin, Keng-hao Chang, Ruofei Zhang | 10.1145/3097983.3098148 | 1707.0547 | null | null |
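A minimal sketch of the entropy-driven question selection the abstract alludes to: among candidate clarifying questions, pick the one whose expected answer most reduces entropy over candidate user intents. The toy belief and answer distributions below are assumptions for illustration, not DeepProbe's learned model.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Current belief over three candidate intents, given the user input so far.
belief = np.array([0.5, 0.3, 0.2])

# For each candidate question: (answer probability, posterior belief the
# answer would induce).
questions = {
    "ask_budget": [(0.6, [0.8, 0.1, 0.1]), (0.4, [0.05, 0.6, 0.35])],
    "ask_brand":  [(0.5, [0.5, 0.4, 0.1]), (0.5, [0.5, 0.2, 0.3])],
}

def information_gain(q):
    expected_posterior_entropy = sum(p_ans * entropy(post)
                                     for p_ans, post in questions[q])
    return entropy(belief) - expected_posterior_entropy

best = max(questions, key=information_gain)
print(best, round(information_gain(best), 3))
```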
Vision-based Real Estate Price Estimation | cs.CV cs.LG | Since the advent of online real estate database companies like Zillow, Trulia
and Redfin, the problem of automatic estimation of market values for houses has
received considerable attention. Several real estate websites provide such
estimates using a proprietary formula. Although these estimates are often close
to the actual sale prices, in some cases they are highly inaccurate. One of the
key factors that affects the value of a house is its interior and exterior
appearance, which is not considered in calculating automatic value estimates.
In this paper, we evaluate the impact of visual characteristics of a house on
its market value. Using deep convolutional neural networks on a large dataset
of photos of home interiors and exteriors, we develop a method for estimating
the luxury level of real estate photos. We also develop a novel framework for
automated value assessment using the above photos in addition to home
characteristics including size, offered price and number of bedrooms. Finally,
by applying our proposed method for price estimation to a new dataset of real
estate photos and metadata, we show that it outperforms Zillow's estimates.
| Omid Poursaeed, Tomas Matera, Serge Belongie | 10.1007/s00138-018-0922-2 | 1707.05489 | null | null |
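A hedged sketch of the two-stage framework outlined above: a visual luxury score (here a synthetic stand-in for the CNN output on photos) is combined with listing attributes in a standard regressor. The feature names, synthetic data, and choice of regressor are assumptions for illustration, not the authors' model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
luxury_score = rng.uniform(0, 1, n)        # would come from the CNN on photos
sqft = rng.uniform(800, 4000, n)
bedrooms = rng.integers(1, 6, n)
offered_price = 150 * sqft + 50_000 * luxury_score + rng.normal(0, 20_000, n)
sale_price = offered_price + 30_000 * luxury_score + rng.normal(0, 10_000, n)

# Combine the visual score with home characteristics, as in the abstract.
X = np.column_stack([luxury_score, sqft, bedrooms, offered_price])
model = GradientBoostingRegressor().fit(X[:400], sale_price[:400])
print("held-out R^2:", round(model.score(X[400:], sale_price[400:]), 3))
```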