title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Multiplicative Normalizing Flows for Variational Bayesian Neural
Networks | stat.ML cs.LG | We reinterpret multiplicative noise in neural networks as auxiliary random
variables that augment the approximate posterior in a variational setting for
Bayesian neural networks. We show that through this interpretation it is both
efficient and straightforward to improve the approximation by employing
normalizing flows while still allowing for local reparametrizations and a
tractable lower bound. In experiments we show that with this new approximation
we can significantly improve upon classical mean field for Bayesian neural
networks on both predictive accuracy as well as predictive uncertainty.
| Christos Louizos and Max Welling | null | 1703.01961 | null | null |
Max-value Entropy Search for Efficient Bayesian Optimization | stat.ML cs.LG math.OC | Entropy Search (ES) and Predictive Entropy Search (PES) are popular and
empirically successful Bayesian Optimization techniques. Both rely on a
compelling information-theoretic motivation, and maximize the information
gained about the $\arg\max$ of the unknown function; yet, both are plagued by
the expensive computation for estimating entropies. We propose a new criterion,
Max-value Entropy Search (MES), that instead uses the information about the
maximum function value. We show relations of MES to other Bayesian optimization
methods, and establish a regret bound. We observe that MES maintains or
improves the good empirical performance of ES/PES, while tremendously
lightening the computational burden. In particular, MES is much more robust to
the number of samples used for computing the entropy, and hence more efficient
for higher dimensional problems.
| Zi Wang and Stefanie Jegelka | null | 1703.01968 | null | null |
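A minimal sketch of the MES acquisition described in the abstract above, assuming a GP posterior mean `mu` and standard deviation `sigma` at candidate points, plus samples `y_star_samples` of the function maximum obtained elsewhere (e.g., via a Gumbel approximation); the variable names and numerical guards are illustrative, not taken from the paper's code:

```python
import numpy as np
from scipy.stats import norm

def mes_acquisition(mu, sigma, y_star_samples):
    """Max-value Entropy Search score, averaged over sampled maxima y*.

    mu, sigma: GP posterior mean/std at candidate points, shape (n,).
    y_star_samples: sampled values of the unknown function's maximum, shape (K,).
    """
    sigma = np.maximum(sigma, 1e-10)               # guard against zero variance
    gamma = (y_star_samples[None, :] - mu[:, None]) / sigma[:, None]
    cdf = np.clip(norm.cdf(gamma), 1e-30, 1.0)     # avoid log(0) below
    # Information gained about the max value, averaged over the K samples.
    return np.mean(gamma * norm.pdf(gamma) / (2.0 * cdf) - np.log(cdf), axis=1)
```

The next evaluation point is the candidate with the highest score; sampling only the scalar maximum, rather than the $\arg\max$, is what keeps the criterion cheap and robust to the number of samples.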
Concentration Bounds for High Sensitivity Functions Through Differential
Privacy | cs.LG | A new line of work [Dwork et al. STOC 2015], [Hardt and Ullman FOCS 2014],
[Steinke and Ullman COLT 2015], [Bassily et al. STOC 2016] demonstrates how
differential privacy [Dwork et al. TCC 2006] can be used as a mathematical tool
for guaranteeing generalization in adaptive data analysis. Specifically, if a
differentially private analysis is applied on a sample S of i.i.d. examples to
select a low-sensitivity function f, then w.h.p. f(S) is close to its
expectation, although f is being chosen based on the data.
Very recently, Steinke and Ullman observed that these generalization
guarantees can be used for proving concentration bounds in the non-adaptive
setting, where the low-sensitivity function is fixed beforehand. In particular,
they obtain alternative proofs for classical concentration bounds for
low-sensitivity functions, such as the Chernoff bound and McDiarmid's
Inequality.
In this work, we set out to examine the situation for functions with
high-sensitivity, for which differential privacy does not imply generalization
guarantees under adaptive analysis. We show that differential privacy can be
used to prove concentration bounds for such functions in the non-adaptive
setting.
| Kobbi Nissim, Uri Stemmer | null | 1703.01970 | null | null |
Batched High-dimensional Bayesian Optimization via Structural Kernel
Learning | stat.ML cs.LG math.OC | Optimization of high-dimensional black-box functions is an extremely
challenging problem. While Bayesian optimization has emerged as a popular
approach for optimizing black-box functions, its applicability has been limited
to low-dimensional problems due to its computational and statistical challenges
arising from high-dimensional settings. In this paper, we propose to tackle
these challenges by (1) assuming a latent additive structure in the function
and inferring it properly for more efficient and effective BO, and (2)
performing multiple evaluations in parallel to reduce the number of iterations
required by the method. Our novel approach learns the latent structure with
Gibbs sampling and constructs batched queries using determinantal point
processes. Experimental validations on both synthetic and real-world functions
demonstrate that the proposed method outperforms the existing state-of-the-art
approaches.
| Zi Wang and Chengtao Li and Stefanie Jegelka and Pushmeet Kohli | null | 1703.01973 | null | null |
Linear, Machine Learning and Probabilistic Approaches for Time Series
Analysis | stat.AP cs.LG stat.ME | In this paper we study different approaches for time series modeling. The
forecasting approaches using linear models, the ARIMA algorithm, and the XGBoost machine
learning algorithm are described. Results of different model combinations are
shown. For probabilistic modeling the approaches using copulas and Bayesian
inference are considered.
| B.M. Pavlyshenko | null | 1703.01977 | null | null |
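A hedged sketch of the kind of linear/tree-model combination the abstract describes, assuming `y` is a 1-D numpy array of observations; the ARIMA order, lag count, and equal-weight average are placeholder choices rather than the paper's settings:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from xgboost import XGBRegressor

def combined_forecast(y, n_lags=7):
    """Average a one-step ARIMA forecast with a one-step XGBoost forecast
    trained on lagged features of the same series."""
    arima_pred = ARIMA(y, order=(2, 1, 1)).fit().forecast(steps=1)

    # Lagged-feature design matrix for the tree model.
    X = np.array([y[i - n_lags:i] for i in range(n_lags, len(y))])
    t = y[n_lags:]
    xgb = XGBRegressor(n_estimators=200, max_depth=3).fit(X, t)
    xgb_pred = xgb.predict(y[-n_lags:].reshape(1, -1))

    return 0.5 * (float(np.asarray(arima_pred)[0]) + float(xgb_pred[0]))
```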
Neural Episodic Control | cs.LG stat.ML | Deep reinforcement learning methods attain super-human performance in a wide
range of environments. Such methods are grossly inefficient, often taking
orders of magnitude more data than humans to achieve reasonable performance.
We propose Neural Episodic Control: a deep reinforcement learning agent that is
able to rapidly assimilate new experiences and act upon them. Our agent uses a
semi-tabular representation of the value function: a buffer of past experience
containing slowly changing state representations and rapidly updated estimates
of the value function. We show across a wide range of environments that our
agent learns significantly faster than other state-of-the-art, general purpose
deep reinforcement learning agents.
| Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adri\`a
Puigdom\`enech, Oriol Vinyals, Demis Hassabis, Daan Wierstra, Charles
Blundell | null | 1703.01988 | null | null |
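A minimal numpy sketch of the semi-tabular memory described above: keys are state embeddings, values are Q estimates, and reads are inverse-distance-weighted averages over the k nearest stored keys. Class and parameter names are illustrative:

```python
import numpy as np

class NeuralDictionary:
    """Episodic memory sketch: append (embedding, Q) pairs on write,
    return a kernel-weighted k-NN average of stored Q values on read."""

    def __init__(self, delta=1e-3, k=10):
        self.keys, self.values = [], []
        self.delta, self.k = delta, k

    def write(self, h, q):
        self.keys.append(np.asarray(h, dtype=float))
        self.values.append(float(q))

    def read(self, h):
        keys = np.stack(self.keys)
        d2 = np.sum((keys - h) ** 2, axis=1)
        idx = np.argsort(d2)[: self.k]          # k nearest stored keys
        w = 1.0 / (d2[idx] + self.delta)        # inverse-distance kernel
        return float(np.dot(w, np.asarray(self.values)[idx]) / w.sum())
```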
Activation Maximization Generative Adversarial Nets | cs.LG cs.AI cs.CV stat.ML | Class labels have been empirically shown useful in improving the sample
quality of generative adversarial nets (GANs). In this paper, we mathematically
study the properties of the current variants of GANs that make use of class
label information. With class aware gradient and cross-entropy decomposition,
we reveal how class labels and associated losses influence GAN's training.
Based on that, we propose Activation Maximization Generative Adversarial
Networks (AM-GAN) as an advanced solution. Comprehensive experiments have been
conducted to validate our analysis and evaluate the effectiveness of our
solution, where AM-GAN outperforms other strong baselines and achieves
state-of-the-art Inception Score (8.91) on CIFAR-10. In addition, we
demonstrate that, with the Inception ImageNet classifier, Inception Score
mainly tracks the diversity of the generator, and there is, however, no
reliable evidence that it can reflect the true sample quality. We thus propose
a new metric, called AM Score, to provide a more accurate estimation of the
sample quality. Our proposed model also outperforms the baseline methods in the
new metric.
| Zhiming Zhou, Han Cai, Shu Rong, Yuxuan Song, Kan Ren, Weinan Zhang,
Yong Yu, Jun Wang | null | 1703.02000 | null | null |
Combining Self-Supervised Learning and Imitation for Vision-Based Rope
Manipulation | cs.CV cs.LG cs.RO | Manipulation of deformable objects, such as ropes and cloth, is an important
but challenging problem in robotics. We present a learning-based system where a
robot takes as input a sequence of images of a human manipulating a rope from
an initial to goal configuration, and outputs a sequence of actions that can
reproduce the human demonstration, using only monocular images as input. To
perform this task, the robot learns a pixel-level inverse dynamics model of
rope manipulation directly from images in a self-supervised manner, using about
60K interactions with the rope collected autonomously by the robot. The human
demonstration provides a high-level plan of what to do and the low-level
inverse model is used to execute the plan. We show that by combining the high
and low-level plans, the robot can successfully manipulate a rope into a
variety of target shapes using only a sequence of human-provided images for
direction.
| Ashvin Nair, Dian Chen, Pulkit Agrawal, Phillip Isola, Pieter Abbeel,
Jitendra Malik, Sergey Levine | null | 1703.02018 | null | null |
Cheshire: An Online Algorithm for Activity Maximization in Social
Networks | stat.ML cs.DS cs.LG cs.SI | User engagement in social networks depends critically on the number of online
actions their users take in the network. Can we design an algorithm that finds
when to incentivize users to take actions to maximize the overall activity in a
social network? In this paper, we model the number of online actions over time
using multidimensional Hawkes processes, derive an alternate representation of
these processes based on stochastic differential equations (SDEs) with jumps
and, exploiting this alternate representation, address the above question from
the perspective of stochastic optimal control of SDEs with jumps. We find that
the optimal level of incentivized actions depends linearly on the current level
of overall actions. Moreover, the coefficients of this linear relationship can
be found by solving a matrix Riccati differential equation, which can be solved
efficiently, and a first order differential equation, which has a closed form
solution. As a result, we are able to design an efficient online algorithm,
Cheshire, to sample the optimal times of the users' incentivized actions.
Experiments on both synthetic and real data gathered from Twitter show that our
algorithm is able to consistently maximize the number of online actions more
effectively than the state of the art.
| Ali Zarezade and Abir De and Hamid Rabiee and Manuel Gomez Rodriguez | null | 1703.02059 | null | null |
On the Expressive Power of Overlapping Architectures of Deep Learning | cs.LG cs.NE stat.ML | Expressive efficiency refers to the relation between two architectures A and
B, whereby any function realized by B could be replicated by A, but there
exist functions realized by A which cannot be replicated by B unless its size
grows significantly larger. For example, it is known that deep networks are
exponentially efficient with respect to shallow networks, in the sense that a
shallow network must grow exponentially large in order to approximate the
functions represented by a deep network of polynomial size. In this work, we
extend the study of expressive efficiency to the attribute of network
connectivity and in particular to the effect of "overlaps" in the convolutional
process, i.e., when the stride of the convolution is smaller than its filter
size (receptive field). To theoretically analyze this aspect of network
design, we focus on a well-established surrogate for ConvNets called
Convolutional Arithmetic Circuits (ConvACs), and then demonstrate empirically
that our results hold for standard ConvNets as well. Specifically, our analysis
shows that having overlapping local receptive fields, and more broadly denser
connectivity, results in an exponential increase in the expressive capacity of
neural networks. Moreover, while denser connectivity can increase the
expressive capacity, we show that the most common types of modern architectures
already exhibit exponential increase in expressivity, without relying on
fully-connected layers.
| Or Sharir and Amnon Shashua | null | 1703.02065 | null | null |
Guarantees for Greedy Maximization of Non-submodular Functions with
Applications | cs.DM cs.AI cs.DS cs.LG math.OC | We investigate the performance of the standard Greedy algorithm for
cardinality constrained maximization of non-submodular nondecreasing set
functions. While there are strong theoretical guarantees on the performance of
Greedy for maximizing submodular functions, there are few guarantees for
non-submodular ones. However, Greedy enjoys strong empirical performance for
many important non-submodular functions, e.g., the Bayesian A-optimality
objective in experimental design. We prove theoretical guarantees supporting
the empirical performance. Our guarantees are characterized by a combination of
the (generalized) curvature $\alpha$ and the submodularity ratio $\gamma$. In
particular, we prove that Greedy enjoys a tight approximation guarantee of
$\frac{1}{\alpha}(1- e^{-\gamma\alpha})$ for cardinality constrained
maximization. In addition, we bound the submodularity ratio and curvature for
several important real-world objectives, including the Bayesian A-optimality
objective, the determinantal function of a square submatrix and certain linear
programs with combinatorial constraints. We experimentally validate our
theoretical findings for both synthetic and real-world applications.
| Andrew An Bian, Joachim M. Buhmann, Andreas Krause, Sebastian
Tschiatschek | null | 1703.02100 | null | null |
Revisiting stochastic off-policy action-value gradients | stat.ML cs.LG | Off-policy stochastic actor-critic methods rely on approximating the
stochastic policy gradient in order to derive an optimal policy. One may also
derive the optimal policy by approximating the action-value gradient. The use
of action-value gradients is desirable as policy improvement occurs along the
direction of steepest ascent. This has been studied extensively within the
context of natural gradient actor-critic algorithms and more recently within
the context of deterministic policy gradients. In this paper we briefly discuss
the off-policy stochastic counterpart to deterministic action-value gradients,
as well as an incremental approach for following the policy gradient in lieu of
the natural gradient.
| Yemi Okesanjo, Victor Kofia | null | 1703.02102 | null | null |
Classification and clustering for observations of event time data using
non-homogeneous Poisson process models | cs.LG stat.ML | Data of the form of event times arise in various applications. A simple model
for such data is a non-homogeneous Poisson process (NHPP) which is specified by
a rate function that depends on time. We consider the problem of having access
to multiple independent observations of event time data, observed on a common
interval, from which we wish to classify or cluster the observations according
to their rate functions. Each rate function is unknown but assumed to belong to
a finite number of rate functions each defining a distinct class. We model the
rate functions using a spline basis expansion, the coefficients of which need
to be estimated from data. The classification approach consists of using
training data for which the class membership is known, to calculate maximum
likelihood estimates of the coefficients for each group, then assigning test
observations to a group by a maximum likelihood criterion. For clustering, by
analogy to the Gaussian mixture model approach for Euclidean data, we consider
mixtures of NHPP and use the expectation-maximisation algorithm to estimate the
coefficients of the rate functions for the component models and group
membership probabilities for each observation. The classification and
clustering approaches perform well on both synthetic and real-world data sets.
Code associated with this paper is available at
https://github.com/duncan-barrack/NHPP .
| Duncan Barrack and Simon Preston | null | 1703.02111 | null | null |
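The classification step rests on the NHPP log-likelihood $\log L = \sum_i \log \lambda(t_i) - \int_0^T \lambda(t)\,dt$. A small sketch, assuming `rate_fn` is any vectorized rate function (in the paper, a fitted spline basis expansion):

```python
import numpy as np

def nhpp_loglik(event_times, rate_fn, T, n_grid=1000):
    """Log-likelihood of event times on [0, T] under a non-homogeneous
    Poisson process with rate function rate_fn:
        log L = sum_i log(rate(t_i)) - integral_0^T rate(t) dt
    The integral is approximated with the trapezoidal rule."""
    grid = np.linspace(0.0, T, n_grid)
    integral = np.trapz(rate_fn(grid), grid)
    return float(np.sum(np.log(rate_fn(np.asarray(event_times)))) - integral)

# Classification assigns a test observation to the class c maximizing
# nhpp_loglik(times, fitted_rate_fn[c], T).
```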
Evaluation of Machine Learning Methods to Predict Coronary Artery
Disease Using Metabolomic Data | cs.LG | Metabolomic data can potentially enable accurate, non-invasive and low-cost
prediction of coronary artery disease. Regression-based analytical approaches,
however, might fail to fully account for interactions between metabolites, rely
on a priori selected input features and thus might suffer from poorer accuracy.
Supervised machine learning methods can potentially be used in order to fully
exploit the dimensionality and richness of the data. In this paper, we
systematically implement and evaluate a set of supervised learning methods (L1
regression, random forest classifier) and compare them to traditional
regression-based approaches for disease prediction using metabolomic data.
| Henrietta Forssen, Riyaz S. Patel, Natalie Fitzpatrick, Aroon
Hingorani, Adam Timmis, Harry Hemingway, Spiros C. Denaxas | null | 1703.02116 | null | null |
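A minimal sklearn sketch of the comparison described above; the synthetic data, AUC metric, and hyperparameters are stand-ins for the paper's metabolomic dataset and evaluation protocol:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for metabolomic features and a CAD label.
X, y = make_classification(n_samples=500, n_features=40, random_state=0)

models = {
    "L1 logistic regression": LogisticRegression(penalty="l1", solver="liblinear"),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```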
Contextual Motifs: Increasing the Utility of Motifs using Contextual
Data | cs.LG | Motifs are a powerful tool for analyzing physiological waveform data.
Standard motif methods, however, ignore important contextual information (e.g.,
what the patient was doing at the time the data were collected). We hypothesize
that these additional contextual data could increase the utility of motifs.
Thus, we propose an extension to motifs, contextual motifs, that incorporates
context. Recognizing that, oftentimes, context may be unobserved or
unavailable, we focus on methods to jointly infer motifs and context. Applied
to both simulated and real physiological data, our proposed approach improves
upon existing motif methods in terms of the discriminative utility of the
discovered motifs. In particular, we discovered contextual motifs in continuous
glucose monitor (CGM) data collected from patients with type 1 diabetes.
Compared to their contextless counterparts, these contextual motifs led to
better predictions of hypo- and hyperglycemic events. Our results suggest that
even when inferred, context is useful in both a long- and short-term prediction
horizon when processing and interpreting physiological waveform data.
| Ian Fox, Lynn Ang, Mamta Jaiswal, Rodica Pop-Busui, Jenna Wiens | 10.1145/3097983.3098068 | 1703.02144 | null | null |
Model-Based Multiple Instance Learning | stat.ML cs.LG | While Multiple Instance (MI) data are point patterns -- sets or multi-sets of
unordered points -- appropriate statistical point pattern models have not been
used in MI learning. This article proposes a framework for model-based MI
learning using point process theory. Likelihood functions for point pattern
data derived from point process theory enable principled yet conceptually
transparent extensions of learning tasks, such as classification, novelty
detection and clustering, to point pattern data. Furthermore, tractable point
pattern models as well as solutions for learning and decision making from point
pattern data are developed.
| Ba-Ngu Vo, Dinh Phung, Quang N. Tran, and Ba-Tuong Vo | null | 1703.02155 | null | null |
On the Limits of Learning Representations with Label-Based Supervision | cs.LG cs.AI stat.ML | Advances in neural network based classifiers have transformed automatic
feature learning from a pipe dream of stronger AI to a routine and expected
property of practical systems. Since the emergence of AlexNet every winning
submission of the ImageNet challenge has employed end-to-end representation
learning, and due to the utility of good representations for transfer learning,
representation learning has become an important task, distinct from
supervised learning. At present, this distinction is inconsequential, as
supervised methods are state-of-the-art in learning transferable
representations. But recent work has shown that generative models can also be
powerful agents of representation learning. Will the representations learned
from these generative methods ever rival the quality of those from their
supervised competitors? In this work, we argue in the affirmative, that from an
information theoretic perspective, generative models have greater potential for
representation learning. Based on several experimentally validated assumptions,
we show that supervised learning is upper bounded in its capacity for
representation learning in ways that certain generative models, such as
Generative Adversarial Networks (GANs), are not. We hope that our analysis will
provide a rigorous motivation for further exploration of generative
representation learning.
| Jiaming Song, Russell Stewart, Shengjia Zhao and Stefano Ermon | null | 1703.02156 | null | null |
Distance Metric Learning using Graph Convolutional Networks: Application
to Functional Brain Networks | cs.CV cs.LG | Evaluating similarity between graphs is of major importance in several
computer vision and pattern recognition problems, where graph representations
are often used to model objects or interactions between elements. The choice of
a distance or similarity metric is, however, not trivial and can be highly
dependent on the application at hand. In this work, we propose a novel metric
learning method to evaluate distance between graphs that leverages the power of
convolutional neural networks, while exploiting concepts from spectral graph
theory to allow these operations on irregular graphs. We demonstrate the
potential of our method in the field of connectomics, where neuronal pathways
or functional connections between brain regions are commonly modelled as
graphs. In this problem, the definition of an appropriate graph similarity
function is critical to unveil patterns of disruptions associated with certain
brain disorders. Experimental results on the ABIDE dataset show that our method
can learn a graph similarity metric tailored for a clinical application,
improving the performance of a simple k-nn classifier by 11.9% compared to a
traditional distance metric.
| Sofia Ira Ktena, Sarah Parisot, Enzo Ferrante, Martin Rajchl, Matthew
Lee, Ben Glocker, Daniel Rueckert | null | 1703.02161 | null | null |
Raw Waveform-based Speech Enhancement by Fully Convolutional Networks | stat.ML cs.LG cs.SD | This study proposes a fully convolutional network (FCN) model for raw
waveform-based speech enhancement. The proposed system performs speech
enhancement in an end-to-end (i.e., waveform-in and waveform-out) manner, which
differs from most existing denoising methods that process the magnitude
spectrum (e.g., log power spectrum (LPS)) only. Because the fully connected
layers, which are involved in deep neural networks (DNN) and convolutional
neural networks (CNN), may not accurately characterize the local information of
speech signals, particularly with high frequency components, we employed fully
convolutional layers to model the waveform. More specifically, FCN consists of
only convolutional layers and thus the local temporal structures of speech
signals can be efficiently and effectively preserved with relatively few
weights. Experimental results show that DNN- and CNN-based models have limited
capability to restore high frequency components of waveforms, thus leading to
decreased intelligibility of enhanced speech. By contrast, the proposed FCN
model can not only effectively recover the waveforms but also outperform the
LPS-based DNN baseline in terms of short-time objective intelligibility (STOI)
and perceptual evaluation of speech quality (PESQ). In addition, the number of
model parameters in FCN is approximately only 0.2% compared with that in both
DNN and CNN.
| Szu-Wei Fu, Yu Tsao, Xugang Lu, Hisashi Kawai | null | 1703.02205 | null | null |
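A hedged PyTorch sketch of a waveform-in, waveform-out fully convolutional denoiser in the spirit of the abstract; channel counts, filter length, and depth are illustrative rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class WaveFCN(nn.Module):
    """Denoiser built only from Conv1d layers, so local temporal structure
    is modelled without any fully connected layers."""

    def __init__(self, channels=32, kernel_size=55, n_layers=4):
        super().__init__()
        pad = kernel_size // 2                  # 'same' padding keeps length
        layers, in_ch = [], 1
        for _ in range(n_layers):
            layers += [nn.Conv1d(in_ch, channels, kernel_size, padding=pad),
                       nn.ReLU()]
            in_ch = channels
        layers.append(nn.Conv1d(in_ch, 1, kernel_size, padding=pad))
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):                   # noisy: (batch, 1, samples)
        return self.net(noisy)

enhanced = WaveFCN()(torch.randn(2, 1, 16000))  # two 1-second clips at 16 kHz
```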
Triple Generative Adversarial Nets | cs.LG cs.CV | Generative Adversarial Nets (GANs) have shown promise in image generation and
semi-supervised learning (SSL). However, existing GANs in SSL have two
problems: (1) the generator and the discriminator (i.e. the classifier) may not
be optimal at the same time; and (2) the generator cannot control the semantics
of the generated samples. The problems essentially arise from the two-player
formulation, where a single discriminator shares incompatible roles of
identifying fake samples and predicting labels and it only estimates the data
without considering the labels. To address the problems, we present triple
generative adversarial net (Triple-GAN), which consists of three players---a
generator, a discriminator and a classifier. The generator and the classifier
characterize the conditional distributions between images and labels, and the
discriminator solely focuses on identifying fake image-label pairs. We design
compatible utilities to ensure that the distributions characterized by the
classifier and the generator both converge to the data distribution. Our
results on various datasets demonstrate that Triple-GAN as a unified model can
simultaneously (1) achieve the state-of-the-art classification results among
deep generative models, and (2) disentangle the classes and styles of the input
and transfer smoothly in the data space via interpolation in the latent space
class-conditionally.
| Chongxuan Li and Kun Xu and Jun Zhu and Bo Zhang | null | 1703.02291 | null | null |
Deep Robust Kalman Filter | cs.AI cs.LG stat.ML | A Robust Markov Decision Process (RMDP) is a sequential decision making model
that accounts for uncertainty in the parameters of dynamic systems. This
uncertainty introduces difficulties in learning an optimal policy, especially
for environments with large state spaces. We propose two algorithms, RTD-DQN
and Deep-RoK, for solving large-scale RMDPs using nonlinear approximation
schemes such as deep neural networks. The RTD-DQN algorithm incorporates the
robust Bellman temporal difference error into a robust loss function, yielding
robust policies for the agent. The Deep-RoK algorithm is a robust Bayesian
method, based on the Extended Kalman Filter (EKF), that accounts for both the
uncertainty in the weights of the approximated value function and the
uncertainty in the transition probabilities, improving the robustness of the
agent. We provide theoretical results for our approach and test the proposed
algorithms on a continuous state domain.
| Shirli Di-Castro Shashua, Shie Mannor | null | 1703.02310 | null | null |
Convolutional Recurrent Neural Networks for Bird Audio Detection | cs.SD cs.LG stat.ML | Bird sounds possess distinctive spectral structure which may exhibit small
shifts in spectrum depending on the bird species and environmental conditions.
In this paper, we propose using convolutional recurrent neural networks on the
task of automated bird audio detection in real-life environments. In the
proposed method, convolutional layers extract high dimensional, local frequency
shift invariant features, while recurrent layers capture longer term
dependencies between the features extracted from short time frames. This method
achieves 88.5% Area Under ROC Curve (AUC) score on the unseen evaluation data
and obtains the second place in the Bird Audio Detection challenge.
| Emre \c{C}ak{\i}r, Sharath Adavanne, Giambattista Parascandolo,
Konstantinos Drossos, Tuomas Virtanen | null | 1703.02317 | null | null |
Qualitative Assessment of Recurrent Human Motion | cs.LG cs.CV | Smartphone applications designed to track human motion in combination with
wearable sensors, e.g., during physical exercising, raised huge attention
recently. Commonly, they provide quantitative services, such as personalized
training instructions or the counting of distances. But qualitative monitoring
and assessment is still missing, e.g., to detect malpositions, to prevent
injuries, or to optimize training success. We address this issue by presenting
a concept for qualitative as well as generic assessment of recurrent human
motion by processing multi-dimensional, continuous time series tracked with
motion sensors. Therefore, our segmentation procedure extracts individual
events of specific length and we propose expressive features to accomplish a
qualitative motion assessment by supervised classification. We verified our
approach within a comprehensive study encompassing 27 athletes undertaking
different body weight exercises. We are able to recognize six different
exercise types with a success rate of 100% and to assess them qualitatively
with an average success rate of 99.3%.
| Andre Ebert, Michael Till Beck, Andy Mattausch, Lenz Belzner, Claudia
Linnhoff-Popien | 10.23919/EUSIPCO.2017.8081218 | 1703.02363 | null | null |
Graph sketching-based Space-efficient Data Clustering | cs.LG cs.DS | In this paper, we address the problem of recovering arbitrary-shaped data
clusters from datasets while facing \emph{high space constraints}, as this is
for instance the case in many real-world applications when analysis algorithms
are directly deployed on resources-limited mobile devices collecting the data.
We present DBMSTClu a new space-efficient density-based \emph{non-parametric}
method working on a Minimum Spanning Tree (MST) recovered from a limited number
of linear measurements i.e. a \emph{sketched} version of the dissimilarity
graph $\mathcal{G}$ between the $N$ objects to cluster. Unlike $k$-means,
$k$-medians or $k$-medoids algorithms, it does not fail at distinguishing
clusters with particular forms thanks to the property of the MST for expressing
the underlying structure of a graph. No input parameter is needed contrarily to
DBSCAN or the Spectral Clustering method. An approximate MST is retrieved by
following the dynamic \emph{semi-streaming} model in handling the dissimilarity
graph $\mathcal{G}$ as a stream of edge weight updates which is sketched in one
pass over the data into a compact structure requiring $O(N
\operatorname{polylog}(N))$ space, far better than the theoretical memory cost
$O(N^2)$ of $\mathcal{G}$. With the recovered approximate MST $\mathcal{T}$ as
input, DBMSTClu then successfully detects the right number of nonconvex
clusters by performing relevant cuts on $\mathcal{T}$ in a time linear in $N$.
We provide theoretical guarantees on the quality of the clustering partition
and also demonstrate its advantage over the existing state-of-the-art on
several datasets.
| Anne Morvan, Krzysztof Choromanski, C\'edric Gouy-Pailler, Jamal Atif | 10.1137/1.9781611975321.2 | 1703.02375 | null | null |
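A small sketch of clustering by cutting MST edges, the core idea above. It uses scipy's exact MST and naive heaviest-edge cuts, whereas DBMSTClu operates on a sketched MST and selects cuts by a clustering-quality criterion:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_cluster(X, n_clusters):
    """Cut the (n_clusters - 1) heaviest edges of the MST of the
    dissimilarity graph; the remaining components are the clusters."""
    mst = minimum_spanning_tree(squareform(pdist(X))).toarray()
    edges = np.argwhere(mst > 0)
    order = np.argsort(mst[mst > 0])[::-1]      # heaviest edges first
    for i, j in edges[order[: n_clusters - 1]]:
        mst[i, j] = 0.0                         # perform the cut
    sym = ((mst + mst.T) > 0).astype(int)
    return connected_components(sym, directed=False)[1]
```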
Global Weisfeiler-Lehman Graph Kernels | cs.LG stat.ML | Most state-of-the-art graph kernels only take local graph properties into
account, i.e., the kernel is computed with regard to properties of the
neighborhood of vertices or other small substructures. On the other hand,
kernels that do take global graph properties into account may not scale well to
large graph databases. Here we propose to start exploring the space between
local and global graph kernels, striking the balance between both worlds.
Specifically, we introduce a novel graph kernel based on the $k$-dimensional
Weisfeiler-Lehman algorithm. Unfortunately, the $k$-dimensional
Weisfeiler-Lehman algorithm scales exponentially in $k$. Consequently, we
devise a stochastic version of the kernel with provable approximation
guarantees using conditional Rademacher averages. On bounded-degree graphs, it
can even be computed in constant time. We support our theoretical results with
experiments on several graph classification benchmarks, showing that our
kernels often outperform the state-of-the-art in terms of classification
accuracies.
| Christopher Morris, Kristian Kersting, Petra Mutzel | null | 1703.02379 | null | null |
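For intuition, a sketch of the classic 1-dimensional Weisfeiler-Lehman relabeling that underlies WL kernels; note the paper's kernel builds on the $k$-dimensional variant, which this does not implement:

```python
from collections import Counter

def wl_features(adj, labels, iterations=3):
    """1-dimensional WL refinement: each round, a node's new label is a
    hash of its own label and the sorted multiset of neighbour labels.
    Returns a bag of labels usable as a kernel feature map."""
    labels = list(labels)
    bag = Counter(labels)
    for _ in range(iterations):
        labels = [hash((labels[v], tuple(sorted(labels[u] for u in adj[v]))))
                  for v in range(len(adj))]
        bag.update(labels)
    return bag

# Kernel between graphs G and H: sum over labels l of bag_G[l] * bag_H[l].
```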
Learning from Noisy Labels with Distillation | cs.CV cs.LG stat.ML | The ability of learning from noisy labels is very useful in many visual
recognition tasks, as a vast amount of data with noisy labels are relatively
easy to obtain. Traditionally, the label noises have been treated as
statistical outliers, and approaches such as importance re-weighting and
bootstrap have been proposed to alleviate the problem. According to our
observation, the real-world noisy labels exhibit multi-mode characteristics as
the true labels, rather than behaving like independent random outliers. In this
work, we propose a unified distillation framework to use side information,
including a small clean dataset and label relations in knowledge graph, to
"hedge the risk" of learning from noisy labels. Furthermore, unlike the
traditional approaches evaluated based on simulated label noises, we propose a
suite of new benchmark datasets, in Sports, Species and Artifacts domains, to
evaluate the task of learning from noisy labels in the practical setting. The
empirical study demonstrates the effectiveness of our proposed method in all
the domains.
| Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo,
Li-Jia Li | null | 1703.02391 | null | null |
On Structured Prediction Theory with Calibrated Convex Surrogate Losses | cs.LG stat.ML | We provide novel theoretical insights on structured prediction in the context
of efficient convex surrogate loss minimization with consistency guarantees.
For any task loss, we construct a convex surrogate that can be optimized via
stochastic gradient descent and we prove tight bounds on the so-called
"calibration function" relating the excess surrogate risk to the actual risk.
In contrast to prior related work, we carefully monitor the effect of the
exponential number of classes in the learning guarantees as well as on the
optimization complexity. As an interesting consequence, we formalize the
intuition that some task losses make learning harder than others, and that the
classical 0-1 loss is ill-suited for general structured prediction.
| Anton Osokin, Francis Bach, Simon Lacoste-Julien | null | 1703.02403 | null | null |
Probabilistic learning of nonlinear dynamical systems using sequential
Monte Carlo | stat.CO cs.LG cs.SY | Probabilistic modeling provides the capability to represent and manipulate
uncertainty in data, models, predictions and decisions. We are concerned with
the problem of learning probabilistic models of dynamical systems from measured
data. Specifically, we consider learning of probabilistic nonlinear state-space
models. There is no closed-form solution available for this problem, implying
that we are forced to use approximations. In this tutorial we will provide a
self-contained introduction to one of the state-of-the-art methods---the
particle Metropolis--Hastings algorithm---which has proven to offer a practical
approximation. This is a Monte Carlo based method, where the particle filter is
used to guide a Markov chain Monte Carlo method through the parameter space.
One of the key merits of the particle Metropolis--Hastings algorithm is that it
is guaranteed to converge to the "true solution" under mild assumptions,
despite being based on a particle filter with only a finite number of
particles. We will also provide a motivating numerical example illustrating the
method using a modeling language tailored for sequential Monte Carlo methods.
The intention of modeling languages of this kind is to open up the power of
sophisticated Monte Carlo methods---including particle
Metropolis--Hastings---to a large group of users without requiring them to know
all the underlying mathematical details.
| Thomas B. Sch\"on, Andreas Svensson, Lawrence Murray, Fredrik Lindsten | 10.1016/j.ymssp.2017.10.033 | 1703.02419 | null | null |
An investigation into machine learning approaches for forecasting
spatio-temporal demand in ride-hailing service | cs.LG stat.ML | In this paper, we present machine learning approaches for characterizing and
forecasting the short-term demand for on-demand ride-hailing services. We
propose the spatio-temporal estimation of the demand that is a function of
variable effects related to traffic, pricing and weather conditions. With
respect to the methodology, a single decision tree, bootstrap-aggregated
(bagged) decision trees, random forest, boosted decision trees, and artificial
neural network for regression have been adapted and systematically compared
using various statistics, e.g. R-square, Root Mean Square Error (RMSE), and
slope. To better assess the quality of the models, they have been tested on a
real case study using the data of DiDi Chuxing, the main on-demand ride hailing
service provider in China. In the current study, 199,584 time-slots describing
the spatio-temporal ride-hailing demand have been extracted with an
aggregated-time interval of 10 mins. All the methods are trained and validated
on the basis of two independent samples from this dataset. The results revealed
that boosted decision trees provide the best prediction accuracy (RMSE=16.41),
while avoiding the risk of over-fitting, followed by artificial neural network
(20.09), random forest (23.50), bagged decision trees (24.29) and single
decision tree (33.55).
| Isma\"il Saadi, Melvin Wong, Bilal Farooq, Jacques Teller, Mario Cools | null | 1703.02433 | null | null |
Unsupervised learning of phase transitions: from principal component
analysis to variational autoencoders | cond-mat.stat-mech cs.LG stat.ML | We employ unsupervised machine learning techniques to learn latent parameters
which best describe states of the two-dimensional Ising model and the
three-dimensional XY model. These methods range from principal component
analysis to artificial neural network based variational autoencoders. The
states are sampled using a Monte-Carlo simulation above and below the critical
temperature. We find that the predicted latent parameters correspond to the
known order parameters. The latent representations of the states of the models
in question are clustered, which makes it possible to identify phases without
prior knowledge of their existence or the underlying Hamiltonian. Furthermore,
we find that the reconstruction loss function can be used as a universal
identifier for phase transitions.
| Sebastian Johann Wetzel | 10.1103/PhysRevE.96.022140 | 1703.02435 | null | null |
PathTrack: Fast Trajectory Annotation with Path Supervision | cs.CV cs.LG cs.MM | Progress in Multiple Object Tracking (MOT) has been historically limited by
the size of the available datasets. We present an efficient framework to
annotate trajectories and use it to produce a MOT dataset of unprecedented
size. In our novel path supervision the annotator loosely follows the object
with the cursor while watching the video, providing a path annotation for each
object in the sequence. Our approach is able to turn such weak annotations into
dense box trajectories. Our experiments on existing datasets prove that our
framework produces more accurate annotations than the state of the art, in a
fraction of the time. We further validate our approach by crowdsourcing the
PathTrack dataset, with more than 15,000 person trajectories in 720 sequences.
Tracking approaches can benefit from training on such large-scale datasets, as did
object recognition. We prove this by re-training an off-the-shelf person
matching network, originally trained on the MOT15 dataset, almost halving the
misclassification rate. Additionally, training on our data consistently
improves tracking results, both on our dataset and on MOT15. On the latter, we
improve the top-performing tracker (NOMT) dropping the number of IDSwitches by
18% and fragments by 5%.
| Santiago Manen, Michael Gygli, Dengxin Dai, Luc Van Gool | null | 1703.02437 | null | null |
Leveraging Large Amounts of Weakly Supervised Data for Multi-Language
Sentiment Classification | cs.CL cs.IR cs.LG | This paper presents a novel approach for multi-lingual sentiment
classification in short texts. This is a challenging task as the amount of
training data in languages other than English is very limited. Previously
proposed multi-lingual approaches typically require to establish a
correspondence to English for which powerful classifiers are already available.
In contrast, our method does not require such supervision. We leverage large
amounts of weakly-supervised data in various languages to train a multi-layer
convolutional network and demonstrate the importance of using pre-training of
such networks. We thoroughly evaluate our approach on various multi-lingual
datasets, including the recent SemEval-2016 sentiment prediction benchmark
(Task 4), where we achieved state-of-the-art performance. We also compare the
performance of our model trained individually for each language to a variant
trained for all languages at once. We show that the latter model reaches
slightly worse - but still acceptable - performance when compared to the single
language model, while benefiting from better generalization properties across
languages.
| Jan Deriu, Aurelien Lucchi, Valeria De Luca, Aliaksei Severyn, Simon
M\"uller, Mark Cieliebak, Thomas Hofmann, Martin Jaggi | null | 1703.02504 | null | null |
Faster Coordinate Descent via Adaptive Importance Sampling | cs.LG cs.CV math.OC stat.CO stat.ML | Coordinate descent methods employ random partial updates of decision
variables in order to solve huge-scale convex optimization problems. In this
work, we introduce new adaptive rules for the random selection of their
updates. By adaptive, we mean that our selection rules are based on the dual
residual or the primal-dual gap estimates and can change at each iteration. We
theoretically characterize the performance of our selection rules and
demonstrate improvements over the state-of-the-art, and extend our theory and
algorithms to general convex objectives. Numerical evidence with hinge-loss
support vector machines and Lasso confirm that the practice follows the theory.
| Dmytro Perekrestenko, Volkan Cevher, Martin Jaggi | null | 1703.02518 | null | null |
Online Learning to Rank in Stochastic Click Models | cs.LG stat.ML | Online learning to rank is a core problem in information retrieval and
machine learning. Many provably efficient algorithms have been recently
proposed for this problem in specific click models. The click model is a model
of how the user interacts with a list of documents. Though these results are
significant, their impact on practice is limited, because all proposed
algorithms are designed for specific click models and lack convergence
guarantees in other models. In this work, we propose BatchRank, the first
online learning to rank algorithm for a broad class of click models. The class
encompasses the two most fundamental click models, the cascade and position-based
models. We derive a gap-dependent upper bound on the $T$-step regret of
BatchRank and evaluate it on a range of web search queries. We observe that
BatchRank outperforms ranked bandits and is more robust than CascadeKL-UCB, an
existing algorithm for the cascade model.
| Masrour Zoghi, Tomas Tunys, Mohammad Ghavamzadeh, Branislav Kveton,
Csaba Szepesvari, and Zheng Wen | null | 1703.02527 | null | null |
Stopping GAN Violence: Generative Unadversarial Networks | stat.ML cs.LG | While the costs of human violence have attracted a great deal of attention
from the research community, the effects of the network-on-network (NoN)
violence popularised by Generative Adversarial Networks have yet to be
addressed. In this work, we quantify the financial, social, spiritual,
cultural, grammatical and dermatological impact of this aggression and address
the issue by proposing a more peaceful approach which we term Generative
Unadversarial Networks (GUNs). Under this framework, we simultaneously train
two models: a generator G that does its best to capture whichever data
distribution it feels it can manage, and a motivator M that helps G to achieve
its dream. Fighting is strictly verboten and both models evolve by learning to
respect their differences. The framework is both theoretically and electrically
grounded in game theory, and can be viewed as a winner-shares-all two-player
game in which both players work as a team to achieve the best score.
Experiments show that by working in harmony, the proposed model is able to
claim both the moral and log-likelihood high ground. Our work builds on a rich
history of carefully argued position-papers, published as anonymous YouTube
comments, which prove that the optimal solution to NoN violence is more GUNs.
| Samuel Albanie, S\'ebastien Ehrhardt, Jo\~ao F. Henriques | null | 1703.02528 | null | null |
Online Learning of Optimal Bidding Strategy in Repeated Multi-Commodity
Auctions | cs.GT cs.LG | We study the online learning problem of a bidder who participates in repeated
auctions. With the goal of maximizing his $T$-period payoff, the bidder
determines the optimal allocation of his budget among his bids for $K$ goods at
each period. As a bidding strategy, we propose a polynomial-time algorithm,
inspired by the dynamic programming approach to the knapsack problem. The
proposed algorithm, referred to as dynamic programming on discrete set (DPDS),
achieves a regret order of $O(\sqrt{T\log{T}})$. By showing that the regret is
lower bounded by $\Omega(\sqrt{T})$ for any strategy, we conclude that DPDS is
order optimal up to a $\sqrt{\log{T}}$ term. We evaluate the performance of
DPDS empirically in the context of virtual trading in wholesale electricity
markets by using historical data from the New York market. Empirical results
show that DPDS consistently outperforms benchmark heuristic methods that are
derived from machine learning and online learning approaches.
| Sevi Baltaoglu, Lang Tong, Qing Zhao | null | 1703.02567 | null | null |
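A sketch of the knapsack-style dynamic program at the heart of DPDS, assuming `payoff[k][b]` holds the estimated payoff of spending `b` discretized budget units on good `k`; the online learning of those estimates is omitted:

```python
import numpy as np

def allocate_budget(payoff, budget):
    """Return the per-good spend maximizing total estimated payoff.
    best[k][b] = best payoff achievable with goods 0..k-1 and b units."""
    K = len(payoff)
    best = np.full((K + 1, budget + 1), -np.inf)
    best[0, 0] = 0.0
    choice = np.zeros((K, budget + 1), dtype=int)
    for k in range(K):
        for b in range(budget + 1):
            for spend in range(b + 1):
                v = best[k, b - spend] + payoff[k][spend]
                if v > best[k + 1, b]:
                    best[k + 1, b], choice[k, b] = v, spend
    alloc, b = [], budget                      # backtrack the optimal spends
    for k in range(K - 1, -1, -1):
        alloc.append(int(choice[k, b]))
        b -= choice[k, b]
    return alloc[::-1]
```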
Regularising Non-linear Models Using Feature Side-information | cs.LG stat.ML | Very often features come with their own vectorial descriptions which provide
detailed information about their properties. We refer to these vectorial
descriptions as feature side-information. In the standard learning scenario,
input is represented as a vector of features and the feature side-information
is most often ignored or used only for feature selection prior to model
fitting. We believe that feature side-information which carries information
about features intrinsic property will help improve model prediction if used in
a proper way during learning process. In this paper, we propose a framework
that allows for the incorporation of the feature side-information during the
learning of very general model families to improve the prediction performance.
We control the structures of the learned models so that they reflect feature
similarities as these are defined on the basis of the side-information. We
perform experiments on a number of benchmark datasets which show significant
predictive performance gains, over a number of baselines, as a result of the
exploitation of the side-information.
| Amina Mollaysa, Pablo Strasser, Alexandros Kalousis | null | 1703.02570 | null | null |
Data Noising as Smoothing in Neural Network Language Models | cs.LG cs.CL | Data noising is an effective technique for regularizing neural network
models. While noising is widely adopted in application domains such as vision
and speech, commonly used noising primitives have not been developed for
discrete sequence-level settings such as language modeling. In this paper, we
derive a connection between input noising in neural network language models and
smoothing in $n$-gram models. Using this connection, we draw upon ideas from
smoothing to develop effective noising schemes. We demonstrate performance
gains when applying the proposed schemes to language modeling and machine
translation. Finally, we provide empirical analysis validating the relationship
between noising and smoothing.
| Ziang Xie, Sida I. Wang, Jiwei Li, Daniel L\'evy, Aiming Nie, Dan
Jurafsky, Andrew Y. Ng | null | 1703.02573 | null | null |
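A minimal sketch of unigram noising, the simplest scheme in this family, assuming `vocab` is a list of tokens and `unigram_probs` their empirical frequencies; the paper's smoothing-derived schemes adapt the replacement distribution and rate:

```python
import numpy as np

def unigram_noise(tokens, unigram_probs, vocab, gamma=0.1, rng=np.random):
    """With probability gamma, replace each token by a draw from the
    unigram distribution before feeding the sequence to the model."""
    noised = list(tokens)
    for i in range(len(noised)):
        if rng.random() < gamma:
            noised[i] = vocab[rng.choice(len(vocab), p=unigram_probs)]
    return noised
```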
Customer Lifetime Value Prediction Using Embeddings | cs.LG cs.CY cs.IR cs.NE stat.ML | We describe the Customer LifeTime Value (CLTV) prediction system deployed at
ASOS.com, a global online fashion retailer. CLTV prediction is an important
problem in e-commerce where an accurate estimate of future value allows
retailers to effectively allocate marketing spend, identify and nurture high
value customers and mitigate exposure to losses. The system at ASOS provides
daily estimates of the future value of every customer and is one of the
cornerstones of the personalised shopping experience. The state of the art in
this domain uses large numbers of handcrafted features and ensemble regressors
to forecast value, predict churn and evaluate customer loyalty. Recently,
domains including language, vision and speech have shown dramatic advances by
replacing handcrafted features with features that are learned automatically
from data. We detail the system deployed at ASOS and show that learning feature
representations is a promising extension to the state of the art in CLTV
modelling. We propose a novel way to generate embeddings of customers, which
addresses the issue of the ever changing product catalogue and obtain a
significant improvement over an exhaustive set of handcrafted features.
| Benjamin Paul Chamberlain, Angelo Cardoso, C.H. Bryan Liu, Roberto
Pagliari, Marc Peter Deisenroth | 10.1145/3097983.3098123 | 1703.02596 | null | null |
Bootstrapped Graph Diffusions: Exposing the Power of Nonlinearity | cs.LG | Graph-based semi-supervised learning (SSL) algorithms predict labels for all
nodes based on provided labels of a small set of seed nodes. Classic methods
capture the graph structure through some underlying diffusion process that
propagates through the graph edges. Spectral diffusion, which includes
personalized page rank and label propagation, propagates through random walks.
Social diffusion propagates through shortest paths. A common ground to these
diffusions is their {\em linearity}, which does not distinguish between
contributions of few "strong" relations and many "weak" relations.
Recently, non-linear methods such as node embeddings and graph convolutional
networks (GCN) demonstrated a large gain in quality for SSL tasks. These
methods introduce multiple components and greatly vary on how the graph
structure, seed label information, and other features are used.
We aim here to study the contribution of non-linearity, as an isolated
ingredient, to the performance gain. To do so, we place classic linear graph
diffusions in a self-training framework. Surprisingly, we observe that SSL
using the resulting {\em bootstrapped diffusions} not only significantly
improves over the respective non-bootstrapped baselines but also outperform
state-of-the-art non-linear SSL methods. Moreover, since the self-training
wrapper retains the scalability of the base method, we obtain both higher
quality and better scalability.
| Eliav Buchnik and Edith Cohen | null | 1703.02618 | null | null |
Online Convex Optimization with Unconstrained Domains and Losses | cs.LG stat.ML | We propose an online convex optimization algorithm (RescaledExp) that
achieves optimal regret in the unconstrained setting without prior knowledge of
any bounds on the loss functions. We prove a lower bound showing an exponential
separation between the regret of existing algorithms that require a known bound
on the loss functions and any algorithm that does not require such knowledge.
RescaledExp matches this lower bound asymptotically in the number of
iterations. RescaledExp is naturally hyperparameter-free and we demonstrate
empirically that it matches prior optimization algorithms that require
hyperparameter optimization.
| Ashok Cutkosky and Kwabena Boahen | null | 1703.02622 | null | null |
Horde of Bandits using Gaussian Markov Random Fields | cs.LG | The gang of bandits (GOB) model \cite{cesa2013gang} is a recent contextual
bandits framework that shares information between a set of bandit problems,
related by a known (possibly noisy) graph. This model is useful in problems
like recommender systems where the large number of users makes it vital to
transfer information between users. Despite its effectiveness, the existing GOB
model can only be applied to small problems due to its quadratic
time-dependence on the number of nodes. Existing solutions to combat the
scalability issue require an often-unrealistic clustering assumption. By
exploiting a connection to Gaussian Markov random fields (GMRFs), we show that
the GOB model can be made to scale to much larger graphs without additional
assumptions. In addition, we propose a Thompson sampling algorithm which uses
the recent GMRF sampling-by-perturbation technique, allowing it to scale to
even larger problems (leading to a "horde" of bandits). We give regret bounds
and experimental results for GOB with Thompson sampling and epoch-greedy
algorithms, indicating that these methods are as good as or significantly
better than ignoring the graph or adopting a clustering-based approach.
Finally, when an existing graph is not available, we propose a heuristic for
learning it on the fly and show promising results.
| Sharan Vaswani, Mark Schmidt, Laks V.S. Lakshmanan | null | 1703.02626 | null | null |
Online Learning Without Prior Information | cs.LG stat.ML | The vast majority of optimization and online learning algorithms today
require some prior information about the data (often in the form of bounds on
gradients or on the optimal parameter value). When this information is not
available, these algorithms require laborious manual tuning of various
hyperparameters, motivating the search for algorithms that can adapt to the
data with no prior information. We describe a frontier of new lower bounds on
the performance of such algorithms, reflecting a tradeoff between a term that
depends on the optimal parameter value and a term that depends on the
gradients' rate of growth. Further, we construct a family of algorithms whose
performance matches any desired point on this frontier, which no previous
algorithm reaches.
| Ashok Cutkosky and Kwabena Boahen | null | 1703.02629 | null | null |
Don't Fear the Bit Flips: Optimized Coding Strategies for Binary
Classification | stat.ML cs.LG | After being trained, classifiers must often operate on data that has been
corrupted by noise. In this paper, we consider the impact of such noise on the
features of binary classifiers. Inspired by tools for classifier robustness, we
introduce the same classification probability (SCP) to measure the resulting
distortion on the classifier outputs. We introduce a low-complexity estimate of
the SCP based on quantization and polynomial multiplication. We also study
channel coding techniques based on replication error-correcting codes. In
contrast to the traditional channel coding approach, where error-correction is
meant to preserve the data and is agnostic to the application, our schemes
specifically aim to maximize the SCP (equivalently minimizing the distortion of
the classifier output) for the same redundancy overhead.
| Frederic Sala, Shahroze Kabir, Guy Van den Broeck, and Lara Dolecek | null | 1703.02641 | null | null |
Streaming Weak Submodularity: Interpreting Neural Networks on the Fly | stat.ML cs.IT cs.LG math.IT | In many machine learning applications, it is important to explain the
predictions of a black-box classifier. For example, why does a deep neural
network assign an image to a particular class? We cast interpretability of
black-box classifiers as a combinatorial maximization problem and propose an
efficient streaming algorithm to solve it subject to cardinality constraints.
By extending ideas from Badanidiyuru et al. [2014], we provide a constant
factor approximation guarantee for our algorithm in the case of random stream
order and a weakly submodular objective function. This is the first such
theoretical guarantee for this general class of functions, and we also show
that no such algorithm exists for a worst case stream order. Our algorithm
obtains similar explanations of Inception V3 predictions $10$ times faster than
the state-of-the-art LIME framework of Ribeiro et al. [2016].
| Ethan R. Elenberg, Alexandros G. Dimakis, Moran Feldman, Amin Karbasi | null | 1703.02647 | null | null |
Towards Generalization and Simplicity in Continuous Control | cs.LG cs.AI cs.RO cs.SY | This work shows that policies with simple linear and RBF parameterizations
can be trained to solve a variety of continuous control tasks, including the
OpenAI gym benchmarks. The performance of these trained policies are
competitive with state of the art results, obtained with more elaborate
parameterizations such as fully connected neural networks. Furthermore,
existing training and testing scenarios are shown to be very limited and prone
to over-fitting, thus giving rise to only trajectory-centric policies. Training
with a diverse initial state distribution is shown to produce more global
policies with better generalization. This allows for interactive control
scenarios where the system recovers from large on-line perturbations; as shown
in the supplementary video.
| Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, Sham Kakade | null | 1703.02660 | null | null |
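A hedged sketch of rolling out a linear policy $a = Ws + b$ on a gym benchmark, using the classic (pre-0.26) gym step API; the environment name is an example and the random weights are placeholders for the trained parameters the paper reports:

```python
import gym
import numpy as np

env = gym.make("Hopper-v2")                    # example benchmark environment
obs_dim = env.observation_space.shape[0]
act_dim = env.action_space.shape[0]
W, b = 0.1 * np.random.randn(act_dim, obs_dim), np.zeros(act_dim)

obs, total_reward, done = env.reset(), 0.0, False
while not done:
    action = np.clip(W @ obs + b, env.action_space.low, env.action_space.high)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```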
Structural Data Recognition with Graph Model Boosting | cs.LG stat.ML | This paper presents a novel method for structural data recognition using a
large number of graph models. In general, prevalent methods for structural data
recognition have two shortcomings: 1) Only a single model is used to capture
structural variation. 2) Naive recognition methods are used, such as the
nearest neighbor method. In this paper, we propose strengthening the
recognition performance of these models as well as their ability to capture
structural variation. The proposed method constructs a large number of graph
models and trains decision trees using the models. This paper makes two main
contributions. The first is a novel graph model that can quickly perform
calculations, which allows us to construct several models in a feasible amount
of time. The second contribution is a novel approach to structural data
recognition: graph model boosting. Comprehensive structural variations can be
captured with a large number of graph models constructed in a boosting
framework, and a sophisticated classifier can be formed by aggregating the
decision trees. Consequently, we can carry out structural data recognition with
powerful recognition capability in the face of comprehensive structural
variation. The experiments show that the proposed method achieves impressive
results and outperforms existing methods on datasets from the IAM graph
database repository.
| Tomo Miyazaki, Shinichiro Omachi | 10.1109/ACCESS.2018.2876860 | 1703.02662 | null | null |
Sparse Quadratic Logistic Regression in Sub-quadratic Time | stat.ML cs.IT cs.LG math.IT | We consider support recovery in the quadratic logistic regression setting,
where the target depends on both $p$ linear terms $x_i$ and up to $p^2$ quadratic
terms $x_i x_j$. Quadratic terms enable prediction/modeling of higher-order
effects between features and the target, but when incorporated naively may
involve solving a very large regression problem. We consider the sparse case,
where at most $s$ terms (linear or quadratic) are non-zero, and provide a new
faster algorithm. It involves (a) identifying the weak support (i.e. all
relevant variables) and (b) standard logistic regression optimization only on
these chosen variables. The first step relies on a novel insight about
correlation tests in the presence of non-linearity, and takes $O(pn)$ time for
$n$ samples - giving potentially huge computational gains over the naive
approach. Motivated by insights from the boolean case, we propose a non-linear
correlation test for the non-binary finite-support case that involves hashing a
variable and then correlating it with the output variable. We also provide
experimental results to demonstrate the effectiveness of our methods.
| Karthikeyan Shanmugam, Murat Kocaoglu, Alexandros G. Dimakis and Sujay
Sanghavi | null | 1703.02682 | null | null |
Exact MAP Inference by Avoiding Fractional Vertices | stat.ML cs.DS cs.IT cs.LG math.IT | Given a graphical model, one essential problem is MAP inference, that is,
finding the most likely configuration of states according to the model.
Although this problem is NP-hard, large instances can be solved in practice. A
major open question is to explain why this is true. We give a natural condition
under which we can provably perform MAP inference in polynomial time. We
require that the number of fractional vertices in the LP relaxation whose value
exceeds the optimal solution is bounded by a polynomial in the problem size. This
resolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for
general LP relaxations of integer programs, known techniques can only handle a
constant number of fractional vertices whose value exceeds the optimal
solution. We experimentally verify this condition and demonstrate how efficient
various integer programming methods are at removing fractional solutions.
| Erik M. Lindgren, Alexandros G. Dimakis, Adam Klivans | null | 1703.02689 | null | null |
Leveraging Sparsity for Efficient Submodular Data Summarization | stat.ML cs.DS cs.IT cs.LG math.IT | The facility location problem is widely used for summarizing large datasets
and has additional applications in sensor placement, image retrieval, and
clustering. One difficulty of this problem is that submodular optimization
algorithms require the calculation of pairwise benefits for all items in the
dataset. This is infeasible for large problems, so recent work proposed to only
calculate nearest neighbor benefits. One limitation is that several strong
assumptions were invoked to obtain provable approximation guarantees. In this
paper we establish that these extra assumptions are not necessary---solving the
sparsified problem will be almost optimal under the standard assumptions of the
problem. We then analyze a different method of sparsification that is a better
model for methods such as Locality Sensitive Hashing to accelerate the nearest
neighbor computations, and we extend the approach to a broader family of
similarities. We validate our approach by demonstrating that it rapidly
generates interpretable summaries.
| Erik M. Lindgren, Shanshan Wu, Alexandros G. Dimakis | null | 1703.0269 | null | null |
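As a reference point for the sparsification discussed above, the following is a hedged sketch of the naive greedy algorithm for the facility location objective f(S) = sum_i max_{j in S} sim[i, j], run on a similarity matrix whose rows have been truncated to their t largest entries. The RBF similarity, the value of t, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def greedy_facility_location(sim, k):
    """Greedy maximization of f(S) = sum_i max_{j in S} sim[i, j]; sim may
    be sparsified so entries outside each row's nearest neighbors are 0."""
    best_cover = np.zeros(sim.shape[0])    # current max similarity per item
    S = []
    for _ in range(k):
        # Total coverage if candidate j were added, computed for all j at once.
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[S] = -np.inf                 # never re-pick a chosen item
        j = int(np.argmax(gains))
        S.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return S

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
sim = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # toy RBF
t = 10                                     # keep each row's t largest entries
thresh = np.partition(sim, -t, axis=1)[:, -t][:, None]
print(greedy_facility_location(np.where(sim >= thresh, sim, 0.0), k=5))
```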
Robust Adversarial Reinforcement Learning | cs.LG cs.AI cs.MA cs.RO | Deep neural networks coupled with fast simulation and improved computation
have led to recent successes in the field of reinforcement learning (RL).
However, most current RL-based approaches fail to generalize since: (a) the gap
between simulation and real world is so large that policy-learning approaches
fail to transfer; (b) even if policy learning is done in real world, the data
scarcity leads to failed generalization from training to test scenarios (e.g.,
due to different friction or object masses). Inspired by H-infinity control
methods, we note that both modeling errors and differences in training and test
scenarios can be viewed as extra forces/disturbances in the system. This paper
proposes the idea of robust adversarial reinforcement learning (RARL), where we
train an agent to operate in the presence of a destabilizing adversary that
applies disturbance forces to the system. The jointly trained adversary is
reinforced -- that is, it learns an optimal destabilization policy. We
formulate the policy learning as a zero-sum, minimax objective function.
Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah,
Swimmer, Hopper and Walker2d) conclusively demonstrate that our method (a)
improves training stability; (b) is robust to differences in training/test
conditions; and (c) outperforms the baseline even in the absence of the
adversary.
| Lerrel Pinto, James Davidson, Rahul Sukthankar and Abhinav Gupta | null | 1703.02702 | null | null |
On Approximation Guarantees for Greedy Low Rank Optimization | stat.ML cs.IT cs.LG math.IT | We provide new approximation guarantees for greedy low rank matrix estimation
under standard assumptions of restricted strong convexity and smoothness. Our
novel analysis also uncovers previously unknown connections between the low
rank estimation and combinatorial optimization, so much so that our bounds are
reminiscent of corresponding approximation bounds in submodular maximization.
Additionally, we also provide statistical recovery guarantees. Finally, we
present empirical comparison of greedy estimation with established baselines on
two important real-world problems.
| Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban | null | 1703.02721 | null | null |
Scalable Greedy Feature Selection via Weak Submodularity | stat.ML cs.IT cs.LG math.IT | Greedy algorithms are widely used for problems in machine learning such as
feature selection and set function optimization. Unfortunately, for large
datasets, the running time of even greedy algorithms can be quite high. This is
because for each greedy step we need to refit a model or calculate a function
using the previously selected choices and the new candidate.
Two algorithms that are faster approximations to the greedy forward selection
were introduced recently ([Mirzasoleiman et al. 2013, 2015]). They achieve
better performance by exploiting distributed computation and stochastic
evaluation respectively. Both algorithms have provable performance guarantees
for submodular functions.
In this paper we show that, contrary to previously held opinion,
submodularity is not required to obtain approximation guarantees for these two
algorithms. Specifically, we show that a generalized concept of weak
submodularity suffices to give multiplicative approximation guarantees. Our
result extends the applicability of these algorithms to a larger class of
functions. Furthermore, we show that a bounded submodularity ratio can be used
to provide data-dependent bounds that can sometimes be tighter even for
submodular functions. We empirically validate our work by showing superior
performance of fast greedy approximations versus several established baselines
on artificial and real datasets.
| Rajiv Khanna, Ethan Elenberg, Alexandros G. Dimakis, Sahand Negahban,
Joydeep Ghosh | null | 1703.02723 | null | null |
Tensor SVD: Statistical and Computational Limits | math.ST cs.LG stat.ME stat.ML stat.TH | In this paper, we propose a general framework for tensor singular value
decomposition (tensor SVD), which focuses on the methodology and theory for
extracting the hidden low-rank structure from high-dimensional tensor data.
Comprehensive results are developed on both the statistical and computational
limits for tensor SVD. This problem exhibits three different phases according
to the signal-to-noise ratio (SNR). In particular, with strong SNR, we show
that the classical higher-order orthogonal iteration achieves the minimax
optimal rate of convergence in estimation; with weak SNR, the
information-theoretical lower bound implies that it is impossible to have
consistent estimation in general; with moderate SNR, we show that the
non-convex maximum likelihood estimation provides an optimal solution, but with
NP-hard computational cost; moreover, under the hardness hypothesis of
hypergraphic planted clique detection, there are no polynomial-time algorithms
performing consistently in general.
| Anru Zhang and Dong Xia | null | 1703.02724 | null | null |
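To make the strong-SNR estimator concrete, below is a small numpy sketch of the classical higher-order orthogonal iteration (HOOI) that the abstract identifies as minimax optimal in that regime, written for 3-way tensors. Ranks, the iteration count, and the demo data are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, iters=20):
    """Higher-order orthogonal iteration for a 3-way tensor T."""
    # Initialize with HOSVD: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    for _ in range(iters):
        for m in range(3):
            Y = T
            for j in range(3):             # project along the other two modes
                if j != m:
                    Y = np.moveaxis(np.tensordot(Y, U[j], axes=(j, 0)), -1, j)
            U[m] = np.linalg.svd(unfold(Y, m),
                                 full_matrices=False)[0][:, :ranks[m]]
    G = T                                  # core: G = T x_1 U1' x_2 U2' x_3 U3'
    for j in range(3):
        G = np.moveaxis(np.tensordot(G, U[j], axes=(j, 0)), -1, j)
    return G, U

rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(30, 3)) for _ in range(3))
T = np.einsum('abc,ia,jb,kc->ijk', rng.normal(size=(3, 3, 3)), A, B, C)
G, U = hooi(T + 0.1 * rng.normal(size=T.shape), ranks=(3, 3, 3))
```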
Inference in Sparse Graphs with Pairwise Measurements and Side
Information | cs.LG | We consider the statistical problem of recovering a hidden "ground truth"
binary labeling for the vertices of a graph up to low Hamming error from noisy
edge and vertex measurements. We present new algorithms and a sharp
finite-sample analysis for this problem on trees and sparse graphs with poor
expansion properties such as hypergrids and ring lattices. Our method
generalizes and improves over that of Globerson et al. (2015), who introduced
the problem for two-dimensional grid lattices.
For trees we provide a simple, efficient algorithm that infers the ground
truth with optimal Hamming error, has optimal sample complexity, and implies
recovery results for all connected graphs. Here, the presence of side
information is critical to obtain a non-trivial recovery rate. We then show how
to adapt this algorithm to tree decompositions of edge-subgraphs of certain
graph families such as lattices, resulting in optimal recovery error rates that
can be obtained efficiently.
The thrust of our analysis is to 1) use the tree decomposition along with
edge measurements to produce a small class of viable vertex labelings and 2)
apply an analysis influenced by statistical learning theory to show that we can
infer the ground truth from this class using vertex measurements. We show the
power of our method in several examples including hypergrids, ring lattices,
and the Newman-Watts model for small world graphs. For two-dimensional grids,
our results improve over Globerson et al. (2015) by obtaining optimal recovery
in the constant-height regime.
| Dylan J. Foster, Daniel Reichman, Karthik Sridharan | null | 1703.02728 | null | null |
Byzantine-Tolerant Machine Learning | cs.DC cs.LG cs.NE math.OC stat.ML | The growth of data, the need for scalability and the complexity of models
used in modern machine learning call for distributed implementations. Yet, as
of today, distributed machine learning frameworks have largely ignored the
possibility of arbitrary (i.e., Byzantine) failures. In this paper, we study
the robustness to Byzantine failures at the fundamental level of stochastic
gradient descent (SGD), the heart of most machine learning algorithms. Assuming
a set of $n$ workers, up to $f$ of them being Byzantine, we ask how robust can
SGD be, without limiting the dimension, nor the size of the parameter space.
We first show that no gradient descent update rule based on a linear
combination of the vectors proposed by the workers (i.e., current approaches)
tolerates a single Byzantine failure. We then formulate a resilience property
of the update rule capturing the basic requirements to guarantee convergence
despite $f$ Byzantine workers. We finally propose Krum, an update rule that
satisfies the aforementioned resilience property. For a $d$-dimensional
learning problem, the time complexity of Krum is $O(n^2 \cdot (d + \log n))$.
| Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, Julien Stainer | null | 1703.02757 | null | null |
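The Krum rule described above admits a very short sketch: score each worker's proposed vector by the summed squared distance to its n - f - 2 closest peers, and take the lowest-scoring vector as the update. The numpy code below is an illustrative rendering (parameter names and the toy data are assumptions).

```python
import numpy as np

def krum(grads, f):
    """Return the proposed gradient with the smallest summed squared
    distance to its n - f - 2 nearest peers (Blanchard et al.)."""
    G = np.stack(grads)
    n = len(G)
    d2 = ((G[:, None, :] - G[None, :, :]) ** 2).sum(-1)  # pairwise distances
    scores = [np.sort(np.delete(d2[i], i))[: n - f - 2].sum()
              for i in range(n)]
    return G[int(np.argmin(scores))]

# Toy demo: 8 honest workers around the true gradient, 2 Byzantine outliers.
rng = np.random.default_rng(0)
honest = [np.array([1.0, -2.0]) + 0.1 * rng.normal(size=2) for _ in range(8)]
byzantine = [np.array([100.0, 100.0]), np.array([-80.0, 50.0])]
print(krum(honest + byzantine, f=2))      # close to [1, -2]
```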
An Integrated and Scalable Platform for Proactive Event-Driven Traffic
Management | cs.AI cs.LG cs.SY | Traffic on freeways can be managed by means of ramp meters from Road Traffic
Control rooms. Human operators cannot efficiently manage a network of ramp
meters. To support them, we present an intelligent platform for traffic
management which includes a new ramp metering coordination scheme in the
decision making module, an efficient dashboard for interacting with human
operators, machine learning tools for learning event definitions and Complex
Event Processing tools able to deal with uncertainties inherent to the traffic
use case. Unlike the usual approach, the devised event-driven platform is able
to predict congestion up to 4 minutes before it actually happens. Proactive
decision making can then be established leading to significant improvement of
traffic conditions.
| Alain Kibangou and Alexander Artikis and Evangelos Michelioudakis and
Georgios Paliouras and Marius Schmitt and John Lygeros and Chris Baber and
Natan Morar and Fabiana Fournier and Inna Skarbovsky | null | 1703.0281 | null | null |
Pretata: predicting TATA binding proteins with novel features and
dimensionality reduction strategy | q-bio.QM cs.LG q-bio.BM | Background: It is necessary and essential to discover protein function from
novel primary sequences. Wet lab experimental procedures are not only
time-consuming, but also costly, so predicting protein structure and function
reliably based only on amino acid sequence has significant value. TATA-binding
protein (TBP) is a kind of DNA binding protein, which plays a key role in the
transcription regulation. Our study proposed an automatic approach for
identifying TATA-binding proteins efficiently, accurately, and conveniently.
This method can guide the identification of special proteins with
computational intelligence strategies. Results: Firstly, we proposed novel
fingerprint features for TBP based on pseudo amino acid composition,
physicochemical properties, and secondary structure. Secondly, hierarchical
feature dimensionality reduction strategies were employed to improve the
performance further. Currently, Pretata achieves 92.92% TATA-binding
protein prediction accuracy, which is better than all other existing methods.
Conclusions: The experiments demonstrate that our method could greatly improve
the prediction accuracy and speed, thus allowing large-scale NGS data
prediction to be practical. A web server has been developed to assist other
researchers; it can be accessed at http://server.malab.cn/preTata/.
| Quan Zou, Shixiang Wan, Ying Ju, Jijun Tang and Xiangxiang Zeng | null | 1703.0285 | null | null |
Discriminative models for multi-instance problems with tree-structure | cs.CR cs.LG | Modeling network traffic is gaining importance in order to counter modern
threats of ever-increasing sophistication. It is, however, surprisingly difficult
and costly to construct reliable classifiers on top of telemetry data due to
the variety and complexity of signals that no human can manage to interpret in
full. Obtaining training data with a sufficiently large and variable body of
labels can thus be seen as a prohibitive problem. The goal of this work is to
detect infected computers by observing their HTTP(S) traffic collected from
network sensors, which are typically proxy servers or network firewalls, while
relying on only minimal human input in the model training phase. We propose a
discriminative model that makes decisions based on all of a computer's traffic
observed during a predefined time window (5 minutes in our case). The model is
trained on traffic samples collected over equally sized time windows for a large
number of computers, where the only labels needed are human verdicts about the
computer as a whole (presumed infected vs. presumed clean). As part of training,
the model itself recognizes discriminative patterns in traffic targeted to
individual servers and constructs the final high-level classifier on top of
them. We show the classifier to perform with very high precision, while the
learned traffic patterns can be interpreted as Indicators of Compromise. In the
following we implement the discriminative model as a neural network with
special structure reflecting two stacked multi-instance problems. The main
advantages of the proposed configuration include not only improved accuracy and
ability to learn from gross labels, but also automatic learning of server types
(together with their detectors) which are typically visited by infected
computers.
| Tomas Pevny and Petr Somol | null | 1703.02868 | null | null |
Memory Enriched Big Bang Big Crunch Optimization Algorithm for Data
Clustering | cs.AI cs.LG | Cluster analysis plays an important role in decision making process for many
knowledge-based systems. There exist a wide variety of different approaches for
clustering applications including the heuristic techniques, probabilistic
models, and traditional hierarchical algorithms. In this paper, a novel
heuristic approach based on big bang-big crunch algorithm is proposed for
clustering problems. The proposed method not only takes advantage of its
heuristic nature to alleviate the shortcomings of typical clustering algorithms
such as k-means, but also benefits from a memory-based scheme compared to
similar heuristic techniques. Furthermore, the performance of the proposed algorithm is
investigated based on several benchmark test functions as well as on the
well-known datasets. The experimental results show the significant superiority
of the proposed method over the similar algorithms.
| Kayvan Bijari, Hadi Zare, Hadi Veisi, Hossein Bobarshad | 10.1007/s00521-016-2528-9 | 1703.02883 | null | null |
Model-Based Policy Search for Automatic Tuning of Multivariate PID
Controllers | cs.LG cs.RO cs.SY stat.ML | PID control architectures are widely used in industrial applications. Despite
their low number of open parameters, tuning multiple, coupled PID controllers
can become tedious in practice. In this paper, we extend PILCO, a model-based
policy search framework, to automatically tune multivariate PID controllers
purely based on data observed on an otherwise unknown system. The system's
state is extended appropriately to frame the PID policy as a static state
feedback policy. This renders PID tuning possible as the solution of a finite
horizon optimal control problem without further a priori knowledge. The
framework is applied to the task of balancing an inverted pendulum on a seven
degree-of-freedom robotic arm, thereby demonstrating its capabilities of fast
and data-efficient policy learning, even on complex real world problems.
| Andreas Doerr, Duy Nguyen-Tuong, Alonso Marco, Stefan Schaal,
Sebastian Trimpe | null | 1703.02899 | null | null |
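The reformulation at the heart of the abstract, extending the state so a PID controller becomes a static linear state-feedback policy, fits in a few lines. The sketch below is a hypothetical illustration (class name, discretization, and signals are assumptions): once the error, its integral, and its derivative are part of the state, tuning the PID gains is exactly optimizing the parameters of a linear policy, which is what the model-based policy search operates on.

```python
import numpy as np

class PIDAsStateFeedback:
    """PID control written as u = theta @ z on the extended state
    z = [e, integral(e), d/dt e], so gain tuning = linear policy search."""
    def __init__(self, kp, ki, kd, dt):
        self.theta = np.array([kp, ki, kd])   # the policy parameters
        self.dt, self.i_err, self.prev_err = dt, 0.0, 0.0

    def act(self, setpoint, measurement):
        e = setpoint - measurement
        self.i_err += e * self.dt
        d_err = (e - self.prev_err) / self.dt
        self.prev_err = e
        z = np.array([e, self.i_err, d_err])  # extended state
        return float(self.theta @ z)          # static state feedback
```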
Learning a Unified Control Policy for Safe Falling | cs.RO cs.AI cs.LG | Being able to fall safely is a necessary motor skill for humanoids performing
highly dynamic tasks, such as running and jumping. We propose a new method to
learn a policy that minimizes the maximal impulse during the fall. The
optimization solves for both a discrete contact planning problem and a
continuous optimal control problem. Once trained, the policy can compute the
optimal next contacting body part (e.g. left foot, right foot, or hands),
contact location and timing, and the required joint actuation. We represent the
policy as a mixture of actor-critic neural networks, which consists of n control
policies and the corresponding value functions. Each pair of actor-critic is
associated with one of the n possible contacting body parts. During execution,
the policy corresponding to the highest value function will be executed while
the associated body part will be the next contact with the ground. With this
mixture of actor-critic architecture, the discrete contact sequence planning is
solved through the selection of the best critics while the continuous control
problem is solved by the optimization of actors. We show that our policy can
achieve comparable, sometimes even higher, rewards than a recursive search of
the action space using dynamic programming, while enjoying a 50- to 400-fold
speed gain during online execution.
| Visak CV Kumar, Sehoon Ha and C Karen Liu | null | 1703.02905 | null | null |
Deep Bayesian Active Learning with Image Data | cs.LG cs.CV stat.ML | Even though active learning forms an important pillar of machine learning,
deep learning tools are not prevalent within it. Deep learning poses several
difficulties when used in an active learning setting. First, active learning
(AL) methods generally rely on being able to learn and update models from small
amounts of data. Recent advances in deep learning, on the other hand, are
notorious for their dependence on large amounts of data. Second, many AL
acquisition functions rely on model uncertainty, yet deep learning methods
rarely represent such model uncertainty. In this paper we combine recent
advances in Bayesian deep learning into the active learning framework in a
practical way. We develop an active learning framework for high dimensional
data, a task which has been extremely challenging so far, with very sparse
existing literature. Taking advantage of specialised models such as Bayesian
convolutional neural networks, we demonstrate our active learning techniques
with image data, obtaining a significant improvement on existing active
learning approaches. We demonstrate this on both the MNIST dataset, as well as
for skin cancer diagnosis from lesion images (ISIC2016 task).
| Yarin Gal and Riashat Islam and Zoubin Ghahramani | null | 1703.0291 | null | null |
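One acquisition function in this line of work is BALD, the mutual information between predictions and model parameters, estimated with T stochastic dropout forward passes. The sketch below uses random stand-in probabilities; shapes and names are assumptions, not the authors' code.

```python
import numpy as np

def bald_scores(mc_probs):
    """BALD from Monte Carlo dropout. mc_probs: (T, N, C) class
    probabilities from T stochastic forward passes over N pool points."""
    mean = mc_probs.mean(axis=0)                                  # (N, C)
    H_mean = -(mean * np.log(mean + 1e-12)).sum(-1)               # predictive entropy
    E_H = -(mc_probs * np.log(mc_probs + 1e-12)).sum(-1).mean(0)  # expected entropy
    return H_mean - E_H              # high where the dropout passes disagree

rng = np.random.default_rng(0)
logits = rng.normal(size=(20, 100, 10))                           # T=20 passes
mc_probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
query_idx = np.argsort(-bald_scores(mc_probs))[:10]               # points to label
```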
Dropout Inference in Bayesian Neural Networks with Alpha-divergences | cs.LG stat.ML | To obtain uncertainty estimates with real-world Bayesian deep learning
models, practical inference approximations are needed. Dropout variational
inference (VI), for example, has been used for machine vision and medical
applications, but VI can severely underestimate model uncertainty.
Alpha-divergences are alternative divergences to VI's KL objective, which are
able to avoid VI's uncertainty underestimation. But these are hard to use in
practice: existing techniques can only use Gaussian approximating
distributions and require existing models to be changed radically, and are thus of
limited use for practitioners. We propose a re-parametrisation of the
alpha-divergence objectives, deriving a simple inference technique which,
together with dropout, can be easily implemented with existing models by simply
changing the loss of the model. We demonstrate improved uncertainty estimates
and accuracy compared to VI in dropout networks. We study our model's epistemic
uncertainty far away from the data using adversarial images, showing that these
can be distinguished from non-adversarial images by examining our model's
uncertainty.
| Yingzhen Li and Yarin Gal | null | 1703.02914 | null | null |
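A compact sketch of the re-parameterized data term as I read it from this line of work: with K stochastic dropout passes, the per-point loss is a scaled log-mean-exp of alpha-weighted log-likelihoods, and alpha -> 0 recovers the standard VI/dropout objective. Shapes and names are assumptions, and the weight regularizer is omitted.

```python
import numpy as np
from scipy.special import logsumexp

def alpha_dropout_loss(log_liks, alpha):
    """Black-box alpha-divergence dropout objective (a sketch).
    log_liks: (K, N) array of log p(y_n | x_n, w_k) over K dropout masks."""
    K = log_liks.shape[0]
    # -1/alpha * log( (1/K) * sum_k exp(alpha * log_lik_k) ), per data point.
    per_point = -(logsumexp(alpha * log_liks, axis=0) - np.log(K)) / alpha
    return per_point.sum()
```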
Nearly-tight VC-dimension and pseudodimension bounds for piecewise
linear neural networks | cs.LG | We prove new upper and lower bounds on the VC-dimension of deep neural
networks with the ReLU activation function. These bounds are tight for almost
the entire range of parameters. Letting $W$ be the number of weights and $L$ be
the number of layers, we prove that the VC-dimension is $O(W L \log(W))$, and
provide examples with VC-dimension $\Omega( W L \log(W/L) )$. This improves
both the previously known upper bounds and lower bounds. In terms of the number
$U$ of non-linear units, we prove a tight bound $\Theta(W U)$ on the
VC-dimension. All of these bounds generalize to arbitrary piecewise linear
activation functions, and also hold for the pseudodimensions of these function
classes.
Combined with previous results, this gives an intriguing range of
dependencies of the VC-dimension on depth for networks with different
non-linearities: there is no dependence for piecewise-constant, linear
dependence for piecewise-linear, and no more than quadratic dependence for
general piecewise-polynomial.
| Peter L. Bartlett and Nick Harvey and Chris Liaw and Abbas Mehrabian | null | 1703.0293 | null | null |
A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile
Analytics | cs.LG cs.CV | Internet of Things (IoT) devices and applications are being deployed in our
homes and workplaces. These devices often rely on continuous data collection to
feed machine learning models. However, this approach introduces several privacy
and efficiency challenges, as the service operator can perform unwanted
inferences on the available data. Recently, advances in edge processing have
paved the way for more efficient, and private, data processing at the source
for simple tasks and lighter models, though they remain a challenge for larger,
and more complicated models. In this paper, we present a hybrid approach for
breaking down large, complex deep neural networks for cooperative,
privacy-preserving analytics. To this end, instead of performing the whole
operation in the cloud, we let an IoT device run the initial layers of the
neural network, and then send the output to the cloud to feed the remaining
layers and produce the final result. In order to ensure that the user's device
contains no extra information except what is necessary for the main task and
preventing any secondary inference on the data, we introduce Siamese
fine-tuning. We evaluate the privacy benefits of this approach based on the
information exposed to the cloud service. We also assess the local inference
cost of different layers on a modern handset. Our evaluations show that by
using Siamese fine-tuning and at a small processing cost, we can greatly reduce
the level of unnecessary, potentially sensitive information in the personal
data, and thus achieving the desired trade-off between utility, privacy, and
performance.
| Seyed Ali Osia, Ali Shahin Shamsabadi, Sina Sajadmanesh, Ali Taheri,
Kleomenis Katevas, Hamid R. Rabiee, Nicholas D. Lane, Hamed Haddadi | 10.1109/JIOT.2020.2967734 | 1703.02952 | null | null |
Unsupervised Ensemble Regression | stat.ML cs.LG | Consider a regression problem where there is no labeled data and the only
observations are the predictions $f_i(x_j)$ of $m$ experts $f_{i}$ over many
samples $x_j$. With no knowledge on the accuracy of the experts, is it still
possible to accurately estimate the unknown responses $y_{j}$? Can one still
detect the least or most accurate experts? In this work we propose a framework
to study these questions, based on the assumption that the $m$ experts have
uncorrelated deviations from the optimal predictor. Assuming the first two
moments of the response are known, we develop methods to detect the best and
worst regressors, and derive U-PCR, a novel principal components approach for
unsupervised ensemble regression. We provide theoretical support for U-PCR and
illustrate its improved accuracy over the ensemble mean and median on a variety
of regression problems.
| Omer Dror, Boaz Nadler, Erhan Bilal and Yuval Kluger | null | 1703.02965 | null | null |
A Manifold Approach to Learning Mutually Orthogonal Subspaces | cs.LG | Although many machine learning algorithms involve learning subspaces with
particular characteristics, optimizing a parameter matrix that is constrained
to represent a subspace can be challenging. One solution is to use Riemannian
optimization methods that enforce such constraints implicitly, leveraging the
fact that the feasible parameter values form a manifold. While Riemannian
methods exist for some specific problems, such as learning a single subspace,
there are more general subspace constraints that offer additional flexibility
when setting up an optimization problem, but have not been formulated as a
manifold.
We propose the partitioned subspace (PS) manifold for optimizing matrices
that are constrained to represent one or more subspaces. Each point on the
manifold defines a partitioning of the input space into mutually orthogonal
subspaces, where the number of partitions and their sizes are defined by the
user. As a result, distinct groups of features can be learned by defining
different objective functions for each partition. We illustrate the properties
of the manifold through experiments on multiple dataset analysis and domain
adaptation.
| Stephen Giguere, Francisco Garcia, Sridhar Mahadevan | null | 1703.02992 | null | null |
Spectral Graph Convolutions for Population-based Disease Prediction | stat.ML cs.LG | Exploiting the wealth of imaging and non-imaging information for disease
prediction tasks requires models capable of representing, at the same time,
individual features as well as data associations between subjects from
potentially large populations. Graphs provide a natural framework for such
tasks, yet previous graph-based approaches focus on pairwise similarities
without modelling the subjects' individual characteristics and features. On the
other hand, relying solely on subject-specific imaging feature vectors fails to
model the interaction and similarity between subjects, which can reduce
performance. In this paper, we introduce the novel concept of Graph
Convolutional Networks (GCN) for brain analysis in populations, combining
imaging and non-imaging data. We represent populations as a sparse graph where
its vertices are associated with image-based feature vectors and the edges
encode phenotypic information. This structure was used to train a GCN model on
partially labelled graphs, aiming to infer the classes of unlabelled nodes from
the node features and pairwise associations between subjects. We demonstrate
the potential of the method on the challenging ADNI and ABIDE databases, as a
proof of concept of the benefit from integrating contextual information in
classification tasks. This has a clear impact on the quality of the
predictions, leading to 69.5% accuracy for ABIDE (outperforming the current
state of the art of 66.8%) and 77% for ADNI for prediction of MCI conversion,
significantly outperforming standard linear classifiers where only individual
features are considered.
| Sarah Parisot, Sofia Ira Ktena, Enzo Ferrante, Matthew Lee, Ricardo
Guerrerro Moreno, Ben Glocker, Daniel Rueckert | null | 1703.0302 | null | null |
Parallel Implementation of Efficient Search Schemes for the Inference of
Cancer Progression Models | cs.LG stat.ML | The emergence and development of cancer is a consequence of the accumulation
over time of genomic mutations involving a specific set of genes, which
provides the cancer clones with a functional selective advantage. In this work,
we model the order of accumulation of such mutations during the progression,
which eventually leads to the disease, by means of probabilistic graphic
models, i.e., Bayesian Networks (BNs). We investigate how to perform the task
of learning the structure of such BNs, according to experimental evidence,
adopting a global optimization meta-heuristics. In particular, in this work we
rely on Genetic Algorithms, and to strongly reduce the execution time of the
inference -- which can also involve multiple repetitions to collect
statistically significant assessments of the data -- we distribute the
calculations using both multi-threading and a multi-node architecture. The
results show that our approach is characterized by good accuracy and
specificity; we also demonstrate its feasibility, thanks to an 84x reduction of
the overall execution time with respect to a traditional sequential
implementation.
| Daniele Ramazzotti and Marco S. Nobile and Paolo Cazzaniga and
Giancarlo Mauri and Marco Antoniotti | 10.1109/CIBCB.2016.7758109 | 1703.03038 | null | null |
Combining Bayesian Approaches and Evolutionary Techniques for the
Inference of Breast Cancer Networks | cs.LG cs.AI | Gene and protein networks are very important to model complex large-scale
systems in molecular biology. Inferring or reverse-engineering such networks can
be defined as the process of identifying gene/protein interactions from
experimental data through computational analysis. However, this task is
typically complicated by the enormously large scale of the unknowns in a rather
small sample size. Furthermore, when the goal is to study causal relationships
within the network, tools capable of overcoming the limitations of correlation
networks are required. In this work, we make use of Bayesian Graphical Models
to attack this problem and, specifically, we perform a comparative study of
different state-of-the-art heuristics, analyzing their performance in inferring
the structure of the Bayesian Network from breast cancer data.
| Stefano Beretta and Mauro Castelli and Ivo Goncalves and Ivan Merelli
and Daniele Ramazzotti | 10.5220/0006064102170224 | 1703.03041 | null | null |
A GAMP Based Low Complexity Sparse Bayesian Learning Algorithm | cs.LG stat.ML | In this paper, we present an algorithm for the sparse signal recovery problem
that incorporates damped Gaussian generalized approximate message passing
(GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning
(SBL). In particular, GGAMP is used to implement the E-step in SBL in place of
matrix inversion, leveraging the fact that GGAMP is guaranteed to converge with
appropriate damping. The resulting GGAMP-SBL algorithm is much more robust to
arbitrary measurement matrix $\boldsymbol{A}$ than the standard damped GAMP
algorithm while being much lower complexity than the standard SBL algorithm. We
then extend the approach from the single measurement vector (SMV) case to the
temporally correlated multiple measurement vector (MMV) case, leading to the
GGAMP-TSBL algorithm. We verify the robustness and computational advantages of
the proposed algorithms through numerical experiments.
| Maher Al-Shoukairi, Philip Schniter, Bhaskar D. Rao | 10.1109/TSP.2017.2764855 | 1703.03044 | null | null |
Deep Variation-structured Reinforcement Learning for Visual Relationship
and Attribute Detection | cs.CV cs.AI cs.LG | Despite progress in visual perception tasks such as image classification and
detection, computers still struggle to understand the interdependency of
objects in the scene as a whole, e.g., relations between objects or their
attributes. Existing methods often ignore global context cues capturing the
interactions among different object instances, and can only recognize a handful
of types by exhaustively training individual detectors for all possible
relationships. To capture such global interdependency, we propose a deep
Variation-structured Reinforcement Learning (VRL) framework to sequentially
discover object relationships and attributes in the whole image. First, a
directed semantic action graph is built using language priors to provide a rich
and compact representation of semantic correlations between object categories,
predicates, and attributes. Next, we use a variation-structured traversal over
the action graph to construct a small, adaptive action set for each step based
on the current state and historical actions. In particular, an ambiguity-aware
object mining scheme is used to resolve semantic ambiguity among object
categories that the object detector fails to distinguish. We then make
sequential predictions using a deep RL framework, incorporating global context
cues and semantic embeddings of previously extracted phrases in the state
vector. Our experiments on the Visual Relationship Detection (VRD) dataset and
the large-scale Visual Genome dataset validate the superiority of VRL, which
can achieve significantly better detection results on datasets involving
thousands of relationship and attribute types. We also demonstrate that VRL is
able to predict unseen types embedded in our action graph by learning
correlations on shared graph nodes.
| Xiaodan Liang and Lisa Lee and Eric P. Xing | null | 1703.03054 | null | null |
Interpretable Structure-Evolving LSTM | cs.CV cs.AI cs.LG | This paper develops a general framework for learning interpretable data
representation via Long Short-Term Memory (LSTM) recurrent neural networks over
hierarchical graph structures. Instead of learning LSTM models over the pre-fixed
structures, we propose to further learn the intermediate interpretable
multi-level graph structures in a progressive and stochastic way from data
during the LSTM network optimization. We thus call this model the
structure-evolving LSTM. In particular, starting with an initial element-level
graph representation where each node is a small data element, the
structure-evolving LSTM gradually evolves the multi-level graph representations
by stochastically merging the graph nodes with high compatibilities along the
stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two
connected nodes from their corresponding LSTM gate outputs, which is used to
generate a merging probability. The candidate graph structures are accordingly
generated where the nodes are grouped into cliques with their merging
probabilities. We then produce the new graph structure with a
Metropolis-Hastings algorithm, which alleviates the risk of getting stuck in
local optima by stochastic sampling with an acceptance probability. Once a
graph structure is accepted, a higher-level graph is then constructed by taking
the partitioned cliques as its nodes. During the evolving process,
the representation becomes more abstract at higher levels, where redundant
information is filtered out, allowing more efficient propagation of long-range
data dependencies. We evaluate the effectiveness of structure-evolving LSTM in
the application of semantic object parsing and demonstrate its advantage over
state-of-the-art LSTM models on standard benchmarks.
| Xiaodan Liang and Liang Lin and Xiaohui Shen and Jiashi Feng and
Shuicheng Yan and Eric P. Xing | null | 1703.03055 | null | null |
Deep Convolutional Neural Network Inference with Floating-point Weights
and Fixed-point Activations | cs.LG cs.CV | Deep convolutional neural network (CNN) inference requires a significant amount
of memory and computation, which limits its deployment on embedded devices. To
alleviate these problems to some extent, prior research utilizes low-precision
fixed-point numbers to represent the CNN weights and activations. However, the
minimum required data precision of fixed-point weights varies across different
networks and also across different layers of the same network. In this work, we
propose using floating-point numbers for representing the weights and
fixed-point numbers for representing the activations. We show that using
floating-point representation for weights is more efficient than fixed-point
representation for the same bit-width and demonstrate it on popular large-scale
CNNs such as AlexNet, SqueezeNet, GoogLeNet and VGG-16. We also show that such
a representation scheme enables compact hardware multiply-and-accumulate (MAC)
unit design. Experimental results show that the proposed scheme reduces the
weight storage by up to 36% and power consumption of the hardware multiplier by
up to 50%.
| Liangzhen Lai, Naveen Suda, Vikas Chandra | null | 1703.03073 | null | null |
Efficient computational strategies to learn the structure of
probabilistic graphical models of cumulative phenomena | cs.LG cs.AI | Structural learning of Bayesian Networks (BNs) is an NP-hard problem, which is
further complicated by many theoretical issues, such as the I-equivalence among
different structures. In this work, we focus on a specific subclass of BNs,
named Suppes-Bayes Causal Networks (SBCNs), which include specific structural
constraints based on Suppes' probabilistic causation to efficiently model
cumulative phenomena. Here we compare the performance, via extensive
simulations, of various state-of-the-art search strategies, such as local
search techniques and Genetic Algorithms, as well as of distinct regularization
methods. The assessment is performed on a large number of simulated datasets
from topologies with distinct levels of complexity, various sample size and
different rates of errors in the data. Among the main results, we show that the
introduction of Suppes' constraints dramatically improves the inference
accuracy, by reducing the solution space and providing a temporal ordering on
the variables. We also report on trade-offs among different search techniques
that can be efficiently employed in distinct experimental settings. This
manuscript is an extended version of the paper "Structural Learning of
Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018
International Conference on Computational Science.
| Daniele Ramazzotti and Marco S. Nobile and Marco Antoniotti and Alex
Graudenzi | null | 1703.03074 | null | null |
Causal Data Science for Financial Stress Testing | cs.LG cs.AI cs.CE | The most recent financial upheavals have cast doubt on the adequacy of some
of the conventional quantitative risk management strategies, such as VaR (Value
at Risk), in many common situations. Consequently, there has been an increasing
need for verisimilar financial stress testing, namely simulating and analyzing
financial portfolios in extreme, albeit rare scenarios. Unlike conventional
risk management, which exploits statistical correlations among financial
instruments, here we focus our analysis on the notion of probabilistic
causation, which is embodied by Suppes-Bayes Causal Networks (SBCNs); SBCNs are
probabilistic graphical models that have many attractive features in terms of
more accurate causal analysis for generating financial stress scenarios. In
this paper, we present a novel approach for conducting stress testing of
financial portfolios based on SBCNs in combination with classical machine
learning classification tools. The resulting method is shown to be capable of
correctly discovering the causal relationships among financial factors that
affect the portfolios and thus, simulating stress testing scenarios with a
higher accuracy and lower computational complexity than conventional Monte
Carlo Simulations.
| Gelin Gao and Bud Mishra and Daniele Ramazzotti | 10.1016/j.jocs.2018.04.003 | 1703.03076 | null | null |
Statistical Cost Sharing | cs.GT cs.LG | We study the cost sharing problem for cooperative games in situations where
the cost function $C$ is not available via oracle queries, but must instead be
derived from data, represented as tuples $(S, C(S))$, for different subsets $S$
of players. We formalize this approach, which we call statistical cost sharing,
and consider the computation of the core and the Shapley value, when the tuples
are drawn from some distribution $\mathcal{D}$.
Previous work by Balcan et al. in this setting showed how to compute cost
shares that satisfy the core property with high probability for limited classes
of functions. We expand on their work and give an algorithm that computes such
cost shares for any function with a non-empty core. We complement these results
by proving an inapproximability lower bound for a weaker relaxation.
We then turn our attention to the Shapley value. We first show that when cost
functions come from the family of submodular functions with bounded curvature,
$\kappa$, the Shapley value can be approximated from samples up to a $\sqrt{1 -
\kappa}$ factor, and that the bound is tight. We then define statistical
analogues of the Shapley axioms, and derive a notion of statistical Shapley
value. We show that these can always be approximated arbitrarily well for
general functions over any distribution $\mathcal{D}$.
| Eric Balkanski, Umar Syed, Sergei Vassilvitskii | null | 1703.03111 | null | null |
Coordinated Multi-Agent Imitation Learning | cs.LG | We study the problem of imitation learning from demonstrations of multiple
coordinating agents. One key challenge in this setting is that learning a good
model of coordination can be difficult, since coordination is often implicit in
the demonstrations and must be inferred as a latent variable. We propose a
joint approach that simultaneously learns a latent coordination model along
with the individual policies. In particular, our method integrates unsupervised
structure learning with conventional imitation learning. We illustrate the
power of our approach on a difficult problem of learning multiple policies for
fine-grained behavior modeling in team sports, where different players occupy
different roles in the coordinated team strategy. We show that having a
coordination model to infer the roles of players yields substantially improved
imitation loss compared to conventional baselines.
| Hoang M. Le, Yisong Yue, Peter Carr, Patrick Lucey | null | 1703.03121 | null | null |
Learning to Remember Rare Events | cs.LG | Despite recent advances, memory-augmented deep neural networks are still
limited when it comes to life-long and one-shot learning, especially in
remembering rare events. We present a large-scale life-long memory module for
use in deep learning. The module exploits fast nearest-neighbor algorithms for
efficiency and thus scales to large memory sizes. Except for the
nearest-neighbor query, the module is fully differentiable and trained
end-to-end with no extra supervision. It operates in a life-long manner, i.e.,
without the need to reset it during training.
Our memory module can be easily added to any part of a supervised neural
network. To show its versatility we add it to a number of networks, from simple
convolutional ones tested on image classification to deep sequence-to-sequence
and recurrent-convolutional models. In all cases, the enhanced network gains
the ability to remember and do life-long one-shot learning. Our module
remembers training examples shown many thousands of steps in the past and it
can successfully generalize from them. We set new state-of-the-art for one-shot
learning on the Omniglot dataset and demonstrate, for the first time, life-long
one-shot learning in recurrent neural networks on a large-scale machine
translation task.
| {\L}ukasz Kaiser and Ofir Nachum and Aurko Roy and Samy Bengio | null | 1703.03129 | null | null |
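A toy, non-differentiable sketch of such a key-value memory, following the update rules as described (all names and details below are assumptions): queries are L2-normalized, recall is nearest-neighbor over stored keys, a correct recall averages the stored key with the query, and a miss overwrites the oldest slot.

```python
import numpy as np

class NearestNeighborMemory:
    def __init__(self, size, dim):
        self.keys = np.zeros((size, dim))
        self.values = -np.ones(size, dtype=int)   # -1 marks an empty slot
        self.age = np.zeros(size, dtype=int)

    def query(self, q):
        q = q / (np.linalg.norm(q) + 1e-8)
        return int(np.argmax(self.keys @ q))      # nearest stored key

    def update(self, q, label):
        q = q / (np.linalg.norm(q) + 1e-8)
        self.age += 1
        i = self.query(q)
        if self.values[i] == label:
            # Correct recall: nudge the stored key toward the query.
            k = self.keys[i] + q
            self.keys[i], self.age[i] = k / (np.linalg.norm(k) + 1e-8), 0
        else:
            # Miss: write the new (key, value) pair into the oldest slot.
            j = int(np.argmax(self.age))
            self.keys[j], self.values[j], self.age[j] = q, label, 0
```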
A Structured Self-attentive Sentence Embedding | cs.CL cs.AI cs.LG cs.NE | This paper proposes a new model for extracting an interpretable sentence
embedding by introducing self-attention. Instead of using a vector, we use a
2-D matrix to represent the embedding, with each row of the matrix attending on
a different part of the sentence. We also propose a self-attention mechanism
and a special regularization term for the model. As a side effect, the
embedding comes with an easy way of visualizing what specific parts of the
sentence are encoded into the embedding. We evaluate our model on 3 different
tasks: author profiling, sentiment classification, and textual entailment.
Results show that our model yields a significant performance gain compared to
other sentence embedding methods in all of the 3 tasks.
| Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing
Xiang, Bowen Zhou, Yoshua Bengio | null | 1703.0313 | null | null |
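In matrix form the model computes an annotation matrix A = softmax(W2 tanh(W1 H^T)) whose r rows attend to different tokens, the embedding M = A H, and the penalty ||A A^T - I||_F^2 that pushes the attention rows apart. A numpy sketch (dimensions and names are assumptions):

```python
import numpy as np

def structured_self_attention(H, W1, W2):
    """H: (n, u) hidden states for n tokens; W1: (da, u); W2: (r, da).
    Returns the (r, u) sentence embedding M, the annotations A, and the
    orthogonality penalty used as the regularizer."""
    scores = W2 @ np.tanh(W1 @ H.T)               # (r, n)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)             # row-wise softmax over tokens
    M = A @ H                                     # (r, u) embedding matrix
    penalty = np.linalg.norm(A @ A.T - np.eye(A.shape[0]), 'fro') ** 2
    return M, A, penalty

rng = np.random.default_rng(0)
H = rng.normal(size=(40, 300))                    # 40 tokens, 300-d states
M, A, P = structured_self_attention(H, rng.normal(size=(64, 300)) * 0.1,
                                    rng.normal(size=(8, 64)) * 0.1)
```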
Compressed Sensing using Generative Models | stat.ML cs.IT cs.LG math.IT | The goal of compressed sensing is to estimate a vector from an
underdetermined system of noisy linear measurements, by making use of prior
knowledge on the structure of vectors in the relevant domain. For almost all
results in this literature, the structure is represented by sparsity in a
well-chosen basis. We show how to achieve guarantees similar to standard
compressed sensing but without employing sparsity at all. Instead, we suppose
that vectors lie near the range of a generative model $G: \mathbb{R}^k \to
\mathbb{R}^n$. Our main theorem is that, if $G$ is $L$-Lipschitz, then roughly
$O(k \log L)$ random Gaussian measurements suffice for an $\ell_2/\ell_2$
recovery guarantee. We demonstrate our results using generative models from
published variational autoencoder and generative adversarial networks. Our
method can use $5$-$10$x fewer measurements than Lasso for the same accuracy.
| Ashish Bora, Ajil Jalal, Eric Price, Alexandros G. Dimakis | null | 1703.03208 | null | null |
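Recovery reduces to gradient descent on ||A G(z) - y||^2 over the latent code z. The sketch below uses a toy random two-layer ReLU "generator" with hand-written backprop so it stays self-contained; in the paper G is a trained VAE/GAN decoder and the descent is restarted from several random initializations. All constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, h, n, m = 10, 64, 200, 60                    # latent, hidden, signal, measurements
W1 = rng.normal(size=(h, k)) / np.sqrt(k)       # toy fixed "generator" weights
W2 = rng.normal(size=(n, h)) / np.sqrt(h)
G = lambda z: W2 @ np.maximum(W1 @ z, 0.0)

z_true = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)        # Gaussian measurement matrix
y = A @ G(z_true) + 0.01 * rng.normal(size=m)   # noisy measurements

z = rng.normal(size=k)                          # recover z by gradient descent
for _ in range(2000):
    a = W1 @ z
    r = A @ (W2 @ np.maximum(a, 0.0)) - y       # residual A G(z) - y
    grad_a = (W2.T @ (2 * A.T @ r)) * (a > 0)   # backprop through the ReLU layer
    z -= 0.01 * (W1.T @ grad_a)

print(np.linalg.norm(G(z) - G(z_true)) / np.linalg.norm(G(z_true)))
```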
Learning Active Learning from Data | cs.LG | In this paper, we suggest a novel data-driven approach to active learning
(AL). The key idea is to train a regressor that predicts the expected error
reduction for a candidate sample in a particular learning state. By formulating
the query selection procedure as a regression problem we are not restricted to
working with existing AL heuristics; instead, we learn strategies based on
experience from previous AL outcomes. We show that a strategy can be learnt
either from simple synthetic 2D datasets or from a subset of domain-specific
data. Our method yields strategies that work well on real data from a wide
range of domains.
| Ksenia Konyushkova, Raphael Sznitman and Pascal Fua | null | 1703.03365 | null | null |
Visual-Interactive Similarity Search for Complex Objects by Example of
Soccer Player Analysis | cs.LG cs.IR | The definition of similarity is a key prerequisite when analyzing complex
data types in data mining, information retrieval, or machine learning. However,
a meaningful definition is often hampered by the complexity of data objects
and particularly by different notions of subjective similarity latent in
targeted user groups. Taking the example of soccer players, we present a
visual-interactive system that learns users' mental models of similarity. In a
visual-interactive interface, users are able to label pairs of soccer players
with respect to their subjective notion of similarity. Our proposed similarity
model automatically learns the respective concept of similarity using an active
learning strategy. A visual-interactive retrieval technique is provided to
validate the model and to execute downstream retrieval tasks for soccer player
analysis. The applicability of the approach is demonstrated in different
evaluation strategies, including usage scenarios and cross-validation tests.
| J\"urgen Bernard and Christian Ritter and David Sessler and Matthias
Zeppelzauer and J\"orn Kohlhammer and Dieter Fellner | null | 1703.03385 | null | null |
Faster Greedy MAP Inference for Determinantal Point Processes | cs.DM cs.LG | Determinantal point processes (DPPs) are popular probabilistic models that
arise in many machine learning tasks, where distributions of diverse sets are
characterized by matrix determinants. In this paper, we develop fast algorithms
to find the most likely configuration (MAP) of large-scale DPPs, which is
NP-hard in general. Due to the submodular nature of the MAP objective, greedy
algorithms have been used with empirical success. Greedy implementations
require computation of log-determinants, matrix inverses or solving linear
systems at each iteration. We present faster implementations of the greedy
algorithms by utilizing the complementary benefits of two log-determinant
approximation schemes: (a) first-order expansions to the matrix log-determinant
function and (b) high-order expansions to the scalar log function with
stochastic trace estimators. In our experiments, our algorithms are orders of
magnitude faster than their competitors, while sacrificing only marginal accuracy.
| Insu Han, Prabhanjan Kambadur, Kyoungsoo Park, Jinwoo Shin | null | 1703.03389 | null | null |
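For orientation, this is the naive greedy MAP loop that the paper accelerates: repeatedly add the item that most increases log det(L_S). The sketch keeps the log-determinants exact for clarity; the paper's contribution is replacing them with first-order and stochastic-trace approximations. Kernel construction and names are assumptions.

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Naive greedy MAP for a DPP with kernel L (exact log-dets)."""
    S = []
    for _ in range(k):
        cur = np.linalg.slogdet(L[np.ix_(S, S)])[1] if S else 0.0
        best_j, best_gain = None, -np.inf
        for j in range(L.shape[0]):
            if j not in S:
                gain = np.linalg.slogdet(L[np.ix_(S + [j], S + [j])])[1] - cur
                if gain > best_gain:
                    best_j, best_gain = j, gain
        if best_gain <= 0:        # stop once no item increases the determinant
            break
        S.append(best_j)
    return S

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
L = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # toy RBF kernel
print(greedy_dpp_map(L, k=5))                           # a diverse subset
```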
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks | cs.LG cs.AI cs.CV cs.NE | We propose an algorithm for meta-learning that is model-agnostic, in the
sense that it is compatible with any model trained with gradient descent and
applicable to a variety of different learning problems, including
classification, regression, and reinforcement learning. The goal of
meta-learning is to train a model on a variety of learning tasks, such that it
can solve new learning tasks using only a small number of training samples. In
our approach, the parameters of the model are explicitly trained such that a
small number of gradient steps with a small amount of training data from a new
task will produce good generalization performance on that task. In effect, our
method trains the model to be easy to fine-tune. We demonstrate that this
approach leads to state-of-the-art performance on two few-shot image
classification benchmarks, produces good results on few-shot regression, and
accelerates fine-tuning for policy gradient reinforcement learning with neural
network policies.
| Chelsea Finn, Pieter Abbeel, Sergey Levine | null | 1703.034 | null | null |
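The core of the method, differentiating through the inner adaptation step, can be shown on a toy family of 1-D quadratic-loss tasks where the second-order term is available in closed form. The task family, step sizes, and names below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def maml_1d(tasks, alpha=0.05, beta=0.01, meta_iters=500):
    """MAML on tasks with loss L_i(w) = c * (w - a)^2, so both the inner
    gradient step and the exact meta-gradient are closed-form."""
    w = 0.0
    for _ in range(meta_iters):
        meta_grad = 0.0
        for a, c in tasks:
            w_adapted = w - alpha * 2 * c * (w - a)     # inner gradient step
            # d/dw of L(w_adapted): the chain rule supplies the second-order
            # factor (1 - 2*alpha*c) that distinguishes MAML from pretraining.
            meta_grad += 2 * c * (w_adapted - a) * (1 - 2 * alpha * c)
        w -= beta * meta_grad / len(tasks)              # outer meta-update
    return w

tasks = [(a, 1.0) for a in np.linspace(-2, 2, 9)]       # (target, curvature)
print(maml_1d(tasks))   # an initialization from which one step adapts well
```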
Sample Efficient Feature Selection for Factored MDPs | cs.LG stat.ML | In reinforcement learning, the state of the real world is often represented
by feature vectors. However, not all of the features may be pertinent for
solving the current task. We propose Feature Selection Explore and Exploit
(FS-EE), an algorithm that automatically selects the necessary features while
learning a Factored Markov Decision Process, and prove that under mild
assumptions, its sample complexity scales with the in-degree of the dynamics of
just the necessary features, rather than the in-degree of all features. This
can result in a much better sample complexity when the in-degree of the
necessary features is smaller than the in-degree of all features.
| Zhaohan Daniel Guo, Emma Brunskill | null | 1703.03454 | null | null |
Deep Radial Kernel Networks: Approximating Radially Symmetric Functions
with Deep Networks | cs.LG | We prove that a particular deep network architecture is more efficient at
approximating radially symmetric functions than the best known 2- or 3-layer
networks. We use this architecture to approximate Gaussian kernel SVMs, and
subsequently improve upon them with further training. The architecture and
initial weights of the Deep Radial Kernel Network are completely specified by
the SVM and therefore sidesteps the problem of empirically choosing an
appropriate deep network architecture.
| Brendan McCane and Lech Szymanski | null | 1703.0347 | null | null |
Online Learning with Abstention | cs.LG | We present an extensive study of the key problem of online learning where
algorithms are allowed to abstain from making predictions. In the adversarial
setting, we show how existing online algorithms and guarantees can be adapted
to this problem. In the stochastic setting, we first point out a bias problem
that limits the straightforward extension of algorithms such as UCB-N to
time-varying feedback graphs, as needed in this context. Next, we give a new
algorithm, UCB-GT, that exploits historical data and is adapted to time-varying
feedback graphs. We show that this algorithm benefits from more favorable
regret guarantees than a possible, but limited, extension of UCB-N. We further
report the results of a series of experiments demonstrating that UCB-GT largely
outperforms that extension of UCB-N, as well as more standard baselines.
| Corinna Cortes, Giulia DeSalvo, Claudio Gentile, Mehryar Mohri, Scott
Yang | null | 1703.03478 | null | null |
Learning Gradient Descent: Better Generalization and Longer Horizons | cs.LG cs.AI | Training deep neural networks is a highly nontrivial task, involving
carefully selecting appropriate training algorithms, scheduling step sizes and
tuning other hyperparameters. Trying different combinations can be quite
labor-intensive and time-consuming. Recently, researchers have tried to use
deep learning algorithms to exploit the landscape of the loss function of the
training problem of interest, and learn how to optimize over it in an automatic
way. In this paper, we propose a new learning-to-learn model and some useful
and practical tricks. Our optimizer outperforms generic, hand-crafted
optimization algorithms and state-of-the-art learning-to-learn optimizers by
DeepMind in many tasks. We demonstrate the effectiveness of our algorithms on a
number of tasks, including deep MLPs, CNNs, and simple LSTMs.
| Kaifeng Lv, Shunhua Jiang, Jian Li | null | 1703.03633 | null | null |
Right for the Right Reasons: Training Differentiable Models by
Constraining their Explanations | cs.LG cs.AI stat.ML | Neural networks are among the most accurate supervised learning methods in
use today, but their opacity makes them difficult to trust in critical
applications, especially when conditions in training differ from those in test.
Recent work on explanations for black-box models has produced tools (e.g. LIME)
to show the implicit rules behind predictions, which can help us identify when
models are right for the wrong reasons. However, these methods do not scale to
explaining entire datasets and cannot correct the problems they reveal. We
introduce a method for efficiently explaining and regularizing differentiable
models by examining and selectively penalizing their input gradients, which
provide a normal to the decision boundary. We apply these penalties both based
on expert annotation and in an unsupervised fashion that encourages diverse
models with qualitatively different decision boundaries for the same
classification problem. On multiple datasets, we show our approach generates
faithful explanations and models that generalize much better when conditions
differ between training and test.
| Andrew Slavin Ross, Michael C. Hughes, Finale Doshi-Velez | null | 1703.03717 | null | null |
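
A minimal PyTorch sketch of the input-gradient penalty described above,
under our own naming: `mask` plays the role of the expert annotation, with
ones on input regions the model should not rely on; the paper's exact
scaling and normalization may differ.

```python
import torch
import torch.nn.functional as F

def rrr_loss(model, x, y, mask, lam=1.0):
    """Cross-entropy plus a penalty on squared input gradients of the
    summed log-probabilities, masked to annotator-marked regions."""
    x = x.clone().requires_grad_(True)
    log_probs = F.log_softmax(model(x), dim=1)
    ce = F.nll_loss(log_probs, y)
    # create_graph=True keeps the penalty differentiable for training.
    grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
    return ce + lam * ((mask * grads) ** 2).sum()
```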
Markov Chain Lifting and Distributed ADMM | stat.ML cs.DS cs.IT cs.LG math.IT math.OC | The time to converge to the steady state of a finite Markov chain can be
greatly reduced by a lifting operation, which creates a new Markov chain on an
expanded state space. For a class of quadratic objectives, we show an analogous
behavior where a distributed ADMM algorithm can be seen as a lifting of
the Gradient Descent algorithm. This provides deep insight into its faster
convergence rate under optimal parameter tuning. We conjecture that this gain
is always present, as opposed to the lifting of a Markov chain which sometimes
only provides a marginal speedup.
| Guilherme Fran\c{c}a, Jos\'e Bento | 10.1109/LSP.2017.2654860 | 1703.03859 | null | null |
Joint Embedding of Graphs | stat.AP cs.LG stat.ML | Feature extraction and dimension reduction for networks is critical in a wide
variety of domains. Efficiently and accurately learning features for multiple
graphs has important applications in statistical inference on graphs. We
propose a method to jointly embed multiple undirected graphs. Given a set of
graphs, the joint embedding method identifies a linear subspace spanned by rank
one symmetric matrices and projects adjacency matrices of graphs into this
subspace. The projection coefficients can be treated as features of the graphs,
while the embedding components can represent vertex features. We also propose a
random graph model for multiple graphs that generalizes other classical models
for graphs. We show through theory and numerical experiments that under the
model, the joint embedding method produces estimates of parameters with small
errors. Via simulation experiments, we demonstrate that the joint embedding
method produces features that lead to state-of-the-art performance in
classifying graphs. Applying the joint embedding method to human brain graphs,
we find it extracts interpretable features with good prediction accuracy in
different tasks.
| Shangsi Wang, Jes\'us Arroyo, Joshua T. Vogelstein, Carey E. Priebe | 10.1109/TPAMI.2019.2948619 | 1703.03862 | null | null |
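
The model above, A_i ≈ Σ_k λ_ik h_k h_kᵀ, can be prototyped with a simple
greedy alternating scheme; this sketch is our own illustration of the
decomposition, not the paper's estimation procedure.

```python
import numpy as np

def joint_embed(adjs, d, iters=30, seed=0):
    """Greedy rank-one fit of A_i ~ sum_k lam[i, k] * h_k h_k^T by
    alternating least squares with deflation (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, m = adjs[0].shape[0], len(adjs)
    H, lam = np.zeros((n, d)), np.zeros((m, d))
    resid = [a.astype(float).copy() for a in adjs]
    for k in range(d):
        lk = np.ones(m)
        for _ in range(iters):
            M = sum(l * r for l, r in zip(lk, resid))
            h = np.linalg.eigh(M)[1][:, -1]            # best unit h for fixed lk
            lk = np.array([h @ r @ h for r in resid])  # least-squares lk for fixed h
        H[:, k], lam[:, k] = h, lk
        for i in range(m):
            resid[i] -= lk[i] * np.outer(h, h)         # deflate and move on
    return lam, H  # lam: graph-level features, H: vertex-level features
```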
Evolution Strategies as a Scalable Alternative to Reinforcement Learning | stat.ML cs.AI cs.LG cs.NE | We explore the use of Evolution Strategies (ES), a class of black box
optimization algorithms, as an alternative to popular MDP-based RL techniques
such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show
that ES is a viable solution strategy that scales extremely well with the
number of CPUs available: By using a novel communication strategy based on
common random numbers, our ES implementation only needs to communicate scalars,
making it possible to scale to over a thousand parallel workers. This allows us
to solve 3D humanoid walking in 10 minutes and obtain competitive results on
most Atari games after one hour of training. In addition, we highlight several
advantages of ES as a black box optimization technique: it is invariant to
action frequency and delayed rewards, tolerant of extremely long horizons, and
does not need temporal discounting or value function approximation.
| Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever | null | 1703.03864 | null | null |
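
The core ES update is compact. The following single-worker sketch is our
own, omitting the paper's shared-random-number communication, antithetic
sampling, and rank-based fitness shaping (simple z-scoring is used
instead); `f` is treated as an episodic return over parameter vectors.

```python
import numpy as np

def evolution_strategies(f, theta, sigma=0.1, alpha=0.01, npop=50, iters=200):
    """Maximize a black-box return f by ascending a Gaussian-smoothed
    gradient estimate built from perturbed parameter vectors."""
    for _ in range(iters):
        eps = np.random.randn(npop, theta.size)
        rewards = np.array([f(theta + sigma * e) for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta = theta + alpha / (npop * sigma) * eps.T @ rewards
    return theta
```

For example, `evolution_strategies(lambda w: -np.sum((w - 3) ** 2),
np.zeros(5))` climbs toward the optimum w = 3 without ever computing a
gradient of f.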
Deep Learning in Customer Churn Prediction: Unsupervised Feature
Learning on Abstract Company Independent Feature Vectors | cs.LG stat.ML | As companies increase their efforts in retaining customers, being able to
predict accurately, ahead of time, whether a customer will churn in the
foreseeable future is an extremely powerful tool for any marketing team. The
paper describes in depth the application of Deep Learning to the problem of
churn prediction. Using abstract feature vectors that can be generated from
any subscription-based company's user event logs, the paper demonstrates
that, through the intrinsic property of Deep Neural Networks of learning
secondary features in an unsupervised manner, the complete pipeline can be
applied to any subscription-based company with extremely good churn
predictive performance. Furthermore, the research documented in the paper was
performed for Framed Data (a company that sells churn prediction as a service
for other companies) in conjunction with the Data Science Institute at
Lancaster University, UK. This paper is the intellectual property of Framed
Data.
| Philip Spanoudes, Thomson Nguyen | null | 1703.03869 | null | null |
Real-Time Machine Learning: The Missing Pieces | cs.DC cs.AI cs.LG | Machine learning applications are increasingly deployed not only to serve
predictions using static models, but also as tightly-integrated components of
feedback loops involving dynamic, real-time decision making. These applications
pose a new set of requirements, none of which are difficult to achieve in
isolation, but the combination of which creates a challenge for existing
distributed execution frameworks: computation with millisecond latency at high
throughput, adaptive construction of arbitrary task graphs, and execution of
heterogeneous kernels over diverse sets of resources. We assert that a new
distributed execution framework is needed for such ML applications and propose
a candidate approach with a proof-of-concept architecture that achieves a 63x
performance improvement over a state-of-the-art execution framework for a
representative application.
| Robert Nishihara, Philipp Moritz, Stephanie Wang, Alexey Tumanov,
William Paul, Johann Schleier-Smith, Richard Liaw, Mehrdad Niknami, Michael
I. Jordan, Ion Stoica | null | 1703.03924 | null | null |
Recruiting from the network: discovering Twitter users who can help
combat Zika epidemics | cs.LG cs.SI | Tropical diseases like \textit{Chikungunya} and \textit{Zika} have come to
prominence in recent years as the cause of serious, long-lasting,
population-wide health problems. In large countries like Brazil, traditional
disease prevention programs led by health authorities have not been
particularly effective. We explore the hypothesis that monitoring and analysis
of social media content streams may effectively complement such efforts.
Specifically, we aim to identify selected members of the public who are likely
to be sensitive to virus combat initiatives that are organised in local
communities. Focusing on Twitter and on the topic of Zika, our approach
involves (i) training a classifier to select topic-relevant tweets from the
Twitter feed, and (ii) discovering the top users who are actively posting
relevant content about the topic. We may then recommend these users as the
prime candidates for direct engagement within their community. In this short
paper we describe our analytical approach and prototype architecture, discuss
the challenges of dealing with noisy and sparse signal, and present encouraging
preliminary results.
| Paolo Missier and Callum McClean and Jonathan Carlton and Diego Cedrim
and Leonardo Silva and Alessandro Garcia and Alexandre Plastino and Alexander
Romanovsky | null | 1703.03928 | null | null |
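
The two-stage recipe in the abstract (train a relevance classifier, then
rank users by relevant activity) reduces to a short pipeline; the classifier
choice and all names below are our own assumptions, not the paper's
prototype.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def top_relevant_users(train_texts, train_labels, stream, k=10):
    """Step (i): fit a topic-relevance classifier on labeled tweets.
    Step (ii): rank users by volume of tweets predicted relevant."""
    vec = TfidfVectorizer(min_df=2)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(train_texts), train_labels)
    counts = Counter()
    for user, text in stream:  # stream yields (user_id, tweet_text) pairs
        if clf.predict(vec.transform([text]))[0] == 1:
            counts[user] += 1
    return counts.most_common(k)  # candidates for direct engagement
```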
Ask Me Even More: Dynamic Memory Tensor Networks (Extended Model) | cs.CL cs.LG cs.NE | We examine Memory Networks for the task of question answering (QA) under a
common real-world scenario where training examples are scarce, and under a
weakly supervised scenario, that is, where only extrinsic labels are
available for training. We propose extensions to the Dynamic Memory Network
(DMN), specifically within the attention mechanism, and call the resulting
neural architecture the Dynamic Memory Tensor Network (DMTN). Ultimately, we
see that our proposed extensions result in over 80% improvement in the number
of tasks passed against the baseline standard DMN, and 20% more tasks passed
compared to the state-of-the-art End-to-End Memory Network, on Facebook's
weakly trained single-task 1K bAbI dataset.
| Govardana Sachithanandam Ramachandran, Ajay Sohmshetty | null | 1703.03939 | null | null |
Learning Large-Scale Bayesian Networks with the sparsebn Package | stat.ML cs.LG stat.CO stat.ME | Learning graphical models from data is an important problem with wide
applications, ranging from genomics to the social sciences. Nowadays datasets
often have upwards of thousands---sometimes tens or hundreds of thousands---of
variables and far fewer samples. To meet this challenge, we have developed a
new R package called sparsebn for learning the structure of large, sparse
graphical models with a focus on Bayesian networks. While there are many
existing software packages for this task, this package focuses on the unique
setting of learning large networks from high-dimensional data, possibly with
interventions. As such, the methods provided place a premium on scalability and
consistency in a high-dimensional setting. Furthermore, in the presence of
interventions, the methods implemented here achieve the goal of learning a
causal network from data. Additionally, the sparsebn package is fully
compatible with existing software packages for network analysis.
| Bryon Aragam, Jiaying Gu, Qing Zhou | 10.18637/jss.v091.i11 | 1703.04025 | null | null |
Prediction and Control with Temporal Segment Models | cs.LG cs.AI cs.RO stat.ML | We introduce a method for learning the dynamics of complex nonlinear systems
based on deep generative models over temporal segments of states and actions.
Unlike dynamics models that operate over individual discrete timesteps, we
learn the distribution over future state trajectories conditioned on past
state, past action, and planned future action trajectories, as well as a latent
prior over action trajectories. Our approach is based on convolutional
autoregressive models and variational autoencoders. It makes stable and
accurate predictions over long horizons for complex, stochastic systems,
effectively expressing uncertainty and modeling the effects of collisions,
sensory noise, and action delays. The learned dynamics model and action prior
can be used for end-to-end, fully differentiable trajectory optimization and
model-based policy optimization, which we use to evaluate the performance and
sample-efficiency of our method.
 | Nikhil Mishra, Pieter Abbeel, Igor Mordatch | null | 1703.04070 | null | null |
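
As a toy illustration of learning a distribution over future segments
conditioned on the past, here is a plain conditional VAE over a flattened
segment. This is our own stand-in: the paper uses convolutional
autoregressive models and a latent action prior, and all names here are
assumptions.

```python
import torch
import torch.nn as nn

class SegmentVAE(nn.Module):
    """Conditional VAE over a flattened future segment of states and
    actions, conditioned on a summary vector of the past."""
    def __init__(self, seg_dim, ctx_dim, z_dim=16, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(seg_dim + ctx_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + ctx_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, seg_dim))

    def forward(self, seg, ctx):
        h = self.enc(torch.cat([seg, ctx], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        recon = self.dec(torch.cat([z, ctx], dim=-1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl  # train on MSE(recon, seg) + beta * kl
```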