title | categories | abstract | authors | doi | id | year | venue |
---|---|---|---|---|---|---|---|
Convolutional Neural Networks for Page Segmentation of Historical
Document Images | cs.CV cs.LG stat.ML | This paper presents a Convolutional Neural Network (CNN) based page
segmentation method for handwritten historical document images. We consider
page segmentation as a pixel labeling problem, i.e., each pixel is classified
as one of the predefined classes. Traditional methods in this area rely on
carefully hand-crafted features or large amounts of prior knowledge. In
contrast, we propose to learn features from raw image pixels using a CNN. While
many researchers focus on developing deep CNN architectures to solve different
problems, we train a simple CNN with only one convolution layer. We show that
the simple architecture achieves competitive results against other deep
architectures on different public datasets. Experiments also demonstrate the
effectiveness and superiority of the proposed method compared to previous
methods.
| Kai Chen and Mathias Seuret | null | 1704.01474 | null | null |
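For illustration, here is a minimal sketch of the pixel-labeling setup described above with a single convolution layer; the channel count, kernel size, and number of classes are illustrative assumptions, not the paper's configuration.

```python
# Hypothetical one-convolution-layer pixel classifier (PyTorch); layer sizes
# and the 4-class output are illustrative assumptions.
import torch
import torch.nn as nn

class OneConvSegmenter(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        # The single convolution layer learns features from raw pixels.
        self.conv = nn.Conv2d(1, 32, kernel_size=5, padding=2)
        # A 1x1 convolution maps each pixel's features to class scores.
        self.classify = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):               # x: (batch, 1, H, W) grayscale page
        return self.classify(torch.relu(self.conv(x)))

model = OneConvSegmenter()
logits = model(torch.randn(1, 1, 64, 64))   # (1, 4, 64, 64) per-pixel logits
labels = logits.argmax(dim=1)               # predicted class for every pixel
```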
Comment on "Biologically inspired protection of deep networks from
adversarial attacks" | stat.ML cs.LG q-bio.NC | A recent paper suggests that Deep Neural Networks can be protected from
gradient-based adversarial perturbations by driving the network activations
into a highly saturated regime. Here we analyse such saturated networks and
show that the attacks fail due to numerical limitations in the gradient
computations. A simple stabilisation of the gradient estimates enables
successful and efficient attacks. Thus, it has yet to be shown that the
robustness observed in highly saturated networks is not simply due to numerical
limitations.
| Wieland Brendel, Matthias Bethge | null | 1704.01547 | null | null |
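To make the numerical point concrete, here is a toy sketch (not the authors' exact procedure) showing how the textbook sigmoid-gradient formula collapses to exactly zero in a saturated regime, while an algebraically equivalent form survives:

```python
import numpy as np

z = np.float32(50.0)                 # strongly saturated pre-activation
sig = np.float32(1.0) / (np.float32(1.0) + np.exp(-z))

grad_naive = sig * (np.float32(1.0) - sig)         # 0.0: (1 - sig) rounds to zero
grad_stable = sig / (np.float32(1.0) + np.exp(z))  # sigma(z)*sigma(-z) ~ 1.9e-22
print(grad_naive, grad_stable)
```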
Deep Learning and Quantum Entanglement: Fundamental Connections with
Implications to Network Design | cs.LG cs.NE quant-ph | Deep convolutional networks have witnessed unprecedented success in various
machine learning applications. Formal understanding of what makes these
networks so successful is gradually unfolding, but for the most part there are
still significant mysteries to unravel. The inductive bias, which reflects
prior knowledge embedded in the network architecture, is one of them. In this
work, we establish a fundamental connection between the fields of quantum
physics and deep learning. We use this connection for asserting novel
theoretical observations regarding the role that the number of channels in each
layer of the convolutional network fulfills in the overall inductive bias.
Specifically, we show an equivalence between the function realized by a deep
convolutional arithmetic circuit (ConvAC) and a quantum many-body wave
function, which relies on their common underlying tensorial structure. This
facilitates the use of quantum entanglement measures as well-defined
quantifiers of a deep network's expressive ability to model intricate
correlation structures of its inputs. Most importantly, the construction of a
deep ConvAC in terms of a Tensor Network is made available. This description
enables us to carry out a graph-theoretic analysis of a convolutional network, with which we demonstrate a direct control over the inductive bias of the deep network via its channel numbers, which are related to the min-cut in the
underlying graph. This result is relevant to any practitioner designing a
network for a specific task. We theoretically analyze ConvACs, and empirically
validate our findings on more common ConvNets which involve ReLU activations
and max pooling. Beyond the results described above, the description of a deep
convolutional network in well-defined graph-theoretic tools and the formal
connection to quantum entanglement, are two interdisciplinary bridges that are
brought forth by this work.
| Yoav Levine, David Yakira, Nadav Cohen and Amnon Shashua | null | 1704.01552 | null | null |
Bag-of-Words Method Applied to Accelerometer Measurements for the
Purpose of Classification and Energy Estimation | cs.LG stat.ML | Accelerometer measurements are the prime type of sensor information most people think of when seeking to measure physical activity. On the market, there are
many fitness measuring devices which aim to track calories burned and steps
counted through the use of accelerometers. These measurements, though good
enough for the average consumer, are noisy and unreliable in terms of the
precision of measurement needed in a scientific setting. The contribution of
this paper is an innovative and highly accurate regression method which uses an
intermediary two-stage classification step to better direct the regression of
energy expenditure values from accelerometer counts.
We show that through an additional unsupervised layer of intermediate feature
construction, we can leverage latent patterns within accelerometer counts to
provide better grounds for activity classification than expert-constructed
timeseries features. For this, our approach utilizes a mathematical model
originating in natural language processing, the bag-of-words model, that has in
the past years been appearing in diverse disciplines outside of the natural
language processing field such as image processing. Further emphasizing the
natural language connection to stochastics, we use a Gaussian mixture model to
learn the dictionary upon which the bag-of-words model is built. Moreover, we
show that with the addition of these features, we are able to improve the regression root-mean-squared error of energy expenditure by approximately 1.4 units over
existing state-of-the-art methods.
| Kevin M. Amaral, Ping Chen, Scott Crouter, Wei Ding | null | 1704.01574 | null | null |
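A minimal sketch of the described pipeline, assuming windowed accelerometer counts as input; the window length, number of mixture components, and data here are illustrative stand-ins:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
windows = rng.random((500, 10))      # 500 windows of 10 accelerometer counts each

# Learn the dictionary: each Gaussian component acts as one "word".
gmm = GaussianMixture(n_components=8, random_state=0).fit(windows)

def bag_of_words(win_seq):
    """Histogram of word assignments over the windows of one recording."""
    words = gmm.predict(win_seq)
    return np.bincount(words, minlength=gmm.n_components) / len(words)

features = bag_of_words(windows[:50])   # unsupervised features for one recording
```

These histogram features would then feed the two-stage classification and the final energy-expenditure regression.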
Nonnegative/binary matrix factorization with a D-Wave quantum annealer | cs.LG quant-ph stat.ML | D-Wave quantum annealers represent a novel computational architecture and
have attracted significant interest, but have been used for few real-world
computations. Machine learning has been identified as an area where quantum
annealing may be useful. Here, we show that the D-Wave 2X can be effectively
used as part of an unsupervised machine learning method. This method can be used to analyze large datasets; the D-Wave limits only the number of features that can be extracted from the dataset. We apply this method to learn the
features from a set of facial images.
| Daniel O'Malley, Velimir V. Vesselinov, Boian S. Alexandrov, Ludmil B.
Alexandrov | 10.1371/journal.pone.0206653 | 1704.01605 | null | null |
Greed is Good: Near-Optimal Submodular Maximization via Greedy
Optimization | cs.LG cs.DS | It is known that greedy methods perform well for maximizing monotone
submodular functions. At the same time, such methods perform poorly in the face
of non-monotonicity. In this paper, we show - arguably, surprisingly - that
invoking the classical greedy algorithm $O(\sqrt{k})$-times leads to the
(currently) fastest deterministic algorithm, called Repeated Greedy, for
maximizing a general submodular function subject to $k$-independent system
constraints. Repeated Greedy achieves $(1 + O(1/\sqrt{k}))k$ approximation
using $O(nr\sqrt{k})$ function evaluations (here, $n$ and $r$ denote the size
of the ground set and the maximum size of a feasible solution, respectively).
We then show that by a careful sampling procedure, we can run the greedy
algorithm only once and obtain the (currently) fastest randomized algorithm,
called Sample Greedy, for maximizing a submodular function subject to
$k$-extendible system constraints (a subclass of $k$-independent system
constraints). Sample Greedy achieves a $(k + 3)$-approximation with only $O(nr/k)$
function evaluations. Finally, we derive an almost matching lower bound, and
show that no polynomial time algorithm can have an approximation ratio smaller
than $ k + 1/2 - \varepsilon$. To further support our theoretical results, we
compare the performance of Repeated Greedy and Sample Greedy with prior art in
a concrete application (movie recommendation). We consistently observe that
while Sample Greedy achieves practically the same utility as the best baseline,
it performs at least two orders of magnitude faster.
| Moran Feldman, Christopher Harshaw, Amin Karbasi | null | 1704.01652 | null | null |
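A simplified sketch of the repeated-greedy idea, specialized to a plain cardinality constraint for readability; the paper's algorithm handles general $k$-independent systems, which this toy omits:

```python
# Hypothetical repeated-greedy sketch: run classical greedy ~sqrt(k) times,
# removing each solution from the ground set, and keep the best one found.
import math

def greedy(f, ground, r):
    """Classical greedy: repeatedly add the element with largest marginal gain."""
    S = set()
    while len(S) < r:
        candidates = ground - S
        if not candidates:
            break
        gains = {e: f(S | {e}) - f(S) for e in candidates}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

def repeated_greedy(f, ground, r, k):
    best, remaining = set(), set(ground)
    for _ in range(max(1, math.isqrt(k))):
        S = greedy(f, remaining, r)
        if f(S) > f(best):
            best = S
        remaining -= S
        if not remaining:
            break
    return best

# Toy usage: a coverage-style submodular function over subsets of {0..9}.
cover = {e: {e, (e + 1) % 10} for e in range(10)}
f = lambda S: len(set().union(*[cover[e] for e in S])) if S else 0
print(repeated_greedy(f, set(range(10)), r=3, k=4))
```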
The Relative Performance of Ensemble Methods with Deep Convolutional
Neural Networks for Image Classification | stat.ML cs.CV cs.LG stat.ME | Artificial neural networks have been successfully applied to a variety of
machine learning tasks, including image recognition, semantic segmentation, and
machine translation. However, few studies have fully investigated ensembles of
artificial neural networks. In this work, we investigated multiple widely used
ensemble methods, including unweighted averaging, majority voting, the Bayes
Optimal Classifier, and the (discrete) Super Learner, for image recognition
tasks, with deep neural networks as candidate algorithms. We designed several
experiments, with the candidate algorithms being the same network structure
with different model checkpoints within a single training process, networks
with the same structure but trained multiple times stochastically, and networks
with different structure. In addition, we further studied the over-confidence
phenomenon of the neural networks, as well as its impact on the ensemble
methods. Across all of our experiments, the Super Learner achieved the best
performance among all the ensemble methods in this study.
| Cheng Ju and Aur\'elien Bibaut and Mark J. van der Laan | null | 1704.01664 | null | null |
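For concreteness, a toy sketch of two of the simpler ensemble rules compared above (unweighted averaging and majority voting), applied to stand-in class-probability outputs; the Bayes Optimal Classifier and the Super Learner are omitted:

```python
import numpy as np

# Stand-in predictions: 3 networks, 5 images, 10 classes.
probs = np.random.default_rng(0).dirichlet(np.ones(10), size=(3, 5))

avg_pred = probs.mean(axis=0).argmax(axis=1)          # unweighted averaging
votes = probs.argmax(axis=2)                          # each net's hard label
maj_pred = np.array([np.bincount(v, minlength=10).argmax() for v in votes.T])
print(avg_pred, maj_pred)
```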
Learning Combinatorial Optimization Algorithms over Graphs | cs.LG stat.ML | The design of good heuristics or approximation algorithms for NP-hard
combinatorial optimization problems often requires significant specialized
knowledge and trial-and-error. Can we automate this challenging, tedious
process, and learn the algorithms instead? In many real-world applications, it
is typically the case that the same optimization problem is solved again and
again on a regular basis, maintaining the same problem structure but differing
in the data. This provides an opportunity for learning heuristic algorithms
that exploit the structure of such recurring problems. In this paper, we
propose a unique combination of reinforcement learning and graph embedding to
address this challenge. The learned greedy policy behaves like a meta-algorithm
that incrementally constructs a solution, and the action is determined by the
output of a graph embedding network capturing the current state of the
solution. We show that our framework can be applied to a diverse range of
optimization problems over graphs, and learns effective algorithms for the
Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
| Hanjun Dai, Elias B. Khalil, Yuyu Zhang, Bistra Dilkina, Le Song | null | 1704.01665 | null | null |
Multi-space Variational Encoder-Decoders for Semi-supervised Labeled
Sequence Transduction | cs.CL cs.LG | Labeled sequence transduction is the task of transforming one sequence into
another sequence that satisfies desiderata specified by a set of labels. In
this paper we propose multi-space variational encoder-decoders, a new model for
labeled sequence transduction with semi-supervised learning. The generative
model can use neural networks to handle both discrete and continuous latent
variables to exploit various features of data. Experiments show that our model
provides not only a powerful supervised framework but also can effectively take
advantage of the unlabeled data. On the SIGMORPHON morphological inflection
benchmark, our model outperforms single-model state-of-the-art results by a large
margin for the majority of languages.
| Chunting Zhou and Graham Neubig | null | 1704.01691 | null | null |
Accelerated Stochastic Quasi-Newton Optimization on Riemann Manifolds | math.OC cs.LG math.DG stat.ML | We propose an L-BFGS optimization algorithm on Riemannian manifolds using
minibatched stochastic variance reduction techniques for fast convergence with
constant step sizes, without resorting to linesearch methods designed to
satisfy Wolfe conditions. We provide a new convergence proof for strongly
convex functions without using curvature conditions on the manifold, as well as
a convergence discussion for nonconvex functions. We discuss a couple of ways
to obtain the correction pairs used to calculate the product of the gradient
with the inverse Hessian, and empirically demonstrate their use in synthetic
experiments on computation of Karcher means for symmetric positive definite
matrices and leading eigenvalues of large scale data matrices. We compare our
method to VR-PCA for the latter experiment, along with Riemannian SVRG for both
cases, and show strong convergence results for a range of datasets.
| Anirban Roychowdhury | null | 1704.01700 | null | null |
Learning Certifiably Optimal Rule Lists for Categorical Data | stat.ML cs.LG | We present the design and implementation of a custom discrete optimization
technique for building rule lists over a categorical feature space. Our
algorithm produces rule lists with optimal training performance, according to
the regularized empirical risk, with a certificate of optimality. By leveraging
algorithmic bounds, efficient data structures, and computational reuse, we
achieve several orders of magnitude speedup in time and a massive reduction of
memory consumption. We demonstrate that our approach produces optimal rule
lists on practical problems in seconds. Our results indicate that it is
possible to construct optimal sparse rule lists that are approximately as
accurate as the COMPAS proprietary risk prediction tool on data from Broward
County, Florida, but that are completely interpretable. This framework is a
novel alternative to CART and other decision tree methods for interpretable
modeling.
| Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer,
Cynthia Rudin | null | 1704.01701 | null | null |
Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks | cs.CR cs.LG stat.ML | Despite the wide use of machine learning in adversarial settings including
computer security, recent studies have demonstrated vulnerabilities to evasion
attacks---carefully crafted adversarial samples that closely resemble
legitimate instances, but cause misclassification. In this paper, we examine
the adequacy of the leading approach to generating adversarial samples---the
gradient descent approach. In particular (1) we perform extensive experiments
on three datasets, MNIST, USPS and Spambase, in order to analyse the
effectiveness of the gradient-descent method against non-linear support vector
machines, and conclude that carefully reduced kernel smoothness can
significantly increase robustness to the attack; (2) we demonstrate that
separated inter-class support vectors lead to more secure models, and propose a
quantity similar to margin that can efficiently predict potential
susceptibility to gradient-descent attacks, before the attack is launched; and
(3) we design a new adversarial sample construction algorithm based on
optimising the multiplicative ratio of class decision functions.
| Yi Han, Benjamin I. P. Rubinstein | null | 1704.01704 | null | null |
Enabling Smart Data: Noise filtering in Big Data classification | cs.DB cs.LG | In any knowledge discovery process the value of extracted knowledge is
directly related to the quality of the data used. Big Data problems, generated
by massive growth in the scale of data observed in recent years, also follow
the same dictate. A common problem affecting data quality is the presence of
noise, particularly in classification problems, where label noise refers to the
incorrect labeling of training instances, and is known to be a very disruptive
feature of data. However, in this Big Data era, the massive growth in the scale
of the data poses a challenge to traditional proposals created to tackle noise,
as they have difficulties coping with such a large amount of data. New
algorithms need to be proposed to treat the noise in Big Data problems,
providing high quality and clean data, also known as Smart Data. In this paper,
two Big Data preprocessing approaches to remove noisy examples are proposed: a homogeneous and a heterogeneous ensemble filter, with special emphasis on their scalability and performance traits. The obtained results show
that these proposals enable the practitioner to efficiently obtain a Smart
Dataset from any Big Data classification problem.
| Diego Garc\'ia-Gil, Juli\'an Luengo, Salvador Garc\'ia and Francisco
Herrera | null | 1704.01770 | null | null |
Incremental Transductive Learning Approaches to Schistosomiasis Vector
Classification | cs.LG | The key issue pertaining to the collection of epidemic disease data for our analysis purposes is that it is a labour-intensive, time-consuming and expensive process, resulting in the availability of sparse sample data, which we use to develop prediction models. To address this sparse-data issue, we present
novel Incremental Transductive methods to circumvent the data collection
process by applying previously acquired data to provide consistent,
confidence-based labelling alternatives to field survey research. We
investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models for labelling data. The results show that using the
proposed methods, we can label instances of data with a class of vector density
at a high level of confidence. By applying the Liberal and Strict Training
Approaches, we provide a labelling and classification alternative to standalone
algorithms. The methods in this paper are components in the process of reducing
the proliferation of the Schistosomiasis disease and its effects.
| Terence Fusco, Yaxin Bi, Haiying Wang, Fiona Browne | null | 1704.01815 | null | null |
An Online Hierarchical Algorithm for Extreme Clustering | cs.LG stat.ML | Many modern clustering methods scale well to a large number of data items, N,
but not to a large number of clusters, K. This paper introduces PERCH, a new
non-greedy algorithm for online hierarchical clustering that scales to both
massive N and K--a problem setting we term extreme clustering. Our algorithm
efficiently routes new data points to the leaves of an incrementally-built
tree. Motivated by the desire for both accuracy and speed, our approach
performs tree rotations for the sake of enhancing subtree purity and
encouraging balancedness. We prove that, under a natural separability
assumption, our non-greedy algorithm will produce trees with perfect dendrogram
purity regardless of online data arrival order. Our experiments demonstrate
that PERCH constructs more accurate trees than other tree-building clustering
algorithms and scales well with both N and K, achieving a higher quality
clustering than the strongest flat clustering competitor in nearly half the
time.
| Ari Kobren, Nicholas Monath, Akshay Krishnamurthy, Andrew McCallum | null | 1704.01858 | null | null |
On the Statistical Efficiency of Compositional Nonparametric Prediction | stat.ML cs.LG | In this paper, we propose a compositional nonparametric method in which a
model is expressed as a labeled binary tree of $2k+1$ nodes, where each node is
either a summation, a multiplication, or the application of one of the $q$
basis functions to one of the $p$ covariates. We show that in order to recover
a labeled binary tree from a given dataset, the sufficient number of samples is
$O(k\log(pq)+\log(k!))$, and the necessary number of samples is $\Omega(k\log(pq)-\log(k!))$. We further propose a greedy algorithm for regression in order
to validate our theoretical findings through synthetic experiments.
| Yixi Xu, Jean Honorio, Xiao Wang | null | 1704.01896 | null | null |
Online Hashing | cs.CV cs.LG | Although hash function learning algorithms have achieved great success in
recent years, most existing hash models are off-line, which are not suitable
for processing sequential or online data. To address this problem, this work
proposes an online hash model to accommodate data coming in stream for online
learning. Specifically, a new loss function is proposed to measure the
similarity loss between a pair of data samples in Hamming space. Then, a
structured hash model is derived and optimized in a passive-aggressive way.
Theoretical analysis on the upper bound of the cumulative loss for the proposed
online hash model is provided. Furthermore, we extend our online hashing from a single-model to a multi-model online hashing that trains multiple models in order to retain diverse hashing models and avoid biased updates. The
competitive efficiency and effectiveness of the proposed online hash models are
verified through extensive experiments on several large-scale datasets as
compared to related hashing methods.
| Long-Kai Huang, Qiang Yang, Wei-Shi Zheng | 10.1109/TNNLS.2017.2689242 | 1704.01897 | null | null |
Recognizing Multi-talker Speech with Permutation Invariant Training | cs.SD cs.LG eess.AS | In this paper, we propose a novel technique for direct recognition of
multiple speech streams given the single channel of mixed speech, without first
separating them. Our technique is based on permutation invariant training (PIT)
for automatic speech recognition (ASR). In PIT-ASR, we compute the average
cross entropy (CE) over all frames in the whole utterance for each possible
output-target assignment, pick the one with the minimum CE, and optimize for
that assignment. PIT-ASR forces all the frames of the same speaker to be
aligned with the same output layer. This strategy elegantly solves the label
permutation problem and speaker tracing problem in one shot. Our experiments on
artificially mixed AMI data showed that the proposed approach is very
promising.
| Dong Yu, Xuankai Chang, Yanmin Qian | null | 1704.01985 | null | null |
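A minimal sketch of the PIT criterion for two speakers, with random tensors standing in for the network outputs and senone targets; the sizes are illustrative assumptions:

```python
from itertools import permutations
import torch
import torch.nn.functional as F

def pit_loss(outputs, targets):
    """outputs: list of (T, C) logit streams; targets: list of (T,) label streams.
    Compute the average CE over the whole utterance for every output-target
    assignment and return the minimum, the quantity optimized in PIT-ASR."""
    losses = []
    for perm in permutations(range(len(targets))):
        ce = sum(F.cross_entropy(outputs[i], targets[j])
                 for i, j in enumerate(perm)) / len(perm)
        losses.append(ce)
    return torch.stack(losses).min()

T, C = 100, 40                      # frames, output classes (illustrative sizes)
outputs = [torch.randn(T, C, requires_grad=True) for _ in range(2)]
targets = [torch.randint(C, (T,)) for _ in range(2)]
pit_loss(outputs, targets).backward()
```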
Treatment-Response Models for Counterfactual Reasoning with
Continuous-time, Continuous-valued Interventions | stat.ML cs.AI cs.LG | Treatment effects can be estimated from observational data as the difference
in potential outcomes. In this paper, we address the challenge of estimating
the potential outcome when treatment-dose levels can vary continuously over
time. Further, the outcome variable may not be measured at a regular frequency.
Our proposed solution represents the treatment response curves using linear
time-invariant dynamical systems---this provides a flexible means for modeling
response over time to highly variable dose curves. Moreover, for multivariate
data, the proposed method: uncovers shared structure in treatment response and
the baseline across multiple markers; and, flexibly models challenging
correlation structure both across and within signals over time. For this, we
build upon the framework of multiple-output Gaussian Processes. On simulated
and a challenging clinical dataset, we show significant gains in accuracy over
state-of-the-art models.
| Hossein Soleimani, Adarsh Subbaswamy, Suchi Saria | null | 1704.02038 | null | null |
End to End Deep Neural Network Frequency Demodulation of Speech Signals | cs.LG cs.SD | Frequency modulation (FM) is a form of radio broadcasting which is widely
used nowadays and has been for almost a century. We suggest a
software-defined-radio (SDR) receiver for FM demodulation that adopts an
end-to-end learning based approach and utilizes the prior information of
transmitted speech message in the demodulation process. The receiver detects
and enhances speech from the in-phase and quadrature components of its base
band version. The new system yields high-performance detection under both acoustic disturbances and communication channel noise, and is foreseen to outperform the established methods at low signal-to-noise ratio (SNR)
conditions in both mean square error and in perceptual evaluation of speech
quality score.
| Dan Elbaz, Michael Zibulevsky | null | 1704.02046 | null | null |
Restricted Isometry Property of Gaussian Random Projection for Finite
Set of Subspaces | cs.IT cs.LG math.IT | Dimension reduction plays an essential role when decreasing the complexity of
solving large-scale problems. The well-known Johnson-Lindenstrauss (JL) Lemma
and Restricted Isometry Property (RIP) admit the use of random projection to
reduce the dimension while keeping the Euclidean distance, which leads to the
boom of Compressed Sensing and the field of sparsity related signal processing.
Recently, successful applications of sparse models in computer vision and
machine learning have increasingly hinted that the underlying structure of high
dimensional data looks more like a union of subspaces (UoS). In this paper,
motivated by JL Lemma and an emerging field of Compressed Subspace Clustering
(CSC), we study for the first time the RIP of Gaussian random matrices for the
compression of two subspaces based on the generalized projection $F$-norm
distance. We theoretically prove that with high probability the affinity or
distance between two projected subspaces is concentrated around its estimate. When the ambient dimension after projection is sufficiently large,
the affinity and distance between two subspaces almost remain unchanged after
random projection. Numerical experiments verify the theoretical work.
| Gen Li and Yuantao Gu | 10.1109/TSP.2017.2778685 | 1704.02109 | null | null |
Jet Constituents for Deep Neural Network Based Top Quark Tagging | hep-ex cs.LG hep-ph stat.ML | Recent literature on deep neural networks for tagging of highly energetic
jets resulting from top quark decays has focused on image based techniques or
multivariate approaches using high-level jet substructure variables. Here, a
sequential approach to this task is taken by using an ordered sequence of jet
constituents as training inputs. Unlike the majority of previous approaches,
this strategy does not result in a loss of information during pixelisation or
the calculation of high level features. The jet classification method achieves
a background rejection of 45 at a 50% efficiency operating point for
reconstruction-level jets with transverse momentum in the range 600 to 2500 GeV and
is insensitive to multiple proton-proton interactions at the levels expected
throughout Run 2 of the LHC.
| Jannicke Pearkes, Wojciech Fedorko, Alison Lister, Colin Gay | null | 1704.02124 | null | null |
Quantum ensembles of quantum classifiers | quant-ph cs.LG math.ST stat.TH | Quantum machine learning witnesses an increasing number of quantum algorithms
for data-driven decision making, a problem with potential applications ranging
from automated image recognition to medical diagnosis. Many of those algorithms
are implementations of quantum classifiers, or models for the classification of
data inputs with a quantum computer. Following the success of collective
decision making with ensembles in classical machine learning, this paper
introduces the concept of quantum ensembles of quantum classifiers. Creating
the ensemble corresponds to a state preparation routine, after which the
quantum classifiers are evaluated in parallel and their combined decision is
accessed by a single-qubit measurement. This framework naturally allows for
exponentially large ensembles in which -- similar to Bayesian learning -- the
individual classifiers do not have to be trained. As an example, we analyse an
exponentially large quantum ensemble in which each classifier is weighted
according to its performance in classifying the training data, leading to new
results for quantum as well as classical machine learning.
| Maria Schuld and Francesco Petruccione | null | 1704.02146 | null | null |
Hierarchical Clustering: Objective Functions and Algorithms | cs.DS cs.LG | Hierarchical clustering is a recursive partitioning of a dataset into
clusters at an increasingly finer granularity. Motivated by the fact that most
work on hierarchical clustering was based on providing algorithms, rather than
optimizing a specific objective, Dasgupta framed similarity-based hierarchical
clustering as a combinatorial optimization problem, where a `good' hierarchical
clustering is one that minimizes some cost function. He showed that this cost
function has certain desirable properties.
We take an axiomatic approach to defining `good' objective functions for both
similarity and dissimilarity-based hierarchical clustering. We characterize a
set of "admissible" objective functions (that includes Dasgupta's one) that
have the property that when the input admits a `natural' hierarchical
clustering, it has an optimal value.
Equipped with a suitable objective function, we analyze the performance of
practical algorithms, as well as develop better algorithms. For
similarity-based hierarchical clustering, Dasgupta showed that the divisive
sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation. We give a
refined analysis of the algorithm and show that it in fact achieves an
$O(\sqrt{\log n})$-approx. (Charikar and Chatziafratis independently proved
that it is an $O(\sqrt{\log n})$-approx.). This improves upon the LP-based
$O(\log n)$-approx. of Roy and Pokutta. For dissimilarity-based hierarchical
clustering, we show that the classic average-linkage algorithm gives a factor 2
approx., and provide a simple and better algorithm that gives a factor 3/2
approx..
Finally, we consider `beyond-worst-case' scenario through a generalisation of
the stochastic block model for hierarchical clustering. We show that Dasgupta's
cost function has desirable properties for these inputs and we provide a simple
1 + o(1)-approximation in this setting.
| Vincent Cohen-Addad and Varun Kanade and Frederik Mallmann-Trenn and
Claire Mathieu | null | 1704.02147 | null | null |
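For reference, the cost function referenced above is, in Dasgupta's formulation (notation is standard, not taken verbatim from the paper):

```latex
% Dasgupta's cost for a hierarchical clustering tree $T$ over points with
% pairwise similarities $w_{ij}$, where $T[i \vee j]$ is the subtree rooted at
% the least common ancestor of $i$ and $j$:
\[
  \mathrm{cost}(T) \;=\; \sum_{i < j} w_{ij}\,\bigl|\mathrm{leaves}\bigl(T[i \vee j]\bigr)\bigr| ,
\]
% and a `good' hierarchical clustering is one of minimum cost.
```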
Variance Based Moving K-Means Algorithm | cs.LG | Clustering is a useful data exploratory method with its wide applicability in
multiple fields. However, data clustering greatly relies on initialization of
cluster centers that can result in large intra-cluster variance and dead
centers, therefore leading to sub-optimal solutions. This paper proposes a
novel variance based version of the conventional Moving K-Means (MKM) algorithm
called Variance Based Moving K-Means (VMKM) that can partition data into
optimal homogeneous clusters, irrespective of cluster initialization. The
algorithm utilizes a novel distance metric and a unique data element selection
criterion to transfer the selected elements between clusters to achieve low
intra-cluster variance and subsequently avoid dead centers. Quantitative and
qualitative comparison with various clustering techniques is performed on four
datasets selected from image processing, bioinformatics, remote sensing and the
stock market. An extensive analysis highlights the superior
performance of the proposed method over other techniques.
| Vibin Vijay, Raghunath Vp, Amarjot Singh, SN Omar | null | 1704.02197 | null | null |
OBTAIN: Real-Time Beat Tracking in Audio Signals | cs.SD cs.IR cs.LG cs.MM | In this paper, we design a system in order to perform the real-time beat
tracking for an audio signal. We use Onset Strength Signal (OSS) to detect the
onsets and estimate the tempos. Then, we form Cumulative Beat Strength Signal
(CBSS) by taking advantage of OSS and estimated tempos. Next, we perform peak
detection by extracting the periodic sequence of beats among all CBSS peaks. In
simulations, we can see that our proposed algorithm, Online Beat TrAckINg
(OBTAIN), outperforms state-of-the-art results in terms of prediction accuracy while maintaining comparable and practical computational complexity. The real-time performance is illustrated visually in the simulations.
| Ali Mottaghi, Kayhan Behdin, Ashkan Esmaeili, Mohammadreza Heydari,
and Farokh Marvasti | null | 1704.02216 | null | null |
Training Triplet Networks with GAN | cs.LG stat.ML | Triplet networks are widely used models that are characterized by good
performance in classification and retrieval tasks. In this work we propose to
train a triplet network by putting it as the discriminator in Generative
Adversarial Nets (GANs). We make use of the good capability of representation
learning of the discriminator to increase the predictive quality of the model.
We evaluated our approach on the Cifar10 and MNIST datasets and observed a significant improvement in classification performance using the simple k-NN method.
| Maciej Zieba, Lei Wang | null | 1704.02227 | null | null |
Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive
Models | cs.LG stat.ML | The Gibbs sampler is a particularly popular Markov chain used for learning
and inference problems in Graphical Models (GMs). These tasks are
computationally intractable in general, and the Gibbs sampler often suffers
from slow mixing. In this paper, we study the Swendsen-Wang dynamics which is a
more sophisticated Markov chain designed to overcome bottlenecks that impede
the Gibbs sampler. We prove an $O(\log n)$ mixing time for attractive binary
pairwise GMs (i.e., ferromagnetic Ising models) on stochastic partitioned
graphs having n vertices, under some mild conditions, including low temperature
regions where the Gibbs sampler provably mixes exponentially slow. Our
experiments also confirm that the Swendsen-Wang sampler significantly
outperforms the Gibbs sampler when they are used for learning parameters of
attractive GMs.
| Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel
Stefankovic, Eric Vigoda | null | 1704.02232 | null | null |
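A minimal sketch of one Swendsen-Wang update for a ferromagnetic Ising model on an arbitrary edge list; the graph, inverse temperature, and coupling convention (open agreeing edges with probability $1-e^{-2\beta}$) are standard choices, not details taken from the paper:

```python
import math
import random

def swendsen_wang_step(spins, edges, beta):
    """One SW update: cluster agreeing neighbours, then re-colour clusters."""
    n = len(spins)
    parent = list(range(n))

    def find(a):                       # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    p = 1.0 - math.exp(-2.0 * beta)    # probability of opening an agreeing edge
    for u, v in edges:
        if spins[u] == spins[v] and random.random() < p:
            parent[find(u)] = find(v)

    new_spin = {}                      # assign each cluster a fresh uniform spin
    for i in range(n):
        r = find(i)
        if r not in new_spin:
            new_spin[r] = random.choice([-1, 1])
        spins[i] = new_spin[r]
    return spins

# Toy usage: a 4-cycle at inverse temperature 0.8.
print(swendsen_wang_step([1, 1, -1, -1], [(0, 1), (1, 2), (2, 3), (3, 0)], 0.8))
```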
Sampling of Graph Signals via Determinantal Processes | cs.DS cs.DM cs.LG | We consider the problem of sampling k-bandlimited graph signals, i.e., linear
combinations of the first k graph Fourier modes. We know that a set of k nodes
embedding all k-bandlimited signals always exists, thereby enabling their
perfect reconstruction after sampling. Unfortunately, to exhibit such a set,
one needs to partially diagonalize the graph Laplacian, which becomes
prohibitive at large scale. We propose a novel strategy based on determinantal
point processes that side-steps partial diagonalisation and enables
reconstruction with only O(k) samples. While doing so, we exhibit a new general
algorithm to sample determinantal point processes, faster than the state-of-the-art algorithm by an order k.
| Nicolas Tremblay (1), Simon Barthelme (2), Pierre-Olivier Amblard (1)
((1) CNRS, GIPSA-CICS (2) CNRS, GIPSA-VIBS) | null | 1704.02239 | null | null |
Recurrent Environment Simulators | cs.AI cs.LG stat.ML | Models that can simulate how environments change in response to actions can
be used by agents to plan and act efficiently. We improve on previous
environment simulators from high-dimensional pixel observations by introducing
recurrent neural networks that are able to make temporally and spatially
coherent predictions for hundreds of time-steps into the future. We present an
in-depth analysis of the factors affecting performance, providing the most
extensive attempt to advance the understanding of the properties of these
models. We address the issue of computational inefficiency with a model that
does not need to generate a high-dimensional image at each time-step. We show
that our approach can be used to improve exploration and is adaptable to many
diverse environments, namely 10 Atari games, a 3D car racing environment, and
complex 3D mazes.
| Silvia Chiappa and S\'ebastien Racaniere and Daan Wierstra and Shakir
Mohamed | null | 1704.02254 | null | null |
NILC-USP at SemEval-2017 Task 4: A Multi-view Ensemble for Twitter
Sentiment Analysis | cs.CL cs.LG | This paper describes our multi-view ensemble approach to SemEval-2017 Task 4
on Sentiment Analysis in Twitter, specifically, the Message Polarity
Classification subtask for English (subtask A). Our system is a voting
ensemble, where each base classifier is trained in a different feature space.
The first space is a bag-of-words model and has a Linear SVM as base
classifier. The second and third spaces are two different strategies of
combining word embeddings to represent sentences and use a Linear SVM and a
Logistic Regressor as base classifiers. The proposed system was ranked 18th out
of 38 systems considering F1 score and 20th considering recall.
| Edilson A. Corr\^ea Jr., Vanessa Queiroz Marinho, Leandro Borges dos
Santos | null | 1704.02263 | null | null |
Thresholding Bandits with Augmented UCB | cs.LG | In this paper we propose the Augmented-UCB (AugUCB) algorithm for a
fixed-budget version of the thresholding bandit problem (TBP), where the
objective is to identify a set of arms whose quality is above a threshold. A
key feature of AugUCB is that it uses both mean and variance estimates to
eliminate arms that have been sufficiently explored; to the best of our
knowledge this is the first algorithm to employ such an approach for the
considered TBP. Theoretically, we obtain an upper bound on the loss
(probability of mis-classification) incurred by AugUCB. Although UCBEV in the literature provides a better guarantee, it is important to emphasize that UCBEV has access to the problem complexity (whose computation requires the arms' means and variances), and hence is not realistic in practice; this is in contrast to
AugUCB whose implementation does not require any such complexity inputs. We
conduct extensive simulation experiments to validate the performance of AugUCB.
Through our simulation work, we establish that AugUCB, owing to its utilization
of variance estimates, performs significantly better than the state-of-the-art
APT, CSAR and other non variance-based algorithms.
| Subhojyoti Mukherjee, K. P. Naveen, Nandan Sudarsanam, Balaraman
Ravindran | 10.24963/ijcai.2017/350 | 1704.02281 | null | null |
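A simplified, hypothetical variance-aware loop in the spirit of the above; the exact AugUCB arm-selection and elimination schedule differs, and the confidence width below is a generic illustrative choice:

```python
import math
import random

def threshold_bandit(budget, arms, tau):
    """arms: reward samplers; returns the arms whose empirical mean exceeds tau."""
    stats = [[0.0, 0.0, 0] for _ in arms]      # running sum, sum of squares, count
    active = set(range(len(arms)))
    for t in range(1, budget + 1):
        i = random.choice(sorted(active))
        x = arms[i]()
        s = stats[i]
        s[0] += x; s[1] += x * x; s[2] += 1
        if s[2] >= 2:
            mean = s[0] / s[2]
            var = max(s[1] / s[2] - mean * mean, 1e-12)
            width = math.sqrt(2.0 * var * math.log(t) / s[2])
            # Stop sampling arms whose interval lies entirely on one side of tau.
            if (mean + width < tau or mean - width > tau) and len(active) > 1:
                active.discard(i)
    return [i for i, s in enumerate(stats) if s[2] and s[0] / s[2] > tau]

arms = [lambda m=m: random.gauss(m, 1.0) for m in (0.2, 0.5, 0.9)]
print(threshold_bandit(3000, arms, tau=0.6))
```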
TransNets: Learning to Transform for Recommendation | cs.IR cs.CL cs.LG | Recently, deep learning methods have been shown to improve the performance of
recommender systems over traditional methods, especially when review text is
available. For example, a recent model, DeepCoNN, uses neural nets to learn one
latent representation for the text of all reviews written by a target user, and
a second latent representation for the text of all reviews for a target item,
and then combines these latent representations to obtain state-of-the-art
performance on recommendation tasks. We show that (unsurprisingly) much of the
predictive value of review text comes from reviews of the target user for the
target item. We then introduce a way in which this information can be used in
recommendation, even when the target user's review for the target item is not
available. Our model, called TransNets, extends the DeepCoNN model by
introducing an additional latent layer representing the target user-target item
pair. We then regularize this layer, at training time, to be similar to another
latent representation of the target user's review of the target item. We show
that TransNets and extensions of it improve substantially over the previous
state-of-the-art.
| Rose Catherine, William Cohen | 10.1145/3109859.3109878 | 1704.02298 | null | null |
It Takes (Only) Two: Adversarial Generator-Encoder Networks | cs.CV cs.LG stat.ML | We present a new autoencoder-type architecture that is trainable in an
unsupervised mode, sustains both generation and inference, and has the quality
of conditional and unconditional samples boosted by adversarial learning.
Unlike previous hybrids of autoencoders and adversarial networks, the
adversarial game in our approach is set up directly between the encoder and the
generator, and no external mappings are trained in the process of learning. The
game objective compares the divergences of each of the real and the generated
data distributions with the prior distribution in the latent space. We show
that direct generator-vs-encoder game leads to a tight coupling of the two
components, resulting in samples and reconstructions of a comparable quality to
some recently-proposed more complex architectures.
| Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky | null | 1704.02304 | null | null |
Fast Spectral Clustering Using Autoencoders and Landmarks | cs.LG stat.ML | In this paper, we introduce an algorithm for performing spectral clustering
efficiently. Spectral clustering is a powerful clustering algorithm that
suffers from high computational complexity, due to eigen decomposition. In this
work, we first build the adjacency matrix of the corresponding graph of the
dataset. To build this matrix, we only consider a limited number of points,
called landmarks, and compute the similarity of all data points with the
landmarks. Then, we present a definition of the Laplacian matrix of the graph
that enables us to perform eigen decomposition efficiently, using a deep
autoencoder. The overall complexity of the algorithm for eigen decomposition is
$O(np)$, where $n$ is the number of data points and $p$ is the number of
landmarks. At last, we evaluate the performance of the algorithm in different
experiments.
| Ershad Banijamali, Ali Ghodsi | null | 1704.02345 | null | null |
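A minimal sketch of the landmark step, with random data and an assumed Gaussian kernel bandwidth; the autoencoder that replaces the eigendecomposition is omitted:

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.random((1000, 5))                                   # n data points
landmarks = X[rng.choice(len(X), size=20, replace=False)]   # p << n landmarks

# Similarities of all points to the landmarks only: an n x p matrix,
# instead of the full n x n adjacency matrix.
W = np.exp(-cdist(X, landmarks) ** 2 / 0.5)
print(W.shape)                                              # (1000, 20)
```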
Joint Probabilistic Linear Discriminant Analysis | cs.LG stat.ML | Standard probabilistic linear discriminant analysis (PLDA) for speaker
recognition assumes that the sample's features (usually, i-vectors) are given
by a sum of three terms: a term that depends on the speaker identity, a term
that models the within-speaker variability and is assumed independent across
samples, and a final term that models any remaining variability and is also
independent across samples. In this work, we propose a generalization of this
model where the within-speaker variability is not necessarily assumed
independent across samples but dependent on another discrete variable. This
variable, which we call the channel variable as in the standard PLDA approach,
could be, for example, a discrete category for the channel characteristics, the
language spoken by the speaker, the type of speech in the sample
(conversational, monologue, read), etc. The value of this variable is assumed
to be known during training but not during testing. Scoring is performed, as in
standard PLDA, by computing a likelihood ratio between the null hypothesis that
the two sides of a trial belong to the same speaker versus the alternative
hypothesis that the two sides belong to different speakers. The two likelihoods
are computed by marginalizing over two hypotheses about the channels in both
sides of a trial: that they are the same and that they are different. This way,
we expect that the new model will be better at coping with same-channel versus
different-channel trials than standard PLDA, since knowledge about the channel
(or language, or speech style) is used during training and implicitly
considered during scoring.
| Luciana Ferrer | null | 1704.02346 | null | null |
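In symbols, the scoring rule described above is a likelihood ratio whose numerator and denominator each marginalize over the channel hypotheses; a sketch of this structure follows, with notation assumed rather than taken from the paper:

```latex
% With $\mathcal{H}_{ss}/\mathcal{H}_{ds}$ denoting same/different speaker and
% $c_1, c_2$ the unobserved channel variables of the two trial sides,
\[
  \mathrm{LR}(x_1, x_2) \;=\;
  \frac{\sum_{c_1, c_2} P(c_1, c_2)\, p(x_1, x_2 \mid \mathcal{H}_{ss}, c_1, c_2)}
       {\sum_{c_1, c_2} P(c_1, c_2)\, p(x_1, x_2 \mid \mathcal{H}_{ds}, c_1, c_2)},
\]
% where the sums cover the two hypotheses that $c_1 = c_2$ and $c_1 \neq c_2$.
```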
Voice Conversion Using Sequence-to-Sequence Learning of Context
Posterior Probabilities | cs.SD cs.CL cs.LG | Voice conversion (VC) using sequence-to-sequence learning of context
posterior probabilities is proposed. Conventional VC using shared context
posterior probabilities predicts target speech parameters from the context
posterior probabilities estimated from the source speech parameters. Although
conventional VC can be built from non-parallel data, it is difficult to convert
speaker individuality, such as phonetic properties and speaking rate, contained in
the posterior probabilities because the source posterior probabilities are
directly used for predicting target speech parameters. In this work, we assume
that the training data partly include parallel speech data and propose
sequence-to-sequence learning between the source and target posterior
probabilities. The conversion models perform non-linear and variable-length
transformation from the source probability sequence to the target one. Further,
we propose a joint training algorithm for the modules. In contrast to
conventional VC, which separately trains the speech recognition that estimates
posterior probabilities and the speech synthesis that predicts target speech
parameters, our proposed method jointly trains these modules along with the
proposed probability conversion modules. Experimental results demonstrate that
our approach outperforms the conventional VC.
| Hiroyuki Miyoshi, Yuki Saito, Shinnosuke Takamichi, and Hiroshi
Saruwatari | null | 1704.02360 | null | null |
Time-Contrastive Learning Based DNN Bottleneck Features for
Text-Dependent Speaker Verification | cs.SD cs.LG | In this paper, we present a time-contrastive learning (TCL) based bottleneck
(BN)feature extraction method for speech signals with an application to
text-dependent (TD) speaker verification (SV). It is well-known that speech
signals exhibit quasi-stationary behavior in and only in a short interval, and
the TCL method aims to exploit this temporal structure. More specifically, it
trains deep neural networks (DNNs) to discriminate temporal events obtained by
uniformly segmenting speech signals, in contrast to existing DNN based BN
feature extraction methods that train DNNs using labeled data to discriminate
speakers or pass-phrases or phones or a combination of them. In the context of
speaker verification, speech data of fixed pass-phrases are used for TCL-BN
training, while the pass-phrases used for TCL-BN training are excluded from
being used for SV, so that the learned features can be considered generic. The
method is evaluated on the RedDots Challenge 2016 database. Experimental
results show that TCL-BN is superior to the existing speaker and pass-phrase
discriminant BN features and the Mel-frequency cepstral coefficient feature for
text-dependent speaker verification.
| Achintya Kr. Sarkar and Zheng-Hua Tan | null | 1704.02373 | null | null |
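The self-supervised labelling at the heart of TCL can be sketched in a few lines: segment each utterance uniformly and use the segment index as the class target (the segment count and features here are stand-ins):

```python
import numpy as np

def tcl_labels(features, n_segments):
    """features: (T, D) frame features; returns per-frame segment-index targets."""
    T = len(features)
    return np.minimum(np.arange(T) * n_segments // T, n_segments - 1)

frames = np.random.randn(300, 40)       # stand-in frame-level features
print(tcl_labels(frames, n_segments=10)[:35])
```

A DNN trained to discriminate these segment indices then yields the TCL-BN features from its bottleneck layer.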
Uncovering Group Level Insights with Accordant Clustering | cs.LG | Clustering is a widely-used data mining tool, which aims to discover
partitions of similar items in data. We introduce a new clustering paradigm,
\emph{accordant clustering}, which enables the discovery of (predefined) group
level insights. Unlike previous clustering paradigms that aim to understand
relationships amongst the individual members, the goal of accordant clustering
is to uncover insights at the group level through the analysis of their
members. Group level insight can often support a call to action that cannot be
informed through previous clustering techniques. We propose the first accordant
clustering algorithm, and prove that it finds near-optimal solutions when data
possesses inherent cluster structure. The insights revealed by accordant
clusterings enabled experts in the field of medicine to isolate successful
treatments for a neurodegenerative disease, and those in finance to discover
patterns of unnecessary spending.
| Amit Dhurandhar and Margareta Ackerman and Xiang Wang | null | 1704.02378 | null | null |
Stein Variational Policy Gradient | cs.LG | Policy gradient methods have been successfully applied to many complex
reinforcement learning problems. However, policy gradient methods suffer from
high variance, slow convergence, and inefficient exploration. In this work, we
introduce a maximum entropy policy optimization framework which explicitly
encourages parameter exploration, and show that this framework can be reduced
to a Bayesian inference problem. We then propose a novel Stein variational
policy gradient method (SVPG) which combines existing policy gradient methods
and a repulsive functional to generate a set of diverse but well-behaved
policies. SVPG is robust to initialization and can easily be implemented in a
parallel manner. On continuous control problems, we find that implementing SVPG
on top of REINFORCE and advantage actor-critic algorithms improves both average
return and data efficiency.
| Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng | null | 1704.02399 | null | null |
Deep Reinforcement Learning framework for Autonomous Driving | stat.ML cs.LG cs.RO | Reinforcement learning is considered to be a strong AI paradigm which can be
used to teach machines through interaction with the environment and learning
from their mistakes. Despite its perceived utility, it has not yet been
successfully applied in automotive applications. Motivated by the successful
demonstrations of learning of Atari games and Go by Google DeepMind, we propose
a framework for autonomous driving using deep reinforcement learning. This is
of particular relevance as it is difficult to pose autonomous driving as a
supervised learning problem due to strong interactions with the environment
including other vehicles, pedestrians and roadworks. As it is a relatively new
area of research for autonomous driving, we provide a short overview of deep
reinforcement learning and then describe our proposed framework. It
incorporates Recurrent Neural Networks for information integration, enabling
the car to handle partially observable scenarios. It also integrates the recent
work on attention models to focus on relevant information, thereby reducing the
computational complexity for deployment on embedded hardware. The framework was
tested in an open source 3D car racing simulator called TORCS. Our simulation
results demonstrate learning of autonomous maneuvering in a scenario of complex
road curvatures and simple interaction of other vehicles.
| Ahmad El Sallab, Mohammed Abdou, Etienne Perot and Senthil Yogamani | 10.2352/ISSN.2470-1173.2017.19.AVM-023 | 1704.02532 | null | null |
MLC Toolbox: A MATLAB/OCTAVE Library for Multi-Label Classification | cs.LG | Multi-Label Classification toolbox is a MATLAB/OCTAVE library for Multi-Label
Classification (MLC). There exist a few Java libraries for MLC, but no
MATLAB/OCTAVE library that covers various methods. This toolbox offers an
environment for evaluation, comparison and visualization of the MLC results.
One attraction of this toolbox is that it enables us to try many combinations
of feature space dimension reduction, sample clustering, label space dimension
reduction and ensemble, etc.
| Keigo Kimura and Lu Sun and Mineichi Kudo | null | 1704.02592 | null | null |
A Sample Complexity Measure with Applications to Learning Optimal
Auctions | cs.GT cs.LG math.ST stat.TH | We introduce a new sample complexity measure, which we refer to as
split-sample growth rate. For any hypothesis $H$ and for any sample $S$ of size
$m$, the split-sample growth rate $\hat{\tau}_H(m)$ counts how many different
hypotheses can empirical risk minimization output on any sub-sample of $S$ of
size $m/2$. We show that the expected generalization error is upper bounded by
$O\left(\sqrt{\frac{\log(\hat{\tau}_H(2m))}{m}}\right)$. Our result is enabled
by a strengthening of the Rademacher complexity analysis of the expected
generalization error. We show that this sample complexity measure, greatly
simplifies the analysis of the sample complexity of optimal auction design, for
many auction classes studied in the literature. Their sample complexity can be
derived solely by noticing that in these auction classes, ERM on any sample or
sub-sample will pick parameters that are equal to one of the points in the
sample.
| Vasilis Syrgkanis | null | 1704.02598 | null | null |
Enhancing Robustness of Machine Learning Systems via Data
Transformations | cs.CR cs.LG | We propose the use of data transformations as a defense against evasion
attacks on ML classifiers. We present and investigate strategies for
incorporating a variety of data transformations including dimensionality
reduction via Principal Component Analysis and data `anti-whitening' to enhance
the resilience of machine learning, targeting both the classification and the
training phase. We empirically evaluate and demonstrate the feasibility of
linear transformations of data as a defense mechanism against evasion attacks
using multiple real-world datasets. Our key findings are that the defense is
(i) effective against the best known evasion attacks from the literature,
resulting in a two-fold increase in the resources required by a white-box
adversary with knowledge of the defense for a successful attack, (ii)
applicable across a range of ML classifiers, including Support Vector Machines
and Deep Neural Networks, and (iii) generalizable to multiple application
domains, including image classification and human activity classification.
| Arjun Nitin Bhagoji, Daniel Cullina, Chawin Sitawarin, Prateek Mittal | null | 1704.02654 | null | null |
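A minimal sketch of the dimensionality-reduction variant of the defense, assuming a scikit-learn pipeline; the `anti-whitening' transformation and the attack evaluation are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.random((200, 64))                 # stand-in for flattened image features
y = rng.integers(0, 2, size=200)

# The PCA step is fit on training data and applied to every input at test time,
# discarding the low-variance directions that evasion attacks tend to exploit.
defended = make_pipeline(PCA(n_components=16), LinearSVC(dual=False))
defended.fit(X, y)
print(defended.score(X, y))
```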
Supervised Infinite Feature Selection | cs.LG | In this paper, we present a new feature selection method that is suitable for
both unsupervised and supervised problems. We build upon the recently proposed
Infinite Feature Selection (IFS) method where feature subsets of all sizes
(including infinity) are considered. We extend IFS in two ways. First, we
propose a supervised version of it. Second, we propose new ways of forming the
feature adjacency matrix that perform better for unsupervised problems. We
extensively evaluate our methods on many benchmark datasets, including large
image-classification datasets (PASCAL VOC), and show that our methods
outperform both the IFS and the widely used "minimum-redundancy
maximum-relevancy (mRMR)" feature selection algorithm.
| Sadegh Eskandari, Emre Akbas | null | 1704.02665 | null | null |
Pyramid Vector Quantization for Deep Learning | cs.LG cs.NE | This paper explores the use of Pyramid Vector Quantization (PVQ) to reduce
the computational cost for a variety of neural networks (NNs) while, at the
same time, compressing the weights that describe them. This is based on the
fact that the dot product between an N dimensional vector of real numbers and
an N dimensional PVQ vector can be calculated with only additions and
subtractions and one multiplication. This is advantageous since tensor
products, commonly used in NNs, can be reduced to a dot product or a set of dot products. Finally, it is stressed that any NN architecture that is based on an operation that can be reduced to a dot product can benefit from the
techniques described here.
| Vincenzo Liguori | null | 1704.02681 | null | null |
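The key arithmetic fact is easy to verify in a toy sketch: a PVQ codeword has small integer entries, so the dot product reduces to signed additions plus one multiplication by the codeword's scale (the values below are illustrative):

```python
def pvq_dot(x, code, scale):
    """Dot product with a PVQ codeword: only additions/subtractions, then one
    multiplication by the scale. code entries are small integers."""
    acc = 0.0
    for xi, ci in zip(x, code):
        # Repeated signed addition replaces multiplication by ci.
        for _ in range(abs(ci)):
            acc += xi if ci > 0 else -xi
    return acc * scale              # the single multiplication

x = [0.5, -1.0, 2.0, 0.25]
code = [1, 0, -2, 1]                # |1| + |0| + |-2| + |1| = K = 4
print(pvq_dot(x, code, scale=0.7))
print(0.7 * sum(xi * ci for xi, ci in zip(x, code)))   # same value, with multiplies
```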
Learning Important Features Through Propagating Activation Differences | cs.CV cs.LG cs.NE | The purported "black box" nature of neural networks is a barrier to adoption
in applications where interpretability is essential. Here we present DeepLIFT
(Deep Learning Important FeaTures), a method for decomposing the output
prediction of a neural network on a specific input by backpropagating the
contributions of all neurons in the network to every feature of the input.
DeepLIFT compares the activation of each neuron to its 'reference activation'
and assigns contribution scores according to the difference. By optionally
giving separate consideration to positive and negative contributions, DeepLIFT
can also reveal dependencies which are missed by other approaches. Scores can
be computed efficiently in a single backward pass. We apply DeepLIFT to models
trained on MNIST and simulated genomic data, and show significant advantages
over gradient-based methods. Video tutorial: http://goo.gl/qKb7pL, ICML slides:
bit.ly/deeplifticmlslides, ICML talk: https://vimeo.com/238275076, code:
http://goo.gl/RM8jvH.
| Avanti Shrikumar, Peyton Greenside, Anshul Kundaje | null | 1704.02685 | null | null |
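A toy sketch of the difference-from-reference idea for a single linear layer, where the per-input contributions provably sum to the output's difference from its reference; DeepLIFT's rules for nonlinearities are more involved and omitted here:

```python
import numpy as np

W = np.array([[1.0, -2.0], [0.5, 1.5]])   # layer weights (2 inputs -> 2 units)
x = np.array([3.0, 1.0])                  # actual input
x_ref = np.array([0.0, 0.0])              # chosen reference input

delta_x = x - x_ref
# Linear-rule contribution of input i to output j: W[j, i] * delta_x[i].
contributions = W * delta_x               # broadcasts over the input dimension
delta_y = W @ x - W @ x_ref

# The contributions sum to the total output difference-from-reference.
print(contributions.sum(axis=1), delta_y)
```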
Word Embeddings via Tensor Factorization | stat.ML cs.CL cs.LG | Most popular word embedding techniques involve implicit or explicit
factorization of a word co-occurrence based matrix into low rank factors. In
this paper, we aim to generalize this trend by using numerical methods to
factor higher-order word co-occurrence based arrays, or \textit{tensors}. We
present four word embeddings using tensor factorization and analyze their
advantages and disadvantages. One of our main contributions is a novel joint
symmetric tensor factorization technique related to the idea of coupled tensor
factorization. We show that embeddings based on tensor factorization can be
used to discern the various meanings of polysemous words without being
explicitly trained to do so, and motivate the intuition behind why this works in a way that it does not with existing methods. We also modify an existing word
embedding evaluation metric known as Outlier Detection [Camacho-Collados and
Navigli, 2016] to evaluate the quality of the order-$N$ relations that a word
embedding captures, and show that tensor-based methods outperform existing
matrix-based methods at this task. Experimentally, we show that all of our word
embeddings either outperform or are competitive with state-of-the-art baselines
commonly used today on a variety of recent datasets. Suggested applications of
tensor factorization-based word embeddings are given, and all source code and
pre-trained vectors are publicly available online.
| Eric Bailey and Shuchin Aeron | null | 1704.02686 | null | null |
Evolving a Vector Space with any Generating Set | cs.LG | In Valiant's model of evolution, a class of representations is evolvable iff
a polynomial-time process of random mutations guided by selection converges
with high probability to a representation as $\epsilon$-close as desired from
the optimal one, for any required $\epsilon>0$. Several previous positive
results exist that can be related to evolving a vector space, but each former
result imposes disproportionate representations or restrictions on
(re)initialisations, distributions, performance functions and/or the mutator.
In this paper, we show that all it takes to evolve a normed vector space is
merely a set that generates the space. Furthermore, it takes only
$\tilde{O}(1/\epsilon^2)$ steps and it is essentially stable, agnostic and
handles target drifts that rival some proven in fairly restricted settings. Our
algorithm can be viewed as a close relative of a popular fifty-year-old
gradient-free optimization method for which little is still known from the
convergence standpoint: the Nelder-Mead simplex method.
| Richard Nock, Frank Nielsen | null | 1704.02708 | null | null |
Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation | cs.CV cs.AI cs.LG | Many modern computer vision and machine learning applications rely on solving
difficult optimization problems that involve non-differentiable objective
functions and constraints. The alternating direction method of multipliers
(ADMM) is a widely used approach to solve such problems. Relaxed ADMM is a
generalization of ADMM that often achieves better performance, but its
efficiency depends strongly on algorithm parameters that must be chosen by an
expert user. We propose an adaptive method that automatically tunes the key
algorithm parameters to achieve optimal performance without user oversight.
Inspired by recent work on adaptivity, the proposed adaptive relaxed ADMM
(ARADMM) is derived by assuming a Barzilai-Borwein style linear gradient. A
detailed convergence analysis of ARADMM is provided, and numerical results on
several applications demonstrate fast practical convergence.
| Zheng Xu, Mario A. T. Figueiredo, Xiaoming Yuan, Christoph Studer, and
Tom Goldstein | null | 1704.02712 | null | null |
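For reference, a minimal relaxed-ADMM iteration for the lasso with fixed penalty rho and relaxation parameter alpha; ARADMM's contribution is precisely to adapt these two parameters automatically, which this sketch does not do:

    import numpy as np

    def relaxed_admm_lasso(A, b, lam, rho=1.0, alpha=1.5, iters=300):
        # min 0.5*||Ax - b||^2 + lam*||x||_1 via relaxed ADMM.
        n = A.shape[1]
        z = np.zeros(n); u = np.zeros(n)
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            x_hat = alpha * x + (1.0 - alpha) * z          # over-relaxation
            z = np.sign(x_hat + u) * np.maximum(np.abs(x_hat + u) - lam / rho, 0.0)
            u = u + x_hat - z                              # dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    b = A @ (rng.standard_normal(20) * (rng.random(20) < 0.3)) + 0.01 * rng.standard_normal(50)
    print(np.count_nonzero(relaxed_admm_lasso(A, b, lam=0.5)))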
Distributed Learning for Cooperative Inference | math.OC cs.LG cs.MA math.PR stat.ML | We study the problem of cooperative inference where a group of agents
interact over a network and seek to estimate a joint parameter that best
explains a set of observations. Agents do not know the network topology or the
observations of other agents. We explore a variational interpretation of the
Bayesian posterior density, and its relation to the stochastic mirror descent
algorithm, to propose a new distributed learning algorithm. We show that, under
appropriate assumptions, the beliefs generated by the proposed algorithm
concentrate around the true parameter exponentially fast. We provide explicit
non-asymptotic bounds for the convergence rate. Moreover, we develop explicit
and computationally efficient algorithms for observation models belonging to
exponential families.
| Angelia Nedi\'c, Alex Olshevsky and C\'esar A. Uribe | null | 1704.02718 | null | null |
Group Importance Sampling for Particle Filtering and MCMC | stat.CO cs.CE cs.LG stat.ME stat.ML | Bayesian methods and their implementations by means of sophisticated Monte
Carlo techniques have become very popular in signal processing over the last
years. Importance Sampling (IS) is a well-known Monte Carlo technique that
approximates integrals involving a posterior distribution by means of weighted
samples. In this work, we study the assignment of a single weighted sample
which compresses the information contained in a population of weighted samples.
Part of the theory that we present as Group Importance Sampling (GIS) has been
employed implicitly in different works in the literature. The provided analysis
yields several theoretical and practical consequences. For instance, we discuss
the application of GIS into the Sequential Importance Resampling framework and
show that Independent Multiple Try Metropolis schemes can be interpreted as a
standard Metropolis-Hastings algorithm, following the GIS approach. We also
introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS.
The first one, named Group Metropolis Sampling method, produces a Markov chain
of sets of weighted samples. All these sets are then employed for obtaining a
unique global estimator. The second one is the Distributed Particle
Metropolis-Hastings technique, where different parallel particle filters are
jointly used to drive an MCMC algorithm. Different resampled trajectories are
compared and then tested with a proper acceptance probability. The novel
schemes are tested in different numerical experiments such as learning the
hyperparameters of Gaussian Processes, two localization problems in a wireless
sensor network (with synthetic and real data) and the tracking of vegetation
parameters given satellite observations, where they are compared with several
benchmark Monte Carlo techniques. Three illustrative Matlab demos are also
provided.
| L. Martino, V. Elvira, G. Camps-Valls | 10.1016/j.dsp.2018.07.007 | 1704.02771 | null | null |
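A toy sketch of the compression step described above, assuming the group is summarized by one particle resampled proportionally to the weights and carrying the group's average weight (the exact construction and its proper-weighting guarantees are developed in the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def compress_group(samples, weights):
        # One weighted sample summarizing a population of weighted samples.
        w = np.asarray(weights, dtype=float)
        idx = rng.choice(len(w), p=w / w.sum())   # resample one particle
        return samples[idx], w.mean()             # carry the average weight

    samples = rng.normal(size=1000)                # proposal draws
    weights = np.exp(-0.5 * (samples - 1.0) ** 2)  # unnormalized IS weights
    s, W = compress_group(samples, weights)
    print(s, W)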
Parsimonious Random Vector Functional Link Network for Data Streams | cs.NE cs.LG | The theory of random vector functional link network (RVFLN) has provided a
breakthrough in the design of neural networks (NNs) since it conveys solid
theoretical justification of randomized learning. Existing works in RVFLN are
hardly scalable for data stream analytics because they suffer from the issue of
complexity resulting from the absence of structural learning scenarios. A novel
class of RVFLN, namely the parsimonious random vector functional link network
(pRVFLN), is proposed in this paper. pRVFLN features an open structure paradigm
where its network structure can be built from scratch and automatically
generated in accordance with the degree of nonlinearity and the time-varying
property of the system being modelled. pRVFLN is equipped with
complexity reduction scenarios where inconsequential hidden nodes can be pruned
and input features can be dynamically selected. pRVFLN puts into perspective an
online active learning mechanism which expedites the training process and
relieves operator labelling efforts. In addition, pRVFLN introduces a
non-parametric type of hidden node, developed using an interval-valued data
cloud. The hidden node completely reflects the real data distribution and is
not constrained by a specific shape of the cluster. All learning procedures of
pRVFLN follow a strictly single-pass learning mode, which is applicable for an
online real-time deployment. The efficacy of pRVFLN was rigorously validated
through numerous simulations and comparisons with state-of-the-art algorithms
where it produced the most encouraging numerical results. Furthermore, the
robustness of pRVFLN was investigated, and a new conclusion is drawn about the
scope of the random parameters, which play a vital role in the success of
randomized learning.
| Mahardhika Pratama, Plamen P. Angelov, Edwin Lughofer | 10.1016/j.ins.2017.11.050 | 1704.02789 | null | null |
Bayesian Recurrent Neural Networks | cs.LG stat.ML | In this work we explore a straightforward variational Bayes scheme for
Recurrent Neural Networks. Firstly, we show that a simple adaptation of
truncated backpropagation through time can yield good quality uncertainty
estimates and superior regularisation at only a small extra computational cost
during training, also reducing the number of parameters by 80\%. Secondly, we
demonstrate how a novel kind of posterior approximation yields further
improvements to the performance of Bayesian RNNs. We incorporate local gradient
information into the approximate posterior to sharpen it around the current
batch statistics. We show how this technique is not exclusive to recurrent
neural networks and can be applied more widely to train Bayesian neural
networks. We also empirically demonstrate how Bayesian RNNs are superior to
traditional RNNs on a language modelling benchmark and an image captioning
task, as well as showing how each of these methods improve our model over a
variety of other schemes for training them. We also introduce a new benchmark
for studying uncertainty for language models so future methods can be easily
compared.
| Meire Fortunato, Charles Blundell, Oriol Vinyals | null | 1704.02798 | null | null |
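A small sketch of the variational weight sampling underlying such schemes, assuming a factorized Gaussian posterior with the usual reparameterization w = mu + softplus(rho) * eps (the paper's truncated-BPTT adaptation and posterior sharpening are not shown):

    import numpy as np

    rng = np.random.default_rng(1)

    def sample_weights(mu, rho):
        # Reparameterized draw from q(w) = N(mu, softplus(rho)^2).
        sigma = np.log1p(np.exp(rho))
        return mu + sigma * rng.standard_normal(mu.shape), sigma

    def kl_to_standard_normal(mu, sigma):
        # KL(q || N(0, I)) for a factorized Gaussian posterior.
        return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

    mu = np.zeros((4, 4)); rho = -3.0 * np.ones((4, 4))
    w, sigma = sample_weights(mu, rho)
    print(w.shape, kl_to_standard_normal(mu, sigma))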
A Comparative Study for Predicting Heart Diseases Using Data Mining
Classification Methods | cs.CY cs.LG stat.ML | Improving the precision of heart diseases detection has been investigated by
many researchers in the literature. Such improvement is motivated by the
overwhelming health care expenditures and erroneous diagnoses. As a result,
various methodologies have been proposed to analyze the disease factors aiming
to decrease physicians' practice variation and reduce medical costs and
errors. In this paper, our main motivation is to develop an effective
intelligent medical decision support system based on data mining techniques. In
this context, five data mining classifying algorithms, with large datasets,
have been utilized to assess and analyze the risk factors statistically related
to heart diseases in order to compare the performance of the implemented
classifiers (e.g., Na\"ive Bayes, Decision Tree, Discriminant, Random Forest,
and Support Vector Machine). To underscore the practical viability of our
approach, the selected classifiers have been implemented using MATLAB tool with
two datasets. Results of the conducted experiments showed that all
classification algorithms are predictive and can give relatively correct
answers. However, the decision tree outperforms the other classifiers with an
accuracy rate of 99.0%, followed by the Random Forest. This is because both
share essentially the same mechanism, the Random Forest simply building an
ensemble of decision trees. Although ensemble learning has been shown to
produce superior results, in our case the decision tree outperformed its
ensemble version.
| Israa Ahmed Zriqat, Ahmad Mousa Altamimi, Mohammad Azzeh | null | 1704.02799 | null | null |
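The paper's experiments use MATLAB; as a rough analogue, the same five-classifier comparison can be set up in a few lines of scikit-learn (shown here on a stand-in public clinical dataset, since the paper's heart-disease data is not bundled with sklearn):

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    classifiers = [("NaiveBayes", GaussianNB()),
                   ("Discriminant", LinearDiscriminantAnalysis()),
                   ("DecisionTree", DecisionTreeClassifier(random_state=0)),
                   ("RandomForest", RandomForestClassifier(random_state=0)),
                   ("SVM", SVC())]
    for name, clf in classifiers:
        print(name, cross_val_score(clf, X, y, cv=5).mean())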
Bayesian Inference of Individualized Treatment Effects using Multi-task
Gaussian Processes | cs.LG | Predicated on the increasing abundance of electronic health records, we
investigate the problem of inferring individualized treatment effects using
observational data. Stemming from the potential outcomes model, we propose a
novel multi-task learning framework in which factual and counterfactual
outcomes are modeled as the outputs of a function in a vector-valued
reproducing kernel Hilbert space (vvRKHS). We develop a nonparametric Bayesian
method for learning the treatment effects using a multi-task Gaussian process
(GP) with a linear coregionalization kernel as a prior over the vvRKHS. The
Bayesian approach allows us to compute individualized measures of confidence in
our estimates via pointwise credible intervals, which are crucial for realizing
the full potential of precision medicine. The impact of selection bias is
alleviated via a risk-based empirical Bayes method for adapting the multi-task
GP prior, which jointly minimizes the empirical error in factual outcomes and
the uncertainty in (unobserved) counterfactual outcomes. We conduct
experiments on observational datasets for an interventional social program
applied to premature infants, and a left ventricular assist device applied to
cardiac patients wait-listed for a heart transplant. In both experiments, we
show that our method significantly outperforms the state-of-the-art.
| Ahmed M. Alaa and Mihaela van der Schaar | null | 1704.02801 | null | null |
Unsupervised prototype learning in an associative-memory network | cs.NE cond-mat.dis-nn cs.LG | Unsupervised learning in a generalized Hopfield associative-memory network is
investigated in this work. First, we prove that the (generalized) Hopfield
model is equivalent to a semi-restricted Boltzmann machine with a layer of
visible neurons and another layer of hidden binary neurons, so it could serve
as the building block for a multilayered deep-learning system. We then
demonstrate that the Hopfield network can learn to form a faithful internal
representation of the observed samples, with the learned memory patterns being
prototypes of the input data. Furthermore, we propose a spectral method to
extract a small set of concepts (idealized prototypes) as the most concise
summary or abstraction of the empirical data.
| Huiling Zhen, Shang-Nan Wang, and Hai-Jun Zhou | null | 1704.02848 | null | null |
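For orientation, a classic Hebbian Hopfield memory in a few lines; the paper goes beyond this baseline (unsupervised learning of prototypes rather than verbatim storage), so this is only the standard starting point:

    import numpy as np

    rng = np.random.default_rng(0)

    def hebbian_store(patterns):
        # Hebbian weights storing +/-1 patterns; zero self-connections.
        W = patterns.T @ patterns / patterns.shape[1]
        np.fill_diagonal(W, 0.0)
        return W

    def recall(W, x, steps=20):
        for _ in range(steps):
            x = np.where(W @ x >= 0, 1, -1)   # synchronous sign update
        return x

    P = rng.choice([-1, 1], size=(3, 50))     # three stored prototypes
    noisy = P[0] * rng.choice([1, -1], size=50, p=[0.9, 0.1])
    print(np.mean(recall(hebbian_store(P), noisy) == P[0]))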
Dynamic Safe Interruptibility for Decentralized Multi-Agent
Reinforcement Learning | cs.AI cs.LG cs.MA stat.ML | In reinforcement learning, agents learn by performing actions and observing
their outcomes. Sometimes, it is desirable for a human operator to
\textit{interrupt} an agent in order to prevent dangerous situations from
happening. Yet, as part of their learning process, agents may link these
interruptions, that impact their reward, to specific states and deliberately
avoid them. The situation is particularly challenging in a multi-agent context
because agents might not only learn from their own past interruptions, but also
from those of other agents. Orseau and Armstrong defined \emph{safe
interruptibility} for one learner, but their work does not naturally extend to
multi-agent systems. This paper introduces \textit{dynamic safe
interruptibility}, an alternative definition more suited to decentralized
learning problems, and studies this notion in two learning frameworks:
\textit{joint action learners} and \textit{independent learners}. We give
realistic sufficient conditions on the learning algorithm to enable dynamic
safe interruptibility in the case of joint action learners, yet show that these
conditions are not sufficient for independent learners. We show however that if
agents can detect interruptions, it is possible to prune the observations to
ensure dynamic safe interruptibility even for independent learners.
| El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, Alexandre
Maurer | null | 1704.02882 | null | null |
Opinion Polarization by Learning from Social Feedback | physics.soc-ph cs.LG cs.SI nlin.AO | We explore a new mechanism to explain polarization phenomena in opinion
dynamics in which agents evaluate alternative views on the basis of the social
feedback obtained on expressing them. High support of the favored opinion in
the social environment is treated as positive feedback which reinforces the
value associated to this opinion. In connected networks of sufficiently high
modularity, different groups of agents can form strong convictions of competing
opinions. Linking the social feedback process to standard equilibrium concepts
we analytically characterize sufficient conditions for the stability of
bi-polarization. While previous models have emphasized the polarization effects
of deliberative argument-based communication, our model highlights an affective
experience-based route to polarization, without assumptions about negative
influence or bounded confidence.
| Sven Banisch and Eckehard Olbrich | 10.1080/0022250X.2018.1517761 | 1704.02890 | null | null
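A toy sketch of the feedback mechanism, assuming a simple reinforcement update of opinion valuations from neighbour approval (the paper's network model and stability analysis are much richer; the update rule and parameters here are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    def feedback_step(q, neighbour_opinions, alpha=0.1):
        # Agent voices its higher-valued opinion; approval (+1) or
        # disapproval (-1) from a random neighbour reinforces that value.
        k = int(np.argmax(q))
        r = 1.0 if rng.choice(neighbour_opinions) == k else -1.0
        q[k] += alpha * (r - q[k])
        return q

    q = np.array([0.1, 0.0])                  # valuations of two opinions
    for _ in range(200):
        q = feedback_step(q, neighbour_opinions=[0, 0, 1])
    print(q)                                  # opinion 0 gets reinforced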
Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on
Graphs | cs.CV cs.LG cs.NE | A number of problems can be formulated as prediction on graph-structured
data. In this work, we generalize the convolution operator from regular grids
to arbitrary graphs while avoiding the spectral domain, which allows us to
handle graphs of varying size and connectivity. To move beyond a simple
diffusion, filter weights are conditioned on the specific edge labels in the
neighborhood of a vertex. Together with the proper choice of graph coarsening,
we explore constructing deep neural networks for graph classification. In
particular, we demonstrate the generality of our formulation in point cloud
classification, where we set the new state of the art, and on a graph
classification dataset, where we outperform other deep learning approaches. The
source code is available at https://github.com/mys007/ecc
| Martin Simonovsky, Nikos Komodakis | null | 1704.02901 | null | null |
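A minimal dense sketch of an edge-conditioned convolution: the weight matrix applied along each edge is generated from that edge's label by a small filter-generating network (here a toy linear map; self-loops and the paper's coarsening pipeline are omitted):

    import numpy as np

    rng = np.random.default_rng(0)
    d = 3
    M = rng.standard_normal((d * d, 2))            # toy "filter network"
    filter_net = lambda lbl: (M @ lbl).reshape(d, d)

    def ecc_layer(X, edges, edge_labels):
        # Average of label-conditioned linear maps over incoming edges.
        out = np.zeros_like(X)
        deg = np.zeros(X.shape[0])
        for (u, v), lbl in zip(edges, edge_labels):
            out[v] += filter_net(lbl) @ X[u]
            deg[v] += 1
        return out / np.maximum(deg, 1)[:, None]

    X = rng.standard_normal((4, d))
    edges = [(0, 1), (2, 1), (3, 2)]
    labels = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
    print(ecc_layer(X, edges, labels).shape)       # (4, 3)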
Multi-Agent Diverse Generative Adversarial Networks | cs.CV cs.AI cs.GR cs.LG stat.ML | We propose MAD-GAN, an intuitive generalization to the Generative Adversarial
Networks (GANs) and its conditional variants to address the well known problem
of mode collapse. First, MAD-GAN is a multi-agent GAN architecture
incorporating multiple generators and one discriminator. Second, to enforce
that different generators capture diverse high probability modes, the
discriminator of MAD-GAN is designed such that along with finding the real and
fake samples, it is also required to identify the generator that generated the
given fake sample. Intuitively, to succeed in this task, the discriminator must
learn to push different generators towards different identifiable modes. We
perform extensive experiments on synthetic and real datasets and compare
MAD-GAN with different variants of GAN. We show high quality diverse sample
generations for challenging tasks such as image-to-image translation and face
generation. In addition, we also show that MAD-GAN is able to disentangle
different modalities when trained using highly challenging diverse-class
dataset (e.g. dataset with images of forests, icebergs, and bedrooms). In the
end, we show its efficacy on the unsupervised feature representation task. In
Appendix, we introduce a similarity based competing objective (MAD-GAN-Sim)
which encourages different generators to generate diverse samples based on a
user defined similarity metric. We show its performance on the image-to-image
translation, and also show its effectiveness on the unsupervised feature
representation task.
| Arnab Ghosh and Viveka Kulharia and Vinay Namboodiri and Philip H. S.
Torr and Puneet K. Dokania | null | 1704.02906 | null | null |
On the Fine-Grained Complexity of Empirical Risk Minimization: Kernel
Methods and Neural Networks | cs.CC cs.DS cs.LG stat.ML | Empirical risk minimization (ERM) is ubiquitous in machine learning and
underlies most supervised learning methods. While there has been a large body
of work on algorithms for various ERM problems, the exact computational
complexity of ERM is still not understood. We address this issue for multiple
popular ERM problems including kernel SVMs, kernel ridge regression, and
training the final layer of a neural network. In particular, we give
conditional hardness results for these problems based on complexity-theoretic
assumptions such as the Strong Exponential Time Hypothesis. Under these
assumptions, we show that there are no algorithms that solve the aforementioned
ERM problems to high accuracy in sub-quadratic time. We also give similar
hardness results for computing the gradient of the empirical loss, which is the
main computational burden in many non-convex learning tasks.
| Arturs Backurs, Piotr Indyk, Ludwig Schmidt | null | 1704.02958 | null | null |
A Dual-Stage Attention-Based Recurrent Neural Network for Time Series
Prediction | cs.LG stat.ML | The Nonlinear autoregressive exogenous (NARX) model, which predicts the
current value of a time series based upon its previous values as well as the
current and past values of multiple driving (exogenous) series, has been
studied for decades. Despite the fact that various NARX models have been
developed, few of them can capture the long-term temporal dependencies
appropriately and select the relevant driving series to make predictions. In
this paper, we propose a dual-stage attention-based recurrent neural network
(DA-RNN) to address these two issues. In the first stage, we introduce an input
attention mechanism to adaptively extract relevant driving series (a.k.a.,
input features) at each time step by referring to the previous encoder hidden
state. In the second stage, we use a temporal attention mechanism to select
relevant encoder hidden states across all time steps. With this dual-stage
attention scheme, our model can not only make predictions effectively, but can
also be easily interpreted. Thorough empirical studies based upon the SML 2010
dataset and the NASDAQ 100 Stock dataset demonstrate that the DA-RNN can
outperform state-of-the-art methods for time series prediction.
| Yao Qin, Dongjin Song, Haifeng Chen, Wei Cheng, Guofei Jiang, and
Garrison Cottrell | null | 1704.02971 | null | null |
Stochastic Neural Networks for Hierarchical Reinforcement Learning | cs.AI cs.LG cs.NE cs.RO | Deep reinforcement learning has achieved many impressive results in recent
years. However, tasks with sparse rewards or long horizons continue to pose
significant challenges. To tackle these important problems, we propose a
general framework that first learns useful skills in a pre-training
environment, and then leverages the acquired skills for learning faster in
downstream tasks. Our approach brings together some of the strengths of
intrinsic motivation and hierarchical methods: the learning of useful skills is
guided by a single proxy reward, the design of which requires very minimal
domain knowledge about the downstream tasks. Then a high-level policy is
trained on top of these skills, significantly improving exploration and making
it possible to tackle sparse rewards in the downstream tasks. To
efficiently pre-train a large span of skills, we use Stochastic Neural Networks
combined with an information-theoretic regularizer. Our experiments show that
this combination is effective in learning a wide span of interpretable skills
in a sample-efficient way, and can significantly boost the learning performance
uniformly across a wide range of downstream tasks.
| Carlos Florensa, Yan Duan, Pieter Abbeel | null | 1704.03012 | null | null |
A probabilistic data-driven model for planar pushing | cs.RO cs.LG stat.ML | This paper presents a data-driven approach to model planar pushing
interaction to predict both the most likely outcome of a push and its expected
variability. The learned models rely on a variation of Gaussian processes with
input-dependent noise called Variational Heteroscedastic Gaussian processes
(VHGP) that capture the mean and variance of a stochastic function. We show
that we can learn accurate models that outperform analytical models after less
than 100 samples and saturate in performance with less than 1000 samples. We
validate the results against a collected dataset of repeated trajectories, and
use the learned models to study questions such as the nature of the variability
in pushing, and the validity of the quasi-static assumption.
| Maria Bauza, Alberto Rodriguez | null | 1704.03033 | null | null |
Learning from Multi-View Multi-Way Data via Structural Factorization
Machines | cs.LG | Real-world relations among entities can often be observed and determined by
different perspectives/views. For example, the decision made by a user on
whether to adopt an item relies on multiple aspects such as the contextual
information of the decision, the item's attributes, the user's profile and the
reviews given by other users. Different views may exhibit multi-way
interactions among entities and provide complementary information. In this
paper, we introduce a multi-tensor-based approach that can preserve the
underlying structure of multi-view data in a generic predictive model.
Specifically, we propose structural factorization machines (SFMs) that learn
the common latent spaces shared by multi-view tensors and automatically adjust
the importance of each view in the predictive model. Furthermore, the
complexity of SFMs is linear in the number of parameters, which makes SFMs
suitable to large-scale problems. Extensive experiments on real-world datasets
demonstrate that the proposed SFMs outperform several state-of-the-art methods
in terms of prediction accuracy and computational cost.
| Chun-Ta Lu, Lifang He, Hao Ding, Bokai Cao, Philip S. Yu | 10.1145/3178876.3186071 | 1704.03037 | null | null |
Semantically Consistent Regularization for Zero-Shot Recognition | cs.CV cs.AI cs.LG | The role of semantics in zero-shot learning is considered. The effectiveness
of previous approaches is analyzed according to the form of supervision
provided. While some learn semantics independently, others only supervise the
semantic subspace explained by training classes. Thus, the former is able to
constrain the whole space but lacks the ability to model semantic correlations.
The latter addresses this issue but leaves part of the semantic space
unsupervised. This complementarity is exploited in a new convolutional neural
network (CNN) framework, which proposes the use of semantics as constraints for
recognition. Although a CNN trained for classification has no transfer ability,
this can be encouraged by learning a hidden semantic layer together with a
semantic code for classification. Two forms of semantic constraints are then
introduced. The first is a loss-based regularizer that introduces a
generalization constraint on each semantic predictor. The second is a codeword
regularizer that favors semantic-to-class mappings consistent with prior
semantic knowledge while allowing these to be learned from data. Significant
improvements over the state-of-the-art are achieved on several datasets.
| Pedro Morgado, Nuno Vasconcelos | null | 1704.03039 | null | null |
CERN: Confidence-Energy Recurrent Network for Group Activity Recognition | cs.CV cs.LG stat.ML | This work is about recognizing human activities occurring in videos at
distinct semantic levels, including individual actions, interactions, and group
activities. The recognition is realized using a two-level hierarchy of Long
Short-Term Memory (LSTM) networks, forming a feed-forward deep architecture,
which can be trained end-to-end. In comparison with existing architectures of
LSTMs, we make two key contributions giving the name to our approach as
Confidence-Energy Recurrent Network -- CERN. First, instead of using the common
softmax layer for prediction, we specify a novel energy layer (EL) for
estimating the energy of our predictions. Second, rather than finding the
common minimum-energy class assignment, which may be numerically unstable under
uncertainty, we specify that the EL additionally computes the p-values of the
solutions, and in this way estimates the most confident energy minimum. The
evaluation on the Collective Activity and Volleyball datasets demonstrates: (i)
advantages of our two contributions relative to the common softmax and
energy-minimization formulations and (ii) a superior performance relative to
the state-of-the-art approaches.
| Tianmin Shu, Sinisa Todorovic, Song-Chun Zhu | null | 1704.03058 | null | null |
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation | cs.LG cs.RO | Deep learning and reinforcement learning methods have recently been used to
solve a variety of problems in continuous control domains. An obvious
application of these techniques is dexterous manipulation tasks in robotics
which are difficult to solve using traditional control theory or
hand-engineered approaches. One example of such a task is to grasp an object
and precisely stack it on another. Solving this difficult and practically
relevant problem in the real world is an important long-term goal for the field
of robotics. Here we take a step towards this goal by examining the problem in
simulation and providing models and techniques aimed at solving it. We
introduce two extensions to the Deep Deterministic Policy Gradient algorithm
(DDPG), a model-free Q-learning based method, which make it significantly more
data-efficient and scalable. Our results show that by making extensive use of
off-policy data and replay, it is possible to find control policies that
robustly grasp objects and stack them. Further, our results hint that it may
soon be feasible to train successful stacking policies by collecting
interactions on real robots.
| Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel
Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin
Riedmiller | null | 1704.03073 | null | null |
WRPN: Training and Inference using Wide Reduced-Precision Networks | cs.LG cs.AI cs.CV cs.NE | For computer vision applications, prior works have shown the efficacy of
reducing the numeric precision of model parameters (network weights) in deep
neural networks but also that reducing the precision of activations hurts model
accuracy much more than reducing the precision of model parameters. We study
schemes to train networks from scratch using reduced-precision activations
without hurting the model accuracy. We reduce the precision of activation maps
(along with model parameters) using a novel quantization scheme and increase
the number of filter maps in a layer, and find that this scheme matches or
surpasses the accuracy of the baseline full-precision network. As a result, one
can significantly reduce the dynamic memory footprint, memory bandwidth,
computational energy and speed up the training and inference process with
appropriate hardware support. We call our scheme WRPN - wide reduced-precision
networks. We report results using our proposed schemes and show that our
results are better than previously reported accuracies on ILSVRC-12 dataset
while being computationally less expensive compared to previously reported
reduced-precision networks.
| Asit Mishra, Jeffrey J Cook, Eriko Nurvitadhi and Debbie Marr | null | 1704.03079 | null | null |
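As a point of reference, a common k-bit uniform quantizer for activations clipped to [0, 1]; the paper's exact quantization scheme may differ in details, so this is only an illustrative stand-in:

    import numpy as np

    def quantize(x, k):
        # Map values in [0, 1] onto 2^k - 1 evenly spaced levels.
        levels = 2**k - 1
        return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

    a = np.array([0.03, 0.48, 0.77, 1.2])
    print(quantize(a, k=2))    # 2-bit activations: {0, 1/3, 2/3, 1}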
Composite Task-Completion Dialogue Policy Learning via Hierarchical Deep
Reinforcement Learning | cs.CL cs.AI cs.LG | Building a dialogue agent to fulfill complex tasks, such as travel planning,
is challenging because the agent has to learn to collectively complete multiple
subtasks. For example, the agent needs to reserve a hotel and book a flight so
that enough time is left for the commute between arrival and hotel check-in.
This paper addresses this challenge by formulating the task in the mathematical
framework of options over Markov Decision Processes (MDPs), and proposing a
hierarchical deep reinforcement learning approach to learning a dialogue
manager that operates at different temporal scales. The dialogue manager
consists of: (1) a top-level dialogue policy that selects among subtasks or
options, (2) a low-level dialogue policy that selects primitive actions to
complete the subtask given by the top-level policy, and (3) a global state
tracker that helps ensure all cross-subtask constraints are satisfied.
Experiments on a travel planning task with simulated and real users show that
our approach leads to significant improvements over three baselines, two based
on handcrafted rules and the other based on flat deep reinforcement learning.
| Baolin Peng and Xiujun Li and Lihong Li and Jianfeng Gao and Asli
Celikyilmaz and Sungjin Lee and Kam-Fai Wong | null | 1704.03084 | null | null |
Federated Tensor Factorization for Computational Phenotyping | cs.LG stat.ML | Tensor factorization models offer an effective approach to convert massive
electronic health records into meaningful clinical concepts (phenotypes) for
data analysis. These models need a large number of diverse samples to avoid
population bias. An open challenge is how to derive phenotypes jointly across
multiple hospitals, in which direct patient-level data sharing is not possible
(e.g., due to institutional policies). In this paper, we developed a novel
solution to enable federated tensor factorization for computational phenotyping
without sharing patient-level data. We developed secure data harmonization and
federated computation procedures based on alternating direction method of
multipliers (ADMM). Using this method, the multiple hospitals iteratively
update tensors and transfer secure summarized information to a central server,
and the server aggregates the information to generate phenotypes. We
demonstrated with real medical datasets that our method resembles the
centralized training model (based on combined datasets) in terms of accuracy
and phenotypes discovery while respecting privacy.
| Yejin Kim, Jimeng Sun, Hwanjo Yu, Xiaoqian Jiang | 10.1145/3097983.3098118 | 1704.03141 | null | null |
Parametric Gaussian Process Regression for Big Data | stat.ML cs.LG | This work introduces the concept of parametric Gaussian processes (PGPs),
which is built upon the seemingly self-contradictory idea of making Gaussian
processes parametric. Parametric Gaussian processes, by construction, are
designed to operate in "big data" regimes where one is interested in
quantifying the uncertainty associated with noisy data. The proposed
methodology circumvents the well-established need for stochastic variational
inference, a scalable algorithm for approximating posterior distributions. The
effectiveness of the proposed approach is demonstrated using an illustrative
example with simulated data and a benchmark dataset in the airline industry
with approximately 6 million records.
| Maziar Raissi | null | 1704.03144 | null | null |
struc2vec: Learning Node Representations from Structural Identity | cs.SI cs.LG stat.ML | Structural identity is a concept of symmetry in which network nodes are
identified according to the network structure and their relationship to other
nodes. Structural identity has been studied in theory and practice over the
past decades, but only recently has it been addressed with representational
learning techniques. This work presents struc2vec, a novel and flexible
framework for learning latent representations for the structural identity of
nodes. struc2vec uses a hierarchy to measure node similarity at different
scales, and constructs a multilayer graph to encode structural similarities and
generate structural context for nodes. Numerical experiments indicate that
state-of-the-art techniques for learning node representations fail in capturing
stronger notions of structural identity, while struc2vec exhibits much superior
performance in this task, as it overcomes limitations of prior approaches. As a
consequence, numerical experiments indicate that struc2vec improves performance
on classification tasks that depend more on structural identity.
| Leonardo F. R. Ribeiro, Pedro H. P. Savarese, Daniel R. Figueiredo | 10.1145/3097983.3098061 | 1704.03165 | null | null |
Simplified Stochastic Feedforward Neural Networks | cs.LG | It has been believed that stochastic feedforward neural networks (SFNNs) have
several advantages beyond deterministic deep neural networks (DNNs): they have
more expressive power allowing multi-modal mappings and regularize better due
to their stochastic nature. However, training large-scale SFNN is notoriously
harder. In this paper, we aim at developing efficient training methods for
SFNN, in particular using known architectures and pre-trained parameters of
DNN. To this end, we propose a new intermediate stochastic model, called
Simplified-SFNN, which can be built upon any baseline DNN and approximates
certain SFNN by simplifying its upper latent units above stochastic ones. The
main novelty of our approach is in establishing the connection between three
models, i.e., DNN->Simplified-SFNN->SFNN, which naturally leads to an efficient
training procedure of the stochastic models utilizing pre-trained parameters of
DNN. Using several popular DNNs, we show how they can be effectively
transferred to the corresponding stochastic models for both multi-modal and
classification tasks on MNIST, TFD, CASIA, CIFAR-10, CIFAR-100 and SVHN
datasets. In particular, we train a stochastic model of 28 layers and 36
million parameters, where training such a large-scale stochastic network is
significantly challenging without using Simplified-SFNN.
| Kimin Lee, Jaehyung Kim, Song Chong, Jinwoo Shin | null | 1704.03188 | null | null |
On Feature Reduction using Deep Learning for Trend Prediction in Finance | q-fin.TR cs.LG | One of the major advantages in using Deep Learning for Finance is to embed a
large collection of information into investment decisions. A way to do that is
by means of compression, which leads us to consider a smaller feature space.
Several studies are proving that non-linear feature reduction performed by Deep
Learning tools is effective in price trend prediction. The focus has been put
mainly on Restricted Boltzmann Machines (RBM) and on the output obtained by
them. Little attention has been paid to Auto-Encoders (AE) as an alternative means to
perform a feature reduction. In this paper we investigate the application of
both RBM and AE in more general terms, attempting to outline how architectural
and input space characteristics can affect the quality of prediction.
| Luigi Troiano and Elena Mejuto and Pravesh Kriplani | null | 1704.03205 | null | null |
Persian Wordnet Construction using Supervised Learning | cs.CL cs.LG stat.ML | This paper presents an automated supervised method for Persian wordnet
construction. Using a Persian corpus and a bi-lingual dictionary, the initial
links between Persian words and Princeton WordNet synsets have been generated.
These links will be discriminated later as correct or incorrect by employing
seven features in a trained classification system. The whole method is just a
classification system, which has been trained on a train set containing FarsNet
as a set of correct instances. State-of-the-art results on the automatically
derived Persian wordnet are achieved. The resulting wordnet, with a precision of
91.18%, includes more than 16,000 words and 22,000 synsets.
| Zahra Mousavi, Heshaam Faili | null | 1704.03223 | null | null |
Interpretable Explanations of Black Boxes by Meaningful Perturbation | cs.CV cs.AI cs.LG stat.ML | As machine learning algorithms are increasingly applied to high impact yet
high risk tasks, such as medical diagnosis or autonomous driving, it is
critical that researchers can explain how such algorithms arrived at their
predictions. In recent years, a number of image saliency methods have been
developed to summarize where highly complex neural networks "look" in an image
for evidence for their predictions. However, these techniques are limited by
their heuristic nature and architectural constraints. In this paper, we make
two main contributions: First, we propose a general framework for learning
different kinds of explanations for any black box algorithm. Second, we
specialise the framework to find the part of an image most responsible for a
classifier decision. Unlike previous works, our method is model-agnostic and
testable because it is grounded in explicit and interpretable image
perturbations.
| Ruth Fong and Andrea Vedaldi | 10.1109/ICCV.2017.371 | 1704.03296 | null | null |
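A toy version of the objective such methods minimize over the mask: keep the mask small while blending the image toward a perturbed (e.g. blurred) version wherever the mask deletes evidence. The classifier, mask and regularizer below are stand-ins; the optimization over the mask is omitted:

    import numpy as np

    def perturbation_objective(mask, x, x_blur, score_fn, lam=0.1):
        # Blend toward the blurred image where mask < 1, penalize deletion.
        x_pert = mask * x + (1.0 - mask) * x_blur
        return score_fn(x_pert) + lam * np.sum(1.0 - mask)

    rng = np.random.default_rng(0)
    x = rng.random((8, 8))
    x_blur = np.full_like(x, x.mean())
    score_fn = lambda img: img[2:4, 2:4].sum()   # toy "class score"
    print(perturbation_objective(np.ones_like(x), x, x_blur, score_fn))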
The MATLAB Toolbox SciXMiner: User's Manual and Programmer's Guide | cs.LG | The Matlab toolbox SciXMiner is designed for the visualization and analysis
of time series and features with a special focus to classification problems. It
was developed at the Institute of Applied Computer Science of the Karlsruhe
Institute of Technology (KIT), a member of the Helmholtz Association of German
Research Centres in Germany. The aim was to provide an open platform for the
development and improvement of data mining methods and its applications to
various medical and technical problems. SciXMiner bases on Matlab (tested for
the version 2017a). Many functions do not require additional standard toolboxes
but some parts of Signal, Statistics and Wavelet toolboxes are used for special
cases. The decision to a Matlab-based solution was made to use the wide
mathematical functionality of this package provided by The Mathworks Inc.
SciXMiner is controlled by a graphical user interface (GUI) with menu items and
control elements like popup lists, checkboxes and edit elements. This makes it
easier to work with SciXMiner for inexperienced users. Furthermore, an
automatization and batch standardization of analyzes is possible using macros.
The standard Matlab style using the command line is also available. SciXMiner
is an open source software. The download page is
http://sourceforge.net/projects/SciXMiner. It is licensed under the conditions
of the GNU General Public License (GNU-GPL) of The Free Software Foundation.
| Ralf Mikut, Andreas Bartschat, Wolfgang Doneit, Jorge \'Angel
Gonz\'alez Ordiano, Benjamin Schott, Johannes Stegmaier, Simon Waczowicz,
Markus Reischl | null | 1704.03298 | null | null |
Sublinear Time Low-Rank Approximation of Positive Semidefinite Matrices | cs.DS cs.LG math.NA | We show how to compute a relative-error low-rank approximation to any
positive semidefinite (PSD) matrix in sublinear time, i.e., for any $n \times
n$ PSD matrix $A$, in $\tilde O(n \cdot poly(k/\epsilon))$ time we output a
rank-$k$ matrix $B$, in factored form, for which $\|A-B\|_F^2 \leq
(1+\epsilon)\|A-A_k\|_F^2$, where $A_k$ is the best rank-$k$ approximation to
$A$. When $k$ and $1/\epsilon$ are not too large compared to the sparsity of
$A$, our algorithm does not need to read all entries of the matrix. Hence, we
significantly improve upon previous $nnz(A)$ time algorithms based on oblivious
subspace embeddings, and bypass an $nnz(A)$ time lower bound for general
matrices (where $nnz(A)$ denotes the number of non-zero entries in the matrix).
We prove time lower bounds for low-rank approximation of PSD matrices, showing
that our algorithm is close to optimal. Finally, we extend our techniques to
give sublinear time algorithms for low-rank approximation of $A$ in the (often
stronger) spectral norm metric $\|A-B\|_2^2$ and for ridge regression on PSD
matrices.
| Cameron Musco and David P. Woodruff | null | 1704.03371 | null | null |
ENWalk: Learning Network Features for Spam Detection in Twitter | cs.LG cs.SI | Social media are increasing their influence, with vast public information
leading to their active use for marketing by companies and organizations.
Such marketing promotions are difficult to identify, unlike those in
traditional media such as TV and newspapers. It is therefore important to
identify the promoters in social media. Although research on this problem is
active and ongoing, existing approaches are far from solving it. To identify
such imposters, it is essential to understand their strategies for social
circle creation and their content-posting dynamics. Are there specific spammer
types? How successful is each type? We analyze these questions in the light
of social relationships in Twitter. Our analyses discover two types of spammers
and their relationships with the dynamics of content posts. Our results
discover novel dynamics of spamming which are intuitive and arguable. We
propose ENWalk, a framework to detect the spammers by learning the feature
representations of the users in the social media. We learn the feature
representations using the random walks biased on the spam dynamics.
Experimental results on large-scale twitter network and the corresponding
tweets show the effectiveness of our approach, which outperforms existing
approaches.
| K C Santosh, Suman Kalyan Maity and Arjun Mukherjee | null | 1704.03404 | null | null |
Efficient Large Scale Clustering based on Data Partitioning | cs.DB cs.LG | Clustering techniques are very attractive for extracting and identifying
patterns in datasets. However, their application to very large spatial datasets
presents numerous challenges such as high-dimensionality data, heterogeneity,
and high complexity of some algorithms. For instance, some algorithms may have
linear complexity but they require the domain knowledge in order to determine
their input parameters. Distributed clustering techniques constitute a very
good alternative to the big data challenges (e.g., Volume, Variety, Veracity,
and Velocity). Usually these techniques consist of two phases. The first phase
generates local models or patterns and the second one tends to aggregate the
local results to obtain global models. While the first phase can be executed in
parallel on each site and, therefore, efficient, the aggregation phase is
complex, time consuming and may produce incorrect and ambiguous global clusters
and therefore incorrect models. In this paper we propose a new distributed
clustering approach to deal efficiently with both phases, generation of local
results and generation of global models by aggregation. For the first phase,
our approach is capable of analysing the datasets located in each site using
different clustering techniques. The aggregation phase is designed in such a
way that the final clusters are compact and accurate while the overall process
is efficient in time and memory allocation. For the evaluation, we use two
well-known clustering algorithms, K-Means and DBSCAN. One of the key outputs of
this distributed clustering technique is that the number of global clusters is
dynamic, no need to be fixed in advance. Experimental results show that the
approach is scalable and produces high quality results.
| Malika Bendechache, Nhien-An Le-Khac, M-Tahar Kechadi | 10.1109/DSAA.2016.70 | 1704.03421 | null | null |
The Space of Transferable Adversarial Examples | stat.ML cs.CR cs.LG | Adversarial examples are maliciously perturbed inputs designed to mislead
machine learning (ML) models at test-time. They often transfer: the same
adversarial example fools more than one model.
In this work, we propose novel methods for estimating the previously unknown
dimensionality of the space of adversarial inputs. We find that adversarial
examples span a contiguous subspace of large (~25) dimensionality. Adversarial
subspaces with higher dimensionality are more likely to intersect. We find that
for two different models, a significant fraction of their subspaces is shared,
thus enabling transferability.
In the first quantitative analysis of the similarity of different models'
decision boundaries, we show that these boundaries are actually close in
arbitrary directions, whether adversarial or benign. We conclude by formally
studying the limits of transferability. We derive (1) sufficient conditions on
the data distribution that imply transferability for simple model classes and
(2) examples of scenarios in which transfer does not occur. These findings
indicate that it may be possible to design defenses against transfer-based
attacks, even for models that are vulnerable to direct attacks.
| Florian Tram\`er, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick
McDaniel | null | 1704.03453 | null | null |
Personalized Survival Predictions for Cardiac Transplantation via Trees
of Predictors | stat.AP cs.LG | Given the limited pool of donor organs, accurate predictions of survival on
the wait list and post transplantation are crucial for cardiac transplantation
decisions and policy. However, current clinical risk scores do not yield
accurate predictions. We develop a new methodology (ToPs, Trees of Predictors)
built on the principle that specific predictors should be used for specific
clusters within the target population. ToPs discovers these specific clusters
of patients and the specific predictor that perform best for each cluster. In
comparison with current clinical risk scoring systems, our method provides
significant improvements in the prediction of survival time on the wait list
and post transplantation. For example, in terms of 3 month survival for
patients who were on the US patient wait list in the period 1985 to 2015, our
method achieves an AUC of 0.847, while the best commonly used clinical risk
score (MAGGIC) achieves 0.630. In terms of 3 month survival/mortality
predictions (in comparison to MAGGIC), holding specificity at 80.0 percent, our
algorithm correctly predicts survival for 1,228 (26.0 percent) more patients
out of 4,723 who actually survived; holding sensitivity at 80.0 percent, our
algorithm correctly predicts mortality for 839 (33.0 percent) more patients out
of 2,542 who did not survive. Our method achieves similar improvements for other time
horizons and for predictions post transplantation. Therefore, we offer a more
accurate, personalized approach to survival analysis that can benefit patients,
clinicians and policymakers in making clinical decisions and setting clinical
policy. Because risk prediction is widely used in diagnostic and prognostic
clinical decision making across diseases and clinical specialties, the
implications of our methods are far reaching.
| J. Yoon, W. R. Zame, A. Banerjee, M. Cadeiras, A. M. Alaa, M. van der
Schaar | null | 1704.03458 | null | null |
A Neural Representation of Sketch Drawings | cs.NE cs.LG stat.ML | We present sketch-rnn, a recurrent neural network (RNN) able to construct
stroke-based drawings of common objects. The model is trained on thousands of
crude human-drawn images representing hundreds of classes. We outline a
framework for conditional and unconditional sketch generation, and describe new
robust training methods for generating coherent sketch drawings in a vector
format.
| David Ha and Douglas Eck | null | 1704.03477 | null | null |
Leveraging Term Banks for Answering Complex Questions: A Case for Sparse
Vectors | cs.IR cs.CL cs.LG | While open-domain question answering (QA) systems have proven effective for
answering simple questions, they struggle with more complex questions. Our goal
is to answer more complex questions reliably, without incurring a significant
cost in knowledge resource construction to support the QA. One readily
available knowledge resource is a term bank, enumerating the key concepts in a
domain. We have developed an unsupervised learning approach that leverages a
term bank to guide a QA system, by representing the terminological knowledge
with thousands of specialized vector spaces. In experiments with complex
science questions, we show that this approach significantly outperforms several
state-of-the-art QA systems, demonstrating that significant leverage can be
gained from continuous vector representations of domain terminology.
| Peter D. Turney | null | 1704.03543 | null | null |
Active classification with comparison queries | cs.LG cs.CG | We study an extension of active learning in which the learning algorithm may
ask the annotator to compare the distances of two examples from the boundary of
their label-class. For example, in a recommendation system application (say for
restaurants), the annotator may be asked whether she liked or disliked a
specific restaurant (a label query); or which one of two restaurants did she
like more (a comparison query).
We focus on the class of half spaces, and show that under natural
assumptions, such as large margin or bounded bit-description of the input
examples, it is possible to reveal all the labels of a sample of size $n$ using
approximately $O(\log n)$ queries. This implies an exponential improvement over
classical active learning, where only label queries are allowed. We complement
these results by showing that if any of these assumptions is removed then, in
the worst case, $\Omega(n)$ queries are required.
Our results follow from a new general framework of active learning with
additional queries. We identify a combinatorial dimension, called the
\emph{inference dimension}, that captures the query complexity when each
additional query is determined by $O(1)$ examples (such as comparison queries,
each of which is determined by the two compared examples). Our results for half
spaces follow by bounding the inference dimension in the cases discussed above.
| Daniel M. Kane and Shachar Lovett and Shay Moran and Jiapeng Zhang | null | 1704.03564 | null | null |
Representation Stability as a Regularizer for Improved Text Analytics
Transfer Learning | cs.CL cs.LG | Although neural networks are well suited for sequential transfer learning
tasks, the catastrophic forgetting problem hinders proper integration of prior
knowledge. In this work, we propose a solution to this problem by using a
multi-task objective based on the idea of distillation and a mechanism that
directly penalizes forgetting at the shared representation layer during the
knowledge integration phase of training. We demonstrate our approach on a
Twitter domain sentiment analysis task with sequential knowledge transfer from
four related tasks. We show that our technique outperforms networks fine-tuned
to the target task. Additionally, we show both through empirical evidence and
examples that it does not forget useful knowledge from the source task that is
forgotten during standard fine-tuning. Surprisingly, we find that first
distilling a human-made rule-based sentiment engine into a recurrent neural
network and then integrating the knowledge with the target task data leads to a
substantial gain in generalization performance. Our experiments demonstrate the
power of multi-source transfer techniques in practical text analytics problems
when paired with distillation. In particular, for the SemEval 2016 Task 4
Subtask A (Nakov et al., 2016) dataset we surpass the state of the art
established during the competition with a comparatively simple model
architecture that is not even competitive when trained on only the labeled task
specific data.
| Matthew Riemer, Elham Khabiri, and Richard Goodwin | null | 1704.03617 | null | null |
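For concreteness, the soft-target distillation term that such multi-task objectives typically include, with the standard temperature-scaled softmax (the forgetting penalty on the shared representation layer is not shown; values are illustrative):

    import numpy as np

    def softmax(z, T=1.0):
        e = np.exp(z / T - np.max(z / T))
        return e / e.sum()

    def distillation_loss(student_logits, teacher_logits, T=2.0):
        # Cross-entropy against the teacher's temperature-softened targets,
        # scaled by T^2 as is conventional.
        p_t = softmax(teacher_logits, T)
        p_s = softmax(student_logits, T)
        return -np.sum(p_t * np.log(p_s)) * T * T

    print(distillation_loss(np.array([2.0, 0.5, -1.0]),
                            np.array([1.8, 0.7, -0.9])))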
Sampling-based speech parameter generation using moment-matching
networks | cs.SD cs.LG stat.ML | This paper presents sampling-based speech parameter generation using
moment-matching networks for Deep Neural Network (DNN)-based speech synthesis.
Although people never produce exactly the same speech even when trying to express
the same linguistic and para-linguistic information, typical statistical speech
synthesis produces completely the same speech, i.e., there is no
inter-utterance variation in synthetic speech. To give synthetic speech natural
inter-utterance variation, this paper builds DNN acoustic models that make it
possible to randomly sample speech parameters. The DNNs are trained so that
they make the moments of generated speech parameters close to those of natural
speech parameters. Since the variation of speech parameters is compressed into
a low-dimensional simple prior noise vector, our algorithm has lower
computation cost than direct sampling of speech parameters. As the first step
towards generating synthetic speech that has natural inter-utterance variation,
this paper investigates whether or not the proposed sampling-based generation
deteriorates synthetic speech quality. In evaluation, we compare speech quality
of conventional maximum likelihood-based generation and proposed sampling-based
generation. The result demonstrates the proposed generation causes no
degradation in speech quality.
| Shinnosuke Takamichi, Tomoki Koriyama, Hiroshi Saruwatari | null | 1704.03626 | null | null |
Energy Propagation in Deep Convolutional Neural Networks | cs.IT cs.LG math.FA math.IT stat.ML | Many practical machine learning tasks employ very deep convolutional neural
networks. Such large depths pose formidable computational challenges in
training and operating the network. It is therefore important to understand how
fast the energy contained in the propagated signals (a.k.a. feature maps)
decays across layers. In addition, it is desirable that the feature extractor
generated by the network be informative in the sense of the only signal mapping
to the all-zeros feature vector being the zero input signal. This "trivial
null-set" property can be accomplished by asking for "energy conservation" in
the sense of the energy in the feature vector being proportional to that of the
corresponding input signal. This paper establishes conditions for energy
conservation (and thus for a trivial null-set) for a wide class of deep
convolutional neural network-based feature extractors and characterizes
corresponding feature map energy decay rates. Specifically, we consider general
scattering networks employing the modulus non-linearity and we find that under
mild analyticity and high-pass conditions on the filters (which encompass,
inter alia, various constructions of Weyl-Heisenberg filters, wavelets,
ridgelets, ($\alpha$)-curvelets, and shearlets) the feature map energy decays
at least polynomially fast. For broad families of wavelets and Weyl-Heisenberg
filters, the guaranteed decay rate is shown to be exponential. Moreover, we
provide handy estimates of the number of layers needed to have at least
$((1-\varepsilon)\cdot 100)\%$ of the input signal energy be contained in the
feature vector.
| Thomas Wiatowski and Philipp Grohs and Helmut B\"olcskei | null | 1704.03636 | null | null |
Investigation on the use of Hidden-Markov Models in automatic
transcription of music | stat.ML cs.LG cs.SD | Hidden Markov Models (HMMs) are a ubiquitous tool to model time series data,
and have been widely used in two main tasks of Automatic Music Transcription
(AMT): note segmentation, i.e. identifying the played notes after a multi-pitch
estimation, and sequential post-processing, i.e. correcting note segmentation
using training data. In this paper, we employ the multi-pitch estimation method
called Probabilistic Latent Component Analysis (PLCA), and develop AMT systems
by integrating different HMM-based modules in this framework. For note
segmentation, we use two different two-state on/off HMMs, including a
higher-order one for duration modeling. For sequential post-processing, we
focus on musicological modeling of polyphonic harmonic transitions, using
first- and second-order HMMs whose states are defined through candidate note
mixtures. These different PLCA plus HMM systems have been evaluated
comparatively on two different instrument repertoires, namely the piano (using
the MAPS database) and the marovany zither. Our results show that the use of
HMMs could bring noticeable improvements to transcription results, depending on
the instrument repertoire.
| D. Cazau, G. Nuel | null | 1704.03711 | null | null |
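For readers unfamiliar with the on/off segmentation step, a generic Viterbi decoder over a two-state HMM looks roughly as follows (a sketch with made-up activations and transition probabilities; in the paper the emission scores come from PLCA pitch activations):

import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Most likely state path for an HMM.

    log_emit:  (T, S) per-frame log-likelihoods
    log_trans: (S, S) log transition matrix
    log_init:  (S,) log initial state probabilities
    """
    T, S = log_emit.shape
    delta = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans            # (previous, current)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Toy on/off note segmentation: state 0 = note off, state 1 = note on.
act = np.array([0.1, 0.2, 0.9, 0.8, 0.85, 0.2, 0.1])     # pitch activation
log_emit = np.log(np.stack([1 - act, act], axis=1))
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))    # sticky states
log_init = np.log(np.array([0.5, 0.5]))
print(viterbi(log_emit, log_trans, log_init))             # -> [0, 0, 1, 1, 1, 0, 0]

The sticky transitions are what smooth out isolated spurious frames, which is precisely the benefit HMM post-processing brings to raw multi-pitch estimates.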
Deep Extreme Multi-label Learning | cs.LG | Extreme multi-label learning (XML) or classification has been a practical and
important problem since the boom of big data. The main challenge lies in the
exponential label space which involves $2^L$ possible label sets especially
when the label dimension $L$ is huge, e.g., in millions for Wikipedia labels.
This paper is motivated to better explore the label space by originally
establishing an explicit label graph. In the meanwhile, deep learning has been
widely studied and used in various classification problems including
multi-label classification, however it has not been properly introduced to XML,
where the label space can be as large as in millions. In this paper, we propose
a practical deep embedding method for extreme multi-label classification, which
harvests the ideas of non-linear embedding and graph priors-based label space
modeling simultaneously. Extensive experiments on public datasets for XML show
that our method performs competitively against state-of-the-art results.
| Wenjie Zhang, Junchi Yan, Xiangfeng Wang and Hongyuan Zha | null | 1704.03718 | null | null |
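A minimal sketch of the label-graph prior idea, assuming a co-occurrence graph over labels and a Laplacian smoothness penalty on label embeddings (the paper's actual model couples a prior of this flavor with a deep non-linear embedding; the matrices below are toy assumptions):

import numpy as np

def label_graph_regularizer(Y, E):
    """Graph-prior penalty encouraging co-occurring labels to share embeddings.

    Y: (n_samples, L) binary label matrix; E: (L, d) label embeddings.
    Returns trace(E^T L E) with L the Laplacian of the co-occurrence graph.
    """
    A = (Y.T @ Y).astype(float)        # label co-occurrence counts
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=1)) - A     # unnormalized graph Laplacian
    return np.trace(E.T @ L @ E)

rng = np.random.default_rng(2)
Y = (rng.random((100, 6)) < 0.2).astype(int)   # toy multi-label data
E = rng.normal(size=(6, 3))                    # toy label embeddings
print(label_graph_regularizer(Y, E))

Added to a classification loss, such a term pulls frequently co-occurring labels toward nearby points in the embedding space, which is one way of exploiting the otherwise intractably large label space.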
Deep Q-learning from Demonstrations | cs.AI cs.LG | Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages even relatively small sets of
demonstration data to massively accelerate the learning process, and that
automatically assesses the necessary ratio of demonstration data while
learning, thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN.
| Todd Hester, Matej Vecerik, Olivier Pietquin, Marc Lanctot, Tom
Schaul, Bilal Piot, Dan Horgan, John Quan, Andrew Sendonaris, Gabriel
Dulac-Arnold, Ian Osband, John Agapiou, Joel Z. Leibo, Audrunas Gruslys | null | 1704.03732 | null | null |
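The supervised component combined with the TD updates is a large-margin classification loss over the demonstrator's actions; a minimal numpy sketch (the margin value and toy Q-values here are assumptions):

import numpy as np

def large_margin_loss(q_values, demo_action, margin=0.8):
    """Supervised margin term used alongside TD updates.

    Pushes the demonstrator's action to score at least `margin` above every
    other action: max_a [Q(s,a) + l(a_E,a)] - Q(s,a_E), where l(a_E,a) is 0
    for the demonstrated action and `margin` otherwise.
    """
    penalties = np.full_like(q_values, margin)
    penalties[demo_action] = 0.0
    return np.max(q_values + penalties) - q_values[demo_action]

q = np.array([1.2, 0.9, 1.1])                 # toy Q-values for 3 actions
print(large_margin_loss(q, demo_action=0))    # 0.7: action 2 is inside the margin

The loss is zero only once the demonstrated action dominates all alternatives by the margin, which is what grounds the values of unseen actions before the agent has gathered its own experience.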
Deep-FExt: Deep Feature Extraction for Vessel Segmentation and
Centerline Prediction | stat.ML cs.CV cs.LG | Feature extraction is a crucial task in image and pixel (voxel)
classification and regression in biomedical image modeling. In this work we
present a machine learning based feature extraction scheme built on inception
models for pixel classification tasks. We extract features under multi-scale
and multi-layer schemes through convolutional operators. Fully Convolutional
Network layers are later stacked on these feature extraction layers and
trained end-to-end for the purpose of classification. We test our model on the
DRIVE and STARE public data sets for the purpose of segmentation and centerline
detection, and it outperforms most existing hand-crafted or deterministic
feature schemes found in the literature. We achieve an average maximum Dice of
0.85 on the DRIVE data set, which outperforms the scores of the second human
annotator of this data set. We also achieve an average maximum Dice of 0.85 and
a kappa of 0.84 on the STARE data set. Though these data sets are mainly 2-D, we
also propose ways of extending this feature extraction scheme to handle 3-D
datasets.
| Giles Tetteh, Markus Rempfler, Bjoern H. Menze, Claus Zimmer | 10.1007/978-3-319-67389-9_40 | 1704.03743 | null | null |
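Since the results are reported as maximum Dice scores, here is a minimal sketch of the metric on binary masks (the toy inputs are illustrative):

import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A intersect B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice(pred, truth), 3))   # 0.667

The "maximum" Dice quoted in the abstract is typically obtained by sweeping the binarization threshold on the soft network output and keeping the best score.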
Enabling Embedded Inference Engine with ARM Compute Library: A Case
Study | cs.LG | When you need to enable deep learning on low-cost embedded SoCs, is it better
to port an existing deep learning framework or should you build one from
scratch? In this paper, we share our practical experiences of building an
embedded inference engine using ARM Compute Library (ACL). The results show
that, contrary to conventional wisdom, for simple models it takes much
less development time to build an inference engine from scratch compared to
porting existing frameworks. In addition, by utilizing ACL, we managed to build
an inference engine that outperforms TensorFlow by 25%. Our conclusion is that,
on embedded devices, we most likely will use very simple deep learning models
for inference, and with well-developed building blocks such as ACL, it may be
better in both performance and development time to build the engine from
scratch.
| Dawei Sun, Shaoshan Liu, Jean-Luc Gaudiot | null | 1704.03751 | null | null |
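ACL itself is a C++ library, but as a language-agnostic illustration of how few building blocks a "very simple model" inference engine actually needs, here is a hypothetical numpy forward pass (naive convolution, ReLU, dense layer; all shapes and weights are made up):

import numpy as np

def conv2d_valid(x, k):
    """Naive valid 2-D convolution (implemented as cross-correlation), the core op."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(3)
image = rng.normal(size=(8, 8))             # toy input
kernel = rng.normal(size=(3, 3))            # toy conv weights
dense_w = rng.normal(size=(36, 2))          # 6x6 feature map flattened -> 2 logits
logits = relu(conv2d_valid(image, kernel)).reshape(-1) @ dense_w
print(logits)

The point mirrored from the paper: when the model is this simple, hand-assembling a few well-optimized primitives (as ACL provides) can beat porting a full framework in both effort and latency.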
A Neural Parametric Singing Synthesizer | cs.SD cs.CL cs.LG | We present a new model for singing synthesis based on a modified version of
the WaveNet architecture. Instead of modeling raw waveform, we model features
produced by a parametric vocoder that separates the influence of pitch and
timbre. This allows conveniently modifying pitch to match any target melody,
facilitates training on more modest dataset sizes, and significantly reduces
training and generation times. Our model makes frame-wise predictions using
mixture density outputs rather than categorical outputs in order to reduce the
required parameter count. As we found overfitting to be an issue with the
relatively small datasets used in our experiments, we propose a method to
regularize the model and make the autoregressive generation process more robust
to prediction errors. Using a simple multi-stream architecture, harmonic,
aperiodic and voiced/unvoiced components can all be predicted in a coherent
manner. We compare our method to existing parametric statistical and
state-of-the-art concatenative methods using quantitative metrics and a
listening test. While naive implementations of the autoregressive generation
algorithm tend to be inefficient, using a smart algorithm we can greatly speed
up the process and obtain a system that is competitive in both speed and
quality.
| Merlijn Blaauw, Jordi Bonada | null | 1704.03809 | null | null |
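A minimal sketch of the mixture density output mentioned above: per frame, the network head emits Gaussian mixture parameters instead of a categorical distribution, and training minimizes the negative log-likelihood (scalar 1-D case with made-up parameters; the real model is multivariate and autoregressive):

import numpy as np

def mdn_nll(y, weights, means, sigmas):
    """Negative log-likelihood of scalar target y under a 1-D Gaussian mixture.

    weights, means, sigmas: (K,) mixture parameters produced per frame by the
    network head in place of a softmax over quantized output values.
    """
    comp = weights * np.exp(-0.5 * ((y - means) / sigmas) ** 2) \
           / (sigmas * np.sqrt(2 * np.pi))
    return -np.log(np.sum(comp) + 1e-12)

print(mdn_nll(0.3, np.array([0.6, 0.4]), np.array([0.2, 1.0]),
              np.array([0.3, 0.5])))

Relative to the categorical outputs of raw-waveform WaveNet, a K-component mixture head needs only 3K parameters per output dimension, which is the parameter-count saving the abstract refers to.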
MAGAN: Margin Adaptation for Generative Adversarial Networks | cs.LG stat.ML | We propose the Margin Adaptation for Generative Adversarial Networks (MAGANs)
algorithm, a novel training procedure for GANs to improve stability and
performance by using an adaptive hinge loss function. We estimate the
appropriate hinge loss margin with the expected energy of the target
distribution, and derive principled criteria for when to update the margin. We
prove that our method converges to its global optimum under certain
assumptions. Evaluated on the task of unsupervised image generation, the
proposed training procedure is simple yet robust on a diverse set of data, and
achieves qualitative and quantitative improvements compared to the
state-of-the-art.
| Ruohan Wang, Antoine Cully, Hyung Jin Chang, Yiannis Demiris | null | 1704.03817 | null | null |
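A rough sketch of the adaptive hinge idea, assuming an energy-based discriminator; the margin-update rule below is a simplified stand-in for the paper's principled criteria, not the actual algorithm:

import numpy as np

def discriminator_hinge_loss(energy_real, energy_fake, margin):
    """Energy-based hinge loss whose margin MAGAN adapts during training."""
    return np.mean(energy_real) + np.mean(np.maximum(0.0, margin - energy_fake))

def maybe_update_margin(margin, energy_real_epoch):
    """Simplified stand-in (an assumption here): shrink the margin toward the
    expected energy of real samples when that expectation drops below it."""
    expected = np.mean(energy_real_epoch)
    return expected if expected < margin else margin

rng = np.random.default_rng(4)
margin = 1.0
real_e = rng.random(64) * 0.4    # toy energies of real samples
fake_e = rng.random(64) * 0.9    # toy energies of generated samples
print(discriminator_hinge_loss(real_e, fake_e, margin))
print(maybe_update_margin(margin, real_e))

Tying the margin to the expected energy of the target distribution is what keeps the hinge term informative as the discriminator's energy scale drifts over training.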
Deep Neural Network Based Precursor microRNA Prediction on Eleven
Species | q-bio.QM cs.LG | MicroRNAs (miRNAs) are small non-coding RNAs that regulate gene expression
at the post-transcriptional level. Determining whether a sequence segment is
miRNA is experimentally challenging. Also, experimental results are sensitive
to the experimental environment. These limitations inspire the development of
computational methods for predicting miRNAs. We propose a deep learning
based classification model, called DP-miRNA, for predicting the precursor miRNA
sequence that contains the miRNA sequence. DP-miRNA is a feature-based
Restricted Boltzmann Machine method that uses 58 features categorized into
four groups: sequence features, folding measures, stem-loop features, and
statistical features. We evaluate the performance of DP-miRNA
on eleven data sets of varying species, including human. The deep
neural network based classification outperformed support vector machine, neural
network, naive Bayes classifiers, k-nearest neighbors, random forests, and a
hybrid system combining support vector machine and genetic algorithm.
| Jaya Thomas and Lee Sael | null | 1704.03834 | null | null |
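As an illustration of the Restricted Boltzmann Machine building block over the 58-feature input, here is a single contrastive-divergence (CD-1) update in numpy (the binarized toy features and hidden-layer size are assumptions; the paper stacks such layers into a deep classifier):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_v, b_h, rng, lr=0.01):
    """One contrastive-divergence (CD-1) update for a binary RBM layer."""
    h_prob = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(h_prob.shape) < h_prob).astype(float)  # sample hidden units
    v1 = sigmoid(h0 @ W.T + b_v)                             # mean-field reconstruction
    h1 = sigmoid(v1 @ W + b_h)
    W += lr * (v0.T @ h_prob - v1.T @ h1) / len(v0)          # positive - negative phase
    b_v += lr * np.mean(v0 - v1, axis=0)
    b_h += lr * np.mean(h_prob - h1, axis=0)
    return W, b_v, b_h

rng = np.random.default_rng(5)
features = (rng.random((32, 58)) < 0.5).astype(float)  # toy binarized feature rows
W = rng.normal(0.0, 0.01, size=(58, 16))
b_v, b_h = np.zeros(58), np.zeros(16)
W, b_v, b_h = cd1_step(features, W, b_v, b_h, rng)
print(np.abs(W).mean())

After unsupervised pre-training of this kind, the hidden activations serve as learned representations for the final supervised classification layer.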
Determining Song Similarity via Machine Learning Techniques and Tagging
Information | cs.LG stat.ML | The task of determining item similarity is a crucial one in a recommender
system. This constitutes the base upon which the recommender system will work
to determine which items are more likely to be enjoyed by a user, resulting in
more user engagement. In this paper we tackle the problem of determining song
similarity based solely on song metadata (such as the performer, and song
title) and on tags contributed by users. We evaluate our approach under a
series of different machine learning algorithms. We conclude that tf-idf
achieves better results than Word2Vec for mapping the dataset to feature vectors.
We also conclude that k-NN models have better performance than SVMs and Linear
Regression for this problem.
| Renato L. F. Cunha, Evandro Caldeira, Luciana Fujii | null | 1704.03844 | null | null |
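A minimal scikit-learn sketch of the combination the paper favors (tf-idf features plus k-NN similarity search); the toy corpus of metadata-plus-tag "documents" below is invented:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy corpus: one "document" per song, built from metadata and user tags.
songs = [
    "the beatles let it be rock classic piano ballad",
    "nirvana smells like teen spirit grunge rock 90s",
    "miles davis so what jazz trumpet cool modal",
    "john coltrane giant steps jazz saxophone bebop",
]
X = TfidfVectorizer().fit_transform(songs)
nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(X)
dist, idx = nn.kneighbors(X[2])   # neighbours of the Miles Davis track
print(idx)                        # should pair the two jazz tracks together

For recommendation, the indices returned per query song become its similarity list, ranked by cosine distance in the tf-idf space.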
Semantic3D.net: A new Large-scale Point Cloud Classification Benchmark | cs.CV cs.LG cs.NE cs.RO | This paper presents a new 3D point cloud classification benchmark data set
with over four billion manually labelled points, meant as input for data-hungry
(deep) learning methods. We also discuss first submissions to the benchmark
that use deep convolutional neural networks (CNNs) as a workhorse, which
already show remarkable performance improvements over state-of-the-art. CNNs
have become the de-facto standard for many tasks in computer vision and machine
learning, like semantic segmentation or object detection in images, but have not
yet led to a true breakthrough for 3D point cloud labelling tasks due to a lack
of training data. With the massive data set presented in this paper, we aim at
closing this data gap to help unleash the full potential of deep learning
methods for 3D labelling tasks. Our semantic3D.net data set consists of dense
point clouds acquired with static terrestrial laser scanners. It contains 8
semantic classes and covers a wide range of urban outdoor scenes: churches,
streets, railroad tracks, squares, villages, soccer fields and castles. We
describe our labelling interface and show that our data set provides denser
and more complete point clouds, with a much higher overall number of labelled points
compared to those already available to the research community. We further
provide baseline method descriptions and comparison between methods submitted
to our online system. We hope semantic3D.net will pave the way for deep
learning methods in 3D point cloud labelling to learn richer, more general 3D
representations, and first submissions after only a few months indicate that
this might indeed be the case.
| Timo Hackel, Nikolay Savinov, Lubor Ladicky, Jan D. Wegner, Konrad
Schindler, Marc Pollefeys | null | 1704.03847 | null | null |
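A small sketch of inspecting the per-class label counts for such a data set; the file layout and class list here are assumptions based on common point-cloud labelling conventions, not guaranteed to match the benchmark's exact format:

import numpy as np

# Assumed layout (not guaranteed to match the benchmark exactly): one integer
# class id per point in a plain-text .labels file, with 0 = unlabelled and
# ids 1..8 covering the 8 semantic classes described in the abstract.
CLASS_NAMES = ["unlabelled", "man-made terrain", "natural terrain",
               "high vegetation", "low vegetation", "buildings",
               "hard scape", "scanning artefacts", "cars"]

def class_histogram(labels_path):
    """Print how many points fall into each class for one labelled scan."""
    labels = np.loadtxt(labels_path, dtype=int)
    counts = np.bincount(labels, minlength=len(CLASS_NAMES))
    for name, n in zip(CLASS_NAMES, counts):
        print(f"{name:>20s}: {n}")

# class_histogram("some_scan.labels")   # hypothetical file name

Class histograms of this kind are the usual first step for setting per-class loss weights, since large outdoor scans are heavily imbalanced toward terrain and building points.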